Category Archives: Tech Support

Fix “ESLint: Unsafe call of an `any` typed value” using express (expressJS) and TypeScript

When trying to be diligent about keeping strict types via ESLint in a TypeScript-based ExpressJS project, you may hit the dreaded “ESLint: Unsafe call of an any typed value. (@typescript-eslint/no-unsafe-call)” error while trying to use the default express() method via:

import express from 'express';

const app = express();

After an npm i -D @types/express the error will remain. After searching around and seeing so many fixes for importing Request and Response types, I finally found an explanation of how to fix the any typed value error on the express() method via smichel17’s response to “express() function and express namespace ambiguity #37920”. The nuance is that “import” returns plain objects while “require” returns objects and functions. Note that you cannot destructure the “require” call, so to get your “Request” and “Response” types without writing “express.Request” and “express.Response” everywhere, you still need the traditional import statement as well.

Update your code to the following to resolve the “Unsafe call of an any typed value” error without using the dreaded escape hatch of "noImplicitAny": false.

import express = require('express');
import { Request, Response } from 'express';

const app = express();
const port = 3000;

app.get('/', (req: Request, res: Response) => {
  res.status(200).send(
    'Express + TypeScript Server ' +
    'With Strict Types by Joe');
});

app.listen(port, () => {
  console.log(
    `⚡️[server]: Server is running ` +
    `at http://localhost:${port}`);
});

Windows Setup “Load driver” for NVMe M.2 PCIe Intel disk missing

TL;DR: Switch your BIOS out of UEFI-only mode and enable legacy boot options and legacy boot mode, i.e. put the BIOS in classic mode. After the install you can put it back to UEFI mode.

Background: Installing Windows on a new machine in UEFI-only mode can result in Windows Setup not finding your disk and asking you to install a driver for your Intel chipset; even if you happen to find the driver on Intel’s website, Setup still won’t find the disk. It’ll say:

Load driver

No new devices drivers were found. Make sure the installation media contains the correct drivers, and then click OK.


For anyone who has run servers, this’ll bring back Broadcom RAID driver hell flashbacks. The funny thing is that I run Arch Linux, where drivers have never been a problem in UEFI mode, but Windows Setup is an old dog, it seems.

I had a hard time with this and eventually gave up and engaged Dell support. I flashed the drive with the latest firmware and confirmed it worked in another PC. Dell support tried, but they didn’t have this issue in their database. Finally I decided to try putting the BIOS back into classic mode with legacy boot options enabled, tried again, and Windows saw the drive fine. I let Dell know about the fix in case someone else hit it, but then I saw the same issue on Twitter, so I wrote this post. Hopefully no one else is left in the dark on this.

Enable legacy boot options

 

aws-cdk template porting and migration tips

AWS-CDK Migration Tips

When migrating to a new framework there are going to be some growing pains. This is just a collection of frustrations that I encountered while adopting aws-cdk. I did some research beforehand and found the article “Hey CDK, how can I migrate my existing CloudFormation templates?” by Philipp Garbe and the AWS CDK “core module” documentation most helpful when thinking about the migration initially.

Import Immutable Roles

Set the mutable flag to false when importing existing roles with Role.fromRoleArn(), otherwise the precision of the aws-cdk may lead to the dreaded “Maximum policy size of 10240 bytes” error. Eventually aws-cdk issue #4465 will be fixed and we will welcome the precise IAM policies the CDK generates.

The maximum policy size error was most often encountered on CodePipeline deploy roles where we had a large number of independent artifacts deploying CloudFormations.
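
As a minimal sketch of what that looks like in a CDK 1.x Python stack (the ARN, account id, and construct names here are placeholders):

from aws_cdk import aws_iam as iam
from aws_cdk import core

class DeployStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # mutable=False tells the CDK the role is managed elsewhere, so it
        # won't keep appending generated statements to the role's policies,
        # which is what eventually trips the 10240-byte limit.
        deploy_role = iam.Role.from_role_arn(
            self, 'DeployRole',
            'arn:aws:iam::123456789012:role/existing-deploy-role',  # placeholder ARN
            mutable=False)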

Explicit to_string() in Python

Having to explicitly call the core.Fn.get_att('Foo', 'Bar').to_string() method instead of using str() or f'{var}' style formatting tripped me up.

I noticed in my IDE that the signature called for a string (thanks types!) so I tried:

str(core.Fn.get_att('Foo','Bar'))

and

f"{core.Fn.get_att('Foo','Bar')}"

but because only __repr__ is defined in the Python interface, I got an ugly object name where I expected __str__ to be implemented. I overlooked to_string(), which is a pretty common method in many object-oriented languages, because I expected the class to behave more pythonically.
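
A quick sketch of the difference ('Foo' and 'Bar' are placeholder logical ids):

from aws_cdk import core

attr = core.Fn.get_att('Foo', 'Bar')

# str() and f-strings fall back to __repr__, printing an opaque jsii
# proxy object name instead of anything usable in the template:
bad = str(attr)

# to_string() returns a string token that resolves to Fn::GetAtt at
# synth time; core.Token.as_string(attr) is the equivalent spelling.
good = attr.to_string()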

Beware of Copy Paste / Naming

Name a stack the same as another? You get a diff, but if you aren’t paying attention you’ll blow away the existing stack before realizing it. You also end up with the old stack left behind, not updated, because of this config SNAFU.

Import Pains

CfnInclude is great for importing old CloudFormations, but the stack is immutable upon import: any changes to the template must happen before making the call. We leaned on PyYAML for this, but then had to undo a few of the niceties of processing the template only with AWS-based systems.

Intrinsic Function Shortcuts

For example, all bang (“!”) shorthand references, i.e. “!Ref” or “!Sub”, need to be updated to the full function form, i.e. “Ref:” and “Fn::Sub:”, since the YAML loader doesn’t understand the CloudFormation short-form tags; “!Ref Foo” becomes “Ref: Foo”.

AttributeError: ‘datetime.date’

IAM Policy Documents specifying the Version unquoted instead of as a string, i.e. Version: 2012-10-17 instead of Version: '2012-10-17', will have the cdk synth command greet them with the following obscure error.

This error also occurs on AWSTemplateFormatVersion blocks so beware.

AttributeError: 'datetime.date' object has no attribute '__jsii__type__'.
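
The root cause is YAML implicit typing: PyYAML parses an unquoted ISO date as a datetime.date object, which jsii cannot serialize. A quick illustration:

import yaml

unquoted = yaml.safe_load('Version: 2012-10-17')
print(type(unquoted['Version']))  # <class 'datetime.date'>

quoted = yaml.safe_load("Version: '2012-10-17'")
print(type(quoted['Version']))  # <class 'str'>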

Example CfnInclude

Here is an example of using CfnInclude to inject parameters into a traditional template and then load it into a CDK stack.


import yaml
from pathlib import Path

from aws_cdk import core


class RawStack(core.Stack):
    def __init__(self, scope: core.Construct, name: str, template_path: str,
                 wrapped_parameters=None, **kwargs) -> None:
        """Import a stack off a path and munge in SSM variables if desired.

        :param template_path: path to raw stack being imported
        :param wrapped_parameters: map of Parameter keys and default values
        :param kwargs: all the stack stuff
        """
        super().__init__(scope=scope, name=name, **kwargs)
        if not wrapped_parameters:
            wrapped_parameters = {}
        template_path = Path(template_path)
        with open(template_path, 'r') as f:
            template = yaml.load(f, Loader=yaml.SafeLoader)
        if 'Parameters' in template:  # skip templates without parameters
            for pk, pv in template['Parameters'].items():
                if 'Default' in pv:
                    if pk in wrapped_parameters:
                        template['Parameters'][pk]['Default'] = wrapped_parameters[pk]
                    elif pv['Default'] == "":
                        # empty default: source the value from SSM instead
                        template['Parameters'][pk]['Type'] = 'AWS::SSM::Parameter::Value<String>'
                        template['Parameters'][pk]['Default'] = str(pk)
                else:
                    if pk in wrapped_parameters:
                        template['Parameters'][pk]['Default'] = wrapped_parameters[pk]
                    else:
                        template['Parameters'][pk]['Type'] = 'AWS::SSM::Parameter::Value<String>'
                        template['Parameters'][pk]['Default'] = str(pk)
        core.CfnInclude(self, 'RawStack', template=template)
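
For context, wiring the stack into an app might look something like this (the file path and parameter name are hypothetical):

from aws_cdk import core

app = core.App()
# Empty-default parameters in the template are rewritten to SSM lookups;
# anything listed in wrapped_parameters gets that value as its Default.
RawStack(app, 'legacy-stack',
         template_path='templates/legacy.yaml',
         wrapped_parameters={'Environment': 'prod'})
app.synth()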


Fix the AWS CDK “CLI can only be used with apps created by CDK” error

I upgraded my AWS CDK to 1.10.1 today because it prompted me via:

**************************************************
*** Newer version of CDK is available [1.10.0] ***
*** Upgrade recommended                        ***
**************************************************

After doing the upgrade via

npm install -g aws-cdk

I went to do a cdk ls or cdk diff and was greeted with the error:

CDK CLI can only be used with apps created by CDK >= 1.10.0

Googling around wasn’t too helpful, but I finally figured out that it was complaining because my Python dependencies still had the old aws-cdk libraries installed.

A quick

rm -r .env/
python -m venv .env
pip install -r requirements.txt

And I was back in business

cdk ls
integration-pipeline
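
To keep the CLI and the Python libraries from drifting again, it may also help to pin the aws-cdk packages in requirements.txt to the CLI version; for example (the module list here is just illustrative):

aws-cdk.core==1.10.0
aws-cdk.aws-iam==1.10.0
aws-cdk.aws-codepipeline==1.10.0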

ZyXEL EMG3425-Q10A Port Forwarding

The ZyXEL EMG3425-Q10A has NAT port forwarding, but it doesn’t seem to work well because the Remote Management section has been patched out. This causes the remote management screen to always come up on the IP set as the Port Forwarding Default Server. You need to do two things to fix this mess. First, change the Default Server Setup to the host you normally forward ports to, and turn on a firewall on that host. Second, forward the WWW and HTTPS rules to that host.

When adding additional rules, click “Add” instead of “Apply”. Add ports like 25565 to host Minecraft or 27015 for TF2.

You cannot delete rules without having to re-enter all of them in the right order. AGAIN.

EDID Reading on Arch Linux

There are several tools to read the Extended Display Identification Data (EDID) from a system, but I found LinuxTV’s edid-decode the most thorough when debugging a display boot-flickering problem on Linux 5.0.x.

On Arch I installed edid-decode-git and then ran a quick script:

for f in $(find /sys/devices -name 'edid'); do sudo cat "$f" | edid-decode; done

and I got something like:

EDID version: 1.4
Manufacturer: BOE Model 65a Serial Number 0
Made in week 1 of 2015
Digital display
6 bits per primary color channel
DisplayPort interface
Maximum image size: 34 cm x 19 cm
Gamma: 2.20
Supported color formats: RGB 4:4:4, YCrCb 4:4:4
First detailed timing includes the native pixel format and preferred refresh rate
Display x,y Chromaticity:
  Red:   0.6416, 0.3437
  Green: 0.3183, 0.6103
  Blue:  0.1494, 0.0439
  White: 0.3125, 0.3281
Established timings supported:
Standard timings supported:
Detailed mode: Clock 139.770 MHz, 344 mm x 194 mm
               1920 1968 2000 2080 hborder 0
               1080 1083 1089 1120 vborder 0
               +hsync -vsync 
               VertFreq: 59 Hz, HorFreq: 67197 Hz
Detailed mode: Clock 111.820 MHz, 344 mm x 194 mm
               1920 1968 2000 2080 hborder 0
               1080 1083 1089 1120 vborder 0
               +hsync -vsync 
               VertFreq: 47 Hz, HorFreq: 53759 Hz
ASCII string: J125V
Manufacturer-specified data, tag 0
Checksum: 0xa9 (valid)

This helped when trying to diagnose “black screen on Dell XPS 15 with kernel 5.0” and Bug 109959, “REGRESSION: black screen with linux 5.0 when starting X”.

Linux Client VPN using Meraki Cloud Controller authentication

If you want to VPN into your network using the Meraki Cloud Controller, the Client VPN instructions indicate that you may be out of luck when trying to use xl2tp:

Note: The xl2tp package does not send user credentials properly to the MX when using Meraki Cloud Controller authentication, and this causes the authentication request to fail. Active Directory or RADIUS authentication can be used instead for successful authentication.

It turns out that if you set up the IPSEC phase 1 and phase 2 algorithms, it’ll work.

It took some googling to bring it all together, but combining the network-manager-l2tp GitHub issue #34, “IPSec options hard coded”, with the Ubuntu question “L2tp IPSEC PSK VPN client on (x)ubuntu 16.04”, I found that setting the Phase1 Algorithms to 3des-sha1-modp1024 and the Phase2 Algorithms to 3des-sha1 works.

Phase1 Algorithms: 3des-sha1-modp1024 Phase2 Algorithms: 3des-sha1

Now I can connect to the VPN no problem. On Arch Linux!

Install Firefox on Amazon Linux x86_64 by Compiling GTK+

Amazon Linux doesn’t offer the GIMP Toolkit (GTK+), so if you want to run Firefox on an Amazon Linux system, say for Selenium testing, you are left having to compile the toolkit yourself.  Luckily you have found this post.  Create the script below and run it as root; it will build GTK+ and all the dependencies Firefox needs to run just fine on the system.

vi ./gtk-firefox
chmod 755 ./gtk-firefox
sudo ./gtk-firefox

After you have built the packages, add /usr/local/bin to your path by updating your .bashrc file:

cat << EOF >> ~/.bashrc
PATH=/usr/local/bin:\$PATH
export PATH
EOF

Here is the gtk-firefox file for your pleasure.


#!/bin/bash
# GTK+ and Firefox for Amazon Linux
# Written by Joseph Lawson 2012-06-03
# http://joekiller.com
# https://joekiller.com/2012/06/03/install-firefox-on-amazon-linux-x86_64-compiling-gtk/
# chmod 755 ./gtk-firefox.sh
# sudo ./gtk-firefox.sh
TARGET=/usr/local
function init()
{
export installroot=$TARGET/src
export workpath=$TARGET
yum --assumeyes install make libjpeg-devel libpng-devel \
libtiff-devel gcc libffi-devel gettext-devel libmpc-devel \
libstdc++46-devel xauth gcc-c++ libtool libX11-devel \
libXext-devel libXinerama-devel libXi-devel libxml2-devel \
libXrender-devel libXrandr-devel libXt dbus-glib \
libXdamage libXcomposite
mkdir -p $workpath
mkdir -p $installroot
cd $installroot
PKG_CONFIG_PATH="$workpath/lib/pkgconfig"
PATH=$workpath/bin:$PATH
export PKG_CONFIG_PATH PATH
bash -c "
cat << EOF > /etc/ld.so.conf.d/firefox.conf
$workpath/lib
$workpath/firefox
EOF
ldconfig
"
}
function finish()
{
cd $workpath
wget -r --no-parent --reject "index.html*" -nH --cut-dirs=7 http://download.cdn.mozilla.net/pub/mozilla.org/firefox/releases/latest/linux-x86_64/en-US/
tar xvf firefox*
cd bin
ln -s ../firefox/firefox
ldconfig
}
function install()
{
wget $1
FILE=`basename $1`
if [ ${FILE: -3} == ".xz" ]
then tar xvfJ $FILE
else tar xvf $FILE
fi
SHORT=${FILE:0:4}*
cd $SHORT
./configure --prefix=$workpath
make
make install
ldconfig
cd ..
}
init
install ftp://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.xz
install http://download.savannah.gnu.org/releases/freetype/freetype-2.4.9.tar.gz
install http://www.freedesktop.org/software/fontconfig/release/fontconfig-2.9.0.tar.gz
install http://ftp.gnome.org/pub/gnome/sources/glib/2.32/glib-2.32.3.tar.xz
install http://cairographics.org/releases/pixman-0.26.0.tar.gz
install http://cairographics.org/releases/cairo-1.12.2.tar.xz
install http://ftp.gnome.org/pub/gnome/sources/pango/1.30/pango-1.30.0.tar.xz
install http://ftp.gnome.org/pub/gnome/sources/atk/2.4/atk-2.4.0.tar.xz
install http://ftp.gnome.org/pub/GNOME/sources/gdk-pixbuf/2.26/gdk-pixbuf-2.26.1.tar.xz
install http://ftp.gnome.org/pub/gnome/sources/gtk+/2.24/gtk+-2.24.10.tar.xz
finish
# adds the /usr/local/bin to your path by updating your .bashrc file.
cat << EOF >> ~/.bashrc
PATH=/usr/local/bin:\$PATH
export PATH
EOF


If you are running OS X Mountain Lion or above and cannot get Firefox to display via ssh -X, make sure you have XQuartz installed, as Apple removed X11 from the default install.

Edited to make Firefox latest release more reliable. Updated with Gist.

Edit 11/21/2012: Added dbus-glib dependency to gist. Added notes about running on OSX

Creating an X.509 or Signing Certificate for AWS EC2 using PowerShell and the Windows SDK

Currently Amazon AWS only allows Base-64 encoded certificates to be used as an EC2 credential.  Further, when creating a user in IAM, Amazon doesn’t provide the convenient certificate generator that it does offer for the root user.  If you want to create this type of certificate on Windows, you will find that it is not easy to get the certificate out of the binary (DER) format.  Many will point you to OpenSSL to do the conversion, and that is fantastic; however, some may not be able to use OpenSSL.

I am going to lay out some steps that will help you quickly create an X.509 certificate and private key using the Windows SDK makecert.exe utility and PowerShell.

First download the Windows SDK.  When installing, only the Tools option is necessary.  The SDK usually installs to C:\Program Files\Windows SDK\version\bin.  I would suggest adding the SDK bin directory to your path if you are going to make a lot of these certificates.  These instructions assume that makecert is in your path.

Makecert has a number of functions, but the feature we are interested in is its ability to generate self-signed certificates with a straightforward command.  All certificates are output in a binary (DER) format, so they are initially unsuitable for AWS consumption.  We will use PowerShell to convert from a binary object to a Base-64 string.  Note that makecert normally creates a single file containing both the private key and the public key.  Since we want these elements in separate files, we use the -sv toggle, which saves the private key to a .pvk file.  One last gotcha: the tool seems to want you to specify the resulting files with the extensions shown in the help and examples.  If you don’t use the .pvk and .cer extensions it might not output the file.

Assuming that you have the SDK installed and can run makecert, here are the steps to get your certificate AWS-ready.

Create the self-signed certificate and corresponding private key file using makecert:

makecert -sv privatekey.pvk certificate.cer

Next we are going to use PowerShell and some .NET magic to process the binary files into a text-friendly Base-64 format (PEM).

Process the certificate first:

[byte[]] $x = get-content -encoding byte -path .\certificate.cer

[System.Convert]::ToBase64String($x) > .\cer-ec2creds.PEM

Next Process the private key:

[byte[]] $x = get-content -encoding byte -path .\privatekey.pvk

[System.Convert]::ToBase64String($x) > .\pk-ec2creds.PEM

You can now examine the resulting files in notepad to confirm that they are indeed in a BASE-64 format.

notepad .\cer-ec2creds.PEM

notepad .\pk-ec2creds.PEM

The files should work fine even if they are missing the proper headers and footers.  If you want to include them, they should be as follows.  Remember to add an end-of-line character to the file as well.

For the certificate PEM file:

-----BEGIN CERTIFICATE-----

-----END CERTIFICATE-----

For the private key PEM file:

-----BEGIN PRIVATE KEY-----

-----END PRIVATE KEY-----

I hope this is useful.  Please feel free to comment and share other methods.


-Joe

Testing iexplore with Selenium Server as a Jenkins/Hudson Slave via Seleniumhq Plugin

Selenium Server, the 2.0 blend of Selenium RC and WebDriver, is the latest in CI testing goodness from the Selenium project and SeleniumHQ.  While experimenting with getting Selenium to take scripts made in the Selenium IDE and run them with the new selenium-server-standalone-2.0.0.jar via the Jenkins Seleniumhq Plugin on a Jenkins/Hudson slave, I hit a few different issues.  My primary problem was getting *iexplore tests to execute from a Jenkins/Hudson slave node.  The slave runs as a service started as a domain user instead of Local Service; it has to run as a domain user because the slave is also doubling as a Windows build server off a Linux master.  The goal was to test with Firefox 5 and Internet Explorer 7 in a Windows Server 2003 R2 x64 environment.  In the end, I could only get *iexplore tests to run reliably by using Windows automatic logon and then launching the Hudson/Jenkins slave from a startup shortcut, which was just:

javaws http://hudsonhost.local:8080/hudson/computer/slave-agent.jnlp

I believe this will also work with nearly any other Windows version up to the latest 7/2008 R2 series.  Running the slave service this way was undesirable, but it may be just what is necessary to test older software on a Windows Server 2003 platform.  This approach locks you out of the console of the server, but you can leave the user with plain user privileges and then remote in to administer the machine if needed.

By the way, Firefox 5 ran flawlessly as a domain user after creating a Firefox profile for Selenium.