Category Archives: AWS


AWS-CDK Migration Tips

When migrating to a new framework there are going to be some growing pains. This is a collection of frustrations I encountered while adopting aws-cdk. I did some research beforehand and found the article “Hey CDK, how can I migrate my existing CloudFormation templates?” by Philipp Garbe and the “core module” AWS CDK documentation most helpful when thinking about the migration initially.

Import Immutable Roles

Set the mutable flag to False when importing existing roles with Role.fromRoleArn(), otherwise the precision of the aws-cdk may lead to the dreaded “Maximum policy size of 10240 bytes” error. Eventually aws-cdk issue #4465 will be fixed and we can welcome the precise IAM policies the CDK generates.

The maximum policy size error was most often encountered on CodePipeline deploy roles, where we had a large number of independent artifacts deploying CloudFormation stacks.
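A minimal sketch of the import (CDK v1 Python; the account, role ARN, and construct names here are hypothetical):

```python
from aws_cdk import core
from aws_cdk import aws_iam as iam


class DeployStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # mutable=False tells the CDK not to attach its generated
        # policies to the imported role, sidestepping the
        # "Maximum policy size of 10240 bytes" error.
        deploy_role = iam.Role.from_role_arn(
            self, "DeployRole",
            role_arn="arn:aws:iam::123456789012:role/deploy-role",
            mutable=False,
        )
```

The trade-off is that you become responsible for keeping the role's policies in sync with what the pipeline actually needs.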

Explicit to_string() in Python

Having to explicitly call the core.Fn.get_att('Foo', 'Bar').to_string() method instead of using str() or an f'{var}'-style format tripped me up.

I noticed in my IDE that the signature called for a string (thanks types!) so I tried:

str(core.Fn.get_att('Foo','Bar'))

and

f"{core.Fn.get_att('Foo','Bar')}"

but because only __repr__ is defined in the Python interface, I got an ugly object name where I expected __str__ to be implemented. I overlooked to_string(), a pretty common method in many object-oriented languages, because I expected the class to behave more pythonically.
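A minimal sketch of why this happens (plain Python, no jsii — the Token class and its return values are made up for illustration): when a class defines only __repr__, both str() and f-strings fall back to it.

```python
class Token:
    """Stand-in for a jsii-proxied CDK object that defines __repr__ only."""

    def __repr__(self):
        return "<jsii proxy object>"

    def to_string(self):
        # What the CDK actually wants you to call.
        return "${Token[TOKEN.42]}"


t = Token()
# str() and f-strings fall back to __repr__ when __str__ is undefined...
assert str(t) == "<jsii proxy object>"
assert f"{t}" == "<jsii proxy object>"
# ...while to_string() returns the usable token string.
assert t.to_string() == "${Token[TOKEN.42]}"
```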

Beware of Copy Paste / Naming

Name a stack the same as another? You get a diff, but if you aren't paying attention you'll blow away the existing stack before realizing it. You also end up with the old stack left un-updated because of this config SNAFU.

Import Pains

CfnInclude is great for importing old CloudFormation templates, but the template is immutable upon import: any changes to the stack must happen before making the call. We leaned on PyYAML for that pre-processing, but then had to give up a few of the niceties of processing the template only with AWS-based systems.

Intrinsic Function Shortcuts

For example, all short-form intrinsic functions using a bang, “!”, i.e. “!Ref” or “!Sub”, need to be expanded to the full function form, i.e. “Ref:” and “Fn::Sub:”.
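For instance (a template fragment with hypothetical resource names), the short forms on top become the full forms below:

```yaml
# Short form (a YAML tag, which plain PyYAML parsing chokes on):
BucketName: !Ref LoggingBucket
Greeting: !Sub 'Hello ${AWS::Region}'

# Full function form (plain YAML, safe to pre-process and import):
BucketName:
  Ref: LoggingBucket
Greeting:
  Fn::Sub: 'Hello ${AWS::Region}'
```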

AttributeError: 'datetime.date'

IAM Policy Documents specifying the Version unquoted instead of as a string, i.e. Version: 2012-10-17 instead of Version: '2012-10-17', will have the cdk synth command greet them with the following obscure error:

 AttributeError: 'datetime.date' object has no attribute '__jsii__type__'.

This error also occurs on AWSTemplateFormatVersion blocks, so beware.
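The root cause is YAML itself: an unquoted 2012-10-17 is resolved as a date, not a string. A quick demonstration with PyYAML (which we were already using for pre-processing):

```python
import datetime

import yaml  # PyYAML

parsed = yaml.safe_load("Version: 2012-10-17")
# The unquoted value comes back as a datetime.date, which jsii
# later chokes on with the __jsii__type__ AttributeError.
assert isinstance(parsed["Version"], datetime.date)

quoted = yaml.safe_load("Version: '2012-10-17'")
# Quoting keeps it a plain string, which is what IAM expects.
assert isinstance(quoted["Version"], str)
```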

Example CfnInclude

Here is an example of using CfnInclude to inject parameters into a traditional template and then load it into the CDK stack.

raw_stack.py gist link
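As a rough sketch of the pattern (CDK v1 Python; the RawStack class, paths, and parameter handling here are my own illustration, not the gist's actual contents): pre-process the template with PyYAML, then hand the resulting dict to CfnInclude.

```python
import yaml  # PyYAML

from aws_cdk import core


class RawStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, template_path: str,
                 params: dict, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        with open(template_path) as f:
            template = yaml.safe_load(f)
        # Inject parameter defaults before the import; the template
        # is immutable once CfnInclude has consumed it. Assumes the
        # template already declares a Parameters section.
        for name, value in params.items():
            template["Parameters"][name]["Default"] = value
        core.CfnInclude(self, "RawTemplate", template=template)
```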

 

Fixing “unhandled instruction bytes” Error When Running Valgrind on AWS CodeBuild

When running Valgrind against one of our C libraries we encountered a discrepancy in the build: locally everything would pass, but on AWS CodeBuild using the aws/codebuild/standard:2.0 image we would get errors like:

vex amd64->IR: unhandled instruction bytes: 0x62 0xF1 0x7D 0x48 0xEF 0xC0 0xC5 0xF9 0x2E 0x45

The full message looked like:

==16128== Memcheck, a memory error detector
==16128== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==16128== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==16128== Command: /root/build/meh/test/.libs/state
==16128== 
[==========] Running 2 test(s).
[ RUN      ] test1
vex amd64->IR: unhandled instruction bytes: 0x62 0xF1 0x7D 0x48 0xEF 0xC0 0xC5 0xF9 0x2E 0x45
vex amd64->IR:   REX=0 REX.W=0 REX.R=0 REX.X=0 REX.B=0
vex amd64->IR:   VEX=0 VEX.L=0 VEX.nVVVV=0x0 ESC=NONE
vex amd64->IR:   PFX.66=0 PFX.F2=0 PFX.F3=0
==16128== valgrind: Unrecognised instruction at address 0x1173bd.
=...
==16128== Your program just tried to execute an instruction that Valgrind
==16128== did not recognise.  There are two possible reasons for this.
==16128== 1. Your program has a bug and erroneously jumped to a non-code
==16128==    location.  If you are running Memcheck and you just saw a
==16128==    warning about a bad jump, it's probably your program's fault.
==16128== 2. The instruction is legitimate but Valgrind doesn't handle it,
==16128==    i.e. it's Valgrind's fault.  If you think this is the case or
==16128==    you are not sure, please let us know and we'll try to fix it.
==16128== Either way, Valgrind will now raise a SIGILL signal which will
==16128== probably kill your program.

The error indicates that the binary's instruction set doesn't match what Valgrind on the Docker image can handle, so going off of a Linux Headers Reinstall article we added the following, and then the architecture packages were fine.

apt upgrade --fix-missing -y && apt autoremove -y && apt autoclean -y
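In our case the fix went into the install phase of the buildspec (a sketch; the phase layout and the check-valgrind target are assumptions about your project, not something prescribed by CodeBuild):

```yaml
# buildspec.yml (fragment)
version: 0.2
phases:
  install:
    commands:
      # Bring the image's packages in line so Valgrind and the
      # toolchain agree on the instruction set.
      - apt update
      - apt upgrade --fix-missing -y && apt autoremove -y && apt autoclean -y
  build:
    commands:
      - make check-valgrind
```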

 

“AWS CDK CLI can only be used with apps created by CDK” Error

I upgraded my AWS CDK to 1.10.1 today because it prompted me with:

**************************************************
*** Newer version of CDK is available [1.10.0] ***
*** Upgrade recommended                        ***
**************************************************

After doing the upgrade via

npm install -g aws-cdk

I went to do a cdk ls or cdk diff and was greeted with the error:

CDK CLI can only be used with apps created by CDK >= 1.10.0

Googling around wasn’t too helpful, but I finally figured out that it was complaining that my Python dependencies had the old aws-cdk libraries installed.

A quick

rm -r .env/
python -m venv .env
. .env/bin/activate
pip install -r requirements.txt

And I was back in business:

cdk ls
integration-pipeline

Install Firefox on Amazon Linux x86_64 by Compiling GTK+

Amazon Linux doesn’t offer the GIMP Toolkit (GTK+), so if you want to run Firefox on an Amazon Linux system, say for Selenium testing, you are left having to compile the toolkit yourself.  Luckily you have found this post.  Create the script below, run it as root, and it will build all the components needed for GTK+ and its dependencies so Firefox runs just fine on the system.

vi ./gtk-firefox
chmod 755 ./gtk-firefox
sudo ./gtk-firefox

After you have built the packages, add /usr/local/bin to your path by updating your .bashrc file.

cat << EOF >> ~/.bashrc
PATH=/usr/local/bin:\$PATH
export PATH
EOF

Here is the gtk-firefox file for your pleasure.

If you are running OS X Mountain Lion or above and cannot get Firefox to run via ssh -X, make sure you have XQuartz installed, as Apple removed X11 by default.

Edited to make Firefox latest release more reliable. Updated with Gist.

Edit 11/21/2012: Added dbus-glib dependency to gist. Added notes about running on OSX

Creating an X.509 Signing Certificate for AWS EC2 using PowerShell and the Windows SDK

Currently Amazon AWS only allows Base64-encoded certificates to be used as an EC2 credential.  Further, when creating a user in IAM, Amazon doesn’t provide the convenient certificate generator that it does for the root user.  If you want to create this type of certificate on Windows, you will find that it is not easy to get the certificate out of the binary (DER) format.  Many will point you to OpenSSL to do the conversion, and that is fantastic; however, some may not be able to use OpenSSL.

I am going to lay out some steps that will help you quickly create an X.509 certificate and private key using the Windows SDK’s makecert.exe utility and PowerShell.

First, download the Windows SDK.  When installing, only the Tools option is necessary.  The SDK usually installs to C:\Program Files\Windows SDK\version\bin.  I would suggest adding the SDK bin directory to your path if you are going to make a lot of these certificates.  These instructions assume that makecert is in your path.

Makecert has a number of functions, but the feature we are interested in is its ability to generate self-signed certificates with a straightforward command.  All certificates output are in a DER binary format, so they are currently unsuitable for AWS consumption.  We will use PowerShell to convert from a binary object to a Base64 string.  Note that makecert normally creates a single file containing both the private key and the public key.  Since we want these elements in separate files, we use the -sv switch, which saves the private key to a .pvk file.  One last gotcha to note is that the tool seems to want you to specify the resulting files with the extensions as shown in the help and examples.  If you don’t use the .pvk and .cer extensions it might not output the file.

Assuming that you have the SDK installed and can run makecert, here are the steps to get your certificate AWS-ready.

Create the self signed certificate and corresponding private key file using makecert:

makecert -sv privatekey.pvk certificate.cer

Next we are going to use PowerShell and some .NET magic to process the binary files into a text-friendly Base64 format (PEM).

Process the certificate first:

[byte[]] $x = get-content -encoding byte -path .\certificate.cer

[System.Convert]::ToBase64String($x) > .\cer-ec2creds.PEM

Next, process the private key:

[byte[]] $x = get-content -encoding byte -path .\privatekey.pvk

[System.Convert]::ToBase64String($x) > .\pk-ec2creds.PEM

You can now examine the resulting files in notepad to confirm that they are indeed in a BASE-64 format.

notepad .\cer-ec2creds.PEM

notepad .\pk-ec2creds.PEM

The files should work fine even if they are missing the proper headers and footers.  If you want to include them, they should be as follows.  Remember to add an end-of-line character to the file as well.

For the certificate PEM file:

-----BEGIN CERTIFICATE-----

-----END CERTIFICATE-----

For the private key PEM file:

-----BEGIN PRIVATE KEY-----

-----END PRIVATE KEY-----

I hope this is useful.  Please feel free to comment and share other methods.


-Joe