ievms – Easily bring up IE browsers in any environment with Vagrant and VirtualBox

Thinking about incorporating IE browser testing for your site, but unsure how to proceed given that you’re developing on macOS or Linux? Do you want to bring up a Windows virtual machine with something like VirtualBox, but aren’t sure how to get a valid Windows license? Luckily, Microsoft is aware of how difficult it is to develop and test for Edge and IE browsers, so they have released free virtual machines that already have these browsers built in.

If you go to the Microsoft developer website above, you have the option of downloading these VM images for different virtualization software (e.g. VirtualBox and VMware). However, these steps still feel pretty manual. I wanted a simple set of instructions that I could pass around to folks on my team for IE testing; ideally, just a script that anyone can run from any development environment. Thankfully, Microsoft also made Vagrant images available, so I was able to create a simple wrapper around them.

ievms – a simple way to start IE and Edge VMs with Vagrant and VirtualBox

Vagrant is a developer tool that allows you to create automated and reproducible development environments using virtual machines. It works pretty well with VirtualBox, a free and well-supported virtualizer.

ievms relies on Vagrant, VirtualBox, and the images provided by Microsoft to create a simple workflow for managing IE browsers across different Windows versions.

To bring up a Windows 7 virtual machine with IE10 installed and ready to use:

:; vagrant up ie10-win7

This will download the image if it hasn’t been downloaded before, add it to Vagrant, and then bring up the virtual machine with a graphical UI. Once you’re done testing, you can suspend the machine with vagrant suspend ie10-win7, or remove it completely with vagrant destroy ie10-win7. Even if you remove it, the image remains cached locally, so the next time you need to bring it up you won’t have to wait for the download again.
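
For reference, a typical day-to-day flow looks roughly like this (commands are run from the ievms checkout, where the Vagrantfile lives):

# bring up the VM (downloads and imports the image on the first run)
vagrant up ie10-win7

# check the state of all defined machines
vagrant status

# pause the VM, keeping its current state
vagrant suspend ie10-win7

# remove the VM entirely; the downloaded image stays cached
vagrant destroy ie10-win7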

In addition to IE10 on Windows 7, other browser-platform combinations are supported by default:
– ie6-xp
– ie8-xp
– ie7-vista
– ie8-win7
– ie9-win7
– ie10-win7
– ie11-win7
– ie10-win8
– ie11-win81
– msedge-win10

Bonus: You can also test a local website that’s running on the host development environment. For example, if you have a site running on port 8080 locally, you can reach it from within the virtual machine by going to http://192.168.33.1:8080.
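
As a quick sanity check, you could serve a folder from the host with any static file server and load it from the VM’s browser. A minimal sketch, assuming Python 3 is available on the host:

# on the host: serve the current directory on port 8080
python3 -m http.server 8080

# in the VM: open IE/Edge and browse to http://192.168.33.1:8080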

Check out ievms’s README for more instructions on how to get started.

Authentication for Google Cloud Functions with JWT and Auth0

Surprised that there was no built-in authentication mechanism for Google Cloud Functions, I made an attempt to implement a simple one with JWT and Auth0.

With all the hype around serverless, I recently took a stab at creating a cloud function to see how it goes. I went with Google Cloud Functions instead of AWS Lambda because I had some free signup credits on Google.

I started with this tutorial, https://cloud.google.com/functions/docs/tutorials/http, and it seemed pretty straightforward. I created a cloud function with an HTTP trigger in about 30 minutes.

The function I deployed adds an entry to a Cloud Datastore database every time I make a curl request to the function’s endpoint. That was pretty thrilling.

curl -X POST -H "Content-Type: application/json" \
  -d '{"foo": "bar"}' \
  "https://.cloudfunctions.net/post"

However, it soon dawned on me that this was pretty insecure, as anyone who knew of this endpoint could write to the database. Imagine if I had written a delete function! I thought surely Google must have built some sort of authentication scheme into Cloud Functions, but after googling around for a while, it didn’t seem so. I did what any clueless developer would do next, and posted a question on StackOverflow.

After a few days, the answers I got back were pretty disappointing. Apparently, if I had used AWS Lambda, I could have leveraged API Gateway, which has support for auth. With Google Cloud Functions, I was on my own.

So, with the help of Auth0, I decided to implement an authentication check for my cloud function: the caller passes a JWT as a Bearer access token in the Authorization header.

Here’s the implementation in Node; the explanation follows.

const jwksClient = require('jwks-rsa');
const jwt = require('jsonwebtoken');

// Client that fetches and caches Auth0's public signing keys (JWKS)
const client = jwksClient({
  cache: true,
  rateLimit: true,
  jwksRequestsPerMinute: 5,
  jwksUri: "https://.auth0.com/.well-known/jwks.json"
});

function verifyToken(token, cb) {
  let decodedToken;
  try {
    decodedToken = jwt.decode(token, {complete: true});
  } catch (e) {
    console.error(e);
    cb(e);
    return;
  }
  if (!decodedToken) {
    cb(new Error('Unable to decode token.'));
    return;
  }
  // Look up the signing key by the key id (kid) from the token header
  client.getSigningKey(decodedToken.header.kid, function (err, key) {
    if (err) {
      console.error(err);
      cb(err);
      return;
    }
    const signingKey = key.publicKey || key.rsaPublicKey;
    // Verify the token's signature against the retrieved public key
    jwt.verify(token, signingKey, function (err, decoded) {
      if (err) {
        console.error(err);
        cb(err);
        return;
      }
      console.log(decoded);
      cb(null, decoded);
    });
  });
}

// Wrap a request handler so it only runs for requests with a valid token
function checkAuth (fn) {
  return function (req, res) {
    if (!req.headers || !req.headers.authorization) {
      res.status(401).send('No authorization token found.');
      return;
    }
    // expect authorization header to be
    // Bearer xxx-token-xxx
    const parts = req.headers.authorization.split(' ');
    if (parts.length !== 2) {
      res.status(401).send('Bad credential format.');
      return;
    }
    const scheme = parts[0];
    const credentials = parts[1];

    if (!/^Bearer$/i.test(scheme)) {
      res.status(401).send('Bad credential format.');
      return;
    }
    verifyToken(credentials, function (err) {
      if (err) {
        res.status(401).send('Invalid token');
        return;
      }
      fn(req, res);
    });
  };
}

I use jwks-rsa to retrieve the public key portion of the key that was used to sign the JWT, and jsonwebtoken to decode and verify the token. Since I use Auth0, jwks-rsa fetches the public keys from the tenant’s JWKS endpoint (the jwksUri above).

The checkAuth function can then be used to safeguard the cloud function like so:

exports.get = checkAuth(function (req, res) {
  // do things safely here
});

You can see the entire Google Cloud Functions repo at https://github.com/tnguyen14/functions-datastore/

The JWT / access token can be generated in a number of ways. For Auth0, the API doc can be found at https://auth0.com/docs/api/authentication#authorize-client
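
For example, for machine-to-machine use (scripts, curl testing), one option is Auth0’s client credentials grant against the /oauth/token endpoint. A sketch, where the tenant domain, client ID/secret, and API audience are placeholders for values from your own Auth0 dashboard:

curl -X POST -H "Content-Type: application/json" \
  -d '{"grant_type": "client_credentials", "client_id": "YOUR_CLIENT_ID", "client_secret": "YOUR_CLIENT_SECRET", "audience": "YOUR_API_IDENTIFIER"}' \
  "https://YOUR_TENANT.auth0.com/oauth/token"

The response includes an access_token field, which is what goes into the Authorization header below.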

Once this is in place, the HTTP trigger cloud function can be invoked with:

curl -X POST -H "Content-Type: application/json" \
  -H "Authorization: Bearer access-token" \
  -d '{"foo": "bar"}' \
  "https://.cloudfunctions.net/get"

Next leg: NYC

This has been one of the hardest and most terrifying decisions for me to make: leaving a growing career at Demandware / Salesforce Commerce Cloud to come to New York City and work for Bloomberg.

Now that I’ve been in New York for a few months, I’d like to jot down a few thoughts, so that I can look back on them at some point in the future.

At this juncture, looking back, it is still a toss-up whether this decision has turned out to be the right one. I am going through a lot of challenges, both personally and professionally, that make me constantly question the move. I hope that in a year or so the outlook will improve. I knew this was a long-term investment that would require a bit of short-term pain; I should try to stay positive and see it through to the rewards.

TIL – Linking

I was unaware of the difference between static linking and dynamic linking in Linux. Thankfully, Ben Kelly explained these concepts to me in some detail in a Slack chat. I wanted to document them here for future reference.

Static linking: when you link the program (the step after compilation that combines all the compiler outputs into a single runnable program), the linker tracks down the libraries the program needs and copies them into the final program file.

Dynamic linking: at link time, the linker merely records which libraries are needed, and when you run the program, the “dynamic linker” reads that information and runs around loading those libraries into memory and making them accessible to the program.

The advantage of the latter is smaller (potentially *much* smaller) programs, and the ability to upgrade libraries without rebuilding everything that uses them. The disadvantage is that those upgrades can break compatibility, and that each library is another external dependency for the program (and thus another point of failure).
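
A quick way to see the difference for yourself (a rough sketch, assuming gcc and a trivial hello.c on a typical Linux box):

# dynamic linking is the default; the binary only records which libraries it needs
gcc hello.c -o hello-dynamic
ldd hello-dynamic                     # lists the shared libraries loaded at run time

# static linking copies the needed library code into the binary itself
gcc -static hello.c -o hello-static
ldd hello-static                      # reports "not a dynamic executable"
ls -lh hello-dynamic hello-static     # the statically linked binary is noticeably larger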

The dynamic linker has a bunch of ways it figures out where the libraries are stored; `man ld.so` for all the gory details.

But the tl;dr is that it has a few system paths it looks in (typically /lib and /usr/lib), plus whatever is listed in the environment variable LD_LIBRARY_PATH, plus whatever is recorded as the “rpath” in the executable itself.
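
To poke at those lookup mechanisms, a few commands I find handy (again a sketch; the extra library directory is illustrative, and hello-dynamic is the binary from the example above):

# list the libraries that would be loaded, and where they were found
ldd ./hello-dynamic

# add a directory to the library search path for a single run
LD_LIBRARY_PATH=/opt/mylibs ./hello-dynamic

# inspect the rpath/runpath recorded in the executable, if any
readelf -d ./hello-dynamic | grep -i -E 'rpath|runpath'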