Authentication for Google Cloud Functions with JWT and Auth0

Surprised that there was no built-in authentication mechanism for Google Cloud Functions, I made an attempt to implement a simple one with JWT and Auth0

With all the hype around serverless, I recently took a stab at creating a cloud function to see how it goes. I went with Google Cloud Functions instead of AWS Lambda because I had some free signup credits on Google.

I started with this tutorial https://cloud.google.com/functions/docs/tutorials/http, and it seemed pretty straightforward. I created a cloud function with an HTTP trigger in about 30 minutes.

The function I deployed adds an entry to a Cloud Datastore database every time I make a curl request to the function’s endpoint. That was pretty thrilling.

curl -X POST -H "Content-Type: application/json" \
  -d '{"foo": "bar"}' \
  "https://.cloudfunctions.net/post"

However, it soon dawned on me that this was pretty insecure, as anyone who knew of this endpoint could write to the database. Imagine if I had written a delete function! I thought surely Google must have built in some sort of authentication scheme for Cloud Functions. But after googling around for a while, it didn’t seem so. I next did what any clueless developer would do and posted a question on Stack Overflow.

After a few days, the answers I got back seemed pretty disappointing. Apparently if I had used AWS Lambda, I could have leveraged API Gateway, which supports auth. But I was on my own with Google Cloud Functions.

So I decided to implement an authentication check for my cloud function, with the help of Auth0: the caller passes a JWT as an access token in the Authorization header.

Here’s the implementation in Node; the explanation follows.

const jwksClient = require('jwks-rsa');
const jwt = require('jsonwebtoken');

const client = jwksClient({
  cache: true,
  rateLimit: true,
  jwksRequestsPerMinute: 5,
  jwksUri: "https://.auth0.com/.well-known/jwks.json"
});

function verifyToken(token, cb) {
  let decodedToken;
  try {
    decodedToken = jwt.decode(token, {complete: true});
  } catch (e) {
    console.error(e);
    cb(e);
    return;
  }
  if (!decodedToken) {
    // jwt.decode returns null (rather than throwing) for a malformed token
    cb(new Error('Unable to decode token.'));
    return;
  }
  // look up the public key matching the key ID (kid) in the token header
  client.getSigningKey(decodedToken.header.kid, function (err, key) {
    if (err) {
      console.error(err);
      cb(err);
      return;
    }
    const signingKey = key.publicKey || key.rsaPublicKey;
    // verify the token's signature against the public key
    jwt.verify(token, signingKey, function (err, decoded) {
      if (err) {
        console.error(err);
        cb(err);
        return;
      }
      console.log(decoded);
      cb(null, decoded);
    });
  });
}

function checkAuth (fn) {
  return function (req, res) {
    if (!req.headers || !req.headers.authorization) {
      res.status(401).send('No authorization token found.');
      return;
    }
    // expect authorization header to be
    // Bearer xxx-token-xxx
    const parts = req.headers.authorization.split(' ');
    if (parts.length !== 2) {
      res.status(401).send('Bad credential format.');
      return;
    }
    const scheme = parts[0];
    const credentials = parts[1];

    if (!/^Bearer$/i.test(scheme)) {
      res.status(401).send('Bad credential format.');
      return;
    }
    verifyToken(credentials, function (err) {
      if (err) {
        res.status(401).send('Invalid token');
        return;
      }
      fn(req, res);
    });
  };
}

I use jwks-rsa to retrieve the public part of the key that was used to sign the JWT, and jsonwebtoken to decode and verify the token. Since I use Auth0, jwks-rsa reaches out to my Auth0 tenant’s JWKS endpoint (the list of public keys) to retrieve them.

The checkAuth function can then be used to safeguard the cloud function as:

exports.get = checkAuth(function (req, res) {
  // do things safely here
});

You can see the entire Google Cloud Functions repo at https://github.com/tnguyen14/functions-datastore/
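For illustration, here’s a minimal sketch (not the exact code in that repo) of what a guarded write function might look like. It assumes a recent @google-cloud/datastore client, and the Entry kind name is made up.

// A minimal sketch: a checkAuth-guarded function that writes the request
// body to Datastore. 'Entry' is a hypothetical kind name.
const {Datastore} = require('@google-cloud/datastore');
const datastore = new Datastore();

exports.post = checkAuth(function (req, res) {
  const key = datastore.key('Entry'); // incomplete key; Datastore assigns an ID
  datastore.save({key: key, data: req.body})
    .then(function () {
      res.status(200).json({saved: true});
    })
    .catch(function (err) {
      console.error(err);
      res.status(500).send('Could not save entry.');
    });
});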

The JWT / access token can be generated in a number of ways. For Auth0, the API docs can be found at https://auth0.com/docs/api/authentication#authorize-client
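For example, with an Auth0 machine-to-machine application, a token can be requested via the client credentials grant against the /oauth/token endpoint. Here’s a rough sketch in Node; the tenant domain, client ID, client secret, and audience are placeholders for your own Auth0 settings.

// Rough sketch: request an access token from Auth0 with the
// client credentials grant. All the values below are placeholders.
const https = require('https');

const body = JSON.stringify({
  grant_type: 'client_credentials',
  client_id: 'YOUR_CLIENT_ID',
  client_secret: 'YOUR_CLIENT_SECRET',
  audience: 'YOUR_API_IDENTIFIER'
});

const req = https.request({
  hostname: 'YOUR_TENANT.auth0.com',
  path: '/oauth/token',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(body)
  }
}, function (res) {
  let data = '';
  res.on('data', function (chunk) { data += chunk; });
  res.on('end', function () {
    // the response includes access_token, token_type and expires_in
    console.log(JSON.parse(data).access_token);
  });
});
req.on('error', console.error);
req.write(body);
req.end();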

Once this is in place, the HTTP trigger cloud function can be invoked with:

curl -X POST -H "Content-Type: application/json" \
  -H "Authorization: Bearer access-token" \
  -d '{"foo": "bar"}' \
  "https://.cloudfunctions.net/get"

Next leg: NYC

This has been one of the hardest, most terrifying decisions for me to make: to leave a growing career at Demandware / Salesforce Commerce Cloud and come to New York City to work for Bloomberg.

Now that I’ve been in New York for a few months, I’d like to jot down a few thoughts, so that I can look back on them at some point in the future.

At this juncture, looking back, it is still a toss-up whether this decision has turned out to be the right one. I am going through a lot of challenges, both personally and professionally, that make me constantly question the move. I hope that in a year or so, the outlook on things will improve. I knew that it was a long-term investment that would require a bit of short-term pain. I should try to stay positive and see the rewards.

TIL – Linking

I was unaware of the difference between static linking and dynamic linking in Linux. Thankfully Ben Kelly explained these concepts to me in some detail in a Slack chat. I wanted to document them here for future reference.

Static linking: when you link the program (the step after compilation that combines all the compiler outputs into a single runnable program), the linker tracks down the libraries the program needs and copies them into the final program file.

Dynamic linking: at link time, the linker merely records which libraries are needed, and when you run the program, the “dynamic linker” reads that information and runs around loading those libraries into memory and making them accessible to the program.

The advantage of the latter is smaller (potentially *much* smaller) programs, and you can upgrade the libraries without rebuilding everything that uses them. The disadvantage is that those upgrades can break compatibility, and it’s another external dependency for the program (and thus another point of failure).

The dynamic linker has a bunch of ways it figures out where the libraries are stored; `man ld.so` for all the gory details.

But the tl;dr is that it has a few system paths it looks in (typically /lib and /usr/lib), plus whatever is listed in the environment variable LD_LIBRARY_PATH, plus whatever is recorded as the “rpath” in the executable itself.

TIL – Proper Tail Calls

I’ve heard of Proper Tail Calls (PTC) being tossed around a lot as a new ES6 thing that a lot of people are excited about, but I had no idea what it actually was.

This 2ality article is really thorough and helpful in explaining what PTC is and how it works.
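To make it concrete, here’s a small example of my own (not from the article). A call is in tail position when it is the very last thing a function does; with PTC (which only applies in strict mode), the engine can reuse the current stack frame for such calls instead of growing the stack.

'use strict'; // PTC only applies in strict mode

// Not a tail call: after the recursive call returns, we still have to
// multiply by n, so every call keeps its own stack frame.
function factorial(n) {
  if (n <= 1) return 1;
  return n * factorial(n - 1);
}

// Proper tail call: the recursive call is the last thing the function
// does, so with PTC the engine can reuse the current frame.
function factorialTail(n, acc = 1) {
  if (n <= 1) return acc;
  return factorialTail(n - 1, n * acc);
}

// With PTC this runs in constant stack space (the numeric result overflows
// to Infinity, but there is no stack overflow). Without PTC it throws
// "RangeError: Maximum call stack size exceeded".
console.log(factorialTail(100000));

As far as I know, JavaScriptCore (Safari) is the only major engine that ships PTC, which is part of why the V8 team’s position below is interesting.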

I was first pointed to this topic by a blog post from the V8 team. In it, they also reference an alternative to PTC called Syntactic Tail Calls (STC), which is a more explicit way to opt in to and use PTC. They prefer this approach because of a couple of concerns with implicit PTC:

1. It makes it more difficult to understand during debugging how execution arrived at a certain point, since the stack contains discontinuities, and
2. Error.prototype.stack contains less information about execution flow, which may break telemetry software that collects and analyzes client-side errors.

These concerns, as well as a third about performance hit, are addressed by the WebKit team in a Github issue response.

A note about the terminology: when learning about this, another term is also used to describe PTC, i.e. tail call optimization (TCO). I think that PTC is preferred, as it should be a mandatory feature, as it is currently spec-ed in ES6, and not an optional optimization. See this tweet.

tesseraic, an engineer at Apple, has been incredibly helpful in pointing me to some of these resources and explaining the concept in more detail.