Today I learned in more depth about the CSRF (Cross-Site Request Forgery) attack, and more specifically how to prevent it in node.
I’ve been thinking more about CSRF after some conversations at work about it, as well as a (mandatory) security training, and wondered how I would go about implementing CSRF protection in my node application. Understanding CSRF is a pretty good, quick read on the topic.
At first, I suspected implementing a solution would be rather involved as I’d have to keep track of a time-based token (a unique random value that would last for some time, such as 30 minutes) on the server and validate any form request based on this token.
Turns out, it can be as simple as setting a strong random string as the token in a cookie (if the request does not have it set yet), and then expecting this value in the POST request payload (as a hidden field). The payload field is then compared against the cookie token to determine whether it’s a valid request. The cookie acts as the maintainer of the CSRF token’s state in this case; this is commonly known as the double-submit cookie pattern.
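As a rough sketch in express (hedged: the cookie name, route, and middleware wiring here are my own illustration, not from any particular library):

// a minimal double-submit sketch (hypothetical names, not a library API)
const crypto = require('crypto');
const express = require('express');
const cookieParser = require('cookie-parser');
const bodyParser = require('body-parser');

const app = express();
app.use(cookieParser());
app.use(bodyParser.urlencoded({ extended: false }));

app.use((req, res, next) => {
  // set a strong random token in the cookie if the request does not have one yet
  if (!req.cookies.csrfToken) {
    res.cookie('csrfToken', crypto.randomBytes(32).toString('hex'));
  }
  next();
});

app.post('/submit', (req, res) => {
  // compare the hidden form field against the cookie token
  // (plain !== here for brevity; see the timing-attack sidenote below)
  if (!req.body.csrfToken || req.body.csrfToken !== req.cookies.csrfToken) {
    return res.status(403).send('invalid CSRF token');
  }
  res.send('ok');
});

app.listen(3000);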
In the hapijs world, this is done with crumb. I’m not familiar with hapi, but while looking into it, I learned about cryptiles, a general-purpose utility that can create the cryptographically strong random string mentioned above.
In the expressjs world, there’s csurf. It uses csrf under the hood, which takes a slightly different approach: instead of a random string, it generates a token as a salted ‘sha1’ hash of a secret. The secret is attached to the session (either via a cookie or via session middleware, i.e. an httpOnly cookie). In csurf’s example, the token is not sent to the client via a cookie, but rendered directly into the form. It can later be verified by the server using the secret. It is recommended that the token not be sent back in a response body.
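From my reading of the csrf module’s README, its API works roughly like this (worth double-checking against the current docs):

const Tokens = require('csrf');
const tokens = new Tokens();

const secret = tokens.secretSync();   // store this on the session
const token = tokens.create(secret);  // render this into the form's hidden field

tokens.verify(secret, token);         // => true for a token created from this secret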
For my use case, I am not using an express server-side rendering architecture, so the token will most likely be sent to the client via a cookie, and then read and embedded by the client before POST-ing back to the server. I will update this post when I try this out successfully.
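Something like this on the client side, assuming the cookie is named csrfToken and is readable by script (i.e. not httpOnly):

// read the token out of document.cookie
function getCsrfToken() {
  const match = document.cookie.match(/(?:^|;\s*)csrfToken=([^;]+)/);
  return match ? decodeURIComponent(match[1]) : '';
}

// embed the token in the POST payload before sending it back to the server
fetch('/submit', {
  method: 'POST',
  credentials: 'same-origin',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: 'csrfToken=' + encodeURIComponent(getCsrfToken())
});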
Sidenote: while reading about these implementations, I also learned about timing attacks on naive string comparison. Both of the libraries mentioned above (cryptiles and csrf) use constant-time comparison: the former implements its own method, while the latter uses scmp.
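Newer node versions also ship a built-in for this; a minimal sketch (the length check is needed because crypto.timingSafeEqual throws when the buffers differ in length):

const crypto = require('crypto');

// constant-time comparison of two token strings
function safeCompare(a, b) {
  const bufA = Buffer.from(String(a));
  const bufB = Buffer.from(String(b));
  // for fixed-size tokens, leaking the result of the length check is acceptable
  if (bufA.length !== bufB.length) return false;
  return crypto.timingSafeEqual(bufA, bufB);
}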
Update: upon further thought, for csurf and csrf, the secret is actually the thing comparable to the token in crumb. They’re both created using node’s crypto randomBytes method, which is presumably strong enough not to be guessable. In my limited understanding, I am not sure why the extra step of generating the token is necessary, since the token can be regenerated from the secret, and the salt is stored with the token as well. Perhaps one benefit is that it ties the token to a particular session when the secret is stored in a session-specific manner (not set as a regular cookie).
I spent quite a bit of time at work last year thinking about end-to-end test infrastructure. The main idea behind end-to-end tests is to use browser automation to check whether a web page has rendered correctly and behaves correctly under user interactions.
18 months ago, I first set up a simple architecture using mocha and webdriver.io. Mocha is the main test runner; as it invokes webdriver.io, the latter communicates with a selenium server running in the background and sends commands along to the browser. What this looks like is, when you run a command like
:; mocha test/e2e/index.js
it will open up a browser of your choice, interact with it as instructed (scrolling, clicking, reading element text content), assert the results, then close the browser window and finish the test.
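A test in this style looks roughly like this (hypothetical URL and selector; v3/v4-era webdriver.io API):

// test/e2e/index.js
var assert = require('assert');
var webdriverio = require('webdriverio');

describe('home page', function () {
  this.timeout(60000);
  var client;

  // boilerplate: create and initialize the webdriver.io client ourselves
  before(function () {
    client = webdriverio.remote({ desiredCapabilities: { browserName: 'chrome' } });
    return client.init();
  });

  it('renders the heading', function () {
    return client
      .url('http://localhost:3000')
      .getText('h1')
      .then(function (text) {
        assert.equal(text, 'Hello');
      });
  });

  after(function () {
    return client.end();
  });
});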
6 months ago, I updated to using the wdio test runner instead, which works with mocha, so the majority of the test code continued to work. It is a bit nicer to use as it provides a global handle to call browser commands from, which avoids a bit of boilerplate setup code (as much as I usually dislike anything global).
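The same hypothetical test under the wdio runner, using the global browser handle (assuming wdio’s sync mode):

// no client setup needed; the wdio runner provides `browser` as a global
var assert = require('assert');

describe('home page', function () {
  it('renders the heading', function () {
    browser.url('http://localhost:3000');
    assert.equal(browser.getText('h1'), 'Hello');
  });
});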
Up until now, the way our tests were set up required a selenium server to be running in the background, and we used selenium-standalone for that, which is quite pleasant. This, however, sometimes becomes an annoyance, for a couple of reasons. The selenium version and browser driver versions were not tracked in our source code, so the tests could suddenly break if a bug was introduced in the latest version or an outdated version was installed. It is also somewhat cumbersome to keep a separate terminal window open for this process, or to use a process manager to keep it running in the background, especially when many of us are used to running tests with a simple npm test command.
Now comes the exciting part – the reason for this blog post in the first place. I put together a relatively simple script that eliminates the need for this background selenium process by using docker-selenium. The idea is that a selenium docker container is spun up before each test run so the test can connect to it via a port (defaulting to 4444), and the container is removed after the test has finished (successfully or otherwise). The script looks something like this:
#!/usr/bin/env bash
# bin/e2e-docker.sh
docker run -d --name chrome -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-chrome:2.52.0
# run actual test command in a subshell to be able to rm docker container afterwards
(
# passing arguments along to the test process
npm run test:e2e -- "$@"
)
# save exit code of subshell before cleanup clobbers $?
testresult=$?
docker stop chrome 1> /dev/null && docker rm chrome 1> /dev/null
exit $testresult
The chrome docker container exposes port 4444, which is the default port the wdio test runner connects to. The above script can be invoked via npm run scripts described in package.json.
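Roughly (script names taken from the commands used elsewhere in this post):

{
  "scripts": {
    "test": "./bin/e2e-docker.sh",
    "test:e2e": "wdio test/e2e/wdio.conf.js"
  }
}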
Now we can run our end-to-end tests with a simple npm test command. I thought of using the pretest and posttest hooks to set up and tear down the docker container, but posttest is not executed if the test fails, which would leave the container running after the test has finished. Pulling the setup and teardown steps into a bash script also lets me do a slightly more complex setup, described below.
While this approach removes the need to run a selenium server in the background, it requires that you have docker running on your host machine. If you’re on Windows or OS X and use docker-machine to run the docker server, the docker container lives at a different IP address than localhost. To get around that, we can tell the wdio test runner which host it should connect to:
:; npm test -- --host $(docker-machine ip)
With this approach of using docker-selenium, I can now track the version of the selenium drivers in my source code. Furthermore, the browsers are opened and run inside the docker container instead of on the host machine, which I find to be a bit faster and less disruptive to the developer experience. However, there are times during test writing when you might want to see what is actually going on for each test command. Fortunately, docker-selenium provides debug images that have a running VNC server we can connect to in order to see the browser window. Keeping the same npm run scripts, I can enable debug mode by running npm test -- --debug with the following modification to the bin/e2e-docker.sh script:
#!/usr/bin/env bash
CONTAINER_NAME="chrome"
# if a debug flag is passed in, use the debug image and open vnc screen sharing
if [[ $@ == *"--debug"* ]]; then
# parse the script arguments for the docker server IP address
ip=$(grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' <<< "$@")
docker run -d --name $CONTAINER_NAME -p 4444:4444 -p 5900:5900 -v /dev/shm:/dev/shm selenium/standalone-chrome-debug:2.52.0
sleep 2 # wait a bit for container to start
open vnc://"$ip":5900
else
docker run -d --name $CONTAINER_NAME -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-chrome:2.52.0
fi
# run actual test command in a subshell to be able to rm docker container afterwards
(
npm run test:e2e -- "$@" 2> /dev/null
)
# save exit code of subshell
testresult=$?
docker stop $CONTAINER_NAME 1> /dev/null && docker rm $CONTAINER_NAME 1> /dev/null
exit $testresult
A couple of changes have been made to this script. First, the container name was abstracted into the variable CONTAINER_NAME to keep things DRY. If you run this script in a Jenkins environment, it could potentially be parameterized with the build number as well. Second, an if/else block detects whether a --debug flag was passed and decides which docker image to use accordingly. If debug mode is on, the script will also try to open a VNC viewer (I have only tested this on OS X) so you can watch the browser actions (the password is “secret”).
Notice that I keep the wdio test/e2e/wdio.conf.js command behind its own npm run script instead of putting it in the bin/e2e-docker.sh script. This way I can still use the old method of manually running a local selenium server and pointing the test at it. While more cumbersome, it is sometimes helpful to be able to pause the browser and interact with it directly, such as inspecting DOM elements. Writing end-to-end tests can be rather tricky, and the ability to inspect can save a lot of debugging time.
While the temptation to do more work is strong, I decided it would be a day for personal growth.
So I woke up at a (reasonably) late hour, took the time to fix myself some breakfast, and set up my workstation with some soft coffeehouse music in the background. All of this put me in a peaceful, productive, and focused mood.
I hope to finally finish converting ledge to react and redux, and then maybe publish those 2 blog posts I have in drafts. I also have 7 tabs open with articles I should read and videos I should watch.
Small goals, but I’d be very happy if I could get through most of them.
I could really get used to this feeling and work environment. Is this what remote work is like?
2015 has been a crazy and weird year. Wouldn’t expect anything less next year. Hopefully I will be able to focus more time on personal growth, learning new technologies, solving more interesting problems, and being a better friend, boyfriend and family member.
As I was lying in bed tonight, for some reason I thought of the time I went back to Vietnam more than three years ago. My first night there was at my uncle’s place, where I slept on the floor. Even though I wasn’t at my own house, that night’s sleep felt very peaceful. It is a sense of peace that I haven’t felt in a long time.
I guess it was partly because I was on vacation from school. But more importantly, I was in a place where I belonged, where I didn’t feel threatened, and where my status wasn’t constantly in question and needed to be validated.
Being here in the US has been a blessing in many ways, and will continue to be. But I think this sense of peace, or lack thereof, will continue to haunt me for quite a while.