This has been one of the hardest, most terrifying decisions for me to make: leaving a growing career at Demandware/Salesforce Commerce Cloud to come to New York City and work for Bloomberg.
Now that I’ve been in New York for a few months, I’d like to jot down a few thoughts so that I can look back on them at some point in the future.
- One of the biggest reasons for leaving was to grow my career as a technical person. I was doing really well at Demandware, and I was getting comfortable. That is a great spot to be in. However, drawing from past experience, I’ve found that I grow the most when I put myself in unfamiliar, uncomfortable positions. It’s hard. It sucks. And as the last few months in NYC indicate, it really does suck. But I have also grown. I have been exposed to more technologies and more interesting ways of solving problems.
- I had to restart my PERM application process from scratch. At the time, I figured this was acceptable – I still had enough buffer time to do so. Looking back, I think it might have been wiser to stay on for another couple of years and complete it first.
- Another reason was for Mark and me to experience a new lifestyle: being in a big city, using public transportation, and, most importantly, being more active with easier access to walking and biking. This has been true. We do enjoy the new lifestyle. On the other hand, it has been dampened by the frustration of Mark’s job search. Hopefully, this is only a temporary setback.
- My parents are coming to visit for an extended period again this year. I’d like to be in a place where they can get out of the house on their own and explore. It seems like it shouldn’t be a factor in such a life-changing decision, but if I’m honest with myself, it is definitely one of the reasons. After being away from home for 12+ years, I’d like to spend more quality time with them.
- After three years at Demandware, I started to feel like I wasn’t progressing fast enough toward my career goals of being more involved in architecting applications (technical) and in building up teams (non-technical). I understood that I was (and still am) relatively inexperienced in both of those things. From the little I could deduce, those roles require a more varied background and set of experiences. The role at Bloomberg isn’t exactly the big next step up, but I decided to take a leap of faith, hoping that it would add to the diversity of my experiences. This could turn out to be a mistake; who knows. After sharing my decision with management at Demandware, it seemed like I had a chance at a shortcut toward my goals within the company. That felt like too little, too late, and given the other reasons, I decided to let go of that leverage and made the jump.
At this juncture, looking back, it is still a toss-up whether this decision has turned out to be the right one. I am going through a lot of challenges, both personal and professional, that make me constantly question the move. I hope that in a year or so the outlook will improve. I knew this was a long-term investment that would require some short-term pain. I should try to stay positive and wait for the rewards.
I was unaware of the difference between static linking and dynamic linking on Linux. Thankfully, Ben Kelly explained these concepts to me in some detail on a Slack chat. I wanted to document them here for future reference.
Static linking: when you link the program (the step after compilation that combines all the compiler outputs into a single runnable program), the linker tracks down the libraries the program needs and copies them into the final program file.
Dynamic linking: at link time, the linker merely records which libraries are needed, and when you run the program, the “dynamic linker” reads that information and runs around loading those libraries into memory and making them accessible to the program.
The advantages of the latter are smaller (potentially *much* smaller) programs, and the ability to upgrade the libraries without rebuilding everything that uses them. The disadvantages are that those upgrades can break compatibility, and that the libraries become another external dependency of the program (and thus another point of failure).
The dynamic linker has a bunch of ways it figures out where the libraries are stored; `man ld.so` for all the gory details.
But the tl;dr is that it has a few system paths it looks in (typically `/lib` and `/usr/lib`), plus whatever is listed in the `LD_LIBRARY_PATH` environment variable, plus whatever is recorded as the `rpath` in the executable itself.
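To see the difference concretely, here’s a hypothetical shell session (assuming `gcc` and `ldd` are available; the file names are made up):

```sh
# Build the same program twice, once per linking mode.
gcc hello.c -o hello-dynamic          # default: dynamic linking
gcc -static hello.c -o hello-static   # copies the libraries into the binary

# ldd asks the dynamic linker which shared libraries a program needs.
ldd hello-dynamic    # prints entries like: libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
ldd hello-static     # prints "not a dynamic executable"

# Point the dynamic linker at an extra directory of libraries.
LD_LIBRARY_PATH=/opt/mylibs ./hello-dynamic
```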
I’ve heard Proper Tail Calls (PTC) tossed around a lot as a new ES6 thing that a lot of people are excited about, but I had no idea what they actually were.
This 2ality article is really thorough and helpful in explaining what PTC is and how it works.
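As a quick illustration (my own made-up example, not from the article): a call is in tail position when it’s the very last thing a function does, so with PTC the engine can reuse the current stack frame instead of pushing a new one.

```js
'use strict'; // per the ES6 spec, PTC only applies in strict mode

// The recursive call below is in tail position: its result is
// returned directly, with no further work left in this frame.
function sumTo(n, acc = 0) {
  if (n === 0) return acc;
  return sumTo(n - 1, acc + n); // a proper tail call
}

// In an engine that implements PTC (e.g. JavaScriptCore/Safari) this
// runs in constant stack space; in engines without PTC a large enough
// n eventually throws a RangeError (stack overflow).
console.log(sumTo(100000));
```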
I was first pointed to this topic by a blog post from the V8 team. In it, they also reference an alternative to PTC called Syntactic Tail Calls (STC), a more explicit way to opt in to tail calls. They prefer that approach because of a couple of problems with implicit PTC:
1. It makes it more difficult to understand, while debugging, how execution arrived at a certain point, since the stack contains discontinuities, and
2. Error.prototype.stack contains less information about execution flow, which may break telemetry software that collects and analyzes client-side errors.
These concerns, as well as a third about a performance hit, are addressed by the WebKit team in a GitHub issue response.
A note about the terminology: when learning about this, you’ll also see another term used to describe PTC, namely tail call optimization (TCO). I think PTC is the preferred term: as it is currently spec’d in ES6, it is a mandatory feature, not an optional optimization. See this tweet.
tesseraic, an engineer at Apple, has been incredibly helpful in pointing me to some of these resources and explaining the concept in more detail.
Today I learned in more depth about the CSRF (Cross-Site Request Forgery) attack, and more specifically how to prevent it in node.
I’ve been thinking more about CSRF after some conversations at work, as well as a (mandatory) security training, and I wondered how I would go about implementing CSRF protection in my node application. Understanding CSRF is a pretty good, quick read on the topic.
At first, I suspected implementing a solution would be rather involved, as I’d have to keep track of a time-based token (a unique random value that lasts for some period, such as 30 minutes) on the server and validate any form request against that token.
Turns out, it can be as simple as setting a strong random string as the token in a cookie (if the request does not have one set yet), and then expecting the same value in the POST request payload (as a hidden field). The payload field is then compared against the cookie token to determine whether the request is valid. The cookie acts as the maintainer of the CSRF token’s state in this case.
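Here’s a minimal sketch of that double-submit pattern (assuming Express with cookie-parser and a body parser; the cookie and field names are my own choices):

```js
const crypto = require('crypto');
const express = require('express');
const cookieParser = require('cookie-parser');

const app = express();
app.use(cookieParser());
app.use(express.urlencoded({ extended: false }));

// Ensure every visitor has a CSRF token cookie.
app.use((req, res, next) => {
  if (!req.cookies.csrfToken) {
    const token = crypto.randomBytes(32).toString('hex'); // strong random string
    res.cookie('csrfToken', token);
    req.cookies.csrfToken = token; // make it visible to this request too
  }
  next();
});

// On POST, the hidden form field must match the cookie.
app.post('/submit', (req, res) => {
  // Naive string comparison; see the timing-attack sidenote below.
  if (req.body._csrf !== req.cookies.csrfToken) {
    return res.status(403).send('invalid CSRF token');
  }
  res.send('ok');
});
```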
In the hapijs world, this is done with crumb. I’m not familiar with hapi, but while looking into it, I learned about cryptiles, a general-purpose utility that can create the kind of cryptographically strong random string mentioned above.
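For example, generating a one-off token with cryptiles might look like this (a sketch based on its documented API; the size is arbitrary):

```js
const cryptiles = require('cryptiles');

// A random string of the given length, drawn from a URL-safe alphabet.
const token = cryptiles.randomString(43);
console.log(token);
```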
In the expressjs world, there’s csurf. It uses csrf under the hood, which takes a slightly different approach: instead of a plain random string, it generates a token using a ‘sha1’ salted hash of a secret. The secret is attached to the session (via either the cookie or the session middleware (httpOnly cookie)). In csurf’s example, the token is not sent to the client via a cookie, but rendered directly into the form. It can later be verified by the server using the secret. It is recommended that the token not be sent back in a response body.
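A sketch of the csurf setup, following the pattern in its README (assuming the cookie-based mode for the secret):

```js
const cookieParser = require('cookie-parser');
const csurf = require('csurf');
const express = require('express');

const app = express();
app.use(cookieParser());
app.use(express.urlencoded({ extended: false }));
app.use(csurf({ cookie: true })); // store the secret in a cookie

app.get('/form', (req, res) => {
  // req.csrfToken() derives a fresh token from the secret.
  res.send(
    `<form method="POST" action="/process">
       <input type="hidden" name="_csrf" value="${req.csrfToken()}">
       <button>Submit</button>
     </form>`
  );
});

// csurf validates the _csrf field against the secret automatically,
// responding with a 403 when they don't match.
app.post('/process', (req, res) => res.send('ok'));
```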
For my use case, I am not using an Express server-side-rendering architecture, so the token will most likely be sent to the client via a cookie, then read and embedded by the client before POST-ing back to the server. I will update this post when I try this out successfully.
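Presumably the client side would look something like this (untested; assumes the double-submit cookie from the sketch above, readable by JavaScript, i.e. not httpOnly):

```js
// Read the csrfToken cookie set by the server.
function getCookie(name) {
  const match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}

// Embed the token in the POST payload so the server can compare
// it against the cookie value.
fetch('/submit', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: new URLSearchParams({ _csrf: getCookie('csrfToken'), comment: 'hello' }),
  credentials: 'same-origin', // send the cookie along with the request
});
```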
Sidenote: while reading about these implementations, I also learned about timing attacks on simple string comparison. So both of the libraries I mentioned above (crumb and csrf) use constant-time comparison. The former implements its own method, while the latter uses scmp.
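Node’s own crypto module also provides a constant-time primitive (assuming a Node version new enough to have crypto.timingSafeEqual); a minimal sketch:

```js
const crypto = require('crypto');

// Compare two tokens without leaking, via response time, how many
// leading characters matched. Hashing first makes the inputs
// equal-length, which timingSafeEqual requires.
function safeCompare(a, b) {
  const ha = crypto.createHash('sha256').update(String(a)).digest();
  const hb = crypto.createHash('sha256').update(String(b)).digest();
  return crypto.timingSafeEqual(ha, hb);
}

console.log(safeCompare('secret-token', 'secret-token')); // true
console.log(safeCompare('secret-token', 'guess'));        // false
```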
Update: upon further thinking, for csrf, the secret is actually the thing that is comparable to the token in crumb. They’re both created using node’s `crypto.randomBytes` method, which is presumably strong enough to not be guessable. In my limited understanding, I am not sure why the extra step of generating the token is necessary, as the token can be regenerated from the secret, and the salt is stored with the token as well. Perhaps one benefit is that it ties the token to a particular session when the secret is stored in a session-specific manner (not set as a regular cookie).
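To make the secret/token relationship concrete, here is a sketch using the csrf package’s documented API:

```js
const Tokens = require('csrf');
const tokens = new Tokens();

// The secret is the long-lived random value stored with the session.
const secret = tokens.secretSync();

// Each token is a salted sha1 hash derived from the secret; a new
// salt (and therefore a new token) can be issued per request.
const token = tokens.create(secret);

// Verification only needs the secret plus the presented token,
// since the salt is embedded in the token itself.
console.log(tokens.verify(secret, token));      // true
console.log(tokens.verify(secret, 'tampered')); // false
```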