How I get started on using Vim, part 1 – my .vimrc explained

For some reason, I’ve had it in my head that as a programmer, I should learn how to use a command-line-based text editor like Vim. A piece of advice I often hear is that Vim is more universal across devices, so knowing it would at least give me a way to navigate computers other than my own. That would be more applicable if I were an ops person, but on the infrequent (though increasingly less so) occasions that I have to remotely log in to a server box, I do find it to be true. Furthermore, since for most codebases I deal with, I already have a terminal window open for git and build tools, I figure keeping text editing in the same context will be beneficial.

I tend to agree with Yehuda Katz’s experience learning Vim: it is something one should learn incrementally. I started down this path almost two years ago, only to decide on several occasions that it wasn’t the right tool for me. It was not until very recently that I felt comfortable enough to use it as my primary text editor. Even then, there are times when I find myself opening up Sublime Text or Atom because they’re more appropriate for what I need to do.

What allowed me to become more comfortable in Vim is less about learning the shortcuts and building muscle memory, and more about the heavy customization I have put on top of Vim to give it the conveniences that editors like Sublime have out of the box, as well as a deeper insight into how Vim actually works.

I have been thinking about writing this blog post for a while, but never felt like I had enough “authority” on this topic. But I decided today that it’s time I break down and explain my Vim setup (through the .vimrc file) before it gets too complicated.

To me, the most fundamental difference between Vim and a GUI editor like Sublime is that Vim is a modal editor. That means Vim has different “modes” that a user can be in at any given time. In Sublime, you’re always in edit mode: anything you type on the keyboard is recorded in the text buffer, and other actions, such as saving, linting or navigating, are done through special key combinations involving the Ctrl key on Windows or the Cmd key on OS X. In Vim, the default mode is NORMAL, which is where you can do most of these “meta” actions on the document without modifying the text itself. You need to explicitly enter INSERT mode to edit something, and escape out of it to perform those actions. Being aware of the currently active mode is important, and will help beginners feel less confused and lost. You can make Vim show the mode with the command :set showmode while in NORMAL mode, or see the .vimrc breakdown to see how to set that permanently.
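As a quick sketch (assuming a Unix-like system where Vim reads its configuration from ~/.vimrc), you can persist that setting from the shell:

```shell
# Append the setting so every Vim session shows the active mode
cat >> "$HOME/.vimrc" <<'EOF'
" display the current mode (e.g. -- INSERT --) at the bottom of the screen
set showmode
EOF
```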

The second most powerful difference in Vim as compared to a GUI editor is the concept of motions, or text objects. There are a lot of great tutorials and resources out there explaining this. I find Vim Text Objects: The Definitive Guide to be a really great place to start, as it was the “ah ha!” moment for me. This way of moving around a document is probably what makes Vim much more appealing and efficient for me.

As I walk through my .vimrc setup, I will mark the settings I consider essential for basic developer use with [basic], so search for those if you just want a quick way to get started with Vim. My .vimrc file actually changes quite often as I refine the setup, so take a look at the source code to see the latest version. Here goes:

While plugins and basic setup set a friendlier stage for a beginner to start exploring Vim, one of the most frustrating things when trying Vim out is not knowing how to do basic developer tasks in it. In addition to the “[basic]” setup I have outlined here, I will document more of those tasks, such as find/replace, copy/paste, indentation, etc., in the second part of this series.

Foray into IRC, with the help of ZNC

Why IRC?

As someone relatively new to the developer scene, I’ve been rather intimidated by chatting on IRC. I used it a couple of times in the past to get help on open-source projects, and those experiences were confusing at best. There is usually a lot of noise, so it’s difficult to keep track of what’s going on. This problem is exacerbated by the fact that most IRC clients do not keep a history, so if for some reason you’re disconnected from a chat, say you need to close your computer for a while or your network fails, there is no way to retrieve what was said during that downtime.

I tried out irccloud, and it was a pretty good option. With the free plan, I am able to stay connected for up to 2 hours after I become inactive, which definitely helps with the scenario described above. I mostly use IRC on my desktop, but the irccloud mobile apps come in pretty handy if I need to step out for a bit. On the desktop, the only option is the webapp, which has desktop notifications; since I use webapps for email anyway, I didn’t mind it too much.

What is ZNC?

Recently, I’ve been more interested in doing some self-hosting, and IRC seemed like an interesting way to try that out. More specifically, I wanted to try hosting my own IRC bouncer. I had no idea what that meant until somebody on the LGBTQ Slack chat mentioned ZNC, an IRC bouncer. Basically, it is a middle man between your IRC client and the IRC server. Instead of connecting to the IRC server directly, you connect to the bouncer, which then relays you to the server. If you become disconnected, the bouncer stays connected and maintains a buffer of messages while you’re gone, allowing you to catch up later. By default, the size of this buffer is 50 lines, which I think is most likely sufficient for IRC (just enough to provide you with the context of the current conversation), but you can definitely increase it to whatever you want.

Like most things related to IRC, the ZNC website contains a lot of useful information, but it is not the friendliest for absolute beginners. So with this post, I want to detail my experience setting it up from scratch with very little knowledge of how it all works. I spent quite a bit of time figuring it out on my own, so I hope this will be useful and save someone else some precious time.

Run ZNC with docker

First, I decided not to install and run ZNC as the website instructs, but to run it with docker instead. This way, it is more contained and repeatable, which gave me some room to make mistakes. I’ve been into the whole docker workflow lately. Thanks to the jimeh/znc image, this setup is quite simple. After installing docker on your host (in this case, I am using a Digital Ocean droplet), you can pull the image down and run the container:


			:; docker pull jimeh/znc  # this step is optional
			:; docker run -d --name znc -p 6667:6667 -v $HOME/.znc:/znc-data jimeh/znc
			

What each of the flags does: -d runs the container in the background (detached mode); --name znc names the container so it can be referred to later; -p 6667:6667 maps port 6667 on the host to port 6667 in the container; and -v $HOME/.znc:/znc-data mounts the host directory $HOME/.znc as the container’s data directory, so the ZNC configuration survives container restarts.

Note: I use port 6667 here, while the docs for jimeh/znc show an example with 36667. Either option works, as you can specify which port to connect to from your client. Most clients default to port 6667, so I chose it for convenience. There’s a slight problem with this choice, however: in the next step, when configuring ZNC at http://host-ip:6667, the page will not load in Chrome, as Chrome blocks port 6667. The ZNC server can still be accessed from a different browser, such as Firefox.

Configure ZNC server

Now that we have ZNC running, we can start configuring it by going to http://host-ip:6667, where host-ip is the address of the host machine, and 6667 is the port on the host machine that we set in the docker run command above.

Log in with the default username/password combination admin/admin. You’re advised to create your own administrator user:

I’d recommend you create your own user by cloning the admin user, then ensure your new cloned user is set to be an admin user. Once you login with your new user go ahead and delete the default admin user.

Once logged in, you can add an IRC server for ZNC to connect to, such as freenode.

You can now add a list of channels you want to be able to connect to through the ZNC interface. You can also just join any channel with the command /join #channel-name from the IRC client, and ZNC will create these channels for you. Once you have a list of channels you belong to, you can change the size of buffer (to something other than 50) from the ZNC interface.
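Alternatively, ZNC’s built-in *status module lets you change the buffer size from within your IRC client (the channel name and line count here are just placeholders):

/msg *status SetBuffer #channel-name 200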

That is it. Now you can start using IRC through your ZNC bouncer and enjoy benefits such as the message buffer. To be completely honest, I am still pretty much a noob when it comes to ZNC. I am sure there are a lot of amazing things you could do with ZNC to make your IRC experience better. If you know of some of these, please let me know! I would love to learn.

Set up IRC client with Textual

There are many IRC clients out there, including WeeChat, a CLI client. I went with Textual as it is highly recommended and has support for ZNC bouncers built in. It is not a free app, however, though its cost is not too high ($4.99 right now).

Once you open Textual, add a new server and fill out the fields under the “General” tab as follows:

In the “Identity” tab, you can have “Username” set as <username>/<network>, while “Nickname” can be whatever nickname you desire.

If your channel requires your nickname to be registered with a password, you can do it with this command:

/msg NickServ REGISTER <password> <email>

Later on, when you connect again, you can identify yourself with:

/msg NickServ IDENTIFY <nickname> <password>

Bonus: connect to gitter through IRC and ZNC

An alternative to IRC for Github-hosted open-source projects that has been growing in popularity is gitter. It overcomes a lot of IRC limitations with some impressive features, such as full message history, markdown syntax support for writing messages (backticks are amazing for writing code blocks), message editing and deleting, etc. Some open-source projects have been adopting it, such as AmpersandJS. While I like using it, I do not particularly like how the desktop app (which is essentially a wrapper around the webapp) constantly displays an unread badge that I can’t seem to hide. This is too distracting for me. Thankfully, there is a way to connect to gitter via an IRC client, thanks to irc.gitter.im. This allows me to group gitter chats with regular IRC channels.

We can use our ZNC bouncer to connect to the irc.gitter.im server, and then our IRC client like Textual can just connect to the ZNC server.

First, you need to get the gitter token (from the irc.gitter.im page).

Then, add gitter as a network on your ZNC server. This is essentially the same as the steps outlined above, with a few differences:

The steps to connect from Textual are exactly the same as before, since ZNC has already taken care of authentication with gitter. The only difference is that you need to set the Nickname to your Github username.

These days, I have the Textual app open in the background, and it automatically connects to the ZNC server. I am not active on any IRC channel, as I much prefer using something like Slack, but it’s kinda fun to pop in and see what’s going on. I keep a few channels that I care about open, such as #node.js, #docker, #AmpersandJS/AmpersandJS or #w3c/a11ySlackers. I still think that IRC is rather intimidating, and its interface outdated and not very user-friendly. However, I find the idea of connecting, with such a low barrier to entry, to many others around the world who care about the same topic to be quite powerful. That is why I am investing some time and effort into making it as pleasant an experience as possible.

Simple Heroku-like workflow with git and docker compose

Inspired by Aria’s post about simple deployment with git, using receive.denyCurrentBranch = updateInstead, a new feature in git 2.3.0, I have been experimenting with putting together a Heroku-like workflow (i.e. deploying with just git push heroku master) using only git and docker.

Since Heroku essentially consists of a compute worker and a bunch of services that are accessible to the main process through environment variables, it fits quite nicely with the docker ecosystem. Specifically, the setup can be achieved quite seamlessly with docker-compose.

The primary reason I wanted to move away from Heroku is cost. Most of what I do is just at the hobby level, but with Heroku’s recent changes to their pricing, my hobby app will be forced to sleep if it runs for more than 18 hours a day. That seems reasonable, but I kept running into issues with an app that is pinged every 10 minutes by a cron job.

For this new setup, I use Digital Ocean as the hosting provider. It isn’t exactly free either, but for $5-10/month, I can host all of my hobby apps. Also, I was already paying for the droplet to run a couple of other services.

Setup

Docker Compose

You develop your app as per usual. Then add a docker-compose.yml file to declare all the different services that your app might rely on (the main suspect is usually a database component). Even if your app consists of only a single container, you can still use docker-compose to run it. The reason for using compose is that it has a feature called smart recreate that will only rebuild a container if its configuration has changed.

Here’s an example docker-compose.yml file I have for a node application:


			app:
			  build: .
			  links:
			    - db:mongodb
			  environment:
			    PORT: 3000
			  ports:
			    - "3000:3000"
			
			db:
			  image: mongo
			  volumes:
			    - ./data/db:/data/db
			

Git

On the hosting server, create a new git repository that will serve as your remote. For example:

:; git init myapp
:; cd myapp
:; git config receive.denyCurrentBranch updateInstead

This will allow the remote repo to accept push updates.

Now, back on the local machine, add the new remote:

:; git remote add do ssh://droplet-ip/path-to-remote-myapp

Lastly, the remote git repository needs to be configured to run docker-compose up -d every time it receives a push update. This can be simply achieved with git hooks:

:; cd myapp
:; cd .git/hooks
:; touch post-update
:; chmod +x post-update

The post-update file should have the following content:


			#!/usr/bin/env bash
			
			docker-compose build
			docker-compose up -d

With this setup, every time we make a change to our app locally, we can deploy it with

:; git push do
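To see the mechanics without a droplet, here is a minimal sketch of the whole loop using two local repositories (the directory names are made up, git 2.28+ is assumed for init -b, and the docker-compose hook is left out):

```shell
# "server" plays the role of the remote repo on the droplet
tmp=$(mktemp -d)
git init -q -b master "$tmp/server"
git -C "$tmp/server" config receive.denyCurrentBranch updateInstead

# local working copy, with the server repo added as a remote
git init -q -b master "$tmp/local"
cd "$tmp/local"
git remote add do "$tmp/server"
echo "v1" > app.txt
git add app.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "first deploy"

# pushing updates the server's checked-out working tree directly
git push -q do master
cat "$tmp/server/app.txt"  # → v1
```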

WordPress

This setup can be used with any application stack out there. I have been able to use it quite successfully to set up a WordPress site as well. Following the same instructions described above, here’s a sample docker-compose.yml file I use for a WordPress instance.


			wordpress:
			  image: wordpress
			  external_links:
			    - wordpress_db:mysql
			  environment:
			    WORDPRESS_DB_NAME: wp_blog
			  volumes:
			    - .:/var/www/html/wp-content/themes/mytheme
			    - ./uploads:/var/www/html/wp-content/uploads
			    - ./plugins:/var/www/html/wp-content/plugins
			  ports:
			    - "5000:80"
			

A couple of notes here:

In this setup, the docker-compose up -d step only needs to be run once. Since the container runs the stock WordPress image, it doesn’t need to be rebuilt with every code change. The post-update git hook is still used, however, to run an npm build step that compiles the css and js assets for the theme:


			#!/usr/bin/env bash
			npm prune && npm install
			npm run build

I used to use VVV for my local WordPress setup, but after switching over to docker, I can now truly guarantee that my site runs the same way locally and in production. The setup is also much simpler. I can focus on working on the theme, which is my main goal. Everything else is scripted and automated.

Conclusion

This new setup is very refreshing to me, as it greatly simplifies the deployment process. It is also readily scalable out of the box thanks to docker. The best part, however, is that I get to control every part of the process and not rely on some opaque architecture, like Heroku’s. All of this is achieved with very little sysadmin overhead on my part.

Publish static site with WordPress

When I first created tobiko, the use case I had in mind was small static sites or blogs for developers. I was inspired by Jekyll and Octopress, but did not want to deal with installing Ruby, and I wanted Node and Grunt to be the plumbing behind it.

After using tobiko for a while, I realized that there are certain limitations with maintaining a static blog, such as the inability to write any content on a mobile device because you’re stuck with using files to generate content. As the type of person whose ideas come and go readily, I often get frustrated not being able to put my thoughts down when I feel the rush of inspiration, only to find myself staring at the blank screen when I actually have the time to write.

As the WP-API project gained more traction, it dawned on me that I could take advantage of WordPress as a great CMS with an awesome admin UI, while still crafting and publishing the final front-end in a static manner.

The idea is pretty simple: content (regular posts or even custom post types) is created and managed on a WordPress instance. It is then pulled into tobiko at build time, when the site is generated. The WP content is placed in tobiko’s content tree, so regular Handlebars templates are still used to generate the final HTML.

The extra configuration for WordPress is as simple as:

wordpress: {
  apiRoot: 'http://your-wordpress-instance/wp-json',
  contents: [{
    postType: 'posts',
    folder: 'articles',
    template: 'article.hbs'
  }]
}

I have documented this feature in the tobiko README: https://github.com/tnguyen14/tobiko#wordpress.

This was a real eye-opening discovery for me. The WP-API project is really powerful in enabling the separation between a backend managed with WordPress (which is very delightful to use) and a front end (which can sometimes be painful to build in WordPress and PHP).

When used this way, tobiko provides all the benefits of a static site, i.e. fast, cache-able pages with cheap hosting, along with the benefits of a fully-featured and well-known CMS used for what it’s good at: managing content. I am now able to write my drafts on my phone (the WordPress Android app is excellent, and has come a long way), and then publish them later with a simple command from the terminal.