State of the Webmaker VM

Since my last post I’ve done a lot of work and gone through many iterations of the Webmaker Vagrant VM. I originally had the thought to put the Vagrant files right into webmaker-suite. We could then have used Vagrant’s automatic folder sharing to run the Webmaker apps in the VM while still being able to use IDEs and other tools on the host side. I got it all set up and Pomax offered to test it out, but unfortunately he rediscovered a bug in VirtualBox that was a major stopping point. VirtualBox’s shared folders on Windows use an outdated file system interface that restricts paths to 250 characters; anything longer than that and nothing will get created. If the VM was left set up that way, it would limit where people could clone the repository on their host OS. After some discussion with Pomax on IRC we decided that telling users they must clone the repository at a specific directory depth wasn’t an option. I had to come up with a different plan that would let us take advantage of the Linux file system while still exposing the Webmaker files to the host OS.

From my past work on the BigBlueButton project I was familiar with a Linux program called Samba. Samba allows Linux and Unix machines to expose a directory as a virtual Windows file server. I had the thought to break the Vagrant VM-specific files out of the webmaker-suite repository and place them in their own repository. The end result is that users will clone the webmaker-suite-vm repository and start up the VM. Once it is started they will clone the webmaker-suite package to a specific directory on the guest OS. This directory will be exposed via Samba as a Windows file server, and the user can then map a new network drive in Windows to \\{VM_IP}\webmaker to get access to all of the files.
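To make the idea a little more concrete, here is a rough sketch of what the guest-side share could look like. The share name, path, VM IP, and drive letter below are only placeholder examples, not the final configuration:

# On the guest OS: append a share definition for the webmaker directory
sudo tee -a /etc/samba/smb.conf <<'EOF'
[webmaker]
   path = /home/vagrant/webmaker
   browseable = yes
   read only = no
   guest ok = yes
EOF
sudo service smbd restart

# On the Windows host: map the share to a drive letter using the VM's private IP
net use W: \\192.168.33.10\webmaker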

It’s important to keep in mind that Samba is only useful for Windows hosts because it creates a Windows-specific file server. This isn’t really a problem, though, because Mac and Linux hosts don’t have the same problematic path length restrictions, so they will be able to use the original shared folder method without issue.

I worked away at the Samba configuration for a little while and finally got it working perfectly; I had basically finished the main development. Then disaster struck. Right as I got it working, Oracle released version 4.3 of VirtualBox and Vagrant released version 1.3.5 as well. In usual fashion the VirtualBox update had a bug. This bug breaks the creation of host-only network adapters by VirtualBox. There is a long discussion on the Vagrant repository about other people experiencing the issue, and a couple of potential workarounds have been posted, but unfortunately none of them have solved the problem for me. In order to get Samba to work the VM needs to have a private IP that only the host can connect to. In the past I achieved this with a host-only connection set up by Vagrant, but now when you try to create a private network the VM fails at startup. This bug has completely stalled any work or development from my end. Theoretically the VM should work, but I can’t get it running to actually test it. It is a rather frustrating predicament because it is completely out of my hands and I am basically at the mercy of Oracle.
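For anyone who wants to check whether their own VirtualBox install is affected, host-only adapter creation can be tested directly with VBoxManage, outside of Vagrant (assuming VBoxManage is on your PATH):

# Try to create a host-only adapter directly; on broken 4.3 installs this step fails
VBoxManage hostonlyif create

# List the host-only adapters that actually exist
VBoxManage list hostonlyifs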

Hopefully they fix it soon so I can get back to work.

Creating a VM for Webmaker

The full Webmaker suite is rather cumbersome and especially difficult to get running on Windows, so I’ve taken on the task, at the suggestion of Dave Humphrey, of creating a virtual machine to hopefully make things easier. Dave suggested using Vagrant because it is supposed to be fairly easy to deploy and use.

I’ve spent a couple of hours today looking into whether or not it could work for the Webmaker project and so far it looks promising. You can install whatever packages we need for Webmaker, set up all of the prerequisites, and then package it up into a Vagrant box so that other people can use it. I’ve already tested it with Elasticsearch and it worked perfectly, so it should be possible to extend it to include the other dependencies.

Vagrant does have disadvantages as well, though. The main disadvantage that I see is that any changes you make inside the VM aren’t saved after it is shut down. The only way to persist your changes is to package your VM’s state into a box and start the VM based off of your new box. I also suspect that the size of the VM could get out of hand quickly. The base VM was 302 MB and just installing Java and Elasticsearch raised that size to 518 MB.
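As a rough sketch, that repackaging step looks something like this (the box name and file name are just examples):

# Capture the running VM's current state as a new box file
vagrant package --output webmaker-suite.box

# Register the new box locally so a Vagrantfile can be based off of it
vagrant box add webmaker-suite webmaker-suite.box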

I think there’s definitely potential here, and it can be made to work in the Webmaker project.

Improving the MakeAPI healthcheck

For OSD600 we have to choose a bug from the list and try to make significant progress towards fixing it. I prefer working on server side issues so I selected Bug 864942. The bug involves modifying the existing MakeAPI /healthcheck endpoint so that it does Basic Authentication (now Hawk Authentication) and then also checks the health of the linked Elasticsearch (ES) and MongoDB (DB) components. The bug also requires that if /healthcheck?elb=true is requested it will return the existing 200 status code with no authentication required.

I decided to start off small because I’m new to nodejs and the Webmaker project, so I started with checking a property in the query string. I looked up the ExpressJS documentation, specifically the section on req.query. It turns out it is very easy to look for an “elb” property; all that is required is if (req.query.elb === "true") { // do something }. One part down.
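Assuming a MakeAPI instance running locally (the host and port here are only examples), the two request forms would look like this:

# Load balancer check: should return the existing 200 status with no credentials
curl "http://localhost:5000/healthcheck?elb=true"

# Full healthcheck: would require Hawk credentials once authentication is in place
curl "http://localhost:5000/healthcheck"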

The next thing I tackled was authenticating through HawkAuth. Unfortunately this caused some difficulties, because in order to have both an unauthenticated and an authenticated /healthcheck endpoint I have to break one of two conventions that the existing code follows. On one side I could add a /healthcheck/elb route and then authenticate the /healthcheck route so that it follows the existing authentication format. The problem with this solution is that the current path for elb is /healthcheck?elb=true, so any monitoring applications would have to be modified to work with the new URL format (/healthcheck/elb). The other possible solution is to perform the authentication in the /healthcheck endpoint method. The problem with this, though, is that all of the existing authentication is done in one file, server.js. It also means that I have to make quite a few extra changes so that I can include middleware.js. I decided to go with the second option because I feel that breaking the URL convention would be more potentially damaging to the other applications in the Webmaker ecosystem. I am interested to see if my approach will pass review because it’s definitely different from the previous functionality.

The first two parts were relatively simple to implement and didn’t take a whole lot of thought. The second half of the bug, though, is proving much more difficult and vexing.

I first tried to tackle the MongoDB stats. I talked with jp, the devops guy for the Webmaker project, in the #webmaker channel and he showed me the existing web apps, Opsview Core and New Relic, that they use to keep tabs on the state of the various nodes. For MongoDB they keep track of various processes, load, and swap. This poses an issue for my bug, though, because the only way to get that data is by having an agent running on each server that reports back to a web application (in my case nodejs), and that isn’t really a feasible solution in my situation. The other thing to consider is that the data is already being tracked separately from the MakeAPI nodes, so why bother adding duplicate information to the /healthcheck call when it’s not needed? So getting the DB stats was put on the back burner.

The last part is the Elasticsearch stats. When I was looking through the monitoring apps I noticed that ES monitoring wasn’t anywhere to be found. I asked jp about it and he seemed surprised that it was missing as well. This brought up the question, “Well, is ES monitoring even needed?” and it seems like at the moment it isn’t. I could always add it in, but if it’s not going to be used, why bother?

After my early success I had hit a roadblock. I asked Dave Humphrey what to do, because I was seemingly stuck with a bug that didn’t need to be fixed after all. He suggested that I get in touch with Jon Buckley and ask him what his thoughts were. I contacted him the next morning and he seemed stumped as well. He said that he needed to talk to jp himself and left it at that.

At this point I’m not really sure what to do, but I guess I’ll just wait it out.

Making a change to the BigBlueButton source with git

One of the biggest issues we have had in our work here at CDOT on the BigBlueButton team is proper management of git branches. It has never been fully clear what kind of process we should be following when creating branches, but hopefully I can lay out what has worked for me for a while now and stop more issues from appearing.

For the purposes of this blog “upstream” refers to this repository, https://github.com/bigbluebutton/bigbluebutton, and “origin” refers to this repository, https://github.com/SenecaCDOT-BigBlueButton/bigbluebutton. Also, as a general rule none of my commits are ever pushed or merged into the Seneca master branch. The master branch should just serve as a mirror to the upstream master branch.

Creating your remote reference
The first thing I do when I clone the Seneca repo is set up a remote reference with git to “https://github.com/bigbluebutton/bigbluebutton.git” and name it “upstream”. You can use the following command to achieve this:
git remote add upstream https://github.com/bigbluebutton/bigbluebutton.git

Creating your new branch
When I want to actually make a change, be it for a feature or bug fix, the first thing I do is create a branch for the specific issue I am trying to solve.
git checkout -b <branch-name> <parent-branch>
If I am trying to make a general change, such as fixing the displayed chat times in the client, I will choose a branch name like “fix-chat-times” and base the new branch off of the upstream master branch by using “upstream/master” for the parent branch.
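For example, the chat time fix above would be created like this (after fetching upstream so the reference exists):

git fetch upstream
git checkout -b fix-chat-times upstream/master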

Creating your commit
Once you have reached a natural stopping point (e.g. finished the feature) you now have to commit and push your changes to github. The first thing you want to do is verify that only the proper files have been edited. You can do this by using "git status", which will display all of the files that have been created, modified, or deleted. You can then use "git add <file-path>" (or "git rm <file-path>") to include the relevant files in your commit. If there are modified files that you want to revert back to their original contents, you can use "git checkout -- <file-path>" to get rid of your local changes. Once you think you have the required files added to your commit you should verify them again with a "git status". If everything is correct you can now create your commit with git commit -m "<commit-message>".
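Put together, the sequence looks something like this (the file paths and commit message are just examples):

git status                               # list created, modified, and deleted files
git add path/to/changed-file             # stage the files that belong in the commit
git checkout -- path/to/unwanted-change  # discard local edits you don't want to keep
git status                               # confirm only the right files are staged
git commit -m "Fix displayed chat times in the client"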

Pushing your commits
Working on your local machine is fine, but eventually you will want to push your local commits to github for safekeeping. In order to push your commits to your remote repository you must tell git two things. The first is what repo to push to, in our case this is "origin". The second is what name you want to use for the remote branch; for the purposes of this document we will use the same name for both the local and remote branches. The command to push is as follows:
git push -u origin <branch-name>
The “-u” option tells git to link the branches so that you can just do “git push” in the future without specifically referencing where it should be pushed to.

Merging upstream changes
If you notice on github that someone has pushed a commit to upstream/master that you need, you will want to include those changes in your work. The first thing you need to do is make git update its local copy of upstream, and to accomplish that you run the command git fetch upstream. You then need to merge in the changes. First, make sure that you are on the branch that you want to merge into. Second, merge by using the command git merge upstream/master.
If git tells you that there were conflicts in the merge you will need to manually resolve them. To do this you can use “git status” to see the conflicting files and deal with them. Once you have resolved a file use “git add” to apply the changes to the commit. Once all of the issues have been resolved you should commit your changes.
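In shell terms, the whole update looks roughly like this (the branch and file names are just examples):

git fetch upstream              # update the local copy of the upstream repository
git checkout fix-chat-times     # switch to the branch you want to bring up to date
git merge upstream/master       # merge in the new upstream commits

# If the merge reports conflicts:
git status                      # shows which files are conflicting
# ...edit each conflicting file to resolve the conflict markers...
git add path/to/resolved-file   # mark each file as resolved
git commit                      # finish the merge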

Submitting a pull request
Once you have your feature or bug fix finished and pushed to a github branch, you need to submit a pull request so your changes can be merged in. One of the most important things to remember is that your pull request should be able to merge cleanly. To make sure this is the case, you should fetch and merge any recent upstream changes before submitting your pull request. When submitting your request, make sure to include all of the relevant information about your changes in the accompanying message.

Finishing up
You can now safely delete your branch from github or leave it there to stagnate.

Kansas City: Google Fiber is amazing

Dale and I are in Kansas City, Missouri for the Mozilla Hackfest “Hacking the Gigabit City”. We just arrived today, and since the actual event doesn’t start till tomorrow we went to check out the Google Fiber office.

Google is AMAZING. The first thing we did was get on the Chromebooks and test the speed of their network with speedtest.net, and the results were awesome. They gave us 600 Mb/s upstream and downstream. I can’t wait till they finally come to Canada.

First business trip out of the country

Three days ago I discovered that I was invited to go to Kansas City, Missouri for a Mozilla Hackfest next Thursday (March 21). Even though it’s quite short notice, I’m stoked to be going to the “Gigabit City”. I did run into one little hiccup though: I have no passport. I got one years ago, but it expired two years ago. I had to scramble to get all of the required documentation together, but I was able to file an application in time to pick up my passport before I have to leave.

I’m not quite sure what I’m going to actually be doing while I’m there, but it will be a great learning experience I’m sure. I think I might be most excited about the food I’ll get a chance to eat. Kansas City is renowned for its BBQ and I’m hoping to be able to sample some of the local cuisine.

Finally put in a pull request with our accessibility work

Finally, after months and months of fixes to our original attempt at making the client accessible, we have submitted another pull request. It included a total of 67 commits and hopefully it functions correctly.

I’m going to go over it again and make sure everything is working correctly.