A Version Controlled $HOME

As mentioned in the overview of vcsh, I have tried putting all my important configuration files in a single repository and later moved to a collection of symbolic links. Either way, it was a pain to keep my accounts set up to taste, so I figured I'd give vcsh a try.

It took me a while to understand the role that myrepos (formerly known as mr; the command is still called mr) played in the whole story, though. As using vcsh tends to result in a largish number of repositories, a tool like mr comes in very handy. Actually, you may even end up maintaining most of your repositories with it, but let's keep that for another post.

In the following, I will bootstrap vcsh to use mr under the hood to manage all the configuration "modules" that I will sooner or later put under version control. During the bootstrap, I will create one module to manage the configuration of all the VCS repositories that I am tracking. Most of mine will use vcsh, but you can track any type of VCS repository that mr supports.

Configurations of tracked repositories are kept in available.d/ so that I can easily share them among my accounts via the my-repos module. Each account then activates just the ones it needs with symlinks in enabled.d/.

First, I initialize the my-repos module and add an mr configuration file that reads all enabled configuration snippets, if any.

vcsh init my-repos
cat << EOF > $HOME/.mrconfig
[DEFAULT]
include = cat \${XDG_CONFIG_HOME:-\$HOME/.config}/my-repos/enabled.d/* 2>/dev/null || true
EOF
vcsh my-repos add -f $HOME/.mrconfig
vcsh my-repos commit
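
Before moving on, a quick sanity check never hurts: vcsh list should now show the new module.

vcsh list
my-repos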

Next, I add a remote on GitLab and push the my-repos repository with vcsh so I can easily share it across accounts.

vcsh my-repos remote add origin git@gitlab.com:dotter/my-repos.git
vcsh my-repos push -u origin master

Then, I set up the infrastructure that supports the distinction between available.d/ and enabled.d/ configurations.

mkdir -p ${XDG_CONFIG_HOME:-$HOME/.config}/my-repos/available.d
mkdir -p ${XDG_CONFIG_HOME:-$HOME/.config}/my-repos/enabled.d

Registering repositories via mr register adds a stanza to $HOME/.mrconfig, which is not what I want. The my-repos module should have its own configuration snippet, so I create one and add it to the module:

cat << EOF > ${XDG_CONFIG_HOME:-$HOME/.config}/my-repos/available.d/my-repos.vcsh
[\${XDG_CONFIG_HOME:-\$HOME/.config}/vcsh/repo.d/my-repos.git]
checkout = vcsh clone https://gitlab.com/dotter/my-repos.git my-repos
EOF
vcsh my-repos add -f ${XDG_CONFIG_HOME:-$HOME/.config}/my-repos/available.d/my-repos.vcsh
vcsh my-repos commit

Finally, I activate the my-repos module and push the latest commit to the upstream repository with mr.

ln -s ../available.d/my-repos.vcsh ${XDG_CONFIG_HOME:-$HOME/.config}/my-repos/enabled.d
mr push

After this, you just make mr configuration snippets for all of your repositories available in available.d/ and enable the ones you need on each of your accounts with symlinks in enabled.d/. Remember, they don't have to be vcsh repositories; see the example below.
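
As a hypothetical illustration (the path and URL are made up), a snippet for a plain git repository, say available.d/my-project.git, could contain nothing more than

[$HOME/projects/my-project]
checkout = git clone 'git@gitlab.com:dotter/my-project.git' 'my-project'

Symlink it into enabled.d/ on an account and mr checkout takes care of the rest.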

Is Anybody Reading T&Cs?

The Norwegian Consumer Council is. Or rather, they were, because they have finished reading the Terms and Conditions (T&Cs) of thirty-three popular mobile phone apps. It took a whopping 31 hours, 49 minutes and 11 seconds to read the more than 250,000 words of the apps' T&Cs.

Thirty-three, well 32.5 to be precise, is the average number of apps Norwegians have installed (as of March 2013). That's a bit more than the global average of 26.2 and less than the 36.4 in Japan.

Why did the Consumer Council bother? They wanted to make a point. Nobody can be expected to have read all those terms, let alone understood those tomes of legalese, before clicking that "Accept" button. Yet, at the same time, clicking that button often gives the app vendors the rights to do pretty much anything they please. Often the vendor can even change the T&Cs without notice. You've probably accepted that, too.

Don't like that and other onslaughts on your freedom and privacy? Let the Norwegian Consumer Council know. Change has got to start somewhere.

Running Dockerized GUI Applications

I had seen Jessie Frazelle's blog post on this before. Now I had to edit some files with a very non-free wire-frame mock-up GUI tool that we use for a project at the office. There was a binary package for Ubuntu/Debian, so I figured I'd stuff it in a Docker container to be on the safe side and then just

docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY image command

to give it a spin first and rerun with the files volume-mounted at a suitable location to get things over with.

Alas, no such luck. All I got was a

No protocol specified
Error: Can't open display: unix:0.0

A fair bit of reading later, I finally came across a well-informed solution on Stack Overflow (SO) that, after a bit of tinkering, solved things for me.

The Sledgehammer Approach

Plain and simple, turn off access control to the X server with

xhost +

Anyone who can access your X server can now display whatever they like on your display. That includes that docker container (as long as you volume-mount the /tmp/.X11-unix socket). If you're the only user on your system and your X is run with the -nolisten tcp option, that may be acceptable.
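
If you want something slightly less blunt without resorting to cookie juggling, waiving access control for local (non-network) connections only is an option:

xhost +local:

Remote hosts stay locked out, yet local clients, including the container talking through the unix socket, get in.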

Just turning off the access control makes the above docker command work as expected. Great, why bother beyond this? Well, for one thing, you may not be the only user on the system. Maybe your X server does listen for TCP connections.

In that case, turn the access controls back on with

xhost -

Fine-Grained X Server Access Control

It's very likely that there is an $XAUTHORITY environment variable set in your current desktop session. In case there isn't, check if there's a .Xauthority file in your $HOME directory. Yes? Good, then we can use that to allow the docker container access to the X server.
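
A quick check, using the same fallback that X clients themselves use:

ls -l ${XAUTHORITY:-$HOME/.Xauthority}

If that turns up a file, hand it to the container: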

docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v $XAUTHORITY:$XAUTHORITY -e XAUTHORITY=$XAUTHORITY \
    -h $(hostname) \
    -e DISPLAY=$DISPLAY image command

Works like a charm, doesn't it? Thing is, you don't really want to run two containers with the same host name. The Docker engine will let you (as long as the container names differ), but for a mere human like myself it is just too confusing. The problem is, the X server may just be checking against the host name that's embedded in that $XAUTHORITY file, so you can't simply drop the -h $(hostname) option either. Now what?

The solution on SO basically gives the docker container a "wildcard" copy of your X11 session's authorisation cookie. With such a cookie, the X server no longer gives a hoot about the host name. However, the SO solution creates that copy in a potentially insecure manner; this comes about because the -f option to xauth requires the target file to exist. Let's do this a bit more carefully and robustly.

# Use the session's cookie file, falling back to the X default.
SESSIONXAUTH=${XAUTHORITY:-$HOME/.Xauthority}
DOCKER_XAUTH=${SESSIONXAUTH}.docker
# Copying preserves ownership, permissions and timestamps.
cp --preserve=all "$SESSIONXAUTH" "$DOCKER_XAUTH"
# Replace family and address with the wildcard family (ffff) and an
# empty address, then merge the result into the copy.
echo "ffff 0000  $(xauth nlist "$DISPLAY" | cut -d\  -f4-)" \
    | xauth -f "$DOCKER_XAUTH" nmerge -

The above puts the file next to your regular authorisation file and takes care to preserve permissions. It also removes the host's name so that the file does not leak this information to the container.

Now you should have no problem running your dockerized GUI application with a command-line like

docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v $DOCKER_XAUTH:$DOCKER_XAUTH -e XAUTHORITY=$DOCKER_XAUTH \
    -e DISPLAY=$DISPLAY image command

While the above tries to secure things as much as possible, the GUI application that runs in the container can still trash the host's X server and exploit any holes in it. Caveat emptor.

Further Tinkering

Please note that the $SESSIONXAUTH file is recreated with a different cookie every time you start your X session. Therefore, you need to update the $DOCKER_XAUTH file whenever that file changes. Also, that docker invocation is getting rather long. Looks like it's time for a little script.
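
Something along the following lines should do. Consider it an untested sketch: the freshness check and the script's calling convention are my own invention, so adjust to taste.

#!/bin/sh
# Refresh the wildcard cookie if needed and run a dockerized GUI app.
# Usage: docker-x11-run image [command...]

SESSIONXAUTH=${XAUTHORITY:-$HOME/.Xauthority}
DOCKER_XAUTH=${SESSIONXAUTH}.docker

# Recreate the wildcard copy when it is missing or older than the
# session cookie (which changes every time the X session starts).
if [ ! -e "$DOCKER_XAUTH" ] || [ "$SESSIONXAUTH" -nt "$DOCKER_XAUTH" ]; then
    cp --preserve=all "$SESSIONXAUTH" "$DOCKER_XAUTH"
    echo "ffff 0000  $(xauth nlist "$DISPLAY" | cut -d\  -f4-)" \
        | xauth -f "$DOCKER_XAUTH" nmerge -
fi

exec docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v "$DOCKER_XAUTH:$DOCKER_XAUTH" -e XAUTHORITY="$DOCKER_XAUTH" \
    -e DISPLAY="$DISPLAY" "$@"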

That's it for today.

See also the Xsecurity(7) manual page for more information on the various access methods and their security implications.

GitLab Pages with Nikola

The announcement of GitLab Pages piqued my interest. I had set up GitHub Pages about two years ago. Not happy to hand control of the rendering to GitHub's Jekyll, I had used Pelican for my pages. That led to an awkward workflow with two branches, one containing the sources, the other the static site, glued together with ghp-import.

GitLab Pages promised rendering via GitLab CI, which puts the whole rendering process under the user's control. A quick look through the example projects showed that both Jekyll and Pelican are supported, as well as a growing pile of other static site generators.

Since it had been two years since I first set up my GitHub Pages, I had a look at a few of the examples. A web search on static site generators turned up even more. I was curious about Nikola. At the time of my search it was not yet listed among the examples. I did find a merge request though, and one thing led to another.

I suggested a fix for the locale issue, proposed a way to improve build times, and finally ended up building my own Nikola Docker image.
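
For the record, the CI configuration for a Nikola site can be pleasantly small. The sketch below assumes you install Nikola from PyPI rather than using a prebuilt image, and that conf.py sets OUTPUT_FOLDER to "public", the directory GitLab Pages expects:

image: python:3

pages:
  script:
    - pip install nikola
    - nikola build
  artifacts:
    paths:
      - public
  only:
    - master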

Now, I'm off exploring Nikola. Interesting findings should make it to this blog, eventually.

Gadget Reverse Engineering: Capturing

Capturing USB packets in my Qemu-based Windows virtual machine (VM) did not go as swimmingly as I had hoped. The device showed up in the Devices and Printers overview, but it was marked with a warning sign. On top of that, the vendor-provided application software completely ignored it. I spent a little time trying to figure out what was wrong but gave up; I wanted to capture packets rather than fix VM issues. So, grudgingly, I went back to my VirtualBox-based Windows VM. There, both Windows and the vendor application worked fine with my USB sports watch.

To capture packets you could install USBPcap in the Windows VM, but why bother when you can use Wireshark on the host machine [1]? Not only can you capture USB packets, you can also capture network traffic. This may matter in my case because the vendor application is really just a conduit between my watch and their database/servers.

Installing Wireshark

Wireshark is packaged for any self-respecting Linux distribution, so just go ahead and install it. Command-line aficionados may like its tshark utility, which may be packaged separately for your distro.

$ sudo apt-get install wireshark tshark

While neither needs any extra help to capture network packets, you need to load the usbmon kernel module in order to capture USB packets. You can add it to /etc/modules so the module gets loaded at boot-time, or run the command below when you first need it.

$ sudo modprobe usbmon
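
For the boot-time option, appending the module name to /etc/modules is all it takes:

$ echo usbmon | sudo tee -a /etc/modules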

While this will set you up to do all the capturing, the various Linux distributions all have their own approaches to who exactly is allowed to capture packets. To keep it simple, I'll just run wireshark with administrative privileges here [2].

Capturing Packets

So I've started wireshark with

$ sudo wireshark

and dismissed the warnings. Now I am ready to select the interfaces to capture on. As I don't want to get inundated with boatloads of packets that I am not interested in, selecting the any interface is not a good idea. I definitely want the USB interface that I connect my watch on. You can find out which one that is with something like

$ lsusb -d 1493:
Bus 006 Device 002: ID 1493:0010 Suunto

Replace the 1493 vendor ID with your device's vendor ID. Append a product ID after the colon if you wish to narrow things down further. The Bus number in the output corresponds to the USB interface you want to capture on, so that would be usbmon6 in this case.
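
With the bus number in hand, command-line fans could start a capture right away; the output file name is just an example.

$ sudo tshark -i usbmon6 -w watch.pcap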

Footnotes

[1] Wireshark is available for Windows too but it does not support capturing USB traffic there.
[2] A number of distributions already take extra precautions that allow selected users to perform network packet captures. However, for USB packet captures you will still(?) need root privileges. The safest and suggested approach is capturing with the dumpcap command-line utility. Its captures can then be analyzed with wireshark by any user.

Gadget Reverse Engineering Environment

I had been relying on a friendly neighbour to help me capture USB traffic for a sports watch [1] I had bought a while back. While that sort of worked, it was inconvenient and didn't feel quite right. I mean, why make someone else use non-free software on your behalf if you're not prepared to do so yourself?

About 11.5 minutes into his TEDx presentation, Richard Stallman himself mentioned that bringing non-free software to school is okay for a "reverse engineering exercise". That finally made me have a go at putting together a setup where I could do my own USB captures. For the record, all of the below is done on Debian GNU/Linux jessie.

Getting A Usable Windows Environment

Okay, okay, get back on your chair, you ROTFL folks. "Usable Windows" may be an oxymoron for you, but I'm talking about an environment that lets me use the vendor-provided application software for my watch. It turns out that there are several Windows virtual machine images that meet these needs just fine.

I downloaded Windows 7 with IE 10 with

$ wget https://az412801.vo.msecnd.net/vhd/VMBuild_20131127/VirtualBox/IE10_Win7/Linux/IE10.Win7.For.LinuxVirtualBox.txt
$ wget -c -i IE10.Win7.For.LinuxVirtualBox.txt
$ ls IE10.Win7.For.LinuxVirtualBox.part*
IE10.Win7.For.LinuxVirtualBox.part1.sfx
IE10.Win7.For.LinuxVirtualBox.part2.rar
IE10.Win7.For.LinuxVirtualBox.part3.rar
IE10.Win7.For.LinuxVirtualBox.part4.rar

Depending on your network connection that may take a while, a long while: it downloads four parts of an archive totalling about 4GB. By the way, the second command can pick up where a previous download stopped in case of network trouble. Make sure you check the MD5 checksums [2].
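
Checking boils down to comparing the output of md5sum with the values published on the download page:

$ md5sum IE10.Win7.For.LinuxVirtualBox.part*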

The first part, the *.sfx file, is a self-extracting rar file, on i386 systems at least. Now, I wouldn't blindly run that, even if I were using such a system, so I suggest you get the unrar utility [3] to do that job. While not exactly free, at least the unrar source code can be audited.

$ sudoedit /etc/apt/sources.list                      # enable non-free
$ sudo apt-get update
$ sudo apt-get install unrar
$ unrar e IE10.Win7.For.LinuxVirtualBox.part1.sfx     # empty lines removed
UNRAR 5.00 beta 8 freeware      Copyright (c) 1993-2013 Alexander Roshal
Extracting from IE10.Win7.For.LinuxVirtualBox.part1.sfx
Extracting  IE10 - Win7.ova                                           29%
Extracting from IE10.Win7.For.LinuxVirtualBox.part2.rar
...         IE10 - Win7.ova                                           58%
Extracting from IE10.Win7.For.LinuxVirtualBox.part3.rar
...         IE10 - Win7.ova                                           88%
Extracting from IE10.Win7.For.LinuxVirtualBox.part4.rar
...         IE10 - Win7.ova                                           OK
All OK
$ sudo apt-get purge unrar
$ sudoedit /etc/apt/sources.list                      # disable non-free

So now you have an IE10 - Win7.ova file for use with VirtualBox. Just out of curiosity, I checked what kind of file that was.

$ file "IE10 - Win7.ova"
IE10 - Win7.ova: POSIX tar archive (GNU)
$ tar tf "IE10 - Win7.ova"
IE10 - Win7.ovf
IE10 - Win7-disk1.vmdk

As Qemu can handle *.vmdk files, you should be able to use that instead of VirtualBox. We'll get to Running Windows Under Qemu after the following VirtualBox excursion.

Running Windows Under VirtualBox

Debian has moved its virtualbox package to the contrib area. You now need a non-free compiler to build the BIOS :-(. More bad news: to use USB you need a closed-source extension :-((, available from the VirtualBox download page. First, let's install virtualbox [4].

$ sudoedit /etc/apt/sources.list              # to activate contrib
$ sudo apt-get update
$ sudo apt-get install virtualbox virtualbox-dkms
$ sudoedit /etc/apt/sources.list              # to deactivate contrib

Next, the extension pack. Adjust version numbers for your version of virtualbox and what's available for download. Also, make sure to verify the checksum.

$ wget http://download.virtualbox.org/virtualbox/4.3.18/Oracle_VM_VirtualBox_Extension_Pack-4.3.18-96516.vbox-extpack
$ sha256sum Oracle_VM_VirtualBox_Extension_Pack-4.3.18-96516.vbox-extpack
b965c3565e7933bc61019d2992f4da084944cfd9e809fbeaff330f4743d47537  Oracle_VM_VirtualBox_Extension_Pack-4.3.18-96516.vbox-extpack
$ sudo vboxmanage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.3.18-96516.vbox-extpack
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Successfully installed "Oracle VM VirtualBox Extension Pack".

With that out of the way, we are ready to import the appliance.

$ vboxmanage import "IE10 - Win7.ova"
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Interpreting /absolute/path/to/IE10 - Win7.ova...
OK.
Disks:  vmdisk1       136365211648    -1      http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized       IE10 - Win7-disk1.vmdk  -1      -1
Virtual system 0:
 0: Suggested OS type: "Windows7"
    (change with "--vsys 0 --ostype <type>"; use "list ostypes" to list all possible values)
 1: Suggested VM name "IE10 - Win7"
    (change with "--vsys 0 --vmname <name>")
 2: Number of CPUs: 1
    (change with "--vsys 0 --cpus <n>")
 3: Guest memory: 512 MB
    (change with "--vsys 0 --memory <MB>")
 4: Sound card (appliance expects "", can change on import)
    (disable with "--vsys 0 --unit 4 --ignore")
 5: USB controller
    (disable with "--vsys 0 --unit 5 --ignore")
 6: Network adapter: orig NAT, config 3, extra slot=0;type=NAT
 7: CD-ROM
    (disable with "--vsys 0 --unit 7 --ignore")
 8: IDE controller, type PIIX4
    (disable with "--vsys 0 --unit 8 --ignore")
 9: IDE controller, type PIIX4
    (disable with "--vsys 0 --unit 9 --ignore")
10: SATA controller, type AHCI
    (disable with "--vsys 0 --unit 10 --ignore")
11: Hard disk image: source image=IE10 - Win7-disk1.vmdk, target path=/absolute/path/to/IE10 - Win7/IE10 - Win7-disk1.vmdk, controller=8;channel=0
    (change target path with "--vsys 0 --unit 11 --disk path";
    disable with "--vsys 0 --unit 11 --ignore")
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Successfully imported the appliance.

Now we have an IE10 - Win7 virtual machine, located in a directory of the same name. You can start it with:

$ vboxmanage startvm "IE10 - Win7" --type sdl

The --type sdl is needed because I didn't install the VirtualBox GUI client (virtualbox-qt). That started Windows 7 without a hitch, but it does not give you access to any USB devices yet. First, make sure you have been added to the vboxusers group [5]. Next, add a usbfilter for your virtual machine. For my sports watch the following will do the trick; adjust the vendor ID, add a product ID or even a serial number to suit your needs.

$ vboxmanage usbfilter add 0 --target "IE10 - Win7" \
    --name Suunto --action hold --vendorid 1493

After restarting the VM, you should be able to find matching USB devices via the Devices and Printers entry in the start menu.

Running Windows Under Qemu

I'm not particularly happy about using a closed-source extension pack for VirtualBox. Moreover, I'm not too thrilled about using any software outside Debian's main area, like VirtualBox itself. Having seen that IE10 - Win7.ova is just a simple tar archive with a *.vmdk file that should be usable with qemu, I decided to give that a spin.

First, extract the *.vmdk file with

$ tar xvf "IE10 - Win7.ova"
IE10 - Win7.ovf
IE10 - Win7-disk1.vmdk

We don't care for the *.ovf file, so feel free to trash it. Next, create a regular qcow2 image with the *.vmdk file as a backing file. This keeps the *.vmdk file pristine so you can reuse it to create additional images. It also works around a read-only issue with *.vmdk files when starting the VM.

$ qemu-img create -f qcow2 -o backing_file="IE10 - Win7-disk1.vmdk" \
    ie10win7.qcow2
Formatting 'ie10win7.qcow2', fmt=qcow2 size=136365211648 backing_file='IE10 - Win7-disk1.vmdk' encryption=off cluster_size=65536 lazy_refcounts=off

Now, we can start the Windows VM using the ie10win7.qcow2 file. To make the same sports watch as before available to the VM, the -usbdevice option is used. Adjust to needs and taste.

$ qemu-system-x86_64 -enable-kvm -m 512 ie10win7.qcow2 \
    -usbdevice "host:1493:*"

The above assumes you have qemu installed, of course. If you also install qemu-kvm and your hardware supports it, you can use the following shorter command-line instead.

$ kvm -m 512 ie10win7.qcow2 -usbdevice "host:1493:*"

Getting Rid Of VirtualBox

Choosing to sacrifice as little freedom as possible, I decided to go with qemu for my reverse engineering needs. Hence, I no longer have any use for VirtualBox. Time to clean up then. To undo the steps from Running Windows Under VirtualBox:

$ vboxmanage usbfilter remove 0 --target "IE10 - Win7"
$ vboxmanage unregistervm "IE10 - Win7" --delete
$ sudo vboxmanage extpack uninstall "Oracle VM VirtualBox Extension Pack"
$ rm Oracle_VM_VirtualBox_Extension_Pack-4.3.18-96516.vbox-extpack
$ sudo apt-get purge virtualbox virtualbox-dkms

I'm off now, capturing and exploring the communication between my sports watch and the vendor's non-free application software. That software is also intricately linked with a website that very much wants to get a hold of my data.

Footnotes

[1] The watch is only supported on Windows and MacOS and its maker's commitment to Linux support so far consists of empty promises. There is a reverse engineering effort for this and related watches but mine came with a firmware version that was considered "obsolete" already at the time of purchase.
[2] I have suggested they add text files with checksums for all the parts of each VM. That would make checking these checksums a much simpler chore.
[3] The Debian unrar-free package does not support *.sfx files. You need the unrar package from the non-free area. You can remove it after you have extracted the archive ;-)
[4] I have my APT set up so it does not install recommended packages. Meaning that in this case the virtualbox-qt GUI will not be pulled in and we can stick to the VirtualBox command-line interface.
[5] If you add yourself to the vboxusers group at this point, you can make it effective immediately with the sg command. That way you don't have to log out and back in again.

GitHub User Pages

I created a repository for my GitHub User Pages ten days ago. I only "set up shop" three days ago and pushed my site for the first time today. What took me so long?

Making up my mind. Sure, I could have done that before creating the repository. As a matter of fact, I did. I was going to use a Jekyll-backed site and have GitHub do the publishing for me. That is, until I did a web search on the difference between Jekyll categories and tags. One thing led to another. The search results made me take a look at JekyllBootstrap, which in turn had me head over to the ruhoh documentation. Another web search, on Jekyll versus ruhoh, led me to a post that mentioned Pelican and more.

Digesting all that information and trying out a couple of things took some time. Before I really knew it more than a week had gone by. I had changed my mind and chosen another static site generator.

Choosing A Static Site Generator

Actually, you don't really have to. Choose a site generator, that is. You can just write plain HTML yourself and push that to your GitHub User Pages. But writing plain HTML distracts from writing content, quite a bit actually, and that's where a generator comes in handy. You write content in a much simpler markup language and have the generator convert that to HTML for you. Commit the HTML, push, et voilà, your site is up.

So, of the four I mentioned before, which one made my cut, and why? JekyllBootstrap and ruhoh didn't make the cut because they are not (yet?) packaged for Debian testing/unstable. I have been using that for my day-to-day computing for ages, both at home and at the office. Having the Debian maintainers look after my needs cuts down on the amount of work I have to do.

That leaves us with Jekyll and Pelican. A Jekyll-based site has the advantage that you can simply keep it in your master branch and push; GitHub takes care of the conversion for you. Less work for me, right? Yes, but at the cost of no longer being in control of my own site. The GitHub conversion process is not just Jekyll: it has a few GitHub-specific tweaks thrown in, which may result in a different site than you had in mind. These tweaks may change at GitHub's whim, and if you want to keep up with them, well, you're in the same boat as with JekyllBootstrap and ruhoh. You will have to do the legwork yourself.

So, let's cut out the GitHub middleman and use Jekyll to convert to HTML ourselves, you say. Good thinking! Perfectly good approach. But once you go down that path, any static site generator will do. That makes Pelican just as good an alternative as Jekyll. So which one shall it be, Jekyll or Pelican? Both come with a pile of plugins and heaps of themes. Why prefer one over the other? In the end, it is really a matter of personal preference.

I went with Pelican because it's implemented in Python, a language I've been meaning to learn. Jekyll is written in Ruby, in case you are wondering.

Setting Up Shop

With my choice of generator out of the way, it's time to get going. By the way, this mostly follows what is documented in the Pelican Quickstart. First things first:

$ sudo apt install python-pelican

When that's done, it's simply a matter of running

$ pelican-quickstart

in a suitable location and answering the questions. For a brand new site holding my GitHub user pages, I set the URL prefix to https://paddy-hack.github.io and said yes to uploading with GitHub Pages. I also noted that this will be my personal page. Next,

$ git init
$ git add .
$ git commit

to record my starting point, and I'm ready to add my first post below the content/ directory. Either reStructuredText or Markdown will do; just use a file extension that matches the markup used in the file. For reStructuredText that would be something like

$ editor content/2014-08-03-setting-up-shop.rst
$ git add .
$ git commit
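
In case you have never written a Pelican post, a minimal reStructuredText one looks something like this; the title, date and tags are, of course, just examples.

Setting up shop
###############

:date: 2014-08-03
:tags: github, pelican

A few words on how this site came to be.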

Previewing And Publishing

Before publishing the site, let's preview it, just to make sure everything is all right. No problem:

$ pelican -s pelicanconf.py
WARNING: Feeds generated without SITEURL set properly may not be valid
Done: Processed 1 article(s), 0 draft(s) and 0 page(s) in 0.36 seconds.
$ (cd output; python -m pelican.server)

and fire up a web browser to look at http://localhost:8000/. The warning can be ignored for previews. There's a bit of tweaking still to be done, but that's for later. Kill the pelican.server with Ctrl-C and publish for real:

$ pelican -s publishconf.py
Done: Processed 1 article(s), 0 draft(s) and 0 page(s) in 0.36 seconds.

No warning this time. Good! The output/ directory now contains the site. This is what has to be pushed to GitHub on a master branch to make it my GitHub user pages. The simplest way to do so is to put that directory in a repository of its own. The drawback of this approach is that it separates the source from the site. What I much prefer is to keep the site's source together with the site that gets pushed to GitHub. This is made possible, and simple, by GitHub Pages Import, which is just a single Python script with no other dependencies.

I have cloned the GitHub Pages Import repository and added a symlink to the script in ~/bin/.

$ cd ~/code
$ git clone https://github.com/davisp/ghp-import.git
$ cd ~/bin
$ ln -s ../code/ghp-import/ghp-import

Now I can use ghp-import to do the heavy lifting of tracking my site in a master branch while I maintain its source in a source branch. Before running ghp-import for the first time, move the current master branch to a safe place: source. This is important because ghp-import destroys the target branch, like totally, utterly and completely.

$ git branch -m master source
$ ghp-import -b master output

The last step is setting my repository's remote and pushing.

$ git remote add origin https://github.com/paddy-hack/paddy-hack.github.io.git
$ git push origin master
$ git push origin source

That last line makes sure that everyone can follow the site in its source form as well.

Done!