Apr 1, 2015 - Using keywords as functions for your own container in ClojureScript

In Clojure and ClojureScript, a common, maybe even idiomatic way to access values from a map is using this form:

(:mykey mymap)

We are using a keyword as the function to access mymap. What if I want to create my own container which can be accessed in a similar way? Python programmers can implement __getattr__ or __getitem__ and do similar things. ClojureScript is a powerful and flexible language, so I should be able to do that, right?

Let's dig into the source code to see what Keyword does.

(deftype Keyword
...
IFn
  (-invoke [kw coll]
    (get coll kw))
  (-invoke [kw coll not-found]
    (get coll kw not-found))
...

Ok, so Keyword implements IFn, which makes it callable as a function. There are two signatures, both of which call get. Let's check that next:

      (cond
        (implements? ILookup o)
        (-lookup ^not-native o k)
...
        (implements? ILookup o)
        (-lookup ^not-native o k not-found)

It turns out that get checks whether the container (o) implements the ILookup protocol, and if so calls its -lookup methods. So what we need to do is create our own container type which implements that protocol. Let's try that out.

(deftype EntryWrapper [data-map]
  ILookup
  (-lookup [o k]
    [k (get data-map k)])
  (-lookup [o k not-found]
    [k (get data-map k not-found)]))

This code creates a new type EntryWrapper. It's pretty stupid: it wraps an ordinary map and when its -lookup methods are called, it delegates to the map it contains. But unlike an ordinary map, the lookup returns a vector containing the key and the value (this is actually what find already does, but bear with me :)).

Now, because Keyword uses get, and get uses the ILookup methods, we should be able to fetch [key value] vectors like this:

(def ew (EntryWrapper. {:a 1}))
(:a ew)
->
[:a 1]

Nice, but not that useful. Now let's try to do something more interesting: a half-assed implementation of JavaScript-style prototype-based inheritance.

(declare proto-get)

(deftype ProtoContainer [values proto]
  ILookup
  (-lookup [o k]
    (proto-get (with-meta values {::proto proto}) k nil))
  (-lookup [o k not-found]
    (proto-get (with-meta values {::proto proto}) k not-found)))

Above we declared a function proto-get which we will implement later. Then we implemented a new type ProtoContainer which again implements the ILookup protocol. It takes a map of values and a prototype as its "constructor parameters". All the interesting stuff has been delegated to proto-get. Let's see what it does:

(defn proto-get [values k not-found]
  (if values
    (let [v (get values k)]
      (if (not (nil? v))
        v
        (recur (::proto (meta values)) k not-found)))
    not-found))

Nice and simple, it just uses the get function to retrieve a value from the map of values. If it isn't there, the function calls itself recursively with the prototype of our map.

(defn proto-container
  ([values] (ProtoContainer. values nil))
  ([values proto] (ProtoContainer. values proto)))

Now let's see if the inheritance works. The following example is a variation of an example from Steve Yegge's Universal Design Pattern article and an example in the Joy of Clojure book in Chapter 9.

(def cat (proto-container {:likes-dogs true :likes-other-cats true}))
(def morris (proto-container {:name "Morris"} cat))
(:name morris)
-> "Morris"
(:likes-dogs morris)
-> true

Above we created a prototype cat and an instance of cat called morris. In addition to the base properties :likes-dogs and :likes-other-cats, Morris has a property called :name.

Next Morris has an encounter with a nasty dog and starts hating dogs:

(def post-traumatic-morris (proto-container {:likes-dogs false} morris))
(:name post-traumatic-morris)
-> "Morris"
(:likes-dogs post-traumatic-morris)
-> false
(:likes-dogs morris)
-> true

As seen above, we were able to specialize a new Morris which has the same properties as the original Morris except for the not-liking-dogs part. And we did it by using keywords as functions to retrieve data from our own custom type.

Mar 9, 2015 - Docker image as a development environment

I've been playing with Docker lately. According to the documentation, it's most commonly used as a container for a single server-side process. My use case is a bit different: trying to get a development environment running. Usually I'd use Vagrant for shared development environment configuration and implementation, but I ran into a case where it wasn't an option.

My requirements were basically these:

  • The end-result should be an interactive shell for compiling and running the software
  • I need to be able to install various .deb packages into the Docker image
  • I need to copy files to the image when it's being built
  • I need to run various commands during installation
  • All these steps should be automated (no manually created massive image-files)

How to automate image creation

You can create an easily shareable text-file called Dockerfile for your image. Let's say I want to share an Ubuntu 14.04 image with:

  • a HELLO.TXT inside root's home directory which is echoed to the root user on login.
  • Emacs installed

This is what it would look like:

FROM ubuntu:14.04
RUN apt-get update && apt-get install -y emacs
ADD HELLO.TXT /root/HELLO.TXT
RUN echo "cat /root/HELLO.TXT" >> /root/.bashrc

Store the above snippet to a directory as Dockerfile. Also store HELLO.TXT with some text to the same directory. Then run command:

sudo docker build .

Now Docker downloads Ubuntu as the base image and applies our instructions. As a result we get an image with an ID:

Successfully built 1e49d046eb83

We can use that ID to run commands. Let's start an interactive shell.

Starting an interactive shell

To start an interactive shell in our new image, we tell docker to run /bin/bash:

sudo docker run -e 'HOME=/root' -i -t 1e49d046eb83 /bin/bash

Now you should be greeted with "hello" and should be able to start the installed Emacs.

The parameters to run-command:

  • -i keeps STDIN open so we can run an interactive session
  • -t allocates a pseudo-terminal; the last argument (1e49d046eb83) is the ID of the image to run
  • -e allows us to set environment variables. For some weird reason $HOME isn't set properly in Docker, so we have to set it explicitly here to get our .bashrc evaluated.

Now you have a working system, and with those basic ADD and RUN commands you can install or alter almost anything. Just to make our dent in the universe, let's store a file in our image while we are in it:

echo "foo" > /bar.txt

Clean rebuild

Docker uses its cache to determine which commands need to be re-run when you build. This is faster and usually works ok. Sometimes you'll want to do a clean build though. This can be done with the --no-cache parameter:

sudo docker build --no-cache=true .

Getting back

Let's get outta here! Press ctrl-D and you're back in your host operating system. Nice, let's go back to our docker image by running that same run command (above) again. Looks similar, but where's /bar.txt?

ls: cannot access /bar.txt: No such file or directory

It turns out that every time you run a command, you get a new container based on the image ID we give the run command. So if we run just one command like this:

docker run learn/tutorial echo "hello world"

It will create a new container for the command echo, run the command and return. This container still exists, but running the same command with the same image again will create another container based on the image "learn/tutorial".

Since we're building a development environment, surely we'd like to have some state there and not just start from scratch every time. You can list all containers you have created by running commands with:

sudo docker ps -a

Or if you want to see just the last container you created by your last command:

sudo docker ps -l

It will show when the container was created, with what command etc.

So the question remains, how do we get back to the container which had our /bar.txt?

We'll have to create a new image based on the container from the /bin/bash command we ran. So let's run that ps -a command, check the ID of the container created by our /bin/bash command and commit it as a new image:

sudo docker ps -a
<check the id of /bin/bash container, happens to be 2ab151606f4c>
sudo docker commit 2ab151606f4c image-with-bar-txt
sudo docker run -i -t image-with-bar-txt /bin/bash

Now we can see our precious bar.txt again! This doesn't seem to be what I want to do every time I go back to my development machine though.

Volumes

What if I could just always run an image based on a Dockerfile which is in version control? What if I wanted to see and edit all my files on the host system? I can't really do these things if I start editing my source files inside the Docker container's Union File System.

This is why Docker has volumes. They are just shared directories between the host and the Docker container. They are lightweight: there is no NFS or CIFS involved, just direct disk access between the host and the container.

Let's share our host's directory /home/clarence/work as /work inside the Docker container. We'll just start bash again, but with a -v flag:

sudo docker run -v /home/clarence/work:/work -i -t image-with-bar-txt /bin/bash

Now when you're in you can try:

touch /work/yeah

Then get out of the container, and check the directory (in the example /home/clarence/work):

ls /home/clarence/work
-> yeah

By adding multiple -v (or --volume) parameters to the run command we can add as many volumes as we like. Note that the host path must be absolute, so if you want to use relative paths, you'll have to expand the path beforehand, e.g. in a shell script.
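For example, a small wrapper script could do the expansion before invoking docker (the ./work directory and the image-with-bar-txt image name are just placeholders here):

```shell
#!/bin/sh
# Expand a relative directory to an absolute path before handing it to -v.
# "$PWD/work" and "image-with-bar-txt" are illustrative placeholders.
HOST_DIR="$PWD/work"
mkdir -p "$HOST_DIR"
echo "Mounting $HOST_DIR as /work"
# sudo docker run -v "$HOST_DIR":/work -i -t image-with-bar-txt /bin/bash
```

Since $PWD is always absolute, $HOST_DIR satisfies docker's requirement no matter where the script is started from.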

Where does it all go?

All the files in the Union File System, i.e. the stuff you install and work on which is not under a volume directory, go to /var/lib/docker on your host. This can grow quite large, so if your disk gets full, check that directory.

To free some space you can nuke all the containers with this:

sudo docker ps -aq | xargs sudo docker rm

And all the images with this:

sudo docker images -q | xargs sudo docker rmi

Of course you should only do that if all the state you need is either in the Dockerfile or in volumes. Otherwise do some more fine-grained deletions by checking the listings of the ps and images subcommands.

Dec 11, 2014 - Vagrant performance tuning

The Vagrant setup I talk about in this post: OSX as the host system, Ubuntu as guest and VirtualBox as the provider.

I use a Macbook Pro as my work machine, but the environment requires Linux tools for building C/C++ apps and distros etc; hence I'm using multiple Linux virtual machines.

I used to run Linux VMs on VirtualBox directly. Hunting down and hand-configuring images was a pain. I found out that Vagrant simplified things a lot for me. There are however some things which will bite you when you start using it for serious stuff.

I ran into two pretty bad performance issues. The first one was that my network speed was pretty miserable, about 70 times slower than the host system.

It turns out that the problem was with the VirtualBox NAT-interface and the default Network Adapter type. Adding this line to my Vagrant configuration made all the difference:

config.vm.provider "virtualbox" do |v|
    v.customize ["modifyvm", :id, "--nictype1", "virtio"]
end

Now the download speed improved to an acceptable rate, about 15x faster than with the default settings. If you're using plain VirtualBox without Vagrant, switch the Adapter Type of your NAT interface in the GUI to:

Paravirtualized network adapter (virtio-net)

Or just use bridged mode, which seems to be faster anyway. This is not an option in Vagrant though: it requires eth0 to be a NAT interface. You can add another bridged interface, but eth0 still has to be NAT.

I also ended up asking and answering my own question on Superuser about this.

The second performance issue was with the shared folder which is accessible via /vagrant on the guest. If you have any large project or any builds running from the shared folder, you'll bump into this issue. I noticed it when my code's tab completion became super slow. By default Vagrant shares the folder using VirtualBox's own shared-folder mechanism (vboxsf), which is slow. To speed this up we can switch to NFS instead. The solution was adding these lines to the Vagrantfile:

config.vm.synced_folder ".", "/vagrant", type: "nfs"
config.vm.network "private_network", ip: "10.9.8.5"

The latter line is required by NFS, you’ll get an error without it. It creates a host-only network. This second interface will be handy anyway, you can e.g. access your HTTP server via that IP and skip port forwarding.
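For reference, here is what both tweaks look like combined into one minimal Vagrantfile (the box name is just an illustrative example, not from my actual setup):

```ruby
# Sketch of a Vagrantfile combining both performance tweaks.
# "ubuntu/trusty64" is an illustrative box name.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  # Faster NAT networking via the paravirtualized (virtio) adapter.
  config.vm.provider "virtualbox" do |v|
    v.customize ["modifyvm", :id, "--nictype1", "virtio"]
  end

  # NFS shared folder; the host-only private network is required by NFS.
  config.vm.synced_folder ".", "/vagrant", type: "nfs"
  config.vm.network "private_network", ip: "10.9.8.5"
end
```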