Introducing setlista

The problem

I usually go to a concert, come back home and want to listen to the gig’s songs again.

Or I get invited to a gig by a band I don’t know, and I want to check out the music they have played live before.

The solution

Tired of copying and pasting song names from setlist.fm, I ended up hacking together some JS code to connect their API to Spotify’s.

Feedback is more than welcome; the code is very unstable and hacky, but so far it does the job for me.

Another use case I might explore/implement is finding songs and artists in a web page and producing a playlist from them; that’s another ‘manual’ activity I often perform and want to automate.

Riemann Learnings

We’ve been using and troubleshooting Riemann quite extensively over the last few months. Here’s a little write-up of the main learnings, in the hope that other users might find it useful and avoid some of the mistakes we made.

In praise of Riemann
To start with, Riemann is a great monitoring tool. Its code, and the code of the clients I’ve been using, is clean and readable, and all our issues have been caused by misconfiguring or misunderstanding the tool. The support for Riemann itself has been great: aphyr helped us on more than one occasion with quick, prompt suggestions.

Why Riemann
Why would you spend time using, configuring and learning Riemann, which comes with a Clojure DSL for configuration?

Because it adds value: tools like New Relic can be run alongside Riemann, but they solve a subset of what Riemann can provide.

If New Relic is a tool for monitoring and alerting on the performance of an application at a high level, Riemann can be configured to monitor business logic in the code, with fine-grained control over alerting. Admittedly, New Relic is a simple tool: drop in its jar, configure your Java agent and you are done; you have your monitoring/alerting solution up and running.

Riemann, providing more, demands more: you’ll need to read its docs, understand its goals and limitations, and configure it adequately for your needs.

A typical setup
We have a Riemann server running in production. A few production application instances (about 12 right now) send events, which get enriched, filtered and forwarded to Graphite and to Alerta, and eventually to an SMTP server for email reporting.

Deploy and ‘release’
Our process at the beginning was poor: a spike-and-learn approach, hacking the files directly on the server. Day by day our configuration has grown, and now we have a separate GitHub project for it. We copy the files over via scp and we respect the Riemann file structure that the project has on GitHub; this allows us to change the bash script that starts the process, update and track a new version of the jar, and keep the configs in the etc folder.

We test new changes locally by starting Riemann with the exact same startup script; often we just copy the files over and reload the configuration by connecting to the Riemann REPL:

lein repl :connect
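
A minimal sketch of what that deploy looks like as a script. The host, nREPL port and paths here are made-up placeholders for illustration, not our real setup:

```shell
#!/usr/bin/env bash
# Hypothetical deploy sketch: copy the Riemann config over scp, then
# reload the running instance through its REPL. Host, port and paths
# are assumptions.

riemann_host="riemann.example.com"
remote_etc="/opt/riemann/etc"

function deploy_config {
   # Copy every config file, respecting the Riemann file structure.
   scp etc/*.clj "$riemann_host:$remote_etc/"
}

function reload_config {
   # Connect to the nREPL of the running server and reload in place.
   lein repl :connect "$riemann_host:5557" <<'EOF'
(riemann.bin/reload!)
EOF
}
```

Reloading in place means no restart and no lost index, which is why we prefer it to bouncing the process.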

Riemann ‘good practices’

  • Split the configuration in multiple files
    (include "util.clj")
    (include "mailer.clj")
    (include "graphite.clj")
    (include "alerta.clj")
    (include "molads.clj")

    The configuration grows, and we found it beneficial to split it into different files, each with a clear name and responsibility. We name them as Clojure files, which makes our life easier within Emacs.

  • Carefully define your functions and variables: a team member one day changed a variable used by the mailer, and it took quite some time to troubleshoot the problem and realise that, the Riemann configuration being written in Clojure, everything can be redefined.
  • Leverage LBQs (LinkedBlockingQueues): we are currently using them on the Riemann side, towards both Graphite and Alerta, via the async-queue! function:

    (let [index (default :ttl 300 (update-index (index)))
          alert (async-queue! :alerta {:queue-size 1000} alerta)
          graph (async-queue! :graphite {:queue-size 1000} graph)]
      ...)

    and also on the Clojure client side, wrapping event sending with this function:

    (ns clj-components.utils.bounded-executor
      "See: to understand what's going on here"
      (:import (java.util.concurrent ThreadPoolExecutor TimeUnit
                                     LinkedBlockingQueue RejectedExecutionHandler)))

    (def reject-handler
      "Handles a rejection on the bounded executor, i.e. when the LBQ is full."
      (proxy [RejectedExecutionHandler] []
        (rejectedExecution [runnable executor])))

    (def bounded-executor
      "Bounded executor; the settings are calculated from the current volumes of Riemann in production."
      (let [cores (.availableProcessors (Runtime/getRuntime))]
        (ThreadPoolExecutor. 1 cores 5 TimeUnit/SECONDS
                             (LinkedBlockingQueue. 250) reject-handler)))

    (defn run-bounded
      "Executes f on the bounded executor."
      [f]
      (.execute bounded-executor f))

    For a deeper explanation of why to use an LBQ, follow the discussion in this issue on GitHub.

  • We had problems with TCP without LBQs, so we switched to UDP, but then noticed that the Java client used by the Riemann client to send events swallows exceptions when the connection is closed. Until this bug is fixed, I’d recommend using TCP with LBQs, as there’s no way to recognise a disconnected UDP client. More on this in the same GitHub issue, further down the thread.
  • Don’t use it as a datastore: a silly mistake. Keeping data too long in the index will lead to poor performance; Riemann is designed to process events which have a short life and little state, and keeping your events around for more than a few minutes will lead to obvious performance problems. Make sure you set a reasonable default TTL on the events in the index.
  • Don’t query the index too often. This is the latest finding: we were setting a flag for maintenance mode when deploying applications, to stop event propagation; however, we were querying the index every time to check whether Riemann was in maintenance mode for a certain application, and this was slowly growing the heap allocation day by day. As it’s a recent finding you might find some interesting comments in this GitHub issue.
  • Leverage the Java toolset: we configured Riemann to run with New Relic, JMX and YourKit; without these tools it would have been really hard to find out where the problems were. The java command gets enriched with something like:
    JMX_OPTS=" -Djava.rmi.server.hostname="
    AGENT_OPTS="-javaagent:/opt/molsfw/newrelic/newrelic.jar -Dnewrelic.environment=production"
  • In conclusion, I’ve been very positively impressed by the tool and by the support I received. The DSL might be a little hard to remember if you don’t play with the code too often, but the power of the tool is impressive. Reading the Guardian’s Riemann config has been a great source of inspiration, and I encourage whoever can to share their Riemann configuration.

Bash Learnings


I recently refactored a pretty large and complex set of bash scripts.

I think Ruby influenced my Bash coding style quite a lot.

Here’s a list of some of the patterns I’ve followed.

Have a basedir variable available

script_base="$(dirname "$0")"

Separation of concerns

Write loosely coupled bash code in separate files and import them by doing:

. $script_base/utils/

Avoid global state

Limit global state (variables) as much as possible; it is evil in any language.

export PATH=/usr/pkg/sbin:/usr/pkg/bin:$PATH

is my one and only export and ‘public’ variable in the script.

Small, independent functions

Write independent, small functions

# Utility function to retrieve configuration properties, uses jq
function get_config {
   local environment=$1
   local config=$2
   jq -r ".$environment.$config" config.json
}

Document your functions: sadly, Bash doesn’t allow named parameters, so otherwise it’s pretty hard to figure out what a function takes.
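
A convention that helps is spelling the parameters out in a comment header. This is a hypothetical example: get_db_property and its arguments are made up for illustration.

```shell
# get_db_property: builds the lookup key for a database property.
# Bash has no named parameters, so document them explicitly:
#   $1 environment - e.g. "production"
#   $2 property    - e.g. "host"
function get_db_property {
   local environment=$1
   local property=$2
   echo "db.$environment.$property"
}

get_db_property production host   # prints: db.production.host
```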


Leverage local scope: use local variables.
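
A contrived sketch of why this matters: without local, a helper silently clobbers variables of the same name in the caller.

```shell
function set_count_global {
   count=0          # leaks: overwrites the caller's variable
}

function set_count_local {
   local count=0    # safe: visible only inside the function
}

count=42
set_count_local
echo "$count"       # prints: 42
set_count_global
echo "$count"       # prints: 0
```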

Principle Of Least Astonishment

Check error codes often and offer meaningful log messages.

# Checks the return code of the last command run and errors / exits if non-zero
function check_error { 
   if [ $? -ne 0 ]; then
      error "[ERR] $1" 
      exit 1 
   fi
}
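
At a call site it reads like this. This is a self-contained sketch, with a stub error function so the snippet runs on its own:

```shell
# Minimal stub of the error helper, so the sketch is self-contained.
function error {
   echo "$1" >&2
}

function check_error {
   if [ $? -ne 0 ]; then
      error "[ERR] $1"
      exit 1
   fi
}

# check_error inspects the exit code of whatever ran just before it.
mkdir -p /tmp/releases
check_error "could not create the releases directory"

echo "release directory ready"   # only reached if mkdir succeeded
```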

User interaction

Use colors: Bash is fun with colors.

function error { 
   echo "$(tty -s && tput setaf 1)$1$(tty -s && tput sgr0)"  
}

function ok { 
   echo "$(tty -s && tput setaf 2)$1$(tty -s && tput sgr0)"  
}

function warn { 
   echo "$(tty -s && tput setaf 3)$1$(tty -s && tput sgr0)" 
}

Defensive coding

Check error codes properly even when you run remote scripts.

# Runs a command on a remote host via ssh
function ssh_exec { 
   local remote_user=$1 
   local remote_command=$2 
   local results
   local rc

   results=$(ssh -T -q "$remote_user" "$remote_command")
   rc=$?
   if [ $rc -ne 0 ]; then
      log "remote code execution returned: $rc" 
      log "remote code execution output: $results"
      error "[ERR] Failed running remote command: $remote_command"
      exit 1
   fi
}
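
The detail worth noticing above is that $? must be saved immediately after the command substitution, because every subsequent command, even [ itself, overwrites it. The same pattern applies to any command, not just ssh; a generic sketch (run_and_check is a made-up name):

```shell
# Runs a command, capturing output and exit code separately.
# $? is saved right after the command substitution, because the very
# next command would overwrite it.
function run_and_check {
   local results
   local rc

   results=$("$@" 2>&1)
   rc=$?
   if [ $rc -ne 0 ]; then
      echo "command failed (code $rc): $results" >&2
      return 1
   fi
   echo "$results"
}

run_and_check echo hello                          # prints: hello
run_and_check false || echo "caught a failure"    # prints: caught a failure
```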

In Conclusion

Don’t underestimate Bash: it’s an awesome, Turing-complete language and requires no installation.

When the code starts to get too nasty, use your favourite scripting language!

Microservices and SOLID principles of Object Oriented Design


I’ve taken the Principles Of Object Oriented Design and tried to see how well they fit when describing microservices.

There are five principles of class design (aka SOLID):

  1. Single Responsibility Principle

    1.1. Each responsibility should be a separate microservice, because each responsibility is an axis of change.

    1.2. A microservice should have one, and only one, reason to change.

    1.3. If a change to the business rules causes a microservice to change, then a change to the database schema, GUI, report format, or any other segment of the system should not force that microservice to change.

  2. The Open Closed Principle

    2.1 You should never need to change existing code or microservices: rather, rewrite them.
    This prevents you from introducing new bugs in existing code: if you never change it, you can’t break it. It also prevents you from fixing existing bugs in existing code, if taken to the extreme.

  3. The Liskov Substitution Principle

    3.1 If for each microservice instance m1 of type S there is a microservice instance m2 of type T such that, for all other microservices P defined in terms of T, the behaviour of P is unchanged when m1 is substituted for m2, then S is a rewrite of T.

  4. The Interface Segregation Principle
    4.1 The dependency of one microservice on another should be through the smallest possible interface.

  5. The Dependency Inversion Principle

    5.1 We wish to avoid designs which are:

    • Rigid (Hard to change due to dependencies. Especially since dependencies are transitive.)
    • Fragile (Changes cause unexpected bugs.)
    • Immobile (Difficult to reuse due to implicit dependence on current application code.)


Principles 1, 4 and 5 have a direct translation; principles 2 and 3, solved in OO by subclassing and extending, are addressed by continuous rewriting.


  • Continuous Rewrite

  • Granularity of microservices

  • Maintenance

  • Monitoring

  • Deploy

  • Resilience

Stay tuned: I’ll address how to mitigate these in the next few posts.

My first six months with Clojure

The story

Six months ago I decided to join my friend Jon in rewriting what is acknowledged to be the world’s most popular news site.

This blog post mainly focuses on my first impressions of the programming language we chose to use: Clojure.

Emacs, Paredit

I’ve tried to learn Clojure a few times in the past and I’ve failed miserably.

Partially because I wasn’t forced to use it on any large production system, but right now I mainly blame the development environment.

The duo of Emacs and Paredit, in fact, even if definitely not quick and easy to learn, will in the longer term make your life as a developer very easy.

After a painful few weeks I was soon fully in love with Emacs, its speed and its infinite power and extensibility; I can definitely say it’s the best tool for writing code I have ever used.

ParEdit is a minor mode for performing structured editing of S-expression data.
In human language, that means you will avoid going crazy balancing parentheses: I barely read the parentheses in my code these days, and I write only half of them.

The only trouble I’m seeing with it right now is team work: everybody (including me) tends to create their own Emacs files, and pairing becomes challenging since everything is easily configurable and customizable.


The nREPL

The nREPL changed the way I write software. Yes, in the past I used the Rails console, irb, the Chrome console and even the console in Visual Studio, but the nREPL is way more powerful.
Its power comes from Clojure: I usually move inside a namespace, write some code in the REPL and yank it back into the editing buffer.

The functions end up small, the code doesn’t have too many dependencies, and everything is pretty self-contained. I am far from being a good functional programmer, but I can say that I can easily read my code months after I wrote it; that’s a decent sign!

From the REPL we also easily restart, reload and debug our applications.

Jon recently wrote about TDD and Clojure; right now the nREPL is my way of doing TDD, with the test thrown away straight after. It’s not far from what Dan North thinks about TDD.

Books, Community, Resources

Clojure is relatively new (it appeared in 2007) but pretty solid and stable. I love its mailing list: not too much traffic and high-quality posts. There’s also a good amount of books available, Programming Clojure being my favourite so far.

Thoughts on Object Oriented Design

I had an argument with Felix ages ago: I have always thought that the language drives the coding style and the design.

Years of EJBs, Spring and IDEs that generate boilerplate code for you created those big balls of mud that soon need rewriting.

But now I’ll go further: I have reached a point where I think that Object Oriented Design is not suitable for the web.
The power of Clojure is in its data (maps, lists, but not only) transformation capabilities.
A web application is nothing more than data transformation, from a storage layer to an HTML file served on a socket.
Nothing more than that.

OOD is also very hard to get right, loads of tests, GOOS all the way and so on.

Functional programming is hard as well, and I am probably far from being able to judge whether it’s easier or not, but I can certainly say that the speed of changing/moving/refactoring code is incredibly fast.
I believe this is also due to the small quantity of unit tests and the generally decent quality of our code base, but it still impresses me.

High productivity

I rarely felt so productive with coding.

The only comparable project I can remember in my past was perhaps using Ruby, Sinatra and Mongo.
However, Emacs with ParEdit and code completion makes writing code quicker, and compilation on save makes it easier to spot trivial typing errors.

The dependency model is a very well camouflaged version of Maven: it just works. It’s probably not as great as npm, but it does the job very, very well.

Quick wins

We use Avout for storing our application configuration, and it has never been so simple and reliable: configuration changes are stored in ZooKeeper and pushed to the application instances asynchronously, with no restart needed. It just works.

Handling state (or avoiding to have state) in Clojure is pleasant, elegant and safe.

Check this blog post for more on the subject.

How does it compare with other languages I’ve used?


Java

Ah, good old Java: it’s like talking about Cobol in the 90’s.

I hope that with the next JDK things will improve, and I have a lot of respect for all those folks who write rather elegant libraries to solve the problems the language is unable to solve elegantly.

The ecosystem is broken: IDEs, frameworks.

Good practices are followed by way too limited a number of people, and I never had the luck to join a project and see a healthy code base, without bloated frameworks wired in, without over-complex unnecessary design: Service Controller Repository, anyone?

Clojure wins in conciseness and core API.


Left intentionally blank


Ruby

I still like Ruby a lot, but its performance is in most cases poor (I’ll make an exception for Goliath); I don’t like where Rails is going, and I don’t like that too many people call themselves developers after a few weeks of a Rails crash course.

It’s a nice scripting toy language; I still use it for deployment (an area where Clojure perhaps lacks tools?) and for my machine setup. I don’t see the point in using it on large, performance-intensive projects: if it’s not your butcher’s website you should probably move away from it.

Clojure wins in performance and dependency management.


Javascript

I don’t like the dozens of ways you can write JavaScript: there are way too many ‘good’ ways of writing good JavaScript code. It’s a broken language in so many ways. Clojure wins in elegance.

Node.js is about 2 years younger than Clojure but it’s still crazily unstable. Sure, this will make the platform evolve, but I had to rewrite some code a few times just because of a Node minor version update.
I’ll sit and wait for version 1.0, perhaps writing the code in CoffeeScript.

Clojure wins in platform maturity and stability.


Obviously I need more time to start feeling the pains (I reckon at least another year), and I am also very intrigued by Erlang and Go. But learning Clojure is an exercise I can only recommend: it will make your code better, whatever language you have to use in your daily job.

I think that what’s harder (or more costly) to learn is then easier (or cheaper) to use.
And that’s definitely the case of Clojure.

Composing html fragments with Mustache and Clojure

We have a fairly large code base of Mustache templates (5,386 LOC in total, 964K); we are using Clojure with stencil to transform our data from ElasticSearch documents into HTML5.

The two main views on the site are article detail and channel. Here’s the code responsible for mobile article rendering.

(article-channel-fns [this article]
  {:head       (partial head/render-for-single-mobile-article article)
   :content    (partial mobile-article/render-article article)
   :masthead   #'masthead/render-for-mobile
   :navigation #'navigation/render-for-mobile
   :footer     #'footer/render-for-mobile})

The head/render-for-single-mobile-article function looks like this:

(defn render-for-single-mobile-article [a channel]
  (musta :channel :head (article-defaults a channel)))

The main mustache template is rather simple:

<!DOCTYPE html>

{{> templates/mobile/channel/body.html}}

The trick is to have Mustache variables that get resolved into function calls, so that {{head}} will be resolved at page load.

Musta is a simple macro that loads the templates from the file system (or a cache) and calls the stencil library.

The head template looks more or less like this:




<meta {{type}}="{{name}}" content="{{{content}}}" />

<meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=1,user-scalable=no" />

<link rel="{{rel}}" href="{{href}}" {{#title}}title="{{.}}"{{/title}} {{#type}}type="{{.}}"{{/type}} />

<link rel="stylesheet" type="text/css" href="http://{{globals.settings.fe-repo-host}}/{{globals.settings.fe-repo-version}}/css/main.css" />


In conclusion

It’s a pretty neat and flexible way to compose your HTML pages; stencil proved to be very fast and bug-free.

Mustache has a rather extreme approach: no logic (apart from loops and simple if/else) is allowed in the syntax. Nevertheless it plays quite nicely when generating HTML from Clojure maps; whenever we need some conditions we pass special variables to the templates, with naming conventions such as ‘map-name-count’ and ‘map-name-?’ to indicate respectively the size and the existence of the data.

Two years of programmer anarchy

Two years have passed since Fred George and I wrote the Programmer Anarchy paper. In those two years Fred went all around the world explaining what was happening here at Forward, and meanwhile I was here experiencing the Anarchy. This blog post is a write-up of these last two years’ experience: what worked well, and what worked less well. To start with, let’s call it by a different name, one which doesn’t imply chaos and confusion; anarchy is not a new thing in the agile world, and many people refer to it as self-organising teams.

When does it work well?

It works well when the manager is absent or fully trusts the team. One of the main selling points of Fred’s Anarchy was the lack of managers in the picture. Well, some sort of business owner or idea creator still needs to be present. That person needs to fully trust the team, and ideally needs to be an ex-developer: I have never seen a manager without a past in developing software who could trust and understand their team. I truly believe that the best-performing teams have developers to lead them and drive the business; Google apparently is one such example. Brandon Keepers recently wrote about GitHub’s anarchy.

It works well with small teams

I have always loved the magic number of 5 developers per team, and believed that it’s enough to build anything in the world. Sometimes you need to increase the WIP and have more developers, but without some sort of leadership the team will lack focus and direction. A self-organising team is one of the facets of agile. It’s not an arrival point and it’s not a silver bullet; it’s something to try, like any other practice. I did find, however, that it requires experience and time to glue the team together. If I go back in time with my memories, back in 2007, the FM team was self-organising, but it took us a few months to reach that level of maturity where everybody knew what to do and how. We reached that level of self-organisation by leveraging pair programming, a solid, team-owned code base and a kanban wall. We had a great agile project manager to help us focus, and a great tech leader who, rather than leading, was just coordinating us and helping us climb the ladder of self-organisation.

What I didn’t like / What didn’t work well.

Not Pairing.

Assuming you are a mature, highly skilled and performing team, the code quality won’t fall. What will fall is the knowledge sharing: you will need to introduce weekly showcases and artificially increase the communication inside the team.

Polyglot anarchy.

When I was a consultant I always suffered from the lack of polyglotism in big enterprise companies. I had to be part of Forward to understand what full polyglot anarchy means. If you write your software in a new funky language, using a new funky application server on a new funky infrastructure, you will have not only to maintain it but also to support it. And if the system has to be up and running 24/7, that may lead to some issues. Even assuming your sysadmins on support know everything from Clojure to Node.js, from Go to asynchronous JavaScript, this choice is still pretty risky. The team should take responsibility for keeping the system up and running, but in the long term having a whole team on call at night, during the day and on holidays is not really feasible. I still don’t have a solution to this; I guess the sysadmins should pair with the team and learn the caveats of any system built by the team itself. I have also started to believe that fixing some constraints in the infrastructure is not such a big deal: say everything will be built on the JVM; that would still give the teams a decent choice, while keeping some consistency around deployment and live, real-time troubleshooting.

No Iterations.

I am not a big fan of Scrum and of time-boxed iterations in general; however, the human brain tends to forget the passing of time: that’s why we have cuckoo clocks, bell towers and so on. Having iterations while releasing software helps you realise that time is passing, and helps you be more conscious of it. Iterations also create a safe environment for other rituals: team dinners, retrospectives, one2ones with team members, feedback sessions. Most senior developers probably keep the concept of passing time in the back of their mind, but then again, why stop having iterations if we will, again, artificially set some dates on a calendar for agile rituals? Without iterations it’s also hard to plan for slack, or Golden Cards.

No Estimations and no stories.

I realised that moving a user story on the card wall is a ritual that brings happiness, a sense of completion. If Fred is right when he talks about the story tyranny, it’s also true that without user stories (see INVEST in Good Stories and SMART Tasks) and with continuous deployment, the risk of having continuous requirements is pretty high. As a developer you are never done, because there will always be something more to do; as a product owner you will never see the end, because you will always add new features. People work in contexts, and a context can be as long as a year. When is the end of the context? Unknown. It’s hard to define done, impossible to estimate, and it adds way too much uncertainty to the work in progress.


No Standups.

Knowing what the team is doing and whether it needs help is a well-established right and duty of any team (not only in IT). If you walk around Forward these days between 9 and 10 in the morning you will see almost every single team standing up. Unless you are a team of 2 people the standup is a must-have, and it’s such a small effort.

No Tests.

Well, Dan wrote quite a bit around this area with spike and stabilize, and Liz replied to that blog post here. Of all the practices I’ve abandoned in these last years, tests are probably the one I missed the least. Still, it’s very dangerous to preach stopping writing tests: writing a lot of tests makes you become a better developer, and writing tests in most contexts is a must-have.

No Refactoring/Rewrite, and writing in microservices

Without tests, writing the code in a dynamic language forced us to write small components and rewrite them instead of refactoring them. What in the past was a module in an enterprise application became a separate codebase talking with other compoments mainly in json. A part from obvious performance (if performance is important in your context) issues, I found this approach a little wasteful as well. Rewrite comes from lack of analysis (lack of user stories) and lack of correct design in the first place (lack of test driven development). I found more satisfying (and probably more effective) writing my software the first time “good enough” and then improving it step by step with refactoring. My brain works that way not only for coding. Imagine finding the optimal walking path from home to work, optimizing different things, sightseens, traffic, pedestrian paths, shops you want to pass by. Refactoring is improving it every day. Rewriting is like coming back once every 3/6 months, for the first 3 months you will walk a shitty path, the next 3 months something better and so on. Rewriting it’s kaikaku while kaizen is refactoring. Again it all depends on the contexts, but, at least in my experience continous refactoring is a pleasant activity while continous rewriting is rather frustrating.