Pedantic console for JavaScript tests

When running JavaScript tests that fire up a browser, it can be useful to ensure that console.warn and console.error aren’t being called, and to have your automated build fail if this starts happening.

If you’re using a framework like Jasmine, you can place the following file in your helpers directory to ensure that an exception will be thrown whenever a call is made to console.warn or console.error:

console.error = (function () {

  var originalConsole = console.error;

  function myError () {
    originalConsole.apply( this, arguments );
    throw Error('console.error called, failing test');
  }

  return myError;
})();

console.warn = (function () {

  var originalConsole = console.warn;

  function myWarn () {
    originalConsole.apply( this, arguments );
    throw Error('console.warn called, failing test');
  }

  return myWarn;
})();
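The two wrappers are nearly identical, so the same idea can be expressed once as a small helper. This is just a sketch: failOnCall is a made-up name, not part of Jasmine or any other framework.

```javascript
// A sketch of a generic version of the wrappers above.
// `failOnCall` is a made-up helper name, not part of any framework.
function failOnCall(consoleObj, methodName) {
  var original = consoleObj[methodName];
  consoleObj[methodName] = function () {
    // Still log the message so the cause of the failure is visible...
    original.apply(consoleObj, arguments);
    // ...but fail the test by throwing.
    throw Error('console.' + methodName + ' called, failing test');
  };
}

failOnCall(console, 'warn');
failOnCall(console, 'error');
```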

I recently used this to locate all the places that a deprecated Moment.js feature was being used in a codebase.

Haskell stack, Yesod and Docker

The why

Over the break I’ve been working on a web app to replace a fairly old MS Access Database that I built for my Dad to use in 2009 (he has a mobile vet business).

This seemed like a good chance to try out Yesod, a web framework for Haskell. The Yesod philosophy is to leverage the Haskell type system wherever possible. For example, in the Hamlet templating language everything from generating URLs to including static files and generating forms is checked at compile time.

stack is a cross-platform build tool for developing Haskell programs which (when combined with Stackage) makes Haskell development much more enjoyable.

I’d recommend stack for any new Haskell project, and hopefully this post can point someone else in the right direction for a stack/Yesod/Docker project.

I built a very basic site with Yesod and stack locally on OS X, based off the yesod-postgres template (stack new mysite yesod-postgres). The yesod dev server makes this all very easy, just running stack exec -- yesod devel will keep Yesod running and recompiling the app whenever any changes are made.

Soon it was time to make the app available somewhere in order to start getting some feedback. At this stage, the most basic requirement for the hosting was that it be low cost. Digital Ocean came to mind (use this link for $10 credit), and I’m now using their one-click Docker droplet at USD $5 per month. This host can run both the Yesod frontend and the PostgreSQL database.

Binaries built locally on OS X won’t run on a Linux-based Digital Ocean droplet, and it was a bit early to spend $$ on something like Snap or Travis, so I needed a VM or container to run the build in.

The how

Set up

stack has support for Docker to run the build and package the application, but it isn’t currently supported when using boot2docker (see these issues), so I used a simple Vagrantfile to start up a beefy VM:

Vagrant.configure(2) do |config|
  config.vm.box = "puppetlabs/centos-6.6-64-nocm"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = "4096"
    vb.cpus = 8
    vb.customize ["modifyvm", :id, "--ioapic", "on"]
  end
end

CentOS 6.6 might seem like an odd choice for a development environment, but the Vagrant box was already on my laptop, so using it saved me the download time (bandwidth is a precious commodity in semi-rural Australia).

I didn’t bother setting up provisioning properly, but here are the commands that are needed to get stack working:

# Add FP Complete repo & install stack
curl -sSL | sudo tee /etc/yum.repos.d/fpco.repo
sudo yum -y install stack

# Add EPEL repo and install Docker
sudo rpm -iUvh
sudo yum -y install docker-io

# stack requires the Docker client to be able to connect
# to the daemon without root, so add vagrant user to dockerroot group
sudo gpasswd -a ${USER} dockerroot

# At this point, you'll need to log out and back in for
# the group change to take effect

# Setup iptables to allow Docker links to work
sudo iptables -t filter -A DOCKER -d -i docker0 -j ACCEPT

Adding the vagrant user to dockerroot comes with all the usual security issues, but I’m not too worried about that here. There are some other suggestions in the stack documentation for how to handle this.

Running the build

So, stack and Docker are now both installed. Enabling stack’s Docker integration on the VM is as simple as modifying the ~/.stack/config.yaml file:

$ cat ~/.stack/config.yaml
docker:
  enable: true

This gives the advantage of using Docker to run the builds on the VM, while not impacting the yesod devel workflow on OS X.

By default stack will use the fpco/stack-build repository to obtain an image to run the build in. This suits me, as there is also fpco/stack-run for running the binaries once they’re compiled. The build image includes everything that’s necessary for building the whole of Stackage, whereas stack-run is more trimmed down.

Running stack build kicked off the build in the Docker container, which completed successfully as all required dependencies (e.g. libpq and pg_config) were already in the image. Thanks to FP Complete for setting up these Docker containers for everyone to use!

Packaging Docker image

stack’s support for creating container images is a relatively recent addition, and currently isn’t covered well in the documentation. There is a reference to the image configuration section here, which was enough for my needs. After adding the relevant bits, my stack.yaml looks something like:

resolver: lts-3.20

packages:
- '.'

extra-deps: []

flags:
  mysite:
    library-only: false
    dev: false

extra-package-dbs: []

# This isn't required as it's set in the user config on the VM
# docker:
#   enable: true

image:
  container:
    name: mdjnewman/mysite
    base: fpco/stack-run
    add:
      config: /app/config
      static: /app/static

With that in place, running stack image container produced a Docker image ready to be pushed to Docker Hub:

$ stack image container
Sending build context to Docker daemon 34.25 MB
Sending build context to Docker daemon
Step 0 : FROM fpco/stack-run
 ---> db9b2a858ef5
Step 1 : ADD ./ /
 ---> e682c572d7ed
Removing intermediate container 1ce35ed1ccea
Successfully built e682c572d7ed

Running a regular docker push published the image to the registry.

Deploying to Digital Ocean

After all these steps, the final package was in Docker Hub and it was just a matter of running the following commands on the droplet, and the site was live:

docker run --name=postgres -e POSTGRES_PASSWORD=$(uuidgen) -e POSTGRES_USER=postgres -d postgres:9.3

# Yes, I know UUIDs don't make the best passwords :)

docker run              \
    -d                  \
    -w /app             \
    --link postgres:postgres         \
    -p 3000:3000        \
    mdjnewman/mysite

The last command is running the Yesod server, with environment variables set to the values provided by Docker (see here for info about setting configuration variables in Yesod).

The results

I’m very happy with the Yesod workflow, and stack’s Docker integration makes deploying a lot easier, as I don’t have to worry about what packages are in my build and test environments. It seems like a good compromise between manually copying files around and having a full CD pipeline.

Using stack build with a LTS resolver and an isolated container is also the holy grail of repeatable builds!

There is still a lot that would need to be done (improving security, backups, logging etc) before this was closer to a production environment, but it’s good enough as a development region.

This whole process, from having something I wanted to deploy to being able to view the live site (including writing this post), took less than a day, and I was learning a lot along the way. Deciding on the best place to host the site and waiting for Docker to pull the fpco/stack-build image took up most of the time!

Simon Peyton Jones on maintenance with strong types

People don’t like fixing type errors, but they are a lot easier to fix than runtime errors… The single biggest thing going for static typing is not so much that it helps you write your program in the first place, but it helps you maintain your program.

GHC itself is an example – it’s now 150,000 lines of Haskell & twenty years old and I regularly refactor it in pretty large scale ways. I just wouldn’t dare do that if I didn’t have static typing to keep me sane. It’s this enormous code base, chunks of which I’ve forgotten about, and yet I can confidently make systemic changes to it because I know the type checker is going to catch all the places that change needs to go.

Simon Peyton Jones, speaking on Functional Geekery Episode 11.

The part about GHC is anecdotal evidence, to be sure, but it makes the point nicely.

Why Functional Programming Matters

The following is the conclusion from a paper entitled ‘Why Functional Programming Matters’:

In this paper, we’ve argued that modularity is the key to successful programming. Languages that aim to improve productivity must support modular programming well. But new scope rules and mechanisms for separate compilation are not enough — modularity means more than modules. Our ability to decompose a problem into parts depends directly on our ability to glue solutions together. To support modular programming, a language must provide good glue. Functional programming languages provide two new kinds of glue — higher-order functions and lazy evaluation. Using these glues one can modularize programs in new and useful ways, and we’ve shown several examples of this. Smaller and more general modules can be reused more widely, easing subsequent programming. This explains why functional programs are so much smaller and easier to write than conventional ones. It also provides a target for functional programmers to aim at. If any part of a program is messy or complicated, the programmer should attempt to modularize it and to generalize the parts. He or she should expect to use higher-order functions and lazy evaluation as the tools for doing this.

Of course, we are not the first to point out the power and elegance of higher-order functions and lazy evaluation. For example, Turner shows how both can be used to great advantage in a program for generating chemical structures. Abelson and Sussman stress that streams (lazy lists) are a powerful tool for structuring programs. Henderson has used streams to structure functional operating systems. But perhaps we place more stress on functional programs’ modularity than previous authors.

This paper is also relevant to the present controversy over lazy evaluation. Some believe that functional languages should be lazy; others believe they should not. Some compromise and provide only lazy lists, with a special syntax for constructing them (as, for example, in Scheme). This paper provides further evidence that lazy evaluation is too important to be relegated to second-class citizenship. It is perhaps the most powerful glue functional programmers possess. One should not obstruct access to such a vital tool.

J. Hughes, “Why Functional Programming Matters,” Comput. J., vol. 32, pp. 98–107, 1989.

This paper was written twenty years ago – the more things change, the more they stay the same. I’d urge any programmer (whether currently interested in FP or otherwise) to read this paper if you haven’t already.

ThoughtWorks LevelUp EXP Brisbane – Take 2

LevelUp EXP Brisbane Attendees (thanks Alexandra Tran for the photo!)

ThoughtWorks ran another great express LevelUp event on Saturday at their Brisbane office. LevelUp events aim to help students bridge the gap between university and their first full-time job. LevelUp EXP is a mini-conference with a number of talks and hands-on sessions, as well as lots of opportunities to mingle with the ThoughtWorks employees and other attendees.

One theme that recurred through almost every talk of the day was focussing on the user at every stage of the development process – if you’re not building something that your users can use, then you’re wasting time. This does mean users, not customers. You can be building a product for a client, but it’s ultimately the user’s experience that will determine success.

Real Project Example: Domino’s HTML5 in Brisbane

First up was Mark Ryall (@markryall), discussing the recent consolidation of three separate websites into a single HTML5 site for the Domino’s pizza franchise in Australia, New Zealand and the Netherlands. The new site was to replace separate Flash, mobile and accessibility-focussed sites, and was a greenfield project. From a software process perspective, the project kicked off with a two-week inception, where BAs, developers, designers and testers worked together to get the initial requirements worked out. Mark pointed out that this is an effective alternative to the business struggling to work out requirements on their own for six months before getting anyone else involved. It reminded me of the quote that’s often attributed to Henry Ford:

If I’d asked people what they wanted, they would have said a faster horse.

The point is that the business doesn’t always know what they want, and even if they do, they may not be able to articulate it in a way that gives both parties the same vision. The rest of the project ran in two-week iterations, with showcases at the end of every iteration.

Mark mentioned that the biggest challenges in this project (from a technical perspective) were internationalisation (I18N) and localisation. As Mark explained it, internationalisation ensures everything is available in the correct languages for the correct region, whereas localisation focusses on differences such as different address formats (another example of what this site in particular has to handle is that Domino’s in NL doesn’t provide catering, like it does in AU).

There was a heavy emphasis on test automation in this project, and Selenium WebDriver was used for automated browser testing. WebDriver allows a single test to be written to target more than one browser, and such tests are more robust than trying to guess what the browser is doing and interspersing your tests with loads of sleep() calls. In order to make tests more readable for everyone on the project (BAs and the business included), SpecFlow was used. SpecFlow is a .NET alternative to Cucumber, and uses a similar declarative style to define specifications and expected behaviour. Tests are then generated from these easy-to-read specifications, targeting whichever testing framework you’re using (NUnit in this case).

Given this site was replacing an existing site with a focus on accessibility, Domino’s required the new site to be just as accessible for people using screen readers and keyboard-only navigation. Accessibility is something that is easy to overlook, but every developer should make the effort to do these things correctly (using alternative text, ARIA tags, skip links for navigation etc).
If you want to know more about how important accessibility is, Scott Hanselman recently did an interesting interview with Katherine Moss, a blind software technician.

The Lego Game

The Lego Game is designed to give you a taste of the Agile software development process. Our task was building a Lego ‘animal’ with a client, in mini-sprints lasting a few minutes each. We worked through everything from estimation to a retrospective in less than ten minutes, and completed three iterations with a client who wasn’t afraid to change their mind. An interesting resource that was mentioned by the ThoughtWorks employees running the Lego game was the Retrospective Prime Directive by Norm Kerth:

Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.

I thought this was a good guideline for running retrospectives, as it ensures that you focus on what went well and what can be improved, and not on trying to assign blame. There are plenty of pages describing the agile Lego game on the net if you want to read more.

Agile 101 with a Business Analyst

Agile 101 with a BA by Henry Lawson was the first of the twenty-minute lightning talks for the day. In twenty minutes, Henry managed to cover a brief history of Agile, the Agile manifesto and different Agile methodologies. There was more about Agile on the cards, but the twenty-minute time limit had to be adhered to! The following paragraphs sum up the main points of the talk.

Agile manifesto: the Agile manifesto is what gave the movement towards iterative, lightweight processes a name. Written in 2001 by several prominent figures in the software industry, the core of the Agile manifesto is the following four points:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

The differences between traditional Waterfall methodologies and Agile: The waterfall model is a sequential design process, where each stage is completed and locked-down before the next is started.

Waterfall model of system development – avoid it! (you want the cycle above to take two weeks, not twelve months) (source)

While waterfall might work for other engineering projects (like building a bridge), it’s not really suitable for software development (too many software projects have elements of a wicked problem). I won’t go into the benefits of Agile or Agile processes in too much detail here, as there is plenty more information available.

Different agile methodologies: Examples of agile methodologies include Scrum, Kanban, Lean, XP and Crystal Clear. The main difference between these methodologies is the practices, such as daily Scrum meetings or pair programming in XP. The general processes are similar, and there is no reason you can’t mix and match.

Functional Programming

Hugo Firth (@hugo_firth) gave the next lightning talk, this time a quick intro to functional programming concepts. There are a number of programming paradigms, such as procedural and object-oriented (OO). Procedural code (e.g. C or BASIC) doesn’t provide many ways to create abstractions and manage state. OO, such as you’ve probably encountered in Java/C#/C++, allows abstractions to hide global state. Functional programming, on the other hand, is all about removing state where possible and programming with pure functions. Hugo went through some common functions you’ll meet in functional programming, such as map and filter, by providing some simple examples in Clojure. If you want to see a less rushed presentation by Hugo, his recent talk at the Brisbane Functional Programming Group is available on Vimeo.
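Hugo’s examples were in Clojure, but the same concepts translate to any language with first-class functions. Here’s a small made-up example (not taken from Hugo’s talk) of map and filter in JavaScript:

```javascript
// A small made-up example of programming with pure functions:
// nothing is mutated, new arrays are returned instead.
var numbers = [1, 2, 3, 4, 5];

// map: transform every element
var doubled = numbers.map(function (n) { return n * 2; });

// filter: keep only the elements matching a predicate
var evens = numbers.filter(function (n) { return n % 2 === 0; });

console.log(doubled); // [2, 4, 6, 8, 10]
console.log(evens);   // [2, 4]
```

Note that `numbers` is untouched afterwards; both functions return new arrays, which is exactly the removal of state Hugo was talking about.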

BFPG is a great community if you’re looking to learn more about functional programming – I’ve always found everyone there to be extremely willing to help.

Working in different environments… from Heaven to Hell

The last lightning talk in the first session was by Claudia Ferreira (@claudia_onfire), providing some practical advice for dealing with difficult work environments. You shouldn’t assume that the end of uni is the end of the hard work. Starting a new job, meeting new people and adapting to company culture is just as hard. Also, you don’t want to fall into the trap of thinking that you know everything and that you can stop learning. There is an almost endless variety of work environments, some good and some not so great. Claudia showed the following clip from The Matrix and told us about a time she was in this exact position.

Claudia’s boss basically said you’re here to work, not think, and there are hundreds of people who can fill this spot if you aren’t happy with that. Hopefully you won’t find yourself in a situation this extreme, but chances are you’ll have a boss you’re not overly excited about working with at some point in your career. If you find yourself in this situation, don’t just bottle it up and say nothing. Your team is there to help, you should be able to trust them and speak to them about issues. It’s important to have a good relationship with your team, as they’re the people that you spend the majority of your day with. So don’t stress if you’re confronted with a difficult situation, just take a deep breath, speak up if you really don’t agree and be the bigger person.

The Way of the Consultant

Sarbashrestha Panda has been a consultant for 5–6 years, working as an analyst and project manager on many projects. There are many types of consulting, from technical to business strategy. Technical consulting still has a strong focus on business needs: no matter what area you’re in, you’re helping to solve a problem the business is having. As Panda explained it, the role of the consultant is to always ask the difficult questions and have the difficult conversations. You want to make a problem visible, make people think and help to implement solutions. As a consultant, your knowledge should drive change.

Rhinoceros painting
Most problems are people problems - recognise that everyone has a different perspective (source)

If you’re working as a consultant, embrace the opportunity to work on many different projects – you can’t develop a broad skill set without experience. You can be a consultant in any language with any technology, and after a few years should develop the skills to become a consultant for a specific domain. Don’t let yourself get cornered into a specific technology – technologies change far too fast.

UI Dev. What and Why?

Eru Penkman gave the next talk, which was originally entitled ‘UI Dev. What and Why?’, but seemed to have transformed into a talk about programmer culture and day-to-day life. Eru pointed out a bunch of things that a lot of programmers/developers have in common: most of the people in the room had read XKCD at some point, everyone had used Stack Overflow, lots of people read similar blogs, and so on. Having all these elements in common helps when working in a team, which as Eru pointed out is pretty much what development is about. It’s very rare to have a project where you don’t interact with a number of people throughout the day. There are also plenty of resources for developers: because everyone uses Stack Overflow, it can answer almost any question, and lots of problems you might want to solve have already been solved by one open source project or another. However, don’t just consume these resources – make accounts on SO/GitHub and start contributing!

What they didn’t tell you about Testing!

Do not assume anything. Ever.

The final lightning talk for the day was by Leonor Salazar & Brian Mason, covering testing and quality. The first point that Leo & Brian drilled home is that quality is subjective. Everyone has different ideas of quality, and you can’t do much meaningful testing without knowing what quality means for your project. Quality as defined by the users of the product is the most important. Everyone on a project should be thinking about quality; it’s not something that one person can ‘add in’ at the end. There is also a simple and compelling economic argument for having quality on everyone’s mind: it can be up to 1,000 times more expensive to fix a bug in production than it is in analysis. Don’t let that cripple you or force you into a big-design-up-front approach, but keep quality at the front of your mind.

Development is solving problems, testing is asking questions.

You need to ask yourself four questions:

  • Are we doing the right things?
  • Are we doing the things right?
  • Are we getting them done well?
  • Are we getting the benefits?

There is no point obsessing over whether you’re doing things right if you’re not doing the right things! Testers also have the opportunity to improve process, and take the product in new directions.

User Testing what you Build – Hands On Session

Carrying on with the user-first theme was Pete Chemsripong, with a hands-on session where each team had to develop a ‘product’ that was then tested by a user – for example, a media remote control for a user wearing boxing gloves, or a lift control panel for someone carrying 15 bags of groceries. To design a good user experience, you need to ask why you’re building anything in the first place, and understand why people would use the product. Ultimately, the answer to a question like ‘What makes Google better than Bing?’ is always that it offers a better user experience and satisfies the users’ needs. It doesn’t matter what you build or how you build it if people don’t enjoy using it.

User testing also doesn’t have to be expensive: you can even grab a dev from another team who hasn’t touched your project. Chances are they’ll find a lot of the same issues as a ‘real’ user of the software, allowing you to pick the low-hanging fruit for minimal cost. Pete showed a video from a recent ThoughtWorks experiment, where an in-store innovation lab at a Woolworths store produced loads of great feedback in a very short time. The other ThoughtWorkers present mentioned Usability Testing with Morae as a good read for more information about inexpensive usability testing.

TDD – from woe to go in 10 minutes – Hands On Session

As professional developers, the burden of proving a system works lies with us.

Steve Morris held the floor for the final session of the day, and did a great job showing everyone how simple test driven development can be. TDD provides the following benefits:

  1. Shows the code does what you say it does
  2. Improves quality by making code more testable
  3. Allows you to figure out what to test only once – no more writing a monolithic class/function and then spending time working through what needs to be tested over and over
  4. At any time, you can see what is broken and whether your code is shippable (as there is never code that isn’t covered by tests)

TDD is coding with intent – don’t code by accident.

The ‘test’ in TDD could just as easily be replaced by ‘expectation’, ‘behaviour’, ‘specification’ or ‘intent’. It’s as much about helping to define expected behaviour and a minimal specification as it is about testing actual code. It doesn’t matter if you’re a developer or not – TDD provides the same benefits to BAs, managers and QA testers. I couldn’t write this post without including the TDD mantra, lest Steve think we all learnt nothing at all from the talk. The TDD mantra is as follows:

  • Red – write a minimal failing test
  • Green – write the minimum amount of code to make the test pass
  • Refactor – incrementally refactor the code (both code under test and the tests themselves – test code is first class code!)

Writing tests before code forces you to think about naming, architecture, APIs and design without considering implementation details. This helps with design and forces us to progress with intent, avoiding building unnecessary features. The refactor step encourages design improvements in incremental steps, rather than writing a bunch of code and then struggling through a ‘big bang’ refactor later in the project.

Failing tests are not a failure of any kind; rather, they are statements of intent. The minimal failing test doesn’t even have to compile. A test that doesn’t compile signifies an intent to write the code to make it compile. In this way, tests are documentation of future intent as well as progress to date.

The hands-on component of this talk included getting testing environments for C# and Node.js set up and writing our first code using TDD. Set-up for C# is as simple as installing a testing framework like NUnit, which in a newer version of Visual Studio is as easy as running

Install-Package NUnit

in the package manager console. The set-up for Node.js was just as simple; with npm installed, simply run

npm install karma
npm install phantomjs
npm install jasmine

from your favourite shell. In the talk, it was possible to get each environment set up and have our first test running in under ten minutes. There really isn’t any excuse for not giving TDD a try! Steve also pointed out that TDD (like most concepts in software development) isn’t as new as you might think.

The second LevelUp EXP event in Brisbane was eye-opening, and I think the attendees learnt a lot to complement what is taught at university. If you attended/presented and I’ve omitted anything, please let me know and I’ll fix it up. My post about the first Brisbane LevelUp event is here, and more about ThoughtWorks LevelUp can be found on Twitter (@TW_LevelUp) and Facebook.