Testing five lines of JavaScript

In this post James Coglan, a developer at FutureLearn, shares how he tested five lines of JavaScript, improved them, and why more code can be a good thing.

Last week I wrote a very small snippet of JavaScript. A tiny fragment, the sort of thing you might add to a page without really thinking about it. Here it is:

setInterval(function() {
  $.get(location.pathname).then(function(response) {
    if (response.redirect) location.href = response.redirect
  })
}, 5000)

Here’s what this code does: we are building a workflow that involves some data being processed ‘in the background’, and the user sees a holding page while the processing happens. When the processing is complete, they should be redirected to a success or failure page. To accomplish this, the above code polls the server every five seconds, and the server returns a JSON response that will contain a "redirect" field only once the processing is complete. If that field is present in the response, the JavaScript follows the redirect.
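
The server side isn't shown in this post, but for context, the status endpoint's behaviour can be sketched as a plain function. The job states and URLs here are made up for illustration; the point is that the response is an empty object until processing finishes, so the client-side `if (response.redirect)` check does nothing in the meantime.

```javascript
// A sketch of the JSON the status endpoint returns (hypothetical job
// states and URLs; the real server code isn't part of this post).
// While processing is still running, the response is an empty object.
var statusResponse = function(job) {
  if (job.state === 'complete') return { redirect: '/success' }
  if (job.state === 'failed')   return { redirect: '/failure' }
  return {}
}
```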

The problem is that this whole workflow involves quite a few moving parts, integration testing it is hard, and any single piece of it could easily be broken by accident without anyone noticing until it reached a user. I wanted really good unit test coverage of each part to put my mind at ease. So: how to test those five lines of JavaScript?

In this post, I will explain how we tested the three main facets of this code: timers, server interaction, and redirects.

Dealing with timers

Let’s start with the minimum possible thing we could do to make this testable: wrap the code up in a function so that a test can invoke it.

var pollForRedirect = function() {
  setInterval(function() {
    $.get(location.pathname).then(function(response) {
      if (response.redirect) location.href = response.redirect
    })
  }, 5000)
}

Now, let’s think about the functionality we’re trying to test. The code makes an Ajax request every five seconds and, depending on the contents of the response, it redirects the browser to a new page.

Let’s try testing one aspect of this: the fact that one request is made every five seconds. We’ll start with a simple mock-based test that invokes pollForRedirect(), waits for five seconds using setTimeout(), then checks that one call has been made to $.get(). Since we don’t have a server to talk to, we don’t want $.get() to call through; instead we make it return an unresolved $.Deferred object which is just enough to stop the then() call from failing. We’re not interested in the response handling yet and an unresolved deferred will never invoke the response-handling logic.
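
To see why an unresolved deferred is a safe stub, here's a toy version of the deferred pattern (a sketch, not jQuery's actual implementation): then() only stores the callback, and nothing runs until resolve() supplies a value.

```javascript
// A toy deferred (not jQuery's $.Deferred): then() stores callbacks,
// and they only run once resolve() is called. An unresolved deferred
// therefore never triggers the response-handling logic.
var makeDeferred = function() {
  var callbacks = []

  return {
    then: function(fn) {
      callbacks.push(fn)
      return this
    },
    resolve: function(value) {
      callbacks.forEach(function(fn) { fn(value) })
    }
  }
}
```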

describe('poller', function() {
  beforeEach(function() {
    pollForRedirect()
  })

  it('makes one request after five seconds', function(done) {
    spyOn($, 'get').and.returnValue($.Deferred())

    setTimeout(function() {
      expect($.get.calls.count()).toEqual(1)
      done()
    }, 5000)
  })
})

Immediately we run into our first problem; running this test gives an error:

Async callback was not invoked within timeout specified 
by jasmine.DEFAULT_TIMEOUT_INTERVAL

Jasmine imposes a 5-second time limit on async tests by default. Since our code takes this long to execute, we need to raise the limit by passing a custom timeout as a third argument to it():

describe('poller', function() {
  beforeEach(function() {
    pollForRedirect()
  })

  it('makes one request after five seconds', function(done) {
    spyOn($, 'get').and.returnValue($.Deferred())

    setTimeout(function() {
      expect($.get.calls.count()).toEqual(1)
      done()
    }, 5000)
  }, 10000)
})

This works, so let’s try something more complicated. To check the polling, we want to check there are two requests after ten seconds, so let’s add a test for that:

describe('poller', function() {
  beforeEach(function() {
    pollForRedirect()
  })

  it('makes one request after five seconds', function(done) {
    spyOn($, 'get').and.returnValue($.Deferred())

    setTimeout(function() {
      expect($.get.calls.count()).toEqual(1)
      done()
    }, 5000)
  }, 10000)

  it('makes two requests after ten seconds', function(done) {
    spyOn($, 'get').and.returnValue($.Deferred())

    setTimeout(function() {
      expect($.get.calls.count()).toEqual(2)
      done()
    }, 10000)
  }, 15000)
})

Running this exposes two more problems. First, the new test fails:

Expected 4 to equal 2

That’s because we have two tests, and so pollForRedirect() is invoked twice. For the first test, we invoke it, starting a new timer. After 5 seconds, that test finishes; Jasmine tears down the spy on $.get() but that timer keeps running. Then, the second test begins, starts a new spy on $.get(), starts a new timer, and runs for 10 seconds. During this time, two timer loops are running, and they make four requests, not two as we were expecting.

Second, after the tests stop running, the spy on $.get() is removed and the actual implementation is restored. Since the timers are still running, jQuery is now trying to make real requests to a server, which is failing and spitting error messages into the console.

The root cause of both these problems is we’re not stopping the timer after each test ends. Let’s fix that by adding a function for stopping polling:

var redirectPollingTimer;

var pollForRedirect = function() {
  redirectPollingTimer = setInterval(function() {
    $.get(location.pathname).then(function(response) {
      if (response.redirect) location.href = response.redirect
    })
  }, 5000)
}

var stopPollingForRedirect = function() {
  clearInterval(redirectPollingTimer);
  redirectPollingTimer = null;
}

And we’ll amend the tests to stop the timer after each test:

  afterEach(function() {
    stopPollingForRedirect()
  })

Now the tests pass. However, they’re extremely slow: fifteen seconds to run two tests! It won’t be long before this tiny test suite takes over a minute to run. We can fix this by using Jasmine’s fake scheduler. This replaces the setInterval, clearInterval, setTimeout and clearTimeout functions with fakes that operate off a scheduler that you control from your tests.
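
To make the idea concrete, here's a toy sketch of how such a fake scheduler works (this is not Jasmine's actual implementation): setInterval registrations are recorded rather than handed to the browser, and tick() fires any callbacks whose scheduled time has been reached.

```javascript
// A toy fake clock (a sketch, not Jasmine's implementation):
// setInterval() records the callback instead of scheduling it, and
// tick() advances virtual time, firing each callback once per elapsed
// interval.
var FakeClock = function() {
  this._now = 0
  this._timers = []
}

FakeClock.prototype.setInterval = function(fn, delay) {
  this._timers.push({ fn: fn, delay: delay, next: this._now + delay })
}

FakeClock.prototype.tick = function(ms) {
  this._now += ms
  var now = this._now
  this._timers.forEach(function(timer) {
    while (timer.next <= now) {
      timer.fn()
      timer.next += timer.delay
    }
  })
}
```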

I’m also going to move the spy setup into beforeEach(); we never want $.get() to hit the server so let’s make sure it’s stubbed out in every test.

describe('poller', function() {
  beforeEach(function() {
    jasmine.clock().install()
    pollForRedirect()
    spyOn($, 'get').and.returnValue($.Deferred())
  })

  afterEach(function() {
    stopPollingForRedirect()
    jasmine.clock().uninstall()
  })

  it('makes one request after five seconds', function() {
    jasmine.clock().tick(5000)
    expect($.get.calls.count()).toEqual(1)
  })

  it('makes two requests after ten seconds', function() {
    jasmine.clock().tick(10000)
    expect($.get.calls.count()).toEqual(2)
  })
})

Now the tests run almost instantaneously, and they’re clearer to read. setTimeout() has an annoying tendency to separate the action you’re taking from the time delay you’re waiting, whereas these tests tell a linear story: spyOn $.get(), wind the clock forward, check how many calls were made. The tests are no longer asynchronous, which further reduces the possibility for race conditions to cause surprising test failures.

Testing the server interaction

Now we’ve got the timers under control, we can move one layer deeper into the tests and look at the server interaction:

    $.get(location.pathname).then(function(response) {
      if (response.redirect) location.href = response.redirect
    })

We request the path of the current page. This reads from the global location object, but that's fine here: we're not really talking to the server, and we can refer to the same object in the tests to check the code is requesting the right thing. I didn't want to hard-code a path here because the URL we're deploying this on contains a dynamic resource ID, and using location.pathname means it should do the right thing whatever the URL happens to be.

Let’s add a test that checks the request is made to the right place:

  it('requests the current page', function() {
    jasmine.clock().tick(5000)
    expect($.get).toHaveBeenCalledWith(location.pathname)
  })

This works fine, and there's nothing more to test about how the request is made – there are no conditions or alternative values that might be used, we just need to check it passes the right URL in.

Mocking the redirect

We’ve tested how often the request is made, we’ve tested what gets requested, and now all that remains is to test how the response is handled. Let’s introduce a variable for the response into the tests so we can resolve it into different states:

describe('poller', function() {
  var response

  beforeEach(function() {
    jasmine.clock().install()
    pollForRedirect()

    response = $.Deferred()
    spyOn($, 'get').and.returnValue(response)
  })

  // ...
})

Now, let’s try adding a test for what happens if the server returns a redirect URL:

  it('redirects if the server returns a URL', function() {
    response.resolve({redirect: '/redirect-url'})
    jasmine.clock().tick(6000)
  })

This introduces our final problem: if the response contains a redirect value, we change the value of location.href, which is what causes the browser to navigate to a new URL. When this test runs, the browser jumps to file:///redirect-url, abandoning the test suite that was running. We need to prevent this from happening; the browser must stay on the same page for the tests to complete, but we nevertheless need to assert that a redirect was attempted.

Unfortunately, we can’t do that simply by inspecting the location object. Any change to location.href will unload the current page and load a new one. To get around this, we can introduce some indirection: a function that performs redirects, and some code that calls that function:

var pollForRedirect = function() {
  // ...
      if (response.redirect) redirect(response.redirect)
  // ...
}

var redirect = function(url) {
  location.href = url
}

This redirect function is so simple that I’m not worried about it breaking, so long as it’s left to just perform the location change and we don’t add more logic to it over time. Having this function means there’s now something we can mock in our test:

  it('redirects if the server returns a URL', function() {
    spyOn(window, 'redirect')

    response.resolve({redirect: '/redirect-url'})
    jasmine.clock().tick(6000)

    expect(redirect).toHaveBeenCalledWith('/redirect-url')
  })

We can also write the opposing test to check that nothing happens if no redirect URL is returned:

  it('does not redirect if the server does not return a URL', function() {
    spyOn(window, 'redirect')

    response.resolve({})
    jasmine.clock().tick(6000)

    expect(redirect).not.toHaveBeenCalled()
  })

Cleaning things up

Here’s the code we’ve ended up with:

var redirectPollingTimer;

var pollForRedirect = function() {
  redirectPollingTimer = setInterval(function() {
    $.get(location.pathname).then(function(response) {
      if (response.redirect) redirect(response.redirect)
    })
  }, 5000)
}

var stopPollingForRedirect = function() {
  clearInterval(redirectPollingTimer);
  redirectPollingTimer = null;
}

var redirect = function(url) {
  location.href = url
}

That’s three top-level functions and a global variable. To keep the global namespace clean, I’d rather encapsulate all this inside a class; instances of the class can hold the necessary methods and a reference to the timer.

var RedirectPoller = function() {
  this._path  = location.pathname
  this._delay = 5000
}

RedirectPoller.prototype.start = function() {
  var self = this

  this._timer = setInterval(function() {
    $.get(self._path).then(function(response) {
      if (response.redirect) self.redirect(response.redirect)
    })
  }, this._delay)
}

RedirectPoller.prototype.stop = function() {
  clearInterval(this._timer)
  delete this._timer
}

RedirectPoller.prototype.redirect = function(url) {
  location.href = url
}

Only a few changes are needed to the tests: we need to replace pollForRedirect() and stopPollingForRedirect() with poller.start() and poller.stop(), and mock the redirect function on poller rather than window.

describe('poller', function() {
  var poller, response

  beforeEach(function() {
    jasmine.clock().install()

    poller = new RedirectPoller()
    poller.start()

    response = $.Deferred()
    spyOn($, 'get').and.returnValue(response)
  })

  afterEach(function() {
    poller.stop()
    jasmine.clock().uninstall()
  })

  it('makes one request after five seconds', function() {
    jasmine.clock().tick(5000)
    expect($.get.calls.count()).toEqual(1)
  })

  it('makes two requests after ten seconds', function() {
    jasmine.clock().tick(10000)
    expect($.get.calls.count()).toEqual(2)
  })

  it('requests the current page', function() {
    jasmine.clock().tick(5000)
    expect($.get).toHaveBeenCalledWith(location.pathname)
  })

  it('redirects if the server returns a URL', function() {
    spyOn(poller, 'redirect')

    response.resolve({redirect: '/redirect-url'})
    jasmine.clock().tick(6000)

    expect(poller.redirect).toHaveBeenCalledWith('/redirect-url')
  })

  it('does not redirect if the server does not return a URL', function() {
    spyOn(poller, 'redirect')

    response.resolve({})
    jasmine.clock().tick(6000)

    expect(poller.redirect).not.toHaveBeenCalled()
  })
})

Now, one could argue that replacing my original five lines with a class with three methods has made it more complicated. But I’d say the code was already complicated: everything we’ve added has come from making the complexity more explicit, isolating things we can’t test and so on. The resulting code has tests around it, whereas the original was completely untestable.

Rather than looking at whether a piece of code is complicated or not in isolation, consider the cost of ownership over time. How likely is it that someone could break the code and your test/deploy process wouldn’t notice? How much time will it take people to diagnose and fix bugs in the code? Sometimes, making code more verbose makes it clearer and so changes can be made faster, and certainly having tests means we’ll spot defects we wouldn’t notice before. There might be more code, but I feel like the time spent on it is a net win for ensuring it continues to work into the future.

To find out more about how we build FutureLearn, check out Making FutureLearn.
