Pragmatic Unit Testing in JavaScript

More often than not, companies completely (and irresponsibly) disregard JavaScript as code that should be unit tested. They might test their back-end code, whether it's written in C#, Ruby, Java, PHP, or just about any other language, but there's a good chance the front-end code is thoroughly untested.

Integration-level testing with tools such as Selenium is nice in theory, but way too impractical (you have to set up a server), and particularly slow (loading browsers and replaying the recorded actions takes its toll). As such, it's rarely part of build processes, and it gets run manually (with a single command, but manually nonetheless).

So why is it that JavaScript gets treated so differently from other languages?

js-discrimination.jpg

Arguments against testing JavaScript on the client-side

  • JavaScript, and the web in general, are tremendously fault tolerant
  • Errors in front-end code are not perceived to be as impactful as back-end errors. Since this is client-side code, no sensitive data will get lost or accidentally deleted. As long as the back-end is safe, data is safe
  • JavaScript is challenging to test

While it's true that errors in the front-end are not as likely to permeate a robust back-end layer and cause trouble, there are real threats out there, such as XSS attacks, which are enabled by front-end and back-end code alike.

Why is testing JavaScript hard?

I could give a list of reasons why testing JavaScript is hard, but it all boils down to it being a dynamic language. There's no compiler. Sometimes that is great; we've come to love the language for its dynamic nature. However, it also makes testing harder.

As such, our first line of defense should be linting. This is the closest we have to a compiler, in terms of assurance that our code won’t break.
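
For instance, here's a minimal sketch of a linting setup, assuming JSHint as the linter: a .jshintrc file at the root of the project can turn on a few of the stricter checks. The option names below are standard JSHint options: undef flags variables that are used without being declared, unused flags variables that are declared but never used, eqeqeq requires === and !== over == and !=, and browser predefines globals such as window and document.

{
    "undef": true,
    "unused": true,
    "eqeqeq": true,
    "browser": true
}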

But that obviously isn’t enough. Linting is just the first step in the right direction. Our code should be tested.

Back to the dynamic nature of JavaScript

I think another important factor in testing is visibility. In statically typed languages such as C#, variables can be private, public, protected, internal, or some combination of those. In JS, it's either private or public.

There are no statically defined interfaces. You might be used to interfaces such as:

public interface ITrackable
{
    int TrackingNumber { get; }
    void Track();
    bool Untrack();
    bool IsBeingTracked { get; }
}

Bear with me for this small example of a testable class, written in C#:

public class Testable
{
    private readonly ITrackable _trackable;
    
    public Testable(ITrackable trackable)
    {
        _trackable = trackable;
    }
    
    public bool CallMeTracy()
    {
        _trackable.Track();
        return true;
    }
}

The code doesn't make any sense, I know. The point is that, using Dependency Injection, Testable becomes very easy to test. Here's a sample test:

[TestFixture]
public class TestableTests
{
    private Testable testable;
    
    [SetUp]
    public void Setup()
    {
        var mock = new FakeTrackable();
        testable = new Testable(mock);
    }
    
    [Test]
    public void should_call_me_tracy_and_return_true()
    {
        bool result = testable.CallMeTracy();
        
        Assert.IsTrue(result, "Expected CallMeTracy to return true.");
    }
}

public class FakeTrackable : ITrackable
{
    public void Track()
    {
    }

    // ... other implementation stubs ...
}

How do we reach a similar state of affairs in JavaScript? We simply can’t. We must adapt to the dynamism, embrace it.

How to test, then?

It’s not as bad as you might be thinking right now. It’s just a matter of changing the way you think about testing.

JavaScript testing needs to be even more thorough. Your code can’t statically provide the interface you desire? Then write tests to ensure it exposes that interface. Welcome to TDD!

Let's look at a similar example in JavaScript:

function Testable(trackable){
    this.callMeTracy = function(){
        trackable.track();
        return true;
    };
}

And the test might be something like this:

describe('Testable', function(){
    it('should return true when calling him Tracy', function(){
        var him = new Testable({
            track: function(){
                // this is just a mock
            }
        });
        
        expect(him.callMeTracy).toBeDefined();
        expect(him.callMeTracy()).toBeTruthy();
    });
});

Obviously, a clear difference here is that trackable can be anything, unless it's constrained by guard clauses (a sketch of one follows below). But that's generally something JavaScript developers shy away from, given the sheer power the flexibility provides.
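
For instance, a guard clause for the Testable constructor above might look something like this, throwing early if the dependency doesn't expose the method we rely on:

function Testable(trackable){
    // guard clause: fail fast if the dependency doesn't look like a trackable
    if (!trackable || typeof trackable.track !== 'function'){
        throw new TypeError('Testable expects a trackable with a track() method');
    }

    this.callMeTracy = function(){
        trackable.track();
        return true;
    };
}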

Dependency Injection in JavaScript

There's a catch, though: injecting dependencies in JavaScript is kind of a mess.

If you are writing tests for Node.js code, you might be in luck. I have been using proxyquire, which basically allows you to test modules and mock their dependencies on other modules loaded with require, without modifying a single line of source code. It takes some setting up, but it has made me pretty happy thus far.
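
As a rough sketch (the file names here are made up for illustration), a spec might use proxyquire to swap out a ./tracker module that the code under test pulls in with require:

// testable.js: hypothetical module under test
var tracker = require('./tracker');

module.exports = function callMeTracy(){
    tracker.track();
    return true;
};

// testable.spec.js: stub out ./tracker without touching testable.js
var proxyquire = require('proxyquire');

var callMeTracy = proxyquire('./testable', {
    './tracker': { track: function(){ /* mock, does nothing */ } }
});

// callMeTracy() now runs against the stub instead of the real ./tracker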

Browser code is a different story; it often follows patterns similar to this:

!function(window, $){
    window.myThing = {
        annoy: function(){
            alert('about to become very annoying!');
            $('a, span, b, em').wrap('<marquee/>');
        }
    };
}(window, jQuery);

This kind of code is indeed hard to test, but you could always just load the affected JS file in isolation, after creating stubs for the global objects you need. An example would be:

var window = {}, // bare stub: just enough for the module to attach myThing to
    jQuery = function(){
        return {
            wrap: function(){
                // no-op: we only need wrap to exist
            }
        };
    };

Once you get tired of helplessly mocking your way out of trouble, you should use a real stubbing and mocking framework, such as Sinon.JS. Also, remember to use spies to verify that callbacks (such as callMeTracy and wrap) are invoked, and passed the correct parameters!
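
As a minimal sketch with Sinon.JS, reusing the Testable constructor from earlier, a spy can stand in for track and then be asked how it was called:

describe('Testable', function(){
    it('should track him when calling him Tracy', function(){
        var track = sinon.spy();
        var him = new Testable({ track: track });

        him.callMeTracy();

        // the spy records every call, so we can assert it was invoked exactly once
        expect(track.calledOnce).toBeTruthy();
    });
});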

Unit Testing Frameworks

I like Jasmine for my unit testing, but there are plenty of frameworks to choose from. Mocha is another popular one.

Once you realize that testing in JavaScript is not that bad, and look at it from another perspective, you can start exploiting the very dynamic nature you feared to help you build even better tests!

A pattern I commonly use when defining unit tests in Jasmine is to prepare a list of test cases (expected input and output), and then run them all at once. Here's an example taken directly from one of my GitHub repositories:

describe('test cases', function(){
    var cases = [],
        context = {
            plain: 'plain',
            foo: {
                bar: 'baz',
                undef: undefined,
                nil: null,
                num: 12
            },
            color: 'red',
            how: { awesome: 'very' }
        };

    function include(input,output){ cases.push({ input: input, output: output }); }

    include('@plain','plain');
    include('@foo.bar','baz');
    include('@foo.undef',undefined);
    include('@foo.nil',null);
    include('@foo.num',12);
    include('@@foo.bar','@foo.bar');

    cases.forEach(function(testCase,i){
        // text is the module under test, required elsewhere in the spec
        var replace = text.replace(Object.create(context));

        it('should return expected output for case #' + (i+1), function(){
            expect(replace(testCase.input)).toEqual(testCase.output);
        });
    });
});

This is something that simply cannot be accomplished statically. You could get there with reflection, but it just feels unnatural in statically typed languages.
