Testing CSS

For some time now I have been wondering why we test our source code so thoroughly, but when it comes to CSS we simply stop caring.

Maybe I'm wrong; I'm still relatively new to the TDD business. But looking at my colleagues, everybody is quite eager to have their Java or JavaScript code covered. When it comes to CSS, however, there isn't much help around for writing tests.

Looking at the test pyramid, tests through the UI are said to be brittle: you are testing the whole stack from top to bottom, and anything anywhere can go wrong. However, that doesn't mean that testing the UI itself needs to be similarly brittle. You can mock out the underlying functionality that the process rendering the UI depends on.

A broken UI can break the user experience just like faulty functionality (i.e. source code) does. Especially in a bigger project where several people are involved, possibly across teams, it is hard to keep the UI consistent and free of errors.

In my current project a glitch in the UI can keep the product owner from pushing the next release candidate to production. And there are several teams that together deliver a single page to the user, meaning that bits of the page, including the layout, come from different sources. In the end we need to make sure that everything comes together just right.

On top of that there is the browser issue. Each browser renders a page quite differently, and consistently checking that changes don't break the layout in browser X can be a very tedious manual task.

I've heard of people using Selenium for screenshot comparisons, i.e. regression testing against reference images. One example is Needle. There have also been undertakings to test actual values of properties on DOM elements, e.g. at Nokia Maps.

Why am I saying all that? Because I'm currently looking into developing yet another CSS testing tool that I want to share with you.

My take on this problem builds on the image comparison technique, similar to the Selenium-based tools. However, my approach is to keep the stack simple and to make the tool dead simple to use: everything should be done from inside a browser window.

With the feedback from my colleagues at ThoughtWorks I've set up a small project on Github to implement an experimental solution, with the goal of driving out a feasible approach.

The steps to verify a layout should be pretty straightforward: a page (either from production or a mock page) that includes the CSS under test is registered with a "regression runner". That is a simple HTML page running the test suite (if you know Jasmine and its SpecRunner.html you get the point), as sketched below. On the first run the page under test is rendered and can be saved as a future reference. In subsequent runs this image is what the page is compared against. Running the tests is as simple as opening the runner in a browser. If the layout changes, the regression test will fail. If the change was intentional, a new reference needs to be created; if not, you have found your bug.
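
To illustrate, such a runner page could look like the following sketch. The file regressionrunner.js and the add()/execute() calls are made-up names for this example, not an actual API:

    <!DOCTYPE html>
    <html>
        <head>
            <script src="regressionrunner.js"></script>
            <script>
                // register the page whose layout should stay stable
                regressionRunner.add("pageUnderTest.html");

                window.onload = function () {
                    // render the page, compare it against the stored
                    // reference image and report the result right here
                    regressionRunner.execute();
                };
            </script>
        </head>
        <body></body>
    </html>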

Technically this works by rendering the page under test to an HTML5 canvas element inside the browser and using a handy JS library to diff the canvas against the reference image.
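
Conceptually the check then boils down to a few lines. The sketch below assumes the HTML-to-canvas rendering from the rasterizeHTML.js post below and imagediff.js for the comparison; reportResult() is a made-up placeholder:

    // render the page under test, then diff the canvas and the reference
    rasterizeHTML.drawURL("pageUnderTest.html", canvas, function () {
        var passed = imagediff.equal(canvas, referenceImage);
        reportResult(passed); // hypothetical reporting hook
    });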

Open points: so far this works in Firefox only, and as browsers do not render pages consistently across systems, the solution is local-only.

Do watch the screencast to see how it works: 

rasterizeHTML.js – Drawing HTML to the browser’s canvas

I wanted to have my HTML documents in a rasterized form and had a look at the browser's canvas. It turns out that you can draw a lot of things with a canvas inside an HTML page; however, you cannot easily draw an HTML page inside a canvas.

Digging a bit deeper I found Robert O’Callahan’s post on how to render HTML to a canvas by embedding the code inside an SVG image. There’s also some documentation back at the Mozilla Developer Network on how to achieve this based on the blog post.

The idea is pretty simple. SVG has a <foreignObject> element which allows almost any HTML to be embedded. Such an SVG can then easily be drawn to the canvas using context.drawImage().
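
The core of the trick fits into a few lines, along the lines of the examples in those two sources:

    var canvas = document.getElementById("canvas"),
        ctx = canvas.getContext("2d"),
        img = new Image(),
        data = '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">' +
               '<foreignObject width="100%" height="100%">' +
               '<div xmlns="http://www.w3.org/1999/xhtml" style="font: 18px serif">' +
               'Hello <em>world</em>' +
               '</div>' +
               '</foreignObject>' +
               '</svg>';

    img.onload = function () {
        // once the SVG (with the HTML inside) has loaded, paint it
        ctx.drawImage(img, 0, 0);
    };
    img.src = "data:image/svg+xml," + encodeURIComponent(data);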

There is only one issue: rendering such SVGs is quite restricted. Loading of external resources is not allowed. The only way out is embedding CSS and images into the document, the latter by using data: URIs. If embedding of resources is done dynamically via JavaScript there are further restrictions: unless techniques such as CORS are used, you may only load content from the same origin.
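
Inlining an image, for example, can be done along these lines (a rough sketch using XMLHttpRequest and FileReader; error handling omitted):

    // replace an <img>'s src with a data: URI so that the SVG
    // no longer depends on external resources
    function inlineImage(img, callback) {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", img.src, true);
        xhr.responseType = "blob";
        xhr.onload = function () {
            var reader = new FileReader();
            reader.onloadend = function () {
                img.src = reader.result; // "data:image/png;base64,..."
                callback();
            };
            reader.readAsDataURL(xhr.response);
        };
        xhr.send();
    }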

Long story short, I sat down and started a small library that takes care of all the stuff that is needed to draw HTML to the canvas. Most of the code deals with finding elements in the DOM that need to be replaced, loading these resources and embedding them in the document. There are three convenience methods for easily drawing a DOM, an HTML string or a URL to the canvas.

Here’s a simple example of how to use the code:

https://gist.github.com/2962400
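
If you don't want to leave this page for the gist, usage looks roughly like this (check the README for the exact signatures):

    var canvas = document.getElementById("canvas");

    // draw an HTML string to the canvas ...
    rasterizeHTML.drawHTML('Some <span style="color: green;">HTML</span>', canvas);

    // ... or load and draw a whole page, with a callback for when it is done
    rasterizeHTML.drawURL("page.html", canvas, function (image) {
        // the canvas now shows page.html
    });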

After playing around with this for some days now I should mention that browser support seems a bit fragile. Firefox and Chrome are not consistent in rendering background images, and sometimes need a gentle reload to do so. Both Chrome and Safari have an issue with the origin-clean flag, which made testing a bit more difficult. Issues that turn up will be noted down in the wiki on Github. You can find the code here. I should probably file a few bug reports as a follow-up.

For me it was the first time dealing with a lot of asynchronous calls, and it was fun to see how easily this could be done in JavaScript. Using JSHint and PhantomJS to run the Jasmine tests was easy and just works. rasterizeHTML.js also uses imagediff.js for testing that the results look just like the reference images, and Travis CI makes sure I don't break the build 🙂 What proved difficult during testing and also implementation was that all three browsers, Firefox, Chrome and Safari, behaved differently (and basically PhantomJS as a fourth). This is especially interesting for the two WebKit-based browsers: Chrome supports the recently deprecated BlobBuilder interface, while Safari is waiting for the official Blob specs to land. In some respects Chrome was more similar to Firefox than to WebKit. One way of assuring full test coverage was to fall back to a simple manual test on Chrome and Safari for some code parts, due to said origin-clean flag.

Get your JUnit XML reports (e.g. from Jasmine) in readable HTML

Whether you do Test Driven Development or just write your tests last, hopefully you have a good unit testing suite covering your code. It is very likely that you end up with unit test results in the JUnit XML format. Here is a short snippet on how to convert your XML reports into readable HTML.

In my current project we use Gradle as a build tool, and since it is easy to call Ant from there, we will use the nice JUnitReport task. The main issue was getting the classpath right, and the solution to that was to redefine the Ant task so as to pass the right path along.

In addition, if you are using Jasmine (e.g. under PhantomJS), which is currently still waiting for HTML reporting, together with the JUnitXmlReporter, you end up with consolidated test suites where several testsuite entries are combined. Here the solution is to explicitly tell the reporter to omit that behaviour.
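
With jasmine-reporters this comes down to a constructor argument; a minimal sketch (the save path is just an example, the second argument turns consolidation off):

    // write one XML file per testsuite instead of a consolidated one
    jasmine.getEnv().addReporter(
        new jasmine.JUnitXmlReporter("test-reports/", false));
    jasmine.getEnv().execute();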

Without further ado here is the Gist:

https://gist.github.com/2295010

Continuous Integration for your jQuery plugins

TL;DR If you have tests for Javascript code written with QUnit or Jasmine that depend on the Document Object Model (DOM), here is a way to set up Travis CI using PhantomJS.

My colleagues recently made me aware of a relatively new continuous integration software called Travis CI which, originally built to serve the Ruby community, is a distributed build service able to run code in various languages, including Python & Javascript. As far as I know, it currently only works together with Github, so your code needs to be hosted there.

As Travis’ workers (the ones running the actual build) come with node.js included, I played around a bit getting my QUnit tests to run with jsdom and the QUnit node adaptation. While there are some guides out there on how to test your Javascript with node.js, it gets complicated when depending on the DOM, which most likely is the case when you are developing a plugin for jQuery. However, after reading criticism on testing against something that the original system didn’t target (or are you running jQuery on the server side?) I gave up on that pretty quickly.

Now, in a different context I came upon PhantomJS, a headless browser stack based on Qt's WebKit. It provides an executable that can work without a graphical system (hence the name headless) and is perfectly suited for testing environments. Ariya, the guy behind PhantomJS, is clearly aware of that and already provides the proper integration for running tests based on QUnit and Jasmine. The test runner is a neat piece of code that just scrapes the QUnit output from the generated HTML. Installing it locally was easy, and running the test suite gives a short summary of how many tests were run and how many failed, if any.

The problem was getting PhantomJS running on Travis CI. Travis CI comes with a good set of software (and already includes some of PhantomJS' dependencies); so far, though, no one has written a cookbook for PhantomJS. However, this guy came up with an easy solution: after all, the worker is just a (virtual) Ubuntu machine and you can install anything on it.

So here is the quick run through: In the .travis.yml which describes the build, we

  • run a small install script setting up the remaining dependency of PhantomJS and PhantomJS itself,
  • start up a virtual framebuffer (xvfb, “headless” is not completely true when on Linux) running on display :99
  • and finally run PhantomJS with the QUnit (alternatively Jasmine) test runner on our test suite.

Here is the full .travis.yml file:

rvm:
  - 1.9.3
before_script:
  - "sudo bash install_phantomjs > /dev/null"
  - sh -e /etc/init.d/xvfb start
script:
  - DISPLAY=:99.0 phantomjs run-qunit.js test/index.html

The first line indicates that we want Ruby version 1.9.3, even though we don't actually need it. I believe we have to choose some target system, so there it goes.

Here is the install_phantomjs script:

#!/bin/bash
apt-get install libqtwebkit-dev -y
git clone git://github.com/ariya/phantomjs.git
cd phantomjs
qmake-qt4
make
cp bin/phantomjs /usr/local/bin/

We are ready to test this on Travis. If you haven’t registered there yet, get an account, set up the hook by visiting your profile page, and commit your own .travis.yml together with the PhantomJS install script and the relevant test runner described above. You should pretty quickly find your project in the build queue on travis-ci.org.

Happy testing!

Making Deniz a single-file-app

Following up on the previous post, Deniz, the RDF browser written in HTML, Javascript & CSS, can now be distributed as one single file.

This is possible thanks to the embedding approach described in the previous post.

The last missing step was the image embedding part, which is nicely solved through https://github.com/nzakas/cssembed. In addition, Deniz now goes through the Google Closure Javascript compiler and Yahoo's YUI Compressor for CSS to save bandwidth.

Thanks to the Makefile by Benjamin Lupton (https://github.com/balupton/jquery-sparkle/blob/master/Makefile) it was easy to set the process up for Deniz.

Two steps will build the file:

$ make build-update

to download JAR dependencies, and

$ make

to finally minimize and integrate all contents.

That’s it.

Embedding external CSS & Javascript into the base HTML document

So I'm stuck on the train for some hours; why not solve a problem that is far from pressing?

I am developing a web application based only on HTML, CSS & Javascript, called Deniz (http://cburgmer.github.com/deniz/). It's a browser for RDF data and only needs a browser to run in, as it connects to public data endpoints. So while it is built up from many different sources, it would be nice if the whole application could be delivered in a single file. While this could speed up loading, the main idea here is to distribute just one HTML file.

Looking around, there are many services and libraries for compressing and aggregating CSS & JS files, but so far I haven't found a solution for specifically what I am trying to achieve.

I've now come up with an implementation which parses the DOM tree and looks for elements with references to stylesheets and <script> tags referencing external Javascript code. The program reads in the contents of the referenced files and pastes them into the document. This is harder than it initially seems: XHTML, which I assume here, needs to have such data wrapped in a CDATA section. I had to fight with the Python lxml library for some time to get this straight:

  1. The parser needs to be passed strip_cdata=False so that CDATA blocks read in are preserved.
  2. Code needs to be wrapped in an instance of the CDATA class.
  3. A dirty hack quotes the encapsulated CDATA blocks in multi-line comments to accommodate older browsers:

        html.replace('<![CDATA[', '/*<![CDATA[*/').replace(']]>', '/*]]>*/')

  4. While a proper solution would need to parse CSS & Javascript code to quote invalid HTML entities, another dirty hack makes sure that the text ‘</script>’ in Javascript strings gets quoted:

        content = (content.replace('</script>"', '</scr" + "ipt>"')
                          .replace("</script>'", "</scr' + 'ipt>'"))

Warning: this script is not suited to parse arbitrary JS & CSS. It does, though, work for my task.

The source can be found here: http://github.com/cburgmer/deniz/blob/master/embed_media.py

The next step will be to include images as base64-encoded data: URLs.

jquery-shiftenter

Wanting to quickly post an HTML form built around a textarea, I found nothing on the web that would let me press Enter to submit the contents. Similar to how Facebook got rid of their comment button (simply press Enter when finished commenting), I wanted to use the Return key to submit the form while retaining the possibility to create line breaks by hitting Shift+Enter.

I thus came up with jquery-shiftenter, a simple jQuery plug-in that turns your textareas into inputs accepting Enter for submission, while showing a textual hint on how to get newlines.
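
The core idea fits in a few lines. Here is a rough sketch, not the plugin's actual code (the selector is made up):

    // submit on Enter, let Shift+Enter insert a newline as usual
    $("textarea.shiftenter").keydown(function (e) {
        if (e.which === 13 && !e.shiftKey) {
            e.preventDefault();
            $(this).closest("form").submit();
        }
    });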

You can find an example here. The code is on github. Enjoy.

Deniz, a simple Javascript-based RDF browser

I'd like to announce a little project I recently started called “Deniz”, a browser to view your RDF data.

While developing our software Trip, based on an RDF “triple” store, we need to look at our data on a daily basis. Most often this is just for debugging purposes, and sadly we are limited to what the triple stores offer. In the case of Franz AllegroGraph the store ships with an AJAX-based browser, but Virtuoso, for example, only supports a plain SPARQL interface.

Goals

The goal of Deniz is to implement a simple and lightweight browsing application to query RDF stores using SPARQL. More specifically, it builds on top of stores implementing the SPARQL protocol defined by the W3C, which keeps the application dead simple: a SPARQL query string sent to such a server returns a JSON structure of results that can easily be turned into a human-readable table.
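
At its core that is a single AJAX call. A minimal sketch with jQuery (endpoint and query are just examples):

    var endpoint = "http://dbpedia.org/sparql",
        query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";

    $.getJSON(endpoint,
        {query: query, format: "application/sparql-results+json"},
        function (data) {
            // one entry per result row, keyed by the query's variable names
            $.each(data.results.bindings, function (i, row) {
                console.log(row.s.value, row.p.value, row.o.value);
            });
        });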

Inspired by AllegroGraph’s browser it will probably inherit many of its ideas, but so far I want to keep the following points in mind when improving Deniz:

  • Easy to use
    • Deniz is not designed to become a phpMyAdmin for RDF, nor anything near it.
  • Transparent SPARQL usage
    • You can quickly learn SPARQL by looking at the SPARQL code the different views use. The expert in turn will quickly see what the views offer and what they don't.
  • Practical usage
    • We use RDF to solve problems. Deniz should help us do that rather than offer a complete set of operations on the triple store.

SPARUL (SPARQL/Update) might go into the interface in the future, but I can't say for sure.

CORS

One technique needed to access an RDF store via AJAX is CORS (“Cross-Origin Resource Sharing”). CORS offers a standardised way around the same-origin limitation imposed on Javascript requests out of security concerns: the browser will disallow any request made to servers on a domain different from the one the originating page is served from. Via CORS a server can explicitly allow cross-domain requests, and we will use that here.
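
The server-side part is essentially a single response header. An endpoint answering with, for example,

    Access-Control-Allow-Origin: *

tells the browser that pages from any origin may read the response.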

In theory this means that Deniz will work completely without any server-side deployment. You only need a SPARQL endpoint that supports CORS. This is the case, for example, for dbpedia.org, and Deniz already queries DBpedia by default. If the store you want to query lacks such support, read below for a simple solution.

Implementation

Deniz is implemented in Javascript, HTML & CSS, so far my first project with this setting. You can run it from your local hard drive or deploy it on a standard non-CGI/PHP web server. It uses jQuery, jQuery UI (both in a pretty basic way) and CodeMirror as syntax highlighter (can you believe it ships with a SPARQL highlighter by default?). A nice addition is the jQuery history plug-in, which offers back & forward browsing as if Deniz were a full-blown web application.

Currently the monolithic deniz.html could do with some refactoring. This will probably come once my initial feature set is implemented. For example one point missing on my list is easy GRAPH support.

SPARQL protocol proxy

If you either don't have a SPARQL protocol compatible store or need the CORS support described above, then the SPARQL protocol proxy might work for you. It was explicitly started to offer the missing layer for Deniz and is implemented as a small HTTP server written in pure Python. It is far from supporting anything near 100% of the SPARQL protocol and until now has only been tested with Virtuoso, but it might suit your needs.

Demo & License

You can find the demo here. By default dbpedia.org is selected as the endpoint, but you can change it to your own triple store (see above for CORS support, though).

Deniz is released under a new BSD license, so you are pretty much free to do whatever you like with it.

And before I forget, “deniz” is Turkish for “sea”. Now you also improved your language skills while reading about the semantic web, isn’t that nice?

Update: Virtuoso has been released some days ago with CORS support. I am still looking into how to enable it, and eventually I'll find out how.

Update2: See http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/VirtTipsAndTricksGuide… on how to configure Virtuoso.