The pace of change is quickening when it comes to testing websites before they are deployed. Five years ago you might have been concerned about your site working on three different browsers, and how those browsers performed on a desktop PC and a laptop.
With the rise of responsive websites, designers and developers are being forced to re-learn their craft in an attempt to stay relevant. This has meant changes at every level, from planning all the way to deployment.
At Splice we've had to completely overhaul our process to accommodate this, and one of the key changes is to our testing procedures.
Out with the Old...
Before Responsive Web Design exploded, testing was a fairly simple process, as the usage of old, non-standards-compliant web browsers slowly tailed off.
The entire process involved three stages:
- Functionality - does the website work as expected?
- Aesthetics - does the website match the designs?
- Consistency - does the website work and look correct in all browsers?
On the whole, it was a pretty robust process that caught and resolved lots of issues before they were seen or found by end users. However, the one issue that caused the most problems was ‘consistency’ - did the website display and behave as intended in all browsers?
To most users the browser experience is a seamless one. You open the program, visit various websites and enjoy your ‘surf’. But to designers, each browser had its own faults and quirks, which could easily be addressed ‘under the hood’. Resolution, however, wasn't something we needed to consider, because the standard practice was to "fix" a website to a set width (usually 960px).
So even a complex e-Commerce site could be tested in a couple of days, and web designers were safe in the knowledge that their creations looked good on a laptop or desktop. But then things changed... a lot.
Along Came Mobile
Apple released the iPhone and the iPad, Google released Android, and the number of people using these devices to visit websites grew rapidly. The industry was forced to adjust to seven major operating systems (iOS, Android, Blackberry, WebOS, Windows Phone 7 and 8, and Symbian), around six major browsers (Safari, Android Browser, Chrome for Mobile, Blackberry, Firefox for Mobile and Opera Mobile), and about twenty-one lesser-used browsers (I won't list these, but some examples are Dolphin and Opera Mini).
Add to this the growing number of new mobile devices with various physical sizes, pixel densities and resolutions, and the question became: "How are we going to test things now?!"
There wasn't (and still isn't) a hard-and-fast solution, but it meant returning to the drawing board to come up with a viable, cost-effective approach.
Back to Stats and Standards
Secondly, we studied real-world usage statistics to decide where best to spend the testing budget.
For example: if 21% of a website's visitors are using an iPhone with Safari (a real statistic I just grabbed from one of our highest-traffic sites) and 0.01% are using a Blackberry 8520 with the default browser, then it is 2,100 times more productive to test on the iPhone, and time should be allocated accordingly.
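The arithmetic above can be sketched as a simple budget split. This is just an illustration, not our actual tooling; the traffic shares are the example figures from the text, and the 40-hour budget is an assumed number:

```python
# Allocate testing time in proportion to each platform's share of traffic.
# Shares are the illustrative percentages from the example above.
traffic_share = {
    "iPhone / Safari": 21.0,     # percent of visitors
    "Blackberry 8520": 0.01,     # percent of visitors
}

total = sum(traffic_share.values())

# Relative productivity of testing one platform versus another:
ratio = traffic_share["iPhone / Safari"] / traffic_share["Blackberry 8520"]
print(f"Testing iPhone/Safari is {ratio:,.0f}x more productive")

# Proportional split of an assumed 40-hour testing budget:
budget_hours = 40
for platform, share in traffic_share.items():
    hours = budget_hours * share / total
    print(f"{platform}: {hours:.1f} hours")
```

In practice you'd feed in the full browser breakdown from your analytics rather than two platforms, but the principle is the same: test time follows traffic.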
There's no "catch all" method to ensure everything is perfect everywhere, but if you can confidently cover 95% of all possible viewing platforms while following standards, you should be left with less than 1% of visitors hitting conversion-preventing issues.

By Luke Hopkins