How we test a browser

Browser testing has come a long way in the last 15 years. Back then I worked for a small embedded browser company whose test team manually checked websites. This was tedious and inefficient, as there are only so many sites a person can visit in a day.

When I joined Ekioh, I was pleased to see they had taken a more modern approach from the start. There was a genuine passion for product stability and a strong desire to avoid the embarrassment of regression bugs.

Security

It’s not just embarrassing when your product crashes; it can also be a security concern. Modern websites are extremely complex and usually rely on third-party JavaScript libraries. This complexity, and the separation of control it brings, can inadvertently lead to browser instability. Hackers prey on this instability as a way to take control of a device.

Ensuring stability

The earlier we spot an issue, the easier it is to fix. We designed Flow, our multithreaded browser, to be thread safe, and ThreadSanitizer helps ensure we maintain that thread safety. We also use Clang Static Analyzer, Valgrind and AddressSanitizer so that memory corruption and uninitialised variables are spotted immediately. These tools run after each code check-in.
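
To give a flavour of what that post-check-in step can look like, here is a minimal sketch in Python that runs the test suite under each sanitizer build and fails if any report appears. The build paths, test binary and Valgrind invocation are placeholders rather than our actual configuration.

import subprocess
import sys

# Markers printed when a sanitizer detects a problem.
SANITIZER_MARKERS = (
    "WARNING: ThreadSanitizer",
    "ERROR: AddressSanitizer",
    "uninitialised value",  # Valgrind memcheck wording
)

# Hypothetical sanitizer builds of the test suite.
BUILDS = {
    "tsan": ["./build-tsan/run_tests"],
    "asan": ["./build-asan/run_tests"],
    "valgrind": ["valgrind", "--error-exitcode=1", "./build-release/run_tests"],
}

failed = False
for name, cmd in BUILDS.items():
    result = subprocess.run(cmd, capture_output=True, text=True)
    output = result.stdout + result.stderr
    if result.returncode != 0 or any(m in output for m in SANITIZER_MARKERS):
        print(f"[{name}] sanitizer report or test failure")
        failed = True

sys.exit(1 if failed else 0)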

Targeting the embedded and consumer electronics markets means we need to support a large number of toolchains. These vary in age, in their level of C++ support and in their support for different chip architectures. Sometimes we’ll get an error or warning from one compiler, so we refactor that piece of code to maintain support for that toolchain. To manage this, we build our products for each toolchain every night, which helps spot potential problems well ahead of deployment.
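
A nightly build matrix of this kind can be pictured with the following sketch; the toolchain names and the build.sh wrapper are illustrative assumptions, not our real setup.

import subprocess

# Illustrative toolchain list: different ages, C++ levels and architectures.
TOOLCHAINS = [
    "gcc-4.8-armv7",
    "gcc-9-aarch64",
    "clang-11-x86_64",
    "vendor-sdk-mips",
]

failures = []
for toolchain in TOOLCHAINS:
    # Assume a wrapper script that selects the right CC/CXX and sysroot.
    result = subprocess.run(
        ["./scripts/build.sh", "--toolchain", toolchain],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        failures.append((toolchain, result.stderr[-2000:]))  # keep the tail

for toolchain, log in failures:
    print(f"BUILD FAILED: {toolchain}\n{log}\n")
print(f"{len(TOOLCHAINS) - len(failures)} of {len(TOOLCHAINS)} toolchains built cleanly")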

Testing feature compliance

In common with all other browsers, we use the W3C’s web-platform-tests project to help us test each feature’s compliance. We enable new tests from the project for each new feature we develop. Every hour we run a subset of these browser tests and each night we run the full set. This ensures that our JavaScript APIs behave as content developers expect. It also means that any regressions are quickly spotted and fixed.
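
The hourly-subset versus nightly-full split can be sketched roughly as below. The run-wpt-test harness and the manifest file are hypothetical stand-ins for however the browser is actually driven against the web-platform-tests server.

import hashlib
import subprocess
import sys

def load_enabled_tests(manifest="wpt-enabled.txt"):
    # One enabled test path per line, e.g. /dom/nodes/Node-cloneNode.html
    with open(manifest) as f:
        return [line.strip() for line in f if line.strip()]

def hourly_subset(tests, one_in=10):
    # Deterministic 1-in-N sample so the hourly subset is stable between runs.
    return [t for t in tests
            if int(hashlib.sha1(t.encode()).hexdigest(), 16) % one_in == 0]

def run(tests):
    failures = []
    for test in tests:
        # Hypothetical harness that loads one test and reports pass/fail.
        if subprocess.run(["./run-wpt-test", test]).returncode != 0:
            failures.append(test)
    return failures

tests = load_enabled_tests()
selected = tests if "--full" in sys.argv else hourly_subset(tests)
failed = run(selected)
print(f"{len(selected) - len(failed)} of {len(selected)} passed")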

The web is an incredibly complex place. Content developers can achieve their desired look and functionality in many different ways, so it’s important to test a wide variety of sites. The content itself is also constantly changing, making it vital to repeat this browser testing regularly. To keep on top of this, we run automated scripts so that Flow visits a vast number of external websites every day.

For each site Flow visits, our scripts check for stalls, asserts and crashes. These runs also test long-term endurance, because we don’t restart the browser between tests. Sites are checked in batches, and multiple batches are tested simultaneously across a bank of test machines. Each site visit lasts approximately 60 seconds.
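
A simplified sketch of one batch worker is shown below. The BrowserSession control API is entirely hypothetical, but it illustrates the key point: the browser instance stays alive across the whole batch and is only restarted after a crash.

import time
from browser_harness import BrowserSession  # hypothetical control API

VISIT_SECONDS = 60

def run_batch(urls, log):
    session = BrowserSession()          # one instance for the whole batch
    for url in urls:
        session.navigate(url)
        time.sleep(VISIT_SECONDS)       # let the page load, animate and settle
        if not session.is_alive():
            log.write(f"CRASH {url}\n")
            session = BrowserSession()  # only restart after a crash
        elif session.is_stalled():      # e.g. no new frame produced recently
            log.write(f"STALL {url}\n")
        elif session.assert_log():
            log.write(f"ASSERT {url}: {session.assert_log()}\n")

with open("batch-urls.txt") as f, open("results.log", "w") as log:
    run_batch([u.strip() for u in f if u.strip()], log)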

Benchmarking performance

There is another aspect of Flow that’s just as important as its feature coverage and stability. Because Flow is a super-fast, multithreaded browser, maintaining its performance is paramount. Every night, scripts test Flow’s performance using a combination of external benchmarks and internal stress tests. Data from these tests are automatically plotted on graphs so that any downward trend in performance is quickly spotted.
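
As an illustration of that nightly tracking, this sketch appends each night’s scores to a CSV history and plots every series so a downward trend stands out. The benchmark names and numbers are invented for the example.

import csv
import datetime
import matplotlib.pyplot as plt

HISTORY = "benchmark-history.csv"

def append_scores(scores):
    with open(HISTORY, "a", newline="") as f:
        writer = csv.writer(f)
        today = datetime.date.today().isoformat()
        for name, value in scores.items():
            writer.writerow([today, name, value])

def plot_history():
    series = {}
    with open(HISTORY) as f:
        for date, name, value in csv.reader(f):
            dates, values = series.setdefault(name, ([], []))
            dates.append(date)
            values.append(float(value))
    for name, (dates, values) in series.items():
        plt.plot(dates, values, label=name)
    plt.legend()
    plt.xticks(rotation=45)
    plt.savefig("benchmark-trends.png", bbox_inches="tight")

# Tonight's numbers would come from the benchmark harness; these are made up.
append_scores({"css-animation-fps": 59.8, "page-load-ms": 412.0})
plot_history()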

Looking to the future

Our scripts capture the console output from all of the websites we visit during our automated testing. We analyse this output for features that we don’t currently support. This helps us understand how content development is evolving and ensures we can prioritise our feature roadmap.
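
A minimal sketch of that analysis might tally console messages that typically indicate a missing API, so the most frequently hit gaps float to the top. The log layout and message patterns here are assumptions.

import collections
import glob
import re

# Messages that typically appear when content touches an API we lack.
PATTERNS = [
    re.compile(r"\S+ is not a function"),
    re.compile(r"\S+ is not defined"),
    re.compile(r"Cannot read propert(?:y|ies)\b.*undefined"),
]

counts = collections.Counter()
for path in glob.glob("console-logs/*.log"):   # assumed: one log per site
    with open(path, errors="replace") as f:
        for line in f:
            for pattern in PATTERNS:
                match = pattern.search(line)
                if match:
                    counts[match.group(0)[:80]] += 1
                    break

# The most frequently hit gaps are candidates for the feature roadmap.
for message, n in counts.most_common(20):
    print(f"{n:6d}  {message}")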

Browser testing really has come a long way…