Automated Integration Testing with Cypress

I’ve had the pleasure of working with Cypress lately and have been impressed by the testing framework. The test runner interface lets you inspect the state of the application at each step of a test, and you can “pin” a step for further inspection.

The debugging process is easier when you can visually watch the tests run and see more detail in the console within the test runner. I also like the ability to log output that shows up in the test runner interface; when you click a logged event, its details appear in the browser’s console.

Visual test runner allows for faster debugging

When writing the integration tests, I’ve found that following the best practices guide helps with organizing and setting up the [data-cy] attribute selectors. I also found that organizing selectors in an object map makes them easier to write and to update when the interface or a test changes.
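As a minimal sketch of such an object map (the element names and selectors here are hypothetical, not from a real project), each key names a UI element and each value holds its [data-cy] selector, so a spec references the key instead of a raw string:

```javascript
// Hypothetical selector map: one place to update if markup changes.
const selectors = {
  emailInput: '[data-cy=email-input]',
  passwordInput: '[data-cy=password-input]',
  submitButton: '[data-cy=submit-button]',
};

// In a spec file you would write, e.g.:
//   cy.get(selectors.submitButton).click();
// instead of repeating the raw attribute selector in every test.
module.exports = selectors;
```

If a designer renames a button, only the map changes; every spec that uses `selectors.submitButton` keeps working.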

Grouping tests

When you start to build your integration tests, it’s helpful to group them. You can always let the test runner pick up and run your whole suite, but with groups you can run a subset of tests as needed. This is especially useful when you want to run a group of smoke tests against a production URL.
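One way to get this effect from the CLI is to keep each group in its own folder and point the runner at it; a sketch, where the folder layout and the production URL are assumptions:

```shell
# Run only the smoke-test group, overriding the base URL so the
# same specs run against production instead of a local server.
npx cypress run \
  --spec "cypress/e2e/smoke/**/*.cy.js" \
  --config baseUrl=https://example.com
```

Both `--spec` and `--config` are standard Cypress CLI flags, so no extra tooling is needed for this kind of subset run.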

Reporting options

You can select from several different reporter options and define them in the configuration. I went with the mochawesome npm package, and it wasn’t difficult to add the screenshot context so that screenshots show up within the failed-test reports.
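A configuration sketch for this setup, assuming the Cypress 10+ config file format and a `cypress/reports` output path (both assumptions, adjust to your project):

```javascript
// cypress.config.js — configure mochawesome as the reporter.
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  reporter: 'mochawesome',
  reporterOptions: {
    reportDir: 'cypress/reports',
    overwrite: false, // keep one JSON file per spec run
    html: false,      // skip per-spec HTML; render it after merging
    json: true,
  },
});
```

Emitting JSON per spec and deferring HTML generation is what makes the merge step in the next section possible.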

When you generate your reports as JSON, the report path will contain one JSON file per spec, which you will want to merge into a single report. Once these are merged, you can generate a single HTML file from the combined report. Here’s a good example: 3-steps to awesome test reports with cypress
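The merge-then-render step can be done with the mochawesome companion tools; a sketch, where the `cypress/reports` paths are assumptions:

```shell
# Merge the per-spec JSON files into one combined report, then
# render a single HTML file from it with marge
# (the mochawesome-report-generator CLI).
npx mochawesome-merge cypress/reports/*.json > cypress/reports/merged.json
npx marge cypress/reports/merged.json --reportDir cypress/reports/html
```

Running these after the whole suite finishes gives you one HTML page covering every spec instead of a file per test.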

Network Requests

This is an area where Cypress really shines; I found it very easy to get going with creating assertions on XHR requests. For any request you can choose between stubbing out the response and simply watching for the event to happen. This might be one of the best ways to verify that web services are performing as expected, and it will expose bugs earlier on.

I found this especially useful for testing multi-step forms that have validation and dynamic content. You can also test fields such as autocomplete drop-downs, which let you simulate keyboard events such as {downarrow}{enter}.
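A spec sketch tying these pieces together (the selectors, endpoints, and fixture name are assumptions, and this runs inside the Cypress runner, not plain Node): one request is stubbed with a fixture, another is only watched, and the autocomplete is driven with keyboard events.

```javascript
describe('signup form', () => {
  it('submits and reports success', () => {
    // Stub the form submission with a canned fixture response.
    cy.intercept('POST', '/api/signup', { fixture: 'signup-success.json' }).as('signup');
    // Watch this request without stubbing it.
    cy.intercept('GET', '/api/countries').as('countries');

    cy.visit('/signup');
    cy.wait('@countries'); // page loaded its dynamic content

    cy.get('[data-cy=email-input]').type('user@example.com');
    // Drive the autocomplete with simulated keyboard events.
    cy.get('[data-cy=country-input]').type('Can{downarrow}{enter}');
    cy.get('[data-cy=submit-button]').click();

    // Assert on the intercepted request's response.
    cy.wait('@signup').its('response.statusCode').should('eq', 200);
  });
});
```

Whether you stub or just watch is per-intercept, so the same spec can isolate one service while exercising another for real.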

Continuous Integration

You can run tests within a CI/CD pipeline and don’t necessarily have to use the Dashboard feature. A lot of the cloud examples run the reporting process in a post-test routine, which seems to be a recommended pattern. In my experience, the async nature of writing your own runner script makes it a bit of a challenge to get the reporting options just right.
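The post-test pattern can be sketched as a small CI script (paths and reporter setup are assumptions carried over from the reporting section): capture the test exit code, build the report either way, then exit with the original status so the pipeline still fails on failing tests.

```shell
# Run the suite headlessly; remember the exit code instead of
# aborting, so the report is generated even on failure.
npx cypress run; status=$?

# Post-test routine: merge per-spec JSON and render the HTML report.
npx mochawesome-merge cypress/reports/*.json > cypress/reports/merged.json
npx marge cypress/reports/merged.json --reportDir cypress/reports/html

# Propagate the original test result to the CI system.
exit $status
```

Keeping the reporting step separate from the test run avoids the async ordering problems you hit when one script tries to do both at once.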