Increasing Test Coverage

Guy Baron

Head of R&D at Wix.com


Problem

Many years ago I was leading a team of 250 engineers and was tasked with figuring out how much we should invest in our test coverage. The product was suffering from quality deficiencies, and I had no doubt that we had to increase our test coverage. While it was clear that the existing coverage was not sufficient to make an impact on quality, I was still unsure whether we should engage all 250 engineers in the effort.

Actions taken

First off, I conducted a thorough study that produced quantifiable data. The study mapped out our coverage across the different components of the system, segmented by test type (unit test, integration test, etc.). Then I calculated a defect density score that showed how many defects each component suffered, normalized by the size of the software component. That provided a rough quality score to start with.
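The defect density score described above can be sketched as follows. This is a minimal illustration, not the author's actual tooling: the field names, the use of KLOC as the size measure, and the sample numbers are all my own assumptions.

```python
# Sketch of a defect-density score: defects attributed to a component,
# normalized by the component's size (here measured in KLOC).
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    defects: int                       # defects attributed to this component
    kloc: float                        # size in thousands of lines of code
    coverage: dict = field(default_factory=dict)  # test type -> coverage %

    @property
    def defect_density(self) -> float:
        """Defects per 1,000 lines of code -- a rough quality score."""
        return self.defects / self.kloc

# Hypothetical component for illustration only.
checkout = Component("checkout", defects=42, kloc=12.5,
                     coverage={"unit": 55.0, "integration": 30.0})
print(round(checkout.defect_density, 2))  # → 3.36
```

A lower score indicates better quality; normalizing by size keeps large components from looking worse simply because they contain more code.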

I plotted all the data on a graph with two axes -- coverage percentage on the x-axis and defect score on the y-axis. Each point on the graph represented a specific component. From this I was able to identify, for each type of test (unit test, integration test, etc.), the coverage level needed to produce a substantial decrease in the defect rate.

We found that integration tests yielded the best results: a 70 percent integration-test coverage score gave me a clear understanding of how much more we should invest in order to increase quality. Also, since the analysis was segmented by component, we could make deliberate decisions about how much effort to allocate to achieve our goal, and identify the threshold beyond which allocating more resources would not yield comparable returns.
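One simple way to find such a threshold from the (coverage, defect score) points is to test each candidate cutoff and pick the one with the largest drop in mean defect density between components below and above it. This is a hypothetical sketch; the article's 70 percent figure came from real data, and the numbers below are illustrative only.

```python
# Hypothetical (integration coverage %, defect density) points,
# one per component. Illustrative data, not the article's.
points = [
    (20, 9.0), (35, 8.5), (50, 7.8), (65, 7.2),
    (70, 3.1), (80, 2.8), (90, 2.5),
]

def mean(xs):
    return sum(xs) / len(xs)

def best_threshold(points):
    """Return the coverage cutoff with the largest drop in mean
    defect density between components below and above it."""
    best, best_drop = None, 0.0
    for cutoff, _ in points:
        below = [d for c, d in points if c < cutoff]
        above = [d for c, d in points if c >= cutoff]
        if not below or not above:
            continue
        drop = mean(below) - mean(above)
        if drop > best_drop:
            best, best_drop = cutoff, drop
    return best

print(best_threshold(points))  # → 70
```

With this data the sharpest quality improvement appears at the 70 percent cutoff; pushing coverage further past it buys comparatively little, which is the diminishing-returns threshold the article describes.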

Following this strategy, we were able to identify where to invest more. We refined the list of components and ranked them by their criticality within the system. We first tackled the most critical components that lacked integration-test coverage, investing in bringing their coverage up to the threshold. This resulted in improved quality, as the defect rate for those components dropped significantly.

Lessons learned

  • As engineers, we often don’t rely on data as much as we should. If we relied more on data to optimize our execution, our results would be better. While we expect product managers and business people to be data-driven, we rarely have the same expectation for engineers.
  • Many software engineers are biased toward their intuition, and it was not always easy to convince them to adopt a data-driven approach.

© 2024 Plato. All rights reserved