
Developing Better Incident Response Policies

Harrison Hunter

CTO at MaestroQA


Problem

As we became a more enterprise-focused company, we realized that we had to support higher levels of uptime and ensure stronger performance guarantees for our customers. At the same time, the number of people contributing to the codebase was increasing, which meant a growing number of changes. We therefore had to make sure that this new layer of changes would not result in instability and a larger number of issues for our customers.

Actions taken

The two areas where improvements would have the most significant impact on our incident response policies were test automation and on-call scheduling.

Automated testing

For starters, we added automation to the process in several places. We already had some unit, UI, and click-through testing, but we put much more emphasis on testing and hired a team to expand our automated and end-to-end test coverage as the number of changes grew. In particular, we expanded testing to include load tests: rather than testing only under a small load matching the existing size of our customer base, we tested under a large load projecting its future size. By doing so, we could catch issues before they reached production and prevent incidents from happening.
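To make the load-testing idea concrete, here is a minimal sketch using the open-source Locust tool. The endpoint paths, task weights, and user counts are illustrative assumptions, not the actual MaestroQA setup; the point is simulating traffic well beyond the current customer base.

```python
# Minimal load-test sketch with Locust (https://locust.io).
# Endpoints and payloads below are placeholders, not real MaestroQA routes.
from locust import HttpUser, task, between


class SimulatedCustomer(HttpUser):
    # Each simulated user pauses 1-5 seconds between actions.
    wait_time = between(1, 5)

    @task(3)
    def view_dashboard(self):
        # Hypothetical read-heavy endpoint, exercised most often.
        self.client.get("/api/dashboard")

    @task(1)
    def submit_grading(self):
        # Hypothetical write endpoint, also tested under load.
        self.client.post("/api/gradings", json={"score": 4, "rubric_id": 1})


# Run against a projected future load rather than today's traffic, e.g.:
#   locust -f loadtest.py --host https://staging.example.com --users 2000 --spawn-rate 50
```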

We also added automated status check-ins and tied our tools together. By linking tools together, we ensured that our health checks, database, and server metrics were alerting us in the right places. All information was piped into Slack or email, which allowed people to respond quickly and diagnose issues in no time, with monitors going into alarm when something went wrong. When an alert was triggered, we got an indication that something was not working somewhere, rather than having to go to each place to check.
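A minimal sketch of that pattern follows, assuming a set of HTTP health endpoints and a Slack incoming webhook; the URLs are placeholders, and the real setup also forwarded database and server metrics, which are omitted here.

```python
# Sketch of an automated status check that pipes failures into Slack.
# URLs are placeholders; run this periodically (e.g., from cron or a scheduler).
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder
HEALTH_ENDPOINTS = {
    "api": "https://api.example.com/health",
    "web": "https://app.example.com/health",
}


def check_and_alert():
    for name, url in HEALTH_ENDPOINTS.items():
        try:
            healthy = requests.get(url, timeout=5).status_code == 200
        except requests.RequestException:
            healthy = False
        if not healthy:
            # One message per failing check, so responders immediately see
            # where to look instead of visiting each system individually.
            requests.post(
                SLACK_WEBHOOK_URL,
                json={"text": f":rotating_light: Health check failed for `{name}` ({url})"},
                timeout=5,
            )


if __name__ == "__main__":
    check_and_alert()
```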

Scheduling

To ensure that our policies could be efficiently implemented, we established clear points of contact and communication mechanisms by adding on-call rotation planning, scheduling, and alerting. That required a change that was both cultural and process-related.

We didn’t merely create an on-call schedule and inform people when they would be on call. We created a checklist for people to go through and ensured that everyone had access to all the tools and systems. We also set clear expectations for testing our incident response, which helped with the cultural aspect of the change.
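As a rough illustration of combining the rotation with the pre-shift checklist, here is a small sketch. The roster, checklist items, and rotation rule are assumptions; in practice this would usually live in a dedicated on-call tool such as PagerDuty or Opsgenie.

```python
# Sketch of a weekly on-call rotation with a pre-shift checklist reminder.
# Names and checklist items are placeholders, not the actual MaestroQA process.
from datetime import date, timedelta

ENGINEERS = ["alice", "bob", "carol"]  # placeholder roster
CHECKLIST = [
    "Confirm access to production dashboards and logs",
    "Verify Slack alert channels are unmuted",
    "Re-read the incident response playbook",
    "Test that paging reaches your phone",
]


def on_call_for(week_start: date) -> str:
    # Rotate through the roster by ISO week number.
    return ENGINEERS[week_start.isocalendar()[1] % len(ENGINEERS)]


def pre_shift_reminder(week_start: date) -> str:
    engineer = on_call_for(week_start)
    items = "\n".join(f"- [ ] {item}" for item in CHECKLIST)
    return f"@{engineer} is on call starting {week_start}. Pre-shift checklist:\n{items}"


if __name__ == "__main__":
    next_monday = date.today() + timedelta(days=(7 - date.today().weekday()) % 7)
    print(pre_shift_reminder(next_monday))
```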

While this is still a work in progress, we have noticed significant improvements. For example, we redefined what counts as an incident and created dedicated places for communication. We also organized communication around the incident response playbook that we compiled. Clear communication, in turn, resulted in faster resolution of issues.

Lessons learned

  • Ensure end-to-end testing not only at your current scale but also at a projected scale that will generate a significant load.
  • Make sure your alerting is piped into one place and that all communication happens there. You should also have a unified view of alerts across the system so you can respond quickly.
  • When you create an on-call schedule, also create a checklist so that everyone going on call is comfortable and set up with the right tools and access. Refresh it regularly, since processes change over time, and people should feel comfortable and have access at all times. Also, develop a detailed playbook so that people can apply the right investigation and mitigation techniques in the heat of the moment.
