In November 2020, I woke at 5am to the blaring sound of my Amazon pager. As a service owner at Amazon EKS, I knew this couldn’t be good, and every subsequent page over the next two hours was a harbinger of impending catastrophe. When I logged into our systems, I saw that some of our customers were unable to reach our service. The cause was a software defect, a completely avoidable one.
And what’s frustrating is that such failures are not one-time events. I have experienced hundreds of them, watching our customers (DevOps engineers, SREs, and platform engineers …) relive avoidable failures over and over, with no effective way to learn from each other’s mistakes and stop them from recurring.
I kept thinking there had to be a better way. The aviation and security industries have solved the problem of learning from each other: they collectively learn from failures and implement mechanisms to ensure the same failures never happen again. I thought it was time the software industry did the same, so I left Amazon and founded Chkk to prevent previously encountered failures from repeating and to eliminate the operational pain of the incidents they cause.
I believe reactive approaches to infrastructure availability will not be sufficient in the coming years. They will be complemented by a new discipline of availability assurance that prevents downtime instead of fighting fires. We need a way to 1/ collectively learn and codify knowledge from developer communities and past incidents into a structured, authoritative, and interoperable catalog for all types of infrastructure, 2/ prevent the introduction of new Availability Risks in all infrastructure deployment tools, and 3/ remediate Availability Risks latent in your infrastructure before they become incidents.
Cloud providers, SaaS vendors, and open-source communities supply the building blocks of modern applications. I saw that most incidents occur at the interconnects of these building blocks, and the responsibility for infrastructure availability has shifted from vendors to the DevOps, SRE, and Infrastructure/Platform teams inside each organization. These are lean teams of hyper-specialized developers chartered to ensure the availability, performance, and scaling of their respective infrastructure blocks. Talent is scarce, and 80% of their time is spent on operations and maintenance. A new way of solving this problem is required.
Mental models have already shifted. Developers go to community forums to learn and share knowledge about incidents, remediations and optimizations. Collaboration for risk resolution is already happening through standardized schemas/taxonomies in other verticals (CVE, SLSA, …).
Building blocks are now available. IaC deployments have been standardized around a handful of popular tools (Terraform, CDK, GitOps …). Enforcement abstractions exist at every layer (eBPF, Admission Controllers, Policy Engines, …). And adoption of tools for availability-risk discovery, detection, remediation, and prevention is now attainable. While “there is no compression algorithm for experience,” proven mechanisms and hyper-specialized tools are available to catalyze a culture of availability assurance, learning, and collaboration across infrastructure teams.
Imagine if all DevOps engineers, SREs, and platform engineers could learn from each other and avoid repeating the same failures. Wouldn’t that be a great world to live in? To get there, we must build a network of infrastructure developers, cloud providers, software vendors, and open-source communities that collectively learn from each other and propagate a culture of availability assurance. I believe that’s how everyone will operate infrastructure. I’m super excited about how Collective Learning will transform how we think about availability; read Ali’s blog for details. I invite you to join us on this exciting journey and sign up for early access.