In 2003, I had the good fortune of being a founding member of the AWS team and of spending more than a decade helping define, build and operate the services that comprise Amazon's cloud offering.

Since AWS' inception, I have given a great deal of thought to what the cloud should enable and how it can fundamentally redefine how developers build, deploy and manage applications. From the very beginning, our goal was to make it possible for a college student in his or her dorm room to build the kind of scalable and reliable applications that had hitherto required large organizations to deliver. While there's still much more innovation possible to realize the vision of a comprehensive cloud platform, I believe that AWS has largely delivered on that early goal.

Recently, however, I have been thinking more deeply about why the cloud came to be in the first place. Many people credit virtualization, the growth of the Internet and the emergence of web-scale computing as the primary drivers. While these are certainly important, I don't believe they adequately capture the fundamental inversion that made cloud computing possible, and indeed necessary.

It's sometimes easy to forget how things were before the Cloud.

When I first started as a professional software developer in the late 1980s, machines were the scarce resource and the humans who programmed them were relatively plentiful and inexpensive. I was perpetually operating in a machine-resource-constrained environment. I spent a lot of time waiting around for my code to compile and link, and hours reducing the memory footprint of my programs and optimizing my algorithms to save processing time. Disk space was at such a premium that I'd bias towards using bits instead of bytes. It took weeks, if not months, to procure more compute and storage.

My experience as a developer was not atypical. Since machines were the scarce resource, organizations invested heavily in people and infrastructure to get the most value out of their computers, and were willing to bear high coordination costs to extract the maximum business-serving value from the applications those machines ran. In traditional IT infrastructure, coordination costs mainly comprise the people who monitor and manage the hardware and software that developers use to create business-serving value, along with the servers, storage, networking hardware and other physical resources required to deliver it.

However, something fundamental happened, starting in the early 2000s as best I can estimate. Moore's Law, which had been compounding for more than 30 years, seemingly overnight changed the economics of IT. Instead of the machines being the scarce resource, the bottleneck was now the developers building the business-serving applications. It no longer made sense for the people to wait around for the machines. Rather, the right optimization was the inverse: the machines should be waiting for the unscalable and ever more expensive resource, the developer.

So how did this lead to the emergence of the Cloud? Quite simply, the inversion in where the value lay (with the developer and not the machines) meant that the hitherto acceptable coordination costs of an organization operating its own IT infrastructure were no longer tenable. With the Cloud, the scarce compute resources that had previously been carefully rationed and managed became plentiful and instantly accessible to developers at the swipe of a credit card. The equation shifted from optimizing for machine utilization to optimizing for developer productivity, to maximize the value of the new scarce resource.

A key implication of this inversion is that traditional on-premises IT infrastructure is no longer the most efficient way to deliver the hardware and software components necessary to create applications. Why continue to invest capital in building expensive data centers and paying for the hardware, software and staff required to operate them (i.e. coordination costs) when you can procure these resources on demand and pay only for what you use in the Cloud?

While Cloud operators will continue to drive down the cost of delivering on-demand application infrastructure, going forward I believe the largest savings will come from improving developer productivity and agility.

Development methodologies such as Agile and Lean, along with DevOps, in which developers are empowered to manage the entire application lifecycle, will help developers iterate more quickly, saving significant time and personnel expense. Platform services will provide developers with automated, higher-level abstractions so they can more easily and quickly build, test, deploy and operate their applications. Machine learning and AI will augment developer productivity by increasingly eliminating rote, tedious and repetitive programming and testing tasks.

The Cloud has inaugurated an irreversible change in how applications will be built, one that places the developer at the center.

So, why the Cloud?

Because, the Developer.