8 Insights from AWS re:Invent 2022: Staying the Course for 10 Years

Torsten Volk
7 min read · Dec 5, 2022


AWS re:Invent 2022 was back to its old massive self. 2,713 sessions, 364 sponsors (8 emerald, 23 diamond, 35 platinum, 75 gold, 125 silver, 46 showcase sponsors, 27 marketplace sponsors, 17 public sector sponsors, and 8 AI/ML partners), six often overcrowded venues, 69 product-related announcements, and a brand new CEO made up the framework of Amazon’s big event of 2022. And we can certainly say that in its 10th year, re:Invent has stayed true to itself and is still laser-focused on providing organizations with building blocks that aim to eliminate developer constraints. Take a look at Werner Vogels’ (CTO) keynote from the first re:Invent in 2012.

Werner Vogels’ Keynote at AWS re:Invent 2012

1. Maximizing Developer Productivity

Werner Vogels, the ‘mad scientist’ behind AWS, showed the blockbuster slide of re:Invent 2022 during his keynote. The slide shows a punch list of 8 developer challenges in 2022. All 69 announcements aim at providing new product capabilities that enable organizations to effectively attack these challenges and free up the developer time they previously had to dedicate to addressing them manually.

AWS CTO, Werner Vogels

My Take: Dead On!

There are hundreds of seemingly good reasons why software developers have to deal with overhead tasks such as setting up and updating environments, integrating the various layers of their app stacks, worrying about consistent configuration and insidious configuration drift, building contingency plans to get around code and technology dependencies, writing their own code to create and manage application workflows, and so on. However, none of these reasons are valid. It may not always be intuitively obvious how to (mostly) eliminate these tasks, but the payoff of eventually getting it done is worth the initial learning curve. And AWS consistently making it its mission to help organizations figure out this key challenge is absolutely the right thing to do.

2. Grab a Stack, Start Coding, Stay for the AI

Amazon CodeCatalyst (available in preview) is an ambitious new offering with the potential to tie customers to the AWS platform in return for a turnkey developer experience. While this reminds me a lot of Goethe’s Faust, who sold his soul to the red guy with the pointy tail, horns, and hooves, I am at the same time impressed and intrigued by this attempt to punch the items on the above list, all in one go. And then, of course, it is not far-fetched for customers to also consume some AWS services that they would have normally sourced elsewhere, such as, let me think, AI.

Werner Vogels Introduction of Amazon CodeCatalyst

My Take: World Domination Requires Bold Moves

CodeCatalyst goes significantly beyond delivering a number of canned application stacks. CodeCatalyst projects can automatically deploy everything that belongs to the software value chain, including on-demand development environments, built-in issue management, GitOps capabilities, build automation, status dashboards, IAM, unified search, and probably a few more things that I’m forgetting. This is Amazon’s attempt to wrap its customers’ entire software value chain with, mostly, its own products. For customers to accept this level of lock-in, these value chains would need to fit like the proverbial glove.

3. The Future Is Event Driven (and Distributed)

Staying on the track of developer productivity and adding scalability to the mix brings us to the rise of event-driven architectures. The example of Trustpilot demonstrated nicely how Amazon envisions organizations creating an ever-growing mosaic of AWS-driven microservices. While not an announcement, the Trustpilot example during Adam Selipsky’s keynote was important and impressive enough to make it into my list of highlights.

Angela Timofte (Director of Engineering, Trustpilot)

My Take: Not Unique But Important

Collaboration and scalability become an increasingly daunting challenge the larger a software project becomes. This typically leads to exponentially increasing time spent on overhead tasks to ensure that nothing breaks when new code is released. Adopting an event-driven, mostly serverless architecture can make it much easier to understand code relationships and interdependencies, thanks to the modular character of the application. Of course this is not exclusive to AWS, but can also be done on Azure, GCP, IBM Cloud, and friends.
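To make the modularity argument concrete, here is a minimal, deliberately toy sketch of the pattern: small handlers subscribe to named events on a bus, and no handler knows about any other. All names here are hypothetical, and an in-process dictionary stands in for what would be a managed service such as Amazon EventBridge or SNS/SQS in production.

```python
# Minimal in-process event bus illustrating why event-driven designs
# stay understandable as they grow: each handler is independent and
# can be changed or redeployed without touching the rest.
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict):
        # Fan the event out to every interested handler.
        for handler in self._subscribers[event_type]:
            handler(payload)

# Two independent "microservices" reacting to the same event;
# neither references the other.
audit_log = []

def send_welcome_email(event: dict):
    audit_log.append(f"email sent to {event['user']}")

def update_analytics(event: dict):
    audit_log.append(f"analytics updated for {event['user']}")

bus = EventBus()
bus.subscribe("user.signup", send_welcome_email)
bus.subscribe("user.signup", update_analytics)
bus.publish("user.signup", {"user": "alice"})
```

Adding a third consumer of `user.signup` would require zero changes to the existing handlers, which is exactly the property that keeps release overhead from growing with project size.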

4. ETL Must Die

Replacing the need to extract, transform, and load data (ETL) before analyzing it was a significant part of the data-centric keynote at re:Invent 2022. AWS has started out on this road by offering direct integrations between major products, such as S3, Aurora, Redshift, SageMaker, Athena, and Kinesis, in order to replace ETL with real-time data integrations that eliminate the need to create and maintain redundant data copies.

Swami Sivasubramanian, VP of Database, Analytics, and Machine Learning at AWS

My Take: Yes, ETL Must Die

The direct integration of data management and analytics services, without requiring the usual extract, transform, and load routine, is a great place for AWS to incentivize the use of an AWS-only data pipeline. It is one of those times where doing the right thing for the cloud vendor is also great for the customer.
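The core of the argument can be sketched in a few lines. This is a toy illustration only, with sqlite3 standing in for both the operational source and the analytics warehouse; AWS’s direct integrations (for example, Aurora to Redshift) pursue the same idea at cloud scale.

```python
# Toy contrast between classic ETL and querying data in place.
import sqlite3

# Operational source system.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
source.executemany("INSERT INTO orders VALUES (?, ?)",
                   [(1, 10.0), (2, 25.5), (3, 4.5)])

# Classic ETL: extract rows, load a redundant copy into a warehouse
# table that must now be kept in sync with the source forever.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE orders_copy (id INTEGER, amount REAL)")
rows = source.execute("SELECT id, amount FROM orders").fetchall()
warehouse.executemany("INSERT INTO orders_copy VALUES (?, ?)", rows)
etl_total = warehouse.execute(
    "SELECT SUM(amount) FROM orders_copy").fetchone()[0]

# Direct integration: run the analytics query against the source
# itself and skip the copy (and the drift that comes with it).
direct_total = source.execute(
    "SELECT SUM(amount) FROM orders").fetchone()[0]
```

Both paths produce the same answer; the difference is that the ETL path leaves behind a second copy of the data whose freshness and consistency someone now has to own.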

5. We Can Do Machine Learning Now

One of the painful truths that AWS had to come to terms with over the past few years is that customers do not see the AWS portfolio as a market leader in machine learning and AI. I have witnessed numerous large accounts that took it upon themselves to embark on the rather daunting journey of rewriting a good part of their application to be able to leverage Google’s AI portfolio, rather than that of AWS. Unsurprisingly, Amazon was very aware of this ‘leakage’ and has doubled down to gain AI credibility by expanding product capabilities and aggressively hiring new talent.

Swami Sivasubramanian, VP of Database, Analytics, and Machine Learning at AWS

My Take: Integration Is King

Globally comparing the AWS AI/ML portfolio with that of GCP or Azure is impossible and unnecessary. Organizations will select vendors based on their support for the tools and platforms they like. We might think that TensorFlow, as it came out of Google, runs best on GCP, but maybe it runs similarly well on AWS while integrating with a developer’s favorite AWS analytics service. The ‘right choice’ becomes a matter of developer preference based on portfolio integration.

6. AND We Can Make It (Machine Learning) Secure and Compliant

Amazon DataZone is a new product that reminds me of IBM Cloud Pak for Data, a federated data management and governance platform launched by IBM a few years ago. On paper, both products do roughly the same things, but in practice, DataZone integrates with popular AWS data services such as Redshift, Athena, and QuickSight.

Adam Selipsky Introducing Amazon DataZone

My Take: Shifting Left Compliance Unlocks AI Projects

Governing data while leaving it in place is of course a critical foundation for successful analytics, machine learning, and AI projects, making a product like DataZone overdue. DataZone simply fills a rather important gap, lowering the threshold for spinning up more AI/ML projects on AWS. AWS is not one of the first movers in this arena, but that’s OK.

7. And We Have the Silicon

To hammer home that AWS is the place to go for all your AI needs, Adam Selipsky announced EC2 instances with the new Inf2 silicon that was purpose-built to accelerate inference workloads and broaden the adoption of deep learning by lowering cost while increasing performance.

Adam Selipsky Announcing Inf2

My Take: It’s a (Highly Sophisticated) Commodity

Amazon, Microsoft, and Google have all created their own chips specialized for AI/ML workloads. In today’s world of three massive hyperscalers, and without any disrespect for the tremendous research and design efforts involved in creating these chips, we need to declare them table stakes.

8. And We Can Cure Cancer

Predicting the growth of seemingly normal tissue into cancer has been part of many AI vendors’ dog and pony shows, and over the past decade we have learned to carefully examine the background of this type of claim. AWS had a customer who was willing to talk about this topic on stage, so they understandably jumped at the opportunity. Could this have been done on Azure or GCP too? Most likely yes, but showing on stage that someone has used AWS services to create this type of production application still makes for a strong keynote segment.

My Take: AI Credibility Comes from Big Achievements

The Watson disaster has made us all more cynical about claims of consistently and reliably finding cancer through AI/ML models. However, in the right hands and with the right budget for ongoing model supervision and control of the results, this type of use case can be tremendously impactful and life changing for patients.

That’s It. Now Go Build.

AWS presented itself as a massive vending machine of enterprise-grade infrastructure, platform, and software components that can make up the entire software supply chain. To beat out GCP and Azure, AWS is completely focused on finding more ways of getting users hooked, and at times locked in. At the end of the day, AWS focused re:Invent 2022 on all the right things, without hitting any grand slams or even triples. But that may not be necessary to defend its market share.


Written by Torsten Volk

Artificial Intelligence, Cognitive Computing, Automatic Machine Learning in DevOps, IT, and Business are at the center of my industry analyst practice at EMA.
