2 Highlights from VMware Explore 2022
VMworld is now Explore. VMware is now an app- and API-first company and has therefore purged the “VM” part from the name of its annual flagship conference.
Accordingly, the company focused the keynote and product announcements on how VMware admins can transform into platform engineers, providing consumption-ready developer services on top of datacenter infrastructure, public clouds, and edge devices. This is reflected in the following topic map that is based on a basic correlation analysis of the full text transcript of Raghu’s keynote (1,012 lines).
For a deeper data-driven analysis of VMware Explore 2022, take a look at this dashboard revealing all the hot topics from the conference.
But when you are a large company whose revenue comes from 500,000 enterprise accounts around the globe, shifting focus from an infrastructure-centric perspective to a cloud native one is inevitably tricky.
While Raghu, Sumit, and the rest of the VMware executives were laser-focused on getting the keynote audience excited about the “new VMware”, the audience often showed little enthusiasm for announcements related to VMware becoming a distributed developer platform.
While it was easy for us to cringe at Raghu and his team’s pumped, amped, jazzed, and even jacked appearance, I do feel their pain. Based on my own conversations, I know that most attendees came to VMware Explore to learn about specific, mostly infrastructure-platform related topics. Getting this crowd excited about cloud native developer platforms and data processing units (DPUs, see below) is no easy task.
Highlight #1 — vSphere 8: It’s the Hypervisor, Stupid!
With version 8, vSphere is no longer a hypervisor but an “enterprise workload platform” where customers can run VMs and containers side-by-side on a scale-out architecture. The new vSphere Distributed Services Engine leverages DPUs (data processing units) to enable, well, data processing across the network, optimizing performance while squeezing more value out of currently underutilized CPUs and GPUs.
What if you could freely connect enterprise storage with any CPU or GPU across the network without increasing latency compared to local storage? The Distributed Services Engine that is part of VMware vSphere 8 leverages Data Processing Units (DPUs) to enable the ad-hoc composition and scaling of hyperconverged platforms from resources across the corporate network. This means that if your VM or Kubernetes pod temporarily requires lots of CPU or GPU cycles for demanding tasks such as training machine learning models, video processing, other kinds of batch processing, data compression and deduplication, etc., it can source these CPU and GPU cycles from other, less busy hosts.
This ability to temporarily borrow data processing capabilities from across the network comes with significant implications for resource efficiency and performance optimization. For example, data scientists often struggle to acquire sufficient infrastructure resources for training deep learning models. vSphere’s Distributed Services Engine could add as many currently unused CPU cores as it can find on the network, temporarily creating a super VM or VM cluster that trains the deep learning model much faster than would otherwise be feasible.
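To make the resource-borrowing idea concrete, here is a minimal, purely illustrative Python sketch of a greedy placement policy that sources a burst of CPU cores from the least-busy hosts first. Nothing here reflects vSphere’s actual scheduler; the `Host` class, the `borrow_cores` helper, and the host names are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Host:
    """A drastically simplified view of a host's CPU capacity."""
    name: str
    total_cores: int
    used_cores: int

    @property
    def free_cores(self) -> int:
        return self.total_cores - self.used_cores

def borrow_cores(hosts: list[Host], demand: int) -> dict[str, int]:
    """Greedily source `demand` spare CPU cores, least-busy hosts first."""
    allocation: dict[str, int] = {}
    for host in sorted(hosts, key=lambda h: h.free_cores, reverse=True):
        if demand <= 0:
            break
        take = min(host.free_cores, demand)
        if take > 0:
            allocation[host.name] = take
            demand -= take
    if demand > 0:
        raise RuntimeError(f"still {demand} cores short of the request")
    return allocation

hosts = [
    Host("esx-01", total_cores=64, used_cores=60),  # busy
    Host("esx-02", total_cores=64, used_cores=8),   # mostly idle
    Host("esx-03", total_cores=32, used_cores=16),
]
print(borrow_cores(hosts, demand=40))  # → {'esx-02': 40}
```

A real implementation would of course weigh latency, GPU availability, and placement constraints rather than raw free-core counts, but the sketch captures the basic idea of composing a temporary compute pool from idle capacity.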
Highlight #2 — VMware Aria Graph: One Graph to Rule Them All
Keeping track of application stacks, Kubernetes clusters, data sources, and other resources across vCenter, AWS, Azure, GCP, etc. is a major pain point for almost anyone. Resource over-provisioning, compliance violations, cost overruns, and reliability and performance issues are just some of the possible side effects of a fast-growing hybrid multi-cloud footprint within the enterprise.
Aria Graph creates a common object model as the foundation for VMware’s portfolio of Aria cost, operations, and automation services. This common object model collects all connections and interdependencies between and within application stacks and infrastructure resources across public clouds and data centers. Aria Graph makes this data available via one unified API, enabling SREs, DevOps engineers, IT operations staff, and others to receive a complete near real-time view of all objects relevant to their specific persona. For example, a security engineer receives a complete view of everything security-relevant, such as firewall configuration, encryption, data compliance, and password policies, and could request a report on all data encryption policies across applications and clouds. An SRE could easily write a query that surfaces all Kubernetes hosts across AWS, Azure, and GKE together with their CPU and RAM utilization.
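To give a feel for what such a unified-API query might look like, here is a hedged sketch that packages the SRE’s “Kubernetes hosts with utilization” question as a GraphQL-over-HTTP request. The endpoint URL and every field name in the query are invented for illustration; they are not Aria Graph’s actual schema or API surface.

```python
import json
import urllib.request

# Hypothetical query -- field names are illustrative, not Aria Graph's real schema.
QUERY = """
{
  kubernetesHosts(clouds: [AWS, AZURE, GKE]) {
    name
    cloud
    cpuUtilizationPercent
    memoryUtilizationPercent
  }
}
"""

def build_request(endpoint: str, query: str) -> urllib.request.Request:
    """Package a GraphQL query as a standard JSON POST request."""
    payload = json.dumps({"query": query}).encode()
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Assumed endpoint for illustration only; no request is actually sent here.
req = build_request("https://aria.example.com/graphql", QUERY)
print(req.get_method(), req.full_url)  # → POST https://aria.example.com/graphql
```

The appeal of the single-API model is exactly this: one query shape against one endpoint, regardless of whether the underlying hosts live in AWS, Azure, or GKE.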
Thinking about this further, this common object model is key to finally solving the challenge that prevents us from training AI models that can autonomously predict failure, optimize resources, or move applications based on SLOs: how to create a “digital twin” of an organization’s app environments in data centers and public clouds. If completely populated, Aria Graph provides the backbone for this digital twin, which can then be used to continuously train AI models by “showing” them the impact of changes in one part of an application environment on another (or on a different app environment). Now this is exciting.
VMware is in a unique situation. The company needs to transform into a cloud native leader while at the same time continuing to deliver infrastructure virtualization solutions to their 500,000 existing customers. This is a daunting task, especially with the wildcard of the company being acquired by Broadcom. However, products such as vSphere 8 with DPU support and Aria Graph show that there is a strong vision within VMware, aimed at becoming the hybrid application platform to rule cloud native and traditional applications in clouds and data centers across the globe.