Monday at VMworld. General sessions, break-out sessions, and more!
General Session & Beyond
I live blogged the General Session this morning, in an editorial style. The biggest takeaway was Project Pacific, the integration of Kubernetes into vSphere. While there are obvious benefits for container infrastructure, not least of which is the performance improvement (even against physical servers!), there are interesting applications beyond that.
I was lucky enough to attend a session this morning that expanded on the day's announcements, though I'm not able to share the details. It's certainly interesting to get a bit of an idea of what VMware's thinking beyond the literal announcements of the day. Like all thoughts and intentions, we shall see when we shall see.
Better Together
First breakout session of the conference for me was VMware Validated Design: Introduction and its Future [HBI1637BU] with Forbes Guthrie.
What are VMware Validated Designs (VVD)? They're a set of prescriptive blueprints with comprehensive deployment and operational practices.
VVD documentation can be found on VMware's documentation site.
VVD for SDDC supports 10,000 running VMs, with 150 VM deployments per hour.
It supports a multi-region design, and contains nearly 400 design decisions.
The VVD is made up of about 20 different VMware components, each with their own release cycles.
Before each VVD release, the components on the bill of materials (BOM) are integrated as per the guidance and extensively tested for interoperability and resilience.
Want to do something that varies from one of the design decisions? Check the tech notes, which acknowledge common variations and provide some guidance (e.g., OSPF instead of BGP).
Since last VMworld, VVD 5.0 was released, which included Cloud Builder, began alignment with VMware Cloud Foundation (VCF), and had a full BOM refresh and document set.
Make sure to check the VVD documentation map to find the state of a particular VVD document. Some documents may not yet be updated in the VVD version you're looking at. The map will help you find which VVD version the doc is in.
Cloud Builder currently requires you to fill out an Excel spreadsheet with all of your deployment parameters. The file is then ingested by the Cloud Builder app.
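To give a rough flavour of the kind of sanity checking a deployment-parameter file goes through before an automated build, here's a minimal sketch. The parameter names and checks are my own invention for illustration, not Cloud Builder's actual schema or validation logic:

```python
import ipaddress

# Hypothetical deployment parameters, standing in for values that would
# normally come from the filled-out spreadsheet.
params = {
    "mgmt_vcenter_hostname": "vcenter01.corp.local",
    "mgmt_vcenter_ip": "10.0.0.10",
    "mgmt_gateway_ip": "10.0.0.1",
    "mgmt_subnet": "10.0.0.0/24",
}

def validate(params):
    """Return a list of problems found in the deployment parameters."""
    errors = []
    try:
        subnet = ipaddress.ip_network(params["mgmt_subnet"])
    except (KeyError, ValueError):
        return ["mgmt_subnet is missing or not a valid CIDR network"]
    for key in ("mgmt_vcenter_ip", "mgmt_gateway_ip"):
        try:
            addr = ipaddress.ip_address(params[key])
        except (KeyError, ValueError):
            errors.append(f"{key} is missing or not a valid IP address")
            continue
        if addr not in subnet:
            errors.append(f"{key} ({addr}) is outside {subnet}")
    return errors

print(validate(params))  # → [] when everything checks out
```

Catching a fat-fingered IP or gateway at this stage is much cheaper than discovering it partway through an SDDC bring-up.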
VVD 5.0.1 added VCF workflows to Cloud Builder.
VVD 5.1 was based on vSphere 6.7 U2 and included a full doc set.
VVD 5.1 supports SRM 8.2, which saw SRM move off of a Windows-based VM to a virtual appliance. The upgrade guides will help with that transition.
VVD announcements at VMworld 2019 include a NIST 800-53 kit add-on, support for VMware Cloud on AWS as a region, early access support for vRealize Automation Cloud, and support for availability zones on an NSX-T workload domain.
What is Cloud Foundation (specifically the regular on-premises VCF)? It provides automated deployment and lifecycle management of the full SDDC.
SDDC Manager does what it says on the tin, and manages the full SDDC environment. A design goal for it is to avoid needless duplication of existing management interfaces, so where it makes sense it defers to a component's native management UI.
The future of VVD and VCF is to more tightly integrate them.
VVD.next will see Cloud Builder use SDDC Manager for much or all deployment.
VCF customers will be able to refer to the VVD documents for operation ("day 2") guidance, as well as to understand the common design.
Transitionally, anything that's currently in VVD but not VCF will have to be manually deployed following the published guidelines. Eventually those components may be added to SDDC Manager.
VVD + VCF will eventually support "ingesting compliant Brownfield Environments". The intention is to analyze an existing environment to determine how closely it matches the BOM and deployment guidance, in order to provide prescriptive steps to fully implement VVD/VCF.
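Conceptually, that kind of ingestion boils down to a gap analysis between a discovered inventory and the target BOM. Here's a minimal sketch of the idea; the component names and versions are invented placeholders, and the real analysis would of course go well beyond version strings:

```python
# Hypothetical target BOM and a discovered brownfield inventory.
bom = {"vCenter": "6.7 U2", "NSX-T": "2.4", "vSAN": "6.7"}
env = {"vCenter": "6.7 U2", "NSX-T": "2.3", "vRealize Ops": "7.5"}

def gap_analysis(bom, env):
    """Return what would have to change for the environment to match the BOM."""
    return {
        # Components the BOM requires that aren't deployed at all.
        "missing": sorted(set(bom) - set(env)),
        # Components present but at the wrong version.
        "version_mismatch": {
            c: {"found": env[c], "required": bom[c]}
            for c in bom if c in env and env[c] != bom[c]
        },
        # Components deployed that the BOM doesn't cover.
        "extra": sorted(set(env) - set(bom)),
    }

print(gap_analysis(bom, env))
```

The output of something like this is exactly the raw material for the "prescriptive guidance" the session described: deploy what's missing, upgrade what's mismatched, and decide what to do with what's extra.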
I think that the prescriptive nature of the VVD and VCF can be immensely valuable to most organizations. Especially as IT departments begin to realize that their businesses value the ability to quickly get up & running in order to deploy business workloads.
HPE Briefing
I was invited to attend the HPE vExpert & Blogger briefing again this year. It's an opportunity to get a glimpse into how HPE's looking to move forward with their infrastructure offerings. There was a weighted focus on service offerings over hardware, which makes sense in a more software-defined market.
Half of HPE's staff are on the GreenLake side of the HPE house, showing considerable focus on GreenLake and its subscription-oriented offerings.
HPE gear still accounts for the majority of traditional on-premises vSphere deployments.
Recently, vSAN Ready Nodes have been introduced that focus on particular workload types. This gives the customer a more tailored choice to meet their needs.
GreenLake, more generally, is HPE's on-premises "as a Service" offering, which utilizes metered, "rented" equipment within a customer's environment.
GreenLake environments are regularly monitored for consumption trends, and the on-premises gear is adjusted proactively based on the customer's capacity needs. Done right, the customer shouldn't run out of capacity within their own physical data centres.
GreenLake now integrates with VMware Cloud Foundation, so a VCF environment can be easily provisioned.
The VCF integration could allow for a scenario where hardware is on site at a customer environment, but "cold" (not actively consumed and therefore not being billed for). The "cold" gear could then be spun up on demand when capacity needs dictate, and leverage VCF to extend the virtual infrastructure in a matter of minutes.
HPE can work with customers to come to an agreed approach on how and what is managed on site. For example, if a customer wants HPE to manage all infrastructure, they can, if a customer wants to retain control over a particular area, like the network, that can be accommodated too.
Primera storage is the successor to 3PAR.
A number of HPE's storage offerings now support data mobility. There's a demo on the Solution Exchange floor this year showing a migration of data out of AWS and into Google Cloud Platform (GCP) without incurring any AWS egress charges.
Analytics offer prescriptive recommendations and optional execution of optimizations for data on HPE storage.
Overall, it's interesting to note HPE's expanding focus on services over hardware. It seems the hardware itself has reached a commodity threshold, with the software layer providing the differentiating service. For example, HPE's Synergy platform of composable infrastructure leverages its software layer to carve up and present its hardware in whatever ways make the most sense and deliver the most value to the customer.
It's an interesting evolution for a company that has deep hardware roots. The tenacity and perseverance demonstrated by HPE means it will continue to be one of the big tech companies to keep an eye on.
Hackathon
One of my VMworld highlights is the VMware{code} Hackathon. This year was no exception. I was privileged to be on the team that placed first this year. Our team put together a module that provided a pipeline for checking and linting PowerCLI Examples.
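To give a flavour of what checking examples in a pipeline can look like, here's a minimal Python sketch of my own. It is not the team's actual module (which targeted PowerCLI), and the extraction and lint rules here are invented for illustration:

```python
import re

def extract_examples(help_text):
    """Pull the code lines out of a comment-based-help-style .EXAMPLE section."""
    examples, current = [], None
    for line in help_text.splitlines():
        stripped = line.strip()
        if stripped.upper().startswith(".EXAMPLE"):
            if current:
                examples.append("\n".join(current))
            current = []
        elif stripped.startswith("."):  # the next help keyword ends an example
            if current is not None:
                examples.append("\n".join(current))
                current = None
        elif current is not None:
            current.append(stripped)
    if current:
        examples.append("\n".join(current))
    return [e.strip() for e in examples if e.strip()]

def lint_example(example):
    """Very naive checks: balanced delimiters and a Verb-Noun-looking cmdlet call."""
    issues = []
    if example.count("{") != example.count("}"):
        issues.append("unbalanced braces")
    if example.count('"') % 2:
        issues.append("unbalanced double quotes")
    if not re.search(r"\b[A-Z][a-z]+-[A-Za-z]+\b", example):
        issues.append("no Verb-Noun cmdlet call found")
    return issues

help_text = """
.SYNOPSIS
Gets a thing.
.EXAMPLE
Get-VM -Name "web01"
.EXAMPLE
Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" }
"""

for example in extract_examples(help_text):
    print(example, "->", lint_example(example) or "OK")
```

Run on every commit, even checks this crude catch the copy-paste breakage that tends to creep into documentation examples over time.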
The judges stated that they believed this effort would have broader benefits to the community, showing the potential lasting effects of efforts like the Hackathon for the community at large. On a more individual level, a number of participants were introduced to development concepts, some making their first GitHub commit that night. It's these sorts of impacts, from the individual scale up to the broader community, that make the effort worthwhile.
That, and the ability to meet and spend time with like-minded folks problem solving and noodling over tech. What's not to love?
Wrap Up
Monday in the can. One more General Session tomorrow (the typical Thursday session isn't on the agenda this year), some more breakouts, more conversations, more VMworld! Stay tuned. As always, make sure to stop and say hi if you can.