Finally Released – VCF 9 and the Comeback of Private Cloud

The empire strikes back – My bad – the Private Cloud is back. For years, we’ve been building private clouds in the image of VMware Cloud Foundation: standardized, software-defined, and (ideally) automated. But anyone who’s run more than a single environment knows the limits—one SDDC Manager per instance, NSX pain points, no clean multi-instance control, and rigid storage design for the management domain.

With VCF 9.0, things are shifting. And it’s not just a product version bump—it’s a conceptual realignment.

Let’s walk through what’s new, what’s important, and why this might finally make VCF a serious candidate for scaled-out, AI-ready, and ops-friendly private clouds.

Being part of the Broadcom Knight program is both an honor and a responsibility. It’s a recognition of individuals who have demonstrated strength in both the technical and sales aspects of VMware’s evolving portfolio—something I’ve had the chance to prove in many projects over the years.

As a Knight, I’ve had the unique opportunity to gain early access to VMware Cloud Foundation 9.0, including deep technical sessions and direct interactions with the engineering and product teams. This gave me valuable insight into the architectural shifts, the rationale behind certain design decisions, and the operational vision that’s now embedded into the product.

Many of the concepts outlined in this blog—like the fleet-based model, automation layer, and updated storage flexibility—are things I’ve seen evolve firsthand during the pre-GA phase. It’s been a great experience to learn, challenge, and contribute feedback along the way.

Modular by Design: From Monolith to Fleet

One of the most fundamental changes: VCF 9.0 introduces a new modular architecture for large-scale deployments. Instead of each environment operating in isolation, VMware now defines three layers:

  • A VCF Private Cloud is the top-level construct, representing the overall environment.
  • Within that, you can run one or more VCF Fleets—each managed centrally.
  • Each fleet contains one or more VCF Instances, each being the traditional combination of management and workload domains.

This allows organizations to scale horizontally, while maintaining centralized governance, policy enforcement, lifecycle management, and automation.
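To make the fleet hierarchy a bit more tangible, here is a minimal, purely illustrative Python sketch of the three layers. The class and field names are my own shorthand and not an official VCF API or data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VCFInstance:
    """One VCF instance: a management domain plus its workload domains."""
    name: str
    management_domain: str
    workload_domains: List[str] = field(default_factory=list)

@dataclass
class VCFFleet:
    """A centrally managed group of instances (lifecycle, policy, observability)."""
    name: str
    instances: List[VCFInstance] = field(default_factory=list)

@dataclass
class VCFPrivateCloud:
    """The top-level construct, spanning one or more fleets."""
    name: str
    fleets: List[VCFFleet] = field(default_factory=list)

# Hypothetical example: one private cloud, one fleet, two instances
cloud = VCFPrivateCloud(
    name="corp-private-cloud",
    fleets=[VCFFleet(
        name="emea-fleet",
        instances=[
            VCFInstance("site-a", "mgmt-a", ["wld-a1", "wld-a2"]),
            VCFInstance("site-b", "mgmt-b", ["wld-b1"]),
        ],
    )],
)
print(sum(len(f.instances) for f in cloud.fleets), "instances under central governance")
```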

Importantly, VCF Operations now takes over the central control role from the legacy SDDC Manager. It becomes the fleet-level management engine—covering what used to be local to each instance. Lifecycle, observability, compliance, and visibility are no longer trapped in the four walls of a single deployment.

VCF Automation: The Control Plane We’ve Been Waiting For

VCF Automation complements Operations by exposing a fleet-wide provisioning and consumption layer. Think blueprints, templates, service catalogs—and most importantly: an interface not just for infrastructure admins, but also for platform teams, developers, and anyone consuming infrastructure as a service.

It finally answers the question: “How do I consume my cloud like a cloud?” The big question for our service provider customers and friends: will it replace Cloud Director? Not yet. While multi-tenancy is baked in, we will stick with the communicated Cloud Director end of support (October 2027). But let’s face it: VCF Automation will be the future for all kinds of self-service offerings, also for service providers.

No More vSAN Mandate: Fibre Channel and NFS for the Management Domain

In previous VCF releases, the management domain was tightly coupled to vSAN. While vSAN ESA has its benefits, many environments, especially in the enterprise and service provider space, have existing investments in SAN or NAS arrays for core infrastructure services.

With VCF 9.0, you can now use Fibre Channel or NFS as the principal storage for the management domain (no iSCSI). This change aligns VCF with reality, where storage decisions are driven by operational efficiency, not feature enforcement. Besides that, keep in mind that vVols are still supported but deprecated.

This also makes VCF more accessible to customers who weren’t ready (or willing) to refactor their storage strategy just to adopt the platform.

Performance Matters: Memory Tiering, vSAN Global Deduplication, and GPU Mobility

Let’s talk performance.

VCF 9.0 introduces memory tiering using NVMe. This allows lower-cost memory to supplement DRAM without degrading workload density or performance. This is especially relevant as AI workloads and memory-hungry apps enter the private cloud. Selling 2 TB of memory while only having 1.5 TB in hardware -> sounds like a business model to me :P
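To put some rough numbers on the tiering idea, here is a small back-of-the-envelope sketch. The DRAM and NVMe sizes are made-up assumptions for illustration, not documented defaults or sizing guidance.

```python
def effective_host_memory_gb(dram_gb: float, nvme_tier_gb: float) -> float:
    """Addressable memory when an NVMe tier supplements DRAM (simplified model)."""
    return dram_gb + nvme_tier_gb

# Assumption: 1.5 TB of DRAM plus a 512 GB NVMe tier per host
dram, nvme = 1536, 512
total = effective_host_memory_gb(dram, nvme)
print(f"DRAM {dram} GB + NVMe tier {nvme} GB = {total} GB addressable")
print(f"That is {total / dram:.2f}x the installed DRAM")
```

Which is roughly the 2 TB-from-1.5 TB-of-DRAM joke above, just written out.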

For storage, vSAN ESA now includes global deduplication, meaning dedup isn’t limited to a single disk group or host. The impact? Reduced footprint, more predictable scaling, and improved storage economics—especially in mixed workload clusters.

And for those investing in AI/ML infrastructure: GPU vMotion is now up to 6× faster. That’s the kind of improvement that actually enables maintenance without downtime in production training environments or dynamic resource optimization in inference clusters.

NSX: Finally Simpler, Finally Integrated

Let’s be honest—NSX has always been powerful, but in many cases, not easy.

With VCF 9.0, NSX finally feels like a first-class citizen of the cloud platform:

  • NSX lifecycle is aligned with vSphere and VCF upgrades
  • A single NSX Manager setup is now supported for smaller or edge deployments
  • The VPC model simplifies how virtual networks are consumed.
  • NSX is now integrated directly in vCenter, simplifying visibility and operations

Licensing & Costs

VCF Operations is now responsible for all the licensing and will connect directly to your Broadcom account / subscription. No more keys, no more perpetual licenses. This is where the industry has moved.

From a pricing point of view, nothing has changed. The list price of $350 per physical core includes the pure VCF feature set (with 1 TiB of vSAN per committed core). Other features can be added as add-ons, like micro-segmentation, Avi Load Balancer, or Private AI. While many say that sounds expensive, the hardware investment (and with it the TCO) can be reduced tremendously if you size it right.
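As a quick sanity check on those list-price numbers, here is a tiny worked example; the cluster size is a made-up assumption, and discounts, support, and add-ons are ignored.

```python
def vcf_list_price(hosts: int, cores_per_host: int,
                   usd_per_core: float = 350.0,
                   vsan_tib_per_core: float = 1.0):
    """Rough VCF list price and included vSAN capacity for a cluster (illustrative only)."""
    cores = hosts * cores_per_host
    return cores, cores * usd_per_core, cores * vsan_tib_per_core

# Assumption: an 8-host cluster with 32 physical cores per host
cores, price, vsan_tib = vcf_list_price(hosts=8, cores_per_host=32)
print(f"{cores} cores -> ${price:,.0f} at list, {vsan_tib:.0f} TiB of vSAN capacity included")
```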

And P.S.: if I listen to the talk on the street about the price of competitors like Nutanix or Red Hat OpenShift, VCF is equally expensive while offering more value.

Conclusion: VCF is Growing Up

This release of VMware Cloud Foundation signals more than product maturity—it reflects architectural maturity.

With modular VCF Fleets, central control via VCF Operations and Automation, storage flexibility, and real performance improvements, we’re finally getting a foundation that scales with the real-world complexity of modern data centers.

If you’re running multiple VCF instances, planning a private AI infrastructure, or trying to regain cloud-like control without giving away sovereignty or margin to hyperscalers—VCF 9.0 might be the platform you’ve been waiting for.

And if you’re wondering how to align your current architecture with the new fleet model, or whether Fibre Channel makes sense for your management domain—reach out. Always happy to dive deep.

Ready for VCF 9?

Since IT infrastructure is critical, and despite all the simplicity VCF 9 brings, the correct architecture and methodology to bring VCF 9 into production is essential. If you want to be challenged on whether you, your hardware, your organization, and your processes are ready for VCF 9, drop me a message.

Better together: #vSphere 7 and #NSX-T 3.0 OR We are slowly becoming friends ;)

Let me start with a quote I wrote down 5 years ago (wow did time pass by).

IMO this quote is still correct (maybe I should have used network & security skills) and really changed the quality of services I can deliver within the IT field. My NSX journey started at this time with homelabbing and learning NSX-v from the one and only fellow comdivision partner & friend Matthias Eisner.

NSX-v; NSX-T; NSX Datacenter; NSX Cloud; NSX Advanced Load Balancer; NSX Intelligence.

Woa woa woa ….. that’s a lot of NSX…. Remember: NSX is not a single product. NSX is a suite of network & security products fulfilling VMware’s dream: deliver every network & security service in software.

NSX-v (or NSX for vSphere) has been the software-defined networking solution for vSphere, evolved from the product vCloud Networking & Security. We were able to create software-defined VXLAN networks or software-defined network services with the Edge Services Gateway in a simple manner & create microsegmentation around vSphere-based virtual machines.

It was good but it was a product around vSphere and not a real network product (which made it easier for me to get started with it).

Read more

Better together: #vSphere 7 and #Horizon 7.12 with #Nvidia vGPUs in high end #vdi environments

Current situations accelerate the demand for virtual desktops and a proper virtual desktop infrastructure. I have been working for years in the field of the software-defined datacenter and virtual desktops delivered via VMware Horizon. While standard office virtual desktops have become something like a commodity, hardware-accelerated graphics workloads and high-end demands are still kind of ‘newer’ and not as mature as the non-GPU workload.

I am involved in a large-scale project that creates a virtual desktop landscape for engineers doing a lot of computer-aided engineering (CAE). This is a very interesting environment that comes with a lot of difficult user requirements and the question of how they can be solved:

  • 3D acceleration required during normal “working” operations when modelling / analyzing components. (NVIDIA vGPU for the win)
  • Huge amount of local persistent storage to cache models loaded/checked out from a network location. (vSAN is everywhere in the cluster, included in Horizon Advanced & brutally fast if you do it right).
  • High throughput between the virtual desktops and the network location (The big benefit of a VDI setup).
  • Secure access from everywhere & sharing of the session (Another big benefit of VDI).
  • Windows & Linux Desktops must be supported (Horizon can do that)
  • Huge amount of memory per virtual desktop, since models can sometimes be 100/200 GB in size and need to reside in the memory of the virtual machine. (vSphere scales indefinitely [almost :P]).
  • Most engineers will require a dedicated Linux desktop where they can individually use modules/kernels/packages based on their needs (an IT management nightmare). (Dedicated user assignment with persistent VDIs).
  • Different working behaviours (more than 500 different user-interaction scenarios) that cannot easily be matched to a single use-case.
  • Store certain states of the virtual desktop. Loading / pre-processing might take 1-XX hours and needs to be repeated several times (VM snapshots with memory ftw).
  • Certain Desktops should be shared among team-members (working in parallel or sequential).
  • …..
Read more

VMware Cloud on AWS – Master Services Competency Specialist Exam 2019 Experience, Tips & Tricks

During VMware Empower every participant had the chance to take a free VMware exam. After doing my VCAP-DTM Design exam last year in Vienna, this year I gave a new exam a shot.

VMware Cloud on AWS – Master Services Competency Specialist Exam 2019

Why? I am interested in and fascinated by the VMware Cloud on AWS stack. Besides that, the newly created Master Services Competency requires this exam as well (besides doing an online and an onsite training).

When I registered for this exam a few days before Empower started, I had no idea what to expect (normally you should prepare first and then register for an exam – but hey, it was free and a chance to become one of the first to get this certification).

Read more

#VMworld 2018: Introducing VMware Cloud Provider Pod

[DISCLAIMER]
The Cloud Provider Pod is a product that I have been involved with since the beginning of the development phase. All information is accurate, but for sure some comments or opinions are biased ;-).
[/DISCLAIMER]

Within the session Introducing VMware Cloud Provider Pod, Yves Sandfort and Wade Holmes presented a new VMware solution for VMware’s Cloud Service Provider Program (CSPP) (view the session online here). The goal of the Cloud Provider Pod is to enable service providers to easily set up a fully featured and supported vCloud Director environment based on their requirements.

Read more

VMware #vSAN 6.6 – Features and expectations based on field-experience

The release of vSAN 6.6 came with a tremendous ‘what’s new’ feature set, bringing VMware’s software-defined storage and hyper-converged solution to the next level. Many bloggers out there in the community did a great job explaining the details of the following new features:

  • Removal of the Multicast requirement
  • Encryption using existing KMS solutions (KMIP 1.1 compatible)
  • Stretched Cluster enhancements (changing of witness hosts and secondary level of failure protections within a site)
  • Re-synchronization enhancements (including throttling)
  • Web Client-independent vSAN monitoring user interface
  • Performance enhancements
  • Maintenance Mode enhancements including more information and prechecks
  • New ESXCLI commands (that can be used with PowerCLI, e.g. to easily get SMART data of the physical devices)
  • and many more…

Read more

vSphere 6.5: Virtual Disk / VMDK Hot-extend beyond or equal to 2TB is NOW supported

When vSphere 6.5 was announced I was quite impressed about the features. Gathering more and more hands-on experience so far I am more than happy with it.

One of the new features that can have a real operational benefit hasn’t been documented so far that often (or at least I haven’t seen it anywhere).

Before vSphere 6.5 it was impossible to increase the VMDK size of a disk that was larger than 2 TB while the virtual machine was powered on. That was a fact that not many organizations were aware of until they stumbled upon it.

From an architectural point of view there shouldn’t be many use cases where such a large disk layout would be the best practice. But from an operational point of view for many of my customers this has been a bigger issue.
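For the hands-on folks, here is a hedged pyVmomi sketch of what such a hot-extend looks like through the vSphere API. The vCenter address, credentials, VM name, disk label, and target size are placeholders; on a pre-6.5 environment the same reconfigure call would simply be rejected for a disk beyond 2 TB while the VM is powered on.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (placeholder host and credentials)
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
content = si.RetrieveContent()

# Find the powered-on VM by name
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "big-data-vm")
view.DestroyView()

# Grow "Hard disk 2" to 3 TB while the VM keeps running
new_size_kb = 3 * 1024 * 1024 * 1024  # 3 TB expressed in KB
for device in vm.config.hardware.device:
    if isinstance(device, vim.vm.device.VirtualDisk) and device.deviceInfo.label == "Hard disk 2":
        device.capacityInKB = new_size_kb
        disk_spec = vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
            device=device)
        vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[disk_spec]))
        break

Disconnect(si)
```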

Read more

VMware #vSAN Queue Depth: Call for input/discussions

During the week I was at a customer site that is using vSAN 6.2 as foundation for their upcoming virtual desktop infrastructure (Seems like 2016 is really really the year of the VDI). I love vSAN and believe that at the moment it’s a great fit for many dedicated use-cases within the virtualization field.

During some load & failover tests of the vSAN installation I realized something regarding the IO queues within the vSAN stack and, to be honest, I am not quite sure what the risks, mitigations and therefore the correct actions are.

We opened a VMware ticket in parallel, but if you have any more in-depth knowledge about this topic, please let me know, since this might be interesting to more people (as the number of vSAN implementations is increasing).

Since the integration of flash/SSD in the performance/cache tier of vSAN the performance is great compared to classical HDD-based solutions.

To ensure a good performance level, Duncan Epping and Cormac Hogan talked about queuing topics in some blog posts and the official vSAN troubleshooting reference manual (great document btw.).


Read more

Let’s discover #vROPS Management Pack for #NSX

The more I work with #vROPS (vRealize Operations Manager) the more I love it (I know that it has a little bit of a learning curve).

The more I work with #NSX the more I love it (hell-yeah… the learning curve might be even bigger).

In both cases I did a lot of training and consulting work during 2016, and it is about time to bring those two solutions together and maximize our benefits. If you have licensed NSX and vRealize Operations Manager in the Advanced edition, you can make use of the SDDC Management Pack, which includes the management pack for NSX 3.0.

In the following I want to give you a quick rundown of the installation and features of the mentioned management pack.

Read more

NSX and nested ESXi environments: caveats and layer-2 troubleshooting

After having NSX running in a nested environment, I started last week to integrate / build an NSX environment between my physical and nested ESXi hosts. To be honest, achieving this was more complicated than I had expected. Anyway, it was a good trip to improve my NSX troubleshooting skills, and maybe the key findings can help one or another avoid the problems I had.

From a logical level my goal was pretty straightforward. I have 3 physical (vSAN) ESXi hosts running n nested ESXi hosts. All of them are managed from a single vCenter and should be part of a single transport zone where n VXLANs (incredibly many) will be deployed.

[Figure: logical design – physical and nested ESXi hosts in a single transport zone]

Read more