Virtualization has been the backbone of enterprise IT for decades, allowing organizations to improve hardware utilization by running workloads on shared hardware. However, as applications evolved and adopted cloud native architectures, teams realized that traditional hypervisors like VMware and KVM operate in isolation from container platforms, forcing them to manage two separate infrastructure stacks with different tooling, workflows, and expertise requirements.

Modern virtualization solves this by bringing virtual machines into Kubernetes using the same Kubernetes APIs, storage classes, and operational workflows you already use for pods and containers.

OpenShift Virtualization is a popular solution built on the open-source KubeVirt project, which allows you to run VMs alongside containerized applications within Red Hat OpenShift.
In this guide, you’ll understand how OpenShift Virtualization works, how it compares to traditional hypervisors, and why organizations are adopting modern virtualization strategies to consolidate their infrastructure.

Why Modern Virtualization Matters Today

Traditional virtualization, which dominated the last few decades, is misaligned with today’s containerized ecosystem. This has forced enterprises to rethink their approach and align with newer ways to build, deploy, and operate modern infrastructure.
Below are a few reasons why modern virtualization matters today:

Operational Complexity from Dual Infrastructure Stacks

Running separate platforms for VMs and containers creates friction by introducing different management tools, automation workflows, storage backends, and networking. This leads to duplicated effort, fragmented visibility, and slower deployment cycles, which hampers IT modernization and makes it harder to leverage newer tooling like GitOps, service mesh, and cloud native observability.

Limited Path for Application Modernization

Legacy virtualization platforms weren’t designed for cloud native workloads. They lack native Kubernetes integration, API-driven automation, and the flexibility to gradually migrate VM-based applications to containers. This leads to the accumulation of technical debt, which limits innovation and forces organizations into expensive, risky migrations rather than incremental virtualization evolution.

Unsustainable Cost Growth

Traditional virtualization vendors have dramatically increased their license costs in the last few years, introducing newer pricing models and mandatory feature bundles. As modern workloads scale across hybrid and multi-cloud setups, the total cost of ownership for legacy hypervisors has become economically unsustainable.

What is OpenShift Virtualization?

OpenShift Virtualization is a native feature of Red Hat’s OpenShift ecosystem that allows teams to run their existing VM workloads alongside containerized applications within a unified Kubernetes platform. It is built on the open-source KubeVirt project that treats VMs as Kubernetes-native objects that can be managed using the same APIs, tools, and workflows as any other Kubernetes object.

Red Hat OpenShift Virtualization provides a unified approach that allows teams to deploy, scale, and manage both workload types through standard Kubernetes constructs like custom resources, kubectl commands, and the OpenShift web console.

This benefits organizations that have invested heavily in VM-based applications and aren’t ready for immediate containerization, yet need to adopt cloud-native practices for new development. Using OpenShift Virtualization, they can follow a gradual modernization path where legacy VMs and modern containers coexist on shared infrastructure.

This approach is particularly valuable for:

  • Maintaining legacy applications that have a high refactoring risk or cost
  • Migrating from VMware or other traditional hypervisors without application changes
  • Enabling DevOps teams to use Kubernetes-native tools (GitOps, CI/CD pipelines, service mesh) for VM workloads

How Does OpenShift Virtualization Work?

OpenShift Virtualization transforms how virtual machines operate by embedding them within the Kubernetes orchestration layer. This allows teams to interact with VMs and containerized workloads using the same set of tools.

OpenShift Virtualization Architecture Explained

OpenShift Virtualization uses an operator-based architecture managed by the HyperConverged Cluster Operator (HCO), which coordinates several specialized operators that handle different aspects of virtualization and KubeVirt functionality.

Let us look at some of the important components of OpenShift virtualization:

KubeVirt operator

One of the key operators is the KubeVirt operator, which forms the foundation of the virtualization stack. It extends Kubernetes with custom resource definitions (CRDs) for VirtualMachine (VM) and VirtualMachineInstance (VMI) objects.

It consists of three core components:

  • virt-controller: Manages the entire VM lifecycle, from initial power-on through operations like shutdown and reboot to eventual deletion. When you define a VM, the controller schedules it through the standard Kubernetes scheduler, ensuring proper node placement based on resource availability and constraints.
  • virt-handler: Runs on every node as a DaemonSet and acts as the agent between Kubernetes and the VMs scheduled on that node. It communicates with the libvirt instance in each launcher pod to create the actual VM domain using KVM.
  • virt-api: Provides the API server that handles virtualization-specific requests, including VM console access, live migration commands, and related operations.
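To make these CRDs concrete, here is a minimal sketch of a VirtualMachine manifest. The names, namespace, and container disk image are illustrative assumptions, not values from this guide:

```yaml
# Illustrative VirtualMachine definition (names and image are assumptions).
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-demo
  namespace: my-namespace
spec:
  running: false            # when set to true, virt-controller creates a VMI
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
            cpu: "1"
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
```

Applying this manifest creates the VirtualMachine object; setting `spec.running: true` (or running `virtctl start fedora-demo`) prompts virt-controller to create the corresponding VirtualMachineInstance.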

Containerized Data Importer (CDI)

CDI is responsible for integrating VM storage seamlessly with Kubernetes storage classes. It handles all VM disk operations, including importing existing VM disk images, cloning VM disks, and managing DataVolumes. It creates PersistentVolumeClaims automatically during import operations.
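As a sketch of how CDI import works, the DataVolume below asks CDI to pull a disk image over HTTP into a PVC. The image URL, size, and StorageClass name are placeholder assumptions:

```yaml
# Illustrative DataVolume import (URL, size, and StorageClass are placeholders).
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: rhel9-root
spec:
  source:
    http:
      url: "https://example.com/images/rhel9.qcow2"   # hypothetical image location
  storage:
    resources:
      requests:
        storage: 30Gi
    storageClassName: fast-ssd                        # assumed StorageClass name
```

When this resource is created, CDI provisions the backing PersistentVolumeClaim automatically and streams the image into it; a VM can then reference the resulting volume as its root disk.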

Network Management

KubeVirt integrates VMs into the Kubernetes network fabric using CNI: VMs attach to the standard pod network by default, and can join additional networks through the Multus CNI plugin and `NetworkAttachmentDefinition` (NAD) CRDs, allowing VMs to use the same network fabric as containers.
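As an illustration of the multi-network setup, the NAD below exposes a Linux bridge to VMs. The bridge name and CNI type are assumptions for the sketch:

```yaml
# Illustrative NetworkAttachmentDefinition (bridge name is an assumption).
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vm-bridge
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vm-bridge",
      "type": "cnv-bridge",
      "bridge": "br1"
    }
```

A VM then joins this network by listing it under `spec.template.spec.networks` with a `multus.networkName` entry, alongside (or instead of) the default pod network.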

Storage Integration

VMs use standard Kubernetes storage primitives – StorageClasses define available storage backends, PersistentVolumeClaims request storage, and PersistentVolumes provide the actual disk resources. This Kubernetes native virtualization approach means VM disks can leverage any CSI-compatible storage system, from local volumes to enterprise storage arrays. KubeVirt extends Kubernetes without modifying the core control plane, preserving upstream compatibility.

How VMs and Containers Coexist in OpenShift

To understand how VMs and containers coexist in OpenShift, let’s walk through the VM creation process and the KubeVirt components involved.

  • When you create a VirtualMachine resource, the `virt-controller` validates the specification and creates a corresponding VMI object. This VMI triggers the creation of a standard Kubernetes pod called the “launcher pod.”
  • The launcher pod contains the `virt-launcher` container, which configures cgroups and namespaces to isolate the VM process and runs a libvirt process that manages the KVM virtual machine lifecycle, while the guest OS and its applications execute inside the VM itself.

At this point, the VM process is executing within the launcher pod that is managed by KubeVirt as a native Kubernetes object. However, as we know, VMs are fundamentally different from containers. Let’s see how they coexist.

Container workloads are native to Kubernetes: applications share the host kernel using namespace isolation, are scheduled as pods, managed via the kubelet, and networked through CNI.

VMs, on the other hand, run a full operating system with its own kernel. Each VM gets a launcher pod with a specialized `virt-launcher` container, which uses KVM to create an actual virtual machine with its own kernel.

On a single OpenShift node, you’ll see:

  • Container pods running application workloads – web servers, databases, microservices
  • VM launcher pods running virtual machines – legacy apps, Windows workloads
  • Both consuming the same node resources – CPU, memory, storage
  • Both using the same networking stack – pod network, external connectivity
  • Both using the same storage infrastructure – StorageClasses, PVCs, and PVs for stateful data
  • Both managed by the Kubernetes scheduler and kubelet at the pod level, with KubeVirt handling the VM lifecycle inside those pods

This unified approach eliminates the operational overhead of maintaining separate platforms to deploy and manage containers and VMs.

Managing Workloads with Red Hat OpenShift

There are multiple ways to manage workloads with Red Hat OpenShift, all integrated into the existing OpenShift Virtualization tooling.

  • Web Console: The OpenShift web console contains a dedicated section for virtualization, where you can:
    • Create VMs from templates, ISOs, and existing images
    • Deploy and monitor VMs
    • Configure networking and access VMs via RDP
    • Perform live migration among nodes
  • Command Line: The virtctl plugin extends kubectl with VM-specific operations while maintaining the same user experience as managing pods and deployments.
    • `kubectl get vms -n my-namespace` – list all VMs in your namespace
    • `virtctl start my-vm` – start a VM
    • `virtctl console my-vm` – access a VM’s console
  • GitOps Workflow: Because VMs are defined as Kubernetes objects, you can leverage GitOps workflows. You can store the VM definition in Git repositories, use ArgoCD or Flux to update the VM configuration, and deploy them to a Kubernetes cluster.
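The GitOps workflow above can be sketched with an Argo CD Application that syncs VM manifests from a repository. The repository URL, path, and namespaces here are hypothetical placeholders:

```yaml
# Hypothetical Argo CD Application syncing VM manifests from Git.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vm-fleet
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/vm-manifests   # assumed repository
    targetRevision: main
    path: production/vms                               # directory of VirtualMachine YAML
  destination:
    server: https://kubernetes.default.svc
    namespace: my-namespace
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Because VirtualMachine objects are ordinary Kubernetes resources, Argo CD (or Flux) reconciles edits to the Git repository into running VMs just as it would for Deployments.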

Benefits of OpenShift Virtualization

By consolidating VMs and containerized workloads into a single platform, OpenShift Virtualization provides both operational and strategic advantages.

Unified Management for VMs and Containers

OpenShift Virtualization eliminates the operational overhead of running two completely different tool stacks to manage VMs and containerized applications. It provides teams with a single control plane – one web console, one CLI toolset, and one automation framework – for managing all workloads. This consolidation reduces the learning curve, eliminates duplicate tooling costs and management overhead, and accelerates troubleshooting.

Faster Application Modernization

Organizations can lift-and-shift existing VMs to OpenShift without application changes, then incrementally refactor components into containers as part of their application modernization strategy. This enables hybrid architectures where legacy VM-based backends serve containerized frontends.

Lower Infrastructure Complexity

By consolidating VMs and containers into a single platform, OpenShift dramatically reduces infrastructure footprint and operational burden for teams. It eliminates separate hypervisor clusters, duplicate networking stacks, and storage management systems. Fewer platforms mean fewer systems to patch, upgrade, and secure, making it easier to manage different types of workloads efficiently.

Hybrid Cloud Flexibility

Because OpenShift provides a consistent platform across environments, VMs defined as Kubernetes resources can move seamlessly between on-prem data centers, public clouds (AWS, Azure, Google Cloud), edge locations, and managed services without modification. Organizations gain the flexibility to place workloads wherever cost, compliance, and latency requirements dictate.

Common Customer Challenges with Traditional Virtualization

Unsustainable Cost Model

Rising licensing and operating costs have significantly increased the expense of maintaining traditional VM platforms, with reported price hikes ranging from 3x to 6x. Traditional hypervisor vendors have shifted to per-core licensing models with mandatory feature bundling, forcing organizations to pay for capabilities they don’t need. Additionally, annual maintenance and support contracts require separate subscriptions, creating a multi-layered cost structure.

Investment Risk

Reduced investment in product support, engineering, and partner resources poses a major systemic risk to the long-term sustainability of legacy VM platforms. Many major hypervisor vendors have shifted focus away from hypervisor innovation toward cloud services in response to market demands and the evolving technical landscape. As a result, organizations find themselves locked into aging platforms with diminishing third-party support and limited migration options.

Lack of App Innovation

Technical debt and poor interoperability between traditional VM platforms and modern container app development limit flexibility, slowing innovation and hindering digital transformation. Traditional hypervisors lack the native Kubernetes integration needed to connect VM-based infrastructure with cloud-native tooling. As a result, containers and VMs operate in silos with different tools, skills, and processes – slowing modernization and preventing organizations from building truly hybrid applications that span both worlds.

OpenShift Virtualization vs Traditional Virtualization Platforms

Organizations evaluating OpenShift Virtualization against traditional virtualization platforms need to understand the fundamental differences in their architectural and operational models, which affect both day-to-day operations and long-term modernization plans.

Key Differences Between OpenShift and VMware

The table below summarizes the major differences between OpenShift and VMware.

| Feature | VMware vSphere | OpenShift Virtualization |
| --- | --- | --- |
| Architecture | Bare-metal hypervisor (ESXi) | Kubernetes-native; VMs run inside specialized Kubernetes pods managed by KubeVirt |
| Management Interface | vCenter, vSphere Client | OpenShift web console, kubectl/virtctl |
| Automation | vRealize, PowerCLI, Terraform | Kubernetes operators, GitOps, Helm, Ansible |
| Networking | VMware vSwitch, NSX | Kubernetes CNI |
| Storage | vSAN, VMFS, vendor plugins | Kubernetes CSI drivers, PVCs, any CSI-compatible storage |
| Container Integration | Separate platform (Tanzu) | Native – VMs and containers in the same cluster |
| Licensing Model | Per-core/per-socket with bundled features | Included with Red Hat OpenShift subscriptions; no additional per-VM or per-socket fees |
| Multi-cloud Portability | Cloud-specific variants (VMC on AWS) | Consistent across on-prem and all public clouds |
| Infrastructure-as-Code | Limited, requires external tools | Native declarative YAML, full K8s API support |
| Live Migration | vMotion (proprietary) | KubeVirt live migration (open source) |

Why Businesses Are Moving from Traditional Hypervisors

There are multiple technical and business reasons why businesses are moving away from traditional hypervisors.

  • Organizations on VMware want to escape escalating licensing costs following Broadcom’s acquisition, with some facing significant price increases.
  • Teams want to adopt cloud native architectures and tooling such as GitOps, service mesh, and Kubernetes operators.
  • Teams want to reduce operational overheads and bring the VM and container teams together, reducing duplicate skills, tools, and processes.
  • Organizations also want to take advantage of hybrid cloud and need the flexibility to run workloads across on-prem and public cloud without hypervisor lock-in.

Cost and Operational Efficiency Gains

Modern virtualization provides infrastructure consolidation that reduces hardware footprint – no separate clusters for VMs and containers. This means fewer servers, switches, and storage arrays to manage. A single control plane provides operational efficiency as it provides one web console, one CLI, one automation framework, and one monitoring stack for all workloads. Further, automation becomes simpler with declarative YAML definitions replacing complex scripting, and GitOps workflows enable infrastructure-as-code for both VMs and containers.

Modern Virtualization in Cloud Native Era

Unlike traditional hypervisor-based virtualization, which treated VMs and containerized workloads as incompatible workload types requiring separate management stacks, cloud native virtualization treats them as complementary components of a unified application platform.

Organizations no longer want to build and maintain separate technology stacks for different workload types – they seek platforms that provide operational consistency across VMs, containers, and serverless functions. Modern virtualization tools like KubeVirt enable Kubernetes to go beyond its original container orchestration mandate to support VM-based workloads.

Bringing both VMs and containers under a single platform allows organizations to modernize incrementally while maintaining operational stability.

Getting Started with OpenShift Virtualization

Organizations ready to explore OpenShift virtualization have multiple paths to evaluate their options and adopt a solution that suits them the best based on their current infrastructure and modernization goals.

Prerequisites and Setup Overview

To deploy OpenShift Virtualization, first ensure your OpenShift cluster runs on infrastructure with CPU virtualization support. Then install the OpenShift Virtualization Operator from OperatorHub, create a HyperConverged resource to deploy all components, and configure storage. Detailed setup steps are available in the official Red Hat OpenShift Virtualization documentation.
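Once the operator is installed, creating the HyperConverged resource is a one-manifest step. The name and namespace below are the defaults used by the operator; an empty spec accepts the default configuration:

```yaml
# HyperConverged resource that triggers deployment of all virtualization components.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}
```

Applying this manifest prompts the HyperConverged Cluster Operator to roll out KubeVirt, CDI, and the supporting network and storage components.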

Next Steps: Evaluate OpenShift or Request a Demo

You can start with Red Hat’s managed trial clusters to evaluate its offerings without any infrastructure setup. You can also access interactive demos, workshops, and sandbox environments through Red Hat Developer programs, or work with certified partners for production architecture design and deployment planning.

Portworx also maintains a deep ecosystem of partners, supporting Kubernetes and KubeVirt offerings from Red Hat, SUSE, Spectro Cloud, AWS, and more, while working across any on-prem or public cloud storage:

  • Single operating model across any Kubernetes deployment
  • Support for on-prem and public cloud infrastructure providers
  • Channel partner for seamless migration support

Virtualization Resources & FAQs

Q. What is OpenShift Virtualization used for?

OpenShift Virtualization enables organizations to run virtual machines alongside containerized applications within a unified Kubernetes platform. It’s primarily used for modernizing legacy VM-based applications without immediate refactoring and consolidating infrastructure to eliminate separate hypervisor platforms. Organizations use it to migrate from VMware or other traditional virtualization solutions while maintaining operational consistency across both VM and container workloads. This approach provides a gradual modernization path where legacy applications continue as VMs while teams develop new cloud-native services as containers.

Q. Is OpenShift Virtualization part of Red Hat OpenShift?

Yes, OpenShift Virtualization is included as a native feature with all Red Hat OpenShift editions – there are no additional per-VM or per-socket licensing fees. It’s deployed as an operator from OperatorHub and integrates directly into the OpenShift control plane. Red Hat also offers OpenShift Virtualization Engine, a dedicated edition focused exclusively on VM workloads for organizations that need virtualization without running containers in the same clusters.

Q. Can you run VMs and containers together in OpenShift?

Yes, OpenShift Virtualization allows VMs and containers to coexist on the same cluster nodes, sharing infrastructure resources, networking, and storage. VMs run inside specialized launcher pods managed by Kubernetes, while containers run natively as standard pods. This coexistence enables hybrid application architectures, all managed through a single web console and operational workflow. Teams can gradually modernize applications by moving components from VMs to containers incrementally.

Q. How does OpenShift Virtualization differ from VMware?

OpenShift Virtualization runs VMs as Kubernetes-native workloads managed through standard APIs and operators, while VMware ESXi operates as a bare-metal hypervisor with proprietary vCenter management. OpenShift provides unified management for both VMs and containers in a single platform, whereas VMware requires separate solutions (Tanzu) for container orchestration. Licensing differs significantly – OpenShift Virtualization is included with subscriptions versus VMware’s per-core/per-socket fees. OpenShift offers consistent deployment across on-premises and all public clouds, avoiding hypervisor-specific cloud variants.

Q. Can I migrate from VMware to OpenShift Virtualization?

Yes, OpenShift Virtualization includes the Migration Toolkit for Virtualization (MTV) that automates VM migration from VMware vSphere, Red Hat Virtualization, and other platforms. The toolkit handles migration with minimal downtime, transferring VM disks, configurations, and network settings. Organizations typically migrate in phases – starting with non-critical workloads for validation, then progressively moving production systems.
