
Tytus Kurek
on 12 January 2024

Cloud-native infrastructure – When the future meets the present


We’ve all heard about cloud-native applications in recent years, but what about cloud-native infrastructure? Is there any reason why the infrastructure couldn’t be cloud-native, too? Or maybe it’s already cloud-native, but you’ve never had a chance to dive deep into the stack to check it out? What does the term “cloud-native infrastructure” actually even mean?

The more you think about it, the more confusing it gets. We all know that the modern way to build infrastructure is to turn it into a cloud. But then, how can a cloud itself be cloud-native or… native to itself? If this sounds tricky, don’t worry – you’re in the right place. Keep reading to see what happens when the future of infrastructure meets the present.

Cloud-native building blocks

Before we start exploring cloud-native infrastructure, let’s make sure we have a common understanding of the concept of cloud-native itself. According to the official definition maintained by the Cloud Native Computing Foundation (CNCF), cloud-native is a set of technologies that “empower organisations to build and run scalable applications in modern, dynamic environments”.

A lot of buzzwords but not many technical details. Fortunately, the CNCF definition also highlights some sample building blocks of cloud-native applications. Those are:

  • Containers – package applications’ code together with their dependencies and run them in isolation inside their runtime environment
  • Service meshes – control service-to-service communication with a centrally configurable network of proxies
  • Microservices – turn applications into small, independent services that communicate with each other through well-defined APIs
  • Immutable infrastructure – shift the service management paradigm from changing components in place to replacing them
  • Declarative APIs – enable describing the desired application state (see the sketch after this list)
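
To make the last two building blocks more concrete, here is a minimal sketch of declarative, immutable management. It is a hypothetical illustration, not any real platform’s API: the desired state is plain data, and reconciliation replaces components whose specification changed rather than modifying them in place. All names and images are made up.

```python
from dataclasses import dataclass

# Desired and actual state are plain, comparable data (declarative API).
@dataclass(frozen=True)
class ContainerSpec:
    image: str
    replicas: int

def reconcile(desired: dict[str, ContainerSpec],
              running: dict[str, ContainerSpec]) -> list[str]:
    """Compute the actions needed to make 'running' match 'desired'."""
    actions = []
    for name, spec in desired.items():
        if name not in running:
            actions.append(f"start {name} ({spec.image} x{spec.replicas})")
        elif running[name] != spec:
            # Immutable infrastructure: replace the component, never patch it.
            actions.append(f"replace {name} with {spec.image} x{spec.replicas}")
    for name in running:
        if name not in desired:
            actions.append(f"stop {name}")
    return actions

if __name__ == "__main__":
    desired = {"api": ContainerSpec("example/api:2.0", 3),
               "worker": ContainerSpec("example/worker:2.0", 2)}
    running = {"api": ContainerSpec("example/api:1.9", 3),
               "scheduler": ContainerSpec("example/scheduler:1.9", 1)}
    for action in reconcile(desired, running):
        print(action)
```

Running the sketch prints a plan that replaces the outdated api containers, starts the missing worker and stops the service that is no longer declared – the same convergence pattern that Kubernetes controllers apply continuously.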

While the definition does not force cloud-native application developers to use all of these components, most cloud-native applications follow this pattern. But can we apply the same approach to the underlying infrastructure as well?

It all starts with cloud-native

It’s easier when you’re operating in the applications space. The infrastructure is already there, providing all the capabilities you need, like container runtimes, rollback mechanisms and more. But what if you’re operating in an environment where the underlying infrastructure is yet to be built, for instance in the private cloud space?

Underneath every cloud is nothing more than a pool of bare metal resources: a cluster of physical machines equipped with CPUs, RAM and disks. What turns this raw infrastructure into a fully functional cloud is the cloud management software. And there is no reason why this software couldn’t be cloud-native, too.

When the cloud becomes an app

Greatly simplified, the cloud itself is just another app. It installs directly on metal and provides functions to tenant applications running on top. Put this way, whether it’s cloud-native or not depends on how the cloud management software is implemented underneath. Basing its architecture on the components listed above effectively sets the foundation for cloud-native infrastructure.

First, we can decompose the cloud management software into several microservices. In fact, leading cloud platforms, such as OpenStack, already follow this pattern. Then, we can run each of those microservices inside immutable containers. Both Kubernetes (K8s) and snapd are suitable for this purpose. Finally, declarative APIs enable high-level abstraction. Instead of struggling with the configuration of individual containers, we can just declare the desired state of the cloud.
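
To illustrate what that abstraction could look like, here is a hypothetical sketch – not OpenStack’s or any real platform’s API – in which a single high-level declaration of the cloud’s desired state is expanded into per-microservice container specifications, so nobody configures individual containers by hand. Service names and images are illustrative assumptions.

```python
# Hypothetical sketch: a high-level cloud declaration is expanded into
# per-microservice container specs that a reconciler (or K8s) can apply.
CLOUD_DESIRED_STATE = {
    "version": "2024.1",
    "compute": True,
    "networking": True,
    "object_storage": False,
    "control_plane_replicas": 3,
}

def render_control_plane(cloud: dict) -> dict[str, dict]:
    """Translate the declared cloud state into concrete container specs."""
    version = cloud["version"]
    replicas = cloud["control_plane_replicas"]
    services: dict[str, dict] = {
        "identity": {"image": f"example/identity:{version}", "replicas": replicas},
    }
    if cloud["compute"]:
        services["compute-api"] = {"image": f"example/compute-api:{version}",
                                   "replicas": replicas}
    if cloud["networking"]:
        services["network-api"] = {"image": f"example/network-api:{version}",
                                   "replicas": replicas}
    if cloud["object_storage"]:
        services["object-api"] = {"image": f"example/object-api:{version}",
                                  "replicas": replicas}
    return services

if __name__ == "__main__":
    for name, spec in render_control_plane(CLOUD_DESIRED_STATE).items():
        print(f"{name}: {spec['image']} x{spec['replicas']}")
```

Changing a single field in the declaration – for example flipping object_storage to True – changes the rendered set of containers, and a reconciliation loop takes care of converging the running cloud to it.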

Cloud-native infrastructure is an ideal answer for organisations looking for a future-proof cloud platform that will run on their premises. While adopting a hybrid multi-cloud architecture with a private cloud enables them to achieve cost optimisation, digital sovereignty and performance goals, using cloud-native principles enables them to operate it effectively. This way, the cloud management software simply becomes yet another application in their modern, containerised ecosystem, flattening the learning curve and increasing DevOps efficiency.

Let’s have a look at how it works in practice.   

Sunbeam – cloud-native infrastructure implementation example

A perfect example of cloud-native infrastructure implementation is Sunbeam.

Sunbeam is an upstream OpenStack project that revolutionises the way users deploy and operate clouds. Its architecture is entirely based on the components that define the cloud-native paradigm. By containerising OpenStack’s control plane and running it on top of K8s, Sunbeam effectively turns OpenStack into an extension of Kubernetes. This way, the K8s cluster gains additional functionality, such as traditional infrastructure-as-a-service (IaaS) capabilities, that is not natively available in its ecosystem.

The architecture of Sunbeam is depicted in the diagram below:

[Figure: Sunbeam architecture diagram]

With Sunbeam, all cloud management software components that require hardware access are delivered as snaps. This includes the cloud management and governance services, the Kubernetes cluster, and the hypervisor and storage functions provided by the data plane services. This approach ensures a high level of security thanks to the isolation and strict confinement provided by snapd. In turn, all services that don’t require hardware access are delivered as OCI images and run on top of K8s. This mostly covers the cloud control plane services.
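
Because the control plane is just another set of K8s workloads, it can be inspected with standard Kubernetes tooling. Below is a hedged example using the official kubernetes Python client; the "openstack" namespace is an assumption about the deployment and may differ in your environment.

```python
# Requires: pip install kubernetes, and a kubeconfig pointing at the cluster.
from kubernetes import client, config

def list_control_plane_pods(namespace: str = "openstack") -> None:
    """Print the containerised control plane services running on K8s."""
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    core = client.CoreV1Api()
    for pod in core.list_namespaced_pod(namespace=namespace).items:
        images = ", ".join(c.image for c in pod.spec.containers)
        print(f"{pod.metadata.name:50} {pod.status.phase:10} {images}")

if __name__ == "__main__":
    list_control_plane_pods()
```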

All the pieces of the stack are wrapped with charmed operators. A declarative API in front of them enables a high level of abstraction. This way, the initial deployment of the cloud gets significantly simplified, while its post-deployment operations, such as the enablement of plugins, become fully automated.
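
As a rough mental model of how an operator wrapped around such a component behaves, consider the deliberately simplified sketch below. It is a hypothetical illustration of the operator pattern, not the actual charm code: the operator observes a declared configuration and converges the component towards it, which is how post-deployment tasks such as plugin enablement can be fully automated. Plugin names are made up.

```python
# Hypothetical operator-pattern sketch: converge enabled plugins towards
# the declared configuration instead of scripting each change by hand.
class PluginOperator:
    def __init__(self) -> None:
        self.enabled: set[str] = set()

    def on_config_changed(self, declared: set[str]) -> None:
        """React to a new declared state by enabling/disabling plugins."""
        for plugin in declared - self.enabled:
            print(f"enabling {plugin}")
            self.enabled.add(plugin)
        for plugin in self.enabled - declared:
            print(f"disabling {plugin}")
            self.enabled.discard(plugin)

if __name__ == "__main__":
    operator = PluginOperator()
    operator.on_config_changed({"dashboard", "load-balancer"})
    operator.on_config_changed({"dashboard", "dns"})
```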

Learn more

Download our e-book to learn more about Sunbeam and how you can turn OpenStack and Kubernetes into cloud-native apps.

Read more blogs about cloud-native and Sunbeam.

Get in touch with Canonical cloud experts.

Further Reading

Learn more about Canonical’s open source infrastructure solutions.
