Cloud native landscape


Cloud native landscape architecture, part 1

After writing the first part of this series (a summary of the state of the art in cloud native application development), I realized how much the landscape has changed in the last five years, how much work remains before cloud native technology is truly useful to users, and how many challenges arise when moving from a traditional, familiar architecture to a cloud native one. Learning and adopting this new approach is painful. In this post I summarize the biggest pain points and the key concepts needed to design and deploy a cloud native application.

I also decided to include a few requirements that I think will be very important for the adoption of cloud native applications, as well as a few open questions we may still face.

It is a constant effort not to repeat concepts already covered elsewhere in the field.

How to define the new landscape?

The landscape has been evolving for more than 15 years, moving from traditional applications on VMs to cloud-based applications deployed on a combination of cloud services, from API servers and PaaS to SaaS. We have now gone from APIs to microservices to containers; becoming fully cloud native took longer.

PaaS is about providing compute, storage and networking services on top of cloud services

A PaaS provides additional features on top of application deployment, including:




Workflow orchestration/dependencies

Private networking with elastic IPs

Resource orchestration (CP)

Feature hosting (Docker)

Container management (Kubernetes)

Orchestration of endpoints (ASP.NET Core)

Resource orchestration (service mesh)

Containerization of networks (VNet)

Logging and monitoring (ELK)

Resource orchestration (Envoy)

Integration with 3rd party resources (Swarm)

If you want to read more about the other aspects of a PaaS, you can check here and here. The Docker engine is also not considered part of a PaaS, as the containers it runs can be hosted on different environments, such as public cloud, private cloud, and bare metal.
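As a rough illustration of the feature list above, a PaaS deployment request can be thought of as a declarative descriptor that the platform then fulfils. The following is a minimal sketch in Python; all field names are hypothetical and do not correspond to any specific PaaS API:

```python
# Hypothetical, minimal PaaS-style deployment descriptor.
# Field names are illustrative only; real platforms define
# their own schemas.
deployment = {
    "app": "orders-service",
    "container": {"image": "orders:1.4.2", "replicas": 3},  # container management
    "networking": {"private": True, "elastic_ip": True},    # private networking
    "dependencies": ["postgres", "redis"],                  # workflow dependencies
    "logging": {"stack": "ELK", "level": "info"},           # logging and monitoring
}

def validate(desc):
    """Check that the descriptor covers the minimum a PaaS would need."""
    required = {"app", "container", "networking"}
    missing = required - desc.keys()
    if missing:
        raise ValueError(f"missing sections: {sorted(missing)}")
    return True

print(validate(deployment))  # True
```

The point of the sketch is only that the user declares *what* to run and the platform decides *how*, which is the dividing line between a PaaS and plain infrastructure.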

The microservices architecture was born while the debate between monolithic and microservices applications was still ongoing, as an application architecture approach rather than an application design concept. While some companies still argue for monolithic applications, for example the AWS team defending their monolithic App Mesh architecture, the reality is that most of the world is moving towards microservices. Monolithic applications will probably be seen as legacy within ten years.

The microservices approach relies on independent components that communicate with each other through APIs: a more distributed architecture that allows easier scaling of the system. A microservices architecture requires a fully decentralized service control plane (DCSP), which orchestrates service communication, and a central management plane (CMP).

The DCSP is needed to orchestrate service deployment, communication and runtime coordination. A typical DCSP could include:

Service registry

Service communication protocols

Service request management

Service subscription management

Service discovery
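To make the DCSP responsibilities more concrete, here is a minimal in-memory sketch of the two pieces most often discussed together, the service registry and service discovery. This is illustrative Python, not any real control plane; names and addresses are invented:

```python
import random

class ServiceRegistry:
    """Minimal in-memory service registry: instances register their
    endpoints, and clients discover one endpoint per lookup."""

    def __init__(self):
        self._services = {}  # service name -> list of endpoints

    def register(self, name, endpoint):
        self._services.setdefault(name, []).append(endpoint)

    def deregister(self, name, endpoint):
        self._services.get(name, []).remove(endpoint)

    def discover(self, name):
        endpoints = self._services.get(name)
        if not endpoints:
            raise LookupError(f"no instances of {name!r} registered")
        return random.choice(endpoints)  # naive client-side load balancing

registry = ServiceRegistry()
registry.register("orders", "10.0.0.5:8080")
registry.register("orders", "10.0.0.6:8080")
print(registry.discover("orders"))  # one of the two registered endpoints
```

Real registries (Consul, etcd-backed systems) add health checks, leases, and watches on top of this basic register/discover cycle.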

A CMP is a central component of a microservices architecture, in charge of monitoring and orchestration as well as operation and maintenance. A CMP could include:

User interface

Updating inventory of services

Service management API

App development platform

Status reporting

Control plane API

Service lifecycle management
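A toy sketch of the CMP side: keeping an inventory of services and reporting their status. The class, state names, and service name below are invented for illustration only:

```python
class ManagementPlane:
    """Tiny sketch of a CMP: tracks a service inventory and exposes
    a status report. States are illustrative, not a real API."""

    def __init__(self):
        self._inventory = {}  # service name -> lifecycle state

    def add_service(self, name):
        # Newly inventoried services start in the "deployed" state.
        self._inventory[name] = "deployed"

    def set_state(self, name, state):
        if name not in self._inventory:
            raise KeyError(name)
        self._inventory[name] = state

    def status_report(self):
        # Snapshot of the inventory, as a status-reporting endpoint would return.
        return dict(self._inventory)

plane = ManagementPlane()
plane.add_service("orders")
plane.set_state("orders", "running")
print(plane.status_report())  # {'orders': 'running'}
```

The inventory-plus-status-report pattern is the core of what the list above calls "updating inventory of services" and "status reporting"; everything else (UI, control plane API) is built around it.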

On top of the DCSP and CMP, a microservices architecture could include a REST service that communicates with both. Finally, service requests could also be routed to non-cloud infrastructure, such as another service or a custom backend.

FaaS is used for provisioning, configuring and managing the infrastructure on top of a cloud provider

A FaaS is a declarative way of defining infrastructure and applications using cloud services and microservices. It provides a way to provision one or more servers and connect them to a cloud service, along with the configuration of all the software components (layers) of the application.

It is a declarative approach that allows controlling cloud resource deployment and management through YAML- or JSON-based configs.
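For example, a declarative JSON config might describe the desired resources, which tooling then reconciles against what actually exists. The schema below is made up for illustration; real declarative tools each define their own:

```python
import json

# Hypothetical declarative config: the desired end state, not the steps.
config = json.loads("""
{
  "resources": [
    {"type": "server", "name": "web-1", "size": "small"},
    {"type": "bucket", "name": "assets"}
  ]
}
""")

def plan(desired, existing):
    """Return which declared resources must be created: a (very)
    simplified reconcile step between declared and actual state."""
    existing_names = {r["name"] for r in existing}
    return [r for r in desired["resources"] if r["name"] not in existing_names]

# "web-1" already exists, so only "assets" needs to be created.
to_create = plan(config, existing=[{"type": "server", "name": "web-1"}])
print([r["name"] for r in to_create])  # ['assets']
```

This declare-then-reconcile loop is what distinguishes the declarative approach from imperative scripts that list creation steps one by one.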

It is a serverless approach to infrastructure management, as well as a first step toward abstracting away the underlying infrastructure. In essence, a "serverless" server (FaaS) simply manages the underlying infrastructure needed for the service, much like a serverless container engine does, though the latter is more limited since it only needs to manage the containerized application. The resulting application deployed on a FaaS is still a containerized one.

Microservices can also be exposed via FaaS; this is known as hybrid microservices. Two key concepts in a FaaS are the "serverless" provisioning of servers and an "operator" to manage the applications.
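Those two concepts can be sketched in a few lines: a function handler, and a minimal "operator" that provisions it lazily and routes events to it. All names here are hypothetical, and real FaaS platforms are of course far more involved:

```python
class Operator:
    """Toy 'operator': registers function handlers and invokes them
    on incoming events, provisioning lazily on first call."""

    def __init__(self):
        self._functions = {}
        self._provisioned = set()

    def register(self, name, handler):
        self._functions[name] = handler

    def invoke(self, name, event):
        if name not in self._provisioned:
            # "Cold start": in a real platform, this is where the
            # underlying server/container would be provisioned.
            self._provisioned.add(name)
        return self._functions[name](event)

def resize_image(event):
    # Hypothetical handler: pretend to halve the image dimensions.
    return {"width": event["width"] // 2, "height": event["height"] // 2}

op = Operator()
op.register("resize", resize_image)
print(op.invoke("resize", {"width": 800, "height": 600}))
# {'width': 400, 'height': 300}
```

The user writes only the handler; the operator owns provisioning and invocation, which is the "serverless" part of the contract.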

Being serverless does not necessarily mean a service runs on a PaaS, for example.

Containerization is the layer on top of a cloud service, enabling the deployment of any type of application

A container is an operating system (OS)