Evaluation Guide
Scalability
Genus is designed to support a variety of scenarios and to enable the natural growth of any application. This includes a strong focus on scalability, allowing an application to grow from a small proof of concept to a large enterprise-wide solution.
The Genus runtime microservice architecture is based on Kubernetes. This supports dynamic, automatic scale-up and scale-out of containers, in addition to scaling out by adding Kubernetes nodes.
Genus uses a de facto standard technology stack, including Kubernetes with Helm and Docker containers. We use Redis as an internal cache and non-persistent database within the Kubernetes cluster.
-
Horizontal scalability, or scaling out, in Genus is achieved through Kubernetes scaling features. Essentially, this means adding replicas of a pod (microservice). The number of replicas is set when a microservice is deployed and can later be adjusted manually at run time, as sketched below.
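As a minimal sketch, the replica count is typically exposed as a Helm value; the key and service names below are illustrative and not taken from the actual Genus chart.

```yaml
# Illustrative Helm values snippet (key names are hypothetical):
# sets the number of replicas for a microservice at deploy time.
appServices:
  replicaCount: 3
```

At run time, the same deployment can be rescaled manually with the standard kubectl scale command, for example: kubectl scale deployment genus-app-services --replicas=5.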
Kubernetes also supports autoscaling, where the number of pods is automatically increased or decreased based on metrics such as CPU utilization.
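For illustration, a standard Kubernetes HorizontalPodAutoscaler can be attached to a microservice's deployment; the deployment name below is hypothetical.

```yaml
# Illustrative HorizontalPodAutoscaler (deployment name is hypothetical):
# Kubernetes adds or removes replicas to keep average CPU utilization near the target.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: genus-app-services
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: genus-app-services
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```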
The number of nodes can also be scaled up or down as needed. This is usually more relevant when adding or removing environments in a cluster, rather than as a result of changes in load.
-
Vertical scalability, or scaling up, in Genus translates to modifying requests and limits for pods (microservices) in Kubernetes. Both CPU and memory have a request and a limit. The pods request certain amounts of CPU time and memory from the nodes.
The request settings are used by the Kubernetes scheduler to determine which node the pod is assigned to. If a pod uses more CPU than its limit, it is throttled; if it exceeds its memory limit, it may be terminated and restarted.
The requests and limits for each microservice can be specified through Helm values when deploying the microservice.
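A minimal sketch of such values is shown below; the key names are illustrative and not the actual Genus chart schema, but the resources block follows the standard Kubernetes format.

```yaml
# Illustrative Helm values snippet (key names are hypothetical):
# requests reserve capacity on the node; limits cap what the container may use.
appServices:
  resources:
    requests:
      cpu: 500m      # half a CPU core reserved for scheduling
      memory: 512Mi
    limits:
      cpu: "1"       # CPU usage above this is throttled
      memory: 1Gi    # exceeding this can cause the container to be restarted
```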