Google Updates GKE

IT giant has added container-native load balancing to Google Kubernetes Engine
15 October 2018

Google has announced improvements to load balancing between containers in Google Kubernetes Engine (GKE). With the updated load balancer, users create network endpoint groups that describe the endpoints serving traffic, and the system distributes load across those containers directly for greater efficiency.
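
Below is a minimal sketch of how a GKE Service might opt in to container-native load balancing using the `cloud.google.com/neg` annotation that GKE documents for creating network endpoint groups (NEGs); the names, image and ports are illustrative placeholders, not part of the announcement.

```yaml
# Sketch: a Deployment plus a Service annotated so GKE creates a network
# endpoint group (NEG) and load-balances to pods directly.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app            # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:1.0   # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
  annotations:
    # Asks GKE to back the Ingress with NEGs (container-native load balancing)
    # rather than instance groups.
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: hello-app
  ports:
  - port: 80
    targetPort: 8080
```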

GKE also adds the ability to create an externally accessible object served by a dedicated load balancer, with routing to the targets configured by URL path or host name (a configuration sketch follows the feature list below). The new load-balancing system has the following features:

  • Optimal load balancing
    Previously, the Google load balancing system evenly distributed requests to the nodes specified in the backend instance groups, without any knowledge of the backend containers. The request would then get routed to the first randomly chosen healthy pod, resulting in uneven traffic distribution among the actual backends serving the application. With container-native load balancing, traffic is distributed evenly among the available healthy backends in an endpoint group, following a user-defined load balancing algorithm.

  • Native support for health checking
    Being aware of the actual backends allows the Google load balancing system to health-check the pods directly, rather than sending the health check to the nodes and having a node forward it to a random pod (possibly on a different node). As a result, the health check results more accurately mirror the health of the backends. You can specify a variety of health checks (TCP, HTTP(S) or HTTP/2) that probe the backend containers (pods) directly, rather than the nodes; a BackendConfig sketch covering this appears after the list.

  • Graceful termination
    When a pod is removed, the load balancer natively drains the connections to the endpoint serving traffic to the load balancer according to the connection draining period configured for the load balancer’s backend service.

  • Optimal data path
    With the ability to load balance directly to containers, the traffic hop from the load balancer to the nodes disappears, since load balancing is performed in a single step rather than two.

  • Increased visibility and security
    Container-native load balancing helps you troubleshoot your services at the pod level. It preserves the source IP to make it easier to trace back to the source of the traffic. Since the container sees the packets arrive from the load balancer rather than through a source NAT from another node, you can now create firewall rules using node-level network policy.  
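
To tie the points above together, here is a hedged sketch of the external exposure, host/path routing, direct pod health checks and connection draining. It reuses the Service from the earlier sketch and assumes GKE's BackendConfig resource and the `cloud.google.com/backend-config` annotation; the host name, paths and timeout values are invented placeholders.

```yaml
# Sketch: health-check and draining settings attached to the Service's backend.
apiVersion: cloud.google.com/v1      # API group/version varies by GKE release
kind: BackendConfig
metadata:
  name: hello-backendconfig
spec:
  healthCheck:
    type: HTTP                       # TCP, HTTP(S) or HTTP/2 are possible
    requestPath: /healthz            # probed on the pods, not the nodes
    port: 8080
  connectionDraining:
    drainingTimeoutSec: 60           # graceful termination window for removed pods
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"default": "hello-backendconfig"}'
spec:
  selector:
    app: hello-app
  ports:
  - port: 80
    targetPort: 8080
---
# Sketch: external HTTP(S) load balancer with routing by host name and path.
apiVersion: networking.k8s.io/v1     # clusters of that era used extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  rules:
  - host: hello.example.com          # placeholder host name
    http:
      paths:
      - path: /api                   # placeholder path
        pathType: Prefix
        backend:
          service:
            name: hello-app
            port:
              number: 80
```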

The Cloud Native Computing Foundation released an update to the Kubernetes container orchestration system at the end of September 2018. According to the developers, two features became stable in version 1.12: Kubelet TLS Bootstrap, which handles signing of security certificates for TLS connections, and support for Azure Virtual Machine Scale Sets.
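
As a rough illustration of what Kubelet TLS Bootstrap enables, a kubelet can request and rotate its own certificates instead of shipping with pre-signed ones. The sketch below uses the KubeletConfiguration fields generally used for this (`rotateCertificates`, `serverTLSBootstrap`); treat the file path and values as assumptions, not part of the release notes.

```yaml
# /var/lib/kubelet/config.yaml (path is an assumption)
# Hedged sketch: let the kubelet bootstrap and rotate its TLS certificates.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Rotate the client certificate obtained via TLS bootstrapping as it nears expiry.
rotateCertificates: true
# Request the serving certificate from the cluster CA instead of self-signing
# (the resulting CertificateSigningRequest still has to be approved by an admin).
serverTLSBootstrap: true
```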

Kubic Adapted for ARM64

The Kubic environment is built on openSUSE, Docker, Kubernetes and Salt
01 February 2019

The openSUSE developers have reported support for the AArch64 architecture in the Kubic toolkit, which lets you deploy and maintain a cluster for running applications in isolated containers. An ISO image (1.1 GB) is available for download, providing a complete solution for building CaaS (Container as a Service) systems on server boards with AArch64 processors. The solution is built from a single code base that is also used to produce builds for the x86_64 architecture.

Among the limitations of the AArch64 edition, some packages specific to x86_64 systems are unavailable; for example, kubernetes-dashboard is not supported. The base boot image targets 64-bit ARM boards with UEFI support and a sufficiently large amount of RAM (more than 1 GB), such as the Overdrive 1000, D05 and ThunderX2. For boards without UEFI, such as the Pine64 and Raspberry Pi 3, a separate MicroOS-based image has been prepared (a stripped-down distribution with atomic installation of updates, configuration via cloud-init, a read-only Btrfs root partition, and the Podman / CRI-O and Docker runtimes). Installation can be automated across a large number of machines using a standard AutoYaST profile, or nodes can be booted over the network (PXE / tftpboot); a rough cloud-init sketch is shown below.
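
As an illustration of the cloud-init path mentioned above, a node booted from the MicroOS image could be seeded with user-data along the following lines. This is a generic cloud-init sketch, not an official Kubic profile; the host name, key and commands are placeholders.

```yaml
#cloud-config
# Hedged example of first-boot user-data for a MicroOS/Kubic node.
# All values below are placeholders.
hostname: kubic-node-01
timezone: UTC
ssh_authorized_keys:
  - ssh-ed25519 AAAA...placeholder... admin@example.org
runcmd:
  # Illustrative only: make sure the CRI-O container runtime is running.
  - systemctl enable --now crio
```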

The Kubic environment is built on the openSUSE distribution (Tumbleweed repository), the Docker toolkit, the Kubernetes platform for orchestrating clusters of isolated containers, and the Salt centralized configuration management system. For cluster management the Velum interface is offered, which lets you deploy a Kubernetes-based cluster in one click and then manage it, including adding and removing nodes, monitoring failures, and defining update installation policies. Kubernetes runs on nodes in virtual machines deployed with libvirt or OpenStack. It supports running containers prepared with the Docker toolkit. Container images are distributed as RPM packages.