Google to Update GKE

IT giant has added container-native load balancing to Google Kubernetes Engine
15 October 2018

Google has announced improvements to the load balancing of traffic between containers in Google Kubernetes Engine (GKE). With the updated balancer, users create their own groups of traffic endpoints on the network (network endpoint groups), and the system distributes the load directly among the containers for greater efficiency.

In GKE, the ability to create an externally accessible object served by a dedicated load balancer has been added, which allows routing requests to targets by a specified path or host name (a minimal configuration sketch follows the feature list below). The new load distribution system has the following features:

  • Optimal load balancing
    Previously, the Google load balancing system evenly distributed requests to the nodes specified in the backend instance groups, without any knowledge of the backend containers. The request would then get routed to the first randomly chosen healthy pod, resulting in uneven traffic distribution among the actual backends serving the application. With container-native load balancing, traffic is distributed evenly among the available healthy backends in an endpoint group, following a user-defined load balancing algorithm.

  • Native support for health checking
    Being aware of the actual backends allows the Google load balancing system to health-check the pods directly, rather than sending the health check to a node and having the node forward it to a random pod (possibly on a different node). As a result, health check results more accurately mirror the health of the backends. You can specify a variety of health checks (TCP, HTTP(S) or HTTP/2) that check the backend containers (pods) directly rather than the nodes (see the second sketch after this list).

  • Graceful termination
    When a pod is removed, the load balancer natively drains the connections to the endpoint serving traffic to the load balancer, according to the connection draining period configured for the load balancer’s backend service (a draining-timeout example appears in the second sketch after this list).

  • Optimal data path
    With the ability to load balance directly to containers, the traffic hop from the load balancer to the nodes disappears, since load balancing is performed in a single step rather than two.

  • Increased visibility and security
    Container-native load balancing helps you troubleshoot your services at the pod level. It preserves the source IP to make it easier to trace back to the source of the traffic. Since the container sees the packets arrive from the load balancer rather than through a source NAT from another node, you can now create firewall rules using node-level network policy.  
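
As an illustration of what such a configuration can look like, the sketch below builds a Service annotated so that GKE creates network endpoint groups for its pods, plus an Ingress that routes requests by host name and path. This is a minimal example with assumed names, ports and paths rather than a configuration from Google's announcement; it uses standard-library Python to emit the manifests as JSON, which kubectl accepts just like YAML.

    # Minimal, hypothetical example: a Service annotated so that GKE creates
    # network endpoint groups (NEGs) for its pods, plus an Ingress that routes
    # requests by host name and path. All names, ports and paths are placeholders.
    import json

    service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {
            "name": "web-service",
            "annotations": {
                # Ask GKE to create NEGs so the HTTP(S) load balancer
                # targets the pods directly instead of the nodes.
                "cloud.google.com/neg": '{"ingress": true}',
            },
        },
        "spec": {
            "selector": {"app": "web"},
            "ports": [{"port": 80, "targetPort": 8080}],
        },
    }

    ingress = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "Ingress",
        "metadata": {"name": "web-ingress"},
        "spec": {
            "rules": [{
                "host": "example.com",          # route by host name ...
                "http": {
                    "paths": [{
                        "path": "/api",         # ... and by path
                        "pathType": "Prefix",
                        "backend": {
                            "service": {
                                "name": "web-service",
                                "port": {"number": 80},
                            }
                        },
                    }]
                },
            }]
        },
    }

    # Emit a v1 List so both manifests can be piped to `kubectl apply -f -`.
    print(json.dumps({"apiVersion": "v1", "kind": "List",
                      "items": [service, ingress]}, indent=2))

Piping the script's output to kubectl apply -f - would create both objects in a GKE cluster that supports network endpoint groups.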

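In the same illustrative spirit, the second sketch covers the health-checking and graceful-termination points from the list above: the pods expose an HTTP readiness probe that the load balancer's health check can mirror, and a GKE BackendConfig sets a connection-draining timeout for the backend service. The names, image, probe path and timeout are assumptions made for the example, and the BackendConfig would still need to be attached to the Service through the cloud.google.com/backend-config annotation.

    # Minimal, hypothetical example: pods with an HTTP readiness probe that the
    # load balancer health check can mirror, and a GKE BackendConfig that sets a
    # connection-draining timeout for graceful termination. Values are placeholders.
    import json

    deployment = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": "web"},
        "spec": {
            "replicas": 3,
            "selector": {"matchLabels": {"app": "web"}},
            "template": {
                "metadata": {"labels": {"app": "web"}},
                "spec": {
                    "containers": [{
                        "name": "web",
                        "image": "gcr.io/example/web:1.0",  # placeholder image
                        "ports": [{"containerPort": 8080}],
                        # Health of each pod is checked directly over HTTP.
                        "readinessProbe": {
                            "httpGet": {"path": "/healthz", "port": 8080},
                            "periodSeconds": 5,
                        },
                    }],
                },
            },
        },
    }

    backend_config = {
        # GKE-specific resource; attach it to the Service with the
        # cloud.google.com/backend-config annotation.
        "apiVersion": "cloud.google.com/v1",
        "kind": "BackendConfig",
        "metadata": {"name": "web-backendconfig"},
        "spec": {
            # Keep draining existing connections for 60 seconds after an
            # endpoint (pod) is removed from the backend service.
            "connectionDraining": {"drainingTimeoutSec": 60},
        },
    }

    print(json.dumps({"apiVersion": "v1", "kind": "List",
                      "items": [deployment, backend_config]}, indent=2))
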
The Cloud Native Computing Foundation released version 1.12 of the Kubernetes container orchestration system, on which GKE is based, at the end of September 2018. According to the developers, two features became stable in this release: Kubelet TLS Bootstrap, which lets kubelets request and obtain signed certificates for TLS connections, and support for Azure Virtual Machine Scale Sets.

Mirantis to Acquire Docker Enterprise Platform

After the sale of its enterprise business, Docker Inc will continue to exist as an independent company focused around Docker Hub
14 November 2019

Mirantis, a provider of OpenStack- and Kubernetes-based cloud solutions, has bought the Docker Enterprise platform business from Docker Inc (the commercial version of the enterprise toolkit and Docker engine, which includes the Docker Enterprise Container Engine, Docker Trusted Registry, and Docker Universal Control Plane). After the separation of the business, Docker Inc will continue to exist as an independent company and will focus its activities around the Docker Hub catalog and Docker Desktop, the integrated development environment for microservices and applications launched in containers.

Financial terms of the transaction were not disclosed. The team of developers, managers and support specialists who developed the Docker Enterprise platform will move to Mirantis. Mirantis will also receive contracts with 750 customers. The development of the open-source Docker project will continue with the participation of both companies, who together will continue to work on the Docker core and will ensure compatibility and portability in their products.