Critical Vulnerability Fixed in Kubernetes 1.13

The flaw allowed attackers to gain full control over a container cluster
06 December 2018

Kubernetes 1.13 has been released, in which the developers fixed a privilege escalation vulnerability (CVE-2018-1002105). The bug allowed attackers to gain full control over a container cluster.

To exploit the flaw, an attacker sent a specially crafted discovery request through the Kubernetes API server to a backend API server; after the request failed, the network connection remained open. The attacker could then send arbitrary requests over that open connection directly to the backend, which treated them as if they came from the API server itself.

In addition, any Kubernetes user could exploit this flaw, including unauthenticated users. As it turned out, the problem dates back to version 1.0.

To fix it, you need to update Kubernetes to version 1.10.11, 1.11.5, 1.12.3, or 1.13.0, or at least block anonymous access to the API using the --anonymous-auth=false option and revoke permissions to perform exec, attach, and portforward operations.
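As a rough illustration of the second mitigation, the sketch below uses the official Kubernetes Python client to list ClusterRoles that still grant exec, attach, or portforward on pods, so an operator can decide which bindings to revoke. The kubeconfig loading and the printed output are assumptions for the example, not part of the original advisory.

```python
from kubernetes import client, config

# Resources whose permissions the advisory suggests revoking. Assumption:
# only ClusterRoles are audited here; namespaced Roles could be checked
# the same way with list_namespaced_role().
RISKY_RESOURCES = {"pods/exec", "pods/attach", "pods/portforward"}


def find_risky_cluster_roles():
    config.load_kube_config()  # uses the local kubeconfig
    rbac = client.RbacAuthorizationV1Api()

    for role in rbac.list_cluster_role().items:
        for rule in role.rules or []:
            granted = RISKY_RESOURCES.intersection(rule.resources or [])
            if granted:
                print(f"ClusterRole {role.metadata.name!r} grants: "
                      f"{', '.join(sorted(granted))}")


if __name__ == "__main__":
    find_risky_cluster_roles()
```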

New Kubernetes 1.13 features:

  • The Container Storage Interface (CSI) for building plug-ins for various storage systems has been stabilized. The developers also stabilized kubeadm, the simplified interface for managing a Kubernetes cluster.
  • Topology Aware Volume Scheduling (TAVS) in the container scheduler has also been stabilized, along with the Kubelet Device Plugin Registration service, which gives plug-ins access to the Kubelet.
  • An experimental interface for creating plug-ins has been added, allowing third-party monitoring systems to be integrated into Kubernetes.
  • APIServer DryRun, the kubectl diff command, and the ability to use local block devices as persistent data stores have reached beta status (a dry-run sketch follows this list).
  • CoreDNS is now used as the default DNS server.
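As a quick illustration of what the APIServer DryRun beta means in practice, here is a minimal sketch using the official Python client, assuming a client version that exposes the dry_run parameter; the ConfigMap name and namespace are made up for the example.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

cm = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="dry-run-demo"),  # illustrative name
    data={"greeting": "hello"},
)

# dry_run="All" asks the API server to run validation and admission
# for this request without persisting the object.
result = v1.create_namespaced_config_map(namespace="default", body=cm,
                                         dry_run="All")
print("server accepted:", result.metadata.name)
```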

 

Google Updates GKE

IT giant has added container-native load balancing to Google Kubernetes Engine
15 October 2018

Google has described improvements to load balancing between containers in Google Kubernetes Engine (GKE). With the updated balancer, users create network endpoint groups (NEGs) representing their traffic endpoints, and the system distributes load directly across the containers for greater efficiency.

GKE has also gained the ability to create an externally accessible object backed by a dedicated load balancer, which allows routing requests to targets by a specified path or host name. The new load distribution system has the following features (a minimal configuration sketch follows the list):

  • Optimal load balancing
    Previously, the Google load balancing system evenly distributed requests to the nodes specified in the backend instance groups, without any knowledge of the backend containers. The request would then get routed to the first randomly chosen healthy pod, resulting in uneven traffic distribution among the actual backends serving the application. With container-native load balancing, traffic is distributed evenly among the available healthy backends in an endpoint group, following a user-defined load balancing algorithm.

  • Native support for health checking
    Being aware of the actual backends allows the Google load balancing system to health-check the pods directly, rather than sending the health check to the nodes and the node forwarding the health check to a random pod (possibly on a different node). As a result, health check results obtained this way more accurately mirror the health of the backends. You can specify a variety of health checks (TCP, HTTP(S), or HTTP/2) that probe the backend containers (pods) directly rather than the nodes.

  • Graceful termination
    When a pod is removed, the load balancer natively drains the connections to the endpoint serving traffic to the load balancer according to the connection draining period configured for the load balancer’s backend service.

  • Optimal data path
    With the ability to load balance directly to containers, the traffic hop from the load balancer to the nodes disappears, since load balancing is performed in a single step rather than two.

  • Increased visibility and security
    Container-native load balancing helps you troubleshoot your services at the pod level. It preserves the source IP to make it easier to trace back to the source of the traffic. Since the container sees the packets arrive from the load balancer rather than through a source NAT from another node, you can now create firewall rules using node-level network policy.  
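To make the container-native setup concrete, here is a minimal sketch, using the Kubernetes Python client, of a Service annotated so that GKE creates network endpoint groups and the HTTP(S) load balancer can target pods directly; the service name, selector, and ports are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# The cloud.google.com/neg annotation asks GKE to create network endpoint
# groups for this Service, so an Ingress-managed load balancer sends traffic
# to pod IPs directly instead of hopping through node ports.
service = client.V1Service(
    metadata=client.V1ObjectMeta(
        name="web",  # illustrative name
        annotations={"cloud.google.com/neg": '{"ingress": true}'},
    ),
    spec=client.V1ServiceSpec(
        type="ClusterIP",
        selector={"app": "web"},  # assumed pod label
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

v1.create_namespaced_service(namespace="default", body=service)
```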

In late September 2018, the Cloud Native Computing Foundation released version 1.12 of the Kubernetes container orchestration system, on which GKE is based. According to the developers, two features became stable in that version: Kubelet TLS Bootstrap, which automates the signing of TLS certificates for kubelets, and support for Azure Virtual Machine Scale Sets.