Operations with a classic cluster#

Attention

This section describes how to handle the previous version of Kubernetes clusters in NGN Cloud. This version does not support automatically scaled node groups and will not be developed further. We strongly recommend using EKS clusters when creating new Kubernetes clusters.

In the web interface of the Kubernetes Clusters service, you can perform the following actions:

Creating a cluster#

Creating a cluster can be divided into two parts: infrastructure creation and cluster installation. Currently, the infrastructure creation consists of creating the required number of instances of a given configuration from a prepared image. When the instances successfully start, the cluster installation process begins. Upon successful installation, the cluster enters the Ready state.

A cluster with the Running status is considered ready for operation. Any other status indicates that the cluster creation process has not yet completed. The Ingress controller, Docker registry, and EBS provider are considered additional services. Creating a cluster includes creating instances, installing Kubernetes components on them, and optionally installing additional services. If a cluster is created with additional services, the Running status also indicates that they were installed successfully.

NGN Cloud facilitates the use of Kubernetes and enables you to deploy the entire infrastructure for the cluster in one click. To create a Kubernetes cluster, go to the Kubernetes clusters section, click on the down arrow beside the Create button and select Create cluster.

On the first step of creating a new cluster set the required parameters:

  • Name.

  • Version of Kubernetes that will be installed on all nodes.

  • VPC where the cluster will be created.

  • The High Availability cluster option. If you select this option, a high availability cluster will be deployed. Its three master nodes can be placed either in three availability zones or in a placement group within one availability zone. If any of these nodes fails, the cluster continues to run on the remaining nodes.

  • The Certificate auto-renewal option. If you select this option, the certificates required for the Kubernetes cluster to run will be renewed automatically.

  • Pod subnet address. You can specify an IP address block in CIDR notation (X.X.X.X/Y), which will be allocated to the pod subnet. If you do not specify this parameter, a default range of IP addresses will be allocated.

  • Service subnet address. You can specify an IP address block in CIDR notation (X.X.X.X/Y), which will be allocated to the service subnet. If you do not specify this parameter, a default range of IP addresses will be allocated. An illustrative example of both subnet parameters is shown after this list.
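For illustration only, pod and service subnet values in CIDR notation might look like this (these particular ranges are common Kubernetes defaults, not necessarily the defaults of this service, and must fit your own addressing plan):

    Pod subnet address:      10.244.0.0/16
    Service subnet address:  10.96.0.0/12

Typically, the two ranges should not overlap each other or the subnets of your VPC.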

At the second step, set the network parameters required for the cluster operation:

  • Subnets where the cluster will be deployed.

  • SSH key for connecting to the cluster.

  • Security groups, which control traffic on instance interfaces.

  • The Allocate Elastic IP for API Server option. If you select this option, an Elastic IP will be allocated to a master node. This enables external access to the cluster’s API server.

At the third step, select a configuration of the master node, which will host service applications required for the cluster operation. This configuration will be applied to all master nodes if you select the High Availability cluster option. Specify the instance type and the volume type, size and IOPS (if available for the type you choose).

Note

Master node components are performance sensitive. We recommend using high-performance volume types: gp2 (Universal SSD) or io2 (Ultimate SSD).

At the fourth step, specify a configuration of worker nodes, which will run user tasks. Specify the required number of worker nodes, the instance type, and the volume parameters: type, size and IOPS (if available for the type you choose). If you select the Use placement groups option, placement groups will be created, and the instances hosting the cluster worker nodes will then be started in them.

At the fifth step, you can select additional services to be installed in the cluster:

  • Ingress controller to route requests. You can select the Allocate Elastic IP for Ingress controller option.

  • Docker registry, for which you should set the volume configuration (type, size, IOPS) to store your container images.

  • EBS provider, for which you specify the user that will manage volumes.

At the last step, you can specify user data to describe operations that will be performed automatically when the cluster nodes are created. User data is useful when you need, for example, to install packages, create or modify files, or execute shell scripts. To add user data, specify the following information in the form:

  • User data type. Two user data types are currently supported: x-shellscript and cloud-config.

  • User data. If you have selected the x-shellscript type, enter your shell script to this field. If you have selected the cloud-config type, enter a configuration for cloud-config in YAML format to this field. For examples of operations that cloud-config allows and the corresponding configurations, please see the official cloud-init documentation.

The specified user data will be applied to all cluster nodes.
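For example, a minimal cloud-config for this field might look like the following sketch (the package name and file path are illustrative, not prescribed by the service):

    #cloud-config
    # Illustrative example: install a package and create a file at first boot
    packages:
      - htop
    write_files:
      - path: /etc/example.conf
        permissions: "0644"
        content: |
          created_by=user-data

For the x-shellscript type, the field contains an ordinary shell script that starts with a shebang line such as #!/bin/bash.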

After completing the previous steps, click Create.

Note

The process of creating a new Kubernetes cluster can take from 5 to 15 minutes.

The Cluster-manager application will be installed in the cluster; it is required for monitoring to work properly and for changing the number of worker nodes in the cluster. Deleting it may prevent the Kubernetes Clusters service from working correctly with the cluster.
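If you want to check that this application is present, you can search for its workloads with kubectl; the exact namespace and object names are internal to the service, so the pattern below is an assumption:

    # Look for the Cluster-manager pods; the actual pod names may differ
    kubectl get pods --all-namespaces | grep -i cluster-manager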

To ensure correct cluster operation, a new security group is automatically created when a cluster is created. The following rules will be added to the group:

  • the rule to permit inbound traffic from interfaces that are in the same security group;

  • the rule to enable all outbound IPv4 traffic.

If the cluster is deleted, the security group will also be deleted.

Cluster node placement options#

If the High Availability cluster option is selected, the following options for placing cluster nodes are available:

  • Placement in three availability zones was selected, and the Use placement groups flag was set at the worker node configuration step. This combination of options guarantees that the nodes will be placed on different computing nodes in each availability zone.

  • Placement in three availability zones was selected, but the Use placement groups flag was not set at the worker node configuration step. In this combination, master nodes and worker nodes will be distributed across three availability zones. Still, it is not guaranteed that worker nodes will be distributed across different computing nodes within an availability zone.

  • Placement in one availability zone was selected, and the Use placement groups flag was set at the worker node configuration step. This combination guarantees that nodes will be placed on different computing nodes in an availability zone, though they will be distributed independently of each other. Therefore, a master node may be placed on the same computing node as a worker node.

  • Placement in one availability zone was selected, but the Use placement groups flag was not set at the worker node configuration step. In this combination, master nodes will be placed on different computing nodes in an availability zone. Still, it is not guaranteed that worker nodes will be distributed across different computing nodes within the availability zone.

Changing cluster parameters#

Change the number of worker nodes in the cluster#

You can change the number of worker nodes in the cluster if necessary. To do this, go to the cluster page and edit the Number of worker nodes field in the Information tab. In the dialog window, you can specify the number of worker nodes to add or remove and, if necessary, change the instance type for the nodes being added. Changing the instance type affects only the new nodes; existing worker nodes keep their current type.

If you increase the number of worker nodes, a new instance is created, the required cluster components are installed on it, and the cluster is reconfigured to accommodate the new node. During this process, the cluster is in the Not Ready state. After the process completes successfully, the cluster returns to the Ready state.
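After the cluster is Ready again, you can verify from any machine configured to access the cluster API that the new node has joined, for example:

    # All worker nodes, including the newly added one, should report Ready
    kubectl get nodes -o wide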

If you reduce the number of worker nodes, the nodes launched first are deleted first. The required number of nodes is switched to maintenance mode and then removed from the cluster via the Kubernetes API. After that, the released instances are deleted. During this process, the cluster is in the Not Ready state. After the process completes successfully, the cluster returns to the Ready state.
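In Kubernetes terms, this maintenance mode is essentially cordoning and draining the node. If you prefer to move workloads off a node yourself before scaling down, a typical sequence on recent kubectl versions looks like this (<node-name> is a placeholder):

    # Stop scheduling new pods onto the node
    kubectl cordon <node-name>
    # Evict the pods running on it, ignoring DaemonSet-managed pods
    kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data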

Note

Scaling down deletes the oldest nodes first. To guarantee that a particular worker node survives scaling, reduce the cluster size by selecting specific nodes for deletion instead (see Delete a worker node below).

If the attempt to change the number of worker nodes fails, the cluster will continue to operate. A record with the failure details will be displayed on the Warnings tab.

Enable/disable certificate auto-renewal#

A Kubernetes cluster uses a PKI (public key infrastructure) to securely exchange messages between its components. For the cluster to operate normally, the certificates in use must be renewed in a timely manner.

The cluster-manager utility regularly checks the lifetime of certificates. If the remaining lifetime is less than two weeks, warnings are sent to the user.

You can renew certificates by yourself or enable the Certificate auto-renewal option. To enable or disable certificate auto-renewal on cluster master nodes, go to the cluster page and change the value of the Certificate auto-renewal field in the Information tab.

By default, certificate auto-renewal is available for all new clusters. If you have Kubernetes clusters where automatic renewal is not available (the Certificate auto-renewal field is not displayed) but you want to use this option, leave a request on the support portal or email us at support@ngn.com.tr.

Attention

When certificate auto-renewal is enabled, we do not recommend renewing certificates manually: if an error occurs during manual renewal, the cluster may fail.

When automatic renewal is disabled, you should monitor certificate lifetimes yourself and renew the certificates when necessary.
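On clusters bootstrapped with kubeadm, certificates can typically be inspected and renewed on a master node as shown below; whether your cluster version exposes these exact subcommands is an assumption (older kubeadm releases use kubeadm alpha certs instead of kubeadm certs):

    # Show the expiration dates of all control-plane certificates
    kubeadm certs check-expiration
    # Renew all certificates; control-plane components may need a restart afterwards
    kubeadm certs renew all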

Edit user data#

User data can be changed after the Kubernetes cluster has been created. This is useful, for example, to have new scripts executed automatically on the additional nodes launched when the cluster is scaled. User data can be edited in the Information tab on the cluster page: click the edit icon next to User data, then select the data type and enter the data in the dialog window.

Deleting resources#

Delete a worker node#

If a worker node is no longer needed (for example, you have transferred all its pods to another node), it can be deleted. You can delete a worker node only when the Kubernetes cluster has the Running status and is in the Ready state.

Note

A worker node cannot be deleted if Scheduling disabled is set on all other worker nodes.
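Scheduling disabled corresponds to a cordoned node. If you need to re-enable scheduling on one of the other nodes first, you can uncordon it with kubectl (<node-name> is a placeholder):

    # Allow pods to be scheduled on the node again
    kubectl uncordon <node-name>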

Attention

Before deleting a worker node, make sure there are no pods on it to which Persistent Volumes are bound. Deleting a worker node with such volumes may make the applications that use them temporarily unavailable. Recovery can take up to 10 minutes while the Kubernetes service mounts the volumes on another worker node.
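To see which pods are running on a node before deleting it, so you can check their volumes, you can filter pods by node name (<node-name> is a placeholder):

    # List all pods scheduled on the given node, across all namespaces
    kubectl get pods --all-namespaces --field-selector spec.nodeName=<node-name>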

To delete a worker node:

  1. Go to Kubernetes Clusters → Clusters.

  2. Find the cluster in the table and click the cluster ID to go to its page.

  3. Open the Instances tab.

  4. In the resource table, select the instance where the worker node is deployed.

  5. Click Delete and confirm the action in the dialog window.

Deleting a Kubernetes cluster#

Deleting a cluster deletes all the instances created for it, including instances created for additional services. Volumes created by the EBS provider are not deleted automatically; they remain available for deletion in the Volumes section of the management console.

To delete a Kubernetes cluster and related services (Container Registry, EBS provider), click Delete.

Attention

When you delete a cluster, the volume with the Docker Registry images will also be deleted!