Google Kubernetes Engine and basic concepts

Kubernetes Engine is a managed, production-ready environment for deploying containerized applications. It brings Google's latest innovations in developer productivity, resource efficiency, automated operations, and open-source flexibility to accelerate your time to market.
Kubernetes logo


Kubernetes Engine enables rapid application development and iteration by making it easy to deploy, update, and manage your applications and services.



Operate Seamlessly with High Availability
Control the environment from the built-in Kubernetes Engine dashboard in the Google Cloud console. Use routine health checks to detect and replace hung or crashed applications inside your deployments. Container replication strategies, monitoring, and automated repairs help ensure that your services are highly available and offer a seamless experience to your users.

Scale Effortlessly to Meet Demand
Go from a single machine to thousands: Kubernetes Engine auto scaling allows you to handle increased user demand for your services, keeping them available when it matters most. Then, scale back in the quiet periods to save money, or schedule low-priority batch jobs to use up spare cycles. Kubernetes Engine helps you get the most out of your resource pool.
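As a sketch of how this looks in practice, autoscaling can be enabled when a cluster is created; the cluster name, zone, and node counts below are illustrative assumptions, not values from this article:

```shell
# Create a cluster whose node pool autoscales between 1 and 5 nodes
# ("demo-cluster", the zone, and the node counts are example values).
gcloud container clusters create demo-cluster \
    --zone us-central1-a \
    --num-nodes 3 \
    --enable-autoscaling --min-nodes 1 --max-nodes 5
```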

Run Securely on Google's Network
Connect to and isolate clusters no matter where you are with fine-grained network policies using Global Virtual Private Cloud (VPC) in Google Cloud. 
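A minimal sketch, assuming a cluster named demo-cluster: network policy enforcement can be turned on at creation time, after which standard Kubernetes NetworkPolicy objects control which pods may talk to each other.

```shell
# Create a cluster with Kubernetes NetworkPolicy enforcement enabled
# (the cluster name is an example).
gcloud container clusters create demo-cluster --enable-network-policy
```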

Move Freely between On-premises and Clouds
Kubernetes Engine runs Certified Kubernetes, ensuring portability across clouds and on-premises. There's no vendor lock-in: you're free to take your applications out of Kubernetes Engine and run them anywhere Kubernetes is supported, including on your own on-premises servers.

Kubernetes architecture

Master: The master is the main controlling unit of the Kubernetes cluster and the primary management contact point for administrators.

Node/Worker/Minion: In Kubernetes, the server that actually performs the work is the worker node. This is where containers are deployed.

Pod: Pods are the basic deployment unit in Kubernetes. Kubernetes defines a pod as a group of "closely related containers", and a pod can have multiple containers.
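To illustrate the multi-container case, a minimal pod manifest can be applied straight from the shell; the pod name, container names, and images are made up for this sketch:

```shell
# A pod with two closely related containers: they share one network
# namespace and can reach each other over localhost.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web
    image: nginx:alpine
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
```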

The Kubernetes Master is a collection of three processes that run on a single node in your cluster, which is designated as the master node. Those processes are kube-apiserver, kube-controller-manager, and kube-scheduler. Each individual non-master node in your cluster runs two processes:

  • kubelet, which communicates with the Kubernetes Master
  • kube-proxy, a network proxy which reflects Kubernetes networking services on each node
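On a running cluster these processes can be observed directly; this is a sketch assuming kubectl is already configured against a cluster:

```shell
# Control-plane components (API server, controller manager, scheduler)
# and node agents typically show up as pods in the kube-system namespace.
kubectl get pods -n kube-system

# List the worker nodes that the kubelets have registered with the master.
kubectl get nodes -o wide
```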
Kubeadm: Kubeadm automates the installation and configuration of Kubernetes components such as the API server, Controller Manager, and kube-dns. It does not, however, create users or handle the installation and configuration of operating-system-level dependencies.
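A rough sketch of the kubeadm workflow (the CIDR, address, token, and hash below are placeholders, and the kubelet plus a container runtime must already be installed on each machine):

```shell
# On the master node: initialize the control plane.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# On each worker node: join the cluster with the token that
# kubeadm init printed (placeholders shown here).
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```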
Google Cloud Platform
This article explains the concepts through three practical use cases. Follow them to understand the concepts better.

USE CASE 01: Creating two API clusters and a data cluster, concept

Cluster Diagram
According to the above diagram, there are two API pods and one data service pod. Within the internal network of the infrastructure, the data service has taken 10.0.50.1 as its private IP, API I has taken 10.0.50.2, and API II has taken 10.0.50.3.

Here, the data service is not exposed to the internet, but API I and API II are exposed to the internet with the IP addresses 130.50.80.40 and 130.50.80.50 respectively.

For internal communication, pods talk to each other over their private IPs. If the data service is created with MySQL, port 3306 is opened to the other pods. Here I have assumed that API I and API II can be accessed locally at the http://localhost:8080 and http://localhost:8081 URLs.
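Under the assumptions above, the reachable paths can be exercised like this (the IPs come from the diagram; the ports, paths, and MySQL credentials are assumptions):

```shell
# From the internet: only the two APIs are reachable, on their public IPs.
curl http://130.50.80.40:8080/
curl http://130.50.80.50:8081/

# From inside the cluster: the data service answers only on its private IP
# (the MySQL user name here is a placeholder).
mysql -h 10.0.50.1 -P 3306 -u api_user -p
```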

USE CASE 02: Creating API application on GKE


This use case presents how to deploy a web application on GCP with GKE.
1. First, access the Google Cloud Platform console at https://console.cloud.google.com and log in with your Google account.
GKE terminal

2. Then you need to create a project. Give it a name, and you will get a project number for it.

3. By clicking the Cloud Shell button you will get the terminal/console.



4. GKE accepts Docker images as the application deployment format. To build a Docker image, you need an application and a Dockerfile. The application is packaged as a Docker image using the Dockerfile, which contains instructions on how the image is built. You will use this Dockerfile to package your application. To download the hello-app source code, run the following commands:

git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples
cd kubernetes-engine-samples/hello-app



5. Set the PROJECT_ID environment variable in your shell by retrieving the pre-configured project ID on gcloud, running the command below:

export PROJECT_ID="$(gcloud config get-value project -q)"




6. The value of PROJECT_ID will be used to tag the container image for pushing it to your private Container Registry. To build the container image of this application and tag it for uploading, run the following command:

docker build -t gcr.io/${PROJECT_ID}/hello-app:v1 .

7. This command instructs Docker to build the image using the Dockerfile in the current directory and tag it with a name such as gcr.io/my-project/hello-app:v1. The gcr.io prefix refers to Google Container Registry, where the image will be hosted. Running this command does not upload the image yet. You can run the docker images command to verify that the build was successful:

docker images



The output will list all the Docker images.

8. You need to upload the container image to a registry so that GKE can download and run it. First, configure the Docker command-line tool to authenticate to Container Registry (you need to run this only once):

gcloud auth configure-docker



9. You can now use the Docker command-line tool to upload the image to your Container Registry:

docker push gcr.io/${PROJECT_ID}/hello-app:v1

10. To test your container image using your local Docker engine, run the following command:

docker run --rm -p 8080:8080 gcr.io/${PROJECT_ID}/hello-app:v1




11. If you're on Cloud Shell, you can click the "Web preview" button on the top right to see your application running in a browser tab. Otherwise, open a new terminal window (or a Cloud Shell tab) and run the following to verify that the container works and responds to requests with "Hello, World!":

curl http://localhost:8080
Web interface
USE CASE 03: Creating two pods with API service and data service on GKE

01. As previously mentioned, first you need to create a Google Cloud project by visiting https://console.cloud.google.com.

02. As before, the project will be listed in the Google Cloud console web page.

03. You need to create a cluster on GKE by clicking Create cluster. Within a couple of minutes, your cluster will be shown as up and running.
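The same cluster can also be created from the Cloud Shell; the name, zone, and size here are example values:

```shell
# Create a small cluster and point kubectl at it.
gcloud container clusters create api-data-cluster \
    --zone us-central1-a --num-nodes 2
gcloud container clusters get-credentials api-data-cluster \
    --zone us-central1-a
```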

04. The system is presented with one API service exposing the http://host/codeToState?code=XX and http://host/stateToCode?state=XXXXXXX URLs respectively, and one data service that provides data from JSON files to the APIs.

05. You are required to create two Docker images, add the respective API code and data to them, and push them to Docker Hub:

docker push chiranthamvv/data-service:tagname
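As a sketch, both images can be built and pushed the same way; the api-service repository name, the v1 tag, and the directory layout are assumptions:

```shell
# Build and push the data service image.
docker build -t chiranthamvv/data-service:v1 ./data-service
docker push chiranthamvv/data-service:v1

# Build and push the API service image (repository name assumed).
docker build -t chiranthamvv/api-service:v1 ./api-service
docker push chiranthamvv/api-service:v1
```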




06. After that you need to start the Google Kubernetes Engine cloud shell/terminal.





07. Then you are required to pull the Docker image from the repository and run it as a containerized deployment on the cluster:

kubectl run app --image=image-name --port=number --env="DOMAIN=cluster"

08. Expose the Kubernetes deployment through a load balancer:

kubectl expose deployment AppName --type=LoadBalancer --name=my-service

09. By typing kubectl get svc in the shell you will be able to see the state of the cluster, with the internal IP, external IP, and working ports.

     You can also see it in the web user interface of the Cloud console.
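For example (the service name matches the expose command above; the request itself is a sketch with placeholder address and port):

```shell
# Output columns: NAME, TYPE, CLUSTER-IP, EXTERNAL-IP, PORT(S), AGE.
kubectl get svc my-service

# Once EXTERNAL-IP is assigned, the API can be called directly:
curl "http://EXTERNAL_IP:PORT/codeToState?code=XX"
```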

10. You can access the web pages of the system in the following format:

     http://IP:port/codeToState?code=XX and http://IP:port/stateToCode?state=XXXXXXX 


