deployment
# The most commonly used controller. A deployment manages multiple replicas of a pod and ensures the pods run in the desired state.
replicaset
# Implements multi-replica management of pods. A replicaset is created automatically when you use a deployment — that is, a deployment manages its pod replicas through a replicaset — so we rarely need to use replicasets directly.
daemonset
# For scenarios where each node runs at most one replica of a pod. As the name suggests, daemonsets are typically used to run daemons.
statefulset
# Guarantees that each pod replica keeps the same name for its entire lifecycle, which no other controller provides: under other controllers, a pod's name changes when it fails and must be deleted and restarted. A statefulset also guarantees that replicas are started, updated, and deleted in a fixed order.
job
# For applications that are deleted once they finish running, whereas pods under other controllers typically run continuously.
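To make the deployment controller concrete, here is a minimal sketch of a Deployment manifest (the `demo` name and `nginx` image are hypothetical, not from this guide). It is written to a temp file here; on a real cluster you would feed it to `kubectl apply -f`:

```shell
# Write a minimal Deployment manifest (hypothetical demo app) to a temp file;
# the deployment controller would keep 3 replicas of this pod running.
cat > /tmp/demo-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: nginx
EOF
# On a real cluster: kubectl apply -f /tmp/demo-deployment.yaml
grep -c 'app: demo' /tmp/demo-deployment.yaml   # the label appears twice
```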
vim /etc/hosts
10.0.0.50 k8s-master
10.0.0.51 k8s-node-01
10.0.0.52 k8s-node-02
$ hostnamectl set-hostname k8s-master
$ echo "127.0.0.1 $(hostname)" >> /etc/hosts
$ hostnamectl set-hostname k8s-node-01
$ echo "127.0.0.1 $(hostname)" >> /etc/hosts
$ hostnamectl set-hostname k8s-node-02
$ echo "127.0.0.1 $(hostname)" >> /etc/hosts
systemctl start chronyd.service && systemctl enable chronyd.service
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i s/^SELINUX=enforcing$/SELINUX=disabled/ /etc/selinux/config
swapoff -a && sysctl -w vm.swappiness=0
vi /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
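The manual fstab edit above can also be done non-interactively; a sketch of the equivalent sed, run here against a sample copy (point it at /etc/fstab on a real node):

```shell
# Build a two-line sample fstab, then comment out the swap entry with sed.
printf '%s\n' 'UUID=abcd / xfs defaults 0 0' '/dev/mapper/centos-swap swap swap defaults 0 0' > /tmp/fstab.sample
sed -ri 's|^([^#].* swap .*)|#\1|' /tmp/fstab.sample
grep '^#' /tmp/fstab.sample   # prints the now-commented swap line
```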
$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
EOF
$ modprobe br_netfilter
$ sysctl -p /etc/sysctl.d/k8s.conf
$ ls /proc/sys/net/bridge
# Prerequisite for running kube-proxy in IPVS mode: the following kernel modules must be loaded
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
$ chmod 755 /etc/sysconfig/modules/ipvs.modules
$ bash /etc/sysconfig/modules/ipvs.modules
$ lsmod | grep -e ip_vs -e nf_conntrack_ipv4
$ yum install -y ipset ipvsadm
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
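The six echo commands above can be generated with one loop; this sketch writes to a temp file — append to /etc/security/limits.conf on a real node:

```shell
# Generate the six limits lines (soft/hard x nofile/nproc/memlock) in one loop.
limits=/tmp/limits.sample
: > "$limits"
for t in soft hard; do
  printf '* %s nofile 65536\n* %s nproc 65536\n* %s memlock unlimited\n' "$t" "$t" "$t" >> "$limits"
done
wc -l < "$limits"   # 6
```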
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-18.09.9-3.el7
$ vi /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --graph /apps/docker
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://dockerhub.azk8s.cn",
    "http://hub-mirror.c.163.com",
    "https://registry.docker-cn.com"
  ],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
EOF
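Before (re)starting Docker it is worth validating daemon.json, since a syntax error stops dockerd from starting at all. A sketch using a minimal temp copy — on a real node point `python3 -m json.tool` at /etc/docker/daemon.json instead:

```shell
# Write a minimal sample config, then parse it; json.tool exits non-zero on
# invalid JSON, so the OK message only prints when the file is well-formed.
cat > /tmp/daemon.sample.json <<'EOF'
{"exec-opts": ["native.cgroupdriver=systemd"], "storage-driver": "overlay2"}
EOF
python3 -m json.tool /tmp/daemon.sample.json >/dev/null && echo "daemon.json OK"
```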
$ systemctl start docker && systemctl enable docker
kubelet: runs on every node in the cluster and starts pods and containers;
kubectl: the command-line tool for talking to the cluster;
kubeadm: the command that bootstraps the cluster.
yum install -y kubelet-1.16.3-0
yum install -y kubectl-1.16.3-0
yum install -y kubeadm-1.16.3-0
systemctl start kubelet && systemctl enable kubelet
reboot
Set the localAPIEndpoint.advertiseAddress parameter to your master server's address;
Set the imageRepository parameter so Kubernetes images are pulled from the Aliyun mirror;
Set the networking.podSubnet parameter to the network range you want to use.
$ cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.50
  bindPort: 6443
nodeRegistration:
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.16.3
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF
kubeadm init --config kubeadm-config.yaml
......
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.2.11:6443 --token 4udy8a.f77ai0zun477kx0p \
    --discovery-token-ca-cert-hash sha256:4645472f24b438e0ecf5964b6dcd64913f68e0f9f7458768cfb96a9ab16b4212
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubeadm join 192.168.2.11:6443 --token 4udy8a.f77ai0zun477kx0p \
    --discovery-token-ca-cert-hash sha256:4645472f24b438e0ecf5964b6dcd64913f68e0f9f7458768cfb96a9ab16b4212
$ wget https://docs.projectcalico.org/v3.10/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
sed -i 's/192.168.0.0/10.244.0.0/g' calico.yaml
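This sed rewrites Calico's default pool CIDR (192.168.0.0/16) to match the podSubnet chosen in kubeadm-config.yaml; its effect can be checked in isolation on a sample of the relevant manifest line:

```shell
# Sample of the CALICO_IPV4POOL_CIDR value line from calico.yaml, run through
# the same substitution used above.
echo '  value: "192.168.0.0/16"' | sed 's/192.168.0.0/10.244.0.0/g'
```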
kubectl apply -f calico.yaml
kubectl get pod -n kube-system
5. Configure kubectl command auto-completion (master node)
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: 192.168.121.140:5000/defaultbackend
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    k8s-app: default-http-backend
kubectl apply -f default-backend.yaml
# Download the manifest from:
https://github.com/kubernetes/ingress-nginx/blob/controller-v0.48.1/deploy/static/provider/baremetal/deploy.yaml
# Deploy ingress-nginx
kubectl apply -f deploy.yaml
kubectl get pods -n ingress-nginx
#ingress_test.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-test
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Enable use-regex so paths are matched as regular expressions
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  # The domain to match
  - host: test.ingress.com
    http:
      paths:
      # Forward different paths to different ports
      - path: /
        backend:
          serviceName: nginx-service-nodeport
          servicePort: 8000
kubectl apply -f ingress_test.yaml
vim nginx-rc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-controller
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
cat nginx-server-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  ports:
  - port: 8000
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    name: nginx
# Create the pod and the service
kubectl apply -f nginx-rc.yaml
kubectl apply -f nginx-server-nodeport.yaml
curl -i test.ingress.com
HTTP/1.1 200 OK