Kubernetes beginner lab: volumes

Author's note: this article is only a record of my own learning and is not meant as a reference.

Experiments with k8s volumes.
Note: this is my lab log, not a tutorial, and it will be updated from time to time.

Environment

# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
edge-node    Ready    <none>   15m   v1.17.0
edge-node2   Ready    <none>   16m   v1.17.0
ubuntu       Ready    master   67d   v1.17.0

volume

Technical summary

Multiple containers in one pod can share data temporarily and in real time through an emptyDir, e.g. for instant data exchange. For sharing across nodes, use NFS.
The busybox image (and other similar images) uses the UTC timezone; mount /etc/localtime to get the correct local time.
Some programs depend on many libraries; in that case the host's lib directory can be mounted directly.

When creating a PV and a PVC, their metadata names differ. A pod references the PVC (note: pods consume PVCs, not PVs), which is automatically matched to a PV (the exact matching mechanism is unclear to me; some sources say it matches on capacity, to be investigated).

A single PVC is not well suited to multi-replica workloads, because all replicas would read and write the same files. Which scenarios it actually fits remains to be explored.

The directory a PV points to should be created in advance with correct permissions. With NFS, if the directory does not exist, pod creation fails.

Common types

emptyDir

The YAML file: one pod with two containers sharing a temporary volume (no host directory needs to be specified). Both containers can access it; once the pod is gone, the mounted directory is gone too.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: latelee/lidch
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /test111
      name: empty-volume
    ports:
    - name: http
      containerPort: 80
      hostIP: 0.0.0.0
      hostPort: 80
      protocol: TCP
    - name: https
      containerPort: 443
      hostIP: 0.0.0.0
      hostPort: 443
      protocol: TCP
  - name: busybox
    image: latelee/busybox
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /test222
      name: empty-volume
  volumes:
  - name: empty-volume
    emptyDir: {}

Create:

kubectl apply -f nginx-pod.yaml

Verify:

kubectl exec -it nginx-pod -c busybox sh
echo "from busybox" > /test222/foo
exit

kubectl exec -it nginx-pod -c nginx sh
cat /test111/foo   # output: from busybox
exit

In other words: the two containers mount the volume at different paths, but the contents are shared.
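
An aside not covered in this experiment: emptyDir can also be backed by RAM (tmpfs) instead of node disk via the medium field. A minimal sketch of just the volumes section, with the volume name chosen only for illustration:

volumes:
- name: cache-volume
  emptyDir:
    medium: Memory # tmpfs; contents count against the container's memory usage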

hostPath

Similar to the above, but the mounted directory maps to a directory on the host (i.e. the node the pod runs on). Containers share it, and files survive container deletion. However, if the pod is later scheduled to another node, the data will not be there; this approach is tied to a specific node.

busybox-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-pod
  labels:
    app: busybox
spec:
  containers:
  - name: busybox1
    image: latelee/busybox
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /test111
      name: host-volume
  - name: busybox2
    image: latelee/busybox
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /test222
      name: host-volume
  volumes:
  - name: host-volume
    hostPath:
      path: /data
      type: DirectoryOrCreate

Test steps are the same as above; each of the two containers writes a file.

kubectl apply -f busybox-pod.yaml

kubectl exec -it busybox-pod -c busybox1 sh

kubectl exec -it busybox-pod -c busybox2 sh

kubectl delete -f busybox-pod.yaml

After deleting the pod, log in to the node and inspect /data: the files are still there.

A multi-directory mount example, busybox-pod1.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-pod1
  labels:
    app: busybox
spec:
  containers:
  - name: busybox1
    image: latelee/busybox
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /test111
      name: host-volume
    - mountPath: /etc/localtime
      name: time-zone
  volumes:
  - name: host-volume
    hostPath:
      path: /data
  - name: time-zone
    hostPath:
      path: /etc/localtime

Note 1: for programs with many library dependencies, the host's lib directory can be mounted directly; for programs that operate on hardware, mount the /dev directory. A sketch follows.
Note 2: this example maps the host's time file; compare the time with the earlier pod: kubectl exec -it busybox-pod1 -c busybox1 date
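
A minimal sketch of Note 1, assuming the node keeps its libraries under /lib (the path differs per distribution) and that read-only access to them is enough; the pod name is hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-hw # hypothetical name, for illustration only
spec:
  containers:
  - name: busybox1
    image: latelee/busybox
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /lib # host libraries visible inside the container
      name: lib-volume
      readOnly: true
    - mountPath: /dev # host device nodes, for programs that access hardware
      name: dev-volume
  volumes:
  - name: lib-volume
    hostPath:
      path: /lib
  - name: dev-volume
    hostPath:
      path: /dev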

NFS

Key points: designate one host as the NFS server (the master in this example) and install the server software on it; install the NFS client on every node in the cluster.
This approach fixes the mount source, so it does not change when pods are rescheduled.

Install and configure NFS.

sudo apt-get install nfs-kernel-server -y
sudo mkdir -p /nfs # the exported directory must exist
vim /etc/exports
# add this line to /etc/exports:
/nfs *(rw,no_root_squash,no_all_squash,sync)

sudo /etc/init.d/nfs-kernel-server restart

# verify from a client:
mount -t nfs -o nolock 192.168.0.102:/nfs /mnt/nfs

Each node must be able to mount NFS filesystems.

sudo apt-get install nfs-common -y

Otherwise the mount fails with:

wrong fs type, bad option, bad superblock on 192.168.0.102:/nfs,missing codepage or helper program, or other error
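
To diagnose this kind of failure, first confirm the client package is installed, then ask the server what it actually exports (showmount ships with the NFS client tools):

showmount -e 192.168.0.102
# the output should list /nfs and the hosts allowed to mount it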

busybox-nfs.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-pod
  labels:
    app: busybox
spec:
  containers:
  - name: busybox1
    image: latelee/busybox
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /test111
      name: nfs-volume
  - name: busybox2
    image: latelee/busybox
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /test222
      name: nfs-volume
  volumes:
  - name: nfs-volume
    nfs:
      server: 192.168.0.102
      path: /nfs

Test:

# kubectl exec -it busybox-pod -c busybox1 sh
/ # echo "bbb" > /test111/bbb
/ # exit
# cat /nfs/bbb
bbb

Persistence

PV: abstracts storage (such as a host disk or cloud disk) into a k8s storage unit so it can be consumed. (An earlier doubt of mine: can a namespace have only one PV? In fact PVs are cluster-scoped resources and do not belong to any namespace; it is PVCs that are namespaced.)
Once created, a PV is a static object: it just sits there until it is claimed and used through a PVC.

pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
  labels:
    storage: nfs
spec:
  accessModes: ["ReadWriteOnce", "ReadWriteMany", "ReadOnlyMany"] # allow several modes, which may be more flexible
  #accessModes:
  #  - ReadWriteMany
  capacity:
    storage: 200Mi
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain # valid values: Delete, Recycle, Retain
  nfs:
    server: 192.168.0.102
    path: /nfs1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv2
spec:
  capacity:
    storage: 100Mi # 5Gi
  accessModes: ["ReadWriteOnce", "ReadWriteMany", "ReadOnlyMany"]
  #accessModes:
  #  - ReadWriteMany
  nfs:
    server: 192.168.0.102
    path: /nfs2
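
As noted in the summary, the exported directories must already exist on the NFS server, otherwise pod creation fails. A sketch of the server-side preparation for these two PVs:

# on the NFS server (192.168.0.102), as root
mkdir -p /nfs1 /nfs2
cat >> /etc/exports <<EOF
/nfs1 *(rw,no_root_squash,no_all_squash,sync)
/nfs2 *(rw,no_root_squash,no_all_squash,sync)
EOF
exportfs -ra # re-read /etc/exports without restarting the service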

pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc1
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc2
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Mi

Note: pay attention to metadata.name; it is referenced later by pods.
(Question: how does a PVC get matched to a PV? Binding considers capacity (the PV must be at least as large as the request), access modes, and storage class; among the candidates the controller prefers the smallest PV that satisfies the claim, which is consistent with the binding result observed below. A claim can also select PVs by label, as sketched next.)
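
A minimal sketch of label-based selection, reusing the storage: nfs label set on nfs-pv1 above; the claim name is hypothetical:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-selected # hypothetical name, for illustration only
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Mi
  selector:
    matchLabels:
      storage: nfs # only PVs carrying this label are considered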

Create

kubectl apply -f pv.yaml
kubectl delete -f pv.yaml

kubectl apply -f pvc.yaml
kubectl delete -f pvc.yaml

View the created PVs:

kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv1   200Mi      RWO,ROX,RWX    Retain           Available                                   17s
nfs-pv2   100Mi      RWX            Retain           Available                                   3m

The created PVCs:

NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc1   Bound    nfs-pv2   100Mi      RWX                           3s
nfs-pvc2   Bound    nfs-pv1   200Mi      RWO,ROX,RWX                   3s

Note: binding appears to be driven by the requested capacity: each claim was bound to a PV large enough for its request.

busybox-pvc.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-pvc
  labels:
    app: busybox
spec:
  containers:
  - name: busybox1
    image: latelee/busybox
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /test111
      name: host-volume
  volumes:
  - name: host-volume
    persistentVolumeClaim:
      claimName: nfs-pvc2 # the PVC to use; it must exist

kubectl apply -f busybox-pvc.yaml
kubectl exec -it busybox-pvc -c busybox1 df
kubectl delete -f busybox-pvc.yaml

A redis example, redis-pvc.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: redis-pvc
  labels:
    app: redis-pvc
spec:
  containers:
  - name: redis-pod
    image: redis:alpine
    imagePullPolicy: IfNotPresent
    # command: not needed; the image's default entrypoint starts redis
    volumeMounts:
    - mountPath: /data # redis's data directory
      name: pvc-volume
  volumes:
  - name: pvc-volume
    persistentVolumeClaim:
      claimName: nfs-pvc2 # the PVC to use; it must exist

Write some data:

# kubectl exec -it redis-pvc sh
/data # redis-cli
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> set who "latelee"
OK
127.0.0.1:6379> set email "li@latelee.or"
OK
127.0.0.1:6379> BGSAVE
Background saving started
127.0.0.1:6379> exit
/data # exit
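
Since nfs-pvc2 is bound to nfs-pv1 here, redis's /data lands on the server's /nfs1. Assuming that binding, BGSAVE should leave a dump file visible on the NFS server:

# on the NFS server
ls /nfs1
# expect dump.rdb among the files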

Mounting in an nginx service:

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3 # tells deployment to run 3 pods matching the template
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: latelee/lidch
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          subPath: html1 # use a subdirectory dedicated to this service
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: nfs-pvc2

---

apiVersion: v1
kind: Service # expose as a Service
metadata:
  labels:
    run: nginx
  name: nginx
  namespace: default
spec:
  ports:
  - port: 88 # externally exposed port
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer

Lab notes:
After creating the resources, get the corresponding service address and access it. A 403 appears, which is expected: nfs-pvc2, i.e. /nfs1/html1, contains no index.html yet. Write one file there and the page serves normally (allow a short wait), as shown below.
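
A minimal fix, assuming the bindings above (nfs-pvc2 bound to nfs-pv1, i.e. /nfs1) and substituting the real address from kubectl get svc nginx:

# on the NFS server: give nginx something to serve
echo "hello from nfs" > /nfs1/html1/index.html

# from any node (<service-ip> is a placeholder)
curl http://<service-ip>:88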

Problems and small tests

1.
A PVC is created but no PV exists, or the PVC requests more capacity than any PV offers. The message is:

no persistent volumes available for this claim and no storage class is set

Create 2 PVs, then 2 PVCs, and check that binding succeeds; then delete one PV. Because the PV is still in use, it sits in the Terminating state; once its bound PVC is also deleted, the PV deletion runs to completion.

Create a pod and write more data than the PV/PVC capacity allows. This appears to succeed anyway; for NFS-backed PVs the capacity field is only used for matching claims and is not enforced as a quota.

dd if=/dev/zero of=null.bin count=3000 bs=102400

2.
Delete the PVC first, then the PV; otherwise the PV cannot be deleted and hangs in Terminating.

3.

mount.nfs: access denied by server while mounting 192.168.0.102:/nfs3

Causes: 1. the directory does not exist on the server; 2. the directory is not exported.
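
A quick way to tell the two causes apart, run on the NFS server:

ls -ld /nfs3 # cause 1: the path is missing
exportfs -v | grep nfs3 # cause 2: not in the current export list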