Alibaba Cloud Container Service for Kubernetes (ACK) supports local storage, provides automated lifecycle management for logical volumes, and can schedule workloads based on the local storage capacity of each node. You can use LVM and QuotaPath local volumes through PVCs and PVs. This topic describes how to initialize local storage in an ACK cluster, how to mount and use local resources, and how to configure automatic expansion of local volumes.
Prerequisites
A Kubernetes cluster is created. For more information, see Create a managed Kubernetes cluster.
Background information
The local storage architecture consists of the following three components:
| Component | Description |
| --- | --- |
| Node Resource Manager | Maintains the initialization lifecycle of local storage. Take VolumeGroups as an example: when you declare the composition of a VolumeGroup in a ConfigMap, Node Resource Manager initializes the local storage on the current node according to that definition. |
| CSI Plugin | Maintains the lifecycle of local volumes. Take LVM as an example: when a user creates a PVC that uses a VolumeGroup, the CSI plugin automatically creates a LogicVolume and binds it to the PVC. |
| Storage Auto Expander | Manages automatic expansion of local volumes. When monitoring detects that a local volume is running out of capacity, the component automatically expands it. |
For more information about local volumes, see Overview of local volumes.
Step 1: Initialize local storage
Local storage is initialized by the Node Resource Manager component, which reads a ConfigMap to manage the local storage of every node in the cluster. It can automatically create VolumeGroups and the root directories for ProjectQuota on each node. For more information, see node-resource-manager.
- Define the local storage volumes in a ConfigMap for the cluster. Create the ConfigMap with the following content.
```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-resource-topo
  namespace: kube-system
data:
  volumegroup: |-
    volumegroup:
    - name: volumegroup1
      key: kubernetes.io/hostname
      operator: In
      value: cn-zhangjiakou.192.168.XX.XX
      topology:
        type: device
        devices:
        - /dev/vdb
        - /dev/vdc
  quotapath: |-
    quotapath:
    - name: /mnt/path1
      key: kubernetes.io/hostname
      operator: In
      value: cn-zhangjiakou.192.168.XX.XX
      topology:
        type: device
        options: prjquota
        fstype: ext4
        devices:
        - /dev/vdd
EOF
```
The ConfigMap above defines two kinds of local storage resources on the cn-zhangjiakou.192.168.XX.XX node:
- A VolumeGroup resource named volumegroup1, composed of the two block devices /dev/vdb and /dev/vdc on the host.
- A QuotaPath resource: the /dev/vdd block device is formatted and mounted at the /mnt/path1 path. Only one device can be declared here.
Node selection: nodes are matched by key and value. The match can target a single node or a batch of nodes.
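Node matching is not limited to a single hostname. As a sketch (the `disktype: local-ssd` label below is a hypothetical example you would have applied to your nodes yourself, not something from this topic), a VolumeGroup entry can target every node that carries a shared label:

```yaml
# Hypothetical example: match a batch of nodes by a custom label
# instead of a single hostname. "disktype=local-ssd" is an assumed
# label, not one created by this topic.
volumegroup:
- name: volumegroup1
  key: disktype            # any node label key can be used
  operator: In
  value: local-ssd         # matches all nodes labeled disktype=local-ssd
  topology:
    type: device
    devices:
    - /dev/vdb
```

Every node matched by the selector gets the same VolumeGroup initialized from the listed devices, so the devices must exist under the same names on each matched node.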
- Deploy the Node Resource Manager component in the cluster with the following content.
```shell
# The heredoc delimiter is quoted so that $(KUBE_NODE_NAME) reaches
# Kubernetes literally instead of being expanded by the shell.
cat <<"EOF" | kubectl apply -f -
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-resource-manager
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-resource-manager
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-resource-manager-binding
subjects:
  - kind: ServiceAccount
    name: node-resource-manager
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: node-resource-manager
  apiGroup: rbac.authorization.k8s.io
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: node-resource-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-resource-manager
  template:
    metadata:
      labels:
        app: node-resource-manager
    spec:
      tolerations:
        - operator: "Exists"
      priorityClassName: system-node-critical
      serviceAccountName: node-resource-manager
      hostNetwork: true
      hostPID: true
      containers:
        - name: node-resource-manager
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: registry.cn-hangzhou.aliyuncs.com/acs/node-resource-manager:v1.18.8.0-5b1bdc2-aliyun
          imagePullPolicy: "Always"
          args:
            - "--nodeid=$(KUBE_NODE_NAME)"
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          volumeMounts:
            - mountPath: /dev
              mountPropagation: "HostToContainer"
              name: host-dev
            - mountPath: /var/log/
              name: host-log
            - name: etc
              mountPath: /host/etc
            - name: config
              mountPath: /etc/unified-config
      volumes:
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-log
          hostPath:
            path: /var/log/
        - name: etc
          hostPath:
            path: /etc
        - name: config
          configMap:
            name: node-resource-topo
EOF
```
- Install the csi-plugin component. csi-plugin manages the full lifecycle of volumes, including creating, mounting, unmounting, deleting, and expanding them. Install it with the following content.
```shell
# The heredoc delimiter is quoted so that $(CSI_ENDPOINT) and
# $(KUBE_NODE_NAME) reach Kubernetes literally instead of being
# expanded by the shell.
cat <<"EOF" | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: localplugin.csi.alibabacloud.com
spec:
  attachRequired: false
  podInfoOnMount: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: csi-local-plugin
  name: csi-local-plugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-local-plugin
  template:
    metadata:
      labels:
        app: csi-local-plugin
    spec:
      containers:
        - args:
            - --v=5
            - --csi-address=/csi/csi.sock
            - --kubelet-registration-path=/var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com/csi.sock
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          image: registry.cn-hangzhou.aliyuncs.com/acs/csi-node-driver-registrar:v1.2.0
          imagePullPolicy: Always
          name: driver-registrar
          volumeMounts:
            - mountPath: /csi
              name: plugin-dir
            - mountPath: /registration
              name: registration-dir
        - args:
            - --endpoint=$(CSI_ENDPOINT)
            - --v=5
            - --nodeid=$(KUBE_NODE_NAME)
            - --driver=localplugin.csi.alibabacloud.com
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: SERVICE_PORT
              value: "11290"
            - name: CSI_ENDPOINT
              value: unix://var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com/csi.sock
          image: registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin:v1.14.8-40ee9518-local
          imagePullPolicy: Always
          name: csi-localplugin
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              add:
                - SYS_ADMIN
            privileged: true
          volumeMounts:
            - mountPath: /var/lib/kubelet
              mountPropagation: Bidirectional
              name: pods-mount-dir
            - mountPath: /dev
              mountPropagation: HostToContainer
              name: host-dev
            - mountPath: /var/log/
              name: host-log
            - mountPath: /mnt
              mountPropagation: Bidirectional
              name: quota-path-dir
      hostNetwork: true
      hostPID: true
      serviceAccount: csi-admin
      tolerations:
        - operator: Exists
      volumes:
        - hostPath:
            path: /var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com
            type: DirectoryOrCreate
          name: plugin-dir
        - hostPath:
            path: /var/lib/kubelet/plugins_registry
            type: DirectoryOrCreate
          name: registration-dir
        - hostPath:
            path: /var/lib/kubelet
            type: Directory
          name: pods-mount-dir
        - hostPath:
            path: /dev
            type: ""
          name: host-dev
        - hostPath:
            path: /var/log/
            type: ""
          name: host-log
        - hostPath:
            path: /mnt
            type: Directory
          name: quota-path-dir
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 10%
    type: RollingUpdate
EOF
```
Step 2: Create an application that uses a local volume
Method 1: Create an application that uses an LVM local volume
- Create a StorageClass with the following content.
```shell
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-local
provisioner: localplugin.csi.alibabacloud.com
parameters:
  volumeType: LVM
  vgName: volumegroup1
  fsType: ext4
  lvmType: "striping"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
```
The parameters.vgName field is the name of the VolumeGroup defined in the node-resource-topo ConfigMap, volumegroup1. For more information, see LVM volumes.
- Create a PVC with the following content.
```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc
  annotations:
    volume.kubernetes.io/selected-node: cn-zhangjiakou.192.168.XX.XX
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: csi-local
EOF
```
The annotation volume.kubernetes.io/selected-node: cn-zhangjiakou.192.168.XX.XX indicates that Pods that reference this PVC will be scheduled to the cn-zhangjiakou.192.168.XX.XX node, which is the node where the VolumeGroup resource defined earlier in the ConfigMap resides.
- Create an application with the following content.
```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-lvm
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          volumeMounts:
            - name: lvm-pvc
              mountPath: "/data"
      volumes:
        - name: lvm-pvc
          persistentVolumeClaim:
            claimName: lvm-pvc
EOF
```
After the application starts, the capacity of the /data directory in the container equals the 2 GiB requested by the PVC.
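Behind the scenes, dynamic provisioning creates a PV and binds it to the PVC. The sketch below illustrates the rough shape such a PV could take; the generated name, volumeHandle, and exact field set are assumptions for illustration, not actual controller output:

```yaml
# Illustrative only: approximate shape of a dynamically provisioned PV
# for lvm-pvc. Names and handles are placeholders, not real output.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: disk-xxxxx              # generated name (placeholder)
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: csi-local
  csi:
    driver: localplugin.csi.alibabacloud.com
    volumeHandle: disk-xxxxx    # corresponds to the LogicVolume in volumegroup1
  nodeAffinity:                 # pins the PV to the node that owns the VolumeGroup
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - cn-zhangjiakou.192.168.XX.XX
```

The nodeAffinity section is what keeps Pods using this PV on the node that physically holds the logical volume.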
Method 2: Create an application that uses a QuotaPath local volume
- Create a StorageClass with the following content.
```shell
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-local-quota
parameters:
  volumeType: QuotaPath
  rootPath: /mnt/path1
provisioner: localplugin.csi.alibabacloud.com
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
EOF
```
The parameters.rootPath field is the name of the QuotaPath resource defined in the node-resource-topo ConfigMap, /mnt/path1. For more information, see QuotaPath volumes.
- Create a PVC with the following content.
```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-quota
  annotations:
    volume.kubernetes.io/selected-node: cn-zhangjiakou.192.168.XX.XX
  labels:
    app: web-quota
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: alicloud-local-quota
EOF
```
The annotation volume.kubernetes.io/selected-node: cn-zhangjiakou.192.168.XX.XX indicates that Pods that reference this PVC will be scheduled to the cn-zhangjiakou.192.168.XX.XX node, which is the node where the QuotaPath resource defined earlier in the ConfigMap resides.
- Create an application with the following content.
```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-quota
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: disk-ssd
              mountPath: /data
      volumes:
        - name: "disk-ssd"
          persistentVolumeClaim:
            claimName: csi-quota
EOF
```
After the application starts, the capacity of the /data directory in the container equals the 2 GiB requested by the PVC.
Step 3: Automatically expand local volumes
- Install the auto-expansion component. For more information, see Use storage-operator to deploy and upgrade storage components.
- Run the following command to configure storage-operator.

```shell
kubectl edit cm storage-operator -n kube-system
```
Expected output:
```yaml
storage-auto-expander: |
  # deploy config
  install: true
  imageTag: "v1.18.8.1-d4301ee-aliyun"
```
- Run the following command to check whether the auto-expansion component has started.

```shell
kubectl get pod -n kube-system | grep storage-auto-expander
```
Expected output:
```
storage-auto-expander-6bb575b68c-tt4hh   1/1   Running   0   2m41s
```
- Configure the auto-expansion policy with the following content.
```shell
cat <<EOF | kubectl apply -f -
apiVersion: storage.alibabacloud.com/v1alpha1
kind: StorageAutoScalerPolicy
metadata:
  name: hybrid-expand-policy
spec:
  pvcSelector:
    matchLabels:
      app: web-quota
  namespaces:
    - default
  conditions:
    - name: condition1
      key: volume-capacity-used-percentage
      operator: Gt
      values:
        - "80"
  actions:
    - name: action
      type: volume-expand
      params:
        scale: 50%
        limits: 15Gi
EOF
```
As the template above shows, when more than 80% of a volume's capacity is in use, the volume is expanded by 50% at a time, up to a limit of 15 GiB.
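The expansion arithmetic can be sketched in plain shell (this is an illustration of the policy parameters above, not code from the component; sizes are tracked in MiB to stay in integer arithmetic):

```shell
# Simulate repeated 50% expansions of a 2 GiB volume, capped at 15 GiB.
size=2048    # current volume size in MiB (2Gi)
limit=15360  # limits: 15Gi in MiB
while [ "$size" -lt "$limit" ]; do
  size=$(( size * 3 / 2 ))                           # scale: 50% per expansion
  if [ "$size" -gt "$limit" ]; then size=$limit; fi  # never exceed limits
  echo "expanded to ${size}MiB"
done
```

So a 2 GiB volume reaches the 15 GiB ceiling after five expansions (3072, 4608, 6912, 10368, then 15360 MiB).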