
Kubernetes Storage Performance Comparison

Goal: highly available storage. Current state: local storage is used; its advantage is high performance, its drawback is that data cannot be migrated quickly when the machine goes down.

Comparison

Storage types tested:

  • Tencent Cloud Block Storage (CBS) HSSD (highly available)
  • local Premium Cloud Disk, SATA HDD (single point)
  • local NVMe SSD (single point)
  • Tencent Cloud File Storage (CFS) over NFS (highly available)

Test results

Apart from the existing local mount approach, none of the other storage types performed well: both the CBS cloud disks and CFS showed comparatively low performance.

To run the tests, FIO is used to execute a set of test cases covering the following (an illustrative fio invocation is sketched below):

  • Random read/write bandwidth.
  • Random read/write IOPS.
  • Read/write latency.
  • Sequential read/write bandwidth.
  • Mixed random read/write IOPS.
Figure: summary chart of the benchmark results (k8s中存储性能.png)
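
As a rough sketch of what one of these test cases looks like, the following runs a 4k random-read IOPS job with fio. The parameters here are illustrative assumptions, not necessarily the exact ones dbench uses; /data stands for the volume's mount point inside the container.

# Illustrative only: 4k random-read IOPS with direct I/O against the mounted volume
fio --name=rand-read-iops --filename=/data/fiotest --size=2G \
    --ioengine=libaio --direct=1 --rw=randread --bs=4k \
    --iodepth=64 --runtime=15 --time_based --group_reporting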

Test procedure

docker
# Build and push the dbench image
git clone https://github.com/leeliu/dbench.git
cd dbench
docker build -t registry.cn-hangzhou.aliyuncs.com/gpdb/dbench:latest .
docker push registry.cn-hangzhou.aliyuncs.com/gpdb/dbench:latest
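
Optionally, the image can be smoke-tested on a workstation before running it in the cluster. This is a minimal sketch that assumes the image's entrypoint honors the DBENCH_MOUNTPOINT and DBENCH_QUICK variables shown in the Job spec further below.

# Hypothetical local smoke test: benchmark a throwaway host directory
docker run --rm \
  -v /tmp/dbench:/data \
  -e DBENCH_MOUNTPOINT=/data \
  -e DBENCH_QUICK=yes \
  registry.cn-hangzhou.aliyuncs.com/gpdb/dbench:latest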
StorageClass

local

cat <<\EOF |kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: test-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF

cat <<\EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-local-hssd
  labels:
    storage: test-local-data
    app: hssd
spec:
  capacity:
    storage: "500Gi"
  accessModes:
    - "ReadWriteOnce"
  volumeMode: Filesystem
  local:
    path: /data/xcw/gitaly
  storageClassName: "test-local-storage"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          #- 10.22.0.7 # Premium Cloud Disk (SATA HDD)
          - 10.22.0.26 # NVMe SSD
EOF
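
Two quick checks before scheduling the Job onto a local PV (a small sketch; the path and node are the ones used above): the directory behind the local volume must already exist on the selected node, and the PV should show as Available until a PVC claims it.

# On the selected node (10.22.0.26): the local path must exist beforehand
mkdir -p /data/xcw/gitaly
# From a machine with kubectl access: the PV should be listed as Available
kubectl get pv test-local-hssd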

ssd

#https://cloud.tencent.com/document/product/457/44239
cat <<\EOF |kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-cbs-ssd
provisioner: cloud.tencent.com/qcloud-cbs
parameters:
  type: CLOUD_SSD
  paymode: POSTPAID
  zone: "800003"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF

hssd

#https://cloud.tencent.com/document/product/457/44239
cat <<\EOF |kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-cbs-hssd
provisioner: cloud.tencent.com/qcloud-cbs
parameters:
  type: CLOUD_HSSD
  paymode: POSTPAID
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF
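
Unlike the local case, both CBS classes are dynamically provisioned: no PV is pre-created, the qcloud-cbs provisioner creates one when the PVC below is bound (WaitForFirstConsumer, so only once the Job pod is scheduled). A quick sanity check that both classes exist:

kubectl get storageclass test-cbs-ssd test-cbs-hssd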

nfs+pv+pvc

#nfs-pv
cat <<\EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dbench-pv-0
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 500Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: dbench-pv-claim
    namespace: gitlab-test-xcw
  mountOptions:
  - hard
  - nfsvers=3
  - nolock
  nfs:
    path: /nvxa7jnu/tmp-test/gitaly
    server: 10.22.0.29
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
EOF

#nfs-pvc
cat <<\EOF |kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dbench-pv-claim
  namespace: gitlab-test-xcw
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 500Gi
  volumeMode: Filesystem
  volumeName: dbench-pv-0
EOF
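
Before creating the benchmark Job against NFS, confirm that this pre-created PV and PVC actually bound to each other. Note that the PVC here has the same name and namespace as the one in dbench.yaml below, so when testing NFS the PVC section of dbench.yaml should be skipped.

# Both should report STATUS Bound before the Job is created
kubectl get pv dbench-pv-0
kubectl -n gitlab-test-xcw get pvc dbench-pv-claim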
pvc+job
cat <<\EOF> dbench.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dbench-pv-claim
  namespace: gitlab-test-xcw
spec:
  storageClassName: test-cbs-hssd
  # storageClassName: gp2
  # storageClassName: test-local-storage
  # storageClassName: ibmc-block-bronze
  # storageClassName: ibmc-block-silver
  # storageClassName: ibmc-block-gold
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
---
apiVersion: batch/v1
kind: Job
metadata:
  name: dbench
  namespace: gitlab-test-xcw
spec:
  template:
    spec:
      containers:
      - name: dbench
        image: registry.cn-hangzhou.aliyuncs.com/gpdb/dbench:latest
        imagePullPolicy: Always
        env:
          - name: DBENCH_MOUNTPOINT
            value: /data
          # - name: DBENCH_QUICK
          #   value: "yes"
          # - name: FIO_SIZE
          #   value: 1G
          # - name: FIO_OFFSET_INCREMENT
          #   value: 256M
          # - name: FIO_DIRECT
          #   value: "0"
        volumeMounts:
        - name: dbench-pv
          mountPath: /data
      restartPolicy: Never
      volumes:
      - name: dbench-pv
        persistentVolumeClaim:
          claimName: dbench-pv-claim
  backoffLimit: 4
EOF
deploy
kubectl apply -f dbench.yaml
result
kubectl logs -f job/dbench -n gitlab-test-xcw

The output looks similar to the following:

==================
= Dbench Summary =
==================
Random Read/Write IOPS: 75.7k/59.7k. BW: 523MiB/s / 500MiB/s
Average Latency (usec) Read/Write: 183.07/76.91
Sequential Read/Write: 536MiB/s / 512MiB/s
Mixed Random Read/Write IOPS: 43.1k/14.4k
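
Each StorageClass was tested by editing storageClassName in dbench.yaml and re-running the Job; between runs, the previous Job and PVC need to be removed first (a small sketch):

# Remove the finished Job and its PVC before switching to the next StorageClass
kubectl delete -f dbench.yaml
# For the dynamically provisioned classes, confirm the backing PV was reclaimed
kubectl get pv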

For the differences between the cloud disk types and their performance metrics, see the Tencent Cloud documentation; for reference, HSSD is priced at 1 CNY/GB/hour and CFS (file storage) at 1.6 CNY/GiB/month.

CBS HSSD (built entirely on NVMe SSD media):

==================
= Dbench Summary =
==================
Random Read/Write IOPS: 26.6k/20.7k. BW: 348MiB/s / 330MiB/s
Average Latency (usec) Read/Write: 307.98/634.11
Sequential Read/Write: 344MiB/s / 333MiB/s
Mixed Random Read/Write IOPS: 17.1k/5987

NFS (CFS Standard tier):

==================
= Dbench Summary =
==================
Random Read/Write IOPS: 69.9k/8995. BW: 113MiB/s / 101MiB/s
Average Latency (usec) Read/Write: 331.85/1843.09
Sequential Read/Write: 102MiB/s / 102MiB/s
Mixed Random Read/Write IOPS: 28.4k/9479

local Premium Cloud Disk, SATA HDD (two runs):

==================
= Dbench Summary =
==================
Random Read/Write IOPS: 110k/27.4k. BW: 2519MiB/s / 965MiB/s
Average Latency (usec) Read/Write: 183.90/220.97
Sequential Read/Write: 3624MiB/s / 989MiB/s
Mixed Random Read/Write IOPS: 28.9k/9600

==================
= Dbench Summary =
==================
Random Read/Write IOPS: 116k/25.1k. BW: 3125MiB/s / 906MiB/s
Average Latency (usec) Read/Write: 181.99/219.65
Sequential Read/Write: 3357MiB/s / 984MiB/s
Mixed Random Read/Write IOPS: 26.7k/8844

local NVMe SSD (two runs):

==================
= Dbench Summary =
==================
Random Read/Write IOPS: 245k/222k. BW: 2361MiB/s / 2636MiB/s
Average Latency (usec) Read/Write: 104.23/22.16
Sequential Read/Write: 2726MiB/s / 2763MiB/s
Mixed Random Read/Write IOPS: 163k/54.4k

==================
= Dbench Summary =
==================
Random Read/Write IOPS: 237k/210k. BW: 2179MiB/s / 2681MiB/s
Average Latency (usec) Read/Write: 102.78/24.52
Sequential Read/Write: 2722MiB/s / 2750MiB/s
Mixed Random Read/Write IOPS: 178k/59.4k