Testing a brand-new NVMe drive together with LVM/RAID0
To start with, I run a benchmark on the root partition, which sits on the NVMe:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=4G --filename=testfile
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [m(1)][100.0%][r=475MiB/s,w=158MiB/s][r=122k,w=40.5k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=967354: Fri Jan 20 16:08:51 2023
read: IOPS=122k, BW=477MiB/s (500MB/s)(3070MiB/6433msec)
bw ( KiB/s): min=484400, max=494952, per=100.00%, avg=488777.33, stdev=2638.53, samples=12
iops : min=121100, max=123738, avg=122194.33, stdev=659.63, samples=12
write: IOPS=40.8k, BW=159MiB/s (167MB/s)(1026MiB/6433msec); 0 zone resets
bw ( KiB/s): min=160680, max=165056, per=100.00%, avg=163478.00, stdev=1347.77, samples=12
iops : min=40170, max=41264, avg=40869.50, stdev=336.94, samples=12
cpu : usr=21.74%, sys=43.69%, ctx=445460, majf=0, minf=6
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=785920,262656,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=477MiB/s (500MB/s), 477MiB/s-477MiB/s (500MB/s-500MB/s), io=3070MiB (3219MB), run=6433-6433msec
WRITE: bw=159MiB/s (167MB/s), 159MiB/s-159MiB/s (167MB/s-167MB/s), io=1026MiB (1076MB), run=6433-6433msec
Disk stats (read/write):
nvme0n1: ios=781546/261240, merge=0/11, ticks=395602/2057, in_queue=397662, util=98.54%
which gives me the following read/write speeds:
read: IOPS=122k, BW=477MiB/s (500MB/s)(3070MiB/6433msec)
write: IOPS=40.8k, BW=159MiB/s (167MB/s)(1026MiB/6433msec);
Then I create 6 empty backing files:
for i in {1..6}; do dd if=/dev/zero bs=1500M count=1 of=raid/sda$i.dd; done
Then I attach them as loop devices /dev/loop{15..20} and build an LVM RAID0 (a striped LV) on top of them.
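The setup itself was roughly as follows; treat this as a sketch, since the exact lvcreate flags are reconstructed from the output below, and the 64 KiB stripe size is an assumption based on stripe=96 in the ext4 mount options:
for f in /root/raid/sda{1..6}.dd; do losetup -f "$f"; done   # files end up on /dev/loop15../dev/loop20
pvcreate /dev/loop{15..20}
vgcreate vgdata /dev/loop{15..20}
lvcreate -n lvmirror -i 6 -I 64 -l 100%FREE vgdata           # 6 stripes, 64 KiB stripe size (assumed)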
lvs -a -o name,copy_percent,devices vgdata
LV Cpy%Sync Devices
lvmirror /dev/loop15(0),/dev/loop16(0),/dev/loop17(0),/dev/loop18(0),/dev/loop19(0),/dev/loop20(0)
losetup -l|grep sda
/dev/loop19 0 0 0 0 /root/raid/sda5.dd 0 512
/dev/loop17 0 0 0 0 /root/raid/sda3.dd 0 512
/dev/loop15 0 0 0 0 /root/raid/sda1.dd 0 512
/dev/loop18 0 0 0 0 /root/raid/sda4.dd 0 512
/dev/loop16 0 0 0 0 /root/raid/sda2.dd 0 512
/dev/loop20 0 0 0 0 /root/raid/sda6.dd 0 512
I mount the resulting FS into the raid directory.
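Filesystem creation and mount were nothing special, roughly (default mkfs.ext4 options assumed):
mkfs.ext4 /dev/vgdata/lvmirror
mount /dev/vgdata/lvmirror /root/raid
The stripe=96 visible in the mount output below is 96 blocks x 4 KiB = 384 KiB, i.e. the full stripe width of 6 devices x 64 KiB.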
mount |grep raid
/dev/mapper/vgdata-lvmirror on /root/raid type ext4 (rw,relatime,stripe=96)
Then I run the same benchmark on this filesystem and get noticeably higher throughput (roughly 1.5x):
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=4G --filename=testfile
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [m(1)][100.0%][r=738MiB/s,w=247MiB/s][r=189k,w=63.2k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=967394: Fri Jan 20 16:09:19 2023
read: IOPS=190k, BW=743MiB/s (779MB/s)(3070MiB/4130msec)
bw ( KiB/s): min=750888, max=775560, per=100.00%, avg=761323.00, stdev=7770.14, samples=8
iops : min=187722, max=193890, avg=190330.75, stdev=1942.54, samples=8
write: IOPS=63.6k, BW=248MiB/s (260MB/s)(1026MiB/4130msec); 0 zone resets
bw ( KiB/s): min=251480, max=259656, per=100.00%, avg=254520.00, stdev=2730.45, samples=8
iops : min=62870, max=64914, avg=63630.00, stdev=682.61, samples=8
cpu : usr=29.35%, sys=70.53%, ctx=197, majf=0, minf=8
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=785920,262656,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=743MiB/s (779MB/s), 743MiB/s-743MiB/s (779MB/s-779MB/s), io=3070MiB (3219MB), run=4130-4130msec
WRITE: bw=248MiB/s (260MB/s), 248MiB/s-248MiB/s (260MB/s-260MB/s), io=1026MiB (1076MB), run=4130-4130msec
Disk stats (read/write):
dm-0: ios=739484/247392, merge=0/0, ticks=3636/1212, in_queue=4848, util=97.59%, aggrios=130986/43776, aggrmerge=0/0, aggrticks=648/234, aggrin_queue=883, aggrutil=96.38%
loop19: ios=130874/43894, merge=0/0, ticks=646/233, in_queue=880, util=96.38%
loop17: ios=131194/43558, merge=0/0, ticks=654/234, in_queue=888, util=96.38%
loop15: ios=131019/43749, merge=0/0, ticks=651/235, in_queue=886, util=96.38%
loop18: ios=130916/43848, merge=0/0, ticks=649/240, in_queue=888, util=96.38%
loop16: ios=131062/43706, merge=0/0, ticks=648/232, in_queue=879, util=96.38%
loop20: ios=130856/43901, merge=0/0, ticks=644/232, in_queue=877, util=96.38%
This time the read/write speeds are:
read: IOPS=190k, BW=743MiB/s (779MB/s)(3070MiB/4130msec)
write: IOPS=63.6k, BW=248MiB/s (260MB/s)(1026MiB/4130msec);
Does this test actually demonstrate a real performance gain, and if so, where does it come from?