Benchmark details and full results for our NVMe servers
Tools used for NVMe benchmarking
To benchmark our NVMe devices, we used two standard Linux tools: dd and fio. The test commands and the full output of each run are shown below.
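Before running the commands below, it may help to confirm both tools are available. dd ships with coreutils on every mainstream distribution, while fio usually has to be installed separately; the package names in the hint below assume a Debian/Ubuntu or RHEL-family system.

```shell
# Check that both benchmarking tools are on the PATH. dd is part of
# coreutils and should always be present; fio typically is not.
command -v dd >/dev/null && echo "dd found"
command -v fio >/dev/null && echo "fio found" \
    || echo "fio missing: install with 'apt-get install fio' or 'dnf install fio'"
```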
dd test results
Write - speed
Test command: dd if=/dev/zero of=benchmark bs=64K count=32K conv=fdatasync
Command output (result):
32768+0 records in
32768+0 records out
2147483648 bytes (2.1 GB) copied, 0.939604 s, 2.3 GB/s
Test command: dd if=/dev/zero of=dd.test bs=64K count=256K conv=fdatasync
Command output (result):
262144+0 records in
262144+0 records out
17179869184 bytes (17 GB) copied, 7.52403 s, 2.3 GB/s
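For readers unfamiliar with the flags, here is a commented restatement of the first write test above (not a new benchmark; the `benchmark` filename is the one from the command itself):

```shell
# Annotated restatement of the first dd write test:
#   if=/dev/zero      read zeroes, so the source side is never the bottleneck
#   of=benchmark      the output file created on the NVMe-backed filesystem
#   bs=64K count=32K  32768 blocks of 64 KiB = 2 GiB written in total
#   conv=fdatasync    flush the file to the device before dd reports the
#                     elapsed time; without it, dd would largely measure
#                     the page cache rather than the drive
dd if=/dev/zero of=benchmark bs=64K count=32K conv=fdatasync
rm benchmark   # remove the 2 GiB test file afterwards
```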
Read - speed
Test command: dd if=dd.test of=/dev/null bs=64k count=32k
Command output (result):
32768+0 records in
32768+0 records out
2147483648 bytes (2.1 GB) copied, 0.216884 s, 9.9 GB/s
Test command: dd if=dd.test of=/dev/null bs=64k count=128k
Command output (result):
131072+0 records in
131072+0 records out
8589934592 bytes (8.6 GB) copied, 1.27211 s, 6.8 GB/s
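The first read result (9.9 GB/s) is noticeably higher than the second, which suggests part of dd.test was still in the page cache from the preceding write. To measure the device itself rather than RAM, the cache can be dropped before each read run; /proc/sys/vm/drop_caches is the standard Linux interface for this (root required).

```shell
# Flush dirty pages, then drop the page cache so the following dd read
# hits the NVMe device instead of RAM. Writing 3 drops the page cache
# plus dentries and inodes; writing 1 would drop the page cache only.
sync
if [ "$(id -u)" -eq 0 ]; then
    echo 3 > /proc/sys/vm/drop_caches
else
    echo "not root: skipping cache drop"
fi
```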
fio test results
Write - speed
Test command: fio --name=write_throughput --numjobs=8 --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=1M --iodepth=64 --rw=write --group_reporting=1
Command output (result):
Starting 8 processes
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
Jobs: 8 (f=8): [W(8)][100.0%][r=0KiB/s,w=6106MiB/s][r=0,w=6106 IOPS][eta 00m:00s]
write_throughput: (groupid=0, jobs=8): err= 0: pid=15225: Fri Jan 6 04:11:24 2023
write: IOPS=5827, BW=5836MiB/s (6120MB/s)(342GiB/60046msec)
slat (usec): min=23, max=1195.5k, avg=1369.26, stdev=5911.07
clat (usec): min=1970, max=1324.1k, avg=86469.47, stdev=61678.33
lat (msec): min=2, max=1646, avg=87.84, stdev=62.40
clat percentiles (msec):
| 1.00th=[ 32], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 55],
| 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 78],
| 70.00th=[ 86], 80.00th=[ 102], 90.00th=[ 140], 95.00th=[ 180],
| 99.00th=[ 355], 99.50th=[ 460], 99.90th=[ 693], 99.95th=[ 751],
| 99.99th=[ 1250]
bw ( KiB/s): min=30658, max=2314158, per=12.50%, avg=746942.35, stdev=294612.36, samples=959
iops : min= 29, max= 2259, avg=729.39, stdev=287.71, samples=959
lat (msec) : 2=0.01%, 4=0.01%, 10=0.21%, 20=0.11%, 50=12.11%
lat (msec) : 100=67.34%, 250=18.22%, 500=1.79%, 750=0.29%, 1000=0.03%
cpu : usr=2.60%, sys=3.62%, ctx=160252, majf=0, minf=41
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=101.7%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=0,349939,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
WRITE: bw=5836MiB/s (6120MB/s), 5836MiB/s-5836MiB/s (6120MB/s-6120MB/s), io=342GiB (367GB), run=60046-60046msec
Disk stats (read/write):
vda: ios=3/1069131, merge=0/4778, ticks=24/15572731, in_queue=15572755, util=94.03%
Write - IOPS
Test command: fio --name=write_iops --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randwrite --group_reporting=1
Command output (result):
write_iops: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=466MiB/s][r=0,w=119k IOPS][eta 00m:00s]
write_iops: (groupid=0, jobs=1): err= 0: pid=15333: Fri Jan 6 04:24:55 2023
write: IOPS=48.5k, BW=189MiB/s (199MB/s)(11.1GiB/60001msec)
slat (nsec): min=1490, max=27082k, avg=12221.64, stdev=216157.25
clat (usec): min=64, max=257421, avg=1305.70, stdev=2107.86
lat (usec): min=71, max=257434, avg=1318.26, stdev=2119.11
clat percentiles (usec):
| 1.00th=[ 297], 5.00th=[ 545], 10.00th=[ 627], 20.00th=[ 709],
| 30.00th=[ 783], 40.00th=[ 857], 50.00th=[ 938], 60.00th=[ 1045],
| 70.00th=[ 1172], 80.00th=[ 1385], 90.00th=[ 1713], 95.00th=[ 2343],
| 99.00th=[12387], 99.50th=[15664], 99.90th=[20579], 99.95th=[22152],
| 99.99th=[24511]
bw ( KiB/s): min=70064, max=477512, per=98.84%, avg=191758.47, stdev=66471.55, samples=119
iops : min=17516, max=119378, avg=47939.55, stdev=16617.92, samples=119
lat (usec) : 100=0.01%, 250=0.64%, 500=2.44%, 750=22.63%, 1000=30.40%
lat (msec) : 2=37.19%, 4=4.33%, 10=0.92%, 20=1.34%, 50=0.11%
lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%
cpu : usr=5.54%, sys=48.51%, ctx=77260, majf=0, minf=5
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=101.2%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=0,2910184,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
WRITE: bw=189MiB/s (199MB/s), 189MiB/s-189MiB/s (199MB/s-199MB/s), io=11.1GiB (11.9GB), run=60001-60001msec
Disk stats (read/write):
vda: ios=0/3495253, merge=0/1715345, ticks=0/2926319, in_queue=2926319, util=99.56%
Read - speed
Test command: fio --name=read_throughput --numjobs=8 --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=1M --iodepth=64 --rw=read --group_reporting=1
Command output (result):
read_throughput: Laying out IO file (1 file / 10240MiB)
read_throughput: Laying out IO file (1 file / 10240MiB)
read_throughput: Laying out IO file (1 file / 10240MiB)
read_throughput: Laying out IO file (1 file / 10240MiB)
read_throughput: Laying out IO file (1 file / 10240MiB)
read_throughput: Laying out IO file (1 file / 10240MiB)
read_throughput: Laying out IO file (1 file / 10240MiB)
read_throughput: Laying out IO file (1 file / 10240MiB)
Jobs: 8 (f=8): [R(8)][100.0%][r=11.6GiB/s,w=0KiB/s][r=11.9k,w=0 IOPS][eta 00m:00s]
read_throughput: (groupid=0, jobs=8): err= 0: pid=15594: Fri Jan 6 04:28:13 2023
read: IOPS=12.6k, BW=12.3GiB/s (13.3GB/s)(741GiB/60020msec)
slat (usec): min=15, max=423179, avg=631.03, stdev=1861.12
clat (msec): min=3, max=501, avg=39.89, stdev=14.09
lat (msec): min=4, max=521, avg=40.52, stdev=14.24
clat percentiles (msec):
| 1.00th=[ 19], 5.00th=[ 24], 10.00th=[ 27], 20.00th=[ 31],
| 30.00th=[ 34], 40.00th=[ 36], 50.00th=[ 39], 60.00th=[ 42],
| 70.00th=[ 44], 80.00th=[ 48], 90.00th=[ 54], 95.00th=[ 59],
| 99.00th=[ 79], 99.50th=[ 97], 99.90th=[ 174], 99.95th=[ 222],
| 99.99th=[ 380]
bw ( MiB/s): min= 460, max= 2268, per=12.48%, avg=1577.97, stdev=215.13, samples=960
iops : min= 460, max= 2268, avg=1577.91, stdev=215.13, samples=960
lat (msec) : 4=0.01%, 10=0.02%, 20=1.89%, 50=83.59%, 100=14.11%
lat (msec) : 250=0.42%, 500=0.03%, 750=0.01%
cpu : usr=0.21%, sys=7.50%, ctx=349473, majf=0, minf=38
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=103.2%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=758134,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=12.3GiB/s (13.3GB/s), 12.3GiB/s-12.3GiB/s (13.3GB/s-13.3GB/s), io=741GiB (795GB), run=60020-60020msec
Disk stats (read/write):
vda: ios=2347120/23, merge=0/83, ticks=15632646/230, in_queue=15632876, util=99.91%
Read - IOPS
Test command: fio --name=read_iops --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randread --group_reporting=1
Command output (result):
Starting 1 process
read_iops: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=0): [f(1)][100.0%][r=688MiB/s,w=0KiB/s][r=176k,w=0 IOPS][eta 00m:00s]
read_iops: (groupid=0, jobs=1): err= 0: pid=15604: Fri Jan 6 04:29:35 2023
read: IOPS=174k, BW=680MiB/s (713MB/s)(39.9GiB/60001msec)
slat (nsec): min=1320, max=783551, avg=2281.25, stdev=3396.89
clat (usec): min=85, max=9054, avg=364.28, stdev=102.26
lat (usec): min=87, max=9056, avg=366.77, stdev=101.87
clat percentiles (usec):
| 1.00th=[ 169], 5.00th=[ 206], 10.00th=[ 237], 20.00th=[ 277],
| 30.00th=[ 310], 40.00th=[ 343], 50.00th=[ 367], 60.00th=[ 388],
| 70.00th=[ 412], 80.00th=[ 441], 90.00th=[ 486], 95.00th=[ 529],
| 99.00th=[ 635], 99.50th=[ 676], 99.90th=[ 791], 99.95th=[ 840],
| 99.99th=[ 1020]
bw ( KiB/s): min=604232, max=745192, per=99.98%, avg=696321.24, stdev=39653.34, samples=119
iops : min=151058, max=186298, avg=174080.34, stdev=9913.32, samples=119
lat (usec) : 100=0.01%, 250=13.03%, 500=79.09%, 750=7.72%, 1000=0.14%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%
cpu : usr=10.05%, sys=54.88%, ctx=241230, majf=0, minf=5
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=103.3%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=10447203,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=680MiB/s (713MB/s), 680MiB/s-680MiB/s (713MB/s-713MB/s), io=39.9GiB (42.8GB), run=60001-60001msec
Disk stats (read/write):
vda: ios=10748804/4, merge=0/1, ticks=3136545/24, in_queue=3136569, util=99.92%
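With four fio runs, the headline numbers are easy to lose in the full logs. If each run's output is saved to a file, the per-run summary bandwidth lines can be pulled out in one pass (the *.log filenames are illustrative; the commands above print to stdout unless redirected):

```shell
# Extract the READ:/WRITE: summary lines from saved fio logs and keep
# only the direction and bandwidth fields for a quick comparison.
grep -hE '^[[:space:]]*(READ|WRITE):' *.log | awk '{print $1, $2}'
```

For the runs above this would yield lines such as "WRITE: bw=5836MiB/s" and "READ: bw=12.3GiB/s", one per test.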