
I've followed some guides but this has now gone beyond my limited knowledge.

I have a Dell R630 server running Proxmox V8.0.4, and I have installed a Dell H200e HBA flashed to IT mode as per these instructions:

https://techmattr.wordpress.com/2016/04/11/updated-sas-hba-crossflashing-or-flashing-to-it-mode-dell-perc-h200-and-h310/

I've been trying to set up TrueNAS in a VM. Although I got it working, the write performance was poor.

In an attempt to rule out issues with the hardware passthrough, I have just run some read/write tests from the Proxmox host to a single drive in an MD1200 connected through the H200.

This is the read test:

root@proxmox:~# fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4K --numjobs=1 --iodepth=4 --runtime=60 --time_based --name seq_read_job --filename=/dev/sdd
seq_read_job: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=4
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=168MiB/s][r=43.0k IOPS][eta 00m:00s]
seq_read_job: (groupid=0, jobs=1): err= 0: pid=10317: Sun Nov 12 14:12:12 2023
  read: IOPS=41.5k, BW=162MiB/s (170MB/s)(9734MiB/60001msec)
    slat (usec): min=3, max=675, avg=15.87, stdev= 5.57
    clat (usec): min=30, max=28442, avg=78.73, stdev=77.27
     lat (usec): min=41, max=28458, avg=94.60, stdev=77.14
    clat percentiles (usec):
     |  1.00th=[   43],  5.00th=[   54], 10.00th=[   63], 20.00th=[   70],
     | 30.00th=[   72], 40.00th=[   74], 50.00th=[   74], 60.00th=[   76],
     | 70.00th=[   79], 80.00th=[   81], 90.00th=[   85], 95.00th=[   89],
     | 99.00th=[  143], 99.50th=[  660], 99.90th=[  865], 99.95th=[  873],
     | 99.99th=[  955]
   bw (  KiB/s): min=124528, max=175392, per=100.00%, avg=166278.55, stdev=5530.75, samples=119
   iops        : min=31132, max=43848, avg=41569.62, stdev=1382.68, samples=119
  lat (usec)   : 50=3.37%, 100=95.08%, 250=0.71%, 500=0.32%, 750=0.03%
  lat (usec)   : 1000=0.48%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  cpu          : usr=9.57%, sys=40.60%, ctx=2888210, majf=0, minf=42
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2492009,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=162MiB/s (170MB/s), 162MiB/s-162MiB/s (170MB/s-170MB/s), io=9734MiB (10.2GB), run=60001-60001msec

Disk stats (read/write):
  sdd: ios=2487331/0, merge=82/0, ticks=151526/0, in_queue=151526, util=99.95%

And the write test:

root@proxmox:~# fio --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=4K --numjobs=1 --iodepth=4 --runtime=60 --time_based --name seq_write --filename=/dev/sdd 
seq_write: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=4
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=624KiB/s][w=156 IOPS][eta 00m:00s]
seq_write: (groupid=0, jobs=1): err= 0: pid=8502: Sun Nov 12 14:00:37 2023
  write: IOPS=152, BW=611KiB/s (626kB/s)(35.8MiB/60019msec); 0 zone resets
    slat (usec): min=6, max=25638, avg=20.06, stdev=267.78
    clat (usec): min=24487, max=50681, avg=26114.08, stdev=2881.03
     lat (usec): min=24527, max=58414, avg=26134.15, stdev=2899.79
    clat percentiles (usec):
     |  1.00th=[25035],  5.00th=[25035], 10.00th=[25035], 20.00th=[25035],
     | 30.00th=[25035], 40.00th=[25035], 50.00th=[25035], 60.00th=[25035],
     | 70.00th=[25035], 80.00th=[25035], 90.00th=[33424], 95.00th=[33424],
     | 99.00th=[33424], 99.50th=[33817], 99.90th=[41681], 99.95th=[50070],
     | 99.99th=[50594]
   bw (  KiB/s): min=  448, max=  640, per=99.96%, avg=611.76, stdev=23.26, samples=119
   iops        : min=  112, max=  160, avg=152.94, stdev= 5.82, samples=119
  lat (msec)   : 50=99.92%, 100=0.08%
  cpu          : usr=0.05%, sys=0.37%, ctx=2341, majf=0, minf=12
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,9172,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
  WRITE: bw=611KiB/s (626kB/s), 611KiB/s-611KiB/s (626kB/s-626kB/s), io=35.8MiB (37.6MB), run=60019-60019msec

Disk stats (read/write):
  sdd: ios=77/9152, merge=0/0, ticks=958/238391, in_queue=239348, util=99.80%

As you can see, my write speed is very slow, though I don't know whether that indicates a problem with my hardware or an issue with the way I am testing it.
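
If the problem is just my test methodology (tiny synchronous writes), I assume a larger-block asynchronous sequential write would look much healthier. Something like this is what I had in mind to check whether the drive itself can stream writes (untested sketch; like the runs above, it writes directly to /dev/sdd):

fio --ioengine=libaio --direct=1 --rw=write --bs=1M --numjobs=1 --iodepth=16 --runtime=60 --time_based --name seq_write_1m --filename=/dev/sdd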

The HDDs are refurbished Dell-branded SAS drives, model HUS723030ALS640.

Any pointers on what could be wrong would be greatly appreciated.

Thanks.

Sopel97@alien.top:

That's expected for 4K synchronous writes. They need to complete one by one, and you're writing to a hard drive.
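
As a rough sanity check using your own numbers: the average completion latency (clat) in your write run is about 26 ms, and with iodepth=4 that works out to roughly 4 / 0.026 s ≈ 150 writes per second, i.e. ~150 × 4 KiB ≈ 600 KiB/s, which is almost exactly the 611 KiB/s fio reported. So the result looks consistent with the test parameters (direct, synchronous 4K writes to a spinning disk) rather than with a hardware fault.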
