Slow performance with SSD storage


navidmalek
2 months ago (edited)

Hi! Before I start, I have to apologize for my bad English; I'm not a native English speaker.


I'm using LizardFS on two server clusters: one uses HDDs as storage and the other uses SSDs.

I was testing I/O performance with the dd command.

When I test the cluster with HDD storage, the results are fine, but on the SSD cluster both read and write performance are far too slow.


Common configuration for both clusters:

1 master server

2 chunk servers


The dd commands:

for write ==>

dd if=/dev/zero of=/mnt/lizardfs/TEST.img bs=2G count=1 oflag=dsync
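
Side note on this command: oflag=dsync with a single 2 GiB block measures one big synchronous write rather than sustained throughput, and on Linux a single read() call returns at most just under 2 GiB, so dd may actually transfer less than the full block. A sustained-throughput variant could look like the line below; the 1 MiB block size is just an illustrative choice, and conv=fsync is used instead of oflag=direct because direct I/O is not always supported on FUSE mounts:

dd if=/dev/zero of=/mnt/lizardfs/TEST.img bs=1M count=2048 conv=fsync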


for read ==>

echo 3 | tee /proc/sys/vm/drop_caches ; time dd if=/mnt/lizardfs/TEST.img of=/dev/null bs=BESTFIT_NUMBER
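
Note that writing to /proc/sys/vm/drop_caches needs root, and the kernel documentation recommends running sync first so dirty pages are written back before dropping. Also, this only clears the client's page cache; the chunkservers keep their own OS page caches, so a truly cold read would need the caches dropped on those machines as well. A variant with an assumed 1 MiB block size:

sync; echo 3 | sudo tee /proc/sys/vm/drop_caches; time dd if=/mnt/lizardfs/TEST.img of=/dev/null bs=1M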


Performance on the HDD cluster:

write ==> around 25 MB/s

read ==> around 270 MB/s


Performance on the SSD cluster:

write ==> around 16 MB/s

read ==> around 36 MB/s


Important note:

The CGI server shows that both the SSD and HDD drives are performing at full speed.

For the SSD it shows:

read ==> around 680 MB/s

write ==> 94 MB/s



Things I've tried that have not solved the issue:

  1. using a randomly generated file instead of /dev/zero, and not using HDD_PUNCH_HOLES
  2. testing the network speed of the SSD servers (around 155 MB/s; see the sketch after this list)
  3. testing the SSD drives themselves (also in the sketch below)
  4. trying other benchmarks, such as https://github.com/Korkman/storage-tuner-benchmark
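
In case it helps to reproduce items 2 and 3, here is a minimal sketch of how those checks could be done, assuming iperf3 and fio are installed; CHUNKSERVER_IP and /mnt/ssd are placeholders for the chunkserver's address and its SSD data mount:

# network throughput between client and chunkserver
iperf3 -s                               (on the chunkserver)
iperf3 -c CHUNKSERVER_IP -P 4 -t 30     (on the client)

# raw SSD throughput on the chunkserver, bypassing the page cache
fio --name=ssdtest --filename=/mnt/ssd/fio.tmp --size=1G --bs=1M --rw=write --direct=1 --ioengine=libaio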



Reading this thread: