What is rotational latency?

What is the largest stable-storage overhead for SSDs?

For the HDD case, your example looks pretty good. The average seek time is one third of the maximum time to move across all tracks, and the average rotational delay is half a revolution of the disk.
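The rule of thumb above can be turned into a quick back-of-the-envelope calculation. The drive parameters here (7200 RPM, 9 ms full-stroke seek) are illustrative assumptions, not from the question:

```python
# Average access time for a hypothetical HDD (assumed, typical parameters).
full_stroke_seek_ms = 9.0        # time to move the head across all tracks
rpm = 7200                       # spindle speed

avg_seek_ms = full_stroke_seek_ms / 3       # average seek ~ 1/3 of full stroke
revolution_ms = 60_000 / rpm                # one revolution takes ~8.33 ms
avg_rotational_ms = revolution_ms / 2       # average delay = half a revolution

avg_access_ms = avg_seek_ms + avg_rotational_ms
print(round(avg_access_ms, 2))   # ~7.17 ms before any data is transferred
```

So even before transferring a single byte, a random access on this drive costs about 7 ms on average.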

If the system has a multi-sector I/O buffer, it checks whether the desired sector is already in memory. If so, the existing copy is used instead of reading it from disk again. This helps especially where clusters are used and sequential sectors within a cluster are accessed.
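The check-before-read logic can be sketched in a few lines. This is a minimal toy model, not a real OS buffer cache; the names (`read_sector`, `read_from_disk`) are illustrative:

```python
# Toy sketch of a multi-sector I/O buffer: a read first checks the
# in-memory cache, and only on a miss does it touch the "disk".
disk_reads = 0                       # counts how often we actually hit the disk

def read_from_disk(sector):
    global disk_reads
    disk_reads += 1
    return f"data-{sector}"          # stand-in for a real device read

cache = {}                           # sector number -> cached contents

def read_sector(sector):
    if sector not in cache:          # miss: fetch from disk and keep a copy
        cache[sector] = read_from_disk(sector)
    return cache[sector]             # hit: reuse the in-memory copy

read_sector(5)
read_sector(5)                       # served from the buffer, no disk access
read_sector(6)
print(disk_reads)                    # 2
```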

On the other hand, if the sectors of a file are read in order but do not follow one another on disk because of fragmentation, the seek time plus rotational latency is paid for each fragment that is read.
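A rough cost model makes the penalty concrete. The numbers below are assumed (3 ms average seek, 4.17 ms average rotational delay, 0.05 ms per-sector transfer), and the worst case treats every sector as its own fragment:

```python
# Illustrative cost model: contiguous sectors pay the positioning cost once,
# fragmented sectors pay it on every fragment.
AVG_SEEK_MS = 3.0        # assumed average seek time
AVG_ROT_MS = 4.17        # assumed average rotational delay
TRANSFER_MS = 0.05       # assumed per-sector transfer time

def read_time_ms(n_sectors, n_fragments):
    # one seek + rotational delay per fragment, plus transfer for every sector
    return n_fragments * (AVG_SEEK_MS + AVG_ROT_MS) + n_sectors * TRANSFER_MS

contiguous = read_time_ms(100, 1)     # all 100 sectors in one run
fragmented = read_time_ms(100, 100)   # worst case: every sector elsewhere
print(round(contiguous, 1), round(fragmented, 1))   # 12.2 vs 722.0
```

A ~60x slowdown from fragmentation alone, which is why defragmentation mattered so much on HDDs.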

For the HDD case, there is also usually very little difference between reading and writing (I realize you only asked about accessing the mass-storage device, but I'm including this for completeness). If the data is encrypted, there is a delay to decrypt it when reading and to encrypt it when writing, though the latter can take a little longer.

For the SSD case, the big difference is that there is no seek time or rotational latency. (There are some set-up times, but these are on the order of 100 µs.) In addition, transferring bytes from the SSD's memory to RAM can be many times faster than from a hard disk. The overhead for an encrypted volume would be the same as above.
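Comparing the positioning overheads directly shows the gap. Both figures are the assumed values used above (HDD: average seek plus rotational delay; SSD: ~100 µs set-up):

```python
# Per-access overhead before any data moves (assumed, typical figures).
hdd_overhead_ms = 3.0 + 4.17     # avg seek + avg rotational delay
ssd_overhead_ms = 0.1            # ~100 us set-up time, no moving parts

print(round(hdd_overhead_ms / ssd_overhead_ms))   # ~72x faster per random access
```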

The big hit with SSDs is writing the data. NAND flash memory must first be erased and is then written in pages. The erase can take a few milliseconds, and there is a limited number of times this erase cycle can be reliably performed on each page. Wear leveling is used to keep the SSD medium from wearing out too quickly: a page of sectors that is written frequently will be moved to a different area of the device.
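The remapping idea can be sketched as a toy flash translation layer. This is a deliberately simplified illustration (real FTLs track free pages, garbage-collect, etc.); all names and the 8-page geometry are assumptions:

```python
# Toy wear-leveling sketch: logical pages are remapped so each write lands
# on the physical page with the fewest erases so far.
N_PHYS = 8                          # assumed number of physical flash pages
erase_counts = [0] * N_PHYS         # erase cycles performed on each page
mapping = {}                        # logical page -> current physical page

def write_page(logical):
    # choose the least-worn physical page (simplified policy)
    phys = min(range(N_PHYS), key=lambda p: erase_counts[p])
    erase_counts[phys] += 1         # NAND must be erased before the new write
    mapping[logical] = phys

for _ in range(16):
    write_page(0)                   # hammer a single "hot" logical page

print(erase_counts)                 # [2, 2, 2, 2, 2, 2, 2, 2]
print(max(erase_counts) - min(erase_counts))   # 0 -- wear is spread evenly
```

Without the remapping, all 16 erases would hit one physical page, burning through its limited erase budget 8 times faster.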

To answer the question in your title, I would say the biggest overhead for SSDs is wear leveling.


Thanks for the answer; I'll keep the question open a little longer in case anyone has anything else to add. Also, a log-structured FS seems to be a good fit for flash, since it minimizes the reuse of blocks (which is interesting, given that a log-structured FS is also good for hard drives, where it avoids random reads/writes).


@VF1 In embedded systems, a log-structured or journaling file system is often used to recover from a catastrophic crash, e.g. when a user pulls the power plug and the device loses power in the middle of a write with no battery backup. Ext4 and JFS are examples of journaling file systems used on Linux.