I have written a few times recently about some older Supermicro servers I am working with.
Since the setup I built has 4 x 1 TB hard drives (the spinning kind) and no space for an SSD to use as a cache, I bought a used 160 GB Fusion-IO ioDrive on eBay.
I had been running Debian on the server. The drivers/software for the Fusion-IO under Linux were a bit restrictive, requiring a specific distro and kernel. I could not find any free/cheap caching software for Linux that actually worked. I found some open source options, but they just did not work.
The Hardware/Software Setup
- Supermicro X7DWT-INF motherboard
- 32 GB RAM
- 4 x 1 TB WD Red 2.5" spinning hard drives
- 1 x 160 GB Fusion-IO ioDrive
- Intel® Cache Acceleration Software
- Windows Server 2008 R2
Why the switch from Debian to Windows?
- Fusion-IO setup under Windows is much easier
- Availability of Intel’s caching software
- The ability to easily run any version of MySQL
The server getting the upgrade powers a database – nothing else. The hard drives are attached to the onboard Intel RAID controller, operating in RAID 10, which in my previous testing gave the best write performance while still providing redundancy.
The workload is 90% write, with no users sitting waiting for writes to complete. Write performance still needs to be quick to keep up with the transaction volume. Users do not want slow reads either – of course.
Ideally I would have a Fusion ioDrive big enough to hold all the database files, but that is not financially possible.
Currently the live data in the database is a little over 110 GB. Technically that would fit on the 160 GB Fusion-IO ioDrive, but as the database grows, there could be issues.
The Intel Cache Acceleration Software provides read caching only. It operates in write-through mode to populate the cache; write-back is not available in the software, but I probably would not use it anyway due to the increased risk of data loss in the event of a crash or power failure.
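To make the write-through behaviour concrete, here is a minimal sketch in Python. This is not Intel CAS's actual implementation, just the general pattern the software follows, with plain dictionaries standing in for the two storage tiers:

```python
# Conceptual sketch of a write-through read cache. NOT Intel CAS's
# real code, just the behaviour it provides.

class WriteThroughCache:
    def __init__(self):
        self.backing = {}  # stands in for the slow RAID 10 array
        self.cache = {}    # stands in for the fast ioDrive

    def read(self, block):
        # Hot blocks come from the SSD; a miss goes to the array
        # and populates the cache for next time.
        if block in self.cache:
            return self.cache[block]
        data = self.backing[block]
        self.cache[block] = data
        return data

    def write(self, block, data):
        # Write-through: the write hits the array before returning,
        # so an acknowledged write survives a crash or power failure.
        self.backing[block] = data
        self.cache[block] = data  # keep cache and array consistent


c = WriteThroughCache()
c.write(7, b"row data")  # goes to both tiers
print(c.read(7))         # served from the cache
```

The trade-off is that every write still pays the full latency of the spinning disks before it is acknowledged, which is significant on a workload that is 90% write.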
Let's find out what a PCIe-based SSD can do caching those spinning disks. All tests were done on the server with no users connected. The database was restored for each test, which automatically populates the cache when caching is enabled.
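As an aside, the timing harness for a restore is trivial. Something along these lines is all it takes; the dump path, credentials, and database name below are placeholders, not the real ones from this server:

```python
import subprocess
import time

# Time a MySQL restore from a dump file. Path, user, password, and
# database name are all hypothetical placeholders.
start = time.time()
with open(r"C:\backups\database.sql", "rb") as dump:
    subprocess.run(
        ["mysql", "-u", "root", "-pSECRET", "mydb"],
        stdin=dump,
        check=True,
    )
minutes = (time.time() - start) / 60
print(f"Restore took {minutes:.0f} minutes")
```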
110+ GB Database Restore
- 2 hours, 57 minutes – Restore to RAID 10 (no caching)
- 3 hours, 10 minutes – Restore to RAID 10 (with caching)
- 2 hours, 7 minutes – Restore to Fusion-IO storage (no caching)
Sample Reads
I conducted a number of read tests, accessing various databases while running reports and user queries. The same queries were run for each test after a fresh database restore. Some queries make full use of indexes, some do not.
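Conceptually, the harness looked something like the sketch below; the queries shown are made up for illustration, not the actual report queries:

```python
import subprocess
import time

# Hypothetical test queries: the real set was a mix of indexed
# lookups and queries that force full scans.
queries = [
    "SELECT COUNT(*) FROM orders WHERE customer_id = 42",
    "SELECT SUM(amount) FROM transactions WHERE note LIKE '%refund%'",
]

total = 0.0
for q in queries:
    start = time.time()
    subprocess.run(
        ["mysql", "-u", "root", "-pSECRET", "mydb", "-e", q],
        check=True,
        capture_output=True,  # discard the rows; only the timing matters
    )
    total += time.time() - start

print(f"Total: {total:.2f} seconds")
```

The totals for each configuration: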
- 22.54 seconds – Read from RAID 10 (no caching)
- 22.62 seconds – Read from RAID 10 (with caching)
- 21.57 seconds – Read from Fusion-IO storage (no caching)
Surprised by those numbers?!? I sure am; underwhelming, to say the least. Time to do some research into why the performance is so poor when it should be so much better, both when using the cache and when reading directly from the Fusion-IO card.
Testing The Disks
RAID 10
A previous benchmark on this exact same hardware, but with a different version of Windows, showed a maximum write speed of 138 MB/sec. Running the same benchmark software on the same hardware under the current version of Windows, it maxes out at only 58 MB/sec. Things get stranger and stranger today.
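As a sanity check outside of GUI benchmark tools, a crude sequential write test is easy to put together. The sketch below is a rough approach of my own, not the benchmark software used for the numbers above, and the drive letter is a placeholder:

```python
import os
import time

TEST_FILE = r"E:\bench.tmp"        # placeholder path on the volume under test
CHUNK = b"\0" * (8 * 1024 * 1024)  # 8 MB per write
TOTAL_MB = 1024                    # write 1 GB in total

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_MB // 8):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())  # force the data to disk before stopping the clock
elapsed = time.time() - start

print(f"{TOTAL_MB / elapsed:.0f} MB/sec sequential write")
os.remove(TEST_FILE)
```

Something this simple measures buffered writes with a final flush, so treat the output as a ballpark figure only.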
Fusion-IO ioDrive
I have no previous benchmark for this one, but the current graph shows speeds that I would expect from this PCIe flash card: an impressive maximum write speed of 811 MB/sec and a read speed of 847 MB/sec.
With roughly 14 times the write speed of the spinning disks (about a 1,300% increase), why is the MySQL restore only 39% faster when restoring to the Fusion-IO disk?
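For reference, here is the arithmetic behind that question, worked out from the measurements above:

```python
# The numbers behind the question above.
raid_write = 58      # MB/sec maximum write, RAID 10 array
iodrive_write = 811  # MB/sec maximum write, ioDrive

raid_restore = 2 * 60 + 57    # 177 minutes
iodrive_restore = 2 * 60 + 7  # 127 minutes

print(f"Raw write speedup: {iodrive_write / raid_write:.1f}x")       # ~14.0x
print(f"Restore speedup:   {raid_restore / iodrive_restore:.2f}x")   # ~1.39x
```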