Author Archives: wagamama

Intel Cache Acceleration Software – v3 is out!!

I have been waiting for this, because I had previously read that version 3 would introduce the (dangerous) feature of write-back caching.

It is dangerous because if the server crashes or loses power while writes in the cache have not yet been written to the disk, you will end up with inconsistent data… and that's bad.

So, let's do a few tests.

The Testing Environment

Here is the hardware setup of the server being used for the benchmark tests.

Motherboard: Supermicro
OS: Windows Server 2008 R2
Processor: Dual Intel Xeon L5420 @ 2.50 GHz
System RAM: 32 GB
Storage Controller: Intel ESB2 SATA RAID controller (driver 10.1.5.1001)
Spinning Disks: Seagate Laptop 1 TB (ST1000LM014)
Spinning Disk Setup: RAID 10 (4 disks)
Cache Device: Kingston Digital HyperX Predator 240 GB PCIe Gen2 x4 Solid State Drive

FIRST TEST

Restore a 205 GB MySQL database, which means a lot of writes to the disk. Let's see if the write cache makes any difference; in theory it should.

#1 – Cache off, source and destination on same disk: 6 hours 22 minutes
#2 – Cache on (write-back enabled), source and destination on same disk: 6 hours 30 minutes
#3 – Cache on (write-back enabled), source SMB network share: 7 hours 7 minutes

The hard drives in RAID 10 write at less than 200 MB/sec and the cache device writes at more than 650 MB/sec, yet the performance drops slightly? We should ideally be seeing a massive increase in performance.

How can the results actually be slower when using the cache?

SECOND TEST

Intel provides a tool called the Intel IO Assessment Tool; you run it on your system and it determines whether your system can benefit from the cache and which files you should be caching.

The results say I could benefit, and the folder I should be caching is the MySQL data folder. No surprise, since the server is strictly a MySQL server.

IOPS

Let's use Iometer to measure how many IOPS this hardware setup can produce. The tests are conducted using the OpenPerformanceTest16.icf (16GB) config from http://vmktree.org/iometer/. The specific test used was RealLife-60%Rand-65%Read.

Kingston Digital HyperX Predator (the system's cache drive)
I/Os per Second: 8,346
MB per second: 68.38 MB/s
Average I/O Response Time: 5.25 ms

Seagate Laptop (RAID 10) – Intel CAS enabled: NO
I/Os per Second: 150
MB per second: 1.23 MB/s
Average I/O Response Time: 350.20 ms

Seagate Laptop (RAID 10) – Intel CAS enabled: YES
I/Os per Second: 1,424
MB per second: 11.67 MB/s
Average I/O Response Time: 34.22 ms

This test shows that Intel CAS is working, with nearly a 9.5x improvement in IOPS over going directly to the disk. Yet there is no measurable improvement in MySQL performance?

FINAL RESULTS

The results of all tests I have done with Intel CAS have been disappointing to say the least. The new version 3 has no options to set, so I can’t really be screwing anything up.

I am going to reach out to Intel and see if they can provide any insight as to why I am not seeing any benefit in my real life usage.

Intel ESB2 SATA RAID Controller Drivers

It is February 2016, and I am trying to find Windows drivers for an 'antique' Intel RAID controller, the ESB2 SATA RAID card.

This card is accessible through the system BIOS and Windows 2012 R2 installs OK, but I can't see the status of my RAID drives from within Windows. Is my array OK?

I started searching and tried to install drivers from Intel. Their Intel RST page says the drivers support Windows 2012 R2, but when I try to install I am told the platform is not supported.

I came to learn that the last version of the Intel drivers to support the ESB2 SATA RAID card is 10.1.5.1001.

After some Google searching I found an installer for version 10.1.5.1001. Bingo, it installs and it works. If you want to grab the Intel drivers yourself, we are keeping a copy here: Intel_Rapid_Storage_10.1.5.1001

Prior to the install, the card was using Intel driver 8.6.2.1315, dated June 8, 2010. Now Windows is reporting version 10.1.5.1001, dated February 18, 2011.
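If you want to confirm from the command line which storage driver is actually loaded, the built-in driverquery tool can list it. The module name to search for here is an assumption (Intel RST storage drivers typically register as iaStor or iaStorV), so adjust it to whatever your system shows:

rem list drivers in CSV form (one record per line) and pick out the Intel storage module
driverquery /v /fo csv | findstr /i "iastor"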

Now with the Intel Rapid Storage Technology software available on the OS, I can enable/disable the write-back disk cache on my storage array. Dangerous? Yes. Faster? Yes. Will I do it? Probably.

Intel Cache Acceleration Software – Performance Test

I am attempting to boost the performance of some spinning disks by using Intel Cache Acceleration Software.

Using a write-through process, the software places the contents of folders/files you specify into the cache. When a file is next requested it can be quickly pulled from the cache rather than read from the slow spinning disk. This can result in major speed improvements for disk reads, which is great for caching spinning-disk content or network-based storage.

In my specific case I have a fairly small amount of cache, only 124 GB of PCIe SSD storage. I am allocating 100% of that space to caching the MySQL data folder, which in theory makes 100% of my database available from the cache, because the data folder holds only 116 GB of data.

The software integrates into Windows Performance Monitor so you can easily view how much data is in the cache, the % Cache Read Hits/sec and other good stuff.
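If you prefer the command line to the Perfmon GUI, the built-in typeperf tool can sample the same counters. The counter path below is only a placeholder; run typeperf -q and look for the Intel CAS counter set to get the exact name on your install:

rem list available counters and find the Intel CAS set (exact name varies by version)
typeperf -q | findstr /i "cas"

rem sample a (hypothetical) cache read hit counter every 5 seconds
typeperf "\Intel CAS(*)\% Cache Read Hits/sec" -si 5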

While monitoring those stats and running queries against the database, Perfmon was not showing hits to the cache… what's going on? Time to investigate and benchmark a few things.

There are really no configuration options in the caching software. The only setting is something called 2 Level Cache, which you can turn on or off; that's it. There is not much information about what it is for, and it is not intuitive based on the label.

I am going to run three simple tests, which should reveal how much of a performance difference the cache makes and whether I am better off running with this 2 Level Cache on or off.

Test #1
Restore & Backup 96gb Database With No Caching

Restore Time: 5 hours 49 minutes
Backup Time: 1 hour 03 minutes

Test #2
Restore & Backup 96gb Database With Caching – Level 2 Cache Off

Restore Time: 5 hours 56 minutes
Backup Time: 0 hours 53 minutes

Test #3
Restore & Backup 96gb Database With Caching – Level 2 Cache On

Restore Time: 6 hours 07 minutes
Backup Time: 0 hours 53 minutes

The purpose of the restore before the backup is to give the cache time to populate. In theory all of the database data on disk should then be available in the cache; however, I have no way to verify whether it is actually all in the cache or not.

The results? WOW! Can you say disappointed? What is going on here?

2.5 Inch Drives Are Not Quick

What can you do if you need high performance, only have 2.5 inch drive slots and can’t afford SSD?

Last year I built two servers using WD Red drives running in RAID 10. The performance is not that great and also not very consistent.

This year I built two more of the same servers, but wanted to equip them with Seagate Laptop SSHD disks, which are hybrid spinning disks with 8 GB of SSD-type flash on board. They cost a few dollars more, about $10 more at the time of this posting.

Based on benchmarks I read online before buying them, they should be faster than the WD Red drives… but the downside is they are not specifically designed for RAID-based systems (does that matter?).

First up, WD Red:

Maximum Read: 143MB/sec
Maximum Write: 86MB/sec

[Benchmark screenshot: WD Red RAID 10]

Next, Seagate Laptop SSHD:

Maximum Read: 214MB/sec
Maximum Write: 193MB/sec

[Benchmark screenshot: Seagate Laptop SSHD RAID 10]

Overall, quite a big performance boost. I am really hoping that it helps with the write speed since the workload is just about all write.

When copying a 90 GB file from an SSD over to the Seagate hybrid drives, Windows was reporting write speeds of 400 MB/sec. That was very impressive.

To add some extra boost to these old servers I also equipped them with Kingston HyperX Predator 240 GB PCIe solid state drives. I am using 100 GB of each drive as a boot device and the balance as a read cache for the MySQL data, which is stored on the Seagate RAID 10 array.

How does the HyperX Predator perform in a benchmark test? Let's take a look.

Maximum Read: 1,388MB/sec
Maximum Write: 675MB/sec

[Benchmark screenshot: Kingston HyperX Predator]

Those are certainly some big numbers; I am looking forward to the price coming down so we can get a few TB at speeds like that.

WD Red Drives – Inconsistent Performer

I have two servers, each with WD Red 2.5-inch 1 TB hard drives. The servers run 4 drives in a RAID 10 configuration.

I had previously benchmarked them with ATTO's Disk Benchmark software on two different occasions and thought it was strange to get different results. Today I ran another benchmark and got a third set of results.

Here are the dates and different results I have received.

August 29, 2014

Maximum Read: 162MB/Sec
Maximum Write: 135MB/Sec

[Benchmark screenshot: August 29, 2014 run]

April 7, 2015

Maximum Read: 174MB/Sec
Maximum Write: 58MB/Sec

[Benchmark screenshot: April 7, 2015 run (slow writes)]

September 5, 2015

Maximum Read: 143MB/Sec
Maximum Write: 70MB/Sec

[Benchmark screenshot: September 5, 2015 run]

Why so much variance in the results? Really strange…

I suspect the results from April 7, 2015 are probably 'off', as the write speed flat-lining around 55 MB/sec seems too consistent.

If you just average the maximums from the other two runs you get:

Maximum Read: 152MB/Sec
Maximum Write: 102MB/Sec

Windows fails to see Autounattend.xml from ISO

I'm working on setting up a bunch of servers, so I used Windows System Image Manager to create an answer file with answers to all the questions of a basic install.

I used VMware to boot the ISO image I was creating and test it.

Placing the Autounattend.xml at the root of the DVD ISO gets the image fully set up in VMware.

Trying it out on the actual server, nothing works. I am prompted just like a normal install; it is not reading the Autounattend.xml file at all.

It turns out the Autounattend.xml file will only be picked up automatically if the BIOS considers the device removable, like a USB thumb drive. The BIOS I am using mounts the ISO as a fixed disk, so the Windows installer will not use the Autounattend.xml that lives in the root of the ISO.

Good news: there is a solution to the problem of the Autounattend.xml not being picked up by the installer. The process integrates the Autounattend.xml into the boot.wim file so it is always picked up.

You need the following items beforehand:

  • The WAIK (Windows Automated Installation Kit)
  • The USB drive or ISO you plan to use as your install disk

Then do these steps:

Mount the boot.wim image located on your USB disk (in this example H: is the USB disk) using ImageX from the WAIK:

imagex /mountrw H:\Sources\boot.wim 2 C:\temp

(assumes you have a folder “C:\Temp”)

(the number 2 stands for Index 2 within the boot.wim image)

Fire up Windows Explorer and navigate to C:\Temp. You will see your boot.wim image mounted. Drop the "Autounattend.xml" file you created directly into this folder (right next to the Setup.exe file).

Close Windows Explorer and unmount the image:

imagex /unmount /commit c:\temp
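If you would rather not use ImageX, newer versions of DISM should be able to do the same mount and commit; a rough equivalent of the two commands above (untested here) would be:

rem mount index 2 of boot.wim, drop Autounattend.xml in, then commit the change
dism /Mount-Wim /WimFile:H:\Sources\boot.wim /Index:2 /MountDir:C:\temp
dism /Unmount-Wim /MountDir:C:\temp /Commit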

Benchmark Amazon

Amazon gave me a $100 gift card for AWS services a couple of months back. As much as I love the concept of the cloud, the numbers have never worked out for me. My gift card is going to expire soon, so let's burn up some credits by benchmarking some of their EC2 instances.

I'm most interested in the CPU and disk performance of the various instances, since servers do not need high-end graphics. All tests were performed on Windows 2012 R2 Standard Edition (64-bit) using PerformanceTest 8.0.

The Amazon pricing per hour is for the N. Virginia region, which I believe is the cheapest region they offer.

Instance Type:

c4.large – 8 ECU – 2 vCPU – 3.75 GB RAM – $0.193 per Hour

CPU Mark: 2,708
Disk Mark: 787 (EBS Storage)

c4.2xlarge – 31 ECU – 8 vCPU – 15 GB RAM – $0.773 per Hour

CPU Mark: 9,485
Disk Mark: 1,017 (EBS Storage)

c4.4xlarge – 62 ECU – 16 vCPU – 30 GB RAM – $1.546 per Hour

CPU Mark: 15,680
Disk Mark: 998 (EBS Storage)

c3.xlarge – 14 ECU – 4 vCPU – 7.5 GB RAM – $0.376 per Hour

CPU Mark: oops… have to redo this one
Disk Mark: 910 (2 x 40 GB SSD)

Non Amazon Systems:

Supermicro – Xeon L5420 x2 – 16 GB RAM

CPU Mark: 4,445
Disk Mark: 1,385 (Samsung 840 EVO 120gb)

Crunching The Numbers

I want to compare the cost of running in the cloud against servers with dual Xeon L5420 processors, which are available very cheap on eBay. Perfectly good used servers: slap some new SSDs into them, stick them in a datacenter and run them until they die.

The closest match offered by Amazon is the c4.2xlarge class machine, which has a CPU mark of 9,485 vs the dual Xeon’s score of 6,734.

Running the c4.2xlarge in the Amazon cloud would cost you $556.56 per month ($0.773 per hour × 720 hours). That is just the machine; it does not include extras such as a load balancer, VPN or bandwidth.

The cost to run the equivalent of a 1/4 rack (10 machines) would be $5,565.60 per month. If you need an entire rack, it would cost you $23,375.50 per month.

You can get your cost down quite a bit if you are willing to commit to a long-term agreement of 1, 2 or 3 years with Amazon. Once you commit to a specific instance type you can't change it, so calculate your usage requirements before committing.

Another cheap rack of compute power

I think it was about a year ago that I set up a rack at a datacenter, filled with servers that had come off lease. You can get them cheap, real cheap vs. a new server.

I read the other day that Intel has not made any 'major' improvements to their processors since 2011. Sure, there have been some improvements to SATA, SSDs, etc., but when you can buy a server for 10% – 20% of the price of buying new, new just does not seem worthwhile.

Last year we used 1U twin Supermicro servers with the X7DWT-INF motherboard. They came equipped with 2x Intel Xeon L5420 quad-core processors and 16 GB of RAM. I looked up the price paid last year: $450.

They work fine, with way more RAM than we need. The only downside is the IPMI management is not always the greatest, but we have managed. We even bought extra servers that just sit in the rack, to be used as parts in the event of a failure of any of the old servers. So far the parts machines are just sitting there; there have been no failures.

Now, in 2015, we want to build another rack at another datacenter (for additional redundancy). We would like to find machines with X8-based motherboards, as the IPMI is supposed to be better.

Unfortunately they are still too costly, so we are looking at the exact same model of server that we bought last year. The good news is the price has dropped from $450 per 1U down to $250. Imagine: two full servers in 1U for $250. That is really $125 per server, since there are two per 1U. It simply blows my mind, since a new machine would cost you $2,000+ and you don't get anywhere near the same price/performance.

Say we put 45 1U units in a rack (that is 90 servers) for a cost, without hard drives, of $11,250. If we could find new twin-model servers for $2,000 each (without hard drives), the cost would be $90,000, and I doubt you could find new servers for $2,000 anyway.

There are no hard drives included with the servers, so SSD will be purchased for most.

A couple of servers will be used as database servers. Last year we used WD Red drives and attempted to implement read caching using Fusion-io cards; the caching concept did not work very well, and the performance improvement was not worth the effort.

So this year, rather than WD Red (currently $69.99), we are going to try the Seagate Laptop SSHD (currently $79.68).

According to benchmarks over at UserBenchmark, these 2.5-inch drives do not perform well vs. 3.5-inch drives or SSDs (of course). However, comparing the WD Red 2.5-inch against the Seagate Laptop SSHD, the SSHD actually performs 58% better overall and 161% better on 4K random writes.

Since the workload on the database servers is 90% write, we are going to give these laptop drives a chance.

We still have a Fusion-io card sitting here unused from last year, so we can stick that in one of the DB servers to boost the read side of things. I would not go out and buy one just for this purpose, but since it is sitting on the shelf, we might as well put it in.

LDAP / memcache frustration on DirectAdmin

I have about 5 servers that I maintain to run websites. In addition to the base system that DirectAdmin installs I have a requirement for a few additional modules.

These are memcache and LDAP.

I use Debian x64 and DirectAdmin everywhere, to try and keep things the same. So if you test something on one server and it works, it should work on the others, right?

Every time I upgrade PHP I have issues getting these two custom modules compiled back in.

I upgraded one server to PHP 5.6.8 and, using CustomBuild 2.0, enabled LDAP. So my custom/ap2/configure.php56 looks like this, with --with-ldap added:

#!/bin/sh
./configure \
        --with-apxs2 \
        --with-config-file-scan-dir=/usr/local/lib/php.conf.d \
        --with-curl=/usr/local/lib \
        --with-gd \
        --enable-gd-native-ttf \
        --with-gettext \
        --with-jpeg-dir=/usr/local/lib \
        --with-freetype-dir=/usr/local/lib \
        --with-libxml-dir=/usr/local/lib \
        --with-kerberos \
        --with-openssl \
        --with-mcrypt \
        --with-mhash \
        --with-ldap \
        --with-mysql=mysqlnd \
        --with-mysql-sock=/tmp/mysql.sock \
        --with-mysqli=mysqlnd \
        --with-pcre-regex=/usr/local \
        --with-pdo-mysql=mysqlnd \
        --with-pear \
        --with-png-dir=/usr/local/lib \
        --with-xsl \
        --with-zlib \
        --with-zlib-dir=/usr/local/lib \
        --enable-zip \
        --with-iconv=/usr/local \
        --enable-bcmath \
        --enable-calendar \
        --enable-ftp \
        --enable-sockets \
        --enable-soap \
        --enable-mbstring \
        --with-icu-dir=/usr/local/icu \
        --enable-intl

Great, it works. LDAP is compiled in and everyone is happy.

I go and do the upgrade on the next server: no luck. The PHP build says it can't find the LDAP files. I check which LDAP packages are installed on both machines with this:

dpkg-query -l '*ldap*'

The two machines show different results; the packages are mostly the same, but not identical. In fact, the output of dpkg-query is not even formatted the same: one shows an Architecture column and one does not. Hmmm… is one 64-bit and the other 32-bit? Checking on that… nope, both are 64-bit.

In the end, on the machine where the build says it can't find the LDAP files, I created a symlink to the versioned LDAP library I found:

ln -s /usr/lib/x86_64-linux-gnu/libldap-2.4.so.2 /usr/lib/libldap.so

That was enough for it to compile, and things seem to be working, but it is super frustrating that there are so many differences between machines that I try to keep identical.
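A likely cleaner fix, assuming your Debian release ships it (untested here), is to install the LDAP development package, which provides the headers and the unversioned libldap.so that the configure script looks for, instead of hand-rolling the symlink:

# pulls in the OpenLDAP headers plus the .so symlink PHP's configure expects
apt-get install libldap2-dev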

To have DirectAdmin monitor the status of the memcached instance, you can add it to /usr/local/directadmin/data/admin/services.status and DirectAdmin will start/restart it if necessary.

The DA guys document it here: http://www.directadmin.com/features.php?id=487
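For reference, entries in services.status are simple name=ON/OFF lines; the service name below is an assumption and should match whatever your memcached init script is called:

memcached=ON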

PHP Memcache Sessions & Redundancy

I started using memcache to store sessions, rather than having PHP store them on disk. The server hard drives are SSDs so I never noticed any performance issue storing them on disk, but I did not like all those files filling up my /tmp space.

Once you move to memcache, you then have the issue of redundancy. If you have more than one server handling your traffic load, you either need something to maintain sticky sessions or the user will be logged out of your site (or lose session information) when they move between servers.

Doing some reading, there seems to be a lot of bad information out there about exactly how to set up session redundancy across multiple memcache servers.

On a lot of sites I found this syntax recommended:

tcp://127.0.0.1:11211?persistent=1&amp;weight=1&amp;timeout=1&amp;retry_interval=15

I do not think that syntax is correct, but it appears that way on many sites. According to the documentation you would not encode the & as &amp;. In addition, all the values being listed are the defaults, so odds are good those parameters are not valid written like that, and it still works only because the values happen to be the defaults anyway.

One of the better articles I found online was this one at DigitalOcean.

How To Share PHP Sessions on Multiple Memcached Servers on Ubuntu 14.04

His configuration is a bit different from mine, since the OS is different. A couple of things to emphasize if you are trying to set this up:

In his example using three servers he says to place the following on each server:

session.save_path = 'tcp://10.1.1.1:11211,tcp://10.2.2.2:11211,tcp://10.3.3.3:11211'

The order here is important. I think a lot of people will want to change the order of the servers on each box, placing the local server first. Don't do that! For the redundancy to work correctly, the session.save_path must be identical on all servers. Do not worry about the order for performance reasons, as PHP has to contact each server to write the session data anyway.
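For completeness, here is roughly what the relevant php.ini section looks like when the pecl memcache extension handles sessions. The redundancy value is my assumption, based on the commonly repeated advice of setting it to the number of servers plus one; verify it against your own setup:

session.save_handler = memcache
session.save_path = 'tcp://10.1.1.1:11211,tcp://10.2.2.2:11211,tcp://10.3.3.3:11211'
; allow reads to fail over to another server if one is down
memcache.allow_failover = 1
; write each session to multiple servers (often suggested as server count + 1)
memcache.session_redundancy = 4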