Temperature Testing Odroid-C2 Aluminum Case https://wagamama.ca/temperature-testing-odroid-c2-aluminum-case/ Fri, 27 Jan 2017 08:44:45 +0000 http://wagamama.ca/?p=486 I really like the ODROID-C2 by HardKernel: they are very reliable, have a quality network card and plenty of RAM, and perform incredibly well when used with eMMC flash storage.

I have three of them and all have performed so well I am going to deploy them to data centers to act as DNS servers.

Need a case for that… hmmm… how about Cogent Design’s all-aluminum case? It keeps the little board safe and cool by replacing the stock heat sink with the entire case, which acts as the cooling system.

I’m going to run a couple ‘very scientific’ tests to measure the performance of the case vs. the stock cooler that comes on the board.

The boards were tested side by side at an ambient room temperature of 21.5°C (70.7°F), with no fan in the room. Both units are powered from the same USB power supply.

The OS is Armbian Debian (jessie), kernel 3.14.79, on both nodes; nothing is running on the machines beyond the base OS.

Stock vs. Aluminum

After about 25 minutes running at idle, the stock cooler comes in at 40°C and the aluminum case comes in at 36°C. Not a massive difference, but the case does run cooler. To the touch, both the cooler and the case are warm but not hot.

Stock cooler at idle
Aluminum case at idle

Heat It Up

I used the Linux stress program to apply a bit of heat to the computers. The following command was run for about 1.5 hours to see how high the temperatures would climb.

stress --cpu 4
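
The post doesn’t show how the temperature readings were taken. On Armbian the SoC temperature is normally exposed in millidegrees under /sys/class/thermal, so a minimal logging loop along these lines (the thermal zone path is an assumption, not something from the original post) can record readings while stress runs:

# Log the SoC temperature once a minute while the stress test runs.
# thermal_zone0 is typical on Armbian/ODROID images but may differ on your build.
stress --cpu 4 &
while sleep 60; do
    temp_milli=$(cat /sys/class/thermal/thermal_zone0/temp)
    echo "$(date +%H:%M:%S) $((temp_milli / 1000))°C"
done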

Amazingly, neither computer heated up very much. Both stayed much cooler than I expected, with the stock cooler climbing to 60°C and the aluminum case only reaching 55°C. Perhaps the stock cooler topped out at 60°C because that board is not enclosed in a case, but I have no enclosure to test that with.

Stock cooler reaches 60°C
Aluminum case reaches 55°C

Testing Against Each Other

Since I have three of these devices, I purchased three cases. While the temperatures observed above are low, does the case really keep the machine cool enough?

Maybe I used too much thermal paste, or maybe not enough?

I performed a few tests across all three cases, using different amounts of thermal compound. The result: they all performed the same.

Suggestions For Possible Improvement

One difference I see between the aluminum case and the stock CPU cooler is surface area. I’m no mathematician, but I suspect the stock cooler may have a similar amount of surface area because of all its fins, despite measuring only 43mm x 33mm.

From looking at the case, I would suggest to Cogent Design that for a v3 of this case they consider keeping more material in the lid. It seems to me that more material is being removed from inside the lid than needs to be.

Then increase the surface area by machining ridges or fins into the outside of the case where possible. Even a ridge only 1mm deep adds a lot more surface area for cooling – just like traditional CPU coolers. No one produces a CPU cooler that is a solid block of aluminum; maximizing the surface area maximizes the cooling performance.
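
As a rough back-of-the-envelope illustration (the lid size and ridge count below are my own assumptions, not measurements of the actual case), even shallow ridges add a meaningful amount of area:

# Assume a lid roughly the size of the board, say 85mm x 56mm, with 20 ridges
# 1mm deep running the 85mm length; each ridge contributes two 1mm-tall walls.
echo "85 * 56" | bc             # 4760 mm^2 -- flat lid area
echo "20 * 2 * 1 * 85" | bc     # 3400 mm^2 -- extra wall area added by the ridges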

As it stands, the CPU heats up the solid aluminum block, but without more surface area the case’s ability to dissipate that heat is limited.

I really noticed the difference between the stock cooler and the aluminum case when the stress test ended. The stock cooler was able to shed its heat much more quickly, which I attribute to its greater surface area.

Would I Purchase Again

Would I purchase this case again if I were looking for another one? Yes – and hopefully a version 3 will be available should I need a few more in the future.

Can Amazon AWS win this time? https://wagamama.ca/can-amazon-aws-win-this-time/ Mon, 19 Dec 2016 06:09:20 +0000 http://wagamama.ca/?p=470 I have previously written two posts about the cost of using Amazon AWS, one way back in May 2010 and one in August 2015. In both cases, the cost of running a rack full of servers 24/7, 365 days a year on AWS was much higher than hosting our own equipment in a data center.

A project I’m working on is growing and we have a couple options that we are considering. Our volume increases a lot during USA working hours, specifically Monday – Friday 8am – 1pm. We are considering the following two options:

1) Provision Amazon EC2 servers during peak times, so 5 hours a day and 5 days a week.
2) Purchase a bunch of Intel NUC DC53427HYE computers. These are not server-grade hardware, but they do have vPro technology, which allows them to be remotely managed.

We purchased one of these little Intel NUC machines for testing, new in the box for $180 – about $200 for the base unit with shipping. We need 4GB of RAM and 120GB of disk space per machine.

Configured, each NUC costs about $290 ($200 NUC + $60 SSD + $30 RAM). At our data center we can get 10U of rack space with 20 amps of power and 30Mbps of bandwidth. We should easily be able to power 30 of these little NUC machines with 20 amps. The space costs $275 per month for 24/7 operation, with the 30Mbps of bandwidth billed at the 95th percentile.
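
As a sanity check on that power budget (the per-NUC wattage here is my own estimate, not a measured figure from the post):

# 20 A at 120 V, derated to 80% continuous load, vs. 30 NUCs at an assumed ~40 W peak each
echo "20 * 120 * 0.8" | bc    # 1920 W usable on the circuit
echo "30 * 40" | bc           # 1200 W worst case for 30 NUCs -- plenty of headroom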

Let’s run some CPU benchmarks on these little guys and see how much Amazon cloud hosting would cost for the same compute power running only 5 hours a day, 5 days a week. It should be a lot cheaper to go the Amazon route… but let’s find out.

The pricing and instance types have not changed much since I ran the benchmarks last year. These benchmarks were run in Amazon US West (Oregon) using Performance Test 9.0, a different version than last year, so the scores are a bit different.

Instance Type:

c4.large – 8 ECU – 2 vCPU – 3.75gb RAM – $0.193 per Hour

Windows Server 2016
CPU Mark: 2,925
Disk Mark: 791 (EBS Storage)

Intel NUC DC53427JUE – Intel i5-3427u – 4gb RAM

Windows 10 Pro
CPU Mark: 3,888
Disk Mark: 3,913 (Samsung 840 EVO 120gb mSATA SSD)

Wow, the benchmark performance for the Amazon instance is quite a bit lower than the basic Intel NUC.

The total CPU score of 30 of these Intel NUC servers would be 116,640. To get the same compute power out of c4.large instances at Amazon we would need to boot up 40 of them.
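
That instance count comes straight from the CPU Mark figures above (a quick check of the arithmetic):

echo "30 * 3888" | bc                 # 116640 -- combined CPU Mark of 30 NUCs
echo "scale=2; 116640 / 2925" | bc    # 39.87  -- c4.large instances needed, rounded up to 40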

Let’s run the costs at Amazon. We would need 40 servers at $0.193 per hour each, for 5 hours a day, 20 days a month. So the math looks like this:

40 servers x 0.193 = $7.72 an hour
$7.72 x 5 hours = $38.60 per day
$38.60 x 20 days = $772 per month

In addition we need to take into consideration the load balancer, bandwidth and storage. We are using an estimate of 3,300 GB per month inbound and outbound to get our pricing; this is only a fraction of what we could theoretically move over our 30Mbps line at the data center.

Load Balancer Pricing
$0.025 per hour = 5 hours x 20 days = 100 hours = $2.50 per month
$0.008 per GB = $0.008 x 3,300 = $26.40 per month

Bandwidth Pricing
3,300 gigs outbound bandwidth = $297 per month

Storage
Sorry Amazon, your pricing is so complex I can’t even figure out how much the disks are going to cost to provision 120GB per machine for the 5 hours x 5 days operation…

So a rough cost is going to be somewhere in the range of $1,100 per month for only 5 hours a day, 5 days a week.
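
For reference, that rough total is just the sum of the pieces above, with storage left out since I couldn’t price it:

echo "772 + 2.50 + 26.40 + 297" | bc    # 1097.90 -- call it roughly $1,100 per month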

Let’s look at a 1-year investment.

Amazon = $13,200 per year

Buying 30 Intel NUC servers and hosting them.

Intel NUC x30 = $8,700
Hosting = $275 x 12 months = $3,300

Total to buy and host 1 year = $12,000 for 1st year

Even paying for the hardware up front and running the servers 24/7, owning still comes out cheaper than going to the cloud for only 5 hours a day, 5 days a week.

Let’s lengthen the time frame to 2 years.

Amazon = $26,400
Own & Host = $15,300

And the 3 year view?

Amazon = $39,600
Own & Host = $18,600

Incredible – I thought going to the cloud would be more cost-effective than buying and hosting our own equipment when we only need the extra capacity for a limited amount of time per day.

Come on – where are MySQL’s impressive numbers? https://wagamama.ca/common-where-are-mysql-impressive-numbers/ Thu, 14 Apr 2016 01:30:51 +0000 http://wagamama.ca/?p=432 I always read about the blazing performance of MySQL – how many millions of transactions it can do per second when using a cache, SSD, Fusion-IO or whatever. Check out some of these reports:

RAID vs SSD vs FusionIO

FusionIO – time for benchmarks

Testing Fusion-io ioDrive

Fusion-io Tweaks Flash To Speed Up MySQL Databases

I like big numbers and faster this and faster that – who doesn’t? I’ve tried Fusion-IO cards, I’ve tried SSDs, and I’ve tried normal hard drives with a PCIe SSD as a cache using Intel CAS. Bottom line: nothing has improved performance in any significant way.

If I test with a lab tool or synthetic benchmark, it will report that Fusion-IO, SSD or Intel CAS has the potential to provide speed improvements. What about a real-world test? That is where I want to see a difference. When I am restoring a database onto a MySQL server, I would like to see something actually impact the restore time. In reality, on an 8-core machine only one core is used, since the default restore runs as a single process.

I recently read that disabling hyper-threading on the CPU may actually give MySQL a boost. I also read about disabling innodb_doublewrite in MySQL (dangerous, not recommended for production use).

Let’s run a few tests under different conditions. I am going to restore a 3GB database with about 11.9 million records. This is not benchmark software; I want to see changes in real life.

Each restore is run twice and the average of the two runs is used.
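
Each timed run is just a plain single-threaded restore, roughly like this (database and dump names are placeholders and the commands are shown in shell form for illustration; the post doesn’t show the exact commands or how they were timed):

# Time one restore of the ~3GB dump into a scratch database
mysqladmin create restore_test
time mysql restore_test < dump.sql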

Testing with 4 cores (hyper-threading disabled)

Intel CAS disabled – 27.25 minutes

Intel CAS enabled – write through cache 29.10 minutes.
Intel CAS enabled – write back cache 27.18 minutes

So none of this seems to make any major difference. Perhaps the disk is not the bottleneck. Let’s try another approach to the testing.

Conduct the same tests, but rather than performing a single restore, we will restore four instances of the database simultaneously… Let’s see what happens…
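
(For reference, four simultaneous restores can be launched along these lines; names are placeholders and the post doesn’t show the exact commands.)

for i in 1 2 3 4; do
    mysqladmin create "restore_test$i"
    mysql "restore_test$i" < dump.sql &
done
wait    # block until all four restores finish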

Intel CAS disabled – 1 hour 33 minutes

Intel CAS enabled – write through cache 1 hour 34 minutes
Intel CAS enabled – write back cache 43 minutes

Finally, some improvement. That represents roughly a 100% speed increase when using the Intel CAS software with write-back caching; increasing the stress on the disks does show a benefit as a result of the caching.

So the best performance gain comes from putting real pressure on the disk system; only then does the cache start to show a benefit.

Next, a quick test with MySQL’s skip-innodb_doublewrite option enabled (or rather, with the doublewrite buffer disabled).

So, disable the innodb_doublewrite feature, leave Intel CAS enabled with write-back caching, and restore 4 copies of the same database simultaneously… Let’s see what happens…

Intel CAS enabled – write back cache 26 minutes

Nice! While turning off innodb_doublewrite is not safe (at all!) for production, if you are trying to recover from a disaster situation and need to restore a large database (or several large databases), turning it off can certainly reduce your recovery time.
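
For reference, on a Linux install the doublewrite buffer can be switched off with a my.cnf entry like the one below plus a restart (the config path and service name are assumptions – adjust for your platform – and remember to remove the entry again afterwards):

# Append the option (a second [mysqld] group at the end of the file is fine)
printf '[mysqld]\nskip-innodb_doublewrite\n' | sudo tee -a /etc/mysql/my.cnf
# Service name varies by distro/version (mysql, mysqld, mariadb)
sudo systemctl restart mysql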

I’m curious – let’s see the restore time with innodb_doublewrite turned off, but restoring only a single copy of the database.

Intel CAS enabled – write back cache 19 minutes

Intel Cache Acceleration Software – v3 is out!! https://wagamama.ca/intel-cache-acceleration-software-v3-is-out/ Sat, 20 Feb 2016 04:56:09 +0000 http://wagamama.ca/?p=393 I have been waiting for this because I had previously read that version 3 would introduce the (dangerous) feature of write-back caching.

It is dangerous because if the server crashes or loses power while writes in the cache have not yet been flushed to disk, you will end up with inconsistent data… that’s bad.

So, let’s do a few tests.

The Testing Environment

Here is the hardware setup of the server being used for the benchmark tests.

Motherboard: Supermicro
OS: Windows Server 2008 R2
Processor: Dual Intel Xeon L5420 @ 2.50 GHz
System RAM: 32 GB
Storage Controller: Intel ESB2 SATA RAID controller (driver 10.1.5.1001)
Spinning Disks: Seagate Laptop 1tb (ST1000LM014)
Spinning Disk Setup: RAID 10 (4 disks)
Cache Device: Kingston Digital HyperX Predator 240 GB PCIe Gen2 x4 Solid State Drive

FIRST TEST

Restore a 205GB MySQL database – that is a lot of writes to disk. Let’s see if the write-back cache makes any difference; in theory it should.

#1 – Cache off, source and destination on same disk: 6 hours 22 minutes
#2 – Cache on (write-back enabled), source and destination on same disk: 6 hours 30 minutes
#3 – Cache on (write-back enabled), source SMB network share: 7 hours 7 minutes

The hard drives in RAID 10 write at less than 200 MB/sec and the cache device writes at more than 650 MB/sec – yet with the cache enabled the restore gets slightly slower? We should ideally be seeing a massive increase in performance.

How can the results actually be slower when using the cache?

SECOND TEST

Intel provides a tool called the Intel IO Assessment Tool; you run it on your system and it determines whether your system can benefit from caching and which files you should be caching.

The results say I could benefit, and that the files I should be caching are in the MySQL data folder. No surprise, since the server is strictly a MySQL server.

IOPS

Let’s use Iometer to measure how many IOPS this hardware setup can produce. The tests are conducted using the OpenPerformanceTest16.icf (16GB) config from http://vmktree.org/iometer/. The specific test used was RealLife-60%Rand-65%Read.

Kingston Digital HyperX Predator (systems cache drive)
I/Os per Second: 8,346
MBs per second: 68.38 MBPS
Average I/O Response Time: 5.25 ms

Seagate Laptop (RAID 10) – Intel CAS enabled: NO
I/Os per Second: 150
MBs per second: 1.23 MBPS
Average I/O Response Time: 350.20 ms

Seagate Laptop (RAID 10) – Intel CAS enabled: YES
I/Os per Second: 1,424
MBs per second: 11.67 MBPS
Average I/O Response Time: 34.22 ms

This test shows that Intel CAS is working, with nearly a 9.5x IOPS improvement over going direct to the disks. Yet there is no measurable improvement in MySQL performance?

FINAL RESULTS

The results of all tests I have done with Intel CAS have been disappointing to say the least. The new version 3 has no options to set, so I can’t really be screwing anything up.

I am going to reach out to Intel and see if they can provide any insight as to why I am not seeing any benefit in my real life usage.

Intel ESB2 SATA RAID Controller Drivers https://wagamama.ca/intel-esb2-sata-raid-controller-drivers/ Tue, 16 Feb 2016 06:31:05 +0000 http://wagamama.ca/?p=394 It is February 2016, and I am trying to find Windows drivers for an ‘antique’ Intel RAID controller, the ESB2 SATA RAID card.

The card is configurable through the system BIOS, and Windows Server 2012 R2 installs fine, but I can’t see the status of my RAID drives from within Windows. Is my array OK?

I started searching and tried to install drivers from Intel. Their Intel RST page says it has drivers that support Windows Server 2012 R2, but when I try to install them I am told the platform is not supported.

I came to learn that the last version of the Intel drivers to support the ESB2 SATA RAID card is 10.1.5.1001.

After some Google searching I found an installer for version 10.1.5.1001. Bingo – it installs and it works. If you want to grab the Intel drivers yourself, we are keeping a copy here: Intel_Rapid_Storage_10.1.5.1001

Prior to the install, the driver the card was using was Intel version 8.6.2.1315, dated June 8, 2010. Now Windows reports version 10.1.5.1001, dated February 18, 2011.

Now with the Intel Rapid Storage Technology software available on the OS, I can enable/disable the write-back disk cache on my storage array. Dangerous? Yes. Faster? Yes. Will I do it? Probably.

Intel Cache Acceleration Software – Performance Test https://wagamama.ca/intel-cache-performance/ Sun, 04 Oct 2015 06:19:20 +0000 http://wagamama.ca/?p=379 I am attempting to boost the performance of some spinning disks by using Intel Cache Acceleration Software.

Using a write-through process, the software places the contents of the folders/files you specify into the cache. When a file is next requested it can be pulled quickly from the cache rather than from the slow spinning disk. This can result in major speed improvements for disk reads – great for caching spinning-disk content or network-based storage.

In my specific case I have a fairly small cache, only 124GB of PCIe SSD storage. I am allocating 100% of that space to caching the MySQL data folder, which in theory makes 100% of my database available from the cache, because the data folder holds only 116GB of data.

The software integrates into Windows Performance Monitor so you can easily view how much data is in the cache, the % Cache Read Hits/sec and other good stuff.

While monitoring those stats and running queries against the database, Performance Monitor was not showing hits to the cache… What’s going on? Time to investigate and benchmark a few things.

There are really no configuration options in the caching software. The only setting is something called ‘2 Level Cache’, which you can turn on or off – that’s it. There is not much information about what it is for, and it is not intuitive based on the label.

I am going to run three simple tests, which should reveal how much of a performance difference the cache makes and whether I am better off running with 2 Level Cache on or off.
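
The timed operations behind each test are just a plain restore followed by a plain dump, roughly like this (a sketch in shell form with placeholder names; the post doesn’t show the exact commands or how they were timed):

time mysql testdb < backup_96gb.sql          # restore
time mysqldump testdb > backup_out.sql       # backup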

Test #1
Restore & Backup 96gb Database With No Caching

Restore Time: 5 hours 49 minutes
Backup Time: 1 hour 03 minutes

Test #2
Restore & Backup 96gb Database With Caching – Level 2 Cache Off

Restore Time: 5 hours 56 minutes
Backup Time: 0 hours 53 minutes

Test #3
Restore & Backup 96gb Database With Caching – Level 2 Cache On

Restore Time: 6 hours 07 minutes
Backup Time: 0 hours 53 minutes

The purpose of the restoration before the backup is to give the cache time to populate. In theory all of the database data on disk should be available to the cache; however, I have no way to verify whether it is all in the cache or not.

The results? WOW! Can you say disappointed? What is going on here?

2.5 Inch Drives Are Not Quick https://wagamama.ca/2-5-inch-drives-are-not-quick/ Sat, 05 Sep 2015 03:32:32 +0000 http://wagamama.ca/?p=369 What can you do if you need high performance, only have 2.5 inch drive slots and can’t afford SSD?

Last year I built two servers using WD Red drives running in RAID 10. The performance is not that great and also not very consistent.

This year I built two more of the same servers, but wanted to equip them with Seagate Laptop SSHD disks, which are hybrid drives pairing a spinning disk with 8GB of SSD-type flash. They cost a few dollars more – about $10 more at the time of this posting.

Based on benchmarks I read online before buying them, they should be faster than the WD Red drives… but the downside is they are not specifically designed for RAID-based systems (does that matter?).

First up, WD Red:

Maximum Read: 143MB/sec
Maximum Write: 86MB/sec

[Benchmark screenshot: WD Red RAID 10]

Next, Seagate Laptop SSHD:

Maximum Read: 214MB/sec
Maximum Write: 193MB/sec

[Benchmark screenshot: Seagate Laptop SSHD]

Overall, quite a big performance boost. I am really hoping that it helps with the write speed, since the workload is almost entirely writes.

I found that file copies in Windows from an SSD over to the Seagate hybrid drive were reporting write speeds of 400MB/sec when copying a 90GB file. That was very impressive.

To add some extra boost to these old servers, I also equipped them with Kingston HyperX Predator 240 GB PCIe solid state drives, using 100 GB of each drive as a boot device and the balance as a read cache for the MySQL data, which is stored on the Seagate RAID 10 array.

How does the HyperX Predator perform in a benchmark test? Let’s take a look.

Maximum Read: 1,388MB/sec
Maximum Write: 675MB/sec

[Benchmark screenshot: Kingston HyperX Predator]

Those are certainly some big numbers; I’m looking forward to the price coming down so we can get a few TB at speeds like that.

WD Red Drives – Inconsistent Performer https://wagamama.ca/wd-red-drives-inconsistent-performer/ Sat, 05 Sep 2015 02:51:23 +0000 http://wagamama.ca/?p=360 I have two servers, each with WD Red 2.5-inch 1TB hard drives. Each server runs 4 drives in a RAID 10 configuration.

I had previously benchmarked them with ATTO’s Disk Benchmark software on two different occasions and thought it was strange to get different results. Today I ran another benchmark and got a third, different set of results.

Here are the dates and the results I received.

August 29, 2014

Maximum Read: 162MB/Sec
Maximum Write: 135MB/Sec

[ATTO screenshot: WD Red RAID 10 – August 29, 2014]

April 7, 2015

Maximum Read: 174MB/Sec
Maximum Write: 58MB/Sec

[ATTO screenshot: WD Red RAID 10 – April 7, 2015]

September 5, 2015

Maximum Read: 143MB/Sec
Maximum Write: 70MB/Sec

[ATTO screenshot: WD Red RAID 10 – September 5, 2015]

Why so much variance in the results? Really strange…

I suspect the results from April 7, 2015 are probably ‘off’, as the write speed flat-lining around 55MB/sec seems too consistent to be real.

If you just average the maximums from the other two runs you get:

Maximum Read: 152MB/Sec
Maximum Write: 102MB/Sec
