Basement underlayment

Something a little different from what I normally post on this site. I have been researching options for putting laminate flooring in a basement, and there are a lot of options out there.

The Problem

  • Basements are often wet
  • Concrete is cold
  • Ceiling height may be limited

I was recently looking at a basement that has laminate flooring installed directly over the concrete, which works, I guess, but must be very uncomfortable during the winter months.

Possible Solutions

Here are some possible solutions that I have found for improving the comfort of laminate in a basement. None of these are fixed to the concrete, and all of them can have laminate flooring installed directly on top.

  • Dricore – $190 per 100 sq ft – R-value 1.7 – 22.22 mm – dimples create an air space under the floor, OSB top.
  • DMX 1-Step – $90 per 100 sq ft – R-value 0.91 – 7.94 mm
  • DMX Airflow – $?? per 100 sq ft – R-value 1.2 – 4 mm
  • Delta-FL – $66 per 100 sq ft – R-value ?.?? – 8.00 mm

The insulation value of any of the above products seems very limited; however, getting the flooring off the actual concrete should make the floor feel a lot warmer.
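As a rough dollars-per-R comparison, using only the products above with complete numbers:

Dricore: $190 ÷ 1.7 R = about $112 per point of R-value per 100 sq ft
DMX 1-Step: $90 ÷ 0.91 R = about $99 per point of R-value per 100 sq ft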

DMX Airflow seems like the best choice to me. Availability may be limited in Canada (even though it is made there), but the cost seems to be about the same as DMX 1-Step, although I can find no actual retailer.

Temperature Testing Odroid-C2 Aluminum Case

I really like the ODROID-C2 by HardKernel: they are very reliable, have a quality network card and lots of RAM, and perform incredibly well when used with eMMC flash storage.

I have three of them and all have performed so well I am going to deploy them to data centers to act as DNS servers.

Need a case for that… hmmm… how about Cogent Design’s all-aluminum case? It keeps the little board safe and cool by replacing the stock heat sink with the entire case acting as the cooling system.

I’m going to run a couple ‘very scientific’ tests to measure the performance of the case vs. the stock cooler that comes on the board.

The units were tested side by side at an ambient room temperature of 21.5°C (70.7°F), with no fan in the room. Both units are powered from the same USB power supply.

The OS is Armbian (Debian jessie, kernel 3.14.79) on both nodes; nothing is active on the machines beyond the base OS.
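For anyone repeating this, the SoC temperature on Armbian can usually be read straight from sysfs; a quick check looks like this (the thermal zone path is an assumption on my part and may vary by board or kernel):

cat /sys/class/thermal/thermal_zone0/temp

Depending on the kernel, the value is reported either in degrees or millidegrees Celsius (so 40000 would mean 40°C).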

Stock vs. Aluminum

After about 25 minutes running at idle the stock cooler comes in at 40°C and the aluminum case comes in at 36°C. Not a massive difference, but it is cooler. To the touch both the cooler and the case are warm but not too hot.

Stock cooler at idle
Aluminum case at idle

Heat It Up

I used the Linux stress program to apply a bit of heat to the computers. The following command was run for about 1.5 hours to see how high the temperatures would climb.

stress --cpu 4
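To watch the temperature climb while stress runs, a simple logging loop in a second terminal is enough; this is only a sketch and assumes the same sysfs path mentioned earlier:

# print a timestamped temperature reading once a minute
while true; do
  echo "$(date +%H:%M:%S) $(cat /sys/class/thermal/thermal_zone0/temp)"
  sleep 60
done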

Amazingly, neither computer heated up very much. Both stayed much cooler than I expected, with the stock cooler climbing to 60°C and the aluminum case only reaching 55°C. Perhaps the stock cooler only climbed to 60°C because that board is not inside a case, but I have no enclosure to test it with.

Stock cooler reaches 60°C
Aluminum case reaches 55°C

Testing Against Each Other

Since I have three of these devices, I purchased three cases. While the temperatures observed above are low, is the case really keeping the machine as cool as it could be?

Maybe I used too much thermal paste, or maybe not enough?

I performed a few tests with all cases, using different amounts of thermal compound. The result was they all performed the same.

Suggestions For Possible Improvement

One difference I see between the aluminum case and the stock CPU cooler is surface area. I’m no mathematician, but I suspect the stock cooler may have a similar amount of surface area because of all its fins, despite it only measuring about 43mm x 33mm.

From looking at the case, I would suggest to Cogent Design that, for a v3 of this case, they consider keeping more material in the lid. It seems to me that more material is being removed from the inside of the lid than needs to be.

Then increase the surface area: produce ridges or fins on the outside of the case where possible. Even if a ridge was only 1mm deep, it would add a lot more surface area for cooling, just like a traditional CPU cooler. No one produces a CPU cooler that is a solid block of aluminum; maximizing the surface area maximizes the cooling performance.

As a solid block of aluminum, the case soaks up heat from the CPU, but without more surface area its ability to dissipate that heat is limited.

I really noticed the difference between the stock cooler and the aluminum case when the stress test ended. The stock cooler was able to lower its temperature much more quickly, which I attribute to the surface area of its fins.

Would I Purchase Again

Would I purchase this case again if I were looking for another one? Yes, I would; hopefully a version 3 will be available should I need a few more in the future.

Google Street View – Did It Catch Me?

Today I was walking around in Mong Kok when this car drove past me with some equipment strapped to the top of the car.

Then I noticed some type of Google logo on the side; I think it said Google Earth, but I’m not sure.

The equipment on top was obviously panoramic photo gear, so there is a fairly good chance I will be included in the next Street View update of Mong Kok!

Can Amazon AWS win this time?

I have previously written two posts about the cost of using Amazon AWS, one way back in May 2010 and one in August 2015. In both cases, running a rack full of servers 24/7/365 on AWS was much more expensive than hosting the equipment ourselves in a data center.

A project I’m working on is growing. Our volume increases a lot during USA working hours, specifically Monday – Friday, 8am – 1pm, and we are considering the following two options:

1) Provision Amazon EC2 servers during peak times, so 5 hours a day and 5 days a week (see the scheduling sketch after this list).
2) Purchase a bunch of Intel NUC DC53427HYE computers. This is not server-grade hardware, but they do have vPro technology, which allows them to be remotely managed.
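For option 1, the peak-window scheduling could be as simple as keeping stopped EC2 instances around and starting/stopping them from cron with the AWS CLI. This is only a sketch; the instance IDs are placeholders and the times assume the server’s clock is set to US Eastern:

# start the pool at 8am Eastern and stop it at 1pm, Monday to Friday
0 8 * * 1-5  aws ec2 start-instances --instance-ids i-0123456789abcdef0 i-0123456789abcdef1
0 13 * * 1-5 aws ec2 stop-instances --instance-ids i-0123456789abcdef0 i-0123456789abcdef1

Stopped instances do not bill for compute time, but the EBS volumes behind them keep billing, which ties into the storage question further down.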

We have purchased one of these little Intel NUC machines for testing, new in the box for $180, plus shipping, so about $200 for the base unit. Each machine needs 4 GB of RAM and 120 GB of disk space.

Configured, each NUC costs about $290 ($200 NUC + $60 SSD + $30 RAM). At our data center, 10U of space with 20 amps of power and 30 Mbps of bandwidth (billed at the 95th percentile) costs $275 per month for 24/7 operation. We should easily be able to power 30 of these little NUC machines with 20 amps.
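As a quick sanity check on that power claim (assuming each NUC’s 65 W power adapter as a worst case, which is my assumption, not a measured number):

30 NUCs x 65 W = 1,950 W
20 amps x 120 V = 2,400 W available

Even at the worst case, 30 machines fit comfortably inside the 20 amp circuit.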

Let’s run some CPU benchmarks on these little guys and see how much Amazon cloud hosting would cost us for the same compute power running only 5×5. It should be a lot cheaper to go the Amazon route… but let’s find out.

The pricing and servers have not changed much since I ran the benchmarks last year. These benchmarks were run in Amazon US West (Oregon) using PerformanceTest 9.0, which is a different version than last year, so the scores are a bit different.

Instance Type:

c4.large – 8 ECU – 2 vCPU – 3.75 GB RAM – $0.193 per hour

Windows Server 2016
CPU Mark: 2,925
Disk Mark: 791 (EBS Storage)

Intel NUC DC53427HYE – Intel i5-3427U – 4 GB RAM

Windows 10 Pro
CPU Mark: 3,888
Disk Mark: 3,913 (Samsung 840 EVO 120gb mSATA SSD)

Wow, the benchmark performance for the Amazon instance is quite a bit lower than the basic Intel NUC.

The total CPU score of 30 of these Intel NUC servers would be 116,640 (30 x 3,888). To get the same compute power out of c4.large instances at Amazon, we would need to boot up 40 instances (116,640 ÷ 2,925 ≈ 40).

Let’s run the costs at Amazon. We would need 40 servers at $0.193 per hour, for 5 hours a day, 20 days a month. So the math looks like this.

40 servers x 0.193 = $7.72 an hour
$7.72 x 5 hours = $38.60 per day
$38.60 x 20 days = $772 per month

In addition, we need to take into consideration the load balancer, bandwidth and storage. We are using an estimate of 3,300 GB per month inbound and outbound to get our estimated pricing; this is only a fraction of the total data we could theoretically move over our 30 Mbps line at the data center.

Load Balancer Pricing
$0.025 per hour x 100 hours (5 hours x 20 days) = $2.50 per month
$0.008 per GB x 3,300 GB = $26.40 per month

Bandwidth Pricing
3,300 GB of outbound bandwidth = $297 per month

Storage
Sorry Amazon, your pricing is so complex I can’t even figure out how much the disks are going to cost to provision 120gigs per machine for the 5 hours x 5 days operation…

So a rough cost is going to be somewhere in the range of $1,100 per month for only 5 hours a day, 5 days a week.

Let’s look at a one-year investment.

Amazon = $13,200 per year

Buying 30 Intel NUC servers and hosting them.

Intel NUC x30 = $8,700
Hosting = $275 x 12 months = $3,300

Total to buy and host for the first year = $12,000

Even paying for the hardware up front and running the servers 24/7, it still comes out cheaper than going to the cloud with usage of only 5 hours a day and 5 days a week.

Let’s lengthen the time to 2 years.

Amazon = $26,400
Own & Host = $15,300

And the 3 year view?

Amazon = $39,600
Own & Host = $18,600

Incredible. I thought going to the cloud would be more cost-effective than buying and hosting our own equipment when we only need the extra capacity for a limited amount of time per day.

Message Delivery Testing with Exim

I was recently doing some troubleshooting of email delivery on an Exim server.

I wanted to see what happens when the server attempts delivery. The error in the logs was a timeout, but the log does not tell you where in the process the timeout happens.

Brad the Mad has a nice Exim Cheatsheet online that gives some examples of how to manage an Exim server.

Expanding on some of his examples I used this command:

exim -v -M <message-id>

That command will force Exim to attempt delivery of the message identified by the message-id. The addition of -v shows every step of the process, which is very handy.
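If you do not already know the message-id, Exim’s queue listing will show it along with the age, size and recipients of each queued message:

exim -bp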

In my test using this tool, the delivery failed. In fact, it could not even connect to the remote server at all. Yet, if I did a telnet command like this:

telnet 123.345.679.123 25

I could connect to the remote server. After some digging, I found the exim.conf file was using an option on the transport to bind outbound connections to an IP address on the server other than the default IP.

interface      = <; 192.168.123.123

The network then had special rules on the firewall to direct outbound traffic from that IP address to a specific public IP address.

So my telnet command was giving me a false positive: I could connect to the remote server, but only because my test was not using the same outbound IP address as Exim.

If you need to telnet out of a server and bind to a specific IP address, you can do it like this:

telnet -b <local-ip> <remote-host> <port>
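For example, to repeat my earlier test from the same address the Exim transport binds to (the remote host name here is only a placeholder):

telnet -b 192.168.123.123 mail.example.com 25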

Bonded Internet

A quick post for people who rely on ADSL connections for their Internet.

If you need more power, take a look at iTel. They offer a bonded Internet connection where you can bond up to 6 connections for ‘mega bandwidth’.

300 Fiber

I’m happy to say I currently enjoy a 300 meg fiber connection (yes, I’m bragging), so I don’t have a need for bonded ADSL now, but perhaps in the future.

How does bonded ADSL work?

Bonded ADSL is where multiple ADSL lines are combined into a single aggregated connection to deliver greater download and upload speeds. This is not the same as using a load balancing router with multiple connections.

With bonding, the original datastream is split up into multiple streams, each of which is sent down an individual ADSL connection. The process of splitting datastreams is performed by special hardware at the bonded ADSL provider’s datacentre. At the customer’s premises, the separate datastreams are recombined to form a single data stream.

A super cool way to increase your bandwidth when only ADSL is available.

WTB: Supermicro SuperServer 2015TA-HTRF

It was back in 2011 when Supermicro released the 2015TA-HTRF.

It is a 2U rack-mount fat twin. It contains hot-swap modules, each holding two Atom-based computers. In total, you get 8 Atom computers and 24 2.5-inch hot-swap hard drive bays.

I want to build a Ceph cluster, and of all the hardware I have seen, this looks like it would be the most cost-effective. Since I love buying used hardware, this is what I would like to get.

The problem is, it must not have been a big seller. I have been unable to find even a single used unit for sale… I’ll keep watching eBay but do not have very high expectations of finding one 🙁

My next low-budget rack. SuperServer 2026TT-H6RF

Check out the 2U, 4-node Supermicro SuperServer 2026TT-H6RF. It is a nice-looking upgrade from the servers I have been using, and they are very cheap (around $200) on eBay as of June 2016.

This is mostly a note to self, and anyone else who builds racks based on off-lease servers.

I learned about these from a link I found while reading ServeTheHome.

Come on – where are MySQL’s impressive numbers?

I always read about the blazing performance of MySQL: how many millions of transactions it can do per second when using a cache, SSD, Fusion-io or whatever. Check out some of these reports:

RAID vs SSD vs FusionIO

FusionIO – time for benchmarks

Testing Fusion-io ioDrive

Fusion-io Tweaks Flash To Speed Up MySQL Databases

I like big numbers and faster this and faster that; who doesn’t? I’ve tried Fusion-io cards, I’ve tried SSDs, and I’ve tried normal hard drives with a PCIe SSD as a cache using Intel CAS. Bottom line: nothing has improved the performance in any significant way.

If I test with a lab benchmarking tool, it will report that Fusion-io, an SSD or Intel CAS has the potential to provide speed improvements. What about a real-world test? That is where I want to see a difference. When I am doing a database restore on a MySQL server, I would like to see something actually impact the restore times. In reality, on an 8-core machine only one core is used, since the default restore runs as a single process.

I recently read that disabling hyper-threading on the CPU may actually give MySQL a boost. I also read about disabling innodb_doublewrite in MySQL (dangerous, and not recommended for production use).

Let’s run a few tests under different conditions. I am going to restore a 3 GB database with about 11.9 million records. This is not benchmark software; I want to see changes in real life.

Each restore is run twice and the average of the two runs is used.
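For anyone repeating this, a simple way to get the wall-clock numbers for a run is the shell’s time built-in; the database and dump file names below are just placeholders:

time mysql resttest < dump.sql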

Testing with 4 cores (hyper-threading disabled)

Intel CAS disabled – 27.25 minutes

Intel CAS enabled – write through cache – 29.10 minutes
Intel CAS enabled – write back cache – 27.18 minutes

So none of this seems to make any major difference. Perhaps the disk is not the bottleneck. Let’s try another approach to the testing.

Conduct the same tests, but rather than performing a single restore, we will restore four instances of the database simultaneously… Let’s see what happens…
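One way to kick off the four simultaneous restores from the shell looks like this; it is only a sketch, and the database and dump file names are placeholders:

# restore the same dump into four separate databases at the same time
for i in 1 2 3 4; do
  mysql -e "CREATE DATABASE IF NOT EXISTS resttest$i"
  mysql resttest$i < dump.sql &
done
wait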

Intel CAS disabled – 1 hour 33 minutes

Intel CAS enabled – write through cache 1 hour 34 minutes
Intel CAS enabled – write back cache 43 minutes

Finally, some improvement. That represents better than a 100% speed increase when using the Intel CAS write-back cache (1 hour 33 minutes down to 43 minutes); increasing the stress on the disks does show some improvement as a result of the caching.

So the best performance comes from putting some pressure on the disk system; only then does the cache start to show some benefit.

Next, a quick test with MySQL’s skip-innodb_doublewrite option enabled (or I should say, with doublewrite disabled).

So, disable the innodb_doublewrite feature, leave Intel CAS enabled with write back caching, and restore 4 copies of the same database simultaneously… Let’s see what happens…
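For reference, turning the doublewrite buffer off is a my.cnf change plus a MySQL restart; a minimal sketch (again, not something to leave enabled on a production server):

[mysqld]
# disables the InnoDB doublewrite buffer; faster bulk restores, but unsafe if the server crashes or loses power
skip-innodb_doublewrite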

Intel CAS enabled – write back cache 26 minutes

Nice! While turning off innodb_doublewrite is not safe (at all!) for production, if you are trying to recover from a disaster and need to restore a large database (or several large databases), turning it off can certainly reduce your recovery time.

I’m curious; let’s see our restore time with innodb_doublewrite turned off, but only restoring a single copy of the database.

Intel CAS enabled – write back cache 19 minutes

Intel Cache Acceleration Software – v3 is out!!

I have been waiting for this, because I had previously read that version 3 would introduce the (dangerous) feature of write-back caching.

It is dangerous because if the server crashes or loses power before the writes in the cache have been written to the disk, you will have inconsistent data… and that’s bad.

So, let’s do a few tests.

The Testing Environment

Here is the hardware setup of the server being used for the benchmark tests.

Motherboard: Supermicro
OS: Windows Server 2008 R2
Processor: Dual Intel Xeon L5420 @ 2.50 GHz
System RAM: 32 GB
Storage Controller: Intel ESB2 SATA RAID controller (driver 10.1.5.1001)
Spinning Disks: Seagate Laptop 1 TB (ST1000LM014)
Spinning Disk Setup: RAID 10 (4 disks)
Cache Device: Kingston Digital HyperX Predator 240 GB PCIe Gen2 x4 Solid State Drive

FIRST TEST

Restore a 205 GB MySQL database; that is a lot of writes to the disk. Let’s see if the write cache makes any difference. In theory, it should.

#1 – Cache off, source and destination on same disk: 6 hours 22 minutes
#2 – Cache on (write-back enabled), source and destination on same disk: 6 hours 30 minutes
#3 – Cache on (write-back enabled), source SMB network share: 7 hours 7 minutes

The hard drives in RAID 10 can write at less than 200 MB/sec and the cache device can write at more than 650 MB/sec, yet the performance drops slightly? We should ideally be seeing a massive increase in performance.

How can the results actually be slower when using the cache?

SECOND TEST

Intel provides a tool called the Intel IO Assessment Tool; you run it on your system and it will determine whether your system can benefit from the cache and which files you should be caching.

The results say I could benefit, and the files I should be caching are in the MySQL data folder. No surprise, since the server is strictly a MySQL server.

IOPS

Let’s use Iometer to calculate how many IOPS this hardware setup can produce. The tests are conducted using the OpenPerformanceTest16.icf (16GB) config from http://vmktree.org/iometer/. The specific test used was RealLife-60%Rand-65%Read.

Kingston Digital HyperX Predator (systems cache drive)
I/Os per Second: 8,346
MBs per second: 68.38 MBPS
Average I/O Response Time: 5.25 ms

Seagate Laptop (RAID 10) – Intel CAS enabled: NO
I/Os per Second: 150
MBs per second: 1.23 MBPS
Average I/O Response Time: 350.20 ms

Seagate Laptop (RAID 10) – Intel CAS enabled: YES
I/Os per Second: 1,424
MBs per second: 11.67 MBPS
Average I/O Response Time: 34.22 ms

This test shows that Intel CAS is working, with nearly a 9.5x improvement over going direct to the disk. Yet no measurable improvement in the performance of MySQL?

FINAL RESULTS

The results of all tests I have done with Intel CAS have been disappointing to say the least. The new version 3 has no options to set, so I can’t really be screwing anything up.

I am going to reach out to Intel and see if they can provide any insight as to why I am not seeing any benefit in my real life usage.