Modernizing the datacenter

The grand old vision of a datacenter full of raised floors, huge chiller units and racks of servers next to big RAID arrays is fast going away. Gear is shrinking physically, and the effect is compounded by what hybrid clouds do to server utilization. Add to that the reduction in server count driven by SSDs and the arrival of multi-terabyte drives, and the cluster is shrinking fast.

To figure out a plan for the future, we need to look at each of these areas of change in detail and also consider near and long-term technology changes that will alter servers and storage even more. First, let’s look at storage trends that impact what a datacenter is and does.

Storage

Drives are shrinking in size even as capacity soars. The standard bulk storage drive today is a 3.5” HDD with 4TB capacity. That’s already a big step up from the 1TB we had just 3 years ago, but life promises to get into the fast lane in this area next year, when we’ll have 100TB 2.5” SSDs in the market. That’s a game changer, since over a petabyte of storage will fit into a standard 2U server, making the hyper-converged 24-drive 2U box suddenly the right size for a combined server/storage farm and effectively removing dedicated storage boxes from the datacenter altogether.
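To see why a 2U box suddenly holds “over a petabyte”, here is a quick back-of-envelope sketch. It assumes the 24-drive chassis and 100TB SSDs mentioned above, and counts raw capacity only, before any RAID or erasure-coding overhead:

```python
# Back-of-envelope capacity check (assumed figures: 24 x 100TB 2.5" SSDs per 2U
# chassis; raw capacity, before RAID/erasure-coding overhead).
DRIVES_PER_2U = 24
SSD_CAPACITY_TB = 100
RACK_UNITS = 42                      # standard rack, for comparison

per_chassis_pb = DRIVES_PER_2U * SSD_CAPACITY_TB / 1000
per_rack_pb = per_chassis_pb * (RACK_UNITS // 2)

print(f"Raw capacity per 2U chassis: {per_chassis_pb:.1f} PB")   # 2.4 PB
print(f"Raw capacity per 42U rack:   {per_rack_pb:.1f} PB")      # 50.4 PB
```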

Even those purists resisting HCI will follow the same path. Storage appliances will be small boxes, likely 2U 12/24 drive boxes with a COTS server. In other words, they’ll be physically identical!

Servers

There’s quite a bit more on storage, but it’s necessary to look at servers first. The typical server today is a cumbersome beast. It has its own peculiar configuration and usually proprietary locks to prevent commodity drives from being used. Most products from large OEMs have proprietary management too.

Because of the cloud, the value sweet spot in the market has shifted to ODM-sourced COTS configurations. These have few frills and are manufactured in huge quantities for Google, AWS etc. They are very inexpensive, but are state-of-the-art machines. More and more, large enterprises are cutting out the middleman and buying direct.

The trickle-down effect is that inexpensive, fast servers are now available through distribution. Use a cluster manager such as Stratacloud or Mirantis and you can automate adding and removing these boxes from the cluster, taking away much of the risk of “going whitebox”.
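As a rough illustration of what that automation looks like, here is a minimal sketch. The ClusterManager class and its methods are purely hypothetical, not the actual Stratacloud or Mirantis interface; real tools expose their own CLIs and REST APIs:

```python
# Hypothetical sketch of automated node add/remove for a whitebox cluster.
# ClusterManager is illustrative only -- it is NOT the Stratacloud or
# Mirantis API.

class ClusterManager:
    def __init__(self):
        self.nodes = {}

    def add_node(self, name, ip, role="hyperconverged"):
        # A real manager would PXE-boot the box, push an image,
        # run health checks, then join it to the storage/compute pools.
        self.nodes[name] = {"ip": ip, "role": role, "state": "active"}
        print(f"{name} ({ip}) joined cluster as {role}")

    def drain_and_remove(self, name):
        # Evacuate VMs/data first, then retire the node.
        if self.nodes.pop(name, None):
            print(f"{name} drained and removed from cluster")

mgr = ClusterManager()
mgr.add_node("whitebox-01", "10.0.0.11")
mgr.add_node("whitebox-02", "10.0.0.12")
mgr.drain_and_remove("whitebox-01")
```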

Now, these servers are changing. Memory sizes allow in-memory databases and the like to run as much as 100x faster, which translates into, guess what, less gear to run a given job. The next evolution in the in-memory space is NVDIMM, which will initially double, or better, the speed of persisting database output and feeding data back in.

2017 and beyond

One step beyond, towards the end of 2017, we will see byte-level persistent memory access for NVDIMMs, followed by memory on the CPU module and serially connected memory buses with flash or PCM memory, which together will take “drive” bandwidth into the 100 GBps range. Again, this means fewer servers for the same work.

Impact of IoT

At this point, you might be wondering if the datacenter will fit into a smartwatch! We can expect major growth in IT gear to happen in parallel. IoT, perhaps the most hyped concept in recent IT history, will create a lot of data. Professor Parkinson’s law will also apply: “IT spending will always expand to fill the budget available”.

However, IoT creates “big data”, and unstructured data is very amenable to processing on blindingly fast GPUs. Such GPU servers will absorb a good bit of the projected server growth, so don’t count on IoT filling a datacenter.

Other changes

There are some collateral effects of the changing technologies. Hard drives have always hated being run too warm. It used to be that 50C or 55C was the top operating temperature. This meant that servers had to operate around 40C and, with a 12-year drive life expectancy, 30C was even better, lowering failure rates.

Well, with HDDs now specified to 65C and SSDs at 70C or even higher, we can easily meet the drives’ needs without any chilling of the inbound air. That opens up a huge savings opportunity, with PUE typically dropping from 1.15 down to 1.03. As for the shorter life of the drives … SSDs are semiconductors, and heat-driven acceleration of failures doesn’t become pronounced until much higher temperatures than with mechanical drives … and the new view of server life is 3 to 4 years, not 12. In other words, chillers aren’t essential (except in Nevada and Death Valley).
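To get a rough feel for that savings, here is a small sketch using the PUE figures above. The 1MW IT load and the electricity rate are assumptions for illustration only:

```python
# Rough facility-power savings from running warmer (PUE 1.15 -> 1.03).
# IT load and electricity price are assumed values, for illustration only.
IT_LOAD_KW = 1000            # assumed 1 MW of IT load
PRICE_PER_KWH = 0.10         # assumed $/kWh
HOURS_PER_YEAR = 8760

def facility_kw(pue, it_kw=IT_LOAD_KW):
    # PUE = total facility power / IT power
    return pue * it_kw

saved_kw = facility_kw(1.15) - facility_kw(1.03)
saved_dollars = saved_kw * HOURS_PER_YEAR * PRICE_PER_KWH
print(f"Power saved: {saved_kw:.0f} kW  (~${saved_dollars:,.0f}/year)")
```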

There are some other factors to worry about. Software-defined infrastructure changes who the vendors are for all the gear in the datacenter. Most of the switches are dumbed-down, as is much of storage. Software for both will run in containers in the server farm, allowing a lot of operational agility. Administration of most tasks will be automated, using policies that allow tenants in your cloud to configure their working space.

Power distribution

Power strategies in the rack need to be consolidated. With HCI, at least, the commonality of gear lends itself to 12V or 48V DC power distribution within the rack, making overall power operation more efficient, since large multi-phase central power supplies are much more efficient than the many small per-server supplies they replace. We may also see a move to all-solid-state power systems, which are much more compact, very reliable and have efficiencies better than 99%.
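To illustrate why centralizing conversion matters, here is a small sketch comparing a conventional per-server AC chain with a shared rack-level DC bus. The per-stage efficiencies are assumed values, except the roughly 99% solid-state figure cited above:

```python
# Illustrative end-to-end power-chain efficiency comparison.
# Stage efficiencies are assumptions, except the ~99% solid-state figure above.
from math import prod

# Conventional: UPS -> rack PDU -> per-server AC/DC supply -> board VRMs
conventional = [0.95, 0.99, 0.90, 0.92]

# Rack-level DC bus: centralized rectifier (~99%) -> 12/48V bus -> board VRMs
dc_bus = [0.99, 0.98, 0.92]

print(f"Conventional chain: {prod(conventional):.1%} end-to-end")
print(f"Rack DC bus chain:  {prod(dc_bus):.1%} end-to-end")
```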

I’ve been involved in most of the power experiments around feeds to the rack, and I’m still convinced that standard 3-phase AC works best. DC and AC high-voltage feeds aren’t going to improve efficiency enough to overcome the extra cost.

 

Datacenter in a box

One last question. Do we need datacenters at all? Shipping containers make great computer rooms. I’ve built quite a few of them and the lower cooling needs of today’s equipment make the question important. Wheeling up a ready-to-go container of gear is a good feeling, since it takes less than half the time of conventional bring-ups (and that doesn’t include the savings from avoiding building the datacenter).

About the Author

Jim O'Reilly


Now a consultant and writer, Jim has managed a number of companies and corporate divisions. Among his notable accomplishments, he led the creation of SCSI with the first SCSI ASIC now in the Smithsonian, built multi-billion dollar server and PC businesses and created containerized storage for the cloud. Jim can be reached at jim.oreilly@volanto.com or 408.230.7338.