Jim O’Reilly, a storage and cloud computing consultant at Volanto, weighs in
Jim O’Reilly is a consultant at Volanto, where he helps companies modernize their data centers and build cloud strategies. In this Q&A, he shares his perspective on how low-cost, white-box infrastructure could reshape the dynamics of enterprise data centers – and the IT teams that run them.
You recently wrote about the rise of white-box infrastructure. In your experience, what makes the technology so appealing?
I’ve built data centers on white boxes. I did the first storage installation for Microsoft Azure, and believe me, it was nothing like standard servers and nowhere near hyperconverged. It was very purpose-built, but also very bare-bones. I’ve been involved with Amazon and Google a few times since then, and they don’t look for any frills and furbelows. They get right down to it. They want cheap, functional, and relatively limited – and in very high volume.
Historically I’ve worked with a wide variety of servers and appliances, and to me they’re all very similar; there’s no magic about the box they’re put in. They’re basically all Intel reference designs – once you get below the box layer and really look, it’s hard to tell the difference between one manufacturer and another. And the same is true of the interface between the software and the hardware.
The way the cloud operators buy their gear is percolating down, and it may filter into smaller companies very soon. There’s going to be a trend of people buying Open Compute designs, and those are coming from the smaller vendors.
There’s been significant growth in the integrated systems market. What do you make of that?
I think it’s a fear, uncertainty and doubt issue put on by the big manufacturers. On the other hand, it’s the fear, uncertainty and doubt arising from the lack of skill-sets in IT shops that causes them to be a little timid about doing it themselves. But out-of-the-box solutions can actually be counterproductive because they’re expensive. It’s a way of keeping the price up, in my opinion.
I see the game playing out a little differently over the long haul. It will depend on what the likes of Quanta and other manufacturers in China do with their overall pricing. My sense is that they will go to market with really low-cost, really simple, standard server boxes. And the whole hyperconvergence fad might get bypassed by the fact that it’s cheaper to go buy from Quanta by a factor of two or three.
Is there a real lack of skills among IT shops today, and if so, how can that be addressed?
IT shops are getting hit with a lot of different things at the moment, and very few of them still have serious hardware people. They’re used to big manufacturers handing them solutions on a plate, and frankly, they’re lazy. Let’s remember where they’re coming from – they bought mainframes. They bought proprietary UNIX systems. Then they bought proprietary Linux systems. COTS (commercial off-the-shelf) hardware is relatively new to a lot of them. And they still have the mindset, especially at senior levels of IT, that you get other people to do that sort of work to make sure everything plays together.
They don’t realize that standardization is so strong that it’s literally like Legos out there. You can take a box from this company and a switch from that company and plug everything together and 99 percent of the time it’s going to run very well. But the senior IT guys are still living in the past; they don’t want to take the risk and they don’t see that they have the skills to handle it. And we don’t yet have a consultancy infrastructure built around that. So things are a bit confusing and transitional right now.
The way I think it’ll play out is we’ll see master integrators putting together the product. I think we’ll see some new VARs springing up doing some level of integration and doing it at a low price point. They’ll be more flexible in the way they put things together. I think in the next five years, these VARs might be putting together Supermicro or Quanta appliance boxes. And I see larger companies defining their own hardware.
So it’s not as hard as people think to build a DIY private cloud?
The Googles and Facebooks of the world are trying to proselytize the idea of doing it yourself. The Open Compute Project is basically aimed at getting people to buy the Legos of the data center and put them together. They’re trying to make it easier by standardizing the Lego pieces a bit more. But it’s really unnecessary. The interfaces are already so standardized that unless you get really esoteric in your architecture, you can’t lose. The bottom line is you don’t have a very complicated integration path to go down. People just need a little push to start trying it themselves.
When you’re starting to build big clouds – 2,000 machines or more – the game really does change. There’s a lot of learning involved, and the savings from doing it yourself and buying from white-box vendors are roughly two to one.
How long will it take for this shift to occur?
We’re heading toward a crossroads, but the industry isn’t quite there yet. The industry is absolutely conservative. People have run into all sorts of integration problems with their clouds in the past, mainly in software. My sense is that it’s not really a hardware issue anyway. The game is going to be different in hardware, and I think it’s going to have a major impact on the major hardware manufacturers of the world – and I grieve for them, but only briefly.