For some years, there’s been a school of thought that colocation is out of date, and will eventually wither away in favor of the cloud. But that idea runs counter to the facts.
The colo market is stubbornly growing. But it’s not the same market. Early cloud adopters are partially returning to colocation – and these returning colo users are very different from the old school.
It’s been fashionable to see the cloud as an all-consuming future. The cloud can handle massive workloads, its services are easy to buy, and they scale on demand. So why would anyone go to the trouble of buying racks and servers and installing them in retail colocation space? Surely you should let the cloud handle the grunt work, and get on with your real job!
Market figures tell a different story. Averaging out forecasts from several analysts, the colocation market as a whole is growing strongly, at around 16 percent per year. Over the next ten years, that adds up to a market that will more than quadruple in size, going from roughly $46 billion in 2020 to $200 billion in 2030.
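As a rough sanity check – assuming growth compounds at a constant 16 percent a year, which no real market does exactly – the arithmetic holds together:

$\$46\text{bn} \times 1.16^{10} \approx \$46\text{bn} \times 4.4 \approx \$200\text{bn}$

a little over fourfold, consistent with the forecasts.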
Market researchers say the retail colocation sector is bigger than wholesale colocation, where whole data centers are rented by large operators – and retail colo will keep its lead at least till 2030. What’s going on?
Cloud is massive – and efficient
First off, it’s more complicated than that. Cloud data centers really are massive: alongside the ones leased in wholesale colo deals, hyperscalers own a large number of sites which they’ve built themselves. These are huge beasts, with power demands of up to 1,000MW.
“They’re dominating the market today,” says Yuval Bachar, a hyperscale veteran with stints at Microsoft Azure, Facebook, Cisco, and LinkedIn. “These mega data centers actually account for about 70 percent of the data center business in the world – from the power consumption as well as from a floor space perspective.”
But hyperscale includes some behemoths which are actually giant in-house IT services, like Facebook, Bachar points out: “Facebook is probably one of the biggest data center operators in the world nowadays. But they’re serving their own enterprise needs. They’re not a public cloud service – they’re running their own internal cloud.”
Bachar says hyperscale cloud data centers do indeed have a big advantage over other sectors, in their ability to deliver cheap IT power: “These sites are usually located in remote areas where the land is inexpensive, and power is available from multiple green sources.”
If those sites don’t have connectivity, the hyperscalers have the muscle to provide it: “The large companies who are building those mega data centers need to bring connectivity into those sites and be creative to create the network backbone. And each and every one of them is creating their own backbone.”
On these sites, hyperscalers “start with one or two buildings, and then expand in a replication mode, on the same site,” Bachar says. “They create a very high level of efficiency operating the data center with a PUE of 1.06 to 1.1.”
In his view, the hyperscalers are “creating a very, very significant level of green data centers.”
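Those PUE figures are worth unpacking. PUE (power usage effectiveness) is the ratio of total facility power to the power that actually reaches the IT equipment:

$\text{PUE} = \dfrac{\text{total facility power}}{\text{IT equipment power}}$

A PUE of 1.06 means cooling and other overheads add only six percent on top of the IT load; industry surveys have typically put the average data center at around 1.5 or higher, which is what makes the hyperscale figure notable.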
Colocation has challenges
Smaller colocation sites are very different, he says. They were set up to host physical servers owned by enterprises which “decided not to actually build their own data center but actually to put part of their IT load into a colocation site.
“These are small sites between 50 and 75MW, and in some cases can be even smaller than 15MW. They are closer to urban areas – because historically those sites actually have been put closer to the headquarters of their customers.”
These colo providers have big challenges, says Bachar: “These buildings are not scalable. Because they’re sitting in urban areas, the size they have been built to is the size they’re actually going to operate at for the remainder of their life. They don’t have expansion space.”
A second challenge is, “they are heavily regulated – because the closer you get to the middle of the city, the heavier you are regulated for emissions, power availability and every aspect that impacts the environment around you.”
So the odds are stacked against smaller colocation companies. But their market share resolutely refuses to decrease – and there’s a surprising reason for this. According to Greg Moss, a partner at cloud advisory firm Upstack, large numbers of early cloud adopters are moving capacity out of the cloud.
Cloud defectors come back to colo
“The public cloud as we know it has been around for 12 years, right? I mean, the big three – GCP, Azure, and AWS. Everyone sees the growth, everybody sees people going pure cloud, and just running to the cloud kind of drinking the Kool-Aid. What they don’t realize is there’s two sides to that coin.”
According to Moss, the early adopters, the “sexy, innovative” companies who went all-in on the cloud twelve years ago, “are now at a point where they’re pulling out at least a portion of their environment – it could be 20 percent, it could be 80 percent – and hybridizing, because what they’ve realized over the last 12 years is that cloud isn’t perfect. To really get the efficiencies from an economic and technical perspective, you really need to be in some sort of hybrid environment.”
Companies started with a “knee jerk reaction” to put everything in AWS, he says: “Why? Because some board member mandated it, or because our competitors are doing it, or because it’s the rage right now.”
Later on it goes sour, because in a lot of cases, renting capacity on demand costs a lot more than owning the hardware: “Someone’s losing their job, because they realize they’re spending 30 percent more than they were – and the whole exercise was around cost reduction and innovation!”
The trouble with cloud
It turns out that going to the cloud isn’t a simple answer to all questions: “It doesn’t solve anything. It just hands your data center environment to a different company. If the data center just went away, and is miraculously living in the ozone, then fine. But it’s not. You’re just shifting infrastructure around in a different billable model. It makes sense: some people want to consume hardware in a day to day or hour by hour function.”
The hyperscale cloud operators can afford to lose some custom, says Moss, because they still have massive growth due to the late adopters: “AWS, GCP, and Azure are still seeing so much growth right now, because of healthcare, because of not-for-profit, because of legal, because of all the non-sexy companies that are just now getting comfortable enough to move to the cloud.”
But the early adopters really aren’t happy – and they have problems: “They’re stuck for five to 10 years, because no one’s going to pull out of a massive migration or massive decision after just doing it – regardless of the outcome. So that’s why the early adopters are now exiting. Finally! After 10 or 12 years.”
But it’s still not easy: “They probably would have liked to pull out a portion of their environment six years ago, but they can’t because they have no headcount. There’s a big deficit in the industry for talent.”
And there’s company politics: “There’s a person who’s been there 15 years, who just doesn’t want to do more than what he’s doing. He picks up his kid every day at school at three, and he knows that if the IT sits in AWS, he can continue to do his job and leave at three and pick up his kid. He could be the gatekeeper.
“I’ve seen large companies dismiss $50 million a year savings because the gatekeeper, a $150,000 employee, just doesn’t let the management know that there’s an opportunity.”
Sooner or later, those early adopters can get past the gatekeepers, and start shifting the balance of their IT provision towards a hybrid model, with some loads returning to colocation. But these customers are a new generation, and they will want more than just the resilient racks with power and networking that were good enough in days gone by.
Return to Colo: bare metal and cloud onramp
“You can’t just have great resiliency, you have to have a total solution,” says Moss. “That means big buckets – a data center that’s resilient. And some sort of bare metal or custom managed component, like Equinix Metal for instance. And then there’s the connectivity to the large public clouds – through a partner like Megaport or a direct onramp. Those are the three components that make up hybridization.”
The first component speaks for itself, while bare metal is a way to own dedicated capacity in someone else’s infrastructure. Customers may need this to meet privacy rules which require customer data to be held in a specific location, away from shared hardware.
And the need for on-ramps to the public cloud is obvious: if customers are building hybrid clouds that include public cloud services as well as their own colocated servers, they need easy-to-use links between the two.
Unlike the early cloud enthusiasts, the return-to-colocation customers are thinking ahead, says Moss. Privacy rules might force some loads onto bare metal in future. Or they might open up a new commerce branch which would have seasonal peaks – and that could require a quick link to the cloud.
They’re thinking ahead because of the trouble they’re experiencing coming off their cloud addiction, but also because, if they pick the wrong colo, they could have to move all their IT. And, as Moss says, “nobody wants to move a data center. It’s the biggest pain in the ass.”
There are companies that will physically move racks of servers from one facility to another, but Moss says: “They charge $5,000 in insurance for every million dollars in hardware, even if you’re moving three blocks away. If you move $10 million worth of hardware, your insurance cost is going to be upwards of $50,000. And will they even turn back on?”
Power and networking
According to Bachar, the new colo customers have another demand: they are much more power-hungry: “If we look at the technologies in the mega data centers and the colos, 80 percent of the IT load is compute and storage servers now. We’re starting to see the emergence of AI and GPU servers, which are growing at a much faster pace than the compute and storage servers, and specialty storage servers going hand in hand with the GPUs and AI.
“And the reason for that is that we’re starting to deal with very large data sets. And to process those very large data sets, we need a server, which is beyond the standard compute server.”
But GPU servers, and standard compute servers with integrated GPUs, demand more power: “Those high power servers are challenging our infrastructure. If you look at a typical high-end GPU server, like the ones from Nvidia, these servers are running between 6,000W and 8,000W for every six rack units (RU). That is very difficult to fit into a standard colocation where the average power per rack is 6kW to 8kW.”
A standard rack is 42RU, so on those figures a full rack of seven GPU servers could draw 42kW to 56kW – roughly a sevenfold increase on the average rack.
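A minimal sketch of that arithmetic, using only the figures quoted above (the variable names are ours, not Bachar’s):

# Rack power arithmetic, from the figures quoted above.
RACK_UNITS = 42              # a standard full-height rack
SERVER_RU = 6                # one high-end GPU server occupies 6RU
SERVER_WATTS = (6000, 8000)  # per-server draw, low and high estimates

servers_per_rack = RACK_UNITS // SERVER_RU  # 7 servers
low_kw, high_kw = (w * servers_per_rack / 1000 for w in SERVER_WATTS)

print(f"{servers_per_rack} servers per rack: {low_kw:.0f}-{high_kw:.0f}kW")
# -> 7 servers per rack: 42-56kW, against the 6-8kW a typical colo rack is fed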
One thing which would help is more flexibility: “Am I taking a high power rack or a low power rack? Can I actually mix technology within the rack? We need a very flexible capability in the data centers.”
New apps also need more network bandwidth, says Bachar: “Networking today is 100 and 400 Gigabit Ethernet as a baseline. We will continue to grow this to 800G and 1.2Tbits in the future.”
Can small colos cope?
All these changes are placing huge demands on small colocation firms, just as demand for what they provide is surging – a combination that is a big factor driving the current wave of colocation mergers and acquisitions, says Moss.
Smaller colos realize that they can’t actually fund all the changes they need to be truly successful: “So you see a lot of these smaller data centers selling off to the larger guys.”
Meanwhile, he says: “The larger guys are buying them because it speeds their go-to-market – because the infrastructure is already in place. It takes a long time to build a data center. You could probably get away with a brownfield build in the US within 18 months. If it’s greenfield, it’s more likely three years.”
A lot of requests are on a shorter timescale than that: “Imagine you are Equinix, you have three data centers in a market and they’re all bursting at the seams. You have very little inventory left. But one of your largest customers, or an RFP from a new customer, says ‘In 12 months, we’re going to need a megawatt and a half.’ But you can’t build in that time.”
In that situation, the large player can buy a smaller regional player, whose data center is only 30 percent full, and put that customer in there.
“You invest some money in upgrades, you bring it up to standards, and you get certain certificates that aren’t there, and you now have an anchor tenant, and maybe the facility is 60 percent full,” says Moss. “The bank loves it, because the bank takes on the existing customer leases to finance, and they also take the new signature tenant lease, that’s probably 10 years long.”
The other customers are happy too, as the data center gets a perhaps-overdue facelift, along with the addition of those new must-have features, bare metal services and on-ramps.
The odds are on big colo players
Small colo players often rail against giants like Equinix or Digital Realty (DRT), claiming they overcharge for basics like power and cooling, as well as services like cross-connects – direct links between two customers’ equipment within the facility. It’s very cheap for a large colo to activate a network link between two of its customers, who may even be in the same building – and yet customers are charged a high price for those cross-connects.
Multinationals don’t see that as a problem, says Moss: “A company like Equinix or DRT has everything that you would need to be successful. You are going to pay a premium, but that premium, if utilized properly, isn’t really a premium. If I’m using Equinix in three countries, I may be paying 30 percent more in space and power, but I’m saving a hell of a lot of money in my replication costs across those three data centers because I’m riding on their fabric.
“A local 200-person business in Pennsylvania, whose network engineer wants to touch every part of the hardware, is going to TierPoint, because it’s two miles down the road,” he says. “He doesn’t have this three-country deployment; he has just 10 racks in a cage and wants to make sure he’s there if something fails. There’s still plenty of that going on in the country, but most of the money’s being spent with companies like Equinix and DRT.”
Bigger issues on the horizon
But there are more issues to come, which will have even the largest players struggling. Bachar sums these up as Edge and Climate.
Colocation providers are going to have to keep providing their services, offering ever-increasing power capacity, from a grid that is simultaneously shifting to renewable energy to avert climate catastrophe.
“Our power system is in transition,” says Bachar. “We’re trying to move the grids into a green grid. And that transformation is creating instability. Grids are unstable in a lot of places in the world right now, because of that transition into a green environment.”
At the same time, capacity is needed in the urban locations where grids are facing the biggest crisis.
At present, all Internet data has to go through exchange points. “In the United States, there are 28 exchange points covering the whole country. If you’re sending a WhatsApp message from your phone to another phone, and you’re both in Austin, Texas, the traffic has to go through Chicago.”
The next stage of colo will need localized networks, says Bachar: “In the next three to five years, we’re going to have to either find solutions to process at the Edge, or create stronger and better backbone networks. We’re having a problem with Edge cloud. It’s not growing fast enough.”
The colocation data centers of the future will have to be in urban areas: “They will have to complement and live in those areas without conflict,” says Bachar. That means they must be designed with climate change in mind – meeting capacity needs without raising emissions.
“We cannot continue to build data centers like we used to build them 15 years ago, it doesn’t work. It doesn’t help us to move society forward and create an environment for our children or grandchildren.”
Original article can be found at Data Center Dynamics