While it may not seem obvious, data centers are the foundation the internet is built on. That won’t change anytime soon, but the technology that enables data centers is changing and will continue to do so. For years, data center storage, security, power sourcing and site selection were all handled as cheaply as possible, and while that made financial sense, it carried other costs, such as the impact on the environment. As technology improves and becomes more energy efficient, tomorrow’s data centers are poised to be more efficient, secure and effective than ever before. Let’s take a look at how.
Harnessing the environment
Location, location, location: that’s the familiar mantra in real estate, and the same is true for data centers, which have traditionally been built near existing dense fiber networks to save on cost. However, companies are now exploring ways to use the environment to help with one of the major challenges facing all data centers: cooling. A recent trend is to leverage colder climates and let the natural environment cool servers instead of relying on traditional means. Companies are also harnessing renewable power, such as hydroelectric plants, to boost reliability, minimize their environmental impact and save money. The data centers of the future will take this a step further, using submarine technology to cool facilities with the ocean itself.
Size and structure
A data center’s structure is just as important as its location when it comes to ensuring energy efficiency and reliability. Data centers come in all shapes and sizes, from smaller, more distributed models (edge or cloud data centers) to on-premise models. The right choice depends on whom the data center will serve and the most efficient way to deliver that service. Edge technology places data centers closer to the source of data than cloud servers, which can improve response time and even security while freeing up the physical space that on-premise models require. At the other end of the spectrum, mega data center technology allows giant facilities occupying more than a million square feet to be built. These huge facilities can serve thousands of clients and are cheaper per square foot to build and operate than smaller traditional data centers.
Storage technology
The current standard for data storage is the hard disk drive (HDD), which reads and writes data on a spinning platter. While HDDs are cheap to produce, they rely on moving parts and are susceptible to hardware failure. Enter solid state drives (SSDs), which have no moving parts and can read and write data much faster than HDDs. SSDs are also smaller, quieter and less likely to break when moved or subjected to physical impact. The one downside to deploying SSDs at scale is cost: they’re more expensive than HDDs, though they’re currently the cheapest they’ve ever been.
A potential “new” technology that could lead the way in the data centers of tomorrow is heat-assisted magnetic recording (HAMR). This storage technology heats the drive platter with a laser as data is written, which dramatically boosts the amount of data a typical 3.5-inch HDD can store. The laser-assisted process is also more stable and accurate than conventional HDD writing. An HDD capable of storing 20 TB is expected this year, with capacities estimated to reach as much as 30 TB next year. An even more promising storage technology is liquid state storage, which uses vanadium dioxide to hold data. According to estimates, a single tablespoon of liquid vanadium dioxide could hold 1 TB of data.
Navigating the ever-changing data center landscape can be daunting for any business, and helping your customers find the right data center technology can make all the difference. For more information on the data center technologies of today and tomorrow, contact the experts at Ingram Micro.