Data centers need to handle increasing amounts of data for business-critical processes and analytics. The latest contributor to the data explosion is the Internet of Things (IoT). Sensors embedded in all sorts of devices are gathering data to power machine-to-machine interactions, and that data has to be stored for analysis. The demands IoT places on computing will change the data center forever; the question is how to prepare the data center to handle it.
Gartner estimates that there will be 25 billion “things” connected via the Internet by 2020, and that monitoring those 25 billion things will have a positive economic impact of $2 trillion. The changes that IoT makes possible will be pervasive: everything will be monitored in order to assess performance and reduce waste. However, Gartner also says that the greatest challenge facing data centers is that they don’t know how to handle IoT or what to do with the data.
Much of the demand for IoT will be for machine-to-machine communications. Algorithms and software will be used to automate machine monitoring and processes. Smart meters are a good example. Smart energy meters automate measurement of power usage and billing for consumers without human intervention. The possibilities provided by IoT are boundless, if you have the infrastructure to handle it.
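To make the smart-meter example concrete, here is a minimal sketch of automated metering and billing. The `MeterReading` class, the field names, and the flat tariff are illustrative assumptions for this article, not any real utility's system.

```python
# Hypothetical smart-meter billing sketch: readings arrive from the meter
# and are totaled into a bill with no human intervention. All names and
# the tariff below are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class MeterReading:
    meter_id: str
    kwh: float  # energy consumed since the previous reading

def monthly_bill(readings, rate_per_kwh=0.12):
    """Total a month's readings for one meter into a dollar amount."""
    return round(sum(r.kwh for r in readings) * rate_per_kwh, 2)

readings = [MeterReading("meter-42", 10.5), MeterReading("meter-42", 12.0)]
bill = monthly_bill(readings)  # 22.5 kWh at $0.12/kWh
```

The point is not the arithmetic but the absence of a person in the loop: the meter generates the data, and software turns it into a business outcome.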
Handling New Levels of Data Volume
The sheer volume of IoT data is going to have the most immediate impact on the data center.
Consider the diverse types of devices that will need to be connected. Everything from medical instrumentation to factory machinery to connected cars will be part of the IoT universe. Enabling machine-to-machine monitoring and management, such as controlling the pace of a production line or directing traffic for self-driving automobiles, is going to require real-time data. Most of today’s enterprise data traffic is generated by applications hosted on servers or by traffic between colocation and public cloud facilities that support those applications. With IoT, you will have a flood of machine-generated traffic supporting automated processes.
This is going to mean a wave of new types of traffic from new sources, such as machine sensors, video cameras, and other monitoring systems. The data traffic will have to be managed and prioritized. Then there are going to be additional security woes to think about.
There also is going to be a demand for more bandwidth. IoT data traffic is very chatty: think of thousands of devices generating information at short intervals and sending it across your network. The more devices there are, the more data traffic there is. Not all of that data needs to be fed into the data center for analysis. An intermediary controller can filter out and discard much of that noise, sampling data rather than consuming it en masse. To reduce potential security problems, IoT processes may also use sandboxing to contain code so it can be executed safely.
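The filter-and-sample idea described above can be sketched in a few lines. This is an illustrative controller policy, not a real product's API; the threshold, sampling interval, and record fields are assumptions.

```python
# Illustrative intermediary-controller policy: forward anomalous readings
# immediately, and forward only every Nth routine reading as a sample.
# Thresholds and field names are assumptions for the example.
def filter_and_sample(readings, threshold=5.0, sample_every=10):
    """Return the subset of readings worth sending to the data center."""
    forwarded = []
    for i, r in enumerate(readings):
        if abs(r["value"]) > threshold:   # anomaly: always forward
            forwarded.append(r)
        elif i % sample_every == 0:       # routine data: sparse sampling
            forwarded.append(r)
    return forwarded

readings = [{"sensor": "s1", "value": v} for v in range(20)]
kept = filter_and_sample(readings)
# 20 chatty readings shrink to 15 forwarded ones; in practice the
# reduction on routine telemetry is far more dramatic.
```

The design choice here is the usual one for chatty telemetry: anomalies get through unconditionally, while steady-state noise is thinned to a sample sufficient for trend analysis.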
Adopting an Edge Computing Model
In addition to the bandwidth required, consider the amount of storage needed to accommodate IoT data. For big data computing, gathering information in cloud data repositories for analytics by the data center makes perfect sense. With the sheer volume of IoT data, however, that model breaks down: not all analytics can be handled centrally in the data center.
Instead, data will be aggregated in distributed “mini data centers” that put the analytics closer to the endpoints. Edge computing pushes applications, data, and processing to the logical edge of the network. This decreases the amount of data traffic, reduces latency, and enables real-time analytics. Cisco refers to this as fog computing: data and processing power are placed at the edge of the network, neither in the enterprise data center nor in the cloud. Cisco predicts that 40 percent of IoT data processing will happen in the “fog” by 2018.
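A minimal sketch of why edge aggregation cuts traffic: the edge node reduces a burst of raw readings to one compact summary before anything travels upstream. The record shape and field names are assumptions for illustration.

```python
# Edge-style aggregation sketch: ship one summary record upstream
# instead of every raw reading. Field names are illustrative assumptions.
def summarize_at_edge(raw_readings):
    """Reduce a burst of raw readings to a compact summary record."""
    values = [r["value"] for r in raw_readings]
    return {
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": sum(values) / len(values),
    }

raw = [{"value": v} for v in (10, 12, 11, 13)]
summary = summarize_at_edge(raw)
# One small dict travels to the data center instead of four raw readings;
# at IoT scale, the same pattern replaces millions of messages.
```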
For IoT applications, edge computing is generally most useful where you need local processing in order to analyze data, such as in industrial automation or transportation where you need to monitor sensors and actuators. Time-sensitive applications, such as mobile healthcare and oil and gas production, where real-time analysis is vital, are also ideal applications for edge or fog computing.
With edge computing in place, you can prioritize data center traffic in a more useful fashion. Edge computing is best for real-time analysis of large volumes of machine-generated data, while historical IoT data can still be imported into the data center for non-critical analytics using big data methods. IoT data can be normalized and prioritized so that data needed for real-time decision-making gets priority bandwidth, while larger historical data sets are allocated bandwidth of their own.
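The prioritization described above can be sketched with a simple priority queue: each message is tagged as real-time or historical, and real-time traffic always drains first. The queue discipline and priority labels are illustrative assumptions, not a specific network QoS implementation.

```python
# Hedged sketch of IoT traffic prioritization: real-time messages are
# forwarded before historical batches. Priority values are assumptions.
import heapq

REALTIME, HISTORICAL = 0, 1  # lower number drains first

def enqueue(queue, priority, seq, payload):
    # seq breaks ties so equal-priority messages stay in arrival order
    heapq.heappush(queue, (priority, seq, payload))

queue = []
enqueue(queue, HISTORICAL, 1, "hourly sensor archive")
enqueue(queue, REALTIME, 2, "line-stoppage alert")
enqueue(queue, HISTORICAL, 3, "daily log batch")

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
# The real-time alert is forwarded ahead of both historical batches,
# even though it arrived second.
```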
So how does IoT affect data center design? It’s all going to depend on the application. Understanding how to prioritize data traffic, set up edge computing systems, and manage the floodgates that hold back the deluge of IoT data is crucial. For value-added resellers, this is going to mean learning how to balance data center computing with edge computing systems and harnessing the cloud and other resources in order to prevent customers’ data centers from choking on unneeded IoT data.