With Data Centers, What Can Happen Will Happen (Eventually).

Because data centers and telecom switching centers are designed to withstand failures without interrupting business operations, a 3 a.m. emergency due to a malfunctioning air conditioner should never occur – in theory. But Murphy’s Law says that if a single failure can create an emergency, it will. So, to date, operators have had to react to single-component failures as if they are business-critical. Because they might be.

In my previous blog, I pointed out the two components of risk: the probability of failure and the consequence of failure. While both components are important in failure analysis, it is the consequence of failure that is most effective at helping decision-makers manage the cost of failure.

If you know there is a high probability of impending failure, but you don’t know the potential consequence, you have to act as though every threat has the potential for an expensive business interruption. Taking such actions is typically expensive. But if you know the consequence, even without knowing the probability of failure, you can react to inconsequential failures at your leisure and plan so that consequential failures are less likely.

In the past, the consequences of a failure weren't knowable or predictable. The combination of Internet of Things (IoT) data and machine learning has changed all that. It's now possible to predict the consequence of a failure by analyzing large quantities of historical sensor data. These predictions can be performed on demand, without needing a geometric model of the data hall.

The advantage of machine learning-based systems is that predictive models are continually tuned to actual operating conditions. Even as things change and scale over time, the model remains accurate without manual intervention. The consequences of actions, in addition to equipment failures, become knowable and predictable.
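To make that concrete, here is a minimal sketch – not Vigilent's actual model, and with hypothetical features and figures – of how sensor data logged during past outages could train a simple regression model to answer "what happens if this unit fails right now?":

```python
# Minimal sketch (hypothetical data and features): learn how far rack inlet
# temperatures rise when a given cooling unit drops offline, from sensor
# readings logged during past outages and maintenance windows.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Each historical row: [failed unit id, IT load (kW), supply air temp (C),
# fraction of redundant units running]; the target is the peak inlet
# temperature observed in the affected zone after the unit went down.
X_hist = np.array([
    [0, 120.0, 18.5, 0.75],
    [1,  95.0, 19.0, 0.50],
    [2, 140.0, 18.0, 1.00],
    [0, 130.0, 18.7, 0.50],
])
peak_inlet_c = np.array([27.1, 31.4, 24.8, 29.9])

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X_hist, peak_inlet_c)

# Consequence query under today's conditions: what if unit 1 fails now?
today = np.array([[1, 110.0, 18.8, 0.75]])
predicted_peak = model.predict(today)[0]
print(f"Predicted peak inlet temperature: {predicted_peak:.1f} C")
print("Consequential" if predicted_peak > 32.0 else "Inconsequential: defer to scheduled maintenance")
```

Retraining on new readings as they arrive is what keeps a model like this tuned to actual operating conditions as the facility changes and scales.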

This type of consequence analysis is particularly important for organizations that have a run-to-failure policy for mechanical equipment. Run-to-failure is common in organizations with severe capital constraints, but it only works without business interruption if the consequence of the next failure is predictable.

Predicting the consequence of failure allows an operations team to avoid over-reacting to failures that do not affect business continuity. Rather than dispatching a technician in the middle of the night, an operations team can address a predicted failure with minimal or no consequence during its next scheduled maintenance. If consequence analysis indicates that a cooling unit failure may put more significant assets at risk, the ability to predict how much time is available before a critical temperature is reached provides time for graceful shutdown – and mitigation.
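As a simple illustration of that "time available" calculation – the numbers and the linear warm-up assumption below are hypothetical, not taken from the Vigilent system:

```python
# Hypothetical sketch: estimate how long a zone has before it crosses its
# critical temperature after a cooling unit failure, assuming a roughly
# linear warm-up rate learned from past outages in that zone.
def minutes_to_critical(current_temp_c: float,
                        critical_temp_c: float,
                        warmup_rate_c_per_min: float) -> float:
    """Time budget for graceful shutdown or mitigation, in minutes."""
    if warmup_rate_c_per_min <= 0:
        return float("inf")  # the zone is not warming; no immediate risk
    return max(0.0, (critical_temp_c - current_temp_c) / warmup_rate_c_per_min)

# Example: zone at 24 C, critical threshold 32 C, and history shows this zone
# warms about 0.4 C per minute when that particular unit is offline.
budget = minutes_to_critical(24.0, 32.0, 0.4)
print(f"Roughly {budget:.0f} minutes before the critical threshold is reached")
```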

Preventative maintenance carries its own risk: equipment still needs to be shut off at times for service. Will that cause a problem? Predictive consequence analysis can provide the answer. If shutting off a particular unit would create an issue, you can know in advance and provide spot cooling to mitigate the risk.

The ability to predict the consequences of failure, or of intentional actions such as preventative maintenance, gives facility managers greater control over the reliability of their facilities, and the peace of mind that their operations are as safe as possible.

2016 and Looking Forward

To date, Vigilent has saved more than 1 billion kilowatt hours of energy, delivering $100 million in savings to our customers.  This also means we reduced the amount of CO2 released into the atmosphere by over 700,000 metric tons, equivalent to not acquiring and burning almost 4,000 railcars of coal.  This matters because climate change is real.

Earlier this year, Vigilent announced its support for the Low-Carbon USA initiative, a consortium of leading businesses across the United States that support the Paris Climate Accord and its goal of limiting global temperature rise to well below 2 degrees Celsius.  Conservation plays its part, but innovation driving efficiency and renewable power generation will make the real difference.  Vigilent and its employees are fiercely proud to be making a tangible difference every day with the work that we do.

Beyond this remarkable energy savings milestone, I am very proud of the market recognition Vigilent achieved this year.  Bloomberg recognized Vigilent as a “New Energy Pioneer.”  Fierce Innovation named Vigilent Best in Show: Green Application & Data Centers (telecom category).

Of equal significance, Vigilent has become broadly recognized as a leader in the emerging field of industrial IoT.  With our early start in this industry, integrating sensors and machine learning for measurable advantage long before they ever became a “thing,” Vigilent has demonstrated significant market traction with concrete results.  The industry has recognized Vigilent’s IoT achievements with the following awards this year:

TiE50 – Top Startup: IoT

IoT Innovator – Best Product: Commercial and Industrial Software

We introduced Vigilent prescriptive analytics this summer with shocking results, and I say that in a good way.  Our customers have uniformly received insights that surprised them.  These insights have ranged from unrealized capacity to failing equipment in critical areas.  The analytics are also helping customers meet SLA requirements with virtually no extra work and identify areas drifting out of compliance, enabling facility operators to resolve issues as soon as a temperature goes beyond a specified threshold.
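The threshold check behind that kind of compliance alerting can be very simple; the sketch below is illustrative only, with an assumed SLA band and made-up sensor names and readings:

```python
# Illustrative sketch: flag sensors whose latest reading has drifted outside
# the temperature band written into the SLA (band and readings are assumed).
SLA_BAND_C = (18.0, 27.0)  # allowable inlet temperature range, in Celsius

latest_readings = {         # sensor id -> latest inlet temperature (C)
    "rack-A01": 22.4,
    "rack-B07": 27.9,
    "rack-C12": 17.2,
}

out_of_compliance = {
    sensor: temp
    for sensor, temp in latest_readings.items()
    if not (SLA_BAND_C[0] <= temp <= SLA_BAND_C[1])
}
for sensor, temp in sorted(out_of_compliance.items()):
    print(f"SLA excursion: {sensor} at {temp:.1f} C (allowed {SLA_BAND_C[0]}-{SLA_BAND_C[1]} C)")
```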

Vigilent dynamic cooling management systems are actively used in the world’s largest colos and telcos, and in Fortune 500 companies spanning the globe.  We have expanded relationships with long-term partners NTT Facilities and Schneider Electric, who have introduced Vigilent to new regions such as Latin America and Greater Asia.  We signed a North America-focused partnership with Siemens, which leverages Siemens Demand Flow and the Vigilent system to optimize efficiency and manage data center challenges across the white space and chiller plant.  We are very pleased that the world’s leading data center infrastructure and service vendors have chosen to include Vigilent in their solution portfolios.

We thank you, our friends, customers and partners, for your continued support and look forward to another breakout year as we help the businesses of the world manage energy use intelligently and combat climate change.

 

Breaking Down Communication Barriers with IoT

The Internet of Things presents an unprecedented opportunity to ease the long-standing conflict between facilities, IT and sustainability managers.  Traditionally, these three groups operate in separate silos and don’t share each other’s priorities.

Data generated from more granular sensing in data centers reveals information that has traditionally been difficult to access and not easily shared between groups.  This data can provide both an incentive and a means to work together by establishing a common source for business discussions.  This concept is becoming increasingly important.  As Bill Kleyman said in a Data Center Knowledge article projecting Data Center and Cloud Considerations for 2016: “The days of resources locked in silos are quickly coming to an end.”  We agree.  While Kleyman was referring to architecture convergence, we believe his forecast applies equally forcefully to data.  Multi-group access to more comprehensive data has collaborative power.  IoT contributes to both the generation of such data and the ability to act on it, instantaneously.

Consider the following examples of how IoT operations can accelerate decision-making and collaboration between IT and Facilities.

IT Expansion Deployments

As service shifts to the network edge, or higher traffic is needed for a particular geographic region, IT is usually tasked with identifying the most desirable sites for these expansions.  In bigger companies, the possible sites can number 50 or more.  IT and Facilities need to quickly determine a short list.

A highly granular view of the actual (versus designed) operating cooling capacity available in each of the considered sites would greatly speed and simplify this selection.  With operating cooling capacity information readily in hand, facilities can easily create a case for the most attractive sites from a cost and time perspective, and/or create a business case for the upgrades necessary to support IT’s expansion deployments.

Data can expose previously hidden or unknowable information.  Capacity planners are provided with the right information for asset deployment in the right places, faster and with less expense.  Everyone gets what they want.
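As a rough illustration, a shortlist based on measured cooling headroom (actual load against installed capacity) might be computed like this – the site names and figures below are invented:

```python
# Hypothetical sketch: rank candidate expansion sites by measured cooling
# headroom rather than by nameplate design figures alone.
sites = [
    # (site, installed cooling capacity kW, measured cooling load kW)
    ("Site A", 800.0, 430.0),
    ("Site B", 600.0, 565.0),
    ("Site C", 500.0, 210.0),
]

def headroom_kw(installed: float, in_use: float) -> float:
    """Cooling capacity still available at the site, in kW."""
    return installed - in_use

shortlist = sorted(sites, key=lambda s: headroom_kw(s[1], s[2]), reverse=True)
for name, installed, in_use in shortlist:
    print(f"{name}: {headroom_kw(installed, in_use):.0f} kW of cooling headroom")
```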

Repurposing Capital Assets

After airflow is balanced, and redundant or unnecessary cooling is put into standby through automated control, IT and facilities can view the real-time amount of cooling actually available in a particular area.  It becomes easy to identify rooms that have far more cooling than needed.  The surplus cooling units can be moved to a different part of the facility, or to a different site, as needed.

IoT powered by smart software can thus expose inefficient capital asset allocation.  Rather than spending money on new capital assets, existing equipment can be moved from one place to another.  This has huge and nearly instant financial benefits.  It also establishes a basis for cooperation between the facilities team that maintains the cooling system and the IT team that needs to deploy additional IT assets and is tasked with paying for any additional cooling.
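A rough sketch of how surplus units might be flagged for redeployment, assuming a simple N+1 redundancy policy (the rooms, unit counts and loads below are hypothetical):

```python
# Illustrative sketch: once redundant units are in standby, compare how many
# cooling units each room actually needs (plus one spare) with how many are
# installed, and flag the surplus as candidates for redeployment.
import math

rooms = [
    # (room, installed units, unit capacity kW, measured heat load kW)
    ("Room 1", 8, 60.0, 220.0),
    ("Room 2", 6, 60.0, 310.0),
]

for room, installed, unit_kw, load_kw in rooms:
    required = math.ceil(load_kw / unit_kw) + 1   # assumed N+1 policy
    surplus = installed - required
    if surplus > 0:
        print(f"{room}: {surplus} cooling unit(s) could be redeployed elsewhere")
```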

In both situations, data produced by IoT becomes the arbiter and the language on which the business cases can be focused.

Data essentially becomes the “neutral party.”

All stakeholders can benefit from IoT-produced data to make rational and mutually understood decisions.  As more IoT-based data becomes available, stakeholders who use it to augment their intuition will find that data’s collaborative power is profitable as well as insightful.

IoT: A Unifying Force for the Data Center

A recent McKinsey & Company Global Institute report states that factories, including industrial facilities and data centers, will receive the lion’s share of the value enabled by IoT – up to $3.7 trillion of incremental value over the next ten years.  Within that focus, McKinsey states that the areas of greatest potential are optimization and predictive maintenance – things that every data center facility manager addresses on a daily basis.  The report also states that Industrial IoT (combining the strengths of both industry and the Internet) will accelerate global GDP per capita growth to a pace never seen during the industrial and Internet revolutions.

The McKinsey study described the key enablers required for the success of Industrial IoT as software and hardware technology, interoperability, security and privacy, and business organization and cultural support.  Translated into requirements for a data center, these are: low-power, inexpensive sensors; mesh connectivity; smart software to analyze and act on the data (analytics); standardization and APIs across technology stacks; interoperability across vendors; and ways to share data that preserve security and privacy.

Many of these enabling factors are readily available today.  Data centers must have telemetry and communications.  If you don’t have them, you can add them in the form of mesh network sensors.  Newer data centers and equipment will have this telemetry embedded.  The data center industry already has standards that can be used to share data.  Smart software capable of aggregating, analyzing and acting on this data is also available.  Security isn’t as well evolved, or as well understood.  As more data becomes available through the Internet of Things, the network must be secure, private and locked down.

Transitions always involve change, and sometimes challenge the tried and true ways of doing things.  In the case of industrial IoT, I really think that change is good.  Telemetry and analytics reveal previously hidden information and patterns that will help facility professionals develop even more efficient processes.  Alternatively, they may help these same professionals prove to their executive management that existing processes are working very well.  The point is that, to date, no one has known for sure, because the data just hasn’t been available.

The emergence of IoT in the data center is inevitable, and facility managers who embrace this change and use it to their operational advantage can turn their attention to more strategic projects.

My next blog will address how telemetry and IoT can break down the traditional conflicts between facilities, IT and sustainability managers.

Stay tuned.