Optimizing Reliability: Step 1 — Sensors: Do More With Less

Tignis
4 min read · Jun 3, 2020

A recent post in this blog introduced three steps you can take to optimize asset reliability in your facilities through physics-based modeling and digital twin technology. Today we'll cover the first step in that process: reducing the total sensor count in your environment.

When the internet of things (IoT) came along and the notion of 24/7 monitoring promised end-to-end detectability for system-wide issues, facilities everywhere began equipping their systems with sensors everywhere a sensor would fit. The idea was that "more was more": the more sensors you added, the more data you could collect about system performance, and the better you could monitor and maintain the system.

Sounds good, right? Except that more sensors also mean more hassles, mostly tied to the cost and nuisance of adding them to your environment:

  • Buying, installing, and maintaining new sensors drives costs up.
  • Engaging IT to plan and carry out expanded sensor coverage typically takes months of everyone's valuable time.
  • Teams take on new workload as the typical operator's scope of duties expands.
  • In some cases, installing sensors in hard-to-reach places raises life-safety concerns.

So, while it's important to gather as much relevant sensor data as you can to achieve world-class condition monitoring for your mechanical assets, it's ideal if you can do so without having to install any new sensors. What's the secret? Instead of placing new sensors into your existing environment, consider the ways you can calculate "virtual" sensor values by applying physical laws to the data you already collect.
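
As a minimal sketch of what a virtual sensor can look like, the snippet below derives a flow reading from a pump-speed signal using the pump affinity laws, under which flow scales linearly with shaft speed. The rated operating point and the names here are illustrative assumptions, not values from any particular site.

```python
# Minimal "virtual sensor" sketch: derive a flow value from a speed
# reading you already collect, via the pump affinity laws (Q2/Q1 = N2/N1).
# The rated operating point below is an illustrative assumption.

RATED_SPEED_RPM = 1750.0  # pump nameplate speed (assumed)
RATED_FLOW_M3_H = 120.0   # flow at rated speed (assumed)

def virtual_flow_m3_h(measured_speed_rpm: float) -> float:
    """Flow scales linearly with shaft speed under the affinity laws."""
    return RATED_FLOW_M3_H * (measured_speed_rpm / RATED_SPEED_RPM)

# At 1400 RPM the affinity laws predict about 96 m^3/h:
print(virtual_flow_m3_h(1400.0))
```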

In the following example, a building's cooling tower contains equipment that exchanges water at varying temperatures and uses a fan to move air across that water, rejecting heat so the building stays cool. Let's assume that, as part of maintaining this system, the operator wants to keep a keen eye on the tower's overall power consumption.

The common way to track power consumption would be to attach a kilowatt power sensor that sends data back to the operator's monitoring software. But let's say some portions of the tower where sensors would be required are difficult to access, and maintaining those sensors means hiring outside contractors every time service or replacement is needed. What if there were a way to get by without using a sensor at all?

By mapping all system elements onto a digital twin, the operator has the data they need to determine the tower's expected power consumption by working backward from basic physics. The temperature differential between the water entering and exiting the tower provides a key input, along with the known physical properties of the system itself. This lets the monitoring system calculate how hard the tower must work to deliver the current cooling level, and the power required to do so.
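
As a rough illustration of that calculation, the sketch below combines an existing flow reading with the entering and leaving water temperatures to compute the heat the tower is rejecting (Q = m_dot × c_p × ΔT), then converts that load into an expected power figure using an assumed rated ratio of heat rejected to fan power. The names and the 60:1 ratio are hypothetical stand-ins for the tower's actual characteristics in the digital twin.

```python
# Sketch of the virtual power sensor described above. All names and
# rated values are illustrative assumptions, not from the article.

CP_WATER_KJ_PER_KG_K = 4.186  # specific heat of water

def expected_tower_power_kw(
    flow_kg_s: float,       # condenser-water flow, from an existing sensor
    t_entering_c: float,    # water temperature entering the tower
    t_leaving_c: float,     # water temperature leaving the tower
    heat_per_fan_kw: float = 60.0,  # assumed kW of heat rejected per kW of fan power
) -> float:
    # Heat the tower must reject: Q = m_dot * c_p * dT
    heat_rejected_kw = flow_kg_s * CP_WATER_KJ_PER_KG_K * (t_entering_c - t_leaving_c)
    # Power implied by that load, given the tower's rated efficiency
    return heat_rejected_kw / heat_per_fan_kw

# e.g. 30 kg/s cooled from 35 C to 29.5 C: ~690 kW rejected, ~11.5 kW of fan power
print(expected_tower_power_kw(30.0, 35.0, 29.5))
```

Comparing a computed value like this against observed behavior is exactly the kind of check a digital twin can run continuously, with no new hardware in the loop.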

By forgoing the installation of new sensors and instead making better use of the sensors already in your environment, you save on installation costs while reducing both your maintenance burden and your exposure to the hassles listed above.

But sometimes you need sensors anyway, and maintaining them is an inescapable part of your operational duties. In the next post in this series, we'll talk about how you can also use physics and a digital twin to calculate sensor failure points, making it easier to assure reliability through basic computations on the sensor data you collect.

Did you find this article interesting? For more insights, check out our blog: Physics, Machines, and Data.

Written by Jon Herlocker

Jon is a deep technologist and experienced executive in both on-premises enterprise software and consumer SaaS businesses. In his prior leadership roles, he was Vice President and CTO of VMware's Cloud Management Business Unit, which generated $1.2B/year for VMware. Other positions include CTO of Mozy, and CTO of EMC's Cloud Services division. As a co-founder of Tignis, Jon is an experienced entrepreneur, having founded two other startup companies. He sold his last startup, Smart Desktop, to Pi Corporation in 2006. Jon is a former tenured professor of Computer Science at Oregon State University, and his highly cited academic research was awarded the prestigious 2010 ACM Software System Award for contributions to the field of recommendation systems. Jon holds a Ph.D. in Computer Science from the University of Minnesota, and a B.S. in Mathematics and Computer Science from Lewis and Clark College.


Tignis provides physics-driven analytics for connected mechanical systems, utilizing digital twin and machine learning technologies.