Enhancing Digital Twins Part 1: Harnessing Data and Predictive Maintenance to Enhance Digital Twins

Part 1 of a 4-part series about displaying Predictive Maintenance insights in Digital Twins, using Azure Databricks. Before we get into the setup details, we’ll introduce the concept of Predictive Maintenance and the tools we used to make predictions.

We've previously explored simulating Digital Twins on IoT devices within Theta’s augmented reality (AR) platform, Mixiply, to create an end-to-end Digital Twin environment in AR.

This allowed us to visualise physical IoT devices alongside sensor data (temperature, pressure and humidity) in AR, in real time. We'd previously discussed the potential of Digital Twinning, concluding that "With the help of AI and Machine learning, the data collected [from Digital Twins] could also be used to predict and diagnose problems before they even happen", among other things.

Digital twin scenarios

Since then, we’ve been implementing Digital Twins at an even greater scale using Mixiply. We are currently capable of creating expansive Digital Twins of real-world locations such as train stations.

Part of a large-scale train station Digital Twin in Mixiply/AR

We are redirecting IoT sensor data from physical entities to trigger incident alerts in their associated Digital Twins, which helps personnel remotely monitor areas of concern in the real-world environment. As we've alluded to already, our next step is to harness the collated data for fault prediction and diagnostics, so that problems can be headed off before they occur. Operational equipment would greatly benefit from this sort of routine monitoring; this practice is known as Predictive Maintenance.

We've recently constructed a large-scale Digital Twin of a warehouse in Mixiply, and as part of this Digital Twin we are conducting Predictive Maintenance on its assets.

Overhead of a large-scale warehouse Digital Twin in Mixiply/AR

The following video is a walk-through of the entire twin, which shows a high-level overview of how we are harnessing Predictive Maintenance.

What is Predictive Maintenance and how does it work?

Predictive Maintenance is the routine analysis of equipment assets to estimate their next point of failure ahead of time, with the aim of minimising asset or operational downtime, maximising productivity and avoiding unnecessary maintenance. It is supported by AI and machine learning, which help you make more informed decisions by analysing legacy and real-time data. Predictive Maintenance extends the practical application of our Digital Twins technology, and in the following three blog posts we will detail the entire process from start to finish.

Warehouse’s Digital Twin cooling asset suggestions in Mixiply/AR

Predictive Maintenance tools

But first, let's discuss the tools we've chosen to conduct Predictive Maintenance.

We are using R to compute our asset failure predictions as it has a solid library of data science packages, including 'survival' and 'corrplot'.
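To give a flavour of what this looks like, here's a minimal sketch (not our actual analysis code) of exploring some made-up asset failure data with 'corrplot' and fitting a survival model with 'survival'; every column name and value below is purely illustrative.

```r
# A minimal sketch (not our actual analysis code): exploring hypothetical
# asset failure data with 'corrplot' and fitting a survival model with
# 'survival'. All column names and values are made up for illustration.
library(survival)
library(corrplot)

# Hypothetical data: hours each asset ran, whether it failed (1) or was still
# running when last observed (0), and two averaged sensor readings.
assets <- data.frame(
  runtime_hours = c(1200, 3400, 800, 2900, 4100, 1500),
  failed        = c(1,    1,    0,   1,    0,    1),
  avg_temp      = c(78,   85,   70,  88,   72,   90),
  avg_vibration = c(0.2,  0.6,  0.1, 0.7,  0.3,  0.8)
)

# Visualise how the numeric readings correlate with one another.
corrplot(cor(assets[, c("runtime_hours", "avg_temp", "avg_vibration")]))

# Cox proportional hazards model: how does temperature relate to failure risk?
fit <- coxph(Surv(runtime_hours, failed) ~ avg_temp, data = assets)
summary(fit)

# Kaplan-Meier estimate of survival over runtime, handy for plotting.
km <- survfit(Surv(runtime_hours, failed) ~ 1, data = assets)
plot(km, xlab = "Runtime (hours)", ylab = "Probability of no failure")
```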

You can interact with R on your computer through RStudio, an IDE built for R.

However, sometimes the scale of the data to be analysed exceeds the limits of your PC's RAM. What can be done then?

This is just the job for Apache Spark.  

Apache Spark is a cluster-computing framework capable of distributing expansive programming workloads in parallel and in a fault-tolerant manner. Apache Spark has an R interface, facilitated through R's 'sparklyr' package, which can also spin up a local Spark instance on your PC to process hefty datasets.
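As an illustration (again a sketch rather than our production pipeline), the following shows sparklyr connecting to a local Spark instance and pushing a simple aggregation down to Spark; the data frame and column names are made up.

```r
# A minimal sketch: connect sparklyr to a local Spark instance and run a
# simple aggregation there. Assumes sparklyr and a local Spark installation
# (spark_install() can download one if needed).
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")

# Copy an illustrative data frame into Spark and aggregate it there.
readings <- data.frame(
  asset_id = c("pump_1", "pump_1", "pump_2", "pump_2"),
  temp     = c(78, 82, 69, 71)
)
readings_tbl <- copy_to(sc, readings, overwrite = TRUE)

readings_tbl %>%
  group_by(asset_id) %>%
  summarise(avg_temp = mean(temp, na.rm = TRUE)) %>%
  collect()

spark_disconnect(sc)
```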

While working locally with Apache Spark is a viable solution, cloud computing has become the de facto standard for large-scale computing services. Our Digital Twins processes are also largely cloud-based, so it seemed appropriate for us to compute our analyses in the cloud too. Azure Databricks, which is built upon Apache Spark and is capable of processing massive datasets in the cloud, was therefore an obvious choice for conducting Predictive Maintenance on our Digital Twins' corresponding real-life assets.
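As a small preview of where the next post heads: inside an Azure Databricks R notebook, sparklyr attaches to the cluster's existing Spark session rather than starting a local one, so the connection boils down to a single call.

```r
# A minimal sketch, intended to be run inside an Azure Databricks R notebook:
# sparklyr attaches to the cluster's existing Spark session, so no local
# Spark installation is needed.
library(sparklyr)

sc <- spark_connect(method = "databricks")

# From here, the same dplyr/sparklyr workflow as the local example applies,
# except the work is distributed across the Databricks cluster.
```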

And that's it for this post. We hope the concept of Predictive Maintenance in Digital Twins has piqued your interest. In our next post, we'll explain more about Azure Databricks and how to prepare it for Predictive Maintenance analyses. 

Read the next part in the series - Azure Databricks and Predictive Maintenance of our digital twin’s corresponding real-life assets.


This is the first blog post in a series of four on Predictive Maintenance and Digital Twins by Lillian Ho. Lillian is a developer who works in our innovation lab, exploring new and emerging technologies.