This allowed us to visualize physical IoT devices alongside their sensor data - temperature, pressure and humidity - in AR, in real time. We'd previously discussed the potential of Digital Twinning, concluding that "with the help of AI and Machine learning, the data collected [from Digital Twins] could also be used to predict and diagnose problems before they even happen", among other things.
Digital twin scenarios
Since then, we’ve been implementing Digital Twins at an even greater scale using Mixiply. We are currently capable of creating expansive Digital Twins of real-world locations such as train stations.
We are redirecting IoT sensor data from physical entities to trigger incident alerts in their associated Digital Twins, helping personnel to remotely monitor areas of concern in the real-world environment. As we've alluded to already, our next goal is to anticipate problems before they arise by harnessing the collected data for fault prediction and diagnostics. Operational equipment would greatly benefit from this sort of routine monitoring; the practice is known as Predictive Maintenance.
We’ve recently constructed a large-scale Digital Twin of a warehouse in Mixiply, and as a part of the Digital Twin, we are conducting Predictive Maintenance upon its assets.
The following video is a walk-through of the entire twin, which shows a high-level overview of how we are harnessing Predictive Maintenance.
What is Predictive Maintenance and how does it work?
Predictive Maintenance involves routinely analyzing equipment assets in order to estimate when they will next fail, minimizing asset and operation downtime, maximizing productivity and avoiding unnecessary maintenance. It is supported by AI and machine learning, helping you make more informed decisions by analyzing historical and real-time data. Predictive Maintenance extends the practicality of our Digital Twins technology, and in the following three blog posts we will be detailing the entire process from start to finish.
Predictive Maintenance tools
But first, let's discuss the tools we've chosen to conduct Predictive Maintenance.
We are using R to compute our asset failure predictions, as it has a rich ecosystem of data science packages, including 'survival' (for time-to-failure modelling) and 'corrplot' (for visualizing correlations between variables).
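To give a flavour of what this looks like, here is a minimal sketch of the kind of time-to-failure analysis the 'survival' package supports. The data frame and its column names below are purely hypothetical stand-ins, not data from our twin:

```r
library(survival)

# Hypothetical run-to-failure records for a handful of assets:
# hours in service, and whether the asset had failed when observed
# (1 = failed, 0 = still running, i.e. a censored observation).
assets <- data.frame(
  hours  = c(120, 340, 560, 610, 760, 900, 1020, 1100),
  failed = c(1,   1,   0,   1,   1,   0,   1,    1)
)

# Kaplan-Meier estimate of the survival curve: the probability that
# an asset is still operational after a given number of hours.
fit <- survfit(Surv(hours, failed) ~ 1, data = assets)
summary(fit)
```

With real sensor histories in place of the toy data, curves like this feed directly into deciding when an asset is due for maintenance.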
You can interact with R on your computer through RStudio, an R-compatible IDE.
But what happens when the amount of data to be analyzed exceeds the limits of your PC's RAM?
This is just the job for Apache Spark.
Apache Spark is a cluster-computing framework capable of distributing expansive programming workloads in parallel and in a fault-tolerant manner. Spark also has an R interface, provided by R's 'sparklyr' package, which lets you process hefty datasets in Apache Spark from your local PC.
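As a rough sketch of the local workflow, 'sparklyr' connects R to a local Spark instance and pushes data into it, so aggregations run in Spark rather than in R's own memory. This assumes Spark is installed locally (sparklyr's spark_install() can handle that), and uses a built-in sample dataset purely for illustration:

```r
library(sparklyr)
library(dplyr)

# Connect to a local Spark instance (no cluster required).
sc <- spark_connect(master = "local")

# Copy a sample dataset into Spark; with real workloads you would
# read data directly into Spark instead of copying from R.
mtcars_tbl <- copy_to(sc, mtcars, "mtcars_spark")

# The dplyr verbs are translated to Spark SQL and executed in Spark;
# collect() brings only the small result back into R.
mtcars_tbl %>%
  group_by(cyl) %>%
  summarise(avg_hp = mean(hp, na.rm = TRUE)) %>%
  collect()

spark_disconnect(sc)
```

The same code runs unchanged against a remote cluster by pointing spark_connect() at a cluster master instead of "local".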
Although working locally with Apache Spark is a viable solution, cloud computing is the de facto standard for computing services, and our Digital Twins processes are already largely cloud-based, so it seemed appropriate to run our analyses in the cloud too. Using Azure Databricks to conduct Predictive Maintenance on our Digital Twins' corresponding real-life assets was therefore an obvious decision, as it is built upon Apache Spark and is capable of processing massive datasets in the cloud.
And that's it for this post. We hope the concept of Predictive Maintenance in Digital Twins has piqued your interest. In our next post, we'll explain more about Azure Databricks and how to prepare it for Predictive Maintenance analyses.
Read the next part in the series - Azure Databricks and Predictive Maintenance of our digital twin’s corresponding real-life assets.