The HoloLens is a mixed reality headset that lets you augment the real world with virtual holograms. It features a multitude of sensors, including cameras, a microphone array, an ambient light sensor, and an inertial measurement unit comprising an accelerometer, a gyroscope and a magnetometer. These are coupled with 3D audio speakers and an impressive (and complicated) transparent display, all running on Windows 10.
We wanted to try out the functionality of the HoloLens, and create a framework for all the common things you might want to do with it. The idea was to give us a toolkit to use for any future HoloLens applications, as well as to come to grips with the technology.
Holo Exhibition allows you to load files into the device, such as images, videos, PDFs and models, and place them around a real space. You can scale them, rotate them, move them, delete them, save them, and display them – the application will recognise the space you are in and place objects wherever you left them when you load a scene. You can also record audio on the fly to place within your space. This is especially useful for narration or explanation of your displays.
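Under the hood, saving and restoring a scene like this amounts to persisting each placed item's transform against the recognised space. As a rough sketch – in Python rather than the Unity/C# a HoloLens app actually uses, and with a layout format invented purely for illustration – it might look like:

```python
import json

def save_scene(items, path):
    """Persist each placed item's source file and transform.
    The layout format here is invented for illustration."""
    layout = [
        {
            "file": item["file"],
            "position": item["position"],   # [x, y, z] in metres
            "rotation": item["rotation"],   # Euler angles in degrees
            "scale": item["scale"],
        }
        for item in items
    ]
    with open(path, "w") as f:
        json.dump(layout, f, indent=2)

def load_scene(path):
    """Read the layout back so items reappear where they were left."""
    with open(path) as f:
        return json.load(f)
```

On the real device the positions would be expressed relative to spatial anchors in the scanned room, so the holograms line up with the physical space rather than with an arbitrary origin.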
Images, Videos, Text, Models
These elements are the foundation for many exhibits, so we added these to Holo Exhibition first. Bright colours work especially well for all of these, and transparent backgrounds are great for images.
We thought it’d be cool to display PDFs in the HoloLens. Being able to place PDFs, or other gallery-style image sets, means you can fit your content into a much smaller space. It also lets you group related information cleanly together, so a user can look through it all in one go, just as they would when swiping through a gallery on their phone.
Holo Exhibition supports one or more bots to be uploaded to the device (with a little help from Nick Landry). These are placed into the application like any other special item type – through a custom meta file. The creator of the exhibition simply provides the endpoint of the bot (from the Microsoft Bot Framework), as well as the name of the bot, and it will be added to the application menu.
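Conceptually the meta file only needs those two fields. A hypothetical example – the field names are invented for illustration, and the endpoint is a placeholder, not a real bot:

```text
name: Exhibition Helper
endpoint: https://example.azurewebsites.net/api/messages
```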
We believe the ability to provide a Q&A bot for each exhibition is a great way of providing help for the user, as well as answering questions they may have about the exhibition. A big way of interacting with the HoloLens is through your gaze – what you’re looking at – so the bots could also give context-specific answers based on this.
To allow users of our application to show things like sensor data, graphics and the like, we created a live data source type. This is another special type, in which you create a text file with the URL to poll, and send it over through our companion app. You can then place the object wherever you want it, and it continuously updates. The graph shown below is from Grafana, a timeseries data platform.
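The polling loop behind a live data item is straightforward. Here is a minimal sketch, again in Python rather than the app's actual Unity/C#, with the fetch and render steps passed in as callables (the meta-file-holds-only-a-URL assumption matches the description above; everything else is illustrative):

```python
import time
from typing import Callable

def read_data_source(meta_path: str) -> str:
    """The meta text file is assumed to hold just the URL to poll."""
    with open(meta_path) as f:
        return f.read().strip()

def poll(url: str,
         fetch: Callable[[str], str],
         render: Callable[[str], None],
         interval_s: float = 5.0,
         max_polls: int = 3) -> int:
    """Fetch the URL repeatedly, pushing each payload to the display.
    Returns the number of polls performed."""
    for i in range(max_polls):
        render(fetch(url))          # e.g. redraw the Grafana panel
        if i < max_polls - 1:
            time.sleep(interval_s)  # wait before the next refresh
    return max_polls
```

In the real app this loop would run for as long as the object is placed, and `render` would update the hologram's texture in place.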
Holo Exhibition also allows the user to place audio around their space. These ‘proximity audio items’ play when the user comes close to them. Because it can be hard to know what you’ll want to say or narrate while you’re building up an exhibit, we’ve also added functionality to record these audio items on the fly, in the app. This lets you quickly add explanations, extra information, or instructions: simply press record in the menu, then stop when you’re done.
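The trigger itself is just a distance check between the user's head position and the audio item. A minimal Python sketch (illustrative only – the trigger radius is an assumed value, not the app's actual setting):

```python
import math

def should_play(user_pos, item_pos, radius_m=1.5):
    """True once the user is within `radius_m` metres of the
    proximity audio item. Positions are (x, y, z) in metres."""
    return math.dist(user_pos, item_pos) <= radius_m
```

A real implementation would also track whether the clip has already played, so walking in and out of range doesn't restart the narration mid-sentence.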
So, where might all of this be useful? Augmented/mixed reality is already a huge industry, with companies like Microsoft, Apple, Google, Facebook and Snap Inc. (developer of Snapchat) investing huge amounts of capital into its development. It’s a new technology, however, with many applications still to be discovered.
Some of the first and perhaps most obvious applications we had in mind were for museums, advertisements and other showcases. The HoloLens, and other augmented and mixed reality technologies such as ARKit and ARCore for mobile, are modern, fresh ways of showcasing a museum collection or product display.
We also believe there are opportunities for this technology in areas such as training and system monitoring. Interactive training tools can be created to run trainees through scenarios, display prompts on actual equipment, and, when integrated with our bots, provide verbal help for the user. There are dozens of other technologies that can also be integrated with the HoloLens, such as image recognition and machine learning.
The MiniDevs have come up with some great ideas around mixed reality for education. One suggested an app that allows you to cut up a model of something like a bee, enabling you to view its inner workings.
They also had a lot of great gamification ideas – they know how to make things fun. For example, exhibits could be incorporated into a kind of ‘Breakout’ game. We added functionality to our app allowing a user to create one of these games using a simple text file. We provide a number of different types of ‘locks’: to open them, the player has to enter a sequence of numbers, letters, colours and so on, or place certain items onto ‘podiums’.
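The text file format can stay very simple. As an illustrative sketch – the `code`/`colours`/`podium` keywords here are invented, not Holo Exhibition's actual syntax – each non-comment line could declare one lock and the answer that opens it:

```python
def parse_locks(text):
    """Parse a breakout-game definition into (lock_type, answer) pairs.
    The file format here is invented for illustration."""
    locks = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        lock_type, *answer = line.split()
        locks.append((lock_type, answer))
    return locks

example = """\
# One lock per line: type, then the sequence that opens it
code 4 2 7 1
colours red green blue
podium vase statue
"""
```

The game runner would then present each lock in turn, checking the player's input (or the items placed on podiums) against the parsed answer sequence.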
The idea behind this is to be able to encourage kids to interact with exhibits and learn about them to help them solve the clues. They might have to read about something, find a certain object in the museum, ask someone about a certain exhibit, or anything of that nature.
The future of mixed and augmented reality is exciting. We’ve only scratched the surface – the real potential comes from other industries embracing this technology.