Oculus Rift and Leap Motion Human Machine Interfaces and Google Glass Augmented Reality Holograms (4/16/2015)

Today I debugged the last issues in my Google Glass augmented reality holograms and my Leap Motion + Oculus Rift virtual reality human-machine interfaces.
Now, by looking at an image target, I can instantiate a 3D hologram, such as one of myself.
I can also project a hologram below the target using depth masking, so it appears as if seen through a window:
I have also incorporated rudimentary gesture control, allowing me to "grab" and "carry" a mesh through 3D space.
This particular mesh is actually an accurate point cloud of the Energy Lab.
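In rough Python pseudocode (not my actual code), the grab-and-carry logic boils down to something like the sketch below; the Hand fields, thresholds, and distances are illustrative stand-ins for whatever the hand-tracking layer actually reports.

```python
# Illustrative sketch of grab-and-carry: while the hand is closed near an
# object, the object follows the palm. Hand is a stand-in for the tracker's
# per-frame hand data; the threshold and radius values are made up.

from dataclasses import dataclass

Vec3 = tuple  # (x, y, z) in metres


@dataclass
class Hand:
    palm_position: Vec3
    grab_strength: float  # 0.0 = open hand, 1.0 = closed fist


def distance(a: Vec3, b: Vec3) -> float:
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5


class GrabbableMesh:
    GRAB_THRESHOLD = 0.8  # how closed the hand must be to count as a grab
    GRAB_RADIUS = 0.15    # max palm-to-mesh distance (metres) to pick it up

    def __init__(self, position: Vec3):
        self.position = position
        self.held = False

    def update(self, hand: Hand) -> None:
        """Called once per tracking frame with the latest hand data."""
        grabbing = hand.grab_strength > self.GRAB_THRESHOLD
        if not grabbing:
            self.held = False
        elif self.held or distance(hand.palm_position, self.position) < self.GRAB_RADIUS:
            # Carry: keep the mesh anchored to the palm while the fist stays closed.
            self.held = True
            self.position = hand.palm_position


if __name__ == "__main__":
    mesh = GrabbableMesh(position=(0.0, 0.10, 0.0))
    mesh.update(Hand(palm_position=(0.0, 0.12, 0.05), grab_strength=0.9))
    print("held:", mesh.held, "at", mesh.position)
```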
Along with gestures, I can call functions through virtual buttons on the target, which are triggered by covering them in the real world, such as this file management tree.
Finally, I can pull live data from my own control systems, such as sensor values from my Raspberry Pi, and display them as a hologram.
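The plumbing behind this is simple in principle: the Pi publishes its latest readings and the AR client polls them and writes them into the hologram's labels. The sketch below is illustrative only; the sensor functions, endpoint name, and port are made up rather than my actual setup.

```python
# Rough sketch of the Raspberry Pi side: expose the latest sensor readings
# over HTTP as JSON so the AR client can poll them each frame.
# read_temperature()/read_light() are placeholders for the real sensor code.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def read_temperature():
    return 22.5  # placeholder for the real GPIO/I2C sensor read


def read_light():
    return 730   # placeholder


class SensorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/sensors":
            body = json.dumps({
                "temperature_c": read_temperature(),
                "light_lux": read_light(),
            }).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), SensorHandler).serve_forever()
```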
For my Leap Motion and Oculus Rift interface, I can place objects in stereoscopic 3D in front of a black-and-white augmented view of the real world, and interact with them directly with my hands.
For example, I can grab and manipulate this mesh of me, just as I could the hologram.
Here is the Energy Lab:
And the file management tree, where I can open files by touching a leaf in 3D space.
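Roughly speaking, the tree itself is just a walk of the filesystem that then gets laid out as branches and leaves in 3D. The sketch below is illustrative Python, not my actual implementation, and open_file is a placeholder for whatever happens when a leaf is touched.

```python
# Illustrative sketch: gather a directory into a nested node structure that a
# 3D scene could lay out as a tree. open_file() is a placeholder action.

import os
import subprocess


def build_tree(root):
    """Return a nested dict: {'name', 'path', 'children'} for each entry."""
    node = {"name": os.path.basename(root) or root, "path": root, "children": []}
    try:
        for entry in sorted(os.listdir(root)):
            full = os.path.join(root, entry)
            if os.path.isdir(full):
                node["children"].append(build_tree(full))
            else:
                node["children"].append({"name": entry, "path": full, "children": []})
    except PermissionError:
        pass  # skip directories we cannot read
    return node


def open_file(path):
    # Placeholder for "touching a leaf": hand the file to the OS to open.
    subprocess.Popen(["xdg-open", path])


if __name__ == "__main__":
    tree = build_tree(os.path.expanduser("~"))
    print(tree["name"], "has", len(tree["children"]), "entries")
```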
Finally, the live data stream from the home automation system. For this one, I scaled the hologram up to life-size, so it is as if you are walking through my room in real life.