I envision a world in 2050 where, through the enabling power of augmented reality, reality changes to fit our context, needs, and preferences. In this world, each person will have a companion "Jinn", an embodied artificial intelligent agent that understands our physical environment and enhances our abilities through context-awareness.
This project brings together several smaller sub-projects. If you want to reproduce it, you can check out my documentation in my GitHub repository: https://github.com/helenchg/LearnHololensDevUnity
Some of the sub-projects include their expected results (what you should see when you build and deploy onto your device) in their respective README.md files.
In summary, I used a modified Spatial Mapping Manager from the HoloToolkit with added runtime NavMesh generation. This enables our AI agent to appear on the NavMesh and navigate it. I also added voice input to summon the agent and to make it move toward you. In addition, the gaze and air-tap gesture can be used to send the agent to other destinations on the NavMesh.
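The runtime NavMesh generation and air-tap navigation described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: it assumes Unity's `NavMeshSurface` component (from the NavMeshComponents package) is attached alongside the spatial-mapping meshes, and the class, field, and method names are my own placeholders.

```csharp
using UnityEngine;
using UnityEngine.AI;

// Illustrative sketch: re-bake the NavMesh at runtime as spatial mapping
// produces more mesh data, and move the agent to a gazed/tapped point.
public class RuntimeNavMeshBuilder : MonoBehaviour
{
    [SerializeField] private NavMeshSurface surface;    // bakes the NavMesh at runtime
    [SerializeField] private NavMeshAgent agent;        // the "Jinn" agent
    [SerializeField] private float rebuildInterval = 5f;

    private void Start()
    {
        // Periodically rebuild so the NavMesh tracks the growing spatial map.
        InvokeRepeating(nameof(Rebuild), rebuildInterval, rebuildInterval);
    }

    private void Rebuild()
    {
        surface.BuildNavMesh();
    }

    // Hypothetical handler for gaze + air-tap: move the agent to the
    // tapped point if it lies on (or near) the baked NavMesh.
    public void MoveTo(Vector3 gazeHitPoint)
    {
        NavMeshHit hit;
        if (NavMesh.SamplePosition(gazeHitPoint, out hit, 1.0f, NavMesh.AllAreas))
        {
            agent.SetDestination(hit.position);
        }
    }
}
```

`NavMesh.SamplePosition` snaps the tapped point to the nearest valid spot on the NavMesh, which helps when the gaze ray hits a wall or an un-walkable part of the spatial mesh.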
In the future, with more development, the agent will be able to help you search for lost items in the physical environment, teach you about physical artifacts in a museum and show them to you up close, warn you of danger in your environment, and appear only when needed, disappearing based on context.
For the final project, I set out to build a basic context-aware virtual agent that responds to voice commands in the physical environment.
Instructions: In the demo, you can perform some basic interactions with the virtual agent. Summon the agent by saying "Hey Jinn" or "Hey Kid". Select a destination for it with the air-tap gesture. Say "Come back" to make the agent "walk" back to where you are.
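The voice commands can be wired up with Unity's built-in `KeywordRecognizer` (in `UnityEngine.Windows.Speech`, which is what HoloLens voice input uses under the hood). This is a hedged sketch rather than the project's actual implementation; the handler names `SummonAgent` and `ReturnToUser` are placeholders.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.Speech;

// Illustrative sketch: map the demo's spoken phrases to agent actions.
public class VoiceCommands : MonoBehaviour
{
    private KeywordRecognizer recognizer;
    private readonly Dictionary<string, Action> commands =
        new Dictionary<string, Action>();

    private void Start()
    {
        commands.Add("Hey Jinn", SummonAgent);
        commands.Add("Hey Kid", SummonAgent);
        commands.Add("Come back", ReturnToUser);

        recognizer = new KeywordRecognizer(new List<string>(commands.Keys).ToArray());
        recognizer.OnPhraseRecognized += args => commands[args.text].Invoke();
        recognizer.Start();
    }

    private void SummonAgent()
    {
        // Placeholder: spawn or teleport the agent onto the NavMesh near the user.
    }

    private void ReturnToUser()
    {
        // Placeholder: e.g. agent.SetDestination(Camera.main.transform.position).
    }
}
```

`KeywordRecognizer` only matches the fixed phrases it is given, which keeps recognition fast and offline-friendly on the HoloLens compared to free-form dictation.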
In the video below, you can see the Spatial Mapping mesh in white and the NavMesh as the multicolored mesh. The NavMesh is the area where the agent can move freely.