VR app to connect people. Meet virtually while thousands of kilometers apart.
About the experience
This is an application designed to connect people who are far apart, allowing them to see each other in 3D through virtual reality.
Workflow followed to create the experience
Design:
From draft to reality. I drew a sketch of the room and a storyboard of the game flow to estimate the graphics needed, as well as the space. The room is fitted with basic geometry, a screen, some chairs and a table, all of them made in Blender. Lights are pre-baked in Unity.
The outside. For performance reasons, the outside is just an extruded NURBS surface with a picture, which adds realism at a low cost. The window is placed strategically to match the perspective of the wallpaper.
The avatars were created in two different ways:
- Male: scanned with a Structure Sensor (depth sensor), then a rigging algorithm (developed by Merce Alvarez and me) was applied to generate the bones that allow movement.
- Female: scanned and modeled through photogrammetry. 150 pictures of the model were taken from different angles, followed by modeling and mesh fitting using another 3D avatar as a reference and projecting the textures onto it.
The quality of this avatar is lower than the male one due to the nature of the technique; better results can be achieved with a professional photogrammetry rig such as http://www.pi3dscan.com/. Rigging was done the same way as for the male avatar.
Game Logic:
This application doesn’t have very complex logic; the main challenges were:
- Motion tracking. Using the Final IK library along with custom scripts, the head and hands are tracked: the hand controllers are the IK targets for the hands, and the HMD is the target for the head (see the sketch after this list). This also optimizes network synchronization, since only three transforms need to be synchronized to replicate the movement.
- Handling connections, avatar spawning and the other networking topics described in the next section.
- Synchronizing the slides shown on the screen, both loading them and keeping them matched across all devices. The slides can be packed in the build or loaded from an asset bundle at runtime (see the SlideSync sketch under Connectivity).
- To do: add and apply a “Lipsync” library (blendshape animations for the lips while the user is talking).
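
As a rough illustration of the motion tracking above, here is a minimal sketch assuming Final IK’s VRIK component is the solver in use (the script and field names below are hypothetical; only the VRIK fields come from Final IK):

```csharp
using UnityEngine;
using RootMotion.FinalIK;

// Sketch: the tracked HMD and hand controllers drive the avatar through
// Final IK, so only these three transforms need to travel over the network.
public class AvatarIKSetup : MonoBehaviour
{
    public VRIK ik;             // Final IK full-body solver on the avatar
    public Transform hmd;       // tracked head-mounted display
    public Transform leftHand;  // tracked hand controllers
    public Transform rightHand;

    void Start()
    {
        ik.solver.spine.headTarget = hmd;
        ik.solver.leftArm.target = leftHand;
        ik.solver.rightArm.target = rightHand;
    }
}
```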
Connectivity:
For this part, Photon Voice and UNet (Unity Networking) were used. If the device is a phone, for now, it looks for any available room; if none is found, a new one is created with the name “Default” (see the sketch below).
For now, this scales only to two players, but it is ready to work with more after just a little extra coding.
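
The room lookup could look roughly like the following sketch, assuming UNet’s built-in matchmaker is used for room discovery (the class and callback names are hypothetical):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Networking;
using UnityEngine.Networking.Match;

// Sketch: on phones, list available matches and join the first one,
// or host a new match named "Default" if none is found.
public class PhoneAutoJoin : MonoBehaviour
{
    void Start()
    {
        NetworkManager.singleton.StartMatchMaker();
        NetworkManager.singleton.matchMaker.ListMatches(
            0, 20, "", true, 0, 0, OnMatchList);
    }

    void OnMatchList(bool success, string info, List<MatchInfoSnapshot> matches)
    {
        if (success && matches.Count > 0)
        {
            // A room exists: join the first available one.
            NetworkManager.singleton.matchMaker.JoinMatch(
                matches[0].networkId, "", "", "", 0, 0, OnMatchJoined);
        }
        else
        {
            // No room found: create one named "Default" (two players for now).
            NetworkManager.singleton.matchMaker.CreateMatch(
                "Default", 2, true, "", "", "", 0, 0, OnMatchCreated);
        }
    }

    void OnMatchJoined(bool success, string info, MatchInfo match)
    {
        if (success) NetworkManager.singleton.StartClient(match);
    }

    void OnMatchCreated(bool success, string info, MatchInfo match)
    {
        if (success) NetworkManager.singleton.StartHost(match);
    }
}
```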
These are the synchronized variables:
- Transform attributes (rotation, position, scale) of the IK reference points for the hands and head of each connected avatar.
- Voice between all connected parties. You can talk to and listen to all users in the room.
- Commands to change the slides. The index of the slide in the selected slide set is sent to the server and then shared across the network, updating all clients (see the sketch below).
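
A minimal sketch of this slide command over UNet could look like the following (the class, field names, and AssetBundle path are hypothetical; the [Command]/SyncVar mechanism is UNet’s):

```csharp
using UnityEngine;
using UnityEngine.Networking;

// Sketch: a client asks the server to change the slide; the server updates
// a SyncVar, and the hook applies the new texture on every client.
public class SlideSync : NetworkBehaviour
{
    public Texture2D[] slides;       // packed in the build, or loaded below
    public Renderer screenRenderer;  // the in-room screen

    [SyncVar(hook = "ApplySlide")]
    int currentSlide;

    [Command]  // must be called from the local player's object
    public void CmdSetSlide(int index)
    {
        currentSlide = Mathf.Clamp(index, 0, slides.Length - 1);
    }

    void ApplySlide(int index)
    {
        currentSlide = index;  // with a hook, the field is assigned manually
        screenRenderer.material.mainTexture = slides[index];
    }

    // Alternative to packing the slides in the build: load them at runtime
    // from an AssetBundle (the path is illustrative).
    public void LoadSlidesFromBundle(string path)
    {
        AssetBundle bundle = AssetBundle.LoadFromFile(path);
        slides = bundle.LoadAllAssets<Texture2D>();
    }
}
```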
Phone Version:
Only a few small changes were needed to make it run on GearVR. Decreasing the graphics quality (hard shadows only, no anti-aliasing, lower light quality) was enough to keep the FPS above 30 on the phone, together with a device-checker script that decides whether to look for hand controllers or not; hence, in the mobile version, the hands just loosely follow the head.
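
The device check could be as simple as the following sketch (the script and field names are hypothetical, and the quality-level index depends on the project’s quality settings):

```csharp
using UnityEngine;

// Sketch: on the Android (GearVR) build, drop to a low quality level and
// disable hand-controller tracking so the hands just follow the head.
public class DeviceChecker : MonoBehaviour
{
    public MonoBehaviour handControllerTracking;  // e.g. the IK hand-target script

    void Awake()
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        QualitySettings.SetQualityLevel(0);    // lowest level: hard shadows, no AA
        handControllerTracking.enabled = false;
#endif
    }
}
```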
Tools used to create and play the experience
Software:
- Unity3D 2018.2.13f1, https://unity.com/es
- Blender, https://www.blender.org/
- Itseez3D, https://itseez3d.com/
- Wrap3, https://www.russian3dscanner.com/
- Agisoft Photoscan (photogrammetry), https://www.agisoft.com/
Hardware:
- Structure Sensor by Occipital + iPad,
- Oculus Rift + hand controllers,
- Gear VR with Samsung Galaxy S7 Edge.
Video experience
The video shows both sides, the male and female avatars, each with its respective point of view in VR. Note that the video from the Samsung S7 (Gear VR) has a very low FPS due to the recording tool, since most of the phone’s resources are taken up by running the app. The movements in the video are not fully synchronized, as the video is intended only as a basic, fun demonstration.

Credits
- Natasha Tsedryk for modeling and acting for the female avatar. Website: delaytelo.com
- Laura Roca for helping to synchronize the experience and the actress.
If you would like to reuse the code, please contact me to get access to the GitHub project.