After putting more work into this project, I finally have a functional FPV or telepresence system that makes the most of the Rift's immersivity (if that's a word). The system relies on various bits and pieces that can't possibly be better explained than with a diagram!
The result is indeed surprisingly immersive. I initially feared that the FPV camera would lag behind my head movements, but that's not the case: the servos react quickly enough. The Rift's large field of view is also put to good use thanks to the wide-angle lens on the FPV camera.
Some technical notes
The wide FOV lens I use on the FPV camera causes significant barrel distortion in the captured image. After calibrating the camera (using Agisoft Lens), I implemented a shader to correct this in real time.
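The idea behind the shader can be sketched with the Brown radial distortion model: for every output (undistorted) pixel, compute where to sample in the captured (distorted) image. This is the same per-pixel lookup a fragment shader would do; the `k1` and `k2` coefficients here are placeholder values standing in for whatever the calibration produces, not my actual numbers:

```python
import numpy as np

def distort_coords(xy, k1, k2, center=(0.5, 0.5)):
    """For each undistorted (output) pixel coordinate in [0,1]x[0,1],
    return the coordinate to sample in the distorted source image.
    Brown radial model, first two terms: r_d = r_u * (1 + k1*r^2 + k2*r^4)."""
    p = np.asarray(xy, dtype=float) - center
    r2 = np.sum(p ** 2, axis=-1, keepdims=True)
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2
    return p * factor + center
```

With `k1 = k2 = 0` this is the identity, and positive coefficients push the sample point outward, which is what undoes barrel distortion when used as a texture lookup.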
I use Ogre 3D and Kojack's Oculus code to produce the type of image the Rift expects. In the Ogre 3D scene, I simply create a 3D quad mapped with the captured image and place it in front of the "virtual" head. Kojack's Rift code takes care of rendering the scene to two viewports (one for each eye). It also performs another distortion-correction step which, this time, compensates for the Rift's lenses in front of each eye. Lastly, it gives me the user's head orientation, which translates further down the chain into servo positions for moving the FPV camera.
As the camera is physically servo-controlled only on yaw and pitch, I apply the head-roll to the 3D quad displaying the captured image (in the opposite direction). This actually works really well (thanks Mathieu for the idea!). I'm not aware of any commercial RC FPV system that does that.
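That mapping can be sketched as follows; the yaw and pitch limits are hypothetical values standing in for whatever the actual pan/tilt mount allows:

```python
def head_to_servos(yaw_deg, pitch_deg, roll_deg,
                   yaw_range=(-90.0, 90.0), pitch_range=(-45.0, 45.0)):
    """Map Rift head orientation (degrees) to camera servo targets.
    Yaw and pitch are clamped to the mount's mechanical limits
    (placeholder values here). Roll has no servo, so it is returned
    negated, to be applied to the display quad in software."""
    clamp = lambda v, lo, hi: max(lo, min(hi, v))
    servo_yaw = clamp(yaw_deg, *yaw_range)
    servo_pitch = clamp(pitch_deg, *pitch_range)
    quad_roll = -roll_deg  # counter-rotate the quad to cancel head roll
    return servo_yaw, servo_pitch, quad_roll
```

The counter-rotation means the horizon in the captured image stays level with the real horizon even when the user tilts their head, despite the camera itself never rolling.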
And ideas for future developments...
One of the downsides of the system is the poor video quality, which comes from several things:
- the source video feed is rather low resolution,
- the wireless transmission adds noise,
- the analog-to-digital conversion is performed with a cheap USB dongle.
Several improvements come to mind:
- using the Raspberry Pi camera as a source, for example: the resolution and image quality would be better, and it is also much lighter than the Sony CCD. It doesn't have a large FOV though (but this can be worked around),
- transmitting over WiFi would avoid using a separate wireless system. But what kind of low-latency codec to use then? Range is also an issue (though a directional antenna and tracking could help),
- the image reaching the receiving computer would already be digital, so no more composite-video capture step.
I should also get rid of the Pololu Maestro module on the transmitter end, as I've already successfully used the Raspberry Pi's GPIO to generate PWM signals in the past.
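For reference, a standard hobby servo expects a 50 Hz signal whose pulse width (typically 1000–2000 µs, an assumption that depends on the servo) encodes the position. The conversion is the same whether the Maestro or the Pi's GPIO generates the signal; a minimal sketch:

```python
def servo_pulse_us(angle_deg, min_us=1000.0, max_us=2000.0, angle_span=180.0):
    """Convert a servo angle in [0, angle_span] degrees to a pulse width
    in microseconds. 1000-2000 us is a typical range; real servos vary."""
    angle = max(0.0, min(angle_span, angle_deg))
    return min_us + (max_us - min_us) * angle / angle_span

def duty_cycle_percent(pulse_us, period_us=20000.0):
    """Duty cycle for PWM APIs that take a percentage (50 Hz -> 20 ms period)."""
    return 100.0 * pulse_us / period_us
```

So a centered servo (90° on a 180° span) corresponds to a 1500 µs pulse, i.e. a 7.5% duty cycle at 50 Hz, which is what the GPIO PWM would have to produce.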
Lastly, it would be fantastic to capture with two cameras and use the Rift stereoscopic display.
So there's still some room for improvement! Any advice is welcome.
|The receiver-end (A/V receiver on the left, Wifi-Pi on the right)|