For our final project we will integrate stereoscopic rendering into Scotty3D. We will target the render output for viewing in a headset such as the Oculus Rift, allowing viewers to see the rendered scenes in 3D. The example render below shows the type of output that we hope to achieve (for simpler Scotty3D scenes).
We intend to write the following additions to Scotty3D in order to implement this feature.

Implementation steps:
We implemented a pair of stereo cameras in order to perform stereo rendering. To simplify the implementation, the stereo cameras are created in the PathTracer object, and are configured and placed before ray tracing begins.
In configure_stereo_cameras(double dx), we configure the stereo cameras by copying the parameters of the original viewing camera, with the exception of the horizontal field of view (hFov) and aspect ratio (ar): these two values are halved in our stereo rendering pipeline so that both camera views fit side by side in our screen pixel space. Next, we offset the two cameras by dx units to the left and right respectively, to simulate the eye separation of a stereoscopic view. We also move the cameras back by the focal length, so that the field of view of the rendered scene more closely matches that of the viewing screen in Scotty3D.
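The setup above can be sketched as follows. This is a minimal illustration, not the actual Scotty3D camera API: the Camera struct and its field names here are stand-ins for the real class.

```cpp
#include <cassert>

// Hypothetical stand-in for the Scotty3D camera; the real class differs.
struct Camera {
    double hFov, vFov, ar;   // horizontal/vertical field of view, aspect ratio
    double pos_x;            // horizontal position along the eye baseline
    double focal_length;     // focal length of the camera
    double pos_z;            // position along the viewing axis
};

// Configure left/right eye cameras from the viewing camera `view`.
// Each eye gets half the horizontal FoV and half the aspect ratio so the
// two views tile the screen side by side; the eyes are offset by +/- dx,
// and both are pulled back by the focal length.
void configure_stereo_cameras(const Camera& view, double dx,
                              Camera& left, Camera& right) {
    left = right = view;                      // copy all other parameters
    left.hFov = right.hFov = view.hFov / 2.0;
    left.ar   = right.ar   = view.ar / 2.0;
    left.pos_x  = view.pos_x - dx;            // left eye
    right.pos_x = view.pos_x + dx;            // right eye
    left.pos_z  += view.focal_length;         // move back by focal length
    right.pos_z += view.focal_length;
}
```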
In raytrace_pixel(size_t x, size_t y), we modified the function to use one of the two stereo cameras, depending on which half of the screen the pixel falls on.
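The camera-selection logic can be sketched as below. This is an assumed illustration of the idea, not the actual raytrace_pixel code: pixels in the left half trace through the left eye, the right half through the right eye, with x remapped so each half spans the full normalized range of its camera.

```cpp
#include <cassert>
#include <cstddef>

enum class Eye { Left, Right };

// Pick the eye for pixel column x of a framebuffer `width` pixels wide,
// and write the normalized horizontal coordinate within that eye's view
// into `u`.
Eye select_eye(std::size_t x, std::size_t width, double& u) {
    std::size_t half = width / 2;
    if (x < half) {
        u = double(x) / double(half);         // remap left half to [0, 1)
        return Eye::Left;
    }
    u = double(x - half) / double(half);      // remap right half to [0, 1)
    return Eye::Right;
}
```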
In this task, we implemented a lens model to simulate the optical distortion of a VR headset, in order to perform realistic stereoscopic rendering. We applied barrel distortion to a test image in order to tune the parameters and observe the effects of the distortion. This implementation is located inside the matlab folder.
In barrel distortion, image magnification decreases with distance from the optical center. The distortion parameters are a list of positive numbers that sum to 1, and the resulting scaling is the sum of each distortion factor multiplied by the normalized distance to the optical center raised to a corresponding power.
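The scaling rule above can be written as a small helper. This is a generic sketch: the particular powers paired with each factor are a tuning choice and are passed in here, since the writeup does not fix them.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Barrel-distortion scale at normalized radius r in [0, 1]:
//   scale(r) = sum_i k[i] * r^p[i]
// The factors k[i] are positive and sum to 1, so scale(1) = 1 and
// scale(r) <= 1 for r in [0, 1] when all powers are non-negative,
// i.e. magnification shrinks away from the optical center.
double barrel_scale(const std::vector<double>& k,
                    const std::vector<double>& p, double r) {
    double scale = 0.0;
    for (std::size_t i = 0; i < k.size(); ++i)
        scale += k[i] * std::pow(r, p[i]);
    return scale;
}
```

The distorted radius is then r' = r * barrel_scale(k, p, r), applied along the direction from the optical center to the point.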
Below is the result when barrel distortion is applied with an offset in the optical center:
In this task, we applied the optical distortion model to our ray tracer. We did so by applying the distortion warp to each camera ray we generate, and then performing ray tracing as usual. Our implementation can be found under
raytrace_pixel. Since the distortion parameters, the optical-center offset, and the eye separation differ for each VR headset, these parameters can be tuned. Currently, we are using the following parameters, as they give us the best results:
camera_seperation = 0.5;
k1 = 0.42;
k2 = 0.26;
k3 = 0.02;
k_offset = 0.05;
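One way these tuned values could be applied to a normalized sample point before generating the camera ray is sketched below. The polynomial powers and the interpretation of k_offset as a horizontal shift of the optical center are assumptions for illustration; the actual logic lives in raytrace_pixel.

```cpp
#include <cassert>
#include <cmath>

// Tuned values from the writeup above.
struct DistortionParams {
    double k1 = 0.42, k2 = 0.26, k3 = 0.02;
    double k_offset = 0.05;   // assumed: horizontal optical-center offset
};

// Warp a normalized screen sample (u, v) in [0, 1]^2 toward the (offset)
// optical center. The even-polynomial form of the scale is an assumption.
void distort_sample(const DistortionParams& p, double& u, double& v) {
    double cx = 0.5 + p.k_offset, cy = 0.5;   // offset optical center
    double dx = u - cx, dy = v - cy;
    double r = std::sqrt(dx * dx + dy * dy);
    double scale = p.k1 + p.k2 * r * r + p.k3 * r * r * r * r;
    u = cx + dx * scale;                      // pull sample toward center
    v = cy + dy * scale;
}
```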