Authors: Nadejda Roubtsova1,2 and Jean-Yves Guillemaut1
1 Centre for Vision, Speech and Signal Processing, University of Surrey (United Kingdom)
2 Centre for the Analysis of Motion, Entertainment Research and Applications, University of Bath (United Kingdom)
The repository contains the dataset generated for validation of the Bayesian Helmholtz Stereopsis research in the following publication:
N. Roubtsova and J.-Y. Guillemaut, "Bayesian Helmholtz Stereopsis with Integrability Prior", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
By accessing and/or using the data you are agreeing to the following terms and conditions:
1. All original imagery and associated data provided may be used for non-commercial research purposes only.
2. The source of the datasets must be acknowledged in all publications in which they are used. This should be done by referencing all of the following:
3D meshes of Pear and Bunny were used with permission of the third-party creators:
Both models are stated to be available for use in non-commercial research. It is, however, your responsibility to obtain the details of the licence agreements covering these models from the creators' websites, or by contacting the creators directly, should you wish to use the models.
To access, download and/or use the data you must agree to these terms and conditions. If you agree to the Licence Agreement, please click here to download the dataset.
The dataset consists of 3 objects: Sphere, Pear and Bunny.
The groundtruth 3D meshes for these objects are in folder /groundtruth/. The meshes were either created in-house (Sphere) or borrowed with the creators' permission, as indicated above in the licence agreement (Pear and Bunny).
For each object we have generated noise-free and noise-corrupted sets of intensity images. The images come in Helmholtz Stereopsis reciprocal pairs, 8 reciprocal pairs per set. Noise corruption: Gaussian noise with a normalised variance of 0.001, or +/-2072 intensity levels. The images are rendered using the physically plausible modified Phong reflectance model (Lewis, 1994), which combines a diffuse and a specular component. The intensity images per object are in folder /intensity/, with ../S_noN/ containing the specular noise-free set and ../S_NL1/ the specular noise level 1 set (as described above).
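As a rough illustration of the noise model described above, the sketch below adds zero-mean Gaussian noise of normalised variance 0.001 to an image with values in [0, 1] (on a 16-bit scale, sqrt(0.001) x 65535 is roughly 2072 intensity levels, consistent with the figure quoted). The function name and the clipping to [0, 1] are our assumptions for this sketch, not part of the dataset's generation pipeline.

```python
import numpy as np

def add_gaussian_noise(img, var=0.001, rng=None):
    """Add zero-mean Gaussian noise of the given normalised variance
    to an image with values in [0, 1], clipping back to the valid range.
    Illustrative sketch only; not the dataset's actual generation code."""
    rng = np.random.default_rng(0) if rng is None else rng
    noisy = img + rng.normal(0.0, np.sqrt(var), img.shape)
    return np.clip(noisy, 0.0, 1.0)

# Hypothetical mid-grey test image
img = np.full((4, 4), 0.5)
noisy = add_gaussian_noise(img)
```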
Geometric camera calibration, given as a 3x4 projection matrix per viewpoint for each object, for the synthetic acquisition set-up of 16 camera viewpoints, is in folder /calibration/.
Masks used to compute the reconstruction volume per object are in folder /masks/.
For further details please see the readme.txt file enclosed with the dataset.
THANK YOU FOR YOUR INTEREST IN OUR WORK!