Recovering Shape and Reflectance

Spring Courses in Computer Graphics -- 2018

CPSC 078 - See It, Change It, Make It, M 2:30-4:20

CPSC 276 - Digital Humanities Apps, MW 2:30-3:45

CPSC 479 - Advanced Topics: Computer Graphics, T 9:25-11:15



Abstract

Models of both the shape and material properties of physical objects are needed in many computer graphics applications. In many design applications, even when the shape itself is not needed, it is desirable to start from the material properties of existing non-planar objects. We consider the design of a system to capture both the shape and appearance of objects, and we focus particularly on objects that exhibit significant subsurface scattering and inter-reflection effects.

We present preliminary results from a system that uses coded light from a set of small, inexpensive projectors coupled with commodity digital cameras.
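As a concrete illustration of how such coded light can be decoded, the sketch below is our own minimal example, not the group's implementation: it generates Gray-code stripe patterns and recovers, for each camera pixel, the projector column that illuminated it. The function names, pattern width, and the ideal binary threshold are illustrative assumptions; a triangulation step using the projector-camera calibration would then turn these correspondences into 3D shape.

# A minimal sketch (not the authors' implementation) of Gray-code structured
# light, one common form of "coded light": the projector displays a sequence
# of binary stripe patterns, and each camera pixel's on/off sequence
# identifies the projector column that illuminated it.
import numpy as np

def gray_code_patterns(width, num_bits):
    """Binary Gray-code stripe patterns, one per bit, as a (num_bits, width) array."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                      # binary -> Gray code
    bits = (gray[None, :] >> np.arange(num_bits - 1, -1, -1)[:, None]) & 1
    return bits.astype(np.uint8)                   # 1 = projector pixel lit

def decode_gray(captured, threshold=0.5):
    """Recover the projector column seen by each camera pixel.

    captured: (num_bits, H, W) camera images, normalized to [0, 1], each taken
              while the corresponding pattern was projected.
    Returns an (H, W) integer map of projector columns.
    """
    bits = (captured > threshold).astype(np.uint32)   # binarize each image
    num_bits = bits.shape[0]
    # Pack bits into Gray-coded integers, most significant bit first.
    gray = np.zeros(bits.shape[1:], dtype=np.uint32)
    for b in range(num_bits):
        gray = (gray << 1) | bits[b]
    # Convert Gray code back to binary column indices.
    col = gray.copy()
    shift = 1
    while shift < num_bits:
        col ^= col >> shift
        shift <<= 1
    return col

# Synthetic round trip: "project" the patterns onto an ideal scene and decode.
patterns = gray_code_patterns(width=1024, num_bits=10)
captured = np.repeat(patterns[:, None, :], 4, axis=1).astype(np.float32)  # 4-row "camera"
columns = decode_gray(captured)
assert np.array_equal(columns[0], np.arange(1024))

A real system would typically binarize by comparing each pattern with its inverse rather than using a fixed threshold, which is more robust when effects such as subsurface scattering blur the projected stripes.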

September/October Presentations

The Graphics Group will present papers at Digital Heritage 2015 in Granada, Spain (http://www.digitalheritage2015.org/) and at the Symposium on Applied Perception in Tuebingen, Germany (http://sap.acm.org/2015/), and will give a tutorial at the 2015 Color Imaging Conference (http://www.imaging.org/ist/conferences/CIC/index.cfm).

Graphics Group Collaborates with the Yale Center for British Art

CS researchers Ying Yang and Ruggero Pintus collaborate with the Yale Center for British Art on a new exhibit. See more at Yale News.

Book: Digital Modeling of Material Appearance, The Morgan Kaufmann Series in Computer Graphics, Morgan Kaufmann (2007)


Computer graphics systems are capable of generating stunningly realistic images of objects that have never physically existed. In order for computers to create these accurately detailed images, digital models of appearance must include robust data to give viewers a credible visual impression of the depicted materials. In particular, digital models demonstrating the nuances of how materials interact with light are essential to this capability. 

This is the first comprehensive work on the digital modeling of material appearance: it explains how models from physics and engineering are combined with keen observation skills for use in computer graphics rendering.

Written by the foremost experts in appearance modeling and rendering, this book is for practitioners who want a general framework for understanding material modeling tools, and also for researchers pursuing the development of new modeling techniques. The text is not a "how to" guide for a particular software system. Instead, it provides a thorough discussion of foundations and detailed coverage of key advances.

Practitioners and researchers in applications such as architecture, theater, product development, cultural heritage documentation, and visual simulation and training, as well as traditional digital application areas such as feature film, television, and computer games, will benefit from this much-needed resource.

Published in 2007. Available from Amazon.com.

 


Download video (Highest Quality)

A video giving an overview of the Context-Aware Textures paper [1].


References

  1. Lu, J., A. S. Georghiades, A. Glaser, H. Wu, L.-Y. Wei, B. Guo, J. Dorsey, and H. Rushmeier, "Context-Aware Textures", ACM Transactions on Graphics, vol. 26, no. 1, Article 3, New York, NY, USA: ACM, January 2007.



Abstract

We examined two image-based methods, photogrammetry and stereo vision, used for reconstructing the three-dimensional form of biological organisms under field conditions. We also developed and tested a third ‘hybrid’ method, which combines the other two techniques. We tested these three methodologies using two different cameras to obtain digital images of museum and field-sampled specimens of giant tortoises. Both the precision and repeatability of the methods were assessed statistically on the same specimens by comparing geodesic and Euclidean measurements made on the digital models with linear measurements obtained with a caliper and a flexible tape. We found no substantial difference between the three methods in measuring the Euclidean distance between landmarks, but spatially denser models (stereo vision and ‘hybrid’) were more accurate for geodesic distances. The use of different digital cameras did not influence the results. Image-based methods require only inexpensive instruments and appropriate software, and allow reconstruction of the three-dimensional forms (including their curved surfaces) of organisms sampled in the field.
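For concreteness, the sketch below is our own toy example, not the authors' code: it computes the two kinds of digital measurements compared here, the Euclidean distance between two landmark vertices (the analogue of a caliper measurement) and an approximate geodesic distance over the mesh surface (the analogue of a flexible-tape measurement), estimated by shortest paths along mesh edges. The tetrahedron mesh and landmark indices are made-up test data; a real evaluation would use the reconstructed specimen meshes.

# Euclidean (straight-line, caliper-like) vs. geodesic (over-the-surface,
# tape-like) distances between landmark vertices on a triangle mesh.
import heapq
import numpy as np

def euclidean_distance(verts, i, j):
    """Straight-line distance between landmark vertices i and j."""
    return float(np.linalg.norm(verts[i] - verts[j]))

def geodesic_distance(verts, faces, i, j):
    """Approximate geodesic distance: Dijkstra over the mesh edge graph."""
    # Build an adjacency list weighted by edge length.
    adj = {v: [] for v in range(len(verts))}
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            w = float(np.linalg.norm(verts[u] - verts[v]))
            adj[u].append((v, w))
            adj[v].append((u, w))
    # Dijkstra from landmark i to landmark j.
    dist = {i: 0.0}
    queue = [(0.0, i)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == j:
            return d
        if d > dist.get(u, np.inf):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, np.inf):
                dist[v] = nd
                heapq.heappush(queue, (nd, v))
    return np.inf  # landmarks lie on disconnected components

# Toy example: two landmarks on a unit tetrahedron.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(euclidean_distance(verts, 1, 2))          # straight line: sqrt(2)
print(geodesic_distance(verts, faces, 1, 2))    # along edges: also sqrt(2) here

On this convex toy shape the two measures coincide for adjacent landmarks; on a curved shell such as a tortoise carapace the geodesic (surface) distance exceeds the Euclidean one, which is why the denser reconstructions matter more for the tape-like measurements.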