Material and Texture Models


Material design is the process by which artists or designers set the appearance properties of a virtual surface to achieve a desired look. This process is often conducted in a virtual synthetic environment; however, advances in computer vision tracking and interactive rendering now make it possible to design materials in augmented reality (AR) rather than in purely virtual synthetic environments. How designing in an AR environment affects user behavior, however, remains unknown. To evaluate how working in a real environment influences the material design process, we propose a novel material design interface that allows designers to interact with a tangible object as they specify appearance properties. The setup gives designers the opportunity to view a real-time rendering of the appearance properties through a virtual reality setup as they manipulate the object. Our setup uses a camera to capture the physical surroundings of the designer and create subtle but realistic reflection effects on the virtual view superimposed on the tangible object. The effects are based on the physical lighting conditions of the actual design space. We describe a user study that compares the efficacy of our method with that of a traditional 3D virtual synthetic material design system. Both subjective feedback and quantitative analysis from our study suggest that the in-situ experience provided by our setup allows the creation of higher-quality material properties and supports the sense of interaction and immersion.
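The reflection effect described above amounts to looking up the camera-captured surroundings along the mirrored view direction. The following is a minimal Python sketch of that idea, not the paper's implementation; the equirectangular environment-map layout, the function names, and the simple diffuse-plus-mirror blend are illustrative assumptions.

import numpy as np

def sample_environment(env_map, directions):
    # Look up radiance from an equirectangular environment image
    # env_map of shape (H, W, 3) along unit direction vectors (N, 3).
    h, w, _ = env_map.shape
    x, y, z = directions[:, 0], directions[:, 1], directions[:, 2]
    u = (np.arctan2(x, -z) / (2.0 * np.pi) + 0.5) * (w - 1)
    v = (np.arccos(np.clip(y, -1.0, 1.0)) / np.pi) * (h - 1)
    return env_map[v.astype(int), u.astype(int)]

def shade_with_reflection(base_color, normals, view_dirs, env_map, reflectivity=0.2):
    # Blend the material's base color with a mirror reflection of the
    # captured surroundings: r = v - 2 (v . n) n for incident direction v.
    d = np.sum(view_dirs * normals, axis=1, keepdims=True)
    refl = view_dirs - 2.0 * d * normals
    refl /= np.linalg.norm(refl, axis=1, keepdims=True)
    reflected = sample_environment(env_map, refl)
    return (1.0 - reflectivity) * base_color + reflectivity * reflected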

We introduce the problem of computing a human-perceived softness measure for virtual 3D objects. As the virtual objects do not exist in the real world, we do not directly consider their physical properties but instead compute the human-perceived softness of the geometric shapes. We collect crowdsourced data in which humans rank their perception of the softness of vertex pairs on virtual 3D models. We then compute shape descriptors and use a learning-to-rank approach to learn a softness measure mapping any vertex to a softness value. Finally, we demonstrate the accuracy and robustness of our framework with a user study and a variety of 3D shapes, and show an application to fabricating virtual 3D objects.
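One way to make the learning-to-rank step concrete is a linear pairwise ranker trained on descriptor differences, sketched below. The descriptor computation, the pair format, and the choice of logistic regression are illustrative assumptions rather than the paper's exact formulation.

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_softness_ranker(descriptors, pairs):
    # descriptors: (V, D) array of per-vertex shape descriptors.
    # pairs: list of (a, b) vertex indices meaning "a perceived softer than b".
    # Each comparison becomes one training example on the descriptor difference.
    diffs = np.array([descriptors[a] - descriptors[b] for a, b in pairs])
    X = np.vstack([diffs, -diffs])
    y = np.concatenate([np.ones(len(pairs)), np.zeros(len(pairs))])
    model = LogisticRegression().fit(X, y)
    w = model.coef_.ravel()
    # The learned weights map any vertex descriptor to a scalar softness value.
    return lambda descriptor: float(np.dot(descriptor, w))

Given the trained measure, the per-vertex softness value is simply the score of that vertex's descriptor, which can then be visualized over the mesh or used downstream.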


Graphics Group at SIGGRAPH 2016

The Graphics Group will have two papers at SIGGRAPH 2016: Multi-Scale Label-Map Extraction for Texture Synthesis and Tactile Mesh Saliency. SIGGRAPH will be held July 24-28, 2016 in Anaheim, California. We will also have a paper at the ACM Symposium on Applied Perception, held just before the main SIGGRAPH conference.

Source code now online!


Texture synthesis is a well-established area with many important applications in computer graphics and vision. However, despite their success, synthesis techniques are not widely used in practice because the creation of good exemplars remains challenging and extremely tedious. In this paper, we introduce an unsupervised method for analyzing texture content across multiple scales that automatically extracts good exemplars from natural images. Unlike existing methods, which require extensive manual tuning, our method is fully automatic. This allows the user to focus on using texture palettes derived from their own images, rather than on manual interactions dictated by the needs of an underlying algorithm.

Most natural textures exhibit patterns at multiple scales that may vary according to location (non-stationarity). To handle such textures, many synthesis algorithms rely on an analysis of the input and on guidance of the synthesis. Our new analysis is based on a labeling of texture patterns that is both (i) multi-scale and (ii) unsupervised; that is, patterns are labeled at multiple scales, and the scales and the number of labeled clusters are selected automatically.

Our method works in two stages: the first builds a hierarchical extension of superpixels; the second labels the superpixels using a random walk on a graph of similarities between superpixels, combined with a nonnegative matrix factorization.
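To make the two stages concrete, here is a minimal illustrative Python sketch. SLIC superpixels, mean-color descriptors, and the particular diffusion and factorization choices below are stand-ins for the paper's hierarchical superpixels and texture descriptors, and the number of labels is fixed here, whereas the method selects it automatically.

import numpy as np
from skimage.segmentation import slic
from sklearn.decomposition import NMF

def label_superpixels(image, n_segments=300, n_labels=5, walk_steps=3):
    # image: (H, W, 3) float array in [0, 1]; returns a per-pixel label map.
    # Stage 1: oversegment the image into superpixels.
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    n_sp = segments.max() + 1
    # Per-superpixel descriptor (mean color, as a simple stand-in).
    feats = np.array([image[segments == i].mean(axis=0) for i in range(n_sp)])
    # Stage 2: similarity graph between superpixels.
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    sim = np.exp(-d2 / (d2.mean() + 1e-8))
    # Random walk: diffuse the similarities with the row-normalized
    # transition matrix so that labels respect the graph structure.
    P = sim / sim.sum(axis=1, keepdims=True)
    diffused = np.linalg.matrix_power(P, walk_steps) @ sim
    # Nonnegative matrix factorization yields soft cluster memberships.
    W = NMF(n_components=n_labels, init="nndsvda", max_iter=500).fit_transform(diffused)
    sp_labels = W.argmax(axis=1)
    return sp_labels[segments]  # map superpixel labels back to pixels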

Our label-maps provide descriptors for pixels and regions that benefit state-of-the-art texture synthesis algorithms. We show several applications, including guidance of non-stationary synthesis, content selection, and texture painting. Our method is designed to handle large inputs and scales to many megapixels.
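As a small example of label-map-driven content selection, the hypothetical helper below crops the exemplar window most consistently covered by one label; the window size and the purity score are illustrative choices, not the paper's selection procedure.

import numpy as np

def select_exemplar(image, label_map, target_label, window=128):
    # Slide a window over the label map and return the image crop whose
    # pixels most consistently carry target_label.
    h, w = label_map.shape
    best_crop, best_score = None, -1.0
    for y in range(0, h - window + 1, window // 2):
        for x in range(0, w - window + 1, window // 2):
            patch = label_map[y:y + window, x:x + window]
            score = float(np.mean(patch == target_label))
            if score > best_score:
                best_score = score
                best_crop = image[y:y + window, x:x + window]
    return best_crop, best_score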

In addition to traditional exemplar inputs, our method can also handle natural images containing different textured regions.