

Computer Graphics Group
Education
Seminars

The MIT Computer Graphics Group is a part of MIT's Computer Science and Artificial Intelligence Laboratory. As of March 2004, our lab is in the Ray and Maria Stata Center.

Please go to the CSAIL Events Calendar for the most up-to-date information on seminars and talks hosted by MIT's Computer Graphics Group.

Past Events

These past seminars were primarily held in the Computer Graphics Group Laboratory on the second floor of building NE43 (unless otherwise noted). 
For the abstract of a listed talk, follow the link on its title. Depending on the individual, additional links may appear for the presenter and/or their institution.


April 7, 2004
4:00pm (Stata Center - Room TBD)
Michael Hawley, MIT - Media Lab
BHUTAN: A Visual Odyssey Across the Last Himalayan Kingdom

March 2, 2004
11:00am (NE43-518)
Ron Fedkiw, Stanford - Computer Science
Physics Based Simulation for Computer Graphics | PDF Abstract

February 25, 2004
11:00am (NE43-518)
Marc Levoy, Stanford - Computer Graphics Laboratory
Synthetic aperture photography and illumination using arrays of cameras and projectors

April 3, 2003
3:00pm (NE43-518)
Cass Everitt, Manager of Developer Technology, NVIDIA
Cg - The Future of Computer Graphics

October 4, 2002
2:00pm (NE43-2nd floor lounge)
Ioana Boier Martin, IBM T.J. Watson Research Center, Visual Technologies Department
Interactive Shape Design with Multiresolution Subdivision Surfaces

March 26, 2002
3:00pm (NE43-518)
Zoran Popović, University of Washington - Department of Computer Science and Engineering
Interactive High-Fidelity Character Animation

March 15, 2002
3:00pm (NE43-518)
Daphna Weinshall, The Hebrew University of Jerusalem, School of Computer Science and Engineering
Perspective Unbound: Visionary Rendering

March 1, 2002
1:30pm (NE43-941)
Harry Shum, Microsoft Research
(Beyond) Plenoptic Sampling

January 10, 2002
4:00pm (NE43-518)
Roberto Cipolla, University of Cambridge, Department of Engineering
3D model acquisition and tracking using uncalibrated silhouettes

October 26, 2001
2:00pm (NE43-518)
Steven Seitz, University of Washington, Department of Computer Science and Engineering
Seeing 3D: The Space of All Stereo Images

July 14, 2000
2:30pm
Joao Goncalves, RESOLV, EU Joint Research Centre in Ispra, Italy
3D Reconstruction of Environments "as built"

May 12, 2000
4:00pm (NE43-518)
Marc Levoy, Computer Science Department, Stanford University
The Digital Michelangelo Project

April 13, 2000
4:00pm (NE43-518)
James O'Brien, Georgia Institute of Technology
Generating Synthetic Motion Using Physically Based Simulation

February 24, 2000
4:00pm (NE43-941)
Shai Avidan, Microsoft Research
Static, Dynamic and Transparent Structure From Motion

February 16, 2000
11:30am (NE43-518)
Darwyn Peachey, Pixar
R&D at Pixar

October 7, 1999
4:00pm (NE43-941)
Michal Irani, The Weizmann Institute of Science
Multi-Frame Analysis of Information in Video

June 7, 1999
3:00pm
Victor Ostromoukhov, Swiss Federal Institute of Technology, Lausanne (EPFL)
Artistic Halftoning - Between Technology and Art

March 17, 1999
4:00pm
Darwyn Peachey, Pixar
The Art and Technology of Pixar's Animated Films

February 19, 1999
2:00pm
Hugues Hoppe, Microsoft Research
Robust Mesh Watermarking

February 18, 1999
3:15pm
Ned Greene
Optimized Hierarchical Occlusion Culling for Z-Buffer Systems

September 11, 1998
3:00pm
Marc Levoy, Computer Science Department, Stanford University
The Digital Michelangelo Project

June 25, 1998
Dr. Paul Debevec, Computer Science Division, University of California at Berkeley
Rendering the Campanile Movie: Efficient View-Dependent Image-Based Rendering with Projective Texture-Mapping

April 21, 1998
Mel Slater, Visiting Scientist at the Research Laboratory of Electronics at MIT
Small Group Behaviour in a Virtual and Real Environment: A Comparative Study

February 5, 1998
Mathieu Desbrun, iMAGIS - GRAVIR/IMAG
Modeling and Animating Highly Deformable Materials

November 7, 1997
Steven Seitz, Microsoft Corporation
Viewing and Manipulating 3D Scenes Through Photography

September 30, 1997
Paul Haeberli, Silicon Graphics, Inc.
Media Synthesis

August 19, 1997
Frédo Durand, iMAGIS
The Visibility Skeleton and the 3D Visibility Complex

April 25, 1997
Carlo Sequin, Computer Science Division at U.C. Berkeley
Planning Soda Hall -- and Living with the Result

April 10, 1997
Fabrice Neyret, Dynamic Graphics Project at University of Toronto
Synthesizing Complex Natural Scenes using Volumetric Textures

March 19, 1997
John Canny, UC Berkeley
PROPs: Personal Robot Presences

March 13, 1997
John Canny, UC Berkeley
3DDI: 3D Direct Interaction

November 12, 1996
Paul Debevec, UC Berkeley
Modeling and Rendering Architecture from Photographs

October 28, 1996
Michal Irani, David Sarnoff Research Center
A Unified Approach to 2D and 3D Scene Analysis

September 11, 1996
Oded Sudarsky, Technion-Israel Institute of Technology
Output-Sensitive Visibility Algorithms for Dynamic Scenes with Applications to Virtual Reality

May 17, 1996
Steven J. Gortler, Harvard University
The Lumigraph

March 22, 1996
Peter Schroeder, Caltech
Spherical Wavelets: Efficiently Representing Functions on the Sphere

October 27, 1995
Thomas A. Funkhouser, Bell Laboratories
Visualization and Interaction in Large 3D Virtual Environments

June 21, 1995
Marc Levoy, Stanford
A Project to Build a 3D Fax Machine

May 4, 1995
Nina Amenta, The Geometry Center at the University of Minnesota
Computer Vision as Low-Dimensional Optimization

April 21, 1995
Julie Dorsey and Seth Teller, MIT Computer Graphics Group at the MIT Laboratory for Computer Science
Ten Hard Problems in Computer Graphics

Abstract - "BHUTAN: A Visual Odyssey Across the Last Himalayan Kingdom" by Michael Hawley

Michael Hawley
MIT - Media Laboratory and Friendly Planet

We recently published the world's largest book about one of the smallest and most extraordinary countries on earth: BHUTAN. The megatome is quite breathtaking (especially if you try to lift it): it opens to 5x7 feet and weighs 133 pounds. It's pricey ($10,000 per click at Amazon.com), partly because every copy is hand-built by Acme Bookbinding (the world's oldest bookbindery), and partly because the proceeds (as much as $8,000 per copy) are being donated to help needy schools and scholars in that country. BHUTAN has already been called a masterpiece and has received worldwide acclaim in print and broadcast media. More than $500,000 in donations has been contributed; and let's not forget that the big book was produced exclusively with the HP DesignJet 5500 - with enormous assistance from people from all over HP.

In this talk I have a chance to do something special: not only show you the book, but tell a bit of the remarkable story of how it came to be. Technically, the effort helped push on limits of imaging systems (hence our MIT research interest, supported by iCampus). But the genesis of the idea, the remarkable adventures in photographing Bhutan's rich cultures (working with young Bhutanese kids to take more than 40,000 images), the production of David Macaulay's exquisite map, the crafty old-world methods used to "super size" the binding, and even the innovative "grantsmanship" that made the work possible - all of this adds up to a most unusual tale in which very high tech methods produce charming low-tech results.

Since this effort was incubated via iCampus at MIT, I am especially proud to share with our community the story of the making of BHUTAN.

Links: our page on it: http://www.friendlyplanet.org/bhutan
buy one now: http://www.amazon.com/bhutan
recent press: http://www.technologyreview.com/articles/atwood0204.asp
===================================================
Bio:
Michael Hawley is a devoted educator. A member of the faculty at MIT in the Department of Electrical Engineering and Computer Science as well as the Media Lab for about a decade, he held the Dreyfoos Professorship and later became MIT's Director of Special Projects. Educated at MIT and Yale, Hawley is an explorer who has consistently pushed the frontiers of technology, literally and figuratively. Michael's career has included research posts at Bell Laboratories, at IRCAM in Paris, at Lucasfilm (where he played a small part in the pioneering efforts in digital cinema), and at NeXT (where he created the first generation of digital books and libraries with Steve Jobs). He has traveled and lectured in over 60 countries and led major research efforts at MIT. But his favorite times are spent with young students in some of the world's farthest reaches.

The same Michael Hawley is also an accomplished pianist. A protégé of Ward Davenny, his teachers have included Jon Quinn, Claude Frank, Earl Wild, and David Deveau. He won the Van Cliburn International Piano Competition for Outstanding Amateurs in 2002. He performs only rarely, but recently made debuts at Symphony Hall and Jordan Hall in Boston.

Host: Seth Teller

Abstract - "Synthetic aperture photography and illumination using arrays of cameras and projectors" by Marc Levoy

Marc Levoy
Stanford - Computer Graphics Laboratory

Leonardo observed that if you hold an object near your eye, and that object is smaller than your pupil (he used a needle), then it no longer obscures your vision. A dense array of cameras can be treated as a synthetic "eye" with an unusually large pupil. Such a system has a shallow depth of field, allowing us to "see through" partially occluding objects like foliage and crowds. We call this idea synthetic aperture photography (SAP). In the first part of this talk, I will describe two systems we have built to explore this idea: a camera aimed at an array of mirrors, and an array of 128 custom CMOS video cameras.
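The shift-and-add idea behind synthetic aperture photography is compact enough to sketch. Below is a minimal numpy illustration, assuming a planar camera array with known offsets and a fronto-parallel focal plane; the function and parameter names are illustrative, not code from the talk.

    import numpy as np

    def synthetic_aperture_refocus(images, offsets, depth):
        """Shift-and-add refocusing for a planar camera array.

        images  : list of H x W frames, one per camera
        offsets : (N, 2) camera positions in the array plane
        depth   : depth of the desired focal plane
        """
        acc = np.zeros(images[0].shape, dtype=np.float64)
        for img, (ox, oy) in zip(images, offsets):
            # Parallax of a point on the focal plane is proportional to
            # the camera offset and inversely proportional to its depth.
            dx = int(round(ox / depth))
            dy = int(round(oy / depth))
            acc += np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
        # Points on the focal plane align and reinforce; occluders in
        # front shift differently in each view and blur away.
        return acc / len(images)

The wrap-around of np.roll stands in for proper resampling; the point is only that averaging many slightly shifted views behaves like one eye with a very large pupil.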

Similarly, a dense array of projectors can be treated as a single synthetic projector with an unusually large aperture. Such a system produces a real (aerial) image having such a shallow depth of field that the image ceases to exist a small distance away from the focal plane. We call this idea synthetic aperture illumination (SAI). In the second part of this talk, I will describe two systems we have built to explore this idea: a video projector aimed at an array of mirrors, and an array of miniature video projectors.

Finally, I will describe how these two ideas can be combined to implement a large-scale analogue of confocal microscopy - a family of techniques that employ wide-aperture structured illumination and synchronized imaging to produce cross-sectional views of biological specimens. Replacing the optical apertures used in confocal microscopy with arrays of cameras and video projectors, we can selectively image any plane in a partially occluded environment, and we can see further through weakly scattering environments such as murky water than is otherwise possible. By thresholding these confocal images, we produce mattes that can be used to selectively illuminate any plane in a scene. These capabilities should find applications in scientific imaging, remote sensing, surveillance, shape measurement, and stage lighting.

Bio:
Marc Levoy is an Associate Professor of Computer Science and (jointly) Electrical Engineering at Stanford University. He received a Bachelor's and Master's in Architecture from Cornell University in 1976 and 1978, and a PhD in Computer Science from the University of North Carolina at Chapel Hill in 1989.

In the 1970s, Levoy worked on computer animation, developing an early computer-assisted cartoon animation system. In the 1980s, he worked on volume rendering, a family of techniques for displaying sampled three-dimensional functions such as medical scanner data. In the 1990s, he worked on technology and algorithms for digitizing three-dimensional objects.

This led to the Digital Michelangelo Project, in which he and a team of researchers spent a year in Italy digitizing the statues of Michelangelo using laser scanners. His current interests include sensing and display technologies, image-based modeling and rendering, and applications of computer graphics in art history, preservation, restoration, and archaeology. Levoy received the NSF Presidential Young Investigator Award in 1991 and the SIGGRAPH Computer Graphics Achievement Award in 1996 for his work in volume rendering.

Host: Frédo Durand
Abstract - "Physics Based Simulation for Computer Graphics" by Ronald Fedkiw

Ronald Fedkiw
Stanford - Computer Science

This talk will take a survey approach to physics based simulation for computer graphics. For each topic, we will discuss the basic simulation techniques, emphasizing the key enabling technology that allows for high quality simulations. First, we will consider smoke simulation on both uniform and octree grids using vorticity confinement to provide for adequate rolling motions. A two-dimensional technique for simulating nuclear explosions will be discussed as well. Then we will discuss the use of level set methods for simulating water and other liquids, along with the ghost fluid method for simulating fire.

Turning to the simulation of solids, we will first address mesh generation techniques for both volumetric tetrahedral meshes and for triangulated surfaces. Then rigid body simulation techniques will be discussed with an emphasis on methods for treating contact and collision between large numbers of nonconvex bodies. Next, we will consider the simulation of cloth and thin shells with an emphasis on newly proposed techniques for treating bending, as well as fool-proof collision techniques that stop all nonphysical interpenetrations of the thin triangulated surface.

Finally, we turn to finite element simulations, proposing a new technique that allows for robust simulation even under adverse conditions such as the full collapse or inversion of individual elements. Then a new discrete virtual node algorithm that provides the degrees of freedom necessary to topologically separate a mesh along arbitrary (possibly branching) curves will be presented in the context of fracture. Examples of finite element simulation include elasticity and plasticity for both shells and volumes with isotropic and anisotropic materials including both active and passive components. Time permitting, implications for simulating the human musculoskeletal system will be discussed.
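For a flavor of one enabling technology named above, here is a rough numpy sketch of 2D vorticity confinement on a uniform grid, the idea used to sustain rolling smoke motions; the grid layout, names, and constants are assumptions for illustration, not the talk's code.

    import numpy as np

    def vorticity_confinement_force(u, v, h, eps):
        """2D vorticity confinement body force on a uniform grid.

        u, v : velocity components, arrays of shape (H, W)
        h    : grid spacing; eps : confinement strength
        """
        # Scalar vorticity w = dv/dx - du/dy (central differences).
        w = np.gradient(v, h, axis=1) - np.gradient(u, h, axis=0)
        # N: normalized gradient of |w|, pointing toward vortex centers.
        gx = np.gradient(np.abs(w), h, axis=1)
        gy = np.gradient(np.abs(w), h, axis=0)
        mag = np.sqrt(gx * gx + gy * gy) + 1e-10
        Nx, Ny = gx / mag, gy / mag
        # f = eps * h * (N cross w), with the vorticity along the z axis.
        # Adding f in the velocity update feeds energy back into swirls
        # that numerical dissipation would otherwise smooth away.
        return eps * h * (Ny * w), eps * h * (-Nx * w)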

Host: Jovan Popović
Abstract - "Cg - The Future of Computer Graphics" by Cass Everitt

Cass Everitt
NVIDIA
Manager of Developer Technology

The field of interactive computer-generated graphics is rapidly converging with film, but there still are several challenges. In the past, achieving interactive performance required assembly programming expertise and sophisticated PC hardware and systems. Today, real-time rendering can be offloaded from the CPU to high-performance Graphics Processing Units (GPUs), and developers are now able to bring more compelling effects to the desktop. However, this technology alone does not reduce programming complexity. The real-time rendering industry is ready for a new approach to graphics programming - a high-level language for graphics: Cg!

This method for programming real-time pixel and vertex effects eliminates the need to write applications with extensive low-level assembly code. With built-in abstractions and optimizations, a new graphics programming language can revolutionize the industry by increasing the number of applications with cinematic-quality effects.

Today's software applications are universally developed with high-level programming languages, typically either C or C++. However, when it comes to creating complex visual effects, developers have had to use a highly restrictive assembly language. The Cg programming language, considered the "C" for graphics, provides developers with a major leap forward in ease and speed of programming of special effects and accelerates delivery of real-time cinematic-quality graphics experiences on the desktop.

NVIDIA is leading the way in the real-time rendering revolution, and we're taking a quantum leap toward enabling real-time cinematic rendering!

Abstract - "Interactive Shape Design with Multiresolution Subdivision Surfaces" by Ioana Boier Martin

Ioana Boier Martin
IBM T.J. Watson Research Center
Visual Technologies Department

Pasting, engraving, trimming, and free-form editing of surfaces are important in many applications. Their implementation is non-trivial due to computational, topological, and smoothness constraints that the underlying surface must satisfy. We developed a set of algorithms based on multiresolution subdivision surfaces that enable such editing operations at different resolutions on models of arbitrary topology.

In this talk I will focus on cutting and pasting of surfaces for combining 3D data from various sources. The basic idea is quite simple: select a region on a given surface, extract it, and paste it onto another surface. The complications arise from the need to define what part of a surface represents the detail to be transferred and what constitutes the base with respect to which the details are expressed. In addition to addressing this issue, I will present algorithms for target area identification and for finding suitable maps between the two surfaces. I will use interactive demos to illustrate the results.
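As a toy illustration of the base/detail split described above (the actual system uses multiresolution subdivision surfaces and computed correspondence maps), here is a numpy sketch: details are scalar offsets from a smoothed base along vertex normals, and pasting re-applies them over the target's own base. All names, and the assumption of a one-to-one vertex correspondence, are mine for illustration.

    import numpy as np

    def smooth(V, neighbors, iters=10):
        """Toy 'base' surface: repeated Laplacian averaging of vertices."""
        V = V.copy()
        for _ in range(iters):
            V = np.array([V[n].mean(axis=0) for n in neighbors])
        return V

    def extract_detail(V, N, neighbors):
        """Detail = signed offset of each vertex from the smoothed base,
        measured along the unit vertex normals N (shape (n, 3))."""
        return ((V - smooth(V, neighbors)) * N).sum(axis=1)

    def paste_detail(V_tgt, N_tgt, neighbors_tgt, detail):
        """Re-express the extracted offsets over the target's own base,
        assuming source and target vertices are in correspondence."""
        return smooth(V_tgt, neighbors_tgt) + detail[:, None] * N_tgt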

This work is part of a joint project between IBM, NYU, and Dassault Systemes.

Bio:
Ioana Boier Martin is a Research Staff Member in the Visual Technologies Department at the IBM Thomas J. Watson Research Center. Her research interests include shape modeling, digital image and geometry processing, and interactive techniques. She received a Ph.D. degree in Computer Science from Purdue University in 1996. Currently, her research focuses on surface styling using a multiresolution subdivision approach. Since 1998 when she joined IBM, she has also worked on adaptive delivery of 3D models over networks, 3D data compression, and texture reconstruction. She can be reached at ioana@us.ibm.com. Additional information regarding her projects is available at http://www.research.ibm.com/people/i/imartin/

Host: Frédo Durand
Abstract - "Interactive High-Fidelity Character Animation" by Zoran Popović

Zoran Popović
University of Washington
Department of Computer Science and Engineering

Joint Graphics / Vision Seminar

This talk will describe recent advances in the synthesis of highly realistic character animation. We use physics-based modeling, motion capture, and static scan data to model highly realistic characters that can be controlled at interactive speeds. The talk will describe our approach to three fundamental problems in realistic character animation:

  1. Physics-based Refinement of Sketched Character Animation. We introduce a framework for synthesis of complex realistic motion with little user effort. We show that constraints that enforce realism can be distilled into a few simple core invariants. We apply our technique to create motions of complicated jumps, runs, gymnastics, acrobatics and ice skating.
  2. Articulated Body Deformation from Range Scan Data. This work presents a first attempt at capturing the motion of detailed human shape. We demonstrate results on a real-time animated human upper body model with realistic muscle bulging and skin folding behavior.
  3. Skeletal Animation of Deformable Characters. We employ a hierarchical, volumetric, finite element mesh to simulate skeleton-driven elastic body dynamics. We demonstrate the ability to animate complex dynamic models using simple skeletons and coarse volumetric meshes in a manner that simulates secondary motions of flesh at interactive rates.
The research presented was done in collaboration with Brett Allen, Steve Cappel, Brian Curless, Tom Duchamp, Seth Green and Karen Liu.

Host: Jovan Popović
Abstract - "Perspective Unbound: Visionary Rendering" by Daphna Weinshall

Daphna Weinshall
The Hebrew University of Jerusalem, School of Computer Science and Engineering

Joint Graphics / Vision Seminar

Practically all image rendering methods today use the perspective projection, or try to approximate it. We develop a new image-based rendering approach which is based on a new projection model -- the Two-Slit Camera. The model is obtained by relaxing the strict requirement of perspective projection, namely, that all projection rays intersect at a single point (the focal point). Instead, projection rays are required by definition to intersect two curves in space.

The Two-Slit Camera model allows us to develop image-based rendering techniques that can use much simpler image data, and have reduced storage and computation needs. For example, from an image sequence taken by a side-moving camera we can generate a forward moving sequence. This is done with no computation and no explicit use of 3D scene structure. Moreover, precise information about the camera's internal and external parameters is not essential. While the images in the synthetic sequences are not perspective, they still look very realistic. This, we believe, opens the door to the emergence of simpler image-based rendering.
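To make the projection model concrete, here is a small numpy sketch for the case where the two curves are straight slits (lines in 3D): it constructs the unique ray through a scene point that meets both slits. A hedged illustration; the names and the non-degeneracy assumptions are mine.

    import numpy as np

    def meet_plane(p, d, plane_pt, plane_n):
        """Intersect the line p + t*d with the plane (plane_pt, plane_n)."""
        t = np.dot(plane_n, plane_pt - p) / np.dot(plane_n, d)
        return p + t * d

    def two_slit_ray(P, slit1, slit2):
        """Projection ray through scene point P for a two-slit camera.
        Each slit is a (point, direction) pair defining a 3D line."""
        (p1, d1), (p2, d2) = slit1, slit2
        # The plane containing slit 1 and P cuts slit 2 in one point q2.
        q2 = meet_plane(p2, d2, p1, np.cross(d1, P - p1))
        # The plane containing slit 2 and P cuts slit 1 in one point q1.
        q1 = meet_plane(p1, d1, p2, np.cross(d2, P - p2))
        # The line through q1 and q2 lies in both planes, hence passes
        # through P and meets both slits: it is the projection ray.
        return q1, q2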

===========

Joint work with:
Mi-Suen Lee, Tomas Brodsky, Assaf Zomet, Doron Feldman, Shmuel Peleg

Host: Seth Teller
Abstract - "(Beyond) Plenoptic Sampling" by Harry Shum

Harry Shum
Microsoft Research

Joint Graphics / Vision Seminar

Image-based rendering has become an active research area in vision and graphics. In this talk, I will give a brief overview of research activities in image-based rendering. Then I will focus on three projects we have been working on in the last few years: concentric mosaics (1999), plenoptic sampling (2000) and manifold hopping (2001). I will argue at the end of my talk that image-based rendering can be unified with traditional geometry-based 3D graphics, but the key is how to represent and sample the environment.

Bio
Harry Shum received his Ph.D. in robotics from the School of Computer Science, Carnegie Mellon University in 1996. He worked as a researcher for three years in the vision technology group at Microsoft Research Redmond. In 1999, he moved to Microsoft Research Asia where he is currently a senior researcher and the assistant managing director. His research interests include computer vision, computer graphics, user interface, pattern recognition, statistical learning and robotics.

Host: Leonard McMillan
Abstract - "3D model acquisition and tracking using uncalibrated silhouettes" by Robert Cipolla

Roberto Cipolla
University of Cambridge, UK

I will review the recovery of 3D shape from uncalibrated images and present novel algorithms for automatically acquiring arbitrary objects by tracking their silhouettes. Examples will be given of the reconstruction of sculpture, people and buildings.
I will also present algorithms for the real-time tracking of articulated objects, such as the human body or hand, again exploiting information from the outline.

On-line resources:
See http://www.jesus.cam.ac.uk/virtualtour/hires/models/models.htm for a 3D virtual tour of Jesus College which was built using our PhotoBuilder application.

See http://svr-www.eng.cam.ac.uk/~cipolla/research.htm for 3D models of sculptures and people and software and papers.

Software available from http://svr-www.eng.cam.ac.uk/research/vision/software.html

Host: Seth Teller
Abstract - "Seeing 3D: The Space of All Stereo Images" by Steven Seitz

Steven Seitz
Department of Computer Science and Engineering
University of Washington

A stereo pair consists of two images with purely horizontal parallax, that is, every scene point visible in one image projects to a point in the same row of the other. Stereo images play a central role in both human depth perception and computer-based shape reconstruction techniques. However, a single stereo pair typically yields a very incomplete perception of the world, due to limited coverage and field of view.

In this talk, I will describe a class of new "panoramic" stereo image representations that can be used to image an entire scene at once. These images can be acquired by moving a conventional camera along a path and compositing pixels from different views into a "multiperspective" mosaic image. Future sensor designs may enable capturing such images directly. I will show several examples of multiperspective stereo images, and motivate their use for visualization and 3D reconstruction of objects and scenes. In addition, I will classify the space of all possible stereo images, by defining all distributions of light rays and sensor designs that produce a stereo pair.
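One concrete reading of the "compositing pixels from different views" idea, in the spirit of the pushbroom/omnistereo mosaics this line of work builds on: paste one column left of center and one column right of center from each frame of a panning camera, yielding two multiperspective panoramas with horizontal parallax. A sketch under those assumptions, with illustrative names:

    import numpy as np

    def stereo_panoramas(frames, offset):
        """Build a multiperspective stereo pair from a panning camera.

        frames : list of H x W images from a camera rotating at a
                 constant rate
        offset : half-separation, in pixels, of the two sampled columns
        """
        W = frames[0].shape[1]
        c = W // 2
        # Each frame contributes one column to each panorama; columns
        # left and right of center see the scene from slightly different
        # viewpoints, so the two mosaics form a stereo pair.
        left = np.stack([f[:, c - offset] for f in frames], axis=1)
        right = np.stack([f[:, c + offset] for f in frames], axis=1)
        return left, right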

See http://grail.cs.washington.edu/projects/stereo/ for more information.

Host: Jovan Popović
Abstract - "3D Reconstruction of Environments "as built"" by Joao Goncalves

Joao Goncalves
RESOLV (REconstruction using Scanned Laser and Video)
EU Joint Research Centre in Ispra, Italy

The objective of 3D Reconstruction is to create a 3D photo-realistic computer model of a real object or environment "as-built". Our approach to 3D Reconstruction is to combine range measurements and pictures. Two technologies are used: (i) a combined laser/digital-camera sensor for data acquisition and (ii) mobility for resolving spatial occlusions. During the presentation we will address the main steps in building a complete 3D texture-mapped model and describe some major projects. At the end, we will demonstrate state-of-the-art results in a few application fields (e.g., tourism, real estate, cultural heritage and virtual studios).

See http://www.scs.leeds.ac.uk/resolv/welcome.htm for more information.

Host: Seth Teller
Abstract - "The Digital Michelangelo Project" by Marc Levoy

Marc Levoy
Computer Science Department
Stanford University

Recent improvements in laser rangefinder technology, together with algorithms developed in our research group for combining multiple range images, allow us to reliably and accurately digitize the external shape of many physical objects. As an application of this technology, I and a team of 30 faculty, staff, and students from Stanford University and the University of Washington spent the 1998-99 academic year in Italy digitizing the sculptures and architecture of Michelangelo.

Our primary acquisition device was a laser triangulation rangefinder mounted on a large motorized gantry. Using this device and a smaller rangefinder mounted on a jointed digitizing arm, we created 3D computer models of 10 statues, including the David. These models range in size from 100 million to 2 billion polygons. Using a time-of-flight rangefinder, we also created 3D computer models of the interiors of two museums, including Michelangelo's Medici Chapel. Finally, using our rangefinders in conjunction with a high-resolution digital color camera, we created a light field and aligned 3D computer model of Michelangelo's highly polished statue of Night. A light field is a dense array of images viewable using new techniques from image-based rendering.

As a side project, we also scanned all 1,163 fragments of the Forma Urbis Romae, the giant marble map of ancient Rome carved circa 200 A.D. Piecing this map together has been one of the great unsolved problems of archeology. Our hope is that by scanning the fragments and searching among the resulting geometric models for matching surfaces, we can find new fits among the fragments.

In this talk, I will outline the technological underpinnings, logistical challenges, and possible outcomes of this project.

Host: Julie Dorsey
Abstract - "Generating Synthetic Motion Using Physically Based Simulation" by James O'Brien

Realistic synthetic motion is required in applications ranging from commercial entertainment to surgical training. However, generating realistic motion for complex objects is a difficult task because of the large amount of data that must be specified and because humans are very good at detecting unnatural or implausible motions. I have explored one possible solution to this problem: using physically based methods to automatically generate motion for animated objects through the numerical simulation of their physical counterparts. In particular, I have developed a series of techniques for modeling the behavior of passive systems such as water, cloth, and breaking objects, as well as techniques for coupling multiple, heterogeneous systems together. In this talk, I will emphasize recent research on modeling fracture propagation in a dynamically restructured finite element mesh in order to animate objects that can crack or tear. Because my goal is realistic motion, I will also discuss evaluation techniques such as user testing and side-by-side comparison with high-speed video footage.

Hosts: Professors Lozano-Perez, Dorsey, McMillan and Teller
Abstract - "Static, Dynamic and Transparent Structure From Motion " by Shai Avidan

The problem of structure-from-motion (SFM) has been extensively explored. Yet, most of the work seems to focus on the case of a camera moving in a static, Lambertian 3D environment. Clearly, our world is much more complicated than that, and pushing the envelope of SFM algorithms is needed. In this talk I will review some of the progress made in three complementary areas of SFM. First, I will present a new method for recovering the ego-motion of a camera moving in a static 3D world. This method takes advantage of the relationships between affine and perspective projections and is especially suited for dealing with video sequences. Then, I will present a new method that can recover the 3D coordinates of moving objects seen from a single moving camera. This problem can only be solved if some constraints are placed on the motion of the objects. I coin the term "trajectory triangulation" to describe this family of tasks and, in particular, I focus on the case of objects that move along a straight line or along a conic section. Finally, I will present a new approach to handling reflections in 3D scenes. At its core, the new approach offers a new method for decomposing reflections, seen by a moving camera, into the individual layers that created them. Taken together, these contributions extend the capabilities of SFM algorithms.
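For the straight-line case of trajectory triangulation, one standard formulation is to find the 3D line that meets every back-projected viewing ray; in Pluecker coordinates that intersection condition is linear. Below is a hedged numpy sketch of that least-squares formulation (it ignores the ordering constraints along the trajectory that the full method can exploit, and the names are illustrative):

    import numpy as np

    def fit_trajectory_line(ray_points, ray_dirs):
        """Fit a 3D line meeting all back-projected rays (least squares).

        A line with Pluecker coordinates (d, m), where m = p cross d,
        meets a ray (d_i, m_i) iff dot(d, m_i) + dot(d_i, m) = 0,
        which is linear in the unknowns (d, m).
        """
        rows = []
        for p, d in zip(ray_points, ray_dirs):
            m = np.cross(p, d)
            rows.append(np.concatenate([m, d]))  # coefficients of (d, m)
        # The best line is the least-squares null direction of the system.
        _, _, Vt = np.linalg.svd(np.asarray(rows))
        dL, mL = Vt[-1, :3], Vt[-1, 3:]
        # (An exact line also satisfies dot(dL, mL) = 0; this sketch
        # does not enforce that side constraint.)
        return dL, mL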

Parts of this work were done with P. Anandan, Amnon Shashua and Rick Szeliski.

Host: Seth Teller
Abstract - "R&D at Pixar" by Darwyn Peachey

Darwyn Peachey is the Vice President of Research & Development at Pixar Animation Studios in Richmond, California. He will describe the unique collaboration of art and technology involved in making feature films at Pixar. The presentation will outline the stages in the life cycle of an animated film and will show how the combined efforts of many people with diverse skills are required to produce each film.

Host: Julie Dorsey
Abstract - "Multi-Frame Analysis of Information in Video" by Michal Irani

Video is a very rich source of information. It provides *continuous* coverage of scenes over an *extended* region both in time and in space. That is what makes video more interesting than a plain collection of images of the same scene taken from different views. Yet most video analysis algorithms do not take advantage of the full power of video data, and usually use information only from a few *discrete* frames or points at any given time.

In this talk I will describe some aspects of our research on multiple-frame video analysis that aims to take advantage of both the continuous acquisition and extended spatio-temporal coverage of video data. First, I will describe a new approach to estimating image correspondences simultaneously across multiple frames. We show that for static (or "rigid") scenes, the correspondences of multiple points across multiple frames lie in low-dimensional subspaces. We use these subspace constraints to *constrain* the correspondence estimation process itself. These subspace constraints are geometrically meaningful and are not violated at depth discontinuities nor when the camera motion changes abruptly. This enables us to obtain dense correspondences *without* using heuristics such as spatial or temporal smoothness assumptions. Our approach applies to a variety of imaging models, world models, and motion models, yet does *not* require prior model selection, nor does it involve recovery of any 3D information.
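The rank constraint itself is easy to state in code. The sketch below only projects a measured displacement matrix onto a low-dimensional subspace; the method described above goes further and uses the subspace to constrain the estimation itself. Shapes and names are illustrative.

    import numpy as np

    def enforce_subspace(flows, rank):
        """Project multi-frame correspondences onto a low-rank subspace.

        flows : (2F, P) array; rows hold the x- then y-displacements of
        P points tracked across F frames. For rigid scenes these lie
        near a low-dimensional subspace, so truncating the SVD removes
        components that violate the multi-frame geometry.
        """
        U, s, Vt = np.linalg.svd(flows, full_matrices=False)
        s[rank:] = 0.0  # keep only the leading subspace directions
        return (U * s) @ Vt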

The spatio-temporal scene information contained in video data is distributed across many video frames and is highly redundant by nature. Accurate knowledge of both the continuous as well as the extended spatio-temporal data redundancy can be powerfully exploited to integrate scene information that is distributed across many video frames into compact and coherent scene-based representations. These representations can be very efficiently used to view, browse or index into, annotate, edit and enhance the video data. In the second part of my talk, I will show some demonstrations of video applications, which exploit both the continuous acquisition and the extended coverage of video data. In particular, I will show a live *interactive* demonstration of indexing, browsing, and manipulation of video data, as well as video editing and video enhancement applications.

Host: Seth Teller
Abstract - "Artistic Halftoning - Between Technology and Art" by Victor Ostromoukhov

Halftoning, a technique for producing the illusion of continuous tones on bi-level devices, is widely used in today's printing industry. Different variants of halftoning - clustered and dispersed dither, error-diffusion, blue-noise mask, stochastic clustered dithering - have been extensively studied during the last thirty years. Inspired by the pioneering work of Professor William Schreiber and his disciple Robert Ulichney at MIT in the 1970s and 1980s, researchers have suggested several considerable improvements. Nevertheless, various unsolved problems, including very important ones, still resist all attempts and represent an excellent topic of research in computer graphics and discrete mathematics.
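For context on the baseline techniques named above, here is a minimal Python sketch of classical ordered (dispersed-dot) dithering with a recursively built Bayer threshold matrix; the parameter names are illustrative.

    import numpy as np

    def bayer_matrix(n):
        """Dispersed-dot dither matrix of size 2^n x 2^n (Bayer)."""
        M = np.array([[0, 2], [3, 1]])
        for _ in range(n - 1):
            M = np.block([[4 * M, 4 * M + 2],
                          [4 * M + 3, 4 * M + 1]])
        return M

    def ordered_dither(gray, n=3):
        """Halftone a grayscale image with values in [0, 1]."""
        M = bayer_matrix(n)
        k = M.shape[0]
        thresholds = (M + 0.5) / (k * k)  # normalized to (0, 1)
        H, W = gray.shape
        tiled = np.tile(thresholds, (H // k + 1, W // k + 1))[:H, :W]
        # A pixel is printed white where it exceeds its cell threshold;
        # the matrix ordering spreads dots evenly at every gray level.
        return (gray > tiled).astype(np.uint8)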

After a short introduction to basic techniques of halftoning, we shall focus our attention on different artistic halftoning techniques developed at EPFL (Swiss Federal Institute of Technology, Lausanne, Switzerland). We shall briefly explore Rotated Dispersed Dither, a technique based on discrete one-to-one rotation (SIGGRAPH94), and Artistic Screening, a library-based approach which was presented at SIGGRAPH95. Then, we shall extend this basically black-and-white technique to multiple colors (Multicolor and Artistic Dithering, SIGGRAPH99). This technique permits printing with non-standard colors such as opaque or semi-opaque inks, using traditional or artistic screens of arbitrary complexity. Finally, we shall sketch an extension of traditional dithering into the purely artistic domain: Digital Facial Engraving, presented at SIGGRAPH99.

Bio: Victor Ostromoukhov studied physics and mathematics at the Moscow Institute of Physics and Technology (MFTI). After graduating in 1980, he worked as a researcher at the Globe Physics Institute in Moscow, part of the Russian Academy of Sciences. In 1983, he emigrated to France. He worked as a computer engineer and (later) a computer scientist with several prominent European companies, including SG2 (Paris) and Olivetti (Paris and Milan). In 1989, he joined the Peripheral Systems Laboratory at Ecole Polytechnique Federale de Lausanne (EPFL) in Lausanne, Switzerland, where he is currently a senior researcher. In 1995, he completed his doctorate in CS there. During the winter and spring quarters of 1997, he was an invited professor at the University of Washington in Seattle. His current research interests are mainly in computer graphics, and more specifically in non-photorealistic rendering, digital halftoning, and all topics related to digital art. In mathematics, his research interests are the theory of tilings and space-filling curves.

Host: Julie Dorsey
Abstract - "The Art and Technology of Pixar's Animated Films" by Darwyn Peachey

Darwyn Peachey is the Vice President of Research & Development at Pixar Animation Studios in Richmond, California. He will describe the unique collaboration of art and technology involved in making feature films at Pixar, with examples drawn from "A Bug's Life" and "Toy Story". The presentation will outline the stages in the life cycle of an animated film and will show how the combined efforts of many people with diverse skills are required to produce each film.

Host: Seth Teller yellowline
Abstract - "Optimized Hierarchical Occlusion Culling for Z-Buffer Systems" by Ned Greene

Within a rendering system having z-buffer hardware for rasterizing polygons, the term "occlusion culling" applies to culling occluded polygons prior to rasterization in order to accelerate rendering. In this talk I'll review published work on hierarchical methods for culling occluded geometry and then describe how these methods can be extended to accelerate z-buffer systems having hardware rasterizers. Hierarchical culling is performed by a culling stage that culls occluded geometry and passes visible polygons on to z-buffer hardware. The culling stage performs conservative culling using a highly optimized variation of hierarchical z-buffering that requires far less computation, memory, and memory traffic than the original algorithm. With these optimizations, the traffic in z values required to perform conservative culling on densely occluded scenes is very low, typically much less than a single z access per image sample on average. Another innovation that will be discussed is a general way to trade off the quality of z-buffer images in exchange for faster rendering.
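A toy numpy sketch of the conservative test at the heart of hierarchical z-buffering: build a max-pyramid over the z-buffer (each coarse cell stores the farthest z beneath it) and reject a primitive's screen bounding box if its nearest depth lies behind the farthest depth already stored over that region. It assumes a square power-of-two buffer with z increasing away from the viewer; names are illustrative, not the talk's implementation.

    import numpy as np

    def build_z_pyramid(zbuf):
        """Max-reduce the z-buffer; square power-of-two size assumed."""
        levels = [zbuf]
        while levels[-1].shape[0] > 1:
            z = levels[-1]
            levels.append(np.maximum.reduce(
                [z[0::2, 0::2], z[0::2, 1::2],
                 z[1::2, 0::2], z[1::2, 1::2]]))
        return levels

    def conservatively_occluded(levels, x0, y0, x1, y1, z_near):
        """True only if the box [x0,x1] x [y0,y1] at depth z_near is hidden.

        Climbs to a coarse level so only a few cells are read; coarse
        cells may overhang the box, so a False answer means merely
        'possibly visible' - the test never culls anything visible.
        """
        lvl = 0
        while lvl + 1 < len(levels) and min(x1 - x0, y1 - y0) >> (lvl + 1) >= 2:
            lvl += 1
        s = 1 << lvl
        cells = levels[lvl][y0 // s:y1 // s + 1, x0 // s:x1 // s + 1]
        return z_near > cells.max()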

Related publications:
"Hierarchical Z-Buffer Visibility," Siggraph '93
"Hierarchical Polygon Tiling with Coverage Masks," Siggraph '96

Ned Greene has worked in computer graphics research at the NYIT Computer Graphics Lab, Apple Computer's Advanced Technology Group, and at Hewlett-Packard Laboratories. He holds a PhD in computer science from the University of California at Santa Cruz. Over the years, Ned has been a frequent contributor to the Siggraph technical program and Electronic Theatre.

Host: Seth Teller
Abstract - "Robust Mesh Watermarking" by Hugues Hoppe

Watermarking provides a mechanism for copyright protection of digital media by embedding information identifying the owner in the data. The bulk of the research on digital watermarks has focused on media such as images, video, audio, and text. Robust watermarks must be able to survive a variety of "attacks", including resizing, cropping, and filtering. For resilience to such attacks, recent watermarking schemes employ a "spread-spectrum" approach --- they transform the document to the frequency domain (e.g. using DCT) and perturb the coefficients of the perceptually most significant basis functions. In this paper we extend this spread-spectrum approach for the robust watermarking of arbitrary triangle meshes.

Generalization of the spread spectrum techniques to surfaces presents two major challenges. First, arbitrary surfaces lack a natural parameterization for frequency-based decomposition. Our solution is to construct a set of scalar basis functions over the mesh vertices using a multiresolution analysis of the mesh. The watermark is embedded in the mesh by perturbing vertices along the direction of the surface normal, weighted by the basis functions. The second challenge is that attacks such as simplification may modify the connectivity of the mesh. We use an optimization technique to resample an attacked mesh using the original mesh connectivity. Results demonstrate that our watermarks are resistant to common mesh processing operations such as translation, rotation, scaling, cropping, smoothing, simplification, and resampling, as well as malicious attacks such as the insertion of noise, modification of low-order bits, or even insertion of other watermarks.
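A simplified numpy sketch of the embedding step described above, assuming the smooth per-vertex basis functions have already been computed (the paper derives them from a multiresolution analysis) and that detection runs on a mesh already registered and resampled to the original connectivity. The names and the simple correlation detector are illustrative.

    import numpy as np

    def embed_watermark(V, N, basis, bits, eps=1e-3):
        """Perturb vertices along their normals by a spread-out signal.

        V     : (n, 3) vertices;  N : (n, 3) unit vertex normals
        basis : (k, n) smooth scalar basis functions over the vertices
        bits  : length-k watermark of +1/-1 values
        """
        field = eps * (np.asarray(bits) @ basis)  # (n,) scalar field
        return V + field[:, None] * N

    def detect_watermark(V_attacked, V_orig, N, basis):
        """Correlate the normal-direction residual with each basis row."""
        residual = ((V_attacked - V_orig) * N).sum(axis=1)
        return np.sign(basis @ residual)  # recovered +1/-1 bits

Because each bit is spread over a low-frequency basis function, mild filtering or noise leaves the correlations largely intact, which is the point of the spread-spectrum approach.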

URL: http://research.microsoft.com/~hoppe/

Host: Leonard McMillan
Abstract - "The Digital Michelangelo Project" by Marc Levoy

Marc Levoy
Computer Science Department
Stanford University

Recent improvements in laser rangefinder technology, together with algorithms developed in our research group for combining multiple range images, allow us to reliably and accurately digitize the external shape and appearance of many physical objects. As an application of this technology, we have embarked on a multi-year project to create a high-quality 3D archive of the sculptures of Michelangelo. To accomplish this project, I and a team of Stanford students will spend the 1998-99 academic year in Italy, basing ourselves at the Stanford Overseas Studies Center in Florence.

Our primary acquisition devices for this project will be a set of high-resolution laser triangulation rangefinders and color digital still cameras mounted on mobile gantries. These devices, used in conjunction with our range image processing algorithms, will enable us to produce a set of 3D computer models, one per sculpture, each model consisting of about 100 million colored triangles. In some cases, we may augment this data with models of architectural settings, acquiring these models using a time-of-flight laser rangefinder. A second acquisition technology consists of color video cameras mounted on (different) mobile gantries. The output of these devices will be a set of light fields, which are dense arrays of images viewable using new techniques from image-based rendering.

The goals of this project are primarily scholarly and educational, although commercialization is also possible. In this talk, I will outline the motivations, technical challenges, and possible outcomes of this project. I will also enumerate some of the problems posed by incorporating 3D graphics and image-based rendering techniques into interactive multimedia venues. Finally, I will mention some applications of these technologies to problems in art preservation and archeology.

Host: Seth Teller
Abstract - "Rendering the Campanile Movie: Efficient View-Dependent Image-Based Rendering with Projective Texture-Mapping" by Dr. Paul Debevec, inconjunction with Yizhou Yu and George Borshukov

Dr. Paul Debevec
Computer Science Division
University of California at Berkeley

In this talk I will explain how we created "The Campanile Movie", a short film that used image-based modeling and rendering techniques to create photorealistic aerial cinematography of the UC Berkeley campus. The core of the work is an efficient implementation of the image-based rendering technique of view-dependent texture-mapping (VDTM) using projective texture mapping, a feature commonly available in polygon graphics hardware. VDTM is useful for generating novel views of a scene with approximately known geometry, making maximal use of a sparse set of original views. The earlier presentation of VDTM required significant per-pixel computation and did not scale well with the number of original images. In our technique, we precompute for each polygon the set of original images in which it is visible and create a "view map" data structure that encodes the best texture map to use for a regularly sampled set of possible viewing directions. To generate a novel view, the view map for each polygon is queried to determine a set of no more than three original images to blend together in order to render the polygon with projective texture-mapping. Invisible triangles are shaded using an object-space hole-filling method. We show how the rendering process can be streamlined for implementation on standard polygon graphics hardware. We present results of using the method to render a large-scale model of the Berkeley bell tower and its surrounding campus environment.
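The per-polygon view selection can be sketched directly (the system described above instead precomputes it into a view map indexed by sampled directions, which is what makes rendering cheap). A hedged numpy illustration with made-up names:

    import numpy as np

    def best_views(visible_in, view_dirs, query_dir, k=3):
        """Choose up to k original images to blend for one polygon.

        visible_in : indices of original images that see the polygon
        view_dirs  : (n, 3) unit viewing directions of all n images
        query_dir  : unit viewing direction of the novel view
        Returns image indices and blending weights summing to one.
        """
        cand = np.asarray(visible_in)
        dots = view_dirs[cand] @ query_dir  # angular closeness
        order = np.argsort(-dots)[:k]
        w = np.clip(dots[order], 0.0, None)
        if w.sum() > 0:
            w = w / w.sum()
        else:
            w = np.full(len(order), 1.0 / len(order))
        # Blending the closest views hides seams as the viewpoint moves.
        return cand[order], w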

This is joint work with Yizhou Yu and George Borshukov.

See also http://www.cs.berkeley.edu/~debevec/Campanile/

Host: Leonard McMillan
Abstract - "Small Group Behaviour in a Virtual and Real Environment: A Comparative Study" by Mel Slater

Mel Slater
Dept of Computer Science
University College London

Visiting Scientist
Research Laboratory of Electronics
Massachusetts Institute of Technology

This talk describes a recent experiment to explore the behaviour of small groups carrying out a task initially in a virtual and continuing in a real environment. Each of the 10 groups involved consisted of three people, unknown to one another beforehand. The group task consisted of solving a set of riddles. The task only involved observation and talking, and it could be solved most efficiently by group cooperation.

The focus of the study was not at all on performance, in the sense of how well the task was completed, but rather on how the social relations between the members developed in the virtual environment, and how, if at all, these carried over to their interactions in the real world. In particular, the study was concerned with a number of specific issues.

The analysis of the data generated by this experiment (questionnaires, audio and video recordings, post-experimental de-briefings) is continuing, and the talk will address some of the results discovered to date.

Host: Seth Teller
Abstract - "Modeling and Animating Highly Deformable Materials" by Mathieu Desbrun

In this talk, I will present work in modeling and animating highly deformable materials. Our work is aimed at creating virtual, physically-based models of matter that are able to automatically deform due to interactions with the environment. Coping with large deformations is known to be time consuming, so efficiency and visual quality are key issues in making deformations practical for surgery simulation or other virtual reality applications.

After a review of previous related work, I will present a hybrid model for highly deformable materials that combines implicit surfaces and a particle system. The result is a global model that combines the advantages of the two approaches and adds other valuable properties such as volume preservation.

Next I will discuss the limitations of conventional particle system approaches. An alternative model is then proposed that allows a space-time adaptive simulation, where particles can subdivide to better discretize rapidly deforming areas, or merge to simplify stable regions. Computations are therefore stable and optimized, as the discretization is automatically adapted.

Last, an active implicit skin model is introduced. This deformable surface can coat any deformable model, providing both a neat visualization and physical properties such as surface tension. More generally, it offers an efficient, yet low-cost technique to visualize adaptive models, avoiding "popping" effects through smoothing of sudden internal changes of granularity.

The new approaches are all aimed at allowing a fully adaptive simulation of deformable objects, as computations are minimized to ensure a given accuracy and focused where needed.

Biography: Mathieu Desbrun was awarded an engineering degree in Computer Science at ENSIMAG with distinction and a graduate degree in Computer Graphics and Vision, both in France in 1994. He has just completed his PhD in Computer Science at the INPG (Grenoble, France). His research interests include physically-based animation, implicit surfaces, and their multi-resolution aspects.

Host: Julie Dorsey
Abstract - "Viewing and Manipulating 3D Scenes Through Photography" by Steven Seitz

The problem of acquiring and manipulating photorealistic visual models of real scenes is a fast-growing new research area that has spawned successful commercial products like Apple's QuickTime VR. An ideal solution is one that enables (1) photographic realism, (2) real-time user-control of viewpoint, and (3) changes in illumination and scene structure.

In this talk I will describe recent work that seeks to achieve these goals by processing a set of input images (i.e., photographs) of a scene to effect changes in camera viewpoint and 3D editing operations. Camera viewpoint changes are achieved by manipulating the input images to synthesize new scene views of photographic quality and detail. 3D scene modifications are performed interactively, via user pixel edits to individual images. These user edits are automatically propagated to other images in order to preserve physical coherence between different views of the scene. Because all of these operations require accurate correspondence, I will discuss the image correspondence problem in detail and present new results and algorithms that are particularly suited for image-based rendering and editing applications.

Bio: Steven Seitz is a member of the computer vision group at Microsoft Research, which he joined in October 1997. Previously, he was a graduate student at the University of Wisconsin, Madison, where he conducted research in visual motion analysis and image-based rendering. His current interests focus on the problem of acquiring visual representations of real environments using semi- and fully-automated techniques. He received his BA in computer science and mathematics at the University of California, Berkeley in 1991, his MS in computer science at the University of Wisconsin, Madison in 1993, and worked as a summer intern in the Advanced Technology Group at Apple Computer in 1993.

Hosts: Julie Dorsey and Leonard McMillan
Abstract - "Media Synthesis" by Paul Haeberli

By applying modern computer graphics to traditional drawing, painting, photography and manufacturing, we can synthesize a new class of creative media. Several ongoing projects will be shown that explore parametric manufacturing, digital photography, abstract representations, and user interaction.

Bio: Paul Haeberli received a Bachelor of Science degree in Electrical Engineering from the University of Wisconsin in Madison, and has been working for Silicon Graphics since 1983. His research interests include geometric paper folding, laser cutting for rapid prototyping, futurist programming, image processing, and tools for exploring visual representations, geometry and shading. Mr. Haeberli is a Principal Scientist at Silicon Graphics in California. He has contributed to the development of software for all generations of SGI workstations.

Host: Seth Teller
Abstract - "The Visibility Skeleton and the 3D Visibility Complex" by Frédo Durand

Many problems in computer graphics and computer vision require accurate global visibility information. Previous approaches have typically been complicated to implement and numerically unstable, and often too expensive in storage or computation. The Visibility Skeleton is a powerful new utility that can efficiently and accurately answer visibility queries for the entire scene. The Visibility Skeleton is a multi-purpose tool, which can solve numerous different problems. A simple construction algorithm is presented which only requires the use of well-known computer graphics algorithmic components such as ray-casting and line/plane intersections. We provide an exhaustive catalogue of visual events, which completely encode all possible visibility changes of a polygonal scene into a graph structure. The nodes of the graph are extremal stabbing lines, and the arcs are critical line swaths.

Our implementation demonstrates the construction of the Visibility Skeleton for scenes of over a thousand polygons. We also show its use to compute exact visible boundaries of a vertex with respect to any polygon in the scene, the computation of global or on-the-fly discontinuity meshes by considering any scene polygon as a source, as well as the extraction of the exact blocker list between any polygon pair. The algorithm is shown to be manageable for the scenes tested both in storage and in computation time. To address the potential complexity problems for large scenes, on-demand or lazy construction is presented, its implementation showing encouraging first results.

We will then show that the Visibility Skeleton is a subset of the 3D Visibility Complex. The 3D Visibility Complex is the partition of the maximal free segments of 3D space according to the objects they touch. Intuitively, the idea is to group rays that "see" the same objects.

Hosts: Seth Teller and Julie Dorsey
ABSTRACT - "Planning Soda Hall -- and Living with the Result" by Carlo Sequin

Soda Hall is the new Computer Science Building on the Berkeley Campus, completed in 1994. Prof. Sequin has been involved with the planning, design, and construction of this building since 1986.

During the planning, conceptual design, and design development phases, the occupants of the future building can have a major influence on the result if they are willing to spend the necessary time and effort. Considering that they may have to work in the building for many decades, the investment of time and effort seems well worth it.

The talk will describe the eight-year-long interaction with architects, University administrators, lawyers, Berkeley City officials and contractors, focusing on those activities that made -- or could have made -- a crucial difference to a successful outcome. The top ten blunders in Soda Hall will be reviewed and their causes analyzed. Some lessons learned that might bring the next building development even closer to perfection will also be discussed.

The talk will be followed by an open-ended question and answer session.

Host: Seth Teller
ABSTRACT - "Synthesizing Complex Natural Scenes using Volumetric Textures" by Fabrice Neyret

Complicated repetitive scenes such as forests, foliage, grass, fur, etc, are very challenging for the usual modeling and rendering tools. The sheer amount of data, the challenges presented by the tedium of the modeling and animation tasks, and the cost of realistic rendering of these scenes make these types of environments quite rare even in today's computer graphics video productions.

We will show how the Volumetric Textures paradigm is well suited to these types of complicated scenes, as it simplifies the modeling and animation tasks and facilitates ray-traced rendering that is both efficient and quite free of aliasing artifacts. We will illustrate this approach with videos presenting complex scenes (e.g. forests) raytraced in about 20 minutes per frame with low aliasing.

The idea (initially introduced by Kajiya) is to represent a pattern of 3D geometry in a reference volume, to be mapped over an underlying surface like a regular 2D texture (a deformed copy being called a texel). The key point is that for small details, orientation is more important than shape. Thus, each voxel of the pattern contains a reflectance function, representing the infra-voxel geometry, and the pattern is pre-computed at different scales as for Mip-Mapping, in such a way that the rendering never uses data smaller than a pixel.

Bio: Fabrice Neyret is a postdoctoral fellow with the Dynamic Graphics Project at the University of Toronto. Previously, he was an engineer at TDI (Thomson Digital Image, now Alias-Wavefront), Paris. He completed a PhD in 1996 at the Syntim project, INRIA-Rocquencourt, France.

URL for texels (including papers): http://www-rocq.inria.fr/syntim/research/neyret
ABSTRACT - "PROPs: Personal Robot Presences" by John Canny

A PROP is a tele-robot that provides a mobile physical presence in a remote place. In other words, an avatar in the physical world. A PROP provides its user with a strong sense of being immersed in the remote space, and it provides the people near the PROP with a sense of a human presence in their midst. Most importantly, a PROP provides its user with the ability to work and function in the remote space in ways that are very difficult with telecommunication. We have been flying and driving tele-robot avatars for about two years. Our earliest PROPs were radio controlled, camera-equipped, indoor blimps. More recently, we have been building even simpler "carts with heads". The goal of our research is to improve the effectiveness of PROPs for tele-work. To do this, we have to hone the boundary between communication and "being-there". Video, sound and movement are not difficult to provide, and simple PROPs can be built for about the cost of a printer. So they provide a very cost- and time-effective alternative to travel. The interesting question is, after sight, sound and movement, what other capabilities are most important for tele-work? What types of sensing and actuation best enhance the sense of immersion? What can go wrong with a remotely-controlled tele-robot in your midst?

In separate work we are developing tools for acquiring and simulating the physics of real-world environments. We have developed a simulator called IMPULSE* which has an improved model for local contact. Most recently, IMPULSE has been given a Java interface, and object behaviors are now written in Java and dynamically loaded. We are experimenting with different programming styles in a quest for "standard libraries" for programming physics-based behaviors. One of our first Java behaviors was a virtual blimp, controlled by the same applet we use to drive our real blimp robots.

* IMPULSE was the thesis work of Brian Mirtich, now at MERL.

Host: Seth Teller
ABSTRACT - "3DDI: 3D Direct Interaction" by John Canny

3DDI is about direct visual interaction with a simulated or remote 3D world. Users interact with the world without gloves or motion-capture sensors and view the world stereoscopically without glasses. The project covers the pipeline of technologies from real-time 3D capture devices, through physical modeling, to rendering on autostereoscopic and volumetric displays. It is supported under the DoD's MURI program and involves researchers from UC Berkeley, MIT, and UCSF. By including all the infrastructure for direct interaction, the project provides a detailed context for development of the individual technologies, and for exploration of the driving applications in telesurgery, training, and crisis simulation. This talk will outline the project and what needs to be accomplished in its 3-year time frame.
ABSTRACT - "Modeling and Rendering Architecture from Photographs" by Paul Debevec

In this talk I will present a new approach for modeling and rendering architectural scenes from a sparse set of still photographs. The modeling approach, which combines both geometry-based and image-based techniques, has two components. The first component is a photogrammetric modeling method which facilitates the recovery of the basic geometry of the photographed scene. Our photogrammetric modeling approach is effective, convenient, and robust because it takes advantage of the constraints that are characteristic of architectural scenes. The second component is model-based stereo, which recovers how the real scene deviates from the basic model. By making use of the model, this stereo technique robustly recovers accurate depth from widely spaced image pairs. Consequently, our approach can model large architectural environments with far fewer photographs than current image-based modeling approaches. For producing renderings, we present view-dependent texture mapping, a method of compositing multiple views of a scene that better simulates geometric detail and non-Lambertian reflectance than flat texture mapping. I will present results that demonstrate our approach's ability to create realistic renderings of architectural scenes from viewpoints far from the original photographs, including the Rouen Revisited art installation presented at SIGGRAPH '96.
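As a rough illustration of the view-dependent texture mapping idea, the Python sketch below blends the photographs of a surface point according to how close each photograph's viewing direction is to the novel one. The inverse-angle weighting is an assumed, illustrative choice rather than the paper's exact blending function.

import numpy as np

def vdtm_weights(novel_dir, photo_dirs, eps=1e-6):
    """Weights for each photo, favoring views nearest the novel direction."""
    novel_dir = novel_dir / np.linalg.norm(novel_dir)
    weights = []
    for d in photo_dirs:
        d = d / np.linalg.norm(d)
        angle = np.arccos(np.clip(np.dot(novel_dir, d), -1.0, 1.0))
        weights.append(1.0 / (angle + eps))   # closer view -> larger weight
    w = np.array(weights)
    return w / w.sum()

def vdtm_color(novel_dir, photo_dirs, photo_colors):
    """Composite the per-photo colors observed at one surface point."""
    w = vdtm_weights(novel_dir, photo_dirs)
    return (w[:, None] * np.asarray(photo_colors, float)).sum(axis=0)

# Two photos of the same wall point; the novel view sits between them.
dirs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
colors = [(200, 180, 160), (180, 170, 150)]
print(vdtm_color(np.array([1.0, 0.3, 0.0]), dirs, colors))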

http://www.cs.berkeley.edu/~debevec/Research/

http://www.interval.com/projects/rouen/
ABSTRACT - "A Unified Approach to 2D and 3D Scene Analysis" by Michal Irani

The analysis of three-dimensional scenes from image sequences has a number of goals. These include (but are not limited to): (i) the recovery of 3D scene structure, (ii) the detection of moving objects in the presence of camera-induced motion, and (iii) the synthesis of new camera views based on a given set of views.

Previous approaches to the problem of dynamic scene analysis can be broadly divided into two classes: (i) 2D algorithms which apply when the scene can be approximated by a flat surface and/or when the camera is only undergoing rotations and zooms, and (ii) 3D algorithms which work well only when significant depth variations are present in the scene and the camera is translating.

This talk will present a unified approach to dynamic scene analysis in both 2D and 3D scenes, with a strategy to gracefully bridge the gap between those two extremes. Our approach is based on a stratification of the problem into scenarios which gradually increase in their complexity. We present a set of techniques that match the above stratification. These techniques progressively increase in their complexity, ranging from 2D techniques to more complex 3D techniques. Moreover, the computations required for the solution to the problem at one complexity level become the initial processing step for the solution at the next complexity level. We illustrate these techniques using examples from real image sequences.
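As a flavor of the 2D end of such a stratification, the Python sketch below estimates a single global translation between two frames by least squares on the brightness-constancy constraint (Ix*u + Iy*v + It = 0). It is only an assumed illustration of the simplest stratum; the talk's techniques extend to richer 2D parametric models and on to 3D parallax.

import numpy as np

def global_translation(frame0, frame1):
    """Least-squares fit of one (u, v) to Ix*u + Iy*v + It = 0."""
    Iy, Ix = np.gradient(frame0.astype(float))        # spatial gradients
    It = frame1.astype(float) - frame0.astype(float)  # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Toy usage: a smooth blob shifted by one pixel in x between frames.
X, Y = np.meshgrid(np.arange(64), np.arange(64))
f0 = np.exp(-((X - 32.0) ** 2 + (Y - 32.0) ** 2) / 100.0)
f1 = np.exp(-((X - 33.0) ** 2 + (Y - 32.0) ** 2) / 100.0)
print(global_translation(f0, f1))   # approximately (1.0, 0.0)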

Host: Seth Teller
ABSTRACT - "Output-Sensitive Visibility Algorithms for Dynamic Scenes with Applications to Virtual Reality" by Oded Sudarsky

An output-sensitive visibility algorithm is one whose runtime is proportional to the number of visible graphic primitives in a scene model - not to the total number of primitives, which can be much greater. The known practical output-sensitive visibility algorithms are suitable only for static scenes, because they include a heavy preprocessing stage that constructs a spatial data structure which relies on the model objects' positions. Any changes to the scene geometry might cause significant modifications to this data structure. We show how these algorithms may be adapted to dynamic scenes. Two main ideas are used. First, update the spatial data structure to reflect the dynamic objects' current positions; this update is made efficient by restricting it to a small part of the data structure. Second, use temporal bounding volumes (TBVs) to avoid having to consider every dynamic object in each frame. The combination of these techniques yields efficient, output-sensitive visibility algorithms for scenes with multiple dynamic objects. The performance of our methods is shown to be significantly better than that of previous output-sensitive algorithms intended for static scenes.

TBVs can be adapted to applications where no prior knowledge of the objects' trajectories is available, such as virtual reality (VR) and simulations. Furthermore, they obviate updates of the scene model itself, not just of the auxiliary data structure used by the visibility algorithm. They can therefore be used to greatly reduce the communications overhead in client-server VR systems, as well as in general distributed virtual environments.
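A temporal bounding volume can be as simple as a sphere grown by the object's maximum speed times a chosen lifetime, as in the Python sketch below; the object is then re-examined only when that sphere becomes visible or the lifetime expires. The sphere shape, the fixed lifetime, and all names here are illustrative assumptions, not the paper's exact construction.

from dataclasses import dataclass
import numpy as np

@dataclass
class TBV:
    center: np.ndarray   # object position when the TBV was issued
    radius: float        # object radius + max_speed * lifetime
    expires: int         # frame number at which the TBV must be renewed

def make_tbv(position, obj_radius, max_speed, frame, lifetime):
    # Everything the object can reach within `lifetime` frames fits here.
    return TBV(np.asarray(position, float),
               obj_radius + max_speed * lifetime,
               frame + lifetime)

def needs_update(tbv, frame, visible):
    """Re-examine the object only if its TBV is seen or has expired."""
    return visible(tbv.center, tbv.radius) or frame >= tbv.expires

# Toy usage: a viewer at the origin sees everything within distance 10.
visible = lambda c, r: np.linalg.norm(c) - r < 10.0
tbv = make_tbv(position=[30.0, 0.0, 0.0], obj_radius=1.0,
               max_speed=0.5, frame=0, lifetime=20)
print(needs_update(tbv, frame=5, visible=visible))   # False: still hidden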

Joint work with Craig Gotsman, presented at Eurographics '96.

Host: Seth Teller
ABSTRACT - "The Lumigraph" by Steven J. Gortler

In this talk, I will discuss a new method for capturing the complete appearance of both synthetic and real-world objects and scenes, representing this information, and then using this representation to render images of the object from new camera positions. Unlike the shape capture process traditionally used in computer vision and the rendering process traditionally used in computer graphics, this approach does not rely on geometric representations. Instead we sample and reconstruct a 4D function, which we call a Lumigraph. The Lumigraph is a subset of the complete plenoptic function that describes the flow of light at all positions in all directions. With the Lumigraph, new images of the object can be generated very quickly, independent of the geometric or illumination complexity of the scene or object. I will discuss a complete working system including the capture of samples, the construction of the Lumigraph, and the subsequent rendering of images from this new representation.
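For intuition, the sketch below treats a Lumigraph as a discrete 4D table over a two-plane parameterization (s,t on the camera plane, u,v on the other) and reconstructs a ray's radiance by quadrilinear interpolation of the sixteen surrounding samples. The actual system also uses approximate geometry to correct the lookup; this simplest-case reconstruction is only an assumed illustration.

import numpy as np

def lumigraph_sample(L, coords):
    """Quadrilinearly interpolate 4D array L at fractional (s,t,u,v)."""
    idx = np.floor(coords).astype(int)
    frac = coords - idx
    hi = np.array(L.shape) - 1
    out = 0.0
    for corner in range(16):                       # 2^4 surrounding samples
        offs = np.array([(corner >> k) & 1 for k in range(4)])
        w = np.prod(np.where(offs, frac, 1.0 - frac))
        out += w * L[tuple(np.minimum(idx + offs, hi))]
    return out

# Toy usage: a random 4D table, coarse on the camera (s,t) plane.
L = np.random.default_rng(0).random((8, 8, 16, 16))
print(lumigraph_sample(L, np.array([3.2, 4.7, 8.5, 9.1])))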
ABSTRACT - "Spherical Wavelets: Efficiently Representing Functions on the Sphere" by Peter Schroeder

Wavelets have proven to be powerful bases for use in numerical analysis and signal processing. Their power lies in the fact that they require only a small number of coefficients to represent general functions and large data sets accurately. This allows compression and efficient computations. Traditional constructions have been limited to simple domains such as intervals and rectangles. In this talk I will present a wavelet construction for scalar functions defined on surfaces, and more particularly the sphere. Treating these bases in the fully biorthogonal case, I will explain how bases with custom properties can be constructed with the lifting scheme. The bases are extremely easy to implement and allow fully adaptive subdivisions. The resulting algorithms have been implemented on workstation-class machines in an interactive application. I will give examples of functions defined on the sphere, such as topographic data, bidirectional reflection distribution functions, and illumination, and show how they can be efficiently represented with spherical wavelets.
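The lifting scheme itself is easiest to see in one dimension. The Python sketch below builds a linear interpolating wavelet transform from a predict step and an update step - the same pattern the spherical construction repeats on a triangulated sphere. The 1D periodic setting is an illustrative simplification, not the talk's construction.

import numpy as np

def lift_forward(x):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    # Predict: an odd sample should be the average of its even neighbors.
    detail = odd - 0.5 * (even + np.roll(even, -1))
    # Update: correct the evens so the coarse signal preserves the mean.
    coarse = even + 0.25 * (detail + np.roll(detail, 1))
    return coarse, detail

def lift_inverse(coarse, detail):
    # Run the two lifting steps backwards; the transform is exactly invertible.
    even = coarse - 0.25 * (detail + np.roll(detail, 1))
    odd = detail + 0.5 * (even + np.roll(even, -1))
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.sin(np.linspace(0, np.pi, 16))
c, d = lift_forward(x)
print(np.allclose(lift_inverse(c, d), x))   # True: lifting is invertible
print(np.abs(d).max())                      # details are small on smooth data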

This is joint work with Wim Sweldens of AT&T Bell Laboratories.

Bio: Peter Schroeder is an assistant professor of computer science at Caltech and one of the leading authorities on the use of wavelets in computer graphics. He received his PhD from Princeton University for his work on wavelet methods for illumination computations. His current research interests revolve around the construction of second generation wavelets and their application to computer graphics computations. For this work he was recently named a Sloan Foundation Research Fellow.

Host: Seth Teller
ABSTRACT - "Visualization and Interaction in Large 3D Virtual Environments" by Thomas A. Funkhouser

Interactive systems that simulate the visual experience of inhabiting a three-dimensional environment along with other users promise to be an important application domain. However, interesting three-dimensional models may consist of millions of polygons and require gigabytes of data - far more than today's workstations can render at interactive frame rates or fit into memory simultaneously. Furthermore, hundreds or thousands of users may inhabit the same virtual environment simultaneously, creating a multitude of potential interactions. In order to achieve real-time visual simulations in such large virtual environments, a system must identify a small, relevant subset of the model to store in memory and process at any given time.

I will describe a few techniques to handle the vast complexity of large, sparse virtual environments in visual simulation applications. These techniques rely upon an efficient geometric database that represents the model as a set of objects, each of which is described at multiple levels of detail. The database contains a spatial subdivision which partitions the model into an adjacency graph of regions with similar visibility characteristics. The object hierarchy and spatial subdivision are used by visibility determination and multi-resolution detail elision algorithms to compute a small subset of the model to store in memory and process during each step of the computation.
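To give a feel for detail elision, the Python sketch below greedily upgrades the level of detail of whichever visible object offers the best benefit per unit of render cost until a frame-time budget is exhausted. This greedy heuristic and its data layout are assumptions for illustration, in the spirit of predictive LOD selection rather than a transcription of the published algorithm.

import heapq

def choose_lods(objects, budget):
    """objects: {name: [(cost, benefit), ...]} ordered coarse to fine."""
    chosen = {name: 0 for name in objects}            # start at coarsest LOD
    spent = sum(objects[n][0][0] for n in objects)
    heap = []
    for name, lods in objects.items():
        if len(lods) > 1:
            dc = lods[1][0] - lods[0][0]
            db = lods[1][1] - lods[0][1]
            heapq.heappush(heap, (-db / dc, name))    # best value first
    while heap:
        _, name = heapq.heappop(heap)
        lods, i = objects[name], chosen[name]
        dc = lods[i + 1][0] - lods[i][0]
        if spent + dc > budget:
            continue                                  # upgrade doesn't fit
        spent += dc
        chosen[name] = i + 1
        if chosen[name] + 1 < len(lods):              # queue the next upgrade
            dc = lods[i + 2][0] - lods[i + 1][0]
            db = lods[i + 2][1] - lods[i + 1][1]
            heapq.heappush(heap, (-db / dc, name))
    return chosen, spent

objs = {"chair": [(1, 1), (3, 2), (9, 3)], "wall": [(2, 5), (6, 8)]}
print(choose_lods(objs, budget=12))   # chosen LOD per object, cost used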

These techniques have been used in three applications: interactive building walkthroughs, radiosity computations, and multi-user virtual reality. The building walkthrough system is able to maintain more than 15 frames/second during visualization of architectural models containing over one million polygons. The radiosity system generates solutions for input models containing over eighty thousand polygons. The multi-user virtual reality system manages interactions between more than one thousand simultaneous users in real-time. In all cases, the tested data sets are an order of magnitude larger than those supported by previous state-of-the-art systems.

This is joint work with Carlo Sequin (University of California, Berkeley), Seth Teller (MIT), Celeste Fowler (SGI), and Pat Hanrahan (Stanford University).

Bio: Thomas Funkhouser is a member of the technical staff at AT&T Bell Laboratories. His research interests include interactive computer graphics, real-time display algorithms, global illumination, multi-user systems, and object-oriented databases. He received a B.S. in biological sciences from Stanford University in 1983, an M.S. in computer science from UCLA in 1989, and a PhD in computer science from UC Berkeley in 1993.

Host: Seth Teller
ABSTRACT - "A Project to Build a 3D Fax Machine" by Marc Levoy

There is a growing interest in the graphics and computer vision communities in building a "3D fax machine", an inexpensive device capable of digitizing the shape and appearance of small stationary objects. Applications for such a device include product design, reverse engineering, medical scanning, museum archiving, and digitizing of models for the visual simulation, film making, video game, and home shopping industries.

The first step in this process involves generating a seamless, occlusion-free, low-level geometric representation of the object's surfaces. Our prototype system uses a modified Cyberware laser-stripe triangulation scanner and a precision motion platform. By affixing the object to our platform and sweeping it from several directions with the Cyberware scanner, we obtain a set of 512 x 512 pixel range images aligned with submillimeter precision. To combine the information from these range images, we are pursuing two alternative methods: 1) zippering of fine-grain polygon meshes derived from the individual range images, and 2) incremental updating of a 3D probabilistic occupancy grid. I will describe both methods.
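The occupancy-grid alternative can be sketched compactly: each range reading makes the cells the laser beam passed through more likely to be empty and the cell containing the surface more likely to be occupied, accumulated in log-odds form. The 1D grid and the increment values in the Python below are illustrative assumptions; the actual system fuses full 3D range images.

import numpy as np

L_FREE, L_HIT = -0.4, 0.9          # illustrative log-odds increments

def integrate_reading(logodds, hit_cell):
    """Fuse one range reading taken along the grid's axis."""
    logodds[:hit_cell] += L_FREE   # cells the beam traversed: more free
    logodds[hit_cell] += L_HIT     # cell containing the surface: occupied

def to_probability(logodds):
    return 1.0 / (1.0 + np.exp(-logodds))

grid = np.zeros(10)                # log-odds 0 == probability 0.5
for reading in (7, 7, 6, 7):       # repeated, slightly noisy scans
    integrate_reading(grid, reading)
print(np.round(to_probability(grid), 2))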

I will also describe a new algorithm for analyzing laser reflection image sequences that corrects several well-known problems inherent to laser-stripe scanners: range errors at surface discontinuities, range distortions due to surface reflectance changes, and range noise arising from laser speckle. By correcting these problems, we can improve on Cyberware's resolution by up to 5x, allowing us to digitize models with 0.1mm resolution.

Finally, I will summarize our strategies for automatically determining the next best view to acquire, for fitting curved surface patches to the low-level geometric representations, and for acquiring the spatially varying color and reflectance properties of objects.

Host: Seth Teller
ABSTRACT - "Computer Vision as Low-Dimensional Optimization" by Nina Amenta

In this talk we express some problems from computer vision as low-dimensional optimization problems, getting fast and simple algorithms via recent results in computational geometry. We then give a lower bound for one of the optimization problems - maximizing a convex function of two variables over a polytope in four dimensions - by building a bad four-dimensional polytope. We show a short computer graphics video of projections of this polytope, which we hope conveys the geometric intuition behind the result.

The problems we consider all have the goal of finding the transformation, out of a family of transformations, which optimally matches one convex polygon with another. We can find the largest translate of one polygon in another by linear programming. With some more work, we can find the translation and scaling which minimizes the Hausdorff distance between two polygons in linear time. These problems get harder when we allow rotation.
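One way to see the linear-programming claim: the largest scaled translate s·P + t of a convex polygon P inside a convex polygon Q (given as half-planes a·x <= b) maximizes s subject to constraints that are linear in the unknowns (s, t). The Python sketch below, which uses scipy's LP solver, is an assumed formulation consistent with the abstract rather than the authors' own code.

import numpy as np
from scipy.optimize import linprog

def largest_copy(P_vertices, Q_halfplanes):
    """Return (scale, translation) of the largest copy of P inside Q."""
    rows, rhs = [], []
    for (a, b) in Q_halfplanes:        # each half-plane: a . x <= b
        for p in P_vertices:           # a . (s*p + t) <= b, linear in (s, t)
            rows.append([np.dot(a, p), a[0], a[1]])
            rhs.append(b)
    # linprog minimizes, so negate s; the translation is unbounded in sign.
    res = linprog(c=[-1.0, 0.0, 0.0], A_ub=rows, b_ub=rhs,
                  bounds=[(0, None), (None, None), (None, None)])
    return res.x[0], res.x[1:]

# Toy usage: the unit square inside the axis-aligned box [0, 4]^2.
square = [np.array(v, float) for v in [(0, 0), (1, 0), (1, 1), (0, 1)]]
box4 = [(np.array([-1.0, 0.0]), 0.0), (np.array([1.0, 0.0]), 4.0),
        (np.array([0.0, -1.0]), 0.0), (np.array([0.0, 1.0]), 4.0)]
print(largest_copy(square, box4))   # scale 4, translation (0, 0)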

Allowing translation and rotation, finding the largest copy of one convex polygon inside another corresponds to maximizing a convex function in two variables over a convex polytope in four dimensions. It was unknown how many local maxima this problem could have, but we construct a polytope which achieves the obvious upper bound of O(f^2), where f is the number of facets.

Host: Seth Teller
ABSTRACT - "Ten Hard Problems in Computer Graphics" by Julie Dorsey and Seth Teller

Computer graphics is an extremely young field. Some areas, such as rendering, have matured quickly; however, plenty of hard problems haven't yet been cracked or even addressed. Moreover, one can identify the edge of current techniques rather quickly and start to contribute something new.

We briefly motivate and describe ten problems whose solutions lie beyond the reach of current techniques, but which seem ripe for attack given the greatly increased computation and storage resources, and the sophisticated graphics subsystems, of modern computer systems. Each of the problems falls into one or more of four broad areas.

We hope and expect to develop collaborations with the Theory and Computational Geometry, Machine Vision, High-Performance and Scientific Computing, Human Factors and Perception, and Database communities as these problems are addressed. Consequently, we are seeking interested students and others to join a concerted assault on several of these problems.

Computer Graphics Group · Computer Science and Artificial Intelligence Laboratory · Massachusetts Institute of Technology
32-D408 · 32 Vassar Street · Cambridge, MA 02139 · USA · Tel 617.253.6583 · Fax 617.253.4640

http://graphics.csail.mit.edu


Copyright © 2003 Computer Graphics Group. All Rights Reserved.  


