6.838J/4.214J - Advanced Topics in Computer Graphics


L1 - COURSE INTRODUCTION

L2 - REVIEW OF CLASSICAL RAY TRACING

  • A. Appel, Some Techniques for Shading Machine Renderings of Solids, SJCC, Thompson Books, Washington, D.C., 1968, pages 37-45.
  • Turner Whitted, An Improved Illumination Model for Shaded Display, CACM, volume 23, number 6, 1980, pages 343-349.
  • Glassner IRT Ch 1 - Intro to ray tracing
  • Glassner IRT Ch 7 - Writing a ray tracer (Heckbert)
  • Supplemental: Glassner PODIS Ch 19 - ray tracing
  • Discussion Questions:

    1) Does ray tracing perform local shading? Global shading?

    2) What has to be recomputed when the viewpoint changes position? An object? A light source? How about when some property of a scene material or medium changes?

    3) Under what conditions could a ray tracing computation be deemed "convergent?"

    4) How might you express the asymptotic complexity of a ray tracing computation? What terms might appear in such an expression?
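
For discussion, a minimal sketch of the classical recursive (Whitted-style) ray tracer: spheres, one point light, hard shadow rays, and perfect mirror reflection. All names and the grey-scale single-light setup are illustrative, not Appel's or Whitted's actual code; the shadow test treats any hit along the shadow ray as an occluder, where a full tracer would also compare against the distance to the light.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def sphere_hit(orig, d, center, radius):
    # Smallest positive t along unit-direction ray orig + t*d, or None.
    oc = tuple(o - c for o, c in zip(orig, center))
    b = 2.0 * sum(di * oi for di, oi in zip(d, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None     # epsilon avoids self-intersection

def trace(orig, d, spheres, light, depth=0, max_depth=3):
    """Whitted-style shading: direct diffuse + shadow ray + mirror bounce.
    Each sphere is (center, radius, albedo, mirror_coefficient)."""
    best = None
    for center, radius, albedo, mirror in spheres:
        t = sphere_hit(orig, d, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, center, albedo, mirror)
    if best is None:
        return 0.0                                   # background radiance
    t, center, albedo, mirror = best
    p = tuple(o + t * di for o, di in zip(orig, d))
    n = normalize(tuple(pi - ci for pi, ci in zip(p, center)))
    to_light = normalize(tuple(li - pi for li, pi in zip(light, p)))
    lit = all(sphere_hit(p, to_light, c2, r2) is None
              for c2, r2, _, _ in spheres)           # shadow ray
    shade = albedo * max(0.0, sum(a * b for a, b in zip(n, to_light))) if lit else 0.0
    if mirror > 0.0 and depth < max_depth:           # recursive reflection
        dn = sum(a * b for a, b in zip(d, n))
        r = normalize(tuple(di - 2.0 * dn * ni for di, ni in zip(d, n)))
        shade += mirror * trace(p, r, spheres, light, depth + 1, max_depth)
    return shade
```

Note how the structure itself answers question 2: the eye rays depend on the viewpoint, the shadow rays on the lights, and the recursion on the materials, so each kind of change invalidates a different part of the computation.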


L3 - REVIEW OF CLASSICAL & MODERN RADIOSITY

  • Cindy M. Goral, Kenneth E. Torrance, Donald P. Greenberg, and Bennett Battaile, Modeling the Interaction of Light Between Diffuse Surfaces, Computer Graphics (Proc. SIGGRAPH '84), volume 18, number 3, 1984, pages 213-222.
  • Sillion/Puech Ch. 3, Ch. 4
  • Supplemental: Glassner PODIS Ch 18
  • Presentation Slides:

    Discussion Questions:

    1) Does radiosity perform local shading? Global shading?

    2) How would a system designer choose between regular and irregular meshing techniques? How might an algorithm employ estimates both of transport error (that is, error in shooting/gathering radiosity) and representation error (that is, error in modeling the radiosity function on a given source/receiver)?

    3) Under what conditions could a radiosity computation be deemed "convergent?"

    4) What does "linearity" mean in practice for radiosity?

    5) How would you contrast ray tracing and radiosity?
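
A sketch of the discrete radiosity system B = E + ρFB solved by repeated gathering (a hypothetical tiny scene; a real system computes the form factors F from geometry):

```python
def solve_radiosity(E, rho, F, iters=200):
    """Gathering iteration for the discrete radiosity system B = E + rho*F*B.
    E: emission per patch, rho: diffuse reflectance, F: form-factor matrix."""
    n = len(E)
    B = list(E)
    for _ in range(iters):
        B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B
```

The linearity of question 4 shows up directly: doubling E doubles the solution B, which is what makes radiosity solutions reusable across light-intensity changes.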


L4 - THE RENDERING EQUATION

  • Cohen/Wallace Ch. 2 (Hanrahan) - Rendering Concepts
  • J. T. Kajiya, The Rendering Equation, pages 143-150, Computer Graphics (SIGGRAPH '86 Proceedings), volume 20, Aug 1986.
  • Supplemental: Glassner PODIS Ch 17
  • Presentation Slides:

    Discussion Questions:

    1) What types of optical effects are not accounted for by the rendering equation?

    2) There exist algorithms which extend the "Utah Approximation" to include shadows. How would the approximation

    I = gε + gMε

    be altered to reflect such an extension?
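
Question 2's notation can be read as a truncated Neumann series of the rendering equation. In a discrete sketch where g and M are matrices acting on an emission vector ε (hypothetical finite-dimensional stand-ins for Kajiya's continuous operators), the partial sums look like:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def neumann_partial(g, M, eps, k):
    """Partial sum of I = g*eps + gM(g*eps) + (gM)^2(g*eps) + ...
    Keeping only the first term is zero-bounce shading; each extra term
    adds one more bounce of interreflected light."""
    term = matvec(g, eps)
    total = list(term)
    for _ in range(k):
        term = matvec(g, matvec(M, term))
        total = [t + s for t, s in zip(total, term)]
    return total
```

Truncation after two terms is the shape of the approximation quoted above; adding a visibility factor inside g is one way to read "extended to include shadows."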


L5 - MONTE CARLO METHODS (I)

  • Robert L. Cook, Thomas Porter, and Loren Carpenter, Distributed Ray Tracing, pages 137-145, Computer Graphics (SIGGRAPH '84 Proceedings), volume 18, number 3, Jul 1984.
  • Glassner PODIS S 16.9
  • Gregory J. Ward, Francis M. Rubinstein, and Robert D. Clear, A Ray Tracing Solution for Diffuse Interreflection, Computer Graphics, volume 22, number 4, pages 85-92, Aug 1988.
  • Presentation Slides:

    Discussion Questions:

    1) How do the rendering approaches described in the Cook and Ward papers improve upon classical ray tracing?

    2) In what ways does jittering work better than adaptive supersampling in sampling the image plane?

    3) What sort of weighting functions should be used for time sampling? How does the level of glossiness affect the sampling of reflective light? The level of translucency affect the sampling of transmitted light?

    4) Is there a better way to estimate the Illuminance Integral (used in the "primary method" of the Ward paper) than the naive Monte Carlo method used in the paper? What efficiency tradeoffs are there?

    5) What advantages does using Ward's ray tracing solution for diffuse interreflections have over radiosity techniques? (Maybe too obvious.)
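
A small numerical illustration of question 2, comparing plain uniform sampling with jittered (stratified) sampling of a smooth integrand. The 1D test function is a made-up stand-in; Cook et al. jitter samples over the pixel, lens, and time rather than a unit interval.

```python
import random

def uniform_samples(n, rng):
    return [rng.random() for _ in range(n)]

def jittered_samples(n, rng):
    # One random sample per stratum [i/n, (i+1)/n).
    return [(i + rng.random()) / n for i in range(n)]

def estimate(f, samples):
    return sum(f(x) for x in samples) / len(samples)

def empirical_variance(sampler, f, n, trials, rng):
    ests = [estimate(f, sampler(n, rng)) for _ in range(trials)]
    mean = sum(ests) / trials
    return sum((e - mean) ** 2 for e in ests) / trials
```

For f(x) = x² (exact integral 1/3), jittering's variance is dramatically lower at the same sample count, while still turning residual aliasing into noise.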



L6 - MONTE CARLO METHODS (II)

  • Robert L. Cook, Stochastic Sampling in Computer Graphics, Jan 1986, ACM Transactions on Graphics, volume 5, number 1, pages 51-72.
  • Henrik Wann Jensen, 1996, Global Illumination Using Photon Maps, Rendering Techniques '96 (Proceedings of the Seventh Eurographics Workshop on Rendering), pages 21-30, Springer-Verlag/Wien, New York, NY.
  • Eric Veach and Leonidas J. Guibas, Optimally Combining Sampling Techniques for Monte Carlo Rendering, Annual Conference Series, pages 419-428, ACM SIGGRAPH, Addison Wesley, August 1995.
  • Presentation Slides:

    Discussion Questions:
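
The Veach-Guibas balance heuristic can be sketched in one dimension. The integrand and the two sampling techniques below are hypothetical stand-ins for, say, BRDF sampling versus light sampling; with equal sample counts per technique, the weighted contribution w_i(x)·f(x)/p_i(x) collapses to f(x)/Σ_k p_k(x).

```python
import math
import random

def balance_mis(f, techniques, n, rng):
    """Multiple importance sampling with the balance heuristic.
    techniques: list of (draw_sample, pdf) pairs; n samples from each.
    Returns an unbiased estimate of the integral of f over the domain
    covered by the techniques."""
    total = 0.0
    for draw, _ in techniques:
        for _ in range(n):
            x = draw(rng)
            total += f(x) / sum(pdf(x) for _, pdf in techniques)
    return total / n
```

Usage, integrating f(x) = x on [0, 1] (true value 0.5) with a uniform technique and one proportional to 2x:

```python
techniques = [
    (lambda r: r.random(),            lambda x: 1.0),      # uniform pdf
    (lambda r: math.sqrt(r.random()), lambda x: 2.0 * x),  # pdf 2x
]
```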


L7 - MULTIPASS METHODS

  • Francois X. Sillion and Claude Puech, A General Two-Pass Method Integrating Specular and Diffuse Reflection, pages 335-344, Computer Graphics (SIGGRAPH '89 Proceedings), volume 23, Jul 1989.
  • Shenchang Eric Chen, Holly E. Rushmeier, Gavin Miller, and Douglass Turner, A progressive multi-pass method for global illumination, pages 165-174, Computer Graphics (SIGGRAPH '91 Proceedings), volume 25, number 4, 1991.
  • Photon tracing, bidirectional r/t; hybrid algorithms
  • Glassner PODIS 19.4, 19.5, 19.6
  • Discussion Questions:

    1) The readings often mention how well ray tracing and radiosity complement each other. The obvious complement is that ray tracing does specular well and radiosity doesn't, and vice-versa for diffuse. In what other ways do the two complement each other (one's weakness is another's strength)?

    2) The readings discuss a simple two-pass method in which a radiosity first pass takes care of diffuse reflection, and then a ray tracing second pass takes care of specular reflection/refraction. What is the essential flaw in the method? Are there others?

    3) The three multi-pass methods discussed in the readings (Sillion and Puech's, bi-directional ray tracing, and Chen et al.'s) each employ more than one algorithm (ray tracing, radiosity, photon tracing, etc.). What strengths/weaknesses of its included algorithms is each of the methods trying to utilize/compensate for? How do the three methods compare?


L8 - IMPORTANCE ALGORITHMS

  • Brian E. Smits, James R. Arvo and David H. Salesin, An importance-driven radiosity algorithm, pages 273-282, Computer Graphics (SIGGRAPH '92 Proceedings), volume 26, number 2, Jul 1992.
  • Larry Aupperle and Pat Hanrahan, Importance and Discrete Three Point Transport, Fourth Eurographics Workshop on Rendering, pages 85-94, Paris, France, June 1993.
  • Per H. Christensen, Eric J. Stollnitz, David H. Salesin and Tony D. DeRose, Global Illumination of Glossy Environments Using Wavelets and Importance, ACM Transactions on Graphics, 1996, volume 15, number 1, pages 37-71.
  • Supplemental: S. Pattanaik and S. Mudur, Efficient Potential Equation Solutions for Global Illumination Computation, Computers and Graphics, volume 17, number 4, pages 387-396, 1993.
  • Presentation Slides:

    Discussion Questions:

    1. Consider an importance algorithm for rendering animation. Will the importance algorithm do more/less/the same work if the (known) path catches few/some/most of the surfaces in the scene?

    What if the path is unknown?

    2. All four of these papers deal with importance and hierarchical or wavelet algorithms.

    Why?

    Would a "naive" radiosity algorithm gain from importance calculations?

    3. Both the Christensen and the Aupperle papers consider importance in solving global radiosity glossy reflectance.

    Why might algorithms that consider glossy reflectance particularly benefit from importance calculations? (i.e., what kind of useless work would importance remove from scenes with mirror-like surfaces?)

    4. We saw that one advantage of radiosity over ray tracing was its ability to correctly capture color bleeding and diffuse interreflections (recall the inability of classical ray tracing to correctly show the colors from Ferren's sculpture).

    Could importance calculations introduce these kinds of problems (or others) into radiosity?
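
One way to see why importance pairs so naturally with radiosity: it solves the adjoint of the same linear system, seeded by the view instead of by the lights. A hedged sketch of one common discrete form (the papers differ in exact weighting):

```python
def solve_importance(V, rho, F, iters=200):
    """Adjoint of the radiosity system B = E + diag(rho) F B:
        Y = V + F^T diag(rho) Y,
    where V marks patches directly visible to the viewer.  Patches with
    high Y contribute strongly to the image and deserve refinement."""
    n = len(V)
    Y = list(V)
    for _ in range(iters):
        Y = [V[j] + sum(F[i][j] * rho[i] * Y[i] for i in range(n))
             for j in range(n)]
    return Y
```

An importance-driven refiner would then weight each link's estimated transport error by the receiver's importance before deciding whether to subdivide.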


L9 - CLUSTERING

  • Pat Hanrahan, David Salzman and Larry Aupperle, A Rapid Hierarchical Radiosity Algorithm, Computer Graphics, volume 25, number 4, pages 197-206, Jul 1991.
  • Brian Smits, James R. Arvo and Donald Greenberg, A Clustering Algorithm for Radiosity in Complex Environments, Computer Graphics Proceedings, Annual Conference Series, 1994 (ACM SIGGRAPH '94 Proceedings), pages 435-442.
  • Francois X. Sillion, G. Drettakis and C. Soler, A clustering algorithm for radiance calculation in general environments, Eurographics Rendering Workshop 1995.
  • Francois X. Sillion, A unified hierarchical algorithm for global illumination with scattering volumes and object clusters, IEEE Transactions on Visualization and Computer Graphics, 1995, volume 1, number 3, pages 240-254.
  • Discussion Questions:

    1. Discuss some methods of building clusters.

    2. When transferring energy from one cluster to another, what problems does visibility create? How can these be resolved?

    3. Why are more refinement tests needed when running the program with visibility coherence than without? What effect does this have on the scene?

    4. How can a visibility algorithm be improved?

    5. What are the advantages and disadvantages of using alpha links versus beta links?

    6. What can be done to improve on the trivial lower bound of zero for energy transfer?
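
For question 1, one simple way to build clusters is a k-d-style median split over object centroids. This is a hypothetical sketch (the papers use various spatial hierarchies, including ones derived from the scene's modeling hierarchy):

```python
def build_clusters(points, leaf_size=2, axis=0):
    """Recursive median split of object centroids into a binary cluster
    tree.  Leaves are plain lists of points; internal nodes are
    two-element lists of child clusters.  The split axis cycles."""
    if len(points) <= leaf_size:
        return points
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    nxt = (axis + 1) % len(points[0])
    return [build_clusters(pts[:mid], leaf_size, nxt),
            build_clusters(pts[mid:], leaf_size, nxt)]
```

Geometric splits like this keep clusters spatially compact, which tightens the bounds used when linking cluster to cluster.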


L10 - RADIOMETRIC & GEOMETRIC ACCELERATION

  • Glassner IRT Ch 6: Survey of Ray Tracing Acceleration
  • Seth Teller, Kavita Bala and Julie Dorsey, 1996, Conservative Radiance Interpolants for Ray Tracing, Rendering Techniques '96 (Proceedings of the Seventh Eurographics Workshop on Rendering), pages 257-268
  • Revisit Ward:1988:RTS
  • Revisit Hanrahan:1991:RHR
  • Revisit Jensen96-GIUPM
  • Discussion Questions:

    1) How does this scheme take advantage of object coherence? ray coherence? frame (visibility) coherence?

    2) How does this scheme balance solution error and running time?

    3) How does this scheme differ from those of Ward and Jensen in terms of estimating radiance?

    4) What is lazy subdivision? How is it used and why is it so useful here?

    5) How is ray intersection cost amortized over multiple rays in this scheme?


L11 - DYNAMIC ENVIRONMENTS

  • Shenchang Eric Chen, Incremental Radiosity: An Extension of Progressive Radiosity to an Interactive Image Synthesis System, pages 135-144, Computer Graphics (SIGGRAPH '90 Proceedings), volume 24, Aug 1990.
  • Rachel Orti, Stephane Riviere, Fredo Durand and Claude Puech, Radiosity for Dynamic Scenes in Flatland with the Visibility Complex, Computer Graphics Forum, volume 15, number 3, pages C237-C248, 1996.
  • Normand Briere and Pierre Poulin, Hierarchical View-Dependent Structures for Interactive Scene Manipulation, Annual Conference Series, pages 83-90, SIGGRAPH 96 Conference Proceedings, 1996.
  • Supplemental: David A. Forsyth, Chien Yang and Kim Teo, Efficient Radiosity in Dynamic Environments, Fifth Eurographics Workshop on Rendering, pages 313-323, Jun 1994.
  • Discussion Questions:

    1. Briere and Poulin's paper claims color trees can be used to recompute changes in procedural textures. In which cases does this not work?

    2. Looking closely at the performance these papers state they have achieved, are any of these systems truly "interactive"?

    3. What trade-offs do each of these applications make between speed, memory, and error?

    4. Think of three applications where realistic interactive 3D is useful. Do these techniques help those applications? What are the limitations?

    5. Brainstorm and write down three other ways of accelerating 3D image synthesis that might help achieve interactivity (hardware acceleration, geometry reduction, etc.). Be creative -- make something up.

    6. Be prepared to give a 1-3 minute explanation of how your favorite interactive 3D system (Doom, VRML, etc.) achieves realistic image synthesis and interactivity. What are the trade-offs/limitations?


L12 - GENERAL REFLECTANCE MODELS

  • Introductory material Glassner PODIS S 13.7
  • R. L. Cook and K. E. Torrance, A reflectance model for computer graphics, pages 307-316, Computer Graphics (SIGGRAPH '81 Proceedings), volume 15, number 3, 1981.
  • Gregory J. Ward, Measuring and modeling anisotropic reflection, pages 265-272, Computer Graphics (SIGGRAPH '92 Proceedings), volume 26, number 2, 1992.
  • Christophe Schlick, A Survey of Shading and Reflectance Models, Computer Graphics Forum, volume 13, number 2, pages 121-131, Jun 1994.
  • Discussion Questions:

    1. Why do we want/need realistic renderings?

    2. What kind of applications are empirical models suited for?

    3. What kind of applications are theoretical models suited for?

    4. Are theoretical models better than empirical models?

    5. Are reflectance models really physical?

    6. Are reflectance models actually realistic?

    7. Why and when are theoretical models better than empirical ones?

    8. What are the advantages and disadvantages of unifying the diffuse and specular reflector into one?

    9. How important is the accuracy of the specular highlight?

    10. Why can we simplify the reflectance/shading models?


L13 - TEXTURE MAPPING

  • Paul S. Heckbert, Survey of Texture Mapping, IEEE Computer Graphics and Applications, volume 6, number 11, pages 56-67, Nov 1986.
  • James F. Blinn, Simulation of Wrinkled Surfaces, Computer Graphics (SIGGRAPH '78 Proceedings), volume 12, number 3, pages 286-292, Aug 1978.
  • Robert L. Cook, Shade trees, pages 223-231, Computer Graphics (SIGGRAPH '84 Proceedings), volume 18, Jul 1984.
  • Supplemental: Matt Pharr and Pat Hanrahan, Geometry Caching for Ray-Tracing Displacement Maps, Proc. 1996 Eurographics Workshop on Rendering.
  • Discussion Questions:

    Texture Mapping

    1. What's the difference between screen and texture scanning?

    2. Why doesn't bilinear mapping preserve diagonal lines?

    3. How is perspective mapping able to preserve diagonal lines?

    4. Why do we need to filter the texture when scanning?

    5. What's the cost of convolution filters?

    6. Why are pyramid filters the filter of choice for hardware vendors?

    Bump and Displacement mapping

    1. What's the difference between the two?

    2. Why use geometry caching for rendering displacement maps?

    3. Why use shade trees over conventional shading?
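
For the filtering questions above, here is the basic bilinearly filtered texture lookup (a sketch: tex is a row-major grid of scalar texels and (u, v) ranges over [0,1]²; real systems layer mip-map pyramids over this to handle minification):

```python
def bilinear_sample(tex, u, v):
    """Bilinearly filtered lookup: blend the four texels surrounding
    the continuous sample position."""
    h, w = len(tex), len(tex[0])
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)   # clamp at edges
    fx, fy = x - x0, y - y0
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

This is adequate when one screen pixel covers about one texel; when a pixel covers many texels, point or bilinear lookups alias, which is what pyramid (mip-map) filtering addresses cheaply.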


L14 - SOLID TEXTURES AND TEXTURE SYNTHESIS

  • Ken Perlin, An Image Synthesizer, Computer Graphics (SIGGRAPH '85 Proceedings), volume 19, pages 287-296, Jul 1985.
  • Darwyn R. Peachey, Solid Texturing of Complex Surfaces, Computer Graphics (SIGGRAPH '85 Proceedings), volume 19, pages 279-286, Jul 1985.
  • David J. Heeger and James R. Bergen, Pyramid-Based Texture Analysis/Synthesis, Computer Graphics (SIGGRAPH '95 Proceedings), pages 229-238, Aug 1995.
  • Supplemental: James T. Kajiya and Timothy L. Kay, Rendering Fur with Three Dimensional Textures, Computer Graphics (SIGGRAPH '89 Proceedings), volume 23, pages 271-280, Jul 1989.
  • Presentation Slides:

    Discussion Questions:

    Image synthesizer:

    1.) What are the limitations of this system? Are these limitations inherent in the system, or a function of the users' limited knowledge? That is, is the system flawed, or do we just not know how to use it?

    Solid texturing of complex surfaces:

    2.) What are the advantages of solid texturing? The disadvantages?

    3.) What's the difference between a digitized and a synthetic texture? Which is easier to synthesize? Which is more useful?

    4.) How can we get around the costs of solid texturing? Is it really that expensive?

    Pyramid-based Texture analysis/synthesis:

    5.) What textures does this system work well with? With what textures does it fail? Why?

    6.) Why do we even need this technique? That is, if we have the sample, what's the point in creating an imperfect copy of it?

    7.) Is this system's implementation of solid textures useful? How would you make a system that had this system's 2D functionality for solid textures? That is, how would you make a pyramid-based solid texture analyzer/synthesizer?

    Rendering fur:

    8.) This system is very limited. Does it have any applications outside of modeling teddy bears?

    9.) This paper opens the door on using texture to replace geometry. What are some applications of this idea? What kinds of objects can we model with texels?


L15 - CELLULAR TEXTURES AND 3D PAINTING

  • Pat Hanrahan and Paul E. Haeberli, Direct WYSIWYG Painting and Texturing on 3D Shapes, Computer Graphics (SIGGRAPH '90 Proceedings), volume 24, pages 215-223, Aug 1990.
  • Kurt Fleischer, David Laidlaw, Bena Currin, and Alan Barr, Cellular Texture Generation, Computer Graphics (SIGGRAPH '95 Proceedings), pages 239-248, Aug 1995.
  • Steven P. Worley, A Cellular Texture Basis Function, Computer Graphics (SIGGRAPH '96 Proceedings), pages 291-294, Aug 1996.
  • Presentation Slides:

    Discussion Questions:

    Questions for L15 (3D Texturing):

    Hanrahan:

    1) Painting onto 3D objects is the Common Metaphor in this paper. Identify others that have been/could be used in computer graphics.

    2) Why not just use 2D textures?

    3) How is the brush mapped onto the object? What other possibilities are there? What are the advantages/disadvantages of each method?

    4) What is the id buffer? When is it constructed, and at what cost?

    5) Why is transparency impractical to implement in this system?

    6) Design a new operator for this system based on your answer to 1)

    Worley:

    1) What differentiates this basis function approach from other texture generation approaches?

    2) What makes Perlin's Noise function so useful? What needs precomputing?

    3) What is precomputed for the Cellular Basis? What is done on the fly?

    4) What does a 2D F1 function look like?

    5) What would an arbitrary slice through the 3D F1 function look like?

    6) Given the best value of F1 for feature points in the cubic volume containing our sample point, how can we determine which neighboring cubes we can skip checking?

    7) How can we use texture maps to seed feature point placement?

    8) What other functions can we design that use this cellular basis (e.g., smooth it, invert it)?

    9) Is this basis coordinate-system dependent? Will the pattern on the surface of an object change if we rotate it? Might there be advantages/disadvantages to this?

    Fleischer:

    1) Motivation: Texture mapping can often allow a complex geometric representation to be replaced by a much simpler representation. Under what circumstances is texture mapping inadequate?

    2) Background: What are a) L-Systems, b) Particle Systems, and c) Reaction-Diffusion?

    3) Cell Distribution: What user controls help guide the process after it is initiated?

    4) What does a cell know about its environment? How does this influence distribution, etc.?

    5) What is a quaternion, and why shouldn't it differ greatly from unity during the process?

    6) A number of Cell Programs are detailed, such as : adhere to other cells, divide until surface is covered, etc. What other behaviors might be useful?

    7) Cell programs modify attributes using steepest-descent energy minimization. This requires a differentiable function. How is this requirement circumvented for imposing surface constraints in the case of non-implicit surfaces?

    8) When in the process is Level of Detail considered?

    9) How are byproducts of the cell distribution step used in rendering?

    10) What collision issues can arise when converting cells to geometry? How can they be avoided?


L16 - PHYSICALLY MOTIVATED SURFACE MODELS

  • Jay S. Gondek, Gary W. Meyer and Jonathan G. Newman, Wavelength Dependent Reflectance Functions, Computer Graphics (SIGGRAPH '94 Proceedings), pages 213-220, Jul 1994.
  • Patrick Callet, Pertinent Data for Modelling Pigmented Materials in Realistic Rendering, Computer Graphics Forum, volume 15, number 2, pages 119-127, 1996.
  • Kenneth K. Ellis, Frank N. Jones, Guobei Chu, William F. Lynn, First-Principles Coatings Reflectance Model Validation, Presented at the Sixth Annual Ground Target Modeling and Validation Conference in Houghton, MI, 22-24 August 1995.
  • Supplemental: C. S. Haase and G. W. Meyer, Modeling pigmented materials for realistic image synthesis, ACM Transactions on Graphics, volume 11, number 4, pages 305-335, Oct 1992.
  • Discussion Questions:

    Paper 1:

    -(Gondek) What are the advantages and disadvantages of using the capture sphere? Compare it to using empirical methods.

    -(Gondek) How does the capture sphere compare to the F-BEAM (Ellis) model? Which is more accurate with specular reflections and why? Will this matter for the types of material that we are trying to model?

    Paper 2:

    -(Callet) Callet's model only applies to single-layered media. How is this restrictive? What kinds of media (i.e. water, space, etc.) is this model most applicable to?

    -(Callet) How does the effective complex refractive index model multiple scattering?

    -(Callet) In his model, Callet sometimes used a very coarse approximation for the scattering coefficient, and sometimes he used a more accurate one. In what situations is a coarse approximation applicable? How about an accurate one? Can you make intuitive sense of this?

    Paper 3:

    -(Ellis) Ellis' model is able to consider up to 5 layers of paint (or other material) plus an underlying substrate. What types of materials is this model better suited for than Callet's (which can only handle one layer)?

    -(Ellis) How does Ellis model multiple scattering? How does this differ from Callet's model?

    Overall:

    -(Overall question) What types of applications are these models useful for in industry? Is it worth it to model sub-surface scattering for most types of rendering we do?

    -(Overall question) Compare using Mie theory to Kubelka-Munk theory. Which is more useful for dense materials? Why?

    -(Overall question) What advantages do these models have over more traditional BRDF's?


L17 - SUB-SURFACE EFFECTS

  • Pat Hanrahan and Wolfgang Krueger, Reflection from Layered Surfaces Due to Subsurface Scattering, Computer Graphics (SIGGRAPH '93 Proceedings), 1993, pages 165-174.
  • Julie Dorsey and Pat Hanrahan, Modeling and Rendering of Metallic Patinas, Annual Conference Series, pages 387-396, SIGGRAPH 96 Conference Proceedings, 1996.
  • Stephen H. Westin, James R. Arvo and Kenneth E. Torrance, Predicting Reflectance Functions From Complex Surfaces, Computer Graphics (SIGGRAPH '92 Proceedings), volume 26, pages 255-264, Jul 1992.
  • Supplemental: Lawrence B. Wolff, Diffuse Reflection from Smooth Dielectric Surfaces, pages 26-44, SPIE Proceedings, 1993.
  • Discussion Questions:

    1. Why does it make sense that the reflections in Wolff's simulation are isotropic?

    2. What other effects besides weathering of metals could the erode, fill, polish and coat processes be used for?

    3. What is the largest limitation to looking realistic (the thing that makes or breaks the realism) in the process described in Dorsey and Hanrahan's paper?

    4. This process focuses a lot on the building up of corrosion, but it lacks in other areas. How important is the lack of abrasion, polishing and pitting? Would it be reasonable to introduce a simple process to simulate these using observed effects?

    5. How does this technique still suffer from not being realistic looking? What factors go into making a picture that looks truly realistic?

    6. The paper focuses mainly on metal. What other surfaces would benefit from this technique?

    7. What is the viability of basing more of the process on the chemistry of the erosion?

    8. Why is it important to assume that the scattering is isotropic?


L18 - SAMPLING STRATEGIES

  • Technical treatment I: Glassner PODIS Ch. 9
  • Technical treatment II: Eric Veach and Leonidas J. Guibas, Optimally Combining Sampling Techniques for Monte Carlo Rendering, Annual Conference Series, pages 419-428, SIGGRAPH 95 Conference Proceedings, 1995.
  • Supplemental: Eric Veach, Non-Symmetric Scattering in Light Transport Algorithms, Rendering Techniques '96 (Proceedings of the Seventh Eurographics Workshop on Rendering), pages 81-90, 1996.
  • Supplemental: Don Mitchell and Pat Hanrahan, Illumination from Curved Reflectors, Computer Graphics (SIGGRAPH '92 Proceedings), volume 26, number 2, pages 283-291, Jul 1992.
  • Discussion Questions:

    1. The chapter mainly talks about 2D signals, but rendering involves producing a 2D image from a 3D scene. What are the differences/similarities?

    2. What sampling strategies have we seen so far, and what problem was each sampling method introduced to solve?

    3. What kinds of refinement strategies introduce a bias, and under what circumstances is a bias acceptable? Unacceptable?

    4. What are different ways of reconstructing a sampled signal, given that the chapter does not cover reconstruction in much depth?

    5. What characteristics of a problem determine how you would choose a sampling strategy?
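
For question 4, here are the two simplest reconstruction filters applied to unit-spaced 1D samples (a sketch; higher-quality kernels such as windowed sincs drop into the same sum):

```python
def box(t):
    # Box kernel: selects the nearest sample.
    return 1.0 if -0.5 <= t < 0.5 else 0.0

def tent(t):
    # Tent kernel: linear interpolation between neighbors.
    return max(0.0, 1.0 - abs(t))

def reconstruct(samples, x, kernel):
    """Evaluate the reconstructed signal at x from samples taken at
    integer positions i: sum_i samples[i] * kernel(x - i)."""
    return sum(s * kernel(x - i) for i, s in enumerate(samples))
```

The choice of kernel trades blur against ringing and aliasing, which is exactly the tension the sampling-strategy questions above are probing.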


L19 - PARTICIPATING MEDIA

  • James T. Kajiya and Brian P. Von Herzen, Ray Tracing Volume Densities, pages 165-174, Computer Graphics (SIGGRAPH '84 Proceedings), volume 18, 1984.
  • Holly E. Rushmeier and Kenneth E. Torrance, The Zonal Method for Calculating Light Intensities in the Presence of a Participating Medium, pages 293-302, Computer Graphics (SIGGRAPH '87 Proceedings), volume 21, 1987.
  • Supplemental: Victor Klassen, Modeling the Effect of the Atmosphere on Light, 1987, ACM Transactions on Graphics, volume 6, number 3, pages 215-237.
  • Supplemental: Nelson L. Max, Efficient Light Propagation for Multiple Anisotropic Volume Scattering, Fifth Eurographics Workshop on Rendering, pages 87-104, Jun 1994.
  • Supplemental: Eric P. Lafortune and Yves D. Willems, 1996, Rendering Participating Media with Bidirectional Path Tracing, Rendering Techniques '96 (Proceedings of the Seventh Eurographics Workshop on Rendering), pages 91-100.
  • Supplemental: Jos Stam and Eugene Fiume, Turbulent Wind Fields for Gaseous Phenomena, Computer Graphics (SIGGRAPH '93 Proceedings), 1993, pages 369-376.
  • Presentation Slides:

    Discussion Questions:

    1. What are the two main stages in visualizing volume densities?

    2. Discuss view dependence/independence. What needs to be recomputed when the view changes?

    3. Discuss sampling strategies and artifacts. Is a dynamic environment likely to suffer heavily from these artifacts?

    4. What are the common bottlenecks (storage, complexity)?

    5. Discuss adaptations to complex geometric models (e.g. particle systems).

    6. Discuss applications in photomontage: merging a rendered model with a photograph.



L20 - IMAGE-BASED RENDERING I

  • Leonard McMillan and Gary Bishop, Plenoptic Modeling: An Image-Based Rendering System, Computer Graphics (SIGGRAPH '95 Proceedings), pages 39-46, Aug 1995.
  • Shenchang Eric Chen and Lance Williams, View Interpolation for Image Synthesis, Computer Graphics (SIGGRAPH '93 Proceedings), volume 27, pages 279-288, Aug 1993.
  • Shenchang Eric Chen, Quicktime VR - An Image-Based Approach to Virtual Environment Navigation, Annual Conference Series, pages 29-38, SIGGRAPH 95 Conference Proceedings, 1995.
  • Supplemental: Adelson, E. H. and J. R. Bergen, The Plenoptic Function and the Elements of Early Vision, Computational Models of Visual Processing, MIT Press, 1991.
  • Discussion Questions:

    1. What are the advantages of using an image-based rendering system, as opposed to a system that is geometry-based?

    2. Can you think of some limitations to the IBR systems presented in these papers? (i.e. What assumptions are made, for instance, in the View Interpolation paper or any of the other papers? What effects might be difficult to obtain in an IBR system?)

    3. When might one of the approaches presented here (plenoptic modeling, view interpolation, or QuickTime VR) be more useful than another?

    4. Can you think of some more applications for the systems in these papers, or for IBR systems in general?


L21 - IMAGE-BASED RENDERING II

  • Steven M. Seitz and Charles R. Dyer, View Morphing: Synthesizing 3D Metamorphoses Using Image Transforms, Annual Conference Series, pages 21-30, SIGGRAPH 96, Conference Proceedings, 1996.
  • Jeffry Nimeroff, Julie Dorsey and Holly Rushmeier, Implementation and Analysis of an Image-Based Global Illumination Framework for Animated Environments, IEEE Transactions on Visualization and Computer Graphics, volume 2, number 4, pages 283-298, Dec 1996.
  • Marc Levoy and Pat Hanrahan, Light Field Rendering, Computer Graphics Proceedings, Annual Conference Series, 1996 (ACM SIGGRAPH '96 Proceedings), pages 31-42.
  • Supplemental: Steven J. Gortler, Radek Grzeszczuk, Richard Szeliski and Michael F. Cohen, The Lumigraph, Annual Conference Series, pages 43-54, SIGGRAPH 96 Conference Proceedings, 1996.
  • Discussion Questions:

    (View Morphing)

    1. Can you think of situations where view morphing would fail?

    2. Can you foresee any systems that we have studied in the course that would benefit from this technology? Would they need additional information, e.g. 3D information?

    (Light Field)

    3. In what ways do the View Morphing and Light Field papers complement each other?

    4. A current limitation of the system is that the user can only view the light field from areas of free space. How might this design be extended to overcome this limitation?

    5. Can a lightfield co-exist with 3D Geometry?

    6. Does a lightfield capture optical effects, such as refraction?

    (Framework for Animated Environments)

    7. What are desirable attributes of a walk-through system?

    8. Why would one want to separate the indirect and direct illumination calculations?

    9. What are the advantages of using the described quality measures?

    10. What kinds of image error are introduced by the described implementation of this framework?
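
The two-plane ("light slab") parameterization at the heart of light field rendering reduces each free-space ray to four numbers. A sketch (the plane placements z_uv and z_st are arbitrary choices here):

```python
def ray_to_uvst(origin, direction, z_uv=0.0, z_st=1.0):
    """Map a ray to light-slab coordinates (u, v, s, t): its
    intersections with the uv plane at z = z_uv and the st plane at
    z = z_st.  Assumes the ray is not parallel to the planes."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    tu = (z_uv - oz) / dz
    ts = (z_st - oz) / dz
    return (ox + tu * dx, oy + tu * dy, ox + ts * dx, oy + ts * dy)
```

Rendering a new view is then just a 4D lookup (with quadrilinear interpolation) per pixel, which is why free space between the viewer and the slab is required: any intervening geometry would change the radiance along the ray.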


L22 - INVERSE METHODS AND LINEARITY

  • John K. Kawai, James S. Painter and Michael F. Cohen, Radioptimization - Goal Based Rendering, Computer Graphics Proceedings, Annual Conference Series, pages 147-154, 1993.
  • Chris Schoeneman, Julie Dorsey, Brian Smits, James R. Arvo and Donald Greenberg, Painting With Light, Computer Graphics Proceedings, Annual Conference Series, pages 143-146, 1993.
  • Jeffry S. Nimeroff, Eero Simoncelli and Julie Dorsey, Efficient Re-rendering of Naturally Illuminated Environments, Fifth Eurographics Workshop on Rendering, pages 359-373, Jun 1994.
  • Supplemental: J. Dorsey, James R. Arvo and D. Greenberg, Interactive Design of Complex Time Dependent Lighting, IEEE Computer Graphics and Applications, volume 15, number 2, pages 26-36, Mar 1995.
  • Discussion Questions:

    Radioptimization -- Goal Based Rendering

    1. What are they optimizing (hint: take a look at the objective function)?

    2. Why is radiosity being used as the rendering operator? Could one use ray tracing for this system?

    3. How are the "goals" different from the Painting with Light paper?

    Painting with Light

    1. What are they optimizing (hint: again, the objective function describes the answer)?

    2. What are some advantages/disadvantages of painting onto 2D images rather than 3D patches?

    Efficient Re-rendering of Naturally Illuminated Environments

    1. Under what circumstances can you exploit linearity of light transport?

    2. What are some ways to combine this method with other IBR methods?
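
The linearity exploited in question 1, in code: render one basis image per light (or per steerable basis function), then re-render any light setting as a weighted sum of those images. The toy images are hypothetical; the mechanism is the point.

```python
def relight(basis_images, weights):
    """Because light transport is linear in emission, the image under
    any combination of lights is the weighted sum of per-light basis
    images.  basis_images: list of row-major scalar images of equal
    size; weights: one scalar light intensity per basis image."""
    h, w = len(basis_images[0]), len(basis_images[0][0])
    out = [[0.0] * w for _ in range(h)]
    for wgt, img in zip(weights, basis_images):
        for y in range(h):
            for x in range(w):
                out[y][x] += wgt * img[y][x]
    return out
```

The expensive global illumination solves happen once per basis light; adjusting intensities afterwards costs only this sum, which is what makes the interactive lighting-design systems above feasible.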


L23 - ERROR ESTIMATES / COMPARING REAL & SYNTHETIC IMAGES

  • A. Fournier, A. Gunawan and C. Romanzin, Common Illumination between Real and Computer Generated Scenes, 1993, Proceedings of Graphic Interface, pages 254-262.
  • James R. Arvo, Kenneth Torrance and Brian Smits, A Framework for the Analysis of Error in Global Illumination Algorithms, Proceedings of SIGGRAPH '94 (Orlando, Florida, July 24-29, 1994), pages 75-84, Jul 1994.
  • H. Rushmeier, Gregory J. Ward, C. Piatko, P. Sanders and B. Rust, Comparing real and synthetic images: Some ideas about metrics, Eurographics Rendering Workshop 1995.
  • Supplemental: Dani Lischinski, Brian Smits and Donald P. Greenberg, 1994, Bounds and Error Estimates for Radiosity, Computer Graphics Proceedings, Annual Conference Series, 1994 (ACM SIGGRAPH '94 Proceedings), pages 67-74.
  • Discussion Questions:

    Common Illumination

    - What do you have to do to merge the images such that you cannot tell which parts are real and which are computer-generated?

    - Why is it so difficult to seamlessly merge a real video image with a computer generated one? What makes CGI stand out in a composited image?

    - Their tests were done at low resolution on video. What would have to be changed to apply this method to high-resolution images?

    - What are some possible applications of this technology?

    Framework for the Analysis of Error

    - Why is this measure of the error of the global illumination solution useful?

    - Which method might be more useful: Human judgement or a calculated error value? In what situations?

    Comparing Real and Synthetic Images

    - One obvious metric is requiring that the luminances match at every pixel. Why isn't this necessary or desired?

    - Where might this idea of an image metric be useful?
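
For contrast with the perceptually motivated metrics these questions point toward, here is the "obvious" per-pixel metric they critique (a sketch over scalar-luminance images):

```python
import math

def rms_error(a, b):
    """Root-mean-square difference between two equal-size scalar
    (luminance) images, stored as row-major lists of lists."""
    n = sum(len(row) for row in a)
    total = sum((x - y) ** 2
                for ra, rb in zip(a, b)
                for x, y in zip(ra, rb))
    return math.sqrt(total / n)
```

A one-pixel shift of a sharp edge can make this number large while the images look identical, which is one reason per-pixel luminance matching is neither necessary nor sufficient as a quality measure.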


L24 - COURSE SUMMARY

    Discussion Questions:


    Last modified: 11 April 1997

    boh@graphics.lcs.mit.edu