6.838J/4.214J - Advanced Topics in Computer Graphics
L1 - COURSE INTRODUCTION
L2 - REVIEW OF CLASSICAL RAY TRACING
Discussion Questions:
1) Does ray tracing perform local shading? Global shading?
2) What has to be recomputed when the viewpoint changes position? An object? A light source? How about when some property of a scene material or medium changes?
3) Under what conditions could a ray tracing computation be deemed "convergent"?
4) How might you express the asymptotic complexity of a ray tracing computation? What terms might appear in such an expression?
L3 - REVIEW OF CLASSICAL & MODERN RADIOSITY
Discussion Questions:
1) Does radiosity perform local shading? Global shading?
2) How would a system designer choose between regular and irregular meshing techniques? How might an algorithm employ estimates both of transport error (that is, error in shooting/gathering radiosity) and representation error (that is, error in modeling the radiosity function on a given source/receiver)?
3) Under what conditions could a radiosity computation be deemed "convergent"?
4) What does "linearity" mean in practice for radiosity?
5) How would you contrast ray tracing and radiosity?
L4 - THE RENDERING EQUATION
Discussion Questions:
1) What types of optical effects are not accounted for by the rendering equation?
2) There exist algorithms which extend the "Utah Approximation" to include shadows. How would the approximation
I = ge + gMe
be altered to reflect such an extension?
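For context, a sketch of Kajiya's operator notation, in which g is the geometry/visibility operator, M the transport (reflection) operator, and e the emitted intensity: the full rendering equation and the Neumann series it formally expands into are

```latex
I = ge + gMI
\quad\Longrightarrow\quad
I = ge + gMge + gMgMge + \cdots
```

Note that the Utah approximation above applies M directly to e, with no interleaved visibility operator g between a surface and the lights, which is one way to see why it produces no shadows.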
L5 - MONTE CARLO METHODS (I)
Discussion Questions:
1) How do the rendering approaches described in the Cook and Ward papers improve upon classical ray tracing?
2) In what ways does jittering work better than adaptive supersampling in sampling the image plane?
3) What sort of weighting functions should be used for time sampling? How does the level of glossiness affect the sampling of reflective light? The level of translucency affect the sampling of transmitted light?
4) Is there a better way to estimate the Illuminance Integral (used in the "primary method" of the Ward paper) than the naive Monte Carlo method used in the paper? What efficiency tradeoffs are there?
5) What advantages does using Ward's ray tracing solution for diffuse interreflections have over radiosity techniques? (Maybe too obvious.)
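As background for question 4, a minimal sketch of the kind of naive Monte Carlo estimator in question (not Ward's actual code; the constant-radiance environment and function names are illustrative): estimate the irradiance E = ∫ L(ω) cos θ dω over the hemisphere by drawing uniform directions and dividing by the pdf.

```python
import math, random

def sample_uniform_hemisphere():
    # Uniform direction over the hemisphere about +z: pdf(w) = 1 / (2*pi).
    u1, u2 = random.random(), random.random()
    z = u1                                # cos(theta) is uniform on [0, 1)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_irradiance(radiance, n=100_000):
    # Naive Monte Carlo: E ~= (1/n) * sum of L(w) * cos(theta) / pdf(w).
    total = 0.0
    for _ in range(n):
        d = sample_uniform_hemisphere()
        total += radiance(d) * d[2] * (2.0 * math.pi)
    return total / n

# Constant unit radiance: the exact irradiance is pi.
random.seed(1)
print(estimate_irradiance(lambda d: 1.0))   # close to pi
```

Cosine-weighted sampling of the same integral is one classic lower-variance alternative, at the cost of a slightly more involved sampling routine.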
L6 - MONTE CARLO METHODS (II)
Discussion Questions:
L7 - MULTIPASS METHODS
Discussion Questions:
1) The readings often mention how well ray tracing and radiosity complement each other. The obvious complement is that ray tracing does specular well and radiosity doesn't, and vice-versa for diffuse. In what other ways do the two complement each other (one's weakness is another's strength)?
2) The readings discuss a simple two-pass method in which a radiosity first pass takes care of diffuse reflection, and then a ray tracing second pass takes care of specular reflection/refraction. What is the essential flaw in the method? Are there others?
3) The three multi-pass methods discussed in the readings (Sillion and Puech's, bi-directional ray tracing, and Chen et al.'s) each employ more than one algorithm (ray tracing, radiosity, photon tracing, etc.). What strengths/weaknesses of its included algorithms is each of the methods trying to utilize/compensate for? How do the three methods compare?
L8 - IMPORTANCE ALGORITHMS
Discussion Questions:
1. Consider an importance algorithm for rendering animation. Will the importance algorithm do more/less/the same work if the (known) path catches few/some/most of the surfaces in the scene?
What if the path is unknown?
2. All four of these papers deal with importance and hierarchical or wavelet algorithms.
Why?
Would a "naive" radiosity algorithm gain from importance calculations?
3. Both the Christensen and the Aupperle papers consider importance in solving global radiosity glossy reflectance.
Why might algorithms that consider glossy reflectance particularly benefit from importance calculations? (i.e. what kind of useless work would importance remove from scenes with mirror like surfaces?)
4. We saw that one advantage of radiosity over ray tracing was its ability to correctly capture color bleeding and diffuse interreflections (recall the inability of classical ray tracing to correctly show the colors from Ferren's sculpture).
Could importance calculations introduce these kinds of problems (or others) into radiosity?
L9 - CLUSTERING
Discussion Questions:
1. Discuss some methods of building clusters.
2. When transferring energy from one cluster to another, what problems does visibility create? How can this be resolved?
3. Why are more refinement tests needed when running the program with visibility coherence than without? What effect does this have on the scene?
4. How can a visibility algorithm be improved?
5. For alpha and beta links what are the advantages and disadvantages of using these methods?
6. What can be done to raise the lower bound of energy transfer above zero?
L10 - RADIOMETRIC & GEOMETRIC ACCELERATION
Discussion Questions:
1) How does this scheme take advantage of object coherence? ray coherence? frame (visibility) coherence?
2) How does this scheme balance solution error and running time?
3) How does this scheme differ from those of Ward and Jansen in terms of estimating radiance?
4) What is lazy subdivision? How is it used and why is it so useful here?
5) How is ray intersection cost amortized over multiple rays in this scheme?
L11 - DYNAMIC ENVIRONMENTS
Discussion Questions:
1. Briere and Poulin's paper claims color trees can be used to recompute changes in procedural textures. In which cases does this not work?
2. Looking closely at the performance these papers state they have achieved, are any of these systems truly "interactive"?
3. What trade-offs do each of these applications make between speed, memory, and error?
4. Think of three applications where realistic interactive 3D is useful. Do these techniques help those applications? What are the limitations?
5. Brainstorm and write down three other ways of accelerating 3D image synthesis that might help achieve interactivity (hardware acceleration, geometry reduction, etc.). Be creative -- make something up.
6. Be prepared to give a 1-3 minute explanation of how your favorite interactive 3D system achieves realistic image synthesis & interactivity (Doom, VRML, etc.). What are the trade-offs/limitations?
L12 - GENERAL REFLECTANCE MODELS
Discussion Questions:
1. Why do we want/need realistic renderings?
2. What kind of applications are empirical models suited for?
3. What kind of applications are theoretical models suited for?
4. Are theoretical models better than empirical models?
5. Are reflectance models really physical?
6. Are reflectance models actually realistic?
7. Why and when are theoretical models better than empirical ones?
8. What are the advantages and disadvantages of unifying the diffuse and specular reflector into one?
9. How important is the accuracy of the specular highlight?
10. Why can we simplify the reflectance/shading models?
L13 - TEXTURE MAPPING
Discussion Questions:
Texture Mapping
1. What's the difference between screen and texture scanning?
2. Why doesn't bilinear mapping preserve diagonal lines?
3. How is perspective mapping able to preserve diagonal lines?
4. Why do we need to filter the texture when scanning?
5. What's the cost of convolution filters?
6. Why are pyramid filters the filter of choice for hardware vendors?
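Questions 2 and 3 hinge on the difference between interpolating (u, v) linearly in screen space and interpolating perspective-correctly. A small numeric sketch (the endpoint depths w0 and w1 are made-up values for illustration):

```python
def affine_interp(t, uv0, uv1):
    # Screen-space linear interpolation of (u, v); ignores depth entirely.
    return tuple((1 - t) * a + t * b for a, b in zip(uv0, uv1))

def perspective_interp(t, uv0, w0, uv1, w1):
    # Interpolate u/w, v/w, and 1/w linearly in screen space, then divide:
    # the perspective-correct mapping.
    inv_w = (1 - t) / w0 + t / w1
    return tuple(((1 - t) * a / w0 + t * b / w1) / inv_w
                 for a, b in zip(uv0, uv1))

# Screen-space midpoint of an edge running from depth w=1 to w=4:
print(affine_interp(0.5, (0.0, 0.0), (1.0, 1.0)))                  # (0.5, 0.5)
print(perspective_interp(0.5, (0.0, 0.0), 1.0, (1.0, 1.0), 4.0))   # (0.2, 0.2)
```

The affine result lands at the texture midpoint regardless of depth; that depth-blindness is what bends diagonal lines in texture space under perspective.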
Bump and Displacement mapping
1. What's the difference between the two?
2. Why use geometry caching for rendering displacement maps?
3. Why use shade trees over conventional shading?
L14 - SOLID TEXTURES AND TEXTURE SYNTHESIS
Discussion Questions:
Image synthesizer:
1.) What are the limitations of this system? Are these limitations inherent in the system, or a function of the limited knowledge of the users? That is, is the system flawed, or do we just not know how to use it?
Solid texturing of complex surfaces:
2.) What are the advantages of solid texturing? The disadvantages?
3.) What's the difference between a digitized and a synthetic texture? Which is easier to synthesize? Which is more useful?
4.) How can we get around the costs of solid texturing? Is it really that expensive?
Pyramid-based Texture analysis/synthesis:
5.) What textures does this system work well with? With what textures does it fail? Why?
6.) Why do we even need this technique? That is, if we have the sample, what's the point in creating an imperfect copy of it?
7.) Is this system's implementation of solid textures useful? How would you make a system that had this system's 2D functionality for solid textures? That is, how would you make a pyramid-based solid texture analyzer/synthesizer?
Rendering fur:
8.) This system is very limited. Does it have any applications outside of modeling teddy bears?
9.) This paper opens the door on using texture to replace geometry. What are some applications of this idea? What kinds of objects can we model with texels?
L15 - CELLULAR TEXTURES AND 3D PAINTING
Discussion Questions:
Questions for L15 (3D Texturing):
Hanrahan:
1) Painting onto 3D objects is the Common Metaphor in this paper. Identify others that have been/could be used in computer graphics.
2) Why not just use 2D textures?
3) How is the brush mapped onto the object? What other possibilities are there? What are the advantages/disadvantages of each method?
4) What is the id buffer? When is it constructed, and at what cost?
5) Why is transparency impractical to implement in this system?
6) Design a new operator for this system based on your answer to 1)
Worley:
1) What differentiates this basis function approach from other texture generation approaches?
2) What makes Perlin's Noise function so useful? What needs precomputing?
3) What is precomputed for the Cellular Basis? What is done on the fly?
4) What does a 2D F1 function look like?
5) What would an arbitrary slice through the 3D F1 function look like?
6) Given the best value of F1 for feature points in the cubic volume containing our sample point, how can we determine which neighboring cubes we can skip checking?
7) How can we use texture maps to seed feature point placement?
8) What other functions can we design that use this cellular basis? E.g., smooth it, invert it, etc.
9) Is this basis coordinate system dependent? Will the pattern on the surface of an object change if we rotate it? Might there be advantages/disadvantages to this?
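For questions 4-6, a toy sketch of evaluating an F1 basis function (with two simplifying assumptions that Worley's paper does not make: exactly one feature point per unit cube rather than a Poisson-distributed count, and a fixed 3x3x3 neighborhood search instead of his distance-based skip test):

```python
import math, random

def f1(p, seed=0):
    # Distance from point p to its nearest feature point: Worley's F1 basis.
    cx, cy, cz = (math.floor(c) for c in p)
    best = float("inf")
    # Naively check the 3x3x3 block of unit cubes around p's cube; question 6
    # asks when some of these 26 neighbors can be skipped.
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                cube = (cx + dx, cy + dy, cz + dz)
                rng = random.Random(hash((cube, seed)))   # deterministic per cube
                feature = tuple(c + rng.random() for c in cube)
                best = min(best, math.dist(p, feature))
    return best
```

Evaluating this over a 2D slice (fixed z) gives a feel for what questions 4 and 5 ask you to visualize; note that the per-cube hashing makes the pattern fully determined by world coordinates, which bears on question 9.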
Fleischer:
1) Motivation: Texture mapping can often allow a complex geometric representation to be replaced by a much simpler representation. Under what circumstances is texture mapping inadequate?
2) Background: What are a) L-Systems, b) Particle Systems, and c) Reaction-Diffusion?
3) Cell Distribution: What user controls help guide the process after it is initiated?
4) What does a cell know about its environment? How does this influence distribution, etc.?
5) What is a quaternion, and why shouldn't it differ greatly from unity during the process?
6) A number of Cell Programs are detailed, such as: adhere to other cells, divide until surface is covered, etc. What other behaviors might be useful?
7) Cell programs modify attributes using steepest descent energy minimization. This requires a differentiable function. How is this requirement circumvented for imposing surface constraints in the case of non-implicit surfaces?
8) When in the process is Level of Detail considered?
9) How are byproducts of the cell distribution step used in rendering?
10) What collision issues can arise when converting cells to geometry? How can they be avoided?
L16 - PHYSICALLY MOTIVATED SURFACE MODELS
Discussion Questions:
Paper 1:
-(Gondek) What are the advantages and disadvantages of using the capture sphere? Compare it to using empirical methods.
-(Gondek) How does the capture sphere compare to the F-BEAM (Ellis) model? Which is more accurate with specular reflections and why? Will this matter for the types of material that we are trying to model?
Paper 2:
-(Callet) Callet's model only applies to single-layered media. How is this restrictive? What kinds of media (e.g. water, space) is this model most applicable to?
-(Callet) How does the effective complex refractive index model multiple scattering?
-(Callet) In his model, Callet sometimes used a very coarse approximation for the scattering coefficient, and sometimes he used a more accurate one. In what situations is a coarse approximation applicable? How about an accurate one? Can you make intuitive sense of this?
Paper 3:
-(Ellis) Ellis' model is able to consider up to 5 layers of paint (or other material) plus an underlying substrate. What types of materials is this model better suited for than Callet's (which can only do one layer)?
-(Ellis) How does Ellis model multiple scattering? How does this differ from Callet's model?
Overall:
-(Overall question) What types of applications are these models useful for in industry? Is it worth it to model sub-surface scattering for most types of rendering we do?
-(Overall question) Compare using Mie theory to Kubelka-Munk theory. Which is more useful for dense materials? Why?
-(Overall question) What advantages do these models have over more traditional BRDF's?
L17 - SUB-SURFACE EFFECTS
Discussion Questions:
1. Why does it make sense that the reflections in Wolff's simulation are isotropic?
2. What other effects besides weathering of metals could the erode, fill, polish, and coat processes be used for?
3. What is the largest limitation to looking realistic (the thing that makes or breaks the realism) in the process described in Dorsey and Hanrahan's paper?
4. This process focuses a lot on the building up of corrosion, but it lacks in other areas. How important do people feel the lack of abrasion, polishing and pitting are? Do people feel that it would be reasonable to introduce a simple process to simulate these using observed effects?
5. How does this technique still fall short of looking realistic? What factors go into making a picture that looks really realistic?
6. The paper focuses mainly on metal. What other surfaces would benefit from this technique?
7. What is the viability of basing more of the process on the chemistry of the erosion?
8. Why is it important to assume that the scattering is isotropic?
L18 - SAMPLING STRATEGIES
Discussion Questions:
1. The chapter mainly talks about 2D signals, but rendering involves producing a 2D image from a 3D scene. What are the differences/similarities?
2. What sampling strategies have we seen so far, and what problem was each method introduced to solve?
3. What kinds of refinement strategies introduce a bias, and under what situations would a bias be OK? Not OK?
4. What are different ways of reconstructing a sampled signal, given that the chapter in the book does not cover this much?
5. What characteristics of a scene or rendering task determine which sampling strategy you would choose?
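Relating back to question 2, a minimal sketch contrasting two of the image-plane strategies seen so far: pure uniform random sampling versus jittered (stratified) sampling. The function names are illustrative.

```python
import random

def uniform_random_samples(n):
    # n*n independent uniform samples over [0,1)^2: can clump and leave gaps.
    return [(random.random(), random.random()) for _ in range(n * n)]

def jittered_samples(n):
    # One sample per cell of an n x n grid: stratification bounds clumping,
    # while the random offset turns aliasing into less objectionable noise.
    return [((i + random.random()) / n, (j + random.random()) / n)
            for i in range(n) for j in range(n)]
```

Both produce n² samples, but jittering guarantees exactly one per grid cell, which is the property that lowers variance relative to the unstratified estimator.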
L19 - PARTICIPATING MEDIA
Discussion Questions:
1. What are the two main stages in visualizing volume densities?
2. Discuss view dependence/independence. What needs to be recomputed when the view changes?
3. Discuss sampling strategies and artifacts. Is a dynamic environment likely to suffer heavily from these artifacts?
4. What are the common bottlenecks (storage, complexity)?
5. Discuss adaptations to complex geometric models (e.g. particle systems).
6. Discuss applications in photomontage: merging a rendered model with a photograph.
L20 - IMAGE-BASED RENDERING I
Discussion Questions:
1. What are the advantages of using an image-based rendering system, as opposed to a system that is geometry-based?
2. Can you think of some limitations to the IBR systems presented in these papers? (i.e. What assumptions are made, for instance, in the View Interpolation paper or any of the other papers? What effects might be difficult to obtain in an IBR system?)
3. When might one of the approaches presented here (plenoptic modeling, view interpolation, or QuickTime VR) be more useful than another?
4. Can you think of some more applications for the systems in these papers, or for IBR systems in general?
L21 - IMAGE-BASED RENDERING II
Discussion Questions:
(View Morphing)
1. Can you think of situations where view morphing would fail?
2. Can you foresee any systems that we have studied in the course that would benefit from this technology? Would they need additional information, e.g. 3D information?
(Light Field)
3. In what ways do the View Morphing and Lightfield papers complement each other?
4. A current limitation of the system is that the user can only view the lightfield from an area of free space. How might this design be extended to overcome this limitation?
5. Can a lightfield co-exist with 3D Geometry?
6. Does a lightfield capture optical effects, such as refraction?
(Framework for Animated Environments)
7. What are desirable attributes of a walk-through system?
8. Why would one want to separate the indirect and direct illumination calculations?
9. What are the advantages of using the described quality measures?
10. What kinds of image error are introduced by the described implementation of this framework?
L22 - INVERSE METHODS AND LINEARITY
Discussion Questions:
Radioptimization -- Goal Based Rendering
1. What are they optimizing (hint: take a look at the objective function)?
2. Why is radiosity being used as the rendering operator? Could one use ray tracing for this system?
3. How are the "goals" different from the Painting with Light paper?
Painting with Light
1. What are they optimizing (hint: again, the objective function describes the answer)?
2. What are some advantages/disadvantages of painting on to 2D images rather than 3D patches?
Efficient Re-rendering of Naturally Illuminated Environments
1. Under what circumstances can you exploit linearity of light transport?
2. What are some ways to combine this method with other IBR methods?
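As a concrete illustration for question 1 (a sketch with made-up 2x2 "basis images"; the paper itself uses steerable daylight basis functions, whereas plain per-light basis images stand in here): because light transport is linear in the emitted light, a scene rendered once per light source at unit intensity can be re-lit under any intensities by a weighted sum, with no re-rendering.

```python
import numpy as np

# Hypothetical precomputed basis images: the scene rendered once per light
# source, each light at unit intensity (2x2-pixel images for illustration).
basis = np.array([
    [[0.2, 0.0], [0.1, 0.3]],   # contribution of light 0
    [[0.0, 0.4], [0.2, 0.1]],   # contribution of light 1
])

def relight(intensities):
    # Linearity of light transport: the image under any light settings is a
    # weighted sum of the basis images -- no global solve is repeated.
    return np.tensordot(np.asarray(intensities), basis, axes=1)

img = relight([2.0, 0.5])   # light 0 doubled, light 1 dimmed
```

The precomputation cost is one rendering per basis light, which is the trade the paper exploits for interactive re-rendering.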
L23 - ERROR ESTIMATES / COMPARING REAL & SYNTHETIC IMAGES
Discussion Questions:
Common Illumination
- What do you have to do to merge the images such that you cannot tell which parts are real and which are computer-generated?
- Why is it so difficult to seamlessly merge a real video image with a computer generated one? What makes CGI stand out in a composited image?
- Their tests were done at low-resolutions on video. What would have to be changed to apply this method to high-resolution images?
- What are some possible applications of this technology?
Framework for the Analysis of Error
- Why is this measure of the error of the global illumination solution useful?
- Which method might be more useful: Human judgement or a calculated error value? In what situations?
Comparing Real and Synthetic Images
- One obvious metric is requiring that the luminances match at every pixel. Why isn't this necessary or desired?
- Where might this idea of an image metric be useful?
L24 - COURSE SUMMARY
Discussion Questions:
Last modified: 11 April 1997
boh@graphics.lcs.mit.edu