
end-to-end checks with photographs and photo-textures




here are some thoughts about how to use photographs and
photo-textures as "end-to-end" checks on the quality of
the model in the sense of 6.033 -- that is, checks of the
ultimate fidelity (to reality) of the 3D generation step.

photographs record what's "on the ground."  photo-textures,
say of a single facade, report what's on that facade (plus
a lot of clutter and occlusion, which we can ignore for the
moment).

suppose that, in the 2D CAD, we define a place for "per-facade"
photo-texture data, and a rule for mapping it onto each building
facade during generation. (see * below.)  now
we run the generation, build the textured model with windows, 
doors, etc., and simply render it in simulation.  do the doors,
windows, building edges, rooflines, etc. _from the geometry_
(i.e., from our generated model) align with the doors,
windows, etc. _from the phototexture_ (i.e., from an
observation of reality)?
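to make the check concrete, here is a minimal sketch in python.  the
feature points, the pixel tolerance, and the function names are all
hypothetical; in practice the geometry points would come from
projecting the model's window/door corners into the rendered image,
and the texture points from detecting the same features in the
photo-texture.

```python
# hypothetical sketch of the per-facade alignment check.  inputs are
# corresponding 2D feature points (window corners, door edges, ...)
# from the rendered geometry and from the photo-texture.

def alignment_error(geometry_pts, texture_pts):
    """mean euclidean distance (in pixels) between corresponding
    feature points as seen in the generated geometry and in the
    photo-texture."""
    assert len(geometry_pts) == len(texture_pts)
    total = 0.0
    for (gx, gy), (tx, ty) in zip(geometry_pts, texture_pts):
        total += ((gx - tx) ** 2 + (gy - ty) ** 2) ** 0.5
    return total / len(geometry_pts)

def check_facade(geometry_pts, texture_pts, tol_px=3.0):
    """end-to-end check: do the facade's features line up to within
    tol_px pixels?  (tol_px is a made-up tolerance.)"""
    return alignment_error(geometry_pts, texture_pts) <= tol_px
```

note that a uniform shift in the texture coordinates shows up
directly as the mean error, so the first-order failures listed below
(shifted or scaled coordinates, wrong extrusion sizes) are exactly
the ones this catches.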

if they don't align, something is wrong from an end-to-end point of
view: the texture is the wrong one, or it is mirrored; the texture
coordinates are incorrect -- shifted, scaled, etc.; or the geometry
is wrong -- windows haven't been extruded with the right width, or
height, or whatever.

if everything lines up (things will never be perfect, but say we get
close), that's not a proof that things are right, but at least it
rules out the kind of first-order errors listed above.

you can make the same kind of argument for photographs not
of a single facade, but of constellations of several buildings.
we can compare the photograph with the generated model, 
rendered from the same position and camera parameters.  are
the buildings in the same relation to one another in both
images?  if not, then something is wrong, end-to-end, and it
should be investigated.
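a sketch of that comparison, assuming an idealized pinhole camera
with no rotation or lens distortion (the camera position and focal
length stand in for the photograph's recovered parameters; all names
here are hypothetical):

```python
# hypothetical sketch: render the model's building reference points
# through the same camera as the photograph, and ask whether the
# buildings stand in the same relation in both images.

def project(point3d, cam_pos, focal):
    """pinhole projection of a world point into image coordinates,
    with the camera at cam_pos looking down +z (no rotation, for
    brevity)."""
    x, y, z = (p - c for p, c in zip(point3d, cam_pos))
    if z <= 0:
        raise ValueError("point behind camera")
    return (focal * x / z, focal * y / z)

def same_relation(model_pts, photo_pts, cam_pos, focal, tol=0.05):
    """compare projected model points against the positions measured
    in the photograph; true if every building lands within tol of
    where the photograph says it should be."""
    proj = [project(p, cam_pos, focal) for p in model_pts]
    return all(abs(px - qx) <= tol and abs(py - qy) <= tol
               for (px, py), (qx, qy) in zip(proj, photo_pts))
```

a real version would of course use the full recovered pose and
intrinsics, but the structure of the check is the same: generate the
view, then compare against the reference.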

we can even apply this idea indoors, to the generated
floorplans and furniture!  although clearly it would be
much harder to acquire useful contextual photos indoors,
and recover their camera parameters so the same views 
could be generated in simulation.

in general, this idea of "generate and compare to some known
reference" is a powerful way of checking end-to-end
quality.  i'd like you all to think about how we can 
incorporate this kind of thing into BMG.

prof. t.

* note that this brings up several sticky issues.  what is
a facade?  easy to say in tech square; harder to say for the
chapel or kresge.  how are facades referred to?  by relative
orientation in the CAD drawing?  by absolute orientation on
campus?  by the order in which they are generated?  note that
we have to solve these to make the generation process well-
defined in an algorithmic sense.  it's made harder by the 
fact that facades themselves are derived quantities -- they're
not explicitly present in the 2D source data.
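one possible (hypothetical) way to pin this down: derive facades as
the edges of the 2D footprint polygon, and refer to each by the
absolute compass bearing of its outward normal.  a sketch, assuming
a simple counterclockwise footprint -- fine for tech square, not for
kresge:

```python
import math

def facade_bearings(footprint):
    """footprint: list of (x, y) vertices in counterclockwise order.
    returns one compass bearing per edge (degrees, 0 = north = +y,
    increasing clockwise), computed from the edge's outward normal.
    this gives each facade a name that is independent of vertex
    ordering conventions in the source drawing."""
    bearings = []
    n = len(footprint)
    for i in range(n):
        (x1, y1), (x2, y2) = footprint[i], footprint[(i + 1) % n]
        # for a CCW polygon, the outward normal of edge direction
        # (dx, dy) is (dy, -dx)
        nx, ny = (y2 - y1), -(x2 - x1)
        bearings.append(math.degrees(math.atan2(nx, ny)) % 360)
    return bearings
```

for a unit square this yields bearings of 180, 90, 0, 270 for the
south, east, north, and west facades, so "the north facade" becomes
a well-defined, derivable quantity even though no facade appears
explicitly in the 2D source data.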