--------------------------------------------
student     : nilesh
graded by   : Damian Isla
problem set : ivscan
--------------------------------------------
Renders 1st test (triangle)  (0-3): 2
Renders 2nd test (rgb)       (0-2): 2
Renders 3rd test (cube)      (0-2): 2
Renders 4th test (interpen)  (0-2): 1
Renders 5th test (ellipse)   (0-2): 2
Renders 6th test (geekball)  (0-2): 2
Exactitude                   (0-2): 2
total                       (0-15): 13
--------------------------------------------
notes:

Good job. Unfortunately, you had just three problems:

1) You were not correctly initializing the EdgeRec: you were not adjusting
the starting value of X to account for the starting point of the edge being
off pixel center. That is, the initial value of xcurr does NOT equal xstart,
but rather xstart + epsilon (finding epsilon is the trick), because what you
REALLY want is the value of X at the level of the center of the next row of
pixels. Since the edge probably starts below that level, you need to adjust
X. Note that it is not JUST X that should be adjusted in this way, but also
color and depth (and inverse depth). See the sketch after these notes.

2) The same applies when you're interpolating along a scan-line (in
RenderScanLine()). For each pixel that you light up, you want the values of
color and depth at the CENTER of the pixel you're considering. So, if the
starting X of an edge does not happen to be at the center of a pixel, you
need to adjust the starting values of h and color to find their values at
the center of the first pixel. The formula for this 1-D case is:

    hstart = e1->hcurr + (pixelCenter_X - e1->xcurr) * (e1->hcurr - e2->hcurr) / (e1->xcurr - e2->xcurr)

and likewise for color, where pixelCenter_X is the X-coordinate of the
center of the first pixel you're lighting up, e1 is the first edge, and e2
is the second edge.

3) Linearly interpolating Z in screen space is wrong. You needed to instead
interpolate h linearly and derive depth from it in order to gauge depth
correctly. See the worked example below.
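
To make (1) and (2) concrete, here is a minimal C++ sketch of the
pre-stepping, written against a made-up EdgeRec layout. The color and h
fields, the Add/Sub/Scale helpers, WritePixel, and the integer + 0.5
pixel-center convention are illustrative assumptions, not your actual code.

    // Minimal sketch only -- field and helper names are assumptions.
    #include <cmath>

    struct Color { float r, g, b; };

    static Color Add(const Color& a, const Color& b) { return { a.r + b.r, a.g + b.g, a.b + b.b }; }
    static Color Sub(const Color& a, const Color& b) { return { a.r - b.r, a.g - b.g, a.b - b.b }; }
    static Color Scale(const Color& a, float s)      { return { a.r * s, a.g * s, a.b * s }; }

    struct EdgeRec {
        float xstart, ystart;   // lower endpoint of the edge
        Color cstart;           // color at that endpoint
        float hstart;           // h = 1/z at that endpoint
        float dxdy, dhdy;       // change per unit step in y
        Color dcdy;
        float xcurr, hcurr;     // values at the current pixel-center row
        Color ccurr;
    };

    // Stand-in for the real frame-buffer / z-buffer write.
    void WritePixel(int x, int y, const Color& c, float depth)
    {
        (void)x; (void)y; (void)c; (void)depth;
    }

    // (1) Initialize the edge so xcurr/ccurr/hcurr hold the values at the
    // first pixel-center row at or above the edge's start, not at the
    // start itself.
    void InitEdge(EdgeRec* e)
    {
        float firstCenterY = std::ceil(e->ystart - 0.5f) + 0.5f; // smallest (k + 0.5) >= ystart
        float dy = firstCenterY - e->ystart;                     // sub-pixel offset in y

        // The "epsilon" from note (1) is dy * dxdy for X, dy * dhdy for h, etc.
        e->xcurr = e->xstart + dy * e->dxdy;
        e->hcurr = e->hstart + dy * e->dhdy;
        e->ccurr = Add(e->cstart, Scale(e->dcdy, dy));
    }

    // (2) Walk the span between the left edge e1 and right edge e2,
    // sampling color and depth at pixel CENTERS.
    void RenderScanLine(const EdgeRec* e1, const EdgeRec* e2, int y)
    {
        float span = e2->xcurr - e1->xcurr;
        if (span <= 0.0f) return;

        // Per-pixel slopes across the span.
        float dhdx = (e2->hcurr - e1->hcurr) / span;
        Color dcdx = Scale(Sub(e2->ccurr, e1->ccurr), 1.0f / span);

        // First pixel whose center lies at or to the right of e1->xcurr,
        // and the pre-step from the edge crossing to that center -- this is
        // the 1-D formula from note (2).
        int   firstX  = (int)std::ceil(e1->xcurr - 0.5f);
        float centerX = firstX + 0.5f;
        float h = e1->hcurr + (centerX - e1->xcurr) * dhdx;
        Color c = Add(e1->ccurr, Scale(dcdx, centerX - e1->xcurr));

        int lastX = (int)std::ceil(e2->xcurr - 0.5f) - 1;  // last center strictly left of e2
        for (int x = firstX; x <= lastX; ++x) {
            // (3) Depth comes from the interpolated h = 1/z, never from a
            // screen-space linear interpolation of z itself.
            WritePixel(x, y, c, 1.0f / h);
            h += dhdx;
            c = Add(c, dcdx);
        }
    }

The dy pre-step in InitEdge is note (1), the centerX pre-step in
RenderScanLine is the 1-D formula from note (2), and taking per-pixel depth
as 1/h is note (3).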
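
As a small worked example of (3), take a span whose endpoints lie at depths
z = 2 and z = 10. At the screen-space midpoint, linearly interpolating z
gives (2 + 10) / 2 = 6, while linearly interpolating h = 1/z gives
(1/2 + 1/10) / 2 = 0.3, i.e. a true depth of 1 / 0.3 = 10/3, about 3.33.
Screen-space linear z overestimates the true depth across the interior of
the span, which is exactly the kind of error that makes interpenetrating
surfaces resolve the wrong way in the z-buffer. Interpolate h and take 1/h
instead (or compare the h values directly, with the sense of the test
flipped).
--------------------------------------------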