--------------------------------------------
student     : smeaton
graded by   : Damian Isla
problem set : ivscan
--------------------------------------------
Renders 1st test (triangle) (0-3): 2
Renders 2nd test (rgb)      (0-2): 2
Renders 3rd test (cube)     (0-2): 1
Renders 4th test (interpen) (0-2): 1
Renders 5th test (ellipse)  (0-2): 1.5
Renders 6th test (geekball) (0-2): 1.5
Exactitude                  (0-2): 2

total (0-15): 11
--------------------------------------------
notes:

You got most of it right, but there were some problems when we tested it
with some of the more complex models. In the cube, for example, cracks
appeared between pixels that should have been flush. This is probably a
problem in your RenderScanLine function, although it could also come from
how you set up your EdgeRecs. (The second sketch at the end of these notes
shows a fill rule that avoids cracks.)

A few other issues:

1) You were right to adjust the initial xcurr values associated with each
edge in order to get the correct X values at the center of the next
scanline. However, you have to do this with Z, h and color as well. Also,
a better place to do that would have been in EdgeRec::Init (see the first
sketch below).

2) You have to do the same kind of adjustment when interpolating across a
scanline (in RenderScanLine()). For each pixel that you light up, you want
to take the values of color and depth at the CENTER of the pixel you're
considering. So, if the starting X of an edge does not happen to fall at
the center of a pixel, you need to adjust the starting values of h and
color to find their correctly interpolated values at the center of the
first pixel. The formula for this 1-D case is:

hstart = e1->hcurr + (pixelCenter_X - e1->xcurr) * (e1->hcurr - e2->hcurr) / (e1->xcurr - e2->xcurr)

and likewise for color. Here pixelCenter_X is the X-coordinate of the
center of the first pixel you're lighting up, e1 is the first edge, and
e2 is the second edge. (The second sketch below applies exactly this
formula.)

3) Linearly interpolating Z in screen space is wrong. Instead, you needed
to use the linearly interpolated values of h to gauge depth correctly.
(The third sketch below shows the numbers.)
--------------------------------------------
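sketches:

For (1): here is roughly what I mean by doing the pre-step in
EdgeRec::Init. I'm guessing at your member and parameter names (xcurr,
hcurr, dxdy and so on), so take this as a sketch of the idea, not a
drop-in replacement for your class:

#include <cmath>

struct Color { float r, g, b; };

struct EdgeRec {
    float xcurr, zcurr, hcurr;   // interpolants at the current scanline center
    Color ccurr;                 // interpolated color
    float dxdy, dzdy, dhdy;      // per-scanline increments
    Color dcdy;
    int   ystart, yend;          // first/last scanlines this edge covers

    // Set up the edge from its endpoints (x0,y0) -> (x1,y1), with y0 < y1,
    // carrying z, h and color along with x. Pixel centers sit at y + 0.5.
    void Init(float x0, float y0, float z0, float h0, Color c0,
              float x1, float y1, float z1, float h1, Color c1)
    {
        float dy = y1 - y0;
        dxdy = (x1 - x0) / dy;
        dzdy = (z1 - z0) / dy;
        dhdy = (h1 - h0) / dy;
        dcdy = { (c1.r - c0.r) / dy,
                 (c1.g - c0.g) / dy,
                 (c1.b - c0.b) / dy };

        // First scanline whose center (ystart + 0.5) falls at or past y0,
        // last whose center is still before y1 (half-open [y0, y1) rule).
        ystart = (int)std::ceil(y0 - 0.5f);
        yend   = (int)std::ceil(y1 - 0.5f) - 1;

        // Pre-step from y0 down to that first center. This is the
        // adjustment you made for X -- it has to hit Z, h and color too.
        float ystep = (ystart + 0.5f) - y0;
        xcurr = x0 + ystep * dxdy;
        zcurr = z0 + ystep * dzdy;
        hcurr = h0 + ystep * dhdy;
        ccurr = { c0.r + ystep * dcdy.r,
                  c0.g + ystep * dcdy.g,
                  c0.b + ystep * dcdy.b };
    }
};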
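For (2), and for the cracks in the cube: pick pixels by one consistent
rule -- light a pixel exactly when its center x + 0.5 falls in
[left, right) -- and then re-derive h and color at the first pixel
center using the formula in note (2). This reuses Color and EdgeRec from
the first sketch; the Framebuffer type is just a stand-in for whatever
you actually write pixels with:

#include <cmath>
#include <utility>

struct Framebuffer {             // stand-in for your pixel sink
    void WritePixel(int x, int y, Color c, float h);
};

void RenderScanLine(const EdgeRec* e1, const EdgeRec* e2, int y,
                    Framebuffer& fb)
{
    if (e2->xcurr < e1->xcurr) std::swap(e1, e2);   // make e1 the left edge

    float span = e2->xcurr - e1->xcurr;
    if (span <= 0.0f) return;

    // Gradients per unit x across the span.
    float dhdx = (e2->hcurr - e1->hcurr) / span;
    Color dcdx = { (e2->ccurr.r - e1->ccurr.r) / span,
                   (e2->ccurr.g - e1->ccurr.g) / span,
                   (e2->ccurr.b - e1->ccurr.b) / span };

    // Light the pixels whose centers lie in [e1->xcurr, e2->xcurr) --
    // the same ceil(v - 0.5) rule as in Init. Two triangles sharing an
    // edge then split its pixels cleanly: no cracks, no double-writes.
    int xstart = (int)std::ceil(e1->xcurr - 0.5f);
    int xend   = (int)std::ceil(e2->xcurr - 0.5f) - 1;

    // Re-derive h and color at the FIRST pixel center: this is the 1-D
    // formula from note (2), with pixelCenter_X = xstart + 0.5.
    float xstep = (xstart + 0.5f) - e1->xcurr;
    float h = e1->hcurr + xstep * dhdx;
    Color c = { e1->ccurr.r + xstep * dcdx.r,
                e1->ccurr.g + xstep * dcdx.g,
                e1->ccurr.b + xstep * dcdx.b };

    for (int x = xstart; x <= xend; ++x) {
        fb.WritePixel(x, y, c, h);   // depth-test on h, not Z -- see (3)
        h += dhdx;
        c.r += dcdx.r; c.g += dcdx.g; c.b += dcdx.b;
    }
}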
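For (3): a small numeric demo of why screen-space Z is the wrong thing
to interpolate. I'm assuming your h is 1/z (the same idea holds if it's
1/w). h varies linearly across the screen; z does not:

#include <cstdio>

int main()
{
    // A span whose endpoints sit at depths z = 1 and z = 5; look at
    // the point halfway across the span IN SCREEN SPACE.
    float z1 = 1.0f, z2 = 5.0f;
    float t  = 0.5f;

    float z_linear  = z1 + t * (z2 - z1);                 // 3.0 -- wrong
    float h         = 1.0f/z1 + t * (1.0f/z2 - 1.0f/z1);  // h IS linear
    float z_correct = 1.0f / h;                           // ~1.667 -- right

    std::printf("linear Z: %.3f  correct Z via h: %.3f\n",
                z_linear, z_correct);
    return 0;
}

Note that for the depth test itself you never need the divide: for
positive z, "nearer" simply means "larger h", so you can compare hcurr
values in the buffer directly.
--------------------------------------------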