Once we've satisfied the sampling theorem, we can theoretically reconstruct the
function. This requires low-pass filtering the samples that are used to represent
the image.
Remember, the samples that we store in memory are points;
they have no area, only positions. It is this reconstruction
filter that gives a pixel area.
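The ideal low-pass reconstruction filter is the sinc function: the continuous signal is rebuilt as a sum of sinc kernels, one centered at each point sample. A minimal 1D sketch (the helper names and sample values are illustrative, not from the notes):

```python
import math

def sinc(x):
    # Normalized sinc kernel: sin(pi x)/(pi x), with sinc(0) = 1.
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, t, spacing=1.0):
    # Ideal low-pass reconstruction: each point sample contributes a
    # sinc kernel centered at its position; summing them yields the
    # continuous function value at t.
    return sum(s * sinc((t - i * spacing) / spacing)
               for i, s in enumerate(samples))

samples = [0.0, 1.0, 0.5, -0.25]
# At a sample position the sinc kernels of all other samples are zero,
# so reconstruction returns the stored sample value.
print(reconstruct(samples, 2.0))
```

Between sample positions, every sample contributes, which is why reconstruction gives the point samples spatial extent.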
Question: If what I say is true, how can we just stick pixel
values in our frame buffer and see a picture? If they were just points
(as I claim), we wouldn't see anything.
The fact is that a CRT has a built-in reconstruction filter. Most real-world
devices do, because they do not have an infinite frequency response.
Each pixel illuminates a spot on the screen whose intensity falloff is well
approximated by a Gaussian.
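The CRT's behavior can be modeled as convolving the point samples with a Gaussian kernel. A sketch along one scanline (the spot width `sigma=0.5` is an illustrative assumption, not a value from the notes):

```python
import math

def gaussian_spot(r, sigma=0.5):
    # Intensity falloff of one CRT spot at distance r from its center
    # (measured in pixel spacings); sigma is an assumed spot width.
    return math.exp(-(r * r) / (2.0 * sigma * sigma))

def displayed_intensity(pixels, x, sigma=0.5):
    # Each point sample in the frame buffer lights a Gaussian spot
    # centered at its pixel position; the screen shows their sum.
    # This is the CRT's built-in reconstruction filter.
    return sum(v * gaussian_spot(x - i, sigma) for i, v in enumerate(pixels))

pixels = [0.0, 1.0, 1.0, 0.0]
# Midway between the two lit pixels the neighboring spots overlap,
# so the picture looks continuous rather than like isolated points.
print(displayed_intensity(pixels, 1.5))
```

Because adjacent spots overlap, the eye sees a continuous image even though the frame buffer holds only point samples.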