segaloco wrote: ↑Sat Sep 28, 2024 12:24 pm
If you could add Z to the pre-rendering metadata then with known pixel Z values from a given point you could do some sort of post processing for the lighting, no calcs to get the entire Z value during on-line rendering, just plugging the already computed values into the various calcs, something like what a fragment shader would do today. Yeah it's not a full lighting calc, but gives you something to work with from the pre-rendering angle.

I suppose you could have that monster CPU calculate ∇Z in screenspace and figure out lighting from that. But that would cause errors at internal edges (e.g. the silhouette of a hill against a more distant hill), because you're trying to take derivatives across what should be a discontinuity.
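To see why the silhouette case goes wrong, here's a minimal sketch (names and the tiny one-dimensional "scanline" are all made up for illustration) of taking dZ/dx by central differences across a depth buffer:

```python
# Hypothetical sketch: estimate the screen-space slope dZ/dx of a Z
# buffer by central differences. The scanline below has two gently
# sloped surfaces: a near hill (Z ~ 10) silhouetted against a far
# hill (Z ~ 100).

def grad_z(zbuf, x):
    """Central-difference dZ/dx at pixel x (clamped at the borders)."""
    x0 = max(x - 1, 0)
    x1 = min(x + 1, len(zbuf) - 1)
    return (zbuf[x1] - zbuf[x0]) / (x1 - x0)

scan = [10.0, 10.5, 11.0, 100.0, 100.5, 101.0]
slopes = [grad_z(scan, x) for x in range(len(scan))]

# Interior of each surface: slope is ~0.5, as expected.
# At the silhouette (pixels 2 and 3) the difference straddles the
# discontinuity and reports a huge slope (~44.75), which would shade
# those pixels as a near-vertical cliff that isn't there.
```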
Perhaps you could reserve a bit to denote an internal-edge pixel, so the gradient is calculated using only 15-bit Z values from adjacent pixels that don't have that bit set. I wonder if that would look decent. It would impose a minimum feature size, though, or require preprocessing to let some errors through on purpose: with multiple edges too close together (or meeting at too shallow an angle), you'd end up trying to compute a gradient with no data at all...
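As a sketch of that scheme (the bit layout here is an assumption: top bit of a 16-bit word as the edge flag, low 15 bits as Z), the masked gradient for one axis would look something like:

```python
# Hypothetical edge-flagged Z format: bit 15 marks an internal-edge
# pixel, bits 0-14 hold the Z value.
EDGE_BIT = 0x8000

def masked_grad(pixels, x):
    """dZ/dx using only neighbours whose edge bit is clear.
    Falls back to a one-sided difference when only one neighbour is
    usable, and returns None when both are edge-flagged (no data)."""
    left  = pixels[x - 1] if x > 0 else None
    right = pixels[x + 1] if x < len(pixels) - 1 else None
    usable = []
    if left is not None and not (left & EDGE_BIT):
        usable.append((-1, left & 0x7FFF))
    if right is not None and not (right & EDGE_BIT):
        usable.append((+1, right & 0x7FFF))
    if not usable:
        return None  # boxed in: edge pixels on both sides
    if len(usable) == 1:
        direction, z = usable[0]
        return direction * (z - (pixels[x] & 0x7FFF))
    return (usable[1][1] - usable[0][1]) / 2
```

The `None` case is exactly the "no data at all" failure mode: two edges one pixel apart leave nothing valid to difference against.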
I kinda feel like storing normals would have better results. Maybe a sort of normal mesh, so you can do smooth gradients without having to use a ton of bits to store every pixel's normal values. You'd still need Z, though, because you absolutely cannot get Z by integrating normals, nor should you try...
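The "normal mesh" idea could work roughly like this sketch (all names and the grid spacing are invented): store normals only at a coarse grid and bilinearly interpolate per pixel, then renormalise, since lerped unit vectors shrink.

```python
# Hypothetical coarse normal mesh: normals stored every `spacing`
# pixels as (nx, ny, nz) tuples; per-pixel normals are bilinearly
# interpolated. Caller must keep x, y inside the grid interior
# (clamping omitted for brevity).

def lerp(a, b, t):
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

def sample_normal(mesh, spacing, x, y):
    gx, tx = divmod(x, spacing)
    gy, ty = divmod(y, spacing)
    tx /= spacing
    ty /= spacing
    top = lerp(mesh[gy][gx],     mesh[gy][gx + 1],     tx)
    bot = lerp(mesh[gy + 1][gx], mesh[gy + 1][gx + 1], tx)
    nx, ny, nz = lerp(top, bot, ty)
    mag = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / mag, ny / mag, nz / mag)
```

At, say, 8-pixel spacing that's three components per 64 pixels instead of per pixel, which is the whole point.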
...
You could also store an 8-bit stencil descriptor for each pixel to specify which adjacent pixels to take gradient information from. This is still susceptible to getting boxed in with no usable neighbours, and it requires either 24 bits of storage per pixel (on top of the actual colour) or dropping to an 8-bit Z value (probably not good enough). It's still cheaper than storing Z plus three (or even two) normal components.
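A sketch of how that stencil byte might decode (the bit-to-neighbour order here is purely an assumption): one bit per 8-neighbour, set meaning "safe to take gradient info from this one".

```python
# Hypothetical stencil layout: bits 0..7 = W, E, N, S, NW, NE, SW, SE.
OFFSETS = [(-1, 0), (1, 0), (0, -1), (0, 1),
           (-1, -1), (1, -1), (-1, 1), (1, 1)]

def usable_neighbours(stencil, x, y):
    """Expand the per-pixel stencil byte into the neighbour
    coordinates the gradient calc is allowed to sample."""
    return [(x + dx, y + dy)
            for bit, (dx, dy) in enumerate(OFFSETS)
            if stencil & (1 << bit)]

# e.g. 0b00000011 = "W and E only": horizontal gradient allowed,
# vertical suppressed because there's an internal edge above/below.
# 0x00 is the boxed-in case: no neighbours at all.
```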