tepples wrote:Super NES VRAM allows one low-byte read and one high-byte read per pixel. The addresses of the two reads are the same except for mode 7 backgrounds. In mode 7, eight map reads and eight texture reads completely fill the time allotted to one sliver of eight pixels. One extra 4bpp layer would be one extra low-byte read and one extra high-byte read for the map entry and two extra low-byte reads and two extra high-byte reads for the pattern data for every eight pixels, for a total of eleven, which exceeds eight by three.
How about a 1bpp layer?
(Admittedly, this could work for stars.) Oh yeah, the graphics format always assumes that the color depth is some multiple of 2 bits. It was a silly idea anyway.
tepples wrote:The GBA PPU can also retrieve multiple pixels at once from VRAM because VRAM is word-wide rather than byte-wide. It can't do this in mode 7
Doesn't this have to do with the fact that two neighboring pixels aren't guaranteed to be next to one another like they are in a regular tiled layer?
Yes.
By this logic, do affine sprites take up twice as many sprite pixels per line, even when not in the double-size (clip-avoiding) mode?
You know, since we're talking about 3D graphics, how exactly does z-buffering work? The most obvious thing to do would be to order all the polygons from farthest to closest, but that's probably way harder than it sounds. What I imagine would work best (though you could only draw polygons in software if it isn't a hardware feature, which it probably is) would be to give every single pixel in the framebuffer a "depth value". When drawing a polygon, for every pixel in it, before even looking up the texture, the GPU would check whether the new pixel's depth puts it in front of or behind the pixel currently occupying that spot. If the new pixel is behind, it doesn't draw anything and doesn't even bother fetching the texel for that pixel. If it's in front, it fetches the texel and writes it into the framebuffer. This could actually explain the "glitchiness" that occurs when two textures are very close to one another but not exactly coplanar (so their depth values aren't exactly the same): like in that example you showed, there are "fluctuations" in where the boundary is drawn, because it can't be a perfect line.
Like in this picture below I made: if you view the scene from the bottom (as opposed to the "aerial" view you're actually seeing), it should be pretty obvious that the blue texture is just a hair closer to you, but if you were only looking at one pixel, you'd have no clue on certain pixels. The purple is where the blue and red textures overlap. The bottom of the image shows roughly what actually gets drawn if, when two textures' pixels land on the exact same spot, the GPU gives the newly drawn texture's pixel priority and lets it overwrite the other; here the red texture is the newly drawn one.
[Attachment: Texture Priority.png]
You know what would kind of make sense? Why not just blend the colors together like in the example above? It's still not pretty, but it's at least better than nothing.
Speaking of transparency, I just realized that this whole idea I came up with completely falls apart when transparency is used... Actually, now that I think about it, I've seen several games where, if two transparent things are drawn over one another, only the one with the highest priority is displayed. That actually makes sense: if the higher-priority thing is drawn first and then the lower-priority one, the depth test sees that the stored pixel is in front, and it can't tell that two pixels ought to be blended, because the first result is already sitting in the framebuffer, where it might as well be any color. I guess there actually is some sorting for transparent objects?