Another Raycasting Demo


tepples
Posts: 22708
Joined: Sun Sep 19, 2004 11:12 pm
Location: NE Indiana, USA (NTSC)
Contact:

Re: Another Raycasting Demo

Post by tepples »

Animation of RLE ray casting, the intermediate between Wolf3D and Build engines

Key:
  • Orange lines: Edges of viewport
  • Light blue lines: Edges of area not yet deemed to be obscured
  • Light blue blocks: Sectors containing visible walls
  • Dark blue blocks: Sectors determined to contain visible floor
  • Green at end: Visible walls
Attachments
portalcasting_animation.gif
tokumaru
Posts: 12427
Joined: Sat Feb 12, 2005 9:43 pm
Location: Rio de Janeiro - Brazil

Re: Another Raycasting Demo

Post by tokumaru »

Oh, now I see what you meant by "strips". Cool idea! I still don't know how to properly select a texture slice for each rendered column though...
Celius
Posts: 2158
Joined: Sun Jun 05, 2005 2:04 pm
Location: Minneapolis, Minnesota, United States
Contact:

Re: Another Raycasting Demo

Post by Celius »

This is a cool idea. I'd always thought about how something like this would work.

If you had solid walls, you could theoretically just find the on-screen coordinates of a wall side's 4 corners, and use a line drawing algorithm to connect the dots. Then after that, use something like XOR filling to fill them in. Though, this would get more complicated when you have partial walls sticking out from behind another wall in the distance. But if implemented correctly, this might make for a fast, high-resolution raycaster.
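
Just to make the geometry concrete, here's a rough C sketch of the corner-projection step (the focal length, screen center and wall size are made-up constants, not from any actual engine): each corner collapses to a screen column plus a half-height, and the line drawer would connect the resulting top and bottom edges before the fill.

Code: Select all

#include <stdint.h>

#define FOCAL      128  /* assumed focal length, in pixels            */
#define CENTER_X   128  /* horizontal screen center                   */
#define HORIZON     64  /* vertical screen center (horizon line)      */
#define WALL_HALF   64  /* assumed half-height of a wall, world units */

/* Project one wall corner, given in camera space (x to the right,
   z straight ahead), to a screen column and the top/bottom rows of
   the wall edge at that column. cam_z must be > 0 (in front of us). */
static void project_corner(int16_t cam_x, int16_t cam_z,
                           int16_t *screen_x, int16_t *top, int16_t *bottom)
{
    int16_t half = (int16_t)((FOCAL * WALL_HALF) / cam_z);  /* perspective divide */

    *screen_x = (int16_t)(CENTER_X + (FOCAL * cam_x) / cam_z);
    *top      = (int16_t)(HORIZON - half);
    *bottom   = (int16_t)(HORIZON + half);
}

On a real NES the divisions would have to become table lookups, but the shape of the math is the same.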
tokumaru
Posts: 12427
Joined: Sat Feb 12, 2005 9:43 pm
Location: Rio de Janeiro - Brazil

Re: Another Raycasting Demo

Post by tokumaru »

Celius wrote:If you had solid walls, you could theoretically just find the on-screen coordinates of a wall side's 4 corners, and use a line drawing algorithm to connect the dots. Then after that, use something like XOR filling to fill them in.
I have always considered something like this. I thought about defining rooms as polygons (which could result in worlds more complex than those built with boxes), and finding the angle and distance to all the corners relative to the player (much like is done with objects) in order to render the graphics. A line drawing algorithm would be used to interpolate the wall heights between adjacent corners, like you said.
Though, this would get more complicated when you have partial walls sticking out from behind another wall in the distance.
Yeah, but I bet there's a decent algorithm that can be used to sort the distances out and avoid wasting processing power on stuff that's not visible. We just have to think carefully.

It's still unclear to me how the textures would be rendered. How to "stamp" the texture without messing up the perspective on walls that are being interpolated? I'm sure there's something about that in 3D literature, but is it feasible on the NES? I'm talking about this:
perspective-correction.png
With typical raycasting, which fires one ray for each screen column, you get the correct perspective for free; but if you only detect the edges of the wall and use a (linear) line drawing algorithm to interpolate between them, you get what's shown on the left side.

There's an interesting Game Boy demo called Back to Earth that has slanted walls and decent texture mapping. Maybe their algorithm is worth checking out.
rainwarrior
Posts: 8732
Joined: Sun Jan 22, 2012 12:03 pm
Location: Canada
Contact:

Re: Another Raycasting Demo

Post by rainwarrior »

To get perspective correct texture mapping, instead of interpolating texture coordinates u, v linearly across space, divide the endpoints by depth z, then interpolate u/z, v/z and 1/z linearly as you rasterize. To recover the perspective correct u, v divide the interpolated u/z, v/z by the interpolated 1/z.

It's probably faster than trying to trace a ray for every pixel, but can you do it fast/accurate enough on the NES? At least you only have to perspective correct the horizontal texture coordinate for the walls.

You can also compromise by subdividing, i.e. break the rasterized object into two parts with perspective correction at the break, but linearly interpolate from there. This way you can trade speed for accuracy by choosing how many times to subdivide.
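
A tiny C sketch of that, just to show the arithmetic (draw_column is a made-up placeholder, and it's floating point, which the NES obviously doesn't have):

Code: Select all

/* Perspective-correct u across one wall span: interpolate u/z and 1/z
   linearly in screen space, then divide to recover u at each column. */
extern void draw_column(int x, float u, float z);  /* hypothetical renderer */

static void draw_wall_span(int x0, int x1,
                           float u0, float u1, float z0, float z1)
{
    float uoz0 = u0 / z0,   uoz1 = u1 / z1;    /* u/z at each endpoint */
    float ooz0 = 1.0f / z0, ooz1 = 1.0f / z1;  /* 1/z at each endpoint */

    for (int x = x0; x <= x1; x++) {
        float t   = (x1 == x0) ? 0.0f : (float)(x - x0) / (float)(x1 - x0);
        float uoz = uoz0 + t * (uoz1 - uoz0);  /* linear in screen space */
        float ooz = ooz0 + t * (ooz1 - ooz0);  /* linear in screen space */

        draw_column(x, uoz / ooz, 1.0f / ooz); /* perspective-correct u, z */
    }
}

The subdivision compromise amounts to doing the divide only at the subdivision points and interpolating u linearly in between.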
Celius
Posts: 2158
Joined: Sun Jun 05, 2005 2:04 pm
Location: Minneapolis, Minnesota, United States
Contact:

Re: Another Raycasting Demo

Post by Celius »

tokumaru wrote:It's still unclear to me how the textures would be rendered. How to "stamp" the texture without messing up the perspective on walls that are being interpolated?
I also wondered about this. Assuming you still have strictly vertical walls, you can get the perspective for free for the "Y" axis of the textures. Picture that all of your textures were just walls with horizontal stripes. You would be able to find where these stripes land on each end of the wall, and connect the dots using line drawing. But that only takes care of 1 dimension.
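
For what it's worth, finding where a stripe boundary lands at each wall edge is just a linear mapping, something like this little C sketch (names are made up):

Code: Select all

/* Screen row where texture row k (one horizontal "stripe" boundary)
   lands at a wall edge whose projected top and height are known.
   Linear in k, because the whole edge sits at a single depth. */
static int stripe_screen_y(int edge_top, int edge_height, int k, int tex_h)
{
    return edge_top + (k * edge_height) / tex_h;
}

Call that for both edges and hand the two points to the same line drawer used for the wall outline.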

This might defeat the purpose of the method tepples suggested, but what if you performed those checks against sub-cells instead of the entire cell (assuming 1 wall is a cell, and each texture slice is 1 subcell)? So you would determine the end points for each color in each texture slice, and connect the dots. That would take care of the perspective issue. But it would probably defeat the purpose, as doing that many checks and drawing all those separate lines could get really slow.

Another way to think of it is, if you know how tall a given texture slice should appear, how can you find the point at which the apparent distance between the top and bottom of the wall is equal to that apparent height? If you can find the X position of that point, that must be where the texture slice has to go. I'll try to post a picture later showing what I'm trying to say.

EDIT: So I tried to draft up some images to explain what I was getting at earlier. Not sure if it's a good idea, or even a new idea (I'm sure it's been thought of already and refined). This is just what I came up with on my own.
RaycastTextureIdea1.png
RaycastTextureIdea2.png
Then again, you still have to know the texture slice height. I'm not sure if this would be easy to figure out by itself.
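
If it helps, that point can be computed directly instead of searched for: since the on-screen wall height is proportional to 1/z and u/z varies linearly across the screen, you can solve for the column where a given texture coordinate lands. A sketch (floating point, made-up names, just to show the formula):

Code: Select all

/* Screen column where texture coordinate u (with u0 <= u <= u1) lands on
   a wall whose two projected edges are at columns x0,x1 with on-screen
   heights h0,h1 and texture coordinates u0,u1. Uses the fact that
   on-screen height is proportional to 1/z, so no explicit depths needed. */
static float column_for_texcoord(float x0, float x1,
                                 float h0, float h1,
                                 float u0, float u1, float u)
{
    float a = h0 * (u - u0);   /* pulls the result toward edge 1        */
    float b = h1 * (u1 - u);   /* pulls the result toward edge 0        */
    float s = a / (a + b);     /* 0 at x0, 1 at x1, perspective-correct */

    return x0 + s * (x1 - x0);
}

Stepping u through the texture column boundaries and drawing each slice between consecutive results needs one division per texture column instead of one per screen column.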
Alp
Posts: 223
Joined: Mon Oct 06, 2014 12:37 am

Re: Another Raycasting Demo

Post by Alp »

tepples wrote:Animation
Huh. Interesting!

Many first-person dungeon crawlers used a similar method to avoid using up too much processing time. The optimal method was to read map cells from the foreground towards the background and set flags that keep occluded cells from being tested, freeing up memory and speeding up draw time.
8bitMicroGuy
Posts: 314
Joined: Sun Mar 08, 2015 12:23 pm
Location: Croatia

Re: Another Raycasting Demo

Post by 8bitMicroGuy »

Very good! You really pushed the NES's power. What I'd like to see is enemy sprites for an RPG battle, like in Orcs and Elves.
mrmmaclean
Posts: 32
Joined: Mon Oct 07, 2013 5:40 pm

Re: Another Raycasting Demo

Post by mrmmaclean »

Wow, this is pretty impressive considering the hardware!!!

Without using trig, how is it that you're able to get the texture x-coordinate for mapping in your demo? Is that even possible without a look-up table?!?!
From reading about tokumaru's raycaster, it seems he gets the distance from precomputed tables and repeated addition, and even that doesn't lend itself well to getting a precise x-coordinate...

If either of you have any input I'd be interested to hear it.
tokumaru
Posts: 12427
Joined: Sat Feb 12, 2005 9:43 pm
Location: Rio de Janeiro - Brazil

Re: Another Raycasting Demo

Post by tokumaru »

mrmmaclean wrote:From reading about tokumaru's raycaster, it seems he gets the distance from precomputed tables and repeated addition, and even that doesn't lend itself well to getting a precise x-coordinate...
Since you mentioned my raycaster, I can tell you how I did it:

Like you said, I have a table of pre-computed ray lengths, which are the distances from one block boundary to the next, for each angle. To calculate distances, I start with only a portion of that distance (calculated from the player's position within the block) to find the first boundary, and then I add the whole distance over and over until a solid block is hit.

Once I know the orientation of the wall, I find the side of the triangle the exact same way I found the hypotenuse: I have a table of sides for all angles, I use a portion of the side for the first intersection (based on the player's position within the block), and add the full side as many times as I added the full distance. From that I can easily extract the X coordinate of the texture.

EDIT: Figured I'd try to illustrate it:

Code: Select all

+----------------+----------------+--X-------------+
|                |                | /|             |
|                |                |/ |             |
|                |                /  |             |
|                |               /|  |             |
|                |              / |  |             |
|                |             /  |  |             |
|                |            /   |  |             |
+----------------+-----------*----+--|-------------+
|                |          /|    |  |             |
|                |         / |    |  |             |
|                |        /  |    |  |             |
|                |    H  /   | 1  |  |             |
|                |      /    |    |  |             |
|                |     /     |    |  |             |
|                |    /      |    |  |             |
+----------------+---O-------*----+--*-------------+
|                |       S       S|                |
|                |                |                |
|                |                |                |
|                |                |                |
|                |                |                |
|                |                |                |
|                |                |                |
+----------------+----------------+----------------+
Here's a ray starting at the "O", moving up and to the right until it hits a wall at "X". "H" is the hypotenuse (ray length), which comes from a table. "S" is the side, which also comes from a table. The other side is always 1 (i.e. a full block). As I extend the hypotenuse, I also extend the side, and you can see that when the ray hits the wall at the very top, the side tells me exactly where within that block the wall was hit, and that's where I get the texture's X coordinate from.
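
In C-ish form the stepping might look something like this (a loose sketch of the description above, with made-up 8.8 fixed-point tables and a made-up map_is_solid; the real thing is 6502 and handles each boundary orientation separately):

Code: Select all

#include <stdint.h>

/* Hypothetical 8.8 fixed-point tables, indexed by ray angle:
   hyp_step[a]  = ray length from one horizontal block boundary to the next
   side_step[a] = how far the hit point slides along that boundary per step */
extern const uint16_t hyp_step[256];
extern const uint16_t side_step[256];
extern uint8_t map_is_solid(uint8_t cell_x, uint8_t cell_y);  /* made up */

/* Cast one ray upwards against horizontal boundaries only (the other
   orientations are symmetrical). px/py are 8.8: block index in the high
   byte, position within the block in the low byte. Assumes the map border
   is solid, so the loop always terminates. */
uint16_t cast_ray_up(uint8_t angle, uint16_t px, uint16_t py, uint8_t *tex_x)
{
    uint16_t frac = py & 0xFF;  /* how far we are into the current block */

    /* Partial first step: only the portion of the table entries needed
       to reach the first boundary above the player. */
    uint16_t dist = (uint16_t)(((uint32_t)hyp_step[angle] * frac) >> 8);
    uint16_t side = px + (uint16_t)(((uint32_t)side_step[angle] * frac) >> 8);
    uint8_t  cy   = (uint8_t)(py >> 8);

    for (;;) {
        cy--;                                   /* block the ray just entered */
        if (map_is_solid((uint8_t)(side >> 8), cy)) {
            *tex_x = (uint8_t)side;             /* fraction within the block  */
            return dist;                        /* 8.8 ray length             */
        }
        dist += hyp_step[angle];                /* extend the hypotenuse...   */
        side += side_step[angle];               /* ...and the side, in step   */
    }
}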
mrmmaclean
Posts: 32
Joined: Mon Oct 07, 2013 5:40 pm

Re: Another Raycasting Demo

Post by mrmmaclean »

That's actually quite brilliant, tokumaru. Thanks for your in-depth response and diagram!
tepples
Posts: 22708
Joined: Sun Sep 19, 2004 11:12 pm
Location: NE Indiana, USA (NTSC)
Contact:

Re: Another Raycasting Demo

Post by tepples »

"Grid Puzzle" on DataGenetics is relevant to raycasting.
Drew Sebastino
Formerly Espozo
Posts: 3496
Joined: Mon Sep 15, 2014 4:35 pm
Location: Richmond, Virginia

Re: Another Raycasting Demo

Post by Drew Sebastino »

Where in the world would the 3D example be useful? That is, unless real 3D TVs existed, like if there were somehow a 1920x1080x1920 pixel television. That would be awesome. (Although the framebuffer for a hypothetical 3D system running on it would be nearly 120GB. I'm really weird for thinking about this kind of stuff...)

I'm assuming the way texture mapping is done is by looking at a pixel of the inside of where the polygon or whatever is being drawn in some sort of framebuffer, and undoing the equation that transformed the polygon and seeing where that ends up on the texture? It would be like if you flipped the image 90 degrees to the right when rendering it: it would (presumably) look first at the top left pixel, kind of swing that pixel around the center of the square by 90 degrees to the left (some sine and cosine stuff would probably be done), and render whatever the result was in the top left corner, essentially taking the bottom left pixel and putting it in the top left spot. Because it's not always going to match up perfectly, on systems like the PS1 I guess it just renders the pixel it landed closest to (I wonder what would happen if it landed perfectly between pixels; I guess it would just choose whichever one), while on a system like the N64 it averages the pixels together.

I guess the thought of doing non-Wolfenstein texture mapping is completely ridiculous on just about any system that didn't cost several hundred dollars at the time, or later, when the PS1 came out. Isn't the SNES's mode 7 layer done just about like how the PS1 renders polygons? Both don't seem to average colors together, and they both only use 2D coordinates. (Although the SNES can fake it horizontally with HDMA.)

Now, this isn't even related at all really (not like any of the other stuff I said was), but why is there only 1 layer in mode 7? I assume both PPUs are maxed out doing the transformations, but not all the VRAM bandwidth is being used, which I thought was mainly what slowed it down (rather than processing). Actually, I guess it would still have to check priorities and stuff, and at that point there wouldn't be enough time. Well, really, is it a VRAM bandwidth problem or a processor speed problem?

One odd thing I noticed is that the GBA seems to take a bigger hit in rendering "mode 7 layers", as it loses 2 8bpp BG layers instead of 1 8bpp and 1 4bpp BG layer like on the SNES.
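
Something like this is what I'm picturing for the "undo the transform" part, at least for a plain rotation (just a sketch, nothing hardware-accurate; mode 7 uses per-scanline fixed-point matrix entries instead of calling sine and cosine):

Code: Select all

#include <math.h>
#include <stdint.h>

#define SCREEN_W 256
#define SCREEN_H 224
#define TEX_W    256   /* texture dimensions assumed to be powers of two */
#define TEX_H    256

/* Inverse (texture) mapping for a rotated background: for every screen
   pixel, rotate backwards by the same angle and sample a single texel. */
void draw_rotated(const uint8_t tex[TEX_H][TEX_W],
                  uint8_t out[SCREEN_H][SCREEN_W],
                  float angle, float center_u, float center_v)
{
    float c = cosf(angle), s = sinf(angle);

    for (int y = 0; y < SCREEN_H; y++) {
        for (int x = 0; x < SCREEN_W; x++) {
            float dx = x - SCREEN_W / 2.0f;
            float dy = y - SCREEN_H / 2.0f;

            /* Undo the rotation to find where this screen pixel came
               from in texture space. */
            int u = (int)(center_u + c * dx + s * dy);
            int v = (int)(center_v - s * dx + c * dy);

            /* Point-sample one texel, wrapping like a repeating map. */
            out[y][x] = tex[v & (TEX_H - 1)][u & (TEX_W - 1)];
        }
    }
}

The PS1-style behavior is that single-texel sample at the end; averaging the neighboring texels instead would be the N64-style filtering.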
tepples
Posts: 22708
Joined: Sun Sep 19, 2004 11:12 pm
Location: NE Indiana, USA (NTSC)
Contact:

Re: Another Raycasting Demo

Post by tepples »

Espozo wrote:Where in the world would the 3D example be useful?
Voxel rendering, perhaps. Minecraft anyone?
I'm assuming the way texture mapping is done is by looking at a pixel of the inside of where the polygon or whatever is being drawn in some sort of framebuffer, and undoing the equation that transformed the polygon and seeing where that ends up on the texture?
Pretty much. It just has a lot of precalculated stuff to make it go fast.
Isn't the SNES's mode 7 layer done just about like how the PS1 renders polygons? Both don't seem to average colors together, and they both only use 2D coordinates.
They're conceptually similar, the major difference being that the PlayStation GPU renders triangles rather than planes, and it renders to a frame buffer rather than directly to a layer compositor.
(Although the SNES can fake it horizontally with HDMA.) Now, this isn't even related at all really (not like any of the other stuff I said was), but why is there only 1 layer in mode 7?
In addition to VRAM bandwidth, there's a unit to calculate texture coordinates, and the Super NES has only one of those.
One odd thing I noticed is that the GBA seems to take a bigger hit in rendering "mode 7 layers", as it loses 2 8bpp BG layers instead of 1 8bpp and 1 4bpp BG layer like on the SNES.
The GBA PPU can also retrieve multiple pixels at once from VRAM because VRAM is word-wide rather than byte-wide. It can't do this in mode 7 (layer 2 of mode 1 or layers 2 and 3 of mode 2).
Drew Sebastino
Formerly Espozo
Posts: 3496
Joined: Mon Sep 15, 2014 4:35 pm
Location: Richmond, Virginia

Re: Another Raycasting Demo

Post by Drew Sebastino »

I'm assuming the way texture mapping is done is by looking at a pixel of the inside of where the polygon or whatever is being drawn in some sort of framebuffer, and undoing the equation that transformed the polygon and seeing where that ends up on the texture?
Pretty much. It just has a lot of precalculated stuff to make it go fast.
That's still insane. I guess, done this way, texture size doesn't matter, only how much of the screen is covered. By precalculated stuff, though, is it similar to how the position of pixels in sprite shrinking is precalculated on the Neo Geo? It seems way, way simpler on the Neo Geo though: if you're only scaling, then for making a sprite narrower, the system I guess just looks at a precalculated table or something that says how the sprite is to be drawn based on the horizontal scaling value, and for making a sprite shorter, it also goes through a precalculated table that says which lines of pixels are to be drawn instead of which pixels per line, and you can easily put both of them together. The fact that polygons on just about anything that isn't the Sega Saturn are triangles seems to make everything more complicated. Unlike what I said earlier, I don't think sine and cosine could be used, because we aren't always talking about rotation making a perfect circle.

You know, aren't reflections achieved in a similar (although more complicated) way? I heard it's like, for every pixel on the mirror, you create an imaginary line between the camera and the mirror, have that line reflect off the mirror, follow the reflected line until it hits something, and then go back and draw the result on the mirror. I always wanted to know what would happen if you got two mirrors facing each other.
tepples wrote:In addition to VRAM bandwidth, there's a unit to calculate texture coordinates, and the Super NES has only one of those.
I mean, I know there's no way you could ever have two mode 7 layers, but what about just one plus an extra, regular 4bpp layer? I think that would work as far as VRAM bandwidth goes, and a regular layer isn't going to use the multiplication and division unit.
tepples wrote:The GBA PPU can also retrieve multiple pixels at once from VRAM because VRAM is word-wide rather than byte-wide. It can't do this in mode 7
Doesn't this have to do with the fact that two neighboring screen pixels aren't guaranteed to be next to one another in VRAM like they are in a regular tiled layer? It seems like 16bpp mode 7 layers should have been an option, although then you start to worry about VRAM space. I guess the reason the GBA can get so much more data from VRAM is because it's word-based instead of byte-based, so it's transferring twice the data? That would explain why the SNES doesn't take as dramatic a hit when rendering a mode 7 layer as the GBA does.