Another rendering update

Anything concerning the ongoing creation of Futurecraft.
hyperlite
Lieutenant
Posts: 360
Joined: Thu Dec 06, 2012 3:46 pm
Re: Another rendering update

Post by hyperlite » Thu Jan 17, 2013 5:24 pm

How long does it take you to type out those paragraphs? I know it isn't really on topic, but, really, that must take at least 20 minutes.

fr0stbyte124
Developer
Posts: 727
Joined: Fri Dec 07, 2012 3:39 am
Affiliation: Aye-Aye

Re: Another rendering update

Post by fr0stbyte124 » Thu Jan 17, 2013 5:35 pm

Hours. But it helps get my thoughts in order too.

fr0stbyte124
Developer
Posts: 727
Joined: Fri Dec 07, 2012 3:39 am
Affiliation: Aye-Aye

Re: Another rendering update

Post by fr0stbyte124 » Thu Jan 17, 2013 6:11 pm

Working out the size of texture lookup tables now. If anyone was wondering, at a normal 70 degree FOV on a 1080p screen, 1m = 1px at ~1371m from the camera. Around this point, full resolution geometry becomes worthless, so only untextured heightmaps (both varieties) will be rendered beyond that, which gives us some bounds on how widely spread our geometry needs to be for any given point. Right now I am planning for all polygonal geometry to fit in a 2048^3 grid. Coordinates will wrap around as the camera moves, which will keep precision error down, along with the number of bits needed to express each coordinate. The max dimensions we can expect for planet coordinates are maybe +-4096m from the center, or roughly 1.5km into the sky. However, with the repeating grid, there's no numerical limit, because the grid itself simply becomes a single block inside a larger grid.
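
If anyone wants to check that math, it's just the frustum width at a given distance. A quick Java sketch (my own back-of-the-envelope, treating the 70 degrees as horizontal FOV across a 1920px-wide screen):

Code:
public class PixelDistance {
    // Distance at which 1m of world space covers exactly 1px: the frustum
    // is 2*d*tan(fov/2) meters wide at distance d, spread over screenWidthPx.
    static double onePixelDistance(double horizontalFovDegrees, int screenWidthPx) {
        double halfFov = Math.toRadians(horizontalFovDegrees / 2.0);
        return screenWidthPx / (2.0 * Math.tan(halfFov));
    }

    public static void main(String[] args) {
        System.out.printf("1080p: %.0fm%n", onePixelDistance(70, 1920)); // ~1371m
        System.out.printf("720p:  %.0fm%n", onePixelDistance(70, 1280)); // ~914m
    }
}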

Additionally, this means the region can be referenced as a 64^3 grid of 32^3 super chunks, which is just about manageable. A 32^3 grid (.5km view radius) would be even better, but that might also be a difficult sell (though perhaps not for a laptop with a lower resolution; for instance, @1280x720, 1m = 1px at ~914m, so a 1024m^3 grid would have boundaries where 1m covers less than 2x2px, which is still larger than it needs to be). I'll leave it adjustable. Definitely no point in going beyond 2048m^3, though.

If you'll recall, vanilla far view is 16x16 chunks of 16x256x16, so this geometry region covers 512x the volume. If there is no UV in the vertex, and that looks to be the case, a block corner will only take up 4 bytes, or 7.5 if you include the indices, or just 1 byte if the geometry shader is used.
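
The 512x figure checks out if you multiply it through:

Code:
public class VolumeCheck {
    public static void main(String[] args) {
        long vanilla = 16L * 16 * (16 * 256 * 16); // 16x16 chunks of 16x256x16 = 16,777,216 m^3
        long region = 2048L * 2048 * 2048;         // 8,589,934,592 m^3
        System.out.println(region / vanilla);      // prints 512
    }
}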

It will be interesting to see how visible features become at that distance. The cutoff point really needs to begin once the relative distortion of the polygons becomes noticeable due to rasterization rounding to the nearest pixel. If we have anti-aliasing, it will be a screen-space version like FXAA, which is ridiculously fast and looks just as good as MSAA. But FXAA won't make distorted geometry any less distorted, so we should switch to raycasting before that point if we haven't already.

fr0stbyte124
Developer
Posts: 727
Joined: Fri Dec 07, 2012 3:39 am
Affiliation: Aye-Aye

Re: Another rendering update

Post by fr0stbyte124 » Tue Jan 22, 2013 8:04 pm

I'm starting to strongly consider again the merits of an Eihort-style polygon-based renderer for the close stuff. This method works by dividing surfaces into large polygons which span multiple tiles to save calculations. However, each polygon can only cover a single block type. The advantage of this is that you have a single type of texture cached and a known orientation, because only polygons of that block type and orientation are drawn in that batch. This does use more vertices, which is why I originally went with the virtual texturing method, but it is possible that the real bottleneck will end up being the texture lookup, and switching back and forth between textures for every tile is bad, not to mention traversing the secondary lookups. I'm starting to think that perhaps it is worth blowing a few extra vertices to get better texture coherency.
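
To make the batching idea concrete, this is roughly the sort order I mean; the Face type here is made up for illustration:

Code:
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class FaceBatcher {
    // Minimal stand-in for a renderable face: which way it points and
    // which block type (hence texture) it uses.
    record Face(int orientation, int blockType) {}

    // Group by orientation first (one normal uniform per group), then by
    // block type (one texture bind per group within an orientation).
    static void sortForBatching(List<Face> faces) {
        faces.sort(Comparator.comparingInt(Face::orientation)
                             .thenComparingInt(Face::blockType));
    }

    public static void main(String[] args) {
        List<Face> faces = new ArrayList<>(List.of(
                new Face(1, 5), new Face(0, 5), new Face(1, 2)));
        sortForBatching(faces);
        System.out.println(faces); // orientation 0 first, then orientation 1 by block type
    }
}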

In this version, we are targeting 32 bits per vertex. Since we'll need to transform coordinates between chunk draws anyway, we may as well use smaller coordinate values. The range is 0-32 for each component, so a 4-component vector with 8 bits per component would work well. The fourth component can be used for the texture ID. Then, by sorting the primitive list to keep blocks of a kind together, each texture patch only has to load once per orientation. For each visible orientation, a new uniform value representing the current normal needs to be set, which flushes the pipeline but is otherwise fast, and each chunk needs to be passed a transformation matrix and a few additional parameters for biome texture.
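
As a concrete sketch of the 32-bit vertex (the packing order is arbitrary; names are mine):

Code:
public class PackedVertex {
    // Pack one vertex into 32 bits: chunk-local x/y/z (0-32 fits easily in
    // 8 bits each) plus an 8-bit texture ID in the fourth component.
    static int pack(int x, int y, int z, int textureId) {
        return (x & 0xFF) | (y & 0xFF) << 8 | (z & 0xFF) << 16 | (textureId & 0xFF) << 24;
    }

    static int x(int v) { return v & 0xFF; }
    static int y(int v) { return (v >>> 8) & 0xFF; }
    static int z(int v) { return (v >>> 16) & 0xFF; }
    static int textureId(int v) { return (v >>> 24) & 0xFF; }

    public static void main(String[] args) {
        int v = pack(12, 30, 7, 3);
        System.out.println(x(v) + "," + y(v) + "," + z(v) + " tex=" + textureId(v)); // 12,30,7 tex=3
    }
}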

One speedup available is the fact that the 8-bit position channels are much larger than we need. We could use the same transformation matrix for all the chunks in a 128^3 region simply by extending the vertex positions. Additionally, each vertex buffer can have 2^16 vertices in it, so chunks with fewer vertices could double up inside a single buffer and save some rebinding calls, and the biome parameters can be carried across the whole region.

One notable disadvantage is that changing the texture of a block will trigger an expensive chunk update like normal. However, this render pass only covers opaque cubes, and the only ones of those which really change texture state are the redstone lamp and the furnace. If we wanted to, we could just as easily place them in a different render lane, since they are pretty inconsequential, and then we are guaranteed that only adding/removing opaque cubes will cause a chunk update.


In this scenario, lighting is kind of interesting. It still uses the virtual texture system, but now it doesn't need to map to everything, since most of the surfaces have 0 block light and either 0 or 15 sky light. Skylight state can be worked out from the heightmaps we're already keeping on the GPU (assumed 15 if lit; everything else defaults to 0). Then, regions can be specified by the CPU for varying light levels (either sky or block). This would save a ton of space in resident texture memory, and it can be applied as a deferred shader, as a lot of patches will be completely gone. The one thing this can't do is fake ambient occlusion, since there is no AO information for blocks in direct sunlight. I'm not sure what to do about this. There is always screen-space ambient occlusion, but it's kind of taxing on GPUs for something that would be required even on low settings. Eihort has some sort of AO, which it only calculates out to a certain range, though I haven't investigated what's actually going on there. I'll probably go with that if it works reliably.
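
Roughly the default rule I have in mind, sketched in Java (the heightmap array is a stand-in for whatever we actually keep on the GPU):

Code:
public class DefaultLight {
    // Skylight defaults to 15 at or above the heightmap surface, 0 below;
    // block light defaults to 0 everywhere. Only the exceptions (torch-lit
    // caves, shadowed overhangs) get entries in the virtual light texture.
    static int skyLight(int x, int y, int z, int[][] heightmap) {
        return y >= heightmap[x][z] ? 15 : 0;
    }

    public static void main(String[] args) {
        int[][] heightmap = {{64}}; // single column of terrain at y=64
        System.out.println(skyLight(0, 70, 0, heightmap)); // 15 (open sky)
        System.out.println(skyLight(0, 40, 0, heightmap)); // 0 (underground)
    }
}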

------------

Anyway, long story short, some assumptions I made about vertex-based geometry being the end-all worst case to avoid may not actually hold, and Eihort may have had the right idea all along. Storing light levels in a 3D texture might not have been the right call, though. While it has fast and simple access, it also takes up 32KB/chunk (the same space as 8K of our vertices). To look at that another way, it would take 128 MB of memory for lighting alone just to cover a cross-section of the 2048m^3 region that the polygon geometry exists within.
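
The arithmetic behind those numbers, assuming one byte per light voxel:

Code:
public class LightMemory {
    public static void main(String[] args) {
        long perChunk = 32L * 32 * 32;           // 32^3 voxels at 1 byte each
        System.out.println(perChunk / 1024);     // 32 KB per chunk
        // One 32m-thick cross-section of the 2048^3 region is 64x64 chunks:
        long crossSection = 64L * 64 * perChunk;
        System.out.println(crossSection / (1024 * 1024)); // 128 MB
    }
}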

...Hmm, I wonder about having a 3D virtual texture map and only loading in those sections where unique lighting is actually needed. Wouldn't be as memory efficient, but it would also be a lot easier to access. Plus, we could lower the resolution on distant objects so that the memory requirement is no longer linear. If we did that, we could even use volumetric lighting mapping outside the polygon range (so smoothing out cave lighting and providing distant torchlight). The more I consider it, the more I start to like this idea.

If we're careful, we might even be able to approximate some global irradiance lighting (where a room is lit up by multiple bounces from direct light). This would allow us to have the same sort of spread out area lighting we do now, but using directional sunlight, rather than it simply making hard shadows.

Tunnelthunder
Ensign
Posts: 259
Joined: Wed Dec 05, 2012 10:33 pm
Affiliation: Insomniacs
IGN: Tunnelthunder

Re: Another rendering update

Post by Tunnelthunder » Tue Jan 22, 2013 10:00 pm

We can has sunlight?

fr0stbyte124
Developer
Posts: 727
Joined: Fri Dec 07, 2012 3:39 am
Affiliation: Aye-Aye

Re: Another rendering update

Post by fr0stbyte124 » Tue Jan 22, 2013 10:05 pm

Tunnelthunder wrote:We can has sunlight?
If your rig is powerful enough. Otherwise you get nonsense up-and-down light and you'll like it.

Tunnelthunder
Ensign
Posts: 259
Joined: Wed Dec 05, 2012 10:33 pm
Affiliation: Insomniacs
IGN: Tunnelthunder

Re: Another rendering update

Post by Tunnelthunder » Tue Jan 22, 2013 10:09 pm

fr0stbyte124 wrote:
Tunnelthunder wrote:We can has sunlight?
If your rig is powerful enough. Otherwise you get nonsense up-and-down light and you'll like it.
Well I lag when it snows..... =( I am going to like it.

Vinyl
Fleet Admiral
Posts: 3217
Joined: Wed Dec 05, 2012 9:54 pm
Affiliation: Hexalan
IGN: PCaptainRexK
Location: Hexalan

Re: Another rendering update

Post by Vinyl » Tue Jan 22, 2013 11:49 pm

Tunnelthunder wrote:
fr0stbyte124 wrote:
Tunnelthunder wrote:We can has sunlight?
If your rig is powerful enough. Otherwise you get nonsense up-and-down light and you'll like it.
Well I lag when it snows..... =( I am going to like it.
I'm not! I'm gonna soak up that sunshi-OHGODITBURRRRRRNS
cats wrote:I literally cannot be wrong about this fictional universe

fr0stbyte124
Developer
Posts: 727
Joined: Fri Dec 07, 2012 3:39 am
Affiliation: Aye-Aye

Re: Another rendering update

Post by fr0stbyte124 » Wed Jan 23, 2013 3:46 am

I made a few small mistakes in my Eihort description. Apparently its chunk size is 128x128, not 32x32. Also, the ambient occlusion thing seems to be baked into the 3D light texture (also 128^3). Though it is difficult to tell, my guess is that each light voxel is offset to represent the light levels at the corners of cubes, like vanilla Minecraft. This would allow you to modify edge values without affecting the other light values. Not a bad way to do it, but if we use variable-scaled 3D lighting, we can't guarantee a one-to-one resolution, so that's not going to work for us.

I'm looking into SSAO again. Normally it is a costly technique because it needs to take quite a few samples from the surrounding area to get an acceptable AO estimation. However, we know how blocks are laid out and where to check to see which faces are bordering walls. If done at the right point, we should be able to do it with just 4 lookups, and we might even be able to cache them if we can use the same point for each block. I'll look into it, but in the meantime, I think this 3D light map will be a good method.

fr0stbyte124
Developer
Posts: 727
Joined: Fri Dec 07, 2012 3:39 am
Affiliation: Aye-Aye

Re: Another rendering update

Post by fr0stbyte124 » Wed Jan 23, 2013 11:52 am

Hmm, looks like Eihort also suffers from light bleeding. When you have a fully enclosed set of cubes like a dome, and some of them are only bordering at a corner (so zero thickness at that point), the light level for that vertex is the same on both sides, even if one side is in daylight and the interior is completely unlit. That's the downside of putting light values on the corners instead of inside the individual blocks. It's uncommon, but for the purposes of fully replacing vanilla lighting, it's an unacceptable artifact.
In vanilla Minecraft, vertex lighting is evaluated individually for each block by examining the light levels of each block of air touching that vertex (and not obstructed on the corner). From there it's simply taking the max of all those levels, minus a constant 1 for each wall casting AO on that face at that vertex (so at most -2).
It's not too expensive if you only have to do it once, but anything we implement has to work in realtime on the GPU. High-end cards wouldn't care, but it's too expensive for the low end, so we need a new solution.
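
For reference, the rule boils down to something like this (a Java sketch of my description above, not actual Minecraft code):

Code:
public class VanillaVertexLight {
    // Vanilla-style vertex light: max over the light levels of the air
    // blocks touching the vertex on this face, minus 1 per adjacent wall
    // casting AO on it, clamped to at most -2.
    static int vertexLight(int[] touchingAirLight, int aoWalls) {
        int max = 0;
        for (int level : touchingAirLight) {
            max = Math.max(max, level);
        }
        return Math.max(0, max - Math.min(aoWalls, 2));
    }

    public static void main(String[] args) {
        // Vertex touching air blocks of light 14 and 12, with two AO walls:
        System.out.println(vertexLight(new int[]{14, 12}, 2)); // 12
    }
}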

Still, the per-vertex 3D light map is an elegant solution which mimics Minecraft lighting exactly, most of the time. I wonder if it's not salvageable...
What we really need is some sort of framework for defining the "interior" and "exterior" of a mesh of blocks. It doesn't matter which is which, only that light can't pass through from one side to the other. Maybe a system where you can specify which faces a light map can play off of, and use that to assist with light culling. I'm not sure yet. Need to sketch some diagrams.

fr0stbyte124
Developer
Posts: 727
Joined: Fri Dec 07, 2012 3:39 am
Affiliation: Aye-Aye

Re: Another rendering update

Post by fr0stbyte124 » Thu Jan 24, 2013 1:14 am

Man, it's really a shame there's not a byte left over without doubling the vertex size (vertex sizes have to be powers of 2). By adding some additional fracture vertices where the light level changes gradient (primarily where it meets another light), we could paste the lightmap back into the geometry and even render with a fixed-function pipeline. If that were possible, you could plug this module into Minecraft as a client mod and instantly get better video memory usage.

Actually, we could still do that. Minecraft render areas are only 16x16x16, so you could fit the position vector into 16 bits and have a byte for the block ID and a byte for the lightmap. Food for thought.

This is a tough call. On the one hand, it doubles the size of the vertex data just to fit in a single extra byte, along with increasing the amount of geometry and polygons to draw. On the other, this is one of the few ways we can cover multiple blocks with a single light value and not lose precision. It would also heavily simplify the texturing process and reduce the processing load in the fragment shader (though it increases the cost in the vertex shader). It's really difficult to know which way is better. Shrinking chunks back down to 16x16x32 would reduce the vertex size needed, but it would mean lots more draw calls, which may be a problem. On the other hand, we don't have to draw polygons out as far as we did before, and the less memory we spend here, the more we can spend on less mundane stuff.


Hmm, another option is using a texture lookup in the vertex shader to add that extra byte. Each vertex has an ID which is its sequence in the list, but it is not always clear how this ID is implemented. Still, it may be worth taking a look at. The other issue is that on some hardware, texture lookup in the vertex shader is reaaaaly slow, though I think it is decent on nvidia at least. If this can be resolved, we don't have to compromise. Actually, if we did that, I would put the light level in the vertex and block type in the texture. That way, if a vertex is reused down the road, it could pass a different ID.

Or, you know, just bite the bullet and settle on 8 bytes per vertex. Not only could we have the position vector (3) in there, we would also have the block ID (1), both lights (1), and biome color (2) for whatever material needs it, and still have a full byte left over for whatever. Maybe some advanced lighting. For now I think I'm going to try this before anything else. It looks the most promising. Originally, it was going to be 4 bytes per vertex and at least 7 bytes per face, so this is still an improvement, assuming we can share geometry.
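
Laid out explicitly, the 8-byte vertex would look something like this (field names and the nibble-packed lights are my own placeholders, nothing final):

Code:
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class FatVertex {
    static final int BYTES = 8;

    // position (3) + block ID (1) + sky/block light nibbles (1)
    // + biome color (2) + 1 spare byte = 8 bytes total.
    static void put(ByteBuffer buf, int x, int y, int z, int blockId,
                    int skyLight, int blockLight, int biomeColor, int spare) {
        buf.put((byte) x).put((byte) y).put((byte) z); // region-local position
        buf.put((byte) blockId);                       // block type
        buf.put((byte) ((skyLight << 4) | (blockLight & 0xF))); // both lights
        buf.putShort((short) biomeColor);              // 16-bit biome tint
        buf.put((byte) spare);                         // left over for whatever
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(BYTES).order(ByteOrder.nativeOrder());
        put(buf, 12, 30, 7, 1, 15, 0, 0x7FFF, 0); // one 8-byte vertex
    }
}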

Ivan2006
Fleet Admiral
Posts: 3021
Joined: Fri Dec 07, 2012 12:10 pm
Affiliation: [redacted]
IGN: Ivan2006
Location: In a universe.

Re: Another rendering update

Post by Ivan2006 » Thu Jan 24, 2013 8:04 am

So you're about 1 byte short somewhere?
Must be annoying...
Quotes:
CMA wrote:IT'S MY HOT BODY AND I DO WHAT I WANT WITH IT.
Tiel wrote:hey now no need to be rough
Daynel wrote: you can talk gay and furry to me any time
CMA wrote:And I can't fuck myself, my ass is currently occupied

Keon
Developer
Posts: 662
Joined: Thu Dec 06, 2012 7:09 pm
Affiliation: Inactive
IGN: ducky215

Re: Another rendering update

Post by Keon » Thu Jan 24, 2013 9:23 am

This is for space-rendering, correct?
- I can be reached as ducky215 on minecraft forums -

fr0stbyte124
Developer
Posts: 727
Joined: Fri Dec 07, 2012 3:39 am
Affiliation: Aye-Aye

Re: Another rendering update

Post by fr0stbyte124 » Thu Jan 24, 2013 12:18 pm

Ivan2006 wrote:So you're about 1 byte short somewhere?
Must be annoying...
It really is. Worst of all, this method is apples and oranges compared to the virtual texturing, so I have no idea which is better.
On the plus side, even in the worst case, it will still be a minimum of 12-16 times more memory efficient than vanilla polygons, even if it were just using uniform square tiles. More importantly, it doesn't have a fixed amount of real estate in video memory, which gives us some much-needed flexibility.
Lighting is a bit problematic because it would be attached to the geometry, so the naive approach would involve redoing the entire chunk. However, only a handful of vertices would need a change, and of those, most are only getting their light level changed, so we can leave most of the buffer intact if we leave some spare vertices at the end for wiggle room.
Keon wrote:This is for space-rendering, correct?
Nope, this is for nearby terrain rendering. Heightmaps are orders of magnitude more memory efficient, but they lack the ability to add arbitrary blocks and lighting, so for close distances we need a perfect representation. We can supplement this polygon geometry with heightmaps in places, but geometry has a limit of ~1km from the camera, at which point blocks are small enough that the rasterization starts causing significant distortion. Realistically, the switchover will happen well before this boundary.

fr0stbyte124
Developer
Posts: 727
Joined: Fri Dec 07, 2012 3:39 am
Affiliation: Aye-Aye

Re: Another rendering update

Post by fr0stbyte124 » Fri Jan 25, 2013 3:33 pm

Well this is annoying. Continuing with the everything-baked-into-the-vertices meshing strategy I mentioned last time, there is an issue I hadn't thought of. If you ignore light levels, you can define a uniform block cluster as tiles on a single plane which are all connected and of a single block type. The entire world can be broken up into these pieces, which simplifies the meshing process. All you have to do once you have that is figure out which vertices connect to which other vertices to minimize the number of primitives needed to display it. If you are going for optimized triangles, you will only ever need to use corner vertices (points on the terrain where 3 planes intersect, also counting the chunk boundaries as a plane). These vertices also happen to define the contours of the block cluster. If you are making the mesh out of quads instead, you may end up adding extra vertices to keep everything rectangular. There are pros and cons to each.

Now, let's add light to the mix. In vanilla Minecraft we have smooth lighting, and it's surprisingly easy to model. All you have to do is set the light level at the 4 corners of each tile, and when drawn, the light level will smoothly interpolate across the entire square. Now imagine instead of a 1x1 tile, you have a 2x2. If the light has a consistent gradient, you can reproduce it while still only using the corners to set the light levels. The only difference is that one side is now +2 brighter than the other instead of +1. So this is the strategy to take in adding light to the simplified mesh. You take advantage of the fact that light drops off along a fairly consistent gradient and place additional vertices wherever there is a change in gradient, like between two intersecting torches. The mesher won't know the reason, of course. It will just know what the light level is supposed to be for that XYZ coordinate, and work out the placement of vertices to make it happen. You'll end up with more geometry (though still far less than vanilla Minecraft). But this way you don't need any external textures or additional lookups, which is very helpful. And it is the only way I know of to store lighting at less than one value per tile without introducing error (it is a requirement that we can perfectly reproduce Minecraft's graphics).
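
In one dimension, finding those fracture vertices is just looking for changes in the light gradient. A toy Java sketch of the idea (my own illustration, not the actual mesher):

Code:
import java.util.ArrayList;
import java.util.List;

public class FracturePoints {
    // Given per-tile light levels along one axis, return the positions
    // where the gradient changes; only these need extra vertices, since
    // hardware interpolation reproduces the straight runs in between.
    static List<Integer> fracturePoints(int[] light) {
        List<Integer> points = new ArrayList<>();
        for (int i = 1; i < light.length - 1; i++) {
            if (light[i] - light[i - 1] != light[i + 1] - light[i]) {
                points.add(i);
            }
        }
        return points;
    }

    public static void main(String[] args) {
        // Two torches: light rises to a peak, dips in between, rises, falls off.
        int[] light = {10, 11, 12, 13, 14, 13, 12, 13, 14, 13, 12, 11};
        System.out.println(fracturePoints(light)); // [4, 6, 8]: the peaks and the valley
    }
}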

So here's the annoying part. There is one case in which light doesn't have a linear gradient, and it's a big one. I'm speaking, of course, of our faked ambient occlusion. You see it on the inner corners where blocks push up against one another, and while subtle, it makes a big difference in how convincing the terrain lighting is. The way you get it is that for each vertex, you subtract one light level for each connecting face which creates an inner edge, so it can be 0, -1, or -2. But beyond that single tile, it is back to business as usual. To recreate this in our mesh, for every single change in terrain elevation, we will have to create an additional ring of vertices around each perimeter to handle this different light gradient. In hilly areas, that is awfully wasteful, and I'm not really happy about adding all of that extra geometry. We've been trying hard to keep the memory footprint down, and every compromise keeps adding more.

So, I'm looking into AO alternatives we can do with shader logic. The obvious thing would be screen-space ambient occlusion (SSAO), which samples the depth buffer at surrounding points to estimate the amount of rays which can escape the local area to be lit by ambient light. It's a rough but convincing approximation. In our case we have a well-defined world, so we wouldn't need many sample points to make a good estimate. Another, which I just found, is a shader-based wireframing technique. How this works is you give a polygon barycentric coordinates (for a triangle, the corners would be (1,0,0), (0,1,0), and (0,0,1), and each coordinate drops to 0 at the opposing edge). Through hardware interpolation, you get barycentric coordinates for every pixel inside the polygon, and you can tell how close it is to an edge (0 on one of the axes). Then it's just a matter of drawing a pixel dark if it is close enough to that edge, and you have a very fast wireframe method with an optionally opaque surface. In our case we would like it to smoothly darken an area 1m from each vertical edge.

There are some caveats, though. The first is that it's not easy to share vertices between polygons, because you have to guarantee that every contributing vertex is on a unique axis. The other is that we only want to darken certain edges, not all of them, and I have no idea how that might be represented. I'm also not entirely clear on how to tell how close you are to the edge in world terms when you don't know the scale of the polygon. It might have something to do with the depth buffer, but I'm still fuzzy on the details.
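
One way around the scale question (just an idea, not tested) would be to pass each triangle's world-space heights in as extra attributes, so a barycentric coordinate converts directly to meters from the opposing edge. Sketched on the CPU in Java, with made-up names; in a real renderer this would live in the fragment shader:

Code:
public class EdgeDarken {
    // Given interpolated barycentric coordinates (b0,b1,b2) and each
    // vertex's world-space height over its opposing edge (h0,h1,h2),
    // b_i * h_i is the distance in meters from that edge. Darken
    // smoothly within falloffMeters of the nearest edge.
    static float edgeShade(float b0, float b1, float b2,
                           float h0, float h1, float h2, float falloffMeters) {
        float d = Math.min(b0 * h0, Math.min(b1 * h1, b2 * h2));
        float t = Math.max(0f, Math.min(1f, d / falloffMeters));
        return 0.5f + 0.5f * t; // 0.5 at the edge, 1.0 (untouched) past the falloff
    }

    public static void main(String[] args) {
        // A pixel 0.2m from one edge of a triangle whose heights are all 4m:
        System.out.println(edgeShade(0.05f, 0.5f, 0.45f, 4f, 4f, 4f, 1f)); // 0.6
    }
}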

Still, I feel like there is something usable here. We've got a leftover byte in the vertex we can use. Maybe that will play a role.
