Procedural Planets

I put together a little planet generator a few months ago and thought I would share. The generation is based on several parameters like temperature (distance to the sun), size, population etc., and then it churns out these neat little pixel art planets. Pretty simple, but a cool result. This was done in Unity3D.

planetgen


Update 10: GPU Picking

Urho3D has a nice method for picking objects by casting a ray through the scene octree. This works well enough for picking objects using their AABBs, but it was quite slow for picking points on a mesh. The bigger issue is that with the new terrain generation the mesh doesn't even exist on the CPU, so CPU-side picking isn't possible anyway. I thought about doing the picking on the GPU instead, and because the world position for every point on screen has already been calculated for the positions buffer, most of the work is already done. Urho stores these buffers as textures, so with a minor modification to the Urho3D internals I can bind the positions texture to a framebuffer and call glReadPixels() to get the value under the mouse cursor.
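For anyone curious, a minimal sketch of that kind of read-back might look something like this (plain OpenGL rather than Urho3D's actual API; the texture handle and helper names are placeholders):

```cpp
#include <GL/glew.h>

struct PickResult { float x, y, z, objectId; };

// Attach the positions render target to a read framebuffer and fetch the
// single texel under the mouse cursor. Creating/destroying the FBO per call
// is wasteful; a real implementation would keep it around.
PickResult PickUnderCursor(GLuint positionsTextureId, int mouseX, int mouseY, int screenHeight)
{
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, positionsTextureId, 0);

    PickResult result = {};
    // Window coordinates have their origin at the bottom left, so flip Y.
    glReadPixels(mouseX, screenHeight - mouseY - 1, 1, 1,
                 GL_RGBA, GL_FLOAT, &result);

    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    return result;
}
```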

The issue with glReadPixels is that it's super slow on some hardware; my Mac was taking 10ms to get that one single pixel. This is where 'pixel buffer objects' come into play. Using a PBO, the call to glReadPixels is non-blocking, so it won't halt the application. Instead of reading directly into client memory, the function reads values into a buffer, which you can retrieve from later. A common setup is to create two or more PBOs and alternate between reading and writing them each frame, so the data you get comes from the PBO written to on a previous frame.
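A rough sketch of that alternating-PBO pattern, again with my own names rather than anything from the engine:

```cpp
#include <GL/glew.h>
#include <cstring>

static GLuint pbos[2];
static int frameIndex = 0;

void InitPickPBOs()
{
    glGenBuffers(2, pbos);
    for (int i = 0; i < 2; ++i)
    {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, 4 * sizeof(float), nullptr, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

// Call once per frame, with the read framebuffer already bound to the
// positions texture. Returns the pixel read on a previous frame.
bool ReadPickPixel(int x, int y, float outRGBA[4])
{
    const int writeIdx = frameIndex % 2;        // PBO receiving this frame's read
    const int readIdx  = (frameIndex + 1) % 2;  // PBO filled on the previous frame
    ++frameIndex;

    // Kick off an asynchronous read into the "write" PBO (nullptr = offset 0).
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[writeIdx]);
    glReadPixels(x, y, 1, 1, GL_RGBA, GL_FLOAT, nullptr);

    // Map the PBO written previously; by now its data should be ready.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[readIdx]);
    void* mapped = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    const bool ok = (mapped != nullptr);
    if (ok)
    {
        std::memcpy(outRGBA, mapped, 4 * sizeof(float));
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    return ok;
}
```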

Unfortunately my development machine is an older MacBook Pro, and I was having real problems getting glReadPixels to behave asynchronously. Despite binding the PBO, reading the pixels would still halt the application. After lots and lots of testing I could see no solution; it looks like a bug beyond my control. Very annoying. However, I was able to gain some performance by placing the glReadPixels call before the scene is rendered: OpenGL won't read the pixels until every preceding OpenGL call has been executed, which was a large part of why it was so slow before. Using a 32-bit float for the texture instead of a 16-bit half float seemed to help as well. I was able to get it under a millisecond, though it still spikes randomly, taking 5 to 10ms every 10th frame or so. On better hardware with proper PBO support it shouldn't be an issue.

I’ve set up my shaders so they write an object ID into the alpha channel of the positions buffer, which makes figuring out the type of object under the mouse easy. The idea is to use that ID to cull objects from a CPU-side raycast.
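As a hypothetical illustration (the ID values and names below are made up, not mine from the project), decoding the type on the CPU side is just a matter of rounding the alpha value back to an integer:

```cpp
// Shader side, conceptually: the positions output gets vec4(worldPos, float(objectId)).
enum class PickedType { Terrain = 1, Unit = 2, Prop = 3 }; // example IDs only

PickedType DecodePickedType(const float pickedRGBA[4])
{
    // Alpha holds the ID as a float; round to the nearest integer to undo
    // any precision loss before converting back to the enum.
    const int id = static_cast<int>(pickedRGBA[3] + 0.5f);
    return static_cast<PickedType>(id);
}
```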

Here you can see the results: I am now able to use the mouse to draw onto the terrain in real time, converting grass tiles into sand tiles.

Screenshot_Sun_Mar__5_14_24_19_2017.png

Screenshot_Sun_Mar__5_14_24_50_2017.png

Screenshot_Sun_Mar__5_14_25_20_2017.png

Here is some good reading on PBOs:
http://http.download.nvidia.com/developer/Papers/2005/Fast_Texture_Transfers/Fast_Texture_Transfers.pdf
https://www.seas.upenn.edu/~pcozzi/OpenGLInsights/OpenGLInsights-AsynchronousBufferTransfers.pdf

Update 9: Terrain LOD and the GPU

Previously the mesh for the terrain was generated on the CPU, both positions and normals. While this ‘worked’, it was extremely slow. So slow, in fact, that the process had to be split across multiple frames; it would usually take upward of 50 frames for one chunk to be ready on screen. There are usually 9 chunks needed, so if the player pans quickly across the map it takes a long time for the terrain to prepare. This would be fine for a small map, as I could just cache the terrain mesh for each chunk, but for a large map (4096 x 4096) it would take too much memory. Another issue is that with CPU-bound mesh generation it’s difficult to alter the mesh in real time.

The new method moves the mesh calculation to the GPU and uses just one small flat mesh which follows the camera. The vertex shader reads values from a texture and calculates the correct Y position. The downside is that I can’t do greedy meshing or any other optimisation to reduce the vert count. The upside is that mesh generation is instant and the terrain can be updated in real time.
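As a rough illustration of the idea (not my actual shader; the uniform and attribute names are invented), the vertex shader just fetches a height from a texture and displaces the flat grid:

```cpp
// GLSL vertex shader kept as a string for illustration.
static const char* kTerrainVertexShader = R"(
#version 150
uniform sampler2D heightMap;     // one texel per tile
uniform vec2 cameraTileOffset;   // which part of the map the grid sits over
uniform float heightScale;
uniform mat4 viewProj;

in vec3 inPosition;              // flat grid mesh, y = 0
out vec3 vWorldPos;

void main()
{
    vec2 tile = inPosition.xz + cameraTileOffset;
    vec2 uv = tile / vec2(textureSize(heightMap, 0));
    float h = textureLod(heightMap, uv, 0.0).r * heightScale;
    vWorldPos = vec3(tile.x, h, tile.y);
    gl_Position = viewProj * vec4(vWorldPos, 1.0);
}
)";
```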

Another big issue was terrain LOD, which was proving very difficult. It’s very hard to simplify a voxel-style mesh: a cube is still a cube at any distance and cannot be reduced further. You can see some experiments with smooth meshing at low LOD in previous posts; I was not very happy with the results. My new solution is to show the map in strictly two dimensions when zoomed out, so what the player sees is actually just one quad. The interesting part is that directional light and depth still work as normal. The quad writes the Y value of each tile directly into the depth buffer, so when the ocean is rendered in the next pass it can discard fragments above sea level by looking at the depth. I’ve also done some work on adaptive LOD for the grass, so the number of rendered shells decreases as you zoom out.
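To sketch the ocean part of that (purely illustrative, and the buffer and uniform names are my assumptions, not the project's), the water pass samples the terrain's Y under each pixel and throws the fragment away if it sits above sea level:

```cpp
static const char* kOceanFragmentShader = R"(
#version 150
uniform sampler2D terrainHeightBuffer; // world-space Y written by the map quad
uniform vec2 screenSize;
uniform float seaLevel;

out vec4 fragColor;

void main()
{
    float terrainY = texture(terrainHeightBuffer, gl_FragCoord.xy / screenSize).r;
    if (terrainY >= seaLevel)
        discard;                       // land tile: no water here
    fragColor = vec4(0.1, 0.3, 0.6, 1.0);
}
)";
```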

screenshot_sun_feb_26_19_50_26_2017

screenshot_sun_feb_26_19_50_32_2017

screenshot_sun_feb_26_19_50_37_2017

screenshot_sun_feb_26_19_50_42_2017

screenshot_sun_feb_26_19_50_46_2017

screenshot_sun_feb_26_19_50_52_2017

Cape.com Drone Flight Beta

I got invited into the cape.com drone flight beta. The technology involved is very cool: basically you can remotely fly real drones from your laptop or phone anywhere in the world. You select one of 7 locations in North America to launch the drone, and you can fly around for roughly 5 minutes or so. It’s free for the beta; I imagine you can fly for longer and at more diverse locations in the full release.

The first flight was a failure; I couldn’t figure out how to actually launch the drone. Keyboard control instructions pop up onscreen and you have to press the Enter key to initiate the launch, which I didn’t realise until the next flight. Before you start controlling the drone yourself it autopilots itself off the ground and up to an altitude of at least 16m. From there you are given the controls. I was quite impressed with the latency, all things considered. There is a definite lag, but nothing that stopped me from feeling in full control of the drone.

It was a pretty cool experience; the whole thing works so seamlessly. I remember years back being impressed by seeing live webcam feeds through the browser. Now I’m viewing a webcam mounted on an unmanned aerial vehicle, remotely controlled from my bedroom. Quite surreal.

1.png

2.png

3.png

4.png

5.png

The top-right map shows a green dot if another drone is in the area. You can’t see it very well in the pic above, but there was another guy piloting a drone just in front of me.

Update 8: Grass Rendering

I started some experiments with parallax-mapped grass, but they didn’t turn out too successful. With parallax mapping the grass cannot extend out onto other geometry, which made it look pretty dull. The next idea was to use shell rendering, an age-old method used for realtime fur. ‘Shadow of the Colossus’ used it well on the PS2, and it still sees use today in games like Uncharted 4 and GTA5. It suits my case perfectly: since the camera in my game looks down upon the scene, I only need a limited number of shells to get a convincing effect.

Shadow-of-the-Colossus-6-Barba-10.jpg

Shadow Of The Colossus (2005) Shell rendered fur

0rF1Omd.jpg

GTA5 (2015) Shell rendered hedges

This is a rough-and-ready test: the whole terrain mesh has been duplicated multiple times and assigned a new grass material. That works fine for a proof of concept, but for the final release the next step would be to implement the shells in a geometry shader. You can’t see it in the images, but the grass texture is sampled using the sin and cos of the elapsed time, which gives an animated, wavy wind effect.
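For anyone wanting to try it, here is a minimal sketch of the shell shading under my own naming (not the project's actual shaders): each duplicated layer is pushed a little further out along the normal, the fragments thin out toward the tip, and the UVs get a small sin/cos wobble for the wind:

```cpp
// Vertex stage, conceptually: worldPos = inPosition + inNormal * shellIndex * shellSpacing;
// The fragment stage below decides which texels survive on each shell.
static const char* kGrassShellFragmentShader = R"(
#version 150
uniform sampler2D grassTexture;   // density in alpha, colour in rgb
uniform float shellIndex;         // 0 at the base, 1 at the tip
uniform float elapsedTime;

in vec2 vTexCoord;
out vec4 fragColor;

void main()
{
    // Wind: higher shells sway more.
    vec2 sway = vec2(sin(elapsedTime), cos(elapsedTime)) * 0.01 * shellIndex;
    vec4 grass = texture(grassTexture, vTexCoord + sway);

    // Blades thin out toward the tip: keep only the densest texels up high.
    if (grass.a < shellIndex)
        discard;

    // Slightly darker at the roots for cheap self-shadowing.
    fragColor = vec4(grass.rgb * mix(0.6, 1.0, shellIndex), 1.0);
}
)";
```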

shortgrass.jpg

Short Grass, 5 shells

longgrass2.jpg

Medium grass, 10 Shells

longgrass

Medium thick grass, 10 shells close up

superlonggrass.jpg

Long thin grass, 20 shells.

The obvious limitation of this method is that the grass can only be looked down on. If you are looking parallel to the grass, the effect cannot be seen at all.

limitgrass.jpg

Short Grass, 5 shells, looked at with free floating camera

Update 7: Fixed Camera Rendering

I’ve had the idea for a fixed-camera rendering system in the back of my mind for a little while, so I thought I would put together a proof of concept. Basically, the world can only be viewed from isometric and top-down oblique perspectives, but the game is still a fully 3D world. The idea spawned when I was watching this video by Roller Coaster Tycoon 1&2 artist Simon Foster, where he explains the process for creating the graphics for both games. He creates the game models in 3D Studio Max and renders them out at different angles, then manually puts them on a giant sprite sheet ready for use in the game. It got me thinking: what if I could render 3D models to a sprite sheet in the actual game, along with normals and depth for the model? This would allow for procedurally generated sprites.

Obviously this kind of graphics has its drawbacks. You can’t rotate the camera freely to any angle; instead you can only view the world from 8 fixed angles, 4 of which resemble an isometric view (although it isn’t true isometric). I did a bit of research and a guy on YouTube has implemented a similar idea here. Very cool effect; he uses high-poly pre-rendered assets with ambient occlusion baked in.

Here is a stress test with 1000+ models and 200+ lights. The engine maintains a solid frame rate (on my 5-year-old MacBook Pro).

1

2.jpg

4

I’m basically taking the graphics concept of ‘imposters’ to the extreme: a technique where a 2D image replaces 3D geometry at a certain LOD. For example, in a video game a complex tree model would get replaced with an image of the tree when viewed from far away. The difference here is that these imposters look exactly the same as the original model; by storing depth and normals, they can be lit in realtime as if they were 3D.

Behind the scenes, the engine renders a model 8 times from different angles and puts the data onto a single texture atlas. When the model needs to be rendered, just a single polygon is drawn on screen, and the normals and depth for that part of the screen are taken straight from the atlas. In theory, the final output should be no different from having rendered the full 3D geometry, assuming the camera maintains a fixed position. This is just a proof of concept at the moment; I haven’t tested how it really performs compared to normal 3D geometry. I’m not convinced the savings will be that great right now, but this rendering method opens doors to other optimisations later on.
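As an illustration only (the atlas layout and names are my assumptions, not the engine's), picking which of the 8 pre-rendered views to draw comes down to bucketing the camera's yaw:

```cpp
#include <cmath>

struct AtlasCell { float u, v, width, height; };

// Assume the 8 views are rendered in a single row, 45 degrees apart.
AtlasCell SelectImpostorCell(float cameraYawDegrees)
{
    const int viewCount = 8;
    int index = static_cast<int>(std::round(cameraYawDegrees / 45.0f)) % viewCount;
    if (index < 0)
        index += viewCount;   // handle negative yaw

    AtlasCell cell;
    cell.width  = 1.0f / viewCount;
    cell.height = 1.0f;
    cell.u = index * cell.width;
    cell.v = 0.0f;
    return cell;
}
```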

depthnormals

For this type of rendering to work, depth is compared using the Y axis in world space, which is different from a typical depth buffer that holds depth values in window space. The issue is that in a deferred rendering system, when the G-buffers are brought together and the scene is lit, world coordinates are calculated directly from the depth buffer; you need world coordinates in order to calculate the distance to the light source. That works fine normally, but when ‘depth’ is only the world-space Y axis it’s not enough for the lighting calculation. The solution is pretty simple: I’ve done away with the depth buffer and replaced it with a position buffer, meaning I store XYZ for each fragment in the RGB channels of a texture. This was just a bit of fun, but it might have some use within the game, maybe for trees and for models viewed at a distance.
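A rough sketch of what the light pass can look like with a position buffer (names are placeholders, not the project's shaders): the fragment's world XYZ is read straight from the G-buffer instead of being reconstructed from depth:

```cpp
static const char* kPointLightFragmentShader = R"(
#version 150
uniform sampler2D positionsBuffer; // world-space XYZ in RGB, object ID in A
uniform sampler2D normalsBuffer;
uniform vec3 lightPos;
uniform vec3 lightColor;
uniform float lightRange;
uniform vec2 screenSize;

out vec4 fragColor;

void main()
{
    vec2 uv = gl_FragCoord.xy / screenSize;
    vec3 worldPos = texture(positionsBuffer, uv).rgb;
    vec3 normal   = normalize(texture(normalsBuffer, uv).rgb * 2.0 - 1.0);

    // Distance to the light comes straight from the stored world position.
    vec3 toLight = lightPos - worldPos;
    float dist  = length(toLight);
    float atten = clamp(1.0 - dist / lightRange, 0.0, 1.0);
    float ndotl = max(dot(normal, toLight / max(dist, 1e-4)), 0.0);

    fragColor = vec4(lightColor * ndotl * atten, 1.0);
}
)";
```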

Good articles for further reading:
http://blog.wolfire.com/2010/10/Imposters
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch21.html
https://software.intel.com/en-us/articles/impostors-made-easy