Technical FAQ

Knuckle Cracker is often asked technical questions about programming techniques, data structures, optimization, and other topics that are not specifically game-related. This page collects answers to those questions, in the hope that they are useful to others as well.

IXE: Why Pixel graphics? 1)
For IXE, I wanted to explore adding additional simulations to the game that would work in conjunction with the creeper, and I wanted the terrain to be simulated at its most basic unit. Work backwards from the computational capabilities of current home computers running multi-threaded SIMD compiled code, and you arrive at the “pixel” scale you see in IXE. I'm not drawing pixelated terrain just because I like the way it looks. It is pixelated because every pixel is physics-simulated 30 or more times per second, across the entire map. That is true for terrain, creeper, and even for ship movement when they are in flight.

Pixel graphics are not just an artistic choice for this game. They are a technical choice to get to the game play I want to explore. If computers were 16x faster, the pixels you see in IXE would be 1/4 their current size.

Nobody is required to like it, even knowing the reason. Folks like what they like, and everyone wants something slightly, or majorly, different from a game. I'm not trying to convince anybody to like these choices, and I respect everyone's opinions. I say all of this only so that anyone who wonders “why pixels when the last game was 3D” has an answer.

As a footnote, the creeper and terrain simulators in this game are the fastest, most advanced, most multithreaded, and most efficient, by a healthy margin, of any game I have made. It may look like the 1980s, but its code, algorithms, and engine are definitely 2020s.

CW4: What data structures and rendering methods are used for terrain rendering? 2)
The terrain is held internally as a flattened 2D array (a single dimensional array treated as 2D data). Each position in the array represents a height, essentially creating a basic height map. The game dynamically generates a mesh for the terrain, where the points/vertices form a grid. Each set of 4 vertices is connected by a pair of triangles. The cutting direction of the triangles (top-left to bottom-right or bottom-left to top-right) is dynamically determined based on the shortest 3D distance. This is a common way to represent a 2D height map as a 3D mesh.
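As a minimal sketch of the idea above (not Knuckle Cracker's actual code), the following shows a flattened 2D height array turned into triangle indices, where each quad's diagonal is chosen by whichever of the two candidate cuts is shorter in 3D. All names and the `cell` size parameter are illustrative.

```python
import math

def build_triangles(heights, width, depth, cell=1.0):
    """heights: flat list of length width*depth, indexed as heights[z*width + x]."""
    def idx(x, z):
        # flattened 2D indexing: the 1D array treated as 2D data
        return z * width + x

    triangles = []  # list of (i0, i1, i2) vertex-index triples
    for z in range(depth - 1):
        for x in range(width - 1):
            a = idx(x, z)          # top-left
            b = idx(x + 1, z)      # top-right
            c = idx(x, z + 1)      # bottom-left
            d = idx(x + 1, z + 1)  # bottom-right
            # 3D lengths of the two candidate diagonals
            ad = math.hypot(cell, cell, heights[a] - heights[d])
            bc = math.hypot(cell, cell, heights[b] - heights[c])
            if ad <= bc:
                # cut top-left to bottom-right
                triangles.append((a, d, c))
                triangles.append((a, b, d))
            else:
                # cut bottom-left to top-right
                triangles.append((a, b, c))
                triangles.append((b, d, c))
    return triangles
```

On flat ground either cut works, so the choice only matters where heights differ; picking the shorter diagonal keeps the mesh hugging the height map rather than bridging across steps.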

To enhance the visual appearance, shaders, textures, and other elements are necessary. Most of the work involves applying a shader with fancy stochastic methods to blend and shuffle textures, eliminating seams between textures as they tile across the surface. Additionally, edge and vertex coloring techniques are used to create lighting and shadowing effects on the mesh. In CW4, the terrain heights are limited to integer values, and different textures are applied to the non-flat portions using the shader.
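One common stochastic trick of the kind described above is to hash each tile's integer coordinates to pick a texture variant and a rotation, so repeating patterns stop lining up across the surface. This sketch is only an illustration of that general idea, not the game's actual shader; the hash constants and variant counts are made up for the example.

```python
def tile_variant(tx, ty, num_variants=4, seed=0x9E3779B9):
    """Deterministically map tile coords (tx, ty) to (variant, rotation)."""
    # a xorshift-style integer mix, kept within 32 bits
    h = (tx * 0x85EBCA6B ^ ty * 0xC2B2AE35 ^ seed) & 0xFFFFFFFF
    h ^= h >> 16
    h = (h * 0x45D9F3B) & 0xFFFFFFFF
    h ^= h >> 16
    variant = h % num_variants   # which texture variation to sample
    rotation = (h >> 2) % 4      # 0/90/180/270 degree rotation
    return variant, rotation
```

In a real shader this selection runs per fragment (or per tile) on the GPU, and blending across tile borders hides the remaining seams.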

For your game, you may prefer smoother terrain. Unreal Engine (UE) likely provides shaders that can render terrain in a standard and visually appealing way. If not, UE offers excellent shader authoring support, surpassing even Unity. If I were starting from scratch in Unity today, I would follow a similar approach, but use a NativeArray to hold the flattened data, and Unity's Burst and Jobs systems to update the mesh data efficiently when changes occur. These newer engine APIs offer better performance while employing the same technique. The same techniques should work in UE4 or UE5 (although in UE5, the introduction of Nanite may change your approach completely).

This is a general overview of the terrain representation and rendering technique. There may be additional considerations regarding surface normals and other details that you will likely address as you progress further.

technical_faq.txt · Last modified: 2023/07/08 18:10 by Karsten75