Here is the latest test on the game and I’m just playing with my additional camera in the scene 😉
Crash report includes call-stack and memory
I changed the design of the engine for the next version. In the new design, there is a separate main library of the engine. It includes memory allocators, containers, random systems, basic math functions, simple memory leak and memory corruption detectors, a call stack system, and a crash reporter. All of this is now simpler, more optimized, and easier to use.
Developing a memory leak detector and a call-stack system is straightforward, and you can easily find many samples on the net. But I was also thinking about a way to detect memory corruption. I found that a complete solution is really complicated and doesn't suit me; my problem was not that severe, and there was no need for complex solutions. All I needed was a simple system to check array bounds. So my memory corruption detector does that for me, fast and securely. Although it may not report every memory corruption, it still catches many of them.
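To make the idea concrete, here is a minimal sketch of the guard-byte approach to bounds checking; this is my own illustration of the technique, not the engine's actual code, and all names in it are invented:

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Surround each allocation with a known pattern and verify it on demand.
static const unsigned char GUARD      = 0xAA;
static const size_t        GUARD_SIZE = 16;

void* dbg_malloc(size_t size)
{
    unsigned char* raw = (unsigned char*)malloc(size + 2 * GUARD_SIZE + sizeof(size_t));
    memcpy(raw, &size, sizeof(size_t));                                   // remember the size
    memset(raw + sizeof(size_t), GUARD, GUARD_SIZE);                      // front guard
    memset(raw + sizeof(size_t) + GUARD_SIZE + size, GUARD, GUARD_SIZE);  // back guard
    return raw + sizeof(size_t) + GUARD_SIZE;                             // user pointer
}

// returns true if the guards are intact, false if the array bounds were overrun
bool dbg_check(void* p)
{
    unsigned char* user = (unsigned char*)p;
    unsigned char* raw  = user - GUARD_SIZE - sizeof(size_t);
    size_t size;
    memcpy(&size, raw, sizeof(size_t));
    for (size_t i = 0; i < GUARD_SIZE; ++i)
    {
        if (raw[sizeof(size_t) + i] != GUARD) return false;  // front overrun
        if (user[size + i] != GUARD) return false;           // back overrun
    }
    return true;
}
```

A checker like this misses corruption that skips over the guards entirely, which is exactly the trade-off described above: it won't report everything, but it catches the common off-by-one array overruns cheaply.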
The new system can report memory status and the call stack, with every argument value in every call, both while the system is running and after a crash. When a crash happens on the application side, a message is received; I then check the flags and memory status to verify that the application closed unexpectedly and report the memory statuses.
My next news item is about the network system. The new version has a basic network system based on the UDP protocol. The system creates connections that guarantee reliable data ordering, re-request lost critical data, merge and compress data packets, and control connection speed to prevent data accumulation. The network system contains Client, Server, Connection, and Socket objects, which allow the developer to design any network paradigm based on the genre of the game.
In the Client/Server architecture, the Server object features broadcast capabilities to create a game session for a specified number of clients to join. The Client object can list the game sessions and let the player choose a game to join. There are also Connection and Socket objects for designing other network architectures such as peer-to-peer connections, stars, meshes, etc. I wrote the network system recently; it still needs more optimization and debugging, and we will probably test it on “Rush for Glory”.
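As an illustration of one piece of such a guaranteed connection, here is a sketch of in-order delivery over an unreliable transport using sequence numbers. The class and its names are hypothetical, not the engine's API:

```cpp
#include <cassert>
#include <map>

// Out-of-order packets are held back until the gap before them is filled,
// so the application always sees data in the order it was sent.
class OrderedReceiver
{
public:
    // store a packet; return how many packets became deliverable in order
    int Receive(unsigned seq, int payload)
    {
        m_pending[seq] = payload;
        int delivered = 0;
        while (m_pending.count(m_next))
        {
            // deliver m_pending[m_next] to the application here
            m_pending.erase(m_next);
            ++m_next;
            ++delivered;
        }
        return delivered;
    }
    unsigned NextExpected() const { return m_next; }

private:
    unsigned m_next = 0;                 // next sequence number we can deliver
    std::map<unsigned, int> m_pending;   // packets waiting for earlier ones
};
```

A real reliable-UDP layer would combine this with acknowledgements and resend requests for the missing sequence numbers; the buffer above only shows the ordering half of the problem.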
Next Container Library
Development of the “Rush for Glory” game is going to be finished, so I decided to rewrite some parts of the engine recently. This may force users of the engine to rewrite their source code. In the first step, I rewrote the containers library. In this new version, I changed the structure of the containers to decrease the usage of templates<> and make them easier to understand. I used a sampler to decrease allocations. Also, implementing memory allocators is easier than before, and in some cases they can be changed from the code. Finally, I implemented a new memory allocator that uses stack memory to avoid calling the malloc function.
Note that some of the new features are not designed for general purposes. There are many limitations to using them, and they must be used with care, because these features are designed for game development and most of their capabilities depend on the needs of a game engine.
Here are some highlights:
Added basic math functions to the library, including some fast functions:
// return the approximate square root of x
float sx_sqrt_fast( const float x );
// return sine( x ) from table. maximum absolute error is 0.001f
float sx_sin_fast( const float x );
// return cosine( x ) from table. maximum absolute error is 0.001f
float sx_cos_fast( const float x );
// compute the sine and cosine of the angle x at the same time. maximum absolute error is 0.001f
void sx_sin_cos_fast( const float IN x, float& OUT s, float& OUT c);
and some useful conventional functions as well.
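For illustration, a table-based sine in the spirit of sx_sin_fast could look like the sketch below; the table size and the linear interpolation scheme are my own assumptions, not the engine's actual implementation, chosen so the absolute error stays well under 0.001:

```cpp
#include <cassert>
#include <cmath>

static const int   TABLE_SIZE = 256;
static const float TWO_PI     = 6.2831853f;
static float s_sinTable[TABLE_SIZE + 1];  // one extra entry for interpolation
static bool  s_tableReady = false;

float sin_fast(float x)
{
    if (!s_tableReady)
    {
        for (int i = 0; i <= TABLE_SIZE; ++i)
            s_sinTable[i] = sinf(TWO_PI * i / TABLE_SIZE);
        s_tableReady = true;
    }
    // map x into [0, 2*pi), then into table coordinates
    float t = fmodf(x, TWO_PI);
    if (t < 0.0f) t += TWO_PI;
    float f    = t * (TABLE_SIZE / TWO_PI);
    int   i    = (int)f;
    float frac = f - i;
    // linear interpolation between the two nearest table samples
    return s_sinTable[i] + (s_sinTable[i + 1] - s_sinTable[i]) * frac;
}
```

With 256 samples and linear interpolation the worst-case error is on the order of 1e-4, comfortably inside the 0.001 bound stated in the comments above.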
MemManFixed::SetBuffer(void* buffer);
I can change the memory buffer of the allocator by setting a new buffer. This feature is available only on the fixed memory manager, which means it's suitable just for arrays and strings.
For example, suppose I want to change/fill a string inside a structure. In this example, when I point the allocator at members of the structure, all string functions can be applied to them easily.
struct FileInfo
{
    wchar path[256];
    wchar name[64];
    wchar type[32];
};
FileInfo fileInfo;
MemManFixed memTmp;
String strTmp( 0, &memTmp );
// extract file path from file name and copy that to fileInfo.path
memTmp.SetBuffer( &fileInfo.path );
strTmp = fileName;
strTmp.ExtractFilePath();
// extract the name of the file and copy that to fileInfo.name
memTmp.SetBuffer( &fileInfo.name );
strTmp = fileName;
strTmp.ExtractFileName();
// extract file type and copy that to the fileInfo.type
memTmp.SetBuffer( &fileInfo.type );
strTmp = fileName;
strTmp.ExtractFileExtension();
MemManFixed_inline<memSizeInByte>
This memory manager uses stack memory instead of heap allocations. Again, this feature is available only on the fixed memory manager, which means it's suitable just for arrays and strings.
In this example, I try to collect some nodes from the scene manager. This happens more than once each frame, in the renderer, the AI system, etc. Creating a plain Array class there causes allocation/deallocation calls. One alternative is declaring a static array; another is using stack memory, which has some limitations but gives fast and easy memory management.
For example:
MemManFixed_inline<1024> tmpMem; // buffer size here is illustrative
Array<Node*> nodes( 0, &tmpMem );
// collect nodes from the scene manager
g_engine->m_scene->GetNodesByFrustum( cameraFrustum, nodes );
for ( int i = 0; i < nodes.Count(); i++ )
{
// ... do something !
}
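To show the idea behind such a stack-backed fixed allocator, here is a minimal bump-allocator sketch; the class name and interface are mine, not the engine's actual MemManFixed_inline API:

```cpp
#include <cassert>
#include <cstddef>

// A fixed buffer that lives inside the object (and thus on the stack when
// the object is a local variable). Allocation is a pointer bump; there is
// no per-allocation free, just a whole-buffer reset, e.g. once per frame.
template <size_t memSizeInByte>
class FixedStackMem
{
public:
    FixedStackMem() : m_used(0) {}

    void* alloc(size_t sizeInByte)
    {
        if (m_used + sizeInByte > memSizeInByte) return nullptr; // out of room
        void* p = m_buffer + m_used;
        m_used += sizeInByte;
        return p;
    }

    void reset() { m_used = 0; } // forget everything at once

private:
    char   m_buffer[memSizeInByte];
    size_t m_used;
};
```

The limitation mentioned above is visible here: the capacity is fixed at compile time and nothing is ever freed individually, which is exactly why such an allocator suits short-lived per-frame arrays and strings but not general-purpose use.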
Although there are other changes as well, like new functions and changed algorithms elsewhere to increase performance and capabilities, those parts still need to be optimized further.
Terrain :: Geometry
The terrain for the SeganX engine is going to be completed. In this post, I will briefly describe the geometry algorithms I used for rendering the terrain. My goals in designing this feature were ease of use, simplicity, and a straightforward implementation. The artist/level designer can easily:
– create geometry everywhere, in whatever amount is required.
– get good performance through LOD and through batching geometries to reach a minimal draw call count.
As I mentioned before, everything in the scene is a kind of node member (component). In this scheme, terrain works like the other members (mesh, particle, sound, …): the terrain is a node member too. Thus, to bring terrain into the scene, we create a terrain member, attach it to a node, and then add the node to the scene. This way we can create any number of terrain members and add them to the scene. But to reach the goals mentioned above, I designed terrain members with some extra information.
Geometry data
Each terrain member has a fixed 33×33 grid of vertices. I chose this size because I want to generate 6 levels of detail: with 33×33 vertices we have 32×32 quads at the first level, and halving the resolution from level to level leaves a single quad at the lowest level of detail. Choosing the vertices for each level is done using 6 static, fixed-size LOD structures called TerrainLOD that are shared by all terrain members. Thus, there is no need for extra indices/vertices per level; we just pick the correct LOD structure depending on the view distance. To render all terrain members, I use one big buffer and append to it the vertices of all terrain members in the viewport. The appending process only touches the vertices described in the LOD structure. Finally, the terrain is drawn with one draw call.
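The relation between the 33×33 grid and the 6 LOD levels can be sketched with two small helpers; these are purely illustrative, not the engine's code:

```cpp
#include <cassert>

// A 33x33 vertex grid gives 32x32 quads at LOD 0. Each successive LOD
// doubles the stride between the vertices it uses, so LOD 5 is one quad.
int quads_per_side(int lod)
{
    return 32 >> lod; // 32, 16, 8, 4, 2, 1 for lod = 0..5
}

// Index into the flat 33x33 vertex array for a given LOD grid cell:
// the same vertex buffer serves every level, only the stride changes.
int vertex_index(int lod, int row, int col)
{
    int step = 1 << lod; // stride between used vertices
    return (row * step) * 33 + (col * step);
}
```

This is why no extra vertices are needed per level: every LOD just addresses a sparser subset of the one shared 33×33 grid.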
T-junction problem
The T-junction problem appears between terrain blocks with different LODs and makes the terrain look ugly. The picture below illustrates this problem clearly:
There are several ways to remove these junction problems. Based on the ‘Geometry data’ of terrain in the engine, I defined and implemented a simple structure that describes the indices for each LOD. This structure splits the indices of each LOD into 9 parts:
typedef struct TerrainLOD
{
    TerrainChunk center;     // standard center
    TerrainChunk sideFUL[4]; // standard 4 sides
    TerrainChunk sideLOD[4]; // simplified 4 sides
}
TerrainLOD, *PTerrainLOD;
The center indices in this structure contain the polygon indices that cover the center, excluding the 4 sides (up, right, down, left). Those sides are covered by either sideFUL or sideLOD: sideFUL contains indices covering all polygons of that LOD, while sideLOD contains simplified indices.
Choosing which side set (sideFUL or sideLOD) to use is simple, because we can easily predict the LOD of the neighbors. To predict the LOD of each neighbor, I compute the neighbor's position by adding the fixed terrain size to the current position. Passing that position to some helper functions gives the neighbor's LOD number for the current camera properties. For each neighbor that uses fewer details, we append sideLOD to the big buffer; otherwise, we use sideFUL. Here is a screenshot of the terrain in wireframe mode:
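The side-selection logic described above could be sketched like this; the distance thresholds and function names are invented for illustration and the real engine derives LOD from its own camera properties:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical LOD predictor: LOD grows with distance from the camera.
// One LOD step per 100 world units is a made-up threshold.
int lod_of_position(float posX, float posZ, float camX, float camZ)
{
    float dx = posX - camX, dz = posZ - camZ;
    float d  = sqrtf(dx * dx + dz * dz);
    int lod  = (int)(d / 100.0f);
    return lod > 5 ? 5 : lod;
}

// For the neighbor at +terrainSize along X, decide which side indices to
// append: the simplified set when the neighbor is coarser, else the full set.
bool use_simplified_side(float posX, float posZ, float terrainSize,
                         float camX, float camZ)
{
    int myLod       = lod_of_position(posX, posZ, camX, camZ);
    int neighborLod = lod_of_position(posX + terrainSize, posZ, camX, camZ);
    return neighborLod > myLod; // neighbor uses fewer details
}
```

The same check is repeated for the other three neighbors (-X, +Z, -Z) so each of the four side chunks is chosen independently.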
My next task is choosing and implementing a terrain material system. The current system supports multilayer materials, but implementing brushes, and gathering information about each brush on the terrain to increase performance, still needs a great deal of optimization work. For now, though, I have to spend more time on the Rush for Glory game project.
Steve Jobs
AI for Rush for Glory
It's been about 3 weeks since our playable demo of Rush for Glory became ready. It has 2 main mechanics: creating and placing towers, and shooting at enemies. I improved some features of the engine during the demo development process, and AI was one of them.
Currently, the AI includes pathfinding, mission-based decision making, and hierarchical state-driven agent behavior from “Programming Game AI by Example” by Mat Buckland. I'll try to explain it briefly to give you the concepts.
States
Each state is a class that inherits from a standard state interface. Each agent in the game has a primary state; in addition, agents can have a parallel state for parallel operations. We can create a state and put it in the current state slot of the agent. These states control the behavior of the agent. In other words, each agent is like an Atari game console and each state is like a cartridge: by replacing states, the behavior of the agent changes.
In Buckland's design, each state can be atomic or composite. Atomic states have no sub-states and just do simple operations, for instance pressing a button in the game to open a door. Composite states, however, have a list of sub-states. Depending on the circumstances in the game, a composite state may create sub-states and add them to the list. Sub-states are processed consecutively, and each sub-state is removed when its operation is completed.
For example, when an agent is traversing a path that is blocked by a door, the agent can create a sub-state that defines a new mission, “press a key to open the door”, and push it onto the list. This new sub-state may itself include more sub-states, to find a path to the key, push the key, etc. This process of decomposing and satisfying states continues until the entire hierarchy has been traversed and the agent arrives at the goal.
Here is a shot that illustrates traversing a path which is blocked by a door:
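A minimal sketch of the atomic/composite state idea might look like this; the class names are mine and the engine's real implementation surely differs:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Every state answers one question each tick: am I done yet?
struct State
{
    virtual ~State() {}
    virtual bool Process() = 0; // returns true when the state is completed
};

// An atomic state finishes in one step, e.g. "press the button".
struct PressButton : State
{
    bool pressed = false;
    bool Process() override { pressed = true; return true; }
};

// A composite state processes its sub-states consecutively, removing each
// one as it completes; the composite itself is done when the list is empty.
struct Composite : State
{
    std::vector<std::unique_ptr<State>> subStates;
    bool Process() override
    {
        if (!subStates.empty() && subStates.front()->Process())
            subStates.erase(subStates.begin());
        return subStates.empty();
    }
};
```

Because Composite is itself a State, a sub-state can be another composite, which is exactly how the door example decomposes into "find a path to the key", "push the key", and so on.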
On www.ai-junkie.com you can find more books, tutorials, source codes, and …
Missions
In this game, each mission is a simple data structure containing a position, time, behavior flag, etc. The flag of a mission can be a combination of different behaviors. The brain of the agent uses this data structure to make decisions based on the mission flag. There is a queue of missions in the brain of each agent; we can push new missions to the brain, and the brain will perform each mission in turn.
Here is a simple sample that I used in one of our game prototypes to set an enemy's behavior. The game is in the tower defense genre: enemies try to reach the base placed in the center of the game world, they can attack the towers along the way, and when they arrive at the base they must remove themselves from the game:
Mission m;
m.flag = MISSION_GOTO_POSITION | MISSION_KILL_ENEMY;
m.pos = float3(0, 0, 0);
m.status = Mission::MS_ACTIVE;
pAgent->GetBrain()->AddMission(m);

m.flag = MISSION_SUICIDE;
pAgent->GetBrain()->AddMission(m);
In this example, I added two missions to the brain of the enemy agent. In the first mission, the flag combines two behaviors: “go to the specified position” and “kill the enemies”. The brain creates one state that moves the agent to the specified position and sets it as the main state; it also creates another state that finds and kills enemies and sets it as the parallel state. When the first mission is completed, the brain processes the next mission, whose flag is “suicide”, causing the brain to remove its agent from the game.
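The mission queue itself can be sketched as follows; only the flag names and AddMission come from the example above, while the Brain internals here are my own guess:

```cpp
#include <cassert>
#include <queue>

enum MissionFlag
{
    MISSION_GOTO_POSITION = 1 << 0,
    MISSION_KILL_ENEMY    = 1 << 1,
    MISSION_SUICIDE       = 1 << 2
};

struct Mission { int flag; };

// The brain performs missions respectively: the front of the queue is the
// active mission, and the next one starts only when it reports completion.
class Brain
{
public:
    void AddMission(const Mission& m) { m_missions.push(m); }

    // flag of the mission currently being performed, or 0 if idle
    int CurrentFlag() const
    {
        return m_missions.empty() ? 0 : m_missions.front().flag;
    }

    void CompleteCurrent() { if (!m_missions.empty()) m_missions.pop(); }

private:
    std::queue<Mission> m_missions;
};
```

In the real engine the brain would also translate the active flag into main and parallel states as described above; the sketch only shows the queue discipline.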
Rush for Glory
We started making a simple game with the engine. There is so much work to do and time is limited; the deadline is so close that I have no time to update the weblog frequently. For this reason, I started developing the engine and the editor simultaneously.
Currently, the engine has background loading of textures and geometries with 3 levels of LOD, a step-by-step validation/invalidation system to load/unload resources in the scene, a simple shader-based material system, a particle system, AI path nodes (POV), a simple 2×2 PCF shadow map, an incomplete forward shading pipeline, and the usual features that every engine should have. Some of the work I have done will be described later.
Here are some screenshots: [Intel Sonoma 1.8 GHz – Nvidia GF GO 6400]
Component based animation
Lots of engines have different node types; static meshes are usually separate from animated meshes, and so on. I prefer not to separate static meshes from animated meshes, or generally anything from anything else. In my engine, the base and most important object in the scene structure is the Node type.
Nodes have transformation information, volumes, etc. They can have a parent and children, and in addition they can carry NodeMember objects. Every other object inherits from the NodeMember type; in other words, they are essentially components. NodeMembers themselves have no transformation, volumes, or, in general, geometry, and thus they are not counted by the scene manager. Node members describe the behavior of a node in the scene: for instance, a node can carry more than one mesh and material, some rigid bodies and triggers, some AI and script members, and the sounds that make up that node.
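The node/member split can be sketched like this; the member fields and type tags are invented for illustration, not the engine's actual classes:

```cpp
#include <cassert>
#include <string>
#include <vector>

// A member only describes behavior; it carries no transform or volume.
struct NodeMember
{
    std::string type; // e.g. "mesh", "sound", "animator"
    explicit NodeMember(const std::string& t) : type(t) {}
};

// Only the Node has spatial data and appears in the scene manager; it can
// carry any number and mix of members.
struct Node
{
    float position[3] = {0, 0, 0};
    std::vector<NodeMember*> members;
    void Attach(NodeMember* m) { members.push_back(m); }
};
```

The payoff of this split is that the scene manager only ever deals with one object kind, Node, while meshes, sounds, triggers, and scripts are freely mixed on it.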
Unlike other systems, which have different mesh types for animated, static, and even breakable meshes, I think they can all be one mesh type, and the animation can even be an independent object type, because we want to attach shaders, nodes, and node members to them easily and draw them in any rendering pipeline without exception. Reaching this goal is possible and simple in the engine's scene structure.
In this regard, I created a NodeMember called Animator. This object acts as an animation controller that can contain shared animations and animation data. By attaching an Animator to a node, any mesh of that node which has an animation shader will obey the animation sequences. This member also adds new nodes to its parent, one per bone, so that anything else can easily be attached to a specific bone. By detaching this object from the node, all meshes revert to their default mode and the additional nodes are disconnected from the parent.
Here are screenshots that show the bones and nodes as red and white cubes.
model from: http://www.gfx-3d-model.com/2008/07/ninja/
Scene Manager
I think the scene manager is the most important part of an engine. Rendering pipelines, some lighting techniques and shadow map generators, culling and collision systems, AI systems, triggers, and anything else in the game that needs to cooperate with objects in the scene has to use the features of the scene manager.
There are many algorithms and techniques for managing objects in a scene, and most of them use tree structures to hold the objects. Before modern graphics accelerators appeared, programmers had to use software rasterizers to draw polygons in 3D space, so reducing the number of polygons and ordering them from back to front were important issues. Accordingly, the objects in those scene managers were usually individual polygons, and the algorithms generally tried to split the scene into separate polygons. Today's hardware rasterizers behave differently.
However, rendering a large number of small objects, each made from a few polygons imposes a big strain on today’s GPUs and rendering libraries. Graphics APIs such as Direct3D and OpenGL are not designed to efficiently render a small number of polygons thousands of times per frame (Wloka 2003).
Today, most scene managers don't split scenes into separate polygons; instead, the algorithms try to manage batches (meshes, instances, …) by their location. In addition, there are hybrid scene systems that use different algorithms to manage the scene, for instance using an octree to partition the space and a BSP tree to separate meshes into polygons, or vice versa. In any case, choosing and implementing an appropriate scene management algorithm depends on the genre of the game.
To allow various kinds of scene graphs in the SeganX engine, there is a ::SceneManager interface containing the necessary pure virtual functions. We can implement our own scene algorithm by writing a new scene manager class derived from that interface, and then pass it to the ::Scene class to let the engine use our scene system instead of its default scene manager.
Something like this:
class SEGAN_API SceneManager
{
public:
/*!
fill the node list with the nodes found in the frustum.
NOTE: this function is called by many parts of core/rendering/AI/etc and should be as fast as possible.
*/
virtual void GetNodesByFrustum(const Frustum& frustum, IN_OUT ArrayPNode& nodeList) = 0;
...
};
// our new scene manager derived from SceneManager
class MySceneManager : public SceneManager
{
public:
virtual void GetNodesByFrustum(const Frustum& frustum, IN_OUT ArrayPNode& nodeList);
...
};
// give our new scene manager to the engine
MySceneManager* pSceneManager = SEGAN_NEW( MySceneManager );
Scene::Initialize( pSceneManager );
...
Currently, the engine has a default scene manager that uses a spherical bounding volume hierarchy (SBVH) to manage the objects in the scene by their bounding spheres. Supporting dynamic objects and needing no compile step to build the tree are features of my algorithm: all nodes can be inserted/removed/updated at run time, collecting and gathering nodes is guaranteed, and the performance is acceptable. But using a sphere as a bounding volume has some inherent drawbacks and lacks the required accuracy; the main disadvantage of SBVH is overlapping (“sinking”) spheres, which cause the traversal to visit useless nodes in the tree.
However, the simple and fast collision detection for spheres (compared with other volumes) makes the algorithm faster and acceptable overall.
The heart of the algorithm is where we choose a leaf node (a Sector in my tree) while traversing the tree, to find an appropriate position for a new sector when we want to insert or update a node. In this situation there are three sectors: two from the current tree node and our new one. At first glance, comparing the two center-to-center distances from our new sector to the other two looks like a good solution, but not actually! Bringing more parameters into the choice, such as the third distance and the radii of the spheres, can greatly improve the shape of the tree.
Here is my implementation of the function that finds the nearest sector to the specified sphere of a new node. I use this function only when inserting a new node into the scene manager:
void FindNearestSectorTo( PSector root, const Sphere& sphere, PSector& result )
{
    static float dis = 0.0f;
    if ( !root ) return;
    if ( !sphere.Intersect(root->m_sphere, dis) || root->m_node )
    {
        if ( distance_Point_Point_sqr(root->m_sphere.center, sphere.center) < distance_Point_Point_sqr(result->m_sphere.center, sphere.center) )
            result = root;
        return;
    }
    if ( root->m_left && root->m_left->m_node && root->m_right->m_node )
    {
        float dis_left = distance_Sector_Point_sqr( root->m_left, sphere.center );
        float dis_right = distance_Sector_Point_sqr( root->m_right, sphere.center );
        float dis_left_right = distance_Sector_Sector_sqr( root->m_left, root->m_right );
        PSector res = NULL;
        if ( root->m_left && dis_left < dis_left_right && dis_left < dis_right )
            res = root->m_left;
        else if ( root->m_right && dis_right < dis_left_right && dis_right < dis_left )
            res = root->m_right;
        else
            res = root;
        if ( distance_Point_Point_sqr(res->m_sphere.center, sphere.center) < distance_Point_Point_sqr(result->m_sphere.center, sphere.center) )
            result = res;
    }
    else
    {
        FindNearestSectorTo( root->m_left, sphere, result );
        FindNearestSectorTo( root->m_right, sphere, result );
    }
}
As you can see, I used three parameters: the distance between the point and the left sector, the distance between the point and the right sector, and the distance between the left and right sectors. Certainly there are better algorithms for choosing the right sector, and the code still needs more optimization.
GUI System
The primitive GUI objects have been completed. These objects are “Panel”, “Button”, “Checkbox”, “Trackbar”, “Progress – linear and circular”, “Label”, “Text editor”, and “Extended panel”. All of them inherit from a base “Control” class. We can easily attach controls to each other, bind them to their parents, clip them by their parents, send them into 3D scene space, and combine them to create more complex objects, for instance “List box”, “Combo box”, “Grid box”, etc.
Working with this new GUI system is easier than with the last version: parents control the behavior of their children, switching between 2D and 3D space became easier, and so on. The other features of these controls are similar to the last version of the Segan GUI, except for the “Text editor”.
The “Text editor” control supports multiline editing and selection by mouse and keyboard. It lets you specify a language for each text editor: it currently supports the Persian language and is ready for 24 languages in total (16 left-to-right and 8 right-to-left). It uses the internal input system to control the mouse and keyboard instead of Windows messages, simulates Shift+Alt to change the input language, and fits characters to pixels to avoid jagged text and present clear, readable text.
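The parent clipping mentioned above can be sketched as a rectangle intersection walked up the parent chain; this is my own illustration, not the Segan GUI code:

```cpp
#include <algorithm>
#include <cassert>

struct Rect { int x0, y0, x1, y1; };

// A control's visible rectangle is its own rect intersected with its
// parent's visible rect, recursively, so children never draw outside
// their parents.
struct Control
{
    Control* parent = nullptr;
    Rect rect;

    Rect VisibleRect() const
    {
        Rect r = rect;
        if (parent)
        {
            Rect p = parent->VisibleRect();
            r.x0 = std::max(r.x0, p.x0);
            r.y0 = std::max(r.y0, p.y0);
            r.x1 = std::min(r.x1, p.x1);
            r.y1 = std::min(r.y1, p.y1);
        }
        return r;
    }
};
```

Binding a control to a new parent then automatically changes where it is clipped, which is one reason composing complex objects like list boxes out of primitives stays simple.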
I plan to use these GUI controls as the high-level GUI in The Plankton (the SeganX editor), so the editor will provide an integrated workspace to create/modify the levels of the game.