Saturday, February 27, 2016

I forgot to install the NVIDIA driver

What?! The supported OpenGL shading language version is only 1.3? I was shocked when I first saw this.

I was feeling bored with my daily coding routine, so I thought I would try out something new to get some fresh inspiration. I grabbed this piece from learnopengl.com hoping for a sparkling idea:
const GLchar* vertexShaderSource = "#version 330 core\n"
    "layout (location = 0) in vec3 position;\n"
    "void main()\n"
    "{\n"
    "gl_Position = vec4(position.x, position.y, position.z, 1.0);\n"
    "}\0";

int main(int argc, char **argv)
{
    ...

    GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vertexShader, 1, &vertexShaderSource, NULL);
    glCompileShader(vertexShader);
    // Check for compile time errors
    GLint success;
    GLchar infoLog[512];
    glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &success);
    if (!success)
    {
        glGetShaderInfoLog(vertexShader, 512, NULL, infoLog);
        std::cout << "ERROR::SHADER::VERTEX::COMPILATION_FAILED\n" << infoLog << std::endl;
    }

    ...
}

But what surprised me was that the program failed to run even though I have an NVIDIA GT 610 installed. What surprised me even more was the output of the following command:
kokhoe@KOKHOE:~$ lspci -vnn | grep -i VGA -A 12
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF119 [GeForce GT 610] [10de:104a] (rev a1) (prog-if 00 [VGA controller])
 Subsystem: ASUSTeK Computer Inc. Device [1043:8496]
 Flags: bus master, fast devsel, latency 0, IRQ 45
 Memory at f6000000 (32-bit, non-prefetchable) [size=16M]
 Memory at e8000000 (64-bit, prefetchable) [size=128M]
 Memory at f0000000 (64-bit, prefetchable) [size=32M]
 I/O ports at e000 [size=128]
 Expansion ROM at f7000000 [disabled] [size=512K]
 Capabilities: <access denied>
 Kernel driver in use: nouveau

01:00.1 Audio device [0403]: NVIDIA Corporation GF119 HDMI Audio Controller [10de:0e08] (rev a1)
 Subsystem: ASUSTeK Computer Inc. Device [1043:8496]

Notice that the kernel driver in use is nouveau?! That means I had never installed the driver since the first day I got the NVIDIA card. To fix this, I installed the driver (see here for how I did the installation) and made a final verification:
kokhoe@KOKHOE:~$ lspci -vnn | grep -i VGA -A 12
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF119 [GeForce GT 610] [10de:104a] (rev a1) (prog-if 00 [VGA controller])
 Subsystem: ASUSTeK Computer Inc. Device [1043:8496]
 Flags: bus master, fast devsel, latency 0, IRQ 46
 Memory at f6000000 (32-bit, non-prefetchable) [size=16M]
 Memory at e8000000 (64-bit, prefetchable) [size=128M]
 Memory at f0000000 (64-bit, prefetchable) [size=32M]
 I/O ports at e000 [size=128]
 [virtual] Expansion ROM at f7000000 [disabled] [size=512K]
 Capabilities: <access denied>
 Kernel driver in use: nvidia

01:00.1 Audio device [0403]: NVIDIA Corporation GF119 HDMI Audio Controller [10de:0e08] (rev a1)
 Subsystem: ASUSTeK Computer Inc. Device [1043:8496]

Ah ha~ now it shows nvidia as the driver in use. Before running the program, I just wanted to make sure the supported shading language version is at least 3.3.
kokhoe@KOKHOE:~$ glxinfo | grep 'version'
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
OpenGL core profile version string: 4.3.0 NVIDIA 352.63
OpenGL core profile shading language version string: 4.30 NVIDIA via Cg compiler
OpenGL version string: 4.5.0 NVIDIA 352.63
OpenGL shading language version string: 4.50 NVIDIA
Now my OpenGL info shows version 4.5, far more advanced than the version the program expects. This shouldn't be a problem for my program anymore.
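
For good measure, the same versions can also be queried from inside the program itself. Here is a minimal sketch using glGetString, assuming an OpenGL context has already been created and made current (e.g. with the GLFW/GLEW boilerplate from learnopengl.com):
#include <iostream>
#include <GL/glew.h>    // or any header that declares glGetString

// Must be called after the OpenGL context has been created and made current.
void printGLVersions()
{
    std::cout << "Version: "
              << reinterpret_cast<const char *>(glGetString(GL_VERSION)) << std::endl;
    std::cout << "Shading language: "
              << reinterpret_cast<const char *>(glGetString(GL_SHADING_LANGUAGE_VERSION)) << std::endl;
    std::cout << "Renderer: "
              << reinterpret_cast<const char *>(glGetString(GL_RENDERER)) << std::endl;
}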

Monday, February 15, 2016

Does the Flyweight pattern really help in memory management?

In game architecture design, the memory allocator is one of the core modules of the game system. I was wondering how I could implement this module, and why I even need it. After reading many articles from the experts, I understand that it is crucial for game performance when thousands of game objects are spawned at a time. Because the new/delete operators are comparatively slow, it is advisable to implement your own game-specific memory allocator. But how could I do it?

Forget about those big games adopting very advanced allocator features. I should focus on my own game, which is categorized as a casual game. So my most basic needs would be:
  1. Able to allocate memory from the OS and release it back to the OS.
  2. The memory pool should be expandable, and game objects should return their memory to the pool.
  3. The memory pool should not release memory back to the OS until the game exits.
For this purpose, I'm borrowing the idea of the Flyweight pattern, whose aim is to minimize memory usage by sharing as much data as possible among similar objects. First things first, I define a default pool size of 10 for whenever a new memory pool is created:
template <typename T>
class GameObjectPool
{
private:
    static const int POOL_SIZE = 10;

    T *freshPiece;    // head of the free list of available objects
};
Notice the use of a template for this class; it allows the game to pool different kinds of game objects (e.g. particles, sprites). freshPiece is responsible for holding the available memory chunks for the game objects. When the pool is first constructed, freshPiece is empty; there isn't any game object held by it yet. Thus, the pool first acquires some memory from the OS. This is done in the constructor:
template <typename T>
class GameObjectPool
{
public:
    GameObjectPool() {
        fillUpMemory();
    }

    ...

private:
    void fillUpMemory() {
        // Allocate POOL_SIZE fresh objects and chain them into a free list.
        T *curObj = new T();
        freshPiece = curObj;

        for (int i = 0; i < POOL_SIZE - 1; i++) {
            curObj->setNext(new T());
            curObj = curObj->getNext();
        }

        curObj->setNext(nullptr);    // marks the end of the free list
    }
};
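Note that fillUpMemory() assumes T exposes setNext()/getNext() so the pool can chain the free objects together. A minimal sketch of what such a poolable type could look like (the Particle name and its data members here are just an illustration, not part of the pool itself):
class Particle
{
public:
    Particle *getNext() const { return next; }
    void setNext(Particle *n) { next = n; }

private:
    Particle *next = nullptr;    // intrusive free-list link, used only by the pool
    float x = 0.0f, y = 0.0f;    // illustrative particle data
};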
Once the game has finished, the memory is released back to the OS:
template <typename T>
class GameObjectPool
{
public:
    ~GameObjectPool() {
        // Walk the free list and hand every object back to the OS.
        T *curObj = freshPiece;

        for (; curObj; curObj = freshPiece) {
            freshPiece = freshPiece->getNext();
            delete curObj;
        }
    }
};
During the game runtime, game objects acquire memory from the pool instead of using the new keyword:
template <typename T>
class GameObjectPool
{
public:
    inline T *create() {
        // Refill the pool when the free list is exhausted.
        if (freshPiece == nullptr)
            fillUpMemory();

        // Pop the first available object off the free list.
        T *curObj = freshPiece;
        freshPiece = freshPiece->getNext();

        return curObj;
    }
};
Once the game object has done its job, its memory is released back to the pool. Just as with acquiring memory, no delete keyword is used:
template <typename T>
class GameObjectPool
{
public:
    inline void release(T *obj) {
        // Push the object back onto the front of the free list.
        obj->setNext(freshPiece);
        freshPiece = obj;
    }
};
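Putting it all together, usage looks like this (again using the illustrative Particle type sketched earlier):
GameObjectPool<Particle> pool;    // the constructor pre-allocates POOL_SIZE objects

Particle *p = pool.create();      // acquire from the pool, no new
// ... let the particle do its job during the frame ...
pool.release(p);                  // hand it back to the pool, no delete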
Notice that I hard-coded POOL_SIZE to 10; this is due to my laziness. I let the pool automatically allocate another batch of the predefined size whenever demand exceeds the available pool size. Maybe in the future I will want a more elegant way to adjust the pool size, as shown below:
template <typename T>
class GameObjectPool
{
public:
    GameObjectPool() {
        expandPoolSize();
    }

    GameObjectPool(int poolSize) : mPoolSize(poolSize) {
        expandPoolSize();
    }

    ...

private:
    void expandPoolSize() {
        // Same as fillUpMemory(), but the batch size is now configurable.
        T *curObj = new T();
        freshPiece = curObj;

        for (int i = 0; i < mPoolSize - 1; i++) {
            curObj->setNext(new T());
            curObj = curObj->getNext();
        }

        curObj->setNext(nullptr);
    }

private:
    int mPoolSize = 10;    // same default as the old POOL_SIZE

    T *freshPiece;    // head of the free list of available objects
};
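With this version, the pool size can be tuned per pool; the numbers below are only an example:
GameObjectPool<Particle> smallPool;       // uses the default batch size of 10
GameObjectPool<Particle> bigPool(256);    // expands in batches of 256 instead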
Well, this is my very first version, and it meets the most basic fundamentals of my game.