I've had this monitor and computer sitting on my right for a while now, and as you can see, the screen is rotated so it's in portrait mode. Not only is it great for Tweetdeck, it's also awesome for reading various API docs while I'm working.
I'd tried a few times to set this up with the Nvidia driver, but could never get it to work, and would just give up and use the Nouveau driver, as it was very simple to set up the way I would like.
I decided to come back to this, did some reading around, and discovered the following solution (I've since lost the link to the Ubuntu forum post that pointed this out; if anyone finds it, please add it to the comments).
Open Nvidia Settings and set up your monitors using Twinview as you would like them positioned, and hit apply.
Open the Displays application (the one in System Settings), select the display you want to rotate, change its rotation in the rotate dropdown, and hit apply.
Go back to Nvidia Settings and save your configuration to your xorg.conf
That should be about it! Now you have 1 screen rotated!
In Part 2 we drew a triangle using a Vertex Buffer and some basic shaders. While on the surface this can seem overly complicated, it actually becomes the basis of a powerful OpenGL architecture that enables you to leverage the GPU in a variety of very interesting ways without having to rely on the CPU.
With this code, we are going to take our original triangle, and we will make it move around a bit, and also change colour at the same time.
The clever thing about this is, we won't be changing the vertex data stored in the buffer, but will instead be manipulating it with Fragment and Vertex shaders. This almost feels like Uber-CSS over the top of HTML.
This is pretty powerful stuff, as we can let the GPU do a lot of the processing by using shader programs and passing them attributes to control the overall effect that we want.
You can see we now have a uniform vec2 offset;. This defines a value that is going to get passed in from outside the shader. It has the keyword uniform because the value stays the same for every vertex within a single rendering frame.

The offset uniform will be expecting a vector with an x and y coordinate (vec2) to be passed through.

We convert the offset vec2 to a vec4, as you can't add a vec2 to a vec4. Then we can add the two together (GLSL will do vector arithmetic for you out of the box) to get our final gl_Position vector.
This means we can change the position of our triangle with relative ease, just by changing the offset of each of the vertices.
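The vector maths the shader performs can be mimicked in plain Ruby. This is only a sketch to illustrate the sums (the method names here are illustrative, not part of GLSL or LWJGL; real GLSL does this natively on the GPU):

```ruby
# vec4(offset.x, offset.y, 0.0, 0.0) -- pad the vec2 out to a vec4
def to_vec4(offset)
  [offset[0], offset[1], 0.0, 0.0]
end

# gl_Position = position + vec4(offset, 0.0, 0.0) -- element-wise addition
def apply_offset(position, offset)
  position.zip(to_vec4(offset)).map { |a, b| a + b }
end

vertex = [0.75, 0.75, 0.0, 1.0]
p apply_offset(vertex, [0.25, -0.5])  # => [1.0, 0.25, 0.0, 1.0]
```

The same addition happens for every vertex in the buffer, which is what slides the whole triangle around without the vertex data ever changing.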
Doing this from our JRuby code is quite straightforward. The first thing we need to do is find the location of the offset uniform. This is done through:
In Part 1 we created a window, nothing too fancy. Now we get to actually display a triangle.
Just to follow along as well, I'm moving through the Learning Modern 3D Graphics Programming online book to learn OpenGL (again), so the OpenGL examples I display will be ports of the code that book provides. For the complete theory behind this code, I'd suggest reading the linked section before going through the JRuby code. I expect my explanations to be only commentary on the information already provided in that series, plus discussion of some of the finer points of JRuby and Java library integration.
If you have any questions, please feel free to ask, but be aware, I'm very new to OpenGL, so writing this series is very much part of my learning experience. However, I will attempt to answer the best way I can. On the other hand, if you find anything wrong with what I've written, please point it out so it can be corrected.
When I first did OpenGL back in University, we used the glBegin() and glEnd() paradigm. This was definitely far easier than the more modern APIs, as it was very clear and easy to draw a simple polygon on the screen (example). However, it did mean more computation was occurring on the CPU and a larger use of the system RAM than the newer APIs. The newer APIs, while (far?) more complicated, shift much of the work to the GPU and also provide a far more flexible implementation. I liken it to working with HTML and tables back in the early HTML days. Sure it worked, but CSS and semantic markup give a clear separation and create far more flexible implementation options (at least in theory ;) ).
So we have some basic vertex information to display a right angle triangle:
Each line of this array defines the x, y and z coordinates of our triangle. You will notice there is a fourth coordinate (1.0) on each line. This relates to clip space. For now we'll just say this means that the vertexes you see in the window have to have values between -1 and 1 on the x, y and z axes. Anything beyond that will render outside of the window.
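A quick plain-Ruby sanity check makes the clip-space idea concrete. The vertex values below are the book's triangle example and may differ slightly from the exact listing in this post:

```ruby
# Each group of four floats is one vertex: (x, y, z, w).
vertex_positions = [
   0.75,  0.75, 0.0, 1.0,
   0.75, -0.75, 0.0, 1.0,
  -0.75, -0.75, 0.0, 1.0
]

# A vertex only lands inside the window if x, y and z are all in [-1, 1].
vertex_positions.each_slice(4) do |x, y, z, w|
  visible = [x, y, z].all? { |c| c.between?(-1, 1) }
  puts format('vertex (%.2f, %.2f, %.2f) w=%.1f visible=%s', x, y, z, w, visible)
end
```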
As discussed previously, in old-school OpenGL you would just loop through this list of vertexes and say "draw a triangle here"; however, this is no longer the case!
I feel like modern OpenGL is almost like a database - you put some data into it, and have an id to reference that data that was placed in. Then you can work on that data that is stored on the GPU through some other techniques (that we will look at in a minute) via that id. This seems to be a concept that is used across the board.
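That database analogy can be sketched as a toy in plain Ruby. To be clear, this is NOT the OpenGL API, just an illustration of the id-based pattern: hand data over, get back an id, and refer to the data by that id from then on:

```ruby
# Toy id-based store, loosely mirroring the gen-buffer / buffer-data flow.
class ToyBufferStore
  def initialize
    @buffers = {}
    @next_id = 0
  end

  # like gl_gen_buffers: hand back a fresh id
  def gen_buffer
    @next_id += 1
  end

  # like gl_buffer_data: store data against the id
  def buffer_data(id, data)
    @buffers[id] = data
  end

  def fetch(id)
    @buffers[id]
  end
end

store = ToyBufferStore.new
id = store.gen_buffer                       # => 1
store.buffer_data(id, [0.75, 0.75, 0.0, 1.0])
p store.fetch(id)                           # => [0.75, 0.75, 0.0, 1.0]
```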
The code that inserts our vertex data into the GPU can be seen in the method init_vertex_buffers
So the first thing we do is generate an id for the vertex buffer, which is where we will store our vertex data (gl_gen_buffers). Then we tell OpenGL, hey, this is the buffer we want to work with for the moment, through gl_bind_buffer, passing in the specific @buffer_id we generated before. We also tell OpenGL that the buffer we are working with is a GL_ARRAY_BUFFER, so it knows what data to expect.
In case you aren't aware, JRuby will convert Java static constants to Ruby constants, so we can access these static fields very easily.
To pass the vertex data into the new vertex array buffer, LWJGL has us use its BufferUtils class to create an NIO buffer and push the data into it, like so:
float_buffer = BufferUtils.create_float_buffer(@vertex_positions.size)
float_buffer.put(@vertex_positions.to_java(:float))
# MUST FLIP THE BUFFER! THIS PUTS IT BACK TO THE BEGINNING!
float_buffer.flip
A couple of interesting notes:
You will notice the .to_java(:float). That is the JRuby idiom for converting a Ruby array to a Java array. Passing in :float tells it to make an array of primitive floats.

The .flip at the end. This is very important (and took me a day to work out, as I'm not familiar with NIO buffers). Here is a great article that explains it in more detail, but essentially the buffer tracks where it is up to, and flip sends it back to the beginning. Without this, no data goes to our Vertex Array and nothing happens!
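The position/flip behaviour of an NIO buffer can be sketched in plain Ruby. This toy class is only an illustration of the semantics, not the real java.nio.FloatBuffer (which also tracks a capacity and limit in the same way):

```ruby
# Toy sketch of NIO buffer semantics: put advances a position marker,
# flip resets the position to zero so readers start from the beginning.
class ToyNioBuffer
  attr_reader :position, :limit

  def initialize(capacity)
    @data = Array.new(capacity)
    @position = 0
    @limit = capacity
  end

  def put(values)
    values.each do |v|
      @data[@position] = v
      @position += 1
    end
    self
  end

  def flip
    @limit = @position
    @position = 0
    self
  end
end

buf = ToyNioBuffer.new(12)
buf.put([0.75, 0.75, 0.0, 1.0])
buf.position  # => 4 -- a reader starting here would see no data at all
buf.flip
buf.position  # => 0 -- back at the start, ready to be read
```

This is exactly why a forgotten flip means "no data goes to our Vertex Array": OpenGL reads from the current position, which is sitting at the end of what was written.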
After we are done, we tell OpenGL not to be bound to any array buffers (0 works like NULL in OpenGL land). This could be considered optional, but ensures that weird things don't occur.
Now all we have to do is actually write the code that turns that data into a triangle on the screen!
Telling OpenGL how to Render the Vertexes
So now we have the vertex data stored on the GPU, we have to tell it how to render it, and to do that, we have to build a program out of a couple of different types of shaders. Think of shaders as being a bit like the CSS to OpenGL's HTML. They simply work on the existing data in the GPU and tell it how to render (although that's a bit of an oversimplification).
First, we'll write a simple vertex shader in GLSL, the language for writing shaders. This tells the GPU where the vertexes actually are, using the data you entered earlier. We'll just say that it's correct and basically pass it through.
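A pass-through vertex shader of that kind looks something like the following, held as a Ruby heredoc so it can be handed to the shader-compile calls. The exact source in the original post may differ, and the GLSL version number is an assumption (it comes up again below):

```ruby
# Pass-through vertex shader source, kept as a Ruby string for LWJGL.
# The version directive and layout qualifier are assumptions based on the
# Learning Modern 3D Graphics Programming book this series follows.
VERTEX_SHADER = <<~GLSL
  #version 330

  layout(location = 0) in vec4 position;

  void main()
  {
      gl_Position = position;
  }
GLSL

puts VERTEX_SHADER
```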
Make sure to look at the output logs! Without them, you have no idea why your shader fails (if it does). Mine was failing, and I didn't realise it until I looked deeper. (Version 330 of GLSL wasn't supported by Mesa on my Ultrabook under Linux; I had to switch to my main laptop with the Nvidia graphics card.)
Creating the program that defines how our data is output on the screen is quite similar to what we did before. We generate an id and then link the shaders to the program like so:
This tells OpenGL how the vertex buffer data is structured. Here we are saying that we have an array of floats, and every four elements define one vertex: an x, y and then z member, with the fourth defining the clip space, as we saw earlier (0 is the attribute index, 4 is the size of each vertex).
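What that "size of 4" means for the flat float array can be shown in plain Ruby (an illustration of the grouping only, not an OpenGL call; vertex values are the book's example):

```ruby
# Twelve floats, grouped four at a time, yield three vertices -- one triangle.
vertex_positions = [
   0.75,  0.75, 0.0, 1.0,
   0.75, -0.75, 0.0, 1.0,
  -0.75, -0.75, 0.0, 1.0
]

vertices = vertex_positions.each_slice(4).map do |x, y, z, w|
  { x: x, y: y, z: z, w: w }
end

p vertices.length  # => 3
```

Those three grouped vertices are what the draw call below walks over.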
GL11.gl_draw_arrays(GL11::GL_TRIANGLES, 0, 3)
DRAW THE TRIANGLE! 0 is the start of the array of vertices you want to draw, and 3 is the number of vertices you want to process - in this case 3 to make a triangle.
This sets the JRUBY_OPTS environment variable, which tells JRuby to append these arguments to all JRuby invocations. This obviously doesn't work for deployment, but during development it makes things very handy.
Now to write some Ruby code to get a window up and running!
First we need to require java, and the lwjgl jar file:
Drawing the window is now very straightforward:
# Just a basic display using lwjgl
class OpenGL::BasicDisplay
  # initialise
  def initialize
    Display.display_mode = DisplayMode.new(800, 600)
    Display.create

    while !Display.is_close_requested
      Display.update
    end
  end

  def self.start
    OpenGL::BasicDisplay.new
  end
end
We can then write a little bin file to get this to run:
Warning: This is going to be far less technical, and more about the journey of learning that I'm enjoying.
As per my previous blog post, and from several tweets I've posted, I've been slowly plodding away on doing some game development in my spare time and in my holiday break. I originally started doing this because I've always loved games, and was deeply inspired by the indie game development scene and some of the incredible ideas that were coming out of that.
That also being said, I was really keen to try a new programming challenge. Something totally outside of my usual purview, and doing game programming seemed like a perfect fit for that, as it sat so far outside my usual environment of building web based business applications. What I didn't realise was how much it was going to push me, both in directions that were new and also into topics that I learnt (far?) in the past and have unfortunately forgotten much about.
One aspect that has totally blown me away is the architectural idea of "Entity Systems" (ES) for game development. (I'm not going to explain it here, there are way too many good articles on this, but if you want to read more, click the link above.) For what is such a relatively new idea in software development (less than 10 years old, if what I believe is correct), it fits so incredibly well into writing some insanely flexible game architectures that I have been rolling it around inside my head, looking for other places it could be applied.
Since the last blog post, I refactored my entire code base into an ES architecture, and that was a fair bit of a mental shift away from a traditional OO model, but the more I use it, the more it just makes an incredible amount of sense. From the screenshot below you can also see I started adding some graphics too, to make it a little nicer to look at.
Some of the basics (and beginnings) for this came from the awesome SpriteLib project, but I also decided to start doing some drawing and try my hand at some of my own pixel art. Most people probably don't know this, but I used to draw a lot. I even spent a year in design school... but to be honest, I was nothing special at it and ended up finding a real passion for programming instead. But doing this game programming stuff has got me sketching out little characters and whatnot again, so it's been a real pleasure to rediscover a passion for drawing that I haven't touched in probably over a decade.
My next step in the game is to do some basic physics, nothing too special, just a player and some jumping to move around the world. Suddenly I'm back in high school doing kinematics and dynamics! I've still got all my old maths and physics notes from high school, so I'm poring over those and also reading some great websites as well, and the concepts and equations are slowly coming back to me.
While doing that though, I started to look at the development cycle of Slick2D and the fact that it seemed like many people (including the original developers themselves) were moving over to libGDX. Comparing Slick2D and libGDX, you can clearly see that Slick2D has a far lower barrier to entry than libGDX. From my review of libGDX, having some knowledge of OpenGL and how 3D graphics work (even on a 2D project) is going to be a huge boon. So now I'm wondering - do I switch to libGDX? It looks more active, more performant, and gives far more options for distribution (desktop, web, android).
Again, I'm back to looking at old code from University - specifically my 3D programming class - and trying to make heads or tails of the OpenGL code I had previously written (in C++ no less). It vaguely makes sense, but there are concepts there I have either partially or totally forgotten (what is a Quad again? And the different projection modes? No idea). Therefore, I'm going to keep playing with OpenGL some more, just to get that basic foundation under my feet, starting with the very low level lwjgl project with JRuby, and just keep going forward one step at a time, and then probably refactor my code again into libGDX once I'm done. While I don't think I have to really know OpenGL to use libGDX, I always feel more confident having the base understanding under my feet. That, and I think it's a direct path to re-learning concepts such as vectors and 3D maths with matrices that really are core to any sort of advanced game programming (yeah, I forgot most of that too).
So overall, it feels like I am going around in circles a bit - but not in a bad way. I'm finding old things that I used to love doing (drawing and math) and rediscovering the joy I had in doing them, and although it's frustrating knowing that you have forgotten the knowledge you wish you still knew, it's a delight to reopen these memories and play with them once again.