It was a long rainy Sunday here in London. To keep my spirits up I decided to do something a bit more exciting than rewriting the rendering system for the 10th time :D.
If you're reading this article for the 2nd time and had a chance to see the earlier examples, I have to admit I got overexcited and got my numbers wrong.. sorry. It doesn't change the fact that 100 000 squares at 60fps is doable, as is 4 200 000 triangles, although in that case my machine drops a bit lower: 45 of 60 fps. And yes, that is when we're hitting the limits of Molehill3D performance itself.
For our 2D framework there is obviously no need for that many triangles on screen anyway. In fact, for mobile development I am sure you could get an app running with just tens of them, or at most a few hundred.
A bit more about my approach and how this is all done: no formal technical descriptions, just my thoughts.
First of all, there are plenty of resources available on the web where you can learn the basics.
bytearray.org would be a good starting point.
There is also a well-written article by jam3.
Obviously you can find a lot of examples that you're probably well aware of already. I was following almost all of them from the very beginning of the public alpha of Molehill3D.
Despite the fact that all this material is supposed to explain the basics, I found myself digging into the examples much deeper because it failed to do so. A lot of it was confusing and left many questions open.
What was the biggest thing bothering me? My first impression: I saw the “Digging more into the Molehill APIs” article describing this low-level API that lets you operate on raw data and colours, as well as textures, like any fully fledged 3D engine would. With 2D content in mind, I was hoping to investigate the first part of it the most.
Then all the examples I saw on the web afterwards were actually based on textures only! No bloody clue how to display single-colour rectangles on the screen. Secondly, when the first Molehill2D frameworks arrived it was clear something was about to happen in this area, but I never had time to dig into it deeper. In fact those guys are also showing off texture-based stuff only. So now I could take a closer look.
1. Setting up geometry
Thibault Imbert's article explained a lot of things and was the ONLY source you could actually get something decent from. The AGAL part wasn't so obvious at first (and it is still a bit blurry for me), but the AS3 part of it makes perfect sense. It was the missing link between how the basic work-flow and procedures are supposed to be done and how you should prepare your data. In the end I settled on a single buffer, pushing vertices and indices into 2 Vector arrays and then reusing them. This was my base:
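A minimal sketch of what such a base could look like, following the x, y, z, r, g, b per-vertex layout from Thibault's article (the variable names are my own):

```actionscript
// One vertex per line: x, y, z, then r, g, b
var vertices:Vector.<Number> = Vector.<Number>([
    -0.5,  0.5, 0,   1, 0, 0,   // top-left, red
     0.5,  0.5, 0,   0, 1, 0,   // top-right, green
     0.5, -0.5, 0,   0, 0, 1    // bottom-right, blue
]);

// Three indices into the Vector above form one triangle
var indices:Vector.<uint> = Vector.<uint>([0, 1, 2]);
```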
So you can see each line actually represents one vertex's properties, and the indices describe how vertices form a triangle.
This told me the actual colour representation is a value between 0 and 1. A bit different from what we are used to, but Thibault already gave us a clue on how to convert from the hex values we know. All I wanted was a square, so my version looked like this:
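A sketch of the square, with the dark red 0x660000 converted to channel values (0x66 / 0xFF = 0.4):

```actionscript
// 4 points of a square, all sharing the same dark red colour
var vertices:Vector.<Number> = Vector.<Number>([
    -0.5,  0.5, 0,   0.4, 0, 0,   // top-left
     0.5,  0.5, 0,   0.4, 0, 0,   // top-right
     0.5, -0.5, 0,   0.4, 0, 0,   // bottom-right
    -0.5, -0.5, 0,   0.4, 0, 0    // bottom-left
]);
```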
As you would expect, this defines 4 points with exactly the same colour, which should be dark red. Now to specify the indices:
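Something along these lines, with two triangles sharing two of the square's corners:

```actionscript
var indices:Vector.<uint> = Vector.<uint>([
    0, 1, 2,   // first triangle: top-left, top-right, bottom-right
    0, 2, 3    // second triangle: top-left, bottom-right, bottom-left
]);
```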
The renderer needs this data in order to draw the triangles. But how does it know that the coordinates take 3 values and the colour is defined in the next 3 values of each vertex?
This is also straightforward: it is as simple as specifying the rows and columns of our data matrix.
Then we upload it all, specifying a start index as well as an offset (the number of points (vertices) you have in this matrix):
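Assuming a Context3D instance called `context3D` is already available, the vertex upload could look like this:

```actionscript
// 4 vertices (the "rows"), 6 Numbers per vertex (the "columns")
var vertexBuffer:VertexBuffer3D = context3D.createVertexBuffer(4, 6);

// upload starting at vertex 0, 4 vertices in total
vertexBuffer.uploadFromVector(vertices, 0, 4);
```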
A very similar process with the indices:
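A sketch of the same idea for the index data:

```actionscript
// 6 indices in total (2 triangles * 3 corners)
var indexBuffer:IndexBuffer3D = context3D.createIndexBuffer(6);
indexBuffer.uploadFromVector(indices, 0, 6);
```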
The syntax might be new, but the whole principle is very similar to what you already know from the Drawing API and its drawTriangles method. The biggest difference here is the vertex buffer layout, which you can specify on your own.
There are no real set rules; you have full freedom. Use raw data, little matrices or multi-dimensional arrays if you like; AGAL is magic and can perform certain GPU-accelerated procedures on whatever you feed it. This is your joining point, and since you are free to specify this bit, I like to make it custom for my needs.
2D and alpha are what concern me mainly. I don't need to mess with the Z property, and I'd like to add transparency. Now my format looks like this:
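A sketch of the same square in the new layout, with z dropped and an alpha channel appended:

```actionscript
// New per-vertex layout: x, y, r, g, b, a
var vertices:Vector.<Number> = Vector.<Number>([
    -0.5,  0.5,   0.4, 0, 0, 1,
     0.5,  0.5,   0.4, 0, 0, 1,
     0.5, -0.5,   0.4, 0, 0, 1,
    -0.5, -0.5,   0.4, 0, 0, 1
]);
```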
So far so good.
Now let's try to render something on the screen (I'm assuming you already know the basic procedures of Molehill after reading the above articles):
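A bare-bones draw pass might look like this, assuming the context, shader program and buffers are already set up as above:

```actionscript
context3D.clear(1, 1, 1, 1);         // white background
context3D.setProgram(program);       // the AGAL program uploaded earlier
context3D.drawTriangles(indexBuffer);
context3D.present();
```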
So it’s obvious that coordinate systems are a bit different and always start on the middle of your viewport. But the first question is: how do I specify width and height of a square? It’s also made something obvious that the values in our array 0,1 are completely abstract, but what do they actually do?
Quickly changing my array to this:
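Something like this, with the coordinates pushed out to 1 and the first two vertex colours changed (the exact colours are illustrative):

```actionscript
var vertices:Vector.<Number> = Vector.<Number>([
    -1,  1,   1, 1, 0, 1,     // top-left, now yellow
     1,  1,   0, 1, 1, 1,     // top-right, now cyan
     1, -1,   0.4, 0, 0, 1,   // bottom-right, still dark red
    -1, -1,   0.4, 0, 0, 1    // bottom-left, still dark red
]);
```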
0.5 means half the distance from the centre point; 1 means the full distance from the centre point.
Now my square covers the entire screen. And because I've changed the colours of the first 2 vertices, I see a gradient across the whole screen.
2. Setting up a 2D environment
If I make a square 10 times smaller than the one above, it tells me there is a scale hidden in these values! But hold on. Looking at the implementation of many 2D examples, there is a matrix (orthoMatrix) transformation going on to translate all these weird calculations into a 2D environment.
Looking at this simple example, it is obvious that we can specify vertices and their corresponding scale factors. This scale is nothing but a link to the width and height of your viewport, and in fact it doesn't need to be square at all! The values correspond to the scale of the viewport, and the only matrix you need is the one that renders everything: matrix3D. That tells me there is only one thing I need to do: negate this scale.
Now I can go back to my vertices and try to deal with normal pixel values like this:
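A sketch of the idea, assuming an 800x600 viewport: clip space spans 2 units in each axis, so one pixel is 2/width horizontally and 2/height vertically.

```actionscript
// one pixel expressed in clip-space units
var sx:Number = 2 / 800;
var sy:Number = 2 / 600;

// a 20x20px dark red square anchored at the centre of the screen
var vertices:Vector.<Number> = Vector.<Number>([
    0,       0,        0.4, 0, 0, 1,
    20 * sx, 0,        0.4, 0, 0, 1,
    20 * sx, -20 * sy, 0.4, 0, 0, 1,
    0,       -20 * sy, 0.4, 0, 0, 1
]);
```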
And that recipe will draw a 20x20px square. No crazy additional matrices! Because we're ignoring the z value, our rendering procedure now looks as follows.
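A sketch of the 2D version of the draw pass, with the two streams mapped onto the x, y, r, g, b, a layout:

```actionscript
context3D.clear(1, 1, 1, 1);
context3D.setProgram(program);
// stream 0: start at offset 0, read 2 values -> x, y
context3D.setVertexBufferAt(0, vertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_2);
// stream 1: start at offset 2, read 4 values -> r, g, b, a
context3D.setVertexBufferAt(1, vertexBuffer, 2, Context3DVertexBufferFormat.FLOAT_4);
context3D.drawTriangles(indexBuffer);
context3D.present();
```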
This is as simple as it gets for drawing 2D stuff using 3D geometry. The magic happens in those 2 setVertexBufferAt commands. In human-readable language they mean: set stream 0 with the vertex pattern I provided, start reading from 0 and use FLOAT_2, which basically means read the next 2 values: X and Y only in this case. The next one says: set stream 1 from the same pattern, but start from 2 and read the next 4 values.
The drawTriangles method just needs to know where to pick up the points and how many points there are in your pattern, that's all. And since our pattern operates on plain pixel values, moving the square around is very easy:
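For example, shifting the square and re-uploading before the next draw, using the pixel scale factors from the sketch above:

```actionscript
// move the square 5px to the right by shifting its x values
var dx:Number = 5 * sx;
for (var i:int = 0; i < 4; i++) {
    vertices[i * 6] += dx; // x sits at the start of each 6-value vertex
}
vertexBuffer.uploadFromVector(vertices, 0, 4);
```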
Simple, isn't it? Running it all, you notice one more thing: my custom 6th value (which is alpha) is not working at all. Don't mess around with AGAL and the shader settings; they are actually working perfectly. You already told the renderer to read all 4 colour values. You just didn't tell Molehill how to deal with them, given that everything is happening on one flat layer in 2D space.
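What makes transparency actually show up is setting the blend factors on the context, so Molehill knows how to combine new pixels with what is already on screen:

```actionscript
// classic alpha blending: result = src * srcAlpha + dst * (1 - srcAlpha)
context3D.setBlendFactors(Context3DBlendFactor.SOURCE_ALPHA,
                          Context3DBlendFactor.ONE_MINUS_SOURCE_ALPHA);
```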
Here is my complete code:
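As a condensed sketch of how all the pieces above fit together (assuming an 800x600 stage, Adobe's AGALMiniAssembler helper from the official samples, and that FLOAT_2 leaves the unread z and w attribute components at their defaults):

```actionscript
package {
    import com.adobe.utils.AGALMiniAssembler;
    import flash.display.Sprite;
    import flash.display3D.*;
    import flash.events.Event;

    public class Square2D extends Sprite {
        private var context3D:Context3D;
        private var vertexBuffer:VertexBuffer3D;
        private var indexBuffer:IndexBuffer3D;
        private var program:Program3D;

        public function Square2D() {
            stage.stage3Ds[0].addEventListener(Event.CONTEXT3D_CREATE, onContextCreate);
            stage.stage3Ds[0].requestContext3D();
        }

        private function onContextCreate(e:Event):void {
            context3D = stage.stage3Ds[0].context3D;
            context3D.configureBackBuffer(800, 600, 4, false); // AA = 4, no depth buffer

            // one pixel expressed in clip-space units
            var sx:Number = 2 / 800;
            var sy:Number = 2 / 600;

            // x, y, r, g, b, a: a 20x20px half-transparent dark red square
            var vertices:Vector.<Number> = Vector.<Number>([
                0,       0,        0.4, 0, 0, 0.5,
                20 * sx, 0,        0.4, 0, 0, 0.5,
                20 * sx, -20 * sy, 0.4, 0, 0, 0.5,
                0,       -20 * sy, 0.4, 0, 0, 0.5
            ]);
            var indices:Vector.<uint> = Vector.<uint>([0, 1, 2, 0, 2, 3]);

            vertexBuffer = context3D.createVertexBuffer(4, 6);
            vertexBuffer.uploadFromVector(vertices, 0, 4);
            indexBuffer = context3D.createIndexBuffer(6);
            indexBuffer.uploadFromVector(indices, 0, 6);

            // vertex shader: pass position through, hand the colour to the fragment shader
            var vs:AGALMiniAssembler = new AGALMiniAssembler();
            vs.assemble(Context3DProgramType.VERTEX, "mov op, va0\nmov v0, va1");
            // fragment shader: output the interpolated colour
            var fs:AGALMiniAssembler = new AGALMiniAssembler();
            fs.assemble(Context3DProgramType.FRAGMENT, "mov oc, v0");
            program = context3D.createProgram();
            program.upload(vs.agalcode, fs.agalcode);

            addEventListener(Event.ENTER_FRAME, render);
        }

        private function render(e:Event):void {
            context3D.clear(1, 1, 1, 1);
            context3D.setProgram(program);
            context3D.setBlendFactors(Context3DBlendFactor.SOURCE_ALPHA,
                                      Context3DBlendFactor.ONE_MINUS_SOURCE_ALPHA);
            context3D.setVertexBufferAt(0, vertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_2);
            context3D.setVertexBufferAt(1, vertexBuffer, 2, Context3DVertexBufferFormat.FLOAT_4);
            context3D.drawTriangles(indexBuffer);
            context3D.present();
        }
    }
}
```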
You might notice there are more loops going on. This is because a single buffer has its own limitations; to draw more, you need multiple buffers.
It is good practice to use different buffers for different purposes, for example one that deals with alpha objects and another with solid shapes. Then you don't need to run a depth test (setDepthTest), which increases performance.
Even more, solid-colour and textured objects can also be divided. It's a whole new area of exploration for me, and if you feel I'm mistaken somewhere here, please correct me. But it looks like the current approach taken by 2D Molehill enthusiasts is not the best. If you think purely in 3D space, you'll have difficulties understanding how to represent 2D stuff in it.
So don’t mess with heavy Math. Just put a few yellow stickers on your flat screen, don’t move your head or spin around and then things become obvious. Where there is 3D there is 2D already 😉
If GPUs have been designed to perform the heaviest 3D operations, they can perform 2D ones even better. The only difference is that you, as a 2D content creator, have to translate your drawings into a world made up of triangles!
Here are the results of my tests, based on the above learning curve:
Platform : CPU – PC Quad Core AMD Phenom II 955, GPU – ATI Radeon HD 5800 Series, OS – Windows 7
100 000 transparent colourful squares floating around (200 000 triangles)!
File size: 10.2kb
RAM: initial peak up to 10MB, then STABLE at 8.6MB
Calculation time: 0ms
Processor consumption: 2% (maybe Winamp is on in the background 😉)
Now, if this is not mind-blowing enough, think about a test scenario where you need to cover the entire screen, pixel by pixel, with 1px coloured and transparent squares made up of 2 triangles each.
Let's take the Full HD spec, 1080 * 1920 * 2, and we end up with 4 147 200 triangles. Here you go!
4 200 000 triangles!
File size: 10.2kb
RAM: initial peak up to 10MB, then STABLE at 8.1MB
Calculation time: 0ms (sometimes it tries to show me something but…)
Processor consumption: 7%.
You almost get the feeling that Molehill is limitless! In both cases AA is set to 4 to get decent quality. Obviously this ridiculous number of triangles would never appear in a final product due to the limitations of the language's speed, processing and execution. Also, who needs to cover every single pixel with 2 triangles 😀 (maybe some clients). Much more needs to happen, but you can see that what is now the biggest drawback of the Flash Player renderer will soon become its strongest point, the most powerful technology available on the web.
The way Molehill processes and renders data is 100% compatible with the current state of my Flaemo Framework's custom display list. In fact I need to get rid of the matrices, since this bit is also better accelerated on the GPU. Molehill leaves me with huge hope for the future.
Now all I need to do is integrate this stuff with the custom display list.