Hello DPers,
I'm posting this question here as well, since I know some people who really know their way around this stuff read this forum. Please excuse that the post is in English, but I figure most of you can handle it in English, which saves me the translation. Feel free to post any answers in German.
I'd like to ask you guys a couple of design questions.
We've been working on a mesh deformation tool and got it working on the CPU. Now we'd like to implement the algorithm on the GPU. Unfortunately, we're not very experienced with Cg, shader programming in general, or the use of OpenGL extensions beyond the very basics. So please excuse the lack of technical terms.
Obviously, for mesh deformation we need to change the vertex positions after an interaction. So from what we read on forums we figured we'd go for VBOs. Each vertex should be permanently transformed by adding a displacement vector, i.e. the updated vertex position should be written back to the VBO, so that we can apply another displacement vector next frame.
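From what we pieced together so far, the "write the updated position back to the VBO" part might look roughly like the untested sketch below using transform feedback. We're assuming the GL 3.0 / EXT_transform_feedback entry points (on a GF80 it might have to be the NV_transform_feedback variants), and prog, src, dst, maxVerts and vertCount are just placeholders:

// Rough, untested sketch: ping-pong between two VBOs, read positions from
// vbo[src], let the vertex shader add the displacement, capture into vbo[dst].
// 'prog' is a program whose vertex shader writes the displaced position to
// an out variable called "newPosition".
GLuint vbo[2];
glGenBuffers(2, vbo);
for (int i = 0; i < 2; ++i) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo[i]);
    glBufferData(GL_ARRAY_BUFFER, maxVerts * 3 * sizeof(GLfloat), NULL, GL_DYNAMIC_COPY);
}

// once, before linking: tell GL which vertex shader output to capture
const GLchar* varyings[] = { "newPosition" };
glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);

// every frame: displace the vertices and write them into the other VBO
glUseProgram(prog);
glEnable(GL_RASTERIZER_DISCARD);                        // no pixels needed, we only want the buffer
glBindBuffer(GL_ARRAY_BUFFER, vbo[src]);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);  // attribute 0 = current position
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, vbo[dst]);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, vertCount);                  // each vertex processed exactly once
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);
// swap src and dst; vbo[src] now holds the updated positions for normal rendering

Drawing GL_POINTS during the capture pass should mean every vertex is processed exactly once; for the actual rendering pass we would then bind vbo[src] as a regular vertex attribute. Is transform feedback the right tool here, or would something like render-to-vertex-array via PBO be preferable?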
Some pointers on how to get that done optimally (and whether the sketch above goes in the right direction) would already help us a lot, even though it is probably quite easy to realize. On top of that, we would also like to introduce new geometry. That will likely restrict us to GeForce 8 class hardware (GF80+), but that's fine.
The scheme is as follows:
1. For all vertices of a triangle, compute the displacement.
2. If the displacement distorts / enlarges the triangle by more than a threshold, subdivide the original mesh.
3. Then, for all vertices, including the newly created ones, compute and apply the displacement vector.
Note that computing the displacement vector only requires local information from each single vertex, so nothing fancy here.
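For steps 2 and 3 we currently picture a geometry shader, roughly like the untested sketch below (GLSL 1.50 syntax rather than Cg; displace() and threshold are just placeholders for our own displacement rule). It only does one level of 1:4 subdivision per invocation, so deeper refinement would presumably need several passes:

#version 150
// One level of adaptive 1:4 subdivision per pass (untested sketch).
layout(triangles) in;
layout(triangle_strip, max_vertices = 12) out;   // at most 4 triangles

uniform float threshold;          // placeholder for our distortion threshold

vec3 displace(vec3 p)             // placeholder for our per-vertex displacement
{
    return p;
}

void emitTri(vec3 a, vec3 b, vec3 c)
{
    gl_Position = vec4(displace(a), 1.0); EmitVertex();
    gl_Position = vec4(displace(b), 1.0); EmitVertex();
    gl_Position = vec4(displace(c), 1.0); EmitVertex();
    EndPrimitive();
}

void main()
{
    vec3 a = gl_in[0].gl_Position.xyz;
    vec3 b = gl_in[1].gl_Position.xyz;
    vec3 c = gl_in[2].gl_Position.xyz;

    // crude measure of how much the displacement enlarges the triangle
    float areaBefore = length(cross(b - a, c - a));
    float areaAfter  = length(cross(displace(b) - displace(a), displace(c) - displace(a)));

    if (areaAfter > threshold * areaBefore) {
        // split at the edge midpoints, displacing old and new vertices (step 3)
        vec3 ab = 0.5 * (a + b);
        vec3 bc = 0.5 * (b + c);
        vec3 ca = 0.5 * (c + a);
        emitTri(a, ab, ca);
        emitTri(ab, b, bc);
        emitTri(bc, c, ca);
        emitTri(ab, bc, ca);
    } else {
        emitTri(a, b, c);
    }
}

From what we read, the amplified triangles could then be captured into a buffer with transform feedback so that the subdivision persists across frames, and a GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN query would tell us how many primitives were actually written. Is that correct?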
Our algorithm requires the VBO (or whatever data structure fits best) to grow over time. Is that possible? If not, how badly would copying the data into a resized VBO (or other buffer) every frame affect the real-time capability of the program?
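The naive idea we have (untested sketch; maxVerts would be a generous upper bound chosen up front) is to over-allocate the buffer once and only append to it, instead of re-creating it whenever it grows:

// Allocate once with room to grow, then append instead of resizing.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, maxVerts * 3 * sizeof(GLfloat), NULL, GL_DYNAMIC_DRAW);

// initial upload of the coarse mesh
glBufferSubData(GL_ARRAY_BUFFER, 0, vertCount * 3 * sizeof(GLfloat), positions);

// later, whenever subdivision has produced 'newVerts' additional vertices:
glBufferSubData(GL_ARRAY_BUFFER,
                vertCount * 3 * sizeof(GLfloat),   // append behind the existing data
                newVerts * 3 * sizeof(GLfloat),
                newPositions);
vertCount += newVerts;

Is that a sensible pattern, or is there a better way to let a buffer grow on the GPU without a round trip through system memory?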
The most important issue is performance. We absolutely need this thing to run in real time. If there is a way to speed things up by allocating more memory, that's fine; we have plenty of memory.
To sum things up: we need a plan of action that actually enables us to implement our algorithm on the GPU at the highest possible frame rate.
Key questions for us are:
-What kind of buffers would we have to use?
-What sort of shaders would we have to use?
-Could you point us in the direction of tutorials and/or other documentation to specific parts that we will have to implement?
I'm really looking forward to hearing some recommendations / keywords on how to realize this.
Regards
Jan