Hi Cindy,
> I hope the 'final' Version will be a bit cheaper than Rhino :)
Yes, as Pilou mentions version 1.0 will be right around 20% of the price of Rhino, so it certainly fits this aspect.
But MoI just isn't focused on restricting the mesh output to fit this particular type of constraint; it's focused on general-purpose, arbitrary geometry. I'm also trying pretty hard not to add too many new things right now, I'm really trying to wrap things up for the 1.0 release...
> What can i do ? I wished there was a 'Magic Algorithm' to re-build the Mesh
> at a certain 'Size'.
> Or use Texture Baking, too ?
Hmmm, I think I may be able to give you this "Magic Algorithm".
It would be difficult to re-build just any mesh to a certain MxN grid size, but there is one extra piece of information that MoI saves out to .obj files which I think you can leverage to do it more easily: UV coordinates at each mesh vertex.
When MoI generates a mesh for a surface, it will create 3D x,y,z point locations for each polygon vertex, but it will also create 2D u,v texture coordinates for each polygon vertex (these are the vt entries in the .obj file). Those u,v coordinates come from the NURBS surface.
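Just to make that concrete, here's a rough Python sketch of reading those entries back out of the .obj file. It assumes the file was saved with Triangles only (as mentioned further down) and that faces are written as "f v/vt v/vt v/vt" records; the function and variable names are just for illustration:

```python
# Minimal .obj reader for this purpose: collects 3D vertex positions ('v'),
# UV texture coordinates ('vt'), and triangle faces ('f') that reference both.
# Assumes faces are already triangles written as "f v1/vt1 v2/vt2 v3/vt3".
def read_obj(path):
    positions = []   # list of (x, y, z)
    uvs = []         # list of (u, v)
    triangles = []   # one entry per triangle: three (position index, uv index) pairs
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == 'v':
                positions.append(tuple(float(c) for c in parts[1:4]))
            elif parts[0] == 'vt':
                uvs.append(tuple(float(c) for c in parts[1:3]))
            elif parts[0] == 'f':
                # .obj indices are 1-based; keep (position index, uv index) per corner
                corners = []
                for p in parts[1:4]:
                    v_idx, vt_idx = p.split('/')[:2]
                    corners.append((int(v_idx) - 1, int(vt_idx) - 1))
                triangles.append(tuple(corners))
    return positions, uvs, triangles
```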
So say you want to create a 16x16 node grid. What you'll do is create a UV coordinate for each of these nodes within the UV space, which in this case is between 0.0 and 1.0. So for instance with a step of 1/15 you would get 2D coordinates at 0, 1/15, 2/15, 3/15, .... through to 1.0, which gives you 16 evenly spaced values along each direction.
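A tiny sketch of generating that grid of UV sample points, assuming the UV range runs from 0.0 to 1.0 in both directions:

```python
# Build an N x N grid of UV sample points over the 0..1 UV square.
# The step of 1/(N-1) gives N evenly spaced values along each direction.
def make_uv_grid(n):
    step = 1.0 / (n - 1)
    return [(i * step, j * step) for j in range(n) for i in range(n)]

grid_uvs = make_uv_grid(16)   # 16 x 16 = 256 sample points
```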
So you now have a UV coordinate for each point. You now want to convert this into a 3D coordinate.
To do the UV->3D conversion, you're going to find which triangle contains this point. Each triangle vertex has a UV coordinate (those vt points in the .obj file) in addition to the 3D coordinates. For the moment you just consider the triangle to be a 2D triangle using the UV vertices. Test each triangle to see if your UV point is inside of it, until you find the triangle that contains it.
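One common way to do that containment test is with barycentric coordinates themselves, since you will want those weights in the next step anyway. Something along these lines (the helper names are just made up for this example):

```python
# Compute the barycentric coordinates of UV point p with respect to the
# triangle's UV corners a, b, c. If all three weights are >= 0 the point
# is inside (or on an edge of) that 2D triangle.
def barycentric_2d(p, a, b, c):
    denom = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    if denom == 0.0:
        return None   # degenerate (zero-area) UV triangle
    w_a = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / denom
    w_b = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / denom
    w_c = 1.0 - w_a - w_b
    return (w_a, w_b, w_c)

def point_in_uv_triangle(p, a, b, c, eps=1e-9):
    w = barycentric_2d(p, a, b, c)
    return w is not None and all(wi >= -eps for wi in w)
```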
If there is no triangle that contains it, it means you had a trimmed surface. I should mention that this algorithm only works under certain conditions: your object should be made up of just one single surface, not multiple surfaces (a cube normally has 6 surfaces in it, for instance, so this won't work on a regular cube; you'd have to create a special cube made up of just one surface instead). Also, the surface should not be trimmed, meaning it should not have been run through booleans or those types of processes. That's because you want the UV space to be completely covered by the triangulation, which is only the case for untrimmed surfaces. One more constraint: to make things a bit easier to process you probably want the .obj file to be saved as Triangles only, instead of using N-Gons.
Anyway, once you find which triangle contains your UV point, you want to calculate the barycentric coordinates of your UV point. The barycentric coordinates basically will express the point as proportional weighting or averaging between the 3 triangle points. Once you know these proportions, you can then apply this proportional blending to the 3D points of the triangle to generate an equivalent 3D point inside the 3D version of the triangle. That's your 3D point for that node.
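That blending step could look roughly like this, reusing the same weights you just computed from the UV triangle:

```python
# Apply the barycentric weights (w_a, w_b, w_c) from the UV triangle to the
# triangle's 3D corner points to get the 3D point for this grid node.
def interpolate_3d(weights, pa, pb, pc):
    w_a, w_b, w_c = weights
    return tuple(w_a * pa[i] + w_b * pb[i] + w_c * pc[i] for i in range(3))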
This will basically sample some points on existing triangles to create your regular grid. Of course if you don't use very many points it will be pretty rough and jagged in many spots...
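Putting the pieces together, a rough outline of the whole pass might look like this; it just assumes the helper sketches from above (read_obj, make_uv_grid, barycentric_2d, point_in_uv_triangle, interpolate_3d):

```python
# For each grid UV, scan the triangles until one contains the point,
# then blend that triangle's 3D corners with the barycentric weights.
def resample_grid(obj_path, n):
    positions, uvs, triangles = read_obj(obj_path)
    grid_points = []
    for p in make_uv_grid(n):
        hit = None
        for (ia, ta), (ib, tb), (ic, tc) in triangles:
            a, b, c = uvs[ta], uvs[tb], uvs[tc]
            if point_in_uv_triangle(p, a, b, c):
                w = barycentric_2d(p, a, b, c)
                hit = interpolate_3d(w, positions[ia], positions[ib], positions[ic])
                break
        # hit stays None if no triangle contains the UV point (trimmed surface)
        grid_points.append(hit)
    return grid_points
```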
Also, I don't think you want the .obj file that you're processing to have a very low polygon count, because that will tend to increase sampling error. You want the base mesh to be at least normal density, or maybe actually a fair bit higher density; that will make your final re-sample closer to the original surface. I mention this because it might be tempting to try to aid the final reduction by starting with a coarse mesh, but you would really just be adding additional error that way.
Anyway I think it would work. Let me know if you need any more clarification on the algorithm or more details on any of the steps.
- Michael