New GPU Renderer - Octane

From: Michael Gibson
3233.34 In reply to 3233.32 
Hi anthony,

> Michael, could MoI use CUDA to speed up slow
> operations -- like booleans?

No, it's not really feasible to do that. The things that are suited for CUDA are cases where you have a very large quantity of relatively simple individual tasks.
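To illustrate that kind of workload (a generic sketch, not anything from MoI itself): a job that maps well to the GPU is one where the same trivial operation is applied independently to every element of a huge array, e.g. scaling every vertex coordinate in a mesh. On a GPU each element would get its own thread; on the CPU it's just a loop, but the key property is the same - each iteration is tiny and independent of the others:

```cpp
#include <cstddef>
#include <vector>

// A "massively parallel" style job: scale every vertex coordinate.
// Each iteration is a trivial, fully independent task, which is
// exactly the shape of work that CUDA is designed for.
void scaleVertices(std::vector<float>& coords, float s)
{
    for (std::size_t i = 0; i < coords.size(); ++i)
        coords[i] *= s;   // one simple, independent task per element
}
```

A surface/surface intersection inside a boolean is the opposite shape: one big, complex, branchy task rather than millions of tiny identical ones.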

Booleans don't particularly fit into that category. Although it would be theoretically possible (with quite a lot of work rewriting some chunks of the geometry library) to separate a boolean into a list of individual tasks, each of those tasks is fairly complex and involves the intersection between 2 NURBS surfaces.

It's not easy to make more complex individual functions run on the GPU, and it is also not easy to take existing program code and automatically turn it into CUDA. The GPU is a fairly different processing environment from the CPU - you cannot just push a button and automatically run the same kind of process on it.

See here for some information: http://en.wikipedia.org/wiki/CUDA#Limitations

It wasn't even that many years ago that shader programs on the GPU did not have any branching instructions (like if/then) available to them at all...

I can't really think of any particular area of MoI that would fit the massively parallel processing of simple individual calculations that works well with CUDA. In general, the things that MoI does have more potential for multi-core CPU processing, and there has already been a big step forward with that in MoI v2, with the mesh generation at export making use of multiple CPU cores.
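As a rough sketch of what that kind of multi-core split looks like (hypothetical code, not MoI's actual implementation - `meshFace` here is just a placeholder): the faces of a model can be divided into chunks, with each CPU thread meshing its own chunk independently, so no locking is needed on the results:

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Hypothetical stand-in for the work of meshing one face.
int meshFace(int faceId) { return faceId * faceId; }  // placeholder work

// Divide the faces across the available hardware threads; each
// worker writes only to its own contiguous slice of the results,
// so the threads never contend with each other.
std::vector<int> meshAllFaces(int faceCount)
{
    unsigned nThreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<int> result(faceCount);
    std::vector<std::thread> workers;
    int chunk = (faceCount + (int)nThreads - 1) / (int)nThreads;

    for (unsigned t = 0; t < nThreads; ++t)
    {
        int begin = (int)t * chunk;
        int end = std::min(faceCount, begin + chunk);
        if (begin >= end)
            break;
        workers.emplace_back([&result, begin, end]() {
            for (int f = begin; f < end; ++f)
                result[f] = meshFace(f);
        });
    }
    for (auto& w : workers)
        w.join();
    return result;
}
```

This coarse-grained style of parallelism (a handful of complex tasks, one per core) fits CPU threads well, whereas CUDA wants thousands of very simple ones.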

- Michael