New GPU Renderer - Octane

 From:  jbshorty
3233.16 
Wow, looks nice! Good luck with this, Philbo! I'm glad to see Radiance has been working on a commercial product; I had no idea this was under development. It's also really well priced, and perfectly timed with both the arrival of Thea and the questionable future of Hypershot.

My GTX 280 is waiting to test this. One question I have: does this use only HDRI lighting, or are there user-created lights as well?

jonah

EDIT - Just noticed it has emissive material properties, so that answers my question about lighting... :)

EDITED: 11 Jan 2010 by JBSHORTY

 From:  PaQ
3233.17 
Hi Phil,

This looks promising.

Are there any restrictions on the .obj files coming from MoI? I mean, is it possible to use n-gons? Are the vertex normals fully translated? Except for Maya, .obj is a real problem with this n-gon/normals stuff, whether you use 3ds Max, modo, or LightWave ... not to mention the UV limitations (only one channel).

Are you planning to extend the file support (true .3dm import with tessellation on the fly, .lwo, .fbx)?

I'm also really curious to see more complex indirect lighting scenarios, and how long rendering takes to clean up the noise.

As a final note, there is something a little bit strange with the textures in the examples currently provided ... I'm not a technical guy, but they look really blurred/washed out/compressed ... there is this kind of real-time texture feel to them; maybe it's the filtering, I don't know. (I know it's still in alpha.)

 From:  Rudl
3233.18 In reply to 3233.16 
What is a CUDA card? I have a MacBook Pro. Does it have a CUDA card?

Rudl

 From:  Ralf-S
3233.19 

 From:  neo
3233.20 In reply to 3233.18 
>What is a CUDA card? I have a MacBook Pro. Does it have a CUDA card?

I also have a MacBook Pro with a 9600M GT (CUDA-enabled)... BUT that card could only render TEAPOTS :) if you know what I mean.

EDITED: 11 Jan 2010 by NEO

 From:  Samuel Zeller
3233.21 In reply to 3233.20 
Well, I also have a MacBook Pro with a 9600M GT, and it's already faster than a quad core at 2.4 GHz (Q6600), according to this graph:
http://www.refractivesoftware.com/forum/viewtopic.php?p=1150#p1150

That means you can render like 4 times the amount of teapots in the same amount of time!!!

 From:  Rudl
3233.22 
My MacBook Pro is a little bit older and only has the 8600M GT in it.

Is it worth giving this renderer a try?

Rudl

 From:  ed (EDDYF)
3233.23 
.

EDITED: 12 Mar 2010 by EDDYF

 From:  Phr0stByte
3233.24 
I just purchased my CUDA-enabled card and will be purchasing Octane as soon as they put up the link.

On a side note, maybe MoI could get CUDA enabled? This might allow us poor Linux users to enjoy MoI in a VM, or better yet, natively. I am sure it would be much easier than porting to OpenGL..? I purchased MoI v1.0 way back and honestly just threw my money away, as I have only used it a couple of times at work (when I was supposed to be working - I don't run Windows at home).

 From:  PaQ
3233.25 
An Octane viewport inside MoI would be terrific ... (sorry, just thinking out loud)

... I'm half sold (well, completely in fact; at 100 Euro I can't go wrong) ... it's just that this memory limitation is a bit annoying. Today's 'render' stations easily have 8 or 12 GB of RAM (I'm on 16 GB here, and I already feel limited sometimes). I'm not really sure NVIDIA will put that much memory on a gaming card any time soon ... the biggest cards available have 'only' 2 GB (and that's already a waste of memory for current games' requirements).

 From:  Samuel Zeller
3233.26 In reply to 3233.25 
PaQ, it will support multiple cards, so 4 cards at 1 GB will be like one card at 4 GB.
Also, 2 GB cards are coming soon (at mainstream prices).

 From:  PaQ
3233.27 In reply to 3233.26 
Yes, I know about the 2 GB cards ... but having 4 of them doesn't give you 8 GB, as the cards can't share memory ... one copy of the scene data will be stored per GPU to speed up the rendering. (I already asked on the Octane forum ;))
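
To make that concrete, here's a minimal CUDA sketch (illustrative only, not Octane's actual code; the 512 MB figure is made up) of why VRAM doesn't pool: each device is its own address space, so every card has to hold its own full copy of the scene, and the usable scene size is bounded by the smallest card, not the sum.

    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int deviceCount = 0;
        cudaGetDeviceCount(&deviceCount);

        // Pretend the scene needs 512 MB (hypothetical figure).
        const size_t sceneBytes = 512u * 1024u * 1024u;
        for (int dev = 0; dev < deviceCount; ++dev) {
            cudaSetDevice(dev);            // switch to this GPU
            void* d_scene = nullptr;
            // Each device allocates the SAME scene again; device memories
            // are separate address spaces and cannot be combined.
            if (cudaMalloc(&d_scene, sceneBytes) != cudaSuccess) {
                printf("Device %d: scene does not fit in VRAM\n", dev);
                continue;
            }
            printf("Device %d: holds its own full scene copy\n", dev);
            cudaFree(d_scene);
        }
        return 0;
    }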

 From:  BurrMan
3233.28 In reply to 3233.25 
>>>>An Octane viewport inside MoI would be terrific ...

Should be the other way around. :o

 From:  Michael Gibson
3233.29 In reply to 3233.24 
Hi Phr0stByte,

> On a side note, maybe MoI could get CUDA enabled?

What part of MoI would you expect to use CUDA?

I'm not sure if CUDA is what you are expecting - it's a mechanism for giving programs access to the resources of your GPU for certain kinds of calculations.

It's most useful for work that can be broken up into something like a million little tasks that can all run in parallel (similar to rendering).
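
To give a feel for that, here's a tiny, self-contained CUDA example (purely illustrative; it is not from MoI or Octane): a million independent per-pixel tasks, one thread each, which is exactly the shape of work the GPU rewards.

    #include <cuda_runtime.h>
    #include <cstdio>

    // Each thread brightens one pixel; no thread depends on any other.
    __global__ void brighten(float* pixels, int n, float gain) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) pixels[i] *= gain;
    }

    int main() {
        const int n = 1 << 20;                  // ~1 million pixels
        float* d_pixels = nullptr;
        cudaMalloc(&d_pixels, n * sizeof(float));
        cudaMemset(d_pixels, 0, n * sizeof(float));

        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        brighten<<<blocks, threads>>>(d_pixels, n, 1.5f); // all tasks launched in parallel
        cudaDeviceSynchronize();

        cudaFree(d_pixels);
        printf("launched %d blocks x %d threads\n", blocks, threads);
        return 0;
    }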

It is not generally very feasible to use CUDA for the regular real-time viewport display of a modeler, which is what you seem to be thinking of? I mean, it is possible, but it would mean writing a lot of custom code just to reproduce what is already set up in Direct3D or OpenGL.


> I am sure it would be much easier than porting to OpenGL..?

Nope - it would be more like 100 times more work than porting to OpenGL, because I'd basically be trying to rewrite what OpenGL does.

- Michael

 From:  Michael Gibson
3233.30 In reply to 3233.24 
Hi Phr0stByte, also you wrote:

> This may allow us poor linux users to enjoy MoI in a VM

Actually, this may already be possible - there are several Mac users who run MoI in a VM on Mac OS X using Parallels or VMware.

I think VMware is also available for Linux, so you could try that.


> or better yet, natively

Actually, you can already do that as well, at least for basic stuff, by using WINE.

You do have to set up IE6 or IE7 under WINE though; look for something like "winetricks" to find out how to get IE installed, and then you can actually launch MoI, running its code directly in WINE. Not everything works (a couple of little menus don't show up), but for the most part it appears to be OK:
http://appdb.winehq.org/objectManager.php?sClass=version&iId=7383

- Michael

 From:  Samuel Zeller
3233.31 In reply to 3233.30 
PaQ >>>> but having 4 of them doesn't give you 8 GB, as the cards can't share memory

OK. That's bad, but not so much, because 4 GB cards are coming soon :)

 From:  anthony
3233.32 
Michael, could MoI use CUDA to speed up slow operations -- like booleans?

 From:  Brian (BWTR)
3233.33 In reply to 3233.29 
3D-Coat has an option for CUDA, combined with DX, operating in both 32- and 64-bit versions.
The speed-up in calculation is excellent.

Brian

 From:  Michael Gibson
3233.34 In reply to 3233.32 
Hi anthony,

> Michael, could MoI use CUDA to speed up slow
> operations -- like booleans?

No, it's not really feasible to do that - the things that are suited for CUDA are more like when you have a very large quantity of relatively simple individual tasks.

Booleans don't particularly fit into that category - although it would be theoretically possible (with quite a lot of work to rewrite some chunks of the geometry library) to separate booleans into some list of individual tasks, each of those tasks is fairly complex and involves the intersection between 2 NURBS surfaces.

It's not easy to make more complex individual functions run on the GPU, and it is also not very easy to take existing program code and automatically turn it into CUDA; the GPU is a fairly different processing environment than the CPU, and you cannot just push a button to automatically run the same kind of process on it.

See here for some information: http://en.wikipedia.org/wiki/CUDA#Limitations

It wasn't too many years ago that shader programs on the GPU did not even have branching (if/then) instructions available to them...
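
As an aside on that: the old workaround for the missing flow control was arithmetic selection - compute both outcomes and blend by a 0-or-1 mask. A tiny sketch of the trick (illustrative only), written as a CUDA kernel for consistency with the examples above:

    #include <cuda_runtime.h>

    // Branch-free "if (x < 0) x = 0", the way early shader models had to
    // express a conditional when no flow-control instructions existed.
    __global__ void clampToZero(float* v, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;               // modern GPUs do allow real branches
        float x = v[i];
        float mask = (float)(x > 0.0f);   // 1.0 when positive, 0.0 otherwise
        v[i] = mask * x;                  // compute, then blend by mask: no branch
    }

    int main() {
        const int n = 1 << 16;
        float* d_v = nullptr;
        cudaMalloc(&d_v, n * sizeof(float));
        cudaMemset(d_v, 0, n * sizeof(float));
        clampToZero<<<(n + 255) / 256, 256>>>(d_v, n);
        cudaDeviceSynchronize();
        cudaFree(d_v);
        return 0;
    }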

I can't really think of any particular area of MoI that fits the massively parallel processing of simple individual calculations that CUDA does well. In general, the things that MoI does have more potential for multi-core CPU processing, and there has already been a big step forward with that in MoI v2, with the mesh generation at export making use of multiple CPU cores.
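
As a rough illustration of that CPU-side pattern (a hypothetical sketch, not MoI's actual code; all names here are made up): each surface tessellates independently, so a list of surfaces can simply be divided up across cores.

    #include <algorithm>
    #include <atomic>
    #include <thread>
    #include <vector>

    struct Surface { /* NURBS data elided */ };
    void tessellate(Surface& s) { /* per-surface meshing elided */ }

    void meshAllSurfaces(std::vector<Surface>& surfaces) {
        std::atomic<size_t> next{0};
        auto worker = [&] {
            // Each core grabs the next un-meshed surface until none remain.
            for (size_t i = next++; i < surfaces.size(); i = next++)
                tessellate(surfaces[i]);
        };
        unsigned cores = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::thread> pool;
        for (unsigned c = 0; c < cores; ++c) pool.emplace_back(worker);
        for (auto& t : pool) t.join();
    }

    int main() {
        std::vector<Surface> surfaces(100);
        meshAllSurfaces(surfaces);
        return 0;
    }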

- Michael

 From:  Michael Gibson
3233.35 In reply to 3233.33 
Hi Brian, re: 3D-Coat - yup, the kind of data that 3D-Coat works on, with a huge number of simple individual voxel elements, is well suited for CUDA.
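
To show why voxels map so cleanly to the GPU, here's a sketch (illustrative only, not 3D-Coat's actual code): a dense grid yields one tiny, uniform task per cell.

    #include <cuda_runtime.h>

    // One thread per voxel: threshold a density grid into solid/empty.
    __global__ void threshold(float* voxels, int n, float iso) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            voxels[i] = (voxels[i] >= iso) ? 1.0f : 0.0f;
    }

    int main() {
        const int n = 256 * 256 * 256;   // ~16.7 million voxels
        float* d_voxels = nullptr;
        cudaMalloc(&d_voxels, n * sizeof(float));
        cudaMemset(d_voxels, 0, n * sizeof(float));
        threshold<<<(n + 255) / 256, 256>>>(d_voxels, n, 0.5f);
        cudaDeviceSynchronize();
        cudaFree(d_voxels);
        return 0;
    }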

That is a very different kind of geometry from the NURBS surface data that MoI uses, though.

- Michael