Grasshopper

algorithmic modeling for Rhino

Hi guys, I'm testing Rhinoceros with Grasshopper on the latest (mid 2014) MacBook Pro 15 Retina, with a 2.5 GHz quad-core i7, 16 GB RAM, a 512 GB SSD and an Nvidia GT 750M 2 GB, running Windows 8.1 under Boot Camp, and I have some issues with performance; at least, I was expecting more. The reason is that I was previously using an Intel Core Duo at 2.53 GHz with 4 GB, and I didn't notice a considerable increase in performance. What do you guys think about that? Could it be that the machine is losing power because I'm using Boot Camp? Or is it that Grasshopper doesn't support multiple cores (or does it already support that)?

Thanks in advance.


Replies to This Discussion

Interesting topic:
I have an old Core2 Quad Q6600 at 2.4 GHz on a motherboard with 8 GB RAM, and an Nvidia GTX 760 4 GB graphics card, running Windows 7.
I use the Intel CPU Usage gadget, which shows that 64-bit Rhino with Grasshopper uses core 3 most of the time, while the other three cores are mostly sleeping. Draw your own conclusions.

Grasshopper does not support multiple cores. Boot Camp does not cause slowdowns (only virtual machines do that).

Would this be easier on OS X using Grand Central Dispatch, if we are ever lucky enough to get the last remaining piece of software I still install Windows for running fully on OS X?

And what about CUDA/OpenCL? Could this be utilised for some speed-ups?

Actually, thinking about it, I never tried running GH on our render machine with some new Xeons... hmmm... *rubs fingers on chin* The patch I have been working on for the last few months only takes about 10 seconds to recalculate, but faster is always nicer. I will report back ;)

Ok, that was (unsurprisingly) disappointing.

The i7-4771 @ 3.5 GHz in the iMac (2.2 seconds recalculate time) is actually quite a bit faster than the Xeon 2565 v2 @ 2.1 GHz (2.9 seconds for the identical file), since only a single core on a single CPU is being used. I guess a fast Xeon would still beat the i7 by quite a bit, though.

The tinkerer in me already has plans to overclock a CPU just for GH ;)

Here is a nice list of single-thread performance scores: https://www.cpubenchmark.net/singleThread.html

Clock speeds aren't everything; the type of processor does make a difference, so a 3.1 GHz core may end up being faster than a 3.5 GHz core of a different make. However, these tend to be smallish differences, so not terribly important in the end. After all, it's not really worth going through a lot of trouble to get a 15% speed increase; 15% faster than slow is still pretty slow.

Also, processor speed has pretty much peaked these past few years; there have been no significant increases lately. Instead, manufacturers have started putting more cores on each chip, which is something GH unfortunately cannot take advantage of.
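
(As a rough, hedged illustration only: even though the solver itself runs on a single core, a component author can already parallelise a heavy loop inside their own script code with plain .NET. This is a minimal sketch, not Grasshopper's API; ExpensiveEvaluate is a hypothetical placeholder for whatever costly, independent per-item work the component does.)

    // Minimal .NET sketch: splitting independent per-item work across cores yourself.
    using System;
    using System.Threading.Tasks;

    class ParallelLoopSketch
    {
        // Hypothetical stand-in for an expensive, independent per-item calculation.
        static double ExpensiveEvaluate(int i)
        {
            double x = i;
            for (int k = 0; k < 100000; k++)
                x = Math.Sqrt(x + k);
            return x;
        }

        static void Main()
        {
            const int n = 10000;
            var results = new double[n];

            // Each index is independent, so the loop can be spread across all cores.
            Parallel.For(0, n, i =>
            {
                results[i] = ExpensiveEvaluate(i);
            });

            Console.WriteLine("First result: " + results[0]);
        }
    }

Note that for very cheap per-item work, the threading overhead discussed further down can easily eat the gain, so this only pays off for genuinely heavy loops.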

Multi-threading (very high on the list for GH2) brings with it a promise of full core utilisation (minus the inevitable overhead for aggregating computed results), but there are some problems that may end up being significant. Here's a non-exhaustive list:

  • It's not possible to modify the UI from a non-UI thread. This is probably not that big a deal for Grasshopper components, especially since we can make methods such as Rhino.RhinoApp.WriteLine() thread safe.
  • Not all methods used by component code are necessarily thread safe. There used to be a lot of stuff in the Rhino SDK that simply wouldn't work correctly or would crash if the same method was run more than once simultaneously. The Rhino core team has been working hard to remedy this problem, and I'm confident we can fix any problems that still come up, though it may take some time. If components rely on other code libraries, then the problem may not be solvable at all. So we need to make sure multi-threading is an optional property of components (see the thread-safety sketch after this list).
  • There's overhead involved in multi-threading; it's especially difficult to get a good performance gain when dealing with lots of very fast operations. In those cases the overhead can actually make things perform slower.
  • There's the question of at what level multi-threading should be implemented. Obviously the lower the better, but that means a lot of extra work, complicated patterns of responsibility and a lot of communication between different developers.
  • There's the question of how the interface should behave during solutions. If all the computation is happening in a thread, the interface can stay 'live'. So what should it look like if a solution takes -say- 5 seconds to complete? Should you be able to see the waves of data streaming through the network, turning components and wires grey and orange like strobe lights? What happens if you modify a slider during a solution? The simple answer is to abort the current solution and start a new one with the new slider value (see the abort-and-restart sketch after this list). But as you slowly drag the slider from left to right, you end up computing 400 partial solutions and never getting to a final answer, even though you could have computed 2 full solutions in the same time and given better feedback. Does the preview geometry in the Rhino viewports flicker in and out of existence as solutions cascade through the network?
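
To make the first two points a bit more concrete, here is a minimal sketch of what "thread safe" means in practice, in plain .NET rather than the Grasshopper SDK (the log list and SafeWrite method are hypothetical, not real Rhino calls): shared state touched from several threads has to be serialised, for example with a lock.

    // Sketch: serialising access to shared state so concurrent callers cannot corrupt it.
    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    class ThreadSafetySketch
    {
        static readonly List<string> log = new List<string>();
        static readonly object logLock = new object();

        static void SafeWrite(string message)
        {
            // Without this lock, two threads adding to the list at once can corrupt it;
            // with it, only one writer runs at a time.
            lock (logLock)
            {
                log.Add(message);
            }
        }

        static void Main()
        {
            Parallel.For(0, 100, i => SafeWrite("item " + i + " done"));
            Console.WriteLine(log.Count + " messages logged");
        }
    }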

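And here is a hedged sketch of the abort-and-restart idea from the last point, again in plain .NET using cancellation tokens (this is not how Grasshopper itself is implemented; Solve and OnSliderChanged are hypothetical placeholders, and the sketch assumes OnSliderChanged is only ever called from the UI thread):

    // Sketch: cancel the running solution and start a fresh one whenever the slider moves.
    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class AbortAndRestartSketch
    {
        static CancellationTokenSource currentRun;

        static void Solve(double sliderValue, CancellationToken token)
        {
            for (int step = 0; step < 100; step++)
            {
                // Bail out as soon as a newer solution has been requested.
                if (token.IsCancellationRequested) return;
                Thread.Sleep(10); // placeholder for real per-component work
            }
            Console.WriteLine("Finished solution for value " + sliderValue);
        }

        // Called whenever the slider moves (assumed to happen on one thread only).
        static void OnSliderChanged(double newValue)
        {
            currentRun?.Cancel();
            currentRun = new CancellationTokenSource();
            var token = currentRun.Token;
            Task.Run(() => Solve(newValue, token));
        }

        static void Main()
        {
            OnSliderChanged(1.0);
            Thread.Sleep(200);
            OnSliderChanged(2.0); // aborts the first run part-way through
            Thread.Sleep(2000);   // give the second run time to finish
        }
    }
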
Thanks, David, for making things a bit clearer. I can see that it's not trivial, to say the least.

Also, to be realistic here, we all want things to simply "be faster", but that involves a lot more than just computing the solution. Still, especially in light of the CPU developments you mention (more cores rather than higher frequency), it would be a shame to have all that computing power sit idle while using GH. But it would need to be a really significant speed-up (2x or more) to be worth all the hassle and potential for bugs (which would slow us down a lot).

I think that just as important as speeding up computing is speeding up coding (which still makes up the bigger chunk of time), so it would be good to iron out some bugs and add some missing features to make coding in GH faster.

In terms of looks, I wouldn't even mind if it just froze the UI and you could simply cancel the calculation to change a value. That's kind of what it does now too: I can't change any sliders while it is calculating, but I can press ESC to cancel it and make the changes. OK, so I just found a nice feature, because I have so many sliders and inputs: I can "disable solver" and press F5 to recompute manually. How good is that? :)

So you mean you could visualise where in the GH doc it is calculating at any given time? It would be quite nice to visualise the calculation process and see instantly which parts take a long time. I imagine it as a tiny LED on each component that is red while uncalculated and changes to green once calculated.

So all in all I think multi-threading would be nice to have, but it is not a show-stopper. I would rather you concentrate on bugs, adding features and improving the UI (which is already one of the best around). But if I had two wishes, they would be to make clustering and grouping much better; both are not very intuitive right now and have many weird behaviours.

Thank you both for this detailed information; I wasn't aware of these technical issues. But well, as Armin says, multi-threading "would be nice to have, but it is not a show-stopper."
