Grasshopper

algorithmic modeling for Rhino

In comparison to Processing or other small Java-based applications?

Views: 2550

Replies to This Discussion

Don't know. It's a bit weird to frame it in terms of Processing or Java, because they're not really the same kind of thing. If you were to write pure C# code in GH it would run about as fast as compiled code can run. GH2 still uses the Rhino SDK for the geometry functionality, so curve offsets, meshing, brep intersections etc. will run exactly as fast as they do now.

However even in the absence of a working version of GH2 which can be profiled, we can still discuss some of the major aspects of performance:

  • Preview display. Each GH solution involves a redraw of all the Rhino viewports at the end (unless preview is switched off, which I imagine is exceedingly rare). For simple GH files, the viewport redraw takes far more time than the solution. Rhino6 has a completely rewritten display pipeline using more modern APIs, so we should see a speed-up here in the future, be it in GH1 or GH2 or GHx.
  • Canvas display. Each GH solution involves a redraw of the Grasshopper canvas. If the canvas shows a lot of bitmaps or intricate geometry (lots of text, dense graphs, etc.) this can take a significant amount of time. GH2 will use Eto instead of GDI+ as a UI platform. Eto can be both faster and slower than GDI, depending on what's being drawn. It is particularly fast when drawing images, not so much when drawing lots of lines. There is a little room for improvement here and I intend to take full advantage of that.
  • Preview meshing. Grasshopper uses the standard Rhino mesher to generate preview meshes. If a GH file generates lots of breps, a large amount of time will be required to create the preview meshes. The new display improvements in Rhino6 will allow us to get away with previewing some types of geometry without the need to mesh them first, and I imagine some effort will be spent in the near future to improve the Rhino mesher as well.
  • Data casting. Most component code operates on standard framework and RhinoCommon types (bool, int, string, Point3d, Curve, Brep, ...), however Grasshopper stores and transfers data wrapped up in IGH_Goo derived types. This means that every time a component 'does its thing', data needs to be converted from one type into another, and then back again. This involves type-checking and often type instantiation. This stuff is fast, but it's overhead nonetheless and can take a significant number of processor cycles when there's lots of data. GH2 no longer does this; it stores and transfers the types directly as they are. There will still be some overhead left, but hopefully a lot less.
  • Computation. GH1 is a single-threaded application. When a component operates on a large collection of data, each iteration waits for the previous one. GH2 will be parallel, meaning components will be invoked on multiple threads, each thread focusing only on part of the data. Then all the results need to be merged back into a single data tree. On my 8-core machine (4 physical cores, each with 2 logical cores) I've been getting performance speed-ups of 4~6 times when using my multi-threading code. I wish it were 8, but clearly there is some overhead involved here as well.
    This will not help to speed up a single very complicated solid boolean operation, but if you're offsetting 800 curves, then each thread can be assigned 100 curves and the total time will be set by whichever thread takes the longest.
  • Algorithms. If a specific component is slow, there may be things we can do to speed it up. Either improve the Rhino SDK, or improve the GH code. Depends on the component in question.
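The data-casting overhead described above can be sketched in miniature. This is an illustrative model only; `GooNumber` and its methods are hypothetical stand-ins, not the actual Grasshopper `IGH_Goo` API:

```python
class GooNumber:
    """Stand-in for an IGH_Goo-style wrapper around a raw value."""
    def __init__(self, value):
        self.value = value

    def cast_to_float(self):
        # Type-check plus conversion on every access: the per-item
        # overhead the text describes.
        if not isinstance(self.value, (int, float)):
            raise TypeError("cannot cast to float")
        return float(self.value)

def double_gh1_style(goo_items):
    # GH1-style: unwrap -> compute -> re-wrap for every item.
    return [GooNumber(g.cast_to_float() * 2.0) for g in goo_items]

def double_gh2_style(values):
    # GH2-style: operate on the raw values directly, no wrapper round-trip.
    return [v * 2.0 for v in values]

raw = list(range(5))
wrapped = [GooNumber(v) for v in raw]
assert [g.value for g in double_gh1_style(wrapped)] == double_gh2_style(raw)
```

Both styles produce the same results; the difference is only the type-checking and object churn performed per item, which is what adds up over large data trees.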
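The chunked parallel evaluation from the Computation bullet (800 curves split across worker threads, results merged back in order, total time set by the slowest chunk) can be sketched like this. `offset_curve` is a placeholder for real geometry work, not a RhinoCommon call:

```python
from concurrent.futures import ThreadPoolExecutor

def offset_curve(c):
    # Placeholder for an expensive geometric operation.
    return c + 0.5

def parallel_map(func, items, workers=8):
    # Split the input into one chunk per worker: 800 items / 8 workers
    # gives 100 items per thread, as in the example above.
    size = (len(items) + workers - 1) // workers
    chunks = [items[i:i + size] for i in range(0, len(items), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves chunk order, so the merge keeps the original
        # ordering of the data.
        results = list(pool.map(lambda ch: [func(x) for x in ch], chunks))
    # Flatten the per-thread results back into a single list.
    return [r for chunk in results for r in chunk]

curves = list(range(800))
out = parallel_map(offset_curve, curves)
assert out == [offset_curve(c) for c in curves]
```

Note that in CPython, threads only help here if the heavy work releases the GIL (as native geometry code typically would); the sketch is about the chunk-and-merge structure, not Python threading performance.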

When all's said and done, I'd love to see a 10x speed increase for GH2 over GH1 for simplish stuff, and I shall get very cross if it's anything less than 5x.

Thank you David, this question was actually addressed to you (who else?). So to clarify: do you mean that, for example, particles created in Grasshopper via a C# component could number in the tens of thousands? Currently plugins like Boid or Culebra are unable to reach this number.

Really glad to hear that you are working on making GH faster. Do you already have a preliminary release date, by any chance?

Also, it sounded like you meant that Rhino 6 won't be released before GH2? I find that surprising, given that its beta version is already available.

Oh Rhino6 is very close to release. Grasshopper2 will probably be part of the RhinoWIP process.

That's great. Also is it going to be "embedded" in Rhino?

EDIT: replied below

It's great to hear that you want to simplify data casting, but can you tell us whether it would render older plugins incompatible? Would they still work the same?

That's true. GH2 is a completely different program and none of the plug-ins will work. We're also dropping the GHA file format for Grasshopper plugins and switching to RHP, so you can have a single assembly which contains both Rhino commands and Grasshopper components.

It's an open question whether any or all of the old files will even work, because a lot of components will be very different. So in order to support existing gh files we'd either have to re-code hundreds of legacy components, or come up with ways to convert the old logic into new logic. Both of these routes require a lot of work and the early beta releases of GH2 will definitely not be able to open any *.gh files you might have.

I see. Do you think you could also include a "converter" for the old code? Is the new one really so different, or is it perhaps just a matter of something like "IGH_Goo.Extrude" >>> "RhinoCommon.Extrude"?

Algorithms: Are "Volume" or "Area" components something that could perhaps work faster? I always use them to get centroids, but they're impossible to use on too many breps at the moment.

Those are good examples of algorithms which can probably be made faster, but also are already pretty optimised. We'll have to investigate whether we can improve the speed of the existing features, or whether it makes sense to add other types of algorithms that are faster but compute less, or maybe are faster but less accurate.

Another possible improvement here is to inspect the data going in, and if it's a box, or sphere, or plate-like shape we can use quicker algorithms for those special cases.
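That inspect-and-dispatch idea can be sketched as follows. All names here are illustrative, not Rhino SDK calls, and the general fallback is a naive point average standing in for full volume integration over a brep:

```python
def centroid_box(pmin, pmax):
    # A box's centroid is simply the midpoint of its two extreme
    # corners: a constant-time closed-form result.
    return tuple((a + b) / 2.0 for a, b in zip(pmin, pmax))

def centroid_general(points):
    # Fallback: average of sample points, standing in for a full
    # (and far more expensive) mass-properties computation.
    n = float(len(points))
    return tuple(sum(c) / n for c in zip(*points))

def centroid(shape):
    # Inspect the data going in; use the quick algorithm for the
    # special case, the general one otherwise.
    if shape["type"] == "box":
        return centroid_box(shape["min"], shape["max"])
    return centroid_general(shape["points"])

box = {"type": "box", "min": (0.0, 0.0, 0.0), "max": (2.0, 4.0, 6.0)}
assert centroid(box) == (1.0, 2.0, 3.0)
```

The point of the dispatch is that callers never need to know which path was taken; the special cases are pure optimisation.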

Exactly. For example, a "Centroid" command which doesn't necessarily compute volume or area would be great.

Also, how would you compute a centroid for a box in a way that is faster than the "Volume" component?
