Grasshopper

algorithmic modeling for Rhino

Hello, I have a repetitive task that seems to forever increase my Rhino memory footprint. The task is pretty straightforward...

- (In Rhino) Open .3DM file containing reference geometry

- (In Grasshopper) Open .gh file 

- Modify input parameters (ints)

- Recompute

- Bake some geometry

- close .gh file

- (In Rhino) Export new geometry

- Go to step 1.

Every time I complete the sequence, Rhino takes up more memory. I am wondering if there is a way to free any of it along the way. I have tried the GrasshopperUnloadPlugin command, but it does not reduce the total allocation.
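For reference, a rough sketch of how one such cycle could be driven from a RhinoCommon plug-in; the file paths and the scripted-command strings are placeholders for however you actually drive the sequence, and the input-modification/bake steps are omitted:

    using Rhino;

    // Rough sketch of one cycle, driven from a RhinoCommon plug-in command.
    // Paths and command strings are placeholders; the real sequence also
    // modifies GH inputs, recomputes and bakes, which is omitted here.
    public static void RunOneCycle(string modelPath, string ghPath, string exportPath)
    {
        // (1) Open the .3dm file containing the reference geometry.
        RhinoApp.RunScript("_-Open \"" + modelPath + "\" _Enter", false);

        // (2) Open the .gh file; the solver computes on open.
        RhinoApp.RunScript("_-Grasshopper _Document _Open \"" + ghPath + "\" _Enter", false);

        // ... modify int inputs, recompute, bake, close the .gh file ...

        // (3) Export the newly baked geometry.
        RhinoApp.RunScript("_-SelAll", false);
        RhinoApp.RunScript("_-Export \"" + exportPath + "\" _Enter", false);
    }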

Update: Here is the visualization of the memory, and the snapshot before and after 35 cycles of the repetitive task: [images not preserved]

Has anyone dealt with releasing memory / cleaning up run-time caches? Is that even possible via the GH or Rhino SDKs? Is this behavior expected?

Thanks


Replies to This Discussion

I moved the question to VB, C# and Python Coding, as there might be a fix via the SDK.

I wonder if the Managed Heap indicates all memory used by RhinoCommon+Grasshopper and if the Private Data refers only to C++ memory. If so, then the memory increase happens solely in Rhino and has nothing to do with Grasshopper (wouldn't that be nice?).

Since you're loading new 3dm documents on every iteration, the memory increase shouldn't be due to undo buffers. Since you're unloading gh documents, it shouldn't be due to cached data sticking around. At least I hope it isn't. I can look into this, but it will be a while before I have enough time to do so (too many deadlines at the moment).
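If undo buffers do turn out to be involved after all, a plug-in can flush them explicitly; a minimal sketch against RhinoCommon (untested against this particular leak):

    using Rhino;

    // Minimal sketch: flush the undo/redo buffers after each bake so
    // baked-geometry undo records cannot accumulate across cycles.
    // Untested against this particular leak.
    public static void FlushUndoBuffers(RhinoDoc doc)
    {
        doc.ClearUndoRecords(true);   // true = also purge deleted objects
        doc.ClearRedoRecords();
    }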

--

David Rutten

david@mcneel.com

Poprad, Slovakia

I will post a timeline of a single cycle, but from what I can tell the big increase in memory occurs when the bake command happens (which is expected). Unloading the gh and 3dm files does not produce the equivalent reduction in memory that I would expect. There is some memory reduction, so I know something is happening on document close, but somewhere some extra allocated luggage comes along for the ride indefinitely.

If I just open and close the files and do not bake, then I don't have any accumulation...

Let me post some more info. Deadlines loom, but would dmp files be helpful at some point?

I don't think a dmp file will help. I'll need to code up some tests and put breakpoints in strategic positions to see when objects are constructed and destructed.

--

David Rutten

david@mcneel.com

Poprad, Slovakia

I did some more testing (nothing scientific yet) and this is what I found:

I used both Rhino 4.0 SR9 and Rhino 5 (64-bit) WIP 5.1.20927.2215 with Grasshopper build 0.9.0014, running on 64-bit Windows 7.

1. Toggle between two 3DM documents in Rhino with the Open command and no saves => no increase in memory (as expected).

2. Open/close the same GH document with the solver locked => no increase in memory.

3. Open/close the same GH document with the solver unlocked and a solution computed on open, but no geometry previews enabled => increase in memory, small but steady: ~2-4 MB on Rhino 4.0 and ~8-10 MB on Rhino 5.0 per open/close cycle. Note: this GH file has surface parameter collections that point to surfaces in the 3DM file open in Rhino during the test.

I will also try to isolate the bake command, see what happens, and post results.

Lastly, I have run the same repetitive task on a 32-bit Windows 7 machine with Rhino 5.0 and verified the same steady rise in memory, but I have not split out the individual steps.
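For anyone wanting to reproduce these open/close cycles programmatically rather than by hand, a sketch using the GH SDK (assuming GH_DocumentIO and the document server behave as I understand them; untested with respect to the leak itself):

    using Grasshopper;
    using Grasshopper.Kernel;

    // Sketch of one open/solve/close cycle via the GH SDK, mirroring
    // the manual tests above.
    public static void OpenSolveClose(string ghPath)
    {
        var io = new GH_DocumentIO();
        if (!io.Open(ghPath))
            return;

        GH_Document doc = io.Document;
        Instances.DocumentServer.AddDocument(doc);

        doc.Enabled = true;      // solver unlocked, as in test 3
        doc.NewSolution(true);   // expire everything and recompute

        Instances.DocumentServer.RemoveDocument(doc);
        doc.Dispose();           // release whatever the document holds
    }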

I do not know exactly what you are doing, but if you are scripting, sometimes you need to trigger the garbage collector (GC) or call Dispose() on some objects (e.g. file-loading tasks).

You can also use an extra thread to trigger the GC, but do not do it too often, because it hurts performance.

This can really speed up your application. Check whether the classes you are using have a Dispose() method... that usually means they are not automatically cleaned up, and you can get memory leaks just like in C++.
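To make that concrete: many RhinoCommon geometry types wrap unmanaged C++ objects and implement IDisposable, so disposing them deterministically releases the native memory instead of waiting for a finalizer. A small illustration (the geometry operation here is arbitrary):

    using Rhino.Geometry;

    // Illustration of the Dispose() advice: RhinoCommon geometry wraps
    // unmanaged C++ memory, so disposing it releases the native
    // allocation immediately instead of waiting for a finalizer.
    public static double MeshedArea(Brep brep)
    {
        double area = 0.0;
        Mesh[] meshes = Mesh.CreateFromBrep(brep, MeshingParameters.Default);
        foreach (Mesh m in meshes)
        {
            using (AreaMassProperties amp = AreaMassProperties.Compute(m))
            {
                if (amp != null)
                    area += amp.Area;
            }
            m.Dispose();  // free the unmanaged mesh right away
        }
        return area;
    }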

I've been adding a lot of dispose calls to see if I could reduce the memory after a document has been unloaded. So far no luck whatsoever. 

Manually triggering the garbage collector ought not to be necessary; it should always kick in eventually, all by itself, when it deems memory needs to be reclaimed. I don't know how it affects performance, but from a memory point of view it should all be automatic.
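For completeness, a fully forced collection looks like the snippet below; note it only reclaims managed memory, so it cannot touch any C++ allocations behind the "Private Data" numbers:

    using System;

    // Forcing a full collection. Normally unnecessary, and it only
    // reclaims managed memory; unmanaged C++ allocations are unaffected.
    public static void ForceFullCollection()
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();  // let finalizers release native handles
        GC.Collect();                   // collect objects freed by those finalizers
    }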

--

David Rutten

david@mcneel.com

Poprad, Slovakia

I also tested this by removing the logic and generative objects from my GH file, leaving just the surface parameter collections. Cycling open/close on this type of file did not show the same increased memory allocation profile. It is the compute of the file that seems to correlate with the increase in unmanaged ("Private Data") memory. .NET memory seems to stay fairly steady, although as I look at the snapshots most of the objects stay in Gen2, as you described. I don't know what I would do to kick off their compaction.

This is starting to get well out of my wheelhouse, but how is memory allocated for objects during normal operations? The increasing "Private Data" allocations are created by VirtualAlloc. How that relates to the CLR and .NET is mostly Greek to me, and the subject of many an MSDN and Stack Overflow post. I am curious whether, if those objects stuck in Gen2 were compacted, corresponding VirtualFree calls would shrink the "Private Data".
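For reference, newer .NET runtimes (Framework 4.5.1 and later) do let you request a one-shot compaction of the large object heap, which is otherwise never compacted and can keep VirtualAlloc'd segments reserved; whether the Rhino versions above run on a new enough runtime is another question:

    using System;
    using System.Runtime;

    // One-shot compaction of the large object heap (LOH). Requires
    // .NET Framework 4.5.1+; older runtimes never compact the LOH, so
    // freed segments can stay reserved (VirtualAlloc) and show up as
    // "Private Data" even after the objects are gone.
    public static void CompactLargeObjectHeap()
    {
        GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();  // the next full blocking collection performs the compaction
    }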

In the end, I am just interested in whether there is a way for me to keep my memory at a steady state, or a way to manually recover/free memory through my plug-in. While I can recycle my instance of Rhino by closing it and restarting, that seems a little drastic.

Thanks again for looking into this

I'm using the same process as Craig (except for the baking and exporting of geometry; I export a text file).

And I'm having the same problem. I can run the definition three times, after which it starts using the page file and the definition becomes a lot slower. Any new thoughts on how to resolve this? Thanks!

We never found a direct solution. We were driving multiple Rhino/Grasshopper instances from another service, and eventually we would just recycle each Rhino application after a certain number of cycles. In our system, killing the process and standing one back up took ~15 seconds. We could do 100 cycles before memory issues, but it sounds like your process has a much greater memory allocation...
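A sketch of that recycling approach, with a hypothetical RunCycle standing in for however the controller drives one open/compute/bake/export pass:

    using System.Diagnostics;

    // Sketch of the recycle-the-process workaround described above.
    // RunCycle and the Rhino path/arguments are hypothetical placeholders
    // for however your controller drives each Rhino instance.
    public static void RunWithRecycling(string rhinoPath, int cyclesPerInstance)
    {
        while (true)
        {
            Process rhino = Process.Start(rhinoPath, "/nosplash");
            for (int i = 0; i < cyclesPerInstance; i++)
            {
                // RunCycle(rhino);  // one open/compute/bake/export pass
                rhino.Refresh();
                if (rhino.PrivateMemorySize64 > 2L << 30)
                    break;           // recycle early past ~2 GB private memory
            }
            if (!rhino.HasExited)
                rhino.Kill();        // standing a new instance up took ~15 sec
            rhino.WaitForExit();
        }
    }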
