Hi everybody!
A short story (maybe dumb... who cares!):
A: Hey B, it's "false"!
B: Ok.
- 0.01ms later -
A: Hey B, it's still "false"!
B: Ok, I got it.
- 0.01ms later -
A: Hey B, you'll never believe it!... It's still "false"!
B: OK! I got it! Tell me when it changes!
- 0.01ms later -
You got it.
This is what can happen when dragging a slider: part of a big definition keeps getting recalculated (possibly including heavy tasks) over and over, uselessly.
Every GH component "refreshes" whenever any of its inputs refreshes...
These updating/expiration "waves" can be unhandy (http://www.grasshopper3d.com/forum/topics/control-knobs-in-c)
Not if you're coding!
I did something like this, working with a decimal value converted to an integer:
The data dam(s) are set to "0.25 sec" so we can see that it keeps updating...
By using a C# component (or something similar) we can "filter" the updating waves, passing data along only when it actually changes, etc...
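To make that concrete, here is a minimal sketch of what such a filter could look like as the body of a C# script component (one double input x, one output A). This is my reconstruction of the idea, not necessarily the attached script; the field persists between solutions as long as the script isn't edited:

// Cached result of the previous run (persists across solutions).
private int _last = int.MinValue;

private void RunScript(double x, ref object A)
{
    int rounded = (int)Math.Round(x);  // decimal -> integer: the "filter"
    bool changed = rounded != _last;
    _last = rounded;

    // Always assign the output; when nothing changed, downstream sees
    // the same value. Note that this alone does not stop the expiration
    // wave: downstream components are still expired and recompute.
    A = rounded;
}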
Now, would it be dangerous? In which ways?
Can this be done for every kind of data (geometries, values, booleans, etc.)?
In my ignorance I would say every component should do this "check" to lighten the load on the CPU...
But more than anything else... has this already been solved in a better way?
In GH2 it would be cool to have a "filter" component.
Maybe I'm just wrong about everything...
Give me your thoughts!
Cya! :D
(C# script attached; there's also some cool component positioning inside, stuff I just learnt)
This is a possible alternative approach to running solutions, however it doesn't work well within the current GH SDK. The current approach is to just erase all data as soon as something expires, and this wave of erasures propagates throughout the document. All the current data is lost, so when new data comes along there's nothing there to compare it against. In order for your suggestion to work, old data needs to be kept long enough so it can be compared to new data. If, after the comparison, it turns out the new data is identical to the old data, then the old data can be re-used downstream.
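To illustrate the kind of SDK-level workaround this implies (in the spirit of the control-knob thread linked above), here is a rough, hypothetical sketch of a compiled component that swallows the normal expiration wave and only re-expires its recipients, via a scheduled solution, when its cached value actually changed. The names and GUID are placeholders, and the pattern carries exactly the risks described below:

using System;
using Grasshopper.Kernel;

// Hypothetical "change filter": suppresses the normal downstream
// expiration wave and re-expires recipients only on a real change.
public class ChangeFilter : GH_Component
{
    private int _last = int.MinValue; // value from the previous solution

    public ChangeFilter()
      : base("Change Filter", "CFilt",
             "Passes a rounded value downstream only when it changes",
             "Params", "Util") { }

    public override Guid ComponentGuid
      => new Guid("00000000-0000-0000-0000-000000000001"); // placeholder

    protected override void RegisterInputParams(GH_InputParamManager p)
      => p.AddNumberParameter("Value", "V", "Decimal input", GH_ParamAccess.item);

    protected override void RegisterOutputParams(GH_OutputParamManager p)
      => p.AddIntegerParameter("Result", "R", "Rounded value", GH_ParamAccess.item);

    // Swallow the wave: downstream objects are NOT expired just
    // because this component expired.
    protected override void ExpireDownStreamObjects() { }

    protected override void SolveInstance(IGH_DataAccess DA)
    {
        double v = 0.0;
        if (!DA.GetData(0, ref v)) return;

        int rounded = (int)Math.Round(v);
        DA.SetData(0, rounded);

        if (rounded == _last) return; // unchanged: stay quiet
        _last = rounded;

        // The value did change, so downstream must recompute after all:
        // schedule a second solution and expire our recipients in it.
        GH_Document doc = OnPingDocument();
        if (doc == null) return;
        doc.ScheduleSolution(1, d =>
        {
            foreach (IGH_Param recipient in Params.Output[0].Recipients)
                recipient.ExpireSolution(false);
        });
    }
}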
Quite beyond the fact that the current system is not set up for such an approach, there are several problems with it. Chief among them: comparing old data against new data on every solution is itself potentially expensive, exactly the kind of constant overhead you are trying to avoid.
The good news is that GH2 will retain solution data longer than GH1. This is necessary because GH2 solutions run on background threads and the old data needs to still be previewable in Rhino viewports while the new solution computes. This at least provides a mechanism by which data before and after a solution could be compared, and even a way for old data to be re-instated without new data having to be computed.
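Purely to illustrate that mechanism (this is not the GH2 API, just a sketch of the idea), keeping the previous solution's data alongside the current one could look like:

using System.Collections.Generic;

// Illustrative only: hold the previous solution's results so they can
// be previewed and compared while a new solution computes.
class SolutionCache<T>
{
    public IReadOnlyList<T> Previous { get; private set; } = new List<T>();
    public IReadOnlyList<T> Current  { get; private set; } = new List<T>();

    // Called when a background solution finishes. Returns true if the
    // data changed; false means old downstream data could be re-instated.
    public bool Commit(IReadOnlyList<T> fresh, IEqualityComparer<T> cmp)
    {
        Previous = Current;
        Current = fresh;
        if (Previous.Count != Current.Count) return true;
        for (int i = 0; i < Current.Count; i++)
            if (!cmp.Equals(Previous[i], Current[i])) return true;
        return false;
    }
}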
But even in this new scheme I'm hesitant to try and be 'clever' about this, because if you get it wrong you introduce a bunch of really hard-to-detect and hard-to-work-around bugs, and that's not even to mention the increased complexity of the solution logic in general, which is already incredibly convoluted and difficult to debug.
Thanks for the exhaustive answer, David... as expected.
Doing this check all the time would be a heavy load, OK...
Maybe something like an MD5 checksum?
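The idea would be to compare cheap fingerprints instead of the data itself. A minimal sketch, assuming the data can first be serialized to a string (which has its own cost for big geometry):

using System;
using System.Security.Cryptography;
using System.Text;

// Fingerprint a piece of data via its serialized form, so that
// "did it change?" becomes a cheap string comparison.
static class DataFingerprint
{
    public static string Md5(string serialized)
    {
        using (var md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(serialized));
            return BitConverter.ToString(hash); // e.g. "9E-10-7D-..."
        }
    }
}

// Usage idea: recompute downstream only when the fingerprint changes.
// string now = DataFingerprint.Md5(someData.ToString());
// bool changed = now != lastFingerprint;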
Still, I don't quite get why all the data needs to be expired and recalculated every time... but I'm certainly far from even imagining the whole complexity of the situation here (:
... maybe what I need could be solved just with a smart use of the Data Dam component.