Can some more enlightened individual please explain how GH spits out list addresses? It doesn't seem to make any sense whatsoever, making any attempt at parametric modelling an endeavour I have decided to coin 'smart-idiot' until proven otherwise.
For example, I have the following input:
1. A circle
2. Points on that circle (from Divide Curve).
Output address of the points: {0;0}
Firstly, what does that mean? I.e. in C# I'm familiar with lists represented as {0,0}, meaning if I want to extract an item from that list, it is as simple as myList[0], etc. In GH that isn't the case. Why isn't its address {0}, i.e. a list of points with a 'tree index' of 0?
Secondly, how does this address system even work? It seems to pluck these sequences out of some kind of secretive GH ether. I know that with tree components it's possible to control this somewhat, but it's so convoluted and requires constant manual adjustment if certain dependencies change. Is there no control of list addresses from the outset? I've been using HoopSnake, and the never-ending address lengths, which get longer and longer as it cycles, are a joke. Is GH parametric software or semi-parametric, because currently I feel short-changed? I'm even more aggrieved when I realise that if I do want to do more sophisticated definitions in GH, or have greater control over lists/trees, the scripting components are the only option, thus rendering the whole advantage of GH's visual programming utterly useless. Salvation please...
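To make the C# comparison concrete, here is a minimal sketch of what an address like {0;0} actually refers to. It assumes a C# scripting component with the Grasshopper SDK (DataTree<T>, GH_Path) and RhinoCommon's Point3d available; the variable names are illustrative only.

```csharp
// Sketch only: assumes a GH C# scripting component with these assemblies referenced.
using System.Collections.Generic;
using Grasshopper;               // DataTree<T>
using Grasshopper.Kernel.Data;   // GH_Path
using Rhino.Geometry;            // Point3d

// In plain C#, one circle's division points would simply be a list:
var points = new List<Point3d> { new Point3d(1, 0, 0), new Point3d(0, 1, 0) };
Point3d first = points[0];       // ordinary item index

// In GH the same data lives in a tree. {0;0} is not an item index:
// it is the address (path) of the branch that holds the whole list.
var tree = new DataTree<Point3d>();
tree.AddRange(points, new GH_Path(0, 0));            // the list sits on branch {0;0}

// Items are still retrieved by ordinary list index, per branch:
Point3d same = tree.Branch(new GH_Path(0, 0))[0];    // equivalent of points[0]
```

In other words, the path is the name of a branch, not an index into it; within a branch, items are still addressed with ordinary list indices.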
It's just you... this stuff is actually extremely well designed... it's just that you are making comments about something you do not yet know how to use, so it looks incomprehensible to you.
Trees are essential to understanding how GH works, even for "just past basic" stuff. If you're used to programming, trees are just nested lists. What you're seeing with your (good) example is that objects derived from other objects most often go up a tree branch level. It may seem bizarre with one circle being divided, but try feeding the component a list of 20 circles and then look. Each circle's point collection will be on a separate branch, and this allows you to do things like connect the corresponding points of all the branches.
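A hedged sketch of the twenty-circles case described above, assuming a C# scripting component with RhinoCommon and the Grasshopper SDK; the circle layout and division count are arbitrary choices for illustration.

```csharp
// Sketch only: 20 circles -> divide each -> one branch of points per circle.
using System.Collections.Generic;
using Grasshopper;
using Grasshopper.Kernel.Data;
using Rhino.Geometry;

var circles = new List<Circle>();
for (int i = 0; i < 20; i++)
  circles.Add(new Circle(new Point3d(i * 5.0, 0, 0), 2.0));

var tree = new DataTree<Point3d>();
for (int i = 0; i < circles.Count; i++)
{
  Curve crv = circles[i].ToNurbsCurve();
  double[] ts = crv.DivideByCount(10, true);   // 10 segments per circle
  var pts = new List<Point3d>();
  foreach (double t in ts) pts.Add(crv.PointAt(t));
  tree.AddRange(pts, new GH_Path(0, i));       // one branch per circle: {0;i}
}

// "Corresponding points": item 3 from every branch, i.e. the 4th division
// point of every circle -- the cross-branch matching mentioned above.
var fourths = new List<Point3d>();
foreach (List<Point3d> branch in tree.Branches)
  fourths.Add(branch[3]);
```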
If you do not want/need the branching on particular components in particular situations, try using the Simplify and/or Flatten options (right click on the input or output).
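And a small, self-contained sketch of what the Flatten option does to such a tree, written out by hand for clarity (in practice you would just right-click the parameter as suggested). It assumes the same Grasshopper SDK types as above.

```csharp
// Sketch only: what Flatten does, shown by hand on a tiny integer tree.
using System.Collections.Generic;
using Grasshopper;
using Grasshopper.Kernel.Data;

var tree = new DataTree<int>();
tree.AddRange(new List<int> { 1, 2 }, new GH_Path(0, 0));
tree.AddRange(new List<int> { 3, 4 }, new GH_Path(0, 1));

// Flattening merges every branch, in path order, onto a single branch:
var flat = new DataTree<int>();
foreach (List<int> branch in tree.Branches)
  flat.AddRange(branch, new GH_Path(0));   // everything ends up on {0}: 1, 2, 3, 4
```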
GH is neither parametric nor semi-parametric software in the classic sense; it's a visual programming interface for Rhino with which you can design parametric algorithms to create and control geometry.
--Mitch
I found an explanation from David Rutten about this; it might be useful for understanding it.
Hi Anew,
Glad you returned; are you ready for some homework now? :)
Here is a simple definition which shows the path structure's nature quite well. The basic principle is to take a surface and divide it into points on that surface. From each point, draw a line and then divide that line into points. The end result is a 3D cube of points made from multiple rows and columns in all directions.
The file is attached so you can have a play for yourself.
The first example shows, as you say, a very nonsensical path structure resulting from such a simple problem. Why on earth does it need so many zeros at the front: {0;0;...}?
If you change the first slider from 1 to 2, things become a little clearer.
Because we are now supplying two sets of surfaces on the same path, the second zero in {0;0;...} starts to make sense: when the path refers to the second surface it becomes {0;1;...}.
If you then change the Multi-Branch Toggle to True, the first zero starts to make sense.
So with one surface on each of the different original paths, {0} and {1}, the first zero now means something.
Have a go yourself by changing the settings in any of the purple circles and see what happens. You'll find that the additional levels of path structure are present for the "what if" scenarios. But there has to be consistency, because I don't want to create a definition with the intention of having multiple possibilities, only to find that because I add complexity the structure suddenly changes.
I hope this helps
Danny
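A minimal sketch of the structure Danny's definition produces, assuming a C# scripting component with the Grasshopper SDK and RhinoCommon. The surface is faked with a flat grid of points and the counts are arbitrary, so treat it as an illustration of the path depth rather than of the attached file.

```csharp
// Sketch only: surface points -> a line from each -> points on each line.
using Grasshopper;
using Grasshopper.Kernel.Data;
using Rhino.Geometry;

var tree = new DataTree<Point3d>();
var surfacePath = new GH_Path(0, 0);   // surface 0 inside input branch {0}

int uCount = 4, vCount = 4, zCount = 5;
for (int u = 0; u < uCount; u++)
{
  for (int v = 0; v < vCount; v++)
  {
    var basePt = new Point3d(u, v, 0);               // stand-in for a surface point
    // Each base point gets its own branch, one level deeper than the surface:
    GH_Path linePath = surfacePath.AppendElement(u * vCount + v);   // {0;0;n}
    for (int k = 0; k < zCount; k++)                 // points along the "line"
      tree.Add(new Point3d(basePt.X, basePt.Y, k), linePath);
  }
}
// Result: a 4 x 4 x 5 cube of points. The leading zeros are the levels held
// in reserve for the "what if there were more surfaces / more input branches"
// cases toggled in the attached file.
```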
Thanks, this is very informative. But it does seem very peculiar indeed; why doesn't GH simply create lists dynamically rather than 'second-guessing' what the potential length of a list might actually be? I.e. going back to my example, and to pick up on what Mitch mentioned, if I have one circle with points on it, logically the list structure should be {0}; if I add more circles, naturally {0;0} makes sense. Why should the behaviour favour the software and not the user?
To work backwards is akin to predicting your first word before you were even conceived. It's paradoxical. Surely there must be scenarios where this bottom-up approach undermines itself.
To illustrate the point, imagine a database where new data rows can be added at any time. If it used GH 'logic', it would require:
1. A predefined end-point (final row)
2. The next row available for data writing (going backwards, of course)
3. That it has finite rows available, and can only insert new data up to this point.
Clearly all three points contradict the fundamentals of a database data structure, where records always start from 1, the next available row can easily be established, and, crucially, data insertion is unlimited, all thanks to a structured, sound and intuitive logic as opposed to a predisposed, counter-intuitive one.
I think a lot more thought needs to be put into the data structure in GH so that it works for its users rather than against them. The expectation that users should adapt to and be aware of such peculiar notions is really going to limit GH's appeal in the long term, as I can't think of any sane individual who finds an abstraction (programming) of an abstraction (GH) appealing or conducive to the learning curve.
Why should the behaviour be different for different events?
You've missed the point if you think that GH logic dictates either a predefined number of rows available in its data tree or that there are such things as rows. Try not to think of this as a 2D problem.
The path structure can only grow as much as you make it. If you add geometry to your definition and it has the possibility of being multiple, then you need another branch. For instance, in the example I posted above, I could add a circle to each point in the last branch. But I could also add multiple circles to each point, all with different radii. Now each new circle could be divided up into points, but I'm not restricted to just one set of points on each circle; I could have multiple divide values, therefore a branch is required to hold each possible outcome. So the tree grows and grows. If it didn't, then Grasshopper would be very restrictive in what you could do.
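A rough sketch of that growth, under the same SDK assumptions as the earlier snippets; the point count, radii and divide values are arbitrary stand-ins.

```csharp
// Sketch only: every step that could be "multiple" appends one path index.
using Grasshopper;
using Grasshopper.Kernel.Data;
using Rhino.Geometry;

var tree = new DataTree<Point3d>();
double[] radii = { 1.0, 2.0, 3.0 };   // several circles per point
int[] divisions = { 4, 8 };           // several divide values per circle

for (int p = 0; p < 5; p++)                        // points from the last branch
{
  var centre = new Point3d(p, 0, 0);
  for (int c = 0; c < radii.Length; c++)           // which circle
  {
    Curve crv = new Circle(centre, radii[c]).ToNurbsCurve();
    for (int d = 0; d < divisions.Length; d++)     // which divide count
    {
      var path = new GH_Path(0, 0, p, c, d);       // one branch per possible outcome
      foreach (double t in crv.DivideByCount(divisions[d], true))
        tree.Add(crv.PointAt(t), path);
    }
  }
}
```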
The main issue you have with it is that you would like it to be adaptive to what is being supplied rather than one-size-fits-all.
My argument against this would be that I could no longer create definitions that behave consistently with different numbers of variable inputs. For example, if I change the first slider in the definition above, I don't want an extra branch being added at the start, as later on in the definition this could change how I'm splitting a tree to retrieve only certain branches.
"Why should the behaviour be different for different events?"
It shouldn't. The underlying logic should surely be consistent. The data sequencing from that logic should surely be dynamic. Go for the ball, not the player.
Here's one example of why the existing GH logic is somewhat flawed:
{0;0} - as in my example, default
{0;0;0} - I've added additional components and the address length has grown
{0;0;0;0} - similar scenario
The flaw (in reference to what I believe is a contradiction in your argument):
If I am splitting a tree as you mentioned, potentially the branches will change as the address length changes. I either have to manually edit my definition or accept that GH is actually restrictive, as it predefines an address length yet still grows it when it's forced to. Why not just grow from the outset and maintain one logic, rather than one rule to start and another to adapt (if the first rule gets broken)?
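For what it's worth, a tiny sketch of the breakage being described here, assuming only the GH_Path type from the Grasshopper SDK: anything keyed to a fixed-depth address stops matching once the structure gains a level.

```csharp
// Sketch only: a lookup written against a fixed-depth path misses once the
// structure gains a level.
using Grasshopper.Kernel.Data;

GH_Path expected = new GH_Path(0, 0);      // what the downstream split was built for
GH_Path actual   = new GH_Path(0, 0, 0);   // what arrives after the definition grows

// The two addresses no longer even have the same depth, which is why the
// definition needs manual re-wiring when this happens:
bool sameDepth = expected.Length == actual.Length;   // false
```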
GH would actually be more flexible if it handled data dynamically, not "very restrictive". I guess it's the lack of control which really holds it back; the reliance on automatically handled list addresses is perhaps its Achilles heel.
I'm not sure Danny is really getting through to you, or that you really understand how and why GH adds/removes layers of data structure. So even though it may seem counter-intuitive to you, the data-handling behaviour that you highlight is essential for consistency and management of highly complex structures, and is in fact one of GH's greatest strengths.
It comes down to consistency and predictability, regardless of whether a component is fed an item, a list, or a data tree. Whatever goes in, you want the result to be self-consistent. That's what it does. It seems that you're advocating for a component to treat an item differently than it does a list in terms of how it registers structure in the output.
I'm not; perhaps understanding list structures in C# is to my detriment. I really don't see why my argument for a dynamic list structure is asking too much, especially when GH appears to do something similar once its default data-management techniques are exceeded, thus forcing a new address index to be inserted. It's all just so unnecessarily particular and finicky.
If addresses are added when forced to, why not just have that as the default behaviour in the first place? It's not so much 'one size fits all', as postulated previously, but more one size fits 80% of cases, and in the remaining 20% of cases you're going to be a slave to your definition, as constant manual management will be required just to control the thing.
My final point:
A circle with points should have a list address of {0};
multiple circles with points should have a list address of {0;0};
multiple circles in multiple locations with points should have a list address of {0;0;0}, etc.
I really don't see how that is any less consistent for highly complex data structures. To any rational individual this is predictable and follows a logic. What advantage is there in fixing the address at {0;0} yet still allowing new address sequences to be added further downstream? Logic is the key thing to keep in mind here, not peculiar nuances only the initiated can ever be aware of.
"I really dont see how that is any less consistent for highly complex data strucutres."
Precisely because when the model is adjusted parametrically, there is the possibility that your "circle" can become "circles" and everything downstream will become fubared if the data structure outputs differently.
But I guess I must be irrational.
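A hedged sketch of that consistency argument, using the same SDK assumptions as before; DividedCircles and Connect are hypothetical helper names. Because even a single circle's points already sit on their own branch, the downstream code never has to change when one circle becomes twenty.

```csharp
// Sketch only: hypothetical helpers showing why a fixed-depth structure keeps
// downstream code unchanged when one circle becomes many.
using System.Collections.Generic;
using Grasshopper;
using Grasshopper.Kernel.Data;
using Rhino.Geometry;

DataTree<Point3d> DividedCircles(IEnumerable<Circle> circles)
{
  var tree = new DataTree<Point3d>();
  int i = 0;
  foreach (Circle c in circles)
  {
    Curve crv = c.ToNurbsCurve();
    foreach (double t in crv.DivideByCount(8, true))
      tree.Add(crv.PointAt(t), new GH_Path(0, i));   // same shape for 1 or N circles
    i++;
  }
  return tree;
}

// Downstream: a polyline through each branch. This loop is identical whether
// the tree came from one circle or from twenty.
List<Polyline> Connect(DataTree<Point3d> tree)
{
  var result = new List<Polyline>();
  foreach (List<Point3d> branch in tree.Branches)
    result.Add(new Polyline(branch));
  return result;
}
```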
No, but maybe stupid:
If 'circle' becomes 'circles', the data structure downstream dynamically updates as the tree address upstream (where the circles are instantiated) reconfigures (i.e. {0} to {0;0}). How is that unpredictable?
My point is, it doesn't just occur at one stage in the definition and not another; it's almost as though you take my point into consideration and then try to make it fit in with the existing GH data-management system. My point is to change it entirely, not stick with some hybrid mishmash. Take off the GH goggles for a second and think about it...
Oh....I didn't realize you're "that guy".
It's so odd to me that you come back here every few months or so to trash GH. I don't really understand the chip on your shoulder.