It's been more than a year since the last release of Human, so I'm excited to share the latest version with you, packed with new functionality. See the release notes for details on the new features. A few of my favorites:
Download the components below! They will also be available on food4rhino as soon as they're approved.
Yes, I am aware of SortIndex. I just don't think it's a good idea to have layers sorted along these lines by default, as I said in my other post. And the layers list is not alphabetized, it is sorted by LayerIndex.
(1) and (2) I prefer to let users build their own structuring routines for the output of the dynamic pipeline. It might make sense for you to have stuff structured by layer, but other users may prefer other ordering mechanisms. If you feed layer names in explicitly, output is grouped by layer. If you feed multiple types or multiple object name filters, output is grouped by these instead, following standard data tree behavior. There are a number of ways to achieve layer-based sorting of objects using existing components:
Letting anything be organized by the layer-palette order is extremely sticky, since it can be re-ordered on the fly by various sort mechanisms. See more on this topic in my conversation with Tim Halvorson in the comments on the Human Group page. As it stands now I do not alphabetize layers in the layer table component as you say - I use the layer index, so that as new layers are added/rearranged, the entire order of the set doesn't change in unpredictable ways. If layer sort order is preferable to you, you can use the simple one-line script I provided Tim in order to retrieve the sort index and use it to sort your layers. My priority for both object order and layer order in the components is to minimize unnecessary changes/refreshes/event listening by returning everything exactly as the SDK gives it to me.
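The one-line script mentioned above isn't reproduced in this thread, but the idea can be sketched. In a GhPython component you would read the real document layers (via `Rhino.RhinoDoc.ActiveDoc.Layers`); here mock objects stand in for RhinoCommon's `Layer` so the sorting logic is self-contained. The layer names and numbers are illustrative, not from the source.

```python
# Illustrative sketch only: mock layers stand in for RhinoCommon's Layer,
# which exposes Index (creation/table order) and SortIndex (the layer
# palette's current manual stacking order).
from collections import namedtuple

Layer = namedtuple("Layer", ["Name", "Index", "SortIndex"])

layers = [
    Layer("Walls", 0, 2),
    Layer("Doors", 1, 0),
    Layer("Windows", 2, 1),
]

# Human's Layer Table returns layers in Index order, so the output stays
# stable as the palette is rearranged:
by_index = sorted(layers, key=lambda l: l.Index)

# To match the palette's manually arranged stacking order instead, sort by
# SortIndex yourself:
by_palette = sorted(layers, key=lambda l: l.SortIndex)

print([l.Name for l in by_palette])  # ['Doors', 'Windows', 'Walls']
```

This is why palette order is "sticky" to depend on: `SortIndex` changes every time the user drags a layer, while `Index` only changes when layers are added or removed.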
(3) I've added the Include Locked and Include Hidden options to the latest release attached to this post. However, this only pays attention to hidden objects, not hidden layers. I do not want to include a toggle for "Include Hidden Layers" because this would force the dynamic pipeline to expire every time a layer's visibility changed, which I do not want - for most purposes this would cause unnecessary recomputing. However, with the components as they exist now, you can drive the dynamic pipeline with a layertable set to auto update, like so:
This has the effect I think you are after, which is to ignore objects on layers that are hidden - and recognize them as soon as the layer is turned on.
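The filtering step described above can be sketched as follows. This is a hedged illustration of the idea, not Human's internal code: mock dictionaries stand in for the Rhino layer table, and the layer names are invented for the example.

```python
# Illustrative sketch: a Layer Table component (set to auto-update) feeds
# layer names into the Dynamic Pipeline's layer filter, so objects on
# hidden layers never enter the pipeline at all.
layers = [
    {"Name": "Walls", "IsVisible": True},
    {"Name": "Construction", "IsVisible": False},
    {"Name": "Annotations", "IsVisible": True},
]

# Pass only the visible layer names to the pipeline's layer-name input;
# hidden-layer geometry is excluded at the source rather than filtered
# out of the output afterwards.
visible_names = [l["Name"] for l in layers if l["IsVisible"]]
print(visible_names)  # ['Walls', 'Annotations']
```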
Aha! No need to postprocess the output; just control the pipeline in the first place to avoid off layers.
My project is likely too slow for real time use anyway, so I plan to disable the Grasshopper solver and then issue single updates as needed using the Recompute menu item which works even with the solver "disabled."
[Testing it though, the pipeline is indeed avoiding items on hidden layers, not just hidden objects, but hidden layer updates require a Recompute command, so I'm good to go anyway.]
I'm a bit confused though still: does the Human plugin afford *any* access to the Layers palette stacking order manually arranged by the Rhino user?
No, you'll have to script that yourself (or use the script I referred you to earlier from my conversation with Tim Halvorson in the main group page comments).
Firstly Andrew, great addition! New tools are already proving very useful.
I am however having a strange experience with the Screen-Orientated text. What appears on the model is different to the print preview page, which is again different to the actual final output image.
Print Setup window (bigger):
Actual jpg (small):
Yes, I am aware of this issue, and unfortunately I don't really know what to do about it. You might have better luck with the text-to-screen component set to use "absolute" sizing rather than relative - set the last input on the component to true. Give that a try and see if it helps.
Ok, thanks man, will do. Could I also add to the request list: text justification options on 'Text to Screen'.
I'm afraid I'm getting the same results with 'Text to Screen' and 'Mesh to Screen'. The results are actually more erratic, as the position is wildly different in addition to the scale being way off.
Apologies if you already know all this, but this is what I have found:
It works if I set my 'print' options to match the viewport pixel-for-pixel. But if, for example, I double the pixels, the 'Mesh to Screen' output is still anchored to the same pixels in the centre of the image. I have an idea for a temporary fix; I will let you know if it works...
Viewport Pixels = Print Pixels (components work perfectly)
'Print' pixels double the viewport pixels (the Mesh to Screen and Text to Screen components stay the same number of pixels in size, and I presume the same number of pixels from the centre of the screen)
Ok, I just gave it a go on my lunch break. I have a temporary fix (for my problem, not a universal fix).
The main drawback of this method is that you need to know what size you are going to be printing at (in my case it's many, many A3 sheets).
Basically you can figure out the difference between your current viewport dimensions, and the size of the print output.
You create your 'key/mesh/text' off screen at your new co-ordinates. (These co-ordinates are negative X and negative Y, as in this instance they are top left, off your current screen.)
You can also scale the new 'Key' to match the new output size.
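The steps above can be sketched as a bit of arithmetic. This is a hedged reading of the workaround, with invented example numbers: since the screen-anchored key keeps its pixel size and position when the print canvas grows, you pre-scale it and pre-offset it into negative coordinates so it lands correctly on the enlarged output.

```python
# Illustrative sketch of the workaround: all pixel dimensions are made up
# for the example (here the print output doubles the viewport).
viewport_w, viewport_h = 1000, 700   # current viewport pixels
print_w, print_h = 2000, 1400        # target print pixels

# Scale factor to apply to the key so it matches the larger output:
scale = print_w / viewport_w         # 2.0 when the print doubles the viewport

# Offset the key off-screen (negative X and negative Y, i.e. up and to the
# left of the current viewport) by half the difference in each dimension,
# so the centre-anchored result lands where intended on the print:
offset_x = -(print_w - viewport_w) / 2   # -500 pixels: shift left
offset_y = -(print_h - viewport_h) / 2   # -350 pixels: shift up
```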
I'm not sure how helpful this will be to anyone, but my method is attached as an example.
Hello Nick,
I needed some justification options, as well as orientation, for Text to Screen too. What I ended up doing was use Squid, which converts text to curves, and then Curve to Screen through Human. Not the quickest or most efficient way to do this, but I got the justification/orientation options through Squid.
One very annoying issue I am having is that viewport Zoom Extents and clipping planes get seriously messed up, I think because of the points used (even with preview off) for the Curve/Mesh to Screen location inputs, which are far from zero. Andrew, is there any trick to avoid this?
best
alex
© 2024 Created by Scott Davidson.