Grasshopper

algorithmic modeling for Rhino

Could someone briefly explain or show how to use these components?

It looks like they can show exposure energy colors or shadows on a mesh directly, without other components...

Or is it the same approach of using 0 or 1 values to dispatch different color previews?

Thanks for your help!


Replies to This Discussion

The two components work similarly. The occlusion one is simpler, so I'll start with that. Essentially, you give it sample points (mesh vertices, for example), some kind of occluding geometry in the form of meshes, and a set of vectors representing light rays. The H output returns a number for every point, and that number is the number of rays that were occluded (blocked) by the occluding mesh.

The Exposure component works much the same way, except it takes a mesh as its input directly, instead of points. It also allows you to assign each light ray vector an energy value, so some are "brighter" than others. It then returns the sum total of energy hitting each vertex of the receiving mesh.

Attached is a quick example file that shows how to color a mesh using the output data from either component.

Hope that helps clear it up a little!
Attachments:
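For anyone who wants to see the idea outside of Grasshopper, here is a minimal pure-Python sketch of what the Occlusion component computes: for each sample point, count how many of the supplied ray directions are blocked by the occluding geometry. The occluder here is a single triangle rather than a full mesh, and all names are illustrative, not the component's internals.

```python
# Minimal sketch of per-point occlusion counting (the "H" output idea).
# Occluders are triangles; a real component would handle whole meshes.

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_hits_triangle(origin, direction, tri, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection, hits in front of the point only."""
    v0, v1, v2 = tri
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:          # ray parallel to triangle plane
        return False
    inv = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = cross(s, e1)
    v = dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return dot(e2, q) * inv > eps   # intersection parameter t > 0

def occlusion_counts(points, rays, triangles):
    """For every sample point, the number of rays blocked by any triangle."""
    return [sum(1 for r in rays
                if any(ray_hits_triangle(p, r, t) for t in triangles))
            for p in points]

# Two points under a horizontal triangle; one ray pointing straight up.
tri = [((-10, -10, 5), (10, -10, 5), (0, 10, 5))]
pts = [(0, 0, 0), (100, 0, 0)]      # second point lies far outside the triangle
rays = [(0, 0, 1)]
print(occlusion_counts(pts, rays, tri))   # [1, 0]: first blocked, second not
```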
Andrew,

I started to play a bit with the components, but haven't had a look at your file...

One thing I haven't had time to think through yet is that [Exposure] and [Occlusion] return values for the vertices of each mesh face.

For example, I was playing with a mesh cube as my object (4 faces per side) and the components returned 54 vertex values.

I would think the results need to be averaged, since some faces share 4 coincident vertices (mid-side and face of the cube) and some faces share 3 coincident vertices (cube corners).

Still pondering...
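The averaging described above can be sketched in plain Python: group the vertices by tolerance-rounded position and replace each value in a group with the group's mean. This is just an illustration of the idea, not a GH definition.

```python
# Average per-vertex results across coincident (same-position) vertices.
# Positions are snapped to a tolerance grid to build the groups.

def average_coincident(vertices, values, tol=1e-6):
    groups = {}
    for i, v in enumerate(vertices):
        key = tuple(round(c / tol) for c in v)   # snap to tolerance grid
        groups.setdefault(key, []).append(i)
    out = list(values)
    for idxs in groups.values():
        mean = sum(values[i] for i in idxs) / len(idxs)
        for i in idxs:
            out[i] = mean
    return out

# Two quads sharing an edge: vertices 1 and 2 coincide with 3 and 4.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 0, 0), (1, 1, 0), (2, 0, 0)]
vals  = [ 4.0,       2.0,       6.0,       4.0,       2.0,       0.0 ]
print(average_coincident(verts, vals))   # [4.0, 3.0, 4.0, 3.0, 4.0, 0.0]
```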
In that case, you could weld the mesh before importing it into Grasshopper. That would make every vertex a single point.

@[uto] component:
http://utos.blogspot.com/2010/06/new-mesh-analysis-utilities.html

or Rhino's command:
_Weld 180

- Giulio
__________________
giulio@mcneel.com
McNeel Europe, Barcelona
Hi Giulio and taz,

Our weld component is set up with the ON_Mesh::CombineIdenticalVertices function, so it is not possible to weld a mesh box with it. It is possible with Rhino's Weld command, because that uses ON_Mesh::CombineCoincidentVertices, which is currently not exposed in the SDK.
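A plain-Python sketch of the difference between the two openNURBS calls, as I understand it: "identical" compares the full vertex record (position plus normal in this toy model), so a mesh box's corner vertices, which share a position but carry different normals, never merge; "coincident" compares position only, within a tolerance, so they do. The vertex layout here is illustrative, not the actual openNURBS data structure.

```python
# Toy model: a vertex is (position, normal).

def combine_identical(verts):
    """Merge vertices whose (position, normal) records match exactly."""
    seen, out = set(), []
    for pos, nrm in verts:
        key = (pos, nrm)
        if key not in seen:
            seen.add(key)
            out.append((pos, nrm))
    return out

def combine_coincident(verts, tol=1e-6):
    """Merge vertices whose positions agree within tol, ignoring normals."""
    seen, out = set(), []
    for pos, nrm in verts:
        key = tuple(round(c / tol) for c in pos)
        if key not in seen:
            seen.add(key)
            out.append((pos, nrm))
    return out

# One box corner as seen by its three adjacent faces: one position, three normals.
corner = [((0, 0, 0), (1, 0, 0)), ((0, 0, 0), (0, 1, 0)), ((0, 0, 0), (0, 0, 1))]
print(len(combine_identical(corner)))    # 3: nothing merges
print(len(combine_coincident(corner)))   # 1: all three collapse
```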
One component works on loose points, the other works on meshes. I need meshes in the latter case since there's an option (which doesn't work yet) for Lambertian shading, i.e. the angle between the ray and the mesh normal contributes to the shading.

If you have a mesh cube, then you'd expect a sharp transition over the edge. Such a sharp transition is not possible unless there are duplicate vertices along the seams.

--
David Rutten
david@mcneel.com
Poprad, Slovakia
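The Lambertian option mentioned above can be sketched as a simple cosine weighting: the energy a ray deposits at a vertex is scaled by the cosine of the angle between the vertex normal and the direction back toward the light, clamped to zero for rays arriving from behind. Pure Python and purely illustrative, not the component's actual code.

```python
import math

def lambert_factor(normal, ray_direction):
    """Clamped cosine between the normal and the direction toward the light."""
    to_light = tuple(-c for c in ray_direction)   # the ray travels toward the surface
    d = sum(n * l for n, l in zip(normal, to_light))
    nn = math.sqrt(sum(n * n for n in normal))
    ll = math.sqrt(sum(l * l for l in to_light))
    return max(0.0, d / (nn * ll))

n = (0, 0, 1)                          # upward-facing vertex
print(lambert_factor(n, (0, 0, -1)))   # light from straight above -> 1.0
print(lambert_factor(n, (1, 0, -1)))   # 45-degree ray -> ~0.707
print(lambert_factor(n, (0, 0, 1)))    # ray from below -> clamped to 0.0
```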

 

Hi Andrew,

 

I am working on a definition that should give me the number of points on a curve that can be seen from a distant point while being obstructed by a surface. I am using the Occlusion component for this, but it doesn't seem to be working properly. It might be that I'm using the component for the wrong purpose, or that the vector definition is off, but I don't know how to fix the definition.

 

I've attached a file that hopefully explains the problem. I was expecting the number of occluded rays to be the same as the total number of rays, but it appears not to be. However, if I move the mesh horizontally or vertically, the number changes.

 

Since you seem to have the hang of it, I hope you can help.

 

Regards,

 

Roel.

Attachments:

Hi Andrew, I'd like to ask if you could explain how to calculate the energy that falls onto the receiving mesh, and how you made this animation on Vimeo: http://vimeo.com/11829735 . I can see a point moving like the sun, projecting the rays onto the volumes.

 Thanks in advance!

Hi Andrew!

I'm using the Exposure component and Galapagos to optimize the position of a set of terraces on a building façade. To simplify, I'm considering each terrace as a mesh in which I am evaluating the exposure. By default each terrace is a mesh (4 vertices and 1 face per mesh).

Should I join them before plugging them in as the "S" input of the Exposure component?

I'm also introducing the geometry of the building as an occlusion mesh (a box mesh: 8 vertices and 6 faces). If I want the Exposure component to calculate the shadow that the terraces cast on each other, should I plug them into the "O" input as well? Should I join them with the building box mesh before doing that?

Thanks a lot!!

JPGs attached

Attachments:
I'm going to piggyback on this thread.

I've been playing around a bit with Ted Ngai's solar path algorithm in conjunction with Galapagos for reducing solar exposure. Pretty fun stuff! I've run into a wall with self-shading geometries, though, and I have a feeling that Exposure and Occlusion could be the solution.

The idea is that I've got a pixelated building facade. Each facet has a binary condition (e.g. extended or not; this is kind of a pain to handle in Galapagos beyond a certain number of genes). I was hoping to distribute the extensions across the facade such that solar shading is optimized. Unfortunately, the way I have it set up now doesn't take into account shading from the extensions.

I'm sure there are several problems with this, but what jumps out at me is:
1. I haven't been able to get the meshes to join into one solid mesh, so it's a cross-referencing nightmare.
2. If there is a way to set up occlusion easily and feed it into Galapagos with 48 binary genes, it's going to be a computational hog.

Note: I set up the "binary" genes as 0-3 integer sliders (0 & 1 vs. 2 & 3) because I heard there was a bug with 0-1 integer mutations. Is this still a problem?

I've attached the definition.

Thanks in advance for any insights.
Attachments:
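The 0-3 slider workaround in the note above boils down to an integer halving; a tiny sketch (plain Python, names are illustrative):

```python
# Each gene is an integer slider in [0, 3]; the binary condition treats
# 0-1 as "off" and 2-3 as "on". Integer division by 2 does exactly that.

def gene_to_binary(g):
    return g // 2          # 0, 1 -> 0 ; 2, 3 -> 1

print([gene_to_binary(g) for g in [0, 1, 2, 3]])   # [0, 0, 1, 1]
```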
Got it to work (in theory)! I need to clean it up a bit and get my Galapagos settings fine-tuned, but it seems to be good. It really just came down to having the meshing settings properly calibrated. I'll post something detailed when I've cleaned it up.


I feel like I'm having a conversation with myself, but it turns out the mesh settings really are the sticking point here. I'm having trouble getting the surfaces to divide up evenly, so some surfaces are weighted more heavily than others because they have a higher number of sample points contributing to the overall number of occluded rays. If any meshing experts out there can give me some tips, I'd appreciate it!
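One possible workaround for the uneven sampling, sketched in plain Python rather than as a tested GH fix: weight each sample by the mesh area it represents before averaging, so densely meshed surfaces don't dominate the fitness value.

```python
# Area-weighted mean of per-sample values (e.g. occlusion counts), so that
# a surface with many small faces does not outweigh a coarsely meshed one.

def weighted_exposure(values, areas):
    """Area-weighted mean of per-sample values."""
    total_area = sum(areas)
    return sum(v * a for v, a in zip(values, areas)) / total_area

# A coarse surface (one big sample) vs. a fine one (four small samples):
vals  = [10.0, 2.0, 2.0, 2.0, 2.0]
areas = [ 4.0, 1.0, 1.0, 1.0, 1.0]
print(weighted_exposure(vals, areas))   # 6.0, not the unweighted mean 3.6
```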

© 2024   Created by Scott Davidson.