Grasshopper

algorithmic modeling for Rhino

Hello,

I have a theoretical question regarding combining the plugins mentioned below (Colibri, Ladybug and Honeybee).

Let's say I have 5 sliders, each with a numeric domain of 1-10. Each slider changes one parameter (let's say the dimensions and rotation of a box plus the size of a window). This gives me 10^5 possibilities, that is 100,000 combinations. I would like these to be calculated by the Colibri Iterator.

I would like to connect these with Ladybug and Honeybee. Let's say I want to check the amount of sunlight [lux] on the floor and the total radiation received for each slider combination.

The Colibri 'iterator selector' lets you choose the number of combinations that will be calculated. Let's say I want to divide these 100,000 combinations into 4 even parts of 25,000 each. That way I could run each calculation in a separate Rhino+Grasshopper window on the same machine (so that 4 cores would be used to make the whole calculation faster).
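
Just to illustrate the counting and the split (outside Grasshopper, in plain Python - the names here are made up):

import itertools

# Five sliders, each with 10 integer positions (1..10).
slider_domains = [range(1, 11)] * 5

# Full-factorial enumeration: 10^5 = 100,000 combinations.
all_combos = list(itertools.product(*slider_domains))
print(len(all_combos))  # 100000

# Split into 4 even parts of 25,000, one per Rhino+Grasshopper instance.
n_parts = 4
chunk = len(all_combos) // n_parts
parts = [all_combos[k * chunk:(k + 1) * chunk] for k in range(n_parts)]
print([len(p) for p in parts])  # [25000, 25000, 25000, 25000]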

My doubts arise from the fact that Honeybee uses (as far as I know) external programs for its calculations (Daysim, OpenStudio etc.).

Here is my question: 

If I run 4 Rhino+GH windows as mentioned, is it possible to run these external Honeybee simulations simultaneously? Is there a possibility that the results would be corrupted, or that the sunlight calculations wouldn't be made for every slider combination?

Thank you for your time and help

Have a nice day.


Replies to This Discussion

Hi,

A lot of components in Ladybug and Honeybee have parallel computing capability, which means they will use all CPU cores to finish a task. Therefore, you cannot open 4 windows and run simulations simultaneously.

But you can definitely use Colibri to split the computing work across different machines.

Mingbo

Mingbo,

Thank you very much for your reply, exactly what I needed to know.

Since I'm lucky enough that YOU replied to my question, I have another one that I suppose you might know the answer to.

I'm pretty confused now, because I'd like to use brute force instead of genetic/swarm algorithms. I am worried about the fitness landscape and local extremes with my variables. On the other hand, the Colibri Iterator would have to compute millions of combinations in my case.

I'd like to use Colibri brute force + Design Explorer to explore different solutions. I am not sure how many combinations there would be, but I assume up to 10^7. That is a lot of data to compute and look through - probably impossible to do.

I looked into the 3rd example on the Design Explorer page (the building one), which is pretty close to what I want to do. As far as I understand the graph, there are 9 variables, each with a different domain. My calculation for the number of combinations (going from left to right, from 'elevator width' to 'solid wall amount') is 3*5*5*10*6*4*10*50. That is 72,000,000 combinations. On the Design Explorer page there are only around 250 solutions. I probably do not understand something, or there is a way to filter the data somehow.

My questions are: Is it even possible for the Colibri Iterator to go through such a huge data set? Have you got any tips on how to filter that data so that only chosen solutions are saved to the *.csv file? Maybe I am missing something and there is another method used for the Design Explorer examples? Any tips or tricks? :)

Thank you for your time and help,

Have a nice weekend

Hi Wujo,

We had the same discussion in our team as well. Colibri will not make any decisions for you; it only does sampling, which is what it was meant to do.

You can use Galapagos/Octopus with Colibri to do what you are trying to do.

Here is an example: http://hydrashare.github.io/hydra/viewer?owner=MingboPeng&fork=...

For the 3rd example, I am not the one who did this study, but I looked at its details. A lot of combinations simply don't exist, for example: elevatorW 10; elevatorL 12; moduleL 60.

To your question: technically, Colibri can handle 9x10^18 iterations, but I don't believe any computer could finish that much work. Even if we got all of this data into a CSV, Design Explorer (your browser) could not handle it either. The biggest data set we have tested in Design Explorer is about 5,000.
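
If a run does produce far more rows than that, one option (just an idea, not a Colibri feature) is to subsample the CSV before loading it into Design Explorer, for example:

import csv
import random

# Keep the header plus a random sample of at most 5,000 data rows, so the
# result stays within what Design Explorer has been tested with.
with open("all_iterations.csv", newline="") as f:  # hypothetical file name
    rows = list(csv.reader(f))
header, data = rows[0], rows[1:]
sample = random.sample(data, min(5000, len(data)))
with open("iterations_sampled.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(sample)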

Hope this helps,

Mingbo

Mingbo,

Thank you for your reply. I have already studied all of your hydrashare / Design Explorer examples :). I took this approach into account.

My problem is that genetic algorithms may be only partially suitable for what I am trying to do - the solution landscape may be full of local extremes, and I am worried about that. This is because, as far as I know, the fitness should change "smoothly" as a slider's value changes. I will try to explain what I mean with an example.

Let's say we want to optimize the energy use of a building, but also add a penalty function each time the overall building cost exceeds X. Our variables are its morphology (overall shape, glazing %, shading elements) and the materials used. The problem starts with the materials. Let's say each material has a cost per m^2 (y) and a thermal resistance (z). Even if that list were sorted, the jumps between consecutive values wouldn't be uniform (as they are, for example, with the sliders for box dimensions), and that may be no good for optimization purposes - there is a chance that the list-item slider for the material list would simply get stuck.

The perfect solution would be using Galapagos with a loop - for each geometry generated, all materials would be checked and the best one picked. Unfortunately, I don't think this is possible.
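
What I have in mind is roughly this (a plain-Python sketch with made-up materials and a toy fitness function standing in for the real Ladybug/Honeybee simulation):

# Made-up materials, only to show the "inner loop over materials for one
# geometry" idea; the real numbers would come from the simulation.
materials = [
    {"name": "brick",    "cost_per_m2": 45.0, "thermal_resistance": 0.8},
    {"name": "aac",      "cost_per_m2": 38.0, "thermal_resistance": 1.4},
    {"name": "sandwich", "cost_per_m2": 62.0, "thermal_resistance": 2.1},
]

def evaluate(geometry, material):
    # Toy stand-in: "energy" falls with thermal resistance, and a penalty
    # is added whenever the material cost blows the budget.
    energy = geometry["envelope_area"] * geometry["glazing_ratio"] / material["thermal_resistance"]
    cost = geometry["envelope_area"] * material["cost_per_m2"]
    penalty = 1000.0 if cost > geometry["budget"] else 0.0
    return energy + penalty

def best_material_for(geometry):
    # The inner exhaustive loop: try every material for this one geometry.
    scored = [(evaluate(geometry, m), m) for m in materials]
    return min(scored, key=lambda pair: pair[0])

geometry = {"envelope_area": 800.0, "glazing_ratio": 0.4, "budget": 40000.0}
fitness, material = best_material_for(geometry)
print(material["name"], round(fitness, 1))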

That's why I looked into brute-force optimization - it eliminates that problem, but it generates a lot of results.

I think I will have to consider a step-by-step approach - first optimize the shape and then find the right material. That, on the other hand, eliminates a lot of possible solutions.

Another way would be using Colibri and culling unpromising results before they are exported to the CSV file. That would have to be done between the Colibri Iterator and the Aggregator. The problem with that is that I would first need some solutions to compare against (for example: cost > h and energy use > i results in a cull). That would allow me to upload only the best solutions to Design Explorer, but it may take ages to check all the possibilities.
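
The gate I have in mind would boil down to something like this (made-up thresholds h and i, just to show the condition; in Grasshopper it could sit in a small GhPython component between the Iterator and the Aggregator):

def keep_solution(cost, energy_use, h, i):
    # Cull only when BOTH cost and energy use exceed their thresholds.
    return not (cost > h and energy_use > i)

print(keep_solution(cost=180000, energy_use=95, h=150000, i=80))  # False -> culled
print(keep_solution(cost=120000, energy_use=95, h=150000, i=80))  # True  -> kept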

Sorry for such long and intricate posts. I am trying to organize my thoughts, and writing helps me do so.

I haven't found the perfect approach for my purpose yet. If you have any further thoughts on the topic, I'd really like to hear them. Thank you for your effort, time and help - I really appreciate it.

Hi Wujo,

I just did a quick test filtering the variables (in this case: x >= 5, y >= 5) before they are saved to the CSV; you can also do the same check with the Phenome (in this case: area). This will only save what passes the filter (see the CSV screenshot). I guess this might be a solution, if I understand your problem correctly.
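
In plain Python terms the idea is roughly this (made-up column names and file name, not the actual Colibri internals):

import csv
import itertools

# Enumerate x and y, but only write rows that pass the filter
# (x >= 5 and y >= 5), together with the derived "area" value.
with open("filtered_iterations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "y", "area"])
    for x, y in itertools.product(range(1, 11), repeat=2):
        if x >= 5 and y >= 5:
            writer.writerow([x, y, x * y])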

Mingbo,
Exactly - I must admit I assumed that each time 'write' is set to true, a new file is created; I didn't check it. Since it doesn't, that pretty much seals the deal :).
Once again thanks so much!

Mingbo,

I tested this method and it seems to work - I set up a definition with a true/false "trigger" and got around 3,000 solutions out of 20,000 combinations. Unfortunately, I encountered a problem with the CSV file: all parameters are saved in one column. I think it might be connected to the TT Toolbox version - my component inputs are slightly different from those in your definition, e.g. "Iteration Genome (ID)" instead of "Genome". Have you encountered something like that?

This is what it looks like when I open the file
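
In case it is only a delimiter mismatch on my side, I suppose the file could be re-split with a small script (just a guess, assuming the values are really comma-separated inside that single column; file names are hypothetical):

import csv

# Guess: the file is comma-separated but the spreadsheet expects semicolons
# (or the other way round), so every row lands in a single cell. Re-read it
# with an explicit delimiter and write it back with the expected one.
with open("colibri_output.csv", newline="") as src, \
        open("colibri_output_fixed.csv", "w", newline="") as dst:
    reader = csv.reader(src, delimiter=",")   # delimiter actually used in the file
    writer = csv.writer(dst, delimiter=";")   # delimiter the spreadsheet expects
    for row in reader:
        writer.writerow(row)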

Another thing I would like to know - is it possible to disable saving images?

Thanks once again

Today I found a way to disable the image saving:

Connect a Boolean toggle set to "False" to the "Save As" input in the image settings, then connect it to the Aggregator as usual.

It will save only one picture named "false", and that's it.

Amit

Thanks Amit!

My computer was overloading while running thousands of iterations because of the image creation, and this is the solution I needed!
