Possible to use Galapagos/Octopus genome approach without fitness (LBug+HBee)? Alternatives to or 'enhancements' of Pollination

Hi all,

I have recently decided to venture upon the seas of Grasshopper, ready to face perils yet hoping to discover marvels.

After briefly testing Archsim, which I found pretty simple but probably not very flexible (still a very useful tool), I switched to LadyBug+HoneyBee, with great expectations.

My goal is to set up a parametric thermal analysis of a room (or a group of rooms) and perform dynamic simulations over a high number of possible combinations of parameters, regarding:
- building geometry (orientation, window-to-wall ratio, etc.)
- envelope (wall construction, insulation thickness, glazing type, etc.)
- occupancy and ventilation patterns
- other factors.

I have come across the Pollination tool, which is certainly a very effective way of organizing a limited number of final outputs for a wide range of input scenarios: well done!!

However, while trying to run a Pollination example file I have realised that the processing time could become very significant with a high number of simulations, as the tool creates a folder for each scenario along with idf + csv files containing the entire simulation output (hourly temperatures, etc.). Moreover, each simulation is performed after the previous one, with no option to run them in parallel.

I was wondering whether using Galapagos or Octopus, WITHOUT aiming at maximising/optimising anything, could be a faster way to do it. What I am ultimately after is creating a massive database collecting a high number of scenarios, which would then be queried through SQL/Python.
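
For the database side, here is a minimal sketch of what I have in mind; the table and column names are purely illustrative:

    import sqlite3

    # one row per simulated scenario (the values below are placeholders)
    conn = sqlite3.connect("scenarios.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS results (
        scenario_id INTEGER PRIMARY KEY,
        orientation_deg REAL,
        wwr REAL,
        insulation_m REAL,
        annual_heating_kwh REAL,
        overheating_hours REAL
    )""")
    conn.execute("INSERT INTO results VALUES (?, ?, ?, ?, ?, ?)",
                 (1, 180.0, 0.4, 0.10, 5230.0, 112.0))
    conn.commit()
    conn.close()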

Am I asking too much? :P

Thanks in advance,

Andrea

Replies to This Discussion

Galapagos is not good for systematic searches; it's a stochastic solver, meaning the order of states it examines is very unpredictable, and there's no guarantee that all states will be visited.

There are better ways to iterate over all possible states, and there have been some script components posted on these forums that do so (good luck trying to find those though, the forum search is pretty terrible).
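
As a rough illustration, a plain Python loop over all combinations might look like this (the parameters and candidate values are made up; each state would be pushed into the definition, e.g. from a GhPython component):

    import itertools

    orientations = [0, 90, 180, 270]     # degrees
    wwr_values   = [0.2, 0.4, 0.6]       # window-to-wall ratios
    insulation   = [0.05, 0.10, 0.15]    # insulation thickness in metres

    for orientation, wwr, thickness in itertools.product(
            orientations, wwr_values, insulation):
        # set the sliders/inputs to this state and run one simulation here
        pass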

You're right that the number of individual states can grow very large very quickly, so you'll need to find a balance between the level of saturation you're willing to settle for and the amount of time you're willing to grant.

How many variables (sliders) do you have, how many different values do you want to examine per slider, and how long does it take to compute a single iteration? Basically, multiply it all together to get the total time needed.
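
As a back-of-envelope sketch (the slider counts and timing below are made up):

    values_per_slider = [4, 6, 5, 3]   # values examined per slider
    seconds_per_run = 10               # time to compute one iteration

    total_states = 1
    for n in values_per_slider:
        total_states *= n

    hours = total_states * seconds_per_run / 3600.0
    print("%d states, roughly %.1f hours" % (total_states, hours))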

David,

thanks for your reply. The reason I asked about this is that I have already made a preliminary calculation of the total number of combinations I might need to consider, and the result would exceed 100k simulations.

Assuming an optimistic simulation time of 10 seconds (which would be exceeded if I were using the Pollination example I found in the forum, as E+ would write idf files etc. into separate folders), this would imply almost 300 hours of simulation time (100,000 × 10 s ≈ 278 hours).

Do you know any way to speed this up, or do you recommend looking elsewhere? (e.g. http://www.jeplus.org/wiki/doku.php?id=start)

My aim is to perform something similar to this: http://www.arcc-network.org.uk/wordpress/wp-content/pdfs/CREW-overh...
although it is stated that: "The assessment of compatible combined adaptations involved approximately 100,000 simulations and the process was automated and based on a cluster of parallel processors at IESD."

Hi Andrea,


Which Pollination project are you referring to: running simulations in the cloud (http://honeypatch.github.io/pollination/index) or just the visualization part (http://mostapharoudsari.github.io/Honeybee/Pollination)?

In general, if you are trying to brute-force through a set of options, you don't want to use optimization algorithms. You can use a uniform [random] distribution of input parameters to produce an acceptable dataset for further exploration of the data.
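
For instance, something along these lines (parameter names and ranges are illustrative only):

    import random

    random.seed(42)  # fixed seed so the sample is reproducible
    samples = [{
        "orientation_deg": random.uniform(0.0, 360.0),
        "wwr":             random.uniform(0.1, 0.9),
        "insulation_m":    random.uniform(0.02, 0.20),
    } for _ in range(1000)]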

Back to running all the simulations: Honeybee is not really optimized to run 100K simulations! I think your best solution is to generate the idf files with Honeybee and then use one of the available cloud-based systems to run the files and get the results.
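
If you do want to stay local, one rough sketch is to fan the generated idf files out over several processes. This assumes the EnergyPlus command-line interface is on your PATH; the flags can differ between versions:

    import glob, os, subprocess
    from multiprocessing import Pool

    WEATHER = "weather.epw"  # hypothetical weather file

    def run_one(idf_path):
        # each run gets its own output folder next to its idf file
        out_dir = os.path.splitext(idf_path)[0] + "_out"
        subprocess.check_call(["energyplus", "-w", WEATHER,
                               "-d", out_dir, idf_path])
        return out_dir

    if __name__ == "__main__":
        pool = Pool(4)  # e.g. one worker per physical core
        pool.map(run_one, glob.glob("idf_files/*.idf"))
        pool.close()
        pool.join()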

I recently developed two components which let you run the analysis using Autodesk's cloud system (https://beta.autodesk.com/callout/?callid=5366FFEA6FCC4F6A94E19B78A...), which I haven't released yet. Their API is not as flexible as it should be, and I'm afraid I won't be able to support users once I release it to the public, but I can share it with you if you want to give it a try > (https://www.facebook.com/LadyBugforGrasshopper/photos/a.44232096911...)

Also, you may want to check Apidae (https://apidaelabs.com/). I haven't tested it myself yet, but it looks like it may be what you are looking for. I think you can email them directly, but I can also put you in touch with them if you haven't heard from them.

Best of luck with your thesis; I am looking forward to seeing the output.

Cheers,

Mostapha

PS: I don't receive notifications for general discussions. It's probably a better idea to post your questions to the Ladybug+Honeybee group (http://www.grasshopper3d.com/group/ladybug/forum). I just saw this on the Grasshopper page by chance.

So this is an older post, but still: what values do you use for the thermal conductivity and/or diffusivity of the materials you are working with? Because in my opinion, it's crucial to work with real-life values right from the beginning. I know that many people start with assumed values, but in my experience it doesn't work that way.
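
For reference, some indicative textbook conductivities; these are only ballpark figures, so always check the actual product data sheets:

    # indicative thermal conductivities in W/(m.K), not product data
    conductivity_w_mk = {
        "mineral wool":   0.040,
        "EPS insulation": 0.035,
        "softwood":       0.13,
        "brick":          0.7,
        "glass":          1.0,
        "dense concrete": 1.7,
    }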
