Grasshopper

algorithmic modeling for Rhino

So I was thinking we could have a nice community chat about Pollination.

I have recently started using it for parametric studies. I also had the opportunity to discuss with some wonderful people in other threads about their experiences and troubles, but I thought we could somehow bring us all together. After all, when we are talking about parametric tools and parametric design, practical application seems to be the grey area.

How do I use it? Why do I use it? I know I have many difficulties in the actual generation and manipulation of patterns in my parametric designs. Most of the examples I've run manipulate massing orientation, WWR, heights, and lengths: mainly simple parameters. I guess it is a good thing to be at a point where the above are considered normal. It means that HB/LB is a really powerful tool.

Anyway, I wondered if we could share these kinds of things here: the actual practices of our parametric models and designs, maybe creative ideas to manipulate shapes and relationships, or even ideas for generative design.

As a constructive pessimist, I will start with an issue I have: memory.

It seems that high-iteration models make me run out of memory. I am not sure why that is; it happens in most types of simulation I've tried so far. I wonder if it is what Chris and Mostapha have mentioned on occasion: the peculiar way GH handles and stores data in the computer's memory.

Is it constantly adding each iteration to memory? Or is it the way I structure the data flow? For example, I have a few native components (number, average, etc.) before the data recorder, which might multiply the things saved in memory. Or is it a limitation of the external simulation tools (E+, Radiance, etc.)? For me it's the most important issue so far, mainly because from my CFD studies I'm used to leaving the work PC simulating for a few days, and I'd love to do that in HB/LB!
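One workaround I've seen suggested for this kind of accumulation (a sketch only, with a made-up `run_iteration` standing in for one simulation run) is to skip the in-memory data recorder and append each iteration's results to a file on disk as soon as they are computed, so memory use stays flat no matter how many iterations run:

```python
import csv

# Hypothetical sketch: instead of letting a data recorder accumulate every
# iteration's results in RAM, append each row to a CSV on disk immediately.
# `run_iteration` is a stand-in for one simulation run.
def run_iteration(angle, month):
    return {"angle": angle, "month": month, "radiation": angle * month * 0.1}

def sweep(angles, months, path):
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["angle", "month", "radiation"])
        writer.writeheader()
        for angle in angles:
            for month in months:
                # Write and forget: only one row lives in memory at a time.
                writer.writerow(run_iteration(angle, month))

sweep(range(0, 90, 10), range(1, 13), "results.csv")
```

Whether this is feasible inside a GH definition depends on where the memory actually goes, of course; if the external engine itself leaks per run, streaming to disk won't save you.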

I'm adding a simple example. It calculates solar radiation for a simple configuration that changes across spatial and temporal parameters (angles and months). This one has around 3,900 iterations (that's because my boolean check will not work with multiple meshes, but that's a discussion for later) and it stops at around 2,300.

Okay, enough about my problems! How is everyone doing?

Views: 882


Replies are closed for this discussion.

Replies to This Discussion

Theodoros,

 

I haven't run into the memory issues you describe, but I'll comment on how we could use Pollination to better inform our design process. Long post, I know, but I'm curious what everyone thinks about our take on the issue.

 

How do we use it? Why do we use it? These are great questions. SGJJR sees the potential of using iterative parametric models to chart the design space. Understanding how the current design performs is helpful, but the next question is always "what can we do to make it better?" If we understand the design space, we can predict how to improve the design and discuss the ramifications of such decisions while they are being made. Too often, decisions are made without a full understanding of the consequences, because analysis is expensive and not timely enough to feasibly include in a meeting. This is especially true in conceptual design, when some of the most important decisions are made.

In practice, tools like Pollination can empower a conversational design process that allows an integrated team to collaborate far more effectively. The trick, in our mind, is predictive modeling, which Pollination supports. The goal should be to direct the team toward the critical design decisions, make the right call, and move on. This process is empowered when models account for competing metrics, such as cost, energy, and daylight quality.

I'm curious how others think these tools can be applied. It is incredible to be able to calculate 1,000 iterations, and I trust everyone on this forum to figure out the nuts and bolts to make that possible, but what do we do with it? What design decisions are we targeting? What should we be targeting?

 

One small but successful example was informing the interior design of an office building. We had some ideas about changing the room finishes, cubicle heights, cubicle finish, and creating a lighter-colored perimeter floor band. All of these things went against the interior design. By running every combination of these input parameters (at several settings), we were able to quickly determine that all successful designs had low cubicles. Rather than waste time fighting over finishes, we held firm on the cubicles and compromised on everything else. I imagine more teams can use Pollination to guide the design process in this way. Rather than using Pollination to optimize, we use it to understand the design space, identify the critical parameters, and chart a path forward.
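That "find what all the good designs share" step can be sketched in a few lines. This is a toy illustration with invented parameter names and sDA numbers, not our actual study: keep the iterations whose performance clears a threshold, then check which inputs take a single common value across all of them.

```python
# Toy results table: one dict per iteration (invented values).
results = [
    {"cubicle_height": "low",  "finish": "light", "band": True,  "sDA": 62},
    {"cubicle_height": "low",  "finish": "dark",  "band": False, "sDA": 58},
    {"cubicle_height": "high", "finish": "light", "band": True,  "sDA": 41},
    {"cubicle_height": "high", "finish": "dark",  "band": False, "sDA": 35},
]

top = [r for r in results if r["sDA"] >= 55]  # the "successful" designs

# A parameter is "critical" here if every top performer shares one value.
shared = {
    key: top[0][key]
    for key in ("cubicle_height", "finish", "band")
    if len({r[key] for r in top}) == 1
}
print(shared)  # {'cubicle_height': 'low'}
```

In this toy case only cubicle height is shared by all top performers, which is exactly the kind of signal that tells you where to hold firm and where to compromise.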

 

You mentioned massing and orientation. I think this is a huge opportunity for us all. These decisions are often made long before proper analysis is considered, because up until now that analysis has been too slow, too expensive, and not useful enough. This new tech can change all that. It's a tough problem. Site constraints, programming constraints, and client preferences rightly play a large part, but we need to understand performance as well. What do our architects need to know when developing that first blocking mass study? What drives performance? As our architects balance 18 different goals, what are the critical things they need to consider to understand performance? It's not enough to know that too much glazing will hurt EUI; that will not sway the designer. Knowing that there is no way to meet LEED Platinum with that southwest glassy atrium will. Identifying that the only solutions that perform well have certain things in common will encourage those concerns to rise in the hierarchy of criteria being considered.

 

So what metrics are we calculating? My first instinct is EUI, cost, DA, and ASE. But that's not actually what we are looking for. Our architects care about the percentage of regularly occupied area that is well daylit, or whether we are going to meet our energy or budget targets. We need to take a step past the raw metric and translate it into a design decision recommendation. Displaying the data in terms of "% well daylit", "% EUI under target", and "$/sf" is more useful to the team. Anyone have other suggestions?
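The translation step is mostly arithmetic, so here is a minimal sketch of it. All the targets and numbers are assumed for illustration (an EUI target of 30, a made-up daylit area, and so on); the point is just turning raw outputs into the target-relative numbers the team actually reads:

```python
# Sketch: convert raw simulation metrics into decision-oriented numbers.
# eui_target and all inputs below are illustrative assumptions.
def report(eui, well_daylit_area, occupied_area, cost, floor_area,
           eui_target=30.0):
    return {
        "% well daylit": round(100 * well_daylit_area / occupied_area, 1),
        "% EUI under target": round(100 * (eui_target - eui) / eui_target, 1),
        "$/sf": round(cost / floor_area, 2),
    }

print(report(eui=27.0, well_daylit_area=8200, occupied_area=10000,
             cost=24_500_000, floor_area=100_000))
```

A negative "% EUI under target" immediately reads as "we are over budget on energy", which is a sentence an architect can act on in a meeting, unlike a bare EUI number.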

 

What input parameters can we control to affect performance? Floor-to-floor height, plate depth, ceiling height, fenestration design, construction type, orientation, external shading, massing options, mechanical system choice. That's a lot of iterations, but in theory, we should be able to study all combinations. What else am I missing?
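To get a feel for how fast "all combinations" grows, here is a rough sketch enumerating a subset of the parameters listed above with made-up settings per parameter; even this modest grid is already in the hundreds of runs:

```python
from itertools import product

# Illustrative settings only; real studies would pick their own ranges.
options = {
    "floor_to_floor": [3.5, 4.0, 4.5],
    "plate_depth": [12, 18, 24],
    "wwr": [0.3, 0.4, 0.5, 0.6],
    "orientation": [0, 90, 180, 270],
    "shading": ["none", "fins", "overhang"],
    "system": ["VAV", "DOAS+radiant"],
}

# Cartesian product: every combination of one setting per parameter.
runs = list(product(*options.values()))
print(len(runs))  # 3 * 3 * 4 * 4 * 3 * 2 = 864
```

Add the remaining parameters (ceiling height, construction type, massing options) at even three settings each and the count multiplies again, which is why the memory and cloud-compute questions earlier in this thread matter so much.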

  

Can anyone share links to articles or white papers that explore this topic? I'm eager to learn more and hear what other people are thinking.

 

Hi Leland,

Thank you for your response, I am one of those who appreciate the long posts in this discussion.

You make some very good points. I really liked the example you brought up about analyzing and focusing on what matters by backing off on parameters that were not as critical. This is something the whole Pollination process can indeed provide.

I want to focus on your last point. The whole purpose of my personal efforts at work over the last few months has been to move myself and my work from assessment to design. If I don't do it, I will always be kept in the compliance part of the process, with minimal opportunities for meaningful impact. The tools and workflows to do that took, and will take, a lot of time and collective effort. They are hard to develop, but doable.

The most difficult part is communication. How do I communicate this to the project team without becoming the enemy? I'm planning to develop workshops exactly for this purpose, and perhaps a very interesting discussion for this forum would be: how exactly would you go about communicating and disseminating these tools and information to actual, real-life project teams?

LEED v4 already includes, and rewards, a parametric charrette process where a few parameters are defined to be tested. But how to do it is the question. Pollination really helps here, but there are still challenges. The biggest one, as I said, is how to do it without becoming the enemy, either because you are tampering with design issues (architects are especially sensitive) or because of a lack of knowledge of these tools and parameters, which always creates hostility from other parties.

I'll post my experiences in this group once I have them.

Kind regards,

Theodore.

Theodore,

We face similar communication issues. To get past them, we have rallied around the idea of promoting an informed design process. Data-driven design neuters artistic expression; data-informed design empowers it. Tools that map the design space, rather than funnel toward an "optimized" solution, are more useful to designers. Understanding where the current design fits within the potential alternates will help convince designers that meaningful improvements can be made. Comparative analysis to nudge the team toward a particular option, or strong data to quantify exactly how much worse one solution is than another, always leaves the design team free to choose the bad option, but encourages the good one.

We recognize that architects have to balance far more than just performance criteria. Client wishes, site constraints, and programmatic constraints all determine the best overall solution; energy or daylighting analysis is only one part. Given this reality, it is great to have data that encourages a compromise. If my solution is all or nothing, it will get rejected. If I can present incremental improvement, the team can balance that change against competing criteria.

By charting the range of possible solutions, Pollination and Design Explorer don't force a design decision; they merely inform the process. Using them to empower a conversation about the relative performance of design options, or to identify a variety of improvement options, makes these new methods far more appealing to designers than optimizing scripts or shaming energy metrics.

 

We think that a posture that respects both aesthetic quality and performance criteria encourages better design. To this end, singular optimization is too selfish and inflexible to be useful within an integrated team. We think that Design Explorer and similar tools that support conversation and compromise are the right way to go.

This is an interesting one! I featured the post so it doesn't get lost.

Agreed, Mostapha, very interesting discussion. I am trying to play with Theodoros's file and I am getting the following error with the Honeybee_Create_Pollinator component.

Runtime error (Win32Exception): WindowsError
Traceback:
line 62, in main, "<string>"
line 86, in script

Any ideas? Is this because the component is built for 32-bit? Also, could you please outline the difference between Pollination (http://mostapharoudsari.github.io/Honeybee/Pollination) and Design Explorer (http://tt-acm.github.io/DesignExplorer/)? What format of CSV can DesignExplorer accept?

Also, Mostapha, what is the status of getting Honeybee Pollination linked up to the Amazon cloud to run the simulations? That would solve the issues Theodore is encountering, right?

There is no active development on our side. We are still waiting for a reliable cloud-based service to use. OpenStudio Cloud Server, Autodesk EnergyPlus Cloud, and other available services have API access, but my experience is that none of them are really ready to be used right now. I'm still talking with people who provide cloud-based services and hope to find a reliable option soon.

The Windows error is probably because the component cannot create the file at whatever file path you have provided. Back to DesignExplorer: check the documentation here: https://github.com/tt-acm/DesignExplorer/wiki/File-Preparation

The main difference is that DesignExplorer can visualize images and 3D geometries, which Pollinator can't. If you want a sample file, select "Save Selection To File".
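On the CSV question: as far as I can tell from the wiki linked above, DesignExplorer reads a flat CSV where input columns are prefixed "in:" and result columns "out:", one row per iteration. A minimal sketch of writing such a file (the column names and values here are just examples, not a prescribed schema):

```python
import csv

# Sketch of a DesignExplorer-style CSV: "in:" prefixes mark input
# parameters, "out:" prefixes mark results. Names/values are illustrative.
header = ["in:angle", "in:month", "out:radiation"]
rows = [
    [15, 6, 812.4],
    [30, 6, 905.1],
    [45, 6, 871.7],
]

with open("design_explorer.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(rows)
```

Double-check the File-Preparation wiki page for the exact prefixes and the optional image/geometry columns before relying on this.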


© 2024   Created by Scott Davidson.   Powered by
