Dear Mostapha and dear everyone!
We found some problems with LB and HB which look like bugs.
Some are easy to fix and show up very often.
To fix them we have to disable and reactivate, or even replace, various components and make HB and LB fly again and again.
As the Grasshopper canvas grows more complex, the number of unpredictable LB+HB behaviors increases.
The result of an annual cumulative irradiation study is not always consistent when performed with LB+HB, but it is credible when performed with Diva on the very same Grasshopper file.
To trigger the bug, just change the value of one of the inputs without setting the HB toggle to "False", leaving it on "True".
We noticed that when the bug happens, the Radiance simulation is really quick. If the toggle on the HB simulation is switched to "False" and then back to "True", the simulation produces meaningful results.
We can solve this problem by switching the toggle, but the number of unpredictable behaviors makes us uneasy.
Does this happen for other users as well, or is the problem specific to some computers?
Could there be problems that we did not detect?
Thank you very much!
Hi Jennifer,
"To fix them we have to disable and reactivate or even replaced various items and let the HB and LB fly again and again."
This happens because of the order in which components are inserted in Grasshopper. The easy fix is to select the Ladybug component and the Honeybee component separately and press Ctrl+B, which sends them to the back of the solution order so that they are executed first when you open the file.
The result of an annual cumulative irradiation study is not always consistent when performed with LB+HB, but it is credible when performed with Diva on the very same Grasshopper file.
To trigger the bug, just change the value of one of the inputs without setting the HB toggle to "False", leaving it on "True".
We noticed that when the bug happens, the Radiance simulation is really quick. If the toggle on the HB simulation is switched to "False" and then back to "True", the simulation produces meaningful results.
It sounds like the analysis doesn't run the first time you trigger it. That's not normal, and I haven't experienced it before. Other users can comment on this. If you upload the file, more people can test it.
Mostapha
Dear Mostapha,
Thank you for the reply :)
Which file are you referring to?
I attached "HB simulation.gh"; the problem occurs in that file.
Thank you again!
Jennifer
Dear Mostapha,
We found the origin of the problem.
It is connected with the "rad parameters" component.
If you plug an empty "rad parameters" component into the "grd" component in the file attached to the original message, everything works.
If you instead leave the slot empty, it works only after switching the "RunDaylightAnalysis" component to "False" and back to "True".
Jennifer
Jennifer,
After playing around with your file a bit, I believe that I understand the specific issue that you are concerned about, and I have a best guess for what is going on under the hood (Mostapha might be able to provide more insight). Whenever you run a new simulation in Radiance, it is not always necessary to re-write all of the initial simulation files from scratch. These initial simulation files include both a .rad geometry file and a separate .pts file that contains the test-point locations. If all that you are changing in a given parametric run is the location of the test points (as in your case), it is not necessary to re-write (or re-interpret) the entire .rad geometry file.
My guess is that there is some type of check for this built into either the code Mostapha wrote or the Radiance functions that Mostapha is calling. As such, it seems that the .rad geometry file is not being completely re-written (or re-interpreted by Radiance) when all that you change is the test points, and this actually seems to be saving you an extra 10 seconds each time you run the component without changing the materials or the building geometry. Other times (like when you plug in custom radParameters), it seems to re-write (or re-interpret) the .rad geometry file from scratch, since this file is probably affected by customized rad parameters.
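To make that guess concrete, here is a minimal Python sketch of the kind of check I am describing. The function, arguments, and flags are my own invention for illustration, not Honeybee's actual API:

```python
import os

def write_simulation_files(rad_file, pts_file, rad_geometry, test_points,
                           geometry_changed, rad_params_changed):
    """Re-write only the simulation inputs that actually changed."""
    # The .pts file is cheap to re-write, so refresh it on every run.
    with open(pts_file, "w") as f:
        for point, vector in test_points:  # each a 3-tuple of floats
            f.write("%f %f %f %f %f %f\n" % (point + vector))

    # Re-writing the .rad geometry file is the slow step, so skip it
    # when neither the geometry nor the rad parameters have changed
    # since the last run and the file already exists on disk.
    if geometry_changed or rad_params_changed or not os.path.isfile(rad_file):
        with open(rad_file, "w") as f:
            f.write(rad_geometry)  # assume a ready-made .rad string
```

A check like this would explain exactly what you are seeing: moving test points alone takes the fast path, while plugging in custom radParameters forces the slow, full re-write.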
So far, if this explanation holds, there would be no cause for concern on your end, but I also recognize that the difference between these long and short simulations is giving you radiation results that are ever so slightly different from each other (by my estimate, they differ by about 0.2%). Compared to the other types of assumptions that the Radiance model is making, though, these are mere rounding errors that probably originate from the number of decimal places in the vertices of the .rad geometry file.
Rather than worrying about whether your simulations are giving you the right rounding errors to produce matching results, I would encourage you to instead contemplate how closely your Radiance results match reality, given all of the assumptions you are making about the climate (with the EPW file for a "typical" year) and the number of light bounces in the Radiance simulation. To give you an example, I ran your model with a higher-quality simulation (3 ambient bounces), and this gives results that differ by 1.1% from the original simulation you were running with only 2 ambient bounces (practically an order of magnitude larger than 0.2%).
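For reference, here is the simple arithmetic behind those two percentages. The irradiation numbers below are illustrative placeholders chosen to reproduce the spreads I quoted, not values read from your file:

```python
def pct_diff(a, b):
    """Percent difference relative to the mean of the two values."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

# Illustrative numbers only (e.g. kWh/m2):
short_run, long_run = 1002.0, 1000.0   # same parameters, ~0.2% apart
ab2, ab3 = 1000.0, 1011.1              # 2 vs 3 ambient bounces, ~1.1% apart

print("run-to-run spread: %.1f%%" % pct_diff(short_run, long_run))  # 0.2%
print("-ab 2 vs -ab 3:    %.1f%%" % pct_diff(ab2, ab3))             # 1.1%
```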
To address your unease, I will say that, for a long time, I also felt uneasy any time I encountered something that seemed unpredictable in software I was using. Once I started coding my own stuff, though, I quickly realized that unpredictable behavior is an unavoidable aspect of all software. There is always a tradeoff between accurate results and the time it takes to get them, which produces a multitude of possible ways to arrive at a solution. Add to this complex situation the fact that there is an almost infinite number of possible inputs to a given set of code.
Because of this unpredictable multitude of cases, no application is completely free from limitations and assumptions. In this light, what ends up being more important than the actual calculation method is the social infrastructure in place to help understand what is being run under the hood, which is why both Radiance and Honeybee are open source and why we try to build a robust community of support through forums like this one!
-Chris
Hi,
Just to strengthen Chris's comments, and without checking the files: Radiance simulations are stochastic, so there is a factor of randomness in the results. That's why the same simulation can give slightly different results. There are plenty of comments on this, if not in this forum then certainly in others.
So if this is the case, I'd be fine with a 0.2% difference for the same parameters.
-A.
I tested the file and the results look fine to me. As Abraham mentioned, Radiance uses stochastic sampling, and up to 2-5% difference between runs is pretty normal. Here is a post on the Radiance group for your reference: http://www.radiance-online.org/pipermail/radiance-general/2012-Octo...
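If you want a quick way to tell real problems apart from this normal stochastic spread, a small check like the sketch below would do it. The 5% threshold is just the rule of thumb from this thread, not anything specified by Radiance:

```python
def runs_agree(results_a, results_b, tolerance_pct=5.0):
    """True if paired results differ by less than the given percentage."""
    for a, b in zip(results_a, results_b):
        mean = (a + b) / 2.0
        if mean > 0 and abs(a - b) / mean * 100.0 > tolerance_pct:
            return False
    return True

# Example: two repeated runs of the same study (made-up values):
run1 = [812.4, 645.0, 901.7]   # cumulative irradiation per test point
run2 = [810.9, 648.2, 899.5]
print(runs_agree(run1, run2))  # True: within normal stochastic spread
```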