I just wanted to see if anyone's having a similar experience to what I'm seeing with multithreading for Honeybee, specifically with the Honeybee_Run Daylight Simulation component.

As a test, I ran the same simulation with two different settings: once with _numOfCPUs_ set to 1, and again with it set to 10.  I'm running it on a 6-core/12-thread CPU, so it should have no problem handling 10 threads.

I've attached an Excel file with my results showing how long the single-thread run took vs. how long each thread took on the 10-thread run.  What it amounts to is that the single thread took 51 minutes, and with the 10-thread option it took 47 minutes for everything to finish.  In the latter case the first 9 threads finished in 28-34 minutes, which would be a significant time savings over the single-thread run, but the last thread still took almost as long as the whole single-thread run, leaving only about an 8% overall time savings.

This is only one simple comparison and only one simulation type, but I've seen the same result over and over again over the past year or so with several simulation types (annual, irradiance, illuminance, ...).  I also don't think it's necessarily a Honeybee problem, because even running the simulations myself, creating the octree and running Radiance and Daysim from the command line, I had the same issue: splitting a model up and running parallel simulations that were then recombined took just about as long as running it all in one thread.  Add to this the time needed to combine the different result files at the end, and the multithreading time savings become even smaller.
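
For what it's worth, the kind of command-line run I mean looks roughly like the sketch below: the octree is built once, then separate rtrace processes are launched on the point chunks, and the results are concatenated at the end. The file names, ambient options, and CPU count are just placeholders from my setup, not anything Honeybee writes out.

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    # Placeholder names -- not the files Honeybee actually generates.
    OCTREE = "scene.oct"                    # built once: oconv materials.rad scene.rad > scene.oct
    CHUNKS = ["grid_0.pts", "grid_1.pts"]   # point files already split into chunks
    NUM_CPUS = 10

    def run_rtrace(chunk_path):
        """Run one rtrace process on a chunk of test points (irradiance calc)."""
        result_path = chunk_path.replace(".pts", ".res")
        cmd = ["rtrace", "-h", "-I", "-ab", "2", "-ad", "1000", OCTREE]
        with open(chunk_path) as pts, open(result_path, "w") as res:
            subprocess.run(cmd, stdin=pts, stdout=res, check=True)
        return result_path

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=NUM_CPUS) as pool:
            results = list(pool.map(run_rtrace, CHUNKS))
        # Recombining the per-chunk results is the extra step that eats
        # further into whatever time the parallel runs saved.
        with open("grid_all.res", "w") as combined:
            for path in results:
                with open(path) as part:
                    combined.write(part.read())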

So is it me, my machine, Radiance, bad karma?


Replies to This Discussion

Hi Timothy,

Thank you for reporting this. I have seen this before but never really fixed it. It is half bad karma (how Radiance works) and half bad code!

Back to the karma part: it is about how Radiance works. It calculates the scene first and then starts the ray-tracing process. Breaking the points down into subgroups only helps with the second part, so in cases where calculating the scene takes a long time it won't really help.
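
To put rough numbers on it (made-up values, only to show the effect): if the scene part is a fixed cost and only the per-point tracing is divided between the CPUs, the best case looks something like this:

    # Made-up numbers, only to illustrate why splitting the points has a ceiling.
    serial_minutes = 25.0    # scene calculation (not divided between CPUs)
    parallel_minutes = 26.0  # per-point ray tracing (divided between CPUs)
    total_single = serial_minutes + parallel_minutes  # ~51 min on one CPU

    for cpus in (1, 2, 4, 10):
        total = serial_minutes + parallel_minutes / cpus
        print("%2d CPUs: %5.1f min  (speedup %.2fx)" % (cpus, total, total_single / total))

    # Even with 10 CPUs the run can only drop to ~27.6 min in this example,
    # and an uneven point distribution or the time spent recombining the
    # result files eats further into that.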

But if only the last file takes much longer, that is my fault for not writing a better distribution function. I assume the number of test points for the last CPU was larger than for the rest. I have modified the code, and if you update from GitHub and try again it should hopefully give you better performance overall.
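
The idea of the fix is just to spread the leftover points across the groups instead of putting them all in the last file. Something along these lines (a simplified sketch, not the actual Honeybee function):

    def distribute_points(points, num_cpus):
        """Split points into num_cpus groups whose sizes differ by at most one."""
        base, extra = divmod(len(points), num_cpus)
        groups, start = [], 0
        for i in range(num_cpus):
            count = base + (1 if i < extra else 0)  # first 'extra' groups take one more point
            groups.append(points[start:start + count])
            start += count
        return groups

    # e.g. 105 points over 10 CPUs -> sizes [11, 11, 11, 11, 11, 10, 10, 10, 10, 10]
    print([len(g) for g in distribute_points(list(range(105)), 10)])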

Keep me posted about the results.

Mostapha

I'll download the code and try it out this week, but I'm not confident it will make a huge difference.  Not through any fault of Honeybee; it's just that the last points file only had 5 more points than each of the other points files.

I also did some tests a few months ago on how long the simulations take at varying point densities, and as you've probably seen, there's nowhere near the difference you would expect between calculating a couple hundred versus a couple thousand points.  I assume this is because the calculations in the first step take up the majority of the run time.  I guess at that point it's mostly simplifying the meshes that Radiance uses that would provide the most benefit?  I'll have to do some tests on that next.
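
One way I'm planning to check that is to time the same scene at a few point counts and fit time = fixed + per_point * n; if the fixed part dominates, simplifying the scene geometry is where the real savings would be. The run times below are placeholders, not measured results:

    # Placeholder timings (minutes) at different point counts -- not real measurements.
    samples = [(200, 27.0), (800, 31.0), (2000, 39.0)]

    # Least-squares fit of time = fixed + per_point * n
    n_vals = [n for n, _ in samples]
    t_vals = [t for _, t in samples]
    n_mean = sum(n_vals) / len(n_vals)
    t_mean = sum(t_vals) / len(t_vals)
    per_point = (sum((n - n_mean) * (t - t_mean) for n, t in samples)
                 / sum((n - n_mean) ** 2 for n in n_vals))
    fixed = t_mean - per_point * n_mean

    print("fixed cost ~%.1f min, ~%.1f min per 1000 points" % (fixed, per_point * 1000))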

Thanks for the update though. I'll download the latest, run the same project file again, and post my results.

-Tim
