Honeybee EPlus crash: Exception of type 'System.OutOfMemoryException'

Hi all,

I ran a big E+ model (50 zones, ~1,000 windows) and it crashed Grasshopper when it tried to read the results file. See the image below. I was trying to read zone results and energy flow data from every window surface at the same time, and it appears that this is too much for my computer. Can anyone think of a way I can get around this? Gathering 8,760 hourly values for each window surface really bumps up the amount of data being brought in; with ~1,000 windows, that is already nearly 9 million values for every output field requested.

[image: screenshot of the error message]

It would be great if the "read results" components allowed us to select individual zones rather than bringing in all of the data. Is there a way to do this? I tried to bring in only the columns of data that I want directly from the CSV file, but there is too much data and I have no idea which columns to choose based on my zone/window surface selection. Any ideas? I'm open to any suggestions.


Thanks for your help.


Replies to This Discussion

I agree. We should have some sort of query-based results reader. It would probably be best to do it through SQL rather than the CSV file. I'm pretty sure we discussed this before, and I think it's important to implement it at some point soon. There is a WIP component for reading .sql results, but I'm not sure how useful it is at this point.
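In the meantime, the eplusout.sql file can be queried directly with Python's built-in sqlite3 module if you run the script in standard CPython (IronPython inside Grasshopper doesn't ship the sqlite3 module, which is part of why the parser dependency is a problem in the first place). A minimal sketch, assuming a placeholder file path and zone name, and assuming the database follows the ReportData / ReportDataDictionary / Time schema of recent EnergyPlus versions (older versions used ReportVariableData tables instead):

```python
import sqlite3

# Placeholder path to the EnergyPlus SQLite output.
SQL_FILE = r"C:\ladybug\project\EnergyPlus\eplusout.sql"

conn = sqlite3.connect(SQL_FILE)
cursor = conn.cursor()

# Pull hourly values for one variable of one zone instead of loading
# the whole result set. KeyValue holds the zone/surface name and Name
# holds the output variable, as requested in the IDF.
query = """
    SELECT t.Month, t.Day, t.Hour, d.Value
    FROM ReportData AS d
    JOIN ReportDataDictionary AS dd
        ON d.ReportDataDictionaryIndex = dd.ReportDataDictionaryIndex
    JOIN Time AS t
        ON d.TimeIndex = t.TimeIndex
    WHERE dd.KeyValue = ?
      AND dd.Name = ?
      AND dd.ReportingFrequency = 'Hourly'
"""
rows = cursor.execute(query, ("ZONE_1", "Zone Mean Air Temperature")).fetchall()
conn.close()

print("%d hourly values read" % len(rows))
```

Because the WHERE clause filters before anything is materialized, memory use stays proportional to the slice you ask for rather than to the size of the file.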

I'm also not sure if there is any other tool that can handle this amount of data and visualize the results. Have you tried OpenStudio?

The only solution I can think of right now is to write custom code to parse the results from the CSV file, which will probably take a considerable amount of time. A rough sketch of that kind of parser is below.
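To address Leland's column question directly: eplusout.csv headers follow the pattern "KEYVALUE:Variable Name [Units](Frequency)", so the columns for a given zone or window surface can be picked out by matching on the key value. This sketch streams the file row by row so the whole CSV never sits in memory; the path and names are placeholders:

```python
import csv

# Placeholders: adjust the path and the zone / window surface names
# (EnergyPlus upper-cases them) to your project.
CSV_FILE = r"C:\ladybug\project\EnergyPlus\eplusout.csv"
KEYS = ["ZONE_1", "ZONE_1_SRF_0_GLZ_0"]

with open(CSV_FILE, "r") as f:
    reader = csv.reader(f)
    header = next(reader)

    # Headers look like "ZONE_1:Zone Mean Air Temperature [C](Hourly)",
    # so a column belongs to a key when its header starts with "KEY:".
    keep = [i for i, h in enumerate(header)
            if any(h.strip().startswith(k + ":") for k in KEYS)]

    results = {header[i]: [] for i in keep}
    timestamps = []

    # Stream row by row so memory stays flat regardless of how many
    # columns the full file contains.
    for row in reader:
        timestamps.append(row[0])
        for i in keep:
            # Mixed reporting frequencies leave blanks in some rows.
            results[header[i]].append(float(row[i]) if row[i].strip() else None)

print("%d rows; %d columns kept" % (len(timestamps), len(results)))
```

Memory use scales with the number of kept columns times the number of rows, not with the full width of the file.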

Mostapha

Leland and Mostapha,

There are definitely some simple ways around this (I was making models with as many as 70 zones for my thesis) and there are some more complex ones if you really need a specific slice of the data.

First, if you can run the simulation for just an analysis period that you are interested in, this can really cut down the parsing time and memory a lot. Try running the simulation for a typical or extreme hot/cold week to see if you can test the thing you are interested in that way. This was primarily how I got around the issue for my thesis, and I can say that the comfort map workflow allows you to scroll through the data any way you like even when it is not an annual simulation (in the past, it needed to be an annual sim to use the analysisPeriod input). I just implemented this feature on the color zones and color surfaces components (https://github.com/mostaphaRoudsari/Honeybee/commit/519823b5a3a9c85...).
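(For scale: a one-week analysis period is 168 hours, so each requested output carries 168 values instead of 8,760, roughly a 52-fold reduction before anything else changes.)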

Second, if you can curate the outputs that you are interested in through the "simulationOutputs" of the "Generate EP Output" component, this will also cut down on the amount of data you are bringing in through the result reader. Obviously, cutting out surface outputs reduces the amount of imported data far more than cutting zone outputs, since a model like this has many times more surfaces than zones.
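As an illustration, a curated request might look something like the list below, written as the EnergyPlus Output:Variable strings that get passed along with the IDF. The variable names are standard E+ outputs, but treat the exact selection as an assumption to adapt to your study:

```python
# A pared-down output request (illustrative selection).
simulation_outputs = [
    "OutputControl:Table:Style,Comma;",
    # Zone-level outputs: one column per zone.
    "Output:Variable,*,Zone Mean Air Temperature,hourly;",
    "Output:Variable,*,Zone Ideal Loads Supply Air Total Cooling Energy,monthly;",
    "Output:Variable,*,Zone Ideal Loads Supply Air Total Heating Energy,monthly;",
    # Surface-level outputs add one column per surface (~1,000 windows
    # here), so request them only when they are the point of the study:
    # "Output:Variable,*,Surface Window Heat Gain Energy,hourly;",
]
```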

Third, if you can accept monthly or total results instead of hourly ones for some of the outputs, this also cuts down on the data substantially.
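(Monthly results mean 12 values per output instead of 8,760, roughly a 730-fold reduction for every zone or surface that reports them.)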

Fourth, as a last-ditch effort, you can always connect just the zones that you are interested in to the Run Simulation component and use adiabatic surfaces in place of the adjacent surfaces. I know that this is not ideal from a simulation accuracy perspective, but it is clearly better than no results at all. I recently updated the "Make Adiabatic by Type" component to allow you to make any type of surface adiabatic, and this should help with setting such cases up (https://github.com/mostaphaRoudsari/Honeybee/commit/1b5329ca61d9547...).

To be clear about the SQL option: I agree that this is the best way to deal with the huge data sets we get out of E+, and I have been wanting to build this ability into the Run Simulation workflow for a while. However, I have been unable to move forward because parsing of the sql file relies on the OpenStudio parser (asking people to download a separate SQLite parser and writing corresponding code for it seems to be too much work). I experience the OpenStudio PINVOKE error on my system, and this has halted my ability to develop such a workflow.

With a few hours, I could write the ability into the Read EP Result component to bring in data only for an HBZones input that I could add. I have been questioning whether this would be worth it because I was still holding out hope that the PINVOKE error would be solved. Mostapha, could you let me know what time frame you expect for fixing the PINVOKE error? Knowing that will help me decide whether it's worth investing in an input on the current result reader. Also, let me know if there is any way I might help, or whether I should begin researching a solution.

-Chris

Chris and Mostapha,

Thanks for your advice. In this case I went with Chris's fourth option and ran individual zones, but I'll look into the other methods of reducing output data before I run my next large model. Good luck with fixing the OpenStudio bug; the SQL option sounds promising.
