Grasshopper: algorithmic modeling for Rhino

What is the best way to handle point clouds with Grasshopper/Rhino?

I wish to achieve the following in the most resource-friendly way possible:

- Handling ~2,000,000+ points.

- Coloring each point individually using a sample picture.

- Baking the colored point cloud.

The X,Y,Z data of the points is read from a text file in CSV format.

Each point is to be colored individually using a sample picture.
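For illustration, here is a minimal C# sketch (RhinoCommon plus System.Drawing) of that pipeline: parse one X,Y,Z record per CSV line, then map the cloud's XY extents onto the sample picture and read one pixel per point. The file paths, column order and XY-to-pixel mapping are assumptions for the sketch, not the poster's actual setup.

```csharp
// A minimal sketch, not the poster's actual definition: read X,Y,Z rows from
// a CSV file and sample a bitmap for a per-point color.
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Globalization;
using System.IO;
using Rhino.Geometry;

public static class CloudLoader
{
  public static void Load(string csvPath, string imagePath,
                          out List<Point3d> points, out List<Color> colors)
  {
    points = new List<Point3d>();
    colors = new List<Color>();

    // Parse one "x,y,z" record per line.
    foreach (string line in File.ReadLines(csvPath))
    {
      string[] f = line.Split(',');
      if (f.Length < 3) continue;
      points.Add(new Point3d(
        double.Parse(f[0], CultureInfo.InvariantCulture),
        double.Parse(f[1], CultureInfo.InvariantCulture),
        double.Parse(f[2], CultureInfo.InvariantCulture)));
    }

    // Fit the cloud's XY extents onto the image and sample one pixel per point.
    var bbox = new BoundingBox(points);
    using (var bmp = new Bitmap(imagePath))
    {
      foreach (Point3d p in points)
      {
        double u = (p.X - bbox.Min.X) / (bbox.Max.X - bbox.Min.X);
        double v = (p.Y - bbox.Min.Y) / (bbox.Max.Y - bbox.Min.Y);
        int px = (int)Math.Min(u * (bmp.Width - 1), bmp.Width - 1);
        int py = (int)Math.Min((1 - v) * (bmp.Height - 1), bmp.Height - 1);
        colors.Add(bmp.GetPixel(Math.Max(px, 0), Math.Max(py, 0)));
      }
    }
  }
}
```

Note that GetPixel is slow for millions of samples; locking the bitmap bits and indexing the raw pixel buffer is the usual speed-up.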

With the simple setup I am showing, I managed 800,000 points with 9 GB of my computer's memory utilized.

To achieve this sort of 3D survey output:

3D surveys usually contain millions of points, and normally only specialized software can handle that volume. But I am looking into ways of manipulating all this data directly inside Grasshopper. Ideas/suggestions are more than welcome.

- How can you bake these points with the color information stored with the point? Each point would have its color set by object, so it retains the color of the custom previewer. (What you are seeing in the picture is the Grasshopper preview.) One possible approach is sketched after this list.

- Are there settings I can tweak for Rhino/Grasshopper to handle this volume of points more efficiently?
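On the baking question, RhinoCommon's Rhino.Geometry.PointCloud type stores a color with every point and bakes as a single document object, so the preview colors can survive into Rhino. A minimal sketch (one possible approach, not necessarily how the poster's definition works), reusing the parallel `points`/`colors` lists from the loader above:

```csharp
// Bake per-point colors via a single PointCloud object, which is far lighter
// than one Rhino point object per sample.
using System.Collections.Generic;
using System.Drawing;
using Rhino;
using Rhino.Geometry;

public static class CloudBaker
{
  public static void Bake(RhinoDoc doc, IList<Point3d> points, IList<Color> colors)
  {
    var cloud = new PointCloud();
    for (int i = 0; i < points.Count; i++)
      cloud.Add(points[i], colors[i]);   // point and its color travel together

    doc.Objects.AddPointCloud(cloud);    // one document object for the whole cloud
    doc.Views.Redraw();
  }
}
```

A single point cloud object is also much cheaper for Rhino to display and select than hundreds of thousands of individual point objects.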

 


Replies to This Discussion

Grasshopper doesn't support point clouds as a native data type. Each point must be stored separately, which greatly increases the memory footprint. It's also not possible to associate a colour with each point, unless you're willing to keep a list of colours synchronized in parallel with the point list.

For this sort of geometry Grasshopper is almost certainly not your application of choice. What you need is something that knows how to squeeze every last drop of performance out of huge point sets.

Apart from displaying, what other things do you want to do with this data?

Incidentally, a single point entry should require 3 * 8 bytes (assuming points are three-dimensional and stored as double-precision floating point numbers) plus 3 * 1 bytes (assuming colour is stored as byte-accuracy red, green and blue channels) = 27 bytes. More likely 28 bytes, as colours are probably stored as 32-bit integers, allowing for an unused alpha channel.

28 bytes * 800,000 points comes to roughly 22 megabytes, which is way down from 9 gigabytes. That's a roughly 400-fold memory overhead, which is pretty hefty.

Grasshopper stores points as instances of classes, so on 64-bit systems it actually takes 64 + 64 + 3 * 8 = 152 bytes per point*, which adds up to 122 MB, still way less than 9 GB. It would be interesting to know where all the memory goes...

* Grasshopper points also store reference data, in case they come from the Rhino document. This data will not exist here, but even so it will require 64 bits of storage.
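To make the arithmetic concrete, a small C# illustration (an assumed layout for the sketch, not how Grasshopper actually stores its points): packing the coordinates and an ARGB colour into a plain struct lands on the theoretical 28 bytes, and multiplying out reproduces the ~22 MB figure.

```csharp
// Illustrative only: a tightly packed per-point layout and the resulting total.
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct PackedPoint
{
  public double X, Y, Z;   // 3 * 8 bytes of coordinates
  public int Argb;         // 4 bytes of colour, unused alpha included
}

class Program
{
  static void Main()
  {
    int perPoint = Marshal.SizeOf(typeof(PackedPoint));   // 28 bytes
    long total = (long)perPoint * 800_000;                // 22,400,000 bytes
    Console.WriteLine($"{perPoint} bytes/point, {total / 1e6:F1} MB total");
  }
}
```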

Hi David,

Thanks for your input on this,

I would like to manipulate the point cloud to be able to select ranges of X,Y,Z data, to effectively cut slices through the model and get sections out of them. In general it would be very useful to be able to manipulate real 3D surveys to use as context: for sculpting apertures in buildings, visualizing in-city views accurately, etc.
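Range-based slicing of that kind is straightforward once the points sit in a flat list. A hedged sketch (names and band values are illustrative) that returns indices, so any parallel colour list stays synchronized:

```csharp
// Keep only the samples whose Z falls inside a band, yielding a horizontal
// section through the cloud; the same filter on X or Y gives vertical cuts.
using System.Collections.Generic;
using Rhino.Geometry;

public static class CloudSlicer
{
  public static List<int> SliceByZ(IList<Point3d> points, double zMin, double zMax)
  {
    var hits = new List<int>();          // indices keep parallel colour lists usable
    for (int i = 0; i < points.Count; i++)
      if (points[i].Z >= zMin && points[i].Z <= zMax)
        hits.Add(i);
    return hits;
  }
}
```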

The X,Y information are real-world coordinates, so a GPS point of a person inside this particular space would appear referenced as a dot. I wish to use the survey and the proximity to this dot to manipulate survey points in the area, for example creating visual representations of people flowing through the surveyed area. This makes most sense on a macro space like 1 km². But if this is not possible I can always reduce the extents of the observable space.
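The proximity idea could be sketched like this (the method name and radius are assumptions for illustration): recolor every survey point within a given distance of the tracked GPS dot.

```csharp
// Highlight all cloud samples within `radius` of a tracked position.
using System.Collections.Generic;
using System.Drawing;
using Rhino.Geometry;

public static class ProximityPainter
{
  public static void Highlight(IList<Point3d> points, IList<Color> colors,
                               Point3d gpsDot, double radius, Color highlight)
  {
    double r2 = radius * radius;         // compare squared distances, no sqrt needed
    for (int i = 0; i < points.Count; i++)
    {
      double dx = points[i].X - gpsDot.X;
      double dy = points[i].Y - gpsDot.Y;
      double dz = points[i].Z - gpsDot.Z;
      if (dx * dx + dy * dy + dz * dz <= r2)
        colors[i] = highlight;
    }
  }
}
```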

What would be an effective way for me to diagnose where the memory is being used up?
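One low-effort option, sketched under the assumption that the loading happens in .NET code you control: snapshot GC.GetTotalMemory around each stage of the pipeline to see which step allocates the most managed memory.

```csharp
// Rough managed-memory probe; not a substitute for a real memory profiler.
using System;

public static class MemoryProbe
{
  public static long Measure(string label, Action stage)
  {
    long before = GC.GetTotalMemory(forceFullCollection: true);
    stage();                             // e.g. () => CloudLoader.Load(...)
    long after = GC.GetTotalMemory(forceFullCollection: true);
    Console.WriteLine($"{label}: {(after - before) / (1024.0 * 1024.0):F1} MB");
    return after - before;
  }
}
```

This only sees managed allocations; Rhino's display pipeline also allocates unmanaged memory, so a process-level view in Task Manager or a memory profiler completes the picture.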

I see this thread is a bit old, but maybe this plugin helps for future point cloud queries within GH! Very easy and fast to use: http://www.grasshopper3d.com/group/tarsier
