
Hi awesome Community,

I am successfully using the bitmap component (GH/Firefly, they are very similar) to reproduce images on arrays of surfaces, yay!

I am then using a 'straight-line distance' algorithm (Euclidean distance in RGB space) to reduce the color range of these surfaces to just a handful of final colors (I'm manually picking a few target colors based on custom palettes, 15-20 colors in total).
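
Just to be explicit about what I mean by 'straight-line distance', here is a minimal C# sketch of that nearest-color lookup (the palette values below are only placeholders, not my actual target colors):

    using System;
    using System.Drawing;

    static class PaletteMatch
    {
        // Placeholder palette; in practice these would be the 15-20 hand-picked target colors.
        static readonly Color[] Palette =
        {
            Color.FromArgb(230, 225, 210),
            Color.FromArgb(190,  80,  60),
            Color.FromArgb( 60,  90,  70),
            Color.FromArgb( 40,  45,  60)
        };

        // Returns the palette color closest to c, measured as straight-line
        // (Euclidean) distance in RGB space.
        public static Color Nearest(Color c)
        {
            Color best = Palette[0];
            double bestDist = double.MaxValue;
            foreach (Color p in Palette)
            {
                double dr = c.R - p.R, dg = c.G - p.G, db = c.B - p.B;
                double d = dr * dr + dg * dg + db * db; // squared distance is enough for comparison
                if (d < bestDist) { bestDist = d; best = p; }
            }
            return best;
        }
    }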

What I am trying to implement now is a kind of dithering algorithm that spreads the color error of a single surface (which I consider equal to the straight-line distance measured in the step above) to its neighbors.
Hopefully this step will avoid producing huge flat islands of exactly the same color on the final surface wherever the original shades of the starting image have very, very close values.

As I'm really not a coding expert, the best solution I could come up with was the Floyd-Steinberg error-spreading algorithm, or one of the more complex variants described here: http://www.visgraf.impa.br/Courses/ip00/proj/Dithering1/floyd_stein...

This method requires loop cycles (e.g. with Anemone) to iterate the calculation pixel by pixel from top left to bottom right, diffusing the error only to the not-yet-processed pixels of the final image.
With this method the time required for a single loop iteration stays roughly constant, but the total processing time grows with the number of pixels in the original image (and this factor could be dramatic).
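
For reference, a single Floyd-Steinberg pass looks roughly like this in C# (a sketch only, reusing the Nearest() lookup from the snippet above; turning it into a GH/Anemone setup is a separate exercise):

    using System;
    using System.Drawing;

    static class Dither
    {
        // Floyd-Steinberg error diffusion: quantize pixel by pixel from top-left to
        // bottom-right and push the quantization error only to not-yet-processed
        // pixels (right and below), with the classic 7/16, 3/16, 5/16, 1/16 weights.
        public static Color[,] FloydSteinberg(Color[,] src, Func<Color, Color> nearest)
        {
            int w = src.GetLength(0), h = src.GetLength(1);
            double[,,] img = new double[w, h, 3]; // float buffer so accumulated error is not clipped early
            for (int x = 0; x < w; x++)
                for (int y = 0; y < h; y++)
                {
                    img[x, y, 0] = src[x, y].R;
                    img[x, y, 1] = src[x, y].G;
                    img[x, y, 2] = src[x, y].B;
                }

            var result = new Color[w, h];
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                {
                    Color old = Clamp(img[x, y, 0], img[x, y, 1], img[x, y, 2]);
                    Color quant = nearest(old);
                    result[x, y] = quant;

                    double er = old.R - quant.R, eg = old.G - quant.G, eb = old.B - quant.B;
                    Spread(img, x + 1, y,     er, eg, eb, 7.0 / 16);
                    Spread(img, x - 1, y + 1, er, eg, eb, 3.0 / 16);
                    Spread(img, x,     y + 1, er, eg, eb, 5.0 / 16);
                    Spread(img, x + 1, y + 1, er, eg, eb, 1.0 / 16);
                }
            return result;
        }

        static void Spread(double[,,] img, int x, int y, double er, double eg, double eb, double k)
        {
            if (x < 0 || y < 0 || x >= img.GetLength(0) || y >= img.GetLength(1)) return;
            img[x, y, 0] += er * k;
            img[x, y, 1] += eg * k;
            img[x, y, 2] += eb * k;
        }

        static Color Clamp(double r, double g, double b)
        {
            int ir = Math.Max(0, Math.Min(255, (int)Math.Round(r)));
            int ig = Math.Max(0, Math.Min(255, (int)Math.Round(g)));
            int ib = Math.Max(0, Math.Min(255, (int)Math.Round(b)));
            return Color.FromArgb(ir, ig, ib);
        }
    }

It would be called as, e.g., Dither.FloydSteinberg(pixels, PaletteMatch.Nearest).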

Looking for a solution that takes advantage of Grasshopper's parallel calculation potential, I thought I could avoid loop cycles entirely: spread an Nth fraction of the straight-line-distance error of every pixel to all of its surrounding neighbors (not just the subsequent ones, as in the method above), then rerun the color-matching process, and evaluate the quality of the result by counting how many pixels changed color because of this last step.
Eventually, this whole operation could be repeated several times until the desired effect is reached.
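
One pass of that loop-free variant could be sketched like this (a rough illustration only, again assuming a Nearest() palette lookup like the one above; img is a float buffer holding the error-adjusted pixel values between passes, initialised from the source image before the first call):

    using System;
    using System.Drawing;

    static class ParallelSpread
    {
        // One pass of the "spread to all neighbors" idea: every pixel pushes 1/8 of its
        // quantization error to each of its 8 neighbors simultaneously, then the palette
        // matching is rerun and the number of pixels that changed color is returned.
        public static int OnePass(double[,,] img, Color[,] current, Func<Color, Color> nearest)
        {
            int w = img.GetLength(0), h = img.GetLength(1);
            var next = (double[,,])img.Clone();

            for (int x = 0; x < w; x++)
                for (int y = 0; y < h; y++)
                {
                    Color quant = nearest(ToColor(img, x, y));
                    double er = img[x, y, 0] - quant.R;
                    double eg = img[x, y, 1] - quant.G;
                    double eb = img[x, y, 2] - quant.B;

                    for (int dx = -1; dx <= 1; dx++)
                        for (int dy = -1; dy <= 1; dy++)
                        {
                            if (dx == 0 && dy == 0) continue;
                            int nx = x + dx, ny = y + dy;
                            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                            next[nx, ny, 0] += er / 8.0;
                            next[nx, ny, 1] += eg / 8.0;
                            next[nx, ny, 2] += eb / 8.0;
                        }
                }

            // Re-match against the palette and count how many pixels changed color.
            int changed = 0;
            for (int x = 0; x < w; x++)
                for (int y = 0; y < h; y++)
                {
                    Color c = nearest(ToColor(next, x, y));
                    if (c != current[x, y]) changed++;
                    current[x, y] = c;
                    img[x, y, 0] = next[x, y, 0];
                    img[x, y, 1] = next[x, y, 1];
                    img[x, y, 2] = next[x, y, 2];
                }
            return changed;
        }

        static Color ToColor(double[,,] a, int x, int y)
        {
            int r = Math.Max(0, Math.Min(255, (int)Math.Round(a[x, y, 0])));
            int g = Math.Max(0, Math.Min(255, (int)Math.Round(a[x, y, 1])));
            int b = Math.Max(0, Math.Min(255, (int)Math.Round(a[x, y, 2])));
            return Color.FromArgb(r, g, b);
        }
    }

Repeating OnePass() until the changed count stops dropping would be the outer "repeat until the desired effect is reached" step.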

Has anyone in this community had to deal with this before, or has some awesome idea on how to better dither an image with a reduced color palette?

I've been digging through the forum posts for quite some time, but did not find any magic hint for this.

Thanks for reading!!


Replies to This Discussion

I have written various dithering algorithms for GH2, including Floyd-Steinberg, Jarvis, and Atkinson. These images are part of my unit-test project:

Original image, one of the pictures I took while playing FireWatch.

Reduced to a 5-colour palette, picked by hand, though a K-Means palette is also available.

Floyd-Steinberg; note the very noisy regions with lots of dithering.

Atkinson doesn't diffuse the full error, meaning larger solid areas will be present in the result.
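
For reference, the standard diffusion weights behind those two results, as (dx, dy, weight) offsets from the current pixel (the published kernels, not necessarily the exact GH2 code):

    static class Kernels
    {
        // Floyd-Steinberg distributes the full quantisation error (weights sum to 1).
        public static readonly (int dx, int dy, double w)[] FloydSteinberg =
        {
            (1, 0, 7.0 / 16), (-1, 1, 3.0 / 16), (0, 1, 5.0 / 16), (1, 1, 1.0 / 16)
        };

        // Atkinson distributes only 6/8 of the error (weights sum to 0.75); the rest is
        // simply dropped, which is why larger solid areas survive in the result.
        public static readonly (int dx, int dy, double w)[] Atkinson =
        {
            (1, 0, 1.0 / 8), (2, 0, 1.0 / 8),
            (-1, 1, 1.0 / 8), (0, 1, 1.0 / 8), (1, 1, 1.0 / 8),
            (0, 2, 1.0 / 8)
        };
    }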

I realise the above doesn't help you much now, but I can tell you that while it is relatively easy to implement these algorithms in C#, I'd hate to have to do it in GH. The iterative and forward-dependent nature of dithering makes that really cumbersome.

oooh man
-drooling all over my keyboard-
can't wait!!!!

OK, in the meantime I'll look into C# scripting.

thanks!

Decided to implement a kd-tree palette quantizer (the k-means approach I was using was just too slow). See here for some details.
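
The actual code isn't reproduced here, but the general idea of a kd-tree-style quantiser can be sketched as follows: recursively split the cloud of pixel colours along its widest channel at the median until there are as many leaves as palette entries, then average each leaf into one palette colour (a rough sketch of the idea only, not the GH2 implementation):

    using System;
    using System.Collections.Generic;
    using System.Drawing;
    using System.Linq;

    static class KdPalette
    {
        public static List<Color> Build(IList<Color> pixels, int paletteSize)
        {
            var buckets = new List<List<Color>> { pixels.ToList() };

            // Keep splitting the bucket with the widest colour spread until we have enough leaves.
            while (buckets.Count < paletteSize)
            {
                var bucket = buckets.OrderByDescending(Spread).First();
                if (bucket.Count < 2) break;

                int axis = WidestAxis(bucket);
                var sorted = bucket.OrderBy(c => Channel(c, axis)).ToList();
                int mid = sorted.Count / 2;

                buckets.Remove(bucket);
                buckets.Add(sorted.Take(mid).ToList());
                buckets.Add(sorted.Skip(mid).ToList());
            }

            // One palette entry per leaf: the average colour of that leaf.
            return buckets.Select(b => Color.FromArgb(
                (int)b.Average(c => (double)c.R),
                (int)b.Average(c => (double)c.G),
                (int)b.Average(c => (double)c.B))).ToList();
        }

        static int Channel(Color c, int axis) => axis == 0 ? c.R : axis == 1 ? c.G : c.B;

        static int WidestAxis(List<Color> b)
        {
            int r = b.Max(c => c.R) - b.Min(c => c.R);
            int g = b.Max(c => c.G) - b.Min(c => c.G);
            int bl = b.Max(c => c.B) - b.Min(c => c.B);
            return r >= g && r >= bl ? 0 : g >= bl ? 1 : 2;
        }

        static int Spread(List<Color> b)
        {
            if (b.Count < 2) return 0;
            int r = b.Max(c => c.R) - b.Min(c => c.R);
            int g = b.Max(c => c.G) - b.Min(c => c.G);
            int bl = b.Max(c => c.B) - b.Min(c => c.B);
            return Math.Max(r, Math.Max(g, bl));
        }
    }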
