
Hi David and everyone else,

I have some questions regarding the Simulated Annealing implementation in Galapagos; maybe someone knows the answers?

- When using delta Energy to calculate the acceptance probability, is that value normalized? It would make sense to do so, to keep the behaviour of the algorithm independent of the specific problem, right? And if it is normalized, how can you know a general normalization factor without knowing the specific problem (especially the bounds of the objective function values)?

- Also, in one of David's earlier posts he mentioned that he uses a Cauchy distribution for this acceptance probability, which confuses me a bit. Does that mean this distribution is imposed on the probability? I thought p depends on the current temperature and delta Energy?

I actually already asked this as a follow-up question in this post:

http://www.grasshopper3d.com/forum/topics/about-simulated-annealing...

Would appreciate any help on this!

Thanks and best, Chris


Replies to This Discussion

[...] is that value being normalized?

Only in the sense that the number of possible values a gene can assume is normalised to the 0~1 range, so to jump from one end of a slider to the other would take the same amount of energy, independent of how many values are in between.

Other than that Galapagos does not understand anything about the problem at hand, so it cannot be 'smart' about it. There is at present no way to inform GP about any non-linear behaviour of individual sliders.
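As a minimal sketch of what that kind of gene normalisation looks like (the function below is illustrative, not Galapagos code):

    def normalise_gene(value, lower, upper):
        """Map a slider value into the 0~1 range."""
        return (value - lower) / (upper - lower)

    # Jumping from one end of a slider to the other costs the same
    # normalised distance (1.0), no matter how many values lie in between:
    print(normalise_gene(10, 0, 10) - normalise_gene(0, 0, 10))        # 1.0
    print(normalise_gene(1000, 0, 1000) - normalise_gene(0, 0, 1000))  # 1.0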

also in one of the earlier posts of David, he mentioned he uses a Cauchy distribution for this acceptance probability, which confuses me a bit.

I don't remember that, which does not mean I deny saying it. The annealing process is pretty standard in its implementation, following the equations as they were written down on the Wikipedia page. I do however pick starting points for each new solution that aren't random, but are as far away as possible from previous samples. Perhaps this is the distribution you refer to?
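For reference, the standard rule from that Wikipedia article looks like this in Python (a generic sketch, not the actual Galapagos implementation):

    import math
    import random

    def accept(e, e_new, temperature):
        """Standard simulated-annealing acceptance rule.

        e      -- energy of the current solution (lower is better)
        e_new  -- energy of the candidate solution
        An improving move is always accepted; a worsening move is
        accepted with probability exp(-(e_new - e) / temperature).
        """
        if e_new < e:
            return True
        return random.random() < math.exp(-(e_new - e) / temperature)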

Hi David,

thanks a lot for your response.

I referred to this post of yours, where you explain the two graphs (probability density and cumulative density) in the Galapagos interface: http://www.grasshopper3d.com/video/2-simulated-annealing-runs

"I've now got a Gaussian distribution showing lattice jump magnitude and a Cauchy cumulative distribution showing acceptance probabilities."

The lattice jump part (the Gaussian, the bright green graph on the left) I get. But I don't understand the purpose of the cumulative density graph in dark green.

(...) number of possible values a gene can assume is normalised to the 0~1 range, so to jump from one end of a slider to the other (...)

If I understand correctly, you are talking about the variables here? What I actually meant is the objective function values: this delta Energy, or e' - e on Wikipedia.

As an example: if my problem is to maximize the area of a floorplan... it would make a huge difference for the acceptance probability whether I use square meters or square millimeters (why would I ever calculate a floorplan in square millimeters? but just for the sake of the example) as objective function values (or fitness).
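To put rough numbers on that (the temperature here is made up, purely to illustrate the scale problem): the same physical worsening gives completely different acceptance probabilities when the raw delta is fed to exp(-delta/T) unscaled.

    import math

    T = 10.0                      # some fixed temperature
    delta_m2 = 0.5                # candidate is 0.5 m^2 worse
    delta_mm2 = 0.5 * 1_000_000   # the identical difference in mm^2

    print(math.exp(-delta_m2 / T))    # ~0.95: often accepted
    print(math.exp(-delta_mm2 / T))   # ~0.0:  essentially never accepted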

Those graphs aren't all that useful; they're really just something to look at. The cyan graph is the integral of the green one, that's all.
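For anyone curious what "the integral of the density" means concretely, here is that relationship for a Cauchy distribution, the one named in the quote above (an illustrative sketch, not the plotted Galapagos data):

    import math

    def cauchy_pdf(x, x0=0.0, gamma=1.0):
        """Cauchy probability density."""
        return 1.0 / (math.pi * gamma * (1.0 + ((x - x0) / gamma) ** 2))

    def cauchy_cdf(x, x0=0.0, gamma=1.0):
        """Cauchy cumulative distribution: the integral of the density."""
        return 0.5 + math.atan((x - x0) / gamma) / math.pi

    print(cauchy_cdf(0.0))    # 0.5: half the mass lies below the centre
    print(cauchy_cdf(10.0))   # ~0.97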

[...] it would make a huge difference for the acceptance probability whether I use square meters or square millimeters (why would I ever calculate a floorplan in square millimeters? but just for the sake of the example) as objective function values (or fitness).

Oh I see! I actually can't remember now whether or how I solved that. Obviously there's no way to know ahead of time what the possible range of fitnesses will be, so it's definitely a problem. If the range is -1~+1, 0~1000, or 999~1000, very different acceptance functions ought to be used.

I sort of suspect I did not do anything particularly clever; probably something about the new fitness relative to the current fitness, assuming zero to be the anchor point.
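Something along these lines, perhaps; this is only a guess at the "zero anchor" idea, not the actual code:

    def relative_delta(e, e_new):
        """Energy difference scaled by the current energy, anchored at zero.

        Dividing by |e| makes the delta unit-free (m^2 vs mm^2 no longer
        matters), but only if zero is a meaningful reference point.
        """
        if e == 0:
            return e_new - e        # fall back to the raw difference
        return (e_new - e) / abs(e)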

Ah, OK. The impact of scale you can of course avoid by normalizing with the current best fitness.

Yes, but the range issue is still a problem, especially if the "real" anchor point is not zero.
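A quick illustration of that failure mode (the numbers are made up): in a 999~1000 energy range, dividing by the current value squashes even a full-range worsening move to a tiny relative delta, so it gets accepted at almost any temperature.

    import math

    T = 0.1                             # a fairly cold temperature
    e, e_new = 999.0, 1000.0            # a full-range worsening move
    delta_scaled = (e_new - e) / e      # ~0.001, since the anchor is far from zero

    print(math.exp(-delta_scaled / T))  # ~0.99: accepted almost always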

Thanks a lot, this clarified my questions :)
