Grasshopper

algorithmic modeling for Rhino

Hello everyone,

here I am again with another question :')

After some research I am most likely done figuring out the time complexity of my algorithm. However, there are a few points where I am not sure about my reasoning.

Let's take a look at the method <createPopulation>: 

public static List<List<Point3d>> createPopulation(List<Point3d> cP, int populationCount)
{
  List<List<Point3d>> Population = new List<List<Point3d>>();  // 1

  for (int i = 0; i < populationCount; i++)  // 2
  {
    List<Point3d> individual = cP.ToList();  // 3
    Population.Add(individual);  // 4

    // Fisher-Yates shuffle of the copy; the do/while rejects bytes that
    // would otherwise introduce modulo bias.
    System.Security.Cryptography.RNGCryptoServiceProvider provider = new System.Security.Cryptography.RNGCryptoServiceProvider();
    int n = Population[i].Count;
    while (n > 0)
    {
      byte[] box = new byte[1];
      do provider.GetBytes(box);  // 5
      while (!(box[0] < n * (Byte.MaxValue / n)));
      int k = (box[0] % n);
      n--;
      Point3d value = Population[i][k];
      Population[i][k] = Population[i][n];
      Population[i][n] = value;
    }
  }

  return Population;
}

  1. In my algorithm there are lots of declarations like List<Point3d> points = new List<Point3d>(); or Random r = new Random(); these are constant time, O(1), right?
  2. i = 0 executes once; i < populationCount executes (N+1) times; i++ executes N times.
  3. Does this execute M times, because every point needs to be added individually?
  4. This one I don't really know. MSDN states: "If Count is less than Capacity, this method is an O(1) operation. If the capacity needs to be increased to accommodate the new element, this method becomes an O(n) operation, where n is Count." Shouldn't it be O(N), because I am adding every individual separately?
  5. I can't find any statements about this one either; I am guessing it should be constant time, O(1), because only one random number is being generated?


Replies to This Discussion

You need to forget about details of your code and one-off things like declarations (more on those later btw.). Big-O runtime is about how much longer your algorithm will take if you increase the amount of data it has to operate on. So only those aspects of the algorithm that differ in these cases are relevant.

For example, if your algorithm always does exactly the same thing (let's say all it does is measure the size of an array and display it on screen), it will be O(1), because it doesn't matter whether you run it on an array containing 10 or 1000000 items. Measuring the size of an array is a constant-time operation:

Print(string.Format("Array contains: {0} element(s)", data.Length));

However, if your algorithm works not on arrays but on linked lists, the same measurement becomes an O(N) operation, because counting all the elements in a linked list means you have to iterate over all of them, and the longer the list, the more iterations you need. In fact, the number of iterations is exactly the same as the number of items. (P.S. If you use the System.Collections.Generic.LinkedList<T> class it's still O(1), because apparently that particular implementation of linked lists caches the count and keeps it up to date.)
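To make that concrete, here is a minimal sketch of a hand-rolled linked list; the Node<T> class and CountNodes method are hypothetical, not part of the original post:

class Node<T>
{
  public T Value;
  public Node<T> Next;
}

// O(N): one loop iteration per element, so the cost grows linearly
// with the length of the list.
static int CountNodes<T>(Node<T> head)
{
  int count = 0;
  for (Node<T> node = head; node != null; node = node.Next)
    count++;
  return count;
}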

If you have a loop that runs for each item, and inside that loop there is another loop that also runs for each item, then your complexity becomes O(N²). Similarly, if your algorithm consumes two collections (N and M) and iterates over all items in N, and inside that loop iterates over all items in M, the complexity is O(N×M).
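For example (the collection names and the CompareSomehow method are placeholders, not taken from the thread):

// O(N²): the inner loop runs N times for each of the N outer iterations.
for (int i = 0; i < points.Count; i++)
  for (int j = 0; j < points.Count; j++)
    CompareSomehow(points[i], points[j]);

// O(N×M): every item of the second collection is visited once
// for every item of the first.
for (int i = 0; i < first.Count; i++)
  for (int j = 0; j < second.Count; j++)
    CompareSomehow(first[i], second[j]);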

The case can be made that only the most severe complexity is relevant enough to report. For example, if you have an algorithm that consists of three steps, the first of which is O(log(N)), the second O(N²) and the third O(3ⁿ), then technically the total complexity would be O(log(N) + N² + 3ⁿ); however, the first two parts are utterly insignificant compared to the third and can therefore be omitted entirely. Consider for example increasing the input size from 10 to 20 elements:

  • log(10) + 10² + 3¹⁰ = 1 + 100 + 59049 = 59150
  • log(20) + 20² + 3²⁰ ≈ 1 + 400 + 3486784401 ≈ 3486784802

As you can see the increase of the complexity is almost entirely due to the O(3ⁿ) portion, so much so that there's almost no point in mentioning the other two.

Now, your specific questions:

  1. Constructors/declarations and method invocations are not necessarily O(1). In this particular case they are, but a constructor you call may have a higher complexity. For example, if instead of an empty List<T> you construct a SortedList from your inputs, that may well be the most significant complexity in your entire algorithm, and it needs to be taken into account.
  2. Correct. A loop like this has complexity O(N); you can ignore things that happen only once, like the declaration of the iteration variable.
  3. I don't understand that line of code: cP is already a list, so why call ToList() on it? (If the intent is to give each individual its own copy to shuffle, that makes sense.) Copying a memory-contiguous collection (like an array or a list) is fast in practice, because a whole block of memory can be duplicated in one go using the right hardware ops, but it is still O(M) in the number of elements copied; only the constant factor is small. See the sketch after this list.
  4. It's very cheap to add items to a list, provided the list has enough spare capacity. A new list initially has room for only 4 items; if you try to add a fifth, the list has to allocate more memory elsewhere, copy the 4 existing items into the newly allocated space, and only then add the new item. Amortised over many additions Add is still O(1), which is why the MSDN remark you quoted is not a contradiction. Still, if you know ahead of time how many items you'll be adding to a list (or even just a theoretical upper bound), you should construct the list with that capacity; it speeds up the process of adding many items, as the sketch after this list shows.
  5. I don't know off-hand how crypto providers work internally, but since a single GetBytes call does not depend on cP.Count or on the magnitude of populationCount, it doesn't matter for the big-O complexity metric.
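A minimal sketch of points 3 and 4, reusing the names from createPopulation (the complexity notes restate the MSDN remarks quoted above):

// Point 3: ToList() copies all M elements of cP, so it is O(M),
// albeit with a very small constant factor (one contiguous block copy).
List<Point3d> individual = cP.ToList();

// Point 4: pre-sizing the outer list avoids the intermediate reallocations,
// so adding populationCount items never triggers a grow-and-copy.
List<List<Point3d>> population = new List<List<Point3d>>(populationCount);
population.Add(individual);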

There's no such thing as O(13). O(1) is a stand-in for constant time, no matter how long that constant is: O(1) tells you the runtime doesn't grow with the input, but it doesn't tell you how much time is taken. Similarly, O(2N) is not meaningful; constant factors merely scale the actual measured runtime, which big-O notation deliberately ignores, so O(2N) is just O(N).

Initialising a new empty collection is almost always an O(1) operation. It's only when you initialise a class and populate it through the constructor that the complexity may be worse, for example new HashSet<int>(largeIntegerCollection) or new SortedList<string, int>(largeDictionary). Note that the remarks for the latter constructor on MSDN specifically document its complexity.
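A quick sketch of the difference (largeIntegerCollection is the placeholder name from above):

// O(1): allocates an empty list; no per-element work is done.
List<Point3d> empty = new List<Point3d>();

// Construction cost grows with the input: every element of
// largeIntegerCollection has to be hashed and inserted.
HashSet<int> set = new HashSet<int>(largeIntegerCollection);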

You mean I should do something like List<Point3d> test = new List<Point3d>(N); if I know I am going to have N points in the list?

Yup.

So <calculateFitness> has runtime O(N²) and <List.ToList()> has runtime O(N). What is the total runtime then?

Either O(N+N²) or just O(N²); they describe the same growth, because the N² term dominates for large N. I'd write the latter.
