Unusual behaviour of Ant Colony Optimization for Closest String Problem in Python and C++

  ant-colony, benchmarking, c++, optimization, python

This is probably going to be a long question, I apologize in advance.

I’m working on a project with the goal of researching different solutions for the closest string problem.

Let s_1, …, s_n be strings of length m. Find a string s of length m that minimizes max{d(s, s_i) | i = 1, …, n}, where d is the Hamming distance.

One solution that has been tried is one using ant colony optimization, as described here.

The paper itself does not go into implementation details, so I’ve done my best to make it efficient. However, efficiency is not the only unusual thing about it.

I’m not sure whether it’s common practice to do so, but I will present my code through pastebin, since I believe it would overwhelm the thread if I put it directly here. If that turns out to be a problem, I won’t mind editing the thread to include it directly. As with all the previous algorithms I’ve experimented with, I wrote this one in Python first. Here’s the link:

https://pastebin.com/W4xq4ZC9

The utils.problem_metric function looks like this:

def hamming_distance(s1, s2):
    return sum(c1 != c2 for c1, c2 in zip(s1, s2))

def problem_metric(string, references):
    return max(hamming_distance(string, r) for r in references)
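For reference, the C++ side uses the same metric. A minimal, self-contained sketch of it (the names here are illustrative and don’t necessarily match my pastebin):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Number of positions at which two equal-length strings differ.
int hamming_distance(const std::string& s1, const std::string& s2) {
    int d = 0;
    for (std::size_t i = 0; i < s1.size(); ++i)
        d += (s1[i] != s2[i]);
    return d;
}

// Worst-case distance to any reference string, mirroring utils.problem_metric.
int problem_metric(const std::string& s, const std::vector<std::string>& refs) {
    int worst = 0;
    for (const auto& r : refs)
        worst = std::max(worst, hamming_distance(s, r));
    return worst;
}
```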

I’ve seen that there are a lot more tweaks and other parameters you can add to ACO, but I’ve kept it simple for now. The configuration I’m using is 250 iterations, a colony size of 10 ants, and rho=0.1. The problem that I’m testing it on is from here: http://tcs.informatik.uos.de/research/csp_cssp , the one called 2-10-250-1-0.csp (the first one). The alphabet consists only of ‘0’ and ‘1’, the strings are of length 250, and there are 10 strings in total.
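For context, the trail update a basic ACO like this performs is the textbook evaporate-then-deposit step. A minimal sketch with illustrative names and an illustrative deposit amount (not taken from my pastebin or the paper):

```cpp
#include <vector>

// One pheromone trail per string position, one entry per alphabet symbol.
// Standard ACO update: evaporate every trail by rho, then let the best ant
// deposit pheromone on the characters it chose.
void update_trails(std::vector<std::vector<float>>& trails,
                   const std::vector<int>& best_ant,  // chosen symbol index per position
                   float rho, float deposit) {
    for (std::size_t pos = 0; pos < trails.size(); ++pos) {
        for (float& t : trails[pos])
            t *= (1.0f - rho);                  // evaporation
        trails[pos][best_ant[pos]] += deposit;  // reinforcement
    }
}
```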

For the ACO configuration that I’ve mentioned, this problem, using the Python solver, gets solved on average in 5 seconds, and the average target function value is 108.55 (simulated 20 times). The correct target function value is 96. Ironically, the 5-second average is good compared to what it used to be in my first attempt at implementing this solution. However, it’s still surprisingly slow.

After doing all kinds of optimizations, I’ve decided to try and implement the exact same solution in C++ to see whether there will be a significant difference between the running times. Here’s the C++ solution:

https://pastebin.com/YN3U44hz

The average running time now is only 530.4 milliseconds. However, the average target function value is 122.75, which is significantly higher than that of the Python solution.

If the average function values were the same, and the times were as they are, I would simply write this off as ‘C++ is faster than Python’ (even though the difference in speed is also very suspicious). But since C++ yields worse solutions, it leads me to believe that I’ve done something wrong in C++. What I’m suspicious of is the way I’m generating an alphabet index using weights. In Python I’ve done it using random.choices as follows:

ants[ant_idx][next_character_index] = random.choices(alphabet, weights=world_trails[next_character_index], k=1)[0]

As for C++, I haven’t done it in a while, so I’m a bit rusty at reading cppreference (which is a skill of its own), and the std::discrete_distribution solution is something I’ve plainly copied from the reference:

std::random_device rd;
std::mt19937 gen(rd());
 
int get_from_distrib(const std::vector<float>& weights){
    
    std::discrete_distribution<> d(std::begin(weights), std::end(weights));
    return d(gen);
}

The suspicious thing here is the fact that I’m declaring the std::random_device and std::mt19937 objects globally and reusing the same ones every time. I have not been able to find an answer as to whether this is how they’re meant to be used. However, if I put them in the function:

int get_from_distrib(const std::vector<float>& weights){
    std::random_device rd;
    std::mt19937 gen(rd());    
    std::discrete_distribution<> d(std::begin(weights), std::end(weights));
    return d(gen);
}

the average running time gets significantly worse, clocking in at 8.84 seconds. Even more surprisingly, the average function value gets worse as well, at 130.

Again, if only one of the two things had changed (say, if only the time went up), I would have been able to draw some conclusions. This way it only gets more confusing.
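To isolate the cost of the two RNG setups from the rest of the solver, here is a minimal micro-benchmark sketch of the two variants (an illustrative harness, not from my pastebin):

```cpp
#include <chrono>
#include <random>
#include <vector>

// Shared engine, seeded once from random_device (the "global" variant).
std::mt19937 shared_gen(std::random_device{}());

int pick_shared(const std::vector<float>& weights) {
    // Rebuilding the distribution per call costs something, but far less
    // than rebuilding the engine.
    std::discrete_distribution<> d(weights.begin(), weights.end());
    return d(shared_gen);
}

// Fresh random_device + engine on every call (the slow variant).
int pick_fresh(const std::vector<float>& weights) {
    std::random_device rd;   // may hit the OS entropy source on every call
    std::mt19937 gen(rd());  // re-seeds the 624-word Mersenne Twister state
    std::discrete_distribution<> d(weights.begin(), weights.end());
    return d(gen);
}

// Time `n` draws with a given picker, in microseconds.
long long time_draws(int (*pick)(const std::vector<float>&),
                     const std::vector<float>& weights, int n) {
    auto t0 = std::chrono::steady_clock::now();
    long long sink = 0;
    for (int i = 0; i < n; ++i) sink += pick(weights);
    (void)sink;  // keep the loop from being optimized away
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
}
```

Comparing `time_draws(pick_shared, w, 100000)` against `time_draws(pick_fresh, w, 100000)` on my machine shows the same pattern as the full solver, which is why I suspect the RNG handling rather than the ACO logic itself.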

So, does anybody have an idea of why this is happening?

Thanks in advance.

Source: Windows Questions C++
