Just wondering if you have the constant liar algorithm implemented internally for hyper-parameter suggestions in parallel. If I understand the wandb API correctly, it is geared more toward sequential suggestions, and since our company can run pods in parallel, it would be amazing if you could implement this on your end rather than us hacking it together on ours.
The basic idea is that the first pod (in a parallel set) gets hyper-parameter suggestions as usual, but when the 2nd and subsequent pods start in parallel, the optimizer treats the still-pending suggestions as if they had already returned the worst loss seen so far. The logic is that the next suggested hyper-parameters will then land far away from the ones given to the first pod. You could probably be smarter here since wandb has access to loss metrics as training progresses, but that would be a side project. A minimal sketch of what I mean is below.
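For concreteness, here's a rough sketch of the idea on top of a plain scikit-learn GP surrogate. This is illustration only, not wandb's internal sweep code; `suggest_batch` and the LCB acquisition are stand-ins I made up for the example:

```python
# Minimal constant-liar sketch over a scikit-learn GP surrogate.
# Illustration only -- not wandb's actual sweep implementation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def suggest_batch(X_observed, y_observed, candidates, batch_size):
    """Pick `batch_size` points to evaluate in parallel (minimizing loss).

    After each pick, the chosen point is appended to the training data
    with a "lie" (the worst loss seen so far), so the refit surrogate
    steers the next pick away from it.
    """
    X, y = list(X_observed), list(y_observed)
    lie = max(y)  # constant lie: the worst loss observed so far
    batch = []
    for _ in range(batch_size):
        # Small alpha keeps the fit numerically stable even when lied
        # points land near real observations.
        gp = GaussianProcessRegressor(alpha=1e-6, normalize_y=True)
        gp.fit(np.asarray(X), np.asarray(y))
        mu, sigma = gp.predict(np.asarray(candidates), return_std=True)
        # Lower confidence bound: prefer low predicted loss and high
        # uncertainty; any standard acquisition function works here.
        best = int(np.argmin(mu - 1.96 * sigma))
        batch.append(candidates[best])
        X.append(candidates[best])
        y.append(lie)  # pretend this pending run returned the worst loss
    return batch
```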
Hi @lesliewandb, thanks for getting back to me. I feel like the constant liar (CL) algorithm is independent of whether the underlying method is Bayes search or TPE.
Consider this example. Suppose a sweep has already completed 1000 runs, so the sampler is more or less confident about the parameter space. The problem with the current implementation is that if I were to spin up the next 5 runs in parallel, wandb ignores the fact that they are happening in parallel and would independently suggest 5 sets of hyper-parameters. There is a good chance that all 5 are extremely similar. However, if each pending suggestion "lies" and reports that its location was bad, it forces the sampler to look at a different location and ensures we "explore" the hyper-parameter space better.
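To make that 5-in-parallel scenario concrete with the sketch above (synthetic data standing in for the finished runs; all names here are hypothetical):

```python
# Hypothetical demo: many finished runs on a 1-D search space, then
# 5 parallel suggestions via the constant liar.
rng = np.random.default_rng(0)
X_obs = rng.uniform(0.0, 1.0, size=(200, 1))                 # past configs
y_obs = (X_obs[:, 0] - 0.3) ** 2 + rng.normal(0, 0.01, 200)  # past losses
grid = [[x] for x in np.linspace(0.0, 1.0, 201)]             # candidates

# Without the lies, refitting the same GP 5 times would return the
# same argmin 5 times; with them, the 5 picks spread out.
print(suggest_batch(X_obs.tolist(), y_obs.tolist(), grid, batch_size=5))
```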
So basically what I’m asking for is the constant liar algorithm on top of Bayes search. I do, however, think that the TPE algorithm is better than sklearn’s GPs, but that can be another discussion.