Ah okay, if you’ve got an actual unbiased test set, then you’re golden. I’d be interested to hear more about your project!
And yes, as @_scott points out, if you aren’t using Bayesian optimization, the choice of
metric won’t impact the behavior of your search.
random is actually a pretty good choice for HPO, competitive with
bayes in my and others’ experience – and less prone to error/misconfiguration. Also BTW, the
early_terminate feature uses HyperBand, which is more aggressive than the classic early stopping folks learn about in an ML class – i.e., stopping training once validation error starts increasing. That classic style of early stopping is best delegated to the ML framework you’re using.
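For reference, here’s a rough sketch of what a random-search sweep config with HyperBand early termination looks like – the metric, parameter names, and ranges are just made-up placeholders for illustration:

```python
# Sketch of a W&B sweep configuration: random search plus
# HyperBand early termination. Parameter names/ranges are illustrative.
sweep_config = {
    "method": "random",  # with random search, the metric doesn't steer the search
    "metric": {"name": "val_loss", "goal": "minimize"},
    "early_terminate": {
        "type": "hyperband",  # aggressively halts underperforming runs
        "min_iter": 3,
    },
    "parameters": {
        "learning_rate": {"min": 1e-4, "max": 1e-1},
        "batch_size": {"values": [32, 64, 128]},
    },
}

# You'd then launch it with something like:
# sweep_id = wandb.sweep(sweep_config, project="my-project")
# wandb.agent(sweep_id, function=train)
```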
Thanks for pointing out the issue with the Slack link. As @bhutanisanyam1 said, we are moving discussion to this forum, but that link should still have been working anyway. Will fix it shortly.