I want to do hyperparameter tuning using sweeps for my project. The experiments all involve k-fold cross-validation. Before starting hyperparameter tuning, I separated the folds into different runs via wandb.init() by assigning them all to the same group and using the fold number as the run name. That works well.
However, now that I’m using sweeps for hyperparameter tuning, wandb shows only one run per group (Fold 1), and all folds get packed together into that single run.
Is there a best practice for combining hyperparameter search with sweeps and k-fold cross-validation?
Thanks a lot!