I discovered W&B recently and decided to use it for my current research project. The thing is, I have one specific task to solve and would like to evaluate several completely different model architectures, each with its own set of hyper-parameters.
Most of the online resources and tutorials I’ve found only show examples of using W&B to evaluate different experiments with different parameter selections (e.g. optimized using sweeps). However, none of the examples I found explain how to best organize a W&B project that includes different architectures solving the same task, so that the different performances can be compared at a glance in a single view / report.
My idea was to make use of the job_type flag and group all runs of each architecture under the same job_type value. But this still doesn’t seem like the best solution, and I was wondering whether there is some special feature or built-in tool that I’ve not noticed yet (or even established good practices?).
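For context, here is a minimal sketch of what I mean. All names (project, architectures, hyper-parameters) are made up for illustration; the idea is to put the architecture name in `group` (and duplicate it into `config` so it can be used as a filter or grouping key in the UI), and reserve `job_type` for the kind of job (e.g. train vs. eval):

```python
# Hypothetical sketch: one W&B project, several architectures.
# Architecture names and hyper-parameters below are made up.
ARCHITECTURES = {
    "resnet": {"lr": 1e-3, "depth": 18},
    "transformer": {"lr": 3e-4, "heads": 8},
}

def init_kwargs(arch: str, params: dict, seed: int) -> dict:
    """Build the wandb.init(...) arguments for one training run."""
    return {
        "project": "my-task",  # hypothetical project name
        "group": arch,         # groups all runs of one architecture together
        "job_type": "train",   # kind of job, e.g. "train" vs "eval"
        # Duplicating the architecture into config lets the UI
        # filter / group runs by it like any other hyper-parameter.
        "config": {"architecture": arch, "seed": seed, **params},
    }

def launch_all() -> None:
    """Start one run per architecture (requires wandb to be installed)."""
    import wandb
    for arch, params in ARCHITECTURES.items():
        run = wandb.init(**init_kwargs(arch, params, seed=0))
        # ... training loop, run.log({...}) ...
        run.finish()
```

Is this roughly the intended pattern, or is there a better-suited feature for cross-architecture comparison?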
(Other than that, W&B looks really insane).