Separating Training From Testing


I’ve got a sort of lopsided ML workflow where most of my time is spent producing plots and metrics, and my set of trained models is rarely updated. Is there a good way to separate my training code from all of my evaluation scripts?

I’d like my experiments to have the logging information from when the model was trained, but not to have to retrain it every time.


Hey Marcel!

I’m not sure if you’re asking for generic feedback here, but the usual approach is to split the workflow into a training script and separate evaluation scripts. The training script serializes each trained model to disk together with its training logs and metrics, and the evaluation scripts only ever load those saved artifacts — that way you can regenerate plots and metrics as often as you like without retraining, and the logging info from training time travels along with the model.
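As a minimal sketch of what I mean (all the names here — `save_artifact`, `load_artifact`, the file layout — are just illustrative, not from any particular library): the training side writes one directory per run containing the pickled model and a JSON training log, and the evaluation side reads it back.

```python
import json
import pickle
from pathlib import Path


def save_artifact(model, train_log, out_dir):
    """Called once at the end of training: serialize the trained model
    plus its training-time logs/metrics into a single run directory."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with open(out / "model.pkl", "wb") as f:
        pickle.dump(model, f)
    with open(out / "train_log.json", "w") as f:
        json.dump(train_log, f, indent=2)


def load_artifact(run_dir):
    """Called by every evaluation/plotting script: load the saved model
    and its training log without touching any training code."""
    run = Path(run_dir)
    with open(run / "model.pkl", "rb") as f:
        model = pickle.load(f)
    with open(run / "train_log.json") as f:
        train_log = json.load(f)
    return model, train_log
```

Then your plotting scripts just call `load_artifact("runs/2024-01-15-baseline")` (or whatever naming scheme you pick) and never import anything from the training code. Experiment trackers like MLflow or Weights & Biases do essentially this for you, with run IDs instead of directories, if you'd rather not roll your own.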

Please let me know if you have any specific Qs around this :slight_smile: