Separating Training From Testing

Hi!

I’ve got a sort of lopsided ML workflow where most of my time is spent producing plots and metrics, and my set of trained models is rarely updated. Is there a good way to separate my training code from all of my evaluation scripts?

I’d like my experiments to have access to the logging information from when the model was trained, without having to retrain it every time.

Thanks!

Hey Marcel!

I’m not sure if you’re after generic feedback here, but the usual approach is to split training and evaluation into separate scripts that share a single model definition: the training script saves the trained weights along with a small metadata file capturing the training-time logs, and the evaluation scripts just load those artifacts from disk instead of retraining.
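In case a concrete sketch helps, here is a minimal version of that pattern, assuming PyTorch. `TinyModel`, the run-directory layout, and the metric names are placeholders for whatever your project actually uses, not anything specific to your setup:

```python
import json
import time
from pathlib import Path

import torch
import torch.nn as nn


class TinyModel(nn.Module):  # stand-in for your real model
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 1)

    def forward(self, x):
        return self.linear(x)


# Training side — run rarely; saves weights plus training-time logs/metadata.
def train_and_save(run_dir: Path) -> None:
    run_dir.mkdir(parents=True, exist_ok=True)
    model = TinyModel()
    # ... your training loop goes here ...
    final_loss = 0.123  # placeholder for your real training metric

    # Save the weights and the training-time context side by side,
    # so evaluation scripts never need to retrain to recover either.
    torch.save(model.state_dict(), run_dir / "model.pt")
    metadata = {
        "trained_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "final_loss": final_loss,
        "notes": "hyperparameters / logging info from training",
    }
    (run_dir / "metadata.json").write_text(json.dumps(metadata, indent=2))


# Evaluation side — run often; loads the checkpoint instead of retraining.
def load_for_eval(run_dir: Path) -> tuple[nn.Module, dict]:
    model = TinyModel()
    model.load_state_dict(torch.load(run_dir / "model.pt"))
    model.eval()  # disable dropout / batch-norm updates for evaluation
    metadata = json.loads((run_dir / "metadata.json").read_text())
    return model, metadata


if __name__ == "__main__":
    run = Path("runs/example-run")
    train_and_save(run)
    model, meta = load_for_eval(run)
    print(meta["trained_at"], meta["final_loss"])
```

With this split, your plotting and metrics scripts only ever call the loading side, and every evaluation still carries the logs from the original training run.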

Please let me know if you have any specific questions around this 🙂
