Using Wandb with HParams on TF

I think I may have gotten confused with this one. I had to code up a custom model in TF. It trains and runs, but now I want to do some hyperparameter tuning, so I've been working on getting HParams integrated.

I'm also trying to link up wandb to keep track of things.

Currently, since I'm using HParams, when I initialize wandb with wandb.init(), it seems to initialize a single run for the whole process, and that run doesn't change when a new parameter set starts.

I am calling wandb.init() and logging after each parameter run, but it still doesn't create a unique run each time.

This is the function I call:

def write_to_wandb(ldl_model_params, KLi, f1_macro):
    wandb.init(project="newjob1", entity="demou")
    wandb.config = ldl_model_params

    wandb_log = {
        "train KL": KLi,
        "train F1": f1_macro,
    }

    # logging accuracy
    wandb.log(wandb_log)

It is called from this train function (a high-level version of it). The train_model function is in turn called repeatedly by another hyperparameter function, once per hyperparameter set.


def train_model(ldl_model_params, X, Y):
    model = new_model(ldl_model_params)
    model.fit(X, Y)
    predict = model.transform(X)
    KLi, F1 = model.evaluate(predict, Y)
    write_to_wandb(ldl_model_params, KLi, F1)
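
For context, the outer hyperparameter function I mentioned looks roughly like this sketch. The grid keys and values here are placeholders, not my real settings; each generated dict is what gets passed to train_model as ldl_model_params.

```python
from itertools import product

# hypothetical grid; the keys and values are placeholders
param_grid = {"learning_rate": [0.01, 0.1], "n_components": [5, 10]}

def iter_param_sets(grid):
    """Yield one dict per hyperparameter combination."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

# each combination would be passed on: train_model(params, X, Y)
for params in iter_param_sets(param_grid):
    print(params)
```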

So how do I fix this? I want each call to train_model to be recorded in a new run.

I’m new to wandb so I have a feeling that I am not using it as it should be. Thanks.

Just had a chat with support and figured out how to fix the overwriting problem.

The issue was with the init function: there is a flag for reinitializing (reinit=True).

    wandb.init(project="newjob1", entity="demou", reinit=True)

This fixed the issue.
