XGBoost evaluation metrics show up in Workspace, but not in run overview


I have written code for an XGBoost model. Everything seems to be working, and everything I logged appears in a nice plot in my Workspace. However, the metrics that are not logged by WandbCallback are not showing up in my Run overview as columns.

Even when I search through my columns in the ‘Manage Columns’ window, I cannot find these columns.

Here is my code:

import os

import pandas as pd
import wandb
import xgboost as xgb
from wandb.integration.xgboost import WandbCallback

# setup wandb
wandb.init(project='subscriptionPrediction', entity='churn-vhc', config=args, tags=args.tags)

# create a folder for the wandb logs
args.logging_folder = f'./wandb_logs/{wandb.run.name}'
os.makedirs(args.logging_folder, exist_ok=True)

# load data
X_train, X_test, y_train, y_test = prep_data(pd.read_csv(r".\src\data\cleaned_data_03-27.csv"), args)

# initializing model
model = xgb.XGBClassifier(**params, callbacks=[WandbCallback()])

# training model
model.fit(X_train, y_train, eval_set=[(X_train, y_train), (X_test, y_test)])

# get train and validation predictions
train_pred = model.predict(X_train)
test_pred = model.predict(X_test)

# get metrics
train_metrics = getMetrics(y_train, train_pred)
test_metrics = getMetrics(y_test, test_pred)

# log to wandb
wandb.log({'train_metrics': train_metrics})
wandb.log({'test_metrics': test_metrics})

# close wandb run
wandb.finish()
Anyone know what is up or how I could fix this? Thanks in advance!

Hey Celi,

Thanks for messaging in and sharing a code snippet – it looks like there are a couple of different things going on here.

I wonder whether your train metrics are single values or a dictionary – if getMetrics returns a dict, then the nesting might be the cause here.
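To illustrate the difference, here is a minimal sketch – the 'accuracy' and 'f1' names below are placeholders standing in for whatever getMetrics actually returns:

```python
# Hypothetical metrics dict, standing in for the output of getMetrics()
train_metrics = {"accuracy": 0.91, "f1": 0.87}

# wandb.log({'train_metrics': train_metrics}) logs the whole dict as one
# nested entry, so the individual metrics are hidden under that nesting.
# Flattening into top-level keys gives each metric its own chance to
# surface as a run-overview column:
flat = {f"train_{name}": value for name, value in train_metrics.items()}
# flat == {'train_accuracy': 0.91, 'train_f1': 0.87}
# wandb.log(flat)
```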

To get you back on track:

First, I wonder if you’d object to replacing

wandb.init(project='subscriptionPrediction', entity='churn-vhc', config=args, tags=args.tags)

with

run = wandb.init(project='subscriptionPrediction', entity='churn-vhc', config=args, tags=args.tags)

and then using run in place of wandb throughout your code.

If you are looking to log single values at the end of your run, would you also object to giving the below a try for each metric you are looking to log:

run.summary['some_metric'] = train_metrics['some_metric']

followed by

run.finish()
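Putting those pieces together, here is a hedged sketch of the summary-logging step – the metric dicts are placeholders for what your getMetrics helper returns, and the flattening loop just builds the per-metric keys before handing them to the run:

```python
# Placeholder dicts standing in for the output of getMetrics()
train_metrics = {"accuracy": 0.91, "f1": 0.87}
test_metrics = {"accuracy": 0.84, "f1": 0.79}

# Flatten both splits into one dict of summary entries...
summary_updates = {}
for split, metrics in [("train", train_metrics), ("test", test_metrics)]:
    for name, value in metrics.items():
        summary_updates[f"{split}_{name}"] = value

# ...then, with run = wandb.init(...) as above:
# run.summary.update(summary_updates)
# run.finish()
```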
Finally, here is a useful XGBoost getting-started Colab:
It uses some of the techniques shared above.

I hope this helps, and I look forward to hearing back from you.



Hey Celi,

Wanted to follow up and check whether the above has been of assistance to you – please let me know if you have any further questions or need more help here.



Hi Celi, since we have not heard back from you, we are going to close this request. If you would like to re-open the conversation, please let us know!