Getting `AttributeError: 'NoneType' object has no attribute '_log'` when trying to run the test set

Framework: PyTorch
wandb version: 0.13.3
Workspace: Google Colab

config = dict(
    dropout = 0.4,
    train_batch = 3,
    val_batch = 1,
    test_batch = 1,
    learning_rate = 0.001,
    epochs = 5,
    architecture = "CNN",
    model_name = "efficientnet-b0",
    infra = "Colab",
    dataset="dysphagia_dataset2"
    )

My test function

def test_model():
    running_correct = 0.0
    running_total = 0.0
    true_labels = []
    pred_labels = []
    with torch.no_grad():
        for data in dataloaders[TEST]:
            inputs, labels = data
            inputs = inputs.to(device)
            labels = labels.to(device)
            # test_batch is 1, so .item() returns the single label/prediction
            true_labels.append(labels.item())
            outputs = model_ft(inputs)
            _, preds = torch.max(outputs.data, 1)
            pred_labels.append(preds.item())
            running_total += labels.size(0)
            running_correct += (preds == labels).sum().item()
        acc = running_correct / running_total
    return (true_labels, pred_labels, running_correct, running_total, acc)


true_labels, pred_labels, running_correct, running_total, acc = test_model()

Error

AttributeError                            Traceback (most recent call last)

<ipython-input-26-b7dbeaddcbbb> in <module>
----> 1 true_labels, pred_labels, running_correct, running_total, acc = test_model()
      2 

4 frames

/usr/local/lib/python3.7/dist-packages/wandb/wandb_torch.py in log_tensor_stats(self, tensor, name)
    254             bins = torch.Tensor(bins_np)
    255 
--> 256         wandb.run._log(
    257             {name: wandb.Histogram(np_histogram=(tensor.tolist(), bins.tolist()))},
    258             commit=False,

AttributeError: 'NoneType' object has no attribute '_log'
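
For reference, the attribute in the last frame is looked up on the global wandb.run, which wandb only sets while a run is active; a minimal illustration (not my actual code) of why it can be None:

import wandb

run = wandb.init(project="dysphagia_image_classification", job_type="train")
print(wandb.run is run)   # True while the run is active
run.finish()
print(wandb.run)          # None once the run has finished, so wandb.run._log fails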

This is how I initialize training:

model_ft = train_model(model_ft, 
                       criterion, 
                       optimizer_ft,
                       config
                       )

My wandb init:

wandb.init(config=config,
           name='efficientnet0+albumentions',
           group='pytorch-efficientnet-baseline', 
           project='dysphagia_image_classification',
           job_type='train')
config = wandb.config
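
After this rebinding, the hyperparameters are read back through attribute access on wandb.config, which is how the dropout value ends up in the model head shown further down; a quick illustration (values taken from the config above):

# Values passed via `config=` are exposed as attributes on wandb.config,
# e.g. the dropout used when building the model head:
print(config.dropout)        # 0.4
print(config.learning_rate)  # 0.001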

Finally caught my mistake 🙂

model_ft._fc = nn.Sequential(
    nn.BatchNorm1d(num_features=num_ftrs),    
    nn.Linear(num_ftrs, 512),
    nn.ReLU(),
    nn.BatchNorm1d(512),
    nn.Linear(512, 128),
    nn.ReLU(),
    nn.BatchNorm1d(num_features=128),
    nn.Dropout(p=config.dropout), # Error due to this
    nn.Linear(128, 2),
    )

model_ft = model_ft.to(device)

I was calling my test function outside of the wandb run (I only used wandb for training), and wandb must have called .finish(), which set my config to None, since I had rebound my config dict to wandb.config.

Now, my model uses one of the config values (dropout), but because I passed my config into wandb.config, it was set to None after the model finished training. So when my test function ran the model, the dropout hyperparameter value was None!
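
If it helps anyone else, here is a rough sketch (not my exact code) of an arrangement that avoids the problem: evaluate while the run is still active and log the test metric before finishing, so nothing downstream depends on wandb.config or wandb.run after the run has ended.

run = wandb.init(config=config,
                 name='efficientnet0+albumentions',
                 group='pytorch-efficientnet-baseline',
                 project='dysphagia_image_classification',
                 job_type='train')

model_ft = train_model(model_ft, criterion, optimizer_ft, wandb.config)

# Evaluate while the run is still active, so wandb's hooks (e.g. from
# wandb.watch) still have a live run to log to.
true_labels, pred_labels, running_correct, running_total, acc = test_model()
wandb.log({"test_acc": acc})

run.finish()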

Hi @vishu, thank you for letting us know that you successfully resolved your issue.
