I am running into the following problem in Stable Diffusion-based code: TypeError: first argument must be callable or None

Problem at: /home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/pytorch_lightning/loggers/wandb.py 193 experiment
Traceback (most recent call last):
  File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/wandb/sdk/wandb_init.py", line 1170, in init
    run = wi.init()
  File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/wandb/sdk/wandb_init.py", line 629, in init
    run = Run(
  File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/wandb/sdk/wandb_run.py", line 566, in __init__
    self._init(
  File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/wandb/sdk/wandb_run.py", line 698, in _init
    self._config._update(config, ignore_locked=True)
  File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/wandb/sdk/wandb_config.py", line 177, in _update
    sanitized = self._sanitize_dict(
  File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/wandb/sdk/wandb_config.py", line 237, in _sanitize_dict
    k, v = self._sanitize(k, v, allow_val_change)
  File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/wandb/sdk/wandb_config.py", line 255, in _sanitize
    val = json_friendly_val(val)
  File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/wandb/util.py", line 671, in json_friendly_val
    converted = asdict(val)
  File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/dataclasses.py", line 1073, in asdict
    return _asdict_inner(obj, dict_factory)
  File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/dataclasses.py", line 1080, in _asdict_inner
    value = _asdict_inner(getattr(obj, f.name), dict_factory)
  File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/dataclasses.py", line 1110, in _asdict_inner
    return type(obj)((_asdict_inner(k, dict_factory),
TypeError: first argument must be callable or None

I verified that the config looks fine, as shown below:

{'target': 'pytorch_lightning.loggers.WandbLogger', 'params': {'project': 'tcga-brca', 'name': '06-11T11-10_plip_imagenet_finetune_PanNuke', 'save_dir': 'logs/06-11T11-10_plip_imagenet_finetune_PanNuke', 'offline': False, 'id': '06-11T11-10_plip_imagenet_finetune_PanNuke', 'resume': None, 'config': {'name': '', 'resume': '', 'base': ['/home/csgrad/mbhosale/phd/Pathdiff/PathLDM/configs/latent-diffusion/mask_cond/plip_imagenet_finetune_PanNuke.yaml'], 'train': True, 'no_test': False, 'project': None, 'debug': False, 'seed': 23, 'postfix': '', 'logdir': 'logs', 'scale_lr': False, 'wandb_name': None, 'wandb_id': None, 'logger': True, 'checkpoint_callback': True, 'default_root_dir': None, 'gradient_clip_val': 0.0, 'gradient_clip_algorithm': 'norm', 'process_position': 0, 'num_nodes': 1, 'num_processes': 1, 'devices': None, 'gpus': None, 'auto_select_gpus': False, 'tpu_cores': None, 'ipus': None, 'log_gpu_memory': None, 'progress_bar_refresh_rate': None, 'overfit_batches': 0.0, 'track_grad_norm': -1, 'check_val_every_n_epoch': 1, 'fast_dev_run': False, 'accumulate_grad_batches': 1, 'max_epochs': None, 'min_epochs': None, 'max_steps': None, 'min_steps': None, 'max_time': None, 'limit_train_batches': 1.0, 'limit_val_batches': 1.0, 'limit_test_batches': 1.0, 'limit_predict_batches': 1.0, 'val_check_interval': 1.0, 'flush_logs_every_n_steps': 100, 'log_every_n_steps': 50, 'accelerator': None, 'sync_batchnorm': False, 'precision': 32, 'weights_summary': 'top', 'weights_save_path': None, 'num_sanity_val_steps': 2, 'truncated_bptt_steps': None, 'resume_from_checkpoint': None, 'profiler': None, 'benchmark': False, 'deterministic': False, 'reload_dataloaders_every_n_epochs': 0, 'reload_dataloaders_every_epoch': False, 'auto_lr_find': False, 'replace_sampler_ddp': True, 'terminate_on_nan': False, 'auto_scale_batch_size': False, 'prepare_data_per_node': True, 'plugins': None, 'amp_backend': 'native', 'amp_level': 'O2', 'distributed_backend': None, 'move_metrics_to_cpu': False, 'multiple_trainloader_mode': 'max_size_cycle', 'stochastic_weight_avg': False}}}
I am clueless as to how to debug this further. Any pointers are genuinely appreciated.

Hello! This is a reply from our support bot, designed to assist you with your Weights & Biases related queries. To reach a human, please reply to this message.

The error you’re encountering, TypeError: first argument must be callable or None, typically occurs when the Weights & Biases SDK attempts to serialize an object that it doesn’t know how to handle. This can happen during the configuration update process if there’s an object within your configuration that isn’t directly serializable to JSON.
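For what it's worth, the `dataclasses.asdict` frames at the bottom of the traceback match a known Python quirk: calling `asdict` on a dataclass that contains a `collections.defaultdict` raises this exact `TypeError`, because `asdict` tries to rebuild each dict via `type(obj)(generator)`, and `defaultdict`'s first positional argument must be a callable or `None`. A minimal sketch (the `Cfg` class here is hypothetical, and the failure is observed on Python 3.8; newer versions may handle this case):

```python
import dataclasses
from collections import defaultdict

@dataclasses.dataclass
class Cfg:
    # Hypothetical field: any defaultdict nested in a dataclass triggers the issue
    counts: dict

cfg = Cfg(counts=defaultdict(int))

try:
    dataclasses.asdict(cfg)  # internally calls type(obj)(generator) on the dict
except TypeError as exc:
    # On Python 3.8 this prints: first argument must be callable or None
    print(exc)
```

So if any value in your nested config (or in an object wandb tries to convert) is a `defaultdict` or a similar dict subclass, replacing it with a plain `dict(...)` before logging may avoid the crash.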

Here are some steps and considerations to help you troubleshoot and resolve this issue:

  1. Review Configuration Objects: Ensure that all objects within your configuration are simple data types (e.g., int, float, str, dict, list) that are easily serializable to JSON. Complex objects or instances of custom classes should be avoided unless they are explicitly supported.

  2. Custom Objects Serialization: If your configuration must include complex objects, consider converting these to a serializable form before passing them to the configuration. For example, if you have a custom class instance, you might convert it to a dictionary of its properties.

  3. Use Dataclasses Carefully: If you’re using dataclasses or similar constructs, ensure that they are fully compatible with the serialization process. The error stack trace you provided suggests that the issue might be related to dataclass serialization. You might need to implement a custom method to serialize these objects to a dictionary before including them in the configuration.

  4. Debugging Serialization Issues: To identify the specific object causing the issue, you could manually attempt to serialize your configuration dictionary using json.dumps() or a similar method. This might help you pinpoint the exact object or key-value pair that’s causing the serialization to fail.

  5. Simplify Configuration: As a temporary measure to isolate the issue, try simplifying your configuration to the most basic form and gradually add back elements until you identify the problematic component.

  6. PyTorch Lightning Integration: Since you’re using pytorch_lightning.loggers.WandbLogger, ensure that the integration is set up correctly. PyTorch Lightning should automatically handle most of the serialization for you, so the issue might be with how the configuration is passed or updated.

  7. Consult Documentation: Review the Weights & Biases and PyTorch Lightning documentation for any updates or best practices related to configuration management and logging.

  8. Update Libraries: Ensure that both wandb and pytorch_lightning are updated to their latest versions, as updates might contain fixes for issues related to serialization and integration.
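As a sketch of step 3 above, a dataclass can be converted to a plain dict of simple types before it goes anywhere near the logger config (the `TrainerOpts` class and its fields are made up for illustration):

```python
import dataclasses

@dataclasses.dataclass
class TrainerOpts:
    # Hypothetical options object that might otherwise end up in a wandb config
    precision: int = 32
    amp_backend: str = "native"

opts = TrainerOpts()

# Convert to a plain dict of JSON-friendly values before logging
config = {"trainer": dataclasses.asdict(opts)}
print(config)  # {'trainer': {'precision': 32, 'amp_backend': 'native'}}
```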

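To make step 4 concrete, here is one way (a standalone sketch, not a wandb API) to walk a nested config and collect the paths of every value that `json.dumps` rejects:

```python
import json

def find_unserializable(obj, path="config", bad=None):
    """Recursively collect the paths of values json.dumps cannot handle."""
    if bad is None:
        bad = []
    if isinstance(obj, dict):
        for k, v in obj.items():
            find_unserializable(v, f"{path}[{k!r}]", bad)
    elif isinstance(obj, (list, tuple)):
        for i, v in enumerate(obj):
            find_unserializable(v, f"{path}[{i}]", bad)
    else:
        try:
            json.dumps(obj)
        except TypeError:
            bad.append(path)
    return bad

# Example: an object() buried in a nested dict is flagged by its path
print(find_unserializable({"a": 1, "b": {"c": object()}}))  # ["config['b']['c']"]
```

Running this on your real config dict before passing it to `WandbLogger` should point you straight at the offending key.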
Here’s a simplified example of how you might adjust your configuration to avoid serialization issues:

import json

def is_serializable(value):
    """Return True if `value` can be serialized directly to JSON."""
    try:
        json.dumps(value)
        return True
    except (TypeError, ValueError):
        return False

# Assuming `config` is your original configuration dictionary,
# keep only the entries that serialize cleanly
config_serializable = {k: v for k, v in config.items() if is_serializable(v)}
logger = WandbLogger(config=config_serializable)

Here, is_serializable simply tests each value with json.dumps. Note that this filters only top-level keys, so for nested dictionaries you may need to apply the same check recursively.

Remember, the key is to ensure that all elements of the configuration passed to Weights & Biases are in a format that can be serialized without issues. If the problem persists, consider reaching out to Weights & Biases support with details of the issue for more targeted assistance.


-WandBot :robot: