About WandbModelCheckpoint: TypeError: ModelCheckpoint.__init__() got an unexpected keyword argument 'options'

history = model.fit(
    normed_train_data,
    train_labels,
    epochs=config.epochs,
    batch_size=config.batch_size,
    verbose=0,
    shuffle=True,
    steps_per_epoch=int(normed_train_data.shape[0] / config.batch_size),
    validation_data=(normed_valid_dataset, valid_labels),
    callbacks=[
        WandbMetricsLogger(),
        WandbModelCheckpoint(filepath="my_model_{epoch:02d}"),
    ],
)

TypeError                                 Traceback (most recent call last)
File :16

File /opt/conda/lib/python3.10/site-packages/wandb/integration/keras/callbacks/model_checkpoint.py:94, in WandbModelCheckpoint.__init__(self, filepath, monitor, verbose, save_best_only, save_weights_only, mode, save_freq, options, initial_value_threshold, **kwargs)
     81 def __init__(
     82     self,
     83     filepath: StrPath,
    (...)
     92     **kwargs: Any,
     93 ) -> None:
---> 94     super().__init__(
     95         filepath=filepath,
     96         monitor=monitor,
     97         verbose=verbose,
     98         save_best_only=save_best_only,
     99         save_weights_only=save_weights_only,
    100         mode=mode,
    101         save_freq=save_freq,
    102         options=options,
    103         initial_value_threshold=initial_value_threshold,
    104         **kwargs,
    105     )
    106     if wandb.run is None:
    107         raise wandb.Error(
    108             "You must call wandb.init() before WandbModelCheckpoint()"
    109         )

TypeError: ModelCheckpoint.__init__() got an unexpected keyword argument 'options'

Is there an updated version for implementing the wandb callbacks?

Hello! This is a reply from our support bot, designed to assist you with your Weights & Biases related queries. To reach a human, please reply to this message.

The error you’re encountering, TypeError: ModelCheckpoint.__init__() got an unexpected keyword argument 'options', means that WandbModelCheckpoint is forwarding an argument (options) that the underlying Keras ModelCheckpoint does not accept. This is typically due to a mismatch between the versions of wandb and Keras you are using, since the options parameter is only supported by some versions of Keras’s ModelCheckpoint.

To resolve this issue, make sure you are only passing arguments that WandbModelCheckpoint accepts, as listed in the documentation. Here’s how you can correctly use WandbModelCheckpoint:

WandbModelCheckpoint(
    filepath="my_model_{epoch:02d}",
    monitor="val_loss",  # or another metric you wish to monitor
    verbose=1,
    save_best_only=False,
    save_weights_only=False,
    mode="auto",
    save_freq="epoch",
    initial_value_threshold=None
)

Make sure you are not passing an options argument to WandbModelCheckpoint yourself, as it is not recognized in this setup.

Additionally, make sure wandb.init() is called before using WandbModelCheckpoint so that your W&B run is properly initialized. This is crucial for the callback to function correctly, as it relies on the active run to log the model checkpoints.
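For example, the expected ordering looks roughly like this (the project name and config values below are placeholders, not taken from your setup):

import wandb
from wandb.integration.keras import WandbMetricsLogger, WandbModelCheckpoint

# Start the W&B run first; the callbacks rely on wandb.run being active.
wandb.init(project="my-project", config={"epochs": 10, "batch_size": 32})
config = wandb.config

model.fit(
    normed_train_data,
    train_labels,
    epochs=config.epochs,
    batch_size=config.batch_size,
    validation_data=(normed_valid_dataset, valid_labels),
    callbacks=[
        WandbMetricsLogger(),
        WandbModelCheckpoint(filepath="my_model_{epoch:02d}"),
    ],
)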

If you are looking for updated versions of Wandb callbacks or additional features, it’s a good practice to check the official Weights & Biases Documentation for the latest updates and usage guidelines. The documentation is regularly updated to reflect new features and improvements.

In summary, to resolve the TypeError:

  • Remove the options parameter from the WandbModelCheckpoint initialization.
  • Ensure wandb.init() is called before using WandbModelCheckpoint.
  • Check the official documentation for the latest usage guidelines and features.

If you continue to experience issues or if there have been recent updates, consulting the official documentation or reaching out to the Weights & Biases support team may provide additional assistance.

To reach a human, please reply to this message.

-WandBot :robot:

I have been having the same problem and get the same error, even when my WandbModelCheckpoint object is initialized as

WandbModelCheckpoint("model-{epoch:02d}-{val_loss:.2f}")

with no other arguments. Where is it getting the "options" argument from? Is there a solution to this at the moment?

Thank you!

Dear Mohamed and Equara2,

Thank you for reaching out to Weights and Biases!

Which version of Keras are you both respectively using?

Thank you,
Vitoria

I have the same issue.
These are my versions:

keras.__version__ == '3.3.3'
wandb.__version__ == "0.17.0"

This reproduces the issue:

from wandb.integration.keras import WandbModelCheckpoint
WandbModelCheckpoint("models")

I went to the class definition and commented out the options parameter from the super() call, and it seems to work now:

def __init__(
        self,
        filepath: StrPath,
        monitor: str = "val_loss",
        verbose: int = 0,
        save_best_only: bool = False,
        save_weights_only: bool = False,
        mode: Mode = "auto",
        save_freq: Union[SaveStrategy, int] = "epoch",
        options: Optional[str] = None,
        initial_value_threshold: Optional[float] = None,
        **kwargs: Any,
    ) -> None:
        super().__init__(
            filepath=filepath,
            monitor=monitor,
            verbose=verbose,
            save_best_only=save_best_only,
            save_weights_only=save_weights_only,
            mode=mode,
            save_freq=save_freq,
            #options=options,     <------- I MODIFIED THIS LINE
            initial_value_threshold=initial_value_threshold,
            **kwargs,
        )


Thank you Mauhcs for reaching out!

If you are using Keras 3, then this is a known bug on our side; we are currently working on a PR to address it.

Please try downgrading to a previous version of Keras for now, and let us know if you still see this issue. Otherwise, what you tried seems to be a feasible workaround.
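If downgrading is not convenient, one possible stopgap, sketched here rather than the official integration, is to use Keras's own ModelCheckpoint (which does not take an options argument in Keras 3) alongside WandbMetricsLogger, and upload the saved files to the run with wandb.save(). Note that Keras 3 expects the checkpoint filepath to end in .keras. The project name and training data below are placeholders:

import keras
import wandb
from wandb.integration.keras import WandbMetricsLogger

run = wandb.init(project="my-project")  # placeholder project name

callbacks = [
    WandbMetricsLogger(),
    # Plain Keras callback: no `options` kwarg is forwarded, so no TypeError.
    keras.callbacks.ModelCheckpoint(filepath="model-{epoch:02d}.keras"),
]

model.fit(x_train, y_train, epochs=5, callbacks=callbacks)  # placeholder data

# Upload the saved checkpoints to the active W&B run.
wandb.save("model-*.keras")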

Our engineering team is currently working on this, and we will keep you updated!