Confusion matrix not generating "Custom chart" entry

I’m using the following code (based on wandb.plot.confusion_matrix, since I accumulate my own confusion matrix per step with tf.math.confusion_matrix) to log a confusion matrix:

    data = []
    for i in range(n_classes):
        for j in range(n_classes):
            data.append([class_names[i], class_names[j], value[i, j]])

    fields = {
        "Actual": "Actual",
        "Predicted": "Predicted",
        "nPredictions": "nPredictions",
    }
    return wandb.plot_table(
        "wandb/confusion_matrix/v1",
        wandb.Table(columns=["Actual", "Predicted", "nPredictions"], data=data),
        fields,
        {"title": title},
    )

On one run I did get the custom chart, but the “Actual” labels (on the Y-axis) were laid out horribly, so I tweaked the code to generate different labels, and now the custom chart no longer appears. I tried to create my own custom chart by cloning the settings from the other confusion matrix, but the “OK” button is grayed out.

Are there some extra checks somewhere that decide whether or not to generate a confusion matrix report?

Hi @tbirch, thank you for reporting this, and sorry for the delay in this response. I am following up with you here after our discussion in the Support chat.

I have looked into the wandb-summary.json and the summary metrics for the two Runs. It appears that the table keys conf_mat_table and conf_mat_norm_table were not logged for the Run where the confusion matrix isn’t generated. The Vega spec and the code snippet above are correct, but the button is grayed out because it can’t query the data. Did you overwrite the Run, or was this experiment run only once?

The artifacts seem to be properly logged in both cases, and you can pull them into your Workspace using a Weave expression such as:
project("entity", "project-name_").artifact("run-runid-conf_mat_norm_table").membershipForAlias("v19").artifactVersion.file("conf_mat_norm_table.table.json")

Would you be interested in the option to download the data, generate the chart externally and then log it to this Run?
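For reference, fetching such a run table with the public API might look like the sketch below. The entity, project, run id, and alias are placeholders taken from the Weave expression above, and the helper function is hypothetical, not part of the wandb library:

```python
# Sketch only: fetching a logged run table via the W&B public API.
# "entity", "project-name", and "runid" below are placeholders.

def table_artifact_ref(entity, project, run_id, key, alias):
    # Hypothetical helper: run tables are stored as artifacts
    # named "run-<run id>-<table key>".
    return f"{entity}/{project}/run-{run_id}-{key}:{alias}"

ref = table_artifact_ref("entity", "project-name", "runid",
                         "conf_mat_norm_table", "v19")
# With wandb installed and authenticated (sketch, not run here):
# import wandb
# art = wandb.Api().artifact(ref, type="run_table")
# local_dir = art.download()  # contains conf_mat_norm_table.table.json
```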

I’d just like to know the correct code to log a “wandb/confusion_matrix/v1” given an N×N matrix of values and N labels. I don’t care about the data for these runs; I was only running them to test the confusion matrix.

Hi @tbirch, apologies for the late reply. Here is a working Colab that demonstrates how to log confusion matrices from your code. A minimal code example would be something like the following:

        import numpy as np
        import wandb

        wandb.init(project="confusion-matrix-demo")
        vals = np.random.uniform(size=(10, 5))
        probs = np.exp(vals) / np.sum(np.exp(vals), keepdims=True, axis=1)
        y_true = np.random.randint(0, 5, size=(10,))
        labels = ["Cat", "Dog", "Bird", "Fish", "Horse"]
        wandb.log({"confusion_matrix": wandb.plot.confusion_matrix(
            probs=probs, y_true=y_true, class_names=labels)})

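For the case asked about above, a precomputed N×N matrix of counts, a minimal sketch would flatten the matrix into the (Actual, Predicted, count) rows that the “wandb/confusion_matrix/v1” preset expects. The class names and counts below are made up for illustration, and the logging calls assume an active run:

```python
import numpy as np

class_names = ["Cat", "Dog", "Bird"]   # N labels (illustrative)
value = np.array([[5, 1, 0],           # N x N confusion counts (illustrative)
                  [0, 4, 2],
                  [1, 0, 6]])

# Flatten into the (Actual, Predicted, count) rows the preset expects.
data = [
    [class_names[i], class_names[j], int(value[i, j])]
    for i in range(len(class_names))
    for j in range(len(class_names))
]
# With an active run (sketch; requires `import wandb` and `wandb.init()`):
# table = wandb.Table(columns=["Actual", "Predicted", "nPredictions"], data=data)
# wandb.log({"conf_mat": wandb.plot_table(
#     "wandb/confusion_matrix/v1", table,
#     {"Actual": "Actual", "Predicted": "Predicted", "nPredictions": "nPredictions"},
#     {"title": "Confusion matrix"})})
```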
Please let me know if that would help, or if you still have issues or further questions about this.

Hi @tbirch, I wanted to follow up on this request: did the above Colab solve this, and are there any further questions I can help you with? Thanks!

Hi @tbirch, since we haven’t heard back from you, I am going to close this ticket for now. If you still experience any issues, please let us know and we will keep investigating.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.