This post states that ‘Cross entropy loss is a metric used to measure how well a classification model in machine learning performs. The loss (or error) is measured as a number between 0 and 1, with 0 being a perfect model. The goal is generally to get your model as close to 0 as possible.’ But as far as I understand, there is no upper bound for cross-entropy loss, since it is just the KL divergence shifted by a constant (the entropy of the true distribution).
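A quick sketch to illustrate the point: even for binary classification, a single confidently wrong prediction pushes the cross-entropy loss well above 1 (the values below are from the standard binary cross-entropy formula, not from the post).

```python
import math

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """Binary cross-entropy for a single example.

    Clamps p_pred away from 0 and 1 to avoid log(0).
    """
    p_pred = min(max(p_pred, eps), 1 - eps)
    return -(y_true * math.log(p_pred) + (1 - y_true) * math.log(1 - p_pred))

# True label is 1, but the model predicts probability 0.1:
# loss = -log(0.1) ≈ 2.30, already greater than 1.
print(binary_cross_entropy(1.0, 0.1))

# The more confidently wrong the model is, the larger the loss grows:
# -log(0.001) ≈ 6.91, with no upper bound as p_pred -> 0.
print(binary_cross_entropy(1.0, 0.001))
```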
Hi @zyzhang, thanks for pointing this out! I’d say the upper bound of 1 only applies for binary classification but not for multilabel classification, so this should probably be modified. I’ll ask internally about this!
Hi, is there any update on this issue?
Hi @zyzhang, apologies for the delay here. This has been reviewed internally and will be updated. Thanks!