Loss Functions¶
- mlcpl.losses.PartialNegativeBCEWithLogitLoss(alpha_pos: float = 1, alpha_neg: float = 1, normalize: bool = False, reduction: Literal['mean', 'sum', 'none'] = 'mean')¶
Get the binary cross-entropy loss function that treats unknown labels as negative.
- Args:
alpha_pos (float, optional): The weight of the positive loss term. Defaults to 1.
alpha_neg (float, optional): The weight of the negative loss term. Defaults to 1.
normalize (bool, optional): Whether to apply normalization to the losses (https://arxiv.org/pdf/1902.09720.pdf). Defaults to False.
reduction (str, optional): Specifies the reduction to apply to the output:
'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Defaults to 'mean'.
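The semantics can be sketched in plain Python (the `-1` sentinel for unannotated entries and the exact weighting are illustrative assumptions; mlcpl's actual implementation operates on tensors):

```python
import math

UNKNOWN = -1  # assumed sentinel for unannotated entries

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def partial_negative_bce(logits, labels, alpha_pos=1.0, alpha_neg=1.0):
    """Weighted BCE over all entries, treating unknown labels as negative."""
    losses = []
    for z, y in zip(logits, labels):
        p = sigmoid(z)
        if y == 1:
            losses.append(-alpha_pos * math.log(p))
        else:  # y == 0 or y == UNKNOWN: both handled as negatives
            losses.append(-alpha_neg * math.log(1.0 - p))
    return sum(losses) / len(losses)  # 'mean' reduction
```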
- mlcpl.losses.PartialBCEWithLogitLoss(alpha_pos: float = 1, alpha_neg: float = 1, normalize: bool = False, reduction: Literal['mean', 'sum', 'none'] = 'mean')¶
Get the binary cross-entropy loss function that ignores unknown labels (zero gradient for unknown entries).
- Args:
alpha_pos (float, optional): The weight of the positive loss term. Defaults to 1.
alpha_neg (float, optional): The weight of the negative loss term. Defaults to 1.
normalize (bool, optional): Whether to apply normalization to the losses (https://arxiv.org/pdf/1902.09720.pdf). Defaults to False.
reduction (str, optional): Specifies the reduction to apply to the output:
'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Defaults to 'mean'.
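The difference from the treat-as-negative variant is that unknown entries are skipped entirely, so they contribute neither loss nor gradient. A plain-Python sketch (the `-1` sentinel is an illustrative assumption):

```python
import math

UNKNOWN = -1  # assumed sentinel for unannotated entries

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def partial_bce(logits, labels, alpha_pos=1.0, alpha_neg=1.0):
    """Weighted BCE over annotated entries only."""
    losses = []
    for z, y in zip(logits, labels):
        if y == UNKNOWN:
            continue  # skipped entirely -> zero loss, zero gradient
        p = sigmoid(z)
        losses.append(-alpha_pos * math.log(p) if y == 1
                      else -alpha_neg * math.log(1.0 - p))
    return sum(losses) / max(len(losses), 1)  # 'mean' over known entries
```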
- mlcpl.losses.PartialSelectiveBCEWithLogitLoss(alpha_pos: float = 1, alpha_neg: float = 1, normalize: bool = False, reduction: Literal['mean', 'sum', 'none'] = 'mean', label_priors: Tensor | None = None, likelihood_topk: int = 5, prior_threshold: float = 0.05)¶
Get the class-aware selective loss function built on binary cross-entropy (https://arxiv.org/abs/2110.10955).
- Args:
alpha_pos (float, optional): The weight of the positive loss term. Defaults to 1.
alpha_neg (float, optional): The weight of the negative loss term. Defaults to 1.
normalize (bool, optional): Whether to apply normalization to the losses (https://arxiv.org/pdf/1902.09720.pdf). Defaults to False.
reduction (str, optional): Specifies the reduction to apply to the output:
'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Defaults to 'mean'.
label_priors (Tensor, optional): The label priors used by the class-aware selective loss. Defaults to None.
likelihood_topk (int, optional): The top-K likelihood parameter of the class-aware selective loss. Defaults to 5.
prior_threshold (float, optional): The threshold that determines the mode used for each category. Defaults to 0.05.
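The class-aware selection in the paper decides, per category, how to handle unannotated entries. A rough sketch of the prior-based rule only (the decision direction and any interaction with likelihood_topk are assumptions; the mlcpl implementation may differ):

```python
def select_modes(label_priors, prior_threshold=0.05):
    """Per-class handling of unannotated labels: classes whose estimated
    prior exceeds the threshold are likely to be truly present, so their
    unknowns are ignored; the rest are treated as negatives."""
    return ["ignore" if prior > prior_threshold else "negative"
            for prior in label_priors]
```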
- mlcpl.losses.PartialNegativeFocalWithLogitLoss(gamma: float = 1, alpha_pos: float = 1, alpha_neg: float = 1, normalize: bool = False, discard_focal_grad: bool = True, reduction: Literal['mean', 'sum', 'none'] = 'mean')¶
Get the focal loss function that treats unknown labels as negative.
- Args:
gamma (float, optional): The focusing parameter of the focal term. Defaults to 1.
alpha_pos (float, optional): The weight of the positive loss term. Defaults to 1.
alpha_neg (float, optional): The weight of the negative loss term. Defaults to 1.
normalize (bool, optional): Whether to apply normalization to the losses (https://arxiv.org/pdf/1902.09720.pdf). Defaults to False.
discard_focal_grad (bool, optional): Whether to discard the gradient of the focal term. Defaults to True.
reduction (str, optional): Specifies the reduction to apply to the output:
'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Defaults to 'mean'.
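The focal modulation itself can be sketched for a single entry (pure-Python illustration; discard_focal_grad corresponds to treating the (1 - p_t)^gamma factor as a constant during backpropagation):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def focal_bce(logit, target, gamma=1.0):
    """Focal-weighted BCE for one entry: the (1 - p_t)^gamma factor
    down-weights easy, confidently classified examples."""
    p = sigmoid(logit)
    p_t = p if target == 1 else 1.0 - p
    return -((1.0 - p_t) ** gamma) * math.log(p_t)
```

With gamma = 0 the modulation vanishes and the loss reduces to plain BCE.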
- mlcpl.losses.PartialFocalWithLogitLoss(gamma: float = 1, alpha_pos: float = 1, alpha_neg: float = 1, normalize: bool = False, discard_focal_grad: bool = True, reduction: Literal['mean', 'sum', 'none'] = 'mean')¶
Get the focal loss function that ignores unknown labels (zero gradient for unknown entries).
- Args:
gamma (float, optional): The focusing parameter of the focal term. Defaults to 1.
alpha_pos (float, optional): The weight of the positive loss term. Defaults to 1.
alpha_neg (float, optional): The weight of the negative loss term. Defaults to 1.
normalize (bool, optional): Whether to apply normalization to the losses (https://arxiv.org/pdf/1902.09720.pdf). Defaults to False.
discard_focal_grad (bool, optional): Whether to discard the gradient of the focal term. Defaults to True.
reduction (str, optional): Specifies the reduction to apply to the output:
'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Defaults to 'mean'.
- mlcpl.losses.PartialSelectiveFocalWithLogitLoss(gamma: float = 1, alpha_pos: float = 1, alpha_neg: float = 1, normalize: bool = False, discard_focal_grad: bool = True, reduction: Literal['mean', 'sum', 'none'] = 'mean', label_priors: Tensor | None = None, likelihood_topk: int = 5, prior_threshold: float = 0.05)¶
Get the class-aware selective loss function built on focal loss (https://arxiv.org/abs/2110.10955).
- Args:
gamma (float, optional): The focusing parameter of the focal term. Defaults to 1.
alpha_pos (float, optional): The weight of the positive loss term. Defaults to 1.
alpha_neg (float, optional): The weight of the negative loss term. Defaults to 1.
normalize (bool, optional): Whether to apply normalization to the losses (https://arxiv.org/pdf/1902.09720.pdf). Defaults to False.
discard_focal_grad (bool, optional): Whether to discard the gradient of the focal term. Defaults to True.
reduction (str, optional): Specifies the reduction to apply to the output:
'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Defaults to 'mean'.
label_priors (Tensor, optional): The label priors used by the class-aware selective loss. Defaults to None.
likelihood_topk (int, optional): The top-K likelihood parameter of the class-aware selective loss. Defaults to 5.
prior_threshold (float, optional): The threshold that determines the mode used for each category. Defaults to 0.05.
- mlcpl.losses.PartialAsymmetricWithLogitLoss(clip: float = 0, gamma_pos: float = 0, gamma_neg: float = 1, alpha_pos: float = 1, alpha_neg: float = 1, normalize: bool = False, discard_focal_grad: bool = True, reduction: Literal['mean', 'sum', 'none'] = 'mean')¶
Get the asymmetric loss function that ignores unknown labels (zero gradient for unknown entries) (https://openaccess.thecvf.com/content/ICCV2021/html/Ridnik_Asymmetric_Loss_for_Multi-Label_Classification_ICCV_2021_paper.html).
- Args:
clip (float, optional): The probability margin that discards easy negative labels. Defaults to 0.
gamma_pos (float, optional): The focusing parameter of the positive loss term. Defaults to 0.
gamma_neg (float, optional): The focusing parameter of the negative loss term. Defaults to 1.
alpha_pos (float, optional): The weight of the positive loss term. Defaults to 1.
alpha_neg (float, optional): The weight of the negative loss term. Defaults to 1.
normalize (bool, optional): Whether to apply normalization to the losses (https://arxiv.org/pdf/1902.09720.pdf). Defaults to False.
discard_focal_grad (bool, optional): Whether to discard the gradient of the focal term. Defaults to True.
reduction (str, optional): Specifies the reduction to apply to the output:
'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Defaults to 'mean'.
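The per-entry computation can be sketched in plain Python; the clip term shifts the negative probability so that easy negatives (p below clip) contribute nothing. This is an illustrative sketch of the asymmetric-loss formulation, not mlcpl's actual code:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def asymmetric_loss(logit, target, clip=0.0, gamma_pos=0.0, gamma_neg=1.0):
    """Asymmetric loss for one annotated entry. Positives get a standard
    focal weighting; negatives are probability-shifted by `clip` so that
    entries with p <= clip are discarded entirely."""
    p = sigmoid(logit)
    if target == 1:
        return -((1.0 - p) ** gamma_pos) * math.log(p)
    p_m = max(p - clip, 0.0)  # probability shifting
    return -(p_m ** gamma_neg) * math.log(1.0 - p_m)
```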
- mlcpl.losses.PartialSelectiveAsymmetricWithLogitLoss(clip: float = 0, gamma_pos: float = 0, gamma_neg: float = 1, alpha_pos: float = 1, alpha_neg: float = 1, normalize: bool = False, discard_focal_grad: bool = True, reduction: Literal['mean', 'sum', 'none'] = 'mean', label_priors: Tensor | None = None, likelihood_topk: int = 5, prior_threshold: float = 0.05)¶
Get the class-aware selective loss function built on the asymmetric loss (https://arxiv.org/abs/2110.10955).
- Args:
clip (float, optional): The probability margin that discards easy negative labels. Defaults to 0.
gamma_pos (float, optional): The focusing parameter of the positive loss term. Defaults to 0.
gamma_neg (float, optional): The focusing parameter of the negative loss term. Defaults to 1.
alpha_pos (float, optional): The weight of the positive loss term. Defaults to 1.
alpha_neg (float, optional): The weight of the negative loss term. Defaults to 1.
normalize (bool, optional): Whether to apply normalization to the losses (https://arxiv.org/pdf/1902.09720.pdf). Defaults to False.
discard_focal_grad (bool, optional): Whether to discard the gradient of the focal term. Defaults to True.
reduction (str, optional): Specifies the reduction to apply to the output:
'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Defaults to 'mean'.
label_priors (Tensor, optional): The label priors used by the class-aware selective loss. Defaults to None.
likelihood_topk (int, optional): The top-K likelihood parameter of the class-aware selective loss. Defaults to 5.
prior_threshold (float, optional): The threshold that determines the mode used for each category. Defaults to 0.05.
- mlcpl.losses.LargeLossRejection(loss_fn=PartialLoss( (lossfn_pos): FocalLossTerm() (lossfn_neg): FocalLossTerm() ), delta_rel=0.1, reduction='mean')¶
The Large Loss Rejection (LL-R) method from https://arxiv.org/abs/2206.03740.
- Args:
loss_fn (Callable, optional): The base loss function. Defaults to PartialNegativeBCEWithLogitLoss(reduction='none').
delta_rel (float, optional): The delta_rel hyperparameter. Defaults to 0.1.
reduction (str, optional): Specifies the reduction to apply to the output:
'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Defaults to 'mean'.
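The core rejection step can be sketched as follows. In the paper, rejection applies only to assumed-negative entries and the rejected fraction is scheduled across epochs via delta_rel; both details are omitted here, so this is an illustrative fragment rather than mlcpl's implementation:

```python
def reject_large_losses(losses, reject_frac):
    """LL-R step: zero out the top `reject_frac` fraction of per-entry
    losses so the (possibly false-negative) labels behind them stop
    contributing gradient."""
    k = int(len(losses) * reject_frac)
    if k == 0:
        return list(losses)
    threshold = sorted(losses, reverse=True)[k - 1]
    return [0.0 if loss >= threshold else loss for loss in losses]
```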
- mlcpl.losses.LargeLossCorrectionTemporary(loss_fn=PartialLoss( (lossfn_pos): FocalLossTerm() (lossfn_neg): FocalLossTerm() ), delta_rel=0.1, reduction='mean')¶
The Large Loss Correction (temporary, LL-Ct) method from https://arxiv.org/abs/2206.03740.
- Args:
loss_fn (Callable, optional): The base loss function. Defaults to PartialNegativeBCEWithLogitLoss(reduction='none').
delta_rel (float, optional): The delta_rel hyperparameter. Defaults to 0.1.
reduction (str, optional): Specifies the reduction to apply to the output:
'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Defaults to 'mean'.
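Where LL-R discards large losses, the temporary correction instead relabels them for the current batch. A sketch of that step under the same simplifications as above (per-epoch scheduling and the assumed-negative restriction omitted):

```python
def correct_large_losses(labels, losses, correct_frac):
    """LL-Ct step: temporarily flip the labels with the largest losses
    to positive for the current batch; the stored labels are left
    untouched ('temporary' correction)."""
    k = int(len(losses) * correct_frac)
    if k == 0:
        return list(labels)
    threshold = sorted(losses, reverse=True)[k - 1]
    return [1 if loss >= threshold else y
            for y, loss in zip(labels, losses)]
```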
- mlcpl.losses.PartialStrictlyProperAsymmetricWithLogitLoss(zeta_p=1, k_p=1, b_p=0, zeta_n=5, k_n=3, b_n=1, reduction='mean')¶
The strictly proper asymmetric loss that ignores unknown labels (0 gradient) (https://openaccess.thecvf.com/content/CVPR2024/html/Cheng_Towards_Calibrated_Multi-label_Deep_Neural_Networks_CVPR_2024_paper.html).
- Args:
zeta_p (int, optional): A hyperparameter. Defaults to 1.
k_p (int, optional): A hyperparameter. Defaults to 1.
b_p (int, optional): A hyperparameter. Defaults to 0.
zeta_n (int, optional): A hyperparameter. Defaults to 5.
k_n (int, optional): A hyperparameter. Defaults to 3.
b_n (int, optional): A hyperparameter. Defaults to 1.
reduction (str, optional): Specifies the reduction to apply to the output:
'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Defaults to 'mean'.