ELEmModule

class mowl.nn.ELEmModule(nb_ont_classes, nb_rels, nb_inds=None, embed_dim=50, margin=0.1, reg_norm=1)[source]

Bases: ELModule

Implementation of ELEmbeddings from [kulmanov2019].

Changed in version 0.4.0: The class ELEmModule now accepts an optional parameter nb_inds.
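
A minimal construction sketch (not part of the original reference) is shown below; the class, relation and individual counts are illustrative placeholders that would normally be derived from the ontology, e.g. through the ELDataset.

    from mowl.nn import ELEmModule

    # Illustrative counts; in practice they come from the ontology.
    module = ELEmModule(
        nb_ont_classes=100,  # number of ontology classes
        nb_rels=5,           # number of object properties
        nb_inds=20,          # number of individuals (optional since 0.4.0)
        embed_dim=50,
        margin=0.1,
        reg_norm=1,
    )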

Methods Summary

class_assertion_loss(data[, neg])

Loss function for class assertion: \(C(a)\).

gci0_bot_loss(data[, neg])

Loss function for GCI0 with bottom concept: \(C \sqsubseteq \perp\).

gci0_loss(data[, neg])

Loss function for GCI0: \(C \sqsubseteq D\).

gci1_bot_loss(data[, neg])

Loss function for GCI1 with bottom concept: \(C_1 \sqcap C_2 \sqsubseteq \perp\).

gci1_loss(data[, neg])

Loss function for GCI1: \(C_1 \sqcap C_2 \sqsubseteq D\).

gci2_loss(data[, neg, idxs_for_negs])

Loss function for GCI2: \(C \sqsubseteq \exists R.D\).

gci2_score(data)

gci3_bot_loss(data[, neg])

Loss function for GCI3 with bottom concept: \(\exists R.C \sqsubseteq \perp\).

gci3_loss(data[, neg])

Loss function for GCI3: \(\exists R.C \sqsubseteq D\).

object_property_assertion_loss(data[, neg])

Loss function for role assertion: \(R(a,b)\).

regularization_loss()

Methods Documentation

class_assertion_loss(data, neg=False)[source]

Loss function for class assertion: \(C(a)\).

Parameters:
  • axiom_data (torch.Tensor) – Input tensor of shape (*,2) where C classes will be at axiom_data[:,0] and a individuals will be at axiom_data[:,1]. It is recommended to use the ELDataset.

  • neg (bool, optional.) – Parameter indicating that the negative version of this loss function must be used. Defaults to False.
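
A hedged usage sketch, assuming the module was constructed with nb_inds set so that individual embeddings exist; the indices below are placeholders rather than entities of a real ontology.

    import torch as th
    from mowl.nn import ELEmModule

    module = ELEmModule(nb_ont_classes=100, nb_rels=5, nb_inds=20)

    # Each row encodes one assertion C(a): [class index, individual index].
    axiom_data = th.tensor([[0, 3], [7, 11]])   # shape (2, 2)
    loss = module.class_assertion_loss(axiom_data)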

gci0_bot_loss(data, neg=False)[source]

Loss function for GCI0 with bottom concept: \(C \sqsubseteq \perp\).

Parameters:
  • gci (torch.Tensor) – Input tensor of shape (*,2) where C classes will be at gci[:,0] and bottom classes will be at gci[:,1]. It is recommended to use the ELDataset.

  • neg (bool, optional.) – Parameter indicating that the negative version of this loss function must be used. Defaults to False.

gci0_loss(data, neg=False)[source]

Loss function for GCI0: \(C \sqsubseteq D\).

Parameters:
  • gci (torch.Tensor) – Input tensor of shape (*,2) where C classes will be at gci[:,0] and D classes will be at gci[:,1]. It is recommended to use the ELDataset.

  • neg (bool, optional.) – Parameter indicating that the negative version of this loss function must be used. Defaults to False.
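
A minimal sketch under the same placeholder assumptions; the pairs of class indices would normally be produced by the ELDataset.

    import torch as th
    from mowl.nn import ELEmModule

    module = ELEmModule(nb_ont_classes=100, nb_rels=5)

    # Each row encodes one axiom C ⊑ D: [C index, D index].
    gci0_data = th.tensor([[0, 1], [2, 3]])     # shape (2, 2)
    pos_loss = module.gci0_loss(gci0_data)
    neg_loss = module.gci0_loss(gci0_data, neg=True)  # negative version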

gci1_bot_loss(data, neg=False)[source]

Loss function for GCI1 with bottom concept: \(C_1 \sqcap C_2 \sqsubseteq \perp\).

Parameters:
  • gci (torch.Tensor) – Input tensor of shape (*,3) where C1 classes will be at gci[:,0], C2 classes will be at gci[:,1] and bottom classes will be at gci[:,2]. It is recommended to use the ELDataset.

  • neg (bool, optional.) – Parameter indicating that the negative version of this loss function must be used. Defaults to False.

gci1_loss(data, neg=False)[source]

Loss function for GCI1: \(C_1 \sqcap C_2 \sqsubseteq D\).

Parameters:
  • gci (torch.Tensor) – Input tensor of shape (*,3) where C1 classes will be at gci[:,0], C2 classes will be at gci[:,1] and D classes will be at gci[:,2]. It is recommended to use the ELDataset.

  • neg (bool, optional.) – Parameter indicating that the negative version of this loss function must be used. Defaults to False.

gci2_loss(data, neg=False, idxs_for_negs=None)[source]

Loss function for GCI2: \(C \sqsubseteq \exists R.D\).

Parameters:
  • gci (torch.Tensor) – Input tensor of shape (*,3) where C classes will be at gci[:,0], R object properties will be at gci[:,1] and D classes will be at gci[:,2]. It is recommended to use the ELDataset.

  • neg (bool, optional.) – Parameter indicating that the negative version of this loss function must be used. Defaults to False.
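
A hedged sketch with placeholder indices; the middle column holds object property indices, and idxs_for_negs is left at its default since its behaviour is not documented here.

    import torch as th
    from mowl.nn import ELEmModule

    module = ELEmModule(nb_ont_classes=100, nb_rels=5)

    # Each row encodes one axiom C ⊑ ∃R.D: [C index, R index, D index].
    gci2_data = th.tensor([[0, 2, 1], [4, 0, 3]])   # shape (2, 3)
    loss = module.gci2_loss(gci2_data)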

gci2_score(data)[source]

gci3_bot_loss(data, neg=False)[source]

Loss function for GCI3 with bottom concept: \(\exists R.C \sqsubseteq \perp\).

Parameters:
  • gci (torch.Tensor) – Input tensor of shape (*,3) where R object properties will be at gci[:,0], C classes will be at gci[:,1] and bottom classes will be at gci[:,2]. It is recommended to use the ELDataset.

  • neg (bool, optional.) – Parameter indicating that the negative version of this loss function must be used. Defaults to False.

gci3_loss(data, neg=False)[source]

Loss function for GCI3: \(\exists R.C \sqsubseteq D\).

Parameters:
  • gci (torch.Tensor) – Input tensor of shape (*,3) where R object properties will be at gci[:,0], C classes will be at gci[:,1] and D classes will be at gci[:,2]. It is recommended to use the ELDataset.

  • neg (bool, optional.) – Parameter indicating that the negative version of this loss function must be used. Defaults to False.

object_property_assertion_loss(data, neg=False)[source]

Loss function for role assertion: \(R(a,b)\).

Parameters:
  • axiom_data (torch.Tensor) – Input tensor of shape (*,3) where a individuals will be at axiom_data[:,0], R object properties will be at axiom_data[:,1] and b individuals will be at axiom_data[:,2]. It is recommended to use the ELDataset.

  • neg (bool, optional.) – Parameter indicating that the negative version of this loss function must be used. Defaults to False.
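
A hedged sketch with placeholder indices, assuming nb_inds was passed to the constructor so that individual embeddings are available.

    import torch as th
    from mowl.nn import ELEmModule

    module = ELEmModule(nb_ont_classes=100, nb_rels=5, nb_inds=20)

    # Each row encodes one assertion R(a, b): [a index, R index, b index].
    axiom_data = th.tensor([[0, 1, 3], [5, 0, 2]])  # shape (2, 3)
    loss = module.object_property_assertion_loss(axiom_data)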

regularization_loss()[source]
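
regularization_loss takes no arguments; the training-step sketch below (not from the original reference) assumes ELEmModule behaves as a standard torch.nn.Module and that the method returns a term that can be added to the axiom losses.

    import torch as th
    from mowl.nn import ELEmModule

    module = ELEmModule(nb_ont_classes=100, nb_rels=5)
    optimizer = th.optim.Adam(module.parameters(), lr=1e-3)

    gci0_data = th.tensor([[0, 1], [2, 3]])
    optimizer.zero_grad()
    # Combine an axiom loss with the regularization term.
    loss = module.gci0_loss(gci0_data).mean() + module.regularization_loss()
    loss.backward()
    optimizer.step()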