geomstats.learning package#

Submodules#

geomstats.learning.aac module#

Align All and Compute for Graph Space.

Lead author: Anna Calissano.

class geomstats.learning.aac.AAC(space, *args, estimate='frechet', **kwargs)[source]#

Bases: object

Class for the Align All and Compute algorithm on Graph Space.

The Align All and Compute (AAC) algorithm is introduced in [Calissano2020] and allows the computation of different statistical estimators: the Frechet mean, the generalized geodesic principal components and the regression for a set of labeled or unlabeled graphs. The idea is to optimally align the graphs to the current estimate using the appropriate alignment technique, and to compute the current estimate using the geometric properties of the total space, i.e., the Euclidean space of adjacency matrices.

Parameters:
  • space (GraphSpace) – Graph space total space with a quotient structure.

  • estimate (str) – Desired estimator. One of the keys of MAP_ESTIMATE below: ‘frechet_mean’, ‘ggpca’ or ‘regression’.

Examples

Available example on Graph Space: notebooks.19_practical_methods__aac

Available example on Graph Space with real world data: notebooks.20_real_world_application__graph_space

References

[Calissano2020]

Calissano, A., Feragen, A., Vantini, S. “Graph Space: Geodesic Principal Components for a Population of Network-valued Data.” Mox report 14, 2020. https://mox.polimi.it/reports-and-theses/publication-results/?id=855.

[Calissano2022]

Calissano, A., Feragen, A., Vantini, S. “Graph-valued regression: prediction of unlabelled networks in a non-Euclidean Graph Space.” Journal of Multivariate Analysis 190, 104950, 2022. https://doi.org/10.1016/j.jmva.2022.104950.

MAP_ESTIMATE = {'frechet_mean': <class 'geomstats.learning.aac._AACFrechetMean'>, 'ggpca': <class 'geomstats.learning.aac._AACGGPCA'>, 'regression': <class 'geomstats.learning.aac._AACRegression'>}#
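
A minimal usage sketch on a toy set of adjacency matrices. The quotient-structure setup (equip_with_group_action, equip_with_quotient_structure) and the estimate_ attribute follow recent geomstats conventions and are assumptions here; check against the notebooks referenced above:

>>> import geomstats.backend as gs
>>> from geomstats.geometry.stratified.graph_space import GraphSpace
>>> from geomstats.learning.aac import AAC
>>> graphs = gs.random.rand(10, 3, 3)  # 10 graphs on 3 nodes, as adjacency matrices
>>> total_space = GraphSpace(n_nodes=3)
>>> total_space.equip_with_group_action()        # node permutations
>>> total_space.equip_with_quotient_structure()
>>> estimator = AAC(total_space, estimate='frechet_mean')
>>> estimator.fit(graphs)
>>> mean_graph = estimator.estimate_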

geomstats.learning.agglomerative_hierarchical_clustering module#

The Agglomerative Hierarchical Clustering (AHC) on manifolds.

Lead author: Yann Cabanes.

class geomstats.learning.agglomerative_hierarchical_clustering.AgglomerativeHierarchicalClustering(space, n_clusters=2, memory=None, connectivity=None, compute_full_tree='auto', linkage='average', distance_threshold=None)[source]#

Bases: AgglomerativeClustering

The Agglomerative Hierarchical Clustering on manifolds.

Recursively merges the pair of clusters that minimally increases a given linkage distance.

Parameters:
  • space (Manifold) – Equipped manifold.

  • n_clusters (int or None, default=2) – The number of clusters to find. It must be None if distance_threshold is not None.

  • memory (str or object, default=None) – Used to cache the output of the computation of the tree. By default, no caching is done. If a string is given, it is the path to the caching directory.

  • connectivity (array-like or callable, default=None) – Connectivity matrix. Defines for each sample the neighboring samples following a given structure of the data. This can be a connectivity matrix itself or a callable that transforms the data into a connectivity matrix. Default is None, i.e., the hierarchical clustering algorithm is unstructured.

  • compute_full_tree (‘auto’ or bool, default=’auto’) – Stop early the construction of the tree at n_clusters. This is useful to decrease computation time if the number of clusters is not small compared to the number of samples. This option is useful only when specifying a connectivity matrix. Note also that when varying the number of clusters and using caching, it may be advantageous to compute the full tree. It must be True if distance_threshold is not None. By default compute_full_tree is ‘auto’, which is equivalent to True when distance_threshold is not None, or when n_clusters is less than the maximum of 100 and 0.02 * n_samples. Otherwise, ‘auto’ is equivalent to False.

  • linkage ({‘ward’, ‘complete’, ‘average’, ‘single’}, default=’average’) – Which linkage criterion to use. The linkage criterion determines which distance to use between sets of observations. The algorithm merges the pairs of clusters that minimize this criterion.

    • average uses the average of the distances of each observation of the two sets.

    • complete or maximum linkage uses the maximum of the distances between all observations of the two sets.

    • single uses the minimum of the distances between all observations of the two sets.

    • ward minimizes the variance of the clusters being merged. It works for the ‘euclidean’ distance only.

  • distance_threshold (float, default=None) – The linkage distance threshold above which clusters will not be merged. If not None, n_clusters must be None and compute_full_tree must be True.

n_clusters_#

The number of clusters found by the algorithm. If distance_threshold=None, it will be equal to the given n_clusters.

Type:

int

labels_#

Cluster labels for each point.

Type:

ndarray, shape=[…,]

n_leaves_#

Number of leaves in the hierarchical tree.

Type:

int

n_connected_components_#

The estimated number of connected components in the graph.

Type:

int

children_#

The children of each non-leaf node. Values less than n_samples correspond to leaves of the tree which are the original samples. A node i greater than or equal to n_samples is a non-leaf node and has children children_[i - n_samples]. Alternatively at the i-th iteration, children[i][0] and children[i][1] are merged to form node n_samples + i.

Type:

array-like, shape=[n_samples-1, 2]

References

This algorithm uses the scikit-learn library: sklearn/cluster/_agglomerative.py#L656
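
A minimal usage sketch on the hypersphere:

>>> from geomstats.geometry.hypersphere import Hypersphere
>>> from geomstats.learning.agglomerative_hierarchical_clustering import AgglomerativeHierarchicalClustering
>>> sphere = Hypersphere(dim=2)
>>> data = sphere.random_uniform(n_samples=20)
>>> clustering = AgglomerativeHierarchicalClustering(sphere, n_clusters=2)
>>> clustering.fit(data)
>>> clustering.labels_  # cluster label for each point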

geomstats.learning.expectation_maximization module#

Expectation maximization algorithm.

Lead authors: Thomas Gerald and Hadi Zaatiti.

class geomstats.learning.expectation_maximization.GaussianMixtureModel(space, means=None, variances=None, zeta_lower_bound=0.5, zeta_upper_bound=2.0, zeta_step=0.01)[source]#

Bases: object

Gaussian mixture model (GMM).

Parameters:
  • space (Manifold) – Equipped manifold.

  • means (array-like, shape=[n_gaussians, dim]) – Means of each component of the GMM.

  • variances (array-like, shape=[n_gaussians,]) – Variances of each component of the GMM.

normalization_factor_var#

Array of computed normalization factors.

Type:

array-like, shape=[n_variances,]

variances_range#

Array of standard deviations.

Type:

array-like, shape=[n_variances,]

phi_inv_var#

Array of computed values of the inverse of the function phi, whose closed-form expression is \(\sigma \mapsto \sigma^3 \frac{d}{d\sigma}\log \zeta_m(\sigma)\), where \(\sigma\) denotes the variance, \(\zeta_m\) the normalization coefficient and \(m\) the dimension.

Type:

array-like, shape=[n_variances,]

compute_variance_from_index(weighted_distances)[source]#

Return the variance given weighted distances.

Parameters:

weighted_distances (array-like, shape=[n_gaussians,]) – Mean of the weighted distances between training data and current barycentres. The weight of each data sample corresponds to its probability of belonging to a component of the Gaussian mixture model.

Returns:

var (array-like, shape=[n_gaussians,]) – Estimated variances for each component of the GMM.

pdf(data)[source]#

Return the separate probability density function of GMM.

The probability density function is computed for each component of the GMM separately (i.e., mixture coefficients are not taken into account).

Parameters:

data (array-like, shape=[n_samples, dim]) – Points at which the GMM probability density is computed.

Returns:

pdf (array-like, shape=[n_samples, n_gaussians,]) – Probability density function computed at each data sample and for each component of the GMM.

weighted_pdf(mixture_coefficients, mesh_data)[source]#

Return the probability density function of a GMM.

Parameters:
  • mixture_coefficients (array-like, shape=[n_gaussians,]) – Coefficients of the Gaussian mixture model.

  • mesh_data (array-like, shape=[n_precision, dim]) – Points at which the GMM probability density is computed.

Returns:

weighted_pdf (array-like, shape=[n_precision, n_gaussians,]) – Probability density function computed for each point of the mesh data, for each component of the GMM.

class geomstats.learning.expectation_maximization.RiemannianEM(space, n_gaussians=8, initialisation_method='random', tol=0.01, max_iter=100, conv_rate=0.0001, minimum_epochs=10)[source]#

Bases: TransformerMixin, ClusterMixin, BaseEstimator

Expectation-maximization algorithm.

A class for performing expectation-maximization to fit a Gaussian mixture model (GMM) to data on a manifold. This method is only implemented for the hypersphere and the Poincaré ball.

Parameters:
  • space (Manifold) – Equipped manifold.

  • n_gaussians (int) – Number of Gaussian components in the mix.

  • initialisation_method (str) – Choice of initialization method for the variances, means and weights. Optional, default: ‘random’.

    • ‘random’ : selects training points uniformly at random as initial cluster centers.

    • ‘kmeans’ : applies Riemannian k-means to deduce the variances and means that the EM algorithm will use initially.

  • tol (float) – Convergence tolerance: convergence is reached when the difference of mean distance between two steps is lower than tol. Optional, default: 1e-2.

  • max_iter (int) – Maximum number of iterations for the gradient descent. Optional, default: 100.

mixture_coefficients_#

Weights for each GMM component.

Type:

array-like, shape=[n_gaussians,]

variances_#

Variances for each GMM component.

Type:

array-like, shape=[n_gaussians,]

means_#

Barycentre of each component of the GMM.

Type:

array-like, shape=[n_gaussians, dim]

Example

Available example on the Poincaré Ball manifold examples.plot_expectation_maximization_ball
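
A minimal usage sketch on the Poincaré ball, with toy data kept away from the boundary:

>>> import geomstats.backend as gs
>>> from geomstats.geometry.poincare_ball import PoincareBall
>>> from geomstats.learning.expectation_maximization import RiemannianEM
>>> ball = PoincareBall(dim=2)
>>> data = 0.3 * (gs.random.rand(50, 2) - 0.5)  # points well inside the unit ball
>>> em = RiemannianEM(ball, n_gaussians=2)
>>> em.fit(data)
>>> em.means_, em.variances_, em.mixture_coefficients_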

fit(X, y=None)[source]#

Fit a Gaussian mixture model (GMM) given the data.

Alternates between Expectation and Maximization steps for some number of iterations.

Parameters:
  • X (array-like, shape=[n_samples, n_features]) – Training data, where n_samples is the number of samples and n_features is the number of features.

  • y (None) – Target values. Ignored.

Returns:

self (object) – Returns self.

property means_#

Means of each component of the GMM.

property variances_#

Array of standard deviations.

geomstats.learning.exponential_barycenter module#

Exponential barycenter.

Lead author: Nicolas Guigui.

class geomstats.learning.exponential_barycenter.ExponentialBarycenter(space)[source]#

Bases: BaseEstimator

Empirical exponential barycenter for matrix groups.

Parameters:

space (LieGroup) – Lie group instance on which the data lie.

estimate_#

If fit, exponential barycenter.

Type:

array-like, shape=[dim, dim]

fit(X, y=None, weights=None)[source]#

Compute the empirical weighted exponential barycenter.

Parameters:
  • X (array-like, shape=[n_samples, dim, dim]) – Training input samples.

  • y (None) – Target values. Ignored.

  • weights (array-like, shape=[n_samples,]) – Weights associated to the samples. Optional, default: None, in which case it is equally weighted.

Returns:

self (object) – Returns self.
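
Example

A minimal usage sketch on a matrix Lie group (assuming SpecialOrthogonal(n=3) defaults to the matrix representation):

>>> from geomstats.geometry.special_orthogonal import SpecialOrthogonal
>>> from geomstats.learning.exponential_barycenter import ExponentialBarycenter
>>> group = SpecialOrthogonal(n=3)              # rotations as 3 x 3 matrices
>>> points = group.random_uniform(n_samples=5)
>>> estimator = ExponentialBarycenter(group)
>>> estimator.fit(points)
>>> estimator.estimate_                         # a 3 x 3 rotation matrix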

set(**kwargs)[source]#

Set optimizer parameters.

Especially useful for one-line instantiations.

set_fit_request(*, weights: bool | None | str = '$UNCHANGED$') → ExponentialBarycenter#

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for weights parameter in fit.

Returns:

self (object) – The updated object.

class geomstats.learning.exponential_barycenter.GradientDescent(max_iter=32, epsilon=0.0001, init_point=None, init_step_size=1.0, verbose=False)[source]#

Bases: BaseGradientDescent

Gradient descent for exponential barycenter.

minimize(group, points, weights=None)[source]#

Compute the (weighted) group exponential barycenter of points.

Parameters:
  • group (LieGroup) – Instance of the class LieGroup.

  • points (array-like, shape=[n_samples, dim, dim]) – Input points lying in the Lie Group.

  • weights (array-like, shape=[n_samples,]) – Weights associated to the points. Optional, defaults to 1 for each point if None.

Returns:

exp_bar (array-like, shape=[dim, dim]) – Exponential barycenter of the input points.

geomstats.learning.frechet_mean module#

Frechet mean.

Lead authors: Nicolas Guigui and Nina Miolane.

class geomstats.learning.frechet_mean.AdaptiveGradientDescent(max_iter=32, epsilon=0.0001, init_point=None, init_step_size=1.0, verbose=False)[source]#

Bases: BaseGradientDescent

Adaptive gradient descent.

minimize(space, points, weights=None)[source]#

Perform adaptive gradient descent.

Compute the Frechet mean of (weighted) points using adaptive time steps. The loss function optimized is \(||M_1(x)||_x\) (where \(M_1(x)\) is the tangent mean at x) rather than the mean squared distance (MSD), because this simplifies the computations. Adaptivity is achieved in a Levenberg-Marquardt style, with a weighting variable tau between the first-order and the second-order Gauss-Newton gradient descent.

Parameters:
  • points (array-like, shape=[n_samples, *metric.shape]) – Points to be averaged.

  • weights (array-like, shape=[n_samples,], optional) – Weights associated to the points.

Returns:

current_mean (array-like, shape=[*metric.shape]) – Weighted Frechet mean of the points.

class geomstats.learning.frechet_mean.BaseGradientDescent(max_iter=32, epsilon=0.0001, init_point=None, init_step_size=1.0, verbose=False)[source]#

Bases: ABC

Base class for gradient descent.

Parameters:
  • max_iter (int, optional) – Maximum number of iterations for the gradient descent.

  • epsilon (float, optional) – Tolerance for stopping the gradient descent.

  • init_point (array-like, shape=[*metric.shape]) – Initial point. Optional, default: None. In this case the first sample of the input data is used.

  • init_step_size (float) – Learning rate in the gradient descent. Optional, default: 1.

  • verbose (bool) – Level of verbosity to inform about convergence. Optional, default: False.

abstract minimize(space, points, weights=None)[source]#

Perform gradient descent.

class geomstats.learning.frechet_mean.BatchGradientDescent(max_iter=32, epsilon=0.0001, init_point=None, init_step_size=1.0, verbose=False)[source]#

Bases: BaseGradientDescent

Batch gradient descent.

minimize(space, points, weights=None)[source]#

Perform batch gradient descent.

class geomstats.learning.frechet_mean.CircleMean(space)[source]#

Bases: BaseEstimator

Circle mean.

Parameters:

space (Manifold) – Equipped manifold.

estimate_#

If fit, Frechet mean.

Type:

array-like, shape=[2,]

fit(X, y=None)[source]#

Compute the circle mean.

Parameters:
  • X (array-like, shape=[n_samples, 2]) – Training input samples.

  • y (None) – Target values. Ignored.

Returns:

self (object) – Returns self.

class geomstats.learning.frechet_mean.ElasticMean(space)[source]#

Bases: BaseEstimator

Elastic mean.

Parameters:

space (Manifold) – Equipped manifold.

estimate_#

If fit, Frechet mean.

Type:

array-like, shape=[*space.shape]

fit(X, y=None, weights=None)[source]#

Compute the elastic mean.

Parameters:
  • X (array-like, shape=[n_samples, *metric.shape]) – Training input samples.

  • y (None) – Target values. Ignored.

  • weights (array-like, shape=[n_samples,]) – Weights associated to the samples. Optional, default: None, in which case it is equally weighted.

Returns:

self (object) – Returns self.

set_fit_request(*, weights: bool | None | str = '$UNCHANGED$') → ElasticMean#

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for weights parameter in fit.

Returns:

self (object) – The updated object.

class geomstats.learning.frechet_mean.FrechetMean(space, **kwargs)[source]#

Bases: BaseEstimator

Empirical Frechet mean.

Parameters:
  • space (Manifold) – Equipped manifold.

  • method (str, {'default', 'adaptive', 'batch'}) – Gradient descent method. The adaptive method uses a Levenberg-Marquardt style adaptation of the learning rate. The batch method is similar to the default method but for batches of equal length of samples. In this case, samples must be of shape [n_samples, n_batch, *space.shape]. Optional, default: 'default'.

estimate_#

If fit, Frechet mean.

Type:

array-like, shape=[*space.shape]

Notes

  • Required metric methods for general case:
    • log, exp, squared_norm (for convergence criteria)
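
Example

A minimal usage sketch on the hypersphere (see the fit signature below); the max_iter optimizer parameter follows BaseGradientDescent above:

>>> from geomstats.geometry.hypersphere import Hypersphere
>>> from geomstats.learning.frechet_mean import FrechetMean
>>> sphere = Hypersphere(dim=2)
>>> data = sphere.random_uniform(n_samples=10)
>>> mean = FrechetMean(sphere)
>>> mean.set(max_iter=64)   # optional tuning of the optimizer
>>> mean.fit(data)
>>> mean.estimate_          # a point on the sphere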

fit(X, y=None, weights=None)[source]#

Compute the empirical weighted Frechet mean.

Parameters:
  • X (array-like, shape=[n_samples, *metric.shape]) – Training input samples.

  • y (None) – Target values. Ignored.

  • weights (array-like, shape=[n_samples,]) – Weights associated to the samples. Optional, default: None, in which case it is equally weighted.

Returns:

self (object) – Returns self.

property method#

Gradient descent method.

set(**kwargs)[source]#

Set optimizer parameters.

Especially useful for one-line instantiations.

set_fit_request(*, weights: bool | None | str = '$UNCHANGED$') → FrechetMean#

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for weights parameter in fit.

Returns:

self (object) – The updated object.

class geomstats.learning.frechet_mean.GradientDescent(max_iter=32, epsilon=0.0001, init_point=None, init_step_size=1.0, verbose=False)[source]#

Bases: BaseGradientDescent

Default gradient descent.

minimize(space, points, weights=None)[source]#

Perform default gradient descent.

class geomstats.learning.frechet_mean.LinearMean(space)[source]#

Bases: BaseEstimator

Linear mean.

Parameters:

space (Manifold) – Equipped manifold.

estimate_#

If fit, Frechet mean.

Type:

array-like, shape=[*space.shape]

fit(X, y=None, weights=None)[source]#

Compute the Euclidean mean.

Parameters:
  • X (array-like, shape=[n_samples, *metric.shape]) – Training input samples.

  • y (None) – Target values. Ignored.

  • weights (array-like, shape=[n_samples,]) – Weights associated to the samples. Optional, default: None, in which case it is equally weighted.

Returns:

self (object) – Returns self.

set_fit_request(*, weights: bool | None | str = '$UNCHANGED$') → LinearMean#

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for weights parameter in fit.

Returns:

self (object) – The updated object.

geomstats.learning.frechet_mean.linear_mean(points, weights=None)[source]#

Compute the weighted linear mean.

The linear mean is the Frechet mean when points:

  • lie in a Euclidean space with Euclidean metric,

  • lie in a Minkowski space with Minkowski metric.

Parameters:
  • points (array-like, shape=[n_samples, dim]) – Points to be averaged.

  • weights (array-like, shape=[n_samples,]) – Weights associated to the points. Optional, default: None.

Returns:

mean (array-like, shape=[dim,]) – Weighted linear mean of the points.

geomstats.learning.frechet_mean.variance(space, points, base_point, weights=None)[source]#

Variance of (weighted) points wrt a base point.

Parameters:
  • space (Manifold) – Equipped manifold.

  • points (array-like, shape=[n_samples, dim]) – Points.

  • base_point (array-like, shape=[dim,]) – Base point with respect to which the variance is computed.

  • weights (array-like, shape=[n_samples,]) – Weights associated to the points. Optional, default: None.

Returns:

var (float) – Weighted variance of the points.
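
Example

A short sketch combining FrechetMean and variance on the hypersphere (fit returns self, so calls can be chained):

>>> from geomstats.geometry.hypersphere import Hypersphere
>>> from geomstats.learning.frechet_mean import FrechetMean, variance
>>> sphere = Hypersphere(dim=2)
>>> points = sphere.random_uniform(n_samples=10)
>>> mean = FrechetMean(sphere).fit(points).estimate_
>>> var = variance(sphere, points, base_point=mean)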

geomstats.learning.geodesic_regression module#

Geodesic Regression.

Lead author: Nicolas Guigui.

The generative model of the data is: \(Z = Exp_{\beta_0}(\beta_1 \cdot X)\) and \(Y = Exp_Z(\epsilon)\) where:

  • \(Exp\) denotes the Riemannian exponential,

  • \(\beta_0\) is called the intercept, and is a point on the manifold,

  • \(\beta_1\) is called the coefficient, and is a tangent vector to the manifold at \(\beta_0\),

  • \(\epsilon \sim N(0, 1)\) is a standard Gaussian noise,

  • \(X\) is the input, \(Y\) is the target.

The geodesic regression method:

  • estimates \(\beta_0, \beta_1\),

  • predicts \(\hat{y}\) from input \(X\).

class geomstats.learning.geodesic_regression.GeodesicRegression(space, center_X=True, method='extrinsic', initialization='random', regularization=1.0, compute_training_score=False)[source]#

Bases: BaseEstimator

Geodesic Regression.

The generative model of the data is: \(Z = Exp_{\beta_0}(\beta_1 \cdot X)\) and \(Y = Exp_Z(\epsilon)\) where:

  • \(Exp\) denotes the Riemannian exponential,

  • \(\beta_0\) is called the intercept, and is a point on the manifold,

  • \(\beta_1\) is called the coefficient, and is a tangent vector to the manifold at \(\beta_0\),

  • \(\epsilon \sim N(0, 1)\) is a standard Gaussian noise,

  • \(X\) is the input, \(Y\) is the target.

The geodesic regression method:

  • estimates \(\beta_0, \beta_1\),

  • predicts \(\hat{y}\) from input \(X\).

Parameters:
  • space (Manifold) – Equipped manifold.

  • center_X (bool) – Whether to subtract the mean from X as a preprocessing step.

  • method (str, {'extrinsic', 'riemannian'}) – Gradient descent method. Optional, default: extrinsic.

  • initialization (str or array-like) – {‘random’, ‘data’, ‘frechet’, ‘warm_start’} Initial values of the parameters for the optimization, or initialization method. Optional, default: ‘random’.

  • regularization (float) – Weight on the constraint for the intercept to lie on the manifold in the extrinsic optimization scheme. An L^2 constraint is applied. Optional, default: 1.

  • compute_training_score (bool) – Whether to compute R^2. Optional, default: False.

Notes

  • Required metric methods:
    • all: exp, squared_dist

    • if riemannian: parallel transport or to_tangent
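
Example

A minimal usage sketch on the hypersphere with random toy targets (in real use, y would follow a geodesic trend in X):

>>> import geomstats.backend as gs
>>> from geomstats.geometry.hypersphere import Hypersphere
>>> from geomstats.learning.geodesic_regression import GeodesicRegression
>>> sphere = Hypersphere(dim=2)
>>> X = gs.linspace(0.0, 1.0, 10)
>>> y = sphere.random_uniform(n_samples=10)
>>> gr = GeodesicRegression(sphere, center_X=False)
>>> gr.fit(X, y)
>>> y_pred = gr.predict(X)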

fit(X, y, weights=None)[source]#

Estimate the parameters of the geodesic regression.

Estimate the intercept and the coefficient defining the geodesic regression model.

Parameters:
  • X (array-like, shape=[n_samples,]) – Training input samples.

  • y (array-like, shape=[n_samples, {dim, [n,n]}]) – Training target values.

  • weights (array-like, shape=[n_samples]) – Weights associated to the points. Optional, default: None.

Returns:

self (object) – Returns self.

property method#

Gradient descent method.

predict(X)[source]#

Predict the manifold value for each input.

Parameters:

X (array-like, shape=[n_samples,]) – Input data.

Returns:

y (array-like, shape=[n_samples, {dim, [n,n]}]) – Predicted values on the manifold.

score(X, y, weights=None)[source]#

Compute training score.

Compute the training score defined as R^2.

Parameters:
  • X (array-like, shape=[n_samples,]) – Training input samples.

  • y (array-like, shape=[n_samples, {dim, [n,n]}]) – Training target values.

  • weights (array-like, shape=[n_samples,]) – Weights associated to the points. Optional, default: None.

Returns:

score (float) – Training score.

set(**kwargs)[source]#

Set optimizer parameters.

Especially useful for one-line instantiations.

set_fit_request(*, weights: bool | None | str = '$UNCHANGED$') → GeodesicRegression#

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for weights parameter in fit.

Returns:

self (object) – The updated object.

set_score_request(*, weights: bool | None | str = '$UNCHANGED$') → GeodesicRegression#

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for weights parameter in score.

Returns:

self (object) – The updated object.

class geomstats.learning.geodesic_regression.RiemannianGradientDescent(max_iter=100, init_step_size=0.1, tol=1e-05, verbose=False)[source]#

Bases: object

Riemannian gradient descent.

minimize(space, fun, x0)[source]#

Perform gradient descent.

geomstats.learning.geometric_median module#

Geometric median.

class geomstats.learning.geometric_median.GeometricMedian(space, max_iter=100, lr=1.0, init_point=None, print_every=None, epsilon=1e-12)[source]#

Bases: BaseEstimator

Geometric median.

Parameters:
  • space (Manifold) – Equipped manifold.

  • max_iter (int) – Maximum number of iterations for the algorithm. Optional, default: 100.

  • lr (float) – Learning rate to be used for the algorithm. Optional, default: 1.0.

  • init_point (array-like, shape=[*space.shape]) – Initial point for the algorithm. Optional, default: None, in which case the last sample is used.

  • print_every (int) – Print the updated median every print_every iterations. Optional, default: None.

  • epsilon (float) – Tolerance for stopping the algorithm (distance between two successive estimates). Optional, default: gs.atol.

estimate_#

If fit, geometric median.

Type:

array-like, shape=[*space.shape]

Notes

  • Required metric methods: dist, log, exp.

References

[FVJ2009]

Fletcher PT, Venkatasubramanian S and Joshi S. “The geometric median on Riemannian manifolds with application to robust atlas estimation”, NeuroImage, 2009 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2735114/
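
Example

A minimal usage sketch on the hypersphere:

>>> from geomstats.geometry.hypersphere import Hypersphere
>>> from geomstats.learning.geometric_median import GeometricMedian
>>> sphere = Hypersphere(dim=2)
>>> data = sphere.random_uniform(n_samples=10)
>>> median = GeometricMedian(sphere)
>>> median.fit(data)
>>> median.estimate_   # robust central point on the sphere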

fit(X, y=None, weights=None)[source]#

Compute the weighted geometric median.

Compute the geometric median on the manifold using the Weiszfeld algorithm.

Parameters:
  • X (array-like, shape=[n_samples, *metric.shape]) – Training input samples.

  • y (None) – Target values. Ignored.

  • weights (array-like, shape=[n_samples,]) – Weights associated to the samples. Optional, default: None, in which case it is equally weighted.

Returns:

self (object) – Returns self.

set_fit_request(*, weights: bool | None | str = '$UNCHANGED$') → GeometricMedian#

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for weights parameter in fit.

Returns:

self (object) – The updated object.

geomstats.learning.incremental_frechet_mean module#

Incremental Frechet mean estimator.

class geomstats.learning.incremental_frechet_mean.IncrementalFrechetMean(space, verbose=False, clean_state=True)[source]#

Bases: BaseEstimator

Incremental Frechet Mean Estimator.

The incremental Frechet mean estimator calculates the sample Frechet mean by iteratively moving along the geodesic between the current mean estimate and the next point.

\[\text{Initialization}: m_{1} := X_{1}\]
\[\text{Update}: \text{Let } \gamma_k \text{ be geodesic joining } m_{k-1}\text{ and } X_{k} \text{ then } m_{k} := \gamma(1/k) \,\, \forall 2 \leq k \leq N\]

Asymptotic convergence to the population Frechet mean is guaranteed for simply connected, complete and non-positively curved Riemannian manifolds. It is important to note that the estimator obtained in such an iterative fashion need not be a solution to the following optimization problem:

\[\min_{q \in M} \sum_{i=1}^{N} d(q, X_{i})^2\]

where \(d\) is the Riemannian distance. Also, the estimator is not permutation invariant, i.e., the estimate might depend on the order in which the incremental updates are performed.

Parameters:
  • space (Manifold) – Equipped manifold.

  • verbose (bool) – Verbose option. Optional, default: False.

  • clean_state (bool) – Whether to reset the estimator state at each call to fit, instead of keeping track of the last iteration.

Notes

  • Required metric methods: geodesic.

References

[CHSV2016]

Cheng, Ho, Salehian, Vemuri. “Recursive Computation of the Frechet Mean on Non-Positively Curved Riemannian Manifolds with Applications”, Riemannian Computing in Computer Vision pp 21-43, 2016. https://link.springer.com/chapter/10.1007/978-3-319-22957-7_2
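
Example

A minimal usage sketch on SPD matrices (assuming the fitted mean is exposed as estimate_, as for the other mean estimators in this package):

>>> from geomstats.geometry.spd_matrices import SPDMatrices
>>> from geomstats.learning.incremental_frechet_mean import IncrementalFrechetMean
>>> spd = SPDMatrices(n=3)
>>> data = spd.random_point(n_samples=20)
>>> ifm = IncrementalFrechetMean(spd)
>>> ifm.fit(data)
>>> ifm.estimate_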

fit(X, y=None, init=None)[source]#

Compute the incremental Frechet mean.

Parameters:
  • X (array-like, shape=[n_samples, {dim, [n, n]}]) – Training input samples.

  • y (None) – Ignored.

  • init (array-like, shape=[{dim, [n, n]}]) – If not None, starts the mean computation from init; this can be useful when data arrives in a streaming setting. Optional, default: None.

Returns:

self (object) – Returns self.

set_fit_request(*, init: bool | None | str = '$UNCHANGED$') → IncrementalFrechetMean#

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

init (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for init parameter in fit.

Returns:

self (object) – The updated object.

geomstats.learning.kalman_filter module#

Kalman filter on Lie groups, with two local test system models.

Lead author: Paul Chauchat.

class geomstats.learning.kalman_filter.KalmanFilter(model)[source]#

Bases: object

Class for a general Kalman filter working on Lie groups.

Given an adapted model, it provides the tools to carry out non-linear state estimation with an error modeled on the Lie algebra. The model must provide the functions to propagate and update a state, the observation model, and the computation of the Jacobians.

Parameters:

model ({class, instance}) – Object representing an observed dynamical system.

compute_gain(observation)[source]#

Compute the Kalman gain given the observation model.

Given the observation Jacobian H and covariance N (not necessarily equal to that of the sensor), and the current covariance P, the Kalman gain is K = P H^T(H P H^T + N)^{-1}.

Parameters:

observation (array-like, shape=[dim_obs]) – Obtained measurement.

Returns:

gain (array-like, shape=[model.dim, model.dim_obs]) – Kalman gain.
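
A plain NumPy illustration of this gain formula, not part of the geomstats API (P, H and N below are toy values):

>>> import numpy as np
>>> def kalman_gain(P, H, N):
...     # K = P H^T (H P H^T + N)^{-1}, as in the formula above.
...     innovation_cov = H @ P @ H.T + N
...     return P @ H.T @ np.linalg.inv(innovation_cov)
>>> P = np.eye(2)               # current state covariance
>>> H = np.array([[1.0, 0.0]])  # observation Jacobian
>>> N = np.array([[0.5]])       # observation covariance
>>> K = kalman_gain(P, H, N)    # shape [dim, dim_obs] = (2, 1)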

initialize_covariances(prior_values, process_values, obs_values)[source]#

Set the values of the covariances.

propagate(sensor_input)[source]#

Propagate the estimate and its covariance.

Given the propagation Jacobian F and the noise Jacobian G, the covariance P becomes F P F^T + G Q G^T.

Parameters:

sensor_input (array-like) – Vector representing the propagation sensor input.

update(observation)[source]#

Update the current estimate given an observation.

The state is updated by the matrix-vector product of the Kalman gain K and the innovation. The possibly non-linear update function is provided by the model. Given the observation Jacobian H and covariance N, the current covariance P is updated as (I - KH)P.

Parameters:

observation (array-like, shape=[dim_obs]) – Obtained measurement.

class geomstats.learning.kalman_filter.Localization[source]#

Bases: object

Class for modeling a non-linear 2D localization problem.

The state is composed of a planar orientation and position, and is thus a member of SE(2). A sensor provides the linear and angular speed, while another one provides sparse position observations.

adjoint_map(state)[source]#

Construct the matrix associated to the adjoint representation.

The inner automorphism is given by \(Ad_X : g \mapsto XgX^{-1}\). For a state \(X = (\theta, x, y)\), the matrix associated to its tangent map, the adjoint representation, is \(\begin{bmatrix} 1 & \\ -J [x, y] & R(\theta) \end{bmatrix}\), where \(R(\theta)\) is the rotation matrix of angle \(\theta\), and \(J = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\).

Parameters:

state (array-like, shape=[dim]) – Vector representing a state.

Returns:

adjoint (array-like, shape=[dim, dim]) – Adjoint representation of the state.

get_measurement_noise_cov(state, observation_cov)[source]#

Get the observation covariance.

For an observation y and an orientation theta, the modified observation considered for the innovation is \(R(\theta)^T y\) [BB2017], so the covariance N is rotated accordingly as \(R(\theta)^T N R(\theta)\).

Parameters:
  • state (array-like, shape=[dim]) – Vector representing a state.

  • observation_cov (array-like, shape=[dim_obs, dim_obs]) – Covariance matrix associated to the sensor.

Returns:

covariance (array-like, shape=[dim_obs, dim_obs]) – Covariance of the observation.

innovation(state, observation)[source]#

Discrepancy between the measurement and its expected value.

The linear error (observation - expected) is cast into the state’s frame by rotation, following [BB2017].

Parameters:
  • state (array-like, shape=[dim]) – Vector representing the state.

  • observation (array-like, shape=[dim_obs]) – Obtained measurement.

Returns:

innovation (array-like, shape=[dim_obs]) – Error between the measurement and the expected value.

noise_jacobian(state, sensor_input)[source]#

Compute the matrix associated to the propagation noise.

As the noise is considered multiplicative, the Jacobian is simply the identity scaled by the time step.

Parameters:
  • state (unused)

  • sensor_input (array-like, shape=[4]) – Vector representing the information from the sensor.

Returns:

jacobian (array-like, shape=[dim_noise, dim]) – Jacobian of the propagation w.r.t. the noise.

observation_jacobian(state, observation)[source]#

Compute the matrix associated to the observation model.

The Jacobian is given by \(\begin{bmatrix} 0 & I_2 \end{bmatrix}\).

Parameters:
  • state (unused)

  • observation (unused)

Returns:

jacobian (array-like, shape=[dim_obs, dim]) – Jacobian of the observation.

observation_model(state)[source]#

Model used to create the measurements.

This model simply outputs the position part of the state, i.e. its last two elements.

Parameters:

state (array-like, shape=[dim]) – Vector representing the state.

Returns:

observation (array-like, shape=[dim_obs]) – Expected observation of the state.

preprocess_input(sensor_input)[source]#

Separate the input into its main parts.

Each input is the concatenation of four scalar components: the time step, the two components of the 2D linear velocity, and the angular velocity.

Parameters:

sensor_input (array-like, shape=[4]) – Vector representing the sensor input.

Returns:

  • dt (float) – Time step between two consecutive inputs.

  • linear_vel (array-like, shape=[2]) – 2D linear velocity.

  • angular_vel (array-like, shape=[dim_rot]) – Angular velocity.

propagate(state, sensor_input)[source]#

Propagate state with constant velocity motion model on SE(2).

From a given state (orientation, position) pair \((\theta, x)\), a new one is obtained as \((\theta + dt \cdot \omega, x + dt \cdot R(\theta) u)\), where the time step \(dt\) and the linear and angular velocities \(u\) and \(\omega\) are given by some sensor (e.g., odometers).

Parameters:
  • state (array-like, shape=[dim]) – Vector representing a state (orientation, position).

  • sensor_input (array-like, shape=[4]) – Vector representing the information from the sensor.

Returns:

new_state (array-like, shape=[dim]) – Vector representing the propagated state.

propagation_jacobian(state, sensor_input)[source]#

Compute the Jacobian associated to the input.

Since the propagation writes \(f(x) = x \cdot u\), and the error is modeled on the Lie algebra, the Jacobian is \(Ad_{u^{-1}}\) [BB2017].

Parameters:
  • state (unused)

  • sensor_input (array-like, shape=[4]) – Vector representing the information from the sensor.

Returns:

jacobian (array-like, shape=[dim, dim]) – Jacobian of the propagation.

regularize_angle(theta)[source]#

Bring back angle theta in ]-pi, pi].

rotation_matrix(theta)[source]#

Construct the rotation matrix associated to the angle theta.

Parameters:

theta (float) – Rotation angle.

Returns:

rot (array-like, shape=[2, 2]) – 2D rotation matrix of angle theta.

class geomstats.learning.kalman_filter.LocalizationLinear[source]#

Bases: object

Class for modeling a linear 1D localization problem.

The state is made of a scalar position and scalar speed, thus a 2D vector. A sensor provides acceleration inputs, while another one provides sparse measurements of the position.

static get_measurement_noise_cov(state, observation_cov)[source]#

Get the observation covariance.

Parameters:
  • state (unused)

  • observation_cov (array-like, shape=[dim_obs, dim_obs]) – Covariance matrix associated to the sensor.

Returns:

covariance (array-like, shape=[dim_obs, dim_obs]) – Covariance of the observation.

innovation(state, observation)[source]#

Discrepancy between the measurement and its expected value.

Parameters:
  • state (array-like, shape=[dim]) – Vector representing the state.

  • observation (array-like, shape=[dim_obs]) – Obtained measurement.

Returns:

innovation (array-like, shape=[dim_obs]) – Error between the measurement and the expected value.

noise_jacobian(state, sensor_input)[source]#

Compute the matrix associated to the propagation noise.

The noise is supposed additive and only applies to the speed part. The Jacobian is given by \(\begin{bmatrix} 0 & \sqrt{dt} \end{bmatrix}\).

Parameters:
  • state (unused)

  • sensor_input (array-like, shape=[2]) – Vector representing the information from the accelerometer.

Returns:

jacobian (array-like, shape=[dim_noise, dim]) – Jacobian of the propagation w.r.t. the noise.

observation_jacobian(state, observation)[source]#

Compute the matrix associated to the observation model.

The Jacobian is given by \(\begin{bmatrix} 1 & 0 \end{bmatrix}\).

Parameters:
  • state (unused)

  • observation (unused)

Returns:

jacobian (array-like, shape=[dim_obs, dim]) – Jacobian of the observation.

static observation_model(state)[source]#

Model used to create the measurements.

This model simply outputs the position part of the state, i.e. its first element.

Parameters:

state (array-like, shape=[dim]) – Vector representing the state.

Returns:

observation (array-like, shape=[dim_obs]) – Expected observation of the state.

static propagate(state, sensor_input)[source]#

Propagate with piece-wise constant acceleration and velocity.

Takes a given (position, speed) pair \((x, v)\) and creates a new one \((x + dt * v, v + dt * acc)\), where the time step and the acceleration are given by an accelerometer.

Parameters:
  • state (array-like, shape=[dim]) – Vector representing a state (position, speed).

  • sensor_input (array-like, shape=[2]) – Vector representing the information from the accelerometer.

Returns:

new_state (array-like, shape=[dim]) – Vector representing the propagated state.

propagation_jacobian(state, sensor_input)[source]#

Compute the Jacobian associated to the affine propagation.

The Jacobian is given by \(\begin{bmatrix} 1 & dt \\ & 1 \end{bmatrix}\).

Parameters:
  • state (unused)

  • sensor_input (array-like, shape=[2]) – Vector representing the information from the accelerometer.

Returns:

jacobian (array-like, shape=[dim, dim]) – Jacobian of the propagation.

geomstats.learning.kernel_density_estimation_classifier module#

The kernel density estimation classifier on manifolds.

Lead author: Yann Cabanes.

class geomstats.learning.kernel_density_estimation_classifier.KernelDensityEstimationClassifier(space, radius=inf, kernel='distance', bandwidth=1.0, leaf_size=30, outlier_label=None, n_jobs=None)[source]#

Bases: RadiusNeighborsClassifier

Classifier implementing the kernel density estimation on manifolds.

The kernel density estimation classifier classifies the data according to a kernel density estimation of each dataset on the manifold. The density estimation is performed using radial kernel functions: the distance is the only geometrical tool used to estimate the density on the manifold.

This classifier inherits from the radius neighbors classifier of the scikit-learn library; we expect the classifier presented here to be easier to use on manifolds. Compared with the radius neighbors classifier, we force the parameter ‘algorithm’ to be equal to ‘brute’ in order to be compatible with any metric. We also changed some default values of the scikit-learn algorithm in order to take into account every point of the dataset during the kernel density estimation, i.e. the default value of the parameter ‘radius’ is set to infinity instead of 1, and the default value of the parameter ‘weight’ is set to ‘distance’ instead of ‘uniform’.

Our main contribution is a greater choice of kernel functions; see the radial_kernel_functions.py file in the learning directory. The radial kernel functions are now easier for a user to define: the input data should be an array of distances instead of an array of arrays. Moreover, the new parameter ‘bandwidth’ of our classifier can be used to adapt the kernel function to the size of the dataset.

The scikit-learn library also provides a kernel density estimation tool (see sklearn.neighbors.KernelDensity); however, this algorithm is not built as a classifier and is not available with all metrics.

Parameters:
  • space (Manifold) – Equipped manifold.

  • radius (float, optional (default = inf)) – Range of parameter space to use by default.

  • kernel (string or callable, optional (default = ‘distance’)) – Kernel function used in prediction. Possible values:

    • ‘distance’ : weight points by the inverse of their distance. In this case, closer neighbors of a query point will have a greater influence than neighbors which are further away.

    • ‘uniform’ : uniform weights. All points in each neighborhood are weighted equally.

    • [callable] : a user-defined function which accepts an array of distances, and returns an array of the same shape containing the weights.

  • bandwidth (float, optional (default = 1.0)) – Bandwidth parameter used for the kernel. The bandwidth parameter is used if and only if the kernel is a callable function.

  • outlier_label ({manual label, ‘most_frequent’}, optional (default = None)) – Label for outlier samples (samples with no neighbors in given radius).

    • manual label: str or int label (should be the same type as y) or list of manual labels if multi-output is used.

    • ‘most_frequent’ : assign the most frequent label of y to outliers.

    • None : when any outlier is detected, ValueError will be raised.

  • n_jobs (int or None, optional (default = None)) – The number of parallel jobs to run for neighbors search. None means 1; -1 means using all processors.

classes_#

Class labels known to the classifier.

Type:

array-like, shape=[n_classes,]

effective_metric_#

The distance metric used. It will be the same as the distance parameter or a synonym of it, e.g. ‘euclidean’ if the distance parameter is set to ‘minkowski’ and the p parameter to 2.

Type:

string or callable

effective_metric_params_#

Additional keyword arguments for the distance function. For most distances it will be the same as the distance_params parameter, but it may also contain the p parameter value if the effective_metric_ attribute is set to ‘minkowski’.

Type:

dict

outputs_2d_#

False when y’s shape is […,] or […, 1] during fit, otherwise True.

Type:

bool

References

This algorithm uses the scikit-learn library: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.RadiusNeighborsClassifier.html
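
Example

A minimal usage sketch on the hypersphere with the default ‘distance’ kernel:

>>> import geomstats.backend as gs
>>> from geomstats.geometry.hypersphere import Hypersphere
>>> from geomstats.learning.kernel_density_estimation_classifier import KernelDensityEstimationClassifier
>>> sphere = Hypersphere(dim=2)
>>> data = sphere.random_uniform(n_samples=10)
>>> labels = gs.array([0] * 5 + [1] * 5)
>>> clf = KernelDensityEstimationClassifier(sphere, bandwidth=0.5)
>>> clf.fit(data, labels)
>>> clf.predict(sphere.random_uniform(n_samples=3))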

set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') → KernelDensityEstimationClassifier#

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in score.

Returns:

self (object) – The updated object.

geomstats.learning.kernel_density_estimation_classifier.wrap(function)[source]#

Wrap a function to first convert args to arrays.

geomstats.learning.kmeans module#

K-means clustering.

Lead author: Hadi Zaatiti.

class geomstats.learning.kmeans.RiemannianKMeans(space, n_clusters=8, init='random', tol=0.01, max_iter=100, verbose=0)[source]#

Bases: TransformerMixin, ClusterMixin, BaseEstimator

Class for k-means clustering on manifolds.

K-means algorithm using Riemannian manifolds.

Parameters:
  • space (Manifold) – Equipped manifold.

  • n_clusters (int) – Number of clusters (k value of the k-means). Optional, default: 8.

  • init (str or callable or array-like, shape=[n_clusters, n_features]) – How to initialize cluster centers at the beginning of the algorithm. The choice ‘random’ will select training points as initial cluster centers uniformly at random. The choice ‘kmeans++’ selects cluster centers heuristically to improve the convergence rate. When providing an array of shape (n_clusters, n_features), the cluster centers are chosen as the rows of that array. When providing a callable, it receives as arguments the argument X to fit() and the number of cluster centers n_clusters and is expected to return an array as above. Optional, default: ‘random’.

  • tol (float) – Convergence factor. Convergence is achieved when the difference of mean distance between two steps is lower than tol. Optional, default: 1e-2.

  • max_iter (int) – Maximum number of iterations. Optional, default: 100

  • verbose (int) – If verbose > 0, information will be printed during learning. Optional, default: 0.

Notes

  • Required metric methods: dist.

Example

Available example on the Poincaré Ball and Hypersphere manifolds examples.plot_kmeans_manifolds
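
A minimal usage sketch on the hypersphere:

>>> from geomstats.geometry.hypersphere import Hypersphere
>>> from geomstats.learning.kmeans import RiemannianKMeans
>>> sphere = Hypersphere(dim=2)
>>> data = sphere.random_uniform(n_samples=30)
>>> kmeans = RiemannianKMeans(sphere, n_clusters=3)
>>> kmeans.fit(data)
>>> labels = kmeans.predict(data)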

fit(X)[source]#

Provide cluster centers and data labels.

Alternate between computing the mean of each cluster and labelling data according to the new positions of the cluster centers.

Parameters:

X (array-like, shape=[n_samples, n_features]) – Training data, where n_samples is the number of samples and n_features is the number of features.

Returns:

self (object) – Returns self.

predict(X)[source]#

Predict the labels for each data point.

Label each data point with the cluster having the nearest cluster center using metric distance.

Parameters:

X (array-like, shape=[n_samples, n_features]) – Input data.

Returns:

labels (array-like, shape=[n_samples,]) – Array of predicted cluster indices for each sample.

geomstats.learning.kmedoids module#

K-medoids clustering.

Lead author: Hadi Zaatiti.

class geomstats.learning.kmedoids.RiemannianKMedoids(space, n_clusters=8, init='random', max_iter=100, n_jobs=1)[source]#

Bases: TransformerMixin, ClusterMixin, BaseEstimator

Class for K-medoids clustering on manifolds.

K-medoids algorithm using Riemannian manifolds.

Parameters:
  • space (Manifold) – Equipped manifold.

  • n_clusters (int) – Number of clusters (k value of k-medoids). Optional, default: 8.

  • max_iter (int) – Maximum number of iterations. Optional, default: 100.

  • init (str) – How to initialize cluster centers at the beginning of the algorithm. The choice ‘random’ will select training points as initial cluster centers uniformly at random. Optional, default: ‘random’.

  • n_jobs (int) – Number of jobs to run in parallel. -1 means using all processors. Optional, default: 1.

Notes

  • Required metric methods: dist, dist_pairwise.

Example

Available example on the Poincaré Ball and Hypersphere manifolds examples.plot_kmedoids_manifolds
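
A minimal usage sketch on the hypersphere (fit returns self, so fit and predict can be chained):

>>> from geomstats.geometry.hypersphere import Hypersphere
>>> from geomstats.learning.kmedoids import RiemannianKMedoids
>>> sphere = Hypersphere(dim=2)
>>> data = sphere.random_uniform(n_samples=30)
>>> kmedoids = RiemannianKMedoids(sphere, n_clusters=3)
>>> labels = kmedoids.fit(data).predict(data)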

fit(X)[source]#

Provide cluster centers and data labels.

Label the data by minimizing the distance between the data points and the cluster centers, which are chosen among the data points. Minimization is performed by swapping cluster centers with other data points.

Parameters:

X (array-like, shape=[n_samples, dim]) – Training data, where n_samples is the number of samples and dim is the number of dimensions.

Returns:

self (object) – Returns self.

predict(X)[source]#

Predict the closest cluster for each sample in X.

Parameters:

X (array-like, shape=[n_samples, dim]) – Data to assign to clusters, where n_samples is the number of samples and dim is the number of dimensions.

Returns:

labels (array-like, shape=[n_samples,]) – Index of the cluster each sample belongs to.

geomstats.learning.knn module#

The KNN classifier on manifolds.

Lead author: Yann Cabanes.

class geomstats.learning.knn.KNearestNeighborsClassifier(space, n_neighbors=5, weights='uniform', n_jobs=None)[source]#

Bases: KNeighborsClassifier

Classifier implementing the k-nearest neighbors vote on manifolds.

Parameters:
  • space (Manifold) – Equipped manifold.

  • n_neighbors (int, optional (default = 5)) – Number of neighbors to use by default.

  • weights (string or callable, optional (default = ‘uniform’)) – Weight function used in prediction. Possible values:

    • ‘uniform’ : uniform weights. All points in each neighborhood are weighted equally.

    • ‘distance’ : weight points by the inverse of their distance. In this case, closer neighbors of a query point will have a greater influence than neighbors which are further away.

    • [callable] : a user-defined function which accepts an array of distances, and returns an array of the same shape containing the weights.

  • n_jobs (int or None, optional (default = None)) – The number of parallel jobs to run for neighbors search. None means 1; -1 means using all processors.

classes_#

Class labels known to the classifier.

Type:

array, shape=[n_classes,]

effective_metric_#

The distance metric used. It will be the same as the distance parameter or a synonym of it, e.g. ‘euclidean’ if the distance parameter is set to ‘minkowski’ and the p parameter to 2.

Type:

string or callable

effective_metric_params_#

Additional keyword arguments for the distance function. For most distances, it will be the same as the distance_params parameter, but it may also contain the p parameter value if the effective_metric_ attribute is set to ‘minkowski’.

Type:

dict

outputs_2d_#

False when y’s shape is (n_samples,) or (n_samples, 1) during fit, otherwise True.

Type:

bool

References

This algorithm uses the scikit-learn library: scikit-learn/scikit-learn neighbors/_classification.py#L25
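
A minimal usage sketch on the sphere; the labels and sample sizes are illustrative, and the classifier follows the scikit-learn fit/predict API of its base class:

>>> import geomstats.backend as gs
>>> from geomstats.geometry.hypersphere import Hypersphere
>>> from geomstats.learning.knn import KNearestNeighborsClassifier
>>> sphere = Hypersphere(dim=2)
>>> X = sphere.random_von_mises_fisher(kappa=10, n_samples=20)
>>> y = gs.concatenate([gs.zeros(10), gs.ones(10)])
>>> knn = KNearestNeighborsClassifier(sphere, n_neighbors=3).fit(X, y)
>>> knn.predict(X[:2])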

set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') KNearestNeighborsClassifier#

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in score.

Returns:

self (object) – The updated object.

geomstats.learning.knn.wrap(function)[source]#

Wrap a function to first convert args to arrays.

geomstats.learning.mdm module#

The MDM classifier on manifolds.

Lead authors: Daniel Brooks and Quentin Barthelemy.

class geomstats.learning.mdm.RiemannianMinimumDistanceToMean(space)[source]#

Bases: BaseEstimator, ClassifierMixin, TransformerMixin

Minimum Distance to Mean (MDM) classifier on manifolds.

Classification by nearest centroid. For each of the given classes, a centroid is estimated according to the chosen metric. Each new point is then assigned to the class of its nearest centroid [BBCJ2012].

Parameters:

space (Manifold) – Equipped manifold.

classes_#

If fit, labels of training set.

Type:

array-like, shape=[n_classes,]

mean_estimates_#

If fit, centroids computed on training set.

Type:

array-like, shape=[n_classes, *space.shape]

Notes

  • Required metric methods: squared_dist, closest_neighbor_index.

References

[BBCJ2012]

A. Barachant, S. Bonnet, M. Congedo and C. Jutten, Multiclass Brain-Computer Interface Classification by Riemannian Geometry. IEEE Trans. Biomed. Eng., vol. 59, pp. 920-928, 2012.
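
A minimal usage sketch on SPD matrices, a typical use case of [BBCJ2012]; the sample sizes and labels are illustrative:

>>> import geomstats.backend as gs
>>> from geomstats.geometry.spd_matrices import SPDMatrices
>>> from geomstats.learning.mdm import RiemannianMinimumDistanceToMean
>>> space = SPDMatrices(n=2)
>>> X = space.random_point(n_samples=10)
>>> y = gs.array([0] * 5 + [1] * 5)
>>> mdm = RiemannianMinimumDistanceToMean(space).fit(X, y)
>>> mdm.predict(space.random_point(n_samples=2))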

fit(X, y, weights=None)[source]#

Compute Frechet mean of each class.

Parameters:
  • X (array-like, shape=[n_samples, *space.shape]) – Training input samples.

  • y (array-like, shape=[n_samples,]) – Training labels.

  • weights (array-like, shape=[n_samples,]) – Weights associated to the samples. Optional, default: None, in which case all samples are equally weighted.

Returns:

self (object) – Returns self.

property n_classes_#

Number of classes.

predict(X)[source]#

Compute closest neighbor according to riemannian_metric.

Parameters:

X (array-like, shape=[n_samples, *space.shape]) – Test samples.

Returns:

y (array-like, shape=[n_samples,]) – Predicted labels.

predict_proba(X)[source]#

Compute probabilities.

Compute probabilities to belong to classes according to riemannian_metric.

Parameters:

X (array-like, shape=[n_samples, *space.shape]) – Test samples.

Returns:

probas (array-like, shape=[n_samples, n_classes]) – Probability of the sample for each class in the model.

set_fit_request(*, weights: bool | None | str = '$UNCHANGED$') RiemannianMinimumDistanceToMean#

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for weights parameter in fit.

Returns:

self (object) – The updated object.

set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') RiemannianMinimumDistanceToMean#

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in score.

Returns:

self (object) – The updated object.

transform(X)[source]#

Compute distances to each centroid.

Compute distances to each centroid according to riemannian_metric.

Parameters:

X (array-like, shape=[n_samples, *space.shape]) – Test samples.

Returns:

dist (ndarray, shape=[n_samples, n_classes]) – Distances to each centroid.

geomstats.learning.online_kmeans module#

Online kmeans algorithm on Manifolds.

Lead author: Alice Le Brigant.

class geomstats.learning.online_kmeans.OnlineKMeans(space, n_clusters, n_repetitions=20, atol=1e-05, max_iter=500)[source]#

Bases: BaseEstimator, ClusterMixin

Online k-means clustering.

Online k-means clustering seeks to divide a set of data points into a specified number of classes, while minimizing intra-class variance. It is closely linked to discrete quantization, which computes the closest approximation of the empirical distribution of the dataset by a discrete distribution supported by a smaller number of points with respect to the Wasserstein distance. The algorithm used can either be seen as an online version of the k-means algorithm or as Competitive Learning Riemannian Quantization (see [LBP2019]).

Parameters:
  • space (Manifold) – Equipped manifold. At each iteration, one of the cluster centers is moved in the direction of the new datum, according to the exponential map of the underlying space, which is a method of its metric.

  • n_clusters (int) – Number of clusters of the k-means clustering, or number of desired atoms of the quantized distribution.

  • n_repetitions (int, default=20) – The cluster centers are updated using decreasing step sizes, each of which stays constant for n_repetitions iterations to allow a better exploration of the data points.

  • max_iter (int, default=500) – Maximum number of iterations. If it is reached, the quantization may be inaccurate.

cluster_centers_#

Coordinates of cluster centers.

Type:

array, [n_clusters, n_features]

labels_#

Labels of each point.

Notes

  • Required metric methods: exp, log, dist, closest_neighbor_index.

Example

>>> from geomstats.geometry.hypersphere import Hypersphere
>>> from geomstats.learning.online_kmeans import OnlineKMeans
>>> sphere = Hypersphere(dim=2)
>>> X = sphere.random_von_mises_fisher(kappa=10, n_samples=50)
>>> clustering = OnlineKMeans(space=sphere, n_clusters=4).fit(X)
>>> clustering.cluster_centers_
>>> clustering.labels_

References

[LBP2019]

A. Le Brigant and S. Puechmorel, Optimal Riemannian quantization with an application to air traffic analysis. J. Multivar. Anal. 173 (2019), 685 - 703.

fit(X, y=None)[source]#

Perform clustering.

Perform online version of k-means algorithm on data contained in X. The data points are treated sequentially and the cluster centers are updated one at a time. This version of k-means avoids computing the mean of each cluster at each iteration and is therefore less computationally intensive than the offline version.

In the setting of quantization of probability distributions, this algorithm is also known as Competitive Learning Riemannian Quantization. It computes the closest approximation of the empirical distribution of data by a discrete distribution supported by a smaller number of points with respect to the Wasserstein distance. This smaller number of points is n_clusters.

Parameters:
  • X (array-like, shape=[n_samples, n_features]) – Input data. It is treated sequentially by the algorithm, i.e. one datum is chosen randomly at each iteration.

  • y (None) – Target values. Ignored.

Returns:

self (object) – Returns self.

predict(X)[source]#

Predict the closest cluster each sample in X belongs to.

Parameters:

X (array-like, shape=[n_samples, n_features]) – New data to predict.

Returns:

labels (array-like, shape=[n_samples,]) – Index of the cluster each sample belongs to.

geomstats.learning.pca module#

Principal Component Analysis on Manifolds.

Lead author: Nina Miolane.

class geomstats.learning.pca.ExactPGA(space, **kwargs)[source]#

Bases: object

Exact Principal Geodesic Analysis.

Parameters:

space (Manifold) – Equipped manifold.

class geomstats.learning.pca.HyperbolicPlaneExactPGA(space, n_grid=100)[source]#

Bases: _BasePCA

Exact Principal Geodesic Analysis in the hyperbolic plane.

The first principal component is computed by finding the direction in a unit ball around the mean that maximizes the variance of the projections on the induced geodesic. The projections are given by closed form expressions in extrinsic coordinates. The second principal component is the direction at the mean that is orthogonal to the first principal component.

Parameters:
  • space (Hyperbolic) – Two-dimensional hyperbolic space.

  • n_grid (int, default=100) – Number of vectors used to discretize the unit ball when finding the direction of maximal variance.

components_#

Principal axes, representing the directions of maximal variance in the data. They are the initial velocities of the principal geodesics.

Type:

array-like, shape=[n_components, 2]

mean_#

Intrinsic mean of the data points.

Type:

array-like, shape=[2,]

References

[CSV2016]

R. Chakraborty, D. Seo, and B. C. Vemuri, “An efficient exact-pga algorithm for constant curvature manifolds.” Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
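
A minimal usage sketch; the choice of Hyperbolic(dim=2), which defaults to extrinsic (hyperboloid) coordinates, and the sample size are illustrative:

>>> from geomstats.geometry.hyperbolic import Hyperbolic
>>> from geomstats.learning.pca import HyperbolicPlaneExactPGA
>>> plane = Hyperbolic(dim=2)
>>> X = plane.random_point(n_samples=20)
>>> pga = HyperbolicPlaneExactPGA(plane)
>>> X_new = pga.fit_transform(X)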

fit(X, y=None)[source]#

Fit the model with X.

Parameters:
  • X (array-like, shape=[…, n_features]) – Training data in the hyperbolic plane. If the space is the Poincare half-space or Poincare ball, n_features is 2. If it is the hyperboloid, n_features is 3.

  • y (Ignored (Compliance with scikit-learn interface))

Returns:

self (object) – Returns the instance itself.

fit_transform(X, y=None)[source]#

Project X on the principal components.

Parameters:
  • X (array-like, shape=[n_points, n_features]) – Training data in the hyperbolic plane. If the space is the Poincare half-space or Poincare ball, n_features is 2. If it is the hyperboloid, n_features is 3.

  • y (Ignored (Compliance with scikit-learn interface))

Returns:

X_new (array-like, shape=[n_components, n_points, 2]) – Projections of the data on the first principal geodesic (first line of the array) and on the second principal geodesic (second line).

class geomstats.learning.pca.TangentPCA(space, n_components=None, copy=True, whiten=False, tol=0.0, iterated_power='auto', random_state=None)[source]#

Bases: _BasePCA

Tangent Principal component analysis (tPCA).

Linear dimensionality reduction using Singular Value Decomposition of the Riemannian Log of the data at the tangent space of the Frechet mean.

Parameters:
  • space (Manifold) – Equipped manifold.

  • n_components (int) – Number of principal components. Optional, default: None.

Notes

  • Required geometry methods: exp, log.

  • If base_point=None, the methods required by FrechetMean are also needed.

  • Lie groups can be used without a metric, but base_point or mean_estimator need to be specified.
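
A minimal usage sketch on the sphere; the sample size and number of components are illustrative:

>>> from geomstats.geometry.hypersphere import Hypersphere
>>> from geomstats.learning.pca import TangentPCA
>>> sphere = Hypersphere(dim=2)
>>> X = sphere.random_von_mises_fisher(kappa=15, n_samples=30)
>>> tpca = TangentPCA(sphere, n_components=2)
>>> X_new = tpca.fit_transform(X)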

fit(X, y=None, base_point=None)[source]#

Fit the model with X.

Parameters:
  • X (array-like, shape=[…, n_features]) – Training data, where n_samples is the number of samples and n_features is the number of features.

  • y (Ignored (Compliance with scikit-learn interface))

  • base_point (array-like, shape=[…, n_features], optional) – Point at which to perform the tangent PCA. Optional, default: Frechet mean if None.

Returns:

self (object) – Returns the instance itself.

fit_transform(X, y=None, base_point=None)[source]#

Fit the model with X and apply the dimensionality reduction on X.

Parameters:
  • X (array-like, shape=[…, n_features]) – Training data, where n_samples is the number of samples and n_features is the number of features.

  • y (Ignored (Compliance with scikit-learn interface))

  • base_point (array-like, shape=[…, n_features]) – Point at which to perform the tangent PCA. Optional, default: Frechet mean if None.

Returns:

X_new (array-like, shape=[…, n_components]) – Projected data.

inverse_transform(X)[source]#

Low-dimensional reconstruction of X.

The reconstruction will match X_original whose transform would be X if n_components=min(n_samples, n_features).

Parameters:

X (array-like, shape=[…, n_components]) – New data, where n_samples is the number of samples and n_components is the number of components.

Returns:

X_original (array-like, shape=[…, n_features]) – Original data.

set_fit_request(*, base_point: bool | None | str = '$UNCHANGED$') TangentPCA#

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

base_point (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_point parameter in fit.

Returns:

self (object) – The updated object.

transform(X, y=None)[source]#

Project X on the principal components.

Parameters:
  • X (array-like, shape=[…, n_features]) – Data, where n_samples is the number of samples and n_features is the number of features.

  • y (Ignored (Compliance with scikit-learn interface))

Returns:

X_new (array-like, shape=[…, n_components]) – Projected data.

geomstats.learning.preprocessing module#

Transformer for manifold-valued data.

Lead author: Nicolas Guigui.

class geomstats.learning.preprocessing.ToTangentSpace(space)[source]#

Bases: BaseEstimator, TransformerMixin

Lift data to a tangent space.

Compute the logs of all data points and reshape them to 1d vectors if necessary. All data points, which belong to a possibly nonlinear manifold, are thus lifted to a tangent space of the manifold, which is a vector space. By default, the mean of the data is computed (with the FrechetMean or the ExponentialBarycenter estimator, as appropriate) and the tangent space at the mean is used. Any other base point can be passed instead. The data points are then represented by the initial velocities of the geodesics that lead from base_point to each data point. Any machine learning algorithm can then be used with the output array.

Parameters:

space (Manifold) – Equipped manifold or unequipped space implementing exp and log.

Notes

  • Required geometry methods: log, exp.
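
A minimal usage sketch on the sphere, e.g. as a preprocessing step before any Euclidean algorithm; the manifold and sample size are illustrative:

>>> from geomstats.geometry.hypersphere import Hypersphere
>>> from geomstats.learning.preprocessing import ToTangentSpace
>>> sphere = Hypersphere(dim=2)
>>> X = sphere.random_von_mises_fisher(kappa=5, n_samples=20)
>>> to_tangent = ToTangentSpace(sphere).fit(X)
>>> X_new = to_tangent.transform(X)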

fit(X, y=None, weights=None, base_point=None)[source]#

Compute the central point at which to take the log.

This method is only used if base_point=None to compute the mean of the input data.

Parameters:
  • X (array-like, shape=[n_samples, {dim, [n, n]}]) – The training input samples.

  • y (None) – Ignored.

  • weights (array-like, shape=[n_samples, 1]) – Weights associated to the points. Optional, default: None.

  • base_point (array-like, shape=[{dim, [n, n]}]) – Point similar to the input data from which to compute the logs. Optional, default: None.

Returns:

self (object) – Returns self.

inverse_transform(X)[source]#

Reconstruction of X.

The reconstruction will match X_original whose transform would be X.

Parameters:

X (array-like, shape=[n_samples, dim]) – New data, where dim is the dimension of the manifold data belong to.

Returns:

X_original (array-like, shape=[n_samples, {dim, [n, n]}]) – Data lying on the manifold.

set_fit_request(*, base_point: bool | None | str = '$UNCHANGED$', weights: bool | None | str = '$UNCHANGED$') ToTangentSpace#

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_point (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_point parameter in fit.

  • weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for weights parameter in fit.

Returns:

self (object) – The updated object.

transform(X)[source]#

Lift data to a tangent space.

Compute the logs of all data points and reshape them to 1d vectors if necessary. By default, the logs are taken at the mean, but any other base point can be passed. Any machine learning algorithm can then be used with the output array.

Parameters:

X (array-like, shape=[n_samples, {dim, [n, n]}]) – Data to transform.

Returns:

X_new (array-like, shape=[n_samples, dim]) – Lifted data.

geomstats.learning.radial_kernel_functions module#

Radial kernel functions.

Lead author: Yann Cabanes.

References

https://en.wikipedia.org/wiki/Kernel_(statistics) https://en.wikipedia.org/wiki/Radial_basis_function

Notes

We chose not to apply the normalization coefficients used in some references in order that the kernel functions integrate to 1 on the Euclidean space of dimension 1.
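
All kernels below share the same calling convention; a minimal sketch with the Gaussian kernel, where the distance values are illustrative:

>>> import geomstats.backend as gs
>>> from geomstats.learning.radial_kernel_functions import gaussian_radial_kernel
>>> distance = gs.array([0.0, 0.5, 1.0, 2.0])
>>> weight = gaussian_radial_kernel(distance, bandwidth=1.0)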

geomstats.learning.radial_kernel_functions.biweight_radial_kernel(distance, bandwidth=1.0)[source]#

Biweight radial kernel.

Parameters:
  • distance (array-like) – Array of non-negative real values.

  • bandwidth (float, optional (default=1.0)) – Positive scale parameter of the kernel.

Returns:

weight (array-like) – Array of non-negative real values with the same shape as the ‘distance’ parameter.

References

https://en.wikipedia.org/wiki/Kernel_(statistics)

geomstats.learning.radial_kernel_functions.bump_radial_kernel(distance, bandwidth=1.0)[source]#

Bump radial kernel.

Parameters:
  • distance (array-like) – Array of non-negative real values.

  • bandwidth (float, optional (default=1.0)) – Positive scale parameter of the kernel.

Returns:

weight (array-like) – Array of non-negative real values with the same shape as the ‘distance’ parameter.

References

https://en.wikipedia.org/wiki/Radial_basis_function

geomstats.learning.radial_kernel_functions.cosine_radial_kernel(distance, bandwidth=1.0)[source]#

Cosine radial kernel.

Parameters:
  • distance (array-like) – Array of non-negative real values.

  • bandwidth (float, optional (default=1.0)) – Positive scale parameter of the kernel.

Returns:

weight (array-like) – Array of non-negative real values with the same shape as the ‘distance’ parameter.

References

https://en.wikipedia.org/wiki/Kernel_(statistics)

geomstats.learning.radial_kernel_functions.gaussian_radial_kernel(distance, bandwidth=1.0)[source]#

Gaussian radial kernel.

Parameters:
  • distance (array-like) – Array of non-negative real values.

  • bandwidth (float, optional (default=1.0)) – Positive scale parameter of the kernel.

Returns:

weight (array-like) – Array of non-negative real values with the same shape as the ‘distance’ parameter.

geomstats.learning.radial_kernel_functions.inverse_multiquadric_radial_kernel(distance, bandwidth=1.0)[source]#

Inverse multiquadric radial kernel.

Parameters:
  • distance (array-like) – Array of non-negative real values.

  • bandwidth (float, optional (default=1.0)) – Positive scale parameter of the kernel.

Returns:

weight (array-like) – Array of non-negative real values with the same shape as the ‘distance’ parameter.

References

https://en.wikipedia.org/wiki/Radial_basis_function

geomstats.learning.radial_kernel_functions.inverse_quadratic_radial_kernel(distance, bandwidth=1.0)[source]#

Inverse quadratic radial kernel.

Parameters:
  • distance (array-like) – Array of non-negative real values.

  • bandwidth (float, optional (default=1.0)) – Positive scale parameter of the kernel.

Returns:

weight (array-like) – Array of non-negative real values with the same shape as the ‘distance’ parameter.

References

https://en.wikipedia.org/wiki/Radial_basis_function

geomstats.learning.radial_kernel_functions.laplacian_radial_kernel(distance, bandwidth=1.0)[source]#

Laplacian radial kernel.

Parameters:
  • distance (array-like) – Array of non-negative real values.

  • bandwidth (float, optional (default=1.0)) – Positive scale parameter of the kernel.

Returns:

weight (array-like) – Array of non-negative real values with the same shape as the ‘distance’ parameter.

geomstats.learning.radial_kernel_functions.logistic_radial_kernel(distance, bandwidth=1.0)[source]#

Logistic radial kernel.

Parameters:
  • distance (array-like) – Array of non-negative real values.

  • bandwidth (float, optional (default=1.0)) – Positive scale parameter of the kernel.

Returns:

weight (array-like) – Array of non-negative real values with the same shape as the ‘distance’ parameter.

References

https://en.wikipedia.org/wiki/Kernel_(statistics)

geomstats.learning.radial_kernel_functions.parabolic_radial_kernel(distance, bandwidth=1.0)[source]#

Parabolic radial kernel.

Parameters:
  • distance (array-like) – Array of non-negative real values.

  • bandwidth (float, optional (default=1.0)) – Positive scale parameter of the kernel.

Returns:

weight (array-like) – Array of non-negative real values with the same shape as the ‘distance’ parameter.

References

https://en.wikipedia.org/wiki/Kernel_(statistics)

geomstats.learning.radial_kernel_functions.sigmoid_radial_kernel(distance, bandwidth=1.0)[source]#

Sigmoid radial kernel.

Parameters:
  • distance (array-like) – Array of non-negative real values.

  • bandwidth (float, optional (default=1.0)) – Positive scale parameter of the kernel.

Returns:

weight (array-like) – Array of non-negative real values with the same shape as the ‘distance’ parameter.

geomstats.learning.radial_kernel_functions.triangular_radial_kernel(distance, bandwidth=1.0)[source]#

Triangular radial kernel.

Parameters:
  • distance (array-like) – Array of non-negative real values.

  • bandwidth (float, optional (default=1.0)) – Positive scale parameter of the kernel.

Returns:

weight (array-like) – Array of non-negative real values with the same shape as the ‘distance’ parameter.

References

https://en.wikipedia.org/wiki/Kernel_(statistics)

geomstats.learning.radial_kernel_functions.tricube_radial_kernel(distance, bandwidth=1.0)[source]#

Tricube radial kernel.

Parameters:
  • distance (array-like) – Array of non-negative real values.

  • bandwidth (float, optional (default=1.0)) – Positive scale parameter of the kernel.

Returns:

weight (array-like) – Array of non-negative real values with the same shape as the ‘distance’ parameter.

References

https://en.wikipedia.org/wiki/Kernel_(statistics)

geomstats.learning.radial_kernel_functions.triweight_radial_kernel(distance, bandwidth=1.0)[source]#

Triweight radial kernel.

Parameters:
  • distance (array-like) – Array of non-negative real values.

  • bandwidth (float, optional (default=1.0)) – Positive scale parameter of the kernel.

Returns:

weight (array-like) – Array of non-negative real values with the same shape as the ‘distance’ parameter.

References

https://en.wikipedia.org/wiki/Kernel_(statistics)

geomstats.learning.radial_kernel_functions.uniform_radial_kernel(distance, bandwidth=1.0)[source]#

Uniform radial kernel.

Parameters:
  • distance (array-like) – Array of non-negative real values.

  • bandwidth (float, optional (default=1.0)) – Positive scale parameter of the kernel.

Returns:

weight (array-like) – Array of non-negative real values with the same shape as the ‘distance’ parameter.

References

https://en.wikipedia.org/wiki/Kernel_(statistics)

geomstats.learning.riemannian_mean_shift module#

Riemannian mean-shift clustering.

Lead authors: Nina Miolane and Shubham Talbar.

class geomstats.learning.riemannian_mean_shift.RiemannianMeanShift(space, bandwidth, tol=0.01, n_clusters=1, n_jobs=1, max_iter=100, init_centers='from_points', kernel='flat')[source]#

Bases: ClusterMixin, BaseEstimator

Class for Riemannian Mean Shift algorithm on manifolds.

Mean Shift is a procedure for locating the maxima, i.e. the modes, of a density function from discrete data sampled from that function. It is an iterative method for finding the centers of a collection of clusters.

The following implementation assumes a flat kernel.

Parameters:
  • space (Manifold) – Equipped manifold.

  • bandwidth (float) – Size of the neighbourhood around each center: all points within distance ‘bandwidth’ of a center are considered when computing its new mean.

  • tol (float) – Stopping condition. Computation of subsequent mean centers is stopped when the distance between them is less than ‘tol’. Optional, default : 1e-2.

  • n_clusters (int) – Number of centers. Optional, default : 1.

  • n_jobs (int) – Number of parallel threads to be initiated for parallel jobs. Optional, default : 1.

  • max_iter (int) – Upper bound on total number of iterations for the centers to converge. Optional, default : 100.

  • init_centers (str) – Initializing centers, either from the given input points or random points uniformly distributed in the input manifold. Optional, default : “from_points”.

  • kernel (str) – Weighting function used to assign kernel weights to each center. Optional, default : “flat”.

Notes

  • Required metric methods: dist, closest_neighbor_index.
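
A minimal usage sketch on the sphere; the bandwidth, sample size and number of clusters are illustrative and depend on the spread of the data:

>>> from geomstats.geometry.hypersphere import Hypersphere
>>> from geomstats.learning.riemannian_mean_shift import RiemannianMeanShift
>>> sphere = Hypersphere(dim=2)
>>> X = sphere.random_von_mises_fisher(kappa=20, n_samples=30)
>>> rms = RiemannianMeanShift(sphere, bandwidth=0.6, n_clusters=2).fit(X)
>>> labels = rms.predict(X)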

fit(X, y=None)[source]#

Fit the cluster centers to the input points.

Parameters:
  • X (array-like, shape=[n_samples, n_features]) – Input points.

  • y (None) – Target values. Ignored.

Returns:

self (object) – Returns self.

predict(X)[source]#

Predict the closest cluster each sample in X belongs to.

Parameters:

X (array-like, shape=[n_samples, n_features]) – Input points.

Returns:

labels (array-like, shape=[n_samples,]) – Index of the cluster each sample belongs to.

geomstats.learning.wrapped_gaussian_process module#

Wrapped Gaussian Process.

Lead author: Arthur Pignet.

Extension of Gaussian Processes to Riemannian Manifolds, introduced in [Mallasto].

References

[Mallasto]

Mallasto, A. and Feragen, A. “Wrapped gaussian process regression on riemannian manifolds.” IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018)

class geomstats.learning.wrapped_gaussian_process.WrappedGaussianProcess(space, prior)[source]#

Bases: MultiOutputMixin, RegressorMixin, BaseEstimator

Wrapped Gaussian Process.

The implementation is based on Algorithm 4 of [Mallasto].

Parameters:
  • space (Manifold) – Equipped manifold.

  • prior (callable) – Associates to each input a manifold-valued point.
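
A minimal usage sketch on the sphere with a constant prior at the north pole; the prior, concentration and sample size are illustrative, and the inputs follow the 1-D shape documented for fit below:

>>> import geomstats.backend as gs
>>> from geomstats.geometry.hypersphere import Hypersphere
>>> from geomstats.learning.wrapped_gaussian_process import WrappedGaussianProcess
>>> sphere = Hypersphere(dim=2)
>>> pole = gs.array([0.0, 0.0, 1.0])
>>> prior = lambda x: gs.stack([pole] * x.shape[0])
>>> X = gs.linspace(0.0, 1.0, 20)
>>> y = sphere.random_von_mises_fisher(mu=pole, kappa=30, n_samples=20)
>>> wgp = WrappedGaussianProcess(sphere, prior=prior).fit(X, y)
>>> y_pred = wgp.predict(X)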

fit(X, y)[source]#

Fit Wrapped Gaussian process regression model.

The Wrapped Gaussian process is fit through the following steps:

  • Compute the tangent dataset using the prior

  • Fit a Gaussian process regression on the tangent dataset

  • Store the resulting Euclidean Gaussian process

Parameters:
  • X (array-like, shape=[n_samples,]) – Training input samples.

  • y (array-like, shape[n_samples, {dim, [n,n]}]) – Training target values.

Returns:

self (object) – Returns self.

predict(X, return_tangent_std=False, return_tangent_cov=False)[source]#

Predict using the Gaussian process regression model.

A fitted Wrapped Gaussian process can be used to predict values through the following steps:

  • Use the stored Gaussian process regression on the dataset to return tangent predictions

  • Compute the base-points using the prior

  • Map the tangent predictions on the manifold via the metric’s exp with the base-points yielded by the prior

We can also predict based on an unfitted model, by using the GP prior. In addition to the mean of the predictive distribution, this method can also return its standard deviation (return_tangent_std=True) or covariance (return_tangent_cov=True). Note that at most one of the two can be requested.

Parameters:
  • X (array-like of shape (n_samples, n_features) or list of object) – Query points where the GP is evaluated.

  • return_tangent_std (bool, default=False) – If True, the standard deviation of the predictive distribution at the query points in the tangent space is returned along with the mean.

  • return_tangent_cov (bool, default=False) – If True, the covariance of the joint predictive distribution at the query points in the tangent space is returned along with the mean.

Returns:

  • y_mean (ndarray of shape (n_samples,) or (n_samples, n_targets)) – Mean of the predictive distribution at the query points.

  • y_std (ndarray of shape (n_samples,) or (n_samples, n_targets), optional) – Standard deviation of the predictive distribution at the query points in the tangent space. Only returned when return_tangent_std is True.

  • y_cov (ndarray of shape (n_samples, n_samples) or (n_samples, n_samples, n_targets), optional) – Covariance of the joint predictive distribution at the query points in the tangent space. Only returned when return_tangent_cov is True. In the case where the target is matrix-valued, returns the covariance of the vectorized prediction.

sample_y(X, n_samples=1, random_state=0)[source]#

Draw samples from Wrapped Gaussian process and evaluate at X.

A fitted Wrapped Gaussian process can be used to sample values through the following steps:

  • Use the stored Gaussian process regression on the dataset to sample tangent values

  • Compute the base-points using the prior

  • Flatten (and repeat if needed) both the base-points and the tangent samples to benefit from vectorized computation.

  • Map the tangent samples on the manifold via the metric’s exp with the flattened and repeated base-points yielded by the prior

Parameters:
  • X (array-like of shape (n_samples_X, n_features) or list of object) – Query points where the WGP is evaluated.

  • n_samples (int, default=1) – Number of samples drawn from the Wrapped Gaussian process per query point.

  • random_state (int, RandomState instance or None, default=0) – Determines random number generation to randomly draw samples. Pass an int for reproducible results across multiple function calls.

Returns:

y_samples (ndarray of shape (n_samples_X, n_samples), or (n_samples_X, *target_shape, n_samples)) – Values of n_samples samples drawn from wrapped Gaussian process and evaluated at query points.

set(**kwargs)[source]#

Set euclidean_gpr parameters.

Especially useful for one-line instantiations.

set_predict_request(*, return_tangent_cov: bool | None | str = '$UNCHANGED$', return_tangent_std: bool | None | str = '$UNCHANGED$') WrappedGaussianProcess#

Request metadata passed to the predict method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to predict.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • return_tangent_cov (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for return_tangent_cov parameter in predict.

  • return_tangent_std (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for return_tangent_std parameter in predict.

Returns:

self (object) – The updated object.

set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') WrappedGaussianProcess#

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in score.

Returns:

self (object) – The updated object.

Module contents#

Learning algorithms on manifolds.

class geomstats.learning.TemplateClassifier(demo_param='demo')[source]#

Bases: BaseEstimator, ClassifierMixin

An example classifier which implements a 1-NN algorithm.

For more information regarding how to build your own classifier, read more in the User Guide.

Parameters:

demo_param (str, default=’demo’) – A parameter used for demonstration of how to pass and store parameters.

X_#

The input passed during fit().

Type:

ndarray, shape (n_samples, n_features)

y_#

The labels passed during fit().

Type:

ndarray, shape (n_samples,)

classes_#

The classes seen at fit().

Type:

ndarray, shape (n_classes,)

fit(X, y)[source]#

Train classifier on labeled data.

Parameters:
  • X (array-like, shape (n_samples, n_features)) – The training input samples.

  • y (array-like, shape (n_samples,)) – The target values. An array of int.

Returns:

self (object) – Returns self.

predict(X)[source]#

Classify input data.

Parameters:

X (array-like, shape (n_samples, n_features)) – The input samples.

Returns:

y (ndarray, shape (n_samples,)) – The label for each sample is the label of the closest sample seen during fit.

set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') TemplateClassifier#

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in score.

Returns:

self (object) – The updated object.

class geomstats.learning.TemplateEstimator(demo_param='demo_param')[source]#

Bases: BaseEstimator

A template estimator to be used as a reference implementation.

For more information regarding how to build your own estimator, read more in the User Guide.

Parameters:

demo_param (str, default=’demo_param’) – A parameter used for demonstration of how to pass and store parameters.

fit(X, y)[source]#

Train estimator on labeled data.

Parameters:
  • X ({array-like, sparse matrix}, shape (n_samples, n_features)) – The training input samples.

  • y (array-like, shape (n_samples,) or (n_samples, n_outputs)) – The target values (class labels in classification, real numbers in regression).

Returns:

self (object) – Returns self.

predict(X)[source]#

Perform prediction.

Parameters:

X ({array-like, sparse matrix}, shape (n_samples, n_features)) – The input samples.

Returns:

y (ndarray, shape (n_samples,)) – Returns an array of ones.

class geomstats.learning.TemplateTransformer(demo_param='demo')[source]#

Bases: BaseEstimator, TransformerMixin

An example transformer that returns the element-wise square root.

For more information regarding how to build your own transformer, read more in the User Guide.

Parameters:

demo_param (str, default=’demo’) – A parameter used for demonstration of how to pass and store parameters.

n_features_#

The number of features of the data passed to fit().

Type:

int

fit(X, y=None)[source]#

Train function for a transformer.

Parameters:
  • X ({array-like, sparse matrix}, shape (n_samples, n_features)) – The training input samples.

  • y (None) – There is no need for a target in a transformer, yet the pipeline API requires this parameter.

Returns:

self (object) – Returns self.

transform(X)[source]#

Transform input data.

Parameters:

X ({array-like, sparse matrix}, shape (n_samples, n_features)) – The input samples.

Returns:

X_transformed (array, shape (n_samples, n_features)) – The array containing the element-wise square roots of the values in X.