# LazyTensor¶

Summary

This section contains the full API documentation of the LazyTensor wrapper, which works identically on NumPy arrays and PyTorch tensors.

• Vi(x_or_ind[, dim]) – Simple wrapper that returns an instantiation of LazyTensor of type 0.

• Vj(x_or_ind[, dim]) – Simple wrapper that returns an instantiation of LazyTensor of type 1.

• Pm(x_or_ind[, dim]) – Simple wrapper that returns an instantiation of LazyTensor of type 2.

• LazyTensor([x, axis]) – Symbolic wrapper for NumPy arrays and PyTorch tensors.

Syntax

pykeops.torch.Vi(x_or_ind, dim=None)[source]

Simple wrapper that returns an instantiation of LazyTensor of type 0.

pykeops.torch.Vj(x_or_ind, dim=None)[source]

Simple wrapper that returns an instantiation of LazyTensor of type 1.

pykeops.torch.Pm(x_or_ind, dim=None)[source]

Simple wrapper that returns an instantiation of LazyTensor of type 2.

class pykeops.torch.LazyTensor(x=None, axis=None)[source]

Symbolic wrapper for NumPy arrays and PyTorch tensors.

LazyTensors encode numerical arrays through the combination of a symbolic, mathematical formula and a list of small data arrays. They can be used to implement efficient algorithms on objects that are easy to define, but impossible to store in memory (e.g. the matrix of pairwise distances between two large point clouds).

LazyTensor may be created from standard NumPy arrays or PyTorch tensors, combined using simple mathematical operations and converted back to NumPy arrays or PyTorch tensors with efficient reduction routines, which outperform standard tensorized implementations by two orders of magnitude.
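As a point of reference, here is a dense NumPy version of a typical LazyTensor computation, a Gaussian kernel row-sum. KeOps performs the same reduction without ever materializing the (M, N) kernel matrix; the code below is only a sketch of the mathematics, not a KeOps call:

```python
import numpy as np

# Dense illustration of the computation that a LazyTensor performs
# lazily: a Gaussian kernel row-sum over two point clouds.
# With KeOps, the (M, N) matrices below are never stored in memory.
M, N, D = 1000, 2000, 3
rng = np.random.default_rng(0)
x, y = rng.standard_normal((M, D)), rng.standard_normal((N, D))

D_ij = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # (M, N) squared distances
K_ij = np.exp(-D_ij)                                   # (M, N) Gaussian kernel
a_i = K_ij.sum(axis=1)                                 # (M,) row-wise sums

print(a_i.shape)  # (1000,)
```

With pykeops, the same result is obtained by wrapping `x[:, None, :]` and `y[None, :, :]` as LazyTensors before writing the formula, so that `sum(axis=1)` triggers an online map-reduce scheme instead of a dense computation.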

__init__(x=None, axis=None)[source]

Creates a KeOps symbolic variable.

Parameters
• x

May be either:

• A float, a list of floats, a NumPy float, a 0D or 1D NumPy array, or a 0D or 1D PyTorch tensor, in which case the LazyTensor represents a constant vector of parameters, to be broadcast onto other LazyTensors.

• A 2D NumPy array or PyTorch tensor, in which case the LazyTensor represents a variable indexed by $$i$$ if axis=0 or $$j$$ if axis=1.

• A 3D+ NumPy array or PyTorch tensor with a dummy dimension (=1) at position -3 or -2, in which case the LazyTensor represents a variable indexed by $$j$$ or $$i$$, respectively. Dimensions before the last three are handled as batch dimensions, which may support operator broadcasting.

• A tuple of 3 integers (ind,dim,cat), in which case the LazyTensor represents a symbolic variable that should be instantiated at call-time.

• An integer, in which case the LazyTensor represents an integer constant handled efficiently at compilation time.

• None, for internal use.

• axis (int) – should be equal to 0 or 1 if x is a 2D tensor, and None otherwise.

Warning

A LazyTensor constructed from a NumPy array or a PyTorch tensor retains its dtype (float32 vs float64) and device properties (is it stored on the GPU?). Since KeOps does not support automatic type conversions and data transfers, please make sure not to mix LazyTensors that come from different frameworks/devices or that are stored with different precisions.
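The variable categories above follow standard broadcasting conventions, which can be sketched in plain NumPy (no KeOps required; the shapes are illustrative):

```python
import numpy as np

# NumPy sketch of the indexing conventions: an i-variable carries a
# dummy dimension at position -2, a j-variable at position -3, and a
# parameter vector broadcasts over both indices.
M, N, D = 5, 7, 3
x = np.ones((M, 1, D))   # i-variable: LazyTensor(x[:, None, :]) in KeOps
y = np.ones((1, N, D))   # j-variable: LazyTensor(y[None, :, :]) in KeOps
s = np.ones((D,))        # parameter:  broadcast on both indices

z = (x - y) * s          # broadcasting yields a dense (M, N, D) array;
print(z.shape)           # KeOps would keep this result symbolic instead.
```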

fixvariables()[source]

If needed, assigns final labels to each variable and pads their batch dimensions prior to a Genred() call.

promote(other, props)[source]

Creates a new LazyTensor whose None properties are set to those of self or other.

init()[source]

Creates a copy of a LazyTensor, without formula attribute.

join(other)[source]

Merges the variables and attributes of two LazyTensors, with a compatibility check. This method concatenates tuples of variables, without paying attention to repetitions.

unary(operation, dimres=None, opt_arg=None, opt_arg2=None)[source]

Symbolically applies operation to self, with optional arguments if needed.

The optional argument dimres may be used to specify the dimension of the output result.

binary(other, operation, is_operator=False, dimres=None, dimcheck='sameor1', opt_arg=None, opt_pos='last')[source]

Symbolically applies operation to self, with optional arguments if needed.

Keyword Arguments
• dimres (-) – May be used to specify the dimension of the output result.

• is_operator (-) – May be used to specify if operation is an operator like +, - or a “genuine” function.

• dimcheck (-) – shall we check the input dimensions? Supported values are "same", "sameor1", or None.

reduction(reduction_op, other=None, opt_arg=None, axis=None, dim=None, call=True, **kwargs)[source]

Applies a reduction to a LazyTensor. This method is used internally by the LazyTensor class.

Parameters
• reduction_op (string) – The string identifier of the reduction, which will be passed to the KeOps routines.

Keyword Arguments
• other – May be used to specify some weights; depends on the reduction.

• opt_arg – Typically, some integer needed by ArgKMin reductions; depends on the reduction.

• axis (integer) – The axis with respect to which the reduction should be performed. Supported values are nbatchdims and nbatchdims + 1, where nbatchdims is the number of “batch” dimensions before the last three ($$i$$ indices, $$j$$ indices, variables’ dimensions).

• dim (integer) – alternative keyword for the axis argument.

• call (True or False) – Should we actually perform the reduction on the current variables? If True, the returned object will be a NumPy array or a PyTorch tensor. Otherwise, we simply return a callable LazyTensor that may be used as a pykeops.numpy.Genred or pykeops.torch.Genred function on arbitrary tensor data.

• backend (string) – Specifies the map-reduce scheme, as detailed in the documentation of the Genred module.

• device_id (int, default=-1) – Specifies the GPU that should be used to perform the computation; a negative value lets your system choose the default GPU. This parameter is only useful if your system has access to several GPUs.

• ranges (6-uple of IntTensors, None by default) – Ranges of integers that specify a block-sparse reduction scheme as detailed in the documentation of the Genred module. If None (default), we simply use a dense kernel matrix as we loop over all indices $$i\in[0,M)$$ and $$j\in[0,N)$$.
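The `axis` convention can be sketched with a dense NumPy array, assuming one batch dimension (so that nbatchdims = 1, and the supported axis values are 1 and 2):

```python
import numpy as np

# Dense sketch of the `axis` convention of reduction(): with one batch
# dimension, axis = 1 reduces over the i indices and axis = 2 over j.
B, M, N = 2, 10, 20
S_ij = np.random.default_rng(0).standard_normal((B, M, N))

over_i = S_ij.sum(axis=1)  # (B, N): sum reduction over the i indices
over_j = S_ij.sum(axis=2)  # (B, M): sum reduction over the j indices
print(over_i.shape, over_j.shape)
```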

solve(other, var=None, call=True, **kwargs)[source]

Solves a positive definite linear system of the form sum(self) = other or sum(self*var) = other, using a conjugate gradient solver.

Keyword Arguments
• var (LazyTensor) – If var is None, solve will return the solution of the self * var = other equation. Otherwise, if var is a KeOps symbolic variable, solve will assume that self defines an expression that is linear with respect to var and solve the equation self(var) = other with respect to var.

• alpha (float, default=1e-10) – Non-negative ridge regularization parameter.

• call (bool) – If True and if no other symbolic variable than var is contained in self, solve will return a tensor solution of our linear system. Otherwise solve will return a callable LazyTensor.

• backend (string) – Specifies the map-reduce scheme, as detailed in the documentation of the Genred module.

• device_id (int, default=-1) – Specifies the GPU that should be used to perform the computation; a negative value lets your system choose the default GPU. This parameter is only useful if your system has access to several GPUs.

• ranges (6-uple of IntTensors, None by default) – Ranges of integers that specify a block-sparse reduction scheme as detailed in the documentation of the Genred module. If None (default), we simply use a dense kernel matrix as we loop over all indices $$i\in[0,M)$$ and $$j\in[0,N)$$.

Warning

Please note that no check of symmetry and definiteness will be performed prior to our conjugate gradient descent.

__call__(*args, **kwargs)[source]

Executes a Genred or KernelSolve call on the input data, as specified by self.formula .

__str__()[source]

Returns a verbose string identifier.

dim()[source]

Just as in PyTorch, returns the number of dimensions of a LazyTensor.

__add__(other)[source]

x + y returns a LazyTensor that encodes, symbolically, the addition of x and y.

__radd__(other)[source]

x + y returns a LazyTensor that encodes, symbolically, the addition of x and y.

__sub__(other)[source]

Broadcasted subtraction operator - a binary operation.

x - y returns a LazyTensor that encodes, symbolically, the subtraction of x and y.

__rsub__(other)[source]

Broadcasted subtraction operator - a binary operation.

x - y returns a LazyTensor that encodes, symbolically, the subtraction of x and y.

__mul__(other)[source]

Broadcasted elementwise product - a binary operation.

x * y returns a LazyTensor that encodes, symbolically, the elementwise product of x and y.

__rmul__(other)[source]

Broadcasted elementwise product - a binary operation.

x * y returns a LazyTensor that encodes, symbolically, the elementwise product of x and y.

__truediv__(other)[source]

Broadcasted elementwise division - a binary operation.

x / y returns a LazyTensor that encodes, symbolically, the elementwise division of x by y.

__rtruediv__(other)[source]

Broadcasted elementwise division - a binary operation.

x / y returns a LazyTensor that encodes, symbolically, the elementwise division of x by y.

__or__(other)[source]

Euclidean scalar product - a binary operation.

(x|y) returns a LazyTensor that encodes, symbolically, the scalar product of x and y which are assumed to have the same shape.

__ror__(other)[source]

Euclidean scalar product - a binary operation.

(x|y) returns a LazyTensor that encodes, symbolically, the scalar product of x and y which are assumed to have the same shape.

__abs__()[source]

Element-wise absolute value - a unary operation.

abs(x) returns a LazyTensor that encodes, symbolically, the element-wise absolute value of x.

abs()[source]

Element-wise absolute value - a unary operation.

x.abs() returns a LazyTensor that encodes, symbolically, the element-wise absolute value of x.

__neg__()[source]

Element-wise minus - a unary operation.

-x returns a LazyTensor that encodes, symbolically, the element-wise opposite of x.

exp()[source]

Element-wise exponential - a unary operation.

x.exp() returns a LazyTensor that encodes, symbolically, the element-wise exponential of x.

log()[source]

Element-wise logarithm - a unary operation.

x.log() returns a LazyTensor that encodes, symbolically, the element-wise logarithm of x.

cos()[source]

Element-wise cosine - a unary operation.

x.cos() returns a LazyTensor that encodes, symbolically, the element-wise cosine of x.

sin()[source]

Element-wise sine - a unary operation.

x.sin() returns a LazyTensor that encodes, symbolically, the element-wise sine of x.

sqrt()[source]

Element-wise square root - a unary operation.

x.sqrt() returns a LazyTensor that encodes, symbolically, the element-wise square root of x.

rsqrt()[source]

Element-wise inverse square root - a unary operation.

x.rsqrt() returns a LazyTensor that encodes, symbolically, the element-wise inverse square root of x.

__pow__(other)[source]

Broadcasted element-wise power operator - a binary operation.

x**y returns a LazyTensor that encodes, symbolically, the element-wise value of x to the power y.

Note

• if y = 2, x**y relies on the "Square" KeOps operation;

• if y = 0.5, x**y relies on the "Sqrt" KeOps operation;

• if y = -0.5, x**y relies on the "Rsqrt" KeOps operation.

power(other)[source]

Broadcasted element-wise power operator - a binary operation.

pow(x,y) is equivalent to x**y.

square()[source]

Element-wise square - a unary operation.

x.square() is equivalent to x**2 and returns a LazyTensor that encodes, symbolically, the element-wise square of x.

sign()[source]

Element-wise sign - a unary operation.

x.sign() returns a LazyTensor that encodes, symbolically, the element-wise sign of x.

step()[source]

Element-wise step function - a unary operation.

x.step() returns a LazyTensor that encodes, symbolically, the element-wise step function of x (0 if x < 0, 1 if x >= 0).

relu()[source]

Element-wise ReLU function - a unary operation.

x.relu() returns a LazyTensor that encodes, symbolically, the element-wise positive part of x.

sqnorm2()[source]

Squared Euclidean norm - a unary operation.

x.sqnorm2() returns a LazyTensor that encodes, symbolically, the squared Euclidean norm of a vector x.

norm2()[source]

Euclidean norm - a unary operation.

x.norm2() returns a LazyTensor that encodes, symbolically, the Euclidean norm of a vector x.

norm(dim)[source]

Euclidean norm - a unary operation.

x.norm(-1) is equivalent to x.norm2() and returns a LazyTensor that encodes, symbolically, the Euclidean norm of a vector x.

normalize()[source]

Vector normalization - a unary operation.

x.normalize() returns a LazyTensor that encodes, symbolically, a vector x divided by its Euclidean norm.

sqdist(other)[source]

Squared distance - a binary operation.

x.sqdist(y) returns a LazyTensor that encodes, symbolically, the squared Euclidean distance between two vectors x and y.
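A dense NumPy sketch of the quantity that x.sqdist(y) encodes, together with the standard quadratic expansion as a sanity check:

```python
import numpy as np

# Dense sketch of sqdist: the (M, N) matrix of squared Euclidean
# distances between two small point clouds.
rng = np.random.default_rng(0)
x, y = rng.standard_normal((4, 3)), rng.standard_normal((5, 3))

D_ij = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # (4, 5)

# Equivalent expansion |x|^2 - 2<x,y> + |y|^2, as a sanity check:
D2 = (x**2).sum(-1)[:, None] - 2 * x @ y.T + (y**2).sum(-1)[None, :]
print(np.allclose(D_ij, D2))  # True
```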

weightedsqnorm(other)[source]

Weighted squared norm - a binary operation.

LazyTensor.weightedsqnorm(s, x) returns a LazyTensor that encodes, symbolically, the weighted squared norm of a vector x - see the main reference page for details.

weightedsqdist(f, g)[source]

Weighted squared distance.

LazyTensor.weightedsqdist(s, x, y) is equivalent to LazyTensor.weightedsqnorm(s, x - y).

elem(i)[source]

Indexing of a vector - a unary operation.

x.elem(i) returns a LazyTensor that encodes, symbolically, the i-th element x[i] of the vector x.

extract(i, d)[source]

Range indexing - a unary operation.

x.extract(i, d) returns a LazyTensor that encodes, symbolically, the sub-vector x[i:i+d] of the vector x.

__getitem__(key)[source]

Element or range indexing - a unary operation.

x[key] redirects to the elem() or extract() methods, depending on the key argument. Supported values are:

• an integer k, in which case x[key] redirects to elem(x,k),

• a tuple ...,:,:,k with k an integer, which is equivalent to the case above,

• a slice of the form k:l, k: or :l, with k and l two integers, in which case x[key] redirects to extract(x,k,l-k),

• a tuple of slices of the form ...,:,:,k:l, ...,:,:,k: or ...,:,:,:l, with k and l two integers, which are equivalent to the case above.
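Plain NumPy indexing on the last dimension mirrors these rules, which may help fix intuitions (this is ordinary array indexing, not a KeOps call):

```python
import numpy as np

# NumPy analogue of the elem()/extract() rules: both x[..., k] and
# x[..., k:l] act on the last (vector) dimension only.
x = np.arange(24.0).reshape(2, 3, 4)

elem = x[..., 1]   # k-th element of each vector, shape (2, 3)
sub = x[..., 1:3]  # sub-vector x[k:l] of each vector, shape (2, 3, 2)
print(elem.shape, sub.shape)
```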

one_hot(D)[source]

Encodes a (rounded) scalar value as a one-hot vector of dimension D.

x.one_hot(D) returns a LazyTensor that encodes, symbolically, a vector of length D whose round(x)-th coordinate is equal to 1, and the other ones to zero.

concat(other)[source]

Concatenation of two LazyTensor - a binary operation.

x.concat(y) returns a LazyTensor that encodes, symbolically, the concatenation of x and y along their last dimension.

concatenate(axis=-1)[source]

Concatenation of a tuple of LazyTensor.

LazyTensor.concatenate( (x_1, x_2, ..., x_n), -1) returns a LazyTensor that encodes, symbolically, the concatenation of x_1, x_2, …, x_n along their last dimension. Note that axis should be equal to -1 or 2 (if the x_i’s are 3D LazyTensor): LazyTensors only support concatenation and indexing operations with respect to the last dimension.

cat(dim)[source]

Concatenation of a tuple of LazyTensors.

LazyTensor.cat( (x_1, x_2, ..., x_n), -1) is a PyTorch-friendly alias for LazyTensor.concatenate( (x_1, x_2, ..., x_n), -1); just like indexing operations, it is only supported along the last dimension.

matvecmult(other)[source]

Matrix-vector product - a binary operation.

If x._shape[-1] == A*B and y._shape[-1] == B, z = x.matvecmult(y) returns a LazyTensor such that z._shape[-1] == A which encodes, symbolically, the matrix-vector product of x and y along their last dimension. For details, please check the documentation of the KeOps operation "MatVecMult" in the main reference page.

vecmatmult(other)[source]

Vector-matrix product - a binary operation.

If x._shape[-1] == A and y._shape[-1] == A*B, z = x.vecmatmult(y) returns a LazyTensor such that z._shape[-1] == B which encodes, symbolically, the vector-matrix product of x and y along their last dimension. For details, please check the documentation of the KeOps operation "VecMatMult" in the main reference page.

tensorprod(other)[source]

Tensor product of vectors - a binary operation.

If x._shape[-1] == A and y._shape[-1] == B, z = x.tensorprod(y) returns a LazyTensor such that z._shape[-1] == A*B which encodes, symbolically, the tensor product of x and y along their last dimension. For details, please check the documentation of the KeOps operation "TensorProd" in the main reference page.
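The dimension bookkeeping of the tensor product can be sketched with a flattened outer product in NumPy; the row-major ordering used below is an assumption of this sketch, so please refer to the "TensorProd" entry of the main reference page for the exact layout:

```python
import numpy as np

# Dense sketch of tensorprod: if x has dimension A and y dimension B,
# the result is a flattened outer product of dimension A * B.
x = np.array([1.0, 2.0])          # A = 2
y = np.array([10.0, 20.0, 30.0])  # B = 3

z = np.outer(x, y).reshape(-1)    # shape (6,): x[0]*y, then x[1]*y
print(z)  # [10. 20. 30. 20. 40. 60.]
```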

keops_tensordot(other, dimfa, dimfb, contfa, contfb, *args)[source]

Tensor dot product (on KeOps internal dimensions) - a binary operation.

Parameters
• other – a LazyTensor

• dimfa – tuple of int

• dimfb – tuple of int

• contfa – tuple of int listing contraction dimension of a (could be empty)

• contfb – tuple of int listing contraction dimension of b (could be empty)

• args – a tuple of int containing the graph of a permutation of the output

grad(other, gradin)[source]

z = x.grad(v,e) returns a LazyTensor which encodes, symbolically, the gradient (more precisely, the adjoint of the differential operator) of x, with respect to variable v, and applied to e. For details, please check the documentation of the KeOps operation "Grad" in the main reference page.

__weakref__[source]

list of weak references to the object (if defined)

sum(axis=-1, dim=None, **kwargs)[source]

Summation unary operation, or Sum reduction.

sum(axis, dim, **kwargs) will:

• if axis or dim = 0, return the sum reduction of self over the “i” indexes.

• if axis or dim = 1, return the sum reduction of self over the “j” indexes.

• if axis or dim = 2, return a new LazyTensor object representing the sum of the values of the vector self.

Keyword Arguments
• axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over $$i$$), 1 (= reduction over $$j$$) or 2 (i.e. -1, sum along the dimension of the vector variable).

• dim (integer) – alternative keyword for the axis parameter.

• **kwargs – optional parameters that are passed to the reduction() method.

sum_reduction(axis=None, dim=None, **kwargs)[source]

Sum reduction.

sum_reduction(axis, dim, **kwargs) will return the sum reduction of self.

Keyword Arguments
• axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over $$i$$), or 1 (= reduction over $$j$$).

• dim (integer) – alternative keyword for the axis parameter.

• **kwargs – optional parameters that are passed to the reduction() method.

logsumexp(axis=None, dim=None, weight=None, **kwargs)[source]

Log-Sum-Exp reduction.

logsumexp(axis, dim, weight, **kwargs) will:

• if axis or dim = 0, return the “log-sum-exp” reduction of self over the “i” indexes.

• if axis or dim = 1, return the “log-sum-exp” reduction of self over the “j” indexes.

For details, please check the documentation of the KeOps reductions LogSumExp and LogSumExpWeight in the main reference page.

Keyword Arguments
• axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over $$i$$), or 1 (= reduction over $$j$$).

• dim (integer) – alternative keyword for the axis parameter.

• weight (LazyTensor) – optional object that specifies scalar or vector-valued weights in the log-sum-exp operation

• **kwargs – optional parameters that are passed to the reduction() method.
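The reduction computed here is the numerically stable log-sum-exp, which may be sketched densely in NumPy with the usual max-shift (KeOps applies the same stabilization online, without storing the (M, N) matrix):

```python
import numpy as np

# Dense sketch of the log-sum-exp reduction over the j indices,
# with the classical max-shift for numerical stability.
rng = np.random.default_rng(0)
S_ij = rng.standard_normal((100, 200))  # (M, N) formula values

m_i = S_ij.max(axis=1, keepdims=True)                              # (M, 1)
lse_i = (m_i + np.log(np.exp(S_ij - m_i).sum(axis=1, keepdims=True)))[:, 0]

# Sanity check against the naive (overflow-prone) formula:
print(np.allclose(lse_i, np.log(np.exp(S_ij).sum(axis=1))))  # True
```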

logsumexp_reduction(**kwargs)[source]

Log-Sum-Exp reduction. Redirects to logsumexp() method.

sumsoftmaxweight(weight, axis=None, dim=None, **kwargs)[source]

Sum of weighted Soft-Max reduction.

sumsoftmaxweight(weight, axis, dim, **kwargs) will:

• if axis or dim = 0, return the “sum of weighted Soft-Max” reduction of self over the “i” indexes.

• if axis or dim = 1, return the “sum of weighted Soft-Max” reduction of self over the “j” indexes.

For details, please check the documentation of the KeOps reduction SumSoftMaxWeight in the main reference page.

Keyword Arguments
• weight (LazyTensor) – object that specifies scalar or vector-valued weights.

• axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over $$i$$), or 1 (= reduction over $$j$$).

• dim (integer) – alternative keyword for the axis parameter.

• **kwargs – optional parameters that are passed to the reduction() method.

sumsoftmaxweight_reduction(**kwargs)[source]

Sum of weighted Soft-Max reduction. Redirects to sumsoftmaxweight() method.

min(axis=None, dim=None, **kwargs)[source]

Min reduction.

min(axis, dim, **kwargs) will:

• if axis or dim = 0, return the minimal values of self over the “i” indexes.

• if axis or dim = 1, return the minimal values of self over the “j” indexes.

Keyword Arguments
• axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over $$i$$), or 1 (= reduction over $$j$$).

• dim (integer) – alternative keyword for the axis parameter.

• **kwargs – optional parameters that are passed to the reduction() method.

min_reduction(**kwargs)[source]

Min reduction. Redirects to min() method.

__min__(**kwargs)[source]

Min reduction. Redirects to min() method.

argmin(axis=None, dim=None, **kwargs)[source]

ArgMin reduction.

argmin(axis, dim, **kwargs) will:

• if axis or dim = 0, return the indices of minimal values of self over the “i” indexes.

• if axis or dim = 1, return the indices of minimal values of self over the “j” indexes.

Keyword Arguments
• axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over $$i$$), or 1 (= reduction over $$j$$).

• dim (integer) – alternative keyword for the axis parameter.

• **kwargs – optional parameters that are passed to the reduction() method.

argmin_reduction(**kwargs)[source]

ArgMin reduction. Redirects to argmin() method.

min_argmin(axis=None, dim=None, **kwargs)[source]

Min-ArgMin reduction.

min_argmin(axis, dim, **kwargs) will:

• if axis or dim = 0, return the minimal values of self and their indices over the “i” indexes.

• if axis or dim = 1, return the minimal values of self and their indices over the “j” indexes.

Keyword Arguments
• axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over $$i$$), or 1 (= reduction over $$j$$).

• dim (integer) – alternative keyword for the axis parameter.

• **kwargs – optional parameters that are passed to the reduction() method.

min_argmin_reduction(**kwargs)[source]

Min-ArgMin reduction. Redirects to min_argmin() method.

max(axis=None, dim=None, **kwargs)[source]

Max reduction.

max(axis, dim, **kwargs) will:

• if axis or dim = 0, return the maximal values of self over the “i” indexes.

• if axis or dim = 1, return the maximal values of self over the “j” indexes.

Keyword Arguments
• axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over $$i$$), or 1 (= reduction over $$j$$).

• dim (integer) – alternative keyword for the axis parameter.

• **kwargs – optional parameters that are passed to the reduction() method.

max_reduction(**kwargs)[source]

Max reduction. Redirects to max() method.

__max__(**kwargs)[source]

Max reduction. Redirects to max() method.

argmax(axis=None, dim=None, **kwargs)[source]

ArgMax reduction.

argmax(axis, dim, **kwargs) will:

• if axis or dim = 0, return the indices of maximal values of self over the “i” indexes.

• if axis or dim = 1, return the indices of maximal values of self over the “j” indexes.

Keyword Arguments
• axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over $$i$$), or 1 (= reduction over $$j$$).

• dim (integer) – alternative keyword for the axis parameter.

• **kwargs – optional parameters that are passed to the reduction() method.

argmax_reduction(**kwargs)[source]

ArgMax reduction. Redirects to argmax() method.

max_argmax(axis=None, dim=None, **kwargs)[source]

Max-ArgMax reduction.

max_argmax(axis, dim, **kwargs) will:

• if axis or dim = 0, return the maximal values of self and their indices over the “i” indexes.

• if axis or dim = 1, return the maximal values of self and their indices over the “j” indexes.

Keyword Arguments
• axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over $$i$$), or 1 (= reduction over $$j$$).

• dim (integer) – alternative keyword for the axis parameter.

• **kwargs – optional parameters that are passed to the reduction() method.

max_argmax_reduction(**kwargs)[source]

Max-ArgMax reduction. Redirects to max_argmax() method.

Kmin(K, axis=None, dim=None, **kwargs)[source]

K-Min reduction.

Kmin(K, axis, dim, **kwargs) will:

• if axis or dim = 0, return the K minimal values of self over the “i” indexes.

• if axis or dim = 1, return the K minimal values of self over the “j” indexes.

Keyword Arguments
• K (integer) – number of minimal values required

• axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over $$i$$), or 1 (= reduction over $$j$$).

• dim (integer) – alternative keyword for the axis parameter.

• **kwargs – optional parameters that are passed to the reduction() method.

Kmin_reduction(**kwargs)[source]

Kmin reduction. Redirects to Kmin() method.

argKmin(K, axis=None, dim=None, **kwargs)[source]

argKmin reduction.

argKmin(K, axis, dim, **kwargs) will:

• if axis or dim = 0, return the indices of the K minimal values of self over the “i” indexes.

• if axis or dim = 1, return the indices of the K minimal values of self over the “j” indexes.

Keyword Arguments
• K (integer) – number of minimal values required

• axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over $$i$$), or 1 (= reduction over $$j$$).

• dim (integer) – alternative keyword for the axis parameter.

• **kwargs – optional parameters that are passed to the reduction() method.
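argKmin is the workhorse of K-nearest-neighbor queries. A dense NumPy sketch of the same reduction (KeOps produces identical indices without building the (M, N) distance matrix):

```python
import numpy as np

# Dense sketch of argKmin over the j indices: for each point x_i, the
# indices of its K nearest neighbors in the point cloud y.
rng = np.random.default_rng(0)
x, y = rng.standard_normal((1000, 3)), rng.standard_normal((2000, 3))
K = 5

D_ij = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # (1000, 2000)
ind = np.argsort(D_ij, axis=1)[:, :K]                  # (1000, K) neighbor indices

print(ind.shape)  # (1000, 5)
```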

argKmin_reduction(**kwargs)[source]

argKmin reduction. Redirects to argKmin() method.

Kmin_argKmin(K, axis=None, dim=None, **kwargs)[source]

K-Min-argK-min reduction.

Kmin_argKmin(K, axis, dim, **kwargs) will:

• if axis or dim = 0, return the K minimal values of self and their indices over the “i” indexes.

• if axis or dim = 1, return the K minimal values of self and their indices over the “j” indexes.

Keyword Arguments
• K (integer) – number of minimal values required

• axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over $$i$$), or 1 (= reduction over $$j$$).

• dim (integer) – alternative keyword for the axis parameter.

• **kwargs – optional parameters that are passed to the reduction() method.

Kmin_argKmin_reduction(**kwargs)[source]

Kmin_argKmin reduction. Redirects to Kmin_argKmin() method.

__matmul__(v)[source]

Matrix-vector or Matrix-matrix product, supporting batch dimensions.

If K is a LazyTensor whose trailing dimension K._shape[-1] is equal to 1, we can understand it as a linear operator and apply it to arbitrary NumPy arrays or PyTorch Tensors. Assuming that v is a 1D (resp. ND) tensor such that K.shape[-1] == v.shape[-1] (resp. v.shape[-2]), K @ v denotes the matrix-vector (resp. matrix-matrix) product between the two objects, encoded as a vanilla NumPy or PyTorch 1D (resp. ND) tensor.

Example

>>> x, y = torch.randn(1000, 3), torch.randn(2000, 3)
>>> x_i, y_j = LazyTensor( x[:,None,:] ), LazyTensor( y[None,:,:] )
>>> K = (- ((x_i - y_j)**2).sum(2) ).exp()  # Symbolic (1000,2000,1) Gaussian kernel matrix
>>> v = torch.rand(2000, 2)
>>> print( (K @ v).shape )
... torch.Size([1000, 2])

t()[source]

Matrix transposition, permuting the axes of $$i$$- and $$j$$-variables.

For instance, if K is a LazyTensor of shape (B,M,N,D), K.t() returns a symbolic copy of K whose axes 1 and 2 have been switched with each other: K.t().shape == (B,N,M,D).

Example

>>> x, y = torch.randn(1000, 3), torch.randn(2000, 3)
>>> x_i, y_j = LazyTensor( x[:,None,:] ), LazyTensor( y[None,:,:] )
>>> K  = (- ((    x_i     -      y_j   )**2).sum(2) ).exp()  # Symbolic (1000,2000) Gaussian kernel matrix
>>> K_ = (- ((x[:,None,:] - y[None,:,:])**2).sum(2) ).exp()  # Explicit (1000,2000) Gaussian kernel matrix
>>> w  = torch.rand(1000, 2)
>>> print( (K.t() @ w - K_.t() @ w).abs().mean() )
... tensor(1.7185e-05)

property T[source]

Numpy-friendly alias for the matrix transpose self.t().

matvec(v)[source]

Alias for the matrix-vector product, added for compatibility with scipy.sparse.linalg.

If K is a LazyTensor whose trailing dimension K._shape[-1] is equal to 1, we can understand it as a linear operator and wrap it into a scipy.sparse.linalg.LinearOperator object, thus getting access to robust solvers and spectral routines.

Example

>>> import numpy as np
>>> x = np.random.randn(1000,3)
>>> x_i, x_j = LazyTensor( x[:,None,:] ), LazyTensor( x[None,:,:] )
>>> K_xx = (- ((x_i - x_j)**2).sum(2) ).exp()  # Symbolic (1000,1000) Gaussian kernel matrix
>>> from scipy.sparse.linalg import eigsh, aslinearoperator
>>> eigenvalues, eigenvectors = eigsh( aslinearoperator( K_xx ), k=5 )
>>> print(eigenvalues)
... [ 35.5074527   59.01096445  61.35075268  69.34038814 123.77540277]
>>> print( eigenvectors.shape)
... (1000, 5)

rmatvec()[source]

Alias for the transposed matrix-vector product, added for compatibility with scipy.sparse.linalg.

See matvec() for further reference.