LazyTensor
Summary
This section contains the full API documentation of the LazyTensor wrapper, which works identically on NumPy arrays and PyTorch tensors.

Syntax

pykeops.torch.Vi(x_or_ind, dim=None)
Simple wrapper that returns an instantiation of LazyTensor of type 0 (a variable indexed by \(i\)).

pykeops.torch.Vj(x_or_ind, dim=None)
Simple wrapper that returns an instantiation of LazyTensor of type 1 (a variable indexed by \(j\)).

pykeops.torch.Pm(x_or_ind, dim=None)
Simple wrapper that returns an instantiation of LazyTensor of type 2 (a parameter).

class pykeops.torch.LazyTensor(x=None, axis=None)
Symbolic wrapper for NumPy arrays and PyTorch tensors.

LazyTensor objects encode numerical arrays as the combination of a symbolic, mathematical formula and a list of small data arrays. They can be used to implement efficient algorithms on objects that are easy to define but impossible to store in memory (e.g. the matrix of pairwise distances between two large point clouds). LazyTensor objects may be created from standard NumPy arrays or PyTorch tensors, combined using simple mathematical operations, and converted back to NumPy arrays or PyTorch tensors with efficient reduction routines, which can outperform standard tensorized implementations by two orders of magnitude.
__init__(x=None, axis=None)
Creates a KeOps symbolic variable.

Parameters:
x – May be either:
- A float, a list of floats, a NumPy float, a 0D or 1D NumPy array, or a 0D or 1D PyTorch tensor, in which case the LazyTensor represents a constant vector of parameters, to be broadcasted onto other LazyTensor objects.
- A 2D NumPy array or PyTorch tensor, in which case the LazyTensor represents a variable indexed by \(i\) if axis=0 or \(j\) if axis=1.
- A 3D+ NumPy array or PyTorch tensor with a dummy dimension (=1) at position -3 or -2, in which case the LazyTensor represents a variable indexed by \(i\) or \(j\), respectively. Dimensions before the last three are handled as batch dimensions, which may support operator broadcasting.
- A tuple of 3 integers (ind, dim, cat), in which case the LazyTensor represents a symbolic variable that should be instantiated at call-time.
- An integer, in which case the LazyTensor represents an integer constant handled efficiently at compilation time.
- None, for internal use.
axis (int) – should be equal to 0 or 1 if x is a 2D tensor, and None otherwise.
Warning
A LazyTensor constructed from a NumPy array or a PyTorch tensor retains its dtype (float32 vs float64) and device properties (whether it is stored on the GPU). Since KeOps does not support automatic type conversions and data transfers, please make sure not to mix LazyTensor objects that come from different frameworks/devices or that are stored with different precisions.
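These shape conventions can be pictured with plain NumPy broadcasting. The following sketch (dense NumPy only, no KeOps involved) materializes the (M, N, D) array of pairwise differences that a LazyTensor would merely represent symbolically:

```python
import numpy as np

# Dense illustration of the LazyTensor shape conventions:
# a dummy dimension of size 1 at position -3 marks an i-indexed variable,
# while a dummy dimension at position -2 marks a j-indexed variable.
M, N, D = 5, 7, 3
x = np.random.randn(M, D)  # "x_i" data
y = np.random.randn(N, D)  # "y_j" data

x_i = x[:, None, :]  # shape (M, 1, D): variable indexed by i
y_j = y[None, :, :]  # shape (1, N, D): variable indexed by j

# Broadcasting produces the full (M, N, D) array of differences; KeOps
# would keep this object symbolic instead of storing it in memory.
D_ij = x_i - y_j
print(D_ij.shape)  # (5, 7, 3)
```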

fixvariables()
If needed, assigns final labels to each variable and pads their batch dimensions prior to a Genred() call.

promote(other, props)
Creates a new LazyTensor whose None properties are set to those of self or other.

init()
Creates a copy of a LazyTensor, without its formula attribute.

join(other)
Merges the variables and attributes of two LazyTensor objects, with a compatibility check. This method concatenates tuples of variables, without paying attention to repetitions.

unary(operation, dimres=None, opt_arg=None, opt_arg2=None)
Symbolically applies operation to self, with optional arguments if needed. The optional argument dimres may be used to specify the dimension of the output result.

binary(other, operation, is_operator=False, dimres=None, dimcheck='sameor1', opt_arg=None, opt_pos='last')
Symbolically applies operation to self and other, with optional arguments if needed.

Keyword Arguments:
dimres – May be used to specify the dimension of the output result.
is_operator – May be used to specify whether operation is an operator like + or -, rather than a "genuine" function.
dimcheck – shall we check the input dimensions? Supported values are "same", "sameor1", or None.

reduction(reduction_op, other=None, opt_arg=None, axis=None, dim=None, call=True, **kwargs)
Applies a reduction to a LazyTensor. This method is used internally by the LazyTensor class.

Parameters:
reduction_op (string) – the string identifier of the reduction, which will be passed to the KeOps routines.

Keyword Arguments:
other – May be used to specify some weights; depends on the reduction.
opt_arg – typically, some integer needed by ArgKMin reductions; depends on the reduction.
axis (integer) – The axis with respect to which the reduction should be performed. Supported values are nbatchdims and nbatchdims + 1, where nbatchdims is the number of "batch" dimensions before the last three (\(i\) indices, \(j\) indices, variables' dimensions).
dim (integer) – alternative keyword for the axis argument.
call (True or False) – Should we actually perform the reduction on the current variables? If True, the returned object will be a NumPy array or a PyTorch tensor. Otherwise, we simply return a callable LazyTensor that may be used as a pykeops.numpy.Genred or pykeops.torch.Genred function on arbitrary tensor data.
backend (string) – Specifies the map-reduce scheme, as detailed in the documentation of the Genred module.
device_id (int, default=-1) – Specifies the GPU that should be used to perform the computation; a negative value lets your system choose the default GPU. This parameter is only useful if your system has access to several GPUs.
ranges (6-uple of IntTensors, None by default) – Ranges of integers that specify a block-sparse reduction scheme, as detailed in the documentation of the Genred module. If None (default), we simply use a dense kernel matrix as we loop over all indices \(i\in[0,M)\) and \(j\in[0,N)\).
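The axis convention can be checked on a dense NumPy array (a sketch of the semantics, not a KeOps call): with one batch dimension, nbatchdims = 1, so reductions over \(i\) and \(j\) correspond to axis = 1 and axis = 2.

```python
import numpy as np

# Dense sketch: a "formula" with one batch dimension, M i-indices,
# N j-indices and a trailing vector dimension of size 1.
B, M, N = 2, 4, 6
S_ij = np.random.randn(B, M, N, 1)

over_i = S_ij.sum(axis=1)  # reduction over i: one value per (batch, j)
over_j = S_ij.sum(axis=2)  # reduction over j: one value per (batch, i)
print(over_i.shape, over_j.shape)  # (2, 6, 1) (2, 4, 1)
```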

solve(other, var=None, call=True, **kwargs)
Solves a positive definite linear system of the form sum(self) = other or sum(self*var) = other, using a conjugate gradient solver.

Parameters:
self (LazyTensor) – KeOps variable that encodes a symmetric positive definite matrix / linear operator.
other (LazyTensor) – KeOps variable that encodes the second member of the equation.

Keyword Arguments:
var (LazyTensor) – If var is None, solve will return the solution of the self * var = other equation. Otherwise, if var is a KeOps symbolic variable, solve will assume that self defines an expression that is linear with respect to var and solve the equation self(var) = other with respect to var.
alpha (float, default=1e-10) – Non-negative ridge regularization parameter.
call (bool) – If True and if no other symbolic variable than var is contained in self, solve will return a tensor solution of our linear system. Otherwise, solve will return a callable LazyTensor.
backend (string) – Specifies the map-reduce scheme, as detailed in the documentation of the Genred module.
device_id (int, default=-1) – Specifies the GPU that should be used to perform the computation; a negative value lets your system choose the default GPU. This parameter is only useful if your system has access to several GPUs.
ranges (6-uple of IntTensors, None by default) – Ranges of integers that specify a block-sparse reduction scheme, as detailed in the documentation of the Genred module. If None (default), we simply use a dense kernel matrix as we loop over all indices \(i\in[0,M)\) and \(j\in[0,N)\).
Warning
Please note that no check of symmetry or definiteness will be performed prior to the conjugate gradient descent.
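As a dense sketch of what solve() computes, the following NumPy code builds a Gaussian kernel matrix explicitly and solves the ridge system (K + alpha * Id) a = b with a direct solver; the symbolic counterpart would read roughly a = K_xx.solve(b, alpha=alpha), using conjugate gradients instead. A larger alpha than the 1e-10 default is used here for numerical comfort.

```python
import numpy as np

# Dense sketch of a kernel linear solve, (K + alpha * Id) a = b.
np.random.seed(0)
alpha = 0.1
x = np.random.randn(100, 3)
b = np.random.randn(100, 1)

# Explicit (100, 100) Gaussian kernel matrix: symmetric positive definite.
K = np.exp(-((x[:, None, :] - x[None, :, :]) ** 2).sum(-1))
a = np.linalg.solve(K + alpha * np.eye(100), b)

# The solution satisfies the regularized system up to round-off error.
residual = np.abs(K @ a + alpha * a - b).max()
print(a.shape, residual < 1e-8)  # (100, 1) True
```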

__call__(*args, **kwargs)
Executes a Genred or KernelSolve call on the input data, as specified by self.formula.

dim()
Just as in PyTorch, returns the number of dimensions of a LazyTensor.

__add__(other)
Broadcasted addition operator - a binary operation. x + y returns a LazyTensor that encodes, symbolically, the addition of x and y.

__radd__(other)
Broadcasted addition operator - a binary operation. x + y returns a LazyTensor that encodes, symbolically, the addition of x and y.

__sub__(other)
Broadcasted subtraction operator - a binary operation. x - y returns a LazyTensor that encodes, symbolically, the subtraction of x and y.

__rsub__(other)
Broadcasted subtraction operator - a binary operation. x - y returns a LazyTensor that encodes, symbolically, the subtraction of x and y.

__mul__(other)
Broadcasted elementwise product - a binary operation. x * y returns a LazyTensor that encodes, symbolically, the elementwise product of x and y.

__rmul__(other)
Broadcasted elementwise product - a binary operation. x * y returns a LazyTensor that encodes, symbolically, the elementwise product of x and y.

__truediv__(other)
Broadcasted elementwise division - a binary operation. x / y returns a LazyTensor that encodes, symbolically, the elementwise division of x by y.

__rtruediv__(other)
Broadcasted elementwise division - a binary operation. x / y returns a LazyTensor that encodes, symbolically, the elementwise division of x by y.

__or__(other)
Euclidean scalar product - a binary operation. (x | y) returns a LazyTensor that encodes, symbolically, the scalar product of x and y, which are assumed to have the same shape.

__ror__(other)
Euclidean scalar product - a binary operation. (x | y) returns a LazyTensor that encodes, symbolically, the scalar product of x and y, which are assumed to have the same shape.

__abs__()
Elementwise absolute value - a unary operation. abs(x) returns a LazyTensor that encodes, symbolically, the elementwise absolute value of x.

abs()
Elementwise absolute value - a unary operation. x.abs() returns a LazyTensor that encodes, symbolically, the elementwise absolute value of x.

__neg__()
Elementwise minus - a unary operation. -x returns a LazyTensor that encodes, symbolically, the elementwise opposite of x.

exp()
Elementwise exponential - a unary operation. x.exp() returns a LazyTensor that encodes, symbolically, the elementwise exponential of x.

log()
Elementwise logarithm - a unary operation. x.log() returns a LazyTensor that encodes, symbolically, the elementwise logarithm of x.

cos()
Elementwise cosine - a unary operation. x.cos() returns a LazyTensor that encodes, symbolically, the elementwise cosine of x.

sin()
Elementwise sine - a unary operation. x.sin() returns a LazyTensor that encodes, symbolically, the elementwise sine of x.

sqrt()
Elementwise square root - a unary operation. x.sqrt() returns a LazyTensor that encodes, symbolically, the elementwise square root of x.

rsqrt()
Elementwise inverse square root - a unary operation. x.rsqrt() returns a LazyTensor that encodes, symbolically, the elementwise inverse square root of x.

__pow__(other)
Broadcasted elementwise power operator - a binary operation. x**y returns a LazyTensor that encodes, symbolically, the elementwise value of x to the power y.

Note
- if y = 2, x**y relies on the "Square" KeOps operation;
- if y = 0.5, x**y uses the "Sqrt" KeOps operation;
- if y = -0.5, x**y uses the "Rsqrt" KeOps operation.

power(other)
Broadcasted elementwise power operator - a binary operation. pow(x, y) is equivalent to x**y.

square()
Elementwise square - a unary operation. x.square() is equivalent to x**2 and returns a LazyTensor that encodes, symbolically, the elementwise square of x.

sign()
Elementwise sign in {-1, 0, +1} - a unary operation. x.sign() returns a LazyTensor that encodes, symbolically, the elementwise sign of x.

step()
Elementwise step function - a unary operation. x.step() returns a LazyTensor that encodes, symbolically, the elementwise step function of x.

relu()
Elementwise ReLU function - a unary operation. x.relu() returns a LazyTensor that encodes, symbolically, the elementwise positive part of x.

sqnorm2()
Squared Euclidean norm - a unary operation. x.sqnorm2() returns a LazyTensor that encodes, symbolically, the squared Euclidean norm of a vector x.

norm2()
Euclidean norm - a unary operation. x.norm2() returns a LazyTensor that encodes, symbolically, the Euclidean norm of a vector x.

norm(dim)
Euclidean norm - a unary operation. x.norm(-1) is equivalent to x.norm2() and returns a LazyTensor that encodes, symbolically, the Euclidean norm of a vector x.

normalize()
Vector normalization - a unary operation. x.normalize() returns a LazyTensor that encodes, symbolically, a vector x divided by its Euclidean norm.

sqdist(other)
Squared distance - a binary operation. x.sqdist(y) returns a LazyTensor that encodes, symbolically, the squared Euclidean distance between two vectors x and y.
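A dense NumPy sketch of the quantity that x_i.sqdist(y_j) encodes symbolically, together with the Gaussian kernel that is typically built on top of it:

```python
import numpy as np

# Dense squared-distance matrix between two point clouds.
x = np.random.randn(5, 3)
y = np.random.randn(7, 3)

D2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # (5, 7): what sqdist encodes
K = np.exp(-D2)                                      # (5, 7) Gaussian kernel
print(D2.shape, K.shape)
```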

weightedsqnorm(other)
Weighted squared norm - a binary operation. LazyTensor.weightedsqnorm(s, x) returns a LazyTensor that encodes, symbolically, the weighted squared norm of a vector x - see the main reference page for details.

weightedsqdist(f, g)
Weighted squared distance - a binary operation. LazyTensor.weightedsqdist(s, x, y) is equivalent to LazyTensor.weightedsqnorm(s, x - y).

elem(i)
Indexing of a vector - a unary operation. x.elem(i) returns a LazyTensor that encodes, symbolically, the i-th element x[i] of the vector x.

extract(i, d)
Range indexing - a unary operation. x.extract(i, d) returns a LazyTensor that encodes, symbolically, the sub-vector x[i:i+d] of the vector x.

__getitem__(key)
Element or range indexing - a unary operation. x[key] redirects to the elem() or extract() methods, depending on the key argument. Supported values are:
- an integer k, in which case x[key] redirects to elem(x, k);
- a tuple ..,:,:,k with k an integer, which is equivalent to the case above;
- a slice of the form k:l, k: or :l, with k and l two integers, in which case x[key] redirects to extract(x, k, l-k);
- a tuple of slices of the form ..,:,:,k:l, ..,:,:,k: or ..,:,:,:l, with k and l two integers, which is equivalent to the case above.
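The last-dimension indexing rules above mirror plain NumPy slicing on the trailing axis; this dense sketch shows the correspondence with elem() and extract():

```python
import numpy as np

x = np.arange(24, dtype=float).reshape(2, 3, 4)

# x[..., k] corresponds to elem(k): the k-th entry of each length-4 vector.
elem_2 = x[..., 2]          # shape (2, 3)
# x[..., k:l] corresponds to extract(k, l - k): a sub-vector of length l - k.
extract_1_3 = x[..., 1:3]   # shape (2, 3, 2)
print(elem_2.shape, extract_1_3.shape)
```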

one_hot(D)
Encodes a (rounded) scalar value as a one-hot vector of dimension D. x.one_hot(D) returns a LazyTensor that encodes, symbolically, a vector of length D whose round(x)-th coordinate is equal to 1, the other ones being zero.
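A dense NumPy sketch of this encoding (the one_hot function below is a hypothetical dense helper, not the KeOps routine):

```python
import numpy as np

def one_hot(x, D):
    # Hypothetical dense helper: maps each (rounded) scalar of x to a
    # length-D indicator vector, mimicking what x.one_hot(D) encodes.
    out = np.zeros(x.shape + (D,))
    idx = np.rint(x).astype(int)
    np.put_along_axis(out, idx[..., None], 1.0, axis=-1)
    return out

v = np.array([0.2, 2.7, 1.1])
print(one_hot(v, 4))
# [[1. 0. 0. 0.]
#  [0. 0. 0. 1.]
#  [0. 1. 0. 0.]]
```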

concat(other)
Concatenation of two LazyTensor objects - a binary operation. x.concat(y) returns a LazyTensor that encodes, symbolically, the concatenation of x and y along their last dimension.

concatenate(axis=1)
Concatenation of a tuple of LazyTensor objects. LazyTensor.concatenate( (x_1, x_2, ..., x_n), 1) returns a LazyTensor that encodes, symbolically, the concatenation of x_1, x_2, ..., x_n along their last dimension. Note that axis should be equal to 1 or 2 (if the x_i's are 3D LazyTensor objects): LazyTensors only support concatenation and indexing operations with respect to the last dimension.

cat(dim)
Concatenation of a tuple of LazyTensors. LazyTensor.cat( (x_1, x_2, ..., x_n), 1) is a PyTorch-friendly alias for LazyTensor.concatenate( (x_1, x_2, ..., x_n), 1); just like indexing operations, it is only supported along the last dimension.

matvecmult(other)
Matrix-vector product - a binary operation. If x._shape[-1] == A*B and y._shape[-1] == B, z = x.matvecmult(y) returns a LazyTensor such that z._shape[-1] == A, which encodes, symbolically, the matrix-vector product of x and y along their last dimension. For details, please check the documentation of the KeOps operation "MatVecMult" in the main reference page.
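A dense NumPy sketch of the reshaping convention behind matvecmult: the length-A*B vector is read as an (A, B) matrix and applied to a length-B vector:

```python
import numpy as np

A, B = 3, 4
x = np.random.randn(A * B)  # encodes an (A, B) matrix, stored row by row
y = np.random.randn(B)

# What x.matvecmult(y) computes for each pair of indices (i, j):
z = x.reshape(A, B) @ y
print(z.shape)  # (3,)
```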

vecmatmult(other)
Vector-matrix product - a binary operation. If x._shape[-1] == A and y._shape[-1] == A*B, z = x.vecmatmult(y) returns a LazyTensor such that z._shape[-1] == B, which encodes, symbolically, the vector-matrix product of x and y along their last dimension. For details, please check the documentation of the KeOps operation "VecMatMult" in the main reference page.

tensorprod(other)
Tensor product of vectors - a binary operation. If x._shape[-1] == A and y._shape[-1] == B, z = x.tensorprod(y) returns a LazyTensor such that z._shape[-1] == A*B, which encodes, symbolically, the tensor product of x and y along their last dimension. For details, please check the documentation of the KeOps operation "TensorProd" in the main reference page.

keops_tensordot(other, dimfa, dimfb, contfa, contfb, *args)
Tensor dot product (on KeOps internal dimensions) - a binary operation.

Parameters:
other – a LazyTensor
dimfa – tuple of int
dimfb – tuple of int
contfa – tuple of int listing the contraction dimensions of a (may be empty)
contfb – tuple of int listing the contraction dimensions of b (may be empty)
args – a tuple of int containing the graph of a permutation of the output

grad(other, gradin)
Symbolic gradient operation. z = x.grad(v, e) returns a LazyTensor which encodes, symbolically, the gradient (more precisely, the adjoint of the differential operator) of x, with respect to variable v, and applied to e. For details, please check the documentation of the KeOps operation "Grad" in the main reference page.

sum(axis=-1, dim=None, **kwargs)
Summation unary operation, or Sum reduction. sum(axis, dim, **kwargs) will:
- if axis or dim = 0, return the sum reduction of self over the "i" indexes;
- if axis or dim = 1, return the sum reduction of self over the "j" indexes;
- if axis or dim = 2, return a new LazyTensor object representing the sum of the values of the vector self.

Keyword Arguments:
axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over \(i\)), 1 (= reduction over \(j\)) or 2 (i.e. -1, sum along the dimension of the vector variable).
dim (integer) – alternative keyword for the axis parameter.
**kwargs – optional parameters that are passed to the reduction() method.
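The three meanings of sum() can be checked on a dense (M, N, D) array (a NumPy sketch of the semantics, with no batch dimensions):

```python
import numpy as np

M, N, D = 4, 6, 3
S_ij = np.random.randn(M, N, D)

red_i = S_ij.sum(axis=0)  # axis=0: Sum reduction over the "i" indexes -> (N, D)
red_j = S_ij.sum(axis=1)  # axis=1: Sum reduction over the "j" indexes -> (M, D)
# axis=2: sum along the vector dimension; the result is still indexed by
# both i and j, with a vector dimension of 1 (it stays "symbolic" in KeOps).
inner = S_ij.sum(axis=2, keepdims=True)  # (M, N, 1)
print(red_i.shape, red_j.shape, inner.shape)
```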

sum_reduction(axis=None, dim=None, **kwargs)
Sum reduction. sum_reduction(axis, dim, **kwargs) will return the sum reduction of self.

Keyword Arguments:
axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over \(i\)), or 1 (= reduction over \(j\)).
dim (integer) – alternative keyword for the axis parameter.
**kwargs – optional parameters that are passed to the reduction() method.

logsumexp(axis=None, dim=None, weight=None, **kwargs)
Log-Sum-Exp reduction. logsumexp(axis, dim, weight, **kwargs) will:
- if axis or dim = 0, return the "log-sum-exp" reduction of self over the "i" indexes;
- if axis or dim = 1, return the "log-sum-exp" reduction of self over the "j" indexes.
For details, please check the documentation of the KeOps reductions LogSumExp and LogSumExpWeight in the main reference page.

Keyword Arguments:
axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over \(i\)), or 1 (= reduction over \(j\)).
dim (integer) – alternative keyword for the axis parameter.
weight (LazyTensor) – optional object that specifies scalar or vector-valued weights in the log-sum-exp operation.
**kwargs – optional parameters that are passed to the reduction() method.
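A dense NumPy sketch of the numerically stable log-sum-exp reduction over the \(j\) indexes that logsumexp(axis=1) computes:

```python
import numpy as np

M, N = 4, 6
S_ij = np.random.randn(M, N)

# Stable evaluation: subtract the row-wise maximum before exponentiating.
m = S_ij.max(axis=1, keepdims=True)
lse = m[:, 0] + np.log(np.exp(S_ij - m).sum(axis=1))

# On well-scaled inputs this matches the naive formula.
naive = np.log(np.exp(S_ij).sum(axis=1))
print(np.allclose(lse, naive))  # True
```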

logsumexp_reduction(**kwargs)
Log-Sum-Exp reduction. Redirects to the logsumexp() method.

sumsoftmaxweight(weight, axis=None, dim=None, **kwargs)
Sum of weighted SoftMax reduction. sumsoftmaxweight(weight, axis, dim, **kwargs) will:
- if axis or dim = 0, return the "sum of weighted SoftMax" reduction of self over the "i" indexes;
- if axis or dim = 1, return the "sum of weighted SoftMax" reduction of self over the "j" indexes.
For details, please check the documentation of the KeOps reduction SumSoftMaxWeight in the main reference page.

Keyword Arguments:
weight (LazyTensor) – object that specifies scalar or vector-valued weights.
axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over \(i\)), or 1 (= reduction over \(j\)).
dim (integer) – alternative keyword for the axis parameter.
**kwargs – optional parameters that are passed to the reduction() method.

sumsoftmaxweight_reduction(**kwargs)
Sum of weighted SoftMax reduction. Redirects to the sumsoftmaxweight() method.

min(axis=None, dim=None, **kwargs)
Min reduction. min(axis, dim, **kwargs) will:
- if axis or dim = 0, return the minimal values of self over the "i" indexes;
- if axis or dim = 1, return the minimal values of self over the "j" indexes.

Keyword Arguments:
axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over \(i\)), or 1 (= reduction over \(j\)).
dim (integer) – alternative keyword for the axis parameter.
**kwargs – optional parameters that are passed to the reduction() method.

argmin(axis=None, dim=None, **kwargs)
ArgMin reduction. argmin(axis, dim, **kwargs) will:
- if axis or dim = 0, return the indices of the minimal values of self over the "i" indexes;
- if axis or dim = 1, return the indices of the minimal values of self over the "j" indexes.

Keyword Arguments:
axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over \(i\)), or 1 (= reduction over \(j\)).
dim (integer) – alternative keyword for the axis parameter.
**kwargs – optional parameters that are passed to the reduction() method.

min_argmin(axis=None, dim=None, **kwargs)
Min-ArgMin reduction. min_argmin(axis, dim, **kwargs) will:
- if axis or dim = 0, return the minimal values of self over the "i" indexes, together with their indices;
- if axis or dim = 1, return the minimal values of self over the "j" indexes, together with their indices.

Keyword Arguments:
axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over \(i\)), or 1 (= reduction over \(j\)).
dim (integer) – alternative keyword for the axis parameter.
**kwargs – optional parameters that are passed to the reduction() method.

min_argmin_reduction(**kwargs)
Min-ArgMin reduction. Redirects to the min_argmin() method.

max(axis=None, dim=None, **kwargs)
Max reduction. max(axis, dim, **kwargs) will:
- if axis or dim = 0, return the maximal values of self over the "i" indexes;
- if axis or dim = 1, return the maximal values of self over the "j" indexes.

Keyword Arguments:
axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over \(i\)), or 1 (= reduction over \(j\)).
dim (integer) – alternative keyword for the axis parameter.
**kwargs – optional parameters that are passed to the reduction() method.

argmax(axis=None, dim=None, **kwargs)
ArgMax reduction. argmax(axis, dim, **kwargs) will:
- if axis or dim = 0, return the indices of the maximal values of self over the "i" indexes;
- if axis or dim = 1, return the indices of the maximal values of self over the "j" indexes.

Keyword Arguments:
axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over \(i\)), or 1 (= reduction over \(j\)).
dim (integer) – alternative keyword for the axis parameter.
**kwargs – optional parameters that are passed to the reduction() method.

max_argmax(axis=None, dim=None, **kwargs)
Max-ArgMax reduction. max_argmax(axis, dim, **kwargs) will:
- if axis or dim = 0, return the maximal values of self over the "i" indexes, together with their indices;
- if axis or dim = 1, return the maximal values of self over the "j" indexes, together with their indices.

Keyword Arguments:
axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over \(i\)), or 1 (= reduction over \(j\)).
dim (integer) – alternative keyword for the axis parameter.
**kwargs – optional parameters that are passed to the reduction() method.

max_argmax_reduction(**kwargs)
Max-ArgMax reduction. Redirects to the max_argmax() method.

Kmin(K, axis=None, dim=None, **kwargs)
K-Min reduction. Kmin(K, axis, dim, **kwargs) will:
- if axis or dim = 0, return the K minimal values of self over the "i" indexes;
- if axis or dim = 1, return the K minimal values of self over the "j" indexes.

Keyword Arguments:
K (integer) – number of minimal values required.
axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over \(i\)), or 1 (= reduction over \(j\)).
dim (integer) – alternative keyword for the axis parameter.
**kwargs – optional parameters that are passed to the reduction() method.

argKmin(K, axis=None, dim=None, **kwargs)
argKmin reduction. argKmin(K, axis, dim, **kwargs) will:
- if axis or dim = 0, return the indices of the K minimal values of self over the "i" indexes;
- if axis or dim = 1, return the indices of the K minimal values of self over the "j" indexes.

Keyword Arguments:
K (integer) – number of minimal values required.
axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over \(i\)), or 1 (= reduction over \(j\)).
dim (integer) – alternative keyword for the axis parameter.
**kwargs – optional parameters that are passed to the reduction() method.
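A dense NumPy sketch of the K-nearest-neighbors query that ((x_i - y_j)**2).sum(-1).argKmin(K, axis=1) performs symbolically:

```python
import numpy as np

K = 3
x = np.random.randn(50, 2)
y = np.random.randn(80, 2)

# Explicit (50, 80) matrix of squared distances, then the indices of the
# K smallest entries of each row: the K nearest neighbors of each x[i] in y.
D2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
knn = np.argsort(D2, axis=1)[:, :K]  # (50, 3), sorted by increasing distance
print(knn.shape)
```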

Kmin_argKmin(K, axis=None, dim=None, **kwargs)
K-Min-argK-min reduction. Kmin_argKmin(K, axis, dim, **kwargs) will:
- if axis or dim = 0, return the K minimal values of self over the "i" indexes, together with their indices;
- if axis or dim = 1, return the K minimal values of self over the "j" indexes, together with their indices.

Keyword Arguments:
K (integer) – number of minimal values required.
axis (integer) – reduction dimension, which should be equal to the number of batch dimensions plus 0 (= reduction over \(i\)), or 1 (= reduction over \(j\)).
dim (integer) – alternative keyword for the axis parameter.
**kwargs – optional parameters that are passed to the reduction() method.

Kmin_argKmin_reduction(**kwargs)
Kmin_argKmin reduction. Redirects to the Kmin_argKmin() method.

__matmul__(v)
Matrix-vector or matrix-matrix product, supporting batch dimensions. If K is a LazyTensor whose trailing dimension K._shape[-1] is equal to 1, we can understand it as a linear operator and apply it to arbitrary NumPy arrays or PyTorch tensors. Assuming that v is a 1D (resp. ND) tensor such that K.shape[-1] == v.shape[-1] (resp. v.shape[-2]), K @ v denotes the matrix-vector (resp. matrix-matrix) product between the two objects, encoded as a vanilla NumPy or PyTorch 1D (resp. ND) tensor.

Example
>>> x, y = torch.randn(1000, 3), torch.randn(2000, 3)
>>> x_i, y_j = LazyTensor( x[:,None,:] ), LazyTensor( y[None,:,:] )
>>> K = (- ((x_i - y_j)**2).sum(2) ).exp()  # Symbolic (1000,2000,1) Gaussian kernel matrix
>>> v = torch.rand(2000, 2)
>>> print( (K @ v).shape )
... torch.Size([1000, 2])

t()
Matrix transposition, permuting the axes of \(i\)- and \(j\)-variables. For instance, if K is a LazyTensor of shape (B,M,N,D), K.t() returns a symbolic copy of K whose axes 1 and 2 have been switched with each other: K.t().shape == (B,N,M,D).

Example
>>> x, y = torch.randn(1000, 3), torch.randn(2000, 3)
>>> x_i, y_j = LazyTensor( x[:,None,:] ), LazyTensor( y[None,:,:] )
>>> K  = (- ((x_i - y_j)**2).sum(2) ).exp()  # Symbolic (1000,2000) Gaussian kernel matrix
>>> K_ = (- ((x[:,None,:] - y[None,:,:])**2).sum(2) ).exp()  # Explicit (1000,2000) Gaussian kernel matrix
>>> w = torch.rand(1000, 2)
>>> print( (K.t() @ w - K_.t() @ w).abs().mean() )
... tensor(1.7185e-05)

matvec(v)
Alias for the matrix-vector product, added for compatibility with scipy.sparse.linalg. If K is a LazyTensor whose trailing dimension K._shape[-1] is equal to 1, we can understand it as a linear operator and wrap it into a scipy.sparse.linalg.LinearOperator object, thus getting access to robust solvers and spectral routines.

Example
>>> import numpy as np
>>> x = np.random.randn(1000, 3)
>>> x_i, x_j = LazyTensor( x[:,None,:] ), LazyTensor( x[None,:,:] )
>>> K_xx = (- ((x_i - x_j)**2).sum(2) ).exp()  # Symbolic (1000,1000) Gaussian kernel matrix
>>> from scipy.sparse.linalg import eigsh, aslinearoperator
>>> eigenvalues, eigenvectors = eigsh( aslinearoperator( K_xx ), k=5 )
>>> print(eigenvalues)
... [ 35.5074527   59.01096445  61.35075268  69.34038814 123.77540277]
>>> print( eigenvectors.shape )
... (1000, 5)
