Formulas and syntax
KeOps lets us define any reduction operation of the form

\(a_i = \text{Reduction}_j\big[F(x^1_{\iota_1},\dots,x^V_{\iota_V})\big], \qquad i=1,\dots,M,\)

or

\(b_j = \text{Reduction}_i\big[F(x^1_{\iota_1},\dots,x^V_{\iota_V})\big], \qquad j=1,\dots,N,\)

where \(F\) is a symbolic formula, the \(x^k_{\iota_k}\)'s are vector variables and \(\text{Reduction}\) is a Sum, LogSumExp or any other standard operation (see Reductions for the full list of supported reductions).
We now describe the symbolic syntax that can be used through all KeOps bindings.
Variables: category, index and dimension
At a low level, every variable \(x^k_{\iota_k}\) is specified by its category \(\iota_k\in\{i,j,\emptyset\}\) (meaning that the variable is indexed by \(i\), by \(j\), or is a fixed parameter across indices), its positional index \(k\) and its dimension \(d_k\).
In practice, the category \(\iota_k\) is given through a keyword:

keyword   meaning
Vi        variable indexed by \(i\)
Vj        variable indexed by \(j\)
Pm        parameter

followed by a \((k,d_k)\) or (index,dimension) pair of integers.
For instance, Vi(2,4)
specifies a variable indexed by \(i\), given as the third (\(k=2\)) input in the function call, and representing a vector of dimension \(d_k=4\).
N.B.: Using the same index k
for two variables with different dimensions or categories is not allowed and will be rejected by the compiler.
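To make these categories concrete, here is a plain NumPy sketch (not KeOps code) of the kind of map-reduce computation they describe: the parameter p, the \(i\)-indexed array x and the \(j\)-indexed array y play the roles of Pm, Vi and Vj variables, and the formula \(p\,\langle x_i, y_j\rangle\) is purely illustrative.

```python
import numpy as np

M, N, D = 5, 7, 3
rng = np.random.default_rng(0)

p = 2.0                          # Pm(0,1): a parameter, shared by every (i, j) pair
x = rng.standard_normal((M, D))  # Vi(1,3): one row of dimension 3 per index i
y = rng.standard_normal((N, D))  # Vj(2,3): one row of dimension 3 per index j

# Sum reduction over j of the formula F = p * <x_i, y_j>:
scalar_products = np.einsum("id,jd->ij", x, y)  # <x_i, y_j> for every pair (i, j)
out = (p * scalar_products).sum(axis=1)         # reduction over the j index

print(out.shape)  # one value per index i, i.e. (M,)
```

In a real KeOps call the same computation stays symbolic: the M-by-N matrix of pairwise values is never materialized in memory.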
Reserved words
keyword   meaning
Ind       indexes sequences

Ind is followed by a sequence \((i_0, i_1, \cdots)\) of integers. For instance, Ind(2,4,2,5,12) can be used as a parameter for some operations.
Math operators
To define formulas with KeOps, you can use simple arithmetics:

f*g     scalar-vector multiplication (if f or g is a scalar) or element-wise multiplication
f+g     addition of two vectors
f-g     difference between two vectors or minus sign
f/g     element-wise division (N.B.: f/g = f*Inv(g))
(f|g)   scalar product between vectors

Equal(f,g)           element-wise equal function (1 if f==g, 0 otherwise)
NotEqual(f,g)        element-wise not-equal function (1 if f!=g, 0 otherwise)
LessThan(f,g)        element-wise less-than function (1 if f<g, 0 otherwise)
GreaterThan(f,g)     element-wise greater-than function (1 if f>g, 0 otherwise)
LessOrEqual(f,g)     element-wise less-than-or-equal-to function (1 if f<=g, 0 otherwise)
GreaterOrEqual(f,g)  element-wise greater-than-or-equal-to function (1 if f>=g, 0 otherwise)
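The comparison operations return 0/1 values, which makes them useful as masks inside larger formulas. A NumPy sketch of this semantics (illustrative, not KeOps code):

```python
import numpy as np

f = np.array([-2.0, 0.0, 1.5])
g = np.array([ 1.0, 0.0, 1.0])

# The comparison operators return 0/1 masks; in NumPy terms:
equal     = (f == g).astype(f.dtype)  # Equal(f,g)
less_than = (f <  g).astype(f.dtype)  # LessThan(f,g)

# Masks combine with * and + to build piecewise formulas,
# e.g. the element-wise maximum of f and g:
elementwise_max = f * (f >= g) + g * (f < g)
```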
Elementary functions:

-f               element-wise opposite of f
Inv(f)           element-wise inverse (1/f)
Exp(f)           element-wise exponential function
Log(f)           element-wise natural logarithm
XLogX(f)         computes f*log(f) (with value 0 at 0)
Sin(f)           element-wise sine function
SinXDivX(f)      function sin(f)/f (with value 1 at 0)
Asin(f)          element-wise arcsine function
Cos(f)           element-wise cosine function
Acos(f)          element-wise arccosine function
Atan(f)          element-wise arctangent function
Atan2(f,g)       element-wise 2-argument arctangent function
Pow(f,m)         m-th power of f (m fixed integer)
Powf(f,g)        power operation, alias for Exp(g*Log(f))
Square(f)        element-wise square, faster than Pow(f,2)
Sqrt(f)          element-wise square root, faster than Powf(f,0.5)
Rsqrt(f)         element-wise inverse square root, faster than Powf(f,-0.5)
Abs(f)           element-wise absolute value
Sign(f)          element-wise sign function (-1 if f<0, 0 if f=0, +1 if f>0)
Step(f)          element-wise step function (0 if f<0, 1 if f>=0)
ReLU(f)          element-wise ReLU function (0 if f<0, f if f>=0)
Clamp(f,a,b)     element-wise Clamp function (a if f<a, f if a<=f<=b, b if f>b)
ClampInt(f,a,b)  element-wise Clamp function, with a and b fixed integers
IfElse(f,g,h)    element-wise IfElse function (g if f>=0, h if f<0)
Mod(f,n,d)       element-wise Modulo function, with offset (computes f - n*Floor((f-d)/n))
Round(f,d)       element-wise Round function, with d decimal rounding
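The Clamp and offset-Modulo entries deserve a worked example. Here is a NumPy sketch of their semantics, following the formula f - n*Floor((f-d)/n) given above, which maps every value into the half-open interval [d, d+n):

```python
import numpy as np

f = np.array([-3.5, -1.0, 0.0, 2.5, 7.0])

# Clamp(f, a, b): a if f < a, f if a <= f <= b, b if f > b
a, b = -1.0, 3.0
clamped = np.clip(f, a, b)

# Mod(f, n, d): f - n * Floor((f - d) / n), i.e. modulo n with offset d.
# Every result lies in the half-open interval [d, d + n).
n, d = 2.0, -1.0
mod = f - n * np.floor((f - d) / n)

assert np.all((d <= mod) & (mod < d + n))
```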
Simple vector operations:

SqNorm2(f)    squared L2 norm, same as (f|f)
Norm2(f)      L2 norm, same as Sqrt((f|f))
Normalize(f)  normalized vector, same as Rsqrt(SqNorm2(f)) * f
SqDist(f,g)   squared L2 distance, same as SqNorm2(f-g)
WeightedSqNorm(s,f) computes generic squared Euclidean norms, with support for scalar, diagonal and full (symmetric) weight matrices. If f is a vector of size N, then depending on the size of s, WeightedSqNorm(s,f) may refer to:

a weighted L2 norm \(s[0]\cdot\sum_{0\leqslant i < N} f[i]^2\) if s is a vector of size 1;
a separable norm \(\sum_{0\leqslant i < N} s[i]\cdot f[i]^2\) if s is a vector of size N;
a full anisotropic norm \(\sum_{0\leqslant i,j < N} s[iN+j]\, f[i]\, f[j]\) if s is a vector of size N*N such that s[i*N+j] = s[j*N+i] (i.e. s stores a symmetric matrix).

WeightedSqNorm(s,f)    generic squared Euclidean norm
WeightedSqDist(s,f,g)  generic squared distance, same as WeightedSqNorm(s,f-g)
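The three WeightedSqNorm cases can be spelled out in NumPy as follows (a sketch of the semantics, not KeOps code):

```python
import numpy as np

N = 3
f = np.array([1.0, -2.0, 0.5])

# Case 1: s has size 1 -> weighted L2 norm s[0] * sum_i f[i]^2
s1 = np.array([2.0])
norm1 = s1[0] * np.sum(f**2)

# Case 2: s has size N -> separable norm sum_i s[i] * f[i]^2
s2 = np.array([1.0, 0.5, 2.0])
norm2 = np.sum(s2 * f**2)

# Case 3: s has size N*N (flattened symmetric matrix) ->
#         full anisotropic norm sum_{i,j} s[i*N+j] * f[i] * f[j]
A = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.1],
              [0.0, 0.1, 0.5]])  # symmetric weight matrix
s3 = A.ravel()                   # row-major flattening: s3[i*N + j] = A[i, j]
norm3 = f @ A @ f
```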
Operations involving complex numbers:

ComplexReal(f)        Real part of complex f (vectorized)
ComplexImag(f)        Imaginary part of complex f (vectorized)
Real2Complex(f)       convert real vector to complex vector with zero imaginary part (f+0*i)
Imag2Complex(f)       convert real vector to complex vector with zero real part (0+i*f)
Conj(f)               Complex conjugate (vectorized)
ComplexAbs(f)         Absolute value or modulus of complex f (vectorized)
ComplexSquareAbs(f)   Square of modulus of complex f (vectorized)
ComplexAngle(f)       Angle of complex f (vectorized)
ComplexSum(f)         Sum of complex vector
ComplexSumT(f,d)      Adjoint operation of ComplexSum: replicates f (complex scalar) d times
ComplexMult(f,g)      Complex multiplication of f and g (vectorized)
ComplexScal(f,g)      Multiplication of f (complex scalar) with g (complex vector)
ComplexRealScal(f,g)  Multiplication of f (real scalar) with g (complex vector)
ComplexDivide(f,g)    Complex division of f and g (vectorized)
Constants and padding/concatenation operations:

IntCst(N)        integer constant N
IntInv(N)        alias for Inv(IntCst(N)), i.e. 1/N
Zero(N)          vector of zeros of size N
Sum(f)           sum of the elements of vector f
Max(f)           max of the elements of vector f
Min(f)           min of the elements of vector f
ArgMax(f)        argmax of the elements of vector f
ArgMin(f)        argmin of the elements of vector f
Elem(f,m)        extract the m-th element of vector f
ElemT(f,n,m)     insert scalar value f at position m in a vector of zeros of length n
Extract(f,m,d)   extract a subvector of length d from vector f, starting at index m
ExtractT(f,m,d)  insert vector f at position m in a vector of zeros of length d
Concat(f,g)      concatenation of vectors f and g
OneHot(f,D)      encodes a (rounded) scalar value as a one-hot vector of dimension D
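A NumPy sketch of the extraction, padding and one-hot semantics listed above (0-based indices, illustrative code only):

```python
import numpy as np

f = np.array([3.0, 1.0, 4.0, 1.0, 5.0])

# Elem(f, m): extract the m-th element
elem = f[2]                       # -> 4.0

# Extract(f, m, d): sub-vector of length d starting at index m
sub = f[1:1 + 3]                  # -> [1.0, 4.0, 1.0]

# ExtractT(g, m, d): place g at offset m inside a zero vector of length d
g = np.array([7.0, 8.0])
padded = np.zeros(6)
padded[2:2 + g.size] = g          # -> [0, 0, 7, 8, 0, 0]

# OneHot(v, D): round a scalar and encode it as a one-hot vector of size D
v, D = 2.2, 4
onehot = np.zeros(D)
onehot[int(round(v))] = 1.0       # -> [0, 0, 1, 0]
```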
Elementary tensor algebra:

MatVecMult(f,g)  matrix-vector product
VecMatMult(f,g)  vector-matrix product
TensorProd(f,g)  tensor cross product
Kron(f,g)        Kronecker product (similar to numpy's kron in the spirit)
TensorDot(f,g,...)  tensordot product (with dimensions and contraction indices specified through Ind(...) arguments)
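Since KeOps formulas operate on flat vectors, a matrix argument to these operations is just a flattened buffer. Here is a NumPy sketch of the intended semantics, assuming row-major storage as in the WeightedSqNorm convention above:

```python
import numpy as np

n, d = 2, 3
A = np.arange(6, dtype=float).reshape(n, d)  # a 2 x 3 matrix
x = np.array([1.0, 0.0, -1.0])               # vector of size d
y = np.array([2.0, -1.0])                    # vector of size n

flatA = A.ravel()  # the flat (n*d)-vector a KeOps formula would manipulate

# MatVecMult(A, x): (n*d)-vector times d-vector -> n-vector
matvec = flatA.reshape(n, d) @ x

# VecMatMult(y, A): n-vector times (n*d)-vector -> d-vector
vecmat = y @ flatA.reshape(n, d)

# TensorProd(y, x): outer product, flattened to an (n*d)-vector
outer = np.outer(y, x).ravel()
```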
Symbolic gradients and linear operators:

Grad(f,v,e)      gradient of f with respect to variable v, applied to e (a vector-Jacobian product)
Diff(f,v,e)      differential of f with respect to v, applied to e (a Jacobian-vector product)
GradMatrix(f,v)  matrix of gradient of f with respect to v (i.e. transpose of the Jacobian matrix)
Divergence(f,v)  divergence of f with respect to v
Laplacian(f,v)   laplacian of f with respect to v

TraceOperator(f,v)      trace of a linear operator
AdjointOperator(f,v,e)  adjoint of a linear operator
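The gradient operation computes a vector-Jacobian product, the building block of reverse-mode differentiation. Here is a NumPy sketch with a finite-difference sanity check, for the illustrative formula F(x) = Square(x - y) with y held fixed:

```python
import numpy as np

# Illustrative formula: F(x) = Square(x - y), element-wise, with y fixed.
y = np.array([0.5, -1.0, 2.0])

def F(x):
    return (x - y) ** 2

# Grad(F, x, e): the vector-Jacobian product e . dF/dx.
# Here the Jacobian is diag(2 * (x - y)), hence:
def grad_F(x, e):
    return 2.0 * (x - y) * e

x = np.array([1.0, 1.0, 1.0])
e = np.array([1.0, -2.0, 0.5])

# Finite-difference check: <e, (F(x + h*dx) - F(x)) / h>  ~  <grad_F(x, e), dx>
h = 1e-6
dx = np.array([0.3, 0.1, -0.2])
lhs = e @ (F(x + h * dx) - F(x)) / h
rhs = grad_F(x, e) @ dx
assert abs(lhs - rhs) < 1e-4
```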
Other:

SoftDTW distance (based on the squared Euclidean distance) between two input vectors 
Reductions
The operations that can be used to reduce an array are described in the following table; mathematical expressions are given for a reduction over the \(j\) index.

code name (arguments)        mathematical expression                                                remarks

Sum (f)                      \(\sum_j f_{ij}\)
Max_SumShiftExp (f)          \((m_i,s_i)\) with \(\left\{\begin{array}{l}m_i=\max_j f_{ij}\\s_i=\sum_j\exp(f_{ij}-m_i)\end{array}\right.\)
LogSumExp (f)                \(\log\left(\sum_j\exp(f_{ij})\right)\)                                only in Python bindings
Max_SumShiftExpWeight (f,g)  \((m_i,s_i)\) with \(\left\{\begin{array}{l}m_i=\max_j f_{ij}\\s_i=\sum_j\exp(f_{ij}-m_i)g_{ij}\end{array}\right.\)
LogSumExpWeight (f,g)        \(\log\left(\sum_j\exp(f_{ij})g_{ij}\right)\)                          only in Python bindings
SumSoftMaxWeight (f,g)       \(\left(\sum_j\exp(f_{ij})g_{ij}\right)/\left(\sum_j\exp(f_{ij})\right)\)   only in Python bindings
Min (f)                      \(\min_j f_{ij}\)                                                      no gradient
ArgMin (f)                   \(\text{argmin}_j f_{ij}\)                                             gradient returns zeros
Min_ArgMin (f)               \(\left(\min_j f_{ij},\text{argmin}_j f_{ij}\right)\)                  no gradient
Max (f)                      \(\max_j f_{ij}\)                                                      no gradient
ArgMax (f)                   \(\text{argmax}_j f_{ij}\)                                             gradient returns zeros
Max_ArgMax (f)               \(\left(\max_j f_{ij},\text{argmax}_j f_{ij}\right)\)                  no gradient
KMin (f, K)                  \(\left[\min_j f_{ij},\ldots,\min^{(K)}_j f_{ij}\right]\) where \(\min^{(k)}\) means the \(k\)-th smallest value   no gradient
ArgKMin (f, K)               \(\left[\text{argmin}_j f_{ij},\ldots,\text{argmin}^{(K)}_j f_{ij}\right]\)   gradient returns zeros
KMin_ArgKMin (f, K)          \(\left([\min^{(1...K)}_j f_{ij}],[\text{argmin}^{(1...K)}_j f_{ij}]\right)\)   no gradient
N.B.: All these reductions, except Max_SumShiftExp and LogSumExp, are vectorized: whenever the input f or g is vector-valued, the output will be vector-valued, with the corresponding reduction applied element-wise to each component.
N.B.: All reductions accept an additional optional argument that specifies whether the reduction is performed over the j or the i index (see C++ API for KeOps and Generic reductions).
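Max_SumShiftExp is the numerically stable building block behind the LogSumExp reduction; the max-shift trick it implements can be sketched in NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
f = 50.0 * rng.standard_normal((4, 6))   # one row per index i, one column per j

# Max_SumShiftExp returns the pair (m_i, s_i):
m = f.max(axis=1)                        # m_i = max_j f_ij
s = np.exp(f - m[:, None]).sum(axis=1)   # s_i = sum_j exp(f_ij - m_i)

# The LogSumExp value is recovered as m_i + log(s_i); shifting by m_i
# keeps every exponential <= 1, so the sum cannot overflow.
lse = m + np.log(s)
```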
An example
Assume we want to compute the sum

\(\alpha_i = \sum_{j=1}^{N} (p - a_j)^2 \exp(x_i + y_j), \qquad i = 1,\dots,M,\)

where:
\(p \in \mathbb R\) is a parameter,
\(x \in \mathbb R^{M\times 3}\) is an x-variable indexed by \(i\),
\(y \in \mathbb R^{N\times 3}\) is a y-variable indexed by \(j\),
\(a \in \mathbb R^N\) is a y-variable indexed by \(j\).
Using the variable placeholders presented above and the
mathematical operations listed in Math operators,
we can define F
as a symbolic string
F = "Sum_Reduction( Square( Pm(0,1) - Vj(3,1) ) * Exp( Vi(1,3) + Vj(2,3) ), 1 )"
in which + and - denote the usual addition and subtraction of vectors, Exp is the (element-wise) exponential function and * denotes scalar-vector multiplication.
The second argument 1
of the Sum_Reduction
operator
indicates that the summation is performed with respect to the \(j\)
index: a 0
would have been associated with an \(i\)-reduction.
Note that in all bindings, variables can be defined through aliases.
In this example, we may write p=Pm(0,1), x=Vi(1,3), y=Vj(2,3), a=Vj(3,1) and thus give F through a much friendlier expression:
F = "Sum_Reduction( Square(p - a) * Exp(x + y), 1 )"
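As a sanity check on the formula's semantics, here is a dense NumPy implementation of the same sum. This is a reference computation only: a KeOps binding would evaluate the symbolic string F without ever materializing the M-by-N intermediate arrays.

```python
import numpy as np

M, N = 10, 20
rng = np.random.default_rng(2)

p = 0.7                          # Pm(0,1): parameter
x = rng.standard_normal((M, 3))  # Vi(1,3): indexed by i
y = rng.standard_normal((N, 3))  # Vj(2,3): indexed by j
a = rng.standard_normal((N, 1))  # Vj(3,1): indexed by j

# alpha_i = sum_j (p - a_j)^2 * exp(x_i + y_j), an (M, 3) array:
weights = (p - a) ** 2                            # (N, 1) scalar weights
pairwise = np.exp(x[:, None, :] + y[None, :, :])  # (M, N, 3) element-wise exponentials
alpha = (weights[None, :, :] * pairwise).sum(axis=1)

print(alpha.shape)  # (10, 3)
```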