Formulas and syntax
KeOps lets you define any reduction operation of the form
\[a_i = \text{Reduction}_{j}\big[\,F(x^0_{\iota_0}, x^1_{\iota_1}, \ldots, x^{K-1}_{\iota_{K-1}})\,\big]\]
or
\[b_j = \text{Reduction}_{i}\big[\,F(x^0_{\iota_0}, x^1_{\iota_1}, \ldots, x^{K-1}_{\iota_{K-1}})\,\big]\]
where \(F\) is a symbolic formula, the \(x^k_{\iota_k}\)'s are vector variables and \(\text{Reduction}\) is a Sum, LogSumExp or any other standard operation (see Reductions for the full list of supported reductions).
We now describe the symbolic syntax that can be used through all KeOps bindings.
Variables: category, index and dimension
At a low level, every variable \(x^k_{\iota_k}\) is specified by its category \(\iota_k\in\{i,j,\emptyset\}\) (meaning that the variable is indexed by \(i\), by \(j\), or is a fixed parameter across indices), its positional index \(k\) and its dimension \(d_k\).
In practice, the category \(\iota_k\) is given through a keyword:

| keyword | meaning |
|---|---|
| `Vi` | variable indexed by \(i\) |
| `Vj` | variable indexed by \(j\) |
| `Pm` | parameter |

followed by a \((k, d_k)\) or (index, dimension) pair of integers.
For instance, `Vi(2,4)` specifies a variable indexed by \(i\), given as the third (\(k=2\)) input in the function call, and representing a vector of dimension \(d_k=4\).

N.B.: Using the same index `k` for two variables with different dimensions or categories is not allowed and will be rejected by the compiler.
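To make the placeholder syntax concrete, here is a minimal sketch that builds a KeOps formula string from `Vi`/`Vj`/`Pm` placeholders. The helper functions and the Gaussian-kernel formula below are illustrative assumptions, not part of the KeOps API; a KeOps formula is ultimately just a string.

```python
# Hedged sketch: KeOps formulas are plain strings built from Vi/Vj/Pm
# placeholders. The positional index k must be unique per variable and
# consistent with the order of the inputs at call time.

def Vi(k, d):   # variable indexed by i: k-th argument, dimension d
    return f"Vi({k},{d})"

def Vj(k, d):   # variable indexed by j
    return f"Vj({k},{d})"

def Pm(k, d):   # parameter, shared across all i and j
    return f"Pm({k},{d})"

# A Gaussian kernel matrix-vector product, written symbolically:
#   out_i = sum_j exp(-g * |x_i - y_j|^2) * b_j
g, x, y, b = Pm(0, 1), Vi(1, 3), Vj(2, 3), Vj(3, 2)
formula = f"Sum_Reduction(Exp(-{g} * SqDist({x},{y})) * {b}, 1)"
print(formula)
```

The resulting string can then be handed to any KeOps binding that accepts a symbolic formula.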
Math operators
To define formulas with KeOps, you can use simple arithmetics:

- `f * g`: scalar-vector multiplication (if `f` or `g` is a scalar) or element-wise multiplication
- `f + g`: addition of two vectors
- `f - g`: difference between two vectors, or minus sign
- `f / g`: element-wise division (N.B.: `f / g` is equivalent to `f * Inv(g)`)
- `(f | g)`: scalar product between vectors
Elementary functions:

- `Inv(f)`: element-wise inverse (1 / f)
- `Exp(f)`: element-wise exponential function
- `Log(f)`: element-wise natural logarithm
- `Sin(f)`: element-wise sine function
- `Cos(f)`: element-wise cosine function
- `Pow(f, m)`: element-wise \(m\)-th power of `f`, where `m` is a fixed integer
- `Powf(f, g)`: power operation, alias for `Exp(g * Log(f))`
- `Square(f)`: element-wise square, faster than `Pow(f, 2)`
- `Sqrt(f)`: element-wise square root, faster than `Powf(f, 0.5)`
- `Rsqrt(f)`: element-wise inverse square root, faster than `Powf(f, -0.5)`
- `Abs(f)`: element-wise absolute value
- `Sign(f)`: element-wise sign function (\(-1\) if \(f < 0\), \(0\) if \(f = 0\), \(+1\) if \(f > 0\))
- `Step(f)`: element-wise step function (\(0\) if \(f < 0\), \(1\) if \(f \geqslant 0\))
- `ReLU(f)`: element-wise ReLU function (\(0\) if \(f < 0\), \(f\) if \(f \geqslant 0\))
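The aliases above are straightforward numerical identities; the following NumPy sketch (not KeOps code) checks a few of them on a small vector, as a sanity reference for their element-wise semantics:

```python
import numpy as np

# Sketch: element-wise identities behind the KeOps aliases above,
# verified with NumPy (positive entries so that Log is defined).
f = np.array([0.5, 1.0, 2.0, 4.0])
g = np.array([2.0, 0.5, 3.0, -1.0])

powf  = np.exp(g * np.log(f))     # Powf(f, g) = Exp(g * Log(f)) = f ** g
rsqrt = 1.0 / np.sqrt(f)          # Rsqrt(f) = f ** (-0.5)
relu  = np.maximum(g, 0.0)        # ReLU(g): 0 where g < 0, g elsewhere

assert np.allclose(powf, f ** g)
assert np.allclose(rsqrt, f ** -0.5)
assert np.allclose(relu, [2.0, 0.5, 3.0, 0.0])
```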
Simple vector operations:

- `SqNorm2(f)`: squared L2 norm, same as `(f | f)`
- `Norm2(f)`: L2 norm, same as `Sqrt((f | f))`
- `Normalize(f)`: normalize vector, same as `Rsqrt(SqNorm2(f)) * f`
- `SqDist(f, g)`: squared L2 distance, same as `SqNorm2(f - g)`
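As a reference for these definitions, here is a short NumPy sketch (not KeOps code) spelling out the same four vector operations:

```python
import numpy as np

# Sketch: the vector-operation identities above, written out in NumPy.
f = np.array([3.0, 4.0])
g = np.array([1.0, -2.0])

sqnorm2   = np.dot(f, f)              # SqNorm2(f)   = (f | f)
norm2     = np.sqrt(np.dot(f, f))     # Norm2(f)     = Sqrt((f | f))
normalize = f / norm2                 # Normalize(f) = Rsqrt(SqNorm2(f)) * f
sqdist    = np.dot(f - g, f - g)      # SqDist(f, g) = SqNorm2(f - g)

assert sqnorm2 == 25.0 and norm2 == 5.0
assert np.allclose(np.dot(normalize, normalize), 1.0)
assert sqdist == 40.0
```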
Generic squared Euclidean norms, with support for scalar, diagonal and full (symmetric) matrices. If `f` is a vector of size N, then depending on the size of `s`, `WeightedSqNorm(s, f)` may refer to:

- a weighted L2 norm \(s[0]\cdot\sum_{1\leqslant i \leqslant N} f[i-1]^2\) if `s` is a vector of size 1;
- a separable norm \(\sum_{1\leqslant i \leqslant N} s[i-1]\cdot f[i-1]^2\) if `s` is a vector of size N;
- a full anisotropic norm \(\sum_{1\leqslant i,j\leqslant N} s[N(i-1)+j-1]\cdot f[i-1]\,f[j-1]\) if `s` is a vector of size N*N such that `s[N*(i-1)+j-1] = s[N*(j-1)+i-1]`, i.e. `s` encodes a symmetric matrix.

- `WeightedSqNorm(s, f)`: generic squared Euclidean norm
- `WeightedSqDist(s, f, g)`: generic squared distance, same as `WeightedSqNorm(s, f - g)`
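The three conventions can be summarized by a plain NumPy reference implementation. This is a sketch of the semantics described above, not the KeOps code:

```python
import numpy as np

# Sketch of the three WeightedSqNorm(s, f) conventions, depending on
# whether s holds a scalar, a diagonal, or a full symmetric matrix.
def weighted_sqnorm(s, f):
    N = f.size
    if s.size == 1:          # scalar weight: s[0] * sum_i f_i^2
        return s[0] * np.dot(f, f)
    if s.size == N:          # diagonal metric: sum_i s_i * f_i^2
        return np.dot(s, f * f)
    if s.size == N * N:      # full symmetric metric: f^T S f
        S = s.reshape(N, N)
        assert np.allclose(S, S.T), "s must encode a symmetric matrix"
        return f @ S @ f
    raise ValueError("s must have size 1, N or N*N")

f = np.array([1.0, 2.0])
assert weighted_sqnorm(np.array([3.0]), f) == 15.0        # 3 * (1 + 4)
assert weighted_sqnorm(np.array([1.0, 2.0]), f) == 9.0    # 1*1 + 2*4
S = np.array([[1.0, 0.5], [0.5, 1.0]])
assert weighted_sqnorm(S.ravel(), f) == 7.0               # 1 + 2*(0.5*2) + 4
```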
Constants and padding/concatenation operations:

- `IntCst(N)`: integer constant N
- `IntInv(N)`: alias for the constant 1 / N
- `Zero(N)`: vector of zeros of size N
- `Sum(f)`: sum of the elements of vector `f`
- `Elem(f, m)`: extract the m-th element of vector `f`
- `ElemT(f, n, m)`: insert scalar value `f` at position m in a vector of zeros of size n
- `Extract(f, m, d)`: extract a sub-vector of size d from vector `f`, starting at index m
- `ExtractT(f, m, n)`: insert vector `f` in a vector of zeros of size n, starting at index m
- `Concat(f, g)`: concatenation of vectors `f` and `g`
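The NumPy sketch below spells out the indexing conventions of these extraction and padding operations; the 0-based indexing is an assumption made for illustration, and this is not the KeOps implementation:

```python
import numpy as np

# Sketch: NumPy equivalents of the padding/concatenation operations,
# assuming 0-based positions m.
f = np.array([10.0, 20.0, 30.0, 40.0])

elem    = f[2]                    # Elem(f, 2): element at index 2
extract = f[1:1 + 2]              # Extract(f, 1, 2): size-2 sub-vector from index 1
concat  = np.concatenate([f, f])  # Concat(f, f): size 8

# ElemT(v, n, m): scalar v placed at position m of a zero vector of size n
elemT = np.zeros(5)
elemT[3] = 7.0

assert elem == 30.0
assert np.allclose(extract, [20.0, 30.0])
assert concat.size == 8
assert np.allclose(elemT, [0.0, 0.0, 0.0, 7.0, 0.0])
```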
Elementary dot products:

- `MatVecMult(f, g)`: matrix-vector product, where the vector `f` of size n*d is understood as an (n, d) matrix
- `VecMatMult(f, g)`: vector-matrix product
- `TensorProd(f, g)`: tensor product, a vector of size n*d if `f` and `g` have sizes n and d
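Since every KeOps variable is a flat vector, a "matrix" of shape (n, d) is stored as a vector of size n*d. The NumPy reshapes below sketch the intended semantics (row-major storage is an assumption made for illustration):

```python
import numpy as np

# Sketch: flat-vector semantics of the elementary dot products above.
n, d = 2, 3
M = np.arange(6, dtype=float)     # flat storage of a 2x3 matrix [[0,1,2],[3,4,5]]
v = np.array([1.0, 0.0, -1.0])    # vector of size d
w = np.array([1.0, 2.0])          # vector of size n

matvec = M.reshape(n, d) @ v      # MatVecMult(M, v): vector of size n
vecmat = w @ M.reshape(n, d)      # VecMatMult(w, M): vector of size d
tensor = np.outer(w, v).ravel()   # TensorProd(w, v): flat vector of size n*d

assert np.allclose(matvec, [-2.0, -2.0])
assert np.allclose(vecmat, [6.0, 9.0, 12.0])
assert tensor.size == 6
```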
Symbolic gradients:

- `Grad(f, x, e)`: gradient of `f` with respect to the variable `x`, applied to the test vector `e` (a vector-Jacobian product)
- `GradMatrix(f, x)`: matrix of gradient (i.e. transpose of the Jacobian matrix)
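A vector-Jacobian product contracts a test vector `e` against the Jacobian of `f`. The NumPy sketch below checks that convention on the scalar map \(f(x) = (x|x)\) with a finite-difference approximation; it illustrates the convention only and does not use the KeOps autodiff engine:

```python
import numpy as np

# Sketch: Grad(f, x, e) as the vector-Jacobian product <e, df/dx>.
# For f(x) = (x|x), the gradient is 2x, so Grad(f, x, e) = e * 2x
# when e is a scalar test "vector" (f is scalar-valued).
def f(x):
    return np.dot(x, x)

x = np.array([1.0, 2.0, 3.0])
e = 0.5

grad_f_e = e * 2.0 * x   # analytic vector-Jacobian product

# Central finite differences along each coordinate axis.
eps = 1e-6
fd = np.array([(f(x + eps * h) - f(x - eps * h)) / (2 * eps)
               for h in np.eye(3)])

assert np.allclose(e * fd, grad_f_e, atol=1e-4)
assert np.allclose(grad_f_e, [1.0, 2.0, 3.0])
```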
Reductions
The operations that can be used to reduce an array are described in the following table.

| code name | arguments | mathematical expression (reduction over j) | remarks |
|---|---|---|---|
| `Sum` | `f` | \(\sum_j f_{ij}\) | |
| `Max_SumShiftExp` | `f` | \((m_i,s_i)\) with \(m_i=\max_j f_{ij}\), \(s_i=\sum_j\exp(f_{ij}-m_i)\) | |
| `LogSumExp` | `f` | \(\log\left(\sum_j\exp(f_{ij})\right)\) | only in Python bindings |
| `Max_SumShiftExpWeight` | `f`, `g` | \((m_i,s_i)\) with \(m_i=\max_j f_{ij}\), \(s_i=\sum_j\exp(f_{ij}-m_i)\,g_{ij}\) | |
| `LogSumExpWeight` | `f`, `g` | \(\log\left(\sum_j\exp(f_{ij})\,g_{ij}\right)\) | only in Python bindings |
| `SumSoftMaxWeight` | `f`, `g` | \(\left(\sum_j\exp(f_{ij})\,g_{ij}\right)/\left(\sum_j\exp(f_{ij})\right)\) | only in Python bindings |
| `Min` | `f` | \(\min_j f_{ij}\) | no gradient |
| `ArgMin` | `f` | \(\text{argmin}_j f_{ij}\) | gradient returns zeros |
| `Min_ArgMin` | `f` | \(\left(\min_j f_{ij},\text{argmin}_j f_{ij}\right)\) | no gradient |
| `Max` | `f` | \(\max_j f_{ij}\) | no gradient |
| `ArgMax` | `f` | \(\text{argmax}_j f_{ij}\) | gradient returns zeros |
| `Max_ArgMax` | `f` | \(\left(\max_j f_{ij},\text{argmax}_j f_{ij}\right)\) | no gradient |
| `KMin` | `f`, `K` | \(\left[\min_j f_{ij},\ldots,\min^{(K)}_j f_{ij}\right]\), where \(\min^{(k)}\) is the k-th smallest value | no gradient |
| `ArgKMin` | `f`, `K` | \(\left[\text{argmin}_j f_{ij},\ldots,\text{argmin}^{(K)}_j f_{ij}\right]\) | gradient returns zeros |
| `KMin_ArgKMin` | `f`, `K` | \(\left(\left[\min^{(1\ldots K)}_j f_{ij}\right],\left[\text{argmin}^{(1\ldots K)}_j f_{ij}\right]\right)\) | no gradient |
N.B.: All these reductions, except `Max_SumShiftExp` and `LogSumExp`, are vectorized: whenever the input `f` or `g` is vector-valued, the output will be vector-valued, with the corresponding reduction applied element-wise to each component.

N.B.: All reductions accept an additional optional argument that specifies whether the reduction is performed over the \(j\) or the \(i\) index (see C++ API for KeOps and Generic reductions).
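The pair \((m_i, s_i)\) returned by `Max_SumShiftExp` exists for numerical stability: with \(m = \max_j f_{ij}\) and \(s = \sum_j \exp(f_{ij} - m)\), the log-sum-exp is recovered as \(\log(s) + m\) without overflow. The following NumPy sketch illustrates the shift trick on one row:

```python
import numpy as np

# Sketch: the max-shift trick behind Max_SumShiftExp / LogSumExp.
f = np.array([1000.0, 1001.0, 999.0])   # naive np.exp(f) would overflow

m = f.max()                  # m_i = max_j f_ij
s = np.exp(f - m).sum()      # s_i = sum_j exp(f_ij - m_i), all terms <= 1
logsumexp = np.log(s) + m    # log(sum_j exp(f_ij)), computed stably

assert np.isfinite(logsumexp)
assert abs(logsumexp - 1001.407606) < 1e-5
```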
An example
Assume we want to compute the sum
\[b_i = \sum_{j=1}^{N} (p - a_j)^2 \exp(x_i + y_j), \qquad i = 1, \ldots, M,\]
where the exponential is applied element-wise to the vector \(x_i + y_j \in \mathbb{R}^3\), and where:
- \(p \in \mathbb{R}\) is a parameter,
- \(x \in \mathbb{R}^{M\times 3}\) is an \(x\)-variable indexed by \(i\),
- \(y \in \mathbb{R}^{N\times 3}\) is a \(y\)-variable indexed by \(j\),
- \(a \in \mathbb{R}^{N}\) is a \(y\)-variable indexed by \(j\).
Using the variable placeholders presented above and the mathematical operations listed in Math operators, we can define `F` as a symbolic string:

F = "Sum_Reduction( Square( Pm(0,1) - Vj(3,1) ) * Exp( Vi(1,3) + Vj(2,3) ), 1 )"
in which `+` and `-` denote the usual addition and subtraction of vectors, `Exp` is the element-wise exponential function and `*` denotes scalar-vector multiplication.
The second argument `1` of the `Sum_Reduction` operator indicates that the summation is performed with respect to the \(j\) index: a `0` would have been associated with an \(i\)-reduction.
Note that in all bindings, variables can be defined through aliases. In this example, we may write `p = Pm(0,1)`, `x = Vi(1,3)`, `y = Vj(2,3)`, `a = Vj(3,1)` and thus give `F` through a much friendlier expression:

F = "Sum_Reduction( Square(p - a) * Exp(x + y), 1 )"
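To check one's understanding of the formula, the same reduction can be written as a brute-force NumPy computation. This sketch materializes the full M-by-N-by-3 tensor, which is exactly what KeOps avoids; the sizes M and N below are arbitrary:

```python
import numpy as np

# Sketch: brute-force NumPy version of
#   b_i = sum_j (p - a_j)^2 * exp(x_i + y_j),  element-wise in R^3.
M, N = 5, 4
rng = np.random.default_rng(0)
p = 2.0                       # Pm(0,1): scalar parameter
x = rng.normal(size=(M, 3))   # Vi(1,3): i-indexed variable
y = rng.normal(size=(N, 3))   # Vj(2,3): j-indexed variable
a = rng.normal(size=(N, 1))   # Vj(3,1): j-indexed variable

# Broadcast to shape (M, N, 3), then reduce over the j axis.
out = ((p - a[None, :, :]) ** 2 * np.exp(x[:, None, :] + y[None, :, :])).sum(axis=1)

assert out.shape == (M, 3)
```

KeOps evaluates the same reduction without ever allocating the intermediate (M, N, 3) array, which is what makes it scale to large M and N.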