# Kernel Operations on the GPU, with autodiff, without memory overflows

The KeOps library lets you compute generic reductions of large 2d arrays whose entries are given by a mathematical formula. It combines a tiled reduction scheme with an automatic differentiation engine, and can be used through Matlab, NumPy or PyTorch backends. It is perfectly suited to the computation of kernel dot products and the associated gradients, even when the full kernel matrix does not fit into the GPU memory.
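The tiled scheme can be illustrated with a minimal NumPy sketch (this is not the KeOps implementation, which runs these tiles as CUDA blocks on the GPU): the kernel matrix is processed one slab at a time, so the full M-by-N array is never materialized.

```python
import numpy as np

def tiled_kernel_sum(x, y, tile=128):
    """a_i = sum_j exp(-|x_i - y_j|^2), computed tile by tile so that
    only an (M, tile) slab of the kernel matrix is in memory at once."""
    out = np.zeros(len(x))
    for start in range(0, len(y), tile):
        block = y[start:start + tile]                            # (tile, D)
        sq = ((x[:, None, :] - block[None, :, :]) ** 2).sum(-1)  # (M, tile)
        out += np.exp(-sq).sum(axis=1)
    return out

rng = np.random.default_rng(0)
x, y = rng.standard_normal((500, 3)), rng.standard_normal((2000, 3))
a = tiled_kernel_sum(x, y)

# Same result as the naive dense computation, without the (M, N) matrix:
dense = np.exp(-((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)).sum(1)
assert np.allclose(a, dense)
```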

Using the PyTorch backend, a typical sample of code looks like:

```python
import torch
from pykeops.torch import Genred

# Kernel density estimator between point clouds in R^3
my_conv = Genred('Exp(-SqDist(x, y))',  # formula
                 ['x = Vi(3)',          # 1st input: dim-3 vector per line
                  'y = Vj(3)'],         # 2nd input: dim-3 vector per column
                 reduction_op='Sum',    # we also support LogSumExp, Min, etc.
                 axis=1)                # sum with respect to "j", result indexed by "i"

# Apply it to 2d arrays x and y with 3 columns and a (huge) number of lines
x = torch.randn(1000000, 3).cuda()
y = torch.randn(2000000, 3).cuda()
a = my_conv(x, y)  # shape (1000000, 1), a_i = sum_j exp(-|x_i-y_j|^2)
```
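For the Gaussian kernel above, the gradient that KeOps' autodiff engine returns has a simple closed form, since d/dx_i of exp(-|x_i - y_j|^2) is -2 (x_i - y_j) exp(-|x_i - y_j|^2). The NumPy sketch below (illustrative only, not the KeOps API) computes this analytic gradient and checks it against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal((50, 3)), rng.standard_normal((80, 3))

def a(x):
    # a_i = sum_j exp(-|x_i - y_j|^2)
    return np.exp(-((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)).sum(1)

# Analytic gradient: d a_i / d x_i = sum_j -2 (x_i - y_j) exp(-|x_i - y_j|^2)
diff = x[:, None, :] - y[None, :, :]           # (M, N, 3)
w = np.exp(-(diff ** 2).sum(-1))               # (M, N)
grad = -2 * (w[:, :, None] * diff).sum(1)      # (M, 3)

# Sanity check on one coordinate, via a forward finite difference:
eps = 1e-6
xp = x.copy()
xp[0, 0] += eps
fd = (a(xp)[0] - a(x)[0]) / eps
assert np.isclose(fd, grad[0, 0], rtol=1e-3, atol=1e-3)
```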


KeOps allows you to leverage your GPU without compromising on usability. It provides:

- Support for a wide range of mathematical formulas.
- Seamless computation of derivatives, up to arbitrary orders.
- Sum, LogSumExp, Min, Max, but also ArgMin, ArgMax or K-min reductions.
- A conjugate gradient solver for, e.g., large-scale spline interpolation or kriging (a.k.a. Gaussian process regression).
- An interface for block-sparse and coarse-to-fine strategies.
- Support for multi-GPU configurations.

KeOps can thus be used in a wide variety of settings, from shape analysis (LDDMM, optimal transport…) to machine learning (kernel methods, k-means…) or kriging (a.k.a. Gaussian process regression). More details are provided below: