# Interpolating between Optimal Transport and MMD using Sinkhorn Divergences

The purpose of this paper is to show that the Sinkhorn divergences are convex, smooth, positive definite loss functions that metrize the convergence in law.

Countless methods in machine learning and image processing rely on comparisons between probability distributions. But simple dissimilarities such as the Total Variation norm or the Kullback-Leibler relative entropy do not take into account the distance d on the feature space $\chi$. As a result, they do not metrize the convergence in law and are unstable with respect to deformations of the distributions' support. Optimal Transport distances (sometimes referred to as the Earth Mover's Distance) and Maximum Mean Discrepancies are continuous with respect to convergence in law and metrize its topology when the feature space $\chi$ is compact.
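As a concrete illustration, the debiased Sinkhorn divergence between two measures $\alpha$ and $\beta$ is $S_\varepsilon(\alpha,\beta) = \mathrm{OT}_\varepsilon(\alpha,\beta) - \tfrac{1}{2}\mathrm{OT}_\varepsilon(\alpha,\alpha) - \tfrac{1}{2}\mathrm{OT}_\varepsilon(\beta,\beta)$. Below is a minimal NumPy sketch for uniform empirical measures on the real line; the function names are illustrative, the entropic cost is approximated by the primal transport cost $\langle P, C\rangle$ (omitting the entropy term), and a fixed iteration count stands in for a proper convergence check:

```python
import numpy as np

def sinkhorn_cost(a, b, C, eps=0.1, n_iters=200):
    """Approximate the entropic OT cost between weight vectors a and b
    with pairwise cost matrix C, via plain Sinkhorn iterations."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):      # alternating scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # transport plan
    return np.sum(P * C)             # primal cost <P, C>

def sinkhorn_divergence(x, y, eps=0.1):
    """Debiased Sinkhorn divergence S_eps between uniform empirical
    measures supported on 1-D samples x and y (squared distance cost)."""
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    Cxy = (x[:, None] - y[None, :]) ** 2
    Cxx = (x[:, None] - x[None, :]) ** 2
    Cyy = (y[:, None] - y[None, :]) ** 2
    return (sinkhorn_cost(a, b, Cxy, eps)
            - 0.5 * sinkhorn_cost(a, a, Cxx, eps)
            - 0.5 * sinkhorn_cost(b, b, Cyy, eps))
```

The debiasing terms $-\tfrac{1}{2}\mathrm{OT}_\varepsilon(\alpha,\alpha) - \tfrac{1}{2}\mathrm{OT}_\varepsilon(\beta,\beta)$ are what make the divergence vanish at $\alpha = \beta$ and remain positive otherwise; production code would use a library such as GeomLoss or POT with log-domain stabilization instead of this naive loop.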

# Problems

Deep Neural Networks achieve high performance when trained on large labelled datasets. In some circumstances, however, not enough labelled images are available. Few-shot image classification is one of the most well-studied machine learning problems addressing this: with only a small amount of labelled data, few-shot algorithms can categorize new images.

# My 2021

2021 has come to an end, and it is once again the season for year-end reviews, so mine should not be missing either. I wonder how everyone feels when looking back on 2021: full of a sense of accomplishment, regretful about things left undone or done wrong, or simply having drifted through with nothing noteworthy? For me, this year was a fulfilling one.