Learn Prolog

Recently, I came across Prolog in my coursework (EEEM0005). It is not easy! Prolog is a very old programming language that I had only ever seen in the TIOBE programming language ranking (a boring index), but from the beginning I was attracted by its simple structure and unusual syntax.

parent(pam, bob).
parent(tom, bob).
parent(tom, liz).
parent(bob, pat).
parent(bob, ann).
parent(pat, jim).

Each of the six facts above (also called clauses) states a parent relationship:
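For example, once these facts are consulted, the interpreter can be queried against them (a minimal sketch; the answers follow directly from the facts above):

```prolog
% "Is tom a parent of bob?" - succeeds.
?- parent(tom, bob).
true.

% "Who are bob's children?" - X unifies with each answer in turn.
?- parent(bob, X).
X = pat ;
X = ann.
```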

Read More

Training Model on Nvidia DIGITS

About Nvidia DIGITS

GPUs are highly efficient at training models, especially deep neural networks. Honestly, training a model is not an easy task: we need to prepare datasets, choose a network, and configure many parameters. That is a big challenge if you don't have much experience with programming and frameworks such as Python or PyTorch. Nvidia DIGITS is a web platform that lets us train models through a user-friendly GUI without writing code. DIGITS simplifies common deep learning tasks such as managing data, designing and training neural networks on multi-GPU systems, monitoring performance in real time with advanced visualizations, and selecting the best-performing model from the results browser for deployment. DIGITS is completely interactive, so data scientists can focus on designing and training networks rather than on programming and debugging.

In this article, I'll use the MNIST dataset and the LeNet network to train a model that classifies handwritten digits. The training process can be divided into three steps:

  1. Import Dataset
  2. Training
  3. Testing and analysing

Now let's start. If you are unsure how to install DIGITS, please refer to the documentation.
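For readers who have not installed DIGITS yet, one common route is NVIDIA's container image from the NGC catalog (a hedged sketch: the exact tag and GPU flags depend on your Docker and driver setup, so check the DIGITS page on NGC first):

```shell
# Run DIGITS from NVIDIA NGC; the web UI is served on port 5000.
# Replace <tag> with a version listed on the NGC catalog page for DIGITS.
docker run --gpus all -d -p 5000:5000 nvcr.io/nvidia/digits:<tag>
```

Once the container is up, the DIGITS home page is available at http://localhost:5000.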

Read More

Salt & Pepper Noise and Reducing It with a Median Filter

When processing images, the pictures we work with often contain a lot of noise. Shooting in the dark, or with a disturbed camera sensor, produces images full of noisy pixels, commonly called "speckles", and Salt & Pepper noise is one kind. Why salt and pepper? Because the noise pixels are either white or black, which looks as if salt and pepper had been sprinkled over the picture. More formally it is called impulse noise: sudden and sharp disturbances in the image signal that make the picture look grainy.
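To make this concrete, here is a small NumPy sketch (function names and the noise fraction are my own, for illustration) that corrupts a grayscale image with salt-and-pepper noise and then cleans it with a median filter — the filter works because the extreme 0/255 values rarely survive taking the median of a neighbourhood:

```python
import numpy as np

def add_salt_pepper(img, amount=0.05, rng=None):
    """Corrupt a grayscale image: set a random fraction of pixels to 0 or 255."""
    rng = np.random.default_rng(rng)
    noisy = img.copy()
    mask = rng.random(img.shape) < amount
    noisy[mask] = rng.choice([0, 255], size=mask.sum())
    return noisy

def median_filter(img, size=3):
    """Replace each pixel with the median of its size x size neighbourhood."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")  # edge padding keeps borders sane
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out
```

In practice a library routine such as `scipy.ndimage.median_filter` does the same job much faster; the loop above is just to show the idea.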

Read More



As the figure above shows, the human nervous system consists of three parts: the Receptors receive stimuli from the external environment and convert them into electrical impulses, which are transmitted to the Neural net; after processing, the electrical impulses are sent to the Effectors, which finally produce a recognizable response. This loop runs in two directions:

Read More

Interpolating between Optimal Transport and MMD using Sinkhorn Divergences

Reading notes on Interpolating between Optimal Transport and MMD using Sinkhorn Divergences

The purpose of this paper is to show that the Sinkhorn divergences are convex, smooth, positive definite loss functions that metrize the convergence in law.

Countless methods in machine learning and image processing rely on comparisons between probability distributions. But simple dissimilarities such as the Total Variation norm or the Kullback-Leibler relative entropy do not take into account the distance d on the feature space χ. As a result, they do not metrize the convergence in law and are unstable with respect to deformations of the distributions’ support. Optimal Transport distances (sometimes referred to as the Earth Mover’s Distance) and Maximum Mean Discrepancies are continuous with respect to convergence in law and metrize its topology when the feature space χ is compact.
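For intuition, the Sinkhorn iterations behind entropy-regularized OT can be sketched in a few lines (a minimal illustration, not the paper's implementation; the squared-Euclidean cost and the value of ε are my own choices):

```python
import numpy as np

def sinkhorn(x, y, a, b, eps=0.5, iters=300):
    """Entropy-regularized OT between weighted point clouds (a, x) and (b, y).

    Returns the transport plan P and the regularized cost <P, C>.
    """
    # Pairwise squared-distance cost matrix on the feature space.
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):            # alternate scaling to match marginals
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # transport plan with marginals ~ (a, b)
    return P, np.sum(P * C)
```

The paper's Sinkhorn divergence then debiases this quantity: S_ε(α, β) = OT_ε(α, β) − ½ OT_ε(α, α) − ½ OT_ε(β, β).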

Read More