Label smoothing is a regularisation technique commonly applied to classification problems. Instead of using one-hot encoding for class labels, label smoothing relaxes the target from $1$ to a slightly smaller value, e.g. $0.9$, and distributes the residual $0.1$ uniformly over the remaining classes. Given a $K$-class one-hot encoding $y = [0, 0, \ldots, 1, \ldots, 0]$ where the $i$-th entry is $1$, label smoothing with parameter $\epsilon$ yields $y' = [\frac{\epsilon}{K}, \frac{\epsilon}{K}, \ldots, 1-\epsilon + \frac{\epsilon}{K}, \ldots, \frac{\epsilon}{K}]$, where the $i$-th entry is $1-\epsilon + \frac{\epsilon}{K}$. This prevents the model from pushing its predictions to extreme values, which mitigates overfitting. The softened labels have higher entropy, so label smoothing acts as a form of regularisation.
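The transformation above can be sketched in a few lines of NumPy (the function name `smooth_labels` and the example values are illustrative, not from any particular library):

```python
import numpy as np

def smooth_labels(labels, num_classes, epsilon=0.1):
    """Convert integer class labels to smoothed one-hot targets.

    Every class receives epsilon / K probability mass, and the true
    class additionally receives 1 - epsilon, so its final entry is
    1 - epsilon + epsilon / K.
    """
    one_hot = np.eye(num_classes)[labels]
    return one_hot * (1.0 - epsilon) + epsilon / num_classes

# Example: 3 samples, K = 4 classes, epsilon = 0.1.
# True-class entries become 0.925, all others 0.025;
# each row still sums to 1.
targets = smooth_labels(np.array([0, 2, 3]), num_classes=4, epsilon=0.1)
```

These smoothed targets can be plugged directly into a cross-entropy loss in place of the hard one-hot labels.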