# Effective number of parameters for $k$-nearest neighbour

A note on the effective number of parameters for $k$-nearest neighbour, which is $n/k$ rather than simply 1, where $n$ is the sample size and $k$ is the free parameter that determines the number of neighbours.

The $k$-nearest neighbour algorithm has no "typical" training stage: predicting an unknown input $x_0$ effectively involves all $n$ samples, averaging over the $k$ nearest ones. Model complexity can therefore be measured as the average impact of each sample on a prediction. Since each prediction is a mean over $k$ neighbours, the $n$ samples effectively form about $n/k$ neighbourhoods, each fitting one local mean, so the effective number of parameters is $n/k$. This is in contrast to other types of models, where the parameters are typically a set of weights and bias terms.

When $k$ is small, $n/k$ is large and the model is more complex, giving low bias and high variance. Conversely, as $k$ grows, $n/k$ shrinks and the model becomes less complex, giving high bias and low variance.
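To make the $n/k$ intuition concrete, here is a minimal sketch of $k$-nearest-neighbour regression using only NumPy; the function and variable names are illustrative, not taken from any particular library:

```python
import numpy as np

def knn_predict(X_train, y_train, x0, k):
    """Predict at x0 by averaging the targets of the k nearest training samples."""
    # Euclidean distance from x0 to every training point (all n samples are used).
    dists = np.linalg.norm(X_train - x0, axis=1)
    # Indices of the k nearest neighbours.
    nearest = np.argsort(dists)[:k]
    # The prediction is the mean of the k nearest targets.
    return y_train[nearest].mean()

# Toy data: n = 100 noisy samples from a sine curve.
rng = np.random.default_rng(0)
n = 100
X = rng.uniform(0, 2 * np.pi, size=(n, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=n)

x0 = np.array([np.pi / 2])
for k in (1, 10, 50):
    print(f"k={k:2d}  n/k~{n // k:3d}  prediction at x0: {knn_predict(X, y, x0, k):.3f}")
```

At $k = 1$ the fit interpolates the noise (high variance), while at $k = n$ every prediction collapses to the global mean (high bias), which matches reading $n/k$ as the model's complexity.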