Markov chain Monte Carlo (MCMC) methods have been used successfully for Bayesian inference in a wide range of statistical problems. MCMC methods provide a means of uncertainty quantification via estimation of the posterior density and of the associated credible intervals. However, MCMC techniques have been less popular in machine learning, and in deep learning specifically. One of the main reasons for the limited adoption of Bayesian inference via MCMC for neural networks is the computational cost it incurs. Yet computational cost is not the whole story. The model structure of neural networks poses fundamental challenges to MCMC beyond scalability. We identify such challenges in inferring the posteriors of weights and biases in small neural networks via geometric and population MCMC sampling. The showcased examples pinpoint some of the fundamental issues that must be resolved if effective MCMC sampling methods for neural networks are to be developed.
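To fix ideas, the kind of sampling problem considered can be sketched with a plain random-walk Metropolis sampler over the parameters of a one-hidden-unit network; this is deliberately simpler than the geometric and population MCMC methods studied here, and the data, priors, and step size are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic regression data (not from the paper).
x = np.linspace(-2.0, 2.0, 40)
y = np.tanh(1.5 * x) + 0.1 * rng.standard_normal(x.size)

def log_posterior(theta):
    """Unnormalised log posterior of f(x) = w2 * tanh(w1 * x + b1) + b2
    with a standard normal prior on each parameter and Gaussian noise."""
    w1, b1, w2, b2 = theta
    pred = w2 * np.tanh(w1 * x + b1) + b2
    log_lik = -0.5 * np.sum((y - pred) ** 2) / 0.1 ** 2
    log_prior = -0.5 * np.sum(theta ** 2)
    return log_lik + log_prior

def metropolis(n_iter=5000, step=0.05):
    """Random-walk Metropolis over the four network parameters."""
    theta = np.zeros(4)
    lp = log_posterior(theta)
    samples, accepted = [], 0
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(4)
        lp_prop = log_posterior(prop)
        # Metropolis acceptance step (symmetric proposal).
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
            accepted += 1
        samples.append(theta.copy())
    return np.array(samples), accepted / n_iter

samples, acc_rate = metropolis()
```

Even in this toy setting the structural difficulty is visible: because tanh is odd, the parameter vectors (w1, b1, w2, b2) and (-w1, -b1, -w2, b2) produce identical network outputs, so the posterior has symmetric modes that a single chain explores poorly.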