When measuring the uncertainty of a deep learning model, several methods are available, and the right choice depends on the model and on the kind of uncertainty of interest.

Bootstrap trains multiple models on resampled subsets of the data and measures the variance or diversity of their predictions. The spread across members chiefly reflects epistemic uncertainty; if each member also outputs a predictive distribution, aleatoric uncertainty can be estimated as well. The cost is extra training compute and storage.

Monte Carlo dropout keeps dropout active in the hidden layers at inference time and averages the predictions of multiple stochastic forward passes. The variance across passes approximates epistemic uncertainty, but the method does not by itself account for aleatoric uncertainty.

Deep ensembles train multiple models with different architectures, hyperparameters, or initializations and combine their predictions with a weighted average or a voting scheme. Ensembles capture both aleatoric and epistemic uncertainty: disagreement between members is epistemic, while each member's own predictive distribution carries aleatoric. Like bootstrap, they require more computational resources and storage.

Lastly, Bayesian neural networks place probability distributions over the network weights and infer the posterior over those weights given the data. They capture both aleatoric and epistemic uncertainty, but exact inference is computationally intractable, so approximations (for example variational inference or MCMC) are typically required.
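The Monte Carlo dropout idea can be sketched in a few lines. The snippet below uses a tiny NumPy regressor with random stand-in weights (a real use would load trained weights); the key point is that dropout masks stay on at inference, and the standard deviation across stochastic forward passes serves as an epistemic-uncertainty proxy. All names and sizes here are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained" weights for a one-hidden-layer regressor;
# random values are stand-ins purely for illustration.
W1 = rng.normal(size=(1, 32))
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1))
b2 = np.zeros(1)

def forward(x, p_drop=0.5, mc_dropout=True):
    """One forward pass; dropout stays active when mc_dropout=True."""
    h = np.maximum(x @ W1 + b1, 0.0)            # ReLU hidden layer
    if mc_dropout:
        mask = rng.random(h.shape) > p_drop     # keep units w.p. 1 - p_drop
        h = h * mask / (1.0 - p_drop)           # inverted-dropout scaling
    return h @ W2 + b2

x = np.array([[0.5]])
T = 200                                          # number of MC dropout samples
samples = np.stack([forward(x) for _ in range(T)])

pred_mean = samples.mean(axis=0)                 # averaged prediction
pred_std = samples.std(axis=0)                   # epistemic uncertainty proxy
print(float(pred_mean), float(pred_std))
```

In a framework like PyTorch the same effect is obtained by leaving the dropout layers in training mode during inference and repeating the forward pass T times.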