Once you have your criteria, the next step is to apply methods for evaluation. These methods help you measure and compare the performance of different feature selection algorithms and validate their results. Cross-validation splits your data into multiple training and testing sets, helping you guard against overfitting and assess how well your selected features generalize to unseen data. Bootstrap resamples your data with replacement, letting you estimate the variability and uncertainty of your feature selection results. A wrapper approach treats your model as a black box and evaluates candidate feature subsets by the model's predictive accuracy. Lastly, a filter approach scores each feature's relevance using statistical or information-theoretic measures, such as correlation, mutual information, or chi-squared. You can use these methods individually or in combination, depending on your data and goals. For instance, you could pair cross-validation with a wrapper to select the optimal subset of features for your model (see the first sketch below), or pair bootstrap with a filter to estimate the stability and robustness of your selection (see the second sketch).
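
As a concrete illustration of the first combination, here is a minimal sketch of cross-validation paired with a wrapper, using scikit-learn's recursive feature elimination with cross-validation (`RFECV`). The dataset, the logistic regression estimator, and the 5-fold/accuracy settings are illustrative assumptions, not the only reasonable choices:

```python
# Cross-validation + wrapper: RFECV recursively drops the weakest
# feature and uses 5-fold cross-validated accuracy to pick the
# subset size that generalizes best.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)  # example dataset (assumption)

selector = RFECV(
    estimator=LogisticRegression(max_iter=5000),  # the black-box model
    step=1,               # remove one feature per elimination round
    cv=5,                 # 5-fold cross-validation
    scoring="accuracy",   # wrapper criterion: model accuracy
)
selector.fit(X, y)

print("Optimal number of features:", selector.n_features_)
print("Selected feature mask:", selector.support_)
```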
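
And for the second combination, here is a minimal sketch of bootstrap paired with a filter: on each bootstrap resample, features are scored by mutual information and the top k are kept, and the selection frequency across resamples estimates how stable the selection is. The number of rounds, the value of k, and the mutual-information filter are illustrative assumptions:

```python
# Bootstrap + filter: resample rows with replacement, apply a
# mutual-information filter on each resample, and count how often
# each feature makes the top-k cut as a stability estimate.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)  # example dataset (assumption)
rng = np.random.default_rng(0)
n_rounds, k = 50, 10  # illustrative choices (assumption)
counts = np.zeros(X.shape[1])

for _ in range(n_rounds):
    # The bootstrap step: sample row indices with replacement.
    idx = rng.integers(0, len(X), size=len(X))
    # The filter step: score feature relevance on the resample.
    scores = mutual_info_classif(X[idx], y[idx], random_state=0)
    counts[np.argsort(scores)[-k:]] += 1  # tally the top-k features

# Features selected in most resamples are the most stable.
stability = counts / n_rounds
print("Selection frequency per feature:", np.round(stability, 2))
```

Features with a selection frequency near 1.0 are chosen almost regardless of how the data is resampled, which is exactly the robustness signal this combination is meant to surface.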