ANALYSIS OF FEATURE SELECTION METHODS FOR IMPROVING THE EFFICIENCY OF MACHINE LEARNING ALGORITHMS IN PREDICTION TASKS
DOI: https://doi.org/10.30857/2786-5371.2025.6.4

Keywords: feature selection methods, prediction, algorithms, datasets, data processing, machine learning

Abstract
Purpose. To systematize and conduct a comparative analysis of feature selection methods in order to improve the efficiency of machine learning algorithms in prediction and classification tasks.
Methodology. Systematization, formalization, and comparative analysis of three main categories of feature selection methods: filter methods, wrapper methods, and embedded methods.
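The three categories can be illustrated with a minimal sketch; this is not the paper's own implementation, only a generic scikit-learn example on synthetic data, where the feature counts and hyperparameters are arbitrary assumptions chosen for demonstration.

```python
# Illustrative sketch (not the article's exact methodology): the three
# feature selection categories demonstrated with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, RFE, SelectFromModel
from sklearn.linear_model import LogisticRegression

# Synthetic high-dimensional data: 20 features, only 5 informative.
X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=5, random_state=0)

# Filter method: score each feature independently of any model
# (here, an ANOVA F-test) and keep the top k.
filt = SelectKBest(f_classif, k=5).fit(X, y)

# Wrapper method: recursively eliminate features based on a
# model's fitted coefficients (RFE around logistic regression).
wrap = RFE(LogisticRegression(max_iter=1000),
           n_features_to_select=5).fit(X, y)

# Embedded method: selection happens during model training itself
# (an L1 penalty drives uninformative coefficients to zero).
emb = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
).fit(X, y)

for name, sel in [("filter", filt), ("wrapper", wrap), ("embedded", emb)]:
    print(name, sel.get_support().sum(), "features kept")
```

The trade-off the article analyzes is visible even here: the filter runs one cheap statistic per feature, the wrapper retrains the model repeatedly, and the embedded approach pays no extra cost beyond a single training run.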
Results. A detailed analysis of existing feature selection algorithms, their advantages and limitations in the context of working with large volumes of data and high-dimensional feature spaces was carried out. A classification of methods was developed depending on the type of task (prediction or classification), the nature of the data, and the available computational resources.
Originality. A systematized methodology for feature selection is proposed, which ensures a reduction in the dimensionality of the feature space, minimizes data redundancy, and improves model interpretability while maintaining or enhancing their predictive capability.
Practical value. The obtained results demonstrate that choosing an appropriate feature selection method can significantly reduce model training time, improve generalization ability, and decrease the risk of overfitting. These results open prospects for applying feature selection methods in any field that requires processing large volumes of high-dimensional data.