Often, objects in a large dataset have many features or attributes. Not all of them are relevant or contribute information about the object (for instance, to classify it into one of several known classes). Narrowing the set down to the relevant features is useful to "understand what is going on". Also, fewer attributes mean faster data processing.
Arturo Heredia, Adolfo Guzmán and Gilberto Martínez have published an article that provides a new technique to select relevant features (those carrying most of the information needed to correctly classify an object) in a dataset containing objects with many features:
Arturo Heredia Márquez, Adolfo Guzmán-Arenas, Gilberto Lorenzo Martínez Luna (2023). FSOC – Feature selection ordered by correlation. Computación y Sistemas, Vol. 27, No. 1, pp. 33-51. ISSN: 2007-9737. DOI: 10.13053/CyS-27-1-3982.
The article (full text) can be downloaded from here. Its abstract follows.
Abstract. Data sets have increased in volume and features, yielding longer times for classification and training. When an object has many features, it often occurs that not all of them are highly correlated with the target class, and that significant correlation may exist between certain pairs of features. An adequate removal of "useless" features saves time and effort in data collection, and ensures faster learning and classification times, with little or no reduction in classification accuracy.
This article presents a new filter-type method, called FSOC (Feature Selection Ordered by Correlation), to select relevant features at a small computational cost. FSOC achieves this reduction by selecting a subset of the original features. FSOC does not combine existing features to produce a new set of fewer features, since artificially created features mask the relevance of the original features in class assignment, making the new model difficult to interpret.
To test FSOC, a statistical analysis was performed on a collection of 36 data sets from several repositories, some with millions of objects. The classification percentages (efficiency) of FSOC were similar to those of other feature selection algorithms.
Nevertheless, when obtaining the selected features, FSOC was up to 42 times faster than other algorithms such as Correlation-based Feature Selection (CFS), Fast Correlation-Based Filter (FCBF) and Efficient feature selection based on correlation measure (ECMBF).
Keywords. Feature selection, data mining, pre-processing, feature reduction, data analysis.
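
The exact selection procedure of FSOC is described only in the full paper, but the general idea of a correlation-based filter can be illustrated with a short sketch. The Python code below is a generic, simplified illustration, not the authors' algorithm: it ranks features by their absolute correlation with the target class and greedily drops features that are weakly relevant or highly correlated with a feature already kept. The function name, thresholds, and synthetic data are assumptions made for this example.

import numpy as np
import pandas as pd

def correlation_filter(X: pd.DataFrame, y: pd.Series,
                       relevance_threshold: float = 0.1,
                       redundancy_threshold: float = 0.9) -> list:
    """Greedy correlation-based filter (illustrative sketch, not FSOC itself).

    Ranks features by absolute Pearson correlation with the target, then
    keeps a feature only if it is relevant enough and not too correlated
    with any feature already selected. Both thresholds are hypothetical
    knobs chosen for the example, not parameters from the paper.
    """
    # Relevance of each feature: |corr(feature, target)|, highest first.
    relevance = X.apply(lambda col: abs(np.corrcoef(col, y)[0, 1]))
    ranked = relevance.sort_values(ascending=False).index

    selected = []
    for feat in ranked:
        if relevance[feat] < relevance_threshold:
            continue  # weakly correlated with the class: treat as "useless"
        # Redundancy check against features already kept.
        redundant = any(
            abs(np.corrcoef(X[feat], X[kept])[0, 1]) > redundancy_threshold
            for kept in selected
        )
        if not redundant:
            selected.append(feat)
    return selected

if __name__ == "__main__":
    # Tiny synthetic example: x2 nearly duplicates x1, x3 is pure noise.
    rng = np.random.default_rng(0)
    x1 = rng.normal(size=200)
    X = pd.DataFrame({"x1": x1,
                      "x2": x1 + 0.01 * rng.normal(size=200),
                      "x3": rng.normal(size=200)})
    y = pd.Series((x1 > 0).astype(int))
    print(correlation_filter(X, y))  # expect ['x1']: x2 is redundant, x3 irrelevant

Real filter methods such as FSOC, CFS or FCBF refine this basic idea with more careful relevance and redundancy criteria and with attention to computational cost; see the full paper for the actual ordering and selection rules used by FSOC.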