Classifiers face myriad challenges in today’s data-driven world, ranging from overfitting and high computational costs to low accuracy, imbalanced training datasets, and the notorious black-box effect. Furthermore, many traditional classifiers struggle to handle noisy and missing feature values robustly. In response to these hurdles, we present classification methods that leverage feature interval learning and outranking measures. Our classification algorithms require no prior domain knowledge: they automatically discern feature intervals directly from the data. These intervals capture essential patterns and characteristics of the dataset, giving our classifiers adaptability and insight. In addition, we employ outranking measures to mitigate the influence of noise and uncertainty in the data; through pairwise comparisons of alternatives on each feature, they enhance the robustness and reliability of the classification outcomes. The developed classifiers are empirically evaluated on several datasets from the UCI repository and compared with well-known classifiers, including k-Nearest Neighbors (k-NN), Support Vector Machine (SVM), Random Forest (RF), Neural Network (NN), Naive Bayes (NB), and Nearest Centroid (NC). The experimental results demonstrate that the classifiers based on feature interval learning and outranking measures are robust to imbalanced data and to irrelevant features, and achieve performance comparable to, and in some cases better than, the well-known classifiers. Moreover, our proposed classifiers produce more explainable models while preserving high predictive performance.
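The core idea of interval-based classification can be sketched as follows. This is a minimal illustrative toy, not the algorithm developed in the paper: it learns, for each class and feature, the simple [min, max] interval observed in the training data, and predicts by letting each feature vote for every class whose interval contains the corresponding feature value. The function names and the voting rule are assumptions introduced purely for illustration.

```python
import numpy as np

def learn_intervals(X, y):
    """For each class, record the [min, max] interval of every feature
    observed in the training data (a naive stand-in for learned
    feature intervals)."""
    return {
        c: list(zip(X[y == c].min(axis=0), X[y == c].max(axis=0)))
        for c in np.unique(y)
    }

def predict(x, intervals):
    """Per-feature voting: each feature supports every class whose
    learned interval contains that feature's value; the class with
    the most support wins."""
    scores = {
        c: sum(lo <= v <= hi for v, (lo, hi) in zip(x, ivs))
        for c, ivs in intervals.items()
    }
    return max(scores, key=scores.get)
```

A classifier of this shape is interpretable by construction, since each prediction can be explained by listing which feature intervals the instance fell into; the outranking measures described in the abstract would replace the crisp in/out membership test with pairwise, noise-tolerant comparisons.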