Handling an imbalanced data distribution is an important part of the machine learning workflow. A dataset is imbalanced when the number of observations is not the same for all the classes in a classification problem, for instance when instances of one class far outnumber those of the other in a binary problem. This problem arises not only in binary-class data but also in multi-class data.
In this article, we list some important techniques that will help you to deal with your imbalanced data.
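Before applying any of the techniques below, it helps to quantify the imbalance. Here is a minimal sketch, assuming scikit-learn is available; the dataset, sizes and class weights are purely illustrative and are reused by the later snippets:

```python
from collections import Counter
from sklearn.datasets import make_classification

# Create an illustrative imbalanced binary dataset (roughly 90% / 10% split).
X, y = make_classification(
    n_samples=1000, n_features=20, n_classes=2,
    weights=[0.9, 0.1], random_state=42,
)

# Inspect the class distribution to quantify the imbalance.
print(Counter(y))  # e.g. Counter({0: 897, 1: 103})
```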
1| Oversampling
This technique modifies an unequal class distribution to create a balanced dataset. When the quantity of minority-class data is insufficient, oversampling balances the dataset by increasing the number of rare samples.
A primary oversampling technique is SMOTE (Synthetic Minority Over-sampling TEchnique). Instead of over-sampling with replacement, SMOTE over-samples the minority class by producing synthetic examples: for each minority class observation it calculates the k nearest neighbours (k-NN), and, depending on the amount of oversampling required, randomly chooses neighbours and interpolates new samples between the observation and those neighbours. The technique rests on the assumption that the local space between any two minority instances also belongs to the minority class, which may not always hold when the training data is not linearly separable.
Advantages
- No loss of information
- Mitigates the overfitting caused by random oversampling with replacement.
To take a deep dive into the SMOTE technique, click here.
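As a rough illustration of SMOTE, here is a minimal sketch assuming the imbalanced-learn package is installed and the illustrative `X`, `y` arrays from the earlier snippet; the `k_neighbors` value is the library default, written out for clarity:

```python
from collections import Counter
from imblearn.over_sampling import SMOTE

# Over-sample the minority class by interpolating between each
# minority sample and its k nearest minority-class neighbours.
smote = SMOTE(k_neighbors=5, random_state=42)
X_resampled, y_resampled = smote.fit_resample(X, y)

print(Counter(y_resampled))  # both classes now have the same count
```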
2| Undersampling
Unlike oversampling, this technique balances an imbalanced dataset by reducing the size of the majority class. There are several undersampling methods for classification problems, such as cluster centroids and Tomek links. The cluster centroid method replaces clusters of majority samples with the cluster centroids of a k-means algorithm, while the Tomek link method removes unwanted overlap between classes by deleting majority samples until all minimally distanced nearest-neighbour pairs belong to the same class.
Advantages
- Run-time can be improved by reducing the size of the training dataset.
- Helps to alleviate memory problems.
To learn more about undersampling, click here.
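Here is a minimal sketch of both undersampling methods mentioned above, again assuming imbalanced-learn and the illustrative `X`, `y` arrays:

```python
from collections import Counter
from imblearn.under_sampling import ClusterCentroids, TomekLinks

# Cluster centroids: replace majority-class clusters with their k-means centroids.
cc = ClusterCentroids(random_state=42)
X_cc, y_cc = cc.fit_resample(X, y)
print(Counter(y_cc))

# Tomek links: drop majority samples that form cross-class nearest-neighbour pairs.
tl = TomekLinks()
X_tl, y_tl = tl.fit_resample(X, y)
print(Counter(y_tl))
```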
3| Cost-Sensitive Learning Technique
Cost-Sensitive Learning (CSL) takes misclassification costs into consideration and aims to minimise the total cost, rather than simply pursuing a high accuracy of classifying examples into a set of known classes. It plays an important role in machine learning algorithms, including real-world data mining applications.
In this technique, the costs of a false positive (FP), false negative (FN), true positive (TP) and true negative (TN) are represented in a cost matrix, where C(i, j) is the misclassification cost of classifying an instance of actual class “j” as predicted class “i”. Here is an example of a cost matrix for binary classification.
|  | Actual negative (j = 0) | Actual positive (j = 1) |
|---|---|---|
| Predicted negative (i = 0) | C(0, 0): true negative cost | C(0, 1): false negative cost |
| Predicted positive (i = 1) | C(1, 0): false positive cost | C(1, 1): true positive cost |
To take a deep dive into the CSL technique, click here.
Advantages
- This technique avoids pre-selection of parameters and auto-adjusts the decision hyperplane.
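One common way to apply cost-sensitive learning in practice is through class weights, which scale the loss incurred by mistakes on each class. Here is a minimal sketch assuming scikit-learn and the illustrative `X`, `y` arrays; the weight of 10 on the minority class is an arbitrary choice for the example, not a value from the article:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42
)

# Penalise mistakes on the minority class (label 1) ten times as heavily,
# mirroring a cost matrix in which the false negative cost dominates.
clf = LogisticRegression(max_iter=1000, class_weight={0: 1, 1: 10})
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```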
4| Ensemble Learning Techniques
The ensemble-based method is another technique used to deal with imbalanced datasets. An ensemble combines the results of several classifiers to improve on the performance of a single classifier, and thereby improves the generalisation ability of the individual models. It mainly combines the outputs of multiple base learners, and common approaches include Bagging and Boosting.
Bagging (Bootstrap Aggregating) trains similar learners on smaller bootstrap samples of the dataset and then averages all the predictions. Boosting (for example, AdaBoost) is an iterative technique that adjusts the weight of each observation depending on the previous classification; this decreases the bias error and builds strong predictive models.
Advantages
- Produces a more stable model
- Gives better predictions
To learn more about this technique, click here.
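Here is a minimal sketch of both ensemble approaches on the illustrative `X`, `y` arrays, assuming scikit-learn; the estimator counts are arbitrary values chosen for the example:

```python
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Bagging: average the predictions of base learners (decision trees by default)
# trained on bootstrap samples of the training data.
bagging = BaggingClassifier(n_estimators=50, random_state=42)

# Boosting (AdaBoost): reweight observations after each round of classification.
boosting = AdaBoostClassifier(n_estimators=50, random_state=42)

for name, model in [("bagging", bagging), ("boosting", boosting)]:
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(name, scores.mean())
```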
5| Combined Class Methods
In this type of method, various techniques are fused together to better handle imbalanced data. For instance, SMOTE can be combined with other methods, such as MSMOTE (Modified SMOTE), SMOTEENN (SMOTE with Edited Nearest Neighbours), SMOTE-TL, SMOTE-EL, etc., to eliminate noise in imbalanced datasets. MSMOTE, for example, is a modified version of SMOTE that classifies the samples of the minority class into three groups: security samples, latent noise samples and border samples.
Advantages
- No loss of useful information
- Good generalisation
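As an illustration, imbalanced-learn ships combined resamplers such as SMOTEENN and SMOTETomek (SMOTE followed by Tomek-link cleaning, i.e. SMOTE-TL). Here is a minimal sketch on the illustrative `X`, `y` arrays from earlier:

```python
from collections import Counter
from imblearn.combine import SMOTEENN, SMOTETomek

# SMOTE followed by Edited Nearest Neighbours cleaning.
sme = SMOTEENN(random_state=42)
X_enn, y_enn = sme.fit_resample(X, y)
print(Counter(y_enn))

# SMOTE followed by Tomek-link removal (SMOTE-TL).
smt = SMOTETomek(random_state=42)
X_smt, y_smt = smt.fit_resample(X, y)
print(Counter(y_smt))
```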