This tutorial explains random forest in simple terms and shows how it works with examples. It includes a step-by-step guide to running random forest in R and explains the parameters used in the randomForest R package.


Random Forest is one of the most widely used machine learning algorithms for classification. It can also be used for regression (i.e. a continuous target variable), but it mainly performs well for classification (i.e. a categorical target variable). It has become a lethal weapon for modern data scientists refining predictive models. The best part of the algorithm is that very few assumptions are attached to it, so data preparation is less challenging and saves time. It is listed as a top algorithm (with ensembling) in Kaggle competitions.

**Background**

Random forests are a way of averaging multiple deep decision trees, trained on different parts of the same training set, with the goal of overcoming the over-fitting problem of individual decision trees.

In other words, random forests are an ensemble learning method for classification and regression that operates by constructing many decision trees at training time and outputting the class that is the mode of the classes output by the individual trees.

**What is overfitting?**

Overfitting means explaining your training data instead of finding patterns that generalize. In other words, your model learns the training data by heart instead of learning the underlying patterns, which prevents it from generalizing to unseen data: it fits the training dataset well but fails on the validation dataset.


**Decision Tree vs. Random Forest**

A single decision tree suffers from over-fitting and may ignore important variables when the sample size is small and the number of predictors (p) is large. Random forests, by contrast, are a type of recursive partitioning method particularly well suited to such small-sample, many-predictor problems.

Random forest comes at the expense of some loss of interpretability, but it generally boosts the performance of the final model considerably.


**Can Random Forest be used both for Continuous and Categorical Target Variable?**

Yes, it can be used for both continuous and categorical target (dependent) variables. In random forest/decision tree, a **classification model** refers to a factor/categorical dependent variable and a **regression model** refers to a numeric or continuous dependent variable.

**How random forest works**

Each tree is grown as follows:

1. **Random Record Selection:** Each tree is trained on roughly two-thirds of the total training data **(63.2% on average)**. Cases are drawn at **random with replacement** from the original data. This sample is the training set for growing the tree.

2. **Random Variable Selection:** Some predictor variables (say, m) are selected at **random** out of all the predictor variables, and the best split on these m variables is used to split the node.

By default, m is the square root of the total number of predictors for classification; for regression, m is the total number of predictors divided by 3. The value of m is held constant while the forest is grown.

**Note:** In a standard tree, each split is created after examining every variable and picking the best split from all the variables.

3. For each tree, using the leftover (36.8%) data, calculate the misclassification rate, known as the **out-of-bag (OOB) error rate**. Aggregate the errors from all trees to determine the **overall OOB error rate** for the classification.

4. Each tree gives a classification, and we say the tree "votes" for that class. The forest chooses the classification having the most votes over all the trees in the forest. For a binary dependent variable, each vote is YES or NO; count up the YES votes. The percentage of YES votes received is the predicted probability. In the regression case, the forest prediction is the average of the individual trees' predictions.
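The voting step can be sketched in base R. The per-tree predictions below are made up purely for illustration:

```r
# Hypothetical votes from five trees for three observations (rows = observations)
tree_preds <- rbind(obs1 = c("YES", "YES", "NO",  "YES", "YES"),
                    obs2 = c("NO",  "NO",  "YES", "NO",  "NO"),
                    obs3 = c("YES", "NO",  "YES", "YES", "NO"))

# Predicted probability = share of YES votes across the trees
prob_yes <- rowMeans(tree_preds == "YES")

# Majority vote gives the predicted class
pred_class <- ifelse(prob_yes > 0.5, "YES", "NO")

prob_yes    # 0.8 0.2 0.6
pred_class  # "YES" "NO" "YES"
```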

**What is random in 'Random Forest'?**

'Random' refers mainly to two processes: 1. random observations used to grow each tree, and 2. random variables selected for splitting at each node. See the detailed explanation in the previous section.

**Important Point:**

Random Forest does not require a split-sampling method to assess the accuracy of the model. It performs internal validation: about two-thirds of the available training data is used to grow each tree, and the remaining one-third is always used to calculate the out-of-bag error and assess model performance.

**Preparing Data for Random Forest**

**1. Imbalance Data set**

A data set is class-imbalanced if one class contains significantly more samples than the other. In other words, non-events far outnumber events in the dependent variable.

In such cases, it is challenging to create an appropriate testing and training data sets, given that most classifiers are built with the assumption that the test data is drawn from the same distribution as the training data.

Presenting imbalanced data to a classifier will produce undesirable results such as a much lower performance on the testing than on the training data. To deal with this problem, you can do undersampling of non-events.

**Undersampling**

It means down-sizing the non-events by removing observations at random until the dataset is balanced.
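A minimal undersampling sketch in base R; the toy data frame and its `target` column are made up purely for illustration:

```r
set.seed(1)
# Toy imbalanced data: 90 non-events (0) vs. 10 events (1)
df <- data.frame(x = rnorm(100), target = c(rep(0, 90), rep(1, 10)))

events     <- df[df$target == 1, ]
non_events <- df[df$target == 0, ]

# Randomly keep only as many non-events as there are events
sampled    <- non_events[sample(nrow(non_events), nrow(events)), ]
balanced   <- rbind(events, sampled)

table(balanced$target)  # 10 of each class
```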

2. Random forest can be affected by multicollinearity, but it is not sensitive to outliers.

3. Missing values can be imputed within random forest, using the proximity matrix as a similarity measure.

**Terminologies related to random forest algorithm:**

1. Bagging (Bootstrap Aggregating)

Bagging generates m new training data sets. Each new training set is a sample of observations drawn with replacement (a bootstrap sample) from the original data set, so some observations may be repeated in each new training set. m models are then fitted on the m bootstrap samples and combined by averaging the outputs (for regression) or voting (for classification).
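The idea can be sketched with the rpart package (shipped with R). This is a hand-rolled illustration of bagging for regression, not how the randomForest package is implemented internally:

```r
library(rpart)
set.seed(1)

n <- nrow(iris)
m <- 25  # number of bootstrap samples / trees

# Fit one tree per bootstrap sample, predicting Sepal.Length (regression)
preds <- sapply(1:m, function(k) {
  boot <- iris[sample(n, n, replace = TRUE), ]
  fit  <- rpart(Sepal.Length ~ . - Species, data = boot)
  predict(fit, newdata = iris)
})

# Bagged prediction = average of the m trees' outputs
bagged <- rowMeans(preds)
head(bagged)
```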

In random forest, approximately two-thirds of the total training data (63.2%) is used for growing each tree, while the remaining one-third of the cases (36.8%) is left out and not used in the construction of that tree. As described in the steps above, each tree then votes for a class and the forest combines the votes (or averages the predictions, in the regression case).

2. Out-of-Bag Error (Misclassification Rate)

Out-of-Bag is equivalent to validation or test data. In random forests, there is no need for a separate test set to validate result. It is estimated internally, during the run, as follows:

As the forest is built on the training data, each tree is tested on the one-third of the samples (36.8%) not used in building that tree **(similar to a validation data set)**. This is the out-of-bag error estimate: an internal error estimate of a random forest as it is being constructed.

3. Bootstrap Sample

It is a method of sampling at random with replacement.

**Example:** Suppose we have a bowl of 100 unique numbers from 0 to 99, and we want to select a random sample of numbers from the bowl. If we put each drawn number back in the bowl, it may be selected more than once. In this process, we are sampling **randomly with replacement**.
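The 63.2% figure quoted earlier follows from this sampling scheme: the chance a given case appears in a bootstrap sample of size n is 1 − (1 − 1/n)^n, which approaches 1 − e⁻¹ ≈ 0.632. A quick empirical check in base R:

```r
set.seed(42)
n <- 10000

# Draw a bootstrap sample: n cases, with replacement
boot <- sample(n, n, replace = TRUE)

# Fraction of original cases that appear at least once ("in-bag")
frac_in_bag <- length(unique(boot)) / n
frac_in_bag  # close to 1 - exp(-1) = 0.632...
```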

4. Proximity (Similarity)

- Initialize proximities to zeroes
- For any given tree, apply the tree to all cases
- If case i and case j both end up in the same node, increase proximity prox(ij) between i and j by one
- Accumulate over all trees in RF and normalize by twice the number of trees in RF

It creates a proximity matrix (a square matrix with 1 on the diagonal and values between 0 and 1 in the off-diagonal positions). Observations that are “alike” will have proximities close to 1; the closer the proximity is to 0, the more dissimilar the cases are.

Proximity matrix is used for the following cases:

- Missing value imputation
- Outlier detection
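The proximity matrix described above is available from the randomForest package by setting proximity = TRUE; iris is used here purely for illustration:

```r
library(randomForest)
set.seed(1)

rf <- randomForest(Species ~ ., data = iris, ntree = 100, proximity = TRUE)

# Square matrix: 1 on the diagonal, similarities in [0, 1] off the diagonal
dim(rf$proximity)      # 150 x 150
rf$proximity[1:3, 1:3]
```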

**Shortcoming of Random Forest:**

- Random Forests aren't good at generalizing to cases with completely new data. For example, if I tell you that 1 chocolate costs $1, 2 chocolates cost $2, and 3 chocolates cost $3, how much do 10 chocolates cost? A linear regression can easily figure this out, while a Random Forest has no way of finding the answer.
- If a variable is categorical with many levels, random forests are biased in favor of that variable.

**The forest error rate depends on two things:**

1. The correlation between any two trees in the forest. Increasing the correlation increases the forest error rate.

2. The strength of each individual tree in the forest. A tree with a low error rate is a strong classifier. Increasing the strength of the individual trees decreases the forest error rate.

Reducing mtry (the number of random variables tried at each split) reduces both the correlation and the strength; increasing it increases both. Somewhere in between is an "optimal" range of mtry, usually quite wide. Using the OOB error rate, a value of mtry in this range can quickly be found. This is the only adjustable parameter to which random forests are somewhat sensitive.

**How to fine tune random forest**

There are two parameters to tune:

- Number of trees in the forest (ntree), and
- Number of random variables tried at each split (mtry).

**Find the optimal ntree**

First set **mtry** to the default value (square root of the total number of predictors) and search for the optimal ntree value. To find the number of trees that corresponds to a stable classifier, build random forests with different ntree values (100, 200, 300, …, 1,000). Build 10 RF classifiers for each ntree value, record the **OOB error rate**, and see **where the out-of-bag error rate stabilizes and reaches its minimum.**

**Find the optimal mtry**

There are two ways to find the optimal mtry:

- Apply a similar procedure: run the random forest 10 times for each candidate value. The optimal number of predictors selected for each split is the one for which the **out-of-bag error rate stabilizes and reaches its minimum.**
- Experiment with setting mtry to the **square root of the total number of predictors**, **half of that square root value**, and **twice the square root value**, and check which mtry returns the maximum area under the ROC curve. Thus, for 1,000 predictors, the number of predictors to try at each node would be 16, 32, and 64.
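The mtry search can be sketched with the randomForest package, reading the final OOB error from the model's err.rate matrix; iris and the candidate values here are illustrative only:

```r
library(randomForest)
set.seed(1)

p <- ncol(iris) - 1  # number of predictors (4 for iris)

# Try every candidate mtry and record the final OOB error of each forest
for (m in 1:p) {
  rf <- randomForest(Species ~ ., data = iris, mtry = m, ntree = 300)
  # The last row of err.rate holds the cumulative OOB error of the full forest
  cat("mtry =", m, " OOB error =", rf$err.rate[nrow(rf$err.rate), "OOB"], "\n")
}
```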

**Important Feature : Variable Importance**

Random forests can be used to rank the importance of variables in a regression or classification problem.

Interpretation: The MeanDecreaseAccuracy table represents how much removing each variable reduces the accuracy of the model.

**Calculation :**In every tree grown in the forest, put down the oob cases (1/3 of training data) and count the number of votes cast for the correct class. Now randomly permute the values of variable k in the oob cases and put these cases down the tree. Subtract the number of votes for the correct class in the variable-k-permuted oob data from the number of votes for the correct class in the untouched oob data. The average of this number over all trees in the forest is the raw importance score for variable k.
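The permutation idea can be illustrated in a simplified form: shuffle one predictor and measure the drop in accuracy. Note this toy version scores the whole training set rather than the per-tree OOB cases that randomForest's importance() actually uses:

```r
library(randomForest)
set.seed(1)

rf <- randomForest(Species ~ ., data = iris, ntree = 200)

# Accuracy on the untouched data
acc_orig <- mean(predict(rf, newdata = iris) == iris$Species)

# Accuracy after randomly permuting one predictor (Petal.Length)
shuffled <- iris
shuffled$Petal.Length <- sample(shuffled$Petal.Length)
acc_perm <- mean(predict(rf, newdata = shuffled) == iris$Species)

acc_orig - acc_perm  # drop in accuracy = raw importance of Petal.Length
```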

**Caution**

For data including categorical variables with different number of levels, random forests are biased in favor of those attributes with more levels. Methods such as conditional inference trees can be used to solve the problem.

**R Code : Missing Data Imputation**

There are two ways to impute missing data: the **na.roughfix** option and the **rfImpute()** function in the randomForest library. The second approach generally gives a better result.

The **na.roughfix** option is based on the following algorithm:

1. For numeric variables, NAs are replaced with the column medians.

2. For factor variables, NAs are replaced with the most frequent levels (breaking ties at random).

data(iris)

iris.na <- iris

set.seed(111)

## artificially drop some data values.

for (i in 1:4) iris.na[sample(150, sample(20)), i] <- NA

iris.roughfix <- na.roughfix(iris.na)

iris.narf <- randomForest(Species ~ ., iris.na, na.action=na.roughfix)

print(iris.narf)

The **rfImpute()** option follows the proximity matrix approach explained above and generally gives a better result.

data(iris)

iris.na <- iris

set.seed(111)

## artificially drop some data values.

for (i in 1:4) iris.na[sample(150, sample(20)), i] <- NA

set.seed(222)

iris.imputed <- rfImpute(Species ~ ., iris.na)

set.seed(333)

iris.rf <- randomForest(Species ~ ., iris.imputed)

print(iris.rf)

**Outliers**

Decision trees are less sensitive to extreme values, and hence so are random forests. If you want to detect outliers with a random forest, you can use the CORElearn package.

**R Code : Random forest based outlier detection**

#first create a random forest tree using CORElearn

dataset <- iris

md <- CoreModel(Species ~ ., dataset, model="rf", rfNoTrees=30, maxThreads=1)

outliers <- rfOutliers(md, dataset)

plot(abs(outliers))

#for a nicer display try

plot(md, dataset, graphType="outliers")

destroyModels(md) # clean up

**R : Random Forest**

Step I : Data Preparation

#Import data

mydata <- read.table("C:/Users/Deepanshu/Desktop/college.txt",sep=",",header=T)

#Explore data

nrow(mydata)

summary(mydata)

#Define variable types

mydata$college_flg<- as.factor(mydata$college_flg)

mydata$Degree_Flg<- as.factor(mydata$Degree_Flg)

mydata$Discipline_flg<- as.factor(mydata$Discipline_flg)

mydata$specialization<- as.numeric(mydata$specialization)

mydata$rating<- as.integer(mydata$rating)

#Install and load packages required for random forest

install.packages("party")

install.packages("randomForest")

install.packages("ROCR")

library(randomForest)

library(ROCR)

Step II : Run the following code several times with different "ntree" values

set.seed(71)

rf <-randomForest(income~.,data=mydata, ntree=200)

print(rf)

Note: If the dependent variable is a factor, classification is assumed; otherwise regression is assumed. If the response is omitted, randomForest runs in unsupervised mode.

**Random Forest R Parameters**

- **mtry:** number of variables selected at each split. By default, mtry = floor(sqrt(number of independent variables)) for a classification model and mtry = floor(number of variables / 3) for a regression model.
- **ntree:** number of trees to grow; default = 500.
- **nodesize:** minimum size of terminal nodes; default = 1 for classification.
- **replace:** whether sampling is done with replacement. **TRUE** implies with replacement, **FALSE** without; TRUE is the default.
- **sampsize:** sample size drawn from the data for growing each decision tree. By default, it takes 63.2% of the data.
- **importance:** whether importance of the variables should be assessed.

Step III : Find the number of trees where the out of bag error rate stabilizes and reach minimum.

Step IV : Find the optimal number of variables selected at each split

Select the **mtry** value (parameter used in the randomForest package) with the minimum out-of-bag (OOB) error, and use that number in the code below.

mtry <- tuneRF(mydata[-13], mydata$income, ntreeTry=200, stepFactor=1.5, improve=0.01, trace=TRUE, plot=TRUE)

best.m <- mtry[mtry[, 2] == min(mtry[, 2]), 1]

print(mtry)

print(best.m)

**Arguments**

**mydata[-13]** means "all but the 13th column of mydata", i.e. all variables except the 13th column are the **independent variables**. For the **dependent variable**, we give the algorithm **mydata$income**, which is the income column of mydata.

**ntreeTry** specifies the number of trees to build while trying out different mtry values.

**stepFactor** specifies that at each iteration, mtry is inflated (or deflated) by this value.

**improve** specifies the relative improvement in OOB error that must be achieved for the search to continue.

**trace** specifies whether to print the progress of the search.

**plot** specifies whether to plot the OOB error as a function of mtry.

**doBest** specifies whether to run a forest using the optimal mtry found. If doBest=FALSE (the default), tuneRF returns a matrix whose first column contains the mtry values searched and whose second column contains the corresponding OOB error.

Step V : Use the optimal number of variables selected at each split and run random forest again

set.seed(71)

rf <-randomForest(income~., data=mydata, mtry=best.m, importance=TRUE, ntree=200)

print(rf)

Step VI : Calculate Variable Importance

#Evaluate variable importance

importance(rf)

varImpPlot(rf)

Variable Importance Plot

The higher the mean decrease in accuracy or mean decrease in Gini score, the higher the importance of the variable in the model. In the plot shown above, GPA is the most important variable.

**Mean Decrease Accuracy** - How much the model accuracy decreases if we drop that variable.

**Mean Decrease Gini** - A measure of variable importance based on the Gini impurity index used for calculating splits in trees.

# Plot partial dependence of each predictor

par(mfrow = c(3, 5), mar = c(2, 2, 2, 2), pty = "s");

for (i in 1:(ncol(mydata) - 1))

{

partialPlot(rf, mydata, names(mydata)[i], xlab = names(mydata)[i],

main = NULL);

}

**R : Calculate Predicted Probability**

#Calculate predictive probabilities of training dataset.

pred1=predict(rf,type = "prob")

You can use the same code to score a test dataset:

pred1=predict(rf, test, type = "prob")

asd <-cbind(mydata,pred1)

write.table(asd,file="C:/Users/Deepanshu/Desktop/prob.csv",sep=",",row.names=F)

**R : Model Performance**

#Evaluate the performance of the random forest for classification.

pred2=predict(rf,type = "prob")

#prediction is ROCR function

perf = prediction(pred2[,2], mydata$income)

#performance in terms of true and false positive rates

1. Area under curve

auc = performance(perf, "auc")

2. True Positive and Negative Rate

pred3 = performance(perf, "tpr","fpr")

3. Plot the ROC curve

plot(pred3,main="ROC Curve for Random Forest",col=2,lwd=2)

abline(a=0,b=1,lwd=2,lty=2,col="gray")

#Fit a forest of conditional inference trees (party package) to inspect sample trees

cforest(income~., data=mydata, controls=cforest_control(mtry=2, mincriterion=0))

Great post - can you explain a bit about how the predicted probabilities are generated and what they represent in a more theoretical sense? I'm using randomForest but getting lots of 1.00 probabilities on my test set (bunching of probabilities), which is actually hurting me as I want to use them to filter out non-relevant records in an unbiased fashion for further downstream work. I'm finding that logistic regression has a lot less of this going on. I'm combining the models to try to get the best of both. But as we usually think a probability of 1.00 cannot exist, it's got me thinking about how to interpret probabilities from the RF model. Struggling to find a clear overview anywhere (will spend more time looking later).

Anyway, nice post - adding this blog to my list :)

Very informative - thank you. I'm having trouble going 1 step deeper and actually interpreting the output from the importance(model) command. Applied, to the iris data set, the output looks like this:

setosa versicolor virginica MeanDecreaseAccuracy MeanDecreaseGini

Sepal.Length 1.277324 1.632586 1.758101 1.2233029 9.173648

Sepal.Width 1.007943 0.252736 1.014141 0.6293145 2.472105

Petal.Length 3.685513 4.434083 4.133621 2.5139980 41.284869

Petal.Width 3.896375 4.421567 4.385642 2.5371353 46.323415

Can you please help me understand what the numbers in the first 3 columns are - for example, the number 1.277324 in Row 1/Col 1 (Sepal.Length & setosa)?

I know setosa is one of the 3 classes and Sepal.Length is a feature, but I can't figure out what 1.277324 refers to.

Thank you!

Hi, can you please explain the chart that is produced by plotting an RF model using the plot function?

Lovely post. Everything in one place in very simple language. I really enjoyed reading the article, even late at night.

Keep writing. I am looking forward to reading your other posts too.

Thank you!

Hi, your post is very great!

I don't understand partialPlot very well.

Could you explain more about it?

Great post, got everything I needed.

Say my predictor variables are a mix of categorical and numeric. Random forest tells me which predictors are important. If I want to know which level of a categorical predictor is important, how can I tell? Do I need to use other techniques, like GLM?

Awesome it is.. thanks a lot for sharing

Great post, keep writing ...

Can you share the dataset you are using here?

ReplyDeleteme too...the data files are local ... it would be useful to use the same data and run it

very detailed....

ReplyDeletethanks. really helpful

Excellent writing! Thanks for sharing your valuable experience!

Great post! Really appreciate your effort to write all this down. It really helped a lot! It now feels that using R is much simpler.

ReplyDeleteHello,

Thanks for the tutorial. I am confused about one point. I used 10-fold cross-validation and applied the RF model. To check the model performance, I did the same thing as you -> perf = prediction(pred2[,2], mydata$income). I got the error "Number of predictions in each run must be equal to the number of labels for each run."

I did some searching but couldn't solve the problem. If you could help me, I would appreciate it.