
Decision Tree Algorithms Simplified 2

One of the advantages of using a decision tree is that it efficiently identifies the most significant variable and splits the population on it. In the previous article, we developed a high-level understanding of decision trees. In this article, we will focus on the science behind splitting nodes and choosing the most significant split.

Decision trees can use various algorithms to split a node into two or more sub-nodes. Creating sub-nodes increases their homogeneity; in other words, the purity of each node increases with respect to the target variable. A decision tree tries splits on all available variables and then selects the split that produces the most homogeneous sub-nodes.

The choice of algorithm also depends on the type of target variable. Let’s look at the four most commonly used algorithms in decision trees:

Gini Index:

The Gini index says that if we select two items from a population at random, they must be of the same class, and the probability of this is 1 if the population is pure.

  1. It works with a categorical target variable such as “Success” or “Failure”.

  2. It performs only binary splits.

  3. The higher the value of Gini, the higher the homogeneity.

  4. CART (Classification and Regression Trees) uses the Gini method to create binary splits.

Steps to calculate Gini for a split:

  1. Calculate the Gini for each sub-node using the formula: sum of the squares of the probabilities of success and failure (p^2 + q^2).

  2. Calculate the Gini for the split as the weighted Gini score of each node of that split.

Example: Referring to the example used in the previous article, where we want to segregate students based on the target variable of playing cricket or not, we split the population of 30 students using two input variables, Gender and Class. Now we want to identify which split produces more homogeneous sub-nodes using the Gini index.

Split on Gender:

  1. Calculate the Gini for sub-node Female = (0.2)*(0.2) + (0.8)*(0.8) = 0.68

  2. Gini for sub-node Male = (0.65)*(0.65) + (0.35)*(0.35) = 0.55

  3. Calculate the weighted Gini for split Gender = (10/30)*0.68 + (20/30)*0.55 = 0.59

Similarly, for the split on Class:

  1. Gini for sub-node Class IX = (0.43)*(0.43) + (0.57)*(0.57) = 0.51

  2. Gini for sub-node Class X = (0.56)*(0.56) + (0.44)*(0.44) = 0.51

  3. Calculate the weighted Gini for split Class = (14/30)*0.51 + (16/30)*0.51 = 0.51

Above, you can see that the Gini score for the split on Gender is higher than for Class, so the node will be split on Gender.
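To make the arithmetic concrete, here is a minimal Python sketch of the calculation above. The counts per sub-node come from the worked example; the function names are illustrative, not from any library.

```python
def gini_node(p_success):
    """Gini purity of a node: p^2 + q^2, where q = 1 - p."""
    q = 1 - p_success
    return p_success ** 2 + q ** 2

def gini_split(sub_nodes, total):
    """Weighted Gini of a split; sub_nodes is a list of (count, p_success)."""
    return sum((count / total) * gini_node(p) for count, p in sub_nodes)

total = 30
# Gender: Female (10 students, 2 play cricket), Male (20 students, 13 play)
gender = gini_split([(10, 2 / 10), (20, 13 / 20)], total)
# Class: IX (14 students, 6 play), X (16 students, 9 play)
klass = gini_split([(14, 6 / 14), (16, 9 / 16)], total)
print(f"Gini for split Gender: {gender:.2f}")  # ~0.59
print(f"Gini for split Class:  {klass:.2f}")   # ~0.51
```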

Chi-Square:

It is an algorithm for finding the statistical significance of the differences between sub-nodes and the parent node. We measure it by the sum of squares of the standardized differences between the observed and expected frequencies of the target variable.

  1. It works with a categorical target variable such as “Success” or “Failure”.

  2. It can perform two or more splits.

  3. The higher the value of Chi-square, the higher the statistical significance of the differences between a sub-node and its parent node.

  4. The Chi-square of each node is calculated using the formula: Chi-square = ((Actual - Expected)^2 / Expected)^(1/2)

  5. It generates a tree called CHAID (Chi-square Automatic Interaction Detector).

Steps to Calculate Chi-square for a split:

  1. Calculate the Chi-square for each individual node by calculating the deviation for both Success and Failure.

  2. Calculate the Chi-square of the split as the sum of the Chi-square values for Success and Failure of each node of the split.

Example: Let’s work with the same example that we used above to calculate Gini.

Split on Gender:

  1. First, populate the actual values for the Female node: “Play Cricket” and “Not Play Cricket” are 2 and 8 respectively.

  2. Calculate the expected values for “Play Cricket” and “Not Play Cricket”. Here both would be 5, because the parent node has a probability of 50% and we apply the same probability to the Female count (10).

  3. Calculate the deviations using the formula Actual - Expected: for “Play Cricket” it is 2 - 5 = -3, and for “Not Play Cricket” it is 8 - 5 = 3.

  4. Calculate the Chi-square of the node for “Play Cricket” and “Not Play Cricket” using the formula ((Actual - Expected)^2 / Expected)^(1/2). For the Female node this gives sqrt(9/5) ≈ 1.34 for each outcome.

  5. Follow similar steps to calculate the Chi-square values for the Male node: sqrt(9/10) ≈ 0.95 for each outcome.

  6. Now add up all the Chi-square values to get the Chi-square for the split on Gender: 1.34 + 1.34 + 0.95 + 0.95 ≈ 4.58.

Split on Class:

Perform similar steps for the split on Class. With expected values of 7 per outcome in Class IX and 8 in Class X, the per-cell Chi-square values come to about 0.38 for Class IX and 0.35 for Class X, giving a total of about 1.46 for the split. The code sketch below reproduces both calculations.

Above, you can see that Chi-square also identifies the Gender split as more significant than the Class split.
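Here is a short Python sketch that reproduces both Chi-square calculations, using the formula above and the counts from the example; the helper names are illustrative.

```python
from math import sqrt

def chi_square_cell(actual, expected):
    """Standardized deviation for one outcome cell of a node."""
    return sqrt((actual - expected) ** 2 / expected)

def chi_square_split(nodes, p_parent=0.5):
    """Sum of cell values over all nodes; nodes is a list of (success, failure)."""
    total_chi = 0.0
    for success, failure in nodes:
        n = success + failure
        expected_success = n * p_parent        # parent probability of success (15/30 here)
        expected_failure = n * (1 - p_parent)
        total_chi += chi_square_cell(success, expected_success)
        total_chi += chi_square_cell(failure, expected_failure)
    return total_chi

# Gender: Female (2 play, 8 not), Male (13 play, 7 not)
print(f"Chi-square for split Gender: {chi_square_split([(2, 8), (13, 7)]):.2f}")  # ~4.58
# Class: IX (6 play, 8 not), X (9 play, 7 not)
print(f"Chi-square for split Class:  {chi_square_split([(6, 8), (9, 7)]):.2f}")   # ~1.46
```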

Information Gain:

Consider three nodes A, B and C, where node C contains only one class, node B contains a slight mix of classes, and node A a heavy mix. Which node can be described most easily? The answer is C, because all its values are similar, so it requires the least information to describe; B requires more information, and A even more. In other words, C is a pure node, B is less impure, and A is more impure.

Now we can conclude that a less impure node requires less information to describe it, while a more impure node requires more. Information theory has a measure for this degree of disorganization in a system, called entropy. If the sample is completely homogeneous, the entropy is zero; if the sample is equally divided (50/50), it has an entropy of one.

Entropy can be calculated using the formula:

Entropy = -p log2(p) - q log2(q)

Here p and q are the probabilities of success and failure respectively in that node. Entropy is also used with categorical target variables. We choose the split that has the lowest entropy compared to the parent node and the other splits.

Steps to calculate entropy for a split:

  1. Calculate entropy of parent node

  2. Calculate the entropy of each individual node of the split, then take the weighted average over all sub-nodes in the split.

Example: Let’s use this method to identify the best split for the student example.

  1. Entropy for the parent node = -(15/30) log2(15/30) - (15/30) log2(15/30) = 1. Here 1 shows that it is a completely impure node.

  2. Entropy for the Female node = -(2/10) log2(2/10) - (8/10) log2(8/10) = 0.72, and for the Male node = -(13/20) log2(13/20) - (7/20) log2(7/20) = 0.93.

  3. Entropy for split Gender = Weighted entropy of sub-nodes = (10/30)*0.72 + (20/30)*0.93 = 0.86

  4. Entropy for the Class IX node = -(6/14) log2(6/14) - (8/14) log2(8/14) = 0.99, and for the Class X node = -(9/16) log2(9/16) - (7/16) log2(7/16) = 0.99.

  5. Entropy for split Class = (14/30)*0.99 + (16/30)*0.99 = 0.99

Above, you can see that the entropy for the split on Gender is lower than for Class, so we again go with the Gender split. Information gain is the entropy of the parent node minus the weighted entropy of the split; since the parent entropy here is 1, information gain = 1 - entropy of the split.
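To reproduce these numbers, here is a small Python sketch; the counts come from the example and the function names are illustrative.

```python
from math import log2

def entropy(p):
    """Entropy of a node: -p*log2(p) - q*log2(q), with q = 1 - p."""
    q = 1 - p
    if p == 0 or q == 0:          # a pure node has zero entropy
        return 0.0
    return -p * log2(p) - q * log2(q)

def entropy_split(sub_nodes, total):
    """Weighted entropy of a split; sub_nodes is a list of (count, p_success)."""
    return sum((count / total) * entropy(p) for count, p in sub_nodes)

total = 30
parent = entropy(15 / 30)                                     # 1.0 (maximally impure)
gender = entropy_split([(10, 2 / 10), (20, 13 / 20)], total)  # ~0.86
klass = entropy_split([(14, 6 / 14), (16, 9 / 16)], total)    # ~0.99
print(f"Entropy, split Gender: {gender:.2f} (gain {parent - gender:.2f})")
print(f"Entropy, split Class:  {klass:.2f} (gain {parent - klass:.2f})")
```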

Reduction in Variance:

So far, we have discussed algorithms for a categorical target variable. Reduction in variance is an algorithm for a continuous target variable. It uses the standard variance formula from descriptive statistics to choose the split: the split with the lower weighted variance is selected to split the population:

Variance = Σ (X - X̄)^2 / n

Above, X̄ is the mean of the values, X is an actual value, and n is the number of values.

Steps to calculate Variance:

  1. Calculate variance for each node.

  2. Calculate the variance for each split as the weighted average of the variance of each node.

Example: Let’s assign the numerical value 1 for playing cricket and 0 for not playing cricket. Now follow these steps to identify the right split:

  1. Variance for the root node: the mean value is (15*1 + 15*0)/30 = 0.5, and we have 15 ones and 15 zeros. The variance is ((1-0.5)^2 + (1-0.5)^2 + … 15 times + (0-0.5)^2 + (0-0.5)^2 + … 15 times) / 30, which can be written as (15*(1-0.5)^2 + 15*(0-0.5)^2) / 30 = 0.25

  2. Mean of Female node = (2*1 + 8*0)/10 = 0.2 and Variance = (2*(1-0.2)^2 + 8*(0-0.2)^2) / 10 = 0.16

  3. Mean of Male node = (13*1 + 7*0)/20 = 0.65 and Variance = (13*(1-0.65)^2 + 7*(0-0.65)^2) / 20 = 0.23

  4. Variance for split Gender = weighted variance of sub-nodes = (10/30)*0.16 + (20/30)*0.23 = 0.21

  5. Mean of Class IX node = (6*1 + 8*0)/14 = 0.43 and Variance = (6*(1-0.43)^2 + 8*(0-0.43)^2) / 14 = 0.24

  6. Mean of Class X node = (9*1 + 7*0)/16 = 0.56 and Variance = (9*(1-0.56)^2 + 7*(0-0.56)^2) / 16 = 0.25

  7. Variance for split Class = (14/30)*0.24 + (16/30)*0.25 = 0.25

Above, you can see that the Gender split has a lower variance than both the parent node and the Class split, so the split would be made on Gender.
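Here is a minimal Python sketch of the same calculation, using the 1/0 encoding above; the helper names are illustrative.

```python
def variance(values):
    """Variance = sum((x - mean)^2) / n."""
    n = len(values)
    mean = sum(values) / n
    return sum((x - mean) ** 2 for x in values) / n

def variance_split(sub_nodes, total):
    """Weighted variance of a split; sub_nodes is a list of value lists."""
    return sum((len(v) / total) * variance(v) for v in sub_nodes)

total = 30
root = [1] * 15 + [0] * 15                       # 15 play, 15 don't
female, male = [1] * 2 + [0] * 8, [1] * 13 + [0] * 7
ix, x = [1] * 6 + [0] * 8, [1] * 9 + [0] * 7

print(f"Root variance:         {variance(root):.2f}")                          # 0.25
print(f"Split Gender variance: {variance_split([female, male], total):.2f}")   # ~0.21
print(f"Split Class variance:  {variance_split([ix, x], total):.2f}")          # ~0.25
```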

Splitting / Pruning:

Above, we have looked at various algorithms to split a node into sub-nodes. To grow a decision tree, sub-nodes are split further into two or more sub-nodes, and all input variables are considered again for each new split; fields already used in a split remain candidates. It is a recursive process, and it stops when a node becomes pure, when the tree reaches its maximum depth, or when the number of records in a node falls to a preset limit.

In an extreme scenario, a decision tree can have as many leaf nodes as there are observations, but that would be a very complex tree. If we let a decision tree grow arbitrarily complex on the training data set, it overfits and loses predictive power because it does not generalize. Overfitting can be reduced by pruning nodes.
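As an illustration, here is how these stopping and pruning controls can be expressed with scikit-learn’s DecisionTreeClassifier. The use of scikit-learn is an assumption on our part, not something this article prescribes, and the parameter values are illustrative rather than tuned recommendations.

```python
from sklearn.tree import DecisionTreeClassifier

tree = DecisionTreeClassifier(
    criterion="gini",        # or "entropy" for the information gain criterion
    max_depth=5,             # stop once the tree reaches this depth
    min_samples_split=20,    # don't split nodes with fewer records than this
    min_samples_leaf=10,     # each sub-node must keep at least this many records
    ccp_alpha=0.01,          # cost-complexity pruning strength (0 = no pruning)
)
# tree.fit(X_train, y_train)  # X_train, y_train: your training data
```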
