Predictive Modeling Interview Questions and Answers (2024)

Deepanshu Bhalla
Predictive modeling is one of the most sought-after skills today. It is used in almost every domain, from finance and retail to manufacturing, and is seen as a way of solving complex business problems. It helps grow businesses, e.g. a predictive acquisition model or an optimization engine to solve network problems.

It is not easy to get into these roles, as they require a technical understanding of various statistical techniques and machine learning algorithms along with tools like SAS/R/Python. Hence, it is important to prepare well before going for an interview. To help you prepare, I've jotted down the most frequently asked interview questions on logistic regression, linear regression and predictive modeling concepts. In general, an analytics interview process includes multiple rounds of discussion. Possible rounds are as follows -
  1. Technical Round on Statistical Techniques and Machine Learning Concepts
  2. Technical Round on Programming Languages such as SAS/R/Python/SQL
  3. Managerial Round on Business/Domain Knowledge
During these multiple rounds of interviews, they also check your communication skills and logical/problem-solving skills.
Predictive Modeling Interview Questions

Let's start with a list of some basic and tricky predictive modeling interview questions with answers.

1. What are the essential steps in a predictive modeling project?

It consists of the following steps -
  1. Establish business objective of a predictive model
  2. Pull Historical Data - Internal and External
  3. Select Observation and Performance Window
  4. Create newly derived variables
  5. Split Data into Training, Validation and Test Samples (see the sketch after this list)
  6. Clean Data - Treatment of Missing Values and Outliers
  7. Variable Reduction / Selection
  8. Variable Transformation
  9. Develop Model
  10. Validate Model
  11. Check Model Performance
  12. Deploy Model
  13. Monitor Model
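
For step 5, here is a minimal sketch of a 60/20/20 split using scikit-learn. It assumes a binary target; make_classification is just a stand-in for your historical data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)  # stand-in data

# Carve out 20% as the test sample first ...
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42, stratify=y)

# ... then split the remainder 75/25 so the overall split is 60/20/20.
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=42, stratify=y_rest)
```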

2. What are the applications of predictive modeling?

Predictive modeling is mostly used in the following areas -
  1. Acquisition - Cross Sell / Up Sell
  2. Retention - Predictive Attrition Model
  3. Customer Lifetime Value Model
  4. Next Best Offer
  5. Market Mix Model
  6. Pricing Model
  7. Campaign Response Model
  8. Probability of Customers defaulting on loan
  9. Segment customers based on their homogeneous attributes
  10. Demand Forecasting
  11. Usage Simulation
  12. Underwriting
  13. Optimization - Optimize Network

3. Explain the problem statement of your project. What are the financial impacts of it?

Cover the objective or main goal of your predictive model. Compare the monetary benefits of the predictive model vs. no model, and also highlight the non-monetary benefits (if any).


4. Define observation and performance window?
Tutorial : Observation and Performance Window


5. Difference between Linear and Logistic Regression?

Two main differences are as follows -
  1. Linear regression requires the dependent variable to be continuous, i.e. numeric values (no categories or groups), while binary logistic regression requires the dependent variable to be binary - two categories only (0/1). Multinomial or ordinal logistic regression can have a dependent variable with more than two categories.
  2. Linear regression is based on least squares estimation, which says the regression coefficients should be chosen in such a way that they minimize the sum of the squared distances of each observed response to its fitted value, while logistic regression is based on maximum likelihood estimation, which says the coefficients should be chosen in such a way that they maximize the probability of Y given X (the likelihood).
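A quick side-by-side sketch of the two techniques on simulated data (scikit-learn's LinearRegression fits by least squares; LogisticRegression fits by maximum likelihood):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y_cont = X @ [1.5, -2.0, 0.5] + rng.normal(size=200)             # continuous target
y_bin = (X @ [1.5, -2.0, 0.5] + rng.logistic(size=200) > 0) * 1  # binary target

LinearRegression().fit(X, y_cont)   # least squares estimation
LogisticRegression().fit(X, y_bin)  # maximum likelihood (L2-penalized by default)
```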
Please note there are more than 10 differences between these two techniques; refer to the link below -

6. How to handle missing values?

We fill/impute missing values using the following methods, or make missing values a separate category (a short sketch of the first two methods follows the list).
  1. Mean Imputation for Continuous Variables (No Outlier)
  2. Median Imputation for Continuous Variables (If Outlier)
  3. Cluster Imputation for Continuous Variables
  4. Imputation with a random value that is drawn between the minimum and maximum of the variable [Random value = min(x) + (max(x) - min(x)) * ranuni(SEED)]
  5. Impute Continuous Variables with Zero (Require business knowledge)
  6. Conditional Mean Imputation for Continuous Variables
  7. Other Imputation Methods for Continuous Variables - Predictive mean matching, Bayesian linear regression, Linear regression ignoring model error etc.
  8. WOE for missing values in categorical variables
  9. Decision Tree, Random Forest, Logistic Regression for Categorical Variables
  10. Decision Tree, Random Forest works for both Continuous and Categorical Variables
  11. Multiple Imputation Method
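A short pandas sketch of methods 1 and 2 plus the missing-as-category approach; the toy data and column names are hypothetical:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"income": [52000, np.nan, 61000, 300000, np.nan],
                   "city":   ["Delhi", None, "Mumbai", "Delhi", None]})

# Mean imputation for a continuous variable with no outliers
df["income_mean"] = df["income"].fillna(df["income"].mean())

# Median imputation when outliers are present (300000 pulls the mean up here)
df["income_median"] = df["income"].fillna(df["income"].median())

# Make missing a separate category for a categorical variable
df["city"] = df["city"].fillna("Missing")
```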

7. How to treat outliers?

There are several methods to treat outliers (a percentile-capping sketch follows the list) -
  1. Percentile Capping
  2. Box-Plot Method
  3. Mean plus minus 3 Standard Deviation
  4. Weight of Evidence
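A minimal sketch of percentile capping (method 1) on a skewed toy series:

```python
import numpy as np
import pandas as pd

x = pd.Series(np.random.lognormal(mean=10, sigma=1, size=1000))  # skewed toy data

# Cap values below the 1st percentile and above the 99th percentile
lower, upper = x.quantile([0.01, 0.99])
x_capped = x.clip(lower, upper)
```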


8. Explain Dimensionality / Variable Reduction Techniques

Unsupervised Method (No Dependent Variable)
  1. Principal Component Analysis (PCA)
  2. Hierarchical Variable Clustering (Proc Varclus in SAS)
  3. Variance Inflation Factor (VIF)
  4. Remove zero and near-zero variance predictors
  5. Mean absolute correlation - removes the variable with the largest mean absolute correlation with the other predictors. See the detailed explanation of mean absolute correlation
Supervised Method (In respect to Dependent Variable)

For Binary / Categorical Dependent Variable
  1. Information Value
  2. Wald Chi-Square
  3. Random Forest Variable Importance (illustrated after this list)
  4. Gradient Boosting Variable Importance
  5. Forward/Backward/Stepwise - Variable Significance (p-value)
  6. AIC / BIC score
For Continuous Dependent Variable
  1. Adjusted R-Square
  2. Mallows' Cp Statistic
  3. Random Forest Variable Importance
  4. AIC / BIC score
  5. Forward / Backward / Stepwise - Variable Significance
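As a quick illustration of the random forest route listed above, a sketch on simulated data (the sample size and feature count are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)

# Rank predictors by importance; the weakest candidates can be dropped.
ranking = sorted(enumerate(rf.feature_importances_), key=lambda t: -t[1])
print(ranking)
```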

9. Explain the equation of the logistic regression model
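
The model relates the log-odds of the event to a linear combination of the predictors:

$$\ln\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_k x_k$$

Equivalently, the predicted probability of the event (Y = 1) is

$$p = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k)}}$$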

10. What is multicollinearity and how do you deal with it?

Multicollinearity implies high correlation between independent variables; its absence is one of the assumptions of linear and logistic regression. It can be identified by looking at the VIF scores of the variables: VIF > 2.5 implies a moderate collinearity issue, while VIF > 5 is considered high collinearity.

It can be handled by an iterative process: remove the variable having the highest VIF, then check the VIF of the remaining variables. If any remaining variable has VIF > 2.5, repeat the step until all VIFs are <= 2.5.


11. How is VIF calculated and how do you interpret it?

VIF measures how much the variance (the square of the estimate's standard deviation) of an estimated regression coefficient is increased because of collinearity. If the VIF of a predictor variable were 9 (√9 = 3) this means that the standard error for the coefficient of that predictor variable is 3 times as large as it would be if that predictor variable were uncorrelated with the other predictor variables.

Steps for calculating the VIF of a variable
  1. Run a linear regression in which that independent variable is treated as the target and all the other independent variables are used as predictors
  2. Calculate VIF = 1/(1 - RSquared) from that regression (see the sketch below)
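A sketch using statsmodels; the simulated data is only for illustration, with x4 built to be collinear with x1:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools import add_constant

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(300, 3)), columns=["x1", "x2", "x3"])
df["x4"] = 0.9 * df["x1"] + rng.normal(scale=0.3, size=300)  # collinear with x1

X = add_constant(df)  # keep the intercept, as discussed in question 12
vifs = pd.Series([variance_inflation_factor(X.values, i)
                  for i in range(1, X.shape[1])], index=df.columns)
print(vifs.sort_values(ascending=False))

# Iterative handling from question 10: drop the highest-VIF variable,
# recompute, and repeat until every VIF is <= 2.5.
```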

12. Do we remove intercepts while calculating VIF?

No. VIF depends on the intercept because there is an intercept in the regression used to determine VIF. If the intercept is removed, R-square is not meaningful because it may be negative in which case one can get VIF < 1, implying that the standard error of a variable would go up if that independent variable were uncorrelated with the other predictors.

13. What is a p-value and how is it used for variable selection?

The p-value is the lowest level of significance at which you can reject the null hypothesis. In the case of independent variables, it indicates whether the coefficient of a variable is significantly different from zero.

14. How are AUC, Concordance and Discordance calculated?
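
In brief: take every pair of one event (Y = 1) and one non-event (Y = 0). The pair is concordant if the event has the higher predicted probability, discordant if it has the lower one, and tied if the probabilities are equal. AUC = % concordant pairs + 0.5 × % tied pairs. A minimal NumPy sketch, assuming y_true holds 0/1 outcomes and y_prob the predicted probabilities:

```python
import numpy as np

def concordance_stats(y_true, y_prob):
    # Compare predicted probabilities across every (event, non-event) pair.
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    diff = y_prob[y_true == 1][:, None] - y_prob[y_true == 0][None, :]
    concordant = np.mean(diff > 0)   # event scored higher than the non-event
    discordant = np.mean(diff < 0)   # event scored lower
    tied = np.mean(diff == 0)
    auc = concordant + 0.5 * tied    # AUC = concordance plus half the ties
    return concordant, discordant, tied, auc
```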


15. Explain important model performance statistics
  1. AUC > 0.7, with no significant difference between the AUC scores of training vs. validation.
  2. The maximum KS should lie in the top 3 deciles and should be more than 30 (a KS sketch follows this list).
  3. Rank ordering, with no break in rank ordering.
  4. Same signs of parameter estimates in both training and validation.
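A sketch of the KS calculation; the decile construction mirrors the usual gains-table approach, assuming y_true is 0/1 and y_prob the predicted probabilities:

```python
import pandas as pd

def ks_statistic(y_true, y_prob, n_bins=10):
    # KS = max gap between cumulative % of events and non-events across deciles.
    df = pd.DataFrame({"y": y_true, "p": y_prob})
    df["decile"] = pd.qcut(df["p"], n_bins, labels=False, duplicates="drop")
    grp = df.groupby("decile")["y"].agg(["sum", "count"]).sort_index(ascending=False)
    cum_events = grp["sum"].cumsum() / grp["sum"].sum()
    cum_nonevents = (grp["count"] - grp["sum"]).cumsum() / (grp["count"] - grp["sum"]).sum()
    return 100 * (cum_events - cum_nonevents).abs().max()
```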

16. Explain Gain and Lift Charts
Check out this tutorial : Understanding Gain and Lift Charts

17. Explain collinearity between continuous and categorical variables. Is VIF a correct method to compute collinearity in this case?

Collinearity between categorical and continuous variables is very common, and the choice of reference category for the dummy variables affects multicollinearity: changing the reference category of the dummy variables can avoid collinearity. Pick a reference category with the highest proportion of cases.

VIF is not a correct method in this case; VIFs should only be run for continuous variables. The t-test method can be used to check collinearity between a continuous and a dummy variable.
We can also safely ignore collinearity between dummy variables. To avoid high VIFs in this case, just choose a reference category with a larger fraction of the cases.

18. Assumptions of Linear Regression Model
Linear Regression Explained

19. How are WOE and Information Value calculated?
WOE and Information Value Explained

20. Difference between Factor Analysis and PCA?

The three main differences between these two techniques are as follows (a short sketch follows the list) -
  1. In Principal Components Analysis, the components are calculated as linear combinations of the original variables. In Factor Analysis, the original variables are defined as linear combinations of the factors.
  2. Principal Components Analysis is used as a variable reduction technique whereas Factor Analysis is used to understand what constructs underlie the data.
  3. In Principal Components Analysis, the goal is to explain as much of the total variance in the variables as possible. The goal in Factor Analysis is to explain the covariances or correlations between the variables.
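A short sketch contrasting the two in scikit-learn; the iris data is just a convenient stand-in:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA, FactorAnalysis
from sklearn.preprocessing import StandardScaler

X = StandardScaler().fit_transform(load_iris().data)

pca = PCA(n_components=2).fit(X)            # components explain total variance
fa = FactorAnalysis(n_components=2).fit(X)  # factors explain shared correlations
print(pca.explained_variance_ratio_)        # variance captured by each component
```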

21. What would happen if you define event incorrectly while building a model?

Suppose your target variable is attrition - a binary variable where 1 refers to an attrited customer and 0 to an active customer. In this case, your desired outcome (event) is 1, since you need to identify customers who are likely to leave.

Let's say you set 0 as the event in the logistic regression. The following would happen (see the sketch after this list) -

  1. The signs of the estimates would be opposite, implying the opposite behavior of the variables towards the target variable.
  2. Area under curve (AUC), concordance and discordance scores would be exactly the same. No change.
  3. Sensitivity and specificity scores would be swapped.
  4. No change in the Information Value (IV) of the variables.
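A sketch demonstrating points 1 and 2 on simulated data - fitting the same model with the event flipped roughly negates the coefficients while leaving AUC unchanged:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=500, random_state=7)

m1 = LogisticRegression().fit(X, y)      # 1 (attrition) as the event
m0 = LogisticRegression().fit(X, 1 - y)  # 0 mistakenly set as the event

print(np.abs(m1.coef_ + m0.coef_).max())               # ~0: the signs flip
p1 = m1.predict_proba(X)[:, 1]
p0 = m0.predict_proba(X)[:, 1]
print(roc_auc_score(y, p1), roc_auc_score(1 - y, p0))  # the AUCs match
```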

22. What is Fisher Scoring in Logistic Regression?

Logistic regression estimates are calculated by maximizing the likelihood function. The maximization of the likelihood is obtained by an iterative method called Fisher's scoring. It is an optimization technique. In general, there are two popular iterative methods for estimating the parameters of a non-linear equation. They are as follows -
  1. Fisher's Scoring
  2. Newton-Raphson
Both are similar, except that Newton-Raphson uses the observed matrix of second-order derivatives of the log-likelihood function (the Hessian), while Fisher's scoring uses its expected value, the Information Matrix. In SAS, the default optimization method in PROC LOGISTIC is Fisher's scoring.

The algorithm completes when the convergence criterion is satisfied or when the maximum number of iterations has been reached. Convergence is obtained when the difference between the log-likelihood function from one iteration to the next is small.
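A minimal NumPy sketch of the iteration, assuming a 0/1 target. For the logit link the observed and expected information coincide, so Fisher's scoring and Newton-Raphson produce the same updates here:

```python
import numpy as np

def logistic_fisher_scoring(X, y, max_iter=25, tol=1e-8):
    X = np.column_stack([np.ones(len(X)), X])  # add an intercept column
    beta = np.zeros(X.shape[1])
    ll_old = -np.inf
    for _ in range(max_iter):
        p = 1 / (1 + np.exp(-X @ beta))
        ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        if abs(ll - ll_old) < tol:  # converge on a small log-likelihood change
            break
        ll_old = ll
        info = X.T @ ((p * (1 - p))[:, None] * X)     # Information Matrix X'WX
        beta += np.linalg.solve(info, X.T @ (y - p))  # scoring update
    return beta
```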

Technical Interview Questions on SAS and R

The following is a list of SAS/R technical interview questions that are generally asked. It includes some tricky questions which require hands-on experience.

SAS
  1. Difference between INPUT and PUT Functions
  2. How to generate serial numbers with SAS
  3. Difference between WHERE and IF statements
  4. Difference between '+' operator and SUM Function
  5. Use of COALESCE Function
  6. Difference between FLOOR and CEIL functions
  7. How to use arrays to recode all the numeric variables
  8. Number of ways you can create macro variables
  9. Difference between MERGE and SQL Joins
  10. How to calculate cumulative sum in SAS
You will find answers to the above questions in the links below -

R
  1. Difference between sort() and order() functions
  2. Popular R packages for decision tree
  3. How to transpose data in R
  4. How to remove duplicates in R
  5. Popular packages to handle big data
  6. How to perform LEFT join in R
  7. How R handles missing values
  8. How to join vertically two data frames
  9. Use of with() and by() functions
  10. Use of which() function
Check out the link below for solutions to the above questions, plus other interview questions on R.

SQL and Excel

Prior to interview, you can also look at questions on SQL concepts and Advanced Excel. SQL and Excel are still the most widely used tools for basic and intermediate analytics.
About Author:
Deepanshu Bhalla

Deepanshu founded ListenData with a simple objective - Make analytics easy to understand and follow. He has over 10 years of experience in data science. During his tenure, he worked with global clients in various domains like Banking, Insurance, Private Equity, Telecom and HR.

8 Responses to "Predictive Modeling Interview Questions and Answers (2024)"
  1. Thanks, this post is very helpful.

  2. One of the best websites for analytics professionals.

  3. Hi,

    I am not able to open any of the links mentioned, e.g. Detailed Tutorial : Model Performance at point 15.

  4. Hi, I am removing multicollinearity between continuous variables via VIF and factor loadings. Can I use the same approach for categorical variables as well?

  5. I am not very clear with the answer to question 17. Can you please elaborate? Thanks in advance.

  6. Extremely helpful... Can you please elaborate whether it is possible to merge datasets with duplicates/duplicate IDs?
