Test 2: Machine learning

Here are the answers; see the notebook here.

In this test we will use the entire dataset from the Walmart Kaggle challenge, do some feature engineering and data munging, then fit a random forest model to our data.

Again, the data is a csv file which contains one line for each scan in Walmart's system, with a Upc, Weekday, ScanCount, DepartmentDescription and FinelineNumber.

The VisitNumber column groups our data into baskets: every unique VisitNumber is a unique basket, and a basket may contain multiple scans.

The label is the TripType column, which is Walmart's proprietary way of clustering their visits into categories. We wish to match their algorithm and predict the category of our held-out data.

This time we will use the full dataset: about 650,000 lines in about 100,000 baskets. Just as a heads up, with 100 trees in the random forest, my answer to the test takes less than 3 minutes to run, so there is no need for hours and hours of computation.
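Once the data is loaded in question 2 (into a DataFrame called dat, as below), a quick sanity check of those figures might look something like this - just a sketch, not part of the marked answer:

#check the rough size of the dataset
print(dat.shape)                      #roughly 650,000 rows
print(dat['VisitNumber'].nunique())   #roughly 100,000 baskets
print(dat['TripType'].nunique())      #number of distinct trip type labels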

If you do need to run this script multiple times, download the dataset from the website once and keep a local copy rather than redownloading it each time, as it's around 30 MB.
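One possible sketch of that approach (the local filename here is just an example): save a copy to disk on the first run, then read the cached file on later runs.

import os
import pandas as pd

url = 'http://jeremy.kiwi.nz/pythoncourse/assets/tests/test2data.csv'
local_copy = 'test2data.csv'  #example name for the cached file

if not os.path.exists(local_copy):
    #first run: download from the website and keep a copy on disk
    pd.read_csv(url).to_csv(local_copy, index = False)

dat = pd.read_csv(local_copy)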

Please answer the questions in the cells below them. Feel free to answer out of order, but leave comments saying where you carried out each answer. I am working more or less step by step through my answer; feel free to add extra predictors if you can think of them (one possibility is sketched after question 8 below).

1. Import the modules you will use for the rest of the test:

In [1]:

import pandas as pd
import numpy as np
from sklearn import ensemble
#train_test_split now lives in sklearn.model_selection
#(it was in sklearn.cross_validation when this course was written)
from sklearn.model_selection import train_test_split
import operator

2. Read in the data, and check its head. The data is available on the website at: http://jeremy.kiwi.nz/pythoncourse/assets/tests/test2data.csv

In [2]:

#read a local copy of the data - the same file is available at the URL above
dat = pd.read_csv("c:/users/jeremy/desktop/kaglewalmart/data/train.csv")
dat.head()
   TripType  VisitNumber Weekday           Upc  ScanCount  DepartmentDescription  FinelineNumber
0       999            5  Friday  6.811315e+10         -1     FINANCIAL SERVICES          1000.0
1        30            7  Friday  6.053882e+10          1                  SHOES          8931.0
2        30            7  Friday  7.410811e+09          1          PERSONAL CARE          4504.0
3        26            8  Friday  2.238404e+09          2  PAINT AND ACCESSORIES          3565.0
4        26            8  Friday  2.006614e+09          2  PAINT AND ACCESSORIES          1017.0

3. Convert the Weekday and DepartmentDescription columns into dummified data. For now they can be separate dataframes.

In [3]:

#now fix the categorical variables
weekdum = pd.get_dummies(dat['Weekday'])
weekdum.head()
departdum = pd.get_dummies(dat['DepartmentDescription'])
departdum.head()
1-HR PHOTO ACCESSORIES AUTOMOTIVE BAKERY BATH AND SHOWER BEAUTY BEDDING BOOKS AND MAGAZINES BOYS WEAR BRAS & SHAPEWEAR ... SEAFOOD SEASONAL SERVICE DELI SHEER HOSIERY SHOES SLEEPWEAR/FOUNDATIONS SPORTING GOODS SWIMWEAR/OUTERWEAR TOYS WIRELESS
0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0

5 rows × 68 columns

4. Drop the unneeded columns from the raw data. I suggest removing ‘Weekday’, ‘Upc’, ‘DepartmentDescription’ and ‘FinelineNumber’ (we could dummify Upc and FinelineNumber, but this would massively increase our data size).

In [4]:

#drop the useless columns:
dat = dat.drop(['Weekday', 'Upc', 'DepartmentDescription', 'FinelineNumber'], axis = 1)
dat.head()
   TripType  VisitNumber  ScanCount
0       999            5         -1
1        30            7          1
2        30            7          1
3        26            8          2
4        26            8          2

5. Correct the dummified data for the number of items bought in each scan, using the ScanCount column. I would recommend something like:

departdummies.multiply(dat['ScanCount'], axis = 0)

In [5]:

#correct for scancount - weight each department dummy by the number of items scanned
departdum = departdum.multiply(dat['ScanCount'], axis = 0)
#keep the raw ScanCount as a feature of its own, then drop it from the main frame
departdum['ScanCount'] = dat['ScanCount']
dat = dat.drop(['ScanCount'], axis = 1)

6. Concatenate the dummy variables back onto the main dataframe.

In [6]:

dat = pd.concat([dat, weekdum, departdum], axis = 1)
dat.head()
TripType VisitNumber Friday Monday Saturday Sunday Thursday Tuesday Wednesday 1-HR PHOTO ... SEASONAL SERVICE DELI SHEER HOSIERY SHOES SLEEPWEAR/FOUNDATIONS SPORTING GOODS SWIMWEAR/OUTERWEAR TOYS WIRELESS ScanCount
0 999 5 1.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 ... -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -1
1 30 7 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 1
2 30 7 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1
3 26 8 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2
4 26 8 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2

5 rows × 78 columns

7. Summarise the data for each basket (hint: if you group by columns, the .agg() method will not apply to them).

In [7]:

#sum every feature within each (TripType, VisitNumber) basket
dat1 = dat.groupby(['TripType', 'VisitNumber']).agg(sum)
dat1.head()
Friday Monday Saturday Sunday Thursday Tuesday Wednesday 1-HR PHOTO ACCESSORIES AUTOMOTIVE ... SEASONAL SERVICE DELI SHEER HOSIERY SHOES SLEEPWEAR/FOUNDATIONS SPORTING GOODS SWIMWEAR/OUTERWEAR TOYS WIRELESS ScanCount
TripType VisitNumber
3 106 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2
121 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2
153 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2
162 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2
164 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2

5 rows × 76 columns

8. Use the reset_index() method to remove your groupings. As we did not cover multiple indices in the lesson, my answer was:

dat1 = dat1.reset_index()

In [8]:

dat1 = dat1.reset_index()
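If you want to try the extra predictors mentioned in the introduction, one possibility (not part of my answer, and the column name here is made up) is to count how many distinct departments each basket touched, using the summed department dummy columns:

#hypothetical extra feature: number of distinct departments per basket
#(a department bought and returned in equal numbers nets to zero and will be missed)
dept_cols = [col for col in departdum.columns if col != 'ScanCount']
dat1['DistinctDepartments'] = (dat1[dept_cols] != 0).sum(axis = 1)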

9. Split the data into training and testing sets, using 0.25 of the data for the test set.

In [9]:

#take out the TripType label before splitting (this also answers question 11)
classes = dat1.TripType
dat1 = dat1.drop('TripType', axis = 1)
classes.head()

X_train, X_test, y_train, y_test = \
    train_test_split(dat1, classes, test_size = 0.25, random_state = 0)

10. Plot the training data using matplotlib or seaborn. Choose at least 3 meaningful plots to present aspects of the data.

In [10]:

#lots of good answers here - one possible set of plots is sketched below
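A sketch of the kind of plots that were acceptable; it assumes matplotlib is installed and uses the objects created in question 9:

import matplotlib.pyplot as plt

#how common each TripType label is in the training data
y_train.value_counts().plot(kind = 'bar', figsize = (10, 4), title = 'Baskets per TripType')
plt.show()

#distribution of basket sizes (net items per basket)
X_train['ScanCount'].plot(kind = 'hist', bins = 50, title = 'Net items per basket')
plt.show()

#number of baskets falling on each day of the week
days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
(X_train[days] > 0).sum().plot(kind = 'bar', title = 'Baskets per weekday')
plt.show()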

11. Take out the TripType from our dataframe - we don’t want our label as a feature.

Make sure to save it somewhere though, as our model needs to be fit to these labels.

In [11]:

#see part 9, where the TripType label was taken out before splitting

12. Describe and fit a RandomForestClassifier with 100 n_estimators.

In [12]:

model = ensemble.RandomForestClassifier(n_estimators=100)
model.fit(X_train, y_train)
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
            max_depth=None, max_features='auto', max_leaf_nodes=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=1,
            oob_score=False, random_state=None, verbose=0,
            warm_start=False)

13. What is the score of the model on the training data?

In [13]:

model.score(X_train, y_train)
0.99994425475576609

14. What is the score of the model on the testing data?

In [14]:

model.score(X_test, y_test)
0.65140683138927213

15. What is the most important variable? Can you explain the model?

In [15]:

importances = model.feature_importances_
max_index, max_value = max(enumerate(importances), key=operator.itemgetter(1))
print('Feature {x} was the most important, with an importance value of {y}'.format(x = dat1.columns[max_index], y = max_value))
Feature ScanCount was the most important, with an importance value of 0.16855133760881952

In [16]:

print('random forests are notoriously difficult to interpret - any explanation here was fine')
random forests are notoriously difficult to interpret - any explanation here was fine
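One way to get a partial explanation is to look at the top few importances together rather than just the single largest - a sketch using the model and dat1 from above:

#pair each column name with its importance and show the ten largest
importance_series = pd.Series(model.feature_importances_, index = dat1.columns)
print(importance_series.sort_values(ascending = False).head(10))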

Thanks for taking the Python Course!

Please save your notebook file as ‘your name - test2.ipynb’, and email it to jeremycgray+pythoncourse@gmail.com by the 29th of April.