How to bring your Python machine learning model into production using RESTful APIs

AI Data Machine Learning

In recent years there has been much attention on bringing machine learning models into production, whereas until a few years ago the results of machine learning usually ended up in slides or dashboards. Bringing machine learning into production matters because it lets you integrate the outputs of your models with other systems.

What does “bring in production” mean?

Bringing a machine learning model into production means running the model regularly and integrating its output into other systems.

There are several ways to bring your machine learning models into production, such as:

  1. Build a web service around it and use it in real time (API calls, microservice architecture)
  2. Schedule your code to run regularly (e.g. with Oozie or Airflow)
  3. Stream analytics (e.g. Spark Streaming) for lambda/kappa architectures

The focus of this article is the first option. This method is commonly used in web-based environments and microservices architectures.

Python and modeling

In this article, we build a sample machine learning model for the Online Shoppers' Purchasing Intention Data Set, available and discussed at https://bit.ly/2UnSeRX.
Below you can find the code for data preparation and modeling:

python

import pandas as pd # to work with dataframes
import numpy as np # for arithmetic operations
from time import strptime # to convert month abbreviations to numeric values
from sklearn.model_selection import train_test_split # to split up the samples
from sklearn.tree import DecisionTreeRegressor # regression tree model
from sklearn.metrics import confusion_matrix # to evaluate the predictions

#reading the data
dataset=pd.read_csv(".../online_shoppers_intention.csv")

#preparing for split
df=dataset.drop("Revenue",axis=1)
y=dataset["Revenue"].map(lambda x: int(x))
X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.2)

#making a copy (to be able to change the values)
X_train=X_train.copy()
X_test=X_test.copy()

#data prep phase
def dataPrep(localData):
    # The problem is that "June" is the full month name while the rest are abbreviations --> turn all into abbreviations
    localData["Month"]= localData["Month"].map(lambda x: str.replace(x,"June","Jun"))
    # Our model does not ingest text, so we transform the month to an int
    localData["Month"]= localData["Month"].map(lambda x: strptime(x,"%b").tm_mon)
    # The Weekend flag should also be turned into an int
    localData["Weekend"]= localData["Weekend"].map(lambda x: int(x))
    # turning string to int
    localData["VisitorType"] = localData["VisitorType"].astype('category').cat.codes
    return localData

#Sending the data through data prep phase
X_train=dataPrep(X_train)
X_test=dataPrep(X_test)
#define the regression tree
regr_tree = DecisionTreeRegressor(max_depth=200)
#fitting the tree with the training data
regr_tree.fit(X_train, y_train)
# running the predictions
pred=regr_tree.predict(X_test)
# looking at the confusion matrix (rounding, since a regression tree can return fractional values)
confusion_matrix(y_test, pred.round().astype(int))

For data scientists, the above code should look very familiar: we read the data, do a little data wrangling, and model it with a decision tree.
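To read the confusion matrix, it helps to relate it to a single accuracy number. A tiny hypothetical illustration (the labels below are made up for the example, not taken from the dataset):

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical true labels and (rounded) tree predictions for five visitors
y_true = np.array([0, 1, 0, 0, 1])
y_pred = np.array([0, 1, 1, 0, 1])

cm = confusion_matrix(y_true, y_pred)  # rows: true class, columns: predicted class
acc = accuracy_score(y_true, y_pred)   # share of correct predictions
print(cm)   # [[2 1]
            #  [0 2]]
print(acc)  # 0.8
```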

Save the model

The next step, which does not usually appear in a data scientist's workflow, is to save the model to disk. This step is necessary if you want to bring your Python code into production.
Below you can see how joblib helps with this:

python
import joblib  # sklearn.externals.joblib is deprecated; use the standalone joblib package
joblib.dump(regr_tree, '.../model3.pkl')
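Before wiring the model into a web service, it is worth checking that the serialized file round-trips correctly. A minimal sketch with a toy tree (the tiny data set here is made up purely for illustration; in the article the dumped file is model3.pkl):

```python
import joblib
from sklearn.tree import DecisionTreeRegressor

# Toy training data, purely for illustration
X = [[0, 0], [1, 1], [2, 2]]
y = [0, 1, 1]

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
joblib.dump(tree, "model3.pkl")       # persist the fitted model to disk

restored = joblib.load("model3.pkl")  # load it back
# The restored model reproduces the original predictions
print(list(restored.predict(X)) == list(tree.predict(X)))  # True
```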

Build your Flask back-end

If you are not familiar with building back-end programs and RESTful APIs, I highly recommend reading https://bit.ly/2AVTwxW and other related materials. In short, web services and RESTful APIs are servers that expose functions; applications can call those functions remotely and get the outputs back. In our example, we can call our machine learning model from anywhere over the internet via TCP/IP. Once the model is called with the data, the classification result is returned to the client that made the call.
A detailed discussion of web services and web APIs is beyond the scope of this article, but you can find many good articles on the topic with an internet search.
Below we use Flask to build the web service around the machine learning model.

python
from flask import Flask, request
from flask_restful import Resource, Api
import joblib  # sklearn.externals.joblib is deprecated; use the standalone joblib package
import pandas as pd


app = Flask(__name__)
api = Api(app)



class Classify(Resource):
    def get(self): # GET is used here because its responses can be cached, which is generally faster
        data = request.get_json() # reading the data
        data1 = pd.DataFrame.from_dict(data, orient='index') # converting the JSON into a DataFrame (our model does not ingest JSON)
        data1 = data1.transpose() # orient='index' gives one row per key, so transpose back to one row per observation
        model = joblib.load('../model3.pkl') # loading the model from disk (for production, load it once at startup rather than per request)
        result = list(model.predict(data1)) # converting to list because numpy.ndarray cannot be JSON-serialized
        return result # returning the classification result



api.add_resource(Classify, '/classify')

if __name__ == '__main__':
    app.run(port=5001)

Test your API

You can use various tools to check that the back-end works; I use Postman to test whether the API responds correctly.
Keep in mind that we defined a GET endpoint in our Flask application. The motivation behind choosing GET is that web servers can cache the results, which helps the speed of the web service.
Another consideration is that we send the data as JSON (in the shape it has after the data preparation phase), and the result comes back as JSON as well.

json

{
    "Administrative":0,
    "Administrative_Duration": 0.0,
    "Informational": 0,
    "Informational_Duration": 0.0,
    "ProductRelated": 1,
    "ProductRelated_Duration": 0.0,
    "BounceRates": 0.2,
    "ExitRates": 0.2,
    "PageValues": 0.0,
    "SpecialDay": 0.0,
    "Month": 5,
    "OperatingSystems": 2,
    "Browser": 10,
    "Region": 5,
    "TrafficType": 1,
    "VisitorType": 87093223,
    "Weekend":0
}
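If you prefer testing from code rather than Postman, Flask's built-in test client can exercise the endpoint without a running server. The sketch below uses a stand-in rule in place of the real model so it runs without model3.pkl (the threshold on ExitRates is a made-up example, not the article's model):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/classify", methods=["GET"])
def classify():
    data = request.get_json()
    # Stand-in for model.predict(): flag visits with a high exit rate
    return jsonify([int(data["ExitRates"] > 0.1)])

# The test client issues requests directly to the app, no live server needed
client = app.test_client()
resp = client.get("/classify", json={"ExitRates": 0.2})
print(resp.get_json())  # [1]
```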

Microservices architecture

I personally like to bring machine learning into production using RESTful APIs, and the motivation behind it is the microservices architecture. A microservices architecture lets developers build loosely coupled services and enables continuous delivery and deployment.

Scaling up

To scale up your web service there are many options, of which I would recommend load balancing with Kubernetes.
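As a rough sketch, a Kubernetes Deployment plus a load-balancing Service could run several replicas of the containerized Flask app behind a single endpoint (the image name and labels below are hypothetical placeholders, not from this article):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-classify
spec:
  replicas: 3                     # three copies of the web service
  selector:
    matchLabels:
      app: ml-classify
  template:
    metadata:
      labels:
        app: ml-classify
    spec:
      containers:
        - name: api
          image: example/ml-classify:latest   # hypothetical container image
          ports:
            - containerPort: 5001             # the Flask app's port
---
apiVersion: v1
kind: Service
metadata:
  name: ml-classify
spec:
  type: LoadBalancer              # spreads incoming requests across the replicas
  selector:
    app: ml-classify
  ports:
    - port: 80
      targetPort: 5001
```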
