In recent years, much attention has gone to bringing machine learning models into production, whereas until a few years ago the results of machine learning ended up in slides or dashboards. Bringing machine learning into production matters because it lets you integrate the outputs of machine learning with other systems.
What does “bringing into production” mean?
To bring a machine learning model into production means to run the model regularly and to integrate and use its output in other systems.
There are several ways to bring your machine learning models into production, such as:
- Building a web service around the model and using it in real time (API calls, microservice architecture)
- Scheduling your code to run regularly (with tools such as Oozie or Airflow)
- Stream analytics (such as Spark Streaming) for lambda/kappa architectures
The focus of this article is the first option. This method is commonly used in web-based environments and microservice architectures.
Python and modeling
In this article, we build a sample machine learning model for the Online Shoppers’ Purchasing Intention dataset, available and discussed at https://bit.ly/2UnSeRX.
Below you can find the code for the data preparation and modeling:
import pandas as pd # to use DataFrames etc.
import numpy as np # for arithmetic operations
from time import strptime # to convert month abbreviations to numeric values
from sklearn.model_selection import train_test_split # to split up the samples
from sklearn.tree import DecisionTreeRegressor # regression tree model
from sklearn.metrics import confusion_matrix # to check the confusion matrix and evaluate the accuracy
# reading the data (file name as published in the UCI repository)
dataset = pd.read_csv("online_shoppers_intention.csv")
# preparing for split
y = dataset["Revenue"].map(lambda x: int(x))
X = dataset.drop("Revenue", axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# data prep phase
def data_prep(df):
    local_data = df.copy() # making a copy (to be able to change the values)
    # The problem is "June" is the full month name while the rest are abbreviations --> we turn all into abbreviations
    local_data["Month"] = local_data["Month"].map(lambda x: str.replace(x, "June", "Jun"))
    # Our model doesn't ingest text, so we transform the month into an int
    local_data["Month"] = local_data["Month"].map(lambda x: strptime(x, "%b").tm_mon)
    # The weekend flag should also be turned into an int
    local_data["Weekend"] = local_data["Weekend"].map(lambda x: int(x))
    # turning the visitor-type string into int category codes
    local_data["VisitorType"] = local_data["VisitorType"].astype('category').cat.codes
    return local_data
# sending the data through the data prep phase
X_train = data_prep(X_train)
X_test = data_prep(X_test)
# define the regression tree
regr_tree = DecisionTreeRegressor(max_depth=200)
# fitting the tree with the training data
regr_tree.fit(X_train, y_train)
# running the predictions (rounded to 0/1, since a regression tree can emit fractional values)
predictions = regr_tree.predict(X_test).round()
# looking at the confusion matrix
print(confusion_matrix(y_test, predictions))
For data scientists, the above code should be very familiar: we read the data, do a little data wrangling, and model it with a decision tree.
Save the model
The next step, which does not regularly appear in a data scientist's workflow, is to save the model to disk. This step is necessary if you bring your Python code into production.
Below you can see how “joblib” assists with this:
from sklearn.externals import joblib # in scikit-learn 0.23+ use: import joblib
joblib.dump(regr_tree, 'model3.pkl') # serialize the fitted model to disk
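Serialization is worth a sanity check: a model loaded from disk should reproduce the original model's predictions exactly. Below is a minimal sketch of that round trip; the toy data and the plain `joblib` import (which older scikit-learn versions exposed as `sklearn.externals.joblib`) are assumptions for illustration:

```python
import os
import tempfile

import joblib  # shipped as a dependency of scikit-learn
from sklearn.tree import DecisionTreeRegressor

# toy data standing in for the prepared training set
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)

# dump the fitted model to disk and load it back, as the web service would
path = os.path.join(tempfile.mkdtemp(), "model3.pkl")
joblib.dump(tree, path)
restored = joblib.load(path)

# the restored model reproduces the original predictions
assert list(restored.predict(X)) == list(tree.predict(X))
```

If this assertion holds, the pickled file is safe to hand over to the serving side.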
Build your Flask back-end
If you are not familiar with building back-end programs and RESTful APIs, I highly recommend reading https://bit.ly/2AVTwxW and other related materials. In short, web services and RESTful APIs are servers that provide functions; an application can call those functions remotely and get the outputs back. In our example, we call our machine learning model from anywhere over the internet via the TCP/IP protocol. Once the model is called with the data, the result of the classification is sent back to the client, i.e. the computer that called the machine learning model.
Discussing the details of web services and web APIs is beyond the scope of this article, but you can find many interesting articles on the topic with an internet search.
Below we use Flask to build the web service around the machine learning model:
from flask import Flask, request
from flask_restful import Resource, Api
from sklearn.externals import joblib # in scikit-learn 0.23+ use: import joblib
import pandas as pd
app = Flask(__name__)
api = Api(app)
class Classify(Resource): # the resource class was missing from the original listing; its name is assumed
    def get(self): # GET is used because web servers can cache its outputs, which is faster in general
        data = request.get_json() # reading the data
        data1 = pd.DataFrame.from_dict(data, orient='index') # converting the data into a DataFrame (as our technique does not ingest JSON)
        data1 = data1.transpose() # the JSON-converted DataFrame is not columnar, so we transpose it
        model = joblib.load('../model3.pkl') # loading the model from disk
        result = list(model.predict(data1)) # conversion to list because numpy.ndarray cannot be jsonified
        return result # returning the result of the classification
api.add_resource(Classify, '/classify') # route name assumed
if __name__ == '__main__':
    app.run()
Test your API
You can use various techniques to test whether the back-end works. I use the Postman application to check that the API is working.
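If you prefer a scripted check over a GUI tool, Flask ships a test client that exercises a route without starting a server. The sketch below uses a plain Flask view as a stand-in for the model-backed resource; the route name and payload are made up for illustration:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# a stand-in endpoint; in the real service this is the model-backed resource
@app.route("/classify", methods=["GET"])
def classify():
    return jsonify({"received": request.get_json()})

client = app.test_client()  # drives the app in-process, no web server needed
resp = client.get("/classify", json={"Month": 11, "Weekend": 1})
assert resp.status_code == 200
print(resp.get_json())  # the payload echoed back by the stand-in endpoint
```

The same pattern works against the real resource class, which makes it easy to turn these checks into automated tests.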
Keep in mind that we made a GET endpoint in our Flask application. The motivation behind choosing GET is that web servers can cache the results, which helps with the speed of the web service.
Another consideration is that we send the data for the call in JSON format (in the shape it has after the data preparation phase), and the results come back in JSON as well.
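From a Python client, such a call could be built with the requests library. In the sketch below the host, port, and route are placeholders, and the request is only prepared, not sent, since no server is running here:

```python
import requests

# one observation, already in post-data-prep shape (Month and Weekend as ints)
payload = {"Month": 11, "VisitorType": 2, "Weekend": 1}

# build the GET request carrying the JSON body; a Session.send() would dispatch it
req = requests.Request("GET", "http://localhost:5000/classify", json=payload).prepare()
print(req.headers["Content-Type"])  # application/json
```

Note that the json= argument is what sets the Content-Type header, which the Flask side relies on when calling request.get_json().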
I personally like to bring machine learning into production using RESTful APIs, and the motivation behind it is the microservices architecture. A microservices architecture lets developers build loosely coupled services and enables continuous delivery and deployment.
To scale up your web service there are many choices, of which I would recommend load balancing with Kubernetes.