
Applications of machine learning in manufacturing | Chapter 4: Applying machine learning models into production

Updated: Jul 3, 2023

In the previous article, the data analysis task was introduced; the next important task is how to apply machine learning to manufacturing so that it benefits the business. There are different approaches to putting machine learning into a business, each with its own advantages. This article is divided into three main parts. The first part introduces the definition of a machine learning model and the types of machine learning models. The second part describes the types of training options. The final part covers how to deploy and improve a machine learning model.

Read more about Applications of machine learning in manufacturing


Model definition

This is a key term that should be understood clearly. The term “model” is used widely in business. In this article, a “model” is defined as the combination of an algorithm and configuration details that can be used to make predictions on a new set of input data. It is a black box that takes in new raw input data and produces a prediction. A model is trained over a set of data: it is provided with an algorithm that it can use to reason over and learn from the data sources.

Figure 1: Model definition


Many tools, libraries and software programs can create models easily; for example, the scikit-learn library and Apache Spark are very popular.
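As a minimal sketch of what “creating a model” looks like in code, the following uses scikit-learn to fit a simple linear model. The runtime/energy numbers are invented purely for illustration:

```python
from sklearn.linear_model import LinearRegression

# Training data: energy consumption (kWh) as a function of machine runtime (hours).
# These numbers are made up for illustration.
X = [[1], [2], [3], [4]]      # input variable: runtime in hours
y = [2.0, 4.0, 6.0, 8.0]      # output variable: energy in kWh (here y = 2x)

model = LinearRegression()    # the algorithm
model.fit(X, y)               # training produces the "model": algorithm + learned configuration

# The trained model is now a black box: new raw input in, prediction out.
print(model.predict([[5]])[0])  # ≈ 10.0
```

The “configuration details” here are the learned coefficient and intercept stored inside `model`; together with the algorithm, they are what gets deployed.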


Types of machine learning models

Machine learning can be divided into two major types, depending on the problem: supervised learning and unsupervised learning.

Figure 2: Type of machine learning option


Supervised learning

The majority of practical machine learning uses supervised learning. Supervised learning is where you have input variables (𝑥) and an output variable (𝑦) and an algorithm is used to learn the mapping function from the input to the output.


𝑦 = 𝑓(𝑥)


The goal is to approximate the mapping function so well that when new input data (𝑥) comes in, the model can predict the output variable (𝑦) for that data.


Supervised learning problems can be further grouped into regression and classification problems.

  • Classification: A classification problem is when the output variable is a category, such as “yes”/“no” or “damage”/“no damage”.

  • Regression: A regression problem is when the output variable is a real value, such as “energy consumption”.
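To make the classification case concrete, here is a toy sketch in plain Python: a one-nearest-neighbour rule that labels a hypothetical vibration reading as “damage” or “no damage”. The readings are invented for illustration only:

```python
# Labelled training examples: (vibration reading, label). Invented data.
labelled = [(0.2, "no damage"), (0.3, "no damage"), (1.8, "damage"), (2.1, "damage")]

def classify(reading):
    # Pick the label of the closest known reading (1-nearest-neighbour).
    return min(labelled, key=lambda pair: abs(pair[0] - reading))[1]

print(classify(0.25))  # no damage
print(classify(2.0))   # damage
```

A regression version of the same idea would instead return a real number, such as a predicted energy consumption.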


Unsupervised machine learning

Unsupervised learning is where you only have input data (X) and no corresponding output variables.

The goal of unsupervised learning is to model the underlying structure or distribution of the data to learn more about the data.

The reason this is called unsupervised learning is that there is no correct answer or known output value for the algorithm to check its predictions against.

Unsupervised learning can be further grouped into clustering and association problems.

  • Clustering: A clustering problem is where you want to discover the inherent groupings in the data.

  • Association: An association rule learning problem is where you want to discover rules that describe large portions of your data.
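As a sketch of what a clustering algorithm does, the following is a minimal one-dimensional k-means (k = 2) in plain Python. The data points and initial centres are invented, and note that no labels are ever provided:

```python
# Minimal 1-D k-means sketch (k = 2): find groupings with no labels given.
data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centres = [0.0, 10.0]  # deliberate, deterministic initialisation

for _ in range(10):
    # Assignment step: attach each point to its nearest centre.
    clusters = [[], []]
    for x in data:
        idx = min((0, 1), key=lambda i: abs(x - centres[i]))
        clusters[idx].append(x)
    # Update step: move each centre to the mean of its cluster.
    centres = [sum(c) / len(c) for c in clusters]

print(centres)  # roughly [1.0, 8.07]: the two natural groups in the data
```

Real libraries (e.g. scikit-learn's `KMeans`) implement the same assign/update loop for many dimensions and clusters.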


Type of training options

There are two main approaches to training a model: batch and real-time.


Batch training

Running algorithms that require the full dataset for each update can be expensive when the data is large; training in batches helps this scale. Batch training is the most commonly used model training process: a machine learning algorithm is trained in one batch or several batches on the available data. Once this data is updated or modified, the model can be retrained if needed. While not strictly necessary for putting a model into production, batch training allows for a regularly refreshed version of the model.


Real-time training

Real-time training involves a continuous process of taking in new data and updating the model accordingly to improve its predictive power. In Spark, for example, this can be done with Spark Streaming and MLlib’s StreamingLinearRegressionWithSGD.


Model deployment options

Once the model has been trained, it must be deployed into production. The term “model deployment” is often interchanged with terms like “model serving”, “model scoring”, or “predicting”. While there are nuances as to when each term is used, and the choice is context-dependent, they all refer to the process of creating predicted values from a data source.


Operational databases

This option is sometimes considered real-time because the information is provided “as it is needed”; however, it is still a batch method. A batch process is run at whatever time suits the system (usually at night, when the factory is no longer in operation), and an operational database is then updated with the most recent predictions. The next morning, the application can fetch these predictions to take action.
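A minimal standard-library sketch of this pattern, using SQLite as a stand-in for the operational database; the machine IDs, readings, and scoring rule are invented placeholders for a real model:

```python
import sqlite3

# In-memory SQLite stands in for the operational database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE predictions (machine_id TEXT PRIMARY KEY, risk REAL)")

def nightly_batch_job():
    """Score every machine in one batch and persist the predictions."""
    readings = {"press_01": 0.9, "press_02": 0.2}   # stand-in for real sensor data
    for machine_id, vibration in readings.items():
        risk = min(1.0, vibration * 0.5)            # stand-in for model.predict(...)
        db.execute("INSERT OR REPLACE INTO predictions VALUES (?, ?)",
                   (machine_id, risk))
    db.commit()

nightly_batch_job()  # runs at night, while the factory is idle

# Next morning, the application fetches the stored prediction.
row = db.execute("SELECT risk FROM predictions WHERE machine_id = 'press_01'").fetchone()
print(row[0])  # 0.45
```

Note that between batch runs the stored values never change, which is exactly the staleness problem described next.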


One potential problem with this kind of deployment is that a data source may have changed unpredictably since the last batch job was run, so the stored prediction may no longer match what the model would predict now. The application can end up acting on stale results.

Figure 3: Operational database


Notes: Apache Spark is a very useful framework for batch processing. However, some people tend to keep PySpark separate from other useful Python tools, specifically the scikit-learn library. Apache Spark is good at taking generalized computing problems, executing them in parallel across many nodes, and splitting up the data to suit the job. Spark 2.3 allows the use of Pandas-based UDFs with Apache Arrow, which significantly speeds this up. If the model is created using scikit-learn, it is still possible to use the parallel processing power of Spark in a batch scoring implementation, rather than having to run scoring on a single node in plain old Python.


Real-time model serving

Some problems require that the model make predictions based on real-time data sources. Several deployment patterns can be used to make this work.


Online scoring with Kafka and Spark streaming

One way to improve the operational database process described above is to implement something that continuously updates the database as new data is generated. One solution is to use a scalable messaging platform like Kafka to send newly acquired data to a long-running Spark Streaming process. The Spark process can then make a new prediction based on the new data and write it to the operational database.

Figure 4: Deployment with Kafka


This reduces the risk of an application making decisions based on predictions computed from outdated data.


Web service API

Another way to deploy a model is to put a web service wrapper around it. CDSW (Cloudera Data Science Workbench), for example, implements real-time model serving by deploying a container that includes the model and the libraries necessary to build a REST API. This API takes a request with JSON data, delivers the data to the model, and returns a prediction value.

Figure 5: Web Service API
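A minimal standard-library sketch of such a web service wrapper (not CDSW itself): an HTTP handler accepts JSON, passes it to a placeholder model function, and returns the prediction as JSON. The field names and the model are invented:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def model_predict(features):
    """Placeholder for the real trained model."""
    return {"risk": min(1.0, features["vibration"] * 0.5)}

class ScoreHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        features = json.loads(self.rfile.read(length))   # JSON in
        body = json.dumps(model_predict(features)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)                           # prediction out

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), ScoreHandler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client application sends features and receives a prediction.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/score",
    data=json.dumps({"vibration": 0.9}).encode(),
    headers={"Content-Type": "application/json"},
)
result = json.loads(urllib.request.urlopen(req).read())
print(result)  # {'risk': 0.45}
server.shutdown()
```

In production this endpoint would sit behind a load balancer with multiple replicas, which is exactly the concurrency concern raised next.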


The one thing to look out for with this deployment pattern is managing the infrastructure needed to deal with concurrent load. If several requests happen at the same time, multiple API calls are placed to the endpoint. If there is not sufficient capacity, requests may take a long time to respond or even fail, which will be an issue for the application; under heavy overload, the whole service may go down.


Device scoring

Another type of model serving option is to move the ML models right to the edge and make a prediction on an edge device. The term edge device means anything connected to the cloud, where cloud refers to something like Microsoft Azure or a company’s remote server. This allows models to still be usable in situations with limited network capacity and push the compute requirements away from a central cluster to the devices themselves.


Unfortunately, this approach works only in relatively rare situations where the IoT devices are quite powerful, perhaps along the lines of a desktop PC or laptop. Also, neural network libraries such as CNTK and Keras/TensorFlow were designed to train models quickly and efficiently, but they were not necessarily designed for optimal performance when running inference with a trained model. In short, the easy solution of deploying a trained ML model directly to an IoT device on the edge is rarely feasible.


In conclusion, the main idea is to be able to make a new prediction based on the model information contained in the portable model format, without needing to connect back to the central model training cluster.
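A minimal sketch of that idea: the training side exports the learned parameters to a portable JSON format, and the edge device scores locally with plain arithmetic, never contacting the training cluster. The coefficients are invented for illustration:

```python
import json

# --- on the training cluster: export the learned parameters ---
trained_model = {"weights": [0.5, -0.25], "bias": 0.1}
portable = json.dumps(trained_model)   # the portable model format shipped to the device

# --- on the edge device: load parameters and score locally ---
params = json.loads(portable)

def predict(features):
    """Linear scoring with no ML library and no network connection."""
    return sum(w * x for w, x in zip(params["weights"], features)) + params["bias"]

print(predict([2.0, 4.0]))  # 0.1
```

Real portable formats (e.g. PMML or ONNX) follow the same principle with far richer model descriptions.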


Monitoring model performance

After the model is deployed into production and is providing utility to manufacturing, it is important to monitor how well it is performing. There are several aspects of performance to consider, and each has its own measurement that will have an impact on the life cycle of the model.


Model drift

Model drift is a term that refers to the degradation of a model’s prediction power due to changes in the environment, and thus the relationships between variables.

There are three types of model drift:

  • Concept drift is a type of model drift where the properties of the dependent (target) variable change in an unforeseen way. For example, in a weather prediction application there may be several target concepts, such as temperature, pressure, and humidity, whose behaviour changes over time.

  • Data drift is a type of model drift where the properties of the independent variable(s) change. Examples of data drift include changes in the data due to seasonality, changes in consumer preferences, the addition of new products, and so on.

  • Upstream data changes refer to operational changes in the data pipeline. An example is when a feature is no longer being generated, resulting in missing values. Another example is a change in units of measurement (e.g. miles to kilometres).
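As a simple sketch of drift monitoring, the following compares the mean of one input feature in recent production data against its mean at training time, and flags data drift when the shift exceeds a threshold. The numbers and threshold are illustrative:

```python
# Invented example: a temperature-like feature at training time vs. in production.
training_data = [20.1, 19.8, 20.3, 20.0, 19.9]
recent_data   = [24.9, 25.2, 25.1, 24.8, 25.0]

def drifted(baseline, recent, threshold=2.0):
    """Flag drift when the feature's mean has shifted by more than the threshold."""
    shift = abs(sum(recent) / len(recent) - sum(baseline) / len(baseline))
    return shift > threshold

print(drifted(training_data, recent_data))  # True: the mean shifted by ~5 units
```

Production monitoring systems use richer statistics (distributional tests, rolling error metrics), but the principle is the same: compare live data and predictions against a training-time baseline.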


If the model falls below an acceptable performance threshold, then a new process to retrain the model is initiated, and the newly trained model is deployed.

Figure 6: Model drift


There will also be times when the input has changed so much that the features originally selected are no longer relevant to the prediction, leading to poor model performance. It is then important to go back to the analytics task and re-examine the whole process, which may mean adding new features or eliminating irrelevant ones. In short, the model monitoring process is a critical part of the model lifecycle.


Conclusion

In general, there are many ways to apply machine learning models to manufacturing, and there is no single standard for how things should be implemented. In each factory, each piece of equipment may use machine learning in a different way. The key is to have a good understanding of the data and the model, and to know which model performance measurements matter to the manufacturing operation. This is the final article of this series. If you have any questions, feel free to contact us at info@daviteq.com.

Daviteq
