When we talk about predictive maintenance, we often refer to predicting the breakdown of an engine or a bearing using data such as vibration and rotational speed. However, there is another approach, where we use cameras to detect wear on, for example, frying belts and conveyor belts, rust on wind turbines, or leaks in pipes.
This blog post will introduce various examples of using cameras for predictive maintenance. Finally, we will give you some advice on what to be aware of when getting started with this method.
Examples of Using Image Recognition for Predictive Maintenance
Often, our own imagination is the only limit to where cameras can be used for predictive maintenance. Below are a few examples of how different industries can use this method.
Is your business struggling with overflow and soiling of wells? Image recognition can help you monitor critical areas in your network: you can receive an alarm when a machine learning model detects a blockage in a pipe, or when water rises in a well.
It will also be possible to detect when algae growth rises to a critical level. In both cases, you will get an alarm in good time, so that you can act before a real problem arises with overflow or blocked wells.
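As a minimal sketch of how such an alarm could be triggered, the snippet below (assumed thresholds and pure Python, not any specific product's API) only raises the alarm when the model's "blockage" confidence stays high over several consecutive images, so a single noisy prediction does not cause a false alarm:

```python
def should_alarm(confidences, threshold=0.8, consecutive=3):
    """Return True if the last `consecutive` scores all exceed `threshold`."""
    if len(confidences) < consecutive:
        return False
    return all(c > threshold for c in confidences[-consecutive:])

# One high score is not enough; three in a row trigger the alarm.
scores = [0.2, 0.9, 0.85, 0.92]
print(should_alarm(scores))  # True
print(should_alarm([0.2, 0.9, 0.1]))  # False
```

Requiring several consecutive detections is a simple way to trade a slightly slower alarm for far fewer false positives.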
If you, as a food company, have a frying belt, then image recognition can help identify small holes and curls on the belt so that you can schedule maintenance before the belt breaks.
27 Danish food companies have recently committed to reducing their food waste by 50% by 2030. Unplanned breakdowns in production often lead to food waste due to hygiene and temperature requirements, forcing you to discard raw ingredients.
So there is a good chance that image recognition can help these companies achieve their ambitious goals.
Another fun but slightly different use case for image recognition in the food industry is automating quality control. With image recognition, a chocolate maker, for example, can be notified if a piece is shaped incorrectly or is not fully coated with chocolate.
With image recognition, it is possible to detect rust on offshore wind turbines and plan maintenance based on how the rust develops. You can also perform visual inspections of the wind turbines with image recognition. This gives you a completely different degree of scale: you can monitor many wind turbines and attend only to those flagged as critical. Several studies, among others, use image recognition and drones to detect wear on the blades. It is both faster and cheaper to send out a drone than to send out a jack-up vessel, only to find that nothing is wrong.
What you must be aware of
When embarking on an image recognition project, the first thing to consider is how the images will be captured.
I would recommend that you first define the problem, then identify which sources to use. You can read much more about this in our blog post about our concept [Right Data](/blog/2020/02/big-data-vs-right-data/). Once you have found the right sources, you must identify the requirements for gathering data in the right format so that it is actually usable. For accelerometers, we must always take care not to introduce frequency aliasing. With image recognition, there are a few more things to be aware of, namely resolution, frame rate, and lighting. It is also important to consider the environment the camera sensor will be placed in, and to store the images in a structured way from the beginning so that you can quickly get started with the analyses.
It is important that the pictures have a high enough resolution that the things in the picture can clearly be separated from each other. Resolution is determined by the number of pixels in the image, and the resolution you need depends on how small the details you must distinguish are. In other words, it is the environment and the colors that to a large extent place demands on the resolution of your camera.
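A quick back-of-the-envelope check can tell you whether a given camera is even capable of resolving the defects you care about. The numbers below are illustrative assumptions, not recommendations:

```python
def pixels_per_defect(sensor_px, scene_width_mm, defect_mm):
    """How many pixels wide a defect of `defect_mm` appears in the image,
    given a sensor `sensor_px` pixels wide covering `scene_width_mm` of scene."""
    return sensor_px / scene_width_mm * defect_mm

# A 1920 px wide image covering a 600 mm wide belt:
# a 3 mm hole spans 1920 / 600 * 3 = 9.6 pixels.
print(pixels_per_defect(1920, 600, 3))  # 9.6
```

A defect spanning only a pixel or two is hard for any model to detect reliably, so a result like this tells you whether to zoom in, move the camera closer, or choose a higher-resolution sensor.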
If you need to detect holes in the frying belt or wear on a conveyor belt, it can be advantageous to wake the camera sensor so that it takes pictures at a high frame rate until the belt has completed one full rotation.
Then it can sleep again for one to two hours.
In such cases, you need to send many pictures across the bridge relatively quickly, which places demands on its bandwidth.
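The duty cycle described above can be sketched roughly as follows. This is a hypothetical outline: `capture` stands in for a real camera call, and in real code the loop would be paced with `time.sleep(1 / fps)` and followed by a long sleep between bursts:

```python
def capture_burst(capture, rotation_s, fps):
    """Capture `fps` frames per second for the duration of one belt rotation."""
    n_frames = int(rotation_s * fps)
    return [capture() for _ in range(n_frames)]

# A 30 s rotation filmed at 20 fps gives 600 frames per inspection burst.
frames = capture_burst(lambda: b"frame", rotation_s=30, fps=20)
print(len(frames))  # 600
```

Even at these modest assumed numbers, a single burst produces hundreds of images, which is why the bridge's bandwidth matters.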
If, on the other hand, you want to use image recognition to detect soiling in wells, you can settle for a few images over an extended period of time, as soiling does not develop as quickly. The frame rate will therefore be much lower than in the first example.
If the pictures are taken outside, such as at a wind farm, the lighting will change, depending on whether it is a sunny day or a rainy day.
Changes in color can in some cases have too great an impact on a machine learning model, for example if it ends up detecting rust whenever it rains. Fortunately, analysis tools can compensate for these color shifts, but it is something you need to be aware of.
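One common way to reduce lighting differences between images is to rescale each image so its pixel values span the full range. The sketch below uses pure Python on grayscale values 0-255 for illustration; real pipelines would use an image library instead:

```python
def normalize_lighting(pixels):
    """Stretch grayscale pixel values linearly so min -> 0 and max -> 255."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return [0 for _ in pixels]
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

# A dim image (values 50-100) and a bright one (150-200) of the same scene
# end up on the same scale after normalization.
print(normalize_lighting([50, 75, 100]))    # [0, 128, 255]
print(normalize_lighting([150, 175, 200]))  # [0, 128, 255]
```

After normalization, the model sees the same contrast whether the original shot was taken on a sunny or an overcast day, which makes it less likely to latch onto the weather instead of the rust.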
Conversely, in a well or a pipe underground, you will need artificial lighting so that the images do not come out completely black.
The environment the camera sensor is in
If you are in the food industry, it is important that the sensors and bridges installed in production can withstand the given environment to which they are exposed.
For example, if we are to detect whether a frying belt is about to break, the camera must be placed above a lot of heat. It is therefore important that the camera sensor is enclosed so that it can withstand the heat and does not get a fogged lens.
In addition, it is incredibly important that sensors in the food industry can be sterilized and can withstand the detergents used for cleaning.
If the camera sensor is to go down into a well, it must also be able to withstand the climate down there, with different temperatures, water etc.
Storage of your images
It is important that your images are stored in such a way that they are easy to find, giving structure to an otherwise unstructured data source. A good place to start is a folder for all the pictures that belong to a particular challenge, a bit like the picture folder on your computer called "summer holidays 2018". For example, it could be a folder named "well-soiling". In this folder you then create subfolders, for example "soiling" and "no-soiling". All images that do not clearly show soiling in the well go in "no-soiling"; if soiling is detected, the image goes in "soiling". Not only have you made it easy to keep track of your photos, you have also made it easy to get started with machine learning, using supervised learning.
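The folder structure above maps directly to labels for supervised learning: the subfolder name becomes the label of every image in it. A minimal sketch (folder names are the illustrative ones from the text):

```python
from pathlib import Path

def labelled_images(root):
    """Yield (image_path, label) pairs, where label is the subfolder name."""
    for class_dir in sorted(Path(root).iterdir()):
        if class_dir.is_dir():
            for img in sorted(class_dir.glob("*.jpg")):
                yield img, class_dir.name

# Build a tiny example structure and list it.
root = Path("well-soiling")
for label in ("soiling", "no-soiling"):
    (root / label).mkdir(parents=True, exist_ok=True)
(root / "soiling" / "img_001.jpg").touch()
(root / "no-soiling" / "img_002.jpg").touch()

for path, label in labelled_images(root):
    print(path.name, label)
```

Many machine learning frameworks can read exactly this layout directly, which is why labeling by folder is such a convenient starting point.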
neurospace's AI Camp
If you are interested in learning more about the concepts of Big Data vs Right Data, and how image recognition can automate processes in your business, then our AI Camp is a good place to start your data journey.
// Maria Jensen, Machine Learning Engineer @ neurospace