News

European steel industry digital transformation focus: big data analytics and cloud computing
Publish Time: 2021.01.15

Big data analytics and cloud computing is one of the EU’s 12 key research funding areas for the digital transformation of the steel industry. Big data analytics focuses on algorithms that use historical data to identify product quality issues and reduce product failures. Traditional database technology in the iron and steel industry still struggles to capture, store, manage, and analyze large volumes of structured and unstructured data. Big data analysis technology applies new processing patterns to extract valuable information from diverse data types, building understanding and insight into its meaning so that accurate decisions can be made.

There are six major projects in the area of big data analytics and cloud computing:

One is the “whole process of ladle tracking” project.

The project uses a variety of sensors, including acoustic sensors, together with a multi-objective optimization (MOO) framework and data analysis to improve plant productivity and steel plant safety. Its goal is to automatically track ladles in a steel plant operating environment, from steelmaking through continuous casting to slab delivery. Accurate tracking of ladle position is one of the foundations of the digital transformation of the steelmaking process. The ladle tracking system not only keeps production stable but also optimizes ladle logistics under abnormal conditions such as sudden disruptions to the production plan, ensuring safety and increasing output.

The second is the “Quality 4.0” project.

The project builds on advanced artificial intelligence (AI), machine self-learning analysis methods, and big data processing to develop an adaptive platform that analyzes big data streams online, supports product quality decisions, and provides tailored, highly reliable quality information. For the European steel industry, where overcapacity has flooded the market with cheap steel, differentiation is urgently needed, and aggressively promoting a common platform makes strategic sense. However, sharing incorrect quality information can cause serious customer uncertainty and damage customer confidence. The adaptive “Quality 4.0” platform therefore horizontally consolidates quality information across the supply chain, analyzes large data streams online, applies innovative machine learning algorithms, models customer relationships, and automatically matches exchanged data against the available customer and order information. Through a two-way exchange of tailored, highly reliable information with customers, it enables differentiated product quality decisions while reducing costs.

The “Quality 4.0” platform, as a service-oriented architecture (SOA), offers the flexibility to compose individual modules and integrate them into the existing IT infrastructure without relying on a single product or technology. The platform consists of three service modules: the quality data generation service module (QGS), which generates quality data and its plausible values; the quality allocation service module (QAS), which assigns customer orders and products appropriately and selects the relevant quality data; and the quality exchange service module (QXS), which exchanges the selected quality data compiled for each customer order.

Among them, the main function of “Quality 4.0-QGS” is to estimate quality data from all available data sources and to quantify the confidence of each estimate through a possible value (PV), thereby guaranteeing the reliability of the quality information provided. The determination of possible values can be expressed as a function.
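The project does not publish the PV function itself, but the idea of attaching a confidence score to an estimate can be sketched as follows. This is a minimal illustration, assuming a simple agreement-based PV; the tolerance and the median-based estimator are our own placeholders, not the project’s method.

```python
import statistics

def estimate_with_pv(measurements, tolerance):
    """Estimate a quality value from several data sources and attach a
    possible value (PV) in [0, 1] expressing confidence in the estimate.
    Here the PV is simply the fraction of sources that fall within
    `tolerance` of the median -- a stand-in for the project's PV function."""
    estimate = statistics.median(measurements)
    agreeing = sum(1 for m in measurements if abs(m - estimate) <= tolerance)
    pv = agreeing / len(measurements)
    return estimate, pv

# Three sensors agree closely, one is off: the estimate gets PV = 0.75.
value, pv = estimate_with_pv([501.2, 499.8, 500.5, 512.0], tolerance=2.0)
```

A downstream consumer can then filter or weight quality records by their PV before making decisions.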

Effective and reliable detection of abnormal quality indicators plays a fundamental role in the “Quality 4.0” project. Quality-related data collected throughout the production process may contain outliers for various reasons, such as testing errors. Because outliers are of many types and highly diverse, no accepted method currently exists that can reliably and effectively detect them in every situation. The informal definition of an outlier relates to its deviation from normality, and detection methods can be divided into five categories: distribution-based, depth-based, distance-based, cluster-based, and density-based. This project uses the FUCOD algorithm to detect outliers. The method combines four existing outlier detection approaches and uses a fuzzy inference system (FIS) to dynamically manage each method’s contribution according to the data being processed, exploiting its advantages while avoiding its disadvantages. FUCOD is designed for multidimensional data, meaning that outlier-level calculations take into account not only the properties of the individual variables that make up the quality data but also their interactions. These characteristics make the FUCOD method particularly suitable for industrial data sets involving a large number of tasks. The method has already been applied successfully in the European steel industry.
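The principle of fusing several detector families can be sketched in a few lines. This is a toy stand-in, not the FUCOD algorithm: it fuses only two of the five families (distribution-based and distance-based), and it uses fixed weights where FUCOD uses a fuzzy inference system to set the weights dynamically per data set.

```python
import statistics

def distribution_score(x, data):
    """Distribution-based detector: normalized deviation from the mean,
    capped at 1 (roughly |z|/3)."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data) or 1.0
    return min(abs(x - mu) / (3 * sigma), 1.0)

def distance_score(x, data, k=3):
    """Distance-based detector: mean distance to the k nearest neighbours,
    rescaled by the data range to [0, 1]."""
    dists = sorted(abs(x - y) for y in data if y != x)[:k]
    spread = max(data) - min(data) or 1.0
    return min(sum(dists) / (k * spread), 1.0)

def combined_outlier_level(x, data, weights=(0.5, 0.5)):
    """Fuse the two detector families into one outlier level in [0, 1]."""
    scores = (distribution_score(x, data), distance_score(x, data))
    return sum(w * s for w, s in zip(weights, scores))

data = [10.1, 9.8, 10.3, 10.0, 9.9, 25.0]
# The clear outlier (25.0) gets a much higher fused level than an inlier.
```

Combining detectors this way means a point only needs to look anomalous to one family to raise its level, which is the motivation the project gives for mixing methods.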

“Quality 4.0-QAS” uses quality data and customer knowledge to adaptively supervise product quality. It provides estimates of quality data together with confidence values reflecting their plausibility, combines the received quality information with knowledge of the target customer, allocates and exchanges the quality data for the relevant orders, prepares quality-defect information of value to the customer, and feeds this information back to suppliers. However, a system that exchanges relevant quality information between suppliers and customers must be able to understand what “relevance” means. Therefore, all the information needed to determine the relevance of quality information is modeled semantically on the basis of the available customer information and order data. The model includes the degree of customer intimacy, which reflects the mutual trust between supplier and user, so that the type and quantity of quality information to share can be reasonably defined.

“Quality 4.0-QXS” compiles the selected quality data separately for each order based on the results provided by “Quality 4.0-QAS” and exchanges the data using standard communication protocols. QXS is the only service accessible across plant boundaries; it manages the exchange of quality data between “Quality 4.0” platforms, enabling a customer-oriented two-way exchange of quality data and establishing, through horizontal integration, a synchronized focus on product quality. To identify appropriate IT standards and exchange quality data between customers and suppliers, several solutions have been developed: the QDX system, the STEP system, and a quality tracking system. Since no free standards for quality data exchange exist, specific IT standards will be defined and implemented within the “Quality 4.0” framework.

FADI is a customizable end-to-end big data platform: an open-source tool that can be deployed and integrated in a portable and extensible manner, serving multiple users and roles (professional analysts, data scientists and engineers, IT administrators, etc.). FADI has five main capabilities: it collects batch and streaming data from a variety of data sources; it stores data in different types of data stores; it applies machine learning and artificial intelligence techniques; it visualizes and analyzes data in the user’s web interface; and it generates and publishes reports.

The third is the project of “sensor data mining to improve product quality”.

The project proposes a solution based on big data, feature extraction, machine learning, an analytics server, and knowledge management to automatically analyze sensor time series data. By developing new methods and tools that help factories improve product quality and reduce production costs, the project addresses three aspects of quality standards: appearance, intrinsic quality, and mechanical performance. The work includes, on the one hand, optimizing the manufacturing process by identifying the main causes of poor quality and, on the other hand, rapidly predicting product quality so as to better characterize products and reduce costs.

These new methods extract knowledge from large volumes of complex data, for example, hundreds of parameters recorded as sensor time series over a considerable period (2 to 3 years) at high frequency (1 Hz to 10 Hz), and derive specific indicators (for example, average casting rate) for statistical analysis. To automate the massive analysis of these sensor time series, the project proposed a comprehensive solution built around five pillars:

1. Big data: Design and manage new types of databases suitable for data analysis of large amounts of data.

2. Feature extraction from time series: develop algorithms to build more appropriate metrics to better represent processes that may affect quality.

3. Machine Learning: Descriptive and predictive analysis of machine learning to identify causes of poor quality and make better predictions.

4. Analytics server: analytics servers are developed to improve modeling efficiency, optimize management, and improve communication between process experts and data mining experts.

5. Knowledge management: capitalization of expertise and valuable statistical data to standardize and optimize the exchange between process knowledge and statistical knowledge.
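The feature extraction step (pillar 2) can be made concrete with a small sketch: reducing a high-frequency sensor series to a few scalar indicators, in the spirit of the “average casting rate” example above. The window size and the chosen features are illustrative, not from the project.

```python
def extract_features(series, window):
    """Reduce a high-frequency sensor time series (e.g. sampled at 1 Hz)
    to a few scalar features suitable for statistical analysis:
    the overall mean, per-window means, and the overall range."""
    window_means = [
        sum(series[i:i + window]) / window
        for i in range(0, len(series) - window + 1, window)
    ]
    return {
        "mean": sum(series) / len(series),
        "window_means": window_means,
        "range": max(series) - min(series),
    }

# Eight 1 Hz samples of a casting-rate signal, summarized with a 4-sample window.
feats = extract_features([1.0, 1.2, 1.1, 0.9, 1.0, 1.1, 1.3, 1.2], window=4)
```

Features like these, rather than the raw series, are what feed the machine learning models of pillar 3.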

The fourth is the project of “developing breakthrough technologies for real-time monitoring, control and prediction based on big data to improve the stability of steel production processes and product quality”.

The project focuses on developing and implementing application scenarios for the steel industry: monitoring the performance of steel processes and applying the latest technologies for data processing and data analysis. As product quality and process efficiency requirements in European steel production keep rising, the amount of process and product data being collected is also increasing. At the same time, new methods are needed to analyze and control the manufacturing process and to determine and predict the performance of intermediate and final products. Focusing on specific use cases for the steel industry, the project exploits all the technical and scientific possibilities offered by the latest data processing and data analysis technologies to synthesize the vast information resources collected in steel mills. The ultimate objectives of the project are: 1. to develop and apply methods for manufacturing process analysis and control, and to extend tools for evaluating and predicting (intermediate) product quality; 2. to provide evidence of the applicability and effectiveness of such methods; and 3. to identify possibilities for developing new approaches beyond the investigated use cases and to make recommendations.

The basic sub-projects included in this project are:

1. Application of big data.

Because of the large scale, diversity, and velocity of the data involved, traditional data processing software cannot fully handle the research and application of such complex data sets. The sub-project is designed to analyze large amounts of data from multiple sources at high speed, covering structured, semi-structured, and unstructured data as well as digital-twin data, with the aim of generating economic benefits. Under the premise of ensuring data quality, a NoSQL (not only SQL) data form is adopted and applied to “Quality 4.0” (whole-process quality management) and to seamless tracking (position identification and alignment) of whole-process data for steel products.
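The appeal of a NoSQL form for this mix of data can be shown with a small sketch: a single quality record combining fixed structured fields with a variable-length sensor payload, serialized as a schema-free document. All field names here are illustrative, not from the project.

```python
import json

# One "quality record" mixing structured fields with a semi-structured
# sensor payload -- the kind of heterogeneous record that motivates a
# document-oriented (NoSQL) store over a fixed relational schema.
record = {
    "product_id": "SLAB-2021-0142",
    "process_step": "continuous_casting",
    "structured": {"width_mm": 1250, "thickness_mm": 220},
    "semi_structured": {"sensor_log": [{"t": 0, "temp_c": 1530},
                                       {"t": 1, "temp_c": 1528}]},
}

document = json.dumps(record)   # serialize for a document store
restored = json.loads(document) # schema-free round trip; no table migration needed
```

A relational schema would need a new column (or table) every time a sensor payload changes shape; the document form absorbs such changes without migration.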

2. Event handling.

Event processing is a method of tracking and analyzing (processing) flows of information (data) about things that have happened (events) and drawing conclusions from them. The sub-project is led by RINA of Italy. CSM, the RINA Group’s Italian materials research and development center (and the joint research center of Baosteel’s European R&D Center), is responsible for the design, development, and validation of an innovative big data architecture to manage process and quality data from steel production. A Lambda architecture is implemented so that data can be processed both as a real-time stream and in batches. The architecture can analyze data from steel mill automation systems in real time and store it for historical analysis, making it a suitable setting for integrating artificial intelligence models.
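The Lambda pattern described above, one path for real-time analysis and one for full historical storage, can be sketched minimally. This is an illustration of the pattern only, not CSM’s implementation; the class name and window size are our own.

```python
from collections import deque

class LambdaSketch:
    """Minimal Lambda-architecture illustration: the batch layer keeps the
    full history for offline analysis, while the speed layer keeps only a
    short real-time window of recent events."""

    def __init__(self, speed_window=100):
        self.batch_store = []                           # full master data set
        self.speed_view = deque(maxlen=speed_window)    # recent events only

    def ingest(self, event):
        self.batch_store.append(event)   # feeds historical/batch analysis
        self.speed_view.append(event)    # feeds real-time monitoring

    def realtime_mean(self, key):
        vals = [e[key] for e in self.speed_view]
        return sum(vals) / len(vals)

    def batch_mean(self, key):
        vals = [e[key] for e in self.batch_store]
        return sum(vals) / len(vals)

plant = LambdaSketch(speed_window=2)
for temp in (1500, 1510, 1520):
    plant.ingest({"temp": temp})
# The speed view sees only the last 2 events; the batch view sees all 3.
```

In a production system the two layers would be separate technologies (for example, a stream processor and a data lake), but the division of labor is the same.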

Practical cases that have been successfully implemented so far include: “Process Machine Predictive Maintenance Based on Machine Learning Techniques”, “Process Data Defect Prediction Based on Deep Learning Models”, “Prediction of Process KPIs to Preclude Process Bias”, and “Defect Classification Using Deep Learning and Image Analysis”.

3. Deep machine learning and big data analysis.

The sub-project is the responsibility of the German BFI Group. Big data analysis is the process of collecting, organizing, and analyzing large data sets to discover useful information. The deep machine learning methods used include deep neural networks (supervised), recurrent networks (supervised), convolutional networks (supervised), and unsupervised networks.
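To make the “supervised” label concrete, here is the smallest possible supervised training loop: a single linear neuron fitted by gradient descent. Deep variants stack many such units with nonlinearities; the data and hyperparameters here are illustrative, not from the sub-project.

```python
def train_neuron(xs, ys, lr=0.05, epochs=200):
    """Train one linear neuron (y = w*x + b) on labeled pairs by
    stochastic gradient descent on the squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            pred = w * x + b
            err = pred - y
            w -= lr * err * x   # gradient of (err**2)/2 w.r.t. w
            b -= lr * err       # gradient of (err**2)/2 w.r.t. b
    return w, b

# Learn y = 2x + 1 from labeled examples -- this is supervised learning.
w, b = train_neuron([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

The “unsupervised” networks in the list above differ in exactly one respect: they receive no labels `ys` and must instead discover structure in the inputs alone.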

The fifth is the project of “applying intelligent data-driven maintenance operations to the rolling area based on cyber-physical systems”.

The project predicts quality degradation, failures, anomalies, and the remaining life of critical components so that appropriate, cost-effective maintenance interventions can be planned in a timely manner.

The project developed the Integrated Maintenance Model 4.0 (IMM4.0) for rolling areas, using experimental systems and tools based on Industry 4.0 to shift the steel industry’s maintenance strategy from preventive maintenance to optimized predictive maintenance. The model anticipates quality declines, failures, anomalies, and the remaining life of critical units so that maintenance and interventions can be carried out in time.
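The core predictive idea, extrapolating a degradation indicator to estimate remaining life, can be sketched with a linear trend. This is a minimal stand-in under the assumption of linear degradation; IMM4.0 itself would use richer models, and the threshold and data here are invented for illustration.

```python
def remaining_life(history, failure_threshold):
    """Fit a linear trend to a degradation indicator (one sample per
    maintenance step) and extrapolate the number of future steps until
    the failure threshold is crossed. Returns None if no upward trend."""
    n = len(history)
    mean_x = (n - 1) / 2
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(history)) / \
            sum((x - mean_x) ** 2 for x in range(n))
    if slope <= 0:
        return None  # no degradation trend: cannot predict failure
    return (failure_threshold - history[-1]) / slope

# Wear grows ~1.0 unit per step; threshold 10 from current 4 -> ~6 steps left.
steps_left = remaining_life([1.0, 2.0, 3.0, 4.0], failure_threshold=10.0)
```

Planning maintenance a safe margin before `steps_left` reaches zero is exactly the shift from fixed-schedule preventive maintenance to predictive maintenance described above.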

The sixth is the project of “realizing autonomous flight monitoring and spot inspection of steel mill UAV”.

Using new sensor data, the project tested the effectiveness of unmanned aerial vehicles (UAVs) in two steel plants (ThyssenKrupp’s Duisburg plant and ILVA’s Taranto plant) as a replacement for traditional infrastructure maintenance, along with the safety implications for personnel in the affected roles.

The project uses drones for spot inspection, improving safety for steel workers and significantly reducing maintenance costs. On the hardware side, the UAV structure was improved to protect the health and safety of workers in the event of an accident; autonomous charging stations suited to the steel plant operating environment were set up; and an integrated system was built to collect data from the UAV sensors. On the software side, the project includes algorithms for autonomous, robust flight over complex areas, strategies for coordinating the activation and scheduling of UAV fleets, and a human-machine interface suitable for smartphone-based UAV control. In addition, on the management side, the project developed a UAV control and management training system and ensured compliance with all legal requirements and internal company restrictions relating to UAVs.