Effective data analysis is the decisive value driver in the context of Industry 4.0 and the Smart Factory. Only when it is possible to aggregate and analyze big data, i.e. the enormous amounts of data generated in IoT environments, in a way that makes it effectively usable does it deliver real value. Modern data analytics methods make it possible for the first time to analyze completely unstructured data in real time and to change the data structure under analysis quickly at any time, something that ERP systems and relational databases were previously unable to do. Various approaches are possible:
- When data is intelligently analyzed and used, it enables not only faster and more precise decision-making, but also forward-looking planning and maintenance (advanced analytics, predictive planning and predictive maintenance). It thus supports the development and control of strategy as well as the continuous optimization of operational processes, for example by reducing operating and quality costs. Data analysis also provides the basis for understanding customer requirements and decisions better and earlier, resulting in higher responsiveness, better data quality and competitive advantages.
- By combining data analysis with learning systems, processes can be automated comprehensively. Particular potential lies in the development of digital twins: virtual models that map and network objects and processes completely in real time, enabling a high degree of transparency and accurate forecasts of optimization opportunities and risks. Furthermore, advanced analytics tools make it possible to test and analyze an enormous number of potential correlations, which yields a much deeper and more precise understanding of the core processes.
- By adding sensors and software, intelligent products and services (Smart Products & Services) can be developed and additional revenues generated within the framework of new business models. The retrofitting of plants and processes also becomes increasingly economical the better the data is used. It is crucial that comprehensive analyses of the various processes, components and manufacturing systems are carried out as close as possible to the data source. This accelerates control and intervention processes and at the same time relieves the global IT and process network through decentralized analysis and monitoring loops. Both embedded analytics, in which actions are executed automatically by the IoT systems, and big data analytics, in which manual interventions are possible and different views can be analyzed by changing algorithms, are used.
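The decentralized monitoring loop described above can be sketched in a few lines: a rolling statistic computed directly at the data source flags deviations without involving the central IT network. The class below is a minimal illustration of such an embedded analytics step, not a specific product API; all names and thresholds are assumptions.

```python
from collections import deque
import math

class EdgeAnomalyDetector:
    """Rolling z-score detector small enough to run on an edge device.

    Keeps a sliding window of recent sensor readings and flags values that
    deviate more than `threshold` standard deviations from the window mean.
    A minimal stand-in for the decentralized monitoring loop described in
    the text (illustrative sketch, not a product implementation).
    """

    def __init__(self, window_size=100, threshold=3.0):
        self.window = deque(maxlen=window_size)
        self.threshold = threshold

    def observe(self, value):
        """Return True if `value` is anomalous relative to the recent window."""
        if len(self.window) >= 10:  # need a minimal baseline first
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            is_anomaly = std > 0 and abs(value - mean) > self.threshold * std
        else:
            is_anomaly = False
        self.window.append(value)
        return is_anomaly
```

Because the detector keeps only a fixed-size window, it runs with constant memory on the device itself; only flagged anomalies need to be forwarded to the central network.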
Untapped potential in the use of Big Data and Data Analytics
Despite this enormous potential, many companies still show a low degree of maturity in their use of big data and data analytics. The most frequent reasons are isolated data silos, poor data quality, insufficient data volumes, or structures that do not allow holistic, cross-functional and cross-departmental data management. Many companies also lack the skills needed to establish data analysis as a central cross-sectional discipline and to integrate it into strategic and operational processes.
ROI supports its clients in Big Data and Data Analytics projects, from analysis through planning to implementation, with experienced manufacturing and strategy experts, data engineers and data scientists, in order to leverage the economic and process-related potential of data. ROI relies on a process model that is modularly adapted to the individual challenges and concretized on the basis of six central criteria (6 V's):
Procedure of a Big Data Project
Phase 1: Target vector
The correct handling of Big Data is derived from the corporate strategy, current and potential business models, and the desired use cases:
What is the overall objective to be achieved?
What contribution can Big Data make to corporate strategy?
What contribution can Big Data make to process optimization?
Are new business models possible as a result?
Which use cases bring immediate benefits?
Typical use cases are:
Support of product strategies
Increase in capacity utilisation at individual locations/plants
Reduction of unnecessary part transports between sites ("part tourism")
Overall optimization of production and supply chain costs
Improvement of delivery time and speed
Improving production availability through predictive maintenance
Improvement in global cooperation (engineering, production, logistics)
Identification of foundations for Smart Products
Creating a decision-making platform for new business models
Phase 2: Smart Data Architecture
In a second phase, the architecture required for implementing the Big Data use cases is created. Depending on the use case, various process-related and organizational changes must be made, which have to be planned alongside the technological architecture. The focus is on the following architectural questions, among others:
Where is a Digital Twin needed?
What role do cloud services play? Which operations are to be carried out in Edge Devices?
What level of analytics competence is required in-house?
Do systems have to be digitally retrofitted?
Which (cyber)security requirements have to be considered?
How must the processes be designed according to the use cases?
How must IT and management systems be changed?
Does the organizational structure need to be adapted?
Phase 3: Proof of Concept
The first step before the rollout is the proof of concept of one or more selected use cases. An initial verification of the corresponding analysis models is usually carried out using a digital twin. With this method, reliable results based on productive data can be achieved while remaining decoupled from productive operation, and necessary adjustments can be made. After reviewing the results, a decision is made about productive use. All necessary data interfaces and IT services are already tested in the digital twin, allowing smooth integration into productive operation.
Phase 4: Set-up of the Smart Data Cluster
After the successful proof of concept of the use cases, the Big Data Cluster is built and transferred from the PoC architecture into a productive architecture. This phase includes the creation of the ETL/ELT processes, the implementation of the algorithms, and the setup of the visualizations and the frontend for the end users.
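The ETL processes created in this phase follow a simple pattern: extract raw data, transform it into a clean schema, load it into the analytics store. The sketch below illustrates the three stages in miniature; the file, column and table names are purely illustrative, and a real cluster would use distributed tooling (e.g. Spark) rather than CSV and SQLite.

```python
import csv
import sqlite3

def run_etl(source_csv, db_path):
    """Minimal extract-transform-load pipeline (illustrative names only)."""
    # Extract: read raw machine readings from a CSV export
    with open(source_csv, newline="") as f:
        rows = list(csv.DictReader(f))

    # Transform: cast types and drop rows with missing sensor values
    cleaned = [
        {"machine_id": r["machine_id"], "temperature": float(r["temperature"])}
        for r in rows
        if r.get("temperature") not in (None, "")
    ]

    # Load: write the cleaned records into the analytics store
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS readings (machine_id TEXT, temperature REAL)"
    )
    con.executemany(
        "INSERT INTO readings VALUES (:machine_id, :temperature)", cleaned
    )
    con.commit()
    con.close()
    return len(cleaned)
```

However the tooling scales, the productive architecture keeps this separation of stages, which is what allows the algorithms and frontends built on top of it to remain stable as data sources change.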
Phase 5: Transfer to service-oriented Cloud
Once the data cluster is productive, the next step is to transfer operation to a service provider on the basis of an operating model, connect smart devices and integrate them into the Data Lake. The end result is a Data-as-a-Service (DaaS) or Analytics-as-a-Service (AaaS) solution that consistently delivers IT services on demand.
Phase 6: Further development in Collaborative Cloud
Further development and the creation of additional services take place in a collaborative cloud, which gives both internal and external developers the opportunity to contribute their development work on an app basis and to make further services available.
Process improvement through the use of Big Data
- Fraud Detection: Monitoring of data streams within production networks with real-time anomaly detection
- Pattern recognition: Optimization of working paths based on the analysis of movement patterns
- Condition monitoring: Analysis of aggregate states of production systems (with coupling to mobile devices)
- Production monitoring: Monitoring of environmental variables in production in real time with detection of anomalies (e.g. for thermosensitive productions)
- OEE monitoring: Visualization of the KPIs (e.g. OEE) of individual machines, production areas and entire plants on the basis of recorded data, with user-specific aggregation
- Predictive quality: Quality-optimal planning and control on the basis of influencing variables on quality (also with feedback into product and process development) and recognition of patterns in production deviations
- Predictive Maintenance: Optimal planning of maintenance orders based on wear models (with optimal scheduling of maintenance personnel)
- Model-predictive control: Optimal planning and control of production facilities, areas or plant networks in real time
- Simulation-based optimization: Optimization of planning and control processes based on simulation models, e.g. optimal order sequence of production, optimal capacity balancing in the plant network
- Logistics optimization: Optimization of goods flows based on geodata, traffic flow and weather data (e.g. Just-in-Time / Just-in-Sequence deliveries), and optimization of storage and supply areas (e.g. optimized coordination of goods movements through real-time recording)
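As an example of the KPIs mentioned under OEE monitoring, OEE itself follows the standard definition Availability × Performance × Quality. A minimal calculation sketch (the parameter names are illustrative):

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Overall Equipment Effectiveness from shift-level figures.

    Standard definition: OEE = Availability x Performance x Quality.
    """
    availability = run_time / planned_time                      # share of planned time actually running
    performance = (ideal_cycle_time * total_count) / run_time   # actual vs. ideal output rate
    quality = good_count / total_count                          # share of good parts
    return availability * performance * quality
```

For a shift of 480 planned minutes with 432 minutes of run time, an ideal cycle time of one minute, 400 produced parts and 380 good parts, this yields an OEE of about 79%.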
Procedure of a Data Analytics Project
Phase 1: Understanding of processes and data
At the beginning of the project, an understanding of the underlying processes and existing data structures is established in order to grasp the quantity structure of data sources, types and volumes as well as their significance in the business processes.
Phase 2: Analysis of existing data and prediction model
If the required data is already available and accessible in sufficient quality, a first proof of concept is carried out, either with offline data or in a test environment. Initial hypotheses are formed and then checked in a second step by analyzing the existing data. If no patterns are found in the existing data, either the scope is extended or a retrofit is performed to obtain additional data from the existing processes. The prediction model is then created with relevant features, and the prediction quality is validated using historical data.
Phase 3: Roll-out of the model
If the forecasting quality is sufficient, the proof of concept is transferred to a productive environment. The necessary ETL/ELT processes for automated provision and preparation of the data are set up and integrated into the existing IT landscape. Subsequently, frontend and reporting processes are created for the end users before the system goes live and is validated in the real environment. The project ends with the handover of documentation and the training of users and experts.
Artificial Intelligence (AI)
By means of artificial intelligence (AI), patterns in large amounts of data can be identified far faster and more accurately than humans could manage. There are currently two categories of AI:
- Weak AI: Deals with concrete, often clearly delimited application problems. Examples include services such as Siri, Alexa and Bixby, but also initial vehicle controls using voice input.
- Strong AI: Solutions that are capable of independently grasping and analyzing situations beyond concrete tasks and developing a solution.
The term AI covers numerous methods used in analytics and big data projects. ROI applies problem-specific state-of-the-art methods in Big Data Analytics projects. These include:
- Natural Language Processing (NLP) includes methods for speech recognition and generation. Well-known applications are interactive vehicle navigation devices or consumer applications such as Alexa and Siri.
- Especially in the service environment NLP methods are combined with bots and increasingly used as interactive chatbots.
- One of the traditional disciplines of AI is image processing and recognition (computer vision), which applies methods of pattern recognition and machine learning. Methods of mathematical optimization of complex systems (e.g. genetic algorithms, simulated annealing) are also subsumed under the term AI.
- Machine learning is now often used synonymously with AI. It is a collective term for methods that generate knowledge from experience. Combinations of mathematical optimization with machine learning are also common.
- Deep learning comprises methods for training multi-layer artificial neural networks and is used very widely within machine learning.
- Other approaches to weak AI are knowledge-based systems and expert systems that formalize existing knowledge (e.g. machine translation programs).
- Classical planning tasks such as search and scheduling can be supported and automated using AI algorithms. Established methods include mathematical algorithms such as nearest neighbor or maximum likelihood, which are used for shortest-path search tasks or flow planning problems.
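As an illustration of the nearest-neighbor method mentioned above, the sketch below applies the greedy heuristic to a simple routing problem: visit all stations starting from the first one, always moving to the closest unvisited station. The coordinates are made up, and the heuristic yields a quick but not necessarily optimal order.

```python
import math

def nearest_neighbor_tour(points):
    """Greedy visiting order over 2D points, starting at points[0].

    Classic nearest-neighbor heuristic: fast, but only an approximation
    of the optimal tour (illustrative sketch).
    """
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

stations = [(0, 0), (5, 0), (1, 1), (6, 1), (0, 2)]
print(nearest_neighbor_tour(stations))  # greedy visiting order of the stations
```

For planning problems where the greedy result is not good enough, the same formulation can be handed to the optimization methods listed above, such as simulated annealing.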
ROI realized an "end-to-end digitization" project with a global bed manufacturer that took into account all relevant stages of value creation: from the customer experience to ordering, production and logistics.
Development of a digital twin to increase quality and productivity
An automotive supplier improved the transparency of work and organizational processes in a production plant for dashboards.
With a "Digital Process Twin" from ROI, the company reduced the reject rate and made improvement potentials in its value creation networks visible.