
ARCHITECTURE DESIGN FOR THE DATA WORLD

THE GREATEST CHALLENGES IN HANDLING DATA ARE NOT OF A TECHNICAL NATURE

What has changed in recent years in the handling of data in the Smart Factory?

The focus is increasingly shifting away from individual pilot projects towards holistic, integrated solutions. More and more companies are approaching analytics in a planned, strategic and conceptual manner, both in their view of the entire production process and with regard to the holistic data model of a digital factory. This also applies to integration into the overall environment, which brings us to architecture design: how do I introduce analytics into existing legacy systems, and how do I integrate it into the existing IT landscape?

Against this background, where is the greatest need for action, and what are the biggest challenges at present?

The biggest challenges are not primarily technical; they lie in the approach. It is not enough to apply the roadmaps of individual providers of digital solutions. On the one hand, a broad, cross-technology and systematic approach is needed to identify potential. On the other hand, the specifics of your own organisation must be taken into account, and these are not reflected in standard models and recommended procedures. Companies must ask themselves: How do I formulate requirements? How do I define my options in the development plan? In practice, companies often rely on the expertise of well-known providers and their process models. But the starting point should always be your own roadmap and your individual situation, your own setup.

In production landscapes, old and new machines often coexist, producing different, sometimes incompatible data sets. How should companies deal with this heterogeneity?

Everyone is familiar with this situation. Many of the companies we talk to in our projects or in the context of the Industry 4.0 Awards operate a wide range of technologies. They typically regard their problems as specific to themselves, but the problems are usually the same. The challenge is not primarily whether old machines are Industry 4.0-capable, although this aspect often dominates the discussion. Rather, the task is to use retrofit approaches to extract the relevant data from the process, as the sketch below illustrates. This requires manageable investments and clearly defined steps in the project.
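A minimal sketch of such a retrofit, assuming a vibration sensor clamped onto an old machine and the paho-mqtt package; the broker address, topic scheme, machine ID and field names are illustrative assumptions, not details from the interview:

```python
import json
import time

import paho.mqtt.client as mqtt  # assumed dependency: pip install paho-mqtt

# paho-mqtt 1.x style constructor; 2.x additionally takes a CallbackAPIVersion.
client = mqtt.Client()
client.connect("broker.factory.local", 1883)  # hypothetical plant broker
client.loop_start()

def read_vibration_mm_s() -> float:
    """Placeholder for the actual retrofit sensor readout (e.g. via an ADC)."""
    return 4.2

while True:
    payload = {
        "machine_id": "press_07",  # hypothetical legacy machine
        "timestamp": time.time(),
        "vibration_mm_s": read_vibration_mm_s(),
    }
    # Publish into the plant-wide data model under an agreed topic scheme.
    client.publish("factory/retrofit/press_07/vibration", json.dumps(payload))
    time.sleep(10)
```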

Which data types need to be distinguished, and how are they generated?

What we need first is the structured metadata from the production data model. This applies specifically to products, orders, variants, attributes, batches and production orders. This is the structured data world we know from classic business applications such as ERP, MES and BDE (production data acquisition) systems. On the next level is the unstructured data: machine, status and process data as well as production process parameters. This is followed by data that we generally summarise under the term Condition Monitoring, such as measurements of temperature, humidity or vibration. We initially record these in isolation from the structured, systematic production process. Together, all of this data forms the basis, one could say the raw material, for the data model to be set up; the sketch below shows one way to model these layers. With some machines it is more difficult to generate the data, and in some cases further steps, such as additional sensor technology, are needed to measure and record it.
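The three layers could be modelled as follows, sketched in Python; the class and attribute names are assumptions for illustration, not a schema from the interview:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Any

@dataclass
class ProductionOrder:
    """Structured metadata from ERP/MES/BDE: products, orders, variants, batches."""
    order_id: str
    product_id: str
    variant: str
    batch_id: str

@dataclass
class MachineEvent:
    """Unstructured machine, status and process data from the shop floor."""
    machine_id: str
    timestamp: datetime
    payload: dict[str, Any]  # raw status codes, process parameters, etc.

@dataclass
class ConditionReading:
    """Condition Monitoring data, initially recorded in isolation."""
    sensor_id: str
    timestamp: datetime
    quantity: str  # e.g. "temperature_C", "humidity_pct", "vibration_mm_s"
    value: float
```

Joining these layers, for example by batch ID and timestamp, is what turns isolated recordings into the raw material for the integrated data model.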

Do the machines, some of which are decades old, represent an obstacle to factory-wide digitisation?

No, an older machine park is definitely not an obstacle to the successful implementation of Industry 4.0. Practice shows that an integrated data model, on which analytics and machine learning can be operated successfully, can be created within a few weeks. This also holds when a factory runs both new plants with programmable logic controllers (PLCs) and old machines; the sketch below shows how both kinds of source can feed one common schema. There are now many proven and effective approaches for making older machines smart.
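A minimal sketch of such an integration, assuming one feed from a PLC-equipped line and one from a retrofitted legacy machine; the payload shapes and field names are illustrative assumptions:

```python
from datetime import datetime, timezone
from typing import Any

def from_plc_tags(machine_id: str, tags: dict[str, float]) -> dict[str, Any]:
    """Normalise a tag snapshot read from a modern PLC (e.g. via OPC UA)."""
    return {
        "machine_id": machine_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": "plc",
        "values": tags,
    }

def from_retrofit(machine_id: str, quantity: str, value: float) -> dict[str, Any]:
    """Normalise a single reading from a retrofitted legacy machine."""
    return {
        "machine_id": machine_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": "retrofit",
        "values": {quantity: value},
    }

# Old and new machines end up in the same integrated event schema.
events = [
    from_plc_tags("line_2", {"spindle_rpm": 1480.0, "torque_nm": 52.3}),
    from_retrofit("press_07", "vibration_mm_s", 4.2),
]
```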

To what extent does the legal and regulatory framework, such as the Supply Chain Act, play a role in data management in the Smart Factory?

The keywords here are tracking and tracing and the mapping of all relevant and verifiable product and process data in the digital twin. The legal and regulatory requirements, which differ from industry to industry, play an important role, e.g. an ISO standard in medical technology or an FDA requirement in the pharmaceutical industry. Companies are required to comply with these regulations. But that is just one aspect. At the same time, the industry sees that traceability and process transparency for customers are rapidly gaining in importance. And finally, it is also a question of a company's own performance: if errors and risks are detected early, costs can be reduced significantly, process times shortened and bottlenecks avoided.

If, for example, a batch or series is already halfway through production when an error is detected, you are at an immense disadvantage compared to a competitor who detected and eliminated the same error very early. And of course, the end-to-end perspective, the integration of customers and suppliers in the supply chain, is also critical to success here. This is exactly what cross-company data models and digital process chains aim at. In a nutshell: regulation is an important driver of change, but it is by no means the only reason to ensure transparency and traceability in the supply chain.

What contribution does the effective use of data analytics make to increasing resilience and flexibility?

A very important one, provided you can map the supply chain digitally and in real time. The blockchain approach, for example, is a decisive driver here. With smart contracts, I can implement exactly that between the individual companies in the value network. A batch is mapped as a blockchain object with all production data, material data, production process data, quality data and batch attributes; a simplified sketch of this chaining idea follows below. And that is exactly the necessary basis: I can only be resilient and flexible if I recognise certain trends at an early stage, clearly assess their effects on the various aspects of my production thanks to clean data, and thus take precise measures early on.
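A simplified illustration of the underlying idea, tamper-evident chaining of batch records, in plain Python; this is a conceptual sketch with hypothetical field names, not a smart-contract platform or a production blockchain:

```python
import hashlib
import json
import time

def make_block(batch_record: dict, prev_hash: str) -> dict:
    """Link a batch record to its predecessor via a SHA-256 hash."""
    block = {
        "timestamp": time.time(),
        "batch": batch_record,  # production, material, process and quality data
        "prev_hash": prev_hash,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block({"batch_id": "B-0001", "quality": "pass"}, prev_hash="0" * 64)
nxt = make_block({"batch_id": "B-0002", "quality": "pass"}, genesis["hash"])
```

Because each block's hash covers its predecessor's hash, retroactively altering a batch record would break the chain, which is what makes such an object verifiable across company boundaries.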

Which mistakes and misunderstandings in dealing with data and data analytics do you observe particularly frequently?

I have already mentioned one major misunderstanding: you don't need ultra-modern production facilities and networking at the highest Industry 4.0 level to obtain data and use it productively. Another misunderstanding concerns the step-by-step approach that is necessary to model data and work with it in a meaningful way; this is often underestimated. It is not as if an AI solution could push all structured and unstructured data into the cloud at the push of a button and process it there. Data modelling requires hypotheses, assumptions about correlations, and a logic that fits the production process. Only then can added value be created. And this is a learning process that requires several iterations, as the sketch below indicates.
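A first iteration of such a learning process might look like the following sketch, assuming pandas and a hypothetical process log; the file and column names are illustrative assumptions:

```python
import pandas as pd  # assumed dependency: pip install pandas

# Hypothetical process log: one row per batch, with process parameters
# and a measured quality outcome (column names are illustrative).
df = pd.read_csv("process_log.csv")

# Test the correlation hypotheses instead of pushing all raw data
# unfiltered into the cloud: which parameters move with the scrap rate?
params = ["temperature", "pressure", "humidity"]
print(df[params + ["scrap_rate"]].corr()["scrap_rate"].sort_values())
```

Correlations found this way are hypotheses to be refined in the next iteration, not causal conclusions; that is exactly the learning loop described above.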


CONTACT

Anna Reitinger

Head of Marketing, ROI-EFESO
Tel.: +49 89 1215 90-0
