Lines of Research
Digital Twin
The concept of the Digital Twin is generally understood to have been foreshadowed in David Gelernter's book "Mirror Worlds" (1991) and later detailed in publications such as Grieves (2014), which grew out of presentations on Product Lifecycle Management (PLM). In this initial, quite generic view, a Digital Twin is the association between a real product and its virtual equivalent (LIM; ZHENG; CHEN, 2019). Since then, the concept has evolved towards the synergistic combination of at least three elements: the executable model(s) of a system/product, a set of data related to that system/product, and some mechanism for updating/adjusting the model based on real-world data (WRIGHT; DAVIDSON, 2020). Another recurring characteristic is the relationship of a Digital Twin with the life cycle of the system/product to which it refers, including the processes involved.
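To make those three elements concrete, the following is a minimal sketch of a Digital Twin that bundles an executable model, a store of observations from the real system, and an update step that re-calibrates the model against those observations. The first-order process, parameter names, and grid-search calibration are illustrative assumptions, not prescriptions from the cited works.

```python
# A hedged, self-contained sketch of the three elements: an executable model,
# data collected from the real system, and a mechanism that adjusts the model
# to that data. The process (first-order response), parameter names, and
# calibration strategy are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class FirstOrderTwin:
    """Digital twin of a hypothetical first-order process y' = (gain*u - y)/tau."""
    gain: float = 1.0              # parameter to be calibrated from real data
    time_constant: float = 10.0    # assumed known here, for simplicity
    observations: list = field(default_factory=list)  # (time, input, output) samples

    def simulate(self, u: float, y_prev: float, dt: float) -> float:
        """Executable model: one explicit Euler step of the process dynamics."""
        return y_prev + dt * (self.gain * u - y_prev) / self.time_constant

    def ingest(self, t: float, u: float, y: float) -> None:
        """Store one measurement received from the real system."""
        self.observations.append((t, u, y))

    def update(self, dt: float) -> None:
        """Re-fit the gain so the model tracks the stored data (coarse grid search)."""
        if len(self.observations) < 2:
            return
        best_gain, best_err = self.gain, float("inf")
        for candidate in (0.5 + 0.05 * k for k in range(40)):
            err, y_sim = 0.0, self.observations[0][2]
            for _, u, y_meas in self.observations[1:]:
                y_sim = y_sim + dt * (candidate * u - y_sim) / self.time_constant
                err += (y_sim - y_meas) ** 2
            if err < best_err:
                best_gain, best_err = candidate, err
        self.gain = best_gain

if __name__ == "__main__":
    twin = FirstOrderTwin()
    # Synthetic "real system": same dynamics, but with a true gain of 1.5.
    y = 0.0
    for k in range(50):
        y = y + 1.0 * (1.5 * 2.0 - y) / 10.0
        twin.ingest(t=float(k), u=2.0, y=y)
    twin.update(dt=1.0)
    print(f"calibrated gain: {twin.gain:.2f}")  # approaches the true value 1.5
```

In practice, the coarse grid search would typically be replaced by a proper estimation or machine-learning procedure, which is precisely where the AI techniques investigated in this research line come in.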
Within the life cycle of a system/product, the concept of Digital Twin can be applied:
- During the design phase, it bears significant overlap with the virtual prototype of the system/product, used in experimentation to enhance and better understand the system/product and the processes involved.
- During operation, data and information from the real system/product are received, which can be used for monitoring and maintenance of the real system, as well as for continuous optimization and improvement of the real system/product.
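As a complement to the operation-phase item above, the fragment below sketches one common use of a Digital Twin during operation: comparing measurements from the real system with the twin's predictions and flagging large residuals for maintenance. The threshold and signal values are illustrative assumptions.

```python
# Hedged sketch of operation-phase monitoring: residuals between the real
# system and the twin's predictions are checked against a threshold.

def monitor(twin_predictions, measurements, threshold=0.2):
    """Return (index, residual) pairs where the real system deviates from the twin."""
    alerts = []
    for i, (y_hat, y) in enumerate(zip(twin_predictions, measurements)):
        residual = abs(y - y_hat)
        if residual > threshold:
            alerts.append((i, residual))
    return alerts

if __name__ == "__main__":
    predicted = [0.0, 0.5, 0.9, 1.2, 1.4]   # output expected by the twin
    measured = [0.0, 0.5, 0.9, 1.6, 1.9]    # real system drifts after sample 3
    for idx, res in monitor(predicted, measured):
        print(f"sample {idx}: residual {res:.2f} above threshold, schedule inspection")
```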
Considering the above, the scope of this research line is to investigate how AI techniques and methods can contribute to the development of Digital Twins for industrial environments. Among the questions and challenges to be investigated, the following stand out:
- How to systematize the introduction of the concept and the development of Digital Twins in industrial environments of varying scope, covering everything from the conception of the model to the planning of the data collection infrastructure and the structuring of databases.
- How to explore the use of AI techniques, first in the conception of a Digital Twin model and then in its continuous improvement throughout the product/process life cycle.
- How to validate an AI-based Digital Twin, particularly for systems that are critical from a safety and security perspective.
- What are the technological challenges and limitations for introducing the concept of Digital Twin in industrial environments, and what are the minimum requirements in terms of network infrastructure, equipment connectivity, system sensing, and related aspects?
- How to explore the integration with other Industry 4.0 enabling technologies, such as robotics and extended reality.
- How to address the issue of data fidelity/representativeness for systems still under conception (e.g., using data generated by other systems, prototypes, or models).
- How to handle data heterogeneity (in format and quality) for legacy systems, as well as data interpretation and classification; a simple normalization sketch follows this list.
- How to ensure the robustness and adaptability of the developed solutions.
- How to implement different forms of feedback to the real system.
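Regarding the data heterogeneity question above, the sketch below normalizes records arriving from hypothetical legacy sources in different formats and units into a common schema before they reach the twin. Source formats, field names, and units are assumptions made only for illustration.

```python
# Hedged sketch of normalizing heterogeneous legacy data into a common schema.
# Source formats, field names, and units are illustrative assumptions.

import json

def normalize(record: dict, source: str) -> dict:
    """Map one raw record from a known legacy source to a common schema (SI units)."""
    if source == "plc_csv":
        # e.g. {"ts": "1700000000", "temp_F": "212.0"}
        return {
            "timestamp": int(record["ts"]),
            "temperature_c": (float(record["temp_F"]) - 32.0) * 5.0 / 9.0,
        }
    if source == "scada_json":
        # e.g. {"time": 1700000000, "temperature": {"value": 100.0, "unit": "C"}}
        return {
            "timestamp": int(record["time"]),
            "temperature_c": float(record["temperature"]["value"]),
        }
    raise ValueError(f"unknown source: {source}")

if __name__ == "__main__":
    raw_csv_row = {"ts": "1700000000", "temp_F": "212.0"}
    raw_json_msg = json.loads('{"time": 1700000000, "temperature": {"value": 100.0, "unit": "C"}}')
    print(normalize(raw_csv_row, "plc_csv"))
    print(normalize(raw_json_msg, "scada_json"))
```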
To address these questions and challenges, solutions must rely on the following pillars of Industry 4.0:
- Simulation - Simulation forms the foundation of Digital Twins; as such, simulation models can be enhanced using AI, while AI models can be improved through data generated by the Digital Twin without causing disruptions in real systems (a sketch of this idea appears after this list).
- Internet of Things (IoT) - In Industry 4.0, what sets the Digital Twin apart from a traditional model is its reliance on real-time data, allowing it to mirror the real system at every instant.
- Big Data & Analytics - This pillar aligns closely with AI, as it is responsible for the continuous analysis of the data collected from the real system.
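To illustrate the Simulation pillar referenced above, the sketch below uses the twin's simulation model as a data generator: synthetic runs, including degraded-behavior scenarios that would be costly or unsafe to reproduce on the real system, are used to fit a simple AI model. The dynamics, fault injection, and nearest-centroid classifier are illustrative assumptions only.

```python
# Hedged sketch: the twin's simulator generates labeled data (normal vs. degraded
# runs) that trains a simple AI model, without disturbing the real system.
# Dynamics, fault injection, and the nearest-centroid classifier are assumptions.

import random

def simulate_run(gain: float, steps: int = 30, u: float = 2.0, tau: float = 10.0):
    """Run the twin's model and return a simple feature: the final output level."""
    y = 0.0
    for _ in range(steps):
        y += (gain * u - y) / tau + random.gauss(0.0, 0.01)  # small simulated noise
    return y

def train_centroids(samples):
    """Compute one centroid per label from (feature, label) pairs."""
    sums, counts = {}, {}
    for feature, label in samples:
        sums[label] = sums.get(label, 0.0) + feature
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(feature, centroids):
    """Assign the label of the nearest centroid."""
    return min(centroids, key=lambda label: abs(feature - centroids[label]))

if __name__ == "__main__":
    random.seed(0)
    # Generate synthetic training data from the simulator: nominal vs. degraded gain.
    data = [(simulate_run(gain=1.5), "normal") for _ in range(50)]
    data += [(simulate_run(gain=0.8), "degraded") for _ in range(50)]
    centroids = train_centroids(data)
    # The trained model can later score features computed from real measurements.
    print(classify(simulate_run(gain=0.8), centroids))  # expected: "degraded"
```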
Among the expected beneficial impacts are:
- Reduction in product/process development cycles, resulting from the early detection of problems during the design and implementation phases.
- Real-time monitoring of the product/process, enabling the anticipation of issues and process improvements during system operation.
- The possibility of experimenting with the product/process in a simulated manner without disturbing the real system.
- Greater robustness, provided by the analysis of scenarios that consider failures and unexpected events.
- Increased system autonomy and adaptability, facilitated by its capacity to 'learn' from the collected data.
 
References