
The Institute of Informatics & Telecommunications, along with our colleagues in the MANOLO project, participated in the 13th EETN Conference on Artificial Intelligence (SETN 2024), held in Piraeus, Greece on September 11-13, 2024, organised by the Hellenic Artificial Intelligence Society (EETN) in collaboration with the Department of Informatics of the University of Piraeus.

The MANOLO project develops a complete and trustworthy stack of algorithms and tools that help AI systems achieve better efficiency and seamless optimisation of the operations, resources and data required to train, deploy and run high-quality, lighter AI models in both centralised and cloud-edge distributed environments. In this context, we presented two recent research outcomes on the trustworthiness of AI systems.

On the first day of the conference, our colleague Natalia Koliou presented the paper “Comparing Prior and Learned Time Representations in Transformer Models of Timeseries”, as part of the Workshop on AI in Natural Sciences and Technology (AINST). This paper presents experiments aiming to advance our ability to explain and control the domain knowledge embedded in the time representation of a Transformer network, within the general framework of trustworthy AI. Specifically, we used data for predicting the energy output of solar panels, a task that exhibits known periodicities (daily and seasonal). We used this well-understood task to (a) test how well we can control the results by explicitly fixing the time representation to what we know to be appropriate; and (b) see if we can interpret the representation that the network learned and, if so, whether this learned representation is what we expected. We found that trying to control the representation reduces performance due to side-effects that are difficult to mitigate; on the other hand, when analysing the learned time representation we found that the network learned what we expected. We conclude with insights on how to place humans into the learning loop in meaningful ways to improve the robustness and trustworthiness of Transformer networks.
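As an illustration only (this sketch is not taken from the paper), a fixed, prior time representation for a task with known daily and yearly periodicities can be built by hard-coding sinusoidal features at those periods, instead of letting the network learn its own time embedding. The function name and the choice of hours as the time unit are assumptions made for this example:

```python
import numpy as np

def fixed_time_features(timestamps_hours):
    """Encode time with sinusoids at known periods (daily = 24 h,
    yearly ~ 8766 h). This is a 'prior' representation: the
    periodicities are fixed by domain knowledge rather than learned."""
    t = np.asarray(timestamps_hours, dtype=float)
    periods = [24.0, 24.0 * 365.25]  # daily and yearly cycles, in hours
    feats = []
    for p in periods:
        feats.append(np.sin(2 * np.pi * t / p))
        feats.append(np.cos(2 * np.pi * t / p))
    return np.stack(feats, axis=-1)  # shape (len(t), 4)
```

Features built this way are periodic by construction (the daily pair repeats exactly every 24 hours), which is the kind of constraint the experiments contrast against a representation the network learns on its own.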

Furthermore, on the last day of the conference, Stasinos Konstantopoulos presented the paper “On the Reliability of Artificial Intelligence Systems”. This position paper proposes a set of concrete technical requirements for robust AI methods, covering the complete life-cycle of an AI system, from its design and operational monitoring and control to its behaviour when it fails. The paper briefly reviews Explainable AI, Concept Whitening, and Neurosymbolic AI methods from the perspective of satisfying these requirements, and concludes with future work that can better align these AI fields with the goal of trustworthy AI.
