Michalis Papakostas, recipient of a Joint PhD scholarship at the Institute of Informatics and Telecommunications, NCSR Demokritos, and the Department of Computer Science and Engineering at the University of Texas at Arlington (UTA), has successfully defended his PhD thesis in the field of Machine Learning with applications in Human-Computer Communication, carried out in collaboration with the HERACLEIA Human-Centered Computing Laboratory and the SKEL Lab. The title of his thesis is: “From body to brain: using Artificial Intelligence to identify user skills & intentions in interactive scenarios”.
Artificial Intelligence has probably been the most rapidly evolving field of science during the last decade. Its numerous real-life applications have radically altered the way we experience daily living, with great impact on some of the most basic aspects of human life including, but not limited to, health and well-being, communication and interaction, education, driving, daily planning and entertainment. Human-Computer Interaction (HCI) is the field of Computer Science lying at the epicenter of this evolution, responsible for transforming rudimentary research findings and theoretical principles into intuitive tools that enhance human performance, increase productivity and ensure safety. Two of the core questions that HCI research tries to address are: a) what does the user want? and b) what can the user do? Multi-modal user monitoring has shown great potential towards answering these questions.
Modeling and tracking different parameters of a user’s behavior has provided groundbreaking solutions in several fields, such as smart rehabilitation, smart driving and workplace safety. This research investigates the potential of multi-modal user monitoring for designing personalized scenarios and interactive interfaces, focusing on two research axes. Firstly, we explore the advantages of reusing existing knowledge across different information domains, application areas and individual users, in an effort to create predictive models that can extend their functionality across distinct HCI scenarios. Secondly, we try to enhance multi-modal interaction by accessing information that stems from more sophisticated and less explored sources, such as Electroencephalogram (EEG) and Electromyogram (EMG) analysis using minimally invasive sensors. We achieve this by designing a series of end-to-end experiments (from data collection to analysis and application) and by performing an extensive evaluation of various Machine Learning and Deep Learning approaches on their ability to model diverse signals of interaction.
As an outcome of this in-depth investigation and experimentation, we show state-of-the-art results on individual tasks related to user behavior monitoring, and we propose CogBeacon, a multi-modal dataset and data-collection platform for modeling events of cognitive fatigue and understanding their impact on human performance. To our knowledge, CogBeacon is the first framework aiming to create a standardized environment for capturing and annotating multi-sensing data for cognitive fatigue analysis.
His thesis committee consisted of:
•Professor Fillia Makedon, University of Texas at Arlington (supervisor)
•Dr. Vangelis Karkaletsis, Director of the Institute for Informatics and Telecommunications, NCSR Demokritos (co-supervisor)
•Professor Vassilis Athitsos, University of Texas at Arlington
•Professor Chengkai Li, University of Texas at Arlington
From June 2019, Michalis will be joining the Artificial Intelligence Laboratory at the University of Michigan, Ann Arbor, as a Post-Doctoral Research Fellow under the supervision of Professor Rada Mihalcea, where he will be working on projects related to multi-modal user modeling and monitoring.