The deployment of AI systems in mission- and safety-critical applications, such as autonomous driving and medical decision support, is predicated on techniques that can strengthen, or even guarantee, their safety and robustness. Neuro-symbolic (NeSy) techniques can facilitate progress in that direction by encouraging the compliance of deep learning predictors (e.g. perception networks) with functional specifications of a symbolic nature, related to safety and correctness. Consider, for instance, the ability to enforce symbolic safety constraints on the behaviour of a compositional autonomous system consisting of several neural components responsible for perception, control, action selection, etc.
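To give a flavour of this kind of constraint enforcement, the minimal sketch below masks the action distribution of a neural policy with a hard symbolic safety rule, in the spirit of the shielding approach of Yang et al. (2023) but greatly simplified. All names (`unsafe`, `shield`, the toy braking rule) are hypothetical illustrations, not taken from the cited works.

```python
# Illustrative sketch only: shielding a neural policy's action distribution
# with a hand-written symbolic safety constraint (hypothetical example).
import numpy as np

def unsafe(state, action):
    """Toy symbolic constraint: braking (action 0) is the only safe action
    when an obstacle is closer than 5 metres."""
    return state["obstacle_distance"] < 5.0 and action != 0

def shield(policy_probs, state):
    """Zero out the probability mass of constraint-violating actions and
    renormalise, so the agent can never select an unsafe action."""
    masked = np.array([0.0 if unsafe(state, a) else p
                       for a, p in enumerate(policy_probs)])
    if masked.sum() == 0.0:      # fall back to a designated safe action
        masked[0] = 1.0
    return masked / masked.sum()

# Raw network output vs. shielded distribution near an obstacle
state = {"obstacle_distance": 3.2}
print(shield(np.array([0.1, 0.6, 0.3]), state))   # -> [1. 0. 0.]
```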
The purpose of this thesis is to explore the application of existing approaches to safe and robust AI, such as safe sequential decision making or NeSy verification, to challenging application domains of a temporal nature, such as autonomous driving and robot navigation. To that end, existing techniques will be extended to a temporal setting where necessary, also exploring interactions with critical event detection and forecasting techniques.
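As a toy illustration of what a temporal extension of such constraints might look like, the sketch below checks an LTL-style "globally" safety property over a sequence of predicted states. The predicate `no_collision` and the hard-coded trajectory are hypothetical placeholders for the outputs of perception and forecasting models.

```python
# Illustrative sketch only: checking a simple temporal safety property
# ("the predicted trajectory never enters the unsafe region").
from typing import Callable, Sequence

def globally(prop: Callable[[dict], bool], trajectory: Sequence[dict]) -> bool:
    """LTL-style 'globally' operator: prop must hold at every time step."""
    return all(prop(state) for state in trajectory)

def no_collision(state: dict) -> bool:
    # Hypothetical predicate over a perception/forecasting model's output.
    return state["min_obstacle_distance"] > 1.0

# A forecasting model would produce such a trajectory; here it is hard-coded.
predicted = [{"min_obstacle_distance": d} for d in (4.2, 2.7, 1.6, 0.8)]
print(globally(no_collision, predicted))  # -> False: violated at t = 3
```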
The project requires good knowledge of Python programming and a solid background in deep learning and knowledge representation & reasoning.
References:
Yang, W. C., Marra, G., & De Raedt, L. (2023). Safe Reinforcement Learning via Probabilistic Logic Shields. IJCAI 2023.
Giunchiglia, E., Tatomir, A., Stoian, M. C., & Lukasiewicz, T. (2024). CCN+: A Neuro-Symbolic Framework for Deep Learning with Requirements. International Journal of Approximate Reasoning.
Xie, X., Kersting, K., & Neider, D. (2022). Neuro-Symbolic Verification of Deep Neural Networks. IJCAI 2022.