With the continuous advancement of machine learning systems, AI-based technologies play an increasing role in our everyday lives. From movie recommendations to autonomous driving, AI systems take over daily tasks and are expected to perform reliably. This reliance, however, rests on implicit assumptions: when scanning the road ahead, we assume that the AI driving assistant perceives objects as we do. Even when a leaf is rotated, we still perceive it as a leaf. Recent research shows, however, that this is not necessarily true for AI systems. Under certain rotations in space, an object can easily be misclassified as something else: a leaf can be perceived as a car given the 'right' rotation. Beyond these stark differences in object recognition, another question arises: how might the computational capacities of AI systems be integrated into human perception?
My doctoral project seeks to understand the differences between human and AI perception in order to address both challenges: avoiding potential misclassifications and pushing the boundaries of human perception. Hence, the project spans two research streams (see below): (1) how AI systems can be improved with models of human perception, and (2) how AI models can be used to augment human perception.
Improving AI systems
Increasing the robustness and computational efficiency of computer vision algorithms is a pressing challenge for current AI systems, one that human evolution has arguably solved: humans can perceive and interact with diverse environments with ease. Here, I examine how models of human perception can increase the robustness of AI systems.
Augmenting Human Perception
Sensory substitution and extension devices have shown that having a 'radar sense', or enabling blind people to navigate without canes, no longer belongs to science fiction but is now reality. Here, I examine how coupling artificial intelligence (AI) with artificial sensors and the human senses pushes these possibilities even further.