Effective autonomous navigation depends on three critical components: mapping, localisation, and planning. Traditionally, navigation pipelines have handled these tasks separately, but IIITH’s car adopts a vision-language integration approach, which lets it process natural language commands such as “take a left near the food stall.”
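To make the idea concrete, here is a minimal, purely illustrative sketch (not IIITH’s actual system) of how a language command could be grounded against landmarks reported by a vision model. The `Detection` structure, the lexical matching rule, and all names are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str           # e.g. "food stall", produced by an upstream vision model
    bearing_deg: float   # landmark angle relative to the vehicle heading

def ground_command(command: str, detections: list[Detection]):
    """Return (turn_direction, matched_landmark) for a simple spoken instruction."""
    text = command.lower()
    direction = "left" if "left" in text else "right" if "right" in text else None
    for det in detections:
        if det.label.lower() in text:   # naive lexical grounding of the landmark
            return direction, det
    return direction, None

# "take a left near the food stall" -> turn left at the detected food stall
detections = [Detection("food stall", 15.0), Detection("bus stop", -40.0)]
print(ground_command("take a left near the food stall", detections))
```

A real vision-language system would replace the keyword matching with learned language and image embeddings, but the interface is the same: a free-form command in, a grounded manoeuvre out.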
To address the inherent challenges of perception-based navigation, the system employs a differentiable planning module within its neural network framework. This allows real-time error correction and ensures consistent, collision-free path planning even when upstream predictions from the perception models are imprecise.
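The sketch below illustrates the general idea of differentiable planning, under assumptions of ours rather than details from the article: a path of waypoints is optimised by gradient descent against a cost map predicted by a perception network, so the same gradients that correct the path can, during training, also flow back into perception. All function names and weights are hypothetical.

```python
import torch
import torch.nn.functional as F

def plan_path(cost_map, start, goal, n_waypoints=20, iters=100, lr=0.05):
    """cost_map: (1, 1, H, W) tensor in [0, 1], higher = more likely collision.
    start, goal: (2,) tensors in normalised [-1, 1] image coordinates."""
    # Initialise waypoints on the straight line from start to goal.
    alphas = torch.linspace(0.0, 1.0, n_waypoints).unsqueeze(1)          # (K, 1)
    waypoints = (start * (1 - alphas) + goal * alphas).clone().requires_grad_(True)
    optimiser = torch.optim.Adam([waypoints], lr=lr)

    for _ in range(iters):
        optimiser.zero_grad()
        # Bilinearly sample the predicted cost map at each waypoint (differentiable).
        grid = waypoints.view(1, 1, n_waypoints, 2)                      # (1, 1, K, 2)
        collision = F.grid_sample(cost_map, grid, align_corners=True).mean()
        # Encourage a smooth path that stays anchored to its endpoints.
        smoothness = (waypoints[1:] - waypoints[:-1]).pow(2).sum()
        endpoints = (waypoints[0] - start).pow(2).sum() + (waypoints[-1] - goal).pow(2).sum()
        loss = 10.0 * collision + smoothness + 100.0 * endpoints
        loss.backward()   # gradients would also reach cost_map if it came from a network
        optimiser.step()
    return waypoints.detach()

# Example with a dummy perception output: one obstacle block in the centre.
cost_map = torch.zeros(1, 1, 64, 64)
cost_map[..., 24:40, 24:40] = 1.0
path = plan_path(cost_map, torch.tensor([-0.9, -0.9]), torch.tensor([0.9, 0.9]))
```

Because every step is differentiable, an inaccurate cost map still yields a usable, collision-avoiding path, and the planning loss can be used end to end to improve the perception model itself.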
Source: India Today