This project explores the use of Neural Architecture Search (NAS) and knowledge distillation to optimise deep learning models for perception tasks in lunar robotic missions. The goal is to design neural models for rock segmentation, obstacle detection, and terrain classification that are automatically adapted and optimised for efficient execution on resource-constrained Edge AI devices (e.g., NVIDIA Jetson platforms). The work involves selecting representative lunar-analogue perception tasks, designing appropriate search spaces, and applying NAS and knowledge-distillation methods to identify architectures that balance accuracy, robustness, and computational efficiency. The resulting models will undergo hardware-aware optimisation, including quantisation and pruning, to ensure real-time performance within the strict power and memory constraints typical of onboard computers on planetary rovers.
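As a concrete illustration of the knowledge-distillation component, the sketch below shows the standard combined objective: a temperature-softened KL term that transfers the teacher's output distribution to the student, plus a hard-label cross-entropy term. The temperature `T` and mixing weight `alpha` are illustrative hyperparameters, not values prescribed by this project.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Weighted sum of a soft-target KL term and hard-label cross-entropy."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # KL(teacher || student), scaled by T^2 so gradients stay comparable across T
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)),
                axis=-1)
    # Ordinary cross-entropy against the ground-truth labels (T = 1)
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * hard))
```

In a training loop this scalar would be computed from the teacher's and student's logits on each batch; here numpy stands in for whatever deep-learning framework the project ultimately adopts.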
Key Focus Areas:
Algorithm Development: Design NAS workflows tailored to lunar robotic perception. Define search spaces and optimisation objectives for tasks such as rock segmentation, obstacle detection, and terrain classification. Implement and evaluate NAS strategies that effectively trade off accuracy, latency, and resource consumption.
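To make the search-space and trade-off ideas concrete, the following minimal sketch runs a random search over a hypothetical backbone search space (depth, width, kernel size are illustrative knobs), maximising a proxy accuracy subject to a latency budget. Both proxy models are stand-ins invented for this example; a real workflow would substitute measured accuracy and on-device latency.

```python
import random

# Hypothetical search space for a small perception backbone.
SEARCH_SPACE = {
    "depth": [2, 3, 4],     # number of blocks
    "width": [16, 32, 64],  # channels per block
    "kernel": [3, 5],       # convolution kernel size
}

def proxy_latency_ms(cfg):
    """Stand-in cost model: latency grows with depth, width, and kernel area."""
    return 0.05 * cfg["depth"] * cfg["width"] * cfg["kernel"] ** 2 / 9.0

def proxy_accuracy(cfg):
    """Stand-in accuracy model with diminishing returns in capacity."""
    capacity = cfg["depth"] * cfg["width"] * cfg["kernel"]
    return 1.0 - 1.0 / (1.0 + 0.01 * capacity)

def search(budget_ms, trials=50, seed=0):
    """Random search: maximise proxy accuracy subject to a latency budget."""
    rng = random.Random(seed)
    best, best_acc = None, -1.0
    for _ in range(trials):
        cfg = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        if proxy_latency_ms(cfg) > budget_ms:
            continue  # violates the onboard latency constraint
        acc = proxy_accuracy(cfg)
        if acc > best_acc:
            best, best_acc = cfg, acc
    return best, best_acc
```

Random search is only a baseline; the same constrained-objective structure carries over to evolutionary, Bayesian, or differentiable NAS strategies.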
Hardware Optimisation and Benchmarks: Adapt the NAS-generated models for efficient deployment on Edge AI platforms (e.g., Jetson, MemryX, Axelera, Hailo). Apply hardware-aware model compression techniques, including pruning, quantisation, and TensorRT optimisation, to achieve real-time inference within tight computational and energy budgets.
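The two compression techniques named above can be sketched in a few lines: unstructured magnitude pruning zeros the smallest weights, and symmetric per-tensor int8 quantisation maps floats to an 8-bit grid via a single scale. This is a simplified numpy illustration of the ideas; deployment toolchains such as TensorRT apply calibrated, per-channel variants of the same operations.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantisation: w is approximated by scale * q."""
    scale = max(float(np.abs(weights).max()), 1e-8) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover float approximations from int8 codes."""
    return q.astype(np.float32) * scale
```

The round-trip error of this scheme is bounded by half the quantisation step (`scale / 2`), which is what makes 8-bit inference viable when the scale is calibrated per layer.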
Expected Outcomes:
An algorithmic framework based on Neural Architecture Search for generating perception models optimised for lunar robotic applications and constrained onboard computing resources.
A ready-to-use pipeline for hardware-aware model optimisation and deployment on Edge AI devices, producing efficient, real-time perception modules (e.g., rock segmentation) suitable for integration into lunar rover navigation systems.