Monocular Depth Estimation for Prosthesis Control
Lower limb prostheses support individuals with lower limb amputations in daily tasks such as walking, stair climbing, and running. To adapt to the current gait situation, it is beneficial to sense the surrounding environment, e.g., with visual sensors. The resulting situation awareness not only helps the prosthesis adapt but can also prevent accidents such as falls.
Monocular depth estimation is a particularly promising input modality for lower limb prostheses. However, evaluating dedicated depth sensors mounted on a prosthesis is challenging. To create situation awareness while avoiding this and other limitations, this thesis aims to derive depth estimates from a single RGB camera. Such a sensor is small and can be attached to the lower limb prosthesis without adding significant weight.
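Monocular depth networks typically predict depth only up to an unknown scale and shift, since a single RGB image carries no absolute metric reference. A common evaluation technique (an illustrative sketch, not necessarily the method this thesis will adopt) is to align the prediction to ground truth with a least-squares scale-and-shift fit before computing error metrics:

```python
import numpy as np

def align_scale_shift(pred, gt):
    """Fit s, t minimizing ||s * pred + t - gt||^2 and return the aligned map.

    `pred` is a relative depth prediction, `gt` a metric ground-truth map
    of the same shape (both hypothetical arrays for illustration).
    """
    A = np.stack([pred.ravel(), np.ones(pred.size)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, gt.ravel(), rcond=None)
    return s * pred + t

# Toy example: ground truth that is an affine transform of the prediction
pred = np.linspace(0.1, 1.0, 100).reshape(10, 10)
gt = 2.0 * pred + 1.0
aligned = align_scale_shift(pred, gt)
```

After alignment, standard metrics such as absolute relative error can be computed between `aligned` and `gt`.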
The goal is to develop monocular depth estimation algorithms that operate in real time for the lower limb camera configuration. To that end, state-of-the-art approaches based on deep neural networks will be evaluated for real-time monocular depth estimation under hardware constraints, e.g., on an embedded system. The expected outcome of the thesis is a lightweight model that achieves accurate depth estimates in real-world experiments.
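The real-time requirement can be made concrete with a simple latency benchmark: average the per-frame inference time of a candidate model and compare it against a frame budget. The sketch below assumes a hypothetical 30 FPS target and a placeholder `model_fn`; the actual frame rate and model are to be determined in the thesis:

```python
import time

REALTIME_BUDGET_S = 1.0 / 30.0  # assumed 30 FPS target, not fixed by the thesis

def benchmark(model_fn, frame, n_frames=50):
    """Return the average per-frame latency of `model_fn` in seconds."""
    start = time.perf_counter()
    for _ in range(n_frames):
        model_fn(frame)
    return (time.perf_counter() - start) / n_frames

def meets_realtime(per_frame_seconds, budget=REALTIME_BUDGET_S):
    """True if the measured per-frame latency fits the real-time budget."""
    return per_frame_seconds <= budget

# Placeholder model standing in for a lightweight depth network
def dummy_model(frame):
    return frame

latency = benchmark(dummy_model, frame=None)
```

On an embedded target, the same harness would wrap the deployed network's inference call, so candidate architectures can be compared under identical conditions.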