
IMBALS

 IMage-BAsed Landing System



About IMBALS

The IMBALS project aims to use certifiable computer vision to perform a fully automatic visual landing of a large passenger aircraft such as the Airbus A320. The research is part of the Disruptive Cockpit concept, which investigates safe operation with a single pilot instead of the usual two.

During this landing there is no pilot input and no GPS, ILS, or any other beacon: the only sensor is a single forward-facing camera on the aircraft.

Even outside the landing phase, the results of the IMBALS project can provide better information to the pilot and enhance the crew's situational awareness during any autolanding, by supporting a Combined Vision System (CVS) based HMI in the Disruptive Cockpit.

The project puts a strong emphasis on the safety and certifiability of the system, including the challenge of certifying the image processing algorithms.

The IMBALS project is conducted by a heterogeneous consortium with Airbus as the topic leader. 

Video: Live aircraft video processed at 30 fps with certified software

Sol.One's Role

Provide DO-254 DAL A Hardware

The computer vision part of the IMBALS program ran on Sol.One's SolAero, a DAL A certifiable hardware platform.

Extend Sol Language

The Sol language is a software modelling environment from which certified software can be generated automatically. Originally designed for path planning, aircraft control, and HMI, the language was extended to also express computer vision algorithms, allowing the landing algorithm to be modelled in Sol.

Build Certifiable Blocks

The algorithm is built from computer vision blocks defined by K.U.Leuven. These computer vision primitives were implemented in the Sol language for the SolAero platform.
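To illustrate what such a primitive looks like, here is a minimal sketch in Python of a Sobel edge-magnitude block (the actual blocks were written in Sol, not Python; the flat-buffer, fixed-bounds style mirrors the no-dynamic-allocation discipline typical of certifiable code, but the function and its interface are illustrative assumptions, not the project's real primitives):

```python
def sobel_edge_magnitude(img, w, h):
    """Approximate gradient magnitude with 3x3 Sobel kernels.

    img is a flat, row-major list of grayscale values for a w x h
    image. Border pixels are left at zero; the output buffer is
    allocated once up front, as certifiable code would require.
    """
    out = [0] * (w * h)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Neighbour lookup in the 3x3 window, row-major indexing.
            p = lambda dx, dy: img[(y + dy) * w + (x + dx)]
            # Horizontal gradient (Sobel Gx kernel).
            gx = (p(1, -1) + 2 * p(1, 0) + p(1, 1)) \
               - (p(-1, -1) + 2 * p(-1, 0) + p(-1, 1))
            # Vertical gradient (Sobel Gy kernel).
            gy = (p(-1, 1) + 2 * p(0, 1) + p(1, 1)) \
               - (p(-1, -1) + 2 * p(0, -1) + p(1, -1))
            # L1 norm of the gradient: cheap and monotone in edge strength.
            out[y * w + x] = abs(gx) + abs(gy)
    return out
```

A block like this responds strongly along intensity discontinuities such as runway edge lines, which is why edge extraction is a typical first stage in a runway-detection pipeline.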

Implement Algorithm

The algorithm designed by the K.U.Leuven was rewritten in the Sol syntax, and compiled to the target platform into a DO-178C DAL A certifiable application.

Real-Time Video Processing (30 fps)

The blocks were refined and optimized, and as a result the video could be processed as fast as it arrived, at 30 frames per second. The generated certifiable software provided steering corrections to the autopilot at 30 Hz.
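At 30 fps every frame has a hard budget of 1/30 s, roughly 33.3 ms. A minimal sketch of the kind of per-frame deadline check a hard real-time pipeline performs (all names are hypothetical; in a certified system an overrun would trigger a monitored fallback rather than return a boolean):

```python
import time

FRAME_BUDGET_S = 1.0 / 30.0  # 30 fps => ~33.3 ms per frame

def run_frame(process_frame, frame):
    """Run one processing step and report whether it met its deadline.

    process_frame stands in for the vision pipeline; the monotonic
    clock is used so wall-clock adjustments cannot skew the budget.
    """
    start = time.monotonic()
    result = process_frame(frame)
    elapsed = time.monotonic() - start
    return result, elapsed <= FRAME_BUDGET_S
```

Keeping every frame within this budget is what lets the downstream autopilot receive steering corrections at a steady 30 Hz.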

Landing

The algorithm identified the runway from a distance based on expected features and tracked it as the aircraft approached, accurately providing the aircraft's pose (position and heading) relative to the runway. It continued to do so through touchdown and into roll-out, demonstrating a safe landing using video recorded from a real aircraft landing.
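The geometry behind such a pose estimate can be sketched with a simple pinhole camera model. The function below recovers range, lateral offset, and heading from the pixel positions of the runway edges; the single-reference-row simplification and all parameter names are illustrative assumptions, not the project's actual method:

```python
import math

def runway_relative_pose(u_left, u_right, u_vp, f_px, cx, runway_width_m):
    """Rough aircraft pose relative to the runway, pinhole camera model.

    u_left, u_right: pixel columns where the runway edges cross a
    reference image row; u_vp: pixel column of the edges' vanishing
    point; f_px: focal length in pixels; cx: principal point column;
    runway_width_m: known physical runway width.
    """
    w_px = u_right - u_left                       # apparent runway width in pixels
    dist_m = f_px * runway_width_m / w_px         # range along the optical axis
    u_centre = 0.5 * (u_left + u_right)
    lateral_m = dist_m * (u_centre - cx) / f_px   # offset from the centreline
    heading_rad = math.atan((u_vp - cx) / f_px)   # yaw relative to the runway axis
    return dist_m, lateral_m, heading_rad
```

For example, a 45 m wide runway whose edges appear 100 pixels apart through a 1000-pixel focal length sits about 450 m away; a centred runway image with the vanishing point on the principal point gives zero lateral offset and zero heading error.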


Partners

SCIOTEQ, Belgium (Coordinator)

Katholieke Universiteit Leuven, Belgium

Sol.One, Belgium

Tekever ASDS, Portugal
