AI Co-pilots: More Eyes in the Sky

How AI-driven Technologies are Supporting Pilots to Ensure Safer Skies

By Dan Carmel, MEng.

FAA data shows that human error is the leading cause of commercial airline crashes, meaning that a pilot either contributed to, or was directly responsible for, an incident through issues such as lack of knowledge, improper training, misjudgment, or poor decision-making.

The high incidence of pilot error in aviation crashes highlights the importance of proper pilot training, as well as the implementation of safety measures to reduce the risk of accidents. Autopilots can be considered one of the earliest forms of co-pilot technology. Now, as part of the AI revolution, technology companies such as Daedalean are improving in-flight safety via pilot-aircraft interface technology, which, of course, must be underpinned by a culture of safety. This ‘blend of human intuition with machine precision’ is creating a more symbiotic relationship between pilot and aircraft and giving rise to a truly AI-driven co-pilot, or AICOPI for short.

Situational awareness is a crucial part of a pilot’s activities, and AICOPIs offer a reliable safety net for those moments when human attention wavers. AICOPIs can scan the skies all around an aircraft in flight, analyzing incoming images for vital information and interpreting them to facilitate pilot decision-making. A key intent that regulators are looking for in this technology is to ensure that the AICOPI doesn’t merely replace human judgment but complements it, leading to enhanced safety and collaboration during flight operations.

A key differentiator between AICOPIs and traditional autopilots is the ability to analyze visual imagery in real time. One such AICOPI, MIT’s ‘Air-Guardian’, identifies early signs of potential risk through neural network-based image mapping and attention-marker algorithms, which can identify and decipher an increasingly sophisticated range of information, from atmospheric phenomena to other aircraft, ground features, birds and landmarks. Rather than acting as an emergency backup, an AICOPI is designed to work as a proactive partner for human pilots, combining information from weather, flight plans, NOTAMs and sensor readings to provide real-time assistance and support during critical moments in flight.
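The fusion of multiple independent feeds into a single, prioritized stream of pilot advisories can be sketched in a few lines. This is a purely illustrative example, not the design of any real AICOPI product; the `Advisory` type, severity scale and feed names are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class Advisory:
    source: str      # e.g. "weather", "notam", "sensor" (hypothetical feeds)
    message: str
    severity: int    # 1 = informational ... 3 = critical (assumed scale)

def fuse_advisories(weather, notams, sensor_alerts):
    """Merge advisories from independent feeds and rank the most
    urgent first, so the pilot sees critical items at the top."""
    combined = weather + notams + sensor_alerts
    return sorted(combined, key=lambda a: a.severity, reverse=True)

advisories = fuse_advisories(
    weather=[Advisory("weather", "Convective cell 15 nm ahead", 3)],
    notams=[Advisory("notam", "Runway closed at destination", 2)],
    sensor_alerts=[Advisory("sensor", "Traffic, 2 o'clock, same altitude", 3)],
)
```

A production system would of course weigh far more context (phase of flight, aircraft state, crew workload) than a simple severity sort, but the principle of merging heterogeneous sources into one ranked picture is the same.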

By tracking a pilot’s eye movements, an AICOPI can understand where the pilot is focusing their attention and provide relevant information and alerts accordingly. For example, if the pilot is looking at an instrument panel, the AICOPI can provide a clear readout and analysis of instrument data. Or, if the pilot is looking out the window at a potential hazard, the AICOPI can analyze that hazard, provide further insights and even issue an audible warning. AICOPIs are also being designed to monitor the pilot: where they’re looking and how much attention they’re paying to the information they’re actually looking at.
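The gaze-dependent alerting logic described above can be illustrated with a minimal routing function. The zone names and channel labels here are hypothetical placeholders, not real avionics identifiers:

```python
def route_alert(gaze_target: str, hazard_zone: str) -> str:
    """Decide how to present a hazard alert given where the pilot is
    looking (gaze zones are hypothetical labels for this sketch)."""
    if gaze_target == hazard_zone:
        # Pilot is already looking at the hazard: enrich, don't alarm.
        return "visual_overlay"
    elif gaze_target == "instrument_panel":
        # Surface the alert on the display currently in use.
        return "panel_annotation"
    else:
        # Attention is elsewhere: escalate to an audible warning.
        return "audible_warning"
```

The design choice worth noting is that the same hazard produces different presentations depending on attention, which is exactly the complement-not-replace behavior regulators are asking for.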

Bringing all of these aspects together, the AICOPI can provide pilots with a comprehensive understanding of the current situation, identifying potential risks and offering recommendations. The incredibly complex neural networks that AICOPI technology relies on can adapt to new information rather than simply processing the data they were trained on.

AICOPI systems can aid pilots in dealing with multiple situations and hazards (theoretical image)

Is AICOPI technology certifiable?

The European Union Aviation Safety Agency (EASA) expects to certify the first integration of artificial intelligence technology in aircraft systems by the end of 2025. In fact, EASA has already received the first project applications making limited use of AI/ML solutions; these have been for first-assistance scenarios, corresponding to Level 1 AI applications for situational awareness.

The next steps – or Level 2 AI applications – will likely be a gradual ramp-up to more automated solutions to assist the pilot flying in extended minimum crew operations (eMCO) and single-pilot operations (SPO) in large commercial air transport, or, in the ATM domain, for conflict detection and resolution (CDR) through the use of virtual co-controllers. We can expect to see these technologies in the early 2030s. From 2035, we can expect to see AI-enabled, advanced automation with human supervision (Level 3A) and ultimately, from 2050 onwards, without human oversight (Level 3B).

For certification purposes, EASA has published its first usable guidance for Level 1 AI/ML (human assistance/augmentation), work on which began in 2021. EASA plans to be a leading oversight authority for AI technologies in aviation, as it should be. Its AI Roadmap, last updated in 2023, states its objectives:

  • Develop a human-centric AI trustworthiness framework
  • Make EASA a leading oversight authority for AI
  • Support European aviation industry leadership in AI
  • Contribute to an efficient European AI research agenda
  • Contribute actively to EU AI strategies and initiatives

As part of the EU’s AI Act (proposed in 2021 and in its negotiation stage at the time of writing), EASA is working with industry, the EU Commission, member states and research institutions to create regulations, means of compliance and standards that establish ethical guidelines for oversight, governance, transparency and trustworthiness.

What does this mean for the cybersecurity of cockpits?

In Europe, there are various established and emerging regulations that aircraft operators, avionics suppliers and MRO providers will need to meet. For example, with AICOPI technologies being deployed in cockpits, stakeholders will need to adopt strong cybersecurity practices (e.g. NIS2, EU reg 2023/203 or Part-IS / Part-AI) that help protect data integrity and performance, aiding compliance with the EU AI Act’s requirements for data quality and security.

The industry stakeholders and supply chain involved in developing, deploying and maintaining AICOPI systems will need a harmonized approach to compliance with these standards. Together, these frameworks create a robust regulatory ecosystem for addressing emerging risks in aviation, particularly as AI and cybersecurity threats become increasingly intertwined in a tool like an AICOPI.

Companies developing and deploying AICOPIs need to work together to streamline compliance across these frameworks. A key approach is to avoid duplication of effort and ensure comprehensive risk management, especially in:

  • AI-specific risks (such as AICOPI algorithm bias and lack of robustness)
  • Cybersecurity threats (such as cockpit data breaches)
  • Aviation-specific threats (such as vulnerabilities in air traffic control systems)
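One way to keep these three risk categories from being assessed in separate silos is a shared risk register that maps each risk to every framework governing it. The register below is a hypothetical sketch; the risk names and framework assignments are illustrative assumptions, not a compliance mapping:

```python
# Hypothetical risk register: each risk category is mapped to the
# framework(s) it is assumed to fall under, and a domain tag.
RISK_REGISTER = {
    "algorithm_bias":      {"frameworks": ["EU AI Act"],            "domain": "AI"},
    "model_robustness":    {"frameworks": ["EU AI Act", "Part-AI"], "domain": "AI"},
    "cockpit_data_breach": {"frameworks": ["NIS2", "Part-IS"],      "domain": "cyber"},
    "atc_vulnerability":   {"frameworks": ["EU reg 2023/203"],      "domain": "aviation"},
}

def frameworks_for(domain: str) -> set[str]:
    """Collect every framework governing risks tagged with a domain,
    so one assessment can cover all applicable obligations at once."""
    return {fw
            for risk in RISK_REGISTER.values() if risk["domain"] == domain
            for fw in risk["frameworks"]}
```

Querying the register by domain is what prevents the duplication of effort the text warns about: a single cyber risk assessment can be shown to satisfy both NIS2 and Part-IS obligations at once.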

AICOPI technologies can aid pilots in night-time, congested and low-visibility scenarios

That means that OEMs need to work with their suppliers at Tier 1, Tier 2 and beyond to not only flow down their own requirements but also verify that they are all working together to meet the needs of these frameworks.

What would this approach look like in practice?

  • Adopting a unified risk management framework, implementing ISO 27001, that integrates risk assessments for AICOPIs, cybersecurity and aviation-specific threats
  • Centralizing governance by creating cross-functional teams to address AI governance, cybersecurity and aviation-specific risks
  • Securing all AICOPI systems by design and protecting data integrity in line with the technical standards outlined by the selected frameworks
  • Consolidating reporting mechanisms across the selected frameworks, covering AI malfunctions (as per the AI Act), cybersecurity breaches (as per NIS2) and aviation-specific ICT incidents (as per EU reg 2023/203)
  • Collaborating with third parties to conduct regular tests and audits covering AICOPI systems, supply chain resilience and oversight accountability
  • Training and monitoring personnel on compliance, cybersecurity awareness and ethical AI use
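The consolidated-reporting bullet above can be pictured as a single routing layer that sends each incident class to the channel its regulation requires. The channel descriptions below are illustrative labels only, not the actual notification procedures defined by those regulations:

```python
def report_route(incident_type: str) -> str:
    """Route an incident to the reporting channel assumed to match
    the relevant regulation (illustrative mapping only)."""
    routes = {
        "ai_malfunction": "AI Act serious-incident report",
        "cyber_breach": "NIS2 national authority notification",
        "aviation_ict_incident": "Part-IS occurrence report (EU reg 2023/203)",
    }
    # Anything unclassified goes to internal review rather than
    # being silently dropped.
    return routes.get(incident_type, "internal review queue")
```

Keeping one routing table, rather than three separate reporting processes, is what makes the compliance picture auditable across the whole supply chain.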

In summary

AI-augmented situational awareness systems (AICOPIs) fall under the high-risk AI category due to their role in critical safety functions, as well as their reliance on heavily interconnected digital infrastructure, including sensors, communication systems and data processing platforms.

In Europe, the AI Act, the NIS2 Directive and aviation-specific cyber standards mean that a cross-supply-chain regulatory approach is needed, primarily incorporating unified risk management, secure-by-design development, training and human factors, as well as collaborative oversight with national regulators.

Globally, directives and frameworks vary; however, many of the core principles overlap, and aviation stakeholders across the supply chain seeking to deploy and maintain AICOPIs will need to ensure the technology is trustworthy, secure and transparent while maintaining the highest standards of aviation safety and reliability.

The ultimate aim of AICOPIs is to alleviate pilot workloads, aid in cockpit operations and proactively identify hazards

