
#SpaceWatchGL Opinion: Europe Has The Potential To Lead The Way In Space Exploration Policy


By Ioana Bratu

In February 2020, the European Commission issued the White Paper on Artificial Intelligence (AI) – a European Approach to Excellence and Trust (White Paper). The document follows a series of draft initiatives concerning European AI policy.

The White Paper identifies a few key sectors, alongside AI, in which the EU has the potential to become a global leader, including health, transport, finance, agri-food value chains, energy, and the environment, as well as Earth observation and space.

In relation to these sectors, the Commission recommends a project-based synchronization. For example, AI systems and sustainability objectives can be connected: AI can play a significant role in achieving the United Nations’ Sustainable Development Goals, and it is a critical enabler for attaining the goals of the recent European Green Deal. Another example relates to the future European Defence Fund and Permanent Structured Cooperation (PESCO), which will provide opportunities for research and development in AI. The Commission recommends that these research projects be synchronized with the wider EU civilian programmes devoted to AI.

The White Paper provides no further details about how this synchronization will work in practice. It is not clear whether a policy synchronization is envisaged for such sectors, and there is no mention of a potential institutional cross-check that would oversee the synchronization process.

Nevertheless, it is worth noting that by mentioning space as a sector of major importance, the EU restates its ambitions in the field. Should this be interpreted as a new step in the race for regulatory development in this sector? If so, what would be the EU’s competitive advantage in this race?

To advance in the regulatory race, the EU could initiate the policy synchronization process by connecting the space sector’s policies with the AI sector’s policies, where a number of initiatives have already been drafted. These two categories of policies should be intrinsically linked, since space technology equipped with advanced AI capabilities is already being deployed in space exploration activities.

The need for synchronization was already partially indicated in the EU’s Proposal for a Regulation on the New EU Space Programme of June 2018, which recommends that Copernicus (an autonomous, user-driven Earth observation system under civil control, offering geo-information data and services, comprising satellites, ground infrastructure, data and information processing facilities, and distribution infrastructure, and fully integrating security needs and requirements) should take into account recent trends in research, technological advances, and innovations impacting the Earth observation domain, as well as developments in big data analytics and AI, together with related strategies and initiatives at EU level. [author’s emphasis]

The next step in the policy synchronization process would consist of: (i) a clear indication of all AI initiatives that should be taken into account in the context of the EU’s New Space Programme, (ii) concrete principles on the basis of which the two sectors should be correlated, and (iii) the designation of an institutional body in charge of the policy synchronization and regulatory cross-check.

But why is it important to develop policy synchronization in key sectors such as space and AI?

The increasing number of space actors, the variety of their activities, and the new technologies being deployed require a fresh approach at the policy level, because they have the potential to challenge existing legal norms. For example, a long-debated topic is the liability of AI systems for damage caused by their actions or omissions. An even more complicated situation might arise when damage is caused by an AI system deployed in space. Who, then, will be responsible for the damage caused by a collision between two space objects equipped with advanced AI capabilities? According to the Convention on International Liability for Damage Caused by Space Objects (Liability Convention), the launching state is liable for damage caused by its space object on the surface of the Earth or to an aircraft in flight. [author’s emphasis]

The greatest challenge, however, arises when the “space object” is equipped with self-learning capabilities. By learning on its own, it could develop new capabilities not initially foreseen by its designer, potentially leading to unexpected results. In such a case, is it plausible to consider that the provisions of the existing Liability Convention still apply? Or is it high time to amend the Liability Convention in light of recent technological changes? Is Europe ready to initiate such a new approach?

At the EU level, there are a couple of initiatives addressing various scenarios related to AI liability. The White Paper itself distinguishes between high-risk and low-risk AI applications, with the purpose of ensuring that regulatory intervention is proportionate. The Commission takes the view that whether an AI system is high-risk depends on what is at stake, considering both the sector and the intended use, with a particular focus on safety, consumer rights, and fundamental rights. Two criteria must both be met for an AI system to be considered high-risk: (i) the AI system is employed in a sector where, given the characteristics of the activities typically undertaken, significant risks can be expected to occur (for example, healthcare, transport, energy, and parts of the public sector), and (ii) the AI system in the sector in question is, in addition, used in such a manner that significant risks are likely to arise.

Given these criteria, it remains an open question whether AI space technology should be considered “high-risk” or “low-risk”. Classifying AI space technology as a “high-risk” application would create a difficult task for legislators, who would need to balance the risks associated with deploying the technology against the need to support innovation: regulating these technologies should not hinder their future development.

Even though both the AI policies and the new space policies are still under discussion, with no new binding legal instruments yet in force, there is a clear need for an integrated policy approach across the space and AI sectors. When implemented, it would give the EU an advantage in the current race for regulatory development, and it would bring consistency and predictability to these dynamic sectors.


Ioana Bratu is a Research Associate and PhD Candidate at the Law Faculty of Vrije Universiteit Amsterdam, focusing on liability aspects of AI technology deployed for space exploration. She is also a Legal Expert in capacity-building and knowledge-sharing projects developed by VU Amsterdam in collaboration with international partners. Before entering academia, she practiced business law at several international law firms and as the founder of her own private practice.
