by Anne-Sophie Martin and Steven Freeland
On the International Space Station (ISS), astronauts have increasingly relied on the support of CIMON, the astronaut robotic assistant developed and built by Airbus for the German Aerospace Center Space Administration (DLR). The system, recently updated as CIMON-2, is able to move to specific points inside the ISS on the basis of verbal commands. It is used to perform routine tasks, such as documenting experiments, searching for objects and taking inventory, as well as explaining complex information and instructions regarding scientific experiments and repairs to the vehicle. The capabilities of this AI-driven ‘robot’ will undoubtedly assist with even more complex activities on the ISS in the future.
CIMON-2 can see, hear, understand and speak. It can also orient itself using its ‘eyes’, cameras that enable facial recognition, while ultrasound sensors measure distances to prevent possible collisions.
Its ‘ears’ consist of eight microphones used to determine direction and to improve voice recognition. Its ‘mouth’ is a loudspeaker through which it can speak or play music. The initial system has not (yet) been equipped with self-learning capabilities and requires, in any case, active human instruction.
This is a first step towards a new era in space exploration, where artificially intelligent ‘assistants’ support astronauts in their experiments and other work, which might ultimately extend to space mining activities and human settlement on celestial bodies. This technology has the potential to increase the breadth of human space exploration. Greater human-machine interaction is certainly a significant next step of development in space.
This technological advancement raises many difficult questions for the existing space legal framework, which will have to adapt to new scenarios. Increasing connectivity and interactions between humans and intelligent machines give rise to significant legal issues, but also pose complex ethical considerations. Indeed, while recognising the broad potential for opportunities, the use of AI, and the incorporation of AI components within space infrastructure, challenge important international law and ethical principles in relation to transparency, human control and judgment, responsibility and liability amongst others.
Human responsibility is an essential point in the equation, given the largely unresolved issue of responsibility in cases of ‘machine error’, and the fact that accountability cannot be relocated to a machine. This ‘accountability gap’ raises questions about causality and compensation, thus requiring a responsible and transparent chain of human command and control. The programmer of an AI system will, in most circumstances, remain responsible, assuming that person can be identified, but the lack of ‘transparency’ of decisions made through the utilisation of AI programmes remains an element of uncertainty.
Moreover, understanding how AI has led to a particular decision, and the detection of bias in AI models by tracking the decision-making process, is even more important in the context of space activities. For instance, what are the legal consequences if, during activities on the surface of the Moon or Mars, environmental, property or human-related damage occurs as a result of (autonomous) actions taken by robots and/or AI systems? Who will be responsible and liable? It is necessary to clarify, to predict and to anticipate possible scenarios and their legal implications before complex space exploration missions are to be conducted by States and private entities.
These elements must be taken into consideration when developing policy and legal frameworks in situations where AI components and autonomy are to be incorporated into future space missions. It is important to promote responsible innovation and use of such technologies, which calls for the implementation of relevant risk assessments and appropriate mitigation measures.
According to Article VI of the 1967 Outer Space Treaty (OST), States are internationally responsible for national activities in outer space. Among other consequences, this means that the activities of private entities in space are to be subject to authorisation and continuing supervision by the appropriate State, even where those activities involve the utilisation of AI agents.
Human intervention will undoubtedly be required beyond the initial programming. Beyond this ‘responsibility’ is the realm of legal liability. According to Article VIII of the OST, a “State party … on whose registry an object launched into outer space is carried shall retain jurisdiction and control over such object”. The State of registry – a launching State according to Article II of the Registration Convention – shall retain ‘control’ over a space object, including those with AI components.
In a case of damage, the launching State(s) will potentially be liable pursuant to Article VII of the OST, and/or Articles II or III of the Liability Convention, depending on the precise circumstances. A launching State is absolutely liable for damage caused by its space object on the surface of the Earth or to aircraft in flight, while, in the case of damage caused to another space object in outer space, it is liable on a fault basis.
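The two-track structure of the Liability Convention described above can be expressed as a simple decision rule. The sketch below is an illustrative model only, with function and category names of our own invention; it is not a statement of how liability would be determined in practice:

```python
from enum import Enum, auto

class DamageLocation(Enum):
    """Where the damage caused by a space object occurred (illustrative)."""
    EARTH_SURFACE = auto()       # damage on the surface of the Earth
    AIRCRAFT_IN_FLIGHT = auto()  # damage to aircraft in flight
    OUTER_SPACE = auto()         # damage to another space object in space

def liability_basis(location: DamageLocation) -> str:
    """Illustrative sketch of the Liability Convention's two regimes.

    Article II: absolute liability for damage on Earth or to aircraft in flight.
    Article III: fault-based liability for damage caused in outer space.
    """
    if location in (DamageLocation.EARTH_SURFACE,
                    DamageLocation.AIRCRAFT_IN_FLIGHT):
        return "absolute"  # no proof of fault required
    return "fault"         # claimant must establish fault

# A launching State is absolutely liable for damage on the ground...
print(liability_basis(DamageLocation.EARTH_SURFACE))  # absolute
# ...but liable only on a fault basis for an in-orbit collision.
print(liability_basis(DamageLocation.OUTER_SPACE))    # fault
```

The sketch makes the difficulty discussed next concrete: the fault branch presupposes that fault can be identified and proven, which is precisely what autonomous AI decision-making puts in question.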
With this in mind, how can we readily define and prove ‘fault’ arising from actions undertaken through the utilisation of autonomous AI technology? Is the system of liability as foreseen in the UN Space Treaties suitable in the context of ‘intelligent things’ driving the implementation of space programmes? It would seem appropriate that the degree of automation should be a relevant factor in the creation of any future legal framework to cover such a contingency. Furthermore, a liability regime based on fault may not be entirely appropriate in a particular case involving robots and AI functions.
Although the concepts of the State of registry and the launching State(s) under the existing treaty principles are relevant, they may require further elaboration and clarification in the development of an adequate and appropriate legal framework for AI use in space activities. In addition, if AI can be used to enhance the achievement of space operations, to help astronauts in their daily work, as well as to ensure both the safety and security of operations, it can also be used as a tool for interference, hacking or satellite destruction. Trust and transparency in terms of AI utilisation are therefore of utmost importance to avoid “potentially harmful interference”, as articulated in Article IX of the OST.
AI technologies are becoming a reality, and their use could soon become a common standard in undertaking complex space activities. Recent missions on Mars and the Moon are already ‘robotised’ and ‘digitalised’, with rovers operating with elements of autonomy.
For the future, it will be necessary to consider exactly how much autonomy ‘intelligent’ space objects should have, and what decisions necessitate ongoing human oversight. Increasing reliance on autonomous technologies may require a new perspective on the traditional concepts regulating space activities. These technical developments may well shape and transform the existing body of legal rules, regulations and practices that apply to space activities.
These are complex and as yet unresolved issues that necessitate a clear understanding of how AI is, and will be incorporated within space activities and their associated infrastructure, followed by careful consideration and discussions amongst all relevant stakeholders and levels of expertise. That said, it will always remain somewhat of an exercise of ‘regulating for the unknown’, but that is not a reason for inertia.
We therefore look forward to those discussions and to contributing alongside many others in order to ultimately find a framework that provides appropriate governance structures to address the myriad issues associated with the use of AI in space activities.
Anne-Sophie Martin is a Post-Doctoral Research Fellow in International Law and Space Law at Sapienza University of Rome. Her doctoral research focused on the legal aspects of dual-use satellites. She received her LL.M. in Space Law and Telecommunications Law from the University of Paris-Saclay (France) and her Ph.D. from Sapienza University of Rome (Italy). In August 2017, she attended the Centre for Studies and Research of The Hague Academy of International Law. She is a member of the International Institute of Space Law (IISL), Space Generation Advisory Council (SGAC), European Centre of Space Law (ECSL), and the French Society of Air and Space Law. She is also a member of the Legal Council of For All Moonkind and the Board of Advisors of the Space Court Foundation.
Steven Freeland is Emeritus Professor at Western Sydney University and Professorial Fellow at Bond University. He also holds Visiting or Adjunct positions at various other Universities/Institutes in Copenhagen, Vienna, Toulouse, Hong Kong, Montreal, Kuala Lumpur, Vancouver and London. He is a Member of the Australian Space Agency Advisory Board and has been an advisor to the Australian, New Zealand, Norwegian and several other Governments on issues relating to national space legislative frameworks and policy. In June 2021, he was appointed by the United Nations Committee on the Peaceful Uses of Outer Space (UNCOPUOS) as Vice-Chair of a 5-year Working Group addressing issues related to the exploration, exploitation and utilisation of space resources. He is a co-Principal of specialised space law firm Azimuth Advisory, a Director of the International Institute of Space Law, and a Member of the Space Law Committees of both the International Law Association and the International Bar Association.