
#SpaceWatchGL Opinion: Toward a decision-intelligent process to assess collision risks in space

by Dr. Moriba Jah

Starlink satellites visible in the night sky in April 2020; Credits: Patrick Pleul / picture alliance via Getty Images

Many of us were made aware of an alleged near-miss between two satellites in low Earth orbit, one operated by SpaceX and the other by OneWeb. With the growing number of participants in the space domain, the coordination and planning of space activities, as well as the development of common space operation practices, protocols, and norms of behavior, are increasingly critical.

The space community tends to use a quantity called the probability of collision to gauge on-orbit collision risk, and even to decide whether or not to respond actively to that risk. As it turns out, I have a few things to say on this topic.

In reality, there are objects in space that could possibly collide, and many for which a collision is actually impossible. This set of impossible collisions is critical to account for, because the process I lay out below rests on the community embracing the Principle of Maximum Entropy. More on this later. Now, for space objects that could possibly collide, they either absolutely do or they absolutely don’t. It is a binary outcome every time; there is no such thing as a partial collision. This is an inarguable fact: two or more things in space either meet at the same place at the same time or they don’t. The physical and environmental consequences depend on how closely their velocities match when their positions do.

Even though the possibility of collision exists for a great many objects, we do not actively move satellites out of the way without knowing more, because we could be making the situation worse. So our default action is to conduct our space operations as usual, the status quo, and not do anything different. The rules under which we execute our default action are (a) in the absence of evidence, (b) when the evidence is too incomplete, or (c) when the evidence supporting our Null Hypothesis is overwhelming. This brings us to determining our Null Hypothesis, which in this case is: “my satellite is safe from collisions.” The only time we reject our Null Hypothesis is when we have evidence that makes it seem ridiculous. This is the heart of the matter: what evidence will it take to change our mind? And, by the way, there is no commonly agreed-upon answer. Different operators consider different evidence when making these decisions, and their risk postures vary as well. This unevenness in evidence and acceptable risk across operators sharing a common orbital habitat prevents the holistic actions that lead to positive outcomes, and it is driving us toward a detrimental outcome in near-Earth space.

LeoLabs Radar Close Up; Credits: LeoLabs

Let’s go further down the rabbit hole. Knowing whether a collision will or will not occur depends on our knowledge and understanding of the actual motion of the objects in question. “In order to know something, one must measure it; in order to understand something, one must predict it!” This is the mantra underpinning my own research group, and my students internalize it to the fullest extent. Since we cannot actually measure everything involved in this process, we are left to infer what we can, and inference is drawing conclusions from evidence with associated uncertainty. This means that all of our decisions must be made under uncertainty. The things that influence what we infer about space object motion are (a) the causes of the actual motion of the objects, (b) our models of this motion, which are inaccurate and imprecise, (c) our actual measurements of these objects, which are inaccurate and imprecise, (d) our models of the measurements of these objects, which are inaccurate and imprecise in a way that differs from (c), and (e) the methods we employ to infer (a) given (b) through (d). Indeed, this is non-trivial.

Why is prediction critical to assessing our understanding? If we knew (a) exactly and could accurately and precisely represent it in (b), knew (c) exactly and could accurately and precisely represent it in (d), and were rigorous about (e), we would predict the truth. Predicting the truth would be quantified as obtaining zero surprisal, or information entropy, from the evidence. In other words, we would exactly match the evidence, and it would therefore be exactly uninformative to us; there would be nothing left to learn because we would already know it exactly. This is our unattainable goal: to work toward knowing and modeling the truth. Realistically, we seek to understand, use, and manage information in a way that leads to desired outcomes. This ability is known as Decision Intelligence, and it is the reason for developing situational or domain awareness.
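To make the notion of surprisal concrete, here is a minimal sketch (the function name is my own illustration, not an established API): the surprisal of evidence is the negative log of the probability a model assigned to it, so a perfect prediction carries exactly zero bits of new information.

```python
import math

def surprisal_bits(p: float) -> float:
    """Surprisal (self-information), in bits, of observing evidence
    that a model predicted with probability p: log2(1/p)."""
    if not 0.0 < p <= 1.0:
        raise ValueError("probability must be in (0, 1]")
    return math.log2(1.0 / p)

# A perfect prediction is exactly uninformative: zero surprisal.
print(surprisal_bits(1.0))   # 0.0 bits
print(surprisal_bits(0.5))   # 1.0 bit, equivalent to a coin-flip guess
```

The worse a model predicts the evidence, the larger this number grows, which is exactly the sense in which surprising evidence is informative.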

ASTRIAGraph screenshot; Credits: Dr. Jah

OK, so let’s lay out our methodology for collision risk assessment and decision intelligence. First, we must be responsible and admit that we are not all-knowing: we are held hostage to the need to infer the motion of the objects while predicting it under uncertainty. We will rely upon abductive reasoning, whereby given the evidence, we seek to enumerate all the possible hypotheses that explain it. This is where determining what is impossible becomes critical. The ensemble of all possible hypotheses given the evidence, past and present, applies the Principle of Maximum Entropy and represents our uncertainty. This maximum entropy representation is the goal of appropriately applying Occam’s razor, in which the most prudent representation is the one that makes the fewest assumptions, or imposes the fewest constraints, to represent the evidence. Another way to say this is that Occam’s razor favors being as prejudice-free as possible when attempting to infer causality from evidence. Many people have taken Occam’s razor to mean choosing the simplest hypothesis, but I firmly believe this is a severely flawed interpretation.

Once we have this maximum entropy representation, we seek more evidence, and once we get it, we ask each of our possible hypotheses to predict it. We can compute the information content of the evidence based upon how surprised each hypothesis is by it. The closer a hypothesis comes to predicting the actual evidence, the less surprised it is, and thus the less informative the evidence is to that hypothesis, because its constraints and prejudices do well at predicting reality.

In this multiple hypothesis testing process, we concurrently apply Karl Popper’s Falsifiability Principle: we assess which, if any, of the possible hypotheses the evidence has shown to be impossible, that is, falsified. If a hypothesis is shown to be impossible given the evidence, then and only then do we remove it from our ensemble, and our maximum entropy is reduced as a consequence. To be clear, we do not remove hypotheses shown to be merely improbable or unlikely, because these are still possible and represent our ignorance. It is irresponsible to disregard our ignorance in favor of prejudice, which tends to lead us to confirmation bias and undesired outcomes. As Carl Sagan once stated, “absence of evidence is not evidence of absence.”
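The interplay of surprisal and falsification can be sketched in a few lines. The hypothesis names and likelihood values below are purely illustrative (not real orbital data): we prune only the hypothesis the evidence rules out entirely, keep the improbable ones, and score the survivors by how surprised each is at the evidence.

```python
import math

# Hypothetical likelihoods each candidate hypothesis assigns to newly
# gathered tracking evidence (illustrative numbers only).
hypotheses = {
    "safe_miss":           0.70,  # predicts the evidence well
    "close_approach":      0.25,  # somewhat surprised
    "unreported_maneuver": 0.05,  # very surprised, yet still possible
    "object_deorbited":    0.00,  # the evidence rules this out entirely
}

# Falsifiability pruning: remove ONLY hypotheses the evidence has shown
# to be impossible (likelihood exactly zero). Improbable hypotheses stay
# in the ensemble, because they represent our ignorance.
possible = {h: p for h, p in hypotheses.items() if p > 0.0}

# Surprisal of each surviving hypothesis at the evidence, in bits; less
# surprisal means the hypothesis predicts reality better.
surprisal = {h: math.log2(1.0 / p) for h, p in possible.items()}

for name, bits in sorted(surprisal.items(), key=lambda kv: kv[1]):
    print(f"{name}: {bits:.2f} bits")
```

The least-surprised survivors here are what the next paragraph calls the minimum entropy representation; nothing guarantees they are the truth, but the truth would sit among them.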

Interestingly, the set of hypotheses with the least surprisal at each given set of evidence constitutes the minimum entropy representation of the system. There is no guarantee that this is the truth, but we know that the truth would follow this path. We use this minimum entropy state representation to inform the underlying models and parameters, which we can then use to improve our predictions. It can also show us behavioral patterns or trends.

Alright, what we are truly after in this endeavor is evidence that falsifies the Null Hypothesis. In other words, at the moment of decision, has the evidence we have gathered shown that “our satellite is safe” is an impossibility, given the maximum entropy state? If the evidence that has abductively shaped the maximum entropy state has shown the Null Hypothesis to be false, we must take action and maneuver to avoid a collision. Otherwise, we should proceed with our default action.
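Stripped to its essentials, this decision rule is simple, and a minimal sketch (function and return strings are my own illustration) makes the asymmetry explicit: improbable is not enough; only impossibility of the Null Hypothesis triggers a maneuver.

```python
def collision_avoidance_decision(null_hypothesis_possible: bool) -> str:
    """Decision rule sketched above: depart from the default action only
    when the evidence has falsified the Null Hypothesis 'my satellite is
    safe', i.e. shown it to be impossible, not merely improbable."""
    if null_hypothesis_possible:
        return "default action: conduct operations as usual"
    return "maneuver to avoid collision"

print(collision_avoidance_decision(True))
print(collision_avoidance_decision(False))
```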

Rarely will we be able to make this decision based upon complete evidence. When Rudolf Emil Kalman, co-inventor of the Kalman filter so important in trajectory estimation, accepted the Kyoto Prize in 1985, he shared his perspective on data in his acceptance speech. If data were exact and complete, he said, then only one minimal or simple hypothesis could explain their cause. He called this the “Uniqueness Principle” of minimal modeling, and it is an idealization. Kalman went on to say that “uncertain data cannot provide exact models,” and he cautioned against allowing prejudice to influence the scientific process of deducing a unique model from uncertain data. Our evidence can be said to be complete, even if inexact, if it results in uniqueness. If, through our abductive inference, concurrently applying the Principle of Maximum Entropy and Karl Popper’s Falsifiability Principle, the evidence has falsified all but one hypothesis, we can claim to have achieved uniqueness. This is the ideal decision-making scenario because it constitutes a one-to-one causal relationship.

There is either the possibility of collision or not, and if a collision is possible, it either happens or it doesn’t. There is no real probability of collision, even though the community uses one as a metric. If a collision is possible, we must ask ourselves what evidence is required to convince us to take evasive action, and we must be responsible when making this decision by properly admitting to and representing our ignorance. The community writ large would be greatly helped if space operators sharing a common orbital habitat could agree on these issues and develop operational protocols and practices, so that improved transparency, predictability, and accountability are the outcomes. These would all contribute toward a safer, more secure, and more sustainable space environment.

Dr. Moriba Jah; Photo courtesy of the author

Moriba Jah is an Associate Professor in Aerospace Engineering and Engineering Mechanics and the director of Computational Astronautical Sciences and Technologies (CAST), a group within the Oden Institute for Computational Engineering and Sciences at The University of Texas at Austin. Moriba came to UT Austin by way of the Air Force Research Laboratory and, before that, NASA’s Jet Propulsion Laboratory, where he was a Spacecraft Navigator on a handful of Mars missions.
Moriba is a Fellow of multiple organizations: TED, the American Institute of Aeronautics and Astronautics (AIAA), the American Astronautical Society (AAS), the International Association for the Advancement of Space Safety (IAASS), the Royal Astronomical Society (RAS), and the Air Force Research Laboratory (AFRL). He has served on the US delegation to the United Nations Committee on the Peaceful Uses of Outer Space (UN-COPUOS), is an elected Academician of the International Academy of Astronautics (IAA), and has testified to Congress on his work related to SSA and Space Traffic Management. He is an Associate Editor of the Elsevier journal Advances in Space Research and serves on multiple committees: IAA Space Debris, AIAA Astrodynamics, IAF Astrodynamics, and IAF Space Security.
