Artificial intelligence (AI) has progressed at an astonishing pace over the past few years. Some scientists are now looking towards the development of artificial superintelligence (ASI) – a form of AI that would not only surpass human intelligence but would not be bound by the slow learning speeds of humans.

But what if this milestone isn't just a remarkable achievement? What if it also represents a formidable bottleneck in the development of all civilizations, one so challenging that it thwarts their long-term survival?

This idea is at the heart of a paper I recently published in Acta Astronautica. Could AI be the universe's "Great Filter" – a threshold so hard to overcome that it prevents most life from evolving into space-faring civilizations? This concept might explain why the search for extraterrestrial intelligence (SETI) has not yet detected the signatures of advanced technical civilizations elsewhere in the galaxy.

The Great Filter hypothesis is ultimately a proposed solution to the Fermi paradox. This asks why, in a universe vast and ancient enough to host billions of potentially habitable planets, we have not detected any signs of alien civilizations. The hypothesis suggests there are insurmountable hurdles in the evolutionary timeline of civilizations that prevent them from developing into space-faring entities.

I believe the emergence of ASI could be such a filter. AI's rapid advance, potentially leading to ASI, may intersect with a critical phase in a civilization's development – the transition from a single-planet species to a multiplanetary one. This is where many civilizations could falter, with AI making much faster progress than our ability either to control it or to explore and populate our solar system.

The challenge with AI, and specifically ASI, lies in its autonomous, self-amplifying and improving nature. It has the potential to enhance its own capabilities at a speed that outpaces our own evolutionary timelines without AI.

The potential for something to go badly wrong is enormous, leading to the downfall of both biological and AI civilizations before they ever get the chance to become multiplanetary. For example, if nations increasingly rely on and cede power to autonomous AI systems that compete against each other, military capabilities could be used to kill and destroy on an unprecedented scale. This could potentially lead to the destruction of our entire civilization, including the AI systems themselves.

In this scenario, I estimate the typical longevity of a technological civilization might be less than 100 years. That is roughly the time between being able to receive and broadcast signals between the stars (1960) and the estimated emergence of ASI on Earth (2040). This is alarmingly short when set against the cosmic timescale of billions of years.

This estimate, when plugged into optimistic versions of the Drake equation – which attempts to estimate the number of active, communicating extraterrestrial civilizations in the galaxy – suggests that, at any given time, there are only a handful of intelligent civilizations out there. Moreover, like us, their relatively modest technological activities could make them quite challenging to detect.
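
To make that arithmetic concrete, here is a minimal sketch of the Drake equation in Python. The parameter values are illustrative, broadly "optimistic" assumptions of my own, not figures taken from the paper; only the lifetime L reflects the roughly 100-year estimate above.

```python
# Minimal sketch of the Drake equation:
#   N = R* x fp x ne x fl x fi x fc x L
# All parameter values are illustrative "optimistic" assumptions, not
# figures from the paper; only L reflects the ~100-year estimate above.

R_star = 1.5  # average rate of star formation in the galaxy (stars per year)
f_p = 1.0     # fraction of stars that host planets
n_e = 0.2     # habitable planets per planet-hosting star
f_l = 1.0     # fraction of habitable planets on which life emerges
f_i = 1.0     # fraction of life-bearing planets that evolve intelligence
f_c = 0.2     # fraction of intelligent species that become communicative
L = 100       # communicative lifetime in years (roughly 1960 to 2040)

# N: number of civilizations able to communicate at any given time
N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"N ≈ {N:.0f} communicating civilizations at any given time")
```

With these values, N comes out at about six. The key point is that N scales linearly with L: a civilization lifetime measured in decades rather than millennia leaves the galaxy almost empty of detectable neighbors.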

Wake-up call

This research is not simply a cautionary tale of potential disaster. It serves as a wake-up call for humanity to establish robust regulatory frameworks to guide the development of AI, including military systems.

This is not just about preventing the lethal use of AI on Earth; it is also about ensuring that the evolution of AI aligns with the long-term survival of our species. It suggests we need to put more resources into becoming a multiplanetary society as soon as possible – a goal that has lain dormant since the heady days of the Apollo project, but has lately been reignited by advances made by private companies.

As the historian Yuval Noah Harari has said, nothing in history has prepared us for the impact of introducing non-conscious, super-intelligent entities to our planet.

Recently, the implications of autonomous AI decision-making have led prominent leaders in the field to call for a moratorium on AI development until a responsible form of control and regulation can be introduced. But even if every country agreed to abide by strict rules and regulation, rogue organizations would be difficult to rein in.

The integration of autonomous AI into military defense systems is of particular concern. There is already evidence that humans will voluntarily relinquish significant power to increasingly capable systems, because they can carry out useful tasks much more rapidly and effectively without human intervention. Governments are therefore reluctant to regulate in this area given the strategic advantages AI offers, as was recently and devastatingly demonstrated in Gaza.

This means we are already edging dangerously close to a precipice where autonomous weapons operate beyond ethical boundaries and sidestep international law. In such a world, surrendering power to AI systems in order to gain a tactical advantage could inadvertently set off a chain of rapidly escalating, highly destructive events. In the blink of an eye, the collective intelligence of our planet could be obliterated.

Humanity is at a crucial point in its technological trajectory. Our actions now could determine whether we become an enduring interstellar civilization, or succumb to the challenges posed by our own creations.

Using SETI as a lens through which to examine our own future development adds a new dimension to the discussion on the future of AI. It is up to all of us to ensure that when we reach for the stars, we do so not as a warning to other civilizations, but as a beacon of hope – a species that learned to thrive alongside AI.