MELBOURNE: The Israeli military used a new artificial intelligence (AI) system to compile a list of thousands of human targets for potential air strikes in Gaza, according to a report published last week. The report comes from the nonprofit outlet +972 Magazine, which is run by Israeli and Palestinian journalists.

The report cites interviews with six unnamed sources in Israeli intelligence. The sources claim the system, known as Lavender, was used in conjunction with other AI systems to target and assassinate suspected terrorists, many of them in their own homes, causing large numbers of civilian casualties.

According to another report in the Guardian, based on the same sources as the +972 report, one intelligence official said the system "made it easier" to carry out large numbers of attacks because "the machine did it coldly".

As militaries around the world race to use AI, these reports show us what that looks like: machine-speed warfare with limited accuracy and little human oversight, at a high cost to civilians.

The Israel Defense Forces (IDF) deny many of the claims in these reports. In a statement to the Guardian, the IDF said it "does not use artificial intelligence systems that identify terrorist operatives", and that Lavender is not an AI system but "only a database intended to cross-reference intelligence sources".

But in 2021, the Jerusalem Post reported an intelligence official saying Israel had won its first "AI war" – an earlier conflict with Hamas – using multiple machine learning systems to sift through data and produce targets. That same year, a book titled The Human-Machine Team, outlining a vision of AI-powered warfare, was published under a pseudonym by an author recently reported to be the head of a major Israeli secret intelligence unit.

Last year, another +972 report said Israel also uses an AI system called Habsora to identify potential militant buildings and facilities to bomb. According to the report, Habsora generates targets "almost automatically", and a former intelligence officer has described it as a "mass assassination factory".

The recent +972 report also claims that a third system, called Where's Daddy?, monitors targets identified by Lavender and alerts the military when they return home, often to their families.

Many countries are turning to algorithms in their quest for a military edge. The US military's Project Maven supplies AI targeting that has been used in the Middle East and Ukraine. China, too, is racing to develop AI systems to analyze data, select targets and assist in decision-making.

Proponents of military AI argue that it will enable faster decision-making, greater accuracy, and reduced casualties in war.

Yet last year, Middle East Eye reported an Israeli intelligence office as saying that human review of every AI-generated target in Gaza was "not feasible at all". Another source told +972 they would personally "invest 20 seconds for each target", serving merely as a "rubber stamp" of approval.

The Israel Defense Forces' response to the latest report says that "analysts must conduct independent examinations, in which they verify that the identified targets meet the relevant definitions in accordance with international law".

As for accuracy, the latest +972 report claims Lavender automates the identification and cross-checking process to ensure a potential target is a senior Hamas military figure. According to the report, the targeting criteria were loosened to include lower-ranking personnel and weaker standards of evidence, and Lavender made errors in "about 10 percent of cases".

The report also claims that, according to one Israeli intelligence officer, thanks to the Where's Daddy? system targets would be bombed in their homes "without hesitation, as a first option", causing civilian casualties. The Israeli military says it "categorically rejects any claims of a policy of killing thousands of people in their homes".

As military uses of AI become more common, ethical, moral and legal concerns have largely been an afterthought. There are, as yet, no clear, universally accepted or legally binding rules regarding military AI.

The United Nations has been discussing "lethal autonomous weapons systems" for more than ten years. These are devices that can make targeting and firing decisions without human input, sometimes known as "killer robots". Some progress was made last year.

The UN General Assembly voted in favor of a new draft resolution to ensure algorithms "must not be in full control of decisions involving killing". Last October, the US also issued a declaration on the responsible military use of AI and autonomy, which has since been endorsed by 50 other states. The first summit on the responsible use of military AI was also held last year, co-hosted by the Netherlands and the Republic of Korea.

Overall, international rules on the use of military AI are struggling to keep pace with the enthusiasm of states and weapons companies for high-tech, AI-enabled warfare.

Some Israeli startups creating AI-enabled products are reportedly making their use in Gaza a selling point. Yet reporting on the use of AI systems in Gaza shows how far AI falls short of the dream of precision warfare, instead creating serious humanitarian harms.

The industrial scale at which AI systems like Lavender can generate targets effectively "displaces humans by default" in decision-making. The willingness to accept AI suggestions with barely any human scrutiny also widens the scope of potential targets, inflicting greater harm.

The reports on Lavender and Habsora show us what current military AI is already capable of. The future risks of military AI may be even greater.

For example, Chinese military analyst Chen Honghui has envisioned a future "battlefield singularity", in which machines make decisions and take actions at a pace too fast for any human to follow. In this scenario, we are reduced to mere spectators or casualties.

A study published earlier this year sounded another warning. American researchers conducted an experiment in which large language models such as GPT-4 played the roles of nations in a wargame exercise. The models almost invariably ended up in arms races and escalated conflict in unpredictable ways, including the use of nuclear weapons.

The way the world reacts to current uses of military AI – like we are seeing in Gaza – is likely to set a precedent for the future development and use of the technology.