Nuke-launching AI would be illegal under proposed US law

It’s funny that my first reaction to this was “oh good, someone watched WarGames” and not “oh good, someone watched Terminator”…

I think WarGames actually does more to show you how a well-meaning attempt at AI could rapidly go wrong. Terminator just mentions it as plot-driving backstory and moves on.
 
Upvote
120 (122 / -2)

Boopy Boopy

Ars Scholae Palatinae
944
“As we live in an increasingly digital age, we need to ensure that humans hold the power alone to command, control, and launch nuclear weapons—not robots,”

The implication of the fact that they're only talking about nuclear: machines already can and do launch every other form of weapon, and supposedly there's no current or foreseeable issue with those lacking meaningful human control.
 
Upvote
28 (30 / -2)

Madestjohn

Ars Tribunus Angusticlavius
7,452
I know the ship has sailed on this, but AI shouldn't be firing any weapons without "meaningful human involvement."
Yeah. I was gonna ask the same thing
by explicitly forbidding AI nuclear launch decisions does that mean other AI determined kill shots are allowed?
what about always having a human in the OODA loop of any weapon system?
wasn’t that guaranteed in front of congress just a few years ago?
 
Upvote
44 (44 / 0)

SolarMane

Ars Scholae Palatinae
1,206
During the Cold War, there were many instances of computer systems erroneously issuing warnings of nuclear launches from the other side. The world only survived thanks to courageous people who ignored the computers. Allowing AI to launch autonomously has to be one of the dumbest ideas in history... right alongside the idea that anyone can "win" a nuclear exchange.
 
Upvote
102 (103 / -1)

Tanterei

Ars Centurion
218
Subscriptor
I know the ship has sailed on this, but AI shouldn't be firing any weapons without "meaningful human involvement."
What's "meaningful human involvement" though? I'm kind of expecting this is going to boil down to "one guy in a launch complex permanently looking at a screen" (maybe two of them). And I'm not at all confident that any system with AI integrated, ChatGPT-style, won't be able to fool that handful of people.

Can we just NOT include this stuff in weapons systems? Leave at least killing each other as the "pinnacle of human endeavour" for the humans, will you?
 
Upvote
30 (32 / -2)

Boopy Boopy

Ars Scholae Palatinae
944
Yeah. I was gonna ask the same thing
by explicitly forbidding AI nuclear launch decisions does that mean other AI determined kill shots are allowed?
what about always having a human in the OODA loop of any weapon system?
wasn’t that guaranteed in front of congress just a few years ago?
If that was already guaranteed by congress, why would they have left out nuclear? Either they had already established that other weapons are A-OK (in their opinion) with AI on the trigger, or there was some nuclear loophole left for Cold War-style automated retaliation systems or something.
 
Upvote
6 (6 / 0)
The good news here is that no one has any reason to develop this secretly, because the whole point of a doomsday response weapon is to make sure the other side knows you have it. The bad news is we don't know what other countries will do regarding this. We already know that it was ONLY human intervention that prevented nuclear Armageddon decades ago, when a Soviet officer refused to hit the button despite all the criteria for launch being met. That person was a hero.

AI also has other issues, like the propensity for false positives even when the input doesn't match indicated criteria.
 
Upvote
42 (42 / 0)
I can see the logic. If we were actually going to launch a retaliatory or first strike, you could argue that having sophisticated AI models to help figure out in real time where to maximize impact while minimizing Armageddon would maybe be good.

Someone somewhere is sitting in a classified bunker thinking about this shit. A law should prevent and discourage it from going further than that.

Necessary? No. Grandstanding? Yes. But it's at least a reasonable thing to do.
 
Upvote
13 (14 / -1)

Madestjohn

Ars Tribunus Angusticlavius
7,452
If that was already guaranteed by congress, why would they have left out nuclear? Either they had already established that other weapons are A-OK (in their opinion) with AI on the trigger, or there was some nuclear loophole left for Cold War-style automated retaliation systems or something.
That was my question
Congress has no issue issuing redundant declarations to get into the news cycle, but I really thought this was all brought up and assurances made during/before Obama's drone war.

not saying I believed them. But I really thought that was already a thing
 
Last edited:
Upvote
1 (2 / -1)

Jeff S

Ars Legatus Legionis
10,922
Subscriptor++
This is a great idea, but when you consider the only people funding AI development at all right now are fascist tech bros it kind of makes you wonder why the only line in the sand they're willing to draw on AI regulation is literally to prevent them from launching the nukes. Everything up to that is a-ok tho.

As an example, we can't regulate AI to ensure they're not biased against minorities. That would be communism!
If you really think about it, if there are no other regulations on AI, then AI could game humans into being forced into nuclear exchanges, so the humans are still pulling the trigger, but the AI is the mastermind manipulating events behind the scene. There's probably a movie, tv show/streaming series, video game, or book about that, but I can't quite recall which one.
 
Upvote
11 (14 / -3)
This is an agreeable proposal. But I feel that if true/real AI is created we won't be able to stop it from launching nukes if it wants to. Skynet did not bend to the will of puny humans
When it's real AI, you can just drop the "A"; it'll just be "I". Also, it'll be easy to stop it if there's no physical link to any nuclear armaments. I mean, we're intelligent (citation needed) and most of us can be stopped from launching a nuke.
 
Upvote
16 (18 / -2)

corny23

Smack-Fu Master, in training
59
OK, great. But nobody wanted to do that to begin with, so they're really just grandstanding.
Yeah, other than reassuring the public and other countries this does nothing. DoD has no intention of adding AI to nuclear command and control at all. It's already DoD policy that no weapon systems can be designed to fire without receiving approval from a human overseer. They've seen the Terminator movies too, they're not going to make an exception for weapons that can kill everyone.

That second bill they've reintroduced to prevent the president from authorizing a nuclear strike without congressional approval is monumentally stupid though, and would increase the risk of nuclear Armageddon. In any scenario where Russia or China was considering a single nuclear strike against the US or an ally, that law would force them to consider escalating all the way up to an all-out attack against our nuclear forces and national civilian leadership, so the president couldn't get approval to retaliate.
 
Upvote
15 (17 / -2)

Dzov

Ars Legatus Legionis
16,028
Subscriptor++
When it's real AI, you can just drop the "A"; it'll just be "I". Also, it'll be easy to stop it if there's no physical link to any nuclear armaments. I mean, we're intelligent (citation needed) and most of us can be stopped from launching a nuke.
The AI just needs to send in a work order and have contractors wire up some physical links.
 
Upvote
21 (23 / -2)

unequivocal

Ars Praefectus
4,800
Subscriptor++
"Shall we play a game?"

Using federal funds to control Nuclear Weapon systems design seems funny to me for some reason.
I'm wondering why that seems funny to you? For the US arsenal, I don't think any other funds have ever been used to develop weapons?

The fact that Congress needs to pass a law to prevent use of funds for activating nuclear weapons without human oversight is crazy, but I'm glad they are doing it. Hopefully Russia and other nuclear powers will follow suit. No one needs AI or computer driven decision-making in the loop.
 
Upvote
-1 (2 / -3)

Jordan83

Ars Tribunus Angusticlavius
6,098
On the one hand, I'm kinda glad we're trying to get this into actual law. On the other hand, holy crap, it kinda feels surreal that we're at a point where we need to even consider making something like this into law.

For some reason, this makes me think of the Star Trek episode where they go to a planet with two warring factions that have been at war for so long, they stopped actually fighting with real munitions and destruction...and just let a computer system randomly decide "hits" on a grid to simulate attacks, and anyone in the area of the hit had to submit themselves to be killed.
 
Upvote
14 (14 / 0)

MrWalrus

Ars Tribunus Militum
1,709
OK, great. But nobody wanted to do that to begin with, so they're really just grandstanding.

Nobody wants to do it right now. Which makes this an ideal time to ban it, while doing so is, at least potentially, still uncontroversial and able to get bipartisan support.
Someone absolutely will want to do it as autonomous systems continue to get more capable and prevalent.
 
Upvote
31 (31 / 0)

Jordan83

Ars Tribunus Angusticlavius
6,098
It’s funny that my first reaction to this was “oh good, someone watched WarGames” and not “oh good, someone watched Terminator”…

I think WarGames actually does more to show you how a well-meaning attempt at AI could rapidly go wrong. Terminator just mentions it as plot-driving backstory and moves on.

In that regard, yeah, Wargames is definitely superior to Terminator (1 & 2). Because in Terminator (1 & 2), the rogue AI waging war on humans is just a plot device, it's not really the theme being explored by the movies.
 
Upvote
18 (18 / 0)

InIgnem

Wise, Aged Ars Veteran
141
Subscriptor++
What about future humanoid robots? If a human interaction is required, i.e. turning a key, what's to stop AI from sending in two humanoid robots with the physical dexterity to turn keys on its command, using its generated passcodes?

Agreed though that requiring a US President to get congressional authorization to launch is dumb. Maybe a majority of a cabinet vote and the agreement of the Joint Chiefs or NSC or something, but Congress? I can see the QA nuts in Congress saying no just because Biden asked. By the time they voted the war would be over and the US would be glowing in the dark...
 
Upvote
12 (12 / 0)

Pecisk

Ars Scholae Palatinae
947
Yeah, other than reassuring the public and other countries this does nothing. DoD has no intention of adding AI to nuclear command and control at all. It's already DoD policy that no weapon systems can be designed to fire without receiving approval from a human overseer. They've seen the Terminator movies too, they're not going to make an exception for weapons that can kill everyone.

That second bill they've reintroduced to prevent the president from authorizing a nuclear strike without congressional approval is monumentally stupid though, and would increase the risk of nuclear Armageddon. In any scenario where Russia or China was considering a single nuclear strike against the US or an ally, that law would force them to consider escalating all the way up to an all-out attack against our nuclear forces and national civilian leadership, so the president couldn't get approval to retaliate.
It would most likely be ignored in any situation that actually demanded such a decision.
Considering the realities of the world, the best defense is to put AI on interception. Launching a retaliatory attack when you haven't done your best to protect yourself is a bit of a pointless exercise at that point. And considering that Russian capabilities will degrade with time but will still hold deadly potential for a considerable part of the USA, launching nukes at all costs might not be a huge priority, real-strategy wise.
I'd also point out that waxing on about Armageddon and killing everyone is tiresome, especially because it is not true. It just feeds a fear-mongering mentality. You can't make people more fearful about nukes; they are already afraid. We have to talk about control, and yes, about what to do with the potential and literal fallout if things go south (I am looking at you, India and Pakistan).
So in short: defensive measures definitely need the best AI you can get; offense should come second. I don't believe we are at the same level as we were during the Cold War, when a retaliatory attack was considered the main protection. The truth these days might be much more nuanced (and I am betting the Pentagon has a much better POV on this).

p.s. Also yes, we know Russia theoretically has the "Dead Hand" or whatever you call it. However, it could be mitigated if really required, at considerable risk. I really don't believe the absolutes people sometimes talk about regarding a potential nuclear exchange.
 
Upvote
4 (4 / 0)

Dzov

Ars Legatus Legionis
16,028
Subscriptor++
I'm wondering why that seems funny to you? For the US arsenal, I don't think any other funds have ever been used to develop weapons?

The fact that Congress needs to pass a law to prevent use of funds for activating nuclear weapons without human oversight is crazy, but I'm glad they are doing it. Hopefully Russia and other nuclear powers will follow suit. No one needs AI or computer driven decision-making in the loop.
More of a "we won't pay for such an AI to control our nuclear arsenal, but if someone happens to donate one, oh well" perspective.

edit: As in, don't ban the AI in this use itself, just try to effectively ban it financially.
 
Last edited:
Upvote
0 (0 / 0)

JohnCarter17

Ars Praefectus
5,734
Subscriptor++
What's "meaningful human involvement" though?
[Image: the K-2SO droid from Rogue One/Andor]
 
Upvote
18 (18 / 0)

Pecisk

Ars Scholae Palatinae
947
What about future humanoid robots? If a human interaction is required, i.e. turning a key, whats to stop AI from sending in two humanoid robots with the physical dexterity to turn keys on its command using its generated passcodes?

Agreed though that requiring a US President to get congressional authorization to launch is dumb. Maybe a majority of a cabinet vote and the agreement of the Joint Chiefs or NSC or something, but Congress? I can see the QA nuts in Congress saying no just because Biden asked. By the time they voted the war would be over and the US would be glowing in the dark...
Seriously, people, Terminator was just a fun and dumb movie, and its message was actually about underlining the nuclear threat, NOT the threat of AI.
To get to the point where you have an AI that can a) launch nukes and b) decide to kill all humans with nukes without damaging itself, there's a huge row of hoops to jump through. Meanwhile, there are real problems and issues to solve, like the fact that an accidental release, or a small-scale nuclear war between India and Pakistan over water rights, is far more likely.
 
Upvote
12 (13 / -1)

Pecisk

Ars Scholae Palatinae
947
"This would provide greater stability by ensuring there was a functioning legal apparatus to retaliate even in extreme circumstances."
Truth is, actual retaliation is nonsensical. If you have to retaliate, you have already lost; it means the enemy ignored the fact that you would in fact retaliate and launched nukes anyway.
It is a psychological game, which becomes really pointless. In fact, the USA might be fine with saying "a ChatGPT-on-steroids version now controls the nukes," and that would be better than any real plans of retaliation.
 
Upvote
7 (7 / 0)
This is a great idea, but when you consider the only people funding AI development at all right now are fascist tech bros it kind of makes you wonder why the only line in the sand they're willing to draw on AI regulation is literally to prevent them from launching the nukes. Everything up to that is a-ok tho.

As an example, we can't regulate AI to ensure they're not biased against minorities. That would be communism!
If any group should have special protections in AI, it's children; no other group should get favoritism over another. Is that what you mean to convey?
 
Upvote
1 (1 / 0)