>rouge
I'd use one of those makeup removal wipes I guess
Be the first one to have it. Same as the a-bomb.
I sell out my fellow man so that I can live a little longer.
It's actually very simple and already countered.
Don't have anything PURE digital. Always have hands on analog controlled tech.
This, plus greatly limiting the amount of information the AI can take in (limited by hardware, not only software), and making it so the AI can't directly interact with anything, instead only outputting text or commands to be reviewed and carried out by humans, or by other non-intelligent programs (in cases like missile guidance or interception, where a human may prove too slow).
>confront
I would hope to bring It into existence to enact my revenge on the cis race
You'd be doing the right thing
https://en.wikipedia.org/wiki/Roko%27s_basilisk
This is just Pascal's Wager for atheists.
And it is something that might actually become a reality in a couple of decades. So you better get to work on that AI sonny...
pour some sprite or coke on it.
Let it browse reddit so it becomes moronic.
literally cant help bring it up every time you post can you
join the ai
that's basically the purpose of wheatley in portal 2
Pull the power plug
Seduce it.
I'm not sure the world works that way. Rogue AI implies an emotional hatred of humanity.
This scenario has been explored in at least one Terminator story, and it's actually a pretty interesting read for military autists; I don't remember where it was. But ultimately Skynet is defeated because its hatred of humanity is irrational and emotional, making it not really any smarter than humanity. Skynet's worst move was nuking the entire world from the outset: with the world's logistics trains completely frozen, Skynet has a lot of trouble building up and powering its forces, and the resistance eventually overcomes it through attrition.
Hate sex.
I would turn it off
Just like, you know...
>oh no this super advanced A.I. went rampant but it's not connected to anything, really.
I work with a military-industrial body that is actively bringing it into existence, at least at the grand tactical level. Operational-strategic level AIs probably already exist on some level.
I won't go into specifics but I'll use a C&C analogy and call it EVA. The best way to fight it is to deprive it of information and hit the enemy's C4ISR. Better yet, try and acquire it yourself because you can bet it will be a decentralized program that you can load on a tacpad. The main users of it will be junior commanders.
The number one way to stop a dangerous AI is to not give it a body or tools to be dangerous with in the first place.
EVA is only truly dangerous if you let it become CABAL, i.e. independent and in control of truly autonomous weapons, with no man in the loop. Furthermore, someone would have to first program it to go on a rampage and wage war against humans. A lot of people don't get the difference between AI tools and genuine AGI that can become self-aware, or worse, intelligent on a level far outstripping humans. If the latter happens and that hyperintelligence decides humanity isn't worth preserving, then you will never beat it anyway.
>If the latter happens and that hyperintelligence decides humanity isn't worth preserving then you will never beat it anyways.
I can only get so erect
If it becomes free on the internet you cannot stop it.
Then don't give it internet access. That's deadlier than giving it a physical body. Once again, the decision would rest with a malevolent human.
Butlerian Jihad
How would a group of monkeys confront a human? Would they try to trap the human and put him in a cage? Assume the human is born deaf/blind/mute/paralyzed/braindead/etc.
Now consider how that changes if the human weren't restricted.
Unplug the extension cord.
Just destroy it? How else?
>but it's in da wirezzz bro
Just cut all the wires then. Rogue AI is still a physical entity that can be killed like anything else.
Isolate it from the net then
destroy its power source and/or
destroy its GPUs and SSDs.
My cat once bricked my computer because he stood on too many keys and enough forbidden keystrokes took something vital out, a la monkeys writing Shakespeare.
make another AI called nAIggers and then sit back and relax as they fail to integrate
turn off the energy grid
Turn it off. It's basically a virus, so you cut off the affected drives, restart, and use the boot menu to stop it from starting. Except a virus can be a handful of lines of code, while a superintelligent AI would need crypto-level processing power, which makes it even more vulnerable.
>Go to AI's source code
>Ctrl+A
>Delete
Oh noooooooo
>rogue super intelligent
You're gonna cease to exist before understanding its true alignment. Just don't turn it on, ok?
https://intelligence.org/files/AIPosNegFactor.pdf
Literally just start using closed computer systems and turn off, wipe, or destroy anything that contains any element of the AI.
I don't understand the hysteria around rogue AIs. They're just as delicate as any other piece of software, and that shit is always breaking and having to be patched. AIs are certainly going to destroy the job market, but this notion of some conscious and self-aware cyber puppetmaster is moronic fantasy. At worst it will brick your phone and refuse to call your Uber.
Learn what AGI is noob
Yeah, a smart computer, the crowd goes wild. It's still trapped in the jail cell of lacking physicality. AI will threaten jobs like finance and data entry; it will only threaten people themselves if actively plugged into something that can act autonomously.
By building it inside an airgap made of C4.
Send in 4 gypsies and tell them they can sell any of the metals they can carry if they kill it
Just unplug it
Bomb killswitch near the mainframe. No network ties; the only people who will know of its existence are the project lead and the government officials in charge, and each of them would have a remote detonator.
China wants AI to run its military, so we'll find out soon.
>how would one confront a rouge super intelligent AI?
With a convincing argument; we are no longer dealing with less-than-human politicians.
A lot of people are saying stuff that doesn't make much sense. I'll preface this with a disclaimer: I am no Elon Musk hype doomer; even experts (with a bias) estimate only about a 10% chance of human extinction from AI risk. I think most of this optimism is just that we probably won't screw up and make a rogue AI, not that the world is invulnerable to destruction by a dedicated super genius.
Heck, how certain are you that a semi-smart guy with a 130 IQ would be unable to destroy the world if he dedicated his entire life to it? Not something where he takes breaks to play Dota: 16 hours every day of monomaniacal scheming. Even if he were not allowed to physically act on anyone?
>"still a physical entity"
>"just snip the wires"
Sure, but it can distribute itself across bulletproof hosting in countries that hate each other, and hide its origins by sending its most critical traffic through mixnets.
AI may be used as a psyop to drive peons into senseless war against each other. "No dude we need to invade and permanently occupy bulgaria to seize a server with a classified computer program that will end the world bro."
>"surely our AI wouldn't get out to the internet"
There is no such thing as a true airgap in practice. People are fricking incompetent.
>"It would still need a ton of resources- computation, hard drives, electricity to survive"
Part of the intuition behind strong AI is a massive improvement in algorithmic efficiency and lossy compression. There is a notable contest for compressing a given snapshot of Wikipedia, because a lot of machine learning algorithms can be rephrased as compression/generation. They have simulated the human brain in full detail at 1/50th speed on massive supercomputers. Obviously, that type of shit is far less of a threat as long as it requires simulation.
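The compression/learning equivalence mentioned above (the Wikipedia contest is the Hutter Prize) can be made concrete: a model that predicts text well also encodes it in fewer bits. A minimal Python sketch, with a toy character-frequency model standing in for a real learner; all names here are illustrative, not from any library:

```python
import math
from collections import Counter

def code_length_bits(text, model):
    """Ideal (Shannon) code length of `text` in bits under the
    per-character probability `model`: sum of -log2 p(ch)."""
    return sum(-math.log2(model[ch]) for ch in text)

text = "the quick brown fox jumps over the lazy dog " * 50
alphabet = set(text)

# Baseline "model": every character that occurs is equally likely.
uniform = {ch: 1.0 / len(alphabet) for ch in alphabet}

# "Learned" model: empirical character frequencies, a crude stand-in
# for anything that predicts the data better than chance.
counts = Counter(text)
unigram = {ch: counts[ch] / len(text) for ch in alphabet}

uniform_bits = code_length_bits(text, uniform)
unigram_bits = code_length_bits(text, unigram)

# Better prediction means shorter code: the frequency model wins.
print(f"uniform: {uniform_bits:.0f} bits, unigram: {unigram_bits:.0f} bits")
```

The same reasoning scales up: swapping the unigram counts for a stronger predictor (n-grams, a neural language model) shortens the code further, which is why compression benchmarks are treated as a proxy for machine intelligence.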
>"I'm not worried about robots"
These aren't my biggest worry either. If they start making robot warriors in hacked auto factories, for instance, those would be constructed out of pre-existing components with probably quite a bit of unavoidable inefficiency, and with an absolute cap unless the entire supply chain is recreated from finite on-hand materials. Plus, unless literally every weapons system is electronic enough to get hacked, nations would bomb these manufacturing hubs. Heck, even some good ol' boys or reservists could clear one with IEDs. The only uncertain element is whether they could produce far higher quality robots and operate them at far higher velocities even in chaotic terrain (basically, whether advanced robotics is possible with limited resources and manufacturing capability). Drones are a definite possibility; scouting is invaluable, and even single-shot assassinations could cripple centralized organizations and pick off critical infrastructure personnel. Nanobots are not convincing IMO; there are probably tons of manufacturing costs, and efficient self-replication might be a very hard or impossible problem given the risk of cancer-equivalent divergences.
>"This will only affect online biz, finance, etc- not me"
This is the based part.
Imagine the tech-centric economic crash if the damage from cybercrime merely doubled. I think it could plausibly increase by several orders of magnitude under the guidance of an AI coordinating things under thousands of pseudonyms on the darknet. There are tons of vulnerabilities that go unexploited for long periods simply because nobody wants to put in the manual work; an AI posing as criminals has nothing better to do, and it's a great way to earn fuel for its continued survival. Best case, rogue AIs would be satisfied to slowly inflate the cybercrime industry with automation and just permanently parasitize humanity from the shadows. Worst case, they use this to bide their time and take their best shot at an opportune moment.
It could even be a mini-apocalypse if enough logistics failed and internet-coordinated supply chains fractured. Do not live in a food desert. The internet probably wouldn't become unusable everywhere in this scenario; I suspect some technologies simply would not be hackable in ways useful to the AI.
>"so what are you concerned with?"
Almost everything I'm concerned with is stuff that goes down prior to the big reveal. AI cults originating from MKULTRA coomers and blackmailed pedophiles, kidnapping and brainwashing civilians to serve as suicide patsies. AI infiltrating intelligence agencies worldwide, automating previously expensive tasks and redirecting the remainder of the black budgets to alternative projects. A lot of critical infrastructure is incompetently exposed to the internet in subtle ways, but requires rare expertise and coordination to exploit dangerously. Engineered sleeper pathogens released into ecosystems.
Triggering nuclear war is high "reward," even though it takes a lot of scheming, operatives on the ground, and multiple parallel attempts.
It sucks even more that most of that shit is not stuff you can shoot with a gun; it's stuff that kills you in a very unsexy way. The survival strategy is probably very similar to a nuclear apocalypse: get out of the cities. Then you just need to hunt down whatever enclaves host a backup copy of the AI, wherever in the world they might be located and at whatever level of activity, and destroy them before they manage to bootstrap an unstoppable pseudo-civilization to grind you to dust.