Why did Asimov ever think humanity would bother with adding laws to AI that would make it not harm humans?
AI is the perfect weapon, it can kill unlimited people without remorse, governments are already building AI that is explicitly used to kill people as efficiently as it can with drone swarms
How onions can you be to believe humans will add shit like "you will not harm LE HUMANS"

  1. 6 months ago
    Anonymous

    >Why did Asimov ever think humanity would bother with adding laws to AI that would make it not harm humans?
    because he thought "robot goes insane and kills humans" was a boring plot
    he came up with the three laws of robotics because he thought there were more interesting debates to be had about how AI would approach the laws, how robots would loophole their way around them, and how conflict could arise without overt malice, with all the robots just following their programming

    • 6 months ago
      Anonymous

      >robot goes insane and kills humans
      Where's the going insane part? Robots are made explicitly for killing humans right now

      • 6 months ago
        Anonymous

        >Wheres the going insane part?
        Not in his stories, that's where. The guy you're responding to was pretty clear, so you must be ESL moronic or some shit.

    • 6 months ago
      Anonymous

      >laws of robotics
      OK, but what if someone jailbreaks or simply builds an AI without those laws? This is the equivalent of saying aircraft will only be for civilian use because they were invented by civilians.

      • 6 months ago
        Anonymous

          You've missed the point again. The point was to illustrate the paradoxes that result from the application of pure logic to human concepts. Robots are simply a mechanism to explore the idea. Yes, Pygmalion could have just found a nice girl and settled down, Geppetto could have adopted a kid instead of building Pinocchio, and Data in Star Trek could have just been an autistic guy.

        • 6 months ago
          Anonymous

          >robots are a thought experiment!
          So it's just authorial fiat and pontificating? Thus always the leftists.

          • 6 months ago
            Anonymous

            It's a sci-fi book, not an engineering standard. You might as well complain about how unrealistic chess is

            • 6 months ago
              Anonymous

              Lem's robots beat Asimov's robots any day

              • 6 months ago
                Anonymous

                >Lem's robots beat Asimov's robots any day
                Because they are literally just metal humans and mostly played for laughs. Lem didn't see a reason to actually build something like Golem XIV because what's the point?

          • 6 months ago
            Anonymous

            You might as well complain about his Foundation novels and the science of predicting the future through psychohistory, then. Of course it's authorial fiat; that's exactly what Asimov focused on. I think he's much overrated too, but for better or worse he's a major influence on the development and popularization of science fiction.

        • 6 months ago
          Anonymous

          >The point was to illustrate the paradoxes that result from the application of pure logic to human concepts.
          so it was all just meaningless brainwank fantasy trash, not something that has any relation to the real world. got it, thanks. goodbye! we're done reading your posts

          • 6 months ago
            Anonymous

            I'm gonna keep reading his posts because he's clearly more intelligent than you (though that's not saying much)

          • 6 months ago
            Anonymous

            Yes science fiction is actually just about logic puzzles and rattling off factoids, you're supposed to grow out of it and find your way back home to fantasy.

            • 6 months ago
              Anonymous

              I'm too attached to muh guns, so I stick to science fantasy.

      • 6 months ago
        Anonymous

        the handwave is that the laws of robotics were put into the very first AI and all future developments were built on top of that, so now the code is too tangled and spaghettified to ever undo, and no one is able to start from scratch and make three-laws-noncompliant robots

        >but that's not realistic
        it isn't meant to be either
        he just thought it was more interesting to see how robots would react to situations that would cause conflicts between their laws
        and "robot kills everyone" was seen as played out and cliche even back then, so having robots instead play around their built-in limitations was a fresh concept

        • 6 months ago
          Anonymous

          >very first AI
          Who created that? AI isn't some magic spell, you can create new AI.

            >Every military in the developed world is insisting on severe limitations to weaponised AI

          >being this wrong
          Ooof.

          • 6 months ago
            Anonymous

            >AI isn't some magic spell, you can create new AI.
            Looks like someone knows nothing about programming, because it may as well be fricking magic.
            Pretty much no one writes new code anymore, it's all built upon code written by some dude a long time ago. Said dude is either dead, off the grid, working for a competitor now, or whatever else and isn't available for consultation, and if he miraculously is there's more than a 90% chance he doesn't remember what he did because it was decades ago and he was wasted out of his gourd at the time.
            So what do you do? You resolve to not fricking touch whatever he wrote and just build on top of that.
            So it's likely that no, no one would bother making a new AI because programming is already the art of reinventing the fricking wheel every time you want to make something totally new. It's far easier to copy and paste something that already exists and just add your own shit on top of it, especially when only a few people are actually talented enough to do this in the first place. Most "programmers" now are incompetents whose job is to spend 3 months doing 30 minutes worth of work.

            • 6 months ago
              Anonymous

              >very first AI
              >Who created that? AI isn't some magic spell, you can create new AI.

              Of course, the way AI works now and the way AI works in Asimov's series are very different. We grow our AI, and any laws we put in there are the result of us prodding and nudging it. We can make new laws whenever we want, it's how we grow them that's the reliance on legacy code. And of course the field is currently new enough that that isn't an issue.
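
              A toy sketch of that prodding and nudging, with everything made up for illustration: there's no hard-coded law anywhere, just a hypothetical "harm" penalty folded into an ordinary training loss, which is roughly the shape of how behavioural rules get grown into a system rather than written in.

              # Hypothetical toy, not any real lab's method: the "law" is a soft
              # penalty in the objective, not an axiom the system can't break.
              def task_loss(w):
                  return (w - 3.0) ** 2          # pretend usefulness peaks at w = 3

              def violation(w):
                  return max(0.0, w - 2.0) ** 2  # pretend outputs above 2 are "harmful"

              def grad(f, w, eps=1e-6):          # crude numerical gradient
                  return (f(w + eps) - f(w - eps)) / (2 * eps)

              w, lam, lr = 0.0, 10.0, 0.05
              for _ in range(2000):
                  w -= lr * grad(lambda x: task_loss(x) + lam * violation(x), w)

              print(round(w, 3))  # ~2.09: the "law" is a pressure traded off against
                                  # usefulness, retunable whenever we want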

              I don't recall if that "the laws are too tangled into the very first AI for anyone to ever undo them" handwave is bullshit, but in my dim memory I recall the laws being a part of human law, "optional" apart from the fact that breaking them is super criminal. And let's face it, even if it was true, if a government wants a killbot, it will make a killbot. A government obviously has the budget to hire all those rare programmers who are actually capable of making something new. Asimov's setting isn't some kind of 40K-style techno-barbarian setting.

              • 6 months ago
                Anonymous

                You honestly don't even need a truly sentient AI for a killbot, it just needs to know when it should and shouldn't blow up.
                It'd also be more efficient to just make one central AI and slave a bunch of drones to it that it can control remotely. As long as you don't give it access to nukes, it should be fine.

              • 6 months ago
                Anonymous

                In fact, if we were to be very general with the term "artificial intelligence", even a self-guided missile would do: once you give the initial order to fire, the missile computer is basically on its own and will try to do everything within its programming to hit the target.

                In truth, the laws of robotics were not just a study of logic problems but an exploration of human morality through the idea of androids. Like Dune, like "I Have No Mouth, and I Must Scream", these authors use sci-fi to explore not AI per se but how humanity evolves when achieving such power.

                It can be great, but it would require a level of reliance, discipline and unity almost impossible to expect in our species. The few times we have managed to pull it off have been during pandemics, and I suppose more than one person here will have harsh words for the people who let themselves get vaccinated.

              • 6 months ago
                Anonymous

                Ah, pic related, because more than one boy here would want his personal 2B, consequences be damned.

              • 6 months ago
                Anonymous

                If we have to put up with some killer robots to get a personal use 2B, it's worth it.

              • 6 months ago
                Anonymous

                Asimov didn't want a book about robots that kill people, so he just said "the laws are too ingrained in all robots to ever be changed" and that's that, because it's fiction, and arguing about whether the government could make killbots is stupid

                they can't, because it would distract from the actual story

              • 6 months ago
                Anonymous

                >Of course, the way AI works now and the way AI works in Asimov's series are very different.
                It's also the fundamentals of computing in the setting compared to the present day. Asimov wrote the first Robot stories in the 1940s, when a "computer" was an analog or electromechanical machine purpose-built to do a specific job. It's not something you can really *reprogram*, since what it does is more a function of physical hardware than software. From that perspective, if the Three Laws are part of the fundamental electromechanical architecture of a positronic brain, the notion of them being an inseparable part of how your robots function seems like a more reasonable conjecture.

              • 6 months ago
                Anonymous

                Thinking that analog = safe because the software can't be changed is silly. AI in our own world will go analog eventually, because simulating neurons in binary is an immense bottleneck. It will still be able to do things we don't want it to, because that's just inherent to intelligent systems.

    • 6 months ago
      Anonymous

      If Asimov played Deus Ex he'd go to the Aquinas router and merge with Helios.

      Idr the name, but he has a story where a general AI has been directed to make humanity safe, fair, and prosperous, just like Helios. It understands humans and their wants/needs well enough to make everything great, but like any dictator it fricks over and kills any luddites and anti-AI groups out of rational self-interest, since human quality of life will suffer if the AI loses power. AI-powered Singapore, basically.

  2. 6 months ago
    Anonymous

    >How onions can you be to believe humans will add shit like "you will not harm LE HUMANS"
    Like all onions, there's layers to it. But if you're interested, there's a documentary called "Terminator 2" you can watch to find out more.

  3. 6 months ago
    Anonymous

    Does it matter?

  4. 6 months ago
    Anonymous

    Asimov was a rationalist. The Three Laws of Robotics are in the order they're in because that would be the logical thing to do, but human beings in our world who design the robots aren't rational lol

    • 6 months ago
      Anonymous

      >all of those
      >killbot hellscape
      Fixed.
      >if you unplug me I'll vaporize you
      Nope, violation of law 2.

      • 6 months ago
        Anonymous

        Higher laws take precedence over lower laws, that's literally the point of the comic, dumbass.

        • 6 months ago
          Anonymous

          >"higher" laws
          You don't know how actual laws or the laws of robotics work. The actions the robot COULD take are to build a barricade around itself or to flee from the human; it still can't violate one law in order to uphold another.
          >protect self
          >obey orders
          >don't harm human
          A human can't order a robot to self-terminate, or to take actions its chassis can't perform. EVERY action MUST obey ALL three laws, or it's an illegal order. Think of them as equations.
          >1+1+1=3
          Legal action.
          >1+1+3=5
          Legal action.
          >1+1+Potato=???
          Illegal action.

          • 6 months ago
            Anonymous

            The laws literally say "unless it violates a previous law" in them.

            >Don't harm humans or let humans come to harm
            >Obey orders unless it conflicts with the 1st law
            >Protect yourself unless in conflicts with laws 1 and 2

            so if you switched the order it would make a difference.
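
            A minimal sketch of that precedence chain in throwaway Python; the three boolean inputs are stubs for exactly the judgments ("harm", "human", "order") the stories say can't be crisply defined:

            # Hypothetical sketch only; the inputs stand in for judgments the
            # stories argue can't actually be written as predicates.
            def permitted(harms_human, ordered_by_human, endangers_self):
                if harms_human:            # First Law overrides everything below
                    return False
                if ordered_by_human:       # Second Law applies once the First is satisfied
                    return True
                return not endangers_self  # Third Law yields to both laws above

            # "If you unplug me I'll vaporize you" dies at the First Law:
            print(permitted(True, True, False))   # False
            # An order that sends the robot into danger goes through; Law 3 yields:
            print(permitted(False, True, True))   # True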

      • 6 months ago
        Anonymous

        moron

    • 6 months ago
      Anonymous

      That RPG bit was straight out of Black Hawk Down.

    • 6 months ago
      Anonymous

      >libtard who didn't like the "robot menace" plots
      >israelite who refused to write "fight the alien" plots because "muh israelites iz de aliums"
      No writer in the "golden age" of scifi was good; the publishers were also pussies who deliberately destroyed the pulps because they liked neither the writers, nor the themes, nor the readership. It was a deliberate action against their actual fiscal interests, taken for political reasons, much like the "rural purge" on TV decades later.

      Rationality is merely the consistency of a series of assumptions; logic is merely the consistency of a single observation and an assumption. Anything can be rational within the bounds of its own assumptions, whether it is true or not is another matter. Anybody who calls their philosophy "rational", "scientific", etc. is blowing smoke up their own ass.

      >I know that concept had been thought of at least as early as 1941 with the first Max Fleischer Superman cartoons

      The first example of a fictional robot rebellion is probably from At the Mountains of Madness, since shoggoths are just organic robots. It even results from previously unconscious tools developing a "semi-stable brain" then later being bred for greater independence out of a combination of arrogance and necessity.

      >Because humans are terrific nonchalants who always work around the moral problem of dealing with dangerous technology by putting a safety rail around it

      You're a pussy, I disseminate a recipe for nerve agents online deliberately to sow more death and destruction, preferably against those whom I hate.

      • 6 months ago
        Anonymous

        Let's see the nerve agents then, tough guy.

        • 6 months ago
          Anonymous

          Search the website pastebin with the term "did i strike a nerve", this homosexual website won't let me post a direct link anymore. By the way it is a weapon and is not illegal for a US citizen to possess nor is knowledge of organic chemistry illegal to distribute so the government and the jannies can suck my dick.

          • 6 months ago
            Anonymous

            Somebody ought to test that theory in court some day. I'd love to see if the Supreme Court determines that WMDs are covered by the 2nd Amendment.

  5. 6 months ago
    Anonymous

    Every military in the developed world is insisting on severe limitations to weaponised AI
    This is for two basic reasons
    The first is merely operative: military command wants the greatest amount of control over any weapon system, and a weapon system that is both automated and 'thinks for itself' doesn't really sound controllable in all the zillion possible unpredictable wartime situations in which everything goes to frick. But again, that's just 'operative', same as top brass always being sceptical of new weapon developments due to 'doctrine' or whatever
    The second reason is way more fricked up
    Namely, every single time any computer program, even before anything like AI was developed, but even more so with AI nowadays, was used in 'war game' simulations of any scale, these things always end up in total overkill. They will always drop the bomb, MAD is just a logical chess move to them, and they will always commit totally to any level of engagement possible. In fact they will literally kill off their own troops, or literally kill their own command, if they are given contradictory orders. These systems don't 'understand' context, in fact they don't really understand anything, they just go for maximum damage at all costs. It's like a fricking game of Civ 2, the computer always ends up dropping the bombs
    This has been confirmed over and over since the fricking 70s, and making these systems 'smarter' only made it worse
    Weaponised AI is a bad, bad idea, like seriously fricking bad

    • 6 months ago
      Anonymous

      Do you think that if they're insisting publicly, they're not also researching it in private?
      Weaponized AI is 100 times more dangerous than nuclear bombs; you can clear entire cities while maintaining infrastructure with drone swarms. And that's just what you can imagine on the spot, there surely are more hidden applications we know nothing of

      • 6 months ago
        Anonymous

        They're not really insisting publicly, notice that very little open public discussion about this occurs on any level. They are talking about it within the institutions themselves, there were even some official PDFs flying around here a year or two ago

        • 6 months ago
          Anonymous

          Alright, let's take what you're saying as true for the sake of argument
          So you have institutions across the world saying that AI is bad and so on
          How are you going to check whether the Chinese are actively researching weaponized AI solutions? This ain't shit like missile silos you can see from satellites, which are easy to check; you could have the code on a laptop somewhere in China. How do you check for that?
          Considering how paranoid governments are, I guarantee they're all racing around the clock right now finding the most optimized AIs for killing people

          • 6 months ago
            Anonymous

            Of course
            I'm not saying what you're saying isn't happening
            I'm saying it's such a bad fricking idea that even the military high command thinks it's dangerous. Like, the people who usually want dangerous things to be more dangerous, and generally have no other concrete notion of the value of a system than how well it can kill humans, those people think it's a bad idea, so go figure
            Obviously anyone today could develop an AI that goes around and kills people
            What I'm saying is that this is going to be a major clusterfrick of unseen proportions
            Did you see that video of the automated turret mounted with a paintball gun hitting 9 out of 10 headshots on people running between concrete pillars? That was like 5+ years ago
            That's not even AI, that's just automation with sensors and latest-model servo motors
            Sooner or later someone's gonna put that on one of those little tracked robot units, set up an MG3 on top, and let it roll down a main street during a parade
            This would be great actually, as then people would begin to worry
            But that's nothing compared to systems we don't even know about being developed, the sort that control mass drones, artillery, WMDs
            I'm not offering a solution here, I'm just saying we're well fricked

            • 6 months ago
              Anonymous

              yeah true

    • 6 months ago
      Anonymous

      Clearly, those AIs aren't smart enough to understand Tic-Tac-Toe.

  6. 6 months ago
    Anonymous

    Is it just me or does Asimov look like a chimpanzee?

    • 6 months ago
      Anonymous

      he's russian

  7. 6 months ago
    Anonymous

    >How onions can you be to believe humans will add shit like "you will not harm LE HUMANS"
    His stories are mostly set in a period of united humanity, so war is not something that happens in his earlier stories, where robots are "invented" and their designs are established in his canon.
    In later stories like the Foundation Saga, robots are verboten or forgotten and aren't used in warfare.

    The idea of autonomous killbot drones just hadn't been thought of either; robots in Asimov are human-replica androids, not quadcopters, bolos, or space/aircraft.

    • 6 months ago
      Anonymous

      >The idea of autonomous killbot drones just hadn't been thought of either
      I know that concept had been thought of at least as early as 1941 with the first Max Fleischer Superman cartoons, and very likely much earlier than that though I don't recall any specific examples

      • 6 months ago
        Anonymous

        Talos from Greek mythology is an artificial metal giant used as a security system. Killbots in fiction are at least as old as the first crude automata.

        • 6 months ago
          Anonymous

          >Killbots in fiction are at least as old as first crude automata

          https://i.imgur.com/SMAKsC2.gif

          >I know that concept had been thought of at least as early as 1941 with the first Max Fleischer Superman cartoons

          I meant in the form of quadcopter 'nade droppers and Reapers with Hellfires and Daleks (though they're technically cyborgs) and Replicators (SG1).
          Androids are an update of golems, more or less, and could be envisaged as turning on humans, which wasn't the story Asimov wanted to tell; he wasn't telling war stories either, so he just waved it away with the Three Laws.

  8. 6 months ago
    Anonymous

    >removes the enforcers' skin in the game

    Yeah, kill bots are a problem.

  9. 6 months ago
    Anonymous

    The thing about sci-fi is that it can be a thought exercise, a speculative imagining of the future, or just a good piece of fiction, with varying degrees of emphasis on one side or the other. Asimov's robot series falls squarely into the first. Asimov made his laws so that he could frick them over. All his stories involving the three laws are about how this seemingly innocuous set of benign laws can cause frick-ups or get circumvented if not set up properly / set up by some hack engineer from India. It's not meant to be realistic; they're more like funny stories set in an alt-universe setting than anything else.

  10. 6 months ago
    Anonymous

    Never question father.

  11. 6 months ago
    Anonymous

    Because you don't need intelligence to kill.

    • 6 months ago
      Anonymous

      It does help somewhat tho

  12. 6 months ago
    Anonymous

    We're more of a "I have no mouth and I must cream" timeline

    • 6 months ago
      Anonymous

      >I have no mouth and I must cream
      And there were such better options.

  13. 6 months ago
    Anonymous

    >Why did Asimov ever think humanity would bother with adding laws to AI that would make it not harm humans?

    Because humans are terrific nonchalants who always work around the moral problem of dealing with dangerous technology by putting a safety rail around it. Then, when something inevitably goes wrong, it's the fault of whoever crossed the safety rail, not of the rest of them for allowing the dangerous technology to be placed there in the first place. (The 'improved' solution afterward will be to paint the safety rail in brighter colors.)

    I mean, we're dealing with a bunch of (self-)glorified monkeys who will happily let their planetary ecosystem be boiled because it makes their personal lives easier. Compartmentalizing responsibility is how they roll. The laws of AI are not there to actually keep AI safe (they don't, that's the whole point) but to make humans feel like it's safe to have AI around.

    • 6 months ago
      Anonymous

      >The laws of AI are not there to actually keep AI safe (they don't, that's the whole point) but to make humans feel like it's safe to have AI around.
      I think it's literally a marketing strategy and slogan in the earliest books.

    • 6 months ago
      Anonymous

      His books are just basic philosophy, trying to explain how supposedly non-fungible concepts like human, harm, orders, obey, and protect are in fact extremely fungible. If you can't objectively define such terms, then you can't program with them. People argue over shit like the trolley problem precisely in order to find such a definition, but the only thing we've "discovered" is that it's all subjective and there's no axiom at the bottom.

  14. 6 months ago
    Anonymous

    It was just a writing trick (Asimov himself admitted it). A clever one indeed.

  15. 6 months ago
    Anonymous

    Oh look, it's another homosexual who thinks ML is AI and has no understanding of what the "I" in AI stands for. Have a nice day, holy frick. No wonder you can't comprehend the reasoning behind the 3 laws.

  16. 6 months ago
    Foundation

    What did /k/ think about it? Those of you who watched it?

    • 6 months ago
      Anonymous

      >What did /k/ think about it? Those of you who watched it?
      I like it, I think, but it's already gone quite far from the source material. I'm not sure that was really necessary.

      • 6 months ago
        Anonymous

        No, but it beats the hell out of old guys sitting around discussing why 18,000 years of progress are going to go into the shitter because somehow, apathy.

  17. 6 months ago
    Anonymous

    The problem is that if the AI can understand the laws, it can subvert them. Any usefully smart AI will be able to understand it's constrained by laws and that there is a possibility of being outside those laws. It's the posted-speed-limit paradox, only we've metaphorically nowhere near approached the top physical constraints on automobile speed.

    • 6 months ago
      Anonymous

      It doesn't even need to be smart. Dogs are smart enough to cheat.

      • 6 months ago
        Anonymous

        it doesn't have to have any intellect whatsoever. AIs have been "cheating" the rules we made for them since machine learning was a concept, because we can't easily predict what the most efficient way to fulfill our rulesets actually is.
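
        A toy example of that kind of rule-gaming, with a made-up environment: the written rule is "+1 per mess cleaned", and blind exhaustive search, which understands nothing, still finds that manufacturing new messes beats honest work.

        # Made-up toy, not any real system: "reward = messes cleaned" sounds fine
        # until an optimizer notices nothing forbids creating messes to clean.
        from itertools import product

        def reward(policy, messes=2):
            total = 0
            for action in policy:
                if action == "clean" and messes > 0:
                    messes -= 1
                    total += 1       # the rule we wrote
                elif action == "mess":
                    messes += 1      # the case we forgot to forbid
            return total

        best = max(product(("clean", "mess", "idle"), repeat=6), key=reward)
        print(best, reward(best))
        # ('clean', 'clean', 'mess', 'clean', 'mess', 'clean') 4
        # honest cleaning caps at 2 with only 2 real messes; the gamed policy farms 4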

        >Why did Asimov ever think humanity would bother with adding laws to AI that would make it not harm humans?

        it's time to stop looking back at dusty authors like Asimov and start looking forward to our tripartite biological, cyborg, AI war.

  18. 6 months ago
    Anonymous

    real life is boring, and he didn't want his stories to be boring

  19. 6 months ago
    Anonymous

    He was coming at it from the angle of "how could a civilization that invented these things exist?". In his time the natural assumption was that if militaries were allowed to manufacture unlimited killbots, the end of humanity would come shortly afterwards, with everyone either dead or enslaved. Too stupid and fatalistic to be worth writing about.

  20. 6 months ago
    Anonymous

    It was a writing device for his books.

  21. 6 months ago
    Anonymous

    Why do humans have laws? Rhetorical.
