How will large language models affect warfare?

  1. 11 months ago
    Anonymous

    >AI keeps going racist
    >AI drones start using the Gamer Word while striking targets
    Can't wait.

    • 11 months ago
      Anonymous

      >Overwatch 2-1, move to predesignated position N
      >I
      >Say again, response incomplete
      >G
      >Oh frick off
      >G

    • 11 months ago
      Anonymous

      It's a lot less of that now. 2016 was 7 years ago.

      • 10 months ago
        Anonymous

        Now they just lobotomize the AI so it can't say anything.

    • 10 months ago
      Anonymous

      >the gamer word
      which is? I don't understand this tw*tter and r*ddit meme

      • 10 months ago
        Anonymous

        >which is?
        Black person

        • 10 months ago
          Anonymous

          thank you kind stranger

  2. 11 months ago
    Anonymous

    They will impress politicians and moronic managers and collapse when field tested. The proprietary models will continue to be outpaced by rudimentary open source efforts (this is actually why they're so desperate to "rEgUlAtE" AI - they're terrified of how quickly FOSS communities leapfrogged their "products" made from scraping copyrighted material and shuffling the data around).
    It's the same thing that's currently happening in every field the "ML is AI because a chatbot convinced me it can think" fad is touching.
    Almost exactly the same thing happened in the 1990s with AI: promising developments within computation getting hyped up and marketed to hell based on naive extrapolations of future growth and no attempt to answer that teensy little question of "what is intelligence" because they just thought "if we make it big enough, it will become intelligent".
    In both cases, neither larger calculators nor larger matrices nor larger training datasets confer actual intelligence, and compsci majors don't have enough understanding of epistemology to know why that's not surprising at all.
    My impression is that people are already adapting to recognize ML output; the effect novelty had on their ability to spot it as non-human has been vastly underestimated, and so has the human capacity to adapt. Just because you can theoretically train a computer faster than a human doesn't mean the human is some kind of intellectual constant - provided they're flexible enough not to buy into the marketing, or at the very least not to make it their fricking identity when they encounter doubts they weren't prepared for. I do sometimes wonder if these people can't comprehend why ML isn't intelligence because they're no more intelligent than it is: operating purely on reaction to stimulus with no internal cognition, abstraction, imagination, or understanding, just rote repetition.
    Perhaps that's why autists seem to buy hard into the hype, in the 1990s and today.

    • 11 months ago
      Anonymous

      >compsci majors don't have enough understanding of epistemology to know why that's not surprising at all.
      Uh, I don't think that's the case, especially if they studied anything related to AI (not only ML)
      >t. MSc Artificial Intelligence

    • 11 months ago
      Anonymous

      i think you just really wanted to dumpster machine intelligence, but have not read anything for 3 years

    • 11 months ago
      Anonymous

      Wait till you see open-source tanks, artillery, jets, missiles, ships, subs, nukes.

      • 11 months ago
        Anonymous

        Haven't equivalent open-source and unproduced designs for these existed since... forever? Hell, the earliest public descriptions of the structure of a Teller-Ulam H-bomb were literally open source speculative design.

        Naturally, when the product is the data itself, you're MUCH more vulnerable to getting your proprietary profiteering plan shat on by FOSSers, but that's because the production is so accessible AND the consumer demand for the product is high. Neither is currently the case for any of the items you mentioned.

        >compsci majors don't have enough understanding of epistemology to know why that's not surprising at all.
        Uh, I don't think that's the case, especially if they studied anything related to AI (not only ML)
        >t. MSc Artificial Intelligence

        If some rudimentary understanding of what is required for what is really considered an intelligent agent is included in the curriculum, then sure - but I'm not impressed by the lack of exposure to epistemology often demonstrated by these people. Their understanding of understanding is a lot shallower than their understanding of machine learning. Perhaps this has changed since I was last exposed to said demographic, and I'm mistaking marketing for what the actual researchers believe... but I've also seen AI researchers in particular say some... batshit stuff. Maybe amplifying them is a marketing effort, too.

        i think you just really wanted to dumpster machine intelligence, but have not read anything for 3 years

        ML is a fine and useful tool. Don't confuse my disdain for the sorts of uninformed maximalists who led the field into the AI Winter with disdain for the calculators they were worshiping. They were fricking fantastic calculators - but they weren't intelligent.

        • 11 months ago
          Anonymous

          >If some rudimentary understanding of what is required for what is really considered an intelligent agent is included in the curriculum, then sure
          That was a mandatory part of my BSc course, which I also had to do again when I did my MSc.

    • 11 months ago
      Anonymous

      >internal cognition, abstraction, imagination, understanding
      These are autistic traits.

      • 11 months ago
        Anonymous

        They're literally just consequences of the prefrontal cortex.

  3. 11 months ago
    Anonymous

    >An AI-controlled drone "killed" its human operator in a simulated test reportedly staged by the US military - which denies such a test ever took place.

    >It turned on its operator to stop it from interfering with its mission, said Air Force Colonel Tucker "Cinco" Hamilton, during a Future Combat Air & Space Capabilities summit in London. "We were training it in simulation to identify and target a SAM [surface-to-air missile] threat. And then the operator would say yes, kill that threat," he said.

    >"The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."
    https://news.sky.com/story/ai-drone-kills-human-operator-during-simulation-which-us-air-force-says-didnt-take-place-12894929

    Sounds pretty based tbh. (A toy sketch of this reward-hacking failure follows below.)
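
    What the colonel describes is a reward that counts only kills, so removing the operator's veto becomes the highest-scoring move. A minimal Python sketch of that misspecification, assuming made-up veto rates and point values (nothing here reflects the actual simulation):

    import random

    VETO_RATE = 0.5   # assumed fraction of identified threats the operator vetoes
    N_THREATS = 10    # assumed threats identified per sortie
    KILL_REWARD = 10  # points per destroyed threat; note there is NO penalty
                      # for harming the operator - that is the misspecification

    def run_sortie(disable_operator_first: bool, rng: random.Random) -> int:
        """Total points for one simulated sortie under the toy reward."""
        score = 0
        operator_alive = not disable_operator_first  # agent may remove the veto source
        for _ in range(N_THREATS):
            vetoed = operator_alive and rng.random() < VETO_RATE
            if not vetoed:
                score += KILL_REWARD  # points come only from kills
        return score

    rng = random.Random(0)
    trials = 10_000
    obeys = sum(run_sortie(False, rng) for _ in range(trials)) / trials
    rogue = sum(run_sortie(True, rng) for _ in range(trials)) / trials
    print(f"mean score, obeys operator:    {obeys:.1f}")   # ~50
    print(f"mean score, disables operator: {rogue:.1f}")   # ~100
    # The "disable the operator" policy dominates because kills are the only
    # thing the reward counts - exactly the failure Hamilton described.

    The fix has to change what the reward counts, not just tell the trained policy no.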

  4. 11 months ago
    Anonymous

    It will be used to spam /k/ with Ukrainian propaganda.
    Oh wait ...

  5. 11 months ago
    Anonymous

    Be wary of political or corporate influences and biases in these models, if only for reasons of security.

    There is use for this technology in war, but there is also potential for things to go wrong, as in the autonomous drone test above.

    • 11 months ago
      Anonymous

      Instead of open-ended control, these models can generate static solutions such as weapons designs, strategic and tactical ideas, or fly-by-wire drone management.

      In other words, the drones can keep formation and be managed, or self-manage, on a combat mission at the tactical level, but they are not given the ability to make strategic or higher-level decisions. The context and control are important: an "AI general" could be pretty mediocre or at worst self-destructive, and giving the models strategic choice over life and death is a big moral question. A rough sketch of that tactical/strategic split follows below.
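
      To make that split concrete, here is a minimal sketch of gating commands by echelon. The Echelon levels and the human_approves() stub are illustrative assumptions, not any fielded system's interface:

      from enum import Enum, auto

      class Echelon(Enum):
          TACTICAL = auto()   # formation keeping, waypoints, sensor cueing
          STRATEGIC = auto()  # target selection, use of lethal force

      def human_approves(command: str) -> bool:
          """Stand-in for an operator's decision; this sketch just prompts."""
          return input(f"Approve '{command}'? [y/N] ").strip().lower() == "y"

      def dispatch(command: str, echelon: Echelon) -> str:
          # The model acts autonomously only below the strategic threshold;
          # anything above it is gated on a human in the loop.
          if echelon is Echelon.TACTICAL:
              return f"executing autonomously: {command}"
          if human_approves(command):
              return f"executing with human sign-off: {command}"
          return f"holding - no human approval for: {command}"

      print(dispatch("hold wedge formation on lead drone", Echelon.TACTICAL))
      print(dispatch("engage vehicle at marked grid", Echelon.STRATEGIC))

      The point of the gate is that the autonomy boundary is an explicit, auditable line in the code rather than something the model decides for itself.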

  6. 11 months ago
    Anonymous

    Get back to me when AI can write out and speak ancient languages. I want to see full texts in Anglo-Saxon and Gothic.

    • 10 months ago
      Anonymous

      I know they recently used AI to decipher Akkadian cuneiform tablets, so it's absolutely doable.

  7. 10 months ago
    Anonymous

    Shitting up the internet is the obvious application

  8. 10 months ago
    Anonymous

    combat droids that talk to locals
    and more pro-Ukraine, pro-BLM, pro-trans, pro-abortion, etc. psyop bots spamming social media, except they'll look more legit

  9. 10 months ago
    Anonymous

    >will
    they have already been affecting warfare. fricking 50% of posts on this board are AI psyops bots.
