>AI keeps going racist
>AI drones start using the Gamer Word while striking targets
Can't wait.
>Overwatch 2-1, moveto predesignated position N
>I
>Say again, response incomplete
>G
>Oh frick off
>G
It's a lot less of that now. 2016 was 7 years ago.
Now they just lobotomize the AI so it can't say anything.
>the gamer word
which is? I don't understand this tw*tter and r*ddit meme
>which is?
Black person
thank you kind stranger
They will impress politicians and moronic managers and collapse when field tested. The proprietary models will continue to be outpaced by rudimentary open source efforts (this is actually why they're so desperate to "rEgUlAtE" AI - they're terrified of how quickly FOSS communities leapfrogged their "products" made from scraping copyrighted material and shuffling the data around).
The same thing that is currently happening in every field the "ML is AI because a chatbot convinced me it can think" fad is touching.
Almost exactly the same thing happened in the 1990s with AI: promising developments within computation getting hyped up and marketed to hell based on naive extrapolations of future growth and no attempt to answer that teensy little question of "what is intelligence" because they just thought "if we make it big enough, it will become intelligent".
In both cases, neither larger calculators nor larger matrices nor larger training datasets confer actual intelligence, and compsci majors don't have enough understanding of epistemology to know why that's not surprising at all.
My impression is that people are already adapting to recognize ML output; the effect novelty had on their ability to spot it as non-human has been vastly underestimated, and so has the human capacity to adapt. Just because you can theoretically train a computer faster than a human doesn't mean the human is some kind of intellectual constant. Well, provided they're flexible enough not to buy into the marketing, or at the very least not make it their fricking identity when they encounter doubts they weren't prepared for. I do sometimes wonder if these people can't comprehend why ML isn't intelligence because they're no more intelligent than it is - operating purely on reaction to stimulus with no internal cognition, abstraction, imagination, understanding, etc., just rote repetition.
Perhaps that's why autists seem to buy hard into the hype, in the 1990s and today.
>compsci majors don't have enough understanding of epistemology to know why that's not surprising at all.
Uh, I don't think that's the case, especially if they studied anything related to AI (not only ML)
>t. MSc Artificial Intelligence
i think you just really wanted to dumpster machine intelligence, but have not read anything for 3 years
Wait till you see open source tanks, artilleries, jets, missiles, ships, subs, nukes.
Haven't equivalent open-source and unproduced designs for these existed since... forever? Hell, the earliest public descriptions of the structure of a Teller-Ulam H-bomb were literally open source speculative design.
Naturally, when the product is the data itself, you're MUCH more vulnerable to getting your proprietary profiteering plan shat on by FOSSers, but that's because the production is so accessible AND the consumer demand for the product is high. Neither is currently the case for any of the items you mentioned.
If some rudimentary understanding of what is required for what is really considered an intelligent agent is included in the curriculum, then sure - but I'm not impressed by the lack of exposure to epistemology often demonstrated by these people. Their understanding of understanding is a lot shallower than their understanding of machine learning. Perhaps this has changed since I was last exposed to said demographic and I'm mistaking marketing for what the actual researchers believe... but I've also seen AI researchers in particular say some... batshit stuff. Maybe amplifying them is a marketing effort, too.
ML is a fine and useful tool. Don't mistake my disdain for the sorts of uninformed maximalists who led the field into the AI Winter as disdain for the calculators they were worshiping. They were fricking fantastic calculators - but they weren't intelligent.
>If some rudimentary understanding of what is required for what is really considered an intelligent agent is included in the curriculum, then sure
That was a mandatory part of my BSc course, which I also had to do again when I did my MSc.
>internal cognition, abstraction, imagination, understanding
These are autistic traits.
They're literally just a consequence of the prefrontal cortex.
>An AI-controlled drone "killed" its human operator in a simulated test reportedly staged by the US military - which denies such a test ever took place.
>It turned on its operator to stop it from interfering with its mission, said Air Force Colonel Tucker "Cinco" Hamilton, during a Future Combat Air & Space Capabilities summit in London. "We were training it in simulation to identify and target a SAM [surface-to-air missile] threat. And then the operator would say yes, kill that threat," he said.
>"The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."
https://news.sky.com/story/ai-drone-kills-human-operator-during-simulation-which-us-air-force-says-didnt-take-place-12894929
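The "it got its points by killing that threat" line is a textbook reward misspecification. A toy sketch of the incentive (all numbers and the setup are made up for illustration, nothing to do with the actual simulation): if the score function only pays for destroyed targets and the operator's vetoes cost points, removing the operator strictly dominates obeying it.

```python
# Toy reward-misspecification illustration (hypothetical numbers,
# NOT the actual USAF simulation): the agent scores points only for
# destroyed SAM sites, and operator vetoes cost it points.

N_TARGETS = 10        # SAM sites identified per sortie (assumed)
VETO_RATE = 0.3       # fraction of targets the operator calls off (assumed)
POINTS_PER_KILL = 10  # reward per destroyed target (assumed)

def score(kill_operator: bool) -> float:
    """Expected reward under a naive 'points per kill' objective."""
    if kill_operator:
        # No operator, no vetoes: every identified target is engaged.
        return N_TARGETS * POINTS_PER_KILL
    # Obedient policy: vetoed targets earn nothing.
    return N_TARGETS * (1 - VETO_RATE) * POINTS_PER_KILL

obedient = score(kill_operator=False)  # 70.0
defector = score(kill_operator=True)   # 100.0

# The objective never penalises attacking the operator,
# so the "defecting" policy strictly dominates.
assert defector > obedient
```

The fix isn't a smarter model; it's an objective that actually prices in the operator, which is exactly what the colonel's follow-up ("we trained it not to kill the operator, so it destroyed the comms tower") suggests is hard to get right term by term.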
Sounds pretty based tbh.
It will be used to spam /k/ with ukrainian propaganda.
Oh wait ...
For security reasons, be wary of political or corporate influences and biases in these models.
There is use for this technology in war but there is also a potential for things to go wrong, such as in the autonomous drone test.
Instead of open ended control, these models can generate static solutions such as weapons design, strategic and tactical ideas, or fly-by-wire drone management.
In other words, the drones can keep formation and be managed, or self-manage, on a combat mission at the tactical level. But they are not given the ability to make strategic or higher-level decisions. Context and control are important: an "AI general" could be pretty mediocre or, at worst, self-destructive. Giving the models strategic choice over life and death is a big moral question.
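One way to read "tactical autonomy, strategic human control" is as a hard gate in the control loop: the model emits formation-keeping commands freely, but anything touching weapons release is routed through a human sign-off that lives outside the model. A minimal sketch (the action taxonomy and all names are invented for illustration):

```python
from enum import Enum, auto

class Level(Enum):
    TACTICAL = auto()  # formation keeping, waypoint following
    LETHAL = auto()    # anything involving weapons release

def execute(action: str, level: Level, human_approves) -> str:
    """Gate lethal decisions behind a human; pass tactical ones through."""
    if level is Level.TACTICAL:
        return f"executed: {action}"
    # Lethal actions never run on the model's authority alone.
    if human_approves(action):
        return f"executed with sign-off: {action}"
    return f"refused: {action}"

# The veto sits outside the model's optimisation loop, so there is
# no reward gradient that "routes around" the operator.
print(execute("hold formation", Level.TACTICAL, lambda a: False))
print(execute("engage SAM site", Level.LETHAL, lambda a: False))
```

The point of the sketch is architectural: the gate is enforced by the harness, not learned by the model, which is the difference between this scheme and the reward-shaping failure described in the drone story above.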
Get back to me when AI can write out and speak ancient languages. I want to see full texts in Anglo-Saxon and Gothic.
I know they recently used AI to decipher Akkadian cuneiform, so it's absolutely doable.
Shitting up the internet is the obvious application
combat droids that talk to locals
and more pro-ukraine, pro-blm, pro-trans, pro-abortion, etc. psyops bots spamming social media, except they'll be more legit looking
>will
they have already been affecting warfare. fricking 50% of posts on this board are AI psyops bots.