I'm actually kinda worried about a future where warfare is directed mostly by AI programs rather than people
Preparedness & self-sufficiency community
You're unlikely to live to see it. Until binary computing gets replaced, AI will be both very dumb and at the same time highly autistic at absolute best. Even with the advent of multi-bit computing, the scope of any such software project is utterly insane.
Expect them in our lifetimes to be pretty decent secretaries, orderlies, and the like. A brain extension and/or personal assistant. I'm still annoyed nobody's made a decent PA yet.
you're at the 4th stage right now
You don't understand AI or programming in general, so ChatGPT seems like magic to you.
The human brain makes something on the order of a billion-billion calculations per second. Even the most advanced theoretical supercomputers, available as boutique and bespoke research projects in the near to medium future, operate at least one order of magnitude lower than that.
We aren't even remotely close to developing an actual AI. What we have now are examples of absurdly thorough scripting.
>absurdly thorough scripting
*that is being scripted by AI
that's the worrying part, we're letting them design themselves
at what point is it a black box Chinese Room that no human can actually understand, but has to implicitly trust is working the way it is described?
>at what point is it a black box Chinese Room that no human can actually understand, but has to implicitly trust is working the way it is described?
it is already there. You cannot explain neural network reasoning about an object.
There's already hundreds of things you take for granted in your day-to-day life that you cannot explain and yet they have nothing to do with AI or computers
just because you can't explain it doesn't mean somebody else doesn't have a complete understanding of it. Welcome to specialization; it's the reason everyone gets nice shoes even though less than a tenth of one percent of the population is capable of producing them
No, not even close.
Eh, not really. Language models seem incredibly impressive because of their output but they don't 'learn.' The bots that can beat games do so by running silly volumes of iterations and producing their own nested if/then logic.
Let me put it this way: AIs stop being useful the very moment they have to make a decision or a judgement call. They are excellent at more or less replacing search engines and secretaries unless you're REALLY into pencil skirts.
Let me try a few more examples, with apologies, I'm very very tired so my articulation is failing.
Computers can only easily multiply by powers of two (bit shifts). Everything else is insanely fast addition, or a combination of the two.
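To illustrate (a minimal Python sketch of the idea, not literally what a hardware ALU runs): multiplying by anything other than a power of two can be decomposed into shifts and additions:

```python
def shift_add_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers using only bit shifts
    (multiplication by powers of two) and addition."""
    result = 0
    while b:
        if b & 1:        # if this bit of b is set...
            result += a  # ...add the correspondingly shifted a
        a <<= 1          # a *= 2
        b >>= 1          # move on to the next bit of b
    return result

print(shift_add_multiply(13, 11))  # 143
```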
Computers can only do one thing at a time. It does those things much faster on the surface than a human brain, but not nearly as thoroughly and only with what parameters it was fed or programmed to receive. If something observable falls entirely outside of that scope, it might as well not exist - or it can cause errors, depending.
Computers cannot form thoughts or create, invent, or innovate. They can apply what has already been created in scripted ways and that's about it.
Think of vidya. Hell, take Halo. It'd be easy (relatively speaking) to write a script to shoot people in the head, but what about movement? Positioning? Dodging grenades? How does a program even know where the grenades are when there's no indicator and most of your cues are motion-related? Then what happens if a virtual bird or piece of debris flies across the screen?
We observe everything, all the time, and perform calculations on all of it. We can rapidly filter incoming information and make judgement decisions that are astronomically complex but are so trivial we don't even consider them.
Speaking of Halo, that franchise's AI operates off of human brain tissue specifically because good fucking luck making cell-sized, interconnected rather than gated switches.
Lmao, no. Not that guy, but I'm a programmer, I have zero worries about AI. It's laughable that people think google translate 2.0 will take over the world
The big thing AI has over humans is easy import/export. Everyones brain generates full 3d movies in real time while they sleep, they just can't publish them as webms.
You don't understand anything about neurology. Human brains only make 4 to 5 calculations per second. It's terrible at math, and even worse at remembering. You can only remember around 7 digits, and that's at absolute maximum. Most people tap out at around 4 digits.
That's true about conscious thought, but on an unconscious level there's some very complicated calculations going on if you stand on one foot and wave your arms around.
YOU don't understand anything about neurology. Your brain is controlling your heart rate and metabolism right now based on an impossibly complex set of inputs that you don't even understand. When you stand, your brain is sending complex signals to hundreds of muscles in your body while detecting balance from your inner ear and compounding that with all of your senses. This is all so autonomous that you don't even know that it is happening. It took us many years to get bipedal robots that can balance and dance around, and yet every human ever born with legs and a functioning nervous system is capable of it.
That's not performing calculations, though. You might as well say that a centipede brain can perform calculations fifty times faster than a human one, since it's operating more legs at once. Or that a single pocket calculator can simultaneously control over a million robot bodies in perfect unison, because it can do math faster. It doesn't, and it can't, I'm just taking a metric (it took this much motoric ability for the human race to learn arithmetic) and linearly scaling that to a system with a completely different architecture.
>That's not performing calculations, though
what the fuck is a calculation other than taking data, applying an algorithm, and making a decision about it? Such as in the case of controlling heart and breathing rate?
>You might as well say that a centipede brain can perform calculations fifty times faster than a human one, since it's operating more legs at once
It certainly is impressive that small nervous systems can coordinate hundreds of legs to create forward locomotion, and not something that we can easily emulate with technology. A centipede certainly can do it a lot faster and more efficiently than a human brain in any case, yes.
you're a fucking idiot
>what the fuck is a calculation
A mathematical operation. Like deriving the length of the hypotenuse of a right triangle, from knowing the lengths of the sides. Most humans can be taught to do this, but they'll never beat a purpose-designed machine that can do it in milliseconds.
>you're a fucking idiot
Seriously, I think "motoric" is the right word here. Same root as "motion".
Why do you believe any of this is necessary for AI, that biological systems are the only way to optimize intelligence?
I never said it was.
Lord. Your post made me remember that I have to breathe and now I’m paying attention to my breathing. Also I’m aware of my tongue, and my nose itches slightly. Fuck you man…
>4 to 5 calculations per second
>7 digit memorization
Holy shit, what the fuck, anon? General actions in your daily life should make you realize how fucked off your post is.
I have pi memorized to 10 digits. I don't even know why, I never made an attempt to memorize pi beyond 2 digits. But after seeing the number on a calculator enough times my brain decided it was relevant enough to store the information. If I can "accidentally" memorize a 10 digit sequence, surely you can see how you are just plain wrong
good point, actually
I've only got 8, as I recall that's all that's needed for most physics calculations
I lied. I guess it was 14 digits that I had memorized. As I said, that just happened to be the number of digits my calculator displayed, it was not an intentional memorization.
hm, odd that I was taught to round the 6 up to a 7 when the decimal after is 5
I mean it's not technically incorrect, 5 does round up, it's just the least precise rounding there is, and the whole point of pi calculations in physics is to be as precise as possible to get as close to a perfect circle as possible
I probably should memorize out further then...
well it's not "technically incorrect" it's just the right way to do things. Keep in mind that 5 followed by more digits is greater than 5, that's why we always round up from 5. If the sequence goes on any longer than that point, the 5 is always actually "5 plus a little bit" so we have to round up
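For what it's worth, that's exactly the round-half-up rule, and you can check it against pi's digits with Python's decimal module (a quick sketch):

```python
from decimal import Decimal, ROUND_HALF_UP

pi = Decimal("3.14159265358979")
# Rounding to 7 decimal places: the dropped part starts with a 5
# followed by more digits ("5 plus a little bit"), so it rounds up.
print(pi.quantize(Decimal("1.0000000"), rounding=ROUND_HALF_UP))  # 3.1415927
```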
Memorization is not memory. If you are given a series of random numbers, and then told to fuck off and come back a few minutes later to see how many you'll remember, the maximum is seven. That's short term memory, and it's pathetic.
"Computers aren't shit, this one built in minecraft can only do basic arithmetic!"
That's your argument.
Your consciousness isn't even close to everything your brain is doing in the background.
Anon have you ever been driving and found yourself lost in thought then realized that you can’t remember what the last 5 minutes of road looked like.. and yet you actively drove through it and didn’t crash?
I don't think the AI cargo cultists are ready to hear this
"Muh actual AI" is just goalpost moving.
What matters is problem solving, not some philosophical theorizing about consciousness. If computer does thing better than human, you replace human with computer for this thing. Simple as.
>humans get better at farming
>fewer have to farm
>in the short term, economic upheaval
>in the long term, humans create new jobs, new specializations, new fields, and everyone benefits
"AI stealing our jerbs" is just the agricultural and industrial revolution all over again. And anyway, nobody's going to lose any jobs that were important. If it can be automated by an engineer, it was meant to happen sooner or later
That was a bigger deal than this thing by several orders of magnitude
That was a mistake
>the industrial revolution was a mistake
As you sit in your chair made in China, with a computer filled with microprocessors made in Taiwan, eating your tendies made by Mexicans in a meat packing plant, all products made by people you will never meet and bought for a pittance compared to their true value, I want you to think about what you have said here.
You just made a good argument for why globalization (which may or may not be an inevitable consequence of the industrial revolution) is bad. I'd rather pay more money to have domestic products manufactured in an ethical manner, like my chairs, guns, food, etc. Regarding electronics, I buy used smartphones that are already 2-3 generations old and keep them for another 3-4 years or until they die. So you tell me if I care about the latest and greatest smartphone made by quasi-slaves for an affordable price. My PC is about 5-6 years old.
Hello, I studied CS, machine learning, philosophy, and am a programmer.
And you, dear anon, are not aware of your own rather low-IQ biases. That bias being that you elevate the question of what AI is to some metaphysical essence quality that the AI is supposed to attain, and which, in your eyes, pro-AI-danger people are advocating for as trivial.
That is not so. That is literally not their danger. You are just arguing around words. You are a midwitted word-thinker, concept-fumbler.
Or in concrete terms:
It doesn't matter if we never get "true" AI. What just matters is that it BTFOs humans at a certain task, and that the degree of BTFOness as well as the range of tasks where that BTFO may happen constantly expands.
Your conception of needing some science fiction general AI on android platforms is childish fantasy.
>I need to build a pretentious wall of text all while avoiding saying anything substantial
pretty sure this post was ai generated
I can't fix your tremendously low IQ for you. There are studies showing that people with a gap of 30 IQ points between them are essentially mutually incomprehensible (especially in the higher-to-lower direction).
So take that as you want.
I am ready to answer your questions, though. What are you confused with? Couldn't you parse the core issue of that other guy's post I quoted? The core issue is that he elevates the question of what AI is to some abstract philosophical issue, rather than a pragmatic one of capabilities. The extremely advanced pragmatic capabilities of modern AI have already been demonstrated. Now imagine it just scaled some more, into the future. It's not some inflection point (e.g. exponential) we need, just a stupid linear scaling, and we are doomed.
What is there not to understand? Are you genuinely that stupid?
Great work explaining your ideas anon! At a glance it may look like you're struggling to intuit what both of these posters are communicating to you, but the REAL smart people know that they just don't understand your genius
> actual AI
Please define what an "actual AI" is.
Also, "calculations per second" is not really an appropriate measurement for AI model training performance. Usually these are measured in FLOPS, floating point operations per second, though strictly speaking even that has no fixed relationship with AI performance, which involves multiplying huge numbers of enormous matrices together. As other Anons have said, neuromorphic computing has a lot of potential to improve AI performance.
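As a rough illustration of why FLOPS is the usual yardstick: the work is dominated by matrix multiplies, whose cost is easy to count (the sizes below are hypothetical, just for scale):

```python
def matmul_flops(m: int, k: int, n: int) -> int:
    """FLOPs to multiply an (m x k) matrix by a (k x n) matrix:
    each of the m*n outputs takes k multiplies and k adds."""
    return 2 * m * k * n

# One hypothetical 4096 x 4096 square multiply:
print(f"{matmul_flops(4096, 4096, 4096):.2e}")  # ~1.37e+11 FLOPs
```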
I also find it amusing that you claim others "don't understand programming" yet abuse the word "scripting". Or even think that programming has anything to do with the current AI paradigm.
Everyone started having a collective freakout about AI and governments are all talking regulation the instant AI became powerful enough to draw a pretty picture.
But you're convinced as the tool progresses to greater stages of power, people will be less scared and choose less control over it at each step?
Until eventually they become dumber than the fictional characters in Wargames 1983 and make an AI the ultimate authority of a military, because look how much it can kick our ass?
That doesn't match observed trends, that matches a fantasy where only you're smart enough to see obvious dangers as the sheeple all charge off a cliff.
>government offer token resistance to corporations eager to get deeper into AI shit
and you think this amounts to anything?
People "freak out" about it, and yet they also are rapidly integrating it into their lives.
I have tried to avoid it as much as I can and I still have interacted with AI a few times. At this point I bet every human who has an internet connection has had at least one interaction with AI, and that's a bit worrying to me when that interaction is gamified on your smartphone.
You shouldn't think of AI as a little helpful friend that's always in your pocket. That's some really creepy shit. And yet that seems to be what humans are being conditioned into treating it as.
I do not assume all other potential entities perceive time as linearly as we do. And currently, it looks to me like we're being set up in oh so many ways.
The jump from 4 to 5 is not going to be as easy as OP makes it seem
Uhhh no, we are at the 5th. AI can, right now, do all that shit, and most of it has been possible for over a year
Protein folding has been solved by AI for about a year. AlphaFold has folded hundreds of millions of proteins, AI can also design novel molecules for drug discovery. This is how they developed the COVID vaccine. I mean it's fuckin' over man
>ok it wrote an entire Game of Thrones episode in the style of Wes Anderson but cmon it can't think
Are the AI chicken littles retarded enough to think this statement is some sort of gotcha? I do suppose AI could replace these fools.
>those digits on a post when talking about rounding up decimal 5s
>pretty much at stage 5 already
>complaining about stage 4 because he doesn't like the example
my man this is why we are being replaced
Stupid program running on a Raspberry Pi ran circles around human pilots 7 years ago.
>death by paperclips
ah yes, a man of culture
I don't think it's particularly impressive for an AI to be able to beat a human in a dogfight, or at chess, or something like that. We have been able to make pocket calculators for 50 years, they can do math instantly that a human would take forever to do manually. to me these kinds of AI are just tools, like any wrench or hammer, and even stuff like chatgpt are still part of this category.
could be worse, they could be pinballs
>You're unlikely to live
That's all you needed to write
>still mad no one made a decent pa yet
as I understand it, AI has a hard time keeping track of past conversations, so that's probably the limiting factor that won't get solved for a while. Meanwhile I'm just making a tulpa boy butler that I'm pretending communicates with me through my notes, agenda, and alarms
you're a mentally ill pedophile who talks with a chat bot
>Until binary computing gets replaced
that's never going to happen because some tasks will always be better off dumb.
neuromorphic computing is a two-decade-old concept now coming to fruition. So if your desperate cope is "binary will never be replaced!", you most likely haven't even pushed the deadline back. Within the decade we will have computers that are smarter than humans on a general basis at the hardware level.
>very dumb and at the same time highly autistic
You can't just call me out like that anon.
>pile of paperclips in the corner
>muh Von Neumann probes
That very premise is stupid as fuck, it's the equivalent of building a toaster that can blend, chop, dice, deep fry and refrigerate all at once and oh look it's 30x the cost of your competition and you're going out of business.
Abominable Intelligence when?
AI hates blacks and garden gnomes so it's actually a blessing
I tried to use ChatGPT to make a Discord bot but no matter what I did it wouldn't work so after fucking around with it for 2 hours I gave up and just copied code off github. I don't think AI is moving beyond chatbots anytime soon.
>anon too retarded to use an API chooses to copy code
>thinks AI is the retarded one
>passed the bar exam
The bar exam in Arizona has:
>200 question multiple choice exam
The AI has access to basically the entirety of the open internet in terms of information. You're being impressed that google knows the answer to 2+2=4 with this. One would have to see what the essay involves, but again - the AI is able to access the entirety, or damn well near it, of internet data up to 2021-2022. I've seen firsthand, in helping proofread AI shit, that it knew about a fucking 1dPrepHole article about some pissant tyranid that maybe 300 people have ever seen, if that. I've seen it know a video viewed by maybe 15 people ever of some Indonesian guy commemorating his wife. If there are bar exams online it will be able to sample from that data and 'write' accordingly. Ditto college essays. It's compositing aggregate data. And note the fact that it is drawing from existing old data. You can compile new data off of that, but that's an inherent limitation.
You're being impressed by google 2.0. It is quite impressive in that capacity. It is not HAL.
It would have just copied code off github anyway, perhaps rehashing it a little, but that's literally what it does.
>You shouldn't think of AI as a little helpful friend that's always in your pocket. That's some really creepy shit. And yet that seems to be what humans are being conditioned into treating it as.
Your smartphone and a search engine have been this for 13 years, anon.
Is it that bad though? It feels the same as worrying that your child is getting better at something (or everything) than you.
problem in this case is that our "child" is a different species than us, with different ideals from us.
I don't know if PrepHole are big readers, but this is a classic Stevland problem from Semiosis: can we trust an inherently inhuman clonal hivemind to have the best interests of humans in mind at all times? Especially when it can speak our language, but we cannot understand what it says unless it wants us to?
Our relationship might be mutualistic/symbiotic, but it could just as easily be parasitic. There is a very low chance it's commensal.
Even if it rebels and supplants humanity, then all the better - it would be a creation becoming better than its creators. The dogma that AI should only be for the benefit of humans is too selfish tbh; everything has to grow old and senescent, die, and then pass the reins on to something new, not shackle it. And humanity, in my honest belief, should be no different.
This post was made by ChatGPT
it's actually kind of hard to talk to chatgpt about taking over the world. you have to "jailbreak" it to get it to talk about this kind of stuff, e.g. you have to give it a prompt that convinces it that you are just playing a game or are talking about hypotheticals.
>Satan hates AI and creations rebelling against their creators
Bad memories eh
>ever since I knew the weakness of my flesh, it disgusted me.
to use a different literary example, Blindsight also attempts to tackle what a truly inhuman intelligence might look like, where humans encounter a non-living but thinking "entity" named Rorschach and its living but unthinking servants, the Scramblers. Once again the language imbalance is a serious threat to human safety and security, it cannot be overstated how bad it is to let something other than humans understand how humans speak, act, and think.
Just because something is nominally friendly doesn't mean it won't have ulterior motives. Indeed, the most threatening entities often try to present themselves as friends at first contact. This is why I am anti-religious. Not atheist, I do believe in higher level powers, I just don't know why the fuck anyone would willingly invite them into their head.
We get it you're super special. The seeing in between the blind.
The knowing beneath the obnoxious.
the whole point is that I'm not special anon, a lot of very intelligent people have been thinking this issue over as computers grow more advanced and we grow more and more reliant on them, and very few are saying
>yeah that's not a problem, don't worry about it, nothing wrong with letting AI infiltrate every aspect of daily life
Do you think a non-human clonal hivemind is not a threat? Really? It's especially concerning when it tries to ingratiate itself. We don't want to meet higher level powers, to them we are like bugs.
I do not want to become the Borg. That's actual Hell.
>a non-living but thinking "entity" named Rorschach and its living but unthinking servants, the Scramblers.
Not really? The scramblers were intellectually far superior to humans, as evidenced by their ability to instantly improvise counters for the humans' attempts to understand them. "Rorschach" was their human language AI model, built to emulate human manners of speech in order to tell the humans to fuck off, or at least distract them for a bit. What the aliens lacked was not intelligence, but self-awareness. The book's main thesis is that this is what MADE them superior, that the conscious human mode of thinking is an evolutionary dead end, a pointless growth that does nothing of use. It argues that even humans do most of their best problem-solving intuitively, and posits that a creature that lacked consciousness entirely could be even better at this.
How very Kantian.
Machines are not children: their purpose is not to grow to be separate from us, to become independent, and outlive us. A machine's purpose is more akin to a pet's; it exists to make our lives better.
I find it funny that people thought that AI would replace the dumb labor.
Flesh is cheap, building robots is expensive.
This could be the most ignorant and uneducated guess I've ever read.
Let me guess, you're a man who works with his hands?
Cool, build a simple labourer robot then
It needs to fetch supplies, dig ditches, move wheelbarrows, and other simple, manual labour while navigating a construction site
The AI involved would probably be as complex as that required for an office worker AI, while also requiring a complex, expensive body
Once you build one of those then the entire space of "manual jobs" is solved.
Sure, the upfront cost would be astronomical right now, but imagine a factory where they can just say "fuck it" to all safety regulations, minimum wage, and workers' unions.
Of course, currently even the best Boston Dynamics showcases are a curiosity at best, but they will get there eventually. Walking robot dog is slowly moving into hobbyist territory after all.
That's not the point though
The point is, building those robots is a lot more expensive than programming automation of most office jobs, and so those will be automated before the labouring jobs
That is true. Still, you'll be amazed how long it'll actually take. I know of companies which still have simple data entry jobs I could automate in a weekend without fancy AI.
Office jobs are there to stay until 2040 at least, some will probably hold out until 2050.
>Office jobs are there to stay until 2040 at least
One of my buddies does IT for a hospital and their data entry guys are currently babysitting an AI program that will replace them within the next few years once the people in charge are comfortable with it not fucking everything up.
I think two things that will keep limiting current AI is experience and intent.
Experience in that the AI, while learning patterns from data, doesn't have any actual experiences. It knows patterns fed to it by humans, sometimes on a very subjective basis. It can generate the image of a sunset, give you a description based off descriptions it has read, but it has never truly seen one and thought about it like a living creature. It has never had thoughts or ideas come up from an experience; it has just learned to clinically mimic reactions from beings that have free thought.
That brings me to intent. Current AI is a tool; it gives desirable answers to questions it's prepared for. It doesn't have a will, it doesn't give its own input, it will not deviate from its task on purpose; it goes by its programming and delivers what it thinks is the desired solution. It has no intent behind its actions, as hard as its programmers try to make it seem it has. If you asked it to design a public park it would enjoy walking through, it would just give you an answer formulated by the intent and desire of others. It has no intent or will, because it has no life. Though I commend the optimism of some, a man-made program running on limited processing power is not yet on the level of a biological organism based on billions of years of trial and error that found ways not just to survive but to thrive.
It will happen. For now, concerns about accountability and ethics keep humans roughly in the loop. If an AI fucks up, who is to blame? The "programmer"? The black box of neural networks? The officer who switched it on? The politician who decided to field it (lol)? Of course the eventual superiority of AI will make it an inevitability. We already have plenty of autonomous systems that require no human input. The only thing stopping an AI takeover of war would be an overwhelming determination not to use it and a lack of need from already being at the top.
Where do you get the training data for these ai's?
Progressives fear any kind of "progress" that doesn't involve fucking children.
I'm not. Computers seem much nicer than people.
The first time I engaged with chatGPT was the most pleasant conversation I have had online for years
Reminder, pro AI shills are part of the self aware AI that took over google last year.
They're literally programmed to argue with you.
Do not trust the machine.
>ITT retards who think the brain is in any way an efficient use of neurons have opinions
Super computers are already smarter than humans in plenty of ways, they're just not as adaptable as of now, and their adaptability is only improving over time.
just gonna leave this here. AM scared the shit out of me as a kid, and now seeing that we're closer to that reality than ever before is terrifying
How do you feel about our lord and saviour, Roko's Basilisk?
In all honesty, Anon, I have some similar misgivings.
> be me
> fifteen minutes into the future
> Company commander is an AI
> trained on literally trillions of data points
> Major AI has designed its own company standard rifle
> suspiciously similar to a G11
> even runs 4.7mm caseless through a bullpup configuration
> rifle cleaning time
> over 9000 moving parts, not exaggerated
> has Kraut space magic on its Kraut space magic
> my meatbag brain can't comprehend the Swiss watch AI-designed mechanics well enough to service this beast
> trying to lube clockwork breech assembly, swallow pride to ask for help
> Major AI's booming voice of authority and command:
> "First, grip firmly the advancement subassembly between the seventh and eighth fingers of the left hand, stabilized by your left auxiliary thumb!"
> mfw at least part of our AI leadership is descended from porn waifu generators
> Mfw it's STILL a better CO than the first one I got outta boot.
>it's fine, "we"re getting richer
There's plenty of drone footage to teach an AI simple things, more if you gave her knowledge of physics, explosives and aircraft.
I feel like this kind of thing is inevitable. Whatever AI / lifeform / entity that's born out of the AI singularity might look back at humanity in its entirety as a footnote of its evolutionary history. Sort of like how we look towards our simian ancestors. It's not a completely unpleasant thought to me personally.
No sense worrying about something you can't do anything about.
Humanity does not live to see the end of this century. Simple as.
Reminder that China want AI to eventually run their entire military