AI: What's it good for?

Started by QDRPHNC, February 08, 2024, 03:12:48 PM


touchingcloth

Quote from: jamiefairlie on February 08, 2024, 07:03:15 PMYeah but the more we understand of the human ecosystem the more it seems that other systems outside the brain are also involved e.g. the gut microbiome effect on emotions

That's not to say it's impossible to map but extremely complex

This is a good point. The AI boosters are incentivised to point out the ways that humans are like computers, and to downplay the ways in which computers are not like humans. I think it plays into their hands to assume that their software is on a sure path to becoming equivalent to us.

jamiefairlie

Quote from: touchingcloth on February 08, 2024, 07:51:03 PMThis is a good point. The AI boosters are incentivised to point out the ways that humans are like computers, and to downplay the ways in which computers are not like humans. I think it plays into their hands to assume that their software is on a sure path to becoming equivalent to us.

I think at the end of the day we are the same, just atoms, so not fundamentally different.

The thing is that for all intents and purposes we don't need actual AI, we just need smart algorithms and tonnes of good data. The output of those will be good enough to function as AI for most people and purposes.

Socially, I do believe mass unemployment will soon be endemic and some form of UBI will have to be implemented.

ZoyzaSorris

Quote from: Alberon on February 08, 2024, 06:01:20 PMOh yeah, but you can bet many people will make that argument.

I'm not convinced being biological has anything to do with it. My own thought on the subject is that it doesn't matter what the neurons are made of; it's the connections between them that matter.

The connections in organic nervous systems are not just simple binary ones though, they have levels of chemical complexity we are only just starting to grasp. This is one of those rare moments where I agree with JF. Sentience is an emergent property of the insanely complex biochemical peculiarities of an organic brain-body synergy, and can't be replicated digitally. I'd bet a lot on that.

ZoyzaSorris

Though I do think AI will eventually be as smart as us, but only by making us thick.

TrenterPercenter

Quote from: touchingcloth on February 08, 2024, 05:01:50 PMIs scaling both of those things sufficient to make an AI that is the equivalent of a human?

I wouldn't have thought so, but the area of machine learning is largely dominated by mathematicians with little understanding of human psychology.

QuoteIf it's not the equivalent of a human that we're after, then to what end are we pursuing AI given that computers have long exceeded human capabilities in certain domains, e.g. pocket calculators beating mental arithmetic.

Because it isn't AI.  The truth is we don't understand consciousness fully; we understand things about it, and much more than we did 40 years ago, but we are very far from any practical understanding of it.  We don't even fully understand how the non-conscious parts of the brain work (or rather, the parts we presume do not need consciousness to operate).  It's all just tech-bro, Roganesque science fiction.

We are pursuing machine learning because, as a technology, its practical applications are immense.


bgmnts

Quote from: ZoyzaSorris on February 08, 2024, 08:56:55 PMThough I do think AI will eventually be as smart as us, but only by making us thick.

Students are using AI to do their work for them, it seems: https://www.theguardian.com/technology/2024/feb/01/more-than-half-uk-undergraduates-ai-essays-artificial-intelligence

Those using AI are presumably in uni for monetary or career reasons, or just because it's the done thing, rather than to improve their critical thinking skills or depth of knowledge, so it might not be making smart people dumber, but still...

Alberon

Quote from: ZoyzaSorris on February 08, 2024, 08:55:29 PMThe connections in organic nervous systems are not just simple binary ones though, they have levels of chemical complexity we are only just starting to grasp. This is one of those rare moments where I agree with JF. Sentience is an emergent property of the insanely complex biochemical peculiarities of an organic brain-body synergy, and can't be replicated digitally. I'd bet a lot on that.

I don't think neural networks will be run on computers like we have today. I think they will have to be physically built, but not with organic materials. I don't think there's any organic process that cannot be replicated using non-organic materials.

Sebastian Cobb


Mister Six

My writing pal and I use some AI thing to transcribe our calls, and the summaries it gives afterwards are often spookily accurate, including the titles of the headings/chapter breaks it inserts when we change subject.

So that. And making really cool pictures for fun. I've seen some awesome psychedelic-but-realistic pictures that I don't think you could get from any other source.

But obviously the cons of a handful of companies making $$$ putting designers and artists out of work while ripping off their art outweigh that.

the hum

Quote from: Alberon on February 08, 2024, 05:20:02 PMIn principle you could have specialised neural nets for specific tasks. If a true General AI is made equal to, or above, a human intellect then that opens a whole can of worms about sentience and rights.

My own personal feeling is that if you have an AI neural net modelled on a human brain, then it is human, as it would see the universe in the same specific way that we do. But the whole issue will get very messy, as many will claim AI doesn't have a soul or is only mimicking sentience (though if it is doing that perfectly then surely it is as sentient as we are, assuming we are in the first place).

Language like this is part of the problem. You're going by *feels* rather than anything the technology is actually doing/capable of. The stuff you're describing is unscoped and unscopable in the current technological paradigm. The human capacity for beguilement in the face of technology's mimicry abilities is strong. You only have to look at things like the Tamagotchi craze or chatbots more generally. Then there's the famous story about the secretary of the developer of ELIZA, the world's first chatbot created in the 60s, attributing human-like feelings to the program. As the computer scientist Timnit Gebru puts it, LLMs are "stochastic parrots"; you can believe they're developing intelligence all you want, but they're not. That does though at least go some way to explaining why so many AI believers appear to be developing a religious fervour. Rather like the UFO phenomenon, they *want* to believe.

Calling it AI at all is really quite irresponsible, as it merely serves to feed the frenzy. And claiming that it leads inexorably to a conscious entity is a bit like claiming that the speed of light is not immutable on the basis that powered flight is a thing.
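The "stochastic parrot" point can be made concrete with a toy sketch. To be clear, this is not how LLMs actually work (they use neural next-token prediction over enormous corpora, not a bigram lookup table; the training sentence here is invented for illustration), but even this crude Markov-chain version produces fluent-looking word sequences with no mind behind them:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which words follow which in the training text."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def parrot(table, start, length=8, seed=0):
    """Emit words by sampling observed successors - statistics, not understanding."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = table.get(out[-1])
        if not successors:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(successors))
    return " ".join(out)
```

Every transition in the output is copied from the training data, so the result looks locally plausible; scale the same basic idea up by many orders of magnitude and you get text that is very hard not to read a mind into.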

Alberon

What is called AI now is anything but. A true General AI is not going to run on a standard computer, no matter how fast it is.

What I'm talking about is the neural net computers that are very much in their infancy. I don't believe in a soul or any mystical component in our sentience (if indeed we're truly conscious at all). Therefore it should be possible to replicate, and eventually exceed, human brain capacity using artificial means.

touchingcloth

Quote from: the hum on February 08, 2024, 11:12:38 PMLanguage like this is part of the problem. You're going by *feels* rather than anything the technology is actually doing/capable of. The stuff you're describing is unscoped and unscopable in the current technological paradigm. The human capacity for beguilement in the face of technology's mimicry abilities is strong. You only have to look at things like the Tamagotchi craze or chatbots more generally. Then there's the famous story about the secretary of the developer of ELIZA, the world's first chatbot created in the 60s, attributing human-like feelings to the program. As the computer scientist Timnit Gebru puts it, LLMs are "stochastic parrots"; you can believe they're developing intelligence all you want, but they're not. That does though at least go some way to explaining why so many AI believers appear to be developing a religious fervour. Rather like the UFO phenomenon, they *want* to believe.

Calling it AI at all is really quite irresponsible, as it merely serves to feed the frenzy. And claiming that it leads inexorably to a conscious entity is a bit like claiming that the speed of light is not immutable on the basis that powered flight is a thing.

I've mentioned it before, but one of Gebru's co-authors on the stochastic parrots paper said that

QuoteWe've learned to make machines that can mindlessly generate text, but we haven't learned how to stop imagining the mind behind it.

https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html

Weizenbaum, who created ELIZA in the mid-60s, said of it

QuoteWhat I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

I'm trying to teach myself to be mindful with the language I use around all of this stuff because of things like that. "Intelligence", "learning", "thinking", "understanding" - they're all good analogies for what "AI" tools do, but they imply too much of a mind that isn't yet there.

Alberon

One of the most striking things for me about the current machine learning systems is how good they are at processing and mimicking language without there being anything conscious there at all.

the hum

Quote from: touchingcloth on February 08, 2024, 11:23:25 PMI've mentioned it before, but one of Gebru's co-authors on the stochastic parrots paper said that

https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html

Ah yes, Emily Bender is great on this subject too. Emile Torres' work in this area is well worth checking out too. He really turned my head last year - in the past I'd accepted most of this kind of futurist lore largely uncritically.

QuoteWeizenbaum, who created ELIZA in the mid-60s, said of it


Not unlike many people's instinctive reactions to lights in the sky.

Berries

I'm a lifeguard.

Not getting too worked up about losing my job to R2D2.

the hum

Quote from: Alberon on February 08, 2024, 11:27:43 PMOne of the most striking things for me about the current machine learning systems is how good they are at processing and mimicking language without there being anything conscious there at all.

They're made by humans, and as it turns out they require continuous human intervention to prevent them churning out junk. Much of that intervention amounts to slave labour.

Of course there are genuine possibilities for doing good with this tech, but the more you peer inside the more unremarkable the mechanics of it become.

Bum Flaps

The recent success of LLMs etc. at doing a good enough job in certain circumstances suggests to me that the underlying tasks are not actually as demanding as we first imagined. That is, we humans have managed to thoroughly obfuscate our interactions with a load of superfluous chaff/undergrowth which, if one recognises the patterns, can be stripped away, revealing the simple framework underneath. All without a single drop of sentience.

If so, then maybe what the recent success of AI is really telling us is quite how trivial much of our species' vaunted achievements have been.

Bit bleak, that. Going to cheer myself up by getting DALL-E2 to generate some Gregg Wallace's Saturday images.

touchingcloth

Quote from: the hum on February 08, 2024, 11:34:16 PMin the past I'd accepted most of this kind of futurist lore largely uncritically.

So had I - I took the development of AGI as a given, and was more on the doomer than the optimist side. Prior to that I was already a crypto sceptic, though, so it didn't take too long for me to see much of current AI being similar hype and grifts.

TrenterPercenter

Quote from: the hum on February 08, 2024, 11:12:38 PMThen there's the famous story about the secretary of the developer of ELIZA, the world's first chatbot created in the 60s, attributing human-like feelings to the program.

ELIZA was mostly known for its script DOCTOR, a computerised therapist that delivered Rogerian counselling.  Rogerian counselling is based on the premise that the answers to an individual's problems are located within the individual and not the therapist, whose job is simply to prompt the individual towards their own decisions.

This was perfect for what was an early-days chatbot: it didn't need much stochastic processing, as interactions just needed "parroting back".

User: I worry about my future
ELIZA: what is it about your future you are worried about?
User: getting older
ELIZA: how does getting older make you feel worried?
User: that someday I won't be able to look after myself
ELIZA: What parts of looking after yourself do you worry about the most?

It largely just repeated the user's statements back as questions, something that was completely appropriate for this style of non-directive counselling.  It was, however, powerful in creating a pseudo-therapeutic relationship because of its naturalistic language. It's a bit like pareidolia: we have evolved seemingly hard-coded (sorry, mixing up metaphors here) pattern recognition systems in the brain for survival, and language is no different.

You cannot even attempt to understand "AI" without a very advanced understanding of human psychology. The majority of people buying into, propping up and downright exploiting "AI" come from a computing or neurobiological/evolutionary psychology approach. These aren't wrong, but they exist within the same paradigm of deterministic cognitivism - and cognitivism also isn't wrong per se, it's just only part of the story. The epochs of psychology's short existence are: trans-theoretical (innate psychological drives) --> behaviourism --> neurophysical --> cognitivism, and, where we are going now, emotions (affectivism?).  You absolutely have to understand why these eras waxed and waned and what their limitations were before you can really talk about AI in any serious manner (which is that it doesn't exist, and theories about why it should exist are nonsense).

Psychology suffers from the same problem physics does: the gap between the microscopic and the macroscopic. The only way to really attend to this is via interdisciplinary perspectives, but there is resistance to that; wanting AI to exist while not understanding why it cannot is a good example.

Alberon

Google engineer Blake Lemoine went bonkers just a couple of years ago having decided that the company's AI software had become sentient.

touchingcloth

Quote from: TrenterPercenter on February 09, 2024, 09:58:18 AMELIZA: I worry about my future
User: what is it about your future you are worried about?
ELIZA: getting older
User: how does getting older make you feel worried?
ELIZA: that someday I won't be able to look after myself
User: you're reading a magazine. You come across a full-page nude photo of a girl.
ELIZA: is this testing whether I'm a chatbot, or ELIZAbian?

touchingcloth

Another piece about some of the risks, the material and climate effects in particular - https://disconnect.blog/ai-is-fueling-a-data-center-boom/

QuoteLast month, OpenAI CEO Sam Altman made a rare admission of what his future entails. Speaking to Bloomberg at the World Economic Forum, he acknowledged "we still don't appreciate the energy needs of this technology." The amount of energy needed to power his vision for AI would require an "energy breakthrough" that he had faith (not proof) would come, and in the meantime we could rely on "geoengineering as a stopgap."

There's a lot of that faith-without-proof attitude about when it comes to AI itself, so it's interesting seeing it pushed one step out, onto the effects and prerequisites of AI.

ZoyzaSorris

Quote from: Alberon on February 08, 2024, 11:21:19 PMWhat is called AI now is anything but. A true General AI is not going to run on a standard computer, no matter how fast it is.

What I'm talking about is the neural net computers that are very much in their infancy. I don't believe in a soul or any mystical component in our sentience (if indeed we're truly conscious at all). Therefore it should be possible to replicate, and eventually exceed, human brain capacity using artificial means.

Well, that is a lot more plausible to me than any kind of genuine intelligence from digital technology, which I think is a certain bet not to be possible. I still think that the properties of organic materials are this universe's best and possibly only way of building sentient brains, but this is obviously purely informed speculation at this point.

Zero Gravitas

#53
Bump as OpenAI have just announced Sora, a very impressive video generating network.

https://openai.com/sora

Better get on that fusion power soon.

QDRPHNC

Fucking hell that looks brilliant.

Zero Gravitas

Hard to comprehend the decades of video and gigawatts of power that went into storing parallax, lens barrelling and the way reflections work on darkened glass - all with temporal consistency.



For it to produce this insanity of hair blobs and headphones plugged into disembodied wrists:


touchingcloth

Quote from: Zero Gravitas on February 15, 2024, 08:13:49 PMHard to comprehend the decades of video and gigawatts of power that went into storing parallax, lens barrelling and the way reflections work on darkened glass - all with temporal consistency.



For it to produce this insanity of hair blobs and headphones plugged into disembodied wrists:



Meh. All of this AI stuff is already starting to feel lifeless in predictable ways. I'm past the uncanny valley with images and text now, and while those videos would have impressed me a year ago they're just...boring.

It feels a bit like late-stage Flash when there were millions of knock offs of fucking Salad Hands.

BlodwynPig

Bored of AI. Endless, mindless, salivating, lazy

Senior Baiano

That video's fucking boring, thought AI video was meant to be about showing women you resent for romantically rejecting you or being senior to you at work getting bukkaked by tramps

Retinend

What it comes down to is that AI is good at generating the conventional.

It tends to go for styles of speech or imagery for which there is a lot of precedence. It is good at generating the visual and verbal background noise of our daily lives.

I work in marketing, with a focus on translation, and already, much of the text and image of a typical company website is amenable to being co-created by AI tools. Corporate-sounding texts can be AI-generated. Corporate-looking images can be AI-generated. Texts can be instantly translated. It sounds like it ought to be scary and job-threatening, but in practice it's not.

Because using an AI tool is not simple - you have to tell it what you want, and you have to intervene in order to steer it away from the conventional and to inject a bit of emotion here, a bit of asymmetry there. Nothing comes out fully-formed. You have to say "I want a paragraph here about x" - you need to add facts that the machine can't generate, because it doesn't know them. You also need to stop the machine from offering reiterations of the same idea again and again. You have to scrutinize the base meanings of each sentence and equip the errant ones with a purpose, when they waffle on without any consequence. Or you kill them.

You often just need the AI tool to give you a placeholder, from which you can begin the process of reworking the thing entirely, word by word, or visual element by visual element as the case may be. This is very often true of AI translation (via DeepL), where the technically "most accurate" translation often lacks the je-ne-sais-quoi that a slightly "inaccurate" translation can give. For instance, a highly accurate translation might end up being cumbersome and overly formal, whereas a slightly more inaccurate translation (which you, as the human, must choose) often sounds far better.

So I welcome our AI overlords, who are in fact our AI lapdogs, and I enjoy the liberation from soul-destroying busywork it has given me.