1 Comment
awbvious

Isolation is always the problem. It's the same as when Facebook went heavy into machine learning, long before their recent, odious chatbots that do not reveal themselves as non-human. Their algorithms drove people into fringe groups, pulled them out of human interaction, radicalized them. A lot of it was fake too: it might have been created by a human, but it was still fake. Some human in some other country, possibly chained to a computer, doing that between pig-butchering scams. Because the motive was profit (or election interference, for later profit), the result was similarly bad. It just couldn't be scaled as much, because even with scripts and some automation there was still quite a bit of real human labor needed, and Facebook's manipulation algorithms were only evil-henchman level, not yet demonic-eldritch level.

The problem is very simple. We evolved to be social. Because all humans evolved generally the same way, together, this desire to be social has worked out pretty well: mostly you'd be interacting with others who wanted to collaborate in a non-zero-sum way and benefit from helping each other. Tech evolution is obviously faster than natural evolution, and AI is controlled by corporations that are not interested in non-zero-sum collaboration. They are interested in exploitation. They are the sociopaths that do exist among humans, but sociopaths are a minority, such a minority that they don't dissuade humans from being social. Suddenly they can be everywhere, you can be forced to interact with them, and your brain, which evolved to interact with humans that are generally not out to be parasites, is easy pickings! Yay!

"

That’s because LLMs are not intelligences.

"

Maybe. But I think the problem isn't so much the limitations of LLMs as the fact that they are developed and run for profit. Were it not for that profit motive, some of the most problematic "bugs" might not exist.

I read recently that the reason AI hallucinates is that training gives a positive weighting to an answer, any answer, but no positive weighting to "I don't know." Why is that? Maybe it's to make people engage with it more. Some might defend this choice as necessary to improve the product: if people dismiss it, they'll never use it enough to give the feedback needed to make it better. But I don't believe it's just about improving the product. I think they could have it "guess" and say it's a guess, and people could tune it and get it better that way. I think they could have made responses come out as either "I'm not 100% sure, I have no citation for this, but ... I'm about 10% confident because of this inference based on [this] data I have" or "I'm 100% confident on this; here's a quote from the New England Journal of Medicine '...', page 25 of [this] study, third paragraph." Basically, show their work. I bet they could have made it able to do that.
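The incentive story is easy to make concrete. Here's a minimal sketch of the scoring argument (hypothetical numbers, not any lab's actual training objective): if the grader awards a point for a correct answer and nothing for a wrong one or for "I don't know," guessing always has non-negative expected value, so a model tuned against that grader learns never to abstain. Penalize confident wrong answers and abstaining becomes the best move below a confidence threshold.

# Hypothetical scoring sketch: why "any answer" beats "I don't know"
# when wrong answers cost nothing. Illustrative numbers only.

def expected_score(confidence, right=1.0, wrong=0.0, abstain=0.0):
    """Expected score of guessing at a given confidence, vs. abstaining."""
    guess_ev = confidence * right + (1 - confidence) * wrong
    return guess_ev, abstain

for conf in (0.1, 0.5, 0.9):
    no_penalty, _ = expected_score(conf, wrong=0.0)   # status quo: wrong costs 0
    penalized, _ = expected_score(conf, wrong=-1.0)   # wrong answers cost a point
    print(f"confidence {conf:.0%}: guess EV {no_penalty:+.2f} with no penalty, "
          f"{penalized:+.2f} with penalty (abstaining is always 0.00)")

# With no penalty, guessing has EV >= 0 at every confidence, so "I don't
# know" never wins. With wrong = -1, guessing at 10% confidence has
# EV -0.80, so abstaining wins; the crossover sits at 50% confidence
# (in general, abstain when confidence < penalty / (right + penalty)).

A grader like the second one is exactly what "guess and say it's a guess" would reward: the penalty sets the confidence threshold where admitting uncertainty beats bluffing.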

But they probably don't want to show sources because a lot of those sources are probably copyrighted, and they are trying to sell a service. So regardless of whether hallucinating is more or less engaging, revealing their copyrighted sources might lead to them being forced to shut the service down, which would clearly mean no engagement at all. Either way, I think it's just the typical drug of choice for Silicon Valley: growth. They will sell their souls and their firstborn children for growth statistics to sell to VCs pre-IPO and plebs post-IPO. So they built it to be dishonest. And, apparently, deadly.
