Readings On Large Language Models

In a conversation on Bluesky, I commented that I have never found an explanation of the neural-net part of LLMs like ChatGPT that makes sense. “It works like your brain” is clearly not an explanation, since we know so little about how our brains work, but it’s a go-to for reporters who find all this heavy going.

I specifically asked for links, because a good explanation obviously is going to be longer than six 200-character posts, but one reply guy gave it his best, which was very bad. But I did get several good links back.

This post is far from an explanation of neural nets or ChatGPT. It’s some of the things I learned and links to resources that at least look useful. My understanding of these models is a work in progress.


KILLER ROBOTS ON THE MARCH!!!

Yesterday we had another AI kerfuffle.

This time it was a report that in a simulation, an AI-powered drone turned on its operator and killed them. I retweeted it because it was another example of the obvious stupidity around AI, but I didn’t say so, largely because the locus of the stupidity was not clear. It could have been in whatever was done with the simulation, or it could have been in the reporting, or in a chain of half-reports that the writer summarized. The report now has a disclaimer. Scroll way, way down to “AI – is Skynet here already?”

I do not count myself as an expert in AI, although I’m learning about it daily. It is clearly Silicon Valley’s latest claim to relevance, and they are hyping it mightily with the aid of stenographic media who understand less about it than I do.

Those of us who read Isaac Asimov’s Three Laws of Robotics when we were eight years old or so recognized that something was wrong with that report. Yes, there are problems with Asimov and with his three laws, but the need to program a death robot so that it doesn’t attack its controller/owner/whatever should be obvious, particularly to the military.

But the military gets stuff wrong, and they can be as susceptible to Silicon Valley hype as the media.

The disclaimer now says that the “simulation” was just talk. But, of course, the debunking won’t get to all the people who saw the original report. And maybe that’s not so bad. If people believe that AI is dangerous, maybe we can do something to get it under control.

Photo: The MAARS is one of three robotic, unmanned vehicles demonstrated to Soldiers from the 519th Military Police Battalion, 1st Maneuver Enhancement Brigade, Aug. 5, 2015. It is equipped with non-lethal and lethal armament. (US Army photo)

Cross-posted to Lawyers, Guns & Money

Stop Me Before I Kill Again

This is bullshit.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Back in 1975, the biology community, and the larger society, were concerned about the potential dangers of recombinant DNA. Could it be used to generate monsters or new diseases? Would people use it to interfere with human reproduction?

The biologists themselves held a conference at Asilomar, California, to assess the dangers and recommend possible mitigations. They issued a public statement summarizing their findings. Since then, they have developed and applied guidelines as new areas of research opened up.

In contrast, Our Silicon Valley Overlords want us to be worried – very worried – about their research into artificial intelligence. With their usual hype, they have loosed chatbots called AI that are very fancy autocompletes that require immense amounts of “training” on work that other people have produced. And oh yes, they can’t tell you what that training base is because you might be mean about it and point out that a great deal of what is available is sexist, racist, and classist, and those traits just might be “trained” into their wonderful creation.

If there is a hazard, show us the pathways by which it might develop, as the biologists did. Then show us how it might be mitigated. Show some serious moral purpose, in other words.

If the hazard is so great, perhaps the appropriate statement would be that these Very Principled People feel they can no longer work on it and will be leaving the field to plant a garlic farm in the Santa Clara Valley where their offices used to be.

But this is the industry that cobbles something together and leaves it to the customer to figure out how to deal with its problems.

Cross-posted to Lawyers, Guns & Money

Everything I Read On AI Is Garbage

I am sorry, but the discourse around what people are calling artificial intelligence, or AI, is so dumb that I do not see how those people get up and find the bathroom in the morning.

They fail to tell us what they understand intelligence to be – is it “learning” in the sense of making connections or memorizing things? Is it stringing words together in a persuasive way? Is it being able to back up and explain that string of words? Is it being able to use logical inference?

And those are the easy questions, before one starts to think about consciousness.


AI Can’t Write Poetry

One of the things people have asked the chatbots to do is to write poetry. I will admit to not reading every chatbot clip that comes across my timeline. Every time I read one, I can feel brain cells dying from the vacuity.

But I love poetry and probably have read more of those clips than others. So far, they have all been very bad.

We can start with a conversation on Twitter, a response to an observation that what the chatbots write is very bland. A commenter said that he had requested a poem on a pandemic in the style of T. S. Eliot’s “The Waste Land,” and it wasn’t bland at all. He shared the poem with us. Indeed, it was not bland. It was “The Waste Land” with the word “pandemic” dumped in at maybe five or six places, ending with a doggerel rhymed couplet about a plague being in the air, thus combining plagiarism with blandness.


Destroyer of Worlds

Two not entirely parallel threads this morning, on nuclear weapons and artificial intelligence.

The question came up again: did Oppenheimer really say “Now I am become Death, the destroyer of worlds”?

It’s been answered by historians, but Oppenheimer and the Manhattan Project have so much mythology attached to them that I’m sure it will be asked again.

Alex Wellerstein, one of the best historians of the Manhattan Project: Oppenheimer probably didn’t say it at the time, and the best-known source of the quote is a video made toward the end of his life.


Artificial – Not Intelligent

For the past few days, my Twitter feed (yes, it’s still there) has been cluttered with dialogs with the ChatGPT chatbot. Some are on the level of polite and trivial human conversation: the form is correct, but there’s not much fact. Where fact is called for, it is often incorrect, though presented in a correct form.

There are also several art generators available. Neither they nor the chatbots can be called “intelligent.” As far as I can tell (and I’ll be happy to hear if I’ve missed something), they are basically weighted averages of their training sets, which are very large and consist of samples of art or conversation. A part of the program also recognizes, from those training sets, questions from humans and what might be appropriate responses. The programs also add the inputs of people asking them questions to their databases.
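
To make that “predict from the training set” idea concrete, here is a minimal sketch in Python: a toy bigram model that picks each next word in proportion to how often it followed the current word in a training text. This is my illustration, not anything from the actual systems; the real chatbots are neural networks with billions of parameters rather than lookup tables, but the underlying task, predicting the next word from statistics of a training set, is the same. The tiny corpus and the complete() function are invented for the example.

```python
import random
from collections import Counter, defaultdict

# Toy "training set"; the real ones are scraped from huge swaths of the web.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word, length=8):
    """Extend `word` by repeatedly sampling a next word, weighted by
    how often it followed the current word in the training text."""
    out = [word]
    for _ in range(length):
        counts = following.get(out[-1])
        if not counts:  # current word never appeared with a successor
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(complete("the"))  # e.g. "the dog sat on the mat . the cat"
```

Scale that up from bigram counts to a deep network trained on much of the internet and you have, very roughly, the kind of system now being sold as AI.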

That is a significant achievement, although machine translation, which relies on similar operations, impresses me more, particularly for agglutinative languages.

All this is enabled by large computer memories and fast computation.
