Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Back in 1975, the biology community, and the larger society, were concerned about the potential hazards of recombinant DNA. Could it be used to generate monsters or new diseases? Would people use it to interfere with human reproduction?
The biologists themselves held a conference at Asilomar, California, to assess the dangers and recommend possible mitigations. They issued a public statement summarizing their findings. Since then, they have developed and applied guidelines as new areas of research opened up.
In contrast, Our Silicon Valley Overlords want us to be worried – very worried – about their research into artificial intelligence. With their usual hype, they have loosed chatbots, branded as AI, that are really very fancy autocompletes requiring immense amounts of “training” on work that other people have produced. And oh yes, they can’t tell you what that training base is, because you might be mean about it and point out that a great deal of what is available is sexist, racist, and classist, and those traits just might be “trained” into their wonderful creation.
If there is a hazard, show us the pathways by which it might develop, as the biologists did. Then show us how it might be mitigated. Show some serious moral purpose, in other words.
If the hazard is so great, perhaps a statement would be appropriate that these Very Principled People feel they can no longer work on it and will be leaving the field to plant a garlic farm in the Santa Clara Valley where their offices used to be.
But this is the industry that cobbles something together and leaves it to the customer to figure out how to deal with its problems.
Since joining Bluesky, I’ve learned more about developers (coders, programmers, tech bros, whatever they’re calling themselves these days) than I ever imagined.
Let me make clear, up front, what I expect from a Twitter replacement: a forum like Twitter but without the Nazis and other moderation problems. I do not want to moderate a server, nor do I want to have to put up with some of the nonsense that goes on at Mastodon.
The first day I was on Bluesky, three or four weeks ago, it was mostly developer-speak. The next day, a large number of users arrived, and the developer-speak was drowned out in a sea of body parts and shitposting. Everyone was giddy not to have to deal with the mess that Elon Musk has been exacerbating. It was fun, but the sort of thing that ages quickly. The wave of euphoria crested. Now people are trying to figure out what comes next.
I have a new article in Foreign Policy. I write about the reasons that Russia and the United States might go back to nuclear explosive testing and suggest that we might let nuclear weapons decay out of existence.
That suggestion has some historical precedent. In 1989, Carson Mark, at that time the director of weapons design at Los Alamos and earlier a designer himself, proposed with others that a way to decrease the nuclear arsenal could be to stop producing tritium. Tritium is a component of nuclear weapons that boosts their yield and whose 12-year half-life requires its regular replacement. The dissolution of the Soviet Union overtook the arms control negotiations to which Mark contributed his suggestion.
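The decay arithmetic behind that replacement requirement can be sketched in a few lines. This is illustrative only, not from the article; it uses 12.3 years, the commonly cited value for tritium's half-life, which the post rounds to 12:

```python
# Sketch of why tritium must be regularly replaced: exponential decay
# with a half-life of about 12.3 years (the post's "12-year" figure).
HALF_LIFE_YEARS = 12.3

def fraction_remaining(years: float) -> float:
    """Fraction of an initial tritium inventory left after `years`."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

# After one half-life, half remains; after two, only a quarter.
print(f"{fraction_remaining(12.3):.2f}")  # 0.50
print(f"{fraction_remaining(24.6):.2f}")  # 0.25
```

So a weapon whose tritium is never topped up loses roughly half its boost-gas inventory every dozen years, which is why halting tritium production would, on its own, gradually degrade an arsenal.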
Now there are concerns about the plutonium parts in nuclear weapons. Modernization efforts will include plutonium replacement. Other parts of the weapons, like the conventional explosives, age as well.
The US is trying, with great difficulty, to restart production of plutonium parts, and Russia will be strapped for funds after the Ukraine war ends. It’s not possible to negotiate an arms control agreement with Russia now, but an agreement that lessens the need for modernization may be attractive to a poorer Russia.
Last night’s takedown of six of Russia’s “unstoppable” Kinzhal missiles should also help to change Russian thinking. Cold War concepts of deterrence and nuclear warfighting are obsolete. We all need to rethink them.
Image is the header on the Foreign Policy article. It’s a French atmospheric test from 1971.
Over at Lawyers, Guns & Money, Rob has provided material to read in preparation for the Ukrainian offensive. This is more of a situation report.
The Ukrainian government holds its plans for the offensive very close. They apparently are not sharing them even with the US government. So nobody outside of Ukraine knows what is going to happen, no matter what any rando bluecheck may claim.
Russia has been expending missiles on Kyiv since the “attack” on the Kremlin by a hobbyist-type drone carrying a firecracker’s worth of explosives. Ukrainian air defense has been quite effective, and a Patriot took down a Russian Kinzhal missile, one of Russia’s supposedly super weapons introduced by Putin along with a couple of other things that didn’t pan out. It looks like Russia’s supply of missiles is running down, along with other equipment.
The May 9 parade in Moscow is reported to have included one (1) tank, an antique. However, antiques are being mobilized to Ukraine. A number of military experts say that Russia will have to mobilize more men soon, but there aren’t many signs of that.
Public opinion in Russia seems to be softening on support for the war, but it hasn’t turned against Vladimir Putin.
After a two-week orgy, Bluesky is settling down to the problems of being a social medium. Not solving those problems yet, but defining a problem is the first step to solving it.
Bluesky looks to me like the most likely successor to Twitter, if they can solve their problems. The others have stagnated, for their various reasons. The network – the people on the app – is the most critical factor, and Bluesky did a good job on that starting off. But there are next steps.
Bluesky’s intention is to be a better Mastodon – distributed over servers/instances, with distributed moderating. What Mastodon has gotten wrong is its intimidating signup, which demands that you choose a server before you have any idea of what a server is. They have now said that they will make that process easier, but I am not clear whether that has happened yet.
Pamela Paul is standing up for MERIT in scientific publishing. Of course, she doesn’t know what she’s talking about, but her friends in the Intellectual Dark Web gave her a convenient press release to work from.
My colleagues who publish in professional journals have mostly responded to Paul, rather than to the paper and press release she is working from. The paper is inappropriate for the one journal she mentions, the Proceedings of the National Academy of Sciences, because the PNAS publishes short technical papers, and this is a long polemic.
I’ve thought that scientific journals could benefit from publishing more polemics, but polemics on chemical and other scientific issues. That’s not what this paper is about. It is about practices in journal publishing that the authors disapprove of. They frame their polemic in terms of merit versus identity.
I am sorry, but the discourse around what people are calling artificial intelligence, or AI, is so dumb that I do not see how those people get up and find the bathroom in the morning.
They fail to tell us what they understand intelligence to be – is it “learning” in the sense of making connections or memorizing things? Is it stringing words together in a persuasive way? Is it being able to back up and explain that string of words? Is it being able to use logical inference?
And those are the easy questions, before one starts to think about consciousness.
One of the things people have asked the chatbots to do is to write poetry. I will admit to not reading every chatbot clip that comes across my timeline. Every time I read one, I can feel brain cells dying from the vacuity.
But I love poetry and probably have read more of those clips than others. So far, they have all been very bad.
We can start with a conversation on Twitter, a response to an observation that what the chatbots write is very bland. A commenter said that he requested a poem on a pandemic in the style of T. S. Eliot’s “The Waste Land,” and it wasn’t bland at all. He shared the poem with us. Indeed, it was not bland. It was “The Waste Land” with the word “pandemic” dumped in at maybe five or six places, and it ended with a doggerel rhymed couplet about a plague being in the air, thus combining plagiarism with blandness.
The United States is offering grants and tax credits to help develop technology to remove carbon dioxide from the air. This is the most difficult way to deal with carbon dioxide, the largest contributor to global warming.
Carbon dioxide currently makes up 412 parts per million of the Earth’s atmosphere. In 2000, it was 370 parts per million.
The mass of the atmosphere is 5.1480 × 10¹⁸ kilograms. Treating the ppm figures as mass fractions for a rough estimate, the atmosphere contained 1905 × 10¹² kg of carbon dioxide in 2000 and contains 2121 × 10¹² kg now. To get back to 2000 levels, which were already well above what we had before the Industrial Revolution, we would have to remove 216 × 10¹² kg.
The biggest plant to date captures 4000 tonnes (4 × 10⁶ kilograms) per year. We would need 54 million such plants to remove the excess over the year-2000 level in a single year. Spread the removal over more years and you can make do with fewer plants, but don’t forget that we are adding to the total every year.
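The arithmetic above can be reproduced in a few lines. Note that, following the post's numbers, this treats ppm directly as a mass fraction, which is a simplification: atmospheric ppm is usually measured by volume, and converting properly would raise the CO₂ mass figures by roughly the ratio of molar masses (about 44/29).

```python
# Back-of-envelope reproduction of the post's carbon-removal arithmetic.
ATMOSPHERE_KG = 5.1480e18  # total mass of Earth's atmosphere, kg

# Treat ppm as a mass fraction (the post's simplification).
co2_2000 = 370e-6 * ATMOSPHERE_KG  # ~1905 x 10^12 kg
co2_now = 412e-6 * ATMOSPHERE_KG   # ~2121 x 10^12 kg
excess = co2_now - co2_2000        # ~216 x 10^12 kg to remove

# Biggest direct-air-capture plant to date: 4000 tonnes/year.
PLANT_CAPACITY_KG_PER_YEAR = 4e6
plants_needed = excess / PLANT_CAPACITY_KG_PER_YEAR

print(f"excess CO2: {excess:.3e} kg")
print(f"plants needed to remove it in one year: {plants_needed / 1e6:.0f} million")
```

The point of the exercise survives the simplification: whichever way you count the molecules, the gap between today's CO₂ level and the year-2000 level dwarfs current direct-air-capture capacity by seven orders of magnitude.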
Plus, the captured carbon dioxide has to be sequestered somewhere so that it can’t get back into the atmosphere.
We need everything we can do to decrease atmospheric carbon dioxide. The bills passed by Congress last year do more than has ever been done before to deal with global warming. We need more. Which is yet one more reason to vote Republicans out of office. We can’t afford their culture war distractions.