5 Comments
David Piepgrass

> America will lose its competitive edge on this technology without legislation outlawing unconstitutional uses

I had to do a double take, since anything unconstitutional is already unlawful by definition. There are, granted, certain parts of the constitution that have no mechanism to enforce them, such as the emoluments clause, which I understand Trump has violated since day 1 of term 1.

Greg

I did a triple take, because I could find nothing in the article—much of which made good sense to me—that supports the conclusion claimed in the lede. Perhaps the argument is that “AI companies will compete less successfully for fear of something negative from Trump,” but that’s pretty speculative and again, not supported by the article.

David Piepgrass

I thought the Administration already used the nuclear option by declaring Anthropic a "supply-chain risk". But, I admit, the lede made me disinclined to actually read the article, so I stopped after a brief skim of the upper portion...

Harley "Griff" Lofton

To be honest I am totally ambivalent about AI. I love what it has done to improve my searches on the internet, I love what it has done to improve my online tax filing, and I am entertained by the creative manipulation of imagery. I believe AI could already do a better job electing representatives and writing laws than what we have now in the Federal and state governments. Would AI have EVER chosen Donald Trump as President? AI could probably write better legislation. AI would probably do better at the Supreme Court than what sits there now.

But the environmental impact of AI may become unsustainable for consumers, who may be required to surrender their share of electricity and water in order to maintain a consistent supply for AI centers. Can AI really be trusted with "unmanned" anything, let alone weapons of mass destruction?

What we need with regard to legislation around AI and what is politically possible are entirely different things, and what we will actually get is yet something else. Bipartisanship is in short supply just now. Even if a bipartisan bill is hammered out, there will likely never be the votes necessary to override a veto. Perhaps including a provision that rebrands the technology as "TrumpAI" or some such and pays him a licensing fee might get his signature, because he would no longer care what was in the bill.

The Defense Production Act has been deployed by previous administrations in fairly nondefensive ways, and our imperial presidents of both parties are always loath to surrender anything that diminishes their power. At least in this case, wrongheaded as they are in going about it, there actually is a connection with national defense.

Russ

My PhD dissertation in 1971 was very early AI, similar to facial recognition in terms of its underlying approach. Nobody, including me, really understood what it was. Large language models came along much later.

AI in general is best viewed as a tool of the mind, not a replacement for the mind. My former colleague Lou Goldberg wrote a seminal article around 1970 entitled "Man vs. Model of Man", which showed that experts using a model of their own judgment did better than either the model itself or their unaided judgment. But like all technology, AI is amoral in and of itself. It is how people use it that determines its morality.

I use large language models to gather information. I had been a paid subscriber to ChatGPT for some time when this fiasco occurred. I dropped my paid subscription and now have one with Claude. As a humorous aside, I use it for getting ideas for mixed drinks. Having tended bar in college, I have a long-standing interest in the subject. Claude does a better job at this than ChatGPT or my unaided mixology, so I am a happy camper.