Announcing
AITRAP,
The AI hype TRAcking Project
Here:
https://www.poritz.net/jonathan/aitrap/
What/why:
I keep a very random list of articles about AI, with a focus on hype, ethics, policy, teaching, IP law, some of the CS aspects, etc., now up to 1000s of entries.
I decided to share, in case anyone is interested; I'm thinking of people who like @emilymbender, @alex, & @davidgerard . If there is a desire, I'll add a UI to allow submission of new links, commentary, hashtags.
Meta's Llama 4 (which is being forced on all #WhatsApp users) doesn't do any chain-of-thought reasoning and incorrectly counts the squares of one colour: it claims that a 7x7 checkerboard with one corner missing has 23 squares of one colour, making tiling impossible, but then continues for several paragraphs about possible tiling approaches.
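For the record, the correct count is easy to verify. A minimal sketch (assuming the usual colouring where a square's colour is given by the parity of row + column, and taking the removed corner to be (0, 0)):

```python
# Count the squares of each colour on a 7x7 checkerboard,
# then remove one corner square.
squares = [(r, c) for r in range(7) for c in range(7)]

# Squares where (row + col) is even are one colour ("dark" here).
dark = sum((r + c) % 2 == 0 for r, c in squares)   # 25 dark squares
light = len(squares) - dark                        # 24 light squares

# All four corners have even (row + col), so removing corner (0, 0)
# removes a dark square.
dark_after = dark - 1

print(dark, light)        # 25 24 on the full board
print(dark_after, light)  # 24 24 after removing a corner
```

So the board has 24 squares of each colour, not 23 of one colour, and since the counts are equal, the colour-parity argument does not rule out a domino tiling at all.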
About a year ago I was subbing as a para and spent some time in a middle school history class. During a break between classes the teacher showed me their latest tech addition:
An LLM program, named Poe, I believe, that was set up to only answer queries based on texts specified by the teacher.
The sales pitch had been that middle school history students would ask Poe history questions and get factual answers.
Because, of course, that's how all of this works.
The teacher showed me how it was supposed to work. She asked Poe a comparative question about the Declaration of Independence and the Articles of Confederation.
It answered correctly.
I asked her to ask for something that it shouldn't be able to find. IIRC she asked it about the topic of slavery in both.
It extruded a long and rich answer...
That was obvious bullshit.
Or at least it was obvious to her. Because she knew these documents. Her students wouldn't know any better.
And that is why, when I left her, she was angrily clicking around in Poe.
Once again, Germany will get a minister of research without the slightest clue of
- how research is done
- how research is organized
- what the current issues are
- etc.
However, she has the right party membership and has already eloquently demonstrated the ability to combine having no idea with holding a strong opinion (#blockchain).
That, and the fact that we are still in an #aihype, makes me fear the worst. Mark my words!
Have you heard about the AI 2027 forecast? I don't believe it. IMHO the least plausible part is the leap from AI coding to AI research - the story totally underestimates the unimaginably vast spaces of potentially plausible hypotheses that good researchers must use their knowledge and understanding to prune down to hypotheses to actually test. Coding agents are not going to cut it (none are available anytime soon). #aihype #GenAI #LLM #AI2027
Inspired by @Iris 's recent poll, I suppose... I’m writing up my #psych #phd thesis, and am currently looking at the methods chapter. I’m describing all the samples, procedures, measures, statistical tools and procedures I’ve used in my articles, and ethical considerations. However, although I haven’t seen this in other theses, and although nobody has told me I need to do it, I feel like including a section on «the use of #AI technologies» (read: chatGPT and other LLMs). The thing is, I’m getting the sense that this has become extremely prevalent in a very short amount of time. If nothing else, then to use it «as a brainstorming partner», or to help paraphrase sentences for clarity or fix punctuation. And the reason I want to make a statement out of this in my thesis is that I haven’t. Not one bit, in the least sense. I never wanted to, and I’m very happy I haven’t. Is this worth making a statement of in the methods chapter? How would you go about writing it? What info would you include? Do you know good examples of these kinds of disclaimers/statements in academic writing? #AIhype
Calling All Mad Scientists: Reject "AI" as a Framing of Your Work • Buttondown https://buttondown.com/maiht3k/archive/calling-all-mad-scientists-reject-ai-as-a-framing/ #AI #AIHype
MM: "One strange thing about AI is that we built it—we trained it—but we don’t understand how it works. It’s so complex. Even the engineers at OpenAI who made ChatGPT don’t fully understand why it behaves the way it does.
It’s not unlike how we don’t fully understand ourselves. I can’t open up someone’s brain and figure out how they think—it’s just too complex.
When we study human intelligence, we use both psychology—controlled experiments that analyze behavior—and neuroscience, where we stick probes in the brain and try to understand what neurons or groups of neurons are doing.
I think the analogy applies to AI too: some people evaluate AI by looking at behavior, while others “stick probes” into neural networks to try to understand what’s going on internally. These are complementary approaches.
But there are problems with both. With the behavioral approach, we see that these systems pass things like the bar exam or the medical licensing exam—but what does that really tell us?
Unfortunately, passing those exams doesn’t mean the systems can do the other things we’d expect from a human who passed them. So just looking at behavior on tests or benchmarks isn’t always informative. That’s something people in the field have referred to as a crisis of evaluation."
CAN YOU SPELL ‘ANOMIE’ WITHOUT AI ? https://starbreaker.org/grimoire/entries/can-you-spell-anomie-without-ai/index.html #AI #AIHype
One thing that makes me very happy about #NaNoWriMo crashing and burning: It's good evidence that the #writing world is not even remotely buying into the AI hype. (Unlike another industry I'm involved in...)
Having poor moderation practices that enabled grooming? That was bad for them. But promoting AI? That *killed* them. Writers are going: "AI? Fuck, no!"
Who could have predicted this?
Do LLMs Really Understand? https://www.youtube.com/watch?v=YtIQVaSS5Pg #AI #AIHype #Debate #Tech #Technology
This article solidifies why I never take these #Accessibility businessmen/professionals seriously. First of all, for an accessibility practitioner, he doesn’t understand the technology he is writing about. A longer blog post will be coming, but this is why I roll my eyes at accessibility professionals today. AI is the future of accessibility - Karl Groves https://karlgroves.com/ai-is-the-future-of-accessibility/ #A11y #AIHype #AI
"My core theses — The Rot Economy (that the tech industry has become dominated by growth), The Rot-Com Bubble (that the tech industry has run out of hyper-growth ideas), and that generative AI has created a kind of capitalist death cult where nobody wants to admit that they're not making any money — are far from comfortable.
The ramifications of a tech industry that has become captured by growth are that true innovation is being smothered by people that neither experience nor know how (or want) to fix real problems, and that the products we use every day are being made worse for a profit. These incentives have destroyed value-creation in venture capital and Silicon Valley at large, lionizing those who are able to show great growth metrics rather than creating meaningful products that help human beings.
The ramifications of the end of hyper-growth mean a massive reckoning for the valuations of tech companies, which will lead to tens of thousands of layoffs and a prolonged depression in Silicon Valley, the likes of which we've never seen.
The ramifications of the collapse of generative AI are much, much worse. On top of the fact that the largest tech companies have burned hundreds of billions of dollars to propagate software that doesn't really do anything that resembles what we think artificial intelligence looks like, we're now seeing that every major tech company (and an alarming amount of non-tech companies!) is willing to follow whatever it is that the market agrees is popular, even if the idea itself is flawed.
Generative AI has laid bare exactly how little the markets think about ideas, and how willing the powerful are to try and shove something unprofitable, unsustainable and questionably-useful down people's throats as a means of promoting growth.
(...)
In short, reality can fucking suck, but a true skeptic learns to live in it."
Despite its hopeful tone regarding LLMs, and its open speculation about whether future LLM image descriptions will be better, this is a really great breakdown of why AI is not capable of writing good image descriptions. Can generative AI write contextual text descriptions? - TetraLogical https://tetralogical.com/blog/2025/03/24/can-generative-ai-write-contextual-text-descriptions/ #AI #AltText #Accessibility #AIHype