More pushback against AI. There is a growing realization that AI has severe limits in what it can do and that the field is full of hype. The idea that AI is going to replace large sections of the STEM workforce is simply not true. AI could replace bureaucratic or repetitive jobs, but not jobs that require actual thinking. I think it could be a tool to aid in thinking and the creative process, but the big hope that it will "do" science and engineering, or create new products on its own, is not going to happen; in fact, it could hinder or reduce the quality of science and engineering in some cases. A lot of LANL managers and ex-managers have been saying the most naive things about AI and seem unaware of where the field is actually heading.
https://medium.com/quantum-information-review/ai-has-a-critical-flaw-and-its-unfixable-06d6a5c294d4
AI Has a Critical Flaw — And it’s Unfixable.
"AI isn't intelligent in the way we think it is. It's a probability machine. It doesn't think. It predicts. It doesn't reason. It associates patterns. It doesn't create. It remixes. Large Language Models (LLMs) don't understand meaning -- they predict the next word in a sentence based on training data."
"The widespread excitement around generative AI, particularly large language models (LLMs) like ChatGPT, Gemini, Grok, and DeepSeek, is built on a fundamental misunderstanding. While these systems impress users with articulate responses and seemingly reasoned arguments, the truth is that what appears to be 'reasoning' is nothing more than a sophisticated form of mimicry.
These models aren't searching for truth through facts and logical arguments--they're predicting text based on patterns in the vast datasets they're 'trained' on. That's not intelligence--and it isn't reasoning. And if their 'training' data is itself biased, then we've got real problems.
I'm sure it will surprise eager AI users to learn that the architecture at the core of LLMs is fuzzy--and incompatible with structured logic or causality. The thinking isn't real, it's simulated, and is not even sequential. What people mistake for understanding is actually statistical association."
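To make the article's point about "predicting the next word from patterns" concrete, here is a toy sketch of my own (not from the article, and far simpler than a real LLM): a bigram model that "predicts" the next word purely from co-occurrence counts in its training text. It has no notion of meaning or logic; it just replays whatever statistical associations the data happened to contain, which is the flavor of behavior the article is describing.

```python
# Toy illustration (my own sketch, not the article's): a bigram "language model"
# that predicts the next word purely from co-occurrence counts in training text.
# There is no reasoning here, only statistical association.
from collections import Counter, defaultdict

training_text = (
    "the sky is blue the sky is clear the grass is green "
    "the grass is wet the sky is blue"
)

# Count how often each word follows each other word in the training text.
follow_counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word and its estimated probability."""
    counts = follow_counts[word]
    total = sum(counts.values())
    if total == 0:
        return None, 0.0
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("is"))   # ('blue', 0.4): whichever pattern dominated the data
print(predict_next("sky"))  # ('is', 1.0): pure association, no understanding
```

Real LLMs replace the count table with a neural network trained on enormous corpora, but the output is still a probability distribution over the next token, which is exactly the limitation the article is pointing at.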