Scientists at the labs had better start worrying. You will soon have nothing to offer them.
Nvidia’s CEO Just Dropped a Hard Truth: “Smart” Is About to Become Worthless.
AI can generate everything.
Only humans with discernment can decide what’s worth keeping.
In other words, all you "I am so smart" scientists are about to be worthless, since brains, knowledge, and skill will be free for anyone to use, hundreds of times faster and better than any scientist can manage.
What will matter are leadership and management skills: people who can then ask the AI what to do. They will not have to ask humans anymore.
Comments
The real challenge is to harness AI to augment natural intelligence without destroying that intelligence. Right now, the forces arrayed in the USA are likely to fail miserably. Look at what the profit motive is doing to Google search. The excellent book "Enshittification" describes the process. If the same forces are unleashed on AI, the results will be catastrophic for society, but they'll make a lot of money. So that's all that matters?
LLMs will make things worse.
It's a great study: researchers analyzed over 2 million preprints and found that scientists using LLMs posted 36-60% more papers. The gains were largest for scholars with Asian names at Asian institutions, with something like a 90% increase in preprints in some fields. After decades in which native or at least fluent English speakers enjoyed a structural advantage in scientific communication, one interpretation is that a linguistic barrier is falling.
But there's more data that complicates things. Among LLM-assisted manuscripts, higher writing complexity predicts lower publication success. A cynical read is that people who need LLMs to sound sophisticated are using AI to dress up weak science. And given where the productivity gains are concentrated in this paper, one conclusion you could draw is uncomfortably close to "Asian scholars are publishing more crap."
What challenges this read is that the same inverted signal shows up across all groups, i.e., likely native English speakers using LLMs *also* produce papers where the polish outpaces the substance. Maybe Asian scholars simply adopt AI tools at higher rates? I don't think the methods fully rule this out.
What is certain from this data is that the old shortcut, that polished prose signals careful thinking, hasn't just weakened. For LLM-assisted papers, it's totally broken (see the graph).
My takeaway is that polished writing was only ever a partial signal of underlying scientific quality. As Eve Marder once wrote, "It is not an accident that some of our best and most influential scientists write elegant and well-crafted papers." I still think this is true. But maybe high-quality writing also worked by priming reviewers to expect high-quality science. A well-written abstract makes reviewers read generously, whereas awkward phrasing makes them hunt for flaws. The writing shapes whether the science gets a fair hearing. That shortcut almost certainly favored people who sounded like insiders - which is disproportionately native or at least fluent English speakers.
I don't think we quite know yet what's going on: whether LLMs enable real scientific contributions that language barriers had been suppressing, or whether they are just enabling more low-quality output.
I'm genuinely curious: for those of you who review papers or grants, do you think LLMs are unlocking latent capacity that linguistic gatekeeping was suppressing, or are they just enabling weaker work to sound better?