
Will natural intelligence be relevant?

 Scientists at the labs had better start worrying. You will soon have nothing to offer them.


Nvidia’s CEO Just Dropped a Hard Truth: “Smart” Is About to Become Worthless.

AI can generate everything.
Only humans with discernment can decide what’s worth keeping.

In other words, all you "I am so smart" scientists are about to be worthless, since brains, knowledge, and skill will be free for all to use, hundreds of times faster and better than any scientist can manage.

What will matter are leadership and management skills: people who can than ask the AI what to do. They will not have to ask humans anymore.

Comments

Anonymous said…
Self-serving garbage focused on his stock price.

The real challenge is to harness AI to augment natural intelligence without destroying that intelligence. Right now, the forces arrayed in the USA are likely to fail miserably. Look at what the profit motive is doing to Google search. The excellent book "Enshittification" describes the process. If the same forces are unleashed on AI, the results will be catastrophic for society, but they'll make a lot of money. So that's all that matters?
Anonymous said…
Snort. Let me know when their souped-up auto-correct stops needing a training set of human knowledge.
Anonymous said…
Ah, yes. The then/than guy speaks.
Anonymous said…

LLMs will make things worse.

It's a great study: researchers analyzed over 2 million preprints and found that scientists using LLMs posted 36-60% more papers. The gains were largest for scholars with Asian names at Asian institutions, roughly a 90% increase in preprints in some fields. After decades of native (or at least fluent) English speakers enjoying a structural advantage in scientific communication, one interpretation is that a linguistic barrier is falling.

But there's more data that complicates things. Among LLM-assisted manuscripts, higher writing complexity predicts lower publication success. A cynical read now is: people who need LLMs to sound sophisticated are using AI to dress up weak science. And given where the productivity gains are concentrated in this paper, one conclusion you could draw is uncomfortably close to "Asian scholars are publishing more crap."

What challenges this read is that the same inverted signal shows up across all groups, i.e., likely native English speakers using LLMs *also* produce papers where the polish outpaces the substance. Maybe Asian scholars simply adopt AI tools at higher rates? I don't think the methods fully rule this out.

What is certain from this data is that the old shortcut – polished prose signals careful thinking – hasn't just weakened. For LLM-assisted papers, it's totally broken (see the graph).

My takeaway is that polished writing was only ever a partial signal of underlying scientific quality. As Eve Marder once wrote, "It is not an accident that some of our best and most influential scientists write elegant and well-crafted papers." I still think this is true. But maybe high-quality writing also worked by priming reviewers to expect high-quality science. A well-written abstract makes reviewers read generously, whereas awkward phrasing makes them hunt for flaws. The writing shapes whether the science gets a fair hearing. That shortcut almost certainly favored people who sounded like insiders, which disproportionately means native or at least fluent English speakers.

I don't think we quite know yet what's going on: whether LLMs enable real scientific contributions that language barriers had been suppressing, or whether they are just enabling more low-quality output.

I'm genuinely curious: for those of you who review papers or grants, do you think LLMs are unlocking latent capacity that linguistic gatekeeping was suppressing, or are they just enabling weaker work to sound better?
Anonymous said…
I see people of all ages with built-in artificial intelligence: the ones who talk in general terms, as if repeating a public service announcement. No critical thinking, just repetition, no natural intelligence.
Anonymous said…
At LANL the push for AI is to focus only on big ideas. In the end, only a few of the brightest managers will be left, whose sole job is to think and come up with a Big Idea every six months. Most working days will consist primarily of long walks spent pondering.
