
Will natural intelligence be relevant?

Scientists at the labs had better start worrying. You will soon have nothing to offer them.


Nvidia’s CEO Just Dropped a Hard Truth: “Smart” Is About to Become Worthless.

AI can generate everything.
Only humans with discernment can decide what’s worth keeping.

In other words, all you "I am so smart" scientists are about to become worthless, since brains, knowledge, and skill will be free for all to use, hundreds of times faster and better than what any scientist can do.

What will matter will be leadership and management skills: those who have them can then ask the AI what to do. They will not have to ask humans anymore.

Comments

Anonymous said…
Self-serving garbage focused on his stock price.

The real challenge is to harness AI to augment natural intelligence without destroying that intelligence. Right now, the forces arrayed in the USA are likely to fail miserably. Look at what the profit motive is doing to Google search. The excellent book "Enshittification" describes the process. If the same forces are unleashed on AI, the results will be catastrophic for society, but they'll make a lot of money. So that's all that matters?
Anonymous said…
snort. let me know when their souped-up auto-correct stops needing a training set of human knowledge.
Anonymous said…
Ah, yes. The then/than guy speaks.
Anonymous said…

LLMs will make things worse.

It's a great study: researchers analyzed over 2 million preprints and found that scientists using LLMs posted 36-60% more papers. The gains were largest for scholars with Asian names at Asian institutions, something like a 90% increase in preprints in some fields. After decades of native or at least fluent English speakers enjoying a structural advantage in scientific communication, one interpretation is that a linguistic barrier is falling.

But there's more data that complicates things. Among LLM-assisted manuscripts, higher writing complexity predicts lower publication success. A cynical read is that people who need LLMs to sound sophisticated are using AI to dress up weak science. And given where the productivity gains are concentrated in this paper, one conclusion you could draw is uncomfortably close to "Asian scholars are publishing more crap."

What challenges this read is that the same inverted signal shows up across all groups, i.e., likely native English speakers using LLMs *also* produce papers where the polish outpaces the substance. Maybe Asian scholars simply adopt AI tools at higher rates? I don't think the methods fully rule this out.

What is certain from this data is that the old shortcut (polished prose signals careful thinking) hasn't just weakened. For LLM-assisted papers, it's totally broken (see the graph).

My takeaway is that polished writing was only ever a partial signal for underlying scientific quality. As Eve Marder once wrote, "It is not an accident that some of our best and most influential scientists write elegant and well-crafted papers." I still think this is true. But maybe high-quality writing also worked by priming reviewers to expect high-quality science. A well-written abstract makes reviewers read generously, whereas awkward phrasing makes reviewers hunt for flaws. The writing shapes whether the science gets a fair hearing. That shortcut almost certainly favored people who sounded like insiders, who are disproportionately native or at least fluent English speakers.

I don't think we quite know yet what's going on: whether LLMs enable real scientific contributions that language barriers had been suppressing, or whether they are just enabling more low-quality output.

I'm genuinely curious: for those of you who review papers or grants, do you think LLMs are unlocking latent capacity that linguistic gatekeeping was suppressing, or are they just enabling weaker work to sound better?
Anonymous said…
I see people of all ages with built-in artificial intelligence: the ones who talk in general terms, as if repeating a public service announcement. No critical thinking, just repetition, no natural intelligence.
Anonymous said…
At LANL the push for AI is to focus only on big ideas. In the end only a few of the brightest managers will be left, whose sole job will be to think and come up with a Big Idea every six months. Most working days will consist primarily of long walks spent pondering.
