LANL just lost a world leading expert in AI! Some guy named Jason Prueit. Odd, I cannot seem to find a single paper or reference on AI that this guy ever produced, so I am going out on a limb and saying he is not an expert on AI. I guess "I played with ChatGPT" makes one an expert.
"If you’ve played with the most recent AI tools, you know: They’re very good coders, very good legal analysts, very good first drafters of writing, very good image generators. They’re only going to get better."
I cannot find much of any impact from this guy. He gave some talk on AI, and some reddit commenters seemed to see right through it. This is what they said.
Most of the bullshitters will tip their hand pretty early that they're just hype men for AI. Right off the bat, the fact that AI is disruptive and transforming society is apparently self-evident because they never cite a single premise or event to back this up. In the quote above, the phrase "if you've played" stuck out to me. Yes if you play around with them a little it's easy to believe they're really good at so many things. When you stringently evaluate them, you begin to see they make a lot of mistakes and perform inconsistently on even trivial tasks.
Comments
Hype it and they will come...........or maybe not.
I know this guy from some connections at UCSD. He was just another below average particle theory guy that never had any chance to make it in academia or a DOE lab. Somehow he ended up at LANL (an NNSA lab), but to be clear LANL seems to get tons of these guys, so you cannot hold that against him. He would mention every 10 minutes that he used to be a fed, as if that gave him credibility. None of this is an issue, but when it came to AI, he was beyond clueless and just parroted stuff off the internet, offering the same level of insight into AI as a random hairdresser who used ChatGPT. Maybe someone with knowledge of actual AI called him out.
You have a good point, and that is why pursuing AI at LANL is almost a waste of time. I simply do not see the goal. If it is about using AI for science problems, that would just happen anyway, like with every other technology. That seems like a more natural approach, but one does not need to push it: the lab scientists will start using AI on the particular problems where it makes sense, the same way every other DOE lab and university will.
If it is about developing new AI for breakthroughs in AI, that is not really feasible at LANL for a number of reasons. We simply do not have these kinds of experts, and we have nothing to attract them to LANL in terms of scientific environment or pay. We cannot really build the biggest computers since we do not have the water or other resources needed. LANL has been pushing AI for several years now, yet every single breakthrough "highlight" the lab has had on the news feed has been incremental or even low quality work. In other words, despite our big investment we have had no real impact. The one place AI was applied successfully at LANL is actually fairly old work on atomic potentials, and of course that was also going on at many other places.
Now, nothing against corporate, but in corporate:
1) The top-down scientific direction is competent. We have already determined that at LANL we have top managers talking AI and saying things we would expect from hairdressers.
2) A bottom line, and the possibility of tanking, ensures results must be achieved. In turn, this means corporate management does not decide on a whim to go all in on something where competitors can easily win. Here the taxpayer foots the bill, so top management can BS the most extravagant nonsense and even feel like genius visionaries in the process. This is clearly what we are doing with AI: I could list half a dozen places that easily smoke us on AI.
3) Salaries are much higher in corporate.
Yes, they are publishing, and publishing a lot. Google DeepMind is just one example, and Google in general publishes tons of its work in high profile fields. Also UC Berkeley, Stanford, U Washington, and MIT, which are often coupled to industry on AI work, have a wealth of publications.
Geoffrey Hinton has hundreds of publications with close to a million citations.
https://scholar.google.com/citations?user=JicYPdAAAAAJ&hl=en
It also shows you that AI research is not something that just appeared in 2022, as claimed by LANL managers, but has been going on for years and years. LANL jumping on the bandwagon this late in the game is not going to add much. Oddly enough, LANL could have been a player: back in the 2000s there were some efforts in machine learning, but they simply could not get funding and most of the people left.
The point is LANL has very little expertise in this field.
Frankly, I'd be out the door already but between a 2% mortgage and kids in school, it's not so easy to hit the exit. But I also may not have a choice come next FY.
The scary part is that it's not just the funding for windmills, DEI, and transgender operas in Peru that has dried up. Nothing is coming in from the DOD or the NNSA research complex either.
Exactly, and in fact we are getting some hits on this already. Some external money has not been showing up, and this is already causing problems at LANL. I have also heard that FY26 is when it really hits, with some possible big cuts to the BES work at LANL. The idea that we should use PIT money taxes for other science has not happened and never will.
You are also right that LDRD has been retooled to be programmatic, particularly the DRs. A big chunk has also been converted to the DI, or Directives Initiative, but that has been basically a disaster. Even if there is a big push and big money for AI work from the new administration, you have to ask: why would LANL be a good place to invest that money?
https://arxiv.org/abs/2412.17866
Artificial Intelligence, Scientific Discovery, and Product Innovation
Aidan Toner-Rodgers
This paper studies the impact of artificial intelligence on innovation, exploiting the randomized introduction of a new materials discovery technology to 1,018 scientists in the R&D lab of a large U.S. firm. AI-assisted researchers discover 44% more materials, resulting in a 39% increase in patent filings and a 17% rise in downstream product innovation. These compounds possess more novel chemical structures and lead to more radical inventions. However, the technology has strikingly disparate effects across the productivity distribution: while the bottom third of scientists see little benefit, the output of top researchers nearly doubles. Investigating the mechanisms behind these results, I show that AI automates 57% of "idea-generation" tasks, reallocating researchers to the new task of evaluating model-produced candidate materials. Top scientists leverage their domain knowledge to prioritize promising AI suggestions, while others waste significant resources testing false positives. Together, these findings demonstrate the potential of AI-augmented research and highlight the complementarity between algorithms and expertise in the innovative process. Survey evidence reveals that these gains come at a cost, however, as 82% of scientists report reduced satisfaction with their work due to decreased creativity and skill underutilization.
In the age of AI, do we really need peer review?