LANL just lost a world leading expert in AI! Some guy named Jason Prueit. Odd, I cannot seem to find a single paper or reference on AI that this guy ever wrote, so I am going out on a limb and saying he is not an expert on AI. I guess "I played with ChatGPT" makes one an expert.
"If you’ve played with the most recent AI tools, you know: They’re very good coders, very good legal analysts, very good first drafters of writing, very good image generators. They’re only going to get better."
I cannot find much of any impact from this guy. He gave some talk on AI and some reddit commentators seemed to see right through it. This is what they said.
Most of the bullshitters will tip their hand pretty early that they're just hype men for AI. Right off the bat, the fact that AI is disruptive and transforming society is apparently self-evident because they never cite a single premise or event to back this up. In the quote above, the phrase "if you've played" stuck out to me. Yes if you play around with them a little it's easy to believe they're really good at so many things. When you stringently evaluate them, you begin to see they make a lot of mistakes and perform inconsistently on even trivial tasks.
14 comments:
Sorry, it is Pruet. Nonetheless it seems that he has never done actual work in ML/AI. It is odd to see all these AI hype men that have never done any work in the field, even though AI/ML has been around for some time. The hype people have an uncanny way of both overestimating what AI can do or is doing right now and at the same time not appreciating what it actually can or is doing right now.
Hype it and they will come... or maybe not.
Hahaha. I knew that guy from LLNL years ago. Think he became a fed for a while. I literally forgot his area of expertise. I know it’s not AI.
"Hahaha. I knew that guy from LLNL years ago. Think he became a fed for a while. I literally forgot his area of expertise. I know it’s not AI."
I know this guy from some connections at UCSD. He was just another below-average particle theory guy that never had any chance to make it in academia or at a DOE lab. Somehow he ended up at LANL (an NNSA lab), but to be clear LANL seems to get tons of these guys, so you cannot hold that against him. He would mention every 10 minutes that he used to be a fed, as if that must give him credibility. None of this is an issue, but when it came to AI he was beyond clueless and just parroted stuff off the internet; he would give you the same level of insight into AI as a random hairdresser who used ChatGPT. Maybe someone with knowledge of actual AI called him out.
To be fair, I don't think Sam Altman has ever really published a paper on AI either. Most of the advances these days are driven by private industry and, for the most part, they aren't publishing the bulk of their innovations. I hate to break it to you, but ML is one area in the last 15 years where private industry has really blown past anything that the labs or academia could ever hope to achieve. That's why pursuing AI at LANL almost seems like a waste of time. It would make more sense for the government to invest that money at OpenAI or Palantir.
Pruet is not exactly Sam Altman, but in any case the comparison makes no sense, as Altman is an American technology entrepreneur and investor, not a scientist who developed the fundamentals of AI. In fact he has been criticized for saying things about AI that are not correct as well, since he is on the business side, not the science side.
You have a good point that pursuing AI at LANL almost seems like a waste of time. I simply do not see the goal. If it is about using AI for science problems, that would just happen anyway, like with every other technology. That seems like the more natural approach, but one does not need to push it: the lab scientists will start using AI on the particular problems where it makes sense, just the same way every other DOE lab and university will.
If it is about developing new AI for breakthroughs in AI, that is not really feasible at LANL for a number of reasons. We simply do not have these kinds of experts, and we have nothing interesting to attract them to LANL in terms of scientific environment or pay. We cannot really build the biggest computers since we do not have the water or resources needed. LANL has been pushing AI for several years now, yet every single breakthrough "highlight" that the lab has had on the news feed has been incremental or even low-quality work. In other words, despite our big investment we have had no real impact. The one place AI was applied successfully at LANL is actually fairly old work on atomic potentials, and of course that was also going on at many other places.
This is also one of my problems with LANL's current evolution toward a sort of corporate place.
Now, nothing against corporate, but in corporate:
1) The top-down scientific direction is competent. We have already determined that at LANL we have top managers talking AI and saying things we would expect from hairdressers.
2) A bottom line, and the possibility of tanking, ensure results must be achieved. In turn, it means corporate management does not decide on a whim to go all in on something where competitors can easily win. Here the taxpayer foots the bill, therefore top management can BS the most extravagant nonsense and even feel like genius visionaries in the process. This is clearly what we are doing with AI: I could list half a dozen places that easily smoke us on AI.
3) Salaries are much higher in corporate.
You’ve angered the hairdresser profession
"Most of the advances these days are driven by private industry and, for the most part, they aren't publishing the bulk or their innovations"
Yes, they are publishing, and publishing a lot. Google DeepMind is just one example, and Google in general publishes tons of its work in high-profile venues. Also UC Berkeley, Stanford, U Washington, and MIT, which are often coupled to industry on AI work, have a wealth of publications.
Geoffrey Hinton has hundreds of publications with close to a million citations.
https://scholar.google.com/citations?user=JicYPdAAAAAJ&hl=en
It also shows you that AI research is not something that just appeared in 2022, as claimed by LANL managers, but has been going on for years and years. LANL jumping on the bandwagon this late in the game is not going to add much. Oddly enough, LANL could have been a player: back in the 2000s there were some efforts in machine learning, but they simply could not get funding and most of the people left.
The point is LANL has very little expertise in this field.
Sam Altman was just an example of some industry guy who makes the real breakthroughs but doesn't publish. I don't know what their actual names are because they toil away anonymously. That said, the challenge at The University of Los Alamos, aka LANL, is that the glory days died years ago and they aren't coming back. BES is probably going to evaporate along with climate and renewable energy funding. All work is going to have to be directly mission focused or it's going to be cut. This also means new areas of growth will be challenged, because management will never green light something new and risky. It will be incremental improvements on long-standing programs. It's already pretty much this way with everything except ER LDRD. I bet ER LDRD will soon become mission focused and management led like it is at SNL. Also, the whole place is going to have to shrink in FY26, because despite what Mason (aka Dr. Sunshine) says, the R&D money ain't coming in as we head into next year. When he says budgets will be flat, what he means is for casting pits and evaluating process compliance, not R&D.
Frankly, I'd be out the door already but between a 2% mortgage and kids in school, it's not so easy to hit the exit. But I also may not have a choice come next FY.
The scary part is that it's not just the funding for windmills, DEI and transgender operas in Peru that has dried up. Nothing is coming in from the DOD or NNSA research complex either.
"because despite what Mason (aka Dr. Sunshine) says, the R&D money ain't coming in as we head into next year. When he says budgets will be flat, what he means is for casting pits and evaluating process compliance, not R&D."
Exactly; in fact we are getting some hits on this already. Some external money has not been showing up, and this is already causing problems at LANL. I have also heard that FY26 is when it really hits, with some possible big cuts to the BES work at LANL. The idea that we should use pit-money taxes for other science has not happened and will never happen.
You are also right that LDRD has been retooled to be programmatic, particularly the DRs. Also, a big chunk has been converted to the DI, or Directives Initiative, but that has been basically a disaster. Even if there is a big push and big money for AI work by the new administration, you have to ask why LANL would be a good place to invest that money.
I think I have changed my mind and want to fully embrace AI at the NNSA and make it completely AI driven. Everyone should read this great paper showing how AI makes scientists so much more productive by doing all the creative work for the scientists. I think this is just what LANL wants. There are lots of good news stories about this great work as well; you should check them out.
https://arxiv.org/abs/2412.17866
Artificial Intelligence, Scientific Discovery, and Product Innovation
Aidan Toner-Rodgers
This paper studies the impact of artificial intelligence on innovation, exploiting the randomized introduction of a new materials discovery technology to 1,018 scientists in the R&D lab of a large U.S. firm. AI-assisted researchers discover 44% more materials, resulting in a 39% increase in patent filings and a 17% rise in downstream product innovation. These compounds possess more novel chemical structures and lead to more radical inventions. However, the technology has strikingly disparate effects across the productivity distribution: while the bottom third of scientists see little benefit, the output of top researchers nearly doubles. Investigating the mechanisms behind these results, I show that AI automates 57% of "idea-generation" tasks, reallocating researchers to the new task of evaluating model-produced candidate materials. Top scientists leverage their domain knowledge to prioritize promising AI suggestions, while others waste significant resources testing false positives. Together, these findings demonstrate the potential of AI-augmented research and highlight the complementarity between algorithms and expertise in the innovative process. Survey evidence reveals that these gains come at a cost, however, as 82% of scientists report reduced satisfaction with their work due to decreased creativity and skill underutilization
Papers on arxiv aren't peer-reviewed.
"Papers on arxiv aren't peer-reviewed."
In the age of AI, do we really need peer review?
This whole thing is beyond messed up.