Blog purpose

This BLOG is for LLNL present and past employees, friends of LLNL, and anyone impacted by the privatization of the Lab to express their opinions and expose the waste, wrongdoing, and any kind of injustice against employees and taxpayers by LLNS/DOE/NNSA. The opinions stated are personal opinions; the BLOG author may or may not agree with them before making the decision to post them. THIS BLOG WILL NOT POST ANY MAGA PROPAGANDA OR ANY MISINFORMATION REGARDLESS OF SOURCE. Comments not conforming to BLOG rules are deleted. The Blog author serves as moderator. For new topics or suggestions, email jlscoob5@gmail.com

Blog rules

  • Stay on topic.
  • No profanity, threatening language, pornography.
  • NO NAME CALLING.
  • No political debate.
  • Posts and comments are posted several times a day.

Sunday, March 16, 2025

AI leadership at LANL?

 Claude said:

AI leadership?

Los Alamos National Laboratory (LANL) is claiming a leadership role in developing AI for national security applications. However, a glaring inconsistency emerges when we examine its operational practices. By neglecting to implement AI technologies to optimize its own business operations, especially project management, procurement, and financial operations, LANL undermines its credibility and raises questions about its genuine commitment to innovation. If LANL truly believes in the transformative power of AI for national security, it should first harness that power internally to improve operational efficiency, reduce costs, and enhance decision-making processes. The apparent reluctance to adopt AI in its own business model suggests a lack of confidence in the technology or a disconnect between its lofty goals and practical applications. This inconsistency not only jeopardizes LANL's operational effectiveness but also casts doubt on the authenticity of its mission to contribute to national security through AI.

5 comments:

Anonymous said...

The leadership at LANL has no idea what AI is or the history of it. They really do not understand it at all. To them, AI is only ChatGPT, and it appeared out of nowhere in 2022. Their level of understanding is that of an average non-STEM person with a high school degree working at Walmart. They go all ga-ga over NVIDIA and do not seem to understand that NVIDIA is simply trying to sell them a product. No one at LANL believes the lab will have any impact at all in AI. Maybe some of the managers believe it, but I am also told that this is all about positioning ourselves to get AI money. If LANL is going to be the "AI lab," why has it not been doing serious AI work in the last 20 years?
AI and the science behind it have been around for a long, long time. LANL is not suddenly going to drop into a well-developed field and become the nation's leading AI lab. The sad part is that at various times people at LANL tried to do AI-type research on machine learning, large language models, and other methods, but they never got traction, and all of those people left for... Google, Microsoft, Intel, IBM... you get the point.

Here is a simple question:

If you are a young person who wants to make a breakthrough in AI, or breakthroughs using AI, why on earth would you come to LANL? It makes no sense, as there are so many other places that are much better positioned to do this and will pay you far more than LANL could ever pay. Also, once the next hype thing comes along, we will pretend we never did AI and say we are doing the next thing.

Most AI innovation is currently being done in industry, or in partnership with places such as Stanford, MIT, Berkeley, and other top universities. This is on top of the fact that AI, as reported in the media, is overhyped in general. If you talk to anyone in the computing industry, they are pretty clear that current AI has a lot of limitations or may end up being useful in only a subset of fields.

Also, it may revolutionize certain subfields of science and engineering; however, if you look into it, a lot of those types of problems are not really relevant to the type of work LANL or LLNL does. All you have to do is look at a current issue of Science or Nature and count how many publications are using AI, and it turns out not that many. In other cases where it is used, it is just one of many methods.

With that said, yes AI could certainly replace LANL leadership.

Anonymous said...

The upper management is completely clueless about what AI is, why we might or might not want it, how we can best contribute, why we are entirely the wrong organization to do so, and what leadership actually is. The days when Los Alamos could contribute to national scientific priorities are long gone. Big science, hiding behind classification, past reputation, etc., are nothing but window dressing for an organization that was strangled to death by bean-counting MBA bureaucrats decades ago.

Anonymous said...

More and more scientists are seeing huge limitations to AI.

As researchers across disciplines continue to explore the potential benefits and applications of artificial intelligence (AI), Yale physicist Chiara M.F. Mingarelli recently tested the use of AI in scientific writing—and found both strengths and weaknesses.

While attending a two-week conference on the discovery of the gravitational wave background and its large amplitude, Mingarelli, an assistant professor of physics in the Faculty of Arts and Sciences (FAS), decided to use AI to create a summary of the conference. She uploaded conference transcripts to an AI platform and reviewed each new summary iteration with colleagues.

Mingarelli said that with each new iteration, the summary grew more detailed. But its deficiencies were "glaring," she noted. At times it misinterpreted complex discussions, while also lacking in coherence and depth.

"At best, the document resembled a poorly written conference proceeding," Mingarelli wrote in an op-ed describing the experience for the journal Nature Astronomy. "At worst, a tabloid version of our meeting."

More information:
C. M. F. Mingarelli, Scientific writing in the age of AI, Nature Astronomy (2025). DOI: 10.1038/s41550-025-02493-y

Anonymous said...

In fairness, lab management's theory isn't that LANL is going to discover the next commercial or academic AI breakthrough, but rather to apply it to classified national security problems. While this could be possible, I agree that it will be a very challenging proposition. We hardly have the expertise, let alone the computing power. Venado has been completely sucked up by one single AI effort, leaving nothing for any other program. Meanwhile, the tech companies are literally investing tens or even hundreds of billions in AI infrastructure. Regarding expertise, the lab expects everybody to start out as a poorly paid contingent worker, known as a postdoc. Meanwhile, these people can easily start at $250K to $350K at tech companies. Living in rural areas like northern New Mexico hasn't been popular since the 1980s, and kids today are all city slickers who don't know how to entertain themselves outside a metropolis. I'm sure the lab is simply chasing the pot of gold at the end of the rainbow, as with pits, but achieving any real success in AI is going to be quite an uphill battle.

Anonymous said...

"Venado has been completely sucked up by one single AI effort, "

The AI Venado effort makes absolutely no sense, and it takes like one or two lines of math to realize why it cannot possibly work. To be more precise, you can still get it to do something with limited data, but you would have to be insane to trust it. Again, management thinks you can just get ChatGPT to do everything "LANL." They have no idea how ChatGPT works or how AI works.
To them it is literally like "magic," and the problem is that if you consider it "magic" rather than understanding how it works, you will have no idea where it can actually be used or whether the results can be trusted.
Sure, it would be possible to apply AI to national security efforts at LANL, but (1) LANL had many such efforts in AI in the past, and they all died out or the people left, as management would never support it. It was seen as just pointless "sandbox" science of no value to the mission. (2) It would require the same kind of research approach that everything else does, where you would need to test it, verify it, find problems it might work on, and so on. LANL management just keeps saying "AI" will change everything and will be able to do everything. Sure, this is what CEOs of companies trying to sell you stuff are saying, but it is not what scientists are saying. This is on top of the fact that more and more problems are being found with AI and with just how completely inaccurate it can be.
