Found this on the blog of a retired Distinguished Scientist from Sandia/LANL. A bit of a depressing read. It has a hint of politics, but he has a point.
"I will start by saying Los Alamos carries some significant meaning for me personally. I lived and worked there for almost 18 years. It shaped me as a scientist, if not made me the one I am today. It has (had) a culture of scientific achievement and open inquiry that I fully embrace and treasure. I had not spent time like this on the main town site for years. It was a stunning melange of things unchanged and radical change. I ate at new places, and old places running into old friends with regularity. I was left with mixed feelings and deep emotions at the end. Most of all my view of whether leaving there was the right professional move for me. It was probably a good idea. The Lab I knew and loved is almost gone. It has disappeared into the maw of our dysfunctional nation’s destruction of science. It is a real example of where greatness has gone, and the MAGA folks are not doing jack to fix it.
Another topic of repeated discussion every day of the meeting is the growing obsession with AI. There is a manic zeal for AI on the part of managers, and it puts all our science at serious risk.
7 comments:
Do you mind disclosing the blog's address? Thank you.
This is the blog link.
https://wjrider.wordpress.com/author/wjrider/
Lots of commentary about Sandia and Los Alamos, along with some rather colorful social commentary about the general state of the nation. It is also heavy on opinion, so take it for what it is. There is probably some content about LLNL as well, but the basis set is the New Mexico labs, scientific computing, and social politics. I am also not sure whether the blogger is retired or soon to be retired, but he sure does not hold back his thoughts.
Much obliged.
Lab managers love AI, because they don't have to know anything.
I continue to see AI making mistakes. Google still attaches a warning that AI can make mistakes to its AI-generated summaries.
Doesn't mean it's not good enough for nuke design, right? After all, we don't test anyway, so whether it works or not is immaterial.
There is a decent amount of HE (high explosive) in the nukes. A mistake by the AI could cause an unexpected explosion, killing people and scattering Pu all over the place. Will an AI design be safe? It is not just a question of whether the nuke works.