Blog purpose

This BLOG is for LLNL present and past employees, friends of LLNL, and anyone impacted by the privatization of the Lab to express their opinions and expose the waste, wrongdoing, and any kind of injustice against employees and taxpayers by LLNS/DOE/NNSA. The opinions stated are personal opinions, and the BLOG author may or may not agree with them before making the decision to post them. Comments not conforming to BLOG rules are deleted. The Blog author serves as moderator. For new topics or suggestions, email jlscoob5@gmail.com

Blog rules

  • Stay on topic.
  • No profanity, threatening language, pornography.
  • NO NAME CALLING.
  • No political debate.
  • Posts and comments are posted several times a day.

Sunday, July 7, 2024

Intelligent AI in the decade ahead

This is an interesting set of articles hyping AI developments. It argues that we will soon have super-intelligent AI systems:

Check out the menu at:
https://situational-awareness.ai/

7 comments:

Anonymous said...

Does the world need that much intelligence? There is already an overproduction of college graduates including Ph.D.'s in many fields, and the government has programs to forgive loans as the value of degrees is less than the cost. Many talented people cannot find satisfying work as they are overqualified for the jobs that are available.

This brings up another question: how will trillions of dollars get funneled into these projects? Is there some sort of business model where end users pay for all this capacity, or is it a bubble? As intelligence begins to be overproduced, won't it be (even further) devalued? In that case, how does money keep flowing into these AI projects?

Anonymous said...

I think AI is a bit of a bubble at this point.

Anonymous said...


At LANL, at least, everything is "AI": AI is the future of the labs, it will control everything, LANL will make big advances in AI, and so on. At the same time, the managers who go on about this seem to have only a very superficial knowledge of AI, using the same talking points you find in the popular media. They simply do not seem to get it at all. They do not understand what kinds of problems it could be used for, what the limitations are, or what could actually be done. Mason gave a talk recently, and it was kind of embarrassing when he spoke on AI. Someone asked him what AI advance LANL has ever made, and he seemed confused and said we have a "new AI machine!" That is just a new machine; it is NOT an AI machine. He also said that four years ago no one could have predicted the rise of AI and that it all took off last year with ChatGPT. I know a number of people are grumbling that any AI work at LANL looks pretty grim at this point. If our leadership has no real idea what AI is, it is going to be hard to make any real advance in it. Of course, LANL actually had a number of efforts in AI and machine learning going back 25 years or more. Most of the people who led these efforts left LANL for better positions, and in many cases this line of work was not promoted because it was seen as "not lab relevant." LANL is one odd place.

Anonymous said...

Shades of LANS are starting to return to LANL.

I have noticed that many of the bad aspects of LANS are starting to return to LANL. I am not sure if this is just the nature of the labs or if the old LANS managers are slipping back into the old ways, but I have seen a couple of hints of it. (1) We had a DuPont safety survey, and there was a huge push to get everyone to take it; it was stated to be a big deal, a super-important event, and so on. During Mason's all-hands meeting he said something along the lines of "we had a survey with some results we did not expect, or some confusion by the workforce." I am guessing it said something they did not like, and we will never hear about this survey again; if it is brought up, management will say there was never a survey. A similar thing happened with LANS, where surveys were taken and the results did not come out the way management wanted.

(2) The weird AI stuff at the lab. LANL is now an "AI lab," but the way it is talked about is at the level of a morning TV broadcast: very shallow, vague, and in most cases inaccurate. You are seeing more and more staff becoming rather skeptical of this. Many lab people either know something about the field or have connections to universities and other labs. If you happen to know any professors, most are now saying that there is too much hype around AI, too much overselling, or a lack of understanding of what it is and what it does. Some LANL manager uses ChatGPT to write a mindless memo and now thinks AI can do everything for us at the labs.



Anonymous said...


The AI craze is like the dot-com bubble; it is going to fizzle and return to a normal level. The hype right now is crazy, but look at who is pushing it: business people, politicians, and celebrities. Oddly enough, you do not see this craze on the science front. If you happen to read Science, Nature, PNAS, and so on, there are simply not that many articles on AI. Sure, there are a few, and some people have been incorporating these elements into studies for ten or more years, but the gap between the public hype and the actual scientific hype is huge, which is a sure giveaway that this is a bubble. Go to any science conference in a major field, look at the talks and what people are excited by, and you will find that AI methods are not nearly as prevalent as the media hype would imply. By the way, people have been using computers, neural networks, and learning models for 30 years or more now. A lot of the methods are not that new, or are just the standard progress one would expect with faster computers. I suspect LANL/LLNL and DOE are just jumping on the bandwagon late in the game.

It is an open question right now just how important AI will be to science in general. It looks like it can work well for certain fields, but in some cases it is simply not much better than other methods, or is even worse. Also, for many scientific problems AI currently cannot offer much of anything; perhaps it can in the future, but those are pretty big leaps.

I have heard that some managers are saying the push for AI is just to get lab people to start using it. If that is the reason, it is beyond bizarre, as scientists will naturally find and pursue promising new methods. AI is not something one should have to push on scientists; they would drive the adoption themselves, since they naturally want to use these methods to stay at the cutting edge of their science. There are already a number of people at the labs who do just that, just as there are at universities. There are also a bunch who have tested the methods and found that they are simply not that useful for certain problems or not as good as other methods, or who have read enough papers and have not seen how it will add much at this point.


Anonymous said...


Top Goldman Sachs Analyst Warns On AI Bubble

https://www.investors.com/news/technology/top-goldman-sachs-analyst-warns-on-ai-bubble-but-likes-nvidia-infrastructure-plays/

However, Covello cast doubt on artificial intelligence solving critical business problems and companies getting a return on AI investment — which Goldman Sachs estimated will be over $1 trillion "in coming years."

In the "Top of Mind" report, Covello added: "How long investors will remain satisfied with the mantra that 'if you build it, they will come' remains an open question. The more time that passes without significant AI applications, the more challenging the AI story will become. And my guess is that if important use cases don't start to become more apparent in the next 12 to 18 months, investor enthusiasm may begin to fade."

Anonymous said...


There is worry that AI will take over jobs. Of course, this can be seen as a good thing if you want to save money or reduce spending. The question for the labs is whether AI can reduce the workforce to save money and, if so, what kinds of jobs that would be. At LANL one of the big talking points is that the lab is a force for good in that it employs thousands of workers in New Mexico. The issue is how we balance AI replacing parts of the workforce with still being a force for good. I would guess we need to concentrate on AI that can do technical or engineering tasks. For example, if we could get AI to build pits, we would be way ahead of the game.
