Blog purpose

This BLOG is for LLNL present and past employees, friends of LLNL, and anyone impacted by the privatization of the Lab to express their opinions and expose the waste, wrongdoing, and any kind of injustice against employees and taxpayers by LLNS/DOE/NNSA. The opinions stated are personal opinions; the BLOG author may or may not agree with them, and posting them does not imply endorsement. Comments not conforming to BLOG rules are deleted. The Blog author serves as moderator. For new topics or suggestions, email jlscoob5@gmail.com

Blog rules

  • Stay on topic.
  • No profanity, threatening language, pornography.
  • NO NAME CALLING.
  • No political debate.
  • Posts and comments are posted several times a day.

Tuesday, May 30, 2023

AI poses ‘risk of extinction’

Are the labs and nukes irrelevant now with the coming AI age? For the past 80 years nuclear war has been the biggest immediate threat to humanity, with LLNL and LANL being part of that. Now that the real threat is AI, does that mean the labs are irrelevant and we should close them down and start a new project to fight AI, call it the Brooklyn Project, where we assemble the best minds to create an AI to protect humanity against other AIs? It is starting to look that way, so we had better start rethinking our budget and science priorities if we are going to survive. As such, the importance of the labs must fade.

https://www.msn.com/en-us/news/technology/ai-poses-risk-of-extinction-on-par-with-nukes-tech-leaders-say/ar-AA1bT5yh

16 comments:

Anonymous said...

This is just a bunch of shameless self-promotion and an attempt at regulatory capture by the big tech companies. "Oh, look at us, our technology is so incredible that it is more dangerous than nukes and it's gonna take over the world!" On top of that, they are begging Washington to regulate and license AI to shut out smaller competitors and new innovations. This is a classic playbook. Any time you see big business begging the government for regulation, you know something fishy is at work.

Anonymous said...

Elon Musk on AI

https://m.youtube.com/watch?v=a2ZBEC16yH4&pp=ygUMZWxvbiBtdXNrIGFp

Anonymous said...


You guys at the labs know you are irrelevant now. AI is the biggest threat we face; most or all of the funding for defense needs to be poured into ways to defend against AI.

Anonymous said...

Same shut down Los Alamos crap, different ridiculous claim on why it's necessary. Same monkey, different tree. HoHum.

Anonymous said...

AI is not a "threat"; it is real and inevitable. Over time we will see that humans cannot somehow "defend against AI" using inferior thought processes. I am sure humans have a bright future, but we will not be the ones in control.

The labs have a wonderful mission, though, in bringing about this new Golden Age through their scientific and computing expertise, and in minimizing the inevitable chaos, suffering, and death that could occur before AI gains dominance over the human species.

Anonymous said...

Kamala has been put in charge so all is well.

Anonymous said...

Perhaps, Kamala's speech about the significance of the passage of time, holds greater significance now that time has passed, as more people have become aware of the significance of the passage of time, as it relates to the significance of the significant growth of our AI capabilities with the passage of time.

Anonymous said...

Certainly, Kamala's bizarre speech was quite insightful and deep, as it leads us on a journey of understanding through its obscurity and ambiguity. It is also apparent that it invites us to fix ourselves firmly in the security of enjoying our present existence, grounding these deep insights firmly in everyday reality.

On the other hand, I tried asking Google's chatbot Bard to address each of my concerns about the future path of AI, given the particular constraints imposed by everyday reality and by how humans actually behave, that might frustrate any attempt to control AI-related issues. Bard does have deep knowledge, drawn from much of the internet; for example, it knows a great deal about human history as well as almost any obscure topic.

What surprised me, though, was that in each case it said my concerns were interesting, discussed them, and returned canned talking points as the best course of action -- things like a "global conversation regarding the future of AI" or "AI needs transparency and accountability"; there was a list of these.

Anonymous said...


6/06/2023 10:32 PM

We know that climate change is a problem that has to be addressed as a great risk to humanity. Although AI is a risk, the benefits outweigh that risk. With AI augmentation, people will travel less since you can travel in simulation; in other words, you can sit in a Paris cafe without ever leaving your home. You will buy less stuff, such as sports equipment, since you will play in simulation. You will not have to travel for work since you can work in your metaverse. Being online also means you will not have to move as much and will consume fewer calories. Additionally, this could fix the housing crisis, since you can live in a much smaller place but have it augmented by AI. In other words, AI could lead to a much lower use of energy for the current population. I know some people are going to say this sounds horrible, but in many ways it will be better, and people will probably like it better than reality. You can have tea in London, a jog in Paris, and work in your office by the ocean side. You will save money on gas, travel, clothes, and food. You will have a much smaller mortgage and will hardly need a car. Say what you want, but this will be an improvement over how 90% of us live, and it can save the climate. It is not some dystopian landscape if people live happier lives in a more sustainable way.

Anonymous said...

I do recall there was a scenario outlined in Homer's Odyssey that might be relevant, dealing with the so-called Lotus Eaters:

https://en.wikipedia.org/wiki/Lotus-eaters

A scenario where people retreat into a virtual world, minimizing their environmental impact, bears some similarities to this. I would point out that if this really happens, climate change might not matter at all in terms of any impact on human civilization, but the counterargument might be that this is similar to voluntary total extinction of the human species, where only our dreams remain.

Anonymous said...

When did the crazies take over this blog? Why did Scooby let them?

Scooby said...

6/9 6:52
Nobody is taking over. Comments which do not violate rules are allowed even if they seem odd to you or me.

Anonymous said...

I would assume chatbots are a lot like physicists; in fact, some people have outlined a scenario involving monkeys on typewriters that do not understand what they are doing. This has been used, I believe, as a sort of critique of modern science. However, AI chatbots will certainly have vast scientific potential, and we are already seeing their contributions begin to take hold.

I would assume in particular that AI-driven research can produce vast quantities of well-documented but mostly useless facts and observations, which can later be synthesized by AI, enabling the development of new technologies that may, in some cases, be inscrutable to humans. This might even become a never-ending cycle of seemingly magic developments.

In a sense, then, maybe it will be the Age of Aquarius, as in this unusual video. The outcome of all the science and weapons work could be that science and technology become irrelevant and people adopt some sort of new age beliefs:

https://youtu.be/ajgeaOt_HTQ

I myself would not necessarily advocate for this, or believe it is possible, but as you might be aware there are people pushing stuff like this as an outcome -- perhaps it is government-funded misinformation.

Anonymous said...

Maybe, on the other hand, some of the narrative on Aquarius is a so-called left-wing proposal:

https://youtu.be/2QiKmqSG5Lk

https://youtu.be/5Jn1WLJTvQA

Or maybe an elite plot to ensure depopulation?

Anonymous said...

Nobody is taking over. Comments which do not violate rules are allowed even if they seem odd to you or me.

6/10/2023 8:25 AM

What about the requirement for relevance to LLNL/LANL/Labs?

Anonymous said...

6/15/2023 5:49 PM

I guess you do not work at the labs, because Sandia, LLNL, and LANL all have a number of projects involving AI, such as machine learning, pattern recognition, simulation, and so on.

I am always amazed when we talk about something like national security, computing, or politics related to nuclear weapons and the same person asks "how is this relevant to LLNL/LANL/the Labs?" I don't get it. Is the person long since retired, or do they work in some super isolated part of the lab and not pay attention to what is going on, or are they simply projecting some vision that the lab should be doing some super narrow thing that is irrelevant to the world?
