Blog purpose

This BLOG is for LLNL present and past employees, friends of LLNL, and anyone impacted by the privatization of the Lab to express their opinions and expose the waste, wrongdoing, and any kind of injustice against employees and taxpayers by LLNS/DOE/NNSA. The opinions stated are personal opinions; the BLOG author may or may not agree with them before deciding to post them. THIS BLOG WILL NOT POST ANY MAGA PROPAGANDA OR ANY MISINFORMATION REGARDLESS OF SOURCE. Comments not conforming to BLOG rules are deleted. The blog author serves as moderator. For new topics or suggestions, email jlscoob5@gmail.com

Blog rules

  • Stay on topic.
  • No profanity, threatening language, pornography.
  • NO NAME CALLING.
  • No political debate.
  • Posts and comments are posted several times a day.

Monday, May 26, 2025

AI demystified

 Hello blog contributors! 

First, I apologize for the post title: it suggests I am trying to demystify AI, when in fact I am asking for your help in doing so.

I have mainly used chatbots to ask far-fetched questions and to check for plagiarism.

Is anyone versed enough in AI, in both its software and hardware aspects, to explain its engines, sources, challenges, and applications?

Or anything else. It is a relatively new field, so no one should expect a symposium.

Thank you 



10 comments:

Anonymous said...

This video from a recognized world-leading authority on all things AI will answer all your questions and more. Watch and learn. The depth of knowledge and the technical expertise are tremendous.

https://www.youtube.com/watch?v=Yk1P8l4svQU

Scooby said...

Thank you for your valued contribution. It was 30 minutes of captivating history!
I will let the scientific minds weigh in.

Anonymous said...

History! It was the future!! A technical masterwork of the highest order. It is beyond obvious that the future of LANL is AI.

Scooby said...

The speaker went through a whole lot of history and pure theory before getting to AI.
This does not take away from the presentation's value.

Anonymous said...

I just watched Sabine Haushoffer's (or something like that) YouTube stream today, where she talked about AI making people dumber, especially scientists. There are now a gazillion scientific papers written by or with AI that are totally worthless and wrong. As a software developer, I know there are a gazillion lines of worthless code in the world written with AI. I've written many, many comments on where and how AI fails, but my colleagues seem to be idiots. Maybe I'll release a white paper about AI fallacies one day.

Anonymous said...

https://www.youtube.com/watch?v=hVkCfn6kSqE

AI slop has already become prevalent across the internet (and in the classroom). But according to multiple reports, artificial intelligence is posing an increasingly serious threat to scientific literature, too. Researchers have uncovered a concerning number of AI-generated papers published in reputable journals, even experts can no longer identify AI-generated “science” images, and the amount of AI slop submitted to publishers will only increase as the technology continues to improve. Let’s take a look at what this means for the future of science.

To be clear, I have been able to spot these AI papers pretty fast. There is a lot of low-quality work coming from China; the AI helps with the English, but the quality of the paper is still bad. It also adds extra words and sentences that are grammatically correct but add no value. My worry is more about the students who are coming through the system now.

You can also use AI to give you a summary of a paper. I find that if you feed it a paper you know very well, it usually misses the deeper implications or connections to other fields. That depends on the reader's knowledge: someone who has read widely can see how a paper connects to other work, even if the original authors did not intend it. The AI just gives a longer version of the abstract. It takes years of reading papers to start seeing the connections between works, and I have yet to see AI do that in a meaningful way. You can ask it to, and it makes only trivial connections.

AI can help generate lots of low-quality or even negative-quality results for science.

Anonymous said...

https://finance.yahoo.com/news/ai-effectively-useless-created-fake-194008129.html

“AI still remains, I would argue, completely unproven. And fake it till you make it may work in Silicon Valley, but for the rest of us, I think once bitten twice shy may be more appropriate for AI,” he said. “If AI cannot be trusted…then AI is effectively, in my mind, useless.”

Anonymous said...

There was a report by the LANL lab fellows discussing the decline in publications and in their impact relative to the other DOE labs. Maybe with AI we can just churn out papers and pretend everything is okay? I am not sure LANL can even do that at this point.

Anonymous said...

There is a "reductionist" theory that intelligence is related to understanding in the following sense: to understand something is to be able to explain it with less information. This is the idea behind Kolmogorov complexity:

https://doi.org/10.1038/s42256-025-01033-7

https://en.wikipedia.org/wiki/Kolmogorov_complexity

Of course, this reductionism reaches its peak with higher mathematics as used in the physical sciences; see this essay by the Nobel Laureate Wigner:

https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness_of_Mathematics_in_the_Natural_Sciences

It should be realized, of course, that large language models currently have limited ability for the symbolic manipulation this requires, and a limited capability for abstract thought in general.
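Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude, computable upper bound on it. As an illustrative sketch (my addition, not part of the comment above), this Python snippet shows the "understanding = shorter description" intuition: a highly regular string compresses far better than pseudo-random text of the same length.

```python
import random
import zlib

def compressed_size(s: str) -> int:
    """Length in bytes of the zlib-compressed string: a crude,
    computable upper bound on its Kolmogorov complexity."""
    return len(zlib.compress(s.encode("utf-8"), level=9))

# A highly regular string has a short description ("repeat 'ab' 500 times")...
regular = "ab" * 500

# ...while typical pseudo-random text admits no such shortcut.
random.seed(0)
noisy = "".join(random.choice("abcdefgh") for _ in range(1000))

print(compressed_size(regular))  # small: the pattern is captured cheaply
print(compressed_size(noisy))    # much larger: little pattern to exploit
```

Both strings are 1000 characters long, yet the compressor "explains" the first with a handful of bytes and can do little with the second; in this rough sense, compression is a stand-in for understanding.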

Anonymous said...

On the hardware side, this video goes into what is needed to run AI and language models locally, on a single machine that can be plugged into a standard wall outlet:

https://youtu.be/7kgMkzeX650?si=BlCs0zLa8i-PTRkK
