
AI demystified

 Hello blog contributors! 

First, I apologize for the post title. It suggests I am trying to demystify AI while in fact I am asking you for help in doing it.

Mainly, I have used chatbots to ask far-fetched questions and to check for plagiarism.

Is anyone versed enough in AI, in both its software and hardware aspects, to explain its engines, sources, challenges, and applications?

Or anything else. It is a relatively new field, so no one should expect a symposium.

Thank you 



Comments

Anonymous said…
This video from a recognized world-leading authority on all things AI will answer all your questions and more. Watch and learn. It is tremendous, the depth of knowledge, the technical expertise.

https://www.youtube.com/watch?v=Yk1P8l4svQU
Scooby said…
Thank you for your valued contribution. It was 30 minutes of captivating history!
I will let the scientific minds weigh in.
Anonymous said…
History! It was the future!! A technical masterwork of the highest order. It is beyond obvious that the future of LANL is AI.
Scooby said…
The speaker went through a whole lot of history and pure theory before getting to AI.
This does not take away from the presentation's value.
Anonymous said…
I just watched Sabine Haushoffer's (or something like that) YouTube stream today where she talked about AI making people dumber, especially scientists. There are now a gazillion scientific papers written by or with AI that are totally worthless and wrong. As a software developer I know there are a gazillion lines of worthless code in the world written with AI. I've written many, many comments about where and how AI fails, but my colleagues seem to be idiots. Maybe I'll release a white paper about AI fallacies one day.
Anonymous said…
https://www.youtube.com/watch?v=hVkCfn6kSqE

AI slop has already become prevalent across the internet (and in the classroom). But according to multiple reports, artificial intelligence is posing an increasingly serious threat to scientific literature, too. Researchers have uncovered a concerning number of AI-generated papers published in reputable journals, even experts can no longer identify AI-generated “science” images, and the amount of AI slop submitted to publishers will only increase as the technology continues to improve. Let’s take a look at what this means for the future of science.

To be clear, I have been able to spot these AI papers pretty fast. A lot of it comes from China and is low-quality work. The AI helps the English, but the quality of the paper is still bad; it also adds extra words and sentences that are grammatically correct but add no value. My worry is more about the students who are coming through the system now.

You can also use AI to give you a summary of a paper. I find that if you feed it a paper you know very well, it usually misses the deeper implications or connections to other fields. Those depend on the knowledge of the reader, who can read a paper and see how it connects to other work even when the original authors did not intend it. The AI just gives a longer version of the abstract. It takes years of reading papers to start seeing the connections between works, and I have yet to see AI do that in a meaningful way. You can ask it to, and it makes only trivial connections.

AI can help generate lots of low- or even negative-quality results for science.

Anonymous said…
https://finance.yahoo.com/news/ai-effectively-useless-created-fake-194008129.html

“AI still remains, I would argue, completely unproven. And fake it till you make it may work in Silicon Valley, but for the rest of us, I think once bitten, twice shy may be more appropriate for AI,” he said. “If AI cannot be trusted…then AI is effectively, in my mind, useless.”
Anonymous said…
There was a report by the LANL lab fellows discussing the decline in publications and in the impact of those papers relative to the other DOE labs. Maybe with AI we can just churn out papers and pretend everything is OK? I am not sure LANL can even do that at this point.
Anonymous said…
There is a "reductionist" theory that intelligence is related to understanding in the following sense: to understand something is to be able to explain it with less information. This is the idea behind Kolmogorov complexity:

https://doi.org/10.1038/s42256-025-01033-7

https://en.wikipedia.org/wiki/Kolmogorov_complexity
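A crude but concrete way to see this: Kolmogorov complexity itself is uncomputable, but any off-the-shelf compressor gives a computable upper bound, so "more compressible" is a proxy for "explainable with less information." A minimal Python sketch (the function name and the two example strings are my own illustration, not from the paper above):

```python
import random
import zlib

def compressed_size(s: str) -> int:
    """Length of the zlib-compressed bytes: a crude, computable
    upper bound on the (uncomputable) Kolmogorov complexity of s."""
    return len(zlib.compress(s.encode("utf-8"), 9))

# A highly regular string admits a short description ("'ab' repeated 500 times")...
regular = "ab" * 500

# ...while a pseudo-random string of the same length and alphabet resists compression.
random.seed(0)
irregular = "".join(random.choice("ab") for _ in range(1000))

print(compressed_size(regular))    # much smaller than len(regular) == 1000
print(compressed_size(irregular))  # close to the ~125-byte entropy limit
```

The compressor "understands" the regular string in exactly the reductionist sense above: it replaces it with a far shorter description, which it cannot do for the irregular one.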

Of course, this reductionism reaches its peak with higher mathematics as used in the physical sciences; see this essay by the Nobel Laureate Wigner:

https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness_of_Mathematics_in_the_Natural_Sciences

It should be realized, of course, that large language models currently have only a limited ability to perform the symbolic manipulation this requires, and a limited capability for abstract thought in general.
Anonymous said…
On the hardware side, this video goes into what is needed to run AI and language models locally, on a single machine that can be plugged into a standard wall outlet:

https://youtu.be/7kgMkzeX650?si=BlCs0zLa8i-PTRkK
