Blog purpose

This BLOG is for LLNL present and past employees, friends of LLNL, and anyone impacted by the privatization of the Lab to express their opinions and expose the waste, wrongdoing, and any kind of injustice against employees and taxpayers by LLNS/DOE/NNSA. The opinions stated are personal opinions, and the BLOG author may or may not agree with them when making the decision to post them. Comments not conforming to BLOG rules are deleted. The BLOG author serves as moderator. For new topics or suggestions, email jlscoob5@gmail.com

Blog rules

  • Stay on topic.
  • No profanity, threatening language, pornography.
  • NO NAME CALLING.
  • No political debate.
  • Posts and comments are posted several times a day.

Wednesday, August 14, 2024

AI papers

This may be hype or nonsense, but there is evidently a new "AI scientist" program that produces papers for $15 each, which are also reviewed by an AI review system to automate research:


https://sakana.ai/ai-scientist/

Obviously, they may not be very good papers, but with further improvements in computing power the cost per paper will fall and the papers will become better. Also, the current focus of the papers produced by the system is to advance AI itself, which may further improve the quality of the papers and lower their cost, aside from developments in hardware.

One might also assume that the papers will eventually become more objective, less biased, and more innovative than what is currently published, since in the present system of scientific output scientists must contend with funding agencies and various career and personal pressures.

7 comments:

Anonymous said...

Aside from this there are also rumors that new "reasoning" AI models will be produced soon, such as Strawberry AI or Q*, as well as no doubt similar developments from other companies such as Google. These models would supposedly have better planning and reasoning capabilities, due to their training and internal design, and are hyped as being an important step towards achieving human-level intelligence. This could allow better capacity to carry out more advanced reasoning and production of better scientific or engineering output when used in automated publication and peer review systems.

While these things have the potential to transform areas of computational science, as well as ultimately theoretical areas and pure mathematics, and generate sophisticated engineering designs and output, there will still of course be issues created by experimental science and the related uncertainties, as well as the intrinsic need for large amounts of computation in computational science, and the conceptual difficulties and computational irreducibility related to theory and math.

In the near term of course, it is certainly possible that these systems may produce a flood of millions, billions, and ultimately trillions and quadrillions of documented results which are perhaps not that interesting.

As you know of course, Stalin is sometimes credited with the saying "Quantity has a quality all its own" which relates to the immense military production of the Soviet Union which ultimately prevailed in WWII. Perhaps the use of these automated systems will improve modern science and technology through the sheer quantity of their output, even if quality is somewhat lacking.

Anonymous said...

This could also be used to mass produce patents and other legal filings, while automating the process of patent review. Along those lines it could even be possible for government agencies to mass produce laws and regulations, producing a vast multitude thereof to improve economic output or attain climate goals. Obviously this could be done through the aid of central bank digital currencies being used to micromanage every aspect of the economy, and through various new taxes, subsidies, exemptions, and rationing measures.

Automation would, of course, greatly reduce the cost of the Federal and State governments as bureaucratic work begins to be automated. And individuals would be freed from dealing with the Kafkaesque system that would result, since those dealings would be conducted on their behalf by various automated systems.

Anonymous said...

Makes sense to me. Why not automate the entire scientific pipeline with generative adversarial networks? NNSA will love this too because it has the word AI in it, computers don't ask questions, and no real physical work needs to be done, eliminating any chance of slips, trips, falls, and other accidents.

Anonymous said...

This link has some good information on some of the AI hype.

https://www.aisnakeoil.com

A lot of AI work in science is pure garbage, and we are seeing more and more pushback against the AI hype for science. Many of the recent breakthroughs are way overhyped or just wrong. Just because something is in "fashion" does not mean it is actually useful. AI methods have a place for certain kinds of problems in science but are of limited value for many of the problems at the NNSA labs. Also, you keep hearing LLNL and LANL managers talking about AI, but if you listen to what they are saying, they either do not understand it or simply have the NYT or USA Today version of what AI is. I keep finding more and more scientists who tried different AI methods but after a while realized they were rather limited or simply not well suited to their problems. However, the hype train rolls on.

Anonymous said...

12:26 -- A lot of non-AI stuff in science is pure garbage as well, in fact there are many breakthroughs not involving AI which are over-hyped or wrong. Are you sure that AI-produced papers would be inferior in general, or is that your bias as a member of the human species?

A good example would be the multitude of studies claiming to show benefits from hydroxychloroquine and ivermectin for COVID-19, which are known to have no benefit whatsoever. Or the many papers which claim that various sorts of masks are ineffective, even those that filter air and thereby prevent inhalation of any virus.

Anonymous said...

"A good example would be the multitude of studies claiming to show benefits from hydroxychloroquine and ivermectin for COVID-19, which are known to have no benefit whatsoever. Or the many papers which claim that various sorts of masks are ineffective, even those that filter air and thereby prevent inhalation of any virus."

I have not heard of any AI studies like this. Most of the studies you cite were inconclusive or have since been proven wrong. A huge amount of medical research is problematic even without AI. You can say what you want, but even people who study COVID will tell you behind closed doors how bad some of the science was.

Anonymous said...

Did you know that in the Middle Ages, they would burn witches during pandemics? This worked surprisingly well, or appeared to at the time, since when the pandemic ended, they could attribute the success to it:

https://youtu.be/Ii68tPIiZOo?si=U2T4NdjI02tQE_4q

https://www.discovermagazine.com/the-sciences/people-often-blamed-and-executed-witches-for-plague-and-disease

If you are of European descent, it is quite possible that your ancestors took part in such practices and thus believed in their efficacy.
