Blog purpose

This BLOG is for LLNL present and past employees, friends of LLNL, and anyone impacted by the privatization of the Lab to express their opinions and expose the waste, wrongdoing, and any kind of injustice against employees and taxpayers by LLNS/DOE/NNSA. The opinions stated are personal opinions; the BLOG author may or may not agree with them, and posting them does not imply agreement. Comments not conforming to BLOG rules are deleted. The BLOG author serves as moderator. For new topics or suggestions, email jlscoob5@gmail.com

Blog rules

  • Stay on topic.
  • No profanity, threatening language, pornography.
  • NO NAME CALLING.
  • No political debate.
  • Posts and comments are posted several times a day.

Saturday, February 24, 2024

Imagine doubling the processing power of your devices!

 https://www.eurekalert.org/news-releases/1035231


This might be hype, but it seems like great news if true -- we'd be getting several years' worth of Moore's-law-style improvement (a doubling every two to three years) for "free". Who said there is no such thing as a free lunch?
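
As rough arithmetic (a minimal sketch; the two-to-three-year doubling period is just the figure quoted above, not a measured constant), here is what a claimed 2x speedup works out to:

import math

def equivalent_moores_law_years(speedup, doubling_period_years):
    # Years of Moore's-law-style progress equivalent to a given speedup
    return math.log2(speedup) * doubling_period_years

for period in (2.0, 3.0):
    years = equivalent_moores_law_years(2.0, period)
    print(f"2x speedup ~ {years:.0f} years at a doubling every {period:.0f} years")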

Maybe the growth of AI will exceed expectations!

3 comments:

Anonymous said...

No, this is not some great revolution doubling the speed of all computers. It might be a technique that helps speed up some very specific architectures on some specific tasks. It is not a paradigm shift in anything. Did you read the article, by chance?

Anonymous said...

Does that mean that Google could "colorize" all of the pictures in all of history twice as fast as it did the other day? The mind boggles.

Anonymous said...

8:51 -- The claims could be somewhat true, in that very few codes run at peak performance on the available hardware, given the constraints of running at minimum cost and minimum wall-clock time (or minimum power consumption on mobile devices, etc.). The lost performance can be far more than a factor of two.
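
A crude way to see this (a sketch assuming Python with NumPy installed; the gap here is mostly interpreter overhead rather than distance from true hardware peak, but it makes the point that straightforward code routinely leaves far more than 2x on the table):

import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# The same elementwise multiply-add, written naively...
t0 = time.perf_counter()
naive = [a[i] * b[i] + 1.0 for i in range(n)]
t_naive = time.perf_counter() - t0

# ...and vectorized through NumPy
t0 = time.perf_counter()
vectorized = a * b + 1.0
t_vec = time.perf_counter() - t0

print(f"naive: {t_naive:.3f}s, vectorized: {t_vec:.4f}s, ratio: {t_naive / t_vec:.0f}x")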

Of course, many scientific codes involve computations such as modeling events in several dimensions or in the quantum domain, and they suffer from unfavorable scaling in any case, so the algorithm in use, which sets the exponent of the scaling power law, may play the greater role; lowering the prefactor may not be that important.
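
To make that concrete (a sketch using a toy cost model, time = prefactor * n**exponent, not measurements from any real code):

def runtime(n, prefactor, exponent):
    # Toy cost model: time = prefactor * n**exponent
    return prefactor * n ** exponent

n = 10_000
baseline = runtime(n, prefactor=1.0, exponent=3)  # e.g. an O(n^3) method
halved_c = runtime(n, prefactor=0.5, exponent=3)  # optimization that halves the prefactor
better_k = runtime(n, prefactor=1.0, exponent=2)  # a better-scaling algorithm

print(f"halving the prefactor: {baseline / halved_c:.0f}x speedup")
print(f"reducing the exponent from 3 to 2: {baseline / better_k:,.0f}x speedup at n = {n:,}")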

Better operating systems, languages, compilers, and scheduling algorithms can help performance, though, and this could be part of the answer.

In real code environments, maintenance and refactoring can also be an issue: code can be bloated or intentionally obfuscated by developers, and of course in high-performance computing there can be perverse incentives to use more resources rather than fewer, and so on. Code features also become bloated and obfuscated, as in Microsoft Office and Windows, as a way of generating a technological moat, driving ever-greater computing needs for basic tasks.

Maybe AI coding tools will cut this Gordian knot and lead to more efficient and maintainable code; they could also erode some of the power of entrenched platforms and operating systems at the big tech companies, leading to a new wave of innovative and performant software.
