Here's something from Sakharov from the 1960s, when he was still a supporter of communism (his views changed, I believe, as the authorities went after him). Parts of it are still relevant: he is highly critical of bureaucracy, mass media, and the influence of the rich on politics in capitalist systems, and he expresses worries about AI in general, censorship, and the psychological manipulation of the general public. He also criticizes Chinese communism and Mao in particular, as well as Stalinism, fascism, the arms race, environmental destruction, and poverty; it is a laundry list of complaints and worries, most of them justified at least in part. He also puts forth a utopian plan for world peace and prosperity that looks perhaps naive in retrospect. It is well worth reading in full:
https://www.sakharov.space/lib/thoughts-on-peace-progress-and-intellectual-freedom
9 comments:
It seems like von Neumann was a lot smarter than him, and made better predictions about the future many years before this was written. Sakharov also seems to have a poor understanding of nuclear strategy and nuclear issues in general, or is making deliberately misleading and emotional statements, whereas von Neumann was the first to subject many of those issues to rational analysis, doing so with great clarity, rigor, and depth.
It is interesting, though, to set those parts of the essay aside and notice how his various concerns relate to present-day issues.
The Soviet Union also had particular ideological worries about information technology and computing machinery, which are echoed in Sakharov's letter, whereas in the US the programs were driven by pragmatic aims. Von Neumann was, of course, a major figure in the American effort.
https://web.archive.org/web/20170504041648/http://wilsonquarterly.com/stories/the-peculiar-history-of-computers-in-the-soviet-union/
Sakharov states, for example:
We also must not forget the very real danger mentioned by Norbert Wiener in his book Cybernetics, namely the absence in cybernetic machines of stable human norms of behavior. The tempting, unprecedented power that mankind, or, even worse, a particular group in a divided mankind, may derive from the wise counsels of its future intellectual aides, the artificial "thinking" automata, may become, as Wiener warned, a fatal trap; the counsels may turn out to be incredibly insidious and, instead of pursuing human objectives, may pursue completely abstract problems that had been transformed in an unforeseen manner in the artificial brain.
Such a danger will become quite real in a few decades if human values, particularly freedom of thought, are not strengthened, if alienation is not eliminated.
By the way, I've always felt von Neumann deserves credit for both the US and Soviet bombs, rather than Teller, Ulam, Sakharov, etc., since he did so much for the development of modern computing machinery. We should give thanks for all the wondrous technology of the modern world; he deserves a full share of the credit, including for our space program, Silicon Valley, modern medicine, and the technology that actually addressed many of the problems Sakharov alluded to, such as global warming, world hunger, ignorance, and want. Von Neumann's work on computing machinery is putting us on a true path to utopia!
Here's part of a video that describes von Neumann's apparent vision, although it does not refer to him:
https://youtu.be/3K25VPdbAjU?si=SsDMfUX_7NO_NOhC
There are historical reports that he viewed the nuclear program mainly as an opportunity to push for advanced computers and technology, to realize his long-term vision of unlimited economic and technological growth. Nuclear weapons would, in his view, help create the conditions for this to take place, by either creating peace or removing adversaries who would frustrate it.
Some recent estimates suggest AGI may arrive sooner than the timetable given here, perhaps within the next one to five years.
Von Neumann, by the way, apparently made a famous mistake in his understanding of quantum theory; it was seemingly too hard for even the smartest human to understand clearly:
https://www.nature.com/articles/nphys1899
Hey 1/12/2024 6:28 AM,
What is AGI? Adjusted Gross Income?
Don't assume everyone is on the same page.
Von Neumann would have no place in the current LANL lab.
AGI is artificial general intelligence, a loosely defined term for an AI that could substitute for humans in most tasks:
https://en.wikipedia.org/wiki/Artificial_general_intelligence
One key to a more human-like AI was recently discussed in this paper:
https://www.nature.com/articles/s41593-023-01514-1
The idea is that rather than learning from extremely large datasets, "training" a neural network might be done from a smaller number of examples, as a sort of on-the-job training. Currently, neural networks learn in a more idiot-savant fashion, for example from datasets consisting of much of the entire internet.
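As a loose illustration of that contrast (a minimal sketch of few-shot adaptation, not the method from the paper above; the model, data, and hyperparameters here are all invented), something like the following in PyTorch:

    # Illustrative sketch only: adapting a small network from a handful of
    # examples ("on-the-job" style), as opposed to pretraining on a
    # web-scale corpus. Model, data, and hyperparameters are invented here.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Pretend this tiny classifier was already pretrained elsewhere; we only adapt it.
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

    # Eight labelled examples stand in for on-the-job experience,
    # rather than a dataset scraped from the entire internet.
    support_x = torch.randn(8, 16)          # 8 examples, 16 features each
    support_y = torch.randint(0, 2, (8,))   # binary labels

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # A short adaptation loop over the small support set.
    for step in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(support_x), support_y)
        loss.backward()
        optimizer.step()

    print("loss after few-shot adaptation:", loss.item())

Whether anything this simple captures what the brain does with sparse experience is exactly the open question; the sketch only shows the "small number of examples" framing.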
There are, of course, other aspects to human cognition, such as the reward systems in the brain that allow behaviors to be planned, prioritized, and carried out; systems related to conscious awareness that focus activity on relevant tasks; the use of concepts and symbols for mathematics; and the ability to integrate various kinds of sensory data.
The human brain, by the way, may have no better computational capacity than current high-end GPUs:
https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/#id-1-introduction
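For a rough sense of scale (my own back-of-the-envelope round numbers, not figures quoted from the report or any GPU datasheet): if brain-equivalent computation falls somewhere around 10^13 to 10^17 FLOP/s, as that report roughly suggests, and a high-end GPU delivers on the order of 10^15 FLOP/s in low precision, then:

    # Back-of-the-envelope only; these round numbers are my assumptions,
    # not figures taken verbatim from the linked report or a vendor datasheet.
    brain_low, brain_high = 1e13, 1e17   # rough range of brain-equivalent FLOP/s
    gpu = 1e15                           # order of magnitude for a high-end GPU, low precision

    print(f"GPU / high brain estimate: {gpu / brain_high:.0e}")   # ~1e-02
    print(f"GPU / low  brain estimate: {gpu / brain_low:.0e}")    # ~1e+02
    # i.e. somewhere between a hundredth of a brain and a hundred brains,
    # depending on which end of the estimate one takes.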
Contrary to popular belief, the human brain is essentially an ordinary primate brain, linearly scaled up in size:
https://www.frontiersin.org/articles/10.3389/neuro.09.031.2009/full