Here's something from Sakharov from the 1960s, when he was still a supporter of communism (his views changed, I believe, as the state went after him). Parts of it are still relevant: he is highly critical of bureaucracy, mass media, and the influence of the rich on politics in capitalist systems, and he expresses worries about AI in general, censorship, and the psychological manipulation of the general public. He also criticizes Chinese communism and Mao in particular, as well as Stalinism, fascism, the arms race, environmental damage, and poverty; it is a laundry list of complaints and worries, most of them justified at least in part. He also puts forth a utopian plan for world peace and prosperity that looks naive in retrospect. It is well worth reading in full:
https://www.sakharov.space/lib/thoughts-on-peace-progress-and-intellectual-freedom
Comments
It is interesting, though, to set those parts of the essay aside and notice how his various concerns relate to present-day issues.
https://web.archive.org/web/20170504041648/http://wilsonquarterly.com/stories/the-peculiar-history-of-computers-in-the-soviet-union/
Sakharov states, for example:
We also must not forget the very real danger mentioned by Norbert Wiener in his book Cybernetics, namely the absence in cybernetic machines of stable human norms of behavior. The tempting, unprecedented power that mankind, or, even worse, a particular group in a divided mankind, may derive from the wise counsels of its future intellectual aides, the artificial "thinking" automata, may become, as Wiener warned, a fatal trap; the counsels may turn out to be incredibly insidious and, instead of pursuing human objectives, may pursue completely abstract problems that had been transformed in an unforeseen manner in the artificial brain.
Such a danger will become quite real in a few decades if human values, particularly freedom of thought, are not strengthened, if alienation is not eliminated.
https://youtu.be/3K25VPdbAjU?si=SsDMfUX_7NO_NOhC
There are, of course, historical reports that he viewed the nuclear program mainly as an opportunity to push for advanced computers and technology, in order to realize his long-term vision of unlimited economic and technological growth. In his view, nuclear weapons would help create the conditions for this, either by securing peace or by removing the adversaries who would frustrate it.
Of course, some recent estimates suggest that AGI may arrive sooner than the timetable given here, perhaps within the next one to five years.
https://www.nature.com/articles/nphys1899
What is AGI? Adjusted Gross Income?
Don't assume everyone is on the same page.
Von Neumann would have no place in the current LANL lab.
https://en.wikipedia.org/wiki/Artificial_general_intelligence
One key to a more human-like AI was recently discussed in this paper:
https://www.nature.com/articles/s41593-023-01514-1
The idea is that rather than learning from extremely large datasets, a neural network might be "trained" on a much smaller number of examples, as a sort of on-the-job training. Currently, neural networks learn in a more idiot-savant fashion, for example from datasets consisting of much of the entire internet.
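Just to make the contrast concrete (this is not the method from the Nature Neuroscience paper, only a generic illustration): one simple way to learn a new category from a handful of examples is to store a class "prototype" and classify by nearest prototype. The category names and random "features" below are made up, standing in for the output of some hypothetical pretrained encoder.

```python
# Minimal sketch of few-shot learning via nearest-prototype classification.
# Not any particular paper's method; random vectors stand in for features
# produced by a hypothetical pretrained encoder.
import numpy as np

rng = np.random.default_rng(0)

def make_class_samples(center, n, dim=16, noise=0.3):
    """Simulate encoder features for one category as noisy copies of a center."""
    return center + noise * rng.standard_normal((n, dim))

# Two data-rich categories plus one brand-new category seen only 5 times.
centers = {name: rng.standard_normal(16) for name in ["cat", "dog", "axolotl"]}
support = {
    "cat": make_class_samples(centers["cat"], 500),       # plenty of data
    "dog": make_class_samples(centers["dog"], 500),       # plenty of data
    "axolotl": make_class_samples(centers["axolotl"], 5), # few-shot: 5 examples
}

# "Training" on the new category is just averaging its few examples.
prototypes = {name: feats.mean(axis=0) for name, feats in support.items()}

def classify(x):
    """Assign x to the class whose prototype is nearest (Euclidean distance)."""
    return min(prototypes, key=lambda name: np.linalg.norm(x - prototypes[name]))

# Evaluate on fresh samples of the rarely seen category.
test = make_class_samples(centers["axolotl"], 100)
acc = np.mean([classify(x) == "axolotl" for x in test])
print(f"accuracy on the 5-shot category: {acc:.2f}")
```

In practice the heavy lifting is done by whatever produced the features in the first place (typically large-scale pretraining), so this only illustrates the "learn the new thing from a few examples" step, not a full alternative to big datasets.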
There are, of course, other aspects to human cognition: the reward systems in the brain that allow behaviors to be planned, prioritized, and carried out; systems related to conscious awareness that focus activity on relevant tasks; the use of concepts and symbols, as in mathematics; and the ability to integrate various kinds of sensory data. A toy illustration of reward-driven prioritization follows.
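As a very loose analogy for "reward systems that prioritize behaviors" (and emphatically not a model of the brain), here is a standard epsilon-greedy bandit: actions whose observed reward is higher end up being chosen more often. The action names and payoffs are invented for illustration.

```python
# Toy epsilon-greedy multi-armed bandit: reward feedback gradually
# prioritizes the higher-paying behaviors. Illustrative only.
import random

ACTIONS = ["forage", "rest", "explore"]                      # hypothetical behaviors
TRUE_REWARD = {"forage": 1.0, "rest": 0.2, "explore": 0.5}   # made-up average payoffs

estimates = {a: 0.0 for a in ACTIONS}   # running estimate of each action's reward
counts = {a: 0 for a in ACTIONS}
epsilon = 0.1                           # small chance of trying a non-best action

random.seed(0)
for step in range(2000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)              # occasional exploration
    else:
        action = max(estimates, key=estimates.get)   # exploit current best estimate
    reward = TRUE_REWARD[action] + random.gauss(0, 0.5)  # noisy feedback
    counts[action] += 1
    # Incremental running average of observed reward for this action.
    estimates[action] += (reward - estimates[action]) / counts[action]

print({a: round(estimates[a], 2) for a in ACTIONS})
print({a: counts[a] for a in ACTIONS})  # "forage" should dominate by the end
```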
https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/#id-1-introduction
Contrary to popular belief, the human brain is actually an ordinary primate brain that has evolved to be larger:
https://www.frontiersin.org/articles/10.3389/neuro.09.031.2009/full