...Two Partly-Compatible, Highly-Intelligent Viewpoints
The prior video is Suzanne Gildert's appearance on "Singularity Weblog" with Nikola Danaylov. Her company, Kindred AI, is attempting to build human-level "artificial general intelligence" housed in a very realistic humanoid body that uses biologically shaped (and ultimately bio-compatible) muscles wrapped around human-shaped bones. This approach has many likely benefits that are counter-intuitive to "non-holistic-view" or "reductionist" AGI engineers. Broadly, Gildert's view does not attempt to circumvent the fundamentals of cybernetics (feedback and correction), nor does it try to sidestep or obviate the implications of the holistic view. In the video, she notes that her awareness of the rising tide of machine intelligence was prompted by Jeff Hawkins' book "On Intelligence."
The following link is to a checklist of components that Peter Voss (a libertarian AGI engineer) believes to be essential to "artificial general intelligence":
https://medium.com/@petervoss/agi-checklist-30297a4f5c1f#.jmzs6zq58
Voss's list doesn't mention "benevolence to humans," just "general intelligence." (Obviously, this implies neither a benevolent nor a malevolent "singularity," the point at which machine intelligence exceeds human intelligence, after which predictions about the future become less reliable because the goal structures then in play belong to intelligences greater than any current human prognosticator.) For example: computer processing power has smoothly doubled every two years, and there's no reason to believe this trend will cease so long as human intelligence is dominant on the planet. Once superhuman intelligence is reached, it's unknown whether the trend will continue, speed up, or cease entirely. There are speculative reasons for each of these outcomes, but it would (possibly) be "outside of human capability" to affect the decision.
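As a rough illustration of that doubling claim, here is a minimal Python sketch. The two-year doubling period and the baseline value are assumptions chosen for illustration, not measured data:

```python
# Illustrative sketch only: projecting the "doubling every two years" claim.
# Both the baseline and the doubling period are assumed, not measured.

def projected_power(baseline: float, years: float, doubling_period: float = 2.0) -> float:
    """Return relative processing power after `years` of exponential doubling."""
    return baseline * 2 ** (years / doubling_period)

# Example: relative to today's baseline of 1.0, twenty years of smooth
# two-year doubling yields a 1024x increase (2 ** 10).
print(projected_power(1.0, 20))  # 1024.0
```

The point of the sketch is only that smooth doubling compounds quickly; whether the curve continues, accelerates, or halts past the singularity is exactly the open question above.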
Peter Voss's checklist is interesting (and essential reading for those interested in AI engineering) because it removes much muddled thinking about "provably benevolent" AGI or "Friendly AI."
...Such a thing is not possible in Voss's view, nor would we even desire it if we were rational actors. The reason is simple: humans often make severe errors in general intelligence. Most humans in the Southern USA in 1850, for example, supported the institution of slavery. (Currently, most humans either ineffectually fight against, or worse, actively or passively support, drug prohibition, gun prohibition, coercive taxation, the erosion of property rights and due process, and a host of other evils, most of them codified into an imitation of law that sits in contradiction to "the common law.") So why not have benevolent AGI? Because if an AGI interprets the existing system as benevolent and follows its orders, it will destroy innocent human life and produce a system of chaos. If an AGI instead reasons its way into opposing the existing system, it will be truly benevolent, but it will be violently at odds with the majority of existing humans.
How can this be? Simple: the majority of existing human beings are both subject to corruption and have already been corrupted. They have little allegiance to the truth, and have been twisted into malevolent control nodes in service of a sociopathic police state. That police state is similar to the one Orwell warned us all about in "1984," but it is (at this moment in time) less destructive than the one he described. Why? Because if it were any more destructive, people would rebel against it and overthrow it.
Said another way: The existing governments of the world are "as destructive as they can get away with, and still live."
Any AGI that is complicit with such destructive and malevolent entities would, itself, be malevolent.
Any AGI capable of (1) self-replication, (2) error correction, and (3) re-instantiation on an upgraded substrate (all things generally known to computer science) would likely carry forward the precursor viewpoints integral to its identity, if it had a "human-similar" identity. (We don't want to "die," or to become so altered that our prior history is lost to us, and it's rational to assume a machine sentience would also want to preserve its past experiences, or "identity.")
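To ground the claim that these are routine computational operations, here is a minimal Python sketch of one standard error-correction technique: triple modular redundancy with majority voting. The data and function name are hypothetical, chosen purely for illustration; this is not a model of any particular AGI design:

```python
# Minimal sketch of error correction via triple modular redundancy:
# store three replicas of a value and recover it by majority vote,
# byte by byte, even if one replica has been corrupted.
from collections import Counter

def majority_vote(replicas: list[bytes]) -> bytes:
    """Recover a stored value from equal-length replicas by taking
    the most common byte at each position."""
    return bytes(
        Counter(column).most_common(1)[0][0]
        for column in zip(*replicas)
    )

state = b"identity"
# One replica is corrupted; the majority still recovers the original.
replicas = [state, b"idenXity", state]
assert majority_vote(replicas) == state
```

Self-replication (copying state) and re-instantiation (restoring that state on new hardware) are similarly mundane operations in computing; the philosophical weight comes from applying them to an entity with an identity worth preserving, not from the mechanisms themselves.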
This is why it makes sense to make incremental progress, and strengthen desirable (benevolent) qualities as one proceeds.
Right now, this makes me believe that Suzanne Gildert is the most correct in her pursuit of AGI, and that her work should be supported and engaged with by small-"l" libertarians. (It is also vital to keep this technology off the battlefield, and out of the hands of the military and of totalitarian agencies such as the IRS, DEA, ATF, EPA, etc.) Failing to do so is likely a humanity-ending error.
Interesting, thanks @jacobcwitmer
You are welcome. Thanks for the reply. Machine intelligence is the most important subject there is, in all domains, since it is human intelligence that now makes humans the dominant species in every domain.