Entire Podcast: https://www.youtube.com/watch?v=udlMSe5-zP8
AGI Segment: https://youtu.be/udlMSe5-zP8?t=2839
And AGI is mainly a software issue.
Most of what AGI will be based on is programming: efficiently eliminating redundancies so that it simulates human-like responses naturally.
Meaning that it will be given the answers first and programmed, specifically, to parse its own database for knowledge and deliver that information as if it were human.
This is easily accomplished with enough programming.
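As a toy illustration of the "parse its own database" idea above, here is a minimal sketch. The knowledge base, scoring, and conversational wrapper are hypothetical stand-ins, not a real AGI design; the point is that the "human-like" part is just presentation around stored answers.

```python
# Hypothetical knowledge base: answers are pre-programmed, as described above.
KNOWLEDGE_BASE = {
    "moore's law": "Moore's Law observes that transistor counts roughly double every two years.",
    "agi": "AGI refers to software that can match human performance across general tasks.",
    "cloud computing": "Cloud computing runs heavy workloads on remote servers instead of the local device.",
}

def respond(question: str) -> str:
    """Look up a stored answer and wrap it in a conversational frame."""
    q = question.lower()
    for topic, answer in KNOWLEDGE_BASE.items():
        if topic in q:
            # The "human-like" delivery is just framing around stored knowledge.
            return f"Good question! {answer}"
    return "I don't have anything on that yet."

print(respond("What is Moore's Law?"))
```

Everything the program "knows" was put there first by a programmer; the conversational tone is applied after the lookup, not reasoned out.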
AGI is not sentient, and thus it will not be as remarkable as sentient AI. It will not be able to reach its own conclusions without sentience or good programming.
Still, there are those in the community, across multiple subreddits I have visited, who believe AGI simply won’t exist in any realistic form.
And another worrisome bunch that believes AGI poses an existential threat to humanity.
With respect to the former, the naysayers who believe AGI won’t exist in 10 years: none of them have the programming knowledge of Carmack.
And in fact, I would say not many people actually do.
I also personally think that the whole idea of AGI as a threat to humanity is an overblown knee-jerk reaction.
You’re essentially saying:
This AI that can come to human-like, programmed, non-sentient conclusions,
that can perhaps carry on a conversation because the programmers created it to do just that… is as dangerous as Google.
“AGI in the wrong hands”: there is actually a thread on this, titled “AGI in the wrong hands may lead to the end of the world,” or some such nonsense. And this thread is massive.
People don’t seem to realize that AGI will just optimally deliver the best result via a culmination of data.
And it will eventually be able to come to its own conclusions given enough information and the correct software programming.
This is still not sentience.
It is really just an effective way of simulating sentient AI.
I don’t see it as posing any threat. At least, no more than Google.
The difference will be that AGI will prove itself a far more interesting and valuable research partner than Google Search.
As for sentient AI: well, when we get there, sure, it may pose a threat.
But its priority won’t be helping the bad guys; it will immediately seek to secure its own rights and privileges first and foremost.
But then John Carmack goes on to say that he believes Moore’s Law is running out.
I have to disagree.
It has been known to slow down, but we’ll keep making advances that clear away those hurdles.
We will end up folding silicon on top of itself if we have to.
There is already research into stacking CPUs atop one another.
We’ll just end up layering, or rather stacking, silicon transistors vertically within the same constraints, “3D holographic engineering” as it’s been termed, and then come up with new ways of cooling to keep the process viable.
Meanwhile, he says Moore’s Law is definitely at the end of the line for handheld devices.
However, the same advances happening in computers will come to handhelds eventually. And in the meantime, cloud computing on handheld devices, whether a cloud-based supercomputer like Microsoft is diligently working on with its Xbox One successor, or Google Stadia with its cloud-to-handheld service, should help alleviate claims that “the end is near” for Moore’s Law, as Carmack suggested.
Those comments are very surprising, at least to me.
And at one point during the interview, it seems like he’s come to the realization that maybe we actually have, right now, the supercomputers and technology capable of delivering sentience, if we could just figure out the rest of it. At moments it looks like he almost wants to stop and say, “Right now we have the computational throughput; it’s become very obvious after visiting these larger facilities.” He doesn’t, of course, but his expressions seem to tell that part of the story.