In the first episode of our “Artificial Intelligence from Hype to Reality” series, we did some serious time travel and revisited the eventful history of Artificial Intelligence. The second episode shed light on what fuels Transhumanism, zooming in on its most radical claims.
In this new video, Professor Dan Cautis and QUALITANCE Chief Innovation Officer Mike Parsons call the Singularity into question, examining some of the major technological and philosophical challenges in achieving it.
Challenges in achieving Singularity
Professor Cautis questions the odds and explains why we should view such forecasts with a healthy dose of scrutiny. What’s more, the show asks and answers a series of tough questions: Does the computer work the same way the brain does? Are computers capable of meaning? Does intelligence really need a body?
Here are some of the “aha!” moments you’ll take away from the show:
- Why the Law of Accelerating Returns and Moore’s Law cannot offer any guarantees.
- Singularity is not only about hardware. Can software complexity keep pace with the technological progress predicted by Moore and Kurzweil?
- Is computationalism a valid assumption for Artificial Intelligence? Why the computer does not work like the human mind | The Penrose & Lucas Argument | Gödel’s incompleteness theorem.
- Syntax vs Semantics, or why computers don’t have understanding | Searle’s Chinese Room Argument.
- Does intelligence need a body? The pros & cons of Cartesian Dualism (“disembodied intelligence”) | Hans Moravec vs Hubert Dreyfus.
- Will we achieve Singularity in 2045, as Kurzweil predicted? What should we do next?
Two key technological assumptions to consider
Singularitarians and transhumanists claim that Singularity can become a feasible reality based on a handful of technological arguments, Kurzweil’s Law of Accelerating Returns and Moore’s Law chief among them.
Needless to say, this new episode stands as an open invitation to critical thinking. In fact, Professor Cautis recommends that we handle the AI enthusiasm with care and that we pay equal attention to the critique of the technological and philosophical assumptions that are fueling the Singularity hype.
In this article, we will only zoom in on two of the technological assumptions with the greatest impact. The philosophical assumptions deconstructed in this episode will be the subject of a future article. So don’t forget to check in for updates.
#1 Kurzweil’s Law of Accelerating Returns
According to Kurzweil, “an analysis of the history of technology shows that technological change is exponential, contrary to the common-sense ‘intuitive linear’ view.” As he further explains, “we won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate).” Clearly, Kurzweil’s theory assumes that since such growth already happened in the past, it will also happen in the future.
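Kurzweil’s own figure comes from a more elaborate model, but a back-of-the-envelope sketch lands in the same order of magnitude. The sketch below assumes (purely for illustration — the doubling period is our simplification, not Kurzweil’s exact parameter) that the rate of progress doubles every decade:

```python
# Back-of-the-envelope sketch of the exponential-progress claim.
# Assumption (ours, for illustration): the rate of progress doubles
# every decade, starting from 1x the year-2000 rate.

def equivalent_years(decades: int, doubling_per_decade: float = 2.0) -> float:
    """Cumulative progress over `decades`, in 'year-2000-equivalent' years."""
    total = 0.0
    rate = 1.0
    for _ in range(decades):
        total += 10 * rate      # 10 calendar years at the current rate
        rate *= doubling_per_decade
    return total

print(equivalent_years(10))  # 100 calendar years -> 10230.0 equivalent years
```

Under this toy assumption, a linear century of 100 years becomes roughly ten thousand “equivalent” years — the same ballpark as Kurzweil’s 20,000 figure, and the whole calculation stands or falls with the doubling assumption.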
Kurzweil’s opponents consider the law just a series of assertions about how past rates of scientific and technical progress can predict the future rate. Hence, like other attempts to extrapolate the future from past results, Kurzweil’s law will work until it won’t.
On the other hand, in his analysis of evolution, Kurzweil brings up the observation that progress in nature is happening on an exponential scale, but humans cannot easily notice and understand it because they’re much more used to linear growth. Furthermore, he claims that the law of accelerating returns is present everywhere in nature.
Kurzweil’s critics have pointed out that the law of accelerating returns is not a law of nature, like gravity or the laws governing chemical bonds. Moreover, nothing in nature can grow forever; unbounded growth would eventually exhaust the available resources. At some point, everything saturates.
To be clear, Kurzweil does not claim that any single trend goes on to infinity. In fact, his law of accelerating returns does not describe a single phenomenon; it’s a combination of phenomena, each of which saturates at some point. He bases the law on a succession of paradigm shifts: when one paradigm saturates, a new paradigm is supposed to come in and pick up the curve. Yet there’s no way of telling if or when the next paradigm shift will happen.
#2 Moore’s Law
Moore’s Law is another strong argument for the proponents of Transhumanism and Singularity.
In 1965, Intel co-founder Gordon Moore observed that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented, and he predicted that this trend would continue for the foreseeable future.
Is it really possible for the capabilities of a computer to keep growing at such a rate? Probably not, since packing transistors ever closer together is already approaching a natural physical limit.
Just as in Kurzweil’s case, the logistic function suggests what to expect: the initial stage of growth is approximately exponential (geometric); then, as saturation begins, growth slows to linear (arithmetic); and at maturity, growth stops. Hence, we may no longer be able to double the number of transistors every 18 months. According to the critics, Moore’s Law will most likely slow down.
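The S-curve the critics invoke can be made concrete. A minimal sketch of the standard logistic function, with illustrative parameters rather than values fitted to real transistor data:

```python
import math

def logistic(t: float, capacity: float = 1.0,
             rate: float = 1.0, midpoint: float = 0.0) -> float:
    """Standard logistic (S-curve): near-exponential at first, then saturating."""
    return capacity / (1.0 + math.exp(-rate * (t - midpoint)))

# Early on, growth looks exponential; near the midpoint it is roughly
# linear; past it, growth flattens toward the carrying capacity.
early = logistic(-4)   # small: still in the near-exponential phase
mid   = logistic(0)    # exactly half capacity: fastest, roughly linear growth
late  = logistic(4)    # close to capacity: saturating
print(early, mid, late)
```

Seen from inside the early phase, a logistic curve is indistinguishable from a pure exponential — which is exactly the critics’ point: observing decades of exponential growth tells us nothing about where the saturation point lies.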
Beyond that, it’s not certain if and when the next paradigm, such as holography, will arrive to keep Moore’s Law going. Hence, the chances that we’ll experience it in the time frame predicted by Kurzweil are slim.
Moore’s Law, however, has the merit of highlighting the extraordinary growth in hardware capabilities – memory, compute cycles, storage. Let’s not forget that these developments accelerated the victorious return of Machine Learning, finally catching up to the promises made back in the ’50s and ’60s.
Singularity is also about software
Professor Cautis is of the opinion that for Singularity to happen we need to be prepared not only with the hardware, but also with the software.
In his opinion, the debate around Singularity doesn’t pay enough attention to software. The general assumption is that the law of accelerating returns applies to software too, when in fact it doesn’t. Kurzweil seems not to have noticed that, over the last few decades, while hardware progressed incredibly fast, software actually slowed down.
At the same time, progress in software is not easy to achieve. Software bloat in complex systems is a real phenomenon, and it’s not at all clear how to make software fast, reliable, and easy to develop. For the time being, software is still a long way from living up to the expectations set by the law of accelerating returns.
Check in for our next article inspired by our Artificial Intelligence video series, as we’ll be tackling the philosophical assumptions that led singularitarians and transhumanists to believe that Singularity is near.