Artificial intelligence, aka AI (strong thesis)
ARTIFICIAL INTELLIGENCE, aka AI (strong thesis): the two-part thesis which says (i) that rational human intelligence can be explanatorily and ontologically reduced to Turing-computable algorithms and the operations of digital computers (aka the thesis of formal mechanism, as it’s applied to rational human intelligence), and (ii) that it’s technologically possible to build a digital computer that’s an exact counterpart of rational human intelligence, such that this machine not only exactly reproduces (aka simulates) all the actual performances of rational human intelligence, but also outperforms it (aka the counterpart thesis).
See also the entries on “artificial intelligence, aka AI (weak thesis),” “Turing machines,” and “Gödel’s incompleteness theorems.”
Controversy: The strong AI thesis is not only immensely controversial, but also strongly apt to be seriously muddled, for at least three reasons.
First, the strong AI thesis is very often confused with the weak AI thesis, but (i) the weak AI thesis is itself ambiguous as between a non-trivial version and a trivial version, and (ii) even if both of the versions of the weak AI thesis were true, nevertheless strong AI could still be false.
Second, the strong AI thesis, as such, overlooks the fact that rational human intelligence is also conscious. Hence, if strong AI were true, then human consciousness would also have to be explanatorily and ontologically reducible to Turing-computable algorithms and the operations of digital computers; this is equivalent to the materialist/physicalist thesis of metaphysical functionalism about the mind-body problem, which in turn puts an extra-heavy burden of proof on defenders of strong AI.
Third, the very idea of “being artificial” is ambiguous as between (i) being mechanical, as opposed to being organic, and (ii) being able to be built, constructed, or synthesized, as opposed to not being able to be built, constructed, or synthesized, for whatever reason. But (i) and (ii) are mutually logically independent of one another: something could be mechanical but not buildable, constructible, or synthesizable (for example, digital computations involving more digits or computations than there are particles or future moments of time in the cosmos), and conversely something could be buildable, constructible, or synthesizable but not mechanical (for example, certain exactly reproducible uncomputable, non-equilibrium thermodynamic biochemical processes, including organismic processes).*
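To make the first half of that point concrete, here is a rough back-of-the-envelope comparison; it assumes only the commonly cited estimate of about 10^80 particles in the observable universe, and the specific numbers are illustrative rather than drawn from the entry itself:

```latex
% A brute-force computation over just 300 binary digits already requires on
% the order of 2^300 distinct steps or states, whereas the observable
% universe is commonly estimated to contain only about 10^80 particles:
\[
  2^{300} \;\approx\; 2.04 \times 10^{90} \;\gg\; 10^{80}.
\]
% So such a computation is perfectly well-defined as a mechanical
% (Turing-computable) procedure, yet it is not physically buildable or
% executable with the resources of the cosmos.
```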
ELABORATION
Over and above the controversies, the strong AI thesis is demonstrably false, for at least five reasons; and the weak AI thesis is either false (the non-trivial version) or boringly trivially true (the trivial version).
1. Necessarily, intelligent rational human minds are alive, but systems that conform to the strong AI thesis are inherently mechanical and non-living, so the strong AI thesis is necessarily false.

2. Necessarily, intelligent rational human minds are embodied, but systems that conform to the strong AI thesis are possibly disembodied, so the strong AI thesis is necessarily false.

3. Necessarily, rational human knowledge requires a non-accidental connection between judgment or belief and truth, and also a non-accidental connection between true belief and justification, but systems that conform to the strong AI thesis only ever provide accidental content-connections, so the strong AI thesis is necessarily false.

4. Again, necessarily, rational human knowledge requires a non-accidental connection between judgment or belief and truth, and also a non-accidental connection between true belief and justification, but systems that conform to the strong AI thesis only ever provide accidental content-connections. So if the strong AI thesis were true, then our intelligent rational human minds would be nothing more than Turing machines, and therefore we couldn’t ever know the truth of the strong AI thesis; hence the strong AI thesis is self-undermining.

5. Kurt Gödel’s incompleteness theorems (Gödel, 1931/1967) say (i) that all Principia Mathematica-style systems of mathematical logic based on the Peano axioms for arithmetic will contain undecidable/unprovable sentences, and (ii) that no such system of mathematical logic can prove its own consistency, hence the truth of mathematical axioms has to be demonstrated outside those systems, for example by acts of rational human mathematical intuition. The theorems thereby guarantee that there will be uncomputable/undecidable mathematical axioms that only intelligent rational human minds can know, so systems that conform to the strong AI thesis inherently fall short of the actual performances of rational human intelligence, and therefore the strong AI thesis is false. (A compact formal statement of the two theorems is given just after this list.)
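For readers who want the two theorems themselves rather than a gloss, here is a compact statement in standard modern notation; the formulation below is the usual textbook one for a consistent, recursively axiomatizable theory T extending Peano arithmetic, supplied for convenience, and is not Gödel’s own 1931 wording or the entry’s:

```latex
% First incompleteness theorem: for any consistent, recursively axiomatizable
% theory T extending Peano arithmetic, there is an arithmetical sentence G_T
% that T neither proves nor refutes:
\[
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \neg G_T .
\]
% Second incompleteness theorem: no such theory T can prove its own
% consistency, where Con(T) is the arithmetized statement that T is
% consistent:
\[
  T \nvdash \mathrm{Con}(T).
\]
```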
Moreover, if the weak AI thesis says that not all but only some actual performances of rational human intelligence are exactly reproducible (aka can be simulated) on Turing machines (i.e., the non-trivial version), then, since the strong AI thesis is not only false but impossible, the non-trivial version of the weak AI thesis is false and impossible too.
But if the weak AI thesis says merely that some behavioral or formal features of some actual performances of rational human intelligence are either operationally or isomorphically representable on Turing machines (the trivial version), then this is indeed true, but at best boringly trivially true, since the very same thesis is true of even the simplest counting or calculating procedures, using for example one’s fingers, hockey pucks, or an abacus.
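As an illustration of just how low the bar is for the trivial version, here is a minimal sketch, in Python, of a Turing-machine-style program that performs unary addition (so “111+11” becomes “11111”); the simulator, the state names, and the transition table are illustrative inventions for this entry, not anything drawn from the sources it cites:

```python
# A minimal sketch: a tiny Turing-machine simulator running a unary-addition
# program ("111+11" -> "11111"). The point is only that the simplest counting
# and calculating procedures are trivially "operationally representable" on a
# Turing machine; nothing about rational human intelligence follows from this.

BLANK = "_"

# Transition table: (state, symbol) -> (symbol_to_write, head_move, next_state).
# head_move is +1 (right), -1 (left), or 0 (stay); "halt" stops the machine.
UNARY_ADD = {
    ("scan", "1"): ("1", +1, "scan"),          # walk right over the first summand
    ("scan", "+"): ("1", +1, "seek_end"),      # turn the '+' into a '1'
    ("seek_end", "1"): ("1", +1, "seek_end"),  # walk right over the second summand
    ("seek_end", BLANK): (BLANK, -1, "erase"), # step back onto the last '1'
    ("erase", "1"): (BLANK, 0, "halt"),        # erase one '1' to correct the count
}

def run(program, tape_string, start_state="scan", max_steps=10_000):
    """Run a transition table on an input string and return the final tape."""
    tape = dict(enumerate(tape_string))  # sparse tape; unwritten cells are blank
    head, state = 0, start_state
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, BLANK)
        write, move, state = program[(state, symbol)]
        tape[head] = write
        head += move
    cells = (tape.get(i, BLANK) for i in range(min(tape), max(tape) + 1))
    return "".join(cells).strip(BLANK)

if __name__ == "__main__":
    print(run(UNARY_ADD, "111+11"))  # prints "11111", i.e., 3 + 2 = 5
```

That a procedure like adding 3 and 2 on one’s fingers can be operationally mirrored by a table of this kind is exactly the sense in which the trivial version of the weak AI thesis is true, and exactly why its truth is boringly trivial.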