Obviously, reason remains a bedrock value in fields like science, engineering, medicine, and law. Certain kinds of questions can only be answered that way. But this becomes a problem when reason gets pushed into everything. Much in the humanities, arts, and areas of the social sciences simply can’t be reduced to rigid proofs or numbers. And quantification alone often falters when applied to multidimensional or intersecting matters. Some critics argue that economic worries lie behind the push to render everything in spreadsheet terms, and that the logic of business is intruding where it shouldn’t. This becomes an issue of fairness when judgments get made about programs or people that might be valued differently (and with different findings) if assessed by other means. These matters now drive ongoing debates over standardized testing.
After all, stratification drives testing, whether on an IQ test or the SAT. The first IQ tests were developed in 1908 by Alfred Binet to help French schools sort children. Shortly afterwards, Stanford University’s Lewis Terman modified the instrument into what would become the “Stanford-Binet IQ Test.”[ii] The U.S. military used the measures during World War I to assign different roles to troops, and American public schools soon started using IQ to identify “gifted” youngsters. Before long, eugenic beliefs in inherited traits began influencing U.S. testing. This led to racialized assumptions about intelligence differences, seemingly confirmed at the time by “scientific” data. Matters were worse still for those who scored poorly on the tests, especially the economically disadvantaged and people with learning differences. In the movement’s darkest moments, eugenics proponents used IQ results to label people “idiots” or “feebleminded,” branding them threats to the American gene pool. The result was the forced sterilization of some 65,000 Americans with low IQ scores, a campaign validated by the U.S. Supreme Court’s 1927 Buck v. Bell decision.[iii] Those sterilized were disproportionately poor or people of color. Government sterilization on the basis of intelligence or criminality continued in some states until the 1970s.
Opposition arose from legal, civil rights, and scientific camps, beginning in the 1970s with suits from the Southern Poverty Law Center on behalf of prison inmates. Later challenges accused the tests of cultural bias, especially in portions of the exams based on Eurocentric knowledge. Further scrutiny confirmed that performance on such tests was less a matter of “intelligence” per se than of an individual’s upbringing, socioeconomic status, and access to quality education. By the 1980s, psychologists like Howard Gardner would argue that IQ’s narrow focus excluded the “multiple intelligences” at work in people’s minds.[iv] Gardner said that an exclusive emphasis on reasoning ignored aptitude in language, communication, practical problem-solving, creativity, and ethical analysis. Later this argument was modified by neuroscientists, who deemphasized the idea of isolated brain functions in favor of a “general intelligence” capable of performing different tasks.[v] In recent years, accumulating evidence of unfairness in standardized testing has led many colleges and universities to end their use of exams like the SAT and ACT.[vi]
The “information age” of the twenty-first century brought even more attention to intelligence, as cognitive skills replaced physical work in many jobs. Consumer culture has become overrun with books, apps, and devices promising to improve one’s “brain power,” as well as so-called “smart” foods and drugs. Cognitive enhancement also underlies the nomenclature of smartphones, smart appliances, smart cars, home assistants, and wearables. Many of these items truly do increase human capacity, inasmuch as a device as commonplace as an iPhone now carries more than 250,000 times the data capacity of the guidance computer aboard Apollo 11.[vii] In this arena, “smartness” functions alternately as a metaphor for connectivity and for the ability of devices to operate semi-autonomously. Regardless of specific meanings, the appeal of all smart devices lies in their interface with a human mind.
Then there is artificial intelligence (AI), a technology rife with mixed feelings of hope and fear. Amid enthusiasm in the late twentieth century about rising computer capability, speculation swelled about the future of AI. Predictions began circulating about computers outpacing humans and taking on a life of their own, a scenario that has since become a famous science fiction trope. Such worries about runaway technology date to the machine age, with AI giving them renewed urgency. In the early 1990s, science fiction writer Vernor Vinge predicted what he termed a technological “singularity,” occurring when super-intelligent computers gained sufficient capacity to improve themselves without human involvement.[viii] Frightening many was Vinge’s claim that the singularity would happen in an unexpected “explosion.” Soon this became a favorite theme in futuristic stories about AI, many of which saw bad outcomes in a world ruled by machines. Tempering such paranoia have been futurists like More, who has argued that AI will evolve more gradually and in ways that can be anticipated and controlled.[ix] In More’s view, the more pressing threats from AI lie in job losses from easily automated tasks, as seen today in call centers and warehouses. (See Chapter 1.)
None of this has slowed the search for superintelligence, whether through additions to the body or external supports. Today these come together in a growing array of exotic devices. These include a technology known as a “multielectrode array” (MEA), entailing implants with thread-like electrodes (thousands in some cases) that precisely trigger parts of the body. Musk’s tech startup Neuralink, launched in 2016, has been developing MEAs to stimulate portions of the brain. Initially Neuralink has been working to alleviate physical paralysis, with hopes, according to Musk, of later creating “superintelligence.”[x] Neuralink claims its robotically performed insertion procedure is as “painless and safe” as laser eye surgery, although no patient has yet undergone the operation. Since 2016, a parallel project has also been working on neural interfaces, backed by $100 million from Braintree founder Bryan Johnson. Dubbed Kernel, the ambitious enterprise plans to use implants to help people with conditions like Parkinson’s disease, and later to “reboot the brain” to create real-life cyborgs.[xi]
Keep in mind that implants already are widely used in medicine. In cataract surgery, the natural lens of the eye is removed and replaced by an intraocular lens made of plastic or acrylic. First introduced in the 1950s, the procedure is now performed more than 3 million times annually in the U.S. alone.[xii] Heart pacemakers entered mainstream medicine in the 1960s, aided by the invention of the long-lasting lithium battery. Pacemakers use a tiny computer inserted under the skin to keep the heart beating consistently. Cochlear implants to improve hearing came into use in the 1970s. Such implants bypass parts of the peripheral auditory system to electrically stimulate the cochlear nerve, which sends sound messages to the brain. People with diabetes now can get an Eversense implant that monitors blood sugar, with results displayed on a mobile phone. This eliminates the need for painful finger-sticking to check blood glucose. And of course, various mechanical procedures long have been used on the brain itself. These date to the introduction in the 1930s of electroconvulsive therapy (ECT) for people unresponsive to other treatments. Today’s highly calibrated ECT treatments are done under general anesthesia and often produce minimal visible effects. Another, less invasive treatment called “transcranial direct-current stimulation” places low-current electrodes on the skull to reduce depression, with some research showing it can improve cognition.[xiii] A similar technology called “transcranial magnetic stimulation,” which places magnetic coils against the skull, already is in use to treat conditions like anxiety, bipolar disorder, and substance abuse.
Most of what I’ve just described is geared toward medical treatment rather than supplemental enhancement. Indeed, the vast majority of scientists working on machine-brain interfaces say their goal is healing rather than anything else.[xiv] But procedures introduced to serve pressing medical needs often tempt those who simply want more capacity. This means the future of intelligence enhancement is very much up for grabs. As always, questions of fairness haunt such projects, especially in research funded by the super-rich. If “prosthesis is the origin of human inequity,” as philosopher Bernard Stiegler has argued, there certainly is cause for concern.[xv] As the discussion above has highlighted, the history of intelligence measurement and testing, bound up with institutional inequity, has often worked to reinforce existing hierarchies and social biases. The question is not so much whether AI will “take over” humanity as whether humanity will use such technology to its own detriment.
[i] “Intelligence,” Cambridge Dictionary (2020) https://dictionary.cambridge.org/us/dictionary/english/intelligence (accessed May 14, 2020).
[ii] David Shenk, “The Truth about IQ,” The Atlantic (Jul 28, 2009) https://www.theatlantic.com/national/archive/2009/07/the-truth-about-iq/22260/ (accessed May 14, 2020).
[iii] Daphne Martschenko, “IQ Tests Have a Dark, Controversial History – But They’re Finally Being Used for Good,” Business Insider (Oct. 11, 2017) https://www.businessinsider.com/iq-tests-dark-history-finally-being-used-for-good-2017-10 (accessed May 14, 2020).
[iv] Howard Gardner, Frames of Mind: The Theory of Multiple Intelligences (New York: Basic Books, 1983).
[v] Kendra Cherry, “How General Intelligence Influences Cognitive Tasks,” verywellmind.com (Spr. 30, 2019) https://www.verywellmind.com/what-is-general-intelligence-2795210 (accessed May 26, 2020).
[vi] Scott Jaschik, “University of California Board Votes Down the SAT and ACT,” Inside Higher Ed (May 22, 2020) https://www.insidehighered.com/admissions/article/2020/05/22/university-california-votes-phase-out-sat-and-act (accessed May 24, 2020).
[vii] David Masci, “The Scientific and Ethical Dimensions of Striving for Perfection,” Pew Research Center (Jul. 26, 2016) https://www.pewresearch.org/science/2016/07/26/human-enhancement-the-scientific-and-ethical-dimensions-of-striving-for-perfection/ (accessed May 15, 2020).
[viii] Ben Goertzel, in Max More and Natasha Vita-More, The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future (Malden, MA: Wiley-Blackwell, 2013) p. 129.
[ix] Ibid.
[x] Natashah Hitti, “Elon Musk’s Implant Will Merge Humans with AI,” Dezeen.com (Jul. 22, 2019) https://www.dezeen.com/2019/07/22/elon-musk-neuralink-implant-ai-technology/ (accessed May 15, 2020).
[xi] “Kernel,” bryanjohnson.co (n.d.) https://bryanjohnson.co (accessed May 15, 2020).
[xii] “The 11 Most Implanted Medical Devices in America,” businessinsider.com (Jul. 11, 2011) https://www.businessinsider.com/the-11-most-implanted-medical-devices-in-america-2011-7 (accessed May 28, 2020).
[xiii] “Transcranial Direct-Current Stimulation,” Johns Hopkins Psychiatry and Behavioral Sciences (2020) https://www.hopkinsmedicine.org/psychiatry/specialty_areas/brain_stimulation/tdcs.html (accessed May 25, 2020).
[xiv] “The Scientific and Ethical Dimensions of Striving for Perfection.”
[xv] Bernard Stiegler, as cited in The Prosthetic Impulse, p. 244.