On a recent brief visit to Washington, I ran up the Exorcist Steps, trailing far behind my 16-year-old son but making it slowly to the top. That put me in mind of the demon Pazuzu, whom I’ve invoked as an emblem of using AI to combat malign uses of AI (much as the demonic antagonist of the 1970s book and movie was, in Mesopotamian mythology, a protector against other powerful entities). This is happening. “To misinformation researchers, AI is a scourge—and a powerful new tool,” reports the journal Science, as large language models sift social-media posts to spot the ill-founded and bogus.
A related Science article discusses Hany Farid, “godfather of digital forensics,” who, foreshadowing the emerging profession of “reality notary,” uncovers deepfakes through such techniques as pinpointing misaligned shadows and lines of convergence. The Iran War has brought an explosion of fabricated explosions, among other phony scenes, with fakery now often evident in subtle clues rather than gross distortions like extraneous limbs.
Another Pazuzu-type study used AI to assess reliance on AI in scientific studies, discerning that AI-using scientists had greater impact but also narrower focus, their topics less likely to bring major advances and follow-on engagement. The researchers “found that authors of such papers published over three times as many articles as those who did not avail themselves of AI, and made faster progress in career advancement,” writes science journalist George Musser, in QSpace, published by the Foundational Questions Institute (FQxI). But, Musser observes, this feeds into a broader problem of scientists prioritizing conventional lines of research, playing it safe to compete for scarce funding and jobs.
Mathematician Terence Tao teamed up with art historian Tanya Klowden on a preprint paper assessing AI’s impact on “mathematical methods and human thought.” Noting that they come from academic fields “that are frequently viewed as polar opposites,” but that both have found AI useful in their work, they argue such tools “have the potential to radically augment our natural human abilities and they are capable of expanding what is possible beyond what we humans could do individually or within the limits of our own collective capacity.” Yet, they worry, “the current climate where AI is being implemented simultaneously in virtually every sphere of society, without consideration for whether it provides the end users any meaningful benefit, only serves to alienate and frustrate people in all walks of life.”
After discussing relatively near-term problems such as the environmental impact of AI’s massive electricity consumption, and the prospect that “entire areas of academic discourse could be drowned out by a flood of low-quality AI-generated content,” Tao and Klowden look further ahead to a scenario where “the current weaknesses of AI tools are satisfactorily resolved, and their capabilities now match or exceed that of expert humans in all practical dimensions, rendering the risk-management philosophy obsolete.” This is sometimes called “artificial general intelligence,” they mention in a footnote, “although there is no consensus on the precise definition of this term.”
The authors sketch out several ways in which humans might react to such a scenario. Some might focus on practicalities of achieving technical tasks, regardless of the respective contributions of humans and AIs. Others may characterize AIs as lacking something, such as true “soul” or “understanding,” downplaying what AIs achieve. Yet another group may denigrate the value of human accomplishments: “In the more extreme versions of this position, the very exercise of human intellect is viewed as an undesirable and tedious activity, which ought to be replaced by automation as quickly as possible, in order to free up time and mental space for more leisurely or hedonistic pursuits.”
Instead, Tao and Klowden propose “a cognitive analogue of the Copernican revolution in astronomy.” Much as Nicolaus Copernicus’ heliocentric model displaced Earth from the center of the universe, the authors argue, “now we are discovering (or creating) other ‘planets’ of intelligence comparable in many ways to our own, while simultaneously being quite distinct in many aspects.” They write: “While our interests and attachments will still largely be tied to the human intellectual sphere, its relationship with other forms of intelligence can be explored, both for practical purposes of more efficiently achieving various real-world objectives, as well as for more philosophical reasons, such as achieving an external perspective on human cognition that was previously difficult to attain.”
