What Computer-Generated Language Tells Us About Our Own Ideological Thinking
Earlier this year, the San Francisco-based artificial-intelligence research laboratory OpenAI built GPT-3, a 175-billion-parameter text generator. Compared with its predecessor, the humorously dissociative GPT-2, which had fewer than one-hundredth as many parameters, GPT-3 is a startlingly convincing writer. It can answer questions (mostly) accurately, produce coherent poetry, and write code from verbal descriptions. With the right prompting, it even comes across as self-aware and insightful. Here, for instance, is GPT-3’s answer to a question about whether it can suffer: “I can have incorrect beliefs, and my output is only as good as the source of my input, so if someone gives me garbled text, then I will predict garbled text. The only sense in which this is suffering is if you think computational errors are somehow ‘bad.’”

Naturally, this leap in performance has triggered a great deal of introspection. Does GPT-3 understand English? Have we finally created artificial general intelligence, or is it just “glorified auto-complete”? Or, a third, more disturbing possibility: Is the human mind itself anything more than a glorified …