There is an old saying in computational science: garbage in, garbage out. Computers can carry out billions of complex calculations in a second, but that processing power is otherwise unfiltered and shouldn’t be confused with intelligence.
I can attest, from my own experience writing detailed code in my work as a physicist, how often surprising results arise, only for checking to reveal that they stem from coding errors rather than from interesting facets of fundamental reality.
So it is with AI. Years ago, I attended a conference at Asilomar, organized by a group concerned about the future of AI. The opening lectures, mostly from philosophers, stressed the importance of teaching AI "universal human values" so that eventually sentient AI systems would pose no danger to humanity.
This sounds good in principle, until one tries to define universal human values, at least in the context of human history. If machine learning systems are trained on such material as is available on the Internet, they will be hard-pressed to find consistent examples of logical, ethical, or moral behavior across time or geography. One worries that, in the end, this kind of guidance for initial programming will amount to more "do as I say, not as I do" than programming for open-ended discovery.
The problem with this, of course, is the question of who gets to provide the guidance, and what their values are.
This hypothetical concern has become much more real as human-interface machine learning systems have blossomed, with the recent rise of ChatGPT and its impact on human discourse, from assisting in the writing of scientific papers to guiding people in their search for information.
Recently, Greg Giovanni, a student of Vinod Goel, Professor of Cognitive Neuroscience at York University, held the following "dialogue" with ChatGPT on an issue of current topical interest, which he and Professor Goel have permitted me to reproduce here: