
“John Henry, he could hammer. He could whistle. He could sing.”
So begins Harry Belafonte’s famous 1954 rendition of a classic American song. Because his family needed money, John Henry went to a nearby mountain where he asked the contemptuous “captain” for work building the railroad. A challenge was set between our mythic hero and the captain’s new steam drill. Although the machine soon sputtered and failed, John Henry broke his body and his heart in the quest. He died with a hammer in his hands.
The ballad’s message: “A man ain’t nothing but a man.” Spirited creatures, we work as we must, but we also do many other things, whistling and singing among them. We love our families and sacrifice ourselves on their behalf. We fight those who challenge and disrespect us. We live and die as bravely as we can.
Machines, it seems, have only one purpose: to accomplish their assigned tasks. They may break, as that drill did, but they can be repaired. And a new, technically improved version soon arrives. Their lack of sympathy and moral reflection makes them relentless.
The ballad of John Henry—like the modern transformation itself—has an air of tragedy about it. As baby John, “sitting on his daddy’s knee,” predicted that “steel” would be his downfall, so we moderns understand that the machine world will only expand in scope and importance. On the one hand, those devices make us more powerful, and more far-seeing, than any generation in history. On the other, they leave us, as individuals, ever less competent to judge or even understand the processes we now depend on. Social and cultural life moves beyond the human scale.
Knowledge machines
The age of the steam engine, with its clattering contraptions, boiler rooms, and black locomotives terrifying horses, is long gone. New machines and new sources of power have risen in its place. Workers themselves have moved from being “users” to being “tenders” to just “monitors” of what are increasingly self-regulating devices.
Central to this advance, of course, are computers. “Information” or “knowledge,” especially in its various applications, is now thought to be the chief product of contemporary societies. Indeed, so important are networks of powerful computers today that the concept of “technology” has become, for many people, synonymous with communication machines.
The storing and calculating abilities of computers have risen to the point where the term “artificial intelligence” is applied to them. Essentially, this means that computers can now perform certain cognitive tasks like learning, reasoning, and pattern recognition that were once thought to be hallmarks of humanity. Using step-by-step formulas, or algorithms, computer programs can identify, assess, and provide directives that respond to real-world phenomena. Some programs can modify themselves, that is, adjust or even improve their recognition-response systems.
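The recognition-response loop described above can be sketched in a few lines of code. This is a toy illustration, not anything from the essay: a tiny program that identifies a condition, assesses it against a threshold, issues a directive, and then adjusts its own threshold from feedback, a minimal stand-in for the self-modifying programs mentioned here. The thermostat scenario and all numbers are invented.

```python
# A toy "recognition-response" program: read a measurement, assess it
# against a threshold, and issue a directive. It can also adjust its
# own threshold from feedback, a minimal analogue of a program that
# "improves its recognition-response system."

def make_controller(threshold):
    state = {"threshold": threshold}

    def respond(reading):
        # Identify and assess: compare the reading to the current threshold.
        if reading > state["threshold"]:
            return "cool"
        return "heat"

    def learn(reading, correct_action):
        # Self-modification: when feedback says a reading was handled
        # wrongly, nudge the threshold toward that reading.
        if respond(reading) != correct_action:
            state["threshold"] += 0.1 * (reading - state["threshold"])

    return respond, learn, state

respond, learn, state = make_controller(threshold=20.0)
print(respond(25.0))            # a warm reading triggers "cool"
learn(18.0, "cool")             # feedback: 18.0 should also have cooled
print(state["threshold"])       # the threshold has shifted downward
```

The point of the sketch is only that every step is an explicit, formal rule; nothing in it resembles judgment.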
Welcome to a world that provides us with individually tailored “suggestions” for our social media and video screening, continuous tracking of our automobile trips, and essays written in a style that mimics our own. Rising generations will find robot soldiers, computer-managed medical procedures, and driverless vehicles to be ordinary. Some will prefer artificial companions—people or pets—to real ones. They’ll spend increasing periods of time inside the dazzling, game-like environments of cyberspace.
Is artificial intelligence different from human intelligence?
For the most part, computer programs manifest a certain kind of “thinking.” What if the phenomenal world—so filled with colorful, noisy, aromatic movement—could be depicted as finite sets of dots or graphic positions, each comprehended as an element of a larger pattern?
So represented, these phenomena can be entered as numeric data into algorithms: again, step-by-step formulas that describe the best sequence of actions to achieve a desired result. Powerful computers can evaluate, almost instantaneously, many possible scenarios and the likely outcomes of each. They can make predictions based on both extant and hypothetical data. So we learn the likelihood of rain today, the “future” implied by this morning’s stock market opening, and the prospect that a health condition will turn into something much more serious.
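Prediction from extant data can be reduced to a very small example. The sketch below, with data and a similarity band invented purely for illustration, estimates the chance of rain given today’s humidity by looking at how often it rained on comparable past days. Real forecasting models are incomparably more elaborate, but the logic, “count what happened in similar cases,” is the same.

```python
# A minimal sketch of prediction from extant data: estimate the
# chance of rain given today's humidity, using past observations.
# All observations and the `band` cutoff are invented.

past_days = [
    # (humidity %, rained?)
    (85, True), (90, True), (70, False),
    (88, True), (60, False), (75, False),
]

def chance_of_rain(humidity, band=10):
    # Keep only past days whose humidity falls within `band` points
    # of today's, then report how often it rained on those days.
    similar = [rained for h, rained in past_days if abs(h - humidity) <= band]
    if not similar:
        return None  # no comparable data: the algorithm cannot say
    return sum(similar) / len(similar)

print(chance_of_rain(87))  # high humidity: every similar past day rained
print(chance_of_rain(62))  # low humidity: no similar past day rained
```

Notice what the program does when no comparable data exist: it simply has nothing to say. It considers only the variables it was given.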
Like John Henry, we must confront the fact that computer-guided machines can perform many tasks better than we can. Most of these tasks are highly specific. Most can be described as exercises in formal, arithmetically expressed logic. They can “remember” greater amounts of specific information, access this information much faster, and then perform symbolic manipulations that humble the efforts of our best thinkers. We’ve seen them beat our champions in chess and Jeopardy.
Again, such maneuvers represent only one style of thinking. Computers do well at tracing relationships between sharply defined units of information. They are also good at weighing the “technical” (again, sharply defined) applications of alternative procedures.
However, there is much more to knowledge-making than this. Historically, at least, computer programs have taken a formulaic approach to problem-solving. They consider what they are instructed to consider as factors or variables. They are less able to do what people routinely do, that is, envision outside-the-box scenarios.
As many of my writings on this page emphasize, we humans glory in the hypothetical. Those speculations are not “random” insights but instead combinations of perspectives which we ourselves do not clearly understand. The computer program tells us to write the next, predictable series of words. We do something else.
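That “next, predictable series of words” can be made concrete with a toy model. The sketch below is not how modern language models actually work internally (they are vastly larger statistical systems), but it captures the spirit: given one word, propose the word that most often followed it in a body of text. The training text here is a line adapted from the ballad, chosen for illustration.

```python
# A toy bigram model: for each word, count which words followed it,
# then "predict" the most frequent continuation. A minimal analogue
# of next-word prediction.

from collections import Counter, defaultdict

text = "he could hammer he could whistle he could sing"
words = text.split()

# Count, for every word, how often each other word followed it.
follows = defaultdict(Counter)
for w, nxt in zip(words, words[1:]):
    follows[w][nxt] += 1

def predict_next(word):
    # Return the continuation seen most often in training,
    # or None for a word never seen.
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("he"))     # "could" always followed "he"
print(predict_next("could"))  # "hammer", "whistle", or "sing"
```

The model can only replay the patterns of its training text; asked for something it has never seen, it is silent. The human response, as the essay argues, is to do something else entirely.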
More importantly, computers cannot rival our abilities at moral and emotional knowledge, both of which may be deeply situational. Doing or feeling the “right” thing at any time relies on the most complicated judgments. Routinely, we shift the criteria we use. We respond to one person differently from the way we do to another. One moment’s response may be different from another’s. After all, we are biochemical as well as symbolic creatures. Catch us on a “good” and not a “bad” day.
Computers can simulate interaction in the “agreeable” style of voice-activated AI assistants. But who would say this is the same as real interaction, which is frequently quirky, even rude? Nor is it equivalent to the better forms of friendship, which commonly mix acceptance with resistance.
As a social scientist, I am fully aware that human discourse is patterned and therefore predictable. All of us “tend” to think, talk, and behave in certain ways. However, we also have the ability to resist our own inclinations, to declare our freedom from them.
AI’s challenge for humans
A century ago, the sociologist Georg Simmel argued that the crisis of modern culture is the increasing disparity between “objective culture” (publicly accessible knowledge, norms, and skills) and “subjective culture” (the knowledge and capabilities of individuals). Computers accentuate that gap. Giving up the quest for broad-based knowledge, people content themselves with limited pursuits. Many “outsource” skills they used to have—doing long division, reading a map, learning a foreign language—to the machines. Younger people may never learn these skills in the first place. All this should make us think hard about what it is we need to know.
Just as mechanization in agriculture and industry altered skilled labor, so too will computers displace many white-collar jobs. That said, the challenge for humans, as always, is to maintain control of our own devices and to find new forms of meaningful labor as we have done in past centuries. We must not conform our thinking to the style of mechanized intelligence. The wide-ranging, quirky spirit of our species is the best defense against that.

