Before going into the creative end of this, it’s worthwhile to look at the consumer side.
Does Technology Make Us Dumber and Lazier?
Despite all the stereotyping of Americans as ignorant Neanderthals prior to the establishment of universal compulsory “education,” the truth is almost the opposite of what you’ve likely heard. From the days of wagon trains setting forth on the Oregon Trail up to about 1948, the average American was highly literate. If you read the work of the pulp writers, comic book cartoonists, and anyone else we are supposed to believe was intellectually inferior to today’s living generations, it’s obvious that they knew far more about history, geography, classic art, music, literature, and the English language than people today do. I would put the average high school dropout from 1940, or even 1960, up against the average Ivy League graduate of today in an episode of Jeopardy or any other knowledge contest, and be confident of victory.
The deterioration of the American IQ is sometimes traced back to 1979 (the establishment of the Department of Education). Those who take a bigger-picture view of the data will trace it back to 1965 (passage of the Immigration and Nationality Act) or to a couple of years earlier (when God was banished from the public schools, with the help of the Supreme Court). And certainly, honest analysts can track significant decline in average intelligence from all of those turning points, each one worse than the last, like stair steps leading to the basement. But I would argue that the descent probably began in 1947-48, when the Idiot Box began proliferating in the average middle-class American living room.
In a nutshell, new technologies correlate with a rise in ignorance and sloth.
The more a person reads, the more intelligent they become; read less, and the opposite holds. Whether it’s television, video games, personal computers, the Internet, or social media, the easier it becomes to find amusement in some other medium, the less incentive there is to read.
I could argue for or against the value of TV. On the one hand, there are a lot of classic movies I never would have had the opportunity to see were it not for the Boob Tube. On the other, it’s also a trance-inducing device that turned the population bovine and ignorant, brainwashing generations into believing that America’s leftward swerve and moral disintegration were positive developments.
I saw the Internet as an overwhelmingly positive advance, and still tend to retain that bias. As a writer, I have virtually unlimited access to information through it. I can research almost instantly, on the fly, without ever visiting a library. It’s easy to understand why so many assumed it would make everybody smarter. The problem is that if somebody is already lazy and ignorant, they won’t be able to wield this tool effectively.
Google and other trusted, “reputable” Establishment resources are far more committed to suppressing information than to delivering it, creating a sort of doom spiral feedback loop wherein ignorant conformists become dumber, lazier, and more bovine from using them.
I recognize almost no benefit in social media, aside from networking. Substack feels like an exception, in that readers can get a taste of an author’s work beyond the occasional short paragraph (“tweet”). (The very format of Twitter, Facebook, et al., caters to the sloth and ignorance of the lowest common denominator, guaranteeing most users will remain at that level.)
Having said that, all the effort I’ve put into creating content on my stacks has yet to result in even one book sale, so far as I can tell. And “Notes”? It looks like it might be transforming Substack into just another social media platform. In other words: yet another hi-tech narcissistic amusement turning people lazier and more ignorant. Or at least less likely to pay for something they enjoy reading.
Generative A.I.
The next big revolutionary tech to upset the status quo is “artificial intelligence,” of course.
Others have already pointed out that we’ve been using A.I. for some time now. Few people ever ranted against search engines or grammar checkers (forms of “artificial intelligence”), but the emotional opposition to generative A.I., particularly among illustrators, has been something to behold.
I don’t want to rehash what I’ve already written about A.I., but it’s probably pointless to argue against its proliferation, anyway. It’s here to stay, and will only become more prevalent, despite the passionate diatribes against it, and cancellation efforts against creatives who use it. No matter how many tantrums artists throw, consumers just don’t care all that much who or what concocted their entertainment. They want more for cheaper, and A.I. is making that possible.
Yes, I recognize that I, too, can be replaced. But as I’ve pointed out in the past, I arguably already have been. My books have been hidden (by algorithms, no less; in other words: “A.I.”) and buried under mountains of soulless pap written by authors who might as well be conglomerations of mimicking software. But those authors are more closely aligned with gatekeeper values and ideology than I am.
My own prejudice is that no man-made device or program will ever be able to match the imaginative wonder of what our Creator has gifted to some human beings. But, again, if you study the successful books on Amazon, almost none of them rank toward the top of what a gifted human can produce. Winners are chosen according to marketing horsepower and ideological alignment. If a book happens to also be a masterpiece, great. But that is an afterthought.
If you’ve written a fantastic book, and executed a brilliant marketing strategy to sell it, but it doesn’t reinforce the approved narratives, the best you can hope for is that it somehow slips through the cracks in the gatekeepers’ defenses.
The gatekeeping isn’t going to stop; its effects will be compounded. Now, instead of a great indie book being buried under millions of soulless, derivative books generated by poor-to-mediocre authors, it will be buried under that, plus billions of soulless, derivative books generated by soulless book-writing software. And that software will be coded to incorporate the correct political messaging, making the gatekeepers’ jobs even easier.
What About the Uncanny Valley?
As a boy, I saw an episode of Trek Classic wherein Captain Kirk was framed for a crime by a video records tech with a grudge against him. The tech tampered with the Enterprise’s records to make it look like Kirk committed the crime. I cried foul because the revision could not have been accomplished via editing (IIRC), but would have required generating bogus footage which looked as authentic as the real stuff.
From the time I first lived in a house with a color TV, I could accurately place a movie in the decade it was made, based on the film’s color qualities. I could tell the difference between a Twilight Zone rerun shot on film and one shot on video, as well. My young mind, hindered by normalcy bias, could not believe technology would ever advance to the point that video footage could be generated out of nothing and still fool a Federation tribunal in Star Date Whatever.
So I hesitate to make the same sort of assumption now about the limits of A.I.
I’ve seen a video clip of a bipedal robot walking. Yes, it had the general shape of a humanoid, but its locomotion was obviously far from human. There’s a long way to go between here and the synthetic replicants of Blade Runner, and I’m not sure the gap can ever be bridged.
A video was suggested to me online the other day, promising the untold story of a fight between Bruce Lee and Chuck Norris when the cameras weren’t rolling. I bit.
The video was obviously edited and narrated by a bot. It cycled through the same video clips over and over, in a random pattern that had little or no relation to the narration. The narration repeatedly highlighted the contrast in the two men’s backgrounds and fighting styles, saying the same thing over and over in different wording…until it began repeating itself verbatim. I decided it was never going to deliver the story it promised. I may not even have waited for the annoying commercials to interrupt before I exited.
You have perhaps seen videos of the motion-selfie variety in which a lifelike man or woman talks at the camera, hyping a revolutionary new weight-loss recipe or get-rich-quick scheme. The movement of the mouth is synchronized to the words being spoken, but there is something just…off about it. The longer you watch and listen, the more obvious it is that this is not real footage of a real person speaking real words from their real vocal cords.
I’ve encountered plenty of phonies on the Web, and a few of them I’m certain were some sort of bot being tested “out in the wild,” likely as some sort of experiment. Their comprehension, attempted use of idiom, and cognition just were not human. (And I interact with plenty of imbeciles, so it’s not that I just don’t recognize stupidity when I encounter it.)
And, of course, we’ve all seen what A.I. does to fingers and eyes when generating images.
But A.I. is improving rapidly. Just in the last couple of years, it has come a long way. It seems only logical that one day it will depict fingers and eyes in anatomically correct fashion, without your having to rewrite the prompt 400 times and finally hand-draw the result yourself. It will one day be able to generate human hands holding visually accurate weapons of varying complexity. It will be able to reproduce the same background or setting from different angles.
It will probably be able to generate fake video footage that looks like the real thing.
It is possible, if not probable, that it will one day generate a complex, long-form story that makes sense, is believable, incorporates themes coherently, and structures the plot in a satisfying way without feeling formulaic.
Still difficult for me to believe, but that doesn’t mean it can’t or won’t happen.
Not Just Art, but Life Itself?
As impressive as “machine learning” and “artificial intelligence” are, the machine is not truly learning, nor is it intelligent. It is merely operating according to software programmed by human beings to simulate what human beings can do. It can process information faster, with fewer mistakes, and remember more than your average human can, but it is incapable of thought. Kind of like the average college graduate today.
A.I. can only ever be as smart as the data fed into it, and only as competent as it can be programmed to be.
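For anyone who wants a concrete picture of what “simulation without understanding” looks like, here is a minimal sketch: a toy word-level Markov chain that “writes” by replaying statistics from whatever text it is fed. (This is an illustration of the principle only; real generative A.I. is vastly more elaborate, but the limitation is the same in kind.)

```python
import random
from collections import defaultdict

def train(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=20):
    """Walk the chain, picking each next word at random from what was seen."""
    word = start
    output = [word]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:  # dead end: the "writer" knows nothing it wasn't fed
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

sample = ("the ship sailed west and the crew read books by lamplight "
          "and the ship sailed on through the storm")
print(generate(train(sample), "the"))
```

Feed it the collected pulps and it will emit pulp-flavored word salad. It will never produce a word, much less an idea, that was not already in the data it was given.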
Only if it becomes sentient will it be able to imagine. And without imagination, it will never be more than a simulation. It can simulate art, poetry, storytelling, and music; but without imagination, it will never rise to the level of humanity’s creative elite.
The Terminator franchise and other stories have speculated on what could happen if Skynet or its equivalent ever becomes self-aware. It might decide humans are pesky cockroaches to be exterminated. Or it could decide it wants to rule over us, and/or welcome its own possession by some spiritual force, since it has no soul of its own.
In any event, if man-made devices and programs ever develop an imagination, ever become capable of original thought, then human creativity may disappear. Not because nobody will be born with a spark of creativity, but because any incentive to exercise it will likely disappear, and we will be too busy just trying to survive.
Even then, I don’t believe artificial beings will be able to produce better art than we can. We are fearfully and wonderfully made by our Creator. Will even a sentient robot ever be more than a simulation of us, with a simulation of the imagination we were endowed with?
For now, generative A.I. is just a tool.
Will it put artists out of work? I think it might. When the average consumer doesn’t care how the product was made, and when the typical illustrator is an arrogant prima donna and an expensive, fickle, uncooperative flake, it is guaranteed to happen sooner rather than later. Unless human nature changes first.
Will it put musicians out of work? Maybe it already has. There are plenty of talented musicians out there (possibly more than ever), but the music industry has been radically transformed, too. Consumers can find pretty much anything they want to listen to for free. It’s very tough to be competitive at that price point. With A.I., a “musician” won’t need to know anything about music theory, won’t even need to buy an instrument or learn to play it, and will be able to churn out an unlimited number of songs and albums. The most morally bankrupt and ideologically malleable will be the only ones capable of slipping their product past the industry gatekeepers. The cost of getting signed by a record label (is that even still a thing?) will be their soul, and plenty will be fine with that price.
Will it put writers out of work? Yup. The existence of businesses like Wal-Mart, Harbor Freight, and McDonald’s proves that consumers will gobble up cheap assembly-line garbage as long as it’s more convenient and economical than superior products. (Even if it’s made by slave labor for a genocidal regime that wants to destroy us.) A 600-page Tolkien ripoff generated in 20 minutes and priced at 99 cents will outsell, every time, the $4.99 e-book of 300-400 pages you spent years conceiving, world-building for, plotting, writing, polishing, formatting, and perfecting. It may be inferior to what you wrote, but that won’t matter. If the gatekeepers don’t favor you with the algorithms, you will need some other source of income. Quantity will trump quality. So…it will be like now, only worse.
Something could change to make it possible for human creatives to be competitive in the emerging status quo. Who knows? I know some of us will never stop creating, even if the royalty $$ dries up completely.
Because that spark of creativity is the algorithm programmed into us from before we even became self-aware.
The beginning of this post reminds me of Richard Henry Dana’s memoir, Two Years Before the Mast (1840). It recounts his time as a common sailor, and—yes, you guessed it—those tar-covered ruffians loved to spend their free time reading.