Screen Time

I don’t recall the precise reason, but it became eminently clear it was no longer a living room accessory. It had silently departed the house in our absence, and we never had a chance to say goodbye. As our parents drove my brother, sister, and me from our grandparents’, where the three of us had spent the weekend, and we made the hour-long trip back home, one of us must have expressed a screen-starved eagerness to play our game system — an Atari 2600 — as soon as we set foot in the door. Upon hearing this, Mom pounced from the front seat with a ready reply. Our anticipation was summarily extinguished by her unwelcome news.

“It broke.” “It messed up the TV.” “The dog chewed it up.” Whatever the reason, all we heard was, “It’s gone.” In the history of parental excuses, whichever one Mom chose was likely as common as they come. The subtext, however, was, “You were spending too much time on it.” So, with the cold-heartedness of a hitman, she offed it. Even Dad, I expect, quietly mourned while presenting a united front with our mother. No more heavily pixelated “River Raid” bombing runs after we were in bed. Cue single tear.

It would be the only game system our parents ever purchased for the family while we were growing up in our single-screen home, and it was gone as quickly as it came. Even in the ’80s, long before the advent of iPads, smartphones, and the conscientious parenting term “screen time,” my mother believed it was still possible for one’s kids to oversaturate on screens. As I matured, I inherited a similar belief and learned to approach screens and what they deliver with a measure of caution.

A few years later, in June of 1992, following the collapse of the Soviet Union, our family moved to Ukraine to engage in charitable mission work supported by churches in Texas. Among the wealth of experiences we collected over what would become a full and very meaningful year in our lives (about which I intend eventually to write more), we had the opportunity to live screenless. We rarely encountered a television. Moreover, smartphones and tablets wouldn’t make their introductions to society at large for more than a decade. We were strangers disconnected in a strange land, where even a brief, poor-quality, long-distance phone call literally required scheduling with the phone company ahead of time. While such conditions sound primitive and unacceptable by today’s standards, we found ways to keep ourselves occupied. When a screen simply isn’t anywhere, it ceases to be an option, so attention naturally shifts elsewhere. We learned to live comfortably and happily without it.

Fast forward nearly 30 years, and here I sit, tapping out these very words one character at a time with my thumb on a modest rectangular screen that never leaves my side. I find it hard to remember any longer what day-to-day living was like prior to constant connectivity and endless entertainment in our palms. We obviously made life work without this technological privilege for century after century of human history, but it took only a meager year or two of the current century to change us forever. We all take for granted that each of us now carries in our pockets a tool far and away more advanced and complex than the earthshaking machines that sent men to the moon. With that kind of personal power, we all know there’s no going back, regardless of whatever downsides we’ve discovered since.

Among this technology’s vast number of advantages — and there are, indeed, too many to name — there is really only one familiar disadvantage that matters: we seldom have the willpower to put these devices down as often as we should, turn them off, and interact with the real world we inhabit instead of staring endlessly into a carefully edited and framed projection of it.

Very early in his work with Apple, Steve Jobs once said dismissively of market research, “People don’t know what they want until you show it to them.” The later success of his products would seem to have proved him right. While he wasn’t the inventor of the smartphone, per se, his “i” devices found themselves in the impatient, hungry hands of the majority very soon after their introduction to the market. Apparently, he alone knew what we really wanted, judging from the volume of cash we appreciatively threw at him. Once software and app developers got hold of the iPhone and similar devices, they found further ways to keep us hooked, until this “want” gradually morphed into a “need.” Whether we’re at home, at work, dining out, or even driving, we feel the itch in every idle moment and silence reflection by reaching reflexively for our phones. Few of us know any longer how to sit still and quiet with our thoughts.

Ironically, many of the developers of these breakthrough devices and apps see their way differently through the distracted digital fog and distance themselves and their families from what they’ve wrought. Bill Gates is said to have established very strict limits on his kids’ use of technology. Jobs himself stated shortly after releasing the iPad that he would not let his own children use one. Many of the movers and shakers of social media platforms shared in the cautionary Netflix documentary “The Social Dilemma” that they significantly limit when their children are able to visit the sites they curate, if at all, and some even go so far as to enroll them in low- or no-tech schools. Jaron Lanier, an early pioneer of virtual reality and a well-known voice in Silicon Valley, published “Ten Arguments for Deleting Your Social Media Accounts Right Now” a few years ago, among other works in a similar vein counseling caution in the technological universe we now inhabit. All in all, it appears very telling that these creators are not the strongest advocates for their creations.

A closer look, though, reveals it’s not the technology itself they’ve rejected but its overuse, abuse, or misuse. Any technology, like any tool, is fashioned for a specific function, but it can be turned to different ends if I or others so choose and its capabilities allow it. A hammer’s designed purpose is to drive or pull a nail, but certain individuals have been known to employ it effectively as a weapon.

Our middle child is currently in a months-long process of demonstrating she is mature enough to acquire a phone (and use it as intended). Good grades and good behavior are the criteria. While it’s a great motivator for her, I admit mixed feelings about what waits for her at the end. This is a rite of passage we Gen Xers and earlier generations never experienced; we made it through adolescence just fine without it, so, naturally, we harbor concerns about what it should signify and when our kids have earned the right to carry, so to speak. They’ve never known a reality that wasn’t populated with a personal device for every person, so they feel less cautious about the change than those of us who remember a time when our attention had vastly fewer interruptions and was more consistently present with the world around us.

While she stands to gain, I can’t help feeling like something will be lost. Given the option, we’ve found, kids will almost always pick a screen over any other available activity. It’s clear, consequently, that it takes a great deal of vigilance to monitor, educate, and, most importantly, model how to handle joining the ranks of the connected. If my kids observe me glued to this device, they will naturally assume the same posture.

In 1984, William Gibson published the seminal, critically acclaimed science fiction novel “Neuromancer,” which, in a nutshell, envisioned a future in which we essentially plug our conscious brains into digital reality. The story directly inspired the later box office blockbuster “The Matrix.” While we aren’t precisely there yet (and I hope and pray we never are), reading the novel 25 years after its publication, I couldn’t help but observe that I was living in a version of Gibson’s vision. There is no need to physically “jack in” if we can’t pull our eyes from our screens. In a way, even then, we were already there.

The best science fiction, in my humble opinion, does not merely spin a fun and adventurous tale of gadgets, lasers, and spaceships. On the contrary, the greatest among these stories, I would argue, closely scrutinize the present to a purpose. They presciently trace out causality and use the platform of the unknown future, wittingly or unwittingly, to describe where we’re headed if we do, or don’t, change course. The visions are often extreme and imprecise, making it a challenge to recognize whether we’ve arrived at said dystopia. I once expressed to a roommate my incredulity that the future Bradbury imagined in “Fahrenheit 451,” his tale named for the temperature at which books burn, could ever come to pass. He replied thoughtfully, “Why burn books if no one is reading them?”

I don’t know where we’re all headed with our screens, but I know there are times I think back to that disconnected year in Ukraine and realize their absence is rarely woven into the stories our family recounts about our life there. We didn’t miss them. Nonetheless, even as I appreciate the ability to instantly send my parents pictures of their grandkids, pull up a detailed driving route to anywhere in the world, and post my ramblings online for someone like you (whose attention, incidentally, is still intact enough to see these thoughts to their conclusion; and I thank you for that), I sometimes wish I could return to a time when I wasn’t burdened with the task of monitoring my kids’ online presence and activity, or with paying responsible attention to my own.

But this is the world we now live in, and we take the good with the bad. So, perhaps we should blame neither the tools nor the toolmakers. They simply give us what we want.

One thought on “Screen Time”

1. In addition to Neuromancer, another novel on the subject, especially relevant to young adult readers and the effect of social media on them, is Feed by M.T. Anderson (ISBN: 9780739344392).
