It is a slow, metronomic rhythm.
Like a heartbeat or the glowing pulse of a traffic light at midnight, it’s a hypnotic beat that’s all too familiar.
From Microsoft Word to Google Docs, the blinking cursor is a companion that accompanies us through text documents, text messages, and naughty Google searches.
When we falter in our prose, the blinking cursor is there to patiently ask “What’s next?”
The blinking cursor is not just some 1960s invention of yesteryear; it has oriented millions of people in the digital world. It’s why and how the words you’re reading right now were created.
Of course, not everyone’s relationship with the blinking cursor is codependent. In fact, months of research to uncover the origin of this ubiquitous feature reveal that it’s been largely relegated to a dusty, forgotten shelf of computing history. Perhaps it’s time to change that.
The blinking cursor’s backstory shows how intuitive computing can stand the test of time and hold its own in an ever-changing digital environment.
Grandfather of the Google Doc
The Oxford English Dictionary is a symbolic vessel of the English language that has persisted for hundreds of years, but constructing a 1960s school edition brought Oxford lexicographers to their knees.
Staring bleary-eyed into the incredibly small screens of early computer terminals, the lexicographers found themselves lost in a sea of green and black as pieces of unintelligible code merged with stranded locutions. Barely able to decipher the mess in front of them, they sought refuge in their printer — only for it to churn out ink smears on flimsy paper.
These were some of the first growing pains of early word processing. Devoid of the seamless trackpad and mouse control we take for granted today, wordsmiths of the era were instead forced to hack through a digital jungle of their own creation. Unbeknownst to them, engineers were already developing a seemingly innocuous feature that would quietly change computing forever: the blinking cursor.
Paul Luna is a typography historian and emeritus professor at the University of Reading in England. He arrived at Oxford just a few years after the lexicographers’ initial struggle with this phototypesetting machine, an early predecessor to the personal computer. For editors used to receiving clean and fully stylized proofs to review, the hodgepodge printout of the phototypesetting machine was a shock.
“Suddenly these poor beggars were confronted with a screen with an array of green type — that you couldn’t read anyway — on black stuff,” Luna tells Inverse. “Half of it was gobbledygook. And every now and then you’d see a word.”
Sitting against a personal library of typography volumes and dated computer manuals, Luna paints a picture for Inverse of what printing used to look like before the age of the personal computer.
“Typesetting is getting words and spaces — or written language — into a form that you can then multiply,” he says. “Typesetting is not merely a mechanical process, it’s a value-added process. You do that by [hand] selecting the font you’re using, changing the font you’re using, changing the column width, etc.”
Comparing this analog process to modern word processing, it’s human hands themselves that take on the role of the blinking cursor, or insertion point. Need to change the position of a word? A typesetter could simply move blocks of text around like pieces of a jigsaw puzzle.
Aside from the tedious nature of the work, another major disadvantage was that the arrangement of text couldn’t be preserved after printing. If you wanted to return to a typeset text months or years later to make a small revision, you’d have to start setting it again from the beginning. This is a problem modern text editors would attempt to solve.
Moving into the 20th century, typesetting underwent several evolutionary moments, from dinging typewriters to early computer-driven machines like teletypes and phototypesetters.
As Thomas Haigh, a professor of technology history at the University of Wisconsin-Milwaukee, tells Inverse, the teletype was “essentially typewriters hooked up to a computer.”
While these advances made it much quicker for publishers to go to print, and even let them save their digital typesetting to storage media like floppy disks, one crucial aspect of vintage typesetting had been lost in the mix: an intuitive way to insert or remove text.
Across the ocean from Oxford, this was a question that electronics engineer Charles Kiesling had already begun to solve.
A military invention
Charles Kiesling — or Chuck, to his friends — was born in the small town of Murdock, Minnesota in 1930. Kiesling, a naval veteran of the Korean War, spent his immediate post-war years on a new challenge: the exploding computing age.
Still decades away from personal computers — let alone portable ones — Kiesling was joining the ranks of engineers tinkering with room-sized computers like the IBM 650 or the aging ENIAC. He joined Sperry Rand, now Unisys, in 1955, and helped develop the kind of computer guts that casual users rarely think about.
These include innards like logic circuitry, which lets a computer combine signals with operations like “and,” “or,” and “not” to reach decisions more complex than a simple “yes” or “no.” One of these seemingly innocuous advancements was a blinking cursor, for which Kiesling filed a patent in 1967.
Kiesling’s January 2014 obituary puts it plainly, calling him the father of the blinking cursor:
He was the father of the Logical expansion circuitry for display systems, or “Graphical Computer Video Card” and the flashing or “Blinking Cursor”.
Despite the endurance of Kiesling’s invention in our word processors today, little else is publicly known about the man himself. After 38 years of service to Sperry Rand, Kiesling retired in 1994 and passed away among family in 2014 at the age of 83.
According to a post on a computer science message board from a user purporting to be Kiesling’s son, the inspiration for this invention was simply utility.
“I remember him telling me the reason behind the blinking cursor, and it was simple,” Kiesling’s son writes. “He said there was nothing on the screen to let you know where the cursor was in the first place. So he wrote up the code for it so he would know where he was ready to type on the Cathode Ray Tube.”
The blinking, it turns out, is simply a way to catch the coder’s attention and stand out from a sea of text.
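The mechanics behind that attention-grabbing blink are simple enough to sketch in software. The snippet below is a minimal, illustrative toy using Python’s standard curses module, toggling a block character on and off at a fixed interval; it is not Kiesling’s circuitry or any historical code, just the same idea expressed today.

```python
# Illustrative sketch only: toggle a block character on a timer so the
# insertion point stands out from the surrounding text. The interval and
# glyph are arbitrary choices, not historical values.
import curses
import time

def blink_demo(stdscr, interval=0.5):
    curses.curs_set(0)                 # hide the terminal's built-in cursor
    stdscr.nodelay(True)               # make getch() non-blocking
    stdscr.addstr(0, 0, "Press any key to quit")
    row, col = 1, 0                    # where our pretend insertion point sits
    visible = True
    while True:
        # Alternate between drawing a solid block and erasing it.
        stdscr.addstr(row, col, "\u2588" if visible else " ")
        stdscr.refresh()
        visible = not visible
        time.sleep(interval)
        if stdscr.getch() != -1:       # any keypress ends the demo
            break

if __name__ == "__main__":
    curses.wrapper(blink_demo)
```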
Enter: the computing giants
The blinking cursor as we know it today still wouldn’t make its general public debut for another decade. Its functionality first appeared on the Apple II in 1977 and was later incorporated in the Apple Lisa, the undersold older sibling of the famous Macintosh, in 1983.
Andy Hertzfeld is a retired Apple software engineer who worked on the Macintosh. He says the Apple II’s wink came at the expense of another common computing feature that the machine’s famous designer, Steve Wozniak, chose to ax.
“The original Apple II did not support lowercase letters which is kind of surprising to most people,” Hertzfeld tells Inverse, laughing. “But the designer, Wozniak, made a trade-off that blinking characters were more important than lowercase letters.”
The choice was made out of necessity, Hertzfeld says, because of the limited memory (ROM) that chips could hold at the time, but it nonetheless had a lasting impact.
In the Apple II, this blinking — which could be extended to the cursor as well — was enabled using hardware, Hertzfeld explains. In the Apple Lisa and Macintosh, it was done using system graphics and software. These two systems were both in the perfect position to capitalize on the release of the first commercially popular word processor, WordStar, in 1978.
But Hertzfeld actually first laid eyes on the blinking cursor years earlier, he says, as a student in the 1970s.
“The first time I saw a blinking cursor was on a video terminal when I was in college,” Hertzfeld says, referring to a screen not unlike our laptops today that displayed your work in real-time. “Video terminals started taking over from the teletypes. Anything blinking is an artifact of video — if you’re just printing [like the teletypes] you can’t blink.”
At first, the cursor was stuck to the bottom of your text document, Haigh says. But the invention of the mouse in 1964 by Douglas Engelbart and the addition of cursor or arrow keys to keyboards made it easier for typists to move through a document. Prior to these inventions, computers were primarily navigated with typed command-line instructions that told the machine’s software directly what to do. For example, instead of clicking an application on your desktop, you would type a command to open it.
While he was in support of the blinking cursor itself, Haigh says, Steve Jobs was famously against controlling it using cursor keys. He kept these keys off the original Mac’s keyboard in an effort to force users into using a mouse instead, though arrow keys returned on later Apple keyboards. In an interaction with biographer Walter Isaacson years later, he even pried the arrow keys off a keyboard with his car key before signing his autograph on it.
Nowadays, you’d be hard-pressed to find a text-based platform that doesn’t include the quietly beating cursor. Whether you’re Googling, posting on Facebook or Twitter, or penning your memoir in Word, the blinking cursor follows.
Aside from fitting the adage “if it’s not broken don’t fix it,” Karl MacDorman, an associate professor of human-computer interaction at Indiana University, says the resilience of the blinking cursor may also have to do with its seamless and intuitive user interface design.
“A blinking cursor was relatively easy to implement and became the standard,” MacDorman tells Inverse. “Much of good HCI [human-computer interaction] design is about the interface letting the user work effectively. It’s not really designed to make the user feel anything, except perhaps in control. Good HCI design lets the user concentrate on the work, not the interface… They are working in the moment without self-consciousness. Their sense of time, place, and self dissolve.”
An uncertain future
At 54 years old, the blinking cursor is firmly middle-aged. But how long will this wallflower of a feature continue to make a cameo in our devices? Change might be coming sooner than you think, says Hertzfeld, the retired Apple engineer.
“When you have augmented reality and you’re just kind of seeing the world, how are we going to indicate selection?” Hertzfeld ponders. “Animated [selection] boxes, called marching ants, are the progeny of the blinking cursor [in these cases].”
These boxes result from the same motion you make to highlight text, a click and drag, and they draw a hollow square with a moving, perforated perimeter around your selection. Hence, the marching ants.
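The effect Hertzfeld describes can be approximated in a few lines: draw a dashed outline and advance its dash offset on a timer so the dashes appear to crawl around the selection. The sketch below uses Python’s standard tkinter canvas; the rectangle, timing, and dash pattern are arbitrary illustrative choices, not Apple’s implementation, and dashed-line rendering varies somewhat by platform.

```python
# Illustrative "marching ants" sketch: a dashed rectangle whose dash
# offset is advanced on a timer, making the dashes crawl around the box.
import tkinter as tk

def main():
    root = tk.Tk()
    root.title("Marching ants sketch")
    canvas = tk.Canvas(root, width=300, height=200, bg="white")
    canvas.pack()
    # A stand-in "selection" rectangle with a dashed outline.
    ants = canvas.create_rectangle(60, 50, 240, 150, dash=(6, 4), outline="black")
    offset = 0

    def march():
        nonlocal offset
        offset = (offset + 2) % 10            # 10 = dash length + gap length
        canvas.itemconfigure(ants, dashoffset=offset)
        root.after(80, march)                 # step roughly every 80 ms

    march()
    root.mainloop()

if __name__ == "__main__":
    main()
```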
Perhaps more importantly, should we stop and remember the blinking cursor’s history today, while it’s still salient?
I’ve come to think about it like this: the items we cherish, protect, and even ignore in our daily lives are all part of a larger and often unexamined picture. Small moments or inventions may not live vividly in the public consciousness, but they are nonetheless crucial points of color, like strokes of gold creating a pointillist sun. If we can appreciate small legacies like these, maybe we can learn to appreciate our own as well.
But of course, not everyone feels so sentimental about these things. It’s up to us all to decide where we fall.
“I haven’t heard about a person finding a flashing cursor to be exciting,” MacDorman says. “Perhaps that person should get out more?”
https://www.inverse.com/innovation/blinking-cursor-history