Every school day since 2009 we’ve asked students a question based on an article in The New York Times.
Now, seven years later, and in honor of the Oct. 20 National Day on Writing, we’ve collected 650 of them that invite narrative and personal writing and listed them by category below. Consider it an update of a previous post, and a companion to the list of 301 argumentative writing prompts we published in 2015.
Let me hazard a guess that you think a real person has written what you’re reading. Maybe you’re right. Maybe not. Perhaps you should ask me to confirm it the way your computer does when it demands that you type those letters and numbers crammed like abstract art into that annoying little box.
Because, these days, a shocking amount of what we’re reading is created not by humans, but by computer algorithms. We probably should have suspected that the information assaulting us 24/7 couldn’t all have been created by people bent over their laptops.
It’s understandable. The multitude of digital avenues now available to us demand content with an appetite that human effort can no longer satisfy. This demand, paired with ever more sophisticated technology, is spawning an industry of “automated narrative generation.”
Companies in this business aim to relieve humans from the burden of the writing process by using algorithms and natural language generators to create written content. Feed their platforms some data — financial earnings statistics, let’s say — and poof! In seconds, out comes a narrative that tells whatever story needs to be told.
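At its simplest, this kind of data-to-narrative generation can be sketched as a template filled in by rules. The function below is a minimal illustration only; the field names, thresholds, and phrasing rules are my own assumptions, not how commercial platforms like Wordsmith or Quill actually work:

```python
# Minimal sketch of template-based narrative generation from structured data.
# All field names and thresholds are illustrative assumptions, not the logic
# of any real platform.

def describe_earnings(company: str, revenue: float, prior_revenue: float) -> str:
    """Turn raw quarterly revenue figures into a one-sentence narrative."""
    change = (revenue - prior_revenue) / prior_revenue * 100
    # Pick a verb whose tone matches the size of the move.
    if change > 5:
        verb = "surged"
    elif change > 0:
        verb = "edged up"
    else:
        verb = "fell"
    return (f"{company} reported revenue of ${revenue:,.0f} million, "
            f"which {verb} {abs(change):.1f} percent from the prior quarter.")

print(describe_earnings("Acme Corp", 1150, 1000))
```

Real systems layer far more sophistication on top of this idea, such as statistical models that select which facts are newsworthy and style parameters that shift the voice from staid to sassy, but the core pipeline is the same: structured data in, narrative out.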
These robo-writers don’t just regurgitate data, either; they create human-sounding stories in whatever voice — from staid to sassy — befits the intended audience. Or different audiences. They’re that smart. And when you read the output, you’d never guess the writer doesn’t have a heartbeat.
Consider the opening sentences of these two sports pieces:
“Things looked bleak for the Angels when they trailed by two runs in the ninth inning, but Los Angeles recovered thanks to a key single from Vladimir Guerrero to pull out a 7-6 victory over the Boston Red Sox at Fenway Park on Sunday.”
“The University of Michigan baseball team used a four-run fifth inning to salvage the final game in its three-game weekend series with Iowa, winning 7-5 on Saturday afternoon (April 24) at the Wilpon Baseball Complex, home of historic Ray Fisher Stadium.”
If you can’t tell which was written by a human, you’re not alone. According to a study conducted by Christer Clerwall of Karlstad University in Sweden and published in Journalism Practice, when presented with sports stories not unlike these, study respondents couldn’t tell the difference. (Machine first, human second, in our example, by the way.)
Algorithms and natural language generators have been around for a while, but they’re getting better and faster as the demand for them spurs investment and innovation. The sheer volume and complexity of the Big Data we generate, too much for mere mortals to tackle, calls for artificial rather than human intelligence to derive meaning from it all.
Set loose on the mother lode — especially stats-rich domains like finance, sports and merchandising — the new software platforms apply advanced metrics to identify patterns, trends and data anomalies. They then rapidly craft the explanatory narrative, stepping in as robo-journalists to replace humans.
The Associated Press uses Automated Insights’ Wordsmith platform to create more than 3,000 financial reports per quarter. It published a story on Apple’s latest record-busting earnings within minutes of their release. Forbes uses Narrative Science’s Quill platform for similar efforts and refers to the firm as a partner.
Then we have Quakebot, the algorithm The Los Angeles Times uses to analyze geological data. It was the “author” of the first news report of the 4.7 magnitude earthquake that hit Southern California last year, published on the newspaper’s website just moments after the event. The newspaper also uses algorithms to enhance its homicide reporting.
But we should be forgiven a sense of unease. These software processes, which are, after all, a black box to us, might skew to some predicated norm, or contain biases that we can’t possibly discern. Not to mention that we may be missing out on the insights a curious and fertile human mind could impart when considering the same information.
The mantra around all of this carries the usual liberation theme: Robo-journalism will free humans to do more reporting and less data processing.
That would be nice, but Kristian Hammond, Narrative Science’s co-founder, estimates that 90 percent of news could be algorithmically generated by the mid-2020s, much of it without human intervention. If this projection is anywhere near accurate, we’re on a slippery slope.
It’s mainly robo-journalism now, but it doesn’t stop there. As software stealthily replaces us as communicators, algorithmic content is rapidly permeating the nooks and crannies of our culture, from government affairs to fantasy football to reviews of your next pair of shoes.
Automated Insights states that its software created one billion stories last year, many with no human intervention; its home page, as well as Narrative Science’s, displays logos of customers all of us would recognize: Samsung, Comcast, The A.P., Edmunds.com and Yahoo. What are the chances that you haven’t consumed such content without realizing it?
Books are robo-written, too. Consider the works of Philip M. Parker, a management science professor at the French business school Insead: His patented algorithmic system has generated more than a million books, more than 100,000 of which are available on Amazon. Give him a technical or arcane subject and his system will mine data and write a book or report, mimicking the thought process, he says, of a person who might write on the topic. Et voilà, “The Official Patient’s Sourcebook on Acne Rosacea.”
Narrative Science claims it can create “a narrative that is indistinguishable from a human-written one,” and Automated Insights says it specializes in writing “just like a human would,” but that’s precisely what gives me pause. The phrase is becoming a de facto parenthetical — not just for content creation, but where most technology is concerned.
Our phones can speak to us (just as a human would). Our home appliances can take commands (just as a human would). Our cars will be able to drive themselves (just as a human would). What does “human” even mean?
With technology, the next evolutionary step always seems logical. That’s the danger. As it seduces us again and again, we relinquish a little part of ourselves. We rarely step back to reflect on whether, ultimately, we’re giving up more than we’re getting.
Then again, who has time to think about that when there’s so much information to absorb every day? After all, we’re only human.
Related: Interactive Quiz: Did a Human or a Computer Write This?
Daniel Floyd’s ten-minute YouTube video, “Video Games and Storytelling,” is a video lecture you won’t soon forget. Reminiscent of “RSA Animate – Drive: The surprising truth about what motivates us,” based on Dan Pink’s book “Drive,” it is the most intense, rapid-fire visual presentation of related images I’ve seen in a video lecture to date. There are several clever visual quips in here, but you certainly have to pay close attention! The content is outstanding as well; as a digital storyteller and a storychaser, I’m quite interested in the confluence between storytelling and gaming.
Daniel has a total of eight “video lectures” about video games in his YouTube playlist, “Video games and…” Check them out!
Daniel’s official blogger profile page identifies him as an animator living in Athens, Georgia, but his only blog on Blogger hasn’t been updated since September 2010. If you know what he’s up to now and/or have a link to his current work, please share via a comment. Several of the video descriptions on Daniel’s YouTube channel indicate he’s moved to Escapist Magazine, so perhaps that’s the answer. If Daniel is on Twitter or maintaining another blog, however, I’d like to follow.
by Kyle Wiens
If you think an apostrophe was one of the 12 disciples of Jesus, you will never work for me. If you think a semicolon is a regular colon with an identity crisis, I will not hire you. If you scatter commas into a sentence with all the discrimination of a shotgun, you might make it to the foyer before we politely escort you from the building.
Some might call my approach to grammar extreme, but I prefer Lynne Truss’s more cuddly phraseology: I am a grammar "stickler." And, like Truss — author of Eats, Shoots & Leaves — I have a "zero tolerance approach" to grammar mistakes that make people look stupid.
Now, Truss and I disagree on what it means to have "zero tolerance." She thinks that people who mix up their itses "deserve to be struck by lightning, hacked up on the spot and buried in an unmarked grave," while I just think they deserve to be passed over for a job — even if they are otherwise qualified for the position.
Everyone who applies for a position at either of my companies, iFixit or Dozuki, takes a mandatory grammar test. Extenuating circumstances aside (dyslexia, English language learners, etc.), if job hopefuls can’t distinguish between "to" and "too," their applications go into the bin.
Of course, we write for a living. iFixit.com is the world’s largest online repair manual, and Dozuki helps companies write their own technical documentation, like paperless work instructions and step-by-step user manuals. So, it makes sense that we’ve made a preemptive strike against groan-worthy grammar errors.
But grammar is relevant for all companies. Yes, language is constantly changing, but that doesn’t make grammar unimportant. Good grammar is credibility, especially on the internet. In blog posts, on Facebook statuses, in e-mails, and on company websites, your words are all you have. They are a projection of you in your physical absence. And, for better or worse, people judge you if you can’t tell the difference between their, there, and they’re.
Good grammar makes good business sense — and not just when it comes to hiring writers. Writing isn’t in the official job description of most people in our office. Still, we give our grammar test to everybody, including our salespeople, our operations staff, and our programmers.
On the face of it, my zero tolerance approach to grammar errors might seem a little unfair. After all, grammar has nothing to do with job performance, or creativity, or intelligence, right?
Wrong. If it takes someone more than 20 years to notice how to properly use "it’s," then that’s not a learning curve I’m comfortable with. So, even in this hyper-competitive market, I will pass on a great programmer who cannot write.
Grammar signifies more than just a person’s ability to remember high school English. I’ve found that people who make fewer mistakes on a grammar test also make fewer mistakes when they are doing something completely unrelated to writing — like stocking shelves or labeling parts.
In the same vein, programmers who pay attention to how they construct written language also tend to pay a lot more attention to how they code. You see, at its core, code is prose. Great programmers are more than just code monkeys; according to Stanford programming legend Donald Knuth, they are "essayists who work with traditional aesthetic and literary forms." The point: programming should be easily understood by real human beings — not just computers.
And just like good writing and good grammar, when it comes to programming, the devil’s in the details. In fact, when it comes to my whole business, details are everything.
I hire people who care about those details. Applicants who don’t think writing is important are likely to think lots of other (important) things also aren’t important. And I guarantee that even if other companies aren’t issuing grammar tests, they pay attention to sloppy mistakes on résumés. After all, sloppy is as sloppy does.
That’s why I grammar test people who walk in the door looking for a job. Grammar is my litmus test. All applicants say they’re detail-oriented; I just make my employees prove it.