Ever since the term “robot” was coined in 1920 (and popularized by Karel Čapek’s play, R.U.R. — Rossum’s Universal Robots), someone has been worried about robots taking over their jobs. A few years ago, National Public Radio (US) ran some stories about programs that could write news stories. (One here: https://www.npr.org/sections/money/2015/05/20/406484294/an-npr-reporter-raced-a-machine-to-write-a-news-story-who-won.)
I’ve read a few of these stories – honestly, there are only so many ways to report the weather on 90 percent of days. Or to report on the stock market. In the vast majority of cases, you can randomly select a template, plug in that day’s numbers and adjectives, and you have readable information.
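Just to make that concrete: here’s a minimal sketch of the template-filling idea. The templates, function name, and values are all hypothetical, but this is the basic shape of simple automated report generation.

```python
import random

# Hypothetical templates for an automated weather blurb. Each slot
# gets filled with the day's numbers and adjectives.
TEMPLATES = [
    "Expect a high of {high}F and a low of {low}F, with {sky} skies.",
    "A {sky} day ahead: temperatures between {low}F and {high}F.",
    "Look for {sky} conditions, topping out near {high}F.",
]

def weather_blurb(high, low, sky):
    """Randomly select a template and plug in the day's values."""
    template = random.choice(TEMPLATES)
    return template.format(high=high, low=low, sky=sky)

print(weather_blurb(high=72, low=55, sky="partly cloudy"))
```

With a few dozen templates, the output rarely repeats word-for-word, which is most of what a routine weather or market brief needs.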
Some artificially generated fiction can be strangely moving and seemingly full of thought. That’s because the reader is expected to do some of the work in fiction – she searches her brain for the source of allusions, or makes the connections that make the subtext clear. Is there so much difference between certain types of highly experimental fiction and vague robotic meanderings? As far as satisfaction goes, I think they both can deliver. Not every piece, of course. It’s Sturgeon’s Law that 90 percent of any genre is dreck. I think that goes for non-human writing, as well. Computers, over the short term, generally do better than an infinite number of monkeys plonking away on typewriters.
However, the inputs still matter. Today on Twitter, Janelle Shane shared some of her results from a neural net.