
This Twitter Bot Was Trained to Tell Jokes With A TensorFlow Recurrent Neural Network – It Turned Racist, So It Had to Go

Content Warning: Some NSFW bot language

I trained a Twitter bot with a recurrent neural network (RNN) to tell question-answer jokes. Once in a blue moon, it would say something amusing. But most of the time — a startlingly high percentage of the time — it would say something bizarre and offensive. So, I killed it.

The ‘JokeBot’ I made uses textgenrnn, a Python 3 module built on top of Keras/TensorFlow for creating char-rnns (character-level recurrent neural networks). It lets you train a text-generating neural network of any size and complexity on any text dataset with just a few lines of code.
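
For reference, training and sampling with textgenrnn really is only a few calls. Here is a minimal sketch (the filename is a stand-in, not my exact script):

```python
from textgenrnn import textgenrnn

# Start from textgenrnn's bundled pretrained weights
textgen = textgenrnn()

# 'jokes.csv' is a stand-in name for the training file
textgen.train_from_file('jokes.csv', num_epochs=5)

# Print a few generated samples at a given temperature
textgen.generate(5, temperature=0.5)
```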

The LifeHacker post, How to Train Your Own Neural Network, provides a good introduction to this technology if you would like to experiment with training a recurrent neural network yourself.

The training data for this ‘JokeBot’ comes from a CSV of 25,000+ question-answer jokes. The dataset was compiled from r/jokes and is available on Kaggle. All the jokes follow the same format. At a quick skim, the list appears pretty PG — the type of jokes you would find in a bathroom joke book.

The model was trained for 5 epochs. Each training epoch with this dataset took about 4 hours on my 8GB MacBook. When generating output, I experimented with different ‘temperatures’ (a sampling parameter that works like a creativity dial).
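
Under the hood, temperature rescales the network’s next-character probabilities before sampling. This is a rough illustration of the idea, not textgenrnn’s actual code:

```python
import numpy as np

def sample_with_temperature(logits, temperature):
    """Sample a character index from temperature-scaled logits.

    Low temperature sharpens the distribution (safe, repetitive picks);
    high temperature flattens it (more 'creative', more gibberish).
    """
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(scaled - np.max(scaled))  # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)
```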

At a temperature of 0.1, you get very basic output. The content is not perfect, but it’s recognizable.

  • What do you call a dog with no arms and a bar? A bathroom and a bad car in the back.
  • What do you call a police of the star that was a bad person who was a bad day of the star? The balls is a bad person.
  • What do you call a blonde and a pizza? The shoes in the bathroom.

At 0.2, the content gets fairly nonsensical and the standard question-answer format is out the window.

  • What do you call a politician who thought of the bathroom so should have a cow?Because they’re too cheeson.
  • Why do the special computer say to the sea searching the store on the constipation of the pickup and a bar of the constipation in the doctor say to the constipation?The she can see the special to the to the box?The starter doesn’t want to see the ball of the pickup.   
  • What do you call a great people who was a great thing about the horse?They were too long to the other is too long?They are the other in the stranger to the other is a dead baby?They don’t have to pull out of the standing to the other is a shit?He was a stand of the restaurant to the best to the ot

At 0.5, things get weirder, but are still somewhat recognizable.

  • What do you call a communist burnt his girlfriend who sits of the side of the boat?They’re both both signs
  • What is the most explain and leaves a homeless bank?The monicastic programmer
  • What do you call a small new car?A country.

At 1.0, the output gets so ‘creative’ that it’s gibberish.

  • Did you hear about that funniest onsherfuckers duck?We’ve got Obsee, then 9998 essabs and is sexy listen in the O Ber sex?Heavy that was if he got his dad to drive?Rob  
  • Wife: Vegetable is what not the Al Trump Wizzles Halgen?”A dry datal. General.
  • wWWvanis and the road get own meat he had ross a wariacces around the ghost, there arent on the joke?They parket the physsem.” Divo  –

At 1.5, the bot is off the rails and peppering unicode into the mix.

  • What do rowestrysee is fun..?Spread: Josep:♭ hits an emo plagettͮn’s. (Smells Mp)  I white is compliment. Thk ice, island, getting tip goes-see sizen.
  • Why did The Four Clue here roles need ohviders. Till. ?Just a kiggerprumer?uge of a lye!aques-dyer. We’ll leak In George➿Dedute Cho (if keeps reseam you nispie)ruow)
  • What Iyopher☆ Armnthor:  ?eprangus.

As you can see, a certain level of human curation would be needed even for the less-creative 0.1-temperature output to generate readable tweets with proper syntax, let alone sentences that actually make sense.

Having trained the RNN, my plan was to generate 2,500 new jokes, skim through them and throw out anything totally erroneous, load the rest of the list into Cheap Bots Done Quick!, add more training data, and repeat. (CBDQ is a real time-saver compared to setting up the Twitter integration yourself. I also used CBDQ and Tracery to build @Stinkpiecebot and @WordsofMcCarthy.)
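
A sketch of that pipeline (the filenames are hypothetical; CBDQ expects a Tracery grammar, where the ‘origin’ rule is what gets tweeted):

```python
import json
from textgenrnn import textgenrnn

# Load previously trained weights (hypothetical filename)
textgen = textgenrnn('jokebot_weights.hdf5')
textgen.generate_to_file('candidates.txt', n=2500, temperature=0.5)

# ...manually curate candidates.txt into curated_jokes.txt, then
# convert the survivors into a Tracery grammar for CBDQ...
with open('curated_jokes.txt') as f:
    jokes = [line.strip() for line in f if line.strip()]

with open('jokebot_tracery.json', 'w') as f:
    json.dump({'origin': jokes}, f, indent=2)
```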

However…once the script started spitting out results, it became apparent from the first sample that there was a problem.

Of the first 20 tweets generated, 75% started with:

  • ‘What to you call a black guy who…’

The other 5 tweets were about ‘gays’, ‘prostitutes’, ‘pedafiles’, ‘condoms’, and ‘balls’.

The joke answers are all nonsensical but, nonetheless, this is not a good look.

I ran larger samples at slightly different temperatures. Same results.

I went back and checked the original joke dataset a second time. On closer inspection, some of the jokes are in fact pretty ‘blue’. Others contain offensive, race-based humor. About 3% of the entries contain profanity.
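
A figure like that is easy to check with a quick scan. This is a rough sketch, assuming the Kaggle CSV uses ‘Question’ and ‘Answer’ columns, and using a tiny stand-in blocklist rather than a real profanity list:

```python
import csv

# Stand-in blocklist; a real audit would use a fuller profanity list
BLOCKLIST = {'damn', 'hell', 'crap'}

flagged = total = 0
with open('jokes.csv', newline='', encoding='utf-8') as f:
    for row in csv.DictReader(f):
        total += 1
        words = (row.get('Question', '') + ' ' + row.get('Answer', '')).lower().split()
        if any(w in words for w in BLOCKLIST):
            flagged += 1

print(f'{flagged}/{total} entries flagged ({100 * flagged / total:.1f}%)')
```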

The offensive jokes represent a relatively small percentage of the overall 25k+ list, but even a small share can be enough to influence the model. Neural nets have been observed to inexplicably fixate on random objects and associations. This one only wanted to talk about race.

(At one point, the bot spit out a random string that taught me something new: “Have you kn卍tegressed a giving she justs efercut?Snabai-. and orpan.” Evidently, there is such a thing as a swastika emoji. While this result is not connected with the training data, it goes to show that a neural net can sometimes do things you don’t want it to do and would never expect.)

At best, the content this bot produced is ‘problematic.’ At worst, it’s overtly racist. However you want to spin it, the content was not appropriate to publish.

Thus, the JokeBot was DOA. It had to be killed and was never released into the wild.

He was soon borne away by the waves, and lost in darkness and distance.

I’ll try again with different data and a different concept.


AI learning sexist, racist, and otherwise undesirable associations is a legitimate concern. It has been demonstrated that neural-net-powered bots can easily be compromised by adversarial actors to this end. Microsoft’s Tay AI Twitter bot was notably goaded by trolls into spitting out offensive tweets after less than 24 hours in the wild.

It’s also worth noting that bots can be inadvertently trained to produce undesirable results. Had Tay merely observed ‘typical’ discourse on Twitter over a longer period, the end result might not have been categorically different.

With the accessibility of machine learning software such as TensorFlow, and all the creative applications for bots out there, it will be exciting to see what RNN-powered bots will create in the near future.

There is also cause to be careful.

I’m all for letting neural nets ‘get weird’ — and I am wary of censorship — but developers should be vigilant when selecting training data and developing algorithms. A dataset that looks fine superficially may not actually be fine, or may not produce acceptable results. “Good” or “acceptable” will also mean slightly different things to different people, so it’s worth starting the discussion of neural net and bot ethics early.

This JokeBot is a very apparent manifestation of a neural net application inadvertently going awry, and it was easy to catch and kill before it went into production. More complex applications of this technology will not always reveal their failings so obviously.


