This is the web version of Eye on A.I., The Washington City Times’s weekly newsletter on artificial intelligence and business. Sign up here to get it delivered to your inbox every week.
Imagine the life of an ad man and a scene from Mad Men probably comes to mind: Don Draper charms a few Kodak marketing executives with a perfectly crafted pitch on the emotional pull of nostalgia (“It’s delicate, but potent …”) to win the account for their new slide projector. “This device is not a spaceship,” Draper tells the enchanted Kodak men about their slide carousel in a famous pitch from the television show. “It’s a time machine.”
Well, it turns out those days have mostly gone the way of three-martini lunches, skinny ties, office smoking, and widely tolerated workplace sexual harassment. In the digital age, instead of a high-stakes, high-wire act built around big concepts, advertising has largely become a volume game. Marketing departments and creative agencies need to produce dozens or hundreds of variations of digital ads for Facebook, Instagram, or web banners, each with slightly different images, display text, and calls to action, then run a series of A/B experiments to find out what works for a given target audience. It’s a slog.
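To make the slog concrete: deciding which of two ad variants actually "works" usually comes down to a statistical comparison of their click rates. Here is a back-of-the-envelope sketch using a standard two-proportion z-test; the function name and the click numbers are made up for illustration, not drawn from any company mentioned here.

```python
import math

def ab_ztest(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: is variant B's click rate really higher?"""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    return (p_b - p_a) / se  # z-score for the difference in rates

# Hypothetical numbers: variant B gets 165 clicks vs. A's 120,
# each shown 10,000 times.
z = ab_ztest(clicks_a=120, views_a=10_000, clicks_b=165, views_b=10_000)
print(round(z, 2))  # 2.68 — beyond 1.96, so significant at the 5% level
```

Multiply this by ten variants per campaign and per audience segment, and the appeal of automating the whole loop becomes obvious.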
A few weeks ago I wrote about a company trying to use machine learning to take some of the drudgery out of this work by automating the testing of various ads. Today I want to talk about another one: Pencil, a startup that builds A.I. to create the ads themselves. Based in Singapore, but with employees working remotely around the world, Pencil can automatically generate dozens of six-, ten-, or fifteen-second Facebook video ads in minutes.
“The advertising industry has moved from big ideas to small ideas,” said Will Hanschell, co-founder and CEO of Pencil. “Instead of a Super Bowl ad that costs millions of dollars once a year, it is increasingly about very small online ads. And in that environment, you have to run 10 ads, throw out the 9 that don’t work, and start over with another 10. That has made the work unpleasant for many creative people.”
Pencil hopes to free these creative folks to work on the big picture while A.I. does the rest. “It cuts videos into scenes, generates text, applies animations, and then uses a predictive system that looks at the variations and tries to determine what feels most novel and most resembles things that have worked for the brand in the past,” says Hanschell.
A company gives Pencil’s software the URL of its website, and the software automatically grabs the logos, fonts, colors, and other “brand identity information” found there to use in the company’s advertisements. It can use images from the website, or the company can feed the system additional images or video. Pencil uses computer vision to understand what’s going on in an image or video so that it can match the ad text to it. To write the copy itself, Pencil uses GPT-3, the ultra-large natural language processing A.I. built by OpenAI, the San Francisco A.I. research lab.
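The brand-scraping step is the most mechanical part of that pipeline. Pencil hasn't published how its scraper works, but a minimal sketch of the idea — pull logo images and brand colors out of a page's markup with simple heuristics — might look like this (the `BrandAssetParser` class and the sample page are invented for illustration; a real system would fetch live HTML and go much further, including fonts and imagery):

```python
import re
from html.parser import HTMLParser

class BrandAssetParser(HTMLParser):
    """Collect likely logo images and brand colors from page markup."""
    def __init__(self):
        super().__init__()
        self.logos = []
        self.colors = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Heuristic: an <img> whose src or class mentions "logo"
        if tag == "img" and "logo" in (attrs.get("src", "") + attrs.get("class", "")).lower():
            self.logos.append(attrs["src"])
        # A declared theme-color is a strong brand-color signal
        if tag == "meta" and attrs.get("name") == "theme-color":
            self.colors.add(attrs.get("content", "").lower())
        # Hex colors appearing in inline styles
        for hex_color in re.findall(r"#[0-9a-fA-F]{6}", attrs.get("style", "")):
            self.colors.add(hex_color.lower())

# A stand-in for a fetched homepage
page = """
<html><head><meta name="theme-color" content="#1a73e8"></head>
<body style="color: #202124">
  <img class="site-logo" src="/assets/logo.svg">
</body></html>
"""

parser = BrandAssetParser()
parser.feed(page)
print(parser.logos)            # ['/assets/logo.svg']
print(sorted(parser.colors))   # ['#1a73e8', '#202124']
```

The harvested assets would then be available as templates and palettes when the system assembles ad variants.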
Hanschell says that when Pencil started out using GPT-3’s predecessor, GPT-2, the ad text it generated was usable only 60% of the time. Now, with GPT-3 and a better understanding of how to prompt the system using a brand’s existing web copy, Hanschell says it generates usable copy 95% of the time. The system can also come up with new ideas, he says. For example, for a company that sells protein powder, it might suggest ads built around energy, as well as around morning rituals or fitness.
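"Prompting the system with existing web copy" typically means seeding the model with examples of the brand's own voice, then filtering what comes back. Below is a minimal sketch of that two-step pattern, under the assumption that generations are post-filtered by simple rules; `build_prompt` and `is_usable` are hypothetical helpers, not Pencil's actual API, and the brand and copy are invented:

```python
def build_prompt(brand, existing_copy, theme):
    """Few-shot prompt: show the model the brand's voice, then ask for more."""
    examples = "\n".join(f"- {line}" for line in existing_copy)
    return (
        f"Brand: {brand}\n"
        f"Existing copy:\n{examples}\n"
        f"Write a short ad headline about {theme}:"
    )

def is_usable(headline, max_words=8, banned=("free!!!", "guaranteed")):
    """Crude post-filter: reject copy that is too long or spammy."""
    words = headline.split()
    return 0 < len(words) <= max_words and not any(
        b in headline.lower() for b in banned
    )

prompt = build_prompt(
    "Acme Eyewear",
    ["Your frames, your way", "See the world differently"],
    "morning rituals",
)
print(prompt)
print(is_usable("Start your morning with clearer vision"))  # True
```

In practice the prompt would go to a language-model API and the filter would be far more sophisticated, but the shape of the loop — seed with on-brand examples, generate, keep the usable fraction — is the same.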
I watched a demo of Pencil’s software in which it created a series of Facebook ads for an eyewear company. It came up with the slogans “Your frames, your way” and “Your wildest looks, perfectly crafted,” each paired with appropriate still images. Not exactly Don Draper. But not bad. And as Hanschell points out, in the volume game of today’s digital advertising jungle, good enough to win customers.
The system can also predict how well a particular ad will perform compared with what the company has run in the past. For example, it predicted that the “Your wildest looks, perfectly crafted” ad would do 55% better than the same company’s previous ads. That is something most human advertisers cannot do.
Pencil is already used by about 100 companies, including some large multinationals such as Unilever. It is a good example of a new generation of products – and indeed entire companies – made possible by rapid advances in natural language processing, or NLP. (For more, listen to the latest episode of The Washington City Times’s Brainstorm podcast. And last year, my Washington City Times colleague David Z. Morris wrote about several other companies that sell A.I. to automatically create or refine digital ads.)
But at the same time, more and more ethical concerns are being raised about the underlying NLP systems. GPT-3, for all its apparent power, still fails simple tests of common sense. It also has a bias problem: because it was trained on text from across the internet, there’s a good chance it has picked up a tendency to produce sexist or racist prose.
One area where OpenAI itself has already acknowledged a problem: the system can display a clear anti-Muslim bias, with a tendency to portray Muslims as violent. A recent paper by two Stanford researchers found that GPT-3 associated Muslims with violence in more than 60% of cases – and that the system also wrote about Black people in negative contexts.
This led tech journalist Dave Gershgorn, who covers A.I. for the tech site OneZero, to wonder why OpenAI would allow the system to be used commercially at all, and why OpenAI’s investor and partner Microsoft would build GPT-3’s capabilities into its own products. How broken does an A.I. system have to be, Gershgorn asked, before a tech company decides not to release it?
I asked Hanschell about the potential for bias. He noted that OpenAI has developed filters that screen out some of the worst examples. And he said that in Pencil’s case, ads never run without a human approving them first. “One of the principles here is that we always wanted a person to be in control,” he says.
So I guess we can’t go all the way back to those three-martini lunches just yet. There is still work to be done.
With that, here is the rest of the A.I. news.