Will AIs Replace ADs at Agencies?
This interview was originally conducted by Isadora Lorient, published in French by la Réclame, and translated into English.
For a few years now, the media, companies, and researchers have been suggesting that the job market will, within a few years, be profoundly reshaped by the automation of tasks and the spread of artificial intelligence. We were already discussing this in 2017 for creative jobs, but until recently the ads produced this way were anecdotal at best, even laughable.
Then, in the spring and summer of 2022, a technological leap took place. AI programs such as DALL-E (OpenAI), Stable Diffusion, and Midjourney emerged, making generative art accessible to almost everyone. These AIs can create photos, illustrations, and other works of astonishing realism in a few clicks - or rather with a few "prompts": short texts describing to the AI what kind of image to create, a sort of human-machine brief.
Do these new tools represent a danger for the art sector, as some online art communities suggest? Should illustrators and photographers fear for their jobs? And what about art directors in agencies? Or, on the contrary, will these professionals be able to use the technology to free themselves from repetitive craft tasks? David Raichman, Executive Creative Director of Social and Digital Creation at Ogilvy Paris and AI street photographer by night, gives a glimpse of the potential of these generative art solutions.
AI-generated art and visuals seem to have experienced a major qualitative upswing in recent months. Do you share this observation?
David Raichman: Absolutely. In recent months, AIs have become able to imitate art forms such as photography, illustration, graphic arts, painting, and 3D. It is becoming very difficult to tell work generated by an AI from a real artistic work.
Being a street-photography enthusiast, I had fun "going around the world" with AIs, as if I were taking street pictures in New York, Jerusalem, Lhasa, Paris, Bombay... The result was quite surprising. What is extremely interesting is that you can use the same parameters as a real photographer (shooting angles, lens aperture, shutter speed, film stock, etc.).
In fact, we have a lot more control than you might think! Beyond that, it's a personal approach. Everyone has their own method and way of working. Those who work in illustration or 3D will use their own conventions, different from mine.
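[Editor's note: as an illustration, here is a minimal sketch of how such a "photographic" prompt can be assembled; every scene and camera setting below is our own invented example, not one of David's actual prompts.]

```python
# Build a text-to-image prompt from photographic parameters.
# All values are illustrative.
settings = {
    "scene": "street photography, rainy evening in Bombay, candid crowd",
    "angle": "low-angle shot",
    "lens": "35mm lens",
    "aperture": "f/1.8, shallow depth of field",
    "shutter": "1/250s shutter speed",
    "film": "Kodak Portra 400 film grain",
}

# Tools like Midjourney or Stable Diffusion accept this as a single
# comma-separated prompt string.
prompt = ", ".join(settings.values())
print(prompt)
```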
AIs have never been as accessible as they are today: on the web, on Discord... The revolution of generative art also seems to lie in its accessibility.
D.R.: Absolutely. I've already created images on my phone in the street while walking, and even while waiting for the subway. AIs can be used anytime, anywhere. And thanks to completely open-source AIs like Stable Diffusion, people can even install them on their own computers.
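[Editor's note: a minimal sketch of what running Stable Diffusion on one's own machine can look like, assuming Hugging Face's open-source diffusers library and an openly released checkpoint; the model name, prompt, and settings are illustrative, not the setup David describes.]

```python
import torch
from diffusers import StableDiffusionPipeline

# Download an open-source Stable Diffusion checkpoint and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")  # a consumer GPU is enough for 512x512 images

# Generate an image from a text prompt, entirely on your own machine.
image = pipe(
    "street photography, rainy evening, 35mm lens, f/1.8, Kodak Portra 400",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("street.png")
```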
The only barrier is price. You can spend a lot of money on these tools - which are addictive, I'll admit [editor's note: David concedes that he spent around €100 on his latest image tests, which remains reasonable for tools of this kind]. But I think that in the future, they will be available at much lower cost.
What can you do, and not do, with programs like DALL-E or Midjourney today?
D.R.: First, there are ethical constraints. We can't create images that could incite hatred, and certain words are already banned in the "prompts" of some AIs. The most open platform is Stable Diffusion. DALL-E is quite difficult to tame: when Queen Elizabeth II died, for example - her funeral is taking place today - it was impossible to enter her name. In Midjourney, however, this was not a problem. We can also tell an AI which style interests us in terms of photographic rendering and give it artist references (Guillermo del Toro and H.R. Giger come up frequently). This can be problematic from an ethical point of view as well, but I find it interesting to mix styles to create your own.
Secondly, there are technical limitations. Today's AIs, for example, are not yet able to convincingly depict people kissing or embracing. The same goes for faces, which often come out deformed and incoherent. For this reason, users combine several AIs to make these representations more accurate and realistic. The GFP-GAN model, for example, is frequently used to improve the rendering of faces in an image generated by a tool like Midjourney or DALL-E.
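[Editor's note: to make that multi-tool workflow concrete, here is a minimal sketch of the face-repair step, assuming the open-source GFPGAN package released by TencentARC; the file names and weights path are illustrative.]

```python
import cv2
from gfpgan import GFPGANer

# Load the face-restoration model (pretrained weights downloaded separately).
restorer = GFPGANer(
    model_path="GFPGANv1.4.pth",
    upscale=2,                 # also upscale the result 2x
    arch="clean",
    channel_multiplier=2,
    bg_upsampler=None,         # leave the background untouched
)

# Repair the deformed faces in an AI-generated image.
img = cv2.imread("midjourney_output.png", cv2.IMREAD_COLOR)
# enhance() returns (cropped_faces, restored_faces, restored_img)
_, _, restored = restorer.enhance(
    img, has_aligned=False, only_center_face=False, paste_back=True
)
cv2.imwrite("restored_faces.png", restored)
```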
You can also give AIs "starting images" to influence their results - something Stable Diffusion does very well. That brings more freedom, but also constraints. Today on Midjourney, I think you can't go beyond two or three starting images, but tomorrow you will be able to give it many more.
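[Editor's note: a minimal sketch of that "starting image" (image-to-image) idea, assuming Stable Diffusion via the diffusers library; the strength value is what trades fidelity to the input against the AI's freedom. The file names and prompt are invented.]

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The starting image that will guide the generation
# (older diffusers releases called this argument init_image).
init = Image.open("my_sketch.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="oil painting of a milkmaid in a sunlit kitchen",
    image=init,
    strength=0.6,       # 0.0 = copy the input, 1.0 = nearly ignore it
    guidance_scale=7.5,
).images[0]
result.save("variation.png")
```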
Apart from that, there are no real limits to what we can do with AI today. On the contrary, there is a huge field of possibilities.
Have you already been able to use these AIs for a campaign?
D.R.: Yes, just now! We have just launched a campaign for La Laitière. For context: a few days ago, DALL-E and its latest "Outpainting" feature revealed what might lie hidden outside the frame of Vermeer's Girl with a Pearl Earring. We did the same, but with The Milkmaid.
This painting by Vermeer inspired an emblematic advertising saga. That's why, in the age of digital and web 3.0, we wanted to put AI at the service of this idea and let it imagine a whole environment around the original work. A timeless scene signed: "C'est si bon de prendre le temps" ("It's so good to take your time").
Watch the progression of the collaboration with the AI Outpainting feature in the creation of the artwork for Nestlé's brand La Laitière by Ogilvy Paris.
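[Editor's note: for readers curious about the mechanics, here is a rough sketch of the outpainting principle using OpenAI's image-edit endpoint, which works like the web Outpainting tool: the source image is pasted onto a larger transparent canvas, and the model fills in the transparent areas. The prompt, file names, and canvas size are our own illustration, not Ogilvy's actual workflow.]

```python
from PIL import Image
from openai import OpenAI

# Place the source image in the center of a larger transparent canvas;
# the transparent border is what the model will paint in.
src = Image.open("milkmaid_detail.png").convert("RGBA")
canvas = Image.new("RGBA", (1024, 1024), (0, 0, 0, 0))
canvas.paste(src, ((1024 - src.width) // 2, (1024 - src.height) // 2))
canvas.save("canvas.png")

client = OpenAI()
result = client.images.edit(
    model="dall-e-2",
    image=open("canvas.png", "rb"),
    prompt="a 17th-century Dutch kitchen around a milkmaid, in Vermeer's style",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)  # URL of the extended image
```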
These AIs feed on visual data - in short, their inspiration databases. Can we imagine that in the future you could feed an AI a brand's entire artistic history, so that the visuals it generates respect the brand's graphic charter and heritage?
D.R.: To some extent, this is already the case. When you give an AI the name of a well-known brand, it knows the brand's references and history. For example, an AI recently produced a great campaign for Heinz - a ketchup that dominates the collective unconscious. The AI can grasp a brand's visual heritage, but it's the prompts that set the tone, steer it in the right direction, and spell out what the user wants - or doesn't want. As for words, these visual AIs don't render them very well. On the other hand, having alphabets or brand-specific typography generated is something that could become possible in the future.
Beyond that, there are not only visual AIs: GPT-3 (OpenAI) is a text-generation model. Today, it is already possible to imagine having an AI write copy in a brand's tone of voice.
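[Editor's note: a minimal sketch of asking GPT-3 for on-brand copy, using the OpenAI completion API as it existed around the time of this interview; the brand-voice description in the prompt is our own invention.]

```python
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 model available in 2022
    prompt=(
        "You write for La Laitière, a brand whose voice is warm, unhurried "
        "and nostalgic. Its signature is 'C'est si bon de prendre le temps'.\n"
        "Write a two-sentence social media post about autumn desserts:"
    ),
    max_tokens=80,
    temperature=0.8,
)
print(response.choices[0].text.strip())
```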
As a reminder: an AI is neither a human being nor a mere tool; it is somewhere in between. Faced with a problem, it will put forward solutions - certainly not all of them good - but it is a genuine source of proposals. That is what is new. Still, we can't simply ask an AI to find us a slogan or a logo; there is a whole human process behind that.
Is the prompt a new kind of brief?
D.R.: It's a form of brief, but it's also almost an art form. I've written 200-word prompts before - which is pretty substantial. But it's a way of being hyper-precise and influencing how the AI will generate something. There's an art to the design of the prompt.
In the past month, marketplaces have sprung up where people sell the prompts behind beautiful images.
There is also a platform, Replicate, that offers prompt reverse-engineering: give it an image, and the AI suggests the prompt behind it. But that's not all - some AIs even help generate quality prompts. Others generate prompts endlessly: just give one a word, and it comes up with countless variations on the original idea.
The prompt has become the lifeblood of these creations. It is with the prompt that we create.
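[Editor's note: a minimal sketch of that reverse-engineering step through the Replicate Python client; the model reference below is an assumption (community models such as img2prompt offer this service) and would need to be pinned to a specific published version in practice.]

```python
import replicate

# Given an image, ask an image-to-prompt model to guess the prompt behind it.
with open("mystery_image.png", "rb") as f:
    guessed_prompt = replicate.run(
        "methexis-inc/img2prompt",  # illustrative model reference
        input={"image": f},
    )
print(guessed_prompt)
```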
Are these AIs a threat to the professions of illustrator, photographer, or art director?
D.R.: Recently, many photographers have posted alarmist videos, fearing that robots will steal their jobs. This is a major innovation: we're going to have to reinvent our craft, improve it, and above all "embrace" the technology, rather than denouncing something that is quite extraordinary.
The field of illustration in particular will have to reinvent itself; we can imagine AIs producing roughs (visual outlines of an illustration, a layout, or a storyboard) very quickly.
Photography jobs may also be affected, but I don't think AIs will destroy them. Let's say it's an advance that can help photographers go further. It's another kind of creation, as computer graphics once was.
In Photoshop, a lot of plugins are already coming out (built directly on AIs) that help art directors achieve extra image effects or handle difficult tasks. AI is not new in this field: Photoshop's Neural Filters, for example, can already change a gaze in a portrait, add depth of field, and so on.
What is new here is that we can use Outpainting to create beyond the frame while keeping a given style. We should see this innovation as a way to speed up our processes (or ways of producing) in the service of something vaster. It can be a real accelerator for production. That's how I see it, rather than as a threat.
Today, brands often react to news with memes. I think that tomorrow, these reactions will be made with images produced by AI, in a much more creative way.
And beyond brand reactions?
D.R.: I think AI is not just about generating beautiful images; it's also about bringing more concept into our image production. There has been a tsunami of generated images (on Instagram), and it's interesting to see how you can stand out from all of that and have your own style. That has been one of my quests: how do I keep my own style in there, rather than the AI's? This was a problem on Midjourney, since that platform has a very pronounced style of its own.
There is a real issue here for the artist and designer of tomorrow. The sector is exploding; every week brings something new. Nobody today can know exactly what impact it will have. There will surely be an impact on fashion, for example, but also on video: an AI, still experimental for now, is reportedly able to tell stories from a script. We can already start building scenarios and creating animated artworks... Well, whether we can call that a work of art is another question!