A principal designer at Adobe, a speaker, and a champion for artists, Brooke Hopper is passionate about designing better experiences for some of the most talented people in the world. She joins live at SXSW to discuss the role of machine intelligence and new technologies in creativity.
Debbie Millman:
In the fall of 2022, ChatGPT was unleashed on the world. And ever since then we’ve seen an explosion of interest in artificial intelligence and an explosion of worrying questions. Will AI eat my job? Will AI destroy humanity? And for many creative people, the specific worry has been, will AI steal my work? Or on a more hopeful note, will it help me do my job? I recently had an opportunity to pose such questions to someone with some clarifying answers. Brooke Hopper is the principal designer for machine intelligence and new technology at Adobe. I spoke with her in front of a live audience on March 9, 2024 at the South by Southwest Festival in Austin, Texas. Welcome, Brooke.
Brooke Hopper:
Thank you.
Debbie Millman:
So I understand you grew up in an isolated rural area in Kansas, and your aunt, who was an interior decorator, was one of your first artistic inspirations. And I’m wondering if you can share in what way?
Brooke Hopper:
Yeah, my aunt owned an interior decorating and design store and when I was eight, she gave me the opportunity to design the storefront. And I don’t know if that was a brilliant idea or not, but it got me into this idea that creating was a big part of who I was. My father was also a food scientist. He did R&D. And so I think creativity and being creative has just been a big part of who I was, regardless of where I grew up.
Debbie Millman:
I understand that when you were 12 years old, you begged her to hire you and she did, though you thought she did this out of pity. So tell us what kind of work, aside from the work that she asked you to do at 8 years old, you were doing by the time you were 12.
Brooke Hopper:
I was doing some accounting stuff. Again, it was more filing papers. I was helping her clean, just general help around the store, but I really wanted to be immersed in the patterns, the colors, the textures. It’s something that I was just enamored with.
Debbie Millman:
Now, the accounting that you did for her, is that what inspired you to originally go to college for finance?
Brooke Hopper:
I thought I needed to be smart about what I was doing and that I should make money. And as far as I knew at that point, I actually, again, I had no idea what graphic design was. I don’t think I’d even heard of the term, which shows you how isolated of an area I grew up in, but I thought I needed to make money. And that seemed like-
Debbie Millman:
Well, graphic design is the big sort of pile of money in the sky just waiting to be grabbed.
Brooke Hopper:
Right. It is to this day. No, I thought I needed to be smart and I needed to make money and to me, art did not equal money. And so finance equaled money.
Debbie Millman:
You got your BFA from Purdue University, then you went on to get an MFA in design and interaction from the California College of the Arts. You graduated in 2015 and then you got your job at Adobe, where you’ve been ever since. You started as a senior experience designer, then moved up to lead experience designer in drawing and painting, then principal designer in drawing and painting. Since 2022, you’ve been the principal designer in emerging design, artificial intelligence and machine learning.
Brooke Hopper:
Yep, that’s correct.
Debbie Millman:
So the first question I want to ask you about all of that is a simple one, and I want to apologize to the audience in advance if any of these seem fairly rudimentary, entry level, but I think it’s important to really start at the ground level. So the first question is really a simple one. What is the difference between artificial intelligence, machine learning and generative AI? And what are some examples of each?
Brooke Hopper:
I love this question because my favorite thing to ask is, how many people in this room use spell check? And I better see everyone’s hand go up because you do it every single day. That is artificial intelligence. We’ve been using artificial intelligence for decades across all different sectors of our lives. Banking relies on artificial intelligence to detect things like banking fraud. There’s this whole big bucket that is artificial intelligence that we use all day every day, and we don’t even realize it. A subset of AI is machine learning, and that is essentially just machines understanding and recognizing patterns. Speech detection is an example of machine learning. And then there’s deep learning. These are all just circles within each other, and under deep learning is actually generative AI. So generative AI is a smaller subset of machine learning and deep learning. And so they’re all very different, and I think one of my big passion points is helping people understand what AI is, because I feel like we throw around this term AI. It feels like it’s new, and it’s actually not new at all. We’ve been using it forever.
Debbie Millman:
Is it possible to teach machines morals, empathy, or compassion?
Brooke Hopper:
That is also one of my favorite questions to answer. I love to quote a woman, Ovetta Sampson. She’s amazing. And she says, “Data comes from humans and humans have bias.” And I think that’s one thing to remember is these machines rely on information that we as humans put out into the world. And so humans are biased ultimately. Whether we try to be or not, we are. And so therefore the machines are. And so we need to do things in order to mitigate that bias and the issues that come along with candidly being human.
Debbie Millman:
So, how has AI been impacted by human bias?
Brooke Hopper:
I think we see it today. I think one of the biggest issues, I think just broadly within the banking industry, I know that they deal with a lot of bias when it comes to approving people for loans and mortgages and things like that. I think that’s probably one of the biggest ones from a general term. When we’re speaking about generative AI, we still see tons of issues. I think it was last week there were some issues on ethnicity and racial diversity in the outputs. I was just listening to one of the keynote sessions before this and we’re still generating white male CEOs. And so there’s a lot that has to be done, again, to mitigate a lot of those patterns that the AI is seeing within what’s being put out online or in the world.
Debbie Millman:
Is it actually possible to ensure that the data that’s being used to train AI is actually fair?
Brooke Hopper:
Oh yes, absolutely. Right now there are a lot of AIs that are just trained from data on the internet that’s not licensed, where the copyright isn’t owned. And I think one of the most important first steps we should be taking is that these machines should be trained on data that’s licensed, that you do have ownership of, that is public domain, that isn’t copyrighted necessarily, or that we have access to use. And this is one of the big passion points of mine. We were joking before about the loads of money available in graphic design.
Debbie Millman:
That was sarcastic.
Brooke Hopper:
I know. Art and design is not a high-paying industry. And a lot of times… I was principal designer for drawing and painting, and so I worked with a lot of illustrators, and many of them have to have a regular full-time job just to do their hobby or their passion, which is illustration. And so I think the thing that’s difficult for me to see is their content being used to train the machines that are outputting content that looks very much like what it is that they’re trying to do and trying to make a living on.
Debbie Millman:
So how do we enforce that? How is it enforceable for AI to be relegated to only being able to use non-copyrighted work or work under a commons type of license?
Brooke Hopper:
Yeah. And I think there’s a couple ways that that can be approached. One is just companies saying, “Hey, look, I’m going to sign up for this. I am willing to train my models on data that I have license to use.” I think there’s a couple other things that, being from Adobe, we’re doing. We started the Content Authenticity Initiative in 2019 to help avoid some of the deepfake issues that were going on, but that also includes embedding metadata in the content that’s being created that tells how and when it was created. In addition, being able to tag content with “do not train” credentials. And so there are other companies that are also working towards this, but I think just laws and rules and regulations aren’t going to cut it. We need to actively be pursuing ways that we can make sure that there are artist protections and that creators are being protected.
Debbie Millman:
One of the things that I learned a long time ago when signing on to a site that needed passwords was the quizzes that they would give you about typing a word or picking out a stop sign or a streetlight. And I found out fairly recently that all of those tests were being used to train AI. And it would have been nice to know that I was providing this information, not that it would have changed my ability to do it or not do it. If I wanted to get into the site, you’re forced to do it. But I’m wondering if there will ever be a way for laypeople just on the internet to be able to decide what they’re contributing to or not?
Brooke Hopper:
Absolutely, there should be, and I know that there’s a lot happening right now in Congress and the Senate to help protect just normal people with their data. I don’t think any of us want that information being out there. Unfortunately, we live in a world where our data is being taken all day every day, and that’s not the point to get into that part of it, but I think there’s a couple things. There’s the transparency on the end of the companies who are using that data. And then I think there’s also an education piece for people who are signing into those sites or being part of this, because there will be bad actors. There always will be.
We can’t stop them, but the general population can be educated on how to spot a deepfake, how to know if a website is not secure. I think a lot of us look for the HTTPS on a URL and think, “Oh, this website is secure.” And so I think that there are things that will need to happen, and are honestly probably actively happening right now, to help us understand how and when to spot things that are either a deepfake or content that’s created in the style of X creator who didn’t actually create that information or that content.
Debbie Millman:
That actually is inspiring me to ask a question that I hadn’t thought to ask. And that is how do you or how can you spot a deep fake?
Brooke Hopper:
Hands.
Debbie Millman:
Hands?
Brooke Hopper:
For generative AI. But I think that that’s something that we’re still grappling with, because as the technology to create deepfakes gets better, and unfortunately it’s the same technology that’s helping people create new and different content, I think that a lot of it is going to have to boil down to information that is behind the scenes, embedded in that content somehow. And I don’t have the answer to that. I wish I did, but what I do know is that Adobe’s chief legal officer is in Washington, DC advocating for things like this, and so are many other companies. And so like I said, I don’t have the exact answer other than looking in detail, but I am very hopeful that as a society and as a culture and as a human population, we will grow to overcome some of these issues that we’re dealing with.
Debbie Millman:
Yeah. It’s one thing when somebody makes a deepfake sex tape of Taylor Swift. It’s quite another thing if somebody decides, “You know what, I’m going to make a deepfake sex tape of Debbie’s sister.” Taylor can go out and say, not only, “Don’t look at this,” but she can somehow get her Swiftie clan to pull it down. My sister can’t. So do you know of any protections, any way of thinking about how to enforce personal privacy?
Brooke Hopper:
There is an act called the FAIR Act that, again, I feel like I’m a marketing commercial for Adobe, but there is a lot that we’re actually doing on this front because it’s a super important issue. And so there’s the FAIR Act, which gives humans the right to go after the person who is creating the deepfake. And so there is a lot happening on that front because it affects everyone. It could affect me, it could affect you, it could affect anyone in the audience, and that’s not right.
Debbie Millman:
Actually, it leads me to something that I am just learning about. I do a lot of work with an organization called the Joyful Heart Foundation and the Joyful Heart Foundation is in existence to eradicate sexual violence. And I work very, very closely with Mariska Hargitay, who’s the star of Law and Order SVU to help eradicate sexual assault, domestic violence, child abuse, and the rape kit backlog. And we’re just beginning to become aware of and start to work against something that’s being called image abuse, which is when your image is being used without your permission in some nefarious way. And so it’ll be interesting to see where AI fits into these new laws that we hope to bring into Congress to be able to pass to prevent things like this legally. Morally we know it’s terrible, but if nothing is done legally, then morality is always a little bit of a slippery slope.
Brooke Hopper:
Yeah, right. I think the thing is, look, throughout history there have been people doing things that are not right. It’s not okay, they’re copying someone else’s work, they’re copying someone else’s likeness. It’s sort of a same story, new technology situation. And I think a lot of the question is how do we combat what the technology is enabling, because there are always going to be bad actors. And I think it’s about approaching that.
Debbie Millman:
You mentioned transparency before, data transparency, and I’m wondering how can humans ensure transparency in how AI makes decisions?
Brooke Hopper:
And I think that is, and again, I feel like I’m repeating myself, but it’s true and I’m super passionate about this, embedding the information that this was made using AI. Here is what AI did to help me in this process. And again, that’s something that as Adobe we’re doing. So we have a generative family of tools called Adobe Firefly, and anything that is created with Adobe Firefly, we embed directly in the metadata, “Created with Adobe Firefly.” So there’s no question that that was created using generative AI. And we believe it’s important, and there’s work on the copyright side of this, that a human needs to substantially modify a piece of content created with generative AI in order to consider it theirs.
It has to have a significant amount of human editing done to it, and so we’re doing that. And if you open Photoshop today, you can turn on something called content credentials and it actually tracks everything that you did to that file and it embeds that in the PSD, which is amazing. And so we’re trying to lead with this piece of transparency to make sure that we’re tracking this stuff. And I think transparency is key to all of this.
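For readers curious what “embedded in the metadata” looks like in practice: Content Credentials are provenance data carried inside the file itself, and Adobe’s implementation follows the open C2PA standard. The sketch below is only a rough, assumed heuristic in Python, not Adobe’s tooling: it scans a file’s raw bytes for markers commonly associated with embedded XMP or C2PA data. The marker strings are illustrative assumptions, and proper verification should go through the official Content Credentials / C2PA tools.

```python
# Rough heuristic sketch (not Adobe's tooling): hint at whether a file may
# carry embedded provenance/XMP metadata by scanning its raw bytes.
from pathlib import Path

# Assumed marker strings for illustration; a real verifier parses the full
# C2PA manifest or XMP packet rather than searching for substrings.
MARKERS = [b"c2pa", b"<x:xmpmeta", b"adobe:ns:meta"]

def looks_like_it_has_provenance(path: str) -> bool:
    data = Path(path).read_bytes().lower()
    return any(marker in data for marker in MARKERS)

if __name__ == "__main__":
    print(looks_like_it_has_provenance("generated_image.jpg"))  # hypothetical file
```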
Debbie Millman:
I see questions coming in and I always love when people have questions that are better than mine. So I want to ask a couple of them. “What existing AI-first design products do you value as a design principal at Adobe?”
Brooke Hopper:
There are a lot of AI-first products, and I’m not sure if the question is about outside of Adobe or not. But I would say at this point, and it’s hard because I need to ask, okay, is it AI-first? Is it generative AI? But pretty much everything we use is AI. Most of your favorite tools are very AI-centric.
Debbie Millman:
How so? That’s something I don’t understand.
Brooke Hopper:
Yeah, that’s a good question. So I always use Photoshop as an example because most people are familiar with it, but there are so many features in Photoshop that are AI. There’s something called Content-Aware Fill, which has been around for over a decade, which helps you smartly fill in content when you’re selecting something or expanding the crop on something. That’s AI. And so we use it all day every day. I know there’s a lot of people here who probably use Figma. Figma is super AI-first. So, there’s so much that we do. It’s just become an innate part of creativity, so much so that we don’t even realize it.
Debbie Millman:
Yeah, the first question that came in was, “Can you share a moment when you thought, how did I ever live without this?” And I thought, there’s so many things that I can say at one point in my life that I couldn’t do without that are now obsolete. And I’m wondering how healthy is it to become so dependent on a machine helping you create something that might have been more creative if you did it on your own?
Brooke Hopper:
Are you ready for my take on all of this?
Debbie Millman:
Yes.
Brooke Hopper:
So here’s the thing: AI is a machine. Machines are created to recognize data and repeatable patterns and continue those things. As humans, the major benefit of us being human is that you have a completely different perspective of the world than I do; a machine can’t replicate that for every single person in the world. We have emotions, we have experiences, we have things that make us innately human. And so when I think about machines and humans coexisting together, let the machines do what the machines are good at, and also take the information that the machines are maybe bringing up to the surface. There are a ton of harm and bias issues, like we talked about previously, all the issues of, wow, there are a lot of white people being generated. That’s bringing to the surface patterns that weren’t necessarily immediately recognizable to us previously.
I think that’s a good thing. This is an opportunity for us to address some of those issues. But when it comes to being creative and things being more or less creative with a human, it’s all about your perspective. It’s about your emotions. It’s about breaking the rules. Machines don’t know when and how to break rules. They follow the rules. And so that’s what we lean into. One of the biggest design principles is you have to learn the rules in order to break the rules. And breaking the rules is what makes something creative and enjoyable. And so it’s that serendipitous rule breaking as well that feeds into creativity.
Debbie Millman:
So, how is it possible to ensure that AI doesn’t fail?
Brooke Hopper:
Can you explain that a little bit?
Debbie Millman:
So, you said that AI follows the rules. What happens if they don’t? They, as if. But what happens if AI doesn’t follow a rule or disobeys a rule? Is that possible?
Brooke Hopper:
I’m sure it could be trained to. Again, it could be trained to randomly break the rules, but it’s all about patterns. You could say every five whatevers do this opposite thing, or randomly take from this certain thing. But it’s still a rule that it’s following.
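A tiny, purely illustrative sketch of that point, assuming nothing about any real model: even a rule that says “break the rule at random” is itself a seeded, repeatable procedure the machine follows.

```python
import random

def follow_rules(steps: int, seed: int = 42) -> list[str]:
    """A 'rule breaker' that deviates every fifth step or at random --
    but the deviation is just another deterministic, seeded rule."""
    rng = random.Random(seed)  # same seed -> same "spontaneity"
    actions = []
    for i in range(1, steps + 1):
        if i % 5 == 0 or rng.random() < 0.1:
            actions.append("break the rule")
        else:
            actions.append("follow the rule")
    return actions

print(follow_rules(10))
print(follow_rules(10))  # identical output: the randomness was a rule too
```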
Debbie Millman:
And it’s still a prompt. I want to talk about plagiarism. How can educators ensure that students do not use AI to write their papers?
Brooke Hopper:
I love this. I think it’s less about ensuring that students don’t use AI to write their papers or create their design work. I think it’s actually flipping the question. I read somewhere that for paper writing, there was one instructor who, rather than having students write papers, would actually give them papers and ask them to make the corrections and find the mistakes. And I was like, I love that. I think something similar could work. I guess, for one, for design professions and design students, if they use it, they use it. And maybe they will go on to make a career of generating only content to make money. If they do, that’s fine, but ultimately-
Debbie Millman:
But I don’t want to give somebody a degree to do that.
Brooke Hopper:
Exactly.
Debbie Millman:
And that’s what I’m concerned about from an educational perspective.
Brooke Hopper:
Exactly. And I think that’s where some of the stuff that we talked about before comes in, which is transparency and making it easy to understand that 10% of this thing was made with generative AI. And I know a lot of universities are putting AI guidelines into place, but it’s hard, and it’s one of those things that we’re still grappling with. And I think right now we’re in probably the most difficult part of this because it’s been fairly widely adopted at this point. We’re at this point in technology where people are adopting it and we’re not really sure how to handle it. And I think that that’s the point that we’re at right now, and I think that we’ll get over that hump. And I know that’s probably the worst way to describe that, but we’ll understand how to deal with this as time goes on. But we’re grappling with these really hard questions right now and I don’t have the answers to all of it.
Debbie Millman:
Yeah, no, I know. And I don’t know that anybody does yet, which is both terrifying and exciting. I run a graduate program at the School of Visual Arts, and the students have been using AI with our permission for research purposes. And in some cases it’s been really interesting to see how young people are able to adapt to this new technology or the new parts of this technology to help them generate images and ideas. We did a project where students had to think about how the Archie brand, Archie comics could be repositioned in the future to be more inclusive, to have more diversity. And AI was able to do something that the students wouldn’t have been able to do, which is to imagine Archie as a trans person, to imagine Betty and Veronica as a lesbian couple. It was super interesting to see how AI could generate these images fairly quickly with very specific kinds of prompts.
But then we caught one of our students writing a paper that I suspected was AI. It was missing a certain kind of soul. It was missing a voice. It was missing a real point of view; it felt very generic. And I asked the student, “Did you do this with AI?” And fortunately, they copped to it and then redid it, but not everybody’s going to cop to it. I hope that people can see it. But are there ways in which you can view something and know that it was generated by AI or using machine learning?
Brooke Hopper:
Yes, you absolutely can. I mentioned the Content Authenticity Initiative previously. We’ve also created a content credentials website where you can upload any image or screenshot, it doesn’t matter what it is, and it will actually do a quick search and tell you whether there is other content out there that to a certain extent matches this. And I think those types of things are super important. Again, this is part of the education that we’re trying to do as Adobe and that I’m super passionate about. Again, I know I keep saying how passionate I am about this, but I really, truly am. I care a lot about education. I care a lot about creativity. I care a lot about the creative field, and I think it’s so important to make sure that we are setting ourselves up to be successful.
Debbie Millman:
Min Jin Lee, the writer Min Jin Lee, she wrote Pachinko and Free Food for Millionaires. She was very vocal on social media when she realized that her books were being used to train AI. What can we do to protect the rights of writers whose work is being used? Apparently now there is a website where you can go put in your name and see what books you’ve written that might have been used to train AI. Is there anything that can be done about that?
Brooke Hopper:
I think it’s just an extension of that technology, from putting in your name to see if your books were used to train AI, all the way to somehow marking that content: “Do not train on this. I do not give you permission.” And as we go on, I am very hopeful that companies will start respecting this point of view.
Debbie Millman:
There’s a question here that I love. The question is, “What contemporary artists are using AI in any of its forms in interesting, creative, or productive ways?” There are two that I know of that I will share, and I’d love to know if you have some as well. There’s a woman named Pum Lefebure of Design Army. She’s a phenomenal art director and designer, and she recently used AI to create an entire ad campaign. Now, Pum is very image-driven, with very highly styled photography in the work that she does, and you would never know. It had such a Design Army look to it that you would never consider the possibility that this was AI. And she was very forthright about it and she got a lot of press about it because of how good it was. So she’s one.
And then Marian Bantjes is working in Vancouver, Bowen Island actually, and she’s doing AI-generated art. Heavily, heavily patterned work, very surprising, and she is really challenging AI in a lot of ways to make things that she didn’t think were possible to actually make. So I would recommend those two, Pum Lefebure of Design Army and Marian Bantjes. Do you have others? Because you know so many people and work with so many, who do you see as really on the cutting edge of doing work in design and advertising, branding and creativity or fine art that’s pushing the boundaries?
Brooke Hopper:
Yeah, so I was hoping you’d ask this question because I’m actually working with an artist right now who’s getting ready to do a show at MoMA. He’s a furniture designer, and he wants to use generative AI to reimagine what furniture that we consider iconic would look like if it had been created by designers of color. And so we’re actually using generative AI to reimagine not just patterns and the aesthetic of it, but the entire form factor. And it’s such an exciting project because it allows us to imagine what would happen if there were more equity among the types of designers. For such a long time, design was a white male industry. And I remember coming in as a female, being the only female on design teams. I’m sure you were in a very similar situation. And so what if that was expanded to designers of color, different cultures, all these different things? I’m so excited for the project. I believe it goes into MoMA in September.
Debbie Millman:
And do you know who that artist is?
Brooke Hopper:
Norman Teague.
Debbie Millman:
Wonderful.
Brooke Hopper:
Yeah.
Debbie Millman:
So, part of what is so wonderful and mysterious about creativity is that it’s all imagination fueled. You also need skill, you need talent, but the ideas all come from one’s imagination. Some of it is combinatorial creativity, some of it is original. How does AI enhance or compete with that? With imagination?
Brooke Hopper:
I think it sits alongside of it. From my experience, and with some of the artists that you mentioned, Marian Bantjes is able to push and break and do things that maybe we didn’t think were possible. What I love about what AI does is it gives you some really, really weird results sometimes. You’re like, “What is going on here?” One of my favorite images was during early Firefly, and I use it in some of the talks that I do. Firefly wanted to put human faces on any living thing, and so there was an image of this chihuahua with a human face and it was terrifying. You see these things, and that’s what I love: it’s not that I would actually use a dog with a human face. It’s that that sparks something. It’s that serendipitous spark to go in a specific direction. That’s what I really like about it. You see things and you’re like, “Oh, I could do this.” And you’re not just taking the thing for what it is; it’s giving you the idea to go in a completely different direction.
Debbie Millman:
How do you ensure that AI enhanced imagination is actually original?
Brooke Hopper:
I don’t know if you can. I can have an original idea, or what I perceive to be original, and it could be a very similar look and feel to an original idea that someone across the world had. So, I think it’s hard to determine that. I think it’s all about intent really.
Debbie Millman:
Do you believe that AI enhances ideas or does it just fuel variations?
Brooke Hopper:
I actually think it might be both. And I don’t know if that’s a cop-out answer, but as I just mentioned before, it can spark some sort of completely different direction that you wouldn’t have thought to be possible. Then when it comes to variations, I know a lot of artists and a lot of agencies who use AI in the beginning process because clients want to see X number of variations. And as we all know, the number of variations you show a client is a much more curated set than what you already came up with. And so they use it as a tool to generate a lot of different directions and variations and then make edits to those and maybe push them in different directions. So I think it’s a combination. I actually think that’s where generative AI is super useful: within the ideation phase, because that’s when you’re generating tons of ideas, and imagine being able to generate, literally and figuratively, even more ideas and more different directions to be able to come to such a better end goal.
Debbie Millman:
One of the jobs that AI has made possible is a title called prompt engineer, which I kind of love and I’m also terrified by. How do you become a prompt engineer and what are some of the best in class techniques that you’re aware of for feeding information to AI to get the best possible results? Whatever I’ve done so far has failed miserably. So, I’m just curious if you can help me with some ways of thinking about how to become a better prompt engineer.
Brooke Hopper:
Well, you’re actually in luck because a lot of the sites, I know Midjourney and a few others, are actually moving away from this concept of prompt engineering. But the way it came about is that speaking in natural language to a machine didn’t really work, because it has this whole collection of knowledge that it’s been trained on: camera angles and lenses and all this other stuff. That’s really how prompt engineering came to be. You can put all of this very, very detailed, specific information in to get the exact result that you want. Now they’re moving away from some of that, and I don’t know exactly all of the details behind how they’re doing it.
Debbie Millman:
But why are they doing that?
Brooke Hopper:
To make it more accessible, I believe.
Debbie Millman:
So, for people like me that are just really flummoxed by the entire thing.
Brooke Hopper:
Yeah. The way Adobe dealt with it is we made everything much more visual and listed it out. Maybe we have a couple of different lenses that you could apply to it, different visual effects that you could apply to it. We talked about style references previously. And so there’s a lot that we’re doing to make sure that you can get the result you want, but creativity ultimately is about control. And I don’t know about you, but there is no way I could write a sentence that would describe any piece of design work that I’ve made. It’s really hard.
And so it’s about having different modalities to be able to use gen AI to assist you as a tool, and I want to be very specific about that, as a tool, as part of your tool set to get what you want. And so a lot of times you’ll say, I’ll give an example, “I want a basketball and I want the basketball to be in the bottom left corner.” And even still, I’m sure that there are prompts that can get you close, but to be able to get the exact style of basketball, with the exact lighting, with the exact angle, with all these details that you have in your head that you want to achieve, it is difficult, if not impossible, to get that using text. And so you can have different modalities, or something called ControlNet, which sounds terrifying. The first time I heard that I was like, “What is this?”
Debbie Millman:
What is it?
Brooke Hopper:
So, it’s coming for us all. It basically allows you, we talked about style reference, so that’s taking the visual, but ControlNet allows you to take maybe the depth of a photograph. So say I have an image of some buildings and I want to reimagine those buildings, those exact buildings. What that can do is take basically the outline and all the depth of a photograph and keep to that structure. So being able to, we call it multi-modality, but you can imagine having a 3D model and being able to place it where you want it, and then describe the materials on different parts, going back to the basketball, the material on different parts of the basketball or a handbag or something like that. So, it’s all about control, and using gen AI in context is going to be a super important piece of this. We’re still working towards that, candidly.
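As a concrete illustration of the kind of structural conditioning Brooke describes, here is a minimal sketch using the open-source diffusers library with a community ControlNet model. This is not Adobe Firefly’s implementation; the model names, the prompt, and the pre-computed depth map file are assumptions for the example.

```python
# Sketch: generate a new image that keeps the structure (depth) of an existing
# photo of buildings while the prompt changes the style. Assumes a depth map
# has already been extracted with a depth-estimation model.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

depth_map = Image.open("buildings_depth.png")  # hypothetical input file

result = pipe(
    "the same buildings reimagined as lush vertical gardens at dusk",
    image=depth_map,            # ControlNet conditioning: keep this structure
    num_inference_steps=30,
).images[0]
result.save("reimagined_buildings.png")
```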
Debbie Millman:
I’m old enough to remember when computers were first being used to help designers. Quite a lot of elder statespeople thought that that would signal the demise of creativity and the elimination of a lot of jobs. And in some ways, it did eliminate a lot of jobs: typographers, retouchers, et cetera. However, it also created probably 10 times as many jobs for the people that were making all the things that go into the computers and the programs and so forth. A lot of people have been talking about the job elimination possibilities for and with AI. Are you worried about that at all?
Brooke Hopper:
I think this is something that we deal with with every wave of technology. You mentioned some of it, the transition from analog to digital. I was speaking with someone who worked at Pixar in the early days, when they were still drawing out by hand and some of the digital animation tools came to be. And he said half of his cohort were terrified: “We’re going to lose our jobs.” And then he was part of a group of them that were like, “You know what? We’re going to dig in and we’re going to learn this and we’re going to become the best digital animators possible.” And they’re still there working and doing it. And so I think part of this is creatives will become the best at using whatever technology it is that we’re using in context. And we see this already.
I like to joke that I can put in the same prompt as a non-designer and somehow mine’s going to look better in the end. And it’s just because there’s a sensibility and a visual aesthetic that I am aware of, and I have that attention to detail. And so, again, I mentioned this before, but I think we’re in the most difficult part of this right now. And I don’t know how long that’s going to last, but with every new technology, there are jobs eliminated, but new jobs are created, to your point. And we’ve seen this throughout history, all the way through the printing press, the camera, the digital camera, and I think we see this all the time. And so I’m curious to see where it goes.
Debbie Millman:
It’s interesting, whenever there’s any type of new or original thinking or art or performance, people somehow get really insecure about what that means for our future. That somehow we will lose something as opposed to gain something. And I think that’s just a lot of what it means to be human and be uncertain about the future. Generally, people don’t think, “Oh, change, woohoo. Let’s do something that we’ve never done before and really succeed at it.” Most of the time we’re much more nervous about that. What do you see as the potential for AI to do for designers, for artists, for writers that we might not be able to realize yet?
Brooke Hopper:
I think it’s going to allow us to work in more mediums and media that we don’t know. I am not a 3D designer at all. I’m not. And I think that gen AI is going to allow me to explore things in 3D, whereas previously, and right now, I would terrify someone if I… I tried to make something in my master’s thesis where I was like, “I’m going to 3D print this thing.” And the 3D printer was like, “You are definitely not printing this thing.” And so you can imagine that gen AI could help me 3D print that thing. That’s not to say that I’m going to become a professional 3D artist by any means, but it allows me to work in mediums and media that I wouldn’t be able to, or would struggle to, previously. And that’s what I’m super excited about.
Debbie Millman:
I know a lot of your thesis was about experimental typography. How would you see now, looking back on creating a piece of work that was all about experimenting, would you see AI being able to have enhanced that? Aside from the printing part?
Brooke Hopper:
Oh, it would have been amazing. I was literally generating some of this stuff by hand. I was going back and forth from Illustrator into Photoshop and doing blends and doing frame-by-frame animated GIFs. And if I would’ve been able… It would take me, I’m not kidding, it would take me five hours to do an animated GIF in Photoshop. And I could have done so much, I could have explored so much more. And at the end of those two years, I felt like I had so much more that I could have done, but it was literally a matter of time limitation for me. And so it would almost be kind of fun to revisit that in a way.
Debbie Millman:
One of the things that people talk about when thinking about the potential for AI is that it could never be sentient. It’s not possible for us to recreate the trillions upon trillions of neural pathways in our brains in a machine. And I always come back to this idea of imagination. If you gave AI a prompt, make it more imaginative or make it feel more original, could they answer something like that? They, it?
Brooke Hopper:
I think that, and again, this is based on what we have today, and so I can’t speak to the future. But whatever imagination is, is what the machine learned through all the patterns and data that it already has. And so I would imagine it would literally go to something that had imagination in it and try to replicate that.
Debbie Millman:
So, it couldn’t really do that.
Brooke Hopper:
I’d be curious to see what it came up with. It could spark some imagination definitely, but it’s hard to say.
Debbie Millman:
Yeah, it would be an interesting prompt, “Use your imagination to…”
Brooke Hopper:
Yeah, it might just ignore that word, candidly.
Debbie Millman:
There’s a question that’s gotten a lot of love that’s here, “Are the AI models impacting creativity and creative people? In the battle between artists and AI who wins?”
Brooke Hopper:
When it comes to creativity, I have to say, as humans, I truly believe that to be human is to be creative. I think that is a core part of who we are, whether or not you consider yourself a creative person. And as I mentioned before, it’s our emotions, it’s our point of view, it’s our life experiences, it’s spontaneity, it’s deciding when and where to break the rules. And so I do think that there is a coexistence of humans and machines. I really do think so. And I think that humans do what humans are good at, and we can make the decisions. And ultimately those machines are learning from us. We have to give them that data. At this point in time, they are not making up data on their own. They’re simply taking the data that we feed them, breaking it down, and then recreating it from noise. And that’s a very dumbed-down version of what gen AI does: it takes all the data that it’s been trained on, it destroys it, and then it rebuilds it from noise.
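That “destroys it and rebuilds it from noise” description matches the diffusion models behind many image generators: training images are progressively corrupted with noise, and the model learns to reverse that corruption so it can start from pure noise at generation time. Below is a minimal NumPy sketch of the forward (noising) half only, with an illustrative schedule that does not correspond to any particular model.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))  # stand-in for a training image, values in [0, 1)

# Forward diffusion: blend the image toward Gaussian noise, step by step.
# Early steps keep most of the signal; by the end almost nothing of the
# original remains -- the "destroy" half of the process.
alphas = np.linspace(0.99, 0.01, num=50)
x = image
for alpha in alphas:
    noise = rng.normal(size=x.shape)
    x = np.sqrt(alpha) * x + np.sqrt(1 - alpha) * noise

# A generator is trained to undo these steps, so at sampling time it can start
# from pure noise and "rebuild" new content that follows the learned patterns.
print(float(np.corrcoef(image.ravel(), x.ravel())[0, 1]))  # near zero: signal gone
```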
Debbie Millman:
You said that you don’t think that AI could falsify data. Do you think that there’ll be a time when it could?
Brooke Hopper:
So AI definitely can falsify data, because, well, if you’re talking about gen AI, that’s the thing: gen AI takes the data that it has, it destroys it and it rebuilds it. It’s literally called hallucination. And so it is nearly impossible, and I don’t want to say impossible, it is nearly impossible to get the same thing twice with gen AI. And so that’s why with things like ChatGPT and large language models, it used to be you could go ask about some obscure piece of knowledge like, “Who was the designer of XYZ font?” and it would give you literally three different answers. It is not built to be a factual database.
It is good for giving you travel plans, giving you ideas for different ways of phrasing a sentence or writing something, but it is not a search engine. And so I think that’s the thing to always remember: it’s not a fact machine. Even with Google search, we’ve been trained that if you ask, “Is this current event actually true or not?” you have to sift through that information. And so I think we need to remember that as gen AI becomes more a part of how we interact with the world, you can’t just believe everything that is there.
Debbie Millman:
Yeah. I think we’ve all learned that via using Wikipedia, that not everything is true on Wikipedia. I want to talk for the last couple of minutes that we have about the future. How do you envision a world with AI in the next 10 years and maybe the next 100 years?
Brooke Hopper:
Oh, wow. 100 years. I don’t know if I can speak to that one, but in the next 10 years, I think we’re going to see an explosion of more creativity and content, and I think more awareness. I always think about this from the creative angle just because that’s what I’m in and that’s what I do every day, but I’m really excited about the possibilities of more immersive art, more immersive design and experiences that will be enabled by some of this stuff.
So I’m going to a museum and currently you view the pieces or in some instances you get to go inside of something and it feels immersive. But what happens when you’re potentially interacting with the artist in this piece or you become part of the piece? I think a lot of this is very speculative, but I think that there’s so many opportunities to have art become a bigger part of our culture and our society, and no longer be something that you go to see or you go to do or that you make the intentional decision to go create or observe. That it just becomes a part of who we are. And I think that’s really exciting. Thank you.
Debbie Millman:
My conversation with Brooke took place at the South by Southwest Festival in Austin, Texas on March 9, 2024. This is the 19th year we’ve been podcasting Design Matters, and I’d like to thank you for listening. And remember, we can talk about making a difference, we can make a difference, or we can do both. I’m Debbie Millman and I look forward to talking with you again soon.