
The State of UX in AI with Josh "Dr. Touch" Clark on User Defenders: Podcast

Josh Clark enlightens us on everything we need to know about the current state of UX in AI. He takes us on an unsettling stroll through the uncanny valley. He encourages us to let machines do what they’re really good at, and humans do what they’re really good at. He guides us on how to begin getting our hands dirty with AI/machine learning. He also articulates how our software and machines are embedded with values, and inspires us (for the future’s sake) to be intentional about the kinds of values we embed into them.

Josh Clark spent nearly two hours with me talking all about the state of UX in AI. He answers important questions like:

  • How will failure and presenting errors be addressed as people rely on AI more & more?
  • Is there hope for the UX of Voice UI?
  • How do algorithms work?
  • At what point in the AI development process should UX get involved?
  • Should we be worried about our jobs?
  • Will the machines we’ve built one day eventually overtake and possibly destroy us?
  • Much, MUCH more…

TIMESTAMPS

  • Josh passes the Turing Test (07:33)
  • What is AI/Machine Learning? (10:04)
  • Are AI’s uncertainties driving engineers nuts? (18:28)
  • Listener Q: How do we address AI’s failure in presenting errors? (27:00)
  • Taking a stroll down “uncanny valley” + “mechanophilia” (34:35)
  • Is there hope for the UX of Voice UI? (42:05)
  • Will the machines we’ve built eventually overtake and destroy us? (55:08)
  • Will AI eventually take our jobs? (68:35)
  • Is there a specific area of focus that will benefit designers interested in AI? (77:50)
  • Listener Q: At what point in the AI development process should UX get involved and how? (91:09)
  • Do we take these awesome technologies for granted? If so, why do we do that? (100:13)
  • If today was your last day, what would be your final plea to designers and developers building the future of AI? (105:03)

LINKS
Product Design in the Era of the Algorithm [VIDEO]
Weapons of Math Destruction: Our machines are only as good as what we feed them [ARTICLE]
Thinking, Fast and Slow [BOOK]
Motorist Trapped in Traffic Circle by Autonomous UX [ARTICLE]
Children of the Magenta by 99% Invisible [PODCAST]
This clever app lets Amazon Alexa read sign language [ARTICLE]
Kranzberg’s Six Laws of Technology
Microsoft Azure
Giorgio Cam
Lorem Ipsum AI Generator


TRANSCRIPT


Jason Ogle: Welcome to User Defenders. I’m super excited to have you on the show today.

Josh Clark: Oh my gosh! Thanks again. I’m happy to be here.

Jason Ogle: To have you back, I should say, yes. Josh was on episode 9, Defenders, and that was an amazing episode. It’s evergreen content. It’s really good, so I recommend you check that out as well. [Crosstalk] It is, and I think [crosstalk] to defy Hollywood, I think this one could even be better than the first. We’ll see though.

Josh Clark: Let’s not overpromise. It’s not over.

Jason Ogle: So, this is going to be all about the state of UX in AI, and I can’t wait to dive in, there’s so much, so let’s just jump in here. Josh, are you familiar with the Turing test?

Josh Clark: Well, sure! Yeah! The Turing test, the thing that was defined way back in the middle of the last century: whether or not a machine can fool a human into thinking that it’s a human being on the other side of the screen.

Jason Ogle: Yes, that is absolutely correct. It was developed by Alan Turing in 1950. And as you said perfectly, it was a test to see if humans could recognize whether they were talking to a machine or not. I learned about that from the movie Ex Machina, which terrified me, by the way. We’ll talk more about that, I think, as we go. But, Josh, my question for you to kick this off is: how do I know this is actually you and not an artificially intelligent clone you have concocted for this interview?

Josh Clark: Well, because if you had asked say Alexa that question, I would have responded by saying, playing classic rock mix because I would have completely misunderstood your words and intention.

Jason Ogle: Thank you. So let’s…

Josh Clark: These things are probably not quite there yet. I think there’s this idea that the machines are on the cusp of becoming perfect replacements for humans, or interfaces that we completely can’t tell apart from people, and yet I think our day-to-day experience shows us completely otherwise. Siri and Alexa front as being that, but generally fail pretty miserably beyond really narrow tasks.

Jason Ogle: Yeah, that’s so true. And we’re definitely going to dive more into why that is, and maybe how it can be fixed if it can be, so I’m interested in that as well. But let’s start from the basics. A lot of the Defenders listening are newer in this field and trying to navigate it, and this is an area, as I disclosed to you before our time, Josh, where I’m a little more nervous than I usually am about interviews, because it’s such a foreign kind of field. Even though it’s certainly becoming the norm, as you mentioned, there’s so much undiscovered and so much untapped potential, and also, as we’ll get into, some things to maybe be a little bit afraid of. So I want to kick it off with the basics: What is AI? What is machine learning?

Josh Clark: It’s a great question. It’s a big one, right? So artificial intelligence is the general umbrella term for machines showing some kind of smarts or insight. It’s a really broad term, and one that frankly has been so infiltrated by science fiction and pop culture expectations that it’s not necessarily a particularly useful term anymore, but that’s what it is, that’s the broad term. I think it’s helpful to break it into the categories of general AI and weak or narrow AI. I like narrow AI better than weak AI, because even narrow AI can be quite powerful. General AI is, I think, the science fiction notion of artificial intelligence. This is a machine that can reason, that can really think and shows real logic across a whole broad range of topics. This is a machine that you could have a conversation with; it’s HAL from 2001, right?

Narrow AI is something that is really effective only on a very specific domain of information or knowledge. And that’s really where we’re at with almost all of our AI right now: really narrow applications centered around machine learning. And deep learning is the particular technology and flavor of machine learning that’s made a number of breakthroughs. It’s algorithms that create their own data models upon data models upon data models and go very deep at understanding a very specific problem. It’s really a brute-force approach to a problem: it finds patterns and then creates strategies based on those patterns. So it’s not something that is about logic, exactly, so much as pattern recognition at a level that human beings aren’t able to do.

AI as a field is decades old. It used to be more preoccupied with something called knowledge engineering, the idea that you could actually model human cognition and turn that into software so that computers could learn to reason and think like a human expert. That sort of ran onto the rocks in recent decades and wasn’t able to move very far forward, until we got the processing power to analyze huge amounts of data, extract patterns from them, and make decisions based on those.

Jason Ogle: Whoo! Well said, my friend. That’s really good information. You mentioned algorithms in your response as well, and in your fantastic Mind the Product talk you said algorithms are the oxygen powering the next generation of emerging technologies. Now, how do algorithms work? Can you give us even a high-level view?

Josh Clark: Yeah, sure. Well, firstly, just to say what I mean by that sentence you quoted there: when you look at all of the new interactions that are happening, things that are based on camera vision or computer vision, natural language processing, speech recognition, all of these things are based on the sort of narrow AI that I’m talking about. It is pattern recognition, being able to extract patterns and understand what’s happening there. So effectively, because of machine learning, the machines are able to understand all the messy stuff that we create that they weren’t able to understand before: our handwriting and our speech and our photos and our doodles.

So that’s what I mean: it’s that kind of narrow machine learning that’s starting to power and enable this whole new set of promising interactions and models. And that includes things like virtual reality and augmented reality, all these things that need to be able to understand the world around us and impose information upon it or extract information from it.

But what an algorithm is, you know, algorithms are centuries old. It was Middle Eastern logicians and philosophers way back who created the term algorithm, and of course the Greeks sort of shaped it into the word we use, but it’s based on, I believe, the name of a logician whose name is lost to me. Anyway, I’m getting into the weeds here. [Crosstalk] Yes, it was Al Gore, you got it.

Jason Ogle: You remember “Al-Gore-Rithms”?

Josh Clark: He’s been around for centuries.

Jason Ogle: He made the internet, right.

Josh Clark: Yeah, “Al-Gore-Rithm”. That sounds like some horrible vanity album that he might have created.

Jason Ogle: He actually did come up with that back in the early days of the internet. He was trying to push something out there, I forget what it was, and I was like, that’s a little narcissistic, the guy named it after himself. But I mean, he can’t help it, that’s his name.

Josh Clark: That’s it. That’s right. It’s an “Al-Gore-Rithm”. I think our work here is done.

Jason Ogle: That’s it. Okay! Thanks for listening everybody. See you on the next one.

Josh Clark: But you know, an algorithm is really not all that different from a little computer program. It’s a set of logical rules to arrive at a conclusion. More practically, it’s instructions for stepping through a set of computations or calculations of mathematical functions. In machine learning, those functions generally map to mathematical models that are tuned to reflect patterns discovered in the world based on mountains of data. So it’s the kind of thing where I’m just going to show the machine a bazillion pictures of dogs so that it can then recognize when dogs are in a picture; it will recognize that pattern.

But because things are so narrow, in order for these algorithms to be able to find the patterns, the patterns have to be in a very specific domain. And so if you train the machine to see dogs, it will tend to see dogs in all kinds of things. You sort of reinforce these patterns and then it will find those patterns everywhere, which is a strength in that very specific narrow domain, but it can be a bit of a weakness when you try to broaden the domain, and its bias can show up in all kinds of different places. So we have to be careful how we feed these machines data and be mindful that their responses are based on the data they’re trained on.
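For Defenders who want to see that "show the machine a bazillion pictures of dogs" idea made concrete, here is a rough, hypothetical sketch in Python. The data and feature vectors are invented purely for illustration; the point is that a narrow classifier learns a pattern from labeled examples and then reports a confidence score for new inputs, the kind of number designers can later surface.

```python
# A minimal sketch (hypothetical data and names) of a narrow classifier:
# it learns one pattern from labeled examples and reports a confidence
# score for new inputs. Real systems learn features from raw pixels with
# deep networks; here we pretend each image is already a small feature vector.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dog_features = rng.normal(loc=1.0, size=(200, 8))       # images labeled "dog"
not_dog_features = rng.normal(loc=-1.0, size=(200, 8))  # images labeled "not dog"

X = np.vstack([dog_features, not_dog_features])
y = np.array([1] * 200 + [0] * 200)  # 1 = dog, 0 = not dog

model = LogisticRegression().fit(X, y)

new_image = rng.normal(loc=0.9, size=(1, 8))
confidence = model.predict_proba(new_image)[0, 1]
print(f"Dog? confidence = {confidence:.0%}")

# The model only knows the pattern it was fed: trained only to separate
# dogs from not-dogs, it will happily assign a "dog" score to things it
# has never seen, which is exactly the narrowness Josh describes.
```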

Jason Ogle: I was just thinking about something while you were answering that. You’re talking about math, and dude, math is hard, okay? I’m a designer, and I’m a designer who knows HTML and CSS, and that’s it. I haven’t learned JavaScript; I’ve tried several times, probably not hard enough, but math is hard. And I love the partnership between designers and developers; of course, some of our greatest products have been created as a result, and I know you yourself have a superhero force that you work with at Big Medium, which is pretty awesome, on a project basis. But I was thinking about programmers. I’ve been around programmers for 25 years or so now, and I understand a lot about how programmers think. I know they thrive on logic and certainty: it’s either 1 or 0, right? It’s Boolean, yes or no, so to speak. But I was wondering, are developers going nuts working on this stuff because there’s so much uncertainty? You feed the machine a bunch of pictures, like you said, and then you search for a dog picture and it pulls up blueberry muffins instead of a Chihuahua. Do you think our developers are going nuts around this stuff when it’s so uncertain?

Josh Clark: You know, it’s interesting. I think the developers and the engineers who are designing these actually have a real respect for the range of gray that the machines see in the world, which frankly is the same gray we as human beings see in the world and do our best to process. When you look at the output of a lot of these algorithms, when they give their results, they often express their confidence in the result, and sometimes as designers we surface that confidence in our interfaces. Netflix shows you the percent likelihood that this show is going to be a match for you, right? It’s like 85%. We use that as viewers to understand how likely the match is, and we’re starting to absorb that machine confidence into our own logic just watching TV.

So the engineers, I think, cooperate with the machines in a sense, to figure out how good the model is at predicting the thing. What I would say is that as designers, we’re actually struggling a little bit with expressing that. And part of that, I think, is a certain enthusiasm about using this stuff: the machine gave the answer. And frankly, there’s a cultural bias toward rushing to give the right answer. We think about that a lot in terms of performance, and there’s been a really productive push toward performance, for example making sure a web page loads as quickly as possible. I think that’s great. That is part of a great user experience.

I think another part of user experience is how we can give somebody their answer as quickly as possible. You’ve seen Google in particular really pushing toward that over the years. It used to be that if you searched for weather in New York, you would get a list of web pages that could tell you what the weather in New York was. Then it started to show you the weather in New York right at the top of the search results. And now in Google Chrome, if you start typing “weather in New York,” it will show you the answer in the search box before you even do the search. They’re trying to minimize the time to the answer. Of course, that requires really strong confidence in the answer, and it’s only valuable if the answer is correct; rushing to the wrong answer does more damage than just taking a little bit of time. It’s better to say “I don’t know” than to provide the wrong answer. And I think that’s the subtext, in a sense, of this cultural moment of the last couple of years, as we’ve been grappling with algorithms’ relationship to news and information, and how that can be manipulated or provide information that reinforces our own individual bias without necessarily creating much enlightenment for people on either the right or the left.

And to come back to Google again, a great example of that rush to give an answer, whether or not it’s right, is those featured snippets that appear in the box above the search results. You know what I’m talking about? If I’m searching for how long to hard-boil an egg, it gives me a two-sentence snippet. So it’s not just “here’s the page that’s likely to give you your answer”; it’s “we think we’ve identified the two sentences on the internet that answer your question.” It’s “I’m Feeling Lucky” on steroids, right?

Which is great, except when you’re in an area of controversy. If you ask a political question, it will tend to give you an answer that may be correct, may not be, or may even be somewhat controversial. This is especially true of areas that have been gamed by hate speech. Sometimes it will even suggest these answers right in Google’s autocomplete. Until about a year ago, if you started typing in “Jews,” Google would suggest “are Jews evil,” and then if you chose that, it would give you search results or a featured snippet that would tell you exactly how that was the case.

I want to be super clear: Jews are not evil. That’s the kind of result this thing produced. And what was even more damaging is that those featured snippets are used to power answers for Google Assistant and Google Home, because in a speech interface you can’t give a list of search results, so it’s like, oh, here’s a tidy answer to that thing. And so again, until about a year or so ago, if you asked Google Home, “Okay, Google, are women evil?” it would say yes, and give you a 30-second explanation for exactly why.

And again, I just want to be super clear, Defenders: women are not evil; that is not a good answer. So this comes back to the question of how we think about binary answers. I think we have to recognize that there’s a spectrum of rightness. In certain categories of things, how long do you hard-boil an egg, what’s the weather, we can have pretty high confidence that there’s a right answer. But there’s a whole mess of questions where there isn’t necessarily a right answer, and in fact there might be a lot of controversy and pain in rushing to a wrong one. The algorithms will say, I have a pretty good idea whether I’m right or not. And yet the way we display these things suggests a confidence that I don’t think the algorithms actually have. If you asked the algorithm how confident it is that women are evil, it would not say 100%; it would say something more like, I found this information and it matches with 59% confidence.

That’s where we as designers need to start creating a kind of visual body language, because it’s not just “I don’t know.” It’s “I think maybe this is true,” or “I don’t know the answer,” or “boy, this is a tricky topic.” We have ways of expressing that in our own body language and verbal language. How do we start to express the gray that the algorithms report back? How do we express that in our interfaces as we show this machine-generated information or data? Boy, I’m sorry, that was a really long ramble.
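For Defenders curious what that "visual body language" might look like in code, here is a minimal, hypothetical sketch. The thresholds and wording are illustrative only, not something Josh prescribes: the idea is simply that the interface branches on the confidence score the algorithm already reports instead of stating every result with the same certainty.

```python
def present_answer(answer: str, confidence: float, sensitive_topic: bool = False) -> str:
    """Map a model's reported confidence to hedged presentation copy,
    rather than displaying every result as if it were 100% certain."""
    if sensitive_topic or confidence < 0.40:
        return "I don't have a reliable answer for that."
    if confidence < 0.75:
        return f"I think the answer may be: {answer} (but I'm not sure)."
    return answer

# A high-confidence factual answer is stated plainly; a 59%-confidence
# result gets hedged instead of being presented as settled truth.
print(present_answer("Hard-boil the egg for about 10 minutes", 0.92))
print(present_answer("A dubious claim scraped from one page", 0.59))
```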

Jason Ogle: Oh, that was great. That was great. And you know, it’s funny, because of the way the machines are responding, or at least how they were. I think it’s changing, and I think you are a big part of the conversation that’s changing the way answers are displayed on the front end. It’s okay to say “I don’t know.” I think we just rely so much on machines to always have the answer, and we have the canon of the world’s information in our pocket now, which is pretty impressive. But with some of these responses, I feel like the machines are doing everything that we’re not supposed to do. When we’re practicing UX design, we’re designing for users; we’re not supposed to make assumptions, right? And it’s okay to say “I don’t know.” So I think that’s an area where change is needed in how these things are displayed on the front end for users. And this is a perfect segue into my first listener question. This is from James Mitchell, his handle on Twitter is @Mitchelljames, and he asks: “I’m curious what your opinion is regarding the challenges in presenting errors to users. AI is now becoming more ubiquitous than ever. AI is now being geared toward being present in the moment and automatic in our lives, but how will failure be addressed as people rely on it more and more?” We touched on it a little bit, but do you have anything more to add to that?

Josh Clark: Yeah, I mean, it’s such a great question. Thanks, James. And a hard one, because I think one thing we’re getting at here is that as you get to machine-generated content and machine-generated conclusions and machine-generated interactions, where the machine is actually in charge, what it means for our craft is that we are moving away from designing paths through information that are in our control. Essentially, our job has been to nudge people through pretty constrained paths of information, to design for success. As this area evolves, we don’t exactly know what is going to be asked of the machines, or how the machines might respond, when we go to this pure thing of: great, give it any information, let it respond on its own. What it means is that we’re really designing for uncertainty and designing for failure. So James’s question is right on: how do we design for failure? And what I mean by that is, the machines are weird. Our job as designers is essentially to try to put our arms around this thing so that we can hopefully corral and cushion the weird answers, or at least give them context so that they don’t do real damage.

And part of what we can do as designers is have a really good sense of what the specific algorithm we’re working on is good at, narrow the scope of the application to fit what it’s good at, and accurately report its results. So again, we come back to that trivial example of training a machine to recognize dogs in photos: this is something that is specifically about recognizing dogs; it’s not going to work on other animals. Let’s make sure we respect that, because otherwise we’re going to be spotting dogs in all kinds of places where they don’t actually exist.

But coming back to this example of how we might let people know when they should or shouldn’t trust the answer: it’s not just “I don’t know,” it’s also reporting “I think I know, but I’m not sure.” And part of it is finding new ways to really signal that it’s time for the human being to engage their critical brain. We have a proxy for this, I think, in self-driving cars. There are contexts where the cars are in really pretty good shape, like long straight highways where we’ve got pretty reliable information on how the cars around us are going to be driving; those are pretty good situations. But there are probably times when the driving situation becomes a little bit trickier for the car, and it’s like, you know, this would be a good time to engage the human on this one.

And I think for all kinds of fields of information, how might we signal that moment of, hey, we actually could use a little human judgment here? Because the moment that an algorithm fails or becomes uncertain is exactly the right time to engage human judgment. It’s not all the machines all the time. It’s got to be a partnership between humans and machines, because machines are terrible at some things, just like humans are terrible at some things. How do we create and encourage that partnership instead of just handing over our brains completely to the machines? Because we do that all the time. You read about these things, right, about somebody who just followed Google Maps on what should have been a 30-mile drive and somehow wound up driving all over Europe, 900 miles later. It’s like, use your brain. But we see that, and I do it all the time too; I just cede this trust to the machine.
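One way to read that human-machine partnership in practical terms is a simple handoff rule: below some confidence level, the system stops acting on its own guess and asks for human judgment. A hypothetical sketch follows; the threshold, labels, and scenarios are made up for illustration and are not from the episode.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float

def route(prediction: Prediction, handoff_threshold: float = 0.8) -> str:
    """Act automatically only when confidence is high; otherwise,
    escalate to a human instead of trusting the machine's guess."""
    if prediction.confidence >= handoff_threshold:
        return f"AUTO: acting on '{prediction.label}'"
    return (f"HUMAN: low confidence ({prediction.confidence:.0%}), "
            f"please review '{prediction.label}'")

# A clear situation is handled automatically; an ambiguous one is escalated.
print(route(Prediction("clear highway, keep lane", 0.97)))
print(route(Prediction("unusual roundabout geometry", 0.42)))
```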

Jason Ogle: Yeah, it’s kind of like we’re using our System 1 thinking, as Daniel Kahneman describes in “Thinking, Fast and Slow.” He breaks thinking down into two systems. The first system is the lazy one; it’s automaticity, the automatic response. Somebody asks you your name, you don’t even think about it, right? And typically we do this in conversations as well, for better or worse: when somebody asks how you’re doing, what do we automatically say? Good. Are we really doing good? Like, I mean, 90%, maybe 50% of the time, maybe we’re not doing good, you know what I mean? But it’s kind of like engaging that System 1 thinking.

We just trust the machines maybe too much, and I think that’s the tightrope we’re walking here, because we want to have trust, we want to have faith in these things, but they let us down often, so we become skeptics. There’s a lot of work to be done, as you’re touching on. And you mentioned self-driving cars; there is an actual story there. I know there’s a lot more coming out as these become more prevalent in society. Unfortunately, people are getting killed; it happens, sadly, when you’re testing things.

But there’s an actual story where an older fella was trapped in his self-driving car because the car didn’t know what to do in a roundabout. He got caught in a roundabout and the car went around. I’m not kidding you, this is an actual story. It went around for 8 hours in that circle, and the guy finally got out, but he had to be admitted to the hospital for dehydration and probably a lot of mental distress as a result. So things like that are happening, and I think about the balance you talk about between humans and machines. There are certain things where we still need to have an option for humans to take control.

And a perfect example of this that I mention sometimes: there’s a podcast called “99% Invisible” that did an episode called “Children of the Magenta,” and there’s an actual story where they started doing a lot more automation and autopiloting in aircraft, especially for longer flights. Of course, it makes sense, but the problem is that there wasn’t enough training for when something went wrong, and unfortunately in this case something did go wrong. The actual pilot had had kind of a nice party weekend or something out in the Caribbean and was passed out resting, so the copilots didn’t know what to do in that case, and sadly the plane crashed; there was no override available, or they didn’t know how to access it.

So this is a good segue into our next area, because I do like to talk about the FUD factor, you know, the fear, uncertainty, and doubt, because I think that we need to stay human no matter what. And I think that’s a premise you would agree with, Josh: as much as we let machines take over, we still always need to practice humanity and civility and things like that.

So, let’s take a stroll down uncanny valley, shall we? The uncanny valley, for Defenders listening, basically means that the more human the machines become, the weirder and more off-putting they can also become. Josh, do you watch Silicon Valley? Do you watch that show?

Josh Clark: I’ve watched a little bit, but I confess that it hits a little too close to home, and it’s sort of painful for me. So I don’t watch a ton of it, I admit.

Jason Ogle: Yeah, well, I’ll be honest with you, it’s kind of taken a turn. The writing is, I don’t know, it’s not as funny as it used to be. But last season there was this...

Josh Clark: I’m not either, by the way. It happens to all of us. I’m not as funny as I used to be.

Jason Ogle: Oh, yeah, right. So there was a scene, you probably don’t remember this, but there was a creepy programmer who made this AI, and it was female; it was really just a torso, you know, her face and then her shoulders and such. And he was sexually violating that robot, and it came out in the episode, it was discovered, and she was almost sending signals, like cries for help, like she was suffering from some sort of synthetic PTSD. It was really weird and creepy. And then, you know, I’ve been really interested in psychology lately, and I started wondering: is there an actual diagnosis for folks who have weird attractions to machines? Sure enough, there is. It’s called mechanophilia, and it’s the love of, or sexual attraction to, computers, cars, robots or androids, washing machines and other domestic appliances, even lawn mowers and other mechanized gardening equipment. And now I realize why somebody had to put a disclaimer on a chainsaw that said “do not use on your genitalia.”

Weird, I know, but that actually exists. And then of course I thought of the movie “Her”; there’s some of that in there. So all of that to say, what I’m leading to with my question is: I can see us getting more and more attached to our machines and forgetting that they don’t, and never will, have the capacity for genuine empathy. And I feel like it’s hard enough to lose a loved one who’s a real human being in this life; thinking about mourning over the hard drive failure of a humanized machine feels a bit disturbing to me and, frankly, psychologically dangerous. Do you have any opinions on this? Or am I the only weirdo who thinks about things like this?

Josh Clark: Well, so first of all, I think you are hitting on a real trope that we see in mythology throughout, and there’s a misogynistic bent to it, this misogynistic line of storytelling about how man can reinvent women and improve women. It’s in things like Her: the operating system becomes, for at least a little while, his perfect woman, but then of course exceeds him. But that’s always part of the story, too. Going back to, I don’t know, Weird Science in the 80s, if you remember that one, two nerds create their perfect woman and it goes awry; Ex Machina is another example. It all starts with this desire to create a human, and particularly to try to fix women, and it never really goes right, and it’s rooted in this idea that there’s something wrong with women in general. Part of this is the desire to play God, or to fix humanity in one way or another, or at least to duplicate it in some improved way, and obviously the mythology and all these stories tell us that this is a terrible and misguided idea.

And I think we probably even know that deep down, and that goes to some of the FUD that you’re talking about. Is this really what we want to have? Do we want to replace humans with machines? I would say that there is an inventor’s fascination with the idea of, can we do it? It goes to the Turing test thing that you started with right at the beginning: can we do it? Can we make a convincing replica of a human being? And I don’t know if that’s ever going to be technically possible, but I doubt it.

And in general, anyway, instead of trying to make machines act like humans and be convincing humans, why not let the machines do what they do best and let people do what they do best? What I mean by that is, at the moment what machine learning is really good at, again, this narrow AI (and perhaps this is something that could inform our motion toward general AI, that more intelligent artificial intelligence) is recognizing that the machines see the world, and apply logic, in different ways than us. In particular, narrow machine learning is great at taking on time-consuming, repetitive, detail-oriented, error-prone, or even, I would say, kind of joyless tasks, joyless from a human point of view, and really getting insights out of them. That feels like a great place to start: how can we take these jobs that people are not good at, that require a lot of effort and generate a lot of error, and let machines take care of that?

In other words, they should not be a replacement species for human beings, but a kind of companion species: how can we work together in interesting ways? So much of the conversation around artificial intelligence and machine learning tools is about how we can replace people, and I think a much more interesting question is how we can amplify what people do best. And it goes back to this platform notion that I think about: I like to think about systems that let us be our best selves rather than diminish us. If you look at what a lot of technology is about right now, in terms of the popular services coming out of Silicon Valley, it’s often these convenience services that, I would argue, let us be lazier, or let us do less, or just give us more leisure time, which I don’t dismiss. Leisure time is great, and in many ways it does help us be better versions of ourselves, more rested and more creative. But how can we actually create systems that help us be more of what we are, do more, reach the goals that we want, rather than just taking things over for us?

Jason Ogle: I like that a lot, Josh. I think one of the core principles of this entire talk is right there: how do we do that? How do we create systems, like you said, that amplify what people do best? And I do have a question coming about that. But I want to continue down this path a little bit more with voice UI. We’ve mentioned the movie Her, where voice UI is prominently featured as pretty much a way of life in the not-too-distant future; it already kind of is now, in a way. But obviously voice UI is flawed. We have many popular systems: Siri, Alexa, Cortana, Google Assistant, probably more coming. There are still a lot of frustrating flaws in this experience. Varying dialects can be a challenge, the volume levels of our voices, Bluetooth device handoffs; when I try to use Siri in my truck to even record a memo or something, I have to turn my Bluetooth off because my car speaker doesn’t work well enough. That’s probably a whole other challenge, but there are a lot of inherent challenges to using this, and it overlaps, even: we say “hey Siri,” “hey” whatever assistant name, and if we’re all doing that in a subway, there’s a lot of overlap. I just feel like there are still a lot of UX challenges with this technology. And by the way, when I get to heaven, I feel like God’s going to tell me how many hours of my life I wasted trying to get Siri to do things that I ended up having to do myself. So is there hope for these virtual assistants? If so, what is it? Is a better way coming? Your thoughts?

Josh Clark: You know, I think you’re touching on a whole bunch of different things which are all super interesting, and it’s worth saying there’s nothing more frustrating than talking to a system whose entire purpose is to understand your speech and having it not understand you. If it doesn’t understand you the first time, then it was probably not worth asking in the first place, right? Because now you’re repeating it a second and a third time, and it’s not clear that the result is going to be any better. So it’s actually taking time away from you. It’s a really frustrating experience.

There are a whole bunch of different parts to this. One is just that notion of: does it understand me? And even there, there are sort of two levels of understanding; one is simple speech recognition. So in our house we have an Echo, an Alexa device, and, well, you know my wife, Liza Kindred. She was on the show a little while ago.

Jason Ogle: Yeah, mindful technology

Josh Clark: Yeah, exactly right. So, as a mindful technologist, I can say that Liza does not at all like Alexa, and specifically the Echo device. It understands our 18-year-old daughter Veronica and me, I would say, 90% of the time, which is pretty good, right? That’s maybe not as often as another human understands us, as it turns out, but it’s pretty good. But it understands Liza basically never. So I’d set up a few things so we could turn on our lights with it, and she’ll come in and say, “Alexa, turn on the lamps,” and Alexa will say, “Playing Christian children’s music.” And I’ll try to help and be like, “Alexa, turn on the lamps,” and then it’ll just do it. You know, “your girlfriend does it for you”; it’s a joke, but it’s also sort of annoying. And so now it became awkward for Veronica or me to speak to Alexa in front of Liza. It’s like this unwanted person in our house. [Crosstalk]

Yeah, that’s right. So there’s a personal nature to this that creeps in. We know better, but it felt like Alexa didn’t like Liza, and Liza certainly didn’t like Alexa. It’s just a machine, just an algorithm that’s running this, but by the very nature of presenting as a voice, it has this powerful personalizing aspect, this emotional aspect, which gets at your earlier question about our emotional relationship to these machines. But even beyond understanding the voice, there’s understanding intention, understanding what the question means. So often, even when they get the voice recognition exactly right, understanding the intention doesn’t work. And I think that’s what you’re getting at too: the “I don’t know the answer to that,” “I have no idea what to do with those words” response is a frustrating thing that happens all the time.

Part of the thing too, specifically with Alexa, is that you have to know the incantation, right? You have to know exactly how to phrase the thing, or remember what it knows. I think one of the problems with these voice assistants is that they front as general AI: ask me anything, I can help you with anything. But it turns out, of course, they can’t. They’ve still got, even though it’s broadening every day, a relatively narrow set of abilities in the grand scheme of things. And yet they’re in the thousands of things that we can ask them, not the millions or bajillions of things we could ask humans, but a still impressive yet narrow band of a few thousand things. That’s still a lot more than I can remember. I can’t remember what I’m allowed to ask Alexa for, and also the specific way that I have to ask Alexa or Siri. If I want to add a reminder to a specific to-do list in my Things app on my phone, I have to ask Siri in a really specific way that I can never remember.

And so Benedict Evans, speaking of the uncanny valley, has the uncanny valley of speech assistants: as they become more and more capable, they actually become harder to use, because now it’s on me to remember how to use them. When they have three or four skills, I can remember what I can do with them. But as they present as more and more human, but not quite, then basically between when they have five skills and when they have all the skills, infinite skills, there’s that whole gap in between where it’s just like, man, I don’t know. Which is why everyone basically asks Alexa to set a timer and play music, and that’s about it, because I can’t remember the commands.
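Part of why these assistants feel like incantations is that skills often trigger on fairly rigid command patterns. Here is a toy, hypothetical sketch of that brittleness; it is not how Alexa or Siri actually work internally, just an illustration of why exact phrasing matters so much when matching is naive.

```python
# Hypothetical sketch of the "incantation" problem: a naive skill matcher
# only fires on exact command templates, so anything phrased differently
# falls through to a generic failure.
COMMANDS = {
    "turn on the lamps": "lights.on",
    "set a timer for ten minutes": "timer.start",
}

def naive_match(utterance: str) -> str:
    """Look up the utterance verbatim; no understanding of intent."""
    return COMMANDS.get(utterance.lower().strip(),
                        "sorry, I don't know how to do that")

print(naive_match("Turn on the lamps"))                     # works: exact incantation
print(naive_match("Could you light up the living room?"))   # fails: same intent, different words
```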

Jason Ogle: Yeah, that’s true. So there’s still room for innovation around this, of course. For example, I’ll ask her to add something to my Costco list, but I can’t just say “Costco” like we normally do, because there’s somebody in my address book whose last name is Cosco. So [crosstalk] you know, Siri has trained me to really enunciate that T in Costco. That’s the only way, and it gets it right about 90% of the time when I do that. There are a lot of little things like that; it’s a small detail, but something that has to be addressed by the engineers involved.

Josh Clark: Well, and that’s a hard thing: the more that we talk to robots, the more that we talk like robots, and that’s not the outcome that we want. There are always these periods of transition, but obviously we want technology that bends to our lives instead of the reverse. So often we have this well-intentioned thing of, great, we’ve got machines that can understand our speech, and yet we have to talk to them in this weird, stilted speech or in very specifically structured commands.

Now, this is early days. You look back at where we were a decade ago, and this is all amazing; this is the world of science fiction. Part of the challenge is that technical advance inflates expectations faster than it fulfills them. So often, as technologists and as consumers, we see the glimmer of possibility here: we can imagine how Siri or Alexa could be super useful if they would just understand us and anticipate what we wanted better, but they aren’t there yet. They’re making leaps and bounds in speech recognition and in the natural language processing that maps that speech to actual intents, but it’s coming maybe more slowly than we would like. Gradually it will happen for the tasks that we want, and they’ll just fold into our lives.

I do think that speech is a very convenient interface for a lot of different contexts, and often people imagine, well, I can’t do everything with speech, and that’s okay. Our future is going to hold a variety of different interactions and interfaces, just as it always has, even before machines came into it: we write, or gesture, or speak, or use imagery for a whole bunch of different contexts between people. I think that will evolve with machines too; we’ll adapt to the right channel and the right mode of conversation as the machines get better at doing that.

Jason Ogle: Yeah, and you know what I was thinking too: I left something out of what I was saying about the different challenges with dialects and volumes. I left off hearing-impaired folks, deaf people. They can’t really use Alexa, right? A lot of them can’t speak clearly because of the hearing impairment, so that’s a challenge too. But here I want to shine a little sunshine on this part of it, because just yesterday I saw an article, or a tweet by Fast Company Design, that some app developer created an app that allows Alexa to actually communicate with deaf people. It’s really cool, it’s impressive, and I’ll be sure to link to it in the show notes, Defenders. It uses the camera and actually understands sign language, and the way it communicates back to the hearing-impaired person is through some really large subtitles. I love innovations like that, where we’re trying to make an experience that works for everybody as much as possible. So that was a nice little bit of sunshine. But I want to say to my hearing-impaired friends, please don’t use this while driving.

Josh Clark: There you go. And you’re totally right about that, but also, what a boon for the blind these speech and audio interfaces are. Rather than having to hack visual interfaces with screen readers that have mixed results depending on how the software or web page is built, there’s something that’s actually designed for that. And of course these voice assistants also work with text as well; you can have text-based interfaces where the natural language processing just works on the writing instead of the speech. As we get all these different options, people will be able to adapt to their specific needs or contexts, whether those are related to their abilities or their physical context and what they’re doing at the moment. When we’re driving, we are all essentially blind, because our eyes are occupied and our hands are occupied, so that’s a good place for a speech interface for everybody.

Jason Ogle: Fully agree. I’ve been grateful for having the ability to do that. And even having an Apple Watch, I try to remember to turn on raise to wake before I actually start driving, and that way I don’t even have to take my eyes off the road and really don’t have to move my hand that much. Things like that have been really neat. I think that’s where we want to remain: where technology is a servant and not a master. And I feel like when computers first came on the scene in the 70s and 80s and software started happening, the promise to humans was: we want to be your assistants, we want to serve you, right? And now, 30 years later, I feel like it’s kind of become reversed; we’re serving the machines a lot more, it feels like, especially with a lot of the addictive apps and such. Your lovely wife, Liza Kindred, is an expert on that stuff, and Defenders, listen to the mindful tech episode at userDefenders.com/mindfultech where we talk a little bit more about that. But I feel like we’ve come to this place where we’ve become more the servants to the technology.

And that kind of brings me to one more question in the FUD section. There’s this quote from Nathan in the movie Ex Machina, played by Oscar Isaac; he’s the genius who created these AIs, and he said something that really caught my attention when I was watching. I actually wrote it down. He said, “One day the AIs are going to look back on us the same way we look at fossil skeletons.” That scared me a little bit, and then I thought about Hollywood. I know it’s just movies and stories, but there’s a lot of stuff Hollywood has kind of predicted. Even Minority Report, people are still trying to create that after all these years; it’s been almost two decades. So there’s something to be said for how Hollywood is pretty good at capturing the imagination and where humanity is at with morality and things like that. Westworld, I, Robot, Terminator, Black Mirror, Ex Machina, the list goes on and on; there’s no shortage of productions depicting how the machines we’ve built will one day rise up, take over, and possibly destroy us. And in your Adobe XD interview (Adobe’s actually sponsoring this episode, thanks Adobe), you addressed these common fears that many of us have about AI. You said basically that all of this stuff is, quote, “still a long ways off,” unquote. Do you believe this will eventually and inevitably become a threat to us? That the machines we’ve made will eventually overtake and possibly destroy us?

Josh Clark: You know, I think it is possible. And I guess I want to say, when I say that this is a long way off, I’m really talking about general AI, the idea of the conscious, sentient machine, the singularity stuff that you hear about, machines becoming Terminator, all the good science fiction nitty-gritty coming true. I think science fiction is powerful. It creates the models and expectations for what we ought to create, or what we ought not to create. I think the Star Trek communicator, the flip phone essentially: Captain Kirk’s communicator basically made it inevitable that we would have a clamshell phone. So the stories that we tell about the future have a certain quality of defining what the future will be, which also makes those stories important.

I think that we are in a phase of dystopian storytelling, in a way, to counter a little bit of the exuberant, very willful optimism about the power of technology that comes out of Silicon Valley, while we also feel its effects going a little bit sideways. And I think that’s really come to the fore in the last year or so with, in particular, Facebook’s foibles and missteps in their use of data. So part of that is a corrective that is necessary and useful. And again, we have so many centuries of myth about the risks of trying to create our own human being: Mary Shelley with Frankenstein, all this stuff. It’s like, don’t try to play God, right? That’s the story. It’s a constant theme about humanity in general: let’s understand our place in the natural order and not try to overstep it. Part of what those stories are is that, and I think we should recognize them as that.

And, you know, I think reminding ourselves as creators of some useful humility there is important. I will say that artificial intelligence broadly, and even the narrow intelligence and narrow AI that we’re experiencing right now, has incredible power to help us as well as to hurt us, and sometimes it’s going to surprise us when it helps us or hurts us. There’s a historian of technology named Melvin Kranzberg, and he has his six laws of technology. Kranzberg’s first law is that technology is neither good nor bad, nor is it neutral. What that means is that the good or the bad we do with technology depends on our intention, how we use it, how we deploy it. But the “nor is it neutral” part means that we may not understand its effects until we put it out there. And I think that’s been a little bit of Silicon Valley’s experience. I will go ahead and believe the good intention about actually making the world a better place, which is so much the refrain of the technologists there; I’ll take that at face value. But a lot of the stuff that we’ve created hasn’t turned out that great, and I feel that even in my own experience. Having focused on mobile software for a long time, there was an assumption that, hey, this is an opportunity to fill empty moments with productivity or play or something, and I feel culpable in some of the theft of attention that’s happened in the last decade in terms of my work there.

So that’s all to say: we don’t necessarily know how this is going to turn out. We have a really powerful new technology in machine learning, something that can find patterns, give insights, make decisions on our behalf (or at least recommendations or suggestions), surface information, and in some ways take agency from us. Whether or not it actually becomes a sentient being that takes over the world, I don’t know, but I do think we’re already giving these systems agency in important, specific areas of our lives, from prison sentencing to granting loans, to determining prices for health insurance, to hiring and promotion. We should really be careful about how those machines make decisions, because they have our own human biases cooked into them. They all have values; all software has values, all software is political. It’s built with the values of the things that we put into it, intentionally or unintentionally. And so I think that at least in the short to medium term, the opportunities and the risks are around how we apply those systems and being really clear and explicit about the biases that we’re putting into them.

Jason Ogle: Very well said, Josh. Before we switch gears here into the how... I lost my thought for a second, sorry. Okay, here we go. Before we switch gears, I want to touch on what you just mentioned about the output of these things. I think largely the output equals the input, and you said it in your Mind the Product talk: garbage in, garbage out. And the thing that I think scares me the most about this, and we’ve already seen it a lot in Silicon Valley, is that oftentimes money will take precedence over humanity’s best interest. We’ve seen some brilliant hackers do things that are just unthinkable and unimaginable, and you just go, wow, you could use that for good, you could use those superpowers to do some really good things. But there’s this inherent nature within us, sometimes there’s that greed, and we saw it with Facebook with the data sales and the privacy invasions. And I think that’s what scares me the most about AI: it depends on who’s programming it, it depends on who’s developing it, and it depends on what their motives are. So that scares me. That’s the last thing I wanted to say about that.

Josh Clark: I mean, I think that you’re absolutely right. Look at some of the things now. Technologies like CGI have been used for decades in Hollywood to create remarkable special effects, and the proliferation of machine learning, computer vision, and generative software that can create its own imagery and video has essentially made that technology available to the public, or at least the semi-skilled public with some familiarity with how to put these models to use. And so you have things like deepfakes out there, which put the faces of celebrity women into pornographic film scenes. And I want to be super clear about that: this is an assault on those women. This is really an awful thing to do to somebody, to effectively impose someone’s identity onto somebody else, and particularly to do it in that specific scenario.

So here's this technology that we've been using in these really benign ways, for entertainment effectively, but it has evolved into something that can be used for assault. So I think you're right, it's about how this is going to be used, and also how can we learn, just as citizens in this world and as Defenders, to protect against those kinds of manipulations where we can no longer trust our senses, because the manipulation has become so good that we can make video of any public figure saying anything that we want. Here's the evidence, you know; we can no longer trust our senses. In some ways that can also be used for good. More and more, we're effectively getting this kind of augmented hearing for everybody: think about the earbuds that you've got, but with little computers in them, things that can do live translation. That's something Google's Pixel Buds introduced about a year ago, where somebody says something in one language and the translation is what you hear in your ears, Star Trek's universal translator in action. Or I'm at a concert, but I want to adjust the concert to my own specific needs, so again it's using machine learning algorithms to process the sound coming in and change it: I can change the bass, change the treble, change the volume, I can focus more on the person speaking right in front of me, or tune them out, or eavesdrop on the people behind me.

But the point is that that personal hearing experience is now completely subjective to me; I'm no longer hearing the same thing that you are. And when you think about that as a proxy for one of the problems we have in our culture right now, that I no longer perceive the world the way others do, that I have this information bubble, how does that get reinforced even more when the things that I see or hear are manipulated, either with good intention or bad? It's a strange world. So how do we create, I don't know, the browser-tab equivalent that lets me know this image was generated by artificial intelligence? Are these the kinds of things that we need? Can those be created to help us navigate a very strange world that's coming?

Jason Ogle: All very provocative questions, and important questions to ask. So I'm going to switch gears here, Josh, as we start to wrap this up. I've certainly got some more questions, but I want to start talking about how this is done, because I know a lot of the Defenders listening are very interested and very excited about what's possible. And I know that my Defenders listening want to do good, they want to make a difference, and they want to make things better for people, for their users. That's what excites me; that's one of the things that makes me feel good about doing this, being able to influence those folks. So I want to talk about how these things are done.

But first of all, I want to talk about our jobs, because we've kind of seen the writing on the wall, even as web designers. We've seen software come out that says, hey, you don't need to know how to do anything, you don't need to hire a professional designer, we have a machine that will just spit out a custom website for you. Things like that have been around for a little while now. So I feel like, especially for freelancers who maybe don't have a good loyal customer base, there's a bit of a reason to be concerned. There's certainly a lot of other things coming too, but should we be worried about our jobs? The futurist author Kevin Kelly says that we shouldn't worry, since even though many normal jobs that we know today will be extinct due to artificial intelligence, there will be a host of new jobs for us to do. My concern is that nobody knows exactly what those jobs will be, and he doesn't specify either. Where do you stand with these notions? Do you have an idea or opinion of what these quote-unquote jobs might look like?

Josh Clark: Well, I think the history of technology is one where we see that vulnerable jobs often can be replaced by technology; if the machine can do the job, the job will probably get absorbed by the machine. And that goes back to as soon as we started to build tools and simple machines. When we think about that in terms of our own work, I think the kinds of jobs that are vulnerable are the ones where there are clearly settled solutions. So to go back to this idea that perhaps web designers will lose work to services that can create a website instead, and that has happened already just by virtue of having templates, it's like, you know what, creating that photo portfolio website, that's a solved problem, we know how to do it. So perhaps we don't need a lot of jobs for that particular thing, because we actually have solved that problem. If you think of a designer's role as solving problems, well, that's not really a fertile area anymore anyway.

And so to think about what the machines will be good at, let me pull back. To understand what machine learning is good at, it's useful to understand that really, it's good at deciding what's normal. You point it in a direction, it finds the patterns, it figures out a range of normal for that thing, and it predicts the next normal thing, or it spots aberrations. It's like, oh, these people are healthy, but this person is tracking in a weird way, something could be wrong, it could be unhealthy. And so when you think about that in terms of our own work, it's really about finding patterns in the way stuff is done. It's the kind of thing that perhaps over time could construct, say, the ideal ecommerce experience based on traffic, effectively machines doing their own little A/B testing, or at least the scaffolding for it.
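A minimal Python sketch, not from the episode, of the "range of normal" idea Josh describes: learn what normal looks like from past observations, then flag anything that tracks in a weird way. A simple statistical baseline stands in for a trained model, and the heart-rate numbers are invented for illustration.

```python
# "Range of normal" sketch: summarize past readings, flag aberrations.
# A z-score stands in here for a real trained model; the data is made up.
from statistics import mean, stdev

def range_of_normal(history):
    """Summarize 'normal' as a mean and standard deviation."""
    return mean(history), stdev(history)

def is_aberration(value, mu, sigma, threshold=3.0):
    """Flag values that fall far outside the learned range of normal."""
    return abs(value - mu) > threshold * sigma

# Hypothetical resting heart-rate readings for one person.
history = [62, 64, 61, 63, 65, 62, 60, 64, 63, 61]
mu, sigma = range_of_normal(history)

for reading in [63, 66, 95]:
    status = "could be worth a look" if is_aberration(reading, mu, sigma) else "within the range of normal"
    print(f"{reading} bpm: {status}")
```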

But I think that's a good way to think of it: at least in the near term, the machines are going to be good at being assistants, at creating scaffolding for kind of low-level production work. Airbnb has done some interesting experiments with this, where essentially they gave a machine model sketches of different elements from their design system, visual symbols for each element, so that the system could read those symbols, pluck those design patterns out of their code, and construct a web page. Effectively, you could take a whiteboard drawing and suddenly have a web page from it. That's not going to be the perfect web page, but it does get you started. It short-circuits the need for a high-fidelity wireframe by moving directly into the browser, to a place where you might be able to have a direct conversation between designer and developer about what to do next. It moves that production along.

In fact, there's a really interesting service called, man, I don't know how you say it, Uizard, maybe it's pronounced "wizard," but it's U-i-z-a-r-d, and it does a similar thing. It takes sketches and creates Sketch files for you, or an actual working iOS app, basically mapping symbols to UI elements. We're also starting to see artificial intelligence being used to identify how to crop an image, what a good crop is; rather than just cropping from the center or from a corner, it can understand what the focus of the image is and crop to that. So there are these jobs that are, again, kind of lower-level production work that we often do, but that fall into that category of tasks I mentioned earlier that machines are good at: things that are time-consuming and repetitive and detail-oriented and error-prone and kind of joyless. Machines can be good companions for that.
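The core move in both the Airbnb experiment and Uizard-style tools is mapping recognized sketch symbols onto design-system components. A rough sketch of that idea follows; the detect_symbols step is a hypothetical stand-in for a real computer vision model, and the symbol and component names are invented.

```python
# Map recognized whiteboard symbols to design-system components and emit a
# rough page scaffold. detect_symbols() is a stand-in for a vision model.
SYMBOL_TO_COMPONENT = {
    "rectangle_with_x": '<img class="ds-image" src="placeholder.png" alt="">',
    "horizontal_line": '<hr class="ds-divider">',
    "rounded_box": '<button class="ds-button">Label</button>',
    "squiggle": '<p class="ds-body">Placeholder copy</p>',
}

def detect_symbols(whiteboard_photo):
    """Stand-in for a trained model that reads symbols off a whiteboard photo."""
    return ["rectangle_with_x", "squiggle", "rounded_box"]

def scaffold_page(whiteboard_photo):
    """Turn detected symbols into a rough scaffold for designer and developer to discuss."""
    components = [SYMBOL_TO_COMPONENT[s] for s in detect_symbols(whiteboard_photo)]
    return "<main>\n  " + "\n  ".join(components) + "\n</main>"

print(scaffold_page("whiteboard.jpg"))
```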

So that's a long answer. I do think that what we may consider some junior-level production tasks are going to be vulnerable to this next generation of AI design tools, or machine-learning-driven design tools. But hopefully what that does is free us up to solve new problems instead of continuing to repeat settled solutions.

Jason Ogle: I like that answer, Josh. Is there a specific area... actually, let me rephrase this question. Do you think that digital designers who don't get behind designing for AI and machine learning will eventually be left behind?

Josh Clark: I feel like this is a little bit like the early days of mobile, and when I say that, I mean maybe before the iPhone came out, when it felt like there was something here that was really going to be a thing. We all had these phones, they were kind of computery, it seemed like there was going to be something here and we had to figure out how to design for them. And then the iPhone came out and became really popular, Android followed soon after, it was just everywhere, and suddenly there was this boom. It was like, oh holy cats, we've got to get on this; it was really clear that it was there.

This feels like something that is brewing to me. Right now, machine learning is not yet broadly available to all organizations, but I think that's going to change pretty soon. And once we start to put some thinking into how we use this in the right way, and this is something we should talk about, how designers and UX designers and researchers can play the role of understanding where we can deploy machine learning, I really believe that this is the oxygen powering all of the interesting stuff that's coming in the next generation of technology. Just as mobile defined the last ten years of our industry, I really think machine learning will start to define the next. So it is time to get started on figuring out what design's role generally is in this and what designers' roles individually are in this. But it's early, it's not panic time; it's time to start thinking about it, because this is going to be part of our job.

Jason Ogle: Very cool! That is encouraging, and a bit alarming, but in a good way. So Defenders, listen to what Josh is saying, and I know we're going to provide even more context here and maybe some areas where you can go dive in a little more. Is there a specific area of study or focus you believe would most benefit those Defenders wanting to dive into designing for AI?

Josh Clark: Well, there are two things. One is that there's some technical familiarity you can get with what these systems are good at and what they're not. But the second is really applying the skills we've already got as designers and UX experts and researchers to this new problem. To talk about the first: I think it's useful to get to know, broadly, what machine learning is, how it works, the different types of models. You don't have to learn the math behind them, but it's useful to understand what they're good at and what they're bad at. In the same way that if you are a visual UX designer for the web, it's useful to understand how to work with the grain of the web, what it's good at, what it's bad at, what's hard to render as an interface and what's easy to render as an interface, so that you can design with that, I think it's a good thing to understand the different flavors of machine learning. There are a lot of intro courses intended to teach how machine learning works and the different models that are out there. That's useful for getting a big picture of how this stuff works; again, what it's good at and what it's bad at.

More than that, if there are algorithms that your organization is working with and starting to play with, getting a little hands-on time with them is helpful. Again, understand the kind of results they give, how they report their confidence, and whether or not the result is correct. Is this a useful hint or suggestion, or is this a firm answer? A good example of this, and an easy place to get started, is that all the big technology companies, when you look at Microsoft or Google or Amazon or IBM, are not only in a race to come up with the best machine learning algorithms for speech recognition and natural language processing and computer vision, or camera vision. They also share those algorithms, because they want you to use them; they want your data to make those things even better.

And so basically, if you use any of their hosting services, if you use Microsoft Azure for your hosting, you get their machine learning APIs essentially for free, and they're pretty easy to play with. In fact, Microsoft calls theirs Cognitive Services, and on their Cognitive Services API page you can link into any of them. They have an easy web interface where you can upload images and see the data that comes back and how confident it is in recognizing the things in that image. Just playing with those things, you get a real sense of, oh, this is the kind of way I might expect information to come back. This tells me how I can structure an interface that is honest about how confident the answer is, for example.
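If you want to see those confidence scores for yourself, here is a short sketch of calling a hosted image-analysis API and printing how sure it is about each tag. It follows the shape of Azure's Computer Vision "analyze image" endpoint that Josh alludes to, but treat the version, region, and field names as assumptions and check the current documentation before relying on them.

```python
# Sketch: send an image URL to a hosted image-analysis API and print the
# confidence attached to each tag. Endpoint, version, and fields follow
# Azure's Computer Vision "analyze" API but should be verified against docs.
import requests

ENDPOINT = "https://YOUR_REGION.api.cognitive.microsoft.com/vision/v3.2/analyze"
KEY = "YOUR_SUBSCRIPTION_KEY"  # from the Azure portal

def tag_image(image_url):
    response = requests.post(
        ENDPOINT,
        params={"visualFeatures": "Tags"},
        headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
        json={"url": image_url},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("tags", [])

for tag in tag_image("https://example.com/photo.jpg"):
    # Each tag carries a confidence between 0 and 1: the signal an honest
    # interface should surface rather than hide.
    print(f"{tag['name']}: {tag['confidence']:.0%} confident")
```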

So I think just actually getting your feet wet and playing with these things directly is really important, splashing in puddles a little bit, play, and maybe even make some ridiculous applications. These services that Microsoft and IBM and Amazon and Google provide are pretty easy to work with from a web development standpoint. If you get a web developer to work with you, the two of you could make something kind of interesting and fun to play with.

Jason Ogle: I like that.

Josh Clark: I think the tools are out there for us to start. You can build products on these, but it's also a useful sandbox to start to play in and get familiar with. But the second piece that I mentioned earlier is: how do we use the skills that we are really great at to solve new problems using machine learning? Because so far it's been a domain dominated by data scientists and algorithm engineers, and rightly so. They've been figuring out what we can do with these huge amounts of data to extract insights and make predictions and identify patterns, and they've shown us what's possible. But for a long time design has been outside of this, and there's been maybe some head-scratching of, well, so what is my role, I don't know how to write an algorithm.

I think there are a few different things to think about. One is that creating a brand new machine learning model takes a lot of effort and a lot of data, and a lot of expense as a result, and because these are very narrow applications right now, it's really important to understand where to apply that effort. So look at the tasks that happen in a specific context. Say you're a radiologist and your job is to identify signs of cancer; you're not going to create a system that is an expert on cancer broadly, so where can the machine step in? Again, think about those time-consuming, repetitive, detail-oriented, error-prone, joyless tasks. Where can the machine step into that? How can it help analyze all these mountains of images to amplify the real talents of the radiologist, which are not just looking at image after image after image? How can it actually bring the real skill of the radiologist to the fore?

And so I think part of it is really understanding the problem to be solved and understanding where machines can be helpful. This is really contextual inquiry; this is research you're doing to understand the task and the workflow involved. And then to understand: where is the data that would help solve the problem at hand, who holds that data, what is the audience for that information, and how can we make sure we are getting an audience of people, or training data, that is really reflective of the world we're trying to influence or create? These are all research questions. This is good UX stuff here. And then once we have the information, how do we present it in a way that is honest about the actual confidence of the algorithm? How do we express "I'm positive," "I'm not sure," or "maybe this is a hint, this is a suggestion," or just call out that this needs some attention or some human eyeballs?

So all of these, I think, are ways to think about it: there's a new tool, this new powerful technology for detecting patterns. Where in the process should we use it, what might that look like, and how would that change people's lives? That's a design concept question. Those are the areas where we badly need some help.
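To make the "I'm positive, I'm not sure, this is a hint" idea concrete, here is a tiny sketch of a confidence-to-language mapping. The thresholds and wording are invented placeholders; in practice they would come out of design and research, not engineering.

```python
# Present a prediction in language matched to the algorithm's actual
# confidence. Thresholds and copy are invented placeholders.
def describe_prediction(label, confidence):
    if confidence >= 0.9:
        return f"This looks like {label}."
    if confidence >= 0.6:
        return f"This might be {label}. Worth a second look."
    if confidence >= 0.3:
        return f"Possibly {label}, but treat that as a hint."
    return "Not sure. A human should take a look."

for label, confidence in [("a cat", 0.97), ("a suspicious lesion", 0.55), ("a stop sign", 0.2)]:
    print(describe_prediction(label, confidence))
```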

Jason Ogle: So good, Josh. I was thinking, as you were answering that, about the Defenders listening who are going to want to go play in the sandboxes, and by the way, I'd love it if you could send me some links I can put in the show notes where they can start playing with some of these. But I was also thinking: be careful, Defenders, be careful what you apply this to. Make sure you're not finding a solution in search of a problem, and I think I learned this from you, Josh. Don't just do it because you can. I think about the early days of the web and Photoshop, when we were applying drop shadows and bevel and emboss to, like, everything. Do you remember that?

Josh Clark: Yeah, good times.

Jason Ogle: Yeah, love it. Bevel and emboss, drop shadow. And now we have flat design, and well, Apple did it, so everybody should just do flat design now. Not necessarily. I just want to encourage you Defenders listening to have a purpose in mind, something that will do good and solve real needs. When you do this, don't just do it because you can, I guess is what I'm trying to say.

Josh Clark: Well, I think that is so important, especially as we think about what we choose to put out into the world as a product for broad use. But it's useful to distinguish that from what we do just to play and practice in our own private laboratory before we take it out into the public. I think it's also okay to make some frivolous toys and be silly, because in a way that helps us get outside of our own heads and outside of our own routines. And because machine learning helps us solve problems in a different way than we might be accustomed to, it's sometimes helpful to try to solve different kinds of problems. Sometimes making toys, making something silly, helps us think differently. It brings out that childlike sense of possibility and imagination that we might not bring, I hate to say, day to day to our own professional work.

So there's one fun example I like to show called Giorgio Cam, which some Google developers made about a year ago. It strings together some of these publicly available APIs, and it works right in the browser, at least in Chrome on Android phones. You just bring up the web page, point the camera at something, and it identifies it and makes up a little rap about it. It's basically grabbing the image, sending it to image recognition, getting that back, and playing it through speech synthesis with some music behind it. It's a fun little toy, but it's asking, what could we make by stringing together different things? It's like, oh, I've got an image recognition API, great, and I'll run that through a translator and then through a speech synthesis thing, and wow, now I can take a picture of anything, have it recognized and translated into, I don't know, Japanese, and maybe spoken out loud in both languages, and now I've turned the entire world into a flash card for learning Japanese. I don't know. But make some stuff, see what's possible, especially by chaining some of these publicly available APIs together, and see what you make.
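The chaining idea is simple enough to sketch. Each step below is a hypothetical stub standing in for a real hosted API (vision, translation, speech synthesis); the function names and return values are invented, and the point is the shape of the pipeline rather than any particular vendor's interface.

```python
# Giorgio Cam-style chaining: recognize what's in a photo, translate the
# label, speak it aloud. Every function is a stub for a real hosted API.
def recognize(image_path):
    """Stub: call an image-recognition API and return its top label."""
    return "bicycle"

def translate(text, target_language="ja"):
    """Stub: call a translation API."""
    return "自転車"  # 'bicycle' in Japanese

def speak(text):
    """Stub: call a speech-synthesis API, or play the audio it returns."""
    print(f"(speaking) {text}")

def world_as_flashcard(image_path, target_language="ja"):
    """Chain the services so anything you photograph becomes a flash card."""
    label = recognize(image_path)
    speak(f"{label} ... {translate(label, target_language)}")

world_as_flashcard("photo_of_bike.jpg")
```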

Jason Ogle: I'm so glad you said that, Josh, and I appreciate that as well, because I'm not discouraging you Defenders from playing. I think about when we're children, right? There's no bound, no limit to what we can do with our imaginations, and kids are just fearless. When you look at it as an adult, some of the crayon drawings look pretty alarming, right? But you know what, it's awesome for kids. And I think we need to have a beginner's mindset and more of a childlike approach to learning new things, especially technologies like this. So I'm really glad you said that. Defenders, do play with it. You'll form those neural connections, and those ideas will offer input into other ideas that will address a real-world business and user problem. So I really appreciate you correcting me on that and adding more context, but I also think...

Josh Clark: Think of it not as a correction but as a supplement, because you're completely right that we don't need more frivolous products and services in the world. But a little play in our own practice is helpful.

Jason Ogle: Absolutely, yes. And the Lorem Ipsum generator is another seemingly simple but, I think, useful tool that somebody took it upon themselves to create. Our dreaded Lorem Ipsum, the stuff we tend to put in all of our mocks that doesn't mean anything and often confuses clients. Somebody came up with an AI generator where you put in a few keywords for whatever area of focus the business or product or website is, and it'll actually go and scrape Wikipedia or wiki-whatever and pull real, germane copy into your mock. It probably didn't take that much effort for that person to develop, but it's very useful. So there are things like that you can play with, and it was probably the result of him or her doing the same thing, you know, playing and then coming up with something that actually does solve a problem. So I love that; that's really great encouragement.
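A rough sketch of that topic-aware placeholder idea: pull a short extract from Wikipedia for a keyword and drop it into a mock instead of Lorem Ipsum. The REST endpoint shown is Wikipedia's public page-summary API; error handling and attribution are left out for brevity, and this is only an illustration of the approach, not the specific tool mentioned here.

```python
# Topic-aware placeholder copy: fetch a short extract from Wikipedia's public
# page-summary endpoint for a keyword and use it in place of Lorem Ipsum.
import requests

def placeholder_copy(keyword):
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{keyword}"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json().get("extract", "")

# Hypothetical usage for a mock about a bicycle-repair business.
print(placeholder_copy("Bicycle"))
```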

Josh Clark: That’s great. It’s a great example.

Jason Ogle: So, I have a listener question, and I really like this one. It happens to be from the person whose idea actually sparked this conversation, Josh. Her name is Shari Benko, and her Twitter handle is @SheetwiseDesign. She has a really great question: in what part of the AI development process should UX get involved, and how do you suggest they do that?

Josh Clark: I love it, and Shari, I love that you're interested in that. I think it goes back to a lot of what we were talking about earlier. In particular, the research of really understanding the people and process we're aiming to help, and where they run into gaps of insight or mountains of, frankly, joyless work, where the machines could both save that joyless work and offer insight, is really important. Basically, I think that UX, and particularly the research side, has a really important role in saying where the data scientists should point their algorithms.

And then also looking at the presentation of it. I think we have maybe been a little bit complicit in that fantasy of what the algorithms are able to do, in that our interfaces tend to report their recommendations at face value. I've mentioned this a few different times, but ultimately I feel like so many of our machine-generated interfaces have an overconfidence problem that isn't really at the source of the algorithm itself; the interface presents the information as completely true where the algorithm might be saying, man, I'm only 50% sure about this.

So, I think part of the work we have to do as an industry is start to develop the interface and interaction design patterns to express the proper amount of confidence, and that's one piece. I think we also have a really dangerous moment around bias in our data. I mentioned this earlier; that's another place where UX can be super helpful, both in helping to prevent bad bias as much as possible and in surfacing it in the algorithm where necessary. What I'm talking about here is that the machines only know what we give them. There's a hope and an assumption that the machines can be more objective than people, that perhaps if the machines are making decisions, they won't have the same kind of human bias we bring to our own decisions. But when you look at, for example, systems that determine prison sentencing based on history or profile, or when we look at hiring and promotion, we are giving these systems data based on our past and based on metrics of the past. So if we have a naive model that tries to predict who will be the next great CEO of the company, it is going to historically favor tall, white, middle-aged men, and that's not great; that's not a diverse or really adequate solution.

The idea being, in other words, that by using our own historical data, we may freeze that history into the operating system of our culture itself. So first of all, it's important to note that correcting that historical bias is itself a kind of bias, but ignoring the fact that there's bias in the system at all means we're just not being honest with ourselves. And so part of it is how we as UX professionals can help identify where there may be points of bias, really understanding the culture of the domain we're looking at, and making sure we're getting training data from all types of people within that audience, not just people who look like the team developing it, for example. And I don't mean just in terms of race or age or ethnicity, but also in terms of worldview. Technologists have a very specific worldview that can color data drawn from them. How do we also get people of all classes, countries, and creeds, but also professions and worldviews, philosophers and artists and politicians and everybody who contributes to our culture?

So I think that's another aspect: where can we mine for data, what is the responsible way to go, what are the possible biases and how might we correct them, what are the risks, and how can we surface them? It is alarming when the machines naively surface their own bias, and you see these really painful examples of it pop up. I think it was three or four years ago now that Google Photos was identifying black people in pictures as gorillas. Incredibly painful. [Crosstalk] In New Zealand, there was an example of a man of Asian descent who was applying for a passport online, and the machine wouldn't accept his photo because it thought his eyes were closed. The model just wasn't trained on a broad enough set of data, and that is incredibly painful.

But it at least surfaces that bias in a way that we can address it. One thing we've seen in our own culture with the Me Too movement, for example, is that at the right moment, when really painful and unpleasant facts are surfaced in a system, they can be addressed and perhaps fixed. So when we look at the bias that, again, can be painful when it's surfaced in the machines, or that they surface in this naive way based on the data they've been given, we can say, man, this is something that we should fix, we should apply our own bias to address this. The trick is, as we do this, and this is another conversation that UX folks can lead, what are the values of the system we're creating? What do we want it to achieve from a human and cultural perspective as well as a business perspective? This is a tricky thing, because right now it is effectively all big companies that are defining those values, either intentionally or unintentionally. Who gets to decide?

And one thing we've seen in our culture right now is that reasonable people can have incredibly different notions of what fairness and justice should look like. I don't know the right solution for that, but naively believing that we're not embedding our systems with some notion of fairness and justice is, as I say, naive, and we need to at least be intentional about it.

Jason Ogle: Such a great answer, Josh. I want to encourage the Defenders listening, as Josh was touching on, to bring those skills in: psychology, art, the poets, philosophy, all of those things. Bring that into this, because as we try to make machines that act and behave more like humans, which makes sense, let's make sure we're bringing our humanity with us. Psychology has been a really fascinating subject for me, especially of late, so Defenders, I really want to encourage you to learn more about how the brain works, how it behaves and processes, and then always be practicing and building up your empathy skills. I always say empathy is a choice that becomes easier to make the more we practice it, so I definitely want to encourage the listeners on that.

Josh, a couple more here. I'm a big critic of Siri, as I mentioned before, and of how often she gets things wrong, and we even personify her as a "she"; that's kind of how we've been trained. But she does get things right very often too. I can add eggs to my Costco list as long as I say "Costco," and I can do that in bed by talking to my wrist like Dick Tracy. Are we living in an age that the disgraced Louis CK described as "everything's amazing and nobody's happy"? In other words, do you feel like we take these amazing, albeit flawed, technologies for granted? And if so, why do we do that?

Josh Clark: I would say yes and no. Sometimes these things do cost us time, right? Sometimes it would have just been faster not to have asked Siri in the first place. When they work, they are amazing. And part of it is the fact that even though the error rate might be lower than we realize, we feel the errors more acutely, because especially with voice interfaces, they're time-consuming mistakes. It's like, man, I just lost a minute talking to Siri when I could have just looked it up myself. And part of it is that some of the tasks these things enable are kind of low-effort tasks, tasks that are worth having the machine do only if it can do them quickly, if there's no cost to it.

And so I think part of what we have to do as designers is recognize that it's okay to solve small problems as long as the cost of using those solutions is low. Our responsibility is to make sure that cost is low, and that the system can recover quickly or repay that expense when things go badly. Part of the problem here that we can help with as designers, when I talk about designing for failure and for uncertainty, and this has always been true in all kinds of interfaces but is especially true here, is that our job is really to set appropriate expectations for the system's ability and then channel behavior in ways that match that ability.

So it's really about being super honest about what the thing can do. And there's a real challenge for designers of systems like Siri and Alexa, because what they're capable of is a moving target; they get more capable every day. So it's not a fixed system to design for, to educate for, but their promise is "ask me anything." And when you ask and you don't get the answer, it's not a good promise; it isn't setting the expectation correctly. I don't say this at all to diminish the Alexa and Siri designers; on the contrary, it's a really interesting, thorny problem: how do you educate people about a vast set of real capabilities, with a really low-resolution interface like speech, and how do you introduce those capabilities over time in ways that don't just drain tons of time on the part of the user?

So I think there's real possibility here. There's also this kind of death by a thousand cuts, these little tiny expenses you accrue by trying to use Siri and having it not work, despite its overall success as a product and as an amazing technical achievement. As designers, how do we save that expense and help educate people to use these systems in ways that won't frustrate us?

Jason Ogle: Great, great answer, Josh. Technology is a wonderful servant but a terrible master.

Josh Clark: It's an interesting thing. There's a guy named John Culkin who was a reverend in New York in the sixties; I'm not sure if he's still around or not. At the time, he was a friend of Marshall McLuhan, and he has this lovely line: we shape our tools, and thereafter our tools shape us. We create these things, and they have these follow-on effects that change the way we behave, or the way we say "Costco." It's a little bit like Kranzberg's first law that I mentioned earlier: technology is neither good nor bad, nor is it neutral. It has these effects we don't necessarily realize, but after we put these things out into the world, they change us.

Jason Ogle: Couldn't agree more. Josh, this is my last question for you, and I like to end with a bang. This has been just such an incredible deep dive, and I think it's going to be reference material for many, many years to come. But I want to ask you this last question. I'm kind of building it up. Can you tell?

Josh Clark: I’m excited and nervous.

Jason Ogle: If today was your last day on earth, what would be your final plea to the designers, developers, and engineers building the future of AI and machine learning that our kids, and potentially their kids, will be experiencing and even building on?

Josh Clark: I think the biggest thing, and we've been talking around this a lot, is to really recognize that software is embedded with values and even with politics, and that to ignore that is to willfully ignore the potential impact these systems are going to have on the world. Values and politics are highly fraught right now, but it's important to be intentional about the values we're cooking into these systems: through the training data we give them, through the people we choose to gather that training data from, and through the people we choose to impose the system upon. Let's be intentional about what kind of effects we want this thing to have. We may not be able to control those effects; this is again Kranzberg's first law, we don't always know exactly how it's going to turn out, good or bad. But let's be really intentional about the values that we believe will make a better, fairer, more just world.

And some of those values go to how we can be respectful of our own humanity, and amplify our humanity and the things we are best at. But more broadly, when we think about the bias that these systems will inherently have, how do we want to bias them? What is the bias toward the right kind of world, and how do we possibly have a conversation where we can agree on what that world might look like?

Jason Ogle: Amen, brother. That's awesome! So Josh, as we close here, what's the best way for the Defenders listening to connect and keep up with you? Because I know they're going to want to.

Josh Clark: Well, thanks. I tweet occasionally @bigmediumjosh, and I'm also at bigmedium.com, that's the two sizes, big and medium, where I occasionally blog. And I show up at conferences and give talks about this stuff sometimes; those are also listed on the speaking page of my website at bigmedium.com, and if you happen to show up at one of them, please say hello.

Jason Ogle: Excellent! So, finally, Josh: the machines know only what we feed them, and I want to implore you Defenders listening, those of you regularly feeding the machines, to keep them on a steady diet of empathy, humility, servitude, and humanity.

Josh, thanks so much for being here, and for spending a good chunk of time with me talking about this stuff. I feel like we're going to be surfacing from this well for a while; we've gone real deep here, and I'm so appreciative of how you've made this conversation such a deep dive.

And thank you so much for all the work you're doing, for getting this message out there, and for making the world a better place. You truly are. You're not just doing the Silicon Valley "I want to make the world a better place" bit; you really are. So please, my friend, as always, fight on.

Josh Clark: Thanks so much. And thank you for creating such a great forum for helping all of us do all that good stuff of making the world better. Thank you!



ABOUT JOSH CLARK
Josh Clark is a design leader who helps organizations build products for what’s next. He’s founder of the powerful design laboratory Big Medium, a Brooklyn design studio specializing in future-friendly interfaces for artificial intelligence, connected devices, and the web. He’s the author of five books including “Tapworthy” and “Designing for Touch”. He’s also a prominent blogger and speaker, the creator of the popular Couch to 5K running program, and he happens to be Maine’s 11th strongest man.


SUBSCRIBE TO AUTOMATICALLY RECEIVE NEW EPISODES
Apple Podcasts | Spotify | Pandora | Amazon Music | Stitcher | Android | Google Podcasts | RSS Feed

USE YOUR SUPERPOWER OF SUPPORT
Here’s your chance to use your superpower of support. Don’t rely on telepathy alone! If you’re enjoying the show, would you take two minutes and leave a rating and review on Apple Podcasts? I’d also be willing to remove my cloak of invisibility from your inbox if you’d subscribe to the newsletter for superguest announcements and more, occasionally.