This week on the show, we talk about a new study on society's readiness for AI ethical decision making. We also answer some questions from the community about how to deal with know-it-alls and what to do in a horrible interview experience, and discuss the merits of SEO as user research.
Recorded live on June 16th, 2022, hosted by Nick Roome & Barry Kirby.
Check out the latest from our sister podcast - 1202 The Human Factors Podcast - on HFES - The President's Perspective - An interview with Chris Reid
Let us know what you want to hear about next week by voting in our latest "Choose the News" poll!
Disclaimer: Human Factors Cast may earn an affiliate commission when you buy through the links here.
Welcome to Human Factors Cast, your weekly podcast for human factors, psychology, and design.
Hi, I'm feeling better, and I hope you are, too. It's episode 248. We're recording this live on June 16th, 2022. This is Human Factors Cast. I'm your host, Nick Roome, joined today by Mr. Barry Kirby. Hello. Hello. Tonight on the show, we've got some fun stuff for you. We'll talk about a new study on society's readiness for AI ethical decision making. What does that mean? We'll figure it out later. We're going to be answering some questions from the community about how to deal with know-it-alls and what to do in a horrible interview experience, and we'll discuss the merits of search engine optimization being user research. But first, we've got some programming notes and a community update for you all. If you're unaware, this is Pride Month, and we are in full steam with our Pride content over here. Shame on me, I don't have my merch. But we are still doing a fundraiser for the LGBTQIA+ community: if you become a patron this month or if you buy our merch, any of those proceeds from the merch and 30% of our Patreon earnings will go towards the Trevor Project. We talked about it last week on the show. That is still going. The link to our Pride content is in the show notes this week. You can hear, I guess it's out there now, our Human Factors Minute on designing for LGBTQIA+, and that is guest read by Katie Sabo, one of our Digital Media Lab research assistants, I guess a founding member of our Digital Media Lab. Also, next week, next Friday, join us live on LinkedIn. We're going to be doing another one of those wonderful HFES presidential town halls. We're going to sit down with Chris Reid, who I understand Barry just sat down with. We'll also have Carolyn Sommerich and Tom Albin, and then friend of the show Farzan Sasangohar, who's been on a couple of times. So we'll have a wonderful cast of folks there to talk about HFES and the state of things. It will be a great time. Barry, I want to know what's going on over at 1202. Well, Chris Reid clearly gets around everywhere, because the interview with him is still up and actually getting some really good traction, particularly some really good feedback here from the UK; it's just nice to hear what goes on across the pond in HFES and some of the similarities and differences. But coming up on Monday is another new interview, and actually we're diving into health this time. Peter Brennan, who's a surgeon here in the UK, has been really driving Human Factors in his surgery and his operating theatre, and he's been doing a lot of this off his own back. So it was really good to just chat to him about what he's driving. That goes live on Monday, and I thoroughly recommend everybody goes out and listens to it. Yeah, I finally got a chance to listen to that Chris Reid interview. Great job, Barry. Really appreciate it. All right, we know why you're here. You're here for the news, so why don't we go ahead and get into it?
That's right. This is the part of the show where we read you a Human Factors news story, and then we talk about it in some capacity. Barry, what is the news story this week? So this week, researchers study society's readiness for AI ethical decision making. With the accelerating evolution of technology, artificial intelligence plays a growing role in decision making processes. Humans have become increasingly dependent on algorithms to process information, recommend certain behaviors, and even take actions on their behalf. A research team has studied how humans react to the introduction of AI decision making. Specifically, they explored the question, is society ready for AI ethical decision making, by studying human interactions with autonomous cars? Researchers observed that when the subjects were asked to evaluate the ethical decisions of either a human or an AI driver, they did not show a definite preference for either. However, when the subjects were asked for their explicit opinion on whether an AI driver should be allowed to make ethical decisions on the road, the subjects had a stronger opinion against AI operated cars. Researchers believe this rejection of a new technology is mostly due to individuals incorporating their beliefs about society's opinion, and that it is likely to apply to other machines and robots. Therefore, it will be important to determine how to aggregate individual preferences into one social preference. Moreover, this will also differ across countries, as the research here suggests. So, Nick, are you ready for AI to make ethical decisions in your life? You know, I saw this question that you posed to me, are you ready for AI to make ethical decisions in your life? And I said, oh, I can answer this easily. I don't know. I genuinely don't know. I might be putting too much thought into this, but I genuinely don't know. Because what information does an AI agent have about a certain situation that maybe I don't see? And so I understand the power of AI, and I understand sort of where we're at with that. And it can do some crazy, scary things and understand things that maybe we can't even see yet. And so to me, I don't know. That's an interesting question. And I think that's what we're seeing here with this story, because, I mean, a lot of these people probably have that same thought as I do. I would imagine many people kind of understand what AI can do and understand that it can see things more objectively than us humans can. But because it's such a newer technology, or I guess something where maybe we still don't know all the variables around it, how it makes these decisions, those types of things, maybe that is why they're erring on the side of caution and saying maybe the human should be the one in charge here. Barry, I'm curious what you think. I'll ask you the same question since it was such a curveball for me. Are you ready for AI to make ethical decisions in your life? I don't know. It depends. So I think this is really interesting, because what is good ethical decision making? We'll get into the experiments in a moment, but fundamentally this is all about asking difficult questions. You're being asked to make choices that invariably lead to a level of hurt, pain, emotional outcome, things that, really, an AI doesn't understand. An AI, when it makes a decision, an ethical decision or any sort of decision, doesn't have to live with the outcome; we do. And I think that's where an interesting nuance is.
The other thing, I think you kind of alluded to it, which is why I was playing around with the idea, is that an AI will presumably know more than we do, or can see all the data almost at face value, more so than we do. But also, presumably, if it has time, it could actually work through different potential outcomes, whereas we react in the moment, whether we do that in our head or not. If you have to make a split second decision, you make a split second decision with the best information you have available at the time. An AI can presumably process a whole lot, almost run scenarios in the blink of an eye, and therefore make logical decisions based on values, but not necessarily emotional decisions. And again, it doesn't have to live with the outcome of what is decided. The decision is made, it does it. So as much as I really want to see AI do a lot of stuff, and I think it can help us massively, I still think there is an element there of, I'm not sure, I don't know. Yeah, right. We're both in the I don't know camp. But let's talk a little bit more about what actually happened in this experiment, because I think there's some nuance here that we want to capture in the way in which they analyzed whether or not people were comfortable with AI making these ethical decisions. So let's talk about these experiments. Right. There were two of them. I'll kind of talk about the first one. Barry, I'll let you take the second one. In the first experiment, the researchers presented the human subjects with an ethical dilemma that a driver might face. And we can talk about some ethical thought experiments in a little bit through the lens of AI making these decisions, and kind of poke holes at it, see what we would come to in those situations. Right. In the specific scenario that these researchers created, the car driver had to decide whether to crash the car into one group of people or another. This is your traditional trolley problem. The collision was completely unavoidable. It had to happen, and the crash would cause severe harm to one group of people but would save the lives of the other group. Right. Traditional trolley problem. And coming back to the results of the study, right, you mentioned this in the blurb. Basically, when asked to evaluate the ethical decisions of a human versus an AI driver, they didn't have a preference for either. But then when they were asked whether or not the driver should be allowed to make those decisions on the road, they had sort of a stronger opinion against the AI making that decision. And so in this situation, they said the driver, I guess, should be responsible for determining which of these two groups the car crashes into. Barry, you want to talk about the second experiment here? Yeah. So the second experiment was a bit broader, in terms of understanding how people react to the debate over AI ethical decision making once it becomes part of social and political discussions. So they had two scenarios. One involved a hypothetical government that had already decided to allow autonomous cars to make ethical decisions. The other scenario allowed the subjects to vote on whether to allow autonomous cars to make those sorts of decisions. And as we mentioned in the blurb, when they were asked to do this, the subjects were really quite uncomfortable with the decision having already been made for them; they wanted to be able to vote on it themselves.
So, yeah, it's really interesting how a lot of this comes out. It falls back to: is there an element of blame culture here as well? Not culture exactly, but do we just want to be able to point the finger at somebody, and not just a thing? Right.
I thought this was going to be quite easy. The more you think about this, the deeper we get into it. Yeah, I've never thought about it quite with this twist on it before. One thing I do want to bring up about that second experiment that you talked about, right, I think the purpose of that experiment was really to look at how to introduce this concept of having AI make these ethical decisions into a society, right. Is it something that is mandated by the government in charge, or does it allow its citizens to make that decision? And that's really interesting to me as well, because you have sort of these two schools of thought about how, theoretically, right, if you're talking about democratic societies, you have these elected officials who are making the decision on your behalf, or do you let the general populace kind of vote for that as well? Or even in another case, you have sort of a dictatorial government where the decision is solely made by one ruler. Right. There's different ways in which to introduce this into society. It's fascinating to me to see how those might actually be implemented. So Barry, you and I can kind of slice and dice this like we normally do in terms of all the different facets, but do you have a particular one that you want to start with? We can start anywhere. I don't care. Oh, good. Well, for me, I think one thing that I find really interesting in all of this, and it kind of does play into that last discussion you just had, is around the organizational and social side, because for me, the value of culture really hits into this quite strongly. You take the trolley problem that they've adapted, and the different variations on this: do you allow it to crash into a bunch of school children, young people, or an elderly person? And different cultures place different values on different people. So Western cultures tend to place more value on young people, because presumably they've got more to live for, they're all out there. Whereas Eastern cultures tend to place a lot more value on elderly people, because they've got so much more experience and they've lived more life, so they've got all this knowledge that they can pass on to people and improve the wealth of society's knowledge base that way. So I think where AI is going to be quite interesting is how do you implement that in different cultures, in different societies, in a way that is right and just, and who is the right person to talk about that? Because then you also take that next step into: we already have a strong recognition that AI is quite sexist. It's very male dominated, because the people who are coding these systems generally are male, and so it has that inherent bias in it. And so if we're going to do this, how do we make sure, because you can't be culturally agnostic, culture is an inherent part of everybody's fabric, so how do you make that work? But then how do we make the organization that is developing this type of stuff develop it in such a way that it doesn't reflect gender bias and those types of attitudes? So I think there is that bigger thing at play, and I think AI has got a long way to travel in being able to help us solve that. Yeah, you brought up gender bias. It's not unique to sort of gender either. There's racial bias,
I guess, right
there. The next thing that I want to talk about here is, I guess, sort of understanding what the AI's intent is. Right. So you have this decision that's being made by an artificial intelligence agent. And when you think about why it made that decision or how it made that decision, we as a society, we're kind of transferring from that societal aspect now to kind of a training mindset, if you will. Right. We will need to be trained. We will need to understand what factors went into that decision. Right. Especially when it comes to, I think you mentioned it earlier, hurt, and sort of understanding, if an AI chose somebody else over me, am I ever going to get a window into that decision? Will I ever understand it? And I think the same goes for the person on the other side, what happened to them.
There are definitely elements around that we've got to engage with; the other bit is around the training. One thing I think is quite interesting: when we started using Google Maps, for example, we wanted to drill in and understand where that decision making was coming from, and you had that element there to understand and drill into. You set a whole lot of settings for that. Whereas now, when we use Google Maps, you drop into it and you don't really care why it's made the decision that it's made. You don't care about the route. You just assume that the route it's given is right. And that's why I think in many ways we will get to a point where we don't really worry about why the decisions were made, we just accept that they were made. We'll get to a level of trust, but we're going to be on a journey there. I think Martin has dropped in a point here on the stream he's listening through, where he said that the AI will be underpinned by algorithms written by people. The benefit, as he sees it, is the outcome will be consistent and acceptable at a societal level, even if it's different from an individual's choice. And that goes back into the organizational things that we talked about earlier, which is almost democratic decision making: everybody would have bought into it. But this then leads into what you've been talking about, which is, as long as people understand how that decision was made, which you could argue we might struggle with, because people don't actually know how decisions are made on our behalf at the moment anyway, even before AI, with the way that government works. That understanding is
going to be fundamental, isn't it, to be able to exploit it. Yeah. So here's another piece to that puzzle too, right? Imagine, since we talked about incorporating AI into a government or into a society, what if they also voted not only on whether or not to have an AI make those ethical decisions, but also on what decisions that AI could make? It's kind of getting at that underpinning of AI by people. Right. So if you had a society vote, like, okay, should AI be allowed to decide, I guess, the method or the way in which an AI would triage in a healthcare setting, are we voting on that? But then how do you get at all the nuance? It's not just a simple decision. There's a lot of information that goes into that. Do you have healthcare professionals, then, that lobby for one versus the other? Or do you have the society kind of representing what the best decision would be from their perspective? It's a whole interesting twist. I'm thinking like piecemeal AI, where you just plug in pieces to this AI system that ultimately makes all the ethical decisions for you. And I don't know, it's fascinating. I think we could go into some of the separate industries or sectors to kind of talk about what some of these decisions could be. I don't know. Do you want to do that? Yeah, I mean, an easy one for me to pick up would be the whole defense bit, because it happens kind of all the time already that a decision needs to be made about, I guess, what people would love to call kinetic effect, who lives and who dies, that type of thing. And at the moment, we have very strong policy decisions there. That is a human decision. That is something that somebody has to take responsibility for. But is there something there, from an AI perspective, where we can make better quality decisions? But again, what happens when it goes wrong? And it goes back to that blame piece. I think fundamentally, when we're talking about taking somebody's life, no matter how bad it is, there is a level of responsibility there that we ethically, morally feel should not just rest in some sort of data stream. Have you got a particular one that you would pick up and highlight? I mean, the healthcare example is really interesting, because you have this triage aspect that we kind of mentioned a little earlier. You have sort of understanding who gets treated first, who gets seen first, based on not only factors like who's easily able to be treated or whose injuries are more severe, but who's more likely to live, who's not, right? You have all this stuff. So that's one interesting application. Then you also have making the difficult decisions when somebody's in a coma or in a vegetative state or anything like that, right? You have these really tough decisions, or whether or not to even go forward with a surgery. You have AI kind of looking at all that stuff. But then even on the other side of things, you have AI doing wonderful things in healthcare too. You have AI looking at scans and diagnostics and picking up diagnoses before healthcare professionals can even see them themselves. And so there's all these different types of ways in which AI could integrate with healthcare. But when you talk about those tougher decisions, right, how do you assign responsibility? Because really we're talking about who lives and who dies. And you mentioned that in defense too, right? It's the same thing. Who's responsible for that?
But I guess the subtle difference is that in the healthcare setting, the AI would make a decision based on the overall capability and capacity of the healthcare system. So yes, you might make a decision based on the resources available, based on the capacity and capability of the surgeons available or the nurses available. And so I guess the decision it would be making, it would be looking at it like, how many people can we save? As opposed to focusing on this one person, you might save two people, and things like that. It is the same outcome, but I guess from a very different perspective. The other one where we're talking about this as well, I guess, is in the criminal justice system. So when we're looking at the use of AI there, I've sat as a magistrate, as a judge, where you have to make decisions on people's guilt and innocence. And that is a very human thing in many ways. It's a very emotional thing to sit through, because you have to hear all the evidence, you have to weigh it all up, interpret whether you believe that person is telling the truth or not, and then come to a decision. And then that decision has consequences, because they either go to jail or not; in the UK they go to jail, and in the US you have slightly more extreme versions in certain areas. So could the AI make better, cleaner decisions? In many ways, if you take out all that emotion and just nail it on the facts? And then that is that judge, jury, executioner piece again. Would it make decision making in the justice system quicker, because it's just fact after fact after fact? Would it make it more just? That's the other question too, right? Yeah, I mean, I don't know. Criminal justice is really hard to talk about, especially when you think about a fair trial by your peers, and there's a lot of issues with criminal justice in general. And I mean, I wonder how much of those problems would be solved by AI. I don't know if many of them would, because you have systems that are in place. But you think about all that stuff, and which role does it actually play? Does it play the judge? Does it play the jury, or does it play the executioner? And when we say executioner, I'm almost thinking about having AI sort of step in and say, well, what is an appropriate punishment that would have the most likely impact for a successful recovery for this person? Right. So that's kind of what I'm thinking from that perspective. And I think from that perspective, it's probably a great application. But when you start applying it as a judge or a jury, then I think you're maybe getting a little bit more dicey in terms of what society at large is comfortable with. Well, again, you're absolutely right. The justice system is more about society, because it's society's rules. That's what the whole justice system is. It's about what is acceptable and what is not within a functioning society. Those are social rules; somebody breaks those social rules that have been codified, and then the judge is there to say, yes, you have definitely broken the rules. Well, the jury is there to do that. And as you say, the executioner is there to extract some sort of recompense. There was a really interesting quote from when I was doing my training at the time: justice is not about rehabilitation. Justice is about extracting a penalty, on behalf of the victims, from the person who has broken the law.
And I think it's interesting that, actually, throughout a lot of this, we do forget the victim's role in it all. Would this provide better outcomes for a victim? I don't know. I don't know whether it would. Yeah, that's a great point. Right. Because maybe there's some system in place that would not only be better for the recovery, but is also an appropriate sort of punishment that would be satisfactory to the victim as well. Right. Good point. I do want to talk a little bit about surface transportation, because that's kind of what they're talking about here. They're talking about the trolley problem, and when you think about the example in the experiment that they ran, right, you're looking at the classic trolley problem. And I do want to talk about autonomous vehicles, and I think this is one place in which we're already looking at this issue, and I think that's why the researchers chose to focus on this area, because this is already out there, right. You already have AI systems, and whether or not they're making the correct choices, they are making decisions now that are impacting lives. And yeah, how do we build responsible systems, all that stuff. I don't know. I just wanted to bring that up because that is happening now, and it's one of those things where we need to figure it out quickly. Well, and I think, again, it's a really good story from this perspective, because we've never had, that I can think of, such a rapid deployment of technology that is properly groundbreaking, game changing, but potentially lethal all at once. And we've talked about AI being able to do different things. I think we've all had this idea that we know aircraft are being flown by autopilot, and they could have varying levels of autonomy in there and artificial intelligence making decisions, and that's fine, because they're up in the air, they don't affect us on a day to day basis. Trains could be similar. So things that are very highly controlled can have that AI. The thing that has made a distinct difference is being able to get a piece of AI doing stuff for you in your hands. And this is where we've hit this: people can drive around utilizing AI. We need to deal with this right now. And are we happy with our road space being used as largely an agile test bed? Because that's what it is at the moment. We've got AI doing its thing, and it's making mistakes. Not many when you compare it to the other accidents that go on in the grand scheme of things, but they are still being made. And I think that's possibly what makes people feel more uncomfortable than anything else. Yeah, let's throw a couple of ethical thought experiments out here just to see kind of how we might think about AI in society and culture and how it might approach some of these issues. Right. So we talked about the trolley problem. That's pretty ubiquitous, right? So let's look at the Experience Machine. Have you heard about this one, Barry? The Experience Machine. There's a book, Anarchy, State, and Utopia, and basically we're looking at an ethical system that places pleasure above all other values. So the concept here is that there's a machine that could give you whatever experience you wanted at any moment. And I guess the first question is, would you plug into a machine like this? Would everyone do it? If everyone was plugged in, who would run it? Those types of questions, right?
And so this whole concept of wanting to plug into the Experience Machine, I think this is a little bit more fun to think about, because would I plug into it? But then would it become addicting? I don't know. Probably. I could have every podcast episode where my Internet doesn't suck, and that's the experience I'm living. But I don't know. Would you plug into something like this? And then how would an AI system sort of work with this in society? Theoretical, but yeah, would I plug in? See, my gut answer is yes, of course you would, because why wouldn't you want to experience that sort of thing? But then it's almost that addictive thing. How do you know that you will be able to then unplug? Because I think the idea of pleasure is only pleasurable because it doesn't happen all the time. You can't have unlimited pleasure all the time. Or maybe I'm just doing things wrong. I don't know. But you know what I mean. I think there's a whole thing where you have the whole yin and yang, pleasure pain thing, because you recognize the good stuff; if you have nothing but the good stuff, then it all dulls out. So I don't know. Interesting. Okay, here's a better example. What about the utility monster? So basically, when we talk about a monster here, we're talking about AI, right? This thought experiment really considers maximizing pleasure, happiness, or good as the highest moral goal, and that is sort of set at the core of this AI's thought process. Right. So we're looking at the AI's ability to focus on the consequences of actions, and the AI would be set for maximizing utility towards good in the world. Right. And so basically, we're imagining an AI that gets 100 times more good out of things than a human ever could, or than letting humans make these decisions ever could. Right. So, for example, a human eats a chocolate chip cookie; would it taste 100 times better to you?
This is a bad example. I'm trying to adapt it from another thing. But here, think about it sort of like, you know, if you made a choice that was selfish for you, but maybe not so good for others in the world, would AI step in and correct your action to make sure that there's more good in the world based on your decision? Is it really a decision at that point? Right. It's an interesting thought experiment. Yes. Does the group exist only because of the actions of the individuals, or does the group exist because of the actions of the group, solely for the good of the group? Yes.
The whole point of a thought experiment is to make you think. Yes. Which it does a bit, actually. On that, I'm a great believer that groups are made up of people, so without people making individual decisions, you don't get a group. But you can flip that, because we've seen so many examples of how that works right through COVID. You've seen the examples of groups of people doing crazy things, yet everybody individually said, oh, why would you do that? Initially, we went through a whole period where you couldn't buy things like toilet paper and stuff because people were bulk buying, because they could, because they thought everything was going to go. So if you just left it alone, it would have been fine. But no, we act together as a group, like sheep, if you will. All right. Any other loose ends on this article before we go ahead and get into our little break? No, I think it's a really interesting one. I like this one because it's made me think a lot more than I anticipated it would, because normally we've been around these sorts of AI discussions for quite a while, but then sometimes you just get a topic that makes you think of things in a slightly different twist, a different way. So well done to everybody who voted it in to be the story this evening. Yeah. No other closing thoughts for me. Thank you to our patrons this week for selecting our topic, and thank you to our friends over at Hiroshima University for our news story this week. If you want to follow along, we do post the links to the original articles in our weekly roundups on our blog. You can also join us on our Discord for more discussion on these stories. We're going to take a quick break. I'm going to reset my router, and then we'll be back to see what's going on in the Human Factors community, right after this. Human Factors Cast brings you the best in Human Factors news, interviews, conference coverage, and overall fun conversations in each and every episode we produce. But we can't do it without you. The Human Factors Cast network is 100% listener supported. All the funds that go into running the show come from our listeners. Our patrons are our priority, and we want to ensure we're giving back to you for supporting us. Pledges start at just one dollar per month and include rewards like access to our weekly Q&As with the hosts, personalized professional reviews, and Human Factors Minute, a Patreon-only weekly podcast where the hosts break down unique, obscure, and interesting Human Factors topics in just 1 minute. Patreon rewards are always evolving, so stop by patreon.com/humanfactorscast to see what support level may be right for you. Thank you. And remember, it depends. Yes, huge thank you, as always, to our patrons. We especially want to thank our honorary Human Factors Cast staff patron, Michelle Tripp. Patrons like you keep the show running. You keep the lights on over here. What does Patreon actually pay for? Well, this month, like I said, if anyone wants to become a patron, 30% of your proceeds this month will go to the Trevor Project for Pride Month. But normally what it helps pay for here is the monthly hosting fees that we use to keep our podcast up. That costs money. I don't know if you know that. It's not free. You can't just put a podcast out there. You've got to pay for it. Anyway, it does that. We have annual website domain fees that we've got to take care of.
We have the capability behind the scenes for that website as well, and we've got to pay for that somehow. We have automation behind the scenes handling a lot of stuff, and products and services to help us with the audio and video production. And above all, it kind of helps with my ability to get good internet. Although not tonight. We'll see. Anyway, yeah, it pays the bills to keep the lights on. Thank you. I appreciate the support. Let's get into this next part of the show.
That's right. This is the part of the show we like to call It Came From. This is where we search all over the Internet to bring you topics the community is talking about. Any topic is fair game for us to sit here and talk about, as long as it relates to the field of human factors. And wherever you're watching, if you find these answers useful, give us a like to help other people find this content. All right, we got three tonight. This first one here is by Mystery To Me on the UX Research subreddit: "How to Deal With Know-It-Alls." They go on to write: I'm new to a team where a few of us were hired at the same time. One of them, a male former engineer, seems to dominate all meetings with his input on how to change processes or random things in the organization. As a new employee, I really wouldn't have the confidence to come across that way, trying to change things that have been that way forever, since I feel like it comes off as "I know better than everyone else." I'm really just trying to learn in this first month on my job. He also often brings up his past experience during meetings. I feel like it sidetracks us a lot. The manager doesn't really do a good job of keeping the meeting focused, so it ends up going for an hour with him mansplaining how to do things or how things should be done. I'm trying to be more vocal in the meetings, but he just seems to interject again right away. How do you deal with this behavior? I'm also wondering if this type of behavior is actually rewarded over my approach of keeping quiet until I learn more, or should I try to be more vocal too? And how would one do that without coming across as arrogant? Barry, what are your thoughts on this? How do you approach that situation? It's difficult, because some people do this as a defensive behavior. Particularly with new people coming into organizations, it is probably more of a male issue than a female issue, where you feel like you have to go prove your worth as soon as you possibly can. So you end up either being quiet or you end up talking an awful lot, basically trying to justify your existence. My approach to this, because I've seen it quite a lot, unfortunately, one way or another, is, if you can, have a quiet word with people and just help them reflect on themselves and what it is that they're doing. Because the other killer part of this, really, is that quite often this type of behavior can end up being quite well rewarded, particularly if there are managers who aren't necessarily all over the brief and are literally just managers, people managers, not technical managers. If they don't really know what's going on, then they think, oh, actually, this person being really vocal, clearly they know what they're talking about, when we all know that isn't necessarily the case. If you haven't got a manager who is good at getting everybody to put forward their point of view and that type of thing, you can suffer with this. So really, it's a difficult thing to do, but I would suggest a cup of tea, a cup of coffee, have a chat with them. It's not an easy thing to do; I don't have a playbook for being able to go and start that conversation. But most people, when you point out that behavior to them, do tend to be a bit more reflective, particularly in our domain. But I've seen this before, and yes, it's not an easy thing to deal with, because, particularly if you've been there a while, you feel like they've come stomping in on your ground.
You feel like you're being let down, particularly if the managers aren't supporting you in that way. So I feel for you. Nick, what about you? Have you dealt with this before? Oh, certainly, yeah. This is a tough one, because for me, as a new employee, I'm typically the quiet one, and so I'm very much in information absorption mode, and I don't feel comfortable talking about things I don't understand. The exception is asking questions about those things that I don't understand. Now, I will say you're right, Barry, this behavior oddly does go rewarded in some cases, where you do have the manager that is looking at the ability to contribute to a conversation rather than the content of that contribution. And so in this case, what I would recommend for somebody in this situation is to go to that manager and say, hey, look, I don't think that this person is contributing anything meaningful, I feel like a lot of our conversations go off topic. And you don't even have to bring up the person, just say, hey, I think our conversations could be a little bit more guided, I have concerns about the direction in which some of these conversations get off the rails. That might subtly nudge them to rein in the conversation as it does get off the rails with people like this. I don't know, that's kind of where I'd start. If it ultimately becomes a huge problem, obviously call them out by name and tell the manager so-and-so is doing this, and then from there you might want to call it out in the meeting too, and just say, hey, can we table this discussion and get back to the thing at hand? People tend to be pretty receptive when you call it out in the middle of a meeting. But I would go to the other options first. Any other closing thoughts on that one? Yeah, I guess part of it is, actually, I've probably been half of this person, so I certainly have those elements where I've gone into a new role, felt completely out of my depth, and you do feel like you have to do some of that. But this was really early on in my career, and I did have somebody pull me aside and say, you are worth it, you are fine, you don't need to shout your mouth off all the time. And that, really early on, was actually quite a good learning experience. I like to think I wasn't going quite as far as what this story was alluding to, but sometimes maybe you don't necessarily understand everything that's going on inside that person. So I think there are possibly two levels there to think on as well. I love that that person told you you don't have to run your mouth, and yet you are a regular on this podcast and you have your own. Anyway, just a little observation. All right, this next one here is filled with some fun language, but I'm going to censor most of it. Horrible UX interview experience. This is by Wishing For Nuggets on the User Experience subreddit. I had a bad interview with a senior UX designer at a company. How do I truncate this? Apparently he's an engineering grad that makes films in his free time, which is great, except he himself has just a year's experience in UX, which I found out after the interview by stalking him. And that experience also includes a course from Udemy, UX Fundamentals. I don't know, it seems ridiculous that I'm being interviewed by someone who themselves is just starting out in UX, not to mention the condescending tone.
I was talking about inclusive design, and he cuts in and tells me that's great, but it's not relevant to UX at all. "Wondering where to put you, since your UX is very," quote unquote, "basic." That's what he said after looking at my case study and portfolio. They advertised this as a UX and UI design role, but the guy says, no, we're looking for a UX researcher, which is very different. He's asking me stuff like, do you know what an artboard resolution is? Which I'm genuinely questioning, because I have four years of visual design experience. Anyway, is this normal? Am I missing something? I'm genuinely so annoyed and upset right now. I have picked this because this post made me annoyed and upset. Barry, how do you go about an interview where it might not be going in your favor or your way or anything like that? Again, this isn't easy, because I've been there a few times where you turn up thinking you're going for a job, it's been advertised as a certain job, and it isn't. You get there and they're like, actually, no, we're really looking for this. And then you maybe get to that thing of, oh, have I made a mistake here, or has somebody told me wrong? And then, I guess we see it in Human Factors quite a lot, in that if you're going for a role where possibly you're the only HF person in there, you've been interviewed by a senior person who's heard of Human Factors and has maybe done some sort of course once, a long time ago, but knows all about it, when they probably don't. It's not easy. But also, it is kind of incumbent on you to realize that the interview process is a two way process. If what they're giving you is such bad vibes, then you probably don't belong there. Your evaluation of it is: if they give you such a horrible experience, move on, find another one. I think there is maybe an expectation that every interview that we go to should be perfectly aligned to us and everything on the other side of the table should be absolutely perfect our way, and that's just not the case. And it might be that the people interviewing don't really know what they're talking about. Or it might be, as it sounds in this case, that they've got a bit of an inkling of knowledge, and because of company politics they're in a position where they have some power over hiring and stuff. Yeah, they might be coming out with stuff that you think is not right. But quite frankly, I mean, kind of suck it up and move on. Find the job that is right for you, find something that you really want to go with. You owe them nothing. Go and find the next bit. I think that you might have something slightly different. Let's break it down. You have good interviews, you have bad interviews. This post made me angry. Not at the interviewer. Look, I don't know. The thing that I want people to take away from interviews is to learn from them. In this situation, this person has clearly had a bad interview from their perspective. They have been interviewed by someone that they think is not qualified. You also don't know their full story. You've stalked them on the internet to find out what their experience in UX was, versus understanding what their job roles were and their past work experience. I don't know. To me it just seems a little pretentious.
The point here is that if you do have one of those interviews where maybe there wasn't communication about what exactly that role was, in this case they were applying for a designer role, but the hiring company was looking for a researcher role, is that something that you ask as a clarifying question before the interview actually starts? That's a learning experience there. That's a question that you can ask in the organizing stage. And then I think there's ways to handle it in the moment too. If you're asked a question, or if somebody's combative with a statement that you made, that's another way in which you can evaluate the people that you're going to be working with, right? You said it's a two way street. It is. You're evaluating the people that you're going to be working with. And if somebody's combative against something that you're saying, or maybe it's you, maybe you're not listening to what they're saying. I don't know, I just think there's something that you can learn from this. And, you know, maybe that thing is that there are some good interviews and some bad interviews, and that's the thing that you learn.
It's not rocket science, I guess. Any other thoughts on that one, Barry? Well, get over yourself. All right, thank you. No, not you. No, thank you. Last one up here tonight: is search engine optimization user research? This is by Rejuvenates on the User Experience subreddit. I did a redesign for a website. SEO research was involved. I looked at competitors in the area and keywords, ways to improve the homepage experience. I also looked at reviews. How do I display this information on my portfolio next to my design? So, is search engine optimization user research, Barry? If so, why is it? And then, also, this is a separate question: how do you put that work experience, which might not have direct interaction with users but still benefits them, into portfolios? So I think it is user research, because fundamentally it's all about how, to be able to use a website effectively, people are going to get there in the first place, right. And we have, for a long time, counted it as one of the fundamental precepts and a first step when we do this type of work. How do you put it as part of your portfolio? That's a slightly different question, and it's more of a narrative view about how you implemented the process and why you implemented the process, because fundamentally you're trying to put yourself into your users' mindset before they even get onto the site that you're designing. So it's about context. It goes into some of the intangibles. I wouldn't be able to comment, I don't think, about how to put it into that portfolio piece, because from that perspective, I've never done it. We do it as part of reporting, we do it as part of that piece, so how to showcase it in the best way to show that you could do it, I would be struggling with. Nick, what do you think? Yes, search engine optimization is user research. It's understanding what terms, what sorts of things are going to be most relevant to people before they even get to the place, like you said. Right? It's kind of understanding the user before they even get there. And really, it's your best approximation of what that user needs to get to that thing that you're working on. Right. And if you're looking at competitors, along the same lines, I'd look at competitive analysis as user research; you are looking at other tools, industry standards that are out there doing similar things to you, that obviously users of those platforms might have thoughts and feelings about too. And you can find user research everywhere. I think there's a lot of different places in which you can find user research that a lot of people don't think about, right? Like, I don't know, if you're making a product like Google Sheets and you go and research Excel, you're going to get a lot of the same stuff. Competitive analysis is totally user research. So when you look at other products and what SEO they're using, totally. Now, when you're looking at things like reviews, also same thing: you're looking at what are some of the key terms that they're using, what are some of the thoughts around the product. There's not too much information here, but in terms of displaying it on a portfolio, this is a little bit tricky. From a research perspective or a human factors perspective, a lot of times we do job talks in which we talk about a process. You mentioned narrative. It totally is.
With a designer, it's a little bit trickier, but I think throw together a graphic of all the things that went into the search engine optimization; that might be a good place to start, and you can talk through it during your interview. So I don't know, ultimately, if you came up with a design from search engine optimization, or they somehow played into each other, I think that might be a good way to go: put it right next to it. I don't know. Any other closing thoughts on that one? No, I think, as you quite rightly say, visualize your process and show how things feed in; that's probably the best, certainly in terms of that display, and would be the way I would do it from that point. Yeah. All right, let's get into this last part of the show we like to call One More Thing. It's where Barry and I just sit here and talk about one more thing. Barry, what's your one more thing this week? Well, I'm going to go with one more thing, because that's what it's called. It's not called two or three things. I spoke a while ago about the fact that I'm getting an EV, and this is like months ago now, but apparently it's coming next month, and I'm very excited. So excited. I had the car charger installed today, and so it's now like this thing poking out of the wall on the side of my house, just teasing me, waiting for the car to turn up. So it feels like we're making progress. So I've got about a month to go, and then, I don't know, we can maybe do a guided tour of my new car as a live experiment or something. That'd be fine. Yeah, we could do a live sort of test, a live review. How scary would that be? That'd be great. I love it. Contrary to popular belief, I am also going to do just one more thing here. I do have a fun story, though. If you are listening and you want to watch something funny, go watch our preshow, because I talk about a color scheme thing. It's funny; you should go watch it. Anyway, I have something I'd like to talk about today, and it is a chair related thing. So, I don't know, people who watch the show, again, another visual thing, might notice I'm a couple of inches taller today, because I have fixed my chair. Now, my chair before had an issue with the gas piston that lifted and lowered it. It actually broke off the weld underneath the chair, and so my chair was wobbling from side to side. And so as I was sitting here podcasting with you, Barry, I'd be trying to stabilize myself with just my legs and arms. And it was bad. And it was like that for weeks. And ultimately I ended up bungee cording my chair together so that it wobbled less. Still wobbled. Anyway, I didn't realize, and this might be completely obvious to some of you, but chair replacement parts are a thing that you can buy. I bought replacement parts for the underpiece and made it heavy duty, so that it will support my heavy booty. And then I also got a piston. And the first piston I got was way too tall. In fact, if you're looking at the screen right now, I'm in the middle of the screen; it was probably putting my head up here. My knees were at my keyboard at the lowest setting. It was really bad. So I had to order a replacement part. And I had got these fun wheels that are like rollerblade wheels for the chair. I don't know if I have them. I think I have them. They're like rollerblade wheels for your chair, but these added height to it as well.
And so even with the new piston, the shortest piston I could find, and those wheels, I was still looking at my thighs kind of pushing up against my keyboard. And so ultimately, I had to make my chair stationary by removing the wheels and putting in those little tiny stops, all that stuff. Anyway, that's my one more thing this week. I'm happy to have a working chair. That's it. And that's it for today, everyone. If you liked this episode and want to hear Barry and I talk about our AI girlfriends, since we're talking about AI, I'll encourage you to go listen to episode 240, where we talk about how AI might be able to provide companionship for others. Comment wherever you're listening with what you think of the story this week. For more in depth discussion, you can join us on our Discord community. You can always visit our official website, sign up for our newsletter, and stay up to date with all the latest human factors news. If you like what you hear and you want to support the show, you can leave us a five star review, that is free for you to do. You can tell your friends about us, that's also free for you to do, and it really helps us grow. And if you want to, consider supporting us on Patreon; again, 30% of our proceeds this month are going to the Trevor Project. And, as always, links to all of our socials can be found on our website and in the description of this episode. Mr. Barry Kirby, thank you for being on the show today. Where can our listeners go and find you if they want to talk about the trolley problem? If you want to talk about that, you can find me on Twitter and other socials at bazamskoke, or come and listen to some of my interviews at 1202 The Human Factors Podcast at 1202podcast.com. As for me, I've been your host, Nick Roome. You can find me on Discord and across social media at nickrum. Thanks again for tuning in to Human Factors Cast. Until next time. Yeah.
Managing Director
A human factors practitioner, based in Wales, UK. MD of K Sharp, Fellow of the CIEHF and a bit of a gadget geek.
If you're new to the podcast, check out some of our favorite episodes here!