This week on the show, we talk about how Artificial Intelligence in vehicles will be able to adapt to our needs and answer some questions from the community about working at a company that doesn't use particular tools, how to approach a situation if you do the work of a Human Factors or UX person, but don’t have the job title, and how to create a portfolio with projects that don’t have “real users”.
Episode Link: https://www.humanfactorscast.media/227
Recast on September 8th, 2022. Originally recorded live on December 2nd, 2021, hosted by Nick Roome and Barry Kirby.
Programming Notes:
Human Factors Cast Joins #Teamseas
https://www.humanfactorscast.media/HFC-Joins-TeamSeas/
News:
It Came From:
Let us know what you want to hear about next week by voting in our latest "Choose the News" poll!
Follow us:
Thank you to our Human Factors Cast Honorary Staff Patreons:
Support us:
Human Factors Cast Socials:
Reference:
Feedback:
Disclaimer: Human Factors Cast may earn an affiliate commission when you buy through the links here.
Hey, everyone, Nick here. Just Nick for now. Due to some recent world events, our friend from the UK, Barry Kirby, will not be joining me tonight. He regrets not being able to bring you a wonderful discussion on the human factors implications of using buttons versus touch screens in cars. Normally we try to find a replacement, but because this is such short notice, and because we already have the streams up and running, I wanted to get in and address it. We'll be back next week to talk about the buttons versus touch screens in cars discussion. But in the meantime, if you want to hang out with us, you can join us on our Discord community. You can visit our official website, sign up for our newsletter, all that stuff. If you want to see what's the latest over at 1202, our sister podcast, you can do that too. Or if you want to follow Barry, he's on Twitter at Baz underscore K. And I'm Nick Roome; you can find me on Discord or across social media at Nick underscore Roome. We'll leave you with a rerun of one of our older episodes that can serve as a primer to next week's discussion. So, without further ado, here's our episode on how cars of the future will understand their passengers.
Welcome to Human Factors Cast, your weekly podcast for human factors, psychology, and design.
Hello, it's episode 227. We're recording this live on December 2nd, 2021, and this is Human Factors Cast. I'm your host, Nick Roome. I'm joined today across the sea by Mr. Barry Kirby. Good evening. It's great to see you across the pond. Over the way across the pond, yes. Okay, well, you're here, and I'm happy you're here, because we've got a great show for you all tonight. We're going to be talking about how artificial intelligence in vehicles may be able to adapt to our needs. And later, we're going to answer some questions from the community about working at a company that doesn't use a certain tool that you're used to, how to approach a situation if you don't have a UX or human factors job title but do the work, and creating a portfolio with projects that don't have real users. Let's get to the reason you are all here.
That's right, it is Human Factors News. Barry, what is the news story this week? So this week we're talking about the next generation of AI-enabled cars that will understand you. In the emerging era of smart vehicles, it's the cars that will manage the drivers. We're talking about cars that, by recognizing the emotional and cognitive state of their drivers, can prevent them from doing anything dangerous. Monitoring systems will need to have insight into the state of the entire vehicle, and everyone in it, to have a full understanding of what's shaping driver behavior and how that behavior affects safety. People are starting to realize that measuring impairment is more nuanced, or simply more complicated, than making sure the driver's eyes are on the road, and it requires a view beyond just the driver. So how do we know when the driver isn't paying attention? Not simply by tracking the driver's head position and eye closure rate; you need to understand that larger context. That's where the need for interior sensing, and not only driver monitoring, comes into play. Our previous episode looked at the use of artificial intelligence outside the car, whereas this show is all about how AI and associated technology can be used to get a better understanding of the current state of the driver and all of the influences inside the car. So, Nick, how does the use of AI and monitoring inside the car grab you? Yeah, it grabs me pretty well. Look, I think this is great because it's a complementary episode to a lot of the stuff that we've been talking about recently. My general impression here is that it's exciting to see that we're thinking about the entire system: not just, okay, how does a driver deal with an autonomous system, but how can an AI system monitor the driver and provide subtle recommendations to that driver? Also, two weeks ago we looked at how the car itself can interact with the system outside of the vehicle, so we're looking at this whole thing holistically. And even further back, we talked about AI in healthcare. AI has been a pretty big topic on the show recently, so I guess our patrons want to hear a lot about AI, which I'm happy to talk about. It's awesome. I think this comes with a lot of interesting questions, more than answers. There are certainly some applications that we can look at for this type of technology: AI inside the vehicle, monitoring the driver, monitoring the passengers. I think the interesting piece of it comes, perhaps, with the ethics, and that's where I'll leave it. But what did you think of this article? So I thought it was really interesting because it is clearly a next step. When we spoke in the last episode, we spoke a lot about how drivers are still meant to be there, poised, ready to take action, and we know that drivers are just not going to sit there. They're not going to be in the loop enough to jump in at a moment's notice; otherwise, you might as well be driving. So this is really appreciating the fact that, in autonomous vehicles particularly, drivers are just not going to be doing that.
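A quick technical aside: one common way driver-monitoring systems quantify that eye-closure signal is a PERCLOS-style measure, the fraction of recent time the eyes are mostly closed. Here is a minimal sketch of that idea; the frame rate, window size, and thresholds are illustrative assumptions, not values from the article or any production system.

```python
from collections import deque

# PERCLOS-style drowsiness check (illustrative sketch only).
# PERCLOS = percentage of time, over a sliding window, that the
# eyes are mostly closed. All thresholds below are assumptions.

WINDOW_FRAMES = 1800       # e.g. 60 seconds of video at 30 fps
CLOSED_BELOW = 0.2         # eye-openness under 20% counts as "closed"
PERCLOS_ALARM = 0.15       # flag drowsiness above 15% closure time

closure_history = deque(maxlen=WINDOW_FRAMES)

def update(eye_openness: float) -> bool:
    """Feed one frame's eye-openness estimate (0.0 = closed, 1.0 = open).
    Returns True if the driver looks drowsy over the recent window."""
    closure_history.append(eye_openness < CLOSED_BELOW)
    if len(closure_history) < WINDOW_FRAMES:
        return False           # not enough history to judge yet
    perclos = sum(closure_history) / len(closure_history)
    return perclos > PERCLOS_ALARM
```

As the article's point suggests, a signal like this is only a baseline: a full interior-sensing system would fuse it with head pose, gaze, and cabin context.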
And this would really allow us to recognize the state of what people are doing, not just in autonomous vehicles but in everyday driving tasks as well. It's the recognition that other things go on in the car. As a driver, you're not solely focused on the driving: you've got the radio on, you've got people in the back, the kids crying, all that sort of thing. And they all add to the general ambience of your driving task. It really made me think, I don't know if you remember that Simpsons episode where Homer has to design a car, and he basically puts the driver in a bubble at the front? That is possibly the ideal for many people, but we're not there, and I think this sort of recognizes that. So, yeah, I think it's good. I think we need to get stuck into it a bit more. Should we refresh everybody on where we are with the current state of AI systems within cars? Yeah, that's a great segue. I was going to say, why don't we take a look at this? So let's take a look at AI inside cars, then we'll talk about the human factors issues with the drivers and passengers, and then we'll link it all back to the story like we usually do. We circle back and talk about all this stuff. So let's talk about AI systems in cars, right? We have the elephant in the room, which is autonomous vehicles, and we talked about that in depth last week. I keep saying last week; it was two weeks ago. So go back and listen to that discussion for more on the state of autonomous vehicles themselves. I think we can look at AI in the big picture of automotive technology, because there are various ways in which AI is being utilized from start to finish in the development lifecycle of automobiles. You have everything from development of the concept, I'm thinking maybe even AI models of wind resistance, the aero models. And this is talking about the holistic thing, and we'll get to inside the cabin last, but looking holistically, you even have AI in the assembly line and manufacturing: whether or not a car needs a certain piece, how to optimize the whole manufacturing process to make sure the timing of systems is just right, and all that stuff. And then you also have AI on the road. Like I said, that's kind of what we talked about last time. And this week we're talking more about inside the cabin: monitoring drivers, passengers, that type of thing. It's almost worth it to take a step back and look at AI in terms of just AI, not necessarily even in the cabin, but we certainly can look at that too. I'm thinking AI is good for these complex solutions where we may not necessarily have the time or resources to have a human do those things. And right now it's becoming viable for these automated vehicle use cases. I think that's where we're at: we're just on the threshold of understanding how these automated vehicles can leverage AI to be effective. But I think there are a lot of drawbacks with this. These models are becoming so large because there are a lot of things that you need to account for, and you can start to extrapolate this to inside the cabin too. Understanding human behavior is an incredibly difficult thing. Humans are bad at it; we're better at it than most other species, but even so, we're probably the best thing we have at it.
And it's still difficult to read and understand body language, or even inflection and tone, and all these really important things when it comes to communication. So if you're trying to develop an artificial intelligence system that is trying to monitor that stuff inside the cabin, you're going to need a bunch of data for this type of thing. You're going to need an understanding of how decisions are made
on the models themselves. Right. The data comes in. How does that model make decisions based on that data that comes in? There's a lot of things that go into artificial intelligence and safety is another piece of it. Right. We can talk about this with some specific examples later. But I'm thinking of the case where maybe the car or automated system within the vehicle misunderstands what a human is doing and creates a situation where it then puts the human at risk because of
exposure to that bad stimulus. So it might pull the car over when that might be more dangerous than just driving forward. Anyway, that's some of what I'm thinking about. What about you, Barry? What's the current state of AI to you? What key things do you want to bring out? One of the big things that I think we should be looking at is regulation, and the impact regulation is going to have on AI in general. There are all sorts of things around personal information. In the EU, we've got GDPR: the ability to understand where all your data is being used, how it's being used, and the appropriateness of that. And really, it's almost a big hammer to crack a small nut. At the moment, we don't really know where AI is going and how that is going to play out, but we've got some really sledgehammer rules that could stifle how AI is developed moving forward. But I think some of the interesting bits here are going to be around where AI in automotive is going to be and where that's going. And so, to drive into some of that, I guess
AI technology in automotive has a whole bunch of bits around it that we're starting to get to use, but it's almost at the precipice of things that we could do really well. The decisions that it makes have got to be understandable; people are going to drill into them and want to understand why it's making those decisions, which goes back to the point that you were making earlier. So they'll use neural networks to come up with these decisions, but we've got to understand why, and they've got to make the same decisions on a repeatable basis: not just choosing A on one day and B on the next and not understanding when that's going to happen. And then the thing that always trips a lot of this up is the edge cases: when things happen that you don't expect, how does it handle that, and will it do it in a really good way? So current AI gets used on things like speech recognition, user interfaces, diagnostics and maintenance, and vision recognition, making sure you stay in lane and things like that. But as we go forward, the driving element is more and more being taken over by AI. We've talked about things like Tesla in the past, about their auto-drive capabilities, and we're starting to get a lot further down that route of cars taking over more of the driving task. So we should see in the future that software is actually doing a lot of the driving. We've seen elements of where taxi services, for example, will be expected not to have a driver in them: you'll get in the vehicle and it will take you where you need to be. There are going to be some issues around that, around how we do the software development and the cybersecurity of it. How do we make sure that something that is using AI doesn't get hijacked? There have been some really interesting things on YouTube where people have demonstrated how they can hack a car's systems through something as innocuous as the windscreen-wiper wiring. So there's going to be a fair bit of work done around that cybersecurity bit to have that complete trust in there. Yeah, and you bring up cybersecurity, and to me, cybersecurity is one of the largest, not missing links, but gaps in the way that we're thinking about this. And I want to link that back to something you said about data, and personal data, and what happens with it. So you mentioned GDPR, the whole ethics of what happens to data, especially when you're collecting data on a person. It's one thing to get driving habits: you have a car that's attached to an owner, and you get driving habits from them. And that, to me, is one of those cases where it's probably okay to gather, because you can't really tell who's behind the wheel at any given time unless you have data on seat height; there are other things that you can get, but it becomes a little bit less clear. But when you're pointing a camera right at somebody and you're trying to get information on their body language, on the number of yawns they're doing, the number of blinks they're doing, how long their eyes are closed while you're on the road, this becomes very dangerous when you start to think about the implications for what insurance companies can do with that type of data. Or even worse, imagine if those are markers for other things, like, I don't know, sales habits or something.
I'm making a huge stretch here, but if that data is exposed, it's going to be a big problem for people to solve. And so cybersecurity comes down on all that. Then we have the ethics of that. There are laws in some states and some countries that require people who are having data collected on them to be notified. GDPR, I think, is kind of that but for the Internet; that's my understanding. And so will you need to have something where you hop into an automated taxi service that monitors the inside of the cab, collects data, and operates on that data? Will you have to notify the passengers? Is it something where you summon an Uber or Lyft with your phone and, while you're waiting, it says: hey, we're going to give you an automated vehicle; while you're in there, we might collect video of you, and this is what we're going to do with it. And if you agree, hit yes; otherwise we'll get you a human driver.
We don't have humans anymore. We got rid of them. And so is that something that you agree to up front? I feel like that's probably the loophole that a lot of companies are going to use: hey, we might collect data on you while you're in the vehicle.
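To make that up-front opt-in concrete, here is a minimal sketch of such a consent gate. Everything in it, the names, the types, the human-driver fallback, is hypothetical rather than any real ride-hailing API.

```python
from dataclasses import dataclass

# Hypothetical consent gate for an automated, camera-monitored ride.
# None of these names come from a real ride-hailing service.

@dataclass
class RideOffer:
    vehicle_type: str          # "automated" or "human_driven"
    records_cabin_video: bool  # whether in-cabin monitoring is active

def offer_ride(consents_to_monitoring: bool) -> RideOffer:
    """Dispatch an automated (monitored) vehicle only after an explicit
    opt-in; otherwise fall back to a human driver, GDPR-notice style."""
    if consents_to_monitoring:
        return RideOffer(vehicle_type="automated", records_cabin_video=True)
    return RideOffer(vehicle_type="human_driven", records_cabin_video=False)
```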
It'll be the latest implementation of the cookie agreement. Whenever you go to a website: we've got to place cookies on you, do you agree? Does anybody read that anymore? No, they don't. They just click go. It's a bit like Ts and Cs, the small print. Very few people read the Ts and Cs; no matter how many studies you do, they don't do it. They just click okay and go through to use the capability. Right. I was hinting at it a little bit, but I want to get into the human factors bits for the people inside the car. We talked a lot last week, I keep saying last week, we talked a lot last time about the people outside the car and how the car interacts with them. Let's talk about some of the issues facing people inside the cabin, and the state of where things are at right now in terms of cabin monitoring. You have some things, and I think this is your point; do you want to talk about this? Yeah. So there are already some basic monitoring tools on the market. A lot of the systems have a camera mounted on the steering wheel or somewhere in the cabin that's tracking the driver's eye movements, and particularly blink rates and things like that, to determine whether the person is impaired. So they could be distracted, they could be drowsy, they could be drunk, or they might just have a really weird blink rate, who knows? But fundamentally, driving is a complex, cognitively demanding activity. You're doing a whole lot of things at the same time, so you're constantly planning and replanning what you're doing. You've got to concentrate on the task itself: you're trying to get from A to B, you're trying to navigate junctions, you're trying to understand where you are in relation to other vehicles on the road, as well as what's going on inside the car. Have you got children crying? Have you got your partner talking to you? Have you got the radio on? Are you singing very loud music, as I sometimes do? And it's also not just about being in that moment; you've got to be anticipating what you're going to do next. Are you going forward? Are you going to do some sort of turn? Is somebody else going to do something that you might have to react to? So you're anticipating where things may go wrong: a light might stop you, you might have missed a junction or something. You actually have to problem-solve on the go. You've got to be able to take complex situations with lots of different things going on, bad weather and that type of thing, and react quite quickly and efficiently. And apparently you're meant to do all this quite calmly, which some people may or may not do, and that's where AI might be slightly better; it might be slightly better than me. But you've got to do things quite quickly and quite effectively, because everybody's on the road, the road is a very dynamic place, and different people have different attitudes to it. So if you're doing all of this stuff all of the time and you're tired, and it could just be the midday, post-lunch dip, or a lack of sleep and long hours and that type of thing, that affects how you concentrate. And it gives you this idea of what they call, in health terms, near misses: where you haven't actually had an accident, but because you took a last-minute diversion, you could have. It was very close. It was a near miss. Nick, can you talk a bit more about near misses?
Yeah, those, by the way, are really difficult to collect information on, because no one reports near misses unless there's some spectacular thing that happened. You watch these dashcam videos of people who pull off these amazing recoveries but don't have any damage to their vehicle, and you're like, oh wow, that could have been really bad. In those instances, we don't really collect a whole lot of data, that I know of. These types of events don't really cause injury and don't have that much immediate impact other than minorly disrupting road flow or something like that. But these near misses tend to be indicators of higher-risk individuals who might get into accidents. The more near misses you have, the better the indication of whether or not you'll get into an accident later on. You mentioned drowsiness. Near misses apparently are 14 times more common than actual accidents, and if you're thinking about that for somebody who's drowsy, that is incredibly important for getting them off the road, or getting them an energy drink or something. Think about concentration, too. You mentioned all these complex factors about driving, and there's of course the concentration part of it, where people in the car might be a distraction: if you're looking back at the baby, if you're talking to somebody in the passenger seat, if you're singing out loud, closing your eyes, those types of things play a large role in some of these accidents. The National Highway Traffic Safety Administration estimates that about 25% of all accidents reported by the police involve some sort of inattention: drivers distracted, asleep, fatigued, lost in their thoughts, et cetera. And that's really big. So now, if we start to piece all this stuff together, where AI is at right now and what the human factors issues are within the cabin, we can revisit this article and the promise of using AI inside the cabin to help the human, ultimately. I think this is where we have the discussion. What key takeaways do you want to take away from this discussion, Barry? Well, let's look at what needs to drive a lot of this. I mean, the point you were just making there around near misses: in other domains, near-miss reporting is quite a significant thing. In the workplace, you're meant to report near misses. In aviation, you're meant to report near misses. We don't do that on the road because there's no drive to do so. So if we're going to use this sort of technology, there's going to have to be some sort of policy, some sort of push, to be able to do that. In Europe, NCAP, the New Car Assessment Programme, has been rating cars since 2020 based on advanced occupant status monitoring. So what can you already do there? To get a five-star rating, car makers will need to build in technologies that check for driver fatigue and distraction. Starting in 2022, NCAP will award rating points for technologies that detect the presence of a child left alone in a car, potentially preventing tragic deaths by heat stroke, et cetera. That's kind of an aside.
The bigger point is that we've got to have some sort of policy, some driver within the car market, saying these technologies need to be developed, and they need to be done for safety reasons. So I think that's going to be quite a good thing. But can you see any examples out there of what sort of things we could do, in quite simple terms, to help drivers? Yeah, I think there are a couple that the article here actually mentions itself. If the driver is glancing at the speedometer too often or something like that, the vehicle's display screen could send a gentle reminder to keep his or her eyes on the road. That's kind of bare bones right there: you're looking at something too often, and it gets you to look back at the thing that you should be paying attention to, which is the road. You also have the other extreme, where if a driver is texting or turning around to check in on a baby, the vehicle could send a more urgent alert, or even suggest that the driver pull over to a safe space. And then, even more extreme than that, you have the system actually taking over for the driver and pulling off to the side of the road, because it's deemed the driver too incapacitated to even do that much. I mean, when you think about driving as your central task and everything else as periphery, to have an AI system jump in at that point would be kind of a big step. But there are other things you can do too. Let's say the system notices you are not looking at something often enough; it could do a subtle, gentle push to that area too. I'm thinking: you're not checking your rear-view mirror often enough, so maybe there's a subtle orange glow around it that makes you go, oh, that's a change, I need to look in that direction. What was that change? Oh, hey, I need to check my rear view. What's going on back there? And it doesn't even have to be constant; you're not instructing them to look at their rear view, but you give them a subtle cue to say, okay, hey, look at that. And if they don't do it, again, more attention to it: maybe a slightly brighter orange, or on to red, where it's like, you really need to look. So I think there are some good examples out there of what a system like this could do.
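The tiered responses described here, a gentle reminder, an urgent alert, the system taking over, amount to an escalation policy. Here is a minimal sketch of that logic; the states and time thresholds are illustrative assumptions, not values from the article or any real vehicle.

```python
from enum import Enum

# Illustrative escalation ladder for driver-attention interventions.
# The thresholds are invented for this sketch, not from a real system.

class Intervention(Enum):
    NONE = 0
    GENTLE_REMINDER = 1   # e.g. a note on the display, a soft orange glow
    URGENT_ALERT = 2      # e.g. suggest the driver pull over
    TAKE_OVER = 3         # system pulls the car over itself

def choose_intervention(seconds_eyes_off_road: float,
                        driver_responsive: bool) -> Intervention:
    """Escalate from subtle cues toward taking over as inattention grows."""
    if seconds_eyes_off_road < 2.0:
        return Intervention.NONE
    if seconds_eyes_off_road < 5.0:
        return Intervention.GENTLE_REMINDER
    if driver_responsive:
        return Intervention.URGENT_ALERT
    return Intervention.TAKE_OVER  # driver deemed too incapacitated
```

A real system would fuse many more signals, gaze region, blink rate, hands on the wheel, and add hysteresis so the alerts don't flicker on and off.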
There's also the conversation of, well, where could this go in the future, and what kind of crazy things could we think up? Barry, can you think of any crazy things we could think up in the future? Well, I'm surprised we haven't come up with the idea of the artificial intelligence ejecting the baby that's been distracting. Oh yeah, there you go. That's perfect. Cars are now all about in-car entertainment, aren't they? So whether you own a vehicle or it's a ride-sharing vehicle, why can't we leverage that artificial intelligence to deliver content based on the rider's engagement, their emotional state, their reactions, and their personal preferences? And that could vary with the type of trip you're going on: the situation, the meeting you're going to, or the family occasion you're attending. So if you're going to, say, a sporting event, the system could serve ads that are relevant to that activity. And if it thinks the passenger is responding well to the ad, then it might offer a coupon for a snack at the game. Happy customers, happy advertisers, a really focused event and focused journey. And I guess that then leads you to say, well, is it actually more of a mobile media lab? Because by observing the reactions to the content, you've got your audience out there in a fairly fixed space; you could really read the reactions. The system could offer recommendations, it could pause the audio if the user becomes too inattentive, and it could customize the ads in accordance with their preferences. So content providers could really use this to determine which channels deliver the most engaging content and use it to set ad premiums and stuff. So it could be a really proactive way to give you a media experience that ties in your entire journey. Or has that just got a bit too crazy? No, I think that makes sense. I think it gets a little questionable when we're trying to advertise to people, but in terms of content recommendations, let's say I'm on my way to a conference versus a vacation with my family. You mentioned that kind of personalization based on the context or reason for the trip; that I could see being something really cool. I'm going to the Human Factors and Ergonomics Society conference and I want to see things about the city that I'm in, because maybe I've been too focused on the conference and not enough on the destination. And, oh hey, check this out, there's something down the way here that you might want to check out, or here's some human factors history in the city. I don't know who's producing that content; if you're producing that content, let me know, I'd love to have you on the show. But yeah, it's cool. And I think the types of examples we just mentioned here are only scratching the surface, right? Absolutely. I think ultimately these types of systems can be used to make things much safer, to make systems much more reliable, and to just make transportation, surface transportation in general, much more enjoyable. It sort of keeps coming back to safety, but that's really the goal: to keep people alive when they go from point A to point B. Any other closing thoughts on this one, Barry? I guess just one. It was kind of inspired by a point Professor Paulson made on Twitter, I think it was today; I think I mentioned it the other week. Basically, we are spending a lot of time trying to make AI inside and outside the car fit into the current world, which is not made for AI. We're trying to crowbar something that is very clever into an environment that wasn't designed for it. So should we also be thinking not just about the car itself, but about the environment that it's driving in, in order to meet that safety ambition? Very altruistic, very high-level thinking there, but it just sort of chimed a thought with me that maybe we should also be looking outside the car in terms of technology. Yeah, good thoughts. All right, well, thank you to our patrons this week for selecting our topic, and thank you to our friends over at IEEE Spectrum for our news story this week. If you want to follow along: I've not been doing office hours, admittedly, but that's because news has been light. It usually gets pretty light around the holidays anyway. I sometimes do office hours, and you can find me there. And we do post the links to all of our original articles in our weekly roundups on our blog.
You can also join us on our Slack or Discord for more discussion on these stories. We're going to take a quick break, and then we'll be back to see what's going on around the human factors community right after this. Human Factors Cast brings you the best in human factors: news, interviews, conference coverage, and overall fun conversations, in each and every episode we produce. But we can't do it without you. The Human Factors Cast network is 100% listener supported. All the funds that go into running the show come from our listeners. Our patrons are our priority, and we want to ensure we're giving back to you for supporting us. Pledges start at just $1 per month and include rewards like access to our weekly Q&A with the hosts, personalized professional reviews, and Human Factors Minute, a Patreon-only weekly podcast where the hosts break down unique, obscure, and interesting human factors topics in just one minute. Patreon rewards are always evolving, so stop by patreon.com/humanfactorscast to see what support level may be right for you. Thank you, and remember: it depends. Huge thank you, as always, to our patrons. We especially want to thank our honorary Human Factors Cast staff patron, Michelle Tripp. Patrons like you keep the show running. Thank you all so much for your continued support. So we're going to do a little something different here. Normally this is where we talk about Patreon, and we talk about Patreon a lot, but one of the things that we do for Patreon is Human Factors Minute, and this is something that I like to do from time to time, because I'm a huge data nerd: I really like understanding what exactly we promised and what exactly we're delivering. So let's talk about that right now. As of the time of this recording, we have 96 episodes of Human Factors Minute available for your consumption. This is counting the TeamSeas episodes, which everyone gets for free. But our total time in Human Factors Minutes is 1 hour, 56 minutes, and 16 seconds. That is a long time. Hold on a second: 96 episodes of Human Factors Minute, so that should be 96 minutes. You would expect that, right? So let's talk about it. The average length is actually 73 seconds, so you're actually getting a little bit more than a minute on average. We should be calling it Human Factors Minute and a Quarter. We could; it doesn't roll off the tongue as easily. That's true. But let's actually look at some of the stats here. So 25 of them clock in at one minute exactly or less, and the rest are 61 seconds or longer, with 14 of those being longer than 90 seconds. Our longest one was actually the most recent one that we did, on the surface transportation technical group at HFES; that was at a minute and 59 seconds. I think any longer than that, that's kind of my absolute limit with it. We don't want to go beyond that, because then it wouldn't be a Human Factors Minute. I'm still counting that first minute; we're rounding down, that's a minute. I hate to be the bearer of, whatever, of twice as much as what you promised. Look, I'm rounding down as long as it has one: we round up anything below a minute, and we round down anything between one and two minutes. So that's how that math works out. One of my favorite episodes of Human Factors Minute still remains the one on ancient human factors history. That's available for free to everyone right now. In fact, the first ten episodes, as well as the TeamSeas Human Factors Minutes, are available to everyone for free.
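For the fellow data nerds, the quoted average does check out; a quick back-of-the-envelope calculation:

```python
# 96 episodes totalling 1:56:16 of Human Factors Minute content.
total_seconds = 1 * 3600 + 56 * 60 + 16    # = 6976 seconds
episodes = 96
print(round(total_seconds / episodes, 1))  # -> 72.7, i.e. the ~73 s average
```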
You can check out our Patreon for details on that. And one last little tease for you all: that is the only place where you can get Blake, for exclusive Blake content, right now. We're still working on getting him back soon, so bear with us on that, but that's where you can get Blake for now. That's all. It's kind of fun to jump into those metrics from time to time; it's a little extra thing. All right, well, why don't we go ahead and get into this next part of the show we like to call
That's right. This is the part of the show where we search all over the Internet to bring you topics the community is talking about. It Came From, this week: it is Reddit. If you find these answers helpful, give us a like to help other people find this type of content. All right, we've got three tonight, and they are all great questions. Let's tackle this first one first. It says: would you work at a company that doesn't use Figma? And I'm going to go ahead and extrapolate here and say we're going to be talking about not just Figma but other tools too. They write: I had an interview at a tech company. The interviewer said they primarily use Axure, since it has better prototyping and interaction features than Figma. They want me to do a design challenge for the next round. I asked if I could use Figma. They said to try and download the free trial of Axure, since that's what they use. I asked what they like better about Axure than Figma; they said they can make detailed prototypes for user testing. If it's a design challenge, and I'm being tested on my UX and critical thinking skills, shouldn't the tool not matter so much? I use Figma plus ProtoPie, I haven't heard of that one, and I find it way more efficient than Axure. Mind you, the job description listed Figma as one of the tools also. So let's talk about this from two perspectives: one, the tool's role in completing tasks, and two, what happens if a company is forcing you to use a tool, for either an interview or the work itself. Barry, take it away. Yeah, so it's an interesting one, because if they're just wanting to see an example of you doing things, then maybe the tool doesn't matter so much. But if you're wanting to go and work for a company and their toolset is Axure, or whatever the tool is, it doesn't matter; it could be almost anything across the piece. I had a really interesting discussion with a sister company of ours: they use Google for most of their office suite and we use Microsoft. It's one of those things of, well, if you want to come to us, our ecosystem is set up to use that, whatever it is. And if they're using Axure as their design tool, then why would you want to go in and basically stick something in there that says, I want to come and work for you, but I don't want to use any of your tools? You're making it harder for them to accept you. If you want to go and work for them, then you need to make yourself understand that's what they're going to use. So I see what this person is saying: you want my design, you want my UX skills, my critical thinking skills, and it shouldn't matter the way I do it. The truth of the matter is, it does matter, because if you're not willing to use the toolset the company uses, and the person next to you is willing to use it, and you can both do UX, that's going to be the deciding factor. So yeah, I think you just need to crack on and learn how to use Axure, because, while I do agree to a certain extent that the two tools have their pluses and minuses, if that's what the company wants you to use, then suck it up. But I have conflicting views on this, right? I think a company should be supportive if an employee wants to use a different tool, within reason. If a key part of their package as an employee relies on a specific tool, then I think maybe the employee should be willing to expand.
Maybe a key functionality or something is contained within a certain tool; that might be a good justification, but the employee is the one responsible for bringing up that justification. At the same time, I think you should be flexible, and I don't think tools really matter. I could use Excel or I could use Google Sheets, it doesn't much matter; I can do most of the same things in both of those programs, with some differences, obviously. Tools shouldn't matter. It's exactly what you're talking about here. And in terms of interviews, I'm a little torn on this one. I think maybe the company should be more receptive to using a different tool. Maybe they don't have a license, and especially if you're making somebody use a free trial and evaluating their skills based on their unfamiliarity with a new product, I think that's a little unfair. I think if an employer truly values somebody's ability to do the core responsibilities of the job, they should be willing to see what their work looks like in their preferred tool, and then ask them to switch over and provide the training required. You can train up on a new tool, and tools change all the time. So, I don't know, I think everything is kind of bad here. I think the point you made, and this is a bit of a distinction for me, is if you've got an employee: I've been there, where an employee has turned around to me, and actually it was with Axure as well. They turned around and said, actually, you're doing stuff this way; if we used Axure, then we could do it a lot cleaner, a lot simpler, you could do a lot of this stuff. And I was like, absolutely, well, yeah, let's try it. Give me a demonstration, let's pick a system, we'll design the system. That's one thing, I think. But here, you're wanting the job, and I have that feeling of: why, fundamentally, if the company asked me to do X as a demonstration and there are two of us out there, would I do something that puts me at such distinct odds? If I don't want the job so much that I'm willing to have a fight with the person who is not even my employer yet, then I don't want it that much. So what's the gold nugget at the end of this? You clearly want employment, and if you don't want employment, then why are you interviewing in the first place? Yeah, well said. I don't know, that's where I stand on that one. It's not too controversial: learn to adapt to new tools, and let people use their preferred tool. It's all the same, right? Completely conflicting information. Just make it happen. All right, let's get into this next one here. This one is by "idk what to put as my user" on the User Experience subreddit. It says: UX job without the title. I'm a junior with a new junior role doing user experience; however, my title doesn't sound like the typical UX ones. It sounds more related to data or marketing. Will this matter when I'm applying for more UX jobs in the future? Barry, what do you think? Does job title impact your hireability for future prospects? No, not at all. All right, we're moving on to the next one. Go ahead. Fundamentally, if you're doing stuff, you don't just put down a job title; you put down a bit of description as well. Job titles: we've seen some amazing job titles from people who are at, like, Google, and those sort of places where you make up your own job title and stuff. I think the job title itself doesn't really matter.
I think you have a little bit of blurb which will say that you do UX work or whatever it is, and that can make it a bit more intriguing. So I don't think it really matters too much. But I go back to what I've said in previous episodes about the cultural differences between the UK and US. In the UK, we typically don't have very discrete UX roles, or UX research, or UX design; people are more clumped together, you might do two or three trades, and that's fine. So there might be a bit of a difference in the US. What do you think? No, look, I've heard it both ways here, so I'll hit the "it depends" button, because in some ways, I wouldn't look away from anyone who submitted their resume for a position just off the merits of the job title. That's not something that I would do. I wouldn't go, oh, this person says they're "data analysis" and they're applying for a UX role; that's not something that happens. I would look at it, and I might scratch my head a little bit, going, data analysis, that's interesting. But as long as everything checked out on the resume, the stuff that they've written, like, hey, did this type of research, did this type of work, then it's like, okay, well, I can see what you're going for here. Your job title just didn't match your role, and it's really about what you're writing below that. So to me, that doesn't matter. Where it might matter is where, let's say this is your only job, and systems use automated sorting techniques to weed out resumes for a potentially highly sought-after position. That might be a case where you might just want to change it on your resume, such that if somebody goes and contacts the employer, they'll hear: I actually did this work, that was just my job title, and you kind of explain it away in the resume. But really, you shouldn't have to lie too much; it's fine. And like you said, Barry, I think it is actually more intriguing if somebody is coming from one of those other titles, because then it's like, tell me that story. Yeah, exactly right. Makes you unique. Yes. There you go. All right, let's get into this last one here. This one's from Gellendrail on the User Experience subreddit, on how to get projects for your first portfolio when you don't have real customers. Is there something like Frontend Mentor, I don't know what that is, or Briefbox for UX projects? I don't know what those are either. But how do you get, let's see here, data when you don't have real customers? They go on to write: I want to switch careers from building websites and doing UI designs to a full-blown UX career. I've been working in the web field for seven years now. Now I'd like to switch to the UX field and really take users' needs front and center. The agency I'm working at really isn't that professional, and I don't have great products to show, which means I have to create about four to five new portfolio projects. I'm doing the Google UX certificate course right now, and I'll have some projects by the end of it, but I want something that doesn't just look like a course project and that may look real. Do you have any ideas on where to get briefings and data for projects like this? Or do you have proven approaches for how to get such a project rolling all by myself? Any help would be highly appreciated, and thanks in advance. All right, Barry, we've tackled this question before, but I think it bears repeating. Why don't you go ahead and tackle this question.
So I guess for me, a lot of it is around this: some cool ways of doing it are to look at where local charities and local voluntary organizations might want some sort of experience. You can basically get briefs from them, and in return you put some of your time and effort into producing a great product. Not only will you get some things in your portfolio, but you'll probably make some really interesting contacts that you didn't think about making in the first place. So that will work. Also, actually making up your own project isn't necessarily a bad thing. If you can work out problems and ideas that you know could be solved in terms of web design or websites, then as long as you document what you're doing, the use case you're doing it for, and the problem that you're trying to solve, you can actually roll through quite a lot of it on your own. But certainly the best examples I've seen of this come from going down that voluntary route and finding people who need websites but wouldn't normally be able to engage with them. That kind of works for me, but I know you've got some different ideas as well, Nick. Let's see here, a couple of things. I think your points are great and I don't disagree with them at all. This topic is something that I feel pretty passionate about, and I have a nontrivial, more serious draft of a masterclass-type program for exactly this. If that's something that people are interested in and want to find out more about, I encourage you to write in to let me know that this is a great idea and that I should pursue it: step-by-step guidance for how to do something like this, to help students, or people who are looking to switch over into UX, build up their portfolios without the user data. If this is something you're interested in, reach out; I'd love to hear from you. But my high-level advice here is that the problems exist, you just need to go and look for them. And so whether that is, like you said, Barry, with a local organization, or with an established product that you feel passionate about, I think there are plenty of opportunities to look for. And when it comes to real data, there are ways to get that without faking it; it's real user data, and that's the interesting piece to me. You can go to some of these forums that are out there, like the Reddit subcommunities or anything like that, and look for problems that people have with the program, the game, the system, whatever you're wanting to use for your portfolio, and find a way to solve it. Then, in your project, on your portfolio site, you can state that it was a real issue that users needed fixed, with whatever metric you're going by: number of upvotes, days visible, or number of votes on a Zendesk platform, or something like that. There are ways of getting real user data that you can use to populate projects, and it's a great place to look for ideas if you're stuck and don't know where to go next. Anyway, that's my tease. If that is something that you're interested in, let me know. Like I said, I have a nontrivial, pretty thought-out draft; if that's something you want to see me produce, let me know, and I'm happy to do that. All right, I think we can get into this last part of the show, which we call One More Thing. It needs no introduction, so let's just get into it. Barry, what's your one more thing this week?
So my one more thing is that I was actually doing my planning for the rest of the year for my podcast, and I can't believe it's December. It's December. It's absolutely crazy, because I've basically got one more podcast episode to record this year and I'm done; 2021 is out there. So even though we've only just turned to December, we're getting into that festive season, and we're able to think about parties and things like that, and whether we can go to parties with everything else that's going on. But it's a fundamental thing: now we can start that wind-down. I've got a bit of a wind-up before, a bit of a wind-down this month, but I just can't get over that we are talking about being in December already. It's mind-blowing in many ways. Yeah, I want to jump in, because the last six to eight weeks of the year have this weird time dilation effect where it's both fast and slow simultaneously. Insane. Like, Thanksgiving was a week ago for us here in the States, and it's like, that was just last week? Wow. And it's like Christmas is going to be here tomorrow. All right. Anything else for your one more thing? No, that was it. This year has gone by in the blink of an eye. What about you? Where are you at? Yeah, well, my one more thing is going to be the same one more thing that I had last week, except now we have more details. On Friday, December 17th, at 1:00 PM Eastern is going to be the first HFES presidential town hall. So HFES is putting on this town hall, yours truly is going to be moderating the event, and it's basically an opportunity for you to discuss the latest human factors and industry news, trends, and even the state of the society if you want to show up for that. President Chris Reid is going to be there for the show, and we'll also have President-Elect Carolyn Sommerich there as well, so it's going to be a great time. Come ask your questions. We're still working on setting up the event itself; you can find it in our feed right now, and we're working on getting it linked up everywhere else as well. You can find it on any of our channels, or any of the HFES channels; it will be out there. But that is going to be it for today, everyone. Let us know what you think of the news story this week. If you liked this episode, we invite you to check out the episode that we did just last time, episode 226, where we take a look at the state of autonomous vehicles and how they interact with pedestrians. Comment wherever you're listening with what you think of the story this week. For more in-depth discussion, join us on our Slack or Discord communities, and always visit our official website: sign up for our newsletter to stay up to date with all the latest human factors news. If you like what you hear, there are a couple of ways that you can support the show. One, you can leave us a five-star review; that is free for you to do, you can do that right now, and it makes us happy when we see those. Two, you can tell your friends about us. That is also free for you to do, with a little bit more social pressure because you have to work it into conversation, but that is how we grow; that is an awesome way to help us grow. And three, if you're able to financially, it is the holiday season, so I totally get it if not, but if you want to make our holidays, you could consider supporting us on Patreon. We are two away from being self-sustainable, and we have some interesting applications that we're looking to spend the next tier of donations on.
So, as always, links to all of our socials and our website are in the description of this episode. I want to thank Mr. Barry Kirby for being on the show today. Where can our listeners go and find you if they want to talk about automated vehicles or AI inside cars? On Twitter, you can find me at Baz underscore K, and you can also listen to my ramblings on 1202: The Human Factors Podcast, which you'll find at www.dotpodcast.com. As for me, I've been your host, Nick Roome. You can find me streaming on Twitch on Mondays for office hours, and across social media at Nick underscore Roome. Thanks again for tuning in to Human Factors Cast. Until next time: it depends.
Managing Director
A human factors practitioner based in Wales, UK. MD of K Sharp, Fellow of the CIEHF, and a bit of a gadget geek.
If you're new to the podcast, check out some of our favorite episodes here!