On this episode of the podcast, we explore the potential privacy concerns surrounding wearable brain devices. Plus, we tackle some of the community's burning questions, including advice for neurodivergent researchers and tips for onboarding a new team member.
#podcast #braindevices #privacy #research #neurodiversity #teammanagement #onboarding #communityquestions #conversationstarter
Recorded live on April 6th, 2023, hosted by Nick Roome with Barry Kirby.
Check out the latest from our sister podcast, 1202 The Human Factors Podcast, featuring Jenny Radcliffe, The People Hacker:
Disclaimer: Human Factors Cast may earn an affiliate commission when you buy through the links here.
Let us know what you want to hear about next week by voting in our latest "Choose the News" poll!
[00:00:00] Nick Roome: Greetings to you. This is episode 279. We're recording this episode live on April 6th, 2023. This is Human Factors Cast. I'm your host, Nick Roome, and I'm joined today by Mr. Barry Kirby.
[00:00:12] Barry Kirby: Hey, great to be back. I was getting a bit worried about Blake taking my seat, so I couldn't let him.
[00:00:17] Nick Roome: Couldn't let Blake keep that seat warm too much, right? Hey, we do have a great show for you all tonight. We're gonna be talking about how wearable brain devices will challenge our mental privacy. We're also gonna answer some questions from the community on visiting real-world locations for user research, neurodivergent researchers, and onboarding a new person to a project.
Let's get into some programming notes before we get into anything else. First thing I'd like to talk about is, hey, we have coverage from the Healthcare Symposium, the Human Factors and Ergonomics Society International Symposium on Human Factors and Ergonomics in Health Care. Okay, that's a long name. We have coverage on it, and it's coming out. We are just waiting on some final release forms, for everyone to be okay with the final releases. We do have one piece of coverage out already, and that is Safe and Effective with Heidi Meza. And if you're unaware, we have just recently announced Safe and Effective, a new human factors podcast that's part of the Human Factors Cast Network of Podcasts. It's going to be a great show. Heidi and I have been working on this thing for months, years, so be on the lookout for that. As for the other Healthcare Symposium coverage, you can look forward to hearing about the workshops that happened there, we check in with the Student Design Competition winners, and we talk about navigating challenges in human factors in healthcare.
And we also do some conference and journal updates with some old friends of the pod. But Barry, I am dying to know what's going on over at 1202.
[00:01:53] Barry Kirby: So 1202 Live is the interview with Jenny Radcliffe, The People Hacker, who talks to me about her career in social engineering and pen testing and how she got into it, all the way from the early days through to what she's doing now, being a world-renowned keynote speaker and TED talker.
Her new book has just been released and is available on Amazon, and it's actually a really good read. We get into all of that. But coming up on Monday is an interview about naturalistic decision making with Rob Hutton. That is gonna go live on Monday, and it is also thoroughly recommended.
[00:02:31] Nick Roome: Ooh, looking forward to that one. All right, we know why you're here. You're here for the news, so let's get into it.
Yes, that's right. This is the part of the show all about human factors news. Mr. Barry Kirby, what do we have this week?
[00:02:44] Barry Kirby: So this week we are talking about wearable brain devices that will challenge our mental privacy. The rise of brain-tracking devices in the form of earbuds, smart watches, headbands, sleep aids, and even tattoos could revolutionize healthcare and transform our lives. According to this article in Scientific American, the possibilities are endless, from improving our ability to meditate, focus, and communicate, to offering personalized treatments for depression, epilepsy, and cognitive decline.
However, the proliferation of these devices also poses risks to mental privacy, freedom of thought, and self-determination. Employers increasingly seek out neural data to track worker fatigue levels and personality traits, and there are also reports of Chinese employees being sent home if their brain metrics are less than optimal.
Governments also seek to access our brains, raising questions about individual privacy and autonomy. As brain wearables advance alongside artificial intelligence, the line between human agency and machine intervention is becoming blurred, prompting concerns about the offloading of mental tasks to AI and the erosion of independent thought. The article makes a call to action to ensure that the neurotechnology revolution benefits humanity rather than leading us into an Orwellian future of spying on our brains. It advocates for prudent vigilance, an open and honest debate about the risks and benefits of neurotechnology, and the preservation of individual cognitive liberties.
Nick, what are your thoughts? Are you able to give us your own true thoughts, or are you being brain-hacked by AI to tell us what the technology wants us to think?
[00:04:27] Nick Roome: Probably that one. So here's the thing: yes, we do need to be careful here, and I think that's the bottom line. Let's be careful. I am still dubious about brain-computer interface technology as it stands today. I'm not denying that it will be a thing in the future, and I'm especially dubious of the ones that are connected to online systems that are tracking information and sending and receiving data to and from your brain.
It's all very, what is this going to do? Is it going to be very similar to how we treated the AI uprising just in the last couple of months here? I think this article brings up some really good points. I think it's alarmist in a lot of ways, but maybe we can actually think ahead of the curve when it comes to brain-computer interfaces, unlike we did with AI.
That is a potential thing here that we can look at. AI rose very quickly, and there are a lot of ethical and privacy concerns when it comes to AI. Are we gonna have those same questions when it comes to BCIs as they eventually become more mainstream? That's a question I have. I think there are some interesting, just high-level things that we can point to.
Like, once we start communicating brain to brain: the speed at which we communicate right now is through verbal communication. You and I are thousands of miles apart, but we are communicating right now. What happens if I can communicate with your brain in such a fast way that we don't even need to have a physical conversation, and that information can be transferred in a millisecond?
And then if we're just transferring information back and forth, where does my brain end and yours begin? And if we're doing that with not just each other, but everybody else that we're interconnected with, and they're doing it with everybody else that they're connected with, where does our brain start and end? And where does the hive mind begin? These are some really high-level, scary things to think about, but those are just my initial thoughts. Barry, where are you at with all of this?
[00:06:37] Barry Kirby: It just shows that, well, we've spoken about brain HCIs or BCIs before, even though we couldn't necessarily find which episode it was. I've said on more than one occasion that we've spoken about this before in the context of Elon Musk, because he was trying to drive a lot of this, at least 12 months ago or whatever. And we'd said, I'm quite up for the idea of BCIs, but not if he's doing it. I don't want him in my head. We were joking around to a certain extent, but it shows that the technology must be getting developed in a more serious way, because now people are concerned about it.
It's now got to a level where, well, I haven't seen it, so I can't comment on it directly, but it must be getting to a level of maturity where people are like, actually, there's something here. What's interesting about the scenario you just painted, around passing that data between us: one of the big things I've always been really aware of when developing HCIs is mental workload. At the moment you can manage your own mental workload; if somebody's given you too much information, you can stop and say, can you just wait a minute, let me process what you've just told me.
Or, look, I'm getting lots of information coming in from lots of different sources. You can switch some screens, you can manage your inputs that way. If we are connected on that level, how do you switch it off? How do you possibly manage the flow of data, and how do you not just end up in a sort of wibbling mess on the floor with all of the information coming through?
Fundamentally, I think it's exciting. There's a balance here between policy and being terrified about what it's gonna become, and therefore stopping the development in a way we possibly don't anticipate, with unintended consequences. But equally, like you alluded to with the rise of AI, I still don't think it's as big a deal as people are making out, because the way we've described what it's doing at the moment, it's more about the abuse of it than what the AI itself does. And I think this is a similar thing. It's a technology that needs to be managed, and so we'll see how it develops. I'm quite excited, though.
[00:08:59] Nick Roome: I wanna comment on one of the things that you brought up here about communicating something and mental workload. This is a very interesting point, because right now we are limited in the amount of information that we can send over. Like I said, I am communicating to you via a verbal modality.
Right now, if I were communicating to you through brain signals, Barry, I would not only be able to communicate ideas and thoughts, but I'd be able to communicate how I'm feeling about a certain thing. And not only would I be able to do that, I'd also be able to read things, right? I'd be able to read how much your brain is taking in right now, and whether or not I should push that information. And all this would happen in a split second. So it's very interesting from that perspective when you're trying to think about this whole communication piece. Again, we're looking at more of the abuse of this technology, and I think we should focus on that for the rest of this episode. But I do wanna mention that it's going to be a very interesting piece once we start communicating brain to brain. That's it.
[00:10:08] Barry Kirby: But I guess there's a massive assumption here, which is something I'd like to drill into before we get into how it can be abused: how do you know that you think the same way as I do? Because I know that we're having this discussion now, and I'm communicating my thoughts, but I'm also reading the notes that are going on either side, keeping an eye on the comments that people are making online. So how do you know that the bit you're chiming into with this brain interface is the bit that you actually want to hear? You could also be hearing that maybe I need to take a drink at some point, and that's got nothing to do with what it is that we're trying to communicate.
How are we meant to monitor and manage the information and how it flows? Do you just get a splurge of everything? This is where I think we need a better understanding of the technology in order to better understand what the impact is. Because if it's almost a Vulcan mind meld type thing, just to bring the Star Trek reference in, where you are literally sharing the experiences and, as you say, emotion and all that sort of stuff, that's a very different proposition to telepathy, a telepathic conversation. Because even in a telepathic conversation, you are still monitoring what's going on in your head. So yeah, is this an all-or-nothing technology? I think that will have a big impact on this idea of privacy.
[00:11:32] Nick Roome: Yeah, I agree. I wanna get into some social thoughts here, because there are some interesting points that some of our patrons and lab members have made on this. Let's take this first point here by Alex: this is horrendously concerning. Is this being used to support staff or to be punitive? Is PTO offered to those who are sent home? And this is with respect to the Chinese employees being sent home if brain metrics are less than optimal.
I think this is an interesting question, because these are metrics that we have now. If we can put some brainwave-reading device on people's heads, or even measure physiological data, which we can do right now, that is another metric to track against. What is that threshold? What does all this mean? I don't think in China they are getting PTO for this. This is probably a performance-related thing: you must maintain a certain level of performance on this job. Is that right? No, but I think that is what's going on in this case.
Barry, sound off.
[00:12:38] Barry Kirby: Yeah, again, it's ironic, because just this week I've been talking around employee wellbeing, helping a research study by being interviewed on this. And it's: how much is punitive, how much is supportive? Because if you know that your staff are stressed out, this could be a way of understanding their level of stress.
If you're not looking after an employee's wellbeing, and this is my perspective as a small business owner, then, well, if I can look after my staff's wellbeing, they're actually better on the job. If we know that you've got the optimum amount of stress, you've got the right sort of drive, you're feeling good in yourself, everything's great at home, you are gonna perform in your job. Brilliant. That's in my interest to do. Whereas larger businesses don't necessarily take the same sort of approach. So I think this idea of could it be supportive or is it punitive basically comes down to what that company driver is. Is it bums on seats, every hour on the hour, at the right level of optimization? And if they are sending you home, is there then somebody waiting in the wings, at their optimum, ready to come in?
So yeah, I think that's an interesting idea, and different companies and different societies will take different approaches.
[00:13:58] Nick Roome: Yeah, let's just look at some of the companies that are trying to develop this technology, right? We mentioned Neuralink already, but you also have major tech companies like Meta, Snap, Microsoft, and Apple. They are all looking into this type of technology, which is scary when you think about all the data that they already have access to.
Can you pair that up? I don't know. I'd imagine you could, and find some really interesting things about individuals based on their brain signals, and that again triggers some privacy concerns. What can you glean from it? You can already glean a lot of information about somebody based on the trackers that are on websites and the programs that you use, and there's a lot of data already out there.
And that aggregation of data is what gives you away, not necessarily any one individual piece. It's the sharing of that information across all these things. So if you are sharing brain thoughts with advertisers, imagine that: you've thought about a very small addition to your kitchen, and now you're seeing ads all over the place for that product because you had one little thought about it. That's the kind of danger that we're in here. These targeted ads are one aspect of it, but there are also larger concerns with respect to things like freedom of thought, and that's a really big one.
[00:15:22] Barry Kirby: Yeah, you're absolutely right. I always have a wry grin when we talk about targeted ads, because actually I think targeted ads are a good idea. If you can have something that is focused around what it is that you are actually looking for, in principle, why would I want to see adverts for things that I'm not looking for?
However, it's that whole piece around how they got that information in the first place, which is the sneaky bit. And this is very much around that, isn't it? Because it's not only what have you been thinking about, but is it unintended or maybe intended? It's that consequence of the people around you knowing that you've been thinking about whatever it is. So if you are sat there going, oh, I just want a new car, then suddenly it comes up in front of your partner that you want a new car that you can't afford, and it could even be something worse than a new car. That's a problem.
But let's look at the positives of this to a certain extent as well. I think we've spoken on this podcast before about how our understanding of mental health and mental health issues is still woefully, almost stone age in the way that we deal with it. If we can use these BCIs to better understand mental health, then that's gotta be a good thing. And it could just revolutionize healthcare as a whole. If we can get a better understanding of not only what is wrong and what people's symptoms are, then surely that could make the healthcare domain a lot more reactive and give healthcare professionals a lot better insight into not only what people are saying is wrong with them, but what is actually wrong.
[00:17:05] Nick Roome: Yeah, I think that's true. That is true. But here's where I'm at with this: we all have intrusive thoughts. We don't act on those impulses, many times, I should say.
[00:17:21] Barry Kirby: I was gonna say, I've seen the amount of Star Wars stuff behind you.
[00:17:24] Nick Roome: Okay. Okay, fine. Sure. So look, we all have intrusive thoughts. If I'm sitting in a therapist's office or a doctor's office and I have a thought like, am I going to die? I might as well just end it now. Very fleeting. I'm not suicidal in any way, but that's just a thought. Oh, I have six months to live, why am I even bothering? And that flags them to say this person needs a little bit of extra mental assistance.
That could be true. We need to find what that threshold is for those types of intrusive thoughts, and we need to protect people so they feel they can have those intrusive thoughts, or thoughts that are not acted upon, because there is this concept of freedom of thought. You think about things without being prompted sometimes. And here's a dramatic example: overthrowing the government, right? Is it going to be illegal to think about that? What if it was just a fleeting thought and you're not serious about it, you're not gonna act on it? I'm getting us into some really dicey territory here in terms of content, but I'm just using this as an example, because what if you have these weird thoughts? I don't even wanna give examples,
[00:18:50] Barry Kirby: Yeah,
[00:18:51] Nick Roome: but.
[00:18:51] Barry Kirby: So they don't even have to be fleeting thoughts. You can still believe some things and not act upon them. There might be certain people that you just don't like, but that doesn't mean you go up to their face and say, I don't like you, blah, blah, blah. You manage the situation accordingly. Look at politicians, full stop. They're very good at this, whether you support them or not, whatever party they're in, but politicians generally manage the way they interface with people all of the time. And if you don't have the ability to put those barriers in place, then maybe you'd be better at electing representatives you really knew. That would be really interesting.
But then there are also the companies that you are working with, the companies that you're engaged with. Again, from the social thoughts, Alex highlighted that there are companies out there that already incentivize wearables and get you to send your data to their internal systems, so if you add 5,000 steps to your watch monthly, you get gift cards. And here in the UK, and they're probably about in the US as well, there are health insurance companies where, if you go to the gym and your watch sends in the data to show that you've done a certain amount of exercise in a month, you get cheaper health insurance.
So will this also go that way? Where does it start? Where does it stop? At what point? You mentioned earlier about us being able to think and then use the single line of communication, i.e. voice, and body language as well, you could argue. But you get the opportunity to think about what you need to say before you say it. And sometimes I find it very useful to think about it before I say it, because I tend to drop myself into awkward situations. So the ability to reflect is a really good thing. If you had no filter between those, the dangers there... yes, it does...
[00:20:47] Nick Roome: It'd be very dangerous. And Alex also brings up in the chat, thank you for mentioning intrusive thoughts, and also that this could be very dangerous for people with severe mental illness, because there's a lot going on in everybody's heads. When we look at people with severe mental illness, it can definitely be challenging to experience some of those thoughts when, again, maybe they're not going to act upon them, or maybe the intent is there to act upon them. And how do you tell what the intent is behind somebody when they want to act upon a certain thought?
This, I think, really highlights the concerns around the right to protect somebody's thoughts as it relates to brain-computer interfaces. How do you weed that noise out from the intent if you are using it to control something, right? You are a factory worker and you are attempting to harmonize with some robotic component within the factory, and you are controlling it via your brain. How does it parse out all of the other stuff that's going on in that brain as you are trying to control that thing?
Is it also monitoring and downloading that data, parsing that data, and understanding what's going on with you in that moment? If you're distracted, if you're thinking about things at home while you're at work, is that going to impact your ability to communicate via BCI with that robotic arm? I think what all this really comes down to is the need for a brain-computer interface Bill of Rights, if you will. There's this AI Bill of Rights that everyone's talking about, and I think we need something similar for brain data, because these are the issues that arise when you start to think about everything that's being collected and everything that could potentially be misconstrued because it's a data signal coming from your brain.
[00:23:08] Barry Kirby: So what happens, then, with your BCI Bill of Rights coming in right now and restricting the technology development? You are almost neutering it before it gets there. Before we actually understand what the capability is about, you are already saying no. Do you think that's a problem? Do you think there's an issue there, that we might not...?
[00:23:36] Nick Roome: I think it's the right approach. I think we should have done that with AI. Just to use AI as an example, there's been a massive call for a pause from a lot of top AI developers, and I think a lot of this is probably for competitive reasons, that GPT is so far ahead of them that they need six months to catch up. But there's still this call for a pause, and the reason cited for the pause is that we need to think about the implications of artificial intelligence before we start implementing it everywhere. So I think thinking about these problems, or at least defining the boundaries, and I don't think that would necessarily limit the technology, Barry. I really don't.
Because if we establish that we all have freedom of thought, that is a right that we cannot overstep. Treat it like guns in the US: you have more protections there than women's choice, right? No, I'm just saying, think about it that way, because that is how we need to think about our freedom to think what we want to think, and not have that be fed to malicious actors, or have some sort of consequence for having intrusive thoughts or having a mental illness in your head because you've thought something. I think establishing boundaries ahead of time is good, because then it allows us to put those limitations on the devices that we're building, and it forces us to think about how these devices are going to interact with us, given those rights that we want to establish. So I think it's the right way to go.
[00:25:28] Barry Kirby: So, just playing devil's advocate, because I largely agree with you: given that we don't actually know what constitutes a thought legally, what is a thought? What gives you a basis for an actual thought as opposed to a fleeting thought, as opposed to a grounded thought? Are there different types of thoughts, and how are they represented? How is a thought actually stored in the brain? How can you prove that you had a thought in the first place? All those sorts of issues. What is the definition of a thought?
Because this technology, or our understanding of how this technology works, I feel could just, with the best will in the world... I've seen lawmakers and policymakers, and we've spoken about this before, in our national institutions, who know nothing about what they're talking about, have no, I was gonna say intelligence, that's probably the wrong word, or maybe the right word. You've seen the way that they interviewed the likes of Mark Zuckerberg and people like that, with just inane questioning.
Imagine, with something like this, them trying to create policy around something they truly don't understand, just wanting to score a political point, and that will kill off any of the benefits that we've described tonight, around the ability to treat people's mental and physical health in a better way. So I think you're absolutely right, there has gotta be some protection, something around that. I just don't think that we are mature enough to articulate it.
[00:27:08] Nick Roome: If only we could access our own thoughts. But there are also risks when it comes to these types of things too, because even if we establish a hard boundary to say these types of thoughts get stored behind secure data centers that should not be accessed for malicious purposes, then you introduce this whole cybersecurity issue of what happens if a malicious actor actually gets into those data stores and processes those thoughts that we've established should not be public. Should we then, as a society, really a Western society, I should say, determine that all of these thoughts cannot be stored? I think that is the right way to go.
Certain thoughts, and you're right, categorize the thoughts, certain thoughts should not be stored in a data center anywhere, because that would be trouble. At the very least it would introduce a lot of privacy concerns. It would get rid of the issues when you think about hacking or malicious use, at least hacking of the data centers where that information is stored.
I think there's a lot to think about here, pun intended. But when you think about all these major tech companies, they want your data for various reasons, and we bring up the different applications that this could have in healthcare. I think if we treated it very similarly to how we treat health data, without the breaches, then...
[00:28:45] Barry Kirby: say would, yeah. Okay.
[00:28:47] Nick Roome: Without the breaches. And that's always gonna be an issue. I don't know what else to do about that other than...
[00:28:52] Barry Kirby: But again, we don't stop building cars because of the risk of car theft. We don't stop building supermarkets because of the risk of shoplifting. I think it's absolutely something we need to be aware of, and cognizant of, and engineer for; it should be part and parcel of our development. But I also think we don't stop developing just because of the risk, otherwise you'll never fix it, it'll never get there.
Again, the interesting thing is we've spent a lot of time thinking about what happens if these thoughts get out in public and that type of thing. But actually I think there are some really interesting human factors issues here around how you turn it on and off. What are the processes involved in communicating, in establishing one-to-one communication, not only between people but, as you said, if you're controlling something, how does that happen? You used the example of a remote arm, but what about an airplane? How could you know, if you are flying something, I was gonna say a fast jet, but even just a passenger airliner or a freight thing, and we talk about the obviously intrusive thoughts, you're flying along and then you're, oh, that's a pretty cloud... Because, well, you can tell just by the way I talk, I rarely stay on track with one thought process from start to finish. So the interesting bit about developing software that is going to deal with that is gonna be incredible.
[00:30:21] Nick Roome: Yeah, I agree. I think you're right. There are plenty of domains in which this could be mission critical. You have military applications, don't shoot, I'm just saying, this as a weapon of war could be really dangerous. You also have government uses where it could really raise some important ethical and privacy concerns when it comes to individual rights. Are we allowed access to our public servants, our elected officials? Are we allowed access to their thoughts? Or do they have that freedom too? Should they have that freedom? These are all questions that we should be thinking about.
[00:31:00] Barry Kirby: Oh, just with the government, what about your justice system? Your person sits there: are you telling the truth? Let's hook you up and find out.
[00:31:09] Nick Roome: Hook you up and find out. And then there's also the exacerbation of some of the inequalities that exist right now in society. You could have further discrimination based on how people think, if that information is made public. Can you imagine a system that shocks somebody every time they have a racist thought? I would love that personally, but you think about it, right? People, I hate to even say it, but people should have that freedom of thought, and that is what we're talking about here, right? We can't shock somebody every time they have a racist thought or every time they think about skinning a cat or something. These are things that, I hate to even say it, but you should also have the freedom to think about: man, what would my life look like if I had a million dollars? Something completely innocuous, something completely unrealistic. But at the same time, you're not gonna do anything with those positive, well-intentioned thoughts, so why would you do anything with the negative thoughts either? It's a very tricky, dicey question. And again, who has access to this? All this stuff is just insane, right? I can barely wrap my brain around it. I think I am experiencing some workload overload here,
[00:32:23] Barry Kirby: Maybe
[00:32:24] Nick Roome: Barry,
[00:32:24] Barry Kirby: to be able to jack into your brain and just get those thoughts in a pure state and put them out.
[00:32:29] Nick Roome: Exactly. Podcasts could be communicated in half a second in the future. Barry, do you have any other closing thoughts on this article before we...
[00:32:36] Barry Kirby: Oh, I think, bring this on. I can't wait to see how this develops. Yes, it's scary, and again, it's analogous to the AI thing: we're on the brink of something that we don't necessarily understand. But isn't that exciting?
[00:32:51] Nick Roome: Ah, Kirby, I don't know. I'm down with the AI thing, but when it comes to getting in my brain and understanding my thoughts, I'm a little more hesitant, and maybe that's just me. I don't know, how do you all think about it, though? That's what I wanna know. Thank you to our patrons and everyone for selecting our news topic this week, and thank you to our friends over at Scientific American for our news story. If you wanna follow along, we do post the links to all the original articles in our weekly roundups on our blog. You can also join us in our Discord for more discussion on these stories and much more. We're gonna take a quick break, and we'll be back to see what's going on in the human factors community right after this.
Oh yes, huge thank you as always to all of our patrons, but we especially wanna thank our Human Factors Cast All Access patrons like Michelle Tripp. Patrons like you truly keep the show going. And if we're gonna read this dumb commercial, cue funky music. I don't have funky music. Hello, listeners.
Do you love our podcast so much that you wanna become more than fans? Fans with benefits? You can always support us on Patreon. Just a buck gets you in the door. But what exactly does your generous support on Patreon pay for, you ask? Allow me to enlighten you with all the nitty gritty details.
Firstly, it covers the monthly hosting fees. It takes cold hard cash to keep our podcast accessible for your listening pleasure. Next up, annual website domain fees. Who knew websites have to pay for real estate too? But it's necessary to keep our little corner of the internet up and running.
Let's not forget about the annual website capability fees. Let's just say we need some extra magic behind the scenes to keep things running smoothly for all you lovely listeners. Patreon also helps us pay for programs and automation behind the scenes. Yep, we like it fancy and efficient at the same time.
And last, but certainly not least, your support on Patreon helps us get products and services that assist with our audio and video production. Because let's face it, we may love talking, but we also want it to sound as fancy and as polished as possible. And the cherry on top: we get to livestream to help us reach more people on different platforms.
That's right, we're not just a one-platform wonder. So listeners, let's become fans with benefits. Become a Patreon supporter now and help us continue to produce the fun and informative podcast that you know and love. Trust me, we'll make it worth it. Alright, let's get into the next part of the show. Yes, this is the part of the show we like to call It Came From. This is where we search all over the internet to bring you topics the community is talking about. If you find any of these answers useful, wherever you're watching, give us a thumbs up, or whatever it is, to help other people find this content.
We have three up tonight. The first one here is from the user experience subreddit. This one's by sank. They write: Any user researchers here that regularly visit facilities or factories or shops to do street interviews? Are there any user researchers who visit real-world locations for their research work? I want to learn about the experiences of people who conduct research outside of their offices, like in hospitals or factories. I feel tired of being confined to my home and want to know about the real-life experiences of people. Can anyone share their industry experiences and what their day-to-day work looks like?
Barry, what are your thoughts on this question?
[00:36:04] Barry Kirby: Isn't that the day job? How can you be a user researcher and not actually go and talk to... I must be missing something, because to answer the question, I go and talk to all my users, either where they're at or where I'm at, and that includes everything from people across the defense industry and round and about. How could you do all of that just in your own office? I don't understand.
[00:36:33] Nick Roome: I understand
[00:36:34] Barry Kirby: Please enlighten me.
[00:36:36] Nick Roome: If you work in software, this is actually very easy to do. It's very easy to just get on a call with somebody who uses your product. It's very easy to send out a survey. What they are talking about, if you're unaware, is a contextual inquiry. It's fairly common in many domains. Ethnographic analysis, I think, is another name that it goes by. Anyway, this is fairly common, actually, and so seeing this question is really surprising. But it highlights for me the difference, in some ways, between user experience and human factors, where some user experience researchers will go out and experience a product's lifecycle from start to finish, especially if it has other physical components to it, like using a device in conjunction with software.
This is frequently done, and you need to see it in context, hence the "context" in contextual inquiry. This question seems really junior to me; they may have come into UX research from a design background or from another background that doesn't have that history in engineering or psychological research.
And so I do wanna highlight this, because it seems like a fairly basic question, but yes, this is something that you should be doing, and you should be advocating within your company to go out and understand how the product is used, and not just the product, but also the end-to-end lifecycle of what a user is doing in their workflow. That helps put the product that you are developing into context with the rest of everything else. I don't know, do you have any other thoughts on that one, Barry?
[00:38:25] Barry Kirby: It's interesting, because again, on the whole junior piece, I often find that I got to visit more interesting things when I was a junior than now that my career has evolved; I'm more now sending out other people to do it. For me it's part of the job, it's part of the exciting bit of the job, where you get to go and see people actually doing their stuff. So yeah, if you're not doing that yet, then just bang a drum and go and do it, because that's where the fun lies.
[00:38:52] Nick Roome: Yeah, agreed. All right, this next one here is by Insatiable Writer on the UX research subreddit: neurodivergent researchers advice. Hi, I'm a UX researcher recently diagnosed as neurodivergent. Any tips from fellow ND researchers, especially those with ADHD and/or autism, or strategies that have helped you in your work?
Barry, you have a lot to say about this one.
[00:39:17] Barry Kirby: I think this is really topical for us as a family, because we are learning more and more about this and perhaps how it affects us. Now, I've mentioned this before, but generally as a family we've home educated our children, and it's not until our eldest children have gone into mainstream, or more mainstream, education, university and stuff, that we've realized things around ADHD and other neurodivergent issues.
And it's the fact that we see these as a problem. But actually, for us, it's more about what are the superpowers that they give you, which I think is more interesting. We always tend to talk about ADHD and autism as: how do we cope with it? How do we try and make you normal again? And for me, that's utter rubbish, because the people who have ADHD, or are maybe somewhere on the autism scale, or however we classify it, it just means that you've got different skills, you've got different ways of thinking about things, which actually give you different advantages, because you process things in different ways. So for me, it's not about how we cope. I quite like the way this has been put, which is about how it has helped. And for me, it's about having a better understanding: once you understand how this affects the way that people work, and therefore what makes them work better, then you can actually scope the work around how they work.
So if they're better at doing maybe some more analytical things, or they only work really well at a certain time of day or something like that, then work to that. Fundamentally, work to the superpowers, because everybody's got them, everybody's got their own ways of working, and it's just different flavors.
But it is interesting, I think, that our understanding and viewpoints around ADHD, autism, and mental health in general are evolving actually more rapidly than I'd give us credit for. Because it isn't that long ago that we'd be looking down on ADHD with a certain level of disdain, whereas now I think we are much more grown up in terms of recognizing what it is, and that it's not a disability, it's a thing, it's an actual something. It's not something to be looked down on anymore. Nick, what do you think?
[00:41:40] Nick Roome: Speaking as someone diagnosed with ADHD, it comes in many flavors. There's the executive dysfunction, the distraction, the hyperfixation. They all come and go, and there are various cocktails of mixtures of all those things in people diagnosed with ADHD. And I think, like you said, Barry, use them to your advantage in whatever you're doing. For me, one of the things that I found out about myself is that I work most efficiently when I can block out distractions. So between the hours of 12 and 3:00 AM, when there are no news cycles going on, there's no kid awake, my wife's asleep, I can solely focus on the things that I need to focus on.
That's focused time, and I'm fully aware that I can hyperfocus from 12 to 3 and get something done. There's also the distraction element, and I experience a lot of this throughout the day, where, you know, even in my day job, focusing on something like user interviews is really hard.
So I try to record things if I can. When you're in a space where you can't record things, like a classified space, it makes things very difficult. But having a dedicated note taker means I can focus more on things like the flow of the conversation rather than the content. I am aware of what's being said, but I am not committing any of that to memory.
So these are some of the tricks that I've picked up along the way. It is a best practice to have a note taker anyway, but planning for the things which you know are going to be more difficult for you, or that actually, like you said, Barry, are superpowers in some way, like that hyperfixation, planning for those types of things in the way that you plan your work is what I've found to be really helpful. Now, I will mention that anyone interested in this topic can go and look at the Reddit post further. There's a lot of really great advice in that thread, and I highly encourage you all to go look at that. Let's get into this last one here. This one is by Kava on the user experience subreddit.
They say: I'm a lead designer at a consultancy firm. I'm onboarding a new person to my project. What's your go-to way of doing it? Hi, as a lead UXR, how do you efficiently onboard a new member to the team? I'm currently onboarding a new colleague to our project and I'm looking for some tips and advice to make it a smoother process.
Barry, what do you think?
[00:44:15] Barry Kirby: So firstly, have a process; that helps. Generally, if you're part of a larger company, HR will have a process, but just because HR has a process to get them into the company doesn't necessarily mean that's the best way of getting them onto your project or within your team. So I always advocate that within a team and within a project, you have an onboarding process.
And that should just be an overview of what the project is. So if we're talking about a project, as per the question, then have two or three slides on what the project is about that you can hand to anybody who joins the project, and make sure you keep that updated. Have the key list of contacts up to date and available. This is where I actually find having an intranet site for projects really useful, because you have an information page, an FAQ type thing, and I do that for all projects, no matter how large or small they are. So that helps new people come on board. But fundamentally, it's never smooth, because different people have different needs, and they're coming into a different part of the project. They'll have different anxieties. Somebody joining a project new is going to be anxious about something; you're bound to be, because you're stepping off into the unknown.
Some people might be worried about the process, some people might be worried about the customers, whatever it is: imposter syndrome, all that sort of stuff. So whilst you have your process there, don't forget the person. Talk to them all the way through it. Certainly within the first two weeks, make sure you have constant catch-ups with them. Ask them how it's going, and ask them what they think they need to be able to do their stuff. And the last bit we always try to do is write it down: at the end of their onboarding, try and get them to write down what they thought went well and what didn't, and use that feedback for the next person, because you just enrich that process.
Nick, how do you do it?
[00:46:05] Nick Roome: I am actually gonna use this as an opportunity to plug the lab, because we've been going through some updates to our onboarding process recently, and this just felt very topical. So here's the thing: we have a lab onboarding checklist that we look at, and first things first, get them access to everything that you use. That is the baseline, but you're right, there's more to it than that. When it comes to each person, you're going to want to have a one-on-one meeting with that person to talk about their goals, responsibilities, and expectations for working on any given project. You're also going to want to give them tasks that work well with their skills, that are relevant to their experience, and that allow them to build confidence in their role.
You don't wanna send them off the deep end right away. You want to do a slow roll into working on these projects. And just me personally, I've been working on a lot of stuff behind the scenes at the lab to make sure this is a seamless process, all the way from automated workflows: you have an automated email that comes out with, here's some next steps, here's everything that you need to do. Like I said, that onboarding checklist, making sure that the workflow of that checklist accurately matches how information should be revealed and exposed to the new person in the lab. There's a lot that goes into onboarding, and it can go very wrong. Barry, like you said, it's never smooth. Just know that frequent communication is a good thing, and I'm not always the best at that, but I think that is what's going to help in those cases when onboarding new folks.
Yeah. Okay. One more thing. That's it, one more thing. What's going on, Barry?
[00:47:55] Barry Kirby: Talking about automation, I've been playing with Power Automate, which is Microsoft's tool, part of the Microsoft 365 ecosystem. And I've never really touched it until this week, because you just don't have the time to sit down and explore something new. I had a lot of deadlines this week, and as is the case, and we've just been talking about neurodiversity and stuff, whenever I have a lot to do, I certainly find I need to do everything else at the same time.
And so I lost pretty much an entire Wednesday morning playing with Power Automate, and actually it's now getting to the point where it's doing some useful stuff that I wanted it to do. Just on that last question: whenever we onboard a new project, I always create a brand new project space in SharePoint for it to use, and I had a templated version of that, which kind of worked. Whereas now, with Power Automate, I found I could do it more efficiently, update my SharePoint list, and create links. But the reason it's my one more thing, really, is not just because I've been geeking out on some coding; it's that it's yet another example of a tool that is nearly so good that it's bad, because it gets things just about right, but I'm still gonna have to go back and tinker with it. As an example, it will create my SharePoint space, and it will create a SharePoint group, well, an Office 365 group now, but it won't automatically link it to Teams and create a channel. I have to then do that extra bit, even though that should just be a thing.
I then get it to update a SharePoint list, which has a link to all of my live projects and assigns each a status and so on. But for the link it gives, normally you'd have an alias for it, so it can be just a simple name with the hyperlink living under it as your underlined, highlighted thing. It doesn't do that. It won't allow you to put an alias in, so you get the entire link in there, and I'm gonna have to go back and put the alias in, and it's just frustrating. The amount of Googling I was having to do, until I hit a moment of realization that I could just ask ChatGPT. So now I'm in another loop of trying to use Power Automate and then using ChatGPT to teach me how to use Power Automate. And it just feels like a really AI-overloaded world that perhaps has maybe lost control, and we should have some policy around it to stop it taking over everything. Anyway, Nick, what's your one more thing?
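For anyone who would rather script the steps Barry describes than click through Power Automate, a minimal sketch against the Microsoft Graph API might look like the following. This is not from the episode: the project name, channel name, and token handling are illustrative assumptions, and it presumes an Azure AD app registration with Group.ReadWrite.All consent.

# Minimal sketch (not from the episode) of the linking steps Barry says
# Power Automate skips, done directly against the Microsoft Graph API.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "..."  # placeholder; acquire a Graph token via MSAL or similar
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"}

def create_project_group(display_name: str, mail_nickname: str) -> str:
    """Create a Microsoft 365 group for the new project space and return its id."""
    body = {
        "displayName": display_name,
        "mailNickname": mail_nickname,
        "mailEnabled": True,
        "securityEnabled": False,
        "groupTypes": ["Unified"],  # "Unified" marks it as a Microsoft 365 group
    }
    resp = requests.post(f"{GRAPH}/groups", headers=HEADERS, json=body)
    resp.raise_for_status()
    return resp.json()["id"]

def teamify_group(group_id: str) -> None:
    """Create a Team backed by the existing group (the linking step done by hand on the show)."""
    body = {"memberSettings": {"allowCreateUpdateChannels": True}}
    resp = requests.put(f"{GRAPH}/groups/{group_id}/team", headers=HEADERS, json=body)
    resp.raise_for_status()

def add_channel(team_id: str, name: str) -> None:
    """Add a standard channel to the new team."""
    body = {"displayName": name, "description": "Project working channel"}
    resp = requests.post(f"{GRAPH}/teams/{team_id}/channels", headers=HEADERS, json=body)
    resp.raise_for_status()

# Hypothetical usage:
# group_id = create_project_group("Project Example", "project-example")
# teamify_group(group_id)          # the team id is the same as the group id
# add_channel(group_id, "Working")

Whether this ends up being less fiddly than Power Automate is debatable, but it shows that the two steps the flow would not chain together are each a single Graph call.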
[00:50:22] Nick Roome: Oh, I could go a million directions with this. I mentioned to Blake last week that I had six different items on my one more thing, and now I'm picking and choosing something that goes along with your one more thing. As long as we're talking about automation: I have some really high-importance automation tasking that I'm working on, and I haven't touched it in a week, and I'm forgetting everything that I need to do to make that automation work. To give some perspective, this is not a simple automated workflow. This workflow has something like 30-something steps, and the automations happen over a two-week time span.
So it's a lot, and they're referencing multiple different documents and multiple different things. I'm trying to automate a major pain point, and to just put it down for a week... I keep telling myself I gotta sit down, and I haven't touched this since last Wednesday. I keep telling myself I gotta sit down and actually look into it and figure this out, because it's going to have measurable impact on the things that I'm doing.
And it's not necessarily the work, it's the thought process of how do I get this thing to work in the way that I want it to. Because, like you said, Barry, if you only get it just close enough, it's almost gonna cause you more rework to go back and fix those things than it would be to just get it right.
And I'm trying to get it to the point where it works very similarly to how we have our show notes now. We don't have to do a whole lot of work on our show notes, so that's where we're at with that. I'm trying to get this to be very similar to that. Ugh. Yeah. And I can keep those other one more things for later.
All right, that's it for today, everyone. If you liked this episode and enjoyed some of the discussion about BCIs, I'll encourage you to go listen to episode 205, where Blake and Elise break down what it's like to drive an exoskeleton using brain signals. Comment wherever you're listening with what you think of the story this week. For more in-depth discussion, you can always join us on our Discord community. Visit our official website, sign up for our newsletter, and stay up to date with all the latest human factors news. If you like what you hear and you wanna support the show, there's a couple of things you can do. One, wherever you're listening, just stop what you're doing and go leave us a five-star review on whatever platform it is. That's really helpful. Two, you could always tell your friends about us; word of mouth is the number one way we grow as a podcast. And three, if you have the financial means and you want to do it, you can always consider supporting us on Patreon. Just a buck gets you in the door for a bunch of different cool things. As always, links to all of our socials and our website are in the description of this episode. Mr. Barry Kirby, thank you for being on the show today. Where can our listeners go and find you if they wanna talk to you about hooking up to your brain and understanding your thoughts?
[00:53:12] Barry Kirby: You don't want to do that, but if you want to come chat about it anyway, come find me on social media. You can also come and listen to me interview various amazing people from around the human factors community on 1202 The Human Factors Podcast, at 1202podcast.com.
[00:53:26] Nick Roome: As for me, I've been your host, Nick Roome. You can find me across Discord and across social media at Nick underscore Roome. Thanks again for tuning in to Human Factors Cast. Until next time: it depends. Woo.
Barry Kirby, Managing Director: a human factors practitioner based in Wales, UK. MD of K Sharp, Fellow of the CIEHF, and a bit of a gadget geek.