MononcQc
May 29, 2007

"I believe I did, Bob."





Human factors traces its origins to the turn of the last century, in trying to understand why the gently caress pilots were having so much trouble flying planes. Researchers looked into all the things that made people suck at flying planes: noise, g-forces, lack of oxygen, reading your bad posts, or inconvenient temperatures.

This was mostly run by psychologists, because even the discipline itself didn't really get its name until WWII, when pilots were judged to be crashing so many planes that people started to think maybe they were not holding it wrong, and the problem was actually with the plane cockpit design.

The story goes that the control that lowered the landing gear and the one that lowered the wing flaps looked identical. It was all too easy for a pilot, especially at night, to reach for the landing gear control and grab the wing flap one instead. If that happened, instead of putting down the plane's wheels for a safe landing, he would slow the plane and drive it into the ground. Rather than pilot error, a dude named Alphonse Chapanis called it "designer error"--the first time anyone had used that term.

This threw human factors and ergonomics to the forefront of integrating humans and automated systems together, trying to figure out how to best deal with the limitations of each. The early ideas were extremely enticing, and some of them, despite having been improved many times over, still survive in their original form.

One of them is the Fitts' List, also dubbed "haba-maba" (humans-are-better-at -- machines-are-better-at), which aimed to give some tasks to people, who are soft and mushy and adaptive and able to improvise, and others to machines, which are great at going fast and never getting tired:



This is babby's first automation design, and if this is how you think about interactions between humans and machines, congratulations, you're designing poo poo like it's 1951. Don't get me wrong, some of the aesthetics were kick rear end, but things have evolved since then:



Essentially, some of the limitations are still understood to be similar (it's not like you'll suddenly get faster than the computer or make fewer mistakes doing math), but the modern view instead casts the overall relationship between the human and the machine as interactive: one team that must work together, rather than two entities kept in isolation. That shift has deep impacts on everything downstream.

This is a thread for human factors stuff, and for the disciplines that eventually and necessarily intersect with it. The study of human performance and characteristics when interacting with machines had to cross paths with all the other areas where humans and machines interact: design, psychology, safety, resilience engineering, cybernetics, cognition, and a bunch of others.

MononcQc fucked around with this message at 01:18 on Jan 20, 2022


MononcQc
May 29, 2007

"I believe I did, Bob."



to seed this thread, here's a couple of good posts about VR design from Expo70 in the tech bubble v4.3 thread:

Expo70 posted:

the rotation thing is not new, and its why you often have instant snapturns in lots and lots of games. what causes the nausea is the perception of objects moving unnaturally when the person isn't.

in terms of vision, it happens because human vision's far peripheral is used to determine a horizon around the 30/60 degree mark, which is fundamentally 2D vision anyway because you only form a true field of 3D vision inside an arc of 30 degrees in front of your eyes and then everything deconverges which the lenses are dogshit at representing.

one thing that pisses me off is everybody is still doing linear interpolations for all of the rotations of objects and characters and is unbelievably willynilly with visible horizons in non-useful vision during rotations. Its why nausea in games with cockpits is way way way lower. Like no poo poo, nothing in nature works like that! nothing in nature interpolates razor smooth or starts and stops like an fps camera - the sudden stops and bizzare smoothnesses are not natural: of course they're going to feel goddamn weird. then there's the fact the actor body has to rotate from the up axis centroid of the character WITH THE RIGHT DISTANCE BETWEEN THE EYES AND THE FULCRUM OF THE NECK or you're adding weird transformation inheritance which is like a giant putting you on a baby carrier and swinging you around: not nice at all!!

here's a secret: you can partially solve the rotation problem by having a non-linear rotation speed. I find something like D=cD*((p*v^2)/2)*a) following the old/classical drag equations for dampening for *deg/sec with interpolated addition to the rotation rate works really nicely and its managed to get a 6DOF high speed mech project I'm working on feel surprisingly serviceable in VR. Couple that with some frames of reference for a cabin space and some creative lying about the horizon and it all gets extremely reasonable very quickly.
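The damped-rotation trick described above can be sketched roughly like this (a toy version; the constants and the pull-toward-target term are invented for illustration, not tuned values):

```python
import math

# Hypothetical constants -- placeholders, not tuned values.
DRAG_COEFF = 0.8      # cD in the classical drag formula
AIR_DENSITY = 1.0     # p (rho)
REF_AREA = 1.0        # a

def drag_damped_turn_rate(current_rate, target_rate, dt):
    """Move the camera's turn rate (deg/sec) toward a target rate, with a
    drag-style term cD * (p * v^2 / 2) * a opposing the current motion, so
    turns ramp up and bleed off instead of starting and stopping abruptly."""
    drag = DRAG_COEFF * (AIR_DENSITY * current_rate ** 2 / 2.0) * REF_AREA
    drag = math.copysign(drag, current_rate)  # drag always opposes motion
    accel = (target_rate - current_rate) * 8.0  # simple pull toward target
    return current_rate + (accel - drag) * dt

# Per-frame usage: the rate eases in toward the target instead of snapping,
# and drag caps it at a "terminal" turn rate below the raw target.
rate = 0.0
for _ in range(60):                      # one second at 60 fps
    rate = drag_damped_turn_rate(rate, 90.0, 1.0 / 60.0)
```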

bonus: frame objects using position instead of rotation in 3rd person. strafing movement is practically not nauseating at all if you separate it properly and manage your peripheral view and horizon stuff properly in the lower visual field!

the amount of money we spend developing better lenses and not just doing tons and tons and tons of trial and error with this stuff just makes my brain melt with pain

(sorry if this derail is annoying, i can take it elsewhere if you want my smooth brain is not used to yospos yet)


Expo70 posted:

so this concept is kind of flawed? to explain, the balancer mechanic i'm working with is an inversion of this where you steer the lower and bias it using one of the triggers to add in the automation and cruising movement either in global world-space or relative to the camera. about the only time that a torso twist makes sense if you're doing an absolute ton of cruising but you're also inheriting inclines and changes in vector rotation of the upper body unless you perform turret stabilization (which basically renders torso twist kinda obsolete for anything but long distance cruising)

something to consider is that we've moved away in warfare and engineering from the control of systems to the employment of systems -- you're not manually driving turrets, a commander has a workflow where you designate and tag a target then you pass that task from a tasklist to a gunner based on priority and then the fire control system takes over and the gunner provides fine context correction while the commander is already searching for next target and issuing drive instruction. net result is your time to attack is cut in half and the fire control reduces your rate of engagement failure.
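The designate-then-delegate workflow described above is, at its core, a priority queue between the commander and the gunner; a toy sketch (all names hypothetical):

```python
import heapq

class FireControl:
    """Toy sketch of the commander/gunner split: the commander tags targets
    with a priority, the fire control system hands the highest-priority one
    to the gunner, and the commander is free to keep searching."""
    def __init__(self):
        self.tasklist = []   # min-heap of (priority, order, target)
        self.counter = 0     # tie-breaker so equal priorities stay FIFO

    def designate(self, target, priority):
        # Commander tags a target; lower number = more urgent.
        heapq.heappush(self.tasklist, (priority, self.counter, target))
        self.counter += 1

    def next_engagement(self):
        # Fire control passes the top task to the gunner.
        if not self.tasklist:
            return None
        _, _, target = heapq.heappop(self.tasklist)
        return target

fc = FireControl()
fc.designate("truck", priority=2)
fc.designate("tank", priority=1)
# the gunner is handed "tank" first, then "truck"
```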

now for a single player alone in a vehicle, i think that workflow is extremely valuable. its also.. how to put it... like its more usable? you add a usability affordance by *having* a fire control system, and even though it might be flawed and not instantaneous with locktimes and slew-times and all kinds of other subspecializations with ranging or deliberate pid miscalibration to keep things clumsy, heavy and gamey you just freed up a player to make complex movement control decisions which means its actually worth bothering to implement ballistics and evasion mechanics instead of pure hitscan.

or if you prefer to be more tactical, you suddenly have freed up a ton of mental bandwidth of a player to make command decisions for a unit and thus, you can actually make group employment tactics wherein you are forced to specialize by the game design in your loadout and therefor your team's subspecializations also become important. specialization is amazing for replayability because it means you can find your preferred control style, have weaknesses in your engagement capabilities you have to adapt, adjust and cope for and in turn you have to start making serious decisions that a hyper-generalized unit does not.

remember, the solving of constraints in situations and the learning that allows players to perform that act is the entire soul of game design. what you are allowed to know and not know, what you can do and not do are just as important. sometimes the best feature in a game is the one that isn't present, that does not let a player design the fun out of a game by picking the least risky and most conservative strategy. if a player doesn't feel threatened and their own ability isn't solving their problems, they aren't learning and learning is play, and play is practice. if your player fails, they have to know exactly why it happened and what mistake not to repeat. the only time this becomes a problem is when your game is story dense, and failures disrupt the flow of a story and thus the irritation is on the entitlement of the player *for* the story. they will in these situations, design the difficulty out by playing something that doesn't challenge them. its a difficult and mostly unsolved problem that i think most of the industry isn't even aware of which makes me die inside a little.

i need to figure out how trig and curves and all that weird poo poo works so i can break waves into their component waves. i know it can be done because i think that's how we do stuff like compression, and how we can fake round ones out of square ones and square ones out of round ones like we do with intervals in the pid. i don't know if there's a name for it but i think if i can find certain wave histories or likelihoods in the pid's past inputs for each element of the pid, i can make the pid tune itself and adjust based on whether or not the wave history matches an ideal or not instead of having to make tables which attributes bias from a list which i loving hate doing. pid tuning is loving miserable. i think it would be great if it could have many ideas of what's "right" or not and infer them and slowly dial it in and i think i could represent it with branching logic and minor adjustments to an imaginary number and use time to make up for the loss of precision. if i do it fast enough (probably 2x the rate of the physics sim or so) literally nobody will notice. i feel so loving stupid sometimes.
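For what it's worth, "breaking waves into their component waves" is Fourier analysis, and faking a square one out of round ones is its classic party trick: sum the odd sine harmonics. A minimal stdlib-only sketch:

```python
import math

def square_from_sines(t, n_terms=25):
    """Approximate a square wave by summing its odd sine harmonics:
    square(t) ~= (4/pi) * sum over k of sin((2k+1)*t) / (2k+1)."""
    total = 0.0
    for k in range(n_terms):
        n = 2 * k + 1
        total += math.sin(n * t) / n
    return 4.0 / math.pi * total

# At t = pi/2 the ideal square wave is +1; the sine sum gets close,
# and adding more terms sharpens the corners further.
approx = square_from_sines(math.pi / 2)
```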

MononcQc fucked around with this message at 16:00 on Jan 19, 2022

Silver Alicorn
Mar 30, 2008

The Game is Never Over


I have an ergonomic keyboard op

bump_fn
Apr 12, 2004



posting w/o reading

Shaggar
Apr 26, 2006


Nap Ghost

i like to have human interactions with the computer

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'


Is this the thread where we talk about how once people know you’re measuring them with a given metric they target the metric and it stops being useful

MononcQc
May 29, 2007

"I believe I did, Bob."



Captain Foo posted:

Is this the thread where we talk about how once people know you’re measuring them with a given metric they target the metric and it stops being useful

Hell yeah I'm all for it.

This specifically is Goodhart's Law and it will ruin your manager's life.

Shame Boy
Mar 2, 2010

THE HORROR
THE HORROR





MononcQc posted:

to seed this thread, here's a couple of good posts about VR design from Expo75 in the tech bubble v4.3 thread:

it's Expo70, which is a clever reference to Expo 70 you see

Silver Alicorn
Mar 30, 2008

The Game is Never Over


bump_fn posted:

posting w/o reading

Silver Alicorn
Mar 30, 2008

The Game is Never Over


op I am happy for you that you had the energy and knowledge to put this thread together. keep posting and being you. I will not read the op however

Expo70
Nov 15, 2021


hang on writing another post

i'm not 100% used to the culture of sa/yospos yet and soc-anx (came from an edgier community previously, super enjoying the mega fluffy atmos here that's super welcoming and lovely and makes me wish i'd used SA for all these years instead) makes everything complicated and i'm mostly sorta winging it with experience and knowledge rather than ultra formalized learnins for a lot of it. i imagine a bunch of this might be wrong given i'm not citing rules and stuff, but it is just really fun to talk about and makes the little seratonin hamster in my brain run in her wheel extra fast pleez don't be angery, im just kinda enthusiastic and a bit dumm. yospos has big comfy energy.

--

I think probably one of my favourite change histories is that of nuclear power station control consoles, which are about as complex and mind-bending as machines get.

Having set foot in a few in my time, one thing you start to notice is older systems will have buttons on most every surface, including places you can bump with your leg accidentally, and they will have really important lights in strange places, and often it was common to use a hook on a stick to yank some of the levers, simply because the shape of the console was dictated by spatial rules rather than human rules. If you wanted to know something, you streamed through reams of papers coming out of a hole in the wall somewhere like a mad scientist, with the camera about to pan in on you. There's a completely harrowing scan from Human Factors in Engineering and Design, 7th edition, by Sanders and McCormick which depicts such a console, and I wish I could scan it for you. I just remember thinking when I saw it: "if anybody knocks ANY of these switches or trips reaching for something else, you don't even know what the previous 'correct' state is, and you'd have to go through all the steps manually on a checklist to verify it all!" -- akin to programming a computer's program memory manually with dip-switches!

Moving into systems in the 1970's, you start to see things like the important readouts you check very often sitting in the middle of your approximate eye-line, and any indicators above your vision or outside your normal sight lines on the console stacks flashing more often and with brighter colors if something is wrong.

Eventually the entire system matured in the 1980's into what's thought of as a sort of two-fold control mechanism -- direct and indirect -- and that's still the metaphor we use today.

What this means in simple terms is you have all the old equipment which is there as a fallback and shows all values constantly in a super overwhelming sea of information (direct) -- and then you have instrumentation being filtered through computers which only tend to show you when something is wrong and otherwise, just have palettes of useful information. This is indirect and is how 90% of the use is handled.

You can think of this as a bit like the MFD pages of an Apache or fighter jet -- you get a WCA (Warnings, Cautions and Advisories) page, which also lists faults and software test states for simulations or shows you the results of a future action, and then you get proper "system pages": if you see a WCA master caution come up, you can skim through the page and get a specific idea of system faults, though typically the WCA master page will indicate what the rate of change and current state of a system is.

The advisory thing is really important particularly, because you often don't have time to sit down and drill through a full user manual to find specific pages so a WCA will either give you a direct suggestion or tell you to turn to a specific page of one of those nice glossy operations-manuals which have specific trouble-shooting steps (which in the case of your apache pilot, is usually a kneeboard with flashcard like prompts to remind the pilot of their training should they be panicking and forgetting things -- this is obviously very bad if you are spinning to your death, which is why you want warnings to fire BEFORE the pins-you-to-the-cabin death-loop starts)

In the case of a reactor, you have what's called an alarm room console, which is diagrams of every conceivable system and system-relationship context you can imagine, and graphs upon graphs of data, often looking like the NORAD room in WarGames with little drawings of the reactor and cooling assemblies everywhere, or something like that. A system context relationship is like "well, these three things have a certain set of known phenomena which arise, so we're going to take them like palettes in photoshop, plug them into each other, draw up some formula to plug into their values, and look for coefficients as warning biases which indicate WCA alerts if something spikes". In this way, the whole system is designed so you can kinda just keep adding stuff, modifying those values, and driving the biases up and down. They are every bit as cool as wallstreet trading bro screenwalls wish they were, and honestly they are fudging fascinating to watch.
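The "plug systems into each other and watch coefficients" idea might look roughly like this as code (system names, formulas, and thresholds here are all invented for illustration):

```python
def wca_check(readings, formulas, biases):
    """Each 'system context relationship' is a formula over a few readings;
    if its output exceeds its (tunable) bias threshold, raise a WCA alert.
    All names and thresholds here are invented for illustration."""
    alerts = []
    for name, (formula, keys) in formulas.items():
        value = formula(*(readings[k] for k in keys))
        if value > biases[name]:
            alerts.append((name, value))
    return alerts

readings = {"coolant_temp": 310.0, "coolant_flow": 80.0, "pump_vibration": 0.4}
formulas = {
    # temperature rising while flow drops is more alarming than either alone
    "cooling_margin": (lambda t, f: t / f, ("coolant_temp", "coolant_flow")),
    "pump_health": (lambda v: v, ("pump_vibration",)),
}
biases = {"cooling_margin": 4.0, "pump_health": 0.8}  # drive these up/down to tune
alerts = wca_check(readings, formulas, biases)  # empty while everything is nominal
```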

In a lot of situations this is actually a command center rather than a direct control room; you can think of the bridge of a ship as a derivative of its engineering area, and this room as one more derivative on top of that.

This gets really important because you're suddenly concerned not just with operations of the reactor and anticipating faults, but also the demand of the grid and how you weigh up burndown of resources vs payment of the reactor (eg: does it make financial sense to run the turbines at a temperature slightly outside of their best longevity time given factors like time to next maintenance cycle or what the current grid demands are, or what the budgetary demands are so you can meet your plant's fiscal goals).

What's interesting is that in the space of designing these systems, there's a very no-nonsense focus on making sure what's built "makes sense today" rather than being something you foresee for the future, to avoid architecture-astronaut thinking -- which is kind of the death-loop of these kinds of systems due to their absurd complexity -- and the future-proofing comes from planning for growth rather than accounting for all possible use-cases.

As you can imagine, these spaces are incredibly loud, full of people, and full of distractions, so you need diagrams and readouts which not only "make sense" in terms of what they communicate, but are also quite spatially liberal, so you can see the clear lines of which system feeds which, with colors indicating rates, big visible valve markers with state information, and often marching lines if a line is bidirectional.

I don't remember who said it but my favourite quote on these suckers is "if your design is good, then you should enable every homer simpson to do the right thing because it is the obvious thing in a given scenario". I know it sounds pretty silly but it always really makes me smile. HFE is full of lots of weird little things like that.

--

part of my brain wants to rant a little about the history of tetris in hfe and how nes vs modern tetris are **RADICALLY** different games, and what the different community demands are and why they exist. tetris is actually a very complex use-case for some very weird unexpected reasons and i think building an example of tetris should honestly be a mandatory hfe exercise.


MononcQc posted:

Hell yeah I'm all for it.

This specifically is Goodhart's Law and it will ruin your manager's life.

iirc isn't the trick for this to signpost and create a fake metric so the attentional window of your evaluee is somewhere else? its a little bit unethical but if your usecase is harmless, it solves the problem and adds a bias which you can negate for by evaluating against a previous ground-truth

the other trick iirc is to oscillate metrics and do template testing so you can determine which of n-states your evaluee's attention was on so you can then account for a known bias by using a pattern?

like the cognitive load of trying to account for a metric is finite and you can saturate it with dark patterns though i think outside of maybe game design or some extremely minor stuff this quickly becomes VERY unethical and makes me very sad

Expo70 fucked around with this message at 15:43 on Jan 19, 2022

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'


tbf yospos and sa as a whole have had the edges filed off a lot in the past couple of years

this was not a nice place a decade ago

Expo70
Nov 15, 2021


Captain Foo posted:

tbf yospos and sa as a whole have had the edges filed off a lot in the past couple of years

this was not a nice place a decade ago

i mean, i stumbled into chons in 2003ish and i had to depattern brainworms for the better part of 2012 when that whole storm went down and i realized i wanted out -- its weird, that chons got edgier and sa got softer. how is this place so drat comfy

anyways, offtopic sorry

Shaggar
Apr 26, 2006


Nap Ghost

everyone here is old

Expo70
Nov 15, 2021


Shame Boy posted:

it's Expo70, which is a clever reference to Expo 70 you see

man that entire place could honestly do with its own thread somewhere on this site. Expo 70 felt like gazing into the future. i remember looking at photos of it in a book and i fell in love with it. you know they had this concept called 'data swallows', as in the bird swallows and how they build their nests? they were ladies in cool uniforms who went around sharing information by moving documents and drives around and by interviewing strangers and doing data collection work. in a way, they became the human central nervous-system of the expo and they had a surprising amount of decision making power that was able to overrule the people running the various kiosks and plaza.

they had this idea that the people who were "the boots on the ground" would have a better holistic understanding of what was taking place and what the psychology of the audience was than managers, who had different incentives, and put them against each other so they would compete as opposing halves of the same combined interest system -- which is kind of akin to letting someone in a lower position add a voting power against someone in a higher position in a company, being given radio tools to communicate with those people in realtime while transporting information, and being able to veto the delivery of that information if they choose, provided they can write up a written statement to explain why.

it was i guess an early example of a successful network-based decision making system instead of a hierarchical decision making system at a large event, as a proof of concept (since most of what was even at expo 70 was a proof of concept). i think in its own way, the organization of companies and the human relationships of systems are kind of a pure social-technology employment of human-factors engineering. huh, weird.

i wish that made it into the future.

Expo70 fucked around with this message at 15:57 on Jan 19, 2022

bump_fn
Apr 12, 2004



what are "the chons"

Expo70
Nov 15, 2021


bump_fn posted:

what are "the chons"

bad website, do not wish to talk about, move along

MononcQc
May 29, 2007

"I believe I did, Bob."



Shame Boy posted:

it's Expo70, which is a clever reference to Expo 70 you see

Yeah my bad, I typoed the name when setting up everything. Will edit it. I'm more familiar with Expo67 being in Montreal and all!

Expo70
Nov 15, 2021


MononcQc posted:

Yeah my bad, I typoed the name when setting up everything. Will edit it. I'm more familiar with Expo67 being in Montreal and all!

ah no problem, no problem! the architecture there was extremely cool and i think i should probably spend the time to study it and look into forms. i wonder if any of the metabolist architecture movement was present there?

also

i don't wanna be rude, but can you recommend a few hfe books or adjacent materials? i'm almost entirely self-taught and i'm wondering what's highly valued by others in the space, or what weird things you think don't get talked about enough. its all extremely interesting to me.

Shame Boy
Mar 2, 2010

THE HORROR
THE HORROR





bump_fn posted:

what are "the chons"

4chan, both of us were once shithead edgy teenagers/young adults, have grown out of it, and are ashamed of the past

Expo70
Nov 15, 2021


Shame Boy posted:

4chan, both of us were once shithead edgy teenagers/young adults, have grown out of it, and are ashamed of the past

yeah, the internet really should feel more like the magic school bus and a lot less like the mad max war rig

seatbelts, everyone!

bump_fn
Apr 12, 2004



oh i ve never heard them called "chons" just "chans"

Expo70
Nov 15, 2021


bump_fn posted:

oh i ve never heard them called "chons" just "chans"

i wanted to call them the chuns, as a portmanteau of chud and chan but everybody thought i had a fixation with muscular thighs

¯\_(ツ)_/¯

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'


Expo70 posted:

yeah, the internet really should feel more like the magic school bus and a lot less like the mad max war rig

:3

MononcQc
May 29, 2007

"I believe I did, Bob."



Expo70 posted:

What's interesting is that in the space of designing these systems, there's a very no-nonsense focus on making sure what's built "makes sense today" rather than being something you foresee for the future, to avoid architecture-astronaut thinking -- which is kind of the death-loop of these kinds of systems due to their absurd complexity -- and the future-proofing comes from planning for growth rather than accounting for all possible use-cases.

As you can imagine, these spaces are incredibly loud, full of people, and full of distractions, so you need diagrams and readouts which not only "make sense" in terms of what they communicate, but are also quite spatially liberal, so you can see the clear lines of which system feeds which, with colors indicating rates, big visible valve markers with state information, and often marching lines if a line is bidirectional.

I don't remember who said it but my favourite quote on these suckers is "if your design is good, then you should enable every homer simpson to do the right thing because it is the obvious thing in a given scenario". I know it sounds pretty silly but it always really makes me smile. HFE is full of lots of weird little things like that.

Things changed a bit, as far as I understand from talking to safety folks who worked in close relation to the nuclear industry. The big reckoning in the western world was around Three Mile Island, which was peak "remove all failure potential from the system and ensure redundancies exist in all places" nuclear design afaict, but still had one of the worst incidents on US soil. It was one of the cases where, in the years that followed, people had to evolve their safety event models to account for humans as a fundamental part of systems, and it led to the emergence of models such as Perrow's Normal Accidents, which later paved the way to concepts such as Safety-I (avoiding adverse events) and Safety-II (the mechanisms for success are the same as those that can feed into failure, so study successes as much as failures).

TMI is also where people in safety adopted the term "fundamental surprise error". The idea is that when new information comes through and shakes the foundations on which you built your (mental) models and around which you structured your vision of the world, you may encounter a fundamental surprise: a new event that negates or exposes major flaws in the model you have. So for example, if you have a machine running, you can assume that you can safely turn it off and it won't break or damage itself since it's not moving. But surprise, you're in Texas, and at some point you get a very cold snap, the water in there freezes and damages some tubes, and the next morning you turn it on and it won't work at all. This was never a problem before because when the machine ran at night, it warmed itself, and you had never seen this situation. The fundamental surprise may be to now consider the temperature of the machine while it is not running to be extremely significant, and you now have to consider environmental control as part of your system. The fundamental surprise *error* is to avoid all implications that your model might be wrong, to instead lay the blame on rare one-off things or bad actions people made, and to correct nothing.

We saw that with TMI because the loss of coolant came from cases that were not properly handled in the manuals, for which people had limited training, and that couldn't be seen from the normal control panels. The fundamental surprise there was more adequately responded to by Perrow, who adjusted and changed how we conceptualized systems. A fundamental surprise error would have been to consider the problem to be humans not being trained properly and loving up, and to assume that next time, with more training, they'll avoid the meltdowns.

I'd really love to find a good source about the adjustments there, but then again Chernobyl happened by the end of the next decade and a bunch of wrong lessons were learned there ("human error" got the blame and reverted a lot of the learnings in the public eye, although experts saw things a bit differently, but there was also a big rush to say "no, our systems are fundamentally safer here" that obviated a lot of the lessons for the general public)

Expo70 posted:

iirc isn't the trick for this to signpost and create a fake metric so the attentional window of your evaluee is somewhere else? its a little bit unethical but if your usecase is harmless, it solves the problem and adds a bias which you can negate for by evaluating against a previous ground-truth

the other trick iirc is to oscillate metrics and do template testing so you can determine which of n-states your evaluee's attention was on so you can then account for a known bias by using a pattern?

like the cognitive load of trying to account for a metric is finite and you can saturate it with dark patterns though i think outside of maybe game design or some extremely minor stuff this quickly becomes VERY unethical and makes me very sad

Yes and no. It likely depends on what you consider the system to be. If it's a rather static one (say, the classical example of a room being heated with a thermostat as a PID controller), then you can get away with a limited set of metrics. If it's a dynamic one with no clear bounds and a lot of independent, unpredictable free agents, you can't really do much there.
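For reference, the thermostat example in its textbook PID form (gains and the toy room model are arbitrary, just to show the shape of the loop):

```python
class PID:
    """Minimal textbook PID controller:
    output = Kp*error + Ki*integral(error) + Kd*d(error)/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy room model: heater output nudges the temperature up, and the room
# leaks heat toward 15 C outside. Gains are arbitrary, not tuned.
pid = PID(kp=2.0, ki=0.1, kd=0.5)
temp = 15.0
for _ in range(200):
    heat = max(0.0, pid.update(setpoint=21.0, measured=temp, dt=1.0))
    temp += 0.1 * heat - 0.05 * (temp - 15.0)
# temp settles near the 21 C setpoint
```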

I mean it's a similar issue with things such as leading indicators (which I ranted about in the tech bubble thread). Essentially the risk with a leading indicator is that if it's a good predictor of events and you decide to act on it, your own actions cancel all predictive ability of the indicator. Because you acted on it, the events that follow the early indicator no longer happen, and it's no longer a leading indicator: the new actions have their own effect that impact things and the system is dynamic; you'll need to continuously adjust and adapt to new indicators.
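A toy illustration of that self-cancellation, with made-up numbers: an indicator that reliably precedes an incident stops correlating with incidents the moment you start acting on it.

```python
import random

def run(act_on_indicator, steps=1000, seed=42):
    """Toy system: a 'pressure' spike (the leading indicator) is followed by
    an incident one step later, unless someone intervenes on seeing the spike."""
    rng = random.Random(seed)
    signals = hits = 0
    pending = False  # a spike happened last step and nobody intervened
    for _ in range(steps):
        if pending:
            hits += 1            # incident follows an un-acted-on spike
        spike = rng.random() < 0.1
        if spike:
            signals += 1
        pending = spike and not act_on_indicator
    return signals, hits

# Without intervention, nearly every spike is followed by an incident...
s0, h0 = run(act_on_indicator=False)
# ...once you act on each spike, the incidents stop, and the "leading
# indicator" no longer predicts anything at all.
s1, h1 = run(act_on_indicator=True)
```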

So a lot of metrics end up being a proxy for something more important (eg. you could decide that GDP is a good proxy for economic prosperity, which in turn [should] tell you about well-being). The issue is that the proxy value is often chosen as something simpler and easier to reliably measure, but as it becomes its own objective, it stops correlating adequately to the actual important thing you want to impact, and may also negatively impact other previously unaccounted for values (eg. monocrop productivity is improved greatly under industrial farming until you erode soils, wash away nutrients, and kill the microorganisms that make your soil healthy).
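The proxy drift is easy to show with toy numbers: keep optimizing an invented proxy (raw output) and the equally invented true objective (output minus side effects) peaks and then falls.

```python
def proxy(effort):
    """Invented stand-in for the easy-to-measure metric, e.g. raw output."""
    return 10.0 * effort

def true_goal(effort):
    """Invented stand-in for the thing you actually care about:
    output minus a side effect that grows faster than output does."""
    return 10.0 * effort - effort ** 2

# Pushing the proxy ever higher keeps "improving" the metric...
efforts = [1, 3, 5, 7, 9]
proxies = [proxy(e) for e in efforts]    # strictly increasing
# ...while the thing the proxy stood for peaks and then declines.
reals = [true_goal(e) for e in efforts]  # peaks at effort 5, then falls
```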

My general understanding is that the healthiest attitude is one where the system designer and the people who choose metrics have to consider themselves as part of the system (an omission that greatly harmed the first wave of cybernetics research) and assume that they'll have to constantly look at what metrics they pursue to keep them in line with the higher level goals they actually want to attain.

Plan to have to adjust and replace metrics, recalibrate objectives, and abandon things that were great predictors yesterday.

Expo70 posted:

i don't wanna be rude, but can you recommend a few hfe books or adjacent materials? i'm almost entirely self-taught and i'm wondering what's highly valued by others in the space, or what weird things you think don't get talked about enough. it's all extremely interesting to me.

I'm also self-taught, though my slant is far more towards resilience engineering and its safety roots, with my intent of applying that to software operations. I'll probably wait to drop more resources but for those relevant to this current post:
  • Field Guide to Understanding Human Error by Sidney Dekker. Dekker is essentially a former pilot turned safety expert who is known for extremely flippant and biting comments. He's seen as a bit of a wild card but an extremely useful communicator in the environment, and this book is the best intro to fundamentals around human error and how it should be framed.
  • Producing Power: The Pre-Chernobyl History of the Soviet Nuclear Industry, which takes a systemic approach to investigating the Chernobyl disaster and digs 50 years back into how the Soviet system divided its civilian and military nuclear power divisions, how the designs they chose tangled with existing values and infrastructure, and how the reactor design they used was absolutely the rational choice at the time. I like it because it ties into the nuclear discussion above, but also because it shows that a broader view exiting the direct confines of the technical system is absolutely relevant to understanding systems.
  • Behind Human Error is my actual bible when it comes to systems and errors and human interactions. It's the more advanced version of Dekker's book and is an absolutely fascinating tour of cognitive science, safety, human factor/design, and the overall interaction of humans in systems.

I'm hoping to save some books and tons of papers to drop at various points in time, but the first page is a good place for these three books.

MononcQc fucked around with this message at 16:50 on Jan 19, 2022

MononcQc
May 29, 2007

"I believe I did, Bob."



gently caress it, here's a cool and good paper by David Woods while I'm at lunch: https://twitter.com/ddwoods2/status/1442590755126661124

Woods wrote this in the 80s but recently decided to call out tech companies in a tweet for our terrible Human-Computer Interfaces (HCIs) having “low visual momentum”

He defines visual momentum as “a measure of the user’s ability to extract and integrate information across displays, in other words, as a measure of the distribution of attention” and states:

quote:

When the viewer looks to a new display there is a mental reset time; that is, it takes time for the viewer to establish the context for the new scene. The amount of visual momentum supported by a display system is inversely proportional to the mental effort required to place a new display into the context of the total data base and the user’s information needs. When visual momentum is high, there is an impetus or continuity across successive views which supports the rapid comprehension of data following the transition to a new display. It is analogous to a good cut from one scene or view to another in film editing. Low visual momentum is like a bad cut in film editing—one that confuses the viewer or delays comprehension. Each transition to a new display then becomes an act of total replacement (i.e. discontinuous); both display content and structure are independent of previous “glances” into the data base. The user’s mental task when operating with discontinuous display transitions is much like assembling a puzzle when there is no picture of the final product as a reference and when there are no relationships between the data represented on each piece.
He says that not doing this right often shows up in ways interpreted to be “memory bottlenecks,” where people get lost in their displays and information. He warns that those are not memory issues, but symptoms of mismatches in the human-machine cognitive system (the machine isn’t being helpful or playing to the human’s expectations and the human is adjusting)

Woods writes in a dense style, but seems to state, briefly, that humans are good at spatial cognition and that playing to that strength reduces the cognitive cost of dealing with information. The layout and transitions for displays/graphs/charts/whatever can be used to give information about what the data they contain represents and how it connects to the rest; not doing this means that we must use a different costlier type of attention to keep track of everything mentally.

He creates a gradient of visual momentum sources, and defines each of them, using ‘maps’ as an abstract metaphor (you can think of them as literal maps, but also just as “organization structure” and he seems to hint at the latter):


  • total replacement / fixed format data replacement are just flashing a new page
  • long shot is about providing a summary view of status that contextualizes later views or direct digging from there
  • contextual landmarks is about providing cues for a display to lead to another such that you could follow information across locations
  • display overlap is about using layers to show multiple sources over a single visual framework, tied together by function. Inherently more limited due to the presentation surface (I’m having a hard time breaking out of the “map” analogy here, but I could imagine being able to highlight the data path probed by some user-facing alert to contextualize components involved and their state)
  • spatial representation is about making use of the layout of presented data to also provide further information about their meaning and the location you’re in right now. He mentions navigable topologies of databases info, route knowledge (and breadcrumbs), maps to supplement menus, etc.
  • spatial cognition: finding ways to represent systems through analogies of ‘routes’ that can be selected and navigated, since these can be laid out and considered in parallel; it requires a spatial framework that focuses on relationships between components
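Woods's "mental reset" framing can be caricatured in a few lines: model each display as a set of visual elements, and score a transition by the share of the new display that has no anchor in the previous one. This is my toy model, not anything from the paper, and the display contents are invented placeholders:

```python
# Toy model of visual momentum: reset cost of a display transition is the
# fraction of the new display's elements NOT carried over from the last one.

def reset_cost(prev_display, next_display):
    carried = prev_display & next_display       # shared contextual landmarks
    return 1.0 - len(carried) / len(next_display)

overview  = {"service map", "alert banner", "time axis"}
detail    = {"service map", "time axis", "latency graph", "error table"}
unrelated = {"login form", "settings", "billing"}

print(reset_cost(overview, detail))      # landmarks carried over: cheap cut
print(reset_cost(overview, unrelated))   # total replacement: cost 1.0
```

"Total replacement" in his gradient is the cost-1.0 case; everything else on the list (long shots, landmarks, overlap, spatial layout) is a way of pushing that fraction down.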

It’s followed by a short discussion on the way people process such information. Anyway, I found that paper super interesting in the context of “y’all computer ops folks have lovely support for people overviewing a system and it takes a lot of cognitive load”

Expo70
Nov 15, 2021


MononcQc posted:

gently caress it, here's a cool and good paper by David Woods while I'm at lunch: https://twitter.com/ddwoods2/status/1442590755126661124

Woods wrote this in the 80s but this recently decided to call out tech companies on our terrible Human-Computer Interfaces (HCIs) in a tweet for having “low visual momentum” ...


this stuff is literally why i gutted an old n52, packed it with scripts, and got one of those big metal trading trusses from ebay and do the six monitor shuffle, because i hate having to have multiple tabs of unreal open on the same window when i'm doing development. its just so much nicer for the bottom three to have leftside library, midside 4k 4x node graphs, rightside project, and have a clear path from A to B to C in terms of what the heck is going on when stuff is running.

i can see what the graphs are like actually doing and i bury complexity into subnodes so i can drill down and see exactly what's going on with clear symbolic reasoning, named graphs and their specific hotkeys to their bookmarks and explanation with bookmarks in ue4 mapped to f15 to f22 combinations of keys on the n52 to rapidly load and swap graphs

its overkill but it brings me enormous joy and makes me wish this stuff could be procedurally generated in a ux some way instead of hand-made

about the only "new" feature i think i could ask for is graphs for the debugging features so i can see rates of change and histories instead of just the current variable values like this: http://worrydream.com/MediaForThinkingTheUnthinkable/



--


i think in terms of actual applied versions of this concept done successfully, my favourite is the primary mfd optics on the rafale, which use special lensing so the human eye doesn't have to refocus from distant to near after reading the hud's optics with their infinity focal length sighting. net result is that display is *always* used when you're reading the world as it is part of your lower periphery, so you use the other two either side when engaging with bvr, and any information you need during dynamic response you read in an entirely different focal length, which cuts down on eye fatigue but massively leverages spatial reasoning, similarly to how hmd systems often do as an entirely new optical anchor-point.

it immediately scrubs fine information from the two mfds either side when you look out at the world through the hud and thus, you end up with two information domains and your brain's natural culling behaviors get super aggressive, so the central lensed display has markers to indicate warnings/cautions/alerts to tell you which mfd to look at meaning it also happens to cut down on fixation phenomena which is a huge huge problem

i'm also just obsessed with the hybrid throttle stick concept (it's both! horizontal and vertical on the left side!), with the different hand positions representing different control scenarios (eg, one for beyond visual range and long range navigation, one for visual flight rule navigation). its just so drat CLEAN and it doubles the clustering capacity that comes with automaticity in a way that the control crown work on the F35 weeps at with its messy pickle muffin sandwich design.

Expo70 fucked around with this message at 17:30 on Jan 19, 2022

post hole digger
Mar 21, 2011


is this thread about chairs

Best Bi Geek Squid
Mar 25, 2016


Silver Alicorn posted:

I have an ergonomic keyboard op

this

also op I had to drive a new car for work last week and the radio controls were hell. completely flat buttons and touchscreen controls. couldn’t change the station without taking eyes off the road

Expo70
Nov 15, 2021


post hole digger posted:

is this thread about chairs

does your chair make you think in different ways?

bob dobbs is dead
Oct 8, 2017

I love peeps

Nap Ghost

i learned hci and human factors eng at plutocrat school formally and they, ironically, rigorously crammed down our throats that this is fundamentally a self-taught discipline and that there are no firm rigorous complete textbooks

that said, bret victor's syllabus is not a bad syllabus for something pertaining to the subject, more narrowly construed for computers than engineering in general

http://worrydream.com/#!/Links

rotor
Jun 11, 2001

Official Carrier
of the Neil Bush Torch

teh butts


Expo70 posted:

does your chair make you think in different ways?

yeah ... it makes me think i need a new chair!!! lmbo.

rotor
Jun 11, 2001



So i posted this about VR in the other thread and a bunch of whiners told me it was behind a paywall(???) so here it is again

https://qz.com/192874/is-the-oculus-rift-designed-to-be-sexist/

quote:

In the fall of 1997, my university built a CAVE (Cave Automatic Virtual Environment) to help scientists, artists, and archeologists embrace 3D immersion to advance the state of those fields. Ecstatic at seeing a real-life instantiation of the Metaverse, the virtual world imagined in Neal Stephenson’s Snow Crash, I donned a set of goggles and jumped inside. And then I promptly vomited.

I never managed to overcome my nausea. I couldn’t last more than a minute in that CAVE and I still can’t watch an IMAX movie. Looking around me, I started to notice something. By and large, my male friends and colleagues had no problem with these systems. My female peers, on the other hand, turned green.

What made this peculiar was that we were all computer graphics programmers. We could all render a 3D scene with ease. But when asked to do basic tasks like jump from Point A to Point B in a Nintendo 64 game, I watched my female friends fall short. What could explain this?

At the time any notion that there might be biological differences underpinning computing systems was deemed heretical. Discussions of gender and computing centered around services like Purple Moon, a software company trying to entice girls into gaming and computing. And yet, what I was seeing gnawed at me.

That’s when a friend of mine stumbled over a footnote in an esoteric army report about simulator sickness in virtual environments. Sure enough, military researchers had noticed that women seemed to get sick at higher rates in simulators than men. While they seemed to be able to eventually adjust to the simulator, they would then get sick again when switching back into reality.


Being an activist and a troublemaker, I walked straight into the office of the head CAVE researcher and declared the CAVE sexist. He turned to me and said: “Prove it.”

The gender mystery

Over the next few years, I embarked on one of the strangest cross-disciplinary projects I’ve ever worked on. I ended up in a gender clinic in Utrecht, in the Netherlands, interviewing both male-to-female and female-to-male transsexuals as they began hormone therapy. Many reported experiencing strange visual side effects. Like adolescents going through puberty, they’d reach for doors—only to miss the door knob. But unlike adolescents, the length of their arms wasn’t changing—only their hormonal composition.

Scholars in the gender clinic were doing fascinating research on tasks like spatial rotation skills. They found that people taking androgens (a steroid hormone similar to testosterone) improved at tasks that required them to rotate Tetris-like shapes in their mind to determine if one shape was simply a rotation of another shape. Meanwhile, male-to-female transsexuals saw a decline in performance during their hormone replacement therapy.

Along the way, I also learned that there are more sex hormones on the retina than in anywhere else in the body except for the gonads. Studies on macular degeneration showed that hormone levels mattered for the retina. But why? And why would people undergoing hormonal transitions struggle with basic depth-based tasks?

Two kinds of depth perception

Back in the US, I started running visual psychology experiments. I created artificial situations where different basic depth cues—the kinds of information we pick up that tell us how far away an object is—could be put into conflict. As the work proceeded, I narrowed in on two key depth cues – “motion parallax” and “shape-from-shading.”

Motion parallax has to do with the apparent size of an object. If you put a soda can in front of you and then move it closer, it will get bigger in your visual field. Your brain assumes that the can didn’t suddenly grow and concludes that it’s just got closer to you.

Shape-from-shading is a bit trickier. If you stare at a point on an object in front of you and then move your head around, you’ll notice that the shading of that point changes ever so slightly depending on the lighting around you. The funny thing is that your eyes actually flicker constantly, recalculating the tiny differences in shading, and your brain uses that information to judge how far away the object is.

In the real world, both these cues work together to give you a sense of depth. But in virtual reality systems, they’re not treated equally.

The virtual-reality shortcut

When you enter a 3D immersive environment, the computer tries to calculate where your eyes are at in order to show you how the scene should look from that position. Binocular systems calculate slightly different images for your right and left eyes. And really good systems, like good glasses, will assess not just where your eye is, but where your retina is, and make the computation more precise.

It’s super easy—if you determine the focal point and do your linear matrix transformations accurately, which for a computer is a piece of cake—to render motion parallax properly. Shape-from-shading is a different beast. Although techniques for shading 3D models have greatly improved over the last two decades—a computer can now render an object as if it were lit by a complex collection of light sources of all shapes and colors—what they can’t do is simulate how that tiny, constant flickering of your eyes affects the shading you perceive. As a result, 3D graphics does a terrible job of truly emulating shape-from-shading.

Tricks of the light

In my experiment, I tried to trick people’s brains. I created scenarios in which motion parallax suggested an object was at one distance, and shape-from-shading suggested it was further away or closer. The idea was to see which of these conflicting depth cues the brain would prioritize. (The brain prioritizes between conflicting cues all the time; for example, if you hold out your finger and stare at it through one eye and then the other, it will appear to be in different positions, but if you look at it through both eyes, it will be on the side of your “dominant” eye.)

What I found was startling. Although there was variability across the board, biological men were significantly more likely to prioritize motion parallax. Biological women relied more heavily on shape-from-shading. In other words, men are more likely to use the cues that 3D virtual reality systems relied on.

This, if broadly true, would explain why I, being a woman, vomited in the CAVE: My brain simply wasn’t picking up on signals the system was trying to send me about where objects were, and this made me disoriented.

My guess is that this has to do with the level of hormones in my system. If that’s true, someone undergoing hormone replacement therapy, like the people in the Utrecht gender clinic, would start to prioritize a different cue as their therapy progressed.

We need more research

However, I never did go back to the clinic to find out. The problem with this type of research is that you’re never really sure of your findings until they can be reproduced. A lot more work is needed to understand what I saw in those experiments. It’s quite possible that I wasn’t accounting for other variables that could explain the differences I was seeing. And there are certainly limitations to doing vision experiments with college-aged students in a field whose foundational studies were done almost exclusively with college-age males. But what I saw among my friends, what I heard from transsexual individuals, and what I observed in my simple experiment led me to believe that we need to know more about this.

I’m excited to see Facebook invest in Oculus, the maker of the Rift headset. No one is better poised to implement Stephenson’s vision. But if we’re going to see serious investments in building the Metaverse, there are questions to be asked. I’d posit that the problems of nausea and simulator sickness that many people report when using VR headsets go deeper than pixel persistence and latency rates.

What I want to know, and what I hope someone will help me discover, is whether or not biology plays a fundamental role in shaping people’s experience with immersive virtual reality. In other words, are systems like Oculus fundamentally (if inadvertently) sexist in their design?
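Worth noting that the motion-parallax cue she describes really is just trigonometry, which is why renderers get it right almost for free (the can size and distances below are arbitrary):

```python
import math

# Apparent angular size of an object of width w at distance d:
#   theta = 2 * atan(w / (2 * d))
# Halving the distance makes the can subtend a larger angle, which the
# brain reads as "it moved closer" rather than "it grew".

def angular_size_deg(width, distance):
    return math.degrees(2 * math.atan(width / (2 * distance)))

far = angular_size_deg(0.066, 1.0)    # a 6.6 cm soda can at 1 m
near = angular_size_deg(0.066, 0.5)   # same can at 50 cm

print(near > far)
```

Shape-from-shading has no closed form like this tied to eye micro-movements, which is her whole point about the two cues not being equally well served.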

bob dobbs is dead
Oct 8, 2017


Shaggar posted:

everyone here is old

the chans got a huge influx of like 50, 60yos when the whole place got suffused with highly reactionary politics

oldness wasnt the thing, its bans that stick

bob dobbs is dead fucked around with this message at 17:44 on Jan 19, 2022

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'


Best Bi Geek Squid posted:

this

also op I had to drive a new car for work last week and the radio controls were hell. completely flat buttons and touchscreen controls. couldn’t change the station without taking eyes off the road

my corolla is like this but there’s still thumb buttons on the wheel for station and volume

MononcQc
May 29, 2007

"I believe I did, Bob."



post hole digger posted:

is this thread about chairs

It can be: https://worldwarwings.com/1950s-study-reveals-no-such-thing-as-average-pilot-you-might-have-a-chance/

quote:

One bleak day in the late 1940s, 17 pilots crashed for no apparent reason. At this time in history, the United States was experiencing a baffling mystery: good pilots were crashing good planes on a regular basis and they didn’t know why.

At first, the pilots were blamed. The planes were tested repeatedly but no defects were found. But the pilots knew it wasn’t them.

So engineers had to think harder. Were pilots bigger than they had been in 1926? Was that messing with how cockpits were designed?

To find out, a new study was launched to take the measurements of over 4,000 pilots. They measured down to the details, including thumb length, crotch height, and even the distance from ear to eye. Then, they calculated the average. Everyone believed that this would result in a better pilot-to-cockpit fit and that, finally, there would be fewer crashes.

But Lt. Gilbert S. Daniels had doubts. He chose to look at each of the 4,000 pilots’ measurements side by side with the average and found a revelation that was shocking for the times. Not a single pilot even came close to the measurements of the average.

Designing a cockpit for the average pilot meant that it was essentially designed for no one.

This was an enormous breakthrough. But how would they be able to fit each cockpit to an individual pilot? That was not possible.

Enter adjustable seats. While pilots still needed to be within a certain range of dimensions, these new adjustments revolutionized production. The solution was cheap, easy, and better yet, pilot performances soared.
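Daniels's result is easy to reproduce with random numbers: draw a few thousand "pilots" on ten independent dimensions and count how many land near the mean on all of them at once. The distributions and tolerance below are made up (Daniels used the middle 30% of the real measurement ranges), but the effect is generic:

```python
import random

# Count "pilots" who fall within +/- tolerance standard deviations of the
# mean on EVERY dimension simultaneously. Each dimension is an independent
# standard normal draw; tolerance is an invented stand-in for Daniels's
# middle-30%-of-range criterion.

def average_on_all(pilots=4000, dims=10, tolerance=0.3, seed=42):
    rng = random.Random(seed)
    hits = 0
    for _ in range(pilots):
        if all(abs(rng.gauss(0, 1)) <= tolerance for _ in range(dims)):
            hits += 1
    return hits

# on any single dimension, a decent chunk of pilots count as "average"...
print(average_on_all(dims=1))
# ...but essentially nobody is average on all ten at once
print(average_on_all(dims=10))
```

The per-dimension probability (~24% here) gets raised to the tenth power, so "designed for the average pilot" really does mean designed for roughly nobody.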

Silver Alicorn
Mar 30, 2008

The Game is Never Over


this is a good thread even if I don’t understand it. I use a trackball

rotor
Jun 11, 2001



Silver Alicorn posted:

this is a good thread even if I don’t understand it. I use a trackball

i swap between a mouse and a wacom tablet

Expo70
Nov 15, 2021


rotor posted:

So i posted this about VR in the other thread and a bunch of whiners told me it was behind a paywall(???) so here it is again

https://qz.com/192874/is-the-oculus-rift-designed-to-be-sexist/

so another phenomenon to note here is this isn't confined exclusively to women, but you also see it in certain ancestries.

https://pubmed.ncbi.nlm.nih.gov/8825456/

it's why, for example, the japanese for the longest time resisted the use of the right analogue stick in their games, or thought the idea of manually controlling a "camera" was unacceptable, and you can see it in a lot of asian game design vs if you look at games coming out of say, russia they seem totally resistant to this stuff - hence why russia loves fps games, and japan by and large hates them

the real lesson to take from this is you should be building affordances in your game design that mean different players can play however they like, and that limitations are provided to keep things fair -- eg, your camera and your player-character rotation being independent from one another and to be prepared to teach 3d movement skills to your players.

a huge problem in my experience is the player action physics in lots of games are substantially faster than those of real world situations, which comes from 2d games inheriting their expectations from 2d animation -- and 2d animation as a whole is basically a hallucination of illustration representing a physical concept vs 3d animation which is primarily a real representation of a geometry, which is why often a lot of techniques used to make 3d objects look good are borderline hallucinatory (eg, geometry which deforms based on specific camera angle)

eg: example 1,example 2

this research also tells us lots of really interesting things about how humans perceive not only space, but also velocity -- eg, motion sickness caused by translation in vr happens based on the perception of frames of reference and where they lie. right now vr is still assuming that a geometry-first approach is best, when really we know that the brain's cognitive model of the world is supplied by a synthetic vision, what amounts to a synthesized holistic hallucination, so i'm wondering if maybe that kind of approach can be used in some way.

this does make me wonder if the cornea has some sort of difference from estrogenic vs androgenic dominant bodies, given i know a number of people have experienced changes in their prescriptions when undergoing hrt. i think its something that really does need to be studied more, and better understood.


rotor
Jun 11, 2001



Expo70 posted:

this does make me wonder if the cornea has some sort of difference from estrogenic vs androgenic dominant bodies, given i know a number of people have experienced changes in their prescriptions when undergoing hrt. i think its something that really does need to be studied more, and better understood.

one of the most interesting lines in that whole thing to me was:

quote:


Along the way, I also learned that there are more sex hormones on the retina than in anywhere else in the body except for the gonads.

who knew? not me.

yeah i thought the studies she mentioned on people undergoing hormone therapy was interesting and id really like to see more rigorous research
