bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
hes got books


MononcQc
May 29, 2007

books take forever; I'm not closed to the idea of writing one at some point -- maybe if it's an assembly of various blog posts work has me write, but podcasts sound like a lot of loving work especially if alone to run it. Shitposting is easy and effective and can be done over the course of many days while just sitting on my rear end in between two bad tv shows or something. So the format is pretty convenient.

DELETE CASCADE
Oct 25, 2017

i haven't washed my penis since i jerked it to a photograph of george w. bush in 2003

zokie posted:

That was a top notch effort post, shared it with my mom who is a Professor of Sociology.

You should start a podcast or something

how's it feel to know your mom is full of shiiiiiiiiiit

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

DELETE CASCADE posted:

how's it feel to know your mom is full of shiiiiiiiiiit

well she pushed out a 9 lb turd thirty years ago so it’s probably not news

jk zokie idk who u are

Oysters Autobio
Mar 13, 2017
(shout out to the sociology prof mom. gently caress that p-hacking pop-psych "ooohhhh look at those margins of error" bullshit, gimme some qualitative ethnography gently caress yeaaa)

Long time lurker just jumping in and saying this is awesome.

I know this was touched on earlier, but I'm really fascinated by the future of possible UI / UX designs and concepts in computing, specifically future personal computing and designs for the average knowledge worker shmo. I don't know if this is far outside the scope of this thread, but I'm not sure where else this sort of discussion might happen.

I have no background in CS or software engineering, but I find it so interesting how, for normal everyday users (especially your typical "knowledge worker" who just reads an email inbox all day for a living, where the actual "work" is essentially advice in some form or another), the actual "office" type work has really not changed all that much.

It seems like in order to get that first generation of personal computing users, designers had to really emphasize skeuomorphic design that mimicked administrative offices and all their physical objects. "Files" get put into a "Folder", you have an email inbox where you send and receive memorandums, you save documents you might need later in some sort of shared drive (i.e. a filing cabinet). If you're a realtor, or a sales/marketing person, or an HR professional, is the future of UI/UX just super custom apps that are essentially fancier dashboards to visualize data for your specific domain, layered on top of our already existing desktop computing? Most of this stuff to me just looks like either a fancier/nicer-looking version of spreadsheet software or basically MadLibs walkthroughs of whatever esoteric "process" you have to do ("Click here to generate your TPS report cover sheet").

Is there ever going to be a major new re-design for desktops or email (i.e. memorandums) or personal computing that somehow "transcends" paper and all of this? It seems like digitizing office ephemera was the only goal, and now everyone uses a PC as part of their daily work life without ever being a "computer person", and even with all the recent stuff in VR/AR, this all seems to only translate into physical-type jobs like being a mechanic and having AR visualize schematics, or construction, or whatever.

anyways thanks for hearing my meandering bullshit.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Oysters Autobio posted:

Is there ever going to be a major new re-design for desktops or email (i.e. memorandums) or personal computing that somehow "transcends" paper and all of this?

This has already happened. iOS has killed the file / hierarchical nested folders metaphor, and made it entirely application/content specific. Web apps have killed the rest; everything is tied to your account and stored in the cloud.

There are a few places where the file structure leaks, but they’re legacy.

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
i mean, it's happened in the minds of people, and it hasn't happened in any way shape or form in the underlying technical implementations, which is an absolute lol if one of your ostensible goals is yadda yadda make computing more accessible

E4C85D38
Feb 7, 2010

Doesn't that thing only
hold six rounds...?

MononcQc posted:

books take forever; I'm not closed to the idea of writing one at some point -- maybe if it's an assembly of various blog posts work has me write, but podcasts sound like a lot of loving work especially if alone to run it. Shitposting is easy and effective and can be done over the course of many days while just sitting on my rear end in between two bad tv shows or something. So the format is pretty convenient.

I've read several books that were mostly "slightly reorganized blog posts" and I think your writing is engaging enough to get away with it; I'm always excited to see a new post here.

Oysters Autobio
Mar 13, 2017

in a well actually posted:

This has already happened. iOS has killed the file / hierarchical nested folders metaphor, and made it entirely application/content specific. Web apps have killed the rest; everything is tied to your account and stored in the cloud.

There are a few places where the file structure leaks, but they’re legacy.

I guess I'm showing how little I've ever used iOS or even Linux.

I don't feel that old but sometimes I get it when my parents or older people complain about learning the new "iPad whatsadoodle" because it does feel like with custom apps for everything and constant updates and different OS designs people are sort of being forced to relearn their main interfaces on an almost yearly basis.

Whereas growing up in the pre-smartphone era and being on PCs since Win 95, nothing really changed if I wanted to find content in an app, because while the content was accessible through the app itself I could still access it through the file explorer. Was this simply because of Windows' ubiquity, especially in the 90s?

Again, maybe this is getting into a stoned teenager's philosophy of HCI rather than the science itself, but what exactly is the ultimate goal of HCI in effect, and how is that squared with the fact that most of the money, time, and effort in this field today seems to be spent convincing you to enter some behavioural advertising panopticon that is offering an "experience" min-maxed to convince you that you even need this to begin with?

There's a part of me that's really into UI / UX and the possibility of accessibility for all sorts of people to tools and things that would help them, but it's also hard to get past the fact that the two most dominant fields are food or consumer-goods apps on the one hand, and Lockheed Martin Cruise Missile as a Service (CMaaS) connected through your AR/VR headset on the other, while municipal governments are sorting out social services on an MS Word table that gets printed out and annotated for revisions.

(Sorry, I'm all over the place today and I swear I'm not just trying to hijack this into a CSPAM discussion about capitalism or whatever)

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
some of the incompetence of government in creating software is self-inflicted: you need to give software peeps serious power in order to get good software made in an organization, and this has happened in corporations by the general extermination of every and any medium-sized and larger corporation that can't deal with software poo poo (including ostensible software companies that couldn't deal with software poo poo) but has not happened in government. quite separate from ui/ux itself or hci itself or ergonomics itself (which is done by separate peeps from hci)

MononcQc
May 29, 2007

I probably need to put my ideas in order, so this is going to be some rambling, but for the longest time I’ve felt that apps & services going away from “files” is a sort of mistake in the long run (also: rent seeking by making your workflow and data into a single proprietary piece).

But files alone aren't necessarily the good thing; resources are. And I am thinking of resources in the REST spirit of "some document somewhere that has a type and addressable name that can be shared, transferred, or stored".

The data is more shareable than the app and its workflow. It’s more archivable, transformable.

Systems where the data and the functionality to modify it are bundled always feel inherently riskier to me. If the API lets me get my data, it's cool and good. If the data is only accessible through a complex interaction, it's never mine; I always must go through the app to get it, I can't tweak my workflow, I can't integrate its workflow into mine, it's a bubble of its own that forces me to do whatever.
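To make the contrast concrete, here's a minimal sketch (the URL and filename are invented for illustration, nothing here is a real service): when the content has an addressable name and a declared type, anything can fetch it and keep its own copy without going through the app's workflow.

```python
# Minimal sketch of the "addressable resource" idea; the URL and filename
# are invented for illustration.
import urllib.request

url = "https://example.com/notes/2022-05-17.md"  # addressable name

with urllib.request.urlopen(url) as resp:
    content_type = resp.headers.get("Content-Type")  # the resource carries a type
    body = resp.read()

# Once fetched, the content can be archived, transformed, or fed to other
# tools, independently of whichever app originally produced it.
with open("2022-05-17.md", "wb") as f:
    f.write(body)
```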

Files or documents or resources tend to live long because what matters is their content and that content may outlive the app that originally built it, and can be adopted by other mechanisms that further extend them. I care about the content more than the tool letting me modify it; as my needs change so do my tools. Coupling the content to the tool constrains what can be done in the long run.

In the worst case there is always a need for an export function, but I can't easily recall a long-lived, important piece of content that couldn't or shouldn't have been transferable, or that only ever needed to live in one app, even at a loss of fidelity. The only specific cases I can recall are documents you're not supposed to have access to outside of a specific context (hello music and video streaming apps), and even then, it's easy to make copies whether they want you to or not.

A lot of data-centric formats will look like files because files on a file system have these properties. s3 is blob storage, git is a database (file-based but you always use the app to interact with it), websites abstract it all away behind URLs, but they all feel more open than say, a slack channel or a Mural app’s board.

And clearly some apps are very good, but if it can’t export poo poo it instantly feels like a lock in and I have a hard time trusting it.

MononcQc
May 29, 2007

I also never got on board with watching streamers, I entirely dismissed games and poo poo in my post, and I loved Encarta but Wikipedia is just better to me, so it's very possible my perspective is just ramblings about why I like the things I like, has no real backing in fact, and entirely lacks the perspective of newer paradigms I never bought into.

I mean online stores are nicer than ordering from a catalogue so there you go, +1 to the interactive workflow-centric approach.

MononcQc fucked around with this message at 05:49 on May 17, 2022

Midjack
Dec 24, 2007



that puts into words the distrust i feel of the ios data management scheme and why i always feel compelled to add a file browser to my phones.

Endless Mike
Aug 13, 2003



bob dobbs is dead posted:

some of the incompetence of government in creating software is self-inflicted: you need to give software peeps serious power in order to get good software made in an organization, and this has happened in corporations by the general extermination of every and any medium-sized and larger corporation that can't deal with software poo poo (including ostensible software companies that couldn't deal with software poo poo) but has not happened in government. quite separate from ui/ux itself or hci itself or ergonomics itself (which is done by separate peeps from hci)

it's a larger issue that software development for government is largely handled via external contractors, and unless you have a software developer writing contracts (you do not), you're not going to accurately capture what you want and how it needs to be done.

contractors will happily give you exactly what you ask for and not a single thing more

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
power, as i said

there are even good contractors on this earth, its just that governments don't get to use them. don't know, can't pay, can't work with, etc

Carthag Tuek
Oct 15, 2005

Tider skal komme,
tider skal henrulle,
slægt skal følge slægters gang



Endless Mike posted:

it's a larger issue that software development for government is largely handled via external contractors, and unless you have a software developer writing contracts (you do not), you're not going to accurately capture what you want and how it needs to be done.

contractors will happily give you exactly what you ask for and not a single thing more

theyll also "give" you an expensive support contract

MononcQc
May 29, 2007

This week's paper is a draft from Ross Koppel, Sean Smith, Jim Blythe, and Vijay Kothari titled Workarounds to Computer Access in Healthcare Organizations: You Want My Password or a Dead Patient? First of all, great title. This paper is a work of ethnography, where the authors sat and studied how people in medical settings did their work interacting with computers, and noted all sorts of workarounds they'd take to bypass security rules that they judge to be a hindrance to their work.

The idea behind the paper is that clearly, the people behind the computer systems are not working from a realistic understanding of what medical professionals have to contend with to do their job. And maybe, just maybe, if they sat down and figured out how said professionals do their work, it might be different:

quote:

Cyber security efforts in healthcare settings increasingly confront workarounds and evasions by clinicians and employees who are just trying to do their work in the face of often onerous and irrational computer security rules. These are not terrorists or black hat hackers, but rather clinicians trying to use the computer system for conventional healthcare activities. These “evaders” acknowledge that effective security controls are, at some level, important—especially the case of an essential service, such as healthcare. [...] Unfortunately, all too often, with these tools, clinicians cannot do their job—and the medical mission trumps the security mission.

Mostly, the idea is that computer and security experts rarely happen to also be clinical care experts. What the paper finds through observations, interviews, and reports, is that:

quote:

workarounds to cyber security are the norm, rather than the exception. They not only go unpunished, they go unnoticed in most settings—and often are taught as correct practice.

They break down workarounds into categories, and they're just amazing.

Authentication
They note endemic circumvention of password-based auth. Hospitals and clinics write down passwords everywhere, to the point that "sticky notes form sticky stalagmites on medical devices and in medication preparation rooms". They've noted things like:
  • entire hospitals sharing a password for a medical device (the password is taped on the device)
  • emergency rooms' supply rooms with locked doors but the code is written on the door as well
  • vendors that distribute stickers to put your password on your monitor
  • computers with all employees' passwords in a Word doc shortcut on the desktop


In general, this happens because no one wants a clinician prevented from obtaining emergency supplies, and a patient dying, because a code slipped their mind. In some cases, passwords are shared so everyone can read the same patient charts, even when they already have shared access. In some cases, bad actors can use this to mess with data.

But really, even the password requirements themselves are worse in healthcare. The paper states, for example, that "the US Inspector General notes that NIST will certify EHR systems as secure even if passwords are only one-character long".

Password expiry also gets a slam:

quote:

one physician colleague lamented that a practice may require a physician to do rounds at a hospital monthly—but that unfortunate expiration intervals can force the physician to spend as long at the help desk resetting an expired password as he or she then spends treating patients.

De-Authentication
This one is neat. After you've authenticated someone, you need to de-auth them when they walk away so their session ends and nobody surfs on their login. In some cases forgetting to log out can lead to abuse, or to mistakes where people enter information for the wrong patients. Unfortunately, de-auth is often undesirable to practitioners as well, and so they note the following workarounds:
  • defeating proximity sensors by putting styrofoam cups over detectors
  • asking the most junior person on staff to keep pressing the space bar on everyone's keyboard to prevent timeouts
  • clinicians offering their logged-in session to next clinicians as a "professional courtesy" (even during security training sessions)
  • nurses marking their seats with sweaters or large signs with their name on them, hiding computers, or lowering laptop screens to mark them as busy
One clinician mentioned that his dictation system has a 5-minute timeout that requires a password, and that during a 14-hour day he spends almost 1.5 hours logging in. In other cases, the auto-logout feature exists on some systems but not all of them, such that staff sometimes expect to be logged out when they are not.

One specific example of such a usability problem is:

quote:

A nurse reports that one hospital’s EMR prevented users from logging in if they were already logged in somewhere else, although it would not meaningfully identify where the offending session was. Unfortunately, the nursing workflow included frequent interruptions—unexpectedly calling a nurse away from her COW. The workflow also included burdensome transitions, such as cleaning and suiting up for surgery. These security design decisions and workflow issues interacted badly: when a nurse going into surgery discovered she was still logged-in, she’d either have to un-gown—or yell for a colleague in the non-sterile area to interrupt her work and go log her out.

Breaking the Representation
Usability problems often result in medical staff working around the system in a way that creates mismatches between reality and what gets reported in the system.

One example given is that one Electronic Health Record (EHR) system forces clinicians to prescribe blood thinners to patients meeting given criteria before they can end their session, even if the patient is already on blood thinners. So clinicians have to do a risky workaround where they order a second dose of blood thinners to log out (which is lethal if the patient gets it), quit the system, then log back in to cancel the second dose.

Another example comes from a city hospital where creating a death certificate requires a doctor's digital thumbprint. Unfortunately for that hospital, there is only one doctor whose thumbs the digital reader manages to scan, so that doctor ends up signing all the death certificates for the hospital regardless of whose patient the deceased was.

There's yet more for these mismatches:
  • the creation of shadow notes, paper trails that get destroyed because they are not wanted in an official formal record
  • "nurses brain" notes that list all tasks for a patient for their shift (something the computer does not support)
  • the creation of shadow notes because the computer doesn't allow enough precision
  • needing to note the operating room (OR) admission time precisely when the computer is a 2-minute walk from the OR and won't allow future timestamps (on paper, nurses wrote now()+2 mins); so the nurse logs in, turns off the monitor, wheels the patient into the OR, then runs back out to mark the record with a more accurate time

Permission Management
Access control loving sucks:

quote:

Clinicians often have multiple responsibilities—sometimes moving between hospitals with multiple roles at each one, but accessing the same back-end EHR. Residents change services every 30 days during their training. If access is limited to one service, it needs to be reconfigured that often. However, a resident may be consulted about a former patient, to which he/she no longer has access. More frequent are clinicians who serve in multiple roles: the CMIO may need access to every patient record, not only those in her/his specific medical sub-discipline. A physician who focuses on infectious disease may also be on the committee that oversees medication errors, and thus requires access to the pharmacy IT system and the nurses medication administration system. In some hospitals, nurses sometimes authenticate as nurses and sometimes as doctors.

Undermining the Medical Mission
Many health IT systems are so bad they're seen as harming the medical objectives of practitioners.

The example given here is that some hospitals have tele-ICUs, where patients are monitored from distant nurse stations that have a video feed and all the vitals relayed to them. However, when bathing patients, the nurses have to cover the cameras to protect patient privacy, and so the tele-ICU can't monitor them adequately anymore.

There's also a case where a doctor couldn't find the required medication in the software. He found a custom field with free text where he noted the prescription, but the box was not visible on the other end so the prescription was never given and the patient lost half his stomach.

-------

The authors circle back to the value of ethnographic investigations to properly adapt tools to work. They end by stating:

quote:

in the inevitable conflict between even well-intended people vs. the machines and the machine rule makers, it’s the people who are more creative and motivated.

If your system conflicts with what the humans consider as their end goal, they'll work around your system with great creativity and tenacity.

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

shaggared again

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

but that's a healthy dose (heh) of :stonklol:

Shaggar
Apr 26, 2006
EMRs are all really loving bad for sure, but then you throw on top of that how lovely hospital administrators are and how doctors are all whiney babies, and theres basically no system they wont gently caress up.

Shaggar
Apr 26, 2006
reasonable person: "im gonna solve authentication by giving these doctors prox cards!"
EMR: "we dont support that"
administrator: "we dont want to pay for it"
doctor: "i left my prox card at home, give me yours"

Shame Boy
Mar 2, 2010

MononcQc posted:

One example given is that one Electronic Health Record (EHR) system forces clinicians to prescribe blood thinners to patients meeting given criteria before they can end their session, even if the patient is already on blood thinners. So clinicians have to do a risky workaround where they order a second dose of blood thinners to log out (which is lethal if the patient gets it), quit the system, then log back in to cancel the second dose.

lmao in what case would it ever conceivably be okay for a record system to enforce prescriptions

tk
Dec 10, 2003

Nap Ghost

Shame Boy posted:

lmao in what case would it ever conceivably be okay for a record system to enforce prescriptions

Because it will take two days to implement a dismissible prompt. We need this out tomorrow!

Pulcinella
Feb 15, 2019
Sounds like Dynamicland (as a physical space you could actually visit) is no more.

Andy Matuschak posted:


One thing which comes to mind is that Dynamicland is a strange laboratory. It was a space in Oakland that is no more, but it's a physical environment where the primary activity being undertaken was creating this very unusual computing system.

And in fact, that's exactly what the principal investigator is doing right now. He's picking up and relocating the work to a very interesting synthetic biology lab, where maybe now the further development of the system will happen in a way that's meant to support this professor's research.

https://www.notion.so/blog/andy-matuschak

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
and it looks like even omar's halfassed attempt to do an oss thing so you could replicate it isn't on his github anymore, so it's gone gone.

it was pretty good. i got to visit it throughout its development every year or two from cdg to harc to dynamicland and it was always interesting and always struggling for funding

i dunno if there's any actual eulogy anywhere

zokie
Feb 13, 2006

Out of many, Sweden
This presentation reminded me of the article about how experts think

https://youtu.be/Jor-Rh0gwus

KidDynamite
Feb 11, 2005

Shaggar posted:

reasonable person: "im gonna solve authentication by giving these doctors prox cards!"
EMR: "we dont support that"
administrator: "we dont want to pay for it"
doctor: "i left my prox card at home, give me yours"

this was definitely not me just now. 😅

MononcQc
May 29, 2007

This week's review is one of my favorite chapters ever because it explained the limits of automation so well to me. It is On People and Computers in JCSs at Work, Chapter 11 of the book Joint Cognitive Systems: Patterns in Cognitive Systems Engineering (pp. 143-165). And yeah, it's by David D. Woods again.

The chapter introduces a concept called the Context Gap that I just want to refer to over and over again, and write blog posts and talks about. It first starts by defining Joint Cognitive Systems (JCS), reminding people that people and machines are not in opposition (machines are better at this, people better at that) but part of a team that should work together.

They set that up through an example scenario with 3 characters:
  1. the problem-holder: this is an experienced practitioner who is able to self-reflect, and is mostly concerned with meeting needs in the field of practice. Machines and robots are tools to meet objectives, and he operates under pressure.
  2. a roboticist: this is someone who wants to make better machines. He can make interfaces for communication, guidance, robot interactions, and deal with the social/organisational consequences of machines. Challenges felt by the problem-holder are opportunities for this guy, and he wants to make his machine more autonomous to help.
  3. a cognitive systems engineer: this person thinks of joint cognitive systems, and wants to consider the adaptation and interactions between the humans and machines
They're put together in a scenario of emergency first responders, say, a chemical or biological incident (like a sarin gas attack). The roboticist will want to focus on more autonomous robots ("how can I get the robots to do things the problem-holder needs?"), the problem-holder will talk of pressing demands ("how do I enter a room that has both hostile people but civilians and assess the type of chemical damage they have?"), and the cognitive systems engineer will want to focus on adaptation needed and surprises ("how do they trade off time vs. energy constraints?")

Generally there's a tendency for each role to focus inwards: the cognitive systems engineer will think of ways to use robot prototypes to explore human-robot coordination (how to gather information), the problem-holder will focus on learning what the robots can do (what is reliable and deployable), and the roboticist will think of reconciling constraints (which capacities are worth prioritizing given budget and performance).

The chapter asks how we can cross-connect these perspectives to make the general framing less fragile and more adequate for insight generation:

quote:

Problem-holder: “What obstacles can it clear?”
Roboticist: “It can go over items 15 inches or less.”
Cognitive systems engineer: “How do (would) you tell?”
Practitioner: “We drive up to the obstacle and if it’s higher than the treads we know it cannot be scaled.”
Cognitive systems engineer: “The practitioner’s heuristic is an example of workarounds and inferences people develop to make up for the impoverished perceptual view through the robotic platform’s sensors. In contrast, when people are physically present in the environment being explored, they pick up these affordances immediately.”
Fusing the perspectives can yield new findings:
  • Many types of tradeoffs must be respected and balanced
  • Automata take directions and inform distant parties of local conditions.
  • Practitioners see robots as a resource with limited autonomy
  • Team members must pick up and adapt to the activities of others
  • The field is demanding and always requires adaptation
  • Inevitably, automation will exhibit brittleness as situations exceed planned conditions
  • Human adaptive capabilities can be used as a model
Making this fusion requires being able to translate across all 3 areas of expertise.

Responsibilities in JCSs

A basic premise is that some human practitioners bear ultimate responsibility for operational goals. The pilot dies in the plane that goes down, the software operator gets paged and has to fix issues regardless of what they are, the dude remote-controlling the robot is considered to be in command. But what does it mean to be in command? NASA's flight director role defines control as:
  • Must be involved
  • Must be informed
  • Must be able to monitor automation or other subordinate agents
  • Must be able to track the intent of other agents in the system
So the automation and agents' activities must be comprehensible and predictable. Problem-holders are responsible for the consequences of decisions and actions; the person who holds responsibility for a problem is one who has some scope of authority to resolve the situation, which links authority and responsibility together.

A critical part of this is dealing with goal conflicts. Multiple simultaneously active goals are the rule, not the exception. They produce tradeoffs and dilemmas which must be resolved under time pressure and through uncertainty. And sometimes, these are exacerbated by the way the authority-responsibility duality is handled.

For example, you may be responsible for the outcomes of a system but without the ability to influence or control the processes leading to them. This was common in nuclear power plants (following Three Mile Island) where operators had to strictly follow written procedure. However, procedures are often incomplete and contradictory (brittle), and new circumstances arise for which no procedures exist, which demand adaptation and resilience, meaning you couldn't succeed just by following procedures.

The double-bind is that if you follow procedures you can't meet productivity and safety objectives, but if you don't follow them and there is a problem, you could create safety or economic problems. But operators did not have the authority to adjust procedures, so you end up with risks of over- or under-adaptation that are invisible to management. This ends up with pithy informal rules like “Our policy is to do the right thing.”

But this sets up one of the core concepts around the Context Gap: all procedures and algorithms are brittle:

quote:

It is impossible to comprehensively list all possible situations and encode all appropriate responses because the world is too complex and fluid. [...] Thus the person in the situation is required to account for potentially unique factors in order to match or adapt algorithms and routines—whether embodied in computers, procedures, plans, or skills—to the actual situation at hand.

People adapt in one of two ways when they have responsibility (they can be sanctioned) but lack the authority to influence outcomes:
  • pass responsibility back to others. Reject responsibility by narrowly following rules even when you know they are inappropriate.
  • develop covert work systems where they meet higher-level goals while looking like they follow written procedures in official documentation and reporting
The latter meets immediate goals, but ends up degrading the system's own perception of itself over time. So instead, giving more control and reducing sanctions can help keep the system functioning without degradation of communication. This requires constant investment and renewal.

Literal-Minded Agents
Automation tends to do right by its model of the world. The issue is that the model of an automated agent is often limited, and the system can't tell whether its model of the world matches the world it's actually in:

quote:

As a result, the system will do the right thing [in the sense that the actions are appropriate given its model of the world], when it is in a different world [producing quite unintended and potentially harmful effects]. This pattern underlies all of the coordination breakdowns between people and automation

This is essentially the context gap: the gap between the situation assumed in the model, and the actual situation in the world. The context gap represents the need to test whether the assumptions built into literal-minded agents are correct. Monitoring this gap is fundamental to avoiding solving the wrong problem.
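A minimal sketch of what that gap looks like (all names and numbers are invented for illustration, not taken from the chapter): the controller does the "right" thing relative to its model, and only a separate check, the kind of monitoring that falls to the people in the joint system, notices that the model has drifted from the world.

```python
# Minimal sketch of a literal-minded agent and the context gap; all names
# and numbers are invented for illustration, not taken from the chapter.

def controller_action(sensor_reading, setpoint=20.0):
    # "Does the right thing" relative to its model: it trusts the sensor,
    # because the model assumes the sensor reflects the room.
    return "heat_on" if sensor_reading < setpoint else "heat_off"

def context_gap_check(sensor_reading, independent_reading, tolerance=2.0):
    # The monitoring work that falls to people in the joint system:
    # test whether the model's assumption still holds in the actual world.
    return abs(sensor_reading - independent_reading) <= tolerance

sensor_reading = 12.0       # a failed sensor reads low
independent_reading = 23.0  # the room is actually warm

print(controller_action(sensor_reading))                       # heat_on: right for its model, wrong for the world
print(context_gap_check(sensor_reading, independent_reading))  # False: the gap needs repairing
```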

quote:

These results point to a general pattern in technology change and JCSs at work: When a field of practice is about to experience an expanding role for automata, we can predict quite confidently that practitioners and organizations will adapt to develop means to align, monitor, and repair the context gap between the automata’s model of the world and the world. Limits to their ability to carry out these functions will mark potential paths to failure—breakdowns in resilience. In addition, we note that developers’ beliefs about the relationship of people and automation in complex and high consequence systems (substitution myth and other over-simplifications) lead designers to miss the need to provide this support and even to deny that such a role exists

The author points out that people are also vulnerable to being trapped in literal-mindedness, where they correctly react to the wrong situation because their model of the world was inaccurate. However, practitioners are generally able to probe and test whether the situation they face is what they think it is, and have an ability to repair their understanding that machines do not.

They introduce Norbert Wiener's contrast as a warning:

quote:

Artificial agents are literal minded and disconnected from the world while human agents are context sensitive and have a stake in outcomes.

The key is a comparison of how automata and people start from opposite points.

Automata:
  • start from a literal point of view
  • developers exert effort and inventiveness
  • the automata become more adaptive, situated, contextualized
  • limits exist in this process and humans need to maintain and repair the link between the model and actual situation (surprises and resilience)

On the other hand, people:
  • start from a contextualized position
  • developers exert effort and inventiveness to move human systems towards more abstract and encompassing models for effective analysis and action
  • move away from local, narrow surface models
So you have this tension between context vs. general rule-based approaches, and automation and people sort of start from opposing points. People are contextualized and narrow down their understanding to create automation, and automation requires gradually expanding these models to work properly. But this is fundamentally iterative.

Paradoxically, literal-minded agents are less predictable because they are insensitive to context. When pilots in a cockpit are confused ("Why is it doing this? What will it do next?"), it comes from the mismatch between the pilots knowing the context cues and the autopilot not knowing them.

quote:

The computer starts from and defaults back to the position of a literal-minded agent. Being literal-minded, a computer can’t tell if its model of the world is the world it is in. This is a by-product of the limits of any model to capture the full range of factors and variations in the world. A model or representation, as an abstraction, corresponds to the referent processes in the world only in some ways. Good models capture the essential and leave out the irrelevant; the catch is that knowing what is essential and irrelevant depends on the goal and task context

As Ackoff (1979, p. 97) put it,

quote:

The optimal solution of a model is not an optimal solution of a problem unless the model is a perfect representation of the problem, which it never is.

There's always a need to revisit the connections between the model (deployed as algorithms, plans, procedures, routines, skills) and the actual conditions faced:

quote:

It is up to people situated in context to ground computer processing in the world, given that particular situations can arise to challenge the boundary conditions of the model behind the algorithm (the potential for surprise). For people in context there is an open future, while literal-minded agents are stuck within the walls of the model underlying their operation.
[...]
Closing the context gap is about knowing and testing what “rules” apply in what kind of situation. The key is to determine the kind of situation faced, and to recognize how situations can change. People can be sensitive to cues that enable them to switch rules or routines as they test whether the situation is different from what was originally construed. And despite not always performing well at this function [...], people provide the only model of competence at re-framing or re-conceptualizing that we can study for clues about what contributes to expert performance and how to support it.

Improving systems requires analyses of the brittleness of automata, but also of sources of resilience to determine how and how well people are supported in their roles.

But that's not all, because people have their own limits too:

quote:

On the other hand, people as context-bound agents adapt to what they experience in the particular. This is a local perspective that is open to the potential for unbounded particularity, despite regularities. This perspective is important in the sharp end of practice as people bound to a local situation pursue their goals, with their knowledge, given their mindset[...]. The danger is that the local agent responds too narrowly to the situation in front of them, given that they can call to mind only a limited set of the relevant material and entertain only a limited set of views of the situation, missing the side effects of their actions and decisions at broader scales and time frames.
[...]
Without continued effort, people tend to fall back to strategies that are dominated by local factors and context.

Automation and procedures serve the purpose of connecting useful abstractions back to guided actions. There's always going to be a tension. People are limited by their local focus, and automation is limited by its brittleness and lack of contextualism. Together, they can work towards better alignment.

Directions for Design
At this point we've covered the context gap which is the one thing I wanted to introduce, so I'll be a lot more terse here.
  • we have to break the fixation on autonomy of automation, and realize that we should instead frame automation as ways to improve perception and action. They improve our capacity, scope, activities, precision, forces, and indirect actions on the world.
  • automation that acts independently does not remove humans from the scene, but couples them more. Automation projects intent into the world, and must be seen as a tool that will be exploited by people for information and action.
  • No plan survives contact with a disaster-in-the-making
  • Although the potential for surprise isn't the same everywhere, all JCSs are adapted for it. Resilient systems are prepared to be surprised.
  • In complex settings, difficulties cascade and demands escalate. Automation will exhibit brittleness that will challenge practitioners
  • Design with these brittle boundaries in mind, and let people figure out when automation is reaching its limits
  • Any increase of autonomy requires an increase of observability and feedback, especially in communicating future intent. Ignoring this increases the chance of coordination surprises.
  • Increases in autonomy also require increases in directability (telling automation to adjust and change actions)

And with this, the chapter ends; the next one covers laws that govern joint cognitive systems.

MononcQc
May 29, 2007

This week's paper is a short one, Alarm system management: evidence-based guidance encouraging direct measurement of informativeness to improve alarm response, by Michael F. Rayo and Susan D. Moffatt-Bruce.

It's an interesting one because it talks of alert fatigue and how to evaluate the effectiveness of alerts to make sure they're relevant and well used in the context of hospital clinical care. This is the sort of stuff you have opinions about if you've been on call, but not necessarily a lot of data on.

quote:

[T]he alarm fatigue label subsumes a myriad of potential contributors, and only some have been shown to adversely affect alarm system performance in either laboratory or real-world settings.
[...]
It is reported as the cause when clinicians improperly ignore, override, silence, or mute clinical alarms that signify critical patient events
[...]
However, of the many phenomena that are thought of as part of alarm fatigue, only some have been shown to predict the alarm system’s ability to redirect attention to emerging hazards and, therefore, improve patient outcomes.

Generally we sort of think of the number of alarms (too many of them) as the thing that creates alert fatigue, but there's nothing in the literature that really defines what "too much" is. There is no clear link established between reducing alarm count and actually reducing false alarms or mental workload, although the two latter elements are known to have an impact. They also specifically call out that reducing the overall alarm count shouldn't be done without explicitly measuring the false alarm rate, because you don't want to cut out real positives.

All alerts have to face real, well known problems, which they categorize as "Informativeness":

quote:

Informativeness is the discrimination power of an alarm system to detect abnormalities in the world and infer what is worthy of attention. It measures the proportion of alarm signals that successfully convey a specific hazard, which requires a combination of the sensory, informational, attentional and cognitive aspects described in table 1.

Here's table 1:



They say that informativeness drops when systems notify people of events that are not happening (false alarms) or of events that are happening but are not hazards (unnecessary or non-actionable alarms). Also, alarms that are grouped and stand for many things or many severities cause informativeness to drop.

Decreased informativeness is associated with worse response rate and response time. They frame alert fatigue interestingly:

quote:

This research reframes alarm fatigue as a necessary calibration by the human operator to pay decreasing amounts of attention to signals that do not merit that attention

In high-tempo situations, you pay less attention to less important alarms, the same way pilots and drivers narrow their field of vision and ignore more peripheral information when demands increase. They consider the problem to be technical rather than one of competence or motivation.

They state that research has shown that good alerts need an informativeness rate of ~71% in order to be more helpful than harmful. Healthcare alerts tend to range from 1-23%. Informativeness is measured as positive predictive value (PPV):

quote:

the proportion of true positives out of all positive responses, but have also been measured by negative predictive value, sensitivity and specificity, and could be measured by any measure of discrimination power. Most studies calculate PPV based only on the traditional definition of false alarms, but there is recent awareness that requiring that true positives be neither false nor unnecessary may be a better predictor of alarm response in some settings
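A quick worked example of that definition, with invented numbers: if 200 alarms fire over a shift and only 30 of them point at a real, actionable hazard, the channel's informativeness is 30/200 = 15%, squarely inside the 1-23% healthcare range reported above and far below the ~71% mark.

```python
# Toy PPV calculation; the alarm counts are invented for illustration.
true_positives = 30          # alarms that pointed at a real, actionable hazard
false_or_unnecessary = 170   # false alarms plus non-actionable ones

ppv = true_positives / (true_positives + false_or_unnecessary)
print(f"informativeness (PPV) = {ppv:.0%}")  # 15%: well under the ~71% threshold cited above
```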

The rest of the paper mentions healthcare-specific contexts, like the design of audio or sensory alerts to cope with the operating room environment. They also state the usefulness of some "continuous signalling" like the beeping of an ECG.

Their hope is that these factors can lead to better alert evaluation in practical settings.

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

I don't have much to add right now but i really appreciate this and have used some of your insights in work chats

MononcQc
May 29, 2007

Thanks! I'm assuming people read at their own pace and I :justpost: whatever, though it does feel nice to see some of them acknowledgements vs. void-posting and hoping it reaches some people v:shobon:v

Gnossiennes
Jan 7, 2013


Loving chairs more every day!

:same:

really enjoying these and i appreciate your posting!

Phobeste
Apr 9, 2006

never, like, count out Touchdown Tom, man
it's really helpful in structuring my thinking since i work on, well, automation for experienced practitioners (lab scientists in my case) so I'm like the exact person these articles are aimed at. i love it.

Midjack
Dec 24, 2007



yeah i read all these but don't feel like i have much to contribute and there are plenty of other threads to noisepost in so i just stay quiet.

MononcQc
May 29, 2007

This week's paper is Anticipatory Thinking by Gary Klein. Anticipatory thinking is the process of recognizing difficult challenges that may not be clearly understood until they're encountered, and preparing for them. It is overall a sign of expertise in most domains.

Klein mentions that it would be a mistake to conflate "anticipatory thinking" with "predicting what will happen"; in fact, the concept is framed as "gambling with our attention to monitor certain kinds of events and ignore or downplay others." It's deeply tied to uncertainty and ambiguity, and blends with various concepts and experiences.

They mention, for example, experienced drivers actively scanning for hazards and potential trouble spots whereas beginners just try to stick to specific lanes. The experienced drivers are not expecting hazards or predicting them, but are managing their attention knowing they could appear.

Part of that focus is not purely probabilistic, and some of it is aimed at low-probability but high-threat events. Prediction is concerned with guessing future states of the world, while anticipatory thinking is about preparing to respond, not just predicting. The whole activity is centred on what the people doing it could do.

The paper notes three common forms of anticipatory thinking, while specifying that they expect more forms to be found:

Pattern Matching
Pattern matching takes circumstances of the present situation to bring out similar cues and events from the past. Experts have a repertoire of these and can react instantly:

quote:

We will sense that something doesn’t feel right, or that we need to be more vigilant. Greater experience and higher levels of expertise make it more likely that our anticipatory thinking will be accurate and successful. [...] They also carry a danger, namely overconfidence in our experience may lead us to make a diagnosis, but miss something new or novel that may be seen by the naïve observer.

Trajectory Tracking
This one is about being 'ahead of the curve' -- looking at where events are going and preparing ourselves for how long it will take to react:

quote:

[A]nticipatory thinking here blends our assessment of external events with the preparations we make to handle these events.

Trajectory tracking is different than pattern matching. It requires us to compare what is expected with what is observed. The process of tracking a trajectory and making comparisons is more difficult than directly associating a cue with a threatening outcome.

Convergence
This one is about seeing connections between events:

quote:

Instead of responding to a cue, as in pattern matching or to a trajectory, we also need to appreciate the implications of different events and their interdependencies.

The paper mentions a long example about how often this does not happen due to communication and contextual challenges, and contrasts it with a cool example of when it does:

quote:

In one high-level Marine Corps exercise [...] The plan left a very weak defense against an attack from the north so the controllers got ready to launch precisely this type of attack. They were gleefully awaiting the panicked response of the Marine unit they were going to punish. However, the Marines had augmented their staff with some experienced Colonels who had formerly been on active duty but now in the reserves.
[...]
One of them noted a situation report that an enemy mechanized brigade had just moved its position. That was odd – this unit only moved at night, and it was daytime. He wondered if it might be on an accelerated time schedule and was getting ready to attack.
[...]
Checking further, the Colonel talked to the Senior Intelligence Watch Officer who was also suspicious, not because of any event but because of a non-event. The rate of enemy messages had suddenly declined. This looked like the enemy was maintaining radio silence. Based on these kinds of fragments, the Colonel sounded an alert and the unit rapidly generated a plan to counter the attack—just in time. The Colonel didn’t predict the enemy attack; he put together different cues and discovered a vulnerability in his unit’s defenses.

---

These three mechanisms play together to let the decision maker mentally simulate courses of action, to know what sorts of problems might arise. Anticipatory thinking is also essential for teamwork and coordination. For teams to be effective, they need to be able to predict each other's actions and reactions to unexpected events.

We've covered this with a previous paper, but the author reiterates that problem detection isn't just about accumulating more and more discrepancies until you tip the scales; in most cases it requires re-framing the situation to get the significance of things. Anticipatory thinking is part of the mental simulation that helps generate expectations.

To be surprised means you had to have anticipated things (albeit wrongly), and it's a sign that you need to repair common ground/shared understanding. And there are barriers that can get in the way of anticipating well.

Common ones are fixation (maintaining a frame despite evidence that it is entirely wrong; think Chernobyl and the operators thinking it was impossible that the reactor had blown up despite graphite on the ground), explaining away inconsistencies with other knowledge, or being overconfident in your abilities (and therefore badly evaluating risk).

At the organizational level, there are extra barriers around policies that hide weak signals, perverse incentives, gaps between the people with the data and those who know how to interpret it, and challenges in directing people's attention -- we covered some of this in a previous paper about coordination challenges.

In fact they ran experiments to improve techniques and noted:

quote:

Each scenario included weak signals – to determine if and when the team noticed these signals and their implications.
[...]
A key finding is that at least one individual in every group did notice the weak signals and their implications and typically half the group noticed the weak signals, based on the individual notes. However, no team took these early signs seriously. Usually, they weren’t mentioned at all. If mentioned, they were dismissed. So the groups themselves did not "consciously" pick up or act on those signals. Therefore, the challenge shifts from helping people recognize weak signals to helping their groups and organizations take advantage of the anticipatory thinking of individuals.

Improving Anticipatory Thinking
Gotta love papers that tell you how to fix things, rather than just "poo poo's hard, tough luck". A lot of them seem to be referring to Cynefin.

For fixation, they state that you want an outside view to provide a reality check. There's also value in bringing someone with fresh eyes so they're not stuck in the current interpretation of a situation -- which isn't improved with a devil's advocate. Fresh eyes are needed for an authentic dissent.

Weak mental models are going to be a limit as well. They refer to a lot of exercises, one they dub attractors/barriers (which I googled to find more about and barriers seem to be about creating simple rules to always respect, and attractors being about embracing positive reactions in people). They also mention the Future backwards exercise to increase the lessons learned in sessions. Other tricks there are to slow down how often people rotate around responsibilities so they have more time to gain experience.

For organizational barriers, Klein breaks them out into two categories.

For between-organization barriers, they mention things that help the flow of ideas, interpretations, and information. One example is creating new units/teams from people who were in competing groups, under a unified hierarchy, so they can share their experience without competing anymore. This is judged more effective than another tip, which is to create "liaison officers" whose role is to ease communication.

For within-organization barriers, they want people to voice unpopular concerns. They once again say that devil's advocates don't work. They mention good results from organizations ritualizing dissent, specifically suggesting PreMortems, a practice where you assume your project failed and investigate a virtual failure to find what would have been plausible mechanisms behind its failure. Another view comes from high-reliability organizations in saying that they are always mindful and active towards potential problems rather than being dismissive of them.

Another blocker is going to be complexity:

quote:

Military organizations try to overcome complexity by structuring situations. The costs of this structuring process include a difficulty in seeing connections that cut across the boundaries and a vulnerability to situations that don’t fit the pre-existing structure.

Automation can also be a blocker. Here they recommend the approach mentioned many times here of requiring an active mindset, which means automation should support cognitive work by augmenting people rather than replacing them (which we covered a lot here).

For Team coordination, they mention that it's essential to have a see-attend-act approach, meaning that you have to be aware that people who see a threat, people who are in command for it, and people who can solve it may all be different people.

And that's about it for the paper! It adds a section going deeper and commenting on books and resources that add weight to the idea that anticipatory thinking is very much a thing that is part of sensemaking but distinct from other macrocognitive functions (decision making, planning, coordination), even if it intersects with all of them.

MononcQc fucked around with this message at 05:12 on Jun 12, 2022

Pulcinella
Feb 15, 2019
I debated about putting this here or in the CJs thread, but I hate server controlled feature flags because most people who have access to them don’t actually understand what they do and inevitably (today in my case) someone will see a flag turned off (either because that feature isn’t done or is broken or something else) and turn it on (sometimes without telling anyone) thinking that “oh this is why that feature wasn’t working.”

I feel these kinds of flags are similar to what happened in the Three Mile Island incident, where a relief valve was stuck open but the light on the control console said it was closed. Except the light didn't really indicate that the valve was closed; it indicated something more like "the motor that controlled the valve is in a state similar to what it would be if the valve had closed normally."

Shame Boy
Mar 2, 2010

Pulcinella posted:

I debated about putting this here or in the CJs thread, but I hate server controlled feature flags because most people who have access to them don’t actually understand what they do and inevitably (today in my case) someone will see a flag turned off (either because that feature isn’t done or is broken or something else) and turn it on (sometimes without telling anyone) thinking that “oh this is why that feature wasn’t working.”

I feel these kinds of flags are similar to what happened in the Three Mile Island incident, where a relief valve was stuck open but the light on the control console said it was closed. Except the light didn't really indicate that the valve was closed -- it only indicated that the mechanism controlling the valve was in the state it would normally be in if the valve had closed.

the light indicated only that power was running to the "close valve" motor, not the actual state of the valve, yeah
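
Translating the analogy into flag terms, here's a toy sketch (every name in it is made up, it's not any real flag library): whatever dashboard people look at should surface both what the flag is set to and what the running service reports actually doing, instead of treating the setting itself as proof of behaviour.

code:

from dataclasses import dataclass

@dataclass
class FlagStatus:
    name: str
    configured: bool  # commanded state: what the flag store says
    observed: bool    # sensed state: what the running service reports doing

def flag_status(name, flag_store, runtime_report):
    # Compare the "indicator light" (configured) against the "valve" (observed).
    return FlagStatus(
        name=name,
        configured=bool(flag_store.get(name, False)),
        observed=bool(runtime_report.get(name, False)),
    )

status = flag_status("new_checkout", {"new_checkout": True}, {"new_checkout": False})
if status.configured != status.observed:
    print(f"{status.name}: flag says ON but the feature isn't actually running -- "
          "figure out why before flipping anything else")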

MononcQc
May 29, 2007

All of our feature flag changes go to the related ops channel along with incidents/near-miss discussions so people are aware of their changes.
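
Roughly, the shape of it is something like this toy sketch (hypothetical names, not our actual tooling): the flip itself has to carry who did it and why, and the ops-channel message is a side effect of making the change rather than something someone has to remember to post.

code:

import datetime

def set_flag(store, name, value, actor, reason, notify):
    # Refuse silent changes: a flip needs an author and a stated reason,
    # and the change gets mirrored into the ops channel automatically.
    if not reason:
        raise ValueError("refusing to change a flag without a stated reason")
    old = store.get(name)
    store[name] = value
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    notify(f"[feature-flag] {actor} set {name}: {old} -> {value} ({reason}) at {stamp}")

flags = {"new_checkout": False}
set_flag(flags, "new_checkout", True, actor="alice",
         reason="re-enabling after yesterday's near-miss follow-up",
         notify=print)  # stand-in for whatever actually posts to the ops channel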

MononcQc
May 29, 2007

For this week I decided to leave systems and cognitivism behind a bit and go look for human factors stuff, so I headed to a journal's website, asked to see heavily cited papers from the last few years, and found External Human-Machine Interfaces on Automated Vehicles: Effects on Pedestrian Crossing Decisions.

This is a small experiment done in VR with 28 participants, to detect which mechanisms are most effective for self-driving cars to communicate to pedestrians that they can safely cross the street. This fits the broader theme of "external Human-Machine Interfaces" (eHMI):

quote:

Twenty-eight participants (21 males, 7 females) with a mean age of 24.57 years (SD = 2.63) took part in the study. Only people from right-hand side driving countries were allowed to participate. Participants had five different nationalities: 22 German, one Swiss, three Italian, one Chinese, and one Spanish. They were all living in Germany at the time of the experiment. Two participants reported being color-blind. Nine people wore glasses, and two people wore contact lenses during the experiment.

The experiment is sort of tricky to describe. It's done in VR, and the person is standing on the side of the street (they can't see arms or legs or whatever):

[VR screenshots of the participant's street-side viewpoint omitted]

The vehicles (of many types and sizes) arrive from the left (at 90 meters), follow the yellow line, and turn left ~30m past the participant. Cars had two types of behaviour: yielding and non-yielding. The yielding vehicles start slowing down at ~35m from the pedestrian and stop 7.5m away; non-yielding vehicles just maintain a constant speed. The yielding vehicles have multiple behaviours:

[image of the four eHMI designs (text, smiley, front brake light, Knightrider bar) omitted]

(The Knightrider bar sweeps left-to-right in half a second to indicate "you can cross"; the front brake light was green when not yielding and red when yielding.) These behaviours ("eHMI state change") can be triggered early (50m from the pedestrian), intermediate (35m, when the vehicle starts slowing down), or late (20m). Each participant saw 340 vehicles over 30 minutes, with 45 yielding, 45 non-yielding, and the rest being fillers, split into five blocks of 9 waves (45 waves total).
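
If it helps to see the timing, here's how I read a single yielding vehicle's behaviour, as a toy sketch (the distances come from the paper's description above; the code itself is just my own illustration, not the authors'):

code:

# Distances in metres, taken from the summary above.
EHMI_TRIGGER = {"early": 50.0, "intermediate": 35.0, "late": 20.0}
DECEL_START = 35.0  # yielding vehicles start slowing down here
STOP_AT = 7.5       # and come to a stop here

def vehicle_state(distance_m, yielding, ehmi_timing):
    # What one approaching vehicle is doing at a given distance from the pedestrian.
    ehmi_on = yielding and distance_m <= EHMI_TRIGGER[ehmi_timing]
    if not yielding or distance_m > DECEL_START:
        phase = "constant speed"
    elif distance_m > STOP_AT:
        phase = "decelerating"
    else:
        phase = "stopped"
    return ehmi_on, phase

for d in (60, 45, 30, 10, 7.5):
    print(d, vehicle_state(d, yielding=True, ehmi_timing="early"))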



Participants were given a handheld remote:

quote:

Each time you feel safe to cross, please do the following: (1) Press the button on the remote. (2) Keep pressing the button as long as you feel safe. (3) When you do not feel safe to cross anymore, release the button. The task was practiced in a session of 3 min without eHMI, and was repeated after the preexperiment questionnaire.

They also had a form to fill in after the exercise.

The paper describes the experimental setup, which I'll elide here. It does contain caveats around the resolution, comfort of VR headsets, etc. They also mention the statistical work for each of the things they tested. I'll cut to some high-level findings:

  • Larger vehicles are consistently considered less safe than smaller vehicles
  • Non-yielding vehicles' use of eHMI makes no difference at all (their eHMI never changed state anyway)
  • All four eHMIs felt safer to participants than the baseline non-communicating car
  • It is workable to signal an intent to yield before actually starting to yield, and have people act on it
  • The improvement from eHMIs was 10%, which represents a difference of 8.42s -- people felt safe to cross at the short distance with the baseline car, and at the intermediate distance with eHMIs
  • eHMI vehicles required some learning, with people initially not knowing what the signals meant, except for the text-based one, which showed good results right away

This last point is where all the fun meat is:

quote:

In essence, the smiley, front brake light, and Knightrider provided the same information as the text, as the eHMIs changed state at the same moment. However, the nontextual eHMIs (front brake light, Knightrider, and smiley) provided no explicit instruction to the participants. For example, when the smiley changes to “sad,” this could mean several things: A participant may think that the sad face pertains to him/herself (an egocentric perspective) or to the vehicle (an allocentric perspective). Research suggests that switching from an egocentric to an allocentric visual perspective absorbs cognitive processing time. In particular, young children and older persons appear to have difficulty in taking another agent’s perspective.

The front brake was green when the vehicle was maintaining speed. Our choice of green is consistent with a survey study [...] which found that respondents associated green with a moving vehicle. However, our color coding may still have been confusing for the participants. [...] We argue that the nontextual eHMIs—such as the smiley and front brake light—can only be interpreted after having learned that a change of state implies that the vehicle will yield.

The authors also point out that while text is less ambiguous, it is also more demanding to process and may be less visible from a distance or in bad weather conditions.

The big factor to keep in mind is disambiguating whether the eHMI communicates information about the intent of the car or a demand addressed to the pedestrian. Who is it talking about? When that is not clear, people take a bit of time to learn what the car means through observation before they start to trust the behaviour. This makes me think that text like 'stop' or 'go' would be just as ambiguous (stop who, the car or me?), but that showing an icon of a pedestrian crossing might convey the intent without requiring text at all.

Anyway, that's it for the text. It's not like this is a big foundational paper or anything, but I felt a bit of a shakeup could be good and went with something with a few hundred citations. I enjoyed getting that sort of surprise on the focus around egocentric vs. allocentric perspectives in interpreting the end message. Good thing to keep in mind!


in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

MononcQc posted:

I enjoyed getting that sort of surprise on the focus around egocentric vs. allocentric perspectives in interpreting the end message.

https://m.youtube.com/watch?v=wX1x7pfH8fw

  • Post
  • Reply