MononcQc
May 29, 2007

Shaggar posted:

it is the ultimate irony that what you are talking about here is implementing real process controls. something you claim has been done, but has never actually been done in healthcare. process controls and process improvement are literally how you close that gap from work-as-prescribed to work-as-done.

the problem with the current system is they have no real process controls beyond vague guidelines, and they have no process improvement mechanisms. in many cases they can't even do real process improvement because they lack the controls and metrics to monitor things. or at least they lack the comprehension and desire to collect the metrics.

The constant "overdosing" is a great example of how you have loads of excellent data trapped in the EHRs about the outcome of every overdose event. They should be taking that data across the entire US and using it to establish new guidelines based on actual data that eliminate the need to warn people about fake overdoses.

there's also a larger problem with people taking components of process systems when the system doesn't work unless you take all of it. project management is a great example that's simpler to understand. e.g. management takes the 2 week cadence of agile, but then leaves out all the capacity planning components and they think they're agile.

it's the same thing with quality and process controls. you have to do it all or it doesn't work.

You conveniently flip back to process controls as if you were right all along. I'm not arguing against trying to improve processes, I'm arguing against the idea that suing people and sending them to jail is a good start. It's not, because it makes improving process much harder. Don't try to take my argument as supporting yours, it's the opposite of that.

On the idea of "process controls," yours is an incredibly vague description. As broadly defined, it often ends up being some variation on Taylorist scientific management. In that form it has already been attempted in healthcare in both the US and UK, and it didn't give great results. You and I have discussed this in another thread in the past, where I kept referring to the book Still Not Safe, which analyzes the last 20 years of evidence-based medicine and its lack of results. You mostly ignored anything I posted and repeated your opinion as if that gave it any more weight.

I'm not willing to throw the idea of "process control" out the window, but you need to be clearer about what you mean. You deservedly have a reputation for bad-faith arguments made for the sake of riling people up, and I'm not necessarily going to entertain your arguments when we've repeatedly discussed this and your involvement limits itself to "no, this isn't what I meant" without ever defining what you do mean in the first place.

Either engage in good faith or kindly gently caress off.

Adbot
ADBOT LOVES YOU

MononcQc
May 29, 2007

Oh here's a timely one from "metrics in healthcare": UK hospitals had targets for normal births without complications, and a report on the consequences dropped just today: https://www.theguardian.com/society/2022/mar/26/shropshire-maternity-scandal-300-babies-died-or-left-brain-damaged-says-report

quote:

Three hundred babies died or were left brain-damaged due to inadequate care at an NHS trust, according to reports.

The Sunday Times has reported that a five-year investigation will conclude next week that mothers were denied caesarean sections and forced to suffer traumatic births due to an alleged preoccupation with hitting “normal” birth targets.

The inquiry, which analysed the experiences of 1,500 families at Shrewsbury and Telford hospital trust between 2000 and 2019, found that at least 12 mothers died while giving birth, and some families lost more than one child in separate incidents, the newspaper reported.

The full report isn't out yet, and the interim report from 2020 had already covered this -- it had also found that some mothers were blamed for the deaths of their own infants -- and recommended various emergency changes to the reporting structures within the units, plus re-training after bad cases kept happening due to inadequate choices of drugs in risky pregnancies (it mentioned that the units were not adequately learning from past mistakes).

The full report is likely to be more nuanced than the headlines. Anyway, I don't mean this as a random dig, just that picking metrics, and anticipating how their selection will play out once they become objectives, is god drat tricky.

Sagebrush
Feb 26, 2012

https://twitter.com/banovsky/status/1507890882938380288

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'


touchscreens in cars should be fuckin illegal

Wild EEPROM
Jul 29, 2011


oh, my, god. Becky, look at her bitrate.
what if, in addition to both of those, the wheels also had scroll acceleration

DELETE CASCADE
Oct 25, 2017

i haven't washed my penis since i jerked it to a photograph of george w. bush in 2003
why do checklists work for pilots but not for nurses?

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
because pilots and ex pilots write pilot checklists and worthless administrators write nurses checklists

Sagebrush
Feb 26, 2012

DELETE CASCADE posted:

why do checklists work for pilots but not for nurses?

my quebecois uncle linked a great paper about it.

https://www.researchgate.net/publication/278789222_The_problem_with_checklists

quote:

Healthcare checklists do not always share design features with their aviation counterparts. For an Airbus A319 (figure 1), a single laminated gatefold (four sides of normal A4 paper) contains the 13 checklists for normal and emergency operations. Tasks range from 2 (for cabin fire checklist) to 17 (for before take-off checklist), with an average of seven per checklist. Each task is described in no more than three words and can be checked immediately, with usually a single word of confirmation. It has no check boxes, does not require signature and is designed to be used by one person, with specific checklists performed aloud.

In contrast, the Centers for Disease Control and Prevention central line-associated blood stream infections checklist has 18 tasks, with no less than 4 word descriptors (and up to 22 words), and describes non-procedural tasks that need to be completed over several minutes (and hours), which cannot be ‘checked’ (eg, ‘empower staff’). The WHO safer surgery checklist (first edition) has 21 tasks (7+7+7), with wording ranging from 2 to 16 per task, and involves several people simultaneously. Some tasks are easily checked and completed, while some require discussion and some cannot be ‘checked’. One feature of checklists in healthcare, in comparison to most other industrial uses, is that they increasingly feature items intended to promote communication and teamwork (eg, introductions, discussion of patient risk factors, concerns and so on), in addition to straightforward categorical checks (eg, have hands been washed, has consent been obtained and others). If used in the right way, they can indeed assist in the change of communication patterns and specific coordination tasks (such as ‘call outs’, task location and task visibility). However, creating an opportunity for a more general team talk is not a traditional feature or necessarily a particular strength of checklists. In fact, authentic checklist completion will rely on good communication and teamwork in the first place, which is not always the case.

basically: medical administrators cargo culted the idea of checklists without understanding why and how they are used in aviation, wrote checklists with uncheckable non-procedural tasks, and filled them full of bullshit management voodoo.


EMPOWER STAFF. lmao


vuk83
Oct 9, 2012

DELETE CASCADE posted:

why do checklists work for pilots but not for nurses?

Because the pilot can always stop the flight; the nurse can't delay treatment. The staffing ratios are also much worse: a nurse is solely responsible for x number of patients, while the pilot and his co-pilot are responsible for one plane.
Information is also more easily accessible to a pilot than to a nurse.

Sagebrush
Feb 26, 2012

Pilots can't just stop the flight when they feel like it FYI

MononcQc
May 29, 2007

http://picma.info/sites/default/files/Documents/Events/November%20Oscar%20article.pdf is an interesting case from 1989 on that front.

A pilot on his way to Heathrow with a sick crewmember didn't divert to Frankfurt when he learned the weather would be horribly foggy in London, nor to non-Heathrow English airports (with the general understanding that you don't want to divert an aircraft because that lands you a lot of questions about the costs it entails). He was short on fuel, had an autopilot that wouldn't lock onto the localiser come landing time, and ended up with an impractical checklist. He missed a landing, took another go and succeeded with no injury or damage, but the first attempt missed a hotel by ~12ft and set off the hotel's sprinklers, causing damage there.

quote:

Stewart, in defence of his actions during the company’s own inquiry, had doggedly raised issue after issue, some of which danced around the question of exactly what had gone wrong and why. Accused, for example, of failing to file immediately upon landing the necessary MOR – ‘mandatory occurrence report.’ Stewart argued that because he had at least initiated the go-around from decision height and had landed successfully out of the second approach, it didn’t constitute an ‘occurrence.’ Few agreed.

He argued that nowhere was it officially written that a proper go around required a pitch up in the airplane’s attitude of three degrees per second, which the airline claimed was the proper technique. (Stewart had applied back yoke that rotated November Oscar at a rate of less than one degree per second).
Well, maybe not, but it is the way to get the job done.

At one point, Stewart created a transcription of every oral call-out, checklist response and radio transmission required by company and CAA regulations during the approach and demonstrated that simply reading the script aloud, nonstop, took seven minutes. The entire approach itself had consumed only four, thus demonstrating that the letter of the law required the impossible. It was an interesting point, but nobody cared.

The whole thing looked like it involved bad procedures, doctors bending to pressure to make crew fly even if sick, and a sort of cover-up by the airline industry, but the pilot alone got sued and suffered the consequences. The guy was demoted, lost his lawsuit, his job, and eventually killed himself.

Anyway, it's interesting to think that even the really good checklists aren't always perfect and sometimes get disregarded for being impractical. Some of them have the benefit of decades of refinement and constant training/simulation, so of course just having management write a checklist in a vacuum isn't going to be sufficient.


vuk83
Oct 9, 2012

Sagebrush posted:

Pilots can't just stop the flight when they feel like it FYI

But they can initiate abort/divert procedures.

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome
checklists are pretty good imo

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost

vuk83 posted:

But they can initiate abort/divert procedures.

and then they get fired afterwards if they hosed up

jobs, aint they poo poo? and it was worse at aeroflot so not even capitalism, just havin a job

Midjack
Dec 24, 2007



to be fair it's not just commercial pilots who follow checklists.

echinopsis
Apr 13, 2004

by Fluffdaddy
I take a checklist on first dates so I can remember to check for all red flags not just the easy ones to remember like “do you have standards?”

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff
still here, learning in progress

very happy

good thread

Midjack posted:

to be fair it's not just commercial pilots who follow checklists.

it's usually also anybody building anything. and isn't there some sort of authority that comes from being handed a checklist extrinsically, such that a person doesn't even meaningfully evaluate its contents?

i've seen it so many times: teams become more focused on addressing the needs of the person in charge than on alerting the team to a potential alternative or explaining an edge case that sits outside the manager's or team's knowledge base.

months of time are wasted chasing rabbits on problems that could have been entirely circumvented if the whole team weren't on autopilot through the meetings and discussions.

you need one person in the room who can put their hand up and say "nope, that's not a good idea" to the people designing the recipe, because although they understand cakes, they know nothing about kitchens, staff, or equipment, so to speak.


Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff

Carthag Tuek posted:

but to be serious, there wont be any meaningful change until you start putting managers, vps, & ceos in prison

MononcQc
May 29, 2007

I haven't posted in a short while (didn't find great papers, was busy reading other stuff) and instead today I'm posting to let you know that some of the poo poo these disciplines do, for all their love of helping cognition, includes some of the worst charts and diagrams you'll ever see. I had already referenced some bad diagrams in a previous post:

MononcQc posted:

Now, the diagram I'm posting is crucial to a lot of the things referred in the paper and many other ones, and unfortunately, people in the humanities have the jankiest most loving ridiculous diagrams (there's much worse than this one):



Well here's a post for some more as I recall them.

Here's a couple of my favorite ones from Jens Rasmussen in Decision-Making in Action:


(I'm the cowboy of decision-making)

And a few from Designing for Expertise by Woods, which I absolutely love and should probably annotate for here one of these days, but I can't imagine it being considered "good" in terms of graphics.




Don't get me wrong, sometimes they get it absolutely perfectly and the diagrams are great explanations for complex mechanisms, even if they look a bit funky. Here's one of my favorite ones from Rasmussen's "Risk Management in a Dynamic Society," which essentially encapsulates the drift model in a single image:


But even for all the good hits, nothing can beat these ones from one of Woods' presentations on releasing the adaptive powers of human systems for the Ohio State University's program on cognitive systems engineering, where the following visuals are used to introduce "the dragons of surprise":

Share Bear
Apr 27, 2004


elden ring dlc looking kinda low effort

great posts everyone, thank you very much, keep em up

MononcQc
May 29, 2007

I'm back at reviewing some David D Woods stuff: Designing for Expertise. It's a book chapter, but it's nevertheless interesting and cited here and there. I had forgotten about it until I started looking for terrible diagrams in my previous post, so here we go.

First, the expert relies on a conceptual model, which is essentially a mental model: the things the expert knows about the domain and that can be used to simulate what will happen. Designers essentially end up shaping how experts can form and augment these models. A basic thing the authors suggest in line with that is to replace the term "user" with the term "practitioner," because the people using the tech are not passive recipients on whom a product is imposed; they're people doing poo poo, with objectives and challenges, who sometimes rely on your product to do something. Practitioners will modify unsatisfactory designs, devise workarounds, or simply abandon things that do not let them meet their goals.

So to predict how your tech is going to impact your experts, you gotta know what the hell expertise is, and have an idea what their expertise is. But you can't expect someone who designs a surgeon's tools to also be a surgeon on top of being a designer. This is something dubbed the Ethnographer's Challenge:

quote:

in order to make interesting observations, they have to be, in part, insiders in the setting they are observing, while remaining, in part, outside the domain in order to have insights about how practice works, how practice fails and how it could work better given future change. Design observations in the field of practice, where designers watch experts doing cognitive work, relies on being prepared to be surprised in order to distinguish unexpected behaviors that reveal how expertise works and how these experts work

You tend to end up with multidisciplinary teams where designers consult with experts to design for other experts. This can create clashes because designers tend to look for simple solutions to problems whereas systems engineers assume that only complexity can cancel out complexity. So both approaches that aim to design for simplicity and those that are more analysis-based are needed, but insufficient. This cross-disciplinary team ends up having to gain some of each other's expertise to work as well. So this starts the chapter's long detour on defining expertise. There's a big section that contains a tour of the history of the study of expertise, which I'm eliding here, after which they conclude:

quote:

One of the key results is that expertise uses external artifacts to support the processes that contribute to expert performance – expertise is not all in the head, rather it is distributed over a person and the artifacts they use and over the other agents they interact with as they carry out activities, avoid failure, cope with complexity, and adapt to disruptions and change.

If you've read Don Norman's The Design of Everyday Things, this is a sort of reference to "knowledge in the head" vs. "knowledge in the world," but told in an academic manner.

Anyway, past that section, we get to initial definitions of expertise. The first perspective is one where expertise is definable in terms of how much domain-specific knowledge you have and how well organized it is: the more you know, the better you are. This perspective can be expanded by noting that knowledge is sometimes social too, which shifts the definition a bit: expertise is having a rich repository of strategies for applying knowledge based on context. This further means that a) expertise is domain-specific, b) experts adapt to changes, and c) they rarely act as solo individuals.

This gives them a list of 5 key attributes of experts:
  1. They are willing to re-adjust initial decisions
  2. They get help from others when uncertain and can identify experts in sub-domains
  3. They make use of formal and informal external decision aids
  4. They may make small errors but tend to avoid making big ones; they focus on not being wrong rather than being right
  5. They decompose complex situations into manageable chunks that can then be re-constructed
This is accompanied by the hilarious Kite diagram from my previous post.

The next question is how you identify people with the knowledge of experts. They mention:
  • They perceive more stuff; they can extract information that non-experts will miss
  • They have a good idea of what is relevant and when it is; they have a lesser tendency to be side-tracked
  • They can simplify complex problems effectively; novices tend to oversimplify however.
  • They can communicate information they are experts about.
  • They can deal with more diversity in terms of situations encountered
  • They can identify and adapt to exceptions
  • They can identify changing conditions to know when to shift their strategies
  • They're self-confident and trust their decisions
  • They have a strong sense of responsibility
A lot of these characteristics make experts difficult to work with, but they also make experts able to identify other experts in their domain. So how do you acquire expertise? There are a couple of models.

The first one is: Novice (slow performers who follow rules), Advanced Beginner (they see patterns that support rules), Competent (lots of patterns known, hierarchical reasoning sequences, can deal with more situations, but still slow), Proficient (intuition takes over reasoning, decision structures are adapted to the context and the knowledge flows naturally), and Expert (they know what needs to be done and can do it; immediate response to identified situations).

A second one is 10+ years of deliberate practice, going through 4 phases: 1. playful activity with the domain, where those with potential are selected; 2. extended preparation with trainers/coaches; 3. full-time engagement in practice and performance, making a living off of it; 4. making an original contribution to the domain by going beyond the teachers and innovating.

That requirement for innovation is one of the tricky ones when trying to design for experts: the time spent by the designer in the domain can never match that of the domain expert. The observations can't easily be linked to practice, so there is a need for a very iterative process of trial and evaluation to anchor the design. This gives us that god drat image:



The legend sort of explains what they mean. They're messy diagrams, but they try to load a lot of meaning into both. The top one puts you in a given role (the dotted circles with floating labels), and moving clockwise or counter-clockwise represents the activities required for design synthesis or analysis. The second diagram tries to attach normal project labels to the map when it is used counterclockwise, for design creation (synthesis), to show how it would translate to practice.

The paper spends a couple of pages explaining the map, and introduces an even more confusing one which tracks the development of the designer's domain-specific expertise as they interact with the domain, and the places where you may want an expert to compensate for your own lack of expertise there:



It took me a while to get it, but this is the first model of expertise development in black (flowing counterclockwise from 'novice' to 'advanced beginner' all the way to 'eminent expert'), along with significant activities in light grey (implementing change, directing observations, etc.), overlaid on top of Figure 8.2A. The big dashed lines are essentially "regressions," where an eminent expert put in contact with a new device or technology suddenly reverts to simply being "competent" and needs to gain new knowledge and mastery again, and the cycle partially starts over.

Anyway that's what I think it means, and it would have been better served by 2-3 different images IMO. It makes me feel this is a screen grab of a powerpoint slide that has had 5 minutes of animation and explanations collapsed into one unfathomably complex still image.

What happens when you introduce new technology or solutions, then, is that your assessment of the expert has also changed: they needed to adjust to the newly added complexity (and have created fancier conceptual models), and this happens in a broader system (where there may be other changing pieces of equipment, teammates, other experts, and unrelated people or interferences in play), so each new thing you designed becomes part of the environment and must now be accounted for. So as you understand expertise, you're able to better design for it, but as you do so, your understanding of expertise also melts away, because you changed what it means to be an expert!

This leads to once again reminding people that you can only design for experts with an ongoing collaboration between designers and experts.

The authors then summarize what expertise is once more, with extra factors that were added over the course of a few pages:
  • Experts have learned, observed, and practiced a long time. Their expertise is domain-specific, driven by context, and part of the social structure of the domain
  • Expertise is both knowledge and skill in the understanding of observations in the context of a situation
  • Expert practitioners have a model of their domain and strategies that they keep refining
  • Expertise changes and evolves towards various improvements, and so do social standards in the domain
  • Expertise is a form of contextual understanding that helps form new strategies to make sense of observations in said contexts
  • Expertise is limited by the perspective of the expert. When needing to go broader, there is a need for collaboration
  • At an eminent level, experts innovate and generate original contributions to the domain
Finally, this innovation factor means that anything you do that changes the field causes ripple effects, creating more need for adaptation from more experts, which in turn creates new design demands. Assessing the expertise of practitioners is therefore both a requirement and a consequence of design work.

FalseNegative
Jul 24, 2007

2>/dev/null
This whole thread continues to be fascinating, thank you for taking the time to write these excellent posts.

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

FalseNegative posted:

This whole thread continues to be fascinating, thank you for taking the time to write these excellent posts.

Midjack
Dec 24, 2007



FalseNegative posted:

This whole thread continues to be fascinating, thank you for taking the time to write these excellent posts.

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
i saw mononcqc's dealio postin about the hosed up org on twitter (https://twitter.com/mononcqc/status/1514397732332527623) and i've always had a little weird hobby-horse that this can be modelled as a pure formal issue with eutrophication dynamics (really vanishing gradient dynamics, but eutrophication dynamics is vanishing gradient dynamics on the food chain energetics...).

basically, many layers of indirection in a system can only be modelled mathematically as function composition, but if you compose functions and change the parameterization of the original function the chain rule sez you'll get insanity, complete insane behavior. much like the insane behavior you get in many-layers-of-indirection systems in general

of course this is related to exponential bein the eigenfunction of derivative. so i am always amused by exponential discount functions, which do this poo poo in time instead of iterative function compositions, and how peeps are jazzed about avoiding hyperbolic discounting - hyperbolic discounting wrt function compositions is the thing that peeps in neural net land aim for when they try to whack vanishing gradients
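
here's a tiny numeric sketch of the vanishing-gradient point (my own toy example, nothing from the tweet or an actual org model): compose the same squashing function many times and the gradient of the final output with respect to the innermost layer's parameter shrinks roughly geometrically with depth, because the chain rule multiplies in one derivative factor per layer.

import math

def grad_wrt_innermost_w(w, x, depth, eps=1e-6):
    # finite-difference gradient of the final output w.r.t. the innermost
    # layer's parameter: only the first application gets w perturbed
    def run(w_first):
        y = math.tanh(w_first * x)        # innermost layer, perturbed
        for _ in range(depth - 1):
            y = math.tanh(w * y)          # remaining layers, unchanged
        return y
    return (run(w + eps) - run(w - eps)) / (2 * eps)

for depth in (1, 2, 4, 8, 16, 32):
    print(depth, grad_wrt_innermost_w(w=0.9, x=1.0, depth=depth))

# each extra layer multiplies in a factor |w * sech^2(.)| < 1, so the printed
# gradient decays roughly geometrically with depth: the innermost parameter
# becomes nearly invisible to the outermost output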

MononcQc
May 29, 2007

Bob Dobbs is dead has posted my tweet, and while the paper is outside of ergonomics/HCI/RE, it's still humanities and I still like it a lot, so here's my take on it. It's the sort of thing that retrospectively explains large trajectories of my career and validates a lot of hunches and things I believed to be true but had no evidence for. And of course, it ties in with a lot of the stuff we discuss here.

The paper, Moving off the Map: How Knowledge of Organizational Operations Empowers and Alienates, is a work of ethnography where a researcher embedded herself into 5 organizations, covering 6 projects aimed at restructuring (business process redesign, BPR), all of which had a phase of extensive process mapping. She noticed that at the projects' conclusions, most employees returned to their roles and got raises within the organization, but a subset of them, who were centrally located within the organization, decided to move to peripheral roles. She decided to investigate this.

What she found was that tracing out the structure of how work is done (and what work is done) and how decisions are made was the significant activity behind the split. It happened because people doing this tracing activity had a shock when they realized that the business's structure was neither coordinated nor planned, but an emergent mess, the consequence of local behaviours in various groups. Their new understanding of work resulted in either Empowerment ("I now know how I can change things") or Alienation ("Nothing I thought mattered does, my work is useless here"), which explained their move to peripheral roles.

Some of these reports are also just plain heart breaking. I have so many highlights for it.

The paper starts by mentioning that centrally-located actors (people at the core of the management structure of an organization) are less likely to initiate change, and more likely to stall it. Additionally, the desire for change is likely to come from the periphery, and as people move towards the center, that desire tends to go away. This is a surprise to no one.

However, when central actors do initiate change, it comes from either a) experiencing contradictions, tensions, or inconsistencies that push them to reflection, or b) being exposed to how other organizations (or even societies) do things, which opens up more awareness. These two things are called "disembedding," and can lead to central actors pushing for structural change.

The paper accidentally "discovered" a third approach: taking the time to study how things are done in the organization can cause that dissonance, and encourage central actors to move to the periphery of the organization in order to effect change because they lose trust in the structure of the organization itself and their role in it.

This was found out while the author was doing a study of 5 big corporations with 6 major business restructuring projects involving hundreds of workers. She noticed that while some employees went back to their roles (but with promotions), or towards roles that were more central when it was done, a subset of employees instead left very central roles to go work on the periphery, sometimes under less interesting conditions. So she started asking why and ran a big analysis.

What she noticed is that all the employees who eventually left their roles had been assigned specific tasks different from the rest of the people in these projects: they had been asked to do process mapping, where essentially they had to make a representation of "what we do here," how the business works, how decisions are made, and how information moves around. People not involved didn't find it significant, but people involved were shocked into leaving their roles, to make it short.

The author makes the point that it's not process mapping itself causing this, but rather that deep engagement in representing and understanding the operations of the organization, and how one's own role fits into it, is what causes this to happen; and even then, it was probabilistic.

The tracing was done by employees who would do things like walk the floor, ask people how they do their work, sit in meetings asking people in various roles questions like "What do we do?", have them list tasks on whiteboards, connect the tasks with strings, and consolidate everything into huge maps like the following, which connected local experiences into a broader organizational context:



This had the effect of surfacing things that were previously invisible and making them discrete. This likely ties into the concepts mentioned here before of "work as done" vs. "work as imagined":

quote:

The map allowed them to see how the system operated below the surface, integrating all the pieces to generate a comprehensive view. They commented on the uniqueness of this comprehensive view: “We don’t allow people to see the end-to-end view... to see how things interrelate.” One explained that the experience “ruins [one’s] perspective in a good way.” Another described how it gave her a “whole different way of looking at things.” By revealing the web of roles, relations, and routines that coalesce to make the organization, the map made the organization’s actual operation intelligible.
[...]
Competent members of organizations draw on everyday knowledge [...] as they perform their roles, but this knowledge does not speak to the organization’s broader order. Despite how remarkably capable these employees were at “recognizing, knowing, and ‘doing’ the lived order,” the broader order or structure is often “resistant to analytic recovery” from the inside. Even if they would like to observe and reflect on their organization’s detailed operating process, they rarely have opportunities, such as building process maps, that provide time and access.

So what were the immediate consequences? I'm quoting this directly:

quote:

They expected to observe inefficiencies and waste, the targets of redesign, and they did. Tasks that could be done with one or two hand-offs were taking three or four. Data painstakingly collected for decision-making processes were not used. Local repairs to work processes in one unit were causing downstream problems in another. Workarounds, duplication of effort, and poor communication and coordination were all evident on the map.

Beyond these issues, they observed a more fundamental problem. A team member explained, “I’m getting a really clear visual of what the mess is.” Standing back from the wall, he sighed, and said, “The problem is that it was not designed in the first place.” Instead of observing a system designed, adapted, and coordinated to achieve stated goals, he pointed to three examples on the map that demonstrated the exercise of agency in various places and at various levels in the organization. These change efforts lacked broader perspective and direction as well as coordination and integration with other efforts

They mention examples such as a "kingdom builder," where the map revealed a manager who kept accumulating departments for the sake of accumulating power but had been invisible to the organization, and essentially they just found a lot of "what the gently caress, this is just random poo poo left over from really old decisions." People see local problems, take general approaches, and try to fix things. This clashes with the things the organization tries to do (when it tries), and there is no coherent organization to anything:

quote:

Some held out hope that one or two people at the top knew of these design and operation issues; however, they were often disabused of this optimism. For example, a manager walked the CEO through the map, presenting him with a view he had never seen before and illustrating for him the lack of design and the disconnect between strategy and operations. The CEO, after being walked through the map, sat down, put his head on the table, and said, “This is even more hosed up than I imagined.” The CEO revealed that not only was the operation of his organization out of his control but that his grasp on it was imaginary.

They learned that what they had previously attributed to the direction and control of centralized, bureaucratic forces was actually the aggregation of the work and decisions of people distributed throughout the organization. Everyone was working on the part of the organization that they were familiar with, assuming that another set of people were attending to the larger picture, coordinating the larger system to achieve goals and keeping the organization operating. They found out that this was not the case.

This may not necessarily be surprising to people, but it may be surprising for people to learn that CEOs and others think they have so much more control than they do!

Anyway, the two reactions in general were either Empowerment or Alienation.

On the Empowerment front, this happens because:

quote:

Members of the organization carry on as though these distinctions are facts, burdening the organization’s categories, practices, and boundaries with a false sense of durability and purpose.
[...]
The idea that organizations are an ongoing human product was a provocative insight for these employees. This new perspective, as one explained, “made things seem possible.” Once they could see the “what” as a dynamic social creation, they could begin asking better questions about “how.” A team member explained that the logic of organization should not be fixed and how its rules, synthetic creations, are free to deviate
[...]
Their peripheral role choices allowed team members to exploit this new understanding of the organization’s operations. They could work with new assumptions about the mutability and possibility of the organization and create structures and systems to coordinate and direct the web of roles and interactions. Their new role choices also allowed them to remain above and outside of the organization’s daily operations.

So in short, understanding that a lot of it isn't fixed, that a lot of it is arbitrary but flexible, meant that these people felt they understood better how to effect change, and that by moving away from the center and into the periphery, they could start doing effective change work.

Alienation is so god drat heartbreaking though, and the author warns that before starting this process in an organization, you have to be ready for some people to feel a major shock when the work they thought was valuable and important turns out to be useless and worth nothing. In fact, the author warns that finding work and jobs that were not meaningful or useful at all was a common theme:

quote:

As part of the map-building process, employees were invited to identify their role on the map and to indicate how it was connected to other roles through either inputs or outputs. Team members recounted that it was difficult to observe employees “go through a real emotional struggle when they see that what they are doing is not really adding value or that what they are doing is really disconnected from what they thought they were doing.” In one case, a finance manager noticed that his role was on the wall but that it was not connected to any other role on the wall. He had been producing financial reports and sending them to several departments because he understood them to be crucial for their decision-making process; however, no one had identified his work as an input to theirs.

This realization was, in the end, devastating for him. He was on the verge of tears... at first, he became very argumentative and was trying to convince people that you go from this Post-it note down here to mine. [Other employees explained] Well no, we don’t do that. It was a two-hour conversation. And he finally sat down, and he said, so why am I doing this? It was devastating.

The eventual outcome of this “aha” was that the manager was moved to another role in the department after working four years in a position that had served almost no purpose.
After such analyses, team members could not look at particular roles and people in the same way.

A lot of people also found out that while they thought they were solving real problems, helping people with real issues, and finding real workarounds, on the overall organizational map it was meaningless and had no impact: they could be fixing real problems in departments that themselves were not useful.

Others found that they had properly fixed issues by introducing new databases with critical information, but had been unable to get any buy-in for them, so the analysts and other people who had spent a lot of time on these just had no impact at all:

quote:

Their knowledge of the limits of local, small-scale change and the futility of changing parts of the organization without addressing the system as a whole, discouraged employees from returning to their career in the organization. They did not want to contribute to the mess or reproduce the mess they had observed.
[...]
What they had learned could not be unlearned or ignored.

The author states that whether it is due to alienation or empowerment, both behaviours push people to move to the edges of the system, where they can either find new roles or types of changes that they believe are more useful. The structural knowledge gained essentially lets them know of better ways to do useful things and enact change. Specifically, learning that the organization's structure is the result of interactions, rather than a context in which interactions take place, is a key insight that sociologists already knew:

quote:

This perspective or comprehension affects how we speak and act. We speak about organizations as if they are objects that exist independent of us, and we act as though they constrain and guide our actions. When we objectify social systems (organizations, communities, families, gender roles), we apprehend them as “prearranged patterns” that impose themselves on us, coercing particular roles and rules. We free ourselves to talk about and inhabit them as independent of us: as existing prior to us, standing before us, outliving us, and operating without us. Given this, we are relieved of greater responsibility for them. Our responsibility is to skillfully fulfill our role within these objectified realms.
[...]
Whereas, as some sociologists “know that organizations and institutions exist only in actual people’s doings and that these are necessarily particular, local and ephemeral”, employees may be less likely to know this. When they do, it problematizes their past and future participation.
[...]
The realization that social worlds do not have an independent, stable existence but instead emerge from our collective action is “sometimes arrived at in a moment of heady delight, but often as a horrifying realization”. This realization is considered a “fatal insight” because it destroys assumptions that the current order, roles, rules, and routines are given. Within the system of roles, rules, and routines, there is far more room to maneuver than previously assumed. Rejection of objectivity puts possibility, perhaps even responsibility, squarely in the court of subjectivity.


I think this quote above is real loving good.

I'm going to conclude with it, although the author adds a short section mentioning that, given this research, we can suspect some of the most effective change is driven by actors who were once at the core of the system and moved to its periphery. This is likely a sign that they know how poo poo works and have an idea of how to challenge it. Insider knowledge dragged to the edges may be a key enabler of strong means for modifying how things work. I'll let you read the paper if you want the details of that.


Gnossiennes
Jan 7, 2013


Loving chairs more every day!

The last two papers you've gone through have been really helpful for me in understanding my job/role and struggles that I have with it -- seriously, thank you.

Midjack
Dec 24, 2007



good find. this is a better development of ideas i'd once had.

echinopsis
Apr 13, 2004

by Fluffdaddy

FalseNegative posted:

This whole thread continues to be fascinating, thank you for taking the time to write these excellent posts.

MononcQc
May 29, 2007

This week's paper is When mental models go wrong. Co-occurrences in dynamic, critical systems by Denis Besnard, David Greathead, and Gordon Baxter. This is a bit of a lighter text, but hints at some interesting approaches around mental models, specifically in airline pilots although the lessons are applicable more broadly.

One of the patterns that is highlighted in many sorts of incidents is one where someone's mental model and understanding of the situation is wrong, and they end up repeatedly ignoring cues and events that contradict their understanding. So the paper looks into what causes this in someone who is trying to actually do a good job. The paper states:

quote:

Humans tend to consider that their vision of the world is correct whenever events happen in accordance with their expectations. However, two sequential events can happen as expected without their cause being captured. When this is the case, humans tend to treat the available evidence as exhaustively reflecting the world, erroneously believing that they have understood the problem at hand. These co-occurring events can seriously disrupt situation awareness when humans are using mental models that are highly discrepant to reality but nonetheless trusted.

We've discussed before the issue of there being more signals to process than capacity to process them. So rather than building a mental model that handles all of the information in our environment, we build goal-directed abstractions. Their main aim is to understand the current and future states of a situation, without necessarily having an in-depth awareness of all its subtleties. They're built from a) the things you know that help achieve a goal, and b) some [but not all] data extracted from the environment. So the core features and concerns of a given problem are overemphasized, but the peripheral data is easy to overlook.

Another interesting aspect of this is that, essentially, the more overloaded you are with a limited ability to focus, the more likely you are to automatically simplify your model and deal with correlations of the strongest elements, at the cost of all the peripheral data. This is important because complex systems, such as an airplane cockpit during an emergency situation, increase the demands on the crew. The crew possibly has to deal with nervous passengers, changes of plans with air traffic control, and keeping on flying the plane while it operates abnormally. So the at-rest capacity to fully reason through everything is likely to get reduced, because you don't get more bandwidth but you do end up with more demanding tasks.

There's a reference to a great concept called Bounded Rationality, which states essentially that because of the above limitations, we tend to pick cheap adequate solutions (heuristics) over optimal solutions. We go for good-enough even if sub-optimal, because it is a compromise with the cognitive cost required.
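
To make the bounded rationality idea concrete, here's a toy sketch of satisficing (my own illustration, not something from the paper): instead of evaluating every option to find the optimum, the agent stops at the first option that clears a "good enough" bar, trading solution quality for a much smaller search cost.

import random

def evaluate(option):
    # stand-in for an expensive assessment (deliberation, simulation, ...)
    return option["quality"]

def satisfice(options, good_enough):
    # stop at the first acceptable option: cheap, but possibly sub-optimal
    for checked, opt in enumerate(options, start=1):
        if evaluate(opt) >= good_enough:
            return opt, checked
    return max(options, key=evaluate), len(options)  # nothing cleared the bar

def optimize(options):
    # evaluate everything: optimal, but pays the full cognitive cost
    return max(options, key=evaluate), len(options)

random.seed(0)
options = [{"id": i, "quality": random.random()} for i in range(1000)]
print(satisfice(options, good_enough=0.95))  # usually stops well before the end
print(optimize(options))                     # always scans all 1000 options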

Another aspect highlighted in the paper is regarding the validation and invalidation of mental models:

quote:

Flaws in mental models are detected when the interaction with the world reveals unexpected events. However, these inaccurate mental models do not always lead to accidents. Very often, they are recovered from. In this respect, error detection and compensation are significant features in human information processing. The weakness of mental models lies in their poor requirements in terms of validity: If the environmental stream of data is consistent with the operator’s expectations, that is enough for the operator to continue regarding the mental model as valid. The understanding of the mechanisms generating the data is not a necessary condition.

We are not concerned here with how operators could build exhaustive mental models, as their incompleteness reflects a strong need for information selection. The issue of interest is to understand the conditions in which operators believe they have a good picture of the situation whereas the underlying causal mechanisms have not been captured.

This is done through an analysis of the Kegworth air crash in 1989. This incident has to do with a plane that has two engines (one on the left, one on the right). A fan blade detached from one of the engines, causing major vibration, and smoke and fumes entered the aircraft through the AC system. The captain asked the first officer which engine it was; the first officer was unsure, and said it was the right one. The captain throttled that engine back, and the vibrations went away. So they thought the decision was right, for about 20 minutes. When they had to land, they added more power to the left engine, and the vibration came back real strong. They tried to restart the right engine, but not in time to avoid a disaster.

So, the big thing there: you see a problem with vibration, you turn off an engine, the vibration goes back to normal. Problem solved, mental model is pleased. This makes it different from fixation errors, which are the patterns seen at Chernobyl, for instance. The Chernobyl example is the one where the operators thought the power plant couldn't explode, and they came up with a different explanation. Even when graphite was visible and the whole thing had gone boom, it was hard for the operators to think it was an explosion anyway. Fixation occurs when you disregard increasing amounts of data to stick with your current explanation, whereas this incident is one where an unrelated event (the vibration stopping) was seen as agreeing with the current explanation, and the mental model felt confirmed despite being wrong.

There are other interesting factors that contribute to this:
  • While both the captain and the first officer were experienced (over 13,000 hours and over 3,200 hours of flying time respectively), they had only 76 hours of experience in the Boeing 737-400 series between them.
  • There was a work overload (demands from air traffic control, the passengers, etc.)
  • The captain mentioned not scanning the Engine Instrument System (EIS) for vibrations because such indicators are often unreliable in other aircraft
  • The EIS on that aircraft model had moved away from physical gauges to digital ones; 64% of pilots said it was bad at getting their attention and 74% preferred the older (non-digital) style
Here's the EIS of a 737-400 cockpit, circled in black:



The secondary EIS is magnified on the right-hand side of the picture. The vibration indicators are circled in white.

So what the paper says here is that we had a great example of the crew's cognitive workload growing out of control. Managing cognitive demands means that when we look for confirmation of existing models, we are okay with partial confirmation, but when it comes to contradicting our model, we wait for more consistent data before doing so. This is related to confirmation bias. For this incident, turning off an engine and the noise reduction that followed was that partial confirmation.
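
A toy way to picture that asymmetry (my own sketch, not a model from the paper): a single confirming cue is enough to keep the current mental model and even resets accumulated doubt, while revising the model only happens once several contradicting cues pile up in a row.

def process_cues(cues, contradiction_threshold=3):
    # cues is a sequence of "confirm" / "contradict" observations
    contradictions = 0
    for cue in cues:
        if cue == "confirm":
            contradictions = 0           # partial confirmation resets the doubt
        else:
            contradictions += 1          # contradictions have to accumulate
            if contradictions >= contradiction_threshold:
                return "revise model"
    return "keep model"

# Kegworth-shaped sequence: the vibration stopping right after throttling back
# the (wrong) engine acts as a confirming cue, so the model is kept for a while
print(process_cues(["contradict", "confirm", "contradict", "contradict"]))  # keep model
print(process_cues(["contradict", "contradict", "contradict"]))             # revise model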

The authors state that one of the reasons for this behavior is that dealing with discrepancies may mean a loss of control. If you have to stop what you're doing to correct your mental model, you can't spend as much energy keeping the plane flying. So this clash of priorities may explain why people focused on a more important concern (keeping the plane in the air) let it take precedence over updating a mental model that is no longer entirely right or adequate:

quote:

Provided they can keep the system within safe boundaries, operators in critical situations sometimes opt to lose some situation awareness rather than spend time gathering data at the cost of a total loss of control.
Critical situations can be caused by the combination of an emergency followed by some loss of control. When this happens, there is little room for recovery.
[...]
The emergency nature of the situation and the emerging workload delayed the revision of the mental model which ultimately was not resumed.

What are the implications for system design? Two avenues are mentioned. The first is operator training. The supposition is that if you know about these biases and mechanisms, you may end up aware of them when they take place, which should have a positive impact on system dependability. They mention catering to these possibilities by improving communication, better stress management, and more efficient distribution of decision-making.

Another one is the same avenue mentioned in a lot of papers: automation has to be able to eventually cater to the cognitive needs of the user, and better plan and explain the state transitions it is going through and the objectives it is trying to attain. Essentially, find ways to give relevant data to the operator without them having to cognitively do all the work to filter it and judge its relevance.

This is, again and unsurprisingly, an open problem, because all of that stuff is contextual.

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

drat this is great stuff

MononcQc
May 29, 2007

This week's paper is one I found through an episode of The Safety of Work podcast, referencing a text titled Observation and assessment of crossing situations between pleasure craft and a small passenger ferry. I'm picking it for the same reason the podcast did, which is to ask the question: can we get ready for automation by studying non-automated systems?

This is because the paper looks at a system that should on its face be really easy to automate, if we assume that navigational rules are respected. The paper in fact studies the Ole III, which is a small passenger ferry in the Husøysund strait in Tønsberg municipality, Norway.

The ship is 8m long by 2.6m wide, carries 11 passengers at most (plus the captain, who is responsible for the passengers), has a single 38hp engine, and uses only optical navigation with binoculars and a magnetic compass. The captain makes all assessments and decisions according to his experience and judgment (no communication overhead), and the crossing it makes is always the same, taking roughly 2 minutes when traffic is low and the weather is good.



The strait it carries them across is between 100 and 150m wide shore-to-shore, and has a central channel 6 meters deep with no traffic separation zones. By navigational rules (they're complex and listed in the paper), the Ole III ought to have right of way by virtue of being a commercial craft with passengers. However, the channel is heavy with traffic, and is used frequently by pleasure craft (which the authors expected would not respect all the rules). Similarly, since the Ole III is small and maneuverable and has no large draft, it cannot with certainty claim right of way over vessels on its starboard side. The law also states that all other ships (including pleasure craft) should "as far as possible keep away," which is not the same as actually giving way.

All in all, this sounds like it should be as straightforward as it can be: small, short route, always the same, with a general right of way and no need for fancy instrumentation. But it's a bit more complicated than that.

While the captain of the Ole III might be able to claim that he legally has right of way, knowing whether that is what would happen in practice depends on other ships' understanding of navigational law as well. Pleasure craft, specifically, may be manned by incompetent skippers, who may be on vacation, driving at high speed, while drunk.

So what happened in the paper is that the scientists sat in, between 10am and 8pm from June 4 to August 4 2018, for nearly 4,802 crossings of roughly 2 minutes each, and looked for all sorts of incidents or near misses. They wanted to account for every deviation from navigational laws encountered by the Ole III, to calculate the risks and to see how the captain dealt with them.

They encountered a total of 7415 other vessels coming through, with 4150 from starboard and 3265 from port side. 6225 passengers were recorded, with 1227 under 16 years old and 60 requiring assistance to get on-board (kindergarten age kids). 3995 bikes were also transported.

They recorded 279 instances of other vessels being on a conflicting course that could be given a risk classification of incident or near miss, amounting to 5.8% of crossings, or 8.9% of crossings with vessels nearby. These involved behaviours where other ships didn't respect the rules; notes were taken, thematic analysis was done, and two people analyzed them (a navigator with 8 years of sailing onboard vessels in the Royal Norwegian Navy and 20 years of experience in different jobs in the maritime industry, and a professor of maritime human factors).

They came up with the following risk categories:
  1. deviations: the Ole III actually gave other vessels the right of way when it had it; there was no imminent danger. This accounts for 229 instances (4.8% of crossings)
  2. dangerous: situations where the captain of the Ole III had to reduce his speed or change course to avoid conflicts, and vessels speeding in its path. There were 39 of these (0.8% of crossings)
  3. critical: Ole III had to reverse thrust, do evasive manoeuvre, use the horn to signal and avoid incidents. Those were considered high risk events and happened 12 times (0.2% of crossings).
Interestingly enough, there's no way to know if this is a large or small number of deviations, because there's almost no other data to compare it to. The authors' experience suggests this did not feel abnormal, however. The incidents were mostly related to not giving way, high speed, lack of attention, people being on the wrong side of the fairway, and high traffic density (averaging 5 other vessels on a crossing course).

The ways the captain of the Ole III avoided incidents are divided into two categories: passive and active control strategies. With passive strategies, risk reduction was done by avoiding other vessels: waiting before entering the fairway, sailing behind (aft of) other ships, reducing speed, actively reversing thrust, and making emergency deviations. Active strategies had to do with maintaining a steady heading and speed (even if other vessels were around) and communicating in some way, whether through hand signals or the horn.

So for deviations, 89.5% of situations were handled by passive means and 10.5% by active means. For dangerous situations, 79% were handled passively and 21% actively. For critical incidents, passive handling was used 67% of the time and active handling 33% of the time.
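
To make the numbers easier to eyeball, here's a tiny Python snippet (mine, not the paper's) that recomputes the reported rates from the raw counts and lines them up with the passive/active split per severity level:

code:

# Counts and shares are the ones reported in the paper; the snippet is only a
# convenience for recomputing and displaying the percentages.

CROSSINGS = 4802

events = {
    # category: (count, share handled by active means)
    "deviation": (229, 0.105),
    "dangerous": (39, 0.21),
    "critical": (12, 0.33),
}

for category, (count, active_share) in events.items():
    rate = count / CROSSINGS
    print(f"{category:>9}: {count:3d} events ({rate:.1%} of crossings), "
          f"{active_share:.1%} handled actively, {1 - active_share:.1%} passively")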



So the obvious trend here is that the more critical the situation, the more active the management. Something else revealed by the data is that most of the incidents involving pleasure crafts coming from the side over which the Ole III should have definite priority are cases where the captain can be considered to be creating safety by taking actions that defuse other people's errors. From this perspective, the captain frequently bends the rules and gives way to unlawful behaviour, but in a way that can be thought of as a counterweight to human error: it's adaptive behaviour that falls outside the norms and restores safety.

Vessels coming from the Ole III's starboard side are more complex. The authors' discussions with the captain revealed that he believed he had the right of way, but maritime law experts don't consider it a clear-cut case whether he'd be responsible for any collisions, given the amount of control his vessel has compared to, say, a sailing ship.

The authors say that it's not necessarily important why captains of vessels act the way they do (ignorance, carelessness, lack of attention, intoxication, etc.); the practical navigational situation itself needs to be resolved:

quote:

One way of resolving this is to take a descriptive approach, such as focusing on whether people follow rules; however, this will only help in attributing blame, or judicial responsibilities, and will not help in explaining actual behaviour (i.e. why people choose to follow a rule or not).

They come up with a decision table:



The key point is whether the intents of both vessels match, not necessarily who is right or wrong. They mention that this match vs. mismatch situation holds whether vessels are operated by humans or automation on either side. Either type is considered an "adaptive agent", and a disagreement between models is riskier than agreement between models:

quote:

Irrespective of the nature of the adaptive agent, the challenges described in Table 8 are not possible to resolve unless (1) it is possible to establish communication of intention between vessels or (2) it is possible to ensure that all agents follow the [navigational laws] at all times. The last request is highly unlikely to ever happen as long as pleasure craft skippers lack elementary navigational competencies and knowledge of [navigational laws].
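
The actual decision table (Table 8) is an image I'm not reproducing here, but the core idea fits in a few lines: risk hinges on whether both adaptive agents hold the same model of how the encounter will resolve, not on who is legally in the right. The sketch below is my own illustration of that principle; the names and return values are invented, not the paper's:

code:

def encounter_risk(own_model: str, other_model: str) -> str:
    """Each argument is which vessel that agent expects to give way,
    e.g. 'ferry' or 'pleasure craft'."""
    if own_model == other_model:
        # Both expect the same resolution, so intentions line up, even if the
        # shared expectation bends the formal right-of-way rules.
        return "lower risk: shared model of the encounter"
    # Mismatched models: each vessel may act on incompatible expectations.
    # This calls for communicating intent, or defensive (passive) handling.
    return "higher risk: mismatched models"

print(encounter_risk("pleasure craft", "pleasure craft"))  # both expect it to give way
print(encounter_risk("pleasure craft", "ferry"))           # conflicting expectations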

So what are the suggested control strategies? Active control strategies (following the rules and asserting your right of way) actually reduce the safety margins; as long as you can't be sure the other vessel understands your intentions or is able and willing to deviate, they're not advisable. Passive strategies prevent most risks, and for small passenger crafts they may be advisable. They do, however, reduce efficiency more dramatically when traffic is higher.

A third option would be to formalize ways to communicate intentions between vessels (including pleasure crafts). Existing projects are about finding ways to share route plans, which is still tricky because pleasure crafts don't tend to have route plans. A lot of other suggested equipment is generally too expensive. So for the time being they mostly suggest passive strategies.

---

So this should give interesting ideas about the tricky parts of automation and what can be challenging about it. It fits in nicely with a lot of the literature linked here before about being able to capture and guess intentions, and about rule breaking sometimes, if not often, being a desirable way to maintain safety. Assuming that rules are going to be respected is a dangerous affair, and a lot of systems aiming for automation that take rules for granted ("otherwise blame will be on the other anyway") can end up reducing overall system safety compared to having human operators.

Shame Boy
Mar 2, 2010

oh i really like this boat study one a lot, thanks!

Carthag Tuek
Oct 15, 2005

Times shall come,
times shall roll away,
generation shall follow the course of generations



this would not happen if Ole III had a large cannon

Shame Boy
Mar 2, 2010

Carthag Tuek posted:

this would not happen if Ole III had a large cannon

that reminds me, i recently learned that for hundreds of years the exclusive territorial limit a country could claim out into the ocean was ~2 miles.

because 2 miles was the maximum effective range of a land-based cannon, you see

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

cool boat story!!

MononcQc
May 29, 2007

For this week's paper, I decided to dig into NASA's voice loop system, because I kept hearing good things about it. The paper is Voice Loops as Coordination Aids in Space Shuttle Mission Control by Emily S. Patterson and Jennifer Watts-Perotti. The paper is from 1999, when online voice communications for high-pace coordination weren't quite commonplace. But there's something quite cool about it even by today's standards, especially if you've ever done live operations during outages in tech.

The voice loop design sounds sort of opaque from the outside. Voice loops are essentially a bunch of synchronous audio channels that allow group coordination. They're also used in air traffic management, on aircraft carriers, and, as is the case for this paper, in space shuttle mission control. The overall structure of the voice loops matches the structure of mission control itself:

quote:

During missions, teams of flight controllers monitor spacecraft systems and activities 24 hours a day, 7 days a week. The head flight controller is the flight director, referred to as “Flight.” Flight is ultimately responsible for all decisions related to shuttle operations and so must make decisions that trade off mission goals and safety risks for the various subsystems of the shuttle. Directly supporting the flight director is a team of approximately sixteen flight controllers who are co-located in a single location called the “front room”. These flight controllers have the primary responsibility for monitoring the health and safety of shuttle functions and subsystems. [...] These controllers must have a deep knowledge of their own systems as well as know how their systems are interconnected to other subsystems (e.g., their heater is powered by a particular electrical bus) in order to recognize and respond to anomalies despite noisy data and needing to coordinate with other controllers.

Each of the flight controllers located in the front room has a support staff that is located in “back rooms.” The front room and back room controllers communicate with each other through the voice loop system by activating a voice loop channel through a touch screen and talking into a headset. The back room support staff are more specialized than the front room controllers on specific shuttle subsystems and monitor more detailed information sources.

This diagram is provided in the paper:



Controllers (people working in mission control) can listen in on any loop they want at any time, even multiple at once. They also have a primary loop they can talk on; they typically listen to ~4 loops simultaneously.

So you can see that all the top-level controllers are able to talk on the flight director loop. It's where all the critical core information is broadcast to everyone, and pretty much everyone listens to it. The front-to-back loops are how the higher-level controllers delegate comms to their subteams, on whose behalf they communicate with the top level. The conference loops are pre-set loops where controllers from pre-defined peer groups can go and talk to each other. But all these loops, even the front-to-back and conference loops, can be listened to and monitored by anyone:

quote:

By formal communication protocols in mission control, flight controllers have privileges to speak on only a subset of the loops they can listen in on. In the voice loop control interface, each channel can be set either to monitor or talk modes. Only one channel at a time can be set to the talk mode, although many channels can be monitored at the same time. In order to talk on a loop set to the talk mode, a controller presses a button on a hand unit or holds down a foot pedal and talks into a headset.

Each controller customizes the set of loops they monitor by manipulating the visual representation of the loops at their console. The controllers can save a configuration of multiple voice loops on ‘pages’ under their identification code. The most commonly used loops are grouped together onto a primary page. The controllers then reorganize and prioritize the loops to fit the particular operational situation going on at that time by changing the configuration of loops that are being monitored and by adjusting the relative volume levels on each loop.

The voice loop interface is generally considered to be easy to use and an appropriate communication tool for a dynamic environment like space shuttle mission control. The fundamental display units are visual representations of each auditory loop, which captures the way controllers think about the system. In addition, if individual loops are analogous to windows in a visual interface, then the pages of sets of loops are analogous to the ‘room’ concept in window management (Henderson and Card, 1986). Controllers are able to customize the interface by putting their most commonly used loops together on a single ‘page.’ Active loops on these pages can be dynamically reconfigured in response to the constantly changing environment. Dynamic allocations of which loops to listen to are done by directly selecting loops to turn off and on. Controllers increase or decrease the salience of particular loops by using loop volume controls to adjust relative loudness
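
To make that interface description concrete, here's a minimal sketch of a console along the lines of what the quote describes: many monitored loops, a single talk loop, per-loop volume, and saved 'pages' of configurations. This is my reconstruction from the paper's prose, not NASA's actual software, and all the loop names are made up:

code:

class VoiceLoopConsole:
    def __init__(self):
        self.monitored = {}    # loop name -> relative volume
        self.talk_loop = None  # at most one loop can be in talk mode
        self.pages = {}        # page name -> saved monitored configuration

    def monitor(self, loop, volume=1.0):
        self.monitored[loop] = volume

    def drop(self, loop):
        self.monitored.pop(loop, None)
        if self.talk_loop == loop:
            self.talk_loop = None

    def set_talk(self, loop):
        # Talking on a loop implies listening to it as well.
        self.monitored.setdefault(loop, 1.0)
        self.talk_loop = loop

    def save_page(self, name):
        self.pages[name] = dict(self.monitored)

    def load_page(self, name):
        self.monitored = dict(self.pages[name])
        if self.talk_loop not in self.monitored:
            self.talk_loop = None

# e.g. a front-room controller during normal operations:
console = VoiceLoopConsole()
console.monitor("flight director", volume=1.0)
console.monitor("air-to-ground", volume=0.6)
console.monitor("peer conference", volume=0.8)
console.set_talk("front-to-back")
console.save_page("primary")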

What's interesting about setting up loops like this comes from the ability to coordinate. A disturbance in one of the control systems is going to be detected in one of the back rooms and discussed among the people there, who may eventually escalate the issue to their controller. Their controller can then bring it up to other top-level controllers or directly to the flight director, at which point the information is broadcast everywhere. This approach ends up doing a few things:
  • Each higher level in the loop turns low-level technical details into a higher-level event description (a toy sketch of this escalation follows the list)
  • Controllers can monitor the top-level loops of their most connected systems, and if they detect noise, they can start anticipating the need for diagnostic tests and hear what happens in these related subgroups; this lets them react faster through anticipation
  • Higher-level loops have more formal communication patterns where a controller announces their department, then who they intend to talk to, says their messages, etc.
  • Being able to snoop on other people's loops lets people gauge their interruptibility, to know when to escalate or wait for their turn to talk
  • The conference loops are pre-defined and allow for ad-hoc reorganization or diagnosis without overloading other channels
  • The levels of the loops also represent levels of expertise. Each expert in the backroom has a higher level of specialization on subcomponents they're in charge of than the people representing them in the front room.
  • The top-level loop acting as a broadcast mechanism ensures that most relevant information is heard by everyone and can help create alignment for the entire mission control
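
Here's the toy escalation sketch referenced in the list above: a disturbance spotted in a back room gets progressively summarized as it moves up toward the flight director loop, and anyone monitoring that loop only hears the short event description. The subsystem name, loop names, and messages are all invented for illustration; the paper only describes the general pattern:

code:

def escalate(subsystem, raw_detail, summary, event):
    """Return the (loop, message) hops a disturbance takes on its way up."""
    return [
        (f"{subsystem} back room loop", raw_detail),
        (f"{subsystem} front-to-back loop", summary),
        ("flight director loop", f"Flight, {subsystem}: {event}"),
    ]

hops = escalate(
    subsystem="EECOM",  # example position name, used purely for illustration
    raw_detail="fuel cell 2 stack temperature trending high; cross-checking sensors",
    summary="confirmed anomaly on fuel cell 2, assessing impact on power margins",
    event="we have a fuel cell anomaly, recommend holding non-essential power-ups",
)
for loop, message in hops:
    print(f"[{loop}] {message}")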

Other interesting properties mentioned:

quote:

When controllers hear about the failure on the Flight Director's loop, they can anticipate related questions from the flight director and prepare to answer them without delay. Controllers can also anticipate actions that will be required of them. For example, an anomaly in one subsystem might require diagnostic tests in another system. When the controller hears about the anomaly on the voice loops, he can anticipate the requirement of these tests, and prepare to conduct them when they are requested.
[...]
For example, if an event like a complex anomaly occurs in a shuttle subsystem, the event triggers diagnostic activity in all related subsystem teams. This activity generates more communication across teams over the voice loops. Therefore, it is possible for controllers to track the cascade of disturbances in shuttle systems by tracking the escalation of activities that occur in response to these disturbances. This general indication of activity tempo allows controllers to synchronize their processes and activities with rest of the flight control team.
[...]
In contrast to a system of direct communications where only invited parties are involved in a conversation, voice loops allow controllers to listen to communications without announcing their virtual presence. This ability allows controllers to better gauge the relevance of their communications in relation to what is happening on a loop before interrupting. Controllers are then better able to time their communications, either by speeding up or postponing communications in relation to spurts of activity or by waiting for a pause in the communications to interject.


The paper includes a sample log showing loop communications with annotations that describe intent and escalations. Things at the same horizontal level happen at the same time:




If you want explanations about that log, the paper contains them, but I'm eliding them here.

The authors conclude that two main factors explain voice loops' success:
  1. Peripheral awareness of other discussions you can drop in on, without disrupting the people having these conversations or requiring their attention. The burden of interaction rests with the people who benefit from the information.
  2. The voice loop structure reflects that of the organization. Members of a team share their front-to-back loop, controllers of a subsystem share conference loops, the flight director loop carries the high-level, high-importance communications for central decision-making, and the air-to-ground loop lets you know specifically what the astronauts are going through.
These characteristics make voice loops a lot better than a single shared loop (a big conference call), and the fact that you don't get to create random loops also means communication stays focused on existing structures and remains predictable to everyone.

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

bröther that is very cool

MononcQc
May 29, 2007

This week's paper is a book chapter by Gary Klein called Seeing the Invisible: Perceptual-Cognitive Aspects of Expertise, from 1992. As is the pattern, a lot of the work I go through is by Woods or Klein because they're just titans of that stuff, and in this chapter, Klein tries to define what makes the difference between an expert, an adept, or a novice. This one is gonna be hell to cover because it's a scan and I can't copy/paste quotes.

The first sentence is sort of the whole thesis: Novices see only what is there; experts can see what is not there. The question is why, or rather how? The paper first covers the difference between an expert and a novice, the development of experts, ways of framing expertise, and then the implications for their training.

So what's the difference between a novice and an expert? In physics problems, both the students and the experts were able to pick up the critical cues. The observed difference was that the experts could see how they all interacted together. In tank battalions, novices can name all the critical cues and things to look out for as well without getting overwhelmed. In medicine, the observation is that diagnoses are not really related to how thorough the practitioner is in cue acquisition, and higher levels of performance are generally not the consequence of better strategies for acquiring information that is directly perceivable.

The difference noted is that rather than being able to pick up more contextual cues, experts are able to pick up when some expected cues are missing. They're able to see things unfold, make more accurate predictions about what is about to happen, and form expectations accordingly.

There's also a difference between expertise and experience. A rural volunteer firefighter getting 10 years of experience may learn less than a professional firefighter spending 1 year in a decaying dense city, although some minimum amount of time is required. We expect experts to make harder decisions more effectively, even in non-routine cases that would stymie others. You can spot experts because:
  • variable, awkward performance becomes relatively fast, consistent, accurate, complete
  • individual acts and judgments are integrated into overall strategy
  • learning shifts from focused on individual variables to perception of complex patterns
  • more self-reliance
There is a mention of Dreyfus' model we've already seen in designing for expertise, so I'm skipping it here, even if there's a sizable chunk of the chapter dedicated to it (it's the one going novice, advanced beginner, competent, proficient, then expert). There's also a mention that while we can expect experts to be pretty good at all things under their area of expertise, we shouldn't expect them to show mastery in all of them.

The chapter covers a bit of literature about what makes experts different from novices, and settles on the idea that experts and novices don't use different strategies: they just have different knowledge bases to work with. Experts have more schemata, but both experts and novices reason by divide and conquer, use top-down and bottom-up reasoning, think in analogies, and hold multiple mental models. The richness of the knowledge base seems to be the difference.

There are however more subtle differences: novices tend to encode their models based on surface features, whereas experts tend to think in terms of deep knowledge (functional and physical relationships) and can better gauge the conditions and importance of information. The issue is: how the hell do we train people? How do you teach that? Generally this means you just train people by giving them more and more information, which the authors don't dispute, but they want to look at the cognitive angle and how things change.

Seeing typicality
The first thing they mention is the ability to see typicality. To know what is normal and what is an exception requires having seen lots of cases. Identifying a situation as typical then triggers a lot of responses and patterns about courses of action (what is feasible, promising, etc.). This was observed in firefighters, tank platoons, design engineers, and in chess. In fact, at higher levels of expertise, this becomes sort of automated: it's not an analytical choice, more like a reflex, or automated heuristics. In particular, this also comes with an ability to see which situations are atypical because expected patterns are missing. It has been found that for some physicians, the absence of symptoms is often as useful as their presence in making a diagnosis.

They also noticed that experts with this ability do not show a lot of skill degradation under time pressure, whereas journeymen do (blitz chess observations were behind this). Physicians don't really use an inductive process in diagnosis: even if they're trained not to, they can't help but form early impressions. The idea there is that these early hypotheses, which are also found in software troubleshooting, could direct the search for more evidence, rather than just gathering facts over and over again.

How is this developed? Well, not by analogies. Analogies are used a lot by novices and journeymen, and rarely by experts, though when experts do use analogies, they're on point. One explanation is that as you gain more experience, things blend together and let you more easily reason about typicality. Another possible explanation is pattern matching (which on its own would not be sufficient, otherwise experts would also suck rear end at dealing with novel situations). There's no great theory underpinning how this happens.

Seeing distinctions
Experts can just see more things. The example is simple: watch Olympic gymnastics or diving, where you just go "well the splash was small so that had to be good" or "gosh that was a fast flip, amazing" and then the analyst points out 40 things that were imperfect but that you'd never see unless it was in slow motion. This skill mostly forms when you get accurate, timely feedback on your judgment (and can validate your hit rate).

Seeing antecedents and consequences
This is essentially mental simulation to let you know how you got there and where you're likely going. Doing this lets you evaluate a course of action without necessarily having others to compare it with: you just know if it's likely to be good or bad, regardless of alternatives. The more expertise you have, the further ahead you're likely to reliably project things, or the more likely you are to imagine further back in time how things must have been to get where they are now.

Implications for training
For chess, the idea is that you need 10k-100k patterns, which takes ~10 years to acquire. It takes 5-10 years in many other disciplines as well. There is no reason to think you can train experts by showing novices how experts think. The only thing they tracked that could reliably help is metacognition (thinking about how you think about things, assessing your performance, framing yourself as a learner). They point out 4 strategies to improve perceptual skills:
  1. personal experiences: spend time doing poo poo, but with a lot of variation in challenge and difficulty (eg. 10 years of experience, not 1 year of experience 10 times)
  2. directed experiences: this is on-the-job training and tutoring. The challenge is in making sure your tutors know how to train people and pass their own experience on to others.
  3. manufactured experiences: this is a fancy way of talking about simulations and simulators. If expertise requires you to go through rare events, then you can make experts faster by making the rare events happen more often.
  4. vicarious experiences: storytelling and accounts from others such that the listener can learn the important lessons and signals from the person who lived them. War stories are a good way to get compressed experience.
One perspective in the chapter is to treat expertise or knowledge as a resource, which you then want to locate and develop.

So to do that, you have to be able to spot who the experts are. They define three criteria:
  1. performance: variability, consistency, accuracy, completeness, and speed. They don't actually point to great ways of evaluating this, and just point at chess ratings as an example of a rating that should predict how often games are won.
  2. content knowledge: the things you know. They mention things like coming up with conceptual graphs and multidimensional scaling, or semantic nets. In short, lay out information you know and organize it.
  3. developmental milestones: the Dreyfus & Dreyfus model mentioned earlier or Piaget's model are examples of this.

The paper concludes by reiterating that expertise is seeing what is not there, what is missing. The idea that experts have special strategies tends not to hold up to scrutiny; a broader knowledge base is instead what seems to be the differentiating factor. This is however disappointing (their words, not mine) because it doesn't tell us much about how to make more experts, so they suggest once again looking at how experts perceive things instead, and at ways to better transfer experiences.

I'd probably like to see a more modern version of it that could build on the last 30 years or so of progress in cognitivism, not quite sure where I'd find it though.


zokie
Feb 13, 2006

Out of many, Sweden
That was a top notch effort poat, shared it with my mom who is a Professor of Sociology.

You should start a podcast or something
