|
Shaggar posted:it is the ultimate irony that what you are talking about here is implementing real process controls. something you claim as been done, but has never actually ever been done in healthcare. process controls and process improvement are literrally how you close that gap from work-as-prescribed to work-as-done.

You conveniently flip back to process controls as if you were right all along. I'm not arguing against trying to improve processes; I'm arguing against the idea that suing people and sending them to jail is a good start. It's not, because it makes improving processes much harder. Don't try to take my argument as supporting yours; it's the opposite of that.

On the idea of "process controls," yours is an incredibly vague description. As broadly defined, it often ends up being some variation on Taylorist scientific management. In that form it has been attempted in healthcare in both the US and the UK already, and didn't give great results. You and I have discussed this in another thread in the past, where I kept referring to the book Still Not Safe, which analyzes the last 20 years of evidence-based medicine and its lack of results. You mostly ignored anything I posted and repeated your opinion like that gave it any more weight.

I'm not willing to throw the idea of "process control" out the window, but you need to be clearer about what you mean, because you deservedly have a reputation for bad-faith arguments for the sake of riling people up, and I'm not necessarily going to entertain your arguments when we've repeatedly discussed this and your contribution limits itself to "no, this isn't what I meant" without ever defining what it is that you mean in the first place. Either engage in good faith or kindly gently caress off.
|
# ? Mar 26, 2022 16:48 |
|
Oh, here's a timely one from "metrics in healthcare": a report dropped just today about UK hospitals that had targets for normal births without complications: https://www.theguardian.com/society/2022/mar/26/shropshire-maternity-scandal-300-babies-died-or-left-brain-damaged-says-report

quote:Three hundred babies died or were left brain-damaged due to inadequate care at an NHS trust, according to reports.

The full report isn't out yet, and the interim report from 2020 had already covered this -- it had also found that some mothers were blamed for the deaths of their own infants -- and recommended various emergency changes to the reporting structures within the units, plus re-training after bad cases kept happening due to inadequate choices of drugs used in risky pregnancies (it mentioned that they were not adequately learning from past mistakes). The full report is likely to be more nuanced than the big media lines. Anyway, I don't mean this as a random dig, just that picking metrics, and how their selection will play out when they start becoming objectives, is god drat tricky.
|
# ? Mar 26, 2022 23:54 |
|
https://twitter.com/banovsky/status/1507890882938380288
|
# ? Mar 27, 2022 19:07 |
|
touchscreens in cars should be fuckin illegal
|
# ? Mar 27, 2022 21:43 |
|
what if, in addition to both of those, the wheels also had scroll acceleration
|
# ? Mar 28, 2022 06:12 |
|
why do checklists work for pilots but not for nurses?
|
# ? Mar 28, 2022 06:16 |
|
because pilots and ex pilots write pilot checklists and worthless administrators write nurses checklists
|
# ? Mar 28, 2022 06:17 |
|
DELETE CASCADE posted:why do checklists work for pilots but not for nurses?

my quebecois uncle linked a great paper about it. https://www.researchgate.net/publication/278789222_The_problem_with_checklists

quote:Healthcare checklists do not always share design features with their aviation counterparts. For an Airbus A319 (figure 1), a single laminated gatefold (four sides of normal A4 paper) contains the 13 checklists for normal and emergency operations. Tasks range from 2 (for cabin fire checklist) to 17 (for before take-off checklist), with an average of seven per checklist. Each task is described in no more than three words

basically: medical administrators cargo-culted the idea of checklists without understanding why and how they are used in aviation, wrote checklists with uncheckable non-procedural tasks, and filled them full of bullshit management voodoo. EMPOWER STAFF. lmao

Sagebrush fucked around with this message at 06:37 on Mar 28, 2022 |
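For fun, the A319 constraints quoted above (short lists, tasks of no more than about three words) can be turned into a trivial lint. This is only a sketch: the limits come from the quote, but the example checklist items below are made up for illustration.

```python
# A rough lint for checklist design, using the aviation-style constraints
# quoted above (tasks of at most ~3 words, lists capped at 17 tasks).
# The example checklists here are invented, not taken from the paper.

MAX_WORDS_PER_TASK = 3
MAX_TASKS_PER_LIST = 17

def lint_checklist(name, tasks):
    """Return a list of human-readable problems with a checklist."""
    problems = []
    if len(tasks) > MAX_TASKS_PER_LIST:
        problems.append(f"{name}: {len(tasks)} tasks (max {MAX_TASKS_PER_LIST})")
    for task in tasks:
        if len(task.split()) > MAX_WORDS_PER_TASK:
            problems.append(f"{name}: task too wordy: {task!r}")
    return problems

# Aviation-style: short, checkable, procedural.
aviation = ["THRUST LEVERS...IDLE", "ENGINE MASTERS...OFF"]
# Management-style: vague and uncheckable.
hospital = ["Empower staff to communicate openly about any safety concerns"]

print(lint_checklist("cabin fire", aviation))   # no findings
print(lint_checklist("ward round", hospital))   # flags the vague item
```

Obviously word count is a proxy, not the real criterion (checkability is), but it's striking how well the two correlate in the examples the paper gives.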
# ? Mar 28, 2022 06:35 |
|
DELETE CASCADE posted:why do checklists work for pilots but not for nurses?

Because the pilot can always stop the flight; the nurse can't delay treatment. The staffing ratios are also very different: a nurse is solely responsible for x number of patients, while the pilot and his co-pilot are responsible for one plane. Information is also more easily accessible to a pilot than to a nurse.
|
# ? Mar 28, 2022 09:47 |
|
Pilots can't just stop the flight when they feel like it FYI
|
# ? Mar 28, 2022 17:03 |
|
http://picma.info/sites/default/files/Documents/Events/November%20Oscar%20article.pdf is an interesting case from 1989 on that front. A pilot on his way to Heathrow with a sick crewmember didn't divert to Frankfurt when he learned the weather would be horribly foggy in London, nor to non-Heathrow English airports (with the general understanding that you don't want to divert an aircraft, because that lands you a lot of questions about the costs it entails). He was short on fuel, had an autopilot that didn't lock onto the localiser come landing time, and ended up with an impractical checklist. He missed a landing, took another go, and succeeded with no injury nor damage, but the first attempt missed a hotel by ~12 ft and set off the hotel's sprinklers, causing damage there.

quote:Stewart, in defence of his actions during the company’s own inquiry, had doggedly raised issue after issue, some of which danced around the question of exactly what had gone wrong and why. Accused, for example, of failing to file immediately upon landing the necessary MOR – ‘mandatory occurrence report.’ Stewart argued that because he had at least initiated the go-around from decision height and had landed successfully out of the second approach, it didn’t constitute an ‘occurrence.’ Few agreed.

The whole thing looked like it involved bad procedures, doctors bending to pressure to make crews fly even when sick, and a sort of cover-up by the airline industry, but the pilot alone got sued and suffered the consequences. The guy was demoted, lost his lawsuit, lost his job, and eventually killed himself.

Anyway, it's interesting to think that even the real good checklists aren't always perfect and sometimes get disregarded for being impractical. Some of them have the benefit of decades of refinement and constant training/simulation, so of course just having management make a checklist in a vacuum isn't going to be sufficient.

MononcQc fucked around with this message at 17:50 on Mar 28, 2022 |
# ? Mar 28, 2022 17:48 |
|
Sagebrush posted:Pilots can't just stop the flight when they feel like it FYI But they can initiate abort/divert procedures.
|
# ? Mar 28, 2022 20:19 |
|
checklists are pretty good imo
|
# ? Mar 28, 2022 20:26 |
|
vuk83 posted:But they can initiate abort/divert procedures. and then they get fired afterwards if they hosed up jobs, aint they poo poo? and it was worse at aeroflot so not even capitalism, just havin a job
|
# ? Mar 28, 2022 20:30 |
|
to be fair it's not just commercial pilots who follow checklists.
|
# ? Mar 29, 2022 02:40 |
|
I take a checklist on first dates so I can remember to check for all red flags not just the easy ones to remember like “do you have standards?”
|
# ? Mar 29, 2022 02:55 |
|
still here, learning in progress very happy good thread

Midjack posted:to be fair it's not just commercial pilots who follow checklists.

it's usually also anybody building anything

and isn't there some sort of authority that comes from being given a checklist extrinsically, that means a person doesn't even evaluate the contents meaningfully? i've seen teams become more satisfied with addressing the needs of a person in charge, rather than alerting the team to a potential alternative or explaining an edge case that exists outside of the manager's or team's knowledge base, so many times. months of time are wasted chasing rabbits with problems that could have been entirely circumvented if the entire team wasn't on autopilot through the meetings and discussions. you need one person in the room who can put their hand up and say "nope, that's not a good idea" to the people designing the recipe, because although they understand cakes, they know nothing about kitchens, staff, or equipment, so to speak.

Expo70 fucked around with this message at 15:05 on Mar 31, 2022 |
# ? Mar 31, 2022 15:01 |
|
Carthag Tuek posted:but to be serious, there wont be any meaningful change until you start putting managers, vps, & ceos in prison
|
# ? Apr 4, 2022 01:41 |
|
I haven't posted in a short while (didn't find great papers, was busy reading other stuff), and instead today I'm posting to let you know that some of the poo poo these disciplines do, for all their love of helping cognition, includes some of the worst charts and diagrams you'll ever see. I had already referenced some bad diagrams in a previous post:

MononcQc posted:Now, the diagram I'm posting is crucial to a lot of the things referred to in the paper and many other ones, and unfortunately, people in the humanities have the jankiest most loving ridiculous diagrams (there's much worse than this one):

Well, here's a post for some more as I recall them. Here's a couple of my favorites from Jens Rasmussen in Decision-Making in Action: (I'm the cowboy of decision-making) And a few from Designing for Expertise by Woods, which I absolutely love and should probably annotate for here one of these days, but I can't imagine it being considered "good" in terms of graphics.

Don't get me wrong, sometimes they get it absolutely perfect and the diagrams are great explanations of complex mechanisms, even if they look a bit funky. Here's one of my favorites from Rasmussen's "Risk Management in a Dynamic Society," which essentially encapsulates the drift model in a single image:

But even for all the good hits, nothing can beat these ones from one of Woods' presentations on releasing the adaptive powers of human systems, for the Ohio State University's program on cognitive systems engineering, where the following visuals are used to introduce "the dragons of surprise":
|
# ? Apr 4, 2022 21:03 |
|
elden ring dlc looking kinda low effort

great posts everyone, thank you very much, keep em up
|
# ? Apr 6, 2022 22:12 |
|
I'm back at reviewing some David D. Woods stuff: Designing for Expertise. It's a book chapter, but it's nevertheless interesting and cited here and there. I had forgotten about it until I started looking for terrible diagrams in my previous post, so here we go.

First, the expert relies on a conceptual model, which is essentially a mental model: the things the expert knows about the domain that can be used to simulate what will happen. Designers essentially end up shaping how experts can form and augment these models. A basic thing they suggest in line with that is to replace the term "user" with the term "practitioner," because the people using the tech are not passive recipients of an imposed product; they're people doing poo poo, with objectives and challenges, who sometimes rely on your product to do something. Practitioners will modify unsatisfactory designs, devise workarounds, or simply abandon things that do not let them meet their goals.

So to predict how your tech is going to impact your experts, you gotta know what the hell expertise is, and have an idea what their expertise is. But you can't expect someone who designs a surgeon's tools to also be a surgeon on top of being a designer. This is something dubbed the Ethnographer's Challenge:

quote:in order to make interesting observations, they have to be, in part, insiders in the setting they are observing, while remaining, in part, outside the domain in order to have insights about how practice works, how practice fails and how it could work better given future change.

Design observations in the field of practice, where designers watch experts doing cognitive work, rely on being prepared to be surprised, in order to distinguish the unexpected behaviors that reveal how expertise works and how these experts work. You tend to end up with multidisciplinary teams where designers consult with experts to design for other experts. 
This can create clashes, because designers tend to look for simple solutions to problems, whereas systems engineers assume that only complexity can cancel out complexity. So both the approaches that aim to design for simplicity and those that are more analysis-based are needed, but insufficient. The members of this cross-disciplinary team end up having to gain some of each other's expertise for it to work. So this starts the chapter's long detour on defining expertise.

There's a big section that contains a tour of the history of the study of expertise, which I'm eliding here, after which they conclude:

quote:One of the key results is that expertise uses external artifacts to support the processes that contribute to expert performance – expertise is not all in the head, rather it is distributed over a person and the artifacts they use and over the other agents they interact with as they carry out activities, avoid failure, cope with complexity, and adapt to disruptions and change.

Anyway, past that section, we get to initial definitions of expertise. The first perspective is one where expertise is definable in terms of how much domain-specific knowledge you have and how well organized it is. The more you know, the better you are. This perspective can be expanded by saying "hey, sometimes knowledge is social too," which changes things a bit: expertise becomes having a rich repertoire of strategies for applying knowledge based on context. This further means that a) expertise is domain-specific, b) experts adapt to changes, and c) they rarely act as solo individuals. This gives them a list of 5 key attributes of experts:
The next question is how you identify people with the knowledge of experts. They mention:
The first one is: Novice (slow performers who follow rules), Advanced Beginner (they see patterns that support rules), Competent (lots of patterns known, hierarchical reasoning sequences, can deal with more situations, but still slow), Proficient (intuition takes over reasoning, decision structures are adapted to the context, and the knowledge flows naturally), and Expert (they know what needs to be done and can do it; immediate response to identified situations).

A second one is 10+ years of deliberate practice, going through 4 phases: 1. playful activity with the domain, where those with potential are selected, 2. extended preparation with trainers/coaches, 3. full-time engagement in practice and performance, making a living off of it, 4. making an original contribution to the domain by going beyond the teachers and innovating.

That requirement for innovation is one of the tricky ones when trying to design for experts: the time spent by the designer in the domain can never match that of the domain expert. The observations can't easily be linked to practice, so there is a need for a very iterative process of trial and evaluation to anchor the design. This gives us that god drat image:

The legend sort of explains what they mean. They're messy diagrams, but they try to load a lot of meaning into both. The top one puts you in a given role (the dotted circles with floating labels), and moving clockwise or counter-clockwise represents the activities required for design synthesis or analysis. The second diagram tries to put normal project labels on the map when used counterclockwise, for design creation (synthesis), to show how it would translate to practice. 
The paper spends a couple of pages explaining the map, and introducing an even more confusing one, which tracks the development of the designer's domain-specific expertise as they interact with the domain, and the places where you may want an expert to cover for your own lack of expertise:

It took me a while to get it, but this is the first model of expertise development in black (flowing counterclockwise from 'novice' to 'advanced beginner' all the way to 'eminent expert'), along with significant activities in light grey (implementing change, directing observations, etc.), overlaid on top of Figure 8.2A. The big dashed lines are essentially "regressions," where an eminent expert put in contact with a new device or technology suddenly reverts to simply being "competent," needs to gain new knowledge and mastery again, and the cycle partially starts over. Anyway, that's what I think it means, and it would have been better served by 2-3 different images IMO. It makes me feel this is a screen grab of a powerpoint slide that has had 5 minutes of animation and explanations collapsed into one unfathomably complex still image.

What happens when you introduce new technology or solutions is that your assessment of the expert has also changed: they had to adjust to the new added complexity (creating fancier conceptual models), and this happens within a broader system (where there may be other changing pieces of equipment, teammates, other experts, and unrelated people or interferences in play), so each new thing you designed becomes part of the environment and must now be accounted for. So as you understand expertise, you're able to better design for it, but as you do so, your understanding of expertise also melts away, because you changed what it means to be an expert! This leads to once again reminding people that you can only design for experts with an ongoing collaboration between designers and experts. 
The authors then summarize what expertise is once more, with extra factors that were added over the course of a few pages:
|
# ? Apr 8, 2022 03:36 |
|
This whole thread continues to be fascinating, thank you for taking the time to write these excellent posts.
|
# ? Apr 9, 2022 13:29 |
|
FalseNegative posted:This whole thread continues to be fascinating, thank you for taking the time to write these excellent posts.
|
# ? Apr 9, 2022 15:15 |
|
FalseNegative posted:This whole thread continues to be fascinating, thank you for taking the time to write these excellent posts.
|
# ? Apr 9, 2022 17:05 |
|
i saw mononcqc's dealio postin about the hosed up org on twitter (https://twitter.com/mononcqc/status/1514397732332527623) and i've always had a little weird hobby-horse that this can be modelled as a pure formal issue with eutrophication dynamics (really vanishing gradient dynamics, but eutrophication dynamics is vanishing gradient dynamics on the food chain energetics...). basically, many layers of indirection of system can only be modelled mathematically as function composition, but if you compose functions and change parameterization of the original function the chain rule sez you'll get insanity, complete insane behavior. much like the insane behavior you get in many-layers-of-indirection systems in general of course this is related to exponential bein the eigenfunction of derivative. so i am always amused by exponential discount functions, which do this poo poo in time instead of iterative function compositions, and how peeps are jazzed about avoiding hyperbolic discounting - hyperbolic discounting wrt function compositions is the thing that peeps in neural net land aim for when they try to whack vanishing gradients
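the vanishing/exploding point above can be shown numerically with a toy sketch (purely illustrative; nothing here models orgs or discounting, it's just the chain rule on a repeatedly composed linear map):

```python
# Composing the same map many times multiplies its local derivatives,
# so the end-to-end gradient either vanishes or explodes exponentially
# with depth -- the "insane behavior" of many-layers-of-indirection
# systems, and the same mechanism as vanishing gradients in deep nets.

def composed_gradient(slope, depth):
    """d/dx of f(f(...f(x))) where f(x) = slope * x, applied `depth` times."""
    grad = 1.0
    for _ in range(depth):
        grad *= slope  # chain rule: multiply the local derivatives
    return grad

print(composed_gradient(0.9, 50))   # ~0.005: vanishes
print(composed_gradient(1.1, 50))   # ~117:   explodes
```

only slope exactly 1.0 keeps the gradient stable at any depth, which is why exponentials (eigenfunctions of the derivative) show up everywhere in this kind of analysis.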
|
# ? Apr 14, 2022 17:21 |
|
Bob Dobbs is dead has posted my tweet, and while the paper is outside of ergonomics/HCI/RE, it's still humanities and I still like it a lot, so here's my take on it. It's the sort of thing that retrospectively explains large trajectories of my career and validates a lot of hunches and things I believed to be true but had no evidence for. And of course, it ties in with a lot of the stuff we discuss here.

The paper, Moving off the Map: How Knowledge of Organizational Operations Empowers and Alienates, is a work of ethnography where a researcher embedded herself into 5 organizations, covering 6 projects aimed at restructuring (business process redesign, BPR), all of which had a phase of extensive process mapping. She noticed that at the projects' conclusions, most employees returned to their roles and got raises within the organization, but a subset of them, who were centrally located within the organization, decided to move to peripheral roles. She decided to investigate this.

What she found was that tracing out the structure of how work is done (and what work is done) and how decisions are made was a significant activity behind the split. It happened because people doing this tracing had a shock when they realized that the business' structure was not coordinated nor planned, but an emergent mess, the consequence of local behaviours in various groups. Their new understanding of work resulted in either Empowerment ("I now know how I can change things") or Alienation ("Nothing I thought mattered does, my work is useless here"), which explained their move to peripheral roles. Some of these reports are also just plain heartbreaking. I have so many highlights for it.

The paper starts by mentioning that centrally-located actors (people at the core of the management structure of an organization) are less likely to initiate change, and more likely to stall it. 
Additionally, the desire for change is likely to come from the periphery, and as people move towards the center, that desire tends to go away. This is a surprise to no one. However, when central actors do initiate change, it comes from either a) experiencing contradictions, tensions, or inconsistencies that push them to reflection, or b) being exposed to how other organizations (or even societies) do things, which opens more awareness. These two things are called "disembedding," and can lead to central actors pushing for structural change.

The paper accidentally "discovered" a third approach: taking the time to study how things are done in the organization can cause that dissonance, and encourage central actors to move to the periphery of the organization in order to effect change, because they lose trust in the structure of the organization itself and their role in it.

This was found out while the author was doing a study of 5 big corporations running 6 major business restructuring projects involving hundreds of workers. She noticed that while some employees went back to their roles (but with promotions), or toward roles that were more central, when it was done, a subset of employees instead left very central roles to go work on the periphery, sometimes for less interesting conditions. So she started asking why and ran a big analysis.

What she noticed is that all the employees who eventually left their roles had been assigned tasks different from those of the rest of the people on these projects: they had been asked to do process mapping, where essentially they had to make a representation of "what we do here," how the business works, how decisions are made, and how information moves around. People not involved didn't find it significant, but people involved were shocked into leaving their roles, to make it short. 
The author makes the point that it's not process mapping itself causing this, but rather the deep engagement in representing and understanding the operations of the organization, and how one's own role fits into it; the effect was probabilistic. The tracing was done by employees who would walk the floor, ask people how they do their work, sit in meetings asking questions like "What do we do?" with people in various roles, ask them to list tasks on whiteboards, connect them with strings, and consolidate it all into huge maps like the following, which connected local experiences into a broader organizational context:

This had the effect of surfacing things that were previously invisible and making them concrete. This likely ties into the concepts mentioned here before of "work as done" vs. "work as imagined":

quote:The map allowed them to see how the system operated below the surface, integrating all the pieces to generate a comprehensive view. They commented on the uniqueness of this comprehensive view: “We don’t allow people to see the end-to-end view... to see how things interrelate.” One explained that the experience “ruins [one’s] perspective in a good way.” Another described how it gave her a “whole different way of looking at things.” By revealing the web of roles, relations, and routines that coalesce to make the organization, the map made the organization’s actual operation intelligible.

So what were the immediate consequences? I'm quoting this directly:

quote:They expected to observe inefficiencies and waste, the targets of redesign, and they did. Tasks that could be done with one or two hand-offs were taking three or four. Data painstakingly collected for decision-making processes were not used. Local repairs to work processes in one unit were causing downstream problems in another. Workarounds, duplication of effort, and poor communication and coordination were all evident on the map. 
They mention examples such as a "kingdom builder," where the map revealed a manager who kept accumulating departments for the sake of accumulating power but had been invisible to the organization, and essentially just found a lot of "what the gently caress, this is just random poo poo that's left over from really old decisions." People see local problems and general approaches, and they try to fix things. This clashes with the things the organization tries to do (when it tries), and there is no coherent organization to anything:

quote:Some held out hope that one or two people at the top knew of these design and operation issues; however, they were often disabused of this optimism. For example, a manager walked the CEO through the map, presenting him with a view he had never seen before and illustrating for him the lack of design and the disconnect between strategy and operations. The CEO, after being walked through the map, sat down, put his head on the table, and said, “This is even more hosed up than I imagined.” The CEO revealed that not only was the operation of his organization out of his control but that his grasp on it was imaginary.

This may not necessarily be surprising to people here, but it may be surprising to learn just how much more control CEOs and others think they have than they actually do! Anyway, the two reactions in general were either Empowerment or Alienation. On the front of Empowerment, this is caused because:

quote:Members of the organization carry on as though these distinctions are facts, burdening the organization’s categories, practices, and boundaries with a false sense of durability and purpose.

In short, understanding how little of it is fixed, and how much of it is arbitrary but flexible, meant that these people felt they understood how to effect change better, and that by moving away from the center and into the periphery, they could start doing effective change work. 
Alienation is so god drat heartbreaking, though, and the author warns that before starting this process in an organization, you have to be ready for some people to feel a major shock: the work they thought was valuable and important may turn out to be useless and worth nothing. In fact, the author warns that finding work and jobs that were not meaningful or useful at all was a common theme:

quote:As part of the map-building process, employees were invited to identify their role on the map and to indicate how it was connected to other roles through either inputs or outputs. Team members recounted that it was difficult to observe employees “go through a real emotional struggle when they see that what they are doing is not really adding value or that what they are doing is really disconnected from what they thought they were doing.” In one case, a finance manager noticed that his role was on the wall but that it was not connected to any other role on the wall. He had been producing financial reports and sending them to several departments because he understood them to be crucial for their decision-making process; however, no one had identified his work as an input to theirs.

A lot of people also found out that while they thought they were solving real problems, helping people with real issues, and finding real workarounds, in the overall organizational map it was meaningless and had no impact: they could be fixing real problems in departments that themselves were not useful. Others found that they had properly fixed issues by introducing new databases with critical information, but had been unable to get any buy-in for them, so the analysts and people who had spent a lot of time on these just had no impact at all:

quote:Their knowledge of the limits of local, small-scale change and the futility of changing parts of the organization without addressing the system as a whole, discouraged employees from returning to their career in the organization. 
They did not want to contribute to, or reproduce, the mess they had observed. The author states that whether it is due to alienation or empowerment, both reactions push people to move to the edges of the system, where they can find new roles or types of change that they believe are more useful. The structural knowledge they gained essentially showed them better ways to do useful things and enact change. Specifically, learning that the organization's structure is the result of interactions, rather than a context in which those interactions take place, is a key learning that sociologists knew already:

quote:This perspective or comprehension affects how we speak and act. We speak about organizations as if they are objects that exist independent of us, and we act as though they constrain and guide our actions. When we objectify social systems (organizations, communities, families, gender roles), we apprehend them as “prearranged patterns” that impose themselves on us, coercing particular roles and rules. We free ourselves to talk about and inhabit them as independent of us: as existing prior to us, standing before us, outliving us, and operating without us. Given this, we are relieved of greater responsibility for them. Our responsibility is to skillfully fulfill our role within these objectified realms.

I think this quote above is real loving good. I'm going to conclude with it, although the author adds a section mentioning that, given this research, we can suspect some of the most effective change to be driven by actors who once were at the core of the system and moved to its periphery. This likely is a sign that they know how poo poo works and have an idea of how to challenge it. Insider knowledge dragged to the edges may be key to finding strong means of modifying how things work. I'll let you read the paper if you want the details of that.

MononcQc fucked around with this message at 17:32 on Apr 15, 2022 |
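A small aside: the finance-manager anecdote a couple of paragraphs up is, in graph terms, an isolated node in the process map. A minimal sketch of that check, with invented role names (nothing here is from the paper beyond the idea of roles connected by input/output edges):

```python
# Read the process map as a directed graph of "role A's output feeds
# role B" edges, then look for roles that appear in no edge at all --
# the finance-manager situation: producing reports nobody consumes.
# Role names below are invented for illustration.

def orphan_roles(roles, feeds):
    """Roles that neither feed anyone nor are fed by anyone on the map."""
    connected = set()
    for src, dst in feeds:
        connected.add(src)
        connected.add(dst)
    return sorted(r for r in roles if r not in connected)

roles = ["intake", "triage", "billing", "finance-reports"]
feeds = [("intake", "triage"), ("triage", "billing")]

print(orphan_roles(roles, feeds))  # ['finance-reports']
```

The real maps were built from interviews and whiteboards rather than anything this tidy, of course, but the structural finding is exactly this kind of property made visible.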
# ? Apr 15, 2022 16:00 |
|
The last two papers you've gone through have been really helpful for me in understanding my job/role and struggles that I have with it -- seriously, thank you.
|
# ? Apr 15, 2022 16:58 |
|
good find. this is a better development of ideas i'd once had.
|
# ? Apr 15, 2022 17:46 |
|
FalseNegative posted:This whole thread continues to be fascinating, thank you for taking the time to write these excellent posts.
|
# ? Apr 15, 2022 18:01 |
|
This week's paper is When mental models go wrong. Co-occurrences in dynamic, critical systems by Denis Besnard, David Greathead, and Gordon Baxter. This is a bit of a lighter text, but it hints at some interesting ideas around mental models, specifically in airline pilots, although the lessons apply more broadly.

One of the patterns highlighted in many sorts of incidents is one where someone's mental model and understanding of the situation is wrong, and they end up repeatedly ignoring cues and events that contradict their understanding. So the paper looks into what causes this in someone who is genuinely trying to do a good job. The paper states:

quote:Humans tend to consider that their vision of the world is correct whenever events happen in accordance with their expectations. However, two sequential events can happen as expected without their cause being captured. When this is the case, humans tend to treat the available evidence as exhaustively reflecting the world, erroneously believing that they have understood the problem at hand. These co-occurring events can seriously disrupt situation awareness when humans are using mental models that are highly discrepant to reality but

We've discussed the issue of there being more signals to process than capacity to process them. So rather than building a mental model that handles all of the information in our environment, we build goal-directed abstractions. Their main aim is to understand the current and future states of a situation, without necessarily having an in-depth awareness of all its subtleties. They're built from a) the things you know about achieving a goal, and b) some [but not all] data extracted from the environment. So the core features and concerns of a given problem get overemphasized, while the peripheral data is easy to overlook. 
Another interesting aspect of this is that essentially, the more overloaded you are with limited ability to focus, the more likely you are to automatically simplify your model and deal with correlations of the strongest elements, at the cost of all the peripheral data. This is important because complex systems—such as an airplane cockpit during an emergency situation—increase the demands on the crew. The crew possibly has to deal with nervous passengers, a change of plans with air traffic control, and keeping the plane flying while it operates abnormally. So the at-rest capacity to fully reason through everything is likely to get reduced: you don't get more bandwidth, but you do end up with more demanding tasks. There's a reference to a great concept called Bounded Rationality, which states essentially that because of the above limitations, we tend to pick cheap adequate solutions (heuristics) over optimal solutions. We go for good-enough even if sub-optimal, because it is a compromise with the cognitive cost required. Another aspect highlighted in the paper is regarding the validation and invalidation of mental models: quote:Flaws in mental models are detected when the interaction with the world reveals unexpected events. However, these inaccurate mental models do not always lead to accidents. Very often, they are recovered from. In this respect, error detection and compensation are significant features in human information processing. The weakness of mental models lies in their poor requirements in terms of validity: If the environmental stream of data is consistent with the operator’s expectations, that is enough for the operator to continue regarding the mental model as valid. The understanding of the mechanisms generating the data is not a necessary condition. This is explored through an analysis of the Kegworth air crash in 1989, which involved a plane with two engines (one on the left, one on the right). 
A fan blade detached from one of the engines, causing major vibration, and smoke and fumes to enter the aircraft through the AC system. The captain asked the first officer which engine it was; the first officer, unsure, said it was the right one. The captain throttled that engine back, and the vibrations went away. So they thought the decision was right, for about 20 minutes. When they had to land, they added more power to the left engine, and the vibration came back real strong. They tried to restart the right engine, but not in time to avoid a disaster. So big thing there: you see a problem with vibration, you turn off an engine, vibration goes back to normal. Problem solved, mental model is pleased. This makes it different from fixation errors, the pattern seen at Chernobyl. The Chernobyl example is the one where the operators thought the power plant couldn't explode, so they came up with a different explanation. Even when graphite was visible and the whole thing had gone boom, it was hard for the operators to think it was an explosion. Fixation occurs when you disregard increasing amounts of data to remain with your current explanation, whereas in this incident an unrelated event (the vibration stopping) was taken as agreement with the current explanation, and the mental model felt confirmed despite being wrong. There are other interesting factors that contribute to this:
The secondary EIS is magnified on the right-hand side of the picture. The vibration indicators are circled in white. So what the paper says here is that we had a great example of the cognitive workload of the crew growing out of control. Managing cognitive demands means that when we look for confirmation of existing models, we are okay with partial confirmation; but when we want to contradict our model, we wait for more consistent data to do so. This is related to confirmation bias. For this incident, turning off an engine and the noise reduction that followed was that partial confirmation. The authors state that one of the reasons for this behavior is that dealing with discrepancies may mean a loss of control. If you have to stop what you're doing to correct your mental model, you can't spend as much energy keeping the plane flying. So this clash of priorities may explain why people focused on a more important concern (keeping the plane in the air) have it take precedence over updating a mental model that is no longer entirely right or adequate: quote:Provided they can keep the system within safe boundaries, operators in critical situations sometimes opt to lose some situation awareness rather than spend time gathering data at the cost of a total loss of control. What are the implications for system design? Two avenues are mentioned. The first is operator training. The supposition is that if you know about these biases and mechanisms, you may end up aware of them when they take place, which should have a positive impact on system dependability. They mention catering to these possibilities by improving communication, better stress management, and more efficient distribution of decision-making. Another one is the same one mentioned in a lot of papers: automation has to be able to eventually cater to the cognitive needs of the user, and better plan and explain the state transitions it is going through and the objectives it is trying to attain. 
Essentially, find ways to give relevant data to the operator without them having to cognitively do all the work to filter it out and know its relevance. This is, again and unsurprisingly, an open problem, because all of that stuff is contextual.
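A toy sketch of the asymmetry described above: partial evidence is enough to keep a mental model, while discarding one demands much more consistent data. The threshold values here are invented for illustration, not from the paper.

```python
# Toy model of confirmation under cognitive load: supporting evidence is
# accepted at a low threshold, contradicting evidence at a much higher one.
CONFIRM_THRESHOLD = 0.3  # partial confirmation suffices (invented value)
REFUTE_THRESHOLD = 0.8   # only consistent data dislodges a model (invented value)

def keep_model(currently_held: bool, strength: float, supports_model: bool) -> bool:
    """Return whether the operator keeps their current mental model."""
    if supports_model and strength >= CONFIRM_THRESHOLD:
        return True   # e.g. Kegworth: vibration stopped after throttling back
    if not supports_model and strength >= REFUTE_THRESHOLD:
        return False  # only strong, consistent contradiction updates the model
    return currently_held  # weak evidence changes nothing
```

With these numbers, weak confirming evidence (0.4) keeps the model while equally weak contradicting evidence is ignored, which is the pattern the Kegworth crew fell into.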
|
# ? Apr 24, 2022 02:54 |
|
drat this is great stuff
|
# ? Apr 25, 2022 15:22 |
|
This week's paper is one I found through an episode of The Safety of Work podcast, referencing a text titled Observation and assessment of crossing situations between pleasure craft and a small passenger ferry. I'm picking it for the same reasons the podcast did, which is asking the question Can we get ready for automation by studying non-automated systems? This is because the paper looks at a system that should on its face be really easy to automate, if we assume that navigational rules are respected. The paper studies the Ole III, a small passenger ferry in the Husøysund strait in Tønsberg municipality, Norway. The ship is 8m long by 2.6m wide, carries 11 passengers at most (plus the captain, who's responsible for the passengers), has a single 38hp engine, and uses only optical navigation with binoculars and a magnetic compass. The captain makes all assessments and decisions according to his experience and judgment (no communication overhead), and the crossing is always the same, taking roughly 2 minutes when traffic is low and weather is good. The strait it crosses is between 100 and 150m wide shore-to-shore, and has a central channel 6 meters deep with no traffic separation zones. By navigational rules (they're complex and listed in the paper), the Ole III ought to have right of way by virtue of being a commercial craft with passengers. However, the channel is heavy with traffic, and is used frequently by pleasure crafts (which the authors expected would not respect all rules). Similarly, since the Ole III is small and maneuverable and has no large draft, it cannot with certainty claim right-of-way over vessels on its starboard side. The law also states that all other ships (including pleasure crafts) should "as far as possible keep away", which is not the same as actually giving way. 
All in all, this sounds like it should be as straightforward as it can be: small, short route, always the same, with a general right of way and no need for fancy instrumentation. But it's a bit more complicated than that. While the captain of the Ole III might be able to claim that he legally has right of way, whether that holds in practice depends on other ships' understanding of navigational law as well. Pleasure crafts in particular may be manned by incompetent skippers, who may be on vacation, driving at high speed, while drunk. So what happened in the paper is that the researchers sat and observed between 10am and 8pm from June 4 to August 4, 2018, covering 4,802 two-minute crossings, and looked for all sorts of incidents or near-misses. They wanted to account for all deviations from navigational laws that would be encountered by the Ole III, to calculate the risks and to see how the captain dealt with them. They encountered a total of 7,415 other vessels coming through, with 4,150 from starboard and 3,265 from the port side. 6,225 passengers were recorded, with 1,227 under 16 years old and 60 requiring assistance to get on board (kindergarten-age kids). 3,995 bikes were also transported. They recorded 279 instances of other vessels being on a conflicting course that could be given a risk classification of incident or near-miss, amounting to 5.8% of crossings, or 8.9% of crossings with vessels nearby. These involved behaviours where other ships didn't respect the rules; notes were taken, thematic analysis was done, and two people analyzed them (a navigator with 8 years of sailing onboard vessels in the Royal Norwegian Navy and 20 years of experience in different jobs in the maritime industry, and a professor of maritime human factors). They came up with the following risk categories:
The ways the captain of the Ole III avoided incidents are divided into two categories, passive and active control strategies. Passive strategies reduced risk by avoiding other vessels: waiting before entering the fairway, sailing behind (aft of) other ships, reducing speed, actively reversing thrust, and emergency deviation. Active strategies had to do with maintaining steady heading and speed (even if other vessels were around), and communicating in some way, whether through hand signals or the horn. For deviations, 89.5% of situations were handled by passive means and 10.5% via active means. For dangerous situations, 79% were handled passively and 21% actively. For critical incidents, passive handling was used 67% of the time and active handling 33% of the time. So the obvious trend here is that the more critical the situation, the more active the management. Something else revealed by the data is that most of the incidents involving pleasure crafts coming from the side over which the Ole III should have definite priority are cases where the captain can be considered to be creating safety by taking actions that defuse other people's errors. From this perspective, the captain frequently bends the rules and gives way to unlawful behaviour, but in a way that can be thought of as a counterweight to human error: it's adaptive behaviour that is outside the norms and restores safety. The situations with vessels coming from the Ole III's starboard side are more complex. The authors' discussions with the captain revealed that the captain believed he had the right of way, but maritime law experts aren't sure it's clear-cut whether he'd be responsible for any collisions, given the amount of control his vessel has compared to, say, a sailing ship. 
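The severity trend in those percentages can be checked mechanically (numbers copied from the counts quoted above):

```python
# Share of situations handled by active strategies, per risk category,
# as reported in the paper: the more critical, the more active.
active_share = {
    "deviation": 10.5,  # 89.5% handled passively
    "dangerous": 21.0,  # 79% handled passively
    "critical": 33.0,   # 67% handled passively
}

ordered = [active_share[k] for k in ("deviation", "dangerous", "critical")]
# Active management grows monotonically with criticality.
assert ordered == sorted(ordered)
```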
The authors say that it's not necessarily important why captains of vessels act the way they do (ignorance, carelessness, lack of attention, intoxication, etc.); the practical navigational situation itself needs to be resolved: quote:One way of resolving this is to take a descriptive approach, such as focusing on whether people follow rules; however, this will only help in attributing blame, or judicial responsibilities, and will not help in explaining actual behaviour (i.e. why people choose to follow a rule or not). They come up with a decision table: The main point is whether the intent of both vessels matches, not necessarily who is right or wrong. They mention that this match vs. mismatch situation holds whether vessels are manned by humans or automation on either side. Either type is considered an "adaptive agent", and any disagreement in models is riskier than agreement in models: quote:Irrespective of the nature of the adaptive agent, the challenges described in Table 8 are not possible to resolve unless (1) it is possible to establish communication of intention between vessels or (2) it is possible to ensure that all agents follow the [navigational laws] at all times. The last request is highly unlikely to ever happen as long as pleasure craft skippers lack elementary navigational competencies and knowledge of [navigational laws]. So what are the suggested control strategies? Active control strategies (following the rules and asserting your right of way) actually reduce the safety margins. So long as you can't be sure the other vessel understands your intentions or is able/willing to deviate, they're not advisable. Passive strategies prevent most risks; for small passenger crafts, they may be advisable. They would however reduce efficiency more dramatically when traffic is higher. A third option would be to formalize ways to communicate intentions between vessels (including pleasure crafts). 
Existing projects are about finding ways to share route plans, which is still tricky because pleasure crafts don't tend to have route plans. A lot of other suggested equipment is generally too expensive. So for the time being they mostly suggest passive strategies. --- So this should give interesting ideas about automation and what can be challenging about it. It nicely fits in with a lot of the literature linked here before about being able to capture and guess intentions, and about rule breaking sometimes—if not often—being a desirable way to maintain safety. Assuming that rules are going to be respected is a dangerous affair, and a lot of systems aiming for automation that take rules for granted ("otherwise blame will be on the other anyway") can end up reducing overall system safety compared to having human operators.
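The match/mismatch logic of the paper's decision table can be sketched roughly like this. The categories and return labels are my simplification for illustration, not the authors' exact Table 8:

```python
def crossing_risk(intents_match: bool, can_communicate: bool,
                  other_follows_rules: bool) -> str:
    """Rough sketch of the intent-matching idea: agreement between the two
    vessels' models matters more than who is legally in the right."""
    if intents_match:
        return "low"          # both adaptive agents expect the same manoeuvre
    if can_communicate:
        return "resolvable"   # mismatch, but intentions can be exchanged
    if other_follows_rules:
        return "elevated"     # rules give a default outcome, with no guarantee
    return "high"             # mismatch, no comms, rules ignored: give way

# The Ole III captain's observed behaviour amounts to treating most "high"
# cases passively: waiting, slowing down, or sailing aft of the other vessel.
```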
|
# ? May 1, 2022 01:28 |
|
oh i really like this boat study one a lot, thanks!
|
# ? May 1, 2022 05:17 |
|
this would not happen if Ole III had a large cannon
|
# ? May 1, 2022 14:47 |
|
Carthag Tuek posted:this would not happen if Ole III had a large cannon that reminds me, i recently learned that for hundreds of years the exclusive territorial limit a country could claim out into the ocean was ~3 miles. because that was the maximum effective range of a land-based cannon, you see
|
# ? May 1, 2022 15:13 |
|
cool boat story!!
|
# ? May 1, 2022 18:53 |
|
For this week's paper, I decided to dig into NASA's voice loop system, because I kept hearing good things about it. The paper is Voice Loops as Coordination Aids in Space Shuttle Mission Control by Emily S. Patterson and Jennifer Watts-Perotti. The paper is from 1999, when online voice communications for high-pace coordination weren't quite commonplace. But even then, there's something quite cool about it even by today's standards, especially if you've ever done live operations during outages in tech. The voice loop design is sort of opaque-sounding from the outside. Voice loops are essentially a bunch of synchronous audio channels that allow group coordination. They're also used in air traffic management, aircraft carriers, and, as is the case for this paper, space shuttle mission control. The overall structure of the voice loops matches the structure of mission control itself: quote:During missions, teams of flight controllers monitor spacecraft systems and activities 24 hours a day, 7 days a week. The head flight controller is the flight director, referred to as “Flight.” Flight is ultimately responsible for all decisions related to shuttle operations and so must make decisions that trade off mission goals and safety risks for the various subsystems of the shuttle. Directly supporting the flight director is a team of approximately sixteen flight controllers who are co-located in a single location called the “front room”. These flight controllers have the primary responsibility for monitoring the health and safety of shuttle functions and subsystems. [...] These controllers must have a deep knowledge of their own systems as well as know how their systems are interconnected to other subsystems (e.g., their heater is powered by a particular electrical bus) in order to recognize and respond to anomalies despite noisy data and needing to coordinate with other controllers. 
This diagram is provided in the paper: Controllers (people working in mission control) can listen to any loop they want at any time, even multiple at once; they typically monitor ~4 of them simultaneously. They also have a primary loop they can talk on. So you can see that the flight director loop has all the top-level controllers able to talk on it. The flight director loop has all the critical core information broadcast to everyone, and pretty much everyone listens to it. The front-to-back loops are how the higher-level controllers delegate comms to their subteams, on whose behalf they communicate with the top level. The conference loops are pre-set loops where controllers from pre-defined peer groups can go and talk to each other. But all these loops, even the front-to-back and the conference loops, can be listened to and monitored by anyone: quote:By formal communication protocols in mission control, flight controllers have privileges to speak on only a subset of the loops they can listen in on. In the voice loop control interface, each channel can be set either to monitor or talk modes. Only one channel at a time can be set to the talk mode, although many channels can be monitored at the same time. In order to talk on a loop set to the talk mode, a controller presses a button on a hand unit or holds down a foot pedal and talks into a headset. What's interesting as a property of setting up loops like this comes from the ability to coordinate. A disturbance in one of the control systems is going to be detected in one of the back rooms and discussed among the people there, who may eventually escalate the issue to their controller. Their controller can then bring it up to other top-level controllers or directly to flight control, at which point the information is broadcast everywhere. This approach ends up doing a few things:
Other interesting properties mentioned: quote:When controllers hear about the failure on the Flight Director's loop, they can anticipate related questions from the flight director and prepare to answer them without delay. Controllers can also anticipate actions that will be required of them. For example, an anomaly in one subsystem might require diagnostic tests in another system. When the controller hears about the anomaly on the voice loops, he can anticipate the requirement of these tests, and prepare to conduct them when they are requested. The paper includes a sample log showing loop communications with annotations that describe intent and escalations. Things at the same horizontal level happen at the same time: If you want explanations about that log, the paper contains them, but I'm eliding them here. The authors conclude that two main factors explain voice loops' success:
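A minimal sketch of the monitor-many/talk-on-one constraint described in the quotes above. The class and method names are mine for illustration, not NASA's:

```python
class VoiceLoop:
    """One synchronous audio channel; many may monitor, fewer may talk."""
    def __init__(self, name):
        self.name = name
        self.log = []  # stand-in for the live audio stream

    def broadcast(self, callsign, msg):
        self.log.append((callsign, msg))


class Controller:
    def __init__(self, callsign):
        self.callsign = callsign
        self.monitored = set()  # typically ~4 loops at once
        self.talk_loop = None   # only one loop in talk mode at a time

    def monitor(self, loop):
        self.monitored.add(loop)

    def set_talk(self, loop):
        if loop not in self.monitored:
            raise ValueError("can only talk on a loop you are listening to")
        self.talk_loop = loop   # switching talk mode drops the previous loop

    def talk(self, msg):
        self.talk_loop.broadcast(self.callsign, msg)


# A back-room escalation: a controller hears an anomaly on their own support
# loop, then raises it on the flight director loop where everyone overhears it.
flight = VoiceLoop("FLIGHT")
backroom = VoiceLoop("EECOM-SUPPORT")
eecom = Controller("EECOM")
eecom.monitor(flight)
eecom.monitor(backroom)
eecom.set_talk(flight)
eecom.talk("Flight, EECOM: seeing an undervolt on main bus B")
```

The key property is that escalation is just switching which loop is in talk mode; everyone already monitoring the flight loop gets the context for free, which is what lets controllers anticipate questions and tests.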
|
# ? May 8, 2022 01:28 |
|
bröther that is very cool
|
# ? May 11, 2022 15:21 |
|
This week's paper is a book chapter by Gary Klein called Seeing the Invisible: Perceptual--Cognitive Aspects of Expertise, from 1992. As is the pattern, a lot of the work I go through is by Woods or Klein because they're just titans of that stuff, and in this chapter, Klein tries to define what makes the difference between an expert, an adept, or a novice. This one is gonna be hell to cover because it's a scan and I can't copy/paste quotes. The first sentence is sort of the whole thesis: Novices see only what is there; experts can see what is not there. The question is why, or rather how? The paper first covers the difference between an expert and a novice, the development of experts, ways of framing expertise, and then the implications for their training. So what's the difference between a novice and an expert? In physics problems, both the students and the experts were able to pick up the critical cues. The observed difference was that the experts could see how they all interacted together. In tank battalions, novices can name all the critical cues and things to look out for as well, without getting overwhelmed. In medicine, the observation is that diagnoses are not really related to how thorough the practitioner is in cue acquisition, and higher levels of performance are generally not the consequence of better strategies in acquiring information that is directly perceivable. The difference noted is that rather than being able to pick up more contextual cues, experts are able to pick up when some expected cues are missing. They're able to see things unfold, make more accurate predictions about what is about to happen, and form expectations accordingly. There's also a difference between expertise and experience. A rural volunteer firefighter getting 10 years of experience may learn less than a professional firefighter spending 1 year in a decaying dense city, although some minimum amount of time is required. 
We expect experts to make harder decisions more effectively, even in non-routine cases that would stymie others. You can spot experts because:
The chapter covers a bit of literature about what makes experts different from novices, and settles on the idea that experts and novices don't use different strategies: they just have different knowledge bases to work with. Experts have more schemata, but both experts and novices reason by divide and conquer, use top-down and bottom-up reasoning, think in analogies, and have multiple mental models. The richness of the knowledge base seems to be the difference. There are however more subtle differences: novices tend to encode their models based on surface features, whereas experts tend to think in terms of deep knowledge (functional and physical relationships) and can better gauge conditions and the importance of information. The issue is: how the hell do we train people? How do you teach that? Generally this means you just train people by giving them more and more information, which the authors don't dispute, but they want to look at the cognitive angle and how things change. Seeing typicality The first thing they mention is the ability to see typicality. To know what is normal and what is an exception requires having seen lots of cases. Identifying a situation as typical then triggers a lot of responses and patterns about courses of action (what is feasible, promising, etc). This was observed in firefighters, tank platoons, design engineers, and in chess. In fact, at higher levels of expertise, this becomes sort of automated -- it's not an analytical choice, more like a reflex, or automated heuristics. Particularly, this also comes with an ability to see which situations are atypical because expected patterns are missing. It has been found that for some physicians, the absence of symptoms is often as useful as their presence in making a diagnosis. They also noticed that experts with this ability do not show a lot of skill degradation under time pressure, whereas journeymen do (blitz chess observations were behind this). 
Physicians don't really use an inductive process in diagnoses. Even if they're trained not to, they can't help but form early impressions. The idea there is that these early hypotheses, which are also found in software troubleshooting, could direct the search for more evidence, rather than just gathering facts over and over again. How is this developed? Well, not by analogies. Analogies are used a lot by novices and journeymen, and rarely by experts. Though when experts use analogies, they're on point. One explanation is that as you gain more experience, things blend together and let you more easily reason about typicality. Another possible explanation is pattern matching (which alone can't be sufficient, or experts would also suck rear end at dealing with novel situations). There's no great theory underpinning how this happens. Seeing distinctions Experts just can see more things. The example is simple: watch olympic gymnastics or diving, where you just go "well the splash was small so that had to be good" or "gosh that was a fast flip, amazing" and then the analyst points out 40 things that were imperfect but you'd never see unless it was in slow motion. This can mostly be formed when you get accurate, timely feedback on your judgment (and you can validate your hit rate). Seeing antecedents and consequences This is essentially mental simulation to let you know how you got there, and where you're likely going. Doing this lets you evaluate a course of action without necessarily having others to compare it with; you just know if it's likely to be good or bad, regardless of alternatives. The more expertise you have, the further ahead you're likely to reliably project things, or the more likely you are to imagine further back in time how things got where they are now. Implications for training For chess, the idea is that you need 10k-100k patterns, which takes ~10 years to acquire. It takes 5-10 years in many other disciplines as well. 
There is no reason to think you can train experts by showing novices how experts think. The only thing they tracked that could reliably help is metacognition (thinking about how you think about things, assessing your performance, framing yourself as a learner). They point out 4 strategies to improve perceptual skills:
So to do that, you have to be able to spot who the experts are. They define three criteria:
The paper concludes by reiterating that expertise is seeing what is not there, what is missing. The idea that experts have special strategies tends not to hold up to scrutiny; a broader knowledge base is instead what seems to be the differentiating factor. This is however disappointing (their words, not mine) because it doesn't tell much about how to make more experts, so they suggest once again looking at how experts perceive things instead, and at ways to better transfer the experiences. I'd probably like to see a more modern version of it that could build on the last 30 years or so of progress in cognitivism, not quite sure where I'd find it though.
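The "seeing what is not there" point lends itself to a tiny sketch: typicality as a stored schema of expected cues, and atypicality as the set difference between what the schema predicts and what is observed. The schema contents here are invented examples, not from Klein:

```python
# Expertise as a rich base of schemata: each schema lists the cues a typical
# instance of the situation should present. Contents are invented examples.
SCHEMAS = {
    "routine_house_fire": {"smoke", "heat", "crackling", "visible flames"},
}

def assess(schema_name, observed_cues):
    """Compare observations against a schema: the expert notices not just
    surplus cues, but the expected cues that are absent."""
    expected = SCHEMAS[schema_name]
    return {
        "missing": expected - observed_cues,  # what isn't there but should be
        "surplus": observed_cues - expected,  # cues the schema doesn't predict
    }

# Lots of heat and smoke but no crackling or visible flames: an expert flags
# the absences and treats the situation as atypical rather than routine.
report = assess("routine_house_fire", {"smoke", "heat"})
```

The novice sees only the observed set; the schema is what makes the complement visible, which is why the chapter argues the knowledge base, not the strategy, is the differentiator.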
|
# ? May 15, 2022 16:45 |
|
|
That was a top notch effort post, shared it with my mom who is a Professor of Sociology. You should start a podcast or something
|
# ? May 16, 2022 22:10 |