MononcQc
May 29, 2007

kitten emergency posted:

idk if it’s come up in the thread but I like pointing people at this paper (https://www.england.nhs.uk/signuptosafety/wp-content/uploads/sites/16/2015/10/safety-1-safety-2-whte-papr.pdf) about safety-1 and safety-2 to draw similar distinctions that rosenthal does in the piece that was linked earlier

kitten emergency posted:

the reason most of this stuff lands like yesterdays trout on the desks of execs is because they can’t fathom that “people will do the right thing the overwhelming majority of the time”

To add to this, once you know about that stuff, you start seeing whether the org you're in is functioning based on trust (higher ups set a direction and defer decision-making authority of details to lower-level staff by believing they will do what is needed) vs. on control and authority (decision-making is moved up to middle-management levels and workers are seen as doing what they're told).

You can see it in some policies as well: are code reviews done so senior code owners catch the mistakes of juniors or to distribute knowledge about what is going on? Are "best practices" just suggestions that you are expected to apply judgment to and improve, or are they hard rules from which any deviation is seen as problematic? Are teams empowered to adjust and change how they [self-]organize work, or is the structure dictated from above?

Safety-I and Safety-II are seen as two necessary parts (and not as Safety-II replacing Safety-I even though that argument is sometimes made), and one of the key concepts is making sure that in the process of Safety-I (preventing incidents and risky deviations) you don't end up also accidentally preventing Safety-II (adaptations and deviations that make work successful and safe).

The concepts of Work-as-Done vs. Work-as-Imagined enter that discussion because top-down decision making with little trust dictates how work should take place based on what higher-ups think the work is like (Work-as-Imagined), which is necessarily not the same as how people on the shop floor actually make things work (Work-as-Done). Trust and deferring authority mean that you let the people who know how Work-as-Done functions participate in defining it.

Purely authoritarian approaches (e.g., Taylorist scientific management) end up punishing deviations, and create covert systems where Work-as-Done drifts further and further away from Work-as-Imagined. Other categories are invented to describe this, such as Work-as-Reported (what people say they do, different from what they actually do) and Work-as-Prescribed (what people are told to do, which is based on Work-as-Imagined and differs from Work-as-Done).

It's often an organizational pathology to let that drift grow bigger and bigger, and a much saner pattern to accept deviations and use them to improve Work-as-Imagined and Work-as-Prescribed and close the gap. The more trust you show in workers, and the more trust they in turn show in their management supporting them, the more accurate Work-as-Reported is going to be as well.

The risk of just "removing fragility" is that it is often applied, based mostly on what is reported and imagined, in ways that hinder the necessary adaptations that Work-as-Done relies on. It may actually create fragility by removing mechanisms through which workers, just by doing their job every day, actively or passively prevent multiple other threats and issues. So stamping out one weakness exposes half a dozen more that did not show up while the one weakness was present, as a consequence of a tradeoff.

MononcQc fucked around with this message at 17:51 on Oct 1, 2022

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison
in a more limited context (operation of production software systems) I tend to argue the replacement theory because incidents are impossible to prevent and reliability isn’t a fixed target etc etc

MononcQc
May 29, 2007

I had somewhat successful buy-in by comparing software system incidents to forest fires, in that they're not preventable and they're often natural consequences of practices, and that it's better to learn how to manage them (and even do controlled burns -- as in chaos engineering) than trying to prevent them all.

Californian coworkers understood it more intuitively, and it was the first argument I made that really registered well in favour of not using outages/incidents as objectives, and rather making sure that our objectives reflected what we felt would be good reactions and preparation, rather than just "number of bad events." It made more intuitive sense or led to better results in negotiating objectives than just "if you make a metric people will game it" ("yes, but we still need one")

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
sizes of incidents and forest fires both tend to be 1/f noise iirc. the sfi peeps have a little sideline giving distributional predictions
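
if you want to see what that looks like without real data, here's a throwaway synthetic sketch (pareto draws standing in for incident/fire sizes, nothing empirical about it): the rank-size points come out roughly on a straight line on log-log axes, which is the shape of plot the sfi folks like to show

code:

# synthetic heavy-tailed "incident sizes": Pareto draws, then a rank-size view
# in log-log space; the slope comes out near -1/alpha for this kind of sample.
import math
import random

random.seed(1)
alpha = 1.1  # heavier tail = more huge events relative to small ones
sizes = sorted((random.paretovariate(alpha) for _ in range(10_000)), reverse=True)

for rank in (1, 10, 100, 1000, 10_000):
    print(f"log10(rank)={math.log10(rank):.1f}  log10(size)={math.log10(sizes[rank - 1]):.2f}")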

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison

MononcQc posted:

I had somewhat successful buy-in by comparing software system incidents to forest fires, in that they're not preventable and they're often natural consequences of practices, and that it's better to learn how to manage them (and even do controlled burns -- as in chaos engineering) than trying to prevent them all.

Californian coworkers understood it more intuitively, and it was the first argument I made that really registered well in favour of not using outages/incidents as objectives, and rather making sure that our objectives reflected what we felt would be good reactions and preparation, rather than just "number of bad events." It made more intuitive sense or led to better results in negotiating objectives than just "if you make a metric people will game it" ("yes, but we still need one")

oh, that’s a good one, gonna steal that

MononcQc
May 29, 2007

This week's paper is The Cultural Transmission of Tacit Knowledge by Helena Miton and Simon DeDeo. This is a fresh paper from this year and it has few citations, so who knows if it's any good! However, it mentions some interesting things about tacit knowledge and how it manages to be kept alive despite being pretty much unexplainable. Also it's the one paper I read this week so I was taking notes as I went.

The paper introduces a somewhat mathematical model of what tacit knowledge is and how it spreads, but I'm going to stick to a higher-level discussion. If you want to see the math and illustrations about the model, do click the link and read the paper.

Tacit knowledge is essentially the set of things you know but can't explain; it is made up of cultural practices, know-how, practical knowledge, and is found in everything from sports and arts to medicine and science, and therefore covers crafts as well as professional jobs. Generally, 3 approaches are used in teaching, and tacit knowledge is immune to them:
  1. A mental representation which is shared through verbal instruction ("sharing your understanding"). But tacit knowledge, by definition, includes things one can't explain.
  2. Showing the end result and trying to emulate it. However, tacit knowledge tends to be combinatorially complex and very context-dependent, and some parts of it are rare or never observable, so this tends not to work great because there aren't enough opportunities to see it in action.
  3. Copying the actions of an expert to obtain a similar result. This requires knowing which parts of the behaviour are desirable and which are incidental. That knowledge is itself tacit, and generally a person teaching tacit knowledge won't know which actions make sense to transmit or not.

What the paper does with their model is come up with a structure that makes it possible to transmit knowledge despite these challenges:

quote:

The solution we propose sees tacit knowledge as the emergent product of a network of interacting constraints, and transmission as a process of guiding a learner to a solution by the simultaneous, and mutually interfering, demands of both a teacher and the environment. The knowledge is tacit even in transmission because only an enigmatic fragment is ever present in the mind of either teacher or learner. The structure necessary to reconstruct the practice emerges from the interaction between the practitioner and the environment, and the teacher’s task is to guide a learner towards the correct use of that structure. In particular, by careful intervention on a small fraction of the features, a teacher can guide the learner to discover the full structure of the culturally-specific solution.

or, in short: the teacher can pick out a few core points that, if they are properly transmitted, then interact with the environment to provide solid feedback that lets the student figure out most of the other implicit stuff automatically.

Their model essentially uses a bunch of "facets", which are a list of conditional behaviors ("if this happens, then you do this other thing"). These facets are then part of sets of interacting constraints ("in this context, doing this other thing is bad"). This network of constraints is what lets people form intuitions about whether a given practice is coherent. Some constraints are positive ("doing A and B together is good"), some are negative ("doing B and C together is bad"), and it is entirely possible for a constraint network to have no configuration that satisfies everything (add "A and C together are good": doing A and C pulls B in through the first constraint, but B and C together are bad).



This image shows 6 facets. Positive constraints (good together) use solid lines, and negative constraints (bad together) use dashed lines. They provide two sample configurations, where red dots are things done and yellow dots are things not done:



They figure that configuration 1 is better than configuration 2 because it violates fewer constraints (2 vs. 4). The gotcha is that nobody--not even the experts--necessarily knows how many facets (points) there are, nor what all the constraints (lines) are supposed to be.
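
To make the bookkeeping concrete, here's a rough sketch of how I picture the violation counting working. The wiring of this toy network and the violation-counting rules are my own guesses for illustration, not the paper's actual model:

code:

# Toy facet/constraint network: made-up wiring and made-up violation rules,
# just to show what "configuration 1 violates fewer constraints" means.
# Six facets, numbered 1..6; pairwise constraints: +1 means "good together",
# -1 means "bad together" (solid vs. dashed lines in the paper's figure).
CONSTRAINTS = {
    (1, 2): +1, (2, 3): -1, (3, 4): +1,
    (4, 5): -1, (5, 6): +1, (1, 6): -1,
}

def violations(config):
    """Count violated constraints for a configuration (facet -> done or not).

    Assumed rules: a positive constraint is violated when exactly one of the
    pair is done; a negative constraint is violated when both are done.
    """
    count = 0
    for (a, b), sign in CONSTRAINTS.items():
        if sign > 0 and config[a] != config[b]:
            count += 1
        elif sign < 0 and config[a] and config[b]:
            count += 1
    return count

config_1 = {1: True, 2: True, 3: False, 4: False, 5: False, 6: True}
config_2 = {1: False, 2: True, 3: True, 4: False, 5: False, 6: True}
print(violations(config_1), violations(config_2))  # prints: 2 4 -- fewer violations = more coherent practice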

This thing above is interesting to me because it ties into the idea of "goal conflicts" -- when you are given many objectives, all of which must be satisfied, but some of which are mutually exclusive ("be fast", "be safe", and "be cheap" being an obvious and oversimplified example). The paper states that some configurations are better than others, and tacit knowledge is essentially knowing how to navigate these difficulties.

The authors state that solutions may be stable:

quote:

When a practice is a reasonably good solution to the constraint network, a practitioner who has learned the practice finds it easy to maintain. Deviations from the standard in many facets can be sensed and corrected. [Doing the wrong thing] provides her with a signal that can be used to return to the standard. Even if she is unaware of which facet deviated, she can make little (i.e., roughly single-facet) adjustments in her behavior until consonance returns. When the practice is a reasonably good solution, in other words, the practitioner only needs to implement the solution. She does not need to understand it. Stable solutions like these are candidates for culturally transmitted tacit knowledge practices

The key point with this then is that if the learner can be guided by a teacher close enough to the standard practice, the feedback from constraints will be sufficient to maintain her there. These may include physical interventions (positioning someone directly), scaffolding (tools that shape practice), mnemonics (principles and memorable guidelines), or verbal guidance ("keep your back straight!").

Only a few facets need to be transmitted in a reliable way, which the authors call the "kernel", and which can be activated in context to recover the rest of the practice:

quote:

In our toy example, practice one can be efficiently transmitted to the next generation by fixing only two critical nodes (nodes three and six). A learner who obeys her teacher’s guidance in these two facets can learn the full pattern simply by remaining attentive to environmental feedback.
[...]
That effective subset of interventions (a kernel), when placed in an embodied context, reliably activates the characteristic and flexible behaviors of an expert. The very nature of tacit knowledge means that the teacher is unaware of the exact nature of practice she exemplifies. However, the structure of the problem also can enable “tacit teaching”, where the teacher intervenes in a fraction of the facets but nonetheless passes on the practice to some of the learners with near-perfect fidelity.

What their model implies is that the kernel only needs to be a tiny fraction of the whole (10-15%), and a skilled trainer transmitting only that fraction would still be able to pass on the whole practice to the learner, even though only a small amount of information is conveyed between them. This, in short, would explain why tacit knowledge is transmissible at all, even with no one being able to describe everything it entails.

They found in various simulations of their model (with, say, 30 or 100 facets) that:

quote:

Neither teacher nor learner need know, in any conscious fashion, the correct pattern in all thirty facets—indeed, they need not even know how many facets there are. All that is needed for effective transmission is (1) that the teacher keep in mind four key features of the learner’s behavior, and (2) that the learner attend to the teacher’s guidance while remaining attentive to the consonance demands of her environment.
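
To make that mechanism concrete, here's a rough toy simulation in the same spirit (30 facets, a 4-facet kernel), with a completely made-up random constraint network and a naive greedy learner standing in for "environmental feedback" -- so the exact numbers mean nothing, it's just the shape of the idea:

code:

# Toy sketch: a teacher pins a small "kernel" of facets copied from an expert,
# and the learner greedily flips single facets to reduce violated constraints.
# Network, kernel choice, and update rule are all invented for illustration.
import random

random.seed(0)
N = 30  # number of facets

# Random pairwise constraints: +1 "good together", -1 "bad together"
constraints = {(i, j): random.choice([+1, -1])
               for i in range(N) for j in range(i + 1, N)
               if random.random() < 0.2}

def violations(cfg):
    v = 0
    for (i, j), sign in constraints.items():
        if sign > 0 and cfg[i] != cfg[j]:
            v += 1
        elif sign < 0 and cfg[i] and cfg[j]:
            v += 1
    return v

def settle(cfg, pinned=frozenset(), steps=3000):
    """Single-facet adjustments, keeping a flip only if it doesn't add violations."""
    cfg = list(cfg)
    for _ in range(steps):
        i = random.randrange(N)
        if i in pinned:
            continue
        before = violations(cfg)
        cfg[i] = not cfg[i]
        if violations(cfg) > before:
            cfg[i] = not cfg[i]  # undo harmful flips
    return cfg

# "Expert practice": whatever a long settling run ends up in.
expert = settle([random.random() < 0.5 for _ in range(N)])

# Teacher transmits only a 4-facet kernel; the learner settles around it.
kernel = set(random.sample(range(N), 4))
learner = [random.random() < 0.5 for _ in range(N)]
for i in kernel:
    learner[i] = expert[i]
learner = settle(learner, pinned=frozenset(kernel))

agreement = sum(a == b for a, b in zip(expert, learner)) / N
print(f"violations: expert={violations(expert)}, learner={violations(learner)}")
print(f"agreement with expert practice: {agreement:.0%}")

Depending on the run you'll sometimes see the learner land in a different stable configuration instead, which is the "some students learn something non-standard" part of their argument.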

So that explains their whole transmission theory. They then dive into broader cultural transmission. What their model shows is that the stability of the solution means most students will get very good results, and some students will get poor results. Note here that "poor" means "not culturally standard": in some cases that may be because the actual results are bad ("person fails to juggle more than two balls"), but it may also mean that they succeed, but in a way that clashes with accepted best practice within their community.

The gotcha here is that this pattern implies that cultural transmission of tacit knowledge is bursty: it's mostly always right, but once in a while someone will learn wrong and find a new mostly-stable configuration that is very effective, and possibly hard to transmit:

quote:

Long periods of stability, in which cultural practices change very little, are interspersed with chaotic periods. These chaotic periods begin with a long leap in the solution space, and the original tradition is completely lost. Communities of practice in these chaotic periods are then much worse at preserving their (new) traditions, and make long leaps in turn. This continues until a new, sufficiently stable, practice is discovered. A longer period of high-fidelity transmission commences, and the cycle repeats.

If I hand-wave away a bunch of explanations, simulations, and model calculations and self-criticisms, they then conclude with:

quote:

[Our model] shows how high-fidelity “tacit teaching” is possible, even in the case where both teacher and student lack conscious knowledge of up to 90% of the components of the practices. A small amount of guidance, well-presented, allows the majority of students to “lock in” an efficient, culturally-widespread practice. This is possible only when the features of underlying practice are subject to specific constraints and echoes observation of skill acquisition dynamics in ecological contexts.
[...]
When most students do extremely well, but a small fraction, with otherwise equivalent abilities, do extremely poorly, it may be a sign that tacit knowledge is at play.

Anyway, I found this paper interesting because it proposes an explanation of how skills that are hardly teachable nevertheless get transmitted, how they can lead to various distinct "schools" that each are stable and sustainable approaches to a discipline, and how such skill transmission can also disappear. It also ties in with Situated Learning, which assumes that some forms of knowledge are generally very social (rooted in apprenticeship and communities of practice) rather than just descriptive.

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
direct from sfi weenie land from two sfi peeps, complete with the line over ostensible power law in log log plot

the underlying phenomenon is almost surely just backbone variables in csp's. the backbone variable phase transition is just plainly responsible for the complex phase space of constraint satisfaction problems ceteris paribus but a perusal through parisi's papers or mezard's book (this one https://web.stanford.edu/~montanar/RESEARCH/book.html) woulda told you that without putative actual semantic content stuck in it like these peeps have

the annoying thing to ask is, 'would it hold in xor csp's?' cuz all the phase transition stuff does hold with xorsat. this hosed deolalikar's attempt to prove p != np cuz the whole argument was based upon phase transition structure and went through for xorsat, which is in p. it is actually really the case that every np problem has this complex phase transition structure, so sad that some p problems do too

bob dobbs is dead fucked around with this message at 05:00 on Oct 2, 2022

MononcQc
May 29, 2007

I understand none of these acronyms you use and am foreign to the lexical field you’re referring to

(aside from p=np)

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost

MononcQc posted:

I understand none of these acronyms you use

santa fe institute was created by nuke physics peeps to pretend to be condensed matter peeps and futz around with computers for a bit and sorta metastasized into a big condensed matter theory institute with big big weird computational pretensions

one of the more successful pretensions is basically noting that satisfiability problems (and by extension all np-complete poo poo) looks mighty like a condensed matter lattice. then doing condensed matter and statistical physics poo poo on it. find free energy, find partition function oh wait poo poo there's a second order phase transition just like in ferromagnetism (not like ice and water thats first order), oh drat gimme a nobel prize (nobel prize given 2021, to parisi. parisi isn't sfi but published a lot w sfi peeps)

csp = constraint satisfaction problem. 'the kinda poo poo you pull out the backtracking algo for'

xorsat is like satisfiability, but instead of OR clauses you got XOR clauses. because 1 XOR 1 = 0 you can do gaussian elimination to solve instances of xorsat so its in p. this is a pretty good annoying question to ask because linearity is always related to easy solvability and the whole paper deals with how experts wrestle with weird and hard stuff
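
if you want the gaussian elimination trick spelled out, here's a tiny sketch over gf(2) with a made-up 3-variable instance (bitmask per clause, rhs bit):

code:

# xorsat: each clause "xi xor xj xor ... = b" is a linear equation mod 2,
# so plain Gaussian elimination decides it in polynomial time.

def solve_xorsat(rows, n):
    """rows: list of (coeff_bits, rhs) with coeff_bits a bitmask over n vars.
    Returns a satisfying assignment (list of bools) or None if unsatisfiable."""
    rows = [list(r) for r in rows]
    pivots = []  # (column, row index)
    r = 0
    for col in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][0] >> col & 1), None)
        if piv is None:
            continue  # no pivot: this variable stays free
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][0] >> col & 1:
                rows[i][0] ^= rows[r][0]   # xor the pivot row into the others
                rows[i][1] ^= rows[r][1]
        pivots.append((col, r))
        r += 1
    if any(coeffs == 0 and rhs == 1 for coeffs, rhs in rows):
        return None  # a "0 = 1" row: unsatisfiable
    x = [False] * n
    for col, i in pivots:
        x[col] = bool(rows[i][1])  # free variables default to False
    return x

# (x0 xor x1 = 1), (x1 xor x2 = 0), (x0 xor x2 = 1)
print(solve_xorsat([(0b011, 1), (0b110, 0), (0b101, 1)], 3))  # [True, False, False]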

bob dobbs is dead fucked around with this message at 05:10 on Oct 2, 2022

MononcQc
May 29, 2007

okay got it. But it sounds to me like one of the core concepts here is that you can’t easily do satisfiability or constraint satisfaction there because the whole point of tacit knowledge is that the actual graph — both the vertices and the edges — are mostly unknown or impossible for practitioners to describe, and deeply entangled into their ecosystem.

Their model uses these mechanisms to say “if it does work like that, then we’d expect to see these properties” which as far as I can tell is different from proposing that these core points can be identified and computationally optimized.

OTOH it’s a reasonable leap to use the model and leverage it, but I have a hard time believing it wouldn’t hit the same general limits we see in complex systems theory and ecosystems: the relationships are non-linear, and a lot of the interactions are dynamic such that any adjustment made by one part of the graph triggers unpredictable adjustments in other parts and the whole thing hardly remains stable.

to put it another way, I could imagine the graph and constraint satisfaction as a useful analogy to illustrate the mechanisms in play, but I have a hard time believing that they hold the key to actually succeeding at improving things — aside maybe from identifying situations where situated learning is a good candidate?

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
linearity and nonlinearity and polytime and nondeterministic polytime are deeply related - i just told you that xorsat can be solved by gaussian elimination of matrices representing the var-clause relations cuz of the linearity of xor in the boolean ring

MononcQc
May 29, 2007

yeah I get that, but the IRL graph is poo poo like “if you’re horseback riding and you move your leg that way the horse doesn’t like it” and uh I have a hard time thinking the computer is gonna help

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
theyre physicists dude

just shove that poo poo in the complaining and concerns section

MononcQc
May 29, 2007

yeah that’s why to me it’s a fun model of “here’s how maybe teaching a tiny bit of things may accidentally unlock the whole understanding” but does not hold keys to actually figuring out how to do it aside from letting you know you have to have apprenticeships in context with your peers to pick it up.

I guess for a long while Chick Sexing was a great example of this sort of stuff, in a very specific and narrow domain.

E: nicer link on chick sexing https://psmag.com/magazine/the-lucrative-art-of-chicken-sexing. Reading the article with the lens of the model is a fun exercise.

MononcQc fucked around with this message at 05:47 on Oct 2, 2022

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

this rings a bell for me with something i cannot fully recall but the point would effectively be that a practitioner trying to actually explain their tacit understandings would lose the mental model by trying to describe it and be worse off for having tried to analyze it

Share Bear
Apr 27, 2004

Hi,

I skipped about 40 posts to write this. Does anyone know any good books or research on reducing alert fatigue and increasing the efficacy of alerts? I regret to say I have rarely had to research my own original stuff from sources so that skill is very unpracticed.

I'm currently in a situation where people are angling to add another alert methodology (slack messages) to the various dashboards and alerting systems we already have. I want to prevent this and make everyone realize we gotta clean a lot of stuff up first before doing so.

I imagine a good alert as someone tapping you on the shoulder and saying "Hey, this thing is broken, and I think you're the best person to look at it. You might want to start by checking x". I think a lot of alerts we have are more like logging or dashboards and I want to be able to either purport or insist on a mental framework when setting up alerts and alert channels.

Is one alert actionable? It depends on the alert. Is the same alert 100 times actionable? Also depends. I want to set up some sort of heuristics that aren't just me making stuff up based on what seems good and iterating on that.

Share Bear fucked around with this message at 17:32 on Oct 5, 2022

qirex
Feb 15, 2001

Share Bear posted:

Does anyone know any good books or research on reducing alert fatigue and increasing the efficacy of alerts? I regret to say I have rarely had to research my own original stuff from sources so that skill is very unpracticed.

I got a lot out of Defensive Design for the Web but it's not directly applicable to your situation. there might be something newer about a similar topic

MononcQc
May 29, 2007

Share Bear posted:

Hi,

I skipped about 40 posts to write this. Does anyone know any good books or research on reducing alert fatigue and increasing the efficacy of alerts? I regret to say I have rarely had to research my own original stuff from sources so that skill is very unpracticed.

I'm currently in a situation where people are angling to add another alert methodology (slack messages) to the various dashboards and alerting systems we already have. I want to prevent this and make everyone realize we gotta clean a lot of stuff up first before doing so.

I imagine a good alert as someone tapping you on the shoulder and saying "Hey, this thing is broken, and I think you're the best person to look at it. You might want to start by checking x". I think a lot of alerts we have are more like logging or dashboards and I want to be able to either purport or insist on a mental framework when setting up alerts and alert channels.

Is one alert actionable? It depends on the alert. Is the same alert 100 times actionable? Also depends. I want to set up some sort of heuristics that aren't just me making stuff up based on what seems good and iterating on that.

Relevant literature off the top of my head:

A selection of diagrams from these sources:




I have strong opinions on a lot of alert design things and the importance of having them mediate stimuli adequately, but you asked for research instead of hot takes and that's what I could quickly come up with.

MononcQc
May 29, 2007

Just been referred to The Alarm Problem and Directed Attention in Dynamic Fault Management (1995). I haven't read it yet, but it looks good and it's Woods.

quote:

This paper uses results of field studies from multiple domains to explore the cognitive activities involved in dynamic fault management. Fault diagnosis has a different character in dynamic fault management situations as compared to troubleshooting a broken device which has been removed from service. In fault management there is some underlying process (an engineered or physiological process which will be referred to as the monitored process) whose state changes over time. Faults disturb the monitored process and diagnosis goes on in parallel with responses to maintain process integrity and to correct the underlying problem. These situations frequently involve time pressure, multiple interacting goals, high consequences of failure, and multiple interleaved tasks. Typical examples of fields of practice where dynamic fault management occurs include flight deck operations in commercial aviation, control of space systems, anesthetic management under surgery, and terrestrial process control. The point of departure is the “alarm problem” which is used to introduce an attentional view of alarm systems as tools for supporting dynamic fault management. The work is based on the concept of directed attention -- a cognitive function that inherently involves the coordination of multiple agents through the use of external media. Directed attention suggests several techniques for developing more effective alarm systems.

I've also been given the name of Michael Rayo and I can see a few interesting possibilities on his researchgate page, though mostly about healthcare: https://www.researchgate.net/profile/Michael-Rayo. I haven't read any of those yet either.

Share Bear
Apr 27, 2004

Thank you. Should you ever be in NYC I will gladly take you out to eat or drink.

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost

Share Bear posted:

Thank you. Should you ever be in NYC I will gladly take you out to eat or drink.

same but sfba, you hear?

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison
Not exactly the angle you’re looking for but I’d approach it from an organizational analysis. Who’s driving the change? What were the factors that led to the current dissatisfaction, are those factors equally experienced or were the impacts disassociated from the inciting events, etc.

There’s plenty of great research on this topic obv. but I usually find that the research is only convincing to a limited audience.

MononcQc
May 29, 2007

(that woods paper from 1995 I linked last is probably the one I'll cover this long week-end if you're too lazy to read it or just don't have the time)

MononcQc
May 29, 2007

This week's paper is The Alarm Problem and Directed Attention in Dynamic Fault Management by David Woods, as predicted in my previous post.

It concerns mainly what it dubs "the alarm problem", which is essentially the tendency of many alarms to be nuisance alarms (not false alarms, but reports on a condition that is not considered threatening, like a smoke detector beeping when you slightly overcook your toast), alerts whose messages are either ambiguous or underspecified, alarm inflation (proliferation, I assume?), and alarms that give you an update on system status rather than pointing out anomalies. There are more bad behaviors, but those are the salient ones.

In general, Woods warns that the time periods where alarms are densest are also the periods where practitioners' cognitive load and the criticality of their tasks are highest. It's during that time that alarms are supposed to help, but if they're poorly designed they'll instead distract and disrupt important tasks. The approach he pushes instead is one where the alarm system is seen as an agent that is part of the sociotechnical system (humans and machines) and attempts to direct the attention of human observers.

This perspective is important because rather than seeing the alarm as something that just tells you about important things accurately or in a timely manner, it becomes an overall cognitive task based around attention control, which becomes very contextual and needs to consider demanding activities:

quote:

A critical criterion for the design of the fault management systems is how they support practitioner attention focusing, attention switching and dynamic prioritization.

The critical point is that the challenge of fault management lies in sorting through an avalanche of raw data -- a data overload problem. This is in contrast to the view that the performance bottleneck is the difficulty of picking up subtle early indications of a fault against the background of a quiescent monitored process. While this may be the bottleneck in some cases, field studies of incidents and accidents in dynamic fault management emphasize the problem of shifting attention to potentially informative areas as many data values are changing.

[...] Shifting the focus of attention in this context does not refer to initial adoption of a focus from some neutral waiting state. In fault management, one re-orients attentional focus to a newly relevant event on a different data channel or set of channels from a previous state where attention was focused on other data channels or on other cognitive activities (such as diagnostic search, response planning, communication to other agents). Dynamic fault management demands a facility with reorienting attention rapidly to new potentially relevant stimuli.

Woods considers the control of attention as a skill that can be developed and trained, but also one that can be undermined. He also treats alarm signals as messages that direct attention to a specific area, topic, or condition in a monitored process. The receiver must in turn quickly evaluate (from partial information) whether to direct attention away from whatever it is they are paying attention to.

This creates a sort of contradictory position: you want to provide information, and that information needs to be processed to figure out its importance, mostly so we know whether it requires attention or not; but evaluating it already requires some sort of attention. Before tackling that, let's break down the parts of the equation.

First, an attention-directing signal ("look at this!") acts as a referrer. Its influence depends on the information it provides on a) the event and condition it refers to, and b) the context in which it happens. There's also some value in knowing about why the system thinks this event or value is meaningful.

Second, the concept of directed attention is inherently cooperative. One agent has to have some awareness of where the other agent's attention is and what it is they're doing, largely without explicit communication.

Third, for the communication to be effective and not too demanding cognitively, the attention-directing signal must use a "joint reference" -- meaning an external representation of a process and its state. You can talk about a known service, a known status, a given operation, and do so effectively. If you're referring to something entirely new and never seen before, you don't actually use a reference, you have to give an explanation and this is costly.

Fourth, attention management requires the ability to manage signals: enqueue them, bundle them, ignore them. This brings us back to our contradictory position, where knowing what to drop or ignore requires not ignoring the signal in the first place.

Making this work requires something Woods describes as a preattentive process:

quote:

It is important to see the function of preattentive processes in a cognitive system as more than a simple structuring of the perceptual field for attention. It is also part of the processes involved in orienting focal attention quickly to “interesting” parts of the perceptual field. Preattentive processes are part of the coordination between orienting perceptual systems (i.e., the auditory system and peripheral vision) and focal perception and attention (e.g., foveal vision) in a changing environment where new events may require a shift in attentional focus at indeterminate times. Orienting perceptual systems are critical parts of the cognitive processes involved in noticing potentially interesting events and knowing where to look next (where to focus attention next) in natural perceptual fields.

To intuitively grasp the power of orienting perceptual functions, try this thought experiment (or better, actually do it!): put on goggles that block peripheral vision, allowing a view of only a few degrees of visual angle; now think of what it would be like to function and move about in your physical environment with this handicap [...] [T]he difficulty in performing various visual tasks under these conditions is indicative of the power of the perceptual orienting mechanisms.

In short, we're already quite good as humans at doing that sort of non-focused pre-processing and organization of data to help filter and pre-direct where to give attention next by choosing which part of the data space to focus on. An alarm designer must therefore try to build mechanisms that support preattentive processes, to strike a "balance between the rigidity necessary to ensure that potentially important environmental events do not go unprocessed and the flexibility to adapt to changing behavioral goals and circumstances."

For this to work, your preattentive signal needs to:
  1. be capable of being picked up in parallel with other lines of reasoning
  2. include partial information on what it refers to so the observer knows whether to shift attention or not
  3. be assessable (signal plus partial information) at a low enough cognitive cost that it doesn't interrupt ongoing reasoning

An example of this sort of thing was found by accident with the control rods in nuclear power plants. The position of the rods was indicated by a mechanical counter, which created an audible "click" when the state of the system changed. If the rods moved faster, the clicks also came faster. A similar signal also existed for boron concentration in coolant fluids (and you may imagine hearing "how close to boiling" the water is from this sound). It turns out that this let the plant operators handle control rods in parallel with other (primarily visual) elements, and notice whether the system was changing or steady. In the end, this could create a background "normal" state where operators could pick up variations and departures from expected states.

Older analog alarm displays (annunciator displays) had some good properties for this as well. Annunciator displays had a fixed array of tiles, fixed in space on a board. When a change or event happened, a tile would light up. While this had a lot of weaknesses, one of the advantages was that experienced operators could end up picking up patterns where specific alerts or groups of alerts would light up specific physical locations, so they could get an idea of what was going on from peripheral vision alone. If you put related elements together, then parts of the board gained a better spatial organization.

It's important to point out that preattentive processes are not conscious decisions or judgments but a sort of recognition-driven process. A key factor is that they can coordinate with focal attention through existing perceptual mechanisms to help with attention management.

But it's not sufficient to just plop something in your peripheral vision. Bad alerts only tell you "something is wrong here." As pointed out earlier, this is an underspecified alarm because good ones refer both to a state/event/behavior and to a reason why the signal is interesting. An example of this came from the study of computer displays that used icons representing processes, which changed hues when anomalies were detected. In dynamic settings, a fault tends to come with a cascade of disturbances, which meant alarms would tend to come up in groups, which would hide changes:

quote:

The hue coded icon display provided very little data, forcing the operator to switch to other displays as soon as any trouble at all occurred in the monitored process; in other words, it was a data sparse display. Field studies support this result. Practitioners treat systems with uninformative alarm systems as if there were only a single master caution alarm.

This can generally be improved by finding ways to increase the informativeness with partial information that can be more rapidly evaluated.
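
As a made-up illustration of what that partial information could look like for a software alert (the fields and wording are mine, not Woods's; the point is just that the receiver can evaluate the signal cheaply without switching displays):

code:

# Sketch: a bare "master caution" ping vs. an alert that carries a referent,
# a hint at why it's interesting now, and a cheap first check. Field names
# and example values are invented for illustration.
from dataclasses import dataclass

@dataclass
class Alert:
    referent: str         # the shared, already-known thing being pointed at
    observed: str         # what departed from expectations
    why_interesting: str  # why the system thinks this matters *now*
    first_check: str      # a cheap next step for whoever gets interrupted

bare = "checkout-service: something is wrong"  # underspecified; forces a full context switch

rich = Alert(
    referent="checkout-service / payment queue",
    observed="queue depth climbing for 10 min while dequeue rate stays flat",
    why_interesting="this pattern usually precedes storefront timeouts",
    first_check="compare consumer count against what the last deploy expected",
)
print(bare)
print(rich.first_check)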

Another issue comes with nuisance alarms, which often highlight conditions that may be anomalous but turn out to be expected in the current context. These tend to require more intelligence/awareness in the alarm system about the ongoing context:

quote:

Alarms should help link a specific anomaly into the larger context of the current activities and goals of supervisory agents. What is interesting depends on practitioners’ line of reasoning and the stage of the problem solving process for handling evolving incidents. [...] [T]he context sensitivity of interrupts is the major challenge to be met for the development of effective alarm systems, just as context sensitivity is the major challenge for developing solutions that treat any data overload problem

Variations and change are the norm, so Woods recommends focusing on differences from a background, or departures from normal function and from models of expected behaviour in specific contexts.
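
For a toy sense of what "expected behaviour in a specific context" could mean in software terms (contexts, numbers, and thresholds all invented here; it's only meant to show that "anomalous" is relative to what's currently going on):

code:

# Sketch: gate an alert on departure from a context-dependent expectation
# instead of a single fixed threshold. Baselines and tolerance are made up.
def expected_error_rate(context):
    baselines = {
        "steady_state": 0.01,
        "deploy_in_progress": 0.05,      # brief error blips are expected here
        "dependency_maintenance": 0.08,
    }
    return baselines.get(context, 0.01)

def should_alert(observed, context, tolerance=2.0):
    """Alert only when we exceed the contextual expectation by a wide margin."""
    return observed > tolerance * expected_error_rate(context)

print(should_alert(0.04, "steady_state"))        # True: surprising when nothing is going on
print(should_alert(0.04, "deploy_in_progress"))  # False: expected during a deploy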

Other suggestions include finding representations of processes that can emphasize and capture changes and events. You may also want to take advantage of non-visual channels, or if none are available, peripheral vision channels. Specifically, analog graphical representations tend to be friendlier to peripheral access, along with spatial dedication (like with annunciator displays).

Woods concludes (after a lot of examples that I encourage people to look into if they want more info):

quote:

[A]ttentional processes function within a larger context that includes the state of the process, the state of the problem solving process, practitioner expectations, the dynamics of disturbance propagation. Considering each potentially anomalous condition in isolation and outside of the context of the demands on the practitioner will lead to the development of alarm and diagnostic systems that only exacerbate the alarm problem. [...] In aggregate, trying to make all alarms unavoidable redirectors of attention overwhelms the cognitive processes involved in control of attention [...] Alarms are examples of attention directing cognitive tools. But one must recognize that directed attention is only meaningful with respect to the larger context of other activities and other signals.

MononcQc
May 29, 2007

oh yeah I forgot to put it in because it wasn't in my highlights, but Woods does point out that computer-based dashboards, in their default state, tend to have the "keyhole" problem: you only focus on a tiny part of the system at a time in great detail, and then flip through various windows/tabs/displays to see the next relevant thing. This, in short, requires active attention shifts across all sources of information and bypasses almost all the mechanisms that help preattentive processes.

The observed behaviour of experts who have to use these systems is that they'll pick a tiny (but useful) subset of important metrics, put them in a fixed place, and never depart from that display when doing important tasks, because they don't want to manage flipping through information. Most of the available data will never be used because it isn't accessible in cognitively effective ways.

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

another great writeup, thank you

MononcQc
May 29, 2007

This week's paper is a work of ethnography joined with software engineering titled Unruly Bodies of Code in Time. It's a chapter from a book, written by Marisa Leavitt Cohn, which studies how software engineers tend to consider their code to be timeless and immaterial, but at the same time turn out to have harsh judgments on legacy code, obsolescence, and the impact on their careers. I'm picking this one because a) I read it this week b) it's a cool humanities view of software engineering c) it has some solid quotes.

The chapter is organized first in an overview, and then in "vignettes", which are sample stories from the ethnographic work, done by embedding themselves in the software development teams at the JPL labs (NASA) responsible for the Cassini mission.

She first mentions that software engineers deal with "material instantiations of code" particularly when dealing with long-lived projects running on legacy software. They have to deal with unmaintained languages, older hardware, deprecated protocols, and keeping things running. Yet at the same time, there is a strongly held belief that code itself does not decay, much like math, I assume.

quote:

Bodies of code are increasingly bound up in the contingencies of historical organizational decisions, material constraints of available technologies, as well as the careers of those maintaining the code.
[...]
The negotiations that take place in managing aging software are not only a matter of securing computational systems from disastrous changes; they are also a matter of how engineers manage the temporality of obsolescence and the entanglement of their own careers, language proficiencies, and expertise with the lifetimes of systems they develop or maintain.
[...]
Those who want to commoditize their expertise must then detach themselves from the concerns of particular bodies of code and their accidental materialities, and align with more universal, timeless ideals of code as immaterial.

The first vignette is from a guy who works as a navigator first, and sees software as a tool to do his job, but not an end in itself. He wants to bring in a new more modern system and has to write glue code to make it work, but also wants to make sure he remains seen as a navigator, not as a software guy, because then he's gonna be stuck handling more software:

quote:

Aligning to software work can make one indispensable if you are the only person who knows that tool, which could be bad in the long run if some other more exciting mission comes up. It also puts you at risk of becoming a software person and being seen as someone to ask to write code rather than to design a mission tour. [...] working with the new software was also a matter of distancing himself from the legacy systems in operation at Cassini that are no longer relevant.
[...] By positioning himself against the legacy software to his broader professional network at the lab, William made sure that while he was aligned to the mission system, he was not aligned to the software code in the same way that his colleagues were.

His colleagues turned out to call him out on it as if he were "selling out" to the newer system because he aligned to newer regimes. From his perspective, they are “fearing the corpse but gripping the casket”, which is an absolutely quotable bit.

The second vignette is from a green engineer working with some old rear end code base and frequently triggering incredibly old bugs that nobody else triggers because they all tend to know how things are supposed to work. By not knowing what the unspoken rules are, he keeps walking into decade-old erroneous behaviours that people had no idea could happen, such as writing over files that were supposed to be immutable for 40 years or so.

Generally the code is a sort of patchwork of various fixes, and takes on a life of its own:

quote:

When I ask if after he surfaced this bug in the system, were they able to fix it. “Not really, but [we could] just be more aware of [it]. We call those ‘features.’ When it is something that you can’t change and is just the way it is. It’s a feature. Like we have features,” he says, as he gestures to his face.

This, as the author later points out, is a very direct sign that code is material and not intemporal, but career engineers will keep stating and believing code is intemporal.

The third vignette is about a software developer who actually likes working with legacy software, enjoying the sort of detective work required. He had first worked on the Cassini mission in 1996, left for other projects, and had then more recently come back. He stated, specifically, that working with new software made him feel like a replaceable cog in a machine; new work is cookie-cutter, broken down like in an assembly chain. This felt repetitive and made his experience irrelevant.

The author points out that the general take of software engineers is to believe newer systems are better. Newer systems are generally thought to be overvalued because they're expected to be more future-proof. She does ask:

quote:

What does it mean, after all, for a system to be more maintainable than one that has been maintained for over 40 years? Systems are durable, not because of some attribute of the programming paradigm in which they arise, but simply by virtue of people contributing to keep it going.

She mentions that working with newer systems isn't so much a gain in what they can do, but the ability to continuously truncate the histories behind them. In order for code to act as a commodity, its historicity must be removed:

quote:

In long-lived systems, particular temporalities of work must be maintained in order for the system to remain vital, and likewise a system can “fail” for lack of those who know how to program in older languages. [...] legacy is considered a derogatory word, referring to code that has stuck around too long and become heavy. Old software is pathologized for being mired in the past, and those who care too much for it are as well. [...] At the same time, newer systems and methods are adopted with a rhetorical promise of eternal youth, as the solution that will never age.

[T]he “trope of immateriality” is both analytically weak, smoothing over technical complexity, and ideological in suggesting that digital systems liberate us from the historical and material contingencies of other media.

She concludes by mentioning that software can live and die by the people leaving projects and taking away the memory and history required to keep it current with its surroundings. She solidly sits in the camp that all software is material and temporal. Software engineers maintain a sort of "moral economy" where old code requires more and more maintenance, and its maintenance work is therefore de-valued, and people perceive more and more privileges towards software that has not yet been written.

quote:

This ethos shapes the attachments and moral commitments of engineers to competing valuations of maintenance and innovation. As in rubbish theory, legacy code is that which is not yet thrown away but is durable despite its devaluation and troubles the moral economy of software work. It is in this duration of unruly bodies of code in time that the ideology of “immateriality” lives

Does anyone think she got much wrong?

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

MononcQc posted:

This week's paper is a work of ethnography joined with software engineering titled Unruly Bodies of Code in Time. It's a chapter from a book, written by Marisa Leavitt Cohn, which studies how software engineers tend to consider their code to be timeless and immaterial, but at the same time turn out to have harsh judgments on legacy code, obsolescence, and the impact on their careers. I'm picking this one because a) I read it this week b) it's a cool humanities view of software engineering c) it has some solid quotes.

The chapter is organized first in an overview, and then in "vignettes", which are sample stories from the ethnographic work, done by embedding themselves in the software development teams at the JPL labs (NASA) responsible for the Cassini mission.

She first mentions that software engineers deal with "material instantiations of code" particularly when dealing with long-lived projects running on legacy software. They have to deal with unmaintained languages, older hardware, deprecated protocols, and keeping things running. Yet at the same time, there is a strongly held belief that code itself does not decay, much like math, I assume.

The first vignette is from a guy who works as a navigator first, and sees software as a tool to do his job, but not an end in itself. He wants to bring in a new more modern system and has to write glue code to make it work, but also wants to make sure he remains seen as a navigator, not as a software guy, because then he's gonna be stuck handling more software:

His colleagues turned out to call him out on it as if he were "selling out" to the newer system because he aligned to newer regimes. From his perspective, they are “fearing the corpse but gripping the casket”, which is an absolutely quotable bit.

The second vignette is from a green engineer working with some old rear end code base and frequently triggering incredibly old bugs that nobody else triggers because they all tend to know how things are supposed to work. By not knowing what the unspoken rules are, he keeps walking into decade-old erroneous behaviours that people had no idea could happen, such as writing over files that were supposed to be immutable for 40 years or so.

Generally the code is a sort of patchwork of various fixes, and takes on a life of its own.

This, as the author later points out, is a very direct sign that code is material and not intemporal, but career engineers will keep stating and believing code is intemporal.

The third vignette is about a software developer who actually likes working with legacy software, enjoying the sort of detective work required. He had first worked on the Cassini mission in 1996, left for other projects, and had then more recently come back. He stated, specifically, that working with new software made him feel like a replaceable cog in a machine; new work is cookie-cutter, broken down like in an assembly chain. This felt repetitive and made his experience irrelevant.

The author points out that the general take of software engineers is to believe newer systems are better. Newer systems are generally thought to be overvalued because they're expected to be more future-proof. She does ask:

She mentions that working with newer systems isn't so much a gain in what they can do, but the ability to continuously truncate the histories behind them. In order for code to act as a commodity, its historicity must be removed:

She concludes by mentioning that software can live and die by the people leaving projects and taking away the memory and history required to keep it current with its surroundings. She solidly sits in the camp that all software is material and temporal. Software engineers maintain a sort of "moral economy" where old code requires more and more maintenance, and its maintenance work is therefore de-valued, and people perceive more and more privileges towards software that has not yet been written.

Does anyone think she got much wrong?

A lot of extremely ancient things are difficult to maintain because they were built before modern philosophies on testing or system architecture existed. Newer systems are easier to maintain not because they're newer, but because they're all so similar to one another.

It's not about functional capability or "removing historicity" (which.. modern VCS adds _way more history_). It's about commoditizing commonalities to draw greater attention to where the problem diverges from the mean, if it ever does.

I wonder if the author is muddling domain knowledge with familiarity with a codebase. A rendering engineer can quickly move into a new rendering codebase and understand what's going on. A network programmer would likely have more difficulty acclimating to rendering code -- even within the same larger product/organization.

zokie
Feb 13, 2006

Out of many, Sweden
A lot of old code I encounter has been written by bad developers (and by that I don’t mean “not me”), but that makes sense, since the older a system gets, the more hands have touched it and the higher the odds that some of them were idiots.

That said, that post was an eye opener for me, especially regarding my earlier attitudes about development.

Now much of my effort goes into making my code understandable and approachable to new developers, which is why I force my team to use TypeScript and we hold ourselves to high code coverage targets.

Our sister team has many more problems with onboarding. I refuse to touch their code in any manner other than superficially, because they have barely any tests and with default settings the compiler gives tens of thousands of warnings. If you change something you never know if you run into one of those bugs that have been promoted to features.

I think “black box” testing or property-based testing is critical for making your tests good enough that new developers can understand the solution from them. It is something that has helped me greatly in the past: I’ve been able to throw away entire modules of code and reimplement them and still feel confident in the result because all the old tests stayed green.
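
For example, the kind of property I mean looks roughly like this (sketched with Python's Hypothesis because it's compact; in our TypeScript world you'd reach for something like fast-check instead, and encode/decode here are just stand-ins for whatever module you'd end up rewriting):

code:

# Black-box round-trip property: says nothing about how encode/decode work
# internally, only that decoding undoes encoding. A rewrite of either function
# can be checked against the exact same test.
from hypothesis import given, strategies as st

def encode(values):  # placeholder implementation
    return ",".join(str(v) for v in values)

def decode(text):
    return [int(v) for v in text.split(",")] if text else []

@given(st.lists(st.integers()))
def test_roundtrip(values):
    assert decode(encode(values)) == values

if __name__ == "__main__":
    test_roundtrip()  # hypothesis runs the property over many generated lists
    print("round-trip property held")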

MononcQc
May 29, 2007

While I've had to maintain my share of old rear end untested code (and retro-fitted tests into it), I've also seen a lot of well-tested code be replaced by things using different frameworks, languages, or libraries, regardless of tests or teams. One of the biggest vectors of change I've experienced was in fact people inspired by what other companies were doing, hailing it as the future, and not wanting to be left behind. This usually led to devs wedging a tech migration into the roadmap to make sure you could still hire and be hired.

In fact, I've worked on projects using tech that was less mainstream quite a few times (and not just Erlang) and the biggest question of people who hesitated to move on the project was whether this would help them or prevent them from getting different roles later in their careers. Whether it was well tested or not was not even a question most of the time, it's something people find out after having moved to a project and going "oh poo poo, you have nothing?" and someone going "we test by rolling it in prod."

Share Bear
Apr 27, 2004

I am probably going to purchase the book that paper is in because, again, I love this.

I generally see code as a means to an end, and the quality of the code is irrelevant to that end. People managing to deal with awful legacy systems is an example of this.

now my real opinions

the awfulness is not based around development methodologies but rather having a culture that cares about retelling histories that develop context. some methodologies are easier to reteach or assume but at the end of the day you need context. i search for reinforcement of this as even the richest most well funded organizations in the world experience this, so its not down to code methodology but having history. raymond chen and people like him are necessary.

Carthag Tuek
Oct 15, 2005

Times shall come,
times shall pass away,
generation shall follow generations' course



yeah i started out loving to program for its own sake, but its definitely become a means to an end for me. i want to make cool things available/possible for others, which coincidentally requires programming.

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

Carthag Tuek posted:

yeah i started out loving to program for its own sake

i started programming because I honestly thought it could make people's lives better. Now I have to do a lot of searching to find a job where I am at best not actively destroying anyone's life.

echinopsis
Apr 13, 2004

by Fluffdaddy
you know all the boomers everyone loves to hate?

most of my job is just supplying them with drugs so that they dont die just yet :cry:

Cybernetic Vermin
Apr 18, 2005

echinopsis posted:

you know all the boomers everyone loves to hate?

most of my job is just supplying them with drugs so that they dont die just yet :cry:

jokes on them though, with you putting tons of 5g in everything you give them, right?

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff
https://www.youtube.com/watch?v=8Ab3ArE8W3s

Genuinely incredible talk on the limits of the way we code both in terms of how code behaves and how code is represented to humans and human brains.

I was literally screaming "this guy gets it!" at my partner pointing like a monkey at a zoo to the screen like an idiot when I first saw it

This is human factors and machine factors, I love this and you all NEED to see this.



Bonus:

https://www.youtube.com/watch?v=HB5TrK7A4pI

Expo70 fucked around with this message at 11:57 on Oct 22, 2022

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
title's homage to bret's stuff, isn't it?

https://www.programmingtalks.org/talk/stop-drawing-dead-fish

bret's undergoing a pile of failure rn but harc / cdg / dynamicland was pretty great when it was a thing

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

Expo70 posted:

https://www.youtube.com/watch?v=8Ab3ArE8W3s

Genuinely incredible talk on the limits of the way we code both in terms of how code behaves and how code is represented to humans and human brains.

I was literally screaming "this guy gets it!" at my partner pointing like a monkey at a zoo to the screen like an idiot when I first saw it

This is human factors and machine factors, I love this and you all NEED to see this.

eh

I think he spends too much time talking about how bad punch cards were, as though there's die-hard punch card stans in the audience.

The live stuff he showed is nice, and no one is gonna say that ironpython notebook style things aren't good, but a lot of this has been tried and discarded for reasons. It's not an accident that the world looked at lisp and smalltalk and collectively said "meh." And the visual environments? They will always be a layer or two on top of what we already have now. Programs written a few layers lower will always be more efficient and, despite all the progress, efficiency still matters. I think they're mostly good for education.

His bit about text glosses over how incredibly difficult it is to craft those diagrams he presents as alternatives.

Having a live preview is great. Modern node is pretty good at that. Hotswapping in new classes in Java is kinda an 80% solution.

Shame Boy
Mar 2, 2010

rotor posted:

I think he spends too much time talking about how bad punch cards were, as though theres die-hard punch card stans in the audience.

fine i'll take my IBM 29 and go home rear end in a top hat :(

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

Shame Boy posted:

fine i'll take my IBM 29 and go home rear end in a top hat :(

ok he's finally gone, now we can all get a plan together to steal that sweet IBM 29 from him
