Improbable Lobster
Jan 6, 2012

What is the Matrix 🌐? We just don't know 😎.


Buglord
my hand hurtie

Presto
Nov 22, 2002

Keep calm and Harry on.

Sagebrush posted:

how is a mechanical dial slower than a keypad?
I am a huge nerd who enters weird times. Like when I'm heating water for a cup of tea I set it for exactly 1 minute 53 seconds (because 2 minutes is too hot). A dial would not be precise enough. :colbert:

Shame Boy
Mar 2, 2010

my new microwave lets you program in specific programs (which can be weirdly complicated, like run at one power level for a certain time, then another power level for another time etc) and save them to hot keys so you could save your weird dumb water time to one of those i guess

MononcQc
May 29, 2007

I heat my water in a kettle

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
i got a timed electric kettle

hbag
Feb 13, 2021

i heat my water in a big bed with my wife

Silver Alicorn
Mar 30, 2008

𝓪 𝓻𝓮𝓭 𝓹𝓪𝓷𝓭𝓪 𝓲𝓼 𝓪 𝓬𝓾𝓻𝓲𝓸𝓾𝓼 𝓼𝓸𝓻𝓽 𝓸𝓯 𝓬𝓻𝓮𝓪𝓽𝓾𝓻𝓮
how does this rank on the microwave ergonomics scale

https://www.youtube.com/watch?v=UiS27feX8o0

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff
neat trick I just discovered: If you want to do an Ace Combat style turn-from-origin absolute camera on the right stick, you can double interpolate the delta-rate of each axis you're rotating on in your matrix.

what that means is you grab the delta, map that as a clamped range, and the further the delta is, the faster the interpolation rate is so it becomes gummy and sticky when the delta of camera to input range is low so holding a position and smoothly moving the camera has a very low precision demand and doesn't result in camera-shake from a player's thumbstick but moving it quickly feels very slippery and fast.
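roughly, the double-interpolation trick could be sketched like this for one axis (all the names and constants here are mine, made up for illustration, not anything from a real engine):

```python
import math

def smooth_axis(current, target, dt, min_rate=2.0, max_rate=20.0, max_delta=1.0):
    """Move a camera axis toward its target; the interpolation rate is itself
    interpolated from the size of the delta, so small errors feel 'gummy'
    (thumb-shake resistant) and large errors catch up fast and slippery."""
    delta = target - current
    t = min(abs(delta) / max_delta, 1.0)          # clamp |delta| into a 0..1 range
    rate = min_rate + (max_rate - min_rate) * t   # bigger delta -> faster rate
    alpha = 1.0 - math.exp(-rate * dt)            # frame-rate independent step
    return current + delta * alpha
```

calling it each frame with the stick-derived target gives the low-precision-demand hold described above: a tiny delta moves only a small fraction of the way per frame, a big one closes most of the gap.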

i was annoyed that project wingman didn't have thumb-shake-compensation like this so its nice to understand the trick. its very simple, but i'm annoyed i don't see more of it.

makes me wonder if some sort of biasing can also be applied with an argmax of middlemost player target and then to have some way of inferring deliberate inputs vs hard releases of the stick so they're treated as two separate inputs by inference:

instantly releasing results in the stick flicking back to zero with no negative over-hang past the middle so that seems like something that would be easy to detect and fire off an event for.

timed micro-input events are fun to explore.
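the hard-release inference above (stick flicks back to zero with no overshoot past the middle) could be detected over a short window of samples, something like this sketch (thresholds and window size are arbitrary assumptions):

```python
def is_hard_release(samples, deflected=0.5, centered=0.05):
    """samples: recent deflections of one stick axis, oldest first.
    A 'hard release' = the stick was deflected, snapped back to near
    center, and never overshot past the middle (no sign flip), which
    distinguishes a release from a deliberate move through center."""
    if len(samples) < 2:
        return False
    sign = 1.0 if samples[0] >= 0 else -1.0
    was_deflected = abs(samples[0]) >= deflected
    now_centered = abs(samples[-1]) <= centered
    no_overshoot = all(s * sign >= -centered for s in samples)
    return was_deflected and now_centered and no_overshoot
```

when it returns True you'd fire the release event instead of feeding the samples to the camera as ordinary input.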

Expo70 fucked around with this message at 07:01 on Jan 24, 2022

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff
man this hit hard...

https://www.youtube.com/watch?v=IeRXhhXvDj0

https://www.youtube.com/watch?v=HYB--QB4YKg

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff
https://www.youtube.com/watch?v=H_Ym9528awM

I just had a really weird thought...

using an electromagnet in place of a spring, and a hall sensor, couldn't you potentially make a magnet which repels a button like a spring does, but vary the current to change the resistance of the button mechanically to the user? like an analogue keyboard with variable mechanical resistance?

you could basically make a force feedback input system, kinda like a linear version of this:

https://www.youtube.com/watch?v=X1BKkZs3DvA

if its possible, i wonder if it means the "feel" of a switch could be software defined instead of hardware defined...
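the software-defined "feel" half of this could be just a force curve mapping plunger position to coil current, something like this sketch (every constant and name here is invented; a real build would also need the closed loop of hall sensor reads driving a PWM H-bridge, which isn't shown):

```python
import math

def coil_current(x, k=1.2, detent_pos=0.5, detent_depth=0.4, detent_width=0.08):
    """Map normalized plunger travel x (0..1, e.g. from a hall sensor) to a
    coil current: a base spring term (k * x) plus a 'tactile bump', i.e. a
    dip in resistance around detent_pos like a mechanical switch's snap.
    Swap the curve to redefine the switch's feel entirely in software."""
    spring = k * x
    dip = -detent_depth * math.exp(-((x - detent_pos) ** 2) / (2 * detent_width ** 2))
    return max(spring + dip, 0.0)
```

press resistance rises linearly, drops at the detent (the "click"), then rises again, all tunable per keypress by changing parameters.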

Expo70 fucked around with this message at 11:28 on Jan 25, 2022

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
thats how some haptic controllers work

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
the novint peeps dont use it for their flagship dealio, the falcon, cuz magnets and heat and banging around dont mix well for long durability what peeps expect out of a 500 usd device (what it is is servos)

(peep a video of peeps using the falcon. in actual person its intuitive as gently caress but peep a video to see how it gets used)

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff

bob dobbs is dead posted:

the novint peeps dont use it for their flagship dealio, the falcon, cuz magnets and heat and banging around dont mix well for long durability what peeps expect out of a 500 usd device (what it is is servos)

(peep a video of peeps using the falcon. in actual person its intuitive as gently caress but peep a video to see how it gets used)

oh that's goddamn interesting. i imagine the math used to calibrate this stuff is a huge pain in the rear end to implement, or is it prohibitively expensive?

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
neither. itll just wear out after too much banging over a period of months, not years

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff

bob dobbs is dead posted:

neither. itll just wear out after too much banging over a period of months, not years

wait, wear out? like it just mechanically fails? i just wanna make sure i'm understanding.

is it a calibration problem or something?

Expo70 fucked around with this message at 11:37 on Jan 25, 2022

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
fundamentally, an electromagnet is a wire around a core. the way they had all the moving surfaces in prototype, they would just keep on whacking the wire or even a plastic cover they made over the wire, fuckin up all that modulation and therefore the device

theres a machine to figure this out in prototype by whackin the poo poo out of things, its pretty funny to look at

bob dobbs is dead fucked around with this message at 11:42 on Jan 25, 2022

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff

bob dobbs is dead posted:

fundamentally, an electromagnet is a wire around a core. the way they had all the moving surfaces in prototype, they would just keep on whacking the wire or even a plastic cover they made over the wire, fuckin up all that modulation and therefore the device

so then the solution logically is to design it in such a way where that kind of collision is impossible i would assume?

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
or you could just do servos lol

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff

bob dobbs is dead posted:

or you could just do servos lol

fair i guess. i'm debating stripping down my old ms sidewinder ii force feedback to take a shot at building a custom input device some time in 2022 and i'm honestly not used to the whole building-electronics thing yet (my comfort zone is hfe) so messing with stuff with voltage is probably absurdly dangerous and way above what's reasonable for me yet.

starting to overcome depression and i'm just dealing with this new totally unreasonable urge of I REALLY WANT TO BUILD THINGS and its just the most amazing thing ever

i'll pick shameboys brain and see where it goes, given he's big into electronics and electrical gear and he speaks fluent microcontroller

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff
ignoring the fact it's "from anime", ever notice the UX work in some stuff is always weirdly high effort? bear with me - i don't think this post is totally useless and i am going somewhere with this beyond pure aestheticism.

the dumb animu poo poo only spurred the thought, the thought behind it is what actually caught my attention and i wanted to share

that said it'll probably sound dumb as hell when i've had a sleep and i'm not in this weird fever dream of "how did i never notice this before?" strangeness.

it always makes me think of the real vr stuff we actually got and how disappointing it is.

like did our aspirations go from this:

[image missing]

to this:

[image missing]
one looks like a super hard-baked UI that's specifically designed to meet an extremely specific circumstance, and the other one almost looks like a collage of different... idk, i'd say "appendages" in 3D space? like chunks of living document, talking to each other.

it makes me think of palettes in photoshop on steroids, as if not only were it denoting some assembled relationship but like as if ... how to put it...

you could have a small library of primitive representations and filter operation libraries and plug them all into data that's coming out of different systems to visualize stuff in VR and build simple "information machines" out of them without conventional programming knowledge where eyeballing certain relationships and intuitively "feeling it out" would solve the problem.

it would be expensive as gently caress cpu-side but you could very rapidly cobble together a model solution with very little training or understanding of certain things by representing systems using visual-mechanical association i guess?

it fits much closer to how i think Ivan Sutherland imagined his "sketchpad" graphical communications system (which was really vr, but for learning to intuit math by translating it into anthropocentric concepts of how our minds intuit reality https://journals.sagepub.com/doi/10.1177/003754976400200514) in the 1960s.

i couldn't tell you how a linear interpolation works on paper but i can just feel it out in my gut and know "what that machine does" and make use of it every day. like a lot of math, i learned to intuit it messing around in unreal engine: i learned 'that machine' in the software first, then learned the equation afterwards, and suddenly math education got "really easy" and i was able to teach myself any concept like vectors or acceleration or really weird physics phenomena because i had a software "playground" to construct those systems in using nodes.

i learned asm as a kid the same way, dumping opcodes in and messing with memory registers and seeing what happened. having "the stuff" to fiddle with seems to be how math learning 'works'.

i learned to gut-feel geometry by building stuff with my hands. i imagine its the same with all of you - you learn by tinkering and discovering and doing poo poo and then learning there's a rule which describes the phenomenon you're dealing with and that encapsulates and envelops that gut feeling and lets your mind explore further out into stranger places. deeper levels of abstraction and all that.

it feels very very much like Sutherland's mathematical wonderland -- where the intuition component of math arises from how math you don't know talks to math you do know in functional representations.

it feels like it's in the same 'very strange place of interface' as Sutherland's wonderland.

i wonder what an information-machine like that would be like, plugging different things into each other? equations are such a piss poor interface for mathematics and they are not a living representation of a process. you have to "run it" in your mind and that's still just so drat slow.

i dunno it probably sounds like marketing speak of "when i sober up this will sound stupid" energy but i felt i wanted to put the thought i had *somewhere* while i still had it.

idk man

Expo70 fucked around with this message at 13:39 on Jan 25, 2022

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

have you tried LCARS op

MononcQc
May 29, 2007

MononcQc posted:

The other paper I'll want to introduce at a later point is titled "Can We Trust Best Practices? Six Cognitive Challenges of Evidence-Based Approaches" and discusses whether the idea of best practices makes sense, and the situations in which medical workers are expected not to respect them.

Alright, it's time! Can We Trust Best Practices? Six Cognitive Challenges of Evidence-Based Approaches by David D. Woods (again! he developed the idea of Resilience Engineering after investigating NASA disasters so he'll show up a lot) and Gary Klein (the man behind Naturalistic Decision Making, which is a study of how experts make decisions under pressure).

The paper argues that "best practices"—relying on data for treatment recommendations—are not necessarily a productive approach in healthcare due to specific cognitive challenges, and aims for ways to improve their impact.

The paper identifies 6 cognitive challenges around best practices:

  1. Characterizing problems
    most of the challenge is actually in figuring out what the problem is in the first place, not which solution to apply, which is what best practices tend to focus on when applying data to predict outcomes
  2. Gauging confidence in the evidence
    the quality and relevance of data behind best practice is often misleading, hard to replicate, and specific variables may make them irrelevant or inappropriate for the current context
  3. Deciding what to do when the generally accepted best practices conflict with professional expertise
    if the expertise of the clinician is credible (and the assumption is that it generally is), there may be a situation where they strongly believe that what the best practice recommends may be inappropriate at this point in time. This still represents a decision to be made.
  4. Applying simple rules to complex situations
    Rules are often built from population data, not specific cases; complex situations may contain many variables that are not visible or accounted for in the data, and therefore no set of rules can completely handle all situations.
  5. Revising treatment plans that do not seem to be working
    A statement made here is that evidence-based medicine is not well suited for plan adaptation. Practitioners have to start from early, subtle, and preliminary data, and apply the best treatment available. But as things change and evolve, they have to change and adjust the treatment as well, and gauge that against waiting for the current treatment to work. In short, the idea is that data-driven best practices tend to assume that the suggested approach works, and offer little in terms of support when it does not.
  6. Considering remedies that are not best practices
    Rarer situations are not necessarily well-documented; time pressure and constraints do not necessarily allow in-depth analysis; new ideas can prove useful for rare situations but would not be covered.

In short, the main gist is that best practices tend to oversimplify the world. They are good guidelines and models, but they can't, on their own, account for all the work, and shouldn't be used as such. The paper concludes:

quote:

Best practices are an important opportunity for any community to shed outmoded traditions and unreliable anecdotal procedures. They provide an opportunity for scrutiny and debate and progress. They enable organizations to act in a consistent way. However, as we have argued, best practices come with their own challenges.

Cognitive engineering and NDM [Naturalistic Decision-Making] studies have shown some of the difficulties of using evidence in situations that have a great deal of variability, uncertainty, and risk. In effect, decision makers in domains such as health care need plans like best practices but also need to be effective at revising plans to fit the dynamics and variability of specific situations (e.g., patients and diseases) and to handle the changing knowledge about what is effective.

[…]

We should regard best practices as provisional, not optimal, as a floor rather than a ceiling. When we label an approach a best practice, it tends to become a ceiling that is hard to change even as more knowledge is gained. Instead, we can identify provisional best practices that serve as a floor while learning goes forward. It is a move from “best practices” to “better practices” that frees us from undocumented anecdotal approaches and forces a commitment to continual improvement

Which I like very much, and have emphasized.

-----

As a bonus, one of my favorite bits of the paper is this small dig:

quote:

Kahneman and Klein (2009) assert that intuitions are useful under two conditions: a reasonably stable environment and an opportunity for people to learn from feedback. For example, the stock market does not constitute a reasonably stable environment, and we are highly skeptical about claims of expertise or intuition in selecting stocks for investment. Another example is organizational decision making. Most people who work on the administrative side of organizations fail to get frequent, consistent, or accurate feedback and thus fail to develop expertise and credible intuition.

In contrast, medicine satisfies both of the conditions for credible intuitions. It is a reasonably stable environment, and physicians do get some feedback. Intuition here is not a random or mystical process but simply the use of pattern matching that is based on experience in a reasonably stable environment.

Admittedly, physicians do not achieve the levels of expertise found in chess grandmasters. Chase and Simon (1973) described how chess grandmasters accumulate tens of thousands of patterns that enabled them to rapidly size up situations. Chess is a highly stable environment—the positions of the pieces are unambiguous. And chess players receive clear feedback about the quality of their decisions. They know that they have won or lost a game and can go over the moves to determine where they made mistakes. Physicians do not receive the same level of feedback on their decisions. When they refer patients to specialists, they may not be informed about the results. Worse, feedback is abundant on common conditions but limited on rare conditions, such as early rabies. And worse yet, not all feedback is equal.

I just love this whole thing on the side where these researchers, who see expertise in pretty much all sorts of skilled work, just go "you know what? Market analysts telling you what to invest in and lots of upper management trying to move orgs around are probably unskilled dogshit garbage at intuiting it, and here's why."

MononcQc fucked around with this message at 03:45 on Jan 27, 2022

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff

MononcQc posted:

Alright, it's time! Can We Trust Best Practices? Six Cognitive Challenges of Evidence-Based Approaches by David D. Woods (again! he developed the idea of Resilience Engineering after investigating NASA disasters so he'll show up a lot) and Gary Klein (the man behind Naturalistic Decision Making, which is a study of how experts make decisions under pressure).


A lot of this is stuff I'm hearing for the first time and this is pretty rad. I'll do some reading, but can you recommend some other reading too?

MononcQc
May 29, 2007

Yeah, there's a lot of this stuff coming from cognitive science and resilience engineering as disciplines. Papers from Richard Cook, David Woods, Gary Klein, Sidney Dekker. Erik Hollnagel, Steven Shorrock, and a few others come from something closer to traditional safety and also have lots of interesting stuff. An interesting list is maintained by Thai Woods at https://resilienceroundup.com/ -- but he's lately turned it into a paid model. You can still get a fun list of older free reviews he's run of many papers there.

Anyway, I have a bunch of papers I've read before that I intend to revisit and post here over multiple weeks when the thread may slow down, so I may pick and choose from whatever is convenient.

MononcQc
May 29, 2007

Here's another cool David D. Woods paper: Can We Ever Escape from Data Overload? A Cognitive Systems Diagnosis

quote:

Data overload is a generic and tremendously difficult problem that has only grown with each new wave of technological capabilities. As a generic and persistent problem, three observations are in need of explanation: Why is data overload so difficult to address? Why has each wave of technology exacerbated, rather than resolved, data overload? How are people, as adaptive responsible agents in context, able to cope with the challenge of data overload? In this paper, first we examine three different characterisations that have been offered to capture the nature of the data overload problem and how they lead to different proposed solutions. As a result, we propose that (a) data overload is difficult because of the context sensitivity problem – meaning lies, not in data, but in relationships of data to interests and expectations and (b) new waves of technology exacerbate data overload when they ignore or try to finesse context sensitivity. The paper then summarises the mechanisms of human perception and cognition that enable people to focus on the relevant subset of the available data despite the fact that what is interesting depends on context. By focusing attention on the root issues that make data overload a difficult problem and on people’s fundamental competence, we have identified a set of constraints that all potential solutions must meet. Notable among these constraints is the idea that organisation precedes selectivity. These constraints point toward regions of the solution space that have been little explored. In order to place data in context, designers need to display data in a conceptual space that depicts the relationships, events and contrasts that are informative in a field of practice.

The paper focuses on the idea that a lot of incident reports and investigations contain something like "although all of the necessary data was physically available, it was not operationally effective. No one could assemble the separate bits of data to see what was going on." It starts with the idea of a data availability paradox:

"On one hand, all participants in a field of activity recognise that having greater access to data is a benefit in principle. On the other hand, these same participants recognise how the flood of available data challenges their ability to find what is informative or meaningful for their goals and tasks."

A thing Woods mentions is that a lot of systems were intended to help users, but in fact end up requiring even more capacity during times where users are the busiest -- I need data when alarms are ringing, but when alarms are ringing, I'm also the least inclined to slowly think, focus, and analyze.

Data overload is classified into 3 categories:
  1. clutter / too much data
    In the 80s, people tried measuring the bandwidth of what we could process, and reducing the amount of data seen and the number of pixels shown. This wasn't successful because often designers tried reducing the information on one display by making people navigate across many. Relevance is often context-sensitive, and what you remove may be relevant. In the end the approach was also judged meaningless because "people re-represent problems, redistribute cognitive work, and develop new strategies and expertise as they confront clutter and complexity." Dynamic mechanisms requiring user input don't necessarily help because you only know what to filter once you know what to look for.
  2. workload bottleneck
    There are too many sources of data to look at. A lot of work has been done to have automation assist in analysis. Two categories are given: a) those that strongly rely on the analysis being correct (filters, summarisers, automated search term selectors), and b) those that weakly rely on it (indexing, clustering, highlighting, organizing). This type of solution is considered "necessary but not sufficient" to help, and is at risk of breakdown in collaboration structures between humans and machines (remind me to cover papers on Joint Cognitive Systems at some point)
  3. finding significance in data
    Significance is inherently contextual. People have implicit expectations of where useful data is likely to be located and to know what it should look like. There's a relation between the viewer and the scene that must be taken into account, and it's somewhat of an open problem to cater to this need.

This last point, the focus on context sensitivity, is what is further explored. An example is one where error codes for an alarm have a corresponding description, but the specific meaning depends on what else is going on, what else could be going on, what has gone on, and what the observer expects or intends to happen. The significance of a piece of data depends on:
  • other related data;
  • how the set of related data can vary with larger context;
  • the goals and expectations of the observer;
  • the state of the problem-solving process and stance of others.
They call it a myth that information is something in the world that does not depend on the point of view of the observers and that it is (or is often) independent of the context in which it occurs. Data has no significance and is a raw material; informativeness is a property of the relationship between the data and the observer.

So there's another extra set of subcategories of how people focus on what's interesting:
  1. perceptual organization
    Rather than everything being flat in a perception field, things are hierarchical and grouped. You don't count 300 hues of blue, you see the sky. You don't need to show less data, you need to show it with better organization (something Tufte spends a lot of time on in his own texts)
  2. control of attention
    Attention is not permanently fixed on one thing; we have to be able to focus on and process new information. Sometimes it's distracting, sometimes it's relevant. Reorienting to new elements implicitly means you lose focus on other things you were previously focusing on. In the real world this is often dealt with implicitly by having a focal point while maintaining awareness in peripheral vision and auditory fields. A suggested experiment to see the extent of this: try doing a task with a limited field of vision, wearing goggles that block peripheral vision. So automation or information that wishes to better control and direct attention should ideally have some understanding of what it is the human is trying to do, to know how to mediate the stimuli, place it where it makes sense, and to know how to redirect attention on what is worth it.
  3. Anomaly-based processing
    We do not respond to absolute levels but rather to contrasts and change. Meaning lies in contrasts. An event may be an expected part of an abnormal situation, and therefore draw little attention. But in another context, the absence of change may be unexpected and grab attention because reference conditions are changing.

There's a large section on the paper on tricks technical solutions have to work around context sensitivity, which is limited and brittle, but worth taking a look at:
  • reduce available data: usually done by hiding all the data behind displays or menus. Breaks down because some of the relevant stuff may get hidden and now the tool is at cross-purposes with what the operator intends and increases cognitive costs rather than saving them.
  • only show what's "important": think of log messages with INFO, WARNING, and ERROR, and only showing the most critical data. This, once again, can omit important data (even if people can call up the relevant lower-level information). The problem is to help people recognise or explore what might be relevant to examine without already knowing that it is relevant, which is better done through organization.
  • the machine will compute what is important for you: automation that tries to act intelligently has to be conceived as a teammate to be truly useful. The joint cognitive system paper I referred to earlier considers automation as generally implemented to be a lovely, dipshit teammate, and generally it's like asking someone who can't read the room to tell you when people start feeling uncomfortable while you're in the kitchen. So this, too, easily breaks down when stressed.
  • use syntactic or statistical properties as cues to semantic content: The correlation is weak. "Sort by relevance" is bad. Making this work reliably is non-trivial and often fails. Also, the way the correlation works is often opaque and becomes hard to trust by operators. I'm looking at you, AIOps; you suck.

And finally, solutions! Those are, unfortunately, non-trivial otherwise everyone would know about this sort of poo poo. Anyway, they're cool guidelines to keep in mind:
  1. Organisation precedes selectivity: effective systems will have elaborate indexing schemes that map onto models of the structure of the content being explored; will need to provide multiple perspectives to users and allow them to shift perspectives fluently. I like to think of this like "show me the system the way a support engineer cares about it", or "show me what it looks like from the infra point of view", rather than having to dig into selecting elements to yourself build this vision on-the-spot.
  2. Positive selectivity enhances a portion of the structured field: positive metaphors ("spotlight", "peaked distribution across a field") help focus on a part of the data, whereas negative ones ("filters", "gatekeepers") tend to be weaker. We tend to default to negative ones (they're more computationally effective) but they hinder the ability to switch focus to otherwise non-selected elements. Better cognitive results are expected from positive metaphors than negative ones.
  3. You must deal with context sensitivity: solutions to data overload will help practitioners put data into context. Basically it helps to put the context "in the world" rather than having to carry it all in your head. Examples are showing related data, using model-based displays, automatically extracting higher-level events (eg. a device's states) from data, and comparing current anomalies to regular trends.
  4. Observability is more than mere data availability: "Observability refers to processes involved in extracting useful information. [...] The critical test of observability is when the display suite helps practitioners notice more than what they were specifically looking for or expecting. If a display only shows us what we expect to see or ask for, then it is merely making data available."
  5. design of conceptual spaces: You must depict relationships in a field of reference. "With a frame of reference comes the potential for concepts of neighbourhood, near/far, sense of place and a frame for structuring relations between entities." It's a prerequisite to having more than data availability.

I personally enjoy mixing this idea that we could fix data overload with the law of stretched systems -- what if we just continuously build to the limit of what we can understand? Then, raising the ceiling on the ability to fight data overload will just mean that we'll find new terrible ways of making systems more complex to the point of saturation until overload is there again.
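Guideline 2 up there (positive vs. negative selectivity) is easy to sketch in code. This is a toy illustration of the idea, not anything from the papers, and the record shapes are invented:

```python
# Toy log records, made up for illustration.
records = [
    {"level": "INFO",  "msg": "cache warmed"},
    {"level": "ERROR", "msg": "db timeout"},
    {"level": "INFO",  "msg": "retrying db call"},
]

def filter_view(records, level):
    """Negative metaphor ("filter"): non-matching records are gone,
    so shifting focus to anything else means re-querying."""
    return [r for r in records if r["level"] == level]

def spotlight_view(records, level):
    """Positive metaphor ("spotlight"): matches are highlighted but the
    rest of the structured field stays visible for context."""
    return [{"highlight": r["level"] == level, **r} for r in records]
```

the filter view throws away the two INFO lines that explain what led up to the error; the spotlight view keeps them in sight, just de-emphasized, so switching focus later costs nothing.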

MononcQc fucked around with this message at 04:58 on Jan 30, 2022

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff

Captain Foo posted:

I feel like i'm missing something here...both the stick and the lower pad seem to be right-handed, but you could never use both at once with the same hand

That's the entire point. They are exclusive context inputs, where the system would note the hand position of the operator and change the system context automatically.

I had wondered if it would be possible for some sort of painted capacitive coating "guess" at the approximate hand position and provide automatic inference, but that's beyond the sorts of things I know how to implement.

They're for different tasks which operate independently.

I eventually simplified it down to a context switch; when I get around to building it, I'd like to add a solenoid or something so the switch can de-latch itself automatically.

The simplified version:

Shame Boy
Mar 2, 2010

Expo70 posted:

I had wondered if it would be possible for some sort of painted capacitive coating "guess" at the approximate hand position and provide automatic inference, but that's beyond the sorts of things I know how to implement.

oh that's why you were asking me about how hard it is to implement capacitive sensing the other day

the answer by the way is anywhere between "simple enough that the first patents for it used vacuum tubes" and "hard enough that it only really became technologically possible in the last decade or two" depending on how accurate you want the positioning detected and if you want it to detect individual touch points and not just "thing is near / touching me"

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff

Shame Boy posted:

oh that's why you were asking me about how hard it is to implement capacitive sensing the other day

the answer by the way is anywhere between "simple enough that the first patents for it used vacuum tubes" and "hard enough that it only really became technologically possible in the last decade or two" depending on how accurate you want the positioning detected and if you want it to detect individual touch points and not just "thing is near / touching me"

Yeah it's literally just "which of these five positions is most likely" (throttle/stick/paddle 1 thumb/paddle 2 thumb/no hand present)

I figure it could also just be done by checking a beam being broken along a path -- one for the stick palm, one for the throttle palm, and the rest just inferred from the last press on the HOSAS based on some exclusionary rules.
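a rough sketch of those exclusionary rules (sensor and position names are all made up, and this is Python rather than anything firmware-ready):

```python
def infer_hand_position(stick_beam_broken, throttle_beam_broken, last_press):
    """Guess which of the five positions the hand is in.

    The beam sensors are authoritative for the two palm rests; the two
    thumb paddles are disambiguated by the last registered press, and
    no signal at all means no hand is present.
    """
    if stick_beam_broken:
        return "stick"
    if throttle_beam_broken:
        return "throttle"
    if last_press == "paddle1":
        return "paddle1_thumb"
    if last_press == "paddle2":
        return "paddle2_thumb"
    return "no_hand"
```

the palm beams win over the last-press rule, since a broken beam is direct evidence of where the hand is right now while a press is only evidence of where it was.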

Sagebrush
Feb 26, 2012

when i was a kid we had a lamp that turned on when you tapped the metal base. the circuitry for it is literally just a capacitive oscillator iirc

Cybernetic Vermin
Apr 18, 2005

Sagebrush posted:

when i was a kid we had a lamp that turned on when you tapped the metal base. the circuitry for it is literally just a capacitive oscillator iirc

as is often the case with neat consumer stuff technology connections has a good video on them: https://www.youtube.com/watch?v=TbHBHhZOglw

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

Sagebrush posted:

the circuitry for it is literally just a capacitive oscillator iirc


:eng101: catpacitive

https://www.youtube.com/watch?v=4dnUZE58UBg

Sagebrush
Feb 26, 2012


cat patcitive :eng102:

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

Expo70 posted:

That's the entire point. They are exclusive context inputs, where the system would note the hand position of the operator and change the system context automatically.

oh gotcha thanks

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff
https://twitter.com/fomimoimoin0120/status/1233760131780734978?s=20

I have so many questions.

Captain Foo posted:

oh gotcha thanks

System context is something like "OK which systems am I automating more, and which am I performing more manually?" if that makes sense.

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff

wow, I hate this so much. Like I get why, but you don't get rid of the tach altogether, you turn it into a goddamn ribbon-set around the outside of the PIP so you deprioritize the data without getting rid of it, because you're still driving. It's so annoying to see.

my favourite tach has to be the LFA. this was 2011, and lexus had a design exercise they wanted to use to renovate the methodology of how their departments communicated internally and so they decided for shits and giggles to flex and build a supercar.

they actually sold it at a loss, despite the fact it was hideously overpriced ($470,000USD) because the net gains were the changes they made to their methods on other vehicles moving forward. the attention to detail in this thing is immense, taking nine YEARS to build: at first the body was made of aluminium, but they decided the car was too heavy, scrapped it and started again with carbon fibre.


https://www.youtube.com/watch?v=tGGG4EvVU9A


the problem they ran into is the engine revved so quickly that analogue indicators wore out too quickly or had to be dampened to protect the parts -- and the engineers were so against doing this (they wanted everybody to see how hard they'd worked on the drat thing to let it do these absurd things) they went with an all-digital virtual revcounter. the TFTs of the time weren't fast enough to keep up, so they designed their own which ran at 160hz "because they could".

they did eventually switch to LCDs and then OLED displays.

https://www.youtube.com/watch?v=gU11UR5dZr8

this is just fetishistic design obsession and there's just something that makes the teenager somewhere in this dull old bitch called me smile like an idiot at just how absurd everything is about this goddamn *LEXUS* and that it translates all the way from the engine to how you interact with it.

even little touches, like the gearbox paddles having different textures and pressure levels because they want you to learn automaticity in a supercar is just bonkers. they didn't have to do these things, but they did anyway "because they could".

in terms of instrument clusters, I'm a huge fan of radial systems and i can gush about the ones i like all day to the point where i bothered learning some HLSL just so i could write a radial meter into my own game projects. that said, i totally recognize the problems and flaws with this approach which i'll go into based on the things i learned *after* this video was recorded:

there's a problem you often run into with ribbon meters -- directionality, where if a meter has say four exclusive quadrants representing four values, you need to know if they move clockwise, horizontally out, or vertically out with symmetries

see, a common task may be comparing two quantities which are represented with spatial parity (left side engine vs right side engine on left/right sides of a radial) rather than linked parity (both being in the same quadrant, one inner, one outer with direct referencability or comparability).

https://www.youtube.com/watch?v=Fz_SHswtOns&t=51s

AC5/VD run into this problem pretty abundantly and their fix was to realize "OK, our meter goes from full to empty over time, so we need emptiness to be more readable than fullness for evaluating technique"

result? they went with vertical filling rather than horizontal radial filling.

the net result was the cluttered messy UI of the previous games which looked like this was tidied up and the game became fundamentally more playable by integrating the system-state information into the lockbox.

this also meant eyes didn't have as far to travel when checking for results from the likely source of player attention (which the lockbox had a high probability of overlaying) but it resulted in problems with onboarding because a lot of muscle-memory with gaze had to be unlearned.

in turn, the most vital stat of all, HP, was kinda harder to see, so they had to implement voice-barks (spoken stuff which says to a player "hey, this is a status indicator, it's really importanto!") and they use these to indicate major changes in VD, plus indicators to show DPS or damage type patterns

why? because they figured out in V that players wouldn't intuitively know that with the lock-ring. the number was just too tiny.

https://www.youtube.com/watch?v=kiZhIMRx5KU&t=31s

what i take from this is if you solve one problem, you risk creating another and really what you want is the minimum number of "solutions" required to solve other problems. often the answer isn't deciding to be clever, but deciding to be simple and clear so you're not layering on affordances.

this being said, the vocal barks were very welcome even from veterans and most of the AC fangame projects ongoing now implement them regardless of whether or not they have a lockring, a lockbox, dumb-aim, magnetism or whatever else for their fire solution management.

the other lesson i take from this beyond barks is that the meters need much cleaner quadrant separation, and they need several methods to mark status change: a marching ribbon to indicate if the value is increasing or decreasing might be worth looking into, as well as notches to indicate equidistant points to judge the approximate effectiveness. the concern there might be noise, but i think if things are handled appropriately, it might be ok.
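for what it's worth, the core of a radial meter shader really is just an angle test per fragment. here's the same logic as a Python sketch (not my actual HLSL, and the parameter names are mine):

```python
import math

def radial_meter_lit(px, py, fill, r_inner=0.35, r_outer=0.5,
                     start_deg=0.0, sweep_deg=360.0):
    """True if pixel (px, py), relative to the meter centre, falls in
    the lit part of a radial meter filled to `fill` (0..1).

    Inverting the final test (rel > fill * sweep_deg) lights the empty
    arc instead -- the "emptiness should be more readable than
    fullness" trick described above.
    """
    r = math.hypot(px, py)
    if not (r_inner <= r <= r_outer):
        return False  # outside the ring entirely
    # angle of this pixel, measured from start_deg, wrapped to [0, 360)
    rel = (math.degrees(math.atan2(py, px)) - start_deg) % 360.0
    return rel <= fill * sweep_deg
```

a vertical filling bar, by contrast, is just `py <= fill * height` -- no wrapping, no quadrant ambiguity, which is part of why it reads faster.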

i think coming back to general use, you can also see a lot of people see UI in fiction and go "Can I make this work and would anybody pay money for it?"

even I'm guilty of this trend. i think on linux they call it "ricing" or something (not fond of the name) but at least the themes always look cool when you see them

My favourite example is probably iPulse

essentially, the creator saw the radial status indicator here:
https://www.youtube.com/watch?v=enwTApkJ0cc

and thought "hey, I can implement that!"

and then they wrote this in what I think was 2011?


https://blog.iconfactory.com/2015/10/a-new-life-for-ipulse/

i remember using it back when I had a Macbook Pro in 2011/2012ish and aside from the fact it took up some screen real-estate i remember thinking at the time that it was cooler than sliced bread.

apps like iStat have overtaken it since and I've left the apple ecosystem but yeah it was cool as hell and sometimes i just really liked watching it work in the corner while i was in photoshop or doing 3d work in modo thinking "wow this is so drat weird and cool". the screenshots really really do not do it justice.

i get that this stuff is hella goofy and space-inefficient but it makes some weird part of me very happy when i see it.

Expo70 fucked around with this message at 11:57 on Feb 4, 2022

MononcQc
May 29, 2007

You might like this classic paper of cognitive science, how a cockpit remembers its speeds, which goes over flight prep and whatnot, but also the usage of "speed bugs" to mark significant speeds and desirable zones of operation on the gauge.



This let them get access to the detailed elements, but also turn the important values they have to remember into collaborative artifacts they no longer have to hold in their heads, that they can cross-validate between crew members, and also just see at a glance whether now is good or not without having to spend time focusing on numbers and reading the proper values, or having to keep them in working memory.

IIRC it's known that from the point you have a third needle on a single gauge, though, the ability to pattern match goes down drastically.
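The speed-bug trick is basically moving the memory into the display. A toy version (all numbers and names invented here, nothing from the paper) of what marking bugs on a scale buys you:

```python
def gauge_with_bugs(value, vmin, vmax, bugs, width=40):
    """Render a one-line ASCII gauge: '|' marks are the bugs the crew
    set, '^' is the current reading. Whether the reading sits between
    the bugs is visible at a glance -- no number-reading, no working
    memory."""
    def col(v):
        # map a reading onto a character column
        return min(width - 1, int((v - vmin) / (vmax - vmin) * width))
    scale = ["-"] * width
    for b in bugs:
        scale[col(b)] = "|"
    scale[col(value)] = "^"
    return "".join(scale)
```

e.g. `gauge_with_bugs(140, 0, 200, [120, 160])` puts the needle between the two bug marks, and "we're fine" is a single pattern-match instead of a comparison of three numbers held in your head.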

Wild EEPROM
Jul 29, 2011


oh, my, god. Becky, look at her bitrate.
good rear end thread

bad ergo ui: the new bmw digital dashboards
- tach needle only shows the tip of the needle where the numbers are
- different modes which completely change the ui - eg one of the sport modes changes from tach and speedo dials to a centered numeric display for the speedo and a vertical tach
- cool stylish and unclear fonts

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff

MononcQc posted:

You might like this classic paper of cognitive science, how a cockpit remembers its speeds, which goes over flight prep and whatnot, but also the usage of "speed bugs" to mark significant speeds and desirable zones of operation on the gauge.



This let them get access to the detailed elements, but also turn the important values they have to remember into collaborative artifacts they no longer have to hold in their heads, that they can cross-validate between crew members, and also just see at a glance whether now is good or not without having to spend time focusing on numbers and reading the proper values, or having to keep them in working memory.

IIRC it's known that from the point you have a third needle on a single gauge, though, the ability to pattern match goes down drastically.

Aaaa I could drown just reading papers like this all day they are such a bitch to find!

Wild EEPROM posted:

good rear end thread

bad ergo ui: the new bmw digital dashboards
- tach needle only shows the tip of the needle where the numbers are
- different modes which completely change the ui - eg one of the sport modes changes from tach and speedo dials to a centered numeric display for the speedo and a vertical tach
- cool stylish and unclear fonts

Thank you, but most of the awesome work here is MononcQc's!


MononcQc
May 29, 2007

This coming week's paper I'm going to summarize is Gary Klein's The strengths and limitations of teams for detecting problems (highly illegal PDF link). Once again, Gary Klein's the man behind Naturalistic Decision Making (NDM), which studies experts in the field and how they make decisions.

The paper focuses on how teams detect problems. Like the actual mechanism by which people go and figure out "huh, that's wrong", and also the cases where they actually do not. But it does so specifically in the context of teams, where many people have shared ownership of a problem, hand off responsibilities to each other, and have varying degrees of expertise and capacity.

The elements studied come from 5 clinical cases in Neonatal Intensive Care Units, 4 from military decision making studies, published events from Apollo 13, the Challenger incident, the grounding of the USS Enterprise on Bishop Rock, and the failure of the U.S. Intelligence community to anticipate the attack on Pearl Harbor during World War II. An extra 12 events are studied coming from Perrow (the guy who came up with Normal Accidents I mentioned earlier in the thread). Given these sources, the analysis is qualitative, not quantitative.

One base principle here is that Klein frames problem detection as part of an ongoing problem resolution cycle. It can be the starting point of problem resolution, but may not actually be the starting point either: you might be working on fixing a problem when you uncover a new one. Another interesting note is that Klein makes a distinction between problem detection, identification, representation, and diagnosis.

Now, the diagram I'm posting is crucial to a lot of the things referred in the paper and many other ones, and unfortunately, people in the humanities have the jankiest most loving ridiculous diagrams (there's much worse than this one):



The idea is that problem detection is not a sequential process, it's a continuous evaluation of events according to a specific frame, until specific cues are picked up that can't be explained within that frame and eventually require re-framing -- figuring out there's a problem is noticing things that don't fit the frame.

There's also a cool list of the usual blockers to detecting an anomaly:



He points out that teams have many advantages over individuals:
  • wider range of attention, to monitor more channels
  • broader expertise
  • more variability, lowering the risk of fixation and increasing the chances of alternative representations
  • better ability to reorganize activities
  • parallel work
But to obtain the benefits of team work, the task needs to be decomposed. This, in turn, means that finding inconsistencies requires reassembling the task products and information management to establish situation awareness and common ground. In general, most of the failures to detect problems come from that lack of situation awareness (knowing what is happening) and common ground (the shared goals and understanding of all teammates).

The paper starts with limitations, by phase. You can get the details, but they provide a helpful summary table I'm just going to copy/paste here:



The four phases are "alertness" (looking for problems), "cue recognition" (the cues are either not detected, or detected but not communicated to teammates, such that the team is not aware of them), "sensemaking" (understanding what is going on), and "action".

If you have time to read the paper, each case comes with an example from the aforementioned real world situations, and some are pretty neat, like cases of people having information they would not know would be relevant to the situation at all, leading to disasters. There are other cool interpretations like:

quote:

One particular difficulty inexperienced team members encounter is with negative cues. They lack the experience to notice when something important has not happened. Therefore, they do not convey the absence of an event, and others, higher up in the organization, have no way of realizing that the typical event did not occur.
It's one thing Klein refers to often; expertise is being able to know what's missing. An interesting bit there is that Klein considers the autopilot in a plane to actually be a team member, and an unskilled one:

quote:

A small commuter airplane was traveling to Chicago in the winter. Asymmetric icing on the wings resulted in asymmetric lift. The autopilot compensated for the asymmetric lift, so the pilots had no cues about the difficulty. By the time the problem became too severe for the autopilot to handle and the pilots discovered the problem, the window of opportunity for a safe recovery had closed, and the plane crashed. The solution imposed by the FAA was to ban the use of autopilots under those circumstances.
It was the autopilot that detected the problem of asymmetrical lift. In addition, it was the autopilot that failed to notify the pilots, who did not discover the problem until it was too late. Woods refers to this as "decompensation," because the automation is compensating for the problem and is thereby masking it.
This is a recurring theme in cybernetics, resilience engineering, cognitive science, and human factors: automation can be framed as a teammate, and we have to admit that it is generally a bad one. So there is a need for clear protocols and understandings of the automation because it will not be communicative, anticipative, or helpful past specific parameters, which the responsible adult around has to track for it.

Anyway, the list of barriers to problem detection in teams shown in Table 2 has some overlap with the barriers identified for individuals (see Table 1). The masking of anomalies appears on both lists; for teams, the masking arises from the actions of other team members or from knowledge-based systems that are functioning as team members. The difficulty of synthesizing diffuse cues into a pattern is on both lists; for teams, the difficulty is when the cues are available to different people working in different sections. But for the most part, the barriers in teams do not appear for individuals. They are emergent properties of the dynamics of teamwork.

The main dynamic at play is the need to coordinate and re-integrate the information from all the tasks that were divided up so a team could do them effectively. This also includes knowing whether to send a message, how to send it, who to send it to, and when to ask for information. Another cause is having competing priorities within the organization, within teams, and at an individual level, and having to trade off across them.

So the paper raises a question about how the division of work should be done: dividing tasks more aggressively among less skilled workers (or automation) because the amount of information to process is too large in turn dramatically increases the risk of missing the important signals, even as the division makes the team's processing more effective! On the other hand, even though teams do possess some of the same gaps as individuals, you would still expect those gaps to be less likely given diversity, extra range, and increased expertise, so striking a balance is a tricky affair. The paper offers no great answer, but awareness is certainly a useful thing.
