Dans Macabre
Apr 24, 2004


bump_fn posted:

posting w/o reading


Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff

Oysters Autobio posted:

(shout out to the sociology prof mom. fuck that p-hacking pop-psych "ooohhhh look at those margins of error" bullshit, gimme some qualitative ethnography fuck yeaaa)

Long time lurker just jumping in and saying this is awesome.

I know this was touched on earlier but I'm really fascinated by the future of possible UI / UX designs and concepts in computing, specifically future personal computing and designs for the average knowledge-worker schmo. I don't know if this is far outside the scope of this thread, but I'm not sure where else this sort of discussion might happen.

I have no background in CS or software engineering, but I find it so interesting how, for normal everyday users (especially your typical "knowledge worker" who just reads an email inbox all day for a living, where the actual "work" is essentially advice in some form or another), the actual "office" type work has really not changed all that much.

It seems like in order to get that first generation of personal computing users, designers had to really emphasize skeuomorphic design that mimicked administrative offices and all their physical objects. "Files" get put into a "Folder", you have an email inbox where you send and receive memorandums, you save documents you might need later in some sort of shared drive (i.e. a cabinet). If you're a realtor, or a sales/marketing person, or an HR professional, is the future of UI/UX just super-custom apps that are essentially fancier dashboards visualizing data for your specific domain, layered on top of our already existing desktop computing? Most of this stuff to me looks like either a fancier/nicer-looking version of spreadsheet software or basically MadLibs walkthroughs of whatever esoteric "process" you have to do ("Click here to generate your TPS report cover sheet").

Is there ever going to be a major new re-design for desktops or email (i.e. memorandums) or personal computing that somehow "transcends" all of this paper? It seems like digitizing office ephemera was the only goal, and now everyone uses a PC as part of their daily work life without ever being a "computer person". Even with all the recent stuff in VR/AR, it all seems to only translate into physical-type jobs, like being a mechanic and having AR visualize schematics, or construction, or whatever.

anyways thanks for hearing my meandering bullshit.

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff

zokie posted:

That was a top notch effort post, shared it with my mom who is a Professor of Sociology.

You should start a podcast or something

This. I'd listen. Diligently.
Without even doing chores at the same time.

MononcQc
May 29, 2007

I'm late this week, and I'm unlikely to post in the next couple of weeks since I'll be on vacation and I may voluntarily drop technical paper reading from my list. But for today I wanted to grab stuff from the STELLA report, which has been very influential in tech circles, and bring it here since I love making references to that content.

quote:

On March 14 and 15, 2017, the SNAFUcatchers consortium held an informal workshop in New York on Coping With Complexity. About 20 people attended the workshop.

The workshop coincided with a Category 4 winter storm that paralyzed New York and much of the Eastern seaboard. That storm was named STELLA. Although nearly everyone was able to get to New York, participants from out of town were unable to return home following the end of the scheduled meeting. Many stayed an extra night and the workshop was informally continued on March 16. The participants began calling the workshop "STELLA". Hence the title of this report.

One of the most useful/influential things popularized in this report is the Above-the-line/Below-the-line framework.

This starts with a typical description of what "the system" is:

quote:

It includes internally-developed code, externally-sourced software and hardware such as databases, routers, load balancers, etc. and is provided as a product or service to customers, both internal and external to the organization.



This is a contextual view of the business and of the various components required to make things run. This is the one software engineers focus on the most, and it generally describes "production" as an environment. But the authors encourage us to take a broader view when zooming out:



This is a more systems-oriented view that also includes all the components required to write, maintain, build, and ship the code. But even then, it is an incomplete view.



So there's a shift of perspective above, accepting that all working business enterprises rely on people to build, maintain, troubleshoot, and operate the technical components of the system. Putting everything together gives this view:



quote:

The people engaged in observing, inferring, anticipating, planning, troubleshooting, diagnosing, correcting, modifying and reacting to what is happening are shown with their individual mental representations. These representations allow the people to do their work -- work that is undertaken in pursuit of particular goals. To understand the implications of their actions requires an understanding of the cognitive tasks they are performing and, in turn, an understanding of what purposes those cognitive tasks serve.

The green line is the line of representation. It is composed of terminal display screens, keyboards, mice, trackpads, and other interfaces. The software and hardware (collectively, the technical artifacts) running below the line cannot be seen or controlled directly. Instead, every interaction crossing the line is mediated by a representation. This is true as well for people in the using world who interact via representations on their computer screens and send keystrokes and mouse movements.

This is key stuff. Specifically, everything "below the line" comes from inference. You can't put your head in the computer, see the electrons go, and say "ah, there's the DB doing indexing." We work from abstractions and translations of events and signals from below the line into concepts that make sense for us above the line, using mental models. And each person's mental model is different, changing, and an imperfect representation.

So maintaining a good-enough mental model of the below-the-line components is key to working effectively. But not only that! Since most of the adaptive work is done by people above the line, we also have to create and maintain mental models of what other people understand. I have to anticipate what others know to work effectively with them.

This, in short, is the mental image that comes up all the time whenever I hear or mention socio-technical systems: working on both the technical representation and the social communication and understanding.

The rest of the report contains lessons from multiple incidents: Chef & Apache rollout issues, Travis CI failing and RabbitMQ growing, Logstash problems, etc. All the anomalies are examples of complex systems failures (surprises with no specific root cause, conditions present for weeks/months before anything happened, triggered by minor unrelated events) that grew to cascade. The authors highlighted features common to all the responses:
  • surprise
  • uncertainty
  • the role of search
  • the role of system representations
  • generating hypotheses
  • use of basic tools
  • coordinating work and action
  • communications in joint activity
  • shared artifacts
  • the consequences of escalating consequences and disturbance management
  • managing risk
  • goal sacrifice

The report defines each of them in depth, but since I wanted to focus on the above/below-the-line framework, I'm going to skip these. They then cover post-mortems and their focus on technical and social aspects (which I'm eliding as a bit off-topic for this thread), including the concepts of blamelessness vs. sanctionlessness. Same for incident response patterns around costs of coordination, "strange loop" patterns (hidden or unexpected circular dependencies that emerge and break systems), and dark debt (technical debt creating vulnerabilities that can't be recognized until they reveal themselves).

More on topic for here is a section on visualization and how it essentially sucks for incident handling:

quote:

Representations that support the cognitive tasks in anomaly response are likely to be quite different from those now used in "monitoring". Current automation and monitoring tools are usually configured to gather and represent data about anticipated problem areas. It is unanticipated problems that tend to be the most vexing and difficult to manage. Although representation design for cognitive aiding is challenging, it is an area that is likely to be fruitful for this community. The environment is ripe for innovation.

Anyway this is what I really wanted to focus on, but if you deal with incidents that involve computers it's a very good read.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.
Enjoy your vacation. Your effort posts will be missed.

skimothy milkerson
Nov 19, 2006

yo is this the keeb and mouse thread?

why wont logitech put the good scroll wheel on the mx track ball? the fcuk

Midjack
Dec 24, 2007



Skim Milk posted:

yo is this the keeb and mouse thread?

why wont logitech put the good scroll wheel on the mx track ball? the fcuk

no the keyboard thread is here.

MononcQc
May 29, 2007

Someone at work just pointed me to this very good video introducing the above/below-the-line framework in 5 mins. It has David Woods and Richard Cook, who have both been mentioned a lot here:

https://www.youtube.com/watch?v=3P5wuxI0cMY

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome
this guy just invented platos cave

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff

rotor posted:

this guy just invented platos cave

The trick I would imagine is knowing how deep in the cave you are, and knowing how to swim to lower depths that light still reaches instead of drowning in hypernormalization.

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

Expo70 posted:

The trick I would imagine is knowing how deep in the cave you are, and knowing how to swim to lower depths that light still reaches instead of drowning in hypernormalization.

i dont remember there being any water features in platos cave

Lonely Wolf
Jan 20, 2003

Will hawk false idols for heaps and heaps of dough.
theres also an ice level

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

Lonely Wolf posted:

theres also an ice level

you're thinking of Platos Fortress of Solitude

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff

rotor posted:

you're thinking of Platos Fortress of Solitude

Don't you remember the part where you come in from the left and surf down, then hit the huge pillar of ice and then the entire thing is spent trying to get to the top again so you can defeat Robotnik? Michael Jackson did the music and everything.

edit:
OK, Michael Jackson did NOT do the music and everything.

Expo70 fucked around with this message at 17:59 on Jun 28, 2022

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome
you reach enlightenment when you collect 100% of the coins from Platos Mystic Cave Zone

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff
Crossposting a discovery from my days running tabletop as a teenager and experimenting with the same effect in other game types (card, video, etc.). I'm wondering what the formal terminology for this "negative pressure/void pressure" discovery is, other than it potentially just being a variant of negative feedback motivation, and whether or not Human Factors has any subtypes of it.

Internet Janitor posted:

on a very tangential note, I rather like science fiction which depicts transhumanist technology with extremely severe downsides that people put up with anyway out of desperation, delusion, or a combination of both

for this reason, gary gygax's virtually unplayable pen-and-paper rpg cyborg commando has a special place in my heart. where lesser minds might have constructed a simple pastiche power-fantasy of cybernetic super soldiers, we're treated to catches such as

  • battery capacity is so limited that you could burn through your entire reserves in a few rounds of combat and then be incapacitated until someone hauls you off with a crane
  • cyborg bodies have to be comically oversized to fit your brain in your torso (which is itself a long story), with all the attendant problems for using regular-person sized tools, clothing, and equipment
  • weirdly specific details of the articulation (or lack thereof) of various joints in your body
  • explicitly zero facial articulation apart from moving the jaw
  • all bodily waste is emitted as aerosols from your mouth and nose
  • your braincase is pre-loaded with 2 years of brain-food, it is not possible to resupply it without elaborate surgery and highly specialized equipment, and most of the medical facilities that could do it have all been fucked up by an alien invasion, so in all likelihood you're gonna starve to death inside your robot body even if you win the war

the rulebook concludes by stating that maybe someday they'll figure out how to reinstall your brain in your cryogenically frozen original body, provided that you can fend off the alien invasion and the surviving industrial complex can figure out how to un-freeze a body...

Expo70 posted:

i just found what i'm reading tonight before bed. this reads like the stuff i used to get told off for trying to slide into our giant-robot tabletop games as a teenager:

  1. Pilots are largely unimportant in "serious" missions -- with remote-pilots essentially just being there to hold triggers down as a glorified dead man's switch for legal purposes, so if the robot fucks up there's someone who can go to prison and the industrial complex and military have a legal out, as set by a precedent established in some weird legal edge case involving factory robots in an industrial accident twenty years prior.
  2. In situations where heavy jamming made remote operation impossible, the whole trope of "lol teenaged pilot" was kinda flipped on its head with genetic reform and smaller bodies which were better able to endure g-forces due to the shorter blood path and higher relative cardiovascular performance. This was done to make players outside of robots VERY vulnerable not only to the world, but the people within it and the social pressures of said world.
  3. The whole "pilot is a thing in goop" was basically "we put you in an iron maiden full of liquid" and as g-forces build up, your legs are functionally crushed to squeeze the blood back up to your brain. Actually based loosely on a real thing by Dr. R. Flanagan Gray, who used such a machine to achieve 35Gs sustained at the US naval warfare centre in 1958 -- so as a result ingame, most pilots died from drowning when the respiration equipment was compromised, or via hydrobaric invariance caused by explosions striking the pilot-cavity. This meant trying to get a pilot out before this happened was kinda important and made players stay close to each other.
  4. A colony of flash-cloned bodyparts was lost in an accident a decade or so prior, and was found to be a small, highly functional society of around 350 people, isolated from the wider world, which had a decent shot at outliving the cold-war present.
  5. Their longevity for long-term storage meant they didn't die of old age, so they very rarely gave birth and thus their population was incredibly old and stable. I basically backed a player into a corner and said "look, no more robots for you unless you get a transplant from their leader, which would destroy the society".
  6. The player did it, but everybody else hated them for the rest of the game.
  7. I did this kind of fuckery that made the players hate their own characters a lot by making them do increasingly messed up stuff, or risk becoming obsolete and that fight to prevent their own obsolescence was a big part of the game. Sure they had their power-fantasies but god I was so harsh about it. The only one by the end who loved their character was a non-combatant who in the very end was like a care-taker for what were basically idiot-savants who had no memory of where they came from or what they were for, and four players who decided playing the setting was more interesting than trying to grind for pure stats like most tabletop players usually did in the group.
  8. After this campaign, I never had to worry about players who wanted to power-level in any of our games, because I made it very clear that if they over-levelled, I would deliberately telegraph that I would make things difficult for them some how. They initially took this as a challenge, but before long they got that what I was actually doing was trading player-autonomy in exchange for combat potency and that players generally only got one or the other.
  9. By the end, they all pushed for agency over potency every time because they learned you can feel small, free, and important without these things ever being mutually exclusive and that optimization is not the optimal play in all situations - which is the hardest lesson for a game-designer to teach players within the framework of a given game.

There's a very, very old cursed-problem maxim in game design that goes: "Given the opportunity, players will optimize the fun out of a game" -- and the job of designers is to protect players from themselves.

The fix I learned was that optimization should involve the surrender of agency: unwanted randomness and complex, highly undesirable consequences are how you respond to optimization. As players optimize, you make the game MUCH harder, to herd them. Eventually, players settle into an equilibrium between agency and potency and it becomes a self-regulating difficulty curve. I've never known this to fail.

It works just as well in videogame design as it does in tabletop, as it does in card games. The trick is knowing how to communicate the changes, so players make their decisions knowingly: the losses are not random, and the perceived trajectory of their loss of agency over time is not random either.

This means knowing when to show the players it might be time to pump the brakes.

A favourite mechanism in a particularly hard fight is to give players triple their normal stats, then put a timer on the table and say "if you don't defeat him before the timer hits zero, you die and lose everything. You cannot stop this timer once it has been started". Players will flirt with it in trivial encounters, which alleviates boredom by turning score-attack into time-attack as an optimization strategy; but in risky or high-entropy encounters they won't go anywhere near it -- they'll feel pressured to use it but won't fall back on it -- which keeps the pressure constant in longer engagements (an exceptionally difficult thing to do, cognitively speaking).
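
A minimal sketch of that rule, in Python as pseudocode (names and numbers made up -- the real thing lived in a tabletop binder):

code:

import time

class OverdriveTimer:
    # opt-in "triple your stats" trade: potency while the clock runs,
    # total loss if it expires. cannot be stopped once started.
    def __init__(self, duration_s=180.0, multiplier=3.0):
        self.duration_s = duration_s
        self.multiplier = multiplier
        self.started_at = None

    def start(self):
        if self.started_at is None:
            self.started_at = time.monotonic()

    def remaining(self):
        if self.started_at is None:
            return self.duration_s
        return self.duration_s - (time.monotonic() - self.started_at)

    def apply(self, stats):
        if self.started_at is None:
            return stats                      # never started: no trade made
        if self.remaining() <= 0:
            raise RuntimeError("timer hit zero: you die and lose everything")
        return {name: value * self.multiplier for name, value in stats.items()}
The point isn't the arithmetic; it's that start() is the player's own hand on the lever -- the threat does the work, not the buff.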

Players generally hate timers with a passion, and it's enough to turn someone off of playing an entire game (see: Majora's Mask, due to misconceptions about how the game's time system worked vs. what the tenets of game design were seen as being for those kinds of games).

Who loving knew the potential threat of a timer they themselves elect to use could be such a powerful psychological pressure?

Negative pressure and void-pressure are amazing.

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

rotor posted:

you reach enlightenment when you collect 100% of the coins from Platos Mystic Cave Zone

ok I laughed

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff
I was reading this a while ago (https://apps.dtic.mil/sti/pdfs/ADA570632.pdf) relating to symbology research under some of the different environmental conditions and it got me wondering: Is there any reason we don't perform display filtering on the pixel level to counter optical problems with lenses in VR headsets?

I know spheroid distortion affecting legibility is a huge problem on most display systems -- for example, ghosting differentiation and chromatic aberration due to precision limits in lens manufacture.

I think that by using angled drop-shadows of differing contrast, distance, spacing and softness, text could be adjusted to help improve legibility and even directly fight colorfighting -- via a calibration tool with a grid of text in the XYZ space of the lens itself.

I *think* the USAF does something similar in some of its optics to red/green-fight them, via blue/yellow mixing that adds contrast shadows in certain HUD display systems used on the F/A-18C, to help deal with legibility problems when terrain is also behind the HUD itself -- but I'm not entirely sure how it works.

I wonder if this could be used in conjunction with velocity fields?

Sorry this isn't more useful, it's just a thought -- can software and shader-effects be used to help partially compensate for limitations in optics?
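
(My hunch is yes, at least for the geometric part: per-channel radial pre-distortion is, as far as I understand it, roughly what existing VR compositors already do against chromatic aberration. A toy numpy sketch -- the k coefficients here are invented, a real pipeline would fit them per-lens during calibration and run this as a shader rather than on the CPU:)

code:

import numpy as np

def prewarp_channel(channel, k):
    # radially rescale one colour channel so the lens's own dispersion
    # cancels it out; nearest-neighbour sampling for brevity
    h, w = channel.shape
    ys, xs = np.indices((h, w)).astype(float)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    x, y = (xs - cx) / cx, (ys - cy) / cy        # normalised to [-1, 1]
    scale = 1.0 + k * (x * x + y * y)            # inverse radial term
    sx = np.clip(x * scale * cx + cx, 0, w - 1).astype(int)
    sy = np.clip(y * scale * cy + cy, 0, h - 1).astype(int)
    return channel[sy, sx]

def precompensate(rgb, k_r=0.02, k_g=0.0, k_b=-0.02):
    # one made-up coefficient per channel; get real ones from calibration
    return np.dstack([prewarp_channel(rgb[..., i], k)
                      for i, k in enumerate((k_r, k_g, k_b))])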

I'm not really sure if this field has a name or what the keywords of examining these conditions and situations are, but I'm continuing to look into this.

PS: type:pdf site:dtic.mil is an easy way to find a glut of military human-factors reports coming from NATO and the US military. Probably basic google-fu, but I'm also wondering what other domains are useful sources for papers, even if they need to be scihub'd? Thanks.

Expo70 fucked around with this message at 15:07 on Jul 3, 2022

MononcQc
May 29, 2007

Vacations are over, and I decided to pair this week's paper with a systems-oriented reading group someone started at work. This week's paper is Systemic Design Principles in Social Innovation: A Study of Expert Practices and Design Rationales by Mieke van der Bijl-Brouwer and Bridget Malcolm. It's a follow-up to a previous paper of theirs (which I hadn't read) where they looked at 5 organizations doing design and ran various interviews to understand how those organizations tried to do systemic design to influence various parts of society. In this paper, they look at the same cases to find commonalities in the approaches.

The 5 agencies included worked on projects like balancing time vs. quality investments in Danish schoolteachers following a reform, Netherlands cities and partnering organizations trying to make life better for younger people, a group of 3 Canadian non-profits trying to find ways to reduce social isolation in adults with cognitive disabilities, Australian child protective services helping to improve the rate of children moving from foster care to their original family, and a Canadian provincial government trying to have better open data policies and more valuable data.

So that's very varied stuff, and the paper was written by people who didn't necessarily have a deep knowledge of systems theory, but they were given a lot of literature recommendations, did a literature review, and iteratively went back to their systems theory experts to validate and deepen the research each time.

The paper starts with a neat opener where they cover high-level systems theory concepts, and tie them to design work:

quote:

The move of traditional design to the domain of social innovation means that traditional design practice needs to be adapted to this field. [...] One such adaptation is visible in design practices that have become increasingly systemic. This includes designers gaining a deep understanding of the complexity and wickedness of problems and societal systems, and developing new practices to design for these systems.

[G]rowing complexity and increasing strain on societal systems has reignited an interest in integrating systems thinking and design practices to build on the analytical strengths of systems thinking and the action-oriented strengths of design. This unified field of systemic design is emerging as a new area of practice and academic study.
[...]
Systems thinking is the understanding of a phenomenon within the context of the larger whole. This process is referred to as synthesis, which is opposed to, and complements, the reductionist process of analysis. Russell Ackoff explains that “in analytical thinking, the thing to be explained is treated as a whole to be taken apart. In synthetic thinking, the thing to be explained is treated as part of a containing whole.”

Systems thinking emerged in response to the limitations of analytical and reductionist thinking as presented within the scientific viewpoint that has prevailed since the age initiated by the Renaissance. This viewpoint is based on the belief that the behavior of the whole can be understood entirely from the properties of its parts. This reductionist thinking and approach is core to many disciplines and professions—for example in Western medicine, which is organized into specializations based on parts of human bodies.

Although science and analysis have had an immense and positive impact on society, a limitation of reductionist thinking is [...] that “improvement in the performance of parts of a system taken separately may not, and usually does not, improve performance of the system as a whole.”
[...]
The domain of sociotechnical systems targeted by social innovation practitioners is characterized by high levels of complexity and unpredictability and cannot be sufficiently described or controlled through a pre-determined design solution. While we can design and engineer technical systems within a sociotechnical domain, we can only aim to influence or intervene in the broader complex systems they are part of.

This covers good bases, the paper explains a few broader currents in the rest of the intro, but they're not necessarily relevant to their main point about commonalities in approaches when designing with systems in mind so I'm eliding them here.

They identified a bunch of principles ("a rule or heuristic established through experience that guides a practitioner towards a successful solution") that were used over and over. They warn that these were found by qualitative analysis -- whose reliability they don't rate -- and that different analysts would find different patterns, which is a bit of a bummer.

The 5 principles are:

opening up and acknowledging the interrelatedness of problems
This basically says that you can't necessarily identify a bunch of problems and solve them independently to end up with no problems. Sometimes problems are owned by multiple people, or solving a problem requires inconveniencing other people, so they're all connected. To address this, practitioners had to adequately consider the various perspectives from which problems could be framed, and choose among them deliberately. By deliberately developing multiple perspectives, various solution pathways opened up. They refer to this as taking "an expansionist" view, often with the aid of mapping mechanisms, visualization tools, etc.

Some of them were also careful in the choice of vocabulary: calling something a "problem" or a "solution" tends to force a narrowing of perspectives. Calling things "situations", "challenges", "systemic intervention" or "prototypes" tended to keep their visions more flexible.

developing empathy with the system
This is still related to all the various perspectives they can have on the systems they study. Acknowledging the various perspectives can reveal tensions between people and stakeholders of the system, and surfacing these tensions is key to finding useful ways forward:

quote:

We don’t just collect stories of [citizens] and hang them on the wall, but we engage with them politically. So we take these stories and go to the police, or to school, or to whoever is mentioned in these stories, and we collect the counter-stories, because also the system is trying its best when tackling societal challenges, and has its own stories about what does and does not work well.

Contrary to regular design (which is often about desires and goals of stakeholders), the systemic design approach tends to focus on the relationships between stakeholders.

strengthening human relationships to enable learning and creativity
Continuing that trend of perspectives and relationships, they found that one of the best courses of intervention was to focus on learning and creativity within these relationships. This focus means that you can't come up with a recipe book. You may have known intervention patterns, but they'll always need to be adjusted and adapted to current contexts. New behaviours, learnings, and experiences arose from improving the relationships, not from something you just told people to do.

To couch this in systems theory terms, they are aiming for self-organization of elements in a system, such that new emergent behaviours and adaptations can take place to meet overall system objectives.

This means designers need to let go of the ambition to control the relationships, and instead must focus on creating conditions, infrastructure, or platforms that promote new behaviours and learnings of people evolving within the system.

influencing mental models to enable change
People work from mental models:

quote:

All of the practitioners in our case studies identified dominant mental models either held by the client organization, or by users or other stakeholders that held the system back from enabling more positive outcomes. This included the belief that restoration of a child to their birth family is the best outcome in child protection in the TACSI case study, and that it is more important for adults with a disability to be safe than to learn in the InWithForward case study.

Because mental models are socially learned ways of perceiving and organizing information, they can be changed.

They can challenge people to see things differently by:
  • introducing new language to change public narratives
  • focusing on stakeholders that already held enabling mental models
  • making mental models explicit to facilitate discussion and change

They generally consider mental models one of the most effective leverage points in a system, since they're the basis of action for the people in it. This isn't necessarily a common lever in regular design, but it's worth a lot according to this study.

adopting an evolutionary design approach

This resembles the evolutionary process of “vary, select, and amplify” described in living systems theory; designers take an incremental approach where they prototype various interventions ("making a portfolio"), see which of these get traction, and then refine and improve whatever shows the most promise, while always keeping them aligned with overall goals.



When coming up with a prototype, it's not even always known who will own it and implement it; they show prototypes to various stakeholders, see what gets traction, and decide based on buy-in.

The idea is that in complex systems, people only have a better ability to understand what happened in retrospect, and so they push for a mindset of always being in an experimental mode. In no small part this is because even the problem definition is often not well-understood:

quote:

However, rather than only enabling evolution through execution, design practices also use the evolutionary process in the design of the prototype experiments themselves. Design practice reflects a co-evolutionary problem and solution process, which means that

“Creative design is not a matter of first fixing the problem, and then searching for a satisfactory solution concept. Creative design seems more to be a matter of developing and refining together both the formulation of a problem and ideas for a solution.”

Prototyping in design is therefore not just about testing ideas for interventions—it also helps reframe the problem. This integration of prototyping and framing practices in design offers opportunities for design to contribute to evolutionary practices.

You'll note that this brings us back to concepts of broadening frames and perspectives!

In general, complex-systems design holds that problems don't get "solved"; instead they require ongoing intervention, with experiments that are considered "safe to fail." A major shift from regular design, for large systemic social design, is to move away from user-centric approaches and towards ones that focus more on the relationships between stakeholders, with a long-term commitment to continuous intervention. In some cases, that also led to groups trying to embed design capability within the system, so that continuous improvement can be driven from within.

The article concludes:

quote:

As each complex problem situation is different, there is not one way of doing things and we must rely on adaptive practice, where practices are adapted to the problem context at hand. Such adaptations require every actor concerned to engage in a continual and mutual learning process. We therefore stress the need for ongoing education together, through learning communities that include academics and practitioners across multiple disciplines.

MononcQc fucked around with this message at 20:50 on Jul 23, 2022

Shame Boy
Mar 2, 2010

semi-related to this thread, i was reading the wiki article on dragon king theory and it has what might be the best shitpost graph i have ever seen

MononcQc
May 29, 2007

My understanding is that a lot of safety theory folks sort of really dislike Nassim Nicholas Taleb (the guy behind "black swan events" and "antifragile" as terms) because he took clearly defined academic concepts, ignored them, invented ambiguous terminology that sounds cool, and then pushed it as a new science that tried to upend a lot of well-established concepts that had been proven useful.

I had never heard of the Dragon King stuff, but at least the wikipedia article is sort of helpful enough:

quote:

The black swan concept is important and poses a valid criticism of people, firms, and societies that are irresponsible in the sense that they are overly confident in their ability to anticipate and manage risk. However, claiming that extreme events are—in general—unpredictable may also lead to a lack of accountability in risk management roles. In fact, it is known that in a wide range of physical systems that extreme events are predictable to some degree.[4][5][2][3] One simply needs to have a sufficiently deep understanding of the structure and dynamics of the focal system, and the ability to monitor it. This is the domain of the dragon kings. Such events have been referred to as "grey swans" by Taleb.

The "coupling" and interaction chart you posted, while definitely looking like maddening shitpost, sort of draws onto other concepts I've seen. IIRC, This SINTEF Report has a good overview of various incident models, which include things such as "Energy transfers and barriers", "Defence in depth", etc. And section 5 is all about couplings, and Charles Perrow's model of "Normal accidents":

quote:

Major accidents, such as the Three Mile Island accident, often come as fundamental surprise to the people that manage and operate the system (Turner, 1978; Woods, 1990). However, Charles Perrow (1984) insisted that some systems have structural properties that make such accidents virtually inevitable. He therefore labelled these fundamentally surprising events “Normal Accidents”

[...]

In contrast to component failure accidents, system accidents involve the unanticipated interaction of several latent and active failures in a complex system. Such accidents are difficult or impossible to anticipate. This is partly because of the combinatorial problem – the number of theoretically possible combinations of three or four component failures is far larger than the number of possible component failures. Moreover, some systems have properties that make it difficult or impossible to predict how failures may interact.

[...]

Some systems, such as major nuclear power plants, are characterised by high interactive complexity. These systems are difficult to control, not only because they consist of many components, but also because the interactions among components are non-linear. Linear interactions lead to predictable and comprehensible event sequences. In contrast, non-linear interactions lead to unexpected event sequences. Non-linear interactions are often related to feedback loops. A change in one component may thus escalate due to a positive feedback loop, it may be suppressed by a negative feedback loop, or it may even turn into its opposite by some combination of feedback loops. Such feedback loops may be introduced to increase efficiency (e.g. heat exchangers in a process plant). Even some safety systems may add to the interactive complexity of a system, for instance if overheating of a given component initiates automatic cooling. Interactive complexity makes abnormal states difficult to diagnose, because the conditions that cause them may be hidden by feedback controls designed to keep the system stable under normal operations. Moreover, the effects of possible control actions are difficult to predict, since positive or negative feedback loops may propagate or attenuate or even reverse the effect in an unforeseeable manner. Unknown side effects are another source of interactive complexity.

Another system characteristic that makes control difficult is tight coupling. Tightly coupled systems are characterised by the absence of “natural” buffers. A change in one component will lead to a rapid and strong change in related components. This implies that disturbances propagate rapidly throughout the system, and there is little opportunity for containing disturbances through improvisation. Tight couplings are sometimes accepted as the price for increased efficiency. For instance, Just-in-time production allows companies to cut inventory costs but makes them more vulnerable if a link in the production chain breaks down. In other cases, tight couplings may be the consequence of restrictions on space and weight. For instance, the technical systems have to be packed more tightly on an offshore platform than on a refinery, and this may make it more challenging to keep fires and explosions from propagating or escalating.

They also provide a table crossing interactive complexity (linear vs. complex interactions) with coupling (loose vs. tight), and explain:

quote:

  1. A system with high interactive complexity can only be effectively controlled by a decentralised organisation. Highly interactive technologies generate many non-routine tasks. Such tasks are difficult to program or standardise. Therefore, the organisation has to give lower level personnel considerable discretion and encourage direct interaction among lower level personnel.
  2. A system with tight couplings can only be effectively controlled by a highly centralised organisation. A quick and co-ordinated response is required if a disturbance propagates rapidly throughout the system. This requires centralisation. The means to centralise may, e.g., include programming and drilling of emergency responses. Moreover, a conflict between two activities can quickly develop into a disaster, so activities have to be strictly coordinated to avoid conflicts
  3. It follows from this that an organisational dilemma arises if a system is characterised by high interactive complexity and tight couplings. Systems with high interactive complexity can only be effectively controlled by a decentralised organisation, whereas tightly coupled systems can only be effectively controlled by a centralised organisation. Since an organisation cannot be both centralised and decentralised at the same time, systems with high interactive complexity and tight couplings cannot be effectively controlled, no matter how you organise. Your system will be prone to “Normal accidents”.

So the concepts around black/grey swans and dragon kings are sort of interesting because they give a perspective from the point of view of someone looking at the system from afar, but there are decades of theories gradually getting refined (the SINTEF report is super good there) that actually try to manage these sorts of events as part of building and analyzing systems.

I did have a lightning talk for a local user group mentioning a bunch of models; it could be interesting material for here.

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
soc is per bak's thingy. bak was basically the king of the more ridiculous instances of the santa fe institute's much-vaunted tendency to look at a thing, say "damn theres a fractal in there", find a fractal, make a log-log plot with a straight line in it, and move on. bak-tang-wiesenfeld for piles of this one kind of rice / simple self-organized criticality, bak-sneppen for evolution, etc etc.
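
(the btw model is simple enough to fit in a post. toy python, my own sketch, not anyone's reference code -- drop grains, topple any cell that hits four, count avalanche sizes; histogram the sizes on log-log axes and you get bak's straight line:)

code:

import random

def btw_sandpile(n=50, grains=20000):
    # bak-tang-wiesenfeld sandpile: open boundary, topple threshold of 4
    grid = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        x, y = random.randrange(n), random.randrange(n)
        grid[y][x] += 1
        unstable, size = [(x, y)], 0
        while unstable:
            cx, cy = unstable.pop()
            if grid[cy][cx] < 4:
                continue
            grid[cy][cx] -= 4                    # topple
            size += 1
            for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                if 0 <= nx < n and 0 <= ny < n:  # edge grains fall off
                    grid[ny][nx] += 1
                    if grid[ny][nx] >= 4:
                        unstable.append((nx, ny))
        sizes.append(size)
    return sizes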

taleb was a big collaborator with bak decades back (bak died 2000), also w mandelbrot, so that's the weird physico-geometrical base of things. but you get to do trading, which is taleb's main occupation, without serious intellectual theories so thats why hes such a lightweight there

self-organization is not a coherently described term. the more scientifically valuable point of view on it is the second-order critical phase transition in satisfiability, which was explicated harder by giorgio parisi (nobel 2021...). because that phase transition in satisfiability sorta schmucks itself into any np-complete problem. many actual real-world problems have a tendency to easily become np-complete once any serious details are added.

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff

bob dobbs is dead posted:

many actual real-world problems have a tendency to easily become np-complete once any serious details are added.

Expo70
Nov 15, 2021

Can't talk now, doing
Hot Girl Stuff
https://www.youtube.com/watch?v=d2fBBJRjccs

So one thing I'm very curious to know in the future is whether the motion-sensing and eye-gaze recognition capabilities will be paired with some sort of augmented reality coming from other sensors in a given cluster of personal systems in a personal network.

Hooo boy.


Invocation, attention, time-cost, speed and context
    For one, I wonder a lot about the dismissal and attentional effects of a technology like this -- ie, when you'll either want it around or desire to dismiss it and the kinds of ways it would be invoked.

    Looking at technologies like, say, spoken-word response assistants like Alexa, the headache I see is that while they're phenomenal for inputting strings or doing fixed-term searches, they are painfully slow for selection or applying minor deltas, and they don't have the capacity to append a subtask to a given task -- ie, you can't ask it to play a song and then in the same command give a fixed volume level, because the time and energy spent invoking the attentional or summon command is a second entire string that has to be appended to the first, which makes anything complex become *very complex*.

    In humans, we do this with gaze or passive attentional sound that's simpler than names - usually a combination of the two, and we also abridge commands down or concatenate them kind of on the fly which requires a capacity for past memory search in natural language -- which AI isn't really super good at unless you're looking at systems like GPT(x) which have some capability in this area but are computationally very expensive.

    I can see these kinds of systems being paired together, but you run the risk of then saturating a given space with token communication, since spoken word is inherently simplex by design -- meaning only one person can speak and one person can listen. The same is true of text, sure, but it's not as chronologically sensitive: you can switch text contexts with a lag of around 0.8s I think, then read to match an expected return in 0.2s, or perform fresh comprehension of a new text return in ((n*0.2)^C)^N where n is wordcount, C is complexity and N is novelty -- to my understanding (if better models exist, I want to know about them; I'm not even sure where I got this from but it's one I've used for years).
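
    (Stated as code since the parentheses get hairy -- this is just my model above, verbatim, guessed constants and all, not an established result:)

code:

def comprehension_time(n, C=1.0, N=1.0):
    # ((n * 0.2)^C)^N: n words at ~0.2s each, scaled by complexity & novelty
    return ((n * 0.2) ** C) ** N

comprehension_time(10, C=1.2, N=1.1)  # fresh-ish 10-word reply: ~2.5s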

Wearables are always so damn disappointing

    I'm going to say what I always say about wearables: subvocalization, gaze recognition, and a personal network system with decentralized peripherals and detached modular outside-in UX are essential to making these systems function properly. The software *SHOULD* be looking at what it's representing and then attempting to represent it through a chain of logical, semantic and mathematical rules which can be altered and edited and given exceptions -- as if photoshop looked at the representations of the data its panels manipulated and then tried to put those representations into metaphors usable by human beings.

    If you are looking where your fingers are going, or your eyes are drawn away from being able to walk through an environment safely, your device isn't wearable, it's mountable. Wearing means acting through an object: it doesn't limit you, it extends your abilities. Shoes protect our feet. Pockets enable containment. Loops, connections, buckles, zips, patterns, the sociological cornucopia of Merleau-Ponty's experiential corporeal schema of identity and its representation, and the battersea madness of Uexküll and Sebeok's Umgebung self-perception, the Innenwelt of allocentrism of the self in the scope of the world of the umwelt.

    Come on, already!

Wearables, and computing in general, despite networks, are still isolated and frightened

    Those kinds of systems should be able to learn the habits of users and the overrides they supply as a UX middleware.

    That middleware should be able to look at something akin to a universal data representation standard API that includes things like events, labels, sources, structs, etc -- and it should do to UX what Interface Builder did for NeXT and the products that inherited that legacy: MacOS, iOS, and the tools which replicated those kinds of design patterns. It shouldn't be the *only* option obviously, but it would mean someone could whip up something visual which invokes and hooks into systems in different separate programs as if they're services (honestly, why do we even think of them as different in 2022), or go in the other direction and have a given context discover functionality which leads to a known conclusion and then invoke different equivalent functionality to achieve it.

    I think my favourite example of something like this is probably Quixel Suite, which was this weird parasitic software package that latched onto photoshop and turned its basic pixel-manipulating powerhouse into a monster for procedurally texturing objects, by invoking ML-trained systems which generate different image-maps like bump, specular, displacement, roughness, etc. These maps could then be sourced individually onto a 3D mesh with ID-maps to populate and bake a texture, OR you could hand-paint whichever combination of whichever ones you want, wherever you want.

    It makes me want to rip out my own hair that good code is in some walled garden and can't ever be re-used, that it only ever belongs to its own context, and that these things aren't transplantable or approximately equivalent so things are kept maintainable. The fact that code and programs never get to be "finished", that there's no stage where the software is handed over to a community to maintain and alter it on some level, means the longevity of any solution is incredibly short.

    Fantastic software you use every day will probably fall to this, and if you've lived long enough, you've got programs you can look back on fondly for which no equivalent now exists that runs well on modern hardware, despite the fact the context for it to be needed still exists. It wasn't made obsolete, and nine times out of ten, it was ruined through the decay of its own design by a design team who had no idea what the fuck they were doing.

    Like this: Oh god what?!

    https://www.youtube.com/watch?v=dKx1wnXClcI
    I want clean pancakes, this time.


New metaphors, communication and control
    I'm sure there's a logical reason for this (other than capitalism), but my understanding of computers begins with ASM circa 1997, then has a massive gap, and then is bizarrely resumed with visual scripting languages, because a brain injury now means I struggle to parse indentations and symbols properly, and dyslexia means when I do manage, I tend to get it wrong -- which makes C++ not a conceptual learning problem for me but a sensorifics problem.

    I'm so goddamn pissed and angry that gulf can't just be crossed. Like yeah, of course you're gonna have programmers writing single-threaded software in a metaphor which suits single-threaded execution, because you're asking for an abstraction the metaphor can't represent well. Likewise, the complexity and condensation of visual languages is always going to be LESS dense than purely textual languages.



    It's a shit-show that these things just aren't directly interoperable. I did not think the future of computing would be rediscovering the past because the present is so goddamn awful and lazy and botched together.
    This is present everywhere from our phones and the metaphors we use to interact with them to the way we write operating systems and software.
    There's all these horrific issues of managing different scopes, and yet all of our editors are just much nicer versions of editors we've had for thirty or forty years now.
    Surely that's not ok? It always comes back to this stuff every single time.
    We're wasteful and lazy, and now instead of it being the compiler's problem, like it was 30 years ago, it's the problem of both the programmer and the operator.
    I'm sure so much has been done in this field, but all of these solutions functioning in vacuums is why they're all doomed to fail.
    Interoperability is the way you survive, by being a link in a chain instead of trying to sell rope. The worst is when someone tries to make a link in the chain out of rope, and it seems fine but then it just crumbles away and suddenly the entire stack collapses under the weight of its own rot.

    I'm sure I had a point to make here. This absolutely isn't a shitpost and I speak entirely with 100% sincerity -- I think for us to move forward in any of these fields, we need to look at the new hyper-problem of managing the sheer amount of information and systems we have because when they're not well managed, we end up re-inventing the wheel and adding more to the pile of stuff we have to preserve and maintain.


    Pictured: Him, officer, it's all his fault.

The bare minimum, and we can't even do that:
    I wish there were a good way to just, for example, bind any midi or axis to any slider in any program while it runs whether its panel is open or not. Can you imagine the efficiency gains? People shell out huge amounts of money to do this in photoshop, or video editing tools or CAD. Can you even imagine if we could go further?
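
    (The listening half is genuinely easy, which is what makes it frustrating -- a sketch using mido, a real MIDI library; the set_param hook is the hypothetical part, because almost no program actually exposes one, which is the whole complaint:)

code:

import mido  # needs a backend such as python-rtmidi installed

def bind_cc_to_param(port_name, cc_number, set_param):
    # route one control-change knob to any parameter setter, normalised 0..1
    with mido.open_input(port_name) as port:
        for msg in port:
            if msg.type == "control_change" and msg.control == cc_number:
                set_param(msg.value / 127.0)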

    I thought that's where human factors and ergonomics was going to take us, but we never got there.

Expo70 fucked around with this message at 17:19 on Jul 27, 2022

MononcQc
May 29, 2007

Expo70 posted:

New metaphors, communication and control
    I'm sure there's a logical reason for this (other than capitalism), but my understanding of computers begins with ASM circa 1997, then has a massive gap, and then is bizarrely resumed with visual scripting languages, because a brain injury now means I struggle to parse indentations and symbols properly, and dyslexia means when I do manage, I tend to get it wrong -- which makes C++ not a conceptual learning problem for me but a sensorifics problem.

    I'm so goddamn pissed and angry that gulf can't just be crossed. Like yeah, of course you're gonna have programmers writing single-threaded software in a metaphor which suits single-threaded execution, because you're asking for an abstraction the metaphor can't represent well. Likewise, the complexity and condensation of visual languages is always going to be LESS dense than purely textual languages.



    It's a shit-show that these things just aren't directly interoperable. I did not think the future of computing would be rediscovering the past because the present is so goddamn awful and lazy and botched together.
    This is present everywhere from our phones and the metaphors we use to interact with them to the way we write operating systems and software.
    There's all these horrific issues of managing different scopes, and yet all of our editors are just much nicer versions of editors we've had for thirty or forty years now.
    Surely that's not ok? It always comes back to this stuff every single time.
    We're wasteful and lazy, and now instead of it being the compiler's problem, like it was 30 years ago, it's the problem of both the programmer and the operator.
    I'm sure so much has been done in this field, but all of these solutions functioning in vacuums is why they're all doomed to fail.
    Interoperability is the way you survive, by being a link in a chain instead of trying to sell rope. The worst is when someone tries to make a link in the chain out of rope, and it seems fine but then it just crumbles away and suddenly the entire stack collapses under the weight of its own rot.

    I'm sure I had a point to make here. This absolutely isn't a shitpost and I speak entirely with 100% sincerity -- I think for us to move forward in any of these fields, we need to look at the new hyper-problem of managing the sheer amount of information and systems we have because when they're not well managed, we end up re-inventing the wheel and adding more to the pile of stuff we have to preserve and maintain.


    Pictured: Him, officer, it's all his fault.
I'll focus only on this part, because I know programming more than the other things, but also because I'm 1/3 of the way through an interesting paper I may post this weekend on why distributed computer-supported cooperative work systems tend to fail at being adopted. Specifically, the line about our editors just being slightly nicer versions of 40-year-old editors.

I think a major reason these things are not actually in a vacuum is all the tools that operate on and manage code that did not exist before: code search, static analysis tools, policy enforcement (eg. linting), refactoring tools (renaming and moving contents), programmer help (hinting, auto-completion, "language servers"), remote editing, compatibility across stacks (whether it's an RPI with a serial interface, a server in the cloud, or a desktop), source control, code reviewing, the ability to link to specific areas (eg. files and lines), live pairing sessions, customization, support for various locales, AI-assisted code snippets (I will never use copilot!), portability across languages (few systems are purely monolingual), the ability to copy/paste across media (from a web page or a chat client to your code editor), etc.

All of these tend to operate on lines of code as text. They theoretically don't have to, but they do. You could also expand this to account for software operations: would logs, metrics, and remote tracing work as well when they mostly expect text-based concepts to be represented, or would substantial usage require these to be visual as well?

Building a visual system implies that either you have an underlying representation that is text-friendly in order to work with all these tools, or that, as someone creating a visual programming language, you are embarking on the journey of re-implementing the entire ecosystem of tools that support development along with it. And some of these are definitely worth it: source control and diffing are among the things other engineering disciplines feel they should learn from software engineering.

But if you just do the editing step, you're partly stuck in a sort of vacuum where none of the other tools exist, so the benefit of switching has to offset the downsides of losing the entire chain.
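
To make "text-friendly underlying representation" concrete, here's a minimal sketch (a made-up format, not any real tool's) of a visual node graph serialized deterministically, so that git diff, code search, and review keep working even though authoring stays visual:

code:

import json

# hypothetical node graph for a visual language, kept as plain data
graph = {
    "nodes": [
        {"id": "mul1", "op": "multiply", "inputs": ["thrust", "dt"]},
        {"id": "add1", "op": "add", "inputs": ["velocity", "mul1"]},
    ],
    "outputs": {"velocity": "add1"},
}

# deterministic serialization (stable key order, stable indentation)
# keeps diffs small and reviewable
with open("flight_model.graph.json", "w") as f:
    f.write(json.dumps(graph, indent=2, sort_keys=True))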

So the question is whether the people who perceive a benefit from the new system tend to be those who need to do the extra work, or whether the extra work comes at their expense. Cases where those doing the extra work aren't those benefiting tend to require a lot more resources to force or nudge in that direction, and it's unlikely to happen organically.

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
the only really viable alternative to plain lines of text is still text, but sexprs. and then you gotta be a goddamn weenie to do it

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome
i always think of visual programming environments as neat but kind of an eternal beginner mode - easily picked up and quickly outgrown. Humans write things and i dont think thats gonna be replaced any time soon.

MononcQc
May 29, 2007

You can make a lot of friendlier systems for domain-specific things with visual programming and metaphors. Macromedia Flash and Director were two examples of software development environments that were stupid productive for some areas, where you could do in minutes shit that could take days elsewhere, in no small part because you could bring your development much closer to the final product.

But uh, for sure that wouldn't fit well in a multi-language build system that publishes artifacts and whatnot in modern-day enterprises.

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

MononcQc posted:

You can make a lot of friendlier systems for domain-specific things with visual programming and metaphors. Macromedia Flash and Director were two examples of software development environments that were stupid productive for some areas, where you could do in minutes shit that could take days elsewhere, in no small part because you could bring your development much closer to the final product.

But uh, for sure that wouldn't fit well in a multi-language build system that publishes artifacts and whatnot in modern-day enterprises.

yeah i've worked on many of these and in practice everyone eventually hits a certain skill level and moves to text.

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome
like there's nothing wrong with having a beginner mode, in fact its very very good, but there also needs to be an intermediate and expert mode.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

if you want visual programming you can go do plant automation for plc industrial controls

you will regret this

(expressing complex ideas, rules, and relationships graphically is harder than using language; ed tufte makes good money charging ballrooms full of middle managers $1k each for a one day information design basics presentation)

Shame Boy
Mar 2, 2010

rotor posted:

i always think of visual programming environments as neat but kind of an eternal beginner mode - easily picked up and quickly outgrown. Humans write things and i dont think thats gonna be replaced any time soon.

to be clear literally all of Expo70's game project is written in unreal's visual programming thing (blueprints) because her dyslexia doesn't let her read C++ without getting immediately tripped up by the "curly boys". like she's written complex aerodynamic simulations and shit with visual building blocks connected together with lines, which is just nuts to me.

i'm absolutely certain it's the largest body of code ever written in blueprints by now, and it somehow works reasonably well

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Shame Boy posted:

to be clear literally all of Expo70's game project is written in unreal's visual programming thing (blueprints) because her dyslexia doesn't let her read C++ without getting immediately tripped up by the "curly boys". like she's written complex aerodynamic simulations and shit with visual building blocks connected together with lines, which is just nuts to me.

i'm absolutely certain it's the largest body of code ever written in blueprints by now, and it somehow works reasonably well

Ubisoft enters the chat

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

Shame Boy posted:

to be clear literally all of Expo70's game project is written in unreal's visual programming thing (blueprints) because her dyslexia doesn't let her read C++ without getting immediately tripped up by the "curly boys". like she's written complex aerodynamic simulations and shit with visual building blocks connected together with lines, which is just nuts to me.

i'm absolutely certain it's the largest body of code ever written in blueprints by now, and it somehow works reasonably well

i think thats great and its a testament to why things like that are valuable.

fwiw i can barely read c++ either.

Gnossiennes
Jan 7, 2013


Loving chairs more every day!

I wonder if visual programming would get me to understand programming, because every time i've tried learning python or javascript, i hit a wall fairly early and just cannot comprehend what i'm trying to learn.

Is there a recommended thing for learning visual programming?

i'm a designer by trade, so I mean, something making more sense to me by virtue of being visual instead seems understandable, but i also write plain text notes a lot when i'm doing research & refinement, so don't know why coding trips me up so bad. math maybe??

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

Gnossiennes posted:


Is there a recommended thing for learning visual programming?


flash is a good one

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome
whats that one weird DAW

edit: i cant find it but it was this thing where you'd wire up wave generators to stuff, idk its not my field

rotor fucked around with this message at 03:59 on Jul 28, 2022

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
max/msp. its not that good

flash has been eol'd by stebe before he died of stupid

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome
there's some things that'll generate code from UML diagrams but its mostly scaffolding code


rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

bob dobbs is dead posted:

max/msp. its not that good
naw that's not it, it was just released like last year or something, its killin me i cant remember the name, right on the tip of my tongue

quote:

flash has been eol'd by stebe before he died of stupid

I meant the authoring environment. I guess it's called "Adobe Animate" now. Largely similar interface tho.
