|
bump_fn posted:posting w/o reading
|
# ? Jun 19, 2022 03:54 |
|
|
# ? Apr 26, 2024 10:47 |
|
Oysters Autobio posted:(shout out to the sociology prof mom. gently caress that p-hacking pop-psych "ooohhhh look at those margins of error" bullshit, gimme some qualitative ethnography gently caress yeaaa
|
# ? Jun 19, 2022 18:51 |
|
zokie posted:That was a top notch effort post, shared it with my mom who is a Professor of Sociology. This. I'd listen. Diligently. Without even doing chores at the same time.
|
# ? Jun 21, 2022 03:31 |
|
I'm late this week, and I'm unlikely to post in the next couple of weeks since I'll be on vacation, and I may voluntarily drop technical paper reading from my list. But for today I wanted to grab stuff from the STELLA report, which has been very influential for tech stuff, and bring it here since I love to make references to that content.

quote:On March 14 and 15, 2017, the SNAFUcatchers consortium held an informal workshop in New York on Coping With Complexity. About 20 people attended the workshop.

One of the most useful/influential things popularized in this report is the Above-the-line/Below-the-line framework. This starts with a typical description of what "the system" is:

quote:It includes internally-developed code, externally-sourced software and hardware such as databases, routers, load balancers, etc. and is provided as a product or service to customers, both internal and external to the organization.

This is a contextual view of the business and of the various components required to make things run. This is the one that software engineers focus on the most, and it generally describes "production" as an environment. But the authors encourage us to take a broader view when zooming out: a more systems-oriented view that also includes all the components required to write, maintain, build, and ship the code. But even then, it is an incomplete view. So there's a shift of perspective above, accepting that all working business enterprises rely on people to build, maintain, troubleshoot, and operate the technical components of the system. Putting everything together gives this view:

quote:The people engaged in observing, inferring, anticipating, planning, troubleshooting, diagnosing, correcting, modifying and reacting to what is happening are shown with their individual mental representations. These representations allow the people to do their work -- work that is undertaken in pursuit of particular goals. 
To understand the implications of their actions requires an understanding of the cognitive tasks they are performing and, in turn, an understanding of what purposes those cognitive tasks serve.

This is key stuff. Specifically, everything "below the line" comes from inference. You can't put your head in the computer, see the electrons go, and say "ah, there's the DB doing indexing." We work from abstractions and translations of events and signals from below the line into concepts that make sense for us above the line, using mental models. And each person's mental model is different, changing, and an imperfect representation. So maintaining a good enough mental model of the below-the-line components is key to working well. But not only that! Since most of the adaptive work is done by people above the line, we also have to maintain and create mental models of what other people understand. I have to anticipate what others know to work effectively with them. This, in short, is the mental image that comes up all the time whenever I hear or mention socio-technical systems: working on both the technical representation and the social communication and understanding.

The rest of the report contains lessons from multiple incidents: chef & apache rollout issues, Travis CI failing and RabbitMQ growing, Logstash problems, etc. All anomalies are examples of complex systems failures (surprises with no specific root cause, things present for weeks/months before they happened, triggered by minor unrelated events) that grew to cascade. They highlighted common features of all responses:
The report defines each of them in depth, but since I wanted to focus on the above/below-the-line framework, I'm going to skip these. They then cover post-mortems and their focus on technical and social aspects (which I'm eliding as a bit off-topic for this thread), including concepts of blamelessness vs. sanctionlessness. Same for incident response patterns around costs of coordination, "strange loop" patterns (hidden or unexpected circular dependencies that emerge and break systems), and dark debt (technical debt creating vulnerabilities that can't be recognized until they reveal themselves).

More on topic for here is a section on visualization and how it essentially sucks for incident handling:

quote:Representations that support the cognitive tasks in anomaly response are likely to be quite different from those now used in "monitoring". Current automation and monitoring tools are usually configured to gather and represent data about anticipated problem areas. It is unanticipated problems that tend to be the most vexing and difficult to manage. Although representation design for cognitive aiding is challenging, it is an area that is likely to be fruitful for this community. The environment is ripe for innovation.

Anyway, this is what I really wanted to focus on, but if you deal with incidents that involve computers it's a very good read.
|
# ? Jun 27, 2022 00:25 |
|
Enjoy your vacation. Your effort posts will be missed.
|
# ? Jun 27, 2022 00:59 |
yo is this the keeb and mouse thread? why wont logitech put the good scroll wheel on the mx track ball? the fcuk
|
|
# ? Jun 27, 2022 01:44 |
|
Skim Milk posted:yo is this the keeb and mouse thread? no the keyboard thread is here.
|
# ? Jun 27, 2022 02:38 |
|
Someone at work just pointed me to this very good video introducing the above/below-the-line framework in 5 mins. It has David Woods and Richard Cook, who have both been mentioned a lot here: https://www.youtube.com/watch?v=3P5wuxI0cMY
|
# ? Jun 27, 2022 20:35 |
|
this guy just invented platos cave
|
# ? Jun 27, 2022 20:51 |
|
rotor posted:this guy just invented platos cave The trick I would imagine is knowing how deep in the cave you are, and knowing how to swim to lower depths that light still reaches instead of drowning in hypernormalization.
|
# ? Jun 27, 2022 22:06 |
|
Expo70 posted:The trick I would imagine is knowing how deep in the cave you are, and knowing how to swim to lower depths that light still reaches instead of drowning in hypernormalization. i dont remember there being any water features in platos cave
|
# ? Jun 27, 2022 22:15 |
|
theres also an ice level
|
# ? Jun 28, 2022 03:05 |
|
Lonely Wolf posted:theres also an ice level you're thinking of Platos Fortress of Solitude
|
# ? Jun 28, 2022 05:16 |
|
rotor posted:you're thinking of Platos Fortress of Solitude Don't you remember the part where you come in from the left and surf down, then hit the huge pillar of ice and then the entire thing is spent trying to get to the top again so you can defeat Robotnik? Michael Jackson did the music and everything. edit: OK, Michael Jackson did NOT do the music and everything. Expo70 fucked around with this message at 17:59 on Jun 28, 2022 |
# ? Jun 28, 2022 06:02 |
|
you reach enlightenment when you collect 100% of the coins from Platos Mystic Cave Zone
|
# ? Jun 28, 2022 06:06 |
|
Crossposting a discovery from my days running tabletop as a teenager, and experimenting with the same effect in other game-types (card, video, etc). I'm wondering what the formal terminology for this discovery of "negative pressure/void pressure" is, other than it potentially just being a variant of negative feedback motivation, and whether or not Human Factors has any subtypes of it.

Internet Janitor posted:on a very tangential note, I rather like science fiction which depicts transhumanist technology with extremely severe downsides that people put up with anyway out of desperation, delusion, or a combination of both

Expo70 posted:i just found what i'm reading tonight before bed. this reads like the stuff i used to get told off for trying to slide into our giant-robot tabletop games as a teenager:
|
# ? Jun 28, 2022 20:13 |
|
rotor posted:you reach enlightenment when you collect 100% of the coins from Platos Mystic Cave Zone ok I laughed
|
# ? Jun 28, 2022 21:49 |
|
I was reading this a while ago (https://apps.dtic.mil/sti/pdfs/ADA570632.pdf) relating to symbology research under some of the different environmental conditions, and it got me wondering: is there any reason we don't perform display filtering at the pixel level to counter optical problems with lenses in VR headsets? I know spheroid distortion affecting legibility is a huge problem on most display systems -- for example, ghosting differentiation and chromatic aberration due to precision limits in lens manufacture. I think that by using angled drop-shadows of differing contrast, distance, spacing, and softness, text could be adjusted to help improve legibility and even directly fight colorfighting -- via a calibration tool with a grid of text in the XYZ space of the lens itself. I *think* the USAF uses something similar in some of their optics to red/greenfight via blue/yellow mixing, adding contrast shadows in certain HUD display systems used on the F/A-18C to help deal with legibility problems when terrain is also behind the HUD itself, but I'm not entirely sure how it works. I wonder if this could be used in conjunction with velocity fields? Sorry this isn't more useful, it's just a thought -- can software and shader effects be used to help partially compensate for limitations in optics? I'm not really sure if this field has a name or what the keywords for examining these conditions and situations are, but I'm continuing to look into this. PS: type:pdf site:dtic.mil is an easy way to find a glut of military human-factors reports coming from NATO and the US military. Probably basic google-fu, but I'm also wondering what other domains are useful sources for papers, even if they need to be scihub'd? Thanks. Expo70 fucked around with this message at 15:07 on Jul 3, 2022 |
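To make the pixel-level idea concrete, here's a toy numpy sketch of per-channel radial pre-distortion: warp each colour channel in the opposite direction of the lens's dispersion so the channels land re-aligned after the glass. The scale factors here are invented placeholders; real values would come out of the per-lens calibration pass described above.

```python
import numpy as np

def precorrect_chromatic(img, k_r=0.994, k_g=1.0, k_b=1.006):
    """Pre-distort each colour channel radially (about the optical
    centre) so the lens's dispersion re-aligns them. The k_* scale
    factors are placeholders; real values come from calibration."""
    h, w, _ = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    out = np.empty_like(img)
    for ch, k in enumerate((k_r, k_g, k_b)):
        # source coordinates, scaled about the optical centre
        sy = cy + (ys - cy) * k
        sx = cx + (xs - cx) * k
        # nearest-neighbour sampling keeps the sketch dependency-free
        syi = np.clip(np.round(sy).astype(int), 0, h - 1)
        sxi = np.clip(np.round(sx).astype(int), 0, w - 1)
        out[..., ch] = img[syi, sxi, ch]
    return out
```

A real implementation would live in a fragment shader with bilinear sampling and a measured distortion polynomial per channel, but the structure is the same.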
# ? Jul 3, 2022 15:05 |
|
Vacations are over, and I decided to join this week's paper with some systems-oriented reading group someone started at work. This week's paper is Systemic Design Principles in Social Innovation: A Study of Expert Practices and Design Rationales by Mieke van der Bijl-Brouwer and Bridget Malcolm.

This is a follow-up to a previous paper they had written (and that I hadn't read) where they looked at 5 organizations doing design and held various interviews to understand how those organizations tried to do systemic design to influence various parts of society. In this paper, they look at them to find commonalities in their approaches. The 5 agencies worked on projects like balancing time vs. quality investments for Danish schoolteachers following a reform, Netherlands cities and partnering organizations trying to make life better for younger people, a group of 3 Canadian non-profits trying to find ways to reduce social isolation in adults with cognitive disabilities, Australian child protective services helping to improve the rate of children moving from foster care back to their original family, and a Canadian provincial government trying to have better open data policies and more valuable data.

So that's very varied stuff, and the paper was written by people who didn't necessarily have a deep knowledge of systems theory, but who were given a lot of literature recommendations, did a literature review, and iteratively went back to their systems theory experts to validate and re-deepen the research every time. The paper starts with a neat opener where they cover high-level systems theory concepts and tie them to design work:

quote:The move of traditional design to the domain of social innovation means that traditional design practice needs to be adapted to this field. [...] One such adaptation is visible in design practices that have become increasingly systemic. 
This includes designers gaining a deep understanding of the complexity and wickedness of problems and societal systems, and developing new practices to design for these systems.

This covers good bases. The paper explains a few broader currents in the rest of the intro, but they're not necessarily relevant to the main point about commonalities in approaches when designing with systems in mind, so I'm eliding them here. They identified a bunch of principles ("a rule or heuristic established through experience that guides a practitioner towards a successful solution") that were used over and over. They warn that these were found by qualitative analysis—whose reliability they don't rate—and that different analysts would find different patterns, which is a bit of a bummer. The 5 practices are:

opening up and acknowledging the interrelatedness of problems

This basically says that you can't necessarily identify a bunch of problems and solve them independently to end up with no problems. Sometimes problems are owned by multiple people, or solving a problem requires inconveniencing other people, so they're all connected. To address this, people had to adequately consider various perspectives to frame problems and pick which one they'd choose. By deliberately developing multiple perspectives, various solution pathways opened up. They refer to this as taking "an expansionist" view, often with the aid of mapping mechanisms, visualization tools, etc. Some of them were also careful in their choice of vocabulary: calling something a "problem" or a "solution" tends to force a narrowing of perspectives. Calling things "situations", "challenges", "systemic interventions" or "prototypes" tended to keep their visions more flexible.

developing empathy with the system

This is still related to all the various perspectives they can have on systems they study. 
Acknowledging the various perspectives can reveal tensions between people and stakeholders of the system, and surfacing these tensions is key to finding useful ways forward:

quote:We don’t just collect stories of [citizens] and hang them on the wall, but we engage with them politically. So we take these stories and go to the police, or to school, or to whoever is mentioned in these stories, and we collect the counter-stories, because also the system is trying its best when tackling societal challenges, and has its own stories about what does and does not work well.

Contrary to regular design (which is often about desires and goals of stakeholders), the systemic design approach tends to focus on the relationships between stakeholders.

strengthening human relationships to enable learning and creativity

Continuing that trend of perspectives and relationships, they found that one of the best courses of intervention was to focus on learning and creativity within these relationships. This focus means that you can't come up with a recipe book. You may have known intervention patterns, but they'll always need to be adjusted and adapted to current contexts. New behaviours, learnings, and experiences arose from improving the relationships, not as something you just told people to do. To couch this in systems theory terms, they are aiming for self-organization of elements in a system such that new emergent behaviours and adaptations can take place to meet overall system objectives. This means designers need to let go of the ambition to control the relationships, and instead must focus on creating conditions, infrastructure, or platforms that promote new behaviours and learnings in the people evolving within the system. 
influencing mental models to enable change

People work from mental models:

quote:All of the practitioners in our case studies identified dominant mental models either held by the client organization, or by users or other stakeholders, that held the system back from enabling more positive outcomes. This included the belief that restoration of a child to their birth family is the best outcome in child protection in the TACSI case study, and that it is more important for adults with a disability to be safe than to learn in the InWithForward case study.

They can challenge people to see things differently by:
They generally consider mental models one of the most effective leverage points in a system, since they are the basis of action for the people in it. This isn't necessarily a common usage in regular design, but is worth a lot according to this study.

adopting an evolutionary design approach

This resembles the evolutionary process of “vary, select, and amplify” described in living systems theory; designers take an incremental approach where they prototype various interventions ("making a portfolio"), see which of these get traction, and then refine and improve them based on whatever shows the most promise, while always keeping them aligned with overall goals. When coming up with a prototype, it's not even always known who will own it and implement it; they show prototypes to various stakeholders, see what gets traction, and decide based on buy-in. The idea is that in complex systems, people only have a better ability to understand what happened in retrospect, and so they push for a mindset of always being in an experimental mode. In no small part this is because even the problem definition is often not well-understood:

quote:However, rather than only enabling evolution through execution, design practices also use the evolutionary process in the design of the prototype experiments themselves. Design practice reflects a co-evolutionary problem and solution process, which means that

You'll note that this brings us back to concepts of broadening frames and perspectives! In general, complex system design mentions that problems don't get "solved"; instead they require ongoing intervention, with experiments that are considered "safe to fail." A major shift from regular design to large systemic social design is to move away from user-centric approaches and towards ones that focus more on the relationships between stakeholders, with a long-term commitment to continuous intervention. 
In some cases, that also led to groups trying to embed design capability within the system so that continuous improvement can be driven from within. The article concludes:

quote:As each complex problem situation is different, there is not one way of doing things and we must rely on adaptive practice, where practices are adapted to the problem context at hand. Such adaptations require every actor concerned to engage in a continual and mutual learning process. We therefore stress the need for ongoing education together, through learning communities that include academics and practitioners across multiple disciplines.

MononcQc fucked around with this message at 20:50 on Jul 23, 2022 |
# ? Jul 23, 2022 20:37 |
|
semi-related to this thread, i was reading the wiki article on dragon king theory and it has what might be the best shitpost graph i have ever seen
|
# ? Jul 25, 2022 15:01 |
|
My understanding is that a lot of safety theory folks really dislike the work of Nassim Nicholas Taleb (the guy behind "black swan events" and "antifragile" as terms), because he took clearly defined academic concepts, ignored them, invented ambiguous terminology that sounds cool, and then pushed it as a new science that tried to upend a lot of well-established concepts that had been proven useful. I had never heard of the Dragon King stuff, but at least the wikipedia article is sort of helpful:

quote:The black swan concept is important and poses a valid criticism of people, firms, and societies that are irresponsible in the sense that they are overly confident in their ability to anticipate and manage risk. However, claiming that extreme events are—in general—unpredictable may also lead to a lack of accountability in risk management roles. In fact, it is known that in a wide range of physical systems that extreme events are predictable to some degree.[4][5][2][3] One simply needs to have a sufficiently deep understanding of the structure and dynamics of the focal system, and the ability to monitor it. This is the domain of the dragon kings. Such events have been referred to as "grey swans" by Taleb.

The "coupling" and interaction chart you posted, while definitely looking like a maddening shitpost, draws on other concepts I've seen. IIRC, this SINTEF report has a good overview of various incident models, which include things such as "energy transfers and barriers", "defence in depth", etc. And section 5 is all about couplings and Charles Perrow's model of "Normal Accidents":

quote:Major accidents, such as the Three Mile Island accident, often come as a fundamental surprise to the people that manage and operate the system (Turner, 1978; Woods, 1990). However, Charles Perrow (1984) insisted that some systems have structural properties that make such accidents virtually inevitable. 
He therefore labelled these fundamentally surprising events “Normal Accidents”.

They also provide a table and an accompanying explanation (both posted as images in the original; see section 5 of the report for Perrow's interactions-vs-coupling comparison).
So the concepts around black/grey swans and dragon kings are sort of interesting because they give a perspective from the point of view of someone looking at the system from afar, but there are decades of theories being gradually refined (the SINTEF report is super good there) to actually try and manage these sorts of events as part of building and analyzing systems. I did have a lightning talk for a local user group mentioning a bunch of models; it could be interesting material for here.
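For the curious, the dragon-king idea is easy to play with numerically: fit a straight line to a rank-size (log-log) plot of event sizes, and see what sticks out far above the power-law fit. This is a toy sketch of mine with an arbitrary cutoff, not Sornette's actual statistical tests:

```python
import numpy as np

def flag_dragon_kings(sizes, z_thresh=3.0):
    """Rank-size (Zipf) check: fit a line to log(size) vs log(rank),
    then flag events whose residual sits more than z_thresh standard
    deviations above the fit. The threshold is an arbitrary
    illustration, not a principled test."""
    s = np.sort(np.asarray(sizes, dtype=float))[::-1]  # descending
    ranks = np.arange(1, len(s) + 1)
    logr, logs = np.log(ranks), np.log(s)
    slope, intercept = np.polyfit(logr, logs, 1)       # power-law fit
    resid = logs - (slope * logr + intercept)
    z = (resid - resid.mean()) / resid.std()
    return s[z > z_thresh]
```

On a clean power-law sample this flags nothing; drop in one event orders of magnitude above the tail and it gets picked out, which is exactly the "predictable outlier" framing the wikipedia quote is making.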
|
# ? Jul 25, 2022 17:00 |
|
soc is per bak's thingy. bak was basically the king of the more ridiculous instances of the santa fe institute's much-vaunted tendency to look at a thing, say "drat theres a fractal in there", find a fractal, make a log-log plot with a straight line in it, and move on. bak-tang-wiesenfeld for piles of this one kind of rice / simple self-organized criticality, bak-sneppen for evolution, etc etc. taleb was a big collaborator with bak decades back (bak died 2002), also w mandelbrot, so that's the weird physico-geometrical base of things. but you get to do trading, which is taleb's main occupation, without serious intellectual theories so thats why hes such a lightweight there

self-organization is not a coherently described term. the more scientifically valuable point of view on it is the second-order critical phase transition in satisfiability, which was explicated harder by giorgio parisi (nobel 2021...). because it's satisfiability that phase transition sorta schmucks itself into any np-complete problem. many actual real-world problems have a tendency to easily become np-complete once any serious details are added.
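fwiw the bak-tang-wiesenfeld model mentioned above is tiny to implement if you want to watch the avalanches yourself -- a toy sketch of the abelian sandpile on a grid with open boundaries:

```python
import numpy as np

def sandpile(n=20, grains=2000, seed=42):
    """Bak-Tang-Wiesenfeld sandpile: drop grains at random sites; any
    site holding 4+ grains topples, sending one grain to each of its
    neighbours (grains fall off the edges). Returns the number of
    topplings triggered by each dropped grain, i.e. avalanche sizes."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((n, n), dtype=int)
    avalanches = []
    for _ in range(grains):
        x, y = rng.integers(0, n, size=2)
        grid[x, y] += 1
        topples = 0
        while (grid >= 4).any():
            # topple every unstable site; re-check until quiescent
            for i, j in zip(*np.where(grid >= 4)):
                grid[i, j] -= 4
                topples += 1
                for a, b in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if 0 <= a < n and 0 <= b < n:
                        grid[a, b] += 1
        avalanches.append(topples)
    return avalanches
```

histogram the avalanche sizes on a log-log plot and you get the straight(ish) line the whole cottage industry was built on.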
|
# ? Jul 25, 2022 17:33 |
|
bob dobbs is dead posted:many actual real-world problems have a tendency to easily become np-complete once any serious details are added.
|
# ? Jul 26, 2022 12:24 |
|
https://www.youtube.com/watch?v=d2fBBJRjccs So one thing I'm very curious to know in future is whether or not the motion-sensing and eye-gaze recognition capabilities would be paired with some sort of augmented reality coming from other sensors in a given cluster of personal systems in a personal network. Hooo boy.

Invocation, attention, time-cost, speed and context
Looking at technologies like spoken-word response assistants such as Alexa, the headache I see is that while they're phenomenal for inputting strings or doing fixed-term searches, they are painfully slow for selection or applying minor deltas, and they don't have the capacity to append a given task with a subtask -- i.e., you can't ask it to play a song and, in the same command, give a fixed volume level, because the time and energy spent invoking the attentional or summon command means a second entire string has to be appended to the first, which makes anything complex become *very complex*. In humans, we do this with gaze or a passive attentional sound that's simpler than names -- usually a combination of the two -- and we also abridge commands or concatenate them kind of on the fly, which requires a capacity for past-memory search in natural language -- which AI isn't really super good at unless you're looking at systems like GPT(x), which have some capability in this area but are computationally very expensive. I can see these kinds of systems being paired together, but you run the risk of saturating a given space with token communication, since spoken word is inherently simplex by design -- meaning only one person can speak and one person can listen. The same is true of text, sure, but it's not as chronologically sensitive, so you can switch text contexts with a lag of around 0.8s I think, and then read to match an expected return in 0.2s, or perform fresh comprehension of a new text return in ((n*0.2)^C)^N seconds, where n is wordcount, C is complexity and N is novelty -- to my understanding (if better models exist, I want to know about them; I'm not even sure where I got this from but it's one I've used for years).

Wearables are always so drat disappointing
If you are looking at where your fingers are going, or your eyes are drawn away from being able to walk through an environment safely, your device isn't wearable, it's mountable. Wearing means acting through an object -- that it doesn't limit you and instead extends your abilities. Shoes protect our feet. Pockets enable containment. Loops, connections, buckles, zips, patterns, the sociological cornucopia of Merleau-Ponty's experiential corporeal schema of identity and its representation, and the battersea madness of Uexküll and Sebeok's Umgebung self-perception, the Innenwelt of allocentrism of the self in the scope of the world of umwelt. Come on, already!

Wearables, and computing in general, despite networks, is still isolated and frightened
That middleware should be able to look at something akin to a universal data representation standard API that includes things like events, labels, sources, structs, etc -- and it should do for UX what Interface Builder did for NeXT and the products that inherited that legacy: MacOS, iOS, and the tools which replicated those kinds of design patterns. It shouldn't be the *only* option obviously, but it would mean someone could whip up something visual which invokes and hooks into systems in different separate programs as if they're services (honestly, why do we even think of them as different in 2022), or go in the other direction and have a given context discover functionality which leads to a known conclusion and then invoke different equivalent functionality to achieve it. I think my favourite example of something like this is probably Quixel Suite, which was this weird parasitic software package which latched onto Photoshop and turned its basic pixel-manipulating powerhouse into a monster for procedurally texturing objects, by invoking ML-trained systems which generate different image maps like bump, specular, displacement, roughness, etc -- but also these maps could then be sourced individually onto a 3D mesh with ID-maps to populate and bake a texture, OR you could hand-paint whichever combination of maps you want, wherever you want. It makes me want to rip out my own hair that good code is in some walled garden and can't ever be re-used, that it only ever belongs to its own context, and that these things aren't transplantable or approximately equivalent so things can be kept maintainable. The fact that code and programs never get to be "finished", that there's no stage where the software is handed over to a community to maintain and alter it on some level, means the longevity of any solution is incredibly short. 
Fantastic software you use every day will probably fall to this, and if you've lived long enough, you've got programs you can look back on fondly for which no equivalent now exists that runs well on modern hardware, despite the fact that the context for it to be needed still exists. It wasn't made obsolete, and nine times out of ten, it was ruined through the decay of its own design by a design team who had no idea what the gently caress they were doing. Like this: Oh god what?! https://www.youtube.com/watch?v=dKx1wnXClcI I want clean pancakes, this time.

New metaphors, communication and control
I'm so goddamn pissed and angry that gulf can't just be crossed. Like yeah, of course you're gonna have programmers writing single-threaded software in a metaphor which suits single-threaded execution, because you're asking for an abstraction that the metaphor can't represent well. Likewise, the complexity and condensation of visual languages is always going to be LESS dense than purely textual languages. It's a poo poo-show that these things just aren't directly interoperable. I did not think the future of computing would be rediscovering the past because the present is so goddamn awful and lazy and botched together. This is present everywhere, from our phones and the metaphors we use to interact with them, to the way we write operating systems and software. There's all these horrific issues of managing different scopes, and yet all of our editors are just much nicer versions of editors we've had for thirty or forty years now. Surely that's not ok? It always comes back to this stuff every single time. We're wasteful and lazy, and now instead of it being the compiler's problem, like it was 30 years ago, it's the problem of both the programmer and the operator. I'm sure so much has been done in this field, but all of these solutions functioning in vacuums is why they're all doomed to fail. Interoperability is the way you survive: by being a link in a chain instead of trying to sell rope. The worst is when someone tries to make a link in the chain out of rope, and it seems fine, but then it just crumbles away and suddenly the entire stack collapses under the weight of its own rot. I'm sure I had a point to make here. 
This absolutely isn't a shitpost and I speak entirely with 100% sincerity -- I think for us to move forward in any of these fields, we need to look at the new hyper-problem of managing the sheer amount of information and systems we have, because when they're not well managed, we end up re-inventing the wheel and adding more to the pile of stuff we have to preserve and maintain. Pictured: him, officer, it's all his fault.

The bare minimum, and we can't even do that:
I thought that's where human factors and ergonomics was going to take us, but we never got there. Expo70 fucked around with this message at 17:19 on Jul 27, 2022 |
# ? Jul 27, 2022 16:33 |
|
Expo70 posted:New metaphors, communication and control

I think a major reason why these things are not actually in a vacuum is all the tools that operate on and manage code that did not exist before: code search, static analysis tools, policy enforcement (e.g., linting), refactoring tools (renaming and moving contents), programmer help (hinting, auto-completion, "language servers"), remote editing, compatibility across stacks (whether it's an RPI with a serial interface, a server in the cloud, or a desktop), source control, code reviewing, the ability to link to specific areas (e.g., files and lines), live pairing sessions, customization, support for various locales, AI-assisted code snippets (I will never use copilot!), portability across languages (few systems are purely monolingual), the ability to copy/paste across media (from a web page or a chat client to your code editor), etc. All of these tend to operate on lines of code as text. They theoretically don't have to, but they do. You could also expand this to account for software operations: would logs, metrics, and remote tracing work as well when they mostly expect text-based concepts to be represented, or would substantial usage require these to be visual as well? Building a visual system implies that either you have an underlying representation that is text-friendly in order to work with all these tools, or that, as someone creating a visual programming language, you are embarking on the journey of re-implementing the entire ecosystem of tools that support development along with it. And some of these are definitely worth it: source control and diffing are among the things other engineering disciplines feel they should learn from software engineering. But if you just do the editing step, you're partly stuck in a sort of vacuum where none of the other tools exist, so the benefit of switching has to offset the downsides of losing the entire chain. 
So the question is whether the people who perceive a benefit from the new system tend to be those who need to do the extra work, or whether the extra work comes at their expense. Cases where those doing the extra work aren't those benefiting tend to require a lot more resources to force or nudge in that direction, and it's unlikely to happen organically.
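To make the "text-friendly underlying representation" point concrete, here's a toy sketch (the graph shape and node names are all made up, not any real editor's format): if a visual editor serializes its node graph to stable, sorted text, the existing diff/blame/review chain keeps working even though nobody edits the text by hand.

```python
import json

# Toy "visual" program: nodes and wires, the kind of thing a
# node-graph editor would manipulate on screen.
graph = {
    "nodes": {
        "n1": {"op": "const", "value": 2},
        "n2": {"op": "const", "value": 3},
        "n3": {"op": "add", "inputs": ["n1", "n2"]},
    },
}

def serialize(graph):
    # sort_keys + fixed indent gives stable, line-oriented text,
    # so git diff/blame/grep and code review still have something
    # meaningful to chew on.
    return json.dumps(graph, sort_keys=True, indent=2) + "\n"

text = serialize(graph)
# Round-trips losslessly, and re-serializing is byte-identical,
# which is what keeps diffs small and reviewable:
assert json.loads(text) == graph
assert serialize(json.loads(text)) == text
```

The catch, of course, is that a diff of wire endpoints is still a lot less readable than a diff of statements, which is the rest of the re-implementation work.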
|
# ? Jul 28, 2022 01:08 |
|
the only really viable alternative to plain lines of text is still text, but sexprs. and then you gotta be a goddamn weenie to do it
|
# ? Jul 28, 2022 01:11 |
|
i always think of visual programming environments as neat but kind of an eternal beginner mode - easily picked up and quickly outgrown. Humans write things and i dont think thats gonna be replaced any time soon.
|
# ? Jul 28, 2022 01:16 |
|
You can make a lot of friendlier systems for domain-specific things with visual programming and metaphors. Macromedia Flash and Director were two examples of software development environments that were stupid productive for some areas, where you could do in minutes poo poo that could take days elsewhere, in no small part because you could bring your development much closer to the final product. But uh, for sure that wouldn't fit well in a multi-language build system that publishes artifacts or whatever in modern-day enterprises.
|
# ? Jul 28, 2022 01:20 |
|
MononcQc posted:You can make a lot of friendlier systems for domain-specific things with visual programming and metaphors. Macromedia Flash or Director were two examples of software development environment that were stupid productive for some areas, where you could do in minutes poo poo that could take days in other areas, in no small part because you could bring your development much closer to the final product. yeah i've worked on many of these and in practice everyone eventually hits a certain skill level and moves to text.
|
# ? Jul 28, 2022 01:21 |
|
like there's nothing wrong with having a beginner mode, in fact its very very good, but there also needs to be an intermediate and expert mode.
|
# ? Jul 28, 2022 01:22 |
|
if you want visual programming you can go do plant automation for plc industrial controls. you will regret this (expressing complex ideas, rules, and relationships graphically is harder than using language; ed tufte makes good money charging ballrooms full of middle managers $1k each for a one-day information design basics presentation)
|
# ? Jul 28, 2022 01:26 |
|
rotor posted:i always think of visual programming environments as neat but kind of an eternal beginner mode - easily picked up and quickly outgrown. Humans write things and i dont think thats gonna be replaced any time soon. to be clear literally all of Expo70's game project is written in unreal's visual programming thing (blueprints) because her dyslexia doesn't let her read C++ without getting immediately tripped up by the "curly boys". like she's written complex aerodynamic simulations and poo poo with visual building blocks connected together with lines, which is just nuts to me. i'm absolutely certain it's the largest body of code ever written in blueprints by now, and it somehow works reasonably well
|
# ? Jul 28, 2022 02:29 |
|
Shame Boy posted:to be clear literally all of Expo70's game project is written in unreal's visual programming thing (blueprints) because her dyslexia doesn't let her read C++ without getting immediately tripped up by the "curly boys". like she's written complex aerodynamic simulations and poo poo with visual building blocks connected together with lines, which is just nuts to me. Ubisoft enters the chat
|
# ? Jul 28, 2022 02:48 |
|
Shame Boy posted:to be clear literally all of Expo70's game project is written in unreal's visual programming thing (blueprints) because her dyslexia doesn't let her read C++ without getting immediately tripped up by the "curly boys". like she's written complex aerodynamic simulations and poo poo with visual building blocks connected together with lines, which is just nuts to me. i think thats great and its a testament to why things like that are valuable. fwiw i can barely read c++ either.
|
# ? Jul 28, 2022 03:40 |
|
I wonder if visual programming would get me to understand programming, because every time i've tried learning python or javascript, i hit a wall fairly early and just cannot comprehend what i'm trying to learn. Is there a recommended thing for learning visual programming? i'm a designer by trade, so I mean, something making more sense to me by virtue of being visual instead seems understandable, but i also write plain text notes a lot when i'm doing research & refinement, so don't know why coding trips me up so bad. math maybe??
|
# ? Jul 28, 2022 03:48 |
|
Gnossiennes posted:
flash is a good one
|
# ? Jul 28, 2022 03:52 |
|
whats that one weird DAW edit: i cant find it but it was this thing where you'd wire up wave generators to stuff, idk its not my field rotor fucked around with this message at 03:59 on Jul 28, 2022 |
# ? Jul 28, 2022 03:54 |
|
max/msp. its not that good. flash has been eol'd by stebe before he died of stupid
|
# ? Jul 28, 2022 04:00 |
|
there are some things that'll generate code from UML diagrams but it's mostly scaffolding code
|
# ? Jul 28, 2022 04:00 |
|
bob dobbs is dead posted:max/msp. its not that good quote:flash has been eol'd by stebe before he died of stupid I meant the authoring environment. I guess it's called "Adobe Animate" now. Largely similar interface tho.
|
# ? Jul 28, 2022 04:02 |