Red Mike
Jul 11, 2011

turby posted:

:words:

I don't find that odd. You have access to INSERT and UPDATE the database, just like you would if you cheated in the game itself. They're not concerned because you're not in a position to do any actual damage beyond adding fake statistics, all of which you could already do without logging into the SQL database directly.

If you want to 'prove' that they should close the hole, update something with a malicious value and see how they react. I'm willing to bet the reaction is going to be 'Good going, you deleted a level, *restore from backups*, now go do it again.'


Red Mike
Jul 11, 2011
Obviously it's because, after chaining together the numerous if statements needed to reject every insecure combination they could think of up to 7 characters, they decided the code was getting too long.

Red Mike
Jul 11, 2011
Honestly, I wouldn't blame any part of that on the software or the developers. It sounds like they've done their best to work within the insane guidelines the hospitals have set, and I fully believe those guidelines are part of why there are so many alerts that just get disabled or ignored. That said, I don't think the article's proposed 'stop the line' mentality would work either, without introducing even more dangers: interrupting the senior manager while he's doing something critical, or the interruptions becoming so frequent that you can't handle the throughput. Unlike on an assembly line, medical matters require very in-depth knowledge to determine "where the screw's gone".

Basically, what I'm saying is that it'll lead to the senior manager being a floater because everyone is short-staffed, and either not realising there's an issue or not wanting to admit ignorance. You can only shift the errors further up or down the chain, and the higher up you go, the fewer staff you have. The real solution is for the lower end of the chain to be aware, knowledgeable, and authoritative enough to act as an effective last line of defense.

Red Mike
Jul 11, 2011

Dr. Stab posted:

Yeah, I'm not trying to speak of "blame" in terms of who to punish. I'm talking about blame in terms of where the error came from and who is capable of fixing it. If the bad specification, rather than negligence by programmers, is the cause of the problems with the software, that still doesn't change the fact that the software has problems, and those problems need to be fixed.

Yeah, that's what I'm talking about as well. I'm saying the developers probably tried to get a lot of these things changed and got blocked by the guidelines/the client who hired them. After enough refusals, they'd stop trying so hard. More importantly, I wouldn't be surprised if they lacked the domain knowledge to know what's important, and if the person coordinating with them wasn't much help in that regard.

Basically, I've seen so many non-critical-domain projects end up designed with even bigger flaws than this, because the client demanded it, or the client didn't explain what matters about it, or the client didn't offer enough feedback to show how people actually use the software. Add in the tons of regulations medical software has to follow and you've got a recipe for design hell unless you have someone on board who knows exactly what the software needs to do and is willing/able to put in the time to help (which no project ever does).

Red Mike
Jul 11, 2011
I know almost nothing of C minutiae, but I'm willing to bet it's something silly like gcc's optimiser doing its best: it sees the assignment to x and runs the assignments ahead of the return statement, because you're not really meant to modify the same variable twice in the same statement.

Or some silliness like what happens when you make it run i++ twice in the same statement.

It's always something like this with C undefined behaviour.
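A minimal made-up sketch of that second case (not the actual snippet being discussed, just the classic sequence-point example):

code:
#include <stdio.h>

int main(void) {
    int i = 0;
    /* Undefined behaviour: i is modified twice with no sequence point in
       between, so the compiler is free to evaluate the two i++ in either
       order, fold them together, or do something else entirely. */
    int x = i++ + i++;
    printf("x=%d i=%d\n", x, i); /* often prints x=1 i=2, but nothing guarantees it */
    return 0;
}

gcc will at least warn about that one with -Wsequence-point (part of -Wall), which is about the best you can hope for with this class of bug.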

Red Mike
Jul 11, 2011
Is the complaint here that they're expecting TypeScript to duck-type (which it can do to a limited degree, with interfaces specifically) but then are surprised when it also tries to enforce strong typing at compile time despite the duck typing? Interfaces define the shape of a class (not the data or logic), and you're specifically defining the two child interfaces to have a shape that includes a property which has only one possible value and isn't optional.

If you try to union the two interfaces (with conflicting shapes), the result is a broken shape, because it would need to contain the same property with two different values at once. A union of two interfaces does not turn each property into a union of the corresponding properties from each interface; that would be insane and counter-intuitive, and the compiler would have no way to figure out the correct types half the time. You can do this yourself though, even though it defeats the point:

code:
interface BaseGlass { beverage: string; full: boolean; }

interface BeerOrWineGlass extends BaseGlass { beer: true | false; }

// Each child interface narrows 'beer' to a single literal value.
interface WineGlass extends BeerOrWineGlass { beer: false; }

interface BeerGlass extends BeerOrWineGlass { beer: true; }

type Glass = BeerOrWineGlass;

const b1: boolean = Math.random() > 0.5;

let thirst: Glass;
if (Math.random() > 0.5)
{
  const test: WineGlass = {
    beverage: 'test',
    full: b1,
    beer: false
  };
  thirst = test;
}
else
{
  const test: BeerGlass = {
    beverage: 'test',
    full: b1,
    beer: true
  };
  thirst = test;
}

console.log(thirst);
This is literally what you're trying to do: have the property "beer" be a union of true and false on the base type that you use to store the object. But instead of making the property "beer" a union, you're trying to make a union of two object types that each define the property with a different type.

Basically, a union is not the same thing as referencing sub-classes by their base class; it is defining a new "type" that is the union of the classes (so it can hold either of the two). Specifically from the docs: "TypeScript will only allow an operation if it is valid for every member of the union."
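To make that docs quote concrete, here's a tiny sketch with types invented purely for illustration:

code:
// A small discriminated union, just to illustrate the rule from the docs.
type Drink =
  | { beer: true; abv: number }
  | { beer: false; vintage: number };

function describe(d: Drink) {
  console.log(d.beer);      // fine: 'beer' exists (as a boolean) on every member
  // console.log(d.abv);    // error: 'abv' only exists on one member of the union
  if (d.beer) {
    console.log(d.abv);     // fine: narrowing on the literal discriminant picks one member
  } else {
    console.log(d.vintage);
  }
}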

That said, the fact that TS allows duck typing like this is an unavoidable horror and I hate it. It's not a foot-gun, just a big siren that goes off whenever you press the wrong button, until you figure out that the similarly coloured button next to it is in fact the right one to press. Over subsequent versions the siren has even started telling you what the problem is and how to avoid it, and switching from one to the other is now pretty much painless (unless the problem is in a module/library).

e: corrected the example, which also highlights why it's a bad idea: you can't avoid defining beer: true or beer: false, because an interface is not the same thing as a class.

Red Mike fucked around with this message at 14:49 on Sep 26, 2022

Red Mike
Jul 11, 2011

LOOK I AM A TURTLE posted:

I'm losing my mind here. The earlier example does work from compiler version 3.5 and onwards, even with only type Glass = BeerGlass | WineGlass and beer: true/false on the interfaces.. I could've sworn I tried it earlier and that it failed, but it works. The change is named very explicitly in the 3.5 documentation: https://www.typescriptlang.org/docs/handbook/release-notes/typescript-3-5.html#smarter-union-type-checking under "Smarter union type checking". The reason it works is because true and false are all the possible values of boolean, so the compiler is essentially able to turn { beer: true } | { beer: false } into { beer: boolean }. It can do the same thing with any other enumerated type when it can see that you've exhaustively enumerated every possible value.

Am I still missing something? Try switching between 3.3.3 and 3.5.1 here: https://www.typescriptlang.org/play...QsWoOxKnuHunWQA. It works in 3.5.1 and on, and says "type 'boolean' is not assignable to type 'false'" in 3.3.3.

It does when the fields exist in all the unioned types, so { type: 'A' } | { type: 'B' } | { type: 'C' } is equivalent to { type: 'A' | 'B' | 'C' }.
Or even this: { foo: 'Foo', bar: 'Bar' } | { foo: 'Bar', bar: 'Foo' } is equivalent to { foo: 'Foo' | 'Bar', bar: 'Foo' | 'Bar' }.

I believe the idea was to have "beer" be a union of true/false -- which is exactly equivalent to the boolean type -- only on the union type Glass and not on the base type BaseGlass. I don't see anything wrong with defining the type that way.

My bad, it looks like they fixed the boolean/exhaustively-enumerated-values edge cases. That doesn't change the fact that the entire approach is trying to do what is basically classes and sub-classing, except via interfaces (duck typing) and unions. Sure, you can do it, but it's not the right tool for the job, as highlighted by the fact that they had to deliberately add handling for these cases and still can't handle the general case. Because it's the wrong tool for the job.
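For reference, a rough sketch of what the post-3.5 compiler accepts (reusing names from my earlier example; I'm going off the quoted release notes rather than having re-tested every compiler version):

code:
// Same shapes as before, minus the base/intermediate interfaces.
interface WineGlass { beverage: string; full: boolean; beer: false; }
interface BeerGlass { beverage: string; full: boolean; beer: true; }
type Glass = BeerGlass | WineGlass;

const b1: boolean = Math.random() > 0.5;

// Pre-3.5 this errors with "type 'boolean' is not assignable to type 'false'";
// from 3.5 on, the compiler sees that true | false exhausts boolean and accepts it.
const thirst: Glass = { beverage: 'test', full: true, beer: b1 };

console.log(thirst);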

Don't get me wrong, in TS you'll all too often end up having to use that particular tool, because an interface is what you have to use (or what a library provides, or what a tool expects, etc.). But it's not a slight against the language that it lets you hammer in a screw and it doesn't work all that well, all things considered. The closest thing to a slight is that interfaces/unions are too readily available for things you should be using classes/other types for, but that's vague and probably not solvable.

e: I'll be honest, the bigger horror is literal types altogether, because they're a hacky fix that's been taken to a ridiculous extreme. There just shouldn't be a way to give an interface property the type false, only boolean.

Red Mike
Jul 11, 2011

quote:

See, the other disgusting thing that OSM did when the machine was first booted was to go into the e820 table, where the BIOS defines what memory is available to the system, and declare ~512MB of it as nonexistent (or "Address Range Reserved.") That means that when Windows begins booting, if the machine has 2GB of memory, it only sees 1.5GB, as if the other 512 wasn't even installed.

I normally love super ingenious hacks that let you get basically a modern thing running on outdated hardware/software. I don't love this; it reads like something made by a single trusted, brilliant engineer who was a year away from retirement and knew they'd never really have to maintain it, just make it look like they could for long enough to leave the company.

Also, as the article itself points out, it's not like the modern thing is all that modern anyway; the start-up time was still slow.

Red Mike
Jul 11, 2011

Macichne Leainig posted:

I mean, the evidence is that the original game is 850mb on disk. It's gotta be some asset packing fuckery to blow it up to almost 10x its size or some other weird poo poo.

This isn't a Unity-specific thing, it's just what happens when you port a game to a console without actually optimising for the platform (or when optimising for things other than storage/download size). Unfortunately, porting from one platform to another is an absolute clusterfuck of a process, and Unity, if anything, helps a ton in making it manageable for a small team. And that small team will still burn a lot of time/budget on getting the port to pass certification and be releasable.

But if that small team isn't also given the time/budget/priority to actually optimise it afterwards, you're left with at least one of: bad/mediocre performance, huge storage/download size, small weird bugs that no other platform has, minimal integration with the platform, missing small features that other platforms have, tons of extra downloads on top of the base game. The cause/solution for each of these is usually different for each game, and each one matters a different amount for each game (e.g. a small singleplayer 2D game probably won't have bad performance, and platform integration is of minimal use to it anyway), so this isn't something that could be fixed at the engine/platform level (i.e. by Unity/Nintendo).

Even for the things that could be engine-level (e.g. asset packing/import/processing), the solutions are never straight wins, they're always trade-offs; some types of games can take a given trade-off easily because they don't care about the downside, but that's not most games, so the engine can't make that decision for you. To top it off, the platform-specific parts of each engine are usually developed in concert between the engine company (e.g. Unity) and the platform company (e.g. Nintendo), which means the work happens at a glacial pace with so many recurring issues that it's honestly impressive it generally works at all.

If instead you chose not to use Unity/Unreal/etc, then you need a large team rather than a small team, and you still have most of those problems and more (as well as requiring tons of platform-specific know-how for each platform for each game).

For that specific game, it looks like it was ported to consoles by an external work-for-hire team. From personal experience, that means they were told to just do it and were otherwise barely communicated with outside of milestone check-ins, the storage/download size was deemed low priority, and by the time it was finished the main studio didn't care because it worked (which I agree with; for a single-player game, console ports are just a way to get a burst of a few extra sales with barely a tail on it).

e: Don't get me wrong, Unity totally is a weird bullshit engine. But in this case I wouldn't blame Unity/Unreal (or even Nintendo) because it's one of those rare cases where the engines are actually offering something that's miles ahead of any alternative in terms of just getting it done. If these engines didn't exist/have Switch targets, then nothing but AAA and tiny indie games would be going onto the Switch ever.

Red Mike fucked around with this message at 21:23 on Sep 26, 2023

Red Mike
Jul 11, 2011

Macichne Leainig posted:

I didn’t say it was all Unity’s fault but go on I guess

My bad, meant to quote ExcessBLarg!'s post.


Volte posted:

it was determined that the Switch hardware didn't play nicely with on-the-fly decompression

Without getting into the weeds, there are a lot of things around asset/texture compression where, yes, the better option on Switch is to avoid it entirely if you can, and a larger storage size is a fair trade-off against an extra month or more of contractor time, especially since it's fixable later via a patch if it ever becomes an issue.


Red Mike
Jul 11, 2011

OddObserver posted:

The flipside is that Windows has ridiculously bad file system performance (and for lots of games, spare CPU to decompress), so bundles can significantly help there.

Yeah, that's basically what I mean by every solution having trade-offs. Compared to Windows though, when you look at solutions to a problem on consoles, they generally all have big drawbacks/caveats that you can only really mitigate with lots of extra dev time, or by not using a feature, or by your game being limited in some way, or by your game just happening to be the right genre/type so that the drawback isn't an issue. That's slowly improving over the years, but Xbox/PS are light years ahead of Nintendo on it from what I hear.
