Anarchist Mae
Nov 5, 2009

by Reene
Lipstick Apathy

DrDork posted:

While true, these are also some of the groups least able to "just wait it out" for fixed hardware. If the fixes cumulatively make a substantial performance impact, their only real option is to simply purchase more capacity. That's it. End of story.

I was just responding to the idea that it's sensationalised.

For me at least it made sense to go Ryzen 1700 as an upgrade last year. More threads were more useful than absolute speed, especially when trying to replicate a complicated server setup locally with virtual machines. The slowest part of web development has always been the database; if I lost performance there I'd notice it right away, and I imagine it would be much the same for anyone with a Xeon workstation.

I actually upgraded from an i3-6100 desktop and also a MacBook Pro that work supplied. Both of these were absolutely excruciatingly slow when dealing with larger website databases. I'm talking about pages taking multiple seconds to render where on the production environment they take less than a tenth of a second.

But many people I know work on similar awful hardware; losing 10-15% of database performance would absolutely loving suck. It's bloody frustrating when you're just trying to iterate on an idea quickly and the page load just drags on forever.

VulgarandStupid posted:

Quick show of hands how many goons are gamers and how many are software devs. What do these Venn diagrams look like?

What does this have to do with anything? Even if there are more gamers here than developers, that doesn't mean anyone is "buying the sensationalism".


DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Measly Twerp posted:

What does this have to do with anything? Even if there are more gamers here than developers, that doesn't mean anyone is "buying the sensationalism".

The objection has mostly been that the majority of people vocally concerned about performance loss are those who admit to primarily gaming or doing office type stuff, or are holding on to ancient hardware, etc. Basically the people least affected are complaining the most.

Everyone here has pretty much acknowledged that the server-side of things took a punch to the face, but those people are generally not the ones complaining about it, oddly enough.

Also, if your dev environment is 100x slower than your production one, that's a good argument for a solid upgrade regardless of recent events.

JawnV6
Jul 4, 2004

So hot ...

Measly Twerp posted:

While all of the YouTubers seem relieved that it isn't going to affect their video rendering times, and gamers are satisfied that it won't affect most games, this still discounts a very large group of professionals: software developers.

The infrastructure that their software runs on will be affected, and so will their workstations where they develop it.

Is it your text editor or the compiler that's thrashing between userspace and kernel mode upwards of 40 kHz?

cinci zoo sniper
Mar 15, 2013




Measly Twerp posted:

and so will their workstations where they develop it.
Tell me more about these workstations, and how software development proceeds on them.

Anarchist Mae
Nov 5, 2009

by Reene
Lipstick Apathy

DrDork posted:

The objection has mostly been that the majority of people vocally concerned about performance loss are those who admit to primarily gaming or doing office type stuff, or are holding on to ancient hardware, etc. Basically the people least affected are complaining the most.

Everyone here has pretty much acknowledged that the server-side of things took a punch to the face, but those people are generally not the ones complaining about it, oddly enough.

Fair enough, I've not really seen anything about it except here, Ars, and a couple of YouTubers who are mostly saying "it's not a problem for games".

DrDork posted:

Also, if your dev environment is 100x slower than your production one, that's a good argument for a solid upgrade regardless of recent events.

Tell me about it. There's just not enough processing power in a dual core to run a website that relies on a lot of services running. Most of my colleagues would be better off running Linux of some variety, except there's still some Mac-specific software that has its fingers right up there with a really tight grip.

Thankfully I think we're starting to see the end of that, as design-focused web apps are starting to reach feature parity with desktop design software.

Khorne
May 1, 2002

Measly Twerp posted:

I was just responding to the idea that it's sensationalised.

For me at least it made sense to go Ryzen 1700 as an upgrade last year. More threads were more useful than absolute speed, especially when trying to replicate a complicated server setup locally with virtual machines. The slowest part of web development has always been the database; if I lost performance there I'd notice it right away, and I imagine it would be much the same for anyone with a Xeon workstation.

I actually upgraded from an i3-6100 desktop and also a MacBook Pro that work supplied. Both of these were absolutely excruciatingly slow when dealing with larger website databases. I'm talking about pages taking multiple seconds to render where on the production environment they take less than a tenth of a second.

But many people I know work on similar awful hardware; losing 10-15% of database performance would absolutely loving suck. It's bloody frustrating when you're just trying to iterate on an idea quickly and the page load just drags on forever.


What does this have to do with anything? Even if there are more gamers here than developers, that doesn't mean anyone is "buying the sensationalism".
I've never found local databases to be a bottleneck, but I can also ssh tunnel to dev databases for larger data sets.

Khorne fucked around with this message at 01:19 on Jan 25, 2018

Anarchist Mae
Nov 5, 2009

by Reene
Lipstick Apathy

Khorne posted:

I've never found local databases to be a bottleneck, but I also can ssh port forward to dev databases for larger data sets.

Yeah, before work went remote-only we used to have an office with a central development server, and then it didn't matter, as all we needed to run locally was a web browser, a text editor, and sometimes Photoshop.

Currently it's more a combination of legacy code and an architecture revolving around caching that cannot be used while developing. For simple applications it's OK, but when you've got 4 threads and every page relies upon 8 or more virtualised services, things are not always so smooth.

Lungboy
Aug 23, 2002

NEED SQUAT FORM HELP

BangersInMyKnickers posted:

We're likely talking 3-5 years for silicon that is resistant to Spectre-style attacks. Meltdown is pretty much moot at this point.

"Later this year" according to Krzanich.

Police Automaton
Mar 17, 2009
"You are standing in a thread. Someone has made an insightful post."
LOOK AT insightful post
"It's a pretty good post."
HATE post
"I don't understand"
SHIT ON post
"You shit on the post. Why."

fishmech posted:

I mean I can sell you a Toshiba laptop from 1998 that isn't "bugged" but I'm not sure you'd like using a 233 MHz Pentium MMX...

People said I was mad for keeping my Atom Netbook. Who is mad now?! (It's still me. I'm mad)

The performance impact in normal user workloads isn't even that big. I'd go as far as to say that in a user scenario, the most workout the average CPU is ever going to see is gaming and you could argue that for gaming, a fast CPU isn't even nearly as important as a recent GPU. Lots of games don't crunch lots of numbers and even the few that do are usually no match for even semi-recent systems. It's often a lot more important how fast you can push stuff into the graphics card, which makes sense seeing as modern graphics cards are basically computers by themselves. Lots of indie games work just fine on 5 year old low-end machines, I know that from personal experience.

This thing is not as big or bad if you aren't a professional computer toucher. Also, I don't know any current numbers (and they're a bit volatile anyway, and vary by region), but I was wondering recently whether it might be worth it to just not have the computers doing the heavy lifting at home anymore. It might be worth it to shove off heavy computing work into cloud instances and just not bother with the hardware infrastructure yourself, depending on what you do. It could even work for gaming! But I don't want to go too far out on a limb; I'm not sure what current prices are, and you'd certainly need a good internet connection.

Police Automaton fucked around with this message at 13:46 on Jan 26, 2018

repiv
Aug 13, 2009


That doesn't say the chips this year are fixed in silicon, just that they have "built-in protections". Actually fixed silicon is coming in "future chips" at some indeterminate point.

He probably just means that new chips released this year will have the hacky microcode mitigations included from day one, which could be spun as "built-in protection".

mystes
May 31, 2006

repiv posted:

That doesn't say the chips this year are fixed in silicon, just that they have "built-in protections". Actually fixed silicon is coming in "future chips" at some indeterminate point.

He probably just means that new chips released this year will have the hacky microcode mitigations included from day one, which could be spun as "built-in protection".
No, that's just the way news articles are written. The logical connection is unclear as written, but in reality both those sentences are reporting on one connected thing he said in the actual conference call:

https://seekingalpha.com/article/4140338-intel-intc-ceo-brian-krzanich-q4-2017-results-earnings-call-transcript

quote:

We're working to incorporate silicon-based changes to future products that will directly address the Spectre and Meltdown threats in hardware. And those products will begin appearing later this year.

mystes fucked around with this message at 14:03 on Jan 26, 2018

repiv
Aug 13, 2009

Oh that's surprising. Is physically fixing the bugs within 12-18 months of disclosure at all feasible, or were Intel quietly working on a fix before Google independently found the flaws? :thunk:

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Police Automaton posted:

It might be worth it to shove off heavy computing work into cloud instances and just not bother with the hardware infrastructure yourself, depending on what you do. It could even work for gaming! But I don't want to go too far out on a limb; I'm not sure what current prices are, and you'd certainly need a good internet connection.

There has been some movement in this direction, and NVidia (amongst a few others) have begun to try the remote-gaming thing, with mixed success. Bandwidth and response times are still major technical hurdles to be overcome, and a lot of people aren't super keen on the whole "own nothing rent everything" business model.

The fun part is that the performance hit from these issues is actually quite a bit more serious on the exact cloud systems you'd be using here.

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.
Without details it is really hard to gauge what he means by fixes. Does he mean they will completely resolve the problem, or is it just a mitigation along the lines of the microcode fixes? I would assume the latter (which is still good, but could mean anything from hugely to barely significant). I would guess it's a mitigation, not a complete solution, based on the timeline, but more information is needed.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

repiv posted:

Oh that's surprising. Is physically fixing the bugs within 12-18 months of disclosure at all feasible, or were Intel quietly working on a fix before Google independently found the flaws? :thunk:

Not a chance in hell. They're bundling the microcode changes into existing silicon in the pipeline. Spectre-style attacks were theorized back in 1992 and the industry has blasted blindly forward regardless; this requires a fundamental redesign of the architecture that no one has done yet.

Gyrotica
Nov 26, 2012

Grafted to machines your builders did not understand.

mystes posted:

No, that's just the way news articles are written. The logical connection is unclear as written, but in reality both those sentences are reporting on one connected thing he said in the actual conference call:

https://seekingalpha.com/article/4140338-intel-intc-ceo-brian-krzanich-q4-2017-results-earnings-call-transcript

Technically everything to do with chips is silicon based.

mystes
May 31, 2006

Gyrotica posted:

Technically everything to do with chips is silicon based.
Yes, it may be meaningless; I have no knowledge of what they mean when they're saying it's going to be fixed in silicon, and whether or not that simply refers to the preloaded version of the microcode. I was just responding to the idea that the part of the quote about making fixes in silicon was separate from what is supposed to be happening later this year.

EoRaptor
Sep 13, 2003

by Fluffdaddy

repiv posted:

Oh that's surprising. Is physically fixing the bugs within 12-18 months of disclosure at all feasible, or were Intel quietly working on a fix before Google independently found the flaws? :thunk:

I think the fix will come down to the classic cost vs time vs quality triangle, and if Intel is choosing time and quality, the fix will be fine, if somewhat expensive*. As long as overall CPU performance doesn't suffer versus the previous generation, even if it doesn't improve by much, and Meltdown and Spectre are both blocked, then the market will accept it.


* Expensive will probably come down to how much silicon space they end up spending on it. There is actually empty space and other 'padding' on current CPU designs, so if they can make use of that then the manufacturing cost won't change meaningfully and you only need to eat the development costs. If they need to grow the chip, then things are less clear about where the compromises will come.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

repiv posted:

Oh that's surprising. Is physically fixing the bugs within 12-18 months of disclosure at all feasible, or were Intel quietly working on a fix before Google independently found the flaws? :thunk:

I would say that Meltdown is quite likely to be fixable that fast.

The flaw exploited by Meltdown is based on speculative execution. All modern high performance CPUs guess whether a branch will be taken before actually knowing. Because the guesses are sometimes wrong, they have features for unwinding processor state to whatever it was just before any guess. The typical method of implementing this is that instructions write results into temporary storage, and these temporary results are eventually promoted to permanent (aka "architectural state"). (Or thrown away, if they were from a chain of events that never should have happened.)

In some speculative CPU designs, load results from addresses owned by another address space are allowed to exist in this temporary storage. This was formerly thought to be good enough to provide ironclad address space protection, because as long as illegal load results are killed before being committed to architectural state, everything's fine, right? Meltdown violates that assumption by using cache timing side channels to exfiltrate data from a soon-to-be-killed speculative execution chain over to another thread which can commit the data to architectural state.

(Re: "formerly thought to be good enough", this is why several ARM designs also Meltdown. AMD got lucky!)

The fix for this doesn't necessarily require significant redesign. You don't have to get rid of the side channel, you just have to never let the data load in the first place. And that's quite possible, since (for reasons I won't go into here) the processor knows whether a load should be allowed before the load data can come back from the memory hierarchy. This makes it possible to add a small amount of logic to set the result of any illegal load to 0. On a processor with that feature, the Meltdown code will run, but the only data you can get back from the shadow realm is zeroes.

If the fix requires changing a critical timing path, things might be more difficult, but it's plausible that it doesn't.

If Intel's lucky they can do this fix with only a metal layer change. There's new logic to add, but that's why you put in spare gates -- logic gates which aren't connected to anything in the original design, but can be wired up as needed with a metal layer change. Metal only changes are faster and cheaper, especially if you can keep it down to only a few layers changed (reduces the number of new masks you need to make).


I have not tried to understand Spectre well enough to give an informed opinion on how easy mitigations for it might be.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

DrDork posted:

Everyone here has pretty much acknowledged that the server-side of things took a punch to the face, but those people are generally not the ones complaining about it, oddly enough.
Most of us who have to deal with the Intel security issues professionally, with large-scale infrastructure, have had even busier than expected schedules this year and haven't had time to complain. Only Linus Torvalds so far has been saying anything of material substance, and his comments, while oftentimes inflammatory, are almost always backed by irrefutable technical grounds. January is when people come back from vacation and new releases often go out the door for many companies, and for ops that is going to be hell on earth, especially if your operations automation is not up to snuff (which is the vast majority of places, even with cloud providers and automation being so common now, because most software in existence is by definition legacy and thus follows extremely stateful legacy architectural practices). I only had several dozen machines to patch, but it took my entire team nearly two weeks of focused effort because most of them were basically designed as if they'll never be turned off, and sometimes the patches would cause kernel panics for us.

NewFatMike
Jun 11, 2015

Microsoft kinda Ctrl+Z-ing the buggy Meltdown/Spectre updates:

https://www.theverge.com/2018/1/29/16944326/microsoft-spectre-processor-bug-emergency-windows-update-reboot-fix

The Verge posted:

Microsoft has been forced to issue a second out-of-band security update this month, to deal with the issues around Intel’s Spectre firmware updates. Intel warned last week that its own security updates have been buggy, causing some systems to spontaneously reboot. Intel then buried a warning in its latest financial results that its buggy firmware updates could lead to “data loss or corruption.”

Lol

JawnV6
Jul 4, 2004

So hot ...

necrobobsledder posted:

Only Linus Torvalds so far has been saying anything of material substance and his comments, while oftentimes inflammatory, are almost always backed with irrefutable technical grounds.

In that latest exchange, his irrefutable technical grounds consisted of going off half-cocked about the wrong primitive. It looked like he spent far more time crafting the comments about Kool-Aid and garbage patches.

Mr Shiny Pants
Nov 12, 2012

JawnV6 posted:

In that latest exchange, his irrefutable technical grounds consisted of going off half-cocked about the wrong primitive. It looked like he spent far more time crafting the comments about Kool-Aid and garbage patches.

Dude, really? I think he made a fair point.

GRINDCORE MEGGIDO
Feb 28, 1985


What's the deal with Intel's comments a while ago about cutting out some of the old legacy instructions?

They will be back from this meltdown thing with a vengeance eventually, I just wondered if there was anything particularly interesting on the future horizon. Like a leaner architecture.

JawnV6
Jul 4, 2004

So hot ...

Mr Shiny Pants posted:

Dude, really? I think he made a fair point.
In this exchange? He's handwaving so furiously he mixes up two of the new barriers and is falling back on invective to have his wishes done without bothering too much about the details. "Raarrr garbage patches" doesn't strike me as a fair point.

GRINDCORE MEGGIDO posted:

Like a leaner architecture.
Lean in what sense? Which of the programs you currently run would you be willing to do without?

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:

Hasn't shown up in WSUS or SCCM yet, and lol I'm not manually installing a patch on all our workstations.

Worth keeping an eye out for though.

GRINDCORE MEGGIDO
Feb 28, 1985


JawnV6 posted:

In this exchange? He's handwaving so furiously he mixes up two of the new barriers and is falling back on invective to have his wishes done without bothering too much about the details. "Raarrr garbage patches" doesn't strike me as a fair point.

Lean in what sense? Which of the programs you currently run would you be willing to do without?

Why would you need to do without anything? Isn't emulation possible?

JawnV6
Jul 4, 2004

So hot ...

GRINDCORE MEGGIDO posted:

Why would you need to do without anything? Isn't emulation possible?

I still dunno what you mean by "lean" here, but lopping off 8 transistors in a decode unit and slapping on an expensive mechanism to trap any attempt at usage and fall back to software emulation (which is going to cost >8 transistors) doesn't strike me as a clean win.

But fine, which of your programs that you currently run would you mind going 2x slower on the next Intel chip?

repiv
Aug 13, 2009

Didn't AMD64 kill support for 16bit code when running on a 64bit OS? So in theory if Intel dropped support for 32bit OSes they could strip out the legacy 16bit stuff, but I don't know if that would be worth the trouble.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I might be wrong, but you could argue that even entry-level 64bit computers are powerful enough to fully emulate a 16bit CPU fast enough to still run whatever hosed up data-entry DOS apps are still out there.

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

repiv posted:

Didn't AMD64 kill support for 16bit code when running on a 64bit OS?

It stops supporting a particular way of doing that, but it could/can still be done in alternate ways. Microsoft chose not to revise the NTVDM to handle still running classic 16 bit DOS/Windows stuff in 64 bit mode.

I'm pretty sure all of the instructions are still supported in 64 bit mode, but it's the environment they're executed under for compatibility that is no longer supported directly: the 16 bit real mode, and certain aspects of how 32 bit protected mode DOS and Windows things were done, which the NT kernel needs handled in different ways.

Kazinsal
Dec 13, 2011



The 386 introduced a few instructions to enter something called virtual 8086 mode, which was basically an early virtual machine monitor framework for virtualizing a 16-bit real mode system in a 32-bit protected mode environment. Usually in real mode, when the CPU is instructed to access hardware or perform a major state change (software or hardware interrupts, interrupt flag enable/disable, HLT, etc.), it just does it. In virtual 8086 mode, however, it traps out to the 32-bit protected mode operating system's virtual machine monitor so the VMM can do the work safely in a protected environment by either accessing the hardware on behalf of the 8086 virtual machine or by emulating it. Once you enter 64-bit mode (either compatibility mode or long mode), though, you can't use any of the virtual 8086 mode instructions. You can use a bit of a roundabout method of opening a VMX instance (basically spin up a hypervisor and virtual machine) in real mode, but then you need to basically become an EPT hypervisor to do so.

NTVDM on x86 always used virtual 8086 mode. There was a full 8086 emulator that NTVDM used on Alpha, MIPS, and I think PowerPC, but back then it was slow as all hell since it was an emulator and not a hypervisor (also because the way NTVDM is invoked is a real mess, but that's irrelevant to this story).

isndl
May 2, 2012
I WON A CONTEST IN TG AND ALL I GOT WAS THIS CUSTOM TITLE

Combat Pretzel posted:

I might be wrong, but you could argue that even entry-level 64bit computers are powerful enough to fully emulate a 16bit CPU fast enough to still run whatever hosed up data-entry DOS apps are still out there.

There's a 16-bit Windows game I sometimes want to go back and play but then I remember I'd have to install a VM first. :effort:

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Wow, that's kind of awkward. A long while ago I reset settings and set up an overclock. Today I went into the BIOS for something else and noticed the top saying CPU 4000MHz and Cache 1200MHz. Apparently I've been riding on a throttled uncore for god knows how long. I've no idea how that happened. It said auto.

--edit: Might explain what I considered mediocre performance in GTA5.

Anime Schoolgirl
Nov 28, 2002

isndl posted:

There's a 16-bit Windows game I sometimes want to go back and play but then I remember I'd have to install a VM first. :effort:

dosbox

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

That's installing a VM.

Anime Schoolgirl
Nov 28, 2002

fishmech posted:

That's installing a VM.
the fisher-price toy of vm

isndl
May 2, 2012
I WON A CONTEST IN TG AND ALL I GOT WAS THIS CUSTOM TITLE

When I say Windows I mean full native, windowed GUI and mouse-driven interface and everything. DOSBox was pretty explicit about being for DOS games only.

SamDabbers
May 26, 2003



Windows 3.11 runs in dosbox, so your 16-bit Windows games should too


Anarchist Mae
Nov 5, 2009

by Reene
Lipstick Apathy
Windows 1.0 also runs in dosbox, and is the first version of Windows to support snapping windows side by side.
