Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
I've been hunting, and I haven't found any information on this:

Aside from the process size narrowing it down to a group, is there any way to tell which fab a chip came from?

It doesn't look like there's enough information in the CPUID to tell that, so is the only option to pull the heatsink and check the package?
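
For reference, here's roughly what CPUID actually exposes - a quick C sketch (GCC/Clang on x86, using <cpuid.h>). It gets you family/model/stepping and the brand string, but nothing that identifies the fab or assembly site, which is why I'm asking:

```c
/* Sketch: dump what CPUID exposes. Design revision info only -- no fab ID. */
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned eax, ebx, ecx, edx;

    /* Leaf 1: family/model/stepping (extended fields folded in, which is
     * fine for family 6/0xF parts). */
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    printf("family 0x%x  model 0x%x  stepping %u\n",
           ((eax >> 8) & 0xf) + ((eax >> 20) & 0xff),
           ((eax >> 4) & 0xf) | ((eax >> 12) & 0xf0),
           eax & 0xf);

    /* Leaves 0x80000002-0x80000004: the 48-byte brand string. */
    char brand[49] = { 0 };
    for (unsigned i = 0; i < 3; i++) {
        __get_cpuid(0x80000002 + i, &eax, &ebx, &ecx, &edx);
        memcpy(brand + 16 * i +  0, &eax, 4);
        memcpy(brand + 16 * i +  4, &ebx, 4);
        memcpy(brand + 16 * i +  8, &ecx, 4);
        memcpy(brand + 16 * i + 12, &edx, 4);
    }
    printf("brand: %s\n", brand);
    return 0;
}
```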


Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

JawnV6 posted:

Why would the package have the fab information on it? They're different sites.

I'd assume the package would document both the packaging site (moving to 80% Vietnam shortly) and the fab the batch comes from. Documentation is critical for defect management.

I say "I'd assume" because I don't have a system I can tear down to check right now, hence the question.

Although I suppose GIS may have something clear enough to read.

Edit: And yes, wafer transport is pretty neat. I grew up near the East Fishkill plant; both my parents worked there. My high school sat between the original and expansion sites, and I got to go to a few presentations by guest lecturers.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Combat Pretzel posted:

That's practically impossible, because there's no way to entirely track all pointer references to the DLLs being updated, and on top of that, if active data structures and locations of global static variables mismatch between the active DLL and the one to be switched in (which will be 99.9% the case), all affected apps will crash.

If only there were a CPU feature that allowed different processes to look at the same address and see different contents. Sarcasm aside, it works pretty damned well on Linux. I'm running bleeding-edge on my dev box to catch things before I have to deal with them in production, so I'm eating a libc.so change every upgrade - used by approximately everything at all times.

The Linux page cache is tied to an inode (a unique file ID) rather than a filename, so when you use an atomic rename to replace a file, both copies stay in RAM. New execs get the new version; running code keeps the old. The old page-cache entries and the now-ghost file are refcounted, and when all users exit they're cleaned up. Do all the disk I/O, then restart services cleanly.
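
The userspace side of that replace is just the classic write-temp-then-rename pattern - a minimal sketch (the paths and the actual write loop are placeholders):

```c
/* Minimal sketch of replacing a shared library atomically: write the new
 * file beside the old one, fsync it, then rename() over the original.
 * rename(2) is atomic, so anything opened or exec'd afterwards sees the new
 * inode, while running processes keep their old mapping until they exit. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *tmp    = "/usr/lib/libfoo.so.1.new";  /* hypothetical paths */
    const char *target = "/usr/lib/libfoo.so.1";

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0755);
    if (fd < 0) { perror("open"); return 1; }

    /* ... write the new library image into fd here ... */

    if (fsync(fd) < 0) { perror("fsync"); close(fd); return 1; }  /* on disk before the swap */
    close(fd);

    if (rename(tmp, target) < 0) { perror("rename"); return 1; }  /* the atomic step */
    return 0;
}
```

Package managers do essentially this, then restart services at their leisure.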

Active data structures are per-process and continue to use the same code they started with. "Global statics" could mean a few different things - if you mean static data in the library that the processes use, it's still the same from start to finish. Even if it lives at a different location in the new library, a running process still sees the old one.

I say 'Linux' in particular rather than Unix, since the ability to handle an upgrade cleanly requires a bit of finesse that's entirely dependent on the vendor.

Maybe someone better versed in the NT kernel can explain why it's impossible to replace DLLs on the fly in Windows; I thought that's what shadow copy was specifically meant to achieve.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
When you push both chips to their OC wall, is Skylake significantly faster than Sandy Bridge? I'm continually amazed at how much life is left in that chip. Hell, I only bothered going from 4.2 to 4.5GHz this month - I never felt the need to bump the voltage off stock.

Dumb question - I'm pretty sure my 2500K is running my 1866 RAM at 1600, but I'm not sure. (CPU-Z says it's running the XMP profile, MSI Control Center for overclocks says 1600, and the Windows Experience Index knocks RAM down to 7.6 when a similarly equipped machine scores a 7.9.)

Anything that separates out a RAM benchmark so I can tell if I have a problem?

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

JnnyThndrs posted:

I believe CPU-Z will give you accurate readings on your RAM timings/speeds

CPU-Z is even more confused:


So it's maybe running 1866 with fail-safe timings?

Except the MB manufacturer and multiple random benchmarks disagree. That's why I'm looking for something definitive to test with to see WTF is really going on here.

Gwaihir posted:

Absolutely no question it is.
(But the gains are mostly not seen in games)

I went looking and found this showing the improvement mostly comes from the DDR4 memory controller. It's games, but they went specifically looking for games where they could hit CPU limits. The whole core architecture has scaled really well with improved bandwidth, so a 4.6GHz 2500K with 2133 RAM is within spitting distance of a 4.6GHz 6500K with 2400 - but feed the 6500 DDR4-3200 and it's not even close. Which makes sense - I can't imagine there's a lot of room left for improvement in an ALU at this point, so you make it faster by preventing stalls. It'd be interesting to see a 6500K with DDR4 underclocked to 2133 for comparison.

So the performance improvement is due to better external buses: faster and more plentiful PCIe, more RAM bandwidth, support for NVMe/SATA Express/USB 3.1, etc. I'm not blowing off the gains by saying that - to the contrary, those are the places where real work gets bottlenecked, more than tightly looped CPU benchmarks do.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

craig588 posted:

Shuffle your memory sticks around, you're running in single channel. I wouldn't worry about the timings at all (you could always manually set them if you want, but eh) but dual channel is very important.

BIG HEADLINE posted:

What he means here is have your sticks either in Slot 1 & 3 or 2 & 4. 1 & 2 and 3 & 4 are single-channel.

EdEddnEddy posted:

As the above say, get the sticks organized in the correct slots (if you have 4, they should be color coded, or like said above, 1/3 2/4) to get dual channel working, also you have 2 XMP for your command rate. While if you are overclocked, getting 1T to work is tricky sometimes, if you can do it in the bios (Set the other XMP if it gives your the option, or if not, manually setting the Command Rate to 1T and see if you boot) those 2 changes combined should give you a pretty nice boost.

Jesus I'm a dumbfuck. Here's what actually happened: because both DIMMs were on the same channel, the motherboard said "you're obviously too stupid to overclock, I'm going to run the memory at something I know is safe." I've had this wrong for FIVE loving YEARS: bought 12/9/2011 from Newegg, built it immediately, and haven't taken it apart since.

I put them in different-colored slots because same color = same channel, right? Nope! So now it's slots 1 & 3, and suddenly the BIOS options unlocked and I could select 1866 and XMP profile 1. I'd say "geez, what a difference," but I'd have to feed it an x264 encode to be sure, and I didn't really benchmark it before I switched them.

I don't think I would have figured out there was a problem if it weren't for dicking around with this "UserBenchmark" thing. It shows you where each component lands vs. other people with the same component - I was getting 17th percentile compared to everyone else who owns DDR3-1866 RAM. Fixing it put me at the 94th. Obviously a lot of people buy 1866 because the number is higher, then put it in systems that run it at 1600 by default.

Thanks guys, I feel really stupid now. Apparently I'm just bad at computers despite having built my own since '94.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

EdEddnEddy posted:

Whats your CPU-Z look like now? Did you check the bios to see if the ram is running in 1T as well? That alone will give you another good boost if you aren't already.
And welcome to your free upgraded PC lol. Should be a lot better now for more than just Encoding. Now to Overclock and get another good boost.



It's been running a 4.5GHz OC pretty stably. I tried 1T, but Prime95 large FFTs (which hit some RAM) are starting to throw errors. I need to nail down whether it's the CPU or the RAM, then I'll know what's stable.

Hmm - is there an in-Windows memory test? I don't think good old memtest86 will be useful, because the BIOS OC is different from the running one. I guess I could try to match them.

Edit: I'll follow up in the OC thread, this has really wandered off-topic.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Furnaceface posted:

It's new and I'm am idiot. It was covered up, just had to peel the film off. Can you tell it's been a few years since I've done this? :downs:

I once had a whole batch of IBM workstations (pre-Lenovo buyout) that showed up with the plastic still covering the thermal compound. As you can imagine, an air gap does not make for great cooling.

The poor sods in the accounting department were actually using the loving things, they couldn't have been much faster than a 386. I didn't find out about it until the complaints reached my department and I popped one open - and started laughing.

Needless to say, everyone was very impressed with my "performance tuneup".

You really don't need much - I had something like a 1/4 oz tube that lasted me for years. Nice for those "Oh gently caress, I've got the wrong mounting kit for this heatsink, I need to go back to the previous one that I just cleaned all the compound off of." moments that happen from time to time.

My current tube (1/2 oz of Arctic Silver I got for a few bucks) I expect to lose long before it runs out.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

silence_kit posted:

No way, material cost per wafer has to be O($100). I happen to know that plain 100mm electronics grade silicon wafers in small volumes are ~$20 each (this is actually incredibly amazing by the way--electronics grade silicon may be the purest material known to man, and yet it is so cheap). Intel, although they are buying much bigger wafers, probably are able to get a much better price/area than I could.


100mm wafers might as well be free. The problem is you get a compounding cost increase with every step up in size. As of 2014, 300mm wafers were running $400 and 450mm was looking at $600-800. It's hard to grow a perfect 300 or 450mm ingot, and they're hard to slice cleanly, which means thicker slices and more post-processing to polish them down. The cost per square inch jumps up about 50% per step, with a big spike when a size is new that settles down to roughly a 1.5^generation curve.

That's just the silicon. I'd consider all the chemical processes involved in production to be material costs as well, since there's some amount of loss on each step.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

silence_kit posted:

I just priced 300mm plain Si wafers on a website which sells wafers for electronics to scientists and it is $80 per wafer for a box of 25. Where are you getting the $400 number?

Maybe your $400 number includes the epitaxy cost. That sounds high to me though. People have told me that epitaxy is expensive, and I understand why it is expensive if you order something custom as a one-off, but no one has ever explained to me why it has to be expensive in volume.

Then I misread that as wafer cost when they were talking about the total cost of processing. The materials cost is a lot lower than I thought, then.

So complete processing does cost more per square inch as you go to larger wafers. I'll try to find a better source on materials costs, since the majority of total cost will be capital and this whole discussion was about "aside from capital, what are the costs."

Harik fucked around with this message at 06:02 on Dec 23, 2016

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Paul MaudDib posted:

The socket isn't listed in the benchmark at all, and it's the subject of some debate online. Personally I don't see how you can cram 6 cores onto a socket designed for 4 without at least a new chipset if not a new pinout too, I just don't see these working on Z170/Z270.

It'd be nice to be proven wrong, but...

And since Intel will be switching sockets again when Cannon Lake launches, this is the ultimate dead-end socket. It's literally going to be used for one gen and then thrown away.

I suppose it could be done, but it'd be a weird setup, with some cores needing to go through the others for memory access to keep the physical topology the same. It would absolutely need a firmware update, unless they did something nuts like keeping two cores idle until the OS booted them.

So X299 platform then? There's a chipset in search of a reason to exist.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

BIG HEADLINE posted:

They'd have honestly been better off lasering two cores off a 6-core Sky-X and calling them the 7780 and 7790, retaining the 'full' 28 PCIe lanes and full X299 support. And they'll probably end up doing that in a few months as a new 'budget' option. :rolleyes:

There'll probably be a lawsuit over the big fat asterisk on the X299 platform. "We've got 4GB of RAM but you can only really use 3.5" and "we've got 28 PCIe lanes but only 16 work most of the time" sound awfully alike to a non-technical person, especially when there are reams of marketing material. Hope they didn't miss an asterisk on anything printed anywhere.

There's also the upcoming fun of discovering that it's not even really upgradable, because when some poor sucker does buy an i9 and drop it in there in a year or so, he won't wave the chicken around counterclockwise five times, and whoops, there goes the magic smoke - because someone let the FIVR proponents put it on a top-end chip instead of mobile, and only on some of the parts on the same socket, and it causes a complete failure instead of gracefully switching.

E: Is there anything extra lasered off on a 7700K compared to a 7700? They used to nuke a bunch of things when they unlocked the multiplier.

E2: According to the features they report, the only flag the K is missing is "Trusted Execution Technology". Nice. I remember having to decide between overclocking and VT-d on Sandy Bridge.

Harik fucked around with this message at 06:30 on Jun 25, 2017

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
Older server question:

Anyone know how CPU compatibility for LGA1366 works? Are the L-series, E-series, and X-series separate, or can I upgrade an E5620 to an X5640? Older hexacores are so cheap right now.

Going in one of these old servers.

At $110 for the CPU and $90 per 16GB of ECC RAM (x3 for triple channel), it seems like a cheap way to dump some more horses in the box. Unless those used-server boards for $600 are A) still around and B) the same form factor... they had 128GB of RAM on them, I think. Anyone remember where those were?

It was natex.us, and they're E-ATX, so they won't fit in a proprietary Supermicro rack. Too bad.

Might be worth buying just for the sixteen 8GB DDR3-1333 registered ECC sticks, though.

Harik fucked around with this message at 06:01 on Jun 27, 2017

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

canyoneer posted:

Yeah I don't understand the complaint about upgradeability. Upgrading a GPU makes sense during the life of a system, but when it's time for a new processor, you probably want a new everything-else too :shrug:

It's a thing you used to do a lot more than you do now. For a while AMD and Intel even shared a socket, and it was something you could upgrade a few times over the life of a board. I don't think anyone cares that much anymore - there's basically only one choice of chipset if you have a K part, and all the interesting features come from the chipset, so the board itself isn't a huge selling point on its own.

Zen+ might be interesting, since AMD says they're going to keep it on the same socket. If it's a compelling enough upgrade, I think you'll see people doing that instead of buying entire new systems. A used-CPU market for a current-gen motherboard isn't something we see much of anymore.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
Computer archaeology question time:

I'm futzing around with old server hardware - specifically, an X3450 Xeon on a Supermicro X8SI6-F board.

I have no idea what kind of RAM this thing can take, because the support list makes no sense to me. Apparently, even though it takes registered ECC DDR3-1333, it can only run at DDR3-800 if you put in anything more than 8GB, for... reasons?

And 16GB DDR3 sticks exist, but it caps out at 8GB per DIMM because an extra pin for the address lines was too much to ask for.

And there are single-, dual-, and quad-rank DIMMs, each with its own capacity limits (and speed limits).

The gently caress was everyone smoking back then?

http://ftpw.supermicro.com.tw/products/Memory/3400_support.cfm?pname=MBD-X8SI6-F&DIMM=6

E: Apparently Intel memory controllers were bad back then, and until Ivy Bridge-E they were super weird about what RAM they would take. AMD stuff just worked with all the standard sizes.

What I can't figure out is whether it supports dual-rank 8GB modules (4GB/rank) or only quad-rank for 8GB - and quad rank runs at half the speed.

E: Nope. I forgot how bad computers were back then. 16GB of DDR3-1333, or 24/32GB at DDR3-800. That's just awful.

Harik fucked around with this message at 14:11 on Jul 19, 2018

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
History question (asked earlier, but I have more details now):

I have a Lynnfield Xeon X3450. Intel is extremely clear that the IMC can only address 1 or 2GB per rank. This is well documented on ark.intel.com, and everywhere you look there's a 2GB/rank limit until Ivy Bridge.

This is just A Fact on the internet.

Counterpoint: I have a pair of these attached to a Xeon X3450 and they work fine, all 8GB recognized.


The module clearly has 5 chips on one side and 4 on the other, which matches the 9 chips in the datasheet, and SPD reports the same part number as the sticker, so it's not re-labeled.

How is this possible? I want to put more RAM in but now I have no idea what will actually work and what is documented.

I'm wondering if the address lines were there all along, and 2GB/rank (256M x8, 9 chips) was just coming out at the time while 4GB/rank (512M x8) was completely future tech. 8GB/rank, maybe? There are a lot of those around nowadays that could be underclocked from 1600 or 1866.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Cygni posted:

Bigger memory sticks, especially dual-ranked, strain the IMC and increase the likelihood of errors. AMD handled this on Ryzen by having a sliding scale: the more sticks and ranks, the slower the officially rated speed was.




Intel prefers to just only support one speed number, so they basically under-promised the max that the IMC could actually support to ensure that they could hit the speed number they promised. Lynnfield on desktop, for example, could only officially take 16gb on paper but many/most/all of them would run 32gb at full speed without too many issues.

So its not too crazy that your chip can support above the max, but Intel's official position is you are outside the box.

Lynnfield/Nehalem Xeons have the same deal for single, dual and quad rank (1333, 1066, 800 respectively). But there's a difference between signal integrity (too many ranks loading the line) and address traces that physically don't exist.

Literally everyone says it's impossible for this to work because there physically aren't enough address lines on the IMC to address larger chips.

Intel Xeon X3400 datasheet:


Wikipedia on DDR3 SDRAM:

quote:

Because of a hardware limitation not fixed until Ivy Bridge-E in 2013, most older Intel CPUs only support up to 4 gibibit chips for 8 GiB DIMMs (Intel's Core 2 DDR3 chipsets only support up to 2 gibibits). All AMD CPUs correctly support the full specification for 16 GiB DDR3 DIMMs.

15 row bits * 10 column bits * 8 banks * 8-byte data bus = 2GB/rank. The maximum supported configuration is 256M x8 chips, and my modules are 512M x8. There's a 16th row line coming from somewhere that isn't supposed to physically exist on this processor.
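
Spelled out (standard DDR3 addressing arithmetic, nothing vendor-specific):

```latex
\underbrace{2^{15}}_{\text{row addrs}} \times \underbrace{2^{10}}_{\text{col addrs}} \times \underbrace{8}_{\text{banks}} \times \underbrace{8\,\text{B}}_{\text{64-bit bus}} = 2^{31}\,\text{B} = 2\,\text{GiB/rank} \quad (256\text{M}\times 8\ \text{chips})

\underbrace{2^{16}}_{\text{row addrs}} \times 2^{10} \times 8 \times 8\,\text{B} = 2^{32}\,\text{B} = 4\,\text{GiB/rank} \quad (512\text{M}\times 8\ \text{chips, i.e. it needs that 16th row bit})
```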

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

redeyes posted:

I dunno about that.. the solder WAS liquid metal.

Tin is a fairly poor thermal conductor, at something like 1/6th of what copper can do. The main reason for de-lidding is to reduce the amount of material between the silicon and the copper heatspreader. If there's a fat layer of tin plus an aluminum spreader that can't be removed, it's not going to be as good as a super-thin layer of liquid metal going directly to copper.

It's still better than packing the gap between the die & spreader with whatever, but direct copper contact is going to be hard to beat.

E: Thermal conductivity (W/m·K) of various metals:
Tin (solder): 62-68
Aluminum: 204
Copper: 386
Gallium (basis of liquid-metal alloys): 41
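
For scale, the per-area thermal resistance of a layer is thickness over conductivity. The layer thicknesses below are made-up-but-plausible numbers, not measurements - the point is just that a thin layer of a worse conductor can still beat a thick layer of a better one:

```latex
R'' = \frac{t}{k}:\qquad \frac{100\,\mu\text{m solder}}{65\ \text{W/(m·K)}} \approx 1.5\ \text{mm}^2\text{K/W} \qquad\text{vs.}\qquad \frac{25\,\mu\text{m liquid metal}}{41\ \text{W/(m·K)}} \approx 0.6\ \text{mm}^2\text{K/W}
```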

Harik fucked around with this message at 17:00 on Sep 8, 2018

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Paul MaudDib posted:

Yeah, this. Lower resolutions happen to be easier for the GPU to run, but talking about "a CPU that is good at 1080p" is actually a misnomer: what you actually mean is a CPU that is good at high refresh rates, it's just that it's easier for the GPU to drive high refresh rates at lower resolutions. If you are CPU bottlenecked then your system that does 100fps at 1440p will also do 100 fps at 720p or whatever - there is the same amount of game logic to run regardless of resolution.

It's more sensible to just cut out the whole "well lower resolutions are easier to drive and..." bit and just say "this CPU is good for 200 fps in title X". Whether or not you hit that will of course depend on your GPU as well, but graphics aren't the part that's running on the CPU, so it's a little nonsensical to talk about a CPU in terms of graphical performance.

The performance of two hypothetical systems, one with a 1050 and one with dual 2080 Tis is going to be very different even though they're both "at 4K" and you're obviously going to need a much beefier CPU to keep up with the SLI 2080 Ti system. So cramming these both into the same metaphorical bucket by talking about a CPU's "4K performance" is dumb, what you really mean is HFR/not-HFR and it would be better to just say as much. So instead of saying "good at 1080p" just say "good at HFR" instead, and instead of saying "good at 4K" say "targeting 60fps" instead.

This isn't completely true. Ideally the CPU would do the same work no matter the resolution, but when there are CPU benchmarks at different resolutions on an otherwise identical video card, different CPUs end up at different FPS.

https://www.gamersnexus.net/guides/3009-amd-r7-1700-vs-i7-7700k-144hz-gaming

According to this, the R7 1700 is a 200FPS CPU, so anything over 1080p should give completely identical scores. Yet the 1700 is always just a little bit slower, even in completely GPU-bound tests. I've seen benchmarks where the difference was pronounced even at higher resolutions, but I don't have time to go hunting them down tonight.

If I had to guess, I'd say it's textures. Dynamically loading textures from RAM to the GPU takes time, letting the driver recompress them to the card-native format to maximize GPU memory takes cycles, and so on. The game-to-GPU handoff is going to eat a few context switches as well, so faster clocks mean lower latency between the end of one frame and the start of the next. Slower context switches show up no matter what the FPS is, since they add a constant number of microseconds to each frame.
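
Quick sanity check on that last point, with a made-up 0.5 ms of constant per-frame overhead:

```latex
\text{FPS} = \frac{1000}{t_\text{frame}\,[\text{ms}]}:\qquad \frac{1000}{5.0 + 0.5} \approx 182\ \text{FPS (from 200, about 9\% lost)} \qquad \frac{1000}{16.7 + 0.5} \approx 58\ \text{FPS (from 60, about 3\% lost)}
```

Same fixed cost either way, but it's a much bigger slice of the frame budget at high refresh rates, which is why the gap between CPUs shows up most in high-FPS runs.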

It's close enough for a buying guideline.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
The only one that really pisses me off is ECC. As memory sizes and usage keep increasing, consumer PCs are getting into the realm of "bit flips are a realistic problem" where they weren't before.

It wouldn't be prohibitively expensive to put 9 bits per byte on everything, and they could keep soaking enterprise users with the registered-vs-unbuffered split.
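
For what it's worth, "9 bits on everything" is the same overhead the ECC DIMM market already pays - standard SECDED is 8 check bits per 64 data bits:

```latex
\frac{72 - 64}{64} = 12.5\%\ \text{extra DRAM per 64-bit word for single-error-correct, double-error-detect}
```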

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

JawnV6 posted:

no like, imagine if the x86 ISA wasn't set in stone and they had like, RISC-V levels of churn on what subsets they supported, would any of the goofy segment stuff still be shipping? the promise of transmeta and what the CMS could do would largely be undone by targeting the "native" core, it'd be a tech debt albatross just like corners of x86
That "goofy segment stuff" was implemented on a processor with orders of magnitude less gates than anything out there now. They could plop the original 286 core in, all 134000 transistors of it, just to run "legacy" code on. it wouldn't be a rounding error on the gate count.

When you talk about visibility, there's a cost there too. The more you look at, the more work you need to do. You risk spending time optimizing an active loop only to have it finish before you make the changes, with your super-optimizer always playing catch-up to what the code is actually doing. Sure, you could store the result for the next time execution comes back there, but when will that be, and will the data conditions be the same when it does?

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Paul MaudDib posted:

supposedly Alder Lake-S (client desktop) will not have AVX-512 support.

it seems like you could probably build an application with multiple codepaths, one that is AVX-512 and one that is not, and have the application dynamically executing both of them at runtime based on the appropriate path for the core. Obviously there are potentially some interprocess communication edge cases there, and you would have to have some kind of "affinity" call to tell the OS scheduler that this thread can only be migrated around other AVX-512 cores, but it seems like it broadly should work.
With AVX on AMD64, you trap to the OS the first time you touch an AVX register, which tells the OS it now has to preserve all that extra state. Linux uses this to lazily save/restore the AVX state on context switches. That's particularly useful when an AVX-using thread gets preempted to handle a system task that doesn't use AVX - rather than causing four saves/restores (kernel entry, kernel exit to the system task, kernel re-entry, kernel exit back to the AVX task), it causes zero: the AVX register state is left untouched the entire time, and when the AVX-using thread touches the registers again the trap just returns, since there's no work to do.

I believe it was the same deal for 8087 FP/MMX/3DNow!/SSE, but that was so long ago I've purged that disastrous era from working memory.

The same functionality can be used to lock an AVX-using task to the big cores.
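
The detection side of the multiple-codepath idea from the quote is pretty simple - a sketch using a GCC/Clang builtin (as far as I know, those compilers only report the AVX-512 bit when the OS has enabled the state via XSAVE/XGETBV, which is the same machinery as above). Per-core dispatch on a hybrid part would still need the affinity pinning the quote mentions, since this checks once for the whole process:

```c
#include <stdio.h>

/* Placeholder kernels standing in for real AVX-512 and fallback code paths. */
static void kernel_avx512(void)   { puts("using the AVX-512 path"); }
static void kernel_fallback(void) { puts("using the scalar/AVX2 path"); }

int main(void)
{
    /* GCC/Clang: true only if the CPU reports AVX-512F and the saved state
     * is enabled by the OS, so the instructions are safe to execute. */
    if (__builtin_cpu_supports("avx512f"))
        kernel_avx512();
    else
        kernel_fallback();
    return 0;
}
```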

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Perplx posted:

I think its a great idea, the main reason I don't repurpose gaming rigs into servers is because the idle power is too much, it has to be <15W for me to run 24/7.

That's not what "idle" power means. Idle power is when it's off, just wasting energy waiting on someone to press the power button. Minimal current to keep the USB (and possibly ethernet) circuitry on for wakeup events.

You're still looking at near three-digit wattage on a server "not doing anything" but waiting for a connection. Mine never goes below 120W.


~Coxy posted:

:wtc:

Yeah, it sucks so much that I haven't needed to buy a new PSU for the last 15 years!
Thank you for exemplifying the problem here. People hang on to PSUs way the gently caress too long, and they're full of components that wear out. When they wear out, the best possible outcome is that it just turns off and won't turn on again. The second best outcome is that it happily dumps random high voltages out all the rails and fries your system. The worst outcome is it catches fire and burns down your house.

Harik fucked around with this message at 09:55 on Aug 2, 2021

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
E: gently caress, Q!=E, haven't done that in a long time.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Dr. Video Games 0031 posted:

That's not how it's been explained in the press, and in that Linus Tech Tips video that was posted earlier, he showed a drop in desktop idle power draw (measured at the wall) from around 60 watts to 30 watts when switching to an ATX12VO PSU on an otherwise identical build. Current 80plus certification doesn't even bother measuring efficiency below 20% load, and ATX12VO is meant to be much more efficient at that.
Perplx was looking for sub-15W, which is sleep mode, not idle.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Paul MaudDib posted:

that's actually "phantom power", not idle power. idle power is power when the machine is running but idle.

but yeah servers actually have relatively obscene idle power, as do HEDT systems built on those same platforms. My X99 system did not idle down very well at all, I think it idled at about 120W at the wall even without XMP or other things enabled.

California's energy requirement works out to 8.5 watts total for a "high expandability" system (75 kWh/year / 8760 hours ≈ 8.5 watts of continuous draw). There's no way you're reaching that in any sort of active state; this is 100% about regulating S3 states and phantom power.

It's confusing because they don't actually state what they're measuring, so if you take the regulation literally, a 500W system could run for about 150 hours a year and then would have to be physically unplugged for the rest of it.

So yes, you're right that what's being described is 'phantom power' but it's being called "idle" power in the context of the new regulations.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

canyoneer posted:

The PE pushed back and said that's stupid, because you're going to want these people back in 8 months and they have a pretty unique skillset, I'm not going to go along with this. It continued to heat up and BK took a swing at the guy. They both landed a few punches before the other people in the room pulled them apart, nobody got fired, and life went on.

Literally fistfighting the executives to keep your people is SS+ tier management.


Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
I've tried a few horizontal cases and they've been universally awful. Thermals are poo poo, they take up an enormous amount of space, and because they're gimmick cases, building in them is terrible.

I have one left that the kids use, and it bluescreens constantly from overheating. Same tower coolers as the vertical cases, similar size and number of fans - just terrible airflow anyway.
