Is there some sort of trick for finding AM4/X570 based workstation motherboards with not just the ability to boot ECC memory in non-ECC mode but actual verified ECC support - as well as a minimum of one PCI-ex x16 and two PCI-ex x8 slots plus a minimum of two M.2 x4 slots (preferably three), a minimum of 2 RJ45 connectors on a not-Realtek or similarly-lovely MAC+PHY, and preferably a baseboard management controller that runs IPMI with vKVM? The plan is a minimum of 12 cores with SMT, which gives me 8 threads for FreeBSD which is my daily driver, 16 threads for Windows when I wanna do some gaming, and 16 threads to spare for compiling stuff with poudriere when not gaming. BlankSystemDaemon fucked around with this message at 10:40 on Apr 29, 2020 |
|
# ¿ Apr 29, 2020 10:22 |
|
|
I forgot to mention that it pretty much has to be ATX, since I'll be using at least two dual-slot graphics cards and a SAS HBA for 8 disks, as Ryzen 9 doesn't ship with an APU. I looked at it a bit more, and it looks like Ryzen 3rd gen is limited to 24 PCI-Ex lanes, so I don't think it's gonna work. I seem to recall people going nuts over the fact that AMD was supposed to have more lanes for expansion boards, but that appears to only be available on Threadripper and EPYC, which means they're just as much out of reach for my budget as Intel chips with large amounts of lanes for expansion boards.
Paul MaudDib posted:Asrock Rack also has a couple server boards:
Paul MaudDib posted:https://www.asrockrack.com/general/productdetail.asp?Model=X470D4U2-2T#Specifications (the above, but puts dual 10 GbE on the NVMe lanes, trades the middle slot for a smaller one off the chipset, ditches the SATA controller. This is the only board I've ever seen with 10GbE on the CPU direct lanes.)
Paul MaudDib posted:https://www.asrockrack.com/general/productdetail.asp?Model=X570D4I-2T#Specifications (X570 ITX, goes to 8x SATA with oculink, dual 10GbE, 1x M.2 on back, can be bifurcated to x4x4x4x4 if you have a breakout.)
Paul MaudDib posted:That is the sum total of AM4 server boards at the moment. Asrock also makes this threadripper board:
mdxi posted:AFAIK, there's one board that meets this requirement before you get into EPYC motherboards: https://www.tyan.com/Motherboards_S8020_S8020AGM2NR-EX
Gotta be honest, I'm disappointed there's so little choice after almost a year, with nothing more even announced. BlankSystemDaemon fucked around with this message at 18:47 on Apr 29, 2020 |
|
# ¿ Apr 29, 2020 18:35 |
NewFatMike posted:What *is* your budget?
The right question to ask is: why the gently caress is Threadripper so damned expensive? A 3960X is just about the price of the entire computer I had planned out, when I found out the BMC is some lovely home-grown solution involving a complete lack of IPMI and HTML5 vKVM - and the apparent lack of SR-IOV is kind of a killer, too.
SamDabbers posted:This gets as close to your requirements as I've seen: https://www.asus.com/Motherboards/Pro-WS-X570-ACE/
Even if it is supported, the NIC is I211, not I350, so SR-IOV won't work on it. The x1 slot, if my dream motherboard has it, would be used for my soundcard.
Crunchy Black posted:Hey D.E.: [non-AMD suggestion spoiler alert] why not pick up a Broadwell 2658v4? 40 lanes, ATX board availability, 32 lanes for GPUs then throw this crazy fucker in there: https://highpoint-tech.com/USA_new/series-ssd7110-overview.htm
Already have the SAS HBA flashed with IT-mode firmware, ready to be passed through to FreeBSD via hardware accelerated IOMMU - it's how my old-and-used current build-server is booting.
|
|
# ¿ Apr 30, 2020 05:01 |
Crunchy Black posted:1630v4? 1s only so I often forget about those parts.
|
|
# ¿ Apr 30, 2020 05:19 |
Paul MaudDib posted:lots of good options
Crunchy Black posted:Yeah, I just checked out that soundcard. WTF. Are you trying to do broadcast quality audio on a budget? Are you Tim Burke!?
I used to freelance nights as a disc jockey (ie. flipping discs, no speak) - so when the radio station went tits-up, the boss handed me that audiocard. I got lucky in that a FreeBSD developer has the same card, and wrote a driver for it.
mdxi posted:I can read. I was doing this thing called "trying to help".
I thought I was missing something (other than mentioning my budget, which I could've sworn I included, but demonstrably didn't), but it looks like AMD is no better or worse than Intel in terms of market segmentation when it comes to the number of PCI lanes hanging off a CPU.
|
|
# ¿ Apr 30, 2020 07:32 |
priznat posted:Oh late to respond but yeah that asus pro ws x570 ace does support sr-iov. I can’t speak on how well it works as we connect sr-iov drives through a pcie switch fabric which abstracts that away from the host platform so don’t need to enable it. I recall seeing the option in bios though. I guess I looked in the manual for it, not the BIOS manual, because I'm used to my own motherboard manual having everything. So it looks like I can build the AMD system I want to, after all. Thank you everyone for putting up with my foibles, and I apologize again for being an rear end. Also, fun fact, you can hotlink to subsections of manuals.
|
|
# ¿ Apr 30, 2020 15:52 |
Out of curiosity, is there any sort of chance that the 3rd-gen Ryzen chips might conceivably get a SKU that comes with integrated graphics? The reason I ask is because the Pro WS X570-ACE has both HDMI and DisplayPort, and 1st-gen and 2nd-gen Ryzen had APU SKUs. BlankSystemDaemon fucked around with this message at 17:15 on Apr 30, 2020 |
|
# ¿ Apr 30, 2020 17:10 |
Some Goon posted:There's always a chance. But no, nothing has indicated there's anything like that coming. I guess I would have to dig up some die-shots, to see if there is room for it. EDIT: ↓ Welp. BlankSystemDaemon fucked around with this message at 17:30 on Apr 30, 2020 |
|
# ¿ Apr 30, 2020 17:20 |
This is one heck of a dual-EPYC motherboard, and as the article points out, also the only one that currently exists.
|
|
# ¿ May 14, 2020 11:08 |
Finally we get PCI-Ex 4.0 switches. The question is, just how long do we have to wait for them to show up on motherboards? I believe one of these would solve the problem I have with the current crop of newest-chipset boards from AMD, because I don't need the full bandwidth of even 24 PCI-Ex 3.0 lanes, let alone the 64 lanes that're on the motherboards that have enough daughterboard slots.
|
|
# ¿ Jun 2, 2020 18:21 |
Risky Bisquick posted:Maxing out 24 lanes is 2x NVME/10g pci-e and a GPU. We need more lanes imo. And when we get all the lanes, the transition to fast flash can happen and we can leave sata behind or relegate it to high capacity low speed storage.
EDIT: And the NVMe SSDs that can saturate PCI-Ex 4.0 x16 aren't the high-write-endurance ones that can actually last, so it's not like you'll get that speed for the lifetime of the system. As for relegating it, it's already been relegated everywhere outside of the consumer space. Even with filesystems like ZFS, the big customers are storing metadata on NVMe SSDs and bulk storing data in 10MB segments to get the most out of spinning rust. For reference, PCI-Ex 3.0 and newer use 128b/130b encoding, which has a lot less overhead than the traditional 8b/10b encoding of PCI-Ex 1.x/2.x (~1.5% vs 20%), so a PCI-Ex 4.0 x16 link works out to roughly 252Gbps (~31.5GB/s) rather than the naive 256Gbps. BlankSystemDaemon fucked around with this message at 21:41 on Jun 2, 2020 |
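As a sanity check on those overhead numbers, here's a hedged sketch (link rates per lane are taken from the PCIe specs: 5 GT/s for Gen2, 16 GT/s for Gen4; the function name is my own):

```python
def effective_gbps(gt_per_s: float, lanes: int, payload_bits: int, total_bits: int) -> float:
    """Line-rate throughput after encoding overhead, in Gbit/s."""
    return gt_per_s * lanes * payload_bits / total_bits

# PCIe 1.x/2.x use 8b/10b encoding (20% overhead);
# PCIe 3.0 and newer use 128b/130b (~1.5% overhead).
gen2_x16 = effective_gbps(5, 16, 8, 10)      # 64.0 Gbit/s
gen4_x16 = effective_gbps(16, 16, 128, 130)  # ~252 Gbit/s, i.e. ~31.5 GB/s
```

Note this is raw line rate only; TLP/DLLP packet framing shaves off a few more percent in practice.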
|
# ¿ Jun 2, 2020 21:30 |
So long as the IPC is better than a Sandy Bridge-era i7-2600, it turbos to at least 4.2GHz, and it has enough PCIe lanes to let me do what I want, I'll honestly be happy.
|
|
# ¿ Jun 4, 2020 09:43 |
Egosoft also magically fixed their game in the latest beta-patch, so I no longer need to upgrade (I didn't really need to, but really wanted to even if I couldn't afford it).
|
|
# ¿ Jun 4, 2020 18:32 |
Some Goon posted:NVMe only pulls meaningfully ahead at higher queue depths than the average consumer uses their SSD, for games the difference isn't perceptible.
It loving owns, here's my daily-driver laptop's output of 'top':
pre:
last pid: 76109;  load averages: 1.85, 1.40, 1.19; battery: 100%  up 4+23:29:40  22:28:16
94 processes:  1 running, 93 sleeping
CPU:  7.1% user,  0.0% nice,  3.4% system,  0.7% interrupt, 88.8% idle
Mem: 1262M Active, 1601M Inact, 319M Laundry, 11G Wired, 1917M Free
ARC: 8218M Total, 6993M MFU, 182M MRU, 420K Anon, 160M Header, 883M Other
     6574M Compressed, 13G Uncompressed, 2.10:1 Ratio
Swap: 2048M Total, 2048M Free
With zstandard, it'll likely be closer to 14GB or 21GB with 3:1 or 4:1 ratios which zstd can easily accomplish.
|
|
# ¿ Jun 5, 2020 21:33 |
U.2 is the superior interface.
|
|
# ¿ Jun 5, 2020 23:34 |
SwissArmyDruid posted:Um, ackshually, u.3 is the superior interface.
|
|
# ¿ Jun 6, 2020 11:27 |
How about an (almost) Mini-ITX motherboard with a loving SP3 socket for an EPYC on it?
|
|
# ¿ Nov 26, 2020 19:11 |
Supermicro has launched a Threadripper Pro board. BlankSystemDaemon fucked around with this message at 23:38 on Jan 15, 2021 |
|
# ¿ Jan 15, 2021 23:27 |
DrDork posted:drat that board is dense. And if I don't misremember completely, Threadripper Pro provides 128 lanes of PCI-Ex goodness with the southbridge included, which means the 6x PCI-Ex x16, 4x M.2 x4, and 2x U.2 all get enough PCI lanes. Oh, and it's Zen 2, so you don't get to fight with the performance pessimizing errata.
|
|
# ¿ Jan 16, 2021 00:00 |
Combat Pretzel posted:Probably not going to buy one, anyway. My mantra has always been same or more/better, so I'd have to stick with quad channel memory. It practically probably doesn't make a difference, altho with sixteen cores going at it... Losing the PCIe lanes is kinda meh, but they really only served the Mellanox card to hook up the NAS, which became a backup only solution, so it can work with Gigabit. Altho if I were to want multiple NVMe drives at their best performance...
|
|
# ¿ Jan 17, 2021 14:04 |
It might be a case of AMD knowing exactly which registers to probe, which the others might not be aware of and/or haven't published versions that can handle it yet? Naively looking at ACPI C and P states is about as useful as testing the temperature of your CPU with your tongue.
|
|
# ¿ Jan 21, 2021 03:58 |
DrDork posted:Ryzen Master does not offer an interval lower than 1s (what it defaults to). For shits and giggles I turned Afterburner to 0.1s updates and it doesn't show anything different: floor of about 3.6Ghz and peaks of 4.5-4.7Ghz while Ryzen Master is showing nothing over 1-1.5Ghz with a floor of about 500Mhz.
|
|
# ¿ Jan 21, 2021 04:10 |
DrDork posted:Yeah, of course. But everyone else should be doing averaging, too, and more to the point, no matter how fine grained I set the others I don't get any hits with clocks as low as what Ryzen Master is reporting. Even at 10ms ticks. It's the enormous disparity that's confusing me here--even with averaging there should be some overlap (since if Ryzen Master is showing an average of 1Ghz, that means it should be spending most of that 1s tick at 1Ghz or lower, so you'd think that the others would poll at least a few hits down below 1Ghz based on pure statistics), but there doesn't seem to be--the others aren't showing even momentary drops below ~3Ghz. The wrong way is to have a program report averaged out values over a given time period - this is what happens when you do any CPU temperature measurements. The right way is to have a counter for each state, and just increment the counter, that way the user can choose the time period and figure out their own averaging. The reason why I suspect Ryzen Master of using registers unavailable to anyone else is exactly because their numbers are so different.
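The counter-per-state idea can be sketched in a few lines (everything here is made up for illustration - a real tool would read an MSR or an ACPI/sysfs counter, not a random choice):

```python
import random
from collections import Counter

def sample_state_ghz() -> float:
    # Hypothetical stand-in for reading the current core frequency
    # from hardware; the state list below is invented.
    return random.choice([0.5, 1.0, 3.6, 4.5])

# The driver side just increments a counter per observed state...
counters = Counter(sample_state_ghz() for _ in range(10_000))

# ...and the *user* picks the window and does their own averaging:
total = sum(counters.values())
avg_ghz = sum(freq * n for freq, n in counters.items()) / total
```

Because the raw counters are exposed, two tools reading the same counters can never disagree the way Ryzen Master and Afterburner do above - any disparity has to come from the averaging window, not the data.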
|
|
# ¿ Jan 21, 2021 05:22 |
Mr Shiny Pants posted:That Lenovo P620 sure looks nice. Too bad they didn't have one when I was looking for a ThreadRipper workstation a couple of years ago.
The Supermicro board looks especially interesting, because even with ALL of the daughterboard expansion slots, M.2, U.2, and everything else populated, it looks like it has enough PCI-Ex lanes that you could in theory fill the system to capacity and still not lack for bandwidth.
|
|
# ¿ Jan 24, 2021 13:52 |
Why is a discord the place to find stock availability, of all things? Is RSS/Atom/mailing lists/anything meant for few-to-many mass-communication/anything that isn't a huge-rear end javascript codebase dead?
|
|
# ¿ Jan 24, 2021 19:28 |
hobbesmaster posted:Because that’s slow? Discord is just IRC.
Irssi on my laptop has a resident set size (ie. the memory it's actually using) of ~43.5MB, with quite a few scripts and a few plugins. Irssi on my server has a resident set size of 2MB. Discord, unless limited by the kernel, will easily allocate several GB - and that's assuming there are no memory leaks, which there absolutely are. The only reason you can't use irssi for Discord is that Discord will actively permaban users who use third-party clients. BlankSystemDaemon fucked around with this message at 20:01 on Jan 24, 2021 |
|
# ¿ Jan 24, 2021 19:57 |
Truga posted:https://github.com/terminal-discord/weechat-discord
If I do get banned, I have to remember all of the discord servers I'm on, and where I got the invites.
Subjunctive posted:Sounds like you should start an IRC server for stock drops. Post the link when you do!
Are there even still URI handlers for that? I haven't seen a URI like it for what feels like decades.
Combat Pretzel posted:I have Discord running 24/7, and when it's minimized, like right now, it barely adds up to 200MB. Even when showing the window and browsing through the channels, goes up to maybe 300MB and stays there.
Malloc Voidstar posted:let me know which of those provide desktop notifications within seconds of a specific message being sent, with near zero configuration, to an audience of >10k
IRC does provide near-instant notifications, if you configure them.
|
|
# ¿ Jan 24, 2021 20:56 |
mdxi posted:And I thought I was a techno-contrarian. It sounds like you've been in your garage a bit longer than 10 months, mate
There is something to be said for the idea that the commodification of compute and memory today has led to developer inefficiencies, though - back on 32bit platforms, when a process couldn't allocate more than 2GB, everybody at least tried to avoid memory leaks if they wanted their software to keep running.
|
|
# ¿ Jan 24, 2021 22:33 |
GRINDCORE MEGGIDO posted:From the Anand article on it: "A single DB15 D-Sub video output is present for users looking to access the system psychically"
|
|
# ¿ Jan 24, 2021 22:59 |
Cygni posted:complaining about Discord not being IRC is one of the most grey beard things ive heard in a while. gently caress Man posted:Are there any good comparisons of CPU benchmarks for Excel? BlankSystemDaemon fucked around with this message at 08:17 on Jan 25, 2021 |
|
# ¿ Jan 25, 2021 08:15 |
HalloKitty posted:Forget the client, it's still a bit lame that everyone's centred around one proprietary platform instead of standards we already had Because it's a product of silicon valley, and that's pretty much the modus operandi.
|
|
# ¿ Jan 25, 2021 16:24 |
Inept posted:"As of 2016, a new standardization effort is under way under a working group called IRCv3, which focuses on more advanced client features like instant notifications, better history support and improved security.[20] As of 2019, no major IRC networks have fully adopted the proposed standard." IRCv3 didn't get standardized because not only did it force several things which some networks (including one of the biggest ones) don't implement at all, but it also fundamentally broke compatibility with IRC. That's not how standards work. Also, I might not be as involved with ircds as I used to be when I was an ircop for one of the major networks, but I recognize none of the people involved in making IRCv3, so it seems like it's driven by a lot of new people who've seen a chance to jump in and take over a protocol to suit their own needs, without regard to the existing userbase. EDIT: a lot of what IRCv3 attempts is exemplified by what Matrix is doing to IRC too, by breaking compatibility with normal IRC: users of Matrix, when responding to someone, end up quoting part of the sentence, instead of just the regular "nick," or "nick:" way of prefixing when talking to someone. BlankSystemDaemon fucked around with this message at 17:07 on Jan 25, 2021 |
|
# ¿ Jan 25, 2021 16:59 |
Always be suspicious of benchmarks that don't show margin of error, min/max, median+average, and confidence of data. If they don't show it, it's almost always because they only ran one test, and that simply doesn't generate enough data to form a statistical universe. And if they do include it, you should also make sure to check their methodology, because there's always a chance that they don't do it properly.
Then there's the weirdest property of benchmarking: don't make them too short, or too long; if they're too short you get problems with timestamping resolution (even on systems that keep time to picoseconds, like FreeBSD), and if you run tests for too long, temperature changes affect the quartz crystal, causing the same drift NTP is supposed to correct but which you can't control for.
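A minimal sketch of what "enough data to form a statistical universe" looks like in practice (the 1.96 z-value assumes roughly normal samples, which is itself something to verify before trusting the interval):

```python
import statistics
import time

def bench(fn, runs: int = 30) -> dict:
    """Time fn repeatedly and report the spread, not just a single number."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return {
        "min": min(samples), "max": max(samples),
        "median": statistics.median(samples),
        "mean": mean, "stdev": stdev,
        # ~95% confidence interval for the mean
        "ci95": 1.96 * stdev / len(samples) ** 0.5,
    }

stats = bench(lambda: sum(range(100_000)))
```

A large gap between median and mean, or a min/max spread much wider than the confidence interval, is exactly the kind of thing a single-run benchmark hides.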
|
|
# ¿ Jan 25, 2021 18:54 |
Stanley Pain posted:Nerds
|
|
# ¿ Jan 25, 2021 21:13 |
Khorne posted:Zen3 mobile details leaked a bit. It's going to have significantly better battery life because igpu voltage no longer dictates the voltage of the CPU cores. It should be closer to on par with Intel's battery life now. The whole stack will have SMT - no more artificial segmentation.
|
|
# ¿ Jan 26, 2021 17:24 |
ECC is a fun time all-round, as it can be in one of many states:
And you likely won't be able to tell unless you talk with a second-or-third level engineer at the ODM, or happen to be able to find someone who's independently confirmed which state it supports.
|
|
# ¿ Jan 26, 2021 19:37 |
Storm One posted:Only the first 2 seem useless to me, the last 4 are all much better than the default of no ECC/EDC whatsoever.
NMIs are not supposed to be masked - they're called Non-Maskable Interrupts for a reason. For reference, if I go and look at a Danish retailer right now, the difference in cost for new memory between UDIMM and UDIMM ECC is... nothing. Sometimes the cheapest UDIMMs are more expensive than the cheapest UDIMM ECC.
|
|
# ¿ Jan 27, 2021 01:01 |
Storm One posted:I suppose I don't, I have no idea what an interrupt is
To use a NIC as an example, when someone sends you traffic, the NIC's built-in ring-buffer can only store so much data before it overruns itself, so it needs the OS to take that data before that can happen. This happens via an interrupt. Now take that and add interrupts for every single device imaginable that the OS has to interact with, and you can understand why a computer is never truly idle. Generally speaking, the OS wants to handle most interrupts as it gets them, but some of them don't need as much attention as others (and some are even ignored in favour of the OS doing device polling). However, there's also another class of interrupts that are so important that you really need to deal with them right now, called non-maskable interrupts. This is typically along the lines of the CPU screaming "help i'm overheating, flush your data to disk or it will be lost" at Tjunction/Tmax, or a RAID HBA noting that its battery has died so it can no longer keep safe the data that the OS assumes has been written to disk, all the way to volatile memory (either CPU caches, main memory, or something else) noticing that the bits are unexpectedly flipping, meaning that trouble is afoot (as the data I linked in a previous post suggests, this happens much more often than expected by the people who made the decision to leave out ECC). Now, you're probably wondering why all of this is necessary to know, and that's fair, because it isn't really, but I'm awake at 5 in the morning for no loving good reason, so I'll be damned if I'm going to be bored.
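The NIC ring-buffer/interrupt interplay described above, as a toy simulation (all names and numbers here are made up for illustration - a real driver works with DMA descriptor rings and hardware IRQ lines, not Python deques):

```python
from collections import deque

class ToyNIC:
    """A NIC ring buffer that fires an 'interrupt' (a callback) at a watermark."""
    def __init__(self, size, watermark, on_interrupt):
        self.ring = deque()
        self.size = size
        self.watermark = watermark
        self.on_interrupt = on_interrupt
        self.dropped = 0  # overruns: frames lost because the OS didn't drain in time

    def receive(self, frame):
        if len(self.ring) >= self.size:
            self.dropped += 1  # ring is full, the frame is gone
            return
        self.ring.append(frame)
        if len(self.ring) >= self.watermark:
            self.on_interrupt(self)  # raise the IRQ: "come take this data"

def drain(nic):
    while nic.ring:
        nic.ring.popleft()  # hand frames off to the network stack

nic = ToyNIC(size=4, watermark=3, on_interrupt=drain)
for frame in range(10):
    nic.receive(frame)  # the OS keeps up, so nothing is dropped
```

Swap the `drain` handler for one that never runs (an OS that masks the interrupt, or is too busy) and the drop counter climbs - which is exactly why the NIC needs the OS to service its interrupts promptly.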
The reason for this nonsense is that in cases 3 and 4, you can very easily assume that your error-correcting DIMMs aren't having any problems, and it can still turn out that you end up writing corrupt data to disk, the system becoming unstable (but you have ECC memory, so you begin to suspect the CPU or PSU instead), or you have weird application behaviour that's non-deterministic and makes you doubt your sanity. So with all of that said, imagine what would have happened if IBM and Intel hadn't cheaped out and had added ECC to normal computers; think of how many people complain about their computer working unreliably sometimes, how many people complain about their computers crashing, and every time either has happened to you, and multiply that with the number of people you estimate have ever touched a computer, then multiply it by the number of minutes they've spent being frustrated. Wouldn't cutting down even 1% of that be worth it? Even at 1% it's a substantially large number of years of productivity/time wasted. Especially when a more realistic percentage is likely much higher, because of the locality associations linked earlier in my previous posts.
MaxxBot posted:Cezanne lookin good BlankSystemDaemon fucked around with this message at 05:41 on Jan 27, 2021 |
|
# ¿ Jan 27, 2021 05:35 |
DrDork posted:While "proper" support is ideal, I don't think it's wrong to note that case 3 and 4 there still represents an improvement over non-ECC memory in practice for non-enterprise use. Sure, bit flips today are somewhat more common than they were expected to be way back in the 70's or whatever, but they're still uncommon, and if the ECC is taking care of those, great, your system is more stable than it otherwise would be. That it can't/doesn't tell you about double- or more flips is annoying, but non-ECC wasn't gonna tell you about those, either. The only notable downside here is if you think your system is running in case 5/6, but is actually in 3/4, and so you assume the RAM subsystem is "good" since it's not throwing NMIs when it's actually got defective hardware or something. I still find that to be rather unlikely, since truly hosed up RAM usually throws more than the occasional single bit error and you can ferret that out with something like Memtest.
Not all NMIs equal panic(9) or a BSOD. It's entirely dependent on which NMI it is - for example, for a single bit error that was corrected, FreeBSD will simply put a message in syslog, and if it wasn't corrected, it will try to overwrite the memory location unless it was filled with dirty memory (ie. something that hadn't been written to disk yet, which is much less likely). I'm half-convinced it's mostly the result of a lack of a market because it was made a premium item, and that got exacerbated by lower production in a vicious cycle that at one point had ECC memory costing many times that of normal DIMMs (which, like I mentioned before, isn't true for at least Danish retailers, where an ECC DIMM can often be cheaper than a non-ECC DIMM of equivalent SKU). The part about DDR5 could also very well be because the few producers of integrated circuitry (all 3 of them that produce in batches large enough to sell globally), which is used for memory, have been putting pressure on JEDEC.
We don't know, and DDR training is closed-source firmware guarded jealously, so we'll likely never know. I also know the trade-off I'm making, because neither of us are doing memory intensive workloads, and I'd much rather have the error correction than memory that bursts to slightly higher speeds. Unless there's an OS that lives in non-volatile main-memory DIMMs (and that doesn't exist, because it's only in the very early planning stages), you're doing HPC cluster stuff with memory-intensive workloads, or in-memory database serving to customers, memory speed doesn't matter as much as some people think - and all of those benefit from ECC memory too.
mdxi posted:Nothing is for free. The error-correcting part of ECC RAM isn't magic; it's code in the firmware that checks the value of every byte of memory when it's accessed (and/or refreshed?). And that takes time.
It consists of a Galois finite field matrix transformation and an XOR. The first is always handled in hardware, as it'd be far too slow to do in software, but the circuitry to accomplish it is so cheap and so widely used that it doesn't matter. XOR is a natural part of any processor. BlankSystemDaemon fucked around with this message at 22:02 on Jan 27, 2021 |
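Real ECC DIMMs use a SECDED code over 64 data bits plus 8 check bits; a toy Hamming(7,4) code shows the same XOR machinery at a scale you can follow by hand (this is the textbook construction, not any vendor's actual implementation):

```python
def hamming74_encode(d3, d5, d6, d7):
    """Encode 4 data bits into a 7-bit codeword; parity lives at positions 1, 2, 4."""
    code = [0] * 8  # index 0 unused so list index == bit position
    code[3], code[5], code[6], code[7] = d3, d5, d6, d7
    code[1] = code[3] ^ code[5] ^ code[7]
    code[2] = code[3] ^ code[6] ^ code[7]
    code[4] = code[5] ^ code[6] ^ code[7]
    return code[1:]

def syndrome(codeword):
    """XOR together the positions of all set bits: 0 means clean,
    anything else is the position of a single flipped bit."""
    s = 0
    for pos, bit in enumerate(codeword, start=1):
        if bit:
            s ^= pos
    return s

word = hamming74_encode(1, 0, 1, 1)
clean = syndrome(word)   # 0: no error detected
word[4] ^= 1             # cosmic ray flips the bit at position 5
fixed_at = syndrome(word)  # 5: points straight at the flipped bit
```

The whole "correction" is that handful of XORs, which is why it's trivial to do in dedicated circuitry on every access.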
|
# ¿ Jan 27, 2021 21:45 |
DrDork posted:Yeah, I have no trouble believing that a common cause of single-bit flips is an iffy cell or two, which is therefore far more likely to throw them in the future than the perfectly good cell next to it is.
How're you going to notice a desktop experiencing single-bit errors, though? Even assuming they happen in the same locality, your applications don't live in the same bits of memory from startup to startup - aside from the whole virtual memory situation and how processes tend to migrate as new data is written and old data gets flushed to disk, there's very little that makes it clear memory errors are happening, and it seems likely they're to blame for a lot of the Heisenbugs that people experience. Higher boost or base memory bandwidth doesn't correlate directly with higher FPS, either. If you think it does, I'd love to see the statistics proving it.
|
|
# ¿ Jan 28, 2021 01:58 |