|
Shrimp or Shrimps posted:Looking like a strong no-go. Ah well. Ok, so, I was looking at an image of a Trident 3, sorry. It looks like it's a Z490I, but the rear I/O is different because the Trident uses some sort of internal antenna for WiFi/Bluetooth. So, you could swap it into an NR200. The only really odd thing about the board is the lack of VRM heatsinks. https://imgur.com/a/OC68fIo Canna Happy fucked around with this message at 05:31 on Feb 10, 2021 |
# ? Feb 10, 2021 05:26 |
|
|
# ? Apr 26, 2024 15:17 |
|
Canna Happy posted:Ok, so, I was looking at an image of a Trident 3, sorry. It looks like it's a Z490I, but the rear I/O is different because the Trident uses some sort of internal antenna for WiFi/Bluetooth. So, you could swap it into an NR200. The only really odd thing about the board is the lack of VRM heatsinks. Thanks, I appreciate the help! So if it appears to be a standard ITX board, then it should fit in the NR200 with the proper mounting screw holes. Not having VRM heatsinks is weird; the cooler design in the Trident X blows air downward onto the motherboard, so I guess they just rely on that. However, swapping it out and using a tower cooler would most likely mean needing to get sinks to stick to the VRMs. In that case, swapping another ITX board into the Trident case would probably mean having to remove the VRM heatsinks to get it to fit beneath the cooler. How essential are VRM heatsinks? I'm guessing once you start overclocking and unlock the power limits on something like a 10700K, it's guzzling so much power that the sinks are essential. I wonder why MSI didn't sink them with something at least low profile. Shrimp or Shrimps fucked around with this message at 22:54 on Feb 10, 2021 |
# ? Feb 10, 2021 22:46 |
Well, as is to be expected the only thing that matters is Number = Bigger when comparing FreeBSD 12.2-RELEASE and 13.0-BETA1, but it's evident that Michael doesn't really know what's behind the improvements, and just guesses it's down to "hardware P-states or power management" or "other kernel improvements". The likely culprit is a shitload of micro-optimizations to primitives used by both the kernel and userland, as well as big changes in the VFS and VM subsystems, along with a bunch of scalability improvements (which are to ensure FreeBSD runs well at up to 1024 threads). The majority of that work isn't even done yet, so there's plenty more Number = Bigger to be had. Still no standard deviation, min/max values, mean/median reports, or confidence intervals - which suggests this isn't benchmarked properly. It's frustrating that someone who makes it their business to report on statistics doesn't do it properly. BlankSystemDaemon fucked around with this message at 06:35 on Feb 11, 2021 |
|
# ? Feb 11, 2021 06:33 |
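As a sketch of the statistics being asked for here - using made-up timings rather than anything from the article - mean, sample standard deviation, and a 95% confidence interval fall out of a few lines of Python:

```python
import math
import statistics

def summarize(runs, t_crit):
    """Mean, sample stddev, and a 95% CI for a list of benchmark timings."""
    mean = statistics.mean(runs)
    sd = statistics.stdev(runs)  # sample standard deviation (divides by n-1)
    half = t_crit * sd / math.sqrt(len(runs))
    return mean, sd, (mean - half, mean + half)

# Five hypothetical timings in seconds; 2.776 is the two-sided Student's t
# critical value for 95% confidence with 4 degrees of freedom (n=5).
runs = [12.1, 11.8, 12.4, 12.0, 11.9]
mean, sd, (lo, hi) = summarize(runs, t_crit=2.776)
print(f"mean={mean:.2f}s sd={sd:.3f}s 95% CI=({lo:.2f}s, {hi:.2f}s)")
```

With numbers like these alongside each bar, a reader can tell whether a small delta between two configurations is signal or noise - exactly the information a single-run chart omits.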
|
Larabel produces an insane number of articles daily and, AFAIK, Phoronix is the only website doing that level of reporting on FOSS software, so maybe don't try to hold it against him too hard?
|
# ? Feb 11, 2021 14:11 |
|
Producing mountains of poo poo "journalism" and extremely bad benchmarks doesn't make any of it less poo poo.
|
# ? Feb 11, 2021 21:09 |
|
lol okay, if it's that mission critical, do your own benchmarks. I'm not gonna poo poo on the only person doing it if he's doing alright.
|
# ? Feb 12, 2021 01:42 |
|
Yeah I'll take bad benchmarks over no benchmarks.
|
# ? Feb 12, 2021 02:16 |
|
The 10850K is available for $350 at Microcenter. Frankly, that's an incredible value even with Rocket Lake coming in a month. https://www.microcenter.com/product/626745/intel-core-i9-10850k-comet-lake-36ghz-ten-core-lga-1200-boxed-processor
|
# ? Feb 12, 2021 02:42 |
|
Also this on Amazon https://www.amazon.com/Intel-i7-107...HNXWA80J4QA0MCD
|
# ? Feb 12, 2021 02:48 |
|
I firmly believe in the Anand Lal Shimpi adage of "there are no bad products, just bad prices", and with these price cuts, the Intel parts are straight up tasty. That 10700F is $70 cheaper than the 3700X/5600X and faster or equivalent in basically everything. Still gotta fight motherboard shortages though.
|
# ? Feb 12, 2021 02:59 |
|
Cygni posted:Rocket Lake coming in a month. It is going to be so god drat tempting to upgrade and I hate that I’m even considering it.
|
# ? Feb 12, 2021 03:08 |
|
Not that I'm tempted, but how far can I go with a straight CPU upgrade from an 8700?
|
# ? Feb 12, 2021 04:10 |
NewFatMike posted:lol okay, if it's that mission critical, do your own benchmarks. I'm not gonna poo poo on the only person doing it if he's doing alright. I'm not criticizing him because I don't want him to do it - rather, I want him to do a good job, so that the numbers actually mean something and we can draw conclusions from them. None of this is special or esoteric knowledge - it's so easy that it can be explained with cartoons, as is done in The Cartoon Guide to Statistics. None of it is hard to accomplish when you use tooling, and he does use tooling - in fact, he's been using his own tooling since 2008. MaxxBot posted:Yeah I'll take bad benchmarks over no benchmarks.
|
|
# ? Feb 12, 2021 05:07 |
|
You can let him know in the comments, I'm not sure he'd pay ten bucks to read this thread.
|
# ? Feb 12, 2021 05:21 |
NewFatMike posted:You can let him know in the comments, I'm not sure he'd pay ten bucks to read this thread. I've reached out to him a few times over the years, and never gotten a response.
|
|
# ? Feb 12, 2021 05:44 |
|
BlankSystemDaemon posted:I've reached out to him a few times over the years, and never gotten a response. Color me extremely shocked.
|
# ? Feb 12, 2021 05:58 |
|
Ben Smash posted:Color me extremely shocked. Yes, I too can't possibly imagine why asking a dude to do 3-10x the number of runs in order to fill out the needed data to get a 95% CI and the rest of the stuff asked for there on a free website isn't getting a whole lot of traction. Especially when the software in question is still a beta. Besides, do we really need candlestick graphs to understand the point of the entire article, which was "13.x is a gently caress ton faster than 12.2"? It's not like it's posing itself as a resource for you to base hardware scaling decisions on.
|
# ? Feb 12, 2021 22:02 |
Apparently I have no loving idea what I'm doing when sober, because I only just now, while being tipsy, spotted that I posted the previous posts in the Intel thread, not the Linux thread. orz
|
|
# ? Feb 12, 2021 22:36 |
|
in Ancient Persia, the men used to debate posts once sober and once drunk, because the post needed to sound good in both states in order to be considered a good idea. (nah )
|
# ? Feb 12, 2021 22:38 |
Paul MaudDib posted:in Ancient Persia, the men used to debate posts once sober and once drunk, because the post needed to sound good in both states in order to be considered a good idea. But what if we post both when sober and tipsy/drunk, that way we can get different takes on arguments back and forth, and double the PPTP (posts per time period).
|
|
# ? Feb 12, 2021 23:11 |
|
Cygni posted:10850k is available for $350 at Microcenter. Frankly, thats an incredible value even with Rocket Lake coming in a month. Great deal, really unfortunate so many people don’t live near one.
|
# ? Feb 12, 2021 23:14 |
|
BlankSystemDaemon posted:But what if we post both when sober and tipsy/drunk, that way we can get different takes on arguments back and forth, and double the PPTP (posts per time period). If you do, make sure to post the resulting scatter plot of shots vs how good the argument sounds.
|
# ? Feb 12, 2021 23:28 |
DrDork posted:If you do, make sure to post the resulting scatter plot of shots vs how good the argument sounds. The worst part is, this sounds like an excellent idea to my drunk brain.
|
|
# ? Feb 12, 2021 23:49 |
|
BlankSystemDaemon posted:Apparently I have no loving idea what I'm doing when sober, because I only just now, while being tipsy, spotted that I posted the previous posts in the Intel thread, not the Linux thread. orz This is 99% of the reason behind poking at you, I hope we can be friendly e-cquaintances ♥️
|
# ? Feb 13, 2021 01:15 |
|
DrDork posted:Yes, I too can't possibly imagine why asking a dude to do ... Lmao holy poo poo dude smoke weed or something. Shits weird for everyone right now.
|
# ? Feb 13, 2021 01:39 |
|
Ben Smash posted:Lmao holy poo poo dude smoke weed or something. Shits weird for everyone right now. Havin' a toke to celebrate the start of a long weekend at this very moment, highly recommend (it also softens the blow of yet another day of there being no stock available for anything more technologically complex than a 6502)
|
# ? Feb 13, 2021 01:44 |
|
BlankSystemDaemon posted:The worst part is, this sounds like an excellent idea to my drunk brain. I mean, I legitimately can think of several worse ways to spend an afternoon. Say it's for science! Ben Smash posted:Lmao holy poo poo dude smoke weed or something. Shits weird for everyone right now. Can't
|
# ? Feb 13, 2021 02:37 |
|
BlankSystemDaemon posted:Apparently I have no loving idea what I'm doing when sober, because I only just now, while being tipsy, spotted that I posted the previous posts in the Intel thread, not the Linux thread. orz lol i thought that might be in the wrong thread cause i didnt recognize most of the words but thought i was just too dumb to get the context
|
# ? Feb 13, 2021 02:39 |
NewFatMike posted:This is 99% of the reason behind poking at you, I hope we can be friendly e-cquaintances ♥️ DrDork posted:Yes, I too can't possibly imagine why asking a dude to do 3-10x the number of runs in order to fill out the needed data to get a 95% CI and the rest of the stuff asked for there on a free website isn't getting a whole lot of traction. Especially when the software in question is still a beta. That's the entire point of tooling, to make it automated so you just get the results without having to do anything other than wait - or, you know, do productive stuff, I guess? DevOps folks think they invented the idea, but sysadmins have been doing these things for a shitload of time - as an example, cfengine was imported to FreeBSD Ports back in March, 1998, and I remember using it for automating things relatively soon after I started using FreeBSD in 2000. I guess the reason I care is that statistics can be used to show just about anything - including misrepresenting something and outright lying to people, although I don't think that's the case with Michael. I'd like it if people get it right when they use statistics, because that way it's science and can be replicated by others. BlankSystemDaemon fucked around with this message at 07:53 on Feb 13, 2021 |
|
# ? Feb 13, 2021 07:45 |
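For what it's worth, the "just wait for the results" workflow being described can be sketched in a few lines - this is a hypothetical harness for illustration, not what Phoronix Test Suite or a cfengine-driven setup actually looks like:

```python
import statistics
import subprocess
import time

def run_benchmark(cmd, repeats=5):
    """Run a benchmark command repeatedly and collect wall-clock timings."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        timings.append(time.perf_counter() - start)
    return timings

# "true" is a stand-in workload; a real harness would substitute the actual
# benchmark binary and also handle provisioning, reboots, and result upload.
timings = run_benchmark(["true"], repeats=3)
print(f"n={len(timings)} median={statistics.median(timings):.4f}s")
```

The point made in the post holds either way: once the loop exists, going from 1 run to 5 costs machine time, not human time.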
|
BlankSystemDaemon posted:The tooling does everything if it's made properly - i.e. use puppet/chef/cfengine to connect to the machine, install the tooling, run, reboot, wait, connect, run, rinse, and repeat. You're not wrong on that, but it'd still increase the runtime of the test by 3-5x minimum. I'm just saying asking for hours of extra running is maybe asking a bit much for an article that notes it's reporting preliminary performance of a beta OS, and given that the performance numbers aren't even close in most tests, a 5-10% difference due to <whatever> isn't really germane to the point of the article. Especially given that it is, as noted, a beta OS, so the whole thing is kinda a curiosity at this point rather than a deep technical article.
|
# ? Feb 13, 2021 20:22 |
|
DrDork posted:You're not wrong on that, but it'd still increase the runtime of the test by 3-5x minimum. I'm just saying asking for hours of extra running is maybe asking a bit much for an article that notes it's reporting preliminary performance of a beta OS, and given that the performance numbers aren't even close in most tests, a 5-10% difference due to <whatever> isn't really germane to the point of the article. Especially given that it is, as noted, a beta OS, so the whole thing is kinda a curiosity at this point rather than a deep technical article. I actually don't get BSD's insistence on running everything N times because statistics. Sometimes that's needed, but usually more for short-running tests where the resolution of the timer influences results. You can get meaningful data out of single runs, especially with the availability of nanosecond-scale timers on many operating systems. But you have to think about that, and a ton of other test methodology pitfalls, to meaningfully test things. You even have to think carefully about what to test. The problem with Phoronix is that Larabel doesn't. His model is making GBS threads out of lots of lazy and pointless content. He chooses random topics, puts no effort into making sure he's doing something meaningful, runs a script to generate some graphs, and then writes a bad layman's-level interpretation which makes software engineers cringe. http://blog.martin-graesslin.com/blog/2012/09/why-i-dont-like-game-rendering-performance-benchmarks/ There's plenty of other criticism out there. It's easy to find, because he regularly pulls poo poo like "I just proved video card X is better than video card Y" despite not constructing a test where the only variable is whether it was card X or Y installed. The problems with his methodology aren't subtle.
|
# ? Feb 13, 2021 22:40 |
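The timer-resolution point above is easy to demonstrate: a single measurement of a very short operation is dominated by timing overhead, which is why short tests get batched (or repeated) while genuinely long-running ones can often get away with fewer runs. A rough Python illustration - the operation and batch size are arbitrary choices for the sketch:

```python
import time

def timed_once(fn):
    """One raw measurement; for a very fast fn this mostly measures overhead."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def timed_batch(fn, n=100_000):
    """Amortize timer overhead over n calls to get a stable per-call figure."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n

fast_op = lambda: sum(range(10))
single = timed_once(fast_op)
per_call = timed_batch(fast_op)
print(f"single: {single:.2e}s  batched per-call: {per_call:.2e}s")
```

The batched per-call figure is typically far smaller and far more repeatable than the single raw measurement, which bundles in the cost of reading the clock itself.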
|
BobHoward posted:I actually don't get BSD's insistence on running everything N times because statistics. Sometimes that's needed, but usually more for short-running tests where the resolution of the timer influences results. You can get meaningful data out of single runs, especially with the availability of nanosecond-scale timers on many operating systems. There's an argument to be made about the potential impacts of background tasks and whatnot, though for long-running tests that's likely to be less of an issue. But yeah, if you're testing the wrong thing in the wrong way, running it 5 extra times isn't gonna make the resultant data any more relevant.
|
# ? Feb 13, 2021 22:58 |
|
Many of the review sites discuss issues with running multiple reruns of certain tests, because it just comes down to the time required. They often get hardware not too far in advance of the date the NDA drops, and every site has to have their data ready to go on release, otherwise no one will read their stuff. All the good ones of course have their testing automated, but some of those benches (think battery life for light use on the latest ultraportable laptops, or some of the extended SSD benches anandtech does) are simply not feasible to have done in time. That doesn't mean everything published is useless though, far from it, and that seems to be the gist of several of these posts, which is a little :???: Nothing posted is going to be so off-kilter as to make bench results useless, and when one site does have anomalies the others usually make note of it and post a video about what they got wrong or whatnot.
|
# ? Feb 14, 2021 00:21 |
DrDork posted:You're not wrong on that, but it'd still increase the runtime of the test by 3-5x minimum. I'm just saying asking for hours of extra running is maybe asking a bit much for an article that notes it's reporting preliminary performance of a beta OS, and given that the performance numbers aren't even close in most tests, a 5-10% difference due to <whatever> isn't really germane to the point of the article. Especially given that it is, as noted, a beta OS, so the whole thing is kinda a curiosity at this point rather than a deep technical article. In accordance with the schedule, -BETAn is a snapshot of what will become the final -RELEASE and doesn't stick around for long. What's being benchmarked has been run in production by a lot of people, including Netflix for all of the back-end of their CDN for up to two years prior - it's got nothing to do with being "a beta OS". You can't even use the Number = Bigger thought process I mentioned earlier, because only a subset of the benchmarks use score-based reporting, the rest use time. Gwaihir posted:Many of the review sites discuss issues with running multiple reruns of certain tests, because it just comes down to the time required. They often get hardware not too far in advance of the date the NDA drops, and every site has to have their data ready to go on release, otherwise no one will read their stuff. All the good ones of course have their testing automated, but some of those benches (think battery life for light use on the latest ultraportable laptops, or some of the extended SSD benches anandtech does) are simply not feasible to have done in time. There's a huge difference between testing hardware and testing software, but fundamentally the same rules apply: Figure out the minimum reproducible change that you want to test, and avoid changing any other variable when testing that. 
In Michael's case, he confounds his benchmarks by using an arbitrary set of hardware which seems to come and go depending on the season and what colour of underwear may or may not be worn. If Michael wants to present numbers and say something about X or Y being better, he needs to do it properly. If not, pedantic assholes like me will keep crawling up his rear end about how he might as well be lying to people and making claims he can't back up. BlankSystemDaemon fucked around with this message at 16:31 on Feb 14, 2021 |
|
# ? Feb 14, 2021 16:27 |
|
BlankSystemDaemon posted:FreeBSD is produced with a script (documented here) in such a fashion that anyone can build it and get the exact same result... Sure, but even if you can make the whole testing operation a single automated button click, 5x the run time is still 5x the run time. Similarly, yes, I know the Beta will eventually work its way into an RC and then eventually into a stable release, but it's not like they don't make changes to things each step of the way. It's in beta for a reason, and we can expect the final stable release to have somewhat different characteristics because of the fixes they'll be making between now and then. So, combined, it strikes me as pretty reasonable to go "eh, I'm not going to put a ton of effort into exhaustively benchmarking this to statistical significance because all I'm trying to do at this point is get a rough feel for what sort of performance improvements we might be able to expect"--which the graphs as presented capably do. You're not wrong that changing up your testing platform every few months makes cross-comparisons very difficult / impossible, but that's a whole different issue. It's not like Intel trying to find hilarious ways to "prove" that the M1 is a dog of a chip because it can't run Crysis natively or whatever. BlankSystemDaemon posted:You can't even use the Number = Bigger thought process I mentioned earlier, because only a subset of the benchmarks use score-based reporting, the rest use time. Really, man? Come on. No one here is so dense as to be incapable of understanding that on some charts bigger = better and on others smaller = better. Doesn't change anything here.
|
# ? Feb 14, 2021 19:58 |
DrDork posted:Sure, but even if you can make the whole testing operation a single automated button click, 5x the run time is still 5x the run time. DrDork posted:Similarly, yes, I know the Beta will eventually work its way into an RC and then eventually into a stable release, but it's not like they don't make changes to things each step of the way. FreeBSD works under a principle called the Principle of Least Astonishment, which has a lot of different meanings depending on context, but in FreeBSD it means that as soon as code from -CURRENT makes it to -STABLE (the branch where releng is created from each time a new -RELEASE happens on the same major version), there shouldn't be any breaking changes that will surprise someone using FreeBSD. That effectively means that while there can be minor improvements and lots of features added, the big sweeping changes that could cause a noticeable effect simply won't occur. DrDork posted:So, combined, it strikes me as pretty reasonable to go "eh, I'm not going to put a ton of effort into exhaustively benchmarking this to statistical significance because all I'm trying to do at this point is get a rough feel for what sort of performance improvements we might be able to expect"--which the graphs as presented capably do. DrDork posted:You're not wrong that changing up your testing platform every few months makes cross-comparisons very difficult / impossible, but that's a whole different issue. It's not like Intel trying to find hilarious ways to "prove" that the M1 is a dog of a chip because it can't run Crysis natively or whatever. DrDork posted:Really, man? Come on. No one here is so dense as to be incapable of understanding that on some charts bigger = better and on others smaller = better. Doesn't change anything here. 
When he's comparing Linux vs FreeBSD in one of the next articles, you'll see third-party software get built with, for example, different toolchains; Linux defaults to GCC for basically everything and FreeBSD uses LLVM for a large portion of the ports tree (there are still some things which are an exception to this rule, but more and more things move towards using LLVM because of all the work the LLVM project is doing, while GCC doesn't appear to want much to do with compatibility, ironically). What this means is that all the subtleties that go into compilers make a difference for the numbers that appear - just look at any serious compiler talk comparing SPEC stuff for gcc, llvm, icc (Intel's compiler that's used for HPC), and other compilers, or look at how Intel themselves are struggling to keep up with Apple because Apple happens to control both the chip they're making as well as the toolchain that builds the software (they hired the LLVM folk a long time ago). And that ignores the fact that there's a whole host of optimizations which can and often do change, even if the difference between -O2 and -O3 is actually not really worth bothering with according to this: https://www.youtube.com/watch?v=r-TLSBdHe1A
|
|
# ? Feb 14, 2021 22:33 |
|
You're spending a lot of time arguing against this guy's entire body of work--and you might be right in terms of his other articles and Linux vs FreeBSD or whatever. I don't really care about his other articles. I am explicitly only talking about this one article and noting that the +/- 5% difference in results that might show up through repeated testing does not meaningfully take away from the massive performance difference being shown between 12.2 and 13.x "fresh out of the box" installs. That's it. If you know people who are actively planning on moving from a current 'nix production system over to 13.x (vs just noting that it's something they're exploring / considering / waiting for more data on) based on beta results from one dude on the internet, and they have no plans to independently verify the performance, wait for -STABLE, or at minimum wait for subsequent thorough reviews before pulling the trigger, you know some dumb people.
|
# ? Feb 15, 2021 00:06 |
DrDork posted:You're spending a lot of time arguing against this guy's entire body of work--and you might be right in terms of his other articles and Linux vs FreeBSD or whatever. I don't really care about his other articles. A userland performance boost of over 100% cannot be easily explained by the changes that have gone into 13-CURRENT and will be in 13.0-RELEASE. What can explain it is, for example, use of INVARIANTS in the kernel (documented here), but since the methodology is broken, we can't say anything about why it is the way it is, just like the numbers can't be used to say anything about the actual performance, since they're impacted by the methodology too. Again, I'm not saying that FreeBSD 13 isn't faster - I keep an eye on the tree to see if the code changes without the documentation changing, so I've noticed a lot of the big changes going in. Thing is, safe memory reclamation by Jeff Roberson; the substantial VM changes by Jeff, Konstantin Belousov, and Mark Johnston; lockless delayed invalidation by Konstantin; the many micro-optimizations to kernel primitives; a lot of the lockless and per-CPU changes to the VFS; the rewriting of C library functions in hand-rolled assembly by Mateusz Guzik; depessimization changes by Conrad Meyer, Ryan Libby, Mateusz, et al.; plus the new binary tree search implementations by Doug Moore and Edward Tomasz Napierala (my mentor) - they, and a lot of other things, all combine to make a difference in terms of performance, but even if you combine them all together, there's no reason they should impact userland to the degree that's been shown in the numbers, especially considering that most of these changes are focused largely on improving multi-threaded scalability up to 1024 threads. 
The point is, people are going to look at the benchmarks and think "oh neat, 100% improvement" and then, when they don't get that 100% performance improvement, they're going to claim anything from "FreeBSD promised them it would be faster" to "FreeBSD isn't as fast as is claimed" (without sourcing the claim) all the way to blaming FreeBSD people for ruining their lives. I know I'm being hyperbolic. Also, yeah, there's a lot of people who make stupid decisions based on Number = Bigger - it accounts for a substantial amount of the world economy, in fact. BlankSystemDaemon fucked around with this message at 07:01 on Feb 15, 2021 |
|
# ? Feb 15, 2021 06:19 |
|
You're welcome to do your own statistically significant and rigorous testing of 12.2 vs 13.0 and report the results, and we can see how far off he is, then. I mean, with some OSS automation software out there it's just a couple of easy clicks, right? Again, anyone who takes single-source, single-run, "hey guys these are preliminary benchmarks of a beta system" third-party benchmarks as "FreeBSD promised <anything>!" shouldn't be making purchasing decisions for anyone and deserves whatever disappointment they get for not validating that a given rando benchmark actually relates to whatever prod loads they're running. (yes, I'm sure such people exist, but I have no more sympathy for them than I do the people who buy the same video card as their friend and then are mad that they don't get the same FPS in games despite the entire rest of their systems being completely different)
|
# ? Feb 15, 2021 14:56 |
|
|
DrDork posted:You're welcome to do your own statistically significant and rigorous testing of 12.2 vs 13.0 and report the results, and we can see how far off he is, then. I mean, with some OSS automation software out there it's just a couple of easy clicks, right? Not only do those people exist, they might be the majority, since very, very few people understand statistics. Plus, people are just going to accuse me of being biased, since I'm a FreeBSD developer.
|
|
# ? Feb 15, 2021 18:04 |