Canna Happy
Jul 11, 2004

Shrimp or Shrimps posted:

Looking like a strong no-go. Ah well.

Ok, so, I was looking at an image of a Trident 3, sorry. It looks like it's a Z490I, but the rear I/O is different because the Trident uses some sort of internal antenna for WiFi/Bluetooth. So, you could swap it into an NR200. The only really odd thing about the board is the lack of VRM heatsinks.
https://imgur.com/a/OC68fIo



Shrimp or Shrimps
Feb 14, 2012


Canna Happy posted:

Ok, so, I was looking at an image of a Trident 3, sorry. It looks like it's a Z490I, but the rear I/O is different because the Trident uses some sort of internal antenna for WiFi/Bluetooth. So, you could swap it into an NR200. The only really odd thing about the board is the lack of VRM heatsinks.
https://imgur.com/a/OC68fIo

Thanks, I appreciate the help! So if it appears to be a standard ITX board, then it should fit in the NR200 with the proper mounting screw holes. Not having VRM heatsinks is weird; the cooler design in the Trident X blows air downward onto the motherboard, so I guess they just rely on that. However, swapping it out and using a tower cooler would most likely mean needing to get heatsinks to stick to the VRMs. In that case, swapping another ITX board into the Trident case would probably mean having to remove its VRM heatsinks to get it to fit beneath the cooler.

How essential are VRM heatsinks? I'm guessing once you start overclocking and unlock the power limits on something like a 10700K, it's guzzling so much power that the heatsinks really are essential. I wonder why MSI didn't sink them with something at least low profile.


BlankSystemDaemon
Mar 13, 2009



Well, as is to be expected, the only thing that matters is Number = Bigger when comparing FreeBSD 12.2-RELEASE and 13.0-BETA1, but it's evident that Michael doesn't really know what's behind the improvements, and just guesses it's down to "hardware P-states or power management" or "other kernel improvements".
The likely culprit is a shitload of micro-optimizations to primitives used by both the kernel and userland, as well as big changes in the VFS and VM subsystems, along with a bunch of scalability improvements (which are there to ensure FreeBSD runs well at up to 1024 threads). The majority of that work isn't even done yet, so there's plenty more Number = Bigger to be had.

Still no standard deviation, min/max values, mean/median reports, or confidence intervals - which suggests this isn't benchmarked properly.
It's frustrating that someone who makes it their business to report on statistics doesn't do it properly.
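
None of it would even be hard to produce - here's a rough Python sketch of the summary statistics I mean (the timings are invented for illustration, since I don't have his raw data):

```python
import statistics

# Invented example: wall-clock times (seconds) from repeated runs of one benchmark
runs = [41.3, 40.9, 42.1, 41.7, 40.8, 41.5, 42.0, 41.2]

n = len(runs)
mean = statistics.mean(runs)
stdev = statistics.stdev(runs)  # sample standard deviation
sem = stdev / n ** 0.5          # standard error of the mean
ci95 = 1.96 * sem               # ~95% confidence interval, normal approximation

print(f"n={n}  min={min(runs)}  max={max(runs)}")
print(f"mean={mean:.2f}  median={statistics.median(runs):.2f}  stdev={stdev:.2f}")
print(f"mean with 95% CI: {mean:.2f} ± {ci95:.2f} s")
```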


NewFatMike
Jun 11, 2015

Larabel produces an insane number of articles daily and, AFAIK, Phoronix is the only website doing that level of reporting on FOSS software, so maybe don't try to hold it against him too hard?

BobHoward
Feb 13, 2012

Producing mountains of shit "journalism" and extremely bad benchmarks doesn't make any of it less shit.

NewFatMike
Jun 11, 2015

lol okay, if it's that mission critical, do your own benchmarks. I'm not gonna shit on the only person doing it if he's doing alright.

MaxxBot
Oct 6, 2003

Yeah I'll take bad benchmarks over no benchmarks.

Cygni
Nov 12, 2005


The 10850K is available for $350 at Microcenter. Frankly, that's an incredible value even with Rocket Lake coming in a month.

https://www.microcenter.com/product/626745/intel-core-i9-10850k-comet-lake-36ghz-ten-core-lga-1200-boxed-processor

MaxxBot
Oct 6, 2003

Also this on Amazon

https://www.amazon.com/Intel-i7-107...HNXWA80J4QA0MCD

Cygni
Nov 12, 2005


I firmly believe in the Anand Lal Shimpi adage that "there are no bad products, just bad prices", and with these price cuts, the Intel parts are straight up tasty. That 10700F is $70 cheaper than the 3700X/5600X and faster than or equivalent to them in basically everything. Still gotta fight motherboard shortages though.

Ugly In The Morning
Jul 1, 2010

Cygni posted:

Rocket Lake coming in a month.


It is going to be so goddamn tempting to upgrade and I hate that I’m even considering it.

FunOne
Aug 20, 2000

Not that I'm tempted, but how far can I go with a straight CPU upgrade from an 8700?

BlankSystemDaemon
Mar 13, 2009



NewFatMike posted:

lol okay, if it's that mission critical, do your own benchmarks. I'm not gonna shit on the only person doing it if he's doing alright.
But that's the entire point: he isn't doing it alright.
I'm not criticizing him because I don't want him to do it - rather, I want him to do a good job, so that the numbers actually mean something and we can draw conclusions from them.
None of this is special or esoteric knowledge - it's so easy that it can be explained with cartoons, as is done in The Cartoon Guide to Statistics.
None of it is hard to accomplish when you use tooling, and he does use tooling - in fact, he's been using his own tooling since 2008.

MaxxBot posted:

Yeah I'll take bad benchmarks over no benchmarks.
The issue is that when benchmarks aren't performed with statistical rigor and with all variables controlled (which, as an example, is what everything on this page is about), you can't actually compare the numbers and get a meaningful answer, because you're not comparing two versions of the same thing, you're comparing two different things - i.e. the old adage about comparing apples and oranges.
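
To make it concrete: with repeated runs of the same workload on both versions, you can actually test whether a difference is signal or noise. A minimal sketch, assuming scipy is available, with invented numbers:

```python
from scipy import stats

# Invented timings (seconds) for the same benchmark on two OS versions
old = [41.3, 40.9, 42.1, 41.7, 40.8, 41.5]
new = [33.2, 33.9, 32.8, 33.5, 34.1, 33.0]

# Welch's t-test: doesn't assume the two samples have equal variance
t, p = stats.ttest_ind(old, new, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")  # small p => the difference is unlikely to be noise
```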

NewFatMike
Jun 11, 2015

You can let him know in the comments, I'm not sure he'd pay ten bucks to read this thread.

BlankSystemDaemon
Mar 13, 2009



NewFatMike posted:

You can let him know in the comments, I'm not sure he'd pay ten bucks to read this thread.
I've reached out to him a few times over the years, and never gotten a response.

Ben Smash
Aug 22, 2005


BlankSystemDaemon posted:

I've reached out to him a few times over the years, and never gotten a response.

Color me extremely shocked.

DrDork
Dec 29, 2003

Ben Smash posted:

Color me extremely shocked.

Yes, I too can't possibly imagine why asking a dude to do 3-10x the number of runs in order to fill out the data needed to get 95% CIs and the rest of the stuff asked for there, on a free website, isn't getting a whole lot of traction. Especially when the software in question is still a beta.

Besides, do we really need candle graphs to understand the point of the entire article, which was "13.x is a fuck ton faster than 12.2"? It's not like it's posing as a resource for you to base hardware scaling decisions on.

BlankSystemDaemon
Mar 13, 2009



Apparently I have no fucking idea what I'm doing when sober, because I only just now, while being tipsy, spotted that I posted the previous posts in the Intel thread, not the Linux thread. orz

Paul MaudDib
May 3, 2006

in Ancient Persia, the men used to debate posts once sober and once drunk, because the post needed to sound good in both states in order to be considered a good idea.

(nah :justpost:)

BlankSystemDaemon
Mar 13, 2009



Paul MaudDib posted:

in Ancient Persia, the men used to debate posts once sober and once drunk, because the post needed to sound good in both states in order to be considered a good idea.

(nah :justpost:)
But what if we post both when sober and tipsy/drunk, that way we can get different takes on arguments back and forth, and double the PPTP (posts per time period).

B-Mac
Apr 21, 2003

Cygni posted:

The 10850K is available for $350 at Microcenter. Frankly, that's an incredible value even with Rocket Lake coming in a month.

https://www.microcenter.com/product/626745/intel-core-i9-10850k-comet-lake-36ghz-ten-core-lga-1200-boxed-processor

Great deal, really unfortunate so many people don’t live near one.

DrDork
Dec 29, 2003

BlankSystemDaemon posted:

But what if we post both when sober and tipsy/drunk, that way we can get different takes on arguments back and forth, and double the PPTP (posts per time period).

If you do, make sure to post the resulting scatter plot of shots vs how good the argument sounds.

BlankSystemDaemon
Mar 13, 2009



DrDork posted:

If you do, make sure to post the resulting scatter plot of shots vs how good the argument sounds.
The worst part is, this sounds like an excellent idea to my drunk brain.

NewFatMike
Jun 11, 2015

BlankSystemDaemon posted:

Apparently I have no fucking idea what I'm doing when sober, because I only just now, while being tipsy, spotted that I posted the previous posts in the Intel thread, not the Linux thread. orz

This is 99% of the reason behind poking at you, I hope we can be friendly e-cquaintances ♥️

Ben Smash
Aug 22, 2005


DrDork posted:

Yes, I too can't possibly imagine why asking a dude to do ...

Lmao holy shit dude, smoke weed or something. Shit's weird for everyone right now.

Kazinsal
Dec 13, 2011



Ben Smash posted:

Lmao holy shit dude, smoke weed or something. Shit's weird for everyone right now.

Havin' a toke to celebrate the start of a long weekend at this very moment, highly recommend

(it also softens the blow of yet another day of there being no stock available for anything more technologically complex than a 6502)

DrDork
Dec 29, 2003

BlankSystemDaemon posted:

The worst part is, this sounds like an excellent idea to my drunk brain.

I mean, I legitimately can think of several worse ways to spend an afternoon. Say it's for science! :science:

Ben Smash posted:

Lmao holy shit dude, smoke weed or something. Shit's weird for everyone right now.

Can't :(

Cygni
Nov 12, 2005


BlankSystemDaemon posted:

Apparently I have no fucking idea what I'm doing when sober, because I only just now, while being tipsy, spotted that I posted the previous posts in the Intel thread, not the Linux thread. orz

lol i thought that might be in the wrong thread cause i didn't recognize most of the words but thought i was just too dumb to get the context

BlankSystemDaemon
Mar 13, 2009



NewFatMike posted:

This is 99% of the reason behind poking at you, I hope we can be friendly e-cquaintances ♥️
No, that's completely fair, posting friend :love:

DrDork posted:

Yes, I too can't possibly imagine why asking a dude to do 3-10x the number of runs in order to fill out the data needed to get 95% CIs and the rest of the stuff asked for there, on a free website, isn't getting a whole lot of traction. Especially when the software in question is still a beta.

Besides, do we really need candle graphs to understand the point of the entire article, which was "13.x is a fuck ton faster than 12.2"? It's not like it's posing as a resource for you to base hardware scaling decisions on.
The tooling does everything if it's made properly - i.e. use puppet/chef/cfengine to connect to the machine, install the tooling, run, reboot, wait, connect, run, rinse, and repeat.
That's the entire point of tooling: to make it automated so you just get the results without having to do anything other than wait - or, you know, do productive stuff, I guess?
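
Stripped of the config management parts, the core loop is trivial - a toy Python sketch, where run-bench.sh is a made-up placeholder for whatever the real benchmark is:

```python
import json
import subprocess
import time

BENCH_CMD = ["./run-bench.sh"]  # hypothetical stand-in for the real benchmark
N_RUNS = 10                     # enough repetitions to compute a meaningful spread

timings = []
for _ in range(N_RUNS):
    start = time.perf_counter()
    subprocess.run(BENCH_CMD, check=True)  # abort the series if a run fails
    timings.append(time.perf_counter() - start)

with open("timings.json", "w") as f:
    json.dump(timings, f)  # feed these into summary statistics afterwards
```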

Devops think they invented the idea, but sysadmins have been doing these things for a shitload of time - as an example, cfengine was imported to FreeBSD Ports back in March 1998, and I remember using it to automate things relatively soon after I started using FreeBSD in 2000.

I guess the reason I care is because statistics is a thing that can be used to show just about anything - including misrepresenting something and outright lying to people, although I don't think that's the case with Michael.
I'd like it if people get it right when they use statistics, because that way it's science and can be replicated by others. :science:


DrDork
Dec 29, 2003

BlankSystemDaemon posted:

The tooling does everything if it's made properly - i.e. use puppet/chef/cfengine to connect to the machine, install the tooling, run, reboot, wait, connect, run, rinse, and repeat.

You're not wrong on that, but it'd still increase the runtime of the test by 3-5x minimum. I'm just saying that asking for hours of extra running is maybe a bit much for an article that notes it's reporting preliminary performance of a beta OS, and given that the performance numbers aren't even close in most tests, a 5-10% difference due to <whatever> isn't really germane to the point of the article. Especially given that it is, as noted, a beta OS, so the whole thing is kinda a curiosity at this point rather than a deep technical article.

BobHoward
Feb 13, 2012


DrDork posted:

You're not wrong on that, but it'd still increase the runtime of the test by 3-5x minimum. I'm just saying that asking for hours of extra running is maybe a bit much for an article that notes it's reporting preliminary performance of a beta OS, and given that the performance numbers aren't even close in most tests, a 5-10% difference due to <whatever> isn't really germane to the point of the article. Especially given that it is, as noted, a beta OS, so the whole thing is kinda a curiosity at this point rather than a deep technical article.

I actually don't get BSD's insistence on running everything N times because statistics. Sometimes that's needed, but usually more for short-running tests where the resolution of the timer influences results. You can get meaningful data out of single runs, especially with the availability of nanosecond-scale timers on many operating systems.
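
E.g. a single timed pass in Python is just this (with a trivial stand-in workload, not a real benchmark):

```python
import time

t0 = time.perf_counter_ns()     # monotonic, nanosecond-resolution timer
total = sum(range(10_000_000))  # stand-in for the real workload under test
elapsed = time.perf_counter_ns() - t0
print(f"{elapsed / 1e6:.3f} ms (checksum: {total})")
```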

But you have to think about that, and a ton of other test methodology pitfalls, to meaningfully test things. You even have to think carefully about what to test. The problem with Phoronix is that Larabel doesn't. His model is shitting out lots of lazy and pointless content. He chooses random topics, puts no effort into making sure he's doing something meaningful, runs a script to generate some graphs, and then writes a bad layman's-level interpretation which makes software engineers cringe.

http://blog.martin-graesslin.com/blog/2012/09/why-i-dont-like-game-rendering-performance-benchmarks/

There's plenty of other criticism out there. It's easy to find, because he regularly pulls shit like "I just proved video card X is better than video card Y" despite not constructing a test where the only variable is whether card X or Y was installed. The problems with his methodology aren't subtle.

DrDork
Dec 29, 2003

BobHoward posted:

I actually don't get BSD's insistence on running everything N times because statistics. Sometimes that's needed, but usually more for short-running tests where the resolution of the timer influences results. You can get meaningful data out of single runs, especially with the availability of nanosecond-scale timers on many operating systems.

There's an argument to be made about the potential impact of background tasks and whatnot, but yeah, for long-running tests that's likely to be less impactful.

But yeah, if you're testing the wrong thing in the wrong way, running it 5 extra times isn't gonna make the resultant data any more relevant.

Gwaihir
Dec 8, 2009
Many of the review sites discuss the difficulty of doing multiple reruns of certain tests, simply because of the time required. They often get hardware not far in advance of the date the NDA drops, and every site has to have their data ready to go on release, otherwise no one will read their stuff. All the good ones of course have their testing automated, but some of those benches (think battery life for light use on the latest ultraportable laptops, or some of the extended SSD benches AnandTech does) are simply not feasible to have done in time.

That doesn't mean everything published is useless though, far from it, and that's the gist of several of these posts, which is a little :???:

Nothing posted is going to be so off-kilter as to make bench results useless, and when one site does have anomalies, the others usually make note of it and post a video about what they got wrong or whatnot.

BlankSystemDaemon
Mar 13, 2009



DrDork posted:

You're not wrong on that, but it'd still increase the runtime of the test by 3-5x minimum. I'm just saying that asking for hours of extra running is maybe a bit much for an article that notes it's reporting preliminary performance of a beta OS, and given that the performance numbers aren't even close in most tests, a 5-10% difference due to <whatever> isn't really germane to the point of the article. Especially given that it is, as noted, a beta OS, so the whole thing is kinda a curiosity at this point rather than a deep technical article.
FreeBSD is produced with a script (documented here) in such a fashion that anyone can build it and get the exact same result (via something called reproducible builds and using isolation to prevent the host environment from impacting the build), and it can even be built on Linux or macOS.
In accordance with the schedule, -BETAn is a snapshot of what will become the final -RELEASE, and it doesn't stick around for long.
What's being benchmarked has been run in production by a lot of people, including Netflix for the entire back-end of their CDN for up to two years prior - it's got nothing to do with being "a beta OS".

You can't even use the Number = Bigger thought process I mentioned earlier, because only a subset of the benchmarks use score-based reporting; the rest use time.

Gwaihir posted:

Many of the review sites discuss the difficulty of doing multiple reruns of certain tests, simply because of the time required. They often get hardware not far in advance of the date the NDA drops, and every site has to have their data ready to go on release, otherwise no one will read their stuff. All the good ones of course have their testing automated, but some of those benches (think battery life for light use on the latest ultraportable laptops, or some of the extended SSD benches AnandTech does) are simply not feasible to have done in time.

That doesn't mean everything published is useless though, far from it, and that's the gist of several of these posts, which is a little :???:

Nothing posted is going to be so off-kilter as to make bench results useless, and when one site does have anomalies, the others usually make note of it and post a video about what they got wrong or whatnot.
He's not on a deadline, though - nobody does this but him. He can take his time and do the necessary benchmarks.

There's a huge difference between testing hardware and testing software, but fundamentally the same rules apply: figure out the minimum reproducible change that you want to test, and avoid changing any other variable when testing it.

In Michael's case, he confounds his benchmarks by using an arbitrary set of hardware which seems to come and go depending on the season and what colour of underwear may or may not be worn.

If Michael wants to present numbers and say something about X or Y being better, he needs to do it properly.
If not, pedantic assholes like me will keep crawling up his ass about how he might as well be lying to people and making claims he can't back up.


DrDork
Dec 29, 2003

BlankSystemDaemon posted:

FreeBSD is produced with a script (documented here) in such a fashion that anyone can build it and get the exact same result...

Sure, but even if you can make the whole testing operation a single automated button click, 5x the run time is still 5x the run time. Similarly, yes, I know the beta will eventually work its way into an RC and then eventually into a stable release, but it's not like they don't make changes to things each step of the way. It's in beta for a reason, and we can expect the final stable release to have somewhat different characteristics because of the fixes they'll be making between now and then. So, combined, it strikes me as pretty reasonable to go "eh, I'm not going to put a ton of effort into exhaustively benchmarking this to statistical significance, because all I'm trying to do at this point is get a rough feel for what sort of performance improvements we might be able to expect"--which the graphs as presented capably do.

You're not wrong that changing up your testing platform every few months makes cross-comparisons very difficult / impossible, but that's a whole different issue. It's not like Intel trying to find hilarious ways to "prove" that the M1 is a dog of a chip because it can't run Crysis natively or whatever.

BlankSystemDaemon posted:

You can't even use the Number = Bigger thought process I mentioned earlier, because only a subset of the benchmarks use score-based reporting; the rest use time.

Really, man? Come on. No one here is so :hurr: as to be incapable of understanding that on some charts bigger = better and on others smaller = better. Doesn't change anything here.

BlankSystemDaemon
Mar 13, 2009



DrDork posted:

Sure, but even if you can make the whole testing operation a single automated button click, 5x the run time is still 5x the run time.
Again, there's no deadline. What's the hurry? Once 13.0-RELEASE happens, there will be even fewer changes, as at that point it's turned over to the security officer, which means only security advisories and errata notices get in - an even more rigorous process.

DrDork posted:

Similarly, yes, I know the beta will eventually work its way into an RC and then eventually into a stable release, but it's not like they don't make changes to things each step of the way.
It's in beta for a reason, and we can expect the final stable release to have somewhat different characteristics because of the fixes they'll be making between now and then.
That's not how the FreeBSD development process works. Changes that affect the characteristics of a given -RELEASE are already in by the time the release engineering branch is created by the release engineer, and from that point on, any commit to the releng branch requires explicit approval by the release engineer.
FreeBSD works under the Principle Of Least Astonishment, which has a lot of different meanings depending on context, but in FreeBSD it means that as soon as code from -CURRENT makes it to -STABLE (the branch releng is created from each time a new -RELEASE happens on the same major version), there shouldn't be any breaking changes that will surprise someone using FreeBSD. That effectively means that while there can be minor improvements and lots of features added, the big sweeping changes that could cause a noticeable effect simply won't occur.

DrDork posted:

So, combined, it strikes me as pretty reasonable to go "eh, I'm not going to put a ton of effort into exhaustively benchmarking this to statistical significance, because all I'm trying to do at this point is get a rough feel for what sort of performance improvements we might be able to expect"--which the graphs as presented capably do.
Judging by the stock people put in the numbers and what's said about them, they're being used to make financial decisions - even before the product is out, I've already seen people talking about moving from Linux to FreeBSD because of those numbers.

DrDork posted:

You're not wrong that changing up your testing platform every few months makes cross-comparisons very difficult / impossible, but that's a whole different issue. It's not like Intel trying to find hilarious ways to "prove" that the M1 is a dog of a chip because it can't run Crysis natively or whatever.
But that's just the tip of the proverbial iceberg - I've mentioned a lot of other issues that compound this, and BobHoward has mentioned some as well. It all combines to make a complete mockery of the whole thing.

DrDork posted:

Really, man? Come on. No one here is so :hurr: as to be incapable of understanding that on some charts bigger = better and on others smaller = better. Doesn't change anything here.
Sure it does.
When he compares Linux vs FreeBSD in one of the next articles, you'll see third-party software getting built with different toolchains; Linux defaults to GCC for basically everything, while FreeBSD uses LLVM for a large portion of the ports tree (there are still some exceptions to this rule, but more and more things move towards LLVM because of all the work the LLVM project is doing, while GCC doesn't appear to want much to do with compatibility, ironically).
What this means is that all the subtleties that go into compilers make a difference in the numbers that appear - just look at any serious compiler talk comparing SPEC results for GCC, LLVM, ICC (Intel's compiler, used for HPC), and other compilers, or look at how Intel themselves are struggling to keep up with Apple, because Apple happens to control both the chip they're making and the toolchain that builds the software (they hired the LLVM folk a long time ago).

And that ignores the fact that there's a whole host of optimization flags which can and often do change, even if the difference between -O2 and -O3 is actually not really worth bothering with, according to this:
https://www.youtube.com/watch?v=r-TLSBdHe1A

DrDork
Dec 29, 2003

You're spending a lot of time arguing against this guy's entire body of work--and you might be right in terms of his other articles and Linux vs FreeBSD or whatever. I don't really care about his other articles.

I am explicitly only talking about this one article and noting that the +/- 5% difference in results that might show up through repeated testing does not meaningfully take away from the massive performance difference being shown between 12.2 and 13.x "fresh out of the box" installs. That's it.

If you know people who are actively planning on moving from a current 'nix production system over to 13.x (vs just noting that it's something they're exploring / considering / waiting for more data on) based on beta results from one dude on the internet, and they have no plans to independently verify the performance, wait for -STABLE, or at minimum wait for subsequent thorough reviews before pulling the trigger, you know some dumb people.

BlankSystemDaemon
Mar 13, 2009



DrDork posted:

You're spending a lot of time arguing against this guy's entire body of work--and you might be right in terms of his other articles and Linux vs FreeBSD or whatever. I don't really care about his other articles.

I am explicitly only talking about this one article and noting that the +/- 5% difference in results that might show up through repeated testing does not meaningfully take away from the massive performance difference being shown between 12.2 and 13.x "fresh out of the box" installs. That's it.

If you know people who are actively planning on moving from a current 'nix production system over to 13.x (vs just noting that it's something they're exploring / considering / waiting for more data on) based on beta results from one dude on the internet, and they have no plans to independently verify the performance, wait for -STABLE, or at minimum wait for subsequent thorough reviews before pulling the trigger, you know some dumb people.
His methodology is flawed in every article I've read, not just this one - which impacts his entire body of work, irrespective of whether it's a 5% or 100% improvement.

A userland performance boost of over 100% cannot be easily explained by the changes that have gone into 13-CURRENT and will be in 13.0-RELEASE.
What could explain it is, for example, the use of INVARIANTS in the kernel (documented here) - but since the methodology is broken, we can't say anything about why it is the way it is, just like the numbers can't be used to say anything about the actual performance, since they're impacted by the methodology too.

Again, I'm not saying that FreeBSD 13 isn't faster - I keep an eye on the tree to see if the code changes without the documentation changing, so I've noticed a lot of the big changes going in.
Thing is, a lot of work has gone in: safe memory reclamation by Jeff Roberson, the substantial VM changes by Jeff, Konstantin Belousov, and Mark Johnston, lockless delayed invalidation by Konstantin, the many micro-optimizations to kernel primitives, a lot of the lockless and per-CPU changes to the VFS, the rewriting of C library functions in hand-rolled assembly by Mateusz Guzik, the depessimization changes by Conrad Meyer, Ryan Libby, Mateusz, et al., plus the new binary tree search implementations by Doug Moore and Edward Tomasz Napierala (my mentor). They, and a lot of other things, all combine to make a difference in terms of performance - but even if you combine them all, there's no reason they should impact userland to the degree shown in those numbers, especially considering that most of these changes are focused largely on improving multi-threaded scalability up to 1024 threads.

The point is, people are going to look at the benchmarks and think "oh neat, 100% improvement", and then when they don't get that 100% performance improvement, they're going to claim anything from "FreeBSD promised them it would be faster" to "FreeBSD isn't as fast as is claimed" (without sourcing the claim), all the way to blaming FreeBSD people for ruining their lives.
I know I'm being hyperbolic.

Also, yeah, there's a lot of people who make stupid decisions based on Number = Bigger - it accounts for a substantial amount of the world economy, in fact.


DrDork
Dec 29, 2003
You're welcome to do your own, statistically significant and rigorous, testing of 12.2 vs 13.0 and report the results and we can see how far off he is, then. I mean, with some OSS automation software out there it's just a couple of easy clicks, right?

Again, anyone who takes single-source, single-run, "hey guys these are preliminary benchmarks of a beta system" third-party benchmarks as "FreeBSD promised <anything>!" shouldn't be making purchasing decisions for anyone, and deserves whatever disappointment they get for not validating that a given rando benchmark actually relates to whatever prod loads they're running. (Yes, I'm sure such people exist, but I have no more sympathy for them than I do for the people who buy the same video card as their friend and then are mad that they don't get the same FPS in games despite the entire rest of their systems being completely different.)


BlankSystemDaemon
Mar 13, 2009



DrDork posted:

You're welcome to do your own, statistically significant and rigorous, testing of 12.2 vs 13.0 and report the results and we can see how far off he is, then. I mean, with some OSS automation software out there it's just a couple of easy clicks, right?

Again, anyone who takes single-source, single-run, "hey guys these are preliminary benchmarks of a beta system" third-party benchmarks as "FreeBSD promised <anything>!" shouldn't be making purchasing decisions for anyone, and deserves whatever disappointment they get for not validating that a given rando benchmark actually relates to whatever prod loads they're running. (Yes, I'm sure such people exist, but I have no more sympathy for them than I do for the people who buy the same video card as their friend and then are mad that they don't get the same FPS in games despite the entire rest of their systems being completely different.)
I don't understand statistics to the degree that a statistician does, so I'm no more qualified to do it than Phoronix is, but anyone who knows any scientific field knows that falsification is a lot easier than producing original research. I'm also almost certain that I would make errors that would invalidate the results just as easily.

Not only do those people exist, they might be the majority since very very few people understand statistics.
Plus, people are just going to accuse me of being biased, since I'm a FreeBSD developer.
