BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Probably no better in threadripper, possibly worse if it tries to span the interconnect between dies/memory controllers and incurs the additional latency and bandwidth penalty. Those tests don't indicate any bottlenecking from the CPU; GPU limited pretty much exclusively. Game engines are going to target realistic hardware and it must have been more than enough work to make what they have distribute load across 8 cores, I doubt they bothered to do any more if it doesn't provide real gains.
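
If you wanted to rule the interconnect out yourself, pinning the game's threads to a single die is the usual trick. A minimal Linux-flavored sketch (the assumption that CPUs 0-7 share one die is made up for illustration; real topology comes from hwloc or /sys):

```c
/* Hedged sketch: keep a worker thread on one die so its work never
 * spans the interconnect. Linux/GNU-specific (pthread_*_np). */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg) {
    (void)arg;
    printf("worker running on CPU %d\n", sched_getcpu());
    return NULL;
}

int main(void) {
    cpu_set_t one_die;
    CPU_ZERO(&one_die);
    for (int cpu = 0; cpu < 8; cpu++)   /* assumption: CPUs 0-7 are one die */
        CPU_SET(cpu, &one_die);

    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof(one_die), &one_die);

    pthread_t t;
    pthread_create(&t, &attr, worker, NULL);  /* thread starts pre-pinned */
    pthread_join(t, NULL);
    return 0;
}
```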

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

fishmech posted:

Ultimately DEC Alpha was important because DEC was important, and they positioned it as the direct successor to their popular VAX families of processors.


This was true, but early NT was also specifically designed not to favor any particular CPU architecture, and x86 processors weren't all that fast themselves back in the day (though conversely, DEC Alpha hardware was hardly inexpensive, and IIRC DEC never brought Alphas down to their low-end systems in their heyday). Alpha support was only ever in NT 3.1/3.5/4.0, and by 4.0 things were already looking sketchy for the architecture.

I guess you could think of it like this: what if the Intel CPU lines currently stopped at the "Pentium"-branded chips of today, everything i3 and up and all the Xeons were absent, and they were also a few generations behind current? That's roughly what putting 486s or the early Pentiums up against the DEC Alphas was like, with the Alphas playing the role of the very high-end Intel chips of today.

I have a DEC Alpha Multia, which was (at the time) their low-end workstation. I ran Linux and Windows NT on it back in the 90s. The Alpha CPU in it wasn't very fast, but it was about the lowest tier you could get that was still an Alpha. I think mine is 166 MHz (I haven't turned it on in 15 years or more due to dead Multia syndrome, but I'll repair it some day). I remember spending a lot of money on 32 MB of true parity RAM for it. I got mine in 1996, and it was only two years later that DEC was bought by Compaq. They produced an Intel Pentium-based Multia as well, I guess because it was an attractive form factor (a small pizza-box kind of thing).



The cool part was that it was decently fast for a lot of stuff and you had a lot of OS flexibility. The downside was that any x86 binaries for NT would run slowly in FX!32 emulation at first, but FX!32 would "optimize" them for Alpha over time, building a library of Alpha-optimized translations it could refer to. There was even some software to share that library over a LAN, in case you had a lot of Alphas running NT, I guess. They never got as fast as something compiled natively for the Alpha, though.

SwissArmyDruid
Feb 14, 2014

by sebmojo
Couple of new stories aggregated for your perusal:

AMD must shell out $29.5M to settle a class-action lawsuit filed by shareholders who accused the company of misleading investors before Llano's launch. One wonders if they're only capitulating to the settlement *now* because they actually have the funds to pay it out.

PCIe 4.0 to be finalized at the end of this year, 5.0 by 2019. Here's hoping Zen+ and future iterations can keep on top of that, or Ryzen will be short-lived indeed.

1900X?

SwissArmyDruid fucked around with this message at 22:16 on Aug 29, 2017

Not Wolverine
Jul 1, 2007
Alphas were fast, often about twice as fast as the Pentiums available at the time. The problem was that DEC was a dinosaur that didn't believe in things like marketing or personal computers; the Alpha was going to sell itself to mainframe builders everywhere on performance alone.

The Alpha team later went on to work for AMD on the Athlon; those engineers were brilliant. Dirk Meyer came over from DEC to help with the Athlon, they also had silicon valley wizard Jim Keller, and meanwhile Intel was just screwing off with Netburst, talking about 10 GHz Pentium 4s. drat I miss the early 2000s.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

I remember Intel sending dev support people to my work in the late 90s to tell us to change x<<4 to x*=16 all over our code so it would run faster, because of some architectural change in P4. Ridiculous in several ways, but that's Netburst for you.

Kilson
Jan 16, 2003

I EAT LITTLE CHILDREN FOR BREAKFAST !!11!!1!!!!111!

VostokProgram posted:

Was DEC Alpha supposed to be super badass or something back in the day? I find mention of it in a lot of places but no explanation of why it was so interesting

e: besides it having bizarre super weak memory ordering

Each generation of Alpha was pretty much faster than anything else in its target market when it was released, but they didn't have that much software support, and DEC marketing was poo poo.

It's really too bad they didn't get to continue with their plans for EV8, Tarantula, and beyond. Those designs looked pretty ridiculous, and it would've been cool to see if they might have changed the landscape a bit. Here are a couple of interesting reads:

http://www.realworldtech.com/ev8-mckinley/
https://pdfs.semanticscholar.org/024f/3e0ea6a49e536f3d135e73d77323a924498d.pdf

Mr.Radar
Nov 5, 2005

You guys aren't going to believe this, but that guy is our games teacher.

Subjunctive posted:

I remember Intel sending dev support people to my work in the late 90s to tell us to change x<<4 to x*=16 all over our code so it would run faster, because of some architectural change in P4. Ridiculous in several ways, but that's Netburst for you.

Yeah, Intel apparently decided to drop the barrel shifter (which pretty much every other x86 CPU has for fast shifts and rotations by arbitrary amounts) from Netburst, so it had to fall back to microcode loops if you were shifting by more than a single bit. That makes writing constant-time crypto algorithms targeting Netburst a huge PITA.
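
Something like this is presumably what they were after (a hypothetical before/after sketch; the function names are made up, and on anything with a barrel shifter a compiler emits the same instruction for both):

```c
/* Hypothetical sketch of the rewrite Intel was pushing. On most x86
 * chips the barrel shifter makes a shift a single fast op; on early
 * Netburst, multi-bit shifts reportedly fell back to microcode, hence
 * the advice to multiply by the power of two instead. */
unsigned scale_shift(unsigned x) {
    return x << 4;  /* idiomatic: multiply by 16 via a shift */
}

unsigned scale_mul(unsigned x) {
    return x * 16;  /* the P4-friendly version: use the multiplier
                     * instead of the (missing) barrel shifter */
}
```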

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.

why the hell is pcie4.0 going to be so drat short lived? 3 has been around for what, close to a decade?

wargames
Mar 16, 2008

official yospos cat censor

Watermelon Daiquiri posted:

why the hell is pcie4.0 going to be so drat short lived? 3 has been around for what, close to a decade?

Two different teams; 4 had issues and 5 didn't. Why did we go from IPv4 to IPv6? IPv5 had issues.

Kazinsal
Dec 13, 2011



PCIe 3.0 was finalized in 2010, boards started showing up around 2012 with Z77/Ivy Bridge.

Their goal for finalizing PCIe 3.0 was 2008 IIRC, so realistically 5.0 won't land until around 2023, with implementations around 2025.

wargames posted:

Two different teams; 4 had issues and 5 didn't. Why did we go from IPv4 to IPv6? IPv5 had issues.

Plus this. You can actually read the "IPv5" spec though! It's RFC 1819, Internet Stream Protocol Version 2.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
The Gen4 -> Gen5 transition will "only" change the signaling rate, whereas there were some other changes going from 3.1 -> 4.0 (10-bit tag size, scaled flow control, power supply spec changes, etc.).

Of course that's the plan and there will probably be some kind of feature creep that ends up delaying 5.0 anyway.

The biggest reason for the delay in getting to 4.0 is probably the number of companies in the PCI-SIG consortium now (the group that creates the spec). A lot of that is due to the explosion of PCIe-based storage, so drive vendors want their say. It's like herding cats.

With the functional changes ironed out in 4.0, going to 5.0 should just be a physical-layer change.
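
To put numbers on "only the signal rate changes" (a back-of-envelope sketch; the GT/s figures are the published per-generation rates, the rest is just the 128b/130b encoding math):

```c
/* Rough per-lane PCIe bandwidth: the signal rate doubles each
 * generation while the 128b/130b encoding introduced in 3.0 stays
 * the same. */
#include <stdio.h>

int main(void) {
    const char *gen[] = {"3.0", "4.0", "5.0"};
    double gts[]      = {8.0, 16.0, 32.0};   /* GT/s per lane */
    for (int i = 0; i < 3; i++) {
        double gbs = gts[i] * (128.0 / 130.0) / 8.0;  /* GB/s per lane */
        printf("PCIe %s: %.2f GB/s per lane, ~%.1f GB/s for x16\n",
               gen[i], gbs, gbs * 16);
    }
    return 0;
}
```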

isndl
May 2, 2012
I WON A CONTEST IN TG AND ALL I GOT WAS THIS CUSTOM TITLE

wargames posted:

Why did we go from IPv4 to IPv6? IPv5 had issues.

And here I thought it was supposed to make it easy to remember how many numbers you were supposed to input for the IP address. :downs:

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
Honestly, I somehow never connected the 'v' in IPv# to 'version [number]'

I think I just thought the 4/6 meant the number of bytes/halfwords :shobon:

Watermelon Daiquiri fucked around with this message at 08:38 on Aug 30, 2017

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
IPv6 addresses have 16 bytes / 8 half-words (32-bit) though.
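
If anyone wants to verify rather than count colons, a quick sanity check (hypothetical snippet, POSIX sockets):

```c
/* An IPv6 address is 128 bits = 16 bytes = 8 groups of 16 bits
 * (the colon-separated groups in the textual form). */
#include <arpa/inet.h>
#include <stdio.h>

int main(void) {
    struct in6_addr addr;
    inet_pton(AF_INET6, "2001:db8::1", &addr);   /* documentation prefix */
    printf("sizeof(struct in6_addr) = %zu bytes (%zu bits, %zu groups)\n",
           sizeof(addr), sizeof(addr) * 8, sizeof(addr) / 2);
    return 0;
}
```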

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Combat Pretzel posted:

IPv6 addresses have 16 bytes / 8 half-words (32-bit) though.

128 bits.

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
just goes to show I don't care enough to really count them lol

also I was conflating them with link-local addresses, which show fewer groups

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
That's what I said. The 32-bit was to indicate half of which word width.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Combat Pretzel posted:

That's what I said. The 32-bit was to indicate half of which word width.

Ah, ok. Sorry.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
TBH it's kind of annoying that the protocol took that long to gain traction. I have technical literature on IPv6 dated 2001 in some cardboard box.

fishmech
Jul 16, 2006

by VideoGames
Salad Prong
Google's IPv6 usage statistics (gathered on the basis of how many connections they see to all their sites over v6 vs v4) are very interesting for that. For one thing, IPv6 adoption is consistently higher on Saturdays, Sundays and holidays than on normal workdays. For another, Belgium, the US and Greece are the top 3 IPv6 users, in that order.

https://www.google.com/intl/en/ipv6/statistics.html

SCheeseman
Apr 23, 2003

The motherboard for my i7 3770 system died and I wanted a new system in a pinch, so I grabbed a Ryzen 3 1200 and an MSI B350M PRO-VDH, along with some 3200 MHz RAM or whatever.

I managed to easily hit 3.8 GHz with the stock cooler, though I only managed to get 2666 MHz from the DDR4. Still, I don't really notice much of a difference from my old system, which is fantastic for the cost. Dolphin even runs considerably better; I used to get framerate drops in some games and don't now.

The R3 1200 is an absolute bargain A++++

HamHawkes
Jan 25, 2014

SwissCM posted:

The motherboard for my i7 3770 system died and I wanted a new system in a pinch, so I grabbed a Ryzen 3 1200 and an MSI B350M PRO-VDH, along with some 3200 MHz RAM or whatever.

I managed to easily hit 3.8 GHz with the stock cooler, though I only managed to get 2666 MHz from the DDR4. Still, I don't really notice much of a difference from my old system, which is fantastic for the cost. Dolphin even runs considerably better; I used to get framerate drops in some games and don't now.

The R3 1200 is an absolute bargain A++++

What are your plans for that 3770?

eames
May 9, 2009

https://www.youtube.com/watch?v=EtB3uirEhbY

Destiny 2 CPU benchmarks by Gamers Nexus

SCheeseman
Apr 23, 2003

HamHawkes posted:

What are your plans for that 3770?

It's probably going to a friend of mine. I live in Australia, so if you're interested in buying it the cost of shipping would probably make it not worth it.

Anarchist Mae
Nov 5, 2009

by Reene
Lipstick Apathy
Woop. I finally got a retention ring for my Arctic Liquid Freezer 360; the free one they sent got lost in the post, so I ordered a Corsair-branded ring, which is a perfect fit because Asetek makes all of them. So far the CPU is running a whole lot cooler.

With the stock cooler at 3.7 GHz it was topping 87C running mprime/prime95; now it only reaches 61C during the same part of the stress test. I'm actually comfortable running it now, and may even try for a higher overclock.

Scarecow
May 20, 2008

3200mhz RAM is literally the Devil. Literally.
Lipstick Apathy

eames posted:

https://www.youtube.com/watch?v=EtB3uirEhbY

Destiny 2 CPU benchmarks by Gamers Nexus

So SMT atm is completely not being used by D2, sure hope that changes by the time the full game comes out

eames
May 9, 2009

Some TR benchmarks for comparison would have been nice. I suspect the engine tops out at 8 threads because that's what the consoles have.
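
The kind of cap being described would look something like this (a hypothetical sketch, not Bungie's actual code; ENGINE_MAX_WORKERS is an assumed console-derived tuning constant):

```c
/* Sketch: size the worker pool for console-class hardware no matter
 * how many logical CPUs the host actually has. */
#include <stdio.h>
#include <unistd.h>

#define ENGINE_MAX_WORKERS 8   /* assumption: tuned for 8-core consoles */

int main(void) {
    long hw = sysconf(_SC_NPROCESSORS_ONLN);   /* logical CPUs online */
    long workers = hw < ENGINE_MAX_WORKERS ? hw : ENGINE_MAX_WORKERS;
    printf("%ld logical CPUs -> %ld worker threads\n", hw, workers);
    return 0;
}
```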

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

fishmech posted:

Google's IPv6 usage statistics (gathered on the basis of how many connections they see to all their sites over v6 vs v4) are very interesting for that. For one thing, IPv6 adoption is consistently higher on Saturdays, Sundays and holidays than on normal workdays. For another, Belgium, the US and Greece are the top 3 IPv6 users, in that order.

https://www.google.com/intl/en/ipv6/statistics.html
Most of the infrastructure here in Belgium is IPv6-capable. If the telcos pushed users to upgrade to the new IPv6-enabled DSL and cable modems, you'd see that number soar. The current policy is to replace on failure, and those old modems are relatively robust.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Scarecow posted:

So SMT atm is completely not being used by D2, sure hope that changes by the time the full game comes out

There's the typical ~10% improvement between the 7600K and 7700K, so SMT is being used. Based on the fact that the 1600X and 1700 score the same, it probably isn't scaling past 12 threads, which is not unusual for games.

Scarecow
May 20, 2008

3200mhz RAM is literally the Devil. Literally.
Lipstick Apathy

Paul MaudDib posted:

There's the typical ~10% improvement between the 7600K and 7700K, so SMT is being used. Based on the fact that the 1600X and 1700 score the same, it probably isn't scaling past 12 threads, which is not unusual for games.

no, it's that SMT on AMD is flat out not being used atm, see the article:
http://www.gamersnexus.net/game-bench/3038-destiny-2-beta-cpu-benchmarks-testing-research
"For one instance, Destiny 2 doesn’t utilize SMT with Ryzen, producing utilization charts like this:"

Truga
May 4, 2014
Lipstick Apathy

fishmech posted:

Google's IPv6 usage statistics (gathered on the basis of how many connections they see to all their sites over v6 vs v4) are very interesting for that. For one thing, IPv6 adoption is consistently higher on Saturdays, Sundays and holidays than on normal workdays. For another, Belgium, the US and Greece are the top 3 IPv6 users, in that order.

https://www.google.com/intl/en/ipv6/statistics.html

greece probably sold many of their ipv4 blocks off :v:

Arivia
Mar 17, 2011
I think the only important thing from that GN article is this, and it shows how Intel is still totally ahead in everything and perfect in every way:

Scarecow
May 20, 2008

3200mhz RAM is literally the Devil. Literally.
Lipstick Apathy

Arivia posted:

I think the only important thing from that GN article is this, and it shows how Intel is still totally ahead in everything and perfect in every way:



yes lets just ignore that SMT is not working for AMD cpus in D2 atm

CaptainSarcastic
Jul 6, 2013



Scarecow posted:

yes lets just ignore that SMT is not working for AMD cpus in D2 atm

:thejoke:

AVeryLargeRadish
Aug 19, 2011

I LITERALLY DON'T KNOW HOW TO NOT BE A WEIRD SEXUAL CREEP ABOUT PREPUBESCENT ANIME GIRLS, READ ALL ABOUT IT HERE!!!

Scarecow posted:

no its that SMT for AMD is flat out not being used atm see in the article
http://www.gamersnexus.net/game-bench/3038-destiny-2-beta-cpu-benchmarks-testing-research
"For one instance, Destiny 2 doesn’t utilize SMT with Ryzen, producing utilization charts like this:"


But can we really trust this? r/AMD says No!:

The Hottest of Takes posted:

Gamers Nexus is not reliable is anti AMD bullshit, all manipulated. Everything that favors AMD is not reviewed, benchmarked nor shown by gamers nexus. Conclusions are even worst. Stop using that guys things as a "revealed truth" nor information because it is all OPINIONS of one guy who clearly hates AMD or works for competitors. And it's clear that something about that is really going on with the videos he didn't show, then showed and then explained like the "glue interview" one that "was lost". I saw polaris benchmarks with OLD DRIVERS in that site.

STOP.

Scarecow
May 20, 2008

3200mhz RAM is literally the Devil. Literally.
Lipstick Apathy

yeah I missed that one

Arivia
Mar 17, 2011

Scarecow posted:

yeah I missed that one

Yeah, Steve spends the first ten minutes of the video going "there's a lot more to this than just straight benchmarks, please pay attention to all the details" and then the last minute going "so don't take the one graph you like and throw it out there without context as if it's the final word" so I was just doing that for shits and giggles. It's a beta, SMT isn't working, Ryzens are good CPUs.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

AVeryLargeRadish posted:

But can we really trust this? r/AMD says No!:

r/AMD is hosed in the head, because Steve only ever poo poo on the 1700X and 1800X for being overpriced relative to the 1700, and was iffy on the 1700 from a pure gaming standpoint since the 7700K performs better for a similar enough price. Oh, and he rightly pans AMD for Vega. Otherwise I've never seen him explicitly down on AMD; instead he's been supportive of basically everything from the 1700 down, and even Polaris as budget options.

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.

FaustianQ posted:

r/AMD is hosed in the head, because Steve only ever poo poo on the 1700X and 1800X for being overpriced relative to the 1700, and was iffy on the 1700 from a pure gaming standpoint since the 7700K performs better for a similar enough price. Oh, and he rightly pans AMD for Vega. Otherwise I've never seen him explicitly down on AMD; instead he's been supportive of basically everything from the 1700 down, and even Polaris as budget options.

The Hottest of Takes posted:


EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

I mean, yea?
