Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

Combat Pretzel posted:

Is there a reason Intel can't just introduce a new public architecture, that maybe fixes and/or improves issues that may come with x86 (are there even any worth pulling such a move for?), and add instructions for the OS to switch the CPU between decoders? So that the OS can run executables of both x86 and the new poo poo, and eventually Intel can do away with the x86 one? Or anything in the CPU based on virtual memory addresses, like everything past 64GB are new instructions, so that everything is transparent and can be mixed, just by letting the OS load at the respective addresses?

--edit: ^^^ I guess tons more registers might be an idea for an overhauled instruction set.
--edit: Mixing things transparently would be a bitch for calling conventions, if you wanted to introduce new ones, I figure.

Binary translation tied to hardware sucks; software binary translation is okay, but the performance hit means it's probably not worth it. Also, using as much of the virtual address space as possible is a security feature (ASLR et al.).


Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Well, the current x86-to-µOp decoder is just that. Add a second New-to-µOp decoder and some means to switch between them, to enable a transition period. But the core question, I guess, is still whether it's actually worth it to do away with x86 and go with a newer instruction set closer to the micro-ops (or whatever other reason that might streamline things or help with performance). And the second question is whether it'd actually be worth it to start. I'd say Microsoft will play along, and people that do more than just run a browser are probably going to upgrade often enough to not draw out such a hypothetical transition period. But there's still AMD.

As far as the address space goes, there's plenty of room in 64-bit mode to create a virtual boundary for switching between instruction sets and still randomize things.

GRINDCORE MEGGIDO
Feb 28, 1985


If Intel do kill off old SIMD instructions, how much of an advantage (transistor savings) would that actually give them? Wouldn't it be tiny?

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

Rastor posted:

Stupid rumor: Intel is considering no longer guaranteeing x86 backward compatibility, sometime in the 2019-2020 time frame.

Yea, I'll believe that after Apple drops PPC and moves to x86

Rastor
Jun 2, 2001

Don Lapre posted:

Yea, I'll believe that after Apple drops PPC and moves to x86

That can never happen, there's too much legacy code out there. Unless you think they are so arrogant they would force the switch and subject their users to some kind of slow translated mode for their current software.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Rastor posted:

That can never happen, there's too much legacy code out there. Unless you think they are so arrogant they would force the switch and subject their users to some kind of slow translated mode for their current software.
Itanium New (tm)

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

SourKraut posted:

Itanium New (tm)

Amusingly, Rosetta debuted as a MIPS to Itanium translator.

Platystemon
Feb 13, 2012

BREADS
I thought Rosetta worked well. :shobon:

Creative Suite performance was bad, but the overall transition experience was better than it had any right to be.

Platystemon fucked around with this message at 13:21 on Dec 30, 2016

champagne posting
Apr 5, 2006

YOU ARE A BRAIN
IN A BUNKER

Platystemon posted:

I thought Rosetta worked well. :shobon:


It did, and like a lot of other good OS X stuff, it was removed too soon.

pmchem
Jan 22, 2010


SourKraut posted:

Itanium New (tm)

I actually ran some scientific HPC stuff on non-x86 Itanium big iron back in the day. It was a nice machine to use; those CPUs had massive caches. Real fast.

feedmegin
Jul 30, 2008

Combat Pretzel posted:

Is there a reason Intel can't just introduce a new public architecture, that maybe fixes and/or improves issues that may come with x86 (are there even any worth pulling such a move for?), and add instructions for the OS to switch the CPU between decoders? So that the OS can run executables of both x86 and the new poo poo, and eventually Intel can do away with the x86 one? Or anything in the CPU based on virtual memory addresses, like everything past 64GB are new instructions, so that everything is transparent and can be mixed, just by letting the OS load at the respective addresses?

--edit: ^^^ I guess tons more registers might be an idea for an overhauled instruction set.
--edit: Mixing things transparently would be a bitch for calling conventions, if you wanted to introduce new ones, I figure.

As other posters have alluded to, Intel did this in about the year 2000. It was called the Itanium; it did indeed have a poo poo ton of registers, and the initial version did have a hardware x86 execution unit on it. By and large, outside of some niche HPC contexts, it was a piece of poo poo, because Intel has a long and inglorious reputation of trying to make things that aren't x86 that end up sucking balls. Itaniums are still around (I write software that runs on them, among other things!) but their performance never lived up to the hype.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

feedmegin posted:

As other posters have alluded to, Intel did this in about the year 2000. It was called the Itanium; it did indeed have a poo poo ton of registers, and the initial version did have a hardware x86 execution unit on it. By and large, outside of some niche HPC contexts, it was a piece of poo poo, because Intel has a long and inglorious reputation of trying to make things that aren't x86 that end up sucking balls. Itaniums are still around (I write software that runs on them, among other things!) but their performance never lived up to the hype.

Slightly OT but what do people do that makes software non-portable across architectures? I get it if you're writing assembly, or a bytecode interpreter, or something, but if I have a C program that runs on x86 what exactly would make it not run right on Itanium, or another architecture? Of course I guess that assumes it's a standards-compliant C program, which is basically impossible, so...

feedmegin
Jul 30, 2008

Paul MaudDib posted:

Slightly OT but what do people do that makes software non-portable across architectures? I get it if you're writing assembly, or a bytecode interpreter, or something, but if I have a C program what exactly would make it not run right on another architecture? Of course I guess that assumes it's a standards-compliant C program, which is basically impossible, so...

Different preferred alignments; loose versus strict memory coherence if you're doing multithreading; little versus big endian; 64 versus 32 bits (sizeof(int), for example); alignment read/write restrictions (try writing an unaligned word on an out-of-the-box SPARC Solaris box versus x86). That's just modern CPUs; it gets weirder with the sort of machines that were around when C came into existence (36-bit words, non-two's-complement integer arithmetic, etc.) or indeed some embedded stuff.
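To make a couple of those gotchas concrete, here's a minimal C sketch (the file name and variables are invented for illustration); the printed values differ between ILP32/LP64/LLP64 ABIs and between little- and big-endian machines:

code:
/* portability.c -- prints different answers on 32- vs 64-bit systems
 * and on little- vs big-endian machines. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    /* sizeof(long) is 4 on 32-bit Unix, 8 on 64-bit Unix (LP64), and
     * still 4 on 64-bit Windows (LLP64) -- any code assuming "long can
     * hold a pointer" breaks on one of these. */
    printf("sizeof(int)=%zu sizeof(long)=%zu sizeof(void*)=%zu\n",
           sizeof(int), sizeof(long), sizeof(void *));

    /* Endianness: the first byte of 0x11223344 in memory is 0x44 on
     * little-endian (x86) and 0x11 on big-endian (SPARC, POWER). */
    uint32_t word = 0x11223344;
    unsigned char first;
    memcpy(&first, &word, 1);  /* memcpy sidesteps alignment/aliasing traps */
    printf("first byte in memory: 0x%02x\n", first);
    return 0;
}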

Oh, btw, ARM (32-bit) has 16 general purpose registers if you're writing user-level code. 32 registers is indeed more usual in a RISC but 16 certainly isn't laughable the way 8 was.

Lowen SoDium
Jun 5, 2003

Highen Fiber
Clapping Larry
So I delidded my 6700k with one of these today (it is a very solid delid tool, btw). I used Coollaboratory Ultra for the new TIM.

Prior to the delid, Prime95 would hit 78C during the first part of the test and over 90C on the second.

After the delid, the first part is around 65C and the second part maxes at about 75C.

Both tests are with the CPU OC'd to 4.6GHz and with a Noctua NH-D15 cooler.

Funny aside: the UEFI said it detected a new CPU when I started it up for the first time. I assume it was because I had overclock set to auto, and it tests the chip on each boot and detected new thermal characteristics.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

feedmegin posted:

As other posters have alluded to, Intel did this in about the year 2000. It was called the Itanium; it did indeed have a poo poo ton of registers, and the initial version did have a hardware x86 execution unit on it. By and large, outside of some niche HPC contexts, it was a piece of poo poo, because Intel has a long and inglorious reputation of trying to make things that aren't x86 that end up sucking balls. Itaniums are still around (I write software that runs on them, among other things!) but their performance never lived up to the hype.
The way I understand it, Itanium was a completely different concept and x86 was shoehorned in.

The question is, how deeply is the micro-op format tied to x86? If it's some ideal format Intel thought up and the x86 decoder makes do with it, wouldn't it be better to introduce an overhauled, optimized instruction set and migrate to it in the long term? I've recently read that x86 and x86-64 are quasi-separate decoders feeding the micro-op format. If so, a new instruction format decoder would be a third one.

But as said, if AMD doesn't play along, it's probably futile, anyway.

Paul MaudDib posted:

Slightly OT but what do people do that makes software non-portable across architectures? I get it if you're writing assembly, or a bytecode interpreter, or something, but if I have a C program that runs on x86 what exactly would make it not run right on Itanium, or another architecture? Of course I guess that assumes it's a standards-compliant C program, which is basically impossible, so...
I think funny pointer math that uses fixed types for storing intermediate values creates the most drama. Mostly affects 32-bit apps going 64-bit. I don't think a 64-bit-compliant application that ran on x86 will suffer when compiling for, say, Itanium, so long as it didn't involve any assembler.
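A minimal sketch of that "funny pointer math" failure mode, with invented names: stashing a pointer in a fixed 32-bit type compiles everywhere, works on 32-bit x86, and silently truncates on any 64-bit target.

code:
/* truncate.c -- why fixed-width intermediates bite 32->64 bit ports. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int x = 42;
    int *p = &x;

    uint32_t stash = (uint32_t)(uintptr_t)p;  /* drops the top 32 bits
                                                 when pointers are 64-bit */
    int *back = (int *)(uintptr_t)stash;      /* may no longer point at x */

    uintptr_t ok = (uintptr_t)p;              /* the portable spelling:
                                                 uintptr_t round-trips */
    printf("original %p, 32-bit round-trip %p, uintptr_t round-trip %p\n",
           (void *)p, (void *)back, (void *)ok);
    return 0;
}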

feedmegin
Jul 30, 2008

Combat Pretzel posted:

The way I understand it, Itanium was a completely different concept and x86 was shoehorned in.

I'm not sure it would be any better trying to do the same thing now; the concept is basically the same.

quote:

I think funny pointer math that uses fixed types for storing intermediate values creates the most drama. Mostly affects 32-bit apps going 64-bit. I don't think a 64-bit-compliant application that ran on x86 will suffer when compiling for, say, Itanium, so long as it didn't involve any assembler.

I just mentioned a bunch of other things that would bite you; Itanium is friendlier than most to x86 code, but if you're trying to write portable code that runs on Itanium you probably also care about POWER and SPARC (both of which have larger market shares at this point), so you need to worry about all the other stuff I mentioned that is not 32<>64-bit specific. And that's leaving aside OS differences, since Windows and Linux on Itanium are pretty much dead. Itanium==HP-UX (these days), SPARC==Solaris, POWER==AIX (mostly), so you've got that to worry about as well.

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.

Paul MaudDib posted:

Slightly OT but what do people do that makes software non-portable across architectures? I get it if you're writing assembly, or a bytecode interpreter, or something, but if I have a C program that runs on x86 what exactly would make it not run right on Itanium, or another architecture? Of course I guess that assumes it's a standards-compliant C program, which is basically impossible, so...

There's also the fact that, at the end of the day, all of the C, C++, BASIC etc. stuff gets turned into machine code anyway, which is as arch-dependent as you can get (I'm assuming that by 'C program' you mean raw code and not a program made using C, as there are no real differences between binaries beyond compiler-specific quirks). You can make 'standard libraries' for everything under the sun, but you'd have to explicitly add support in the libraries for each arch that gets made (in addition to the compiler support), which can bog down compilation on everything else. That's why Java came about (it's a platform-agnostic language running on a translator VM, after all), as well as the proliferation of other scripting languages (over compiled ones) as processing time became less and less of a concern. (The Game Boy, for instance, basically required bare-metal asm programming since there was so little room for error, whereas today people write shoddy scripts in slow interpreters because it's 'fast enough'.)

What would be interesting is if someone came up with a 'shader' of sorts (to borrow from GPUs) that allowed platform-agnostic binaries to run bare metal. I know there are people who have gigantic hard-ons for FPGAs in CPUs -- I don't see why, given a large enough array and cache, you couldn't vectorize translation (well, given fixed opcode and instruction sizes, at least).

Watermelon Daiquiri fucked around with this message at 03:03 on Dec 31, 2016

feedmegin
Jul 30, 2008

Watermelon Daiquiri posted:

What would be interesting is if someone came up with a 'shader' of sorts (to borrow from GPUs) that allowed platform-agnostic binaries to run bare metal.

You've just described how .NET and Android's new ART Java environment work, not to mention the AS/400. It's not a new concept :)

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
Well hey, yeah, that's kinda it, though part of what I was thinking of was in the context of stuff beyond a specific platform. I honestly don't remember what all I was considering, as I wasn't actually giving it much thought. Thinking about it now, though, it'd be kinda pointless beyond the first time it's executed and distributed. I'm much more hardware than software, anyways.

Confusion
Apr 3, 2009
Itanium was also intended as Intel's chip to transition to 64-bit. Its failure gave AMD the opportunity to jump in and set the de facto industry standard with x86-64, which Intel now has to license from them. It was a humiliating failure for Intel, and it will be a long time before they try something like it again.

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib

Confusion posted:

Itanium was also intended as Intel's chip to transition to 64-bit. Its failure gave AMD the opportunity to jump in and set the de facto industry standard with x86-64, which Intel now has to license from them. It was a humiliating failure for Intel, and it will be a long time before they try something like it again.

Well sure, they need to wait until we need 128 bit computing :haw:

Gwaihir
Dec 8, 2009
Hair Elf

Lowen SoDium posted:

So I delidded my 6700k with one of these today (it is a very solid delid tool, btw). I used Coollaboratory Ultra for the new TIM.

Prior to the delid, Prime95 would hit 78C during the first part of the test and over 90C on the second.

After the delid, the first part is around 65C and the second part maxes at about 75C.

Both tests are with the CPU OC'd to 4.6GHz and with a Noctua NH-D15 cooler.

Funny aside: the UEFI said it detected a new CPU when I started it up for the first time. I assume it was because I had overclock set to auto, and it tests the chip on each boot and detected new thermal characteristics.

That's quite a nice drop.

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party
Itanium had a lot of problems, but the biggest problem was that VLIW is a very poor approach to general purpose computation. it was even too inefficient for GPUs. I blame Itanium on compiler researchers that overpromised and underdelivered (I'm very curious how many NOPs you'd get per instruction word in non-HPC/scientific applications). if you're a processor architect thinking about VLIW and you're running code other than some very well-defined DSL, stop and remember that the code you're running sucks and that means your processor will suck.

also, if you're really curious about how to gently caress up C programs across multiple architectures, the clang sanitizer docs are a good place to start. the obvious stuff that wrecks you when going from 32 to 64 or from architecture to architecture is now widely understood (did you pass int instead of size_t? did you assume int and void* are the same size?) and easily detected by compilers. the insidious stuff is memory coherence (although the C++11 memory model helps) and undefined behavior because that can run the gamut from "program immediately crashes" to "works 99.9999% of the time but very very rarely you get a subtly wrong answer."
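as a hedged illustration of that "works 99.9999% of the time" category (names here are invented), the classic signed-overflow check below looks reasonable, behaves at -O0, and may be deleted entirely at -O2 because signed overflow is undefined:

code:
/* ub.c -- build with: clang -O2 -fsanitize=undefined ub.c */
#include <stdio.h>
#include <limits.h>

static int next_would_wrap(int x)
{
    /* x + 1 overflows before the comparison happens, which is UB;
     * at -O2 a compiler may fold this whole function to "return 0". */
    return x + 1 < x;
}

int main(void)
{
    printf("%d\n", next_would_wrap(INT_MAX));  /* UBSan flags the overflow */
    return 0;
}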

finally: the idea that there's "portable scripting languages" on one hand and "fast bare metal languages" on the other is wrong. the hard part of porting between different platforms (OSes, really) is the differences in standard libraries and UI, not making things run fast. if you want to have an easy time and still make things run fast, you build most of your code in some reasonably high level language (Python, Java, .NET, whatever), implement the UI in the platform-specific language (C++/C# for Windows, ObjC/Swift for Mac, C++ for Linux, Java for Android), and link computation kernels written in C/assembly/whatever.

ufarn
May 30, 2009
New Linus videos comparing Intel Extreme CPUs:

https://www.youtube.com/watch?v=jy1M0jkRWWk

One of the most succinct summaries of Intel's incrementalism. The price tag chart is pretty messed up.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Well, the 10-core Broadwell-E is a slight outlier here. The 8-core one would also cost $999 and probably would have delivered slightly better single-thread performance than the 10-core, for having four fewer logical threads on the memory controller.

That said, double the price for two cores is a little bit much. I guess their high frequency yields for that core count are kinda poo poo?

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

Combat Pretzel posted:

Well, the 10-core Broadwell-E is a slight outlier here. The 8-core one would also cost $999 and probably would have delivered slightly better single-thread performance than the 10-core, for having four fewer logical threads on the memory controller.

That said, double the price for two cores is a little bit much. I guess their high frequency yields for that core count are kinda poo poo?

Also, given it's a pure epeen product for one percenters, they might as well price it accordingly. What are they going to do, buy even more expensive Xeons over it?

DarkEnigma
Mar 31, 2001
Yeah, price/performance is not a concern at this level. I'm on my way to custom-loop water cooling just so I can squeeze an extra 100 or 200MHz out of my 6950X.

mayodreams
Jul 4, 2003


Hello darkness,
my old friend

ufarn posted:

New Linus videos comparing Intel Extreme CPUs:

Linus gave us one of the best laughs last year:

https://www.youtube.com/watch?v=gSrnXgAmK8k

They may be good at benchmarking, but JFC a RAID50 striped in Windows? :stare:

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Of course, they're going with UnRAID going forward.

FormatAmerica
Jun 3, 2005
Grimey Drawer
What a loving nightmare.

JawnV6
Jul 4, 2004

So hot ...

Combat Pretzel posted:

The question is, how deeply is the micro-op format tied to x86? If it's some ideal format Intel thought up and the x86 decoder makes do with it, wouldn't it be better to introduce an overhauled, optimized instruction set and migrate to it in the long term? I've recently read that x86 and x86-64 are quasi-separate decoders feeding the micro-op format. If so, a new instruction format decoder would be a third one.
None of this really matters? The micro-op format is more of an implementation detail than some "better" representation of computation than the x86 ISA. The ISA encoding has to be shipped around the network, loaded from disk, held in memory, consume space in MLCs/I$, etc. The micro-op format exists for all of 20mm², where adding another bit might as well be free. They exist with vastly different tradeoffs, and micro-ops would be ill-suited as a general ISA.

That's the other thing I don't get about more registers in general. If you want to actually use them you're still hitting memory to avoid disturbing others above or below on the call stack. Whether that's nicely encoded in a CISCy 2b call opcode or handled one at a time by instructions, it's still happening.

JawnV6 fucked around with this message at 21:07 on Jan 1, 2017
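A hedged C sketch of that register-spill point (function names invented; compile with gcc -O2 -S and read the prologue): any value that lives across a call gets spilled to the stack or parked in a callee-saved register that somebody down the call chain pushes and pops anyway.

code:
/* spills.c -- more architectural registers don't remove this traffic. */
extern long leaf(long x);  /* hypothetical call the compiler can't inline */

long live_across_call(long a, long b)
{
    long t = leaf(a);  /* a and b are live across this call, so the
                          compiler must keep them somewhere leaf() won't
                          clobber -- i.e. the stack, or callee-saved
                          registers that get pushed/popped regardless */
    return t + a * b;
}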

Setset
Apr 14, 2012
Grimey Drawer

mayodreams posted:

Linus gave us one of the best laughs last year:

https://www.youtube.com/watch?v=gSrnXgAmK8k

They may be good at benchmarking, but JFC a RAID50 striped in Windows? :stare:

god it's like watching a reality show for techies. the suspense!

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

mayodreams posted:

They may be good at benchmarking, but JFC a RAID50 striped in Windows? :stare:

why is that especially bad

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

blowfish posted:

why is that especially bad
It's a RAID-0 made out of RAID-5s. If one element of the RAID-0 drops out, it fucks up everything. Also, 8 disks in a RAID-5 is really pushing it. You have to account for the possibility of failure during a rebuild when using very large disks in a RAID-5, which is why there's RAID-6 (double parity) and beyond (like triple-parity RAID-Z) for larger arrays. Or alternative setups like RAID-10. Granted, they're using SSDs, but it'll probably still apply in some form.
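For a rough feel for why a wide RAID-5 rebuild is scary, here's a back-of-envelope sketch; the 1e-14-per-bit URE rate and 8TB disk size are assumed figures commonly quoted for consumer hard drives, not measurements, and SSDs do considerably better:

code:
/* rebuild.c -- odds of an 8-disk RAID-5 rebuild reading the 7 surviving
 * disks without hitting an unrecoverable read error (URE).
 * Build with: cc rebuild.c -lm */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double ure_per_bit = 1e-14;  /* assumed consumer-HDD URE rate */
    const double disk_bytes  = 8e12;   /* hypothetical 8 TB disks */
    const int    survivors   = 7;      /* a rebuild reads every other disk */

    double bits = survivors * disk_bytes * 8.0;
    /* P(no URE) = (1 - p)^bits, approximately exp(-p * bits) for tiny p */
    double p_clean = exp(-ure_per_bit * bits);
    printf("P(clean rebuild) = %.3f\n", p_clean);  /* about 0.01 here */
    return 0;
}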

Sormus
Jul 24, 2007

PREVENT SPACE-AIDS
sanitize your lovebot
between users :roboluv:
The thing you take home from Linus' videos is that they are not, how to put it, professionals. They sometimes manage to make a pretty solid video, but all their "let's build X" videos are a nightmare.

For example: They decided to renovate their home office thing and migrate all their computers to water cooling, in the same loop, with wall-mounted copper pipes. For some reason they were surprised to see massive amounts of corrosion in their water. They also used a bathtub to store their reservoir and pumps.
https://www.youtube.com/watch?v=b8bLtg9J1Oc

Their endnote on that project was:
They're moving to a new place (unsure if that's related to this fiasco, as Linus says they were not planning to move when they started the build) and will not be doing this again; instead they will just get an A/C.

Sormus fucked around with this message at 19:26 on Jan 1, 2017

Da Mott Man
Aug 3, 2012


In Windows, as a file server, you should be using something like SOFS (Scale-Out File Server) or Storage Spaces. Windows software RAID is poor, especially for his use case, and his choice to do a RAID 50 with the RAID 0 in software is just asking for trouble.

Da Mott Man fucked around with this message at 19:35 on Jan 1, 2017

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.

Sormus posted:

The thing you take home from Linus' videos is that they are not, how to put it, professionals. They sometimes manage to make a pretty solid video, but all their "let's build X" videos are a nightmare.

For example: They decided to renovate their home office thing and migrate all their computers to water cooling, in the same loop, with wall-mounted copper pipes. For some reason they were surprised to see massive amounts of corrosion in their water. They also used a bathtub to store their reservoir and pumps.
https://www.youtube.com/watch?v=b8bLtg9J1Oc

Their endnote on that project was:
They're moving to a new place (unsure if that's related to this fiasco, as Linus says they were not planning to move when they started the build) and will not be doing this again; instead they will just get an A/C.

Heh, considering tons of places manage to successfully use water cooling from chillers located on a different floor or area of the building entirely, they really didn't do a good job planning that.

ufarn
May 30, 2009
What specifically are their backgrounds? Comp-sci, engineering? I've always assumed they had something beyond a history of tampering with computers in their childhood.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Watermelon Daiquiri posted:

Heh, considering tons of places manage to successfully use water cooling from chillers located on a different floor or area of the building entirely, they really didn't do a good job planning that.

You probably don't use an unending supply of nasty-rear end tap water to do it though.

I mean even if you filled it with tap water in the first place the minerals would eventually precipitate out and you would get something reasonably clean plus some crust, but a continuous supply of tap water ensures that your parts are going to look like the Elephant's Foot before too long.


ItBurns
Jul 24, 2007
ITT we have a serious discussion about the academic credentials of YouTube clickbait gaming hardware reviewers.
