|
Combat Pretzel posted:Is there a reason Intel can't just introduce a new public architecture, that maybe fixes and/or improves issues that may come with x86 (are there even any worthwhile pulling such a move?), and add instructions for the OS to switch the CPU between decoders? So that the OS can run executables of both x86 and the new poo poo, and eventually Intel can do away with the x86 one? Or anything in the CPU based on virtual memory addresses, like everything past 64GB are new instructions, so that everything is transparent and can be mixed, just by letting the OS load at the respective addresses? Binary translation tied to hardware sucks, software binary translation is okay but the performance hit means it's probably not worth it. Also using as much of the virtual address space as possible is a security feature (ASLR et al).
|
# ? Dec 27, 2016 19:15 |
|
Well, the current x86-to-µOp decoder is just that. Add a second New-to-µOp decoder and some means to switch between them, to enable a transition period. But the core question, I guess, is still whether it's actually worth it to do away with x86 and go with a newer instruction set closer to the micro-ops (or whatever else might streamline things or help with performance). And the second question is whether it'd actually be worth it to start. I'd say Microsoft will play along, and people that do more than just run a browser are probably going to upgrade often enough to not draw out such a hypothetical transition period. But there's still AMD. As far as the address space goes, there's plenty in 64-bit mode to create a virtual boundary to use to switch between instruction sets and still randomize things.
|
# ? Dec 27, 2016 19:46 |
|
If Intel does kill off old SIMD instructions, how much of an advantage would that actually give them in transistor savings? Wouldn't it be tiny?
|
# ? Dec 27, 2016 20:09 |
|
Rastor posted:Stupid rumor: Intel is considering no longer guaranteeing x86 backward compatibility, sometime in the 2019-2020 time frame. Yea, ill believe that after apple drops PPC and moves to x86
|
# ? Dec 27, 2016 20:13 |
|
Don Lapre posted:Yea, ill believe that after apple drops PPC and moves to x86 That can never happen, there's too much legacy code out there. Unless you think they are so arrogant they would force the switch and subject their users to some kind of slow translated mode for their current software.
|
# ? Dec 30, 2016 02:26 |
|
Rastor posted:That can never happen, there's too much legacy code out there. Unless you think they are so arrogant they would force the switch and subject their users to some kind of slow translated mode for their current software. Itanium New (tm)
|
# ? Dec 30, 2016 03:15 |
|
SourKraut posted:Itanium New (tm) Amusingly, Rosetta debuted as a MIPS to Itanium translator.
|
# ? Dec 30, 2016 04:04 |
|
I thought Rosetta worked well. Creative Suite performance was bad, but the overall transition experience was better than it had any right to be. Platystemon fucked around with this message at 13:21 on Dec 30, 2016 |
# ? Dec 30, 2016 13:19 |
|
Platystemon posted:I thought Rosetta worked well. It did, and like a lot of other good OS X stuff, it was removed too soon.
|
# ? Dec 30, 2016 14:36 |
|
SourKraut posted:Itanium New (tm) I actually ran some scientific HPC stuff on non-x86 Itanium big iron back in the day. It was a nice machine to use; those CPUs had massive caches. Real fast.
|
# ? Dec 31, 2016 00:40 |
|
Combat Pretzel posted:Is there a reason Intel can't just introduce a new public architecture, that maybe fixes and/or improves issues that may come with x86 (are there even any worthwhile pulling such a move?), and add instructions for the OS to switch the CPU between decoders? So that the OS can run executables of both x86 and the new poo poo, and eventually Intel can do away with the x86 one? Or anything in the CPU based on virtual memory addresses, like everything past 64GB are new instructions, so that everything is transparent and can be mixed, just by letting the OS load at the respective addresses? As other posters have alluded to, Intel did this in about the year 2000, it was called the Itanium, it did indeed have a poo poo ton of registers, the initial version did have a hardware x86 execution unit on it, and by and large outside of some niche HPC contexts it was a piece of poo poo because Intel has a long and inglorious reputation of trying to make things that aren't x86 but end up sucking balls. Itaniums are still around (I write software that runs on them among other things!) but their performance never lived up to the hype.
|
# ? Dec 31, 2016 02:00 |
|
feedmegin posted:As other posters have alluded to, Intel did this in about the year 2000, it was called the Itanium, it did indeed have a poo poo ton of registers, the initial version did have a hardware x86 execution unit on it, and by and large outside of some niche HPC contexts it was a piece of poo poo because Intel has a long and inglorious reputation of trying to make things that aren't x86 but end up sucking balls. Itaniums are still around (I write software that runs on them among other things!) but their performance never lived up to the hype. Slightly OT but what do people do that makes software non-portable across architectures? I get it if you're writing assembly, or a bytecode interpreter, or something, but if I have a C program that runs on x86 what exactly would make it not run right on Itanium, or another architecture? Of course I guess that assumes it's a standards-compliant C program, which is basically impossible, so...
|
# ? Dec 31, 2016 02:05 |
|
Paul MaudDib posted:Slightly OT but what do people do that makes software non-portable across architectures? I get it if you're writing assembly, or a bytecode interpreter, or something, but if I have a C program what exactly would make it not run right on another architecture? Of course I guess that assumes it's a standards-compliant C program, which is basically impossible, so... Different preferred alignments, loose versus strict memory coherence if you're doing multithreading, little versus big endian, 64 versus 32 bits (sizeof(int) for example), alignment read/write restrictions (try writing an unaligned word on an out-of-the-box SPARC Solaris box versus x86). That's just modern CPUs; it gets weirder with the sort of machines that were around when C came into existence (36-bit words, non-two's-complement integer arithmetic, etc.) or indeed some embedded stuff. Oh, btw, ARM (32-bit) has 16 general-purpose registers if you're writing user-level code. 32 registers is indeed more usual in a RISC, but 16 certainly isn't laughable the way 8 was.
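To make a couple of those concrete, here's a minimal C sketch (hypothetical code, not from any real project) showing the endianness, type-size, and alignment traps in one place:
code:
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    /* Endianness: the same four bytes come back as different
       integers on little-endian x86 vs big-endian SPARC/POWER. */
    unsigned char bytes[4] = {0x01, 0x02, 0x03, 0x04};
    uint32_t v;
    memcpy(&v, bytes, sizeof v);
    printf("0x%08x\n", (unsigned)v); /* 0x04030201 on x86, 0x01020304 on SPARC */

    /* Type sizes: long is 4 bytes on 64-bit Windows (LLP64) but
       8 bytes on 64-bit Linux/Solaris/HP-UX (LP64). */
    printf("sizeof(long) = %zu\n", sizeof(long));

    /* Alignment: x86 quietly tolerates a misaligned word access;
       a stock SPARC box delivers SIGBUS instead. */
    unsigned char buf[8] = {0};
    uint32_t *p = (uint32_t *)(buf + 1); /* almost certainly misaligned */
    (void)p; /* dereferencing *p here would crash on strict-alignment CPUs */
    return 0;
}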
|
# ? Dec 31, 2016 02:10 |
|
So I delidded my 6700k with one of these today (it is a very solid delid tool, btw). I used Coollaboratory Ultra for the new TIM. Prior to the delid, Prime95 would hit 78C during the first part of the test and over 90C on the second. After the delid, the first part is around 65C and the second part maxes at about 75C. Both tests are with the CPU OC'd to 4.6GHz and with a Noctua NH-D15 cooler. Funny aside: the UEFI said it detected a new CPU when I started it up for the first time. I assume it was because I had overclock set to auto and it tests the chip on each boot and detected new thermal characteristics.
|
# ? Dec 31, 2016 02:15 |
|
feedmegin posted:As other posters have alluded to, Intel did this in about the year 2000, it was called the Itanium, it did indeed have a poo poo ton of registers, the initial version did have a hardware x86 execution unit on it, and by and large outside of some niche HPC contexts it was a piece of poo poo because Intel has a long and inglorious reputation of trying to make things that aren't x86 but end up sucking balls. Itaniums are still around (I write software that runs on them among other things!) but their performance never lived up to the hype. How I understand it, Itanium was a completely different concept and x86 was shoehorned in. The question is, how deep is the micro-op format tied to x86? If it's some ideal format Intel thought up and the x86 decoder makes do with it, wouldn't it be better to introduce an overhauled, optimized instruction set and migrate to it in the long term? I've recently read that x86 and x86-64 have quasi-separate decoders to the micro-op format. If so, a new instruction format decoder would be a third one. But as said, if AMD doesn't play along, it's probably futile anyway. Paul MaudDib posted:Slightly OT but what do people do that makes software non-portable across architectures? I get it if you're writing assembly, or a bytecode interpreter, or something, but if I have a C program that runs on x86 what exactly would make it not run right on Itanium, or another architecture? Of course I guess that assumes it's a standards-compliant C program, which is basically impossible, so... I think funny pointer math that uses fixed types for storing intermediate values creates the most drama. Mostly affects 32bit apps going 64bit. I don't think a 64bit-compliant application that ran on x86 will suffer when compiling for, say, Itanium, so long as it didn't involve any assembler.
|
# ? Dec 31, 2016 02:15 |
|
Combat Pretzel posted:How I understand it, Itanium was a completely different concept and x86 was shoehorned in. I'm not sure it would be any better trying to do the same thing now, the concept is basically the same. quote:I think funny pointer math that uses fixed types for storing intermediate values creates the most drama. Mostly affects 32bit apps going 64bit. I don't think a 64bit-compliant application that ran on x86 will suffer when compiling for, say, Itanium, so long as it didn't involve any assembler. I just mentioned a bunch of other things that would bite you; Itanium is friendlier than most to x86 code, but if you're trying to write portable code that runs on Itanium you probably also care about POWER and SPARC (both of which have larger market shares at this point), so you need to worry about all the other stuff I mentioned that is not 32<>64 bit specific. And that's leaving aside OS differences, since Windows and Linux on Itanium are pretty much dead. Itanium==HP-UX (these days), SPARC==Solaris, POWER==AIX (mostly), so you've got that to worry about as well.
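For what it's worth, a tiny hypothetical example of the "funny pointer math" in question — code that happens to work in a 32-bit build and silently truncates under LP64:
code:
#include <stdio.h>
#include <stdint.h>

int main(void) {
    char buf[16];
    char *start = buf, *end = buf + sizeof buf;

    /* Pointer differences belong in ptrdiff_t; stuffing them into
       int works on 32-bit targets and truncates on 64-bit ones
       once the objects get big enough. */
    int len = (int)(end - start);

    /* Worse: round-tripping a pointer through a fixed 32-bit type.
       Fine on 32-bit x86, loses the top half of the address on a
       64-bit target (uintptr_t is the portable spelling). */
    uint32_t addr = (uint32_t)(uintptr_t)start;
    char *back = (char *)(uintptr_t)addr; /* may no longer point at buf */

    printf("%d %p %p\n", len, (void *)start, (void *)back);
    return 0;
}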
|
# ? Dec 31, 2016 02:34 |
Paul MaudDib posted:Slightly OT but what do people do that makes software non-portable across architectures? I get it if you're writing assembly, or a bytecode interpreter, or something, but if I have a C program that runs on x86 what exactly would make it not run right on Itanium, or another architecture? Of course I guess that assumes it's a standards-compliant C program, which is basically impossible, so... There's also the fact that, at the end of the day, all of the C, C++, BASIC, etc. stuff gets turned into machine code anyway, which is as arch-dependent as you can get (I'm assuming that by 'C program' you mean raw code and not a program made using C, as there are no real differences between binaries beyond compiler-specific quirks). You can make 'standard libraries' for everything under the sun, but you'd have to explicitly add support in the libraries for each one that gets made (in addition to the compiler support), which can bog down compilation on everything else. That's why Java came about (it's a platform-agnostic language running on a translator VM, after all), as well as the proliferation of other scripting languages (over compiled ones) since processing time became less and less of a concern. (The Game Boy, for instance, basically required bare-metal ASM programming since there was so little room for error, whereas today people make shoddy scripts in slow interpreters because it's 'fast enough'.) What would be interesting is if someone came up with a 'shader' of sorts (to borrow from GPUs) that allowed platform-agnostic binaries to run bare metal. I know there are people who have gigantic hard ons for FPGAs in CPUs -- I don't see why, given a large enough array and cache, you couldn't vectorize translation (well, given fixed opcode and instruction sizes at least). Watermelon Daiquiri fucked around with this message at 03:03 on Dec 31, 2016 |
|
# ? Dec 31, 2016 02:37 |
|
Watermelon Daiquiri posted:What would be interesting is if someone came up with a 'shader' of sorts (to borrow from GPUs) that allowed platform-agnostic binaries to run bare metal. You've just described how .NET and Android's new ART Java environment work, not to mention the AS/400. It's not a new concept.
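For the curious, the core of the idea fits in a few lines — a toy sketch (purely illustrative, with a made-up opcode set) of a portable stack-machine bytecode that a runtime like ART would translate to native code once at install time instead of interpreting:
code:
#include <stdio.h>

/* A toy stack-machine bytecode: the 'platform-agnostic binary'.
   A real system (JVM, .NET, ART) translates this to native code
   once, at install/load time, rather than looping like this. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

int main(void) {
    int code[] = { OP_PUSH, 6, OP_PUSH, 7, OP_MUL, OP_PRINT, OP_HALT };
    int stack[64], sp = 0, pc = 0;

    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];        break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[--sp]);      break; /* prints 42 */
        case OP_HALT:  return 0;
        }
    }
}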
|
# ? Dec 31, 2016 03:13 |
Well hey, yeah that's kinda it, though part of what I was thinking of was in the context of stuff beyond a specific platform. I honestly don't remember what all I was considering, as I wasn't actually giving it much thought. Thinking about it now, though, it'd be kinda pointless beyond the first time it's executed and distributed. I'm much more hardware than software, anyway.
|
|
# ? Dec 31, 2016 04:23 |
|
Itanium was also intended as Intel's chip to transition to 64-bit. Its failure gave AMD the opportunity to jump in and set the de facto industry standard with x86-64, which Intel now has to license from them. It was a humiliating failure for Intel, and it will be a long time before they try something like it again.
|
# ? Dec 31, 2016 14:04 |
|
Confusion posted:Itanium was also intended as Intel's chip to transition to 64-bit. Its failure gave AMD the opportunity to jump in and set the de facto industry standard with x86-64, which Intel now has to license from them. It was a humiliating failure for Intel, and it will be a long time before they try something like it again. Well sure, they need to wait until we need 128-bit computing.
|
# ? Dec 31, 2016 20:47 |
|
Lowen SoDium posted:So I delidded my 6700k with one of these today (it is a very solid delid tool, btw). I used Coollaboratory Ultra for the new TIM. That's quite a nice drop.
|
# ? Dec 31, 2016 22:43 |
|
Itanium had a lot of problems, but the biggest problem was that VLIW is a very poor approach to general purpose computation. it was even too inefficient for GPUs. I blame Itanium on compiler researchers that overpromised and underdelivered (I'm very curious how many NOPs you'd get per instruction word in non-HPC/scientific applications). if you're a processor architect thinking about VLIW and you're running code other than some very well-defined DSL, stop and remember that the code you're running sucks and that means your processor will suck. also, if you're really curious about how to gently caress up C programs across multiple architectures, the clang sanitizer docs are a good place to start. the obvious stuff that wrecks you when going from 32 to 64 or from architecture to architecture is now widely understood (did you pass int instead of size_t? did you assume int and void* are the same size?) and easily detected by compilers. the insidious stuff is memory coherence (although the C++11 memory model helps) and undefined behavior because that can run the gamut from "program immediately crashes" to "works 99.9999% of the time but very very rarely you get a subtly wrong answer." finally: the idea that there's "portable scripting languages" on one hand and "fast bare metal languages" on the other is wrong. the hard part of porting between different platforms (OSes, really) is the differences in standard libraries and UI, not making things run fast. if you want to have an easy time and still make things run fast, you build most of your code in some reasonably high level language (Python, Java, .NET, whatever), implement the UI in the platform-specific language (C++/C# for Windows, ObjC/Swift for Mac, C++ for Linux, Java for Android), and link computation kernels written in C/assembly/whatever.
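as a contrived sketch of that last point (not from any real codebase): the compiler is allowed to assume signed overflow never happens, so an optimizer can delete the check entirely, while clang's -fsanitize=undefined flags it at runtime:
code:
#include <stdio.h>
#include <limits.h>

/* An overflow check that is itself undefined behavior: if x + 100
   overflows, the compiler may assume it can't and fold this to 0. */
static int about_to_overflow(int x) {
    return x + 100 < x;
}

int main(void) {
    /* Typically prints 1 at -O0 and 0 at -O2 -- the "works 99.9999%
       of the time" failure mode. Build with
       clang -fsanitize=undefined to have the overflow reported. */
    printf("%d\n", about_to_overflow(INT_MAX));
    return 0;
}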
|
# ? Dec 31, 2016 23:34 |
|
New Linus videos comparing Intel Extreme CPUs: https://www.youtube.com/watch?v=jy1M0jkRWWk One of the most succinct summaries of Intel's incrementalism. The price tag chart is pretty messed up.
|
# ? Jan 1, 2017 13:56 |
|
Well, the 10-core Broadwell-E is a bit of an outlier here. The 8-core one would also cost $999 and probably would have delivered slightly better single-thread performance than the 10-core for having four fewer logical threads on the memory controller. That said, double the price for two cores is a little bit much. I guess their high-frequency yields for that core count are kinda poo poo?
|
# ? Jan 1, 2017 14:53 |
|
Combat Pretzel posted:Well, the 10-core Broadwell-E is a bit of an outlier here. The 8-core one would also cost $999 and probably would have delivered slightly better single-thread performance than the 10-core for having four fewer logical threads on the memory controller. Also, given it's a pure epeen product for one percenters, they might as well price it accordingly. What are they going to do, buy even more expensive Xeons over it?
|
# ? Jan 1, 2017 15:53 |
|
Yeah, price/performance is not a concern at this level. I'm on my way to going custom-loop water cooling just so I can squeeze an extra 100 or 200 MHz out of my 6950X.
|
# ? Jan 1, 2017 16:05 |
|
ufarn posted:New Linus videos comparing Intel Extreme CPUs: Linus gave us one of the best laughs last year: https://www.youtube.com/watch?v=gSrnXgAmK8k They may be good at benchmarking, but JFC a RAID50 striped in Windows?
|
# ? Jan 1, 2017 16:10 |
|
Of course, they go with UnRAID going forward.
|
# ? Jan 1, 2017 17:18 |
|
What a loving nightmare.
|
# ? Jan 1, 2017 17:38 |
|
Combat Pretzel posted:The question is, how deep is the micro-op format tied to x86? If it's some ideal format Intel thought up and the x86 decoder makes do with it, wouldn't it be better to introduce an overhauled, optimized instruction set and migrate to it in the long term? I've recently read that x86 and x86-64 have quasi-separate decoders to the micro-op format. If so, a new instruction format decoder would be a third one. That's the other thing I don't get about more registers in general. If you want to actually use them, you're still hitting memory to avoid disturbing others above or below on the call stack. Whether that's nicely encoded in a CISCy 2-byte call opcode or handled one at a time by instructions, it's still happening. JawnV6 fucked around with this message at 21:07 on Jan 1, 2017 |
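A sketch of what that looks like in practice (assuming the x86-64 SysV ABI; which register actually gets used is up to the compiler):
code:
#include <stdio.h>

/* Hypothetical out-of-line call; noinline (GCC/Clang) keeps the
   compiler from inlining it away so the spill is actually visible. */
__attribute__((noinline)) static long helper(long x) { return x * x; }

/* To keep 'sum' live across each call, the compiler typically parks
   it in a callee-saved register (e.g. rbx on x86-64 SysV), which
   still costs a stack save/restore in the prologue/epilogue:
       push rbx    ; save the caller's rbx
       ...loop with sum in rbx...
       pop  rbx    ; restore it before returning
   More architectural registers mostly relocates those saves; it
   doesn't make them free. */
long accumulate(long n) {
    long sum = 0;
    for (long i = 0; i < n; i++)
        sum += helper(i); /* sum must survive every call */
    return sum;
}

int main(void) {
    printf("%ld\n", accumulate(10)); /* sum of squares 0..9 = 285 */
    return 0;
}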
# ? Jan 1, 2017 17:41 |
|
mayodreams posted:Linus gave us one of the best laughs last year: god it's like watching a reality show for techies. the suspense!
|
# ? Jan 1, 2017 17:54 |
|
mayodreams posted:They may be good at benchmarking, but JFC a RAID50 striped in Windows? why is that especially bad
|
# ? Jan 1, 2017 18:42 |
|
blowfish posted:why is that especially bad
|
# ? Jan 1, 2017 19:16 |
|
The thing you take home from Linus' videos is that they are not, how to say, professionals. They sometimes manage to make a pretty solid video, but all their "let's build X" videos are a nightmare. For example: they decided to renovate their home office thing and migrate all their computers to water cooling, in the same loop, with wall-mounted copper pipes. For some reason they were surprised to see massive amounts of corrosion in their water. They also used a bathtub to store their reservoir and pumps. https://www.youtube.com/watch?v=b8bLtg9J1Oc Their endnote on that project was: they're moving to a new place (unsure if that's related to this fiasco, as Linus says they were not planning to move when they started the build) and will not be doing this again; instead they will just get an A/C. Sormus fucked around with this message at 19:26 on Jan 1, 2017 |
# ? Jan 1, 2017 19:24 |
|
In Windows, as a file server, you should be using something like SOFS (Scale-Out File Server) or Storage Spaces. Windows software RAID is poor, especially for his use case, and his choice to do a RAID 50 with the RAID 0 layer in software is just asking for trouble.
Da Mott Man fucked around with this message at 19:35 on Jan 1, 2017 |
# ? Jan 1, 2017 19:25 |
Sormus posted:The thing you take home from Linus' videos is that they are not, how to say, professionals. They sometimes manage to make a pretty solid video, but all their "let's build X" videos are a nightmare. Heh, considering tons of places manage to successfully use water cooling from chillers located on a different floor or area of the building entirely, they really didn't do a good job planning that.
|
|
# ? Jan 1, 2017 19:31 |
|
What specifically are their backgrounds? Comp-sci, engineering? I've always assumed they had something beyond a history of tampering with computers in their childhood.
|
# ? Jan 1, 2017 19:42 |
|
Watermelon Daiquiri posted:Heh, considering tons of places manage to successfully use water cooling from chillers located on a different floor or area of the building entirely, they really didn't do a good job planning that. You probably don't use an unending supply of nasty-rear end tap water to do it, though. I mean, even if you filled it with tap water in the first place, the minerals would eventually precipitate out and you'd get something reasonably clean plus some crust, but a continuous supply of tap water ensures that your parts are going to look like the Elephant's Foot before too long.
|
# ? Jan 1, 2017 19:50 |
|
ITT we have a serious discussion about the academic credentials of YouTube clickbait gaming hardware reviewers.
|
# ? Jan 1, 2017 19:50 |