|
Complete conjecture, but if you switch jobs every 2-3 years, the number of forms to fill out is boring and lame. I'm going to randomly guess burn-out.
|
# ? Jun 11, 2020 22:18 |
|
Intel management wanted the headline, wasn't willing to give him the freedom he expects/needs. I'm still not sure what a CPU designer does on a daily basis, or what about a person makes them so much better at it than another.
|
# ? Jun 11, 2020 22:28 |
|
Some Goon posted:Intel management wanted the headline, wasn't willing to give him the freedom he expects/needs. this is from 24 days ago lol https://fortune.com/longform/microchip-designer-jim-keller-intel-fortune-500-apple-tesla-amd/ Some Goon posted:I'm still not sure what a CPU designer does on a daily basis, or what about a person makes them so much better at it than another. he's not a designer, at least not in the ground level jargon. id call him an architect
|
# ? Jun 11, 2020 22:37 |
|
JawnV6 posted:this is from 24 days ago lol https://fortune.com/longform/microchip-designer-jim-keller-intel-fortune-500-apple-tesla-amd/ I’d love to read the rest of this but it’s paywalled.
|
# ? Jun 11, 2020 22:41 |
|
JawnV6 posted:he's not a designer, at least not in the ground level jargon. id call him an architect I'd say he's more of an importer-exporter
|
# ? Jun 11, 2020 22:45 |
|
Some Goon posted:I'm still not sure what a CPU designer does on a daily basis, or what about a person makes them so much better at it than another. He's the best at dreaming up new ways to artificially limit the speeds and functionalities of new computer chips, preventing gamers' rigs everywhere from achieving the benchmark numbers that they were meant to achieve.
|
# ? Jun 11, 2020 22:50 |
Ok Comboomer posted:I’d love to read the rest of this but it’s paywalled. (it's a link; the add-on works in both Firefox and Chrome)
|
|
# ? Jun 11, 2020 22:55 |
|
JawnV6 posted:this is from 24 days ago lol https://fortune.com/longform/microchip-designer-jim-keller-intel-fortune-500-apple-tesla-amd/ And as to what someone like Jim Keller does all day: meetings, so many meetings. A PI position is already back-to-back meetings, and he has been doing media work as well.
|
# ? Jun 12, 2020 09:29 |
|
As a little cross-pollination from the world of wsb, I found this DD take on Intel amusing: https://www.reddit.com/r/wallstreetbets/comments/hbr1ol/get_the_fuck_out_of_intel_and_get_out_immediately/ I think Zen 3 will crush it this year, but I now wonder if Intel can spend its way out of the problem this time by paying off OEMs and such.
|
# ? Jun 21, 2020 19:06 |
|
Rumor has it he really has a family health issue and cannot keep pulling what I assume is an insane workload inside Intel; he's still consulting for now. I kind of believe this one, as otherwise he would jump ship like he has done in the past. Shorting INTC based on that news and the fact that Zen 3 is on schedule after all is one hell of a risky trade. For gamers and PC users, yeah, Zen 3 can absolutely crush it if they manage to get IPC parity and bring down some of the latencies. But institutional shareholders don't really give a poo poo. Desktop is 1) relatively low-margin, 2) not a growth sector, 3) not a fab priority given the current problems meeting demand in other areas, and 4) not where Intel's boutique approach is very profitable. Intel is still a boutique firm with a crapton of customized chips and systems for a wide variety of customers. AMD isn't making a Zen variant for high-speed trading or base stations, and even if they had the engineering bandwidth for it, I doubt they would get the fab bandwidth. Intel doesn't need to pay off OEMs with cash; they pay them off with custom designs.
|
# ? Jun 22, 2020 12:15 |
|
True, but it's not all sunshine and roses for Intel, either: the datacenter is a highly profitable growth area that Intel has utterly dominated up until now, but I've already seen some government buyers and similar either explore or purchase AMD systems for new builds. At some point in the reasonably near future, Intel is gonna have to figure out how to deal with Zen providing more cores at reasonably good IPC at better prices than Xeons, if they want to maintain the quarterly profits they've been used to. That said, I'm still long Intel, because betting against them has, historically, been a fool's move: they've got more cash and more engineering know-how than pretty much anyone else out there.
|
# ? Jun 22, 2020 14:18 |
|
DrDork posted:they've got more cash and more engineering know-how than pretty much anyone else out there That's never under question; it's how Intel is utilizing them. Very poorly, that's how.
|
# ? Jun 22, 2020 17:32 |
|
Rip to Intel Macs. Will be interesting to see how hard Intel fights back by giving its OEMs engineering support, like they did with the initial Ultrabook effort.
|
# ? Jun 22, 2020 19:43 |
|
Cygni posted:Rip to Intel Macs. Will be interesting to see how hard Intel fights back by giving its OEMs engineering support, like they did with the initial Ultrabook effort. I’m more curious to see how MS and the OEMs respond. According to the keynote, Apple’s “good friends at Microsoft” already have Office ready to go for ARM macOS, and presumably have had devkits, etc. and eyeballs behind the curtain for a while. Can’t imagine that they’re not looking at the Surface line or portables from Dell, et al. with similar ideas.
|
# ? Jun 22, 2020 19:57 |
|
Ok Comboomer posted:Can’t imagine that they’re not looking at the Surface line or portables from Dell, et al. with similar ideas. The Surface Pro X already exists, it even has an x86 compatibility layer just like Rosetta.
|
# ? Jun 22, 2020 20:01 |
|
Fame Douglas posted:The Surface Pro X already exists, it even has an x86 compatibility layer just like Rosetta. Emphasis on x86, Microsoft's emulator can't run x86_64 apps which kinda sucks. Tons of stuff is 64-bit only nowadays.
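For the curious, the 32-vs-64-bit distinction the emulator cares about is stamped right into every Windows executable as the PE header's Machine field. A minimal sketch in C of reading it (just the raw header layout, not how Microsoft's tooling actually does it; assumes a well-formed PE file and a little-endian host):

```c
/* Minimal sketch: read the PE Machine field to see whether a Windows
 * executable is x86, x86_64, or ARM64. Not production code. */
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s file.exe\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    /* The DOS header stores the PE header's file offset at 0x3C. */
    uint32_t pe_offset = 0;
    fseek(f, 0x3C, SEEK_SET);
    fread(&pe_offset, sizeof pe_offset, 1, f);

    /* PE header: 4-byte "PE\0\0" signature, then the 2-byte Machine field. */
    uint8_t sig[4] = {0};
    uint16_t machine = 0;
    fseek(f, pe_offset, SEEK_SET);
    fread(sig, 1, 4, f);
    fread(&machine, sizeof machine, 1, f);
    fclose(f);

    if (sig[0] != 'P' || sig[1] != 'E' || sig[2] != 0 || sig[3] != 0) {
        fprintf(stderr, "not a PE executable\n");
        return 1;
    }

    switch (machine) {
    case 0x014c: puts("x86 (32-bit): emulatable on ARM Windows"); break;
    case 0x8664: puts("x86_64: no emulation on ARM Windows (as of mid-2020)"); break;
    case 0xaa64: puts("ARM64: native"); break;
    default:     printf("other machine type: 0x%04x\n", machine);
    }
    return 0;
}
```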
|
# ? Jun 22, 2020 20:03 |
|
I honestly will be more interested to see how the hell Apple plans to not hilariously fragment their userbase. If they've got it so that iPhone/iPad/iMacs are all built on the same architecture, but the MBP / Mac Pro are still built off Intel / x86, I would expect there to be some serious annoyance from users who buy a new laptop and suddenly find themselves restricted to the Apple mobile store. Like, is Apple going to lean heavily on devs to release comparable versions for both architectures, or are they just going to accept that you can't get full versions of a lot of software on the low-power lineup and say that's just the tradeoff you make for better battery life? I think people are a lot more willing to accept that they get a gimped version of Photoshop on an iPad than they are on something that's sold as a laptop (even if it'll be basically a slightly larger iPad with permanently attached keyboard at that point).
|
# ? Jun 22, 2020 20:03 |
|
repiv posted:Emphasis on x86, Microsoft's emulator can't run x86_64 apps which kinda sucks. Tons of stuff is 64-bit only nowadays. That's the opposite of how I expected it to be limited
|
# ? Jun 22, 2020 20:16 |
|
DrDork posted:I honestly will be more interested to see how the hell Apple plans to not hilariously fragment their userbase. If they've got it so that iPhone/iPad/iMacs are all built on the same architecture, but the MBP / Mac Pro are still built off Intel / x86, I would expect there to be some serious annoyance from users who buy a new laptop and suddenly find themselves restricted to the Apple mobile store. Did you watch the keynote/read the quick takes from it? They’ve got a bunch of solutions, including universal binaries, and it sounds like Rosetta 2 is going to be good enough to do x86 emulation for a lot of apps (they showed it running the existing MacOS port of Shadow of the Tomb Raider on their A12Z-based devkit which looks to be the guts of a current iPad Pro in a Mac Mini box + 16gb of RAM and an SSD). Also Apple made a big point to state numerous times that the whole stack was getting switched over in two years. I can’t imagine that they would be announcing that now if they didn’t have MacBook Pro/Mac Pro-capable chips already working in their skunkworks.
|
# ? Jun 22, 2020 20:20 |
|
Yeah, but it doesn't answer all the questions. Universal binaries sound great, but as we've seen by everyone else who has tried them, they're not easy to get right when you're working with non-trivial programs. Rosetta 2 / emulation has promise, but again, I'll believe that as a perfect universal solution when I see it. You're no doubt right that they plan on doing the full stack eventually, but that's still two years away, which is a long time in terms of listening to users complain that they can't get their program to run right. Especially for a company that bills itself as "it just works." e; that said, if there's one tech company out there that can just go "look, bitches, this is the way it is now, you will deal with it and you will like it" and have it work, it's Apple. DrDork fucked around with this message at 20:29 on Jun 22, 2020 |
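For reference on what a universal binary actually is: the same program compiled once per architecture and stitched into one file, which the stock Apple toolchain handles directly. A toy sketch (the clang/lipo commands are the standard ones; the program itself is just to show which slice runs):

```c
/* Toy universal-binary example, not Apple's actual tooling. Build on
 * macOS with:
 *   clang -arch x86_64 -arch arm64 -o hello hello.c
 * or build per-arch binaries and merge them with:
 *   lipo -create hello_x86 hello_arm64 -output hello
 * The loader picks the slice matching the CPU at launch; Rosetta 2 only
 * kicks in when an arm64 Mac runs a binary that has no arm64 slice. */
#include <stdio.h>

int main(void) {
#if defined(__arm64__) || defined(__aarch64__)
    puts("running the arm64 slice");     /* Apple Silicon native */
#elif defined(__x86_64__)
    puts("running the x86_64 slice");    /* Intel native, or under Rosetta 2 */
#else
    puts("running some other slice");
#endif
    return 0;
}
```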
# ? Jun 22, 2020 20:23 |
|
DrDork posted:Yeah, but it doesn't answer all the questions. Universal binaries sound great, but as we've seen by everyone else who has tried them, they're not easy to get right when you're working with non-trivial programs. Rosetta 2 / emulation has promise, but again, I'll believe that as a perfect universal solution when I see it. Eh, this one honestly looks like it’s going to be substantially less painful than the Intel Transition and Rosetta 1. Idk that they would’ve shown off Maya running via Rosetta 2 if they didn’t expect people to try it for themselves and look for holes—but also Apple totally worked extensively with Autodesk behind the scenes to make that successful demo happen.
|
# ? Jun 22, 2020 20:28 |
|
It'll be interesting to see what effects the Apple ARM switch has on wider attempts to produce a viable ARM desktop & server market. If Apple shows up with x86_64-beating desktop CPUs, that might trigger a bunch of investment. Or comedy option: Apple uses their massive piles of cash to enter the server market and then follows that up with their own cloud computing service.
|
# ? Jun 22, 2020 21:52 |
|
repiv posted:Emphasis on x86, Microsoft's emulator can't run x86_64 apps which kinda sucks. Tons of stuff is 64-bit only nowadays. They're working on 64-bit compatibility. But emulation is always going to be comparatively slow, something I'm very dubious even Apple could really solve. Ok Comboomer posted:Eh, this one honestly looks like it’s going to be substantially less painful than the Intel Transition and Rosetta 1. Idk that they would’ve shown off Maya running via Rosetta 2 if they didn’t expect people to try it for themselves and look for holes—but also Apple totally worked extensively with Autodesk behind the scenes to make that successful demo happen. I haven't seen the demonstration, but if it's rendering on the GPU, it would make sense that the emulation penalty wouldn't be as large. But that doesn't mean regular, more CPU-bound programs are going to run amazingly well. Fame Douglas fucked around with this message at 21:59 on Jun 22, 2020
# ? Jun 22, 2020 21:55 |
|
Pablo Bluth posted:It'll be interesting to see what effects the Apple ARM switch has on wider attempts to produce a viable ARM desktop & server market. If Apple shows up with x86_64-beating desktop CPUs, that might trigger a bunch of investment. Or comedy option: Apple uses their massive piles of cash to enter the server market and then follows that up with their own cloud computing service. Those old Xserves certainly sound like real servers. I wonder if a son-of-Xserve would carry on that trend or follow the Mac Pro in an attempt to be as quiet as possible.
|
# ? Jun 22, 2020 22:29 |
|
There have been attempts at producing ARM servers for years, Apple won't be an important factor in server production. At the moment, they're only interesting for some specialized applications.
|
# ? Jun 22, 2020 22:34 |
|
Fame Douglas posted:There have been attempts at producing ARM servers for years, Apple won't be an important factor in server production. At the moment, they're only interesting for some specialized applications. Also all of Apple’s data center servers are like HP, I’m pretty sure. I think they run a few data centers on racked Mac Minis.
|
# ? Jun 22, 2020 22:39 |
|
Fame Douglas posted:There have been attempts at producing ARM servers for years, Apple won't be an important factor in server production. At the moment, they're only interesting for some specialized applications. Cygni posted:Rip to Intel Macs. Will be interesting to see how hard Intel fights back by giving its OEMs engineering support, like they did with the initial Ultrabook effort.
|
# ? Jun 22, 2020 22:53 |
|
ARM on servers is (mostly) what got the ex-Apple Nuvia guy sued by Apple.
|
# ? Jun 22, 2020 23:46 |
|
Fame Douglas posted:There have been attempts at producing ARM servers for years, Apple won't be an important factor in server production. At the moment, they're only interesting for some specialized applications. The #1 supercomputer in the world is Fujitsu’s new ARM cluster (Fugaku), which was announced today. Ampere’s and Marvell’s new server parts look impressive too. Dunno if the x86 Armageddon is finally here, but there is noticeable momentum.
|
# ? Jun 22, 2020 23:56 |
|
Cygni posted:#1 super computer in the world is Fujitsu’s new ARM cluster, was announced today. Ampere’s and Marvell’s new server parts look impressive too. Dunno if the X86 Armageddon is finally here, but there is noticeable momentum. The raw resource cost per chip is not significantly different between an ARM and an Intel; it's roughly the same amount of silicon. So if a race was declared to produce the most TFLOPs per hour of CPUs, the Intel design would be the chosen CPU to produce.
|
# ? Jun 23, 2020 03:59 |
|
human garbage bag posted:The raw resource cost per chip is not significantly different between an ARM and an Intel; it's roughly the same amount of silicon. So if a race was declared to produce the most TFLOPs per hour of CPUs, the Intel design would be the chosen CPU to produce. In a perfect spherical cow vacuum world your first sentence is true. In reality the raw materials aren't what system builders care about, it's the total cost per tested and packaged chip. Raw materials are a tiny fraction of that cost. Different foundries with different process tech can and do have different costs. Even a single foundry can have reduced-cost process recipes on the same node. (For example, reducing the metal layer count for chips which don't need dense wire routing.) But more importantly, marginal cost of production is meaningless when you're thinking about assembling a supercomputer. What matters is the price you pay to the supplier for each chip, which has variables completely independent of the cost of building it. When you buy an Intel chip, you get charged a hefty profit margin because (a) thanks to the long dominance of x86 Windows there's perpetually high demand for x86 CPUs and (b) thanks to exploitation of the patent system, Intel has a near-monopoly on the right to build x86 CPUs. (AMD got legally backdoored in back in the 386 days, and their patent position was eventually bolstered by authoring the AMD64 extensions to the ISA which Intel now has to use. Nevertheless, Intel has been pretty effective at keeping AMD down, and historically so has AMD itself - it hasn't always been the best-run company.) In short, Intel is a monopolist, and they use that to make fat profit margins. Even now, amidst all the horrible setbacks on the 10nm node, they're raking in money. I have no idea what you're going for with your second sentence. If your only goal was to produce lots of paper TFLOPS for some abstract competition, you wouldn't use conventional microprocessors with conventional ISAs like x86 or ARM, you'd go for extremely wide VLIW that was neither Intel nor ARM. Such machines tend to have an unfortunately large ratio between their peak and actual FLOPS, so unless you're only building your super for bragging rights they're not a great idea.
|
# ? Jun 23, 2020 06:36 |
HPC loads are also massively parallelizable, so that does tend to favour massive amounts of cores that're more energy efficient, which is basically ARM's modus operandi.
|
|
# ? Jun 23, 2020 08:00 |
|
D. Ebdrup posted:HPC loads are also massively parallelizable, so that does tend to favour massive amounts of cores that're more energy efficient, which is basically ARM's modus operandi. Less performance per core means you need to spend more power on interconnects, and interconnects (and moving data around in general) are becoming the most expensive part of computation now. Interconnects are basically the entire thing that distinguishes HPC from a rack of commodity PCs. You are specifically not just looking for a bunch of massively parallel hardware; it's about moving data around efficiently. So that's a non-trivial downside.
|
# ? Jun 23, 2020 08:07 |
|
Yeah, at least when it comes to supercomputing, the biggest hurdle, generally, is parallelizing the load. Which, uh, I don't think I'm going to shock anyone here by saying: AMD has quite the advantage at the moment in the PCIe space, which helps immensely.
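To put rough numbers on that hurdle: Amdahl's law says the serial/coordination fraction caps your speedup no matter how many cores you throw at it, which is exactly why per-core performance and interconnects matter. A quick sketch (the parallel fractions are invented for illustration, not measurements of any real workload):

```c
/* Amdahl's law sketch: speedup(N) = 1 / ((1 - p) + p / N), where p is the
 * parallelizable fraction of the work. */
#include <stdio.h>

static double speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    const double fractions[] = { 0.90, 0.95, 0.99 };   /* parallel fraction p */
    const int    cores[]     = { 8, 64, 512 };
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            printf("p=%.2f, %3d cores -> %6.1fx\n",
                   fractions[i], cores[j], speedup(fractions[i], cores[j]));
    /* Even at p = 0.95, 512 weak cores top out around 19x: the serial 5%
     * (coordination, interconnect waits) swallows the rest. */
    return 0;
}
```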
|
# ? Jun 23, 2020 12:20 |
|
BobHoward posted:In a perfect spherical cow vacuum world your first sentence is true. In reality the raw materials aren't what system builders care about, it's the total cost per tested and pacakged chip. Raw materials are a tiny fraction of that cost. I was actually envisioning a total war scenario where a command economy throws out the profit motive. In this scenario I believe that having all foundries produce Intel chips would produce more total TFLOP computing power than having all foundries produce ARM chips.
|
# ? Jun 23, 2020 13:31 |
|
hey does Alereon still post here, just asking. human garbage bag posted:The raw resource cost per chip is not significantly different between an ARM and an Intel; it's roughly the same amount of silicon. So if a race was declared to produce the most TFLOPs per hour of CPUs, the Intel design would be the chosen CPU to produce. looks like uninformed guessin human garbage bag posted:I was actually envisioning a total war scenario where a command economy throws out the profit motive. In this scenario I believe that having all foundries produce Intel chips would produce more total TFLOP computing power than having all foundries produce ARM chips. like how is FLOPS even marginally relevant, what's this problem that is purely compute bound and has no IO or sensitivity to memory bandwidth
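For anyone wondering what "sensitivity to memory bandwidth" means in numbers: the roofline model caps attainable throughput at min(peak compute, bandwidth times arithmetic intensity). A sketch with round, made-up hardware figures (not the specs of any real chip):

```c
/* Roofline sketch: attainable = min(peak compute, bandwidth * intensity). */
#include <stdio.h>

int main(void) {
    const double peak_gflops = 1000.0;   /* hypothetical 1 TFLOP/s peak */
    const double bw_gbs      = 100.0;    /* hypothetical 100 GB/s DRAM  */

    /* Arithmetic intensity: FLOPs performed per byte moved from memory. */
    const double intensity[] = { 0.25, 1.0, 4.0, 16.0 };
    for (int i = 0; i < 4; i++) {
        double mem_cap    = bw_gbs * intensity[i];
        int    mem_bound  = mem_cap < peak_gflops;
        double attainable = mem_bound ? mem_cap : peak_gflops;
        printf("%5.2f FLOP/byte -> %6.1f GFLOP/s (%s-bound)\n",
               intensity[i], attainable, mem_bound ? "memory" : "compute");
    }
    /* Below 10 FLOP/byte this imaginary chip never reaches its paper
     * TFLOPS: most real workloads are bandwidth-limited, not compute. */
    return 0;
}
```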
|
# ? Jun 23, 2020 18:55 |
|
Not to mention that GPUs and matrix processors would give you vastly more bang for your weird FLOPs/chip buck than any CPU.
|
# ? Jun 23, 2020 23:17 |
|
human garbage bag posted:I was actually envisioning a total war scenario where a command economy throws out the profit motive. In this scenario I believe that having all foundries produce Intel chips would produce more total TFLOP computing power than having all foundries produce ARM chips. But why do you believe that? You still haven't given any actual reason why you think Intel chips automatically equals more TFLOPS per wafer. Much less any indication that you understand why peak TFLOPS numbers are not actually what you should be optimizing for in a super. Or any indication that you understand that you can't just decide to make Intel chips on TSMC's process, you'd have to do a costly port.
|
# ? Jun 23, 2020 23:19 |
|
If we ended up in some sort of WWIII scenario where we had to go all-out on chip production, I have to assume that would be directed at the lower-power/embedded stuff that ends up in weapons systems and vehicles, not exactly the stuff gamers drool over. For scale, the upgraded CPU in the F-35 advertises a Dhrystone score of 2900, which puts it on par with a Raspberry Pi 3B+ (yeah I know it likely kills the Pi in other capabilities but we're talking about ~FLOPS~). https://www.l3commercialaviation.com/avionics/products/high-performance-icp/
|
# ? Jun 24, 2020 01:32 |
|
Dhrystone MIPS is a pure integer test FYI, no correlation with FLOPS. Also, even as an integer test, it's a super bad benchmark that nobody should use for anything (like, seriously, it's REAL bad), yet it still survives in the embedded computing realm because people are dummies or something.
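For reference, the "MIPS" in Dhrystone MIPS is itself a normalization artifact: raw iterations per second divided by 1757, the score of the VAX 11/780 that defined "1 MIPS". A sketch of the conversion (the measured figure here is invented, not real benchmark output):

```c
/* DMIPS conversion sketch. Dhrystone reports loop iterations per second;
 * "Dhrystone MIPS" normalizes that against the 1757 iterations/sec of the
 * VAX 11/780, the machine historically called "1 MIPS". */
#include <stdio.h>

#define VAX_11_780_DPS 1757.0   /* Dhrystones/sec on the 1 MIPS baseline */

int main(void) {
    double dhrystones_per_sec = 5.0e6;   /* hypothetical measured score */
    printf("%.0f Dhrystones/s -> %.0f DMIPS\n",
           dhrystones_per_sec, dhrystones_per_sec / VAX_11_780_DPS);
    /* Why it's a bad benchmark: the whole thing fits in L1 cache and leans
     * on string ops modern compilers can largely optimize away, so scores
     * track compiler flags more than the actual memory system. */
    return 0;
}
```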
|
# ? Jun 24, 2020 01:54 |