|
SwissArmyDruid posted:Yeah, it tells us that the 16-core chip we thought was headed for servers... isn't. Intel's X99 flagship is only, what, 8 cores, with HT? I really don't understand what AMD is trying to achieve by subdividing things even more. I mean, who the hell needs 8 cores in a desktop environment NOW, let alone 16? High-density, memory-optimized virt environments trying to minimize CPU steal time.
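For the curious, "steal time" is time a guest's virtual CPU sat runnable while the hypervisor ran someone else; on Linux it's the eighth value after the label on the aggregate cpu line of /proc/stat. A rough C++ sketch of reading it, assuming a Linux guest (nothing here comes from the post itself):

```cpp
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Reads the aggregate "cpu" line from /proc/stat and prints the steal field,
// i.e. jiffies the hypervisor spent running other guests while this VM was runnable.
// Field order: user nice system idle iowait irq softirq steal guest guest_nice.
int main() {
    std::ifstream stat("/proc/stat");
    std::string line;
    if (!std::getline(stat, line)) {
        std::cerr << "couldn't read /proc/stat (not a Linux system?)\n";
        return 1;
    }
    std::istringstream fields(line);
    std::string label;
    unsigned long long user, nice, sys, idle, iowait, irq, softirq, steal;
    fields >> label >> user >> nice >> sys >> idle >> iowait >> irq >> softirq >> steal;
    if (!fields || label != "cpu") {
        std::cerr << "unexpected /proc/stat format\n";
        return 1;
    }
    unsigned long long total = user + nice + sys + idle + iowait + irq + softirq + steal;
    std::cout << "steal: " << steal << " jiffies ("
              << (total ? 100.0 * steal / total : 0.0) << "% of time so far)\n";
}
```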
|
# ¿ Apr 22, 2015 22:46 |
|
cat doter posted:Correct me if I'm wrong, but isn't the whole point of DX12 that it automatically multithreads draw calls? Or do those have to be manually programmed still? Because if so, AMD will continue to be screwed. Parallelized shaders happen in GPU space; the draw calls themselves still have to be recorded from multiple threads by the engine, the API just makes that cheap to do. DX12 does partially remove the headache of having (and synchronizing) separate threads for AI, sound, etc., though.
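To make the CPU-side point concrete: "multithreaded draw calls" in this sense means the engine records independent command lists on several threads and submits them together; nothing happens automatically. A very rough, API-agnostic sketch of that pattern in plain C++. The CommandList/DrawCall types below are made-up stand-ins, not the real D3D12 interfaces:

```cpp
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical stand-ins for real API objects; the point is the threading
// pattern, not any actual D3D12/Vulkan interface.
struct DrawCall { int mesh_id; };
struct CommandList { std::vector<DrawCall> calls; };

// Each worker records its own command list; no locking needed because
// nothing is shared while recording.
void record_chunk(CommandList& list, int first_mesh, int count) {
    for (int i = 0; i < count; ++i)
        list.calls.push_back({first_mesh + i});
}

int main() {
    const int kThreads = 4, kMeshesPerThread = 1000;
    std::vector<CommandList> lists(kThreads);
    std::vector<std::thread> workers;

    for (int t = 0; t < kThreads; ++t)
        workers.emplace_back(record_chunk, std::ref(lists[t]),
                             t * kMeshesPerThread, kMeshesPerThread);
    for (auto& w : workers) w.join();

    // Single "submit" at the end, analogous to handing the recorded command
    // lists to the GPU queue in one go.
    std::size_t total = 0;
    for (const auto& l : lists) total += l.calls.size();
    std::printf("recorded %zu draw calls across %d threads\n", total, kThreads);
}
```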
|
# ¿ Apr 23, 2015 01:01 |
|
LeftistMuslimObama posted:That's an interesting read, but I should probably clarify. I'm an absolute newbie to the subject of graphics. While I have done a lot of programming, it's been exclusively either backend stuff or in a language with its own GUI editors like VB6 or C#.

GPUs are, abstractly, very parallelized in-order floating-point co-processors.

LeftistMuslimObama posted:I'm finding it really hard to track down anything that just lays out "here are shaders, this is what they do and how they work. Here's what a rendering pipeline is." You know, explain it to me like I'm 5 kind of stuff. I've wanted to play around with making some very basic games, but I find it difficult to work with a framework if I don't understand what's under the hood at least a little bit. As an illustration, I was pretty clueless at interacting with Unix systems at a user level until I started taking an operating systems programming course this year and started learning about how everything in Unix is built. Then it just clicked and I'm flying around the filesystem and using the various tools available like a pro*.

Shaders are a practical application of linear algebra, which is itself a practical application of geometry. Remember planes from 7th grade? What happens when those planes intersect? How would you describe the positions? How would a given triangle (described as the boundary of a plane) look? What color?

Now add some notion of depth, because your monitor is flat but it's representing a 3d image. Look at a picture: if you assigned a number to how far "away" each thing is in the 3d sense, even though everything sits on the same flat 2d surface, that number would be depth. Why does it matter? Because otherwise the camera could see through objects in the foreground: a model flying through the air in a game that disappears behind a wall hasn't become invisible, it's just occluded. The GPU looks at the depth and decides "that doesn't need to be rendered." This, too, can be described with shaders.

There's no real "explain like I'm 5" explanation from where you're starting. Learn how processors work, refresh your memory on discrete mathematics and linear algebra, then pick up a book on shaders.
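To pin down the depth idea, here's a minimal C++ sketch (plain code with made-up scene values, not real shader or driver code) of the z-buffer test a GPU effectively runs per pixel: keep the nearest depth seen so far, and throw away fragments that land behind it.

```cpp
#include <array>
#include <cstdio>
#include <limits>

// Toy z-buffer over a tiny 4x4 "screen". Real GPUs do this per pixel in
// hardware; this only illustrates the depth test itself.
constexpr int W = 4, H = 4;

struct Fragment {          // one candidate pixel produced by rasterizing a triangle
    int x, y;
    float depth;           // smaller = closer to the camera (our made-up convention)
    unsigned color;        // packed RGB, value doesn't matter for the example
};

int main() {
    std::array<float, W * H> zbuf;
    std::array<unsigned, W * H> framebuf{};
    zbuf.fill(std::numeric_limits<float>::infinity());  // start "infinitely far away"

    // Two fragments landing on the same pixel: a wall at depth 2.0 and a
    // projectile behind it at depth 5.0. Both values are invented for the example.
    Fragment frags[] = {
        {1, 1, 2.0f, 0xAAAAAA},  // wall
        {1, 1, 5.0f, 0xFF0000},  // projectile behind the wall
    };

    for (const Fragment& f : frags) {
        int idx = f.y * W + f.x;
        if (f.depth < zbuf[idx]) {    // depth test: closer than what's already there?
            zbuf[idx] = f.depth;
            framebuf[idx] = f.color;  // passes: write color
        }                             // fails: fragment is occluded, do nothing
    }

    std::printf("pixel (1,1) shows color 0x%06X at depth %.1f\n", framebuf[5], zbuf[5]);
}
```

The projectile's fragment loses the comparison and never gets shaded, which is exactly the "that doesn't need to be rendered" decision described above.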
|
# ¿ Apr 23, 2015 21:32 |
|
No Gravitas posted:I wonder if it would be possible to just stick copper plates into the CPU to wick heat away. Usually we get a dead/disabled core, but if we could instead of that have on-chip heatsinks. There are already heat spreaders.
|
# ¿ Apr 29, 2015 21:54 |
|
I'm not sure where you get that idea at all. Please look up the history of Acorn Computers and ARM. The appeal of ARM is that it's an easy-to-understand, low-power RISC ISA that scales out to a zillion cores really easily. A lot of the server world is running on stuff that's very memory-intensive but has low CPU requirements, and they're trying to pack density in without overscheduling interrupts on CPUs, which ARM handles really well.

When RISC died 10-ish years ago because the existing vendors didn't see the writing on the wall, Alpha, SPARC, and POWER (less so MIPS) were incredibly expensive without the performance to warrant it (in many common business use cases). x86 won because it was cheap enough and performed well enough. Now AMD and Intel are busy fighting over whether Intel can take away the rest of AMD's market share with $200 CPUs, while the rest of the world is saying "hey, if they can shove one of these CPUs into a Chromebook and sell the whole thing for $200, how much would a CPU+memory+board that runs most of my stuff at an acceptable speed cost? $30?" And they don't have to worry about licensing a ton of poo poo from Intel or AMD with cross-patent bullshit to do it.

Whether or not ARM actually outperforms x86 at similar TDP is the big question everyone's after, and Intel is sort of proving here and there that massively power-gated x86 can come close to beating ARM at the embedded game, but raw performance isn't the question you should be looking at. It's price/performance.
|
# ¿ Jun 2, 2015 18:44 |
|
ARM isn't really public either. It's a huge mess, and the ISA isn't free (but at least it's only $1m or something), but aarch64/armv8-a and armhfp/v7 are finally sane enough for widespread general-purpose OS support. Same for ppc64le, if IBM can not be IBM and price it reasonably. The hardware is incredible. So is the price.
|
# ¿ Jun 2, 2015 23:10 |
|
P8 is a lot cheaper and about 60% faster single-threaded, but it's still $2k minimum for a dev kit, I think.
|
# ¿ Jun 3, 2015 00:23 |
|
PC LOAD LETTER posted:I'd give the edge to the A64x2 era. I think that was the 1st time they ever got a major 'x86' ISA feature jump on Intel. The dual core thing was nice but Intel's Jackson Tech (Hyper Threading) was probably a more elegant solution for the time and as it turns out in the long run too.

SMT has very few technical advantages over just cramming more cores on (assuming your interconnects are fast enough and you're not fighting for access to an off-die memory controller... again, Intel) unless you're expecting cache misses or the workload is interleaved (the cache-miss case is sketched after this post).

PC LOAD LETTER posted:edit: \/\/\/\/\/ Yea early Hyper Threading was moderately inconsistent performance wise. I think there were some tasks where it actually caused some small reductions in performance. Generally though it worked as intended and for most part was 'good enough' without blowing up the CPU die size.

PC LOAD LETTER posted:Putting 2 full CPU's on die/package definitely offered more performance and IMO would've been good as a high end or niche part for the time. But pushing that sort of product for mass production was a mistake IMO for AMD. Yea it gave them a nice performance lead but it also meant they were even more fab limited than they were before which was always a huge problem for them.

It wasn't a fab limitation problem. Maybe the inability to wedge 16 cores onto a die in 2012 was a fab problem, but the fact that they even wanted to do that was a roadmap/architecture problem, in the same way as Intel's "10 GHz P4" roadmaps. Barking up the wrong tree.
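To make that cache-miss caveat concrete, here's a toy pointer-chasing workload in C++ where each thread spends nearly all of its time stalled on memory, which is the situation where an SMT sibling can soak up the otherwise-idle execution units. Purely illustrative: it doesn't pin threads to a core or measure anything, and the sizes are made up.

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <random>
#include <thread>
#include <vector>

// Chase a randomly permuted cycle: every load depends on the previous one and
// almost always misses cache, so the core mostly sits waiting on memory.
std::size_t chase(const std::vector<std::size_t>& next, std::size_t steps) {
    std::size_t i = 0;
    for (std::size_t s = 0; s < steps; ++s) i = next[i];
    return i;  // returned so the compiler can't optimize the loop away
}

int main() {
    const std::size_t N = 1 << 23;  // ~8M entries (~64 MB), well past any 2015-era L3
    std::vector<std::size_t> order(N);
    std::iota(order.begin(), order.end(), std::size_t{0});
    std::shuffle(order.begin(), order.end(), std::mt19937_64{42});

    // Link the shuffled order into one big cycle so every load depends on the last.
    std::vector<std::size_t> next(N);
    for (std::size_t k = 0; k + 1 < N; ++k) next[order[k]] = order[k + 1];
    next[order[N - 1]] = order[0];

    // Two memory-latency-bound workers. Pinned to the two hardware threads of a
    // single SMT core, their stalls would overlap; that's the case where SMT wins.
    std::size_t r1 = 0, r2 = 0;
    std::thread a([&] { r1 = chase(next, 10'000'000); });
    std::thread b([&] { r2 = chase(next, 10'000'000); });
    a.join(); b.join();
    std::printf("done: %zu %zu\n", r1, r2);
}
```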
|
# ¿ Jun 9, 2015 16:35 |
|
Chuu posted:If ARM had anything even remotely competitive to Xeon it would happen very quickly. The reality is, as much as people like to talk about ARM servers, ARM doesn't have anything remotely competitive by pretty much any metric that you would want to rack in a data center.

AArch64 is competitive, depending on your use case. Many workloads these days are light on CPU. AArch64 is a little slower than modern Atom cores (a little), and about on par on performance/watt, though Xeon beats it on both counts. Still, if you're looking for a ton of cores without a lot of heat, and so-so performance is OK, ARM is definitely an option.

Chuu posted:IBM is dumping an absolute ton of money into POWER right now, and they have some big wins in some very specific industries (see: financial, oil & gas). I don't think you'll see power show up in something like EC2 because production just isn't there -- but I wouldn't be surprised to see POWER spread to other industries in the next couple of years.

VostokProgram posted:How difficult would an industry-wide transition from x86 to ARM be? Would we have to throw out the entire IBM PC-esque architecture? Or would it be more like, your motherboard still uses a PCIe bus and UEFI/BIOS firmware and all the other things that define a modern PC, except that the CPU is running a different architecture and therefore your programs need to be recompiled?

BIOS is a no-go (and nobody wants it anyway), but PCIe and UEFI work fine on other architectures. EFI started life on Itanium anyway.

Gwaihir posted:Nope, they're pretty exclusively focused on the server and datacenter market. That's where the $$$$ is. IBM's not interested in selling low margin consumer crap when they can sell big ole server chips for 4 digits a pop.

ARM is chipping away at this, but embedded PowerPC and MIPS are still players, even if the money isn't as flashy as shoving P8s into max-config mainframes and making customers pay to enable them.
|
# ¿ Jul 23, 2015 16:35 |
|
syzygy86 posted:In California, non-compete clauses are invalid entirely (except for some very narrow cases that don't apply here). In most states, it seems they are generally enforceable. It depends a lot. Really. In general, they're more enforceable the further east you go, but I wouldn't say they're "generally enforceable" in most states. It matters whether you acquired an advantage you wouldn't have had without the job (client base, contact with the company that poached you, level of seniority/IP knowledge, etc.).
|
# ¿ Oct 30, 2015 13:22 |