|
Laslow posted:Yep. Cloud is a race to the bottom and they don’t play that game. lol cloud is a major profit center for big tech companies extending to even hardware makers that cater to cloud providers
|
# ? Nov 23, 2020 06:13 |
|
Apple is pushing more and more into the services game. They aren't a race-to-the-bottom company, but their services aren't exactly race-to-the-bottom either, last I checked iCloud pricing.
WhyteRyce fucked around with this message at 06:25 on Nov 23, 2020 |
# ? Nov 23, 2020 06:23 |
WhyteRyce posted:Apple is pushing more and more into the services game. They aren't a race to the bottom company but their services aren't exactly race to the bottom either last I checked icloud pricing Then it might make economic sense for them to make a buttload of ARM Xserves(AServes?) for themselves to use.
|
|
# ? Nov 23, 2020 07:20 |
|
WhyteRyce posted:Apple is pushing more and more into the services game. They aren't a race to the bottom company but their services aren't exactly race to the bottom either last I checked icloud pricing Sure, they want to sell services. Like every other "cloud" provider not named Amazon, Microsoft, or Google, they'll keep hosting those services on AWS and Azure, because that's cheaper, easier, and less risky than developing their own server hardware from the ground up and building a worldwide network of datacenters to run it all. The really interesting thing here is going to be the server-side transition to ARM. Right now, one of the perceived issues in cloud ARM adoption is that developers would like to be able to run the same binaries and containers locally and server-side, whether that's a specific interpreter/runtime or their own compiled binaries, to make deployment and troubleshooting easier. When local dev is x86-64, then there's a barrier in going to ARM hosts. But, local dev on an ARM Macbook flips the script; now, it'll be easier to deploy to AWS Graviton or whatever equivalent Azure comes up with when they finally roll out ARM VMs. Laslow posted:And maybe thats their opening. They can charge people gently caress-awful prices for personal backups/sync because its so seamless with the iOS and Mac devices that can shove the service signup page into millions of captive faces. Apple has hosted iCloud in Azure and AWS for a long time now. Why would they go back to on-prem hosting now? Having a good CPU is just one tiny part of a very complex equation, and they can pay other people to handle all the headaches for them with better economies of scale than they'll ever have.
|
# ? Nov 23, 2020 07:35 |
|
A simple Google shows Apple indeed has their own data centers and is spending billions to expand
|
# ? Nov 23, 2020 07:39 |
|
Jenny Agutter posted:i actually replaced the single fan with a noctua 120mm fan which led to me checking the temps and realizing the issue. if i get the D15 with a single 140mm fan would it make sense to put the 120mm i already have on it as well, or will they end up causing issues? or should I just trial and error it? this is all in a meshify c The single fan model is the one I'd always recommend, the D15S - the S is actually shaped differently, and doesn't block the top pcie slot HalloKitty fucked around with this message at 10:27 on Nov 23, 2020 |
# ? Nov 23, 2020 10:03 |
|
movax posted:I'll go ask / check what the other modmins think — all it takes is one person to completely ruin it for everyone and I know similar things have been tried in the past, but I like the spirit! I would like to express enthusiasm for this. I relish the opportunity to dig through the IT Drawer Full Of Parts We Keep On Hand For Troubleshooting Purposes Because They're Too Old To Be Used In The Current Machine But Too New To Throw Away And We're Too Lazy To Sell Them For Break-Even When You Factor In Shipping.
|
# ? Nov 23, 2020 11:26 |
|
Mr.Radar posted:Since some of the gains that the M1 is showing do seem to be related to the RISC-style uniformity of the instruction set (specifically the huge ROB and amazing instruction-level parallelism) in a way that would be hard to replicate just by throwing more transistors at the x86 decoder (like they were able to do in the past to compete with RISC architectures) I could see Intel (or AMD) releasing a new "Risc86" architecture that preserves (most of) the semantics of x86(-64) (i.e. something close enough you could automatically translate like 95%+ of x86 assembly to it) but with an instruction encoding that's more uniform and easier to decode enabling them to implement those same type of optimizations. Presumably the chips would continue to include an x86(-64) decoder to run legacy software as well, making the transition as smooth as possible. The YOSPOS AMD or Apple thread had an interesting discussion about this subject. Why is the Apple laptop chip so good? Four possible reasons were identified. 1) Independent of the chip design, Apple is using a better Instruction Set Architecture than Intel. Of course computer-oriented people would want to attribute the increase in performance to a computer-level idea. 2) The Apple chip is just better engineered than the Intel chips, independent of the different constraints put on the different products. 3) Apple, being a computer system company, and not a computer chip company, does not need to make a profit on the sale of their computer chips. Because of this, they are able to make their chips much bigger than Intel, and this gives them an advantage over Intel independent of how well-engineered are the two chip designs. 4) TSMC has a manufacturing process advantage over Intel. I don't know which of the 4 reasons is most important. Maybe they are all important. Thoughts? silence_kit fucked around with this message at 12:15 on Nov 23, 2020 |
# ? Nov 23, 2020 11:48 |
silence_kit posted:The YOSPOS AMD or Apple thread had an interesting discussion about this subject. Why is the Apple laptop chip so good? Four possible reasons were identified. ARMv8a also implements javascript instructions, so it hardly classifies as RISC. Reasons two and three could be part of an explanation, but even if combined, aren't likely to be the whole answer. Reason four seems dubious at best, since that would affect chip yields / binning more, and that's not something we've seen borne out with AMD's chips, which are also manufactured at TSMC. Besides, Intel also gets stuff manufactured at TSMC. EDIT: Another part of the answer is very likely to be that since Apple are funding LLVM development, and since that's a primary part of the toolchain used to build macOS, the people Apple are funding have access to the equivalent of all of the NDA'd stuff that Intel similarly uses for their HPC-targeted compilers for C++, Fortran, and such. This naturally also benefits other projects that use LLVM for ARM, such as FreeBSD. BlankSystemDaemon fucked around with this message at 13:47 on Nov 23, 2020 |
|
# ? Nov 23, 2020 13:27 |
|
silence_kit posted:The YOSPOS AMD or Apple thread had an interesting discussion about this subject. Why is the Apple laptop chip so good? Four possible reasons were identified. Number 4 is quite large. TSMC 5nm means they've got a 30% power advantage over the next closest designs on TSMC 7nm. Intel's 14nm isn't even in the same ballpark. Intel 10nm exists, I guess, but it also sucks. Combined with 3 (apple shoved GIGANTIC amounts of L2 cache on the chip, which equals space), it's a very sizable difference vs another company's product. 1 is probably not a big one in the scheme of things.
|
# ? Nov 23, 2020 13:46 |
|
I don't think Intel makes any CPUs at TSMC as far as I know, and their process is clearly ahead of Intel's current 14nm+++++ trash. But this is probably what, 20-30% advantage in power/performance tops. I'm not a big enough nerd to tell if the instruction set makes a difference, though from what I remember reading, supposedly x86 decoding isn't a significant bottleneck. And Intel has more than enough money to fund development, it's not like you can throw more people and money at it and get linearly better processors. My completely uninformed take is that Intel hosed up in execution/management of process transition and that derailed the architecture development too since poo poo had to be backported to work on 14nm instead developing something new. IIRC their main cash cow is in servers, and that's not going anywhere just yet, but AMD is now a real competitor and Amazon is rolling out their Arm stuff. So they really need to get their poo poo together quickly. BobHoward posted:Intel hosed around with StrongARM for a while, renaming it to Xscale and going through a few generations. In 2006 they sold the entire business unit to Marvell, just in time to avoid being part of the smartphone gold rush. (NB: they already had Xscale products targeted at phones at the time of the sale!) The Atom phones were pretty competitive at the lower-midrange, from what I remember: https://www.youtube.com/watch?v=v_vttBfgt04&t=213s
|
# ? Nov 23, 2020 13:51 |
mobby_6kl posted:I don't think Intel makes any CPUs at TSMC as far as I know, and their process is clearly ahead of Intel's current 14nm+++++ trash. But this is probably what, 20-30% advantage in power/performance tops. Tangentially related, there are also some pretty interesting patents to be found.
|
|
# ? Nov 23, 2020 14:00 |
|
Intel did post a 10% YoY loss for their datacenter group I think, while AMD at the same time posted a 116% gain. The difference is that Intel has something like 18 billion in revenue per quarter while amd has 3. So there's a large buffer there.
|
# ? Nov 23, 2020 14:06 |
Part of that 10% YoY loss is almost certainly from the hyperscalers. Those exist purely because of the commodification of compute as a resource, with oversubscription being the price customers ultimately pay.
|
|
# ? Nov 23, 2020 14:26 |
|
mobby_6kl posted:IIRC their main cash cow is in servers, and that's not going anywhere just yet, but AMD is now a real competitor and Amazon is rolling out their Arm stuff. So they really need to get their poo poo together quickly. So, about Amazon's ARM offering and how good it is: If your application runs with no issues on Amazon's ARM CPUs (Graviton 2), then you are probably able to realize an immediate ~30-50% cost savings by doing so. Groups spending a lot of money on AWS services that have Graviton instance types available should be cutting over ASAP. It's a huge win for them, and I'd expect the vast majority of AWS' own hosted services to be based on it shortly. Amazon has been showing tons of performance numbers on applications that scale well with more cores showing how you can choose between keeping the same instance size and getting better performance, or going down an instance size and keeping similar performance. How on earth did they get access to a world-killing ARM CPU? Because for more than a decade now AWS has been selling "1vCPU = 1 hyper-thread, half of a physical core" and now for their own ARM chips, "1vCPU=1 physical core". So the actual headliner here is that "2 ARM cores are moderately faster at multi-threaded tasks than a single Intel or AMD x86 core", and Amazon is willing to rent you those 2 ARM cores for less money than a single x86 core. Amazon is still going to clean shop with this offering, and it should become the default choice for most AWS applications.
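The vCPU accounting difference is easy to put in numbers. A rough sketch, using roughly the us-east-1 on-demand rates for m5.xlarge and m6g.xlarge around that time (treat the exact dollar figures as illustrative; they vary by region and change over time):

```python
# Illustrative comparison of x86 vs Graviton2 vCPU accounting.
# Prices are approximate us-east-1 on-demand rates circa late 2020;
# treat them as placeholders for the example, not current quotes.

def cost_per_physical_core(hourly_price, vcpus, vcpus_per_core):
    """Hourly price divided by the number of physical cores behind the vCPUs."""
    return hourly_price / (vcpus / vcpus_per_core)

# m5.xlarge: 4 vCPUs, where each vCPU is one hyper-thread (2 vCPUs per core)
x86 = cost_per_physical_core(0.192, vcpus=4, vcpus_per_core=2)

# m6g.xlarge: 4 vCPUs, where each vCPU is a full physical core
arm = cost_per_physical_core(0.154, vcpus=4, vcpus_per_core=1)

print(f"x86 cost per physical core: ${x86:.4f}/hr")
print(f"ARM cost per physical core: ${arm:.4f}/hr")
print(f"per-core saving: {1 - arm / x86:.0%}")
```

Which is exactly the "2 ARM cores for less money than a single x86 core" point: even before any per-core performance comparison, the accounting alone roughly doubles the hardware behind each dollar.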
|
# ? Nov 23, 2020 14:53 |
|
Intel already sends a lot of chipset-scale trailing process work to TSMC; the new part is sending high performance/margin HPC GPU out. That is probably going to TSMC, but rumors are they're looking at Samsung, which would be hilarious and on-brand with the last half decade of intel decision making.
|
# ? Nov 23, 2020 16:26 |
|
Twerk from Home posted:So, about Amazon's ARM offering and how good it is: Also, there's a rumor that AWS isn't pricing Graviton to recoup its full development costs. Even at hyperscaler discount points, CPUs are the biggest cost per node, and AWS would rather capture more of that. Gv2 isn't about any specific ARM architecture benefits but is priced as a lever to get customers to adopt multiarch. Nuvia's whole pitch is using the Apple chip design philosophy in the data center; whether there's enough volume left over to be economically viable is the question.
|
# ? Nov 23, 2020 16:38 |
PCjr sidecar posted:Also, there's a rumor that AWS isn't pricing Graviton to recoup its full development costs. Even at hyperscaler discount points, CPUs are the biggest cost per node, and AWS would rather capture more of that. Gv2 isn't about any specific ARM architecture benefits but is priced as a lever to get customers to adopt multiarch. I gotta say, I'm a bit confused by Nuvia, because they let Jon Masters go, and he was the VP of Software and really knows his poo poo.
|
|
# ? Nov 23, 2020 19:12 |
|
Question about Xeons and strictness of RAM compatibility: I have an i3-6320 in my fileserver. It's been a rather nice value server CPU with a high clock and ECC support. I've been pondering swapping it for a Xeon E3-1270/80 v6 as a low-effort upgrade to give myself two more cores to better support more simultaneous users, if I can snag one on eBay for a good price. My i3-6320 lists compatible RAM as: DDR4-1866/2133, DDR3L-1333/1600 The Xeon, however, lists: DDR4-2400, DDR3L-1866 The 64 GB of RAM in the machine is all DDR4-2133 ECC. Motherboard is a Supermicro MBD-X11SSM-F-O. Will the Xeon be happy to run the RAM at 2133 or must it be exactly 'to spec'? I know consumer SKUs are flexible with this, but the server stuff seems stricter so I've been trying to find out before going ahead and buying one, as the upgrade probably isn't worth it if I have to go through re-buying all the RAM. (At that point, maybe just wait a year or two and buy a whole new mobo+CPU+RAM).
|
# ? Nov 24, 2020 00:54 |
admiraldennis posted:Question about Xeons and strictness of RAM compatibility: The main thing to keep an eye on with respect to memory is whether or not it's registered/buffered, in addition to the usual such as ECC and CAS latency. BlankSystemDaemon fucked around with this message at 03:20 on Nov 24, 2020 |
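To sketch the general rule (how listed speeds work in general, not a guarantee about that specific Supermicro board's qualified memory list): the listed speeds are maximums, so a slower DIMM simply runs at its own rated speed, while a registered/unbuffered mismatch won't work at all. A toy model of that logic:

```python
# Toy model of the memory-controller compatibility rules being described.
# Simplified sketch: real compatibility also depends on the board's QVL.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Dimm:
    generation: str   # e.g. "DDR4"
    speed: int        # rated MT/s
    registered: bool  # RDIMM (True) vs unbuffered UDIMM (False)

@dataclass
class Controller:
    generation: str
    max_speed: int        # e.g. 2400 for the E3-1200 v6 family
    wants_registered: bool

def effective_speed(dimm: Dimm, ctrl: Controller) -> Optional[int]:
    """Speed the pair will run at, or None if they won't work together."""
    if dimm.generation != ctrl.generation:
        return None                         # wrong signaling/keying entirely
    if dimm.registered != ctrl.wants_registered:
        return None                         # RDIMM/UDIMM mismatch won't POST
    return min(dimm.speed, ctrl.max_speed)  # listed speeds are maximums

e3_1270_v6 = Controller("DDR4", max_speed=2400, wants_registered=False)
existing_ram = Dimm("DDR4", speed=2133, registered=False)
print(effective_speed(existing_ram, e3_1270_v6))  # the 2133 sticks run at 2133
```

So for that combo, the DDR4-2133 ECC UDIMMs should just run at 2133 under the E3 v6; no re-buying needed.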
|
# ? Nov 24, 2020 03:18 |
|
silence_kit posted:The YOSPOS AMD or Apple thread had an interesting discussion about this subject. Why is the Apple laptop chip so good? Four possible reasons were identified. I found this Anandtech deepdive really interesting: https://www.anandtech.com/show/16226/apple-silicon-m1-a14-deep-dive It certainly gives the impression of a lot of 1 and 2. A good instruction set, starting from a clean sheet, a good focus on modern requirements, and full control of the software stack so they can include novel features and have them fully used from day 1. Pablo Bluth fucked around with this message at 21:34 on Nov 25, 2020 |
# ? Nov 25, 2020 21:31 |
Pablo Bluth posted:I found this Anandtech deepdive really interesting. It certainly gives the impression of a lot of 1 and 2. They're both effectively CISC-like ISAs running on out-of-order super-scalar processors with branch prediction, micro-op fusion, and micro-op caching (RISC-like CPU cores under the hood), and there's a lot to be said for the kinds of optimization that can be done in the compiler when the same company is working on both the chip and the compiler. Heck, just look at the ~250-page PDF of microarchitecture optimizations for x86-likes, the ~150-page PDF of x86-like assembly language optimization, or the ~400-page PDF keeping track of latency, throughput, and micro-operation breakdowns for individual instructions.
|
|
# ? Nov 25, 2020 22:23 |
|
I won't claim to know the finer details of what counts as ISA and what counts as something else, but the Anandtech article talks about how unusually wide the architecture is and the challenge in going the same route with x86. x86 is now 40 years of making improvements fit with legacy decisions. Even if that just adds the odd percent penalty here and there, compound interest can quickly add up. Would anyone making a clean sheet design today do something even remotely close to what x86 has become?
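A quick illustration of that compounding, assuming (purely for the sake of the example) that each legacy accommodation costs about one percent:

```python
# Illustrative only: if each of n legacy design accommodations costs
# about 1% in power/performance, the penalties multiply rather than add.

def compounded_penalty(per_decision_cost, decisions):
    """Total multiplicative overhead after `decisions` small penalties."""
    return (1 + per_decision_cost) ** decisions

for n in (10, 20, 40):
    print(f"{n} decisions at 1% each -> {compounded_penalty(0.01, n):.2f}x overhead")
```

Forty one-percent penalties work out to roughly a 1.5x overhead, which is how "the odd percent here and there" turns into a real gap over decades.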
|
# ? Nov 26, 2020 00:26 |
|
BSD superfan, your posts make me facepalm so often. The "Javascript instruction" is just a slightly specialized version of FP-to-int. FP conversion is important when implementing JS because JS has no native integer data type, just IEEE double precision floating point. Adding this variant of FP-to-int in ARM v8.3 wasn't much of a change from v8.2. There were already two versions (vector and scalar) of what FJCVTZS does, which is to convert floating point to signed fixed point with rounding towards zero. The "javascript" version just gives you the same result mod 2^32. For some reason that's important to JS interpreters and JITs. The ARM64 instruction set has a lot of opcodes, but most of them are like this: variations on a theme easily supported with the same hardware. Some of the 1980s RISC pioneers have backronymed RISC to "Reduced Instruction Set Complexity" rather than the original "Reduced Instruction Set Computer," because their primary concern was always ease of implementation, not counting up the number of instructions and deciding that if the result was too many it wasn't a RISC any more.
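The semantics FJCVTZS hardwires are easy to model: truncate toward zero, then take the result mod 2^32 into the signed 32-bit range (with NaN and infinities mapping to 0, per the JS spec). A Python sketch of the semantics, not the hardware:

```python
import math

def js_to_int32(x):
    """Model of JS ToInt32 / ARMv8.3 FJCVTZS semantics: truncate toward
    zero, then wrap the integer result into signed 32-bit range."""
    if math.isnan(x) or math.isinf(x):
        return 0                      # the JS spec defines these as 0
    n = int(x) & 0xFFFFFFFF           # int() truncates toward zero; wrap mod 2**32
    return n - 2**32 if n >= 2**31 else n

print(js_to_int32(3.7))              # plain truncation toward zero
print(js_to_int32(-1.9))
print(js_to_int32(4294967296.5))     # 2**32 + 0.5 wraps back around to 0
```

The mod-2^32 wrap (instead of the saturation plain FCVTZS does) is the whole specialization, which is why it was such a cheap addition in v8.3.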
|
# ? Nov 26, 2020 00:53 |
|
ARM is a better instruction set if only because it’s not locked behind 2 drat companies. Intel and AMD can always pivot to making ARM chips, you can’t say the same for making x86.
|
# ? Nov 26, 2020 02:13 |
|
CFox posted:ARM is a better instruction set if only because it’s not locked behind 2 drat companies. Intel and AMD can always pivot to making ARM chips, you can’t say the same for making x86. VIA.
|
# ? Nov 26, 2020 03:51 |
|
Space Gopher posted:The really interesting thing here is going to be the server-side transition to ARM. Right now, one of the perceived issues in cloud ARM adoption is that developers would like to be able to run the same binaries and containers locally and server-side, whether that's a specific interpreter/runtime or their own compiled binaries, to make deployment and troubleshooting easier. When local dev is x86-64, then there's a barrier in going to ARM hosts. But, local dev on an ARM Macbook flips the script; now, it'll be easier to deploy to AWS Graviton or whatever equivalent Azure comes up with when they finally roll out ARM VMs. Are there any projections on how long this transition would take? I assume decades.
|
# ? Nov 26, 2020 04:02 |
|
CFox posted:ARM is a better instruction set if only because it’s not locked behind 2 drat companies. Intel and AMD can always pivot to making ARM chips, you can’t say the same for making x86. As anyone who has worked with NVIDIA as a partner would tell you: lol.
|
# ? Nov 26, 2020 04:16 |
|
Gabriel S. posted:Are there any projections on how long this transition would take? I assume decades. Depends what you mean by "transition." There are small and developing customers who would probably jump on a 30-50% cost AWS cost reduction as quickly as they could--a couple of years at most. But if you're talking "big iron" type customers who have entrenched products that would take massive engineering efforts to port over to a different arch, yeah, a decade+ if ever. The real question will be if AWS can maintain that price delta over time. If not, then it saps a lot of the reason for going with ARM in the first place. But then, AWS gets to set the prices however they like, and presumably have a vested interest in driving ARM adoption, sooo... CFox posted:ARM is a better instruction set if only because it’s not locked behind 2 drat companies. Intel and AMD can always pivot to making ARM chips, you can’t say the same for making x86. Good luck waiting for anyone to whip up a commercially available ARM chip that can compete on the desktop (or laptop, for that matter). Apple ain't sharing, and no one else seems to give a rat's rear end about anything outside the datacenter or cell-phones. To the extent that x86 belongs to Intel and AMD, it's almost irrelevant because the development costs for any competitive chip are so astronomical that no one else is bothering except for custom solutions for their own internal products. DrDork fucked around with this message at 04:22 on Nov 26, 2020 |
# ? Nov 26, 2020 04:19 |
|
actually the best shot for competitive high-performance ARM is NVIDIA. That's where they'll need to aim to satisfy their HPC ambitions. lovely phone efficiency cores with GPUs attached aren't going to be suitable for the HPC roles they're envisioning, they will be pushing their design teams to follow Apple in high-perf cores. if the merger gets turned down - yeah, nobody is doing high-perf ARM cores, that will be available to the commercial market, that dream is over. In a lot of ways NVIDIA buying ARM is really necessary to break the x86 duopoly. Nobody else is going to do it (that will offer their chips to the open market). Paul MaudDib fucked around with this message at 04:42 on Nov 26, 2020 |
# ? Nov 26, 2020 04:39 |
|
CFox posted:ARM is a better instruction set if only because it’s not locked behind 2 drat companies. Intel and AMD can always pivot to making ARM chips, you can’t say the same for making x86. isn't it locked to one company now, nvidia? also gently caress ARM i ain't installing wack rear end emulators to run old games
|
# ? Nov 26, 2020 06:22 |
BobHoward posted:The ARM64 instruction set has a lot of opcodes, but most of them are like this: variations on a theme easily supported with the same hardware. Some of the 1980s RISC pioneers have backronymed RISC to "Reduced Instruction Set Complexity" rather than the original "Reduced Instruction Set Computer," because their primary concern was always ease of implementation, not counting up the number of instructions and deciding that if the result was too many it wasn't a RISC any more. Which should sound familiar, because that was the point of the paper that later gave birth to what's now known as Moore's Law, where one of the most-used words is 'cost'. Also, leave my Britney^wmy gimmick alone! CFox posted:ARM is a better instruction set if only because it’s not locked behind 2 drat companies. Intel and AMD can always pivot to making ARM chips, you can’t say the same for making x86. While it's not nearly as advanced as, for example, the offerings from Intel, AMD, ARM or any of the licensees, RISC-V does exist as an out-of-order super-scalar CPU DrDork posted:Good luck waiting for anyone to whip up a commercially available ARM chip that can compete on the desktop (or laptop, for that matter). Apple ain't sharing, and no one else seems to give a rat's rear end about anything outside the datacenter or cell-phones. To the extent that x86 belongs to Intel and AMD, it's almost irrelevant because the development costs for any competitive chip are so astronomical that no one else is bothering except for custom solutions for their own internal products.
|
|
# ? Nov 26, 2020 09:11 |
silence_kit posted:The YOSPOS AMD or Apple thread had an interesting discussion about this subject. Why is the Apple laptop chip so good? Four possible reasons were identified. The new memory controller allows a single core to saturate the available bandwidth to RAM, allowing for very fast single-core performance. DDR5 provides an additional 50% bandwidth, so 2021/2022 will see an even larger jump. Posted from the Apple M1 thread, here's all the bug workarounds listed on my system code:
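As a back-of-the-envelope check on the bandwidth claim above (assuming the LPDDR4X-4266 on a 128-bit bus that reviews reported for the M1; approximate numbers):

```python
# Rough peak-bandwidth arithmetic: transfer rate (MT/s) x bus width (bytes).
# M1 memory figures per published reviews; the DDR5 line is the post's
# +50% claim applied on top, not a measured number.

def peak_bandwidth_gbs(transfers_per_sec, bus_bits):
    """Theoretical peak in GB/s (decimal gigabytes)."""
    return transfers_per_sec * (bus_bits / 8) / 1e9

m1 = peak_bandwidth_gbs(4266e6, 128)   # LPDDR4X-4266 on a 128-bit bus
print(f"M1-class peak: ~{m1:.1f} GB/s")
print(f"with the +50% DDR5 bump: ~{m1 * 1.5:.1f} GB/s")
```

That ~68 GB/s theoretical peak is in line with what single-core streaming tests measured on the M1, which is what makes the "one core can saturate it" result so unusual.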
|
|
# ? Nov 26, 2020 09:25 |
Coffee Jones posted:Posted from the Apple m1 thread here’s all the bug workarounds listed on my system with workarounds
|
|
# ? Nov 26, 2020 09:30 |
|
how much mileage can i get from a xeon e3-1246 v3? i had an i5-4430 for seven years and never had a problem with it
|
# ? Nov 26, 2020 13:49 |
|
Zeta Acosta posted:how much mileage can i get from a xeon e3-1246 v3? i had an i5-4430 for seven years and never had a problem with it Depends on what you're trying to do with it. You could probably stretch it out another year if you're gaming and/or doing light productivity, and you have a decent GPU.
|
# ? Nov 26, 2020 14:01 |
|
gradenko_2000 posted:Depends on what you're trying to do with it. You could probably stretch it out another year if you're gaming and/or doing light productivity, and you have a decent GPU. 1080p gaming with an rx 590. word, spss, and atlas.ti, it's all i use
|
# ? Nov 26, 2020 14:23 |
|
|
Zeta Acosta posted:how much mileage can i get from a xeon e3-1246 v3? i had an i5-4430 for seven years and never had a problem with it That xeon is basically the i7 haswell variant of the i5. You'll be getting hyperthreading and a good bit higher clock speeds, so it should last you a while longer. i5 https://ark.intel.com/content/www/us/en/ark/products/75036/intel-core-i5-4430-processor-6m-cache-up-to-3-20-ghz.html Xeon https://ark.intel.com/content/www/us/en/ark/products/80916/intel-xeon-processor-e3-1246-v3-8m-cache-3-50-ghz.html
|
# ? Nov 26, 2020 20:53 |