|
Cygni posted:I remember this exact quote about 100BaseT and then gigabit. I’m not even looking at the speed, tho; I’m just thinking of what devices need to be wired, what devices we have now and on the horizon.
|
# ? Apr 25, 2020 21:08 |
|
|
.
sincx fucked around with this message at 05:55 on Mar 23, 2021 |
# ? Apr 25, 2020 22:14 |
|
Isn’t it also applicable inside an HMD? Obviously the multiple-viewer benefit would be lost.
|
# ? Apr 25, 2020 22:33 |
|
.
sincx fucked around with this message at 05:55 on Mar 23, 2021 |
# ? Apr 25, 2020 22:38 |
|
Even at that, you could still use retina/gaze tracking to eliminate a lot of unnecessary pixels. Think about a big screen TV in a living room; a lot of those top pixels are never going to be used. Also, w/r/t 8K and such, we're going to see a lot of lossy compression a la DLSS instead of continuing to widen the bandwidth requirements.
|
# ? Apr 25, 2020 23:01 |
|
D. Ebdrup posted:What the christ, they make NVMe-over-fabric?! I imagine this is simply better than regular Fibre Channel, in which case, bring it on HalloKitty fucked around with this message at 23:14 on Apr 25, 2020 |
# ? Apr 25, 2020 23:06 |
|
sincx posted:A HMD is like a 2 microlens version of a light field display. you dont need more than 2 lens because you have 1 viewer and 2 eyes whose locations are fixed relative to the display Yeah, but HMDs still have the issue of fixed focus, where the eyes are only ever focused at one depth despite stereo objects being at different 3D depths, and the lizard brain doesn’t like that. I figured light fields can fix that.
|
# ? Apr 26, 2020 00:17 |
|
LRADIKAL posted:Also, w/r/t 8K and such, we're going to see a lot of lossy compression a la DLSS instead of continuing to widen the bandwidth requirements. We're gonna see that mostly because of the atrocious state of internet services in the US. 100Gb may be a ways down the road, but 10Gb has a lot of very reasonable uses right now. Even if we can't conceive of a use for 40Gb/100Gb in the home now, in 20 years I'm sure we will have multiple "obvious" applications for it. Technology expands to fill the available space.
|
# ? Apr 26, 2020 01:00 |
|
sincx posted:it's a grid of microlens, each with a very dense pixel array underneath remember Lytro? no? light field tech is cool as hell but decades away
|
# ? Apr 26, 2020 01:06 |
|
DrDork posted:We're gonna see that mostly because of the atrocious state of internet services in the US. 100Gb may be a ways down the road, but 10Gb has a lot of very reasonable uses right now. Even if we can't conceive of a use for 40Gb/100Gb in the home now, in 20 years I'm sure we will have multiple "obvious" applications for it. Err, it’s really just storage or display. There’s really nothing else in the consumer realm. Chip-to-chip, sure, that will keep going up, but device-to-device is really just general-purpose storage and display. And general-purpose storage is becoming less and less relevant in the consumer space. I’m not saying it’s absolute, but I just can’t think of anything either.
|
# ? Apr 26, 2020 01:11 |
|
Shaocaholica posted:I’m not saying its absolute but I just can’t think of anything either. I guess that's mostly my point. At 10Mbps, we couldn't really think of why we'd need 1Gbps to the home, either, and now we can't think of why we'd ever have been so blind. I agree that it'll take a long time before >40Gb makes sense to a home user, but we'll get there in the next decade or three, that I do not doubt for a second.
|
# ? Apr 26, 2020 01:47 |
|
maybe something involving streaming neural information from a brain interface? I agree with DrDork though, something will use the bandwidth even if it ends up being ultra-HD streaming of hyperdimensional spatiotemporal models of porn star sex bits. e: a more real answer is that it enables cloud storage with performance comparable to today's local storage, if coupled with a smart local cache to keep latency down
|
# ? Apr 26, 2020 02:30 |
|
Matrix interface. That’s what will use 100G.
|
# ? Apr 26, 2020 02:47 |
|
Uncompressed True 4K UHD is like 12 gigabit a second. And 8K is a ways away, but absolutely coming if the panel makers have any say (3DTV proves they really don't, but still), and that is supposed to be over 24Gbps for 60fps. Does anyone need uncompressed True 4K or 8K? Probably not, but people used to think VHS was more than anyone needed. Zelda: A Link to the Past fits on a 1.44MB floppy. The icon alone for it in the Switch virtual console is larger than that. I never thought I would want more than gigabit on my HTPC/NAS, but there have been multiple instances where I ran out of bandwidth with more than one user trying to use it simultaneously. When bandwidth and storage go up, developers and end users will find ways to use it.
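Those figures check out on the back of an envelope; a quick sketch (note the quoted 8K number only lines up if you assume 4:2:0 chroma subsampling at 12 bits/pixel, which is my assumption, not something stated above):

```python
# Uncompressed video bandwidth: width * height * bits_per_pixel * frames_per_second
def uncompressed_gbps(width, height, bits_per_pixel, fps):
    return width * height * bits_per_pixel * fps / 1e9

# 4K UHD, 24-bit RGB, 60fps: ~11.9 Gbps (the "like 12 gigabit" figure)
print(uncompressed_gbps(3840, 2160, 24, 60))

# 8K, 4:2:0 subsampled (12 bpp), 60fps: ~23.9 Gbps (the "over 24Gbps" figure)
print(uncompressed_gbps(7680, 4320, 12, 60))
```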
|
# ? Apr 26, 2020 03:06 |
|
8k is broadcast OTA in Japan. But I'm not sure what people are watching it with.
|
# ? Apr 26, 2020 03:09 |
|
D. Ebdrup posted:You got similar bandwidth characteristics, sure - which is great is if you're using it for bulk storage, but there's one thing RJ45 can't get you. I don't get why the physical layer would affect the latency by an order of magnitude. This isn't a raw physical measurement, but here are the CrystalDiskMark results I took when I was trying out that SFP+ 10GBase-T adapter on my Aquantia 10GbE NIC. zscratch = HP EX920 NVMe, direct passthrough to ZFS. This is an inexact comparison for a variety of reasons: the Aquantia is off my client's PCH, while the SFP is CPU-direct on the client with a Supermicro card with an Intel 82598EN. (Server was an identical Supermicro/82598EN on FreeBSD.) Also, the server had 32GB of RAM, so I set the dataset size to 32GB to help. But it's still a high-RAM server, which may minimize the difference a bit, of course. But is it orders of magnitude off? No, not really. Also, for fun, here's my almost-full 8-drive 8TB RAIDZ2 shucc storage array across the Aquantia 10GBase-T for comparison. I see pretty much what I would expect for I/O latency in those figures, although I don't have a direct comparison with the Intel. Paul MaudDib fucked around with this message at 04:50 on Apr 26, 2020 |
# ? Apr 26, 2020 03:31 |
|
Cygni posted:Uncompressed True 4K UHD is like 12 gigabit a second. And 8k is a ways away, but absolutely coming if the panel makers have any say (3DTV proves they really dont, but still), and that is supposed to be over 24Gbps for 60fps. Does anyone need uncompressed True 4k or 8k? Probably not, but people used to think VHS was more than anyone needed. I think we're getting into a lot of fundamental limits in places like frequency, noise acceptance/rejection, and storage density. I don't think we'll be seeing too many more 10x increases anymore, and display resolution and light fields are a massive space in terms of data requirements. edit: Paul, I think they meant latency, not bandwidth. Boy, those high numbers are fun tho. LRADIKAL fucked around with this message at 03:49 on Apr 26, 2020 |
# ? Apr 26, 2020 03:45 |
|
All you need to want more bandwidth is to find something where bandwidth reduces the latency of some action. You don't have to somehow use it continuously. Play games without spending any time downloading anything first. Backup software that just instantly mirrors your disk remotely in the background without ever bothering you or being even remotely noticeable. Upload a video with a tap and without committing to waiting 10 minutes for it to slowly finish. Software delivered as virtual machine images that you don't even have to install before running. In the middle of working on a big project, but want to work on it with a different computer (e.g. moving between home/work/laptop)? Too disruptive to close everything down, save, and then reopen and get back to what you were doing? Replicate the machine state from one computer to another. What could it be? 300 gigs? ezpz. The applications don't even have to know they were moved. With a nice 100G link, that's less than 30s to accomplish from scratch.
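The arithmetic behind that last claim, as a minimal sketch (the 300 GB machine image is the post's hypothetical):

```python
def transfer_seconds(gigabytes, link_gbps):
    """Time to move a payload at a given line rate, ignoring protocol overhead."""
    return gigabytes * 8 / link_gbps

print(transfer_seconds(300, 100))  # 300 GB over a 100G link: 24.0 s
print(transfer_seconds(300, 10))   # over 10G: 240 s, already disruptive
print(transfer_seconds(300, 1))    # over gigabit: 2400 s, forget it
```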
|
# ? Apr 26, 2020 03:50 |
|
LRADIKAL posted:edit: Paul, I think they meant latency, not bandwidth. Boy, those high numbers are fun tho. yeah, but I mean, at some point the latency starts affecting bandwidth, because you can't get as many round-trips per second. If the latency was 10x as high then 4KB Q1T1 should have plummeted, right? Also, to make those numbers even more really fun, the EX920 server was actually on a 3.0x2 PCH slot. (should make more impact on sequential than random) post-postscript-also: that was over SMB. Paul MaudDib fucked around with this message at 04:57 on Apr 26, 2020 |
# ? Apr 26, 2020 04:45 |
|
crazypenguin posted:All you need to want more bandwidth is to find something where bandwidth reduces the latency of some action. You don't have to somehow use it continuously. I guess if that’s the case there’s no reason to have anything stored or even processed locally.
|
# ? Apr 26, 2020 04:50 |
|
Shaocaholica posted:I guess if that’s the case there’s no reason to have anything stored or even processed locally. If you don't think this is the eventual goal of cloud storage companies and things like Stadia, you're not thinking far enough ahead.
|
# ? Apr 26, 2020 04:55 |
|
Shaocaholica posted:I guess if that’s the case there’s no reason to have anything stored or even processed locally. certainly it'll be interesting to see if the desire to render everything into x-as-a-service can overcome the desire to not invest anything into infrastructure ever
|
# ? Apr 26, 2020 05:14 |
As for pushing 100Gbps, Netflix is doing 200Gbps per-server for their FreeBSD-based content delivery system: https://www.youtube.com/watch?v=8NSzkYSX5nY BlankSystemDaemon fucked around with this message at 14:06 on Apr 26, 2020 |
|
# ? Apr 26, 2020 14:03 |
|
Paul MaudDib posted:I don't get why the physical layer would affect the latency by an order of magnitude. It's less about the difference of fiber vs copper and more about how transmission works between SFPs vs RJ45s (at least at reasonable wire lengths). The specs for the SFP+ standard target a ~0.3us latency, while the 10GBase-T spec allows 2-2.5us. While I am not an expert on the internal workings of SFPs, my understanding is this is largely down to the 10GBase-T PHY using block encoding, which requires a data block to be read and held in the transmitter prior to transfer, while the SFP PHY strategy doesn't bother with encoding, due to having near-zero concerns about EMI/crosstalk, so it can use a simplified transmission method that's simply faster. e; I'll admit I have no real idea how the SFP->RJ45 modules work. Do they transmit using standard block encoding? Do they do raw media translation like DAC twinax? You're also not going to see the latency differences between RJ45 and fiber show up in a test like that, where it is a pretty minor component in bulk file transfers of any size, and who knows what your switch is doing--probably not cut-through switching, at any rate. Frankly, I doubt any home system is going to really show the differences--that's the realm of stuff like NVMeoF and whatnot. DrDork fucked around with this message at 18:05 on Apr 26, 2020 |
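To see why a microsecond-scale PHY difference vanishes in a bulk-transfer benchmark, compare it against serialization time. A sketch using the spec figures quoted above (the 1 GB payload is just an illustrative example):

```python
def bulk_transfer_ms(payload_mb, link_gbps, phy_latency_us):
    """One-way time for a large payload: serialization dominates, PHY latency is noise."""
    serialize_ms = payload_mb * 8 / link_gbps  # megabits over Gbps -> milliseconds
    return serialize_ms + phy_latency_us / 1000

print(bulk_transfer_ms(1000, 10, 0.3))  # 1 GB over 10G, SFP+-class PHY
print(bulk_transfer_ms(1000, 10, 2.5))  # same payload, 10GBase-T-class PHY
```

The ~2.2 us gap is a few millionths of the ~800 ms total, which is why it only shows up in latency-sensitive workloads like NVMeoF, not in file copies.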
# ? Apr 26, 2020 17:56 |
|
DrDork posted:It's less about the difference of fiber vs copper and more because of how transmission works between SFPs vs RJ45s (at least at reasonable wire lengths). The specs on a the SFP+ standard target a ~0.3us latency, while the 10Gbase-T spec allows 2-2.5us. While I am not an expert on the internal workings of SFPs, my understanding is this is largely down to that 10Gbase-T PHY strategy uses block encoding, requiring a data block to be read and held in the transmitter prior to transfer, while the SFP PHY strategy doesn't bother with encoding, due to having near zero concerns about EMI/crosstalk. So it can utilize a simplified transmission method that's simply faster. I'm not a real expert on the topic, but I did do some experimental implementation work once on a Forward Error Correction (FEC) decoder for long haul fiber 100G networking. It was experimental in that my starting point was working ASIC source code, and I was asked to see if it was possible to port it to work at full 100G rate in FPGAs, for Reasons. I didn't succeed, the original design was too dependent on things ASICs do way better than FPGAs, and it was deemed not important enough to spend more effort on. So, with that not-a-real-expert caveat, for 10G, I would not be surprised if short haul SFP gets away with no FEC required to make the physical layer reliable, while 10Gbase-T likely needs some. And FEC adds latency.
|
# ? Apr 26, 2020 22:29 |
|
BobHoward posted:So, with that not-a-real-expert caveat, for 10G, I would not be surprised if short haul SFP gets away with no FEC required to make the physical layer reliable, while 10Gbase-T likely needs some. And FEC adds latency. Yeah, to be clear, I'm talking 10Gb fiber only. I have no idea what the latency profiles of 100Gb would be, let alone 100Gb optimized for multi-km runs instead of the intra-datacenter uses I've worked with.
|
# ? Apr 26, 2020 23:17 |
Paul MaudDib posted:yeah, but I mean, at some point the latency starts affecting bandwidth, because you can't get as many round-trips per second. If the latency was 10x as high then 4KB Q1T1 should have plummeted, right? If you're interested, there's a paper by Mathis et al. called "The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm", published July 1997 in ACM. (CCIE finally paying off!) EDIT: I just realized I forgot to reply to your other post, orz Paul MaudDib posted:I don't get why the physical layer would affect the latency by an order of magnitude. What's happening is that at a certain point, the time it takes to transfer the actual packets over the SFP connectors matters more than the latency itself. Where it will make a difference is if you do proper disk access testing via dtrace, and since Microsoft developers took the code from FreeBSD and ported it to Windows 10, you can do that. I imagine you can't take existing scripts, though - so more than likely you'll want to fire up Windows Performance Monitor aka perfmon, because it lets you chart IOs per second and measure their latency too - however, be aware that it looks like perfmon can't do direct latency tracking of every subsystem. I guess Microsoft would like to, or is working on, switching all tools like perfmon over to dtrace, because that way they get proper tracing of all kernel subsystems. Apple has been through a similar process with a bunch of their tools, supposedly. BlankSystemDaemon fucked around with this message at 11:19 on Apr 27, 2020 |
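The Mathis et al. model referenced above says steady-state TCP throughput scales as MSS/(RTT·√p). A rough sketch (the constant ≈1.22 and the example loss rate are illustrative assumptions, not figures from the thread):

```python
import math

def mathis_throughput_mbps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Mathis model: BW <= C * MSS / (RTT * sqrt(p)), in Mbps."""
    return c * mss_bytes * 8 / (rtt_s * math.sqrt(loss_rate)) / 1e6

# 1460-byte MSS, 100 us RTT (LAN), one-in-a-million loss
print(mathis_throughput_mbps(1460, 100e-6, 1e-6))
# same loss rate at 50 ms RTT (WAN): throughput drops 500x with the RTT
print(mathis_throughput_mbps(1460, 50e-3, 1e-6))
```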
|
# ? Apr 27, 2020 10:26 |
|
Remember the recent Cyrix/VIA/Centaur/IDT chat that seemed to dead-end at some ancient core tech transfer to Chinese CPUs? Well, apparently Zhaoxin’s new CPU was indeed a new VIA arch, and now there is a new one with Centaur branding and a built-in inferencing accelerator and eDRAM. 8 cores with AVX-512 (supposedly Haswell-level performance), 4-channel DDR4, and 44 PCIe lanes on TSMC 16nm for edge compute. https://ascii.jp/elem/000/004/010/4010926/
|
# ? Apr 28, 2020 08:32 |
|
Cygni posted:Remember the recent Cyrix/VIA/Centaur/IDT chat, that seemed to dead end at some ancient core tech transfer to Chinese CPUs? Interesting, and only a day or two after Bloomberg gave a few more deets on the upcoming ARM Macs (possibly up to three SKUs based on 8-core versions of the upcoming A14, expected by the end of the year). My body is ready for a laptop with the guts of a fancy iPad.
|
# ? Apr 28, 2020 11:40 |
|
I tried to read up on that Bloomberg piece about ARM Macs - is ARM now at the point where they can offer performance comparable to a MacBook that's effectively a really customized Intel laptop?
|
# ? Apr 28, 2020 11:51 |
|
gradenko_2000 posted:I tried to read up on that Bloomberg piece about ARM Macs - is ARM now at the point where they can offer comparable performance compared to a MacBook that's effectively a really customized Intel laptop? Been there for at least a year or two, IIRC.
|
# ? Apr 28, 2020 12:02 |
|
Ok Comboomer posted:Been there for at least a year or two, IIRC. is there such a thing as an ARM desktop CPU that you could buy and build into, like an Intel Core CPU?
|
# ? Apr 28, 2020 12:08 |
|
"Yes", but they're mostly workstations or development hardware for server deployments. Here is an article with some recent examples: https://www.servethehome.com/marvell-thunderx3-arm-server-cpu-with-768-threads-in-2020/ For most commodity computing at the desktop level, ARM doesn't have much to offer that x86 doesn't. For more portable hardware like tablets and laptops, it makes more sense; even Chromebooks with ARM processors are more tuned for battery life. They aren't high-powered machines in any case. Microsoft's recent forays into ARM-based platforms have been a wet fart so far. Apple also makes extremely performant ARM cores, and having locked down the hardware side, they have a distinct advantage in moving to another CPU architecture. I was actually planning on reading up on Apple's PowerPC-to-x86 transition today to see how they might handle an x86-to-ARM transition.
|
# ? Apr 28, 2020 14:09 |
|
NewFatMike posted:I was actually planning on reading up on the transition from POWER to x86 for Apple today to see how they might handle an x86 to ARM transition.
|
# ? Apr 28, 2020 14:45 |
|
gradenko_2000 posted:I tried to read up on that Bloomberg piece about ARM Macs - is ARM now at the point where they can offer comparable performance compared to a MacBook that's effectively a really customized Intel laptop? For the limited number of things that 90% of people use their Apple laptops for, yeah. Hell, even if they did nothing new and just shoved the A12Z chip they already have into a laptop body, it would sell. You're not going to be doing high-end gaming (or high-end compute, for that matter) with one, but for Office-style apps, web browsing, etc., it'll be more than enough. The question is the software, which brings me to... NewFatMike posted:Apple also make extremely performant ARM cores as well, and having locked down the hardware side, they have a distinct advantage to move to another CPU architecture. Apple is probably the only ones who are going to be able to make it work, because they're the only ones who can force the entire ecosystem to move in the direction they want. Microsoft failed because developers weren't interested in solving the chicken-and-egg issue of making software for hardware no one wanted because there was no software for the hardware (among other reasons).
|
# ? Apr 28, 2020 14:57 |
|
NewFatMike posted:Apple also make extremely performant ARM cores as well, and having locked down the hardware side, they have a distinct advantage to move to another CPU architecture. Yes, and they also get to design their own integrated GPU. It would be limited to their own Metal API, but I'd expect the performance to be absolutely groundbreaking in its segment.
|
# ? Apr 28, 2020 15:08 |
|
DrDork posted:Apple is probably the only ones who are going to be able to make it work, because they're the only ones who can force the entire ecosystem to move in the direction they want. Microsoft failed because developers weren't interested in solving the chicken-and-egg issue of making software for hardware no one wanted because there was no software for the hardware (among other reasons). A big part of this is because they can probably get a lot of the app store apps working if it is the same architecture. It will break a lot of things, and it will probably be a huge pain for a ton of devs, but I don't think it would be a day-1 reset for absolutely everything.
|
# ? Apr 28, 2020 15:25 |
|
NewFatMike posted:Apple also make extremely performant ARM cores as well, and having locked down the hardware side, they have a distinct advantage to move to another CPU architecture. Also, even if they don't ever make the move, it's a nice bit of leverage they can hold over Intel. The real blocker would be the higher-end systems, especially the Mac Pro. Apple could drop an ARMbook Air with iPad Pro guts in a laptop shell any time they wanted, and it would probably work just fine. Selling it as the future, though, would be a bit harder if they keep their biggest, baddest systems on the old architecture indefinitely.
|
# ? Apr 28, 2020 15:32 |
ARM's Neoverse N1, which underlies the Graviton2 chip, is making huge waves in the server market too. It's also the basis for Morello, the CPU being used for the capability-based CheriBSD, a soft-fork of FreeBSD (meaning code goes back to FreeBSD regularly) developed by Cambridge. CHERI is notable as a way to mitigate many of the foibles of C and C++ software by applying hardware-based capabilities.
|
|
# ? Apr 28, 2020 16:49 |
|
|
NewFatMike posted:Apple also make extremely performant ARM cores as well, and having locked down the hardware side, they have a distinct advantage to move to another CPU architecture. I was surprised by this and looked up some performance comparisons. It's crazy how close A13 is to Intel and AMD's desktop CPUs. Does anyone have any insights regarding how they've squeezed that much performance out of ARM cores (especially considering its clocks are way lower than desktop CPUs)? E.g. a more efficient ISA?
|
# ? Apr 28, 2020 17:18 |