Shaocaholica
Oct 29, 2002

Fig. 5E

Cygni posted:

I remember this exact quote about 100BaseT and then gigabit.

I'm not even looking at the speed tho, I'm just thinking of what devices need to be wired: what we have now and what's on the horizon.

sincx
Jul 13, 2012

furiously masturbating to anime titties
.

sincx fucked around with this message at 05:55 on Mar 23, 2021

Shaocaholica
Oct 29, 2002

Fig. 5E
Isn't it also applicable inside an HMD? Obviously the multiple-viewer benefit would be lost.

sincx
Jul 13, 2012

furiously masturbating to anime titties
.

sincx fucked around with this message at 05:55 on Mar 23, 2021

LRADIKAL
Jun 10, 2001

Fun Shoe
Even at that, you could still use retina/gaze tracking to eliminate a lot of unnecessary pixels. Think about a big screen TV in a living room; a lot of those top pixels are never going to be used.
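A rough back-of-the-envelope for that (the 5-degree and 40-degree figures below are assumptions, not measurements):

```python
# Illustrative only: how much of the screen actually needs full resolution
# if you know where the viewer is looking. Assumed numbers: ~5 degree
# high-acuity foveal region, TV spanning ~40 degrees of view at couch distance.
fovea_deg = 5.0
screen_deg = 40.0

full_res_fraction = (fovea_deg / screen_deg) ** 2   # area goes with the square
print(f"full-res area needed: {full_res_fraction:.1%} of the screen")
# -> about 1.6%; the rest could be rendered/streamed at much lower detail
```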

Also, w/r/t 8K and such, we're going to see a lot of lossy compression a la DLSS instead of continuing to widen the bandwidth requirements.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

D. Ebdrup posted:

What the christ, they make NVMe-over-fabric?!

I imagine this is simply better than regular Fibre Channel, in which case, bring it on

HalloKitty fucked around with this message at 23:14 on Apr 25, 2020

Shaocaholica
Oct 29, 2002

Fig. 5E

sincx posted:

An HMD is like a two-microlens version of a light field display. You don't need more than two lenses because you have one viewer and two eyes whose locations are fixed relative to the display.

A light field display usable by multiple people would require hundreds of thousands of microlenses, each with an array of hundreds of thousands of pixels.

Yeah, but HMDs still have the fixed-focus issue: the eyes are only ever focused at one depth despite stereo objects being at different 3D depths, and the lizard brain doesn't like that. I figured light fields could fix that.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

LRADIKAL posted:

Also, w/r/t 8K and such, we're going to see a lot of lossy compression a la DLSS instead of continuing to widen the bandwidth requirements.

We're gonna see that mostly because of the atrocious state of internet services in the US. 100Gb may be a ways down the road, but 10Gb has a lot of very reasonable uses right now. Even if we can't conceive of a use for 40Gb/100Gb in the home now, in 20 years I'm sure we will have multiple "obvious" applications for it.

Technology expands to fill the available space.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

sincx posted:

it's a grid of microlenses, each with a very dense pixel array underneath

each microlens projects its own pixel array outwards spherically, so as the viewer moves around in 3D space, they see different underlying pixels under each microlens

since a light field display actually recreates the field of photons from a given scene, it's the only way to have a single 3D display that can be viewed by multiple people, without glasses, from any direction, and have the view change correctly as you walk by the display



remember lytro? no? light field tech is cool as hell but decades away
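Just to put numbers on why it's decades away (both counts below are made up for illustration, not from any real panel):

```python
# Illustrative pixel budget for a multi-viewer light field display:
# a grid of microlenses, each with its own dense pixel array underneath.
microlenses = 1000 * 1000          # 1000 x 1000 lens grid (assumed)
views_per_lens = 100 * 100         # 100 x 100 pixels under each lens (assumed)

total_pixels = microlenses * views_per_lens
print(f"{total_pixels:.1e} pixels total")                              # 1.0e+10
print(f"~{total_pixels // (3840 * 2160)}x the pixels of a 4K panel")   # ~1205x
```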

Shaocaholica
Oct 29, 2002

Fig. 5E

DrDork posted:

We're gonna see that mostly because of the atrocious state of internet services in the US. 100Gb may be a ways down the road, but 10Gb has a lot of very reasonable uses right now. Even if we can't conceive of a use for 40Gb/100Gb in the home now, in 20 years I'm sure we will have multiple "obvious" applications for it.

Technology expands to fill the available space.

Err, it's really just storage or display. There's really nothing else in the consumer realm. Chip-to-chip, sure, that will keep going up, but device-to-device is really just general-purpose storage and display. And general-purpose storage is becoming less and less relevant in the consumer space.

I'm not saying it's absolute, but I just can't think of anything either.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Shaocaholica posted:

I'm not saying it's absolute, but I just can't think of anything either.

I guess that's mostly my point. At 10Mbps, we couldn't really think of why we'd need 1Gbps to the home, either, and now we can't think of why we'd ever have been so blind. I agree that it'll take a long time before >40Gb makes sense to a home user, but we'll get there in the next decade or three, that I do not doubt for a second.

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

maybe something involving streaming neural information from a brain interface?
I agree with DrDork though, something will use the bandwidth even if it ends up being ultra-HD streaming hyperdimensional spatiotemporal models of porn star sex bits


e: a more real answer is that it enables cloud storage that has performance comparable to today's local storage if coupled with a smart local cache to keep latency down
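A minimal sketch of what that smart local cache could look like (remote_read() is a hypothetical placeholder, not any real service's API):

```python
# Read-through LRU cache in front of hypothetical remote block storage.
from collections import OrderedDict

CACHE_BLOCKS = 1024          # how many blocks to keep locally

cache = OrderedDict()        # block_id -> bytes, kept in LRU order

def remote_read(block_id: int) -> bytes:
    """Fetch a block over the (fast) network. Placeholder for illustration."""
    raise NotImplementedError

def read_block(block_id: int) -> bytes:
    if block_id in cache:                 # hit: local SSD/RAM latency
        cache.move_to_end(block_id)
        return cache[block_id]
    data = remote_read(block_id)          # miss: pay the network round trip once
    cache[block_id] = data
    if len(cache) > CACHE_BLOCKS:
        cache.popitem(last=False)         # evict least recently used block
    return data
```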

Shaocaholica
Oct 29, 2002

Fig. 5E
Matrix interface. That’s what will use 100G.

Cygni
Nov 12, 2005

raring to post

Uncompressed True 4K UHD is like 12 gigabit a second. And 8K is a ways away, but absolutely coming if the panel makers have any say (3DTV proves they really don't, but still), and that is supposed to be over 24Gbps for 60fps. Does anyone need uncompressed True 4K or 8K? Probably not, but people used to think VHS was more than anyone needed.
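(Back-of-the-envelope on those numbers, assuming 8-bit RGB with no chroma subsampling; real links add blanking and overhead:)

```python
# Uncompressed video bitrate: width x height x fps x bits per pixel.
def uncompressed_gbps(w: int, h: int, fps: int, bpp: int = 24) -> float:
    return w * h * fps * bpp / 1e9

print(f"4K60: {uncompressed_gbps(3840, 2160, 60):.1f} Gbps")   # ~11.9
print(f"8K60: {uncompressed_gbps(7680, 4320, 60):.1f} Gbps")   # ~47.8
```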

Zelda: Link to the Past fits on a 1.44MB floppy. The icon alone for it in the Switch virtual console is larger than that. I never thought I would want more than gigabit on my HTPC/NAS, but there have been multiple instances where I ran out of bandwidth with more than one user trying to use it simultaneously.

When bandwidth and storage go up, developers and end users will find ways to use it.

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

8k is broadcast OTA in Japan. But I'm not sure what people are watching it with.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

D. Ebdrup posted:

You got similar bandwidth characteristics, sure - which is great if you're using it for bulk storage, but there's one thing RJ45 can't get you.
SFP, or SFP+, has much lower latency - by an order of magnitude - so if you've got ZFS with its ARC as well as an L2ARC and mirrored SLOG on SSDs, and you're doing iSCSI storage (or even iSCSI boot, which most SFP/+ controllers can do), you'll notice a difference.

I don't get why the physical layer would affect the latency by an order of magnitude.

This is not a raw physical measurement, but here are the CrystalDiskMark results I took when I was trying out that SFP+ 10GBase-T adapter on my Aquantia 10GbE NIC.

zscratch = HP EX920 NVMe, direct passthrough to ZFS

[CrystalDiskMark screenshots attached in the original post]
This is an inexact comparison for a variety of reasons: the Aquantia is off my client's PCH, while the SFP is CPU-direct on the client with a Supermicro card with an Intel 82598EN (the server was an identical Supermicro/82598EN on FreeBSD). Also, the server had 32GB of RAM, so I set the dataset size to 32GB to help. But it's still a high-RAM server, which may minimize the difference a bit, of course.

But is it orders of magnitude off? No, not really.

Also, for fun, here's my almost-full 8-drive 8TB ZRAID2 shucc storage array across the Aquantia 10GBase-T for comparison. I see pretty much what I would expect for I/O latency in those figures, although I don't have a direct comparison with the Intel.

Paul MaudDib fucked around with this message at 04:50 on Apr 26, 2020

LRADIKAL
Jun 10, 2001

Fun Shoe

Cygni posted:

Uncompressed True 4K UHD is like 12 gigabit a second. And 8K is a ways away, but absolutely coming if the panel makers have any say (3DTV proves they really don't, but still), and that is supposed to be over 24Gbps for 60fps. Does anyone need uncompressed True 4K or 8K? Probably not, but people used to think VHS was more than anyone needed.

Zelda: Link to the Past fits on a 1.44MB floppy. The icon alone for it in the Switch virtual console is larger than that. I never thought I would want more than gigabit on my HTPC/NAS, but there have been multiple instances where I ran out of bandwidth with more than one user trying to use it simultaneously.

When bandwidth and storage go up, developers and end users will find ways to use it.

I think we're getting into a lot of fundamental limits in places like frequency, noise acceptance/rejection, and storage density. I don't think we'll be seeing too many more 10x increases, and display resolution and light fields are a massive area in terms of data requirements.

edit: Paul, I think they meant latency, not bandwidth. Boy, those high numbers are fun tho.

LRADIKAL fucked around with this message at 03:49 on Apr 26, 2020

crazypenguin
Mar 9, 2005
nothing witty here, move along
All you need to want more bandwidth is to find something where bandwidth reduces the latency of some action. You don't have to somehow use it continuously.

Play games without spending any time downloading anything first. Backup software that just instantly mirrors your disk remotely in the background without ever bothering you or being even remotely noticeable. Upload a video with a tap and without committing to waiting for 10 minutes for it to slowly finish. Software delivered as virtual machine images that you don't even have to install before running.

In the middle of working on a big project, but want to work on it with a different computer (e.g. moving between home/work/laptop)? Too disruptive to close everything down, save, and then reopen and get back to what you were doing? Replicate the machine state from one computer to another. What could it be? 300 gigs? ezpz. The applications don't even have to know they were moved. With a nice 100G link, that's less than 30s to accomplish from scratch.
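The arithmetic behind that, assuming you can actually sustain line rate (which is generous):

```python
# 300 GB machine image over a 100 Gb/s link, ignoring protocol overhead.
size_bytes = 300e9
link_bps = 100e9

seconds = size_bytes * 8 / link_bps
print(f"{seconds:.0f} s")   # 24 s
```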

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

LRADIKAL posted:

edit: Paul, I think they meant latency, not bandwidth. Boy, those high numbers are fun tho.

yeah, but I mean, at some point the latency starts affecting bandwidth, because you can't get as many round-trips per second. If the latency was 10x as high then 4KB Q1T1 should have plummeted, right?
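Rough sketch of what I mean, with assumed round-trip times rather than anything I measured:

```python
# With one outstanding 4KB I/O at a time (Q1T1), throughput is capped by
# how many round trips fit in a second. Numbers below are illustrative.
def q1t1_mb_per_s(round_trip_s: float, io_bytes: int = 4096) -> float:
    iops = 1.0 / round_trip_s
    return iops * io_bytes / 1e6

print(q1t1_mb_per_s(100e-6))   # ~41 MB/s at 100 us per op
print(q1t1_mb_per_s(1e-3))     # ~4 MB/s at 1 ms: 10x the latency, ~1/10th the result
```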

Also, to make those numbers even more fun, the EX920 server was actually on a 3.0x2 PCH slot. (Should make more impact on sequential than random.)

post-postscript-also: that was over SMB.

Paul MaudDib fucked around with this message at 04:57 on Apr 26, 2020

Shaocaholica
Oct 29, 2002

Fig. 5E

crazypenguin posted:

All you need to want more bandwidth is to find something where bandwidth reduces the latency of some action. You don't have to somehow use it continuously.

Play games without spending any time downloading anything first. Backup software that just instantly mirrors your disk remotely in the background without ever bothering you or being even remotely noticeable. Upload a video with a tap and without committing to waiting for 10 minutes for it to slowly finish. Software delivered as virtual machine images that you don't even have to install before running.

In the middle of working on a big project, but want to work on it with a different computer (e.g. moving between home/work/laptop)? Too disruptive to close everything down, save, and then reopen and get back to what you were doing? Replicate the machine state from one computer to another. What could it be? 300 gigs? ezpz. The applications don't even have to know they were moved. With a nice 100G link, that's less than 30s to accomplish from scratch.

I guess if that’s the case there’s no reason to have anything stored or even processed locally.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Shaocaholica posted:

I guess if that’s the case there’s no reason to have anything stored or even processed locally.

If you don't think this is the eventual goal of cloud storage companies and things like Stadia, you're not thinking far enough ahead.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy

Shaocaholica posted:

I guess if that’s the case there’s no reason to have anything stored or even processed locally.

certainly it'll be interesting to see if the desire to render everything into x-as-a-service can overcome the desire to not invest anything into infrastructure ever

BlankSystemDaemon
Mar 13, 2009



As for pushing 100Gbps, Netflix is doing 200Gbps per-server for their FreeBSD-based content delivery system:
https://www.youtube.com/watch?v=8NSzkYSX5nY

BlankSystemDaemon fucked around with this message at 14:06 on Apr 26, 2020

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Paul MaudDib posted:

I don't get why the physical layer would affect the latency by an order of magnitude.

It's less about the difference of fiber vs copper and more about how transmission works between SFPs vs RJ45 (at least at reasonable wire lengths). The SFP+ standard targets ~0.3us of latency, while the 10GBase-T spec allows 2-2.5us. While I am not an expert on the internal workings of SFPs, my understanding is that this is largely down to the 10GBase-T PHY using block encoding, which requires a data block to be read and held in the transmitter prior to transfer, while the SFP PHY doesn't bother with encoding, since it has near-zero concerns about EMI/crosstalk, so it can use a simplified transmission method that's simply faster.

e; I'll admit I have no real idea how the SFP->RJ45 modules work. Do they transmit using standard block encoding? Do they do raw media translation like DAC twinax? :iiam:

You're also not going to see the latency differences between RJ45 and Fiber show up in a test like that, where it is a pretty minor component in bulk file transfers of any size, and who knows what your switch is doing--probably not cut-through switching, at any rate. Frankly I doubt any home system is going to really show the differences--that's the realm of stuff like NVMeoF and whatnot.
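For scale (assumed numbers, just to show why it washes out in a bulk benchmark):

```python
# A ~2 us per-hop PHY latency difference vs. the time to move 1 GB at 10 Gb/s.
transfer_bytes = 1e9
link_bps = 10e9
phy_delta_s = 2e-6          # ballpark 10GBase-T vs SFP+ difference per hop

transfer_s = transfer_bytes * 8 / link_bps
print(f"transfer {transfer_s:.1f} s vs PHY delta {phy_delta_s * 1e6:.0f} us "
      f"= {phy_delta_s / transfer_s:.1e} of the total")   # ~2.5e-06
```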

DrDork fucked around with this message at 18:05 on Apr 26, 2020

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

DrDork posted:

It's less about the difference of fiber vs copper and more about how transmission works between SFPs vs RJ45 (at least at reasonable wire lengths). The SFP+ standard targets ~0.3us of latency, while the 10GBase-T spec allows 2-2.5us. While I am not an expert on the internal workings of SFPs, my understanding is that this is largely down to the 10GBase-T PHY using block encoding, which requires a data block to be read and held in the transmitter prior to transfer, while the SFP PHY doesn't bother with encoding, since it has near-zero concerns about EMI/crosstalk, so it can use a simplified transmission method that's simply faster.

e; I'll admit I have no real idea how the SFP->RJ45 modules work. Do they transmit using standard block encoding? Do they do raw media translation like DAC twinax? :iiam:

I'm not a real expert on the topic, but I did do some experimental implementation work once on a Forward Error Correction (FEC) decoder for long-haul fiber 100G networking. It was experimental in that my starting point was working ASIC source code, and I was asked to see if it was possible to port it to work at full 100G rate in FPGAs, for Reasons. I didn't succeed; the original design was too dependent on things ASICs do way better than FPGAs, and it was deemed not important enough to spend more effort on.

So, with that not-a-real-expert caveat, for 10G, I would not be surprised if short haul SFP gets away with no FEC required to make the physical layer reliable, while 10Gbase-T likely needs some. And FEC adds latency.
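Back-of-the-envelope on the FEC point (the block size below is hypothetical, not quoted from either spec):

```python
# A block code can't start decoding until the whole codeword has arrived,
# so the floor on added latency is block_bits / line_rate, plus decode time,
# and it accrues at every hop. 2048 bits is just an example size.
block_bits = 2048
line_rate_bps = 10e9

buffer_delay_us = block_bits / line_rate_bps * 1e6
print(f"{buffer_delay_us:.2f} us per block just to fill the buffer")   # ~0.20 us
```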

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

BobHoward posted:

So, with that not-a-real-expert caveat, for 10G, I would not be surprised if short haul SFP gets away with no FEC required to make the physical layer reliable, while 10Gbase-T likely needs some. And FEC adds latency.

Yeah, to be clear, I'm talking 10Gb fiber only. I have no idea what the latency profiles of 100Gb would be, let alone 100Gb optimized for multi-km runs instead of the intra-datacenter uses I've worked with.

BlankSystemDaemon
Mar 13, 2009



Paul MaudDib posted:

yeah, but I mean, at some point the latency starts affecting bandwidth, because you can't get as many round-trips per second. If the latency was 10x as high then 4KB Q1T1 should have plummeted, right?
You're thinking of the bandwidth-delay product, and yes, it absolutely matters - but the scales it matters at mean that for my 1/1Gbps FTTH I can send at the full bandwidth to anywhere within ~130ms, so basically anywhere but Asia, Straya, and the US West Coast.
If you're interested, there's a paper by Mathis et al., "The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm", published in ACM SIGCOMM Computer Communication Review in July 1997.
(CCIE finally paying off!)
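For reference, the bandwidth-delay product for that 1 Gbps / ~130 ms case (just the arithmetic, no claims about any particular TCP stack's defaults):

```python
# Bandwidth-delay product: how much data has to be "in flight" to fill the pipe.
bandwidth_bps = 1e9     # 1 Gbps FTTH
rtt_s = 0.130           # ~130 ms round trip

bdp_bytes = bandwidth_bps * rtt_s / 8
print(f"{bdp_bytes / 1e6:.1f} MB in flight")   # ~16 MB the window has to cover
```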

EDIT: I just realized I forgot to reply to your other post, orz

Paul MaudDib posted:

I don't get why the physical layer would affect the latency by an order of magnitude.
As others have pointed out, your testing methodology is wrong; lower latency does affect bandwidth, but as you've found out, an order of magnitude lower latency doesn't equate to an order of magnitude higher bandwidth.
What's happening is that at a certain point, the time it takes to transfer the actual packets over the SFP connectors matters more than the latency itself.

Where it will make a difference is if you do proper disk access testing via dtrace, and since Microsoft developers took the code from FreeBSD and ported it to Windows 10, you can do that.
I imagine you can't take existing scripts, though - so more than likely you'll want to fire up Windows Performance Monitor aka perfmon, because it lets you chart IOs per second and measure their latency too - however, be aware that it looks like perfmon can't do direct latency tracking of every subsystem.
I guess Microsoft would like to (or is working on) switching all tools like perfmon to using dtrace, because that way they get proper tracing of all kernel subsystems. Apple has been through a similar process with a bunch of their tools, supposedly.

BlankSystemDaemon fucked around with this message at 11:19 on Apr 27, 2020

Cygni
Nov 12, 2005

raring to post

Remember the recent Cyrix/VIA/Centaur/IDT chat that seemed to dead-end at some ancient core tech transfer to Chinese CPUs?

Well, apparently Zhaoxin's new CPU was indeed a new VIA arch, and now there is a new one with Centaur branding and a built-in inferencing accelerator and eDRAM: 8 cores with AVX-512 (supposedly Haswell-level performance), 4-channel DDR4, and 44 PCIe lanes on TSMC 16nm for edge compute.


https://ascii.jp/elem/000/004/010/4010926/

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

Cygni posted:

Remember the recent Cyrix/VIA/Centaur/IDT chat that seemed to dead-end at some ancient core tech transfer to Chinese CPUs?

Well, apparently Zhaoxin's new CPU was indeed a new VIA arch, and now there is a new one with Centaur branding and a built-in inferencing accelerator and eDRAM: 8 cores with AVX-512 (supposedly Haswell-level performance), 4-channel DDR4, and 44 PCIe lanes on TSMC 16nm for edge compute.


https://ascii.jp/elem/000/004/010/4010926/

Interesting, and only a day or two after Bloomberg gave a few more deets on the upcoming ARM Macs (possibly up to 3 SKUs based on 8-core versions of the upcoming A14, expected by the end of the year).

My body is ready for a laptop with the guts of a fancy iPad.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
I tried to read up on that Bloomberg piece about ARM Macs - is ARM now at the point where they can offer performance comparable to a MacBook that's effectively a really customized Intel laptop?

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

gradenko_2000 posted:

I tried to read up on that Bloomberg piece about ARM Macs - is ARM now at the point where they can offer performance comparable to a MacBook that's effectively a really customized Intel laptop?

Been there for at least a year or two, IIRC.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy

Ok Comboomer posted:

Been there for at least a year or two, IIRC.

is there such a thing as an ARM desktop CPU that you could buy and build a system around, like an Intel Core CPU?

NewFatMike
Jun 11, 2015

"Yes" but they're mostly workstations or development hardware for server deployments. Here is an article with some recent examples:

https://www.servethehome.com/marvell-thunderx3-arm-server-cpu-with-768-threads-in-2020/

For most commodity computing at the desktop level, ARM doesn't have much to offer that x86 doesn't. For more portable hardware like tablets and laptops it makes more sense; even looking at Chromebooks with ARM processors, they're more tuned for battery life. They aren't high-powered machines in any case.

Microsoft's recent forays into ARM based platforms have been a wet fart so far.

Apple also makes extremely performant ARM cores, and having locked down the hardware side, they have a distinct advantage in moving to another CPU architecture.

I was actually planning on reading up on the transition from POWER to x86 for Apple today to see how they might handle an x86 to ARM transition.

Khorne
May 1, 2002

NewFatMike posted:

I was actually planning on reading up on the transition from POWER to x86 for Apple today to see how they might handle an x86 to ARM transition.
Judging by how they handle macOS on x86/amd64, they're going to break absolutely everything and not care.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

gradenko_2000 posted:

I tried to read up on that Bloomberg piece about ARM Macs - is ARM now at the point where they can offer performance comparable to a MacBook that's effectively a really customized Intel laptop?

For the limited number of things that 90% of people use their Apple laptops for, yeah. Hell, even if they did nothing new and just shoved the A12Z chip they already have into a laptop body, it would sell. You're not going to be doing high-end gaming (or high-end compute, for that matter) with one, but for Office-style apps, web browsing, etc., it'll be more than enough. The question is the software, which brings me to...

NewFatMike posted:

Apple also makes extremely performant ARM cores, and having locked down the hardware side, they have a distinct advantage in moving to another CPU architecture.

I was actually planning on reading up on the transition from POWER to x86 for Apple today to see how they might handle an x86 to ARM transition.

Apple is probably the only ones who are going to be able to make it work, because they're the only ones who can force the entire ecosystem to move in the direction they want. Microsoft failed because developers weren't interested in solving the chicken-and-egg issue of making software for hardware no one wanted because there was no software for the hardware (among other reasons).

eames
May 9, 2009

NewFatMike posted:

Apple also makes extremely performant ARM cores, and having locked down the hardware side, they have a distinct advantage in moving to another CPU architecture.

Yes, and they also get to design their own integrated GPU. It would be limited to their own Metal API, but I'd expect the performance to be absolutely groundbreaking in its segment.

KKKLIP ART
Sep 3, 2004

DrDork posted:

Apple is probably the only ones who are going to be able to make it work, because they're the only ones who can force the entire ecosystem to move in the direction they want. Microsoft failed because developers weren't interested in solving the chicken-and-egg issue of making software for hardware no one wanted because there was no software for the hardware (among other reasons).

A big part of this is because they can probably get a lot of the app store apps working if it is the same architecture. It will break a lot of things, and it will probably be a huge pain for a ton of devs, but I don't think it would be a day-1 reset for absolutely everything.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

NewFatMike posted:

Apple also makes extremely performant ARM cores, and having locked down the hardware side, they have a distinct advantage in moving to another CPU architecture.

Also, even if they don't ever make the move, it's a nice bit of leverage they can hold over Intel.

The real blocker would be the higher-end systems, especially the Mac Pro. Apple could drop an ARMbook Air with iPad Pro guts in a laptop shell any time they wanted, and it would probably work just fine. Selling it as the future, though, would be a bit harder if they keep their biggest, baddest systems on the old architecture indefinitely.

BlankSystemDaemon
Mar 13, 2009



ARM's Neoverse designs, which underlie the Graviton2 chip, are making huge waves in the server market too.
A related design, Morello, is the CPU being used for the capability-based CheriBSD, a soft-fork of FreeBSD (meaning code goes back to FreeBSD regularly) made by Cambridge together with SRI International.
CHERI is notable as a way to mitigate many of the foibles of C and C++ software by applying hardware-based capabilities.

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib

NewFatMike posted:

Apple also makes extremely performant ARM cores, and having locked down the hardware side, they have a distinct advantage in moving to another CPU architecture.

I was surprised by this and looked up some performance comparisons. It's crazy how close A13 is to Intel and AMD's desktop CPUs. Does anyone have any insights regarding how they've squeezed that much performance out of ARM cores (especially considering its clocks are way lower than desktop CPUs)? E.g. a more efficient ISA?
