Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Tapedump posted:

This is where I need you to assure me that you're just joking and not really an idiot.

Your link gives me hope, but...

I'm not joking. What would the downside of a modern CPU architecture running at 20GHz be vs a 4ghz quad core? You can still schedule multiple threads on a single core just fine, and when you do have just one thread crunching it would be way faster.

Ragingsheep
Nov 7, 2009

Twerk from Home posted:

I'm not joking. What would the downside of a modern CPU architecture running at 20GHz be vs a 4ghz quad core? You can still schedule multiple threads on a single core just fine, and when you do have just one thread crunching it would be way faster.

That's what Intel tried to do with the P4 and they ran straight into a thermal brickwall.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Ragingsheep posted:

That's what Intel tried to do with the P4 and they ran straight into a thermal brickwall.

Oh, I know that it's impossible in reality, but it's a nice dream. If we somehow could get more single threaded speed it would be vastly preferable to more cores.

LiquidRain
May 21, 2007

Watch the madness!

Twerk from Home posted:

Oh, I know that it's impossible in reality, but it's a nice dream. If we somehow could get more single threaded speed it would be vastly preferable to more cores.
Just as you said, it's impossible. :)

Intel and ARM keep trying through all the ways they can (though the conspiracy theorists here argue otherwise), and all we get are IPC, branch prediction, cache efficiencies, and new instruction sets that help single threads. Physics is a bitch. Can't go faster.

At least, not while we're still using silicon. :)

Tapedump
Aug 31, 2007
College Slice

Twerk from Home posted:

What would the downside of a modern CPU architecture running at 20GHz be...?
Not sure how to respond, as it wouldn't be modern architecture, would it?

<speaks in French>
<is replied to in Tagalog>

Let's not play the what-if game here. By your logic, why not just dream/ask of having a bajillion+1 cores that run at lunacy+X GHz? :rolleyes:

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"
Seems you'll get your 20Ghz processor by 2020 now: http://www.extremetech.com/computing/185688-ibm-betting-carbon-nanotubes-can-restore-moores-law-by-2020

Nam Taf
Jun 25, 2005

I am Fat Man, hear me roar!

Twerk from Home posted:

You know what would be way better than us getting better at using multiple cores? Really fast single cores. I would love to have a single core 17.6GHz Haswell instead of a quad core 4.4 GHz one.

We were supposed to be around 20 GHz by now! http://www.geek.com/chips/intel-predicts-10ghz-chips-by-2011-564808/

The comments on that article are amazing:

Allen posted:

If 10 GHz is the best that Intel can do by 2011, AMD or somebody else is going to eat their lunch. Intel better pick up the pace if they want to remain dominant. Besides, I want it NOW. What will I do with it. Well, I also want the applications now. I guess I've been spoiled by the industry and expect incredible improvements every year.

StickWithApple posted:

doesn't matter the speed of intel's chip in 2011 because motorola and ibm will already have chips out by then that are more effective, use energy better, and run applications ready for the new iMac's that are introduced at the 2011 MacWorld. They'll run at somewhere near 7GHz and still be faster than an 11GHz or even 128GHz sh*t that intel puts out..

Sinful posted:

Back in grad school I worked on computing with light and transistors that had ten states (0-9 or base 10) rather than 2 (0 and 1 or binary). Anybody who doesn't think that these types of technology won't be commercially available by 2011 is kidding themselves.

computer parts
Nov 18, 2010

PLEASE CLAP
I don't know if it's been mentioned but Thunderbolt 3 is now in a USB-C form factor.



I think it's supposed to launch alongside Skylake too.

Lowen SoDium
Jun 5, 2003

Highen Fiber
Clapping Larry

Twerk from Home posted:

You know what would be way better than us getting better at using multiple cores? Really fast single cores. I would love to have a single core 17.6GHz Haswell instead of a quad core 4.4 GHz one.

We were supposed to be around 20 GHz by now! http://www.geek.com/chips/intel-predicts-10ghz-chips-by-2011-564808/

quote:

I'll be waiting in line at CompUSA.

:)

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I wonder if Thunderbolt will ever get much cheaper though. It's cost-prohibitive compared to USB-c devices by an order of magnitude at least and that results in stupidly high prices for Thunderbolt accessories typically. Heck, I'm kind of shocked that the Thunderbolt ports on my LG34UM95P didn't make the monitor $1300+ at launch.

evilweasel
Aug 24, 2002

necrobobsledder posted:

I wonder if Thunderbolt will ever get much cheaper though. It's cost-prohibitive compared to USB-c devices by an order of magnitude at least and that results in stupidly high prices for Thunderbolt accessories typically. Heck, I'm kind of shocked that the Thunderbolt ports on my LG34UM95P didn't make the monitor $1300+ at launch.

One of the reasons for that is it's so little-used that none of the parts have really achieved economies of scale. If it gets popular costs will come down, though by how much is an open question.

computer parts
Nov 18, 2010

PLEASE CLAP

necrobobsledder posted:

I wonder if Thunderbolt will ever get much cheaper though. It's cost-prohibitive compared to USB-c devices by an order of magnitude at least and that results in stupidly high prices for Thunderbolt accessories typically. Heck, I'm kind of shocked that the Thunderbolt ports on my LG34UM95P didn't make the monitor $1300+ at launch.

Supposedly all of the USB-C 3.1 ports will be able to interact with Thunderbolt so the only cost issue will be the cable.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

necrobobsledder posted:

I wonder if Thunderbolt will ever get much cheaper though. It's cost-prohibitive compared to USB-c devices by an order of magnitude at least and that results in stupidly high prices for Thunderbolt accessories typically. Heck, I'm kind of shocked that the Thunderbolt ports on my LG34UM95P didn't make the monitor $1300+ at launch.

bingo

usb chips are dirt cheap compared to intel's weird rear end t-bolt chip and licensing scheme

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Tapedump posted:

Not sure how to respond, as it wouldn't be modern architecture, would it?

<speaks in French>
<is replied to in Tagalog>

Let's not play the what-if game here. By your logic, why not just dream/ask of having a bajillion+1 cores that run at lunacy+X GHz? :rolleyes:

All that I meant is as a software developer, faster single cores would be vastly preferable to multiple slower cores. Why couldn't we have modern architectures at higher clocks? The reason I chose ~20GHz is that it's roughly 4x the current clock speed of quad cores, so theoretically there's about the same number of total cycles.

JawnV6
Jul 4, 2004

So hot ...

pmchem posted:

There are compilers and debuggers being written for Intel chips by a heck of a lot more companies, universities, and worldwide open source teams than "just Intel". No idea how you're even making that statement.
Love it when someone openly admits they don't know what I'm talking about and still talks down to me. I said "debuggers" and you're the one who tried to jam "compilers" in next to it and confused yourself. Less writing, more designing. Think less gdb, more JTAG. Again, this all makes more sense if you'd been following the anti-trust discussion from the beginning instead of popping in for the fourth question and not bothering with this context.

pmchem posted:

Intel has an incredible manufacturing/process advantage and huge resources for chip R&D, beyond something like "Cray" which was never really a CPU company so is a terrible example. It's far beyond what DEC was to Intel in the 90s. Cray still exists by the way: I used three of them today! They're not ignoring the competition from ARM; check out the recent realworldtech article about Atom improvements. Indeed, even for Intel facets of design are subject to competitive forces.
Claiming today, in 2015, that you're "still using a Cray" is essentially ceding the argument. If Intel drops x86, gets bought and spun back out by Pixar, then re-brands as an ARM systems integrator with an interconnect specialty, I'd call that ARM winning. If that's being chalked up as an Intel win I can't imagine what you'd call a failure state. And we're still pretending your list is relevant without justification. It was chock full of esoteric architectures before x86 shuffled in from below as a commodity and that it's still stuffed with dinosaurs is of dubious predictive value.

MaxxBot posted:

Theoretically ARM has the advantage of its decoder logic taking up a smaller portion of the CPU die, but decoders have been shrinking as a share of the die across all microarchitectures anyway, so this might not end up mattering that much. From what I have seen so far, Intel still has a big advantage in performance/watt over a theoretical ARM competitor thanks to their superior fab, especially with their low-power Xeons. It would be interesting to see how the performance/watt would compare if Intel made some ARM cores, which it looks like might happen before too long.
Decode is a red herring. Intel used to make ARM cores. They were some of the best, too, and Intel made a company-wide bet that x86 was going to come out on top when they spun that business off.

Why are we letting Intel waste time with x86? Why not nationalize them, open their fab to everyone, and enjoy the benefits of the best ARM design teams getting the best possible silicon to run on?

MaxxBot posted:

EDIT: But fundamentally saying "Intel's going to be eaten alive by ARM" doesn't really make any sense. ARM is an ISA and Intel is a chip manufacturer, if Intel eventually sees an advantage in dumping x86 for ARM they will do so and go along designing and manufacturing chips just as they did before.

http://seekingalpha.com/article/3229806-intel-becomes-an-arm-chip-maker
I'm considering Intel going to ARM architecture as a failure. Firing the x86 half of the company, slashing and burning margins, and becoming a big fish in the same ocean as everyone else makes it a different Intel. If shuttering x86 and becoming another me-too in the ARM's race isn't "arm eating Intel alive" then I can't imagine what a failure state actually looks like.

Twerk from Home posted:

I'm not joking. What would the downside of a modern CPU architecture running at 20GHz be vs a 4ghz quad core? You can still schedule multiple threads on a single core just fine, and when you do have just one thread crunching it would be way faster.
Can I be there when you tell the poor designer trying to hit a 1ns window that he now has to hit .05ns? It's ok, we'll reassure him with this bit about the OS's scheduler. Also we went from 4 cores but multiplied the time by 5. So everyone be aware we're not using that boring linear speedup, it's something superlinear at the least.
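
(If you want the arithmetic spelled out, here's a throwaway back-of-the-envelope sketch in C; the 4GHz/4-core and 20GHz numbers are just the hypotheticals from the posts above, not anyone's real design targets.)

code:

/* Toy arithmetic only: clock periods and aggregate cycle budgets for a
   hypothetical 4-core 4GHz part vs. a single-core 20GHz part. */
#include <stdio.h>

int main(void) {
    double quad_ghz = 4.0, single_ghz = 20.0;
    int quad_cores = 4;

    printf("4GHz period:  %.3f ns\n", 1.0 / quad_ghz);    /* 0.250 ns */
    printf("20GHz period: %.3f ns\n", 1.0 / single_ghz);  /* 0.050 ns */

    /* Aggregate cycles per second: 4 cores x 4GHz = 16e9 vs. 1 x 20e9.
       The single 20GHz core is being asked for 25% more total cycles,
       i.e. a superlinear trade, not a straight swap. */
    printf("aggregate: %.0f vs %.0f billion cycles/s\n",
           quad_cores * quad_ghz, single_ghz);
    return 0;
}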

Twinty Zuleps
May 10, 2008

by R. Guyovich
Lipstick Apathy

evilweasel posted:

One of the reasons for that is it's so little-used that none of the parts have really achieved economies of scale. If it gets popular costs will come down, though by how much is an open question.

Has there been any reason for Thunderbolt to become popular that has legs? All I've seen for it are monitor hookups for big Macs and serious business peripherals that have firewire, usb 3 or straight up pci-e connected models available as well.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

JawnV6 posted:

Can I be there when you tell the poor designer trying to hit a 1ns window that he now has to hit .05ns? It's ok, we'll reassure him with this bit about the OS's scheduler. Also we went from 4 cores but multiplied the time by 5. So everyone be aware we're not using that boring linear speedup, it's something superlinear at the least.

I should have just used 4GHz and 16GHz; I meant I'd rather have all the cycles on one core rather than spread across several. Also, all my thoughts have been purely from the software end, ignoring chip design itself; I know absolutely nothing about it.

Gwaihir
Dec 8, 2009
Hair Elf

necrobobsledder posted:

I wonder if Thunderbolt will ever get much cheaper though. It's cost-prohibitive compared to USB-c devices by an order of magnitude at least and that results in stupidly high prices for Thunderbolt accessories typically. Heck, I'm kind of shocked that the Thunderbolt ports on my LG34UM95P didn't make the monitor $1300+ at launch.

Previous versions also required expensive active cables, while this one works just fine with a $2 passive cable (at current-generation bandwidth levels). The 40Gbps version still needs an active cable, though.

Tapedump
Aug 31, 2007
College Slice

Twerk from Home posted:

All that I meant is as a software developer...
... Why couldn't we have modern architectures at higher clocks?
And theeeeere it is. The first statement which clarifies all his others.

To answer the question, it's been answered at least twice on this page.

Ragingsheep posted:

That's what Intel tried to do with the P4 and they ran straight into a thermal brickwall.

LiquidRain posted:

Just as you said, it's impossible. :)

Intel and ARM keep trying through all the ways they can (though the conspiracy theorists here argue otherwise), and all we get are IPC, branch prediction, cache efficiencies, and new instruction sets that help single threads. Physics is a bitch. Can't go faster.
And that poster was replying to you directly.

Twerk from Home posted:

Oh, I know that it's impossible in reality
Welp, I'm out. Three posts on this very page answer it, and one of them is yours.

Please stop. I'll not pick up this derail again.

Proud Christian Mom
Dec 20, 2006
READING COMPREHENSION IS HARD
this wonderful and completely imagined 'ARM is going to eat Intel' argument, brought to you on Intel processors.

JawnV6
Jul 4, 2004

So hot ...

Twerk from Home posted:

I should have just used 4GHz and 16GHz; I meant I'd rather have all the cycles on one core rather than spread across several. Also, all my thoughts have been purely from the software end, ignoring chip design itself; I know absolutely nothing about it.
Well, yeah, if you knew anything about chip design you would've seen the multi-core future coming a decade ago. That much is clear. We all know SW folks are lazy and can't be bothered to learn and leverage parallelism, it's just funny to see someone take that and start demanding physical impossibilities.

go3 posted:

this wonderful and completely imagined 'ARM is going to eat Intel' argument, brought to you on Intel processors.
Wow, if they're totally free of competitive pressure I guess we should let the anti-trust wolves tear them apart. So we're back to do you split the design teams up or split design from fab?

JawnV6 fucked around with this message at 16:12 on Jun 3, 2015

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

JawnV6 posted:

Well, yeah, if you knew anything about chip design you would've seen the multi-core future coming from a decade ago. That much is clear. We all know SW folks are lazy and can't be bothered to learn and leverage parallelism, it's just funny to see someone take that and start demanding physical impossibilities.

It's a tough problem. Some workloads parallelize well, some just don't. If it were easy to do highly parallel software then AMD would be more competitive right now. Their FX CPUs with 8 integer cores theoretically have similar performance to Ivy Bridge quad cores for highly parallel integer math.
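
(The "some workloads just don't parallelize" point is basically Amdahl's law. A quick sketch, with parallel fractions invented purely for illustration:)

code:

/* Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
   of the work that parallelizes and n is the core count. The fractions
   below are made up for illustration, not measurements of any real CPU. */
#include <stdio.h>

static double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    double fractions[] = {0.50, 0.90, 0.99};
    int cores[] = {2, 4, 8};

    for (int f = 0; f < 3; f++)
        for (int c = 0; c < 3; c++)
            printf("p=%.2f, %d cores -> %.2fx speedup\n",
                   fractions[f], cores[c], amdahl(fractions[f], cores[c]));

    /* Even at p = 0.90, eight cores only buy about 4.7x, which is why an
       8-integer-core FX doesn't automatically beat a fast quad core. */
    return 0;
}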

Gwaihir
Dec 8, 2009
Hair Elf

go3 posted:

this wonderful and completely imagined 'ARM is going to eat Intel' argument, brought to you on Intel processors.

I'm continually baffled why this is and has been a thing for so long. Like, why do people care so deeply what the instruction set is that runs their machines or the servers hosting their websites and databases and such?
Because holy poo poo, between the "ARM ANY DAY NOW!!!" crusaders and the "Intel just isn't innovating fast enough" :qq: brigade, it always seems like it's way more important to people than it should be.

Like, I legitimately have no earthly idea what the various advantages and disadvantages are of various styles of architecture, be it x86, arm version whatever, Power, etc. I know that the arm ISA takes marginally fewer transistors in terms of die space, but that ISA decode blocks are a pretty small % of dies in general these days so that's not really a deal like it was back when we were on 130nm chips. Is x86 just a really lovely scheme to work with being propped up by truckloads of R&D money? If someone is an embedded programmer I'd actually like to know.

Because honestly it seems like we've continued to see pretty fantastic leaps in server performance from each generation of Xeon, while mobile chips have vastly improved battery life and maintained good enough performance over the last few years. Desktops haven't exactly leapt forward in single threaded performance as mentioned many times, but welp that's market forces for you. The money ain't there even if the nerd demand is.

And like the other poster mentioned, so what if arm "wins?" Arm doesn't make chips.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I thought x86 is just a front-end these days? I mean, with all that decoding to micro-ops and poo poo.

Gwaihir
Dec 8, 2009
Hair Elf

Combat Pretzel posted:

I thought x86 is just a front-end these days? I mean, with all that decoding to micro-ops and poo poo.

I have no idea. Like, for the people clamoring for arm to "Win" do you think that means we get meaningfully better desktop CPUs in some form? or better performing laptops with even better battery life? Is it just "Competition will mean we get better stuff than we have now?"

(Although at least in the laptop arena I sorta think the CPU's impact on runtime has definitely started to get eclipsed by things like 4k+ screens- See the 10 hour vs 15 hour XPS13 runtimes with 4k vs 1080 panels)

PC LOAD LETTER
May 23, 2005
WTF?!
Supposedly some stuff can still be processed natively for speed, but yeah, most everything else is 'cracked' into micro-ops of some sort (in some cases over hundreds or thousands of cycles for seldom-used 'legacy' instructions) to run on the 'back end', which actually does all the computational work.

Generally it's not seen as a big deal anymore to do this sort of thing, at least for x86. It's the x87 FPU that's supposed to be the real nightmare to deal with these days, what with its weird support for stuff like 80-bit double-extended-precision instructions that nobody messes with anymore but that still have to be supported.
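
(If you've never bumped into the 80-bit stuff: on x86/x86-64 with GCC or Clang, long double is still the x87 double-extended format. A quick peek, with the caveat that MSVC and most ARM targets map long double to a plain 64-bit double, so the output differs there:)

code:

/* Peek at the x87 double-extended type via <float.h>. */
#include <stdio.h>
#include <float.h>

int main(void) {
    /* 64-bit significand on x87, vs. 53 bits for a plain double. */
    printf("LDBL_MANT_DIG = %d, DBL_MANT_DIG = %d\n",
           LDBL_MANT_DIG, DBL_MANT_DIG);
    printf("sizeof(long double) = %zu bytes\n", sizeof(long double));

    /* The extra precision is visible: 1 + LDBL_EPSILON survives as a
       long double but rounds back to exactly 1.0 when stored in a double. */
    long double x = 1.0L + LDBL_EPSILON;
    double      y = 1.0  + LDBL_EPSILON;
    printf("long double sees the difference: %d\n", x > 1.0L);
    printf("plain double does not:           %d\n", y > 1.0);
    return 0;
}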

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

evilweasel posted:

One of the reasons for that is it's so little-used that none of the parts have really achieved economies of scale. If it gets popular costs will come down, though by how much is an open question.
My understanding is that Thunderbolt is used so little because it has been many times more expensive than USB 3.0, both at launch and after Thunderbolt had been out for years. That is, Thunderbolt was crippled in the marketplace by being so expensive and cumbersome to license compared to USB. But beyond the economy-of-scale issue, Intel's licensing of Thunderbolt is untenable for pretty much anyone selling low-margin machines (read: everyone besides Apple), and manufacturers would rather crank out more tablets and cheaper laptops that fly off shelves than cater to niche users who want tons of bandwidth on a single cable that also serves as a monitor cable.

Combat Pretzel posted:

I thought x86 is just a front-end these days? I mean, with all that decoding to micro-ops and poo poo.
It's been that way for at least a decade. The hard part is scheduling these micro-ops / RISC-like instructions in an optimal way that keeps existing software from breaking. Memory barriers and how they work with x86 are kind of a loophole that was used for lockless concurrent algorithms on the Xbox for a while. Nowadays we've made the fetch/load semantics clearer with new instructions (the last fuzzy memory I have from a JVM design talk implied the new instructions should be used for better guarantees). Getting auto-vectorization to work on a lot of compilers is tricky enough, but software developers oftentimes don't have much of a choice but to write a serial, step-by-step series of instructions that isn't very parallelizable (the best you could do is some memoization or similar in hardware to compute the result ahead of time, but that'd be a case of "the software developer needs to not issue the same instructions with the same data every time" as the root cause).
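
(A minimal sketch of the acquire/release handoff being described, written with C11 atomics and C11 threads rather than whatever Xbox-era intrinsics were actually involved; the names and values are made up for illustration, and it assumes a libc that ships <threads.h>, e.g. glibc 2.28+.)

code:

/* Classic message passing: the producer writes data, then publishes a flag
   with release ordering; the consumer spins on the flag with acquire
   ordering before reading the data. On x86 the hardware's strong memory
   model makes the release store and acquire load nearly free; the explicit
   memory_order_* arguments just spell out the intent for the compiler. */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static int payload;                   /* plain, non-atomic data being handed off */
static atomic_bool ready = false;

static int producer(void *arg) {
    (void)arg;
    payload = 42;                                                /* 1: write data */
    atomic_store_explicit(&ready, true, memory_order_release);  /* 2: publish    */
    return 0;
}

static int consumer(void *arg) {
    (void)arg;
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                               /* spin until the flag is published */
    printf("payload = %d\n", payload);  /* guaranteed to observe 42 */
    return 0;
}

int main(void) {
    thrd_t p, c;
    thrd_create(&c, consumer, NULL);
    thrd_create(&p, producer, NULL);
    thrd_join(p, NULL);
    thrd_join(c, NULL);
    return 0;
}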

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Is there any point in introducing a completely new instruction set, that moves the burden of optimization a little more back to the compiler? Using mode switching antics, this doesn't sound unpossible.

Nintendo Kid
Aug 4, 2011

by Smythe
Introducing a completely new instruction set means having to build an extensive emulation setup that can provide performance/power usage not significantly worse than previous chips if you want it to get any traction in the real computer market.

PC LOAD LETTER
May 23, 2005
WTF?!

Combat Pretzel posted:

Is there any point in introducing a completely new instruction set, that moves the burden of optimization a little more back to the compiler? Using mode switching antics, this doesn't sound unpossible.
Intel tried that with Itanium. It turns out trying to make the compiler do more of the work for the hardware and programmer doesn't work IRL even though it sounds great on paper.

thebigcow
Jan 3, 2001

Bully!
NATIONALIZE INTEL! Malaysia will lead us into the new millennium of computing.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Nintendo Kid posted:

Introducing a completely new instruction set means having to build an extensive emulation setup that can provide performance/power usage not significantly worse than previous chips if you want it to get any traction in the real computer market.
Eh, I meant that the CPU would have two decoders, the x86 one and a new one, that you could switch between, similar to jumping between real, protected, and long modes. I suppose the idea is to expose the micro-ops a little more directly to the compiler. Unless the internal format changes a lot between CPU tocks, in which case it's futile.

Combat Pretzel fucked around with this message at 17:42 on Jun 3, 2015

JawnV6
Jul 4, 2004

So hot ...

Twerk from Home posted:

It's a tough problem. Some workloads parallelize well, some just don't. If it were easy to do highly parallel software then AMD would be more competitive right now. Their FX CPUs with 8 integer cores theoretically have similar performance to Ivy Bridge quad cores for highly parallel integer math.
Just because you don't know jack poo poo about my area doesn't mean I'm as ignorant about yours, thanks for the patronizing bullshit and zero acknowledgement of just how pie in the sky your idea is.

CS folks have this tendency to ask things of systems in a particularly inelegant way. There's some huge pressure to reduce things to simple asks that totally lose grounding in the painstakingly created abstractions. I saw some talk from a guy saying "if we had infinite memory, problem solved!!" and my systems prof came out giggling. You've essentially got infinite memory now. The whole problem is fast access to the bits you want in a timely fashion, not the arbitrary limit on total storage.

Gwaihir posted:

Like, I legitimately have no earthly idea what the various advantages and disadvantages are of various styles of architecture, be it x86, arm version whatever, Power, etc. I know that the arm ISA takes marginally fewer transistors in terms of die space, but that ISA decode blocks are a pretty small % of dies in general these days so that's not really a deal like it was back when we were on 130nm chips. Is x86 just a really lovely scheme to work with being propped up by truckloads of R&D money? If someone is an embedded programmer I'd actually like to know.
I don't think you get to have much of an opinion at all with a tabloid-grade understanding of the tradeoffs.

Everyone's really quick to point out that x86 gets decoded and that decode takes more transistors. So I guess we're totally unconcerned with the fact that a 2-byte opcode can kick off a flow that checks hundreds of bytes of state and would break out into hundreds of ARM operations, and that whatever bus is bringing instructions in doesn't have that kind of bandwidth to spare. It's so unbelievably myopic towards the tradeoffs implied by a basic CISC/RISC analysis. Would you spend transistors to reduce bus bandwidth?
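
(A concrete sketch of the "tiny opcode, big flow" point for the x86-64 + GCC/Clang case: REP MOVSB is a two-byte instruction, F3 A4, that the hardware expands internally, versus the explicit byte loop a simple RISC encoding would have to fetch instruction by instruction. Not a claim about what any real decoder emits.)

code:

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* One two-byte opcode; the CPU's microcode/hardware does the whole copy.
   GCC/Clang inline asm, x86-64 only. */
static void copy_rep_movsb(void *dst, const void *src, size_t n) {
    __asm__ volatile("rep movsb"
                     : "+D"(dst), "+S"(src), "+c"(n)  /* rdi, rsi, rcx */
                     :
                     : "memory");
}

/* The explicit loop a simple RISC encoding would fetch: each iteration is
   a load, a store, an increment, a compare, and a branch coming in over
   the instruction bus. */
static void copy_byte_loop(unsigned char *dst, const unsigned char *src,
                           size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i];
}

int main(void) {
    char src[16] = "hello, decoder";
    char a[16] = {0}, b[16] = {0};
    copy_rep_movsb(a, src, sizeof src);
    copy_byte_loop((unsigned char *)b, (const unsigned char *)src, sizeof src);
    printf("%s / %s / same: %d\n", a, b, memcmp(a, b, sizeof src) == 0);
    return 0;
}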

Gwaihir posted:

And like the other poster mentioned, so what if arm "wins?" Arm doesn't make chips.
The entire discussion isn't given space to exist in this phrasing. If you don't understand, don't care to understand, and don't have enough relevant knowledge to partake could you just leave the talking to the adults?

PC LOAD LETTER posted:

Intel tried that with Itanium. It turns out trying to make the compiler do more of the work for the hardware and programmer doesn't work IRL even though it sounds great on paper.
I really think it comes down to knowledge available at runtime. The compiler's never going to know what's cached at any given moment so perfect scheduling of execute resources is only going to take you so far. That wasn't the only area promised by Sufficiently Smart, but it's the kind of thing I can't imagine a non-hw solution for.

Gwaihir
Dec 8, 2009
Hair Elf

JawnV6 posted:

I don't think you get to have much of an opinion at all with a tabloid-grade understanding of the tradeoffs.


Great, so talk then. Why is arm eating intel's lunch and what does them "winning" get us, the consumer? Are there huge flaws in current (Intel) chips that just haven't been exposed because AMD is a dumpster fire and the various arm licensees don't have Intel's foundry expertise? Do you just think we'd have much better chips in general if there were more competitors in the CPU market?

e: And it's not like I have to be a dedicated embedded programmer or IC engineer to grasp the applied implications of how different chips perform, but thanks for that anyways :rolleyes:

Gwaihir fucked around with this message at 18:15 on Jun 3, 2015

computer parts
Nov 18, 2010

PLEASE CLAP

Wulfolme posted:

Has there been any reason for Thunderbolt to become popular that has legs? All I've seen for it are monitor hookups for big Macs and serious business peripherals that have firewire, usb 3 or straight up pci-e connected models available as well.

We're right about at the point where 4k stuff is starting to be adopted so there's that at least.

It's unlikely that backups or external graphics cards or whatever are going to be a driver, I'll admit.

thebigcow
Jan 3, 2001

Bully!
Is Thunderbolt on PC still crippled?

Gwaihir
Dec 8, 2009
Hair Elf
How so? Driver support?

Proud Christian Mom
Dec 20, 2006
READING COMPREHENSION IS HARD
Tablets are totally gonna replace computers too

thebigcow
Jan 3, 2001

Bully!

Gwaihir posted:

How so? Driver support?

I am way out of date on things but basically

http://www.anandtech.com/show/8529/idf-2014-where-is-thunderbolt-headed

quote:

Thunderbolt on PCs: A Crippled Experience

The reason for the far from optimal experience with Thunderbolt on PCs boils down to two different aspects, the hardware and the software. In terms of hardware, Intel has never allowed motherboard vendors to hang the Thunderbolt silicon / add-in card off the CPU's PCIe lanes. These have to hang off the platform controller hub (PCH). On the other hand, Apple was allowed to hook up the Thunderbolt silicon directly to the CPU. The reason behind this leads us to the software side of things.

Apple has full control over the operating system. Hanging Thunderbolt peripherals directly off the CPU's PCIe lanes requires extensive support from the operating system, particularly when it comes to hot plugging devices and/or waking up peripherals from sleep mode. Over the PCIe lanes off the PCH, Intel has more control via its chipset drivers. Ultimately, it looks like Microsoft dropped the ball and Intel decided to come up with a certification solution by only allowing Thunderbolt silicon to talk to the PCH for all PC boards.

While Microsoft continues to twiddle its thumbs, Intel has decided to come up with less restrictive hardware suggestions to bridge the Thunderbolt experience gap between Macs and PCs.

canyoneer
Sep 13, 2005


I only have canyoneyes for you

go3 posted:

Tablets are totally gonna replace computers too

Can't tell if sarcastic, but for digital content consumption, tablets/phones are pretty much there already.
