Rastor
Jun 2, 2001

People talking about "waiting for Maxwell" now are really waiting for the 20nm process change. That's scheduled for roughly Q3 -- and with process changes getting more and more difficult, that realistically means early 2015, maybe.


Rastor
Jun 2, 2001

I've been there, only instead of PC parts it was $4600 for the first ever HDTV with LED local dimming. It was available for less than $1600 in less than 9 months.

Oh well. At least it still works and the picture still looks good.

Rastor
Jun 2, 2001

Jesus Christ, 230GB/sec memory bandwidth. There are definitely some math/science workloads out there that could utilize that thing.

Rastor
Jun 2, 2001

mobby_6kl posted:

Oh gently caress you Intel, are we now not even pretending that Broadwell will be out this holiday season? Because if it's not available when the consumer Rift is released, I won't be caught holding my dick in my hands, and will.... get an AMD, ok!
Through 2015 Intel is the king.

AMD may or may not have something new and competitive in the 2016-2017 timeframe.

After that... well, HP claims they are creating an entirely new architecture based on memristors and photonic interconnects, but that just seems crazy.

Rastor
Jun 2, 2001

Shaocaholica posted:

So....buttcoins?

Chuu posted:

Trading Firms are going to love these depending on the price point and HDL.


Here's Charlie Demerjian's ramblings about why Intel would add that feature. Charlie's often a bit loopy but sometimes he comes up with some interesting thoughts.

Rastor
Jun 2, 2001

Note that the nanometer figures in process node names aren't actual physical measurements of the circuits.

http://eandt.theiet.org/blog/blogpost.cfm?threadid=48709&catid=366

http://spectrum.ieee.org/semiconductors/devices/the-status-of-moores-law-its-complicated

Rastor
Jun 2, 2001

There's no doubt we're reaching the limits of silicon.

The future may be the spiritual successor to the vacuum tube.

Rastor
Jun 2, 2001

Speaking of process difficulties: Intel 14nm desktop processors delayed again to Q3 2015.

Rastor
Jun 2, 2001

Two recent stories I'm surprised haven't been discussed here in the Intel thread:


ASUS has created a proprietary socket which lets them bypass Intel's voltage regulators for greater overclocking.

http://wccftech.com/asus-oc-socket-examined-lga-2011/
http://www.tomshardware.com/news/asus-oc-socket-warranty-x99,27597.html




Intel is now admitting that Broadwell is so delayed that it has crashed into Skylake in the schedule so they're now altogether dropping plans for some Broadwell desktop chips (edit: but they will still produce low-power and high-end/high-power Broadwell chips).

http://arstechnica.com/gadgets/2014/09/lower-end-desktop-cpus-wont-get-broadwell-will-need-to-wait-for-skylake/

Rastor fucked around with this message at 21:10 on Sep 6, 2014

Rastor
Jun 2, 2001

And here's a third topic for discussion: Phoronix had an X99 motherboard go up in smoke and flames, then Legitreviews experienced a similar event which took out the $1000 processor as well.

Phoronix was using an MSI X99S SLI Plus and Legitreviews had an ASUS X99 Deluxe.

Rastor
Jun 2, 2001

Beige desktop case chat: I remember when I was young and stupid, I thought the Packard Bell Corner Computer was a really clever and attractive design.



Good grief I was a dumb kid. :stare:

Rastor
Jun 2, 2001

Not until a number of years from now, by which time silicon progress will have nearly ground to a halt, sputtering along with enormous gaps of time between process changes.

Rastor
Jun 2, 2001

KillHour posted:

Still impressive; further than I thought they were. Unless they can get a band gap, though, there's no hope in ever making a digital logic circuit out of it.

Rastor
Jun 2, 2001

Lord Windy posted:

Is Quantum computing some new super fast computer or is it a kind of new magic way of looking at electrons? I remembering someone telling me it's about using the fuzzy circumference of electrons as bits instead of using electrons as bits (ie, 1 electron or whatever is now 2+ bits instead of just 1) but that doesn't sound like any performance gains.
Quantum computers are not simply faster computers (though they have the potential to quickly solve some problems that classical computers couldn't solve in a reasonable time, or at all), nor do they work by switching flows of electrons. They are fundamentally different things: even at the concept level they're based on qubits instead of bits. They would be used for solving different problems than classical computers, and though no doubt someone will invent some kind of game for them, you won't be running Crysis on one.
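To make the qubits-vs-bits point concrete, here's a toy state-vector sketch in Python. This is purely illustrative -- a real quantum computer doesn't run by simulating amplitudes, it just shows how n qubits are described by 2^n amplitudes at once rather than one of 2^n values:

```python
import numpy as np

# A classical n-bit register holds exactly one of 2**n values.
# An n-qubit register is described by 2**n complex amplitudes at once.
n = 2
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0  # start in |00>

# Hadamard gate: a 2x2 unitary that creates an equal superposition.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
# Apply H to the first qubit of the 2-qubit register (H tensor I).
U = np.kron(H, np.eye(2))
state = U @ state

# Measurement probabilities follow the Born rule: |amplitude|^2.
probs = np.abs(state)**2
print(probs)  # -> [0.5 0.  0.5 0. ] : equal chance of |00> and |10>
```

The register is now in a genuine superposition of two basis states, which has no analogue in a classical register -- and is why "2+ bits per electron" is the wrong mental model.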

Rastor
Jun 2, 2001

I'm pretty sure we've said so in this thread before, but x86 is an instruction set, and has nothing to do with the underlying architecture of the system. You could implement x86 on any computing architecture -- on vacuum tubes or relay switches, even. And in fact no Intel processor since the Pentium Pro has actually executed the x86 instruction set internally; instructions are translated into micro-ops for internal processing.

x86 is like current keyboard layouts: so entrenched that I cannot imagine it ever going away. Ever.
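The "x86 is an interface, not an implementation" idea can be sketched as a toy front-end decoder that splits each architectural instruction into simpler internal micro-ops. Every name and micro-op here is made up for illustration -- real decoders work on binary encodings and Intel's actual micro-op formats are undocumented:

```python
# Toy decoder: one architectural x86-ish instruction in, a list of
# simpler internal micro-ops out. All mnemonics are hypothetical.
UOP_TABLE = {
    # "add eax, [mem]" does a load AND an add in one instruction;
    # internally that's (at least) two micro-ops.
    "add eax, [mem]": ["load tmp0, [mem]", "add eax, eax, tmp0"],
    # Simple register-to-register ops often map 1:1.
    "add eax, ebx": ["add eax, eax, ebx"],
    # "push" decodes into a stack-pointer update plus a store.
    "push eax": ["sub esp, esp, 4", "store [esp], eax"],
}

def decode(instruction: str) -> list[str]:
    """Translate one architectural instruction into micro-ops."""
    return UOP_TABLE[instruction]

program = ["push eax", "add eax, [mem]"]
uops = [u for insn in program for u in decode(insn)]
print(uops)
# -> ['sub esp, esp, 4', 'store [esp], eax',
#     'load tmp0, [mem]', 'add eax, eax, tmp0']
```

The back end only ever sees micro-ops, which is why the same x86 programs keep running while the internals get completely redesigned generation after generation.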

Rastor
Jun 2, 2001

KillHour posted:

Most of these huge companies use in house software written for these mainframes 30+ years ago. The problem is when they try to move to the shiny new commercial stuff that runs on x86 and it breaks everything (*cough* SAP *cough*). Say what you will about those in house programs not being pretty and having spaghetti code, but after 30 years of improvements/bugfixes, they're probably some of the most stable reliable terrifying but functional Frankenstein's monsters out there.
Airline systems are a classic example of this.

Did you know: your airline reservation is a six-character letters-and-numbers code because back in the day it identified the block of disk on the mainframe that contained the data for the reservation. There was no such thing as a "database" or "locks", you just read/wrote that block directly.
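Here's a sketch of how a six-character alphanumeric code can directly encode a disk block address, in the spirit of those old mainframe systems. To be clear, this is not the actual scheme any real reservation system used (the real ones had their own alphabets and record layouts); it just shows the "the code IS the address" idea:

```python
# Illustrative only: a fixed-width base-36 code that round-trips to a
# block number with no database lookup -- the locator IS the address.
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"  # base 36
CODE_LEN = 6

def block_to_locator(block: int) -> str:
    """Encode a block number as a fixed-width six-character code."""
    chars = []
    for _ in range(CODE_LEN):
        block, digit = divmod(block, 36)
        chars.append(ALPHABET[digit])
    return "".join(reversed(chars))

def locator_to_block(code: str) -> int:
    """Decode straight back to the block number."""
    block = 0
    for ch in code:
        block = block * 36 + ALPHABET.index(ch)
    return block

loc = block_to_locator(123456789)
print(loc, locator_to_block(loc))  # the code round-trips to 123456789
```

Six base-36 characters cover 36^6 (about 2.2 billion) blocks, which is why a short human-readable code was enough to address the whole reservation store directly.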

Rastor
Jun 2, 2001

Not as much efficiency as the many billions of dollars it would cost, unfortunately.

Rastor
Jun 2, 2001

Lord Windy posted:

I can't wait until we have some new storage that is both RAM and Harddisk. Maybe Flash Memory will one day get fast enough. What does 400mb/s translate to in RAM land? Although 160ms latency is essentially forever in computers.

Instant Grat posted:

Google "Memristor".

A) 400MB/s? 160ms latency? Google NVMe drives, such as the Samsung XS1715. 3000MB/s read / 1400MB/s write and 0.2ms latency.

B) http://www.hpl.hp.com/research/systems-research/themachine/
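For a sense of scale on those numbers, here's the back-of-envelope arithmetic (the hypothetical 400 MB/s / 160 ms device is the one from the quoted post; the XS1715 figures are the vendor's quoted peaks):

```python
# Comparing the quoted figures: a hypothetical 400 MB/s, 160 ms device
# vs. the Samsung XS1715's quoted 3000 MB/s reads at 0.2 ms.
old_bw, old_lat = 400, 160e-3      # MB/s, seconds
nvme_bw, nvme_lat = 3000, 0.2e-3   # MB/s, seconds

gigabyte = 1024  # MB
print(f"1 GB read: {gigabyte/old_bw:.2f}s vs {gigabyte/nvme_bw:.3f}s")
print(f"latency ratio: {old_lat/nvme_lat:.0f}x")  # 800x
```

A 160 ms access time is 800 times worse than what shipping NVMe hardware was already doing, which is why it's "essentially forever" next to RAM.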


Edit: I see I was rather beaten by necrobobsledder:

necrobobsledder posted:

The bigger problem I see is that our software written at present is incapable of handling super high speed without rewrites and completely rethinking networking. Here's a good example of what is required to handle the network hardware coming down the pipe at 100 gigabits - it's NOT easy, and ironically enough it's somewhat gated by how fast your CPU can work: https://lwn.net/Articles/629155/

The penalty for getting something wrong in sending data is really severe for network applications and any form of high performance computing historically. So improving the memory hierarchy's latency as mentioned above is likely to provide a lot more throughput than simply doubling that theoretical bandwidth. Sure, bandwidth helps for peak performance, but that's an idealized view of the world. This is exactly how Intel has done so well in the past 10 years - clock speed doesn't matter, smarter cache, smarter branch prediction, more efficient TLBs, etc. have been far more helpful than just blindly scaling down transistors and putting small nuclear reactors into our homes (that won't work well anyway due to current leakage to begin with). The question that's unanswered is whether we'll hit a wall even on how smart we can be about this general programming paradigm. Even multi-core / parallel programming won't save us at a point if what we're doing requires serial processing like what's typical in most games because well... most game programmers aren't going to do threads everywhere just from handling overhead alone and guaranteeing some form of hard realtime guarantees that are what people demand from their games (although nobody does hard realtime in practice I'd say because nobody's going to die if you lost a couple ms worth of frames or something during a CS:Go match).

400Mb/s or 400MB/s? Either way though those are slow numbers for even back in 1999 if it's RAM. Flash is substantially faster for starters. Samsung's newest SSD coming out will go at 2000+ MBps sustained.
Well, in a lot of ways, this is already done on most OSes under the covers of a programmer's APIs because a great deal of calls get turned into mmap on Linux, for example, and that basically memory maps disk onto a memory range for you (among other neat options). So it's up to the OS to do this part. This is just one of the realities of legacy programs and bringing them into the current realities of virtual memory systems.

Rastor fucked around with this message at 18:10 on Feb 5, 2015

Rastor
Jun 2, 2001

Welmu posted:

Another Skylake leak:

Here's the WCCFT post.

Really looking forward to those 128MB GT4e graphics benchmarks. Though I predict Intel will choose not to price those parts competitively.

Rastor
Jun 2, 2001

Intel has been getting quite a black eye for their losses trying to get into the Mobile business; the Mobile division has lost something like $7 billion since 2012.

Today they announced they have solved that problem: they will no longer report Mobile profits/losses as a separate item.

Rastor
Jun 2, 2001

Lowen SoDium posted:

I don't think Skylake is getting 20 lanes from the CPU. 16 from the CPU and the PCH now has 20 3.0 lanes but they share bandwidth to the CPU.

I've seen conflicting sources on this, some saying that a Haswell on a Z97 chipset supported 16 PCIe 3.0 lanes and 8 PCIe 2.0 lanes, while a Skylake on a Z170 chipset will just have 20 PCIe 3.0 lanes.

Some (but not all) sources are saying that the Haswell configuration was 16 lanes managed by the CPU and 8 PCIe 2.0 lanes managed by the chipset, and the Skylake configuration is 16 lanes managed by the CPU and 20 PCIe 3.0 lanes managed by the chipset (36 lanes total). A quadrupling of bandwidth connected through the chipset seems unusual but not impossible.

Hopefully it will all be cleared up soonish.
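If the second set of sources is right, the back-of-envelope math on the chipset lanes works out like this (per-lane figures are the standard theoretical rates: PCIe 2.0 runs at 5 GT/s with 8b/10b encoding, PCIe 3.0 at 8 GT/s with 128b/130b):

```python
# Theoretical per-lane payload bandwidth, in MB/s.
PCIE2_PER_LANE = 5e9 * (8 / 10) / 8 / 1e6     # ~500 MB/s
PCIE3_PER_LANE = 8e9 * (128 / 130) / 8 / 1e6  # ~985 MB/s

z97_chipset = 8 * PCIE2_PER_LANE    # 8 PCIe 2.0 lanes off the PCH
z170_chipset = 20 * PCIE3_PER_LANE  # 20 PCIe 3.0 lanes off the PCH

print(f"Z97 chipset lanes:  {z97_chipset:.0f} MB/s")
print(f"Z170 chipset lanes: {z170_chipset:.0f} MB/s")
print(f"ratio: {z170_chipset / z97_chipset:.1f}x")
# roughly 4000 vs ~19700 MB/s -- call it ~5x on paper, though all of it
# still funnels through the single DMI link back to the CPU.
```

So the jump is closer to 5x in raw lane bandwidth, with the DMI uplink as the real bottleneck for anything chipset-attached.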

Rastor fucked around with this message at 16:32 on May 27, 2015

Rastor
Jun 2, 2001

Sidesaddle Cavalry posted:

Looks like all the Z170 boards are going to be DDR4 only and now I look like a dumb reddit idiot for buying 8 additional GB of DDR3L (albeit used for 50$) back in April :cry:
Biostar H170 comedy option:
http://wccftech.com/gigabyte-biostar-z170-motherboards-shown-preproduction-samples-featuring-ddr4ddr3l-combo-support/


(personally I would sell the DDR3L and get a Z170 and new RAM)


Edit: beaten!

Rastor fucked around with this message at 17:10 on Jun 1, 2015

Rastor
Jun 2, 2001


Is there any significant difference between Intel Iris Pro Graphics 6200 (GT3e) and the Iris Pro Graphics 5200 (GT3e) from 2 years ago?

Rastor
Jun 2, 2001

Daviclond posted:

From the TH review it looks like they've increased the number of EUs.
Sure, but I guess what I'm saying is these articles are focusing only on Intel's socketed offerings and talking about a huge increase over HD Graphics 4600. But back in 2013 we had the Core i5-4570R, Core i5-4670R, and Core i7-4770R. Compared with those, this is a very small change (48 EUs instead of 40). It's an interesting chip, but the headline should be "Intel finally releases Iris Pro on a socketed chip", not "It's really impressive how good Intel's iGPUs have become", because they were like this 2 years ago.

Rastor
Jun 2, 2001

There's no question Intel is coming from behind in the markets ARM rules, but they are coming. First they got design wins in tablets, then smartphones, and soon smartwatches and other devices. They are losing a billion dollars a quarter muscling into mobile, yet the company as a whole still runs a tidy profit, so I imagine they're prepared to continue that spending for as long as it takes.

Rastor
Jun 2, 2001

pmchem posted:

So your pro-ARM argument is that ARM is the next Intel. Okay, call me when they crack the top 500. Still not holding my breath!
Everyone knows that the most powerful computers in the world are Crays, your claim that Intel processors will one day make the list is laughable. Laughable!

Rastor
Jun 2, 2001

Wintering Stinkbug posted:

I picked up an atom powered low end notebook today. When did atom stop being terrible?

With Silvermont in Q4 2013.

http://www.anandtech.com/show/6936/intels-silvermont-architecture-revealed-getting-serious-about-mobile/2

Rastor
Jun 2, 2001

PCjr sidecar posted:

physics is a bitch
Moore's Law has rammed into a wall. At least for silicon.

Rastor
Jun 2, 2001

Anime Schoolgirl posted:

Nope but they're putting small amounts of DRAM on die just for the GPU (Skylake generation Iris graphics will use 256mb.)
The rumor I heard was that Skylake will get 128MB and it's Kaby Lake that will get 2x128MB modules. And even then Intel seems reluctant to make those chips available in socketable form.

Rastor
Jun 2, 2001

SpelledBackwards posted:

Edit: well gently caress, dunno how I didn't see this was already posted yesterday in the thread.

What do you guys make of this, and do you think it has the potential to replace both RAM and solid state storage at the same time?

Intel, Micron debut 3D XPoint storage technology that's 1,000 times faster than current SSDs
It isn't fast enough to replace RAM. Especially with HBM arriving on the scene.

It will replace some solid-state storage uses (assuming it can be successfully manufactured at scale), but the first place it will do that is in million-dollar enterprise setups; it will be a long time before it's affordable for the home consumer.

Rastor
Jun 2, 2001

Marinmo posted:

3D-glasses: The mistake TV makers can't wait to redo.
I think even the TV makers have admitted by now that it was a mistake. IMO they could in theory have had some success if they'd waited until panels had the resolution to pull off fully passive glasses instead of active ones, but now the well is poisoned.

Rastor
Jun 2, 2001

Sidesaddle Cavalry posted:

Soooo...how does one connect an NVMe SSD directly to a Haswell-E's PCIe lanes? (Without chipset acting as middleman)

Isn't it just a matter of using whichever PCIe slot(s) are run to those lanes?

Rastor
Jun 2, 2001

Krailor posted:

You have 2 options:

1. Get a NVMe SSD with a PCIe connector
2. Get an adapter that converts a M.2 slot to SFF-8643

M.2 is going to go through the chipset isn't it?

Rastor
Jun 2, 2001

dpbjinc posted:

USB cables always plug in with the USB logo facing upward. That's literally part of the standard. It makes life so much easier knowing that.
There is no "upward" with mobile devices, nor is there consistency about which orientation of the cable should be toward the front of the device.

Still, I question why DrDork's wife is shoving cables with hulk strength. If it's not going in, maybe that's not the way it goes in.

Rastor
Jun 2, 2001

Gwaihir posted:

Also, next year is the year of Linux on the desktop!
Chromebooks are rapidly becoming the standard school computer. Mostly not the ARM models, however.

Rastor
Jun 2, 2001

Nintendo Kid posted:

They really aren't, in any form. They're several orders of magnitude away from that.

DrDork posted:

Fixed for accuracy and more relevance to ARM.


http://bits.blogs.nytimes.com/2015/08/19/chromebooks-gaining-on-ipads-in-school-sector/

quote:

Last year, about 3.9 million Chromebooks were shipped in the education sector, an increase in unit sales of more than 310 percent compared with the previous year, IDC said. By contrast, iPad unit sales for education fell last year to 2.7 million devices, compared to 2.9 million in 2013, according to IDC data.

“Even if Microsoft is No. 1 in volume and Apple is No. 1 in revenue, from the growth perspective, nobody can beat Chromebook,” said Rajani Singh, a senior research analyst at IDC who tracks the personal computer market and is the author of the report.

In the first half of this year, she said, roughly 2.4 million Chromebooks shipped to schools compared with about 2.2 million Windows-based desktops and notebook computers.

“There’s very close competition between Windows and Chromebook,” Ms. Singh said. “It’s becoming more competitive.”

Rastor
Jun 2, 2001

Nintendo Kid posted:

Did you actually read that article? That's under 4 million last year and maybe like 5 million this year. That's hardly actually replacing the dozens of millions of computers already in use. And there's no indication that it's actually replacing rather than being used in addition to existing computers.

You're using the same thinking that told us in 2010 that the laptop and desktop computer would clearly be dead in 2015, or that netbooks would still be a thing now. "This is BIG NUMBER% more than shipped last year! Surely it'll keep up indefinitely!"

Did you actually read that article?

quote:

In the first half of this year roughly 2.4 million Chromebooks shipped to schools compared with about 2.2 million Windows-based desktops and notebook computers.

So in the education market the Chromebooks are outselling the Windows devices, which outsell all the Apple devices combined. Which proves my point, that "Chromebooks are rapidly becoming the standard school computer."

Rastor
Jun 2, 2001

Nintendo Kid posted:

And what part aren't you getting that this tells us nothing about it becoming "the standard school computer"? Here's a hint: there are already millions upon millions of computers already in the schools, and no indication they're being junked in favor of Chromebooks.

Hell, going by your logic the standard computer in schools was already iPads for several years, which is self-evidently wrong, as being in any school any time recently could show you.

DrDork posted:

No one cares who is outselling who, because it's all a tiny rear end drop-in-the-bucket market that is nowhere near critical mass or a "standard".
OK this is becoming a derail, but... I'm legit confused, do we have different definitions of what a "standard" is?

standard n. something used or accepted as normal or average; something established by authority, custom, or general consent as a model or example.

When a school is buying a school computer, by sales ratios that computer is likely to be (1) a Chromebook (most likely), (2) a Windows system, (3) an Apple device. As the most likely purchasing decision by schools, the Chromebook is "the standard school computer" -- that is, the normal / average / example computer schools purchase.


To loop it back around to a semblance of relevance to the discussion: Windows on ARM was DOA because there were no apps, but Chrome OS is architected such that all Chrome apps must work on both x86 and ARM processors.

For those who harbor a grudge against Chrome OS, there's also this weird thing, although Google under Sundar Pichai seems reluctant to embrace Android for use outside of phones, tablets, and wearables.

Rastor
Jun 2, 2001

DrDork posted:

2014 educational sales according to the article you linked:
Chromebooks: 29%
Apple (not-iPads): 12%
iPads: 20%
Windows "devices": 37%

So even by your own article, Chromebooks are not the most common platform, and are nowhere near hegemonic enough to be considered "standard." And that's not even starting the argument over use--the college student who has a laptop and a Chromebook/iPad, for example.
According to the article Chromebooks were outselling Windows devices in 1H 2015.


Rastor
Jun 2, 2001

Anime Schoolgirl posted:

Because it's a new and unexplored "loving moron" segment!

Though, honestly I'd like to know how well those chips would scale when overclocked, because they're the (mostly) full featured high efficiency mobile chips. I'm guessing neeeeeerds would take apart the laptop chassis and put the boards in something else entirely with wacky cooling solutions.

No need to take apart the laptop chassis, just buy one that already has a wacky cooling solution.






http://www.theverge.com/2015/9/2/9251275/asus-gx700-water-cooled-gaming-laptop-ifa-2015-video
http://www.windowscentral.com/asuss-new-rog-gx700-gaming-laptop-has-insane-liquid-cooling-dock
