craig588
Nov 19, 2005

by Nyc_Tattoo
That is also one of my very early cases. I gave it to my dad in like 2005 and he used it for long after he should have stopped - not only with 80mm fans, but with 10-year-old 80mm fans. He kept insisting it was fine, but I secretly switched his hardware over to a more modern case while I was watching his house during one of his vacations. 120mm fans with less than 50,000 hours on them make a world of difference.

bird with big dick
Oct 21, 2015

Cygni posted:

zip drive, live drive AND an LCD fan controller?? :worship:

I tried to fill every goddamn bay. I think the empty 5.25 bay was meant for just a second CD drive, but I'm not sure I remember.

But I still couldn't figure out something to put in that third 3.5 bay. I shoulda just gone full retard and put another 3.5 drive in there for 20 bucks.

Best part of the fan controller is that the right side has like a 25mm fan behind a foam filter. It's like pissing into a hurricane even if your case only has a single 80mm.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!
I made the mistake of replacing my CPU fan with this thing at one point in like 2003, terrible loving idea.

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

MaxxBot posted:

I made the mistake of replacing my CPU fan with this thing at one point in like 2003, terrible loving idea.



I remember what jet engines the Delta black labels were. 38mm thick fuckers instead of 25.

spasticColon
Sep 22, 2004

In loving memory of Donald Pleasance
When is the earliest we might get the 8-core Coffee Lake CPUs? I want an 8C/8T "i5-9400" for ~$200 for my next build, if my current 2500K rig lasts until then.

GRINDCORE MEGGIDO
Feb 28, 1985


BIG HEADLINE posted:

I remember what jet engines the Delta black labels were. 38mm thick fuckers instead of 25.

My Athlon loved it, on an Alpha 8045.

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

spasticColon posted:

When is the earliest we might get the 8-core Coffee Lake CPUs? I want an 8C/8T "i5-9400" for ~$200 for my next build, if my current 2500K rig lasts until then.

Figure on a tease of them at CES or Computex and probably on sale in late Q3 or early-to-mid Q4 2018. Q1 2019 if Intel wants to milk the six-core profits for a bit longer.

bird with big dick
Oct 21, 2015

MaxxBot posted:

I made the mistake of replacing my CPU fan with this thing at one point in like 2003, terrible loving idea.



I bought one as well and I think it lasted three days before I decided I did not like the sound of a jet taking off inside my PC.

spasticColon
Sep 22, 2004

In loving memory of Donald Pleasance

BIG HEADLINE posted:

Figure on a tease of them at CES or Computex and probably on sale in late Q3 or early-to-mid Q4 2018. Q1 2019 if Intel wants to milk the six-core profits for a bit longer.

Maybe RAM prices will be reasonable by then.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

MaxxBot posted:

I made the mistake of replacing my CPU fan with this thing at one point in like 2003, terrible loving idea.



I bought the 80mm version to see what it was like. I don't think I ever used it for any real duration, since it was similar to a hair dryer in volume and pitch.

limaCAT
Dec 22, 2007

the piston is evil
Slippery Tilde
Has this been discussed yet?
Intel will ship CPUs with AMD graphics. I remember there were some presentations from Intel mentioning a "3rd party graphics core".

https://newsroom.intel.com/editorials/new-intel-core-processor-combine-high-performance-cpu-discrete-graphics-sleek-thin-devices/
https://www.pcworld.com/article/3235934/components-processors/intel-and-amd-ship-a-core-chip-with-radeon-graphics.html

MagusDraco
Nov 11, 2011

even speedwagon was trolled

GPU thread was talking about it a bit. The general vibe is "what the fuuuuck is happening what a time to be alive"

Mr.Radar
Nov 5, 2005

You guys aren't going to believe this, but that guy is our games teacher.

There's some discussion about it in the GPU thread.

Xae
Jan 19, 2005

havenwaters posted:

GPU thread was talking about it a bit. The general vibe is "what the fuuuuck is happening what a time to be alive"



When Intel lost their cross-licensing agreement with Nvidia, they were effectively forced out of the graphics business.

Intel is much more worried about Nvidia than AMD.

This does explain the HBM2 on Vega though.

GRINDCORE MEGGIDO
Feb 28, 1985


I posted in the GPU thread, but how much HBM are they fitting? Wonder if it's accessible as CPU cache.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
We sure as hell need a merged CPU thread now. Also, I expect common CPU recommendations to be "Buy the Intel & AMD CPU"

NewFatMike
Jun 11, 2015

Twerk from Home posted:

We sure as hell need a merged CPU thread now. Also, I expect common CPU recommendations to be "Buy the Intel & AMD CPU"

Agreed, even the GPU thread is up on this, although there's probably a disproportionate number of posters common to the CPU and GPU threads.

I am very excited to do science with this. Hopefully it'll be cheap enough to be a real console-killing chip, with the way dGPU prices are still hinky.

WhyteRyce
Dec 30, 2001

Wonder how much of this was from pressure from Apple for non-lovely graphics

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

WhyteRyce posted:

Wonder how much of this was from pressure from Apple for non-lovely graphics

You'll know when the first product with such a chip comes to market.

eames
May 9, 2009

I think Apple is more interested in putting their proprietary ARM cores for machine learning on that interposer. Should be good, can't wait for these to find their way into the MBPs.
Is this using their 3D stacked dies yet?

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

eames posted:

I think Apple is more interested in putting their proprietary ARM cores for machine learning on that interposer. Should be good, can't wait for these to find their way into the MBPs.
Is this using their 3D stacked dies yet?

Apple is wealthy enough and mobile SoCs are mm²-constrained enough that integration is easier.

Malcolm XML fucked around with this message at 20:14 on Nov 6, 2017

lDDQD
Apr 16, 2006

GRINDCORE MEGGIDO posted:

I posted in the GPU thread, but how much HBM are they fitting? Wonder if it's accessible as CPU cache.

It would be quite useless as a CPU cache - it wouldn't even make very good CPU memory. A CPU is all about low latency, not bandwidth: it wants to grab very small amounts of data from memory, and it wants them quickly. A GPU is the other way around: it wants to grab very large amounts of data from memory, and it doesn't particularly care that the fetch takes a long time. Since HBM was designed with GPUs in mind, it does the latter fairly well. Which, unfortunately, makes it completely garbage as far as a CPU is concerned.
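
Rough back-of-the-envelope numbers make the point (all figures below are assumed for illustration, not measured specs):

code:

# Illustrative sketch with assumed figures, roughly DDR4-ish vs HBM-ish.
# 1 GB/s == 1 byte/ns, so transfer time in ns is bytes / (GB/s).
def fetch_ns(nbytes, latency_ns, bw_gb_s):
    """Time for one memory request: fixed latency plus transfer time."""
    return latency_ns + nbytes / bw_gb_s

cache_line = 64               # typical CPU-style request
streaming_block = 1 << 20     # 1 MiB GPU-style streaming request

for name, lat_ns, bw in [("DDR4-ish", 60, 25), ("HBM-ish", 100, 256)]:
    print(f"{name}: 64 B ~{fetch_ns(cache_line, lat_ns, bw):.0f} ns, "
          f"1 MiB ~{fetch_ns(streaming_block, lat_ns, bw) / 1000:.1f} us")

The tiny fetch is all latency (HBM comes out behind), the big streaming read is all bandwidth (HBM wins by an order of magnitude).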

GRINDCORE MEGGIDO
Feb 28, 1985


Good point. How does the latency compare to DDR4?

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

GRINDCORE MEGGIDO posted:

Good point. How does the latency compare to DDR4?



lDDQD posted:

It would be quite useless as a CPU cache - it wouldn't even make very good CPU memory. CPU is all about low latency, low bandwidth; it wants to grab very small amounts of data from memory, and it wants it quickly. GPU is totally the other way around; it wants to grab very large amounts of data from memory, and it doesn't particularly care that it will take a long time for that to be fetched. Since HBM was designed with GPUs in mind, it does the latter fairly well. Which, unfortunately, makes it completely garbage, as far as a CPU is a concerned.

It would make great CPU memory - so great that it's basically Wide I/O 2 mobile memory.

All them caches hide latency real well on CPUs
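
To put some numbers on that (assumed hit rates and latencies, purely for illustration - not measured from any real part):

code:

# Average memory access time (ns) with three cache levels in front of DRAM.
def amat(l1, l2, l3, dram, h1, h2, h3):
    return l1 + (1 - h1) * (l2 + (1 - h2) * (l3 + (1 - h3) * dram))

# Assumed ~95% L1, ~80% L2, ~70% L3 hit rates; DDR4-ish 60 ns vs HBM-ish 100 ns.
for name, dram_ns in [("DDR4-ish", 60), ("HBM-ish", 100)]:
    print(name, round(amat(1, 4, 12, dram_ns, 0.95, 0.80, 0.70), 2), "ns average")

A 40 ns gap at the DRAM level shrinks to about a tenth of a nanosecond in the average, which is why the extra HBM latency wouldn't hurt a CPU as much as you'd think.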

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

GRINDCORE MEGGIDO posted:

Good point. How does the latency compare to DDR4?

I'd be more curious as to how it compares to eDRAM, since we already have a taste of what that was able to do with Crystalwell.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
This is... a thing. :can:

quote:

An Open Letter to Intel

Dear Mr. Krzanich,

Thanks for putting a version of MINIX 3 inside the ME-11 management engine chip used on almost all recent desktop and laptop computers in the world. I guess that makes MINIX the most widely used computer operating system in the world, even more than Windows, Linux, or MacOS. And I didn't even know until I read a press report about it. Also here and here and here and here and here (in Dutch), and a bunch of other places.

I knew that Intel had some potential interest in MINIX 3 several years ago when one of your engineering teams contacted me about some secret internal project and asked a large number of technical questions about MINIX 3, which I was happy to answer. I got another clue when your engineers began asking me to make a number of changes to MINIX 3, for example, making the memory footprint smaller and adding #ifdefs around pieces of code so they could be statically disabled by setting flags in the main configuration file. This made it possible to reduce the memory footprint even more by selectively disabling a number of features not always needed, such as floating point support. This made the system, which was already very modular since nearly all of the OS runs as a collection of separate processes (normally in user mode), all of which can be included or excluded in a build, as needed, even more modular.

Also a hint was the discussion about the license. I (implicitly) gathered that the fact that MINIX 3 uses the Berkeley license was very important. I have run across this before, when companies have told me that they hate the GPL because they are not keen on spending a lot of time, energy, and money modifying some piece of code, only to be required to give it to their competitors for free. These discussions were why we put MINIX 3 out under the Berkeley license in 2000 (after prying it loose from my publisher).

After that initial burst of activity, there was radio silence for a couple of years, until I read in the media (see above) that a modified version of MINIX 3 was running on most x86 computers, deep inside one of the Intel chips. This was a complete surprise. I don't mind, of course, and was not expecting any kind of payment since that is not required. There isn't even any suggestion in the license that it would be appreciated.

The only thing that would have been nice is that after the project had been finished and the chip deployed, that someone from Intel would have told me, just as a courtesy, that MINIX 3 was now probably the most widely used operating system in the world on x86 computers. That certainly wasn't required in any way, but I think it would have been polite to give me a heads up, that's all.

If nothing else, this bit of news reaffirms my view that the Berkeley license provides the maximum amount of freedom to potential users. If they want to publicize what they have done, fine. By all means, do so. If there are good reasons not to release the modified code, that's fine with me, too.

Yours truly,

Andrew S. Tanenbaum

Note added later: Some people have pointed out online that if MINIX had a GPL license, Intel might not have used it since then it would have had to publish the modifications to the code. Maybe yes, maybe no, but the modifications were no doubt technical issues involving which mode processes run in, etc. My understanding, however, is that the small size and modular microkernel structure were the primary attractions. Many people (including me) don't like the idea of an all-powerful management engine in there at all (since it is a possible security hole and a dangerous idea in the first place), but that is Intel's business decision and a separate issue from the code it runs. A company as big as Intel could obviously write its own OS if it had to. My point is that big companies with lots of resources and expertise sometimes use microkernels, especially in embedded systems. The L4 microkernel has been running inside smartphone chips for years.
http://www.cs.vu.nl/~ast/intel/

Kazinsal
Dec 13, 2011



That reads entirely to me like Tanenbaum stealth gloating about having written the most widely used OS in the world by virtue of Intel embedding it in the ME.

He's still not over the Torvalds-Tanenbaum debate.

What a loving tool.

mewse
May 2, 2006

Kazinsal posted:

That reads entirely to me like Tanenbaum stealth gloating about having written the most widely used OS in the world by virtue of Intel embedding it in the ME.

He's still not over the Torvalds-Tanenbaum debate.

What a loving tool.

Some Hacker News commenters are confused: "He helped Intel make changes to the code... and is upset they didn't tell him what they did... but he's happy they did it, because it proves he chose the correct license?"

WhyteRyce
Dec 30, 2001

Kazinsal posted:

That reads entirely to me like Tanenbaum stealth gloating about having written the most widely used OS in the world by virtue of Intel embedding it in the ME.

He's still not over the Torvalds-Tanenbaum debate.

What a loving tool.

Meh, Linus toadies gloat about how Linux won because it's everywhere, so I won't fault a man for rubbing some stuff back in their faces.

Volguus
Mar 3, 2009

WhyteRyce posted:

Meh, Linus toadies gloat about how Linux won because it's everywhere, so I won't fault a man for rubbing some stuff back in their faces.

While I'm sure that MINIX is indeed extremely widely used now (it's on most Intel motherboards), I kinda doubt it's the most widely used OS. QNX held that crown for quite a few decades, and while Linux has taken over some of the work old QNX did in certain areas, there are still a shitload of little chips deployed all over the world that control all kinds of machinery. Linux, even if you count Android as Linux (nah), still is nowhere near that kind of usage.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

MICROKERNELS

feedmegin
Jul 30, 2008

Volguus posted:

While I'm sure that MINIX is indeed extremely widely used now (it's on most Intel motherboards), I kinda doubt it's the most widely used OS. QNX held that crown for quite a few decades, and while Linux has taken over some of the work old QNX did in certain areas, there are still a shitload of little chips deployed all over the world that control all kinds of machinery. Linux, even if you count Android as Linux (nah), still is nowhere near that kind of usage.

That's why he very carefully said 'computer' OS. Otherwise Android cleans his clock. And QNX's too, by now, because the really little stuff doesn't run QNX or anything else.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Volguus posted:

While I'm sure that MINIX is indeed extremely widely used now (it's on most Intel motherboards), I kinda doubt it's the most widely used OS. QNX held that crown for quite a few decades, and while Linux has taken over some of the work old QNX did in certain areas, there are still a shitload of little chips deployed all over the world that control all kinds of machinery. Linux, even if you count Android as Linux (nah), still is nowhere near that kind of usage.

Once upon a time I would have chimed in here pointing out that Cisco's IOS-XR is based on QNX, but the new and preferred variant ("enhanced" XR) is based on Linux, so there goes that bullet point. Not sure what the current install base looks like, though; classic XR probably still has the lead there.

Eletriarnation fucked around with this message at 22:36 on Nov 7, 2017

Gyrotica
Nov 26, 2012

Grafted to machines your builders did not understand.

feedmegin posted:

That's why he very carefully said 'computer' OS. Otherwise Android cleans his clock. And QNX's too, by now, because the really little stuff doesn't run QNX or anything else.

Which is bogus - phones are computers now.

Rastor
Jun 2, 2001

Today Qualcomm announced availability of their Centriq 2400 series ARM processors, which they claim beat Xeons on both performance-per-dollar and performance-per-watt:
https://www.theregister.co.uk/2017/11/08/qualcomm_centriq_2400/

Microsoft and Google are both quoted as investigating their use for cloud workloads, and HP promises to ship servers with these to early-access customers in early 2018.

Cygni
Nov 12, 2005

raring to post

STH has some non-performance analysis of the Qualcomm CPUs. His takeaway is basically that ARM servers haven't had great luck getting traction for a variety of reasons (including Broadcom, who may now buy Qualcomm, shutting down their own ARM server business), and with EPYC around, the anybody-but-Intel server crowd may already have their darling.

https://www.servethehome.com/analyzing-key-qualcomm-centriq-2400-market-headwinds/

Rastor
Jun 2, 2001

Cygni posted:

STH has some non-performance analysis of the Qualcomm CPUs. His takeaway is basically that ARM servers haven't had great luck getting traction for a variety of reasons (including Broadcom, who may now buy Qualcomm, shutting down their own ARM server business), and with EPYC around, the anybody-but-Intel server crowd may already have their darling.

https://www.servethehome.com/analyzing-key-qualcomm-centriq-2400-market-headwinds/

Not sure why they claim it's single-socket only; Qualcomm definitely announced dual-node support.




And someone like Google isn't going to give a poo poo about x86 support because they just recompile; they've also been big supporters of POWER architecture development. And for executing cloud lambda functions written in Python or Java or whatever, it's a straight non-issue.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Rastor posted:

Not sure why they claim it's single-socket only; Qualcomm definitely announced dual-node support.




And someone like Google isn't going to give a poo poo about x86 support because they just recompile; they've also been big supporters of POWER architecture development. And for executing cloud lambda functions written in Python or Java or whatever, it's a straight non-issue.

That's two independent nodes in one OCP tray; note the multi-host NIC.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Mellanox has an interesting host platform now too, an ARM-based SoC (Bluefield). I think it can be configured either as one of their high-end network interface controllers or as a host system.

Also, POWER9 has third parties selling EATX form-factor motherboards for its launch - pretty different from how it used to be!

Should be an interesting few years for cpu watchers.

Xae
Jan 19, 2005

priznat posted:

Mellanox has an interesting host platform now too, an ARM-based SoC (Bluefield). I think it can be configured either as one of their high-end network interface controllers or as a host system.

Also, POWER9 has third parties selling EATX form-factor motherboards for its launch - pretty different from how it used to be!

Should be an interesting few years for cpu watchers.

IBM has been pushing PowerPC chips and AIX hard for the last few years.

My last company re-platformed our Oracle DB from x86/Linux to PowerPC/AIX in The Year of Our Lord 2016 because the IBM rep swore it would fix our database's IO being slow as gently caress due to SAN contention.

:suicide:
