Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

bull3964 posted:

Three years ago I could see the logic of 10 ARM webservers stuffed in a 1U chassis for low cost compute density, but now you could do twice as many on a single 1U R620 with a couple of Xeons and still have capacity left over to do other nonspecialized compute stuff as well.
That's not really true though, you get better performance and lower power usage with ARM-based servers. Anandtech's testing of the Calxeda-based Boston Viridis shows pretty compelling advantages. The conventional wisdom seems to hold true: if the per-thread performance of ARM is high enough for your application, it will probably be the most efficient; otherwise you go Xeon (or maybe Opteron in some scenarios).

Butt Wizard
Nov 3, 2005

It was a pornography store. I was buying pornography.
I'm more interested in what the refresh will do for things like the Surface Pro and the higher-end tablets. The new Atom is still woeful on paper, but the battery life on the entry-level Win 8 tablets is already something ridiculous like 12 hours. I think most Pro tablets are between 4 and 8. A guy claiming to work for Microsoft posted on a local forum that we should see some model line-ups with the new chips announced in early June, but does anyone have a rough idea how things might improve?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Shaocaholica posted:

Are newer Intel steppings generally better than older ones as far as TDP and overclockability?

TDP is usually not affected. Overclockability is sometimes but not always greater. For example, the E8400 E0 overclocked more reliably to a moderate speed than the C0 stepping, but did not get to extreme speeds as often.

roadhead
Dec 25, 2001

cstine posted:

Non-technicals have no goddamn clue what's in their console or laptop or phone or whatever. It's both not something they understand, it's ALSO something they don't give a poo poo about.

I dunno, having the same shiny foil sticker on both prominently displayed might even get the mouth-breathers to notice. Maybe.

cstine
Apr 15, 2004

What's in the box?!?

roadhead posted:

I dunno, having the same shiny foil sticker on both prominently displayed might even get the mouth-breathers to notice. Maybe.

Right, but you still have the problem of getting people to both know who the gently caress AMD is, and why they should care - consoles are an appliance, and they care about playing Halo, not about the cpu in the thing.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Alereon posted:

That's not really true though, you get better performance and lower power usage with ARM-based servers. Anandtech's testing of the Calxeda-based Boston Viridis shows pretty compelling advantages. The conventional wisdom seems to hold true: if the per-thread performance of ARM is high enough for your application, it will probably be the most efficient; otherwise you go Xeon (or maybe Opteron in some scenarios).

I'm still not completely convinced. They are faster under this workload and consume less power, but only just.

The final conclusion sums it up nicely. There's potential there, but it's very much niche now.

In the end, I still think it all comes down to Intel's fabs. TSMC is going to be awfully busy supplying Qualcomm and Apple with SoCs for their mobile markets. So, it remains to be seen whether someone like Calxeda is going to get enough A15s to be competitive on price as Intel iterates their product. We have Ivy Bridge-EX leaks with up to 15 cores slated to come in Q1 2014.

So, while their power/performance may be great, it still may be cheaper to go x86 when all is said and done.

Mr. Smile Face Hat
Sep 15, 2003

Praise be to China's Covid-Zero Policy

JawnV6 posted:

Great, once you go out and find the revenue numbers for the companies that actually design ARM SoC's, the companies that fab them, the companies that integrate devices around them, and the plethora of companies that provide third-party support for compiling, debugging, etc. and total all those up to get a reasonable estimate of the revenue the ARM ecosystem is chugging through you might have a comparison that isn't utterly ignorant of the market. Short list would be Qualcomm, Samsung, TSMC, Apple, Atmel...

Same thing applies to x86 with the exception of the manufacturing of the actual CPUs. It looks to me like you are trying to compare everything but the kitchen sink on the ARM side to just the bare manufacturing of the CPUs on the x86 side.

JawnV6 posted:

iPhones alone have shipped >250M, iPads >100M. That's dwarfed by the number of Androids. Cell phones are a fraction of the raw count of ARM cores shipping. The margins are a different picture and a core-to-core count isn't great, but the basic picture is that ARM currently dominates mobile and mobile's on the way up.

I'd just like to see the actual numbers with a good analysis and not just commonplace "mobile is on the way up" projections where somebody uses a ruler and draws a straight line to 2050 based on the 2007 and 2012 values or similar.

JawnV6
Jul 4, 2004

So hot ...

flavor posted:

Same thing applies to x86 with the exception of the manufacturing of the actual CPUs. It looks to me like you are trying to compare everything but the kitchen sink on the ARM side to just the bare manufacturing of the CPUs on the x86 side.
Oh, you're ignorant about x86 too? No prob. Just to take an example I'm quite familiar with: in terms of ICE debugging, there isn't a third party x86 solution. Debug tools are only shipped from Intel/AMD themselves. On the ARM side you've got device manufacturers like Atmel who provide an IDE with ICE support and pure tools vendors like IAR who sell alternatives. This is why I'm pretty comfortable asking your comparison to include those third parties and why there isn't a third party on the x86 side. If you really want to drill in on this (and you don't, it's stupid), I hope you're going to go lop off the Intel revenue from Flash and other non-x86 sections.

I could go on, but the short version is that ARM is a diverse ecosystem of several companies. x86 is the Big Two and hardly anybody else. If you're going to call my analysis specious it might help you to actually supply, you know, a fact or two?

flavor posted:

I'd just like to see the actual numbers with a good analysis and not just commonplace "mobile is on the way up" projections where somebody uses a ruler and draws a straight line to 2050 based on the 2007 and 2012 values or similar.
The numbers I've given in this thread (70M PS3's in 6 years, 100M IVB in 1 year, 250M iPhone in 5 years, 100M iPad in 3 years) are "actual numbers." Did you think I was making those up? Every single one is a google search away if you doubt them.

Intel openly acknowledges this in a lot of ways, so I can't imagine why you're holding this point in contention. At the 2011 investor meeting, then-CEO Paul Otellini asked "600 smartphones were sold. Who made the most money? Intel, because someone had to buy a Xeon to support the backend." There are a lot of industries where a high-end manufacturer ceded the low-end to cheap competitors, who ramped up on the huge volumes, got some experience, then beat the high-end player at their own game. Steel, manufacturing, etc.

Shaocaholica
Oct 29, 2002

Fig. 5E

Factory Factory posted:

TDP is usually not affected. Overclockability is sometimes but not always greater. For example, the E8400 E0 overclocked more reliably to a moderate speed than the C0 stepping, but did not get to extreme speeds as often.

What exactly is a stepping anyway? Is it like a different layout of the same logic?

movax
Aug 30, 2008

JawnV6 posted:

Oh, you're ignorant about x86 too? No prob. Just to take an example I'm quite familiar with: in terms of ICE debugging, there isn't a third party x86 solution. Debug tools are only shipped from Intel/AMD themselves. On the ARM side you've got device manufacturers like Atmel who provide an IDE with ICE support and pure tools vendors like IAR who sell alternatives. This is why I'm pretty comfortable asking your comparison to include those third parties and why there isn't a third party on the x86 side. If you really want to drill in on this (and you don't, it's stupid), I hope you're going to go lop off the Intel revenue from Flash and other non-x86 sections.

If we include XDP in there though (at least for Intel), there's American Arium in addition to the Intel tools; Arium's the only game in town if you aren't large enough for Intel to decide to share their tools/hardware with you. :(

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Shaocaholica posted:

What exactly is a stepping anyway? Is it like a different layout of the same logic?

It refers to a revision of the chip's logic or layout, getting its name from the steppers, the photolithography machines used in the fabrication process. Sometimes a stepping is an efficiency revision that improves clocks without budging TDP; sometimes it fixes a logic bug or adjusts power-state behavior. Rarely it'll be a major change, like Rev A1 of 65nm Core 2 being a single-core, 1 MB L2 cache variant, or A1 of 45nm Core 2 adding L3 cache and creating a hexacore design for the Xeon 7400 series.

Mr. Smile Face Hat
Sep 15, 2003

Praise be to China's Covid-Zero Policy

JawnV6 posted:

Oh, you're ignorant about x86 too? No prob.

You know, even assuming you're in possession of the absolute truth here, it's not necessary to start every post that way. If you have facts that you want to bring up, just bring them up. I'm absolutely interested in them. If you ever taught in school or similar, did you preface every response to every question with a comment on how they were ignorant?

JawnV6 posted:

Just to take an example I'm quite familiar with: in terms of ICE debugging, there isn't a third party x86 solution. Debug tools are only shipped from Intel/AMD themselves. On the ARM side you've got device manufacturers like Atmel who provide an IDE with ICE support and pure tools vendors like IAR who sell alternatives. This is why I'm pretty comfortable asking your comparison to include those third parties and why there isn't a third party on the x86 side. If you really want to drill in on this (and you don't, it's stupid), I hope you're going to go lop off the Intel revenue from Flash and other non-x86 sections.

I don't know [ANOTHER CHANCE TO CALL ME IGNORANT RIGHT HERE] where exactly one would reasonably draw the line of where the ecosystem around a type of CPU ends, or what should be considered its exact market size. I'm sure people have come up with ideas for that. It looks a little bit like you're trying to push some kind of narrow limit around x86 and a wide one around ARM, and everyone who brings this up meets with condescension. If you feel you have very convincing arguments, let them speak for themselves.

JawnV6 posted:

I could go on, but the short version is that ARM is a diverse ecosystem of several companies. x86 is the Big Two and hardly anybody else. If you're going to call my analysis specious it might help you to actually supply, you know, a fact or two?

The numbers I've given in this thread (70M PS3's in 6 years, 100M IVB in 1 year, 250M iPhone in 5 years, 100M iPad in 3 years) are "actual numbers." Did you think I was making those up? Every single one is a google search away if you doubt them.

Oh, you're not able to read? I never doubted your numbers, I doubted what you included. To elaborate: there are many hardware and software companies that wouldn't exist if x86 CPUs didn't exist, so I'd at least address that instead of making some hand-wavy comments. Also, the numbers you gave may all be true, but they're not the whole picture because there are more products than those.

JawnV6 posted:

Intel openly acknowledges this in a lot of ways, so I can't imagine why you're holding this point in contention. At the 2011 investor meeting, then-CEO Paul Otellini asked "600 smartphones were sold. Who made the most money? Intel, because someone had to buy a Xeon to support the backend." There are a lot of industries where a high-end manufacturer ceded the low-end to cheap competitors, who ramped up on the huge volumes, got some experience, then beat the high-end player at their own game. Steel, manufacturing, etc.

If you can learn to get the semantics out of what somebody says beyond "if $post != $my_brain_contents then call ignorant", then all I've really said is that if revenue is a measure, then it does not support a merger. I never said there can't be any others. I'm not seeing how a quote in which Intel said that they make more money per smartphone than the makers of the phones supports the idea that they're threatened by that, but I'm sure that's just me.

In closing, I don't really doubt that the mobile space is important and growing, but I'd like to see the whole picture.

Edit: I'm trying to say post what you would tell a court that's weighing whether or not to allow a merger between Intel and AMD, and leave out all the references to the judge being ignorant.

Mr. Smile Face Hat fucked around with this message at 02:44 on May 4, 2013

JawnV6
Jul 4, 2004

So hot ...

flavor posted:

It looks a little bit like you're trying to push some kind of narrow limit around x86 and a wide one around ARM, and everyone who brings this up meets with condescension. If you feel you have very convincing arguments, let them speak for themselves.
This is literally the structure of how the two architectures are designed, fabbed, and pushed into the market by their respective makers. I'm not casting nets or anything, I am trying to describe reality against your continued objections. Intel is a highly integrated device manufacturer (IDM) that owns the architecture, the design teams, the fabs that produce the chips, marketing, sales, etc. ARM is a licensing corporation that does not own a single fab; its business model is to develop a general architecture and license it to companies with design teams, who integrate the core into SoCs with differentiating components. Those same companies may own fabs or contract that out to Yet Another Company in the ARM ecosystem.

ARM is an ecosystem. Intel is an IDM.

Shaocaholica posted:

What exactly is a stepping anyway? Is it like a different layout of the same logic?
Steppings traditionally started with A0. Any polysilicon change (e.g. transistors changing) makes the letter go up and is a major effort; any wire/metal changes are a "dash" stepping and increment the number. Because silicon is manufactured from the poly layers up, "dash" steppings may be able to intercept in-progress silicon and fix chips that would otherwise have shipped with known issues.
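As a toy model of that naming convention (illustrative only; real stepping names vary by product line, this just encodes the letter/number rule described above):

```python
def next_stepping(current: str, change: str) -> str:
    """Toy model of the stepping convention described above.

    A polysilicon (base-layer) change bumps the letter and resets the
    number; a metal-only "dash" change just increments the number.
    """
    letter, number = current[0], int(current[1:])
    if change == "poly":
        return chr(ord(letter) + 1) + "0"   # e.g. A1 -> B0
    if change == "metal":
        return letter + str(number + 1)     # e.g. A0 -> A1
    raise ValueError("change must be 'poly' or 'metal'")

print(next_stepping("A0", "metal"))  # A1
print(next_stepping("A1", "poly"))   # B0
```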

Shaocaholica
Oct 29, 2002

Fig. 5E
Thanks for the explanation on stepping. So my next question has to do with sspec. What is it? What does it mean when the same model cpu has multiple sspec variants?

Mr. Smile Face Hat
Sep 15, 2003

Praise be to China's Covid-Zero Policy

JawnV6 posted:

This is literally the structure of how the two architectures are designed, fabbed, and pushed into the market by their respective makers. I'm not casting nets or anything, I am trying to describe reality against your continued objections. Intel is a highly integrated device manufacturer (IDM) that owns the architecture, the design teams, the fabs that produce the chips, marketing, sales, etc. ARM is a licensing corporation that does not own a single fab; its business model is to develop a general architecture and license it to companies with design teams, who integrate the core into SoCs with differentiating components. Those same companies may own fabs or contract that out to Yet Another Company in the ARM ecosystem.

ARM is an ecosystem. Intel is an IDM.


There's a difference between asking for facts and numbers and "continued objections". I'm not objecting to anything, I'm simply still waiting for the numbers that describe respective market sizes. I do understand the different business models, however if company X builds ARM CPUs with a license, that doesn't mean that all of company X including their fridge and stereo amplifier divisions is now fully part of the ARM ecosystem.

So, what are the respective market volumes of x86 and ARM CPUs and what are reasonable (i.e. most likely nonlinear) projections?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
In 4Q2012, about 90 million PCs moved to market worldwide via channel sales, or about 17 million in the U.S.

In 4Q2012, about 472 million cell phones were sold to end users worldwide, including 207 million smartphones (69.7% Android, 20.9% iOS, 3.5% RIM, 3% Microsoft).

Tablets in 4Q2012 were about 52.5 million units, with 22.9 million of those being iOS units. Unfortunately, the numbers include Windows 8 tablets, including #2 shipper Samsung.

In terms of ISA cost, let's look at some examples: the Exynos 5 Octa SoC Samsung makes for the Galaxy S4 costs around $30, out of the device's $236 bill of materials and ~$8.50 build cost. The SoC, and therefore the ARM ecosystem hardware cost, is about 12% of the final marginal cost of the device (13% of BoM). This is an expensive SoC for a phone, though; the Galaxy S3's SoC cost around $17.50.

The Galaxy S4, international GSM unlocked, costs $740 on Amazon. To be fair, the contract cost is only $99 to $199, as long as you commit to ~$2000 for two years of service.

Meanwhile, Intel has offered a "reference" Ultrabook with a ~$710 BoM for a high-end 18mm thick model with SSD, exclusive of assembly. Of that Ultrabook, Intel will capture at least the CPU and chipset, and it may also capture network (WiFi and ethernet MAC), NAND (via IMFT), and SSD controller, as well as other sundries. But say we allow for Intel to get the minimum, just the CPU and chipset. The tray price for an i5-3337U is $225, and the QS77 Express chipset has a "recommended consumer price" of $54. So Intel's minimum take on the BoM is about $279, or 39% of the BoM.

This reference Ultrabook targets the $1000 price point. There's a good potential profit margin there, but it's nowhere near smartphone margins.

So, to review:

Smartphones and tablets: ~260 million units sold in 4Q12 at a huge margin over BoM in an expanding market, of which ~13% of parts cost goes to the main chipmaker. And if you want, you can be the main chipmaker.

PCs: ~90 million units sold in 4Q12 at a smaller margin in a contracting market, of which ~40% of parts cost going to the main chipmaker is not uncommon. With so many commodity parts, there are very few ways to differentiate besides just throwing more money at BoM. And even Apple, big as it is, can't make Intel budge when it wants something Intel won't sell.
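As a quick sanity check, the percentages above follow directly from the quoted prices (using only the figures in the post):

```python
# Galaxy S4: Exynos 5 Octa SoC against BoM and build cost
soc, bom, build = 30.00, 236.00, 8.50
print(f"SoC share of BoM:           {soc / bom:.1%}")            # ~12.7%
print(f"SoC share of marginal cost: {soc / (bom + build):.1%}")  # ~12.3%

# Reference Ultrabook: Intel's minimum take is CPU + chipset tray prices
cpu, chipset, ultrabook_bom = 225.00, 54.00, 710.00
take = cpu + chipset
print(f"Intel minimum take: ${take:.0f}, {take / ultrabook_bom:.0%} of BoM")
```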

Mr. Smile Face Hat
Sep 15, 2003

Praise be to China's Covid-Zero Policy
Okay, thank you for that. I'm still not seeing a compelling reason for how a merger/takeover between Intel and AMD would improve the market, and the numbers are based on examples but I won't insist any further.

I guess one problem I'm having besides the numbers is that it's not possible to substitute x86 for ARM and vice-versa in all situations, and having only one maker of x86s left would suck a lot in situations where that type of CPU is the only reasonable solution.

fookolt
Mar 13, 2012

Where there is power
There is resistance
http://www.tomshardware.com/reviews/ivy-bridge-wolfdale-yorkfield-comparison,3487-20.html

When you look at power consumption, it's pretty amazing just how far we have come. I love these long view performance benchmarks.

EIDE Van Hagar
Dec 8, 2000

Beep Boop
http://www.anandtech.com/show/6936/intels-silvermont-architecture-revealed-getting-serious-about-mobile

http://www.tomshardware.com/reviews/atom-silvermont-architecture,3499.html

Some news on the Atom front today.

canyoneer
Sep 13, 2005


I only have canyoneyes for you
The name of the game for years has been that the consumer products make up the volumes, which subsidize the cost of the high dollar/high margin server silicon.

The big risk isn't that Intel is missing the revenue and single digit margins from mobile phones/tablets. The risk is that without the economies of scale that come from those volumes, the margins on their cash cows erode. It's not a "top line" revenue concern, it's a "bottom line" concern.

If they weren't pushing through the volumes of silicon for consumer products and keeping their bleeding-edge fabs loaded, they wouldn't be pulling off the high margins in the high-dollar spaces. That's why the fear exists that if they can't deliver some volume in mobile, they may have to start foundry work for others. Otherwise, they end up where AMD is: low margins and fabless (instead of high margins and fabulous :rory:)

evilweasel
Aug 24, 2002

canyoneer posted:

The name of the game for years has been that the consumer products make up the volumes, which subsidize the cost of the high dollar/high margin server silicon.

The big risk isn't that Intel is missing the revenue and single digit margins from mobile phones/tablets. The risk is that without the economies of scale that come from those volumes, the margins on their cash cows erode. It's not a "top line" revenue concern, it's a "bottom line" concern.

The bigger risk is these low-end things move into the high-end things. You create an entire ARM ecosystem for mobile, suddenly it starts looking a lot better for laptops.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
So looks like there will be Haswell-ready PSUs by the launch, hrm http://www.seasonic.com/new/twevent20130510.htm

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Yeah, word began trickling out soon after that it wasn't necessarily that no PSU was ready for that load, just that nobody had really tested a load that low, and many units would probably work.

Also I am totally unsurprised to see the X series on there. :c00l:

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Factory Factory posted:

Yeah, it began trickling out soon after that it wasn't necessarily that no PSU was ready for that load, just that nobody had really tested a load that low and many units would probably work.

Also I am totally unsurprised to see the X series on there. :c00l:

Aw yeah. I have an X-660. For my own systems, and for people who are willing to spend the money, I'd always recommend Seasonic X and Platinum series.

But I think the main reason they are going to work is that they are DC-DC for the 3.3 and 5v rails, so there's always some load on the 12v rail.

movax
Aug 30, 2008

I imagine some 3rd-party vendor guys will come up with little ATX interposers or similar where you basically throw a resistor in there to (yeah, wastefully) put a load on the rail. Then again, I think C6/C7 will pay off even moreso in the mobile space where of course the PSU design is tied very closely to the system as a whole and is designed in from the start.
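Sizing such a dummy-load resistor is just Ohm's law; the numbers below are hypothetical, since the actual minimum-load spec differs per PSU:

```python
def dummy_load(rail_v: float, min_load_a: float) -> tuple[float, float]:
    """Resistor value and heat dissipated to hold a minimum load on a rail."""
    r = rail_v / min_load_a   # R = V / I
    p = rail_v * min_load_a   # P = V * I, burned off as heat (the wasteful part)
    return r, p

# e.g. forcing 0.1 A onto the 12 V rail:
r, p = dummy_load(12.0, 0.1)
print(f"{r:.0f} ohm resistor, dissipating {p:.1f} W")  # 120 ohm, 1.2 W
```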

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I'm still skeptical of people claiming 20+ hr runtimes on laptops, given that more than half the battery use is from the LCD backlight rather than the CPU. Then there's the wifi to worry about too. But I'm very curious to see if x86 mobile will actually be viable soon. My gut feeling tells me that it'll be worthless without a cross compiler or migration kit for most developers, as well as a viable iPod/Phone contender, and since Microsoft failed here, uh... I doubt Intel could do it. Perhaps revitalizing notebooks and making them really attractive again is the main strategy and mobile is a backup plan. This seems inverse to the nVidia strategy.

Rawrbomb
Mar 11, 2011

rawrrrrr
I'm on a SeaSonic X Series X650 Gold, should be safe right? :)

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

necrobobsledder posted:

I'm still skeptical of people claiming 20+ hr runtimes on laptops, given that more than half the battery use is from the LCD backlight rather than the CPU. Then there's the wifi to worry about too. But I'm very curious to see if x86 mobile will actually be viable soon. My gut feeling tells me that it'll be worthless without a cross compiler or migration kit for most developers, as well as a viable iPod/Phone contender, and since Microsoft failed here, uh... I doubt Intel could do it. Perhaps revitalizing notebooks and making them really attractive again is the main strategy and mobile is a backup plan. This seems inverse to the nVidia strategy.

My ThinkPad actually has a utility for (low-precision) power draw monitoring, and the system (i5-2410M, 14" 1600x900 LED-backlit display, Intel 320 SSD, and an often-powered-down 500 GB 7200 RPM hard drive) apparently idles at the Windows desktop at ~2 W. Haswell dropping idle power from ~0.5 W to whatever near-zero it is would be a significant fraction of that, especially if the display had framebuffer DRAM to support IGP sleeping.

Shaocaholica
Oct 29, 2002

Fig. 5E
This might not be the best thread for this but at least here I think it would be more impartial.

This isn't the most scientific of tests:

http://www.pcmag.com/article2/0,2817,2409967,00.asp

but why is there such a large discrepancy between synthetic CPU performance (looking at the Geekbench scores) on the same CPU, with the only factor being a different OS? Can the differences in modern OSes really contribute to a >10% CPU (not GPU-related) difference in performance on the same CPU? That just seems like a lot.

edit:

these numbers might be more detailed (not mine; a Hackintosh with a 3770K)

http://browser.primatelabs.com/geekbench2/1715188
http://browser.primatelabs.com/geekbench2/1713982

Shaocaholica fucked around with this message at 17:05 on May 12, 2013

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Shaocaholica posted:

but why is there such a large discrepancy between synthetic CPU performance(looking at the geekbench scores) on the same CPU with the only factor being a different OS? Can the differences in modern OSs really contribute to >10% CPU(not GPU related) difference in performance on the same CPU? That just seems like a lot.
The OS can control power states, including potentially preventing the CPU from Turboing up. The Geekbench numbers could just be due to slightly different hardware, power states interfering with Turbo, or even some inherent slowness in memory allocation in OSX. OSX also historically had VERY slow thread handling, which made it unsuitable for use on servers; this may have been corrected though.

OSX also doesn't offer GPU acceleration capabilities comparable to Windows so it's at a severe disadvantage in the "Psychedelic Browsing" GPU benchmark. You can also ignore the tests where they benchmarked under Safari on OSX and IE10 on Windows. It's possible other results (iTunes for example) could have been due to testing problems, like not running the tests enough times.
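On the "not running the tests enough times" point: a common way to tame scheduler and power-state noise is to repeat the workload and keep the minimum, since the fastest run is the least-disturbed one. A minimal sketch:

```python
import time

def bench(fn, repeats: int = 10) -> float:
    """Best-of-N wall-clock time for fn(), in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# e.g. a toy CPU-bound workload:
t = bench(lambda: sum(i * i for i in range(100_000)))
print(f"best of 10: {t * 1000:.2f} ms")
```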

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast
Also, talking about OS X lacking features, the irony is that people think it's for graphics designers... yet it doesn't support 10-bit colour displays.

Shaocaholica
Oct 29, 2002

Fig. 5E

Alereon posted:

The OS can control power states, including potentially preventing the CPU from Turboing up. The Geekbench numbers could just be due to slightly different hardware, power states interfering with Turbo, or even some inherent slowness in memory allocation in OSX. OSX also historically had VERY slow thread handling, which made it unsuitable for use on servers; this may have been corrected though.

OSX also doesn't offer GPU acceleration capabilities comparable to Windows so it's at a severe disadvantage in the "Psychedelic Browsing" GPU benchmark. You can also ignore the tests where they benchmarked under Safari on OSX and IE10 on Windows. It's possible other results (iTunes for example) could have been due to testing problems, like not running the tests enough times.

Ok, here's an easy to read compare of the same hackintosh from my earlier post:

http://browser.primatelabs.com/geekbench2/compare/1715188/1713982

Same hardware, so that can't be an issue. No GPU-reliant tests, so that won't be either. Some tests have horrible swings in favor of both OSes, but Windows wins overall.

I wonder if the reason for the difference is fundamental to the way *nix systems handle memory or just the OS X specific implementation of it.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Those CPUs are definitely running on different performance profiles, the Hackintosh doesn't seem to be underclocking. If the Mac isn't Turboing (or not as aggressively) that would make a difference too. Gigabyte Z77 boards also overclock the CPU ~11% under load, so that could have been left enabled as well. It doesn't look like the memory speed is being read correctly (since both results look radically different), so maybe the Hackintosh is on DDR3-1866 or something. That said, the Geekbench site notes that the Memory benchmark tests OS memory manipulation performance, so perhaps OSX really is just slow at memory. The Sharpen Image test result is a huge difference, though.

cstine
Apr 15, 2004

What's in the box?!?

Shaocaholica posted:

Ok, here's an easy to read compare of the same hackintosh from my earlier post:

http://browser.primatelabs.com/geekbench2/compare/1715188/1713982

Same hardware, so that can't be an issue. No GPU-reliant tests, so that won't be either. Some tests have horrible swings in favor of both OSes, but Windows wins overall.

I wonder if the reason for the difference is fundamental to the way *nix systems handle memory or just the OS X specific implementation of it.

Edit: I can't read.

cstine fucked around with this message at 00:27 on May 13, 2013

Shaocaholica
Oct 29, 2002

Fig. 5E

Alereon posted:

Those CPUs are definitely running on different performance profiles, the Hackintosh doesn't seem to be underclocking. If the Mac isn't Turboing (or not as aggressively) that would make a difference too. Gigabyte Z77 boards also overclock the CPU ~11% under load, so that could have been left enabled as well. It doesn't look like the memory speed is being read correctly (since both results look radically different), so maybe the Hackintosh is on DDR3-1866 or something. That said, the Geekbench site notes that the Memory benchmark tests OS memory manipulation performance, so perhaps OSX really is just slow at memory. The Sharpen Image test result is a huge difference, though.

Is the GB Z77 OC a BIOS-level thing, or is it controlled in the OS via a GB driver? If it's BIOS-level, then it should be happening in both tests, since they were run from the same GB Z77 hackintosh machine. The CPU and memory BIOS-level settings should be identical as well.

HalloKitty posted:

Also, talking about OS X lacking features, the irony is that people think it's for graphics designers... yet it doesn't support 10-bit colour displays.

Isn't current 10-bit stuff all hacked in outside of the OS anyway? So long as the application and GPU driver can handle it, it's possible to implement without formal OS support, right? Isn't that how Photoshop and Windows do it currently? AFAIK, there are no mainstream OSes that support 10-bit at the OS level, which is a shame. <- See my avatar.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Shaocaholica posted:

Is the GB Z77 OC a bios level thing or is it controlled in the OS via a GB driver? If its bios level then it should be happening in both tests and they are run from the same GB Z77 hackintosh machine. The CPU and memory bios level settings should be identical as well.
I somehow did not understand that was actually the same machine, sorry; I have been on a bad run of misreading posts recently. Anyway, the auto-overclock is a BIOS setting, but it's applied based on the CPU performance state requested by the OS, so if the OS isn't requesting max performance I don't think you'll get it. Even with that setting disabled, though, you'll get different clockspeeds under the same load conditions depending on the OS and its power management settings. Someone asked the Geekbench people about the absurdly low Sharpen scores and they just guessed it was related to how OS X handles Hyper-Threading, but nothing concrete.

Shaocaholica
Oct 29, 2002

Fig. 5E
I guess I'm just stumped as to why apps feel more sluggish on my Mac than on my PCs. My Mac and PCs have different hardware, but dealing with a lot of computers on a day-to-day basis I can usually tell how app performance/responsiveness will scale from one PC to the next, and everything just feels off on a Mac. Off as in slower than what I would expect from a PC with the same hardware, and none of the apps I care about use the GPU. I'm aware of the internet rumor that there is some 'issue' with writing good OS X/Linux GPU drivers. I don't get why Apple doesn't just throw a few million at it. It's not like they don't have the cash.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Do you have a good SSD? Anandtech has posted a lot about how critical I/O consistency (not having some reads/writes take longer than others) is to the feeling of responsiveness on OS X, to the point where you can tell the difference between decent SSDs. The ideal is a Sandforce drive with the latest firmware, TRIM enabled, and at least 20-25% free space. Apple has taken flak from Valve and others for a while over their lack of commitment to quality video drivers, but I don't think there's much evidence of them increasing their focus on x86 devices.
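The 20-25% free-space rule of thumb is easy to sanity-check. A minimal sketch using Python's standard library (the helper names are made up for illustration; the 0.20 threshold is just the low end of the guideline above):

```python
import shutil

def free_space_fraction(path="/"):
    """Return the fraction of the filesystem at `path` that is free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total

def meets_ssd_headroom(path="/", minimum=0.20):
    """Check the ~20-25% free-space rule of thumb that helps SSD
    garbage collection keep I/O latency consistent."""
    return free_space_fraction(path) >= minimum
```

The headroom matters because the drive's garbage collection needs spare blocks to avoid the occasional slow write that makes the whole system feel laggy.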

Edit: Well that would certainly do it, maybe put a good SSD in the system?

Alereon fucked around with this message at 02:17 on May 13, 2013

Shaocaholica
Oct 29, 2002

Fig. 5E

Alereon posted:

Do you have a good SSD? Anandtech has posted a lot about how critical I/O consistency (not having some reads/writes take longer than others) is to the feeling of responsiveness on OS X, to the point where you can tell the difference between decent SSDs. The ideal is a Sandforce drive with the latest firmware, TRIM enabled, and at least 20-25% free space.

SSD in the Mac? No. My PCs are a mix of SSD and HDD for the OS.

The other part of this topic that I'm interested in is that just because a specific combination of OS and drivers can deliver the best performance in a particular application doesn't mean it's at the theoretical maximum. There could still be a lot of wasted cycles in the best-case scenario. What's interesting to me is how far off the real-world best cases are from the theoretical best.

I work in VFX and I know that my studio (and a lot of others) is pissing a lot of money away by being very, very far from the real-world best case and even further from the theoretical best. I know it's not an easy task to change the status quo of things like this at a super large 'corporate' level; just saying something because I notice it. We already have 2+ Intel engineers working on stuff for us, but it's still a hugely uphill battle with legacy code and the fundamental workings of our renderer.

Shaocaholica fucked around with this message at 02:34 on May 13, 2013


Beeftweeter
Jun 28, 2005

a medium-format picture of beeftweeter staring silently at the camera, a quizzical expression on his face

Shaocaholica posted:

If it's BIOS-level then it should be happening in both tests, since they were run from the same GB Z77 hackintosh machine. The CPU and memory BIOS-level settings should be identical as well.

It's certainly possible they're actually not the same, even if it's the same machine; in order to boot OS X successfully the bootloader usually has to override a bunch of stuff in the SMBIOS and DSDT and poo poo. I wouldn't be surprised if something wasn't set up correctly.
