|
evensevenone posted:You're right, it is just as difficult to write cross-platform software as it was in 1998 when PPC NT was last relevant. There were C compilers for both PPC NT and x86 NT (and Unix for many years prior). So long as you don't use anything specific to the architecture, the code could port fine! Hint: The same problem exists today even with .NET.
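To be concrete about what "specific to the architecture" means, here's a contrived C sketch (mine, not anyone's actual code from back then) of the classic byte-order trap that bites when the same source gets built for little-endian x86 and big-endian PPC:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Non-portable: reinterprets the byte buffer through the host's byte
 * order, so a big-endian PPC build reads a different value than a
 * little-endian x86 build. */
uint32_t read_u32_hostorder(const unsigned char *buf) {
    uint32_t v;
    memcpy(&v, buf, sizeof v);   /* result depends on host endianness */
    return v;
}

/* Portable: assembles the value byte by byte, so every architecture
 * agrees the buffer is little-endian "on the wire". */
uint32_t read_u32_le(const unsigned char *buf) {
    return (uint32_t)buf[0]
         | (uint32_t)buf[1] << 8
         | (uint32_t)buf[2] << 16
         | (uint32_t)buf[3] << 24;
}

int main(void) {
    const unsigned char wire[4] = {0x78, 0x56, 0x34, 0x12};
    printf("host-order read:  0x%08x\n", read_u32_hostorder(wire));
    printf("explicit LE read: 0x%08x\n", read_u32_le(wire));
    return 0;
}
```

Build the same file on both architectures and the first read disagrees between them while the second doesn't; that's the whole "don't use anything specific to the architecture" caveat in one function.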
|
|
How is GPGPU for AMD parts on Linux? If it more or less requires the official AMD drivers to actually use the GPU half of a Fusion chip, I'm not going to be interested at all.
|
|
Alereon posted:Phoronix has lots of tests of AMD hardware in Linux. Here's their review of AMD Fusion with open source drivers, though that was Brazos and not Llano. Why don't you want to use the Catalyst drivers?

That's just testing graphics performance; I see nothing there about compute. If I'm going to have to run Catalyst I'll just reboot into Windows where it's faster anyway.

Devian666 posted:I haven't had any issues with running computational stuff using the linux catalyst drivers. Is there a specific application that you need other drivers?

No, I was just looking at playing around with it. What is it currently being programmed in?
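If you want to see what a given driver stack actually exposes for compute, here's a rough C sketch (mine, not something anyone in the thread ran) that just enumerates OpenCL platforms and devices; back then a GPU showing up in this list on Linux pretty much implied Catalyst plus the APP SDK, since as far as I know the open-source stack didn't do OpenCL at all:

```c
/* Build: gcc list_cl.c -o list_cl -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);

    for (cl_uint p = 0; p < nplat; p++) {
        char pname[256] = {0};
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof pname, pname, NULL);
        printf("platform: %s\n", pname);

        cl_device_id devs[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devs, &ndev) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < ndev; d++) {
            char dname[256] = {0};
            cl_device_type type;
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof dname, dname, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_TYPE, sizeof type, &type, NULL);
            /* a GPU entry here means the driver stack exposes the APU's
             * graphics half as a compute device */
            printf("  device: %s (%s)\n", dname,
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU/other");
        }
    }
    return 0;
}
```

If the only device listed is the CPU, the GPU half of the APU isn't usable for compute with whatever drivers are installed.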
|
|
Devian666 posted:They delayed until September so let's hope so. It could take a while for stock to be available. Welp! http://www.xbitlabs.com/news/cpu/display/20110831102650_AMD_s_Highly_Anticipated_Bulldozer_Chips_Might_Face_Further_Delay.html
|
|
freeforumuser posted:http://www.xbitlabs.com/news/cpu/display/20110901142352_Gigabyte_Accidentally_Reveals_AMD_s_FX_Launch_Lineup_Specs.html POWER6 has had 4GHz+ chips for a while now, or are you restricting discussion to x86?
|
|
freeforumuser posted:Found a pretty legit BD leak from PCWorld France. Hmm, I wonder why the architecture is always so far behind in games?
|
|
KillHour posted:I've always wondered why everyone always seems to root for AMD. Is it nostalgia for… AMD was never "on top". Having a better product than your competitor doesn't mean you're "on top".
|
|
It's starting to sound like Bulldozer might be a gamble in the same way that NetBurst and Itanium were gambles for Intel. NetBurst and Itanium were both designed with the assumption that their "performance deficiencies" would be overcome by some sort of scaling: clock speed for NetBurst and crazy compilers for Itanium. Bulldozer looks like it's betting not on clock speed but on concurrency and on eventually offloading more and more work onto the GPU. Even if BD flops I wouldn't be surprised if Intel lifted some ideas off of it, as they have been so good at doing in the past. VVV That's the goal for the follow-ups to Bulldozer in the same family. The whole Fusion thing. Longinus00 fucked around with this message at 03:38 on Oct 3, 2011
|
WhyteRyce posted:A possible explanation for some of the lackluster leaked performance numbers Is this on LKML? Do you have links for the patch?
|
|
I found the thread that has the final patch versions (that I know about). Included are kernel build benchmarks. https://lkml.org/lkml/2011/8/5/171
|
|
Alereon posted:I would consider an 8-core processor that can't quite equal a quad-core to be a pretty serious failure. The HardOCP Cinebench numbers show Bulldozer BARELY beating a Phenom II X6, and losing slightly to the i7 2600K. Things are a bit better for POVRay, but I'd definitely say that multi-threaded performance is far below expectations. I never really expected per-core performance to be good, but I at least thought it would win pretty handily in heavily multi-threaded integer workloads, and that is definitively not the case. I would have also hoped that per-thread floating point performance would go up over Phenom II, but instead it seems to have dropped, pretty seriously when you consider that Bulldozer has a 200-500MHz clock speed advantage, depending on how effective Turbo Core is.

Cinebench and POVRay are integer workloads?
|
|
WhyteRyce posted:Why is Windows 7 getting poo poo when Linux needed a kernel patch as well?

The Linux kernel patch only increased performance by up to 10%; a 40% performance increase is pretty crazy all things considered. AMD has had previous problems with Windows kernel scheduling (e.g. the Phenom Cool'n'Quiet problem), so I suppose this isn't unprecedented. That was also a problem that core parking would have solved; it just took until Intel got on the case for that to be implemented in the kernel. I wouldn't personally believe anything until more people can actually try this stuff out and see what changes are really being made.

BlackMK4 posted:Probably because everyone knows Linux already has compatibility issues with a lot of hardware.

Considering how Bulldozer is supposed to be about improving server performance, I'm sad that all/most of the benchmarks so far have been for desktop apps on Windows. Hopefully someone will get their hands on some Opterons and start doing those tests.

Longinus00 fucked around with this message at 07:11 on Oct 16, 2011
|
Setzer Gabbiani posted:Given all the bad press, I'm surprised the 8120 and 8150 are both sold out on Newegg Remember the reason it was delayed in the first place? Yield issues.
|
|
Hey, monopoly markets are fine. Look at how much innovation is going on in the ISP/telecom industry; we keep getting more and more bandwidth and better prices, amiright guys? Hell, IE6 was so good Microsoft didn't even need to upgrade it for years and years.
Longinus00 fucked around with this message at 19:21 on Oct 20, 2011 |
|
trandorian posted:But I did get my speeds upgraded this year out of the blue? Only "competition" my cable provider has here is 3 mbps DSL. AMD's been as effective a competitor against Intel as DSL has been against cable for at least the last 2 years, which is to say, not very.

That's great news for all the AT&T and Comcast/Verizon customers that get faster speeds and lower bandwidth caps! It's also nice that IE6 was so standards compliant that when standards-compliant browsers came along, none of those sites worked with them, and the later IEs have an IE-compatibility mode (that is to say, IE6 wasn't standards compliant; it's just that sites were forced to work with it).
|
|
trandorian posted:Why yes, when browsers more standards compliant than IE6 came out they were more standards compliant than IE6. How insightful!

You do realize that the Microsoft plan was to have IE releases tied to OSes, and that XP SP2 happening screwed everything up and delayed Vista, right? Microsoft actually halted development of the next OS for a decent period of time to revamp XP with SP2. IE7 was due to come out at roughly the same time that browsers on par with IE6 were finally coming out. I'm not sure you're getting it, unless you were making a commentary on the difference between de jure and de facto standards. IE6 was purposely not standards compliant, but because of its market share all the sites had to code to its standards, thus screwing other browsers with much smaller market share. AMD is also in a totally different position than Microsoft, because BD is not some vehicle with which AMD is going to change the CPU standards that other people will have to deal with later. If you only relate them by delays, then I suppose BD is actually like Half-Life 2 and every other product that has suffered schedule delays.

Longinus00 fucked around with this message at 22:21 on Oct 20, 2011
|
trandorian posted:IE6 was the most standards compliant browser when it was released, that's a fact and your Microsoft bashing doesn't change that. No browser was fully standards compliant before IE6 and in fact there's still none now that are compliant with everything. And there was none as compliant as IE6 was until many years after its release. Nor was "introduce proprietary support things" a Microsoft only thing, Netscape was especially bad with trying that, and adware Opera at the time had its own special things it supported.

Really? Opera 6.0, which was released a month after IE6 and supports CSS2, is less standards compliant than IE6?
|
|
Intel is a huge company. They have fabs, RAM (that's how they started out before they did processors), flash, CPUs, and a whole bunch of miscellaneous stuff. It's not like they only do CPUs. Imagine if Intel's fabs got split off GlobalFoundries-style so other people could use their process. Longinus00 fucked around with this message at 04:07 on Oct 21, 2011
|
Combat Pretzel posted:I wouldn't be surprised if someone came up with an assembly-level recompiler...After all, only the APIs need to be there, not the CPU architecture per se. What do APIs have to do with assembly?
|
|
Looks like Phoronix finally got around to benchmarking the 8150 (skip to page 6+). I wouldn't normally bring up such a trashy site, but these are the first Linux benchmarks I know of and the chip seems to do okay. Too bad about the crazy power draw.
|
|
Agreed posted:Do they just not own a 2600K or what's the deal there? Phoronix isn't big enough to get sent production samples or anything, which is the same reason the review is so late. I think he might have purchased this 8150 out of pocket, so it doesn't surprise me that he doesn't have a very comprehensive field to test against (notice the lack of a hex-core K10).
|
|
Zhentar posted:That article is a bit deceptive, because there's one thing it doesn't make clear... all of those scores are significantly lower than if they were just allowed to run with 8 threads in the first place. I think it's not trying to compare 2/4 threads vs. 8 threads; it's figuring out how to best schedule when there isn't full core/module saturation, like in games. If Windows is trying to maximize idle cores/modules in lightly threaded situations, it could lead to lower performance. This might be where Windows 8's 10% performance increase comes from. Longinus00 fucked around with this message at 16:32 on Oct 28, 2011
|
Zhentar posted:Yeah, I realize that's what they're intending to compare, but the article doesn't do a good job of conveying that; I was pointing it out because it would be easy for someone to walk away from that article with the wrong conclusion. It might help out even in non-FP situations, because BD shares decoders across a module. You might get better performance simply by being able to throw all of a module's decoders at one thread instead of two. What this does to power consumption is a different matter.
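A rough way to poke at the module-sharing question yourself on Linux (my own sketch, not anything from the LKML patch thread): pin two busy threads either to the two cores of one module or to cores in different modules and compare wall time. It assumes the kernel numbers the two cores of a module as adjacent CPUs (0/1, 2/3, ...), which is worth double-checking against /sys/devices/system/cpu/cpuN/topology/ before trusting the result:

```c
/* Build: gcc -O2 -pthread pin_test.c -o pin_test
 * Usage: ./pin_test 0 1   (both workers on one module, if 0/1 share one)
 *        ./pin_test 0 2   (workers on separate modules) */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void *spin(void *arg) {
    int cpu = *(int *)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    /* pin this worker thread to the requested logical CPU */
    pthread_setaffinity_np(pthread_self(), sizeof set, &set);

    /* integer-heavy busy loop, standing in for a real workload */
    volatile unsigned long acc = 1;
    for (unsigned long i = 0; i < 1000000000UL; i++)
        acc = acc * 2654435761UL + i;
    return NULL;
}

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s cpuA cpuB\n", argv[0]);
        return 1;
    }
    int cpus[2] = { atoi(argv[1]), atoi(argv[2]) };

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    pthread_t th[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&th[i], NULL, spin, &cpus[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(th[i], NULL);

    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("elapsed: %.2fs\n",
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    return 0;
}
```

Run it once as ./pin_test 0 1 and once as ./pin_test 0 2; any consistent gap between the two times is the shared front end (and shared L2) showing up.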
|
|
Fuzzy Mammal posted:Is there any news on the 28nm gpu lineup? Southern Islands is the chip family codename right? I haven't heard anything on them in months and thought they were supposed to be out by now? Granted it feels like the next round of nvidia boards are in the same boat. I bet that if anything's holding them up, it's yields on the new 28nm process.
|
|
Aleksei Vasiliev posted:http://www.theverge.com/2011/11/29/2596978/amd-committed-to-x86

This doesn't refute the claim that they're pulling out of the desktop x86 market.

Slider posted:Is bulldozer actually any faster in games compared to the old phenom II chips? Newegg has the fx-4100 for 120 bucks and it doesn't seem like a bad deal if you buy a 212+ and overclock it. I know the heat/power consumption sucks, but a quad core 4.5ghz processor if you're on a budget doesn't seem that terrible to me.

Short answer: No.
|
|
Daeno posted:Supposed 7000 series pricing. I wonder how much of this is due to the terrible yields TSMC is giving them vs. abandoning the whole chip philosophy they started with the 4800 series.
|
|
Shaocaholica posted:They could have used log scale. Why would they do that?
|
|
Considering the process issues, it's not surprising they don't have enough supply. It's a repeat of last year, and the year before.
|
|
How is this even news? Nvidia started it before ATI and they've both been doing it ever since. Are you guys going to rage out every year when it happens again?
|
|
When's the last time ATI/AMD was "on top" anyway? The only thing that would be surprising is if Nvidia were able to price-match ATI/AMD's top card.
|
|
I was talking about a single-chip solution. Crossfire/SLI brings its own problems (driver support required for it to work in games, misc. issues in games, microstutter in 2x mode, etc.).
|
|
How did my comment that a new Nvidia card being faster than the 7970 isn't a big deal, just maintaining the status quo, turn into this? I do mostly agree about ATI's driver deficiencies, though, especially on the Linux side, but that's another can of worms.
|
|
Nvidia's started the trash talking already, so I guess that must mean a sizable number of early adopters are jumping ship to the 7970. Hopefully by the time the next Fermi comes out (the article says a March-April timeline) AMD's yields will be good enough for them to start lowering prices.
|
|
In other news: Nvidia (stop me if you've heard this before) says Kepler is going to be pricey because of yield issues. Get ready to consider the 7970's price a "bargain". Intel, having no direct CPU competition, decides that it can sit on Sandy Bridge until inventory sells out. Nobody could ever have seen this coming, nobody.
|
|
Alereon posted:All indications seem to be that it's a Trinity APU, two Piledriver modules (four cores) and a VLIW4 GPU. If this is true then it might work out great for AMD, as game developers try to squeeze the maximum performance out of AMD's somewhat peculiar module design. Optimizations learned there might be applicable to more general programs, or to compilers targeting Bulldozer-esque designs.
|
|
My guiding principle for hardware these days is "don't buy any individual component that's over $200". Obviously the philosophy isn't for everybody, but it does mean that for the price of someone else's video card you can get a fast-enough fully working system (especially if you reuse parts in an upgrade).
Longinus00 fucked around with this message at 06:13 on Apr 30, 2012
|
grumperfish posted:I usually spend around $250 for a videocard, and haven't ever really been disappointed with performance. I don't have extreme requirements, but that usually puts me well in the mid-range with power to spare, and in two years I just grab another ~$250 card to move up. This worked out particularly-great with the 4870 and the unlocked 6950 I'm running now, as I can very-nearly max everything out (at 1680x1050) and overclocking fills the gaps when I want to run stupid-high settings with something like The Witcher II or Metro 2033. $500+ videocards have their places for certain people, but I'd rather trade off maximum performance for being able to continually-receive "good enough" performance without having to turn many settings down. I don't think I'd handle moving to a 5770-6850 very well, as the inconvenience of having to tailor settings would outweigh (for me) the reduced cost vs. a more powerful card.

If you're willing to put up with rebates, count the cost of an included game as part of the "discount", and not buy right at release, then a 6950 just squeezes in as a $200 card. I ended up going for a 6870 because you could get them for around $150 after rebate and it basically doubled the performance of my old 4850. I'm actually surprised that I can run 60fps @1080p in many new titles even without any overclocking.
|
|
Civil posted:I'd spend good money if AMD (or nv) could produce a video card that performed at mid-range levels, but didn't require an aux. power source or a massive cooling unit. I'm currently rocking a HD5450 because my wife and I wanted a quiet PC, and it does just fine pushing dual 1920 monitors. The last gam3r card I had in there (4850) sounded like a hairdryer.

The 4800 series of cards was notoriously hot; the 4870 would get close to 100C in games. The newer generation of cards runs a bit cooler, and cooling has improved a bit since then. All performance-geared midrange cards come with multi-fan cooling solutions, which lowers the noise even further. If even that is not enough, you can try the ridiculous passive-heatsink cards or the even larger aftermarket passive heatsinks.
|
|
Civil posted:While that card is passively cooled, it still requires additional power, and has the case heating issues that go along with that. I was hoping AMD would solve the problem at the chipset level, rather than an OEM solution that takes 3 slots because the heatsink is so massive.

The reason midrange cards require additional power is that it takes all that additional power to reach midrange performance. You may as well complain about how new midrange CPUs require extra 12V headers on the motherboard. There's nothing you can currently change about the "chipset", whatever you mean by that, to fix it. Now, if you mean you want a card as fast as the midrange of X years ago, then you're in luck.
|
|
sh1gman posted:Well as far as overclocking, the 6870 is literally almost impossible to overclock, either I got a bad chip, or something, because a mere 5-10mhz bump on core causes artifacts in 3DMark Vantage, and same deal for UniEngine Heaven(I have since read reviews on several tech sites and they all come to the same conclusion that the 6870 just has 0 overclock headroom).

How badly do you want that extra performance? Upgrading after just one generation is usually not a worthwhile investment, especially since the 7800 series is so much more expensive than the 6800 series. If you really want to move up and 'future proof', then you might as well go all out and splurge on one of the 7900 series. I happen to have a 6870 and I have no problem OCing it to the factory-OC levels that manufacturers ship the more expensive cards at. I also have no problems getting 60fps @1080p in Skyrim, but I don't play it with the highest texture level that was patched in.
|