|
AMD copped to production issues with 32nm, not a good sign. You never admit to this, never!
|
# ¿ Sep 30, 2011 18:34 |
|
Desperate attempt to draw traffic to a site, or legitimate excitement? quote:I'm sitting in on a press briefing for AMD Bulldozer right now, and while everything is embargoed, I will say this: If you're building a gaming PC, this is going to be the way to go. I won't be surprised if this delivers all the performance you need for games (and then some) at a very competitive price point compared to Intel. And sufficient performance for modern games usually means sufficient performance for most other desktop tasks. I'm kind of excited to see what the G34 Bulldozer variants will deliver. I'll likely be working on a G34-based refresh for one of our server boards soon, and having some Bulldozer action to toss in (plus the lower TDP) will be awesome. Have they said which SB this mates with yet?
|
# ¿ Oct 6, 2011 15:54 |
|
freeforumuser posted:PCM NL leaked benchmarks: Fixed link. The module architecture reminds me a bit of what the Xenon can do, in terms of having 3 cores but each being 2-way SMT capable. Benchies look good; not many people really need more than four cores, but the price point is nice and it can at least play in the same field as Intel now. movax fucked around with this message at 17:43 on Oct 6, 2011 |
# ¿ Oct 6, 2011 17:39 |
|
Ragingsheep posted:If Bulldozer matches a 2500K in terms of performance and price, is that enough? I don't think so; the 2500K was essentially completed a year ago, if not longer. Several revisions of the chip have been taped out, the chipsets for it are mature, and Intel is busy at work getting Ivy Bridge ready for mass production; 22nm ES silicon is already at the majority of ISVs, and that product will only exceed Sandy Bridge performance with no regressions (ideally), as well as correcting some Sandy Bridge errata. I think AMD has to provide a very compelling budget-conscious processor, an area they have historically dominated (hell, I picked an Athlon II for my server build several years ago because the equivalent Intel hardware was a good $150+ more and delivered less performance). Delivering a comparable product is good, but it's not so good when you're always racing to catch up and your competitor has the resources to put out an immediate successor to the chip you're trying to compete with, while being a full generation ahead with physical silicon of their next-generation architecture.
|
# ¿ Oct 11, 2011 06:35 |
|
gently caress, I just feel terrible for the AMD engineers. They've had a tough road in developing this chip, and they've released it knowing full well that it's about to get poo poo upon by everyone. Kind of like sending your kid out onto the field knowing he's about to get his rear end kicked into the ground at worst, and barely managing to keep pace with the other kids at best. That said, the weapon they can bring to bear on Intel is pricing. I wouldn't have an issue tossing a chip like this into a system for a non-gaming, non-techie family member if the price for the mobo and CPU was right. One thing I like about AMD is how long that socket has lasted, and the relatively lower cost of their boards. The Intel PDG is a pretty thick book with exacting specifications on every little thing; the AMD guidebook is a little looser, and the specs are pretty tolerant, so you can shortcut a bit at the artwork stage. Just remember, the consumer is hosed if AMD ceases to be a viable competitor in the desktop x86 market space.
|
# ¿ Oct 12, 2011 06:54 |
|
HalloKitty posted:Here's a wild card: Hardware Heaven's review Uggh, that style sheet/site is hard to read, but gently caress, even that little site has some awesome hardware to test with! The only thing that really jumps out at me is that they used DDR3-1866, but they did use that on both platforms. No idea if they played with BIOS settings on the Intel board to increase memory frequency.
|
# ¿ Oct 12, 2011 14:57 |
|
Alereon posted:Yeah I read further and the conclusion and 9/10 rating offended me enough to call them out on their own forums for lack of editorial integrity and writing reviews to please their sponsors. My favorite part was when the reviewer said Bulldozer needed a lower price, and still gave it a 9/10 for value. My second-favorite part was where they didn't mention the higher power usage in the conclusion at all, or even on the page with the power usage numbers. Nice post, but sadly that HW site is just part of the "noise" that fosters a really insular, groupthink-prone community and spreads FUD. I hope someone answers that post politely! e: hah, you already got called out within two posts.
|
# ¿ Oct 12, 2011 16:17 |
|
Agreed posted:Who thought up the modules=cores idea? Why? If it was an engineer I don't understand it, if it was a marketing guy fire the fucker now. I think the concept has merit, I just hope that AMD gets the chance to further explore it. They burned a lot of transistors in their branch predictor for this one; hopefully a process shrink or further development (likely already in progress, seeing as BD probably taped out six months ago) will let that investment pay off. Hopefully the Radeon 7000s own face (heh, we're already back at the 7000 numbering there) and can help keep AMD solvent.
|
# ¿ Oct 12, 2011 16:38 |
|
Alereon posted:While that's true for floating point workloads, most people really care about integer performance. If Bulldozer actually performed like an 8-core for integer stuff but a quad-core for floating point, pretty much everyone would consider that a good deal. Except somehow they managed to get it to perform like a slower hex-core at best. I believe the mass-market launch has slipped to Q1 2012, but we should still get a paper launch by the end of the year, I think. They should have a good few months to optimize and get ready to combat Kepler as well. I'm holding out for whatever single Kepler card gets me close to 60FPS @ 2560x1600, personally. e: You knew this was coming: Hitler sees the Bulldozer benchmarks. Downfall is a great movie, made even more amazing by this scene being so subtitle-ready. "Everyone who bought a Sandy Bridge needs to get the gently caress out now! What the gently caress has AMD even been doing these past few years?" Jacking off to hentai and My Little Pony? "I could poo poo a better CPU! 2 billion transistors and this is what we get?" movax fucked around with this message at 17:26 on Oct 12, 2011 |
# ¿ Oct 12, 2011 17:17 |
|
|
# ¿ Oct 12, 2011 19:51 |
|
It's begun: a poor goon posted in the parts megathread about perhaps getting this processor, wooed by the eight-core marketing. We were able to save him, but how many more will fall! Ok, so it's not like they're picking up a Pentium 4 vs. an Athlon 64; it'll still deliver performance, just sucking down more power than a comparable Intel part and suffering from unoptimized schedulers/software.
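For anyone curious what "unoptimized schedulers" means here: Windows 7 treats the two cores in a Bulldozer module as fully independent, so two busy threads can land on the same module (contending for its shared front end and FPU) while another module sits idle. Here's a minimal sketch of the pairing logic a module-aware scheduler would want, assuming logical CPUs (0,1), (2,3), etc. pair up per module on an FX-8150 — that mapping is my assumption for illustration, not something from the reviews:

```python
# Hypothetical module topology for an 8-thread / 4-module part like the FX-8150.
MODULE_PAIRS = [(0, 1), (2, 3), (4, 5), (6, 7)]

def one_core_per_module(n_threads):
    """Return the logical CPUs to place n_threads on, filling one core per
    module before doubling up, so lightly-threaded workloads don't fight
    over a single module's shared front end/FPU."""
    spread_first = [pair[0] for pair in MODULE_PAIRS]
    double_up = [pair[1] for pair in MODULE_PAIRS]
    return (spread_first + double_up)[:n_threads]

print(one_core_per_module(4))  # -> [0, 2, 4, 6]: one thread per module
print(one_core_per_module(6))  # -> [0, 2, 4, 6, 1, 3]
```

On Linux you could apply a placement like this per worker with os.sched_setaffinity(); IIRC the later Windows 7 hotfixes did roughly this core-spreading natively.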
|
# ¿ Oct 13, 2011 22:25 |
|
Alereon posted:We should adapt the Programming Language Checklist. This is pretty awesome, heh. You should probably PM Crackbone anyways to get him to add your little Bulldozer blurb to the OP. Sometimes people do read it! Again though, at least this chip manages to mostly "keep pace" with Sandy Bridge outside of a few applications. Not as big of a performance gulf as, say, NetBurst vs. the A64. Intel's marketing and buddy-buddy (honestly kind of suspicious) relationships with OEMs kept power-hungry Pentium 4s shipping like crazy even though Athlons were running circles around them. If I recall correctly, Adobe Premiere 6.5 was one benchmark where an Athlon 64 that was behind by nearly a gigahertz in clock would still defeat a Pentium 4 by a very healthy margin. It took until Premiere 7 (Premiere Pro) for the Pentium to start winning that benchmark.
|
# ¿ Oct 14, 2011 14:46 |
|
PC LOAD LETTER posted:Given what he said about BD largely panned out I don't think you can hand wave away what those ex engineers said as "disgruntled employees bitching" or exaggeration or something. I think those guys are floundering because they don't have enough money. They're having to cut corners somewhere, be it the architecture team, process, software support, packaging, etc. They can't fire on all cylinders. In an ideal world they'd have an army of software engineers preparing drivers and updates for the major operating systems while the hardware team gets the actual hardware ready. If they really have switched to doing a ton of the design with automated EDA tools, I can also see a disconnect between some old-guard engineers and the fresh guys who studied with EDA in school. I know I'm a baby engineer and I had EDA tools at my disposal during school, but I've had to go back to the dark ages a bit in supporting some legacy products.
|
# ¿ Oct 18, 2011 02:07 |
|
trandorian posted:How would you even break up Intel? Not let the desktop and laptop cpu teams talk to each other? Better question: why would you gently caress with a company that has the majority of institutional knowledge when it comes to computer hardware (from the process level up to software), when the world literally runs on their hardware and innovations? The Intel Architecture Labs developed PCI and AGP, amongst many other contributions. In fact, a lot of their research was shut down prematurely because they were beginning to compete with Microsoft. That's a bit sensationalist, though; on a saner note, if there were some busting going on, I assume it'd be split along the lines of business units.
|
# ¿ Oct 21, 2011 04:20 |
|
Nostrum posted:Is it really even necessary? They paid huge fines and settlements to AMD over their anti-competitive behavior. Pretty sure AMD's only recent fiscal year in the black was because of that. Is there any evidence that they are still actively pursuing anti-competitive practices? Their prices are pretty fair considering they ARE delivering the best product. I dunno if there is any current concern over their business practices, but I think what people are "worried" about is when/if they get a virtual monopoly on a gigantic market because their products are the only ones available/worth buying. The barrier to entry for a new competitor in the x86-processor market would be almost insurmountable, I think.
|
# ¿ Oct 21, 2011 05:24 |
|
I read the Windows NTDebugging Blog quite often, and they just put up a write-up on debugging a CLOCK_WATCHDOG_TIMEOUT, which has recently come into the limelight as happening with Bulldozer. Interesting read if you want to see a Microsoft engineer step through and isolate the problem.
|
# ¿ Oct 26, 2011 19:53 |
|
streetgang posted:Hey since we have all these faster processors cranking out, how does a 8 core affect gaming? do half the mmo's out there and pc games even have coding to use a 8 core ? HalloKitty pretty much covered it. The biggest applications for multiple cores are productivity applications. Developers will see faster compile times (we use one of our old 32-core Opteron chassis to compile/simulate VHDL/Verilog), certain server applications like databases prefer cores to raw clock speed, etc. VMs also of course benefit from many cores, and it's nice to get a large number of cores while consuming fewer rack units. Multiple cores are definitely awesome, but all you need for gaming at the moment is a quad-core.
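Since the compile-time point comes up a lot: the pattern is just fanning independent jobs out across however many cores exist, the same way `make -j$(nproc)` fans out translation units. A toy sketch (the "compile" workload here is obviously made up, just to show the shape of it):

```python
import os
from multiprocessing import Pool

def compile_unit(n):
    # Stand-in for compiling one translation unit / simulating one HDL module.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [10_000] * 32                  # 32 independent "files" to build
    with Pool(os.cpu_count()) as pool:    # like make -j$(nproc)
        results = pool.map(compile_unit, jobs)
    print(len(results))  # -> 32
```

Each job is independent, so wall-clock time scales down with core count until you run out of jobs — which is exactly why build servers and simulation boxes love core-heavy Opterons.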
|
# ¿ Oct 27, 2011 15:44 |
|
pienipple posted:Might be good for an HTPC setup as it eliminates a major source of heat in the small case. That's not a very large market though. Especially with competitors like Sigma whose SoCs are cheaper/somewhat less complex options for powering set-top media boxes. TI's DaVinci chips are also very powerful for their price; they even sport integrated DDR2/3 controllers, PCI Express, SATA, USB and Ethernet. All on one BGA for ~$80 at quantity, IIRC. Why would you even bother dealing with x86 at that price? Licensing a BIOS, trying to minimize power consumption, etc...painful.
|
# ¿ Dec 2, 2011 16:35 |
|
WhyteRyce posted:Oops, sorry I should say I have a E-350. It's nowhere near as rock solid as the other setup was. I've got some weird HDMI issue where occasionally the resolution looks like it gets set really low (i.e. my WMC looks like it's running 1024x768 stretched to fit my screen), which goes away if I minimize and re-maximize. Which is super-annoying in a HTPC setup that you want to control only with a remote. If I elect to connect directly to my TV instead of through my receiver, then my screen will turn black randomly while idle. WMC will also crash if I have too many files in a video directory. It also crashes when it tries to render the thumbnail for certain MKVs. Continually waiting around for Silverlight 5 so I can do HD Netflix is also fun. And AMD removed the overscan correction tool in a couple of their driver releases. Not to derail too much, but this is why I left HTPCs behind. I know a lot of people have them running successfully and love them to death, but I just had so much trouble with them I went back to dedicated set-top media boxes. You sacrifice some broader software compatibility, but it's much less painful. It didn't help that my target display was connected via 1080i component, which means a fun battle against overscan. That said, maybe some AMD hardware will be finding its way into aforementioned boxes, but they've got stiff competition from the existing players in the field.
|
# ¿ Dec 6, 2011 03:30 |
|
Factory Factory posted:I do a full HTPC because I actually don't have a huge transcoded media archive. I spend most of my storage on documents and media I create and backups of my PCs, and all my video stuff is live/recorded cable TV, streaming from YouTube and Hulu, and physical disks. It would actually be more of a pain in the rear end to convert everything over to STB-playable stuff than just deal with the four different player softwares and web browser - all of which work great, they just aren't centralized. And I'd need more, expensive storage to boot. Yep, I'm the opposite, with a huge NAS (~15T) full of media that the Sigma can playback. Occasionally run into issues with compressed headers because my NMT is an older generation, but nothing that can't be overcome with a little work.
|
# ¿ Dec 6, 2011 06:07 |
|
necrobobsledder posted:There's also the sheer laziness factor by your customers not wanting to deal with the hardware-supported formats better as mentioned above. Me, I'm really pissed off at having to transcode stuff because most of what I have is so low quality to begin with I can't accept transcoding for convenience. You hit the issue on the head exactly. Right now, your best bet for playing any given media format is generally an ffmpeg-based player solution on your PC. You're at the mercy of the firmware maintainers/developers for set-top boxes, however long they plan on continuing to support their box, and whatever formats they choose to implement. Maybe someone will put out an AMD APU/similar-based solution running x86 Android or similar to make a decent turn-key HTPC, who knows? I've mostly played with TI DaVinci hardware, and it's quite powerful, but the work involved in getting software decoders to leverage the hardware makes me want to eat a gun most of the time.
|
# ¿ Dec 6, 2011 07:40 |
|
If I recall correctly, most textbook examples of implementing memory of the type you'd want to use for cache need something like six transistors per bit. I am going 100% off memory at a bar right now; I can check my VLSI textbook when I get home to make sure. Someone like JawnV6 or the other chip design goons could comment on it in more detail, as I've only done small mixed-signal designs.
|
# ¿ Dec 7, 2011 01:35 |
|
DNova posted:I'm not sure why you're bringing this up, but generally on-die cache is SRAM, which is generally a minimum of 6 transistors per bit. Sometimes more. Rarely less. Contrast to standard DRAM which is 1 transistor per bit. Ah, so I was remembering correctly. I was pretty sure it was SRAM, but I was impaired and couldn't think straight, so I left it to someone else to clarify. I figured that the massive amount of cache on Bulldozer would have contributed a significant amount to the various transistor-count figures bandied about by marketing/engineering. Not that it matters, because 1.2 billion or 2 billion transistors, it still screwed AMD either way.
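To put a rough number on that: the FX-8150 carries 8 MB of L2 plus 8 MB of L3, and at the textbook 6T-per-bit SRAM cell the data arrays alone are a big slice of either transistor figure. This is strictly my back-of-the-envelope math, ignoring tag arrays, ECC, and peripheral logic:

```python
# Rough sketch: 6T SRAM data cells only; tags, ECC and sense amps ignored.
MB = 1024 * 1024
cache_bytes = (8 + 8) * MB          # 8 MB L2 + 8 MB L3 on the FX-8150
transistors = cache_bytes * 8 * 6   # bits * 6 transistors per SRAM bit
print(f"{transistors / 1e9:.2f}B")  # -> 0.81B, i.e. ~800 million for cache alone
```

So somewhere around 0.8 billion of the transistor budget is just SRAM cells, which is why the gap between the 1.2B and 2B marketing figures mattered less than it looked.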
|
# ¿ Dec 7, 2011 05:27 |
|
Shaocaholica posted:Is there a nicely formatted wikipedia article listing all the GPU families and the hosed up re-naming? I would think that would be one of the best forms of information against the practice. I'm not sure of a family comparison in particular, but each generation generally has a pretty good table listing all the specs, plus launch pricing and launch dates, which is a very good place to start. I gave up trying to understand laptop GPU naming long ago (I have no idea what my MBP's GT330M compares to), but they are plenty guilty of this on the desktop as well. The low-end parts are often based on last-generation stuff and have very little in common with their higher-end buddies. I wish that nvidia and AMD would just stop bothering, seeing as how integrated graphics are growing in penetration (especially thanks to, you know, shipping *with* the CPU), and I don't see very many use cases where you'd need a low-end, discrete GPU over the IGP in your CPU and chipset. Of course they won't, and people will still pay $100 for a POS discrete card that performs maybe 10% better than integrated graphics, but still falls into the dead zone of being plenty for regular computing, yet useless for playing games. e: I actually went and looked up some stuff for nvidia in particular. The GeForce article has a table at the bottom that attempts to make sense of part names. Here is a giant spec table of nvidia GPUs. The most important thing to pay attention to there is the code-name, which can give you a relative idea of what generation is at play. And here are Red Team's charts. movax fucked around with this message at 19:18 on Dec 7, 2011 |
# ¿ Dec 7, 2011 19:14 |
|
HalloKitty posted:That basically already exists in Windows Experience Index, and it was clearly designed to fulfill this role - but it never gets used. 3DMark tried, but therein came the issue of driver cheating and optimizing for certain benchmarks as well. Basically, fire all marketing departments into the sun: impossible demands, promising poo poo we can't deliver as engineers, and then confusing the poo poo out of the customer.
|
# ¿ Dec 7, 2011 19:40 |
|
Shaocaholica posted:That list is still a bit hard to read. Maybe I should make my own on wikipedia. The avalanche of [citation needed] heard around the world...
|
# ¿ Dec 7, 2011 20:55 |
|
necrobobsledder posted:Jesus, 3w at idle? That's some serious power gating happening, you'd think they're more power efficient at desktop 2D than integrated chips from Intel are by now. It's like someone decided to drop ClkEns in everywhere in the design and actually properly partition it! Could be a big selling point over nvidia's entry if they can't pull off the same. (I will buy green no matter what anyways, though)
|
# ¿ Dec 16, 2011 17:24 |
|
Chas McGill posted:I'd watch that. At this point, I think it's a decent cash stream. They were taped out years ago, process improvements are always marching on, and you have a guaranteed customer until they EOL the console and stop production. As the consoles shrink too, you can migrate those chips to your newer processes so you can decommission your older lines/furnaces/lithography/etc.
|
# ¿ Dec 19, 2011 16:25 |
|
Shaocaholica posted:^^^ I would think devs are using 32bit pointers somehow to save space since neither the 360 or PS3 needs the additional addressing bits even if their CPUs are 64bit. I think he's getting at the fact that x86/PPC have vast architectural differences as well. Obviously they aren't programming entirely in assembly, given the robust development tools supplied by each manufacturer; this isn't the dark ages of game development. Remember that the 360/PS3 are exotic compared to our x86s. The 360 is a three-core PPC-based CPU, each core capable of 2-way SMT. The Cell hardware far outpaced the state of the art in compiler development/parallel programming tools and supplies a single PPC-derived core mated to 7 128-bit RISC cores. And yet, most companies buy an engine that's been designed to run on both of those platforms to base their games on. The obvious exceptions tend to be each system's halo titles; I remember some blog posts from a Naughty Dog developer where he detailed dropping down to the assembly level to get some effects in Uncharted working properly. I think we'll definitely see more memory in the new-generation consoles, especially for high-res textures, but I don't see why they'd need in excess of 4GB. Even our PC GPUs are fine pushing 1080p with 1GB VRAM, and I don't think most titles in the PC environment eat more than 1GB or so of system memory. Anyways, AMD getting some much-needed revenue from winning a contract to supply a GPU would be most welcome! And IBM will probably get the business (again) for the CPU design.
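On the 32-bit-pointer idea: one common trick (an assumption on my part about what console devs actually do, but it's standard practice in memory-constrained engines) is storing 32-bit offsets into a base arena instead of full 64-bit pointers, which halves every pointer-heavy structure when you'll never address more than 4 GB anyway. A sketch of just the size difference:

```python
from array import array

# A table of 1000 "links": 64-bit pointers ('Q', 8 bytes each) vs.
# 32-bit offsets from an arena base address ('I', 4 bytes each).
full_pointers = array('Q', range(1000))   # naive 64-bit build
arena_offsets = array('I', range(1000))   # offset-from-base scheme

print(full_pointers.itemsize, arena_offsets.itemsize)   # -> 8 4
print(len(full_pointers) * full_pointers.itemsize)      # 8000 bytes
print(len(arena_offsets) * arena_offsets.itemsize)      # 4000 bytes, half the footprint
```

Same idea as the x32 ABI on Linux: run 64-bit instructions but keep pointers 32 bits wide, since the extra addressing range goes unused.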
|
# ¿ Dec 21, 2011 23:32 |
|
Agreed posted:nVidia and ATI both get up to some heinous bullshit when it comes to the low-end and mobile SKUs, do they not? And "why" is probably "because they get away with it every time since the market for those cards is squarely aimed at people who have no idea what they're missing out on." The low-end GPU business is being cannibalized (and rightfully so) by integrated GPUs. There are very few reasons to get something lower than a midrange card these days...you either need the GPU (games) or you don't (integrated will handle Aero, DXVA, etc). I remember my GeForce 7300 being some kind of GeForce 6100 or something equally retarded. All this does is confuse buyers and show the shareholders "look how many products/segments we serve!" We're all smart enough to know not to buy anything below 7xx0 or whatnot, but the average consumer isn't.
|
# ¿ Jan 6, 2012 23:48 |
|
Agreed posted:It's worth it if you can afford, are willing to, and intend to continually upgrade. If it's just part of your cycle, then you can buy in each generation at a nice sweet spot (this generation, two 6950s or two 570s; next generation, who knows) and get dramatically superior performance than a top end card for a same or marginally higher outlay. If you go top of the line every year, you're spending a lot of money trying to push the highest numbers. I'm glad I don't have your illness when it comes to shiny things (I game at 2560x1600). GTX 460 still going strong
|
# ¿ Jan 9, 2012 21:57 |
|
Factory Factory posted:CES is in progress, and no sign of Kepler so far. I think we were speculating that that would mean Kepler is not gonna trounce Southern Islands or anything. SemiAccurate had a quiet article that mentioned Kepler slipping further to 2013, or maybe I was drunk and confused "Kepler slipping to 2012" with the year still being 2011, and beginning to sob helplessly. Also, I think Agreed should himself somehow with regards to the 7970, for our entertainment
|
# ¿ Jan 9, 2012 23:45 |
|
HalloKitty posted:Ah, christ, for a minute I poo poo myself. I saw 3xxxK, and thought.. gently caress, Ivy Bridge sucks, how? Same here, hah, took me a second to remember which thread I was in.
|
# ¿ Jan 11, 2012 16:37 |
|
I met an actual Bulldozer fanboy yesterday. He had a nice watercooling setup going (red flag #1), so I asked what he was running; it was a 6-core Phenom. I mentioned offhandedly that Bulldozer was somewhat of a failure compared to the 2500/2600, and he got really upset and was like "WHAT NO BRO, ITS AWESOME, I NEED THOSE CORES, IVE GOT LIKE 50 WINDOWS OPEN AT ALL TIMES!" At that point I wasn't going to try to discuss further, but there are AMD customers out there somewhere!
|
# ¿ Jan 15, 2012 22:46 |
|
pixaal posted:I got water cooling once. I no longer have it (it sprung a leak; the rug still has a green stain from the fluid). It looked really cool. It was mostly because I was bringing my computer to the dorm where the entire building was programming/IT majors, so people actually saw it and thought it looked cool. Oh god, I spent so much money on making the outside look cool, but I guess people spent even more making their cars look cool in high school and I avoided that poo poo. Yeah, I used to water-cool back in college as well, but once heat-pipe coolers hit the stage, I went back to air. My Ultra 120 has been going strong, and I'm confident that adapters will keep coming out for upcoming CPU sockets.
|
# ¿ Jan 15, 2012 23:26 |
|
Too many posts with nvidia and AMD in this thread, not enough 3dfx and Voodoo
|
# ¿ Jan 20, 2012 18:26 |
|
Agreed posted:Excuse me, I'd like to call your attention to exhibit A Touché.
|
# ¿ Jan 20, 2012 18:58 |
|
grumperfish posted:Kyro was a PowerVR part. For some reason unknown to me, before I joined my company, the Volari Z9 was chosen as the GPU for the platform (a single-board PC). Granted, they run mostly headless, so maybe it was the cheapest choice or something that could deliver basic VGA output.
|
# ¿ Jan 21, 2012 17:17 |
|
Alereon posted:To be fair that article is only for gaming, which is extremely limited in its ability to consistently use more than two cores and gives a significant advantage to processors with Turbo. I would consider an A6-3670K, potentially even overclocked. The big advantage is with the onboard graphics, if he doesn't care at all about graphics and definitely wouldn't want to overclock at all, then an older platform with an Athlon II X4 may be better. That said, I would strongly recommend he consider just springing for the i5 2400+ if he cares about CPU performance, especially with Turbo it's a really fast platform. Hey, you have a star now! FWIW, I think the Intel platforms make better server boards/platforms for the home anyways. Stable, clean drivers across all platforms (Windows, Linux, and Solaris!). I hadn't realized that i3 performance had caught up to this point, though; I may have to start considering that for any super-budget builds.
|
# ¿ Jan 31, 2012 18:44 |
|
Huh. *Looks at his GTX 460* Guess you have to last another year*, buddy! * - at 2560x1600
|
# ¿ Feb 8, 2012 19:55 |