|
FaustianQ posted:It's strange to hear people define good as "not poo poo". AMD not making GBS threads their pants while Intel fucks up with Netburst doesn't make AMD good. If Intel never went Netburst and just stayed the course with P-M, AMD would never even have a golden age people could pine and weep for; instead AMD might be competitive up until 2005 instead of 2009. I'd also argue the K6 era was a decent time for AMD, the K6-2 especially being a popular choice. Cheaper than a Pentium II, and often competing very well in benchmarks gamers gave a poo poo about. They kept Socket 7 going for so drat long, so they built up a reputation for not requiring you to replace motherboards, which kept up momentum for some time. HalloKitty fucked around with this message at 21:29 on Apr 19, 2015 |
# ? Apr 19, 2015 21:24 |
|
The SH/SC betting pool on who/when buys AMD.
|
# ? Apr 19, 2015 21:29 |
|
Professor Science posted:either that or Power stuff starts coming out. I think it's much more likely that the EU funds ARM enough to be a thing in servers rather than IBM somehow making money on Power to continue meaningful development, though. Is Power dying too?
|
# ? Apr 19, 2015 22:27 |
|
El Scotch posted:The SH/SC betting pool on who/when buys AMD. Hewlett-Packard
|
# ? Apr 19, 2015 22:47 |
|
HalloKitty posted:I'd also argue the K6 era was a decent time for AMD, the K6-2 especially being a popular choice. Cheaper than a Pentium II, and often competing very well in benchmarks gamers gave a poo poo about. They kept Socket 7 going for so drat long, so they built up a reputation for not requiring you to replace motherboards, which kept up momentum for some time. The first Athlons also beat the P3 Katmai (though Coppermine beat the Athlon), and Athlon Thunderbirds were competitive with high-end P3s. It isn't just the Netburst era. AMD was competitive in the two generations before it.
|
# ? Apr 19, 2015 22:51 |
|
ohgodwhat posted:Is Power dying too?
|
# ? Apr 19, 2015 23:26 |
|
El Scotch posted:The SH/SC betting pool on who/when buys AMD. IBM, 2003
|
# ? Apr 19, 2015 23:35 |
|
ohgodwhat posted:Is Power dying too?
|
# ? Apr 20, 2015 02:15 |
|
A lot of networking hardware runs on custom Power chips FWIW. Cisco uses a LOT of Power throughout their lines, though the recent trend is favoring x86 with a customized Linux system. What about MIPS? Cavium seems to have really solid network/IP-geared chips that are now (have been?) MIPS arch.
|
# ? Apr 20, 2015 02:24 |
|
adorai posted:I doubt it. There is a lot of serious banking software running exclusively on Power. Of the big four core banking vendors (FIS, Fiserv, JHA, D+H), I know that 3 of them definitely run on Power, and the fourth might but I'm not sure. That's a big market, and while there might not be a significant amount of innovation, there is at least a lot of incentive to keep the status quo. It's not a business that you can easily break into, so the biggest shot at dethroning Power there would be to convince one of the big four to port their software. Given my experience in the industry, good loving luck because it's filled entirely with assholes who want to keep doing poo poo the way it was done in the '90s. I used to work in the same building as FIS; those people looked miserable every single day, even during the company fun day with amusement "rides" in the parking lot.
|
# ? Apr 20, 2015 07:58 |
|
Nystral posted:I used to work in the same building as FIS; those people looked miserable every single day, even during the company fun day with amusement "rides" in the parking lot. That's what happens when you work for a machine bent on writing the least innovative code this world has ever seen. "We pay by line of code," says the FIS supervisor, long bereft of anything resembling a soul, harking back to a management theory three decades past. It's almost out of a Dickens novel.
|
# ? Apr 20, 2015 12:44 |
|
AMD x86 16-core ZEN APU to fight Core i7 HAVE YOU loving LEARNED NOTHING, AMD?!
|
# ? Apr 22, 2015 22:06 |
|
SwissArmyDruid posted:AMD x86 16-core ZEN APU to fight Core i7 Is this article even giving us any new information?
|
# ? Apr 22, 2015 22:11 |
|
Angry Fish posted:Is this article even giving us any new information? Yeah, it tells us that the 16-core chip we thought was headed for servers... isn't. Intel's X99 flagship is only, what, 8 cores, with HT? I really don't understand what AMD is trying to achieve by subdividing things even more. I mean, who the hell needs 8 cores in a desktop environment NOW, let alone 16? Also, confirmed 14nm FinFET. SwissArmyDruid fucked around with this message at 22:30 on Apr 22, 2015 |
# ? Apr 22, 2015 22:27 |
|
Angry Fish posted:Is this article even giving us any new information? Maybe I am reading the article wrong, but it looks like Fudzilla is drawing the conclusion that Zen doesn't use SMT but is just another CMT Bulldozer-generation chip. So RIP AMD.
|
# ? Apr 22, 2015 22:41 |
|
SwissArmyDruid posted:Yeah, it tells us that the 16-core chip we thought was headed for servers... isn't. Intel's X99 flagship is only, what, 8 cores, with HT? I really don't understand what AMD is trying to achieve by subdividing things even more. I mean, who the hell needs 8 cores in a desktop environment NOW, let alone 16? High-density, memory-optimized virt environments trying to minimize CPU steal time.
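For anyone who hasn't bumped into "steal time" before: it's the time a guest's virtual CPU was ready to run but the hypervisor had handed the physical core to somebody else, which is why packing lots of moderately fast cores into one box appeals to virt people. On Linux it's the eighth counter on the cpu line of /proc/stat; here's a rough, purely illustrative C++ sketch of reading it (not taken from any real monitoring tool):
code:
// Rough sketch: read the aggregate "steal" counter from /proc/stat on Linux.
// Fields on the "cpu" line: user nice system idle iowait irq softirq steal ...
// The counters are cumulative ticks since boot; sample twice and take the
// difference if you want a rate.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main() {
    std::ifstream stat("/proc/stat");
    std::string line;
    std::getline(stat, line);              // first line is the aggregate "cpu" row

    std::istringstream fields(line);
    std::string label;
    fields >> label;                       // skip the "cpu" label

    std::vector<unsigned long long> v;
    unsigned long long x;
    while (fields >> x) v.push_back(x);

    if (v.size() >= 8)
        std::cout << "steal ticks since boot: " << v[7] << "\n";
    else
        std::cout << "no steal field reported (old kernel?)\n";
    return 0;
}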
|
# ? Apr 22, 2015 22:46 |
|
SwissArmyDruid posted:AMD x86 16-core ZEN APU to fight Core i7 I think they have. It's not using CMT, for one thing. It's a server/HPC CPU, which explains the abundance of cores. It's probably two 8-core chips in one processor. edit: Intel is selling 15C/30T processors; IBM is selling 12C/96T; and SPARC parts go wider still. It's not a bizarre move by AMD. It's just targeting a different market than what we usually talk about here. GrizzlyCow fucked around with this message at 23:01 on Apr 22, 2015 |
# ? Apr 22, 2015 22:47 |
|
GrizzlyCow posted:I think they have. I feel you and I have different interpretations of this article. My reading from this article is that the 16 core chip that we were talking about a few pages back that we thought was a server part is, in fact, a flagship desktop part, and that a separate 32 core server chip will be the flagship for servers. Do you concur?
|
# ? Apr 22, 2015 23:17 |
|
Nope. The writer of the article doesn't know whether the 16-core ZEN processor will be a desktop processor, nor does he say it will. I think he may have accidentally implied otherwise. Poor wording on his part. I could definitely see an 8-core ZEN SKU for the desktop, but 16 cores with all those features and whatnot would probably be prohibitively expensive even when compared to Intel's current i7 Extreme line.
|
# ? Apr 22, 2015 23:52 |
|
Is there a serious gain between 14nm and 10nm? Can AMD legitimately compete with Cannonlake, maybe by just pushing clockspeeds higher and dealing with a slightly higher TDP?
|
# ? Apr 23, 2015 00:15 |
|
DX12 seems to like more cores; maybe having 8 (or more) cores will be useful for games in the future? (You know, if developers actually program games to use them properly)
|
# ? Apr 23, 2015 00:20 |
|
El Scotch posted:DX12 seems to like more cores; maybe having 8 (or more) cores will be useful for games in the future? Correct me if I'm wrong, but isn't the whole point of DX12 that it automatically multithreads draw calls? Or do those have to be manually programmed still? Because if so, AMD will continue to be screwed.
|
# ? Apr 23, 2015 00:23 |
|
El Scotch posted:DX12 seems to like more cores; maybe having 8 (or more) cores will be useful for games in the future? The early tests I've seen of DX12 have it scaling nicely up to 6 cores. I don't know if work down the line will allow it to scale further, but going from effectively 1 core like we get now in most DX11 games to 6 seems like a huge enough improvement already.
|
# ? Apr 23, 2015 00:30 |
|
^^^ Yes, the early tests seem to point to 6 being a real sweet spot. cat doter posted:Correct me if I'm wrong, but isn't the whole point of DX12 that it automatically multithreads draw calls? Or do those have to be manually programmed still? Because if so, AMD will continue to be screwed. I've no idea; I'm just cynical.
|
# ? Apr 23, 2015 00:30 |
|
Beautiful Ninja posted:The early tests I've seen of DX12 have it scaling nicely up to 6 cores. I don't know if work down the line will allow it to scale further, but going from effectively 1 core like we get now in most DX11 games to 6 seems like a huge enough improvement already. Yeah, it's probably that once you hit that point, driver overhead isn't a significant drag on your CPU, so you're actually waiting on it for gameplay-related logic, AI, etc.
|
# ? Apr 23, 2015 00:34 |
|
cat doter posted:Correct me if I'm wrong, but isn't the whole point of DX12 that it automatically multithreads draw calls? Or do those have to be manually programmed still? Because if so, AMD will continue to be screwed. Parallelized shaders happen in GPU space. DX12 does partially remove the headache of having (and synchronizing) separate threads for AI, sound, etc., though.
|
# ? Apr 23, 2015 01:01 |
|
SwissArmyDruid posted:I feel you and I have different interpretations of this article. GrizzlyCow posted:Nope. The writer of the article doesn't know whether the 16-core ZEN processor will be a desktop processor, nor does he say it will. I think he may have accidentally implied otherwise. Poor wording on his part. My interpretation of the leaks/articles so far: There will be two versions of the first Zen chips. One will have 16 CPU cores and an integrated GPU. Another will have 32 CPU cores and no GPU cores. Both will be intended for servers/HPC environments. Possibly the 16-core + GPU part will be made available to the enthusiast market as a high-end Haswell-E type part. Additional Zen parts with fewer than 16 CPU cores will follow these initial releases.
|
# ? Apr 23, 2015 01:24 |
|
cat doter posted:Correct me if I'm wrong, but isn't the whole point of DX12 that it automatically multithreads draw calls? Or do those have to be manually programmed still? Because if so, AMD will continue to be screwed. No. Vulkan, Mantle and DX12 basically get rid of the massive, slow hack-filled black-boxes that make up current video drivers and give control over most of the minutiae to developers. Automatically threading anything would run counter to their design. It's up to the application developer to decide if they want to build command buffers across multiple threads or not. There is no magic number of threads for maximum performance since it will all depend on what the application is doing, CPU speed and the quirks of the particular GPU where ideal workloads can not only vary between manufacturers, but between different generations of chips from the same vendor. Bear in mind that those draw call submission tests are only valid for comparing the overhead of different APIs on the same hardware since they aren't putting the same kind of load on the GPU and hitting the other bottlenecks that a real application would. The_Franz fucked around with this message at 02:17 on Apr 23, 2015 |
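To make the "build command buffers across multiple threads" part concrete, here's a toy sketch of the shape of it. The types are made up for illustration, not actual Vulkan/Mantle/DX12 calls: each worker thread records into its own buffer with no locking, and the main thread submits the buffers in a fixed order afterwards.
code:
// Toy sketch of per-thread command recording (invented types, not a real API).
// Each thread fills only its own command buffer, so recording needs no locks;
// the main thread then submits the buffers in a deterministic order.
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

struct Command { std::string name; int object_id; };

struct CommandBuffer {
    std::vector<Command> commands;
    void record(std::string name, int id) { commands.push_back({std::move(name), id}); }
};

// Stand-in for the queue submit call: consumes the buffers in order.
void submit(const std::vector<CommandBuffer>& buffers) {
    for (const auto& cb : buffers)
        for (const auto& c : cb.commands)
            std::printf("%s(object %d)\n", c.name.c_str(), c.object_id);
}

int main() {
    const int num_threads = 4;
    const int objects_per_thread = 3;
    std::vector<CommandBuffer> buffers(num_threads);   // one buffer per thread
    std::vector<std::thread> workers;

    for (int t = 0; t < num_threads; ++t) {
        workers.emplace_back([t, &buffers, objects_per_thread] {
            // Recording touches only this thread's buffer: embarrassingly parallel.
            for (int i = 0; i < objects_per_thread; ++i) {
                int id = t * objects_per_thread + i;
                buffers[t].record("bind_pipeline", id);
                buffers[t].record("draw", id);
            }
        });
    }
    for (auto& w : workers) w.join();

    submit(buffers);   // submission order is fixed regardless of thread timing
    return 0;
}
The real APIs layer command pools, fences, and pipeline state on top of this, but the division of labor is the same: recording parallelizes cleanly, submission stays serialized.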
# ? Apr 23, 2015 02:09 |
|
The_Franz posted:No. Vulkan, Mantle and DX12 basically get rid of the massive, slow hack-filled black-boxes that make up current video drivers and give control over most of the minutiae to developers. Automatically threading anything would run counter to their design. It's up to the application developer to decide if they want to build command buffers across multiple threads or not. There is no magic number of threads for maximum performance since it will all depend on what the application is doing, CPU speed and the quirks of the particular GPU where ideal workloads can not only vary between manufacturers, but between different generations of chips from the same vendor. For context here, the way game engines and GPU drivers work is basically a massive game of second-guessing. The game writers write the engine with the style they think will work best with the drivers, and then the driver guys write their drivers to make the game engine actually work. It's a massive game of turning individual rendering settings on and off to produce stability and performance. This is a major reason why most games are buggy messes on release, why you need custom drivers for SLI/Crossfire for every game, etc. The goal of the frameworks is to get rid of that, and shunt the workload onto engine developers to handle writing and optimizing their own rendering. Paul MaudDib fucked around with this message at 05:48 on Apr 23, 2015 |
# ? Apr 23, 2015 05:45 |
|
Paul MaudDib posted:For context here, the way game engines and GPU drivers work is basically a massive game of second-guessing. The game writers write the engine with the style they think will work best with the drivers, and then the driver guys write their drivers to make the game engine actually work. This is why you need custom drivers for every game if you want to run SLI, for example. That I did know, so the point of Vulkan and DX12 is that game developers can essentially write their own driver profile? That and the fixing of the draw call bottleneck.
|
# ? Apr 23, 2015 05:48 |
|
FaustianQ posted:Is there a serious gain between 14nm and 10nm? Can AMD legitimately compete with Cannonlake, maybe by just pushing clockspeeds higher and dealing with a slightly higher TDP? Raising TDP, pushing the clocks up, and reducing ASP a bit would be the easiest path for AMD to increase performance and sales in the desktop and server space against Intel, but who knows if it'll be effective vs Cannonlake? Or, for that matter, Skylake? Supposedly Skylake is, finally, a big step up in performance over Broadwell/Haswell/Sandy Bridge. The only rumors I've seen on Zen are that it's supposed to have Broadwell-ish performance but with a weaker FPU... but I have no idea where those rumors are coming from, so I'm not taking them seriously. Still don't have a clockspeed for Zen either. The only official info is a TDP of 95W max on a 14nm process, plus 16 cores with an iGPU or 32 cores without. Both AMD and Intel have gotten pretty good about keeping details to themselves, unfortunately.
|
# ? Apr 23, 2015 08:51 |
|
Paul MaudDib posted:For context here, the way game engines and GPU drivers work is basically a massive game of second-guessing. The game writers write the engine with the style they think will work best with the drivers, and then the driver guys write their drivers to make the game engine actually work. It's a massive game of turning individual rendering settings on and off to produce stability and performance. This is a major reason why most games are buggy messes on release, why you need custom drivers for SLI/Crossfire for every game, etc. Where can I read more about what goes into engine and graphics development? It's a topic that's interesting to me, but I'm never able to find good reading material when I google around. You mostly just find forum posts from people making pony games in Unity.
|
# ? Apr 23, 2015 14:22 |
|
It's been posted before, but this is a good read on the subject. http://www.gamedev.net/topic/666419-what-are-your-opinions-on-dx12vulkanmantle/#entry5215019 quote:Many years ago, I briefly worked at NVIDIA on the DirectX driver team (internship). This is Vista era, when a lot of people were busy with the DX10 transition, the hardware transition, and the OS/driver model transition. My job was to get games that were broken on Vista, dismantle them from the driver level, and figure out why they were broken. While I am not at all an expert on driver matters (and actually sucked at my job, to be honest), I did learn a lot about what games look like from the perspective of a driver and kernel.
|
# ? Apr 23, 2015 14:39 |
|
That's an interesting read, but I should probably clarify. I'm an absolute newbie to the subject of graphics. While I have done a lot of programming, it's been exclusively either backend stuff or done in languages with their own GUI editors like VB6 or C#. I'm kind of wondering where I should start if I actually want to understand all the terminology thrown around in a post like that. I'm finding it really hard to track down anything that just lays out "here are shaders, this is what they do and how they work. Here's what a rendering pipeline is." You know, explain it to me like I'm 5 kind of stuff. I've wanted to play around with making some very basic games, but I find it difficult to work with a framework if I don't understand what's under the hood at least a little bit. As an illustration, I was pretty clueless at interacting with Unix systems at a user level until I started taking an operating systems programming course this year and started learning about how everything in Unix is built. Then it just clicked and I'm flying around the filesystem and using the various tools available like a pro*. *not actually a pro
|
# ? Apr 23, 2015 21:16 |
|
LeftistMuslimObama posted:That's an interesting read, but I should probably clarify. I'm an absolute newbie to the subject of graphics. While I have done a lot of programming, it's been exclusively either backend stuff or done in languages with their own GUI editors like VB6 or C#. GPUs are, abstractly, very parallelized in-order floating point co-processors. LeftistMuslimObama posted:I'm finding it really hard to track down anything that just lays out "here are shaders, this is what they do and how they work. Here's what a rendering pipeline is." You know, explain it to me like I'm 5 kind of stuff. I've wanted to play around with making some very basic games, but I find it difficult to work with a framework if I don't understand what's under the hood at least a little bit. As an illustration, I was pretty clueless at interacting with Unix systems at a user level until I started taking an operating systems programming course this year and started learning about how everything in Unix is built. Then it just clicked and I'm flying around the filesystem and using the various tools available like a pro* Shaders are a practical application of linear algebra, which is in turn a practical application of geometry. Remember planes from 7th grade? What happens when those planes intersect? How would you describe the positions? And how would a given triangle (described as the boundaries of a plane) look? What color is it? Now, all of this has to have some idea of depth, because your monitor is flat but it's representing a 3d scene. Look at a picture: if you assigned a number to the things that are further "away" in the 3d sense, even though everything looks equally flat in the 2d sense, that number would be depth. What if the camera could see through objects in the foreground? A model of something flying through the air in a game that disappears behind a wall isn't invisible; it's still there. The GPU looks at the depth and decides "that doesn't need to be rendered". This can also be described with shaders. There's no real "explain like I'm 5" explanation from where you're starting. Learn how processors work. Refresh your memory on discrete mathematics and linear algebra. Then pick up a book on shaders.
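To put the "decides that doesn't need to be rendered" step in concrete terms, here's a rough, purely illustrative C++ sketch of a depth buffer: every candidate pixel carries a depth, and a write only sticks if it's nearer than whatever is already stored for that pixel. Real GPUs do this in fixed-function hardware with a lot more going on, but the comparison is the core idea.
code:
// Toy depth-buffer sketch: per-pixel "keep the nearest fragment" test.
#include <cstdio>
#include <limits>
#include <vector>

struct Fragment { int x, y; float depth; unsigned color; };

int main() {
    const int W = 4, H = 4;
    std::vector<float> depth(W * H, std::numeric_limits<float>::infinity());
    std::vector<unsigned> color(W * H, 0x000000);   // start with a black screen

    // Two fragments land on the same pixel; the wall (depth 5) is in front of
    // the thing flying behind it (depth 9), so the wall wins.
    Fragment frags[] = {
        {1, 1, 9.0f, 0xff0000},   // distant object
        {1, 1, 5.0f, 0x00ff00},   // nearer wall, same pixel
        {2, 3, 2.0f, 0x0000ff},   // something else entirely
    };

    for (const Fragment& f : frags) {
        int i = f.y * W + f.x;
        if (f.depth < depth[i]) {       // the depth test: nearer fragment wins
            depth[i] = f.depth;
            color[i] = f.color;
        }                               // otherwise it simply isn't rendered
    }

    std::printf("pixel (1,1) color = %06x, depth = %.1f\n",
                color[1 * W + 1], depth[1 * W + 1]);
    return 0;
}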
|
# ? Apr 23, 2015 21:32 |
|
cat doter posted:That I did know, so the point of Vulkan and DX12 is that game developers can essentially write their own driver profile? That and the fixing of the draw call bottleneck. It's about the same outcome as if they could write their own driver profile, but that's not how they're getting there. Previously the goal was to hide the complexity of computer graphics from the game programmer and present as simplistic and accommodating an interface to the graphics system as possible. What Vulkan/Mantle/DX12 are doing is exposing a lower-level, more complex, more stringent API. The theory here is that you're handling an enormously complex task that needs to happen extremely efficiently, and trying to handle that through Babby's First Graphics API is a losing battle. At the end of the day you can only hide so much complexity, so the idea is to give everyone a fixed target and then let engine and GPU devs each handle their own half of the task. The GPU guys write the API and make their hardware do it quickly; the game guys write their game and tell the API how they want it to work. Think of this as being somewhat like Java/C# vs C: Java will shield you somewhat from the complexities of the task and maybe insulate you from your bad behavior, and it performs OK. Maybe even well if you take heroic measures. It's a lot easier to write fast code in C and it can potentially go much faster than even great Java, but the "heroic measures" here are in terms of the complexity of managing everything. There's no handholding; it dumps everything on you, and if you mess it up you crash and burn. There's no soft landing for your mistakes, just a big old black hole we call UNDEFINED BEHAVIOR. Not intended to be a detailed metaphor, don't tear into me too hard here. That's the theory at least. I'm sure in the real world there will still be tons of patching happening behind the scenes and stuff, because game devs are under huge pressure to ship before their holiday deadline and GPU devs don't want to be the brand whose cards don't work with an AAA title on release day. I'd agree that handling the complexity is best done by the commercial engine people, and anyone less than an AAA game studio probably doesn't want to wade into the intricacies of how to get Enviro Bear 2000's graphics to render properly on a Crossfire setup. Paul MaudDib fucked around with this message at 00:14 on Apr 24, 2015 |
# ? Apr 23, 2015 23:06 |
|
evol262 posted:A rendering pipeline is like any other pipeline in a general sense. You should, again, learn how processors work. And how the heap and stack work. You may also want to look into the specific language VMs you're familiar with (maybe .NET). To echo this, there really isn't an "explain like I'm 5" version. I'll do my best, but this is a drastic simplification. You really can't approach this without at least a passing knowledge of linear algebra, matrix math, and coordinate systems. Basically you start with geometry. Take a 3d model and store it as a list of vertices, which form surfaces. You have some origin, take the points relative to that origin, and put them into a matrix. We can then multiply the model matrix by certain other matrices ("transformation matrices") to perform various operations on those points. For example, a single transformation matrix can encode a translation, a rotation, or a scale.
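As a rough, purely illustrative C++ sketch (hand-rolled types, not any particular math library), here's a 4x4 homogeneous translation matrix and a vertex being pushed through it:
code:
// Rough sketch: a 4x4 homogeneous translation matrix applied to a vertex.
// The extra fourth coordinate (w = 1) is what lets a plain matrix multiply
// move a point; 3x3 matrices can rotate and scale but cannot translate.
#include <array>
#include <cstdio>

using Mat4 = std::array<std::array<float, 4>, 4>;
using Vec4 = std::array<float, 4>;

Mat4 translation(float tx, float ty, float tz) {
    return {{{1, 0, 0, tx},
             {0, 1, 0, ty},
             {0, 0, 1, tz},
             {0, 0, 0, 1}}};
}

Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 out{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            out[row] += m[row][col] * v[col];
    return out;
}

int main() {
    Vec4 vertex = {1.0f, 2.0f, 3.0f, 1.0f};            // w = 1 marks a position
    Vec4 moved = mul(translation(10, 0, -5), vertex);  // hat -> top of the head, say
    std::printf("(%g, %g, %g)\n", moved[0], moved[1], moved[2]);
    return 0;
}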
So basically you start with a model, then you perform some operation to embed it inside another coordinate space. For instance, maybe you take a hat model and then translate it to the top of your character's head. Then you take the gun and rotate and translate it into his hand. Then you take this combined model and translate it into a larger coordinate space - e.g. maybe inside a building, which has other objects, and that's inside a larger world model. We have a list of models, their vertices, and the transformation matrix for each model that takes its vertices from model space to their final placement in world space. We then convert this list of vertices to surfaces (primitive polygons). Then you define a camera point and a viewport. Once you have that, you need to figure out which of these polygons are actually visible from the camera and which are outside the field of view or obstructed by other surfaces, distort the polygons according to perspective, etc. Then you convert this polygon view to a flat 2D image (rasterization). Then you apply lighting to the coloring/textures on each pixel, etc. The parts of the algorithm that are applied in parallel are the shaders. There are shaders that handle geometry work (translating the vertices to their final place, emitting primitives, etc.) and shaders that handle lighting work. There are also tessellation shaders in DX11, but I just did OpenGL so I can't go into much detail there. This is an enormous simplification because it is a really complex topic. As a starting point you might want to look at the classic OpenGL fixed-function pipeline and then work forward. The FF pipeline is deprecated and not included in the newer OpenGL standards anymore, but it'll make a lot more sense if you see the historical context and the decisions that were made to move past it. Maybe this article? There's some other really fun stuff in Games Programming 101 too. For example, you may wonder why I used a 4D coordinate space. The extra coordinate on positions is homogeneous coordinates: with [x,y,z,1] for a point, translation and perspective become plain matrix multiplies, which they can't be with only three components. Orientations have their own "why four numbers?" story: store a rotation as three angles and you can hit gimbal lock, where two of the rotation axes line up and you lose a degree of freedom, which shits all the downstream math up bigtime. The standard fix is to store the orientation as a quaternion - four numbers - instead (rough sketch below). The invention of quaternions led to one of the greatest moments of nerd-ery in history. quote:Quaternion algebra was introduced by Hamilton in 1843.[6] Important precursors to this work included Euler's four-square identity (1748) and Olinde Rodrigues' parameterization of general rotations by four parameters (1840), but neither of these writers treated the four-parameter rotations as an algebra.[7][8] Carl Friedrich Gauss had also discovered quaternions in 1819, but this work was not published until 1900.[9][10] Disregard the constable. Paul MaudDib fucked around with this message at 01:26 on Apr 24, 2015 |
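Since the quaternion bit can look like magic, here's a rough, purely illustrative C++ sketch (hand-rolled, no library) of building a rotation quaternion from an axis and angle and rotating a vector with it via q * v * conj(q):
code:
// Rough sketch: rotate a vector with a unit quaternion (q * v * conj(q)).
// Four numbers encode an axis and an angle with no gimbal lock, which is why
// orientation is usually stored this way instead of as three Euler angles.
#include <cmath>
#include <cstdio>

struct Quat { float w, x, y, z; };

Quat from_axis_angle(float ax, float ay, float az, float angle_rad) {
    float s = std::sin(angle_rad / 2);                    // axis must be unit length
    return {std::cos(angle_rad / 2), ax * s, ay * s, az * s};
}

Quat mul(const Quat& a, const Quat& b) {                  // Hamilton product
    return {a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
            a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
            a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
            a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w};
}

Quat conj(const Quat& q) { return {q.w, -q.x, -q.y, -q.z}; }

int main() {
    // 90 degrees about the Z axis should take (1,0,0) to roughly (0,1,0).
    Quat q = from_axis_angle(0, 0, 1, 3.14159265f / 2);
    Quat v = {0, 1, 0, 0};                  // the vector, as a "pure" quaternion
    Quat r = mul(mul(q, v), conj(q));
    std::printf("(%.3f, %.3f, %.3f)\n", r.x, r.y, r.z);
    return 0;
}
It should print roughly (0.000, 1.000, 0.000): a quarter turn about Z takes the X axis onto the Y axis, with no Euler angles anywhere.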
# ? Apr 23, 2015 23:29 |
|
Paul MaudDib posted:That's the theory at least. I'm sure in the real world there will still be tons of patching happening behind the scenes and stuff, because game devs are under huge pressure to ship before their holiday deadline and GPU devs don't want to be the brand whose cards don't work with an AAA title on release day. I'd agree that handling the complexity is best done by the commercial engine people, and anyone less than an AAA game studio probably doesn't want to wade into the intricacies of how to get Enviro Bear 2000's graphics to render properly on a Crossfire setup. I doubt that Nvidia will stop injecting their optimized shaders over the ones that ship with games, but I'm not sure how you patch problems with command buffers without tracking state and doing a validation pass over everything submitted, which would cause a noticeable performance hit and run contrary to the intention of the new APIs, which basically have the driver doing as little as possible. Plus, every driver on every platform would have to implement these hacks, and I don't see a company like LunarG building ugly hacks into the Intel Vulkan driver for every janky game out there. I don't know about DX12, but Vulkan will have a verbose validation layer to be used during development that should catch any misuse of the API, and Valve and LunarG are working on a debugger that clearly shows you where your slow calls are by highlighting them in bright red. At that point it's the developer's problem if things don't work, since they had ample warnings. Most of these AAA developers manage to build console titles without things going sideways, so a more verbose desktop API shouldn't be an issue. The_Franz fucked around with this message at 14:35 on Apr 24, 2015 |
# ? Apr 24, 2015 00:53 |
|
Who wants some Zen rumors and leaked slides? http://wccftech.com/amd-zen-cpu-core-block/ Here we see the block layout of a Zen core. Again we have confirmation that sharing of floating point units will be dropped, which should mean higher performance per core. Also note the two 256-bit FMAC units -- possibly they are able to fuse together to process 512-bit AVX floating point instructions? http://wccftech.com/amd-zen-x86-quad-core-unit-block-diagram/ And here is a block diagram of a "Zen based Quad Core Unit". According to the rumors this is the "basic building block of Zen". Combine two of these and you have an 8-core chip. Rastor fucked around with this message at 21:37 on Apr 29, 2015 |
# ? Apr 28, 2015 21:03 |
|
No? Nobody cares about Zen rumors? The newest one is the 2015-2016 roadmap: http://wccftech.com/amd-2016-14nm-cpu-apu-zen-k12-product-roadmap-leaked/ If I'm reading that correctly the 8-core "Summit Ridge" chip for desktops will not have an integrated GPU.
|
# ? Apr 29, 2015 21:35 |