|
The_Franz posted:Is this something new? MSDN still says: I need sleep; for some reason I glossed over the post and only thought about regular user-mode threads (which only go up to priority 15, as seen in that link).
|
# ? Apr 3, 2015 18:23 |
|
Professor Science posted:uh that is not how thread priority works, these aren't realtime OSes, a high priority thread does not prevent a low priority thread from running forever The thread scheduler will prefer threads of high priority that are ready to run over those of low priority. It's not that the lower-priority threads will never run, but when they do, their execution won't interleave seamlessly with the higher-priority threads. Microsoft's documentation even warns you about this: MSDN posted:However, if you have a thread waiting for another thread with a lower priority to complete some task, be sure to block the execution of the waiting high-priority thread. To do this, use a wait function, critical section, or the Sleep function, SleepEx, or SwitchToThread function. https://msdn.microsoft.com/en-us/library/windows/desktop/ms685100(v=vs.85).aspx
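To make the MSDN advice concrete, here's a minimal C++ sketch of the two wait strategies, assuming a Windows build; the g_workDone flag and the function names are my own invention for illustration, not anything from the docs:

```cpp
// Minimal sketch, assuming Windows; names are hypothetical.
#include <windows.h>
#include <atomic>

std::atomic<bool> g_workDone{false};  // set by a lower-priority worker

// On a machine with few cores, a THREAD_PRIORITY_HIGHEST thread spinning
// like this can starve the lower-priority worker that would set the flag.
void BusyWaitForWorker()
{
    while (!g_workDone.load(std::memory_order_acquire))
    {
        // burn the whole timeslice; the scheduler keeps picking us again
    }
}

// What the MSDN passage suggests instead: yield or block so the
// lower-priority producer actually gets scheduled.
void PoliteWaitForWorker()
{
    while (!g_workDone.load(std::memory_order_acquire))
    {
        if (!SwitchToThread())  // no other ready thread on this processor?
            Sleep(1);           // then sleep so lower-priority threads run
    }
}
```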
|
# ? Apr 3, 2015 19:57 |
|
LeftistMuslimObama posted:Not to mention that a basic livelock like he's describing would be easily fixed by a competent developer through any number of locking and concurrency primitives. Except most of the computation is handled by lightweight multithreaded job dispatchers that explicitly avoid locking and concurrency primitives because of their high runtime cost. A game cannot achieve 200 jobs per 16ms frame if you are using semaphores or condition variables to signal that new work is ready. Well it might be able to do 200 jobs using locking but those jobs won't have any time left to do actual work.
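For anyone curious what "avoiding locking and concurrency primitives" tends to look like, here's a rough C++ sketch of the usual trick: claiming work with a single atomic increment instead of taking a lock per job. This is my own illustration of the general pattern, not the actual dispatcher from Frostbite or any other shipping engine:

```cpp
// Sketch of an atomic-counter job batch; all names are made up.
#include <atomic>
#include <cstddef>

struct Job { void (*run)(void*); void* data; };

struct JobBatch
{
    Job* jobs;
    std::size_t count;
    std::atomic<std::size_t> next{0};      // next unclaimed job index
    std::atomic<std::size_t> finished{0};  // completed-job count

    // Called by every worker thread; returns once the batch is drained.
    // Grabbing a job costs one fetch_add, with no kernel transition.
    void Drain()
    {
        for (;;)
        {
            std::size_t i = next.fetch_add(1, std::memory_order_relaxed);
            if (i >= count)
                return;                     // nothing left to claim
            jobs[i].run(jobs[i].data);
            finished.fetch_add(1, std::memory_order_release);
        }
    }

    bool Done() const
    {
        return finished.load(std::memory_order_acquire) == count;
    }
};
```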
|
# ? Apr 3, 2015 19:59 |
|
So literal textbook priority inversion kills everything under 2 cores? I remember when going up to 2 cores killed everything because people thought they were clever enough to use rdtsc
|
# ? Apr 3, 2015 20:02 |
|
JawnV6 posted:So literal textbook priority inversion kills everything under 2 cores? I can't say for sure if priority inversion issues are why the min spec is a quad core machine, but in the past I have had to deal with priority inversion issues on machines with fewer cores. ehnus fucked around with this message at 20:10 on Apr 3, 2015 |
# ? Apr 3, 2015 20:08 |
|
Oh gosh I was confused because that was the exact example that you prefaced this entire discussion with. ehnus posted:For example if you have two threads of high priority busy-waiting for work to be finished by threads of lower priority the system can stop making forward progress as the operating system will not pre-empt the higher priority threads. On a four core system this situation wouldn't happen.
|
# ? Apr 3, 2015 20:16 |
|
JawnV6 posted:Oh gosh I was confused because that was the exact example that you prefaced this entire discussion with. I'm not sure what you're trying to say? Is this response trying to be sarcastic? (sorry, I am actually asking) Just to restate my thoughts, as they've been scattered over multiple posts: priority inversion and other scheduling issues turn up in development, especially on machines with fewer cores than what the developers actively work on, usually consoles or workstations with 8-16 cores. These can be quite difficult to shake out. Perhaps if too many issues like this pop up, the people who set the specs decide it isn't worth the cost to solve versus the number of sales it would bring in (guessing that the number of dual core, 64-bit, DX10+ machines isn't terribly high). I am less certain about that last part because I have no experience with the system-requirements side of the game development process, but I have plenty of experience working on concurrency for games/game engines.
|
# ? Apr 3, 2015 20:45 |
|
ehnus posted:A game cannot achieve 200 jobs per 16ms frame if you are using semaphores or condition variables to signal that new work is ready. Well it might be able to do 200 jobs using locking but those jobs won't have any time left to do actual work. Do you guys hit a lot of contention? Most OS provided locks these days are fairly lightweight and have less than 20ns of overhead when hitting an uncontended lock/unlock sequence.
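If anyone wants to sanity-check that number on their own hardware, a crude single-threaded loop gets you in the ballpark. Note this only measures the uncontended best case, and a serious benchmark would need to control for frequency scaling and optimizer effects:

```cpp
// Back-of-the-envelope timing of an uncontended lock/unlock pair.
#include <chrono>
#include <cstdio>
#include <mutex>

int main()
{
    std::mutex m;
    constexpr int kIters = 10'000'000;

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i)
    {
        m.lock();   // never contended: no other thread touches m
        m.unlock();
    }
    auto t1 = std::chrono::steady_clock::now();

    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0);
    std::printf("%.1f ns per lock/unlock pair\n",
                double(ns.count()) / kIters);
}
```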
|
# ? Apr 3, 2015 21:51 |
|
The_Franz posted:Do you guys hit a lot of contention? Most OS provided locks these days are fairly lightweight and have less than 20ns of overhead when hitting an uncontended lock/unlock sequence. There can be a lot of contention, for example if you have six threads all hitting the same job queues at once looking for work. There are other factors that limit the use of synchronization primitives, too. The job system used on DA:I and BF4 coordinates jobs between the different classes of processors on the consoles. On the PS3 the job system is fully controllable from both the main processor (PPU) and the vector processors (SPUs), with limited functionality on the GPU. This is quite handy, as the main processor can kick off a bunch of animation or physics jobs on the vector processors and then pick up the results when they are done. But because reliable semaphores and the like are not available from the vector processors, and not available at all from the GPU, the system had to be built around shared memory and atomics. Frostbite's rendering architect talks a bit about its threading architecture here: http://www.slideshare.net/repii/parallel-futures-of-a-game-engine-v20. Though that presentation is almost five years old now, the core concepts remain relevant.
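In portable C++ terms, that shared-memory coordination style looks roughly like the following: a counter the workers bump and the main processor polls, instead of a semaphore to block on. This is only a sketch of the general approach with made-up names; real PPU/SPU code would go through platform intrinsics and DMA:

```cpp
// Sketch of shared-memory/atomics signalling between processor classes.
#include <atomic>
#include <cstdint>

// A counter living in memory visible to every processor class.
struct SharedMailbox
{
    std::atomic<std::uint32_t> jobsCompleted{0};
};

// Worker side (on the PS3 this would be an SPU writing to main memory):
void SignalJobDone(SharedMailbox& box)
{
    box.jobsCompleted.fetch_add(1, std::memory_order_release);
}

// Main-processor side: with no reliable semaphore available, poll the
// counter, ideally doing other useful work between checks.
void WaitForJobs(const SharedMailbox& box, std::uint32_t expected)
{
    while (box.jobsCompleted.load(std::memory_order_acquire) < expected)
    {
        // a real engine would have the PPU steal and run jobs here
        // instead of spinning idle
    }
}
```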
|
# ? Apr 3, 2015 22:23 |
|
Hace posted:Apparently some games won't work even with an i3! Um, source?
|
# ? Apr 4, 2015 00:43 |
|
ehnus posted:(guessing that the number of dual core, 64-bit, DX10+ machines isn't terribly high). The overwhelming majority of laptops are dual core, right? And most people would expect that a laptop with an i7 and a GTX 860m or something should be able to play games at medium-low settings, so they had better support dual cores.
|
# ? Apr 4, 2015 05:45 |
|
Twerk from Home posted:The overwhelming majority of laptops are dual core, right? Dual core, but generally hyperthreaded, so they show up as four logical processors -- not dual core for the purposes of a game that demands "4 cores".
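That distinction is easy to trip over in code, because the convenient portable query reports logical processors, not physical cores. A quick C++ illustration (the >= 4 gate is hypothetical, just to show how a 2C/4T chip slips past it):

```cpp
// Why a hyperthreaded dual core can pass a naive "needs 4 cores" check.
#include <cstdio>
#include <thread>

int main()
{
    // Counts logical processors: 4 on a 2C/4T laptop chip.
    unsigned n = std::thread::hardware_concurrency();

    std::printf("logical processors: %u\n", n);

    // A launcher gating on this would happily start the game
    // on a dual core (and then stutter).
    if (n >= 4)
        std::printf("naive core check: passed\n");
}
```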
|
# ? Apr 4, 2015 05:52 |
|
BurritoJustice posted:Um, source? Was gonna say dragon age, but the source I got it from was dead wrong
|
# ? Apr 4, 2015 17:23 |
|
Are dragon age and da:I well optimised, or bad ports generally?
|
# ? Apr 4, 2015 21:26 |
|
wipeout posted:Are dragon age and da:I well optimised, or bad ports generally? Glitchy, hitchy, and make way too many concessions towards console limitations in controls, UI, and map layout.
|
# ? Apr 4, 2015 21:56 |
|
wipeout posted:Are dragon age and da:I well optimised, or bad ports generally? The first dragon age is one of the best games ever made, and it is excellent on PC. Dragon age inquisition is still a decent game, but suffers from being designed with almost no consideration given to PC gamer habits, and also some EA executive deciding that since skyrim was so successful, all RPGs should be more like skyrim.
|
# ? Apr 5, 2015 03:05 |
|
The Lord Bude posted:The first dragon age is one of the best games ever made, and it is excellent on PC. Dragon age inquisition is still a decent game, but suffers from being designed with almost no consideration given to PC gamer habits, and also some EA executive deciding that since skyrim was so successful, all RPGs should be more like skyrim. While I mostly agree with your statements, I think you're casting DA:O in too kind a light. It still has a memory leak that remains unpatched to this day. If I didn't have 16 GB of RAM (and I only have it because I managed to snag it when it was still cheap, before the SEA floods wrecked the DRAM and hard drive factories down there and the market shifted away from DRAM to NAND), each playthrough would have taken me much longer to finish. Even then, things get kind of stupid when you're staring at a loading screen for minutes at a time when it didn't take nearly that long when you first started the game.
|
# ? Apr 5, 2015 05:16 |
|
SwissArmyDruid posted:While I mostly agree with your statements, I think you're casting DA:O in too kind a light. It still has a memory leak that remains unpatched to this day. If I didn't have 16 GB of RAM (and I only have it because I managed to snag it when it was still cheap, before the SEA floods wrecked the DRAM and hard drive factories down there and the market shifted away from DRAM to NAND), each playthrough would have taken me much longer to finish. Even then, things get kind of stupid when you're staring at a loading screen for minutes at a time when it didn't take nearly that long when you first started the game. I've honestly never experienced this problem. I also have 16GB of RAM though. I have a number of issues with DA:I but it was better than dragon age 2 (which I hated). I'd give DA:I maybe 7 out of 10, where Dragon Age Origins is a 10 out of 10.
|
# ? Apr 5, 2015 05:58 |
|
The Lord Bude posted:Dragon Age Origins is a 10 out of 10.
|
# ? Apr 5, 2015 16:09 |
|
10 out of 10 taints
|
# ? Apr 5, 2015 17:35 |
|
Oh crap, I meant to ask whether FC4 and DA:I were bad ports. I finished origins and liked most of it when I played it. I'd like it more but the sequel corrupted my memories of it. I guess it's kind of like being an adult and discovering a childhood TV star was a pedophile.
|
# ? Apr 5, 2015 20:53 |
|
So what is this that I'm hearing about the new mobile platform actually being good? I keep seeing AMD APUs on sub-$600 plastic laptops with 15" or 17" screens. Is this what we're talking about? I thought the APUs were still sub-Core2Duo performance in single threads?
|
# ? Apr 7, 2015 12:46 |
|
We're waiting on a confluence of technologies to come together and create a product that is greater than the sum of its parts. Zen and HBM1/2 combined with GCN in one package.
|
# ? Apr 7, 2015 21:17 |
|
I wish Google and AMD would mingle and make Chromebooks together.
|
# ? Apr 8, 2015 02:34 |
|
Angry Fish posted:So what is this that I'm hearing about the new mobile platform actually being good? I think the thread title has been the same for a while; I for one don't remember what mobile platform is being referred to there. As you've observed, AMD is currently competing on a value basis, not on a performance/enthusiast basis. teagone posted:I wish Google and AMD would mingle and make Chromebooks together. That might be interesting for sure, although these days Chromebooks are driving down to prices even AMD doesn't want to sink to.
|
# ? Apr 8, 2015 03:14 |
|
Rastor posted:That might be interesting for sure, although these days Chromebooks are driving down to prices even AMD doesn't want to sink to. I'd consider a $200-$250 AMD APU based Chromebook. If it manages to notch 10-12 hours of battery life, I'd get one in a heartbeat.
|
# ? Apr 8, 2015 03:20 |
|
Rastor posted:I think the thread title has been the same for a while; I for one don't remember what mobile platform is being referred to there. As you've observed, AMD is currently competing on a value basis, not on a performance/enthusiast basis. Really? If you ask the reddits they all claim that AMD is the value-for-performance king of the hill when it comes to builds below $600, and even then you still have one or two builds using an 8350 bottlenecking two 290Xs in crossfire with 32GBs of 1866. People are still buying the components. The first post definitely needs an update.
|
# ? Apr 8, 2015 13:43 |
|
SwissArmyDruid posted:We're waiting on a confluence of technologies to come together and create a product that is greater than the sum of its parts. Zen and HBM1/2 combined with GCN in one package. Did some reading. Holy poo poo. But how does this apply to consumer products? A 16 core chip with that much cache and memory on board would be priced in the thousands per unit, right?
|
# ? Apr 8, 2015 13:46 |
|
Angry Fish posted:Did some reading. Holy poo poo. But how does this apply to consumer products? A 16 core chip with that much cache and memory on board would be priced in the thousands per unit, right? Well, some naive math: high-end AMD CPUs go for maybe $300-400 when they leave the factory, and the rumoured 390X chip+8GB HBM has an MSRP of $700. 400+700=1100; add another 10% for "packaging" this into a single chip and you're at roughly $1200. Then again, APUs are generally aimed at lower-performance buyers, so unless this is a huge gamechanger there too, I expect APUs to come in around the $500 range, motherboard+case included, and be a lot lower performance than a 390X. If they pull this off correctly, though, $500 could net you a shitload of computing/game power, p/p wise. As you said, having gigabytes of what is effectively cache is a gigantic boon to desktop computing, IMO. "next gen" consoles released 3 years too early, IMO
|
# ? Apr 8, 2015 14:30 |
|
Angry Fish posted:Really? If you ask the reddits they all claim that AMD is the value-for-performance king of the hill when it comes to builds below $600, and even then you still have one or two builds using an 8350 bottlenecking two 290Xs in crossfire with 32GBs of 1866. People are still buying the components. An 8350 with 32GBs of 1866 is a bad build. Any AMD performance/enthusiast build is a bad build. Angry Fish posted:Did some reading. Holy poo poo. But how does this apply to consumer products? A 16 core chip with that much cache and memory on board would be priced in the thousands per unit, right? 16-core Zen is aimed at servers. We don't know yet what configurations AMD might offer for the home user. quote:The first post definitely needs an update. Our usual response is "so make a new thread and we'll move over there."
|
# ? Apr 8, 2015 14:35 |
|
teagone posted:I'd consider a $200-$250 AMD APU based Chromebook. If it manages to notch 10-12 hours of battery life, I'd get one in a heartbeat. The whole problem is battery life. Intel has just pulled massively far ahead on battery life and it's getting worse. It's not the OS that gives chromebooks really good battery life, it's the CPUs, whether they are Intel or ARM.
|
# ? Apr 8, 2015 14:43 |
|
Twerk from Home posted:The whole problem is battery life. Intel has just pulled massively far ahead on battery life and it's getting worse. It's not the OS that gives chromebooks really good battery life, it's the CPUs, whether they are Intel or ARM. Intel has a huge process advantage. With that said, AMD claims that battery life was a major focus for Carrizo. We haven't heard about any Carrizo design wins yet, but it can be assumed some will be announced soon (HP 255 G4 seems almost-official, for example), so we'll see what the results are when those come out.
|
# ? Apr 8, 2015 15:35 |
|
Rastor posted:An 8350 with 32GBs of 1866 is a bad build. Any AMD performance/enthusiast build is a bad build. I agree. Those Piledrivers have the same performance as Intel chips from 2012, but people think it's like a Miata or something -- no matter what you tell them, AMD is the best because 8 (kinda) cores are better than 4. Rastor posted:16-core Zen is aimed at servers. We don't know yet what configurations AMD might offer for the home user. How big is the die going to be? This thing sounds like it'll be a 4"x4" platter per chip. 6200-pin LGA? Rastor posted:Our usual response is "so make a new thread and we'll move over there."
|
# ? Apr 8, 2015 16:31 |
|
Fanboys who won't change their stance no matter what you tell them? In my Reddit?
|
# ? Apr 8, 2015 16:36 |
|
Angry Fish posted:Did some reading. Holy poo poo. But how does this apply to consumer products? A 16 core chip with that much cache and memory on board would be priced in the thousands per unit, right? I seem to have been misunderstood. Briefly, then:
* Bulldozer and its derivatives suck. Zen is K11b. We hope it's good.
* GCN cores on APUs are bandwidth-starved as hell. There are only about a third of the cores you'd find on a lovely R9 260X, and they're way, way, way downclocked, because any faster and they'd be bottlenecked by the DDR3 anyway. Future GCN parts are alleged to focus on power efficiency, perhaps hinted at by their codename: Arctic Islands. "Greenland" is alleged to be the top-of-the-line part, and purports to pull a Maxwell on GCN.
* HBM is the new hotness. It's got the bandwidth to feed those hungry GCN cores, a wide-rear end memory bus, the physical size to go onto the same package as a CPU/GPU, and there's an AMD patent letting them put HBM in the line between the L2 cache and system memory. If it's anything like how Intel does it, this will allow HBM to be used as a combination of cache and system memory, EXCEPT that this would be L3 cache, not L4. Then you get the more conventional memory controller tailed off of that, allowing for mixed-memory packages between HBM and DDR4. (At least, it's a good bet that Zen will use DDR4.)
Now, we know that AMD has begun to adopt the same binning strategy as Intel. Intel is now only intentionally making parts for the server market and the mobile market (markets that are power- and thermally constrained). Any chips that fail binning for these two markets get pushed to the desktop channel, where they can get the extra power and cooling that they need to run stably.
--SPECULATION BEGINS HERE--
So, take 4/8 Zen cores, pair them with more GCN cores running at full speed than Carrizo and previous could ever hope to utilize, and drop 2 GB of HBM onto the package. Congratulations, there is now a mainstream notebook part that's capable of competing with Intel's mobile parts, annihilating them in graphics performance, while offering equivalent or lower costs. (There's no goddamn way in hell that putting eDRAM onto processors as Intel has done with Iris Pro parts is cheaper than HBM, or anywhere near the capacity.)
Cut it down to 2 cores, pare back the GCN cores to match, but increase the amount of HBM to 8 GB? Here's a feature-complete SoC for your ultrabooks. You probably won't even need to bother with LPDDR.
Take 16 Zen cores, give them 2048 GCN cores, and load on 32 GB of HBM? You now have a server part that's capable of highly-threaded applications, as well as OpenCL computations. Server die has some bad cores? Disable the bad ones, put it onto a package with reduced HBM, and you've got a top-of-the-line enthusiast part. That sort of thing. Mainstream mobile die isn't stable without extra voltage, or runs too hot? Bump it up to desktop, there's your mainstream and low-end desktop parts.
--SPECULATION ENDS HERE--
So that's what we're waiting for. It's just that AMD are being tight-lipped about Zen, and the 300-series won't come out until Computex, so we don't know poo poo yet. But we're hoping. SwissArmyDruid fucked around with this message at 21:25 on Apr 8, 2015 |
# ? Apr 8, 2015 18:55 |
|
Rastor posted:Intel has a huge process advantage. With that said, AMD claims that battery life was a major focus for Carrizo. We haven't heard about any Carrizo design wins yet, but it can be assumed some will be announced soon (HP 255 G4 seems almost-official, for example), so we'll see what the results are when those come out. Not for long; Samsung has a "14nm" process in production (the BEOL is still 20nm; Intel has 14/14), so GloFo will probably get a 14nm AMD processor out reasonably soon. Sure, Intel will have a 6-12 month lead, but it's not gonna be as ridiculous as it was.
|
# ? Apr 8, 2015 20:09 |
|
Bam, called it~ APU + Zen + HBM + awesome GCN parts. I still worry that AMD is going to shoot themselves in the foot with regard to single-threaded performance again (16 Zen cores, even if it is a server part), but other than that, this is probably what our next-gen K11-B is gonna look like. I was also incorrect that the HBM would be the shared L3 cache. Article: http://www.fudzilla.com/news/processors/37494-amd-x86-16-core-zen-apu-detailed SwissArmyDruid fucked around with this message at 21:38 on Apr 10, 2015 |
# ? Apr 10, 2015 21:35 |
|
I hope they pull it off. Playing games at 1080p high/ultra settings on an APU with freesync sounds dreamy.
|
# ? Apr 10, 2015 22:22 |
|
No giant L3 cache, but no one plans for 16GB of HBM without some major horsepower backing it. It easily explains the rumored 300W TDP: 95W for the Zen cores, 200W for the GCN, which could give us top-tier processors reaching into enthusiast territory. If AMD don't gently caress up, that's a lot of mobile market capture, since Iris is still at what, R7 240 level?
|
# ? Apr 11, 2015 00:10 |
|
FaustianQ posted:No giant L3 cache, but no one plans for 16GB of HBM without some major horsepower backing it. It easily explains the rumored 300W TDP: 95W for the Zen cores, 200W for the GCN, which could give us top-tier processors reaching into enthusiast territory. If AMD don't gently caress up, that's a lot of mobile market capture, since Iris is still at what, R7 240 level? No, I think that TDP number might be way overinflated. Those are Greenland GCN cores, which are known to be part of the Arctic Islands series of silicon, which is rumored to emphasize power efficiency a la Maxwell. Also, if Zen arrives on 14nm FinFET as rumored, I think the TDP might come in lower than what we consider "normal" for an AMD server part. However, if that number *is* correct, I'm thinking it's closer to a 50/50 split, as it *is* 16 physical cores, after all. Intel's highest-TDP Xeon E7 v2 part is a 155W 15-core clocked at 3.2GHz with 37.5MB of L3, although that is probably due to binning, as there also exists a gamut of processors with differing TDPs and core counts, as well as frequencies scaling inversely with core count: http://ark.intel.com/products/family/78584/Intel-Xeon-Processor-E7-v2-Family#@Server (Seems like they have trouble getting that one last core to not be DOA.) SwissArmyDruid fucked around with this message at 03:28 on Apr 11, 2015 |
# ? Apr 11, 2015 03:21 |