EdEddnEddy
Apr 5, 2012



Grundulum posted:

My work machine has an i7-3930k in it; 6 physical cores with two-way hyperthreading, so 12 logical cores. When I try to run jobs that use more than one core, I get the distinct impression that I'm being throttled due to thermal considerations. That leads me to two questions:

(1) Where can I find the core-by-core breakdown of maximum speeds for this processor, assuming just the stock cooling setup?
(2) If I buy one of the coolers recommended in the PC parts thread, will it fit both this processor and Skylake/beyond? I intend to replace this machine eventually, and a cooler seems like it ought to be reusable.

Can you tell us what OS you're running, and maybe give us a glimpse of what you're doing that makes you think this?

I have the same CPU, which turbos up to 4.4GHz, but it is sometimes a bit finicky about actually hitting full clocks on workloads that aren't multi-threaded.


If you are on Windows 7 or newer, try switching the power profile from Balanced to High Performance and see if the performance is closer to what you might expect.
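
If you'd rather flip the plan from a script than click through Control Panel, something like this should do it. Treat it as a rough sketch, not gospel: it assumes Windows, and it leans on powercfg's built-in plan aliases (SCHEME_MIN is the stock High Performance plan, SCHEME_BALANCED is Balanced).

import subprocess

# Built-in powercfg plan aliases: SCHEME_MIN = High performance
# ("minimum power savings"), SCHEME_BALANCED = Balanced, SCHEME_MAX = Power saver.
HIGH_PERFORMANCE = "SCHEME_MIN"
BALANCED = "SCHEME_BALANCED"

def set_plan(alias):
    # Activate the given built-in power plan by its powercfg alias.
    subprocess.run(["powercfg", "/setactive", alias], check=True)

def show_active_plan():
    # Print the currently active plan so you can confirm the switch took.
    subprocess.run(["powercfg", "/getactivescheme"], check=True)

set_plan(HIGH_PERFORMANCE)   # run your benchmark/game here, then...
show_active_plan()
# set_plan(BALANCED)         # ...switch back when you're done testing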


EdEddnEddy
Apr 5, 2012



NLJP posted:

As an addendum, actually go into the advanced settings of your chosen power profile and make sure max CPU usage is actually set to something sensible. Mine was set to 5% for a long time due to an apparently long known but never fixed bug.

What sort of bug?

In every Windows version since Vista (barring the stupid customized ones OEMs create), there are three main plans: Power Saver, Balanced (5% minimum processor state, 100% max), and High Performance (100% for both, so the CPU stays at full clock all the time).

Running Balanced is usually the best bet, but I find that certain apps/games sometimes don't push the cores to 100% clock properly, and using High Performance kicks things up to get full performance out of a core or two. It is really weird, and I wonder if it has something to do with HT. Haven't disabled it to see if that's the case yet, though.

EdEddnEddy
Apr 5, 2012



Nintendo Kid posted:

One thing is that in balanced mode, the computer will typically favor reducing clock rate before increasing fan speed to manage heat, while in high performance mode, it will always try to increase fan speed before reducing clock rate.

So with certain workloads, you'll end up never hitting 100% clock rate in balanced while in others you will.

Isn't that adjusted via the Cooling Policy setting (Passive vs. Active)? Most other, more direct fan profile settings live in the motherboard BIOS.

EdEddnEddy
Apr 5, 2012



eggyolk posted:

Skylake finally prompted me to upgrade. From an i3 2100T to a...... i7 3770. Turns out they go for under $200 occasionally.

Nice. That i3 isn't a bad chip, but the upgrade to that i7 has to be night and day.

I went from that i3 to a Pentium 20th Anniversary after it kept throwing a bad-core error, which is strange to me for a non-overclocked CPU.

But for $52, getting a Pentium to run at 4.4GHz is great for an HTPC gaming chip.


Also, I know I've seen that sleep-mode glitch on my laptop on occasion in every OS from Windows 7 through 8.1, but it was pretty rare and only happened after a lot of time asleep. A quick reboot seemed to fix it, and since I never (can't really) sleep my desktop because of the overclock, I never run into that issue on it.

I will have to check out using this High Performance profile like Balanced and see how it handles a few of my VR apps. Being able to throttle things up properly where Balanced does not, while still letting it throttle down at idle, might be worth exploring.

EdEddnEddy
Apr 5, 2012



If he had at least run that 920 at 4.0GHz versus stock, it wouldn't feel like such a huge jump.

But stock for stock, there is no comparison, of course.


While each generation has gotten a bit faster as well as more power efficient, I miss the days of real performance bumps like we had going from the P4 to Core 2 Duo to the i series.

EdEddnEddy
Apr 5, 2012



Yeah, the 2500K should be able to hit 4GHz with ease and 4.4-4.6+ with a little tuning. As mentioned above, unless there is a specific feature you really want from a newer chip/chipset, I'd hold off a bit longer, just get a good cooler if you don't already have one, and enjoy your "free" upgrade (if you haven't overclocked it already).

I built a 2600K system when it was new and was able to get it to 4.5GHz with turbo to 5GHz. It was an amazing chip and still runs great to this day for the guy I built it for.

EdEddnEddy
Apr 5, 2012



That's the idea. Around or after summer.

I may have to bite on that 10-core because.... I'm stupid.

Also, I hope it overclocks well. I could probably get by with the 8-core one too, so we will see. If I am going to do an X99 build, I really would like to go all out with M.2 SSDs and such as well. I guess it all depends on what VR pushes next year and whether DX12 needs more than my 6-core 3930K can deliver at its 4.4GHz speed.

EdEddnEddy
Apr 5, 2012



I've had mine for 4 years :ohdear:

Also, how many overclockers just crank their CPU to 1.4V and stick it at 4.8-5GHz 24/7?

Is that still standard practice these days?

EdEddnEddy
Apr 5, 2012



MaxxBot posted:

I've had my 2600k heavily overclocked and overvolted and often running at 100% load for 4 1/2 years now. Originally I was able to run at 4.9GHz but I've backed it down to 4.5GHz, mostly because my cooler doesn't seem to work as well as it used to.

Normally I'd upgrade sooner but I want something that's actually faster, I'll probably go with Broadwell-E.

I was going to post a bunch of words about my 2600K experience, lol. With the arrival of the i7 K series and Turbo, why in the world would you lock your CPU at such a hot and punishing level 24/7? I built a 2600K system that used per-core turbo fantastically: 1 core went up to 5.0GHz, 2 cores up to 4.9, and 3 and 4 cores to 4.8GHz, all on less than 1.385V max, I believe. It was on an Antec Kühler 920, I believe, and while that drat thing got loud, it was stable for a few years in southern Texas until the guy moved again and screwed a bunch of stuff up with the cooler, and I haven't been able to get enough time with it to re-set up and test his OC again. Now it runs up to about 4.2GHz on stock voltages, I believe.

Running at 100% for long periods isn't a problem, but idling at max OC volts and speeds all day just seems super wasteful of both CPU life and, more visibly, power usage and heat generation. :/

Though that trick of using a power plan based off of High Performance rather than Balanced really is something I didn't know about before. Balanced would still hit my CPU's 4.4GHz OC, but the High Performance one keeps things out of the 1.2GHz idle range almost completely, which makes things feel a bit speedier but also runs the temps up a tad higher. It is great for VR/gaming, though.

EdEddnEddy
Apr 5, 2012



sadus posted:

Not sure if its a new thing but the Skylake motherboards have a voltage offset setting now instead of just hard-coding the voltage, so you can just boost the voltage a bit but it will still drop way down when not in use. We hit 4.8ghz at 1.45v with a i5-6600k with a $30 Cryorig CPU cooler and two cheapo stock case fans but dropped it down to 4.6ghz at 1.4v to be stable with Prime95 v28.7 overnight.

The first i7s had offset voltage ability on motherboards that properly supported overclocking. It's IMO the preferred way to OC, so you're not running 1.x volts through your CPU at all times for no reason. That's why I mentioned it earlier in the 2600K OC talk.

It does take some finesse to get dialed in, since you are fighting the CPU's native voltage stepping, the motherboard's Vdroop and load-line calibration, and whatever other settings each board has onboard, but once you get it there, it's nice to have a chip that runs almost like factory, only with a 1GHz bump.

Also, 1.4V just seems extreme to me for any OC below 4.8GHz. Hell, my 3930K doesn't even use more than 1.384V at 4.6GHz, and it's a fatty. It needs 1.4+ for the 4.8-4.9 range, but the heat, and the time it takes to adjust everything else to make it stable, is just too much for what little gain I get at that point.

4.4GHz was an easy sweet spot that runs like 10°C cooler than 4.6+ in the summer, lol.


Subjunctive posted:

DisplayPort carries audio, no?

I thought it did too...

EdEddnEddy
Apr 5, 2012



You have to give ASRock credit for sure; they make some neat boards and support them pretty well when new stuff drops that opens up current chipset features. Their X79 boards were nuts, except for how many only had 4 RAM slots instead of 8. :/

EdEddnEddy
Apr 5, 2012



BIG HEADLINE posted:

Lenovo is trying to sucker people into buying a 'gaming laptop' with a 4K touchscreen that has a 48Hz refresh rate that they pair with a 2GB 960M.

Asus wasn't too far off with their 960M-powered 4K 15-incher.

Why do they put 4K on a 15" laptop with dedicated graphics too weak for the task, yet it's nearly impossible to find a 17.3" with 4K and a touchscreen (with a 970M or above), if you can find one at all? :/

EdEddnEddy
Apr 5, 2012



The X1 Carbon is a nice build design, but yes, they are non-upgradeable for pretty much everything except maybe the SSD, so get what you might want out of the gate.
Also, most of the Lenovos seem to have a small issue with their wireless cards when coming out of sleep mode. Not all of them, but I saw it on the X1 more than I should have.
My uncle just got the Dell XPS 13 from Costco when they had it for $200 off, and that thing curb-stomps any of the work laptops at my place (everything from Lenovo X1 Carbons to Yogas, HPs, etc.).

Much higher fit and finish, the QHD+ screen is gorgeous, and 8GB RAM, a 256GB SSD, an i7, etc. all fit into a great-looking 13.3" size.

EdEddnEddy
Apr 5, 2012



Man, I know it's stupid, but if you absolutely didn't want to build your own and just wanted something to game on NOW in 4K, give or take, this HP deal isn't half bad.

EdEddnEddy
Apr 5, 2012



AVeryLargeRadish posted:

From what I have read on the subject it becomes harder to use solder the smaller the CPU die gets, on smaller dies the solder becomes prone to cracking and destroying the CPU. Also the non-LGA2011-3 chips don't really produce enough heat to justify the cost of the soldering process, which is actually very complex and expensive because it uses some very rare metals in extremely thin layers to create the bonds needed between the CPU die and the heat spreader.

Exactly this, from what I have read. However, they also aren't using the best thermal paste, so delidding and applying something better seems to give nearly as good results as soldering would.

Though this is also part of the reason I will probably be on LGA2011-X for the foreseeable future, since I don't need an iGPU and I use all the cores and PCI-E lanes more often than most.

EdEddnEddy
Apr 5, 2012



I think throttling can save it when it is able to at least vent some heat. A heatsink that is attached properly but has no fan should still work well enough to keep from damaging anything, but a heatsink that isn't attached is much more likely to kill it, since that instant buildup of heat has nowhere to go quickly enough.

I did see a kid at a local LAN with a new 4770K who couldn't figure out why his PC wouldn't play CS:GO at 200 FPS like he was told it would. He was asking around, and people were checking all over (drivers, Windows install, etc.), but nobody thought to check the temps. It looked like a throttling issue to me, since it would play at >100 FPS for a few seconds, then drop down to the 30-50 FPS range for the remainder of a match.

Took a look at the temps: yep, 99°C. The heatsink was hanging by a single one of the push-pin arms, and they were turned to the release position. The kid said he followed the arrows.

Set him straight, and the comp went on to play in the 40-60°C range at 200+ FPS like he had been told.


I agree that some people just shouldn't build or work on their own PCs, even in this easy-as-pie day and age. Also, the stock coolers used to be better IMO than the ones they have now. The ones that came with the early Core 2 Quads or 1st-gen i7s were massive stockers, and with the copper core, they were practically an upgrade over a lower-end CPU's cooler with only an aluminum core and shorter fins. I was able to overclock a few lower-end C2Qs and i7s later with those "upgraded" coolers, which actually allowed a modest OC without creating as much heat as the cooler they came with.

Of course any actual OC needs a good aftermarket cooler, but for the most part the stock one is just fine at stock clocks, as long as you install it, make sure the pins go into the holes, and push the 4 pins until you hear the drat click.

EdEddnEddy
Apr 5, 2012



The one growing area that does depend on PCI-E speeds, due to latency, is VR. If there is latency in getting data from the CPU to the GPU to render changes in a scene while using VR, that is going to wreck the experience because of the time the slower interface adds. (There are plenty of tests showing things like VR SLI making a huge difference between PCI-E 2.0 x16 and 3.0 x16.)

Now, not many people will be going out to buy ultrabooks to hook up VR and play through an external GPU enclosure (and while GPUs currently are huge, the next gen with HBM, similar to the AMD Fury, should show that they will be shrinking a lot once that becomes mainstream).

But you know some will, and they will piss and moan and wonder why their super-$$$ laptop doesn't work with VR and blame everything except the technical limitations.


Either way, neat tech, but not something I feel is more than a niche of a niche market, outside of maybe professional rendering or the extremely space-limited gamer who wants a single system to work on, then come home and play on, and that's it.

EdEddnEddy
Apr 5, 2012



Psmith posted:

So I'm finally considering upgrading my CPU/MOBO and I have a few questions.

I have an i5 2500k that I bought about 4 years ago. I have recently upgraded my GPU to a 970 and I've been keeping my eye on Oculus (and VR tech in general) so I kind of want to get my CPU up to date. The main selling point for me here is that because the 2500k is still a good chip that allows me to run mostly everything at higher quality, I will use the current board/chip to create a secondary gaming PC for the living room.

So with that being said, I'm thinking about upgrading to the i7 4790K as it is $300 at Microcenter. I would get a new board as well. Getting the new board is because when I built this machine originally I got a small form factor board to fit my smaller case at the time. This has been something of a hassle as I've continued to upgrade so I feel like it's time for a full size motherboard.

So I have a few questions:

1) Is the upgrade even worth it? I've done a little research on google and the responses seem a little mixed. It seems that I could OC the 2500k fairly easily but I'm currently using stock cooling. So I'd have to try and fit an aftermarket cooler on my small form factor mobo. The recommended CPU for Oculus is "i5-4590 equivalent or greater" so that's my target. Oculus isn't my only concern but I believe it would be the most intensive one.

2) This is always my concern with buying a new expensive computer component but is there some wonderful brand new CPU tech on the horizon that is worth waiting for?

Thanks!

As mentioned above, save the money, get a really good CPU cooler, and overclock that thing.

You should be good to go for entry level Rift support as long as you have a few good USB 3.0 Ports to use.

EdEddnEddy
Apr 5, 2012



On the Windows 10 for Skylake front, I know uptake of Windows 10 is still a bit slow, since it and even 8.1 are still relatively new OSes compared to the XP/7 days, but why would anyone really want to build a brand spanking new system with the latest hardware and then run an out-of-date OS on it? Outside of the usual crowd that can't get used to a new Start menu (Windows 8 really wasn't that bad and flowed well once you got used to it; then 8.1 made it a bit more mouse-friendly while destroying the consistency of the "new" interface), you also get the latest optimizations and the ability to work with the latest hardware in the best way possible. I know for a fact my now-old 3930K ran a lot better in Windows 8 than it did in Windows 7, because 8+ could tell the difference between a physical and a virtual core, which Win7 could not. That was good not only for the Intel Hyper-Threading crowd, but especially for those who got burned by AMD Bulldozer with the core "modules" in that sort-of-8-core chip's design. It gave them at least a 10% boost, which was better than nothing.

You can always virtualize an older OS if you need to for whatever reason, outside of the hardware-bound research stuff you guys mention. Hell, I knew of labs that run on DOS/Win 3.1 stuff because of the old-school timing of the hardware, which can't be emulated or run on newer hardware since the drivers were written to talk to the device directly. Keeping that kind of crap working has to be tough when someone forgets to dust it off once every 3 years or so. But for mainstream PC users, Windows 10 will be the OS going into the future, like it or not.


I do still wish we had better control over updates, at least for drivers, but outside of that, 10 seems to be reasonably good on everything else I have run it on so far.

EdEddnEddy
Apr 5, 2012



Exactly. Windows 7 was great, a big step up from XP and Vista (even Vista was good coming from XP, as long as you ran it with >1GB RAM), but 8, 8.1, and 10 all run and work even better than 7 on newer hardware.

One thing I am still a bit stuck on with Windows 10, though, is that even though they say it has lower requirements than, say, 8.1, I feel the new graphics model they did for the OS interface runs like poo poo on some older hardware, where 8/8.1's was quick and fast on pretty much anything.

I understand the new GPU-accelerated model is a much more modular design going forward, but the little bit (and sometimes bigger bit) of lag you get when you press the Windows key and start typing to just find something local on the system can be a bit of a pain in the rear end when it hiccups and you have to wait for whatever Cortana system it's hanging on to catch up.

It's much less apparent on newer/faster hardware, but on old laptops and such that I have tinkered with it on, it runs better in parts and worse in others, my guess being due to the caching and other connected stuff it offers.


The only other thing I dislike is the forced drivers on really old hardware. The 2005 Toshiba I have does not like any newer Synaptics touchpad driver, but it insists it needs to install the latest one, which breaks the side touch scrolling. :argh:

Still haven't found a solid way to stop that from happening. All the GPO editing seems to do diddly squat.

EdEddnEddy
Apr 5, 2012



With next-gen GPUs having HBM on them, the CPU might have even more to do to keep the GPUs of the future fed, and I agree, I hope DX12 makes that a reality in the long run.

The FPS boost you see in games like Project Cars going from DX11 to DX12 looks promising for keeping the game's FPS consistently high, instead of dipping in heavy-load areas on midrange hardware, which was impressive.

EdEddnEddy
Apr 5, 2012



It's not what I would call a GT for the PC; however, it is the most beautiful and physically accurate racing game I have played so far. Assetto Corsa is a really close second. I don't think Project Cars is DX12 yet, but it is coming, among other things.

Comparing racing games: the GT series on the PS3 (5 and 6) has OK physics and force feedback feel. Forza 5 and 6 on the Xbox One greatly disappointed me with how bad the force feedback was when I got a G290 to test with, and the physics, while reasonably OK with the cars stock, seem to just throw traction to the wind when you start to do any mods without tuning the suspension like mad. (Considering I own a few cars that are in-game, and matching the game's "performance level" to my real life, I should be dead by now, given how the physics emulated the handling.)

Assetto Corsa is really good as well, but Project Cars really seems to nail near perfection as far as what I would expect grip and each car's platform to feel like. Mid-engine cars handle and feel exactly like I would expect, where a lighter touch on power and braking helps keep things in check. The Caterham handles great as long as you respect its lightness and small wheels (and, I believe, open diff). If you drive it wide open, you will wonder why the car doesn't seem to want to stop sliding sideways as you make your way around turns. Letting off the gas a little until the rear end settles in the direction you want to head, and using its light weight to keep the speed up around the turns, lets that car just fly around a track, and it is a ton of fun. The BMW M1 feels exactly like I would expect with respect to its weight, power, and handling. And each car since has continued to impress me in exactly the way I would expect in real life.


Lots of words, but in conclusion: get Project Cars, have a good FFB wheel, and treat it like a true driving sim. It is amazing.

EdEddnEddy
Apr 5, 2012



Ervin K posted:

Anyone know if they're releasing a 6 core i7? You'd think i'd be out by now.

Yeah, since around the Ivy Bridge era. That's what the -E series has been for on the X79/X99 chipsets.

6 cores starting with the X79 chipset series (i7 3930K (mine) / 3960X / 4930K / 4960X), then the X99 series (5820K / 5930K), and 8 cores with the 5960X.

EdEddnEddy
Apr 5, 2012



HalloKitty posted:

Ivy Bridge? Try Gulftown, and the amazing 980X.

That one completely slipped my mind. I wanted one of those so bad back then. However, the -E series started around the Ivy Bridge timeframe with SB-E, even though SB was old news by then.

Still think the X58 chipset was a drat good one too. Built a few systems with it, but never had one of my own, since I had a still-badass X48 that I got SLI working on, with a 1GHz OC on a Q9550 at 1.25V and a full 4 slots of 8GB DDR3 at 1081MHz. That thing flew.

EdEddnEddy
Apr 5, 2012



If you have any K-series CPU, you can OC it about 1GHz and last 3-5 generations without performance being a limiting factor at all. The main reason to get a Skylake over, say, an Ivy Bridge or even a Sandy Bridge is if you really want the extra features the chipset and chip may offer.

The reason you get an -E series is if you do a lot of multithreaded work (encoding for me, a Blu-ray done in 30 minutes) and sometimes want some other server-ish storage and performance benefits. If you want to make a super-beast RAM-disk-powered monstrosity, X99 with quad-channel memory would be your best choice.

EdEddnEddy
Apr 5, 2012



Once I got a 120Hz 3D monitor, I finally got what the >60Hz crowd was so worked up about. While holding a solid 60 in 90% of games is great, being able to go up to 120 and have a silky-smooth frame-delivery experience is quite nice. Everything just feels even smoother.

30FPS is downright terrible to sit through unless it is some really cinematic experience game or something.

Now the whole 75/90Hz/FPS thing coming with VR is going to be a bit more of a challenge. Letting some games drop to the 50FPS level on a screen was no big deal, but in VR the FPS has to stay at the VR-specified rates, and right now, in high-detail games (like Elite Dangerous), 75 is a bit tough in VR, and 90 is going to be downright hard unless I get a new Pascal-powered Nvidia card (or 2) when those beasts launch.

It is interesting how, when the system is running normally, I can run 120+ FPS without VR, but the minute you are shoved into VR, the performance is almost cut in half (which is to be expected), and as long as there aren't any other bottlenecks (like SteamVR seems to have currently when using the Oculus through it), things work. But when they don't for whatever reason (not currently hardware-based), man, it destroys the experience, and I can imagine it completely turning some people off VR for at least a generation or two if Steam/Valve/Oculus don't get things right and let the hardware really off the leash.

EdEddnEddy
Apr 5, 2012



fishmech posted:

Er, the only broadcast standards we're using for HDTV is 30 and 60 FPS, with much of it being 30. These are hardly high framerate, considering as those have been standard rates for TV and monitors for ages.

Are you sure you're not thinking of a TV set that upconverts lower frame-rates to 120 FPS by literally generating new frames based on the average of the frames it's actually receiving? That looks pretty fake, but it's because the actual content is being stretched out with frames that don't actually exist in the source. That sort of stuff is always going to look wonky.

That is one of the worst things HDTVs have brought us in years. I believe the only time it is semi-good is when watching sports, but movies are just destroyed by it, and it looks the worst when you watch animation, either drawn Disney-like stuff or 3D Pixar content. Both are impossible to watch, as the blending just does not work in fast-movement scenes and such. :barf:

EdEddnEddy
Apr 5, 2012



Saukkis posted:

I don't think you can blame HDTV for that. Later SD CRT televisions already supported 100/120Hz and they created extra frames by interpolation.

I don't care where it got started or who has it; the option should not be enabled by default, and I want to stab whoever thought it was going to be the next big marketing gimmick in his left testicle. :argh:



*I don't really want to do this, but I do feel bad (only a little) for anyone who gets a new TV, doesn't really like it but lives with the stupid feature because they are too lazy/ignorant/stupid to figure out how to turn it off...

EdEddnEddy
Apr 5, 2012



Yeah, ECS wasn't really high-end, but their cheap boards that you got at Fry's with a CPU for the price of the CPU have all stood the test of 5+ years so far in a few simple office-level builds.

They weren't feature-filled by any means, but they did work.

They even made an absolutely killer water-cooled 9800 GTX SLI bundle back when that was killer bang for the buck, right before the 200 series came out. So I will give them credit for trying to break out of their cheap-boards mold.

EdEddnEddy
Apr 5, 2012



ASUS CUSL2-C with a 933 P3. The first PC I built myself.
I still have that thing next to my desk for all sorts of old Windows 98 goodness; the last time I was messing with it, I was trying to get a Sound Blaster 16 PCI sound card to work with some old games. Turns out I need an ISA version to do what I really need.
Still, I played everything on a PCI Voodoo Banshee before getting a GeForce 2 MX, followed by a GeForce 3 Ti 500 that followed me to a P4 1.8GHz build with an ASUS P4T-E, then a P4T533-C when the -E died...

It now has an AGP Voodoo Banshee for real Glide games and runs a hell of a lot better. The PCI Banshee was great, but the PCI latency showed itself a ton on the 933, whereas on the P2 266 it was in previously, of course, it did not.


I'm sort of sad that site sort of died around the 2001 era, as the boards and adventures of Rambus RAM and DDR RAM, as well as the chipsets and such, were still pretty fun in the P4 days.


I swear my P4 1.8GHz with 256MB of Rambus still ran Windows XP with a sort of "spunkier" feel on a fresh install than my P4 3.0 with an ASUS P4P800 and up to 2GB of DDR 400. The 1.8 was only replaced because a friend of ours needed a work PC upgrade, and because I had experimented with an old Voodoo 2 I had gotten, putting it in the slot directly below the Ti 500. The Ti ended up cooking its fan and dying, and the AGP slot never seemed to be the same again. Any game you played, even at 60+ FPS at the time, had a sort of momentary slowdown/lag that made pretty much any game unplayable. (The scene would move smoothly, but in steps, __|__|__|__|__, when it should have just been a smooth _______ transition as far as FPS was concerned.) It couldn't play games even with a new Ti 4200 or Ti 4600 at the time, but it ran business apps like a champ, right up until the users inevitably got a virus again, and again.

EdEddnEddy
Apr 5, 2012



Combat Pretzel posted:

ASUS CUV4X-D with two P3-933.

That drat thing outlasted plenty of P4s in overall usability, thanks to the multiprocessing. I actually refused to upgrade for a long while when Intel disabled the GTL+ lines on the consumer CPUs, preventing people to do multiprocessing on the cheap. Then the AthlonX2 came and slowly kicked off multiprocessing for the consumers again.

Whoa.... If I had known more about building computers back then, I so would have gotten one of those. That thing would have been amazing with like 512MB of RAM back in the day.

A friend of mine built a dual-CPU AMD setup a year or two later, but it was those Athlon MPs, and the system just never ran right. He had to downclock it and just hope it would hold together for nearly any gaming, which was unfortunate, because it was pretty cool and XP saw the 2 CPUs at the time, so it was a bummer.

But 2 of my old 933s would have been downright fantastic for years, I would bet. The main reason I got a new P4 1.8 back then was because I got the Ti 500 and the CPU was just not able to feed it what it needed. Putting it into the 933 showed only a little performance boost over the 2 MX, but in the 1.8, that card absolutely screamed at the time, playing games like Tribes 2 and Giants, right up until it burned up. :(

EdEddnEddy
Apr 5, 2012



Though there are cases where Xeon-powered servers are overclocked and used....

I remember an article about CCP saying the servers they hosted some parts of EVE on (Jita and large-scale battles?) were some newfangled machines with overclocked, water-cooled 4.4GHz Xeons, the fastest servers of their kind at the time (like in 2012/2013).

EdEddnEddy
Apr 5, 2012



fishmech posted:

Yes but there's also no reason to be using Windows 8 if you're the kind of person who hates Windows 10. 8 has all of the downsides and none of the upsides.

This was rendered false for those who actually like touchscreen laptops.

My dad's Asus VivoBook updated itself to 10, and he couldn't quite figure anything out until I showed him. Even after a while, 10 really did poo poo all over the entire touch interface that made Windows 8/8.1 actually work really well on a tablet/touch device. Went back to 8.1 and got everything back the way it was for him, for now.

Tablet Mode is not an answer for removing touch IE* (since Edge doesn't quite replace IE just yet for those who still use it), and the upgrade broke/removed nearly all the apps like Mail and such that were working fine in 8.1; installing 10 removes them all and doesn't update the Start menu with what you were actually using.

The overall move from 7/8.x on, say, a desktop not using the Windows 8-style apps is fantastic, don't get me wrong, but for someone who actually adopted the 8/8.1 apps and controls, to just rip those back out and replace them with a clunkier style is sort of a kick to the nuts.

And Microsoft has been doing that a lot since 8 dropped, not to mention with Zune and Windows Phone.

They keep releasing good stuff, and because a handful of people bitch enough about something they still don't know how to use, even when it's been around since 95, they remove it instead of just offering both in the next version. Stop it!


*Also, I know, IE, AHH! Evil! But with my home WiFi filtered through ad blocking in my router, IE is about as safe as any other browser, and the touch IE in 8 wasn't bad for those who don't want to use the touchpad and actually like the swipe gestures for tabs and such.



Now on the hardware front: if you aren't running the latest, greatest OS available for your hardware, you are just missing out on performance/features that you could otherwise enjoy. My 3930K ran a bunch better on 8 than it did on 7 (and yes, 7 didn't see the X79's chipset drivers out of the box, which did take a little extra effort to get running back then). And 10 runs on it even better, which was nice. Plus, DX12 is 10-only.

EdEddnEddy
Apr 5, 2012



AGP is how I extended the life of my P4 3.0 before I finally got a C2Q 9550 system. Went from a 9800 Pro to a 7800GS, which was a pretty decent card for what it was.

Also, how do you have a 4800-series ATI card that hasn't burnt itself out yet?!? :psyduck:

I had gone through three 4870X2s before VisionTek swapped me up to a refurb 5870, which is happily humming along in that old system I gave to my sis for Photoshop stuff.

Loved those X2s, but man, they did get hot. A friend of mine with a 4850 and his bro with a 4830 both ran super hot, to the point that they finally died or got close.

EdEddnEddy
Apr 5, 2012



On the 4000 series, it wasn't the GPU itself but the VRMs. The core could be a balmy 70-80°C, but those VRMs would get up to 100°C+ easily if you didn't watch it.

On like the third 4870X2 that was dying on me, I threw caution to the wind and cranked the voltage/fans up to see if I could get it to stop glitching out playing SC2. The VRM temps got up to over 130°C, but amazingly it didn't die; man, did it put out some heat, though.

It amazes me how good the Nvidia reference coolers got, after using that X2, then a 5870, then SLI 560 Tis. They were all just hot-running things, and the stock fan profiles sucked noodles, as did the fans themselves, which got so loud when you wanted them to ramp up.

At 100%, my 780's fans are easily half as loud as the 560 Ti's fans were at 60%. And they never go above the low 70s °C with my fan curve adjustment and mild overclock.

I continue to hate any aftermarket fan design for use with SLI. I want the heat vented out of the back, unless the aftermarket route is going water.

EdEddnEddy
Apr 5, 2012



HalloKitty posted:

That was only 10 years ago or so when they were finally hitting the end of their lifecycle. The last AGP card I bought was a 7800GT for my Pentium 4 system. Did you only just get into IT?

You mean the 7800GS; they never made the GT/GTXs in AGP. :P



Trust me, I looked far and wide for one, hah. They had a very close-to-full-GT-style GS at one point, but it was super expensive versus the normal GSes (the Gainward 7800GS+, which ran a full 20-pipe GT chip).

The alternative ATI card that came out for AGP that was actually really good was the X800XL, which was a 16-ROP beast like the X850XTs at the time, which was cool. (Crap, they made an X850XT Platinum Edition in AGP as well. Well, color me surprised. That would have been a great card too.)


RIP BFG. :(

EdEddnEddy fucked around with this message at 18:33 on Feb 10, 2016

EdEddnEddy
Apr 5, 2012



HalloKitty posted:

It was advertised as a 7800GS for whatever obscure reason, but it absolutely was a 7800GT. It's the one I had, in fact, have; but it's in storage, in the aforementioned Pentium 4 Northwood system.

Yes, it was expensive. It's actually the most expensive graphics card I've ever bought, to date. (£274.97 in April 2006). Even my current 290X cost a decent amount less.

Yep, that was the card. Very nice, and I wish I could have gotten my hands on one of them. Though since the chip of the time was an AMD Athlon 64 4000, my P4 3.0 was just not going to feed it enough either way.


Don Lapre posted:

Never forget



What in the hell is that monstrous piece of amazing engineering? That is awesome!

EdEddnEddy
Apr 5, 2012



blowfish posted:

sorry, it's a bit hard to remember all the companies that lost against nvidia/wintel :agesilaus:

They didn't "loose" as their cards at the time were easily neck and neck with Nvidia.
They did however look appeasing to AMD which bought them out to get themselves a very good GPU business to help compete with Intel's Integrated GPU's for mobile devices.
Luckily they know that keeping the Dedicated GPU business alive is good for them too, and even better they are sort of spinning off the GPU wing again as a semi separate company lol. Shoud have just left them separate as ATI but well, Corps will be Corps.

EdEddnEddy
Apr 5, 2012



eggyolk posted:

I've been meaning to ask this for a while.
My workstation PC has a 5820K in it with a 240mm AIO liquid cooler. I built it myself and it runs great for Solidworks stuff. During rendering it clocks 4.4 at 50C.
Problem is that I've been trying to get it to run at 3.6GHz default because some programs don't trigger the turbo boost and it runs slow as poo poo. After adjusting the bios to a higher minimum clock, it seems to switch between 1.2Ghz and 3.6 at a very high frequency according to Intel Power Gadget. Is this safe for the CPU? It switches frequencies several times per second and I'm worried it's damaging things. How do I get it to run at a constant bottom end speed?

If you are running Windows, go to Power Options under Control Panel / Hardware and Sound, show the additional plans if you only see Balanced and Power Saver, and select High Performance. That one should keep the CPU near its top turbo clock most of the time, even at idle, so you might want to switch back to Balanced when you aren't doing much.


Another thing I recently discovered is making a new power plan based off of High Performance. If you do that and then set the minimum processor state to 5% and the max to 100%, it will still idle when it's doing nothing, like Balanced does, but it shoots up to near max turbo a lot quicker than Balanced mode. Works great for my VR stuff and for games that similarly don't pull the CPU up to speed, yet benefit a good bit when it is there. Really weird.
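
If you want to set that custom plan up without clicking through the UI, something like this should do the trick. It's only a sketch of the idea, assuming Windows and powercfg's built-in aliases (SCHEME_MIN for the stock High Performance plan, SUB_PROCESSOR / PROCTHROTTLEMIN / PROCTHROTTLEMAX for the processor-state settings), and it only touches the plugged-in (AC) values:

import re
import subprocess

def make_custom_high_perf_plan():
    # Duplicate the built-in High Performance plan and grab the new plan's
    # GUID from powercfg's output.
    out = subprocess.run(
        ["powercfg", "/duplicatescheme", "SCHEME_MIN"],
        check=True, capture_output=True, text=True,
    ).stdout
    guid = re.search(r"[0-9a-fA-F]{8}(?:-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}", out).group(0)

    # Minimum processor state 5%, maximum 100% (AC / plugged-in values).
    subprocess.run(["powercfg", "/setacvalueindex", guid,
                    "SUB_PROCESSOR", "PROCTHROTTLEMIN", "5"], check=True)
    subprocess.run(["powercfg", "/setacvalueindex", guid,
                    "SUB_PROCESSOR", "PROCTHROTTLEMAX", "100"], check=True)

    # Switch to the new plan.
    subprocess.run(["powercfg", "/setactive", guid], check=True)
    return guid

print("Now using plan", make_custom_high_perf_plan())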


EdEddnEddy
Apr 5, 2012



Yep. My English on Friday apparently took a nosedive as it was a rough week, and Saturday didn't help make the week any better. :(

I agree, their drivers were always a mixed bag as well. Usually you ended up staying on the one good one that worked with your card and all the games you wanted to play at the time. They have gotten better, but so has Nvidia, so it is a constant uphill battle for them.

At least the next gen of cards should bring some much-needed competition once again, if Nvidia doesn't just curbstomp them with their new tech right out of the gate. They both have had some major time to put R&D into their new stuff, given how long they have been sitting on their current tech, with Fury being the only real newish thing in a long while.
