Khorne
May 1, 2002
This is kinda disappointing. I've had an i7 3770k (ivy bridge) since it came out in 2012 and I was looking to maybe upgrade in early 2016. It looks like I am going to be sitting on this system (i7 3770k / 16gb of ram / gtx 670) for an eternity. Judging by the road map, probably until 2017 at the earliest and 2018 at the latest. That's nuts. I think the only other system I had for a duration similar to this was an E8400. That thing can still run lots of games coming out today at 60+ fps.

Khorne fucked around with this message at 21:58 on Aug 5, 2015

Khorne
May 1, 2002

Josh Lyman posted:

Upgrading your GPU to a 970 will be a significant performance increase.
I know the GPU could be upgraded and GPUs keep advancing. The truth is the 670 still performs great because I am on 1920x1080 and don't use particularly high settings ever. I prefer a steady 100-140 fps with no dips which means even on a 970 I'd be turning things down a bit. The 670 actually runs most modern games on medium-high settings at those frame rates. And for single player stuff you can turn things up a bit more and get 60+ fps still.

But GPU and my Samsung 830 256gb from the original build are the two things that might get upgraded before I actually build a new pc. Mostly because 256gb isn't enough for a primary hard drive and juggling applications onto other hard drives is obnoxious. If I hadn't installed an aftermarket cooler on the 670 I'd want to get rid of it just because of how obscenely loud it was.

Tab8715 posted:

Where did you read this and what's their reasoning?
Some of the reasoning for upgrading from sandy/ivy bridge is "those motherboards only have 2 SATA 3 ports the rest are SATA 2" and "USB 3 ports aren't native/are generally a bit more scarce" and "the PCI-E pipeline is poo poo so you can't run PCI-E ssds without cannibalizing GPU bandwidth!" Those were the three arguments I saw on a few review sites.

I don't know what other people are doing, but PCI-E ssds seem pointless at this moment. And I'm not going to have more than 2 ssds, so why would I want more than 2 sata 3.0 ports? And USB 3 being native or not is irrelevant because it performs the same. On top of all that, if I actually needed any of these things I could build a current gen system when I need them instead of now when I don't. And let's not forget that even though modern ssds can saturate sata 3, the pci-e ssds perform pretty much identically for most real world applications.

I feel like a luddite making this post. If I were building a desktop today I'd go with Skylake, but because I have a decent ivy bridge system there's still no compelling reason to upgrade. Laptops have come a long way since 2012 at least.

Khorne fucked around with this message at 09:08 on Aug 6, 2015

Khorne
May 1, 2002
I posted in the wrong thread.

Khorne
May 1, 2002

WhyteRyce posted:

I don't know much of anything in this space, but not too long ago I remember hearing complaints that CUDA sucks to write for, won't give you the performance boost they claim unless you write it correctly, and requires you to hire expensive software guys to write and maintain good CUDA code. I'm assuming those complaints were overblown?
CUDA and OpenCL are both not particularly hard to write for if you have a background in C and, ideally, a basic understanding of gpu architecture. If you want to truly leverage the platform then it takes some serious effort depending on what you are doing, but getting better performance than optimized non-gpu implementations, for things that should be done on a GPU, is generally trivial.

I don't know what performance boost they claim. For financial stuff I'm sure CUDA developers cost a fortune. For scientific stuff you're looking at fairly average pay if you can find someone for the job.

Grundulum posted:

Also, have fun if you don't use C as your scientific language of choice. CUDA Fortran exists, but only for one proprietary compiler and is just as much of a pain in the dick to code for. The next HPC purchase I make will probably be a Phi.
Your project doesn't have to be in C. You write CUDA/OpenCL code in their version of C and it cleanly integrates with pretty much any reasonable language. I've personally done implementations for various research groups that use C++, java, and python.
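To give a rough idea of what "their version of C" looks like, here's a minimal toy sketch (my own illustration, not anything from those projects): a kernel plus a C-ABI wrapper that, compiled into a shared library, could be called from C++, Java (via JNI), or Python (via ctypes/cffi).

code:
// Minimal illustrative CUDA example (toy sketch, not production code).
#include <cuda_runtime.h>

// The kernel itself is just C/C++ with a few extensions.
__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) data[i] *= factor;
}

// Plain C entry point so other languages can bind to it.
extern "C" void scale_on_gpu(float* host_data, float factor, int n) {
    float* dev_data = nullptr;
    size_t bytes = n * sizeof(float);
    cudaMalloc(&dev_data, bytes);
    cudaMemcpy(dev_data, host_data, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;   // enough blocks to cover n elements
    scale<<<blocks, threads>>>(dev_data, factor, n);

    cudaMemcpy(host_data, dev_data, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dev_data);
}
The host-side boilerplate is the same no matter what language the rest of the project is written in.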

If someone wanted me to use Cuda Fortran I'd probably quit my job. Having to deal with fortran code frequently is unfortunate enough, but taking something that is well suited to one language and bringing it to another language it has no business being associated with would really irk me. There's a reason Fortran isn't used for systems programming and C is. With that said, I'd work on a fortran project that used CUDA but it would have to be called through a binding/dll of some kind and not be done in CUDA Fortran.

Khorne fucked around with this message at 19:37 on Aug 19, 2015

Khorne
May 1, 2002

Richard M Nixon posted:

I can't imagine what market they were targeting where you'd need huge parallelism but your dataset was insignificant in size.
Phi, and gpu computing in general, is pretty decent for molecular dynamics simulations. It's likely pretty decent for lots of other physics simulations too. The datasets are small in size because you have a small number of floats, let's say three, for each particle. And then you have some number of particles, and it fits pretty easily in far less than 8gb of memory. It's also trivially parallelized with no relevant branching.
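As a rough back-of-the-envelope example (made-up numbers, assuming single-precision positions and velocities; real codes store a bit more per particle):

code:
// Toy dataset-size estimate for an MD-style simulation (illustrative numbers only).
#include <cstdio>

int main() {
    const long long particles = 10000000LL;   // 10 million particles
    const long long floats_per_particle = 6;  // x,y,z position + x,y,z velocity
    const double bytes = double(particles) * floats_per_particle * sizeof(float);
    std::printf("dataset size: %.2f GB\n", bytes / 1e9);  // ~0.24 GB, nowhere near 8gb
    return 0;
}
Even with extra per-particle state you stay well inside a single card's memory.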

Khorne fucked around with this message at 19:19 on Oct 20, 2015

Khorne
May 1, 2002

Kazinsal posted:

Dumbass friend of mine needs a quiet 1366 cooler on a budget. Like, a $40 budget.

212 EVO with the fan speed dramatically lowered?
Is he replacing the cooler on his graphics card? Because most of the noise from the case will come from the gpu, and those have mostly all been 60dB monsters ever since the gtx 6xx days (at least). Make sure he avoids the blower design and that helps a little bit, but they are still obnoxiously loud for the most part.

Khorne fucked around with this message at 21:05 on Nov 30, 2015

Khorne
May 1, 2002

Smol posted:

Is Prime95 just brutal on Skylakes? It caused like 20C higher temperatures than any other stress test.

(I went and bought a 6700k, trying to OC it now)
It depends on the settings you use, but yes prime95 does more than throw "100%" load at the cpu. It's brutal on everything.

It's the best tool for making sure your overclock is stable and not going to have too high of an average temp under stress.

I'd also recommend doing other things while prime95 is running after you think it's stable. While ocing my i7 3770k I ran into a few situations where it could run prime95 for 24+ hours, but if I tried to watch a youtube video or browse websites while prime95 was going, prime95 would crash or I'd blue screen. Getting the voltage settings just right was a pain.

Khorne fucked around with this message at 20:55 on Jun 7, 2016

Khorne
May 1, 2002

mediaphage posted:

Honestly if it's that multithreaded (I assume because the mods are all separate?) and you can afford it, maybe it's time to move up to something with more than four cores.
Modded MC primarily uses a single core. It's not very multithreaded at all.

Khorne
May 1, 2002

necrobobsledder posted:

Sandy Bridge will be the first CPU to become unusable because the supply of LGA1155 motherboards was exhausted long before the CPUs failed. My Xeon E3-1230 is going to be my Kubernetes machine into the next decade it seems.
Other older gens aren't doing much better. I've been eyeing some used server processors for some time and the motherboard prices went from $250 to $450 over the past year. RAM prices have also nearly doubled. Which sucks, because I wanted to go from 16gb->32gb on the ivy bridge machine I use at home. 4.5ghz ivy bridge on air still performs really drat well vs modern processors for gaming and general stuff.

I might end up upgrading just to donate my current machine to a nephew or something. I feel like the biggest drat old person/nerd hybrid alive for not wanting to pay current DDR3 prices because of what they once were.

Khorne fucked around with this message at 00:19 on Jul 1, 2017

Khorne
May 1, 2002

Combat Pretzel posted:

So what exactly is the difference between a server and desktop product these days anyway? Especially when the purported desktop product supports all relevant server things like ECC, high bus bandwidth and virtualization?

That slide is pretty much a pants-making GBS threads admission.
Compare server motherboards to home ones. Compare xeon configurations (especially TDP/core count/cache) to desktop. Intel's server products blow the home ones away, although not for games and common home tasks, with the notable exception of video encoding.

The difference between a server and desktop product is why AMD doesn't exist in the server market anymore. I am not even sure when the last time AMD had a competitive server product was, and that they escaped the bulldozer architecture without a legitimate lawsuit from vendors or customers in that market is impressive. I'm biased because I'm on the HPC side of things.

For home use, the current gen AMD cpus are good for the first time in a long, long time.

Khorne fucked around with this message at 11:47 on Jul 14, 2017

Khorne
May 1, 2002

movax posted:

I'm convinced buying a 2600K at launch was the best CPU purchase I've ever made. I think my laptop is also Sandy Bridge, and my old MBP is a Nehalem; unfortunately, that one limps a little bit with websites that are full of lovely JavaScript. Or that's just an old OS X install and Safari sucking...
Buying a 3770k was a solid investment for me. My CMOS battery died last week, and my score in benchmarks is higher than many people with current gen cpus. My only complaint is that I have win7 home premium, which has a 16gb ram limit, and I really want 32gb or more of ram now. Web browsers eat ram like candy in 2017, and I run a few vms. I've been debating just switching to linux because I don't even have time to game lately and it's a more convenient operating system for everything other than gaming. ECC would be nice too, but it's not like newer generation home stuff has ecc either. I don't think I've ever kept a primary desktop computer for going on 6 years before.

I keep looking to upgrade and the motivation just isn't there yet. I care too much about >=144 fps in games to play at over 1080p, and everything else it does just fine.

Khorne fucked around with this message at 18:35 on Jul 23, 2017

Khorne
May 1, 2002

bobfather posted:

Your 3770k will overclock to at least 4.3 on air, easily, so do that if you haven't yet.

If you bought Windows 7 and have a key, that key will upgrade you to Windows 10 Home. Just download the Windows 10 installer, build a USB boot drive and input your 7 Home key.

Otherwise you can use the Windows 10 Assistive Technologies offer and upgrade to Win 10 Pro for free.

No good reason to stick with 7 these days, especially if it's holding you back.
It runs stable at 4.4-4.5 on air. Peak temperature is low-mid 70s C, I think, as long as the room it's in isn't super hot. Even 4.6 runs alright at the same voltage, but it's a little more volatile. I've only used that in h1z1: kotk because that game is coded awfully and is ridiculously cpu bound.

It's great to know I can still upgrade my windows install.

Actually, I've always had a question about overclocking it. I have it set to a x45 multiplier and 1.330v, but it usually draws 1.28v, maybe 1.31v, running prime95. Is this because of my motherboard, just how it is, or what's the deal? I have an ASRock Z77 Extreme6 if it makes any difference. I also noticed that putting the voltage higher than 1.325v-1.330v, to say 1.35v, made little to no difference in stability at higher clock speeds.

Khorne fucked around with this message at 19:10 on Jul 23, 2017

Khorne
May 1, 2002
Why are people so hyped about ryzen/threadripper? It seems competitive on paper, but if you look at benchmarks it has 7%-10% worse single core performance than a 6 year old intel architecture. Which is mostly what matters on the desktop. It's nowhere near competitive with recent intel stuff unless your life is encoding videos, and at that point intel has a better solution there too: buy used 10 core xeons for $70 each and put them in a dual socket motherboard.

I have some mild hype because intel might release some 6 core+ chips at a competitive price to compete with the people easily duped into buying an inferior AMD product, but that's about it. I'm an AMD fanboy at heart, as everyone should be, but man who spends money on them anymore. :(

Khorne fucked around with this message at 04:33 on Jul 28, 2017

Khorne
May 1, 2002

Malloc Voidstar posted:

source your press releases

edit: your addition about "competitive price" is the point, AMD's CPUs here are slightly worse on performance but price also matters and they are absolutely competitive/winning on performance for the price
The i7 kaby lake x chip is $345 right now. It has a ~30% performance gain single core over the $300 ryzen processor.

Anime Schoolgirl posted:

Games are not the only thing that matters to people, hard as it may seem to believe.
Things that matter on the desktop that won't be limited by poor single core performance: ahh, umm huh, benchmarks and video encoding. Even things like web browsing or compiling (even gcc with a favorable -j for the ryzen) will be dominated by a 4c with higher single core performance vs the 8c.

Anime Schoolgirl posted:

Sure, people can afford 500 dollar used motherboards.
You can get LGA2011 dual socket motherboards for around $280 new still.

Most things people do on the desktop are heavily bottlenecked by single core performance.

Khorne fucked around with this message at 04:53 on Jul 28, 2017

Khorne
May 1, 2002

PerrineClostermann posted:

Like...? Most people aren't gaming. Even the gamers don't spend the majority of their time in game.
Like anything except encoding video? We're comparing a 4 core and 8 core processor here. Not a 1 core and 8 core.

Most people aren't running a ton of VMs with active cpus or hpc applications.

Anime Schoolgirl posted:

I think we found the first person in the thread who actually bought a Skylake-X chip
Nah, I have an i7 3770k and it has around 10% higher single and quad core performance than an overclocked ryzen. I paid around $330 for it 5 years ago.

Khorne
May 1, 2002

PerrineClostermann posted:

TIL multitasking is single-threaded
I mean, I know what you are saying, but it kind of is when things aren't devouring CPU or particularly latency dependent. Especially when you have 4 physical cores/8 threads vs 8 cores/16 threads. What do you expect people to be running? A video + 20gb of ram worth of browser tabs + a game + a vm + messaging crap + text editors (or even something more awkward like eclipse) will run just fine at once on 4c.
I mean, admittedly it depends on what you are compiling and on a number of other factors.

I want more cores too, but the cost of significant single core performance is too much.

Khorne fucked around with this message at 05:02 on Jul 28, 2017

Khorne
May 1, 2002
Web browsing is significantly limited by single core performance. If you have 2+ cores, your single core clock speed is going to matter significantly more even if you have 100+ tabs open.

Combat Pretzel posted:

I'm not even sure who he's preaching towards. It's almost a given that most people posting in this thread are more of the power user sort and can make use of a higher number of cores. Sometimes I dabble in video, rendering, I work with VMs and like to multitask like an idiot. The more cores, the merrier.

Anything with more source files than you have fingers on both of your hands. OK, that's maybe a bit of hyperbole, but you get the idea. Compiling the ground control software of the multirotor firmware I'm working on takes like 10 minutes with -j6 on my 5820K. I'm sure it'll go faster with more cores, since it's around 3000 C/C++ files.
Is -j6 faster than -j9/-j12 with the 5820K and that particular build?

Video and rendering both make great use of more cores. VM stuff can also.

Khorne
May 1, 2002

Anime Schoolgirl posted:

That's not true anymore unless you're still using Firefox 24. All browsers these days will eat each and every core they want.
The actual performance limiting code still runs single core. Your average, or even above average, javascript heavy site is not going to leverage parallelism very well at all.

I am a huge drat nerd who has hundreds of tabs open at any time like everyone else in the thread.

NewFatMike posted:

Not everyone wants to buy used/warranties are cool and good also.

New motherboards also support new features like M.2/NVME and USB C.
NVME is good for sure. You can kinda use it on older motherboards if you want to mess around.

Khorne fucked around with this message at 05:36 on Jul 28, 2017

Khorne
May 1, 2002

Anime Schoolgirl posted:

Except they do in Chrome. My lovely braswell student computer is significantly faster than a core m3 on loving Facebook.
I am sitting here on a 5200u (2c) and facebook is instant even with a bunch of messenger tabs, a video playing in chrome, visual studio code, sublime text, a video call, a few ssh sessions, and some other crap open.

PerrineClostermann posted:

Chrome regularly takes up 20-40% of my CPU while I'm browsing and not doing intensive tasks. I recently slimmed down to 95 tabs. 2600k.
20%-40% of your 2600k probably means it's only using 2 cores or so (25% of 8 threads is 2 threads fully loaded). It does the same to my 3770k.

Khorne fucked around with this message at 05:24 on Jul 28, 2017

Khorne
May 1, 2002

SourKraut posted:

Source? I'm curious.
When you ask for a source, I assume you're asking who I'm quoting with my not-so-serious posts. I just took the time to look into Ryzen and was a bit disappointed it didn't motivate me to upgrade or to use it for a home server project. It's a pretty solid platform and the first time AMD has been anywhere near competitive in ages.

If you mean on benchmarks, check user submitted benchmarks or any aggregate site. The numbers are roughly: the 3770k just flat out beats the 1700 stock or oc'd; it's even with the 1700x stock and has a ~6% edge oc'd; it gets beat by the 1800x stock and has a ~2% edge oc'd; and obviously at that slim of a margin it gets smashed on anything that leverages the extra cores. And at quad core the 1700x and 1800x both outperform the 3770k slightly, if I remember right.

Kinda moot given that no one should buy a 3770k in 2017 because even used they demand too high of a price, but it's not like performance got worse on later gen intel cpus.

PerrineClostermann posted:

...And it's definitely not the only thing I do at once.
Good news: there's 2 more cores, and 1 of the 2 browser cores isn't working that hard. It runs into the issue of "what are you doing": if you're tabbed into the browser and doing stuff, you're not tabbed into something else.

I don't think web browsing justifies more cores in any way, and even the sites that benchmark new cpus with web browsers agree as far as I've seen.

8+ cores is definitely a big reason I am looking forward to upgrading in a year or two. At the rate things are going it looks like I'm waiting for my computer to die a horrible death or for 7nm to hit, though, which will probably be longer than a year or two.

Khorne fucked around with this message at 05:52 on Jul 28, 2017

Khorne
May 1, 2002

NihilismNow posted:

Where do i get these used 10 core xeons for $70?
People on Ebay want a lot more for their E5-2680 V2's.
The days of getting the 10 core ES with release steppings for $70 by messaging the seller are over. Sellers realized they're pretty much the release chip and started asking significantly more. You can still get the 8 core sandy bridge gen chips for around that if you hunt around and bother to message sellers. Especially if you're buying a pair.

Ryzen is preferable to getting an old xeon for desktop use. The TDP difference alone should pay for itself if you plan on using it heavily, it likely has better gaming and single threaded performance, and not having to "hack" the bios for nvme support is a big plus. If it's not going to be under constant heavy load, if you don't want nvme, if single thread performance isn't a sticking point, and if you value ECC, the water gets much murkier. Technically, ryzen/threadripper support ecc, but do/will affordable motherboards exist with ecc support?

Twerk from Home posted:

So we've gotten to the bottom of this: You're a sucker if you buy CPUs less than 3 years old. Why get a Xeon v4 or Gold/Platinum when you can get a v1 or v2? Why buy an X299 chip when you can buy 2014's finest 5820K? Why buy memory in 2017 when you could time travel to when it cost half as much 18 months ago?
Nah. The used cpu/motherboard market on the desktop is a wasteland. For server grade stuff, the used market is alright depending on when you buy and if you take your time and haggle a bit. The problem is that if you have a competitive desktop chip from the past 5 years there's very little incentive to upgrade; a lot of extra cores is a potential incentive, just like nvme. However, when the single core performance is worse than your existing chip from 2012 it really makes it a hard sell unless your workflow is hungry for more cores and you can't offload it.

I do lots of cpu intensive stuff on the hpc clusters I have access to, but if a task needs 4 or fewer cores it runs notably faster on my desktop, even vs fairly decent broadwell xeons. I still don't usually run those on my desktop, because sending them to something that does nothing else and checking back tomorrow is real convenient, and in all honesty, depending on what I was doing at home, I'd build a system with a large number of cores that's not my desktop and do the same drat thing. I realize this isn't an option for certain serious hobby/work stuff.

Khorne fucked around with this message at 15:38 on Jul 28, 2017

Khorne
May 1, 2002

Cygni posted:

For real 7nm, everyone basically needs EUV to work, which looked shaky for like a decade there. The machinery exists now, finally. Now it's going to be a race to see who can actually make a product with it that works and has realistic margins. I wouldn't be surprised at all to see everyone stall out again here.
EUV has worked for a while; it's just about margins. EUV has a large upfront cost and significantly lower yields than the current process.

I like how those guys have "5nm" on their 3 year roadmap. It'll be interesting to see when quantum tunneling comes into play and how it is dealt with.

Khorne fucked around with this message at 12:13 on Aug 4, 2017

Khorne
May 1, 2002

ufarn posted:

I literally have no idea what laptop I'd get if I had to move from MacBooks. ThinkPads were awesome, but after Lenovo bought them they don't hold the same allure as they used to.
Thinkpads are still awesome. Just do your research. One of the gens had a garbage trackpad. Some of the ultra thins can have thermal issues if you block their vents, to the surprise of no one. Otherwise, no real issues; the "malware" thing is overblown garbage. I have a t450s and it is pretty good. Very durable, I throw it around and do all kinds of outdoor things with it. The screen is great (1920x1080 @ 14" ips with respectable color/brightness). The ports are kinda questionable, but having ethernet + vga without any adapters is solid. HDMI needs a mini DisplayPort adapter.

The only real con is that it's tedious to get inside and I worried I would break something. There was no way I was getting an SSD or more RAM from lenovo; they mark up prices a lot and don't really describe what their ssd is. I bought and installed those myself.

Khorne fucked around with this message at 16:41 on Aug 18, 2017

Khorne
May 1, 2002

priznat posted:

I've never personally owned an Asrock motherboard but we buy a lot at work and it'll probably be my go to on the next build. Asus has gone too wacky with the floater lights and a lot of extra poo poo no one needs.
ASRock is good. Just look at the components they are using and make sure none are a hassle; not sure how relevant this is now. With my z77 build it took around 6 months for decent USB 3.0 drivers to come out for one of the hubs they used. Before then it would just cut out and reconnect constantly, every 5s-8s. ASRock themselves make good enough products that last, with the things you want, for a decent price.

ColHannibal posted:

Western digital for platter drives.
If 8tb+ sure, even seagate is good in 8tb+, but if it's 6tb or lower I'd go with HGST. Which is owned by WD now, but the HGST drives have insane reliability for the price.

If you're in the US, Best Buy's easystore (or whatever) 8tb externals go on sale for $159.99 regularly, have wd red drives with a 256mb cache in them, are easy to shuck, and you can RMA them. Not sure I'd even bother with any other platter drives these days at the consumer level.

Khorne fucked around with this message at 16:36 on Sep 2, 2017

Khorne
May 1, 2002

Craptacular! posted:

For now I'm just going to keep my Ivy and see what I can reach at 1.2v or something. I saw some guy achieved 4.5ghz at 1.4 and everyone was telling him that that was too much voltage and he was killing his chip, so I figure at either stock voltage or just a little over I'll see if I can get what I want and that's that. Delidding is absolutely not an option as I can't afford to replace a CPU right now, and actually to be honest I could totally just get by without OC since it's not like I ever complain about this PC. I just decided to do it now that the generation is five years old and no more documentation about how to OC it is going to appear than what's already out there.

This would, literally, be the first time I even try to OC anything. I go back a long ways (386 -> K6-2 -> Coppermine P3 -> Prescott P4 -> Conroe C2D -> Ivy i7) and never bothered to until now. But I'd feel more comfortable replacing this nearly 5 year old Corsair pump with something new, and I also thought my case could be prettier even when it was new, so why not.
I run an i7 3770k at 4.5ghz with a 1.35 (1.31-1.32 actual) draw. It might actually be set to 1.325 or 1.335 or something; I'm not sure and not at my desktop. I have since release, so over 5 years at this point, on air (noctua nh-d14), and it's fine. For most of its life it kept itself fully clocked and I'd run 100% load stuff overnight (8-12 hours per day).

My advice would be to go higher. Just pick a safe enough voltage and see where you get. My chip doesn't run stable at 4.6 no matter how much voltage I throw at it; at higher voltages it's just as unstable as it is at around 1.35 or so.

Has anyone actually had a CPU die in the past 5 years? I can only think of one anecdotally, and it was an amd chip crushed by a too heavy copper heatsink that required multiple high risk modifications to even get into that state.

I probably shouldn't post this here, but I have a P3 that ran 24/7 for over 12 years with a nickel jammed between the heatsink and cpu to force contact on the other half of the heatsink. I retired it because I had no use for it anymore and a better replacement. Why there were coins in between my heatsink and cpu is another story entirely.

Khorne fucked around with this message at 00:18 on Sep 28, 2017

Khorne
May 1, 2002

MaxxBot posted:

From the linus vid
What the gently caress is this chart showing?

It says value - productivity, but then it gives "MSRP" and "Total System". Presumably it's some coefficient involving price and productivity, but, uh, who the hell knows how it's calculated. The chart doesn't even appear to be sorted by any known metric, because as far as I know the Threadripper 1950x is the most expensive setup on it.

Khorne fucked around with this message at 04:38 on Mar 15, 2018

Khorne
May 1, 2002

Michael Jackson posted:

more like MS in Paint
Hey, the other graph posted makes perfect sense.

Khorne
May 1, 2002
Debating delidding my i7 3770k. Temps are going up and it has been almost 6 years so I am going to have to reseat it anyway.

Does anyone have decent resources on doing it? I have access to a lot of tools. So I don't think I need to buy a kit, but I also am not entirely sure what I need to do to safely remove it. Or perhaps more importantly, what I should avoid doing.

I'll probably record video of it happening, and if I break it I'll post it for the amusement of the thread.

Khorne fucked around with this message at 17:26 on Nov 15, 2017

Khorne
May 1, 2002

Palladium posted:

The real surprise is how a 4.8GHz Sandy Bridge is now unable to keep up with a Haswell at only 3.6GHz.
If you look at actual games with real settings you'll see it reasonably keeps up with the 8700k, never mind a haswell. Those benchmarks come later in the review.

Khorne
May 1, 2002

Measly Twerp posted:

I was just responding to the idea that it's sensationalised.

For me at least it made sense to go Ryzen 1700 as an upgrade last year. More threads was more useful than absolute speed, especially when trying to replicate a complicated server setup locally with virtual machines. The slowest part of web development has always been the database; if I lost performance there I'd notice it right away, and I imagine it would be much the same for anyone with a Xeon workstation.

I actually upgraded from an i3-6100 desktop and also a MacBook Pro that work supplied. Both of these were absolutely excruciatingly slow when dealing with larger website databases. I'm talking about pages taking multiple seconds to render where on the production environment they take less than a tenth of a second.

But many people I know work on similar awful hardware, and losing 10-15% of database performance would absolutely loving suck. It's bloody frustrating when you're just trying to iterate on an idea quickly and the page load just drags on forever.


What does this have to do with anything? Even if there are more gamers here than developers, that doesn't mean anyone is "buying the sensationalism".
I've never found local databases to be a bottleneck, but I can also ssh tunnel to dev databases for larger data sets.

Khorne fucked around with this message at 01:19 on Jan 25, 2018

Khorne
May 1, 2002

Shaocaholica posted:

Are modern games even taking advantage of that or is it more to cover the overhead of capturing/streaming?
You have things running on your computer that aren't games. And it's wonderful to have some spare cores laying around to throw at those tasks.

If you only open a game and have nothing else open then it matters a whole lot less.

Khorne
May 1, 2002

bobfather posted:

The better z68 and z77 boards still sell for $$ on Ebay. Also 2600K and 3770K chips still fetch ~$120-130, which is crazy.

You can also convince people that it's a good choice to go with these used parts by emphasizing just how (relatively) cheap used DDR3 is versus new DDR4.
It's not that crazy given that the 2600k and 3770k outperform the current gen ryzens, even overclocked, and match or exceed non-k intel chips (not overclocked) in gaming and other single thread limited tasks. Of course, they get smoked in workstation and productivity related stuff. I definitely wouldn't buy one over an r5 1600 or 1600x right now. The RAM savings are kinda nice on an extreme budget, since DDR3 wasn't hit with the price fixing.

With z77 you can bios hack m2 support as well, but there's no real point because SATA ssds are pretty much identical in real world performance and cost way less.

Khorne fucked around with this message at 18:31 on Apr 6, 2018

Khorne
May 1, 2002

spasticColon posted:

My 2500K won't go past 4.2Ghz no matter what I do which is one of the reasons why I'm not going to bother with overclocking next time around.
That's the literal worst 2500k I have ever heard of, and I thought the guy who couldn't hit 4.5 @ 1.35v had it bad. The worst chip I had heard of before this was someone who needed 1.36v to hit 4.4 and 1.42 to hit 4.5.

Palladium posted:

Yeah, because I'm sure everyone will honestly report their actual OCs in a subject matter mostly fueled by e-peen dick waving.
It's not about epeen when it comes to the last 7-8 generations of intel processors.

Heck, for the last few gens we have stats from siliconlottery on the percentage of processors that can hit what clock at what voltage.

Khorne fucked around with this message at 05:21 on Apr 13, 2018

Khorne
May 1, 2002

JazzmasterCurious posted:

Fake. It should have a base clock speed of 4.7 GHz with the boost maxed at 5 :v:
A-B testing shows consumers like a big turbo boost and some overclock headroom. Can't do things like AMD and provide the chip's actual speeds. Need to use lovely TIM and ship the CPU at a clock 30% or more below what 99% of our binning can do.

Khorne fucked around with this message at 15:54 on Apr 16, 2018

Khorne
May 1, 2002

BobHoward posted:

(you missed the joke)

(hint: 4.77)
My post was a joke too. Well, the second half isn't a very good joke.

Khorne
May 1, 2002
When is intel announcing 8 cores?

Khorne fucked around with this message at 22:19 on Apr 18, 2018

Khorne
May 1, 2002

BIG HEADLINE posted:

To be fair, both AMD and Intel are out of ideas - their only play at the moment is shoehorning as many cores as they can onto substrate. I mean, are we going to see LGA16528 someday?
LGA stands for Last Great Architecture.

Khorne
May 1, 2002

GRINDCORE MEGGIDO posted:

Be ironic if Intel's Keller designed architecture sucks and the RajaGPU is outstanding.
Keller was mostly responsible for the shelved AMD ARM stuff that never came to light. The people who designed Zen largely still work for AMD.

Khorne
May 1, 2002

Space Racist posted:

Has anything been confirmed about exactly *why* Intel’s 10nm process is so boned compared to the rest of the industry? Global Foundries and TSMC seem rather confident of hitting 7nm, while Intel is 3+ years past their target date and still sweating the ramp for 10nm.
7nm/10nm are marketing numbers. Intel's 10nm is equivalent to everyone else's 7nm.

I should put "roughly equivalent" because they are different processes, but the density and performance should be similar.

The biggest difference is probably that the 7nm coalition is borderline "everyone who isn't intel". It's a combination of IBM, Samsung, and Globalfoundries.

TSMC should have 7nm at production capacity around when Intel has 10nm up. If not sooner.

Khorne fucked around with this message at 15:03 on Apr 28, 2018

Khorne
May 1, 2002

Anime Schoolgirl posted:

it's like they could use an excellent thermally conductive bonding material for this
To be completely fair, soldering the spreader to the die is somewhat complex and AMD has a bunch of cool patents for it.

Intel is also able to cut a lot more corners by not soldering than just the TIM. You really see it in some of their 14nm processors.

The TIM is also more reliable than solder in theory, but in reality we see almost no reports of damage to soldered IHS besides in the LN2 community which is super niche.

Khorne fucked around with this message at 21:43 on May 3, 2018
