|
movax posted:I think if (maybe when) Light Peak falls flat on its face, the biggest benefit will be the industry experience gained in the R&D and manufacturing of mass-market consumer optical devices (kind of like how Toslink took off super fast and is in drat near every piece of A/V equipment now). The thing with the optical transceivers is that if you're going to make them in batches of 100,000 for mass consumer-level stuff, they get really stupidly cheap. The only really lovely part will be the fact that you need a fusion splicer to make cables.
|
# ¿ Sep 19, 2010 00:22 |
|
fishmech posted:You realize that people just using Intel's onboard graphics has been the norm for nearly a decade now, right? The Sandy Bridge stuff seems to be more about snagging people who do some gaming as well. It also lays the groundwork for casual gamers, so PopCap can make even prettier games that will work awesomely on the new integrated video chips.
|
# ¿ Sep 19, 2010 02:15 |
|
JawnV6 posted:And it's possible to demo silicon loooooong before any of those processes are ready to kick off, so I'm not sure there are enough points to complete the curve. Demo silicon that's about 90% as awesome as production-volume silicon is generally available 6-18 months before production, depending on pretty much everything from node complexity to the phase of the moon. Early yields of pre-production silicon are universally dogshit, but you still get a few usable CPUs off the wafer for debug, unit testing and general Foundry Alchemy. I'm still rocking my glorious 5-year-old Lynnfield i7 860, and I probably won't upgrade until Skylake is a thing.
|
# ¿ Sep 10, 2014 03:11 |
|
go3 posted:You're not going to convince the IT guy since he is a cast-iron idiot. If you want change, convince the people above/around him. "My lovely laptop I got for $300 at Walmart runs our software twice as fast as the machines you just finished building for us, why is that? Were they even cheaper than $300?"
|
# ¿ Sep 18, 2014 06:14 |
|
SwissCM posted:Because that's allegedly what happened with TMT. Blu-ray licensing requires the licensee to put a bunch of security crap into their software. Pretty much this. The license agreement contains so much mealy-mouthed horseshit that, depending on whose interpretation you use, it would require you to basically make the software nonfunctional due to how much added crap they demand. And the interpretation that counts is the one that has the entire Blu-ray consortium behind it, because contract-breach lawsuits are so cheap and easy to deal with in court. It's like finding a particularly shiny vice, cranking it around your nuts, then giving it to a schizophrenic homeless man who wants to pawn it for a few bucks.
|
# ¿ Oct 8, 2014 02:31 |
|
Agreed posted:That's incredible! It's more like "If this thing has any sort of software fuckup, everyone is going to die horribly, publicly, and immediately. We need to take whatever steps are needed to make sure that doesn't happen." Systems-control stuff these days is kinda sorta similar. The multiple-version, 3-way voting system is fairly popular for safety-critical systems on things that fall out of the sky. Same with having incredibly redundant hardware to handle it. On milspec stuff, you can have hardware with 100% ECC correction on the memory, on each interface bus, inside the processor, and on each instruction. You could have one bitflip in RAM, another bitflip on the bus due to interference, an instruction that got corrupted due to a freak magnetic issue, and still get the right answer on the output. All at 95°C. Not very fast, mind you, but there isn't a lot of cruft in systems that need stuff like that. Methylethylaldehyde fucked around with this message at 03:01 on Oct 22, 2014 |
# ¿ Oct 22, 2014 02:57 |
|
cisco privilege posted:This but 2x Sanyo Denki H1011's on a controller. The TFB1212GHE looks like a better fan, higher airflow at almost twice the static pressure. And at 6 dB louder, a real steal for persistent tinnitus.
|
# ¿ Dec 12, 2014 21:14 |
|
I just hope my Nehalem lives long enough that I can upgrade to Skylake without having to pick up another motherboard or proc before then. I think I'll end up with a faster proc, something like a 50-70% IPC improvement, which should be really handy.
|
# ¿ Feb 3, 2015 20:21 |
|
Darkpriest667 posted:I'm on a 2600k that I haven't even overclocked yet. The only thing that has pushed my CPU (besides Folding@home) is an unoptimized Star Citizen while recording lossless video. I think I will wait on Broadwell-E but DDR4 is nasty expensive for almost no real-world benefit. 6 months should get the prices down from 'enjoy your vicious colon reaming' to 'at least we wined and dined you first, you big babby'. Still not as ideal as buying RAM for $6/GB, but 16GB for $300 isn't far off what I paid for my RAM way back in the time before time, when DDR3 2133 was hot poo poo bleeding-edge stuff. Hell, unless you get the fancy gaming poo poo, it's barely more expensive than DDR3; an 8GB stick of Crucial is $100, and it's come way down since it first launched.
|
# ¿ Feb 4, 2015 12:50 |
|
MrYenko posted:For me, moving from Nehalem to Skylake isn't even about performance, it's just to get out of my ancient X58 chipset motherboard, and even that isn't because of speed concerns, but because the thing is flat-out old. Dead USB ports, it hasn't had a functioning onboard network interface in years, and I really feel like it's the weak point of my machine currently. Don't forget the fact that it's some horrible mix of PCIe 1.1 and 2.0a, and you still have both slots available after the video card.
|
# ¿ Feb 10, 2015 21:14 |
|
isndl posted:Those might be partially defective chips that are salvageable by pumping massive amounts of power through them. Not ideal, but better than simply throwing them away if their yields are struggling. Depending on how they fuse the chips, it could also be a frequency/voltage/TDP tradeoff, where marginal chips with a really leaky set of cores get those cores fused off and the TDP freed up on the other half goes to bumping the frequency/voltage to compensate. 10 cores with a 160W TDP due to a flaky subset of cores turns into a 4670K with ECC RAM.
|
# ¿ Feb 14, 2015 04:34 |
|
BobHoward posted:E7 is also held to much more stringent reliability requirements. They're very conservative with binning. That, plus the extra power used by RAS and scalability features, is why you end up with much lower top-bin frequencies than the non-behemoth product lines. Yeah, I probably picked a lovely part to compare it to; it was the first 'stupid fast, 4 cores' thing I could think of. It's still probably TDP budgeting: disable the leaky cores and rebin it as a different SKU. That improves yields and allows for a sort of natural market segmentation effect without needing to actually produce more than one set of silicon masks. They can certainly afford to be super picky about which parts end up in the $5k CPU bin, but even partial recovery on a high-cost item is better than no recovery.
|
# ¿ Feb 14, 2015 14:12 |
|
My Rhythmic Crotch posted:Dual 10Gb NICs built-in is pretty cool. We may finally be seeing the start of the cheaper 10GbE networking trend.
|
# ¿ Mar 11, 2015 22:45 |
|
Lord Windy posted:Is Hyper-Threading decent? If you had two equally powered devices, would the one with Hyper-Threading perform better in multithreaded activities? Between 5% and 55% better performance depending on what exact mix of threads you have, and what part of the CPU they're using. In cases where you're doing the exact same poo poo with every thread, it'll be much closer to the former than the latter.
|
# ¿ Apr 28, 2015 11:49 |
|
Twerk from Home posted:What's going on with GTA V then? All my friends playing it report that disabling HT in BIOS gets them 3-5 fps in the in-engine benchmark, and most of the FAQs I've seen about it say "turn off Hyper-Threading if you have an i7". Really lovely programming would be my guess. There is an API call you can use to determine which logical cores share a physical core and schedule around that, but it sounds like it's just piling the threads on CPUs 0-3 and then loading up cores with contention or something.
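To make the idea concrete, here's a minimal sketch of "use one logical CPU per physical core" scheduling. The helper is hypothetical, not the actual OS call (on Windows that would be GetLogicalProcessorInformation), and it assumes SMT siblings are enumerated adjacently, which is common but not guaranteed:

```python
def one_per_core(logical_cpus, smt_width=2):
    # Hypothetical helper: pick one logical CPU per physical core.
    # Assumes SMT siblings are enumerated adjacently (cpu0/cpu1 share a core);
    # real code should query the OS topology instead of assuming this layout.
    return list(logical_cpus)[::smt_width]

print(one_per_core(range(8)))  # -> [0, 2, 4, 6]
```

Pinning threads to that subset avoids two hot threads fighting over one core's execution units, which is roughly what the FAQ workaround accomplishes by disabling HT outright.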
|
# ¿ Apr 28, 2015 16:22 |
|
mobby_6kl posted:Even games don't need a top-end CPU, so really you're only looking at a small subset that needs to run calculations or rendering on their desktops, which is a tiny minority of users. It's not the lack of competition, it's the lack of demand, mostly. And for any kind of professional rendering or hardcore scientific calculations, you end up with a dual-socket Xeon workstation from a real OEM, and then stuff it full of 64+ GB of ECC RAM. And most/all scientific calcs and render jobs are stupidly multithreaded, so they scale really well to 8/12/24 cores of Xeon goodness @ 3 GHz. And that market is a shitload bigger and higher margin than gamers will ever be. For everyone who seems to think that processors should keep getting faster and faster, keep in mind that at 5 GHz, light travels about 6 cm per clock cycle in vacuum. On a decent-sized processor core, that's not exactly a huge margin of error for total signal propagation.
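The 6 cm figure is just the speed of light divided by the clock rate; a quick back-of-the-envelope check:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def light_travel_per_cycle_cm(freq_hz):
    # distance light covers in vacuum during one clock period
    return C / freq_hz * 100

print(round(light_travel_per_cycle_cm(5e9), 1))  # -> 6.0 (cm at 5 GHz)
```

Signals in copper interconnect move a good bit slower than c, so the real on-die budget per cycle is even tighter than the vacuum number suggests.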
|
# ¿ Jun 2, 2015 20:59 |
|
Rastor posted:Moore's Law has rammed into a wall. At least for silicon. And the non-silicon semiconductors are still years away from even being half as good as silicon in a production environment.
|
# ¿ Jun 24, 2015 07:51 |
|
Josh Lyman posted:Was the decision mainly to move pin costs to the motherboard manufacturers? Much lower inductance, needed to drive a good signal at lower voltages.
|
# ¿ Jul 18, 2015 11:57 |
|
Grim Up North posted:What I'm not getting is: Where is the difference between a PGA and a LGA socketed CPU wrt to conductor length and resulting inductance? I mean the pins are there in both cases, or does the decreased inductance result from something else? The PGA pins have to be longer, and the geometry of those pins and the internal spring clamps gives a higher inductance than the LGA system. The pin has to stick below the side-loaded spring contact by some amount, but the LGA contact pins are also the springs, so the total conductor length is lower and the resulting pin geometry isn't as odd. Longer pins mean higher inductance. When you have a 50A or 100A current surge in 30 microseconds as the chip wakes up and goes to full load, the little bit of extra inductance can cause voltage droop and overshoot. That droop and overshoot causes instability in the transistors, so you either drive the chip harder at higher voltage to correct for the undershoot, or you slow the chip down so the little bit of droop doesn't cause stability issues. You can see the same thing happen on really crappy motherboards with single-phase CPU VRMs. In that case the power delivery system can't handle the changing loads, and you see the same vdroop issues and the resulting weird poo poo happening. It's a pretty tiny difference, but a few nanohenries of inductance when you have a dI/dt of 5 MA/s can mean a 0.075V or greater voltage droop for potentially 100+ microseconds, more than long enough for 10+ clock cycles to have flaky switching behavior. Now that chips are in the 0.9-1.1V factory voltage range, 0.075V becomes a non-trivial ripple in supply voltage. It's also why a lot of the nicer motherboards have droop compensation built in: they bump the voltage up during large load swings and bring it back down after 150ish microseconds, to cancel out the droop for the most part. It's still there, but doesn't really affect the chip thermally.
Is LGA better than PGA from a pure electric performance standpoint? Yes-ish. Is it better from a cost standpoint? I don't know. Can you engineer around it with motherboard features and creative wiring? You sure as gently caress can. To the average end user, it's basically a wash, most boards are smart enough to auto-compensate for the few things that are slightly different, and aside from how you clamp it in, there isn't anything different.
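The droop numbers above fall straight out of V = L · dI/dt. A quick sketch, where the 15 nH loop inductance is an assumed illustrative figure (the post only says "a few nanohenries"):

```python
def droop_volts(inductance_h, delta_i_a, delta_t_s):
    # V = L * dI/dt: voltage developed across the socket/loop inductance
    # during a load-current step
    return inductance_h * delta_i_a / delta_t_s

# assumed values: 15 nH of loop inductance, a 100 A step in 20 us (5 MA/s)
print(round(droop_volts(15e-9, 100, 20e-6), 3))  # -> 0.075 (volts)
```

At a 0.9-1.1V core voltage, that 75 mV is a 7-8% supply excursion, which is why the compensation schemes described above exist at all.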
|
# ¿ Aug 2, 2015 13:17 |
|
Twerk from Home posted:This isn't that surprising, given that gaming is almost never CPU limited, DDR4 latency is a good bit higher normally, and there are differences in the platform that we don't fully understand right now. When the first i7-920 came out, wasn't it slower than a Q9650 in discrete GPU gaming because of cache changes? The latency and speed of the memory can also play a huge factor. The CAS/speed ratio determines how long in absolute time the processor has to wait for data, and with DDR4 pushing the CAS to 16/17, up from 9-10, you need DDR4-2800 at CL15 to really offset the issue. It wouldn't surprise me if you could offset that performance issue with faster RAM. They discovered something similar for Haswell over at AnandTech.
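The CAS-to-speed ratio converts to absolute time like this (first-word latency only — a sketch that ignores tRCD and the other timings):

```python
def first_word_latency_ns(cl, data_rate_mts):
    # CL is counted in memory-clock cycles; DDR transfers twice per clock,
    # so one CL cycle lasts 2000 / data_rate_mts nanoseconds
    return cl * 2000.0 / data_rate_mts

print(first_word_latency_ns(9, 1600))             # DDR3-1600 CL9  -> 11.25 ns
print(round(first_word_latency_ns(15, 2800), 2))  # DDR4-2800 CL15 -> 10.71 ns
```

Which is the post's point: a DDR4 kit needs both the higher clock and a reasonable CL before its absolute latency catches up with commodity DDR3.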
|
# ¿ Aug 5, 2015 17:35 |
|
Lowen SoDium posted:Someone correct me if I am wrong, but it's the combination of RAM frequency and CAS latency that makes the difference. Divide the CAS latency by the RAM transfer rate and multiply by 2000 to get the latency in nanoseconds (or so some post on an overclocking forum told me). That's the latency, but you also get a shitload more bandwidth to go with that nominal change in latency. Shouting 'go' still takes the same 10ish nanoseconds to start, but you get a firehose worth of data instead of a garden hose's worth. That's the reason to upgrade. I think with a faster set of memory in the machines, the Skylake machines would bench a lot better; I can guarantee that a lot of those benchmarks are sensitive to both the latency and the speed of the memory.
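The firehose half of the trade is easy to put numbers on; a rough sketch assuming the usual 64-bit (8-byte) bus per channel:

```python
def peak_bandwidth_gbs(data_rate_mts, channels=2, bytes_per_transfer=8):
    # theoretical ceiling: transfers/s * bytes per transfer * channel count
    return data_rate_mts * 1e6 * bytes_per_transfer * channels / 1e9

print(peak_bandwidth_gbs(1600))  # dual-channel DDR3-1600 -> 25.6 GB/s
print(peak_bandwidth_gbs(2800))  # dual-channel DDR4-2800 -> 44.8 GB/s
```

Latency barely moves between those two configurations, but peak bandwidth nearly doubles, which is the trade the post is describing.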
|
# ¿ Aug 7, 2015 18:09 |
|
Anime Schoolgirl posted:Haswell i7-4790K boost and total speed including IPC is higher than Skylake boost I just want native USB 3.0 ports, PCIe 3.0, DMI 3.0, and an M.2 SSD slot for those sweet sweet Samsung drives coming out in a few months. My i7 860 is starting to show its age, and my poor motherboard is losing its base clock stability. The base clock manages to wander aimlessly between 83 and 100 MHz; I'm honestly kinda surprised the thing still works, and I'm terrified of turning it off.
|
# ¿ Aug 14, 2015 21:49 |
|
Anime Schoolgirl posted:My room collects a ridiculous amount of dust for the airflow it gets (not much) which might not help for fans that have sleeve bearings exposed. I have a shop-vac in my room for this reason. The only time I've seen Intel fans fail like that was after the ashfall we had a few years back. The ash is like lapping compound from hell, and will basically skullfuck anything with moving bearing surfaces.
|
# ¿ Aug 15, 2015 00:33 |
|
mayodreams posted:That's apparently why they are calling it 'Intel Optane.' That's loving worse.
|
# ¿ Aug 18, 2015 20:45 |
|
Hot drat! Newegg finally had the 6700k in stock. Goodbye Nehalem, hello Skylake! Also goodbye flaky motherboard with a PCI bus clock that wanders 10ish MHz up and down around 100 MHz.
|
# ¿ Sep 16, 2015 20:19 |
|
Lowen SoDium posted:I did not delid, but I am using a closed loop water cooler. I too will eventually delid once the custom retention brackets are in stock for the 6700k.
|
# ¿ Sep 24, 2015 06:42 |
|
mobby_6kl posted:I was about to order a new laptop for work but the X1 Carbon is the touchscreen model with only 8GB of RAM. Probably won't have a new configuration until the next refresh, is it still expected in February? The X1 is kinda lovely. My firm has like 3k of these things, and the RMA stats on them are about 3x as bad as the last generation's T430s. It's super nice up until one of the daughtercards takes a poo poo and you lose sound+usb, or wifi plus other usb. Or the digitizer craps out and now your mouse is a schizophrenic mess.
|
# ¿ Dec 19, 2015 01:30 |
|
Tab8715 posted:Mobile performance has grown enormously along with integrated graphics. Pretty much; laptops went from 4 hours, maybe, in ultra-dim mode reading static text in 2005, where the best game you could play was DOOM, to 'I can play Battlefield on an Iris laptop' and still get 6-10 hours of reading time in 2015. If they turned all the video poo poo into L1/L2/L3 cache, they could probably do some pretty cool things with the desktop chips, but would 95% of the people buying them give a poo poo? Nope, because a C2Q and a brand-new Skylake both open office apps and play Internet Hearts exactly the same.
|
# ¿ Mar 27, 2016 14:12 |
|
Lowen SoDium posted:Very briefly in the 90's, some game consoles and CD systems like the CDi and 3DO had options for MPEG hardware add-on modules. There was a limited selection of VCD titles for sale. I think I saw Dances With Wolves for sale. It was really big overseas and in piracy circles in the early 2000s, where if you had a lovely SD TV and S-Video out you could get a 2-disc copy of whatever the latest movie was, and it was in pretty OK quality. Most of the china-best DVD players of the era could do all the VCD/SVCD formats, in addition to telling Macrovision to eat a bag of dicks and being completely region-free. A lot of them later had early DivX codec support, where you could run the really early AVI versions of the FourCC codec encodes.
|
# ¿ May 11, 2016 19:46 |
|
DrDork posted:Not only is this true, but it actually has negative implications for cooling, since now you're talking more heat concentrated in a smaller area, and past a good TIM/copper cold plate there's already not a ton you can do to promote thermal transfer without getting pretty exotic. Some of the really cool microfluidics stuff might end up being present in high-end heat spreaders in a few years: basically a flat-plate-shaped heat pipe, only about 50x more expensive to fabricate. That or some of the graphene composites they're experimenting with; they can have some completely ridiculous thermal conductivities, shame they're even more expensive. Given how big the die itself is, I wonder if we might see the removal of the heat spreader and direct heatsink contact again on retail parts.
|
# ¿ Oct 17, 2016 21:46 |
|
Chuu posted:I suspect one reason the socket is so physically large is Knight's Landing support, where they can make the chips as large as their large-die yields allow. Hopefully they prove me wrong and use the extra space in Xeons for some ridiculously large caches. Behold the 40-core Xeon E7240, codenamed 'gently caress you, pay me', with an eye-watering 512MB of shared L4 cache. We'll probably see some interesting real-world improvements when you can buffer huge chunks of data to cache before chewing on it. Just think of how fast multithreaded bubblesort would be with the entire dataset in L4 cache.
|
# ¿ Oct 18, 2016 09:07 |
|
priznat posted:I'm wondering if it might be some kind of CPU/FPGA hybrid with the Altera stuff in there to change what peripherals it supports. That would actually be really handy for certain kinds of scientific calculations, assuming you can get whatever weirdass algorithm into usable Verilog code. FPGAs can be 8-20x as power-efficient as a regular CPU for a lot of stuff that you can either parallelize out the rear end by putting 90 functional compute units on the FPGA, or that involves a lot of chewing on small bits of data, like a hash or checksum function, where you can store it in the FPGA's local memory and then pipeline the crap out of the calculation process. I know certain implementations of the OpenCV spec on FPGAs would get real-time 1080p Haar cascade performance for like 30W, compared to the CPU+GPU at 200ish watts.
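When both sides hit the same real-time throughput (as in the OpenCV example, where each does 1080p in real time), the perf/W comparison collapses to a power ratio; a trivial sketch using the post's own numbers:

```python
def efficiency_gain(baseline_watts, accelerator_watts):
    # assumes identical throughput on both sides, so the perf/W gain
    # is simply the ratio of power draws
    return baseline_watts / accelerator_watts

print(round(efficiency_gain(200, 30), 1))  # CPU+GPU vs FPGA -> 6.7
```

That ~6.7x sits at the low end of the 8-20x range quoted above, which is plausible since the CPU+GPU figure here is a rough "200ish" estimate.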
|
# ¿ Nov 10, 2016 20:43 |
|
priznat posted:Absolutely, I am convinced a big reason to move to the giant 3647 sockets is so they can have FPGAs with the cpu as a multi chip module and they can do a lot of stuff that offload cards/GPUs do now. Makes sense to have some configurability on the smaller side too. It also means you can do incredibly stupid poo poo like put a general purpose port on a machine, route the traces to the inputs of one of the FPGAs, and exploit whatever shared memory/DMA system the FPGA uses to talk to the host CPU to get a 100G ultra-low-latency interconnect for whatever HPC node/cluster thing you're working on. Or have it poll a 1024x1024 CCD sitting on top of a civil defense beta calibration source, use the FPGA to run analysis and normalization on it, and get an obscene quantity of ultra-high-grade random numbers to generate keys or salted hashes.
|
# ¿ Nov 10, 2016 21:45 |
|
evilweasel posted:never seen an article shoot its credibility in the head so fast: That's an article holding one hand up in the universal 'give me a minute' gesture, as it pounds an entire gallon jug of bleach, finishes it, belches, then continues right where it left off.
|
# ¿ Dec 14, 2016 00:40 |
|
It's basically a really good use case for having ECC be standard on everything everywhere. The marginal increase in cost can offset a lot of edge case wonky behavior like this.
|
# ¿ Jan 3, 2017 04:24 |
|
Potato Salad posted:Oh lord, the Home Server stuff people were doing with extended volumes is utterly different. Storage Spaces is cluster-able software RAID. NVMe flash storage with accelerators will probably be a much better bang/buck for really large datasets in the near future. Hanging 48 or 64 flash channels off a PCIe x4 or x8 slot will give you all the QD1 performance you can stand, for a price a shitton less than 128GB registered ECC DIMMs.
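Rough aggregate math for that channel count, with a hypothetical per-channel figure (real NAND channel throughput varies a lot by generation):

```python
def aggregate_bandwidth_gbs(channels, mbs_per_channel):
    # naive ceiling: channel count * per-channel sequential throughput;
    # in practice the PCIe x4/x8 link usually caps well below this
    return channels * mbs_per_channel / 1000.0

print(aggregate_bandwidth_gbs(48, 500))  # -> 24.0 (GB/s before the slot caps it)
```

The point isn't hitting 24 GB/s through an x8 slot (you can't); it's that with that many channels there's always one idle and ready to answer a QD1 read immediately.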
|
# ¿ Jan 5, 2017 04:04 |
|
PerrineClostermann posted:Would Intel really be subject to anti-trust in a single-x86-manufacturer world? x86 isn't absurdly dominant in computing anymore, even from a consumer standpoint. It goes from "gently caress you, pay me (because my server chips are 40% faster and 50% more power efficient)" to "gently caress you, pay me (because I'm the only x86 game in town)". The price points can both be exactly the same for the exact same chip, but if the regulators think Intel is loving the market over using its monopoly power, and Intel doesn't have a struggling AMD to point to, it could lead to more anti-trust scrutiny. It'll never be 'AMD went under, time to break up Pa-Intel just like we broke up Ma-Bell', but Intel could see some more regulatory annoyances because of it.
|
# ¿ Feb 6, 2017 01:08 |
|
evilweasel posted:They probably can't get away with it now because what saved them in the Pentium 4 era was everyone assumed MHz = speed, and they'd reached a brick wall there. So people in the know knew AMD was better, but most people looking at the specs wouldn't realize that. Don't forget the illegal as gently caress price fixing/rebate scam, where they locked in every major manufacturer with an exclusivity deal: they get a $300 proc for like $90, but only if they use Intel chips exclusively. 10 years later Intel got slapped with a huge fine for it, about 8 years too late to fix the issue.
|
# ¿ Feb 12, 2017 17:58 |
|
HalloKitty posted:If it works on existing Cat5e and the NICs/switches are considerably cheaper than 10Gbit ones, I can see uses for it, for sure. Pretty much this. 2-5x as fast with no infrastructure changes, a single link can now feed the newer MIMO 802.11ac and 802.11ax access points at full rate, and workstations that keep multi-GB files on a NAS or SAN get much closer to the DAS performance they previously saw, again without having to tear the walls out and redo every single drop.
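The practical win for that multi-GB-file workflow is easy to sketch; the 0.94 efficiency factor is a rough assumption for framing/protocol overhead, not a measured figure:

```python
def transfer_seconds(file_gbytes, link_gbits, efficiency=0.94):
    # bytes -> bits, divided by the usable (overhead-adjusted) line rate
    return file_gbytes * 8.0 / (link_gbits * efficiency)

for link in (1, 2.5, 5, 10):
    print(link, "GbE:", round(transfer_seconds(10, link), 1), "s")
# a 10 GB file drops from ~85 s at 1 GbE to ~34 s at 2.5 GbE on the same Cat5e
```

The disks on either end also have to sustain the rate, which is why the jump matters most for NAS/SAN boxes that were already wire-limited at 1 GbE.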
|
# ¿ Feb 18, 2017 21:52 |
|
BobHoward posted:It'll very likely work fine with those caps, but I would've tried to match exact if possible. Someone presumably designed and verified that board based on the original caps, and it may not actually work better if you change them for "better" ones. Pretty much, using caps to filter noise and prevent ringing in a circuit is half black magic and half careful placement. Changing things like ESR and ripple can do odd things if the design is borderline.
|
# ¿ Feb 20, 2017 12:12 |