|
And when you consider that the large Intel sockets, LGA 2011 I think, will fit on a mini-ITX board, it's not like you need that much space. Edit: it may also be that MSI are terribad at making motherboards, which is not outside the realm of possibility.
|
# ? Jan 6, 2017 17:35 |
|
So long as they undercut Intel or offer more cores for the same dollar, combined with lower board costs, the platform savings from moving to Red Team will be pretty great. I just made a Haswell build I can dump for the price I paid (seriously, wtf) and grab a Ryzen offering for peanuts.
|
# ? Jan 6, 2017 17:49 |
|
SwissArmyDruid posted:Had they just gone and aped Intel's mounting hole positions, they'd probably have freed up a lot more space to bring those RAM slots in closer. I'd have thought that'd be better for higher-frequency RAM. I wonder what the memory controller likes. What kind of RAM speed do people get out of Bulldozer when they overclock it? I know it doesn't relate much to this (hopefully).
|
# ? Jan 7, 2017 03:45 |
|
Latency *is* a thing that can be affected by trace length from the socket to the RAM slots, but I'm not willing to make a claim as to whether or not it's significant; I don't have any of my bookmarks to those tests on my phone.
|
# ? Jan 7, 2017 04:12 |
|
Zen copypasta from the AT forums, which is in turn copypasta from Reddit:quote:https://www.reddit.com/r/Amd/comments/5mfjun/amd_drops_huge_news_on_ryzen_overclocking_and/
|
# ? Jan 7, 2017 10:11 |
|
Does the new motherboard chipset support Thunderbolt 3? Because I really want to put my PC under the stairs and run an optical Thunderbolt cable to my bedroom.
|
# ? Jan 7, 2017 11:19 |
|
The chipset doesn't, but any vendor can slap an Intel TB chip on there if they want. I don't think any mobo has been announced with such a setup though. Active USB 3 cables out to 50'+ and a USB 3 dock of some sort might do what you want for lots less, of course. I can't believe how much those longer optical TB cables cost. edit: \/\/\/\/ Honestly TB vs USB 3.1/3.2 feels a lot like FireWire vs USB 2 all over again. It's got Intel behind it, and that is huge, but it still seems to be hardly getting any use at all even if the tech itself is impressive. I think if USB hadn't improved to 10Gbps with USB 3.2 it might've had a good chance, but once that happened it's just too niche and expensive. PC LOAD LETTER fucked around with this message at 16:03 on Jan 7, 2017 |
# ? Jan 7, 2017 14:57 |
|
Thunderbolt is a loving travesty on the PC. How long has the standard been out? And there are still no, or very very few, mainboards with a port. Up until recently, you had to install expansion cards that plugged into PCIe and some port on the mainboard.
|
# ? Jan 7, 2017 15:26 |
|
Combat Pretzel posted:Thunderbolt is a loving travesty I've yet to see a compelling case for Thunderbolt that isn't well covered by regular USB-C 3.1.
|
# ? Jan 7, 2017 17:02 |
|
PC LOAD LETTER posted:The chipset don't but any vendor can slap a Intel TB chip on there if they want. I don't think any mobo has been announced with such a set up though. I'd go as far as to say TB is DOA tech. It's way too confusing as a standard for consumers, nobody except Intel/Apple wants to suck Intel's dick on licensing costs, and it's competing with a free, no-nonsense I/O called USB 3.0 that is already offering 625 MB/s. Meanwhile, average office exec #123423 is still plugging in a 30-year-old VGA cable for his meeting presentation.
|
# ? Jan 7, 2017 17:31 |
|
Boiled Water posted:I've yet to see a compelling case for thunderbolt that isn't well covered with regular USB C 3.1.
|
# ? Jan 7, 2017 17:34 |
|
Combat Pretzel posted:Thunderbolt is a loving travesty on the PC. How long is the standard out? And there's still no/very very few mainboards with a port? Up until recently, you had to install expansion cards that plugged into PCIe and some port of the mainboard. I don't know much about Thunderbolt, but I'm pretty sure it's on all mid-range and up Z170 boards. Though it's confusing because it's labeled USB 3.1 (and is a regular USB port) and Thunderbolt 3 simultaneously. I didn't realize Thunderbolt was Intel-only, but since it's in the same port as USB 3.1, I assume AMD can use it as well?
|
# ? Jan 7, 2017 18:10 |
|
Say what you will about Thunderbolt, it's presently the only thing that can even think about making external GPU docks a reality. No, I don't think the change to use USB type C connectors is any help. I frankly think that in a perfect hypothetical world, Apple would have made the Lightning connector open source, and we could actually be using THAT for USB type-C instead of the abortion that it presently is*. I feel like this could have opened a pathway for Thunderbolt to migrate to PC as a result, assuming it was done early enough in its lifespan. *(It's easier and cheaper to replace a cable than to replace a port when the tab snaps off. That, I think, could have made Thunderbolt relevant to more people. But no, Apple gotta :apple:.) SwissArmyDruid fucked around with this message at 18:34 on Jan 7, 2017 |
# ? Jan 7, 2017 18:27 |
|
The Lightning connector is only six pins, right? Is that enough for a 3.1 USB signal?
|
# ? Jan 7, 2017 18:35 |
|
Eight pins, but I *believe* the contacts in the Lightning connector are double-sided. The Lightning connector doesn't care which way you plug it in, after all, and then I think negotiation takes care of the rest? In any event, I think that if Apple weren't so goddamn obsessed with screaming "THIN! THIN! THIN!", you could probably... edit: Ha, no, I forgot that Type-C is 24-pin, not 16. SwissArmyDruid fucked around with this message at 18:49 on Jan 7, 2017 |
# ? Jan 7, 2017 18:38 |
|
The Lightning connector is garbage for reliability, even beyond its lack of pins. And it definitely couldn't carry the 5 amps at 20 volts (that's 100 watts) a USB-C connector can be specced for under the USB Power Delivery part of the USB 3.0 specs.
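The 100-watt figure is just volts times amps at the top negotiated level; a quick sketch, using the common USB-PD fixed supply levels (the profile values here are assumptions for illustration, not quoted from the spec):

```python
# Sketch: power available at each assumed USB Power Delivery fixed level.
# Profile voltages/currents are illustrative assumptions, not spec quotes.
PD_PROFILES = [  # (volts, max_amps)
    (5.0, 3.0),
    (9.0, 3.0),
    (15.0, 3.0),
    (20.0, 5.0),  # the 100 W case; needs a 5 A e-marked cable
]

def max_watts(profiles):
    """Highest power any single profile can deliver (P = V * I)."""
    return max(v * a for v, a in profiles)

for v, a in PD_PROFILES:
    print(f"{v:>4} V x {a} A = {v * a:>5.1f} W")
print("peak:", max_watts(PD_PROFILES), "W")
```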
|
# ? Jan 7, 2017 19:04 |
|
On the other hand, I am dreading the day that I have to tell someone they now have a busted Type-C port, the expensive kind with Thunderbolt 3, because they first didn't plug the Type-C connector in hard enough, then too hard.
|
# ? Jan 7, 2017 19:30 |
|
Is something different with the combo USB3/TB ports? If the ones on my phones are worth anything as a reference, you've got to be really stupid to not plug it in "hard enough".
|
# ? Jan 7, 2017 19:57 |
|
Combat Pretzel posted:I was mostly interested in it a while ago for cheap higher-than-Gigabit networking. But alas... 10G cards don't cost much anymore if you get them used, and cheaper 2.5/5G controllers that can use existing Cat5e installations built for 1G are on the horizon. Not quite the 40G of TB3, but without a flash-based array you'd have trouble saturating 10G anyway.
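A rough sketch of that saturation point, comparing ballpark 2017-era storage throughputs (all assumed figures, not measurements) against a 10GbE line rate:

```python
# Sketch: which storage setups could actually fill a 10GbE link?
# Throughput numbers are ballpark assumptions for 2017-era hardware.
LINK_GBPS = 10
LINK_MBPS = LINK_GBPS * 1000 / 8  # 1250 MB/s, ignoring protocol overhead

devices_mb_s = {
    "single 7200rpm HDD (seq.)": 180,
    "4x HDD RAID0 (seq.)": 720,
    "SATA SSD": 550,
    "NVMe SSD (Intel 750 class)": 2400,
}

for name, mb_s in devices_mb_s.items():
    verdict = "can saturate" if mb_s >= LINK_MBPS else "falls short of"
    print(f"{name}: {mb_s} MB/s {verdict} 10GbE ({LINK_MBPS:.0f} MB/s)")
```

Only the flash-based option clears the bar, which is the post's point.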
|
# ? Jan 7, 2017 20:06 |
|
GRINDCORE MEGGIDO posted:I'd have thought that'd be better for higher frequency RAM's. Both my FX-8350 and Athlon 750K, when overclocked to 4.5 GHz+, were able to run DDR3-2400, but I don't know if that's common or not. It was very turn-key though; I had zero issues. fake edit: The Athlon actually got to 2666 before I chickened out, IIRC. RyuHimora fucked around with this message at 05:47 on Jan 8, 2017 |
# ? Jan 8, 2017 05:45 |
|
Eletriarnation posted:10G cards don't cost much anymore if you get them used, and cheaper 2.5/5G controllers that can use existing Cat5E installations for 1G are on the horizon. 10GBASE-T is terrifying. Insane power consumption, and you get to experience your network cable getting physically warm to the touch. 10GBASE-CR/Direct Attach cables are more expensive (since they're twinaxial cables permanently affixed to a pair of SFP+ transceivers) and have pretty severe length limitations, but are much better. e: I could be wrong, but IIRC 1000BASE-T uses about 0.4-0.6 watts per port while 10GBASE-T can use upwards of 12 watts on really cheap equipment. Kazinsal fucked around with this message at 09:06 on Jan 8, 2017 |
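To put those per-port numbers in perspective, here's the same math scaled up to a 48-port switch (wattages taken from the figures in the posts above, plus an assumed ~0.1 W for passive DAC; all of it back-of-envelope, not measured):

```python
# Back-of-envelope: total PHY power for a 48-port switch.
# Per-port wattages follow the thread's quoted figures (0.5 W for
# 1000BASE-T, ~12 W worst-case cheap 10GBASE-T, ~0.1 W passive DAC).
PORTS = 48
per_port_w = {
    "1000BASE-T": 0.5,
    "10GBASE-T (cheap PHY)": 12.0,
    "SFP+ passive DAC": 0.1,
}

for phy, watts in per_port_w.items():
    print(f"{phy}: {watts * PORTS:.1f} W across {PORTS} ports")
```

Two orders of magnitude between DAC and cheap 10GBASE-T, which is why the cables get warm.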
# ? Jan 8, 2017 09:03 |
|
http://hexus.net/tech/news/cpu/101290-amd-confirms-ryzen-cpus-will-unlocked/ Supposedly all of the Ryzen CPUs will have unlocked multipliers. Nice. Edit: whoops, this has already been mentioned
|
# ? Jan 9, 2017 13:44 |
|
Kazinsal posted:10GBASE-T is terrifying. Insane power consumption, and you get to experience your network cable getting physically warm to the touch. 10GBASE-CR/Direct Attach cables are more expensive (since they're twinaxial cables permanently affixed to a pair of SFP+ transceivers) and have pretty severe length limitations but are much better. --edit: Heh, passive DAC SFP+ is 0.1W. Combat Pretzel fucked around with this message at 17:09 on Jan 9, 2017 |
# ? Jan 9, 2017 14:58 |
|
It might have been mentioned already, but I didn't realize that Ryzens will have 24x PCIe lanes directly from the CPU (e: 4x reserved for the chipset). As an Intel 750 NVMe SSD owner, this is putting me on the bandwagon for reconsidering Intel's X299 HEDT platform as my next upgrade, if Socket AM4 CPUs do end up with a worthwhile performance/price difference from Skylake-E/Kaby Lake-X. Video is Linus but he's not being too insufferable in this one: https://www.youtube.com/watch?v=vPByz-PtWkw e2: chipset is slightly future-leaning too, with usb 3.1 gen2 e3: while i'm on the foolish topic of futureproofing, sure why not let's mention amd's going to keep making cpus for this socket until 2020 Sidesaddle Cavalry fucked around with this message at 02:51 on Jan 10, 2017 |
# ? Jan 10, 2017 02:43 |
|
I have heard Papermaster's comments about how "We're not going tick-tock," and "Zen is going to be tock, tock, tock." http://www.pcworld.com/article/3155129/components-processors/amd-says-its-zen-cpu-architecture-is-expected-to-last-four-years.html Great. We'll be stuck on 14nm chips until 2020. Although who knows how much of this is "because GloFo can't un-gently caress their poo poo sufficiently to get us onto 10nm before then". edit: In retrospect, what he PROBABLY meant was "tick, tick, tick". Ticks: die shrinks and optimization. Tocks: new microarchitectures. edit edit: VVVVV SwissArmyDruid fucked around with this message at 04:21 on Jan 10, 2017 |
# ? Jan 10, 2017 03:47 |
|
I saw another source saying AMD is planning to stay on 14nm for a while. Possibly related to reports that GloFo is going to try to skip over 10nm.
|
# ? Jan 10, 2017 04:10 |
|
We know the IBM group may have found a way to 7nm, and Intel is the only one working on 10nm? http://arstechnica.com/gadgets/2015/07/ibm-unveils-industrys-first-7nm-chip-moving-beyond-silicon/ and honestly I see us stuck at 7nm for a very, very, very long time.
|
# ? Jan 10, 2017 04:19 |
|
Sidesaddle Cavalry posted:e3: while i'm on the foolish topic of futureproofing, sure why not let's mention amd's going to keep making cpus for this socket until 2020
|
# ? Jan 10, 2017 04:37 |
|
wargames posted:We know ibm group may have found a way to 7nm and intel is the only one working on 10nm? As you can tell from the article's skeptical tone, this is pretty expected. The real challenge is making a process that lets you mass-produce millions of chips, and do so economically. Everyone is in the process of shoving their 10nm processes out the door (except GloFo, who's skipping that node), and work is being done on 7nm. 5nm is the area of active research, and where a new transistor design or new materials are going to have to come in. Here's a more detailed article on 5nm.
|
# ? Jan 10, 2017 05:15 |
|
The amount of faith I have in GloFo not albatrossing the gently caress out of AMD is nil. Intel is still on track for 2020, right? 2020 is going to roll around, GloFo will still be in risk production, citing "unforeseen issues" and "developmental hurdles", and they won't have a 10nm node to fall back on.
SwissArmyDruid fucked around with this message at 06:25 on Jan 10, 2017 |
# ? Jan 10, 2017 06:18 |
|
There is always FD-SOI for GloFo and AMD, of which they have 22nm, 12nm, and 7nm FD-SOI processes. I have no idea what the hurdles would be moving from FinFET to FD-SOI, or whether FD-SOI is even suitable for such things, but it's apparently a much less complex process, if not more expensive in small volumes. Also keep in mind that GloFo's 7nm FinFET description sounds more like Samsung/TSMC 10nm, and they aren't planning EUV for the first run. My guess is AMD does 14nm until mid-2018 (so Zen, Raven, and Zen+, which is likely an update to add AVX-512), and shifts over to first-generation "7nm GloFo" late 2018/early 2019, with another batch of chips arriving in mid-2020 using EUV if it's available. This is based on AMD's own roadmap for their GPUs, as Vega 20 is supposed to be a 7nm chip, but that's likely on the back of GloFo's promises.
|
# ? Jan 10, 2017 06:30 |
|
Sidesaddle Cavalry posted:It might have been mentioned already, but I didn't realize that Ryzens will have 24x PCIe lanes directly from the CPU (e: 4x reserved for the chipset). As an Intel 750 NVMe SSD owner, this is putting me on the bandwagon for reconsidering Intel's X299 HEDT platform as my next upgrade, if Socket AM4 CPUs do end up with a worthwhile performance/price difference from Skylake-E/Kaby Lake-X. Video is Linus but he's not being too insufferable in this one: While I love the silly high-end stuff like Intel 750s, what are you running on it that would possibly perform better with CPU-direct PCIe lanes vs the chipset? Have there ever even been benchmarks for SSDs running off CPU lanes vs PLX chips or chipset slots?
|
# ? Jan 10, 2017 16:15 |
|
Gwaihir posted:While I love the silly high end stuff like Intel 750s, what are you running on it that would possibly perform better with CPU drive PCIe lanes vs chipset? Has there ever even been benchmarks for SSDs running of CPU lanes vs PLX chips or chipset slots? I tested it on my system. I didn't do it scientifically or anything, but the performance increase in Sandra was barely even there.
|
# ? Jan 10, 2017 17:51 |
|
I don't think you should see a performance increase; I think it's mostly about bottleneck avoidance. We have a maximum of 32 Gb/s (4 lanes) of bandwidth between CPU and chipset, and we're starting to get a lot of very high bandwidth devices: 10 Gb/s USB 3.1 Gen 2, 10 Gb/s Ethernet, etc. I was ABOUT to say the bottleneck was somewhat theoretical, but apparently we already have NVMe SSDs hitting 3500 MB/s, which is close to total saturation of that bandwidth on its own. So giving NVMe SSDs their own lanes avoids starving other devices during peak usage.
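The arithmetic behind that can be sketched out; the link budget follows the 32 Gb/s figure above, and the device rates are assumptions in line with the thread's numbers:

```python
# Sketch: can the CPU-to-chipset link feed all its devices at peak?
# Link budget is the 32 Gb/s (4-lane) figure from the post; device
# rates are illustrative assumptions, ignoring encoding overhead.
LINK_GB_S = 32 / 8  # ~4 GB/s raw

devices_gb_s = {
    "NVMe SSD (seq. read)": 3.5,
    "USB 3.1 Gen 2 device": 10 / 8,
    "10GbE NIC": 10 / 8,
}

total = sum(devices_gb_s.values())
print(f"link budget: {LINK_GB_S:.2f} GB/s, peak demand: {total:.2f} GB/s")
if total > LINK_GB_S:
    print("chipset link is oversubscribed at peak")
```

Even one fast NVMe drive nearly fills the link by itself, which is the argument for hanging it off CPU lanes instead.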
|
# ? Jan 10, 2017 18:43 |
|
Right, and the question remains, when are you seeing usage like that at home, heh?
|
# ? Jan 10, 2017 19:00 |
|
Agreed, 640K should be enough for anyone. E: Maybe someone has a homelab with teamed 10GbE NICs doing a backup from an Intel 750 to a remote system. NihilismNow fucked around with this message at 19:27 on Jan 10, 2017 |
# ? Jan 10, 2017 19:23 |
|
Gwaihir posted:Right, and the question remains, when are you seeing usage like that at home, heh? Maybe not yet, but if you can go as long with this as you could with the 2500K, you very likely will be in 6 years.
|
# ? Jan 10, 2017 19:27 |
|
4K video editing at home is a thing, and it's only going to get bigger each year, especially with 360 video coming for VR, which needs >4K to not look like crap. Remember that Radeon Pro with the onboard SSD scrubbing an 8K video in realtime?
|
# ? Jan 10, 2017 19:28 |
|
Gwaihir posted:Right, and the question remains, when are you seeing usage like that at home, heh? NihilismNow posted:Agreed, 640kb should be enough for anyone.
|
# ? Jan 10, 2017 19:50 |
|
EdEddnEddy posted:4K video editing at home is a thing, and it's only going to get bigger each year, especially with 360 video coming for VR as well that needs >4K to not look like crap. It's really not. Yes, there are a few jobs in this field, but most of them have gone away in the past 10 years. 90% of "video production" just happens on YouTube or on phones nowadays. Now, the number of millennials who think they "need" a high-end computer and camera for the awesome videos they will make is not decreasing at all, even if only 1 in 100 of those people actually does anything with it.
|
|
# ? Jan 10, 2017 19:51 |