Yeah, graphene is fundamentally incompatible with traditional semiconductor manufacturing and logic without some very clever workarounds that haven't really been successful.
|
|
# ¿ Sep 26, 2014 20:15 |
I wonder if they're trying to do a small on-die FPGA for dynamic instruction set add-ons or something. Like, if there's some cool new encryption algorithm, they could just sell the FPGA program as an add-on instead of waiting for it to be added and tested in a brand new processor. That would also let them do hardware patches, like for that one thing Haswell hosed up or whatever.
|
|
# ¿ Jun 2, 2015 16:55 |
Skylake NUCs might be one place you'll see DDR3L happen, as the current ones already use it.
|
|
# ¿ Aug 6, 2015 23:11 |
Not with micro-USB, apparently. The orientation of the socket is different between the S3 I had and the One M8 I have now, and the cable I use to charge doesn't have a logo on it.
|
|
# ¿ Aug 18, 2015 01:58 |
They are literally the exact same thing
|
|
# ¿ Aug 31, 2015 19:11 |
lDDQD posted:Why do people want these [5775c] for desktop, again? The integrated graphics (which you aren't going to use) takes up like half the die. Surely, you'd be better off with with that die area used to give it... I dunno, like 128
|
|
# ¿ Sep 22, 2015 04:08 |
Grundulum posted:Given that we have no IT staff to do things like this, I think I would be better off buying a CPU cooler and putting it on myself. Well, it is. Heat is just excess energy being given off in thermal form. For all electrical systems, the power used is given by the formula Power = Voltage * Current, and while some of that power goes into running the processor, a lot of it is wasted as thermal energy. Power is energy per time, voltage is energy per charge, and current is charge per time, so voltage * current is energy per time! Watermelon Daiquiri fucked around with this message at 17:50 on Oct 23, 2015 |
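The dimensional analysis above can be sketched in a few lines of Python. The 1.2 V / 50 A figures are just hypothetical round numbers for a CPU core rail, not measured values:

```python
def power_watts(voltage_v, current_a):
    """P = V * I: (joules/coulomb) * (coulombs/second) = joules/second = watts."""
    return voltage_v * current_a

# e.g. a hypothetical core rail at ~1.2 V drawing ~50 A dissipates 60 W,
# nearly all of which ultimately leaves the chip as heat
print(power_watts(1.2, 50))  # 60.0
```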
|
# ¿ Oct 23, 2015 17:27 |
Even with an i7 and 8GB of RAM, my work computer still chugs running IE (we recently upgraded to 11!!!), JMP, Spotfire, Excel, and a couple proprietary database query/visualization tools (usually not all at once, but sometimes). That's after getting an upgrade from an i3/6GB... My computer must not like SMT, because it avoids putting anything on the secondary threads/cores.
|
|
# ¿ Aug 31, 2016 04:18 |
If Intel does put overclocking/enthusiast stuff on the 2k-pin socket, I really hope more mobo vendors make mini-ITX boards. Microcenter doesn't have any mini-ITX 2011 boards. I have a 4690k, and the 6800k seems to be a good way to upgrade to DDR4.
|
|
# ¿ Nov 16, 2016 08:45 |
Yup, exactly... It even has an M.2 slot.
|
|
# ¿ Nov 16, 2016 09:35 |
BIG HEADLINE posted:So according to this, Coffee Lake is due in 2H 2018 and is a further optimization of Skylake and Kaby Lake, not a new process at all: http://www.guru3d.com/news-story/6-core-intel-processors-going-mainstream-in-2018-with-coffee-lake.html So a new Devil's Canyon, eh?
|
|
# ¿ Nov 20, 2016 17:49 |
canyoneer posted:FWIW, Intel also insists that TSMC/Samsung/GloFo 7nm is actually more like Intel's 10nm by every measurement in all but name. Heh, I have to wonder if there's some industrial espionage going on that lets them know what dimensions everyone else is shooting for. Hell, the only ways we have to know are either claimed transistor density (which relies on them being truthful) or cutting the drat things open (which is really only done by Chipworks afaik, since that takes a lot of money). Looking at the slide on this page: http://www.anandtech.com/show/8367/intels-14nm-technology-in-detail, it seems the truth is somewhere in the middle. While their 14nm is smaller than the others', it's not quite to the level of SS/TSMC's teen node being the same size as Intel's 22.
|
|
# ¿ Nov 21, 2016 02:26 |
There are always new things on the horizon.
|
|
# ¿ Nov 24, 2016 21:29 |
Paul MaudDib posted:You already need to use Edge (or the Windows Store app, which runs embedded Edge) to get 1080p. Chrome and FireFox are 720p max What the gently caress. Why on earth am I not just downloading things separately
|
|
# ¿ Nov 25, 2016 20:41 |
Yeah, the only thing the Note 7 fiasco did to Samsung was cut profits by a third: paying for refunds, returns, and damage control; getting stuck with a stock of Exynos chips specifically modified for the Note that they have to find new homes for; the other IC stock they probably returned or sold; broken supply contracts, since they manufacture that poo poo in-house in Korea; and trashing an entire DDI chip line. The only place the flash chips would come into play is the IC stocks, which they could resell or repurpose unless they were somehow only compatible with the Note 7's board and stuff, which seems dumb.
|
|
# ¿ Dec 7, 2016 20:37 |
Yeah, now people will rush to buy flash futures, hoarding all the flash in tankers offshore
|
|
# ¿ Dec 8, 2016 00:35 |
Well, a lot of places do produce different designs for different market segments (G_10#, the various Snapdragons and other ARM processors), and yeah, it does make sense for places to intentionally disable parts of chips for that same effect, especially if the higher-end (read: larger) parts are such high-volume ones. It's the same sentiment behind cereal producers making both the main and the generic brands the same way. Certain markets are only so big, so once you saturate one, you can't really do anything else to grow. You can create a brand new chip to fill the lower market segments, but that might not be the best solution, as you do have all that higher-end stock that is slightly flawed or turns out to not be so repairable, plus it costs so much to design, test, and produce a new chip. I think Intel et al usually laser off interconnects or something to make the lower-end chips, but if they can do it in software instead, that saves a step right there, plus it removes the risk of the lasering accidentally killing something else. And once it's in software, why not let people re-enable things? I mean, it's not like we're at a DLC model (yet) in the consumer domain where you can only buy an i3 or whatever and have to pay hundreds to unlock SMT, more cores, cache, etc. Frankly, I'm surprised that phone ARM processors even have separate chips for each market segment, though it might make sense if smaller chips are better.
|
|
# ¿ Dec 19, 2016 03:34 |
priznat posted:I'm not sure how modern CPUs do it but for most other chips they have fuses that are blown so when the device boots the firmware reads the permanently set code and sets up the device accordingly. The dies are identical, it's just a slight change during the packaging process that makes the difference. The dies can also be binned into performance grades but that's not always even necessary. Oh duh, fuses... Yeah, that's one way.
|
|
# ¿ Dec 19, 2016 15:35 |
If someone is rich enough to afford the high-end market segmentation, they generally are rich enough that they don't give a poo poo. Now, if it's something that costs little to nothing to add (money and time), then yeah, it's economically good but morally bad to segment it, but a lot of things aren't. Hell, sometimes the high-end stuff is subsidizing the low end, as, again, they are generally rich enough not to give a poo poo about an extra $#. I know for me, 20-30 is something I don't even blink at, 100 is minor hesitation, 1000 is serious thought, but for someone making minimum wage, 20-30 is literally being able to get to work for the next week, or the food for an entire week! Basically, you can think of it as relativity: what's another 50 when it's already 800 vs 80? But seriously, I can guarantee that the raw materials going into each wafer are only on the order of a grand, and with probably 350 good Kaby Lake dies per wafer, you can see how cheap it is that way. However, given that the ArF and KrF litho tools are loving expensive (1-2B alone for a fab), and that's not even mentioning the rest, $3-400 for a good-enough die on a piece of precision-made fiberglass is an OK deal. As the E/X stuff is 2.5 times larger, that increases the costs a fair amount: it takes just as much time, money, and effort to move a lot of 140mm2 dies as it does 350mm2 ones, and that's not even taking into account the fact that an increased die size increases the odds of a random die being defective (like what, 6 times as probable? Or am I messing up my math?). Intel is kinda hurting too, so they are looking for ways of filling the coffers and funding the exponentially more expensive nodes. Honestly, given the brick wall they are desperately speeding towards (which, given their anemic foundry poo poo, means it'll hurt worse than for TSMC or Samsung or whoever), can you really blame them for trying to milk things for as long as they can?
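The defect-odds question has a standard back-of-the-envelope answer: under a Poisson yield model, the chance a die comes out clean is exp(-A*D0). A quick Python sketch, where the defect density D0 = 0.1/cm² is an assumed illustrative number, not anyone's actual figure:

```python
import math

def gross_dies(die_area_mm2, wafer_diameter_mm=300):
    # Crude gross die count: wafer area / die area, ignoring edge losses.
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

def poisson_yield(die_area_mm2, d0_per_cm2=0.1):
    # Probability a die has zero random defects: Y = exp(-A * D0).
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

small, big = 140.0, 350.0
p_bad_small = 1 - poisson_yield(small)   # ~0.13
p_bad_big = 1 - poisson_yield(big)       # ~0.30
print(gross_dies(small))                 # ~504 gross candidate sites per 300mm wafer
print(p_bad_big / p_bad_small)           # ~2.3x, not 6x
```

So at a modest defect density, a 2.5x-larger die is only about 2.3x as likely to catch a defect (the ratio approaches the area ratio as D0 shrinks, and only gets dramatically worse at high defect densities). And ~500 gross sites at ~87% yield is in the same ballpark as the 350 good dies per wafer mentioned above.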
For the Tesla thing: yeah, that's sketchy as poo, but remember they are basically a start-up, and as they've learned with the SUV, the more separate process lines they have, the harder and more expensive things are. It would probably cost them more to change things up enough to put larger batteries and/or the sensors on only some of the cars they make (as that effectively multiplies the number of lines by 4). They probably could just have the one option, but I wouldn't be surprised if the cost to build the car is based around the higher price. Having software upgrades is a compromise, as it allows them to sell to a larger population while still having the ability to get the full return on the vehicle. Watermelon Daiquiri fucked around with this message at 02:39 on Dec 21, 2016 |
|
# ¿ Dec 21, 2016 02:31 |
I did include all that, although obliquely.silence_kit posted:No way, material cost per wafer has to be O($100). I happen to know that plain 100mm electronics grade silicon wafers in small volumes are ~$20 each (this is actually incredibly amazing by the way--electronics grade silicon may be the purest material known to man, and yet it is so cheap). Intel, although they are buying much bigger wafers, probably are able to get a much better price/area than I could. Yeah, the films and all are thin, but you have to consider the costs of things like delivery vehicles (as in silane, arsine, or N2 or noble atmo, rather than trucks), disposal costs, engineering wafers (every little process change needs at least a cursory look at how it affected the device parms), CMP slurry, other cleaning stuff, wet etch chemicals, PR, not to mention the fact that a LOT of material gets wasted in some of the processes, spare parts (gotta replace those beam slits once they get worn down enough from all those plasma ions hitting them, among other things; plasma targets in particular can be $$$$), and a whole boatload of stuff I'm sure I'm forgetting. Granted, I am rounding up a bit (and when I say on the order of, I mean just around a grand, not 1-10k), but consumables are one of the things that affects my bonus lol. And oh yeah, I'm well aware of Intel's intentions, I was just trying to give a bit of their perspective. If they truly are going to ditch silicon for <10nm stuff, hooo boy will Xeons cost a leg. You think those cheap silicon wafers are expensive... Watermelon Daiquiri fucked around with this message at 05:12 on Dec 21, 2016 |
|
# ¿ Dec 21, 2016 05:09 |
silence_kit posted:If they do it, they probably aren't going to switch away from the silicon wafer. They'll deposit/grow the new transistor channel materials on top of the silicon wafer. Oh duh. Yeah, they already use SiGe in there.
|
|
# ¿ Dec 21, 2016 06:19 |
silence_kit posted:SiGe is in the source and drain of at least the p-type transistors in the latest integrated circuit manufacturing processes. It isn't the channel material. Yeah, exactly.
|
|
# ¿ Dec 21, 2016 18:51 |
Real nerds use Mercury
|
|
# ¿ Dec 22, 2016 02:48 |
Well, it does have a thermal conductivity an order of magnitude higher than water's
|
|
# ¿ Dec 22, 2016 03:58 |
Material costs depend on the product node and on the bargaining skills of the company. Like I said, consumables can take up to a grand a wafer and down to a hundred or less, but that includes spare parts along with the raw material. Granted, the spare parts are quite a bit of it, as they degrade often, and there needs to be a constant flow of new pumps, quartz rings, robot parts, and a whole host of other things to keep things alive pretty much 24/7. I would hazard a guess it's probably a majority of it, though. However, of the raw materials, the Si makes up over half. Also, those wafer prices have no bearing on industry costs. I've seen the wafers and work done in uni clean rooms and there's really no comparison. The reason those are so cheap is most likely because they aren't pure, relatively speaking. Remember, with sub-20nm real gate lengths and low metal layer pitches coming over the next few years, a single impurity atom can gently caress an entire die. The wafers places like Intel, TSMC, Samsung, etc. use by necessity need to be way, way upwards of just 99.9% pure silicon (electronics-grade is more like nine nines), unless they need something predoped. Even then, it would be way, way upwards of just 99.9% pure Si plus As or B or P or whatever they use.
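To put rough numbers on the purity point, here's a back-of-the-envelope sketch. Assumed for illustration: ~5e22 Si atoms/cm³ (the approximate atomic density of crystalline silicon) and a ~1 µm-deep "active" surface layer on a 1 cm² die:

```python
SI_ATOMS_PER_CM3 = 5e22  # approximate atomic density of crystalline silicon

def impurity_atoms(purity_fraction, area_cm2=1.0, depth_cm=1e-4):
    # Impurity atoms remaining in a thin surface layer at a given purity.
    layer_atoms = SI_ATOMS_PER_CM3 * area_cm2 * depth_cm
    return layer_atoms * (1.0 - purity_fraction)

print(f"{impurity_atoms(0.999):.0e}")        # 99.9% ("3N"): ~5e+15 stray atoms
print(f"{impurity_atoms(0.999999999):.0e}")  # nine nines:   ~5e+09 stray atoms
```

So a merely 99.9%-pure layer would hold about a million times more stray atoms than electronics-grade material, which is why the cheap uni-clean-room wafers and the fab-grade ones aren't comparable.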
|
|
# ¿ Dec 23, 2016 07:45 |
Yeah, the numbers I give are based off of monthly consumables figures vs monthly wafer shipments for a major wafer fab. I mentioned the waste in this thread before, I know. And yeah, if a researcher in short-channel stuff (or stuff that affects it) needs a wafer, they are going to need the best stuff for the same reasons, but most of them, who are interested in other things like process steps, don't need anything as good. But, tbh, I'm not interested in playing the one-upmanship game here that I sense.
|
|
# ¿ Dec 23, 2016 18:26 |
Paul MaudDib posted:Slightly OT but what do people do that makes software non-portable across architectures? I get it if you're writing assembly, or a bytecode interpreter, or something, but if I have a C program that runs on x86 what exactly would make it not run right on Itanium, or another architecture? Of course I guess that assumes it's a standards-compliant C program, which is basically impossible, so... There's also the fact that, at the end of the day, all of the C, C++, BASIC, etc. stuff gets turned into machine code anyway, which is as arch-dependent as you can get (I'm assuming that by 'C program' you mean raw code and not a program made using C, as there are no real differences between binaries beyond compiler-specific quirks). You can make 'standard libraries' for everything under the sun, but you'd have to explicitly add support in the libraries for each architecture (in addition to the compiler support), which can bog down compilation on everything else. That's why Java came about (it's a platform-agnostic language running on a translator VM, after all), as well as the proliferation of other scripting languages (over compiled ones) as processing time became less and less of a concern. (The Game Boy, for instance, basically required bare-metal asm programming since there was so little room for error, whereas today people make shoddy scripts in slow interpreters because it's 'fast enough'.) What would be interesting is if someone came up with a 'shader' of sorts (to borrow from GPUs) that allowed platform-agnostic binaries to run bare metal. I know there are people who have gigantic hard ons for FPGAs in CPUs-- I don't see why, given a large enough array and cache, you couldn't vectorize translation (well, given fixed opcode and instruction sizes at least). Watermelon Daiquiri fucked around with this message at 03:03 on Dec 31, 2016 |
|
# ¿ Dec 31, 2016 02:37 |
Well hey, yeah, that's kinda it, though part of what I was thinking of was in the context of stuff beyond a specific platform. I honestly don't remember what all I was considering, as I wasn't actually giving it much thought. Thinking about it now, though, it'd be kinda pointless beyond the first time it's executed and distributed. I'm much more hardware than software, anyways.
|
|
# ¿ Dec 31, 2016 04:23 |
Sormus posted:The thing you take home from Linus' videos is that they are not, how to say, professionals. They sometimes manage to make a pretty solid video, but all their "lets build X" videos are a nightmare. Heh, considering tons of places manage to successfully use water cooling from chillers located on a different floor or area of the building entirely, they really didn't do a good job planning that.
|
|
# ¿ Jan 1, 2017 19:31 |
Paul MaudDib posted:You probably don't use an unending supply nasty-rear end tap water to do it though. Oh yeah, no, you use UPW (ultrapure water) for that. That's part of the whole 'not planning it out' thing e: or at least distilled Watermelon Daiquiri fucked around with this message at 20:25 on Jan 1, 2017 |
|
# ¿ Jan 1, 2017 20:11 |
How many (aftermarket desktop) cases are there that use the case itself as a heatsink? If you used the backside or bottom of the case as a fin setup (with protection), I wonder how well that'd work either as a direct mount on the cpu or attachments for liquid hoses.
|
|
# ¿ Jan 2, 2017 05:35 |
Well, I am in the industry (for now) and we have many different SRAM types for different sizes, Vts, and on-resistances, though I've never actually gone through them to see what the exact parameters are. I'd say there are probably at least ten different SRAM versions used in different areas of the chip. Silver is definitely interesting, and given that a lot of the metallization is done in a vacuum (due to plasmas and ion beams being involved), oxidation from air isn't much of an issue. While silver does react slowly with oxygen, it's actually hydrogen sulfide that's the main tarnisher of silver. While the layers separating the interconnects from the outside are thin, they should definitely be thick enough to stop most diffusion flux through (the Al layer alone is like 20-30um), unless I'm completely mixing things up. Watermelon Daiquiri fucked around with this message at 00:00 on Jan 9, 2017 |
|
# ¿ Jan 8, 2017 23:58 |
I've never really looked into it, as computer software standards and interfaces aren't my area of expertise, but where do 'chipset' PCIe/SATA/USB etc. get their bandwidth from? I mean, the host chip itself provides the 'lanes' and other bandwidth at the requisite speeds, but how does that interface with the CPU so the devices on that off-board host don't see latency relative to the devices talking directly with the CPU (DMI?)? After all, the CPU has only so many pads, and besides the many hundred needed for the various Vccs and grounds, I can only think of RAM traces, PCIe, GPIO, JTAG, SPI, and I2C serial interfaces, plus various other control/clock inputs and outputs that don't serve double duty as GPIOs. Basically, how do they multiply the PCIe lanes or other communications bandwidth going to the chipset from the CPU? Is it really just a giant buffer? Basically, if the CPU has 20 lanes, 16 for graphics and 4 for the chipset, do they really just buffer the data from all the other, like what, 12 lanes? to squeeze through that 4-lane interface to the CPU?
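For reference, the CPU-to-chipset uplink on Skylake-era boards (DMI 3.0) is electrically equivalent to a PCIe 3.0 x4 link, so the oversubscription arithmetic looks roughly like the sketch below. The 12 downstream lanes are an assumed example; real PCH configurations vary:

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> usable GB/s per lane
PCIE3_LANE_GBS = 8e9 * (128 / 130) / 8 / 1e9  # ~0.985 GB/s

dmi_lanes = 4          # CPU <-> chipset uplink (DMI 3.0 ~ PCIe 3.0 x4)
downstream_lanes = 12  # assumed example: lanes the chipset fans out to devices

uplink_gbs = dmi_lanes * PCIE3_LANE_GBS
oversub = downstream_lanes / dmi_lanes
print(round(uplink_gbs, 2))  # ~3.94 GB/s, shared by everything behind the chipset
print(oversub)               # 3.0x oversubscribed if every lane is busy at once
```

In that sense it really is "just a giant buffer": the chipset muxes all its downstream traffic over the x4-equivalent link, and everything stalls once aggregate demand exceeds roughly 4 GB/s.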
|
|
# ¿ Feb 8, 2017 09:49 |
Ok, then it's exactly as I thought. I was trying to move a poo poo ton of data between multiple hard drives, both internal and external, and other devices on the network, and things were getting stupidly laggy. I figured it was due to something like this. Also, according to ARK, the PCH handles the DIMMs? I thought those talked directly to the CPU? E: the datasheet for the Z97 chipset doesn't have RAM listed, so the ARK page must just be talking about how Z97 boards have DIMMs available. Watermelon Daiquiri fucked around with this message at 10:16 on Feb 8, 2017 |
|
# ¿ Feb 8, 2017 10:13 |
I think he meant more 'make a kaby lake E/X i5'
|
|
# ¿ Feb 11, 2017 23:15 |
Yeah, I highly doubt it could run longer than a few meters given the incredibly high bandwidth it has. This is an excellent application for optical cables, however. If they can deliver the same bandwidth (which I have no doubt they could), it would be a good way to send that data over a long distance, though it'd be more of a hassle than regular Cat cables if it breaks.
|
|
# ¿ Feb 20, 2017 23:44 |
Yeah, this would be something that'd have to rely on economies of scale (like how there are so many TVs made that the crystal used for the color carrier is so cheap people design circuits around it (Nintendo, Intel, and Apple, e.g.)). Also, I've been wondering if it could be feasible to use a topology similar to V-NAND for logic (or better yet, SRAM and eDRAM caches). I imagine if it were feasible someone would already be looking into it, but in any case, if heat isn't too much of an issue, I was thinking it could be set up in Cadence where each logical unit (like a RAM cell or a NAND gate or shift register or whatever) would be pre-built in a relatively small area (maybe in a few different vertical configurations to cover as many in and out points as possible) and the auto-router would select the most compatible config and maybe optimize the location to sync things up as needed. Of course, if heat is an issue, maybe dedicated ground (and maybe power) layers with no routes on them besides vias between layers would help wick heat away (though I doubt the small volume would be enough for multiple watts). I'm also imagining that to get connections to where they need to go (as opposed to the simple lines in flash) you'd have to spread things out to allow room, compromising the areal savings, possibly enough that the increased cost involved with ultra-anisotropic etching and-- critically-- masks and ArF/KrF photo layers makes it too expensive on its own. Still, maybe there is a compromise in there somewhere-- SRAM and eDRAM are relatively simple (but expansive) circuits (I recall that the Snapdragons are around 30-40% SRAM by area, and there is the eDRAM on the Xbone that compromised the graphics due to its size). There might be room for 3-5 such layers within the 11-13 metal layers current 14-16nm chips have, which could reduce the size taken up by caches significantly.
Actually, given that SRAM and flash are kinda similar, I wouldn't be surprised if it is done already in some products. I'm talking out my rear end for most of this, and I only did a cursory amount of research. I'm still interested in the feasibility and difficulties involved, though.
|
|
# ¿ Feb 21, 2017 00:42 |
Yeah, I know gate-all-around is 'The Next Big Thing', but I guess I had assumed it'd just be done in a similar manner to finFETs and constructed in a 2-D plane on the surface of the wafer. I can't tell (and I'm not really in the mood to research more at the moment), but is the bit line completely surrounded by the word and control gates, or is it only on one or two sides? I saw both varieties when I looked earlier, though a circular gate surrounding the channel would be a better option than having different segments to the gate. After all, sharp edges really love to gently caress with electricity lol. I completely forgot that V-NAND uses poly, though; that does complicate things... e: vvvv Thanks! I hate it when I can't think of the proper word... Watermelon Daiquiri fucked around with this message at 04:38 on Feb 21, 2017 |
|
# ¿ Feb 21, 2017 02:43 |
BobHoward posted:It is doubtful anyone is seriously working on this for logic. 3D SRAM may make sense. I figured that might be the case, then, unless resistances would be appreciably lower. Maybe with new micro- (nano-?) fluidics research, something could be feasible. And yeah, it is the NTSC color burst crystal (which is actually 1/4 * 14.31818 = 3.579545 MHz). A lot of things used clocks proportional to that frequency-- half of it is used in the NES, for example, and the SNES used both the full frequency and 3/4 of it.
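Those derived clocks are easy to check, since the NTSC color subcarrier is defined exactly as 315/88 MHz (so the crystal is 4x that, 14.31818... MHz):

```python
COLORBURST_MHZ = 315 / 88             # exact NTSC color subcarrier: 3.579545... MHz
CRYSTAL_MHZ = 4 * COLORBURST_MHZ      # the ubiquitous ~14.31818 MHz crystal

nes_cpu_mhz = COLORBURST_MHZ / 2      # NES CPU clock: ~1.789773 MHz
snes_fast_mhz = COLORBURST_MHZ        # SNES fast access rate, per the post: ~3.58 MHz
snes_slow_mhz = COLORBURST_MHZ * 3/4  # SNES slow access rate: ~2.68 MHz
print(round(COLORBURST_MHZ, 6), round(nes_cpu_mhz, 6))
```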
|
|
# ¿ Feb 21, 2017 17:20 |
You are assuming that the resistances will be the same or similar. A good portion of the resistance seen in a device is at the contacts and the layers joining the contacts to the S/D, so if we reduce the number of contacts by having multiple gates affecting the same channel, we reduce the amount of joule heating from the contacts. I was never a design engineer, so I have no idea whether they aren't doing this already with finFETs (or maybe planar), but having 3 dimensions of freedom would be much more effective than 2-D. All of this relies heavily upon being able to have silicon as the channel material and having it doped heavily and consistently. Since it would likely only be feasible to have a few layers of 'vertical space' to start with, it might be possible to pre-dope everything they need and then just carve away everything but the silicon needed for the channels (only using a max of like 3 per channel to start with). From there it should be simple to build everything up in between. On paper at least, it should reduce heat by anywhere from like a quarter (to pull a # out of rear end) to maybe even 2/3, depending on how many gates there are per channel, the density achieved, and how much the contact resistance is compared to the channel. I'd love to explore this on the dime of a university, even if it's just enough to see it isn't ever going to be feasible.
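That "quarter to 2/3" range falls straight out of a series-resistance model: at fixed current, I²R heating splits in proportion to resistance, so cutting contact resistance cuts total heat by the contacts' share of it. A toy sketch, where every resistance and current value is invented purely for illustration:

```python
def joule_heat(current_a, r_contact, r_channel):
    # Total I^2*R heating of one contact + channel series path.
    return current_a**2 * (r_contact + r_channel)

i = 1e-4  # hypothetical 100 uA drive current
# hypothetical: contact resistance is twice the channel resistance...
p_before = joule_heat(i, r_contact=200.0, r_channel=100.0)
# ...and sharing contacts between stacked gates halves the per-channel contact R
p_after = joule_heat(i, r_contact=100.0, r_channel=100.0)
print(1 - p_after / p_before)  # ~1/3 of the heat gone, in this made-up case
```

Plug in a smaller contact share and you land near the "quarter" end of the range; a bigger share (or more gates sharing each contact pair) pushes toward the "2/3" end.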
|
|
# ¿ Feb 22, 2017 00:53 |