|
Glen Goobersmooches posted:I'm getting a brand spankin' new 670 tomorrow, and I've been reading some scary things about Nvidia's last few rounds of driver releases. Is there a general consensus on the least wretched Beta version at the moment? I powered on mine with 305.67, worked great over the weekend. Which reminds me, Agreed, would you mind posting a screenshot of your fan profile for your 680?
|
# ¿ Aug 21, 2012 02:20 |
|
Jan posted:To further compound on this, with some personal experience: Yeah, I had my 460 running 2560x1600, and just left AA off, perhaps some other settings dropped to medium. Now with a 670, I can max out all the settings, and even throw on some AA if I feel like it. More importantly though, stays up at 60FPS most of the time instead of slightly above 30 (and I play with V-Sync on).
|
# ¿ Aug 23, 2012 19:22 |
|
Lowclock posted:I tried a few other programs, and other cards and slots too, and always get 2x in the top slot. If I move it to the middle one it shows 8x like it should, and benches higher, and it shows up as 16x in another motherboard. I tried messing with BIOS settings for a while, which didn't really make any difference either. I think it actually might be something wrong with the motherboard. Yeah, it's possible there's a cold solder joint somewhere on one of the muxes or Tx-side caps on the mobo. If you can get Asus to advance-RMA it, you can run it at x8 without a huge performance loss.
|
# ¿ Sep 6, 2012 23:25 |
|
Kramjacks posted:So apparently MSI was overvolting their GTX 660 Ti and 670 Power Edition cards, which gave them performance gains but also caused some systems to fail to post or get black screens after a change in load. That circuit looks really weird, any other EEs floating around? I don't know where they sourced the diagram from, but it looks more like a design error than anything else IMHO. AC coupling the anode of a Zener regulator? If anything, that cap would be on the output of the regulator (between cathode and anode). Maybe that sheet in Allegro/whatever was really messy and they missed it. e: looking at the datasheet of the RT8802A, I don't see what pin would benefit purposefully from getting >5V. I could see trying to game SS or a compensation network to get a more aggressive response (Type 2?) but that wouldn't make sense here. I'd hold off on flaying MSI for now, though it still is a bit odd that something this simple (and from the reference design apparently) made it through. movax fucked around with this message at 21:05 on Oct 8, 2012 |
# ¿ Oct 8, 2012 20:51 |
|
Wozbo posted:http://www.xbitlabs.com/news/cpu/display/20110119204601_Nvidia_Maxwell_Graphics_Processors_to_Have_Integrated_ARM_General_Purpose_Cores.html That's pretty slick, not to mention clever and relatively cheap. You get the netlist for the ARM core from ARM, free to implement it on your chosen process, and make whatever changes you deem fit. Since it's ARM, I could see them leveraging AXI to create an interconnect between the ARM cores and their logic, or their own high-performance bus. It would have been pretty boring if they just threw ARM cores on there and left the usage up to software developers, but having them autonomously (presumably) control GPU functionality is pretty neat. Maybe instead of drivers executing a fuckton of MMIO and hitting registers, it'll be some kind of "dispatch workqueue" thing. (Just guessing, I've only ever worked with Intel drivers). Maybe prettier textures will happen in line with the Maxwell release also, since the next-gen consoles should have more RAM & VRAM. Have to stress PCIe 3.0 bandwidth somehow!
|
# ¿ Oct 8, 2012 21:23 |
|
Alereon posted:The really interesting thing to me is that there actually are no ARM cores here. Project Denver is an implementation of Transmeta's Code Morphing technology to execute ARM code on a custom-designed nVidia core. The original plan was to execute both x86 and ARM on the same cores, but Intel successfully sued to block this x86 compatibility, arguing that the x86 license didn't transfer to nVidia when they acquired the corpse of Transmeta. Oops, I thought they were just licensing it, not rolling their own ARMvX-compatible core. Should be interesting times next year.
|
# ¿ Oct 10, 2012 19:27 |
|
Endymion FRS MK1 posted:How does RMA-ing a card work if they no longer make the card? I returned a Sapphire 6950 I had bought last year for a fan problem, and doing a quick look on Newegg shows the card being deactivated. They'll probably send you a "better" one. Usually, this means the current-generation equivalent of where the 6950 used to live in the line-up. This can lead to hilarity in generation gaps where the newer card actually performs worse in certain games and conditions. Then again, I bought a single 6800GT from eVGA and my murderous PSU caused me to RMA for a 7900GT, followed by an 8800GTS 640, so that wasn't too bad.
|
# ¿ Oct 11, 2012 20:43 |
|
Alereon posted:I'm pretty sure VRAM and system RAM do use the same address space, that's why 32-bit systems can only address 4GB-VRAM-all other hardware reservations worth of system RAM. This isn't relevant for the case of a 32-bit app running on a 64-bit system because Skyrim doesn't care about the VRAM, only the video driver does, and that's a 64-bit application. VRAM is accessible via a PCI BAR most of the time. I can only assume that as VRAM sizes grew while 64-bit adoption was slower, GPU manufacturers added some kind of pseudo-VM to address the full VRAM while only requesting a 256 or 512M BAR. So I guess you'd have four 512MB pages on a 2GB card. On modern NV, I think BAR0 is command/control, BAR1 is VRAM; I don't know what the other BARs do off-hand. The video driver is furiously executing memory read/memory write commands that get routed over PCIe to communicate with the GPU; this is why a GPU with an autonomous CPU on-board (like NV + ARM) gets interesting, because it has the possibility of breaking tasks down into the video driver sending a high-level command over PCIe, and then the CPU on-board takes care of programming registers. Most PCI MMIO space gets mapped below 4G by the BIOS to retain compatibility with 32-bit OSes; it'll remap any remaining DRAM above the 4G limit where 64-bit OSes can get to it. MTRRs will be set accordingly (on Linux, cat /proc/mtrr to see). A 32-bit OS gets whatever RAM fits below the 4G barrier + PCI MMIO, 64-bit OSes get that + whatever memory is remapped at 0x100000000 and above. It is totally possible on buggier BIOSes to brick your system by adding so many PCI devices that you literally cannot place any user RAM below 4G, and the MRC eventually ends up with TOLUD (top of lower usable DRAM) at 0. e: discrete GPUs only consume address space, whereas an IGP that needs system memory will actually steal physical memory for its needs e2: Go to Device Manager, and the properties for your GPU.
Go to the "Resources" tab and you'll see the memory/IO resources there. It should have some small amount of IO address space for legacy reasons, and the rest should all be BARs. At work my 256MB Radeon HD5700 for instance has:
- 0x(00000000)E0000000-0x(00000000)EFFFFFFF - 256MB BAR
- 0x(00000000)F7DE0000-0x(00000000)F7DFFFFF - Control registers?
- 0xDC00-0xDCFF (+ various) - IO memory (completely discrete from memory space)
I have all those extra 0s because I'm on 64-bit Win 7. Looks like the control BAR got squeezed into the little region available near the top of lower memory where a lot of tiny BARs end up on Intel platforms. On Linux, just do lspci -v. movax fucked around with this message at 20:59 on Oct 17, 2012 |
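To put rough numbers on that below-4G carve-up, here's a sketch with made-up sizes (1GB of MMIO, 8GB of DRAM; not any real board's memory map):

```python
# Sketch of how the BIOS carves up the 32-bit address space (made-up sizes).
FOUR_GB = 1 << 32
installed_dram = 8 * (1 << 30)   # 8GB of physical DRAM
pci_mmio = 1 * (1 << 30)         # 1GB reserved for BARs, flash, APICs, etc.

# TOLUD: top of lower usable DRAM -- DRAM has to stop where MMIO begins.
tolud = FOUR_GB - pci_mmio

# A 32-bit OS only sees the DRAM that fits below 4G.
visible_32bit = min(installed_dram, tolud)

# A 64-bit OS also gets the rest, remapped at 0x100000000 and above.
visible_64bit = visible_32bit + (installed_dram - visible_32bit)

print(hex(tolud))                  # 0xc0000000
print(visible_32bit // (1 << 30))  # 3  (GB usable on 32-bit)
print(visible_64bit // (1 << 30))  # 8  (GB usable on 64-bit)
```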
# ¿ Oct 17, 2012 20:52 |
|
Jan posted:Since the subject piqued my curiosity, I did some extra research, and that does sort of match what I've found. What I'm unsure on is that while bus I/O (AGP, PCI-E or otherwise) does seem to require some shared memory (for memory mapped I/O, at least), there shouldn't be any correlation between the amount of VRAM a GPU has and the amount that mapped space will take up. All it does is create a buffer through which the CPU and GPU can communicate, and there's no point making this buffer larger than bus bandwidth. The problem is on a 32-bit OS with only 4GB of addressable memory, you very quickly run out of address space for physical DRAM when you have to devote gigs of memory to memory-mapped I/O. Theoretically, you could lose up to 256MB of addressable memory just for the entirety of PCIe config space, if you wanted to support all of it. e: theoretical max: 256 buses (numbered 0-255) * 32 devices * 8 functions * 4KB config space = 256MB movax fucked around with this message at 21:05 on Oct 17, 2012 |
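Quick sanity check on that worst-case config space math (bus numbers 0-255 give 256 buses):

```python
# Worst-case PCIe extended config space: every bus/device/function populated,
# each exposing the full 4KB config region.
buses, devices, functions = 256, 32, 8
cfg_bytes = 4096
total = buses * devices * functions * cfg_bytes
print(total // (1 << 20), "MB")  # 256 MB
```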
# ¿ Oct 17, 2012 21:03 |
|
KillHour posted:I don't know why they don't just program games to be 64 bit nowadays. Is anyone really still using a 32 bit OS for gaming? I could see a few reasons:
- Engine / toolkit might not support it. Developers like Crytek might not care about this so much, but other houses that license an engine might be limited by the version/release of the engine they are using
- Going to a 64-bit release means that no 32-bit system can run that code. Deploying both versions doubles your QA load, I would imagine. Not to mention I'd imagine some pointer hell would be involved as you figure out what's broken on each version.
My first point might be moot though, as obviously some engines are capable of targeting PS2/PS3/Wii/X360/etc with the same codebase. A gamedev could probably comment better than I can. The biggest benefit of 64-bit (IMO) for "most people" is that a 64-bit OS essentially removes any limitation on addressable memory. Some chipsets / platforms "only" support 40/48 bits or so, which is still a stupid large amount of memory. Once everyone's on 64-bit, you could have PCIe devices exposing stupid large BARs that encompass their entire onboard memory without any paging. Who cares about burning 8GB of memory space when you're not going to run a 32-bit OS and have exabytes of memory space? movax fucked around with this message at 21:33 on Oct 17, 2012 |
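For scale on those 40/48-bit physical address widths:

```python
# Addressable memory at common physical-address widths, in GiB.
gib = 1 << 30
addressable = {bits: (1 << bits) // gib for bits in (32, 40, 48)}
for bits, size in addressable.items():
    print(f"{bits}-bit: {size} GiB")
# 32-bit: 4 GiB; 40-bit: 1024 GiB (1 TiB); 48-bit: 262144 GiB (256 TiB)
```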
# ¿ Oct 17, 2012 21:30 |
|
Agreed posted:What am I missing, he just said he's running at PCI-e 3.0 16x and 4x, is he mistaken or have I misread? Typo, I assume; a P55 mobo wouldn't have PCIe 3.0 support. A card strapped to a x4 link off the PCH also has to traverse DMI to reach the CPU.
|
# ¿ Oct 18, 2012 01:52 |
|
Alereon posted:I'm not going to pretend I was able to understand this, but is there maybe a difference between the "right" way to do things and the way it gets done in practice? Or is that what you're saying? I've never seen a system with 32-bit OS and a discrete videocard have more available RAM than what would be expected from 4GB-VRAM-other hardware reservations, and it seems like if it was possible to do without a hell of a lot of development they would have for the competitive advantage. Sorry, I was just finishing up patches to an internal platform that corrected some issues with BIOS MMIO assignment so the acronyms just kinda flowed Not quite sure what you're asking, but in terms of PCI MMIO (memory-mapped IO) there's nothing terribly special about a GPU, other than them being the consumer device most likely to take a huge bite of address space. Your ethernet controllers and such also eat up MMIO space, but their BARs are less than a megabyte in size. In my case, I have custom data acquisition hardware that eats ~128MB worth of MMIO per unit, with up to 8-10 of these units hooked up to a given system. The customer has a 32-bit Linux kernel that they have no plans to upgrade from soon, so they have to suffer with only ~2GB of usable RAM in the system. Chuu posted:One more question about 4GB cards, if you have two 2GB cards in SLI, are the textures duplicated or is it essentially the same memory addressing as a single 4GB card? Not sure actually, that's an interesting question. From a hardware perspective, I could see the GPUs recognizing that there is an SLI bridge present and changing the BARs they request appropriately. Agreed posted:Can't last, I understand that, this is REALLY looking like the last generation where PCI-e 2.0 8x is going to offer performance in the vast majority of scenarios within a few percentage points of PCI-e 3.0 16x. But it does make me much more comfortable waiting 'til Haswell for my next major system upgrade.
Probably just need to enable the Marvell SATA controller for my optical drive to free up one SATA slot in case I need to expand storage, otherwise it looks like I should be fine until Haswell. Which is pretty exciting in itself The downside of PCIe 3.0 is that it's pricier to develop. Due to the increased speed, you probably need a 12.5GHz or 16GHz scope to properly debug signal integrity issues. Granted, a lot of the cost of testing is eaten by companies like Altera or Xilinx (plus the usual Intel, AMD) that developed PCIe 3.0 IP and validated their transceivers/solutions against PCI-SIG specs. At PCIe 3.0 speeds you have to use a time domain reflectometer, an accurate model of your board (HyperLynx SI or similar) or brute math to get the S-parameters of your board (Touchstone file) and properly apply emphasis/de-emphasis to your captured waveforms. Basically PCIe 3.0 is fast as hell and requires some investment in development tools and increased development time. It lowers pin-count sure, but a lot of companies will still find it cheaper to push out PCIe 1.1/2.0 devices, especially if they started developing their ASIC with older-generation IP and SerDes. The lower pin-count is awesome, but peripheral vendors need to catch up. Think of how many RAID/HBA controllers you could run from 1 x16 PCIe 3.0 link, heh. Could even throw a PCIe switch into the mix to use as a bandwidth bridge. e: BAR is Base Address Register. Software writes all 1s to this register, and then reads the value back. Hardware ties certain bits to 0, thereby reporting to the host system how much memory it wants. e2: Yeah Jan, going up to PCIe 2.0 even should result in a nice performance boost for you.
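That BAR sizing handshake, sketched for a hypothetical device wanting a 256MB window (ignoring the low type/prefetch flag bits a real memory BAR carries):

```python
# BAR size discovery: software writes all 1s, hardware keeps the low
# log2(size) address bits wired to 0, and the read-back encodes the size.
BAR_SIZE = 256 * (1 << 20)   # hypothetical device wanting 256MB

def bar_readback_after_all_ones(size):
    # Hardware hardwires the low address bits to zero.
    return 0xFFFFFFFF & ~(size - 1)

readback = bar_readback_after_all_ones(BAR_SIZE)
decoded_size = (~readback & 0xFFFFFFFF) + 1   # what the host computes
print(hex(readback))              # 0xf0000000
print(decoded_size == BAR_SIZE)   # True
```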
|
# ¿ Oct 18, 2012 04:03 |
|
Professor Science posted:(important note: one PCI device can have more than one BAR, and there are fun alignment restrictions with BARs that may cause the actual physical address space consumed to be much greater than what you expect versus the sum of the size of the BARs) Right, that's why I mentioned older Nvidia cards having multiple BARs, one of them being command/control, one (presumably) aliased to VRAM, etc. At least that's what the Nouveau docs seem to suggest up to NV50 or so. Type 0 header allows for up to six 32-bit BARs, though I haven't run into a device with that many in the field. BAR alignment as described (software writing all 1s and hardware holding certain bits at 0 on read-back) ends up being power-of-two, so yeah, if you need 90MB you end up burning 128MB.
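The power-of-two rounding in one line (hypothetical 90MB request):

```python
# BAR regions are power-of-two sized and aligned, so requests get rounded up.
def bar_granted(requested_bytes):
    """Smallest power-of-two region that covers the request."""
    size = 1
    while size < requested_bytes:
        size <<= 1
    return size

mb = 1 << 20
print(bar_granted(90 * mb) // mb)  # 128 -- a 90MB need burns 128MB of space
```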
|
# ¿ Oct 18, 2012 05:40 |
|
Mierdaan posted:Excellent, thanks. I also found this old brief describing the differences but it looks like it's from 2003. Gives me a good idea of how they treat the two different lines, though. Basically when you pay for the Quadro, you get the ISV certification, ECC memory and you know that software vendors qualified against your particular card. Should cut down on compatibility issues as well. It's somewhat an example of executing the whole "we can sell the [essentially] same product to w people for $x, and y people for $z, so let's do both!".
|
# ¿ Oct 24, 2012 17:14 |
|
Rakthar posted:After having great luck with two Nvidia cards in a row (8800 GTS, then a GTX 570) I decided to try out a GTX 680. I got an EVGA model from Amazon and plugged it right in. Almost right away I noticed this weird hitching / stuttering that was present both during gaming and video playback. If I tried watching youtube videos or regular videos, about every 30 seconds there would be a noticeable stutter and then it would resume smooth playback. Did you fully clean drivers before upgrading? Do you have any OCing software (like EVGA Precision) installed?
|
# ¿ Dec 15, 2012 19:43 |
|
parasyte posted:Though if you were out even longer than that, ATI originally had 7000-9000 series cards (R100 and R200 chips back around the turn of the century) and now are back to that. Radeon 8500 All-In-Wonder! 7500 and 8500s were getting their poo poo kicked in by GeForces until R300 (Radeon 9700) launched with Doom 3, and you'd get insane performance improvements over the GeForce 4 with AA and AF maxed. And it was codenamed Khan
|
# ¿ Dec 19, 2012 17:45 |
|
Agreed posted:If Intel gets their driver poo poo together, they stand a chance at owning way more than any one company should. Too much integration. Not... sure if they'll be allowed to keep it at that point, the bread and butter of the graphics companies are that performance range, much as enthusiasts would like to feel included. Do I want a laptop with great battery life and extremely good graphics performance compared to current options? Absolutely! Do I want Intel to own the world? Don't feel they've earned it. I don't think it's sound economics to allow monopoly on the grounds of contemporary success, regardless of how impressive it might look at the moment; the long run does not extrapolate cleanly from the short term, and it's bad decision-making to just trust a company to keep besting themselves when over time that self-interest folds over to a stronger motivation to be profitable (a motive that no longer requires extraordinary innovation, just moving things along now and again). Hey, you're alive! eames posted:And then next up theres Broadwell which will apparently bring another 40% IGP improvement on top of Haswell. FWIW I just got a ~80-slide Intel roadmap for 2013/2014, and I'd say 30 slides were only about graphics stuff. They ain't loving around, which is nice because dGPUs in notebooks slaughter battery life like none other. With no dGPU that should let notebook vendors use simpler stuff like eDP as well, which couldn't coexist (at least as of a few gens ago) with using the PEG port of the CPU.
|
# ¿ Jan 10, 2013 16:07 |
|
roadhead posted:AMD falling all over themselves to do it cheaper - and having APUs available for prototype/dev kits immediately? Just wild speculation on my part. I guess AMD would be a one stop shop for your CPU & GPU needs as well.
|
# ¿ Jan 23, 2013 16:00 |
|
chippy posted:This isn't an upgrade question, but it is a part-picking question. I thought it might fit better in here than in the system building/part picking thread, but feel free to tell me to gently caress off there if it's not. What's your budget like? You could probably get a used Fermi card (GTX4xx/5xx) for pretty cheap from someone who's upgraded, and if it's a halfway-decent OEM, the cooler should be pretty drat quiet. My eVGA GTX460 was usually inaudible even under heavy load.
|
# ¿ Mar 11, 2013 18:23 |
|
Reminder to check NV's website for beta drivers that'll probably boost Bioshock performance quite a bit; if they're not out yet, I'd imagine they're due very early next week.
|
# ¿ Mar 27, 2013 15:18 |
|
Factory Factory posted:I'm thinking we might need an FCAT and frame latency writeup for the OP. Y/N? I think that's a good idea, I was thinking about it the other day and a bunch of the stuff is out-of-date as well.
|
# ¿ Mar 28, 2013 19:05 |
|
KillHour posted:Damnit, Epic, why don't you ever release your tech demos to the public? Rhetorical question I'm sure, but fairly certain there is a very specific combination soup of drivers, hardware, etc in use for the demos
|
# ¿ Mar 30, 2013 07:25 |
|
beejay posted:Has anyone else been having trouble with nvidia 314.22 drivers? I get "freezes" for 1-5 seconds every game when playing Starcraft 2 and driver crashes at least once a day. I only play Starcraft 2 right now and last night it caused me to drop out of a game once and then froze my game badly enough that when it recovered all my stuff was dead. Not fun. Anyway I went looking through event viewer and this started on 3/16 which was when I installed 314.21 beta. Yeah, I feel like my secondary monitor performance (two off a GT210) has gone waaaaay downhill after I installed 314.22 for Bioshock Infinite. Hopefully the next release fixes whatever regressions appeared. Endymion FRS MK1 posted:Here's an article showing the top 10 most important graphics cards. Kind of a lovely list; starts off early which is fine, but skips a lot of notable cards like the 9700 that were huge steps up / disruptive forces in the market. But it's PC World, so... I think I did a write-up on the Riva 128 or TNT2 earlier in the thread; I was hoping to do a series of "retro" posts, but man, writing words takes time.
|
# ¿ Apr 17, 2013 18:18 |
|
A 2GB 660 Ti should be able to handle 1080p with settings maxed/close-to-max (excepting some insane anti-aliasing/other special effects), so if you're not seeing the performance you expect, it could be your G840 bottlenecking; then again, that's a Sandy Bridge-based Pentium, pretty much brand new relative to the current crop of games. I wouldn't blame the 660 Ti either; do you have a ton of background tasks running or something?
|
# ¿ May 10, 2013 19:07 |
|
I don't think we should necessarily be recommending away from AMD cards (this reminds me that the OP recommendations are probably in need of updating), and they certainly deliver a better value at certain price points (especially when there are 2-3 AAA games thrown in). I think single-GPU performance at, say, 1080p is still solid for most gamers, assuming drivers are stable/ready at that point.
|
# ¿ Jun 5, 2013 17:16 |
|
Agreed posted:There is a very good chance I will phone-Skype you from the hospital high as a kite. Shouldn't be a long stay at all (and thank god, medical bills are insane, insurance covers far too little... bleh, there's a reason I want to upgrade to a new card out of principle but I can't, hah) but it is inpatient. They pump the good poo poo while you're inpatient. So, uh... prepare for that, amigo. Haha. Also I missed this, well wishes Agreed. Please tape your high-as-kite rantings about GPU overclocking to share with us afterwards
|
# ¿ Jun 5, 2013 17:17 |
|
Just updated to 320.18... to be fair, the update took unusually long compared to any previous driver install and my cards beeped several times as well. No other signs of damage or malfunction though.
|
# ¿ Jun 14, 2013 17:48 |
|
Nice performance improvement on the 770, but my 670 is less than a year old, I think. I will probably hang on to it until Maxwell and eke out all the performance I can from OCing. 2GB of VRAM is definitely on the border for 1600p
|
# ¿ Jun 25, 2013 15:39 |
|
Agreed posted:Awesome! Heck yeah; I missed your guide a few pages(?) ago apparently, PM me links to those and I'll totally throw them in the OP.
|
# ¿ Jun 28, 2013 21:54 |
|
Agreed - I totally did get your PM, but got carried away fixing my DNS and hosting, which I discovered was broken when the OP lacked any images whatsoever. Updated OP and thread title, sorry for the delay. movax fucked around with this message at 01:44 on Jul 19, 2013 |
# ¿ Jul 19, 2013 01:40 |
|
loving nvidia...trying to install R331 drivers and it rendered my system completely unusable...can't uninstall drivers, can't reinstall drivers, can't boot normally; only boot mode that works is 640x480. I hate computers.
|
# ¿ Oct 22, 2013 06:38 |
|
necrobobsledder posted:For everyone else that had a problem after running nVidia updated drivers, I was able to get out of being forced into safe mode to do anything by uninstalling all drivers and nVidia and ATI software completely (including ATI - I had both ATI and nVidia drivers present briefly) and cold booting each time for each configuration until I had finally installed drivers. Granted, this may not be the proper solution, but I tried reinstalling drivers like everyone else and that didn't work whatsoever. I'm suspecting something is wrong with a number of users' configurations that nVidia didn't quite test for. It kind of pisses me off because I'm not exactly a crazy power user when it comes to my GPUs and I've stuck with mainline driver releases, so I shouldn't have had any problem at all. Maybe it has something to do with using a GTX 680 that I'm not aware of but seeing that people on even 460s getting similar problems, I'm just going to leave the judgment at "nVidia QA hosed up" Yeah this is pretty much how I fixed my problem...it just took like three loving hours. Everything seems normal now. Nothing too crazy about my config either...one GTX 670, one GT 210, three monitors. Though recently I've been driven nuts by incredibly lovely 2D performance (especially if Flash/GIFs are on web pages) on the displays run by the GT 210; is it really that lovely of a card it can't handle some 2D workload? I wonder if the x16 slot I shoved it in is secretly a x1 electrical...
|
# ¿ Oct 23, 2013 18:35 |
|
Alereon posted:It's hard to appreciate how truly lovely a Geforce 210 is: We're talking 20% the performance of Intel HD Graphics at best. You're definitely not going to get acceptable performance even for basic 2D. In particular, it has as little as one eighth the memory bandwidth of your system RAM. Ugh...it's a passively cooled GT 210, MSI I think. Then again, I bought it when I had a GTX 460. I should see if I have the cables to make a 3 monitor setup happen on my 670, only plan on gaming on one of them. Any horrible downsides?
|
# ¿ Oct 24, 2013 01:52 |
|
Factory Factory posted:Higher idle clocks if the screen resolutions are mismatched. Power equivalent to, eh, running two or three CFL bulbs. If only I had a Z68 mobo, this wouldn't even be an issue. Hooked up one more display to the 670, gonna see how it works out now and if it murders my FPS by any appreciable amount. e: loving Nvidia, how do you sell a GPU that spikes to 33% usage displaying a browser window where the only moving element is my avatar, jesus
|
# ¿ Oct 24, 2013 04:39 |
|
Also, yes the OP is out of date, Factory Factory and/or Agreed (and all the other regulars) do you guys want to kick off a new one?
|
# ¿ Oct 24, 2013 04:41 |
|
Bloody Hedgehog posted:Because the drivers recognize that a browser is open, and therefore ramps up the clock speeds to account for the fact that browsers all use GPU-accelerated components now, no matter if the webpage being displayed is a text file or a video heavy multimedia site. I haven't used AMD's stuff for a while, but I'd imagine their cards/drivers do the exact same thing. Interestingly, I was having some trouble with Chrome playing back Flash, so I switched over to IE... barely pegs either GPU. I guess I'll leave the GT 210 hooked up running my one email/IRC-only display. Factory Factory posted:Good lord, I'm still plodding away at the overclocking thread rewrite. I'd be happy to work on tasks you want to put in front of me, but I can't take on a full rewrite. I think a lot of the content is still good, we just need to keep the first post w/ recommendations up to date. If you can let me know what to throw there (I've been under a rock when it comes to what AMD's doing) I can definitely do that.
|
# ¿ Oct 24, 2013 05:44 |
|
Agreed posted:No. Here, let me try again, with a little more brevity. Sorry to bring this back up, but I've been super busy lately with a new job (and I don't do much directly with GPUs anymore), but I will try to be around more often, and Agreed/FF you can definitely take over the OP whenever you want, it's woefully out-of-date.
|
# ¿ Dec 10, 2013 01:09 |
|
Gwaihir posted:We have a post the pictures of your ~real~ desktop thread, what kind of hardware forum would we be without a "Post the pictures of your desktop(pc this time)" thread? veedubfreak posted:So UPS was kind enough to leave my 800 dollar package from Mountain Mods at my door last night without even having the courtesy to ring the doorbell. God I loving hate UPS. Where should I post a build thread? Go for it, and if it turns into a twisted parody of [H] build threads, I'm OK with that too. We could use more threads than just the megathreads floating around.
|
# ¿ Dec 12, 2013 19:18 |
|
I missed the Ambilight clone discussion; don't all commercial solutions use FPGAs? Nothing else will be OS/driver/PC-independent I assume. I take it the real TVs that can do it take obvious advantage of the fact they have access to the raw display data internally. Wonder if it's worth Kickstarting...
|
# ¿ Dec 28, 2013 04:35 |
|
Factory Factory posted:Probably not, since Phillips has US patents. Oh, well that sucks. Maybe do it in a way where it just happens to be a device that sits in line and outputs color data...what you do with it is up to you. Could be a fun project!
|
# ¿ Dec 28, 2013 06:30 |