|
Barnyard Protein posted:how dare my computer run programs tht use up the ram computer i need that ram to run computer prrogarm
people on yospos unironically post this if it's the wrong program, though. (a web browser)
that said
Barnyard Protein posted:i keep 12GB of my computer's 16GB RAM in a box on a shelf to ensure that it remains free of harmful data
|
# ? Dec 26, 2015 09:51 |
|
|
has package kit found a problem to solve yet
|
# ? Dec 26, 2015 10:50 |
|
Suspicious Dish posted:did you ever submit a bug report, or even cat /proc/$(pidof pulseaudio)/stack whenever this happens never even considered it. i learned long ago that linux desktop bug reports are pointless i filed a bug against gtk3. it was ignored for over two years, then fixed by an unrelated set of changes. that turned me off on community participation if i don't have my own patch for the bug, i don't even try anymore. Suspicious Dish posted:because i don't believe you i thought it happened to everyone so this surprises me Notorious b.s.d. fucked around with this message at 16:30 on Dec 26, 2015 |
# ? Dec 26, 2015 16:28 |
|
Notorious b.s.d. posted:never even considered it. i learned long ago that linux desktop bug reports are pointless Through the distros it is hit or miss, but Poettering's projects usually have well-monitored bug trackers. PA looks like it moved over to FreeDesktop.org.
|
# ? Dec 26, 2015 16:43 |
|
I don't really have time to fill out bug reports, I'm busy on my Linux
|
# ? Dec 26, 2015 18:16 |
|
Wheany posted:people on yospos unironically post this if it's the wrong program, though. (a web browser) i dont mind browsers taking tons of memory, i do mind browsers taking so much memory that i get in swap hell and nothing else works
|
# ? Dec 26, 2015 18:47 |
|
oh ho ho i wish I just had programs that just tightloop the CPU. Nah, WebKitGtk has this thing where it likes to go into a tight loop allocating and dirtying memory. The OOM killer will probably kill it. Eventually. After it's thrashed its way through 8GB of swap. In the meantime the entire graphical session just locks up completely to the point where you can't even move the mouse. Sometimes if I hammer Ctrl+Alt+F3 enough I can log in to a text console and kill -9 the loving thing. Really just about all of my system-crapping-out problems come down to a web browser in some way or other. HTML5 Rocks!!!11ftw!
|
# ? Dec 26, 2015 19:37 |
|
Barnyard Protein posted:i keep 12GB of my computer's 16GB RAM in a box on a shelf to ensure that it remains free of harmful data
|
# ? Dec 26, 2015 21:06 |
|
I occasionally see complaints about Xcode using a lot of CPU when a developer opens a project. not making their system or Xcode itself unusable, mind you, just a spike in CPU usage that they notice because they leave Activity Monitor running. it's because their source code is being indexed, letting them do things like navigate quickly between definitions/declarations/uses of symbols, see callers and callees of methods, see only meaningful classes in IB inspectors, and so on. but for some people anything more than a couple percent CPU use for a couple seconds is a surprise
|
# ? Dec 26, 2015 23:20 |
|
people are as big of tightwad resource hoarders with computers as they are with money
|
# ? Dec 26, 2015 23:37 |
|
eschaton posted:I occasionally see complaints about Xcode using a lot of CPU when a developer opens a project i don't have that issue. i do have an issue where i will start typing and the cpu spikes while the beachball of doom spins and everything is frozen for several seconds. most of the time it recovers, but occasionally xcode just crashes afterwards. xcode is still much better than it was 6 or 7 years ago though.
|
# ? Dec 26, 2015 23:59 |
|
fritz posted:i dont mind browsers taking tons of memory, i do mind browsers taking so much memory that i get in swap hell and nothing else works Mr Dog posted:oh ho ho i wish I just had programs that just tightloop the CPU. Nah, WebKitGtk has this thing where it likes to go into a tight loop allocating and dirtying memory. The OOM killer will probably kill it. Eventually. After it's thrashed its way through 8GB of swap. In the meantime the entire graphical session just locks up completely to the point where you can't even move the mouse. Sometimes if I hammer Ctrl+Alt+F3 enough I can log in to a text console and kill -9 the loving thing. This never happens to me on chrome. out of my 24 gb (or my previous 8 gb) i've never seen it use more than 2 or 3 gb on a bad day, but i dont keep dozens or hundreds of tabs open for days for no reason like a hoarder
|
# ? Dec 27, 2015 01:53 |
|
eschaton posted:I occasionally see complaints about Xcode using a lot of CPU when a developer opens a project see also: eclipse
|
# ? Dec 27, 2015 02:35 |
|
fritz posted:i dont mind browsers taking tons of memory, i do mind browsers taking so much memory that i get in swap hell and nothing else works again, that's every program. if something causes my computer to swap, then i have a problem. if something uses 24 gigs of ram and my computer isn't swapping, there is no problem
|
# ? Dec 27, 2015 10:41 |
|
i watched imagemagick use 12 GB of ram when i made a gif one day. it's a pretty recent release with the highest color depth option, so it probably converted those 1080p 8-bits-per-component source frames to 32-bit floats per component before starting to process them
|
# ? Dec 27, 2015 10:45 |
|
Mr Dog posted:oh ho ho i wish I just had programs that just tightloop the CPU. Nah, WebKitGtk has this thing where it likes to go into a tight loop allocating and dirtying memory. The OOM killer will probably kill it. Eventually. After it's thrashed its way through 8GB of swap. In the meantime the entire graphical session just locks up completely to the point where you can't even move the mouse. Sometimes if I hammer Ctrl+Alt+F3 enough I can log in to a text console and kill -9 the loving thing. Why does this even happen? Why does the kernel allow one process to push everything else out of memory? Why isn't the memory hog itself swapped out instead of the loving X server or my text editor? This is my #1 desktop Linux issue. I know I can just ulimit the risky processes or disable swap entirely, but I don't mind swap or memory-hungry processes on principle, as long as they stick to stepping on their own toes.
|
# ? Dec 27, 2015 12:06 |
|
Athas posted:Why does this even happen? Why does the kernel allow one process to push everything else out of memory? Why isn't the memory hog itself swapped out instead of the loving X server or my text editor? This is my #1 desktop Linux issue. I know I can just ulimit the risky processes or disable swap entirely, but I don't mind swap or memory-hungry processes on principle, as long as they stick to stepping on their own toes. swap handling is software stuck in the past, when swapping unused stuff out was a necessity for basic functioning of home computers. if you are swapping out to make the vast monolith XYZ run, it can't be pieces of XYZ, because you will never get into a workable state down that route. not sure to what extent linux has been growing an awareness of what is actually basic user-interaction stuff that should get some priority when it comes to latency (scheduling, swapping, etc), but i imagine it is one of those somewhat sore points for kernel people vs. server people vs. desktop people
|
# ? Dec 27, 2015 13:12 |
|
Athas posted:Why does this even happen? Why does the kernel allow one process to push everything else out of memory? Why isn't the memory hog itself swapped out instead of the loving X server or my text editor? This is my #1 desktop Linux issue. I know I can just ulimit the risky processes or disable swap entirely, but I don't mind swap or memory-hungry processes on principle, as long as they stick to stepping on their own toes. the kernel doesn't know what your processes are doing, it can't. it assumes the biggest, hungriest process is also the most important. and sometimes that's true. you would probably be really angry if your gimp instance or your web browser got pushed out of RAM to preserve allocations for the kde mixer widget or something. of course, this behavior is tunable. but you shouldn't try.
|
# ? Dec 27, 2015 18:14 |
|
The_Franz posted:i don't have that issue. i do have an issue where i will start typing and the cpu spikes while the beachball of doom spins and everything is frozen for several seconds. most of the time it recovers, but occasionally xcode just crashes afterwards. anything "interesting" in your configuration at all?
things like syntax coloring of symbols and code completion are also driven by the index, and use libclang, so code that takes a while to compile can also cause a performance hit on that stuff. there are also aspects of that which can be impacted by having tons and tons of targets. of course none of that should crash the IDE; if you either file a bug or throw a crash report in a pastebin I can take a look
|
# ? Dec 27, 2015 18:40 |
|
Athas posted:Why does this even happen? Why does the kernel allow one process to push everything else out of memory? Why isn't the memory hog itself swapped out instead of the loving X server or my text editor? This is my #1 desktop Linux issue. I know I can just ulimit the risky processes or disable swap entirely, but I don't mind swap or memory-hungry processes on principle, as long as they stick to stepping on their own toes. There has been an endless stream of clever heuristics added to the scheduling and VM subsystems in Linux that aim to avoid pathological behaviours, and every last one of them has turned out to be detrimental in the common case. Not to say that the problem is insurmountable, only that it's harder than it looks. The discussion about vfork earlier itt was very enlightening actually, and other posters explained an aspect of the VM behaviour to me that I didn't realize before. Ever wonder why Linux lets you allocate more memory than it actually has available? Because of fork(). When a 2GB process does a fork and an exec, then the parent and child briefly require 4GB of committed backing store. The child might exec() back down to a memory footprint of nothing, or it could dirty every single page, requiring them all to be copied from the parent and possibly exhausting memory in the process. And fork/exec is a very common thing. So if Linux didn't do this then the fork part of fork/exec could potentially fail even with gigabytes of free RAM. VM overcommit can be turned off, but it's on by default for good reason. Windows doesn't have any sort of fork mechanism, it has CreateProcess. Internally kernel32.dll and ntdll.dll etc implement it by creating a process object with no threads and no memory pages, then remotely mmaping pages from an EXE into the new process' address space, and then spawning a new thread inside that process pointed to the EXE's start address. So NT has no need to overcommit memory like Linux does.
|
# ? Dec 27, 2015 18:45 |
|
Mr Dog posted:Ever wonder why Linux lets you allocate more memory than it actually has available? Because of fork(). it's not just because fork(). it is quite common for processes to ask for more pages than they will actually ever use, so one of the first and easiest optimizations to a virtual memory system is lazy page allocation -- don't commit a physical page to an allocated virtual page until the process touches it. iow allocation just sets up page table entries flagged to generate pagefaults, and when the process touches one the pf interrupt handler says "oh hey it's time to actually allocate phys mem to this" this is why, if you're writing high performance and/or timing sensitive code, it's important to ensure that all the memory you touch is accessed at least once during startup, rather than potentially leaving it until timing is critical. not only might you have to take a pf interrupt per unallocated page, the handler might need to zero some/all of the pages it's handing over to you (because it must, if that phys page originally belonged to some other process), so this can cause significant performance sharts at an undesired time
|
# ? Dec 27, 2015 22:02 |
|
Wait that doesn't make sense. You're referring to lazy allocation (incidentally, wouldn't it be better to call madvise() instead of explicitly dirtying those pages?). I'm talking about overcommitting: the kernel writing the program a rubber check for 2GB of memory when you know you only have 1GB free and hoping the program doesn't cash it (well, or cash all of it, which is where the metaphor gets messy). You don't need to allocate some specific numbered pages to the program, you just need to commit a quantity of to-be-determined pages from your free pool to that program. Windows lets you reserve an area of VM address space without actually committing any backing store to it, then commit backing store for some subset of those reserved pages at a later time.
|
# ? Dec 27, 2015 22:33 |
|
you can do on-demand page allocation while also doing accurate book keeping, fork()-free operating systems do this
|
# ? Dec 27, 2015 22:33 |
|
what i'm saying is that lazy allocation is another overcommit mechanism. you can have only 1gb free in <insert favorite unix flavor here> and a process which then asks for 2gb will get a pointer to a 2gb block of virtual address space that isn't connected to any physical ram yet, and if it tries to use it all the kernel will have to reclaim pages from somewhere else. you can even ask for more physical pages than actually exist in the system and the default rules in many unix operating systems will allow it, though there's usually some "this is too much" threshold beyond which allocations will fail a whole lot of very basic unix system architecture is built on overcommit as a way of life, afaik the way fork works is a hack based on overcommit rather than people deciding it would be cool if overcommit could be added to unix so fork could work that way also madvise() isnt really guaranteed to do anything, its just a hint about future behavior and the system is free to ignore it. and i dont see any options in the man page (under osx 10.11) which imply behavior equivalent to manually dirtying
|
# ? Dec 27, 2015 23:06 |
|
Mr Dog posted:There has been an endless stream of clever heuristics added to the scheduling and VM subsystems in Linux that aim to avoid pathological behaviours, and every last one of them has turned out to be detrimental in the common case. Nt/ZwCreateProcess has the ability to fork a process. Interix was a full certified UNIX built on NT ofc this is meaningless in win32 since it wont register w/ csrss and do all of the win32 poo poo mmap w/o vm overcommit is also kind of lame fork is bad and terrible b/c it shares all kinds of poo poo with your child proc and if u want to selectively share memory u should do it explicitly rather than lol let me share literally everything
|
# ? Dec 27, 2015 23:12 |
|
BobHoward posted:it's not just because fork(). it is quite common for processes to ask for more pages than they will actually ever use, so one of the first and easiest optimizations to a virtual memory system is lazy page allocation -- don't commit a physical page to an allocated virtual page until the process touches it. iow allocation just sets up page table entries flagged to generate pagefaults, and when the process touches one the pf interrupt handler says "oh hey it's time to actually allocate phys mem to this" this is dumb b/c the kernel is free to shuffle pages around in physical memory/swap as it sees fit even if you scribble around. mlock/VirtualLock is what u want
|
# ? Dec 27, 2015 23:17 |
|
also virtual memory costs something like a 30% overhead one of the midori OS cool bits was that it could enforce process isolation w/o an MMU so everything became 20-30% faster, for free since it was written in low-level extended C# RIP in peace, though the old eng manager has some sw8 blogs on it now
|
# ? Dec 27, 2015 23:19 |
|
Malcolm XML posted:mmap w/o vm overcommit is also kind of lame file-backed mmap doesn't count against your commit charge in the first place
|
# ? Dec 27, 2015 23:22 |
|
if virtual memory meant actual memory my system would outright swap, because nacl_helper of chrome reserves 88 GB of it
sara 5855 1.2 0.1 88119636 33524 tty2 Sl+ 17:15 5:12 /opt/google/chrome/nacl_helper
|
# ? Dec 28, 2015 00:59 |
|
Malcolm XML posted:fork is bad and terrible b/c it shares all kinds of poo poo with your child proc and if u want to selectively share memory u should do it explicitly rather than lol let me share literally everything also this kind of poo poo can be racy, both when threads are involved and when lots of code is running in a process that's written by different teams and doesn't share some sort of centralized bookkeeping. posix_spawn has a "close-on-exec" attribute as a way to specify what file descriptors should be preserved in the spawned process, but what you really want (and what has been added in some systems like Darwin as a non-portable extension) is a way to say "only these few specified descriptors should be inherited". and because posix_spawn has the slightly cumbersome but open-ended "attributes" mechanism, this kind of thing can actually be added when it's needed; you don't have to do crazy poo poo like iterate all open file descriptors between vfork and execle to ensure the child isn't inheriting something it shouldn't
|
# ? Dec 28, 2015 03:42 |
|
eschaton posted:anything "interesting" in your configuration at all? what does moom cause
|
# ? Dec 28, 2015 05:56 |
|
pram posted:what does moom cause https://manytricks.com/moom/
|
# ? Dec 28, 2015 06:38 |
|
Notorious b.s.d. posted:the kernel doesn't know what your processes are doing, it can't. I can tell it which processes are important. I don't have much Linux kernel programming experience, but I have done kernel programming (including scheduler and virtual memory) before, and it's not hard to have a bit that says "this process and its children are important" and then not gently caress with it. Basically, all I need is enough responsiveness to launch an xterm, figure out which process is misbehaving, and kill it. Hell, if Linux wasn't retarded about swap, I could wait until the OOM killer gets around to it, but in practice my system is unusable while swapping. Does this solve the common case? No; if my X server or Emacs goes haywire, then it will cause even more damage than otherwise. On a server where total throughput is what you care about, a policy like this is really dumb. But those are not the cases I worry about; in practice my X server never allocates too much memory, nor does my Emacs, but the bloody Haskell compiler does go apeshit sometimes (and browsers too). I just need something that works for my setup (and I think most desktop linux setups). How is this not something that bothers more people? I'd think the desktop Linux developers would run into this kind of poo poo very often and would whip up some kernel patch to ameliorate it. How does OS X handle this?
|
# ? Dec 28, 2015 11:41 |
|
Athas posted:How is this not something that bothers more people? As a workaround, I just don't create swap. My chromebook, of all things, has 16 gigs of ram. If that's not enough to run something, I'd prefer the kernel killing it to my machine going into swap hell. It's not like any app today doesn't have autosave. I'd rather lose a minute of work than wait for swap to fill or have to reboot.
|
# ? Dec 28, 2015 13:10 |
|
pseudorandom name posted:file-backed mmap doesn't count against your commit charge in the first place unless it's cow
|
# ? Dec 28, 2015 13:55 |
|
Truga posted:As a workaround, I just don't create swap. My chromebook, of all things, has 16 gigs of ram. If that's not enough to run something, I'd prefer kernel killing it than my machine going into swap hell. It's not like any app ever today doesn't have autosave. I'd rather lose a minute of work than waiting for swap to fill or having to reboot. all well and good except i have an 8gb ~*ultrabook*~ with soldered-on ram and i need to run a windows 7 vm sometimes otherwise yeah same (can't tell if this post is ironic or not)
|
# ? Dec 28, 2015 18:59 |
|
Truga posted:As a workaround, I just don't create swap. My chromebook, of all things, has 16 gigs of ram. If that's not enough to run something, I'd prefer kernel killing it than my machine going into swap hell. It's not like any app ever today doesn't have autosave. I'd rather lose a minute of work than waiting for swap to fill or having to reboot. just set swappiness low. the real mystery is why distributions don't by default, but i suspect most distribution makers feel that bad defaults are their birthright having a bit of swap is nice for when applications inevitably plain dirty memory that they will never ever use again, and you can reclaim that for caching lots of disk cache is p. sexy
|
# ? Dec 28, 2015 19:27 |
|
Well the other reason is the stupid chromebook only comes with 64 gigs of SSD and having swap eats into that. But yeah, on an ancient work pc I use for terminals and firefox, I set swappiness to low and it seems to work just fine with 4 gigs of ram and firefox eating it all up.
|
# ? Dec 28, 2015 19:31 |
|
Athas posted:I can tell it which processes are important. in theory you can do this with cgroups, but you and i both know you're not gonna spend time reading cgroups documentation in order to tame ghc. you're just gonna live with it. kernel tunables are nice to have when you really, really need them, but on your desktop, nobody is gonna bother.
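For the record, the cgroups incantation in question is short; a sketch using the cgroup-v1 memory controller that 2015-era distros shipped ("buildbox" is a made-up group name, the mount path varies by distro, and it needs root, which is part of why nobody bothers):

```shell
# create a memory cgroup, cap it at 4 GiB, and move the current shell in;
# anything launched from this shell (e.g. ghc) inherits the limit and gets
# OOM-killed inside the group instead of swamping the whole desktop
mkdir /sys/fs/cgroup/memory/buildbox
echo $((4 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/memory/buildbox/memory.limit_in_bytes
echo $$ > /sys/fs/cgroup/memory/buildbox/tasks
```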
|
# ? Dec 28, 2015 22:26 |
|
|
Cybernetic Vermin posted:just set swappiness low. the real mystery is why distributions don't by default, but i suspect most distribution makers feel that bad defaults are their birthright low-but-not-zero is the win spot for linux desktops. set it to like, 10. or 1. but not 0. when swappiness is set to 0, programs will never be sacrificed for disk cache, and sometimes you actually want it to sacrifice an idle app to get disk cache for an active one. i also have no idea why they don't set the defaults a lot lower. if i walk away from my computer overnight i mos def do not want emacs to get swapped out to disk because it was "idle" in the kernel's opinion
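In config terms the low-but-not-zero setting above is one line (a sketch; the value 10 follows the post's suggestion, not any distro default, and both commands need root):

```shell
# take effect immediately
sysctl -w vm.swappiness=10

# persist across reboots via a sysctl drop-in
echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf
```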
|
# ? Dec 28, 2015 22:30 |