Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope

Barnyard Protein posted:

how dare my computer run programs tht use up the ram computer :mad: i need that ram to run computer prrogarm

people on yospos unironically post this if it's the wrong program, though. (a web browser)

that said

Barnyard Protein posted:

i keep 12GB of my computer's 16GB RAM in a box on a shelf to ensure that it remains free of harmful data


Soricidus
Oct 21, 2010
freedom-hating statist shill
has packagekit found a problem to solve yet

Notorious b.s.d.
Jan 25, 2003

by Reene

Suspicious Dish posted:

did you ever submit a bug report, or even cat /proc/$(pidof pulseaudio)/stack whenever this happens

never even considered it. i learned long ago that linux desktop bug reports are pointless

i filed a bug against gtk3. it was ignored for over two years, then fixed by an unrelated set of changes. that turned me off community participation

if i don't have my own patch for the bug, i don't even try anymore.

Suspicious Dish posted:

because i don't believe you

i thought it happened to everyone so this surprises me

Notorious b.s.d. fucked around with this message at 16:30 on Dec 26, 2015

MrMoo
Sep 14, 2000

Notorious b.s.d. posted:

never even considered it. i learned long ago that linux desktop bug reports are pointless

Through the distros it is hit or miss, but Poettering's projects usually have well-monitored bug trackers. PA looks like it has moved over to FreeDesktop.org.

BONGHITZ
Jan 1, 1970

I don't really have time to fill out bug reports, I'm busy on my Linux

fritz
Jul 26, 2003

Wheany posted:

people on yospos unironically post this if it's the wrong program, though. (a web browser)

i dont mind browsers taking tons of memory, i do mind browsers taking so much memory that i get in swap hell and nothing else works

Sapozhnik
Jan 2, 2005

Nap Ghost
oh ho ho i wish I had programs that just tightloop the CPU. Nah, WebKitGtk has this thing where it likes to go into a tight loop allocating and dirtying memory. The OOM killer will probably kill it. Eventually. After it's thrashed its way through 8GB of swap. In the meantime the entire graphical session locks up completely, to the point where you can't even move the mouse. Sometimes if I hammer Ctrl+Alt+F3 enough I can log in to a text console and kill -9 the loving thing.

Really just about all of my system-crapping-out problems come down to a web browser in some way or other. HTML5 Rocks!!!11ftw!

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder

Barnyard Protein posted:

i keep 12GB of my computer's 16GB RAM in a box on a shelf to ensure that it remains free of harmful data

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?
I occasionally see complaints about Xcode using a lot of CPU when a developer opens a project

not making their system or Xcode itself unusable, mind you, just a spike in CPU usage that they notice because they leave Activity Monitor running

it's because their source code is being indexed, letting them do things like navigate quickly between definitions/declarations/uses of symbols, see callers and callees of methods, see only meaningful classes in IB inspectors, and so on

but for some people anything more than a couple percent CPU use for a couple seconds is a surprise

Arcteryx Anarchist
Sep 15, 2007

Fun Shoe
people are as big of tightwad resource hoarders with computers as they are with money

The_Franz
Aug 8, 2003

eschaton posted:

I occasionally see complaints about Xcode using a lot of CPU when a developer opens a project

i don't have that issue. i do have an issue where i will start typing and the cpu spikes while the beachball of doom spins and everything is frozen for several seconds. most of the time it recovers, but occasionally xcode just crashes afterwards.

xcode is still much better than it was 6 or 7 years ago though.

Celexi
Nov 25, 2006

Slava Ukraini!

fritz posted:

i dont mind browsers taking tons of memory, i do mind browsers taking so much memory that i get in swap hell and nothing else works

Mr Dog posted:

oh ho ho i wish I had programs that just tightloop the CPU. Nah, WebKitGtk has this thing where it likes to go into a tight loop allocating and dirtying memory. The OOM killer will probably kill it. Eventually. After it's thrashed its way through 8GB of swap. In the meantime the entire graphical session locks up completely, to the point where you can't even move the mouse. Sometimes if I hammer Ctrl+Alt+F3 enough I can log in to a text console and kill -9 the loving thing.

Really just about all of my system-crapping-out problems come down to a web browser in some way or other. HTML5 Rocks!!!11ftw!


This never happens to me on chrome; out of my 24 gb (or my previous 8 gb) I've never seen it use more than 2 or 3 gb on a bad day, but I don't keep dozens or hundreds of tabs open for days for no reason like a hoarder

carry on then
Jul 10, 2010

by VideoGames

(and can't post for 10 years!)

eschaton posted:

I occasionally see complaints about Xcode using a lot of CPU when a developer opens a project

not making their system or Xcode itself unusable, mind you, just a spike in CPU usage that they notice because they leave Activity Monitor running

it's because their source code is being indexed, letting them do things like navigate quickly between definitions/declarations/uses of symbols, see callers and callees of methods, see only meaningful classes in IB inspectors, and so on

but for some people anything more than a couple percent CPU use for a couple seconds is a surprise

see also: eclipse

Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope

fritz posted:

i dont mind browsers taking tons of memory, i do mind browsers taking so much memory that i get in swap hell and nothing else works

again, that's every program. if something causes my computer to swap, then i have a problem. if something uses 24 gigs of ram and my computer isn't swapping, there is no problem

Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope
i watched imagemagick use 12 GB of ram when i made a gif one day. it's a pretty recent release with the highest color depth option, so it probably converted those 1080p 8-bit-per-component source frames to 32-bit floats per component before starting to process them :getin:
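
(quick arithmetic, assuming RGBA: 1920 × 1080 pixels × 4 components × 4 bytes ≈ 33 MB per float frame, so ~360 source frames gets you to 12 GB. checks out.)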

Athas
Aug 6, 2007

fuck that joker

Mr Dog posted:

oh ho ho i wish I had programs that just tightloop the CPU. Nah, WebKitGtk has this thing where it likes to go into a tight loop allocating and dirtying memory. The OOM killer will probably kill it. Eventually. After it's thrashed its way through 8GB of swap. In the meantime the entire graphical session locks up completely, to the point where you can't even move the mouse. Sometimes if I hammer Ctrl+Alt+F3 enough I can log in to a text console and kill -9 the loving thing.

Why does this even happen? Why does the kernel allow one process to push everything else out of memory? Why isn't the memory hog itself swapped out instead of the loving X server or my text editor? This is my #1 desktop Linux issue. I know I can just ulimit the risky processes or disable swap entirely, but I don't mind swap or memory-hungry processes on principle, as long as they stick to stepping on their own toes.

Cybernetic Vermin
Apr 18, 2005

Athas posted:

Why does this even happen? Why does the kernel allow one process to push everything else out of memory? Why isn't the memory hog itself swapped out instead of the loving X server or my text editor? This is my #1 desktop Linux issue. I know I can just ulimit the risky processes or disable swap entirely, but I don't mind swap or memory-hungry processes on principle, as long as they stick to stepping on their own toes.

swapping is thinking stuck in the past, from when swapping unused stuff out was a necessity for the basic functioning of home computers. if you are swapping out to make the vast monolith XYZ run, what gets evicted can't be pieces of XYZ, because you will never get into a workable state down that route

not sure to what extent linux has been growing an awareness of which things are actually basic user-interaction stuff that should get some priority when it comes to latency (scheduling, swapping, etc), but i imagine it is one of those somewhat sore points between kernel people, server people and desktop people

Notorious b.s.d.
Jan 25, 2003

by Reene

Athas posted:

Why does this even happen? Why does the kernel allow one process to push everything else out of memory? Why isn't the memory hog itself swapped out instead of the loving X server or my text editor? This is my #1 desktop Linux issue. I know I can just ulimit the risky processes or disable swap entirely, but I don't mind swap or memory-hungry processes on principle, as long as they stick to stepping on their own toes.

the kernel doesn't know what your processes are doing, it can't. it assumes the biggest, hungriest process is also the most important. and sometimes that's true. you would probably be really angry if your gimp instance or your web browser got pushed out of RAM to preserve allocations for the kde mixer widget or something.

of course, this behavior is tunable. but you shouldn't try.
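
(for the record, the per-process knob is /proc/<pid>/oom_score_adj, which biases victim selection from -1000 "never kill me" up to 1000 "kill me first". a minimal sketch, untested:)

    /* volunteer the current process as the OOM killer's preferred victim;
       writing -1000 instead would exempt it entirely */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/proc/self/oom_score_adj", "w");
        if (!f) { perror("fopen"); return 1; }
        fprintf(f, "%d\n", 1000);
        fclose(f);
        return 0;
    }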

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

The_Franz posted:

i don't have that issue. i do have an issue where i will start typing and the cpu spikes while the beachball of doom spins and everything is frozen for several seconds. most of the time it recovers, but occasionally xcode just crashes afterwards.

anything "interesting" in your configuration at all?

  • heavy use of C++ templates
  • large number of targets (say from a CMake-generated project)
  • unsupported third-party Xcode plug-ins
  • whole-OS hacks like MOOM etc.

things like syntax coloring of symbols and code completion are also driven by the index and use libclang, so code that takes a while to compile can also cause a performance hit on that stuff. aspects of it can likewise be impacted by having tons and tons of targets

of course none of that should crash the IDE; if you either file a bug or throw a crash report in a pastebin I can take a look

Sapozhnik
Jan 2, 2005

Nap Ghost

Athas posted:

Why does this even happen? Why does the kernel allow one process to push everything else out of memory? Why isn't the memory hog itself swapped out instead of the loving X server or my text editor? This is my #1 desktop Linux issue. I know I can just ulimit the risky processes or disable swap entirely, but I don't mind swap or memory-hungry processes on principle, as long as they stick to stepping on their own toes.

There has been an endless stream of clever heuristics added to the scheduling and VM subsystems in Linux that aim to avoid pathological behaviours, and every last one of them has turned out to be detrimental in the common case.

Not to say that the problem is insurmountable, only that it's harder than it looks.

The discussion about vfork earlier itt was very enlightening actually, and other posters explained an aspect of the VM behaviour to me that I didn't realize before. Ever wonder why Linux lets you allocate more memory than it actually has available? Because of fork(). When a 2GB process does a fork and an exec, then the parent and child briefly require 4GB of committed backing store. The child might exec() back down to a memory footprint of nothing, or it could dirty every single page, requiring them all to be copied from the parent and possibly exhausting memory in the process. And fork/exec is a very common thing. So if Linux didn't do this then the fork part of fork/exec could potentially fail even with gigabytes of free RAM.

VM overcommit can be turned off, but it's on by default for good reason.

Windows doesn't have any sort of fork mechanism, it has CreateProcess. Internally kernel32.dll and ntdll.dll etc implement it by creating a process object with no threads and no memory pages, then remotely mmaping pages from an EXE into the new process' address space, and then spawning a new thread inside that process pointed to the EXE's start address. So NT has no need to overcommit memory like Linux does.
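
To make the shape of it concrete, here is a minimal fork/exec sketch (the "ls" child is just a stand-in): the child's copy-on-write image only exists between fork() and exec(), which is exactly the window the overcommit accounting papers over.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();              /* child shares pages copy-on-write */
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {
            execlp("ls", "ls", "-l", (char *)NULL);  /* replaces the image */
            perror("execlp");            /* reached only if exec failed */
            _exit(127);
        }
        waitpid(pid, NULL, 0);
        return 0;
    }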

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Mr Dog posted:

Ever wonder why Linux lets you allocate more memory than it actually has available? Because of fork().

it's not just because fork(). it is quite common for processes to ask for more pages than they will actually ever use, so one of the first and easiest optimizations to a virtual memory system is lazy page allocation -- don't commit a physical page to an allocated virtual page until the process touches it. iow allocation just sets up page table entries flagged to generate pagefaults, and when the process touches one the pf interrupt handler says "oh hey it's time to actually allocate phys mem to this"

this is why, if you're writing high performance and/or timing sensitive code, it's important to ensure that all the memory you touch is accessed at least once during startup, rather than potentially leaving it until timing is critical. not only might you have to take a pf interrupt per unallocated page, the handler might need to zero some/all of the pages it's handing over to you (because it must, if that phys page originally belonged to some other process), so this can cause significant performance sharts at an undesired time
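
a minimal sketch of that warm-up trick (assumes a unix-y system; the 256 MiB size is arbitrary):

    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        size_t len = 256u << 20;               /* 256 MiB working set */
        long page = sysconf(_SC_PAGESIZE);
        char *buf = malloc(len);
        if (!buf || page <= 0) return 1;
        for (size_t i = 0; i < len; i += (size_t)page)
            buf[i] = 0;                        /* fault every page in now */
        /* ... timing-critical work on buf happens after this point ... */
        free(buf);
        return 0;
    }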

Sapozhnik
Jan 2, 2005

Nap Ghost
Wait, that doesn't make sense. You're referring to lazy allocation (incidentally, wouldn't it be better to call madvise() instead of explicitly dirtying those pages?). I'm talking about overcommitting: the kernel writing the program a rubber check for 2GB of memory when it knows it only has 1GB free, and hoping the program doesn't cash it (well, or cash all of it, which is where the metaphor gets messy).

You don't need to allocate some specific numbered pages to the program, you just need to commit a quantity of to-be-determined pages from your free pool to that program. Windows lets you reserve an area of VM address space without actually committing any backing store to it, then commit backing store for some subset of those reserved pages at a later time.
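
For reference, the Windows reserve-then-commit pattern looks roughly like this (a sketch; the sizes are arbitrary):

    #include <windows.h>
    #include <string.h>

    int main(void) {
        /* reserve 1 GiB of address space: no backing store committed yet */
        char *base = VirtualAlloc(NULL, 1ull << 30, MEM_RESERVE, PAGE_NOACCESS);
        if (!base) return 1;
        /* later, commit real backing store for just the first 16 MiB */
        if (!VirtualAlloc(base, 16u << 20, MEM_COMMIT, PAGE_READWRITE)) return 1;
        memset(base, 0xAB, 16u << 20);   /* safe: these pages are committed */
        VirtualFree(base, 0, MEM_RELEASE);
        return 0;
    }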

pseudorandom name
May 6, 2007

you can do on-demand page allocation while also doing accurate bookkeeping; fork()-free operating systems do this

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull
what i'm saying is that lazy allocation is another overcommit mechanism. you can have only 1gb free in <insert favorite unix flavor here> and a process which then asks for 2gb will get a pointer to a 2gb block of virtual address space that isn't connected to any physical ram yet, and if it tries to use it all the kernel will have to reclaim pages from somewhere else. you can even ask for more physical pages than actually exist in the system and the default rules in many unix operating systems will allow it, though there's usually some "this is too much" threshold beyond which allocations will fail

a whole lot of very basic unix system architecture is built on overcommit as a way of life, afaik the way fork works is a hack based on overcommit rather than people deciding it would be cool if overcommit could be added to unix so fork could work that way

also madvise() isnt really guaranteed to do anything, its just a hint about future behavior and the system is free to ignore it. and i dont see any options in the man page (under osx 10.11) which imply behavior equivalent to manually dirtying
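
e.g. this compiles and succeeds on linux (osx spells the flag MAP_ANON), but the madvise line obligates the kernel to exactly nothing:

    #include <stddef.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 64u << 20;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) return 1;
        madvise(p, len, MADV_WILLNEED);   /* a hint, not a contract */
        munmap(p, len);
        return 0;
    }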

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Mr Dog posted:

There has been an endless stream of clever heuristics added to the scheduling and VM subsystems in Linux that aim to avoid pathological behaviours, and every last one of them has turned out to be detrimental in the common case.

Not to say that the problem is insurmountable, only that it's harder than it looks.

The discussion about vfork earlier itt was very enlightening actually, and other posters explained an aspect of the VM behaviour to me that I didn't realize before. Ever wonder why Linux lets you allocate more memory than it actually has available? Because of fork(). When a 2GB process does a fork and an exec, then the parent and child briefly require 4GB of committed backing store. The child might exec() back down to a memory footprint of nothing, or it could dirty every single page, requiring them all to be copied from the parent and possibly exhausting memory in the process. And fork/exec is a very common thing. So if Linux didn't do this then the fork part of fork/exec could potentially fail even with gigabytes of free RAM.

VM overcommit can be turned off, but it's on by default for good reason.

Windows doesn't have any sort of fork mechanism, it has CreateProcess. Internally kernel32.dll and ntdll.dll etc implement it by creating a process object with no threads and no memory pages, then remotely mmaping pages from an EXE into the new process' address space, and then spawning a new thread inside that process pointed to the EXE's start address. So NT has no need to overcommit memory like Linux does.

NtCreateProcess/ZwCreateProcess has the ability to fork a process. Interix was a fully certified UNIX built on NT

ofc this is meaningless in win32 since it wont register w/ csrss and do all of the win32 poo poo


mmap w/o vm overcommit is also kind of lame


fork is bad and terrible b/c it shares all kinds of poo poo with your child proc and if u want to selectively share memory u should do it explicitly rather than lol let me share literally everything

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

BobHoward posted:

it's not just because fork(). it is quite common for processes to ask for more pages than they will actually ever use, so one of the first and easiest optimizations to a virtual memory system is lazy page allocation -- don't commit a physical page to an allocated virtual page until the process touches it. iow allocation just sets up page table entries flagged to generate pagefaults, and when the process touches one the pf interrupt handler says "oh hey it's time to actually allocate phys mem to this"

this is why, if you're writing high performance and/or timing sensitive code, it's important to ensure that all the memory you touch is accessed at least once during startup, rather than potentially leaving it until timing is critical. not only might you have to take a pf interrupt per unallocated page, the handler might need to zero some/all of the pages it's handing over to you (because it must, if that phys page originally belonged to some other process), so this can cause significant performance sharts at an undesired time

this is dumb b/c the kernel is free to shuffle pages around in physical memory/swap as it sees fit even if you scribble around

mlock/VirtualLock is what u want
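
i.e. roughly this (sketch; bumping RLIMIT_MEMLOCK is left out):

    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 16u << 20;
        char *buf = malloc(len);
        if (!buf) return 1;
        if (mlock(buf, len) != 0)   /* fails past RLIMIT_MEMLOCK w/o privilege */
            return 1;
        /* ... latency-critical work: these pages can't be paged out ... */
        munlock(buf, len);
        free(buf);
        return 0;
    }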

Malcolm XML
Aug 8, 2009

I always knew it would end like this.
also virtual memory costs something like a 30% overhead


one of the cool bits of midori OS was that it could enforce process isolation w/o an MMU, so everything became 20-30% faster, for free, since it was written in low-level extended C#

RIP in peace, though the old eng manager has some sw8 blogs on it now

pseudorandom name
May 6, 2007

Malcolm XML posted:

mmap w/o vm overcommit is also kind of lame

file-backed mmap doesn't count against your commit charge in the first place

Celexi
Nov 25, 2006

Slava Ukraini!
if virtual memory meant actual memory my system would be swapping outright, because chrome's nacl_helper reserves 88 GB of it
sara 5855 1.2 0.1 88119636 33524 tty2 Sl+ 17:15 5:12 /opt/google/chrome/nacl_helper

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

Malcolm XML posted:

fork is bad and terrible b/c it shares all kinds of poo poo with your child proc and if u want to selectively share memory u should do it explicitly rather than lol let me share literally everything

also this kind of poo poo can be racy, both when threads are involved and when lots of code is running in a process that's written by different teams and doesn't share some sort of centralized bookkeeping

posix_spawn has a "close-on-exec" attribute as a way to specify which file descriptors should be preserved in the spawned process, but what you really want (and what has been added in some systems like Darwin as a non-portable extension) is a way to say "only these few specified descriptors should be inherited"

and because posix_spawn has the slightly cumbersome but open-ended "attributes" mechanism this kind of thing can actually be added when it's needed, you don't have to do crazy poo poo like iterate all open file descriptors between vfork and execle to ensure the child isn't inheriting something it shouldn't
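
a minimal sketch of the file-actions mechanism, with "ls" as a stand-in child (the Darwin inherit-nothing extension is the POSIX_SPAWN_CLOEXEC_DEFAULT attribute flag, not shown here):

    #include <spawn.h>
    #include <stddef.h>
    #include <sys/wait.h>
    #include <unistd.h>

    extern char **environ;

    int main(void) {
        posix_spawn_file_actions_t fa;
        posix_spawn_file_actions_init(&fa);
        /* declare up front what the child's descriptor table should look
           like, instead of scrubbing fds between vfork and exec */
        posix_spawn_file_actions_addclose(&fa, STDIN_FILENO);

        pid_t pid;
        char *argv[] = { "ls", "-l", NULL };
        int rc = posix_spawnp(&pid, "ls", &fa, NULL, argv, environ);
        posix_spawn_file_actions_destroy(&fa);
        if (rc != 0) return 1;
        waitpid(pid, NULL, 0);
        return 0;
    }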

pram
Jun 10, 2001

eschaton posted:

anything "interesting" in your configuration at all?

  • heavy use of C++ templates
  • large number of targets (say from a CMake-generated project)
  • unsupported third-party Xcode plug-ins
  • whole-OS hacks like MOOM etc.

what does moom cause

Hugh G. Rectum
Mar 1, 2011

pram posted:

what does moom cause

https://manytricks.com/moom/

Athas
Aug 6, 2007

fuck that joker

Notorious b.s.d. posted:

the kernel doesn't know what your processes are doing, it can't.

I can tell it which processes are important. I don't have much Linux kernel programming experience, but I have done kernel programming (including scheduler and virtual memory) before, and it's not hard to have a bit that says "this process and its children are important" and then not gently caress with them. Basically, all I need is enough responsiveness to launch an xterm, figure out which process is misbehaving, and kill it. Hell, if Linux weren't retarded about swap, I could wait until the OOM killer gets around to it, but in practice my system is unusable while swapping.

Does this solve the common case? No, if my X server or Emacs goes haywire, then it will cause even more damage than otherwise. On a server where total throughput is what you care about, a policy like this is really dumb. But those are not the cases I worry about; in practice my X server never allocates too much memory, nor does my Emacs, but the bloody Haskell compiler does go apeshit sometimes (and browsers too). I just need something that works for my setup (and I think most desktop linux setups). How is this not something that bothers more people? I'd think the desktop Linux developers would run into this kind of poo poo very often and would whip up some kernel patch to ameliorate it.

How does OS X handle this?

Truga
May 4, 2014
Lipstick Apathy

Athas posted:

How is this not something that bothers more people?

As a workaround, I just don't create swap. My chromebook, of all things, has 16 gigs of ram. If that's not enough to run something, I'd prefer the kernel killing it to my machine going into swap hell. It's not like any app today doesn't have autosave. I'd rather lose a minute of work than wait for swap to fill or have to reboot.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

pseudorandom name posted:

file-backed mmap doesn't count against your commit charge in the first place

unless it's cow

Sapozhnik
Jan 2, 2005

Nap Ghost

Truga posted:

As a workaround, I just don't create swap. My chromebook, of all things, has 16 gigs of ram. If that's not enough to run something, I'd prefer the kernel killing it to my machine going into swap hell. It's not like any app today doesn't have autosave. I'd rather lose a minute of work than wait for swap to fill or have to reboot.

all well and good except i have an 8gb ~*ultrabook*~ with soldered-on ram and i need to run a windows 7 vm sometimes

otherwise yeah same

(can't tell if this post is ironic or not)

Cybernetic Vermin
Apr 18, 2005

Truga posted:

As a workaround, I just don't create swap. My chromebook, of all things, has 16 gigs of ram. If that's not enough to run something, I'd prefer the kernel killing it to my machine going into swap hell. It's not like any app today doesn't have autosave. I'd rather lose a minute of work than wait for swap to fill or have to reboot.

just set swappiness low. the real mystery is why distributions don't by default, but i suspect most distribution makers feel that bad defaults are their birthright

having a bit of swap is nice for when applications inevitably dirty memory that they will never ever use again, since you can reclaim that for caching

lots of disk cache is p. sexy

Truga
May 4, 2014
Lipstick Apathy
Well the other reason is the stupid chromebook only comes with 64 gigs of SSD and having swap eats into that. :v:

But yeah, on an ancient work pc I use for terminals and firefox, I set swappiness to low and it seems to work just fine with 4 gigs of ram and firefox eating it all up.

Notorious b.s.d.
Jan 25, 2003

by Reene

Athas posted:

I can tell it which processes are important.

in theory you can do this with cgroups, but you and i both know you're not gonna spend time reading cgroups documentation in order to tame ghc. you're just gonna live with it.

kernel tunables are nice to have when you really, really need them, but on your desktop, nobody is gonna bother.
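
(for the curious: with the v1 memory controller it's roughly the below. paths assume the classic /sys/fs/cgroup layout; cgroup v2 spells all of this differently. untested sketch:)

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void) {
        /* make a cgroup and cap it at 4 GiB so ghc can't evict everything */
        mkdir("/sys/fs/cgroup/memory/ghc", 0755);
        FILE *f = fopen("/sys/fs/cgroup/memory/ghc/memory.limit_in_bytes", "w");
        if (!f) { perror("fopen"); return 1; }
        fputs("4294967296\n", f);
        fclose(f);
        /* then: echo <pid> > /sys/fs/cgroup/memory/ghc/tasks */
        return 0;
    }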


Notorious b.s.d.
Jan 25, 2003

by Reene

Cybernetic Vermin posted:

just set swappiness low. the real mystery is why distributions don't by default, but i suspect most distribution makers feel that bad defaults are their birthright

low-but-not-zero is the win spot for linux desktops.

set it to like, 10. or 1. but not 0. when swappiness is set to 0, programs will never be sacrificed for disk cache, and sometimes you actually want it to sacrifice an idle app to get disk cache for an active one.

i also have no idea why they don't set the defaults a lot lower. if i walk away from my computer overnight i mos def do not want emacs to get swapped out to disk because it was "idle" in the kernel's opinion
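
(concretely: sysctl vm.swappiness=10, or the moral equivalent in C, since the knob is just a proc file. needs root. sketch:)

    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/proc/sys/vm/swappiness", "w");
        if (!f) { perror("fopen"); return 1; }
        fputs("10\n", f);   /* low but not zero, per the above */
        fclose(f);
        return 0;
    }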
