|
Hi, I have to run a bunch of simulations for reasons, each of which takes a long time. I thought that running 3 or so in parallel in the background would speed things up, but looking at the process monitor it seems each one is running at 1/3 speed? Is there no way to run things in parallel to speed them up?
|
# ? Nov 27, 2016 07:07 |
|
|
# ? Mar 28, 2024 18:53 |
|
icantfindaname posted:Hi, I have to run a bunch of simulations for reasons, each of which takes a long time, and I thought by running 3 or so in parallel in the background it would speed it up, however looking at the process monitor it seems each is running at 1/3 the speed? Is there no way to run things in parallel to speed them up?
|
# ? Nov 27, 2016 07:11 |
|
icantfindaname posted:Hi, I have to run a bunch of simulations for reasons, each of which takes a long time, and I thought by running 3 or so in parallel in the background it would speed it up...

If the simulation is CPU-bound and single-threaded, then yes, running more copies of it in parallel (up to one per CPU core) will help, since each one will get a different core and they'll all be able to run simultaneously. It sounds like that assumption doesn't hold, though:

- If it's multithreaded, it's already using multiple cores, and starting more copies means they need to divide up the available CPU time, each one getting less than it did when it was the only one running.
- If it's disk- or network-bound instead of CPU-bound, it's going to be fighting with the other copies for access to that resource, and being able to schedule multiple copies on multiple cores won't actually help.

This is a bit of a simplification (for example, if it's both heavily disk- and CPU-bound, running more copies than you have cores might be faster overall, because the scheduler can run the ones doing CPU-bound work while the ones doing disk-bound work are waiting for IO), but in general there's a point where adding more copies doesn't buy you anything and may even be slower, and in some cases that point is 1.

As anthonypants points out, adding more computers might be more helpful. If this is a university project, see if your university has access to a cluster computing network like SHARCNET, perhaps?
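A quick sketch of the "one copy per core" idea from the reply above. The simulation binary (./sim) and the inputs/ directory are hypothetical stand-ins; xargs -P caps concurrency at the core count so the copies don't starve each other:

```shell
# Run one simulation per CPU core, never more (./sim and inputs/ are made-up names).
# xargs -P limits how many run at once; -n 1 hands each copy a single input file.
ls inputs/* | xargs -P "$(nproc)" -n 1 ./sim
```

As each copy finishes, xargs launches the next one, so the cores stay busy without oversubscription.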
|
# ? Nov 27, 2016 14:36 |
|
Zero Gravitas posted:I thought I'd try the latest version of Fedora on a machine for engineering simulation running OpenFoam and some other software. I've used it in the past on my laptop and it was a very nice experience, but gently caress me, I need some encouragement that it's going to get better.

This sounds like you wrecked the permissions on /home somewhere. Login scripts can't be read, so $PATH isn't set. Not being able to touch your own homedir is bad. Have you checked perms/ownership?
|
# ? Nov 27, 2016 16:24 |
|
Is there any way to log / visualize child processes over the lifetime of a process? I'm thinking something similar to pstree, but it would track all the children over time, not just a report of instantaneous state. I'm dealing with a build process that is an abomination of dozens of nested shell scripts and makefiles, and I just wanted to try to get an "overhead view" of what the hell is going on, if possible, to help wrap my head around it.

e: Actually I just realized I'd basically want something like bootchart, but generalized to run on a given process. How can I do that?

peepsalot fucked around with this message at 22:35 on Nov 27, 2016 |
# ? Nov 27, 2016 22:17 |
|
peepsalot posted:Is there any way to log / visualize child processes over the lifetime of a process? I'm thinking something similar to pstree, but it would track all the children over time, not just a report of instantaneous state.

Run it under strace -f -e execve <something>; the output logs every execve from the process and all of its children as it happens, which gives you exactly that over-time view.
|
# ? Nov 27, 2016 22:49 |
|
Okay so here's the thing: I have an old EeePC, specifically an ASUS 1015T, from 2010 (specs posted below). It was a decent little web browser and things for a while, but it got worse and worse, and it's been collecting dust for the past year or so. It's no better with Win10 than the Win7 it started with, but that inspired me to see if there were any 'light'/crappy-PC friendly distros to try out.

For the record, I'm a complete Linux newb, but I've got intermediate experience with computers, and I'm sure with basic instructions or guidance I can dink around and find out what I need to. However, I've spent the past few hours on and off trying to install different things on my EeePC to a constant lack of success. Right now I have the 'latest' version of PuppyLinux on a thumb drive, and the problem is that it failed to find a way to connect wirelessly. I'm trying to look for how to fix that now, but while I do, I figure I might just probe here for recommendations.

Basically, all I want is a light Linux distro that'll let me do basic web browsing, text editing, image viewing, and maybe video viewing. Something I can tinker with and learn Linux with from a basic starting point, rather than being thrown into the weeds of installing, uninstalling, and apparently screwing up. There are billions out there and 99% of them are, I'm sure, incompatible with my device. So what should I try? What should I do to make sure it installs cleanly on a separate partition (or perhaps the entire hard drive; I'm not married to Windows and there's nothing valuable on this device at the moment)?
========
[Specs]
Model: 1015T-MU17-RD
Processor: 1.20GHz AMD V105
Display: 10.1-inch 1024x600 WSVGA LED
Graphics: ATI Radeon HD 4250
Memory:
Hard Drive: 250GB
Camera: 0.3MP Webcam
Operating System: Windows 7 Starter
WLAN: 802.11b/g/n Wireless LAN (specifically, an Atheros AR9285 Wireless Network Adapter)
USB: 3 x USB 2.0
Card Reader: MMC/SD(SDHC)
Battery Life: 6 hours with 6-cell lithium ion
Warranty: 1 year limited

Morter fucked around with this message at 23:49 on Nov 27, 2016 |
# ? Nov 27, 2016 23:34 |
|
Any Linux distribution should work fine; just don't use KDE or GNOME, because they'll blow through one gig of RAM like it's nothing. Hunt around for a lightweight window manager. joewm is my current favorite, but there are millions out there to choose from. JavaScript-heavy sites will probably chug badly on that processor, but reading forums or Reddit or whatever would probably be decent enough.
|
# ? Nov 27, 2016 23:46 |
|
Plasmafountain fucked around with this message at 23:52 on Feb 27, 2023 |
# ? Nov 28, 2016 00:00 |
|
Morter posted:Okay so here's the thing: I have an old EeePC, specifically an ASUS 1015T, from 2010

The Ubuntus have pretty good hardware support out of the box, so I'd suggest one of those. Xubuntu or Ubuntu MATE are good choices, and fairly light. And while the specs on the EeePC certainly aren't impressive, they're not THAT bad. I have (well, used to have, I just gave it to my nephew) an old Toshiba netbook from that era (an NB250) with similar specs, and it'll run Xubuntu just fine. It's no speed demon but it's perfectly usable. You don't need to find a distro that's so light it'll still run on a 486.
|
# ? Nov 28, 2016 00:01 |
|
Zero Gravitas posted:I havent touched it!

/home should be owned by root; what about /home/yourusername?
|
# ? Nov 28, 2016 00:10 |
|
Powered Descent posted:The Ubuntus have pretty good hardware support out of the box, so I'd suggest one of those. Xubuntu or Ubuntu Mate are good choices, and fairly light. And while the specs on the eeepc certainly aren't impressive, they're not THAT bad. Just use Gentoo so that its compiled to run fast on your system! (Don't do this)
|
# ? Nov 28, 2016 00:13 |
|
CoreOS! It's the smallest! Containerize that Firefox!
|
# ? Nov 28, 2016 00:18 |
|
Zero Gravitas posted:I havent touched it! The mate+compiz spin should still use normal Anaconda (and anaconda sets all of this up). Unless you migrated /home from another system, I'd guess that something you installed outside of package management (lots of github "curl ... | sudo bash", for example) had an empty variable, and it accidentally chowned your homedir. If /home/foo is not owned by foo:foo, you should chown -R it. Fedora 25 is not broken out of the box this way...
|
# ? Nov 28, 2016 00:32 |
|
Plasmafountain fucked around with this message at 23:52 on Feb 27, 2023 |
# ? Nov 28, 2016 00:33 |
|
Zero Gravitas posted:Should it? I'm pretty certain I was previously doing my dicking around with OpenFoam on ubuntu in /home/OpenFoam instead of /home/zerograv/OpenFoam. IIRC only had to stab ~ once at the terminal, or does it auto-map ~ to home/user? Yes it should. ~ on its own expands to the same thing as $HOME, i.e., your home directory (/home/<username>). ~user expands to that user's home directory (/home/user). You generally should not be messing around in /home directly, and if you aren't root it shouldn't even let you.
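A quick way to see the expansion rules just described:

```shell
echo ~          # expands to your own home directory, same value as $HOME
echo "$HOME"    # identical to the line above
echo ~root      # another user's home directory, usually /root
```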
|
# ? Nov 28, 2016 00:36 |
|
Plasmafountain fucked around with this message at 23:52 on Feb 27, 2023 |
# ? Nov 28, 2016 01:05 |
|
reading posted:Howdy. I've got Xubuntu running on my desktop and dual booting with Win7. I decided to go against my instincts and upgrade from 14.04 to 16.04. Big mistake of course. During the upgrade, I got booted to my lock screen and the lock screen was unable to display many of its own icons (instead showing little red circles with a slash) and I couldn't log in; I would just get stuck in a loop where I enter my password, hit enter, the screen flickers, and it takes me right back to the login.

peepsalot posted:I'd start by checking /var/log/Xorg.log for errors

ColTim posted:I had a similar issue - turned out to be caused by the proprietary NVIDIA drivers not liking the upgraded kernel. Uninstalling the drivers from the text terminal seemed to clear it up.

Does anyone know how I can enable networking in recovery mode? I tried $ sudo ifconfig eth0 up, $ sudo dhclient eth0, and $ sudo service networking start, but none of those worked. This is an ethernet connection, so no SSID or password. Once I can get that up, I can try apt-get updating things. I'm reluctant to try $ sudo apt-get remove --purge nvidia* until I've tried updating everything.
|
# ? Nov 28, 2016 01:16 |
|
Zero Gravitas posted:Derp. Well I look like a massive bell end right about now. Ah well.

You'd need to post the output from dnf to see what happens with Chrome. OpenFOAM should be in $HOME (your homedir), not /home. I'd expect that it installs into $HOME/bin (they also suggest /opt and a couple of others). /opt is not part of $PATH; neither is $HOME/bin. You'd need to add whichever one you use to $PATH in .bashrc. Or just symlink the executable into /usr/local/bin (which their install scripts should do anyway, but probably don't).
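The symlink route mentioned above looks like this; the OpenFOAM install path here is a guess for illustration, not the project's documented layout, so point it at wherever the binary actually landed:

```shell
# Put one executable on the default PATH without editing any dotfiles.
# The source path is hypothetical; substitute the real location of the binary.
sudo ln -s "$HOME/OpenFOAM/platforms/bin/simpleFoam" /usr/local/bin/simpleFoam
```

Since /usr/local/bin is already on $PATH on essentially every distro, the command becomes runnable from any shell immediately.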
|
# ? Nov 28, 2016 04:59 |
|
fletcher posted:This was on RHEL7. I'm not using a package based install though, I'm installing nginx from source. Are you aware that there are a few options for getting various versions of nginx from Red Hat or at least EPEL, which is Fedora-based? EPEL has 1.10.2, and there are SCL packages for 1.6 and 1.8.
|
# ? Nov 28, 2016 10:01 |
|
A lot of community Chef cookbooks install software from source for whatever reason as the default or only option, even when packages are readily available. I don't know offhand if that's the case with the one he's using.
|
# ? Nov 28, 2016 11:23 |
|
Zero Gravitas posted:I cant launch programs through the terminal that have not been installed through yum, so I have to try and find the executable in the maze of folders and try and throw that at the terminal.

You need to edit the PATH variable to include the folder of what you want to run. Open up (or create) a .bashrc file in your home directory and add export PATH=$PATH:newPath to the end of it, where newPath is the aforementioned folder.
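Concretely, with a made-up install folder standing in for newPath:

```shell
# Append the folder to PATH for every future shell ($HOME/OpenFOAM/bin is hypothetical).
echo 'export PATH="$PATH:$HOME/OpenFOAM/bin"' >> ~/.bashrc
# Pick the change up in the current shell without logging out:
source ~/.bashrc
```

Quoting the whole assignment keeps paths with spaces intact, and appending (rather than overwriting) preserves the system directories already on $PATH.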
|
# ? Nov 28, 2016 16:51 |
|
xzzy posted:CoreOS! It's the smallest! Containerize that Firefox! How is CoreOS compared to Alpine?
|
# ? Nov 28, 2016 20:56 |
|
Alpine is actually smaller but usability suffers for it. Plus CoreOS isn't really a minimal linux, it's just a distribution focused on hosting containers and nothing else.
|
# ? Nov 28, 2016 21:09 |
|
I mean, CoreOS is basically "barest possible requirements to run systemd+containers+cloud-init". I'm not sure how much more minimal you can get and still have usability/management.
|
# ? Nov 28, 2016 21:50 |
|
Docjowles posted:A lot of community Chef cookbooks install software from source for whatever reason as the default or only option, even when packages are readily available. I don't know offhand if that's the case with the one he's using.
|
# ? Nov 28, 2016 22:44 |
|
evol262 posted:I mean, CoreOS is basically "barest possible requirements to run systemd+containers+cloud-init". I'm not sure how much more minimal you can get and still have usability/management.

CoreOS has some creature comforts like a more featured vim that can do syntax highlighting. Alpine is really skin and bones; it trims everything down to the minimum.
|
# ? Nov 28, 2016 23:07 |
|
xzzy posted:CoreOS has some creature comforts like a more featured vim that can do syntax hilighting. Alpine is really skin and bones, it trims everything down to the minimum. Hence usability. If really all you need is containers, Alpine is better. But the size difference is marginal, and I don't know why you'd ever run Alpine as anything other than a base layer for a container.
|
# ? Nov 28, 2016 23:16 |
Buttcoin purse posted:Are you aware that there are a few options for getting various versions of nginx from Red Hat or at least EPEL, which is Fedora-based? EPEL has 1.10.2, and there are SCL packages for 1.6 and 1.8.

Yeah, my plan was to use the SCL packages, but I ran into issues trying to set it up on the AWS version of Redhat 7. Can't remember the details at the moment. I'll have to give the EPEL ones a shot.

Docjowles posted:A lot of community Chef cookbooks install software from source for whatever reason as the default or only option, even when packages are readily available. I don't know offhand if that's the case with the one he's using.

I haven't really found that to be the case; every community cookbook I've used tries to install from package, and you have to go out of your way to try to install from source. Some of the newer cookbooks don't even have the option to install from source, like poise-python.

fletcher fucked around with this message at 23:43 on Nov 28, 2016 |
|
# ? Nov 28, 2016 23:40 |
|
Try Xubuntu or Lubuntu. They have packages that are current, so you won't have issues installing software, and they require something stupid like 256MB or 512MB of RAM to run.

However, browsing the web is going to be a shitshow. I was running an ASUS Chromebox with 6GB of RAM because I wanted something fanless and it was still a shitshow; with 2GB I'd assume it's just impossible. I specifically tailored an Arch Linux build to be as minimal as possible by using window managers and native apps versus web-based services, and it still sucked.

Native apps worked fabulously. It easily handled stuff like VLC, Skype, and other natively installed poo poo. You'll have to use stuff like Livestreamer to handle the shitwork of piping live streams or videos to VLC.

YouTuber fucked around with this message at 23:32 on Nov 29, 2016 |
# ? Nov 29, 2016 23:27 |
|
fletcher posted:I haven't really found that to be the case, every community cookbook I've used tries to install from package and you have to go out of your way to try to install it from source. Some of the newer cookbooks don't even have the option to install from source, like poise-python. And c'mon, don't point at Noah Kantrowitz to "not all" Chef community cookbooks. The man's an angel, but also an outlier.
|
# ? Nov 29, 2016 23:55 |
So I've got a bunch of logs in different directories on a remote server that I would like to copy to my computer. Each one has a path like "/foo/bar/data_set_1/log.txt", "/foo/bar/data_set_2/log.txt", etc.

Here's the catch: there are a ton of data_sets that I don't want to copy over, and the ones that I do want to copy over are non-consecutive. Additionally, there are a bunch of data files in each data_set directory that I do not want to copy over--I just want the logs. Finally, the last piece of bonus difficulty is that the remote server will throttle me if I log in too much, so this needs to all be done in one command.

In the end, I'd like to have a "data_set_1/log.txt", "data_set_2/log.txt" etc on my computer (in the separate folders), but having something like "data_set_1_log.txt", "data_set_2_log.txt" would also be acceptable.

What's the best way to accomplish this? I figure there has to be some rsync / scp option that will do what I want, but I've spent some time digging through the man pages and I haven't figured it out yet. If that explanation of what I want is confusing, let me know and I will try to clarify--it's hard to explain these things via text.
|
|
# ? Dec 1, 2016 01:10 |
|
Look at the rsync --files-from option. You could also do it with a file list fed to tar; look at the -T option. Of course this obligates you to create a text file listing every file you want, but if your needs are that specific, it might be the only way to do it.
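A sketch of the --files-from route with made-up host and paths: the list holds paths relative to the source root, and rsync recreates that directory structure on the local side in a single connection.

```shell
# List only the logs you want, relative to the remote source root (paths are hypothetical).
cat > wanted.txt <<'EOF'
data_set_1/log.txt
data_set_2/log.txt
data_set_7/log.txt
EOF
# One connection, one command; -a preserves the layout, so you end up with
# ./logs/data_set_1/log.txt, ./logs/data_set_2/log.txt, and so on locally.
rsync -a --files-from=wanted.txt user@remote:/foo/bar/ ./logs/
```

Because every wanted file travels over the one rsync session, this also sidesteps the login throttling.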
|
# ? Dec 1, 2016 01:15 |
|
VikingofRock posted:So I've got a bunch of logs in different directories on a remote server that I would like to copy to my computer...

Look into sshfs. It makes only one ssh connection to the remote box, and then just mounts a folder from that box (often your home directory, but it can be anything you have access to) at a point of your choosing. From there, use any means you like (heck, even a graphical file manager) to copy the files you need to your local machine.
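A sketch of the sshfs workflow with hypothetical host, paths, and data-set names; once the mount is up, everything is ordinary local file copying riding over a single SSH session:

```shell
# Mount the remote directory locally over one SSH connection (host/paths are made up).
mkdir -p ~/remote
sshfs user@remote:/foo/bar ~/remote
# Cherry-pick just the logs with plain shell tools, keeping the folder layout:
for d in data_set_1 data_set_2 data_set_7; do
    mkdir -p ~/logs/"$d"
    cp ~/remote/"$d"/log.txt ~/logs/"$d"/
done
fusermount -u ~/remote   # unmount when finished
```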
|
# ? Dec 1, 2016 01:25 |
Powered Descent posted:Look into sshfs. It makes only one ssh connection to the remote box, and then just mounts a folder from that box (often your home directory but it can be anything you have access to) at a point of your choosing. From there, use any means you like (heck, even a graphical file manager) to copy the files you need to your local machine. This worked great. Thanks.
|
|
# ? Dec 1, 2016 02:00 |
|
hifi posted:you need to edit the "PATH" variable to include the folder of what you want to run. A lot of distros will automatically add ~/bin/ to $PATH these days if it exists, so you may even be able to just create ~/bin/ and toss all your scripts/binaries/whatever in there.
|
# ? Dec 1, 2016 05:54 |
|
I decided to check out openSUSE Tumbleweed (again) yesterday. And I must say I'm somewhat impressed with LXQt, Snapper, 1-Click installs and whatever else I'm missing. But I've yet to get (some) videos to play. Youtube works, but .gifv's (Firefox reports "No decoders required for formats: video/mp4") on Imgur, and .wmv's on disk don't play. I did a 1-click install of: http://opensuse-guide.org/codecs.php, but that didn't seem to work (and I'm not sure why packman is even needed, all those packages seem to be present in the Tumbleweed repo) so I rolled back those changes with snapper. I'm somewhat stumped, I could have sworn last time I gave suse a crack it 'just worked'.
|
# ? Dec 1, 2016 14:11 |
|
Horse Clocks posted:I decided to check out openSUSE Tumbleweed (again) yesterday. And I must say I'm somewhat impressed with LXQt, Snapper, 1-Click installs and whatever else I'm missing. I think H.264 support in Firefox relied on something from Cisco that is free, but not free enough to be turned on by default, and there are some shenanigans involved in making it work.
|
# ? Dec 1, 2016 16:37 |
|
thebigcow posted:I think H.264 support in Firefox relied on something from Cisco that is free but not free enough to be turned on and there are some shenanigans involved in making it work.

Yes. All video codecs people actually use are heavily patented, and everything but VPX costs real money to use. Cisco paid that money for H.264 so their videoconferencing software would work with no hassle, but the codec has to be distributed in binary form to keep the patent license (they would have no way of calculating how much they owe otherwise).
|
# ? Dec 1, 2016 17:58 |
|
|
|
Here's an odd problem: in Fedora 24, I upgraded to Cinnamon 3.2, and now the login screen brought up after locking shows a really low-res version of my .face. Rather than figure that out, I thought I'd just remove it altogether. I went into the LightDM GTK+ Greeter settings editor GUI and switched "User image" to off. That didn't seem to take--every time I re-open the settings dialogue it's activated. Editing the config files in /etc/lightdm (as root) doesn't seem to do anything either. I can even switch User Image to off, click "reload", and it'll still have the switch activated. I also can't get the user image to not show on the actual log-in/switch-users screen (like after booting) either. Any ideas? I can't seem to get anywhere with Google.
|
# ? Dec 2, 2016 05:24 |