icantfindaname
Jul 1, 2008


Hi, I have to run a bunch of simulations for reasons, each of which takes a long time. I thought that by running three or so in parallel in the background I'd speed things up, but looking at the process monitor it seems each one is running at 1/3 speed. Is there no way to run things in parallel that actually speeds them up?

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

icantfindaname posted:

Hi, I have to run a bunch of simulations for reasons, each of which takes a long time. I thought that by running three or so in parallel in the background I'd speed things up, but looking at the process monitor it seems each one is running at 1/3 speed. Is there no way to run things in parallel that actually speeds them up?

Have you tried using three times as many computers?

ToxicFrog
Apr 26, 2008


icantfindaname posted:

Hi, I have to run a bunch of simulations for reasons, each of which takes a long time. I thought that by running three or so in parallel in the background I'd speed things up, but looking at the process monitor it seems each one is running at 1/3 speed. Is there no way to run things in parallel that actually speeds them up?

If the simulation is CPU-bound and single-threaded, then yes, running more copies of it in parallel (up to one per CPU core) will help, since each one will get a different core and they'll all be able to run simultaneously. It sounds like that assumption doesn't hold, though:

- If it's multithreaded, it's already using multiple cores, and starting more copies means they have to divide up the available CPU time, each one getting less than it did when it was the only one running.
- If it's disk- or network-bound instead of CPU-bound, each copy will be fighting the others for access to that resource, and being able to schedule multiple copies on multiple cores won't actually help.

This is a bit of a simplification (for example, if it's heavily disk- and CPU-bound, running more copies than you have cores might be faster overall, because the scheduler can run the ones doing CPU-bound work while the ones doing disk-bound work are waiting for IO), but in general there's a point where adding more copies doesn't buy you anything and may even be slower, and in some cases that point is 1.

As anthonypants points out, adding more computers might be more helpful. If this is a university project, see if your university has access to a cluster computing network like SHARCNET, perhaps?
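
A rough sketch of the "one copy per core" pattern in bash (the sim function here is a toy stand-in for the real simulation binary; wait -n needs bash 4.3 or newer):

```shell
#!/bin/bash
# Toy stand-in for the real simulation (hypothetical; replace with your binary).
sim() { sleep 0.2; echo "finished $1"; }

NPROC=$(nproc)   # number of CPU cores
for input in a b c d e f g h; do
  # If one job per core is already running, wait for any one of them to finish.
  while [ "$(jobs -rp | wc -l)" -ge "$NPROC" ]; do
    wait -n
  done
  sim "$input" &
done
wait   # let the last stragglers finish
```

Past that core count, extra copies just time-slice against each other, which is the 1/3-speed effect you're seeing.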

evol262
Nov 30, 2010
#!/usr/bin/perl

Zero Gravitas posted:

I thought I'd try the latest version of Fedora on a machine for engineering simulation, running OpenFOAM and some other software. I've used it in the past on my laptop and it was a very nice experience, but gently caress me, I need some encouragement that it's going to get better.

I can't install Chrome (although as far as I can tell I've done everything correctly, using yum to install all of the downloaded binary's prereqs). I'm locked out of editing /home/ unless I'm logged in as root. Compiz shits a brick in a way that requires a restart (using the MATE-Compiz spin). I can't launch programs through the terminal that weren't installed through yum, so I have to find the executable in the maze of folders and throw that at the terminal.

I might be wishfully remembering things, but I'm pretty sure it was a lot more user-friendly then than it is now. Where the gently caress am I going wrong with this?

This sounds like you wrecked the permissions on /home somewhere. Login scripts can't be read, so $PATH isn't set. Not being able to touch your own homedir is bad. Have you checked perms/ownership?

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

Is there any way to log / visualize child processes over the lifetime of a process? I'm thinking something similar to pstree, but it would track all the children over time, not just a report of instantaneous state.

I'm dealing with a build process that is an abomination of dozens of nested shell scripts and makefiles, just wanted to try to get an "overhead view" of what the hell is going on if possible, to help wrap my head around it.

e: Actually I just realized I'd basically want something like bootchart, but generalized to run on a given process. How can I do that?

peepsalot fucked around with this message at 22:35 on Nov 27, 2016

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

peepsalot posted:

Is there any way to log / visualize child processes over the lifetime of a process? I'm thinking something similar to pstree, but it would track all the children over time, not just a report of instantaneous state.

I'm dealing with a build process that is an abomination of dozens of nested shell scripts and makefiles, just wanted to try to get an "overhead view" of what the hell is going on if possible, to help wrap my head around it.

e: Actually I just realized I'd basically want something like bootchart, but generalized to run on a given process. How can I do that?

If you strace -f and look for the exec* family of calls, you should be able to parse that output and generate a GraphViz .dot or whatever.

Output from strace -f -e execve <something> should look like this:

code:
# strace -f -e execve ./test.sh
execve("./test.sh", ["./test.sh"], [/* 29 vars */]) = 0
strace: Process 10051 attached
[pid 10051] execve("/bin/bash", ["bash", "-c", "true"], [/* 28 vars */]) = 0
[pid 10051] +++ exited with 0 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=10051, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
strace: Process 10052 attached
[pid 10052] execve("/bin/bash", ["bash", "-c", "false"], [/* 28 vars */]) = 0
[pid 10052] +++ exited with 1 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=10052, si_uid=1000, si_status=1, si_utime=0, si_stime=0} ---
+++ exited with 1 +++
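
If you want to turn that output into a picture, one crude approach is to sed the execve lines into GraphViz edges. Sketched here against a canned log (the file names trace.log and execs.dot are made up):

```shell
# Canned sample of `strace -f -e execve` output, standing in for a real capture.
cat > trace.log <<'EOF'
[pid 10051] execve("/bin/bash", ["bash", "-c", "true"], [/* 29 vars */]) = 0
[pid 10052] execve("/bin/bash", ["bash", "-c", "false"], [/* 29 vars */]) = 0
EOF

# One edge per exec: the pid that called execve -> the program it ran.
{
  echo 'digraph execs {'
  sed -n 's/^\[pid \([0-9]*\)\] execve("\([^"]*\)".*/  "\1" -> "\2";/p' trace.log
  echo '}'
} > execs.dot

# Render with: dot -Tpng execs.dot -o execs.png
```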

Morter
Jul 1, 2006

:coolspot:
Seashells by the
Seashorpheus
Okay, so here's the thing: I have an old EeePC, specifically an ASUS 1015T from 2010 (specs posted below). It was a decent little web-browsing machine for a while, but it got worse and worse, and it's been collecting dust for the past year or so. It's no better with Win10 than the Win7 it started with, but it inspired me to see if there were any 'light'/crappy-PC-friendly distros to try out.

For the record, I'm a complete Linux newb, but I've got intermediate experience with computers, and I'm sure that with basic instructions or guidance I can dink around and find out what I need to. However, I've spent the past few hours on and off trying to install different things on my EeePC with a constant lack of success. Right now I have the 'latest' version of Puppy Linux on a thumb drive, and the problem is that it failed to find a way to connect wirelessly. I'm trying to look up how to fix that now, but while I do, I figure I might as well probe here for recommendations.

Basically, all I want is a light Linux distro that'll let me do basic web browsing, text editing, image viewing, and maybe video viewing. Something I can tinker with and learn Linux on from a basic starting point rather than be thrown into the weeds of installing, uninstalling, and apparently screwing up. There are billions out there and 99% of them are, I'm sure, incompatible with my device. So what should I try? What should I do to make sure it installs cleanly on a separate partition (or perhaps the entire hard drive--I'm not married to Windows and there's nothing valuable on this device at the moment)?


========
[Specs]
Model: 1015T-MU17-RD
Processor: 1.20GHz, AMD V105 Processor
Display: 10.1-inch WSVGA LED display (1024 x 600)
Graphics: ATI Radeon HD 4250
Memory: 1GB DDR3 RAM (Max 2GB) (Have this upgraded to 2GB)
Hard Drive: 250GB
Camera: 0.3MP Webcam
Operating System: Windows 7 Starter
WLAN: 802.11b/g/n Wireless LAN (Specifically, an Atheros AR9285 Wireless Network Adapter)
USB: 3 x USB 2.0
Card Reader: MMC/SD(SDHC)
Battery Life: 6 hours with 6-cell lithium ion
Warranty: 1 year limited

Morter fucked around with this message at 23:49 on Nov 27, 2016

xzzy
Mar 5, 2009

Any Linux distribution should work fine, just don't use KDE or GNOME because they'll blow through one gig of RAM like it's nothing. Hunt around for a lightweight window manager; JWM is my current favorite, but there are millions out there to choose from.

JavaScript-heavy sites will probably chug badly on that processor, but reading forums or Reddit or whatever should be decent enough.

Powered Descent
Jul 13, 2008

We haven't had that spirit here since 1969.

Morter posted:

Okay, so here's the thing: I have an old EeePC, specifically an ASUS 1015T from 2010

The Ubuntus have pretty good hardware support out of the box, so I'd suggest one of those. Xubuntu or Ubuntu Mate are good choices, and fairly light. And while the specs on the eeepc certainly aren't impressive, they're not THAT bad.

I have (well, used to have, I just gave it to my nephew) an old Toshiba netbook from that era (an NB250) with similar specs, and it'll run xubuntu just fine. It's no speed demon but it's perfectly usable. You don't need to find a distro that's so light it'll still run on a 486.

kujeger
Feb 19, 2004

OH YES HA HA

Zero Gravitas posted:

I haven't touched it!

Ownership is apparently set to root (using the right-click > Properties menu), but that's a big drag if I have to log in and out all the bloody time. More than anything I'm just irritated that F25 isn't as shiny or easy to use as I remember Fedora being back in the day. I'm wondering if it's worth nuking this installation and trying a different spin, to see if the default settings for the MATE+Compiz spin are just hosed and others are better.

/home should be owned by root, what about /home/yourusername ?

RFC2324
Jun 7, 2012

http 418

Powered Descent posted:

The Ubuntus have pretty good hardware support out of the box, so I'd suggest one of those. Xubuntu or Ubuntu Mate are good choices, and fairly light. And while the specs on the eeepc certainly aren't impressive, they're not THAT bad.

I have (well, used to have, I just gave it to my nephew) an old Toshiba netbook from that era (an NB250) with similar specs, and it'll run xubuntu just fine. It's no speed demon but it's perfectly usable. You don't need to find a distro that's so light it'll still run on a 486.

Just use Gentoo so that its compiled to run fast on your system!

(Don't do this)

xzzy
Mar 5, 2009

CoreOS! It's the smallest! Containerize that Firefox!

evol262
Nov 30, 2010
#!/usr/bin/perl

Zero Gravitas posted:

I haven't touched it!

Ownership is apparently set to root (using the right-click > Properties menu), but that's a big drag if I have to log in and out all the bloody time. More than anything I'm just irritated that F25 isn't as shiny or easy to use as I remember Fedora being back in the day. I'm wondering if it's worth nuking this installation and trying a different spin, to see if the default settings for the MATE+Compiz spin are just hosed and others are better.

The MATE+Compiz spin should still use the normal Anaconda installer (and Anaconda sets all of this up).

Unless you migrated /home from another system, I'd guess that something you installed outside of package management (lots of github "curl ... | sudo bash", for example) had an empty variable, and it accidentally chowned your homedir.

If /home/foo is not owned by foo:foo, you should chown -R it. Fedora 25 is not broken out of the box this way...

ToxicFrog
Apr 26, 2008


Zero Gravitas posted:

Should it? I'm pretty certain I was previously doing my dicking around with OpenFoam on Ubuntu in /home/OpenFoam instead of /home/zerograv/OpenFoam. IIRC I only had to stab ~ once at the terminal, or does it auto-map ~ to /home/user?

Yes it should. ~ on its own expands to the same thing as $HOME, i.e., your home directory (/home/<username>). ~user expands to that user's home directory (/home/user).

You generally should not be messing around in /home directly, and if you aren't root it shouldn't even let you.
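
You can watch the expansion happen in the shell itself:

```shell
echo ~        # prints your home directory, the same value as $HOME
echo ~root    # prints root's home directory, usually /root
[ "$(echo ~)" = "$HOME" ] && echo "tilde and HOME agree"
```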

reading
Jul 27, 2013

reading posted:

Howdy. I've got Xubuntu running on my desktop and dual booting with Win7. I decided to go against my instincts and upgrade from 14.04 to 16.04. Big mistake of course. During the upgrade, I got booted to my lock screen and the lock screen was unable to display many of its own icons (instead showing little red circles with a slash) and I couldn't log in, I would just get stuck in a loop where I enter my password, hit enter, the screen flickers and takes me right back to the login.

After restarting my system just shows black monitors. If I use Grub to boot into recovery mode, I can get a text console and if I type "sudo service lightdm start" then I can get the lock screen to display. But, I still cannot log in due to this looping behavior.

How can I fix the lock screen? What may have broken after upgrading? It shows my username and accepts input in the password field, it just can't...unlock!

P.S. Forgot to add this resource:
https://ubuntuforums.org/showthread.php?t=1743535
I couldn't find this specific problem there and I don't know what the lock screen software is called to search for it. It's not the lock screen that shows the flame in front of the monitor in black and red, it's the regular Xubuntu/Ubuntu one.


peepsalot posted:

I'd start by checking /var/log/Xorg.log for errors

The X server is segfaulting. I was dumb and didn't write down more details. If I use $ sudo service lightdm start and then go back to a terminal with Ctrl-Alt-F1, the Xorg log doesn't list any segfault in the last 50 or so lines.

ColTim posted:

I had a similar issue - turned out to be caused by the proprietary NVIDIA drivers not liking the upgraded kernel. Uninstalling the drivers from the text terminal seemed to clear it up.

Does anyone know how I can enable networking in recovery mode? I tried $ sudo ifconfig eth0 up, $ sudo dhclient eth0, $ sudo service networking start, but none of those worked. This is an ethernet connection, so no SSID or password. Once I can get that up then I can try apt-get updating things. I'm reluctant to try a $ sudo apt-get remove --purge nvidia*, until I've tried updating everything.

evol262
Nov 30, 2010
#!/usr/bin/perl

Zero Gravitas posted:

Derp. Well, I look like a massive bell end right about now. Ah well.

I've still got two questions though -

A) I still can't get Chrome to install through dnf/yum. It goes to install the prereqs and finds them all, then exits the process inconclusively.

B) The aforementioned OpenFoam issue. I've followed the instructions:
https://openfoamwiki.net/index.php/Installation/Linux/OpenFOAM-4.1/Fedora
http://www.cfd-online.com/Forums/openfoam-installation/174031-installing-openfoam-4-0-fedora-23-a.html

but with some minor discrepancies - steps 8 & 9 didn't have the files present. Might just be because it's for 23, not 25. ParaView installs in the earlier step, but I have to go hunting through folders for the executable to drag to the terminal to open it, instead of just mashing "paraview" into the terminal. It's all just such a big ballache - am I simply better off sucking it up and going back to Ubuntu for the community support?

You'd need to post the output from dnf to see what happens with Chrome.

OpenFOAM should be in $HOME (your homedir), not /home. I'd expect that it installs into $HOME/bin (they also suggest /opt and a couple of others). /opt is not part of $PATH; neither is $HOME/bin. You'd need to add whichever one to $PATH in .bashrc, or just symlink the executable into /usr/local/bin (which their install scripts should do anyway, but probably don't).

Buttcoin purse
Apr 24, 2014

fletcher posted:

This was on RHEL7. I'm not using a package based install though, I'm installing nginx from source.

Are you aware that there are a few options for getting various versions of nginx from Red Hat or at least EPEL, which is Fedora-based? EPEL has 1.10.2, and there are SCL packages for 1.6 and 1.8.

Docjowles
Apr 9, 2009

A lot of community Chef cookbooks install software from source for whatever reason as the default or only option, even when packages are readily available. I don't know offhand if that's the case with the one he's using.

hifi
Jul 25, 2012

Zero Gravitas posted:

I cant launch programs through the terminal that have not been installed through yum, so I have to try and find the executable in the maze of folders and try and throw that at the terminal.

you need to edit the "PATH" variable to include the folder of what you want to run.

open up or create a ".bashrc" file in your home directory and add "export PATH=$PATH:newPath" to the end of it, where newPath is the aforementioned folder.
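
for example (the $HOME/some/tools folder is made up; use wherever your executables actually live):

```shell
# Persist it for future shells...
echo 'export PATH="$PATH:$HOME/some/tools"' >> ~/.bashrc

# ...and apply it to the current shell right away.
export PATH="$PATH:$HOME/some/tools"

# Sanity check: is the folder on PATH now?
case ":$PATH:" in
  *":$HOME/some/tools:"*) echo "on PATH" ;;
esac
```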

LochNessMonster
Feb 3, 2005

I need about three fitty


xzzy posted:

CoreOS! It's the smallest! Containerize that Firefox!

How is CoreOS compared to Alpine ?

xzzy
Mar 5, 2009

Alpine is actually smaller but usability suffers for it.

Plus CoreOS isn't really a minimal linux, it's just a distribution focused on hosting containers and nothing else.

evol262
Nov 30, 2010
#!/usr/bin/perl
I mean, CoreOS is basically "barest possible requirements to run systemd+containers+cloud-init". I'm not sure how much more minimal you can get and still have usability/management.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Docjowles posted:

A lot of community Chef cookbooks install software from source for whatever reason as the default or only option, even when packages are readily available. I don't know offhand if that's the case with the one he's using.

Thus began the hatefuck several years ago that led to me gutting almost every community cookbook dependency out of our infra codebase.

xzzy
Mar 5, 2009

evol262 posted:

I mean, CoreOS is basically "barest possible requirements to run systemd+containers+cloud-init". I'm not sure how much more minimal you can get and still have usability/management.

CoreOS has some creature comforts, like a more featured vim that can do syntax highlighting. Alpine is really skin and bones; it trims everything down to the minimum.

evol262
Nov 30, 2010
#!/usr/bin/perl

xzzy posted:

CoreOS has some creature comforts, like a more featured vim that can do syntax highlighting. Alpine is really skin and bones; it trims everything down to the minimum.

Hence usability.

If really all you need is containers, Alpine is better. But the size difference is marginal, and I don't know why you'd ever run Alpine as anything other than a base layer for a container.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Buttcoin purse posted:

Are you aware that there are a few options for getting various versions of nginx from Red Hat or at least EPEL, which is Fedora-based? EPEL has 1.10.2, and there are SCL packages for 1.6 and 1.8.

Yeah my plan was to use the SCL packages, but I ran into issues trying to set it up on the AWS version of Redhat 7. Can't remember the details at the moment. I'll have to give the EPEL ones a shot.

Docjowles posted:

A lot of community Chef cookbooks install software from source for whatever reason as the default or only option, even when packages are readily available. I don't know offhand if that's the case with the one he's using.

I haven't really found that to be the case, every community cookbook I've used tries to install from package and you have to go out of your way to try to install it from source. Some of the newer cookbooks don't even have the option to install from source, like poise-python.

fletcher fucked around with this message at 23:43 on Nov 28, 2016

YouTuber
Jul 31, 2004

by FactsAreUseless

Try Xubuntu or Lubuntu. They have current packages, so you won't have issues installing software, and they require something stupid like 256MB or 512MB of RAM to run. However, browsing the web is going to be a shitshow: I was running an ASUS Chromebox with 6GB of RAM because I wanted something fanless and it was still a shitshow, so with 2GB I'd assume it's just impossible. I specifically tailored an Arch Linux build to be as minimal as possible by using window managers and native apps versus web-based services, and it still sucked. Native apps worked fabulously, though: it easily handled stuff like VLC, Skype, and other natively installed poo poo. You'll have to use stuff like Livestreamer to handle the shitwork of piping live streams or videos to VLC.

YouTuber fucked around with this message at 23:32 on Nov 29, 2016

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

fletcher posted:

I haven't really found that to be the case, every community cookbook I've used tries to install from package and you have to go out of your way to try to install it from source. Some of the newer cookbooks don't even have the option to install from source, like poise-python.

That's true, but there are also a lot of them that are dicks about where those packages come from and don't let you install from your own repository. The really obnoxious ones stuff the package install into some omnibus install/configure/manage resource, so that you can't even reopen it or delete it from the resource collection, because the resource exists in an inner run context.

And c'mon, don't point at Noah Kantrowitz to "not all" Chef community cookbooks. The man's an angel, but also an outlier.

VikingofRock
Aug 24, 2008




So I've got a bunch of logs in different directories on a remote server that I would like to copy to my computer. Each one has a path like "/foo/bar/data_set_1/log.txt", "/foo/bar/data_set_2/log.txt", etc. Here's the catch: there are a ton of data_sets that I don't want to copy over, and the ones that I do want to copy over are non-consecutive. Additionally, there are a bunch of data files in each data_set directory that I do not want to copy over--I just want the logs. Finally, the last piece of bonus difficulty is that the remote server will throttle me if I log in too much, so this needs to all be done in one command. In the end, I'd like to have a "data_set_1/log.txt", "data_set_2/log.txt" etc on my computer (in the separate folders), but having something like "data_set_1_log.txt", "data_set_2_log.txt" would also be acceptable.

What's the best way to accomplish this? I figure there has to be some rsync / scp option that will do what I want, but I've spent some time digging through the man pages and I haven't figured it out yet. If that explanation of what I want is confusing, let me know and I will try to clarify--it's hard to explain these things via text.

xzzy
Mar 5, 2009

Look at the rsync --files-from option.

Could also do it with a file list fed to tar, look at the -T option.

Of course this obligates you to create a text file listing every file you want, but if your needs are specific that might be the only way to do it.
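
A local sketch of the tar variant, with a throwaway tree standing in for /foo/bar (the ssh form in the comment is the one-login version, with a hypothetical host name):

```shell
# Throwaway source tree: a few data_set dirs holding a log plus junk data.
mkdir -p src/data_set_1 src/data_set_2 src/data_set_9
for d in src/data_set_*; do
  echo "log for $d" > "$d/log.txt"
  echo "big data"   > "$d/data.bin"
done

# The file list: only the logs you want, paths relative to the source dir.
cat > wanted.txt <<'EOF'
data_set_1/log.txt
data_set_9/log.txt
EOF

# Remotely it would be a single login, something like:
#   ssh user@remote 'tar -C /foo/bar -cf - -T -' < wanted.txt | tar -xf -
tar -C src -cf logs.tar -T wanted.txt
tar -tf logs.tar   # lists just the two logs, directory structure intact
```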

Powered Descent
Jul 13, 2008

We haven't had that spirit here since 1969.

VikingofRock posted:

So I've got a bunch of logs in different directories on a remote server that I would like to copy to my computer. Each one has a path like "/foo/bar/data_set_1/log.txt", "/foo/bar/data_set_2/log.txt", etc. Here's the catch: there are a ton of data_sets that I don't want to copy over, and the ones that I do want to copy over are non-consecutive. Additionally, there are a bunch of data files in each data_set directory that I do not want to copy over--I just want the logs. Finally, the last piece of bonus difficulty is that the remote server will throttle me if I log in too much, so this needs to all be done in one command. In the end, I'd like to have a "data_set_1/log.txt", "data_set_2/log.txt" etc on my computer (in the separate folders), but having something like "data_set_1_log.txt", "data_set_2_log.txt" would also be acceptable.

What's the best way to accomplish this? I figure there has to be some rsync / scp option that will do what I want, but I've spent some time digging through the man pages and I haven't figured it out yet. If that explanation of what I want is confusing, let me know and I will try to clarify--it's hard to explain these things via text.

Look into sshfs. It makes only one ssh connection to the remote box, and then just mounts a folder from that box (often your home directory but it can be anything you have access to) at a point of your choosing. From there, use any means you like (heck, even a graphical file manager) to copy the files you need to your local machine.

VikingofRock
Aug 24, 2008




Powered Descent posted:

Look into sshfs. It makes only one ssh connection to the remote box, and then just mounts a folder from that box (often your home directory but it can be anything you have access to) at a point of your choosing. From there, use any means you like (heck, even a graphical file manager) to copy the files you need to your local machine.

This worked great. Thanks.

ToxicFrog
Apr 26, 2008


hifi posted:

you need to edit the "PATH" variable to include the folder of what you want to run.

open up or create a ".bashrc" file in your home directory and add "export PATH=$PATH:newPath" to the end of it, where newPath is the aforementioned folder.

A lot of distros will automatically add ~/bin/ to $PATH these days if it exists, so you may even be able to just create ~/bin/ and toss all your scripts/binaries/whatever in there.
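
A sketch of that setup, with a throwaway hello script as the payload:

```shell
mkdir -p ~/bin
cat > ~/bin/hello <<'EOF'
#!/bin/sh
echo 'hello from my bin dir'
EOF
chmod +x ~/bin/hello

# If your distro's ~/.profile doesn't already add ~/bin, do it yourself:
export PATH="$HOME/bin:$PATH"
hello
```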

Horse Clocks
Dec 14, 2004


I decided to check out openSUSE Tumbleweed (again) yesterday. And I must say I'm somewhat impressed with LXQt, Snapper, 1-Click installs and whatever else I'm missing.

But I've yet to get (some) videos to play. Youtube works, but .gifv's (Firefox reports "No decoders required for formats: video/mp4") on Imgur, and .wmv's on disk don't play.

I did a 1-click install of: http://opensuse-guide.org/codecs.php, but that didn't seem to work (and I'm not sure why packman is even needed, all those packages seem to be present in the Tumbleweed repo) so I rolled back those changes with snapper.

I'm somewhat stumped, I could have sworn last time I gave suse a crack it 'just worked'.

thebigcow
Jan 3, 2001

Bully!

Horse Clocks posted:

I decided to check out openSUSE Tumbleweed (again) yesterday. And I must say I'm somewhat impressed with LXQt, Snapper, 1-Click installs and whatever else I'm missing.

But I've yet to get (some) videos to play. Youtube works, but .gifv's (Firefox reports "No decoders required for formats: video/mp4") on Imgur, and .wmv's on disk don't play.

I did a 1-click install of: http://opensuse-guide.org/codecs.php, but that didn't seem to work (and I'm not sure why packman is even needed, all those packages seem to be present in the Tumbleweed repo) so I rolled back those changes with snapper.

I'm somewhat stumped, I could have sworn last time I gave suse a crack it 'just worked'.

I think H.264 support in Firefox relied on something from Cisco that is free but not free enough to be turned on and there are some shenanigans involved in making it work.

Double Punctuation
Dec 30, 2009

Ships were made for sinking;
Whiskey made for drinking;
If we were made of cellophane
We'd all get stinking drunk much faster!

thebigcow posted:

I think H.264 support in Firefox relied on something from Cisco that is free but not free enough to be turned on and there are some shenanigans involved in making it work.

Yes. All the video codecs people actually use are heavily patented, and everything but VPx costs real money to use. Cisco paid that money for H.264 so their videoconferencing software would work with no hassle, but the codec has to be distributed in binary form to keep the patent license (they'd have no way of calculating how much they owe otherwise).

effika
Jun 19, 2005
Birds do not want you to know any more than you already do.
Here's an odd problem:

In Fedora 24, I upgraded to Cinnamon 3.2 and now the login screen brought up after locking shows a really low-res version of my .face. Rather than figure that out, I thought I'd just remove it altogether.

I went into the LightDM GTK+ Greeter settings editor GUI and switched "User image" to off. That didn't seem to take-- every time I re-open the settings dialogue it's activated again. Editing the config files in /etc/lightdm (as root) doesn't seem to do anything either. I can even switch User Image to off, click "reload", and it'll still have the switch activated.

I also can't get the user image to not show on the actual log-in/switch users screen (like after booting) either.

Any ideas? I can't seem to get anywhere with Google.
