other people
Jun 27, 2004
Associate Christ
Hopefully this isn't too far off topic.

I have finally had to install Flash. I tried gnash first, but it rendered horribly.

Even if I specifically block flash on [*.]youtube.com, it still detects the plugin (I guess) and then Chromium displays the "blocked plugin" grey box instead of a video.

If I disable the Flash plugin entirely, youtube politely plays HTML5 video, as it did before Flash was installed.

Is there some way around this? Telling youtube you only want HTML5 video doesn't help much when you hit a video in an incognito window. I suppose disabling the plugin altogether and just turning it on when you need it is better than nothing.


Fortuitous Bumble
Jan 5, 2007

Is there a good way to edit files in some sort of hex mode over the console? Preferably some command I'm unaware of that works through common text editors or bash, but other options would be ok too.

Doctor w-rw-rw-
Jun 24, 2008

Fortuitous Bumble posted:

Is there a good way to edit files in some sort of hex mode over the console? Preferably if there was some command I'm unaware of that worked through common text editors or bash, but other options would be ok too.

http://vim.wikia.com/wiki/Improved_hex_editing

Maluco Marinero
Jan 18, 2001

Damn that's a
fine elephant.
Anyone have a preferred method / application for automated mySQL database backups? I'll want to push them off to another server (got ssh access), as well as probably collect them on to my dev computer as well every week or so.

I've seen some automated scripts with a bit of a Google around, but I'd rather defer to experience on this one and see what other people use, seeing as how any surprises, if I ever needed this, would be very bad.

Doctor w-rw-rw-
Jun 24, 2008
I'm not able to boot a FreeBSD guest in a CentOS 6.3 KVM host. Has anyone else heard of something similar? Freezes right after beastie's (beastie is the boot menu) timeout expires. :/

Ninja Rope
Oct 22, 2005

Wee.

Doctor w-rw-rw- posted:

I'm not able to boot a FreeBSD guest in a CentOS 6.3 KVM host. Has anyone else heard of something similar? Freezes right after beastie's (beastie is the boot menu) timeout expires. :/

Recent version of FreeBSD? What about booting without ACPI?

ExcessBLarg!
Sep 1, 2001

Fortuitous Bumble posted:

Is there a good way to edit files in some sort of hex mode over the console?
As mentioned, "xxd"/"xxd -r" is quite useful for this. I suppose some folks might prefer a dedicated terminal-based hex editor, but I'm quite happy with using xxd as a filter.

The only thing to beware of: if you open binary files in a text editor directly, whether you intend to apply xxd as a filter or not, make sure to open them in binary mode (e.g., "vim -b") so you don't get line endings appended at the very end, or converted, or whatever.
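A quick round trip, as a sketch (file names here are just examples; a 16-byte file is used so xxd -r's default 16-octets-per-line layout applies cleanly):

```shell
# Round-trip a small file through xxd (file names are examples)
printf 'ABCDEFGHIJKLMNOP' > sample.bin
xxd sample.bin > sample.hex          # hex dump with offset, hex, and ASCII columns
sed -i 's/4142/5858/' sample.hex     # change bytes "AB" (0x41 0x42) to "XX"
xxd -r sample.hex > sample.bin       # -r rebuilds the binary from the dump
cat sample.bin                       # now "XXCDEFGHIJKLMNOP"
```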

muskrat
Aug 16, 2004

Maluco Marinero posted:

Anyone have a preferred method / application for automated mySQL database backups? I'll want to push them off to another server (got ssh access), as well as probably collect them on to my dev computer as well every week or so.

I've seen some automated scripts with a bit of a Google around, but I'd rather defer to experience on this one and see what other people use, seeing as how any surprises, if I ever needed this, would be very bad.

This depends on the type of tables. Usually mysqldump is fine (if your backup scheme is simple, probably just a custom script you write or find online).
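If the simple-script route is enough, the weekly pull to the dev machine could even be a single cron line (host and paths here are invented for illustration):

```
# crontab on the dev box: every Sunday at 03:00, fetch the latest dump
0 3 * * 0  scp backup@dbserver.example.com:/backups/latest.sql.gz ~/db-backups/
```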

Filesystem snapshots can work too if you have LVM or ZFS.

If you have a slave, you can do backups without affecting site traffic (e.g., take the slave offline, copy files, restart it).

Overview: http://dev.mysql.com/doc/refman/5.1/en/backup-methods.html

Also have a look at Percona XtraBackup: http://www.percona.com/software/percona-xtrabackup/

Maluco Marinero
Jan 18, 2001

Damn that's a
fine elephant.
Okay, no worries. I guess whatever it is just make sure to regularly exercise the process and verify the results so you know it works. Cheers.

JHVH-1
Jun 28, 2002

Maluco Marinero posted:

Okay, no worries. I guess whatever it is just make sure to regularly exercise the process and verify the results so you know it works. Cheers.

Having a separate machine as a slave and dumping from that is a really good option because you already have a backup environment ready if something goes wrong.

Raw files can have their issues and limitations. The way I used to do it for basic dumps is to set up ssh keys and pipe mysqldump through gzip and then into ssh, with the destination command writing it all into a gzipped file in one go. Or do individual dumps and then tar.gz the whole lot.
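As a sketch, the pipeline described above might be wrapped up like this (the host, user, and database names are invented placeholders, not anything from the thread):

```shell
# Dump | compress | stream over ssh into a dated file on the backup host.
# All names below are invented placeholders.
backup_db() {
    mysqldump --single-transaction -u backupuser "$1" \
        | gzip -c \
        | ssh backup@otherserver "cat > /backups/$1-$(date +%F).sql.gz"
}
# usage: backup_db mydb
```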

Doctor w-rw-rw-
Jun 24, 2008

Ninja Rope posted:

Recent version of FreeBSD? What about booting without ACPI?

Whatever NAS4Free uses, which I think is FreeBSD 9. Unfortunately, I haven't had much success trying to edit the ISO, because I couldn't reassemble it into a bootable ISO after unpacking, despite all the el torito blah blah mkisofs flags that I found via google. I'm thinking about just installing in a fresh VirtualBox VM then converting the disk image and importing that into KVM - I've at least got the same sort of freezing issue at boot with that image, which is...encouraging? What would I change to boot without acpi by default?

Ninja Rope
Oct 22, 2005

Wee.

Doctor w-rw-rw- posted:

Whatever NAS4Free uses, which I think is FreeBSD 9. Unfortunately, I haven't had much success trying to edit the ISO, because I couldn't reassemble it into a bootable ISO after unpacking, despite all the el torito blah blah mkisofs flags that I found via google. I'm thinking about just installing in a fresh VirtualBox VM then converting the disk image and importing that into KVM - I've at least got the same sort of freezing issue at boot with that image, which is...encouraging? What would I change to boot without acpi by default?

There's an option on the boot menu to boot without ACPI. Does that work at all? If so you can make it permanent by setting hint.acpi.0.disabled="1" in /boot/device.hints, but the ACPI thing was just a guess on my part, and probably not a good one.
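For reference, the permanent form of that setting is a single line in the guest's /boot/device.hints:

```
# /boot/device.hints on the FreeBSD guest -- disable ACPI at every boot
hint.acpi.0.disabled="1"
```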

Did you look at this bug? I assume that's not the issue if you're seeing the boot menu. These threads seem to suggest upgrading the linux kernel helps: http://www.mail-archive.com/kvm@vger.kernel.org/msg50722.html http://www.spinics.net/lists/kvm/msg54986.html

Doctor w-rw-rw-
Jun 24, 2008

Ninja Rope posted:

There's an option on the boot menu to boot without ACPI. Does that work at all? If so you can make it permanent by setting hint.acpi.0.disabled="1" in /boot/device.hints, but the ACPI thing was just a guess on my part, and probably not a good one.
Ah, thanks.

Ninja Rope posted:

Did you look at this bug? I assume that's not the issue if you're seeing the boot menu. These threads seem to suggest upgrading the linux kernel helps: http://www.mail-archive.com/kvm@vger.kernel.org/msg50722.html http://www.spinics.net/lists/kvm/msg54986.html

Yep, but I have no clue whatsoever as to how one updates CentOS 6's kernel. I'm a bit iffy on that. I know this page has info on how to run it (but it assumes an rpmbuild directory :confused:), but I'm torn between that and using Fedora instead, if I'm not going to just do the ACPI workaround, which sounds much easier at this point.

Tad Naff
Jul 8, 2004

I told you you'd be sorry buying an emoticon, but no, you were hung over. Well look at you now. It's not catching on at all!
:backtowork:
On RHEL, is it possible to have a user's private key not be accessible by the user? I've found that on the two Ubuntu boxes I've tested I can chown the user's id_rsa to root.root and it will still work for passwordless ssh logins, but this doesn't seem to work on Red Hat. This is all by way of attempting to keep the users from copying their private key elsewhere, for reasons.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

FeloniousDrunk posted:

On RHEL, is it possible to have a user's private key not be accessible by the user? I've found that on the two Ubuntu boxes I've tested I can chown the user's id_rsa to root.root and it will still work for passwordless ssh logins, but this doesn't seem to work on Red Hat. This is all by way of attempting to keep the users from copying their private key elsewhere, for reasons.

If you only need passwordless logins to work (that is, they are only SSHing in, not out) then you don't need the private key file on that machine at all; the authorized_keys file is the only thing that matters. Of course, the user has to have the private key on the machine they're SSHing from.

On the other hand, if they need to be able to SSH out of that machine, then there isn't really any way around the fact that they somehow need access to their private key. You can arrange for them to have an ssh agent holding the key instead of a regular file; that would at least make it difficult to extract the key material, although the agent will act as an oracle for them (for obvious reasons) and a dedicated user could reconstruct the key material. Or you could use selinux to prevent any process other than ssh from reading their private key; you'd also have to stop those processes from being ptraced if you want real assurance that they can't get at the contents of those files.
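The agent approach can be sketched like this (a throwaway demo key stands in for the real one, so nothing real is touched; all paths are examples):

```shell
# Hold a key in ssh-agent rather than in a readable file (demo key only)
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -q -t ed25519 -N '' -f /tmp/demo_key   # stand-in for the user's key
eval "$(ssh-agent -s)" > /dev/null                # start a per-session agent
ssh-add -t 1h /tmp/demo_key 2>/dev/null           # load it with a 1-hour lifetime
rm /tmp/demo_key /tmp/demo_key.pub                # remove the on-disk copies
ssh-add -l > /tmp/agent_keys                      # the agent still answers for it
kill "$SSH_AGENT_PID"                             # tear down the demo agent
```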

Maybe you should explain why you want to do this, and what you're hoping to achieve? Hiding a user's private key from themself doesn't really make sense, to be honest.

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online
From a security perspective which is more secure:

1) Two-factor authentication where you know a PIN + tokencode and that logs you in
2) SSH Key + Passphrase

I'm trying to make the case that SSH keys are just as secure as something like the RSA SecurID system but can't seem to find any good information on it.

Tad Naff
Jul 8, 2004

I told you you'd be sorry buying an emoticon, but no, you were hung over. Well look at you now. It's not catching on at all!
:backtowork:

ShoulderDaemon posted:

If you only need passwordless logins to work (that is, they are only SSHing in, not out) then you don't need the private key file on that machine at all; the authorized_keys file is the only thing that matters. Of course, the user has to have the private key on the machine they're SSHing from.

On the other hand, if they need to be able to SSH out of that machine, then there isn't really any way around the fact that they somehow need access to their private key. You can arrange for them to have an ssh agent holding the key instead of a regular file; that would at least make it difficult to extract the key material, although the agent will act as an oracle for them (for obvious reasons) and a dedicated user could reconstruct the key material. Or you could use selinux to prevent any process other than ssh from reading their private key; you'd also have to stop those processes from being ptraced if you want real assurance that they can't get at the contents of those files.

Maybe you should explain why you want to do this, and what you're hoping to achieve? Hiding a user's private key from themself doesn't really make sense, to be honest.

Maybe I should, but I fear the number of KeePassX recommendations. Basically we're a university division and things have been pretty lax for decades. In the last year we finally got around to disabling root ssh logins (all using the same password) on our ~80 servers. However, now we have the problem of not everyone having accounts on all of the machines, so when someone goes on holiday and their server tanks, nobody can get in.

The brainwave someone here had was to install a password locker, but that was shot down because basically we can't be trusted to keep it current (our systems inventory lists several machines that don't exist anymore, and doesn't list many, many more). So the next idea was to have a highly-available, dedicated server that everyone has a vanilla-standard account on, which one would then use to do a "sudo su - emergencyuser" (this would be the only command permitted in sudoers), and then emergencyuser would be able to ssh (passwordlessly) to whichever machine and use sudo there to do whatever.

The problem with this is that the emergencyuser private key is readable once one does the sudo to it, so a disgruntled grad assistant could copy the private key and then go mad with it. So it's not so much that we're trying to conceal the user's own private key from them, but the intermediate account's private key.

Actually that SELinux idea might be what I'm looking for.

Other suggestions are welcome, just keep in mind that we're a smallish unit using a pretty heterogeneous collection of Red Hat/Ubuntu/Solaris and versions of same, and ideally the solution won't involve any major configuration on all of the servers.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

FeloniousDrunk posted:

Actually that SELinux idea might be what I'm looking for.

If you don't already have experience with SELinux, just be aware that it might take you weeks or months to get it set up correctly for this. SELinux does not have a reputation for being easy to start using.

FeloniousDrunk posted:

Other suggestions are welcome, just keep in mind that we're a smallish unit using a pretty heterogeneous collection of Red Hat/Ubuntu/Solaris and versions of same, and ideally the solution won't involve any major configuration on all of the servers.

To be honest, I would suggest that you are going about solving this problem wrong. The intermediate account scheme seems crazy to me.

Are your servers secured in a locked machine room? If so, just configure them all to allow passwordless logins as root for the physical console; in an emergency, you just need someone with a physical key (which presumably you already control) to give temporary access. Even if all your machines aren't locked away somewhere, I'd still suggest at least considering the passwordless-login-on-physical-consoles approach; remember that users who are physically present can already reboot the machine by hand and use a boot disc to get it to do whatever they want, and it's much simpler to control physical access to important machines than it is to control digital access.

Or you can use something like pam_usb so that the machines allow passwordless login on physical access only when a specific USB keyfob is inserted. Give the USB fob to the secretary pool; it's just like the spare office keys that secretary pools keep.

If you can't do that, I'd suggest setting up a NIS and NFS server. NIS will let you have the same accounts on every machine. NFS lets you share a directory to every machine; you can either share entire home directories, or just share something like /var/local/ssh/ and stick everyone's authorized_keys files in there, using "AuthorizedKeysFile /var/local/ssh/%u" in sshd_config.

If that's too much of a change to make on all your servers, then go ahead and set up a key that allows passwordless access to root on all machines, but don't make your intermediate account. Just give the private key to your department secretaries on a USB keyfob, possibly with a passphrase only they're supposed to know. If someone needs it, they get the secretary to plug in the keyfob, run "ssh-add -t 5m /path/to/key" or whatever, and make a note of who was getting access and for what purpose. There's not a lot of training here that needs to be done, and again, it's like trusting the secretaries with spare office keys.

Or maybe just keep track of who has access to which machines, keep that information available, and try to avoid having all the people who can be responsible for a machine go on vacation at the same time?

Basically, it seems like the "people go on holiday and we can't manage their machines" problem is at least in large part a social problem. I think the best solutions involve simply working out who you are already trusting in your organization, and just giving them the technical means to bypass security when it's essential to do so.

Ninja Rope
Oct 22, 2005

Wee.

ShoulderDaemon posted:

Basically, it seems like the "people go on holiday and we can't manage their machines" problem is at least in large part a social problem.

Same with not even having a correct inventory of your servers. I'm sure it's not what you want to hear, but if you can't even maintain a list of what assets you have, how can you protect and maintain those assets?


Goon Matchmaker posted:

From a security perspective which is more secure:

1) Two-factor authentication where you know a PIN + tokencode and that logs you in
2) SSH Key + Passphrase

I'm trying to make the case that SSH keys are just as secure as something like the RSA SecurID system but can't seem to find any good information on it.

They solve slightly different problems. Authenticating via SSH keys makes brute forcing logins vastly more difficult than guessing a password, even one prefixed by an RSA fob code. They also render some man-in-the-middle/trojaned sshd attacks impossible. However, they're only as secure as the account or machine of the user using them. If you can log in as the user and put an alias in his .bash_profile (or as root and then do anything), it's unlikely he will notice before typing a password into what he thinks is a real ssh or ssh-agent binary.

RSA tokens prevent passwords from being re-used, so even if someone sniffed the password as it was being entered, what they get is (almost) useless. Plus, requiring "something you have" makes it obvious when that thing gets stolen. Of course, you then have to configure and maintain the RSA system, deal with fobs and idiot users, and you're vulnerable to any security issues RSA themselves may have.

Tad Naff
Jul 8, 2004

I told you you'd be sorry buying an emoticon, but no, you were hung over. Well look at you now. It's not catching on at all!
:backtowork:

ShoulderDaemon posted:

If you don't already have experience with SELinux, just be aware that it might take you weeks or months to get it setup correctly for this. SELinux does not have a reputation of being easy to start using.

We have some experience with it. Enough to turn it off most of the time, at least :/ I think if we set up a small, dedicated box which only has this function, then SELinux might be reasonable.

ShoulderDaemon posted:

To be honest, I would suggest that you are going about solving this problem wrong. The intermediate account scheme seems crazy to me.

Are your servers secured in a locked machine room? If so, just configure them all to allow passwordless logins as root for the physical console; in an emergency, you just need someone with a physical key (which presumably you already control) to give temporary access. Even if all your machines aren't locked away somewhere, I'd still suggest at least considering the passwordless-login-on-physical-consoles approach; remember that users who are physically present can already reboot the machine by hand and use a boot disc to get it to do whatever they want, and it's much simpler to control physical access to important machines than it is to control digital access.

Or you can use something like pam_usb so that the machines allow passwordless login on physical access only when a specific USB keyfob is inserted. Give the USB fob to the secretary pool; it's just like the spare office keys that secretary pools keep.

If you can't do that, I'd suggest setting up a NIS and NFS server. NIS will let you have the same accounts on every machine. NFS lets you share a directory to every machine; you can either share entire home directories, or just share something like /var/local/ssh/ and stick everyone's authorized_keys files in there, using "AuthorizedKeysFile /var/local/ssh/%u" in sshd_config.

If that's too much of a change to make on all your servers, then go ahead and set up a key that allows passwordless access to root on all machines, but don't make your intermediate account. Just give the private key to your department secretaries on a USB keyfob, possibly with a passphrase only they're supposed to know. If someone needs it, they get the secretary to plug in the keyfob, run "ssh-add -t 5m /path/to/key" or whatever, and make a note of who was getting access and for what purpose. There's not a lot of training here that needs to be done, and again, it's like trusting the secretaries with spare office keys.

Or maybe just keep track of who has access to which machines, keep that information available, and try to avoid having all the people who can be responsible for a machine go on vacation at the same time?

Basically, it seems like the "people go on holiday and we can't manage their machines" problem is at least in large part a social problem. I think the best solutions involve simply working out who you are already trusting in your organization, and just giving them the technical means to bypass security when it's essential to do so.

All good points. Basically we're just trying to duct-tape a solution onto a big pile of duct tape, so yeah. The physical access thing is a non-starter since 90% of the machines are virtual (we don't manage the VSS, and I'm sure IT Services would be real happy to get a request to create an NFS store shared across 80 machines).

Maybe we could make it a physical box, a Raspberry Pi or something like that, and give it to the "secretary" (heh), who would turn it on for the necessary duration. That would be like the USB key solution, and we could skip the many-user-account situation by having a real password for emergencyuser on that one device.

Maybe someday we'll have some sort of NIS/LDAP set up -- actually we've got LDAP, it was just never used on the Linux servers; I don't know why.

Tad Naff
Jul 8, 2004

I told you you'd be sorry buying an emoticon, but no, you were hung over. Well look at you now. It's not catching on at all!
:backtowork:

Ninja Rope posted:

Same with not even having a correct inventory of your servers. I'm sure it's not what you want to hear, but if you can't even maintain a list of what assets you have, how can you protect and maintain those assets?

Oh I agree completely. I wrote the inventory system that nobody uses. It's a cultural inertia thing.

Ninja Rope
Oct 22, 2005

Wee.
You can use something like Puppet to manage user accounts across all the systems (as well as many other things).

Edit: Can you tie it to something else? DNS records, notes on what IPs are allocated, etc?

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
See Thomas Cameron's "SELinux for Mere Mortals"

http://people.redhat.com/tcameron/Summit2012/SELinux/cameron_w_120_selinux_for_mere_mortals.pdf

Elissimpark
May 20, 2010

Bring me the head of Auguste Escoffier.
I had a quick browse through the thread but didn't see an answer for this.

I'm trying to install Linux Mint 13 (64-bit) on a new system from a live USB, but I keep getting the message that "the 'grub-efi' package failed to install into /target/".

I've googled about, but most answers appear to be for dual booting, which I am not doing, or else have walls of code that mean little to me (complete newbie here!).

Would anyone be able to help in a few-words-big-pictures kinda way or have a link to a solution?

Topping this off, I can't get it to recognise that it's plugged into a router either!

telcoM
Mar 21, 2009
Fallen Rib

Elissimpark posted:

I'm trying to install Linux Mint 13 (64-bit) on a new system from a live USB, but I keep getting the message that "the 'grub-efi' package failed to install into /target/".

My guess is that you either did not create an EFI system partition for your new installation, or that something went wrong with it.

The installer seems to think that your system should use an EFI-based bootloader instead of a traditional PC BIOS-based one. On a new system, this might even be true. However, most modern systems with EFI actually have UEFI, which is the newer version of EFI plus an optional compatibility layer that allows traditional BIOS-style bootloaders to work too... but the Mint installer might not be able to recognize that.

EFI allows you to use disks of more than 2 TB as boot disks without fuss. It may also achieve faster boot times than the traditional BIOS. On the other hand, it's very different from what the vast majority of Linux users are familiar with. There will be no MBR nor boot blocks of any type: only bootloader files on an EFI system partition.

Unfortunately, since EFI is a fairly new thing in PCs and the traditional BIOS-style boot has been in use since the original IBM PC/AT was introduced around 1984, you should expect a certain number of challenges. By this I mean poor EFI implementations and outright bugs. There are already reports of stupid EFI implementations that just assume the system will be running Windows only, instead of following the EFI specifications.


First, some basic facts about EFI.

When booting EFI-style from a CD-ROM or DVD, an EFI system looks for a directory named "EFI" (theoretically case insensitive) at the root of the CD/DVD. If it exists, it looks for a sub-directory "BOOT" within the "EFI" directory, and a hardware-type-specific bootloader within it. For a 64-bit x86 system (which is what most modern PCs are), the expected bootloader name is "BOOTX64.EFI".

When booting from a hard disk, the situation is a bit more complex. Instead of a nice, well-known ISO9660 or UDF filesystem, there can be any number of partitions, RAID arrays, LVM and other complex things. Even the shiny newfangled EFI firmware cannot support all of this by itself.

On hard disks, EFI wants to see a GPT-style partition table, and a certain special partition within it. This is called "EFI System Partition": it should normally be about 100-200 MB in size, marked with a special boot identifier GUID, and have a FAT32 filesystem on it. Within it, there should be a directory named "EFI", and a bootloader-specific sub-directory and/or a "BOOT" directory just like on EFI-style bootable CDs/DVDs. If the EFI System Partition does not exist on a hard disk, then the hard disk is not bootable in the native EFI way. BIOS manufacturers may optionally add support for other things too, but this is the way things are supposed to work.

In Linux, this means a few things:

First, it is time to forget the old "fdisk" command and its siblings "cfdisk" and "sfdisk". They only understand the traditional BIOS-style partition table, which tops out at about 2 TB. Instead, the "parted" tool should be used... and the Mint installer apparently already uses it.

Some Linux distributions mount the EFI system partition to /boot/efi. As the EFI system partition will also contain a EFI subdirectory, pathnames will have a silly double EFI component, like /boot/efi/EFI/redhat/grub.efi. But basically, you can think of it as "/boot/efi is the new /boot, with a new directory structure".

Other distributions (like Debian) don't mount the EFI system partition at all, and rely on mtools to access it when needed. I haven't used Mint, but a bit of Googling seems to indicate that Mint belongs to this group.


This image is from the Mint users' forum:

[screenshot of the installer's partitioning screen]
The strange /dev/mapper/isw_alphabet_soup device names indicate that Intel AHCI RAID support is being used. But the important things for you are the "New Partition Table..." button and that a partition with type "efi" is going to be created.

As I understand you're doing a clean install to a new system, you should click the "New Partition Table..." button. If it asks you to select the partition table type, pick GPT. Then make sure you create the EFI System Partition: its size should be about 100 - 500 MB (anything more than that is wasteful) and its type must be set to "efi" - this should make the installer create the appropriate partition IDs for you. Then create the rest of the partitions as you see fit.
(With GPT, there will be no "primary" and "extended" partitions - all partitions will be equal.)

The "Device for boot loader installation" field will apparently be completely ignored when grub-efi is used.

Elissimpark posted:

Topping this off, I can't get it to recognise that its plugged into a router either!

Has it recognized the existence of the NIC at all?
Please run this in a command prompt window:
code:
/sbin/ifconfig -a
Does the output include a block of text for the "eth0" network interface?
If not, the driver module for the NIC is probably not loaded: the output of the "lspci -v" command would be needed to identify your NIC and the correct driver for it.

If eth0 is listed but does not include the keywords UP and RUNNING, the driver has been loaded but the NIC has not been configured. "sudo ethtool eth0" could be used to see if the NIC detects a link at all, but some newer NICs switch completely off if they are not UP. So, before running the ethtool command, run "sudo ifconfig eth0 up" to make sure the NIC is powered up first.

If ethtool output includes "Link detected: yes", the hardware side of things is probably OK. If there is no link detected, the system is probably smart enough to not even bother wasting time with DHCP queries to get an IP address, but it may not generate any error messages for that: the system will just assume the cable is not yet plugged in.

Doctor w-rw-rw-
Jun 24, 2008

Doctor w-rw-rw- posted:

I'm not able to boot a FreeBSD guest in a CentOS 6.3 KVM host. Has anyone else heard of something similar? Freezes right after beastie's (beastie is the boot menu) timeout expires. :/

Update on this: abandoning CentOS for Fedora as a host since updating the kernel to newer versions doesn't seem recommended, and why the hell not; it's a home tinkering+NAS server.

Does anyone know anything about Open vSwitch or OpenStack? Are they neat? Horrible? Worth playing with?

Ratzap
Jun 9, 2012

Let no pie go wasted
Soiled Meat
Does anyone have experience with Fedora 17 and the Intel GMA 3150 chipset? I bought a Zotac Zbox recently (model link at the bottom); Fedora 17 installs fine but won't switch to a higher resolution than 1280x720. I've got it hooked up via HDMI to an LCD capable of 1920x1080, and the blurb with the machine claims it can do the same.
I tried stopping all the X services and running Xorg -configure, but it gives the good old "more screens found than devices" error, and the xorg.conf.new is useless (no values filled in; it looks like a bare template).
lspci showed me the kernel was OK with an N10, and the default running X11 (no xorg.conf, it just auto-detects) shows it's using an Intel driver that has "PineView" in the supported list.

I was hoping to have it ready to set up for my mother this coming weekend, but I need to get it using the full screen capabilities. Any thoughts? Or, if I'm really lucky, has someone got a working xorg.conf for one?

http://www.zotac.com/index.php?prod...1&Itemid=100295

wolrah
May 8, 2006
what?
Apparently a basic Debian GUI install is too much now for my Mini 9's little 4GB SSD. Any recommendations of compact distros which are still close enough to common ones (preferably apt-based) that I don't have to compile everything?

Somehow even distros like Backtrack which can run fine off a <4GB liveUSB can't install in the same space.

edit: I guess on that thought, is there any way to just clone a liveUSB distro to the drive and have it work? I tried just doing a 'dd' and that didn't get me anywhere.

Morkai
May 2, 2004

aaag babbys

wolrah posted:

Apparently a basic Debian GUI install is now too much for my Mini 9's little 4GB SSD. Any recommendations for compact distros that are still close enough to common ones (preferably apt-based) that I don't have to compile everything?

Somehow even distros like BackTrack, which run fine off a <4GB live USB, can't install in the same space.

edit: On that thought, is there any way to just clone a live USB distro to the drive and have it work? I tried just doing a 'dd' and that didn't get me anywhere.

I like Lubuntu for lightweight installs.

My Rhythmic Crotch
Jan 13, 2011

I was just curious if anyone has used Samba as an AD controller? How many users are you supporting? Is it ready for primetime yet? I'm not a windows admin, it's just something I have wanted to try and we have a lot of windows machines at work that could possibly be migrated to a Samba DC if it's really ready to go.

My Rhythmic Crotch fucked around with this message at 18:08 on Aug 22, 2012

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

wolrah posted:

Apparently a basic Debian GUI install is now too much for my Mini 9's little 4GB SSD. Any recommendations for compact distros that are still close enough to common ones (preferably apt-based) that I don't have to compile everything?
Do a basic install then just add what you need after that? Can you live with Fluxbox or WindowMaker or do you need GNOME/KDE?

wolrah posted:

Somehow even distros like BackTrack, which run fine off a <4GB live USB, can't install in the same space.
I think the live distros use a compressed file system
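Right: most live distros keep the root filesystem as a single compressed squashfs image (e.g. live/filesystem.squashfs in Debian-live's layout; the exact path varies by distro). That's also why a raw dd of the stick doesn't behave like an install: you get the read-only live layout, not an unpacked root plus fstab and bootloader. The comments below sketch the manual route; only the last few lines are meant to run anywhere, as an illustration of the compression ratio involved:

```shell
# Manual "clone the live system" sketch (as root; device and paths assumed):
#   mount /dev/sdX2 /mnt/target
#   unsquashfs -f -d /mnt/target live/filesystem.squashfs
#   ...then write an fstab and install a bootloader on the target.
#
# Illustration: compressible data shrinks dramatically, which is how
# a root filesystem bigger than 4GB can fit on a <4GB stick.
dd if=/dev/zero bs=1k count=512 2>/dev/null | tr '\0' 'x' > sample.dat
gzip -c sample.dat > sample.dat.gz
wc -c sample.dat sample.dat.gz
```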

evol262
Nov 30, 2010
#!/usr/bin/perl

wolrah posted:

Apparently a basic Debian GUI install is now too much for my Mini 9's little 4GB SSD. Any recommendations for compact distros that are still close enough to common ones (preferably apt-based) that I don't have to compile everything?

Somehow even distros like BackTrack, which run fine off a <4GB live USB, can't install in the same space.

edit: On that thought, is there any way to just clone a live USB distro to the drive and have it work? I tried just doing a 'dd' and that didn't get me anywhere.

Debian minimal plus the GUI stuff is fine. It's probably creating a 2GB swap partition or something. 100MB boot, 512MB swap leaves you more than 3GB, which is even enough for KDE.
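Easy to confirm from the installed system (or a console during install); a minimal sketch:

```shell
# Where did the 4GB go?
df -h /                      # root filesystem usage
cat /proc/swaps              # did the installer make an oversized swap partition?
du -xsh /usr 2>/dev/null     # the usual big consumer on the root fs
```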

Qtotonibudinibudet
Nov 7, 2011



Omich poluyobok, skazhi ty narkoman? ya prosto tozhe gde to tam zhivu, mogli by vmeste uyobyvat' narkotiki
Our legal department (with the review of internal IT) has apparently banned the use of zlib.

I'm not sure what they're getting at here unless they forgot about the Linux servers we have everywhere.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Ratzap posted:

Does anyone have experience with Fedora 17 and the Intel GMA 3150 chipset? I bought a Zotac Zbox recently (model link at the bottom). Fedora 17 installs fine but won't switch to a resolution higher than 1280x720. I've got it hooked up via HDMI to an LCD capable of 1920x1080, and the blurb with the machine claims it can do the same.
I tried stopping all the X services and running Xorg -configure, but it gives the good old "more screens found than devices" error, and the generated xorg.conf.new is useless (it looks like a template with no values filled in).
lspci showed the kernel recognizing it as an N10, and the default auto-detected X11 (no xorg.conf) is using an Intel driver that has "PineView" in its supported list.

It could be a hardware texture memory limit. Can you pastebin glxinfo so I could look through it?

wolrah
May 8, 2006
what?

Bob Morales posted:

Do a basic install then just add what you need after that? Can you live with Fluxbox or WindowMaker or do you need GNOME/KDE?

I could have sworn I read that Debian had moved to a lighter-weight GUI by default so I was just rolling with it. I couldn't care less what GUI I have as long as I can run Wireshark, a modern browser, and a terminal, so I'll give it a shot.

xtal
Jan 9, 2011

by Fluffdaddy

wolrah posted:

I could have sworn I read that Debian had moved to a lighter-weight GUI by default so I was just rolling with it. I couldn't care less what GUI I have as long as I can run Wireshark, a modern browser, and a terminal, so I'll give it a shot.

As I understand it, they're moving to Xfce for version 7.0 (Wheezy), which isn't out yet.

spankmeister
Jun 15, 2008






fivre posted:

Our legal department (with the review of internal IT) has apparently banned the use of zlib.

I'm not sure what they're getting at here unless they forgot about the Linux servers we have everywhere.

Any background info you can give as to why? Have there recently been developments on the legality of it?

BnT
Mar 10, 2006

fivre posted:

Our legal department (with the review of internal IT) has apparently banned the use of zlib.

Don't forget to erase all of those zlib-compressed files in /boot! Also, :wtc:
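The jab is accurate, for what it's worth: kernel and initramfs images are typically gzip-compressed, and gzip is the same DEFLATE/zlib family the library implements. A small check (the /boot path is an assumption and varies by distro; only the pipe is meant to run anywhere):

```shell
# On many distros the kernel image is gzip data (path varies, may need root):
#   file /boot/vmlinuz-$(uname -r)
# Any gzip stream starts with the magic bytes 1f 8b:
echo "banned bytes" | gzip | od -An -tx1 -N2
```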

Ratzap
Jun 9, 2012

Let no pie go wasted
Soiled Meat

Suspicious Dish posted:

It could be a hardware texture memory limit. Can you pastebin glxinfo so I could look through it?

Thanks for taking the time. This is the glxinfo paste

http://pastebin.com/RrxHbvgi

And while I was at it, here is the paste of lspci

http://pastebin.com/eb7iSUZG

kujeger
Feb 19, 2004

OH YES HA HA

wolrah posted:

I could have sworn I read that Debian had moved to a lighter-weight GUI by default so I was just rolling with it. I couldn't care less what GUI I have as long as I can run Wireshark, a modern browser, and a terminal, so I'll give it a shot.

Debian'll run Xfce as the default desktop in the next release (Wheezy, 7.0), due... sometime.

edit: beaten; that's what I get for leaving a thread in a tab!
