juggalol
Nov 28, 2004

Rock For Sustainable Capitalism
For as long as I can remember, I've used the "swap partition should be twice the amount of RAM populated in the system" rule when defining disk partitions for a new install.

I've recently put together a new machine and while it only has 2GB of RAM at the moment, I plan on upgrading it to either 4GB or 8GB in the next couple of months.

Assuming that I'll eventually load it up to 8GB (motherboard's max), would it really be necessary to define a 16GB swap partition? It seems like a full 16GB of disk space for the swap partition is overkill.

Am I correct in this thinking? And if so, what would be a more reasonable choice? I was thinking that 4GB sounds like a decent choice, but I'm just pulling that out of thin air.

Edit:

Enelysios posted:

I am a new Linux convert, and I am having serious problems finding a graphics card driver. I am using Ubuntu, but the driver it installs doesn't support my card. In fact, my card isn't even listed on the official driver list (for Windows OR Linux). I am using a Radeon Xpress 1100. If I am unable to find proprietary drivers, I know there are supposedly a couple of choices for 3rd party/open source drivers, so which would you recommend? Do either support my card?

Thank you for your time, as I am still inexperienced and I was unable to get any information on this on my own.

I checked the ATI site, and as you mentioned I don't see the Radeon Xpress 1100 listed specifically. The closest match I could find was the Radeon Xpress 1250, which is listed as an on-board device. Is the 1100 also a motherboard-integrated device? Either way, I would grab the driver from http://ati.amd.com/support/driver.html and download/install it. The ATI installer for Linux isn't too difficult to use, it has a GUI option and you should be able to get by with the default options without too much hassle.

As for the third party/open source drivers, they most likely came with your Ubuntu installation. The name for the driver should either be "radeon" or "ati". To switch the driver, you'll need to edit your X.Org config (unless Ubuntu has some kind of a front-end for doing it -- I don't know, I've never used Ubuntu). Do the following:

1) Open a terminal.
2) Run su and type your root password (type only su itself - the word 'run' isn't part of the command).
3) modprobe radeon
If step #3 didn't give you any error messages (such as "module not found"), then proceed.
4) cd /etc/X11/
5) cp xorg.conf xorg.conf.backup
6) nano xorg.conf

Nano will open. It's an ncurses-based text editor - quick and easy. Hit control+W to search, type vesa and press Enter. It should find a line of text that looks similar to


Driver "vesa"

Replace the word vesa with radeon.
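
For reference, after the change the surrounding section of xorg.conf should look roughly like this (the Identifier string is just a made-up example - yours will be whatever Ubuntu generated):

code:
Section "Device"
    Identifier  "Configured Video Device"
    Driver      "radeon"
EndSection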

Press control+X to exit. It'll ask you if you want to save the file, hit 'y' for yes.

Run 'reboot' from the root prompt (there are ways to apply the changes without having to reboot, but for a beginner this will be the easiest way).

Hope that helps you.

juggalol fucked around with this message at 15:43 on Apr 3, 2007


juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

dfn_doe posted:

There is just no logical reason to scale up the swap size along with the memory size.

Thanks :)
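
For anyone else wondering about this, a quick way to sanity-check how much swap is actually being used (plain procps/util-linux commands, nothing distro-specific):

code:
$ free -m       # the Swap: row shows total/used/free in MB
$ swapon -s     # per-device swap usage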

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism
Edit: Beaten

juggalol fucked around with this message at 04:44 on Mar 6, 2008

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

rincewind101 posted:

I'm having some trouble with rtorrent. I recently switched from Azureus, and I'm having some trouble with seeding - after the torrent is finished, it doesn't really seed anymore.

What do you mean by "doesn't really seed" - is it just low activity, or no seeding at all? It could be that you're not on a particularly active torrent, or it could also be that your ISP is doing something behind the scenes (Comcast has become notorious for this).

rincewind101 posted:

Does anything look suspect in my config file?

Nothing jumps out at me as being wrong. I'll post my own .rtorrent.rc at the end - mine is much smaller, only a few options set. It looks like you copied the default config and made a couple of edits.

rincewind101 posted:

One of the things I'm not sure about is the bind and ip setting -
bind = a.b.c.d
Bind listening socket and outgoing connections to this network interface address.
ip = a.b.c.d
ip = hostname
Set the address reported to the tracker.

Does that mean that rtorrent is telling the tracker my ip address is 192.168.1.100?

It doesn't look like you have either of those settings defined in your config file anyway. rtorrent shouldn't be reporting 192.168.1.100 to the tracker; it should report the address of the internet-facing device on your network. Since you're using a 192.168 network, I'm assuming you're behind some kind of router/firewall device.

code:
# modified .rtorrent.rc
session = ~/.rtorrent_session

## default directory to save torrent data before it's complete
directory = /mnt/sda10/temp_download

## move completed torrents to different directories based on the watch dir
## it was initially pulled out of
schedule = watch_directory_1,10,10,"load_start=~/t/music/*.torrent,d.set_custom1=/mnt/sda10/music/"
schedule = watch_directory_2,10,10,"load_start=~/t/video/*.torrent,d.set_custom1=/mnt/sda10/video/"

# On completion, move the torrent to the directory from custom1.

on_finished = move_complete,"execute=mv,-u,$d.get_base_path=,$d.get_custom1= ;d.set_directory=$d.get_custom1="

## forwarded ports
port_range = 7001-7020

## encryption settings

encryption=allow_incoming,try_outgoing,enable_retry

rugbert posted:

What are some good web dev apps for Linux? So far I've been using vim to write all my pages but it's getting time consuming :/

Check out Quanta Plus

juggalol fucked around with this message at 05:02 on Jul 18, 2008

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

Your Japanese Dad posted:

This is an integrated motherboard chipset for the network card, and it appears to be using the sky2 kernel driver.

Also I should note that restarting the machine causes everything to be fixed.

EDIT: It appears this is a known problem. Looks like I'll have to get another network card.

EDIT2: Anyone have a suggestion for a cheap network card with solid Linux drivers?

What chipset is the on-board NIC using? In the past, I've had Marvell Yukon Gig-E chips that hung a machine using the sky2 driver, but I was able to use a patched version of the sk98lin driver without any issues.
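
If you're not sure which chipset/driver it is, something like this should tell you (assuming lspci and ethtool are installed, and eth0 is the interface in question):

code:
# lspci | grep -i ethernet
# ethtool -i eth0     # the "driver:" line shows the kernel module in use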

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

Your Japanese Dad posted:

I'm pretty sure that's the exact one. I'll take a look into it. Thanks a lot.

No problem. I'll see if I can find the driver patch - I remember SysKonnect's website being kind of a pain in the rear end to navigate, I may have it lying around somewhere. If nothing else, I'll be able to grab it tomorrow at work, I know I have it on the NAS there.

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

ShoulderDaemon posted:

But I'd guess your problem is spelling sync with an h.

I did something like that last week - I couldn't figure out why a RAID partition wasn't being mounted at boot, and startup scripts which relied on that data were bombing left and right ... come to find out that I typed "auto" as "audo" in fstab.

Fuckin' New England accent strikes again :v:
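
Handy trick for next time: after editing fstab, running mount -a as root will complain about typos right away instead of waiting for the next boot to blow up (it only tries to mount whatever isn't mounted yet):

code:
# mount -a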

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism
Edit: ^ Yeah, uninstalling apache is the better route, if you don't ever need it. No reason to have it if you don't ever use it.

lord funk posted:

What's the best way to make sure Apache isn't running? Should I 'turn it off' (and how would I do that) or should I just uninstall it?

We don't do any Web-server stuff, so it isn't really needed in the install. (Running Debian)

You should be able to turn it off using the apache init script. I don't run Debian, but it ought to be something like

code:
# /etc/init.d/apache stop
And to prevent it from starting at boot,

code:
# update-rc.d -f apache remove
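
And to double-check that nothing is still listening on port 80 afterwards (netstat from net-tools):

code:
# netstat -tlnp | grep ':80 '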

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

mcsuede posted:

I had a hard reboot happen during a run of sfill and now I've got full drives, any ideas?

As root, can you run something like "du -b | sort -gr" from / ? That should give you the disk usage in bytes and organize it from biggest to smallest - it ought to at least give you some idea about where the most space is being taken up.
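
If the raw output is too much to wade through, something like this (GNU coreutils assumed) limits it to the 20 biggest directories and stays on one filesystem:

code:
# du -xb / | sort -nr | head -n 20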

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

FugeesTeenMom92 posted:

Yeah, except it loving doesn't.

Turning Wobbly Windows off makes the snapping stop. However, there is apparently no way to have Wobbly Windows without Irritating Snapping Windows.

Have you restarted X after changing the settings? I've had compiz misbehave like that before.

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

Sock on a Fish posted:

I've already updated the kernel using yum to see if that'd take care of things, but no good. I'm pretty sure that nfs has been part of the linux kernel for a long-rear end time. I'm on 2.6.9-67.

Can you look at your kernel config file? It should be named 'config' and should exist somewhere within /usr/src. The 'older' standard was to keep the kernel sources in /usr/src/linux - but a lot of distributions have started doing it their own way, you may need to do some digging.

If you can find that file, try 'grep -i nfs config' - it ought to tell you if NFS support is compiled statically, as a module or not at all.

If it's not compiled in at all, it may be that it was being handled by the initrd. You mentioned that you aren't familiar with how it ticks (neither am I) - but maybe the NFS support was originally built into the initramfs image that the original working box was using?
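
On a lot of distributions the config also gets installed alongside the kernel, so these might save some digging (the second one only works if the kernel was built with CONFIG_IKCONFIG_PROC; the third just shows whether an nfs module is currently loaded):

code:
# grep -i nfs /boot/config-$(uname -r)
# zgrep -i nfs /proc/config.gz
# lsmod | grep nfs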

Edit:

The Remote Viewer posted:

Linux is in the stone age as far as torrent clients go, which I found surprising.

I know I'm really late to the party on this one, but you should check out rTorrent. I didn't see it mentioned in any of the replies following your post, and it's the best torrent client I've seen under Linux. It runs in the console (I run it inside of a screen session and it works out very nicely). If you prefer, there's a web front-end project called wTorrent, but I've never used it.

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

Sock on a Fish posted:

It turns out you were on the right track. The running kernel needed an nfs module, but /lib/modules/2.6.9-67.EL/ was drat near empty. Copied over the modules directory from a machine with the same kernel, problem solved. :c00l:

That sounds like the sort of problem that you fix once, you're not 100% sure why it broke in the first place - and once you get it working again, you walk away very slowly, never showing your back to the system.

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

JawnV6 posted:

I just plugged it in and it got 90% of the way there, any ideas how to fix the rest?

I think your problem has to do with the X configuration on your Ubuntu system. If you're using an NVidia video card (and driver), run

$ nvidia-xconfig --advanced-help

Within all of that output, there's a section for TV output:

code:
--tv-standard=TV-STANDARD, --no-tv-standard
    Enable or disable the "TVStandard" X configuration option.
    Valid values for "TVStandard" are: "PAL-B", "PAL-D", "PAL-G", "PAL-H",
    "PAL-I", "PAL-K1", "PAL-M", "PAL-N", "PAL-NC", "NTSC-J", "NTSC-M",
    "HD480i", "HD480p", "HD720p", "HD1080i", "HD1080p", "HD576i", "HD576p".
(gently caress, sorry - that broke tables - should be fixed now)

So I guess the next step would be to make a backup of your current working X config and then try running

$ nvidia-xconfig --tv-standard=HD1080p

(Or whatever TV standard you're trying to work with - I don't know which resolution corresponds to which HDTV mode)
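
Putting those two steps together, it'd be something like this (assuming the config lives at the usual /etc/X11/xorg.conf):

code:
# cp /etc/X11/xorg.conf /etc/X11/xorg.conf.pre-tv
# nvidia-xconfig --tv-standard=HD1080p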

Not sure if ATI's aticonfig has a handy-dandy way to do this ... if not I can try to screw around with a test server at work tomorrow and see if I can post the relevant changes.

Edit:

Sock on a Fish posted:

My guess is that the dude that set up the physical machine decided that he'd compile everything into the kernel instead of as modules? If that's possible, that is. That guy is long gone, so there's no way to know. It's possible that he did it to try to boost performance, since this machine is always taxed to the max compiling poo poo with our Java-powered build manager.

Chalk it up to him being a silly goose, I guess.

juggalol fucked around with this message at 05:11 on Jan 6, 2009

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

JawnV6 posted:

nvidia-settings never brought up the GUI, it just ran from the console and quit. Broke X, whatever it did, and the system came back up in reduced graphics mode which ironically showed the full screen at 640x480. I restored my old xorg and it booted fine, I'm thinking this isn't worth the effort anymore. I'll screw around with it a little longer when I get home tonight, but I'm probably going to end up going back to my monitor. Thanks for the help!

I googled around a bit more, and I think the problem you're describing is HDTV overscan. I've never screwed around with HDTV output before, but it sounds like this is a pretty common problem, and tweaking the X config is the proper way to go about it. I've found a bunch of threads from people reporting the same problem ... but none of them seem to have a solution.

Dunno if that helps you on your hunting or not.

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

JawnV6 posted:

Awesome. Very helpful and relevant to the problem at hand.

I can just picture his supple bosom jiggling while he typed that sentence out, his pock-marked & sweaty face twisted in rage.

Edit: I know the 'gtf' tool can be used to generate modelines that can be put into xorg.conf, but you'll need to know specific parameters to put into gtf for it to work.

code:
usage: gtf x y refresh [-v|--verbose] [-f|--fbmode] [-x|--xorgmode]
The x and y inputs are obviously whatever resolution you want the display to run at, but the refresh rate is the piece that would give you more trouble (I think). I doubt you can find the refresh rate for the TV in Sony's documentation ... but you might have some luck if you call their tech support. It won't cost anything but some of your time, and it might actually get you the answer.
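
For example, assuming the set really does 1080p at 60Hz (that refresh rate is a guess on my part), the xorg.conf-style modeline would come from:

code:
$ gtf 1920 1080 60 -x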

juggalol fucked around with this message at 21:23 on Jan 6, 2009

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

JawnV6 posted:

I tried a couple suggested refresh rates from googling, every attempt at putting a modeline into xorg.conf ended in 'safe graphics mode'. I'm throwing in the towel, too frustrated to figure it out.

Thanks for trying.

No problem. I'll keep this in mind - I had hoped to use my current 32" HDTV as a desktop display once I get around to upgrading my TV (within the next year or two) but maybe that won't work out so well after all. I'd rather know now than spend hours banging my head against a wall.

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

trilljester posted:

I was thinking of trying out Gentoo as it seems it can be fully customized for your machine, and maybe that might work out better for this laptop? Any suggestions?

I ran Gentoo for a good while, but I got very sick of it breaking constantly.

If you're looking for something that "just works" and requires little maintenance to keep running, Gentoo isn't the right way to go (in my opinion). If you're looking for a minimalist/flexible distribution, I've heard good things about Arch ("like Gentoo but isn't broken"), but I've never actually used it. If you want something that "just works", I've had good luck with Ubuntu.

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

GuyGizmo posted:

However, after each copy, about 45 gigabytes of used space materialized out of nowhere. I confirmed that all hard links were preserved, but nonetheless each time the total size on disk of the directory tree increased by about 45 gigabytes. Anyone have any idea why?

Have you compared the contents of the two directories?

Something like "ls -lSRh /dest > dest_ls; ls -lSRh /src > src_ls; diff dest_ls src_ls"?

After the first difference it finds, the 'diff' output will probably get really, really messy - but at least it'll show you the first instance of what changed.

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

chryst posted:

Or use "du -sh */*" instead of ls, and it may be cleaner.

Yeah, either way would work - as long as he's able to get a comparison of both directories, he should be able to find out what's different between the two.
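
If he wants it in one shot, bash process substitution makes the comparison pretty painless (hypothetical /src and /dest paths, same as before):

code:
$ diff <(cd /src && du -sk -- */ | sort -k2) <(cd /dest && du -sk -- */ | sort -k2)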

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

The Merkinman posted:

Is there any audio player for Linux that has the album art view of Windows Media Player?
For those unfamiliar with what I'm describing, here is a picture.
http://www.microsoft.com/library/media/15369/hk/windows/images/windows-vista/discover/Web_MediaPlayer_album02.jpg

Not shown in the picture: If you group by artist, and there are multiple albums by the same artist it also stacks album art in a fan like orientation.

I don't know about the stacking view, but Amarok supports album artwork and lets you browse through album cover photos to pick what you want to listen to.

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

The Merkinman posted:

Is this in Amarok 2? I tried 1.4 (Ubuntu 8.10) and I don't see that option anywhere.

For me, it's in Tools -> Cover Manager. Brings up a window with all of your album art, scroll through, double click for the one you want to listen to.

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism
I'm looking for a quick, easy and painless way to stress test systems before they leave our manufacturing dept. All we're really concerned with is stressing CPU and RAM. I've played around with Inquisitor (https://www.inquisitor.ru) and had some measure of success with it, but I can't run the CPU and memory tests simultaneously - they seem to clobber each other, which results in the memory test failing.

Does anyone know of a good, easy to use bootable CD utility that will test both CPU and RAM? Memtest isn't sufficient, since it only works the CPU just enough to address memory and perform basic operations on it.

If anyone knows of such a magical tool that'll just boot + run, I'd be ever so happy.

sonic bed head posted:

Thank you so much! That's amazing. I had no idea that command existed. It's basically unlocker for Linux.

Check out http://www.ubuntu-unleashed.com/2007/09/advanced-lsof-usage-in-ubuntu-or-any.html - it has some quick and easy examples of how nice lsof really is. Just running "lsof" on its own can be very slow; if you know what you're looking for and throw lsof a few arguments when you invoke it, it'll speed up considerably.

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

juggalol posted:

Does anyone know of a good, easy to use bootable CD utility that will test both CPU and RAM? Memtest isn't sufficient, since it only works the CPU just enough to address memory and perform basic operations on it.

I've spent considerable time hunting around for this, with no real luck. So I wound up using remastersys to create my own bootable ISO image based on an already existing HDD install. Slapped down a pretty minimal Ubuntu 8.10 install, installed mprime and an init script to start it up in "torture" mode when the system boots.

As it turns out, I probably spent more time hunting for a pre-made tool than I actually did rolling my own. Remastersys is surprisingly easy to use - if you ever need to create your own custom bootable CD, it's a pretty handy tool to have.
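
For anyone wanting to do the same thing, the init hook really is just a one-liner - something like this in rc.local (assuming mprime got dropped in /usr/local/bin and that -t still means torture test):

code:
/usr/local/bin/mprime -t > /var/log/mprime-torture.log 2>&1 &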


Burning the .iso now, fingers crossed.

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism
I'm trying to get a *complete* list of all of the files that rtorrent is using at a given time (trying to clean up disk space, want to make sure I'm not deleting something that rtorrent is seeding and wind up re-downloading it).

It isn't as simple as I thought it would be, though.

code:
$ lsof -p `pidof rtorrent` | wc -l
193
I am absolutely certain that rtorrent is actively seeding way, way more than 193 files.

Anyone know how I can get an actual complete list? Or is rtorrent doing something internally like releasing filehandles until it actually needs to do something with them?

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

ShoulderDaemon posted:

It does this. You'll need to directly investigate what torrents it is running.

Ouch. That's gonna take some time.

Thanks for letting me know, though. Maybe I'll shoot the developer an e-mail and suggest this as a feature ... because god drat, that's a pain in the rear end when you have a lot of torrents being managed.
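
One partial shortcut in the meantime: the session directory from my config earlier in the thread (session = ~/.rtorrent_session) keeps a copy of every loaded .torrent, so at least the list of torrents rtorrent knows about is easy to get at:

code:
$ ls ~/.rtorrent_session/*.torrent | wc -l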

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

NZAmoeba posted:

Nothing in the error log in apache, which makes me think it's not getting that far.

Check apache's access.log file instead - it'll tell you if apache ever gets the request from the other clients. If it's not showing any activity, double check the firewall settings on the Fedora box.
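
On Fedora the stock paths are usually under /var/log/httpd, so something like this while you retry from the client should show whether the request ever arrives:

code:
# tail -f /var/log/httpd/access_log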

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

NZAmoeba posted:

Oh, ssh works, I didn't have sshd running before.

So why does that work but http doesn't? httpd is definitely running, and http's box is ticked right next to ssh's in the firewall settings.

Could you try stopping the firewall completely and re-testing from another box?

I'm not suggesting you're not competent enough to check the right options in the firewall GUI - I just want to completely rule out the firewall as a possibility.
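
On Fedora of that vintage the firewall should just be the iptables service, so roughly:

code:
# service iptables stop
# service iptables start     # once you're done testing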

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

NZAmoeba posted:

oh what the gently caress!? I just tried it from another PC and it works loving FINE. I've spent all day bitching about this drat box and wondering why it won't talk on the network properly and it's just my pc?!?!
so new question, why the hell won't it talk http to my machine?

So if machine 'A' is the Fedora box running Apache, machine 'B' is the original desktop you tried to use to view the web server, and machine 'C' is the 2nd client box that worked, can you reply back with all of the network settings for each box? (IP address, netmask, default gateway)
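
On the Linux boxes these two should cover it (ipconfig /all does the same job if any of them are Windows):

code:
$ /sbin/ifconfig
$ /sbin/route -n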

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

NZAmoeba posted:

This has been strangely difficult to google, I have a remote ssh connection into my box that's been there since december 2008, I want to kill it. How? (preferably not killing the other connections coming in)

Running "lsof -i :22" will show you exactly which PIDs are active on port 22; just kill the appropriate one.
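
If there's a lot of noise on port 22, narrowing it down to established sessions helps - then kill whichever PID belongs to the stale one:

code:
# lsof -iTCP:22 -sTCP:ESTABLISHED
# kill 12345     # substitute the PID of the stale session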

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

Bohemian Cowabunga posted:

A simple question that has been bugging me for a while, how do i search recursively with wildcards(*) using ls?
I am trying to do the bash counterpart of C:\dir /s *.junk

Have tried to read the documentation and looked around with google, but I am unable to find a simple answer for this.

code:
find -iname '*.junk'
Ought to get you there.

Edit: Oops, didn't see that we'd moved onto another page. My mistake.

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism
I'm having some weirdness with sshd on my Ubuntu 9.04 desktop.

I can log into it from some IPs, but not from others. For example, I can't log in from my work station (a XP/Ubuntu 9.04 dual boot), but I can log in from an OpenBSD server.

On the Ubuntu 9.04 box running sshd, I'm looking at auth.log, and seeing where it's refusing the connection:

code:
Apr 30 11:09:04 smallpox sshd[18650]: refused connect from 74.94.###.### (74.94.###.###)
I've turned DenyHosts on and off, it isn't making any difference.

The same box is accepting connections from elsewhere, though - such as my free shell account on honeyshells:

code:
Apr 30 12:04:34 smallpox sshd[20182]: Accepted password for juggalol from 66.7.###.### port 44307 ssh2
Apr 30 12:04:34 smallpox sshd[20182]: pam_unix(sshd:session): session opened for user juggalol by (uid=0)
Any ideas why some connections are being refused while others aren't?

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism
After turning the logging level up to DEBUG in sshd_config, here's the pertinent info from /var/log/auth.log (immediately after a restart - the 1st line is when sshd comes back up):

code:
Apr 30 14:18:01 smallpox sshd[25018]: debug1: Bind to port 22 on 0.0.0.0.
Apr 30 14:18:01 smallpox sshd[25018]: Server listening on 0.0.0.0 port 22.
Apr 30 14:18:01 smallpox sshd[25018]: debug1: Bind to port 22 on ::.
Apr 30 14:18:01 smallpox sshd[25018]: Server listening on :: port 22.
Apr 30 14:18:12 smallpox sshd[25023]: debug1: rexec start in 5 out 5 newsock 5 pipe 7 sock 8
Apr 30 14:18:12 smallpox sshd[25018]: debug1: Forked child 25023.
Apr 30 14:18:12 smallpox sshd[25023]: debug1: inetd sockets after dupping: 3, 3
Apr 30 14:18:12 smallpox sshd[25023]: debug1: Connection refused by tcp wrapper
Apr 30 14:18:12 smallpox sshd[25023]: refused connect from 74.94.###.### (74.94.##.###)
The "connection refused by tcp wrapper" message looked strange ... googled a bit, and it turns out it's related to TCP Wrappers, which I don't recall having installed/configured, but it could've come with 9.04 - I don't really know.

Either way, it turns out that the IP address I was trying to connect *from* wound up in hosts.deny. Maybe I typed my password wrong one too many times, I dunno. I removed it from hosts.deny, added it to hosts.allow, restarted ssh & denyhosts, and all seems to be well now.
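
For future reference, this is a quick way to see whether an address you're connecting from has been blacklisted this way (grepping on the first couple of octets is enough):

code:
# grep '74.94' /etc/hosts.deny /etc/hosts.allow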

Thanks :)

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

kyuss posted:

There's sufficient info on creating your own custom Ubuntu Live CD on the web. I tried this last year and it was a breeze.

You want remastersys.

1) Install Ubuntu
2) Configure NDISWrapper + your wireless card.
3) Run remastersys to create a bootable version of your live system
4) Burn ISO
5) Praise its simplicity

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

mike12345 posted:

I know, I know. Do you think it's worth buying that O'Reilly book on regexp or should I just google and print out stuff I find on the web?

Personally, I found that O'Reilly's "sed and awk" was very, very helpful in learning regular expressions. There's likely tons of stuff on the web that you can get for free, but for me, the book was well worth the money, if only to have a quick reference on my desk for when I needed it.

Edit: Not trying to pick a fight with Lucien or anything, just my two cents.

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

eighty8 posted:

That link is really helpful for a starting place, thanks. I have been doing some Google searches and it looks like "shred" with appropriate modifiers is the perfect tool for the erasing itself. Combined with the stuff at that link it could be the perfect tool.

I still will have a good bit of scripting to do to make a log file be made and such, but I don't think that should be that hard. What I am more concerned about is that I would like to be able to access the drive's serial number, if I am able, for logging purposes. I know this is possible if the drive is plugged into the computer via IDE, but I don't know about USB.

Just for curiosity's sake, I ran 'hdparm -i' on an (internal) SATA disk and was able to get the serial number. Running the same command against an external USB disk gave me an error message, "HDIO_GET_IDENTITY failed: Invalid argument".

I did a few minutes of googling to see if anyone else had this problem, but I didn't come up with anything.

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

Twlight posted:

You might want to try the -I flag, it seems that the -i flag is for older devices.

Hurr. I probably should've checked the man page. So yeah, running 'hdparm -I' against a USB disk will indeed give you the serial number.

Edit:

code:
hdparm -I $disk | grep "Serial Number" | awk -F: '{print $2}' | tr -s ' '
Where $disk is the disk you care about.
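
Same idea with just awk, if you don't feel like chaining grep and tr (same $disk variable as above):

code:
hdparm -I "$disk" | awk '/Serial Number/ {print $NF}'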

juggalol fucked around with this message at 15:09 on May 14, 2009

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism
I'm having some trouble getting a permissions issue sorted out. I have a system running FreeNAS, which is hosting a RAID5 array.

I have another server running VMware ESX, and I'm looking to use part of the RAID server as storage space for the virtual machines.

The RAID server has one large RAID5 array, which is further split up into logical volumes (music, video, etc etc). I've created another volume for virtual machines, and I would like for this volume to be accessible only by the RAID server (so regular users accessing the music or videos can't get copies of the VM files).

I have the permissions set up on the FreeNAS server so that only the owner+group can read/write the volume, and nobody else can read it.

But the confusion comes in when I'm trying to mount this volume from the ESX server. Is it possible to specify a username on the server when mounting NFS? Meaning, from the ESX server, how can I specify the username/password for the FreeNAS server in order to read/write to the volume?

Or maybe there's a better way to do this and I'm just silly.

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

taqueso posted:

I think the UID and GIDs need to be the same on both systems when using NFS, and access is controlled by the ID numbers and not names.

Ah, this makes much more sense. Thanks. Only took a couple of minutes to get that set up, it was a lot easier than I expected.
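
For anyone else hitting this, the sanity check is literally just comparing id output on both ends and making sure the numeric uid/gid values line up (hypothetical user name here):

code:
$ id vmstorage     # run on both boxes - the uid= and gid= numbers must match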

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism
I have a SATA HDD that was once present in a RAID array. It's a disk that was used in our systems engineering lab, so it's been connected to some RAID card that I can't pinpoint.

When I plug the disk into a system, it shows up as being 1TB, but it's most definitely a 200GB disk.

From a bootable Knoppix DVD, I ran

code:
# dd if=/dev/zero of=/dev/sda bs=1024 count=1024
To zero-write the 1st MB of the disk, which, as far as I know, should've wiped out all of the metadata. The disk is still being detected as 1TB, though, so there's some low-level data that I'm not able to wipe off of the drive.

I'm currently doing a zero-write to the whole disk (just in case the metadata is beyond the first 1MB - I doubt this is the case, but what the hell, it's not going to hurt anything).

Any ideas? I can't seem to get this disk's size to be properly detected.


juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

mawrucre posted:

What do you mean "it shows up as being 1TB"? Where does it show that? What does dmesg and fdisk -lu say about the disk? What does the BIOS say?

Sorry, by "shows up as 1TB", I mean "fdisk -l" shows the disk as being 1TB in size. The BIOS seems to think it's a regular 200GB disk. I haven't tried "fdisk -lu"; I'll look at it and get back to you.

Edit:

A full zero-wipe of the disk with dd didn't fix it, but using dban did the trick. I'm not going to try and figure out why, I'm just glad that it's done with.

juggalol fucked around with this message at 15:25 on Jun 10, 2009
