Super-NintendoUser
Jan 16, 2004

COWABUNGERDER COMPADRES
Soiled Meat

Ashex posted:

linerx help

Thanks, I'll look into the daap server stuff. And I appreciate the code snippet for the samba configuration.

Daap looks pretty cool. It looks like I can use that to share my music, and then use Samba to share pictures, videos, and other files right?

What are the advantages of Daap over just Samba for music, though? I see that it will transcode, but I don't really need that, unless it will encode those stupid m4a files as MP3s and remove the Apple lock.

Super-NintendoUser fucked around with this message at 12:58 on Sep 4, 2008

Alowishus
Jan 8, 2002

My name is Mud

Steppo posted:

Is using symbolic links habitually a good practice? If not, would using it in this case be an exception? I doubt that there's enough demand on these documents to create some creepily absurd CPU overhead, with links going this way and that, and it does seem to be the most secure method, shy of FIXING THE MOTHERFUCKING CODE.

I don't see a technical problem with doing the symlinks in this situation. They shouldn't really cause much in the way of CPU overhead, maybe just a bit more disk activity. The biggest disadvantage is the massive administrative overhead that it will cause you. I guess you could set up a cron job that scans for any new documents and auto-symlinks them to the root... only potential problem there would be name collisions.
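Something like this rough sketch could run nightly (the paths and file pattern are made up for illustration, not from your setup):
code:
#!/bin/bash
# cron job: symlink any new documents into the web root
# NOTE: /srv/documents and /var/www/root are hypothetical paths
find /srv/documents -type f -name '*.pdf' | while read -r doc; do
    target="/var/www/root/$(basename "$doc")"
    # skip on name collision instead of clobbering an existing link
    [ -e "$target" ] || ln -s "$doc" "$target"
done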

lord funk
Feb 16, 2004

What's the best way to make sure Apache isn't running? Should I 'turn it off' (and how would I do that) or should I just uninstall it?

We don't do any Web-server stuff, so it isn't really needed in the install. (Running Debian)

Accipiter
Jan 24, 2004

SINATRA.

lord funk posted:

Should I 'turn it off' or should I just uninstall it?

We don't do any Web-server stuff, so it isn't really needed in the install.

Question asked, question answered.

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism
Edit: ^ Yeah, uninstalling Apache is the better route if you don't ever need it. No reason to keep it around if you never use it.

lord funk posted:

What's the best way to make sure Apache isn't running? Should I 'turn it off' (and how would I do that) or should I just uninstall it?

We don't do any Web-server stuff, so it isn't really needed in the install. (Running Debian)

You should be able to turn it off using the apache init script. I don't run Debian, but it ought to be something like

code:
# /etc/init.d/apache stop
And to prevent it from starting at boot,

code:
# update-rc.d -f apache remove
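On Debian the Apache package is usually named apache2, so the actual commands may be closer to:

code:
# /etc/init.d/apache2 stop
# update-rc.d -f apache2 remove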

Mr. Eric Praline
Aug 13, 2004
I didn't like the others, they were all too flat.

Megaman posted:

My mistake then, what would be the best encryption solution from linux to windows? sftp?

Samba sends the password encrypted, so you're fine if the data isn't sensitive.

Megaman
May 8, 2004
I didn't read the thread BUT...

chryst posted:

Samba sends the password encrypted, so you're fine if the data isn't sensitive.

In my case it is, so I guess I'll just stick with sftp.

lord funk
Feb 16, 2004

Accipiter posted:

Question asked, question answered.

I thought so. But I didn't install the OS on this machine, and I'm looking for the quick fix.

SynVisions
Jun 29, 2003

Megaman posted:

In my case it is, so I guess I'll just stick with sftp.

You can also tunnel samba through SSH or a VPN if you want the convenience of samba but with encryption.
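Something along these lines, for example (the hostname and local port here are just illustrative):
code:
# forward local port 1445 to the SMB port on the file server
ssh -f -N -L 1445:localhost:445 user@fileserver
# then point the SMB client at localhost port 1445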

other people
Jun 27, 2004
Associate Christ
I am not having fun right now.

I need to get a 5gb file onto my mac from my linux computer. When I transfer it over the network the file doesn't work and the md5 hashes don't match. I have no idea why.

So my second choice is to copy the file onto an 8gb ipod and transfer it with that. The problem is what file system do I use? Linux and the mac can both read UDF, so I have formatted the ipod with mkudffs.

I've used mkudffs to format sparse files and mount them without issues before. When I try to mount the ipod after formatting with mkudffs (which seems to work fine itself), I get this error:
code:
UDF-fs: No VRS found.

I have no idea how to get around this. I don't have another network cable, the mac won't read DVD-DL discs. I can't use a fat file system because it has a 2gb file limit, as does HFS.


edit: Apparently there are linux kernel drivers and tools to use and create HFS+ file systems now, and they actually work!
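For anyone searching later, the userland tools come from the hfsprogs package, and usage is roughly this (the device node is illustrative):
code:
# format the partition as HFS+ with a volume label
mkfs.hfsplus -v iPod /dev/sdb2
# mount it with the in-kernel hfsplus driver
mount -t hfsplus /dev/sdb2 /mnt/ipod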

other people fucked around with this message at 01:41 on Sep 5, 2008

Magicmat
Aug 14, 2000

I've got the worst fucking attorneys
Quick question, I'm trying to make, by hand, the package open-vmware-tools on Ubuntu. During ./configure it complained about not finding the library liburiparser, so I used synaptic to install liburiparser-dev (which also pulled in liburiparser itself.) However, ./configure still complains about not finding it. The library is installed in /usr/lib/liburiparser.so, which seems fairly standard, and the headers are in /usr/include/uriparser/. I assume it's not finding the headers? Here's the exact error I get:
code:
checking for uriFreeQueryListA in -luriparser... no
configure: error: uriparser library not found or is too old. Please configure without Unity (using --disable-unity) or install the liburiparser devel package.

Magicmat
Aug 14, 2000

I've got the worst fucking attorneys
Edit: Double post

Ashex
Jun 25, 2007

These pipes are cleeeean!!!

Magicmat posted:

Quick question, I'm trying to make, by hand, the package open-vmware-tools on Ubuntu. During ./configure it complained about not finding the library liburiparser, so I used synaptic to install liburiparser-dev (which also pulled in liburiparser itself.) However, ./configure still complains about not finding it. The library is installed in /usr/lib/liburiparser.so, which seems fairly standard, and the headers are in /usr/include/uriparser/. I assume it's not finding the headers? Here's the exact error I get:
code:
checking for uriFreeQueryListA in -luriparser... no
configure: error: uriparser library not found or is too old. Please configure without Unity (using --disable-unity) or install the liburiparser devel package.

Try running ./configure with --help. There should be an option to specify where to look for libraries.
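With most autoconf scripts you can also pass the paths straight on the command line, something like this (the exact directories depend on where the library actually landed):
code:
./configure CPPFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib"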

Magicmat
Aug 14, 2000

I've got the worst fucking attorneys

Ashex posted:

Try running ./configure with --help. There should be an option to specify where to look for libraries.

I tried that first, but I couldn't find any obvious ones. I tried setting the CPPFLAGS environment variable to point to /usr/include/uriparse, but then it complains about not finding Uri.h. Here is the output, if you can find a better argument I'm missing.

Edit: Tried --includedir=/usr/include/uriparser with the same results as above.

Edit2: OK, I think I have it figured out. I had to manually install a newer version of uriparser than was included with Ubuntu, and use the --includedir command above. Now it's complaining about other packages, but at least we're making progress.

Magicmat fucked around with this message at 04:30 on Sep 5, 2008

Accipiter
Jan 24, 2004

SINATRA.

Magicmat posted:

Quick question, I'm trying to make, by hand, the package open-vmware-tools on Ubuntu. During ./configure it complained about not finding the library liburiparser, so I used synaptic to install liburiparser-dev (which also pulled in liburiparser itself.) However, ./configure still complains about not finding it. The library is installed in /usr/lib/liburiparser.so, which seems fairly standard, and the headers are in /usr/include/uriparser/. I assume it's not finding the headers? Here's the exact error I get:

code:
checking for uriFreeQueryListA in -luriparser... no
configure: error: uriparser library not found or is too old. Please configure without Unity 
(using --disable-unity) or install the liburiparser devel package.

The problem here isn't necessarily that it can't find the library. The problem is that it can't find the specific uriFreeQueryListA function in the library.

What version of uriparser did you install?
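You can check whether the installed library actually exports that symbol with nm (from binutils):
code:
nm -D /usr/lib/liburiparser.so | grep uriFreeQueryListA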

EDIT:

Magicmat posted:

OK, I think I have it figured out. I had to manually install a newer version of uriparser than was included with Ubuntu, and use the --includedir command above. Now it's complaining about other packages, but at least we're making progress.

Ding.

Accipiter fucked around with this message at 17:25 on Sep 5, 2008

vanjalolz
Oct 31, 2006

Ha Ha Ha HaHa Ha
I'm trying to set up NAT on my solaris box with ipf.

Apparently I'm meant to use ipnat to create the NAT rules, which is a bit cryptic but seems to work fine.
The problem is that I can't figure out how to connect over ssh when I have NAT enabled :|

I have two interfaces: rtls0(internal) and elxl0(external)

My ipf.conf is just allow all in/out.
My ipnat.conf is:
code:
map elxl0 192.168.0.0/24 -> 0/32 portmap tcp/udp 10000:40000
map elxl0 192.168.0.0/24 -> 0/32
rdr rtls0 0.0.0.0/0 port ssh -> 127.0.0.1 port ssh

I saw map elxl0 192.168.0.0/24 -> 0/32 proxy port ftp ftp/tcp as an example so I thought I'd adjust it to map elxl0 192.168.0.0/24 -> 0/32 proxy port ssh ssh/tcp but that didn't help.

Ideally I'd like all internal connections to go to the server (samba, ssh, whatever else) without having to create special rules.

I think the problem might be related to my subnet - both interfaces have a 192.168.0.x ip.

Accipiter
Jan 24, 2004

SINATRA.

vanjalolz posted:

I'm trying to set up NAT on my solaris box with ipf.

Ehh, this is a Linux questions thread, but I can try to help.

Are you trying to connect to the NAT box itself via SSH, or to a box behind it?

vanjalolz
Oct 31, 2006

Ha Ha Ha HaHa Ha
I'm connecting to the actual NAT box with SSH, and it won't let me in.
I figured that since ipf is in FreeBSD, someone might know in a Unix-flavoured thread.

vanjalolz fucked around with this message at 02:19 on Sep 6, 2008

mcsuede
Dec 30, 2003

Anyone who has a continuous smile on his face conceals a toughness that is almost frightening.
-Greta Garbo
I had a hard reboot happen during a run of sfill and now I've got full drives. Any ideas?

EVGA Longoria
Dec 25, 2005

Let's go exploring!

What's the lightest weight feature-complete GUI browser? I don't want to have to toss CSS to get better performance. HTML/CSS support is a must; some form of adblock is icing on the cake.

dont skimp on the shrimp
Apr 23, 2008

:coffee:

Casao posted:

What's the lightest weight feature-complete GUI browser? I don't want to have to toss CSS to get better performance. HTML/CSS support is a must; some form of adblock is icing on the cake.

Check out NetSurf, it might be what you're looking for. I think Dillo ignores CSS.

Accipiter
Jan 24, 2004

SINATRA.

vanjalolz posted:

I'm connecting to the actual NAT box with SSH, and it won't let me in.

Instead of redirecting it to localhost, try redirecting it to a physical interface of the NAT box.
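i.e. something like this in ipnat.conf (the internal address is a guess, use whatever rtls0 actually has):
code:
rdr rtls0 0.0.0.0/0 port 22 -> 192.168.0.1 port 22 tcp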

other people
Jun 27, 2004
Associate Christ
I have a strange issue here.

I download a large file to my "server" pc (linux!) because it is the only thing on my network that isn't wireless. I then copy the file to a mac laptop via nfs. I have some trouble reading the file, so I did an md5 check and the values on my server and the mac do not match. I then tried connecting the mac with some cat5 and copied the file again. The md5 sum still does not match. Both the copies on the mac get the same md5 sum though.

Then I copied the file from the server to a flash drive, and back off onto the mac. This time the file "works" and gets the same md5 sum as the server.

So obviously something fishy is going on with my network, but what? The file got on my server via my cable modem, router, cat5. When it arrives via wireless on the mac it is going from server, cat5, router, to the mac. It isn't corrupt on the server, so the cable modem and router can't be malfunctioning, and the cat5 cable must be good for the same reason.

So either both the wired and wireless cards in my macbook are wonky, or NFS is doing something wrong?

I don't really claim to know a whole lot about the details of networking, so maybe there is a much more simple solution to this? I sure hope so!

dont skimp on the shrimp
Apr 23, 2008

:coffee:

Kaluza-Klein posted:

I have a strange issue here.

I download a large file to my "server" pc (linux!) because it is the only thing on my network that isn't wireless. I then copy the file to a mac laptop via nfs. I have some trouble reading the file, so I did an md5 check and the values on my server and the mac do not match. I then tried connecting the mac with some cat5 and copied the file again. The md5 sum still does not match. Both the copies on the mac get the same md5 sum though.

Then I copied the file from the server to a flash drive, and back off onto the mac. This time the file "works" and gets the same md5 sum as the server.

So obviously something fishy is going on with my network, but what? The file got on my server via my cable modem, router, cat5. When it arrives via wireless on the mac it is going from server, cat5, router, to the mac. It isn't corrupt on the server, so the cable modem and router can't be malfunctioning, and the cat5 cable must be good for the same reason.

So either both the wired and wireless cards in my macbook are wonky, or NFS is doing something wrong?

I don't really claim to know a whole lot about the details of networking, so maybe there is a much more simple solution to this? I sure hope so!

You could try copying it with scp, ftp or sftp and see if you have the same problem. I'd think it's NFS that's causing the problem.

other people
Jun 27, 2004
Associate Christ

Zom Aur posted:

You could try copying it with scp, ftp or sftp and see if you have the same problem. I'd think it's NFS that's causing the problem.

Copying via scp as I type this. It is quite a bit slower.

What do you think NFS could be doing? I was under the impression it was a pretty well vetted program/protocol?

edit: It looks like scp got it right. The md5 hash matches the server.

The kernel on my server is 2.6.26 and has NFS v3 compiled in. I have nfs-utils 1.1.3. I have no idea what the nfs is like on my mac but it is 10.5.2.

other people fucked around with this message at 03:54 on Sep 8, 2008

juggalol
Nov 28, 2004

Rock For Sustainable Capitalism

mcsuede posted:

I had a hard reboot happen during a run of sfill and now I've got full drives. Any ideas?

As root, can you run something like "du -b | sort -gr" from / ? That should give you the disk usage in bytes and organize it from biggest to smallest - it ought to at least give you some idea about where the most space is being taken up.
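Something like this, roughly (assuming GNU du; -x stops it from crossing into other mounted filesystems):
code:
cd / && du -xb | sort -gr | head -n 20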

Megaman
May 8, 2004
I didn't read the thread BUT...
I know this is a Linux thread, but I have a UNIX question. In Linux, -h for du or df gives human-readable output. Is there a UNIX equivalent without using a shell or perl script?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I'm using Solaris and it's also -h in the SVR4 version. At least since Solaris 9.

If you're using something older or different than that, there should be -k for displaying everything in 1KB blocks or --block-size=x to specify your own. That's the closest you'll get.
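So on an older box it'd be something like:
code:
df -k        # filesystem usage in 1 KB blocks
du -sk *     # per-directory totals, also in KB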

Mr. Eric Praline
Aug 13, 2004
I didn't like the others, they were all too flat.

Kaluza-Klein posted:

Copying via scp as I type this. It is quite a bit slower.

What do you think NFS could be doing? I was under the impression it was a pretty well vetted program/protocol?

edit: It looks like scp got it right. The md5 hash matches the server.

The kernel on my server is 2.6.26 and has NFS v3 compiled in. I have nfs-utils 1.1.3. I have no idea what the nfs is like on my mac but it is 10.5.2.

How big is the file? You may need to change some of your NFS mount options if the file is bigger than 2G, and/or your wireless is unstable.
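On the mac side that'd mean mount options along these lines (option names from memory, check mount_nfs(8) before trusting them):
code:
# force TCP and larger read/write sizes for the NFS mount
mount -t nfs -o tcp,rsize=32768,wsize=32768 server:/export /Volumes/share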

other people
Jun 27, 2004
Associate Christ

chryst posted:

How big is the file? You may need to change some of your NFS mount options if the file is bigger than 2G, and/or your wireless is unstable.

The file is bigger than 2G. I was going to mention that the same mac plays mp3s off the same server and never seems to have any trouble. The mp3s sound fine.

The wireless appears to be stable. I never have trouble with the mp3s cutting out or anything. I have a Buffalo WRG-G125 with tomato firmware. Also, I still get the corruption when I plug in directly, so it can't really be the wireless then.

What can I do about the NFS mount options? The exports are all (ro,secure,async,no_subtree_check).

I found an old linux kernel mailing list post about NFS corruption and the person "fixed" it by switching from UDP to TCP transfers. I just tried forcing the mac to make a tcp connection and it still produces a file that does not have a matching hash.

Mr. Eric Praline
Aug 13, 2004
I didn't like the others, they were all too flat.

Kaluza-Klein posted:

What can I do about the NFS mount options? The exports are all (ro,secure,async,no_subtree_check).

Try changing async to sync, and see if it helps. Performance will drop, and it's only supposed to matter if the server connection is lost. Maybe there's some I/O issue with the mac implementation.

The other thing is to try transferring some smaller files and checking the MD5 on those.

If it's not those, I'm out of ideas.

other people
Jun 27, 2004
Associate Christ

chryst posted:

Try changing async to sync, and see if it helps. Performance will drop, and it's only supposed to matter if the server connection is lost. Maybe there's some I/O issue with the mac implementation.

The other thing is to try transferring some smaller files and checking the MD5 on those.

If it's not those, I'm out of ideas.

Well, async vs sync didn't seem to make a difference.

I switched from async to sync in the server exports file. I have a fancy program on the mac that exposes lots of nfs mount options, two of which seem to deal with sync/async. Is it possible it is still connecting with async somehow? How do I really know?

covener
Jan 10, 2004

You know, for kids!

Kaluza-Klein posted:

Well, async vs sync didn't seem to make a difference.

I switched from async to sync in the server exports file. I have a fancy program on the mac that exposes lots of nfs mount options, two of which seem to deal with sync/async. Is it possible it is still connecting with async somehow? How do I really know?

Do NFS daemons allow you to disable mmap or sendfile? In the Apache HTTP Server world these can both be a source of flakiness from platform to platform.

EVGA Longoria
Dec 25, 2005

Let's go exploring!

chryst posted:

Try changing async to sync, and see if it helps. Performance will drop, and it's only supposed to matter if the server connection is lost. Maybe there's some I/O issue with the mac implementation.

The other thing is to try transferring some smaller files and checking the MD5 on those.

If it's not those, I'm out of ideas.

Just for reference, the Mac implementation of both SMB and NFS absolutely blows chunks. Out of date, bad default settings - these are just the start of it.

Lucien
May 2, 2007

check it out i'm a samurai ^_^

Kaluza-Klein posted:

Is it possible it is still connecting with async somehow? How do I really know?

code:
$ cat /etc/mtab
Wait, for the Mac I don't know.
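Plain mount with no arguments should also list the active mounts with their options on both systems:
code:
$ mount | grep nfs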

Mr. Eric Praline
Aug 13, 2004
I didn't like the others, they were all too flat.

Kaluza-Klein posted:

Well, async vs sync didn't seem to make a difference.

I switched from async to sync in the server exports file. I have a fancy program on the mac that exposes lots of nfs mount options, two of which seem to deal with sync/async. Is it possible it is still connecting with async somehow? How do I really know?

You can specify it on the client as well as the server. Someone posted that the Mac implementation of NFS sucks, and I don't doubt that's your problem. All kinds of NFS weirdness happens when using different server/client builds.

covener posted:

Do NFS daemons allow you to disable mmap or sendfile? In the Apache HTTP Server world these can both be a source of flakiness from platform to platform.

Neither mmap nor sendfile applies to NFS.

Shazzner
Feb 9, 2004

HAPPY GAMES ONLY

Can someone help me out? I'm trying to create a bash script that will take an openoffice file, copy it to my documents, rename it "ClassNotes-<current date>", and then open it up.

The part I'm having the most trouble with (I'm mostly a linux noob) is getting the date, renaming the file, and then opening it up in open office.

Lucien
May 2, 2007

check it out i'm a samurai ^_^

Shazzner posted:

Can someone help me out? I'm trying to create a bash script that will take an openoffice file, copy it to my documents, rename it "ClassNotes-<current date>", and then open it up.

The part I'm having the most trouble with (I'm mostly a linux noob) is getting the date, renaming the file, and then opening it up in open office.

Here, try this on for size:
code:
#!/bin/bash
# copy the given file into ~/Documents as ClassNotes-<today's date>, then open it
newfile="$HOME/Documents/ClassNotes-$(date +%Y-%m-%d).odt"
cp "$1" "$newfile"
ooffice "$newfile"
Save this in /usr/local/bin/YOURSCRIPTNAME and call like so:
code:
$ YOURSCRIPTNAME FILENAME
Make sure the file is executable and the path in the script is correct.

rugbert
Mar 26, 2003
yea, fuck you
Has anyone used the balance program?

I have a web server that's going down when it's Drudged, and I think it would be cool if I could build a load balancer for more than a thousand dollars less than buying one.
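From the docs the invocation looks roughly like this (the backend addresses are made up):
code:
# round-robin incoming port 80 connections across two backend web servers
balance -f 80 10.0.0.10 10.0.0.11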

Twlight
Feb 18, 2005

I brag about getting free drinks from my boss to make myself feel superior
Fun Shoe

Lucien posted:

Here, try this on for size:
code:
#!/bin/bash
# copy the given file into ~/Documents as ClassNotes-<today's date>, then open it
newfile="$HOME/Documents/ClassNotes-$(date +%Y-%m-%d).odt"
cp "$1" "$newfile"
ooffice "$newfile"
Save this in /usr/local/bin/YOURSCRIPTNAME and call like so:
code:
$ YOURSCRIPTNAME FILENAME
Make sure the file is executable and the path in the script is correct.

I just did something like this. It's best to take the output of date into a variable.

code:
today="$(date +%Y-%m-%d)"
newfile="$today.yourfilename.whatever"   # e.g. 2008-09-08.yourfilename.whatever
