|
I've been futzing around in Ubuntu 16 and I've broken sudo. Any command I prefix with sudo produces a blank new line that captures whatever I type without actually doing anything, until I hit Control-Z to kill the process. There are no errors, just an endless blank space until I kill it. I am deploying from an image, and all new instances launched from that image exhibit the same behavior. How do I fix this?
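For anyone hitting this later: when sudo hangs silently like that, the usual suspects are a lost setuid bit, a mangled /etc/sudoers, or hostname resolution stalling. A quick triage sketch, assuming stock Ubuntu paths (run it from a root shell via recovery mode or pkexec, since sudo itself is down):

```shell
# 1. setuid bit: should print "4755 root" on a healthy Ubuntu box
stat -c '%a %U' /usr/bin/sudo 2>/dev/null || echo "sudo binary missing?"

# 2. sudoers syntax check (needs root, hence the fallback message here)
visudo -c 2>/dev/null || echo "run 'visudo -c' as root to syntax-check /etc/sudoers"

# 3. sudo can stall for a long time if the local hostname doesn't resolve
h="$(uname -n)"
grep -q "$h" /etc/hosts 2>/dev/null || echo "add '127.0.1.1 $h' to /etc/hosts"
```

If all three come back clean, the problem is more likely in the shell environment than in sudo itself.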
|
# ¿ May 1, 2018 06:19 |
|
This is what I thought was a basic Apache/PHP server on AWS EC2 with some environment variables and aliases shoved into ~/.bashrc. I also installed some packages, whose names escape me, to allow colors for things like ls. I've done nothing explicit to sudo group membership, and everything was working fine for a few days through reboots and snapshot/AMI imaging. I'll try the uninstall/reinstall route and see where that gets me.
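Given that the only changes were ~/.bashrc edits and some color/alias packages, it's worth ruling out that an alias or shell function is shadowing sudo before rebuilding the image. A sketch (nothing here is specific to this setup):

```shell
# does anything shadow the real sudo? aliases/functions reported by 'type' run first
type sudo 2>/dev/null || echo "sudo not found in PATH at all"

# aliases from ~/.bashrc only load in interactive shells, so run this at your prompt:
alias sudo 2>/dev/null || echo "no sudo alias in this (non-interactive) shell"

# bypass aliases and functions entirely and poke the binary itself:
command -v sudo >/dev/null && command sudo -V | head -n 1 \
  || echo "the sudo binary itself won't run"
```

`command sudo ls` (or `\sudo ls`) at an interactive prompt will also bypass any alias, which makes for a quick A/B test.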
|
# ¿ May 1, 2018 15:58 |
|
How do I configure an Ubuntu server to register its hostname with Windows DNS (serverA) when it receives a DHCP address from a different server (serverB)? I have control over the DNS server but not the DHCP server, and both are AWS EC2 servers, fwiw. Googling is lousy with answers that assume DHCP and DNS are on the same Windows box, but that doesn't seem applicable here.
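For the record, the standard mechanism for a client registering itself is an RFC 2136 dynamic update, which the Ubuntu box can send with nsupdate. A sketch, assuming serverA accepts signed updates (TSIG key, or `nsupdate -g` for GSS-TSIG against AD-integrated DNS); the server, zone, hostname, and IP below are all placeholders:

```shell
# build an RFC 2136 update request; run it for real with:
#   nsupdate -k /etc/ddns.key update.txt    (plain TSIG)
#   nsupdate -g update.txt                  (GSS-TSIG, needs a Kerberos ticket)
cat > update.txt <<'EOF'
server serverA.corp.example.com
zone corp.example.com
update delete myhost.corp.example.com. A
update add myhost.corp.example.com. 300 A 10.0.1.25
send
EOF
grep -c '^update' update.txt   # prints 2
```

Hooked into a dhclient exit hook, this re-registers the name every time a new lease arrives.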
|
# ¿ Apr 25, 2021 04:48 |
|
SamDabbers posted:Typically you don't want non-domain-joined client machines to be able to update the DNS zone directly, so the DHCP server itself registers the host name the client provides to it with the DNS server using some sort of shared key authentication. Active Directory does streamline the plumbing between the Windows DHCP server and the AD DNS, but a similar setup is relatively straightforward between e.g. BIND and ISC dhcpd.

Thanks for the response. That was an interesting article, as it presents three options:

1. Configure the DHCP server to perform DNS registration on behalf of the clients.
2. Join the Linux devices to the AD domain and configure them to dynamically update.
3. Set up a new sub-domain running a dedicated Linux BIND server and configure DNS forwarding on the Microsoft DNS server.

A pity I'm looking at option two and the author never explores past the first option.
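Whichever option wins, the client still has to hand its hostname to serverB in the DHCP request for any server-side registration to carry the right name. If dhclient (rather than netplan/systemd-networkd) is driving the lease, that's one line of config:

```
# /etc/dhcp/dhclient.conf -- include this host's name in DHCP requests
send host-name = gethostname();
```

With netplan the rough equivalent is `dhcp4-overrides: {send-hostname: true}` on the interface, though the exact stack in use on a given AMI is worth checking first.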
|
# ¿ Apr 25, 2021 18:34 |
|
This is probably a really simple question if you know Linux, which I don't: what is the most direct route to getting a thousand images copied every hour from a Windows (AWS EC2) server onto an Amazon EFS file store mounted on a pool of Ubuntu servers?

I have an app server that generates approximately one thousand graph images every hour. It copies these images directly to an AWS FSx file share, which a pool of Windows IIS servers then serves to the internet. What I'd like to do is replace the Windows IIS web pool with an Ubuntu Apache web pool.

My problem is that since AWS EFS cannot be accessed from a Windows server directly, I need some way to copy the files from the Windows application server to the EFS file store mounted on the Ubuntu servers. How do I accomplish this? Windows-to-Windows is a simple robocopy job, but Windows-to-Ubuntu? I have no idea.
|
# ¿ Aug 15, 2021 06:17 |
|
Sheep posted:FSx for Windows (and FSx for Lustre, which isn't being used here) and EFS are not the same thing.

This is an hourly batch job.

quote:2. Install NFS client for Windows and mount the EFS share directly in the Windows machine: it's available in control panel/programs/turn Windows features on.

Amazon EFS is not supported on Windows instances.

quote:3. Install Samba on the Ubuntu machine and make a share available and reachable from the Windows machine.

Like the above poster suggested. This seems the most promising for my workload.

quote:4. Install WSL and use rsync.

This is interesting. I've not heard about this.

quote:If you really want to go the rsync route I have a version I built that doesn't require Cygwin, but frankly it's terrible.

Pablo Bluth posted:Windows now has OpenSSH shipped by default, so you should have access to scp as a way to copy files. Setup authorized_keys for passwordless access in a script (on a suitably low permission linux account), then write a simple script and schedule it to copy every hour. Use -u or --ignore-existing to avoid recopying old files.

I like this idea. It requires the least configuration of the destination web servers, and no jump box for the Windows share.
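A sketch of what that hourly push could look like from the Windows side, assuming the built-in OpenSSH client (Windows 10 / Server 2019 and later) and a low-privilege account on one Ubuntu host that mounts EFS; every path, user, and hostname here is a placeholder. One caveat: `-u`/`--ignore-existing` are rsync options, not scp ones, so this version leans on the app writing each hour's batch into a fresh directory instead:

```shell
# generate push-graphs.sh; schedule it hourly via Task Scheduler (or cron under WSL)
cat > push-graphs.sh <<'EOF'
#!/bin/sh
SRC="/c/app/graphs/current"                 # this hour's batch of ~1000 images
DEST="graphs@web-nfs01:/mnt/efs/graphs/"    # the one Ubuntu host that mounts EFS
exec scp -i /c/keys/webpush_ed25519 -o BatchMode=yes "$SRC"/*.png "$DEST"
EOF
chmod +x push-graphs.sh
head -n 1 push-graphs.sh    # prints: #!/bin/sh
```

BatchMode makes scp fail fast rather than prompt for a password, which is what you want in a scheduled job with key-based auth.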
|
# ¿ Aug 15, 2021 16:48 |
|
From experience as of yesterday, AWS EFS won't work on Windows: the Windows NFS client is implemented slightly differently than what AWS EFS expects. After hours of trying I couldn't make a Windows EC2 box see EFS, and this was confirmed in a support case I opened about this very thing. Hence my need for this ask.
|
# ¿ Aug 15, 2021 17:39 |
|
Ffycchi posted:Oh god please use NFS over samba. Yes, you can use NFS shares on windows.

This raises an interesting situation: I'd now be mounting an Amazon EFS filesystem on an Ubuntu EC2 instance, then re-exporting that mount point over NFS to a Windows EC2 host. Ugh. This sounds kludgy as hell.
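For completeness, the kludge itself is only an /etc/exports line on the Ubuntu middleman, though be warned that re-exporting an NFS client mount through the kernel NFS server is only semi-supported and an explicit fsid is mandatory. The path and subnet are placeholders:

```
# /etc/exports on the Ubuntu middleman: re-export the EFS mount to the Windows subnet
# fsid= is required for a re-export; run 'exportfs -ra' after editing
/mnt/efs  10.0.1.0/24(rw,sync,no_subtree_check,fsid=1)
```

Given the caveats, the scp push suggested above is probably the saner shape for this workload.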
|
# ¿ Aug 16, 2021 18:43 |
|
I have an Ubuntu EC2 instance with 8 GB RAM that is constantly running low on memory (<2% free) and I am having a hard time finding the process that is using it. Running "ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n" produces code:
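One wrinkle with that pipeline: it runs the ps header row through the arithmetic too, and sorts ascending, so the interesting processes end up at the bottom. A variant that skips the header and shows the top consumers first:

```shell
# resident set size (ps column 6, in KiB) per process, biggest first; NR>1 drops the header
ps aux 2>/dev/null | awk 'NR>1 {printf "%8.1f MB  %s\n", $6/1024, $11}' | sort -rn | head -15
```

Keep in mind RSS double-counts shared pages across processes, so the column won't sum to total RAM use.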
|
# ¿ Jan 26, 2022 13:47 |
|
vmstat results:code:
Which then means any physical memory alarms can be ignored unless other symptoms manifest themselves. Is that right?
|
# ¿ Jan 26, 2022 19:11 |
|
Bob Morales posted:What is setting the alarms? Some kind of monitoring software?

Yeah. My monitoring software is reporting ~98% memory use, and I was trying to figure out if I needed to resize this instance or if this was normal behavior. It looks like, since I have 8 GB of physical RAM on the instance and 7 GB of it is used for cache, I'm fine.
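The modern shortcut for exactly this question: the kernel publishes the number that matters as MemAvailable, i.e. memory that can be reclaimed (page cache included) before anything is at risk. An alert keyed off that, rather than off "free", avoids the false alarm:

```shell
# percent of RAM actually available (cache counts as reclaimable, not "used")
awk '/^MemTotal/ {t=$2} /^MemAvailable/ {a=$2} END {printf "%.0f%% of RAM available\n", 100*a/t}' /proc/meminfo
```

`free -m` on any reasonably recent Ubuntu shows the same figure in its "available" column.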
|
# ¿ Jan 26, 2022 19:28 |
|
I am building a web cluster using Ubuntu 22.04 and Apache2 for the front end(s) and a TrueNAS NFS share on the backend for the file store. I am trying to change ownership of the NFS share so the web servers can perform updates and file writes, but I'm bumping into a permissions issue. I'm running the following command

quote:sudo chown www-data:www-data /nfs/web/files/

and I'm getting

quote:chown: changing ownership of '/nfs/web/files': Invalid argument

I'm reading that the invalid-argument error stems from it being an NFS share mounted locally, but I'm not sure how to rectify it. Can someone help point me in the right direction?
|
# ¿ Feb 6, 2024 00:07 |
|
Saukkis posted:Use 'id www-data' to check what are the UID and GID and change the ownership to those in the TrueNAS.

...and how do I go about changing ownership on the TrueNAS side? I assume there's some kind of mapping function that'll assign a user or group on the Ubuntu boxes to a user on the TrueNAS box?
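The first half of that is just (your IDs may differ, so check rather than assume):

```shell
# the numeric IDs the web servers write with; on Debian/Ubuntu www-data is
# usually uid/gid 33
id www-data 2>/dev/null || echo "no www-data user on this box"
```

NFSv3 only ever sees those numbers, so on the TrueNAS side you can either chown the dataset to them from its shell (e.g. `chown -R 33:33 /mnt/tank/web/files`, where the pool/dataset path is a placeholder), or set the NFS share's Maproot/Mapall user to an account with that UID so client writes land as it.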
|
# ¿ Feb 6, 2024 00:36 |