|
Hello again! Last time I checked in, the thread helped me figure out a plan to transition away from our in-house email/file share/FTP server and move onto Microsoft 365 cloud services. I want to share how it's going, and ask about a problem.

Success! The transition from hMailServer to Exchange Online went great! The setup was nice and easy. If I accomplish nothing else, I will be happy we offloaded the email server.

Failure: The transition from our self-hosted VPN file share (~400-600GB) to SharePoint Online (SPO) is looking... ugly. I was ready to embrace SPO's weird hub-site-library structure, but as I understand it, SPO will create a file janitoring nightmare. SPO has versioning on by default, with no option to disable it, and a minimum major version count of 100 (default 500). Also, SPO's versioning non-incrementally backs up your files on every save or autosave. Microsoft literature talks about "shredded storage" in their DB, BUT that doesn't result in any space savings against the SPO storage quota. Anecdotally, I've tested this and watched files double, triple, and quadruple in SPO storage size with each save operation. Our on-premises backup is incremental, so I wasn't anticipating this problem.

This seems really bad. So 1TB in SPO is about 100GB, or less, of real storage if you actively work with your files? And the storage growth will need to be constantly monitored and adjusted with quotas, plus constant pruning of versions? Can anyone using SPO corroborate that it requires a large ongoing maintenance commitment? I didn't expect it to be maintenance-free, but this seems very fiddly, unless you're paying $XXXX for extra SPO storage.

E: Like, I can see there being a use case for SPO, but it seems really narrow. You need an organization that only actively works on a small number of small files. Our project folders include a mix of photos, drawings, program files, PDFs, and documents. Maybe if I assume the larger files, like PDFs or photos, are updated infrequently, then it all tends to work out? I'd love to hear from an experienced SPO admin.

Andenno fucked around with this message at 19:24 on Sep 1, 2021 |
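The back-of-envelope math behind that "1TB is really ~100GB" estimate can be sketched like this (assumption, based on my observation rather than documented Microsoft behavior: every save stores a full copy that counts against quota):

```python
# Rough model of SPO quota consumption under full-copy versioning.
# ASSUMPTION (observed, not official): each save stores a complete
# copy of the file, and every retained version counts against quota.

def quota_used_gb(file_size_gb, saves, max_versions=500):
    """Quota consumed by one file after `saves` save operations."""
    return file_size_gb * min(saves, max_versions)

# A 1 GB file saved 20 times eats 20 GB of quota under this model:
print(quota_used_gb(1, 20))  # 20
```

With the 100-version minimum, an actively edited file can sit at up to 100x its own size in quota, which is roughly where the 10-to-1 effective-storage estimate comes from.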
# ? Sep 1, 2021 18:49 |
|
|
Alright, this is probably the best place to ask this question. I am the systems admin for a small ad agency and we recently migrated everything to AWS. We occasionally ingest very, very large video files. I have a Netgear ReadyNAS on a fiber connection that uploads these files to an S3 bucket. However, I want to move the live data they're working on to WorkDocs, which AWS has a built-in way of doing: https://docs.aws.amazon.com/workdocs/latest/adminguide/migration.html However, I have followed the instructions there and I get an error when starting a migration that the IAM role is not configured correctly. Strangely, the policy that they want you to use throws this error when you add it via JSON: "Invalid Action: The action workdocs:UpdateUserAdministrativeSettings does not exist". I have looked all over and I can't seem to find anyone else running into this problem or any solutions to it.
|
# ? Sep 1, 2021 20:40 |
|
Valt posted:Alright this is probably the best place to ask this question. I am the systems admin for a small ad agency and we recently migrated everything to AWS. We occasionally ingest very very large video files. I have a Netgear ReadyNas that is on a fiber connection that uploads these files to an S3 bucket. However I am wanting to move live data they are needing to workdocs. Which AWS has a built in way of doing this: Whatever tool they are running on the backend to generate the IAM policy JSON is wrong. As the error message indicates, the UpdateUserAdministrativeSettings action doesn't exist. You can try looking at the UpdateUser action and see if it's the thing doing what you want to do. This one might help too. In instances like this, if you can't find correct documentation, you typically have to do lots of tedious testing to stay secure. Give yourself workdocs:* and prove that it works. Then give yourself specific subsets of the workdocs actions, and every time it fails, note which permission it says it's missing. Then add that permission and try again. Happiness Commando fucked around with this message at 01:05 on Sep 2, 2021 |
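For what it's worth, the "prove it works wide open, then narrow" step might start from a throwaway test policy like this (a sketch for diagnosis only, not something to leave attached to the role):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "WorkDocsWideOpenForTesting",
      "Effect": "Allow",
      "Action": "workdocs:*",
      "Resource": "*"
    }
  ]
}
```

Once the migration succeeds with the wildcard, swap it for the specific actions surfaced by each access-denied error, re-testing after each change.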
# ? Sep 2, 2021 01:03 |
|
Andenno posted:Hello again! Last time I checked in, the thread helped me figure out a plan to transition away from our in-house email/file share/FTP server, and move onto Microsoft 365 cloud services. I want to share how it's going, and ask about a problem. Nah, I don't see much more file janitor poo poo on SPO compared to a traditional file share. I'll say I find permissions setup more of a pain in the tuches, but once you have the system down it's at least repeatable. If you want to disable versioning tenant-wide, try the PowerShell here: https://www.sharepointdiary.com/2018/08/sharepoint-online-powershell-to-disable-versioning.html code:
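(The quoted snippet didn't survive; a sketch of the same idea using PnP PowerShell cmdlets, assuming admin rights on the site, with the tenant-wide loop over every site left out:)

```powershell
# Sketch, not the linked script: turn off versioning on every document
# library in one site via PnP PowerShell. Tenant-wide would wrap this
# in a loop over Get-PnPTenantSite.
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/Projects" -Interactive

Get-PnPList | Where-Object { $_.BaseType -eq "DocumentLibrary" } | ForEach-Object {
    Set-PnPList -Identity $_ -EnableVersioning $false
}
```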
Dans Macabre fucked around with this message at 04:40 on Sep 2, 2021 |
# ? Sep 2, 2021 04:37 |
|
SharePoint has no concept of a Deny permission, so you have to break inheritance every time someone wants a specific folder to be an exception to the permissions set on the rest of the site.
|
# ? Sep 2, 2021 09:07 |
|
Happiness Commando posted:Whatever tool they are running on the backend to generate the IAM policy JSON is wrong. As the error message indicates, the UpdateUserAdministrativeSettings action doesn't exist. Yeah, I'm not quite sure why this script needs to do the UpdateUser anyway. Just out of curiosity, I removed that line from the JSON, and now it will actually let me proceed with the migration. However, the migration fails with the error "not authorized to perform: workdocs:UpdateUserAdministrativeSettings on resource (Service: AmazonWorkDocs; Status Code: 404; Error Code: UnauthorizedResourceAccessException;". So I guess part of this DataSync process requires it for some reason?
|
# ? Sep 2, 2021 16:19 |
|
Yeah, that's weird, because that isn't a documented API action. Sometimes there are actions that use a permission with a different name: s3:HeadObject needs the s3:GetObject permission, for example. You can try adding the workdocs:UpdateUser action like the other poster suggested, or you can try just applying the policy as generated and ignoring the error.
|
# ? Sep 2, 2021 16:27 |
|
Guy Axlerod posted:Yeah, that's weird because that isn't a documented API action. Sometimes there are actions that use a permission with a different name, like s3:headobject needs the s3:getobject permission. Which I would do, but when you leave that line in, the migration page flags the role as misconfigured. But of course I just added that line back in and now it lets me progress with the migration?!?!?! Now it just seems to be working? Really unsure what was wrong before, as I was literally just copying and pasting that JSON from one page to another.
|
# ? Sep 2, 2021 16:35 |
|
Thanks Ants posted:SharePoint having no concept of a Deny permission so you have to break inheritance every time someone wants a specific folder to be an exception to the permissions set on the rest of the site Sharepoint basically wants all the items in a library to have the same permissions (you can still find people insisting that imposing folder hierarchies on libraries is misguided). If you bear that in mind when you plan things out, you'll have an easier time. e: that's not to say that different groups can't have differing permissions, rather that things are consistent throughout the library. Albinator fucked around with this message at 19:27 on Sep 2, 2021 |
# ? Sep 2, 2021 18:33 |
|
Does anybody have a recommendation for a webserver that is known to allow streaming multipart file uploads with larger-than-memory size limits? I don't care if it's specifically WebDAV or not; I just want something that will allow faster-than-sftp file transfer over high-latency, high-bandwidth connections. The uploading clients won't be browsers, just Linux computers. It doesn't have to be HTTP either, but HTTPS seems like an easy solution for what I'm trying to do.

We're using SFTP right now and it sucks, and FTPS would be a real pain too because of its port negotiation mechanism. Over fast (1 or 10gbit) internet connections with latency between hosts of over 100ms, SSH and anything tunneled over SSH, like sftp, is dog slow. There's a set of patches to OpenSSH that could be installed on hosts, but it's a real pain to ask people to patch their OpenSSH.

What I want is to be able to throw 400GB-5TB over an encrypted channel, from a host that can't accept incoming connections to one that can, at more than 1/4 of line speed between them. Resuming would obviously be very nice to have as well, but given how fast the connections are, it's not critical.

I was hoping it'd be easy to configure nginx to do this, and just throw things up with curl --upload-file or -F. From reading the documentation, though, it looks like I'm mistaken. Can any of the big webservers be configured to do this? Nginx, Apache, Caddy, something that I'm missing? Is there a mature software solution I could use for this other than those? Poking around, I only saw some tiny projects like https://github.com/daggerok/streaming-file-server, or I could roll my own solution with NodeJS or Java, but I was really hoping to be able to do this with a plain old webserver.
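For the plain-PUT-over-HTTPS variant, nginx's stock dav module can act as an upload target without a backend app; request bodies are spooled to disk, not RAM, so larger-than-memory files are fine (a sketch, assuming the hostname and paths; resume is not supported, and ngx_http_dav_module must be compiled in, as it is in most distro builds):

```nginx
# Sketch: nginx as an HTTPS PUT target via ngx_http_dav_module.
server {
    listen 443 ssl;
    server_name uploads.example.com;          # hypothetical host
    ssl_certificate     /etc/nginx/tls/fullchain.pem;
    ssl_certificate_key /etc/nginx/tls/privkey.pem;

    location /incoming/ {
        root /srv;                            # files land under /srv/incoming/
        dav_methods PUT;
        create_full_put_path on;
        client_max_body_size 0;               # 0 = no size cap
        auth_basic "uploads";
        auth_basic_user_file /etc/nginx/htpasswd;
    }
}
```

A sending host would then be `curl -T bigfile.bin https://user:pass@uploads.example.com/incoming/`, which avoids SSH's flow-control bottleneck on high-latency links.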
|
# ? Sep 4, 2021 19:43 |
|
Thanks for the warnings and tips on SharePoint. I was worried about creating a big timesink for myself. I think I'll do a work project or two in SharePoint and see how it goes before I try to move everything over.

Weird customer service story: even if you want to use your own network equipment, Spectrum requires you to use their modem and router, in bridge mode, if you want static IP addresses. Five out of six Spectrum employees insisted that my business does not get to access that modem and router. After a second day of calling Spectrum every time I wanted to change a setting, the sixth employee said, "no, that's silly, why would we want you to call us every time you need to change a setting?" She walked me through logging in. Baffling, but still miles ahead of AT&T support.
|
# ? Sep 7, 2021 15:28 |
|
Speaking of weird customer service... I have a client working with an MSP where the MSP will not reopen tickets, ever, as a matter of policy. Like, they apparently disabled that option in ServiceNow. If a tech closes a ticket, the only thing to do is open a brand new ticket. I've never worked with any place like this... usually you reply to the ticket to reopen it, right? Is this some annoying thing done to make the metrics look good?
|
# ? Sep 8, 2021 00:55 |
|
I wish more helpdesks would make it easy to set a “resolved” state when the ticket is complete, send out the notifications or whatever, and then close it a few days later and remove the option to re-open at that time. It’s the only reliable way to prevent someone just replying to their last case update when they have a new unrelated issue.
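The lifecycle described above, resolve, notify, grace period, then hard close with no reopening, can be sketched as a tiny state machine (illustrative only, not any particular helpdesk's API; the three-day window is an assumption):

```python
from datetime import datetime, timedelta

REOPEN_WINDOW = timedelta(days=3)  # assumed grace period

class Ticket:
    """Toy model of the resolve -> grace period -> hard close flow."""

    def __init__(self):
        self.state = "open"
        self.resolved_at = None

    def resolve(self, now):
        self.state = "resolved"  # notifications would fire here
        self.resolved_at = now

    def sweep(self, now):
        """Periodic job: hard-close tickets resolved past the window."""
        if self.state == "resolved" and now - self.resolved_at > REOPEN_WINDOW:
            self.state = "closed"

    def user_reply(self, now):
        """A reply reopens during the grace period; after close it's rejected."""
        self.sweep(now)
        if self.state == "closed":
            raise RuntimeError("ticket closed, please open a new one")
        self.state = "open"
```

The key design point is that the reply handler itself checks the window, so a late "new unrelated issue" reply is forced into a fresh ticket instead of resurrecting the old one.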
|
# ? Sep 8, 2021 01:30 |
nvrgrls posted:Speaking of weird customer service... Bad MSP. Every ticket closed on the first pass becomes a met SLA and a reason to keep the contract.
|
|
# ? Sep 8, 2021 03:30 |
|
Thanks Ants posted:I wish more helpdesks would make it easy to set a “resolved” state when the ticket is complete, send out the notifications or whatever, and then close it a few days later and remove the option to re-open at that time. Our system does this. What it does not do is let you (easily) search for a user and get a list of their currently open tickets, sorted newest first. ServiceNow is weird.
|
# ? Sep 8, 2021 03:53 |
|
Thanks Ants posted:I wish more helpdesks would make it easy to set a “resolved” state when the ticket is complete, send out the notifications or whatever, and then close it a few days later and remove the option to re-open at that time. Zendesk does this, and ServiceNow CAN do this, apparently.
|
# ? Sep 9, 2021 01:50 |
|
I have a BUSINESS NEED for a user to be able to install software on their machine, sometimes. I don't want them signing in as local admin all the time. In the on-premises AD world I would've used LAPS, but this client is Azure AD. What's the best thing to do? I can have him call the helpdesk so they add him to local admin on his machine and remove him at the end of the day or whatever... or I can create a separate local user... I hate both of those options.
|
# ? Sep 9, 2021 01:52 |
|
Is packaging the software so they can install it through self-service not flexible enough?
|
# ? Sep 9, 2021 02:33 |
|
nvrgrls posted:I have a BUSINESS NEED for a user to be able to install software on their machine, sometimes. I don't want them signing in as local admin all the time. In on-premise AD world I would've used LAPS, but this client is Azure AD. What's the best thing to do? I can have him call the helpdesk and they add him to local admin on his machine, and remove him at the end of the day or whatever... I can create a separate local user... I hate both those options. There's a world of options out there software wise that will let people escalate temporarily or do certain tasks as admin, we generally use policypak for this (and many other things), but there's a lot of alternatives. https://www.policypak.com/policies/least-privilege-manager/ https://www.adminbyrequest.com/ https://github.com/pseymour/MakeMeAdmin/wiki
|
# ? Sep 9, 2021 05:51 |
|
Cross-posting from the main Working in IT thread. Has anyone ever been a support department of one? I might be interviewing for something along those lines, and it seems like it could be great or could be a shitshow. I'm just not sure how to feel out which one it is.

It's a new "department"; before this, more senior administrators were pinch-hitting on helpdesk and hardware provisioning and deployment. I'd be inheriting a bit of infrastructure, but I'm also apparently free to scrap what I decide isn't working and rebuild it myself. They're an AWS shop, so there could be some good experience and some good projects there. I'd apparently have my own budget too.

I'd have to get a feel for what the other teams do and don't do at the interview, but I'd at least be automating workstation imaging, managing IAM rights, and doing basic networking.
|
# ? Sep 9, 2021 06:24 |
|
Been there (second IT job after changing careers), was good experience. The main questions obviously revolve around variations of the scope theme:

1. Is this more "internal support" for technology departments, or are you going to be doing "full company" support?
2. You'd apparently have your own budget - does that include hiring if/when additional support staff become necessary as the company grows?
3. Is this going to involve printers, phones, and other poo poo Dante rightly placed in the ninth circle of hell?
4. What's the reporting structure?
5. What's the long-term plan here - to be the nucleus of a new IT department? A new support department? What's the end scope?
6. What's the coverage schedule? Does the company stop all business at 5pm, or are you going to be 24/7 on call for literally everything technology?

Then you get into the catch-all interview stuff:

7. What are the current challenges the position is expected to be facing?
8. What is success in this position going to look like after six months? One year?
9. What are the long-term goals of the company?

etc.

A place large enough to have "multiple other senior administrators" implies the existence of multiple other junior administrators, which also implies probably a large number of ancillary office staff, and then, depending on what exactly the company does, zero to hundreds of other random people doing things. This could be a lot of staff to support if you're responsible for the entire company and it's just you. You're already a support bottleneck just for the end user support - what happens when something falling under the huge umbrella of "infrastructure, AWS, networking, desktops" that you've implied you'd also be responsible for breaks at the same time?

Sheep fucked around with this message at 11:05 on Sep 9, 2021 |
# ? Sep 9, 2021 10:43 |
|
Maneki Neko posted:There's a world of options out there software wise that will let people escalate temporarily or do certain tasks as admin, we generally use policypak for this (and many other things), but there's a lot of alternatives. awesome thank you
|
# ? Sep 9, 2021 13:05 |
|
22 Eargesplitten posted:Cross-posting from the main Working in IT thread. I've been in this position. Here are some questions I'd ask if I were you, off the top of my head:
|
# ? Sep 9, 2021 14:39 |
|
Maneki Neko posted:There's a world of options out there software wise that will let people escalate temporarily or do certain tasks as admin, we generally use policypak for this (and many other things), but there's a lot of alternatives. Admin By Request looks like it might be what I'm looking for for my users. Now to see how excessively expensive it is...
|
# ? Sep 9, 2021 18:25 |
|
nvrgrls posted:Zendesk does this, and servicenow CAN do this apparently. One caveat with Zendesk is that if someone does reply during the short window between "Solved" and "Closed", there's no "Split" feature to separate that reply into a different ticket. This has been one of their most requested features for years, and they refuse to offer it, instead pointing at a paid third-party addon.
|
# ? Sep 9, 2021 18:45 |
|
Cross-posting from the homelab thread: bolind posted:What's the recommended best practice for dual PSU servers if I have only one UPS?
|
# ? Sep 16, 2021 11:45 |
|
bolind posted:Cross posting from the homelab thread: One in the wall, to prevent the consumer-grade UPS from presenting a single point of failure, if you Absolutely Positively Cannot Accept Downtime. Both in the UPS if it's a good one that fully isolates the load, because gently caress having to deal with a lightning strike or some other source of a killer current transient because Bob hosed up with earthmoving equipment twenty blocks away.
|
# ? Sep 16, 2021 15:49 |
|
Like, what affects you more: the MTBF of whatever quality of new or used UPS you've installed causing an outage when the UPS can't cut over to passthrough mode while failing, or losing the equipment because people are stupid and/or lightning exists?
|
# ? Sep 16, 2021 15:51 |
|
quote:What's the recommended best practice for dual PSU servers if I have only one UPS? Put both into the same UPS. If you put one into the wall (no UPS), then when there's an outage the entire load goes onto the remaining power supply, which is on the UPS anyway, and now you've lost redundancy of the power supplies.
|
# ? Sep 24, 2021 03:07 |
|
I split things across UPS and mains, because then you can do UPS maintenance without shutting everything down. Upgrade from that would be a transfer switch so that things can drop to mains if the UPS goes out. Gold standard is two UPSes. This only really works if your power is somewhat clean though, if your mains supply regularly spikes then you will probably want to sacrifice the UPS rather than all your servers.
|
# ? Sep 24, 2021 11:17 |
|
What we do is two PSUs: one goes to a UPS and the other goes to mains on a surge protector.
|
# ? Sep 24, 2021 13:09 |
|
Thanks Ants posted:This only really works if your power is somewhat clean though, if your mains supply regularly spikes then you will probably want to sacrifice the UPS rather than all your servers. The ultimate foundation of my own homelab practice: have a very good UPS, screw lightning or distortion.
|
# ? Sep 24, 2021 13:37 |
|
Thanks Ants posted:I split things across UPS and mains, because then you can do UPS maintenance without shutting everything down. Upgrade from that would be a transfer switch so that things can drop to mains if the UPS goes out. Gold standard is two UPSes. When doing UPS maintenance, couldn't you shut down power supply 2, move it to mains, bring it back up, then shut down and maintain the UPS, moving the PSU back to it after?
|
# ? Sep 24, 2021 17:31 |
|
You could, but if that's your plan for running things then just get a transfer switch, even if it's a manual one because they are really cheap E.g. https://www.cdw.com/product/apc-service-bypass-panel-power-distribution-unit-3000-va/4173728 Thanks Ants fucked around with this message at 17:44 on Sep 24, 2021 |
# ? Sep 24, 2021 17:37 |
|
Thanks Ants posted:You could, but if that's your plan for running things then just get a transfer switch, even if it's a manual one because they are really cheap Okay this is really cool. Thanks! E: now I’m thinking about the feasibility of load-balanced UPSes. Could it be possible to have a pool of UPSes that feed power to a pool of devices? That way, if a UPS dies, connected devices shift to the other UPSes in the pool, and you could add/remove a UPS for maintenance without any hassle at all. Agrikk fucked around with this message at 19:41 on Sep 24, 2021 |
# ? Sep 24, 2021 19:38 |
|
Yeah that's how the whole-building UPSes work, you just have them sit on their own circuits and you scale out as needed.
|
# ? Sep 24, 2021 19:48 |
|
Andenno posted:Thanks for the warnings and tips on Sharepoint. I was worried about creating a big timesink for myself. I think I'll do a work project or two in Sharepoint and see how it goes before I try to move everything over. Comcast Business is like that as well. For the longest time they insisted that I use an Xfinity modem in bridge mode, which would stay in bridge mode until a power outage, at which point it would revert to home-user mode, complete with DHCP, WiFi, and whatnot, breaking everything on my network. I called multiple times and kept getting "you have what you need, kthxbye" until one tech said, "what the hell? No. You need a business modem, not an Xfinity modem," and all my problems went away.

Except that after the hardware swap, I'd get techs coming onsite for other issues relating to bad cables, and they'd immediately try to blame the issue on my business modem and try to swap it out for the Xfinity modem. On two occasions the on-site tech swapped out my working business modem for an Xfinity one and then bailed with it, even after I told him to leave my poo poo alone.
|
# ? Sep 24, 2021 19:56 |
|
This isn't my shop's internal stuff, but it's a question for a client. They are two CPAs who lease a Remote Desktop Connection to a desktop environment they run their accounting software on. They straight up hit it via a public IP address on the default port, with only a username and password guarding it. The vendor has lied continuously about how "they totally use a VPN to keep customers safe," and my people need to jump ship. Question: what other vendor can they pay for a more secure connection? The only other one I'm aware of is RightNetworks, which I'm not sure is any better. They'd love what they had at the place they left, which was a nice VPN tunnel that allowed RDC as well as shared drives and such. Cloudvara is what they want to get off of.
|
# ? Nov 12, 2021 16:58 |
|
I have a client that uses RightNetworks and it's also just straight RDP from the client side, but I believe there is also an IP whitelist involved.
|
# ? Nov 12, 2021 17:24 |
|
|
Thanks, that'd be an improvement. I mean, they don't have a static IP right now, but at least that's a layer of protection above their current vendor.
|
# ? Nov 12, 2021 18:30 |