|
incoherent posted:v2 and v3 are only available in enterprise, not standard. You should be issuing at least 2003 level certs, not 2000. Spin up some VMs, create a fake domain, and step-by-step the cert process. Any templates I can create there have to be 2003 or 2008 (which are v2/v3, right?), but I can't seem to use either type to actually sign anything. So if the problem is that the version of Windows Server we have simply doesn't allow this at all, then it sucks but at least we know. Stupid that it allows creation of templates it can't actually use, and that you can't create the type of template that you can use. quote:https://technet.microsoft.com/en-us/library/cc772393(v=ws.10).aspx No problem, I appreciate the link. I wasn't able to find something that succinctly told me what was supported and what wasn't. Anyway, I ended up working around the issue by using ADSIEdit to manually change the type/version of the template I created to be the same as for the template I copied it from, and it showed up as a Windows 2000 template. Now I can sign requests with it.
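For anyone else chasing this: the template version is stored on the template object as msPKI-Template-Schema-Version (absent/1 = Windows 2000 v1, 2 = 2003 v2, 3 = 2008 v3), so you can confirm what you actually have before (or after) editing it with ADSIEdit. A sketch using certutil; the findstr filter strings are just illustrative and may need adjusting to match the dump's exact attribute labels:

```
:: Dump every certificate template in the configuration partition with
:: full attributes, then pick out the name and schema version lines
certutil -v -dstemplate | findstr /i "cn msPKI-Template-Schema-Version"
```

Anything reporting schema version 2 or 3 needs an enterprise-edition CA to actually issue from it, which matches the behavior described above.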
|
# ? Sep 20, 2016 15:28 |
|
incoherent posted:Did you make sure to delegate Kerberos authentication to the cluster? I don't think this is needed, because SQL isn't running as an FCI, it's just database availability. There is no shared storage in this scenario. Anyway, I think I found a solution at the application level; I can always just set some post-failover PowerShell scripts through the SQL Server Agent, I guess. lol internet. fucked around with this message at 16:32 on Sep 21, 2016 |
# ? Sep 21, 2016 04:05 |
|
AD question here. We recently took on a new client, and I flew out there to re-IP their entire office about a month ago. They have two domain controllers called DC1 and FS2. While looking in event logs and poking around trying to familiarize myself with their network and check for issues, I noticed that on FS2, each night at 2:12 AM, event ID 13508 is logged: quote:The File Replication Service is having trouble enabling replication from DC1 to FS2 for c:\windows\sysvol\domain using the DNS name DC1.XXXX.local. FRS will keep retrying. Now, each of the two domain controllers successfully resolves the name of the other. On each DC, the primary DNS is itself, and the secondary is the partner DC. DC1 holds all the FSMO roles; FS2 is a global catalog. There do not appear to be any replication-related issues that I can see. I do see this in DCDIAG, though: quote:DC=ForestDnsZones,DC=XXXXXX,DC=local DCDIAG /TEST:DNS shows some errors related to root hint servers, nothing else. Running REPADMIN /REPLSUM, REPADMIN /SHOWREPL, or manually syncing with REPADMIN /SYNCALL /AdePq shows nothing out of the ordinary. I cannot see anything unusual anywhere, other than this error that gets logged at the same time every night. There is no 13509 that gets logged later. Any ideas? e: I just looked back as far as I can go in the event viewer on FS2, and these have been getting logged once a day since 2010, WTF?!?! MrMojok fucked around with this message at 21:16 on Sep 21, 2016 |
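For reference, the standard documented remediation for a 13508 that never resolves to a 13509, once DNS and RPC connectivity check out, is a nonauthoritative FRS restore on the DC that can't pull, via the BurFlags registry value. A sketch of that procedure (run on the affected DC, and read Microsoft's BurFlags guidance before trying it):

```
net stop ntfrs
:: D2 = nonauthoritative restore; this DC will re-sync SYSVOL from a partner
reg add "HKLM\System\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup" /v BurFlags /t REG_DWORD /d 0xD2 /f
net start ntfrs
:: then watch the FRS event log to confirm the re-sync completes
```

With only two DCs, make very sure the other one is healthy first, since it becomes the source of truth for SYSVOL.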
# ? Sep 21, 2016 20:25 |
|
MrMojok posted:AD question here. First, I assume you meant to censor the domain name in that event log quote? Secondly, is FRS running on DC1?
|
# ? Sep 21, 2016 20:29 |
|
Yeah, I did censor the name. File Replication Service is set to auto and started on DC1.
|
# ? Sep 21, 2016 21:00 |
|
MrMojok posted:Yeah, I did censor the name. File Replication Service is set to auto and started on DC1. His point is that you missed a couple.
|
# ? Sep 21, 2016 21:04 |
|
MrMojok posted:Yeah, I did censor the name. File Replication Service is set to auto and started on DC1. Couple of things... First, you really need to be more careful when posting information from a client. Not that what you posted was very damaging, but it shows you are careless. Something Awful is pretty small by internet standards these days, but it's big enough to cause headaches. Second, don't try to chase down every error in Event Viewer, ESPECIALLY on a domain controller, unless there is actually a problem. It is pretty maddening how common they are and how fruitless finding the cause of them can be. A DCDIAG health check is a great start. You would be better served checking the DNS configuration than digging around in a DC's event viewer.
|
# ? Sep 21, 2016 21:10 |
|
Yeah, I did miss a couple. Thanks for pointing that out.
|
# ? Sep 21, 2016 21:16 |
|
What OS are they running? With only two DCs switching it to use DFS is pretty straightforward, I've done it during the day with no impact.
|
# ? Sep 21, 2016 21:21 |
|
devmd01 posted:What OS are they running? With only two DCs switching it to use DFS is pretty straightforward, I've done it during the day with no impact. The DCs are Server 2003 Standard SP2, and it looks like they will remain that way until at least the first quarter of 2017 for budgetary reasons.
|
# ? Sep 21, 2016 21:44 |
|
MrMojok posted:The DCs are server 2003 standard SP2, and looks like they will remain that way until at least first quarter of 2017 due to budgetary reasons. You know that 2003 is very much end-of-life, right? And has been for a while? And shouldn't be allowed near a network connection?
|
# ? Sep 21, 2016 21:48 |
|
quote:We recently took on a new client, and I flew out there to re-IP their entire office about a month ago. They have two domain controllers called DC1 and FS2. CLAM DOWN posted:
Clients.txt I'm not saying it's good or OK, but this is what you deal with at an MSP: your clients are cheap as gently caress and everything is awful. I would hope that his company made them aware of this and the associated risks.
|
# ? Sep 21, 2016 21:54 |
|
CLAM DOWN posted:
Yeah, but literally none of this is up to me. If I'd had my way we wouldn't take on any new clients at all, due to previous bad experiences along these same lines, and the fact that I am literally the only person that works on anything to do with servers/switches/AD/VPNs/whatever else the helpdesk guys don't do, etc. But I have a feeling this won't be the last one we bring onboard with similar issues. Some time in the first quarter of next year I'll be getting them up to server 2012. Supposedly.
|
# ? Sep 21, 2016 21:55 |
|
Fair. I don't know how you can stand MSP work like that. It sounds awful.
|
# ? Sep 21, 2016 21:58 |
|
It is kind of difficult to describe, but at one point many years ago we were all one big company with its own internal IT/HR/accounting departments. Then IT/HR/accounting were split off into a different corporate entity that supported the original parent company, and a couple of others were brought in, but all the companies were in the same AD forest. It then began to morph into an MSP-style arrangement. The first new company we brought on several years back had two domain controllers that were Windows 2000, filled with dust, emitting strange noises, and logging hard drive-related errors. I think only one died before I managed to get them onto two new 2008 R2 servers, but it was a close-run thing. Yes, it is awful. Based on Sickening's reply and the fact that these errors go back to 2010, I don't think they are much to worry about.
|
# ? Sep 21, 2016 22:07 |
|
I bet they do internal cross-charging as well because someone decided that there weren't enough procedures in place and wanted to create more.
|
# ? Sep 21, 2016 22:16 |
|
Oh you BETCHA
|
# ? Sep 21, 2016 22:20 |
|
Internet Explorer posted:Has anyone been involved in rolling out ticketing/documentation software to a more general audience, not just IT? We are undergoing some management changes at my small company and we are considering having the administrative staff (Accounting, Billing, HR) run in a more organized fashion. We've looked at ZenDesk and JIRA, but both seem to have their flaws. ZenDesk doesn't really do sub-tickets or sub-tasks, making things like a new-hire ticket that creates sub-tickets for the other departments kind of difficult. JIRA seems like it could fit the bill, but the learning curve and time to implement seem somewhat daunting for us. On the documentation side, we are just looking to let departments better document their processes and share that knowledge with other departments. I have used Confluence extensively in the past and I am sure it would fit the bill, but so would ZenDesk's knowledge base or whatever. ServiceNow is designed for exactly what you're talking about. We've invested pretty heavily in Service Management for the whole organization, not just IT. When you say small company... how small?
|
# ? Sep 22, 2016 02:14 |
|
Thanks for the reply. 90 users or so. We've been undergoing a lot of management and cultural changes. I am slowly getting them to understand the need for ticketing / documentation.
|
# ? Sep 22, 2016 03:00 |
|
Automation question. As a 99% Linux shop, we very rarely need to install Windows Server on bare metal machines. But we have a few gigantic MSSQL DB's and things like that which demand a physical Windows host. What's the easiest* way for us to set up PXE boot into the Windows Server installer (*where easiest means a strong preference for something we can integrate with our existing Linux-based PXE setup)? Google is coming up with jack poo poo for me outside of "set up WDS lol" which would be unfortunate, but doable, I guess. I've done it in a past life for Windows desktops. I don't even care about an unattended install. Booting to a screen where I click Next a couple times is totally fine. This happens a couple times a year. I'm just trying to avoid opening remote hands tickets to the effect of "pick up USB stick A. put it in slot B" before I can get on with the job.
|
# ? Sep 22, 2016 04:19 |
|
Docjowles posted:Booting to a screen where I click Next a couple times is totally fine. This happens a couple times a year. MDT with WDS? WDS is essentially the PXE portion; MDT creates the network share\boot ISO, and WDS can be configured to serve a boot ISO when hosts try to PXE boot. Are you trying to automate the Windows Server install a bit? If your Linux PXE servers can serve a "boot ISO" to the clients who PXE, you should be able to use MDT to create that ISO and use your Linux PXE infrastructure. If you are just doing it a couple times a year, you can always just boot off a boot disc\USB drive and say gently caress the whole PXE poo poo? edit: MDT probably isn't the easiest, but there aren't many options really.. lol internet. fucked around with this message at 05:54 on Sep 22, 2016 |
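For the record, if you want to keep everything on the existing Linux PXE stack, iPXE's wimboot shim can chainload Windows Setup without WDS at all: copy BCD, boot.sdi, and boot.wim out of the Windows ISO onto an HTTP server your PXE clients can reach, and point a script like this at them (hostname and paths below are made up):

```
#!ipxe
# Chainload WinPE / Windows Setup via wimboot (see ipxe.org/wimboot)
kernel http://pxe.example.internal/wimboot
initrd http://pxe.example.internal/ws2012r2/boot/bcd         BCD
initrd http://pxe.example.internal/ws2012r2/boot/boot.sdi    boot.sdi
initrd http://pxe.example.internal/ws2012r2/sources/boot.wim boot.wim
boot
```

From there you land in the normal clicky Windows installer, which sounds like all that's wanted here; unattended answer files are an optional extra, not a requirement.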
# ? Sep 22, 2016 05:52 |
|
Docjowles posted:Automation question.
|
# ? Sep 22, 2016 07:24 |
|
It might not end up being worth the effort. But it feels dumb and bad to install from physical media in TYOOL 2016, especially when none of our other servers have this requirement. Also the gear is all in a colo 300 miles away and I have to open tickets to have the remote hands go do it, which takes a lot longer than "power on server, automatically boot into installer" Doing it via the DRAC is also a good backup option. I forgot we sprang for non-poo poo LOM on our last hardware refresh. lol internet. posted:If your Linux PXE servers can serve a "boot ISO" to the clients who PXE, you should be able to use MDT to create that ISO and use your Linux PXE infrastructure. Thanks, I'll look into this. anthonypants posted:Make a VM template "install Windows Server on bare metal machines"
|
# ? Sep 22, 2016 14:19 |
|
My preference would be to do it via the iDRAC; it's pretty painless. That way you're not setting up a special flower PXE setup for a couple-times-a-year occurrence. I once had to do a project where we had to remotely reimage 100+ store servers while retaining the local DFS data and video surveillance footage. Between scripting, staging the image on the data volume, and a small boot ISO attached to the iDRAC, three of us were able to get it done in two weeks.
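If you do go the iDRAC route, racadm can attach an ISO from a network share and one-shot boot it without touching the web UI. A rough sketch; the attribute names below are from iDRAC7-era racadm syntax and may differ on your generation, so treat them as assumptions to verify against Dell's racadm reference for your firmware:

```
:: Attach an ISO from a CIFS share as virtual media
racadm remoteimage -c -u shareuser -p sharepass -l //fileserver/isos/WS2012R2.iso
:: Boot from the virtual CD once, then power cycle the box
racadm set iDRAC.ServerBoot.FirstBootDevice VCD-DVD
racadm set iDRAC.ServerBoot.BootOnce 1
racadm serveraction powercycle
```

The nice part is this is scriptable from the same Linux box that runs the rest of the automation, so no remote-hands ticket for the USB stick.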
|
# ? Sep 22, 2016 15:06 |
|
Docjowles posted:"install Windows Server on bare metal machines" I just cringed.
|
# ? Sep 22, 2016 20:05 |
|
Moey posted:I just cringed. Yeah seriously. At the minimum, put free ESXi on it, then a Windows VM.
|
# ? Sep 22, 2016 20:07 |
|
GreenNight posted:Yeah seriously. At the minimum put free esxi then Windows VM. e: and we loooooooooooooooooooooooove using the local datastore anthonypants fucked around with this message at 20:39 on Sep 22, 2016 |
# ? Sep 22, 2016 20:17 |
|
GreenNight posted:Yeah seriously. At the minimum put free esxi then Windows VM. It's a gigantic MSSQL database with like 512GB of RAM and many terabytes of disk. We're never going to be vmotioning this thing around. Thanks for continuing to bikeshed this into the ground, though.
|
# ? Sep 22, 2016 20:30 |
|
Docjowles posted:It's a gigantic MSSQL database with like 512GB of RAM and many terabytes of disk. We're never going to be vmotioning this thing around. Thanks for continuing to bikeshed this into the ground, though. You're welcome.
|
# ? Sep 22, 2016 20:33 |
|
Honestly ILOM is going to be the easiest choice, unless you've got some PXE booting solution set up already that you can feed an ISO into. Things like MDT and WDS aren't really going to do what you want, at least not without a lot of work. They're more for full deployments, so you'd need to do quite a bit of work to get them to a usable point.
|
# ? Sep 22, 2016 20:39 |
|
I've worked with some IP KVMs that had virtual media support, and those worked pretty well for those purposes, but if you're not already using them it doesn't make sense to rip and replace just for that.
|
# ? Sep 22, 2016 20:48 |
|
Docjowles posted:Thanks for continuing to bikeshed this into the ground, though.
|
# ? Sep 22, 2016 20:58 |
|
The 2GB limit on ISO images on iDRAC vFlash is irritating as gently caress.
|
# ? Sep 22, 2016 22:53 |
|
Quick question about DCs & Sites and Services. Normally, with one site and DC1 and DC2, you set the DNS IP configuration to point at each other. If another site is created with DC3 and DC4, do I continue to set DNS servers to point at each other, or should they point back to at least DC1 or DC2? And... this leads me to my other question about Sites and Services: is there really any reason to set up Sites and Services between two sites aside from good practice? It's a 100Mb link between the sites and there won't be too much data.
|
# ? Sep 23, 2016 05:56 |
lol internet. posted:Quick question about DCs & Sites and Services Have them point at one another. Are you using Windows DHCP on all of your DCs? What's the OS? quote:And... this leads me to my other questions about sites and services. Is there really any reason to set up Sites and Services between two sites aside from good practice? It's a 100Mb link between the sites and there won't be too much data. Yes. Because when the Sites and Services topology accurately reflects your physical setup, the workstations in those sites will know which DCs should be providing AD services. The amount of data is kind of immaterial; there's no reason not to set your forest/domain up to maximize efficiency. Not using or misusing sites also becomes a minor annoyance for you or someone else later if you need to scale.
|
|
# ? Sep 23, 2016 12:37 |
|
lol internet. posted:And... this leads me to my other questions about sites and services. Is there really any reason to setup Sites and Services between two sites asides from good practice? It's a 100Mb link between the sites and there won't be too much data. Data other than AD replication may use your network topology in the future. As an example SCCM may decide where to pull its data from based on the AD site membership of client and server. And if a 20GB disk image is getting moved across that link because a client picked a server at random you'll feel it.
|
# ? Sep 23, 2016 13:17 |
|
Such as DFS namespaces, which are heavily reliant upon correct AD Sites and Services. I've set up a namespace before that mapped a single drive letter for everyone, but using DFS namespace redirection we pointed each site to the local DFSR copy on the store server. Fudge posted:Because when sites and services topology accurately reflects your physical setup the workstations in those sites are going to know which DCs need to be providing AD services. The amount of data is kind of immaterial - there's no reason not to set your forest/domain up to maximize efficiency. Not using/misusing sites becomes a minor annoyance for you or someone else later if you need to scale as well. And if you ever have to work in a multi-forest/domain environment, AD site names have to match identically across all domains for proper lookup. SRV records are checked in the other domain's DNS zone when doing cross-forest authentication to locate DCs, and if it finds an AD site with the same name as the originating DC's site, it will keep the request local to the site for faster auth. devmd01 fucked around with this message at 14:06 on Sep 23, 2016 |
# ? Sep 23, 2016 13:57 |
|
Fudge posted:Have them point at one another. Are you using Windows DHCP on all of your DC's? What's the OS? peak debt posted:Data other than AD replication may use your network topology in the future. As an example SCCM may decide where to pull its data from based on the AD site membership of client and server. And if a 20GB disk image is getting moved across that link because a client picked a server at random you'll feel it. 2012 R2. No DHCP or workstation clients. Roughly 15 servers between the two sites altogether, with static IPs; any future growth is unlikely. Should have mentioned that. Just to confirm: when you say point at one another, do you mean DC1/DC2 point to DC3/DC4 and vice versa, or DC1 points to DC2 and DC2 points to DC3?
|
# ? Sep 23, 2016 15:39 |
how many devices do you have on static IPs? What exactly do you mean by configurations pointing at one another? Edit: wow i read bad. Ignore first part of my question obviously lol milk milk lemonade fucked around with this message at 15:54 on Sep 23, 2016 |
|
# ? Sep 23, 2016 15:49 |
|
Fudge posted:What exactly do you mean by configurations pointing at one another? When you're putting up a Windows DC, many people will fire up the DNS server on that DC and use Active Directory replication to update the DNS servers in a domain or in a forest. But AD is very reliant on DNS being right. So, to ensure a DC/DNS server can always get correct information into DNS regarding its SRV and other record types, most people set the first DNS IP on a DC's interface to 127.0.0.1. That means AD will attempt to do DNS activities with the closest possible DNS server: the one it's hosting. The second (or later) DNS IPs on that interface point to other DNS servers hosting that zone. That way, if the DC is restarting and its own DNS isn't yet ready to accept changes or give answers, the DC can still push DNS settings to a writable DNS server. The primary question is this: two DCs in one site, two DCs in another. Should the DCs in the second site point to each other, or to one or both DCs in the first site? Personally, I'd have every DC list the IP of every other DC running DNS in its DNS settings. Here's why: when a Windows box is trying to do DNS activities, it first sends the request to the first IP in the DNS list. After waiting a short time for a reply, it then sends that request to EVERY OTHER IP IN THE LIST OF DNS SERVERS, one after the other as fast as it can. If the first IP responds, great! That's the one it'll use for that cycle. If not, and one of the others responds, great! It'll use that one for that cycle. If none respond, that's when we have bigger problems. So, it really doesn't matter what order the IPs are in after the first one: they'll all be sent the request, local site or remote site. And yes, you can have a long list of DNS servers on an interface, not just two. The main thing is that DCs running AD-integrated DNS zones should always point to writable DNS servers that are also hosting AD-integrated zones.
If you're running DNS on a DC but using primary and secondary zones...what's your problem? Hah? <smeks u upside the hed> Why you make your mamma cry?
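To make the fallback behavior above concrete, here's a toy model in Python (illustrative only, not real resolver code) of the server-selection logic described: the first IP gets first crack, and on timeout the rest of the list is tried with the first responder winning that cycle.

```python
def pick_dns_server(servers, responds):
    """Toy model of the Windows DNS client behavior described above.

    `servers` is the ordered DNS server list from the NIC; `responds` is
    a predicate saying whether a given server answered in time. The first
    entry is preferred; if it times out, the remaining entries are tried
    and the first one to answer is used for this cycle.
    """
    if not servers:
        return None
    first, rest = servers[0], servers[1:]
    if responds(first):
        return first
    for server in rest:   # fan out to every other configured server
        if responds(server):
            return server
    return None           # nobody answered: bigger problems


# Hypothetical addresses: loopback DNS is down mid-reboot, so any
# responding partner DC is acceptable regardless of list order.
up = {"10.0.0.2", "10.0.1.2"}
print(pick_dns_server(["127.0.0.1", "10.0.0.2", "10.0.1.2"], lambda s: s in up))
```

This is why the ordering of entries after the first barely matters: any reachable DC/DNS server in the remainder of the list keeps resolution working.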
|
# ? Sep 23, 2016 19:19 |