Zaepho
Oct 31, 2013


lol internet. posted:

I setup SCCM with ALL packages, now stuck converting them to Applications and testing again and re-testing OSD.

Make sure you're on SCCM 2012 R2 for that. SCCM 2012 SP1 (earlier than CU2) has issues with Applications in an OSD Task Sequence. Getting it set up to update the client prior to installing applications is a giant pain and can be avoided by using R2.

Also, Applications frigging rock for software deployment.

Zaepho
Oct 31, 2013


CLAM DOWN posted:

Anyone on SCOM 2012 or 2012 R2? I'm curious how you have it set up, VM vs physical, SAN vs local disks, etc. I'd be looking at monitoring 600+ servers.

At 600 servers I would probably go with a separate SQL server with the typical best practices there for making a good fast SQL box.

A decent setup on the management server is highly recommended. At 600 servers, depending on how many console users you'll have, I would suggest a second Management Server for balancing the agent load across. There is an Excel spreadsheet for SCOM sizing that is a pretty darn good resource for getting into the right ballpark. RAM is king; load up on as much RAM as the budget will allow on both the SQL and Management Server sides of things.

I would definitely go R2 over 2012. A word of warning for your install: SCOM is a product that needs a bit of time to bake. Get it installed, get a couple agents pushed out, and get the Core OS MPs loaded. Let those discover and make sure you're getting data back, then start pushing more agents. Don't add any more MPs until you're happy with the noise level of the Core OS MPs. When you start adding MPs, do it one MP at a time and read the MP guide first.

Zaepho
Oct 31, 2013


TheDestructinator posted:

I've been put in charge of developing a more standardized and secure desktop environment for our company. I currently have a standard Windows 7 image in place using MDT 2012, but I'll obviously need more for overall system management.

I'd like to use SCCM but it doesn't really make sense for only 100-150 users. Ideally, we'd be using software that's a one-stop shop, but I'm fine with using different software for different admin functions.

Here's what I'd like to use so far, any suggestions would be helpful. At this point I'm looking for things on the cheaper side.

code:
Software Updates - WSUS
Software Deployment - PDQ Deploy
Antivirus - Webroot (we're about to renew license, open to suggestions)
Inventory - ?
Overall Endpoint Management - ?

Any recommendations for a combo of software for a smaller shop that could handle similar functionality to SCCM?

Have you considered System Center Essentials? It's pretty much a limited version of System Center. I'm not sure how far up it scales, but it should provide a really nice transition to a full System Center deployment once that makes more sense.

Zaepho
Oct 31, 2013


peak debt posted:

System Center Essentials was an SCCM 2007 thing; it has since been replaced by Windows Intune, which is essentially a web-interface cloud SCCM. It does updates, software installations, and inventory, but won't do imaging. It also annoyingly tries to push you towards Windows 8 through licensing deals. It's not terrible, but it doesn't compare too well to real deployment solutions; personally I would only recommend it for super-small offices (like <20 PCs).

Good to know. I do System Center implementations on a daily basis but haven't had the opportunity to play with SCE in the past. That brings it back to SCCM, if you can make it make sense from a licensing perspective. Remember, with SC2012 you get Endpoint Protection for free, which makes it start to make a LOT more sense financially once you add up the per-endpoint costs of all of the separate utilities.

Zaepho
Oct 31, 2013


mintskoal posted:

Hey guys, Active Directory question.

I'm doing some infrastructure work for my old job. Part of the project is to create a new domain controller and set up a new AD domain to replace their old one. The current/old one has some really wonky stuff, and they'd like to just start over. I'm wondering about the best way to handle removing current machines from the current domain and adding them back to the new one with as little friction as possible.

Luckily they're small and we only need to move a SQL Server box, one web server, and 6-8 laptops. Is it as easy as having all clients remove themselves from the domain, then remove the SQL Server, create a new domain and rejoin?

Make sure you update the master DB with the new domain information. As far as I recall this is not supported by MS, so never, ever admit to MS Support that you moved the SQL Server to a new domain.

Stealing from a good Stack Exchange Answer:
What do you need to take into account when migrating SQL Server to another domain?

The steps below presume

1) IP address will also change
2) SQL Server is NOT clustered

A. Backup:

BEFORE: back up the databases off-machine
B. Services:

BEFORE: depending on the nature of the change/move, you may want to set service start to Manual for all SQL services
AFTER: once things are up and running properly, return service start to its original setting
C. SA account:

BEFORE: If all administrator accounts are domain accounts or groups, temporarily enable the 'sa' account with a strong password
AFTER moving: once the domain-based accounts are set up in the new domain, 'sa' can be disabled again
D. Service Windows account:

BEFORE moving: for each SQL-Server-related Windows service, change the service to use a LOCAL Windows account or one of the built-in accounts
AFTER moving: grant the necessary privileges to the services' new domain accounts. When special permissions are not needed, SQL Server Configuration Manager can be used to change the service account
E. Windows domain accounts used to log in to SQL Server

Re-create the needed accounts or use corresponding accounts in the new domain.
BEFORE moving, script out permissions for OLD domain accounts (see the sketch after this list).
AFTER moving, apply these scripts to the corresponding NEW domain accounts so they will have the same permissions
F. IP Address: SQL Server (unless clustered) will use the new IP address

AFTER: Client applications that reference the service by IP address will need to be configured with the new IP address.
G. Firewall:

AFTER: OLD firewall openings that are no longer used will need to be closed; NEW firewall openings may need to be created for SQL Server, OLAP services, and SSRS between servers and clients
H. DNS entries:

AFTER: verify DNS has correctly updated
AFTER: Clients and services that reference the server by DNS name may need to be restarted AND/OR their host systems may need their DNS cache flushed. For Windows workstations, this can be done with "ipconfig /flushdns"
I. Service Principal Names (SPNs). Some standalone (and all clustered) instances use SPNs.

AFTER: The OLD SPN must be dropped and a NEW SPN must be created. Although it's not recommended to use the SQL Server service account to manage (its own) SPNs, if that is the case, the NEW domain service account will need to be granted the "WriteServicePrincipalName" privilege
J. Client Network Utility Alias.

AFTER: Any clients that use these aliases will need to be updated
K. Client application and service connection configuration:

AFTER: Data Source Names (DSNs), connection strings, config files, and Oracle TNS names for connections will need to be updated, and applications and services may need to be restarted
L. Internal machine name.

AFTER: If the machine name is also changing, SQL Server's internal machine name entry may need to be updated:

code:
sp_dropserver 'MyOldMachineName'
go
sp_addserver 'MyNewMachineName', 'local'
go

M. Merge Replication - If merge replication is in use, it will also need to be reconfigured.

BEFORE: ensure all replicas are up-to-date
AFTER: re-configure merge replication
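
For step E, a hedged sketch of scripting out server-level grants for the old-domain logins (domain and server names are placeholders, and this only covers server-level permissions, not database-level permissions or role memberships):

code:
# Placeholder names throughout (OLDDOMAIN, NEWDOMAIN, SQL01).
# Emits GRANT statements for the NEWDOMAIN twins of every OLDDOMAIN login's
# server-level permissions. Run the output after the new logins exist.
$query = @"
SELECT 'GRANT ' + perm.permission_name + ' TO ['
     + REPLACE(p.name, 'OLDDOMAIN\', 'NEWDOMAIN\') + '];'
FROM sys.server_permissions AS perm
JOIN sys.server_principals AS p
  ON perm.grantee_principal_id = p.principal_id
WHERE p.name LIKE 'OLDDOMAIN\%' AND perm.state_desc = 'GRANT';
"@
# Invoke-Sqlcmd ships with the SQL Server client tools (SQLPS module)
Invoke-Sqlcmd -ServerInstance "SQL01" -Query $query |
    Select-Object -ExpandProperty Column1 |
    Out-File C:\temp\regrant.sql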
Attributions - some information added from these sources:

http://serverfault.com/questions/49...nning-ms-sql-08

http://social.msdn.microsoft.com/Fo...fe-ea5641ae6b88

Zaepho
Oct 31, 2013


Stealthgerbil posted:

Is there any way to give a user the ability to start, stop, and reboot a virtual machine in server 2012 Hyper-V? Also maybe even restore from a set snapshot. I was messing with the authorization manager and figured out how to create a user that can only do those functions but they have access to every virtual machine. I am not sure how to make it apply to only one virtual machine.

AppController is the best way to handle this, but it requires System Center Virtual Machine Manager. If you're already using VMM, AppController is a simple web portal for "End Users" to do everything you're asking about (not 100% sure on restoring checkpoints, but pretty sure).

If you wanted to REALLY go the long way, Windows Azure Pack would be the recommendation. It gives the "End User" an Azure-style portal allowing them to do anything you could do in Azure, including requesting new VMs, DBs, websites, etc. It's pretty cool, but I don't personally think it's ready for widespread enterprise use yet, for various reasons.

Zaepho
Oct 31, 2013


Sacred Cow posted:

Has anyone ever used MS App-V with SCCM 12? My company just signed a new EA and we discovered MDOP is included so we want to get the most out of our license.
We're looking to control our limited-license software like Project, Visio, and Adobe Pro by granting and removing access to the virtual app on an as-needed basis. It seems like this is the right tool, but the documentation makes it seem like it can be a beast to deploy and manage.

Getting MDOP was timed perfectly too. My boss recently tasked me with encrypting all our laptops with BitLocker and now I have an MBAM license

We do a fair bit of App-V for customers at work but I'm not on the SCCM side. I can tell you that there is a certain art to sequencing App-V Packages that is different from the black art of packaging. Once you wrap your head around it you can do some pretty cool stuff.

Aside from that, with SCCM 2012 you can uninstall software, hence "controlling" access to it. Metering is a really good tool for determining who needs to lose their Adobe Pro install; then it's just a matter of dropping them in the uninstall collection.

MBAM is the only way to do BitLocker!

Finally, check out UE-V (User Experience Virtualization); it's a clever semi-reinvention of roaming profiles and is included in MDOP.

Zaepho
Oct 31, 2013


Bob Morales posted:

DHCP question:

I'm guessing the fix (assuming both DHCP servers are to be kept) is to choose one address pool, split it between the servers, and make sure the reservations all exist on both. There currently are about 20 leases on the first server, and 70 leases on the second server.

My recommendation, if you can swing it, is to get the DHCP servers onto Server 2012 R2; 2012 added DHCP failover capabilities. You create the scope on one server and add the other into the failover pair. It automagically maintains the reservations on both sides and splits the scope up for load balancing. Plus, if one node goes down for a configurable period of time (15 minutes by default, if I recall correctly), the other node will pick up renewals and issue leases from the entire scope.

It's dead simple to set up and massively simplifies highly available DHCP.
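
If you'd rather script it than click through the wizard, a minimal sketch with the 2012 R2 DHCP cmdlets (server names and scope ID are made up):

code:
# Create a load-balanced failover relationship for an existing scope on DHCP1
# and replicate it to DHCP2; reservations and leases stay in sync automatically
Add-DhcpServerv4Failover -ComputerName "DHCP1" -Name "DHCP1-DHCP2" `
    -PartnerServer "DHCP2" -ScopeId 10.0.0.0 -LoadBalancePercent 50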

Zaepho
Oct 31, 2013


Martytoof posted:

In all honesty I should just promote two new 2012R2 DCs and decommission these two.

I pretty much always take any excuse to get to the latest OS.

Zaepho
Oct 31, 2013


Orcs and Ostriches posted:

The guy before me put Adobe PDF Reader and Java on all of our servers - DCs included. He never remote managed, and would rather stand at the rack staring into the little 800x600 console display whenever he needed to do anything.

Likely because he wanted to hide from people hunting him down at his desk.

Zaepho
Oct 31, 2013


Serfer posted:

Client logs just show "waiting to download", server logs show successful replications. Not that I'm really sure which of the 10,000 log files I should be looking at.

It works for a period of time, and nothing changes between then and when it stops working

Check your content validation. Possibly something is hosing up the content on the DP, and the DP validates the content and then refuses to serve up the invalid content? Actually, just checking Monitoring -> Distribution Status -> Content Status for the applications should show any validation or content distribution issues. Heck, if you're not doing content validation, perhaps turn it on and see what's up.

Check boundaries, possibly? Check the System Management container to make sure there are no leftover boundary objects from any old sites or development/test environments.

Does it quit wholesale or only for specific client machines?

To echo skipdog, I've worked in environments with upwards of 10k endpoints distributing thousands of different apps, plus patches and OS deployments, and Config Manager works. It CAN be rather particular about things and take a little fiddling to get everything working and tuned properly, though. In the end, it's certainly worth the time and effort.

Zaepho
Oct 31, 2013


Hadlock posted:

Doing SCOM properly is a daunting task, and the fact that Microsoft publishes a literal "survival guide" doesn't help much

It's all about going slow. Import 1 MP at a time and get it configured and tuned (no, the product groups have no frigging clue what works in reality) before you think about which one to tackle next.

Don't disable it if you don't have to.

I have a three-bucket approach to tuning.
Bucket 1: Oh poo poo fix it now!
Bucket 2: I don't care and never will (Defrag alerts anyone?)
Bucket 3: Wow we should probably fix that eventually but not until things aren't on fire.

The Bucket 3 MP gets deleted 3-6 months after the initial tuning effort.

Zaepho
Oct 31, 2013


Malcolm posted:

DirectAccess is Enterprise OS only in Win7 (and Win8 I believe) but it loving owns if it fits your org. It's a bummer that Enterprise licensing is so difficult, I know it costs us a bundle but it's great for off-site Windows management.

This is absolutely true. Honestly, a lot of the ways that people decide it doesn't fit their org come down to lame political reasons or perceived needs that they aren't going to satisfy with other solutions either.

I absolutely love firing up my laptop and connecting to resources in the office without even thinking about whether or not I'm VPN'ed in. 95% of the time, it just plain works. The other times, the DA server is down (I haven't set up a load-balanced cluster yet because I'm a terrible IT person), the office Internet is down, my Internet is down, or the customer whose guest WiFi I'm on is doing something weird and terrible to SSL traffic.

Zaepho
Oct 31, 2013


lol internet. posted:

Yeah, that was the thing we were looking for. Works out then.

Dumb question but I assume the Pro version allows you to use BitLocker from the centralized administration panel right?


Thanks!

The centralized panel is MBAM (http://www.microsoft.com/en-us/wind.../mdop/mbam.aspx), which is part of the MDOP CAL/license/thing. You may need an EA for this. I stay as far away from licensing as I possibly can, though.

Zaepho
Oct 31, 2013


sanchez posted:

That server is suspiciously cheap; the rest of the rates are entirely reasonable. The company may work with you on pricing, but honestly, getting pushback on a project that size would result in us just walking away from it. It's not worth haggling with someone with that mindset.

I agree here. What you're seeing is markup and hourly rates, which honestly are pretty low when looked at from the perspective of the hours that will be put in. For a manual server OS build and config I would typically budget a full day. Just think of the time the tech is going to sit there with his thumb up his butt waiting for Windows Updates to complete. For you, this means you hit go and walk away to do other duties. For the consultant on site, he's stuck staring at it and waiting, which means you pay for that time.

What's glaringly missing from that quote is the design time for AD. You really need to sit down with them and figure out the best way to lay out your AD infrastructure. That's not even getting into GPOs or the migration of users from local profiles to domain profiles. Implementing AD is a huge leap, and the migration will take some time and effort.

Secondly, find a way to get another server. All of your AD eggs in one basket is a disaster waiting to happen.

Zaepho
Oct 31, 2013


Silly Newbie posted:

We ran into a weird one last week that we're still fighting with.
Server 2012 Hyper-V box. Has a single guest, a server 2008 VM that was created from a physical install via disk2vhd. Everything works great except the networking.
Host can get out to the internet, talks to everything just fine.
Guest can be connected to, but cannot talk out. It has proper network info, can ping other internal devices and the gateway with no problem, but can't get out. DNS (internal DNS server) resolves fine as well. It just never gets past the gateway.
It's on a virtual switch configured as external along with the host, has the correct virtualized drivers to share the host nic, etc.
Anyone ever seen anything like this?

This is an incredibly basic one, but is the subnet mask set correctly? It really seems like you're getting traffic out (and I would guess even out beyond your local subnet) but not receiving it back when you go outside your local subnet.

Check your overall IP settings on the box just to make sure it all lines up properly. You can also look at the Routing Table (route print) to see if everything looks good there.
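
For what it's worth, the checks look something like this (the guest is 2008, so it lacks the newer NetTCPIP cmdlets; use the classic tools there):

code:
# On the 2008 guest:
ipconfig /all      # verify address, subnet mask, gateway, DNS
route print -4     # verify the 0.0.0.0 default route points at the gateway

# On the 2012 host (or any 2012+ box), the PowerShell equivalents:
Get-NetIPConfiguration -Detailed
Get-NetRoute -AddressFamily IPv4 | Sort-Object DestinationPrefix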

Zaepho
Oct 31, 2013


BaseballPCHiker posted:

So does anyone have much experience with WQL queries for SCCM? I'm trying to build device collections based on AD OUs. Here is my query:
select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System where SMS_R_System.SystemOUName = "XXX.LOCAL/XXX/ACCOUNTING/ACCOUNTING COMPUTERS"

Which kind of works. There are 18 computers in that AD group and this query gets 14 of them.

Check the other machines and see what they have for an OU in Resource Explorer. It's possible that data is missing or inaccurate.

Also check your limiting collection to verify you're not inadvertently limiting them out of the possible devices for the collection.
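
If you want to compare SCCM's view against AD directly, something like this (the OU path is my guess, mirroring the SystemOUName in your query) lists what AD thinks lives in that OU:

code:
Import-Module ActiveDirectory

# Hypothetical DN matching XXX.LOCAL/XXX/ACCOUNTING/ACCOUNTING COMPUTERS
Get-ADComputer -Filter * `
    -SearchBase "OU=ACCOUNTING COMPUTERS,OU=ACCOUNTING,OU=XXX,DC=XXX,DC=LOCAL" |
    Select-Object -ExpandProperty Name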

Zaepho
Oct 31, 2013


BaseballPCHiker posted:

It was indeed the limiting collection. Thanks for the tip. Reminds me that I still have a ton of work to do in cleaning out our AD site. Unfortunately legal wants me to put in a ticket for each individual system that I need to remove from AD. Ugh, lots of work ahead for me.

Work with them to develop an operational process whereby a system does it automatically.

Typically in enterprise environments I see something along the lines of: if a computer does not log in or change its password in 60-90 days, it is disabled and moved to a DisabledComputers OU. 30-90 days after that, it is removed from Active Directory.

If you could convince them to turn this into a standard policy, you would have carte blanche to enforce it daily via automation, without change control or individual approvals or anything crazy like that.
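
A minimal sketch of that kind of scheduled cleanup, assuming a 90/90-day policy and an invented quarantine OU:

code:
Import-Module ActiveDirectory

# Hypothetical quarantine OU
$quarantine = "OU=DisabledComputers,DC=corp,DC=example,DC=com"

# Stage 1: disable and quarantine computers inactive for 90+ days
Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -ComputersOnly |
    ForEach-Object {
        Disable-ADAccount -Identity $_.DistinguishedName
        Move-ADObject -Identity $_.DistinguishedName -TargetPath $quarantine
    }

# Stage 2: delete anything that has sat disabled in quarantine another 90 days
Get-ADComputer -Filter 'Enabled -eq $false' -SearchBase $quarantine `
    -Properties whenChanged |
    Where-Object { $_.whenChanged -lt (Get-Date).AddDays(-90) } |
    Remove-ADObject -Confirm:$false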

Zaepho
Oct 31, 2013


BaseballPCHiker posted:

Yeah, I've got a PowerShell script that will give me a list of all computers that haven't logged onto the network in the last 60 days, which I then gave to them. They still seem super paranoid about me making any changes. I'll try to talk them into letting me at least disable all of them and move them into a separate OU.

Try to make them understand that active accounts that are unused are a risk from a security and audit perspective. The audit/regulatory angle is usually the best route to go.

I think most of the regulatory frameworks have some controls around proper onboarding and offboarding of both user and computer accounts.

Zaepho
Oct 31, 2013


BaseballPCHiker posted:

Try building a new MSI on a computer that had 2003 installed that you then upgraded to 2013 and set the correct defaults on. Whichever program you use to build the MSI should capture the registry changes and apply them on install the next time you use it.

For Office I would very strongly recommend handling all of this in your transform. This is the supported method and can do pretty much anything you need it to. I've been burnt by capturing changes in the past (as in: complete work stoppage for all users at a bank until a fix could be deployed). Don't be that guy!

Zaepho
Oct 31, 2013


CLAM DOWN posted:

It's the Vista server OS. Use R2.

Definitely use R2. Why not 2012 R2, though?

Zaepho
Oct 31, 2013


orange sky posted:

Crossposting from another thread, does anyone have any idea what I can do? I've designed good flows but they don't work because the KSC sucks

What are you replacing it with? Most of the current enterprise AV suites will detect and remove the other suites. System Center Endpoint Protection, for example, does this, although my google-fu is failing me this morning when trying to find the list of supported 3rd-party AV suites it can remove.

Zaepho
Oct 31, 2013


orange sky posted:

I'm uninstalling with SCCM, restarting afterwards.

Are you trying to use an app/package to do the uninstall/cleanup/install? Have you considered using a Task Sequence? It should be able to survive the multiple reboots you were referring to before.

Zaepho
Oct 31, 2013


Yaos posted:

You have to manually add the option for it, might only be 2003. http://technet.microsoft.com/en-us/library/dd572752(v=office.13).aspx Problem is that means going around and doing ipconfig /renew.

Add it to DHCP now; next time they renew, they should get the updated info, if I recall correctly. If your renewal times are sane, doing it a day ahead takes care of this issue. If the lease times are less sane, a week early.
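
On 2012-era DHCP servers, setting the option scope-wide is quick. The option ID and value below are placeholders; substitute whatever the linked article calls for:

code:
# Placeholder option (15 = DNS domain name) on a placeholder scope
Set-DhcpServerv4OptionValue -ComputerName "DHCP1" -ScopeId 10.0.10.0 `
    -OptionId 15 -Value "corp.example.com"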

Zaepho
Oct 31, 2013


hihifellow posted:

DHCP leases are renewed when the lease is halfway to expiring; if it is a week, they'll get it in 3 1/2 days.
This is absolutely true.


Yaos posted:

I set it to add it, but I will have to check my DHCP lease time; I'm pretty sure it was set to a week.

Consider taking this opportunity to drop the lease time as well. Do you really need desktops holding a lease longer than a day? You could go 8-10 hours and really squeeze down the number of leases out there, but that's probably not entirely necessary. I usually go for 24 hours on workstation subnets, since the environments I've been in are made up of a large number of laptops that have a tendency to migrate at a fairly rapid pace, and this makes sure I'm not wasting space in my scope on machines that have already moved on.
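
Dropping the lease on a workstation scope is a one-liner if you're on the 2012 DHCP cmdlets (scope ID made up):

code:
# 24-hour leases on a hypothetical workstation scope
Set-DhcpServerv4Scope -ComputerName "DHCP1" -ScopeId 10.0.10.0 -LeaseDuration 1.00:00:00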

Zaepho
Oct 31, 2013


KillHour posted:

Alright, this is driving me insane, so hopefully someone can help.

I'm having a permissions issue getting a piece of software (Milestone XProtect Corporate) to integrate properly with IIS. This is the error I keep getting:


I've tried giving every account I can think of read/write permissions on the www root folder. What am I doing wrong?

I'm betting, based on the error, that this is something internal to the Milestone XProtect Corporate software. Is this the initial install/config, or was something changed in IIS after installation?

Zaepho
Oct 31, 2013


Frozen-Solid posted:

Has anyone here gone through an audit that can give some tips and advice?

Might be worth checking out the MAP toolkit to see what it comes up with for licensing information. I BELIEVE this is what they may start with in the audit anyway.

http://technet.microsoft.com/en-us/...s/dd537566.aspx


On the good side, one of our customers got audited and thought they were going to be hosed to the tune of a couple million in SQL licenses, but it turned out they were actually over-licensed, and they dropped the size of their SA renewal in retaliation for the audit.

Zaepho
Oct 31, 2013


incoherent posted:

Does Microsoft even "care" about their customers when it comes to stuff like that? Oops, we bad, here is a sweetheart deal.

It's a back-and-forth thing with them, apparently. Microsoft is also a customer of the auditee (to the tune of $70+ million/year), and I suspect they were trying to even up the score a little bit.

Zaepho
Oct 31, 2013


BaseballPCHiker posted:

So I'm at a loss here with getting Office 2013 to install as part of our lite touch deployment. I've got the application imported into SCCM. Ran setup.exe /admin to make sure it installs silently without any user notice and gets our correct license key. It shows up as an option and seems to install without any problems, but when the computer boots up it's just not there. I've checked the SMSTS.log and don't really see anything that would indicate an error, but I guess I could copy the log here. I did have the source files on a network share that didn't have the correct read rights, which I fixed, and I did notice that someone earlier had built a standalone Lync installer using the same setup.exe /admin options file, which I had to delete to get mine working. The strange thing is that the installer does work; if you go into Software Center you can install it and run it fine as a user.

Did you add a step to the task sequence to install the app? If it's just showing up as available in Software Center, it seems like the TS hasn't even tried to install it.

Also, make sure there's an Apply Updates step or two after all apps have been installed. It helps make sure things are really, really patched when you're done.

Zaepho
Oct 31, 2013


BaseballPCHiker posted:

I had that checked, and the apply updates. I got it working but can't for the life of me figure out what difference this would've made. The application, which I tested and installed fine through Software Center, wouldn't work during my lite touch deployment. So on a whim I made it into a package and added it to the install apps task sequence. Totally works fine now. Why it wouldn't work as an application but works as a package is beyond me. It's using the same source files and installer.

SCCM 2012 SP1 or 2012 R2? There is a bug in SP1 where apps don't apply properly during task sequence installs. It's fixed in one of the CUs, but the client is the non-CU version during a TS, and you have to take a bunch of steps to update it to make it work right. It's a bit fuzzy since it's been a while since I messed with that, but I recall it being a lot of time and effort to get working.

The short story is R2 is better.

Zaepho
Oct 31, 2013


Hadlock posted:

Powershell:

Our use of powershell in our 150 server windows shop is growing by leaps and bounds. I am writing about 4 scripts a week as we consolidate common tasks etc.

We're looking at writing the top 50 or so functions into a companynameframework.ps1 flat file on a fileserver and then loading that at the beginning of most all scripts...?

1. Why is this a bad idea
2. How are we supposed to do this? What is the Microsoft best practice?

I am guessing we need to put these on a SharePoint server as PSSnapins? I don't want to store a file on every server and keep it updated; surely there's a way to manage a wealth of PowerShell scripts across a datacenter without resorting to using Chef or Puppet, etc.?

This is doable, but honestly I'd rather build a couple of modules and distribute them inside an MSI. Loading that thing remotely will be annoying. Building a module and an installer for it means you can push it out with SCCM or something like that, and you simply do Import-Module MyModule. You also have fewer issues with namespace conflicts, because you can call your module explicitly (MyModule\Get-MyFunction or something along those lines). It also means you can fail gracefully if it's not there. Plus it's more portable! If you change the location of that function script, you have to change every script referencing that location. If you just install the module to a module directory, it's there forever.
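
To make that concrete, a rough sketch of the module layout (all names invented):

code:
# Inside the MSI, install to a path on $env:PSModulePath, e.g.
#   C:\Program Files\WindowsPowerShell\Modules\CompanyTools\
#     CompanyTools.psd1   (manifest, generate with New-ModuleManifest)
#     CompanyTools.psm1   (the functions themselves)

# --- CompanyTools.psm1 ---
function Get-CompanyServerInfo {
    param([string]$ComputerName = $env:COMPUTERNAME)
    Get-WmiObject -Class Win32_OperatingSystem -ComputerName $ComputerName |
        Select-Object CSName, Caption, LastBootUpTime
}
Export-ModuleMember -Function Get-CompanyServerInfo

# --- In any script that needs it ---
Import-Module CompanyTools -ErrorAction Stop   # fails gracefully if missing
CompanyTools\Get-CompanyServerInfo             # module-qualified call avoids name clashes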

Zaepho
Oct 31, 2013


Hadlock posted:

OK, this is an acceptable answer that works inside our existing enterprise ecosystem; thank you sir, I will take this into consideration. This is the closest thing I've seen to a "Microsoft approved" design so far... but surely there's something baked into PowerShell for this?

It's being worked on, but honestly it's not there yet, in my opinion. OneGet in PowerShell v5 is supposed to provide these capabilities, but it's not yet released or complete. It looks pretty cool (think CPAN or something along those lines), and since it's based on NuGet, building your own repository might be able to serve dual purposes.

Zaepho
Oct 31, 2013


capitalpunctuation posted:

Look into PowerShell profiles. They'll be very familiar to a Linux admin with a customized .bashrc file. I've got a half-dozen functions in a profile file that gets pushed to all IT workstations. They load automatically whenever PowerShell or PowerShell ISE is opened on our computers. Group Policy File preferences seem to work just fine at pushing a profile file out.

That's quite a clever solution to the issue. My only question here is whether or not users that log in without creating a profile (i.e. via a scheduled task) trigger a Group Policy update.

Zaepho
Oct 31, 2013


capitalpunctuation posted:

Are you asking whether or not a user will need to manually run gpupdate to get the file? No, just define the Group Policy Preferences setting in Computer Configuration in a GPO, rather than user, and all of the computers will get the file without user intervention.

If you're asking whether or not the functions can be invoked in a scheduled task, I'm not sure, never tested it.
I didn't realize when I replied that it was a computer GPO. If you place it in the default global profile location, the scheduled task should pick it up just fine.
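
For reference, a sketch of how that lands on disk; the helper function is invented:

code:
# All-users, all-hosts profile: $PSHOME\profile.ps1, i.e.
# C:\Windows\System32\WindowsPowerShell\v1.0\profile.ps1
# Loaded for every user, including scheduled tasks that run powershell.exe,
# since no interactive logon is required (unless the task passes -NoProfile).

function Get-UptimeDays {
    $os = Get-WmiObject Win32_OperatingSystem
    ((Get-Date) - $os.ConvertToDateTime($os.LastBootUpTime)).Days
}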

Zaepho
Oct 31, 2013


SSH IT ZOMBIE posted:

Our MDT and SCCM servers are not integrated.

This is the first problem to rectify. Do it properly in SCCM integrated with MDT and you'll have wonderful consistency between image builds and pushed software, the ability to use all of your SCCM packaged apps in your task sequences, the ability to patch your machines completely, including all of the relevant software, and, and, and... etc.

Give your MDT guy a Gibbs slap and get him doing it right.

Zaepho
Oct 31, 2013


Zero VGS posted:

I know I'm not using the cached credentials as intended, but I was basically planning for them in the first place because I need these laptops to still actually function when the inevitable DC disaster strikes (or dumbass brings laptop home and can't connect to his own wifi). I think scheduled shutdown at night at least could help me test how the network is going to survive if we lose internet or Azure itself has downtime (it has happened before, apparently), and if it saves us a shitload of money and it's easily reversible (i.e. if it winds up sucking, I just leave it up 24/7) then all the better.

The transit bandwidth is the most expensive part of Azure. That's the data you send out from your Azure VM. Don't do this; it's not worth it. We have a DC in Azure as DR for our internal domain, and it runs over $100/month to keep it up and running 24/7.

Go find a VAR and pick up proper licensing for Windows Server and your client machines. Your problem will only get worse as you grow. As an organization, it's time to put your big boy pants on and get licensing and best-practice infrastructure under control.

Zaepho
Oct 31, 2013


Not a ton of users (less than 200), but we're aggressive with the inter-site replication times since we have an ADFS setup in Azure for O365, Dynamics, etc. (due to no generator and limited UPS capacity for our internal servers). Makes them act a little more like an internal application. I guess the SCOM agent on there is also trickling out a bit of data. It really adds up over time.

The CPU/memory time isn't what eats up the money; it's the bandwidth. Anything sent outbound from an Azure VM counts toward your transit costs unless you use ExpressRoute to your internal network.

Zaepho
Oct 31, 2013


Thanks Ants posted:

That's a lot more poo poo than "a DC in Azure as DR". I'm not surprised it's costing you more. For reference, the DC that I have running has transferred 150MB in, 75MB out over the past 3 weeks.

We ran the DC 24/7 for 2 months to make sure we weren't going to have any surprises. It was clocking in at just over $100/month each month. I haven't looked recently, but I know it's worth the uptime on ADFS to have it in Azure, so we're paying for it to be there while the rest of our infrastructure is in our office (no, literally... a rack IN the office). That consists of a couple physical servers, a couple shelves of iSCSI storage, and 115 VMs running on 5 physical Hyper-V hosts. It works for us but it's certainly not ideal.

Zaepho
Oct 31, 2013


lol internet. posted:

Question about Hyper-V NIC teaming in VM's.

Are there any benefits to NIC teaming inside the actual VM/child guest if the physical server/parent has NIC teaming set up?

I assume there isn't but I figured I'd ask just in case.

None that I have seen. If you do it, make sure you allow MAC spoofing and such on the vNICs, otherwise it won't work properly.

Honestly, use VMM and do logical switches with lots of physical NICs, then a single NIC on the guest, and you'll be more than covered from a (network) high-availability standpoint.
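
For completeness, the guest-teaming knobs are per-vNIC settings on the host (VM name hypothetical):

code:
# Only needed if you insist on teaming inside the guest: each member vNIC
# must allow MAC spoofing and have guest teaming enabled
Set-VMNetworkAdapter -VMName "GUEST01" -MacAddressSpoofing On -AllowTeaming On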

Zaepho
Oct 31, 2013


incoherent posted:

Microsoft finally did a good thing: they've updated RDCman with proper 8.1/R2 support.

http://www.microsoft.com/en-us/down...s.aspx?id=44989

Wait... they even remembered they produced this software?

Nice catch, time to update the app package in our SCCM!
