Blame Pyrrhus
May 6, 2003

Me reaping: Well this fucking sucks. What the fuck.
Pillbug
Yeah, grab the ADMX templates for Office 2010 / 2013; it's a very obvious and easily configured item in there.


Blame Pyrrhus

Jeoh posted:

I don't think Exchange likes dynamic memory very much, considering it'll try to get as much RAM as possible (same with SQL, btw).

Limit your ESE database caching. Always. Calculate how much you need and set limits in ADSI. SQL has a UI for doing the same thing, which again you should always do.
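For Exchange 2007/2010, the ESE cache cap lives on the Information Store object in the AD configuration partition. Below is a hedged sketch of setting it via ADSI from PowerShell, as the post describes; the server, org, and domain names in the DN are hypothetical, and msExchESEParamCacheSizeMax is expressed in database pages (32 KB pages on Exchange 2010, so 6 GB is roughly 196608 pages).

```powershell
# Hypothetical DN: substitute your own org, administrative group,
# server, and domain components.
$dn = "LDAP://CN=InformationStore,CN=EX01,CN=Servers," +
      "CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups," +
      "CN=Contoso,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=contoso,DC=com"

$store = [ADSI]$dn

# Cap the ESE cache at ~6 GB (196608 pages x 32 KB on Exchange 2010).
$store.Put("msExchESEParamCacheSizeMax", 196608)
$store.SetInfo()
```

The Information Store service has to be restarted before the new cap takes effect.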

Blame Pyrrhus

NevergirlsOFFICIAL posted:

ok so I have 2013 and 2007 coexistence going on and mailboxes on the 2013 servers cannot access the (new) Exchange server via Outlook while inside the LAN, UNLESS they use Outlook Anywhere. If they use outlook anywhere it works 100%. I can't google for this because I don't know what "non-outlook anywhere outlook" is called.

I'm guessing this is a DNS issue?


You don't know what it's called because it no longer exists. Post-2007, you only connect via HTTPS to the CAS / CAS array.

If you have public folder databases in 2010, you will still make RPC MAPI connections to the mailbox servers. Not sure if this is still true in 2013 (don't use public folder databases).
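If you're not sure whether any public folder databases are still hanging around, a quick check from the Exchange 2010 Management Shell:

```powershell
# Lists any remaining public folder databases; an empty result means
# no legacy RPC MAPI paths to worry about on that account.
Get-PublicFolderDatabase | Format-Table Name, Server
```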

Legacy clients can reach the GAL and free/busy information if they are published to public folders rather than the web (don't do this).

Run outlook /rpcdiag to see how the client is building the connection.

Blame Pyrrhus

Tab8715 posted:

It's more about why Get-Distribution doesn't return all the properties for the object; why must I pipe the command to FL or FT? Did Microsoft just decide which ones are displayed by default?

Yes, the default displayed properties of any object are often pre-determined and truncated.

Manipulating PowerShell objects to get the correct information is real PowerShell 101 type stuff. Select-Object on the other side of the pipe (get-whatever | select property1,property2), expressing the property via an encapsulated query ((get-whatever).property1), or storing it to a variable and expressing it that way ($var.property1) are all common.
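The three common forms from the paragraph above, demonstrated with Get-Process as a stand-in for any Exchange cmdlet:

```powershell
# 1. Select-Object on the far side of the pipe
Get-Process | Select-Object -First 3 Name, Id

# 2. Encapsulated query: wrap the call, then dot into the property
(Get-Process).Name | Select-Object -First 3

# 3. Store to a variable, then express the property
$procs = Get-Process
$procs[0].Name
```

The third form is the one to reach for when you'll be querying the same object set repeatedly, since it only hits the source once.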

Learning PowerShell conventions can be done really quickly, and it pays huge dividends, especially in larger applications like Exchange, VMware, and AD.
I once wrote a completely automated Exchange 2007 database consolidation routine that compiled the deltas between 200+ database backups into a single database that was then exported to PST files and sent up to Mimecast for retention. All fully automated. We had bids well into six figures for this project before I decided to see if I could just ghetto-shell it. PowerShell is just that, well, powerful.

O'Reilly Media has a great PowerShell cookbook that opens with an eye-opening primer on exactly what PowerShell is (vs. traditional shells). I can't recommend it enough.

Blame Pyrrhus
I have a few questions concerning an Exchange 2010 to 2013 migration I'm considering. Sorry for the length.

For the record, I'm currently lead infrastructure engineer for a national automotive sales and lending institution. About 3600 mailboxes. I know Exchange through 2010 quite well. I have my MCITP: Enterprise Admin and Enterprise Messaging Admin certs (along with my VCP5 and CCNA) and used to perform scores of migrations and corrections with every version of exchange from 5.5 to 2010 back in my consulting days.

I am also a wiz with powershell and know the exchange cmdlet set very well. I used to be pretty active in this thread when I was still consulting.

Essentially, I have a few basic planning questions for those of you that have performed these migrations.

My environment:

- 2 data centers, 99% virtualized on VMWare 5.5, geographically separated by 40 miles.
- Metroclustered with EMC VPlex, Cisco Nexus/OTV, and 2x 10gb <1ms roundtrip latency interconnects.
- 90% of our VM environment free-floats between datacenters via fully automated DRS all day long. I would have to log in and check to see what zip code my mailbox databases are currently occupying; our environment is just completely geographically agnostic. Our DMZs are the only piece that is specific to each site.


My Exchange environment looks like this:

- 3 Internal CAS/HT servers. Load balanced behind Cisco ACE.
- 4 mailbox servers, each houses a primary and secondary copy of a database.
- 4 mailbox databases, about 350GB each.
- 1 Mailbox server that does nothing except our archive database. It's less than 1TB. I keep strict size limits on mailboxes, but no limits for the archive. PST files are prohibited via GPO. gently caress PSTs forever.
- No public folders. gently caress public folders forever.
- 2 CAS-only servers in the DMZs that only handle webmail and activesync. One at each location.
- 2 Exchange unified messaging servers that handle all of our auto-attendant and voicemail functionality.
- Office 365 EOP as our inbound and outbound perimeter.
- Fully operational Lync 2013 implementation. I'm only so-so with Lync.
- No legacy exchange mess in ADSI. AD is completely sane and healthy.
- All mobile devices are managed via AirWatch MDM, so I can publish new ActiveSync settings if needed.

I designed and instituted the majority of this environment, and have no limits on what I can touch. We aren't siloed on our infrastructure team. There's only 4 of us, so there's no way we could operate if we were. I have the authority and ability to make any change I wish. I am 100% privy to all of the required SSL, DNS, sender and connection validation requirements. I don't need any help there.

In my mind, I feel as if I could treat this migration similarly to how an exchange 2007 to 2010 migration might work. I guess my questions are the following, if anybody knows:

At a very high level, what is the flow? Is it essentially -
- Build new CAS environment. Make all of the SSL and DNS changes required for it?
- Stand up new mailbox server environment, migrate mailboxes?

Is there better interoperability with previous versions of Exchange in the Exchange 2013 CAS server role? I.e., if I stand up some 2013 CAS servers, can they serve mailboxes on 2010 mailbox servers, or am I left needing to segregate the front-end environments while 2010 and 2013 coexist?

I never actually touched Unified Messaging prior to working here. They didn't even ask questions about it on the exams for the MCITP. The configuration was balls-simple, but how do I even migrate it? Where do the already-configured auto-attendant configurations and such live?

Between F5 and the 2012 R2 Web Application Proxy role, which is preferable for a reverse proxy in the DMZs? I currently have CAS servers specifically for external client access, but I hate doing it this way. The only reason I did was because our current load balancers (Cisco ACE) don't actually do reverse proxying, and the TMGs we used before were flaky poo poo. I know ADFS is a requirement for the new WAP role, but we already have a fully functional ADFS environment as well, so instituting it would be an hour's work, tops. I would love it if anybody knew how well the new WAP role works long-term. I would just configure the application proxy / SSL offloading on the F5s, but we are still mid-migration on those.

And last: Should I even bother migrating to 2013? We have absolutely zero complaints about our 2010 environment as it stands, and I'll never move to 365, so I don't care much about the hybrid features of 2013. The only reason I am even considering it is because, as an organization, we tend to keep on top of new technology.

Blame Pyrrhus fucked around with this message at 05:09 on Sep 24, 2014

Blame Pyrrhus

wintermuteCF posted:

So my colleagues and I have been mulling over an issue and want to get feedback.

Our Environment:
  • 45 mailbox databases, each ~650GB
  • 8 DAGs -- 6 databases for each of the first 5 DAGs, 5 databases each for the last 3 DAGs
  • ~12,000 mailboxes

Previously, placement in a database was based on what business unit you were in. (By way of explanation, this is a company that grew through mergers and acquisitions, and until recently has been operating as a very loose federation of companies. IT was operated under a sort of feudal system without much integration as the local "IT lords" fought to keep control of "their" data.) We've identified that this is a batshit insane way of doing things at our company, causing some DAGs to be stuffed to the gills with user data, and others to be very under-utilized. We want a way to spread user data across the databases in such a way that each database comes out being relatively equal in size.

As a second wrinkle, the director of our group wants to keep one database in each DAG EMPTY with the exception of journaling mailboxes. His explanation is that these databases would "be there in case we need to emergency-move people [to do things like empty another database so we can delete/recreate it if whitespace gets out of control]." Is he crazy for suggesting that we keep 4.8TB of databases empty except for emergencies?

What's the best way to organize and categorize users into databases in a large environment like ours? Please help!

My environment is significantly smaller than yours, but I'm curious: do you have size constraints on your user mailboxes?

The way I keep things manageable is by keeping a narrow 300MB limit on the mailbox, but allowing the online archive store to be as large as they want.

I don't keep the online archive in a DAG, as I don't care too much about its availability. The smaller mailboxes are much more manageable, and the archive data is available via any interface they use sans ActiveSync. Users will see it as a separate hierarchy in Outlook and webmail, and on mobile devices they can configure "delete" to instead just archive the message.
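A sketch of that scheme in the Exchange 2010 Management Shell; the mailbox identity and exact quota values here are illustrative, not prescriptive:

```powershell
# Enable the online archive for an existing mailbox.
Enable-Mailbox -Identity jdoe -Archive

# Tight primary mailbox, unlimited archive, per the scheme above.
Set-Mailbox -Identity jdoe `
    -UseDatabaseQuotaDefaults $false `
    -IssueWarningQuota 250MB `
    -ProhibitSendQuota 300MB `
    -ProhibitSendReceiveQuota 300MB `
    -ArchiveQuota Unlimited `
    -ArchiveWarningQuota Unlimited
```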

Otherwise you can use archive policies to help automate the housekeeping.
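A minimal archive-policy sketch along those lines, assuming Exchange 2010 SP1 or later; the tag and policy names are made up:

```powershell
# Tag: move anything older than a year to the online archive.
New-RetentionPolicyTag "Archive-1yr" -Type All `
    -AgeLimitForRetention 365 -RetentionAction MoveToArchive

# Policy wrapping the tag, then assigned to a mailbox.
New-RetentionPolicy "Default-Archive" -RetentionPolicyTagLinks "Archive-1yr"
Set-Mailbox -Identity jdoe -RetentionPolicy "Default-Archive"
```

The Managed Folder Assistant then does the actual moves on its own schedule, which is what makes the housekeeping hands-off.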

Smaller mailboxes keep the DAG replicas more manageable, and online archives are an easy sell, especially if you inform the end user it can be some monstrous or unlimited size.

This plus automatic mailbox distribution takes care of 90% of my distribution woes.

I also use mimecast for journaling. Keeps it out of my hair, and for the purpose of audits it allows me granular control over custodians. It's cheap as poo poo, works well, and has fantastic auditing and role controls.

Blame Pyrrhus fucked around with this message at 05:53 on Sep 24, 2014

Blame Pyrrhus
Thanks for the answers, Will Styles. We are Office 2013 / Office 2011 across the board with no public folders, so there should not be any strange RPC wonkiness to cope with. And yeah, I'm also hard-pressed to find a compelling reason to upgrade, except to be current, which means easier migrations to the 2015 or whatever version comes out.

Will Styles posted:

Edit: Why does "Get-MsolUser -License <string>" not work? MSDN indicates it should :( I mean I know I can go through and filter locally based on the license parameter but with 300,000+ user objects it's just a pain in the rear end.

If it's the licenses property for the user you want, can you not use: get-msoluser | where {$_.licenses -match "LITEPACK"}

Or whatever string you are looking for?


Blame Pyrrhus

Will Styles posted:

You can, but I'd rather do server side filtering than client. We've got 300,000+ objects in the azure ad and querying every user can take some time and may fail because of network latency. If the -License parameter worked the filtering would be done on their side and they'd only send me the people I'm concerned with as opposed to every user I have.

Yeah, it kind of sucks when there isn't the expected filtering or the cmdlets don't accept piped queries the way you might expect. I find myself cursing out PowerCLI a lot because it's odd about which native cmdlets work via pipes. Though, if it weren't for PowerCLI's less-than-stellar native cmdlet sets, I'd probably never write my own functions.

Typically if I'm dealing with a tremendous number of objects like this, I would just store it to a large array and work with that array. So in this instance literally just set it up with: $users = get-msoluser

and then do whatever the hell you want with $users (assuming it populates without timing out).

I always pre-populate any large AD or Mailbox operations and queries this way, especially while building them out so it's not hammering the live data set every time I modify the query to get it to do what I want.
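A sketch of that pattern with the MSOnline cmdlets the earlier posts were using; this assumes an already-connected Connect-MsolService session, and the SKU string is illustrative:

```powershell
# Pull the whole object set once (can take a while at 300k+ objects).
$users = Get-MsolUser -All

# Then iterate queries locally against the array instead of hammering
# Azure AD every time the filter changes.
$litepack = $users | Where-Object { $_.Licenses.AccountSkuId -match "LITEPACK" }
$byDept   = $users | Group-Object Department | Sort-Object Count -Descending
```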

It's always neat to close a powershell window and watch my memory utilization drop by 2gb.
