That is how I’ve got it currently, though I was confused by this post:
https://social.technet.microsoft.com/Forums/systemcenter/en-US/6dae0b67-714a-4b89-8120-6981637a3707/scom-management-server-not-communicating-with-root-management-server?forum=operationsmanagergeneral
The first part seems to say to have a dedicated Management Server at the remote site, but then it goes on to say don't do that? Then it says maybe not a gateway either, so does that mean pointing the agents directly back to the UK, or am I just being thick and not reading it correctly?!
"If I have a remote location, but in the same kerberose realm as my data center, and no firewalls exist, then I would first just design the environment to have the agents report directly to a Management server in the remote data center. People often have this idea that since they have a remote location, they MUST place some sort of server there for moniotring, be it a gateway or a MS.
You DON'T enhance the design by automatically doing this. Placing a Management Server role "across the pond" is almost always a bad idea. People do this because they think having a MS to "queue" data will help when there is latency across the line. In fact, the MS queue is nothing compared to the cumulative power of the queues on the agents, and the SQL transport that the MS uses to write to the DBs does not handle latency well at all. Not to mention, I have seen remote Management Servers lock tables in the DB for longer periods during insertions, which caused blocking for other well-connected Management Servers.
So - a Management Server is out… why not a gateway? Well, placing a gateway doesn't help a lot in this scenario… it is really just a "queuing forwarder": it receives data from an agent and forwards it on to a MS. Agent > MS data is compressed, encrypted, and authenticated, and so is GW > MS data. Now - the gateway DOES HELP in some ways, by offering better compression, because it can receive data from multiple agents and doesn't have as much packet overhead as a single agent > MS channel. That said, the gateway also suffers from a bad WAN connection: with a large number of agents, it is possible for its queue to fill, and it will not block the agents; it will simply start dropping data from the queue. Additionally, adding a GW becomes another "config hop" in the design, where the agents behind a GW can take longer to get config than if they were reporting directly to a MS. They share a config request affinity and compete against other agents to get config from the MS… so if there is a big config update to all agents, it really can take much longer for agents reporting to a gateway. This is why we strongly recommend a dedicated MS for gateways to report to… to limit this "competition impact".
I hope I cleared this up. GWs are excellent at remote locations in situations where you have a different Kerberos realm (untrusted forest), where you have firewalls and need point-to-point holes opened, or when you have a significant number of agents and want the added benefit of compression on the communication channel to the MS. However, you must temper this with the possibly negative issue of FIFO queuing in the event of a WAN outage, the fact that the GW adds a config hop, complexity to the support model, and cost.
I am not trying to say "remove the gateway… it's a bad idea". It isn't. It just should be placed by thoughtful and tested design, and not "just because I have a remote location with agents, I need to put something there""
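If I'm reading the queueing point right, the numbers are what sell it. Here's a quick back-of-envelope sketch I put together in Python to convince myself, comparing the cumulative buffer of agents reporting directly against one shared queue on a remote MS/GW (every figure is my own illustrative assumption, not a SCOM default):

```python
# Back-of-envelope: cumulative agent queues vs one shared queue at a remote MS/GW.
# Every figure below is an illustrative assumption, not a SCOM default.

agent_count = 200            # agents at the remote site (assumption)
agent_queue_mb = 15          # per-agent Health Service queue (assumption)
shared_queue_mb = 100        # single queue on a remote MS/GW (assumption)
mb_per_agent_per_hour = 0.5  # data each agent generates (assumption)

# Agents reporting directly: each one buffers its own data during an outage.
cumulative_buffer_mb = agent_count * agent_queue_mb
agent_hours = agent_queue_mb / mb_per_agent_per_hour

# Agents behind one GW/MS: a single queue absorbs everyone's data at once.
shared_hours = shared_queue_mb / (agent_count * mb_per_agent_per_hour)

print(f"Direct agents: {cumulative_buffer_mb} MB of cumulative buffer, "
      f"roughly {agent_hours:.0f}h of outage coverage each")
print(f"Shared GW/MS queue: {shared_queue_mb} MB, starts dropping after "
      f"roughly {shared_hours:.1f}h")
```

With those made-up numbers the agents collectively buffer 3,000 MB and each rides out a long outage on its own, while the single shared queue is exhausted in about an hour, which I take to be the "cumulative power of queues on the agents" point.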
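And the "it will not block the agents, it will simply start dropping data" behaviour, as I understand it, is just a bounded FIFO queue: once it's full, something has to go, and the agents never find out. A toy model of that failure mode (mine, not SCOM's actual queue logic, and the oldest-first eviction is my assumption):

```python
from collections import deque

# Toy model of a bounded FIFO queue on a gateway during a WAN outage.
# Assumption: the oldest item is evicted when the queue is full; the real
# drop policy may differ, but the key point is the senders are never blocked.
queue = deque(maxlen=5)  # tiny capacity so the drops are visible

for item in range(1, 11):        # ten items arrive while the WAN is down
    if len(queue) == queue.maxlen:
        print(f"queue full, silently dropping oldest item {queue[0]}")
    queue.append(item)           # deque evicts the oldest automatically

print("data that survives the outage:", list(queue))
```

Running it, items 1 through 5 are dropped and only 6 through 10 survive, and nothing upstream ever pushed back, which sounds like exactly the quiet data loss being warned about.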