We’ve got a remote site in another country monitored by a gateway server connected back to SCOM in the UK. We’re looking at setting up another remote site, and I can’t decide whether to use a gateway again, or dedicated management servers. I’ve read Kevin Holman’s blog about it and I’m still not sure.
The remote site will likely be in the same forest. The link between UK and remote site is an unknown quantity at the moment. Likely to be less than 200 agents to monitor.
Should I put 2 management servers for HA over in the remote site, or 2 gateways over there, or just have agents report back to the UK? I’ve been told to plan for the worst-case scenario with the least reliance on the UK, so that would be a separate SCOM install I suppose, but ain’t nobody got time for that!
Any advice gratefully received.
It really depends on the link. The latency limit for safely placing a management server is around 150 ms. There are a few things to consider here: the link speed is a major one, and where the machines live is another. If they live in the same domain then, depending on the link, I might just point back to the existing management servers for that number of agents. A gateway would be useful if they were in a different domain. You still need to be cautious about latency, though – if the connection is not stable you could have problems.
I think link latency is the consideration / issue here
You cannot have a separate management server if the latency over the wire exceeds a certain threshold (can’t remember the exact figure) as it breaks SCOM
So in these situations you could have 200+ agents pointing to the UK – but you would need the ports opened 200 times
Far better to have a single server point to the UK and have a Gateway
I think it is also more efficient on the data transfer bandwidth required too
Basically, 99% of the time we put in Gateways
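To pull the rules of thumb from this thread together, here’s a rough sketch of the decision logic in Python. It is purely illustrative – the 150 ms latency ceiling and the ~200-agent figure come from the posts above, not from any official sizing guide, and the function name is made up:

```python
# Illustrative decision helper for the rules of thumb in this thread.
# The 150 ms and ~200-agent thresholds are taken from the discussion
# above, not from official Microsoft sizing guidance.

def recommend_topology(same_forest: bool, firewalled: bool,
                       latency_ms: float, agent_count: int) -> str:
    """Suggest where remote-site agents should report back to."""
    if not same_forest or firewalled:
        # Untrusted Kerberos realm, or you'd need per-agent firewall
        # holes opened: a gateway collapses all agents into one channel.
        return "gateway"
    if agent_count < 200 and latency_ms <= 150:
        # Trusted forest, decent link, modest agent count: agents can
        # simply report directly to the existing UK management servers.
        return "direct to UK management servers"
    # Large agent count or a shaky link: a gateway's compression helps,
    # but beware FIFO queue drops during a long WAN outage. A remote
    # management server stays off the table either way - its SQL
    # connection back to the databases handles latency badly.
    return "gateway"
```

So for the scenario in the question (same forest, under 200 agents, link quality unknown), everything hinges on what the latency measurement comes back as.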
That is how I’ve got it currently, though I was confused by this post:
The first part seems to say to have a dedicated Management Server on the remote site, but then it goes on to say don’t do that? Then it says maybe not a gateway either, so does that mean agents directly back to the UK, or am I just being thick and not reading it correctly?!
“If I have a remote location, but in the same Kerberos realm as my data center, and no firewalls exist, then I would first just design the environment to have the agents report directly to a Management server in the remote data center. People often have this idea that since they have a remote location, they MUST place some sort of server there for monitoring, be it a gateway or a MS.
You DON’T enhance the design by automatically doing this. Placing a Management server role “across the pond” is almost always a bad idea. People do this because they think having a MS to “queue” data will help when there is latency across the line. In fact – the MS queue is nothing compared to the cumulative power of queues on the agents, and the SQL transport that the MS uses to write to the DBs does not handle latency well at all. Not to mention, I have seen where remote management servers lock tables in the DB for longer periods during insertions, which caused binding for other well-connected management servers.
So – a management server is out… why not a gateway??? Well, placing a gateway doesn’t help a lot in this scenario…. it is really just a “queuing forwarder”…. it receives the data from an agent, and forwards it on to a MS. Agent > MS data is compressed, encrypted, authenticated, and so is the data from the GW > MS. Now – the gateway DOES HELP in some ways, by offering better compression, because it can receive data from multiple agents, and doesn’t have as much packet overhead as a single agent > MS channel. That said – the gateway also suffers from a bad WAN connection, in that with a large number of agents, it is possible for its queue to fill, and it will not block the agents – it will simply start dropping data from the queue. Additionally, adding a GW becomes another “config hop” in the design, where the agents behind a GW can take longer to get config than if they were reporting directly to a MS. They share a config request affinity, and compete against other agents to get config from the MS… so if there is a big config update to all agents, it really can take much longer for agents reporting to a gateway. This is why we strongly recommend a dedicated MS for gateways to report to… to limit this “competition impact”.
I hope I cleared this up. GWs are excellent at remote locations for situations where you have a different Kerberos realm (untrusted forest), where you have firewalls and need point-to-point holes opened, or when you have a significant number of agents and desire the added benefits of compression of the communication channel data to the MS. However – you must temper this with the possible negative issue of FIFO queueing in the event of a WAN outage, the fact that the GW adds a config hop, complexity to the support model, and cost.
I am not trying to say “remove the gateway… it’s a bad idea”. It isn’t. It just should be placed by thoughtful and tested design, and not “just because I have a remote location with agents, I need to put something there”.”
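The FIFO-drop behaviour described in that quote is worth making concrete. Here’s a toy model in Python – the queue capacity and event counts are invented for illustration (the real gateway queue size is a configurable SCOM setting), but it shows how, once the queue fills during a WAN outage, the oldest data is silently pushed out rather than the agents being blocked:

```python
from collections import deque

# Toy model of a gateway's bounded send queue filling during a WAN
# outage. The capacity (5 items) is invented for illustration; the
# real queue size is a configurable SCOM setting.
outage_queue = deque(maxlen=5)
dropped = 0

for event_id in range(8):          # 8 events arrive while the WAN is down
    if len(outage_queue) == outage_queue.maxlen:
        dropped += 1               # the oldest event is about to be lost
    outage_queue.append(event_id)  # deque(maxlen) silently evicts the oldest

print(list(outage_queue))  # [3, 4, 5, 6, 7] -- events 0-2 were dropped
print(dropped)             # 3
```

The agents never notice – they keep sending and the gateway keeps accepting – which is exactly why a long outage with a big agent count behind one gateway means data loss, not just delay.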