
Media Agent Networking

[fa icon="long-arrow-left"] Back to all posts

[fa icon="pencil'] Posted by Lewan Solutions [fa icon="calendar"] November 5, 2013

I get a lot of questions about the best way to configure networking for backup media agents or media servers in order to get the best throughput. I thought a discussion of how the networking (and link aggregation) works would help shed some light.

Client to Media Agent:
In general we consider the media agents to be the 'sink' for data flows during backup from clients. This data flow typically originates from many clients, all destined for a single media agent. Environments with multiple media agents can be thought of as multiple single-agent configurations.

The nature of this is that we have many flows from many sources destined for a single sink. If we want to utilize multiple network interfaces on the sink (the media agent), it is important that the switch to which it is attached be able to distribute the data across those interfaces. By definition, then, we must be in a switch-assisted link aggregation scenario, meaning the switch must be configured to use LACP or a similar protocol, and the server must be configured to use the same teaming method.

Why can't we use adaptive load balancing (ALB) or other non-switch-assisted methods? The issue is that the decision of which member of a link aggregation group a packet is transmitted over is made by the device transmitting the packet. In the scenario above, the bulk of the data is being transmitted from the switch to the media agent, so the switch must be configured to support spreading the traffic across multiple physical ports. ALB and other non-switch-assisted aggregation methods will not allow the switch to do this and will therefore result in the switch using only one member of the aggregation group to send data. The net result is that total throughput is restricted to that of a single link.
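To make the mechanics concrete, here is a minimal Python sketch of how a switch-assisted group spreads many client flows toward a single media agent. It is illustrative only: real switches hash frame headers in hardware, the fields used depend on the configured hash policy, and all addresses here are hypothetical.

```python
import hashlib
from collections import Counter

LINKS = 4  # physical members in the aggregation group toward the media agent

def egress_member(src_ip: str, dst_ip: str) -> int:
    """Pick a group member by hashing the source/destination pair,
    roughly how an IP-based switch hash policy behaves."""
    return hashlib.md5(f"{src_ip}->{dst_ip}".encode()).digest()[0] % LINKS

media_agent = "10.0.0.50"
clients = [f"10.0.0.{i}" for i in range(10, 30)]  # 20 hypothetical backup clients

# Many sources, one sink: differing source IPs give differing hash inputs,
# so the switch can spread the flows across all four members.
print(Counter(egress_member(c, media_agent) for c in clients))
# Expect traffic on all members (the exact split depends on the addresses).
```

Note that the member selection happens on the switch, which is exactly why a server-side scheme like ALB can't influence traffic flowing from the switch to the media agent.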

So, if you want to bond multiple 1GbE interfaces to support traffic from your clients to the media agent, the use of LACP or similar switch-assisted link aggregation is critical.

Media Agent to IP Storage:
Now, from the media agent to storage, most traffic originates at the media agent and is destined for the storage. There's not much in the way of many-to-one or one-to-many relationships here; it's all one-to-one. The first question is always "will LACP or ALB help?" The answer is probably no. Why is that?

First, understand that the media agent is typically connected to a switch, and the storage is typically attached to the same or another switch. Therefore we have two hops we need to address: media agent to switch, and switch to storage.

ALB does a very nice job of spreading transmitted packets from the media agent to the switch across multiple physical ports. Unfortunately, all of these packets are destined for the same IP and MAC address (the storage). So while the packets are received by the switch on multiple physical ports, they are all going to the same destination and thus leave the switch on the same port. If the media agent is attached via 1GbE and the storage via 10GbE, this may be fine. If it's 1GbE down to the storage, then the bandwidth will be limited to that single link.

But didn't I just say in the client section that LACP (switch-assisted aggregation) would address this? Yes and no. LACP can spread traffic across multiple links even if it has the same destination, but only if it comes from multiple sources. The reason is that LACP uses either an IP- or MAC-based hash algorithm to decide which member of an aggregation group a packet should be transmitted on. That means all packets originating from MAC address X and going to MAC address Y will always go down the same group member. The same is true for source IP X and destination IP Y. So while LACP may help balance traffic from multiple hosts going to the same storage, it can't solve the problem of a single host going to a single storage target.
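A small, self-contained sketch of that hash behavior (the same illustrative IP-pair hash as above; all addresses are made up):

```python
import hashlib

LINKS = 4

def egress_member(src_ip: str, dst_ip: str) -> int:
    # Illustrative IP-pair hash; real LACP hash policies vary by vendor.
    return hashlib.md5(f"{src_ip}->{dst_ip}".encode()).digest()[0] % LINKS

# One media agent to one storage target: the hash input never changes,
# so every frame in the conversation rides the same group member.
print(egress_member("10.0.0.50", "10.0.1.100"))  # same value every time

# Multiple hosts to the same storage target DO spread across members:
for host in ("10.0.0.50", "10.0.0.51", "10.0.0.52"):
    print(host, "->", egress_member(host, "10.0.1.100"))
```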

By the way, this is a big part of the reason we don't see many iSCSI storage vendors using a single IP for their arrays. By giving the arrays multiple IPs, it becomes possible to spread the network traffic across multiple physical switch ports and network ports on the array. Combine that with multiple IPs on the media agent host and multi-path I/O (MPIO) software, and now the host can talk to the array across all combinations of source and destination IPs (and thus physical ports) and fully utilize all the available bandwidth.
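A quick sketch of why that multiplies the usable paths. The addressing below is hypothetical; real deployments often place each initiator/target pair on its own subnet:

```python
from itertools import product

# Hypothetical addressing: two NICs on the media agent host,
# two target portal IPs on the iSCSI array.
host_ips = ["10.0.2.10", "10.0.3.10"]
array_ips = ["10.0.2.100", "10.0.3.100"]

# MPIO can run I/O over each (initiator IP, target IP) combination; each
# pair is a distinct hash input for the switch, and thus a distinct path.
for n, (src, dst) in enumerate(product(host_ips, array_ips), start=1):
    print(f"path {n}: {src} -> {dst}")
# 2 host IPs x 2 array IPs = 4 candidate paths instead of 1
```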

MPIO works great for iSCSI block storage. What about CIFS (or NFS) based storage? Unfortunately, MPIO sits low in the storage stack and isn't part of the network file-sharing (requester) stack used by CIFS and NFS, which means MPIO can't help. Worse, with the NFS and CIFS protocols the target storage is always defined by a single IP address or DNS name, so having multiple IPs on the array in and of itself doesn't help either.

So what can we do for CIFS (or NFS)? Well, if you create multiple share points (shares) on the storage and bind each to a separate IP address, you create a situation where each share has isolated bandwidth. By accessing the shares in parallel, you can aggregate that bandwidth (between the switch and the storage). To aggregate between the host and the switch, you must force traffic to originate from specific IPs or use LACP to spread the traffic across multiple host interfaces. You could simulate MPIO-type behavior by using routing tables to map a host IP to an array IP one-to-one, as sketched below. It can be done, but there is no 'easy' button.
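Here's a sketch of the bookkeeping that approach involves; all share names, addresses, and route mappings are hypothetical:

```python
# Each share is reached via its own array IP, and each array IP is pinned
# to a specific host interface (e.g. with static host routes), emulating
# MPIO-style one-to-one paths for CIFS/NFS traffic.
shares = {
    r"\\10.0.4.100\backup1": "10.0.4.100",  # share bound to array IP #1
    r"\\10.0.5.100\backup2": "10.0.5.100",  # share bound to array IP #2
}
pinned_routes = {
    "10.0.4.100": "10.0.4.10",  # array IP #1 reached via host NIC #1
    "10.0.5.100": "10.0.5.10",  # array IP #2 reached via host NIC #2
}
for unc, array_ip in shares.items():
    print(f"{unc} egresses via host IP {pinned_routes[array_ip]}")
```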

So, as we wrap this up, what do I recommend for media agent networking and IP storage?
On the front end – aggregate interfaces with LACP.
On the back end – use iSCSI and MPIO rather than CIFS/NFS, or use 10GbE if you want or need CIFS/NFS.

Topics: Data Backup & Recovery
