Monday, August 8, 2011

FCIP considerations for 10 GigE on Brocade FX8-24

While working with a client to architect a new FCIP solution, there were a number of considerations that needed to be addressed. With this particular implementation we are leveraging FX8-24 blades in a DCX chassis and attaching the 10 GbE XGE links to the network core.

With the current version of FOS (6.4.1a in this case), the 10 GbE interface is best utilized by combining multiple “circuits” into a single FCIP tunnel. Each circuit has a maximum bandwidth of 1 Gbps, and aggregating multiple circuits requires the Advanced Extension license. Each circuit needs an IP address on each end of the tunnel. Additionally, there are two 10 GbE XGE ports on each FX8-24 blade, and they must be placed in separate VLANs. Be sure to discuss and plan these requirements with your network team.
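As a rough sketch of what that addressing looks like on the switch (the slot number, XGE port, subnet, and MTU below are placeholders; verify the `portcfg ipif` syntax against the Fabric OS Command Reference for your release):

```shell
# Hypothetical addressing: slot 8, xge0, one address per planned
# 1 Gbps circuit, all in the replication VLAN's subnet, MTU 1500.
portcfg ipif 8/xge0 create 192.168.10.11 255.255.255.0 1500
portcfg ipif 8/xge0 create 192.168.10.12 255.255.255.0 1500
# ...repeat for the remaining circuit addresses...

# Verify the IP interfaces on the XGE port
portshow ipif 8/xge0
```

The far-end switch gets a matching set of interfaces, since every circuit needs an address pair.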

There are other considerations as well, such as using Virtual Fabrics to isolate the FCIP fabrics and allow them to merge between switches, or using the Integrated Routing feature (additional licensing) to configure the FCIP tunnels with VEX ports and avoid merging the fabrics altogether.

Regardless of the architecture (Virtual Fabrics vs. Integrated Routing), you will need to configure the 1 Gbps circuits appropriately. Determine the maximum bandwidth your link can sustain, and configure the FCIP tunnel to consume just under that maximum so that TCP congestion and sliding-window ramp-up don't slow down your overall throughput.

In our example, we want to consume about 6 Gbps of bandwidth between the two locations, so we will configure six circuits within the FCIP tunnel, each set just under the 1 Gbps maximum bandwidth.
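A sketch of that tunnel configuration, assuming VE port 12 in slot 8 and placeholder IP pairs (committed rates are specified in Kbps, so "just under 1 Gbps" becomes roughly 950000; check the exact `portcfg fciptunnel` / `portcfg fcipcircuit` arguments in the Command Reference for your FOS level):

```shell
# Create the tunnel between the local and remote circuit-0 addresses;
# circuit 0 is created along with the tunnel. Rate is in Kbps.
portcfg fciptunnel 8/12 create 10.1.2.10 10.1.1.10 950000

# Add the remaining circuits, each with its own remote/local IP pair.
portcfg fcipcircuit 8/12 create 1 10.1.2.11 10.1.1.11 950000
portcfg fcipcircuit 8/12 create 2 10.1.2.12 10.1.1.12 950000
# ...circuits 3 through 5 follow the same pattern...

# Confirm the tunnel and all of its circuits are up
portshow fciptunnel 8/12 -c
```

Six circuits at ~950 Mbps each keeps the aggregate just under the 6 Gbps target.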


2 comments:

  1. Hi Lumenate,

    I am designing the same setup with a DCX 8510-8 and the FX8-24 blade. We have Virtual Fabrics enabled and are using the XGE ports for FCIP trunking. Could you advise how exactly you designed your FCIP and routing (VE or VEX ports)?

    Ganesh

    ReplyDelete
  2. Ganesh,

    Thank you for your question. I am digging up my notes on this as it has been running in the environment now for quite some time. This should probably be a part 2 post, but for continuity I’ll post as a reply. Also note, there may be a different methodology to use with more current FOS releases.

    We chose not to use Virtual Fabrics in this case, because the client did not want to take the risk of rebooting the fabrics to enable VF, and was concerned about creating a complex environment for his successor(s) to manage when he moved on.

    For this environment, our XGE ports plugged into Nexus 7K core switches. We put XGE0 and XGE1 in separate VLANs across the Nexus core for redundancy. Each of the DCXs had an FX8-24 blade, so we had four XGE links into the core at each site. Our storage replication VLANs are separated so that we can control, through routing at the core and Traffic Isolation zones on the DCX, which WAN links we push data over at any given time.

    The first step was to enable the FC Routing (FCR) service using “fosconfig”. We also chose to disable all VE ports while configuring the tunnels. We configured VE ports at the secondary site and VEX ports at the primary site. Next, we created the FCIP tunnel and then added the additional circuits to it; circuits are created by assigning multiple IPs to an XGE port before you create the tunnels. After the tunnels were created, we enabled the VE ports.
    After verifying tunnel connectivity, we layered on the LSAN zones and TI (Traffic Isolation) zones to meet our design goals.
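    The steps above can be sketched as a command sequence (the VE port, fabric ID, and front domain values are placeholders, and the `portcfgvexport` options in particular should be verified against the Command Reference for your FOS release):

    ```shell
    # 1. Enable the FC Routing service (Integrated Routing license required)
    fosconfig --enable fcr

    # 2. Keep the VE port disabled while configuring
    portdisable 8/12

    # 3. At the primary site, configure the port as a VEX port
    #    (-a 1 enables admin, -f sets fabric ID, -d sets the front
    #    domain — all placeholder values here)
    portcfgvexport 8/12 -a 1 -f 10 -d 220

    # 4. Create the tunnel and circuits (see the examples in the post),
    #    then bring the VE/VEX port back up
    portenable 8/12

    # 5. Check tunnel and circuit state before adding LSAN and TI zones
    portshow fciptunnel 8/12 -c
    ```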

    ReplyDelete