
Brocade's big, fat datacenter fabric

DCX Backbone is the cornerstone of Brocade's policy-driven network

At 230 pounds, the Brocade DCX Backbone would be on the lighter end of middle linebackers in the NFL, but it's well-built to fill the middle of a storage network. Unveiled in late January, the DCX represents the first deliverable of Brocade's DCF (Data Center Fabric), the company's newly designed architecture that promises a more flexible, easier-to-manage, policy-driven network, one that embraces multiple connectivity protocols and is better able to respond to applications' demands and to support new technologies such as server virtualization.

In Brocade's vision, the DCX Backbone is the cornerstone of that architecture, with specs that suggest a level of performance never attained before. In fact, Brocade assures me that the DCX is capable of sustaining full transfer rates of 8Gbps on the 896 Fibre Channel ports supported in its largest, dual-chassis configuration.

In addition to FC, the DCX supports just about any other connectivity protocol, including FICON (Fibre Connection), FCIP (FC over IP), Gigabit Ethernet, and iSCSI. That versatility brings to mind the SilkWorm Multiprotocol Router, which was the first product from Brocade aimed at consolidating multiple SANs (see my review, "Building the uber-SAN").

In the belly of the beast

I recently had the chance to visit Brocade's labs to see what the DCX can do. Though my test configuration provided plenty of ports to spare, it's interesting to note that the DCX has dedicated ISL (Inter-Switch Link) ports that don't take away from the number of ports available for, say, storage arrays or application servers.

As impressive as the raw specs of the DCX may be, its most innovative features are software functions that give you better control of bandwidth allocation, let you restrict access to specific ports according to security policies, and allow you to create independent domains to manage different areas of the fabric separately.

I started my evaluation with the bandwidth monitoring features. In a traditional fabric, each connection acts like a garden hose, a passive conduit that has no ability to regulate the flow it carries. With DCX, Brocade's Adaptive Networking option lets you limit the I/O rate on selected ports, a feature that Brocade calls Ingress Rate Limiting.

Here is how it works. In my test configuration, Brocade had installed two DCX units: one linked to six HBAs on three hosts, the other linked to a storage array. To better show the traffic management capabilities of the DCX, each host HBA was assigned a dedicated LUN (logical unit number) and a dedicated storage port. The two DCX chassis were connected using two 4Gbps ISLs.

Using a simple Iometer script, I generated a significant volume of traffic on each host. To measure how that traffic spread across the fabric, I invoked Top Talkers, the performance monitoring tool. Fabric OS 6.0, which was running on both DCX chassis, adds the ability to define a Top Talkers profile either for specific ports or for the whole fabric.
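For readers who want to follow along, creating a port-level Top Talkers monitor from the Fabric OS CLI looks roughly like the sketch below; a fabric-wide flavor of the same monitor also exists. Treat the exact perfttmon options as illustrative and check the Fabric OS 6.0 command reference before relying on them.

    perfttmon --add egress 3/2    (watch the flows leaving slot 3, port 2)
    perfttmon --show 3/2 5 wwn    (list the five busiest source-destination pairs on that port, identified by WWN)
    perfttmon --delete 3/2        (remove the monitor when you're done)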

As the name suggests, Top Talkers lists the source-destination pairs that are carrying the most traffic. It told me that I had four source-destination pairs that were exchanging more than 40MB of data per second, and a fifth that was flowing at a trickle.

The next step was to limit the traffic flowing from one of those hosts in order to open more bandwidth to higher-priority streams. After moving to the CLI of the host-facing DCX, I typed portcfgqos --setratelimit 3/2 200, setting a maximum data rate of 200Mbps on slot 3, port 2 of the DCX, where the HBA in question was connected.
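Broken out of the prose, that step, plus an optional sanity check I'm including purely for illustration, boils down to the following on the host-facing chassis; whether the rate limit appears in the portcfgshow listing depends on the FOS release, so verify against the command reference.

    portcfgqos --setratelimit 3/2 200   (cap ingress traffic on slot 3, port 2 at 200Mbps)
    portcfgshow 3/2                     (display the port's configuration to confirm the new limit)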

Back on the storage-facing DCX, Top Talkers showed a much reduced traffic rate on that pair (fourth in the list), making more bandwidth available to the other pairs. The first three pairs were now flowing at 51.1MBps, 45.6MBps, and 45.5MBps respectively, while the fourth pair (previously running at 43.2MBps) had dropped to 14.5MBps.
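Once the exercise is over, lifting the cap is a one-liner; the companion option below is how I recall it being spelled in the Fabric OS documentation, so confirm the exact flag name against your release before using it.

    portcfgqos --resetratelimit 3/2     (remove the ingress rate limit and return the port to unrestricted flow)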

