Be sure to review the NIC support information carefully before making any NIC purchases.

Which NIC driver to use

If the custom driver is compatible with your card as per Hardware Recommendations, use the guidance below to decide whether to use the custom driver or to stay with the native driver. (If your NIC does not support a custom driver, you must stay with the native driver.)

High Speed AMD

For High Speed AMD, and the L size in particular, we strongly recommend using the custom drivers. A High Speed AMD performs up to twice as well with custom drivers. See AMD Performance Estimates for Different Traffic Profiles for details.

Classic AMD

  • If your NIC is not compatible with the custom driver, use the native driver.

  • If your NIC is compatible with the custom driver and one or more of the following is true, use the custom driver:

      • Autodiscovery is turned off.

      • You need to filter traffic.

      • You are automatically load-balancing two or more AMDs.

      • You are using user-defined software service filtering.

      • You are using WAN optimization.

      • Your network is subject to traffic spikes that trigger traffic sampling on the AMD.

  • If your NIC is compatible with the custom driver but you have no compelling reason (as listed above) to use it, use the custom driver anyway. The native and custom drivers offer similar performance, but we still recommend the custom driver for the reasons described below.

 

The rest of this section explains the reasoning behind the preceding recommendations.

Before DC RUM 12.3, the default setting for the AMD was to monitor only the traffic belonging to user-defined software services (user.map.only=true).

  • The rest of the traffic coming from the SPAN ports was filtered out by the custom driver.

  • If a native driver was used, filtering was not performed at the driver level, but at a higher level (the packet processing layer in the AMD analysis layer), which was less efficient. As a result, high CPU usage was observed on those AMDs (on CPU #0) in such cases.

Starting with DC RUM 12.3, a new installation enables autodiscovery mode on the AMD by default. This is equivalent to monitoring all traffic (user.map.only=false). In this scenario, where there is no packet filtering, the custom driver has no performance advantage over the native driver. (However, an upgrade to DC RUM 12.3 from an earlier release does not automatically enable autodiscovery.)
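For reference, the following is a minimal sketch of how you might check which mode an AMD is currently configured for by looking up the user.map.only property. The property name comes from the discussion above; the file path (/usr/adlex/config/rtm.config) and the parsing approach are assumptions about your installation, so adjust them as needed.

    # Minimal sketch: report how user.map.only is set on an AMD.
    # The config path below is an assumption; adjust it to your installation.
    from pathlib import Path

    CONFIG_PATH = Path("/usr/adlex/config/rtm.config")  # assumed properties file location

    def read_property(path, key):
        """Return the value of a key=value property, ignoring comments and blanks."""
        for raw in path.read_text().splitlines():
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            name, _, value = line.partition("=")
            if name.strip() == key:
                return value.strip()
        return None

    value = read_property(CONFIG_PATH, "user.map.only")
    if value is None:
        print("user.map.only is not set explicitly in this file")
    elif value.lower() == "true":
        print("Monitoring only user-defined software services (pre-12.3 default)")
    else:
        print("Monitoring all traffic (autodiscovery)")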

The performance of the native driver and the custom driver in DC RUM 12.3 and later is almost the same, assuming autodiscovery is running (that is, the AMD is analyzing all traffic arriving on the sniffing cards). Autodiscovery is enabled by default in all new installations starting with release 12.3.

However, be aware that there can be a performance difference between the custom driver and native driver when you are filtering packets or using various functionalities that depend on filtering.

In addition, the custom driver offers better diagnostics and slightly more accurate time measurements (especially if values are very low, such as with local RTT).

 

Table 2. Pros and cons of custom and native drivers

Custom driver

Pros:

  • Provides superior performance in packet filtering and processing due to custom-designed data acquisition techniques.

  • Provides additional driver and NIC statistics, available through rcon and the RUM Console, that are helpful in troubleshooting port mirroring.

  • In terms of the driver, it does not matter whether the card is fiber or copper.

Cons:

  • Available for a relatively small subset of NICs. In 12.4, custom drivers are available only for 10 Gb cards.

  • rcon-based tcpdump captures only the packets that pass the configured filters.

Native driver

Pros:

  • Can be used with the majority of cards available on the market, including cards not tested by Dynatrace.

  • Is provided with the Red Hat distribution and requires no additional installation steps.

Cons:

  • Decreased performance (up to 50% in some cases) in data filtering and processing.

  • Increased CPU load during filtering, even when the analyzed traffic is of low volume.

  • The load-balancing feature is not supported.

  • Sampling mode is not supported.

  • Only partial rcon and RUM Console statistics pertaining to the driver and NIC are available.

  • Sudden large traffic peaks or performance drops for any reason (such as a deadlock in an analyzer) may cause the RTM process to run out of memory and crash when the native driver is used. With the custom driver, the same situation results only in extensive packet dropping.

 

Be aware that some vendors offer cards that are similar to the cards we recommend but that are still incompatible with the custom driver. If you are considering using a card that has not been tested, be sure to contact Support first.

For more information, see Hardware Recommendations.

Ensuring NIC Driver Compatibility

When selecting NICs for an AMD, be sure to follow these rules:

  • Two cards are compatible if they can use the same custom driver. Check the tables to see which custom driver each card takes.

  • You cannot mix custom and native drivers on the same AMD.

  • You cannot mix different custom drivers in the same AMD.

  • You can mix unsupported NICs with supported NICs as long as they all use the native driver.

  • You cannot mix Broadcom sniffing NICs with Intel.

  • You cannot mix 1 GbE sniffing NICs with 10 GbE.

    However, you can monitor 1 GbE networks and 10 GbE networks on the same AMD when using a 10 GbE card as long as the 10 GbE NIC also supports 1 GbE data rates and you use the same custom 10 GbE driver to support both data rates.
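To sanity-check these rules on an existing system, the sketch below reads the kernel driver bound to each sniffing interface from sysfs and warns when the interfaces resolve to different drivers. This is an illustrative example only; the interface names (eth1, eth2) are placeholders for the interfaces you intend to dedicate to sniffing.

    # Sketch: list the kernel driver bound to each candidate sniffing interface
    # and flag mixed drivers. Interface names are placeholders.
    import os

    SNIFFING_INTERFACES = ["eth1", "eth2"]  # placeholders; use your sniffing interfaces

    def driver_of(interface):
        """Resolve the driver module bound to an interface via sysfs."""
        link = f"/sys/class/net/{interface}/device/driver"
        if not os.path.exists(link):
            return None
        return os.path.basename(os.path.realpath(link))

    drivers = {iface: driver_of(iface) for iface in SNIFFING_INTERFACES}
    for iface, drv in drivers.items():
        print(f"{iface}: {drv or 'no driver found'}")

    if len({d for d in drivers.values() if d}) > 1:
        print("Warning: mixed drivers detected; all sniffing NICs on one AMD must use the same driver.")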

Testing Other Cards

We have tested the cards listed in Hardware Recommendations. We make no warranty, implied or otherwise, regarding any other card's performance or reliability. Customers have reported positive results with the cards listed in NICs tested by the DC RUM community.

If you want to try an untested card to see whether it works correctly with DC RUM, you are welcome to conduct your own tests.

When choosing a card to test, be sure that your deployment adheres to the restrictions described above in Ensuring NIC Driver Compatibility and that your card uses one of the drivers listed below.

  • Red Hat Enterprise Linux 7: ixgbe

  • Red Hat Enterprise Linux 6: ixgbe, bnx2x, igb, e1000e, e1000, bnx2
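If you want to confirm programmatically which driver a candidate card uses, the following sketch parses the output of ethtool -i and compares the result against the lists above. It is only an illustrative example; the interface name (eth1) and the release selection are placeholders to adjust for your system.

    # Sketch: check whether an interface's kernel driver is one of the native
    # drivers listed above for your Red Hat release. Requires ethtool.
    import subprocess

    NATIVE_DRIVERS = {
        "rhel7": {"ixgbe"},
        "rhel6": {"ixgbe", "bnx2x", "igb", "e1000e", "e1000", "bnx2"},
    }

    def driver_of(interface):
        """Parse the 'driver:' line from `ethtool -i <interface>`."""
        output = subprocess.run(
            ["ethtool", "-i", interface], capture_output=True, text=True, check=True
        ).stdout
        for line in output.splitlines():
            if line.startswith("driver:"):
                return line.split(":", 1)[1].strip()
        return None

    driver = driver_of("eth1")   # placeholder interface name
    release = "rhel6"            # adjust to your AMD's Red Hat release
    if driver in NATIVE_DRIVERS[release]:
        print(f"{driver} is among the drivers listed for {release}")
    else:
        print(f"{driver} is not in the list for {release}")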

If you do test another card, please let us know how things turn out.

 
