The DPAA NIC PMD (librte_pmd_dpaa) provides poll mode driver support for the inbuilt NIC found in the NXP DPAA SoC family.
More information can be found at NXP Official Website.
This section provides an overview of the NXP DPAA architecture and how it is integrated into the DPDK.
Contents summary

- DPAA overview
- DPAA driver architecture overview
Reference: FSL DPAA Architecture.
The QorIQ Data Path Acceleration Architecture (DPAA) is a set of hardware components on specific QorIQ series multicore processors. This architecture provides the infrastructure to support simplified sharing of networking interfaces and accelerators by multiple CPU cores, and the accelerators themselves.
DPAA includes:

- Cores
- Network and packet I/O
- Hardware offload accelerators
- Infrastructure required to facilitate flow of packets between the components above

Infrastructure components are:

- The Queue Manager (QMan) is a hardware accelerator that manages frame queues. It allows CPUs and other accelerators connected to the SoC datapath to enqueue and dequeue ethernet frames, thus providing the infrastructure for data exchange among CPUs and datapath accelerators.
- The Buffer Manager (BMan) is a hardware buffer pool management block that allows software and accelerators on the datapath to acquire and release buffers in order to build frames.

Hardware accelerators are:

- SEC - Cryptographic accelerator
- PME - Pattern matching engine

The Network and packet I/O component:

- The Frame Manager (FMan) is a key component in the DPAA and makes use of the DPAA infrastructure (QMan and BMan). FMan is responsible for packet distribution and policing. Each frame can be parsed and classified, and results may be attached to the frame, which in turn can be used to distribute the frame to a specific QMan frame queue.
This section provides an overview of the drivers for DPAA:

- Bus driver and associated "DPAA infrastructure" drivers
- Functional object drivers (such as Ethernet)

A brief description of each driver is provided in the layout below, as well as in the following sections.
                                       +------------+
                                       | DPDK DPAA  |
                                       |    PMD     |
                                       +-----+------+
                                             |
                                       +-----+------+       +---------------+
                                       :  Ethernet  :.......| DPDK DPAA     |
                    . . . . . . . . .  :   (FMAN)   :       | Mempool driver|
                   .                   +---+---+----+       |  (BMAN)       |
                  .                        ^   |            +-----+---------+
                 .                         |   |<enqueue,         .
                .                          |   | dequeue>         .
               .                           |   |                  .
              .                        +---+---V----+             .
             .      . . . . . . . . . .: Portal drv :             .
            .      .                   :            :             .
           .      .                    +-----+------+             .
          .      .                     :   QMAN     :             .
         .      .                      :  Driver    :             .
    +----+------+-------+              +-----+------+             .
    |   DPDK DPAA Bus   |                    |                    .
    |   driver          |....................|.....................
    |   /bus/dpaa       |                    |
    +-------------------+                    |
                                             |
    ========================== HARDWARE =====|========================
                                            PHY
    =========================================|========================
In the above representation, solid lines represent components which interface with the DPDK RTE framework and dotted lines represent DPAA internal components.
The DPAA bus driver is an rte_bus driver which scans the platform-like bus. Key functions include:

- Scanning and parsing the various objects and adding them to their respective device list
- Performing a probe for available drivers against each scanned device
- Creating the necessary Ethernet instance before passing control to the PMD

The DPAA PMD is a traditional DPDK PMD which provides the necessary interface between the RTE framework and the DPAA internal components/drivers.
Features of the DPAA PMD are:
- Multiple queues for TX and RX
- Receive Side Scaling (RSS)
- Packet type information
- Checksum offload
- Promiscuous mode
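Several of these features can be exercised directly from testpmd. The sketch below uses illustrative queue counts; the flags are standard testpmd options, not DPAA-specific ones:

./arm64-dpaa-linuxapp-gcc/testpmd -c 0xff -n 1 \
-- -i --rxq=4 --txq=4 --rss-ip --enable-rx-cksum

Here --rxq/--txq request multiple Rx and Tx queues per port, --rss-ip enables Receive Side Scaling over IP fields, and --enable-rx-cksum turns on Rx checksum offload.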
DPAA has a hardware offloaded buffer pool manager, called BMan, or Buffer Manager.
There are four main prerequisites for executing the DPAA PMD on a DPAA-compatible board:

1. ARM 64 Tool Chain

   For example, the *aarch64* Linaro Toolchain.

2. Linux Kernel

   It can be obtained from NXP's Github hosting.

3. Rootfile system

   Any aarch64 supporting filesystem can be used. For example, Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained from here.

4. FMC Tool

   Before any DPDK application can be executed, the Frame Manager Configuration Tool (FMC) needs to be run to set the configuration of the queues. This includes the queue state, RSS and other policies. It can be obtained from the NXP (Freescale) Public Git Repository; a sketch of a typical invocation follows this list.

   This tool needs configuration files which are available in the DPDK Extra Scripts, described below for DPDK usages.
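A typical FMC run pairs an FMan configuration file with a policy (distribution/RSS) file and applies them to the hardware. The sketch below assumes the common fmc -c/-p/-a interface; the XML file names are placeholders for the files shipped with the DPDK Extra Scripts:

fmc -c usdpaa_config_ls1043.xml -p usdpaa_policy_hash_ipv4.xml -a

Here -c names the FMan configuration file, -p the policy file, and -a applies the resulting configuration to the hardware.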
As an alternative method, the DPAA PMD can also be executed using images provided as part of the SDK from NXP. The SDK includes all the above prerequisites necessary to bring up a DPAA board.
The following dependencies are not part of DPDK and must be installed separately:
NXP Linux SDK
The NXP Linux software development kit (SDK) includes support for the family of QorIQ® ARM-architecture-based system-on-chip (SoC) processors and corresponding boards.
It includes the Linux board support packages (BSPs) for NXP SoCs, a fully operational tool chain, and kernel and board-specific modules.
SDK and related information can be obtained from: NXP QorIQ SDK.
DPDK Extra Scripts
DPAA based resources can be configured easily with the help of ready-made scripts as provided in the DPDK Extra repository.
Currently supported by DPDK:

- NXP SDK 2.0+
- Supported architectures: arm64 LE

Follow the DPDK Getting Started Guide for Linux to set up the basic DPDK environment.
Note
Some parts of the dpaa bus code (the qbman and fman libraries) are dual licensed (BSD & GPLv2); however, they are used under the BSD license in DPDK userspace.
The following options can be modified in the config file. Please note that enabling debugging options may affect system performance.
CONFIG_RTE_LIBRTE_DPAA_BUS (default n)
By default it is enabled only for defconfig_arm64-dpaa-* config. Toggle compilation of the librte_bus_dpaa driver.
CONFIG_RTE_LIBRTE_DPAA_PMD (default n)
By default it is enabled only for defconfig_arm64-dpaa-* config. Toggle compilation of the librte_pmd_dpaa driver.
CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER (default n)
Toggles display of bus configurations and enables a debugging queue to fetch error (Rx/Tx) packets to driver. By default, packets with errors (like wrong checksum) are dropped by the hardware.
CONFIG_RTE_LIBRTE_DPAA_HWDEBUG (default n)
Enables debugging of the Queue and Buffer Manager layer which interacts with the DPAA hardware.
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS (default dpaa)
This is not a DPAA-specific configuration; it is a generic RTE config. For optimal performance and hardware utilization, it is expected that the DPAA Mempool driver is used for mempools. For that, this configuration needs to be enabled.
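Taken together, a build-time configuration that enables the DPAA bus, the DPAA PMD and the DPAA mempool ops would contain lines like the following (a sketch; the defconfig_arm64-dpaa-* configs already set these):

CONFIG_RTE_LIBRTE_DPAA_BUS=y
CONFIG_RTE_LIBRTE_DPAA_PMD=y
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"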
DPAA drivers use the following environment variables to configure their state during application initialization:
DPAA_NUM_RX_QUEUES (default 1)
This defines the number of Rx queues configured for an application, per port. The hardware distributes received packets across this number of queues. If the application is configured to use fewer queues than the number configured here, it might result in packet loss (because of the distribution).
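For example, to let the hardware spread received traffic across four Rx queues per port (the queue count here is purely illustrative):

export DPAA_NUM_RX_QUEUES=4
./arm64-dpaa-linuxapp-gcc/testpmd -c 0xff -n 1 -- -i --rxq=4 --txq=4

The application should be configured with at least as many Rx queues as this variable specifies, otherwise distributed packets may be lost.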
Refer to the document compiling and testing a PMD for a NIC for details.
Running testpmd:
Follow instructions available in the document compiling and testing a PMD for a NIC to run testpmd.
Example output:
./arm64-dpaa-linuxapp-gcc/testpmd -c 0xff -n 1 \
-- -i --portmask=0x3 --nb-cores=1 --no-flush-rx
.....
EAL: Registered [pci] bus.
EAL: Registered [dpaa] bus.
EAL: Detected 4 lcore(s)
.....
EAL: dpaa: Bus scan completed
.....
Configuring Port 0 (socket 0)
Port 0: 00:00:00:00:00:01
Configuring Port 1 (socket 0)
Port 1: 00:00:00:00:00:02
.....
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd>
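At the prompt, forwarding can be started and per-port statistics inspected with the standard testpmd commands, for example:

testpmd> start
testpmd> show port stats all
testpmd> stop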
DPAA drivers for DPDK can only work on NXP SoCs as listed in the Supported DPAA SoCs.
The DPAA SoC family supports a maximum jumbo frame size of 10240 bytes. This value is fixed and cannot be changed. So, even when the rxmode.max_rx_pkt_len member of struct rte_eth_conf is set to a value lower than 10240, frames up to 10240 bytes can still reach the host interface.
The current version of the DPAA driver doesn't support multi-process applications in which I/O is performed using secondary processes. This feature will be implemented in subsequent versions.