Configurations
NVIDIA SKU # | OPN # | Data Rate | Network Interface | Ports | PCIe | Technology | Crypto |
 | MBF1M606A-ESNAT | 2x100GbE/EDR100 | QSFP28 | Dual | PCIe 4.0 x16 | VPI | Disabled |
Supported Platform
Installed In: Ceres DF-3015, DF-3060, DF-30120
Specifications
Feature | Description |
PCI Express (PCIe) | PCIe Gen 3.0/4.0, x16 host interface (see BlueField SoC below) |
Up to 100 Gigabit Ethernet | Dual ports of up to 100Gb/s Ethernet (2x100GbE; see Configurations above) |
On-board Memory | |
BlueField SoC | The BlueField-2 SoC integrates eight 64-bit Armv8 A72 cores interconnected by a coherent mesh network, one DRAM controller, an RDMA intelligent network adapter supporting up to 200Gb/s, an embedded PCIe switch with endpoint and root complex functionality, and up to 16 lanes of PCIe Gen 3.0/4.0. |
Overlay Networks | In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. The BlueField BF1600 Controller Card effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and decapsulate the overlay protocol. |
RDMA and RDMA over Converged Ethernet (RoCE) | The BlueField BF1600 Controller Cards, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, deliver low latency and high performance over Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks. |
NVIDIA PeerDirect® | PeerDirect communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. The BlueField BF1600 Controller Card's advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes. |
Quality of Service (QoS) | Support for port-based Quality of Service enabling various application requirements for latency and SLA. |
Storage Acceleration | A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage RDMA for high-performance storage access. The BlueField-2 SmartNIC may operate as a co-processor offloading specific storage tasks from the host, isolating part of the storage media from the host, or enabling abstraction of software-defined storage logic. |
High-Performance Accelerations | |
GPU Direct | The latest advancement in GPU-GPU communication is GPUDirect RDMA, which provides a direct P2P (Peer-to-Peer) data path between GPU memory and NVIDIA HCA devices. This significantly decreases GPU-GPU communication latency and completely offloads the CPU, removing it from all GPU-GPU communications across the network. The BlueField BF1600 Controller Card uses high-speed DMA transfers to copy data between P2P devices, resulting in more efficient system applications. |
Security Accelerators | A consolidated compute and network solution based on the BlueField Controller Card achieves significant advantages over a centralized security server solution. Standard encryption protocols and security applications can leverage BlueField compute capabilities and network offloads for security application solutions. |
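The overlay-network offloads described above work on encapsulated traffic such as VXLAN. As a rough illustration of what the encapsulation engines add and strip in hardware, here is a minimal Python sketch of the 8-byte VXLAN header defined in RFC 7348 (illustrative only, not the card's firmware; the outer Ethernet/IP/UDP headers, which in practice carry UDP destination port 4789, are omitted):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348):
    1 flags byte (I bit set = VNI valid), 3 reserved bytes,
    24-bit VXLAN Network Identifier (VNI), 1 reserved byte."""
    assert 0 <= vni < 2**24
    flags = 0x08  # I flag: VNI field is valid
    return struct.pack("!B3s3sB", flags, b"\x00" * 3, vni.to_bytes(3, "big"), 0)

def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    # A real NIC or vswitch would further wrap this in outer
    # UDP/IP/Ethernet headers; the offload engine builds/strips all of it.
    return vxlan_header(vni) + inner_frame

def vni_of(packet: bytes) -> int:
    # The VNI occupies bytes 4..6 of the VXLAN header.
    return int.from_bytes(packet[4:7], "big")
```

Decapsulation is the reverse: the engine validates the header, extracts the VNI to pick the tenant network, and hands the inner frame to the usual stateless offloads.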
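Likewise, the RoCE v2 traffic the card accelerates is InfiniBand transport carried over UDP destination port 4791. A hedged sketch of the 12-byte Base Transport Header (BTH) that follows the UDP header, with the solicited-event/migration/pad/version flag bits simplified to zero (field layout per the InfiniBand specification; not the card's actual datapath):

```python
import struct

ROCEV2_UDP_PORT = 4791  # IANA-assigned UDP port identifying RoCEv2

def bth(opcode: int, dest_qp: int, psn: int, pkey: int = 0xFFFF) -> bytes:
    """Pack a simplified 12-byte Base Transport Header:
    opcode (1B), flags (1B, zeroed here), partition key (2B),
    reserved byte + 24-bit destination queue pair (4B),
    ack-request bit + 24-bit packet sequence number (4B, A bit zeroed)."""
    assert 0 <= dest_qp < 2**24 and 0 <= psn < 2**24
    return struct.pack("!BBHII", opcode, 0, pkey, dest_qp, psn)
```

Hardware congestion control and DCB operate on exactly this framing, which is why RoCE keeps RDMA semantics while remaining routable over ordinary Layer 3 Ethernet fabrics.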
Supported Platforms
DBoxes: DF-3015v1, DF-3060v1
Product Images
Rear:
Manufacturer Documentation
Spec Sheet: https://network.nvidia.com/files/doc-2020/pb-bluefield-storage-controller-card.pdf
Spec Sheet (VPI): https://network.nvidia.com/files/doc-2020/pb-bluefield-storage-controller-card-vpi.pdf
User Manual: https://docs.nvidia.com/nvidia-bluefield-bf1600-infiniband-ethernet-controller-card-user-manual.pdf