For over a decade, attaching storage devices directly to the PCIe bus has predominantly been accomplished through NVMe.
You probably remember how storage devices like hard drives and SSDs used to connect over SAS or SATA. While SATA SSDs played a part in the move toward faster, more robust storage, they still catered to users of the disk-drive era, speaking the same legacy protocols that were designed around spinning media.
Underlying all of these devices is NVM, which stands for non-volatile memory. In practice, the non-volatile part refers chiefly to NAND flash, meaning the term covers everything from USB drives and SSDs to memory cards.
NVMe (NVM Express), on the other hand, was designed from the ground up for flash storage. It improves communication between modern memory and the host by exposing far higher levels of parallelism: up to 64K command queues, each up to 64K commands deep, compared to AHCI's single queue of 32 commands.
In turn, NVMe reduced I/O overhead and improved read and write performance with significantly lower latency.
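To make the parallelism claim concrete, here is a back-of-the-envelope Python sketch using Little's law (throughput = outstanding commands / latency). The queue limits are the protocol maximums mentioned above; the 100 µs device latency and the "8 queues of 1024" NVMe configuration are illustrative assumptions, not measurements:

```python
# Little's law: max throughput = concurrency / per-command latency.
LATENCY_S = 100e-6  # 100 microseconds per I/O (illustrative assumption)

def max_iops(queues: int, queue_depth: int, latency_s: float = LATENCY_S) -> float:
    """Upper bound on IOPS when every queue slot is kept busy."""
    return queues * queue_depth / latency_s

# AHCI (SATA): one queue, 32 commands deep.
ahci = max_iops(queues=1, queue_depth=32)

# NVMe allows up to 65,535 queues of 65,536 commands each; real devices
# expose far fewer, so model a modest 8 queues of depth 1024.
nvme = max_iops(queues=8, queue_depth=1024)

print(f"AHCI ceiling: {ahci:,.0f} IOPS")   # 320,000
print(f"NVMe ceiling: {nvme:,.0f} IOPS")   # 81,920,000
```

Real-world IOPS are far below these ceilings, but the model shows why deeper, more numerous queues let flash devices keep many operations in flight at once.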
But there was a big problem: NVMe remained tied to the host system.
Enter NVMe-oF (NVMe over Fabrics).
This network storage protocol solves the issue by letting you run NVMe in full over a fabric such as Fibre Channel, Ethernet, or InfiniBand.
Thus, a dedicated JBOF (just a bunch of flash) can deliver NVMe speeds over the network by communicating via RDMA (Remote Direct Memory Access). This drastically increases throughput, since the host's OS kernel and CPU are bypassed on the data path.
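A purely conceptual Python sketch of why that bypass matters: a conventional TCP receive path stages data through kernel buffers (with CPU-driven copies), while RDMA lets the NIC write straight into pre-registered application memory. The step names and copy counts below are simplifications for illustration:

```python
def copies_traditional_path() -> list[str]:
    """Data movement on a conventional TCP receive (simplified)."""
    return [
        "NIC -> kernel socket buffer",   # DMA into kernel memory
        "kernel buffer -> app buffer",   # CPU memcpy during recv()
    ]

def copies_rdma_path() -> list[str]:
    """RDMA: the NIC writes directly into registered app memory."""
    return [
        "NIC -> registered app buffer",  # one DMA, no CPU on the data path
    ]

print(len(copies_traditional_path()), "copy steps vs", len(copies_rdma_path()))
```

Fewer copies means less CPU time burned per byte moved, which is where the "OS and CPU bypassed" speedup comes from.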
Purpose-built ASICs help deliver these benefits. For instance, Western Digital's RapidFlex C1000 network adapter, powered by the RapidFlex A1000 ASIC, bridges the Ethernet fabric directly to the PCIe connections inside NVMe server racks. It has a low TDP and requires no external DRAM.
The RapidFlex A1000 supports RoCEv1 and RoCEv2 at up to 100 Gb/s Ethernet. And while InfiniBand is an excellent RDMA networking option, RDMA over Converged Ethernet delivers comparable speed on commodity Ethernet infrastructure.
Another upside of NVMe-oF is that it provides local-like connectivity between your storage and the applications running on large servers. Through a network switch, applications in your data center that normally rely on DAS (direct-attached storage) can treat networked flash as if it were directly attached.
With wider bandwidth and local-like speeds, applications run flawlessly on networked storage.
NVMe-oF handles storage efficiently and lets you make your architecture more parallel. This not only helps you avoid bottlenecks in modern applications but also adds flexibility to your network infrastructure.
If you connect multiple storage racks to a switch via NVMe-oF, reading and writing data to the cloud also becomes much faster. By taking advantage of RDMA-over-Ethernet capabilities, remote file access can approach local speeds.
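As a rough sanity check on the bandwidth side of that claim, here is the arithmetic in Python. The link rates are nominal line rates and protocol overhead is ignored, so these are best-case figures:

```python
def transfer_seconds(size_gb: float, link_gbps: float) -> float:
    """Seconds to move size_gb gigabytes over a link of link_gbps
    gigabits per second, ignoring protocol overhead."""
    return size_gb * 8 / link_gbps

DATASET_GB = 1000  # a 1 TB working set (illustrative)

sata = transfer_seconds(DATASET_GB, 6)      # SATA III: 6 Gb/s
eth100 = transfer_seconds(DATASET_GB, 100)  # 100 GbE fabric

print(f"SATA III: {sata / 60:.0f} min")   # 22 min
print(f"100 GbE:  {eth100 / 60:.1f} min") # 1.3 min
```

The fabric link is roughly 16x wider than a local SATA connection, which is why networked NVMe can outrun the direct-attached storage it replaces.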
The same practice of disaggregation can encompass components beyond storage, such as memory. This approach is known as CDI (Composable Disaggregated Infrastructure), and it works by partitioning resources like CPU, memory, and storage into resource pools. Software then takes over, allocating those resources to the various applications running on the CDI.
With CDI, servers become as flexible as the cloud and as fast as local storage.
CDI operates much like virtual machines, where one computer partitions its resources into a multitude of instances. Similarly, CDI components can be assigned in real time according to the demands of the workload: resources are allocated from, and returned to, the shared pools dynamically.
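A minimal Python sketch of that pooling idea (the class and method names are invented for illustration; in a real CDI, composer software orchestrates this, not application code):

```python
class ResourcePool:
    """Toy model of a composable resource pool: workloads check
    resources out and return them when they finish."""

    def __init__(self, total_gb: int):
        self.total_gb = total_gb
        self.allocated: dict[str, int] = {}

    @property
    def free_gb(self) -> int:
        return self.total_gb - sum(self.allocated.values())

    def allocate(self, workload: str, gb: int) -> bool:
        """Compose gb of this resource into a workload, if available."""
        if gb > self.free_gb:
            return False
        self.allocated[workload] = self.allocated.get(workload, 0) + gb
        return True

    def release(self, workload: str) -> None:
        """Return a workload's entire share to the pool."""
        self.allocated.pop(workload, None)

memory = ResourcePool(total_gb=1024)   # pooled DRAM across the rack
memory.allocate("analytics", 512)
memory.allocate("inference", 256)
print(memory.free_gb)                  # 256
memory.release("analytics")            # workload done: give it back
print(memory.free_gb)                  # 768
```

The point is the lifecycle: capacity is carved out on demand and flows back to the pool the moment a workload no longer needs it, instead of sitting stranded inside one server.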
With CDIs, you can create next-level shared network infrastructure and improve the environment for multiple users.
Dataknox offers Western Digital NVMe-oF SSD storage servers so you can easily gain access to faster data transfers. We’re available around the clock and can help you get started with NVMe-oF or craft an optimal CDI strategy.
Contact us now to learn more!