Come explore AI technology at booth 471 during SC23 in Denver, from November 12th to 17th — Experience it before you invest!

High Performance Deep Learning and AI Server

Dataknox offers cutting-edge AI hardware solutions, revolutionizing AI applications and research with its High-Performance Deep Learning and AI Servers. These servers, equipped with advanced technology, are tailored for optimal performance in complex AI environments.

AI Servers

In the rapidly evolving landscape of AI and deep learning, where the demand for computational power is at an all-time high, High-Performance Deep Learning and AI Servers are crucial. These servers, designed to manage advanced algorithms and large datasets, are pivotal in accelerating deep learning tasks, ensuring faster data processing and shorter training times.

Multi-GPU Performance

Pre-Installed Frameworks & Toolchain

Standard 3-Year Warranty
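Since the servers ship with frameworks pre-installed, a quick sanity check after delivery is to confirm the framework actually sees the GPUs. Below is a minimal sketch assuming PyTorch is among the pre-installed frameworks; it degrades gracefully to CPU when PyTorch or a GPU is absent:

```python
def pick_device() -> str:
    """Return 'cuda' with the visible GPU count if PyTorch sees GPUs, else 'cpu'."""
    try:
        import torch  # assumed pre-installed; guarded so the check degrades gracefully
        if torch.cuda.is_available():
            return f"cuda ({torch.cuda.device_count()} GPUs visible)"
    except ImportError:
        pass
    return "cpu"

if __name__ == "__main__":
    print(pick_device())
```

On a correctly provisioned multi-GPU server, the reported device count should match the number of installed accelerators.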

Welcome to the Future of Computing with AI Servers

Dataknox distinguishes itself by incorporating top-tier GPUs and accelerators from leaders like NVIDIA, Intel, and AMD into its AI servers. Featuring cutting-edge models such as NVIDIA's H100 and Intel's Gaudi 2, our servers are designed for unparalleled performance in demanding AI and deep learning applications. This strategic selection of accelerators ensures that Dataknox servers are not just equipped for current computational challenges but are also primed for future advancements. Our commitment to integrating the best GPUs available underlines our dedication to delivering superior, future-ready AI hardware solutions.

GPU Server

Lenovo ThinkStation P360 Ultra

Compact workstation for edge AI and data science, supporting Intel Xeon W CPUs, up to 128GB of DDR5 memory, and NVIDIA T1000-class GPUs. No GPU is included as standard; discrete GPUs are added via PCIe.

CPU

Single Intel Xeon W-1200 Series processor (up to 10 cores)

MEM

Up to 128GB DDR5-4800 (4x DIMM slots)

HD

Up to 2x 2.5” SATA/NVMe SSDs, plus 2x M.2 SSDs

PCIE

2x PCIe 4.0 x16 slots (for NVIDIA T1000 or RTX A2000 GPUs)

see detailed specification
GPU Server

Lenovo ThinkSystem SR630 V3

The Lenovo ThinkSystem SR630 V3 (part number 7D73CTO1WW) is a 1U server for AI inference and edge AI, supporting dual Intel Xeon CPUs, 4TB DDR5, and 3x NVIDIA or Intel accelerators.

CPU

Dual 5th Gen Intel Xeon Scalable processors (up to 64 cores each)

MEM

Up to 4TB DDR5-5600 (16x DIMM slots)

HD

Up to 10x 2.5” NVMe/SATA/SAS hot-swap bays, plus 2x M.2 SSDs

PCIE

Up to 3x PCIe 5.0 x16 slots (for NVIDIA A100, L40S, or Intel Gaudi 3 accelerators)

see detailed specification
GPU Server

Lenovo ThinkSystem SR650 V3

The Lenovo ThinkSystem SR650 V3 is a 2U server for AI training and HPC, supporting dual Intel Xeon CPUs, 8TB DDR5, and 4x NVIDIA H100 GPUs. Highlighted at NVIDIA GTC 2025, it’s orderable now for scalable AI infrastructure.

CPU

Dual 5th Gen Intel Xeon Scalable processors (up to 64 cores each)

MEM

Up to 8TB DDR5-5600 (32x DIMM slots)

HD

Up to 20x 3.5” or 40x 2.5” NVMe/SATA/SAS hot-swap bays, plus 2x M.2 SSDs

PCIE

Up to 8x PCIe 5.0 x16 slots (for NVIDIA H100, A100, or L40S GPUs)

see detailed specification
GPU Server

GIGABYTE R284-A91

The GIGABYTE R284-A91 (part number R284-A91-AAL3) is a 2U server for memory-intensive AI workloads, supporting dual Intel Xeon CPUs, 4TB DDR5, CXL memory, and 4x NVIDIA or Intel accelerators.

CPU

Dual Intel Xeon 6900 Series processors (up to 144 cores each)

MEM

Up to 4TB DDR5-5600 (24x DIMM slots), plus 16x Micron CXL memory modules

HD

Up to 12x E3.S NVMe hot-swap bays, plus 2x M.2 SSDs

PCIE

Up to 4x PCIe 5.0 x16 slots (for NVIDIA A100 or Intel Gaudi 3 accelerators), plus CXL expansion

see detailed specification
GPU Server

GIGABYTE G493-SB0

The GIGABYTE G493-SB0 is a 4U server for AI training and inference, supporting dual Intel Xeon Scalable CPUs, up to 4TB DDR5, and up to 8x GPUs.

CPU

Dual 5th Gen Intel Xeon Scalable processors (up to 64 cores each)

MEM

Up to 4TB DDR5-5600 (16x DIMM slots)

HD

Up to 12x 2.5” NVMe/SATA/SAS hot-swap bays, plus 2x M.2 SSDs

PCIE

Up to 8x PCIe 5.0 x16 slots (for NVIDIA L40S, A100, or AMD Instinct MI300X GPUs)

see detailed specification
GPU Server

GIGABYTE G894-SD1

The GIGABYTE G894-SD1 (part number G894-SD1-AAX5) is a 4U server for AI training and inference, supporting dual Intel Xeon CPUs, 8TB DDR5, and 12x NVIDIA B200 or Intel Gaudi 3 accelerators.

CPU

Dual Intel Xeon 6700 or 6500 Series processors (up to 128 cores each)

MEM

Up to 8TB DDR5-5600 (32x DIMM slots)

HD

Up to 24x 2.5” NVMe/SATA/SAS hot-swap bays, plus 2x M.2 SSDs

PCIE

Up to 12x PCIe 5.0 x16 slots (for NVIDIA HGX B200, H100, or Intel Gaudi 3 accelerators)

see detailed specification
GPU Server

GIGABYTE S183-SH0

The GIGABYTE S183-SH0 (part number S183-SH0-AF1) is a 1U edge server for AI inference, supporting a single Intel Xeon CPU, 2TB DDR5, and NVIDIA A2/T4 GPUs.

CPU

Single 5th Gen Intel Xeon Scalable processor (up to 64 cores)

MEM

Up to 2TB DDR5-5200 (16x DIMM slots)

HD

Up to 4x 2.5” NVMe/SATA hot-swap bays, plus 2x M.2 SSDs

PCIE

2x PCIe 5.0 x16 slots (for NVIDIA A2 or T4 GPUs)

see detailed specification
GPU Server

GIGABYTE G293-Z40

The GIGABYTE G293-Z40 (part number G293-Z40-AF2) is a 2U server for AI inference and edge AI, supporting a single AMD EPYC CPU, 3TB DDR5, and 4x NVIDIA GPUs. Orderable now, it’s ideal for compact AI setups.

CPU

Single AMD EPYC 9004 Series processor (up to 96 cores)

MEM

Up to 3TB DDR5-4800 (12x DIMM slots)

HD

Up to 12x 2.5” NVMe/SATA hot-swap bays, plus 2x M.2 SSDs

PCIE

Up to 4x PCIe 5.0 x16 slots (for NVIDIA A100 or L40S GPUs)

see detailed specification
GPU Server

GIGABYTE G593-ZD2

The GIGABYTE G593-ZD2 (part number G593-ZD2-AF2) is a 5U server for AI training and inference, supporting dual AMD EPYC CPUs, 6TB DDR5, and 10x NVIDIA or AMD GPUs. Orderable now, it’s ideal for AI factories with liquid-cooling.

CPU

Dual AMD EPYC 9004 Series processors (up to 96 cores each)

MEM

Up to 6TB DDR5-4800 (24x DIMM slots)

HD

Up to 12x 3.5” or 24x 2.5” NVMe/SATA/SAS hot-swap bays

PCIE

Up to 10x PCIe 5.0 x16 slots (for NVIDIA H100, A100, or AMD Instinct MI350 GPUs)

see detailed specification
GPU Server

GIGABYTE G383-R80

The GIGABYTE G383-R80 (part number G383-R80-AF2) is a 3U server for AI training and HPC, supporting dual Intel Xeon Scalable CPUs, 8TB DDR5, and 8x NVIDIA H100 GPUs.

CPU

Dual 5th Gen Intel Xeon Scalable processors (up to 64 cores each)

MEM

Up to 8TB DDR5-5600 (32x DIMM slots)

HD

Up to 24x 2.5” NVMe/SATA/SAS hot-swap bays, plus 2x M.2 SSDs

PCIE

Liquid-cooled NVIDIA HGX™ H100 with 8x SXM5 GPUs; 8x PCIe Gen5 x16 low-profile slots

see detailed specification
GPU Server

Lenovo ThinkSystem SR675 V3

2U AI-optimized server supporting dual AMD EPYC 9004 CPUs, up to 6TB DDR5, and 8x NVIDIA H100 GPUs.

CPU

Dual AMD EPYC 9004 Series processors (up to 96 cores each)

MEM

Up to 6TB DDR5-4800 (24x DIMM slots)

HD

Up to 24x 2.5” NVMe/SATA/SAS hot-swap bays or 12x 3.5” bays

PCIE

Up to 8x PCIe 4.0/5.0 x16 slots (for NVIDIA H100, A100, or L40S GPUs)

see detailed specification
GPU Server

ASUS ESC8000-E12P Server

The ASUS ESC8000-E12P Server (part number ESC8000-E12P) is a 4U server for AI and HPC, supporting dual Intel Xeon 6 processors, up to 8TB DDR5, and 24x storage bays.

CPU

Dual Intel Xeon 6 Scalable processors (up to 144 cores each)

MEM

Up to 8TB DDR5-5600 (32x DIMM slots)

HD

Up to 24x 2.5”/3.5” NVMe/SATA/SAS hot-swap bays

PCIE

8x PCIe 5.0 x16 slots (for NVIDIA GPUs or Intel Gaudi 3 AI accelerators), 2x PCIe 5.0 x8 slots

see detailed specification
GPU Server

ASUS RS700-E12 Server

The ASUS RS700-E12 Server (part number RS700-E12) is a 1U server for AI and HPC, supporting dual Intel Xeon Scalable processors, up to 4TB DDR5, and 12x storage bays.

CPU

Dual 4th/5th Gen Intel Xeon Scalable processors

MEM

Up to 4TB DDR5-5600 (16x DIMM slots)

HD

Up to 12x 2.5” NVMe/SATA hot-swap bays, optional 2x M.2 slots

PCIE

2x PCIe 5.0 x16 slots (for NVIDIA GPUs or Intel Gaudi 3 AI accelerators), 1x PCIe 5.0 x8 slot

see detailed specification
GPU Server

ASUS ESC8000A-E11 Server

The ASUS ESC8000A-E11 Server (part number ESC8000A-E11) is a 4U server for AI and HPC, supporting 8x NVIDIA A100 GPUs, up to 8TB DDR4, and 24x storage bays.

CPU

Dual AMD EPYC 7763 CPUs (64 cores each, Milan)

MEM

Up to 8TB DDR4-3200 (32x DIMM slots)

HD

Up to 24x 2.5”/3.5” SATA/SAS/NVMe hot-swap bays

PCIE

8x PCIe 4.0 x16 slots (for NVIDIA A100 PCIe 80GB GPUs), 2x PCIe 4.0 x8 slots

see detailed specification
GPU Server

ASUS RS720A-E12-RS24 Server

The ASUS RS720A-E12-RS24 Server (part number RS720A-E12-RS24) is a 2U server for AI and HPC, supporting 8x NVIDIA H100 GPUs, 6TB DDR5, and 24x NVMe/SATA bays.

CPU

Dual AMD EPYC 9004 Series CPUs (up to 96 cores each)

MEM

Up to 6TB DDR5-4800 (24x DIMM slots)

HD

Up to 24x 2.5”/3.5” NVMe/SATA hot-swap bays

PCIE

8x PCIe 5.0 x16 slots (for NVIDIA H100 PCIe GPUs), 2x PCIe 5.0 x8 slots

see detailed specification
GPU Server

ASUS RS501A-E12 Server

1U server for AI/HPC storage, certified for Weka’s software-defined storage and NVIDIA AI Data Platform.

CPU

Single AMD EPYC 9004 Series or Intel Xeon Scalable processor (up to 96 cores for EPYC)

MEM

Up to 3TB DDR5-4800 (12x DIMM slots)

HD

Up to 4x 2.5” NVMe/SATA hot-swap bays or 2x M.2 slots

PCIE

2x PCIe 5.0 x16 slots (for NVIDIA GPUs), 1x PCIe 5.0 x8 slot

see detailed specification
GPU Server

ASUS ESC N4A-E11 Server

The ASUS ESC N4A-E11 Server (part number ESC-N4A-E11) is a 2U server for AI training and HPC, supporting 4x NVIDIA A100/H100 GPUs, up to 2TB DDR4, and 12x hot-swap storage bays.

CPU

Single AMD EPYC 7773X CPU (64 cores, Milan-X with 3D V-Cache)

MEM

Up to 2TB DDR4-3200 (8x DIMM slots)

HD

Up to 12x 3.5”/2.5” SATA/SAS/NVMe hot-swap bays (configuration-dependent)

PCIE

4x PCIe 4.0 x16 slots (for NVIDIA A100/H100 GPUs), 1x PCIe 4.0 x8 slot

see detailed specification
GPU Server

ASUS Ascent GX10

The ASUS Ascent GX10 (part number ASCENT-GX10) is a compact mini-PC powered by NVIDIA’s GB10 Grace Blackwell Superchip, delivering 1,000 TFLOPS for edge AI, LLM training, and inferencing.

CPU

NVIDIA GB10 Grace Blackwell Superchip (20-core Arm CPU with Cortex-X925 and Cortex-A725 cores)

MEM

128GB unified LPDDR5x memory

HD

Up to 4TB M.2 NVMe SSD storage

PCIE

Not applicable (integrated design, no PCIe slots)

see detailed specification
GPU Server

Lenovo ThinkSystem SR685a V3

The SR685a V3 combines 8x latest NVIDIA GPUs with 5th Gen Xeon CPUs in a liquid-cooled 5U chassis for generative AI workloads.

CPU

Supports up to 2 5th Gen Intel Xeon Scalable processors

MEM

Up to 96 GB per DIMM

HD

8 x 2.5-inch hot-swap drive bays supporting PCIe 5.0 NVMe drives

PCIE

8 x PCIe 5.0 x16 FHHL

see detailed specification
GPU Server

Lenovo ThinkSystem SR780a V3

The SR780a V3 combines 8x latest NVIDIA GPUs with 5th Gen Xeon CPUs in a liquid-cooled 5U chassis for generative AI workloads.

CPU

Supports up to 2 5th Gen Intel Xeon Scalable processors

MEM

Up to 96 GB per DIMM

HD

8 x 2.5-inch hot-swap drive bays supporting PCIe 5.0 NVMe drives

PCIE

8 x PCIe 5.0 x16 FHHL

see detailed specification
GPU Server

Dell PowerEdge XE8545

The Dell PowerEdge XE8545 is a 4U server for AI training and HPC, supporting dual AMD EPYC 7003 CPUs and 4x NVIDIA A100 SXM GPUs.

CPU

Supports up to 2 AMD EPYC 7003 Series processors (3rd Generation, Milan), with up to 64 cores per CPU

MEM

Up to 32 DDR4 DIMM slots (16 per processor)

HD

Supports up to 10 x 2.5-inch hot-swappable drives

PCIE

Supports up to 4 PCIe Gen 4 expansion slots

see detailed specification
GPU Server

Dell PowerEdge XE9640

High-density 2U server supporting 4x GPUs for AI acceleration.

CPU

Up to 2 Intel Xeon Scalable processors (4th or 5th Generation)

MEM

8/16/32 DIMMs (2TB max with XE9640 GPU configuration)

HD

Up to 4 x 2.5-inch NVMe SSD drives

PCIE

Up to 4 PCIe Gen 5 slots (x16), designed for high-performance expansion

see detailed specification
GPU Server

GIGABYTE G292-Z40

2U server with 4x H100 GPUs and 4TB memory for AI training.

CPU

Supports up to 2 AMD EPYC 7002/7003 Series processors

MEM

DDR4 RDIMM (Registered DIMM) or LRDIMM (Load-Reduced DIMM), 8-channel memory architecture per processor.

HD

8 x 2.5-inch hot-swappable bays with a hybrid backplane

PCIE

Supports up to 10 PCIe Gen 4 slots

see detailed specification
GPU Server

HPE ProLiant DL380 Gen11

The HPE ProLiant DL380 Gen11 features dual 4th/5th Gen Intel Xeon Scalable processors, supporting 4x GPUs for AI and virtualization.

CPU

Dual 4th/5th Gen Intel Xeon Scalable processors (up to 64 cores each)

MEM

DIMM Speed: Up to 5600 MT/s (DDR5)

HD

24x NVMe U.2/EDSFF (368TB max) + dual M.2 boot drives.

PCIE

PCIe 5.0 x16 slots + OCP 3.0 (200Gbps) for GPUs/NVMe/network expansion.

see detailed specification
GPU Server

HPE Apollo 6500 Gen11

The HPE Apollo 6500 Gen11 Server features dual 4th/5th Gen Intel Xeon or AMD EPYC processors, supporting 8x high-performance GPUs for AI and HPC workloads.

CPU

Intel 4th/5th Gen Xeon Scalable (up to 64 cores) or AMD EPYC 9004 (up to 96 cores).

MEM

Memory Type: RDIMM/3DS RDIMM

HD

32x NVMe U.2/EDSFF (max 491TB) + dual M.2 boot drives.

PCIE

PCIe 5.0 x16 slots + OCP 3.0 (200Gbps) for GPUs/NVMe expansion.

see detailed specification
GPU Server

PowerEdge XE8640 Rack Server

The PowerEdge XE8640 Rack Server features four NVIDIA® H100 GPUs, 4 TB max RDIMM memory, up to 122.88 TB storage with NVMe SSDs, and comprehensive security measures.

CPU

Up to 56 cores per processor

MEM

Up to 4800 MT/s, RDIMM 4 TB max

HD

Up to 8 x 2.5-inch NVMe SSD drives max 122.88 TB

PCIE

2 CPU configuration: Up to 4 PCIe slots (4 x16 Gen5)

see detailed specification
GPU Server

PowerEdge XE9640 Rack Server

The Dell PowerEdge XE9640 Rack Server features dual 4th Generation Intel® Xeon® Scalable processors, offering up to 56 cores per processor.

CPU

Up to 56 cores per processor

MEM

RDIMM 1 TB max, Up to 4800 MT/s

HD

Up to 4 x 2.5-inch NVMe SSD drives max 61.44 TB

PCIE

2 CPU configuration: Up to 4 PCIe slots (4 x16 Gen5)

see detailed specification
GPU Server

PowerEdge XE9680 Rack Server

The Dell PowerEdge XE9680 rack server is a high-performance, robust server designed to meet data-intensive workloads and complex computing tasks.

CPU

Two 4th Generation Intel® Xeon® Scalable processors with up to 56 cores per processor

MEM

32 DDR5 DIMM slots, up to 4800MT/s

HD

Up to 8 x 2.5-inch NVMe SSD drives max 122.88 TB

PCIE

Up to 10x PCIe Gen5 x16 slots (full-height, half-length)

see detailed specification
Lease me Today!
GPU Server

5688M7 Extreme AI Server Delivering 16 PFLOPS Performance

5688M7 is an advanced AI system supporting NVIDIA HGX H100/H200 8-GPUs to deliver industry-leading 16 PFLOPS of AI performance.

CPU

2x 4th Gen Intel® Xeon® Scalable Processors, up to 350W

MEM

32x DDR5 DIMMs, up to 4800MT/s

HD

24x 2.5” SSDs (up to 16x NVMe U.2), plus 2x onboard NVMe/SATA M.2

PCIE

see detailed specification
GPU Server

SuperServer SYS-421GU-TNXR | Universal 4U Dual Processor

Ideal for high-performance computing, AI/deep learning, and LLM/NLP workloads, with 8TB DDR5 memory, 8 PCIe Gen 5.0 slots, flexible networking, 2 M.2 NVMe/SATA slots, and 6 hot-swap drive bays.

CPU

Dual 5th Gen Intel® Xeon® / 4th Gen Intel® Xeon® Scalable processors

MEM

32 DIMM slots; up to 8TB 5600MT/s ECC DDR5

HD

6x 2.5" hot-swap NVMe/SATA drive bays (NVMe hybrid)

PCIE

1x PCIe 5.0 x16 LP slot, 7x PCIe 5.0 x16 slots

see detailed specification
GPU Server

A+ Server AS-4125GS-TNRT

AI/Deep Learning optimized, supports 8 GPUs, dual AMD EPYC™ 9004 up to 360W, 6TB DDR5 memory, 4 hot-swap NVMe bays, 1 M.2 slot, AIOM/OCP 3.0 support. Ideal for high-performance computing.

CPU

Dual AMD EPYC™ 9004 Series Processors up to 360W TDP

MEM

24 DIMM slots; up to 6TB 4800MT/s ECC DDR5 RDIMM/LRDIMM

HD

24x 2.5" hot-swap NVMe/SATA/SAS drive bays (4x 2.5" NVMe dedicated)

PCIE

9 PCIe 5.0 x16 FHFL slot(s)

see detailed specification
GPU Server

SuperServer SYS-421GE-TNRT 4U Dual Processor

Powered by 5th/4th Gen Intel® Xeon®, it offers 8TB DDR5 memory, 13 PCIe Gen 5.0 slots, AIOM/OCP 3.0 support, and 8 hot-swap SATA bays, ideal for high-end computing demands.

CPU

Up to 64C/128T; Up to 320MB Cache per CPU

MEM

32 DIMM slots; up to 8TB 4400MT/s ECC DDR5 RDIMM

HD

24x 2.5" hot-swap NVMe/SATA/SAS drive bays (8x 2.5" NVMe hybrid; 8x 2.5" NVMe dedicated)

PCIE

12 PCIe 5.0 x16 FHFL slot(s)

see detailed specification
GPU Server

SuperServer SYS-521GE-TNRT 5U Dual Processor

Meet the GPU SuperServer SYS-521GE-TNRT: A high-performance server with 5th/4th Gen Intel® Xeon® support, 8TB DDR5 memory, 13 PCIe Gen 5.0 slots, AIOM/OCP 3.0, and 8 hot-swap SATA bays. Ideal for data-intensive tasks.

CPU

Up to 64C/128T; Up to 320MB Cache per CPU

MEM

Max Memory (2DPC): Up to 8TB 4400MT/s ECC DDR5 RDIMM

HD

8 x 2.5" Gen5 NVMe/SATA/SAS hot-swappable bays

PCIE

Up to 10 double-width or 10 single-width GPU(s)

see detailed specification
GPU Server

Gigabyte G593-SD0 HPC/AI Server - 4th/5th Gen Intel® Xeon® Scalable-5U

Intel's 4th and 5th Gen Xeon Scalable processors advance AI and deep learning, featuring PCIe 5.0 and High Bandwidth Memory for improved CPU performance. GIGABYTE complements them with PCIe Gen5 slots, NVMe drives, and DDR5 memory.

CPU

5th or 4th Generation Intel® Xeon® Scalable Processors

MEM

3DS RDIMM modules up to 256GB supported

HD

8 x 2.5" Gen5 NVMe/SATA/SAS hot-swappable bays

PCIE

Liquid-cooled NVIDIA HGX™ H100 with 8x SXM5 GPUs; 8x PCIe Gen5 x16 low-profile slots

see detailed specification
GPU Server

Gigabyte G593-ZX2 HPC/AI Server - AMD Instinct™ MI300X 8-GPU

HPC/AI server with dual AMD EPYC™ 9004 processors (up to 128 cores per processor), 8x AMD Instinct™ MI300X GPUs, and 24 DDR5 DIMM slots. Ideal for AI training and inference.

CPU

Dual AMD EPYC™ 9004 Series processors TDP up to 300W

MEM

Memory speed: Up to 4800 MT/s

HD

8 x 2.5" Gen5 NVMe hot-swappable bays

PCIE

Supports 8x AMD Instinct™ MI300X OAM GPU modules; 8x PCIe Gen5 x16 low-profile slots

see detailed specification
GPU Server

Gigabyte G593-SD1 4th/5th Gen Intel® Xeon® Scalable

AI and deep learning performance will benefit from the built-in AI acceleration engines, while networking, storage, and analytics leverage other specialized accelerators in the 4th & 5th Gen Intel Xeon Scalable processors.

CPU

5th or 4th Generation Intel® Xeon® Scalable Processors

MEM

32x DIMM slots; DDR5 memory only

HD

8x 2.5" Gen5 NVMe/SATA/SAS hot-swappable bays

PCIE

4x PCIe Gen5 x16 FHHL slots; 8x PCIe Gen5 x16 low-profile slots

see detailed specification
GPU Server

5180A7 High-Density 2-Socket Server with 4 Configurations

5180A7 is a high-density dual-socket server supporting AMD EPYC™ 9004 Processors that delivers high performance and reliability in different application scenarios.

CPU

1 or 2 AMD EPYC™ 9004 processors, TDP up to 400 W

MEM

24x DDR5 DIMM slots (12 channels); supports RDIMM/3DS RDIMM at 4800 MT/s (1DPC)

HD

Internal: 2x SATA M.2 SSD or 2x PCIe M.2 SSD

PCIE

Up to 5x PCIe Gen5 slots (3x standard PCIe slots + 2x OCP 3.0 cards)

see detailed specification
GPU Server

3280A7 Flexible 2U 1-Socket Server Supporting Up to 4 GPUs

3280A7 is a 2U server supporting one AMD EPYC™ 9004 processor. Its flexible configurations and support for up to 4 dual-width GPUs make it highly scalable for many applications, including AI and high-performance computing.

CPU

1 AMD EPYC™ 9004 processor, TDP up to 400W

MEM

24 DDR5 DIMM, supports RDIMM/3DS DIMM, 12 Channels

HD

Internal: 2x SATA/PCIe M.2 SSDs; 4x 3.5” SAS/SATA bays

PCIE

Up to 8x standard PCIe slots (6 x16 FHHL + 2 x8 HHHL or 4 x16 FHHL + 4 x8 HHHL)

see detailed specification
GPU Server

5698M7 6U 8-OAM AI Server for Next-Gen AI and LLMs

5698M7 is an advanced AI system supporting 8 Habana® Gaudi®2 OAM accelerators and 2 Intel® Xeon® Scalable Processors. Leveraging the OAM form factor, it delivers scalable, high-speed performance for next-generation AI workloads and LLMs.

CPU

2x 4th Gen Intel® Xeon® Scalable Processors, up to 350W

MEM

32x DDR5 DIMMs, up to 4800MT/s

HD

24x 2.5” SSDs (up to 16x NVMe U.2)

PCIE

8x low-profile x16 + 2x FHHL x16

see detailed specification
Lease me Today!
GPU Server

5688A7 Extreme AI Server Delivering 16 PFLOPS Performance

5688A7 is an advanced AI system supporting NVIDIA HGX H100/A100 8-GPUs to deliver industry-leading 16PFlops of AI performance.

CPU

2x AMD EPYC™ 9004 Series processors

MEM

32x DDR5 DIMMs, up to 4800MT/s

HD

24x 2.5” SSDs (up to 16x NVMe U.2), plus 2x onboard NVMe/SATA M.2

PCIE

see detailed specification
GPU Server

5468M7 8 GPU AI Server Supporting Front/Back I/O & Liquid Cooling

5468M7 is a powerful 4U AI server with the latest PCIe Gen 5.0 architecture, supporting 8 FHFL PCIe GPU cards and 4th Gen Intel® Xeon® Scalable processors.

CPU

2x 4th Gen Intel® Xeon® Scalable Processors, up to 350W

MEM

32x DDR5 4800MT/s RDIMMs

HD

24x 2.5” SSDs (up to 16x NVMe U.2), plus 2x NVMe/SATA M.2

PCIE

8x FHFL double-width PCIe 5.0 GPU cards; supports an additional 2x FL double-width PCIe GPUs and 1x FL single-width PCIe card

see detailed specification
GPU Server

5468A7 8 GPU AI Server with PCIe 5.0

5468A7 is a powerful 4U AI server supporting AMD EPYC™ 9004 processors and the latest PCIe Gen 5.0 architecture. Its 10-card pass-through design significantly reduces CPU and GPU communication latency.

CPU

2x AMD EPYC™ 9004 processors, up to 400W

MEM

24x DDR5 4800MT/s RDIMMs

HD

24x 2.5” SSDs (up to 16x NVMe U.2), plus 2x NVMe/SATA M.2

PCIE

11x PCIe 5.0 x16 slots

see detailed specification

GPUs in AI Servers

Dataknox incorporates top-tier GPUs and accelerators from leaders like NVIDIA, Intel, and AMD into its AI servers. Featuring cutting-edge models such as NVIDIA's H100 and Intel's Gaudi 2, our servers are designed for unparalleled performance in demanding AI and deep learning applications.

The Nvidia H100 GPU: A Dataknox Premium Offering

Dataknox harnesses the power of the NVIDIA H100 Tensor Core GPUs, a critical component behind the performance of advanced AI models, including platforms like ChatGPT. With this GPU, we guarantee unparalleled performance, scalability, and security across diverse workloads.

COMPUTE

528 fourth-generation Tensor Cores and 16,896 CUDA cores (SXM)

MEM

80GB of memory (HBM3 for the SXM, HBM2e for PCIe)

TDP

Up to 700W (configurable)

learn more
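As a rough rule of thumb, the H100's 80GB of HBM bounds how large a model fits per GPU: the weights alone take roughly parameters × bytes-per-parameter, plus runtime overhead. The sketch below is back-of-the-envelope arithmetic only; the 1.2× overhead factor is an illustrative assumption, not a measured figure:

```python
import math

def min_gpus(params_billion: float, bytes_per_param: float,
             gpu_mem_gb: float = 80.0, overhead: float = 1.2) -> int:
    """Minimum GPUs needed to hold model weights, with a rough runtime overhead factor."""
    needed_gb = params_billion * bytes_per_param * overhead  # 1B params ≈ 1GB per byte/param
    return math.ceil(needed_gb / gpu_mem_gb)

# A 7B-parameter model in FP16 (2 bytes/param) fits on one 80GB H100;
# a 70B-parameter model in FP16 needs several.
print(min_gpus(7, 2))   # → 1
print(min_gpus(70, 2))  # → 3
```

The `gpu_mem_gb` parameter can be set to other capacities, e.g. 192 for large-memory accelerators such as the AMD MI300X covered later on this page.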
Now Available!
Nvidia L40

The NVIDIA L40 GPU: Dataknox's Elite Visual Powerhouse.

Revamp your infrastructure with Dataknox and NVIDIA's L40. Rooted in the Ada Lovelace design, it pairs advanced RT Cores with petaflop-scale Tensor Cores. With a performance leap doubling its predecessors, the L40 is essential for today's visual computing.

COMPUTE

568 fourth-generation Tensor Cores and 142 third-generation RT Cores

MEM

48GB of memory capacity

TDP

300W maximum board power; passively cooled, full-height, full-length (FHFL) dual-slot design

learn more

Intel Gaudi 2: A Breakthrough in Deep Learning

Dataknox's AI servers, powered by Intel Gaudi 2, are not just hardware; they are a comprehensive solution for unlocking the potential of your data. Our servers seamlessly integrate with your existing IT infrastructure, providing a unified platform for AI development, deployment, and management.

COMPUTE

24 Tensor Processor Cores (TPCs)

MEM

96GB of HBM2E with 2.45TB/s bandwidth

TDP

600 watts

learn more
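For single-stream inference, Gaudi 2's 2.45TB/s of HBM2E bandwidth sets a simple roofline-style ceiling: each generated token must read every weight once, so tokens/sec is at most bandwidth divided by model size. The sketch below is illustrative arithmetic only, ignoring caches, batching, and kernel efficiency:

```python
def max_tokens_per_sec(bandwidth_tb_s: float, params_billion: float,
                       bytes_per_param: float = 2.0) -> float:
    """Bandwidth-bound upper limit on batch-1 decode throughput (tokens/sec)."""
    model_gb = params_billion * bytes_per_param      # weights read once per token
    return bandwidth_tb_s * 1000.0 / model_gb        # convert TB/s to GB/s

# A 7B FP16 model (≈14GB) against Gaudi 2's 2.45TB/s HBM2E:
print(round(max_tokens_per_sec(2.45, 7)))  # → 175
```

Real throughput is lower, but the bound is useful for comparing accelerators by memory bandwidth rather than peak FLOPS.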

AMD MI300X: The World's Fastest AI Accelerator

Dataknox presents the AMD MI300X, a cutting-edge addition to our AI server lineup. More than just hardware, it's a pivotal tool for AI innovation. Effortlessly integrating with your IT infrastructure, the MI300X by Dataknox offers a powerful, unified platform for AI development and deployment, unlocking new potentials in data intelligence.

COMPUTE

304 CDNA 3 compute units (the GPU-only variant; the MI300A APU pairs 24 Zen 4 cores with CDNA 3)

MEM

Up to 192GB of HBM3 memory

learn more

Explore AI Solutions - Connect Now

Our dedicated team of experts is on hand to address your inquiries and guide you through our customized 3-5 year financing plans.

Contact us