MAXER-2100

AI Inference Server, 2U Rackmount, 12/13th Gen CPU, RTX-4080 Super


Features

  • High-performance for AI inference: Supports NVIDIA RTX 4090, RTX 4080 Super, etc.
  • 12/13th Gen Intel® Core™ LGA1700 Socket Processors
  • High CPU computing performance: Built-in i9-13900, up to i9-13900K supported
  • 2U Rack Mount, Front Access I/O Design
  • M.2 2242/2280 M-Key x 2
  • M.2 3042/3052/2242 B-Key + Micro SIM Slot
  • M.2 2230 E-Key x 1
  • Onboard TPM 2.0
Overview

AAEON’s new MAXER-2100 is the superior choice of edge server for running complex AI inferencing software. With support for both 12th and 13th Generation Intel® Core™ processors and multiple NVIDIA® GPU cards, plus its deep learning capabilities, the MAXER-2100 was deployed as an edge server alongside an AOI device as part of a broader quality inspection solution, running the customer’s AI inferencing model to quickly and accurately detect defects during the manufacturing process.

AI Server

The MAXER-2100 is a 2U Rackmount Controller specifically crafted to serve as an inference server for solutions demanding the utmost precision. Engineered to fine-tune AI models and execute intricate AI algorithms for AOI in chip and PCBA manufacturing, the MAXER-2100 is compact, expandable, and signifies a novel approach to enhancing industrial computing capabilities.
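
To give a concrete sense of the workload this class of hardware targets, the sketch below shows batched GPU inference in PyTorch. It is only an illustration under stated assumptions: a stock ResNet-50 stands in for a trained defect-detection model and random tensors stand in for inspection photographs; it is not AAEON's or any customer's AOI software.

    # Minimal GPU inference sketch (illustrative only; assumes PyTorch and torchvision
    # are installed). NOT AAEON software: a stock ResNet-50 stands in for a trained
    # defect-detection model, and random tensors stand in for inspection photographs.
    import time
    import torch
    from torchvision.models import resnet50

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = resnet50(weights=None).to(device).eval()

    # Dummy batch of 8 "inspection photographs" (3-channel, 224x224 pixels).
    batch = torch.randn(8, 3, 224, 224, device=device)

    with torch.no_grad():
        model(batch)                      # warm-up pass so timing excludes CUDA start-up
        if device.type == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        logits = model(batch)             # one batched forward pass
        if device.type == "cuda":
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start

    print(f"Scored {batch.shape[0]} images in {elapsed * 1000:.1f} ms on {device}")

In a real AOI pipeline the model would be a trained defect detector and the batch would come from the inspection camera, but the structure of the inference step is the same.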

Automated Optical Inspection Made Easy
>99% Accuracy

In this deployment, the MAXER-2100 used multiple GPUs to accelerate AI inferencing, identifying component defects with greater than 99% accuracy.

Accelerated Defect Detection

The MAXER-2100 reduced the time needed for defect analysis from 2-3 seconds per photograph with manual inspection to just 0.05 seconds per photograph.
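
As a rough throughput check on those figures (simple arithmetic only, assuming one photograph per inspection step), the quoted times work out to roughly a 40-60x increase in photographs processed per second:

    # Back-of-the-envelope throughput from the inspection times quoted above.
    manual_seconds_per_photo = 2.5   # midpoint of the 2-3 s manual inspection figure
    maxer_seconds_per_photo = 0.05   # MAXER-2100 figure quoted above

    manual_rate = 1 / manual_seconds_per_photo   # ~0.4 photographs per second
    maxer_rate = 1 / maxer_seconds_per_photo     # 20 photographs per second

    print(f"Manual inspection: {manual_rate:.1f} photos/s")
    print(f"MAXER-2100:        {maxer_rate:.1f} photos/s")
    print(f"Speed-up:          ~{maxer_rate / manual_rate:.0f}x")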

Reduction in Labor Costs

The MAXER-2100 reduced reliance on manual labor by 67% by removing routine inspection tasks from employees' duties.

The AI Inference Server Solution
Specifications
MAXER-2100

System

CPU: 12/13th Generation Intel® Core™ LGA1700 Socket Processors, TDP Max 125W (Built-in: i9-13900)
GPU: GeForce RTX 4080 SUPER (optional support for NVIDIA GeForce graphics or AI computing cards)
Chipset: Intel Q670
System Memory: DDR5 4000MHz DIMM x 4, up to 128GB; Non-ECC, un-buffered, dual-channel memory architecture
Display Interface: HDMI 2.0 x 1, DP 1.4 x 1, VGA x 1
Storage Device: 2.5" SATA-III Drive Bay x 2 (swappable, RAID 0/1); M.2 2280 M-Key for NVMe SSD x 1
Ethernet: RJ-45 GbE LAN x 1 (supports Intel® AMT 12.0); RJ-45 GbE LAN x 1; RJ-45 2.5GbE LAN x 2
USB: USB 3.2 Gen 2 x 4 (10Gbps)
Serial Port: DB-9 x 1 for RS-232/422/485
Audio: Mic-in/Line-out/Line-in
Expansion: PCIe [x16] x 1 for RTX 4080 Super; M.2 3042/3052 B-Key x 1 + MicroSIM slot; M.2 2230 E-Key x 1
TPM: Onboard TPM 2.0
Indicator: System LED x 1, HDD Activity x 1
OS Support: Windows® 10 IoT Enterprise 64-bit; Windows® 11 Pro 64-bit; Ubuntu 22.04.3 and above
Drive Bay: 2.5" SATA-III Drive Bay x 2 (swappable, RAID 0/1)
Front Control: Power On/Off, System Reset

Power Supply

Power Requirement: Built-in 1000W power supply

Mechanical

Mounting: Rack Mount
Dimensions (W x H x D): 17" x 3.46" x 17.6" (431.8mm x 88mm x 448mm)
Gross Weight: 31.52 lb (14.3kg)
Net Weight: 24.25 lb (11kg)

Environmental

Operating Temperature: 32°F ~ 104°F (0°C ~ 40°C), according to IEC 68-2 with 0.5 m/s air flow
Storage Temperature: -40°F ~ 176°F (-40°C ~ 80°C)
Storage Humidity: 5 ~ 95% @ 40°C, non-condensing
Certification: CE/FCC Class A
Downloads

Download the datasheet or user manual for the packing list, detailed specifications, and other product information.