AI Server

Pre-configured AI infrastructure for every use case

AI Hardware from a Single Source

Qualified Systems for the COMI AI Platform

We support you in selecting the right hardware for your AI applications. Whether you choose one of our qualified reference systems or have our partners build a system to your specifications, you receive hardware optimally tuned for the COMI AI Platform. GPUs from NVIDIA, AMD, or Intel are configured according to your requirements.

All systems are tested, pre-configured, and shipped with the AI Platform by us. You receive a ready-to-use system - from compact workstations to 19-inch rack servers to HPC clusters.

AI Workstations

Compact systems for local AI applications

AI Server S

Ultra-compact workstation for desktop or production floor deployment.

S · 12 Cores · 64 GB RAM (2 Channels) · 2× 2 TB + 2× 4 TB Storage · 1× 96 GB GPU · 2× 10 Gbit/s

S Max · 16 Cores · 96 GB RAM (2 Channels) · 2× 2 TB + 2× 8 TB Storage · 1× 96 GB + 1× 24 GB GPU · 2× 25 Gbit/s
Edge Inference · Single Workstation · Large Language Models

AI Server Pro

Powerful workstation for demanding local AI tasks and training.

Pro · 32 Cores · 128 GB RAM (4 Channels) · 2× 2 TB + 2× 3.84 TB Storage · 1× 96 GB + 1× 24 GB GPU · 2× 25 Gbit/s

Pro Plus · 64 Cores · 256 GB RAM (4 Channels) · 2× 2 TB + 2× 7.68 TB Storage · 2× 96 GB + 2× 24 GB GPU · 2× 100 Gbit/s
Local Training · Multi-Model Inference · Research

19" Rack Servers

Data center-ready systems for centralized AI workloads

AI Server Rack Edge

2U Short-Depth · 24 Cores · up to 256 GB RAM · up to 2 GPUs (2× 96 GB) · up to 30 TB · up to 2× 200 Gbit/s

Compact short-depth chassis for industrial racks and tight spaces.

Industrial Rack · Edge Data Center

AI Server Rack

2U · 32 Cores · up to 2 TB RAM · up to 4 GPUs (4× 96 GB) · up to 60 TB · up to 4× 200 Gbit/s

Powerful rack server for data centers.

Central Inference · Production

AI Server Rack Ultra

4U · up to 2× 128 Cores · up to 4 TB RAM · up to 8 GPUs (8× 141 GB) · up to 180 TB · up to 4× 400 Gbit/s

Maximum performance for compute-intensive workloads.

Large-Scale Training · Enterprise

AI Server Cluster

Scalable HPC infrastructure for maximum performance

Modular cluster solution with specialized components - flexibly scalable to your requirements.

Compact Node

Scalable solution for serving many different applications in a compact form factor.

Inference · Fine-Tuning · Multi-Model

High-Power Node

Maximum compute power for the most capable AI agents and compute-intensive workloads.

LLM Training · Deep Learning · High Throughput

Storage

High-performance storage solutions for large datasets and fast data access.

Large Datasets · Model Checkpoints · Shared Storage

Networking

High-speed interconnect for minimal latency between cluster nodes.

InfiniBand · Low Latency · High Throughput

Infrastructure

Rack systems, power distribution, and cooling - everything from a single source for reliable operation.

Rack & Chassis · Power Supply · Cooling

Frequently Asked Questions

What you need to know about our AI Servers.

Which GPU manufacturers are supported?

We support GPUs from NVIDIA, AMD, and Intel. For most AI applications, accelerators from all three manufacturers can be used.

Are systems delivered pre-configured?

Yes, upon request we deliver systems fully tested and pre-configured with operating system, drivers, and the AI Platform - ready to use immediately.

What form factors are available?

Our smallest systems are the size of a briefcase and fit on any office desk. We also offer workstations in classic tower cases, standard 19-inch rack servers, and HPC clusters spanning multiple racks.

Can custom configurations be created?

Yes, extensive customization is possible - from the choice of server vendor down to individual components. Through our partners, systems can be assembled according to your specific requirements.

How do the systems integrate with existing infrastructure?

Our servers support standard network protocols and integrate seamlessly into your existing data center. Remote management via IPMI/BMC is available on all rack systems.

How can I expand my system later?

All systems are modular and leave room for upgrades. Workstations can be upgraded with additional GPUs or more memory, and in clusters almost all components can be scaled independently.

Is owning hardware worthwhile compared to cloud solutions?

With continuous usage, dedicated hardware often pays for itself within the first year. Over longer operation periods, you save significant costs compared to cloud services. You also retain full control over your data.
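As a rough illustration of this break-even argument, here is a minimal sketch. All figures (cloud GPU rate, hardware price, operating costs) are hypothetical placeholders, not actual prices or quotes:

```python
# Hypothetical break-even comparison: owned GPU server vs. cloud rental.
# Every figure below is an illustrative assumption, not a real price.

CLOUD_RATE_PER_HOUR = 4.00   # assumed hourly rate for a comparable cloud GPU instance
HARDWARE_PRICE = 25_000.00   # assumed purchase price of a dedicated server
OPEX_PER_HOUR = 0.50         # assumed power, cooling, and maintenance per hour

def break_even_hours(hardware_price: float, cloud_rate: float, opex_rate: float) -> float:
    """Hours of continuous use after which owning is cheaper than renting."""
    return hardware_price / (cloud_rate - opex_rate)

hours = break_even_hours(HARDWARE_PRICE, CLOUD_RATE_PER_HOUR, OPEX_PER_HOUR)
print(f"Break-even after {hours:.0f} hours of continuous use "
      f"(~{hours / 24 / 30:.1f} months at 24/7 utilization)")
```

Under these assumed numbers the break-even point lands at roughly ten months of continuous use, consistent with an amortization period within the first year; with lower utilization the break-even point moves out proportionally.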

Let's enable AI together!

Get advice on our AI Server solutions.
