
AI Hardware from a Single Source
Qualified Systems for the COMI AI Platform
We support you in selecting the right hardware for your AI applications. Whether you choose one of our qualified reference systems or have our partners build a system to your specifications, you receive hardware optimally tuned for the COMI AI Platform. GPUs from NVIDIA, AMD, or Intel are configured to match your requirements.
All systems are tested, pre-configured, and shipped with the AI Platform by us. You receive a ready-to-use system - from compact workstations to 19-inch rack servers to HPC clusters.
AI Workstations
Compact systems for local AI applications
AI Server S
Ultra-compact workstation for desktop or production floor deployment.
Base configuration: 12 Cores · 64 GB RAM (2 channels) · 2× 2 TB + 2× 4 TB storage · 1× 96 GB GPU · 2× 10 Gbit/s
Maximum configuration: 16 Cores · 96 GB RAM (2 channels) · 2× 2 TB + 2× 8 TB storage · 1× 96 GB + 1× 24 GB GPU · 2× 25 Gbit/s
AI Server Pro
Powerful workstation for demanding local AI tasks and training.
Base configuration: 32 Cores · 128 GB RAM (4 channels) · 2× 2 TB + 2× 3.84 TB storage · 1× 96 GB + 1× 24 GB GPU · 2× 25 Gbit/s
Maximum configuration: 64 Cores · 256 GB RAM (4 channels) · 2× 2 TB + 2× 7.68 TB storage · 2× 96 GB + 2× 24 GB GPU · 2× 100 Gbit/s
19" Rack Servers
Data center-ready systems for centralized AI workloads
AI Server Rack Edge
2U Short-Depth · 24 Cores · up to 256 GB RAM · up to 2 GPUs (2× 96 GB) · up to 30 TB · up to 2× 200 Gbit/s
Compact short-depth chassis for industrial racks and tight spaces.
AI Server Rack
2U · 32 Cores · up to 2 TB RAM · up to 4 GPUs (4× 96 GB) · up to 60 TB · up to 4× 200 Gbit/s
Powerful rack server for data centers.
AI Server Rack Ultra
4U · up to 2× 128 Cores · up to 4 TB RAM · up to 8 GPUs (8× 141 GB) · up to 180 TB · up to 4× 400 Gbit/s
Maximum performance for compute-intensive workloads.
AI Server Cluster
Scalable HPC infrastructure for maximum performance
Modular cluster solution with specialized components - flexibly scalable to your requirements.
Compact Node
Scalable solution for serving many different applications in a compact form factor.
High-Power Node
Maximum compute power for the most capable AI agents and compute-intensive workloads.

Storage
High-performance storage solutions for large datasets and fast data access.

Networking
High-speed interconnect for minimal latency between cluster nodes.

Infrastructure
Rack systems, power distribution, and cooling - everything from a single source for reliable operation.
Frequently Asked Questions
What you need to know about our AI Servers.
Which GPUs do you support?
We support GPUs from NVIDIA, AMD, and Intel. For most AI applications, accelerators from all three manufacturers can be used.
Are the systems delivered ready to use?
Yes, upon request we deliver systems fully tested and pre-configured with operating system, drivers, and the AI Platform, ready to use immediately.
How large are the systems?
Our smallest systems are the size of a briefcase and fit on any office desk. We also offer workstations in classic tower cases, standard 19-inch rack servers, and HPC clusters spanning multiple racks.
Can the systems be customized?
Yes, many customizations are possible, from the server vendor down to individual components. Through our partners, systems can be assembled according to your specific requirements.
How do the servers integrate into existing infrastructure?
Our servers support standard network protocols and integrate seamlessly into your existing data center. Remote management via IPMI/BMC is available on all rack systems.
Can I upgrade the systems later?
Yes, all systems are modular and leave room for upgrades. Workstations can be extended with additional GPUs or more memory, and in clusters almost all components can be scaled independently.
Is dedicated hardware worth it compared to the cloud?
With continuous usage, dedicated hardware often pays for itself within the first year. Over longer periods of operation, you save significant costs compared to cloud services, and you retain full control over your data.
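The break-even logic behind this claim can be sketched in a few lines. All figures below are hypothetical assumptions for illustration only, not COMI quotes or actual cloud prices: a one-time server purchase is compared against renting a comparable GPU instance by the hour.

```python
# Illustrative break-even sketch. The server price and cloud hourly rate
# are assumed placeholder values, not real offers.

def breakeven_months(server_cost_eur: float,
                     cloud_rate_eur_per_hour: float,
                     hours_per_month: int = 720) -> float:
    """Months of continuous use after which buying beats renting."""
    monthly_cloud_cost = cloud_rate_eur_per_hour * hours_per_month
    return server_cost_eur / monthly_cloud_cost

# Assumed figures: a 30,000 EUR server vs. a 4.00 EUR/h cloud GPU, 24/7 usage.
months = breakeven_months(30_000, 4.00)
print(f"Break-even after {months:.1f} months")  # ~10.4 months
```

Under these assumed numbers, continuous 24/7 utilization amortizes the purchase in under a year; at lower utilization, the break-even point moves out proportionally.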
Let's enable AI together!
Get advice on our AI Server solutions.