Our research group operates a multi-tiered high-performance computing infrastructure composed of dedicated in-house compute nodes, a GPU workstation, and national supercomputing resources.

Our Group Compute Cluster

We maintain a dedicated in-house compute cluster established under the TÜBİTAK projects 117M430 and 120M671, with compute nodes connected via 40 Gbps InfiniBand for high-speed parallel data transfer. The architecture comprises a login node for administration and two compute nodes with RAID configurations optimized for storage and I/O performance.

Cluster Topology
  • Compute Node 1: CPU 2 × Intel Xeon E5-2660 v4 @ 2.00 GHz; memory 256 GB RDIMM; storage 1.6 TB SSD (RAID 0) + 4 × 4 TB HDD (RAID 5)
  • Compute Node 2: CPU 2 × Intel Xeon Silver 4214R @ 2.40 GHz; memory 192 GB RDIMM; storage 2 × 480 GB SSD (RAID 1) + 4 × 960 GB SSD (RAID 5) + 2 × 16 TB HDD
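
As a hedged illustration of how a parallel job might span the two compute nodes over the InfiniBand interconnect, the sketch below distributes a simple reduction across MPI ranks. It assumes an MPI library and the mpi4py Python package are available on the cluster; the actual scheduler and software stack are not listed here.

    # Minimal MPI sketch (hypothetical; launch with e.g. `mpirun -np 4 python demo.py`).
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # index of this process
    size = comm.Get_size()   # total number of MPI processes

    # Each rank works on its own chunk of a 1D array.
    n_global = 10_000_000
    n_local = n_global // size
    local = np.arange(rank * n_local, (rank + 1) * n_local, dtype=np.float64)

    # Local partial sum, then a collective reduction to rank 0 over the interconnect.
    total = comm.reduce(local.sum(), op=MPI.SUM, root=0)

    if rank == 0:
        print(f"Global sum over {size} ranks: {total:.3e}")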



GPU Workstation

A new GPU-enabled workstation was established under TÜBİTAK project 124M416 to support simulation-visualization tasks and real-time rendering on virtual reality devices.

Workstation Specifications
  • CPU: Intel Core i9-14900K
  • Memory: 96 GB DDR5
  • GPU: NVIDIA GeForce RTX 4070
  • Storage: 2 × 4 TB SSD (run space) + 2 × 10 TB HDD
  • VR devices: 2 × Meta Quest headsets
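
As a hedged sketch of how a simulation field might be offloaded to the workstation's GPU before visualization, the snippet below runs a simple array operation on the graphics card and copies the result back to host memory. It assumes the CuPy package and a CUDA driver are installed; the workstation's actual software stack is not specified here.

    # Hypothetical GPU offload example (assumes CuPy + CUDA are installed).
    import cupy as cp

    # Allocate a test field directly in GPU memory.
    field = cp.random.rand(4096, 4096)

    # A simple smoothing step: average the field with a shifted copy of itself.
    smoothed = 0.5 * (field + cp.roll(field, 1, axis=0))

    # Copy the result back to host memory, e.g. for rendering or export.
    host_result = cp.asnumpy(smoothed)
    print("Smoothed field mean:", float(host_result.mean()))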



National HPC Resources (UHeM)

In addition to our group cluster, we actively use the national supercomputing resources provided by the National Center for High Performance Computing (UHeM) at Istanbul Technical University (ITU). We have access to two major clusters:

  • ALTAY: 1.5 PFLOPS CPU cluster based on AMD EPYC processors
  • SARIYER: GPU cluster with NVIDIA A100 and V100 accelerators

Across these projects, our group has logged more than 1.8 million CPU-hours at UHeM over the years, as summarized below.

UHeM Projects and CPU-Hour Usage
  • DVT Modeling (2018–2022): 368,495 CPU-hours
  • Development of Quantum Machine Learning (QML) Algorithms for the Solution of Partial Differential Equations (2021–2025): 128,816 CPU-hours
  • Computational Hemodynamic Modelling of Prosthesis Heart and Venous Valves (2021–2025): 1,180,641 CPU-hours
  • Hemodynamic Simulation of Supra Left Ventricular Heart Blood Flow Coupled with Coronary Arteries (2025–2028): 185,000 CPU-hours
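
The total quoted above follows directly from the per-project figures; a short tally (illustrative only, using shortened project names) reproduces it:

    # Sum the per-project CPU-hour figures from the list above.
    cpu_hours = {
        "DVT Modeling": 368_495,
        "QML Algorithms for PDEs": 128_816,
        "Prosthesis Heart and Venous Valves": 1_180_641,
        "Supra Left Ventricular Flow with Coronary Arteries": 185_000,
    }
    total = sum(cpu_hours.values())
    print(f"Total CPU-hours across UHeM projects: {total:,}")  # 1,862,952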