Proven Performance at Scale
See the real numbers. LightningSim and OmniSim deliver breakthrough acceleration across diverse computing workloads. Up to 352x faster than traditional simulation methods.
Max Speedup
OmniSim vs Co-simulation
Accuracy
Cycle Count Accuracy
Benchmarks
Diverse Workloads Tested
Visualization
Runtime Performance Comparison
Actual runtimes across representative benchmarks showing dramatic acceleration vs traditional C/RTL co-simulation.
Performance Across Categories
Consistent speedups across different benchmark categories
DSP & Mathematical Operations
Fixed-point Square Root
FIR Filter
Window Convolution
Floating-point Conv
Arbitrary Precision ALU
Loop & Control Flow Operations
Parallel Loops
Imperfect Loops
Pipelined Nested Loops
AI/ML & Complex Workloads
FlowGNN - GIN
Graph neural network with 260K cycles
FlowGNN - DGN
Directed graph neural network
Complete Benchmark Results
Full dataset from Vitis v2021.1 showing cycle counts and runtime performance
| Benchmark | Cosim (s) | LightningSim (s) | OmniSim (s) | LS Speedup | OS Speedup |
|---|---|---|---|---|---|
| Fixed-point Square Root | 27.25 | 4.97 | 3.65 | 5.48× | 7.47× |
| FIR Filter | 20.12 | 2.43 | 1.94 | 8.28× | 10.37× |
| Window Convolution | 28.30 | 3.69 | 3.15 | 7.67× | 8.98× |
| Floating-point Conv | 49.78 | 2.42 | 2.46 | 20.57× | 20.24× |
| Unoptimized FFT | 153.53 | 2.78 | 2.91 | 55.23× | 52.76× |
| FlowGNN - GIN | 4219.85 | 28.90 | 11.97 | 146.02× | 352.53× |
| FlowGNN - DGN | 996.13 | 26.90 | 11.71 | 37.03× | 85.07× |
30+ benchmarks tested across DSP operations, loop structures, memory access patterns, and AI/ML workloads. View complete profiling data →
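The speedup columns above are simply the co-simulation runtime divided by each simulator's runtime. A minimal sketch reproducing them from the raw numbers in the table (benchmark names and values copied from the table; the dictionary layout is just for illustration):

```python
# Recompute the LS/OS speedup columns from the raw runtimes (seconds) above.
# Each entry: (co-simulation, LightningSim, OmniSim).
benchmarks = {
    "Fixed-point Square Root": (27.25, 4.97, 3.65),
    "FIR Filter": (20.12, 2.43, 1.94),
    "Window Convolution": (28.30, 3.69, 3.15),
    "Floating-point Conv": (49.78, 2.42, 2.46),
    "Unoptimized FFT": (153.53, 2.78, 2.91),
    "FlowGNN - GIN": (4219.85, 28.90, 11.97),
    "FlowGNN - DGN": (996.13, 26.90, 11.71),
}

for name, (cosim, ls, osim) in benchmarks.items():
    # Speedup = baseline runtime / accelerated runtime.
    print(f"{name}: LS {cosim / ls:.2f}x, OS {cosim / osim:.2f}x")
```

Running this shows the headline numbers: roughly 146x (LightningSim) and 352x (OmniSim) on FlowGNN - GIN, and 55x on the unoptimized FFT.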
Key Insights
99.9% Accuracy
Timing estimates from LightningSim and OmniSim maintain 99.9% accuracy with respect to C/RTL co-simulation across all benchmarks.
100x+ Acceleration
LightningSim delivers order-of-magnitude speedups across diverse workloads, exceeding 100x on the largest benchmarks: 146x on the FlowGNN GIN model and 55x on unoptimized FFT operations.
AI/ML Powerhouse
Exceptional performance on complex workloads like FlowGNN models, delivering up to 352x speedup for graph neural network operations.
Design Space Exploration
Combined with incremental design space exploration features, achieve up to 577x acceleration for comprehensive optimization workflows.
Ready to Accelerate Your Design?
Transform your HLS and FPGA workflow with breakthrough simulation speed. Get started today.