MLP Explorer
ML expressivity vs. TinyXPU hardware efficiency
Controls
- Depth (hidden layers): 2
- Width (neurons/layer): 4 / 8 / 12 / 16 / 24 / 32
- Array size: 8×8 / 16×16 / 32×32
- Task: Sum of sines / Square wave (sketched below)
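For concreteness, a minimal sketch of what the two regression targets could look like; the exact frequencies, amplitudes, and input range are illustrative assumptions, not taken from the demo's source.

```ts
// Hypothetical 1-D regression targets for the two tasks.
// Specific frequencies and amplitudes are assumptions.

// "Sum of sines": smooth, so a small MLP fits it quickly.
function sumOfSines(x: number): number {
  return Math.sin(2 * Math.PI * x) + 0.5 * Math.sin(6 * Math.PI * x);
}

// "Square wave": discontinuous, so it demands more depth/width.
function squareWave(x: number): number {
  return Math.sign(Math.sin(2 * Math.PI * x));
}
```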
ML Expressivity (in-browser training)
Live metrics: Parameters · log₁₀ Loss (lower is better) · Steps
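As a worked example of the Parameters metric (assuming scalar input and output, which fits the 1-D tasks above, plus one bias per neuron): at depth 2 and width 8, the count is (1·8 + 8) + (8·8 + 8) + (8·1 + 1) = 16 + 72 + 9 = 97 parameters.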
Network Architecture (diagram of the current layer stack)
Hardware Efficiency (TinyXPU systolic array)
- PE Utilization = array mapping × startup overhead
- Array mapping: fraction of PEs doing useful work (K×N of the array's PEs for a K×N weight tile)
- Startup overhead: M / (M + HW_ROWS + N − 2), the fraction of cycles spent streaming data rather than filling and draining the pipeline
- Throughput: achieved MACs/cycle vs. the array's peak MACs/cycle
- Inference latency: total cycles per forward pass (higher is worse)
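Assembled from the formulas shown above, here is a minimal sketch of how these metrics could be computed for one M×K (activations) times K×N (weights) layer matmul. Clamping oversized layers to the array (instead of tiling them) and the one-MAC-per-PE-per-cycle peak are simplifying assumptions, and `systolicStats` is a hypothetical helper, not the demo's code.

```ts
// Efficiency of one M×K times K×N matmul on an
// HW_ROWS × HW_COLS systolic array.
interface ArrayStats {
  mapping: number;     // fraction of PEs doing useful work
  startup: number;     // M / (M + HW_ROWS + N - 2), per the panel above
  utilization: number; // mapping × startup
  macsPerCycle: number;
  latencyCycles: number;
}

function systolicStats(
  M: number, K: number, N: number,
  hwRows: number, hwCols: number,
): ArrayStats {
  // Assumption: K maps to array rows, N to array columns; layers
  // larger than the array are clamped here (real mappings would tile).
  const k = Math.min(K, hwRows);
  const n = Math.min(N, hwCols);
  const mapping = (k * n) / (hwRows * hwCols);
  const startup = M / (M + hwRows + N - 2);
  const utilization = mapping * startup;
  const peak = hwRows * hwCols;             // one MAC per PE per cycle
  const macsPerCycle = utilization * peak;
  const latencyCycles = M + hwRows + N - 2; // fill + stream + drain
  return { mapping, startup, utilization, macsPerCycle, latencyCycles };
}
```

This makes the tradeoff concrete: on the default 16×16 array, a width-8 hidden layer occupies only 8×8 = 64 of 256 PEs, so array mapping alone caps utilization at 25% no matter how long the input stream is.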
Per-layer breakdown: Layer | Shape | Params
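A sketch of how this table could be derived from the depth/width controls, under the same scalar-in/scalar-out assumption as the worked example above; `layerTable` is a hypothetical name.

```ts
// Rows of the Layer / Shape / Params table for an MLP with
// `depth` hidden layers of `width` neurons, scalar in/out (assumed).
function layerTable(depth: number, width: number): [string, string, number][] {
  const dims = [1, ...Array(depth).fill(width), 1]; // 1 → W → … → W → 1
  const rows: [string, string, number][] = [];
  for (let i = 0; i + 1 < dims.length; i++) {
    const fanIn = dims[i];
    const fanOut = dims[i + 1];
    rows.push([`L${i}`, `${fanIn}×${fanOut}`, fanIn * fanOut + fanOut]);
  }
  return rows;
}

// E.g. depth 2, width 8 →
// [["L0", "1×8", 16], ["L1", "8×8", 72], ["L2", "8×1", 9]]  (total 97)
```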
Adjust the controls above to explore the tradeoff between model expressivity and hardware efficiency.