MLP Explorer

ML expressivity vs. TinyXPU hardware efficiency

Controls
- Depth (hidden layers)
- Width (neurons/layer)
- Array size
- Task
ML Expressivity (in-browser training)
- Parameters
- log₁₀ loss (lower is better)
- Training steps

Network Architecture
Hardware Efficiency (TinyXPU systolic array)
- PE utilization = array mapping × startup overhead
  - Array mapping: fraction of the array's PEs doing useful work (the K×N logical tile)
  - Startup overhead: M / (M + HW_ROWS + N − 2)
- Throughput vs. peak array MACs/cycle (MACs/cycle)
- Inference latency in cycles (higher is worse)
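The efficiency metrics above can be sketched in a few lines. This is a minimal model, not the explorer's actual implementation: it assumes a weight-stationary HW_ROWS × HW_COLS systolic array computing one MLP layer as an (M × K) @ (K × N) matmul, uses the panel's formulas directly, and ignores tiling when K or N exceeds the array dimensions. The function name `layer_efficiency` is illustrative.

```python
def layer_efficiency(M, K, N, hw_rows, hw_cols):
    # Array mapping: fraction of physical PEs holding useful weights
    # (the K x N logical tile, clipped to the array dimensions).
    mapping = (min(K, hw_rows) * min(N, hw_cols)) / (hw_rows * hw_cols)
    # Startup overhead: useful streaming cycles over total cycles;
    # hw_rows + N - 2 cycles go to filling and draining the pipeline.
    startup = M / (M + hw_rows + N - 2)
    utilization = mapping * startup
    # Throughput relative to the array's peak of hw_rows * hw_cols MACs/cycle.
    throughput = utilization * hw_rows * hw_cols
    # Inference latency for streaming all M input rows, in cycles.
    latency = M + hw_rows + N - 2
    return utilization, throughput, latency
```

With M = 64 and an 8×8 layer mapped onto an 8×8 array, mapping is 1.0, so utilization is set entirely by the startup fraction 64 / 78: a small batch on a large array wastes most of its cycles on pipeline fill, which is the trade-off the controls expose.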
Per-layer table columns: Layer | Shape | Params
Adjust the controls above to explore the tradeoff.