Chip Gallery

1024-channel Sparsity-Aware Neural Interface
1024-Channel Sparsity-Aware Focused-Sampling Neural Interface
  • CIM-based hotspot prediction engine for automatic neural activity localization
  • Supports dynamic switching between 8-bit panoramic monitoring and 11-bit precision tracking
  • Achieves 0.000222 mm²/ch area and 0.34 μW/ch power efficiency

ISSCC: Sparsity-Aware Neural Interface with CIM-Based Predictive Focused Sampling for Hotspot Spike Tracking
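The focused-sampling idea can be sketched in software: rank channels by a recent-activity score, digitize the predicted hotspot channels at 11-bit resolution, and keep the rest at 8-bit panoramic resolution. The uniform quantizer, the activity score, and `top_k` below are illustrative assumptions, not the chip's CIM predictor.

```python
import numpy as np

def quantize(x, bits, full_scale=1.0):
    """Uniform quantizer over [-full_scale, +full_scale] (illustrative)."""
    levels = 2 ** bits
    step = 2.0 * full_scale / levels
    return np.clip(np.round(x / step), -levels // 2, levels // 2 - 1) * step

def focused_sample(samples, activity, top_k=64):
    """Focused-sampling sketch for a 1024-channel frame: channels ranked
    as hotspots by the activity score are digitized at 11 bits, all
    others at 8 bits for panoramic monitoring."""
    hot = np.argsort(activity)[-top_k:]         # predicted hotspot channels
    out = quantize(samples, bits=8)             # 8-bit panoramic pass
    out[hot] = quantize(samples[hot], bits=11)  # 11-bit precision tracking
    return out, hot
```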

Event-Driven Hybrid Neural Network Processor
Event-Driven Hybrid ANN-SNN Neural Signal Processor
  • Supports EEG/ECG/EMG/LFP multi-modal neural signal analysis
  • Reconfigurable ANN+SNN hybrid architecture
  • Always-on BNN + event-triggered CNN; as low as 0.99 μJ/class

TBioCAS: A 0.99-to-4.38 μJ/class Event-Driven Hybrid Neural Network Processor for Full-Spectrum Neural Signal Analyses
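The always-on BNN plus event-triggered CNN arrangement follows a classic cascade pattern: a cheap detector watches every window, and the expensive classifier wakes only on events. The callables and threshold below are placeholders, not the chip's actual networks.

```python
def cascaded_classify(window, bnn_score, cnn_classify, wake_thr=0.5):
    """Event-driven cascade sketch: bnn_score is a cheap always-on
    detector returning an event likelihood in [0, 1]; cnn_classify is
    the heavier classifier that runs only when the threshold trips."""
    score = bnn_score(window)
    if score < wake_thr:
        return None, score              # stay in low-power monitoring
    return cnn_classify(window), score  # event: run full classification
```

The gating is what keeps average energy near the detector-only floor while the full classifier handles the rare event windows.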

Multi-Topology Online-Learning SNN Processor
Multi-Topology Online-Learning SNN Processor
  • Reconfigurable for FC / recurrent / convolutional spiking networks
  • Supports TR-STDP online learning
  • Demonstrated on ECG anomaly detection and image classification (e.g., MNIST)

TBioCAS: A 510 μW 0.738-mm² 6.2-pJ/SOP Online Learning Multi-Topology SNN Processor With Unified Computation Engine in 40-nm CMOS
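Trace-based STDP, the family TR-STDP belongs to, can be sketched in a few lines; the learning rates and time constant here are illustrative, and TR-STDP's hardware-specific timing rules are not modeled.

```python
import numpy as np

def stdp_step(w, pre_spikes, post_spikes, pre_trace, post_trace,
              a_plus=0.01, a_minus=0.012, tau=20.0, dt=1.0):
    """One trace-based STDP update on weight matrix w[post, pre]:
    each neuron keeps an exponentially decaying spike trace; a post
    spike potentiates by the presynaptic trace (pre-before-post), a
    pre spike depresses by the postsynaptic trace (post-before-pre)."""
    decay = np.exp(-dt / tau)
    pre_trace = pre_trace * decay + pre_spikes
    post_trace = post_trace * decay + post_spikes
    w = w + a_plus * np.outer(post_spikes, pre_trace)   # LTP
    w = w - a_minus * np.outer(post_trace, pre_spikes)  # LTD
    return w, pre_trace, post_trace
```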

Deep SNN Accelerator
Unstructured Sparsity-Aware Deep SNN Accelerator
  • 3D adder-tree for parallel temporal-step computation
  • Unstructured sparse data parallel execution
  • Supports spike Q/K/V and self-attention (Spike-SSA)

JSSC: An Energy-Efficient Unstructured Sparsity-Aware Deep SNN Accelerator With 3-D Computation Array
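Spiking self-attention of the softmax-free kind named above reduces to integer accumulation when Q/K/V are binary spike tensors: Q·Kᵀ counts coincident spikes, so the block needs only additions and one final scaling. The shapes and scale factor below are assumptions, not the accelerator's exact dataflow.

```python
import numpy as np

def spiking_self_attention(q, k, v, scale=0.125):
    """Sketch of softmax-free spiking self-attention: q, k, v are
    binary spike tensors of shape [T, N, D] (time steps, tokens,
    features); scores are integer spike-coincidence counts."""
    out = np.zeros_like(v, dtype=float)
    for t in range(q.shape[0]):
        scores = q[t] @ k[t].T            # spike-coincidence counts
        out[t] = (scores @ v[t]) * scale  # linear attention, no softmax
    return out
```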

Spiking Vision Transformer Accelerator
Spiking Vision Transformer (28 nm) Accelerator
  • Dual-path sparse compute core
  • EMA-free spiking self-attention engine
  • 1-bit / 8-bit adder-tree array

TCASAI: A 28nm Spiking Vision Transformer Accelerator with Dual-Path Sparse Compute Core and EMA-free Self-Attention Engine for Embodied Intelligence

HybMED
Multi-Sparsity Hybrid Neural Network On-Chip Training Processor
  • Feedback Alignment-based online learning
  • Multi-granularity sparsity exploitation for efficiency
  • Hybrid ANN/SNN structure for local adaptation

TBioCAS: HybMED: A Hybrid Neural Network Training Processor With Multi-Sparsity Exploitation for Internet of Medical Things
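Feedback alignment replaces the transposed forward weights in backpropagation with a fixed random matrix, avoiding the weight-transport problem that makes on-chip training costly. Below is a minimal two-layer sketch; the tanh activation, sizes, and learning rate are illustrative, and HybMED's sparsity machinery is omitted.

```python
import numpy as np

def fa_train_step(W1, W2, B, x, y, lr=0.01):
    """One feedback-alignment update for a tiny two-layer network:
    the output error is propagated through a fixed random matrix B
    instead of W2.T, so no symmetric weight transport is needed."""
    h = np.tanh(x @ W1)              # hidden layer
    y_hat = h @ W2                   # linear output
    e = y_hat - y                    # output error
    dW2 = np.outer(h, e)             # exact gradient for the top layer
    dh = (e @ B) * (1.0 - h ** 2)    # feedback through fixed random B
    dW1 = np.outer(x, dh)
    return W1 - lr * dW1, W2 - lr * dW2
```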

Wireless Implantable Neural Interface SoC
Neuron-Inspired Wireless Implantable Neural Interface SoC
  • 32-channel event-driven spike detection with wireless transmission
  • Direct Multiplexing front-end + Spike Folding for area efficiency
  • 1.38 μW/ch power, 0.0032 mm²/ch area, >500× compression

CICC: A Neuron-Inspired 0.0032mm²–1.38μW/Ch Wireless Implantable Neural Interface with Direct Multiplexing Front-End and Event-Driven Spike Detection and Transmission
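Event-driven spike detection of this kind typically compares the signal against a threshold derived from a robust noise estimate and transmits only the crossing events; that sparse event stream is where the >500× compression comes from. The MAD-based noise estimator and multiplier `k` below are common textbook choices, assumed rather than taken from the paper.

```python
import numpy as np

def detect_spikes(signal, k=4.0):
    """Spike-detection sketch: estimate the noise floor with the median
    absolute deviation (MAD / 0.6745 approximates sigma for Gaussian
    noise), then report indices where |signal| first crosses k*sigma."""
    sigma = np.median(np.abs(signal)) / 0.6745   # robust noise estimate
    thr = k * sigma
    crossings = np.flatnonzero((np.abs(signal[1:]) >= thr)
                               & (np.abs(signal[:-1]) < thr))
    return crossings + 1, thr                    # event indices, threshold
```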

SpikeSEE Retinal Prosthesis
Neuromorphic Visual Processor for Retinal Prostheses (SpikeSEE)
  • Event-driven dynamic scene processing for retinal prostheses
  • Energy-efficient spike-based visual encoding pipeline
  • Evaluated on real-world prosthetic vision simulation tasks

Neural Networks: SpikeSEE: An energy-efficient dynamic scenes processing framework for retinal prostheses

Configurable Analog Front-End
Low-Power Configurable Analog Front-End for Multi-Modal Biosignal Recording
  • Auto-zeroed chopper amplifier + pseudo source-follower for low 1/f noise
  • Tunable active pseudo-resistor for flexible high-pass filtering
  • 1.7 μVrms input-referred noise; suitable for implantable and wearable ExG interfaces

TBioCAS: An Energy-Efficient Small-Area Configurable Analog Front-End Interface for Diverse Biosignals Recording

RF Energy Harvesting Front-End
Reconfigurable RF Energy Harvesting Front-End
  • Stable 1.3–1.8 V output across –22.5 to –3 dBm input range, no DC-DC converter required
  • Peak efficiency 42.8%, sensitivity down to –23.6 dBm
  • Suitable for self-powered IoT sensors and wireless implantable medical devices

TCAS-II: An On-Chip Reconfigurable Front-End for Ultra-Low-Power RF Energy Harvesting

Millisecond-Recovery Neural Recording Front-End
Self-Adaptive Neural Recording Front-End with Millisecond Artifact Recovery
  • Stimulation artifact recovery within 3 ms
  • Tunable high-pass cutoff (0.1–15 Hz) for simultaneous LFP and spike recording
  • Targets next-generation closed-loop neural interfaces for epilepsy and Parkinson's therapy

TBioCAS: Self-Adaptive Pseudo-Resistors Enabling Millisecond-Level Artifact Recovery and High-Linearity for Neural Recording Front-Ends

High-Speed Programmable Vision Chip
High-Speed Programmable Vision Chip
  • Peak performance of 413 GOPS (8-bit); end-to-end vision processing pipeline
  • 4 mm × 6 mm die; 208 GOPS at 200 MHz, 194 GOPS/W energy efficiency
  • Supports CNN and deep learning workloads

TCSVT: A Heterogeneous Parallel Processor for High-Speed Vision Chip & A High Speed Programmable Vision Chip for Real-Time Object Detection

1000 fps Hybrid Reconfigurable Vision Chip
1000-fps Hybrid Reconfigurable Vision Chip with SOM Neural Network
  • Pixel-parallel PE array + reconfigurable RP + dual-core MPU
  • Integrated bio-inspired Self-Organizing Map (SOM) neural network
  • On-chip CMOS image sensor; real-time processing at >1000 fps

JSSC: A 1000 fps Vision Chip Based on a Dynamically Reconfigurable Hybrid Architecture Comprising a PE Array Processor and Self-Organizing Map Neural Network
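A Self-Organizing Map update is simple enough to sketch: find the best-matching unit (BMU) for an input, then pull it and its grid neighbors toward the input with a Gaussian neighborhood. Grid size, learning rate, and sigma below are illustrative, not the chip's configuration.

```python
import numpy as np

def som_update(weights, x, lr=0.5, sigma=1.0):
    """One SOM update on a 2-D grid of weight vectors weights[h, w, d]:
    the BMU is the grid cell closest to input x; every cell moves
    toward x, weighted by a Gaussian of its grid distance to the BMU."""
    h, w, d = weights.shape
    dists = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(dists), (h, w))
    ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    grid_d2 = (ii - bi) ** 2 + (jj - bj) ** 2
    nb = np.exp(-grid_d2 / (2.0 * sigma ** 2))    # neighborhood function
    weights += lr * nb[..., None] * (x - weights)
    return weights, (bi, bj)
```

In training, `lr` and `sigma` would decay over iterations so the map first orders globally and then fine-tunes locally.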

32-Channel Neural Amplifier
32-Channel Integrated Neural Signal Amplifier
  • Die size only 2.8 mm × 1.9 mm; ~3 μW per channel
  • Input-referred noise < 2 μVrms for high-fidelity neural acquisition
  • Validated in animal experiments; performance on par with commercial systems

64-Channel Neural Recording SoC
64-Channel Neural Recording SoC
  • 64 simultaneous channels; 10.7 μW per channel
  • 3.2 μVrms input-referred noise (5 Hz–1 kHz), 100–500× programmable gain, 62.5 kHz/ch sampling rate
  • Suited for large-scale neural recording, BCI, and animal experiment platforms