Posters

Title | Authors & Affiliation
OmniDRL: An Energy-Efficient Mobile Deep Reinforcement Learning Accelerator with Dual-mode Weight Compression and Direct Processing of Compressed Data | Juhyoung Lee; Korea Advanced Institute of Science and Technology
Exynos 1080 High-Performance, Low-Power CPU and GPU with AMIGO | Taehee Lee; Samsung
An Energy-efficient Floating-Point DNN Processor using Heterogeneous Computing Architecture with Exponent-Computing-in-Memory | Juhyoung Lee; Korea Advanced Institute of Science and Technology
Dynamic Neural Accelerator for Reconfigurable and Energy-efficient Neural Network Inference | Sakyasingha Dasgupta; EdgeCortix
SM6: A 16nm System-on-Chip for Accurate and Noise-Robust Attention-Based NLP Applications | Thierry Tambe; Harvard
ENIAD: A Reconfigurable Near-data Processing Architecture for Web-Scale AI-enriched Big Data Service | Jialiang Zhang; U Penn
A Plug-and-Play Universal Photonic Processor for Quantum Information Processing | Caterina Taballione; QuiX
Industry’s First 7.2 Gbps 512 GB DDR5 Memory Module with 8-Stacked DRAMs: A Promising Memory Solution for Next-Gen Servers | Sung Joo Park; Samsung
LightOn Optical Processing Unit: Scaling-up AI and HPC with a Non von Neumann Co-processor | Laurent Daudet; LightOn
System-on-Chip Implementation of Trusted Execution Environment with Heterogeneous Architecture | Trong-Thuc Hoang; University of Electro-Communications
A CORDIC-based Trigonometric Hardware Accelerator with Custom Instruction in 32-bit RISC-V System-on-Chip | Khai-Duy Nguyen; University of Electro-Communications
A Photonic Neural Network Using < 1 Photon per Scalar Multiplication | Tianyu Wang; Cornell
Edge Inference Engine for Deep & Random Sparse Neural Networks with 4-bit Cartesian-Product MAC Array and Pipelined Activation Aligner | Kota Ando; Tokyo Institute of Technology
Photonic Co-processors in HPC: Using LightOn OPUs for Randomized Numerical Linear Algebra | Daniel Hesslow; LightOn
Elpis: High Performance Low Power Controller for Data Center SSDs | Seungwon Lee; Samsung
SOT-MRAM: Third-generation MRAM Opens New Opportunities | Jean-Pierre Nozières; Antaios
PNNPU: A Fast and Efficient 3D Point Cloud-based Neural Network Processor with Block-based Point Processing for Regular DRAM Access | Sangjin Kim; Korea Advanced Institute of Science and Technology
Samsung NPU: An AI Accelerator and SDK for Flagship Mobile AP | Jun-Seok Park; Samsung