ED5-3

Design of adiabatic quantum-flux-parametron bfloat16 floating point arithmetic unit
*Tomoyuki Tanaka1, Christopher L. Ayala1, Shohei Takagi1, Nobuyuki Yoshikawa1

Machine learning, in which computers learn rules from data rather than being explicitly programmed, is attracting considerable attention. It has moved past the research stage and has become an integral part of daily life in areas such as natural language processing and automated driving. The number of machine learning applications will continue to grow, but the heat and power consumption of computers remain major challenges. Computers are built from semiconductor circuits, and in the past these problems were mitigated by miniaturizing the circuits; however, the technology node has now reached several nanometers, making further miniaturization difficult. One candidate alternative to semiconductor technology is the adiabatic quantum-flux-parametron (AQFP), a superconductor logic device that dissipates 1.4 zJ per Josephson junction when operated at 5 GHz [1]. We are conducting research on AQFP logic to address the heat and power consumption problems that computers currently face.
In this presentation, we describe the design of a floating-point adder, multiplier, and fused multiply-add unit using AQFP circuits. The floating-point format is bfloat16, a 16-bit format intended for machine learning workloads [2]. We also compared our designs with floating-point units implemented in 22 nm CMOS [3] and in RSFQ circuits [4]: the AQFP units are 1.5 times faster than the CMOS units while requiring 1/40 of their energy, and are 2 times slower than the RSFQ units while requiring 1/1300 of their energy.
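For reference, the bfloat16 format keeps the sign bit, the full 8-bit exponent, and the top 7 fraction bits of an IEEE 754 float32, so a float32 value can be converted by discarding its lower 16 bits. The short Python sketch below illustrates only this bit layout; it assumes simple round-toward-zero truncation for clarity and does not represent the rounding scheme or data path of the AQFP arithmetic units themselves.

```python
import struct

def float_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE 754 float32 to its 16 most significant bits (bfloat16)."""
    f32_bits = struct.unpack(">I", struct.pack(">f", x))[0]
    # Keep sign (1 bit), exponent (8 bits), and upper fraction (7 bits).
    return f32_bits >> 16

def bfloat16_bits_to_float(b: int) -> float:
    """Expand bfloat16 bits back to float32 by zero-padding the fraction."""
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

# Example: a bfloat16 multiplication emulated in software (truncation assumed).
a = float_to_bfloat16_bits(1.5)      # 0x3FC0
b = float_to_bfloat16_bits(-2.25)    # 0xC010
p = float_to_bfloat16_bits(bfloat16_bits_to_float(a) * bfloat16_bits_to_float(b))
print(hex(p), bfloat16_bits_to_float(p))  # 0xc058 -3.375
```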
Fabrication and measurement results for part of the circuits in this study will also be presented.

[1] N. Takeuchi, T. Yamae, C. L. Ayala, H. Suzuki, and N. Yoshikawa, “An adiabatic superconductor 8-bit adder with 24kBT energy dissipation per junction,” Appl. Phys. Lett., vol. 114, no. 4, p. 042602, Jan. 2019.
[2] A. Agrawal, S. M. Mueller, B. M. Fleischer, X. Sun, N. Wang, J. Choi, and K. Gopalakrishnan, “DLFloat: A 16-b Floating Point Format Designed for Deep Learning Training and Inference,” 2019 IEEE 26th Symposium on Computer Arithmetic (ARITH), pp. 92–95, Jun. 2019.
[3] S. Mach, F. Schuiki, F. Zaruba, and L. Benini, “FPnew: An Open-Source Multiformat Floating-Point Unit Architecture for Energy-Proportional Transprecision Computing,” IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 29, no. 4, pp. 774–787, Apr. 2021.
[4] X. Peng, Q. Xu, T. Kato, Y. Yamanashi, N. Yoshikawa, A. Fujimaki, N. Takagi, K. Takagi, and M. Hidaka, “High-Speed Demonstration of Bit-Serial Floating-Point Adders and Multipliers Using Single-Flux-Quantum Circuits,” IEEE Trans. Appl. Supercond., vol. 25, no. 3, pp. 1–6, Jun. 2015.

Keywords: Adiabatic Quantum Flux Parametron, Floating-point unit, bfloat16