Jonghyun Bae


E-mail: bnbbkr (at) gmail (dot) com

View My GitHub Profile

Bio

Jonghyun Bae is currently a research scientist at Google, working on database efficiency. Before joining Google, he was a postdoctoral researcher in the Performance and Algorithms Research (PAR) group, led by Dr. Leonid Oliker, at Lawrence Berkeley National Laboratory, and at the AI Institute at Seoul National University (AIIS).

Jonghyun Bae completed his Ph.D. in Computer Science and Engineering at Seoul National University as a member of the Architecture and Code Optimization Lab (ARC), under the guidance of Prof. Jae W. Lee. Before coming to SNU, he received an M.S. in Electronic, Electrical, and Computer Engineering in 2017 and a B.S. in Semiconductor Systems Engineering in 2015 – both from Sungkyunkwan University. His current research interests include systems for ML and AI, high-performance computing (HPC), and big data analytics.


Current Position

Google (Sunnyvale, USA)
April 2024 – Present
Position: Research scientist
Supervisor: Jichuan Chang


Education

Seoul National University (Seoul, Korea)
Ph.D. in Computer Science and Engineering (Sep 2017 – Feb 2022)
Advisor: Jae W. Lee
Dissertation: A Large-Batch, High-Throughput Training System for Deep Neural Networks

Sungkyunkwan University (Suwon, Korea)
M.S. in Electronic, Electrical, and Computer Engineering (Mar 2015 – Aug 2017)
Advisors: Jae W. Lee and Jaehyuk Choi
Dissertation: Jointly Optimizing Task Granularity and Concurrency for In-Memory MapReduce Frameworks

Sungkyunkwan University (Suwon, Korea)
B.S. in Semiconductor Systems Engineering (Mar 2009 – Feb 2015)


Publications

International Conferences

[ECCV ‘22] Jonghyun Bae, Woohyeon Baek, Tae Jun Ham, and Jae W. Lee, “L3: Accelerator-Friendly Lossless Image Format for High-Resolution, High-Throughput DNN Training”, European Conference on Computer Vision (ECCV), Tel-Aviv, Israel, 2022. [Supplementary]

[USENIX ATC ‘21] Sam Son, Seung Yul Lee, Yunho Jin, Jonghyun Bae, Jinkyu Jeong, Tae Jun Ham, Jae W. Lee, and Hongil Yoon, “ASAP: Fast Mobile Application Switch via Adaptive Prepaging”, USENIX Annual Technical Conference (ATC), Virtual, 2021. [Slides]

[FAST ‘21] Shine Kim*, Yunho Jin*, Gina Sohn, Jonghyun Bae, Tae Jun Ham, and Jae W. Lee, “Behemoth: A Flash-centric Training Accelerator for Extreme-scale DNNs”, USENIX Conference on File and Storage Technologies (FAST), Virtual, 2021. [Slides]
* Equal Contributions

[FAST ‘21] Jonghyun Bae, Jongsung Lee, Yunho Jin, Sam Son, Shine Kim, Hakbeom Jang, Tae Jun Ham, and Jae W. Lee, “FlashNeuron: SSD-Enabled Large-Batch Training of Very Deep Neural Networks”, USENIX Conference on File and Storage Technologies (FAST), Virtual, 2021. [Slides]

[HPCA ‘21] Young H. Oh, Seonghak Kim, Yunho Jin, Sam Son, Jonghyun Bae, Jongsung Lee, Yeonhong Park, Dong Uk Kim, Tae Jun Ham, and Jae W. Lee, “Layerweaver: Maximizing Resource Utilization of Neural Processing Units via Layer-Wise Scheduling”, IEEE International Symposium on High Performance Computer Architecture (HPCA), Seoul, Korea, 2021. [Slides]

[ISCA ‘20] Gyusun Lee*, Wenjing Jin*, Wonsuk Song, Jeonghun Gong, Jonghyun Bae, Tae Jun Ham, Jae W. Lee, and Jinkyu Jeong, “A Case for Hardware-based Demand Paging”, ACM/IEEE International Symposium on Computer Architecture (ISCA), Valencia, Spain, 2020. [Slides]
* Equal Contributions

[USENIX ATC ‘19] Shine Kim, Jonghyun Bae, Hakbeom Jang, Wenjing Jin, Jeonghun Gong, Seungyeon Lee, Tae Jun Ham, and Jae W. Lee, “Practical Erase Suspension for Modern Low-latency SSD”, USENIX Annual Technical Conference (ATC), Renton, WA, 2019. [Slides]

[BigData ‘17] Jonghyun Bae*, Hakbeom Jang*, Wenjing Jin, Jun Heo, Jaeyoung Jang, Joo-Young Hwang, Sangyeun Cho, and Jae W. Lee, “Jointly Optimizing Task Granularity and Concurrency for In-Memory MapReduce Frameworks”, IEEE International Conference on Big Data (BigData), Boston, MA, December 2017. [Slides]
* Equal Contributions

International Journals

[IEEE Micro ‘19] Jonghyun Bae, Hakbeom Jang, Jeonghun Gong, Wenjing Jin, Shine Kim, Jaeyoung Jang, Tae Jun Ham, Jinkyu Jeong, and Jae W. Lee, “SSDStreamer: Specializing I/O Stack for Large-Scale Machine Learning”, IEEE Micro, Sept/Oct 2019.

[IEICE TIS ‘19] Hakbeom Jang, Jonghyun Bae, Tae Jun Ham, and Jae W. Lee, “Eager Memory Management for In-Memory Data Analytics”, IEICE Transactions on Information and Systems, 2019.

International Workshops

[IPDPSW ‘24] Jonghyun Bae, Jong Youl Choi, Massimiliano Lupo Pasini, Kshitij Mehta, and Khaled Ibrahim, “MDLoader: A Hybrid Model-driven Data Loader for Distributed Deep Neural Networks Training”, IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), San Francisco, CA, May 2024 (To appear).

[MLG-HPCE ‘23] Jong Youl Choi, Massimiliano Lupo Pasini, Pei Zhang, Kshitij Mehta, Frank Liu, Jonghyun Bae, and Khaled Ibrahim, “DDStore: Distributed Data Store for Scalable Training of Graph Neural Networks on Large Atomistic Modeling Datasets”, Workshop on Machine Learning with Graphs in High Performance Computing Environments (MLG-HPCE), held in conjunction with SC ‘23, Denver, CO, Nov 2023.

[APSys ‘23] Woohyeon Baek*, Jonghyun Bae*, Donghyun Lee, Hyunwoong Bae, Yeonhong Park, and Jae W. Lee, “Liquid: Mix-and-Match Multiple Image Formats to Balance DNN Training Pipeline”, ACM SIGOPS Asia-Pacific Workshop on Systems (APSys), Seoul, Korea, Aug 2023.
* Equal Contributions

[Spark Summit East ‘16] Jonghyun Bae, Sangoh Jeong, Wenjing Jin, and Jae W. Lee, “ggplot2.SparkR: Rebooting ggplot2 for Scalable Big Data Visualization”, Spark Summit East, New York City, NY, February 2016. [Slides]

[UseR!-Poster ‘14] ChungHa Sung, JongHyun Bae, SangGi Hong, TaeJoon Song, Jae W. Lee, and Junghoon Lee, “RIGHT: An HTML Canvas and JavaScript-based Interactive Data Visualization Package for Linked Graphics”, The R User Conference, Los Angeles, CA, July 2014. [Poster]


Experiences

Lawrence Berkeley National Laboratory (CA, USA)
October 2022 – March 2024
Position: Postdoctoral researcher
Supervisor: Leonid Oliker
Analyzing and optimizing the performance of scientific applications in HPC environments

AI Institute at Seoul National University (AIIS) (Seoul, Korea)
April 2022 – August 2022
Position: Postdoctoral researcher
Supervisor: Jae W. Lee
Leveraging high-performance storage systems for ML/AI performance optimization

NAVER Clova AI Research (Seongnam, Korea)
May 2020 - August 2020
Position: Research intern
Collaborator: Ji-Hoon Kim
Research on automatically searching for effective data augmentation policies

Google Summer of Code
March 2014 - August 2014
Improving the R-Interactive-Graphics-via-HTml (RIGHT) Package


Activities

Talks

Korea Electric Power Corporation (KEPCO) Data Science Lab (Seoul, Korea)
Invited talk: Technical seminar
A Large-Batch, High-Throughput Training System for Deep Neural Networks

Korea Computer Congress 2021 (Jeju, Korea)
Invited talk: Top Conference session
FlashNeuron: SSD-Enabled Large-Batch Training of Very Deep Neural Networks