Yeonjoon Jung

Undergraduate Student at POSTECH · ML Researcher/Engineer at SqueezeBits

Hi, my name is Yeonjoon Jung, and I am an undergraduate student at POSTECH majoring in Convergence IT Engineering & Computer Science and Engineering.

I am currently taking a leave of absence to complete my mandatory alternative military service as an ML Researcher/Engineer at SqueezeBits, where I focus on optimizing and accelerating AI models.

My recent research interests center on Efficient AI, including quantization, inference optimization, and parameter-efficient fine-tuning (PEFT), with applications to large language models (LLMs) and diffusion models.

I am always open to collaborations and new research opportunities. Feel free to contact me.

Papers

GraLoRA: Granular Low-Rank Adaptation for Parameter-Efficient Fine-Tuning

Yeonjoon Jung, Daehyun Ahn, Hyungjun Kim, Taesu Kim, Eunhyeok Park

Neural Information Processing Systems (NeurIPS) 2025 · Spotlight

Triplet Edge Attention for Algorithmic Reasoning

Yeonjoon Jung, Sungsoo Ahn

Learning on Graphs Conference (LoG) 2023 · Extended abstract

Blogs

Reliable & Scalable Synthetic Data for Physical AI (Part 2)

On scaling synthetic data generation for Physical AI.

Reliable & Scalable Synthetic Data for Physical AI (Part 1)

On building reliable synthetic data pipelines for Physical AI.

Winning both speed and quality: How Yetter deals with diffusion models

Introducing an efficient pipeline for diffusion model inference.

GraLoRA: Boosting Fine-Tuning Accuracy Without Extra Cost

Introducing GraLoRA, a novel LoRA fine-tuning method.

[vLLM vs TensorRT-LLM] #13. Vision Language Models

Exploring Vision Language Model serving.

[vLLM vs TensorRT-LLM] #12. Automatic Prefix Caching

The effectiveness of prefix caching in LLM serving.

[vLLM vs TensorRT-LLM] #11. Speculative Decoding

Understanding speculative decoding in LLM serving.

[vLLM vs TensorRT-LLM] #2. Towards Optimal Batching for LLM Serving

Analyzing batching in LLM serving.

[vLLM vs TensorRT-LLM] #1. An Overall Evaluation

Evaluating LLM serving with key metrics.

Education

POSTECH

03/2020 - Present

Major: Convergence IT Engineering and Computer Science and Engineering

Korea Science Academy of KAIST

03/2017 - 02/2020