Can Large Language Models Keep Up? Benchmarking Online Adaptation to Continual Knowledge Streams

Jiyeon Kim, Hyunji Lee, Dylan Zhou, Sue Hyun Park, Seunghyun Yoon, Trung Bui, Franck Dernoncourt, Sungmin Cha, Minjoon Seo

Language Model · ACL 2026

Not All Bits Are Equal: Scale-Dependent Memory Optimization Strategies for Reasoning Models

Junhyuck Kim, Ethan Ewer, Taehong Moon, Jongho Park, Dimitris Papailiopoulos

Language Model · ICLR 2026

ParallelBench: Understanding the Trade-offs of Parallel Decoding in Diffusion LLMs

Wonjun Kang, Kevin Galim, Seunghyuk Oh, Minjae Lee, Yuchen Zeng, Shuibai Zhang, Coleman Richard Charles Hooper, Yuezhou Hu, Hyung Il Koo, Nam Ik Cho, Kangwook Lee

Language Model · ICLR 2026

Draft-based Approximate Inference for LLMs

Kevin Galim, Ethan Ewer, Wonjun Kang, Minjae Lee, Hyung Il Koo, Kangwook Lee

Language Model · ICLR 2026

T1: Tool-integrated Verification for Test-time Compute Scaling in Small Language Models

Minki Kang, Jongwon Jeong, Jaewoong Cho

Language Model · ICLR 2026

Orak: A Foundational Benchmark for Training and Evaluating LLM Agents on Diverse Video Games

Dongmin Park, Minkyu Kim, Beongjun Choi, Junhyuck Kim, Keon Lee, Jonghyun Lee, Inkyu Park, Byeong-Uk Lee, Jaeyoung Hwang, Jaewoo Ahn, Ameya Sunil Mahabaleshwarkar, Bilal Kartal, Pritam Biswas, Yoshi Suhara, Kangwook Lee, Jaewoong Cho

Language Model · ICLR 2026

Distilling LLM Agent into Small Models with Retrieval and Code Tools

Minki Kang, Jongwon Jeong, Seanie Lee, Jaewoong Cho, Sung Ju Hwang

Language Model · NeurIPS 2025

Task Diversity Shortens the ICL Plateau

Jaeyeon Kim, Sehyun Kwon, Joo Young Choi, Jongho Park, Jaewoong Cho, Jason D. Lee, Ernest K. Ryu

Language Model · TMLR 2025

Learning to Better Search with Language Models via Guided Reinforced Self-Training

Jaeyeon Kim, Sehyun Kwon, Joo Young Choi, Jongho Park, Jaewoong Cho, Jason D. Lee, Ernest K. Ryu

Language Model · NeurIPS 2025

Delving into Large Language Models for Effective Time-Series Anomaly Detection

Junwoo Park, Kyudan Jung, Dohyun Lee, Hyuck Lee, Daehoon Gwak, ChaeHun Park, Jaegul Choo, Jaewoong Cho

Language Model · NeurIPS 2025
