Hello! I am currently a first-year M.S. student at the Data Intelligence (DLI) Lab at Yonsei University, advised by both Prof. Jinyoung Yeo and Prof. Dongha Lee. My research interests lie in understanding the limitations of large language models (LLMs) and exploring their potential to overcome these challenges.
Below are some keywords describing my recent research interests:
‡ indicates equal contribution.
Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models
Hyungjoo Chae‡, Yeonghyeon Kim, Seungone Kim, Kai Tzu-iunn Ong, Beong-woo Kwak, Moohyeon Kim, Seonghwan Kim, Taeyoon Kwon, Jiwan Chung, Youngjae Yu, Jinyoung Yeo
arXiv preprint (under review).
Can Large Language Models be Good Emotional Supporter? Mitigating Preference Bias on Emotional Support Conversation
Dongjin Kang‡, Sunghwan Kim‡, Taeyoon Kwon, Seungjun Moon, Hyunsouk Cho, Youngjae Yu, Dongha Lee, Jinyoung Yeo
arXiv preprint (under review).
Coffee: Boost Your Code LLMs by Fixing Bugs with Feedback
Seungjun Moon‡, Yongho Song‡, Hyungjoo Chae‡, Taeyoon Kwon, Dongjin Kang, Kai Tzu-iunn Ong, Seung-won Hwang, Jinyoung Yeo
arXiv preprint (under review).
Multi-task Deep Learning for Joint Detection of Necrotizing Viral and Non-infectious Retinitis from Common Blood and Serology Test Data
Kai Tzu-iunn Ong‡, Taeyoon Kwon‡, Harok Jang, Min Kim, Christopher Seungkyu Lee, Suk Ho Byeon, Sung Soo Kim, Jinyoung Yeo, Eun Young Choi
IOVS: Investigative Ophthalmology & Visual Science, 2024.
Large Language Models are Clinical Reasoners: Reasoning-Aware Diagnosis Framework with Prompt-Generated Rationales
Taeyoon Kwon‡, Kai Tzu-iunn Ong‡, Dongjin Kang, Seungjun Moon, Jeong Ryong Lee, Dosik Hwang, Yongsik Sim, Beomseok Sohn, Dongha Lee, Jinyoung Yeo
AAAI'24: The 38th Annual AAAI Conference on Artificial Intelligence. 2024.
Dialogue Chain-of-Thought Distillation for Commonsense-aware Conversational Agents
Hyungjoo Chae‡, Yongho Song‡, Kai Tzu-iunn Ong, Taeyoon Kwon, Minjin Kim, Youngjae Yu, Dongha Lee, Dongyeop Kang, Jinyoung Yeo
EMNLP'23: The 2023 Conference on Empirical Methods in Natural Language Processing. 2023.