Tianyi Yang
I study everything related to language models: training, inference, optimization, and deployment.
I am less of an engineer and more of an independent researcher. My research interests lie in natural language processing (NLP), particularly at its intersections with machine learning (ML) and reinforcement learning (RL), with a focus on Large Language Models (LLMs). These interests have drawn me toward two main research directions:
Is language itself reasoning, externalized?
How can we use language as a medium of thought to elicit stronger in-context reasoning and planning from a model, and build models that can handle complex, open-ended tasks with long trajectories?
Has human performance become the lower bound for LLMs?
When average human performance starts to hinder the improvement of LLMs, we need better benchmarks that can not only scale with LLM development but also align with our ultimate goals of building safe and useful LLMs.
Through my experience in NLP, ML, and RL, I am inspired to work towards building LMs that can reason and interact meaningfully with humans, while also contributing to scalable and accurate evaluation methods.