Daniel Klein

Professor Klein’s research focuses on statistical natural language processing, including unsupervised learning methods, syntactic parsing, information extraction, and machine translation. For specific projects and publications, see his group’s webpage. He received his BA in Math, CS, and Linguistics (summa cum laude) from Cornell University (1994-1998); an M.St. in Linguistics from Oxford University (1998-1999); and an M.S. and Ph.D. in Computer Science from Stanford University (1999-2004).

Recent honors and awards include the ACM’s Grace Murray Hopper Award, the Sloan Research Fellowship, the NSF CAREER Award, and the Microsoft New Faculty Fellowship. He was awarded the Computer Science Division’s Jim and Donna Gray Award for Excellence in Undergraduate Teaching in 2009, UC Berkeley’s Distinguished Teaching Award in 2010, and the Diane S. McEntyre Award for Excellence in Teaching in 2011. He has won best paper awards with co-authors at NAACL 2010 for “Coreference Resolution in a Modular, Entity-Centered Model,” at ACL 2009 for “K-Best A* Parsing,” at NAACL 2006 for “Prototype-Driven Sequence Learning,” at EMNLP 2004 for “Max-Margin Parsing,” and at ACL 2003 for “Accurate Unlexicalized Parsing.”

His research focuses on the automatic organization of natural language information. Some topics of interest include:

Unsupervised language acquisition
Machine translation
Efficient algorithms for NLP
Information extraction
Linguistically rich models of language
Integrating symbolic and statistical methods for NLP
Organization of the web

Research Thrusts