Many have argued that language is a critical foundation of complex thought. However, empirical evidence does not support this hypothesis: many forms of thinking and reasoning do not recruit the brain’s language areas, and the loss of linguistic ability (e.g., in aphasia) can leave thinking and reasoning intact. This evidence establishes that language is not necessary for thought. But could language be sufficient for complex thought? Does being a competent language user entail the ability to reason? The recent rise of large language models (LLMs) has brought this question into focus. These models have achieved a remarkable degree of linguistic competence, and their representations successfully capture human neural responses during language processing. Many have argued that LLMs have additionally developed the ability to think and reason. I will describe a framework for thinking about language and thought in LLMs. In particular, I will argue that distinguishing between linguistic capacities (‘formal linguistic competence’) and reasoning capacities (‘functional linguistic competence’) is critical for understanding and evaluating current AI models, and for building new models that can use language in human-like ways.