This paper examines some of the most common misconceptions about large language models (LLMs): that they are sentient or conscious, that their outputs are always accurate, and that they can replace human creativity. The paper also proposes a strategy for addressing these misconceptions, which involves educating the public about the capabilities and limitations of LLMs, developing guidelines for their responsible use, and conducting further research into the potential impact of LLMs on society.