What is ChatGPT?
ChatGPT is an AI chatbot made by OpenAI. You type something, and it responds with human-like text. But what's actually happening under the hood?
ChatGPT is built on a Large Language Model (LLM)—a type of AI that's been trained on massive amounts of text from the internet. It learned patterns in how words and sentences fit together.
How It Works
When you type a prompt, the AI predicts the most likely next words based on patterns it learned. It's essentially very sophisticated autocomplete.
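To make "sophisticated autocomplete" concrete, here is a deliberately tiny sketch: a bigram model that counts which word follows which in a toy corpus, then predicts the most frequent follower. Real LLMs work over tokens with billions of learned parameters, not raw word counts, but the prediction idea is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text real LLMs train on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which word ("training").
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` during training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it followed "the" twice, more than "mat" or "fish"
```

Everything ChatGPT does is, at bottom, a vastly scaled-up version of this loop: given the text so far, pick a likely next token, append it, repeat.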
Key Concepts
Tokens
LLMs don't read words directly; they read "tokens," which are chunks of text. A word might be one token or several. Depending on the tokenizer, "ChatGPT" might be split into two tokens, "Chat" and "GPT." This matters because:
- There's a limit on how many tokens the AI can process at once (context window)
- You're often charged per token when using AI APIs
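A rough sketch of why token counts matter in practice. The "tokenizer" here is just whitespace splitting (real models use byte-pair encoding, where words can split into multiple tokens), and the context window and price are made-up numbers for illustration:

```python
# Crude stand-in for a real tokenizer: treat each word as one token.
# Real byte-pair-encoding tokenizers often split words into several tokens.
def rough_token_count(text):
    return len(text.split())

CONTEXT_WINDOW = 8          # hypothetical limit; real models allow thousands
PRICE_PER_TOKEN = 0.000002  # hypothetical API price, for illustration only

prompt = "Write a short poem about the sea"
n = rough_token_count(prompt)
print(n)                      # 7 tokens
print(n <= CONTEXT_WINDOW)    # fits in the window
print(f"${n * PRICE_PER_TOKEN:.6f}")  # estimated cost for the prompt
```

The same bookkeeping, with a real tokenizer and real limits, is what determines whether a long document fits in a model's context window and what an API call costs.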
Training vs. Using
Training is when the model learns from data—this happened before you ever used it. OpenAI trained GPT on billions of web pages, books, and articles.
Inference is when you actually use it—the model applies what it learned to answer your questions. The model isn't learning from your conversations (with some exceptions).
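The split between the two phases can be sketched with a toy class: `train()` runs once and fixes the model's knowledge, and `generate()` only reads that frozen state. This mirrors (very loosely) how a deployed LLM answers your questions without updating its weights:

```python
from collections import Counter, defaultdict

class TinyModel:
    """Illustrates the split: train() runs once, generate() runs many times."""
    def __init__(self):
        self.following = defaultdict(Counter)
        self.trained = False

    def train(self, corpus):
        # Training: learn word-to-next-word counts from data (done once, up front).
        for cur, nxt in zip(corpus, corpus[1:]):
            self.following[cur][nxt] += 1
        self.trained = True

    def generate(self, word):
        # Inference: apply the frozen counts; nothing here updates the model.
        return self.following[word].most_common(1)[0][0]

model = TinyModel()
model.train("the dog chased the dog".split())  # training phase, done once
print(model.generate("the"))  # "dog" — inference, repeatable forever
```

No matter how many times you call `generate()`, `following` never changes. That's why telling ChatGPT a new fact in one chat doesn't teach it anything permanent.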
Hallucinations
Sometimes AI confidently generates false information. This is called a "hallucination." The AI doesn't know what's true—it just predicts plausible-sounding text. Always verify important facts.
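The toy bigram predictor from earlier makes the mechanism visible. If we train it on text containing a factual error, it reproduces that error just as fluently as a truth, because all it tracks is which words follow which:

```python
from collections import Counter, defaultdict

# Train the toy predictor on a sentence containing a factual error.
corpus = "the capital of australia is sydney".split()
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def predict_next(word):
    return following[word].most_common(1)[0][0]

# The model confidently reproduces its training data, mistake included.
# (Canberra, not Sydney, is Australia's capital.)
print(predict_next("is"))  # "sydney"
```

There is no "fact checker" step anywhere in the pipeline; plausibility under the training data is the only criterion.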
Other LLMs
ChatGPT isn't the only LLM. Others include:
- Claude (Anthropic) — Known for being helpful and honest
- Gemini (Google) — Integrated with Google services
- Llama (Meta) — Open source, can run locally
- Mistral — European open-source alternative
Tips for Better Prompts
How you ask affects what you get:
- Be specific — "Write a 200-word email to my boss asking for Friday off" beats "write an email"
- Give context — "I'm a beginner in Python" helps the AI calibrate its response
- Ask for format — "Give me a bullet list" or "Explain like I'm 10"
- Iterate — Say "make it shorter" or "add more examples" to refine
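The first three tips can be captured in a small (hypothetical) helper that assembles task, context, and format into one prompt string; the function name and structure here are illustrative, not any official API:

```python
# Hypothetical helper that packages the prompting tips into one string.
def build_prompt(task, context=None, output_format=None):
    parts = []
    if context:
        parts.append(f"Context: {context}")      # helps the AI calibrate
    parts.append(f"Task: {task}")                # be specific
    if output_format:
        parts.append(f"Format: {output_format}") # ask for a format
    return "\n".join(parts)

prompt = build_prompt(
    task="Explain what a Python decorator is",
    context="I'm a beginner in Python",
    output_format="a bullet list, under 100 words",
)
print(prompt)
```

The fourth tip, iterating, happens in conversation: send the prompt, read the reply, then follow up with "make it shorter" or "add more examples."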
What LLMs Are Good At
- Writing and editing text
- Explaining concepts
- Brainstorming ideas
- Summarizing long documents
- Translating languages
- Writing code
What LLMs Are Bad At
- Math (they often make calculation errors)
- Current events (training data has a cutoff date)
- Citing sources reliably
- Truly original creative work
- Knowing what they don't know
Summary
- LLMs predict the next word based on patterns in training data
- They don't "understand"—they recognize and reproduce patterns
- Better prompts lead to better outputs
- Always verify facts—AI can hallucinate confidently