The Emperor Has No Clothes
"Prompt Engineer — $300K base + equity"
I saw this job posting last week and felt a deep, existential sadness for our industry. We have collectively decided that writing English sentences to a chatbot is a $300K skill. Let that sink in.
Prompt engineering is not engineering. It's not computer science. It's not even technical writing. It's figuring out how to ask a question clearly — a skill that every competent professional should already have.
The Skill Ceiling Is Zero
Here's why prompt engineering will never be a real discipline: the models get better at understanding bad prompts faster than you get better at writing good ones.
{
"type": "pipeline",
"title": "Model Evolution — Decreasing Prompt Complexity",
"steps": [
{ "label": "GPT-3 (2022) — 10 techniques required", "color": "red" },
{ "label": "GPT-4o (2024) — 5 techniques required", "color": "amber" },
{ "label": "Claude Sonnet 4 (2025) — 2 techniques required", "color": "green" },
{ "label": "o3 (2025) — 0 techniques required", "color": "blue" }
]
}
Every "prompt engineering technique" from 2023 is now either:
- Built into the model (chain-of-thought is default behavior)
- Irrelevant (temperature hacking, token manipulation)
- Replaced by better abstractions (structured output, tool use)
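To make the "better abstractions" point concrete: where a 2023-era prompt begged the model to "respond only in JSON" and then regex-scraped the reply, modern APIs let you declare a schema and hand back parsed, validated objects. Here's a minimal stdlib-only sketch of the validation half; the `raw_reply` string and the `parse_structured` helper are hypothetical illustrations, not any vendor's actual API:

```python
import json

# Hypothetical raw model reply. Real structured-output APIs constrain
# generation to the schema, so this parse step cannot fail in practice.
raw_reply = '{"sentiment": "negative", "confidence": 0.87}'

# A toy schema: field name -> expected Python type.
schema = {"sentiment": str, "confidence": float}

def parse_structured(raw: str, schema: dict) -> dict:
    """Parse a JSON reply and type-check each field against the schema."""
    data = json.loads(raw)
    for field, expected_type in schema.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"field {field!r} is not a {expected_type.__name__}")
    return data

print(parse_structured(raw_reply, schema))
```

The point isn't that this code is clever. It's that the clever part moved into the platform, and the "engineering" left over is ordinary input validation.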
What's Actually Valuable
The real skill isn't prompting. It's system design around LLMs:
- How do you handle failures and retries?
- How do you build evaluation pipelines?
- How do you manage context windows efficiently?
- How do you orchestrate multi-step agent workflows?
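The first item on that list is already real code, not clever English. A sketch of what "handle failures and retries" actually looks like, with a fake `call_llm` standing in for a real model API (everything here is illustrative; the names and the simulated failure rate are assumptions, not any provider's interface):

```python
import random
import time

random.seed(0)  # seeded so this sketch is reproducible

def call_llm(prompt: str) -> str:
    """Stand-in for a real model API call. Fails randomly to simulate
    transient errors like rate limits and timeouts."""
    if random.random() < 0.5:
        raise TimeoutError("simulated transient failure")
    return f"answer to: {prompt}"

def call_with_retries(prompt: str, max_attempts: int = 5,
                      base_delay: float = 0.01) -> str:
    """Retry a flaky LLM call with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return call_llm(prompt)
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise  # out of budget: surface the failure to the caller
            # back off base * 2^attempt, with jitter to avoid thundering herds
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
    raise RuntimeError("unreachable")

def passes_eval(output: str, must_contain: str) -> bool:
    """Trivial evaluation gate. Real pipelines score outputs against
    rubrics or golden sets; the shape of the check is the same."""
    return must_contain in output

result = call_with_retries("summarize the release notes")
print(result, passes_eval(result, "release notes"))
```

None of this involves wording a prompt. It's retry budgets, backoff policy, and acceptance criteria, i.e. the same reliability work you'd do around any flaky remote dependency.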
That's engineering. That's what companies should be paying $300K for. Not "I know to add 'think step by step' at the end of a prompt."
If your entire job can be replaced by a one-line improvement in the model's system prompt, you don't have a career — you have a temporary workaround.
