# LLM Engineering Cheatsheet
A timeless guide to **thinking and building like a prompt engineer**. This cheatsheet focuses on core principles and patterns that apply across any model, provider, or tool — whether you're using OpenAI, Claude, Llama, or something that doesn't exist yet.
> This is not a cookbook or quickstart. It's a mindset guide — built for those who want to reason clearly and build reliably with LLMs.

---
## Core Philosophy
LLMs are **probabilistic next-token predictors**, not deterministic logic machines. Prompt engineering is about:
- Designing **clear, structured inputs**
- Working within **context and token limits**
- Thinking **iteratively**, not magically
- Debugging failures like a **system**, not like a mystery

Treat prompts as **interfaces**, not incantations.

---
| 39 | +[](https://github.com/Front-Writer/llm-engineering-cheatsheet/releases) |
21 | 40 |
|
22 |
## Prompting Patterns (Universal)
### Zero-Shot
Ask the model to do a task with no examples.
```txt
"Summarize the following article in 3 bullet points: ..."
```
### One-Shot / Few-Shot
Give one or more examples to improve reliability.

```txt
Review: "Great product, but shipping was late."
Response: "Thanks for your feedback! Sorry about the delay..."

Review: "Terrible quality."
Response: "We're sorry to hear that. Could you share more details so we can improve?"
```
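With chat-style APIs, few-shot examples are often passed as prior user/assistant turns rather than pasted into one prompt string. A minimal sketch using the same OpenAI client as the example at the end of this guide; the model name and the final review are illustrative placeholders:

```python
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# Each example becomes a prior user/assistant exchange;
# the real input is appended as the final user message.
messages = [
    {"role": "system", "content": "You reply to customer reviews politely and concisely."},
    {"role": "user", "content": 'Review: "Great product, but shipping was late."'},
    {"role": "assistant", "content": "Thanks for your feedback! Sorry about the delay..."},
    {"role": "user", "content": 'Review: "Terrible quality."'},
    {"role": "assistant", "content": "We're sorry to hear that. Could you share more details so we can improve?"},
    {"role": "user", "content": 'Review: "Arrived two weeks late and the box was damaged."'},  # illustrative input
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```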
### Role-Based Prompting

Set a role for the model to adopt.

```txt
System: You are a technical support agent who speaks clearly and concisely.
User: My internet keeps cutting out. What should I do?
```
### Constrained Output

Ask for output formats explicitly.

```txt
"List the steps as JSON: [step1, step2, step3]"
```
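Constrained output pairs well with defensive parsing: request JSON, then validate the reply before using it. A minimal sketch, assuming the model was asked for a JSON array of strings; the fallback behavior is illustrative:

```python
import json


def parse_steps(raw: str) -> list[str]:
    """Parse a reply that was asked to return a JSON array of step strings."""
    try:
        steps = json.loads(raw)
    except json.JSONDecodeError:
        # The model ignored the format; fall back to plain line splitting.
        return [line.strip("-* ").strip() for line in raw.splitlines() if line.strip()]
    if isinstance(steps, list) and all(isinstance(step, str) for step in steps):
        return steps
    raise ValueError(f"Unexpected JSON shape: {steps!r}")


print(parse_steps('["Restart the router", "Wait 30 seconds", "Reconnect"]'))
```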
---

## Prompt Structure: The Anatomy

Always structure prompts with these components:

1. **Role** – Who is the model?
2. **Task** – What do you want?
3. **Input** – What information does the model need?
4. **Constraints** – What form should the output take?
5. **Examples** _(optional)_ – Show what success looks like

### Example Prompt (all parts applied)

```txt
System: You are a helpful travel assistant that gives concise city guides.
User: I'm visiting Tokyo for 3 days. Suggest an itinerary with 3 activities per day.
Constraints:
- Format your response as bullet points grouped by day.
- Keep each activity description under 20 words.
Example:
Day 1:
- Visit Meiji Shrine in the morning
- Eat sushi at Tsukiji Market
- Explore Shibuya Crossing at night
```
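The same anatomy can be assembled in code, which keeps every component explicit and easy to swap. A minimal sketch; the function name and message layout are illustrative, not a fixed convention:

```python
def build_messages(role: str, task: str, input_text: str,
                   constraints: list[str], example: str = "") -> list[dict]:
    """Combine Role, Task, Input, Constraints, and an optional Example into chat messages."""
    user_parts = [task, input_text, "Constraints:"]
    user_parts += [f"- {c}" for c in constraints]
    if example:
        user_parts += ["Example:", example]
    return [
        {"role": "system", "content": role},
        {"role": "user", "content": "\n".join(user_parts)},
    ]


messages = build_messages(
    role="You are a helpful travel assistant that gives concise city guides.",
    task="Suggest an itinerary with 3 activities per day.",
    input_text="I'm visiting Tokyo for 3 days.",
    constraints=[
        "Format your response as bullet points grouped by day.",
        "Keep each activity description under 20 words.",
    ],
    example="Day 1:\n- Visit Meiji Shrine in the morning",
)
```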
---

## Context Management

- Be **aware of token limits** (e.g. 4k, 8k, 128k)
- Use **summarization** for long chat histories
- Drop irrelevant history when possible
- **Explicit > implicit** — don't assume the model remembers everything
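
To stay inside a token budget, older turns can be dropped or summarized before each call. A minimal sketch that keeps the system message and the most recent turns, using a rough character count instead of a real tokenizer; the budget and heuristic are illustrative:

```python
def trim_history(messages: list[dict], max_chars: int = 8000) -> list[dict]:
    """Keep the system message plus as many recent turns as fit a rough size budget."""
    system, turns = messages[0], messages[1:]
    kept: list[dict] = []
    total = len(system["content"])
    # Walk backwards so the newest turns are kept first.
    for msg in reversed(turns):
        total += len(msg["content"])
        if total > max_chars:
            break
        kept.append(msg)
    return [system] + list(reversed(kept))
```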
---

## Evaluation Principles

LLM output is fuzzy. Define quality like this:

- Does it meet the **task objective**?
- Is the output **formatted correctly**?
- Would a human say it's **reasonable**?
- Can you detect regressions with **A/B comparisons**?
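
These questions can be turned into scripted checks so two prompt variants are compared on the same inputs. A minimal sketch of an A/B harness; the check functions and scoring are illustrative:

```python
from typing import Callable

Check = Callable[[str], bool]


def score(output: str, checks: list[Check]) -> float:
    """Fraction of checks an output passes."""
    return sum(check(output) for check in checks) / len(checks)


def compare(outputs_a: list[str], outputs_b: list[str], checks: list[Check]) -> str:
    """Average the scores of two prompt variants over the same test inputs."""
    avg_a = sum(score(o, checks) for o in outputs_a) / len(outputs_a)
    avg_b = sum(score(o, checks) for o in outputs_b) / len(outputs_b)
    return f"Prompt A: {avg_a:.2f} | Prompt B: {avg_b:.2f}"


checks: list[Check] = [
    lambda out: out.strip().startswith("-"),  # formatted as bullet points
    lambda out: len(out.split()) < 120,       # stays concise
]
print(compare(["- Day 1: Meiji Shrine"], ["Tokyo is a wonderful city..."], checks))
```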
---

## Common Failure Modes

| Symptom         | Likely Cause                             |
| --------------- | ---------------------------------------- |
| Hallucination   | Vague or underspecified prompts          |
| Repetition      | Poor constraint or unclear output format |
| Refusal         | Misalignment between task and role       |
| Loss of context | Too much history or poor summarization   |

---

## Recommended Resources

- [OpenAI Best Practices](https://platform.openai.com/docs/guides/prompt-engineering)
- [LangChain Docs](https://python.langchain.com/docs/introduction)
- [Ollama for Local Models](https://ollama.com)

---
## Minimal Python Example

```python
import os

from openai import OpenAI

# The API key is read from the OPENAI_API_KEY environment variable.
client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),
)

# A single chat completion request: a system role plus one user task.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a concise technical writer."},
        {
            "role": "user",
            "content": "Explain what a vector database is in simple terms.",
        },
    ],
)

print(response.choices[0].message.content)
```
---

## Final Thought

This guide helps you stay grounded when everything else is changing. Focus on clarity. Prompt with intent. And always think like an engineer.