
Prompting Techniques

Prompt engineering helps you design and refine prompts to get better results from LLMs across a wide range of tasks.

While the previous basic examples were fun, this section covers more advanced prompt engineering techniques that allow us to tackle more complex tasks and improve the reliability and performance of LLMs.
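
As a taste of what follows, here is a minimal sketch contrasting a zero-shot prompt with a few-shot prompt, two of the techniques covered in this section. It assumes the OpenAI Python SDK; the model name and the sentiment examples are illustrative assumptions, not part of the guide.

```python
# Minimal sketch: zero-shot vs. few-shot prompting with the OpenAI Python SDK.
# The model name and the sentiment-classification examples are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Zero-shot: the model receives only the task instruction.
zero_shot = (
    "Classify the sentiment of this text as positive, neutral, or negative:\n"
    "I think the vacation was okay."
)

# Few-shot: the prompt also includes worked examples that demonstrate the task.
few_shot = """Classify the sentiment of each text as positive, neutral, or negative.

Text: The food was amazing! // positive
Text: The service was slow and rude. // negative
Text: I think the vacation was okay. //"""

for prompt in (zero_shot, few_shot):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    print(response.choices[0].message.content)
```

The later pages in this section build on this basic pattern, changing only how the prompt is constructed (chains of thought, retrieved context, tool calls, and so on) rather than the API call itself.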
