
Model Prompting Guides

In this section, we cover some of the most recent language models and how they successfully apply the latest and most advanced prompt engineering techniques. In addition, we cover the capabilities of these models across a range of tasks and prompting setups such as few-shot prompting, zero-shot prompting, and chain-of-thought prompting. Understanding these capabilities is important for understanding the limitations of these models and how to use them effectively.
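To make the prompting setups mentioned above concrete, here is a minimal sketch using the OpenAI Python client that sends the same sentiment-classification task as a zero-shot prompt and as a few-shot prompt. The model name, the example task, and the `complete` helper are illustrative assumptions for this sketch, not part of any particular model guide.

```python
# A minimal sketch (Python, openai>=1.0) contrasting zero-shot and few-shot prompting.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def complete(messages):
    """Send a chat prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap in the model you are evaluating
        messages=messages,
        temperature=0,
    )
    return response.choices[0].message.content

# Zero-shot: the task is described directly, with no worked examples.
zero_shot = [
    {"role": "user", "content": "Classify the sentiment of: 'The battery life is disappointing.'"}
]

# Few-shot: a handful of labeled demonstrations precede the actual query.
few_shot = [
    {"role": "user", "content": (
        "Classify the sentiment of each review as Positive or Negative.\n\n"
        "Review: 'Absolutely love this phone.'\nSentiment: Positive\n\n"
        "Review: 'It broke after two days.'\nSentiment: Negative\n\n"
        "Review: 'The battery life is disappointing.'\nSentiment:"
    )}
]

print(complete(zero_shot))
print(complete(few_shot))
```

The same pattern extends to chain-of-thought prompting by including worked reasoning steps in the demonstrations; the model-specific guides in this section discuss which setups each model handles well.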
