
[Docs] Prompt engineering strategy guide #371

Closed
aaronvg opened this issue Jan 26, 2024 · 0 comments
Labels
invalid This doesn't seem right

Comments

@aaronvg (Contributor) commented Jan 26, 2024

  1. Chain of thought
  2. Add examples with the structure you want
  3. Reduce the input size
  4. Identify contradicting statements in your prompts. For example, saying “identify all facts and claims” but then saying “exclude opinions” later in the prompt is a conflict the guide should point out.
  5. For long inputs, use “here are all of the relevant facts”.
    1. “Json array with all the relevant facts\n print_type(output)\nAll facts or claims json:”
      • Works if you’re trying to analyze every sentence.
  6. Enumerate every sentence you want the model to take a look at.
  7. Symbol tuning
  8. Use delimiters for inputs and examples
  9. Use “output json array” when the output is expected to be an array and “output json” when it is not.
  10. Add a category that acts as an “out” (for example, an “other” / “none of the above” option); see the sketches after this list.
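
As a rough illustration (not something taken from the docs themselves), here is a minimal Python sketch of how strategies 1, 2, 6, 8, 9, and 10 might combine into a single classification prompt; the category names and example messages are invented for the sketch:

```python
# Rough illustration of strategies 1, 2, 6, 8, 9, and 10 combined in one prompt.
# Category names and example messages are invented for this sketch.
import json

# Strategy 10: include an explicit "out" category ("other") as an escape hatch.
CATEGORIES = ["billing", "bug_report", "feature_request", "other"]

# Strategy 2: a few-shot example showing the exact output structure we want.
EXAMPLE_INPUT = "1. The app crashes every time I open the settings page."
EXAMPLE_OUTPUT = [{"index": 1, "category": "bug_report",
                   "reasoning": "Describes a crash, i.e. broken behavior."}]


def build_prompt(messages: list[str]) -> str:
    # Strategy 6: enumerate every item the model should look at.
    numbered = "\n".join(f"{i + 1}. {m}" for i, m in enumerate(messages))
    # Strategy 1: an explicit "think step by step" instruction (chain of thought).
    # Strategy 8: delimiters (<input>/<output> tags) around inputs and examples.
    # Strategy 9: ask for a "json array" because we expect one object per message.
    return (
        f"Classify each message into one of: {', '.join(CATEGORIES)}.\n"
        'If none of the categories fit, use "other".\n'
        "Think step by step about each message before answering.\n\n"
        "Example:\n"
        f"<input>\n{EXAMPLE_INPUT}\n</input>\n"
        f"<output>\n{json.dumps(EXAMPLE_OUTPUT)}\n</output>\n\n"
        f"<input>\n{numbered}\n</input>\n\n"
        "Output json array:"
    )


if __name__ == "__main__":
    print(build_prompt([
        "Why was I charged twice this month?",
        "It would be great to have a dark mode.",
    ]))
```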
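
Similarly, a minimal sketch of strategies 5 and 6 for long inputs, assuming that print_type(output) in the snippet above renders the output schema (stood in for here by a hand-written placeholder string); the sentence splitting and field names are also invented:

```python
# Minimal sketch of strategies 5 and 6 for long inputs. The naive sentence
# splitting and field names are invented; output_schema stands in for whatever
# print_type(output) would render in the original snippet.

def build_fact_extraction_prompt(document: str) -> str:
    # Strategy 6: enumerate every sentence the model should consider.
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    numbered = "\n".join(f"{i + 1}. {s}." for i, s in enumerate(sentences))
    # Hand-written placeholder for the rendered output type.
    output_schema = '[{"sentence_index": int, "fact": string}]'
    # Strategy 5: "here are all of the relevant facts" framing, JSON-array ending.
    return (
        "Here are all of the relevant facts:\n"
        f"{numbered}\n\n"
        "Json array with all the relevant facts\n"
        f"{output_schema}\n"
        "All facts or claims json:"
    )


if __name__ == "__main__":
    print(build_fact_extraction_prompt(
        "The rover landed in 2021. It has driven over 20 km. The mission length is unknown."
    ))
```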
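
And a hedged sketch of strategy 7 (symbol tuning) applied at prompt time: the few-shot labels are arbitrary symbols rather than meaningful words, so the model has to infer the label mapping from the examples instead of from its prior about the label names. The reviews and symbols below are made up:

```python
# Hedged sketch of symbol tuning at prompt time: arbitrary symbols replace
# natural-language labels in the few-shot examples. Reviews and symbols are
# invented for this sketch.

# Arbitrary symbols stand in for "positive" / "negative".
LABELS = {"positive": "A7", "negative": "Q3"}

FEW_SHOT = [
    ("The battery lasts all day and the screen is gorgeous.", "positive"),
    ("Broke after a week and support never replied.", "negative"),
    ("Exactly what I hoped for, would buy again.", "positive"),
]


def build_symbol_tuned_prompt(review: str) -> str:
    # Each example pairs a review with its symbolic label.
    examples = "\n".join(
        f"Review: {text}\nLabel: {LABELS[label]}" for text, label in FEW_SHOT
    )
    return f"{examples}\nReview: {review}\nLabel:"


if __name__ == "__main__":
    print(build_symbol_tuned_prompt("Arrived late and the box was crushed."))
```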

Each example should be accompanied by a runnable BAML project and a link to the relevant research papers.
