The Marriott Mistake - A guide to successful prompt engineering

Patrick Cox

6/28/2024 · 2 min read

Imagine that you're visiting Philadelphia for a conference and staying at the Marriott. After a long flight, you hop in the rental and begin the short trip to the hotel. But thanks to canyon-like roads densely packed with tall buildings, your GPS fails. After your third loop past Independence Hall, you realize that you need a new plan. Frustrated, you ask a police officer, and he succinctly guides you directly to the parking garage adjacent to the Marriott. After gathering your things, you head to the front desk to check in, only to find that you don't have a reservation at that Marriott (there are three in Philadelphia). The lesson: it's really important to ask the right question.

Writing prompts for large language models is very similar. Unless you ask ChatGPT for directions to the exact Marriott where you have a reservation, your odds of success are only 33%. The usefulness and accuracy of results improve with the clarity and specificity of your prompts.

Due to the way large language models (LLMs) like ChatGPT were trained, there are specific prompt structures that will consistently produce the intended outputs. Below are some best practices compiled from a number of sources. Each prompt should include:

1. Assign the LLM a role from which its perspective, approach, and tone will be derived.

o "You’re a cardiothoracic surgeon preparing for a lecture on ventricular stents for an upcoming conference."

o "You’re a fifth-grade teacher preparing a lesson plan on the states of matter"

2. Assign a task – what exactly would you like the model to do?

o “List the top five ventricular stents and their specifications.”

o “Draft a lesson plan that introduces solids, liquids, and gases.”

3. Specify the desired format.

o “Display in a table with product names as columns.”

o “Create a PowerPoint outline with three embedded videos.”

4. Set parameters like time periods, geographies, distribution channels, etc.

o “Focus only on stents approved by the FDA since 2020.”
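The four elements above (role, task, format, and parameters) can be assembled programmatically. Below is a minimal sketch in Python; the `build_prompt` helper is hypothetical, not part of any SDK, and simply concatenates the four parts into one prompt string:

```python
# Hypothetical helper: assemble a prompt from the four recommended parts.
def build_prompt(role, task, output_format, parameters):
    """Combine role, task, format, and parameters into one prompt string."""
    parts = [
        f"Role: {role}",
        f"Task: {task}",
        f"Format: {output_format}",
        f"Parameters: {parameters}",
    ]
    return "\n".join(parts)

prompt = build_prompt(
    role="You're a cardiothoracic surgeon preparing a conference lecture.",
    task="List the top five ventricular stents and their specifications.",
    output_format="Display in a table with product names as columns.",
    parameters="Focus only on stents approved by the FDA since 2020.",
)
print(prompt)
```

The resulting string can be pasted directly into ChatGPT or sent as a message through any LLM API.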

Following are some best practices to improve the efficacy of your prompting. Please feel free to share additional tips and I’ll add them to this list for all who are interested.

1. Provide the model with an example when a specific output format is desired.

2. Use ### or ‘’’ delimiters to separate the task from examples or output parameters.

3. Be directive – not restrictive. Focus on what you want, not what you don’t want.

4. Work backward from your goal – reverse engineer your prompt.

5. Segment long, cumbersome requests into smaller, sequential chunks.

6. Ask for citations for better reliability and context.

7. Define success - provide goals.

o “Reduce bounce rate on the homepage by improving engagement.”

8. Use realistic scenarios.

o “You’re a marketing consultant pitching ChatGPT to a CPG brand to analyze trade promotion ROI.”

9. Encourage analysis.

o “Compare ChatGPT’s and Perplexity’s capabilities in evaluating marketing attribution models and provide a recommendation.”
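Tips 1 and 2 above can be combined in a single prompt. The sketch below is illustrative only: it uses ### delimiters to separate the instructions from a one-shot example of the desired output format:

```python
# Sketch of a prompt that pairs an example (tip 1) with ### delimiters (tip 2)
# to fence the example off from the instructions. Content is illustrative.
prompt = """You're a fifth-grade teacher preparing a lesson on the states of matter.
Summarize each state in one sentence, matching the example below.

###
Example
Solid: A solid keeps its own shape because its particles are locked in place.
###

Now write matching one-sentence summaries for liquids and gases."""
print(prompt)
```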

Large language models are incredibly powerful research tools when used effectively. Asking better questions yields better answers. Best of luck in your discovery.

Happy prompting!

References:

OpenAI prompt engineering guide: https://platform.openai.com/docs/guides/prompt-engineering

OpenAI community, “A Guide to Crafting Effective Prompts for Diverse Applications”: https://community.openai.com/t/a-guide-to-crafting-effective-prompts-for-diverse-applications/493914

TechTarget, “Prompt engineering tips and best practices”: https://www.techtarget.com/searchenterpriseai/tip/Prompt-engineering-tips-and-best-practices

PromptHub, “10 Best Practices for Prompt Engineering with Any Model”: https://www.prompthub.us/blog/10-best-practices-for-prompt-engineering-with-any-model