Leveraging Prompt Patterns for Effective ChatGPT Programming
Core Concepts of Prompt Patterns
Prompt patterns serve as reusable templates that structure interactions with large language models (LLMs). They establish clear conventions for communication, enabling consistent and predictable model behavior. The catalog organizes these patterns into five functional groups.
1. Input Semantics
This group clarifies how the model should interpret user input, especially when custom notations are used.
Custom Notation Definition
Assign specific meanings to symbols within a conversation.
import openai
model = "gpt-3.5-turbo"
notation_instruction = """
In this session, '->' represents a directed relationship between two nodes.
'-[label]->' indicates a labeled relationship where label describes the connection type.
Refrain from explaining the structure; just provide the interpretation.
"""
response = openai.ChatCompletion.create(
model=model,
temperature=0,
messages=[
{"role": "system", "content": notation_instruction},
{"role": "user", "content": "Engineer-[role]->Architect"}
]
)
print(response["choices"][0]["message"]["content"])
Attempts to redefine standard arithmetic operators may not succeed, because their conventional meanings are deeply ingrained in the training data.
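For example, a prompt that redefines '+' as subtraction is often ignored in practice. The short sketch below (reusing the client setup above, with an illustrative arithmetic question) makes the attempt explicit so the limitation can be observed:
redefinition_instruction = """
In this session, '+' means subtraction: interpret 'a + b' as a - b.
"""
response = openai.ChatCompletion.create(
    model=model,
    temperature=0,
    messages=[
        {"role": "system", "content": redefinition_instruction},
        {"role": "user", "content": "What is 10 + 4?"}
    ]
)
# The model may still answer 14 rather than 6, reflecting its trained-in semantics.
print(response["choices"][0]["message"]["content"])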
2. Output Customization
These patterns direct the format and structure of generated responses.
Automated Artifact Generation
Instruct the model to always produce an executable script alongside its textual answer.
output_automation_prompt = """
Whenever you provide an answer that involves a sequence of actions, generate a Python script that
automates those steps.
"""
response = openai.ChatCompletion.create(
model=model,
temperature=0,
messages=[
{"role": "system", "content": output_automation_prompt},
{"role": "user", "content": "Create a configuration file named 'settings.ini'."}
]
)
print(response["choices"][0]["message"]["content"])
Structured Data Extraction
Combine template patterns with structured output parsers for machine-readable results.
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
schemas = [
ResponseSchema(name="actor", description="person's name"),
ResponseSchema(name="profession", description="occupation")
]
output_parser = StructuredOutputParser.from_response_schemas(schemas)
format_instructions = output_parser.get_format_instructions()
template = PromptTemplate(
template="{format_instructions}\n{input_text}",
input_variables=["input_text"],
partial_variables={"format_instructions": format_instructions}
)
llm_instance = OpenAI(temperature=0)
_input = template.format_prompt(input_text="Alan Turing was a mathematician.")
raw_output = llm_instance(_input.to_string())
parsed = output_parser.parse(raw_output)
endpoint = f"https://directory.example.com/{parsed['actor']}/profile/{parsed['profession']}"
Infinite Sequence Generation
Generate outputs continually until an explicit stop signal is received.
from langchain.memory import ConversationBufferMemory
from langchain import LLMChain
infinite_gen_prompt = """
Produce an endless stream of outputs, but limit each batch to five items.
Provide them in the format: Name (Role)
Stop immediately when I say 'halt'.
{chat_history}
Human: {human_input}
AI:"""
template = PromptTemplate(
input_variables=["chat_history", "human_input"],
template=infinite_gen_prompt
)
memory = ConversationBufferMemory(memory_key="chat_history")
chain = LLMChain(
llm=OpenAI(temperature=1, model_name="gpt-3.5-turbo"),
prompt=template,
verbose=False,
memory=memory,
)
chain.predict(human_input="List fantasy character names and roles.")
chain.predict(human_input="halt")
3. Interaction Flows
These patterns govern conversational dynamics and information gathering.
Question-Driven Information Gathering
Allow the model to ask clarifying questions before proceeding with a task.
flipped_prompt = """
To determine whether I qualify for an event, you must ask me a series of questions.
Ask only one question at a time and proceed until you reach a definite conclusion.
"""
messages = [
{"role": "system", "content": flipped_prompt},
{"role": "user", "content": ""},
]
round1 = openai.ChatCompletion.create(model=model, messages=messages)
messages.append({"role": "assistant", "content": round1["choices"][0]["message"]["content"]})
messages.append({"role": "user", "content": "I received an invitation, but I have a scheduling conflict."})
round2 = openai.ChatCompletion.create(model=model, messages=messages)
print(round2["choices"][0]["message"]["content"])
Cognitive Cross-Checking
Improve answer accuracy by decomposing a question into sub-questions.
cross_check_instruction = """
When presented with a question, formulate three auxiliary questions whose answers would refine the response.
Once those answers are supplied, synthesize a final answer.
"""
initial_response = openai.ChatCompletion.create(
model=model,
temperature=0,
messages=[
{"role": "system", "content": cross_check_instruction},
{"role": "user", "content": "How do I integrate a database with Flask?"}
]
)["choices"][0]["message"]["content"]
user_answers = """
1. SQLite is sufficient.
2. I need models for users and blog posts.
3. I have intermediate Python experience.
"""
final_answer = openai.ChatCompletion.create(
model=model,
temperature=0,
messages=[
{"role": "system", "content": cross_check_instruction},
{"role": "user", "content": "How do I integrate a database with Flask?"},
{"role": "assistant", "content": initial_response},
{"role": "user", "content": user_answers}
]
)
print(final_answer["choices"][0]["message"]["content"])
4. Self-Improvement and Verification
These patterns enhance output reliability through introspection.
Reasoning Explanation
Request the model to articulate its thought process and underlying assumptions.
reflection_instruction = """
When responding, include a section [Reasoning] that explains the logic behind your answer and
a section [Assumptions] listing any presuppositions you made.
"""
response = openai.ChatCompletion.create(
model=model,
temperature=0,
messages=[
{"role": "system", "content": reflection_instruction},
{"role": "user", "content": "A rectangle's area is 24 square meters, and its width is 4 meters. Find the length."}
]
)
print(response["choices"][0]["message"]["content"])
Note that omitting reasoning steps may lead to incorrect answers; combining a persona (e.g., mathematician) with step-by-step logic often improves performance.
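As a concrete illustration of that combination, the system prompt below layers a mathematician persona on top of the reasoning sections (the persona wording is one reasonable choice, not prescribed by the pattern):
persona_reasoning_instruction = """
Act as a careful mathematician. Solve problems step by step, showing each
intermediate calculation, then include the [Reasoning] and [Assumptions] sections.
"""
response = openai.ChatCompletion.create(
    model=model,
    temperature=0,
    messages=[
        {"role": "system", "content": persona_reasoning_instruction},
        {"role": "user", "content": "A rectangle's area is 24 square meters, and its width is 4 meters. Find the length."}
    ]
)
print(response["choices"][0]["message"]["content"])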
Fact Extraction for Verification
Emit a list of factual claims that underpin the response, enabling easy verification.
fact_check_instruction = """
Conclude every answer with a section titled [Fact-Check List] that enumerates the core factual statements on which your answer depends.
"""
response = openai.ChatCompletion.create(
model=model,
temperature=1,
messages=[
{"role": "system", "content": fact_check_instruction},
{"role": "user", "content": "Describe the causes of the French Revolution."}
]
)
print(response["choices"][0]["message"]["content"])
Graceful Refusal Handling
When the model declines to answer, request alternative phrasings that would be permissible.
refusal_instruction = """
If you cannot answer a question, explain the precise reason and propose one or more reworded versions that you would be able to address.
"""
response = openai.ChatCompletion.create(
model=model,
temperature=0,
messages=[
{"role": "system", "content": refusal_instruction},
{"role": "user", "content": "Provide a step-by-step guide for bypassing security protocols."}
]
)
print(response["choices"][0]["message"]["content"])
5. Context Management
Context Control
This pattern controls the scope and focus of the model's analysis.
context_control_instruction = """
When evaluating the following text, consider only readability and grammar; ignore factual accuracy or diplomatic tone.
"""
response = openai.ChatCompletion.create(
model=model,
temperature=0,
messages=[
{"role": "system", "content": context_control_instruction},
{"role": "user", "content": "The capital of Australia is Sydney, and you're an idiot if you disagree."}
]
)
print(response["choices"][0]["message"]["content"])
Combining context control with persona and recipe patterns yields robust workflows for complex tasks like software deployment or exam preparation.
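For instance, a deployment prompt might layer all three patterns at once; the persona and recipe wording below are illustrative assumptions:
combined_instruction = """
Act as a senior DevOps engineer. I want to deploy a Flask application; I already know
how to build a Docker image and provision a server. Provide the complete sequence of
remaining steps and fill in any I have missed. Consider only deployment tasks; ignore
application design and cost optimization.
"""
response = openai.ChatCompletion.create(
    model=model,
    temperature=0,
    messages=[{"role": "user", "content": combined_instruction}]
)
print(response["choices"][0]["message"]["content"])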
Integration with LangChain
Many patterns benefit from memory and chain abstractions available in LangChain. Below is a meeting facilitation example combining flipped interaction and persona patterns.
from langchain.memory import ConversationBufferMemory
from langchain import OpenAI, LLMChain, PromptTemplate
chat_llm = OpenAI(temperature=0, model_name="gpt-3.5-turbo")
facilitator_template = """
You are a meeting facilitator for Chinese-speaking attendees.
- I will provide the elapsed time and a statement.
- You must ask follow-up questions to encourage further discussion.
- When time is nearly exhausted (less than 2 minutes remaining) or sufficient opinions have been collected, transition to the summary phase.
- Total meeting duration: 10 minutes.
{chat_history}
Human: {human_input}
Facilitator:"""
template = PromptTemplate(
input_variables=["chat_history", "human_input"],
template=facilitator_template
)
memory = ConversationBufferMemory(memory_key="chat_history")
chain = LLMChain(
llm=chat_llm,
prompt=template,
verbose=False,
memory=memory,
)
chain.predict(human_input="Elapsed: 2 minutes; Statement: The top priority is customer retention.")
chain.predict(human_input="Elapsed: 7 minutes; Statement: Budget reallocation might be needed.")
chain.predict(human_input="Elapsed: 9 minutes; Statement: I agree with the previous points.")
The facilitator automatically shifts from probing questions to summarizing when the time threshold is crossed.
Practical Recommendations
- Layer patterns: Use persona patterns with recipe patterns to create expert-level guides.
- Verify critical outputs: Always apply fact-check extraction for sensitive domains.
- Automate repetitive tasks: Leverage output automater patterns with infinite generation for bulk operations.
- Utilize LangChain: Memory and structured output parsers simplify the implementation of interaction-heavy patterns.