
Prompt Like a Pro: Tools, Techniques & Code for Perfecting LLM Prompts

6 min read · May 23, 2025


Why Optimize Prompts?

  • Get more accurate outputs
  • Save on token cost
  • Improve speed and reliability
  • Enable dynamic prompting at scale

🔧 Libraries and Tools for Prompt Optimization

Below are powerful tools that can help you automate, experiment with, and optimize prompt engineering, whether for research, production, or fun!

🧪 1. LangChain — Prompt Templates & Chains

LangChain is a full-stack framework for working with LLMs, and its PromptTemplate class is a core feature for structured prompting.

✅ Features:

  • Templates with variables
  • Few-shot examples (see the second sketch below)
  • Output parsers
  • Evaluation chains

💡 Code Example:

from langchain.prompts import PromptTemplate

# Define a reusable template with a {text} variable
template = PromptTemplate.from_template(
    "Translate the following English to French: {text}"
)

# Fill in the variable to produce the final prompt string
prompt = template.format(text="I love AI.")
print(prompt)
# Output: Translate the following English to French: I love AI.
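
For the few-shot examples mentioned in the features list, here is a minimal sketch using LangChain's FewShotPromptTemplate (the antonym task and examples are illustrative):

from langchain.prompts import FewShotPromptTemplate, PromptTemplate

# Each example dict's keys match the example prompt's variables
examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
]
example_prompt = PromptTemplate.from_template("Word: {word}\nAntonym: {antonym}")

# Prefix and suffix wrap the rendered examples; {input} is the live variable
few_shot = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input.",
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"],
)
print(few_shot.format(input="big"))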

🔗 Docs: LangChain Prompt Templates

⚙️ 2. Guidance — Fine-Grained Prompt Programming

Built by Microsoft, Guidance gives you tight control over prompt execution, templating, and structured generation.

✅ Features:

  • Mix programming and prompting
  • Looping logic in prompts
  • Structured output enforcement (see the second sketch below)

💡 Code Example:

from guidance import assistant, gen, models, user

# Load a model (could be Transformers, LlamaCpp, VertexAI, OpenAI...)
gpt = models.OpenAI("gpt-4", api_key="Your-API-Key")

# Build the conversation role by role; gen() captures the model's reply
with user():
    lm = gpt + "What is the capital of France?"

with assistant():
    lm += gen("capital")
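
The structured-output side deserves its own sketch. With a locally hosted model, guidance's select() constrains generation to a fixed set of options (gpt2 here is just a placeholder; token-level constraints don't apply to remote OpenAI models):

from guidance import models, select

# Constrained decoding requires a local model; gpt2 is a stand-in
lm = models.Transformers("gpt2")

# select() forces the continuation to be one of the listed options
lm += "Is the sky blue on a clear day? Answer: " + select(["yes", "no"], name="answer")
print(lm["answer"])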

🔗 GitHub: Guidance

📏 3. PromptLayer — Logging & Prompt Experimentation

PromptLayer lets you track, version, and compare prompts with OpenAI and other providers.

✅ Features:

  • Monitor prompts & results
  • Log metadata
  • A/B testing prompts

💡 Code Integration:

import os
from promptlayer import PromptLayer

os.environ["OPENAI_API_KEY"] = "Your-API-Key"

# Wrap the OpenAI client so every request is logged to PromptLayer
promptlayer_client = PromptLayer(api_key="Your-API-Key")
OpenAI = promptlayer_client.openai.OpenAI
client = OpenAI()

prompt_text = "Explain gravity simply."

# Arbitrary metadata attached to the logged request
metadata = {
    "experiment": "gravity_experiment",
    "user": "aditya",
    "run_date": "2025-05-23",
}

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt_text}],
    pl_tags=["gravity", "test_run"],  # PromptLayer tags for filtering runs
    metadata=metadata,
    store=True,
)

print(response.choices[0].message.content)

🧪 4. LlamaIndex (formerly GPT Index) — Prompts for RAG & Graphs

Best for combining your own data with LLMs (think RAG). It helps abstract prompts across indices and node types.

✅ Features:

  • Custom prompt templates for different node types (see the second sketch below)
  • Chaining prompts
  • Works with LangChain too

💡 Prompt Template:

from llama_index.core.prompts import RichPromptTemplate

# Define the prompt template with placeholders for context and query
template = RichPromptTemplate(
    """We have provided context information below.
---------------------
{{ context_str }}
---------------------
Given this information, please answer the question: {{ query_str }}
"""
)

# Suppose you have your context and query like these:
context_str = (
    "The moon has no capital because it is not a country, "
    "but Earth's natural satellite."
)
query_str = "What is the capital of the moon?"

# Format the prompt as a plain string (good for completion APIs)
prompt_str = template.format(context_str=context_str, query_str=query_str)
print("Formatted Prompt String:\n", prompt_str)

# Format the prompt as a list of chat messages (for chat APIs)
messages = template.format_messages(context_str=context_str, query_str=query_str)

print("\nFormatted Messages:")
for message in messages:
    print(f"{message.role}: {message.content}")
Formatted Prompt String:
We have provided context information below.
---------------------
The moon has no capital because it is not a country, but Earth's natural satellite.
---------------------
Given this information, please answer the question: What is the capital of the moon?

Formatted Messages:
user: We have provided context information below.
---------------------
The moon has no capital because it is not a country, but Earth's natural satellite.
---------------------
Given this information, please answer the question: What is the capital of the moon?
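
For the per-node-type customization mentioned in the features list, prompts inside higher-level modules can be swapped out after the fact. A minimal sketch, assuming an index built from a local "data" folder and the prompt key documented in LlamaIndex's prompt-customization guide:

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.prompts import RichPromptTemplate

# Build a simple index over local documents ("data" is a placeholder path)
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Override the text-QA template used by the response synthesizer
qa_template = RichPromptTemplate(
    "Context:\n{{ context_str }}\nAnswer concisely: {{ query_str }}"
)
query_engine.update_prompts(
    {"response_synthesizer:text_qa_template": qa_template}
)

print(query_engine.query("What is this document about?"))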

🔗 LlamaIndex Docs

🧰 5. OpenPrompt — Research-Grade Prompt Tuning

Academic-style prompt framework supporting manual, soft, and auto prompt tuning.

✅ Features:

  • Plug-and-play with HuggingFace models
  • Soft prompt tuning (gradient-based!)
  • Few-shot experiments
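
💡 Code Example:

A minimal sketch in the style of OpenPrompt's README: a manual template plus a verbalizer turns a HuggingFace masked LM into a zero-shot sentiment classifier (the classes, label words, and sample sentence are illustrative):

import torch
from openprompt import PromptDataLoader, PromptForClassification
from openprompt.data_utils import InputExample
from openprompt.plms import load_plm
from openprompt.prompts import ManualTemplate, ManualVerbalizer

classes = ["negative", "positive"]
dataset = [InputExample(guid=0, text_a="The movie was a delight from start to finish.")]

# Load a HuggingFace masked LM plus its tokenizer and wrapper class
plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-cased")

# The template wraps each input and adds a mask slot for the label word
template = ManualTemplate(
    text='{"placeholder":"text_a"} It was {"mask"}.',
    tokenizer=tokenizer,
)

# The verbalizer maps label words predicted at the mask back to classes
verbalizer = ManualVerbalizer(
    classes=classes,
    label_words={"negative": ["bad"], "positive": ["good", "wonderful"]},
    tokenizer=tokenizer,
)

prompt_model = PromptForClassification(plm=plm, template=template, verbalizer=verbalizer)

data_loader = PromptDataLoader(
    dataset=dataset,
    tokenizer=tokenizer,
    template=template,
    tokenizer_wrapper_class=WrapperClass,
)

prompt_model.eval()
with torch.no_grad():
    for batch in data_loader:
        logits = prompt_model(batch)
        print(classes[torch.argmax(logits, dim=-1).item()])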

🔗 GitHub: OpenPrompt

🧠 6. Promptify — Simplified Prompt Template & Evaluation Tool

Lightweight, easy-to-use Python tool for dynamic prompts.

💡 Code Example:

from promptify import OpenAI, Pipeline, Prompter

sentence = """The patient is a 93-year-old female with a medical
history of chronic right hip pain, osteoporosis,
hypertension, depression, and chronic atrial
fibrillation admitted for evaluation and management
of severe nausea and vomiting and urinary tract
infection"""

openai_api_key = "Your-API-Key"  # Replace with your actual key
model = OpenAI(openai_api_key)
prompter = Prompter('ner.jinja')  # select a built-in template or provide your own
pipe = Pipeline(prompter, model)

# Run named-entity recognition over the clinical note
output = pipe.fit(text_input=sentence, domain="medical", labels=None)
print(output)
100%|██████████| 1/1 [00:07<00:00,  7.46s/it]

The raw output bundles the completion text, token usage (205 prompt + 156 completion = 361 tokens), and a parsed result. The extracted entities live under the parsed data:

output[0]['parsed']['data']['completion']
[{'T': 'Age', 'E': '93-year-old'},
 {'T': 'Gender', 'E': 'female'},
 {'T': 'Medical Condition', 'E': 'chronic right hip pain'},
 {'T': 'Medical Condition', 'E': 'osteoporosis'},
 {'T': 'Medical Condition', 'E': 'hypertension'},
 {'T': 'Medical Condition', 'E': 'depression'},
 {'T': 'Medical Condition', 'E': 'chronic atrial fibrillation'},
 {'T': 'Symptom', 'E': 'severe nausea and vomiting'},
 {'T': 'Disease', 'E': 'urinary tract infection'},
 {'branch': 'evaluation and management', 'group': 'admitted for'}]

🔗 Promptify GitHub

🎯 Real-World Prompt Optimization Pipeline

Let’s bring it all together in a mini-pipeline: template the prompt with LangChain, send it to the OpenAI API, and log the output.

from langchain.prompts import PromptTemplate
from openai import OpenAI

# Template your prompt using LangChain
prompt = PromptTemplate.from_template(
    "Summarize the following in 3 key points:\n\n{text}"
)

# Format your dynamic input
formatted_prompt = prompt.format(
    text="Artificial intelligence is the simulation of human intelligence "
         "in machines that are programmed to think and learn like humans. "
         "It enables machines to perform tasks such as problem-solving, "
         "decision-making, and language understanding."
)

# Set your OpenAI API key (or rely on the OPENAI_API_KEY environment variable)
client = OpenAI(api_key="Your-API-Key")  # Replace with your actual key

# Call the OpenAI model
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Be precise and neutral."},
        {"role": "user", "content": formatted_prompt},
    ],
)

# Evaluate or log the output
print("\n✨ Summary:\n")
print(response.choices[0].message.content)

✨ Summary:

1. Artificial intelligence (AI) imitates human intelligence in machines by programming them to think and learn like humans.
2. AI allows machines to execute tasks such as problem-solving and decision-making.
3. These tasks also include language understanding, signifying the machine's ability to interpret and respond to linguistic inputs.

Conclusion

The humble prompt has evolved from a simple instruction to a programmable, tunable, testable interface to intelligence.

With tools like LangChain, Guidance, and PromptLayer, you can:

  • Test different prompts
  • Automate dynamic flows
  • Evaluate and compare
  • Integrate with RAG, agents, and apps

Written by Aditya Mangal

Tech enthusiast weaving stories of code and life. Writing about innovation, reflection, and the timeless dance between mind and heart.
