Practical Tips: How To Build Your First LLM Agent
In this section, we present tips that we believe will help general audiences design their own agents.
1. Keep your agent design as simple as possible.
First attempts are rarely seamless, so the most important principle is to aim for the simplest viable design and minimal complexity. Building an LLM agent is already much easier than it was just a few years ago—back when the first version of ChatGPT appeared—because today's models are much more capable. Their strengthened reasoning skills mean that tasks once solved only through long, fine-grained prompt chains can now often be handled with a single, concise prompt.
2. Clearly define the goal or task for your agent.
The most important aspect of agent design is to clearly specify what you want your agent to accomplish. Take time to articulate the agent's goal or task, and make sure this is clearly stated in your prompt. For example, you might begin your prompt with: "Your goal is to..." This directness helps the agent stay focused and increases the likelihood of successful outcomes.
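As a minimal sketch of this practice, a goal statement can be placed at the very top of the prompt. The helper and the example task below are our own illustration, not part of any particular API:

```python
def build_goal_prompt(goal: str, details: str = "") -> str:
    """Compose a prompt that opens with an explicit goal statement."""
    prompt = f"Your goal is to {goal}."
    if details:
        prompt += f"\n\n{details}"
    return prompt

# Hypothetical task for illustration:
prompt = build_goal_prompt(
    "summarize the attached meeting notes into five bullet points",
    "Keep each bullet under 20 words.",
)
print(prompt)
```

Keeping the goal in the first sentence makes it easy to verify at a glance that every prompt you send states what the agent should accomplish.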
3. Guide your agent by considering how humans would approach the same task.
If simply setting the goal does not lead to the desired agent behaviors, a helpful strategy is to craft prompts that outline high-level milestones to guide the agent. Think about how you would brief a person on the same task. For example, if you are building an essay-writing agent, a human researcher would propose an idea, review relevant literature, suggest a solution, and validate that solution. Structure your prompt around these stages, e.g., "First, propose an original idea on [topic]; then scan the literature...", and let the agent handle the finer details.
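The essay-writing stages above can be turned into a milestone-structured prompt. This is a sketch under our own naming; the `[topic]` placeholder is kept as-is from the text:

```python
def build_milestone_prompt(goal: str, milestones: list[str]) -> str:
    """Outline high-level milestones, leaving finer details to the agent."""
    steps = "\n".join(f"{i}. {step}" for i, step in enumerate(milestones, 1))
    return f"Your goal is to {goal}.\nWork through these stages in order:\n{steps}"

prompt = build_milestone_prompt(
    "write a short research essay on [topic]",
    [
        "Propose an original idea on [topic].",
        "Review the relevant literature.",
        "Suggest a solution.",
        "Validate the solution.",
    ],
)
print(prompt)
```

Numbering the stages mirrors how you would brief a colleague, and gives the model an explicit order to follow without prescribing how each stage is carried out.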
4. Provide illustrative examples.
Examples can accelerate understanding for both humans and LLM agents. If you need structured output, show the desired format, e.g., "Produce the result in this JSON schema:...". When a specific methodology matters (e.g., a clinical guideline for diagnosis and treatment), include step-by-step examples that embody each required step. Well-chosen examples help guide the agent's behavior and reduce ambiguity. Note that this practice is often referred to as in-context learning in the literature.
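A few-shot prompt of this kind can be assembled mechanically. The sketch below is our own illustration of in-context learning with a structured-output instruction; the extraction task and field names are invented for the example:

```python
import json

def build_few_shot_prompt(instruction: str, examples: list, query: str) -> str:
    """Prepend worked input/output pairs to the instruction (in-context learning)."""
    shots = "\n\n".join(
        f"Input: {inp}\nOutput: {json.dumps(out)}" for inp, out in examples
    )
    return f"{instruction}\n\n{shots}\n\nInput: {query}\nOutput:"

prompt = build_few_shot_prompt(
    'Extract the city and date as JSON in the schema {"city": ..., "date": ...}.',
    [("Flight to Paris on May 3", {"city": "Paris", "date": "May 3"})],
    "Train to Kyoto on June 12",
)
print(prompt)
```

Ending the prompt with a bare `Output:` invites the model to complete the pattern established by the worked examples, which is what makes the output format easy to parse afterwards.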
5. Start by chatting with LLMs directly, and restrict the use of agent frameworks.
There are many agent frameworks, such as LangGraph, Dify, Coze, Bedrock, etc., and some (e.g., Coze) even offer a drag-and-drop graphical user interface. However, they often create extra layers of abstraction that can hide the raw prompts and responses you need to understand and debug, while also tempting you to over-engineer. In our experience, newcomers learn more and iterate faster by chatting with the model directly (e.g., via ChatGPT), then calling the LLM API, and only later adopting an agent framework. As our examples show, many agent capabilities can be achieved with just a few well-crafted lines of prompt.
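Calling the API directly needs only a few lines. The sketch below uses the OpenAI Python SDK as one concrete example; the model name and the sample prompt are our own assumptions, and the actual call (which requires an API key and network access) is left commented out:

```python
def build_messages(prompt: str) -> list[dict]:
    """A chat-completion message list with one user turn and no framework layers."""
    return [{"role": "user", "content": prompt}]

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send one prompt to the model via the OpenAI SDK (pip install openai)."""
    from openai import OpenAI  # lazy import so the sketch loads without the SDK
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=build_messages(prompt),
    )
    return response.choices[0].message.content

# ask("Your goal is to explain what an LLM agent is in two sentences.")
```

Because every prompt and response passes through these two small functions, there is nothing hidden to debug; once this works, graduating to a framework is a choice rather than a necessity.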
If you find this work helpful, please consider citing our paper:
@article{hu2025hands,
  title={Hands-on LLM-based Agents: A Tutorial for General Audiences},
  author={Hu, Shuyue and Ren, Siyue and Chen, Yang and Mu, Chunjiang and Liu, Jinyi and Cui, Zhiyao and Zhang, Yiqun and Li, Hao and Zhou, Dongzhan and Xu, Jia and others},
  journal={Hands-on},
  volume={21},
  pages={6},
  year={2025}
}