
First-class Intelligent Agents

Imagine you’re building your dream AI assistant — someone smart enough to understand your business, follow your rules, and make good decisions even when you’re not around. You give it instructions, a few examples, and trust it to carry out the job — whether that’s approving invoices, writing content, or managing customer requests.

Now imagine that assistant suddenly starts “hallucinating” — inventing answers, skipping steps, or misunderstanding your instructions. That’s the challenge of working directly with large language models (LLMs): they’re brilliant, but unpredictable.

Agentlang exists to turn that unpredictability into reliability.

In Agentlang, intelligent agents aren’t just plugins or helper scripts — they are first-class citizens of the language and its runtime. They integrate seamlessly with other core constructs like entities, events, and workflows, forming the backbone of GenAI-based applications.

Each agent is a self-contained, reasoning “mind” — capable of interpreting context, applying logic, and acting on your behalf. But what makes Agentlang special is how it combines natural language flexibility with programmatic determinism. It gives you powerful tools to keep your agents focused, consistent, and aligned with the goals you define — even in the face of LLM randomness.

Let’s explore how.


Directives, Scenarios, and Glossaries: Teaching Agents to Think Clearly

Think of an agent as a new team member who’s learning the ropes. To help them perform well, you wouldn’t just throw them into the deep end — you’d give them clear rules, concrete examples, and a glossary of company terms. In Agentlang, directives, scenarios, and glossaries play exactly that role.

  • Directives define what to do under specific conditions — a structured way to encode rules like “if X happens, do Y.”
  • Scenarios show the agent real-world examples of how to respond.
  • Glossaries clarify the meaning of domain-specific vocabulary that the model might otherwise misinterpret.

Here’s an example of how these come together:

```
module acme

entity Employee {
  id Int @id,
  name String,
  salary Number
}

workflow scenario01 {
  {acme/Employee {name? "Jake"}} @as [employee];
  {acme/Employee {id? employee.id,
                  salary employee.salary + employee.salary * 0.05}}
}

agent salaryHikeAgent {
  instruction "Give an employee a salary-hike based on his/her sales performance",
  tools acme/Employee,
  directives [
    {"if": "employee sales exceeded 5000",
     "then": "Give a salary hike of 5 percent"},
    {"if": "sales is more than 2000 but less than 5000",
     "then": "hike salary by 2 percent"}
  ],
  scenarios [
    {"user": "Jake hit a jackpot!",
     "ai": "acme/scenario01"}
  ],
  glossary [
    {"name": "jackpot",
     "meaning": "sales of 5000 or above",
     "synonyms": "on-high, block-buster"}
  ]
}
```

Here, the agent doesn’t just “guess” what a jackpot means — it knows exactly how that term maps to your business rules. Over time, these examples and rules train the agent to make confident, deterministic decisions. And as the number of directives and scenarios grows, the runtime automatically narrows their scope to the current context — keeping the agent’s reasoning both efficient and relevant.
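To make the narrowing idea concrete, here is a toy sketch in Python — a hypothetical illustration, not Agentlang's actual algorithm — that scores each directive against the current request by simple keyword overlap and keeps only the most relevant ones:

```python
# Toy sketch of narrowing directives to the current context.
# Illustrative only; the Agentlang runtime's real scoping is more sophisticated.

def relevance(directive: dict, request: str) -> int:
    """Count how many words of the directive's condition appear in the request."""
    request_words = set(request.lower().split())
    return sum(w in request_words for w in directive["if"].lower().split())

def narrow(directives: list[dict], request: str, keep: int = 2) -> list[dict]:
    """Keep only the `keep` most relevant directives for this request."""
    ranked = sorted(directives, key=lambda d: relevance(d, request), reverse=True)
    return [d for d in ranked[:keep] if relevance(d, request) > 0]

directives = [
    {"if": "employee sales exceeded 5000", "then": "give a 5 percent hike"},
    {"if": "sales is more than 2000 but less than 5000", "then": "hike salary by 2 percent"},
    {"if": "employee resigned", "then": "close the payroll record"},
]
print(narrow(directives, "Jake's sales exceeded 5000 this quarter"))
```

A real runtime would use something stronger than keyword overlap (embeddings, for instance), but the shape is the same: only directives that plausibly apply to the current request reach the model's prompt.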


Decision Tables: Logic You Can Read

Some business decisions are best represented as simple condition → outcome mappings. In Agentlang, these are called decision tables — readable, rule-based logic that agents can evaluate directly.

Each case defines a condition and the outcome (or tag) that determines what the workflow does next:

```
decision classifyOrder {
  case (carType == "SUV" and segment == "economy") { EconomySUV }
  case (carType == "SUV" and segment == "luxury") { LuxurySUV }
}

flow carOrderRequestManager {
  analyseCarOrderRequest --> classifyOrder
  classifyOrder --> "EconomySUV" orderEconomySUV
  classifyOrder --> "LuxurySUV" orderLuxurySUV
}
```

You can even write those same conditions in natural language:

```
case ("if carType is SUV and segment is economy") { EconomySUV }
```

When the classifyOrder agent runs, it evaluates each case just like a human reading a flowchart — deciding which path to take, and pushing the workflow forward with transparent reasoning. It’s logic you can both read and trust.


Precise Context: Response Schemas and Scratchpads

While natural language gives agents flexibility, structured data gives them precision. Agentlang lets you define exactly what kind of structured response an agent should return — using a response schema — and automatically share that information with other agents in the same workflow via a scratchpad (a shared contextual memory).

Consider this example:

```
module networking

record NetworkProvisioningRequest {
  type @enum("DNS", "WLAN"),
  requestedBy String,
  CNAME String,
  IPAddress String
}

agent classifyNetworkProvisioningRequest {
  instruction "Analyse the network provisioning request and return its type and other relevant information.",
  responseSchema NetworkProvisioningRequest
}
```

If you ask this agent:

“Provision DNS joe.acme.com for 192.3.4.1 as requested by joe@acme.com”

It will respond with structured data like:

```
{
  "type": "DNS",
  "requestedBy": "joe@acme.com",
  "CNAME": "joe.acme.com",
  "IPAddress": "192.3.4.1"
}
```
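Conceptually, a response schema acts as a checkpoint between the model and the workflow: the raw reply is parsed and validated against the record's fields before it can be used. A toy Python sketch of that validation step (hypothetical helper names, not the Agentlang runtime):

```python
import json

# Toy validation of a structured LLM reply against the
# NetworkProvisioningRequest record above. Illustrative only.

SCHEMA = {
    "type": {"DNS", "WLAN"},   # mirrors the @enum constraint
    "requestedBy": str,
    "CNAME": str,
    "IPAddress": str,
}

def validate(raw: str) -> dict:
    """Parse a raw LLM reply and check it against the schema."""
    data = json.loads(raw)
    for field, constraint in SCHEMA.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if isinstance(constraint, set) and data[field] not in constraint:
            raise ValueError(f"{field} must be one of {sorted(constraint)}")
        if isinstance(constraint, type) and not isinstance(data[field], constraint):
            raise ValueError(f"{field} must be a {constraint.__name__}")
    return data

reply = ('{"type": "DNS", "requestedBy": "joe@acme.com", '
         '"CNAME": "joe.acme.com", "IPAddress": "192.3.4.1"}')
print(validate(reply)["type"])
```

If the model's reply fails validation, the runtime can reject or retry it instead of letting malformed data flow downstream.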

This data is added to the scratchpad — a shared workspace that downstream agents can reference. For example:

```
agent markTicketAsDone {
  instruction "Use type={{type}}, requestedBy={{requestedBy}} and provisioningId={{provisioningId}} to mark the request as completed",
  tools [Networking/markRequestCompleted]
}
```

Here, placeholders like {{type}} or {{requestedBy}} are automatically replaced with values from the scratchpad, ensuring that the agent always has the right context. The result is reasoning that’s not just coherent — but consistent and traceable.
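The substitution itself is straightforward string templating. A minimal Python sketch — illustrative only, not Agentlang's actual template engine — shows the mechanism:

```python
import re

# Toy sketch of scratchpad substitution: {{name}} placeholders in an
# agent's instruction are filled from the shared scratchpad.
# Illustrative only.

def render(instruction: str, scratchpad: dict) -> str:
    """Replace every {{name}} placeholder with its scratchpad value."""
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(scratchpad[m.group(1)]),
                  instruction)

scratchpad = {"type": "DNS", "requestedBy": "joe@acme.com", "provisioningId": 42}
print(render("Use type={{type}}, requestedBy={{requestedBy}} and "
             "provisioningId={{provisioningId}} to mark the request as completed",
             scratchpad))
```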

You can even specify which attributes go into the scratchpad:

```
agent searchForEmployee {
  instruction "find the employee based on the user’s search request",
  tools [acme/Employee],
  scratch [email, firstName]
}
```

By default, all attributes are shared — but you can choose to narrow it down when precision matters.


Customizing the Mind: LLM Configuration

Every agent in Agentlang has a mind of its own — a language model that powers its reasoning. By default, that mind is OpenAI’s gpt-4o, but you can change it, specialize it, or even connect it to a completely different provider.

For instance:

```
{agentlang.ai/LLM {
  name "custom-test-llm",
  service "openai",
  config {
    "model": "gpt-4.1",
    "maxTokens": 200,
    "temperature": 0.7,
    "apiKey": "<your-open-ai-apikey>",
    "configuration": {
      "baseURL": "https://api.openai.com/v1",
      "defaultHeaders": {
        "Ocp-Apim-Subscription-Key": "xxxxy",
        "user": "admin"
      }
    }
  }
}}

agent myAgent {
  llm "custom-test-llm"
}
```

You can even replace OpenAI with Anthropic’s Claude or another LLM service entirely. Each agent can have its own “personality” — a mind tailored to its role, workload, and communication style.


In summary

Agentlang transforms LLMs from unpredictable text generators into dependable digital teammates. By giving you the tools to define context, structure, and logic, it helps your agents reason like humans — but act like programs.

They don’t just follow prompts; they understand purpose. They don’t just generate text; they take responsibility for outcomes. And that’s what makes them first-class citizens in the Agentlang world.

