Develop a LangChain Agent with Advanced Decision-Making Logic
description
This prompt helps you create a LangChain agent that makes complex decisions based on multiple conditions, enabling versatile and intelligent workflows. It solves the problem of simple linear chains and gives you more control over your agent's behavior. It is more effective than standard examples because it focuses on advanced logic and practical implementations.
prompt
Help me develop a LangChain agent that can execute advanced decision logic based on complex rules and conditions within <describe the application area or domain>. Describe how I can use conditional chains, logical ...
tags
ai_answers
provider: openai
model: gpt-4.1-nano
Certainly! Below you will find a detailed explanation and examples for developing a LangChain agent that can execute advanced decision logic for automated credit assessment and risk analysis. We focus on the use of conditional chains, logical operators, custom functions, error handling, and optimizations.
### 1. Structure and approach
- **Conditional chains**: use functions or methods that combine different rules and conditions.
- **Logical operators**: implement AND, OR, NOT, and combinations thereof.
- **Custom functions**: write functions that perform complex calculations and decisions.
- **Context sensitivity**: use variables and data to steer decisions dynamically.
- **Error handling**: use try-except blocks and validation.
- **Optimizations**: cache results and avoid repeated computation.
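As a minimal, framework-free sketch of the logical-operators bullet, rules can be modeled as predicates over the applicant data and combined with AND/OR/NOT combinators. All names here are illustrative, not part of LangChain:

```python
from typing import Callable, Dict

Rule = Callable[[Dict], bool]

def and_(*rules: Rule) -> Rule:
    """All rules must hold (logical AND)."""
    return lambda data: all(rule(data) for rule in rules)

def or_(*rules: Rule) -> Rule:
    """At least one rule must hold (logical OR)."""
    return lambda data: any(rule(data) for rule in rules)

def not_(rule: Rule) -> Rule:
    """Negate a rule (logical NOT)."""
    return lambda data: not rule(data)

# Hypothetical rules for illustration
high_income = lambda d: d["inkomen"] >= 2500
low_debt = lambda d: d["schulden"] / d["inkomen"] < 0.3
oversized_loan = lambda d: d["lening_bedrag"] > 5 * d["inkomen"]

acceptable = and_(or_(high_income, low_debt), not_(oversized_loan))

print(acceptable({"inkomen": 3000, "schulden": 600, "lening_bedrag": 10000}))  # True
```

Because each combinator returns a new `Rule`, complex policies stay composable and each individual predicate remains independently testable.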
---
### 2. Example: basic structure with Python functions
Here is an example of a structured approach:
```python
def is_beschikbaar(gegevens):
    # Check that all required fields are present
    required_fields = ['inkomen', 'schulden', 'lening_bedrag', 'duur']
    return all(gegevens.get(field) is not None for field in required_fields)

def bereken_risico_score(gegevens):
    # Simple risk-score calculation
    inkomen = gegevens['inkomen']
    schulden = gegevens['schulden']
    lening = gegevens['lening_bedrag']
    # Example of a simple formula
    schuld_ratio = schulden / inkomen
    lening_ratio = lening / inkomen
    score = 0
    if schuld_ratio > 0.4:
        score += 2
    elif schuld_ratio > 0.2:
        score += 1
    if lening_ratio > 0.5:
        score += 2
    elif lening_ratio > 0.3:
        score += 1
    return score

def neemt_beslissing(gegevens):
    # Combine rules with conditional logic
    try:
        if not is_beschikbaar(gegevens):
            return "Insufficient data"
        risk_score = bereken_risico_score(gegevens)
        if risk_score >= 4:
            return "Rejected: high risk"
        elif risk_score >= 2:
            return "Conditionally approve"
        else:
            return "Approved"
    except Exception as e:
        return f"Error during assessment: {str(e)}"
```
### 3. Integration with LangChain
Use a custom Tool, or an agent with a Python REPL tool:
```python
from langchain.agents import initialize_agent, AgentType
from langchain_experimental.tools import PythonREPLTool
from langchain_openai import ChatOpenAI

# Define the functions in a script or inline
def krediet_beslissing(gegevens):
    # Include the functions from above here, or import them from a module
    return neemt_beslissing(gegevens)

# Create a tool (a REPL tool lets the agent execute Python code)
krediet_tool = PythonREPLTool()

# Initialize the agent with the tool; initialize_agent also requires an LLM
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
agent = initialize_agent(
    tools=[krediet_tool],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

# Sample data
gegevens = {
    'inkomen': 3000,
    'schulden': 800,
    'lening_bedrag': 10000,
    'duur': 60,
}

# Ask the agent for the decision
resultaat = agent.run(f"Data: {gegevens}\nWhat is the credit decision?")
print(resultaat)
```
### 4. More complex rules and conditional chains
For more complexity, you can extend the decision logic with nested conditions, logical operators, and dynamic rules:
```python
def complexe_beslissing(gegevens):
    try:
        if not is_beschikbaar(gegevens):
            return "Insufficient data"
        risk_score = bereken_risico_score(gegevens)
        inkomen = gegevens['inkomen']
        lening = gegevens['lening_bedrag']
        duur = gegevens['duur']
        # Conditional chain with multiple conditions
        if risk_score >= 5:
            return "Rejected: very high risk"
        elif risk_score >= 3:
            if inkomen < 2000 or lening > inkomen * 0.8:
                return "Rejected: risk too high"
            else:
                return "Conditionally approve"
        else:
            if duur > 60 and inkomen < 2500:
                return "Conditional, short term"
            else:
                return "Approved"
    except Exception as e:
        return f"Error: {str(e)}"
```
---
### 5. Error handling and optimization
- **Error handling**: use `try-except` blocks, log errors, and return clear messages.
- **Caching**: where possible, cache the outcomes of expensive computations.
- **Validation**: check input data for consistency and completeness.
- **Configuration**: make rules parameterizable via external configuration files (e.g. JSON or YAML).
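A minimal sketch of the last two bullets, assuming rule thresholds live in an external JSON file (inlined here as a string for brevity; the keys are hypothetical) and using `functools.lru_cache` to memoize repeated assessments. Note that `lru_cache` requires hashable arguments, so the function takes scalars rather than a dict:

```python
import json
from functools import lru_cache

# Hypothetical rule thresholds, as they might appear in an external JSON file
CONFIG_JSON = '{"max_schuld_ratio": 0.4, "max_lening_ratio": 0.5, "reject_score": 4}'
config = json.loads(CONFIG_JSON)

@lru_cache(maxsize=128)  # cache repeated assessments of identical inputs
def risico_score(inkomen: float, schulden: float, lening: float) -> int:
    score = 0
    if schulden / inkomen > config["max_schuld_ratio"]:
        score += 2
    if lening / inkomen > config["max_lening_ratio"]:
        score += 2
    return score

print(risico_score(3000, 800, 10000))  # -> 2
```

Changing a threshold then means editing the config file, not the code, and repeated calls with the same applicant data hit the cache instead of recomputing.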
---
### 6. Summary
- Use functions for rules and conditions.
- Combine conditional chains with logical operators.
- Make the decision logic dynamic by using data.
- Implement error handling and validation.
- Integrate these functions into a LangChain agent via custom tools or Python code.
Would you like me to put together a fully working example, with data and integration tailored to your use case?
provider: deepseek
model: deepseek-chat
Of course. I will guide you through developing a sophisticated LangChain agent for automated credit assessment. This involves creating conditional logic chains, integrating custom functions, and ensuring robust error handling.
### Core Conceptual Architecture
A credit assessment agent doesn't just answer questions; it executes a **workflow**. The key is to break down the complex decision into sequential steps, each with its own conditions and data requirements.
1. **Data Ingestion & Parsing:** Extract and structure data from user input or documents (e.g., pay stubs, bank statements via OCR).
2. **Rule-Based Evaluation:** Apply a series of conditional rules (e.g., "If Debt-to-Income ratio > 40%, then flag as high risk").
3. **LLM Reasoning & Justification:** Use the LLM to synthesize the rule-based outputs, provide a narrative explanation, and handle edge cases not covered by strict rules.
4. **Decision & Action:** Output a final decision (Approve/Deny/Refer) and recommended actions.
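Stripped of any framework, the four steps above can be sketched as plain chained functions (all names and the toy input format are hypothetical):

```python
def ingest(raw: str) -> dict:
    # Step 1: parse structured fields out of raw input (toy parser)
    income, debt = raw.split(",")
    return {"income": float(income), "debt": float(debt)}

def apply_rules(data: dict) -> dict:
    # Step 2: deterministic rule evaluation (e.g. the DTI > 40% rule)
    dti = data["debt"] / data["income"] * 100
    data["flags"] = ["high_dti"] if dti > 40 else []
    return data

def decide(data: dict) -> str:
    # Steps 3-4: synthesize the flags into a final decision
    return "DENY" if "high_dti" in data["flags"] else "APPROVE"

print(decide(apply_rules(ingest("3000,1500"))))  # DTI = 50% -> DENY
```

LCEL plays exactly this composition role, with the LLM's reasoning and justification layered on top of the deterministic steps.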
We'll implement this using **LangChain's Expression Language (LCEL)**, which is the modern, recommended way to build complex, composable chains.
---
### Implementation Breakdown
#### 1. Define Custom Tool Functions
These functions encapsulate the deterministic parts of your risk analysis. The LLM agent will learn to call these tools.
```python
from langchain.tools import tool
from pydantic import BaseModel, Field
from typing import Any, Dict

# Define input models for type safety and better LLM tool calling
class CreditScoreInput(BaseModel):
    applicant_id: str = Field(description="Unique identifier for the applicant")

class DTIRatioInput(BaseModel):
    monthly_income: float = Field(description="The applicant's gross monthly income")
    monthly_debt_payments: float = Field(description="The applicant's total monthly debt obligations")

class PaymentHistoryInput(BaseModel):
    applicant_id: str = Field(description="Unique identifier for the applicant")

# Custom tool to get credit score (simulated)
@tool(args_schema=CreditScoreInput)
def get_credit_score(applicant_id: str) -> int:
    """Fetches the credit score for a given applicant ID from a database or API."""
    # Simulate a database/API call. In reality, connect to Experian, Equifax, etc.
    # This is a simple mock; a real function would have error handling.
    credit_db = {"app_123": 720, "app_456": 650, "app_789": 580}
    return credit_db.get(applicant_id, 600)  # Default to 600 if not found

# Custom tool to calculate Debt-to-Income (DTI) Ratio
@tool(args_schema=DTIRatioInput)
def calculate_dti_ratio(monthly_income: float, monthly_debt_payments: float) -> Dict[str, Any]:
    """Calculates the Debt-to-Income (DTI) ratio and returns it with a risk category."""
    if monthly_income <= 0:
        raise ValueError("Monthly income must be a positive number.")
    ratio = round((monthly_debt_payments / monthly_income) * 100, 2)
    if ratio <= 35:
        category = "Low Risk"
    elif ratio <= 43:
        category = "Moderate Risk"
    else:
        category = "High Risk"
    return {"dti_ratio": ratio, "risk_category": category}

# Custom tool to check payment history (simulated)
@tool(args_schema=PaymentHistoryInput)
def check_payment_history(applicant_id: str) -> str:
    """Checks the payment history for a given applicant ID."""
    # Simulate a check for late payments
    history_db = {
        "app_123": "No late payments in last 24 months",
        "app_456": "2 late payments in last 12 months",
        "app_789": "1 serious delinquency (90+ days) in last 24 months",
    }
    return history_db.get(applicant_id, "Limited history available")
```
#### 2. Define the Conditional Logic Chains
We'll use LCEL's `RunnableLambda` and `RunnableBranch` to create conditional pathways.
```python
from langchain_core.runnables import RunnableBranch, RunnableLambda, RunnableParallel

# Define a function to route based on credit score
def route_by_credit_score(info: Dict[str, Any]) -> str:
    """Determines the next step based on a pre-fetched credit score."""
    credit_score = info.get("credit_score", 0)
    if credit_score >= 700:
        return "low_risk_path"
    elif 650 <= credit_score < 700:
        return "medium_risk_path"
    else:
        return "high_risk_path"

# Define different chains for different risk profiles
low_risk_chain = RunnableLambda(lambda x: f"Applicant has excellent credit ({x['credit_score']}). Proceeding with standard verification.")
medium_risk_chain = RunnableLambda(lambda x: f"Applicant has moderate credit ({x['credit_score']}). Scrutinizing DTI and employment history closely.")
high_risk_chain = RunnableLambda(lambda x: f"Applicant has poor credit ({x['credit_score']}). Application likely requires manual review or will be denied.")

# Create a branch that chooses which chain to run
credit_branch = RunnableBranch(
    (lambda x: route_by_credit_score(x) == "low_risk_path", low_risk_chain),
    (lambda x: route_by_credit_score(x) == "medium_risk_path", medium_risk_chain),
    high_risk_chain,
)
```
#### 3. Build the Main Agent Chain with LCEL
This is where we compose everything together.
```python
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI  # Using OpenAI for this example

# 1. Initialize the LLM
llm = ChatOpenAI(model="gpt-4-turbo-preview", temperature=0)

# 2. Define the tools list
tools = [get_credit_score, calculate_dti_ratio, check_payment_history]

# 3. Create the agent prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", """You are an advanced AI credit risk analyst. Your goal is to assess loan applications based on hard rules and nuanced analysis.
Follow these steps:
1. **Identify** the applicant and key financial data from the user's query.
2. **Use your tools** to retrieve the applicant's credit score, calculate their DTI ratio, and check their payment history.
3. **Apply the core rules**:
   - Rule 1: Credit Score < 580 → Automatic Denial.
   - Rule 2: DTI Ratio > 45% → Automatic Denial.
   - Rule 3: Serious delinquency (90+ days late) in last 12 months → Refer to senior analyst.
4. **Synthesize** the tool outputs and rules into a final decision: APPROVE, DENY, or REFER.
5. **Provide a clear, professional justification** for your decision, citing the specific rules and data points.
If any tool fails or data is missing, state this clearly and explain how it impacts the assessment.
"""),
    MessagesPlaceholder(variable_name="chat_history", optional=True),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

# 4. Create the agent and executor
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, handle_parsing_errors=True)

# 5. (Optional) Compose a more complex sequential chain using LCEL
# This chain first gets the credit score, then branches.
sequential_workflow = RunnableParallel(  # Fetch data in parallel
    credit_score=RunnableLambda(lambda x: get_credit_score.invoke({"applicant_id": x["applicant_id"]})),
    original_input=lambda x: x,
) | RunnableLambda(lambda x: {  # Add the credit score to the dict for the branch
    "credit_score": x["credit_score"],
    "applicant_id": x["original_input"]["applicant_id"],
}) | credit_branch

# Example usage for the sequential chain:
# result = sequential_workflow.invoke({"applicant_id": "app_123"})
# print(result)
```
#### 4. Advanced: Error Handling and Optimizations
**a) Robust Error Handling in Tools:**
Modify the tools to be more resilient.
```python
@tool(args_schema=CreditScoreInput)
def get_credit_score(applicant_id: str) -> str:
    """Fetches the credit score for a given applicant ID."""
    try:
        credit_db = {"app_123": 720, "app_456": 650, "app_789": 580}
        score = credit_db[applicant_id]  # Use [] to force a KeyError for missing IDs
        return f"Credit score retrieved successfully: {score}"
    except KeyError:
        return f"Error: No credit score found for applicant ID '{applicant_id}'. Please verify the ID."
    except Exception as e:
        return f"An unexpected error occurred while fetching the credit score: {str(e)}"
```
**b) Agent-Level Error Handling:**
The `AgentExecutor` has a `handle_parsing_errors` parameter. For more control, you can use a custom lambda.
```python
# Custom handler for when the LLM outputs bad JSON/tool calls
def handle_parsing_error(error) -> str:
    return f"Sorry, I encountered an internal error. Please rephrase your request. Technical details: {str(error)}"

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    handle_parsing_errors=handle_parsing_error,  # Or pass True for basic handling
    # max_iterations=5,  # Prevent infinite loops
    # early_stopping_method='generate',  # Controls the final response when max_iterations is hit
)
```
**c) Optimizations:**
* **Caching:** Use LangChain's `SQLiteCache` (from `langchain_community.cache`, enabled via `set_llm_cache`) to cache LLM responses for identical inputs, reducing cost and latency. Note that this caches LLM calls only; memoize expensive tool functions separately (e.g. with `functools.lru_cache`).
* **Parallel Tool Calls:** If using a model that supports it (like `gpt-4-turbo`), the agent can call multiple non-dependent tools in parallel, significantly speeding up the workflow.
* **Validation:** Add Pydantic validation to the inputs of your custom functions to catch errors early.
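The validation point can be sketched without Pydantic using a stdlib dataclass that fails fast on bad input, mirroring the early-rejection behavior of the `args_schema` models above (the class name and field names here are illustrative):

```python
from dataclasses import dataclass

@dataclass
class DTIInput:
    monthly_income: float
    monthly_debt_payments: float

    def __post_init__(self):
        # Fail fast on invalid inputs, before any expensive tool logic runs
        if self.monthly_income <= 0:
            raise ValueError("monthly_income must be positive")
        if self.monthly_debt_payments < 0:
            raise ValueError("monthly_debt_payments cannot be negative")

# Valid input passes through; invalid input raises immediately
ok = DTIInput(monthly_income=3000.0, monthly_debt_payments=1500.0)
print(ok.monthly_income)  # 3000.0
```

In the agent setting, catching such errors at the tool boundary and returning a descriptive message (as in the error-handling section above) lets the LLM report the problem instead of crashing the chain.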
### Example Execution
```python
# Example 1: Simple query for the agent
result = agent_executor.invoke({
    "input": "Please assess applicant app_789. Their monthly income is $3000 and their total monthly debt payments are $1500."
})
print(result["output"])
# Example 2: Using the sequential chain
branch_result = sequential_workflow.invoke({"applicant_id": "app_456"})
print(branch_result)
```
### Expected Output (Example 1):
> **> Entering new AgentExecutor chain...**
>
> **Action:** `get_credit_score` with args `{'applicant_id': 'app_789'}`
> **Observation:** 580
>
> **Action:** `calculate_dti_ratio` with args `{'monthly_income': 3000.0, 'monthly_debt_payments': 1500.0}`
> **Observation:** {'dti_ratio': 50.0, 'risk_category': 'High Risk'}
>
> **Action:** `check_payment_history` with args `{'applicant_id': 'app_789'}`
> **Observation:** 1 serious delinquency (90+ days) in last 24 months
>
> **Final Output:**
> **DECISION: DENY**
>
> **Justification:**
> The application for applicant `app_789` has been denied based on the following rule violations:
> 1. **Rule 2 Violation:** The calculated Debt-to-Income (DTI) ratio is 50.0%, which exceeds the maximum allowable threshold of 45%. This indicates insufficient income to cover debt obligations.
> 2. **Rule 1 Consideration:** The applicant's credit score of 580 is at the threshold for automatic denial, further supporting the high-risk profile.
> 3. **Rule 3 Consideration:** The payment history shows a serious delinquency, which, while outside the 12-month automatic referral window, adds to the overall risk assessment.
>
> This decision is consistent with our credit policy aimed at mitigating default risk.
This architecture provides a powerful, transparent, and maintainable framework for building complex, rule-driven LLM agents for financial analysis.