[Langchain] Map Re-rank
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda, RunnablePassthrough

# `llm` and `retriever` are assumed to be defined earlier in this series,
# e.g. llm = ChatOpenAI(...) and retriever = vectorstore.as_retriever()

answers_prompt = ChatPromptTemplate.from_template(
    """
    Using ONLY the following context, answer the user's question. If you can't, just say you don't know; don't make anything up.

    Then, give a score to the answer between 0 and 5.
    If the answer answers the user's question, the score should be high; otherwise it should be low.
    Make sure to always include the answer's score, even if it's 0.

    Context: {context}

    Examples:

    Question: How far away is the moon?
    Answer: The moon is 384,400 km away.
    Score: 5

    Question: How far away is the sun?
    Answer: I don't know
    Score: 0

    Your turn!

    Question: {question}
    """
)
```
```python
def get_answers(inputs):
    docs = inputs["docs"]
    question = inputs["question"]
    answers_chain = answers_prompt | llm
    # Explicit loop version, kept for reference:
    # answers = []
    # for doc in docs:
    #     result = answers_chain.invoke(
    #         {"question": question, "context": doc.page_content}
    #     )
    #     answers.append(result.content)
    return {
        "question": question,
        "answers": [
            {
                # Run the scoring prompt once per retrieved document
                "answer": answers_chain.invoke(
                    {"question": question, "context": doc.page_content}
                ).content,
                "source": doc.metadata["source"],
            }
            for doc in docs
        ],
    }
```
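Each answer string carries the "Score: N" line the prompt demands, and here the raw text is simply handed to the next prompt, which reads the scores itself. If you wanted to rank the candidates programmatically instead, a minimal sketch could parse the score out with a regex (`parse_score` is a hypothetical helper, not part of the original chain):

```python
import re

def parse_score(answer_text: str) -> int:
    """Extract the trailing 'Score: N' value the prompt asks the model to emit."""
    match = re.search(r"Score:\s*(\d+)", answer_text)
    # Treat a missing score line as 0, the floor the prompt defines
    return int(match.group(1)) if match else 0

# Toy candidates in the format the prompt's few-shot examples show
candidates = [
    "I don't know\nScore: 0",
    "The moon is 384,400 km away.\nScore: 5",
]
best = max(candidates, key=parse_score)  # picks the moon answer
```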
```python
choose_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            """
            You are a competent lawyer.
            Use ONLY the following pre-existing answers to answer the user's question in Korean.
            Use the answers that have the highest score (more helpful) and favor the most recent ones.

            Answers: {answers}
            """,
        ),
        ("human", "{question}"),
    ]
)
```
```python
def choose_answer(inputs):
    answers = inputs["answers"]
    question = inputs["question"]
    choose_chain = choose_prompt | llm
    # Flatten the answer dicts into one string the prompt can consume
    condensed = "\n\n".join(
        f"{answer['answer']}\nSource:{answer['source']}\n"
        for answer in answers
    )
    return choose_chain.invoke(
        {
            "question": question,
            "answers": condensed,
        }
    )
```
```python
chain = (
    {
        "docs": retriever,
        "question": RunnablePassthrough(),
    }
    | RunnableLambda(get_answers)
    | RunnableLambda(choose_answer)
)

result = chain.invoke("무례한 표현만으로 모욕죄가 될 수 있어?")
print(result)
```
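The dict at the head of the chain is coerced by LangChain into a `RunnableParallel`: both `retriever` and `RunnablePassthrough()` receive the same question string, and their outputs are collected into the `{"docs": ..., "question": ...}` dict that `get_answers` expects. A toy model of that behavior in plain Python (`run_parallel` is a simplified stand-in, not LangChain's actual class):

```python
def run_parallel(mapping, value):
    # Toy stand-in for LangChain's RunnableParallel: every entry in the
    # dict receives the same input, and the outputs form a new dict.
    return {key: fn(value) for key, fn in mapping.items()}

inputs = run_parallel(
    {
        "docs": lambda q: [f"(retrieved document for: {q})"],  # stands in for retriever
        "question": lambda q: q,  # stands in for RunnablePassthrough()
    },
    "무례한 표현만으로 모욕죄가 될 수 있어?",
)
# inputs now has the shape get_answers() expects:
# {"docs": [...], "question": "무례한 표현만으로 모욕죄가 될 수 있어?"}
```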
Looking at the trace in LangSmith, because Map Re-rank selects the answers with the highest scores for the query, it seems to bring back only the most direct answers. For a legal-advice chatbot, the Refine or Map Reduce approach looks like a better fit.