
[Failed] LazyGraphRAG with a local ollama embedding model service and the Volcano Engine deepseek API for building a local knowledge base

The LazyGraphRAG test results are as follows.
Test data:
curl https://www.gutenberg.org/cache/epub/24022/pg24022.txt -o ./ragtest/input/book.txt
It failed.
This is infuriating!!! LazyGraphRAG is not very friendly to deepseek-V3 either, and I have no budget for prompt tuning. Ugh.
After switching the model from deepseek to Doubao, it succeeded!
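For reference, the fix only touched the chat model entry in the settings.yaml shown further below; everything else stayed the same. The model ID here is a placeholder, since Ark exposes Doubao under account-specific model/endpoint names:

models:
  default_chat_model:
    type: openai_chat
    api_base: https://ark.cn-beijing.volces.com/api/v3/
    model: <your-doubao-model-or-endpoint-id>  # placeholder; copy the exact ID from the Ark console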

I'll continue investigating and post an update tomorrow.

Error log:
The main issue is that deepseek's ability to follow JSON output instructions is still a bit weak.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/site-packages/fnllm/base/base_llm.py", line 144, in __call__
    return await self._decorated_target(prompt, **kwargs)
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/site-packages/fnllm/base/services/json.py", line 77, in invoke
    return await this.invoke_json(delegate, prompt, kwargs)
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/site-packages/fnllm/base/services/json.py", line 100, in invoke_json
    raise FailedToGenerateValidJsonError from error
fnllm.base.services.errors.FailedToGenerateValidJsonError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/site-packages/graphrag/index/operations/summarize_communities/community_reports_extractor.py", line 80, in __call__
    response = await self._model.achat(
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/site-packages/graphrag/language_model/providers/fnllm/models.py", line 81, in achat
    response = await self.model(prompt, **kwargs)
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/site-packages/fnllm/openai/llm/openai_chat_llm.py", line 94, in __call__
    return await self._text_chat_llm(prompt, **kwargs)
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/site-packages/fnllm/openai/services/openai_tools_parsing.py", line 130, in __call__
    return await self._delegate(prompt, **kwargs)
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/site-packages/fnllm/base/base_llm.py", line 148, in __call__
    await self._events.on_error(
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/site-packages/graphrag/language_model/providers/fnllm/events.py", line 26, in on_error
    self._on_error(error, traceback, arguments)
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/site-packages/graphrag/language_model/providers/fnllm/utils.py", line 45, in on_error
    callbacks.error("Error Invoking LLM", error, stack, details)
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/site-packages/graphrag/callbacks/workflow_callbacks_manager.py", line 64, in error
    callback.error(message, cause, stack, details)
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/site-packages/graphrag/callbacks/file_workflow_callbacks.py", line 37, in error
    json.dumps(
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/json/__init__.py", line 238, in dumps
    **kw).encode(obj)
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/json/encoder.py", line 201, in encode
    chunks = list(chunks)
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/json/encoder.py", line 431, in _iterencode
    yield from _iterencode_dict(o, _current_indent_level)
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/json/encoder.py", line 405, in _iterencode_dict
    yield from chunks
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/json/encoder.py", line 405, in _iterencode_dict
    yield from chunks
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/json/encoder.py", line 405, in _iterencode_dict
    yield from chunks
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/json/encoder.py", line 438, in _iterencode
    o = _default(o)
  File "/home/zli/miniconda3/envs/graphrag/lib/python3.10/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type ModelMetaclass is not JSON serializable
22:53:43,292 graphrag.callbacks.file_workflow_callbacks INFO Community Report Extraction Error details=None
22:53:43,293 graphrag.index.operations.summarize_communities.strategies WARNING No report found for community: 8.0
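Two separate problems are visible in this log. The primary one is fnllm's FailedToGenerateValidJsonError: the community-report step asks the model for JSON, and deepseek-v3's reply failed to parse. The TypeError underneath is only a side effect: while logging that failure, GraphRAG's file callback passes a details object containing a Pydantic model class (ModelMetaclass) to json.dumps, which the standard encoder cannot serialize, so the real error details get swallowed (hence details=None in the final INFO line). A quick local workaround for that secondary bug is to pass default=str to the json.dumps call in graphrag/callbacks/file_workflow_callbacks.py; it does not fix the JSON failures, but it makes them diagnosable.

Before burning another long indexing run, it can also be worth probing the endpoint's JSON compliance directly. The sketch below is my own minimal test harness, not part of GraphRAG: it reuses the api_base and model name from the config and simply retries on parse failure, roughly what fnllm does internally.

import json
import os

from openai import OpenAI

# Standalone probe -- assumes GRAPHRAG_API_KEY is exported, as in the generated .env.
client = OpenAI(
    base_url="https://ark.cn-beijing.volces.com/api/v3/",
    api_key=os.environ["GRAPHRAG_API_KEY"],
)

def json_chat(prompt: str, retries: int = 3) -> dict:
    """Request JSON-only output and retry when the reply does not parse."""
    last_error = None
    for _ in range(retries):
        response = client.chat.completions.create(
            model="deepseek-v3-241226",
            messages=[
                {"role": "system", "content": "Respond with one valid JSON object and nothing else."},
                {"role": "user", "content": prompt},
            ],
        )
        text = response.choices[0].message.content
        try:
            return json.loads(text)
        except json.JSONDecodeError as error:  # invalid JSON -- remember it and retry
            last_error = error
    raise last_error  # every retry produced unparseable output

print(json_chat('Summarize "A Christmas Carol" as {"title": string, "summary": string}.'))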

The configuration is as follows:

models:
  default_chat_model:
    type: openai_chat # or azure_openai_chat
    api_base: https://ark.cn-beijing.volces.com/api/v3/
    # api_version: 2024-05-01-preview
    auth_type: api_key # or azure_managed_identity
    api_key: ${GRAPHRAG_API_KEY} # set this in the generated .env file
    # audience: "https://cognitiveservices.azure.com/.default"
    # organization: <organization_id>
    model: deepseek-v3-241226
    # deployment_name: <azure_model_deployment_name>
    encoding_model: cl100k_base # automatically set by tiktoken if left undefined
    model_supports_json: true # recommended if this is available for your model.
    concurrent_requests: 25 # max number of simultaneous LLM requests allowed
    async_mode: threaded # or asyncio
    retry_strategy: native
    max_retries: -1                   # set to -1 for dynamic retry logic (most optimal setting based on server response)
    tokens_per_minute: 0              # set to 0 to disable rate limiting
    requests_per_minute: 0            # set to 0 to disable rate limiting
  default_embedding_model:
    type: openai_embedding # or azure_openai_embedding
    api_base: http://localhost:11434/v1/
    # api_version: 2024-05-01-preview
    #auth_type: api_key # or azure_managed_identity
    #type: openai_chat
    api_key: ollama
    # audience: "https://cognitiveservices.azure.com/.default"
    # organization: <organization_id>
    model: bge-m3
    # deployment_name: <azure_model_deployment_name>
    encoding_model: cl100k_base # automatically set by tiktoken if left undefined
    model_supports_json: true # recommended if this is available for your model.
    concurrent_requests: 25 # max number of simultaneous LLM requests allowed
    async_mode: threaded # or asyncio
    retry_strategy: native
    max_retries: -1                   # set to -1 for dynamic retry logic (most optimal setting based on server response)
    tokens_per_minute: 0              # set to 0 to disable rate limiting
    requests_per_minute: 0            # set to 0 to disable rate limiting

vector_store:
  default_vector_store:
    type: lancedb
    db_uri: output/lancedb
    container_name: default
    overwrite: True

embed_text:
  model_id: default_embedding_model
  vector_store_id: default_vector_store

### Input settings ###

input:
  type: file # or blob
  file_type: text #[csv, text, json]
  base_dir: "input"

chunks:
  size: 1200
  overlap: 100
  group_by_columns: [id]

### Output settings ###
## If blob storage is specified in the following four sections,
## connection_string and container_name must be provided

cache:
  type: file # [file, blob, cosmosdb]
  base_dir: "cache"

reporting:
  type: file # [file, blob, cosmosdb]
  base_dir: "logs"

output:
  type: file # [file, blob, cosmosdb]
  base_dir: "output"

### Workflow settings ###

#extract_graph:
#  model_id: default_chat_model
#  prompt: "prompts/extract_graph.txt"
#  entity_types: [organization,person,geo,event]
#  max_gleanings: 1

summarize_descriptions:
  model_id: default_chat_model
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

extract_graph_nlp:
  text_analyzer:
    extractor_type: regex_english # [regex_english, syntactic_parser, cfg]

extract_claims:
  enabled: false
  model_id: default_chat_model
  prompt: "prompts/extract_claims.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 1

community_reports:
  model_id: default_chat_model
  graph_prompt: "prompts/community_report_graph.txt"
  text_prompt: "prompts/community_report_text.txt"
  max_length: 8000
  max_input_length: 4000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes (embed_graph must also be enabled)

snapshots:
  graphml: false
  embeddings: false

### Query settings ###
## The prompt locations are required here, but each search method has a number of optional knobs that can be tuned.
## See the config docs: https://microsoft.github.io/graphrag/config/yaml/#query

local_search:
  chat_model_id: default_chat_model
  embedding_model_id: default_embedding_model
  prompt: "prompts/local_search_system_prompt.txt"

global_search:
  chat_model_id: default_chat_model
  map_prompt: "prompts/global_search_map_system_prompt.txt"
  reduce_prompt: "prompts/global_search_reduce_system_prompt.txt"
  knowledge_prompt: "prompts/global_search_knowledge_system_prompt.txt"

drift_search:
  chat_model_id: default_chat_model
  embedding_model_id: default_embedding_model
  prompt: "prompts/drift_search_system_prompt.txt"
  reduce_prompt: "prompts/drift_search_reduce_prompt.txt"

basic_search:
  chat_model_id: default_chat_model
  embedding_model_id: default_embedding_model
  prompt: "prompts/basic_search_system_prompt.txt"

