
[LLM Development Guide] Configuring DeepSeek, Jina embeddings, and ChromaDB in LlamaIndex for a local RAG knowledge base (Windows, CPU-friendly)

A few pitfalls up front. I originally planned to use Milvus, but couldn't get it running on Windows (even with Docker fully configured), so I switched to ChromaDB. Embedding models are usually deployed locally, but my machine is CPU-only, so that was out; I went with Jina's hosted embeddings, which perform well (GLM's embeddings also work, but require code changes). The last problem was the fit between DeepSeek and LlamaIndex: DeepSeek is accessed through the OpenAI-style interface, and I only got everything running after editing the source of LlamaIndex's OpenAI integration and adding a configuration entry. If you're in China and want to use this setup, feel free to copy it; a like is appreciated.

Environment:

OS: Windows 11
Python 3.10
llama-index 0.11.20
chromadb 0.5.15

The sample data file is the official example (the Paul Graham essay); your own documents work just as well.
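In the 0.11 line the integrations ship as separate PyPI packages, so the install looks roughly like this (a sketch; package names as published on PyPI, pin the integration versions too if you hit resolver conflicts):

pip install llama-index==0.11.20 chromadb==0.5.15
pip install llama-index-vector-stores-chroma llama-index-embeddings-jinaai llama-index-llms-openai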

Full source code:

# %%
import chromadb

from llama_index.core import (
    Settings,
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
)
from llama_index.llms.openai import OpenAI
from llama_index.vector_stores.chroma import ChromaVectorStore

# %%
# DeepSeek exposes an OpenAI-compatible API, so LlamaIndex's OpenAI class
# works once base_url points at DeepSeek ("deepseek-chat" also has to be
# registered in llama_index's openai utils; see section 1 below).
deepseek_api_key = "sk"  # your DeepSeek API key
deepseek_base_url = "https://api.deepseek.com/v1"

Settings.llm = OpenAI(
    model="deepseek-chat",
    api_key=deepseek_api_key,
    base_url=deepseek_base_url,
)
# %%
import os

jinaai_api_key = "jina"  # your Jina AI API key
os.environ["JINAAI_API_KEY"] = jinaai_api_key

from llama_index.embeddings.jinaai import JinaEmbedding

text_embed_model = JinaEmbedding(
    api_key=jinaai_api_key,
    model="jina-embeddings-v3",
    # choose `retrieval.passage` to get passage embeddings
    task="retrieval.passage",
)

# %%
# An EphemeralClient would keep everything in memory; this script uses a
# PersistentClient below instead, so the index survives restarts.

# %%
# use the Jina model as the index's embedding model
embed_model = text_embed_model

# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()

# save to disk

db = chromadb.PersistentClient(path="./chroma_db")
chroma_collection = db.get_or_create_collection("quickstart")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context, embed_model=embed_model
)

# load from disk
db2 = chromadb.PersistentClient(path="./chroma_db")
chroma_collection = db2.get_or_create_collection("quickstart")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
index = VectorStoreIndex.from_vector_store(
    vector_store,
    embed_model=embed_model,
)

# Query Data from the persisted index
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print("response:", response)
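I used Jina's hosted API because local embedding deployment wasn't realistic on my machine, but if you want to stay fully local, LlamaIndex's HuggingFaceEmbedding wrapper is the drop-in replacement. A sketch, assuming the llama-index-embeddings-huggingface package is installed; BAAI/bge-small-en-v1.5 is my example pick of a small model that runs tolerably on CPU, not something this walkthrough actually used:

from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# small English model; downloaded from the Hugging Face Hub on first use
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

Everything else in the script stays the same; just pass this embed_model to VectorStoreIndex instead of the Jina one.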


1. How to configure DeepSeek in LlamaIndex

LlamaIndex validates OpenAI model names against a hard-coded table in its OpenAI integration, so deepseek-chat has to be registered by hand: find the openai utils module under llama_index and add the entry "deepseek-chat": 128000, (the value is the context-window size).
Path (here for a conda env named workspace): C:\Users\USER\.conda\envs\workspace\lib\site-packages\llama_index\llms\openai\utils.py
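For reference, the edited section looks roughly like this. This is a sketch: the exact dict name depends on your llama-index version; in the 0.11.x line the context-window sizes live in dicts such as ALL_AVAILABLE_MODELS inside utils.py.

# llama_index/llms/openai/utils.py (excerpt, after the edit)
ALL_AVAILABLE_MODELS = {
    # ... existing OpenAI models ...
    "deepseek-chat": 128000,  # added: DeepSeek chat model, 128k context
}

Keep in mind that edits to site-packages are lost on upgrade. If you would rather not patch anything, the OpenAILike class (from the llama-index-llms-openai-like package) accepts arbitrary model names for OpenAI-compatible endpoints.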

from llama_index.llms.openai import OpenAI

llm = OpenAI(model="deepseek-chat", base_url="https://api.deepseek.com/v1", api_key="sk-")

response = llm.complete("见到你很高兴")
print(str(response))

2. Using Jina embeddings with LlamaIndex

# Initialise with your API key
import os

jinaai_api_key = "jina_"  # your Jina AI API key
os.environ["JINAAI_API_KEY"] = jinaai_api_key

from llama_index.embeddings.jinaai import JinaEmbedding

text_embed_model = JinaEmbedding(
    api_key=jinaai_api_key,
    model="jina-embeddings-v3",
    # choose `retrieval.passage` to get passage embeddings
    task="retrieval.passage",
)

embeddings = text_embed_model.get_text_embedding("This is the text to embed")
print("Text dim:", len(embeddings))
print("Text embed:", embeddings[:5])

query_embed_model = JinaEmbedding(
    api_key=jinaai_api_key,
    model="jina-embeddings-v3",
    # choose `retrieval.query` to get query embeddings, or choose your desired task type
    task="retrieval.query",
    # `dimensions` controls the embedding size with minimal performance loss.
    # By default it is 1024; a value between 256 and 1024 is recommended.
    dimensions=512,
)

embeddings = query_embed_model.get_query_embedding(
    "This is the query to embed"
)
print("Query dim:", len(embeddings))
print("Query embed:", embeddings[:5])


3. Using ChromaDB with LlamaIndex

# %%
import chromadb

from llama_index.core import (
    Settings,
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
)
from llama_index.llms.openai import OpenAI
from llama_index.vector_stores.chroma import ChromaVectorStore

# %%
# same DeepSeek configuration as in the full script above
deepseek_api_key = "sk-"  # your DeepSeek API key
deepseek_base_url = "https://api.deepseek.com/v1"

Settings.llm = OpenAI(
    model="deepseek-chat",
    api_key=deepseek_api_key,
    base_url=deepseek_base_url,
)


# %%
import os

jinaai_api_key = "jina_"  # your Jina AI API key
os.environ["JINAAI_API_KEY"] = jinaai_api_key

from llama_index.embeddings.jinaai import JinaEmbedding

text_embed_model = JinaEmbedding(
    api_key=jinaai_api_key,
    model="jina-embeddings-v3",
    # choose `retrieval.passage` to get passage embeddings
    task="retrieval.passage",
)

# %%
# create an in-memory client and a new collection (data is lost when the
# process exits; see the persistent variant in the full script above)
chroma_client = chromadb.EphemeralClient()
chroma_collection = chroma_client.create_collection("quickstart")

# %%
# use the Jina model as the index's embedding model
embed_model = text_embed_model

# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()

# %%
# set up ChromaVectorStore and load in data
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)

# %%

storage_context = StorageContext.from_defaults(vector_store=vector_store)

# %%
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context, embed_model=embed_model
)



# Query Data
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print("response:", response)
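To see what the retriever actually handed to the LLM, inspect the source nodes on the response. This uses standard LlamaIndex response attributes; similarity_top_k=3 is just an example value, and the score may be None for some vector stores:

# retrieve a few chunks and show where each one came from
query_engine = index.as_query_engine(similarity_top_k=3)
response = query_engine.query("What did the author do growing up?")

for node in response.source_nodes:
    print("score:", node.score, "file:", node.metadata.get("file_name"))
    print(node.get_content()[:200])
    print("---")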

