Approach for integrating Jupyter AI with a local LLM
References:
Jupyter AI developer documentation
https://jupyter-ai.readthedocs.io/en/latest/developers/index.html
LangChain custom LLM documentation
https://python.langchain.com/v0.1/docs/modules/model_io/llms/custom_llm/
Stack Overflow question: integrating a local LLM with Jupyter AI
https://stackoverflow.com/questions/78989389/jupyterai-local-llm-integration/78989646#78989646
Profile of the answer's author, krassowski, who has 117 posts on JupyterLab
https://stackoverflow.com/users/6646912/krassowski
====================================
Approach
1. Briefly, define the CustomLLM with something like:
import requests
from typing import Any, List, Optional

from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM


class CustomLLM(LLM):
    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        payload = ...  # TODO: pass `prompt` to payload here
        # TODO: define `headers`
        # Note: requests needs an explicit scheme on the URL.
        response = requests.request(
            method="POST",
            url="http://10.1xx.1xx.50:8084/generate",
            headers=headers,
            data=payload,
        )
        return response.text  # TODO: change it accordingly

    @property
    def _llm_type(self) -> str:
        return "custom"
2. Create MyProvider:
# my_package/my_provider.py
from jupyter_ai_magics import BaseProvider

# CustomLLM from step 1 is assumed to be defined above in this file
# (or imported from wherever you placed it).


class MyProvider(BaseProvider, CustomLLM):
    id = "my_provider"
    name = "My Provider"
    model_id_key = "model"
    models = ["your_model"]

    def __init__(self, **kwargs):
        model_id = kwargs.get("model_id")
        # you can use `model_id` in `CustomLLM` to change models within the provider
        super().__init__(**kwargs)
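
One way to act on the selected model, sketched under the assumption that your local server accepts a `model` field in a JSON body (the field names here are illustrative, not your server's actual schema). Since the provider stores the chosen model as `self.model_id`, the combined class can read it inside `_call` in place of the TODOs from step 1:

import json

# inside CustomLLM._call, replacing the TODOs above:
headers = {"Content-Type": "application/json"}
payload = json.dumps({"model": self.model_id, "prompt": prompt})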
3. Define an entry point by configuring pyproject.toml:
# my_package/pyproject.toml
[project]
name = "my_package"
version = "0.0.1"
[project.entry-points."jupyter_ai.model_providers"]
my-provider = "my_provider:MyProvider"
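
For the entry point string `my_provider:MyProvider` to resolve, `my_provider.py` must be importable as a top-level module after installation. One layout that satisfies this (an assumption, matching the file paths shown above):

my_package/
├── pyproject.toml
└── my_provider.py    # contains CustomLLM and MyProvider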
=================================
Deployment
cd my_package/
pip install -e .
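
After installing, restart JupyterLab so the new entry point is picked up. The provider can then be verified with the %ai magic (assuming jupyter_ai_magics is installed in the same environment):

%load_ext jupyter_ai_magics
%ai list

`my_provider` with model `your_model` should appear in the listing; the model is then selectable in the Jupyter AI chat settings, or usable in a notebook via the `%%ai my_provider:your_model` cell magic.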