Running Large Models Locally: Results and Configuration
On this machine I installed four models with ollama: qwen2.5:32b, deepseek-r1:32b, deepseek-r1:14b, and llama3.1:8b, all in their Q4_K_M quantized versions.
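Whether everything was pulled with the expected quantization can be checked against ollama's local HTTP API (it listens on localhost:11434 by default). The sketch below is only illustrative; the field names follow the /api/tags response and should be verified against the installed ollama version.

```python
# Minimal sketch: list locally installed models and their quantization level
# via ollama's HTTP API (default address http://localhost:11434).
# Field names follow the /api/tags response; verify them against your ollama version.
import requests

tags = requests.get("http://localhost:11434/api/tags", timeout=10).json()
for m in tags.get("models", []):
    details = m.get("details", {})
    print(m["name"], details.get("parameter_size"), details.get("quantization_level"))
```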
During inference the load falls mostly on the CPU and memory: qwen2.5:32b needs roughly 22 GB of RAM, and deepseek-r1:32b is similar. With the nouveau driver, cpu-x could not read the GPU's activity. A while ago I switched to the NVIDIA driver and tried qwen2.5:32b again: nvidia-smi now showed the GPU in use, and the heavy RAM usage I saw before no longer appeared, yet the output speed was almost unchanged, so I am not sure whether that is normal. Since the NVIDIA driver left my two monitors blank, and other Arch users have reported NVIDIA-driver problems during system updates, I switched back to nouveau.
In terms of output quality, the two 32b models are the best and are noticeably more accurate on complex questions than the other two, but they are also the slowest. For me they are only barely usable; if the speed reached 10 tokens/s they would feel reasonably smooth.
Environment and Configuration
- OS: Arch Linux
- CPU: AMD Ryzen 7 5800H with Radeon Graphics (16) @ 4.463GHz
- GPU: AMD ATI Radeon Vega Series / Radeon Vega Mobile Series
- GPU: NVIDIA GeForce RTX 3050 Ti Mobile / Max-Q
- GPU driver: nouveau
- Memory: 64 GB (2 x 32 GB) DDR4 3200 MHz
qwen2.5:32b
deepseek-r1:32b
llama3.1:8b
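The statistics for the models listed above come from ollama's verbose output; a sketch like the following collects the same figures over the local HTTP API. It assumes the server is running at its default address and the models are already pulled, and the prompt is only an example.

```python
# Sketch for reproducing the per-model statistics via ollama's local HTTP API
# (default address http://localhost:11434). Assumes the server is running and
# the models are already pulled; the prompt is only an example.
import requests

MODELS = ["qwen2.5:32b", "deepseek-r1:32b", "llama3.1:8b"]
PROMPT = "Briefly explain what Q4_K_M quantization means."

for model in MODELS:
    data = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": PROMPT, "stream": False},
        timeout=1800,
    ).json()
    # Durations in the response are reported in nanoseconds.
    eval_rate = data["eval_count"] / (data["eval_duration"] / 1e9)
    print(f"{model}: {data['eval_count']} tokens generated, "
          f"{eval_rate:.1f} tokens/s, total {data['total_duration'] / 1e9:.1f}s")
```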
Some Terminology
Total Duration:
The total time it took the model to complete the task. This includes all processing time.
Load Duration:
The model’s time to load or initialize before starting the task.
Prompt Eval Count:
The number of tokens (individual words or sub-word units) in the input prompt given to the model.
Prompt Eval Duration:
The model’s time to process and understand the input prompt.
Prompt Eval Rate:
The speed at which the model processed the input prompt, measured in tokens per second.
Eval Count:
The number of tokens the model generated in its response; it does not include the prompt tokens.
Eval Duration:
The time the model spent generating the response tokens.
Eval Rate:
The speed at which the model generated the response, measured in tokens per second.
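To make the relationship between these fields concrete, here is an illustrative helper (not part of ollama itself) that derives the rates from the raw values returned by /api/generate; all *_duration fields in that response are reported in nanoseconds.

```python
# Illustrative helper: derive the rates described above from the raw fields
# returned by ollama's /api/generate (all *_duration fields are nanoseconds).
def summarize(stats: dict) -> dict:
    ns = 1e9
    return {
        "prompt_eval_rate": stats["prompt_eval_count"] / (stats["prompt_eval_duration"] / ns),  # tokens/s
        "eval_rate": stats["eval_count"] / (stats["eval_duration"] / ns),                        # tokens/s
        "total_seconds": stats["total_duration"] / ns,
        "load_seconds": stats["load_duration"] / ns,
    }
```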