
Checking and Upgrading PyTorch to a Specific Version

Table of Contents

  • Checking and Upgrading PyTorch to a Specific Version
    • Checking the PyTorch version
      • Checking the PyTorch version with a python command
      • Checking the currently installed PyTorch version with pip
    • Upgrading PyTorch to a specific version
      • Upgrading to a specific version

Checking and Upgrading PyTorch to a Specific Version

Checking the PyTorch version

Checking the PyTorch version with a python command

The quickest check is to import torch in Python and print torch.__version__ directly.

Command:

python -c "import torch; print(torch.__version__)"
or
python
import torch
print(torch.__version__)

Example:

root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM# python
Python 3.8.10 (default, Jun  4 2021, 15:09:15) 
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
1.7.0+cu110
>>> 
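
Beyond the bare version string, it is often useful to also confirm which CUDA build the installed torch was compiled against and whether a GPU is actually visible. A minimal sketch (the values in the comments are just examples from this environment):

# print the torch version, the CUDA toolkit it was built with, and GPU visibility
import torch

print(torch.__version__)          # e.g. 1.7.0+cu110
print(torch.version.cuda)         # e.g. 11.0 (None on CPU-only builds)
print(torch.cuda.is_available())  # True if a usable GPU is detected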

Checking the currently installed PyTorch version with pip

Command:

pip show torch

Example:

root@autodl-container-616f40a3b3-41cb82d9:~# pip show torch
Name: torch
Version: 1.7.0+cu110
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: packages@pytorch.org
License: BSD-3
Location: /root/miniconda3/lib/python3.8/site-packages
Requires: dataclasses, numpy, typing-extensions, future
Required-by: torchvision
root@autodl-container-616f40a3b3-41cb82d9:~# 
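
If you want the same information from inside a script without importing torch itself (useful when the import is slow or broken), the standard-library importlib.metadata can read the installed package metadata. A minimal sketch, assuming Python 3.8+:

# read the installed torch version from package metadata, without importing torch
from importlib.metadata import version, PackageNotFoundError

try:
    print(version("torch"))        # e.g. 1.7.0+cu110
except PackageNotFoundError:
    print("torch is not installed")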

In a conda environment you can use:

conda list torch

Example:

root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM# conda list torch
# packages in environment at /root/miniconda3:
#
# Name                    Version                   Build  Channel
torch                     1.7.0+cu110              pypi_0    pypi
torchvision               0.8.1+cu110              pypi_0    pypi
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM# 
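
Whichever tool you use, a quick way to list torch and every related package (torchvision, torchaudio, ...) at once is to filter the pip listing. A small convenience, assuming a Linux shell with grep available:

pip list | grep -i torch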

Upgrading PyTorch to a specific version

Upgrading to a specific version

Use pip to upgrade PyTorch to a specific version (for example, 2.3.1).

pip install --upgrade torch             # upgrade to the latest release

pip install --upgrade torch==<version>  # upgrade to a specific version

Example:

root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM# pip install --upgrade torch==2.3.1
Looking in indexes: http://mirrors.aliyun.com/pypi/simple
Collecting torch==2.3.1
  Downloading http://mirrors.aliyun.com/pypi/packages/c0/7e/309d63c6330a0b821a6f55e06dcef6704a7ab8b707534a4923837570624e/torch-2.3.1-cp38-cp38-manylinux1_x86_64.whl (779.1 MB)
     |████████████████████████████████| 779.1 MB 11.8 MB/s 
Collecting nvidia-nccl-cu12==2.20.5
  Downloading http://mirrors.aliyun.com/pypi/packages/4b/2a/0a131f572aa09f741c30ccd45a8e56316e8be8dfc7bc19bf0ab7cfef7b19/nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl (176.2 MB)
     |████████████████████████████████| 176.2 MB 5.1 MB/s 
Requirement already satisfied: filelock in /root/miniconda3/lib/python3.8/site-packages (from torch==2.3.1) (3.16.1)
Collecting nvidia-cudnn-cu12==8.9.2.26
  Downloading http://mirrors.aliyun.com/pypi/packages/ff/74/a2e2be7fb83aaedec84f391f082cf765dfb635e7caa9b49065f73e4835d8/nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl (731.7 MB)
     |████████████████████████████████| 731.7 MB 5.1 MB/s 
Collecting nvidia-cuda-nvrtc-cu12==12.1.105
  Downloading http://mirrors.aliyun.com/pypi/packages/b6/9f/c64c03f49d6fbc56196664d05dba14e3a561038a81a638eeb47f4d4cfd48/nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (23.7 MB)
     |████████████████████████████████| 23.7 MB 15.5 MB/s 
Requirement already satisfied: fsspec in /root/miniconda3/lib/python3.8/site-packages (from torch==2.3.1) (2024.9.0)
Requirement already satisfied: jinja2 in /root/miniconda3/lib/python3.8/site-packages (from torch==2.3.1) (3.0.1)
Collecting nvidia-cufft-cu12==11.0.2.54
  Downloading http://mirrors.aliyun.com/pypi/packages/86/94/eb540db023ce1d162e7bea9f8f5aa781d57c65aed513c33ee9a5123ead4d/nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl (121.6 MB)
     |████████████████████████████████| 121.6 MB 15.3 MB/s 
Collecting sympy
  Downloading http://mirrors.aliyun.com/pypi/packages/99/ff/c87e0622b1dadea79d2fb0b25ade9ed98954c9033722eb707053d310d4f3/sympy-1.13.3-py3-none-any.whl (6.2 MB)
     |████████████████████████████████| 6.2 MB 5.3 MB/s 
Collecting networkx
  Downloading http://mirrors.aliyun.com/pypi/packages/a8/05/9d4f9b78ead6b2661d6e8ea772e111fc4a9fbd866ad0c81906c11206b55e/networkx-3.1-py3-none-any.whl (2.1 MB)
     |████████████████████████████████| 2.1 MB 8.3 MB/s 
Collecting nvidia-cuda-cupti-cu12==12.1.105
  Downloading http://mirrors.aliyun.com/pypi/packages/7e/00/6b218edd739ecfc60524e585ba8e6b00554dd908de2c9c66c1af3e44e18d/nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (14.1 MB)
     |████████████████████████████████| 14.1 MB 4.9 MB/s 
Collecting nvidia-curand-cu12==10.3.2.106
  Downloading http://mirrors.aliyun.com/pypi/packages/44/31/4890b1c9abc496303412947fc7dcea3d14861720642b49e8ceed89636705/nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl (56.5 MB)
     |████████████████████████████████| 56.5 MB 5.4 MB/s 
Collecting typing-extensions>=4.8.0
  Downloading http://mirrors.aliyun.com/pypi/packages/26/9f/ad63fc0248c5379346306f8668cda6e2e2e9c95e01216d2b8ffd9ff037d0/typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Collecting nvidia-cusolver-cu12==11.4.5.107
  Downloading http://mirrors.aliyun.com/pypi/packages/bc/1d/8de1e5c67099015c834315e333911273a8c6aaba78923dd1d1e25fc5f217/nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl (124.2 MB)
     |████████████████████████████████| 124.2 MB 4.0 MB/s 
Collecting nvidia-cusparse-cu12==12.1.0.106
  Downloading http://mirrors.aliyun.com/pypi/packages/65/5b/cfaeebf25cd9fdec14338ccb16f6b2c4c7fa9163aefcf057d86b9cc248bb/nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl (196.0 MB)
     |████████████████████████████████| 196.0 MB 5.6 MB/s 
Collecting nvidia-cublas-cu12==12.1.3.1
  Downloading http://mirrors.aliyun.com/pypi/packages/37/6d/121efd7382d5b0284239f4ab1fc1590d86d34ed4a4a2fdb13b30ca8e5740/nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl (410.6 MB)
     |████████████████████████████████| 410.6 MB 6.8 MB/s 
Collecting triton==2.3.1
  Downloading http://mirrors.aliyun.com/pypi/packages/d3/55/45b3882019a8d69ad73b5b2bd1714cb2d6653b39e7376b7ac5accf745760/triton-2.3.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (168.0 MB)
     |████████████████████████████████| 168.0 MB 4.3 MB/s 
Collecting nvidia-nvtx-cu12==12.1.105
  Downloading http://mirrors.aliyun.com/pypi/packages/da/d3/8057f0587683ed2fcd4dbfbdfdfa807b9160b809976099d36b8f60d08f03/nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (99 kB)
     |████████████████████████████████| 99 kB 9.7 MB/s 
Collecting nvidia-cuda-runtime-cu12==12.1.105
  Downloading http://mirrors.aliyun.com/pypi/packages/eb/d5/c68b1d2cdfcc59e72e8a5949a37ddb22ae6cade80cd4a57a84d4c8b55472/nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (823 kB)
     |████████████████████████████████| 823 kB 9.3 MB/s 
Collecting nvidia-nvjitlink-cu12
  Downloading http://mirrors.aliyun.com/pypi/packages/a8/48/a9775d377cb95585fb188b469387f58ba6738e268de22eae2ad4cedb2c41/nvidia_nvjitlink_cu12-12.6.68-py3-none-manylinux2014_x86_64.whl (19.7 MB)
     |████████████████████████████████| 19.7 MB 5.8 MB/s 
Requirement already satisfied: MarkupSafe>=2.0 in /root/miniconda3/lib/python3.8/site-packages (from jinja2->torch==2.3.1) (2.0.1)
Collecting mpmath<1.4,>=1.1.0
  Downloading http://mirrors.aliyun.com/pypi/packages/43/e3/7d92a15f894aa0c9c4b49b8ee9ac9850d6e63b03c9c32c0367a13ae62209/mpmath-1.3.0-py3-none-any.whl (536 kB)
     |████████████████████████████████| 536 kB 6.2 MB/s 
Installing collected packages: nvidia-nvjitlink-cu12, nvidia-cusparse-cu12, nvidia-cublas-cu12, mpmath, typing-extensions, triton, sympy, nvidia-nvtx-cu12, nvidia-nccl-cu12, nvidia-cusolver-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cudnn-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, networkx, torch
  Attempting uninstall: typing-extensions
    Found existing installation: typing-extensions 3.10.0.2
    Uninstalling typing-extensions-3.10.0.2:
      Successfully uninstalled typing-extensions-3.10.0.2
  Attempting uninstall: torch
    Found existing installation: torch 1.7.0+cu110
    Uninstalling torch-1.7.0+cu110:
      Successfully uninstalled torch-1.7.0+cu110
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchvision 0.8.1+cu110 requires torch==1.7.0, but you have torch 2.3.1 which is incompatible.
Successfully installed mpmath-1.3.0 networkx-3.1 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.20.5 nvidia-nvjitlink-cu12-12.6.68 nvidia-nvtx-cu12-12.1.105 sympy-1.13.3 torch-2.3.1 triton-2.3.1 typing-extensions-4.12.2
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM#
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM# python -c "import torch; print(torch.__version__)"
2.3.1+cu121
root@autodl-container-616f40a3b3-41cb82d9:~/autodl-tmp/LLM# 
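
Note the dependency warning in the log above: the previously installed torchvision 0.8.1+cu110 still pins torch==1.7.0, so it is now incompatible with torch 2.3.1. To avoid this, upgrade torchvision in the same command. A sketch, assuming torchvision 0.18.1 is the release matching torch 2.3.1 (check the official compatibility matrix for your target version):

pip install --upgrade torch==2.3.1 torchvision==0.18.1

If you need a specific CUDA build rather than whatever the default index serves, point pip at the corresponding PyTorch wheel index (cu121 assumed here):

pip install --upgrade torch==2.3.1 torchvision==0.18.1 --index-url https://download.pytorch.org/whl/cu121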
