
Geoffrey Hinton's Warning about AI (2023-05-07)

Source

Video (Bilibili): https://www.bilibili.com/video/BV1zo4y1x7LL
YouTube: https://www.youtube.com/watch?v=qpoRO378qRY&t=1884s&pp=ygUPZ2VvZmZyZXkgaGludG9u

Quoted excerpts

Very recently, I've changed my mind a lot about the relationship between the brain and the kind of digital intelligence we're developing. I used to think that the computer models we were developing weren't as good as the brain. Over the last few months, I've changed my mind completely. And I think probably the computer models are working in a rather different way from the brain. They are using backpropagation, and I think the brain probably isn't. A couple of things have led me to that conclusion. (Blogger's note: I take this to refer to his departure from Google.)

If you look at these large language models, they have about a trillion connections, and things like GPT-4 know much more than we do. They have sort of common sense knowledge about everything. They probably know 100 times more things than we do. They've got a trillion connections, but we've got 100 trillion connections. So they are much, much better at getting a lot of knowledge into only 1 trillion connections than we are. And I think it's because backpropagation may be a much better learning algorithm than what we've got. That's scary.
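For reference, backpropagation is the procedure that adjusts every connection weight in proportion to how much it contributed to the output error. Below is a minimal sketch, assuming a tiny two-layer NumPy network with made-up sizes and data (purely illustrative, not from the interview):

```python
import numpy as np

# Hypothetical toy setup: 4 examples, 3 input features, 5 hidden units.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))

W1 = rng.normal(size=(3, 5)) * 0.1   # first layer of "connections"
W2 = rng.normal(size=(5, 1)) * 0.1   # second layer of "connections"
lr = 0.01

for step in range(200):
    # forward pass
    h = np.tanh(x @ W1)
    pred = h @ W2
    loss = ((pred - y) ** 2).mean()

    # backward pass: propagate the error back through each layer
    d_pred = 2 * (pred - y) / len(x)
    d_W2 = h.T @ d_pred
    d_h = d_pred @ W2.T
    d_W1 = x.T @ (d_h * (1 - h ** 2))  # derivative of tanh is 1 - tanh^2

    # nudge every connection weight in the error-reducing direction
    W1 -= lr * d_W1
    W2 -= lr * d_W2
```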

It (backpropagation) can pack more information into only a few connections, where "only a few" here means a trillion.

(An example of why we should be scared of these computer models.) A computer is digital, which involves very high energy costs and very careful fabrication. You can have many copies of the same model running on different hardware that do exactly the same thing. Suppose you have 10,000 copies. They can be looking at 10,000 different subsets of the data. Whenever one of them learns anything, all the others know it. One of them figures out how to change the weights so it can deal with this data; they can all communicate with each other, and they all agree to change the weights by the average of what all of them want. And now the 10,000 things are communicating very effectively with each other. So they can see 10,000 times as much data as one agent could. People can't do that. If I learn a whole lot of stuff about quantum mechanics, and I want you to know all that stuff, it's a long painful process of getting you to understand it. I can't just copy my weights into your brain.
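The sharing mechanism Hinton describes is essentially synchronized weight averaging across identical replicas, as in data-parallel training. A minimal sketch, assuming 4 copies instead of 10,000 and a single linear layer instead of a full model (all names and sizes are illustrative, not from the interview):

```python
import numpy as np

rng = np.random.default_rng(1)
n_copies = 4
lr = 0.05

# All copies start from the same weights.
W0 = rng.normal(size=(3, 1)) * 0.1
copies = [W0.copy() for _ in range(n_copies)]

# Each copy looks at a different subset (shard) of the data.
shards = [(rng.normal(size=(8, 3)), rng.normal(size=(8, 1)))
          for _ in range(n_copies)]

for step in range(100):
    updates = []
    for w, (x, y) in zip(copies, shards):
        pred = x @ w
        grad = x.T @ (2 * (pred - y) / len(x))  # change this copy wants
        updates.append(-lr * grad)

    # They "agree to change the weights by the average of what all of them
    # want", so every copy instantly shares what the others learned.
    avg_update = np.mean(updates, axis=0)
    for w in copies:
        w += avg_update
```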

Host: When I have an uncomfortable feeling, I just close my laptop.

Yes. But if they're much smarter than us, they'll be very good at manipulating us. You won't realize what's going on. You'll be like a 2-year-old. You'll be that easy to manipulate. So even if they can't directly pull levers, they can certainly get us to pull levers. It turns out that if you can manipulate people, you can invade a building in Washington without ever going there yourself.

(Host) In some sense, talk is cheap. If we don't then take action, what do we do?

I wish it was like climate change, where you could say: if you've got half a brain, you'd stop burning carbon. It's clear what we should do about it. It's clear that's painful, but it has to be done. … I don't think there's much chance of stopping the development (of AI). What we want is some way of making sure that even if they're smarter than us, they're gonna do things that are beneficial for us. That's called the alignment problem. But we need to do that in a world where there are bad actors who want to build robot soldiers that kill people. And it seems very hard to me. So I'm sorry. I'm sounding the alarm and saying we have to worry about this. And I wish I had a nice simple solution I could push, but I don't. But I think it's very important that people get together and think hard about it and see whether there's a solution. It's not clear there is a solution.

Stopping it (AI development) might be a rational thing to do, but there's no way it's gonna happen. One reason is that if the US stops developing it, the Chinese won't. They're gonna be used in weapons (they refers to AI technology). Just for that reason alone, the governments aren't gonna stop developing them. Google came up with transformers and diffusion models, and it didn't put them out there for people to use and abuse. Microsoft decided to put it out there. Google didn't really have much choice. If you're gonna live in a capitalist system, you can't stop Google from competing with Microsoft. So I don't think Google did anything wrong. But I think it's just inevitable, in a capitalist system or a system with competition between countries like the US and China, that this stuff will be developed (this refers to AI technology). If we allowed it to take over, it would be bad for all of us. I wish we could get the US and China to agree, like we could with nuclear weapons. We are all in the same boat with respect to the existential threat.


