Published 2016/3/29 11:03:53
IBM's new chip concept: AI training up to 30,000 times faster

IBM researchers recently published a paper describing a new chip concept called the Resistive Processing Unit (RPU). According to the paper, the chip could speed up the training of deep neural networks by a factor of up to 30,000 compared with conventional CPUs.

A deep neural network (DNN) is an artificial neural network with multiple hidden layers. Such a network can be trained with or without supervision, and the result is a machine-learning system (or AI) that can "learn" on its own, which is why the approach is known as deep learning.
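
As a rough illustration of the "multiple hidden layers" idea (not taken from the paper), the Python sketch below builds a tiny network with two hidden layers; the layer sizes, ReLU activation, and random weights are arbitrary choices for demonstration only.

    # Minimal sketch of a deep neural network: an artificial neural network
    # with more than one hidden layer. Sizes and activation are arbitrary,
    # chosen only for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    # Weights for a 784 -> 256 -> 128 -> 10 network (two hidden layers).
    layer_sizes = [784, 256, 128, 10]
    weights = [rng.standard_normal((m, n)) * 0.01
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(x):
        """Propagate one input vector through every layer."""
        for w in weights[:-1]:
            x = np.maximum(0.0, x @ w)   # ReLU on the hidden layers
        return x @ weights[-1]           # linear output layer

    scores = forward(rng.standard_normal(784))
    print(scores.shape)  # (10,)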

Not long ago, AlphaGo, the Go-playing AI from Google (Alphabet) DeepMind that defeated Lee Sedol in their man-machine match, used similar algorithms. AlphaGo combines a tree-search algorithm with two multi-layer deep neural networks, each containing millions of neuron-like connections. One, the "policy network", estimates which move has the highest chance of winning; the other, the "value network", tells AlphaGo how to evaluate a position for both black and white, which reduces the depth the search has to explore.
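
Purely as an illustration of how those two networks could fit together (this is not AlphaGo's actual code), the sketch below uses a policy network to narrow which moves are considered and a value network to score the resulting positions; policy_net, value_net, and apply_move are hypothetical stand-ins.

    # Illustrative sketch only: the policy network limits breadth (which moves
    # to look at) and the value network limits depth (score a position instead
    # of playing the game out). All three callables are hypothetical.
    def best_move(position, policy_net, value_net, apply_move, top_k=5):
        # Policy network: probability for each legal move; keep the top few.
        move_probs = policy_net(position)
        candidates = sorted(move_probs, key=move_probs.get, reverse=True)[:top_k]

        # Value network: evaluate the position reached by each candidate move.
        scored = {m: value_net(apply_move(position, m)) for m in candidates}
        return max(scored, key=scored.get)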

Given these promising results, many machine-learning researchers have turned their attention to deep neural networks. Reaching a useful level of intelligence, however, requires a very large number of computing chips; AlphaGo, for example, ran on several thousand of them, which makes this a task that consumes enormous computing resources and money. IBM's researchers have now proposed a chip concept whose claimed computing power would let a single chip stand in for thousands of conventional ones, and if tens of thousands of such chips were combined, AI capabilities might see further breakthroughs in the future.

The chip, called the RPU, exploits two properties of algorithms such as deep learning: locality and parallelism. Borrowing ideas from next-generation non-volatile memory (NVM) technology, the RPU stores the weights used by the algorithm locally, minimizing data movement during training. The researchers say that if RPUs were applied at scale to a deep neural network with more than a billion weights, training could be accelerated by up to 30,000 times; a result that would normally take thousands of machines several days could be produced with these chips in a few hours, and with far lower energy consumption.
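
To make the "store weights locally and avoid moving data" idea concrete, here is a toy sketch (an illustration, not something taken from IBM's paper) of the core local operation in training: a rank-1, outer-product weight update that an RPU-style array could apply cell by cell in parallel, without the weight matrix ever leaving the chip.

    # Toy sketch (an assumption for illustration, not IBM's design): during
    # backpropagation each weight w[i][j] is nudged by the product of the
    # layer's input activation x[i] and error signal d[j]. On a resistive
    # array, every cell could apply its own x[i]*d[j] term in place, so the
    # weights never have to shuttle between memory and processor.
    import numpy as np

    def local_weight_update(w, x, d, lr=0.01):
        """In-place rank-1 (outer-product) update: w += lr * x d^T."""
        w += lr * np.outer(x, d)
        return w

    w = np.zeros((4, 3))                 # weights "stored where they are used"
    x = np.array([1.0, 0.5, -0.5, 2.0])  # input activations for this layer
    d = np.array([0.1, -0.2, 0.05])      # error signal from the layer above
    local_weight_update(w, x, d)
    print(w)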

Of course, the paper only proposes a concept: the chip is still at the research stage, and since ordinary non-volatile memory has not yet reached the mainstream market, such a chip is probably still several years away from commercial availability. But if it really delivers the claimed advantages in computing power and energy efficiency, AI-focused giants such as Google and Facebook will surely take a close interest, and IBM itself is an active player in AI and big data, so if the chip is actually built, finding a market should not be a problem.









