You could, for instance, ask a car fitted with a reasoning engine why it had hit the brakes, and it would be able to tell you that it thought a bicycle hidden by a van was about to enter the intersection ahead.
A machine-learning program cannot do that.
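To make that contrast concrete, here is a minimal Python sketch; the names and the single rule are invented purely for illustration (they are not drawn from Dr Bhatt's work or any real driving system). A hand-written symbolic rule can hand back the premise behind its decision, while a learned scorer returns only an action.

```python
# Illustrative only: the Percept type, the rule and the weights below are
# hypothetical, not code from any real autonomous-driving system.
from dataclasses import dataclass

@dataclass
class Percept:
    object_type: str      # e.g. "bicycle"
    occluded_by: str      # e.g. "van" (empty string if fully visible)
    predicted_path: str   # e.g. "entering intersection"

def decide_symbolic(percepts):
    """Rule-based controller: returns an action plus the reason for it."""
    for p in percepts:
        if p.object_type == "bicycle" and p.predicted_path == "entering intersection":
            return "brake", f"a {p.object_type} hidden by a {p.occluded_by} is about to enter the intersection"
    return "continue", "no hazard matched any rule"

def decide_learned(features, weights):
    """Learned controller: a bare decision, with no human-readable premise attached."""
    score = sum(f * w for f, w in zip(features, weights))
    return "brake" if score > 0.5 else "continue"

action, reason = decide_symbolic([Percept("bicycle", "van", "entering intersection")])
print(action, "-", reason)                     # brake - a bicycle hidden by a van ...
print(decide_learned([0.9, 0.2], [0.8, 0.1]))  # brake, but it cannot say why
```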
Besides helping improve program design, such information will, Dr Bhatt reckons, help regulators and insurance companies.
It may thus speed up public acceptance of autonomous vehicles.
Dr Bhatt's work is part of a long-standing debate in the field of artificial intelligence.
Early AI researchers, working in the 1950s, chalked up some successes using this sort of preprogrammed reasoning.
But, beginning in the 1990s, machine learning improved dramatically, thanks to better programming techniques combined with more powerful computers and the availability of more data.
Today almost all AI is based on it.
Dr Bhatt is not, though, alone in his scepticism.
Gary Marcus, who studies psychology and neural science at New York University and is also the boss of an AI and robotics company called Robust.AI, agrees.
To support his point of view, Dr Marcus cites a much-publicised result, albeit from eight years ago.
This was when engineers at DeepMind (then an independent company, now part of Google) wrote a program that could learn, without being given any hints about the rules, how to play Breakout, a video game which involves hitting a moving virtual ball with a virtual paddle.
DeepMind's program was a great player.
But when another group of researchers tinkered with Breakout's code—shifting the location of the paddle by just a few pixels—its abilities plummeted.
It was not able to generalise what it had learned from a specific situation even to a situation that was only slightly different.
For Dr Marcus, this example highlights the fragility of machine learning.
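A toy stand-in for that kind of brittleness (this is a deliberately naive memorising policy, not DeepMind's agent): a program that has in effect memorised exact pixel observations answers correctly on the frame it was trained on, and fails the moment the same scene is shifted by a couple of pixels.

```python
# Toy illustration of brittleness under a small input shift; a naive
# memorising "policy", not a real reinforcement learner.
import numpy as np

rng = np.random.default_rng(0)
frame = rng.integers(0, 2, size=(8, 8))      # stand-in for a game screen

# "Training": the policy stores the exact observations it has seen.
policy = {frame.tobytes(): "move_left"}

def act(observation):
    return policy.get(observation.tobytes(), "no idea")

shifted = np.roll(frame, shift=2, axis=1)    # same scene, a few pixels over
print(act(frame))     # move_left  - the situation it was trained on
print(act(shifted))   # no idea    - a slightly different one defeats it
```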
But others think it is symbolic reasoning which is brittle, and that machine learning still has a lot of mileage left in it.
Among them is Jeff Hawke, vice-president of technology at Wayve, a self-driving-car firm in London.
Wayve's approach is to train the software elements running a car's various components simultaneously, rather than separately.
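As a rough sketch of what training components "simultaneously, rather than separately" can mean, here is a generic end-to-end pattern in PyTorch; the module shapes and the single Adam optimiser are assumptions for illustration, not Wayve's actual architecture.

```python
# Generic end-to-end training sketch (illustrative assumptions throughout,
# not Wayve's system): one loss and one optimiser update a perception
# module and a control module together, rather than in separate stages.
import torch
import torch.nn as nn
import torch.nn.functional as F

perception = nn.Sequential(nn.Linear(64, 32), nn.ReLU())   # sensor features -> scene code
control = nn.Sequential(nn.Linear(32, 2))                  # scene code -> steering, braking

# Joint training: the optimiser sees the parameters of both modules, so the
# gradient of one driving loss shapes perception and control at the same time.
optimizer = torch.optim.Adam(
    list(perception.parameters()) + list(control.parameters()), lr=1e-3
)

camera_features = torch.randn(16, 64)   # dummy batch of sensor features
expert_actions = torch.randn(16, 2)     # dummy expert steering/braking targets

prediction = control(perception(camera_features))
loss = F.mse_loss(prediction, expert_actions)

optimizer.zero_grad()
loss.backward()
optimizer.step()    # a single step updates both modules at once
```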
In demonstrations, Wayve's cars make good decisions while navigating narrow, heavily trafficked London streets—a task that challenges many humans.