The deepest problem with deep learning

Feb 10, 2019


On November 21, I read an interview with Yoshua Bengio in Technology Review that to a surprising degree downplayed recent successes in deep learning, emphasizing instead that some other important problems in AI might require important extensions to what deep learning is currently able to do. In particular, Bengio told Technology Review that,

I agreed with virtually every word and thought it was terrific that Bengio said so publicly. I was also struck by what seemed to be (a) an important change in view, or at least framing, relative to how advocates of deep learning framed things a few years ago (see below), (b) movement towards a direction for which I had long advocated, and (c) noteworthy coming from Bengio, who is, after all, one of the major pioneers in deep learning.

So I tweeted it, expecting a few retweets and nothing more. Instead I accidentally launched a Twitterstorm, at times illuminating, at times maddening, with some of the biggest folks in the field, including Bengio’s fellow deep learning pioneer Yann LeCun and one of AI’s deepest thinkers, Judea Pearl.

Here’s the tweet, perhaps forgotten in the storm that followed:

For the record and for comparison, here’s what I had said almost exactly six years earlier, on November 25, 2012, eerily similar,

I stand by that — which as far as I know (and I could be wrong) is the first place where anybody said that deep learning per se wouldn’t be a panacea, and would instead need to work in a larger context to solve a certain class of problems. Bengio was pretty much saying the same thing.

Some people liked the tweet, some people didn’t. Yann LeCun’s response was deeply negative. In a series of tweets he claimed (falsely) that I hate deep learning, and that because I was not personally an algorithm developer, I had no right to speak critically; for good measure, he said that if I had finally seen the light of deep learning, it was only in the last few days, in the space of our Twitter discussion (also false).

By reflecting on what was and wasn’t said (and what does and doesn’t actually check out) in that debate, and where deep learning continues to struggle, I believe that we can learn a lot.

To begin with, let me clear up some misconceptions. I don’t hate deep learning, not at all; we used it in my last company (I was the CEO and a founder), and I expect that I will use it again; I would be crazy to ignore it. I think — and I am saying this for the public record, feel free to quote me — deep learning is a terrific tool for some kinds of problems, particularly those involving perceptual classification, like recognizing syllables and objects, but it is not a panacea. In my NYU debate with LeCun, I praised LeCun’s early work on convolution, which is an incredibly powerful tool. And I have been giving deep learning some (but not infinite) credit ever since I first wrote about it as such, in The New Yorker in 2012, in my January 2018 Deep Learning: A Critical Appraisal article, in which I explicitly said “I don’t think we should abandon deep learning”, and on many occasions in between. LeCun has repeatedly and publicly misrepresented me as someone who has only just woken up to the utility of deep learning, and that’s simply not so.

LeCun’s assertion that I shouldn’t be allowed to comment is similarly absurd; science needs its critics (LeCun himself has been rightly critical of deep reinforcement learning and neuromorphic computing), and although I am not personally an algorithm engineer, my criticism thus far has had lasting predictive value. To take one example, experiments that I did on predecessors to deep learning, first published in 1998, continue to hold validity to this day, as shown in recent work with more modern models by folks like Brendan Lake and Marco Baroni and by Bengio himself. When a field tries to stifle its critics, rather than addressing the underlying criticism, replacing scientific inquiry with politics, something has gone seriously amiss.

But LeCun is right about one thing; there is something that I hate. What I hate is this: the notion that deep learning is without demonstrable limits and might, all by itself, get us to general intelligence, if we just give it a little more time and a little more data, as captured in Andrew Ng’s 2016 suggestion that AI, by which he meant mainly deep learning, would either “now or in the near future“ be able to do “any mental task” a person could do “with less than one second of thought”.

Generally, though certainly not always, criticism of deep learning is sloughed off: either ignored or dismissed, often in an ad hominem way. Whenever anybody points out that there might be a specific limit to deep learning, there is always someone like Jeremy Howard to tell us that the idea that deep learning is overhyped is itself overhyped. Leaders in AI like LeCun acknowledge that there must be some limits, in some vague way, but rarely (and this is why Bengio’s new report was so noteworthy) do they pinpoint what those limits are, beyond acknowledging the data-hungry nature of the systems.

Others like to leverage the opacity of the black box of deep learning to suggest that there are no known limits. Last week, for example, Tom Dietterich said (in answer to a question about the scope of deep learning):

Dietterich is of course technically correct; nobody yet has delivered formal proofs about limits on deep learning, so there is no definite answer. And he is also right that deep learning continues to evolve. But the tweet (which expresses an argument I have heard many times, including from Dietterich more than once) neglects the fact that we also have a lot of strong suggestive evidence of at least some limits in scope, such as empirically observed limits on reasoning abilities, poor performance in natural language comprehension, vulnerability to adversarial examples, and so forth. (At the end, I will even give an example in the domain of object recognition, putatively deep learning’s strong suit.)

To take another example, consider LeCun, Bengio and Hinton’s widely-read 2015 article in Nature on deep learning, which elaborates the strengths of deep learning in considerable detail. There again much of what was said is true, but there was almost nothing acknowledged about limits of deep learning, and it would be easy to walk away from the paper imagining that deep learning is a much broader tool than it really is. The paper’s conclusion furthers that impression by suggesting that deep learning’s historical antithesis — symbol-manipulation/classical AI — should be replaced (“new paradigms are needed to replace the rule-based manipulation of symbolic expressions on large vectors.”). The traditional ending of many scientific papers — limits — is essentially missing, inviting the inference that the horizons for deep learning are limitless, with symbol-manipulation soon to be left in the dustbin of history.

The strategy of emphasizing strength without acknowledging limits is even more pronounced in DeepMind’s 2017 Nature article on Go, which appears to imply similarly limitless horizons for deep reinforcement learning, by suggesting that Go is one of the hardest problems in AI (“Our results comprehensively demonstrate that a pure [deep] reinforcement learning approach is fully feasible, even in the most challenging of domains”) — without acknowledging that other hard problems differ qualitatively in character (e.g., because information in most tasks is less complete than it is in Go) and might not be accessible to similar approaches. (I discuss this further elsewhere.)

It worries me, greatly, when a field dwells largely or exclusively on the strengths of the latest discoveries, without publicly acknowledging possible weaknesses that have actually been well-documented.

Here’s my view: deep learning really is great, but it’s the wrong tool for the job of cognition writ large; it’s a tool for perceptual classification, when general intelligence involves so much more. What I was saying in 2012 (and have never deviated from) is that deep learning ought to be part of the workflow for AI, not the whole thing (“just one element in a very complicated ensemble of things”, as I put it then, “not a universal solvent, [just] one tool among many” as I put it in January). Deep learning is, like anything else we might consider, a tool with particular strengths, and particular weaknesses. Nobody should be surprised by this.

When I rail about deep learning, it’s not because I think it should be “replaced” (cf. Hinton, LeCun and Bengio’s strong language above, where the name of the game is to conquer previous approaches), but because I think that (a) it has been oversold (e.g., that Andrew Ng quote, or the whole framing of DeepMind’s 2017 Nature paper), often with vastly greater attention to strengths than potential limitations, and (b) exuberance for deep learning is often (though not universally) accompanied by a hostility to symbol-manipulation that I believe is a foundational mistake in the search for an ultimate solution to AI.

I think it is far more likely that the two, deep learning and symbol-manipulation, will co-exist, with deep learning handling many aspects of perceptual classification, but symbol-manipulation playing a vital role in reasoning about abstract knowledge. Advances in narrow AI with deep learning are often taken to mean that we don’t need symbol-manipulation anymore, and I think that is a huge mistake.

So what is symbol-manipulation, and why do I steadfastly cling to it? The idea goes back to the earliest days of computer science (and even earlier, to the development of formal logic): symbols can stand for ideas, and if you manipulate those symbols, you can make correct inferences about the ideas they stand for. If you know that P implies Q, you can infer from not Q that not P. If I tell you that plonk implies queegle but queegle is not true, then you can infer that plonk is not true.
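That inference pattern (modus tollens) can be sketched in a few lines of code; the function and data names here are purely illustrative, not drawn from any particular system. The point is that the inference is valid for any symbols whatsoever, including nonsense words like plonk and queegle:

```python
# Modus tollens: from "P implies Q" and "not Q", infer "not P".
# The symbols are arbitrary labels; the inference holds for all of them.

def modus_tollens(p, q, implications, known_false):
    """If (p -> q) is an accepted rule and q is known to be false,
    conclude that p is false; otherwise conclude nothing."""
    if (p, q) in implications and q in known_false:
        return f"not {p}"
    return None

implications = {("plonk", "queegle")}   # plonk implies queegle
known_false = {"queegle"}               # queegle is not true

print(modus_tollens("plonk", "queegle", implications, known_false))
# -> not plonk
```

Nothing in the rule refers to what plonk or queegle mean; that indifference to content is exactly what makes the inference free to generalize.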

In my 2001 book The Algebraic Mind, I argued, in the tradition of Newell and Simon, and my mentor Steven Pinker, that the human mind incorporates (among other tools) a set of mechanisms for representing structured sets of symbols, in something like the fashion of a hierarchical tree. Even more critically, I argued that a vital component of cognition is the ability to learn abstract relationships that are expressed over variables — analogous to what we do in algebra, when we learn an equation like x = y + 2, and then solve for x given some value of y. The process of attaching y to a specific value (say 5) is called binding; the process that combines that value with the other elements is what I would call an operation. The central claim of the book was that symbolic processes like that — representing abstractions, instantiating variables with instances, and applying operations to those variables — were indispensable to the human mind. I showed in detail that advocates of neural networks often ignored this, at their peril.
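The binding/operation distinction above can be made concrete in a small sketch (the names are mine, for illustration only): an operation is defined once over a variable, and binding is simply supplying a value for that variable.

```python
# The algebra example made literal: the rule x = y + 2 is an
# operation over the variable y; binding attaches y to a value.

def make_rule(operation):
    """Return a function that applies an abstract operation to
    whatever value gets bound to its variable."""
    def rule(binding):
        return operation(binding)
    return rule

x_rule = make_rule(lambda y: y + 2)   # the operation: add 2 to y

print(x_rule(5))    # bind y to 5   -> 7
print(x_rule(100))  # bind y to 100 -> 102; the same rule, any binding
```

Because the operation is stated over the variable rather than over particular values, it applies to bindings it has never encountered, which is the crux of the book's argument.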

The form of the argument was to show that neural network models fell into two classes, those (“implementational connectionism”) that had mechanisms that formally mapped onto the symbolic machinery of operations over variables, and those (“eliminative connectionism”) that lacked such mechanisms. The ones that succeeded in capturing various facts (primarily about human language) were ones that mapped on; those that didn’t failed. I also pointed out that rules allowed for what I called free generalization of universals, whereas multilayer perceptrons required large samples in order to approximate universal relationships, an issue that crops up in Bengio’s recent work on language.

Nobody yet knows how the brain implements things like variables or the binding of variables to the values of their instances, but strong evidence (reviewed in the book) suggests that brains can. (Pretty much everyone agrees that at least some humans can do this when they do mathematics and formal logic; most linguists would agree that we do it in understanding language; the real question is not whether human brains can do symbol-manipulation at all, it is how broad the scope of the processes that use it is.)

The secondary goal of the book was to show that it was possible in principle to build the primitives of symbol-manipulation using neurons as elements. I examined some old ideas, like dynamic binding via temporal oscillation, and personally championed a slots-and-fillers approach that involved having banks of node-like units with codes, something like the ASCII code. Memory networks and differentiable programming have been doing something a little like that, with more modern (embedding) codes, but following a similar principle, the latter embracing an ever-widening array of basic micro-processor operations such as copy and compare of the sort I was lobbying for. I am cautiously optimistic that this approach might work better for things like reasoning and (once we have a solid enough machine-interpretable database of probabilistic but abstract common sense) language.

Whatever one thinks about the brain, virtually all of the world’s software is built on symbols. Every line of computer code, for example, is really a description of some set of operations over variables: if X is greater than Y, do P, otherwise do Q; concatenate A and B together to form something new; and so forth. Neural networks can (depending on their structure, and whether anything maps precisely onto operations over variables) offer a genuinely different paradigm, and are obviously useful for tasks like speech recognition (which nobody would do with a set of rules anymore, with good reason), but nobody would build a browser by supervised learning on sets of inputs (logs of user keystrokes) and outputs (images on screens, or packets downloading). My understanding from LeCun is that a lot of Facebook’s AI is done by neural networks, but it’s certainly not the case that the entire framework of Facebook runs without recourse to symbol-manipulation.
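To make the paragraph's point literal, here are those two sample "lines of code" written out; everything in them is an operation applied to whatever values happen to be bound to the variables (the function names are illustrative only):

```python
# Ordinary code as operations over variables: the operations are
# defined once, and work for any values bound to X, Y, A, B.

def decide(x, y, p, q):
    """If X is greater than Y, do P; otherwise do Q."""
    return p() if x > y else q()

def concat(a, b):
    """Concatenate A and B together to form something new."""
    return a + b

print(decide(3, 2, lambda: "did P", lambda: "did Q"))   # -> did P
print(concat("sym", "bols"))                            # -> symbols
```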

And although symbols may not have a home in speech recognition anymore, and clearly can’t do the full stack of cognition and perception on their own, there are lots of places where you might expect them to be helpful, albeit in problems that nobody, either in the symbol-manipulation-based world of classical AI or in the deep learning world, has the answers for yet — problems like abstract reasoning and language, which are, after all, the domains for which the tools of formal logic and symbolic reasoning were invented. To anyone who has seriously engaged in trying to understand, say, commonsense reasoning, this seems obvious.

Yes, partly for historical reasons that date back to the earliest days of AI, the founders of deep learning have often been deeply hostile to including such machinery in their models; Hinton, for example, gave a talk at Stanford in 2015 called Aetherial Symbols, in which he tried to argue that the idea of reasoning with formal symbols was “as incorrect as the belief that a lightwave can only travel through space by causing disturbances in the luminiferous aether.”

Hinton didn’t really give an argument for that, so far as I can tell (I was sitting in the room). Instead, he seemed (to me) to be making a suggestion for how to map hierarchical sets of symbols onto vectors. That wouldn’t render symbols “aetherial”; it would make them very real causal elements with a very specific implementation, a refutation of what Hinton seemed to advocate. (Hinton declined to clarify when I asked.) From a scientific perspective (as opposed to a political perspective), the question is not what we call our ultimate AI system, it’s how it works. Does it include primitives that serve as implementations of the apparatus of symbol-manipulation (as modern computers do), or does it work on entirely different principles? My best guess is that the answer will be both: some but not all parts of any system for general intelligence will map perfectly onto the primitives of symbol-manipulation; others will not.

That’s actually a pretty moderate view, giving credit to both sides. Where we are now, though, is that the large preponderance of the machine learning field doesn’t want to explicitly include symbolic expressions (like “dogs have noses that they use to sniff things”) or operations over variables (e.g., algorithms that would test whether observations P, Q, and R and their entailments are logically consistent) in their models.
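The parenthetical above about testing whether observations and their entailments are logically consistent can be cashed out concretely. Here is a minimal sketch of such an algorithm, a brute-force satisfiability check over truth assignments; it is a hypothetical illustration of the kind of operation meant, not any particular system's machinery:

```python
from itertools import product

# Check whether a set of propositional constraints over named
# variables is jointly satisfiable, by enumerating truth assignments.

def consistent(variables, constraints):
    """Return True if some truth assignment satisfies every constraint."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(c(env) for c in constraints):
            return True
    return False

# P -> Q, Q -> R, and P: consistent (P=Q=R=True works).
ok = consistent(["P", "Q", "R"], [
    lambda e: (not e["P"]) or e["Q"],   # P implies Q
    lambda e: (not e["Q"]) or e["R"],   # Q implies R
    lambda e: e["P"],                   # P is observed
])
# Adding "not R" makes the set inconsistent: no assignment works.
bad = consistent(["P", "Q", "R"], [
    lambda e: (not e["P"]) or e["Q"],
    lambda e: (not e["Q"]) or e["R"],
    lambda e: e["P"],
    lambda e: not e["R"],
])
print(ok, bad)   # -> True False
```

Brute force is exponential in the number of variables, of course; the point is only that the operation itself is crisply definable over symbols, in a way that has no obvious analogue in a vector space.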

Far more researchers are comfortable with vectors, and every day make advances in using those vectors; for most researchers, symbolic expressions and operations aren’t part of the toolkit. But the advances they make with such tools are, at some level, predictable (training times to learn sets of labels for perceptual inputs keep getting better, accuracy on classification tasks improves). No less predictable are the places where there are fewer advances: in domains like reasoning and language comprehension — precisely the domains that Bengio and I are trying to call attention to — deep learning on its own has not gotten the job done, even after billions of dollars of investment.

Those domains seem, intuitively, to revolve around putting together complex thoughts, and the tools of classical AI would seem perfectly suited to such things. Why continue to exclude them? Symbols in principle also offer a way of incorporating all the world’s textual knowledge, from Wikipedia to textbooks; deep learning has no obvious way of incorporating basic facts like “dogs have noses”, nor of accumulating that knowledge into more complex inferences. If our dream is to build machines that learn by reading Wikipedia, we ought to consider starting with a substrate that is compatible with the knowledge contained therein.
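What "accumulating facts into more complex inferences" looks like symbolically can be shown with a toy forward-chaining sketch. The facts, relation names, and rule here are hypothetical and chosen only to illustrate the mechanism:

```python
# A minimal forward-chaining sketch: symbolic facts like
# "dogs have noses" compose with "Fido is a dog" to yield new facts.

facts = {("dog", "has", "nose"), ("Fido", "is_a", "dog")}

def forward_chain(facts):
    """Apply one rule to a fixpoint: if X is_a C and C has P, then X has P."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (x, r1, c) in derived:
            for (c2, r2, p) in derived:
                if r1 == "is_a" and r2 == "has" and c == c2:
                    new.add((x, "has", p))
        if not new <= derived:
            derived |= new
            changed = True
    return derived

print(("Fido", "has", "nose") in forward_chain(facts))   # -> True
```

The inference about Fido was never stated anywhere; it falls out of combining two stored facts under a rule with variables, which is precisely the kind of accumulation the paragraph describes.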

The most important question that I personally raised in the Twitter discussion about deep learning is ultimately this:

Symbols won’t cut it on their own, and deep learning won’t either. The time to bring them together, in the service of novel hybrids, is long overdue.

Just after I finished the first draft of this essay, Max Little brought my attention to a thought-provoking new paper by Michael Alcorn, Anh Nguyen and others that highlights the risks inherent in relying too heavily on deep learning and big data by themselves. In particular, they showed that standard deep learning nets often fall apart when confronted with common stimuli rotated in three dimensional space into unusual positions, like the top right corner of this figure, in which a schoolbus is mistaken for a snowplow:

In a healthy field, everything would stop when a systematic class of errors this surprising and illuminating was discovered. Souls would be searched; hands would be wrung. Mistaking an overturned schoolbus for a snowplow is not just a mistake, it’s a revealing mistake: it shows not only that deep learning systems can get confused, but that they are challenged in making a fundamental distinction known to all philosophers: the distinction between features that are merely contingent associations (snow is often present when there are snowplows, but not necessarily) and features that are inherent properties of the category itself (snowplows ought, other things being equal, to have plows, unless e.g. they have been dismantled). We’d already seen similar examples with contrived stimuli, like Anish Athalye’s carefully designed, 3-D printed, foam-covered baseball that was mistaken for an espresso.

Alcorn’s results — some from real photos from the natural world — should have pushed worry about this sort of anomaly to the top of the stack.

The initial response, though, wasn’t hand-wringing; it was more dismissiveness, such as a tweet from LeCun that dubiously likened the noncanonical pose stimuli to Picasso paintings. The reader can judge for him or herself, but the images in the right-hand column, it should be noted, are all natural images, neither painted nor rendered; they are not products of imagination, they are reflections of a genuine limitation that must be faced.

In my judgment, deep learning has reached a moment of reckoning; when some of its most prominent leaders stand in denial, there is a problem.

Which brings me back to the paper and Alcorn’s conclusions, which actually seem exactly right, and which the whole field should take note of: “state-of-the-art DNNs perform image classification well but are still far from true object recognition”. As they put it, “DNNs’ understanding of objects like ‘school bus’ and ‘fire truck’ is quite naive” — very much parallel to what I said about neural network models of language twenty years earlier, when I suggested that the concepts acquired by Simple Recurrent Networks were too superficial.

The technical issue driving Alcorn et al.’s new results?

As Alcorn et al put it, emphasis added,

Funny they should mention that. The chief motivation I gave for symbol-manipulation, back in 1998, was that back-propagation (then used in models with fewer layers, hence precursors to deep learning) had trouble generalizing outside the space of training examples.

That problem hasn’t gone away.
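The contrast at issue can be caricatured in a few lines. This is a deliberately schematic sketch, not a replication of the 1998 experiments, and a real network is far more sophisticated than a nearest-neighbor lookup; but it isolates the structural point, that a learner anchored to its training examples stays inside the space it has seen, while an explicit universally quantified rule does not:

```python
# Schematic contrast on the identity function f(x) = x:
# a training-example-bound "learner" vs. an explicit universal rule.

train = {0: 0, 2: 2, 4: 4, 6: 6}        # training pairs (even numbers only)

def memorizer(x):
    """Predict from the nearest training example; never leaves
    the space of outputs it has already seen."""
    nearest = min(train, key=lambda t: abs(t - x))
    return train[nearest]

def rule(x):
    """The universal: for all x, f(x) = x."""
    return x

print(memorizer(7), rule(7))   # -> 6 7; the rule generalizes, the lookup doesn't
```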

And object recognition was supposed to be deep learning’s forte; if deep learning can’t recognize objects in noncanonical poses, why should we expect it to do complex everyday reasoning, a task for which it has never shown any facility whatsoever?

In fact, it’s worth reconsidering my 1998 conclusions at some length. At that time I concluded in part that (excerpting from the concluding summary argument):

Richard Evans and Edward Grefenstette’s recent paper at DeepMind, building on Joel Grus’s blog post on the game Fizz-Buzz, follows remarkably similar lines, concluding that a canonical multilayer network was unable to solve the simple game on its own “because it did not capture the general, universally quantified rules needed to understand this task” — exactly as I said in 1998.
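For the record, here is what those universally quantified rules look like when written symbolically; a handful of lines with explicit variables and operations dispatches the game that, per Grus's post, a plain multilayer network struggled to learn:

```python
# Fizz-Buzz as explicit universally quantified rules:
# for all n divisible by 15, say "FizzBuzz"; by 3, "Fizz"; by 5, "Buzz".

def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

print([fizzbuzz(n) for n in range(1, 16)])
# -> ['1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz',
#     'Buzz', '11', 'Fizz', '13', '14', 'FizzBuzz']
```

Each rule quantifies over every integer at once, which is why it generalizes to numbers far beyond any training range without a single example.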

Their solution? A hybrid model that vastly outperformed what a purely deep net would have done, incorporating both back-propagation and continuous versions of the primitives of symbol-manipulation, including both explicit variables and operations over variables. That’s really telling. And it’s where we should all be looking: gradient descent plus symbols, not gradient descent alone. If we want to stop confusing snowplows with school buses, we may ultimately need to look in the same direction, because the underlying problem is the same: in virtually every facet of the mind, even vision, we occasionally face stimuli that are outside the domain of training; deep learning gets wobbly when that happens, and we need other tools to help.

All I am saying is to give Ps (and Qs) a chance.
