
/sci/ - Science & Math



File: 191 KB, 1080x924, 1571597559970.jpg
No.11077421

>The next evolution in artificial intelligence may be a matter of dispensing with all the probabilistic tricks of deep learning, mastering instead a manifold of shifting energy values, according to deep learning's pied piper, Yann LeCun

Is the AI community going full retard?

>> No.11077452

>>11077421
What did LeCun even do?
Deep learning was just the next logical step after the failure of basic perceptrons and the increase in both computational power and available data.
A third AI winter can't come soon enough to shut their stupid mouths.
The next advance in AI will come from people actually trying to understand and classify parts of the human brain, and from merging the connectionist and symbolic approaches.

>> No.11077463

>>11077452
>The next advance in AI will come from people actually trying to understand and classify parts of the human brain, and from merging the connectionist and symbolic approaches
That's where neural nets came from in the first place, and people still pretty much have no fucking clue what's going on in even basic applications of them, much less the human brain.
Neuroscience will never be worth anything because the brain is end-to-end hidden layers.

>> No.11077508

>>11077463
It might seem hard, even impossible, but I'm convinced it's the only way out, if we are to emulate the human mind.
No one said that intelligence could be reduced to a few mathematical concepts.
Gradient descent can only do so much.
We actually have to get our hands dirty now that all the low-hanging fruit has been picked.

>> No.11077558

>>11077421
He's talking about EBMs (energy-based models), research he did over a decade ago that's been getting more attention recently.

http://yann.lecun.com/exdb/publis/pdf/lecun-06.pdf

https://openai.com/blog/energy-based-models/
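
The basic idea, as a rough sketch (my own toy example, not from the linked papers): an EBM assigns a scalar energy to each configuration, and you draw samples by noisy gradient descent on that energy (Langevin dynamics). Here's a minimal numpy version with a made-up double-well energy:

```python
import numpy as np

def energy(x):
    # toy double-well energy: low near x = -1 and x = +1 (my own choice,
    # just to make the sampler's behavior easy to see)
    return (x**2 - 1.0)**2

def grad_energy(x, eps=1e-5):
    # numeric derivative dE/dx, so the sketch stays dependency-free
    return (energy(x + eps) - energy(x - eps)) / (2 * eps)

def langevin_sample(steps=2000, step_size=0.01, seed=0):
    # Langevin dynamics: x <- x - step * dE/dx + Gaussian noise.
    # Low-energy regions (the two wells) get visited most often.
    rng = np.random.default_rng(seed)
    x = rng.normal()  # start from noise
    xs = np.empty(steps)
    for t in range(steps):
        x = x - step_size * grad_energy(x) + rng.normal() * np.sqrt(2 * step_size)
        xs[t] = x
    return xs

xs = langevin_sample()
print(np.abs(xs[1000:]).mean())  # chain spends most time near the wells at |x| = 1
```

Real EBM training (contrastive divergence etc.) is more involved, but the sampling loop really is about this simple.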

>> No.11077577

>>11077463
You ever seen a fly get stuck in an infinite loop until it exhausts all its resources trying to exit through a closed window? That's millions of years of evolution, which if 'product-itized' in its current form would have the Youtubosphere pissed off that their gay new drone keeps running out of batteries trying to find a way through glass.

>> No.11077585

>>11077577
attach gun to drone
have it shoot at air if it can't get out

>> No.11077690

>>11077558
If this is about using energy as the objective function instead of quadratic loss, I've spoken to CERN researchers who tested both and came to the conclusion that using energy is inferior in many ways.

>> No.11077794

>>11077690
In what sense is it inferior? Don't really know anything about EBMs, but their concept seems cool.

>> No.11077836

>>11077794
Worse prediction quality and weaker generalization. They said they reached higher scores with the standard approach, although apparently the margin wasn't huge.

I also guess implementation is a bitch, since you have to differentiate with respect to your inputs.
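
To make the input-gradient point concrete (toy example of my own, not from any EBM paper): for E(x) = ½ xᵀAx with symmetric A, the gradient w.r.t. the *input* is A·x. Frameworks get this via autodiff; a finite-difference check looks like:

```python
import numpy as np

# Symmetric positive definite matrix, chosen arbitrarily for the demo.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def energy(x):
    # quadratic energy E(x) = 0.5 * x^T A x
    return 0.5 * x @ A @ x

def grad_energy_fd(x, eps=1e-6):
    # finite-difference gradient of the energy w.r.t. the input x
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (energy(x + d) - energy(x - d)) / (2 * eps)
    return g

x = np.array([1.0, -2.0])
print(grad_energy_fd(x))  # ≈ A @ x = [1.0, -1.5]
```

The "bitch" part is that you need this gradient at every sampling step, on top of the usual parameter gradients for training.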

>> No.11077867

>>11077836
Now that I've read part of the OpenAI paper, it seems they still aren't as powerful as GANs for image processing, but they're almost getting there. Well, we'll see, who knows.