The physicist Richard Feynman once said: learn principles, not formulas. The full version of that quote goes like this: "Understand. Don't memorize. Learn principles, not formulas." Feynman meant that nobody can memorize every single formula and parameter in the world. Knowing principles helps to make connections between formulas across the fields where those formulas are needed. The basic point is that there are many formulas.
Each of them is meant for a certain purpose. A formula made for physics will probably not fit mathematics. The difference between the calculations is minimal, but a physics formula requires different variables than a purely mathematical formula. Otherwise, a physics formula requires only regular mathematics, as long as the user of the formula gets the right variables.
So the formula E = mc² means:
E = energy
m = mass
c = the speed of light (so c² is the speed of light squared).
That means we can evaluate that formula very easily using simple mathematics. But for that, we need variables like the speed of light.
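For example, once we know the variables, the calculation itself is simple arithmetic. Here is a minimal Python sketch; the one-kilogram mass is just an assumption for the example.

    # A minimal sketch: evaluating E = mc^2 with plain arithmetic.
    C = 299_792_458  # speed of light in metres per second

    def rest_energy(mass_kg: float) -> float:
        """Return the rest energy in joules for a mass in kilograms."""
        return mass_kg * C ** 2

    print(rest_energy(1.0))  # about 8.99e16 joules for one kilogram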
Not every formula fits every field. But all of them can be handled using simple mathematical principles. Or actually, those formulas are nothing to "solve": they are tools meant for calculating things like energy levels.
The same formula that is meant for calculating energy levels does not fit for calculating pressure at normal temperature. That calculation requires its own formula.
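The text does not name the pressure formula, so as a hedged illustration only, here is the ideal gas law p = nRT/V: a different formula with its own variables.

    # Illustration only: a pressure calculation uses its own formula and
    # its own variables. The ideal gas law is an assumption here; the
    # original text does not name a specific pressure formula.
    R = 8.314  # universal gas constant, J/(mol*K)

    def ideal_gas_pressure(n_mol: float, temp_k: float, volume_m3: float) -> float:
        """Return pressure in pascals from amount, temperature, and volume."""
        return n_mol * R * temp_k / volume_m3

    print(ideal_gas_pressure(1.0, 293.15, 0.024))  # roughly atmospheric pressure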
That idea is the key to deep learning and fuzzy logic. The learner simply learns principles, and then applies those principles to every formula that the learner sees.
"Recent research demonstrates that brain-inspired shallow feedforward networks can efficiently learn non-trivial classification tasks with reduced computational complexity, compared to deep learning architectures. It showed that shallow architectures can achieve the same classification success rates as deep learning architectures, but with less complexity." (SctechDaily.com/Debunking the Myth: Is Deep Learning Necessary for Artificial Intelligence?)
"Efficient learning on shallow architectures is connected to efficient dendritic tree learning, which incorporates findings from earlier experimental research on sub-dendritic adaptation and anisotropic properties of neurons. This discovery suggests the potential for the development of unique hardware for fast and efficient shallow learning, while reducing energy consumption. (Representation of a deep learning neural network tree.)". (SctechDaily.com/Debunking the Myth: Is Deep Learning Necessary for Artificial Intelligence?)
A neural network can be physical, or it can be virtual. We can say that a structure that connects things like databases is a neural network, just like a structure that connects sensors to databases, or human brain neurons into one entirety.
That is one way to approach learning. The learner does not try to remember the exact formula. The learner simply studies the guiding principle that points to the right pages of the formula book, where the right formula can be found.
The idea of that kind of model is this: when the learner knows just the base principles, the learner can connect those principles to many different situations.
In that case, something like a physics lecture is the cue that guides the learner to think: "maybe the needed thing is a physics formula". The reason is that the lecture is about physics. This is an example of fuzzy logic, but I don't know how good an example it is of deep learning.
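As a sketch of that fuzzy cue-to-tool reasoning, here is a toy membership table in Python. All the membership degrees are invented for illustration.

    # Fuzzy-logic sketch: the cue "physics lecture" gives physics formulas
    # a high membership degree as the likely tool. Values are invented.
    memberships = {
        "physics formulas": 0.9,
        "pure math formulas": 0.4,
        "chemistry formulas": 0.2,
    }

    def best_match(degrees: dict) -> str:
        """Pick the formula family with the highest membership degree."""
        return max(degrees, key=degrees.get)

    print(best_match(memberships))  # -> "physics formulas"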
Then we must realize that deeply learned things give the same answers as formulas that the actor just reads from paper. If we think about something like a robot that moves on the streets, that machine might seem to think very deeply. Its information-handling protocol seems very impressive. But the system might not realize what it does. When it sees something, it just connects that thing to a certain database, where there are certain actions that the robot can use in those situations.
When the AI connects something that it gets from sensors with a database, we must realize that the AI might not know what the things in the databases mean. This is the key point about AI.
What we call learning in the case of AI is just making a new record in the system. When we make the databases, there might be a mechanism that keeps records of how often the system uses certain databases. In that model, the system raises the rank of a database depending on how often it uses that database. The system operates like page rank on the internet.
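A minimal sketch of that usage-based ranking (the database names and the plain counter are my assumptions; real page rank weighs link structure rather than raw usage counts):

    from collections import Counter

    # Each lookup raises the rank of the database that answered it.
    usage = Counter()

    def record_lookup(database: str) -> None:
        """Count one use of a database; the count acts as its rank."""
        usage[database] += 1

    for db in ["traffic_rules", "obstacles", "traffic_rules", "signs", "traffic_rules"]:
        record_lookup(db)

    print(usage.most_common())  # most-used, highest-ranked databases first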
But if we want the system to give some kind of response, the computer just reacts to things like incoming stones, or another car that comes from the right. The system must know the traffic regulations, and then, if the other car is too close, push the brakes. That kind of thing doesn't require deep learning. The system doesn't need to know why it must do something; it just needs to know the reaction for the case. When an AI-controlled robot sees some calculation and somebody asks for a solution, the system doesn't need to know what it does. It just needs to react the right way.
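That stimulus-to-reaction mapping can be as simple as a lookup table. Here is a minimal sketch; the events, the two-metre threshold, and the action names are all invented for illustration.

    # The system maps each detected event straight to a stored action,
    # without "understanding" it. Events and threshold are invented.
    REACTIONS = {
        "incoming_stone": "swerve",
        "car_from_right_too_close": "brake",
        "clear_road": "continue",
    }

    def react(event: str) -> str:
        """Return the pre-stored reaction; no reasoning involved."""
        return REACTIONS.get(event, "stop")  # safe default for unknown events

    def classify_right_side(distance_m: float) -> str:
        """Turn a raw sensor reading into one of the stored event labels."""
        return "car_from_right_too_close" if distance_m < 2.0 else "clear_road"

    print(react(classify_right_side(1.4)))  # -> "brake"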
https://scitechdaily.com/debunking-the-myth-is-deep-learning-necessary-for-artificial-intelligence/