
WRAP YOUR MIND around NEURAL NETWORKS

Artificial intelligence is playing an ever-increasing role in the lives of civilized nations, though most citizens probably don't realize it. It's now commonplace to speak with a computer when calling a business. Facebook is becoming frighteningly accurate at recognizing faces in uploaded photos. Physical interaction with smart phones is becoming a thing of the past… with Apple's Siri and Google Speech, it's slowly but surely becoming easier to just talk to your phone and tell it what to do than to type or touch an icon. Try this if you haven't before: if you have an Android phone, say "OK Google", followed by "Lumos". It's magic!

Advertisements for products we're interested in appear on our social media accounts as if something is reading our minds. Truth is, something is reading our minds… though it's hard to pin down exactly what that something is. An advertisement might pop up for something that we want, even though we never realized we wanted it until we saw it. This is not coincidental, but stems from an AI algorithm.

At the heart of many of these AI applications lies a process known as Deep Learning. There has been a lot of talk about Deep Learning lately, not only here on Hackaday, but all over the interwebs. And like most things related to AI, it can be a bit complicated and difficult to understand without a strong background in computer science.

If you're familiar with my quantum theory articles, you'll know that I like to take complicated subjects, strip away the complication as best I can, and explain them in a way that anyone can understand. It is the goal of this article to apply a similar approach to this idea of Deep Learning. If neural networks make you cross-eyed and machine learning gives you nightmares, read on. You'll see that "Deep Learning" sounds like a daunting subject, but is really just a $20 term used to describe something whose underpinnings are relatively simple.

Machine Learning

When we program a machine to perform a task, we write the instructions and the machine performs them. For example, LED on… LED off… there is no need for the machine to know the expected outcome after it has performed the instructions. There is no reason for the machine to know whether the LED is on or off. It just does what you told it to do. With machine learning, this process is flipped. We tell the machine the outcome we want, and the machine 'learns' the instructions to get there. There are several ways to do this, but let's focus on an easy example:

Early neural network from MIT
If I were to ask you to make a little robot that can guide itself to a target, a simple way to do this would be to put the robot and the target on an XY Cartesian plane, and then program the robot to go so many units on the X axis, and then so many units on the Y axis. This straightforward technique has the robot simply carrying out instructions, without actually knowing where the target is. It works only when you know the coordinates for the starting point and the target. If either changes, this approach would not work.

Machine learning allows us to deal with changing coordinates. We tell our robot to find the target, and let it figure out, or learn, its own instructions to get there. One way to do this is to have the robot find the distance to the target, and then move in a random direction. Recalculate the distance, move back to where it started and record the distance measurement. Repeating this process will give us several distance measurements after moving from a fixed coordinate. After X number of measurements are taken, the robot will move in the direction where the distance to the target is shortest, and repeat the sequence. This will eventually allow it to reach the target. In short, the robot is simply using trial-and-error to 'learn' how to get to the target. See, this stuff isn't so hard after all!
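The trial-and-error search described above can be sketched in a few lines of Python. This is a minimal illustration, not anything from the original article; the function names, step size, and trial counts are all arbitrary assumptions:

```python
import math
import random

def distance(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def seek_target(robot, target, step=1.0, trials=8, tolerance=1.0, max_steps=500):
    """Trial-and-error search: sample random moves, commit to whichever
    ends up closest to the target, and repeat until close enough."""
    path = [robot]
    for _ in range(max_steps):
        if distance(robot, target) <= tolerance:
            break  # close enough -- the robot has 'learned' its way there
        best_move, best_dist = robot, distance(robot, target)
        for _ in range(trials):
            # try a random direction, measure, then 'move back' by discarding it
            angle = random.uniform(0, 2 * math.pi)
            candidate = (robot[0] + step * math.cos(angle),
                         robot[1] + step * math.sin(angle))
            if distance(candidate, target) < best_dist:
                best_move, best_dist = candidate, distance(candidate, target)
        robot = best_move  # move in the shortest-distance direction found
        path.append(robot)
    return path
```

Each iteration mirrors the description in the text: measure, try random moves, keep the one that shortens the distance, repeat.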

This "learning by trial-and-error" idea can be represented abstractly in something that we've all heard of: a neural network.

Neural Networks For Dummies

Neural networks get their name from the mass of neurons in your noggin. While the overall network is absurdly complex, the operation of a single neuron is simple. It's a cell with several inputs and a single output, with chemical-electrical signals providing the IO. The state of the output is determined by the number of active inputs and the strength of those inputs. If there are enough active inputs, a threshold will be crossed and the output will become active. Each output of a neuron acts as the input to another neuron, creating the network.

Perceptron diagram via How to Train a Neural Network in Python by Prateek Joshi
Recreating a neuron (and therefore a neural network) in silicon should be just as simple. You have several inputs feeding into a summation. Add the inputs up, and if they exceed a specific threshold, output a one. Otherwise, output a zero. Bingo! While this lets us crudely mimic a neuron, it is unfortunately not very useful. In order to make our little silicon neuron worth storing in flash memory, we need to make the inputs and outputs less binary… we need to give them strengths, or the more common term: weights.
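Here is a minimal sketch of that silicon neuron, first as the plain all-or-nothing version and then with weights added. The threshold values are arbitrary assumptions for illustration:

```python
def binary_neuron(inputs, threshold=2):
    """All-or-nothing neuron: sum the binary inputs and fire (output 1)
    only if the sum crosses the threshold."""
    return 1 if sum(inputs) >= threshold else 0

def weighted_neuron(inputs, weights, threshold=1.0):
    """Same idea, but each input carries a weight, so some inputs
    matter more than others."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0
```

For example, `binary_neuron([1, 1, 0])` fires because two inputs are active, while `binary_neuron([1, 0, 0])` does not.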

In the late 1950s, a man by the name of Frank Rosenblatt invented this thing called a Perceptron. The perceptron is just like the little silicon neuron we described in the previous paragraph, with a few exceptions. The most important of these is that the inputs have weights. With the introduction of weights and a little feedback, we gain a most fascinating ability… the ability to learn.

Source via KDnuggets
Think back to our little robot that learned how to get to a target. We gave the robot the outcome, and it wrote its own instructions to achieve that outcome through a trial-and-error process of random movements and distance measurements in an XY coordinate system. The idea of a perceptron is an abstraction of this process. The output of the artificial neuron is our outcome. We want the neuron to give us an expected outcome for a specific set of inputs. We achieve this by having the neuron adjust the weights of the inputs until it produces the outcome we want.

Adjusting the weights is done by a process called back propagation, which is a type of feedback. You have a set of inputs, a set of weights, and an outcome. We calculate how far the outcome is from where we want it, and then use the difference (known as the error) to adjust the weights, using a mathematical concept known as gradient descent. This 'weight adjusting' process is often called training, but is nothing more than a trial-and-error process, just like with our little robot.
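A sketch of that training loop in Python, using the classic single-neuron perceptron learning rule as a simple stand-in for full back propagation (the function names, learning rate, and the AND-gate training data are illustrative assumptions, not anything from the article):

```python
def predict(weights, bias, inputs):
    """Weighted sum plus bias, pushed through a threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias >= 0 else 0

def train_perceptron(samples, epochs=20, rate=0.1):
    """Learn weights and a bias by repeatedly nudging them against the error
    (desired outcome minus actual outcome) -- trial and error, formalized."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, desired in samples:
            error = desired - predict(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# An AND gate's truth table as training data: outcome is 1 only when both inputs are 1
and_gate = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
```

After training on `and_gate`, the learned weights reproduce the AND function: the neuron was never given instructions, only outcomes and corrections.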

Deep Learning

These days, Deep Learning seems to have more definitions than IoT does. But the simplest, most straightforward one I can find is this: a neural network with one or more layers between the inputs and outputs, used to solve complex problems. Basically, Deep Learning is just a complex neural network used to do stuff that's really hard for traditional computers to do.

Deep Learning diagram via Kun Chen's guide to Deep Learning
The layers between the inputs and outputs are called hidden layers, and they dramatically increase the complexity of the neural network. Each layer has a specific purpose, and the layers are arranged in a hierarchy. For instance, if we had a Deep Learning neural network trained to identify a cat in an image, the first layer might look for specific line segments and arcs. Other layers higher in the hierarchy will look at the output of the first layer and try to identify more complex shapes, like circles or triangles. Even higher layers will look for objects, like eyes or whiskers. For a more detailed explanation of layered classification techniques, be sure to check out my article on invariant representations.
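To see why hidden layers matter, here is a toy two-layer network with hand-picked weights that computes XOR, something a single perceptron cannot do. The weights and helper names are assumptions for illustration only:

```python
def step(x):
    """Threshold activation: fire if the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

def layer(inputs, weights, biases):
    """One layer: every neuron takes a weighted sum of all inputs,
    adds its bias, and thresholds the result."""
    return [step(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(a, b):
    """Two-layer network computing XOR via a hidden layer."""
    # hidden[0] fires for OR(a, b); hidden[1] fires for NAND(a, b)
    hidden = layer([a, b], weights=[[1, 1], [-1, -1]], biases=[-1, 1])
    # the output neuron is an AND of the two hidden neurons: OR AND NAND = XOR
    (out,) = layer(hidden, weights=[[1, 1]], biases=[-2])
    return out
```

The hidden layer turns the raw inputs into intermediate features (OR and NAND), which the output layer then combines, a miniature version of the line-segments-to-whiskers hierarchy described above.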

Because it is trained via a trial-and-error process, the actual outputs of the hidden layers are not known. Two similar Deep Learning neural networks trained with the exact same image will produce different outputs from their hidden layers. This brings up some uncomfortable issues, as MIT is finding out.

Now when you hear someone talk about machine learning, neural networks, and Deep Learning, you should have at least a vague idea of what each is and, more importantly, how it works. Neural networks appear to be the next big thing, although they have been around for a long time now. Check out [Steven Dufresne]'s article on what has changed over the years, and jump into his tutorial on using TensorFlow to try your hand at machine learning.
