Intelligent voice-controlled assistants support us in navigation, and image recognition systems interpret X-rays: AI-based applications that were considered science fiction only a few years ago are making their way into our everyday lives – and are getting better and better. The current technological drivers in the field of artificial intelligence (AI) are machine learning and deep learning. Combined with the availability of mass data and advances in fast, parallel computing, they were responsible for the spectacular AI breakthroughs of recent years.
In relation to the overarching concept of artificial intelligence, the two approaches can be defined as follows:
Artificial intelligence (AI) defines challenges that need to be solved and develops solutions for them. According to John McCarthy, who introduced the term artificial intelligence in 1956, AI is the science and engineering of building intelligent machines, especially intelligent computer programs. This is closely related to the task of using computers to understand human intelligence; however, AI does not have to be limited to biologically observable methods.
Machine learning is a fundamental sub-field of artificial intelligence. It aims to develop machines that automatically deliver meaningful results without a concrete solution being explicitly programmed. Special algorithms learn models from available sample data, which can then also be applied to new, previously unseen data. Machine learning with large neural networks is called Deep Learning: a large number of artificial neurons process the input information in several (latent) layers and deliver the result at the output.
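To make the machine learning idea concrete, here is a minimal sketch in Python with scikit-learn (the library, the toy data, and the model are illustrative choices, not part of any particular application): a model is fitted to labeled sample data and then applied to new, previously unseen inputs.

```python
from sklearn.linear_model import LogisticRegression

# Toy sample data (made up): hours of study -> exam passed (1) or not (0)
X_train = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
y_train = [0, 0, 0, 1, 1, 1]

# "Learning": the algorithm fits a model to the samples instead of
# a programmer hand-coding the decision rule.
model = LogisticRegression()
model.fit(X_train, y_train)

# The learned model generalizes to new, previously unseen data.
print(model.predict([[2.5], [4.5]]))  # e.g. [0 1]
```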
Deep Learning is a form of artificial intelligence derived from machine learning, in which the machine learns by itself, unlike conventional programming, where it merely executes predetermined rules to the letter.
How Deep Learning Works
Deep Learning relies on a network of artificial neurons inspired by the human brain. This network consists of tens or even hundreds of “layers” of neurons, each receiving and interpreting information from the previous layer. For example, the system will learn to recognize letters before tackling words in a text or determine if there is a face in a photo before discovering who it is.
At each step, the “wrong” answers are eliminated and sent back to the upstream layers to adjust the mathematical model, while the program reorganizes the information into increasingly complex blocks. When this model is subsequently applied to other cases, it is normally able to recognize a cat even though no one ever explicitly taught it the concept of a cat. The starting data is essential: the more varied the experiences the system accumulates, the more effective it will be.
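This forward-and-backward mechanism can be sketched in a few lines of NumPy. The toy two-layer network below learns the XOR function; it is a didactic simplification built on arbitrary choices (layer size, learning rate), not a production implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a function no single-layer model can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input  -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: each layer interprets the previous layer's output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the output error is sent back to the upstream
    # layer and the weights of the mathematical model are adjusted.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # converges toward [0, 1, 1, 0]
```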
Applications of Deep Learning
Deep Learning is used in many areas:
- image recognition,
- automatic translation,
- autonomous cars,
- medical diagnosis,
- personalized recommendations,
- automatic moderation of social networks,
- financial prediction and automated trading,
- identification of defective parts,
- detection of malware or fraud,
- chatbots (conversational agents),
- space exploration,
- intelligent robots.
It is also thanks to Deep Learning that Google's AlphaGo artificial intelligence managed to beat the best Go champions in 2016. The American giant's search engine is itself increasingly based on Deep Learning rather than on handwritten rules.
Today, Deep Learning is even able to “create” paintings in the style of Van Gogh or Rembrandt on its own, or to invent a new language for two machines to communicate with each other.
Expertise in Machine and Deep Learning
For a future-oriented and successful AI, it is therefore crucial to strengthen expertise in machine and deep learning at universities as well as in research programs and competence centers. Regardless of whether data, the “oil of the 21st century”, is collected on a large or small scale: without high-performance “refineries” (methods such as machine learning and deep learning), it remains what it is: crude oil that cannot drive an (economic) engine.
At the same time, the AI infrastructure must be further developed, especially through clusters that support machine and deep learning with special AI accelerators (e.g. GPU/CPU clusters). This infrastructure should be available to all stakeholders with proven expertise.
Requirements for AI of Tomorrow
Modern AI applications are impressive, but their development is also very costly. In addition, approaches such as deep learning mostly require preprocessed (“labeled”) training data, which is often difficult, if not impossible, to procure. An important goal of research is therefore to simplify these development processes. For many applications – for example in medicine – it is also essential that AI-based predictions and decisions are reliable and comprehensible. In addition to AI expertise, the development of trusted AI systems also requires extensive programming knowledge, application knowledge, and in-depth expertise in dealing with uncertainties.
Deep Learning across Multiple Layers
Flat neural networks, as we have seen so far, perform only a single feature transformation, i.e. they have one hidden layer. Like many other machine learning algorithms, they transform the features into another representation only once. One problem with such flat architectures is the quality of the classification, since the learner has to reduce the high-dimensional initial features to the most important ones in just a single transformation.
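The difference between a flat and a deep architecture shows up directly in the network configuration. Here is a brief illustration with scikit-learn's MLPClassifier on synthetic data; the layer sizes are arbitrary choices, and which variant scores better depends entirely on the task.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data with high-dimensional input features.
X, y = make_classification(n_samples=2000, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Flat architecture: a single hidden layer, i.e. the features are
# transformed into another representation exactly once.
flat = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)

# Deep architecture: several layers, each building a more abstract
# representation on top of the previous layer's output.
deep = MLPClassifier(hidden_layer_sizes=(32, 32, 32), max_iter=1000, random_state=0)

for name, net in [("flat", flat), ("deep", deep)]:
    net.fit(X_tr, y_tr)
    print(name, round(net.score(X_te, y_te), 3))
```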
The application developer or data scientist often helps out with knowledge of the application field, i.e. domain knowledge, and manually selects features that are as representative as possible. As a rule, the input is then not the barely processed raw data; instead, features are constructed by hand that make a simple classification possible. This is also referred to as feature engineering.
For complex tasks such as image analysis, this feature design can also be carried out in two stages. First, low-level features are constructed with image analysis techniques, for example by finding characteristic edge points in the image using SIFT and building features from such points. In a second stage, new features are created from these, for example using cluster analysis, and these are then used by a classifier with a flat architecture.
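A rough sketch of this two-stage feature design, in the style of the classic “bag of visual words” pipeline: SIFT descriptors as low-level features, cluster analysis to form “visual words”, and a word histogram as the final feature vector. The random images are mere stand-ins for real photos, and OpenCV's SIFT implementation (available in opencv-python 4.4+) is assumed.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
images = [(rng.random((128, 128)) * 255).astype(np.uint8) for _ in range(5)]
sift = cv2.SIFT_create()

def descriptors(img):
    # Stage 1: low-level features - 128-dimensional SIFT descriptors
    # around characteristic edge/corner points of the image.
    _, desc = sift.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

all_desc = np.vstack([descriptors(img) for img in images])

# Stage 2: cluster the descriptors into k "visual words".
k = 8
vocab = KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc)

def bag_of_words(img):
    # The normalized word histogram is the constructed feature vector
    # that a classifier with a flat architecture would then consume.
    words = vocab.predict(descriptors(img))
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / max(hist.sum(), 1.0)

features = np.array([bag_of_words(img) for img in images])
print(features.shape)  # (5, 8): one feature vector per image
```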
This two-stage procedure is error-prone and requires a lot of trial and error from the programmer as well as experience in the domain. If a face is depicted in a photo, there are usually characteristic edges around the eyes, nose, and mouth, for example. One can then try to construct features from the characteristic arrangement of eyes, nose, mouth, etc. in order to recognize a depicted face. Furthermore, simple statistical quantities and distributions, such as brightness values or colors, can help to distinguish landscape images (more green) from portraits (more skin color).
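Such brightness and color statistics are easy to compute by hand, which is exactly what makes them attractive for feature engineering. In the small illustration below, the thresholds for “green” and “skin color” are rough assumptions, not calibrated values.

```python
import numpy as np

def color_features(rgb):
    # Hand-crafted features for an H x W x 3 RGB image with values in 0..255.
    img = rgb.astype(float) / 255.0
    r, g, b = img[..., 0], img[..., 1], img[..., 2]

    brightness = img.mean()                  # overall brightness
    green_frac = ((g > r) & (g > b)).mean()  # share of greenish pixels
    # Very rough "skin color" heuristic: reddish and not too saturated.
    skin_frac = ((r > g) & (g > b) & (r - b < 0.5)).mean()
    return np.array([brightness, green_frac, skin_frac])

# Toy check: a mostly green image vs. a mostly skin-toned one.
landscape = np.zeros((64, 64, 3), np.uint8); landscape[..., 1] = 180
portrait = np.full((64, 64, 3), (220, 170, 140), np.uint8)
print(color_features(landscape))  # high green fraction
print(color_features(portrait))   # high skin fraction
```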
But images can be so different that the learner often misjudges them when such manually constructed features are used for classification. To make misjudgments less likely, a human must then identify additional features, program them, and hand them over to the learner.