Introduction to Artificial Intelligence:
Even today, many people struggle to imagine what the term artificial intelligence actually means.
Some are firmly convinced that AI is a sustainable technological development, thanks to which machines will relieve us of all troublesome tasks in the future.
Others, on the other hand, think that artificial intelligence is just another buzzword and that AI, like many trends before, will soon vanish again.
Last but not least, there are people who are simply scared by artificial intelligence and whose fears range from the loss of countless jobs to horror scenarios in which robots take control of humanity.
So it is high time to ensure clarity. We explain why artificial intelligence is much more than just short-term hype and why it is worthwhile to take a closer look at the topic.
History of AI:
Artificial intelligence has not concerned humanity only in recent years. Rather, the idea of building machines that can automate human thinking is several centuries old.
One of the earliest sources on the subject is the work L’Homme Machine, published by Julien Offray de La Mettrie in 1748. Since neither the knowledge nor the technical means for AI were available at the time, the idea remained just that for a long time: an idea.
In the first half of the 20th century, the theoretical foundations for the further development of artificial intelligence were gradually created in science.
Finally, in the second half of the 20th century, the first serious attempts to develop artificial intelligence were made.
The Dartmouth Conference (1956)
This research project, launched in 1956, is generally considered the official birth of AI as a scientific discipline.
ELIZA (1966)
The computer program ELIZA, presented by Joseph Weizenbaum in 1966, showed, by simulating a conversation between a psychotherapist and a patient, that communication between a person and a computer in natural language is possible.
Deep Blue (1997)
As early as 1957, Herbert Simon put forward the thesis that a computer could become world chess champion within 10 years. A full 40 years would pass before his prediction came true.
It was only in 1997 that the Deep Blue system, developed by IBM, succeeded in defeating the multiple-time world chess champion Garry Kasparov over six games, which was undoubtedly a milestone for the field of artificial intelligence at the time.
AI between expectation and reality
Over time, it became apparent that what was technically feasible could not keep up with the enormous expectations, which repeatedly set research in the field of artificial intelligence back.
In recent years, however, the technical possibilities available have developed rapidly. This has contributed to the fact that AI systems have become more and more powerful and their practical benefits have continued to increase.
AlphaGo (2016)
AlphaGo is a computer program developed by Google DeepMind that masters the Chinese board game Go and that beat Lee Sedol, one of the world’s best players, in March 2016. Since Go is significantly more complex than chess, AlphaGo’s success is considered another milestone in the field of artificial intelligence.
What is artificial intelligence?
Put simply, AI is a sub-area of computer science that aims to imitate intelligent human behavior.
A computer should therefore be programmed so that it can solve problems independently. To do this, it processes perceptions, compares them with previously learned patterns, and then derives a specific action or recommendation.
When recording information using image processing, cameras, scanners or 3D sensors take pictures of their surroundings. This form of image processing is also known as machine vision.
It is currently used in industry in particular. There, AI can monitor production and assure quality, for example by checking the surface of a workpiece or carrying out a completeness check.
Speech recognition is another way for AI systems to record information. A person speaks or writes something, and the computer picks up what was said or written.
This technology is used, for example, in intelligent chatbots, which have been used increasingly for communication with customers in recent years, and in voice assistants for smartphones, such as Siri from Apple.
If the computer could only record the information made available to it, artificial intelligence would of course be useless. It must therefore also be able to process its perceptions and derive appropriate actions from them.
The area of action includes expert systems that should relieve people and support them in their decisions, as well as natural language processing (NLP) for machine processing of natural language.
The Turing test
The Turing test is a method used to check, based on specified criteria, whether a computer or machine exhibits thinking ability equivalent to a human’s.
The idea for the Turing test, named after the British computer scientist and mathematician Alan Mathison Turing, dates back to 1950. However, the criteria on which it was based were only formulated later.
Turing test procedure
In addition to the computer, two human test subjects take part in the Turing test. Communication takes place via a keyboard, without any auditory or visual contact between the participants.
One of the subjects communicates with the other subject and with the computer and then has to say which of the two conversation partners was the machine.
If they cannot answer this unequivocally, the Turing test is passed, and the computer’s ability to think is assumed to be equivalent to that of a human.
How does artificial intelligence work?
AI systems today mostly consist of neural networks that are trained with the help of machine learning or deep learning. Such a network is made up of several layers of artificial neurons, each of which can consist of several hundred or a thousand individual neurons.
The first layer serves as the input layer for acquiring information, while the last layer, the output layer, delivers the conclusion reached during processing.
There are several hidden layers between the input and output layers, which are responsible for processing the information.
If, for example, a neuron from the input layer gives the signal that a pixel is green, a neuron from the second layer can link this information with corresponding signals from other neurons from the previous layer and deduce from this that it is a green area.
In this way, more complex relationships are formed in each layer, so that the output layer can ultimately determine that an image evidently depicts a tree, or even recognize which type of tree it is.
Of course, this example is a very simplified illustration of how artificial intelligence works. In reality, the processes are much more complex.
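The layered flow described above (input layer, hidden layers, output layer, each neuron weighting the signals from the previous layer) can be sketched in a few lines of Python. The weights and the "tree score" interpretation here are made-up toy values for illustration, not a trained model:

```python
import math

def sigmoid(x):
    # Squash a raw activation into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron weights every signal from the previous layer,
    # adds its bias, and applies the activation function.
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Toy input layer: intensity of red, green, blue in one pixel
pixel = [0.1, 0.9, 0.2]

# Hypothetical weights: one hidden layer with two neurons,
# then a single output neuron ("how tree-like is this input?")
hidden_w = [[0.5, 1.5, 0.3], [-0.4, 0.8, -0.2]]
hidden_b = [0.0, 0.1]
out_w = [[1.2, 0.7]]
out_b = [-0.5]

hidden = layer(pixel, hidden_w, hidden_b)   # hidden layer activations
output = layer(hidden, out_w, out_b)        # final score between 0 and 1
print(output)
```

Real networks differ mainly in scale: many layers, thousands of neurons per layer, and weights found by training rather than chosen by hand.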
How is artificial intelligence created?
In order for artificial intelligence to actually work and for it to be able to differentiate, for example, a tree from a goat or even an oak from a fir tree, the AI system must first acquire the necessary knowledge.
In practice, this usually happens through supervised learning, which requires a suitable data set. For example, the AI system must be shown a large number of pictures of trees, or of whatever it is supposed to learn.
It does not matter whether the data consists of pictures, texts, speech, or some other form. It is only important that the training data can be processed digitally.
In the course of the learning process, weighted connections with the neurons of previous layers are also formed in order to identify relevant and irrelevant neurons from the previous layer. The weights change continuously during training until the best possible result is finally achieved.
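The idea of weights that "change continuously during training" can be shown with the simplest possible case: a single artificial neuron that learns to separate two classes. The data, labels, and learning rate below are illustrative inventions, not a real data set; the update rule is the classic perceptron rule, a much-simplified stand-in for how full networks learn:

```python
# Minimal supervised-learning sketch: a single neuron learns to
# separate two made-up classes from labeled examples.
data = [([0.9, 0.1], 1),   # "tree-like" example -> label 1
        ([0.2, 0.8], 0),   # "goat-like" example -> label 0
        ([0.8, 0.3], 1),
        ([0.1, 0.9], 0)]

weights = [0.0, 0.0]
bias = 0.0
lr = 0.5  # learning rate: how strongly each error shifts the weights

for epoch in range(20):
    for features, label in data:
        # Forward pass: weighted sum, thresholded to a 0/1 prediction
        score = sum(w * x for w, x in zip(weights, features)) + bias
        prediction = 1 if score > 0 else 0
        # Update: nudge each weight in the direction that reduces the error
        error = label - prediction
        weights = [w + lr * error * x for w, x in zip(weights, features)]
        bias += lr * error

print(weights, bias)  # weights have shifted to separate the two classes
```

Deep networks replace this threshold rule with gradient-based updates (backpropagation) across many layers, but the principle is the same: weights are adjusted example by example until the error stops shrinking.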
How far along is artificial intelligence?
Many people see the development of artificial intelligence as a serious threat from autonomous machines that develop a life of their own and oppose the instructions of their developers.
In practice, however, things look a little different. Artificial intelligence is still dependent on human teachers who feed it data. AI is, so to speak, a kind of front end that makes the huge amounts of data that big data brings usable.
But it can only draw the right conclusions from the data if it has previously gained the relevant experience.
AI is currently a specialist when it comes to solving specific tasks in a specific domain. However, it still lacks the capacity to solve complex problems spanning different domains at the same time.
Development is heading in this direction, but it will be a while before that point is actually reached and AI is perhaps even on a par with human intelligence.
Currently, the existing hardware in data centers is, among other things, a limitation for the further development of artificial intelligence. Due to the ever-growing data sets of AI applications, even large database systems will eventually reach their limits.
One of the future challenges will therefore be to implement flexible and scalable software platforms for the interaction between artificial intelligence and big data and the further development of this promising technology.
What can artificial intelligence do?
A distinction is made between strong and weak AI in artificial intelligence. Weak AI is already superior to humans in some areas, or at least on an equal footing.
To speak of strong AI, a corresponding system in all areas would have to reach the level of the human brain.
Weak AI is currently still the reality. It focuses on solving specific, clearly outlined problems. It was specially developed for the respective application scenarios and only works superficially without having a deeper understanding of the topic.
Weak AI performs clearly defined tasks with a consistent approach. In doing so, it works with the methods that humans have made available to it.
Possible areas of application for weak AI are:
- Chatbots for customer support
- Navigation systems in motor vehicles
- Software for image and speech recognition
Strong AI is also known as superintelligence. Its goal is to match or even surpass human intellectual faculties.
In contrast to weak AI, strong artificial intelligence does not act only reactively, but intelligently, flexibly, and on its own initiative. To date, it has not been possible to develop strong artificial intelligence.
In order to speak of a strong AI, the system would have to combine the following properties:
- Ability to communicate in natural language
- Ability to make decisions even when uncertain
- Ability to learn and plan
- Logical thinking
In addition, it must be able to combine all of these skills to achieve an overall goal.
Despite repeated discussions about its feasibility, the majority of researchers are now convinced that it is only a matter of time before the first strong AI emerges. A period of around 20 to 40 years is considered realistic.
Google AI Duplex: The Future of Artificial Intelligence
Google Duplex is a great example of the direction in which artificial intelligence is developing and how it is making our everyday lives easier.
This is an AI-equipped telephone assistant that is able to make appointments independently and that can reserve a table in a restaurant for its users, for example.
The digital assistant Google Duplex does not just speak with a human voice. It also inserts typical sounds like “ehm” and pauses to sound more natural.
So far, admittedly, operating the telephone assistant is still a little cumbersome and, moreover, reserved for selected users of Google’s Pixel smartphones. However, it is only a matter of time until that changes.
How artificial intelligence is changing our society
Artificial intelligence will change our everyday lives as well as the world of work. Although this development is still at the very beginning, the first effects of this change can already be felt.
While this may worry many people, it also opens up great opportunities in very different areas. Fear of mass unemployment is certainly just as unfounded as the fear that intelligent machines will eventually take over and suppress humanity.
AI will bring changes
Of course, many professions in their current form will no longer exist in the future. But there have been changes like this in the past, for example in the wake of the industrial revolution, when steam engines were increasingly replacing human muscle strength.
However, this does not mean that artificial intelligence will completely replace people and that no one will have a job in the future.
Rather, AI will increasingly contribute to making our lives easier and safer in some areas, such as production.
Just like computers and the Internet, artificial intelligence will eventually become part of our everyday life and will be perceived by everyone as something quite normal. The beginning has long been made and the development will inevitably continue in the future.
Why artificial intelligence is becoming indispensable for companies
In order to remain competitive in the long run, companies cannot avoid dealing with the topic of artificial intelligence. If you want to keep up with current developments and continue to exist on the market in the future, you have to know the potential that AI has to offer and implement suitable systems in the existing structures.
The range of possible application scenarios is large and there is hardly any area in which it is not possible to profit in one way or another through the use of powerful AI systems.
Artificial intelligence has the following advantages in particular for companies:
- AI systems are efficient.
- Existing processes can be optimized with artificial intelligence.
- Smart machines deliver accurate results.
- Artificial intelligence can help save costs.
- AI systems can increase customer satisfaction.
- Personalized advertising and advice can increase sales.
Where is artificial intelligence used?
The use of AI opens up new opportunities for companies in almost all industries and is therefore by no means only suitable for areas that have always been tech-savvy. Artificial intelligence is already being used in very different ways to improve processes, from logistics to retail (online and offline) to banks and the media industry.
- AI in e-commerce
- AI in industry
- AI in retail
- AI in logistics
- AI in trading
- AI for banks
- AI in marketing
- AI in the media
- AI in agriculture
- Amram is a technical analyst and partner at DFI Club Research, a high-tech research and advisory firm. He has over 10 years of technical and business experience with leading high-tech companies, including Huawei, Nokia, and Ericsson, in ICT, semiconductors, microelectronic systems, and embedded systems. Amram focuses on the business-critical points where new technologies drive innovation.