How will artificial intelligence change the future beyond 2020?
What distinguishes AI pioneers from low performers?
But what do leading companies do differently with AI than low performers? As the Accenture study shows, companies have to go through four development phases to achieve significant results with artificial intelligence:
- 1. Outline the advantages of AI and the associated opportunities to reinvent your own products with digital technologies.
- 2. Develop a vision of how existing offerings can be supplemented by AI.
- 3. Provide the necessary resources for the development of AI-based products.
- 4. Implement your vision and concrete initiatives and thus enable the digital reinvention of products on a large scale.
Clustering the surveyed companies by industry and current stage of adoption shows that AI maturity varies considerably from industry to industry. Automotive companies are apparently better able than others to pursue and implement AI initiatives vigorously: nine percent of companies in the automotive industry have already reached the third development stage, and five percent even the fourth. The manufacturers of industrial machinery, by contrast, are not quite there yet: only seven and three percent, respectively, reach the third stage here, and a mere one percent reaches level 4 and thus the highest stage of AI maturity.
One thing that distinguishes the AI avant-garde is that it works closely with partners in ecosystems to identify the AI applications that bring the greatest benefit to the customer. Most of these companies rely on AI solutions such as machine vision (73 percent), deep learning (64 percent) and robotic process automation (64 percent).
The mechanical engineering association VDMA recently founded the “Machine Learning” working group to give the topic additional impetus. The VDMA Future Business Competence Center had already examined the topic extensively in 2017 with the study “Machine Learning 2030 – Images of the future for mechanical and plant engineering”.
Following a decision by the board of the Software and Digitization Association, the VDMA is now pushing the topic ahead in the new working group, which brings users and providers together. “The current 375 members of the VDMA Software and Digitization Association clearly show the importance that digitization has in mechanical engineering,” emphasizes Matthias Dietel, IBM Germany Lab and member of the executive board.
VDMA Quick Guide: What can you do with AI?
Mechanical engineering companies that want to use machine learning understandably ask for guidelines, roadmaps, standardization and cooperation with institutes and IT service providers. Training, data sovereignty and ethical issues already played a major role in the “Machine Learning 2030” study. The VDMA “Machine Learning” expert group has now written a Quick Guide as a first handout. It is aimed primarily at the management of mechanical engineering companies that already deal with machine learning or would like to do so in the future.
The aim of the Quick Guide is to give management initial assistance in assessing the business relevance of machine learning so that they can derive their own approach and strategy. It provides information on opportunities, challenges and possible solutions. Above all, the Quick Guide is meant to help companies approach machine learning with the “right questions”.
The areas of application for machine learning in mechanical engineering are diverse; the VDMA mentions, among others:
- predictive maintenance,
- industrial image processing and
- robot controls.
The mechanical engineering association’s Quick Guide also sees considerable potential in logistics inside and outside the factory.
Nevertheless, medium-sized companies in particular shy away from (larger) investments in a disruptive technology such as AI, as Jörg Kremer from mip Management Informationpartner GmbH has observed. The reasons are less technophobia than economics; the reluctance to invest is driven by questions such as: “Is it worthwhile for my company to develop an AI solution? Do I do this with an IT partner? Individual development or standard solution? How does it connect to my upstream and downstream IT systems? What does my ROI look like?”
AI expert Kremer therefore advises medium-sized companies to first consider using a web-based AI platform. Such platforms offer a technically and economically interesting solution without requiring in-house IT development know-how. Kremer sees IBM Watson as the most advanced solution in this area. Companies can use such an AI platform to select different microservices for their needs without having to make extensive investments, for example in software development, computing or storage capacity. Kremer: “This allows companies to concentrate fully on customer- or project-specific training of the applications. Data security is also taken into account, because data sovereignty remains with the customer in accordance with the GDPR.”
Mechanical engineering: The factory of the future configures itself
To be able to react flexibly and quickly to changes in demand in the future, production systems need an efficient and simple way of working. Researchers at the Research Institute of the Free State of Bavaria for software-intensive systems and services (FORTISS) are demonstrating what such a scenario could look like with their “Fortiss Future Factory”.
With the help of a cognitive production system, they are developing methods for how factories can adapt to changing requirements in the future. The use of artificial intelligence is primarily intended to reduce programming and configuration effort, giving manufacturers the opportunity to produce individual products in small batch sizes with little downtime.
The “Fortiss Future Factory” consists of ten networked stations that can be combined as required and currently assemble two products with three variants each. In principle, the range of products that can be manufactured with the system is unlimited: everything from storage boxes to thermometers to shavers is possible, with the plant constantly reconfiguring itself, as the Fortiss researchers emphasize.
“The special thing about the machines is that they can describe themselves and store their skills in virtual ‘yellow pages for registered machines’,” explains research group leader Alois Zoitl. The required product descriptions and production steps are stored in the system. Defined interfaces give access to machine-readable descriptions of the capabilities of the respective factory modules, so the line can be retooled automatically and at short notice when an order comes in, according to the scientist. Planning software from the same institute acts as a virtual operator that plans, schedules, controls and monitors the entire manufacturing process.
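The principle of self-describing modules and a “yellow pages” lookup can be illustrated with a small Python sketch. The station names, skills and the naive first-match planner below are invented for illustration and are not FORTISS code:

```python
# Hypothetical sketch of a "yellow pages" skill registry, not FORTISS code.
from dataclasses import dataclass, field

@dataclass
class Station:
    name: str
    skills: set = field(default_factory=set)   # e.g. {"drill", "screw"}

class SkillRegistry:
    """Machines register the skills they offer; a planner looks them up."""
    def __init__(self):
        self._stations = []

    def register(self, station: Station):
        self._stations.append(station)

    def find(self, skill: str):
        return [s for s in self._stations if skill in s.skills]

def plan(order_steps, registry):
    """A very small 'virtual operator': map each step to a capable station."""
    schedule = []
    for step in order_steps:
        candidates = registry.find(step)
        if not candidates:
            raise RuntimeError(f"no station offers skill '{step}'")
        schedule.append((step, candidates[0].name))  # naive: take the first match
    return schedule

registry = SkillRegistry()
registry.register(Station("station_3", {"drill", "screw"}))
registry.register(Station("station_7", {"glue", "inspect"}))

print(plan(["drill", "glue", "inspect"], registry))
# [('drill', 'station_3'), ('glue', 'station_7'), ('inspect', 'station_7')]
```

In a real plant, the registry entries would of course be formal capability descriptions behind defined interfaces rather than plain strings, and the planner would also optimize routing and timing.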
The concept can also be transferred to other fields of application. FORTISS is focusing, among other things, on textile production: in the future, customers should be able to design their own clothing in an online shop, and the order would then be processed directly by the reconfigurable factory.
Automotive: Deep learning and AI in engine development
The German Research Center for Artificial Intelligence (DFKI), the world’s largest non-profit research center for AI, and IAV, one of the leading development partners in the automotive industry, have opened the joint “Research Laboratory Learning from Test Data” (FLaP). In the new test environment at DFKI in Kaiserslautern, special AI analysis methods for use in test procedures in automotive development are being researched and developed, using machine learning technologies such as deep learning and time series analysis.
According to the partners, the application potential of intelligent data analysis methods for monitoring and optimizing test data, control units and test benches in the automotive industry is “extraordinary”. A modern engine control unit, for example, has more than 50,000 setting parameters that determine performance, consumption, wear and the overall behavior of the engine. Using deep learning, more precisely neural networks in the control unit, the system can learn independently how to set these input variables optimally.
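One way to picture how a network can “learn” good settings is a surrogate approach: a neural network trained on test-bench data predicts a target such as consumption from the parameters, and gradient descent is then run on the inputs rather than the weights. The PyTorch sketch below uses four made-up, normalized parameters and an untrained network; it only illustrates the idea and is not the DFKI/IAV method.

```python
# Illustrative surrogate-optimization sketch: optimize the inputs of a
# (here untrained) network that stands in for a consumption model.
import torch
import torch.nn as nn

surrogate = nn.Sequential(                      # would be trained on test-bench logs
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
surrogate.requires_grad_(False)                 # freeze weights; only the inputs move

params = torch.rand(1, 4, requires_grad=True)   # candidate parameter settings (0..1)
optimizer = torch.optim.Adam([params], lr=0.01)

for _ in range(200):
    optimizer.zero_grad()
    predicted_consumption = surrogate(params).sum()
    predicted_consumption.backward()            # gradient w.r.t. the settings
    optimizer.step()
    with torch.no_grad():
        params.clamp_(0.0, 1.0)                 # keep each setting in its valid range

print("suggested settings:", params.detach().numpy())
```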
The use of such networks in the time series analysis of engine test data also enables new approaches to “predictive health monitoring”, improving the prediction of wear and maintenance needs. Such processes are to be researched and developed in the new laboratory. At the same time, FLaP will also work on new ways of visualizing the diverse measurement data from the neural networks. The plan is to create a toolbox of AI tools that automotive engineers can use intuitively.
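For the time-series side, a minimal illustration is rolling-window smoothing of a wear-related signal followed by a trend extrapolation toward a wear limit; the synthetic signal, window size and limit below are invented for the sketch.

```python
# Toy predictive-health sketch: rolling statistics plus a linear trend
# extrapolation on a synthetic wear signal. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(0, 500)                                    # operating hours
wear = 0.002 * hours + rng.normal(0, 0.05, hours.size)       # slowly drifting signal

window = 50
rolling_mean = np.convolve(wear, np.ones(window) / window, mode="valid")

# Fit a linear trend to the smoothed signal and extrapolate to a wear limit.
slope, intercept = np.polyfit(hours[window - 1:], rolling_mean, 1)
wear_limit = 1.5
predicted_hour = (wear_limit - intercept) / slope

print(f"estimated hours until the wear limit is reached: {predicted_hour:.0f}")
```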
Automotive: The Postbus plans its route itself
Autonomous and electric driving are at the top of the agenda at Deutsche Post DHL Group. To this end, the logistics group is building a test fleet of autonomous, purely electric delivery vehicles. Its partner is the automotive supplier ZF: the “Postbuses” are equipped with the ZF ProAI control box, which ZF Friedrichshafen AG developed together with NVIDIA.
Above all, the light, electric and intelligent delivery vehicles can better meet future requirements on the “last mile” to the customer, which is currently very complex and cost-intensive due to the flexibility expected in e-commerce and the demands of scheduling. Deutsche Post DHL Group currently operates a fleet of 3,400 StreetScooter delivery vehicles. These can be equipped with ZF sensors – cameras, lidar and radar – whose information is processed by the ZF ProAI control box. Thanks to AI, the vehicles will later be able to “understand” their immediate surroundings, plan a safe route – or reschedule at short notice – follow the route and park independently. This makes deliveries more precise, safer and cheaper.
“The example of autonomous delivery vehicles shows how strongly AI and deep learning influence the commercial vehicle industry,” says Jensen Huang, founder and CEO of NVIDIA. “Since orders from online shopping continue to increase strongly, but the number of truck drivers is limited, AI-capable autonomous vehicles will play a key role in future logistics on the ‘last mile’.”
To develop these AI delivery vehicles, Deutsche Post DHL Group has already equipped its data center with the NVIDIA DGX-1 AI supercomputer and is training its artificial neural networks there. In the course of further vehicle development, these deep learning algorithms will later be transferred to the vehicles’ control boxes on the NVIDIA Drive PX platform. In a prototype presented at the NVIDIA developer conference GPU Technology Conference (GTC) in Munich, six cameras, one radar and two lidar systems supply the AI with data.
Logistics: The algorithm replaces the dispatcher
The logistics industry is undergoing fundamental change in the course of digitization, and job profiles are under constant pressure to adapt. A good example is the dispatcher, whose main tasks traditionally lie in optimizing transports and in pricing. Both tasks are already computer-supported today.
Digital freight forwarders like FRACHTRAUM underpin their entire business model with self-learning algorithms. From this perspective, it is clear to the Berlin-based company that ever-improving algorithms will significantly change the role of people in logistics planning: away from planning and optimizing individual transports and toward looking after and managing the people involved.
Pricing in logistics is subject to many factors with different dynamics. In addition to the weight and route length of the transport, holidays and bridging days, the availability of the requested truck type, the exchange of loading equipment and seasonal demand peaks, factors such as how short-notice the booking request is and the current fuel price also play a role.
The more than 100 parameters that dispatchers have to take into account when determining a transport price show how complex and therefore error-prone this process is. This is one reason why large logistics companies work with regional branches: dispatchers can only keep track of this density of information for a regionally limited area, where the complexity of the dispatching process remains manageable. FRACHTRAUM therefore relies on a machine learning-based algorithm that can take all relevant parameters into account within a few seconds and determine a binding price ad hoc – for every type of transport.
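What such a learning-based pricing step can look like in principle is sketched below with a gradient-boosted regressor on a handful of invented features (weight, distance, booking lead time, fuel price, holiday flag). FRACHTRAUM’s actual feature set and model are not public, so this only illustrates the general approach.

```python
# Illustrative transport-pricing sketch with invented features and data,
# not FRACHTRAUM's actual model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 2000

# Features: weight [t], distance [km], booking lead time [h], fuel price [EUR/l], holiday flag
X = np.column_stack([
    rng.uniform(1, 24, n),
    rng.uniform(50, 1200, n),
    rng.uniform(2, 168, n),
    rng.uniform(1.2, 1.9, n),
    rng.integers(0, 2, n),
])

# Synthetic "true" price: a per-km base rate plus surcharges for weight,
# short-notice bookings, fuel price and holidays, with some noise.
price = (1.1 * X[:, 1] + 8 * X[:, 0] + 300 / np.sqrt(X[:, 2])
         + 150 * X[:, 3] + 80 * X[:, 4] + rng.normal(0, 25, n))

model = GradientBoostingRegressor().fit(X, price)

quote = model.predict([[12.0, 450.0, 6.0, 1.7, 0]])   # ad-hoc quote for one request
print(f"quoted price: {quote[0]:.2f} EUR")
```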
At the same time, the quality of the pricing improves with each transport carried out, since every transport automatically increases the amount of data on which the price calculation is based. And what will become of the dispatcher? In the future, dispatchers will act more than ever as the human link between driver and shipper. Just one year after entering the market, FRACHTRAUM already handles around 3,000 transports a month fully automatically on the basis of self-learning algorithms.
Chatbot assists the buyer
Natural language processing and artificial intelligence make purchasing faster, more intuitive and more enjoyable – at least Basware is convinced of this. The Finnish software company presented the Basware Assistant, a new chatbot function within its electronic procurement solution, at the AP & P2P Conference & Expo in spring 2018. The chatbot serves as a virtual assistant that makes it easier for buyers to find the purchase requests and orders they have access to.
The Basware Assistant uses natural language processing and artificial intelligence to create a new, simplified way of interacting with Basware’s e-procurement solution. Buyers can communicate with it as they would with a flesh-and-blood colleague to search for orders, purchase requests, supplier and item names, and ID and document numbers. Talking to the sourcing solution eliminates the need to navigate through a series of screens, as before, to reach the desired process.
Another example of AI in procurement is the Würzburg company Scoutbee, which uses artificial intelligence to help buyers find new suppliers and optimize their supplier relationships. Only recently, the founders of Scoutbee prevailed against 118 competitors in the Northern Bavaria business plan competition with their AI concept.
But where is artificial intelligence heading? What is currently being researched? What’s in the pipeline?
Machines can (nevertheless) act morally
Machines will soon be able to imitate human moral behavior – at least scientists at the University of Osnabrück are convinced of this. The occasion is autonomous driving, because self-driving cars are the first generation of intelligent robots to share everyday living space with people. It is therefore essential to develop rules and expectations for autonomous systems that define how they should behave in critical situations.
The Institute of Cognitive Science at the University of Osnabrück has now published a study in “Frontiers in Behavioral Neuroscience” which shows that human ethical decisions can be implemented in machines and that autonomous vehicles will soon be able to handle moral dilemmas in road traffic. Politically, the debate on modeling moral decisions is accompanied by an initiative of the Federal Ministry of Transport and Digital Infrastructure (BMVI), which has formulated 20 ethical principles for this purpose. The Osnabrück study now provides the first empirical scientific data.
“To be able to define rules or recommendations, two steps are necessary. The first is to analyze and understand human moral decisions in critical situations. The second is to describe this behavior statistically in order to derive rules that can then be used in machines,” explains Prof. Dr. Gordon Pipa, one of the lead scientists of the study.
To implement both steps, the authors used a virtual reality environment in which the behavior of test subjects was observed in simulated traffic situations. The participants drove through the streets of a typical suburb on a foggy day. In the course of the experiments, they encountered unavoidable and unexpected dilemma situations in which people, animals or objects blocked the lanes; avoiding the obstacle in one of the two lanes required a moral trade-off.
The observed decisions were then evaluated statistically and translated into rules. The results indicate that, in the context of these unavoidable accidents, moral behavior can be explained by a simple value of life assigned to every person, every animal and every object.
Leon Sütfeld, the lead author of the study, explains: “Human moral behavior can be described or predicted with considerable precision by comparing the value of life associated with every person, every animal and every object. This shows that human moral decisions can in principle be described with rules and that these rules could consequently also be used by machines.”
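The core of such a value-of-life model fits in a few lines of code; the obstacle categories and weights below are placeholders, not the values estimated in the study.

```python
# Minimal value-of-life sketch: steer into the lane whose obstacles carry the
# lower total value. The weights are placeholders, not the study's estimates.
VALUE = {"adult": 1.0, "child": 1.2, "dog": 0.4, "trash_can": 0.05}

def choose_lane(left_obstacles, right_obstacles):
    left_cost = sum(VALUE[o] for o in left_obstacles)
    right_cost = sum(VALUE[o] for o in right_obstacles)
    return "left" if left_cost < right_cost else "right"

# Steer into the lane with the trash can rather than the one with the dog.
print(choose_lane(["dog"], ["trash_can"]))   # -> "right"
```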
Basically, the findings of the Osnabrück researchers contradict the eighth principle of the BMVI report (see above), which is based on the assumption that moral decisions cannot be modeled. How can this fundamental difference be explained? Algorithms can be described either by rules or by statistical models that relate several factors to one another. Laws, for example, are rule-based. Humans and modern AI systems, by contrast, tend to use complex statistical weighting, which allows them to evaluate new situations they have never encountered before.
In Sütfeld’s work, a statistical methodology of this kind, similar to human decision-making, was used to describe the data. “The rules therefore do not have to be formulated abstractly by a person at a desk, but have to be derived and learned from human behavior. This raises the question of whether these learned and conceptualized rules should not also be used as a moral component in machines,” argues Sütfeld.
“Personality recognition” for robots
People recognize gestures and interpret glances at lightning speed and almost automatically. Computers and robots cannot, which is why scientists all over the world are researching how to make cooperation between humans and computers more social, efficient and flexible. Computer scientists from Saarbrücken and Stuttgart, together with psychologists from Australia, have now reached an important milestone. The software system they developed processes a person’s eye movements and calculates whether that person is emotionally unstable, sociable, agreeable, conscientious or curious.
“With our eyes we not only perceive our surroundings; they are also a window to our soul, because they reveal who we are, how we feel and what we do,” explains Andreas Bulling, who heads the “Perceptual User Interfaces” research group at the Max Planck Institute for Informatics and the Cluster of Excellence at Saarland University in Saarbrücken. Together with scientists in Stuttgart and Australia, Bulling developed and trained a software system based on machine learning algorithms that evaluates eye movements and uses them to draw conclusions about a person’s character traits.
To obtain data for training and evaluation, 50 students at Flinders University in Australia took part in the study. After they arrived at the laboratory, the researchers fitted them with eye-tracking glasses, which recorded their eye movements while they strolled around the campus for around ten minutes and bought a coffee or other items in the campus shop. The students then took the glasses off and filled out special questionnaires to determine their personality and level of curiosity in the conventional way.
“In order to analyze the recorded gaze data independently of the recording’s duration, we worked with a sliding time window, so that no characteristics are diluted,” explains Bulling. The researchers extracted 207 features from each of the resulting time windows, including statistics on gaze fixations and the blink rate. Using this data and the information from the questionnaires, they combined around 100 decision trees per personality trait into a classifier and trained it. The result: in a subsequent test with previously unused data, they were able to show that the software system reliably recognizes traits such as emotional instability, sociability, agreeableness and conscientiousness.
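The classifier described here corresponds to a random forest with roughly 100 trees per trait, trained on the 207 window-level gaze features. The sketch below reproduces that setup on random placeholder data; it does not use the study’s actual features or labels.

```python
# Sketch of the described setup: ~100 decision trees per personality trait,
# trained on 207 gaze features per time window. Data here is random placeholder.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 207))                    # 500 time windows x 207 gaze features
traits = ["neuroticism", "extraversion", "agreeableness", "conscientiousness"]
y = {t: rng.integers(0, 2, 500) for t in traits}   # high/low label per trait

for trait in traits:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y[trait], random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(trait, "test accuracy:", clf.score(X_te, y_te))   # ~0.5 on random data
```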
“We can also transfer the knowledge gained in this way about non-verbal behavior to robots so that they behave like humans. Such systems would then communicate with people in a much more natural way and would therefore be more efficient and flexible to use,” says Bulling.
International Data Spaces – better data availability
AI experts also see a need for action with regard to the availability of data. As already shown, generally accessible, usable data is scarce by international comparison. To create incentives to generate and share data, experts recommend that the creators of data retain control and sovereignty over it but share it for mutual benefit. Models such as the International Data Spaces and, in particular, the Industrial Data Space are exemplary in this context.
The Industrial Data Space is a virtual data room that supports the secure exchange of data and the simple linking of data in business ecosystems based on standards and with the help of common governance models. The Industrial Data Space preserves the digital sovereignty of the owner of the data and at the same time forms the basis for smart services and innovative business processes.
Author Profile
- Amram is a technical analyst and partner at DFI Club Research, a high-tech research and advisory firm. He has over 10 years of technical and business experience with leading high-tech companies, including Huawei, Nokia and Ericsson, in ICT, semiconductors, microelectronic systems and embedded systems. Amram focuses on the business-critical points where new technologies drive innovation.