There has been a lot of hype around AI in recent years; enthusiasts and early adopters revere it for its potential benefits to humanity, while naysayers despise it for its doomsday scenarios, in which computers take over the world and establish dominion over mankind. In any case, an artificial intelligence evolving to reason with the same capacity as a human being is unlikely, unless, of course, we invent a chip that gives computers human emotions and the ability to procreate as we do.

AI is a subset of data science, and data is the new currency. The NFL recently partnered with a tech company to place RFID chips in player uniforms and footballs to gather real-time data on player movement on the field. With systems deployed at every football stadium in the US to capture this data, the NFL can process and act on it to generate new revenue streams. For example, the data can be used to improve and predict player performance, enhance video games, create new applications, and more.

This is the sort of value that lies in data: raw, untapped data, when intelligently processed, can yield efficiencies and reveal new opportunities on which businesses can thrive. According to IoT Analytics, more than 4.7 billion IoT devices were connected to the internet in 2016, a number expected to surpass 11.5 billion by 2021; IDC projects about 41.6 billion connected devices by 2025. So, what can businesses do to get ready for the upcoming data deluge? In short, take the initiative to develop skills in AI and data science, but, more importantly, employ a smart strategy for where to process this data.

Fundamentally, AI involves digitizing real-world events through smart sensors, training computers on the gathered data with application-specific algorithms, and inferring what actions to take from the ongoing learning.

Sensing

Raw data gathered through IoT devices is the starting point. Depending on the application, different data may need to be acquired and combined to make the computations that drive decisions. For example, a control valve on an oil refinery pipeline may have sensors for temperature and pressure, and these readings can be used to compute the amount of flow control needed through the valve for the particular fluid.
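To make the valve example concrete, here is a minimal sketch of combining two sensor readings into one control decision. The setpoint, gain, and temperature damping are hypothetical values, not a real control law for any particular fluid:

```python
# Illustrative sketch: combine temperature and pressure readings to
# compute a valve opening. All constants here are hypothetical.

PRESSURE_SETPOINT_KPA = 850.0   # assumed target line pressure
GAIN = 0.05                     # assumed proportional gain per kPa of error

def valve_adjustment(temperature_c: float, pressure_kpa: float) -> float:
    """Return a valve opening in [0.0, 1.0] from two sensor readings."""
    error = pressure_kpa - PRESSURE_SETPOINT_KPA
    # Hotter fluid flows more easily, so damp the correction slightly.
    damping = 1.0 if temperature_c < 60.0 else 0.8
    adjustment = 0.5 - GAIN * error * damping
    return min(1.0, max(0.0, adjustment))  # clamp to the valve's range
```

For instance, at the setpoint pressure the valve stays at its midpoint opening, while a large over-pressure drives it toward fully closed.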

Legacy technologies already allow you to do this; the future, however, is about gathering this data for learning and forecasting outcomes. This predictive aspect of data science is what will drive costs down and unveil new opportunities. For example, accumulated data can be used to decide when to replace hardware or equipment before it breaks in the field, where failures can be very costly.

With smart IoT devices everywhere, the volume of data generated will be immense. Wind farms can easily produce tens of thousands of data points each second. Vehicle traffic is expected to exceed 10 exabytes monthly by 2025. Then, there is the data from Smart Cities, and so on. The best strategy for handling this data, especially for real-time processing, is to consume it at the Edge. Some benefits include:

  • low latency, and, thereby, improved response times
  • opportunity to increase product capability (through complex software on the Edge Cloud)
  • opportunity to reduce hardware costs
  • reduced product development cycles

Transporting data of this magnitude to the Cloud would consume much of the backhaul bandwidth and cause data traffic jams.
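One common way to avoid those traffic jams is to aggregate at the Edge and forward only compact summaries upstream. The sketch below is illustrative; the window contents and summary fields are made up for the example:

```python
# Sketch of edge-side aggregation: instead of shipping every raw sample
# to the Cloud, the Edge node forwards one small summary per time window.

from statistics import mean

def summarize_window(samples: list[float]) -> dict:
    """Reduce a window of raw sensor samples to a few summary statistics."""
    return {
        "count": len(samples),
        "mean": mean(samples),
        "min": min(samples),
        "max": max(samples),
    }

# Hypothetical window of temperature readings captured at the Edge:
raw = [20.1, 20.3, 19.8, 20.0, 20.2]
summary = summarize_window(raw)  # four numbers replace the whole window
```

A real deployment might summarize thousands of samples per window, so the bandwidth saving compounds quickly across devices.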

Training

For machines to learn and evolve, they must have a large data lake from which to train the AI model. Initially, when the data lake is small, the accuracy of the decisions or outcomes will be low. As more data is collected, processed, and acted on, the learning improves.

This is a critical component of AI: a larger data set yields deeper business insights. With machine learning algorithms, you can unveil data patterns that help with predictive analytics on user behavior, game-play outcomes, and so on, eventually enhancing the training process itself.
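The relationship between data volume and accuracy can be shown with a toy experiment. Here a "model" simply estimates an underlying failure rate from observations; the true rate is invented for the demonstration, and the point is only that the estimate tightens as the data lake grows:

```python
# Toy illustration: a learned estimate converges toward the true
# underlying pattern as the amount of training data grows.

import random

random.seed(42)
TRUE_FAILURE_RATE = 0.02  # hypothetical ground truth

def estimate_failure_rate(n_samples: int) -> float:
    """Estimate the failure rate from n_samples simulated observations."""
    failures = sum(random.random() < TRUE_FAILURE_RATE
                   for _ in range(n_samples))
    return failures / n_samples

# Error of the estimate with a small versus a large data set:
small_error = abs(estimate_failure_rate(100) - TRUE_FAILURE_RATE)
large_error = abs(estimate_failure_rate(100_000) - TRUE_FAILURE_RATE)
```

With 100,000 samples the estimate typically lands within a fraction of a percent of the truth, while 100 samples can miss badly; real models behave analogously, if less cleanly.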

Ultimately, you need computing, storage, and networking at the Edge to make this happen. One option is to use Distributed Edge Computing services offered by Edge Cloud providers. An alternative is to build your own on-premises Cloud and/or use the device itself to do the needed processing; however, this will not scale. A smart solution can combine both: an Edge Computing service that leverages computing resources at nearby data centers as well as on-premises resources (e.g., an IoT Gateway) to strategically distribute workloads. Not only is this smart, it is also secure, as the critical data stays localized, gated from reaching the centralized Cloud via the internet.

Inference

Based on all the data that has been collected and processed, the overall goal is for the computer to make a decision from the patterns hidden in the raw data. Essentially, the computing resources are used to draw conclusions from the facts or evidence embedded in the gathered data.

The implication is that machines will eventually make higher-level decisions and act autonomously. Furthermore, as mentioned earlier, machines can point out failures in equipment before they occur and thereby reduce costs substantially.
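A minimal sketch of such an inference is fitting a trend to recent sensor readings and flagging equipment whose trend will cross a limit soon. The vibration values, limit, and horizon here are hypothetical; real systems would use a trained model rather than a hand-rolled linear fit:

```python
# Sketch: predict equipment failure by projecting a least-squares trend
# over recent readings. All numbers are illustrative assumptions.

def linear_trend(ys: list[float]) -> tuple[float, float]:
    """Least-squares slope and intercept for y over x = 0, 1, 2, ..."""
    n = len(ys)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    return slope, y_mean - slope * x_mean

def predict_failure(readings: list[float], limit: float,
                    horizon: int) -> bool:
    """True if the fitted trend exceeds `limit` within `horizon` steps."""
    slope, intercept = linear_trend(readings)
    projected = slope * (len(readings) - 1 + horizon) + intercept
    return projected > limit

vibration = [1.0, 1.1, 1.3, 1.4, 1.6]   # rising vibration, in mm/s
needs_service = predict_failure(vibration, limit=2.5, horizon=10)  # True
```

A flat series of readings would project below the limit and not be flagged, so maintenance is scheduled only where the data shows a problem developing.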

Ideally, an enterprise will want to manage this locally without involving cloud services; this would certainly address its security and privacy concerns, as well as the cost of cloud computing. However, it would also pull focus from the core business and pose challenges with scalability. Edge Computing services alleviate these issues and set the stage for building intelligent systems that can think for themselves.

Conclusion

Although AI has been around for some time, it has recently started to build momentum. The primary benefit of this technology is the ability of computers to learn, and to make decisions from patterns hidden in data that would be very difficult for human beings to find manually. As applications mature through improved quality in the data lakes, efficiencies can be derived and new opportunities will unfold.

An Edge Computing solution, when strategically deployed, has the potential to become the operating system for AI.