What is Fog Computing?

Fog computing is a decentralized computing infrastructure or process in which computing resources are located between the data source and the cloud or any other data center.

Overview

To achieve real-time automation, data must be captured and analyzed in real time, without the high-latency and low-bandwidth issues that arise when data is processed over the network. Although the cloud provides a scalable and flexible ecosystem for data analytics, communication and security challenges between local assets and the cloud lead to downtime and other risk factors.

To mitigate these risks, fog computing and edge computing were developed. Both bring computing resources closer to data sources, allowing local assets to act on the intelligence in the data they produce without having to communicate with distant computing infrastructure.

Fog computing vs. edge computing

Every new technological concept acquires standards that give users rules and direction for applying it. In the case of edge and fog computing, edge computing refers to bringing compute closer to data sources, while fog computing is a standard that defines how that should operate and be applied across diverse scenarios.

The term fog computing was coined by Cisco, and the standard brings uniformity to how edge computing is applied across industrial niches and activities. This makes the two comparable to two sides of the same coin: they work together to reduce processing latency by bringing compute closer to data sources.

Fog computing and the cloud

Intel estimates that the average automated vehicle produces approximately 40 TB of data for every eight hours of use. At that scale, fog computing infrastructure is generally provisioned to use only the data relevant to specific processes or tasks; large data sets that are not time-critical for the task at hand are pushed to the cloud.

The cloud provides the extended resources needed to store the vast amounts of data that edge devices produce but do not use immediately, as well as additional computing capacity for further analysis, which makes the cloud a complementary ecosystem for fog computing applications.
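The split described above can be illustrated with a short sketch. The following Python example shows the general pattern of a fog node acting on time-critical readings locally while batching everything else for a later bulk upload to the cloud; the reading types, field names, and helper functions are illustrative assumptions, not part of any specific product.

```python
# Sketch of the fog/cloud split: time-critical readings are handled locally,
# everything else is batched for the cloud. Names and thresholds are assumed.
from typing import Iterable

CRITICAL_TYPES = {"brake_status", "obstacle_distance"}  # handled at the fog node
cloud_batch: list[dict] = []                            # deferred for bulk upload


def act_locally(reading: dict) -> None:
    # Placeholder for real-time control logic running on the fog node.
    print(f"acting on {reading['type']}: {reading['value']}")


def flush_to_cloud(batch: Iterable[dict]) -> None:
    # Placeholder for a bulk upload (e.g. an HTTPS POST) to cloud storage.
    print(f"uploading {len(list(batch))} readings to the cloud")


def handle_reading(reading: dict) -> None:
    """Route one sensor reading to local logic or to the cloud batch."""
    if reading.get("type") in CRITICAL_TYPES:
        act_locally(reading)          # low-latency path, stays at the edge
    else:
        cloud_batch.append(reading)   # non-urgent data is pushed to the cloud later


if __name__ == "__main__":
    handle_reading({"type": "obstacle_distance", "value": 1.2})
    handle_reading({"type": "cabin_temperature", "value": 21.5})
    flush_to_cloud(cloud_batch)
```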

How Fog Computing Works

A fog computing framework can have a variety of components and functions depending on its application. It could include computing gateways that accept data from data sources or diverse collection endpoints such as routers and switches connecting assets within a network.

The process of transferring data through fog computing architecture in an IoT environment includes the following steps:

  1. Signals from IoT devices are read by an automation controller.
  2. The controller executes the system program needed to automate the IoT devices.
  3. The control system program sends the data to an OPC server or through another gateway protocol. (OPC, maintained by the OPC Foundation, is the interoperability standard for industrial data exchange.)
  4. The data is converted into a protocol that internet-based services understand, such as MQTT or HTTP(S).
  5. Once converted, the data is sent to a fog node or IoT gateway. These endpoints collect the data for further analysis or transfer the data sets to the cloud for broader use. (A sketch of steps 4 and 5 follows this list.)
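
The sketch below illustrates steps 4 and 5: data already read from the control system is serialized as JSON and published over MQTT to a fog node acting as the IoT gateway. It is a minimal example, assuming the Eclipse paho-mqtt library (1.x client API); the broker hostname, topic, and payload fields are made up for illustration.

```python
# Hedged sketch of steps 4 and 5: serialize controller data and publish it
# over MQTT to a fog node / IoT gateway. Hostname and topic are assumptions.
import json

import paho.mqtt.client as mqtt

FOG_GATEWAY_HOST = "fog-gateway.local"   # assumed hostname of the fog node
TOPIC = "plant/line1/controller42/data"  # assumed topic hierarchy


def publish_to_fog(sample: dict) -> None:
    client = mqtt.Client()                    # paho-mqtt 1.x constructor;
                                              # 2.x requires a CallbackAPIVersion
    client.connect(FOG_GATEWAY_HOST, 1883)    # standard unencrypted MQTT port
    client.loop_start()                       # network loop in a background thread
    info = client.publish(TOPIC, json.dumps(sample), qos=1)
    info.wait_for_publish()                   # block until the broker acknowledges
    client.loop_stop()
    client.disconnect()


if __name__ == "__main__":
    publish_to_fog({"tag": "motor_speed", "value": 1480, "unit": "rpm"})
```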

Examples of Fog Computing

The use of automated guided vehicles (AGVs) on industrial shop floors provides an excellent scenario for explaining how fog computing functions. In this scenario, a real-time geolocation application using MQTT provides the edge compute needed to track each AGV's movement across the shop floor.

The geolocation app works by querying data from the sensors attached to the AGV as it navigates an area. Each sensor maintains a connection with an MQTT broker and notifies it of the AGV's location through periodic MQTT messages as the vehicle moves. These regular updates can then be used for diverse purposes, including tracking the location of inventory or materials being transported across specified zones.
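A minimal sketch of the AGV-side publisher appears below: the onboard sensor publishes its position to an MQTT broker at a fixed interval. The broker address, topic layout, and get_position() helper are hypothetical, and the paho-mqtt 1.x API is assumed.

```python
# Sketch of an AGV publishing its position to a broker on a fog node at a
# fixed interval. Hostname, topic, and the position source are assumptions.
import json
import time

import paho.mqtt.client as mqtt

BROKER = "fog-node.shopfloor.local"   # assumed broker running on a fog node
TOPIC = "agv/agv-07/position"         # assumed per-vehicle topic


def get_position() -> dict:
    # Placeholder for the real positioning sensor on the AGV.
    return {"x": 12.4, "y": 7.9, "zone": "assembly", "ts": time.time()}


def run(publish_interval_s: float = 2.0) -> None:
    client = mqtt.Client()
    client.connect(BROKER, 1883)
    client.loop_start()
    try:
        while True:
            client.publish(TOPIC, json.dumps(get_position()), qos=0)
            time.sleep(publish_interval_s)   # periodic location updates
    finally:
        client.loop_stop()
        client.disconnect()


if __name__ == "__main__":
    run()
```

A tracking application on the fog node can then subscribe to a wildcard topic such as agv/+/position to follow every vehicle on the floor from a single subscription.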

Processing data this close to the edge thus decreases latency and opens up diverse use cases in which fog computing can be used to manage resources. One such example is energy management. Here, a real-time energy consumption application deployed across multiple devices can track the energy consumption rate of each individual device.

Sensors within each device periodically notify the broker of the amount of energy being consumed via MQTT messages. When a device consumes excessive energy, the notification triggers the application to offload some of the overloaded device's tasks to devices consuming less energy.
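
The monitoring side of such an application might look like the sketch below: a fog-hosted subscriber watches each device's energy readings and publishes an offload command when a device crosses a consumption threshold. The topics, threshold value, and payload schema are illustrative assumptions, and the paho-mqtt 1.x callback API is assumed.

```python
# Hedged sketch of the energy-monitoring application on the fog node: watch
# per-device energy readings and request offloading when a threshold is hit.
import json

import paho.mqtt.client as mqtt

BROKER = "fog-node.local"             # assumed broker address
ENERGY_TOPIC = "devices/+/energy"     # '+' matches any single device id
THRESHOLD_WATTS = 450.0               # assumed per-device limit


def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    device_id = msg.topic.split("/")[1]
    if reading["watts"] > THRESHOLD_WATTS:
        # Ask the overloaded device to hand some tasks to a lighter-loaded peer.
        command = {"action": "offload", "reason": "energy_threshold_exceeded"}
        client.publish(f"devices/{device_id}/commands", json.dumps(command), qos=1)


def main() -> None:
    client = mqtt.Client()
    client.on_message = on_message
    client.connect(BROKER, 1883)
    client.subscribe(ENERGY_TOPIC, qos=1)
    client.loop_forever()              # process incoming readings indefinitely


if __name__ == "__main__":
    main()
```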