What exactly is edge computing? Just as when cloud computing was introduced, there is a great deal of confusion surrounding the term. Some industry experts are touting edge computing as the “next IT transformation”, while others have ranked it as one of the top 10 strategic technology trends for 2019.
What Is Edge Computing?
Although “edge computing” might sound like an exciting new IT technology, it is basically a network topology (structure/arrangement). Like any other topology, edge computing describes how the elements in this type of network are arranged and connected to each other. In this case, the main elements of interest are:

1. The edge devices that generate data
2. The data-processing components that analyze that data
What sets edge computing apart is where the data-processing components, point 2 above, are located. In most other network configurations, these components are in a central location, the datacentre. They receive data from various devices, then process it. In an edge computing topology, some or all of the data-processing components are located near the edge devices. In other words, some or all of the data generated by edge devices is processed locally, not centrally.
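The difference in where processing happens can be made concrete with a small sketch. The example below, in which all names (`summarize`, `forward_to_datacentre`) are illustrative rather than any real API, shows an edge node reducing a batch of raw sensor readings to a compact summary locally, so that only the processed result ever crosses the network:

```python
import statistics

def summarize(readings):
    """Reduce a batch of raw readings to a small summary, locally on the edge."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
    }

def forward_to_datacentre(summary):
    # Stand-in for a network call; a real edge node would transmit
    # this summary to the central datacentre instead of printing it.
    print(f"sending summary: {summary}")

# The edge node processes the raw data itself...
raw = [21.5, 21.7, 22.1, 21.9]
summary = summarize(raw)
# ...and only the small summary is sent centrally, not the raw readings.
forward_to_datacentre(summary)
```

In a purely central topology, the full `raw` list would be shipped over the network before any processing; here only three numbers leave the device.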
Having the data from edge devices processed locally provides several advantages, including lower latency (the data does not have to travel to a central datacentre and back), reduced bandwidth usage, and the ability to keep operating when the connection to the datacentre is slow or down.
Perhaps the best advantage is that edge computing is not an “all or nothing” proposition. It can be incorporated into traditional network environments so that some data is processed on the edge while other data is handled centrally.
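A hybrid arrangement like this amounts to a routing decision per record. The sketch below is a minimal illustration of that idea, assuming a hypothetical rule in which routine telemetry is handled on the edge while records flagged for audit are forwarded raw to the central pipeline (the `"audit"` flag is invented for the example):

```python
def route(record):
    """Decide where a single record should be processed."""
    return "central" if record.get("audit") else "edge"

records = [
    {"temp": 21.8},
    {"temp": 40.2, "audit": True},
    {"temp": 22.0},
]

# Partition the batch according to the routing rule.
local = [r for r in records if route(r) == "edge"]
central = [r for r in records if route(r) == "central"]

print(len(local), "processed locally,", len(central), "sent centrally")
# → 2 processed locally, 1 sent centrally
```

The routing criterion itself is a design choice: it could just as easily be data volume, sensitivity, or regulatory requirements rather than an audit flag.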
While there are many benefits to implementing edge computing, it’s not without challenges. For starters, the edge devices and local data-processing components need to be set up and regularly maintained. Having to update the software on numerous data-processing components, for instance, can be time-consuming.
Equally important, the edge devices and local data-processing components need to be secured. Edge devices are particularly vulnerable: as security researchers have noted, IoT-ready devices often ship with weaknesses such as default passwords that are easy to crack and firmware updates that are easy to spoof.
Finally, companies still need to put systems in place to send the analyzed data to a central repository, assuming they want or need to keep that data. At the very least, they will need a reporting system.