And Back to the Edge We Go…

By Irwin Lazar
On Jan 21, 2014

Anyone who has been in the IT industry for more than a few years probably understands the cyclical nature of computing. Early computing was based on the idea of dumb endpoints interfacing with a centralized mainframe. PCs replaced that paradigm by decentralizing computing out to the endpoints to achieve speed and scalability not feasible in a mainframe environment. In recent years, security, cost, and ubiquitous access requirements have driven computing away from the endpoints and back into a core (or "cloud"). Now comes the latest swing of the pendulum: "edge" computing.

As defined on Wikipedia, edge computing pushes processing away from the core and out to the locations where data access and manipulation occur. The argument for the return of this decentralized model is that the rapid rise of Internet-connected devices will soon overwhelm the ability of core-based models to acquire and process data. So by shifting analysis and processing out to the edge, one can save both bandwidth and core computing cycles for the really important stuff. Sound familiar?

Cisco is pushing its own vision of edge computing, which it calls "fog" computing, not surprising given Cisco's push around the "Internet of Everything." IBM's recent cloud announcement is predicated on a need to push computing outward as well. In each of these approaches, edge computing is employed to enable local analysis and data aggregation, reducing the need to transmit data to the core.

Expect rapid development of additional use cases for edge computing. In the UC space, edge computing could enable more scalable peer-to-peer communications sessions for multiparty audio, video, and web chats; replication of shared or streaming video and other forms of content; and localized processing of analytics related to performance, network usage, and security controls. Keep an eye on the development of edge computing as it relates to your own technology focus areas.
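To make the bandwidth argument concrete, here is a minimal, hypothetical sketch of edge-side aggregation: rather than forwarding every raw sensor reading to the core, an edge node summarizes a batch locally and transmits only the compact summary. The function and field names are illustrative assumptions, not part of any vendor's actual fog or edge API.

```python
# Hypothetical edge-node aggregation sketch: summarize raw readings locally
# so only a small aggregate record travels upstream to the core.
from statistics import mean


def aggregate_readings(readings):
    """Reduce a batch of raw numeric readings to a compact summary.

    The four-field summary is what would be sent to the core instead of
    the full batch, trading core-side detail for bandwidth savings.
    """
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": mean(readings),
    }


# Example: a batch of temperature samples stays at the edge; only the
# summary dictionary would cross the network.
raw = [20.1, 20.3, 19.8, 20.5, 20.0]
summary = aggregate_readings(raw)
print(summary)
```

The same pattern generalizes to the UC analytics cases mentioned above: performance or usage statistics can be rolled up at the edge and reported periodically, rather than streaming every data point back to a central collector.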