Learning from the past and understanding the new essentials of network monitoring and management for the present
Learn from the past, live in the present and prepare for the future.
While this may sound like it belongs hanging on a high school guidance counselor’s wall, these are, nonetheless, words to live by, especially in federal IT. And they apply perhaps to no other infrastructure element better than the network. After all, the network has long been a foundational building block of IT. It’s even more important today than it was in the days of SAGE and ARPANET, and it will only grow more important, and more complex, in the future.
For those of us charged with maintaining the network, it’s valuable to take a step back and examine the evolution of the network every once in a while. Doing so helps us take an inventory of lessons learned—or the lessons we should have learned; determine what today’s essentials of monitoring and managing networks are; and finally, turn an eye to the future to begin preparing now for what’s on the horizon.
Learn from the Past
Think back to the time before the luxuries of Wi-Fi, before the proliferation of virtualization, and before today’s cloud computing.
The network used to be defined by a mostly wired, physical entity controlled by routers and switches. Business connections were based on T1 and ISDN, and Internet connectivity was always backhauled through the data center. Each network device was a piece of agency-owned hardware, and applications operated on well-defined ports and protocols.
With this yesteryear in mind, consider the following lessons we all (should) have learned that still apply today:
It Has to Work
Where better to start than with a throwback to IETF RFC 1925, “The Twelve Networking Truths”? It’s just as true today as it was in 1996—if your network doesn’t actually work, then all the fancy hardware is for naught. Anything that impacts the ability of your network to work should be suspect.
The Shortest Distance Between Two Points is Still a Straight Line
Wired or wireless, MPLS, EIGRP or OSPF, your job as a network engineer is still fundamentally to create the conditions where the distance between the provider of information, usually a server, and the consumer of that information, usually a PC, is as near to a straight line as possible. When you forget that and get caught up instead in quality of service maps, disaster recovery and continuity of operations plans, you’ve lost your way.
An Unconfigured Switch is Better than the Wizard
It was a long-standing truth/joke that running the configuration wizard on a switch was the fastest way to break it, whereas just unboxing and plugging it in would work fine. Wizards are a fantastic convenience and come in all forms, but if you don’t know what the wizard is making convenient, you are heading for trouble.
What is Not Explicitly Permitted is Forbidden
No, this policy is not fun and it won’t make you popular. And it will actually create work for you on an ongoing basis. But there is honestly no other way to run your network. If you’re worried that this policy will get you in trouble, the truth is you’re going to get into trouble anyway. If you don’t do your part to make your agency network more secure, knowing that the bad guys are out there trying to do your agency harm, the next huge security breach is on you.
Live in the Present
Now let’s fast forward and consider the network of present day.
Wireless is becoming ubiquitous—it’s even overtaking wired networks in many instances—and the number of devices wirelessly connecting to the network is exploding (think Internet of Things). It doesn’t end there, though—networks are growing in all directions. Some network devices are even virtualized, resulting in a complex amalgam of the physical, the virtual and the Internet. Agency connections have outgrown T1-class services, and increased use of cloud services is stretching Internet capacity at remote sites, not to mention opening security and policy issues since it’s not all backhauled through the data center. BYOD, BYOA, tablets and smartphones, although not allowed by all agencies, are becoming prevalent and creating bandwidth capacity and security issues. Application visibility based on port and protocol is largely impossible due to applications tunneling via HTTP/HTTPS. VoIP is common, also imposing higher demands on network bandwidth, and LTE provides high-quality anywhere connectivity.
Are you nostalgic for the days of networking yore yet? The complexity of today’s networking environment underscores that while lessons of the past are still important, a new set of network monitoring and management essentials is necessary to meet the challenges of today’s network administration head on. These new essentials include:
Network Mapping
While perhaps a bit back-to-basics, and also suitable as a lesson we all should have learned by now, when you consider the complexity of today’s networks and network traffic, network mapping, and the understanding of monitoring and management needs that follows from it, has never been more essential than it is today. Moving ahead without a plan, without knowing the reality on the ground, is a sure way to make the wrong network monitoring choices based on assumptions and guesswork.
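To make the mapping idea concrete, here is a minimal sketch of one small piece of it: reconciling a list of known device addresses against the subnets you believe you manage. The subnet and address values are hypothetical, and a real discovery tool would learn them from SNMP, ARP tables or scanning rather than hard-coded lists.

```python
import ipaddress

def map_address_space(subnets, known_devices):
    """Build a simple inventory: for each managed subnet, which known
    devices fall inside it and how much address space remains."""
    inventory = {}
    for cidr in subnets:
        net = ipaddress.ip_network(cidr)
        members = [ip for ip in known_devices
                   if ipaddress.ip_address(ip) in net]
        inventory[cidr] = {
            "devices": members,
            # usable space: total minus network/broadcast minus devices seen
            "free_addresses": net.num_addresses - 2 - len(members),
        }
    return inventory

# Hypothetical subnets and device addresses for illustration
inv = map_address_space(
    ["10.1.0.0/24", "10.2.0.0/24"],
    ["10.1.0.5", "10.1.0.9", "10.2.0.7"],
)
```

Even a toy inventory like this surfaces the questions mapping is meant to answer: what is actually on each segment, and how much room is left to grow.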
Wireless Network Management
The growth of wireless networks presents new problems, such as ensuring adequate signal strength and keeping the proliferation of devices and their physical mobility—potentially hundreds of thousands of network-connected devices, few of which are stationary and many of which may not be owned by the government (BYOD)—from getting out of hand. What’s needed are tools such as wireless heat maps, user device tracking, and IP address tracking and management.
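The core of user device tracking can be sketched in a few lines: remember which address each device last held, and flag when a device appears for the first time or roams to a new address. The MAC and IP values below are invented for illustration; real trackers feed from DHCP leases, wireless controller data or ARP/NDP caches.

```python
def track_device(leases, mac, ip):
    """Record a device sighting; return an event label describing
    whether the device is new, unchanged, or has moved addresses."""
    previous = leases.get(mac)
    leases[mac] = ip
    if previous is None:
        return "new-device"
    if previous != ip:
        return "moved"
    return "seen"

leases = {}  # MAC address -> last known IP
events = [
    track_device(leases, "aa:bb:cc:00:00:01", "10.1.0.5"),
    track_device(leases, "aa:bb:cc:00:00:01", "10.1.0.5"),
    track_device(leases, "aa:bb:cc:00:00:01", "10.3.0.9"),  # roamed to another subnet
]
```

The “moved” events are exactly what mobility makes common on wireless networks, and what makes static spreadsheets of IP assignments obsolete.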
Surviving the Internet of Things
When it comes to surviving the Internet of Things, you first must understand that all of the “things” connect to the cloud. Because they’re not coordinating with a controller on the LAN, each device incurs a full conversation load, burdening the WAN and every element in the network. And worse, many of these devices prefer IPv6, meaning you’ll have more pressure to dual-stack all of those components. Application firewalls can untangle device conversations, get IP address management under control and help prepare for IPv6. They can also classify and segment device traffic; implement effective quality of service to ensure that critical business traffic has headroom; and of course, monitor flow.
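A rough sketch of the classify-and-segment idea: map each flow’s source address to a traffic segment and priority class, and note when a device is speaking IPv6 and will need dual-stack support. The segment subnets and priority labels here are hypothetical stand-ins for whatever DSCP/QoS scheme an agency actually runs.

```python
import ipaddress

# Hypothetical segmentation plan: source subnet -> (network, QoS class)
SEGMENTS = {
    "business": (ipaddress.ip_network("10.10.0.0/16"), "assured"),
    "iot":      (ipaddress.ip_network("10.20.0.0/16"), "best-effort"),
    "voice":    (ipaddress.ip_network("10.30.0.0/16"), "expedited"),
}

def classify_flow(src_ip):
    """Return (segment, priority, needs_dual_stack) for a source address."""
    addr = ipaddress.ip_address(src_ip)
    needs_dual_stack = addr.version == 6  # IPv6-preferring IoT devices
    for name, (net, priority) in SEGMENTS.items():
        if addr.version == net.version and addr in net:
            return name, priority, needs_dual_stack
    return "unclassified", "best-effort", needs_dual_stack
```

Keeping IoT chatter in a best-effort class while voice gets expedited forwarding is one simple way to guarantee headroom for critical traffic.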
Capacity Planning
Everyone knows that the government isn’t growing these days, but it would be foolish to think that today’s users and applications have static needs. You need to integrate capacity forecasting tools, configuration management and web-based reporting to be able to predict scale and demand requirements. There’s the oft-quoted statistic that 70 percent of network outages come from unexpected network configuration changes. Admins have to avoid the Jurassic Park effect: outages that were unexpected, but in hindsight clearly predictable, are the bane of any IT manager’s existence. “How did we not know and respond to this?” is a question nobody wants to have to answer.
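The simplest form of capacity forecasting is a trend line: fit daily utilization samples and project when the trend crosses link capacity. This is a deliberately minimal sketch (ordinary least squares over evenly spaced days, invented sample numbers); production tools account for seasonality and burstiness.

```python
def days_until_saturation(samples, capacity_mbps):
    """Fit a least-squares line to daily utilization samples and
    estimate how many days from the first sample until the trend
    crosses link capacity. Returns None if demand isn't growing."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # flat or shrinking demand: no projected saturation
    return (capacity_mbps - intercept) / slope

# Hypothetical daily peak utilization (Mbps) on a 100 Mbps link
eta = days_until_saturation([40, 45, 50, 55], 100)
```

Growing 5 Mbps a day from a 40 Mbps baseline, a 100 Mbps link saturates in 12 days: exactly the kind of “predictable in hindsight” outage a forecast is meant to catch early.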
Application Performance Insight
Many network engineers have complained that the network would be stable if it weren’t for the end users. While it’s an amusing thought, it ignores the universal truth of IT—everything we do is because of and for end users. The whole point of having a network is to run the applications stakeholders need to do their jobs. Face it, applications are king. Technologies such as deep packet inspection, or packet-level analysis, can help you ensure the network is not the source of application performance problems.
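One way packet-level analysis answers “is it the network or the application?” is by decomposing user wait time: the TCP handshake round trip approximates network latency, and whatever remains of the wait for the first response byte is attributed to the application. A simplified sketch with hypothetical timestamps (seconds):

```python
def decompose_response_time(syn_ts, synack_ts, request_ts, first_byte_ts):
    """Split user-perceived delay into network vs. application time.
    The SYN/SYN-ACK gap approximates network round-trip time; the
    remainder of the request-to-first-byte wait is charged to the app."""
    network_rtt = synack_ts - syn_ts
    total_wait = first_byte_ts - request_ts
    app_time = max(total_wait - network_rtt, 0.0)
    return {"network_ms": network_rtt * 1000,
            "app_ms": app_time * 1000}

# Hypothetical capture: 20 ms handshake RTT, 120 ms request-to-first-byte
timings = decompose_response_time(0.000, 0.020, 0.050, 0.170)
```

If app_ms dwarfs network_ms, the slowdown lives in the server stack, and the network team has the evidence to prove it.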
Prepare for the Future
Now that we’ve covered the evolution of the network from past to present—and identified lessons we can learn from the network of yesterday and what the new essentials of monitoring and managing today’s network are—we can prepare for the future. So, stay tuned for part two in this series to explore what the future holds for the evolution of the network.