Now that you’ve identified the sources of what’s important within your environment and crafted that data and information into messages that prompt action and decision-making, it’s time to think about getting that data and information into a manageable format for processing and visualization.
I’ve discussed what events are and shared some initial thoughts on building events for BSM here, and included references to complex event processing (CEP), event-driven architecture (EDA) and event stream processing (ESP) here. I still plan to dive deep into the topic of building events and the ideas behind the Common Base Event and Common Event Format. I also want to introduce the Event Data Dictionary / Event Catalog, which is useful for capturing information about what events exist in your environment and why. Every event you generate should serve a purpose. Nothing will turn your NOC or IT support group against you faster than collecting data and generating events just for the sake of doing so because they happen to be available via an SNMP MIB or agent. They don’t need any more “noise” to deal with during the day.
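To make the Event Catalog idea a little more concrete, here’s a minimal sketch of what a single catalog entry might capture. The field names are purely illustrative assumptions on my part, not part of any formal standard:

```python
# A minimal, hypothetical Event Catalog entry. Field names are
# illustrative; adapt them to your own environment and standards.
catalog_entry = {
    "event_type": "CRM_CUSTOMER_COUNT_UPDATE",  # unique event identifier
    "metric_id": "A1-2",                        # links back to the Metrics Catalog
    "source": "CRM System",                     # system that emits the event
    "purpose": "Daily customer count for the sales dashboard",
    "owner": "CRM application support group",   # who to call with questions
    "update_frequency": "daily",                # how often the event is expected
}
```

An entry like this answers the “why does this event exist?” question before anyone in the NOC has to ask it.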
There may be many ways to incorporate this data and information into the messages you’re planning to communicate. The approaches and their ease of use will depend entirely on the tools, applications and solutions you’re using. You may be able to establish direct connections with the data source, perform screen scrapes, import spreadsheets, or even run queries against the source. This series of articles has generally assumed that you have the ability within your environment to generate events. Generating events usually involves some form of instrumentation, collection and evaluation against a threshold, state, rule, etc.
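As a simple illustration of that collect-and-evaluate pattern, here’s a hedged Python sketch. The `read_metric()` and `send_event()` functions are placeholders for whatever collection and event-forwarding mechanisms your environment actually provides:

```python
import random
import time

THRESHOLD = 95.0  # e.g., disk utilization percent; pick a value the business cares about

def read_metric() -> float:
    # Placeholder: in practice this would query SNMP, an agent API,
    # a parsed file, or a database, whatever the source actually is.
    return random.uniform(80.0, 100.0)  # simulated value for the sketch

def send_event(event: dict) -> None:
    # Placeholder: hand the event off to your event management system.
    print("EVENT:", event)

for _ in range(3):  # production code would run on a schedule or continuously
    value = read_metric()
    if value > THRESHOLD:
        # Only threshold-crossing conditions become events; no extra noise.
        send_event({"metric_value": round(value, 1), "threshold": THRESHOLD})
    time.sleep(1)
```

The key design point is the `if` statement: collection happens continuously, but an event is only generated when the rule is actually violated.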
What I want to talk about here is instrumenting those sources of important information, data and metrics you identified as you completed your Metrics Catalog. Some of these sources may be outside the comfort zone or capabilities of the average IT Operations group accustomed to working with SNMP and server and application monitoring agents.
Since you’ve identified the source of the important information or data, how frequently it gets updated, and how to access it, you’re halfway there. The next task is to identify the person(s) or group(s) responsible for that information source. This may be the owner, administrator or support group for the application, tool, file, spreadsheet, database, server, etc. that produces, evaluates, communicates or makes available that information or data. The task here is to establish the business need with the owner so that the source can be instrumented and the important data or information provided in a way that can be easily processed upstream.
Once you’ve established the business need, you can have a discussion about the best way to instrument the information source and generate those events. Discuss the various tools in your event-generating arsenal with the owner and their technical staff. Cover the usual EMS/NMS/OSS/BSS solutions and their capabilities for collecting information and generating events. Discuss more generic approaches such as log files (application, system, etc.), scripts, and XML/SOAP/web services. Scripts can be written to parse logs or collect other information from applications, GUIs, command lines, etc. and pass it off to an event generation function, as shown in the sketch below. If you’ve been able to consolidate information into a database or corporate data warehouse, consider leveraging database triggers and stored procedures to collect, format and generate an event. There are certainly more sophisticated methods available if your organization leverages an EAI or ESB technology. Just keep in mind that the goal is to keep it simple, efficient and effective. You don’t want to be blamed for causing a performance slowdown or outage to that important business application!
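Here’s one way such a log-parsing script might look in Python. The log format, the file path, and the `send_event()` hand-off are all assumptions made for illustration; substitute whatever your application and event management system actually use:

```python
import re

# Hypothetical log format: "2024-01-15 02:10:33 ERROR OrderService timeout calling billing API"
LINE_PATTERN = re.compile(
    r"^(?P<timestamp>\S+ \S+) (?P<severity>ERROR|WARN) (?P<component>\S+) (?P<message>.*)$"
)

def send_event(event: dict) -> None:
    # Placeholder: forward via SNMP trap, EMS API call, message queue, etc.
    print("EVENT:", event)

def parse_log(path: str) -> None:
    with open(path) as log:
        for line in log:
            match = LINE_PATTERN.match(line)
            if match:
                # Only lines matching the pattern become events;
                # everything else is ignored to keep the noise down.
                send_event(match.groupdict())

if __name__ == "__main__":
    parse_log("/var/log/orderservice/app.log")  # hypothetical path
```

Notice that the script deliberately drops anything that doesn’t match the pattern, which keeps it aligned with the “no extra noise” principle above.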
You’ll want to map the events you’re generating into the appropriate format for the internal systems that will process them. Be sure to capture the relationship between these event types and their purpose for communicating an important metric, KPI/KPM, etc. At a minimum, one of the fields in the event format should be the Metric ID from the Metrics Catalog; this will be critical in linking the events to their purpose. The more thought and planning you put into how you build these events, the better. Consider using an enumeration schema to capture information that can be parsed and evaluated later by other solutions such as dashboards, BSM, BAM, BPM, rules or workflow solutions. An example might be populating a field in an event with a value like “A1-2-3”, which could represent Metric Source = A1 (CRM System), Metric ID = 2 (Customer Count) and Metric Update = 3 (Daily). The sky’s the limit here, but do consider the impact these choices may have on your internal event processing solutions or those that will need to parse and evaluate the enumeration schema you create.
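To show how a downstream tool might consume such an enumeration, here’s a minimal decode sketch using the “A1-2-3” example above. The lookup tables are obviously hypothetical; in practice they would be maintained in your Metrics Catalog / Event Data Dictionary rather than hard-coded:

```python
# Hypothetical lookup tables; in practice these come from the
# Metrics Catalog / Event Data Dictionary, not hard-coded values.
METRIC_SOURCES = {"A1": "CRM System"}
METRIC_IDS = {"2": "Customer Count"}
METRIC_UPDATES = {"3": "Daily"}

def decode_enumeration(code: str) -> dict:
    """Split an 'A1-2-3' style field into its named parts."""
    source, metric_id, update = code.split("-")
    return {
        "metric_source": METRIC_SOURCES.get(source, source),
        "metric_id": METRIC_IDS.get(metric_id, metric_id),
        "metric_update": METRIC_UPDATES.get(update, update),
    }

print(decode_enumeration("A1-2-3"))
# -> {'metric_source': 'CRM System', 'metric_id': 'Customer Count', 'metric_update': 'Daily'}
```

The nice property of this approach is that the event payload stays compact while the meaning lives in one shared, versioned lookup, exactly the job the Metrics Catalog is meant to do.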
Spend some time testing and evaluating the effectiveness of the new instrumentation. Follow up with the owners you identified and with the business to make sure that the data, information, metrics, etc. you’re now collecting pass their “sniff tests”. They’ll have a fairly good understanding of what’s good or bad; they always seem to have a sixth sense about this. If you get the sense that this information isn’t accurate, useful or otherwise exciting to them, immediately start to evaluate why and do whatever you can to remedy it. You absolutely do not want to be presenting bad information later!
Now that we’ve got these important bits of data, information, metrics, etc. being collected and processed by our internal systems and tools automatically, it’s time to think about visualizing our message effectively for our various audiences. Stay tuned for that topic in “You’ve Got Events, Now What? Part V: Visualizing the Message”.
Catch up with the “You’ve Got Events, Now What?” series here.