
Interesting Links for November 26th


Links that I have found interesting for November 26th:

  • Generating events in WebSphere Message Broker for transaction monitoring and auditing – This article shows you how to configure and generate monitoring events in a WebSphere Message Broker message flow. Monitoring events are very useful built-in features for transaction monitoring and auditing, and this article describes them in detail.
  • Data Center Asset Management, Inventory Control and Configuration Management Database Software for your IT data Center – Data Center Audit (DCA) is an Open Source web application designed for inventory control and tracking of IT data center hardware.
    DCA is specifically targeted at administrators of small to medium-sized data centers because DCA's strength is in its simplicity, effectiveness, and ease of use.

    No Agents! No Device Probing!
    Web based. Access anywhere
    Single view of equipment details
    View available or in-use systems
    View end-to-end connections
    Reserve (check out) elements
    Visual detailed view of each rack
    Export views into text CSVs
    Detailed log of device activity

  • Why Monolith? Things Change | A Monitoring Odyssey… with Monolith Software – ** Go go Bill the Blogger! 🙂 **

    I recently joined Monolith, and since doing so I have received numerous calls and emails asking me: what is Monolith, and why did you go to Monolith?

    My experience in the Infrastructure management space has been at Micromuse, Voyence, and EMC. Each firm had a “best in breed” tool which was appropriate for the infrastructure management market at that point in time, but as David Mamet once titled a great movie, “Things Change”.

    Over the last several years, I have had customers consistently asking me for end-to-end views of their infrastructure. In the past, the answer was to wheel in multiple products, and then propose a services engagement to stitch them together. In reality, we recommended they add more software, hardware, and administrators, effectively increasing the cost and complexity to address a simple requirement – end-to-end management.

  • Integrien Claims Record Growth – Irvine-based Integrien, a developer of IT analytics and performance measurement software, reported today that it had "record" quarterly bookings in Q3 of 2009. The privately held firm said its bookings were up 238% year-over-year. Actual financials were not disclosed by the company. The firm said it closed the largest deal in its history in the quarter from a financial firm.
  • PDQ: Pretty Damn Quick – PDQ (Pretty Damn Quick) is open source software associated with the books Analyzing Computer System Performance with Perl::PDQ (Springer 2005) and The Practical Performance Analyst (McGraw-Hill 1998, iUniverse.com Press 2000). The PDQ software package may be downloaded freely from this web site whether or not you own a copy of the book. PDQ uses queue-theoretic paradigms to represent all kinds of computer systems.
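
    For a flavor of the queue-theoretic style of modelling that PDQ automates, here is a minimal sketch of an open M/M/1 queue in plain Python (my own illustration, not PDQ's API; the arrival rate and service demand are made-up numbers):

        # Open M/M/1 queue: the basic building block of queueing models like PDQ's.
        arrival_rate = 0.75        # requests per second (illustrative)
        service_time = 1.0         # seconds of service demand per request (illustrative)

        utilization = arrival_rate * service_time             # rho = lambda * S
        assert utilization < 1.0, "an open queue must not be saturated"

        residence_time = service_time / (1.0 - utilization)   # R = S / (1 - rho)
        queue_length = utilization / (1.0 - utilization)       # N = rho / (1 - rho), by Little's law

        print(f"utilization    : {utilization:.2f}")
        print(f"residence time : {residence_time:.2f} s")
        print(f"queue length   : {queue_length:.2f} requests")
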
  • Nastel BTM Webinar Replay – In the blindingly fast world of financial services, the difference between gains and losses is often measured in microseconds. Financial services firms often stay on the bleeding edge of hardware and software in order to ensure ultra-low latency in their transactions. Transaction volumes are increasing, and as they do, so does latency. Increased competition also necessitates moving towards an ultra-low-latency strategy.

    But…

    Normal accident theory suggests that in complex, tightly coupled systems (like trading floors), accidents causing latency are inevitable. To defend against "normal" latency, you need visibility – 360° situational awareness of your trading environment.

    Please join Charley Rich, VP of Marketing & Product Management at Nastel Technologies, and featured guest Ellen Carney, Senior Analyst at Forrester Research, as they discuss these issues and how to identify and resolve them.

  • Correlating Events to Recognize Problems | Heroix Blog – What’s The Problem?

    Events can be misleading. Consider an example where several servers are behind a switch. We’ll further assume that we are monitoring the availability of the switch and the servers. When the switch goes down, what happens? A flood of notifications is sent alerting everyone that all the servers are down, which is effectively true, but isn’t really the problem. Of course, eventually the switch-down alert comes in along with all the server-down messages. This is a simple example, where most good engineers will immediately diagnose the problem when they read the switch-down alert, but a lot of messages were sent just to convey the true problem. I always cringe when I know my boss is getting flooded with email that the sky is falling. Now, what if we use some logic in our notification that only sends out server-down messages when the switch is OK, and suppresses all the server-down messages when the switch goes down? That would be useful.
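
    A minimal sketch of that suppression logic (my own illustration, not Heroix's implementation), assuming a hypothetical topology map of which servers sit behind which switch:

        # Hypothetical topology: these servers are reachable only through this switch.
        TOPOLOGY = {"switch-01": ["server-01", "server-02", "server-03"]}

        def notifications(events, topology=TOPOLOGY):
            """Return the alerts worth sending: suppress server-down messages
            when the switch in front of those servers is itself down."""
            down = {name for name, state in events.items() if state == "down"}
            alerts = []
            for switch, servers in topology.items():
                if switch in down:
                    alerts.append(f"{switch} is down ({len(servers)} servers unreachable)")
                    continue  # swallow the dependent server-down messages
                alerts.extend(f"{server} is down" for server in servers if server in down)
            return alerts

        # Switch outage: one alert instead of four.
        print(notifications({"switch-01": "down", "server-01": "down",
                             "server-02": "down", "server-03": "down"}))
        # Isolated server failure: the server-down alert still gets through.
        print(notifications({"switch-01": "up", "server-02": "down"}))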

  • The keys to Effective SLAs | Heroix Blog – Service Level Agreements are usually the object of desire, fear, and uncertainty all at the same time. They can be such useful tools that it’s important to demystify them. SLAs are desirable because they provide accountability and timely feedback to managers. They are to be feared when they include factors beyond control or that are poorly aligned with reality. SLAs are commonly approached with a high degree of uncertainty about what to measure and how to report results as an effective tool for all parties. While the ingredients in SLAs are as varied as applications and service providers, all effective SLAs share a few critical characteristics.

    Good and Bad SLAs

    Let’s start by poking fun at what will be the worst example of an SLA you’ve ever heard of or that I’ve been a party to implementing. I should point out this happened long before I became part of the Heroix team.

  • The HP Universal CMDB SPARQL Adapter – ITSM is about making IT accountable to the business. IT and business functions meet at the service interface, where business functions define themselves in terms of the services they deliver to their customers. The IT function, in turn, must provide the infrastructural capabilities and resources necessary to support them. ITIL proposes that the IT configuration should be explicitly modeled within a Configuration Management System (CMS). The HP Universal Configuration Management Database (UCMDB) is one component of a CMS, maintaining a comprehensive and up-to-date snapshot of all managed assets and their inter-relationships across the IT environment. The UCMDB is not particularly web-friendly; there is no easy way to access configuration data using a conventional browser. Another drawback is that configuration records do not correspond directly to the language of business. This report addresses the first of these issues: making the IT configuration navigable on the web.
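
    To give a flavor of what navigable configuration data can look like, here is a generic sketch that exposes a few invented configuration items as RDF and queries them with SPARQL via rdflib (this is not the HP adapter itself; the namespace and properties are illustrative):

        from rdflib import Graph, Literal, Namespace, RDF

        CMDB = Namespace("http://example.org/cmdb#")   # invented namespace for the sketch

        g = Graph()
        g.add((CMDB.app01, RDF.type, CMDB.Application))
        g.add((CMDB.app01, CMDB.runsOn, CMDB.host42))
        g.add((CMDB.host42, RDF.type, CMDB.Server))
        g.add((CMDB.host42, CMDB.locatedIn, Literal("Rack 7")))

        # "Which server does each application depend on, and where does it live?"
        query = """
            PREFIX cmdb: <http://example.org/cmdb#>
            SELECT ?app ?server ?rack WHERE {
                ?app    cmdb:runsOn    ?server .
                ?server cmdb:locatedIn ?rack .
            }
        """
        for app, server, rack in g.query(query):
            print(app, server, rack)
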
  • After 20 years, event and fault management gets a makeover and dramatically breaks away from ALL current market offerings – The old-fashioned way was to track, locate, and fix faults on the infrastructure. But this meant that faults had already found their way in and required manpower to sort out.

    Without a radical departure from legacy tools, there is the threat that, as systems become more sophisticated and dynamic, more and more potentially arresting demands will be placed on the infrastructure. For those relying on out-dated management tools this can lead to unprocessed events and missed alerts resulting in blind spots and silent failures that can cost a business millions…

    The new approach: Prevention is better than cure

    Event trending is clearly the path to detecting standard operating behavioral anomalies and is the most effective way to address potential faults and problems.

    Business Logic + Configuration Management = Effective Event and Fault Management

    RiverMuse Core is differentiated by its super-agility, yet still emulates the functionality found in popular legacy fault management systems.
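
    A bare-bones sketch of what event trending can mean in practice (my own illustration, not RiverMuse code): compare each minute's event count against a rolling baseline and flag the minutes that drift well above it.

        from collections import deque
        from statistics import mean, stdev

        def trending_alerts(event_counts_per_minute, window=30, threshold=3.0):
            """Flag minutes whose event count sits more than `threshold`
            standard deviations above the rolling baseline."""
            history = deque(maxlen=window)
            alerts = []
            for minute, count in enumerate(event_counts_per_minute):
                if len(history) >= 5:  # wait for a little history before judging
                    baseline, spread = mean(history), stdev(history)
                    if spread and count > baseline + threshold * spread:
                        alerts.append((minute, count, round(baseline, 1)))
                history.append(count)
            return alerts

        # A mostly quiet infrastructure with an event storm at minute 12.
        counts = [4, 5, 3, 6, 5, 4, 5, 6, 4, 5, 5, 4, 60, 5, 4]
        print(trending_alerts(counts))   # -> [(12, 60, 4.7)]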

  • RiverMuse appoints President and CEO Jean-Luc Valente – RiverMuse appoints Jean-Luc Valente as President and CEO. This announcement coincides with the company's roll out of next generation event management software. RiverMuse is the first – and only – commercial open source company of its kind in the event and fault management field and it is on course to set a new industry benchmark.
  • Mariner to Participate in Cisco’s TelcoTV Workshop – Mariner will demonstrate its xVu suite of IPTV service monitoring tools. These tools bring complete end-to-end monitoring of the access and home network to Cisco's IPTV solution by leveraging the innovative Visual Quality of Experience (VQE) platform. The collaboration follows the success of Mariner's full integration of xVu with Cisco's VQE platform.
  • RiverMuse – Formidable forces move in at RiverMuse with new board level appointments – 18th November 2009: Today, RiverMuse welcomes Rich Green to its Board of Directors as well as Con Blackett and Matt Asay to the company’s Advisory Board. In their respective roles they will provide counsel to the RiverMuse leadership team, advise on strategic initiatives according to their relative expertise, as well as assist in forging new industry relations. This elite group adds deep industry knowledge and leadership to RiverMuse and complements its premier line up.

    JL Valente, CEO and President at RiverMuse, said: “For a company aiming as high as we are it was imperative to attract a crack team of world class business and technology leaders. I have no doubt that these three industry heavyweights, with their collective experience, will enrich RiverMuse and help propel us to the next level.”

  • Dead Man Walking (CIC) | A Monitoring Odyssey – One of the most interesting observations for me during our trip related to CIC. For those of you unfamiliar with CIC, it is the Cisco OEM of Micromuse Netcool; Cisco called it Cisco Info Center (CIC). Cisco started the OEM relationship with Micromuse back in December 1997, and initially the partnership had a great deal of focus from Cisco. They were not simply going to put their bridge logo on the interface, but were committed to actually adding their own value to the product. They built a development team to put (or try to put) their influences into the product: the object server was called the info server; the probes were called mediators. I recall one of the areas being their “real time trapd mediator”. Cisco was adding sequence numbers to traps to validate that none were missed.
  • Knoa Delivers New End-User Experience Monitoring Solution for Virtualization, Cloud, SaaS – Key capabilities include:
    – Dynamic Benchmarking, which enables the IT organization to compare system performance before and after each change in the back-end infrastructure.
    – Comprehensive Threshold Alerting, which allows IT organizations to create and manage alerts based upon established Service Level Agreements (SLAs).
    – Dynamic Base-lining, which allows the IT organization to monitor when any performance metric (response time, quality or utilization) varies from short- or long-term trends. Dynamic base-lining directly attacks the difficult issue of ensuring minimal performance degradation for the thousands of transactions for which meaningful SLA thresholds have not yet been set.
    – Advanced Root Cause Analysis, which allows the IT Operations team to evaluate the impact of end-user behavior and desktop resources and conditions on any performance anomaly. This is available as an additional capability.
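
    A loose illustration of the dynamic base-lining idea (my own sketch, not Knoa's algorithm): compare a short-term moving average of a metric against its long-term trend and report samples where the two diverge by more than a tolerance.

        def baseline_deviations(samples, short=5, long=60, tolerance=0.25):
            """Yield (index, short_avg, long_avg) whenever the short-term average
            of a metric drifts more than `tolerance` (25%) away from its long-term trend."""
            for i in range(long, len(samples) + 1):
                long_avg = sum(samples[i - long:i]) / long
                short_avg = sum(samples[i - short:i]) / short
                if long_avg and abs(short_avg - long_avg) / long_avg > tolerance:
                    yield i - 1, round(short_avg, 2), round(long_avg, 2)

        # Response times (ms) that creep upward after a back-end change.
        response_times = [200] * 60 + [205, 210, 260, 270, 280, 290, 300]
        for idx, short_avg, long_avg in baseline_deviations(response_times):
            print(f"sample {idx}: short-term {short_avg} ms vs baseline {long_avg} ms")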
