
I’m always skeptical of what I see in a demo until I can dig into what’s under the covers, but what I saw in the Integrien Alive demo impressed me. It looks like a solid foundation for Business Service Management (BSM) going forward, with Integrien focusing on key areas such as dashboard visualization, modeling, and alignment to business services and applications.

It looks like Integrien competes most directly with Netuitive and the former ProactiveNet (now part of BMC), and to some degree with FireScope and Managed Objects.

Effective, trusted, value-oriented Business Service Management absolutely depends on an accurate data stream, whether it be events, metrics, KPIs, etc. Relying on the default out-of-the-box configurations and thresholds of your monitoring tools, combined with poor monitoring and event management lifecycles, has led to the development of solutions such as Integrien’s that let you “take back control” and give you back trusted insight into the IT infrastructure.

I’d love to see or hear more about Integrien’s technology. Does anyone have any first-hand experience? IMO, we have a gap in the IBM Tivoli portfolio in this technology and capability area.


  • jkowall

    I have looked at all of the vendors you speak of in this space:

    Alive, ProactiveNet, and Netuitive.

    I also looked at FireScope, but not alongside this group of products. These products are specifically aimed at predictive failure detection and complex analysis in order to give you smart rules.

    Out of the three, I would rank Netuitive the best. The issue with any of these is that you either need to use their agents for best results, or have a specific configuration of whatever other agent you run on the hosts to gather the data the tools need to process.

    Hope this is helpful.

  • Dan Rogers

    We also evaluated all three solutions. Here is my take: you don’t need agents with ProactiveNet Analytics; it is up to you to choose agent-based or agentless data collection. It was our final choice. The biggest issue with Netuitive is that it only offers device-level analytics and is not able to correlate data from all enterprise components to provide end-to-end early warning, analytics, and alarming. As for Integrien, while they are working on developing a scientific approach to application performance monitoring, their technology is still underdeveloped and the product lacks maturity and scalability.

  • Thanks for the feedback Jonah and Dan. How do you think this space will evolve? Other players? M&A targets?

    I know Netuitive’s pretty happy with their two big wins here in the SE with AT&T/Cingular and Wachovia. What BMC’s talking about with the ProactiveNet stuff is interesting.

    Could you quantify the value or ROI that your choice brought to the organization? Are the Operations teams’ lives/jobs any better now that you have it, or is it just another tool with a pretty picture?

    Tks,

    Doug

  • hamoskin

    Having worked at ProactiveNet, and having friends who worked at Netuitive, it is easy to see the benefits of all three solutions. Integrien recently released their 6.0 product, which puts the other two to shame IMAO. The Web 2.0 front end is the most flexible of the three products and allows users to customize their environment very quickly with a few drag-and-drops. What they used to lack in scalability has been made up for in this latest release, as they can now support up to 100,000 devices or 4 million attributes; I don’t think either of the other two can claim to do this on a single server. Integrien Alive is also agentless, as it focuses its core value on the analytics rather than on data collection. The openness of the technology allows quick additions for technologies that are not yet supported today, and a rich set of APIs makes the product very easy to adapt when the thought of spending services dollars makes you cringe. Where Alive stands above the rest of the pack is in its predictive analytics: it finds issues, lets users quickly understand what the problem is, and then alerts them before it becomes a problem the next time around. Worth a look if you feel you’re drowning in data and you need something to make sense of that data.

  • Hamoskin,

    Thanks for the great feedback! Do you currently work for Integrien? What can you share about the analytics approaches here? Patents? How about what’s required to get this stuff to work? How does it collect information? What about clients who don’t know how to deploy or manage those domain-specific tools, so all you have is a “garbage in, garbage out” scenario?

    I agree about the front end; it looked very nice in the demo. I think that if they develop the product in a few more areas, they may have a pretty compelling BSM solution.

    Doug

  • Doug,

    As an Integrien employee, I’m happy to answer some of your questions…

    As far as the analytics approach, Integrien Alive uses a combination of statistical approaches to learn the normal behavior of every metric being measured in a given IT environment. Sophisticated correlation and problem modeling are also employed to reduce the total number of alerts, to pinpoint problem-solving efforts to reduce MTTI/MTTR and, ultimately, to provide proactive alerts that allow IT staff to solve problems before they occur. Our latest version of Alive includes two new correlation algorithms and three new dynamic thresholding algorithms. The correlation algorithms allow prediction of future abnormal behavior based on current abnormalities. The three new dynamic thresholding algorithms allow us to account for seasonality and the different distributions inherent in the behavior of metric data. This algorithmic sophistication is a requirement for reducing false positive alerts.

    We have three patents currently filed. One is on our general approach to real time analytics, one is about our problem modeling (or Problem Fingerprinting) and the third is related to an algorithm for long term trending. Over the next quarter, three more patents will be filed. One for our new Dynamic Thresholding algorithms, one for a new change detection algorithm and a third around a new approach to alerting that determines the probability that an out of threshold event should be alerted on.

    To get our solution to work, you need some sort of monitoring infrastructure in place. Most customers we work with have a number of siloed solutions in place – one for the network, another for servers and the OS, perhaps something for applications, databases, or response time. These tools range from implementations of a solution like IBM’s ITM to custom scripting. We have out-of-the-box integrations with a number of monitoring solutions and a flexible adapter framework that allows easy integration with other data collection mechanisms. We are not a monitoring solution – we rely on the customer’s existing monitoring infrastructure and integrate into that for metric data collection. Once we are installed and integrated into the customer’s data collection mechanisms, we start learning the behavior of the system. This learning is often jump-started using historical data the customer provides from their data warehouse.

    As far as the “garbage in, garbage out” scenario you mention, the important thing is working with the customer to understand the problem they are trying to solve and making sure that the appropriate data is being collected to solve that problem. For example, if you want to predict that a motherboard is going to fry, you have to be collecting information such as voltage spike data, temperature data, humidity data, CPU fan speed, etc. We work with our customers to understand the issues in their environment and help them determine whether they need additional monitoring. We actually have an interesting paper written by our CTO, Mazda Marvasti. In it he shows that if you randomly collected just 40% of the available metrics across each tier of a given business service (e.g., network tier, web server tier, app server tier, DB tier, etc.), our algorithms would allow us to identify over 93% of the problems that could occur in that environment. The paper is available on our website.

    Sorry for the verbosity here, but I wanted to give complete answers to the questions you posed. We at Integrien appreciate your comments on the UI, which was a major focus of our latest release. We’d certainly be interested in exploring the areas you think we should invest in product development as well.

    Steve
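The seasonal dynamic thresholding Steve describes can be sketched very roughly in code. This is a minimal illustration of the general idea only – learning a per-hour-of-week baseline band (mean ± k standard deviations) for a metric, then flagging values outside the band – and not Integrien’s actual algorithms; the function names and the hour-of-week bucketing are assumptions made for the example.

```python
import statistics
from collections import defaultdict

def learn_baseline(samples, k=3.0):
    """Learn per-hour-of-week dynamic threshold bands from (hour_of_week, value) samples.

    Returns {hour_of_week: (lower, upper)}, where each band is
    mean +/- k standard deviations of the values seen in that hour,
    so the threshold follows weekly seasonality instead of one static line.
    (Illustrative sketch only, not Integrien's algorithm.)
    """
    buckets = defaultdict(list)
    for hour_of_week, value in samples:
        buckets[hour_of_week].append(value)
    bands = {}
    for hour, values in buckets.items():
        mean = statistics.fmean(values)
        sd = statistics.pstdev(values)
        bands[hour] = (mean - k * sd, mean + k * sd)
    return bands

def is_abnormal(bands, hour_of_week, value):
    """True when the value falls outside the learned band for that hour."""
    lower, upper = bands[hour_of_week]
    return not (lower <= value <= upper)
```

A metric that normally sits near 50 at one hour and near 200 at another gets a different band for each hour, which is the point of dynamic rather than static thresholds.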

  • J Fielding

    I’m nearing completion of a large Netuitive deployment, and a key capability for making the implementation a success is positioning it as a self-learning performance management tool. Learning normal application behavior profiles is the foundation of the analysis, but the software really excels when you prioritize performance events by how they impact the customer experience. Netuitive SA uses multi-variable correlation to pinpoint the application infrastructure metrics that are the likely cause of slow response.

    The new versions of Netuitive SI and SA are big advances over the earlier versions. If you haven’t seen the latest versions, you should definitely check them out. The one thing I think could be improved is the openness of the integrations. I got a brand-new integration done in less than a week, but it wasn’t as easy as it should have been. That is a big topic all by itself: the more open they are with the integrations, the more room there is for error – as Doug mentioned earlier, “garbage in, garbage out.”

  • I recently had discussions with Integrien salespeople when they gave a live presentation of the Alive product for the company I work for now. Yes, it is impressive, especially the price. My general impression is that it’s too complex. I have implemented my own system (SEDS – statistical exception detection) that is much less complex and has worked fine for years now.

    Check my last posting on my blog (System Management by Exception) for details of how SEDS works compared with Alive and other similar products:
    http://itrubin.blogspot.com/2009_02_01_archive.html
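SEDS itself is described on Igor’s blog; as a rough illustration of the control-chart idea behind statistical exception detection, one can compare each hour of the current week against the same hour in past weeks and flag values above the historical control limit. This sketch is an assumption about the general technique, not Igor’s actual implementation.

```python
import statistics

def seds_exceptions(history, current, k=3.0):
    """Control-chart style exception detection in the spirit of SEDS.

    history: list of past weeks, each a list of 168 hourly values
    current: this week's 168 hourly values
    Returns (hour, value, upper_limit) tuples where the current value
    exceeds mean + k*sd of the same hour across the past weeks.
    (Illustrative sketch of the general idea only.)
    """
    exceptions = []
    for hour, value in enumerate(current):
        past = [week[hour] for week in history]
        mean = statistics.fmean(past)
        sd = statistics.pstdev(past)
        upper = mean + k * sd
        if value > upper:
            exceptions.append((hour, value, upper))
    return exceptions
```

A metric that suddenly spikes in one hour of the week, relative to what that same hour looked like historically, is reported as an exception; everything within its usual range stays quiet.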

  • Doug, I work for Netuitive and am one of a handful of people who understand our sophisticated analytics – but know that how the technology works matters less than the results :-).

    The page linked below includes a white paper, “Path to Proactivity,” based on a third-party analyst’s interviews with one of our customers. It details operational as well as ROI results. I will let the paper speak for itself.

    http://www.netuitive.com/products/roi-analysis-netuitive.php

  • Thanks, @GGillen. I’d love to hear more about the sophisticated analytics from you in a series of podcasts or other discussions. I firmly believe it’s critical that this gets broken down in ways the typical layman understands and ultimately believes in. Feel free to contact me directly via the blog (see the contact section).

    Tks,

    Doug

  • @Igor. Very interesting. I’d love to hear more about your SEDS and how it works! I left a comment on your blog.

    Doug