Moviri blog

Application Performance Management strategy: common mistakes you don’t want to make

Are your web store applications performing well enough?

You will probably answer this question with a resounding "yes!": you probably don't see any issues now. But there are some critical periods during the year (Black Friday, Cyber Monday or the Holiday Season) when website traffic can increase as much as 3x. Are you prepared for these kinds of peaks?

Poor application performance will affect your revenue.

We can provide you with a best-in-class approach to Application Performance Management (APM), so you don't have to worry about it.

First of all, we need a definition of APM.

Let’s start with the Gartner definition, covering five areas of “in production monitoring”.

Application Performance Management:

  • end-user experience monitoring (EUM)
  • runtime application architecture discovery modelling and display
  • user-defined transaction profiling
  • component deep-dive monitoring in application context
  • analytics capabilities

With this definition on one hand and our 15 years' experience in IT performance management projects (from performance testing to capacity planning) on the other, we can summarise the most frequent mistakes as follows.

Technology silo approach

Each team works as a silo using its own tools, avoiding collaboration, knowledge sharing and tool streamlining. This often leads to a proliferation of monitoring agents, resulting in overhead on the systems and in a longer mean time to repair (MTTR). When an issue occurs, every team starts its own analysis with its own tools and data: the same data is often collected and analysed multiple times, because the same metric may be named and/or collected differently by each tool or team.

Heterogeneity of tools across environments (from dev to prod)

When an incident occurs in production (a service outage or performance degradation), it is really difficult, or even impossible, to correlate the issue with the earlier environments.

Default installation

Application Performance Management tools are usually sold as “easy-to-implement/auto-tuning” software. We strongly suggest avoiding default installation because tuning is the cornerstone of a golden configuration set.

Lack of a monitoring strategy

APM tools require a continuous improvement process that could be very expensive if underestimated during the monitoring planning phase and could lead to a high churn rate in tools usage.

Blind adoption of software suites

Application Performance Management tools sometimes come for free with other IT management software, or as part of monitoring suite deals. But spending time selecting the APM tools that actually answer your needs can save a lot of money, as well as the maintenance effort required to make the tools effective.

Partial production coverage

Installing agents on only a few servers to save licensing costs, or installing them only when an issue occurs, hides the full picture from you and prevents benchmarking normal service behaviour against anomalies.

Single purpose

APM is mostly associated with the incident management process, but APM data is also the input for Capacity Management, Service Level Management, Security and other IT management processes, so all of the data gathered should be put to use.

Misuse of web/marketing analytics software

These are often confused with Application Performance Management tools, forcing a tool to do a job it was not designed for. User experience from a service and performance point of view is different from usability and user behaviour in service usage.

Looking forward

Do not stop innovating, and do not limit your scope to "applications": redefine your monitoring strategy as Digital Performance Management, address Cloud, IoT and microservices initiatives, automate your operations, leverage Artificial Intelligence and Algorithmic IT Operations capabilities, and be prepared to support DevOps and continuous delivery processes.

If your APM strategy runs into any of these mistakes, we can help.


Filter Based Reporting with TrueSight Capacity Optimization

Moviri presented Filter Based Reporting with TrueSight Capacity Optimization at BMC Days Chicago, as part of the Actionable Intelligence Series: an introduction to advanced tools and techniques we can help you leverage to deliver additional value from existing data.

The thesis statement was essentially: "Is there an alternative to Quick Reports and Domain Based Reports? Something that makes it quick to change the objects being reported on, while still being formatted appropriately to reveal actionable intelligence?"

We put on our thinking caps and found that the answer is yes: Filter Based Reporting.

What is "Filter Based Reporting"?

Filter Based Reporting is reporting where the chart contents and formatting have been developed with a particular objective in mind, while the objects included in the report are chosen at report time by defining them in a filter.

This gives you the best of quick reports and domain based reporting: the charts are formatted as desired to make quick actionable intelligence determinations, AND the objects are chosen just in time.

Several reports are staged for particular investigative topics, such as a device study, a storage study, or a study of storage in violation of thresholds.

Each study type has its own filter.

Objects are selected just in time, placed in the filters and the reports run. The filter can be based on searches, even complex searches.

In this example there are only a few charts but there is no limit. One filter drives a whole series of charts.

Filter Based Reporting


So let’s look in detail how this works.

Device study, the first section, has a couple of charts. Those charts are driven by the filter DeviceStudyFilter. To use this, set the filter, then run the charts.

Storage study, the next section, has a chart. That chart is driven by the StorageStudyFilter. Again set the filter, then run the chart.

In the last two sections, each report has its own filter; they just got mixed up alphabetically. Same idea: set the filter, then run the chart.

Filters are search strings that can be tested on the search screen.

A few filter examples:

(type:sys -type:object)
(+(tag:AIX AND sys:p* NOT sys:*@*) -type:object)
(((sys:*app01 OR sys:*app02) AND tag:Unix) -type:object)

The middle filter says: include everything tagged AIX whose system name starts with "p" (which is probably production), but NOT systems that have an "@" in the name; also exclude anything of type "object", which are reports.
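To make the logic of such a filter concrete, here is a small sketch that applies the same conditions to a list of inventory objects. The object records and helper function are hypothetical stand-ins, and Python wildcard matching stands in for the actual search grammar:

```python
from fnmatch import fnmatch

# Hypothetical inventory records: each object has a name, a type, and tags.
objects = [
    {"sys": "pweb01",    "type": "sys",    "tags": ["AIX"]},
    {"sys": "pdb01@dr",  "type": "sys",    "tags": ["AIX"]},
    {"sys": "dapp01",    "type": "sys",    "tags": ["AIX"]},
    {"sys": "report-cpu","type": "object", "tags": []},
]

def matches_middle_filter(obj):
    """Mimic (+(tag:AIX AND sys:p* NOT sys:*@*) -type:object)."""
    return (
        "AIX" in obj["tags"]                 # tagged AIX
        and fnmatch(obj["sys"], "p*")        # name starts with 'p' (production)
        and not fnmatch(obj["sys"], "*@*")   # no '@' in the name
        and obj["type"] != "object"          # exclude report objects
    )

selected = [o["sys"] for o in objects if matches_middle_filter(o)]
print(selected)  # ['pweb01']
```

Only the object that satisfies every condition survives; everything else is filtered out before the charts ever run.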

Filters can be quite complex and can be combined. The result is a set of charts that are designed to show actionable intelligence quickly, cleanly, and with a specific target list.

A variation on this method is to use one filter to get an initial list of devices, then, based on criteria applied to that first list, generate a more targeted list. This could drive a separate set of reports, and the step can be repeated to produce an ever more targeted list.

There are a number of methods to implement these tools and techniques to reveal actionable intelligence.

Call Moviri and let us help you use this, and a host of other tools and techniques.



Having an imPACt at CMG 2016

When your company tagline explains that it’s all about performance, you cannot possibly miss out on the CMG 2016 conference. The fact that this edition’s venue was at La Jolla, San Diego, made it all even more pleasant!


La Jolla, San Diego CA

The temptation to go out with the board and catch a wave Johnny Utah style was great; nonetheless, the CMG imPACt agenda was so packed with top-notch speakers from Google, Microsoft, Facebook and, yes, Moviri, that you could not have found any better excitement (for an engineer, anyway) outside the conference.
In this post you will get a quick overview of the sessions that we “movirians” held at the conference, and then a closer look at a selection (always a tough task) of the most interesting ones we attended.

Our Sessions

What really made us proud this year is that we had sessions based on the R&D efforts at Moviri (CPU Productivity, Containers and Big Data) and also based on our efforts on the field (Accounting and Chargeback training).

Over the past few years we have invested the time and brains of our best movirians into collaborations and long-lasting relationships with top universities, focusing particularly on solving innovative cutting-edge technology problems. We constantly rely on the fresh perspective of newly graduated students and we mix it up with our experience.

CPU Productivity: a New Metric for Performance Analysis

Have you ever used CPU Utilization to identify bottlenecks in your systems, or to predict whether your site has enough capacity to sustain the new marketing campaign (Black Friday, a Christmas special deal, product launches, …)? I suspect the answer is yes for 100% of us! The processor is the most critical resource in our systems, and Utilization is the single most important metric used both to assure adequate service performance and to keep our IT footprint under control, both on premises and even more so in the cloud.

But… there is a problem with the way CPU utilization metric works today: it was a perfect metric for systems created 15 years ago! Modern processors are significantly more complex than they were years ago, with the result that our beloved CPU Utilization is no longer measuring… the real utilization of your CPU!

The impact?

A real one: several customers engaged us with questions about why their business-critical service started to slow down… at 60% CPU utilization. You should have plenty of spare capacity, right? Well… not quite! At 50% "utilization", your CPU cores might well be at 80% of the actual processing power they can deliver.
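To make the arithmetic concrete, here is a toy model of a 2-way SMT core. The 25% sibling-thread speedup is an illustrative assumption, not a measured value; real processors vary:

```python
# Toy model: a 2-way SMT core reports 50% "utilization" when one hardware
# thread per core is busy, but the idle sibling thread can only add a
# fraction of a full core's throughput when it kicks in.
SMT_SIBLING_SPEEDUP = 0.25   # assumed extra throughput from the 2nd thread

def fraction_of_peak_throughput(reported_utilization):
    """Map OS-reported utilization to the fraction of real capacity consumed."""
    peak = 1.0 + SMT_SIBLING_SPEEDUP               # both threads busy
    if reported_utilization <= 0.5:
        # Below 50%, work fits on primary threads running at full speed.
        used = reported_utilization * 2.0
    else:
        # Above 50%, extra load lands on slow sibling threads.
        used = 1.0 + (reported_utilization - 0.5) * 2.0 * SMT_SIBLING_SPEEDUP
    return used / peak

print(round(fraction_of_peak_throughput(0.5), 2))  # 0.8
```

Under these assumptions, 50% reported utilization already consumes 80% of the core's real throughput, which is exactly the kind of surprise described above.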


Moviri at CMG imPACt 2016

The features of modern processors defeat traditional capacity models


What can we do then? At CMG, we had the pleasure of presenting the result of our latest effort, a real research project we conducted with one of the leading Italian universities, with a "simple" goal: define a new metric, which we called Productivity, able to estimate how many business transactions our customers can drive with their CPUs. It measures the actual work a CPU is doing, and how much capacity is left, in a much more accurate way.

Moviri at CMG imPACt 2016

Error free capacity planning with Productivity


So, what is the secret sauce behind Productivity?

We exploited a new class of performance metrics gathered directly from the CPU itself via the so-called Hardware Performance Counters, and added some machine-learning on top to come up with an estimation of the amount of work a CPU can perform.
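To make the idea concrete, here is a minimal sketch: the counter values below are fabricated samples, and the simple instructions-per-cycle arithmetic stands in for the actual machine-learning model. On Linux, counters like these can be sampled with `perf stat -e instructions,cycles`:

```python
# Sketch: derive efficiency signals from hardware performance counters.
# Hard-coded sample values keep the example self-contained (illustrative
# numbers, not measured data).
samples = [
    # (instructions retired, CPU cycles, business transactions served)
    (8.0e9,  4.0e9,  1_000),
    (12.0e9, 8.0e9,  1_400),
    (15.0e9, 12.0e9, 1_600),
]

results = []
for insns, cycles, txns in samples:
    ipc = insns / cycles             # instructions per cycle
    insns_per_txn = insns / txns     # CPU work behind one transaction
    results.append((ipc, insns_per_txn))
    print(f"IPC={ipc:.2f}  instructions/transaction={insns_per_txn:.2e}")

# Falling IPC at higher load reveals contention (SMT, cache misses, ...)
# that plain CPU utilization cannot see.
```

The real Productivity metric layers a trained model on top of such counters, but even raw IPC already shows that "busy cycles" and "useful work" are not the same thing.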

We were extremely pleased to see such a crowded room at CMG! This testifies that the problem we addressed is hurting people who manage IT performance and capacity, and we were glad to receive such positive feedback from recognized performance experts in the field! We even managed to raise the interest of a top-tier processor vendor…

How to collect data … focus on Hadoop, Containers, Network

This session was a compendium of some challenges we have faced at companies around the world and what we are working on in our Labs:

Today's datacenters are becoming increasingly complex, to the point that you either end up feeling like you are starring in Star Trek's episode "The Trouble with Tribbles", or you have to do something about it.

Moviri at CMG imPACt 2016

Complexity in datacenters has grown to an unmanageable situation


Many of our customers dealing with Enterprise Capacity Planning are looking at new Hadoop initiatives and asking themselves, "What is it all about?" We presented some analogies to show how to address all the little critters in the Hadoop ecosystem.

Moviri at CMG imPACt 2016

Each animal in the zoo deserves a different approach


Next, we addressed what to do for containers.

Moviri at CMG imPACt 2016

How bare metal, virtualization, and containers differ.


And last, an overview of a systematic approach to Network Capacity Planning.

Moviri at CMG imPACt 2016

Network capacity planning in or between datacenters: different needs

Accounting and Chargeback

During this 2-hour training session, we looked at the reasons behind setting up an Accounting and Chargeback process, the challenges, and the benefits it can bring to your organization.


We walked through the 3 components of the ACB process:

Moviri at CMG imPACt 2016

Three sub-processes of Accounting and Chargeback


We focused on a prescriptive workflow to make the process repeatable and some of the tips on how to get it right the first time:

Moviri at CMG imPACt 2016

Workflow and tips for Accounting and Chargeback


On December 13th 2016 we held a webinar on the same topic.

CMG Session Highlights

Facebook: Managing Capacity and Performance in a Large Scale Production Environment

Facebook is known for tackling big engineering challenges, so when I sat down in the room I had high expectations for this talk. I have to say that Goranka Bjedov, head of Capacity Engineering, delivered an outstanding and super interesting session! Having done capacity planning myself for years, I was eager to learn how capacity is managed in a leading, innovative web company, and to compare it with our practices in the enterprise.

So, what did I discover?

I think the main theme was a focus on efficiency: Facebook's capacity team goes to great lengths to squeeze every CPU cycle out of their machines! They are not afraid of developing custom tools that provide very granular (1-second resolution!) and detailed metrics about the full stack (down to the CPU performance counters). How is site capacity managed? Similarly to other web-scale companies (e.g. Netflix), Facebook's approach is to use live production traffic to stress systems and understand their maximum capacity; this process is automated and performed multiple times per week!

Moviri at CMG imPACt 2016

Measuring web capacity using… CPU scheduling latency!

Goranka concluded the session by stating her future goals: being more proactive! This is funny: even with top notch practices and highly talented engineers, there is always room to improve in capacity management 🙂

Flying Two Mistakes High: a Guide to High Availability in the Cloud

Another great talk at CMG was by Lee Atchison (@leeatchison). Lee is Principal Cloud Architect at New Relic, and was previously responsible for designing core cloud services at Amazon Retail and AWS. So, who could better explain how to architect cloud systems for high availability and scalability?

Are you on the cloud? Do you think capacity planning is no longer required, since you can always scale automatically and stay online?


Moviri at CMG imPACt 2016

High availability in the Cloud really means doing proper capacity calculations.

Well, quite the opposite! Lee explained very clearly that capacity planning is not only still mandatory in the cloud, but in some respects even more important for understanding how to keep your service up. The cloud is full of new, "invisible" bottlenecks you don't expect and sometimes can hardly even measure with current approaches. Plus, you need to think carefully about HA in your capacity plan too!

Great work Lee, thanks for sharing such valuable insights!

Implementing Metrics-Driven DevOps: Why and How!

I really loved this talk by Dynatrace's Andreas Grabner (@grabnerandi)! Many companies are (rightfully) adopting DevOps to bring down silos and increase the speed of delivery, reducing time to market and bringing features into users' hands faster and faster.

But how do performance and efficiency fit into this picture? Unfortunately, they are often left out in the cold! Continuous integration in the software development cycle typically considers just functional testing. Andreas showed what the impact of that can be, especially if you are in the cloud: a seemingly innocent code release might suddenly increase your cloud footprint, and your monthly bill, by 4x! If you don't include performance and cost-efficiency KPIs in your pipeline, you might discover the bad news weeks or months after the guilty release happened: almost impossible, then, to figure out which one was the culprit…

Moviri at CMG imPACt 2016

A DevOps pipeline with Performance and Efficiency as first class citizens


So how do we prevent this from happening? The key is treating performance testing and benchmarking as first-class citizens in your software development pipeline! The benefits? Not only preventing huge and unexpected cloud bills, but also establishing a performance, efficiency and scalability culture already in the dev phase, which is critical to delivering successful, but also lean and cost-effective, products!
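As a sketch of what such a gate could look like in a pipeline (the KPI names, values and the 10% threshold are illustrative assumptions, not a real project's numbers):

```python
# Illustrative CI gate: compare a release's performance KPIs against the
# previous baseline and fail the build on significant regressions.
baseline  = {"p95_response_ms": 220.0, "cpu_ms_per_txn": 4.0,  "cost_per_1k_txn": 0.12}
candidate = {"p95_response_ms": 235.0, "cpu_ms_per_txn": 16.5, "cost_per_1k_txn": 0.48}

MAX_REGRESSION = 0.10  # fail if any KPI worsens by more than 10%

def regressions(base, cand, tolerance=MAX_REGRESSION):
    """Return the KPIs that worsened beyond the tolerance, with their deltas."""
    return {
        kpi: cand[kpi] / base[kpi] - 1.0
        for kpi in base
        if cand[kpi] / base[kpi] - 1.0 > tolerance
    }

failed = regressions(baseline, candidate)
for kpi, delta in failed.items():
    print(f"FAIL {kpi}: +{delta:.0%} vs baseline")
# cpu_ms_per_txn is ~4x the baseline: the "innocent release" scenario above,
# caught at build time instead of on next month's cloud bill.
```

Wired into the pipeline, a gate like this turns a 4x efficiency regression from a billing surprise into a failed build.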


With transformative trends like DevOps and the rise of Cloud adoption, Big Data and Containers, IT is facing significant new changes: how do you manage them from the capacity and performance perspectives? Lots of established practices need to change to account for new delivery processes, technologies and pricing models.

Capacity Planning and Performance Management are not going away – quite the opposite! They are becoming more and more important to ensure good service quality, keep the IT cloud bills in check and enable safe IT transformation towards future architectures.

Enough with paddling. Pop up and start pumping your board!

We would really love to hear from you: if you have any questions or comments, please use the comment section below.

You can also stay up to date with our further developments and results by filling in the form at the top-right corner of this page.

Accounting & Chargeback: what it is, what you can do


Many of our customers asked to know more about IT Financial Management and in particular Accounting & Chargeback.

This is why we decided to collect some of the material we use to present these concepts, and presented it at two events:

  • During the CMG imPACt conference, in a 2-hour session (plenty of time to review the why and the how, and for some healthy discussions)
  • During a #movinar on December 13: a condensed version of the CMG training

Some of the questions that ACB tackles:

  1. Which services cost us the most, and why?
  2. What are our volumes and types of consumed services, and what is the correlating budget requirement?
  3. How efficient are our service provisioning models in relation to alternatives, and are we competitive on the market?

All the sessions mentioned above have a common set of conclusions:

  1. Accounting and Chargeback in IT can drive cost-efficiency, if the organization trusts how the process is implemented.
  2. The process is simple at a high level, yet tedious and complex in its details: automation and tools make it possible.
  3. There is no single winning methodology; there are actually many. The criteria for deciding which one is best differ across stakeholders: what is important in your organization?

The benefit in terms of cost visibility (who is using our IT resources, and how much it is costing) is quite clear and does not need to be discussed in detail.
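As a toy illustration of the accounting step behind that visibility (the business units, usage figures and monthly bill are made up):

```python
# Toy chargeback: allocate a shared infrastructure bill to business units
# in proportion to their measured resource consumption.
monthly_bill = 90_000.0  # shared infrastructure cost for the month

# Measured consumption per business unit (e.g. normalized CPU-hours)
usage = {"e-commerce": 4_200.0, "analytics": 2_800.0, "back-office": 2_000.0}

total = sum(usage.values())
charges = {bu: monthly_bill * u / total for bu, u in usage.items()}

for bu, cost in sorted(charges.items(), key=lambda kv: -kv[1]):
    print(f"{bu:12s} {cost:10,.2f}")
```

Real methodologies differ in how they measure consumption and handle fixed versus variable costs, which is exactly where the stakeholder criteria mentioned above come into play; tooling makes the tedious part repeatable.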

But there is something more:

From our background working with Capacity Planners and, in general, trying to improve the efficiency and cost of operations, the side benefit of these processes is that, in a world dominated by complexity, prioritizing Capacity Planning activities based on cost seems to be a winning strategy.

What can you do next?

Watch the webinar!

Download the handouts!

Drop your email address in the box below and we will send you an email with the presentation and an example.

More on Accounting & Chargeback

This is not the first time we address this subject: you can take a look at these excellent sessions, which we have held ourselves on previous occasions, or which organizations we partner with have set up and presented:

Moviri Reparto Corse


Hub On Wheels is a tradition for Boston. It is a bicycle race through the city's neighborhoods that helps raise money for local schools.

You can choose to ride in either the 10-mile family-friendly course or the full 40-mile course along the Emerald Necklace around the city.  Both routes will feature a car-free Storrow Drive!


This will be a great chance for "Moviri Reparto Corse" to ride along the Charles River with no cars getting in the way, exploring the shoreline and the historic neighborhoods of the City. On Sunday, September 18th, Fabio, Brendon and Andrea will make some noise along the 40-mile route while keeping the Moviri flag flying.

Let’s cheer for them!!!

Moviri Managed TSCO: a success story

Imagine that your company is running one of the most important e-Commerce platforms in the world. Now imagine that your responsibility is to ensure that this platform performs perfectly, even when passing through marketing campaigns, critical business events and new code releases. What to do? Ask Moviri.

When one of the most important companies in the US market found themselves in this position, they selected BMC TrueSight Capacity Optimization (TSCO) to support the Capacity Management process for their e-Commerce platform. For this critical business decision, they partnered with Moviri, a company with more than 15 years of expertise on Capacity Management with TSCO and over a hundred successful projects all over the world.

The ingredients that sealed the partnership with Moviri? The joint initial implementation in Fall 2013 and the successful experience with the "Silver level Moviri Managed TSCO" program.

The Moviri Managed TSCO service is designed to create more and more value around Capacity Management, focusing on:

  • Flexibility: to cope with unpredictable types and amounts of capacity-related work over time
  • Efficiency: to ensure results are achieved in a timely manner
  • Cost effectiveness: to engage professional services on-demand

Over the last two years, these goals have been translated into measured KPIs that mark the successful collaboration:

Moviri Success Story - Capacity Management

  • Every year, the Capacity Management process has been extended to new Business Units, marking the success of the initiative. To support this growth, the TSCO infrastructure has been scaled to manage 15x the number of servers scoped in the initial implementation
  • The daily monitoring activities resulted in a more reliable infrastructure, reducing the days when the Capacity Team cannot support the process (from ~90% availability in the first two months of the engagement to ~99% availability over the last year)
  • 30+ data flow reconfigurations have been managed, with historical recovery of data gaps executed by the end of the same day
  • On-demand data collections have been executed during business-critical events, importing low-resolution data and ensuring data collection and data quality

Long story short? Moviri takes care of your BMC TrueSight Capacity Optimization, running and extending it, and letting the Capacity Management Team focus on the process instead of the technology behind the scenes.

Learn More

Digital Performance, a blink away from your business

Dynatrace Perform Day

Last June, Moviri had the privilege of sponsoring the 2016 Dynatrace Perform Day, where we talked about Digital Performance.

The event, held in Rome and Milan, was all about how Dynatrace's customers got the most out of their digital performance and improved the End User Experience. Why is user experience so important from a digital performance perspective? Take a look at this infographic:


Digital Performance Management


Application Performance Monitoring (APM) alone is no longer the answer to present-day challenges: the idea of application monitoring is giving way to Digital Performance Management (DPM). The focus now moves from the application to the End User Experience, considering that even poor infrastructure performance may cause a big loss in revenue. With DPM you can prevent it!

At Dynatrace Perform Day we had the chance to share our approach to Digital Performance: Stefano Doni, Performance and Capacity expert at Moviri, brought his knowledge of Java performance analysis on stage.

Moviri at Perform Day in Milan

Imagine you have to plan your marketing campaign for Black Friday: how efficient are your Java applications? Did you know that some hidden bottlenecks might limit your application's scalability? What would be the impact on your business? Poor End User Experience and unhappy customers will be the main outcome.

We presented innovative KPIs, not available in current monitoring tools, that are the key to making accurate performance predictions.


Watch the video [ITA Version]

Business and performance have a deep bond: if you are not going to take care of this relationship, someone else will.

We are the optimization experts. We make your applications run faster, use fewer resources and meet business demand.

Learn More


More on Dynatrace? Start with a 30-day free trial of Dynatrace AppMon & UEM

2016 CMG Italia Annual Conference Recap

CMG Italia

It was a real pleasure to attend the 2016 CMG Italia Annual Conference, organized as a two-day event held in Rome and Milan on the 24th and 25th of May.

The Computer Measurement Group is a not-for-profit, worldwide organization of IT professionals committed to sharing information and best practices focused on ensuring the efficiency and scalability of IT service delivery to the enterprise through measurement, quantitative analysis and forecasting.

What was the event all about?

Near real-time performance and capacity information is getting more and more consideration within enterprise IT departments. Continuous delivery of new application features introduces a new challenge for IT managers: new releases may cause excessive resource usage or degraded performance, without any benefit for the business. Considering the complexity of today's IT environments, such anomalies can easily go unnoticed, until you have to pay your bills.

Sessions’ Highlights

Roberto Gioi, Head of Capacity Management at Monte dei Paschi di Siena, presented an interesting approach to release management control. His team developed a solution that integrates with the release management process and helps detect resource-usage anomalies in newly released applications; once a capacity incident is identified, the team can send a notification to the application's owner and possibly add an optimization suggestion.

David Furnari, Manager at Moviri, addressed the problem of Mainframe resources: reducing their utilization can lead to large savings. David presented a near real-time monitoring dashboard that uses workload characterization to show how a business service is consuming MIPS, and whether a load increase is related to an increase in business volumes or to performance degradation. Mainframe resource monitoring is still perceived as a topic that pertains to technicians only; instead, the information you collect is valuable for managers too, if you can connect technical metrics with business service labels.

Stefano Doni, Performance & Capacity evangelist at Moviri, brought on stage for the first time in Italy his paper on Java application memory, which earned him the Best Paper Award at the 2015 CMG International conference. Through actionable methodologies, this study enables you to highlight the hidden bottlenecks of many Java applications, detect poor memory-usage patterns and anticipate memory leaks.

Other interesting sessions were held by EPV Technologies (the event organizer) about new technologies such as Simultaneous Multi-Threading, which represents a revolution in the Mainframe world and adds complexity to capacity planning for specialty engines (zIIP) that are not related to the MIPS bill.

Interested in these topics? Talk to our Experts!

Learn More

Sanofi works with Moviri for Capacity Management

The relationship between the Sanofi Aventis Group and Moviri dates back to 2014, when Moviri first engaged with the leading multinational pharmaceutical company to implement BMC's TrueSight Capacity Optimization solution. Sanofi was tasked with the ambitious goal of setting up the foundation of a world-class capacity management process. As the solution was implemented, Sanofi also decided to take advantage of Moviri's Managed Service offering to support BMC's TrueSight Capacity Optimization solution on a daily basis.

At BMC Engage 2014 in Orlando, Sanofi and Moviri held a joint presentation to discuss how they effectively managed growth within Sanofi and continuously aligned IT to business demands.

You can find the recorded presentation here:

The partnership between Sanofi and us continues to grow stronger as Moviri has become a key partner and a trusted advisor for Sanofi’s global challenges. Each customer relationship is personal to Moviri and we strive to treat our customer’s needs as if they were our own.

Chris Wimer, Global Capacity Manager at Sanofi, has written a recommendation letter to endorse a leading Moviri consultant's services and work ethic throughout his time working with Sanofi.

Here is an excerpt from the letter:

“I am responsible for Capacity Management within the Application Technology and Solutions (ATS) and Global Infrastructure Services (GIS) groups at Sanofi. As Global Capacity Manager, I provide services to a variety of different internal and external Customers, including administration, support, operation and extension of the BMC TrueSight Capacity Optimization (TSCO) infrastructure” […]

If you are considering engaging a strategic partner with expertise in Capacity Management processes, or you’re interested in getting the best out of your BMC TrueSight Capacity Optimization instance, I would recommend meeting with Moviri and discuss how they can help you achieve your objectives.”

Moviri is greatly appreciative of the opportunity to partner with a market leader such as Sanofi. We take tremendous pride and understand the importance of supporting Sanofi to ensure their enterprise applications are running faster, using less resources and meeting business demands.

Operational Intelligence with Splunk and Moviri: a point of view from SplunkLive! Italy [Italian version] on how three customers have gained significant business benefits from working with Moviri and Splunk to find the value in their machine data (here is the recap of the event from the Splunk Blog).

The operational intelligence of the Splunk platform

SplunkLive! Italy Moviri

The exploitation of big data, and the analyses that can be derived from it, have long been the subject of reflection, experimentation and new projects within companies. At this point in time, an evolution of these concepts is starting to take concrete shape in the notion of "operational intelligence": in practice, a phase two of business intelligence, combined with the ability to build real-time information flows in order to obtain more refined and timely analyses from large quantities of raw data.
The companies potentially most interested in this evolution are those that have already embraced (or are about to embrace) digital transformation processes, centre important components of their business on information, and base their work on business metrics checked with strict regularity in areas such as order volumes or customer satisfaction indicators, but also security incident reports or more technical data. In terms of usage, real-time operational intelligence takes the form of transaction monitoring, which lets a company know what is happening, for example on the commercial front, at any moment. Visualizations take place through graphical dashboards designed to be as easy to read as possible, to understand the status of an activity in relation to its targets.
This premise serves to summarize what, in this field, a company like Splunk does. Splunk has been present in Italy for some time, with a set of apparently satisfied customers, to the point that they use the platform for activities quite different from those that inspired the initial purchase. The latest evolutions of the offering are Splunk Enterprise 6.4 and its Cloud version: "The goal of the technology is to reduce the storage costs of historical data," explains Curzio Trezzani, Partner Account Manager South Europe at Splunk. "The software was built to capture, index and correlate data in real time, funnelling it into a repository from which to generate charts, reports, alerts, dashboards or other visualizations."
Around this core, Splunk has developed various apps, such as IT Service Intelligence, a monitoring solution that offers a machine-data-driven approach to provide complete visibility into the operational health and KPIs of the IT and infrastructure services to be kept under control. User Behavior Analytics (UBA), on the other hand, helps organizations identify threats, known or unknown, to information systems, using machine learning, behavioural analysis, data science and advanced correlations.


Flexibility proven in the field

One of the strengths of the Splunk technology is that apps can be built fairly quickly by customers themselves, according to their needs. Unicredit, for example, implemented the platform in 2010 to analyse system and security incidents, involving external people such as developers where necessary: "Today we have evolved towards a fully fledged SIEM, used as the main solution for internal monitoring and troubleshooting," says Stefano Guidobaldi, ICT Advanced Engineering at the banking group, "covering various applications, from Internet banking to card payments. We collect 2.5 TB of data per day from 180 different sources, we can perform more precise analyses, and we have thus solved various problems related to POS terminals, to reports previously managed with Excel, and to logs coming from the contact centres."
For Saipem, on the other hand, the relationship with Splunk started a few years ago to handle log management, but over time the use of the platform has extended to the control of various management indicators and to the monitoring of multi-platform backup, before embarking last year on a more structured path towards enterprise security: "The solution adapts to different needs and can process any type of data to return valuable information," comments infrastructure architect Massimo Pessina. "In fact, today we have a single service portal for all the IT regional managers, with real-time data on infrastructure, applications and security. The replacement of the previous proprietary CMDB, and the use of the platform as a software inventory tool as well, are further effects of its extreme flexibility."
Also interesting was the testimony of Yoox Net-à-Porter, a specialist in online-only clothing sales, with 40 sites selling multi-brand products. Here too there has been an evolution that has taken the company from a technology-centric to an info-centric philosophy: "To the optimization of corporate security processes, with a single dashboard that condenses them all clearly," says head of information security Gianluca Gaias, "we have added pattern recognition, which brings to the attention of security specialists the truly significant data, harbingers of attacks that deserve careful handling."

[Note: this article was originally published in April 2016 by Roberto Bonino]