Moviri blog

Moviri at Performance & Capacity 2015 CMG Conference



The Alamo Mission, part of the San Antonio Missions UNESCO World Heritage Site in San Antonio, Texas.

We had the pleasure of sponsoring and attending Performance & Capacity 2015 in San Antonio, Texas, the 41st US National Computer Measurement Group (CMG) conference.

The Computer Measurement Group is a not-for-profit, worldwide organization of IT professionals committed to sharing information and best practices focused on ensuring the efficiency and scalability of IT service delivery to the enterprise through measurement, quantitative analysis and forecasting.

What we like the most about CMG conferences is the unique opportunity to get in touch with top capacity and performance professionals working in different industries and organizations around the globe. It is a pleasure and a privilege to sit together and share experiences and ideas about how to approach real performance and capacity planning problems! Plus, it is also a great occasion for networking with our customers and prospects, getting to know each other in person and spending some relaxing time outside of the conference hotel.

At Moviri, we have a long tradition of attending CMG conferences to share our best practices and keep up to date with advances in the capacity and performance field.

However, this year was special for us: not only did we get a paper approved for this year's conference (second year in a row!), but our own Stefano Doni had the honor of receiving the conference Best Paper award! We are especially delighted by this important recognition and owe a huge thanks to all the referees who spent their time reviewing our paper and providing very useful feedback! We're proud of this international award; we think it speaks for itself about the quality of what Moviri delivers.

Plus, Renato Bonomini delivered an outstanding vendor tool talk on “Business Capacity Modeling with BMC TrueSight Capacity Optimization”, regarded by attendees as one of the most appreciated sessions – great work Renato!

As usual, we also had a vendor booth which enabled us to present our offerings to US prospects and talk with customers & BMC partners. Thank you Jon!

Best Paper: Lessons from Capacity Planning a Java Enterprise Application: How to Keep Capacity Predictions on Target and Cut CPU Usage by 5x

What is this work all about? Suppose you need to estimate whether your user-facing service will be able to cope with the increased traffic of your most important business campaign. The stakes are high: you can't be wrong, or the business will be severely impacted. Chances are that your capacity predictions will fall short and the service will *crash* well before the estimated capacity is reached. In this talk, we described actionable methodologies that we developed over years of capacity planning for business-critical Java enterprise applications.

Overall, the session was very well received; we got a lot of questions and interesting feedback – thank you!

If you did not have a chance to attend the session, you can sign up for an upcoming webinar on this topic, where we will present it live.




CMG Session Highlights

As always, CMG was packed with many interesting sessions; selecting and summarizing the best ones is no easy task! Here are two of the most interesting sessions we attended.

Hadoop super scaling

Outstanding talk by the great Neil Gunther on Hadoop scalability. Hadoop is the most widely deployed big data framework. Despite its popularity, its performance characteristics are not well understood, and the myth of linear (or even super-linear!) scalability is commonly held among big data developers. Neil shed some light on real Hadoop scalability and debunked the super-linear scalability myth.
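
As background (this is our summary of Neil's well-known, published Universal Scalability Law, not a reproduction of the talk itself), the USL models the relative capacity of a system on N nodes as

C(N) = N / (1 + α(N − 1) + βN(N − 1))

where the α term captures contention for shared resources and the β term captures coherency (data-exchange) delays. Apparent super-linear speedup shows up in measurements as a negative α estimate and, as the model makes clear, it cannot persist as the cluster keeps growing.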

Identifying the Cause of High Latencies in Storage Traces Using Workload Decomposition and Feature Selection

Very good talk by Daniel Myers, a PhD and CS professor who also worked on modeling storage systems at Google. The proposed methodology uses machine learning to identify the real cause of high latency in storage systems: it can pinpoint which factors (among many) contribute the most to high storage latency – for example, high IOPS, high randomness or burstiness. The method was tested against production storage traces collected from Microsoft servers and was able to provide interesting results: nice work!


It’s been a real pleasure to attend this year’s CMG conference. It was packed with super interesting sessions, we enjoyed networking with customers and partners, and we were delighted to share an award-winning, Moviri-developed capacity planning methodology, hopefully contributing to advancing the capacity planning field with a new actionable approach.

Are you puzzled by our findings and would you like to know whether the same is happening to your infrastructure and Java applications? Feel free to get in touch with us – we’re here to help!


Movirian Stefano Doni awarded 2015 Computer Measurement Group Best Paper Award


Managing the capacity and efficiency of business-critical Java applications is challenging. At the next Computer Measurement Group 41st International Conference, Performance & Capacity 2015, from November 2nd to November 5th in San Antonio, Texas, Moviri will give you actionable solutions.

We are delighted and honored to receive the prestigious 2015 Computer Measurement Group Best Paper Award. The award is CMG’s recognition for papers judged exemplary by industry peers, reflecting the best of the best offered at the conference.

Our capacity wizard and award recipient Stefano Doni will be a key speaker on Wednesday, November 4th, with a talk entitled Lessons from Capacity Planning a Java Enterprise Application: How to Keep Capacity Predictions on Target and Cut CPU Usage by 5x.

In this work Stefano proposes actionable methodologies and key metrics that enable you to:

  • Highlight the hidden bottleneck of many Java applications
  • Devise a business-oriented capacity model that represents Java memory bottlenecks
  • Detect unsound memory usage patterns and anticipate memory leaks
  • Uncover a well-kept secret – the garbage collector (not your business!) drives the CPU usage of your servers – and how to fix it
  • Show how the garbage collector might be your first scalability bottleneck

We will also be there with our booth, so don’t miss the chance to meet us in person!

BMC Technology Day in Chicago March 2015: a great event in a great venue

Andrea Gallo and Renato Bonomini of Moviri report from the Willis Tower, which hosted the BMC Technology Day 2015 in Chicago on March 11th. BMC Software showcased the updated TrueSight portfolio and for Moviri it was the perfect opportunity to discuss trending technologies and solutions with customers and partners.

The event at the Willis Tower (formerly Sears Tower and for 25 years the tallest building in the world) was organized in five different rooms, one for each BMC portfolio pillar in the distributed world; each room hosted several sessions in which attendees shared success stories and use cases. Herb Van Hook, BMC’s VP and Deputy CTO, and Tony Seba, lecturer at Stanford University, delivered the opening keynote. Similarly to BMC Engage 2014 in Orlando, the keynote focused on the new “Digitalization Era” and on where everyone stands with respect to these new disruptive technologies. No matter what you or the market think, the future is now!

Moviri at BMC Tech Day Chicago

The break-out sessions included discussions around several topics – let’s summarize the hot ones:

  • Less click, more value – ease of access to the important information, tailored to customer needs (the Vodafone experience with SmartIT).
  • Power is nothing without control – infrastructures are continuously growing in size and complexity, and companies aim to achieve clearer visibility of their composition and status.
  • Disruptive Benefits with IT Cost Transparency – how BMC TrueSight Capacity Optimization helped HCSC.
  • What it takes to be a Capacity Planner – the journey from simple spreadsheets to a full-blown, mature Business-Aware Capacity Management system, and the skills that help in the job.

Two sessions in particular, in the Performance Management & Analytics track, saw Moviri mentioned several times:

Session 2: Customer Case Study Health Care Services Corp (HCSC)

Our friend Ben Davies from Health Care Services Corporation set the motto of the day to “suck less” while describing the journey that enabled cost transparency at HCSC. The key business benefits, as Ben mentioned, were achieved with Moviri’s help to head in the right direction and avoid “shooting ourselves in the head”, to the point where HCSC scaled the process from “1 to 150 applications in 90secs”.


Session 4: Proactively Adapting to Changes in IT – Customer and Product Mgmt. Q&A Panel

Moviri joined a very interesting Q&A panel, moderated by Clem Chang of BMC Software, where participants were able to share their thoughts on the topic “Future trends of IT”. Renato, together with BMC software consultants and product managers, had the chance to share the vision and discuss how to proactively adapt to an ever-changing IT world. The hottest topics discussed were:

  • Big Data & Cloud – How can IT Operations and Capacity Managers support these technologies? Moviri is on the front line and we are ready to announce new solutions for big data capacity.
  • A single console to rule them all – Is there a plan to merge all of the different consoles into a unified platform? Having a holistic view is the greatest help to connect the dots and stay ahead of application failures. Furthermore, the introduction of IT Data Analytics is designed to streamline operations to find and resolve the causes of outages as quickly as possible.
  • “We can’t find good capacity planners!” – a few organizations attending the panel pointed out that it’s very hard to find capacity planners with any level of experience.

The Moviri team in Chicago shared what some of the key skills and common profiles of good capacity specialists are. That is, after all, what Moviri does best. If you are looking for some solid capacity skills, make sure you take a minute to talk to our experts.


Moviri and the Digitalization Era: 2014 BMC Engage Conference report


It was show time this October in Orlando for the 2014 BMC Engage Global User Conference!

We were present at the event both as sponsors and as speakers, learning about and building upon the renewed and refreshed BMC value proposition (our partner officially introduced its new brand identity and logo), showcasing some of our success stories, strengthening relationships and developing a vision of the future with and among our customers.

Capacity Management took the spotlight in four customer-centered sessions. During each of them, Moviri’s role was highlighted, confirming our leading role as partner-of-choice in BMC’s Performance and Availability ecosystem. Citrix, HCSC and Sanofi explained how they fulfill critical business initiatives with our help, while Northern Trust identified the supported integration with SCOM – provided by Moviri – as one of the key drivers in its decision to adopt TrueSight Capacity Optimization.

The Digitalization Era and Disruptive Technologies


The main protagonists of the opening and closing keynotes, BMC’s CEO Bob Beauchamp and Tony Seba, lecturer in entrepreneurship, disruption and clean energy at Stanford University, introduced a vision that contains the seeds of a very exciting idea – or a scary one, depending on your perspective: the IT industry is again on the verge of a new era, the Digitalization Era, and considering how information technologies have been at the core of all market disruptions over the last decades, it’s quite hard to confidently predict what we’ll all be talking about in three years.

What does it take to win, then? Well, we can outline the key success factors as:

  • Embracing a constantly growing holistic approach, expanding visibility beyond the four walls of the data center.
  • Unleashing and successfully leveraging the business opportunities coming from IoT, Cloud Computing and Big Data. While we’re approaching the zettabyte-of-data milestone, the market is waiting for guidance on a renewed set of best practices to use in conjunction with “what it used to be”.
  • Consumerization, empowerment and automation. Architectures, hardware devices and applications keep becoming more powerful and complex – take for instance the spread of Converged Architectures. It is critical to provide the market with automated, integrated, intuitive and robust solutions to ensure IT professionals spend time making decisions rather than figuring out how to tackle data.


How does Moviri fit in this picture? It looks like our efforts to stay on the leading edge will most likely pay off. Let’s review some highlights of what we showcased at Engage.

Sanofi’s Transformation Journey


Chris Wimer, Sanofi Global Capacity Manager, and I talked about how Capacity Management supported the ambitious IT Transformation program of consolidation, harmonization and automation in line with Sanofi’s renewed business strategy.


The key takeaways for the audience:

  • Rely on experts — and the experts are Moviri. ☺
  • Clearly define goals and challenges at the very start.
  • Perform a rock-solid assessment and analysis of the as-is, identify KPIs to measure the effectiveness of the process, review and prioritize the available data sources…then start implementing.
  • Think business-oriented: business value realization always comes first. That’s why processes and data streams need to be standardized, analytics need to be clear and accessible, areas of improvement must be measurable, priorities and service levels must be agreed upon.
  • Be holistic: as we have shown, one of the key success factors has been the capability to centralize IT and business information under the same roof.
  • Focus the reporting efforts on risk mitigation, provide improved visibility and reduce costs.
  • Leverage business-aware analysis, understand IT services as a whole, report and optimize capacity in terms of actual and predicted business demand… and make it all work by taking advantage of Splunk and Moviri’s connector for TrueSight Capacity Optimization!


Accounting and Chargeback in Citrix…


Troy Hall, Citrix Senior Business Analyst, and Moviri’s Renato Bonomini took the lead sharing best practices and recommendations on how to set up an effective Accounting and Chargeback (ACB) process with TrueSight Capacity Optimization.

One of Renato’s quotes stood out: “Not all servers were created equal”. With that he referred to the business value of performing ACB and achieving improved cost visibility, making sure to have a fair allocation and showback of costs and resources.

As Renato explained, any ACB process is made up of three components:

  • Identify the costs, which is usually the most painful point – as confirmed by Troy – since reliable data is generally not available and needs to be estimated.
  • Allocate costs, choosing from a variety of possible models and taking into account KPIs such as Simplicity, Fairness, Predictability, Controllability and Complexity.
  • Chargeback costs to the final consumer.


Knowing the theory is not enough: a critical success factor is to exercise ACB as an Iterative-V process:

  • Start with a model that makes sense considering the best information currently available.
  • Run the analysis, check results and evaluate deviations from expectations.
  • Adjust cost rates and allocation models, working closely with finance.
  • Re-iterate the process.


Troy outlined the most important positive outcomes of adopting ACB:

  • Increased satisfaction of business units, more educated IT decisions.
  • Resources are freed up and reallocated to more critical business objectives.
  • Ability to quickly reconfigure models and automatically generate auditable results.
  • The effort required for monthly showback reporting is minimal, saving precious time and providing detail for even greater IT resource savings.

… and again ACB in HCSC!

Similar concepts were also part of the session held by another of our most important and beloved customers, HCSC. Rick Kickert, IT Planning Senior Manager, and Ben Davies, Senior IT Business Consultant, discussed how ACB helped achieve cost transparency at HCSC.


Rick focused on the need to drive competitive advantage via efficiencies and cost-effectiveness through IT optimization, simplification, and agile infrastructure & applications, closing his remarks with some figures about the results.


Ben delighted the audience with some real-life success tips. The concept of the “useful, consistent lie” resonated in particular, as Ben’s way to explain that it’s wrong to expect to earn value from ACB only when perfect cost models are in place. The iterative-V approach – accept inaccuracy, but be more consistent and useful after each iteration – is the most sensible choice.


And even if we did not have the chance to present together, Ben took the time to give Moviri some credit and describe his experience working with us. Here are a couple of quotes:

“If smart were sexy, and it is, the Moviri guys would be super models”,


“the best capacity results come from Moviri, BMC, then Moviri, Moviri, Moviri, Moviri.”

Hadoop Capacity, SAP HANA and more

Engage has not only been about conference sessions but also, and more importantly, about welcoming IT professionals at our booth, discussing Performance & Availability and presenting our solutions, some of which we announced and premiered at Engage. The hottest topics were:

  • New connectors for TrueSight Capacity Optimization: we are working on complete solutions – integrations and reports – for two of the most important platforms available in the market: Cloudera Hadoop and SAP HANA.
  • Moviri Managed BCO: our managed service offering for BMC Capacity Optimization (BCO) customers designed to maximize the return on their BCO investment.

Let me recall the seed of an idea I started this post with: the Digitalization Era and disruptive technologies. Well, we believe Hadoop can play a relevant role in the emerging Big Data market – of course as a source of data, but, especially for capacity managers, also as a service that needs to be planned, optimized and aligned to real business demands.

We’ve seen a few proposals for Hadoop capacity analysis best practices published on the web over the last few months, and our frank feeling has been that “this is not quite enough… we could do way better”. We also had outstanding conversations on the topic!

Engage gave us the chance to announce our intent to take it to the next level. To us, capacity analysis has always meant more than reporting on “how much CPU I’m using and what the trend looks like”. How about dependencies and correlations? How about achieving an understanding of the true causes of resource consumption? How about being effective in identifying both hardware and software bottlenecks using a holistic approach? How about proactively planning based on actual demand information, rather than trending data?

It surely sounds ambitious: that’s why we’ve been so excited to hear so much good feedback on what we’ve shown, and we’re thrilled that many customers have agreed to help us move forward by working together on a trial version of the final product, to test and refine our intuitions and use-case design in the field.

Want to know more or join us? Let’s Engage!

To learn more about Moviri’s work with BMC and our capacity management track record, you can directly contact one of our experts.


Saipem’s session at Splunk .conf2014: Security and Compliance with Moviri

We recently joined our partner Splunk at the global annual Splunk .conf2014 conference, which took place on October 6-9, 2014 in Las Vegas.

We had the pleasure of sharing the stage there with one of our key clients, oil & gas contracting industry leader Saipem.

Saipem had a prominent role at the event, with a session led by Ugo Salvi and Luca Mazzocchi – respectively Saipem’s CIO and CISO – on “Splunk at Saipem: Security and Compliance in the Oil and Gas Industry”.


Why Splunk at Saipem

In his presentation, Mr. Salvi outlined how Saipem introduced Splunk into its IT ecosystem. Back in 2012, the company combined log management and compliance, establishing dashboards and an automatic alerting system to meet SOX and privacy compliance regulations. Thanks to the agility of the product and Moviri’s expertise, Saipem achieved unprecedented flexibility and time to market and decided to build upon the early success.

When, at the end of 2012, two of Saipem’s competitors were hit by business-disrupting malware attacks, Saipem started focusing on a few questions: are backup policies effectively in place? Can we restore business operations (Saipem’s yearly revenues exceed $10 billion) in case an attack is successful? The answers now lie in what Saipem calls “Backup Inspector”, a new Splunk-based application that has enabled Saipem to enforce policies across the enterprise for all the backups of all the relevant applications.

In 2014, Saipem progressed even further by setting up a new SIEM (Security Information and Event Management) system using Splunk and the Enterprise Security app to identify, address and investigate security threats. Meanwhile, Saipem, convinced of Splunk’s reliability as a source of information for IT and business (e.g. license usage or distribution of accounts around the world), is looking at possible future applications such as:

  • Industrial systems (SCADA, supervisory control and data acquisition)
  • APM, control room monitoring and troubleshooting
  • IT and Business reporting

SCADA data as a new opportunity

During the Q&A that followed Saipem’s session, most questions were related to SCADA data. Mr. Salvi pointed out that Saipem is going through a POC, waiting for the opportunity to have these devices made available from its operations. Saipem is also working with R&D to understand how to monitor pipeline stress and stretching using fiber optic technology. Splunk can correlate these industrial and pipeline data with other metrics. Another key challenge regarding SCADA data is investigating possible threats arising from maintenance activities, such as the VPNs opened to perform maintenance on systems aboard offshore vessels.

Key takeaways


For Moviri, working with Saipem and Splunk has been a long and rewarding process. And after these first few years of implementations, Mr. Salvi made it a point to share Saipem’s key takeaways:

  • Splunk has replaced and continues to successfully replace other tools within the Saipem IT ecosystem.
  • The challenge of digitization and IT in an enterprise setting that, like Oil & Gas, is by its very nature rooted in the analog, industrial world, presents great opportunities.
  • It has been a long journey for Splunk at Saipem (SOX in 2012, Backup Inspector in 2013, SIEM in 2014)… and it is far from over!

To learn more about Moviri’s Splunk capabilities, visit our Analytics and Security services or talk to our Experts.




VMware vForum 2014 and the Future of the Data Center Operational Model

At the recent vForum conference in Milan, where several Moviri experts were in attendance, VMware unveiled the results of a survey of more than 1,800 IT executives. The key findings highlight the increasing gap between the needs of the business and what IT is actually able to deliver.

IT is slowing business down

Two-thirds of IT decision makers say that there is an average gap of about four months between what the business expects and what IT can provide. The exponential growth in business expectations is increasingly unsustainable for traditional IT management. The IT challenges of the Mobile-Cloud era, as defined by VMware, require for example real-time data analysis, continuous delivery, or resource deployments in hours, if not minutes. This is not achievable with old resource management practices tied to hardware-driven infrastructures.

VMware’s answer

The answer, according to VMware, comes from the so-called Software-Defined Data Center (SDDC). VMware’s vision for IT infrastructure extends virtualization concepts such as abstraction, pooling and automation to all of the data center’s resources and services to achieve IT as a service (ITaaS). In an SDDC, all elements of the infrastructure (networking, storage, CPU and security) are virtualized and delivered as a service, in order to bring “IT at the Speed of Business”.

VMware IT at the Speed of Business

Nowadays, enterprises invest on average only 30% of their IT budget in innovation. The reasons include manual device management, slow and non-automated provisioning, production workloads handled via email and everything else IT needs to do just to “keep the lights on”. According to VMware, the SDDC could help enterprises save 30% of capex and up to 60% of opex, allowing the investment in innovation to reach 50% of the IT budget and thus increasing market competitiveness.

VMware NSX release

VMware has drawn inspiration from great players in infrastructure innovation like Amazon, Facebook, Netflix and Google and has developed products for each technology silo: vSphere for x86 virtualization, VSAN for storage and the recently released NSX for network virtualization, the big news of this year.

The VMware NSX network virtualization platform provides the critical third pillar of VMware’s SDDC architecture. Using network virtualization concepts, NSX delivers for networking what VMware has already delivered for compute and storage.

Network and Server Virtualization

In much the same way that server virtualization allows operators to manage virtual machines, a network hypervisor enables virtual networks to be handled without requiring any reconfiguration of the physical network.

Network virtualization overview

With network virtualization, the functional equivalent of a network hypervisor reproduces the complete set of Layer 2 to Layer 7 networking services (e.g., switching, routing, access control, firewalling and load balancing) in software. The result fundamentally transforms the data center network operational model, reduces network provisioning time and simplifies network operations.

Since networking is no longer just about connecting machines, but rather about delivering services like load balancing, firewall rule management or route planning, our first impression is that network virtualization – thanks to the combination of OpenFlow capabilities and experienced companies like VMware and Cisco – will have as revolutionary an impact on networks as server virtualization has had on servers.

As with x86 hypervisors, network hypervisors do not replace but enhance and add features on top of the physical layers. They do not provide connectivity themselves; they provide services and improve data center network agility. Physical connectivity is still required, but complex physical connections are no longer a requirement because everything can be handled at the software level.

In a nutshell, network virtualization (NV) is a tunnel. Rather than physically connecting two domains in a network, NV creates a connection through the existing network. NV is helpful because it saves the time required to physically wire up each new domain connection, especially for new virtual machines. This is valuable because companies don’t have to change what they have already built: they get a new instrument to virtualize their infrastructure and make changes on top of the existing one.

Network Virtualization

The key benefits of NV can be summarized as:

  • Ability to easily overcome VLAN limits to support network scalability requirements.
  • Each application can have its own network and security policy thanks to NV traffic isolation, improving multi-tenancy.
  • No need to touch Layer 1 for the majority of requests.
  • Improved performance for VM-to-VM traffic within the same server or rack, because traffic is handled by the virtual switch and the hops to the physical layer are simply skipped.
  • NV management tools represent a single point of configuration, monitoring and troubleshooting in large virtualized data centers.

However, there are some disadvantages:

  • The new workload coming with NV features is handled by the hypervisor’s kernel rather than by dedicated hardware.
  • Performance degradation and increased network traffic due to tunnel-header overhead (for example, VXLAN encapsulation adds roughly 50 bytes of outer Ethernet, IP, UDP and VXLAN headers to every packet).


Current adoption of NV technology is in its very early stages, with a few organizations in production and more communications service providers and enterprises in pilot implementations. Running NV software like NSX as an overlay to existing network infrastructure provides a relatively easy way to manage VM networking challenges. As a result, we expect NV adoption to increase strongly over the next two years, closing the gap with the SDDC vision and speeding up IT to meet business demands, as suggested by VMware.

(Images courtesy of VMware vForum 2014)


Insights from the DCIM leading edge at DatacenterDynamics Converged 2014

Claudio Bellia and I had the pleasure of attending the DatacenterDynamics Converged conference for the third year. DCD Converged is a one-day, peer-led data center conference that gathers IT and facility professionals.

What I like the most about this conference are the case study sessions. During these sessions, organizations present real-life initiatives showing how they managed to improve data center efficiency and save a good amount of energy – good for the environment and for company cash. In the process, they share interesting internal details, sometimes previously undisclosed, about the company data centers.

The DCD conference’s traditional audience is made up of facility designers and operators. However, over the years I have noticed increasingly relevant IT sessions, which demonstrates a growing recognition that, in addition to facilities, the management of servers, storage and networks also offers large optimization opportunities.

Here are some highlights from the two sessions I found the most interesting.

Telecom Italia and CNR Case Study: Energy Consumption and Cost Reduction through Data Center Workload Consolidation

The case study highlighted how Telecom Italia (the largest Italian telecommunications operator) is saving significant amounts of energy and costs thanks to initiatives specifically targeted at the “IT part” of the data center: server, storage and workload consolidation.

The session started off by showing the global medium-term trends that are driving enterprise IT evolution. Besides the usual suspects, such as cloud computing, big data and open source software, two lesser-discussed trends are the adoption of commodity hardware (no big news here) and IT efficiency, which can be summarized as proper server selection with energy efficiency in mind and, at the micro level, using knowledge of server workloads to perform consolidations and improve capacity planning (a relatively new concept). As IT optimization experts, at Moviri we wholeheartedly believe in IT efficiency as a major source of innovation, energy and cost savings, available to organizations today and in the future.

The next key point was about the technology refresh initiatives that Telecom Italia has performed to take advantage of the evolution in servers (such as virtualization and more powerful, efficient CPUs) and in storage (such as thin provisioning and auto-tiering) to optimize the usage of existing resources and slash energy bills. Traditional capacity management approaches too often can be summarized as: “do not buy new hardware until the installed capacity (read: sunk investment) is fully used”. At Moviri we believe this mantra has become obsolete, as current data center cost structures are very different from 20 years ago. Energy costs impact data center TCO in important ways (20% and rising), and proper technology refresh and server selection are paramount to achieving energy and cost savings, while also reducing footprint and increasing processing capacity!

Another point related to server processor selection. Despite CPUs’ increasing computing power and capacity over time (courtesy of Moore’s Law), what is often overlooked is which processor provides the best fit from a performance and cost perspective. Telecom Italia highlighted how newer Intel CPUs, if properly selected, can be a source of cost and energy savings. I can add that, in my experience, CPU performance and price vary greatly among the models on the market, so equipping the entire server fleet with the same standard CPU will guarantee unused capacity and unnecessarily high acquisition (CAPEX) and energy (OPEX) costs. As performance is workload dependent, a proper characterization of data center workloads is paramount to really understand the requirements and consequently make the best investment.

Finally, the session focused on the adoption of an emerging paradigm called Intelligent Workload Management: managing workloads in dynamic, virtualized environments to achieve increased utilization levels, reduce stranded capacity and save costs. Telecom Italia adopted this concept by implementing two products: Intel Datacenter Manager and Eco4Cloud. The former enables fine-grained collection of power and thermal metrics directly from the servers. The latter is an automated workload consolidation solution, designed by a CNR (Italian National Research Council) spin-off company, that can pack virtual machines onto fewer servers and hibernate the others. This resulted in at least 20% energy savings (gigawatt-hours!) and clearly highlights how important a data center infrastructure management (DCIM) solution is to optimizing data center efficiency and capacity.


Case Study: Eni Green Data Center – Why 360° Monitoring?

This case study, presented by Eni (the largest Italian gas utility), highlighted the energy efficiency design and operation of the company’s newest data center.

The first interesting data point relates to facility efficiency (a.k.a. Power Usage Effectiveness, or PUE: the ratio of total facility energy to the energy actually delivered to IT equipment) and to where the power goes (read: is wasted) in a typical data center vs. an optimized one. Standard, legacy data centers are typically poorly efficient (PUE > 2, or even 3), which means that up to 2/3 of the total energy entering the data center is wasted before reaching the IT equipment. What are the greatest offenders? Chiller plants, fan and pump losses, UPS/transformer losses. In contrast, the newest Eni data center has been designed with a target PUE of 1.2, which means that less than 20 percent of the energy is wasted in facilities.

What did Eni do to achieve such a level of efficiency? The actions were: (a) use of free cooling more than 75% of the time; (b) use of high-efficiency chillers; (c) introduction of large chimneys for natural heat extraction (taller than a 10-floor building!); (d) use of offline, small (200 kW), efficient (>99.4%) UPS; (e) cold air containment.

A key insight that Eni shared was why a pervasive, comprehensive, fine-grained monitoring system is paramount to understanding and tuning a complex plant such as a data center. Eni’s monitoring system tracks 50,000 metrics at a 10-second collection interval – over 430 million samples per day – and no data aggregation or purging is planned! Such a vast amount of data enables Eni to identify anomalies, increase efficiency and prevent issues before they impact production operations, for example by spotting fans rotating in opposite directions or uncalibrated energy meters reporting wrong values.


I hope you enjoyed my summary. The main, positive message I’d like to convey to IT and facility managers struggling with tight budgets is: start looking closely at your data center efficiency and costs; chances are that you can save huge amounts of energy and money, decrease your company’s environmental footprint and increase your IT capacity, perhaps even avoiding the construction of unnecessary new facilities. And be sure not to focus on facilities only, as the IT equipment is where most of the optimization potential can be realized.

If you’re looking for help, check out our Capacity Management offering!

LoadRunner in the cloud: testing everywhere you need

With the new version 12 of LoadRunner and Performance Center, HP introduces several new features to its market-leading testing software and adds enhancements that reduce time-to-test and extend the technical testing capabilities of the solution.

Testing everywhere

Cloud-based Load Generators

It is now possible to inject load from cloud services through Amazon Web Services: with this solution, customers with internet-exposed applications can execute load tests in a hybrid mode, with a mix of load generators within their network (as in previous versions) and load generators in the cloud, in order to simulate traffic from all over the world.

Enhanced support for Mobile Testing

It is now possible to test any mobile application, simply by using an Android application (on rooted devices) and integrating Shunra Network Virtualization, which allows customers to discover and then virtualize real-world network conditions in the test environment, simulating different locations and bandwidth conditions.

New VuGen Testing Script Recording Features

It’s now possible to use the latest versions of the most common browsers (not only IE 11, but also Firefox 23 and Chrome 30) for script recording. With the new integration with Wireshark and Fiddler, customers can now generate scripts more easily, avoiding custom requests for calls not recorded by VuGen.

With several protocol enhancements, recording SPDY, HTML5, Flex, Silverlight (and much more) will no longer be a problem. The new TruClient to Web/HTTP converter utility reduces scripting time, supporting simple web applications as well as modern JavaScript-based ones.

Platform Support Enhancements

Product installation is now possible on Windows Server 2012 as well, and without administrative accounts, with UAC and DEP enabled. This meets the needs of customers with strong security policies. Integration with Jenkins and with the latest versions of Eclipse Juno, JUnit and Selenium is now supported as well.

And more!

Many other enhancements, from continuous integration to new protocol support, are available with HP LoadRunner and Performance Center 12. You can find a full list in the official product documentation.


Demystifying Oracle database capacity management with workload characterization

From simple resource consumption to business KPIs models, leveraging database workload

In a nutshell, Capacity Management is all about modeling an IT system (i.e. applications and infrastructure) so that we can answer business questions such as:

  • how much business growth can my current system support? (residual capacity)
  • where and how should I focus investments (savings) to increase (optimize) my capacity?

When facing typical three-tier enterprise applications, creating accurate models is increasingly difficult as we move from the front-end layer to the back-end.

Front ends (i.e. web servers) and middle tiers (i.e. application servers) can be modeled with relative ease: their resource consumption is typically driven by user activity expressed in business-related terms (i.e. business transactions per hour, user visits per hour, etc.). The chart below shows a typical linear regression model for such a system: the application servers’ CPU shows a good linear relationship with the volume of purchase orders passing through the application. The high correlation index (R-squared value close to 1) is a statistical indication that order volume explains the pattern of CPU utilization remarkably well.
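
As a sketch of how such a model can be fitted directly in SQL, Oracle provides built-in linear regression aggregates (REGR_SLOPE, REGR_INTERCEPT, REGR_R2). The table and column names below are hypothetical, standing in for hourly CPU samples and hourly business volumes:

-- Hypothetical tables: app_cpu(sample_hour, cpu_util) and
-- biz_volume(sample_hour, orders_per_hour).
-- Fits cpu_util = slope * orders_per_hour + intercept and
-- returns the R-squared goodness of fit in the same pass.
SELECT REGR_SLOPE(c.cpu_util, v.orders_per_hour)     AS slope,
       REGR_INTERCEPT(c.cpu_util, v.orders_per_hour) AS intercept,
       REGR_R2(c.cpu_util, v.orders_per_hour)        AS r_squared
  FROM app_cpu c, biz_volume v
 WHERE c.sample_hour = v.sample_hour;

An r_squared close to 1, as in the chart, indicates that the business driver explains CPU utilization well.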


The problem: how to model databases?

When you deal with the back-end (i.e. database) component of an application, things are not as straightforward. Databases are typically difficult to understand and model, as they are inherently shared and their resources are pulled in several directions:

  • by multiple functions within the same application: the same database is typically used for both online transactions, background batches, business reporting, etc.
  • by multiple applications accessing the same database: quite often a single database is accessed by multiple applications at the same time, for different purposes
  • by infrastructure consolidation: a single server might run multiple database instances

The next chart shows a typical relation between the total CPU consumption of a shared database server and the business volume of one application. This model is evidently quite poor: a quick visual test shows that the majority of the samples do not lie on or close to the regression line. From a statistical standpoint, the R-squared value of 0.52 is well below normally acceptable levels (commonly used thresholds are 0.8 or 0.9).


The end result is that simple resource-consumption-vs-business-KPI models do not work for databases. This is unfortunate, as databases are usually the most critical application components and the ones customers are most interested in managing properly and efficiently from a capacity standpoint.

The solution: “characterize” the database work

I will now show you a methodology that we have developed and used to solve challenging database capacity management projects for our customers. The solution lies in a better characterization of the workload the database is supporting. In other words, we need to identify the different sources of work placing a demand on the database (i.e. the workload classes) and measure their impact in terms of resource consumption (i.e. their service demands).

If the database is accessed by different applications (or functions), we need to measure the resource consumption of each of them (e.g. CPU used by application A and B, IO requests issued by online and batch transactions and so on). Once we have that specific data, we are in a much better position to create capacity models correlating the specific workload source (i.e. online activity) with the corresponding workload intensity (i.e. purchase orders per hour).

That’s the theory, but how can we do it in practice? Keep reading and you will learn how we can take advantage of Oracle database metrics to accomplish this goal!

Characterizing Oracle database work using AWR/ASH views

Starting with release 10, Oracle has added a considerable amount of diagnostic information to the DBMS engine. It turns out that this same information is also very helpful from a capacity management perspective.

It is possible to extract CPU and I/O resource consumption by instance, schema (database user), machine or even application module/action/client id (module/action/client id information is available provided that the application developer has properly instrumented the application).
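
For reference, such instrumentation is typically done with Oracle’s standard DBMS_APPLICATION_INFO package. This is a minimal sketch; the module and action names are purely illustrative:

-- Tag the current session so its work is attributed to this
-- module/action in the ASH/AWR views (MODULE and ACTION columns).
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'ORDER_MGMT',     -- illustrative module name
    action_name => 'CREATE_ORDER');  -- illustrative action name
  -- ... application SQL runs here ...
  DBMS_APPLICATION_INFO.SET_MODULE(NULL, NULL);  -- clear when done
END;
/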

Oracle tracks resource consumption in a variety of ways. The most important ones, from the capacity management perspective, are on a per-session and on a per-SQL-statement basis. The former metrics can be retrieved from V$ACTIVE_SESSION_HISTORY, the latter from V$SQLSTATS. The corresponding DBA_HIST_* views can be queried for a longer history of samples.
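
To give an idea of the per-SQL-statement flavor, here is a sketch of a query over V$SQLSTATS listing the top CPU-consuming statements still present in the shared pool (CPU and elapsed times are in microseconds, hence the division):

-- Top 20 SQL statements by CPU time among those currently cached.
SELECT *
  FROM (SELECT sql_id,
               executions,
               ROUND(cpu_time / 1e6, 1)     AS cpu_secs,
               ROUND(elapsed_time / 1e6, 1) AS elapsed_secs,
               disk_reads
          FROM v$sqlstats
         ORDER BY cpu_time DESC)
 WHERE ROWNUM <= 20;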

The idea is to:

  1. identify how to isolate the application workload we are interested in (i.e. a specific user the application is using, or the machine from which it is connecting)
  2. aggregate the session-based (or SQL-based) resource consumption to form the total resource consumption of the application.

How to do it in practice?

I’ll show you how the work of an Oracle instance can be broken down by the applications accessing it, using session metrics.

In my experience, per-session metrics proved to be more accurate than per-SQL-statement metrics. Although the latter measure actual SQL resource consumption (no sampling), the drawback is that they might not track all the relevant SQL statements that the database executed. Indeed, SQL statements might age out of the library cache and therefore not be counted in AWR views and reports!

Step 1 is a function of how the application connects to the database. In this example, a single-instance database was being accessed by different applications, each using a specific user.

Step 2 is more interesting. How can we aggregate the application sessions to get the total CPU consumed? The Oracle Active Session History view samples session states every second (the historical DBA_HIST view retains one sample every 10 seconds). A possible approach is to:

  1. count the number of sessions that are ‘ON CPU’ and assign each sample the corresponding slice of CPU time (1 second for ASH samples, 10 seconds for DBA_HIST samples)
  2. sum all the CPU times within the desired time window (i.e. one hour), getting the total CPU busy time in seconds

It is important to note that this is a statistical measure of CPU consumption and therefore it might not be 100% accurate. Always double-check totals against resource consumption from other data sources known to be correct (e.g. monitoring agents or the operating system).

An example query that performs this calculation over the last 7 days of history follows:

-- Hourly CPU seconds consumed per database user over the last 7 days.
-- DBA_HIST_ACTIVE_SESS_HISTORY retains one sample every 10 seconds,
-- so each foreground 'ON CPU' sample accounts for ~10 CPU seconds.
SELECT TO_CHAR(a.sample_time, 'yyyy/mm/dd HH24') AS sample_hour,
       b.username,
       ROUND(SUM(DECODE(a.session_state, 'ON CPU',
             DECODE(a.session_type, 'BACKGROUND', 0, 1), 0)) * 10, 2) AS cpu_secs
  FROM dba_hist_active_sess_history a,
       dba_users b
 WHERE a.sample_time > SYSDATE - 7
   AND a.user_id = b.user_id
 GROUP BY TO_CHAR(a.sample_time, 'yyyy/mm/dd HH24'), b.username
 ORDER BY 1, 2


By using the methodology outlined above, I isolated the database CPU consumption caused by the work placed on it by the specific application I was interested in (for example, Order Management). The next chart shows the relation between the physical CPU consumption of the database (caused by our selected application) and the business volume of the selected application. The model now works great!


I hope you now agree with me that workload characterization is an essential step in capacity management. If properly conducted, it can provide you with remarkably good models when they are most needed!

Moviri at the Politecnico di Milano annual conference on the Internet of Things

The following post was co-authored by Giorgio Adami.

Every year MIP, Politecnico di Milano’s school of management, organizes a conference to present the results of its research on the Internet of Things market. Moviri was in attendance and we are pleased to report some of the key themes discussed throughout the event by a broad spectrum of experts and influencers: academics, venture capitalists and managers of the most important companies in the industry.

First of all, just to establish common ground, the Internet of Things (IoT) is an expression used to describe the phenomenon whereby popular and sometimes mundane objects are connected to, and reachable from, the Internet, becoming in fact nodes in the “Internet of Things”. The evolution of IoT not only involves efficiency improvements to services and processes that already exist; it primarily entails disruptive innovation.

Internet of Things today

Key data shared during the conference shows that the IoT market is growing. In 2013, in Italy alone, devices connected to the cellular network reached 6 million, a 20% increase over the previous year.

At the same time, the value of IoT solutions based on a cellular connection is estimated at €900 million, an 11% increase over the previous year. These numbers are remarkable, especially when considered within the context of the general trends in the economy and the negative growth of the local ICT market (-5% in 2013).


Img credit: rework of digital innovation (Politecnico di Milano, DIG)

It is not usual to evaluate trends in technology innovation with an observation window of only one year, but last year was an exception. In 2013, three main events combined to accelerate technology development:

  • Bluetooth Low Energy (BLE) is becoming a recognized standard. With its adoption in the Android OS and several other platforms (BLE has been present in the Apple world since 2011), BLE aims to be the “official” technology platform for every IoT application in the Personal Area Network segment. BLE can overcome limits such as the configuration of a mesh network, the compatibility between software and hardware of specific vendors and backward compatibility – limits that have slowed down the creation and development of IoT applications.
  • Numerous platforms able to manage, and develop applications for, multi-vendor devices were released, aiming to overcome the standardization gap.
  • GSMA issued the first embedded SIM specs, i.e. specifications for a SIM integrated in the device while still in the factory. The bond between the physical SIM and the telco operator that owns and manages the SIM can be removed, thus enabling provisioning and administration of the SIM over the air.


Img credit: rework of digital innovation (Politecnico di Milano, DIG) 

Internet of Things means Smart Car

The automotive industry is the segment of the IoT market with the biggest opportunity for growth in the next few years. Under the European eCall regulation, starting in 2015 every new vehicle will have to be able to make emergency calls. As for the rest of the world, Brazil and Russia have already passed similar laws, while China and India are about to do so.

Some new models are already equipped with optional equipment capable of monitoring the driver’s health, communicating with other vehicles (V2V) and with roadside infrastructure (V2I). Today, 95% of smart vehicles are equipped with GPS/GPRS sensors that trace the location and actions of the driver for insurance purposes.

Internet of things means Smart City

Smart City is a concept related not only to technology, but IoT is becoming the standard technology layer of every Smart City implementation. The adoption of the IoT paradigm enables multi-functional devices, promoting the development of projects shared among different actors, in a scenario where the allocation of costs is a key success factor.

Putting in place a so-called Smart Urban Infrastructure allows the implementation of different “smart” applications that share the same technology infrastructure, with savings of 50% or more in investment and operating costs. The main smart applications are dedicated to traffic and parking control, as well as to smart city lighting and waste collection. The implementation of a Smart Urban Infrastructure requires a model of cooperation between public and private actors; worldwide there are many successful examples of this kind of cooperation, while in Italy there is still room for improvement.

Internet of Things means Smart Metering

Smart meters interconnected for real-time measurement have been in use for years. In 2013, the Smart Metering segment confirmed its positive growth trend.

Internet of Things means Smart Home & Building

Smart solutions for domotics and industrial automation are already in use; in 2013, significant improvements focused on consumer benefits were introduced. BLE is the key factor for the development of IoT in the Smart Home & Building segment. More and more startups, as well as big players like Google (as the recent acquisition of Nest Labs demonstrates), have invested in this segment.


In sum, the close relationship between IoT and other technological fields like IT Governance and Big Data has proved to be outstanding. Moviri will continue to closely follow the development of IoT around the world, helping its customers maximize the value of the data they collect, supporting new business cases and leveraging the Big Data competencies of its consultants.