Moviri blog

Moviri and the Digitalization Era: 2014 BMC Engage Conference report

BMC Engage 2014

It was show time this October in Orlando for the 2014 BMC Engage Global User Conference!

We attended the event as both a sponsor and a speaker, learning about and building upon the renewed and refreshed BMC value proposition (our partners officially introduced their new brand identity and logo), showcasing some of our success stories, strengthening relationships and developing a vision of the future with and among our customers.

Capacity Management took the spotlight in four customer-centered sessions. In each of them Moviri’s role was highlighted, confirming our position as partner of choice in BMC’s Performance and Availability ecosystem. Citrix, HCSC and Sanofi explained how they fulfill critical business initiatives with our help, while Northern Trust identified the supported integration with SCOM – provided by Moviri – as one of the key drivers in its decision to adopt TrueSight Capacity Optimization.

The Digitalization Era and Disruptive Technologies


The main protagonists of the opening and closing keynotes, BMC CEO Bob Beauchamp and Tony Seba, lecturer in entrepreneurship, disruption and clean energy at Stanford University, introduced a vision that contains the seeds of a very exciting – or scary, depending on your perspective – idea: the IT industry is again on the verge of a new era, the Digitalization Era. Considering how information technologies have been at the core of every market disruption over the last decades, it is quite hard to confidently predict what we will all be talking about in three years.

So what does it take to win? We can outline the key success factors as:

  • Embracing an increasingly holistic approach, expanding visibility beyond the four walls of the data center.
  • Unleashing and successfully leveraging the business opportunities coming from IoT, Cloud Computing and Big Data. As we approach the zettabyte-of-data milestone, the market is waiting for guidance on a renewed set of best practices to use in conjunction with “what it used to be”.
  • Consumerization, empowerment and automation. Architectures, hardware devices and applications keep becoming more powerful and complex – take, for instance, the spread of converged architectures. It is critical to provide the market with automated, integrated, intuitive and robust solutions, so that IT professionals spend their time making decisions rather than figuring out how to tackle the data.


How does Moviri fit in this picture? It looks like our efforts to stay ahead of the curve will most likely pay off. Let’s review some highlights of what we showcased at Engage.

Sanofi’s Transformation Journey


Chris Wimer, Sanofi Global Capacity Manager, and I talked about how Capacity Management supported the ambitious IT Transformation program of consolidation, harmonization and automation in line with Sanofi’s renewed business strategy.


The key takeaways for the audience:

  • Rely on experts — and the experts are Moviri. ☺
  • Clearly define goals and challenges at the very start.
  • Perform a rock-solid assessment and analysis of the as-is, identify KPIs to measure the effectiveness of the process, review and prioritize the available data sources…then start implementing.
  • Think business-oriented: business value realization always comes first. That’s why processes and data streams need to be standardized, analytics need to be clear and accessible, areas of improvement must be measurable, priorities and service levels must be agreed upon.
  • Be holistic: as we have shown, one of the key success factors has been the ability to centralize IT and business information under the same roof.
  • Focus the reporting efforts on risk mitigation, provide improved visibility and reduce costs.
  • Leverage business-aware analysis, understand IT services as a whole, report and optimize capacity in terms of actual and predicted business demand… and make it all work by taking advantage of Splunk and Moviri’s connector for TrueSight Capacity Optimization!


Accounting and Chargeback in Citrix…


Troy Hall, Citrix Senior Business Analyst, and Moviri’s Renato Bonomini took the lead sharing best practices and recommendations on how to set up an effective Accounting and Chargeback (ACB) process with TrueSight Capacity Optimization.

One of Renato’s quotes stood out: “Not all servers were created equal”. With that he referred to the business value of performing ACB to achieve improved cost visibility, ensuring a fair allocation and showback of costs and resources.

As Renato explained, any ACB process is made up of three components:

  • Identify the costs, which is usually the most painful point – as confirmed by Troy – since reliable data is generally not available and needs to be estimated.
  • Allocate costs, choosing from a variety of possible models and weighing criteria such as simplicity, fairness, predictability, controllability and complexity (a simple proportional model is sketched after this list).
  • Chargeback costs to the final consumer.
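As an illustration of the “allocate” step, here is a minimal sketch of a simple proportional allocation model in SQL. The table and column names (monthly_usage, cluster_costs, cpu_hours) are hypothetical and not part of TrueSight Capacity Optimization; the query just splits a shared monthly cost across business units in proportion to their measured CPU consumption.

-- Hypothetical schema: monthly_usage(billing_month, business_unit, cpu_hours)
-- and cluster_costs(billing_month, total_cost).
-- Each business unit is charged its share of the month's total cost,
-- proportionally to the CPU hours it consumed.
SELECT u.billing_month,
       u.business_unit,
       ROUND(c.total_cost * u.cpu_hours /
             SUM(u.cpu_hours) OVER (PARTITION BY u.billing_month), 2) AS allocated_cost
FROM   monthly_usage u, cluster_costs c
WHERE  c.billing_month = u.billing_month
ORDER BY u.billing_month, u.business_unit

More sophisticated models (tiered rates, fixed-plus-variable charges) follow the same pattern, trading some simplicity for fairness and predictability.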


Knowing the theory is not enough: a critical success factor is to exercise ACB as an Iterative-V process:

  • Start with a model that makes sense given the currently available information.
  • Run the analysis, check results and evaluate deviations from expectations.
  • Adjust cost rates and allocation models, working closely with finance.
  • Re-iterate the process.


Troy outlined the most important positive outcomes of adopting ACB:

  • Increased satisfaction of business units, more educated IT decisions.
  • Resources are freed up and reallocated to more critical business objectives.
  • Ability to quickly reconfigure models and automatically generate auditable results.
  • The effort required for monthly showback reporting is minimal, saving precious time and providing the detail needed for even greater IT resource savings.

… and again ACB in HCSC!

Similar concepts also came up in the session held by another of our most important and beloved customers, HCSC. Rick Kickert, IT Planning Senior Manager, and Ben Davies, Senior IT Business Consultant, discussed how ACB helped achieve cost transparency at HCSC.


Rick focused on the need to drive competitive advantage via efficiencies and cost-effectiveness through IT optimization, simplification, and agile infrastructure & applications, closing his remarks with some figures about the results achieved.


Ben delighted the audience with some real-life success tips. The concept of a “useful, consistent lie” resonated in particular: it was Ben’s way of explaining that it is wrong to expect to earn value from ACB only once perfect cost models are in place. The iterative-V approach – accept inaccuracy, but be more consistent and useful after each iteration – is the most sensible choice.


And even if we did not have the chance to present together, Ben took the time to give Moviri some credit and describe his experience working with us. Here are a couple of quotes:

“If smart were sexy, and it is, the Moviri guys would be super models”,

and

“the best capacity results come from Moviri, BMC, then Moviri, Moviri, Moviri, Moviri.”

Hadoop Capacity, SAP HANA and more

Engage has not only been about conference sessions, but also, and more importantly, about welcoming IT professionals at our booth, discussing Performance & Availability and presenting our solutions, some of which we announced and premiered at Engage. The hottest topics were:

  • New connectors for TrueSight Capacity Optimization: we are working on complete solutions – integrations and reports – for two of the most important platforms on the market: Cloudera Hadoop and SAP HANA.
  • Moviri Managed BCO: our managed service offering for BMC Capacity Optimization (BCO) customers designed to maximize the return on their BCO investment.

Let me recall the seed of an idea I started this post with: the Digitalization Era and disruptive technologies. We believe Hadoop can play a relevant role in the emerging Big Data market – of course as a source of data, but, especially for capacity managers, also as a service that needs to be planned, optimized and aligned to real business demand.

We have seen a few proposals for Hadoop capacity analysis best practices published on the web over the last few months, and our frank feeling has been “this is not quite enough… we could do way better”. That said, we also had outstanding conversations!

Engage gave us the chance to announce our intent to take it to the next level. To us, a capacity analysis has always meant more than reporting on “how much CPU I’m using and what the trend looks like”. How about dependencies and correlations? How about achieving an understanding of the true causes of resource consumption? How about being effective in identifying both hardware and software bottlenecks using a holistic approach? How about proactively planning based on actual demand information, rather than trending data?

It surely sounds ambitious: that’s why we have been so excited to hear so much good feedback on what we showed, and we are thrilled that many customers have agreed to help us move forward by working together on a trial version of the final product, to test and refine our intuitions and use-case design in the field.

Want to know more or join us? Let’s Engage!

To learn more about Moviri’s work with BMC and our capacity management track record, you can directly contact one of our experts.


Saipem’s session at Splunk .conf2014: Security and Compliance with Moviri

We recently joined our partners at Splunk at the global annual Splunk .conf2014 conference, which took place on October 6-9, 2014 in Las Vegas.

We had the pleasure of sharing the stage there with one of our key clients, oil & gas contracting industry leader Saipem.

Saipem had a prominent role at the event, with a session led by Ugo Salvi and Luca Mazzocchi – respectively Saipem’s CIO and CISO – on “Splunk at Saipem: Security and Compliance in the Oil and Gas Industry”.

 

Why Splunk at Saipem

In his presentation, Mr. Salvi outlined how Saipem introduced Splunk into its IT ecosystem. Back in 2012, the company combined log management and compliance requirements to establish dashboards and an automatic alerting system that meet SOX and privacy regulations. Thanks to the agility of the product and Moviri’s expertise, Saipem achieved unprecedented flexibility and time to market and decided to build upon the early success.

When, at the end of 2012, Saipem’s competitors were hit in two instances by business-disrupting malware attacks, Saipem started focusing on a few questions: are backup policies effectively in place? Can we restore business operations (Saipem’s yearly revenues exceed $10 billion) in case an attack is successful? The answers now come from what Saipem calls “Backup Inspector”, a new application based on Splunk that has enabled Saipem to enforce policies across the enterprise for all the backups of all the relevant applications.

In 2014, Saipem progressed even further by setting up a new SIEM (Security Information and Event Management) system using Splunk and the Enterprise Security app to identify, address and investigate security threats. Meanwhile Saipem, convinced of Splunk’s reliability as a source of information for IT and the business (e.g. license usage or the distribution of accounts around the world), is looking at possible future applications such as:

  • Industrial systems (SCADA, supervisory control and data acquisition)
  • APM, control room monitoring and troubleshooting
  • IT and Business reporting

SCADA data as a new opportunity

During the Q&A that followed Saipem’s session, most questions were related to SCADA data. Mr. Salvi pointed out that Saipem is running a POC, waiting for the opportunity to have these devices made available from its operations. Saipem is also working with R&D to understand how to monitor pipeline stress and stretching using fiber optic technology. Splunk can correlate this industrial and pipeline data with other metrics. Another key challenge regarding SCADA data is investigating possible threats introduced by maintenance activities, such as the VPN that is opened to perform maintenance on systems aboard offshore vessels.

Key takeaways


For Moviri, working with Saipem and Splunk has been a long and rewarding process. And after these first few years of implementation, Mr. Salvi made it a point to share Saipem’s key takeaways:

  • Splunk has replaced and continues to successfully replace other tools within the Saipem IT ecosystem.
  • The challenge of digitization and IT in an enterprise setting that, like Oil & Gas, is by its very nature rooted in the analog, industrial world presents great opportunities.
  • It has been a long journey for Splunk at Saipem (SOX in 2012, Backup Inspector in 2013, SIEM in 2014)… and it is far from over!

To learn more about Moviri’s Splunk capabilities, visit our Analytics and Security services or talk to our Experts.

 

 

 

VMware vForum 2014 and the Future of the Data Center Operational Model

At the recent vForum conference in Milan, where several Moviri experts were in attendance, VMware unveiled the results of a survey of more than 1,800 IT executives. The key findings highlight the increasing gap between the needs of the business and what IT is actually able to deliver.

IT is slowing business down

Two-thirds of IT decision makers say that there is an average gap of about four months between what the business expects and what IT can provide. The exponential growth in business expectations is increasingly unsustainable for traditional IT management. The IT challenges of the Mobile-Cloud era, as VMware defines it, require, for example, real-time data analysis, continuous delivery and resource deployments in hours, if not minutes. This is not achievable with the old resource management model defined by hardware-driven infrastructure.

VMware’s answer

The answer, according to VMware, comes from the so-called Software-Defined Data Center (SDDC). VMware’s vision for IT infrastructure extends virtualization concepts such as abstraction, pooling and automation to all of the data center’s resources and services to achieve IT as a service (ITaaS). In an SDDC, all elements of the infrastructure (networking, storage, CPU and security) are virtualized and delivered as a service, in order to bring “IT at the Speed of Business”.

VMware IT at the Speed of Business

Nowadays, enterprises invest on average only 30% of their IT budget in innovation. The reasons include manual device management, slow and non-automated provisioning, production workloads handled via email and everything else IT needs to do just to “keep the lights on”. According to VMware, the SDDC could help enterprises save 30% of capex and up to 60% of opex, allowing the investment in innovation to reach 50% of the IT budget and thus increasing market competitiveness.

VMware NSX release

VMware has drawn inspiration from great players in infrastructure innovation like Amazon, Facebook, Netflix and Google and has developed products for each technology silo: vSphere for x86 virtualization, VSAN for storage and the recently released NSX for network virtualization, the big news of this year.

The VMware NSX network virtualization platform provides the critical third pillar of VMware’s SDDC architecture. NSX delivers for networking what VMware has already delivered for compute and storage using network virtualization concepts.

Network and Server Virtualization

In much the same way that server virtualization makes it possible to manage virtual machines, a network hypervisor enables virtual networks to be handled without requiring any reconfiguration of the physical network.

Network virtualization overview

With network virtualization, the functional equivalent of a network hypervisor reproduces the complete set of Layer 2 to Layer 7 networking services (e.g. switching, routing, access control, firewalling and load balancing) in software. The result fundamentally transforms the data center network operational model, reduces network provisioning times and simplifies network operations.

Since networking is no longer just about connecting machines, but rather about delivering services such as load balancing, firewall rule management and routing, our first impression is that network virtualization – thanks to the combination of OpenFlow capabilities and experienced companies like VMware and Cisco – will have a revolutionary impact on the network similar to the one server virtualization has had on servers.

As with x86 hypervisors, network hypervisors do not replace the physical layers but enhance them and add features on top. They do not provide connectivity themselves; they provide services and improve data center network agility. Physical connectivity is still required, but complex connections are no longer a requirement because everything can be handled at the software level.

Network virtualization (NV) is, in a nutshell, a tunnel. Rather than physically connecting two domains in a network, NV creates a connection through the existing network to connect them. NV is helpful because it saves the time required to physically wire up each new domain connection, especially for new virtual machines. This is valuable because companies don’t have to change what they have already done: they get a new instrument to virtualize their infrastructure and make changes on top of the existing one.

Network Virtualization

The key benefits of NV can be summarized as:

  • The ability to easily overcome VLAN limits to support network scalability requirements.
  • Each application can have its own network and security policy thanks to NV traffic isolation, improving multi-tenancy.
  • No need to touch Layer 1 for the majority of requests.
  • Improved performance for VM-to-VM traffic within the same server or rack, because traffic is handled by the virtual switch and the hops to the physical layer are skipped.
  • NV management tools represent a single point of configuration, monitoring and troubleshooting in large virtualized data centers.

However, there are some disadvantages:

  • The additional workload introduced by NV features is now handled by the hypervisor kernel rather than by dedicated hardware.
  • Performance degradation and increased network traffic due to tunnel-header overhead.

Conclusion

Current adoption of NV technology is in its very early stages with a few organizations in production and more communications service providers and enterprises in pilot implementations. Running NV software like NSX as an overlay to existing network infrastructure provides a relatively easy way to manage VM networking challenges. As a result, we expect NV adoption to strongly increase during the next two years in order to close the gap with SDDC and speed up IT to meet business demands, as suggested by VMware.

(Images courtesy of VMware vForum 2014)

 

Insights from the DCIM leading edge at DatacenterDynamics Converged 2014

Claudio Bellia and I had the pleasure to attend the DatacenterDynamics Converged conference for the third year. DCD Converged is a one-day, peer-led data center conference that gathers IT and Facility professionals.

What I like most about this conference are the case study sessions. During these sessions, organizations present real-life initiatives showing how they managed to improve data center efficiency and save a good amount of energy – good for the environment and for company cash. In the process, they share interesting internal details, sometimes previously undisclosed, about the company data centers.

The DCD conference’s traditional audience is made up of facility designers and operators. However, over the years I have noticed increasingly relevant IT sessions, which demonstrates a growing recognition that, in addition to facilities, the management of servers, storage and networks also offers large optimization opportunities.

Here are some highlights from the two sessions I found the most interesting.

Telecom Italia and CNR Case Study: Energy Consumption and Cost Reduction through Data Center Workload Consolidation

The case study highlighted how Telecom Italia (the largest Italian telecommunications operator) is saving significant amounts of energy and money thanks to initiatives specifically targeted at the “IT part” of the data center, through server, storage and workload consolidation.

The session started off by showing the global medium-term trends that are driving enterprise IT evolution. Besides the usual suspects, such as cloud computing, big data and open source software, two lesser talked-about trends are the adoption of commodity hardware (no big news here) and IT efficiency, which can be summarized as proper server selection with energy efficiency in mind and, at the micro level, using knowledge of server workloads to perform consolidations and improve capacity planning (a relatively new concept). As IT optimization experts, at Moviri we wholeheartedly believe in IT efficiency as a major source of innovation, energy and cost savings, available to organizations today and in the future.

The next key point was about the technology refresh initiatives that Telecom Italia has performed to take advantage of the evolution in servers (such as the adoption of virtualization and more powerful and efficient CPUs) and storage (such as thin provisioning and auto-tiering), to optimize the usage of existing resources and to slash energy bills. Traditional capacity management approaches too often can be summarized as: “do not buy new hardware until the installed capacity (read: sunk investment) is fully used”. At Moviri we believe this mantra has become obsolete, as current data center cost structures are very different from 20 years ago. Energy costs impact data center TCO in important ways (20% and rising), and proper technology refresh and server selection are paramount to achieving energy and cost savings, while reducing footprint and increasing processing capacity too!

Another point is related to server processor selection. Despite the increasing computing power and capacity of CPUs over time (courtesy of Moore’s Law), what is often overlooked is which processor provides the best fit from a performance and cost perspective. Telecom Italia highlighted how newer Intel CPUs, if properly selected, can be a source of cost and energy savings. I can add that, in my experience, CPU performance and price vary greatly among the models on the market, so equipping the entire server fleet with the same standard CPU will guarantee unused capacity and unnecessarily high acquisition (CAPEX) and energy (OPEX) costs. As performance is workload dependent, a proper characterization of data center workloads is paramount to really understand what the requirements are and consequently make the best investment.

Finally, the session focused on the adoption of an emerging paradigm called Intelligent Workload Management: managing workloads in dynamic, virtualized environments to achieve higher utilization levels, reduce stranded capacity and save costs. Telecom Italia adopted this concept by implementing two products: Intel Datacenter Manager and Eco4Cloud. The former enables fine-grained collection of power and thermal metrics directly from the servers. The latter is an automated workload consolidation solution, designed by a CNR (Italian National Research Council) spin-off company, that can pack virtual machines onto fewer servers and hibernate the others. This resulted in at least 20% energy savings (gigawatt-hours!) and clearly highlights how important a data center infrastructure management (DCIM) solution is to optimizing data center efficiency and capacity.


Case Study: Eni Green Data Center – Why 360° Monitoring?

This case study, presented by Eni (the largest Italian gas utility), highlighted the energy-efficiency design and operation of the company’s newest data center.

The first interesting data point relates to facility efficiency (a.k.a. Power Usage Effectiveness, or PUE) and to where the power goes (read: is wasted) in a typical data center vs. an optimized one. Standard, legacy data centers are typically quite inefficient (PUE > 2, or even 3), which means that up to 2/3 of the total energy entering the data center is wasted before reaching the IT equipment. What are the greatest offenders? Chiller plants, fan and pump losses, UPS/transformer losses. In contrast, the newest Eni data center has been designed with a target PUE of 1.2, which means that the energy wasted in facilities is less than 20 percent.
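For reference, PUE is simply the ratio between the total energy entering the facility and the energy that actually reaches the IT equipment, which is where the 2/3 figure comes from:

\mathrm{PUE} = \frac{E_{\mathrm{total\ facility}}}{E_{\mathrm{IT\ equipment}}}, \qquad \mathrm{PUE} = 3 \;\Rightarrow\; \frac{E_{\mathrm{IT}}}{E_{\mathrm{total}}} = \frac{1}{3}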

What did Eni do to achieve such a level of efficiency? The actions were: (a) use of free cooling for more than 75% of the time, (b) use of high-efficiency chillers, (c) introduction of large chimneys for natural heat extraction (taller than a 10-floor building!), (d) use of offline, small (200 kW), efficient (>99.4%) UPS units, (e) cold air containment.

A key insight that Eni shared was why a pervasive, comprehensive, fine-grained monitoring system is paramount to understanding and tuning a plant as complex as a data center. Eni’s monitoring system tracks 50,000 metrics with a 10-second collection interval – and no data aggregation or purging is planned! Such a vast amount of data enables Eni to identify anomalies, increase efficiency and prevent issues before they impact production operations, for example by spotting fans rotating in opposite directions or uncalibrated energy meters reporting wrong values.


I hope you enjoyed my summary. The main, positive message I’d like to convey to IT and facility managers struggling with tight budgets is: start looking closely at your data center efficiency and costs; chances are that you might save huge amounts of energy and money, decrease your company’s environmental footprint and increase your IT capacity, perhaps even avoiding the construction of unnecessary new facilities. And be sure not to focus on facilities only, as the IT equipment is where most of the optimization potential can be realized.

If you’re looking for help, check out our Capacity Management offering!

LoadRunner in the cloud: testing everywhere you need

With the new version 12 of LoadRunner and Performance Center, HP introduces several new features to its market-leading testing software and adds enhancements that reduce time-to-test and extend the solution’s technical testing capabilities.

Testing everywhere

Cloud-based Load Generators

It is now possible to inject load from the cloud through Amazon Web Services: with this solution, customers with internet-exposed applications can execute load tests in a hybrid mode, mixing load generators within their own network (as in previous versions) with load generators in the cloud, in order to simulate traffic from all over the world.

Enhanced support for Mobile Testing

It is now possible to test any mobile application, simply by using an Android application (on rooted devices) and integrating Shunra Network Virtualization, which allows customers to discover and then virtualize real-world network conditions in the test environment, simulating different locations and bandwidth conditions.

New VuGen Testing Script Recording Features

It is now possible to use the latest versions of the most common browsers (not only IE 11, but also Firefox 23 and Chrome 30) for script recording. With the new integration with Wireshark and Fiddler, customers can generate scripts more easily, avoiding the use of custom requests for calls not recorded by VuGen.

With several protocol enhancements, recording SPDY, HTML5, Flex, Silverlight (and much more) will no longer be a problem. The new TruClient to Web/HTTP converter utility reduces scripting time, supporting simple web applications as well as modern JavaScript-based ones.

Platforms Support Enhancements

Product installation is now possible on Windows Server 2012, without administrative accounts, and with UAC and DEP enabled. This meets the needs of customers with strict security policies. Integration with Jenkins and with the latest versions of Eclipse Juno, JUnit and Selenium is now supported as well.

And more!

Many other enhancements, including continuous integration and new protocol support, are available with HP LoadRunner and Performance Center 12; the full list is available in HP’s release documentation.

 

Demystifying Oracle database capacity management with workload characterization

From simple resource consumption to business KPI models, leveraging database workload characterization

In a nutshell, Capacity Management is all about modeling an IT system (i.e. applications and infrastructure) so that we can answer business questions such as:

  • how much business growth can my current system support? (residual capacity)
  • where and how should I focus investments (savings) to increase (optimize) my capacity?

When facing typical three-tier enterprise applications, creating accurate models is increasingly difficult as we move from the front-end layer to the back-end.

Front ends (i.e. web servers) and middle tiers (i.e. application servers) can be modeled with relative ease: their resource consumption is typically driven by user activity expressed in business-related terms (i.e. business transactions per hour, user visits per hour, etc.). The chart below shows a typical linear regression model for such a system: the application servers’ CPU shows a good linear relationship with the volume of purchase orders passing through the application. The high correlation index (an R-squared value close to 1) is a statistical indication that order volume explains the pattern of CPU utilization remarkably well.

[Chart: application server CPU utilization vs. purchase order volume, with a linear regression line and an R-squared value close to 1]
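For reference, the model behind a chart like this is a simple linear regression of resource consumption on the business driver, with R-squared measuring how much of the observed variance the driver explains:

\text{CPU utilization} = \beta_0 + \beta_1 \cdot (\text{orders per hour}) + \varepsilon, \qquad R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}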

The problem: how to model databases?

When you deal with the back-end (i.e. database) component of an application, things are not as straightforward. Databases are typically difficult to understand and model as they are inherently shared, with their resources pulled in several directions:

  • by multiple functions within the same application: the same database is typically used for online transactions, background batches, business reporting, etc.
  • by multiple applications accessing the same database: quite often a single database is accessed by multiple applications at the same time, for different purposes
  • by infrastructure consolidation: a single server might run multiple database instances

The next chart shows a typical relation between the total CPU consumption of a shared database server and the business volume of one application. This model is evidently quite poor: a quick visual test shows that the majority of the samples do not lie on or close to the regression line. From a statistical standpoint, the R-squared value of 0.52 is well below normally acceptable levels (commonly used thresholds are 0.8 or 0.9).

[Chart: shared database server CPU vs. business volume of a single application, with a poorly fitting regression line and an R-squared value of 0.52]

The end result is that simple resource-consumption vs. business-KPI models do not work for databases. This is unfortunate, as databases are usually the most critical application components and the ones customers are most interested in managing properly and efficiently from a capacity standpoint.

The solution: “characterize” the database work

I will now show you a methodology that we have developed and used to solve challenging database capacity management projects for our customers. The solution lies in a better characterization of the workload the database is supporting. In other words, we need to identify the different sources of work placing a demand on the database (e.g. workload classes) and measure their impact in terms of resources consumption (e.g. their service demands).

If the database is accessed by different applications (or functions), we need to measure the resource consumption of each of them (e.g. CPU used by application A and B, IO requests issued by online and batch transactions, and so on). Once we have that specific data, we are in a much better position to create capacity models correlating the specific workload source (i.e. online activity) with the corresponding workload intensity (i.e. purchase orders per hour).

That’s the theory, but how can we do it in practice? Keep reading and you will learn how we can take advantage of Oracle database metrics to accomplish this goal!

Characterizing Oracle databases work using AWR/ASH views

Starting with release 10g, Oracle has added a considerable amount of diagnostic information to the DBMS engine. It turns out that this same information is also very helpful from a capacity management perspective.

It is possible to extract CPU and I/O resource consumption by instance, schema (database user), machine or even application module/action/client id (Module/action/client id information is available provided that the application developer has properly instrumented it. See for example: http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_appinf.htm ).
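As a minimal sketch of what that instrumentation looks like on the application side, a session can tag itself through the standard DBMS_APPLICATION_INFO and DBMS_SESSION packages; the module, action and client id values below are purely illustrative.

-- Tag the current session so that module/action/client id show up in ASH and AWR.
-- 'ORDER_MGMT', 'CREATE_ORDER' and 'web_user_42' are hypothetical values.
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'ORDER_MGMT',
                                   action_name => 'CREATE_ORDER');
  DBMS_SESSION.SET_IDENTIFIER('web_user_42');
END;
/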

Oracle tracks resource consumption in a variety of ways. The most important ones from a capacity management perspective are on a per-session and on a per-SQL-statement basis. The former metrics can be retrieved from V$ACTIVE_SESSION_HISTORY, the latter from V$SQLSTATS. The corresponding DBA_HIST_<VIEW> can be queried for more historical samples.
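For the per-SQL-statement path, a minimal sketch might look like the query below, which aggregates hourly CPU seconds per parsing schema and module from the AWR repository (Diagnostics Pack license required). Column names assume an 11g or later repository; check them against your release.

-- Hourly CPU seconds by parsing schema and module, from per-SQL-statement AWR metrics.
-- CPU_TIME_DELTA is expressed in microseconds.
SELECT TO_CHAR(s.begin_interval_time, 'yyyy/mm/dd HH24') AS sample_hour,
       q.parsing_schema_name,
       q.module,
       ROUND(SUM(q.cpu_time_delta)/1e6, 2) AS cpu_secs
FROM   dba_hist_sqlstat q, dba_hist_snapshot s
WHERE  q.snap_id = s.snap_id
   AND q.dbid = s.dbid
   AND q.instance_number = s.instance_number
   AND s.begin_interval_time > sysdate - 7
GROUP BY TO_CHAR(s.begin_interval_time, 'yyyy/mm/dd HH24'), q.parsing_schema_name, q.module
ORDER BY 1, 2, 3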

The idea is to:

  1.  identify how to isolate the application workload we are interested in (i.e. a specific user the application is using, or machine from which it is connecting)
  2. aggregate the session (SQL) based resource consumption to form the total resource consumption of the application.

How to do it in practice?

I’ll show you how the work of an Oracle instance can be broken down by the applications accessing it, using session metrics.

In my experience, per-session metrics have proved to be more accurate than per-SQL-statement metrics. Although the latter measure actual SQL resource consumption (no sampling), the drawback is that they might not track all the relevant SQL statements the database executed. Indeed, SQL statements might age out of the library cache and therefore not be counted in AWR views and reports! See for example: http://jonathanlewis.wordpress.com/2013/03/29/missing-sql/

Step 1 is a function of how the application connects to the database. In this example, a single instance database was being accessed by different applications each using a specific user.

Step 2 is more interesting. How can we aggregate the application sessions to get the total CPU consumed? The Oracle Active Session History view samples session states every second (the AWR counterpart, DBA_HIST_ACTIVE_SESS_HISTORY, retains roughly one sample every 10 seconds). A possible approach is to:

  1. count the sampled sessions that are ‘ON CPU’ and assign each sample the corresponding amount of CPU time (1 second for V$ACTIVE_SESSION_HISTORY, about 10 seconds for DBA_HIST_ACTIVE_SESS_HISTORY)
  2. sum all the CPU times within the desired time window (i.e. one hour), getting the total CPU busy time in seconds

It is important to note that this is a statistical measure of CPU consumption and therefore might not be 100% accurate. Always double-check totals against resource consumption from other data sources known to be correct (e.g. monitoring agents or the operating system).

An example query that performs this calculation is shown below:

-- CPU seconds per database user, by hour, over the last 7 days.
-- DBA_HIST_ACTIVE_SESS_HISTORY retains roughly one ASH sample every 10 seconds,
-- so each 'ON CPU' row accounts for about 10 seconds of CPU time; background
-- processes are excluded. Divide by 3600 if you prefer the result in CPU hours.
SELECT TO_CHAR(sample_time, 'yyyy/mm/dd HH24') AS sample_time,
       username,
       ROUND(SUM(DECODE(session_state, 'ON CPU',
                        DECODE(session_type, 'BACKGROUND', 0, 1),
                        0)) * 10, 2) AS cpu_secs
FROM   dba_hist_active_sess_history a, dba_users b
WHERE  sample_time > sysdate - 7
   AND a.user_id = b.user_id
GROUP BY TO_CHAR(sample_time, 'yyyy/mm/dd HH24'), username
ORDER BY 1, 2

Results

By using the methodology outlined above, I isolated the database CPU consumption caused by the workload generated by the specific application I was interested in (for example, Order Management). The next chart shows the relation between the physical CPU consumption of the database (caused by our selected application) and the business volume of that application. The model now works great!

[Chart: database CPU consumption attributed to the selected application vs. that application’s business volume, with a well-fitting regression line]

I hope you now agree with me that workload characterization is an essential step in capacity management. If properly conducted, it can provide you with remarkably good models when they are most needed!

Moviri at the Politecnico di Milano annual conference on the Internet of Things

The following post has been co-authored by Giorgio Adami.

Every year MIP, Politecnico di Milano’s school of management, organizes a conference to present the results of its research on the Internet of Things market. Moviri was in attendance and we are pleased to report some of the key themes discussed throughout the event by a broad spectrum of experts and influencers: academics, venture capitalists and managers of the most important companies in the industry.

First of all, just to establish common ground: Internet of Things (IoT) is an expression used to describe the phenomenon whereby popular and sometimes mundane objects are connected to, and reachable from, the Internet, becoming in fact nodes in the “Internet of Things”. The evolution of IoT not only involves making existing services and processes more efficient, but primarily entails disruptive innovation.

Internet of Things today

Key data shared during the conference shows that the IoT market is growing. In 2013, in Italy alone, devices connected to the cellular network reached 6 million, a 20% increase over the previous year.

At the same time, the value of IoT solutions based on a cellular connection is estimated at €900 million, an 11% increase over the previous year. These numbers are remarkable, especially when they are considered within the context of the general trends in the economy and the negative growth of the local ICT market (-5% in 2013).


Image credit: reworked from Osservatori.net Digital Innovation (Politecnico di Milano, DIG)

It is not usual to evaluate trends in technology innovation with an observation window of only one year, but last year was an exception. In 2013, three main events combined to accelerate technology development:

  • Bluetooth Low Energy (BLE) is becoming a recognized standard. With its adoption in the Android OS and several other platforms (BLE has been present in the Apple world since 2011), BLE aims to be the “official” technology platform for every IoT application in the Personal Area Network segment. BLE can overcome limitations such as mesh network configuration, compatibility between specific vendors’ software and hardware, and backward compatibility – limitations that have slowed down the creation and development of IoT applications.
  • Numerous platforms able to manage and develop applications for multi-vendor devices were released, aiming to bridge the standardization gap.
  • The GSMA has issued the first embedded SIM specs, i.e. specifications for a SIM integrated into the device at the factory. The bond between the physical SIM and the telco operator that owns and manages it can be removed, enabling over-the-air provisioning and administration of the SIM.


Image credit: reworked from Osservatori.net Digital Innovation (Politecnico di Milano, DIG)

Internet of Things means Smart Car

The automotive industry is the segment of the IoT market with the biggest growth opportunity over the next few years. Under the European eCall regulation, starting in 2015 every new vehicle will have to be able to make emergency calls. As for the rest of the world, Brazil and Russia have already passed similar laws, while China and India are about to do so.

Some new models already come with optional equipment capable of monitoring the driver’s health and of communicating with other vehicles (V2V) and with roadside infrastructure (V2I). Today, 95% of smart vehicles are equipped with GPS/GPRS sensors to trace the location and the actions of the driver for insurance purposes.

Internet of Things means Smart City

The Smart City is not only a technology concept, but IoT is becoming the standard technology layer for every Smart City implementation. The adoption of the IoT paradigm enables multi-functional devices, promoting the development of projects shared among different actors, in a scenario where cost allocation is a key success factor.

Putting in place a so-called Smart Urban Infrastructure allows different “smart” applications to share the same technology infrastructure, with savings of 50% or more in investment and operating costs. The main smart applications are dedicated to traffic and parking control, as well as to smart city lighting and waste collection. Implementing a Smart Urban Infrastructure requires a model of cooperation between public and private actors; worldwide there are many successful examples of this kind of cooperation, while in Italy there is still room for improvement.

Internet of Things means Smart Metering

Smart meters interconnected for real-time measurement have been in use for years. In 2013 the Smart Metering segment confirmed its positive growth trend.

Internet of Things means Smart Home & Building

Smart solutions for home and industrial automation are already in use; in 2013 significant improvements, focused on consumer benefits, were introduced. BLE is the key enabler for the development of IoT in the Smart Home & Building segment. More and more startups, as well as big players like Google (as the recent acquisition of Nest Labs demonstrates), have invested in this segment.

 

In sum, the close relationship between IoT and other technological fields like IT governance and Big Data has proved significant. Moviri will continue to closely follow the development of IoT around the world, helping its customers maximize the value of the data they collect, supporting new business cases and leveraging the Big Data competencies of its consultants.

Active or passive monitoring, that is the question

How to keep a watchful eye on your user experience and SLAs: active and passive monitoring combined

Service Level Management is fundamental to ensuring service quality at both the technical and the business level. In order to be effective, not only internal services but also third-party ones (e.g. ads) need to be monitored.

SLA monitoring should be carried out with both active and passive methods, because the two focus on distinct aspects:

  • Active monitoring, also referred to as synthetic monitoring, performs regular, scripted checks that simulate end-user behavior in controlled and predictable conditions

  • Passive monitoring concentrates on real end-user activity by “watching” what users are doing right now


Active (aka synthetic) Monitoring

What are the distinctive features of active monitoring and why is it useful?

The synthetic, script-based approach is the only feasible method for availability monitoring because:

  • it runs from outside the data center, so it enables monitoring of timing and errors specific to external web components not hosted locally

  • it determines if the site (or specific page) is up or down

  • it verifies the site availability when there are no end users hitting that particular page

  • it can test various protocols (not only HTTP/S) like Flash, Silverlight, Citrix ICA, FTP, etc.

  • it is able to execute scripts from specific locations and so it can determine region-specific issues

  • it is essential in order to establish a performance baseline before a new application release deployment

Actually, there are some drawbacks too:

  • a script tests the same navigational path over and over again

  • the test is not “real”: you can’t buy something every five minutes on a real web site

  • periodic sampling does not provide a good indication of what real users are experiencing

  • a script cannot test odd human combinations

Passive (aka sniffing) Monitoring

What are the distinctive features of passive monitoring and why is it useful?

The passive “sniffing” approach is the only feasible way to look at real user conditions:

  • it looks at the traffic generated by real users of the website

  • it enables monitoring of real transactions, such as bank transfers, purchases, and so on

  • it measures all aspects of your users’ experiences: user location, browser, ISP, device, etc.

  • it detects all errors that can occur and is able to take screenshots of these situations

Actually, there are some drawbacks in this case too:

  • the data collected is limited to the traffic on the network the router is attached to

  • it requires real user traffic; if there are no users on the site, it cannot collect any data

  • it offers no ability to monitor issues that occur outside of the network, such as DNS problems, network issues, and firewall and router outages

The combination

The combination of active and passive monitoring provides a more comprehensive view of performance, availability and end user experience. In addition, for companies that outsource their infrastructure, active monitoring offers a way to validate the SLAs provided by the outsourcer.

Using an approach that combines both passive and active monitoring methods offers the highest degree of quality assurance, because issues can be detected before they impact real users or in near real time.

For additional information, please contact me.

Hello Boston!

Last November Moviri reached another important milestone with the opening of its new US headquarters in Boston. This follows the relocation of a start-up team of five colleagues who will take charge of launching and scaling the new venture.

I’d like to celebrate this milestone and go through the key reasons that led us to undertake this endeavor.

First, it’s a big business opportunity.

We’ll have access to the most mature market for IT services in the world, a market which appreciates and rewards companies built on skills, reliability and high-level professionalism, which is exactly what Moviri is and how it is perceived.

We’ve done this step by step, starting with the incorporation of a controlled legal entity and the progressive relocation of the core team during 2013. We’ve done this after testing the market and building solid business relationships with a core set of customers and partners, to the point where the US-based business already accounts for almost one sixth of our worldwide consulting services revenues.

The US market boasts some of the largest organizations in the world, where IT is a business driver (and no longer just an “enabler”) and where our IT Performance offering can provide the greatest value to customers.

Second, we are believers in internationalization.

To see and understand opportunities in the early stage; to create cultural diversity in what started as a “strongly typed” engineering organization; to have more interesting and varied work experiences for our team. It’s the unique blend of situations we aim to create and be a part of.

Last, we follow dreams and trust “gut feeling”.

If one walks into the MIT buildings, one cannot help but notice the resemblance to our own classrooms and feel a sense of connection to Moviri’s engineering and research roots. That accounts for a lot. “Revenue” can be boring (even though “cash” is king…) and business is also about passion. Choosing a place like Boston is part of it.

Our new office is located at One Boston Place, an iconic landmark in the Bostonian landscape. We’re going to write a new address on our email signatures:

Moviri Inc.
One Boston Place, Suite 2600
Boston, MA 02108

Why Boston? There are of course many very rational reasons for choosing Boston:

  • It’s a culturally rich city that hosts one of the most prominent global academic hubs, with institutions like Harvard, MIT and Boston University. In total: 5 junior colleges, 18 colleges that primarily grant baccalaureate and master’s degrees, 9 research universities, and 26 special-focus institutions.
  • The Greater Boston Area has been home to many disruptive IT companies and competes with Silicon Valley and, lately, with the New York technology hub.
  • It is a short flight away from major economic centers like New York, Philadelphia, Toronto and Washington DC, and only a few time zones away from Europe.
  • It is close to many of our banking and insurance customers.

I’d like to say “good luck” to the Boston team and give everybody a heads-up: this is not an achievement that we celebrate, it’s just the beginning of a new journey… stay tuned.

Moviri Boston Office

HP Discover 2013 Highlights


From December 9 to 12, the annual Hewlett-Packard European convention, HP Discover (@HPDiscover), took place in Barcelona. For the second time in a row, the Californian behemoth delivered the full firepower of its business units – Hardware & Networking, Printing & Personal Solutions, Software, Services – with more than 800 sessions throughout the event. The most discussed themes in software were Security, Big Data and Mobility.

Some highlights:

  • The annual HP-Capgemini report on “world testing trends”, built with data provided by 1,500 clients. You can get the report here.
  • The Accenture customer case study on Security, for a global oil and gas enterprise customer, which featured clear evidence of how the impact and effectiveness of cyber attacks often depend more on social context and industry than on technology.
  • The software product roadmaps and news did not deliver the “splash” we had come to expect from HP and Mercury, focusing instead on integrating the many recently acquired solutions and products, especially Vertica and Autonomy, into the HP product ecosystem. Particular attention was also paid to open source solutions that are gaining traction in the market.
  • The demo of Autonomy implemented inside a social media center, which included real-time dashboards with sentiment analysis, trends and any message or post on the web related to HP Discover. Perhaps not immediately applicable to Moviri’s business, but visually impactful nonetheless.


HP Discover was also a great opportunity to meet and converse with our customers, to identify new business opportunities and to plan jointly with HP for the upcoming year.

Finally, what better occasion to be in Barcelona and enjoy a soccer match with a tennis-like score at the Camp Nou between the Blaugrana and the battered Scots of Celtic Football Club?
