
Santa Clara (CA), June 14, 2011, 7:30 AM

Here we are: day 1 of Velocity 2011.

While we have attended several events in the past as HP partners, this is our first time at this O’Reilly conference, and we are quite curious about what we will hear this week on IT performance, operations management, code optimization, mobile web analysis, and more.

The first step, as usual, is registration, then picking up some fun gadgets from the sponsors and filling up our coffee mugs. We are asked to personalize our badges with some ribbons and create our personal profile… “Performance Tuning and Benchmarks” plus “Web Performance” should do.

After completing our profiles, it’s time to review the day’s schedule, refill our mugs one more time with hot java (pun intended), and then attend the first workshop of the day: “Performance Tools”. In this talk, a panel of speakers from Google, DynaTrace, Blaze and TruTV showed us a “must-have” toolkit that helps address problems more quickly by developing little tests. That’s a fitting way to start a performance-related conference, and we can see that it is becoming more and more important to approach performance from the developer’s point of view as well (starting to analyze performance in dev environments, on one’s own laptop), and to take care of client-side performance (rendering and resource consumption) as a complement to the standard approach of server-side performance analysis, user concurrency, and back-end optimization.

The second conference session was very surprising: under the title “Decisions in the Face of Uncertainty” we found a tightly packed overview of how to handle data from a statistical point of view, from Pascal’s triangle to Bayesian models, passing through percentiles, the normal distribution, game theory and other statistical concepts. John Rauser (Amazon) gave a quick recap of basic statistics (long lost somewhere in our minds) that must have sounded like a lullaby to some of the attendees – the yawning rate was exceptionally high by the end of the talk.
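For anyone whose statistics are as rusty as ours, here is a toy sketch of our own (not from the talk, with made-up numbers) of two of the concepts John touched on: percentiles of a latency sample and a simple Bayesian update.

    # Toy illustration (not from the talk): percentiles and a Bayesian update.
    import random

    # Hypothetical response times (seconds), e.g. collected during a load test.
    samples = sorted(random.gauss(0.8, 0.2) for _ in range(1000))

    def percentile(data, p):
        """Return the p-th percentile of an already sorted sample."""
        index = int(round((p / 100.0) * (len(data) - 1)))
        return data[index]

    print("median: %.2f s" % percentile(samples, 50))
    print("95th percentile: %.2f s" % percentile(samples, 95))

    # Bayes' rule: P(slow backend | alert) = P(alert | slow) * P(slow) / P(alert)
    p_slow, p_alert_given_slow, p_alert = 0.05, 0.9, 0.10
    print("P(slow backend | alert) = %.2f" % (p_alert_given_slow * p_slow / p_alert))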

Then the day went on to another workshop focused on the mobile web: the speakers showed us some fresh open-source tools that help developers test their code in real time and analyze its performance (e.g. phantomjs), and techniques to reduce bandwidth usage (e.g. the “cache manifest” technique, applied to the mobile world). The other talks focused on operations management issues and the tools that can help tackle them, especially in cloud environments: provisioning, orchestration and monitoring of cloud platforms have quickly become some of the hottest topics right now. Cool stuff.
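To give an idea of the cache manifest approach, here is a minimal sketch of what an HTML5 cache manifest might look like (file names are placeholders of ours, not examples from the talk). Once a page declares it via <html manifest="app.manifest">, the browser stores the listed assets on the device and stops re-downloading them on every visit.

    CACHE MANIFEST
    # v1 - change this comment to force clients to re-fetch the cached assets

    CACHE:
    css/mobile.css
    js/app.js
    images/logo.png

    NETWORK:
    *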

The last workshop of the day was about automating tests by writing Python scripts that drive web browsers through the SeleniumHQ framework: a low-cost way to start addressing performance analysis from the development stage. It’s yet another interesting point of view; in our opinion this kind of approach cannot replace standard load testing, but it can be a valid complement to it.
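To give a flavor of the approach, here is a minimal sketch of our own (not code from the workshop; the URL is a placeholder) using the Selenium WebDriver Python bindings to drive a real browser and time a page load.

    import time
    from selenium import webdriver

    URL = "http://example.com/login"  # placeholder: the page under test

    driver = webdriver.Firefox()      # launches and drives a real Firefox instance
    start = time.time()
    driver.get(URL)                   # navigates and waits for the page to load
    elapsed = time.time() - start
    print("Page load took %.2f seconds" % elapsed)
    driver.quit()

Hooked into a continuous integration job, a handful of scripts like this can flag client-side regressions long before a full load test is scheduled.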

Summary of the first day: performance analysis can start in the development phase thanks to open-source tools, but when you are facing intranet applications with restricted areas, complex navigation paths, correlations, parameters, data sets, concurrent users and scalability requirements, you will probably have to rely on standard performance management tools and a dedicated testing environment.

Now we’re waiting for day 2, when the exhibitors’ hall opens, so we can get more in-depth knowledge of the tech solutions behind today’s workshops.

Luca & Nicholas
