Performance Blog

Archive for the ‘Tools’ Category

I’ve been working for a while now to revamp the entire stats processing and graphing design in Faban. For those who haven’t heard of Faban, it is a performance tool that helps with the creation and running of workloads. Faban currently uses a technology called Fenxi to process and graph stats. Fenxi has given us lots of problems over time – it is poorly designed, lacks flexibility, and doesn’t even seem to be maintained anymore. So I decided to get rid of it entirely.

I am really excited by the changes. I think this is one of the major enhancements to Faban since Akara (the original Faban architect) and I left Sun/Oracle.

So without further ado, here are the upcoming changes:

  • New, cooler-looking dynamic graphs

I’m using jqPlot, which produces nice-looking graphs by default. The ability to zoom in on a section of the graph is really nice, as is seeing the data-point values as you mouse over the graph. This feature alone, I think, will make the Faban UI feel more modern.

  • Provide support for graphing stats from Linux

The Fenxi model does not support multiple OSes well – another reason to get rid of it completely. Instead, support for Linux tools (I currently have vmstat and iostat in) is added using the post-processing functionality already baked into Faban. The post-processing script produces a ‘xan’ file (named after Xanadu, the original internal name for Fenxi). The nice thing about the xan format is that it is highly human-readable – take a look at any of the detail.xan files Faban produces today. They are very easy to read, so I’m sticking with this format.
  • New Viewer to graph all xan files

Of course, the above two enhancements are not possible without a way to actually interpret the xan files and convert them to jqPlot’s JSON format. So a new Viewer has been implemented that renders the xan file nicely – both tables and graphs.
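To make the conversion idea concrete, here is a toy Python sketch of the kind of transformation the Viewer performs. This is not the actual Viewer code – since I’m not documenting the full xan format here, it just assumes a plain whitespace-separated table with a header line of column names followed by rows whose first field is the timestamp.

# Toy sketch only: turn a simple tabular stats file into
# jqPlot-style series data (lists of [x, y] pairs).
# Assumes a header line of column names, then data rows whose
# first field is the timestamp. Not the real xan parser.
import json
import sys

def table_to_jqplot(path):
    with open(path) as f:
        header = f.readline().split()
        rows = [line.split() for line in f if line.strip()]
    # One series per metric column, keyed by column name.
    series = {name: [] for name in header[1:]}
    for row in rows:
        x = float(row[0])
        for name, field in zip(header[1:], row[1:]):
            series[name].append([x, float(field)])
    return series

if __name__ == "__main__":
    print(json.dumps(table_to_jqplot(sys.argv[1])))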
I’m attaching a screenshot of a sample Linux vmstat output to whet your appetite for the new features.

Stay tuned. I hope to check everything in within the next couple of weeks.

If you are a Faban user, please join the Faban Users Google group.

Going by the many posts in various LinkedIn groups and blogs, there seems to be some confusion about how to measure and analyze a web application’s performance. This article tries to clarify the different aspects of web performance and how to go about measuring it, explaining key terms and concepts along the way.

Web Application Architecture

The diagram below shows a high-level view of typical architectures of web applications.

The simplest applications have the web and app tiers combined while more complex ones may have multiple application tiers (called “middleware”) as well as multiple datastores.

The Front end refers to the web tier that generates the HTML response for the browser.

The Back end refers to the server components that are responsible for the business logic.

Note that in architectures where a single web/app server tier is responsible for both the front and back ends, it is still useful to think of them as logically separate for the purposes of performance analysis.

Front End Performance

When measuring front end performance, we are primarily concerned with understanding the response time that the user (sitting in front of a browser) experiences. This is typically measured as the time taken to load a web page. Performance of the front end depends on the following:

  • Time taken to generate the base page
  • Browser parse time
  • Time to download all of the components on the page (CSS, JS, images, etc.)
  • Browser render time of the page

For most applications, the response time is dominated by the third bullet above, i.e. the time spent by the browser retrieving all of the components on a page. As pages have become increasingly complex, their sizes have mushroomed as well – it is not uncommon to see pages of 0.5 MB or more. Depending on where the user is located, it can take a significant amount of time for the browser to fetch components across the internet.
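To illustrate that breakdown, here is a rough Python sketch that times the base-page fetch and then each component fetch. It is deliberately naive: the component URLs are passed in by hand, the fetches are sequential (real browsers parallelize them), and parse/render time is invisible to a plain HTTP client.

import time
import urllib.request

def timed_fetch(url):
    # Time a single HTTP GET and return (seconds, bytes).
    start = time.time()
    body = urllib.request.urlopen(url).read()
    return time.time() - start, len(body)

def profile_page(page_url, component_urls):
    base_time, base_size = timed_fetch(page_url)
    print("base page: %.3fs (%d bytes)" % (base_time, base_size))
    for url in component_urls:
        t, size = timed_fetch(url)
        print("%-60s %.3fs (%d bytes)" % (url, t, size))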

Front end Performance Tools

Front-end performance is typically viewed as waterfall charts produced by tools such as the Firebug Net Panel. During development, Firebug is an invaluable tool for understanding and fixing client-side issues. However, to get a true measure of the end-user experience on production systems, performance needs to be measured from points on the internet where your customers typically are. Many tools are available to do this, and they vary in price and functionality. Do your research to find a tool that fits your needs.

Back End Performance

The primary goal of measuring back end performance is to understand the maximum throughput that it can sustain. Traditionally, enterprises perform “load testing” of their applications to ensure they can scale. I prefer to call this “scalability testing”. Test clients drive load via bare-bones HTTP clients and measure the throughput of the application, i.e. the number of requests per second it can handle. To find the maximum, the number of client drivers is increased until throughput stops increasing or, worse, starts to drop off.
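As a sketch of what such a test client does (this is a toy in Python, not Faban or any real load generator), the driver below hammers one URL from N threads for a fixed interval and reports requests per second. Rerunning it with increasing thread counts shows where throughput flattens out or starts to fall.

import threading
import time
import urllib.request

def drive(url, stop_time, counts, idx):
    # One client driver: issue requests until the deadline.
    while time.time() < stop_time:
        urllib.request.urlopen(url).read()
        counts[idx] += 1

def measure_throughput(url, threads, duration_secs):
    counts = [0] * threads          # one slot per driver, no lock needed
    stop_time = time.time() + duration_secs
    workers = [threading.Thread(target=drive, args=(url, stop_time, counts, i))
               for i in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return sum(counts) / float(duration_secs)

# e.g.: for n in (1, 2, 4, 8, 16):
#           print(n, measure_throughput("http://test-host/", n, 60))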

For complex multi-tier architectures, it is beneficial to break up the back end analysis by testing the scalability of individual tiers. For example, database scalability can be measured by running a workload just on the database. This can greatly help identify problems, and it also provides developers and QA engineers with tests they can repeat during subsequent product releases.

Many applications are thrown into production before any scalability testing is done. Things may seem fine until the day the application gets hit with increased traffic (good for business!). If the application crashes and burns because it cannot handle the load, you may not get a second chance.

Back End Performance Tools

Numerous load testing tools exist, with varying functionality and price. There are also a number of open source tools available. Depending on the resources you have and your budget, you can also outsource your entire scalability testing.

Summary

Front end performance is primarily concerned with measuring end user response times while back end performance is concerned with measuring throughput and scalability.


As many of you know, Faban is a free and open source benchmark development and automation framework. It was originally developed at Sun Microsystems, Inc. which made it available to the community under the CDDL license.

With the architect and lead developer Akara Sucharitakul and myself no longer working at Oracle (not to mention the demise of the project website without notice), we decided to host it at http://www.faban.org. The website isn’t pretty, but at least it hosts all the documentation, a downloadable kit, and a pointer to the source on GitHub. In the coming weeks, I will work on organizing the site. A big thanks to all the folks who expressed concern about the future of this project. With your help, we can continue to support it.

If you are a Faban user, please do join the new Faban Users forum at http://groups.google.com/group/faban-users.



I thought I’d continue the theme of my last post, “A lesson in Validation”, with a lesson in analysis. This one is mostly focused on networking – one of my primary new focus areas – but anyone can benefit from the process and lessons learned.

Recently, I was analyzing the results of some tests run against Yahoo!’s Malaysia Front Page. The response times were incredibly high, and on digging down, it soon became apparent why: the time taken to retrieve the static objects was more than 100 times what it should have been.

CDN Magic

Like most websites, Yahoo! uses geographically distributed CDNs to serve static content. Malaysia gets served out of Singapore, which is pretty darned close, so there should be no reason for extra-long hops. A large response time to retrieve a static object can mean one of two things: the object is not being cached, or it is getting routed to the wrong location.

Sure enough, nslookup showed that the IP address being used to access all the static objects was located in the USA. It was therefore no surprise that retrieval was taking so long. Satisfied that I had found a routing problem, I contacted the DNS team and they said … you guessed it: “It’s not our problem”. They ran some of their own tests and stated that Malaysia was getting routed correctly to Singapore, therefore the problem must be with my tests.
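For anyone who wants to repeat this kind of check, the first step is trivial to script. This minimal Python sketch resolves a hostname through the system resolver and prints the addresses returned; run it from different vantage points, or feed the addresses to a geo-IP lookup, to see where clients are actually being sent.

import socket

def resolve(hostname):
    # Returns the canonical name (after following CNAMEs) and the A records.
    name, aliases, addresses = socket.gethostbyname_ex(hostname)
    print("canonical name:", name)
    for addr in addresses:
        print("address:", addr)

resolve("l1.yimg.com")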

Dig(1) to the rescue

Since I was using a third-party tool, I contacted the vendor to see if they could help. The support engineer promptly ran dig and found that all the requests were being resolved (but didn’t care that they weren’t being resolved correctly!). Here is the output of dig:

<<>> DiG 9.3.4 <<>> l1.yimg.com @GPNKULDB01 A +notrace +recurse
; (1 server found)
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1442
;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 2, ADDITIONAL: 2 

;; QUESTION SECTION:
;l1.yimg.com.	 IN	A 

;; ANSWER SECTION:
l1.yimg.com.	 3068	IN	CNAME	geoycs-l.gy1.b.yahoodns.net.
geoycs-l.gy1.b.yahoodns.net. 3577 IN	CNAME	fo-anyycs-l.ay1.b.yahoodns.net.
fo-anyycs-l.ay1.b.yahoodns.net.	210 IN	A	98.137.88.34
fo-anyycs-l.ay1.b.yahoodns.net.	210 IN	A	98.137.88.84
fo-anyycs-l.ay1.b.yahoodns.net.	210 IN	A	98.137.88.83
fo-anyycs-l.ay1.b.yahoodns.net.	210 IN	A	98.137.88.37
fo-anyycs-l.ay1.b.yahoodns.net.	210 IN	A	98.137.88.36
fo-anyycs-l.ay1.b.yahoodns.net.	210 IN	A	98.137.88.35 

;; AUTHORITY SECTION:
ay1.b.yahoodns.net.	159379	IN	NS	yf1.yahoo.com.
ay1.b.yahoodns.net.	159379	IN	NS	yf2.yahoo.com. 

;; ADDITIONAL SECTION:
yf2.yahoo.com.	 1053	IN	A	68.180.130.15
yf1.yahoo.com.	 1053	IN	A	68.142.254.15 

;; Query time: 0 msec
;; SERVER: 172.16.37.138#53(172.16.37.138)
;; WHEN: Wed Jan 12 11:15:42 2011
;; MSG SIZE rcvd: 270

We now had the IP address of the DNS resolver – 172.16.37.138. Where was this located? nslookup showed:

** server can't find 138.37.16.172.in-addr.arpa.: NXDOMAIN

No luck – this was a private IP address. Back I went to Tech Support: “Can you please let me know what public IP address this private IP is NATed to?”

And here’s what I got:  “I confirmed with our NOC team that the public IP Address of the DNS server located at “Kuala Lumpur, Malaysia – VNSL” site is a.b.c.d. Hope this is helpful for you.”

(I replaced the actual IP address above for security.) I promptly did another nslookup, which showed a host name with Kuala Lumpur in it. So it seemed that there was no problem on the testing side after all. But not so fast … hostnames can be bogus!
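The reverse-lookup step is just as easy to script. Here is a minimal Python sketch – note that it only reports what the PTR record claims, which, as this episode shows, you should not take at face value.

import socket

def reverse_lookup(ip):
    try:
        name, _, _ = socket.gethostbyaddr(ip)
        print("%s claims to be %s (hostnames can be bogus!)" % (ip, name))
    except socket.herror:
        # e.g. a private or unregistered address: NXDOMAIN
        print("%s: no PTR record" % ip)

reverse_lookup("172.16.37.138")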

Geo-Locating a Server

We had to find out where exactly this IP address was located in the world. So it was plugged into http://whatismyipaddress.com/ip-lookup, which came back with:

Geolocation Information

Country: Canada
State/Region: Quebec
City: Montreal
Latitude: 45.5
Longitude: -73.5833
Area Code:
Postal Code: h3c6w2

The test server was actually located in Canada, while appearing to be in Kuala Lumpur! No wonder our DNS servers were routing the requests to the US.

What seemed to be a problem with our routing turned out to be a problem with the tool’s routing!

Moral of the story: Don’t trust any tool and validate, validate, validate!

At Yahoo!, I’m currently focused on the analysis of end user performance. This is a little different from what I’ve worked on in the past, which was mainly server-side performance and scalability. The new focus requires a new list of tools, so I thought I’d use this post to share the tools I’ve been learning and using over the past couple of months.

HttpWatch

This made the top of my list and I use it almost every day. Although its features are very similar to Firebug’s, it has two that I find very useful: the ability to save the waterfall data directly to a CSV file, and a stand-alone HttpWatch Studio tool that easily loads previously saved data and reconstructs the waterfall (I know you can export Net data from Firebug, but only in HAR format). Best of all, HttpWatch works with both IE and Firefox. The downside is that it works only on Windows, and it’s not free.
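As a small example of why the CSV export matters: once the waterfall is in CSV form, ad-hoc aggregation is trivial. The column names below (“URL”, “Time (secs)”) are placeholders – check the headers in your own HttpWatch export, as I’m not quoting its exact format here.

import csv
from collections import defaultdict

def total_time_by_url(csv_path):
    # Sum recorded time per URL across all waterfall entries.
    totals = defaultdict(float)
    with open(csv_path) as f:
        for row in csv.DictReader(f):
            totals[row["URL"]] += float(row["Time (secs)"])
    return totals

for url, secs in sorted(total_time_by_url("waterfall.csv").items(),
                        key=lambda kv: -kv[1]):
    print("%8.3fs  %s" % (secs, url))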

Firebug

This is everyone’s favorite tool and I love it too. It is great for debugging as well as performance analysis. It is a little buggy, though – I get frustrated when, after starting Firebug, I go to a URL expecting it to capture my requests, only to find that the capture disappears on me. I end up keeping Firebug on all the time, and this can get annoying.

HttpAnalyzer

This is also a Windows-only commercial tool, similar to Wireshark. Its primary focus is HTTP, however, so it is easier to use than Wireshark. Since it sits at the OS level, it captures all traffic, irrespective of which browser or application is making the HTTP request. As such, it’s a great tool for analyzing non-browser-based HTTP client applications.

Gomez

Yet another commercial tool, but considering our global presence and the dozens of websites that need to be tested from different locations, we need a robust, commercial tool that can meet our synthetic testing and monitoring requirements. Gomez has pretty good coverage across the world.

I have a love-hate relationship with Gomez. I love the fact that I can do the testing I want at both the backbone and the last mile, but I hate its user interface and limited data visualization. We have to resort to extracting the data using web services and doing the analysis and visualization ourselves. I really can’t complain too much, though – I didn’t have to build these tools myself!

Command-line tools

Last, but not least, I rely heavily on standard Unix command-line tools like nslookup, dig, curl, ifconfig, netstat, etc. And my favorite text-processing tools remain sed and awk. Every time I say that, people shake their heads or roll their eyes. But I think we can agree to disagree without getting into language wars.

