DISCLAIMER:
These are the tidied-up notes I took from the session at the JavaOne 2008 conference in San Francisco, California. There may well be mistakes and omissions. I will come back and correct them once the conference has completed. However, my first priority is to get the information published before the backlog gets too large and swamps me. All comments welcome. Enjoy!
Bill Pugh, University of Maryland
AIM: Learn how to use FindBugs effectively on a large project (100,000+ LOC) and make good use of the information it gives you
Static Analysis:
- It analyses your program without executing it
- It doesn't need tests
- It doesn't (needn't) know what your program does
- It looks for violations of reasonable programming practices
Common (Incorrect) Wisdom about Bugs and Static Analysis
- Programmers are smart
- We have good techniques to find bugs early
Why do Bugs Occur?
- Nobody is perfect
- There are many types of common errors - misunderstood language features, misunderstood APIs, typos, misunderstood class or method invariants
- Everyone makes syntax errors, but the compiler catches them - what about errors one step removed from a syntax error? (see the example below)
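As a small made-up illustration of "one step removed from a syntax error" (the class and field names are hypothetical): the code below compiles cleanly, yet contains two typo-level mistakes of the kind FindBugs looks for - a constructor parameter assigned to itself instead of to the field, and strings compared with == instead of equals().

    public class Account {
        private String ownerId;

        public Account(String ownerId) {
            ownerId = ownerId;      // typo: assigns the parameter to itself,
                                    // so the field is never set
        }

        public boolean isOwnedBy(String id) {
            return ownerId == id;   // compares references, not string contents -
                                    // almost certainly meant ownerId.equals(id)
        }
    }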
Findbugs does scale:
- Sun, Google and eBay use it on their entire Java codebases
- Google has fixed more than 1000 identified issues (and their codebase is big)
Using FindBugs Effectively:
- Static analysis isn't a silver bullet
- Other techniques are also valuable - tests, code review, etc
- Find the right combination - you have a finite and fixed budget and you want to find an effective / profitable way to use static analysis
- Make sure you get the most effective use out of this time
- Running the analysis and finding stupid code is easy. The hard part is often "Who owns this code?", "What is the code supposed to do?", and "What is a test case that proves the bug?"
The Findbugs Ecosystem:
- It analyses classes (and JSPs if you compile them), so it can also analyse Groovy, Scala, JRuby, etc.
- filter files can be used to include / exclude certain issues
- output stored in XML files
- Many tools to post process the XML result
- Ways to perform analysis - Swing UI, CLI, Eclipse, Ant, Maven, NetBeans, CruiseControl, Hudson
- CLI: using -xml:withMessages writes human-readable message strings into the XML output. This is useful if any tool other than FindBugs will consume the output (see the Ant sketch after this list)
- Hudson: reads the FindBugs XML for each build and presents a warning trend graph, warning deltas for each build, a dashboard by package, and links to source (working with Kohsuke on FindBugs integration to make it even better)
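As a sketch of wiring this into a build, here is roughly what the FindBugs Ant task looks like when producing xml:withMessages output. The paths, property names and jar location are placeholders, and exact attribute names may differ between FindBugs versions - check the manual for your release.

    <!-- Sketch only: ${findbugs.home} and the paths below are placeholders -->
    <taskdef name="findbugs"
             classname="edu.umd.cs.findbugs.anttask.FindBugsTask"
             classpath="${findbugs.home}/lib/findbugs-ant.jar"/>

    <findbugs home="${findbugs.home}"
              output="xml:withMessages"
              outputFile="build/findbugs.xml"
              excludeFilter="findbugs-exclude.xml"> <!-- optional filter file -->
        <sourcePath path="src/java"/>
        <auxClasspath path="lib/some-dependency.jar"/>
        <class location="build/classes"/>
    </findbugs>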
Scaling up FB:
- Make it manageable. E.g. Eclipse 3.4M2: 1. filter out low-priority issues, 2. filter out vulnerability-to-malicious-code warnings (non-final static variables), 3. filter out issues also present in Eclipse 3.3 (3.3 shipped after going through QA, so assume those existing issues are not that bad)
- Remember evaluations: if you evaluate an issue but don't immediately fix the code, you want to remember your evaluation - e.g. issues that must be addressed / fixed / reviewed before the next release
- Highlight new issues - flag issues that are new as a result of the latest commit (send email to the committer). Just keeping track of trend lines of the total number of issues isn't good enough. If a change causes an issue, flag it. Hudson does this very well
- Integration with bug reporting / tracking - scrape the XML and import it. Link the FindBugs issue and the bug database entry
- Typical FB warning density
- About 0.3-0.6 medium- or high-priority correctness warnings per 1000 lines of NCSS (non-commenting source statements) - e.g. roughly 30-60 such warnings in a 100,000-NCSS codebase
- About 1-4 other potentially relevant warnings per 1000 lines of code
- Don't use these numbers to judge whether your project is good or bad (there are lots of reasons the results might be biased)
- At Google
- Over 2 years, perhaps 1 person-year of effort on auditing issues
- Over that span they reviewed 1663 issues - 804 were fixed by developers
- What issues are you interested in?
- Priority - High/Medium/Low. Looking at low-priority issues is not recommended on large codebases. High/Medium is useful for ranking within a bug pattern; Medium Foo issues might be more important than High Bar issues
- Category - Correctness (developer probably made a mistake), Security (e.g. SQL injection and XSS), Bad Practice (code violates good practice - e.g. a class with .equals() but no .hashCode(), see the sketch after this list), Dodgy Code (code doing something unusual which may be incorrect - e.g. a dead local store), Multithreaded Correctness (problems with synchronisation, notify(), etc., though this is hard to do with static analysis), Potential Performance Problems (there is a more efficient way to do this), Malicious Code Vulnerability (e.g. a static field which can be changed by untrusted code), Internationalisation
- Categories - Malicious Code is very important if you run your code in the same JVM as untrusted code. Performance issues are generally only important in the 10% of your code which uses 90% of your CPU cycles (i.e. ignore static initialisation code)
- How to set up your config: Run first, then filter out the stuff you don't want
- Filtering can be simple or complex (e.g. in Ant or the CLI). You can also use an XML filter file (see the sketch after this list)
- Filters can be "include" or "exclude" and can be used when running the analysis, when filtering bugs, and in the Eclipse plugin
- Filter use cases - describe what is / isn't interesting; also filter out what has been reviewed and found not to be interesting
- The GUI can be used to build the filter files. Click on a bug and say "filter bugs like this" => select the attributes you want, and then add to the filter
- The GUI can also import and export filter files (to use in Eclipse, etc.)
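A sketch of what such a filter file can look like, loosely following the triage strategy above (drop the malicious-code category, drop low-priority warnings, silence one pattern in generated code). The package name and pattern choice are placeholders, and element/attribute support may vary by FindBugs version - check the filter-file documentation for your release.

    <!-- Sketch of an exclude filter; package and pattern names are made up -->
    <FindBugsFilter>

        <!-- Not running untrusted code in the same JVM, so drop this category -->
        <Match>
            <Bug category="MALICIOUS_CODE"/>
        </Match>

        <!-- Drop low-priority warnings across the board (1=high, 2=medium, 3=low) -->
        <Match>
            <Priority value="3"/>
        </Match>

        <!-- Silence one pattern in a generated-code package -->
        <Match>
            <Package name="~com\.example\.generated.*"/>
            <Bug pattern="DLS_DEAD_LOCAL_STORE"/>
        </Match>

    </FindBugsFilter>

Such a file could then be passed to the analysis as an exclude filter (for example via the excludeFilter attribute in the Ant sketch earlier) or loaded into the Eclipse plugin.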
Historical Bug Results:
- If you run FB as part of each build, you can merge the analysis results into a bug history.
- You can then do queries on this history
- FB matches up corresponding bugs in successive versions - fuzzy match, not based on line numbers
- If a bug persists across multiple versions, the XML records the 1st and last version which contained the bug
- Querying: you can filter bugs based on the first and last version that contained an issue, or on how it was introduced or removed
- Instance Hashes:
- when you generate an XML file with messages, an instance hash is associated with each bug. This is useful for connecting analysis results to bug databases and other forms of analysis post-processing
- Instance hash collisions - hashes are not guaranteed to be unique
- Unique identifiers: each issue has an occurrenceNum and an occurrenceMax as well as a hash. Concatenating all three gives an identifier that is unique within the file and unlikely to collide across successive versions
- Excluding baseline bugs:
- e.g. you want to look at just the bugs introduced since release 3.0 - there are too many issues in total to look at them all...
- establish a bug baseline and then "exclude all the issues in this bug file" (can be done in Eclipse) (based on the instance hash) - a post-processing sketch follows below
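A hedged sketch of doing that baseline exclusion as a simple post-processing step outside the IDE. It assumes the bug XML exposes the hash and occurrence numbers as instanceHash, instanceOccurrenceNum and instanceOccurrenceMax attributes on each BugInstance element (the attribute names are my assumption and may differ by FindBugs version), concatenates them into the unique key described above, and prints only the issues not present in the baseline file.

    import java.io.File;
    import java.util.HashSet;
    import java.util.Set;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class NewIssuesSinceBaseline {

        /** Collect one key per BugInstance: hash + occurrence number + occurrence max. */
        static Set<String> loadKeys(File findbugsXml) throws Exception {
            Set<String> keys = new HashSet<String>();
            NodeList bugs = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(findbugsXml)
                    .getElementsByTagName("BugInstance");
            for (int i = 0; i < bugs.getLength(); i++) {
                Element bug = (Element) bugs.item(i);
                // Assumed attribute names - check against your FindBugs version's XML
                keys.add(bug.getAttribute("instanceHash") + ":"
                        + bug.getAttribute("instanceOccurrenceNum") + ":"
                        + bug.getAttribute("instanceOccurrenceMax"));
            }
            return keys;
        }

        public static void main(String[] args) throws Exception {
            Set<String> baseline = loadKeys(new File(args[0])); // e.g. release-3.0 results
            Set<String> current  = loadKeys(new File(args[1])); // latest analysis results
            current.removeAll(baseline);
            System.out.println(current.size() + " issue(s) not present in the baseline:");
            for (String key : current) {
                System.out.println("  " + key);
            }
        }
    }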
- Saving audit results:
- The Swing GUI and Eclipse allow you to mark an issue: unclassified, bad analysis, should fix, must fix, etc.
- You can also add a free text annotation
- These evaluations are saved out in the XML, and across the history they are matched up and combined
- The history is kept, but there isn't an easy way to share it across workspaces. Relying on the VCS to merge bug databases isn't recommended or supported. A plugin is coming (this summer) to store this in an external database (plain text file, web server, etc.) to solve this
http://code.google.com/p/findbugs-tutorials
Monday, May 19, 2008