Defect metrics are not 'objective' and should not be used for appraisals. In fact, as a practice, metrics should not be used for performance appraisals at all: they can cause defensiveness, and the next time around one may find stuffed (manipulated) results, which devalues the process and can also cause inter- and intra-team conflicts. For example, if lots of bugs were found, would one conclude that the development was poor? It may have been a complex module, a buggy dependency, an unclear spec, or even a change in requirements!
First of all, it should be understood that any evaluation IS subjective. One uses data to make evaluations 'fair' and to normalize them across teams (to bring some consistency to evaluations across managers).
So how would one go about it? I would approach it much as one would evaluate developers.
To get a feel, take part in reviews of a sample of their work products: test plans, test cases, test programs. This gives insight into their domain knowledge and expertise.
Ask them to present how they would test a given requirement or design. Have they been innovative (e.g. automated any testing)?
How have they contributed to design reviews and code reviews during the development cycle (testing does not begin only after the code has been written)?
Has there been instability in the product after its release?
Has the team/individual worked collaboratively with others (a team player)?
One needs to cite specific situations and events, describe how the individual responded, give constructive suggestions and feedback, and acknowledge difficult circumstances.
This would, in my opinion, be 'objective' and also supported by data. We can have a similar process for developers as well.
हेमंत