Who: The A Method for Hiring is a big title for a management book to live up to. At the moment, business writers have something of a love affair going on with the idea that casting decisions are the most critical ones in the practice of management, so the timing of the book is certainly good, but the bar is pretty high. The trend probably started with Jim Collins (Good to Great), Marshall Goldsmith (What Got You Here Won’t Get You There) and the folks fueling the strengths movement, which includes Gallup Inc., the psychologist Martin Seligman, and popular writers like Marcus Buckingham and Dan Pink (A Whole New Mind and Johnny Bunko).
The book almost lost me with its opening premise (a scorecard method to define roles), but recovered and finished solidly. More than once I found myself cringing a little as the authors pointed out lazy, stupid and sloppy elements of common interviewing practices, many of which I personally have been guilty of. Overall, as a tactical manual for hiring, particularly around disciplined interviewing, it is excellent. The process is structured but not bureaucratic. It is natural, intuitive, information-driven and requires creativity and intelligence to apply.
The broader hiring strategy it advocates, though, should be used with caution. Given the chaos of layoffs and hiring freezes we are living through, you would do well to follow the model in this book if you want to get the most bang for your remaining payroll bucks. The book should be valuable to interviewees as well, both as interview prep and as an aid to identifying and avoiding bad opportunities (for those who aren’t yet at ‘will take anything’ levels of desperation). It won’t help you ‘game’ the recommended hiring process though, since it isn’t a blind, checklist-driven or aptitude-test-driven process.
The Who of the Book
Geoff Smart and Randy Street are with a consulting firm specializing in hiring processes called ghSMART (link), the underlined A being a mnemonic for their trademark process. Geoff’s father, Brad Smart, pioneered a popular interviewing technique called Topgrading, which is the cornerstone of the more comprehensive A process described in this book. I had not heard of Topgrading, but when I read the description, I realized it was basically a careful codification and optimization of an interview process that most of us use when we are not being lazy. Their consulting practice seems to target everything from CEO hunting to more routine hiring of plug-and-play employees. The authors seem familiar with, but not slavishly dependent on, the hiring literature. So overall, the book seems to stand on solid ground.
The A Process
The four strokes (counting the underline) in A represent the four process stages described in the book: Scorecard, Source, Select, Sell. The goal of the process is to get your hiring practices to the point where you are mostly hiring “A” players, candidates the authors define as follows:
He or she is not just a superstar. Think of an A Player as the right superstar, a talented person who can do the job you need done, while fitting in with the culture of your company. We define an A Player this way: a candidate who has at least a 90% chance of achieving a set of outcomes that only the top 10% of possible candidates could achieve.
If the apparent quantified concreteness raises a red flag in your mind, it should. Some of the model’s problems arise from it, but we’ll get to that. Let’s look at the strengths of the process first.
The discussion starts with an extended description of “voodoo” hiring practices, a label the authors apply to the grab-bag of informal models that we all use too often. In this list of thumbnail critiques of ten of the more problematic interviewer archetypes, you will find the Art Critic (people who hire by ‘gut feel’), the Trickster (‘gotcha’ interviewers) and the Prosecutor. Mostly, the criticism is valid, but in their enthusiasm for left-brained rigor, the authors do sell right-brained approaches a bit short, and throw out a few babies with a lot of bathwater. But we’ll let that pass.
Voodoo out of the way, the book gets down to the recommended process, prefaced by the insightful comment that hiring is one of the areas where you don’t want to get too innovative. Discipline, rigor and process are probably most needed in hiring, among all corporate functions, since trial and error is much too expensive and slow. The four steps are:
- Scorecard: this step advocates a careful development of a written role description that turns the expectations for candidates into quantified goals. Along with the scorecard, this step also advocates collective identification of “competencies” (such as aggressive vs. consensus builder) that are meant to capture the host culture in a neutral way, to prevent rejection by the native immune system. This step needs some qualifications, but so long as you are thoughtful about when to apply a scorecard model at all, this scorecard is as good as any.
- Source: this step is a rather quick-and-dirty set of tips about how to keep a good pipeline of top candidates coming in, through constant cultivation of potential A candidates before needs arise. Look to other books, such as Peter Cappelli’s Talent on Demand, for a more systematic understanding of sourcing.
- Select: this is where the book shines. You get a detailed and refined guide to four types of interview: Screening, Topgrading, Focused and Reference, which together serve to triangulate and validate all the important information about a candidate you need to make an informed decision. The level of detail varies from advice on general conversational tactics (for example, being ready to interrupt pretty frequently) to the microscopic (crafting specific questions down to the level of word choice). The approach outlined is more adversarial than most of us are comfortable with, but that’s all the more reason to pay attention: interviewing ought to be a somewhat adversarial process, since candidates have good reasons to hide relevant information. The use of references is particularly well-developed in the model. The process recommends calibrated use of the “threat of reference check” (TORC) to keep the interviewee honest, as well as more-diligent-than-usual selection and questioning of the references themselves. We tend to forget that breaking through the euphemisms and code of references is almost harder than breaking through the facade presented by interviewees.
- Sell: this is probably the part of the process that gets the least attention in most companies: selling the candidate after making an offer. Since true A players are likely in demand, a lot of value gets lost when hiring organizations fail to follow through enough. The approach here is mainly anecdotal, and there isn’t much in the way of systematic tactics (other than general advice like paying attention to family needs).
Interleaved through the process description, you get entertaining anecdotes that illustrate the effectiveness of the techniques. One particularly hilarious one describes a candidate who, thanks to skilled interviewing, revealed that he had been fired from his previous job for slapping a senior executive. A key point the authors repeatedly emphasize is the importance of creative and curious follow-up questioning. The process not only allows you to break away from the script, but requires that you do so, to make it effective. The general mantra is “what?”, “why?” and “tell me more,” as both an attitude and a language guide designed to get the candidate talking more about the things you want to probe.
Bugs in the A Process
Once you get away from the nitty-gritty of smart interviewing and evaluate the process as a whole, one big problem leaps out: the reliance on an (apparently) clearly defined scorecard as a role definition. The supporting element, an understanding of the cultural competencies that define your organization’s immune system, is definitely valuable, but it is not clear that precisely defined goals are always a good idea.
Why? Recall the Good to Great advice on casting: get the right people on the bus (and the wrong people off the bus) before you figure out where to drive it. The more complex and ambiguous the role, the better the argument for the “bus” process. In many cases, you allow very senior hires to hand-pick support teams for precisely this reason: the what that drives their hiring might be rendered moot by the who decisions they make once on board. In such cases, the what tends to come mostly from the candidates themselves, not the hiring organization.
To frame it as an antithesis, the alternative to a goal-oriented scorecard against which candidates could be rated is a more creative process with a broader scope: where could the bus go, if this person were on board? This also means considering moving other people around to create the right chemistry around incoming talent. In the most dramatic cases, the company’s strategic storyline might need modification to accommodate the strengths of a high-potential candidate.
The point isn’t that either method is always the best one. The point is that the situation dictates whether or not you should precisely define a role before starting a search. If your company is poised for a big, strategic, good-to-great move, hiring against a too-precise scorecard is a terrible idea. If you are midstream in execution, with no major course changes on the horizon, a scorecard might be a very good idea. I talked about some of these issues in a previous piece on the tradeoffs in talent management.
The second problem with the scorecard model is the false sense of security you might get from its apparently quantitative nature. The “top 10% of all candidates” is reasonably measurable, but the “90% probability of success” is, in most cases, simply delusional. Even in roles where the metrics are clear, like sales, “probability of achieving outcomes” is just not a meaningful measure for most complex jobs; there are too many externalities.
But overall, a recommended read if hiring is a big priority in your role. You won’t get a big picture view of the future of work, talent management, the changing nature of the labor markets, free agency, cloudworking and all the other good stuff we talk about on this blog, but hiring is going to remain as important in the cloudworker future as it was in the organization man past.
Thanks for the very complete review. I totally agree with one of your conclusions: “The point is that the situation dictates whether or not you should precisely define a role before starting a search.” It’s all about balance and getting the right techniques for the right situation.
I haven’t read much about interviewing and hiring practices since I am not in management and don’t aspire to it. But as someone who has been through a lot of interviews, I have to ask: Does the method in this book provide any guidance for the flip-side of the interview – that is, the bit where the prospective employee is checking out the prospective employer?
With the economy currently being a hirer’s market, the question may not sound too important but eventually things will go the other way. One reason the question occurs to me is that before starting my current job, I was offered another job that would have paid a lot more money than I was then making (though less than I’m making now, as it turns out).
I turned the job down because there were some things during the interview that set off red flags in my mind. Though I couldn’t know it at the time, it turned out I was right. What does the literature say about making sure BOTH sides of the interview process work?
The book doesn’t have any advice for interviewees. I’ve seen quite a few blog posts doing the rounds from career counselors recently, on the ‘don’t take the first thing you’re offered’ theme.
Interviewees suffer a strong information asymmetry, whether times are good or bad. The only solid piece of advice I’ve seen is: go to as many interviews as you can. Comparing job opportunities weeds out irrelevant information and only leaves you thinking about discriminants. Comparing opportunity A to opportunity B is vastly more practical than comparing either to an abstract notion of ‘dream job.’
Venkatesh,
Great review of this book. Thanks.
I agree with your criticism that we have too many methods these days that treat objectivity as an undeniably good thing. In this book, it sounds like that is emphasized, with subjectivity being labeled as “voodoo.” C’mon. Objectivity and subjectivity are two sides of the same coin; they are both part of every process.
Another point I would add is that, from complexity theory, we hear that simple rules enable complex, intelligent behavior, while complex rules enable only simple, stupid behavior. I can so easily see someone using this scorecard and saying “Well, I picked the person who ranked 27 over the person with 26, even though my gut told me it was a bad choice.”
Good point about the simplicity-complexity tradeoff. There’s good reason to believe that beyond complexity theory: more rules = heavier model = less bandwidth left for independent judgment about what’s NOT in the model. James Scott analyzes this beautifully in ‘Seeing Like a State.’ Also agree that people sometimes use metrics to avoid the responsibility of having to think.
That said, good processes are a matter of art, so they keep you honest without making you stupid. Overall, the process in this book meets that criterion.
How would you rate this book on a scale of 1-10?
1 obviously being appalling and 10 being excellent.
Deep: Probably a 6/10, B+. For calibration, all the related books I’ve cited in this article would probably rate between 7 and 9.