Archives for April 2011

The More Things Change, The More They Stay The Same

Recently, I heard from a social media networking contact who has been away from sales enablement for a while but is heading back into the field. He asked if I might suggest a few programs, approaches and resources to review. His goal was to catch up on “more current thinking and approaches to developing a sales force,” as he “was sure things have evolved” over the past 10 years. Smart question.

I’m usually not short of opinions and advice, but I struggled on this one. I guess that’s because, as a confirmed Luddite, I don’t really think things have changed that much. In the end, I think he felt let down by my response, or perhaps insulted. I felt a little bad about that, because I could have said a lot of different things. I’m not (completely) blind or deaf, and I’m fairly well read. I had a lot of academic replies on the tip of my tongue that would have made an OD professor proud. But that wouldn’t have been me, or true.

I Saw That! You Rolled Your Eyes

Yeah, okay. Sure. The economy is tough. Recessions suck. Budgets are tight. Cold calling no longer works (note: something I hear; not necessarily believe). Sales is getting tougher. Social media and the information explosion we call the Internet have both blasted into the picture. And now, we have amazing mobile technology. Surely the entire game has changed! Right!? 

Errr…  not so much.

Ch-ch-ch-changes

Yes, circumstances, culture, generations, tools and technology have evolved. But in my opinion, these are just the surrounding circumstances. People are generally the same. Organizational behavior is basically the same. Change management challenges are the same. Customers still want to pay less than we charge and get a great deal. Sales people are still wired the same way. And in every company with a chance for survival, something is working, which means someone has figured out the magic sauce there… “And that,” he said (as a hush fell over the crowd), “can be replicated.”

It’s the Process, Stupid… Not the Economy (with apologies to Bill Clinton)

So, as much as a few of my friends tell me otherwise, I believe the basic organizational effectiveness work remains the same. Sometimes, I think that people who clamor so fervently about major changes are just trying to sell you their latest fad idea to deal with all those “changes.” But you didn’t hear that here.

I stick to, and get the best results from, the core basics. Blocking and tackling. In the sales enablement arena, most practitioners just don’t research well enough, or execute well enough. And then they blame the changes and “how tough it is…. out there” when they don’t deliver real results.

Give Me A Lever And A Place To Stand, And I’ll Move The World

The work I do is summarized here and elsewhere on this blog (like here). I’ve evolved the methods and techniques over the years, but it’s the same core stuff. 

In short:

  • Fix hiring. It’s almost always broken. Consider psychometric assessments combined with behavioral interviewing or Topgrading.  (See this post for more on assessments). 
  • Fix training. It’s almost always broken. Figure out the sales performance levers, and build training systems (not events) around the success factors for the frontline sales role, using best practices from top producers.  Orchestrate transfer systems to get training from the classroom or course into the real world.
  • Fix coaching. It’s almost always broken. Ensure that management deeply understands the levers, training content, how to use both reporting and dialogue to diagnose problems with the levers, and how to coach to fix them. (Heavy emphasis on coaching right, after hiring right.)
  • Fix compensation. It’s almost always broken. Ensure compensation makes sense and drives and rewards the right behaviors. 
  • Fix lever alignment.  It’s almost always broken. Improve or massage sales processes, policies, procedures and practices, or even customer service or products, to support the levers, and what top producers say they need to blow things out of the water. (Yes, you need to balance against the needs of the business, but for the most part, your top 20% and top 4% will get that… and want that… they want to stay employed, too). 

How Now, Chick-fil-A Cow

Oversimplified? Well, yes. The magic is in how it’s done…

  • How the performer analysis is conducted
  • How the levers are identified
  • How the best practices research is conducted
  • How that resulting knowledge is turned into learning and performance support systems
  • How comp gets fixed 
  • How you get the organization aligned around radical performance improvement

Simple? Yes.  Easy? Err… well, if it were, everybody would be doing it. And they’re not. But that doesn’t mean it’s changed. Sound the alarm.

Circumstantial Evidence

So, quit complaining about how much has changed, consider those things the “surrounding circumstances,” determine your levers, success factors and best practices, and do the hard work. That’s a lot less fun for some people than yelling about change, but it sure gets better results.

Me? I love it. How ’bout you? Feel the same? Think differently? Either way, let me know.

Be safe out there.

Mike

____________________________________
Mike Kunkle

 

Contact me:
mike_kunkle at mindspring dot-com
214.494.9950 Google Voice

 

Connect with me:
http://www.linkedin.com/in/mikekunkle 
http://twitter.com/mike_kunkle


Should You Use Psychometric Assessments to Hire?

Right People in the Right Seats on the Bus

Given a choice between having well-designed, validated psychometric assessments as a balanced part of my selection process or not, I’d choose to have them, every time.  In the now-famous words of Jim Collins, there are few things as important as getting the right people in the right seats on the bus.

Educate, aka Hit the Abusers with a Book

When people raise concerns about assessments, it’s generally about misuse and abuse. Education is the best defense against misuse – which I’m sure does happen. Picking the right assessments is the best start. Are the assessments ipsative or normative? Is the validation predictive, or of another type? I prefer normative and predictive. All of this might seem daunting at first, but it’s not that difficult to understand the basics.

For those interested in exploring assessments or just learning more, here are some decent non-partisan educational resources:

Mix Candidate with One-Third Assessment and Stir Vigorously

The other way to avoid misuse is to implement assessments intelligently. In my experience, most reputable vendors suggest using assessments as one-third of the decision process. Many now generate multiple reports, including an interview guide to help you dig into areas where the candidate didn’t assess well in comparison to the predictive model for your position. The information can also be used developmentally, as well as for selection. That’s a big bonus to me.

As Frankie Said, I Do it My Way

The selection process I’ve used whenever possible is:

  • Resume screen, mostly as a “fog the mirror test” and ensuring they have the sense not to use “sexydude-at-email.com” as their resume email and can represent themselves well in writing.
  • Phone screen, to ensure coherent speaking ability and that I’d want them representing my product and company, as well as covering any knock-out factors (they want 25% travel and the position is 80%; they want full base + bonus and it’s small base + mostly commission)
  • Conduct the Assessment
  • Review the Assessment results, and if all of the above indicate we have a decent or possible match…
  • Conduct a full-fledged interview (some behavioral elements, some problem-solving, some hypothetical, all judging different things) and likely, multiple interviews by different parties
  • As relevant, some sort of audition or other non-psychometric assessment (for instructors a mock-classroom audition, for sales folks a sales roleplay, for instructional designers a writing assignment, etc.).
  • A review and calibration of results.
  • Final decision and offer.
  • Background, criminal and reference checks.

Don’t Pull Grass Out with the Weeds

Now, while I strongly believe in assessments, I remain cautious about weeding out candidates based on an assessment alone, delivered first.

There are several reasons why I review resumes and conduct brief phone screens prior to assessing.

Validity and Predictive Reliability

  • I’ve heard many wild claims, but I still haven’t personally seen a detailed validation study, conducted with generally-accepted statistical measurement and assessment validation methods, that shows more than 75-85% predictive reliability for selection. That means that the best are still not 100% reliable for predictive validity (statistically predicting success in a certain role), and in fact, may be inaccurate between 15-25% of the time (in the above example, which is common). I’m sure I don’t need to remind anyone that 25% is 1 out of 4 and 15% is 1 out of 6.67. Much better odds than without an assessment, but for my taste, still not a standalone tool.

From a logic perspective, if I have just added assessments to improve my hiring success, performance and retention rates, why would I then purposefully deflate my potential success rates by using only the assessment?

Cost Factors

  • After receiving the resume (because there are costs leading up to that point), reviewing a resume is inexpensive (just my time and lost opportunity costs, and I can scan them quickly, for the things I’m weeding for).
  • The screen conversation is relatively inexpensive.
  • Good assessments cost money. Real, hard dollars – an actual expense line item. (I find them worth it and believe there is significant ROI, but they’re not free. If you are hiring enough people to cost-justify and arrange an unlimited enterprise license, this is the way to go, in my opinion. But if not, you pay by the drink, each time.)

Weed-out Factors

  • If the candidate uses bad judgment in their resume (aka sexydude-at-email.com) or exhibits poor communication and print-presentation skills (and assuming those are required for the job), I might weed them out without an assessment.
  • In the screen, if they want a large base and bonus and the position is 100% commission, and they aren’t willing to consider the upsides and move forward, I don’t need an assessment to weed them out. Buh-bye. (And yes, I would be clear about that in postings, but people don’t always read closely, and they submit their resume anyway.)

Skipping the weed-out factors also connects back to cost: I might waste time and an assessment (and the associated costs, if you don’t have an unlimited enterprise license) on someone I absolutely wouldn’t hire anyway. Why do that?

Putting Screens in Your Window

Now, if you are absolutely besieged with resumes and overwhelmed in the hiring process, it gets more palatable to screen with the assessment to pare down the number of viable candidates to a manageable level – but that’s using assessments as a screening tool. (Of course, then you have the data for selection.)

Whichever you do, do it consistently and legally, and ensure you’ve got HR in the loop (preferably from the beginning). Reputable assessment companies comply with EEOC and legal guidelines, but you also need to comply with your organizational policies and practices.

Why the Risk-Averse Curse

By the way, if you do what I recommend and don’t use the assessment as a knock-out factor by itself, there is one inherent risk. Managers may still develop a preference for candidates based on old methods – because they like the candidate based on gut feel (or halo effect, or a variety of other interview biases) – and then they may resist the assessment results. That’s why I like to assess early, after only a knock-out resume review and a quick phone screen, but before face-to-face interviews. It doesn’t give people as much of a chance to form those bonds before getting the assessment data and corresponding interview guides to prepare for the interview. In many organizations, the resume review and phone screen are done by a recruiter, who administers the assessments and passes the candidate to a hiring manager for a decision. In those cases, this process is even more effective, because the hiring manager hasn’t had an opportunity to form a bias.

If You Could Be Any Animal…

So, those are some quick thoughts on using assessments. I use ‘em, whenever possible. Coming back to sales effectiveness, which is my playground, I always remember this:

  • Pigs aren’t birds. They don’t fly.
  • Turkeys are birds, but can’t achieve lift off.
  • Sparrows are birds. They fly, but they’re just not as strong or big as eagles.
  • Hawks are closer, but they’re still not quite eagles either.
  • If you want something that looks, sounds, flies, hunts and gets results like an eagle, why not go find an eagle?

With the right people in the right seats on your bus, your training, compensation, support, policies, processes and other performance tools and resources can really help move the needle. With great performance support, you can help great people excel even more. But you still can’t make pigs fly.

Be safe out there.

Mike


Some Quick Thoughts on Sales Performer Analysis

While there are multiple reasons to study the production metrics of your sales force, one of my favorites is to conduct a Performer Analysis. The goal of a Sales Performer Analysis (or perhaps I should say, ‘my’ Sales Performer Analyses) is to decide who to study to determine best practices, as well as the differentiating factors between ‘top vs. middle’ and ‘top vs. low’ producers (often answering the question, “What should the average sales person CONTINUE, STOP and START doing, to be more effective?”).

Who’s on First

To determine who to study, you must first determine:

  • Which metrics truly matter and define ‘top performance’ in your business
  • Which categories of performer you will define and study
  • How you will place performers into those categories, based on your metrics

Say This 5 Times Fast:  Which Metrics Matter

Every sales force has its defining production metrics. The metrics may vary by industry, company and/or product, so I can’t offer specific advice, just general.

Generally speaking, you can consider production metrics such as:

  • The number of sales (units, pieces, orders)
  • Dollar volume/gross revenue
  • Dollar volume/net revenue
  • Profit per sale
  • Price or discounted price per sale/unit/piece/order
  • Some quality measure – perhaps orders delivered, orders cancelled, or a similar measure (Sales should have some influence or bearing on such a measure, if used)

Gather Unto Thee Thy Numbers

When you gather the metrics:

  • Gather them for your entire sales force, over some reasonable time period
  • Use a time frame that is long enough to show consistency or trends. I usually look at the last twelve months, but also the last quarter and last month, to see how the average metrics change by slice. (Is the organization – or the data for a particular performer – trending up, down, or wavering?)
  • Place your performers, their appropriate demographic data, and their production metrics in a spreadsheet, in pivot tables, in a database or in statistical analysis software, so you can sort, filter, pivot or query to slice and dice your data in a variety of ways.
  • Consider how (or if) you will level out differences in tenure. For example, if you look at dollar volume over the course of a year, a sales rep who was working for the full year has an advantage over someone who was hired after the first quarter and only worked nine months. This also makes it difficult to uncover a 3-month rep who ramped up to the top 20% (or even top 40%) very quickly. You either need to do some things to level the playing field, or only study people who were employed and actively selling in your defined timeframe. I tend toward leveling, and include as many people as possible. And I often use averages or sales (units and/or dollar volume) per day worked to level the field. 

Ow, I Think I Pulled Something

Aside from the metrics above, once I settle on which I’ll use, I’ll either ask for (in the initial data pull) or calculate averages per rep month or per day worked, for each metric. When I do this, I prefer to start counting work days from the date of first sale, rather than hire date. (Looking at elapsed time between start date and first sale is another slice of data you might want to consider, depending on what you’re hoping to accomplish.) This doubles the number of metrics you’re looking at, but provides so many different options for analysis. (And weighting, if you want to get into that.)
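As a rough sketch of that leveling idea: the snippet below counts work days from the date of first sale and computes revenue per day worked, so a short-tenure rep can compete fairly with a full-year rep. The rep names, dates and revenue figures are invented for illustration.

```python
from datetime import date

# Hypothetical rep records: date of first sale and gross revenue over the
# analysis window (all names and numbers are made up for illustration).
reps = [
    {"name": "Ann", "first_sale": date(2010, 4, 1),  "gross_revenue": 480_000},
    {"name": "Raj", "first_sale": date(2011, 1, 10), "gross_revenue": 110_000},
]

as_of = date(2011, 4, 1)  # end of the analysis window

for rep in reps:
    # Count work days from first sale, not hire date.
    days_worked = (as_of - rep["first_sale"]).days
    # Leveling: revenue per day worked puts a 3-month rep and a
    # full-year rep on the same footing.
    rep["revenue_per_day"] = rep["gross_revenue"] / days_worked

ranked = sorted(reps, key=lambda r: r["revenue_per_day"], reverse=True)
```

Note what the leveling surfaces here: the newer rep sold far less in absolute dollars, yet ranks first on a per-day basis, which is exactly the fast-ramping performer a raw twelve-month total would hide.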

Performer Categories – aka, If You Could Be an Animal, Which…

Here are some categories that I’ve used:

  • Top Seasoned Producers (top 20% and top 4%)
  • Top New Reps
  • Fastest Ramp Up
  • Most Improved Over <Timeframe>
  • Middle Producers (I often grab the slice between Mean and Median performance)
  • Bottom decile (or 8th or 9th decile, to avoid the complete bottom-feeders)

Using Metrics to Put Performers into Categories – aka Who Let the Category Out of the Bag?

Once you have your metrics with a solid rationale, and have determined who you’re trying to find, you do your analysis. (Or, if you’re smart, you find someone to do your analysis for you.)

To find the top producers, in any tenure band, I like to sort each metric in descending order, determine the top quartile for each of those metrics, and highlight them in some way. Then I look across metrics and rank reps by how many times they fall into the top quartile. After you define that subgroup, you can do the same again within the smaller group, and consider factors like most revenue, best profit percentage, largest number of sales, and rank the best of the best.
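That sort-and-rank pass can be sketched in a few lines of Python. All rep names, metric choices and figures below are invented for illustration; in practice you’d pull them from your own production data.

```python
# Hypothetical production metrics per rep (names and numbers invented).
metrics = {
    "Ann":  {"units": 120, "revenue": 480_000, "profit_per_sale": 900},
    "Raj":  {"units": 95,  "revenue": 510_000, "profit_per_sale": 1_100},
    "Lee":  {"units": 60,  "revenue": 200_000, "profit_per_sale": 400},
    "Dana": {"units": 130, "revenue": 450_000, "profit_per_sale": 850},
    "Kim":  {"units": 40,  "revenue": 150_000, "profit_per_sale": 300},
    "Bo":   {"units": 110, "revenue": 400_000, "profit_per_sale": 700},
    "Sam":  {"units": 70,  "revenue": 300_000, "profit_per_sale": 950},
    "Ty":   {"units": 85,  "revenue": 350_000, "profit_per_sale": 600},
}

# Count how many times each rep lands in a top quartile.
top_quartile_hits = {name: 0 for name in metrics}

for metric in ["units", "revenue", "profit_per_sale"]:
    # Sort reps on this metric, descending, and flag the top quartile.
    ranked = sorted(metrics, key=lambda name: metrics[name][metric], reverse=True)
    quartile_size = max(1, len(ranked) // 4)
    for name in ranked[:quartile_size]:
        top_quartile_hits[name] += 1

# Rank reps by how often they fall into a top quartile across metrics.
leaders = sorted(top_quartile_hits, key=top_quartile_hits.get, reverse=True)
```

With this toy data, two reps land in the top quartile on two of the three metrics, so they float to the top of the cross-metric ranking even though neither leads every individual metric, which is the point of looking across metrics rather than at any one.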

Smaller Slice, Twice the Calories

Interestingly, if I am trying to build a hiring profile (preferably by using psychometric assessments), I include the top 4%. If I am looking to develop training best practices, I will study the top 4% to some degree, but spend more time with the rest of the top 20%, just below the top 4% (the remaining 16%). I’ve found that the top 4% (or at least a good portion of them) are often selling through their own special brand of magic or the force of their larger-than-life personalities. In reality, this means that what they do is often not replicable by the average person. Conversely, the rest of the top 20% are often ‘normal humans’ who have simply ‘figured out the magic sauce.’ And what they are doing is very often replicable by others. This isn’t a statistically-proven fact (or not by me, anyway) – it’s my personal observation based on my experiences doing this work. You’ll have to form your own opinions on this one – but that’s mine.

If You See the Buddha on the Road, Kill Him

When I started doing performance work, I’d ask the sales leaders to identify the best performers to study. Today, I may still ask opinions or look for anecdotal evidence to support my data-driven findings after-the-fact, but I long ago stopped simply asking. Here’s why.

I Swear This is True

I was once given a performer named (err… we’ll call him) Bill. Bill worked in (we’ll pretend…) Indianapolis. At the time Bill’s name was given to me by his Regional Sales VP, Bill’s monthly performance was about 40% higher than average. Pretty cool, huh? A great fellow to study. Except Bill had taken over the territory from… hmm… Sandy… about six months earlier, and it had been steadily declining since. Fast forward another six months, and Bill’s performance was now well BELOW average, and he was terminated. In hindsight, he’s not looking like such a good guy to get best practices from, right? Fortunately, I looked at the trends over the prior six months and quietly jettisoned Bill from my study group for top performers. But I still studied him… and spoke with some of his customers about him (and funny, they all told me some great things about the differences between Bill and Sandy – unprompted by me, I might add – which was awesome for my study: differentiating factors).

Meet My Auntie Doe Tull

This is why, after getting a good sense of the business and what I should look at, I do my own analysis, then seek anecdotal evidence to support or weed out performers. For sales managers, for example, you can remove their personal results and look at team performance to determine the best managers. But you also have to look for balance in the team’s performance, as opposed to one top producer carrying the group. And what you can’t tell (easily) from data but can ask about and smoke out on your own, is whether this manager lucked out with a great team, turned them around, was handed them, or built them up from scratch, recruiting, training and coaching them all to higher levels of success. Make no mistake, you *can* get to much of this through data, but it’s so challenging that it’s a lot easier to have some internal conversations and call it a day. If I’m feeling dubious after the conversations, I might do some other data-digging.

Quick is Relative (Just Like Auntie)

Well, I’m not sure now whether those are “Quick” Thoughts on Performer Analysis, or not, but I do hope they’re helpful.

Thoughts? Comments? I’d enjoy hearing from you.

More soon… in the meantime, as always, be safe out there.

Mike
