
Putting together just the right set of product features can feel like being a sorcerer creating a potion. Throw in a little of this, a little of that, and then poof…a complete product comes together.

However, there’s a major problem with this approach: you don’t know out of the gate what set of features and benefits is really going to make a product shine in customers’ minds. Rather than crossing your fingers and hoping you got the formula right, you can rely on two classic market research tactics to optimize a product’s features and benefits: Maximum Difference (MaxDiff) and Conjoint analyses.

While each approach has the same fundamental goal—helping you visualize how consumers make trade-offs between features and benefits when making decisions—they yield different results. As a result, you’ll need to understand the underlying differences in inputs and outputs to assess which approach is right for your business needs.

How MaxDiff & Conjoints Compare Against Each Other

Both types of studies aim to identify the relative preference for product components. However, a MaxDiff is far simpler since it just compares high-level sets of features, messages, or other variable groupings against each other. For instance, if an organization wants to test what makes someone more likely to purchase one soda over another via a MaxDiff, they would ask about the relative importance of attributes such as coming in a 20 oz bottle, being sugar free, or having less carbonation, along with other soda-related variables.

A conjoint, in contrast, is more complex. It tests not just the relative importance of key product features but also how valuable the levels are within those features. Let’s use the soda example again. In this case, you could test the relative importance of price, size, and flavor in a purchase decision. At the same time, you could evaluate how compelling, or not compelling, certain feature levels are: for instance, a price point of $1.99 vs. $1.59 vs. $0.99.

WHAT IT DOES

MaxDiff
  • Measures preferences towards a list of features (e.g. affordable price, cancel any time) or messages (e.g. no trans fat, 50% more)

Conjoint
  • Measures relative preferences for a product’s feature or service bundle (e.g. price vs. cancellation terms)
  • Assesses the relative appeal, or lack thereof, of levels within each feature (e.g. $20/mth vs. $30/mth)
  • Optimizes the product or service bundle’s features

Given the methodology differences, the two approaches help answer different sets of research questions.

MaxDiff studies center on understanding the relative importance of features or messaging. So long as there isn’t a need to understand whether certain features are alienating or could harm purchase decisions, and the product of interest is relatively simple with limited components, a MaxDiff can be an ideal approach.

In contrast, a Conjoint study’s complexity allows it to answer additional questions, including the ideal bundling of features and levels, as well as whether certain levels within a feature may negatively impact purchase intent.

IDEAL TO USE WHEN…

MaxDiff
  • You want to assess how important or unimportant certain features are to a consumer’s decision (e.g. affordable price, made in the USA)
  • You’re not concerned about measuring whether a feature presents a barrier to purchase
  • You have a relatively simple product with few levels within each feature (e.g. only one cancellation policy)

Conjoint
  • You want to optimize the set of features offered as part of a product bundle (e.g. price=$20/mth + cancel any time)
  • You want to see the relative value of different product features to the consumer (e.g. price impacts the purchase decision by 25%, cancellation term by 15%)
  • You want to weigh the extent to which levels within particular features may improve purchase intent or create a barrier to purchase (e.g. increasing price by $10 has zero impact on purchase intent…or decreases it by 20%)

Both of these approaches are essentially trade-off studies, which means they require respondents to assess different features or levels, determine the relative value of those features and levels, and decide which ones they’d choose over others. As a result, each study must offer enough features or levels to allow meaningful trade-offs to be made.

When fielding a MaxDiff, you should aim to have between 12 and 20 features for respondents to assess. In contrast, Conjoint studies are best when there are anywhere from 3 to 8 unique features of interest, with 2 to 7 levels for any given feature.

INPUT CRITERIA

MaxDiff
  • Best practice is a minimum of 12 features and a maximum of 20

Conjoint
  • Aim for 3-8 features, with each feature having 2-7 levels
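To see why those limits matter, consider how quickly a conjoint’s design space grows. The sketch below is a minimal Python illustration using hypothetical soda features and levels; it simply enumerates every possible product profile, and the count is the product of the level counts, which is why both the number of features and the number of levels are kept small.

```python
from itertools import product

# Hypothetical soda features and levels for a conjoint exercise. The number
# of distinct product profiles is the product of the level counts, which is
# why conjoint inputs are kept to a handful of features and levels.
features = {
    "price": ["$0.99", "$1.59", "$1.99"],
    "size": ["12 oz", "20 oz", "2 liter"],
    "flavor": ["cola", "cherry", "sugar free"],
}

profiles = [dict(zip(features, combo)) for combo in product(*features.values())]
print(len(profiles))  # 3 * 3 * 3 = 27 possible profiles to sample from
print(profiles[0])    # {'price': '$0.99', 'size': '12 oz', 'flavor': 'cola'}
```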

 

Differences In The MaxDiff & Conjoint Respondent Experience

Respondents taking your survey will experience slightly different question prompts depending on whether they are taking part in a MaxDiff or Conjoint study.

MaxDiff Style Questions

MaxDiff survey questions ask individuals to pick the most important and least important factors when making a decision.

Let’s take a look at the prompt below. The question would read, “Based on the list of features below, which of the features is the most important and least important when selecting a restaurant?” The respondent would see a table that lets them evaluate different features and then select the most and least important of just the features they are seeing on the screen. 

Once the respondent selects their choices, the table refreshes. The respondent may see completely new features, or a mix of new and old features, and is asked once again to pick the most and least important features in their decision process. This is repeated many times for each participant.
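As a rough sketch of those mechanics, the Python snippet below builds randomized screens from a hypothetical list of restaurant features and tallies the most and least important picks. Real MaxDiff tools balance how often items appear and co-appear, and typically estimate utilities with a logit or hierarchical Bayes model; simple best-minus-worst counts are used here only to illustrate the idea.

```python
import random
from collections import Counter

# Hypothetical restaurant features a respondent might evaluate
features = [
    "affordable prices", "outdoor seating", "kid friendly",
    "vegetarian options", "open late", "takes reservations",
    "free parking", "local ingredients", "full bar",
    "quick service", "quiet atmosphere", "loyalty program",
]

def build_screens(items, per_screen=4, n_screens=9, seed=1):
    """Return the randomized subsets one respondent sees, screen by screen."""
    rng = random.Random(seed)
    return [rng.sample(items, per_screen) for _ in range(n_screens)]

best, worst = Counter(), Counter()

def record_answer(most_important, least_important):
    """Tally one respondent's picks from a single screen."""
    best[most_important] += 1
    worst[least_important] += 1

# Simulate a single answer on the first screen
screen = build_screens(features)[0]
record_answer(most_important=screen[0], least_important=screen[-1])

# Simple best-minus-worst score per feature (a stand-in for a full utility model)
scores = {f: best[f] - worst[f] for f in features}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:3])
```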

Conjoint Style Questions

Conjoint questions ask respondents to select their preferred product based on the features it bundles together.

Let’s take a look at the conjoint prompt below. The question would read, “Given the restaurants you see below, which would you be most likely to visit?” The respondent sees options that mix and match different features and must then select the option they would be most likely to choose.

Once the respondent selects their preferred bundle, the table refreshes. The respondent will then see a new set of bundles and will once again be asked to select their favorite. This bundle re-shuffling is repeated many times for each participant.
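A minimal sketch of that bundle re-shuffling, again with hypothetical restaurant features and levels, might look like the following. Production conjoint tools draw bundles from a statistically balanced experimental design rather than purely at random, but the respondent-facing effect is the same: a fresh set of mixed-and-matched options on every screen.

```python
import random

# Hypothetical restaurant features and levels for a conjoint exercise
features = {
    "entree price": ["$10", "$20", "$25"],
    "cuisine": ["Italian", "Mexican", "Thai"],
    "wait time": ["10 minutes", "25 minutes", "45 minutes"],
}

def random_bundle(rng):
    """One full-profile concept: one level drawn from every feature."""
    return {name: rng.choice(levels) for name, levels in features.items()}

def choice_task(rng, n_options=3):
    """One screen: several competing bundles the respondent picks between."""
    return [random_bundle(rng) for _ in range(n_options)]

rng = random.Random(7)
tasks = [choice_task(rng) for _ in range(10)]  # roughly 10 screens per respondent
print(tasks[0])
```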

MaxDiff versus Conjoint Output Differences

In both cases, the output will yield a relative utility measure for each feature. That is, you’ll be able to visualize how much any given feature impacts a purchase decision. In the case of the MaxDiff, you’ll be able to see each feature or benefit’s overall impact on a purchase decision.

In the case of the conjoint, you’ll be able to see the relative value of different features in impacting the purchase decision.

However, the conjoint also offers far more granular information. For starters, you can see something called the average utility of each feature’s levels. What that really means is that you can drill down within a feature and see the degree to which a specific level drives purchase intent or alienation. In the example below, we see that a $10 entree has a positive impact on purchase intent, while an entree priced at $20 still has positive interest, though not as strong as if it were priced at $10. However, once the entree price reaches $25, there is purchase alienation.


Additionally, you can isolate what the ideal product bundle looks like. That is, the set of levels within each feature that will most increase the chance of a customer making a purchase.
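To illustrate how level utilities roll up into an ideal bundle, the sketch below uses hypothetical part-worth numbers chosen to echo the $10/$20/$25 entree example above; in a real study these part-worths would be estimated from the conjoint responses themselves. Each bundle’s appeal is simply the sum of its levels’ utilities, and the ideal bundle is the highest-scoring combination.

```python
from itertools import product

# Hypothetical part-worth utilities echoing the entree-price example:
# $10 lifts purchase intent, $20 lifts it less, and $25 actively hurts it.
utilities = {
    "entree price": {"$10": 0.8, "$20": 0.3, "$25": -0.4},
    "cuisine": {"Italian": 0.2, "Mexican": 0.5, "Thai": 0.1},
    "wait time": {"10 minutes": 0.6, "25 minutes": 0.0, "45 minutes": -0.5},
}

def bundle_utility(bundle):
    """Total utility of a concept is the sum of its levels' part-worths."""
    return sum(utilities[feature][level] for feature, level in bundle.items())

# The "ideal" bundle is the highest-utility combination of levels
all_bundles = (
    dict(zip(utilities, combo))
    for combo in product(*(levels.keys() for levels in utilities.values()))
)
best = max(all_bundles, key=bundle_utility)
print(best, round(bundle_utility(best), 2))  # $10 / Mexican / 10 minutes -> 1.9
```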

Determining If You’re Ready For A Conjoint or MaxDiff Study

A critical piece of performing Conjoint or MaxDiff studies is entering into the research with a discrete set of features you want to evaluate. As a result, you’re only ready to do this type of work if you already have a strong sense of the features and levels that make sense to test with a research audience.

Usually, that means product concept tests (fairly high-level assessments designed to get a general sense of a product’s appeal, or lack thereof) have already been performed. If there is general interest, then further testing and optimization is absolutely warranted. However, if you haven’t done this step first, you’re likely putting the cart before the horse, working on product optimizations without having first validated product interest.