
Concept Evaluation

Concept evaluation is the process of ranking concepts to determine their relative merits. This requires (1) a set of criteria on which each concept is rated, (2) a reference design to use as a baseline against which comparisons are made, and (3) a method by which to rank concepts against the reference with respect to the criteria that is both effective and efficient.

Introduction

At this point in the design roadmap, you should have between 5 and 15 design concepts. You must evaluate these concepts in some meaningful way to determine the fittest one. Here, the fittest concept is the one whose overall rating, measured against the product characteristics and relative to the reference design, beats that of every other available concept.

Measuring concept fitness

Since the design concepts are all meant to address the requirements, the PRS is a very important document for concept evaluation.

All you have at this point are concepts, which are by definition not detailed enough to analyze completely for fitness. Nevertheless, you can make some key distinctions between concepts - for instance, you need very little detail to know that a conventional internal combustion engine simply cannot be used to create a zero-emissions vehicle.

Since “fitness” does not have an objective scale (like “kilograms” or “GDP”), you can't know exactly what fitness means for a given design problem. Because of this, you have to assess every concept relative to some baseline, and rank them in order. The concept that is ranked first is the most fit, to the best of your knowledge.

The reference design

The reference design provides a baseline against which all assessments are made. It represents the thing you have to beat. The more you can beat it, the “better” the design.

See the page on reference designs for further details.

Choosing a measurement scale

To be able to rank the concepts with respect to the reference design, you have to be able to assign values such that you can say, with some confidence, that one concept is better than another. This implies some kind of quantitative scale that captures the range of values with which you will assess concepts.

You don't want a scale with a large range - such as from 0 to 100 - because that is too fine for the information available. Your concepts are only vaguely defined at this point, so the extra precision of a large-range scale will only lead to unnecessary and arbitrary assessments.

On the other hand, you don't want a scale with too few values on it either, otherwise you won't be able to distinguish sufficiently well between the alternative concepts.

Over time, design researchers have found that a five-point scale provides sufficiently fine measurements without being overly precise. The scale is linear and ranges from -2 to +2, thus:

Value   Meaning
 -2     much less fit than the reference
 -1     slightly less fit than the reference
  0     roughly of equal fitness as the reference
 +1     slightly more fit than the reference
 +2     much more fit than the reference

It's convenient to range the scale around zero because zero is a naturally neutral value, negative numbers naturally suggest worse performance, and positive numbers naturally suggest superior performance.
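
If you like to think of the scale in code, here is a minimal sketch; the names below are invented for illustration and are not part of any template:

  # Hypothetical sketch of the five-point fitness scale; not part of the DM template.
  FITNESS_SCALE = {
      -2: "much less fit than the reference",
      -1: "slightly less fit than the reference",
      0: "roughly of equal fitness as the reference",
      1: "slightly more fit than the reference",
      2: "much more fit than the reference",
  }

  def validate_rating(value):
      """Reject any rating outside the five allowed values."""
      if value not in FITNESS_SCALE:
          raise ValueError(f"rating must be one of {sorted(FITNESS_SCALE)}, got {value}")
      return value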

Alternative Scales

This is not the only possible scale. In aerospace engineering, for example, a scale that ranges exponentially from 0 to 9 - that is, { 0, 1, 3, 9 } - is common because aerospace engineers tend to be overly conservative. The exponential scale offsets the conservatism by attributing significantly higher scores to superior features of a design.

Other industries have their own scales, which have typically been developed in response to industry-specific empirical research.

In the general case, however, the {-2, -1, 0, 1, 2} scale is a perfectly usable and robust scale.

What do we assess?

Ideally, you would assess each concept with respect to every requirement, but that is a long and difficult task. Since the concepts are still quite vaguely defined, such a detailed evaluation would also be very inefficient. You can perform an easier evaluation by grouping the requirements according to the product characteristics.

Consider any PRS. It can be treated roughly as a hierarchy: each product characteristic is the root of a hierarchy of functional requirements and constraints. We can think of all the requirements under a PC as being in a group named by the PC.

For instance, consider usability. In your PRS, usability is defined as the set of FRs and Constraints associated with it. Thus, when you compare a concept to the reference design on usability, you need to think of that as performing a comparison using all the FRs and Constraints that connect to usability.

This also means that each team can (and very likely will) have very different “definitions” of each of the PCs. And that's just fine.
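
To make the grouping concrete, here is a rough sketch in Python; the PC and requirement names are invented for illustration and will differ for every team's PRS:

  # Hypothetical grouping of requirements under product characteristics (PCs).
  # Every team's PRS will define its own FRs and constraints under each PC.
  pc_groups = {
      "usability": [
          "FR: user can operate the device with one hand",
          "Constraint: initial setup takes no more than 5 minutes",
      ],
      "safety": [
          "FR: device shuts off automatically when tipped over",
          "Constraint: no exposed surface exceeds 60 degrees C",
      ],
  }

  # Rating a concept on "usability" means judging it against every entry in
  # pc_groups["usability"], not against a dictionary definition of the word.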

Treat all PCs equally

In real life, one requirement will be more (or less) important than other requirements. Determining which requirements are more important will depend on many things: the nature of the intervention being designed, the potential severity of its failures, the needs of the client and of the engineering firm, the available manufacturing resources, various regulatory factors, cultural influences, and a great deal of engineering knowledge.

Students are, however, generally not able to evaluate these factors; they lack the education, experience, training, time, and resources to do the work required.

Therefore, for the sake of this project, teams will assume that the PCs are all equally important.

How do we combine and compare measurements?

Once you have a PRS, a set of concepts, and a reference design, you can begin to perform the evaluation of your concepts.

An excellent tool for performing this kind of analysis is a decision matrix. It is a chart that captures every aspect of the assessment.

There is a variation of the DM, the weighted decision matrix, that accommodates cases where some criteria are more important than others. However, we do not use this variant in this course.

Read more about using a decision matrix to assess your design concepts.
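
To give a sense of the arithmetic behind an (unweighted) DM, here is a minimal sketch; the concept names, PC names, and ratings are all invented for illustration:

  # Minimal sketch of an unweighted decision matrix (all PCs equally important).
  # Concepts, PCs, and ratings below are placeholders, not real data.
  ratings = {
      "Concept A": {"usability": 1, "safety": 0, "durability": -1},
      "Concept B": {"usability": 2, "safety": -1, "durability": 1},
      "Reference": {"usability": 0, "safety": 0, "durability": 0},  # the baseline is all zeros
  }

  # A concept's overall score is just the sum of its ratings across the PCs.
  scores = {concept: sum(pc_ratings.values()) for concept, pc_ratings in ratings.items()}
  # scores == {"Concept A": 0, "Concept B": 2, "Reference": 0}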

Executing a concept evaluation

This section describes, step by step, how to carry out a concept evaluation.

To execute a concept evaluation, you need your PSS, PRS, PAS, and an initial set of design concepts.

In a nutshell, each team member will conduct a concept evaluation of all concepts from the point of view of their Persona and SUC. All team members will then collaboratively combine their individual evaluations into a single overall evaluation.

Step 1: Set up individual DMs

Each team gets a DM template. This Google Sheet has several tabs:

  • P1 through P6: These are individual DMs, one for each team member / Persona.
  • DM: The DM tab gathers all the results from P1..P6.
  • Clustering: This tab generates a visual representation of the clustering of concepts based on the data in the DM.

The template already has the basic PCs listed and room for 10 concepts.

The template also already has the reference design listed. Its rankings are all zero because it's the baseline; your concepts may do better or worse than this baseline.

Step 2: Verify concept names

In the DM tab, in Column A, replace the xs with the proper names of each of your 10 concepts.

The concept names will be copied automatically from the DM tab to every other tab as needed.

Step 3: Individually evaluate all concepts

Tabs P1..P6 are for the Personas. Since each team member is in charge of one Persona, there is one tab for each team member. If there are only five people on your team, leave P6 empty of data - do not fill it with zeros.

Each of the P1..P6 tabs has a cell at the top to name the Persona for which the DM is being prepared.

Each team member now evaluates each concept against all the PCs, for their Persona and SUC.

This can be done individually and in parallel across your team. The idea here is that you want to decide which concept is best for your Persona in your SUC.

An example DM for one specific Persona is given below.

Each PC represents a set of FRs and constraints; make sure you're evaluating a concept with respect to those FRs and constraints, not with respect to some “dictionary definition” of what each PC may mean.

Document your rationale for each rating. You can do this by adding a “comment” or “note” to that cell of the spreadsheet. This is useful because it's relatively easy to copy and paste that information later into your design report.

Remember that you are your Persona's agent - that is, it's your job to make sure that the final design meets your Persona's needs, so you need to evaluate each concept only from their point of view.

When evaluating a concept with respect to one PC, ignore all the other PCs; rate the concept only on the given PC as if that were all that mattered.
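
As a sketch of what one team member's ratings might look like once rationale is attached (the Persona, SUC, PCs, and ratings here are all invented), each cell can be thought of as a rating plus a short note:

  # Hypothetical record of one Persona's ratings for a single concept.
  # Each entry pairs the rating with the rationale you would attach as a spreadsheet note.
  persona = "P1: Alex (daily transit commuter)"
  concept_ratings = {
      "usability": (1, "one-handed operation suits Alex's crowded-bus SUC"),
      "safety": (0, "no meaningful difference from the reference design"),
      "portability": (-2, "much heavier than the reference; Alex walks 2 km daily"),
  }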

Step 4: Create a single overall DM

This is done automatically by the DM tab of the spreadsheet.

For each cell of the overall DM, the spreadsheet will take the lowest value of the corresponding cells in the component (individual) DMs. This is because you need to find a concept that works for all Personas. If a given concept does poorly for one Persona, then you're effectively excluding all users that are like that Persona from using your design. That would be bad.

For instance, say your team is reviewing Concept 3 with respect to Usability. The lowest rating across all Personas for the concept's usability will be the value that appears in the DM tab.

You can carefully review the example decision matrix to see how that works.
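
In code form, the combination rule is just an element-wise minimum over the individual DMs. The sketch below uses invented data and is not the template's actual formula, but it shows the idea:

  # For each concept and PC, the overall DM keeps the lowest rating any Persona gave.
  individual_dms = {
      "P1": {"Concept 3": {"usability": 2, "safety": 1}},
      "P2": {"Concept 3": {"usability": -1, "safety": 1}},
      "P3": {"Concept 3": {"usability": 0, "safety": -2}},
  }

  def combine(dms):
      overall = {}
      for persona_dm in dms.values():
          for concept, pc_ratings in persona_dm.items():
              for pc, rating in pc_ratings.items():
                  cell = overall.setdefault(concept, {})
                  cell[pc] = min(cell.get(pc, rating), rating)
      return overall

  # combine(individual_dms) == {"Concept 3": {"usability": -1, "safety": -2}}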

Step 5: Identify the best concepts

The last thing you do is to identify the cluster of best concepts. Fortunately, if you're using the DM template, most of the calculations will be done for you. Remember that since your concepts are still quite vague and qualitative, the numbers that represent the Scores of the concepts aren't very reliable. So you can't just take the top-rated one as best.

Consider the example below.

This is the cluster analysis of the sample DM. All the data here is generated automatically, pulled from your DM - in this case, the overall DM. Assuming you did everything right, you shouldn't have to do a thing here except interpret the results.

Some notes are in order.

Notice that the yellow columns show the concept names and rankings taken directly from the DM tab, but sorted by descending rank value.

Notice that the reference design is included here. This is so you can clearly tell how many concepts were rated as being overall better than the reference.

The chart helps you visualize the clustering of various concepts.

Your team must decide which concepts to keep, and which can be discarded. There are two strategies you can use to decide this.

Choose the top cluster
You may notice a significant gap between the first few concepts and the rest. Those top concepts are the top cluster you're after. Your team would keep these concepts and discard the rest.
Discard the worst concepts
If there's no apparent top cluster (such as in the example above), you may instead choose to discard everything below the reference design, since all those concepts are definitely not better than the reference.

Note that a top cluster containing only one concept should not be taken to mean you've found a definite winner; if that happens, you should use the second strategy instead.
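
Here is a rough sketch of the two strategies, using invented scores; the Clustering tab does this work for you, so this is only to show the logic:

  # Invented overall scores; in practice they come from the DM tab.
  scores = {"C1": 7, "C2": 6, "C3": 1, "Reference": 0, "C4": -2, "C5": -5}
  ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

  # Strategy 1: keep the top cluster if there is an obvious gap after the first
  # few concepts. Here the gap between C2 (6) and C3 (1) suggests keeping C1 and C2.

  # Strategy 2: discard every concept that does not beat the reference design.
  keep = [name for name, score in ranked
          if name != "Reference" and score > scores["Reference"]]
  # keep == ["C1", "C2", "C3"]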

Deliverables

The deliverables from a concept evaluation activity include:

  • The individual DMs created by each team member.
    • These should be put into an Appendix of your report.
    • Each DM should clearly identify which Persona and SUC was used to develop it.
    • Each DM should include key justifications for the ratings given.
  • The overall DM combining the data from all the individual DMs.
    • This should be in the body of the report.
    • Key justifications must be included (taken from the individual DMs to explain the specific values appearing in the overall DM).
  • The cluster chart from the Clustering tab of the overall DM, and a clear and justified statement of which concepts your team has determined to be the top cluster.