# DesignWIKI

Fil Salustri's Design Site


2015.01.09 11:13

Grading team design reports is complex enough to warrant a page of its own.

## Overview

When grading team design reports, I use a special spreadsheet. Here's how it works.

There are too many substantial differences between sections to group the entire class together. Each section is assigned a different project; each section has a different TA; each section has a different class schedule…. These are all variables that can affect grades but that I cannot control. Therefore, I assess one section of teams at a time.

So there is a separate spreadsheet for each section. Each spreadsheet contains one sheet per team, where I assess that team's report separately, plus one additional sheet used to tabulate and calculate the grades of the individual students in the section.

Details are given below.

## The assessment component

The first part is the rubric that I use to assess the elements of the project. Each team has a separate sheet. These are the tabs marked T01, T02, and so on. There's one such tab for each team in a section. The W tab defines the weights for each graded item.

The project is divided into a number of elements, as described in the project workload distribution form (e.g. PRS, PAS, PCS, and so on). Each element has its own section in the sheet.

Each element has a weight between 1 and 3, used to rate the relative importance of that element to the overall project. These weights are in the BLUE cells.

Each element is assessed with respect to a number of different criteria. These are given below each element. Each criterion has its own weight too (on a 1-3 scale).

In the T tabs, the white columns are where I put my assessment of each line item. The rating scale is described elsewhere. It is essential to note that the report is graded as if only one person wrote it. I don't look at the team's size or membership; I just mark what has been submitted.

The first two rows show each element's contribution to the overall grade of the report, and each element's grade (out of 4).

At the top right of the T tabs is the report's grade (in green). This is the overall grade out of 4 that I have given to the report/project as a whole; it is simply the sum of all the component values in the Overall Contribution column, scaled to be out of 4.
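The weighting described above can be sketched as a short calculation. This is a minimal illustration, not the spreadsheet's actual formula: the element names, weights, and ratings below are hypothetical placeholders.

```python
def report_grade(elements):
    """Overall report grade from (weight, rating) pairs.

    elements: list of (weight, rating) tuples, where weight is the
    element's importance (1-3 scale) and rating is my assessment of
    that element out of 4. The result is the weighted average of the
    ratings, which stays on the 0-4 scale.
    """
    total_weight = sum(w for w, _ in elements)
    return sum(w * r for w, r in elements) / total_weight

# Hypothetical section: three elements (e.g. PRS, PAS, PCS)
# with weights 3, 2, 1 and ratings 3.5, 3.0, 2.0 out of 4.
elements = [(3, 3.5), (2, 3.0), (1, 2.0)]
print(round(report_grade(elements), 2))  # prints 3.08
```

Each term `w * r / total_weight` corresponds to one entry in the Overall Contribution column; their sum is the green report grade.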

The data in Row 1 is transferred automatically to the X tab, which is where I generate individual student grades.

The grade distribution sheet is the first sheet of the spreadsheet (it's called X), and has a region for each team. It uses the assessment component (described above) and the information from your project workload distribution form to calculate grades for each team member automatically.

The number in green is the overall project score from the corresponding T tab.

The number in red is the project score, adjusted to accommodate the difference in team size and normalized to the range [0-1]. Larger teams have more people to do the work, and so are expected to produce slightly better projects; smaller teams lack the "human resources" of larger teams, and so are expected to produce slightly inferior projects. Further details of this are given in the next section.

Next is the data from your team's WDF, which I copy and paste from the soft-copy WDF that you have shared with me.

The R-tot column provides a sum of each student's contribution - i.e., the total amount of responsibility each student claims for the project as a whole.

The CUM column produces a weighted sum of the student's data and the weighted scores I gave for each element of the project. In other words, it's the product of how much responsibility you claim and the quality of the work for which you claim that responsibility.

This means that a student who does a lot of good work on an important part of the project will do better than a student who does a lot of good work on a less important part of the project. It also ensures that a student who does poor work, or little work (or both), will get a low grade.
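The CUM calculation can be sketched as follows. The element weights, scores, and responsibility claims here are invented for illustration; the real values come from the T tabs and the team's WDF.

```python
# Hypothetical section data: three project elements.
weights = [3, 2, 1]        # element importance (1-3 scale, from the W tab)
scores = [3.5, 3.0, 2.0]   # my rating of each element, out of 4

def cum(claims):
    """Weighted sum of a student's responsibility claims.

    claims: the student's claimed share of responsibility for each
    element (from the WDF). Each term multiplies how much of an
    element the student claims by how good, and how important,
    that element was.
    """
    return sum(c * w * s for c, w, s in zip(claims, weights, scores))

alice = cum([0.5, 0.2, 0.1])  # heavy on the most important element
bob = cum([0.1, 0.2, 0.5])    # heavy on the least important element
print(alice > bob)  # prints True
```

Alice and Bob claim the same total responsibility, but Alice's claims sit on a more important, better-executed element, so her accumulated rating is higher.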

The GRADE column uses the differences between students' accumulated (CUM) ratings to either increase or decrease the report grade for each team member. This column contains the actual grade out of 4 for each student on each team.
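The exact GRADE formula is not given here, but one plausible scheme consistent with the description (and with the note below that uniform maximal claims leave everyone with the report grade) scales the report grade by each student's CUM relative to the team mean. This is a hedged sketch, not the spreadsheet's actual formula:

```python
def grades(report_grade, cums):
    """One plausible per-student adjustment (assumed, not confirmed).

    Each student's grade is the report grade scaled by the ratio of
    their accumulated (CUM) rating to the team's mean CUM, capped at
    the maximum grade of 4. Students with identical CUM values all
    receive the report grade unchanged.
    """
    mean = sum(cums) / len(cums)
    return [min(4.0, report_grade * c / mean) for c in cums]

# A team whose members claimed identical responsibility all get
# the report grade; otherwise grades spread around it.
print(grades(3.0, [2.0, 2.0, 2.0]))  # prints [3.0, 3.0, 3.0]
```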

One feature that has not been accounted for so far is the variability in the sizes of different teams. Team sizes will vary; some teams may have four members, others may have as many as six. There are various reasons for this - none of them avoidable.

However, I do not expect some people to work more (or less) than others just because they are in a smaller (or larger) team.

This means that, ceteris paribus, a project done by a team of four will not generally be as complete/good as a project done by a team of six.

Clearly, this isn't fair. I needed a way to account for the unavoidable variability in team size that won't impose extra administrative work on either student or instructor.

The solution I have found is to perform a section-wide adjustment on project grades.

• The adjustment is performed on a section by section basis. Each section is assigned a different design project. Team performance on a particular project is coupled to the number of people on the teams. I do not want that coupling to risk distorting the grades unreasonably. The only way to do this is to perform adjustments on a per-section basis, which decouples team performance from the nature of the projects they are assigned.
• The adjustment is based on the notion that a team of four will likely do less work than a team of five.
• A simple linear scaling factor tends to give too much benefit to a small team, and too much penalty to a large team. I have found historically that scaling the impact of team size by 50% leads to far more consistent results. I check these numbers every year to ensure that the scaling remains fair.
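The adjustment above can be sketched roughly as follows. The actual spreadsheet formula is not published, so this is an assumed interpretation: a linear team-size factor (relative to the section's average team size) whose effect is damped by 50%, applied to the score normalized to [0-1].

```python
def adjusted_score(raw_out_of_4, team_size, section_avg_size):
    """Team-size-adjusted project score, normalized to [0, 1].

    Assumed interpretation of the section-wide adjustment: a full
    linear correction would scale the score by section_avg_size /
    team_size (boosting small teams, penalizing large ones); here
    only 50% of that correction is applied, then the result is
    capped at 1.0.
    """
    linear = section_avg_size / team_size  # full linear adjustment
    damped = 1 + 0.5 * (linear - 1)        # keep only half its effect
    return min(1.0, (raw_out_of_4 / 4) * damped)

# A team of average size is unaffected by the adjustment:
print(adjusted_score(3.0, 5, 5))  # prints 0.75
```

Under this sketch, a four-person team gets a modest boost and a six-person team a modest penalty, half of what a fully linear correction would impose.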

## Other notes

Some overall notes on this process:

• Since I assess the report as if one person wrote it, and all the calculations are automated, everyone is treated fairly.
• The PENALTIES item is used to penalize teams for various reasons. The typical one is lateness. Refer to the Course Outline for details on these penalties.
• The last three columns to the far right of the Sheet are only for program and accreditation purposes and have nothing to do with grades awarded to individual students.
• You can't beat the system. If a team gives itself all maximal scores of 4 in all features on the Final Workload Distribution form, then everyone ends up with the same score: the score given to the report as a whole.
• Signing the workload distribution form means each student agrees with it in its entirety. It is up to the student to come to me to discuss any problems arising from this.