The following table gives the performance grade for each team from the competition.
As we said from the start, the grades for performance in the competition are somewhat subjective. This is necessary because every rigid grading scheme we have been able to devise fails in particular cases (e.g., a scheme based on previous years' experience would have scored all but one team at 24/24 this year). That said, being engineers, we still wanted to quantify performance, so the following describes how we did it. The 'raw' results are available here.
We understand that there has been some concern that competition performance does not evaluate the design goals that were stated and discussed in the earlier reviews. This is a real challenge in courses such as this: the key educational outcome concerns design, but performance in a single short competition cannot effectively evaluate design. Quality of design would be reflected in such things as suitability for the purpose at hand, robustness over time and across varied applications, ease of enhancement, and adaptability to new situations. We recognize this, and it is exactly why competition performance is only one component of the total mark. There is also an explicit design component, which has been evaluated with your draft documents and will be evaluated again with the final documents. The mark assigned for design is not based purely on the documentation; it is an evaluation of the quality of your design in all the ways we can observe it, including observed performance in the demos and the competition.
Last modified: $Date: 2007-11-28 11:25:12 -0330 (Wed, 28 Nov 2007) $ ($Revision: 256 $) by $Author: dpeters $.