Architecture Review and Fall Semester Design Report

Early in the Concept Development stage, a concept is selected that is believed to meet the design needs. At this point the Concept Review is held to show why the concept is a good choice. However, selecting the concept is not the end of Concept Development. As described on page 67 of Product Development, Concept Development is only complete when a high-quality, verified system architecture has been defined.

The Architecture Review takes place near the end of Fall Semester. The purpose of the review is to answer the questions, “What is the architecture for the system and each of the subsystems, and how do the elements combine to meet the system requirements?” and “How have you demonstrated that the defined architecture is correct, and ensures that the final product will meet its requirements?” Your review will be scheduled to accommodate the team, the instructor, and the review coaches. In preparation for the Architecture Review, you will prepare a draft version of the Fall Semester Design Report. The final version of the report will be submitted after the review, according to the schedule on Learning Suite.

The system architecture at the end of Concept Development includes the following components.

  1. An accurate, unambiguous, transferable definition of the selected product concept, where enough detail is provided about the spatial and structural relationships of the principal subsystems that basic cost, size, weight, and feasibility estimates can be made. There should also be a transferable demonstration of how the concept's parts and its whole meet the opportunity.
  2. The decomposition of the system into major components or subsystems. See page 71 and page 78 of Product Development.
  3. The definition of the interfaces between the subsystems. See pages 71--72 and pages 78--79 of Product Development.
  4. Requirements clearly stated using requirements matrices or software requirements specifications for each of the subsystems, as appropriate for your project. The subsystem requirements should be derived from the performance measures of the system. See page 69 and pages 78--82 of Product Development.
  5. Target values for the system and each of the subsystems, as appropriate for your project. See pages 69--70 and page 128 of Product Development.

NOTE: If your project is simple enough that subsystems are not necessary, you do not need to include either the decomposition or the subsystem requirements matrices. However, you should explain why you believe no subsystems are necessary.

The primary purpose of the Fall Semester Design Report is to make this chosen system architecture transferable.

Fall Semester Design Report

The submission used for the Architecture Review is the Fall Semester Design Report. The purpose of the Fall Semester Design Report is to answer the questions, “What is the architecture for the system and each of the subsystems, and how do the elements combine to meet the system requirements?” and “How have you demonstrated that the defined architecture is correct, and ensures that the final product will meet its requirements?” The Fall Semester Design Report is a single PDF compilation containing the following:

  1. A Fall Semester Design Report title page
  2. A Table of Contents listing the Executive Summary and all of the included artifacts, but with no page numbers for the artifacts. The PDF compilation must have bookmarks for each of the Table of Contents entries. See Artifact Formatting for an easy way to make these bookmarks (a scripted approach is also sketched after this list). Individual artifacts should have page numbers. The page numbering of each artifact should start at 1.
  3. The Executive Summary. See the next section for more information.
  4. PDF copies of each of the Primary Artifacts, as listed below. The Primary Artifacts provide a high-level description of the chosen architecture and the state of the project. Primary artifacts need not contain all of the details needed to reproduce the work but should cite Supporting Artifacts that do contain these details.
  5. PDF copies of each of the Supporting Artifacts for the review, as listed under the review questions. Supporting Artifacts will be evaluated by the graders only when they have questions they would like to answer.
  6. All PDF files included in the package must be searchable for text, meaning they must not be scanned documents.
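
If you assemble the PDF compilation with a script, the bookmarks required in item 2 can be created while the artifact PDFs are merged. The sketch below is illustrative only: it assumes the third-party pypdf library, and the file names and bookmark titles are placeholders for your own artifacts, not required names. Follow the Artifact Formatting page for the recommended method.

  # Illustrative sketch (assumes the pypdf library is installed): merge the
  # artifact PDFs into one compilation and add a top-level bookmark (outline
  # entry) for each artifact. File names and titles are placeholders.
  from pypdf import PdfWriter

  artifacts = [
      ("Executive Summary", "executive_summary.pdf"),
      ("System Architecture Definition", "architecture_definition.pdf"),
      ("Requirements Matrix", "requirements_matrix.pdf"),
  ]

  writer = PdfWriter()
  for title, path in artifacts:
      # append() copies the pages and creates a bookmark pointing to the
      # first page of the appended file.
      writer.append(path, outline_item=title)

  with open("fall_semester_design_report.pdf", "wb") as f:
      writer.write(f)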

The artifacts in the Fall Semester Design Report should be thoughtfully arranged to facilitate rapid access and understanding.

In previous years a signature page was required for the report, but it is no longer needed. However, a signature page is still required for the Project Success Agreement, and the Project Success Agreement should be included in this report.

Executive Summary

The Executive Summary for the Fall Semester Design Report is a two-page top-level summary of the results of your design work. In addition to text, the summary may contain a few summary figures and/or tables that best show your work. The Executive Summary supplements, rather than replaces, the individual artifacts.

The Executive Summary is formatted as a report, not an artifact. It does not have a title block or list of referenced documents.

The primary audience for the Executive Summary is the sponsor's management team: not the liaison for your project, but the managers your liaison reports to. The Executive Summary should stand on its own, so that the management team can understand the progress on your project without reading any of the artifacts. However, it should be consistent with the artifacts and make it easy for management to know which artifacts to peruse for more information. Because it is so brief, you will need to be very concise and carefully consider what information is most important for your target audience to know.

The Primary Artifacts contain more detail than the Executive Summary. The principle you should follow is that the Summary summarizes the artifacts (which contain the full details).

The Executive Summary should summarize the following topics:

  • Introduction: Appropriate (brief) background of the project. Indicate why the project is important. Share your project objective statement. Indicate the Key Success Measures for the project.
  • Summary of Selected Architecture: Briefly describe the selected concept in words and with appropriate top-level pictures. Also describe the selected system decomposition. The description should not be a detailed definition. The detailed definition is found in the artifacts. The description should provide an overview of the concept that will make it easy for the reader to understand the details found in the artifacts. Refer to appropriate design artifacts as needed.
  • Summary of Architecture Justification: Explain why the selected architecture is appropriate. There are likely two elements to this justification:
    1. A summary of concept selection matrices and/or other decision methods that demonstrate you have considered a relatively large number of concepts and the selected concept is judged to be among the best at meeting the requirements. You will refer to one or more concept selection artifacts for details. You will also refer to the decomposition associated with the selected concept.
    2. A summary of testing, coupled with modeling, that allows you to predict how well the performance of the system you intend to design meets the requirements, especially the Key Success Measures. Testing will require you to create a system prototype at the best fidelity possible within available resources. You will refer to artifacts that define the prototype, the testing procedures, and the results. You will also refer to artifacts for the models used to predict performance.

Artifacts

The artifacts in your report should answer the questions, “What is the architecture for the system and each of the subsystems, and how do the elements combine to meet the system requirements?” and “How have you demonstrated that the defined architecture is correct, and ensures that the final product will meet its requirements?” The documents described below will help you to answer these questions.

Primary Artifacts

These artifacts should be higher-level documents that summarize and draw conclusions from the more detailed material in the supporting artifacts.

  • System architecture definition that describes the architecture of the system and any subsystems and defines the key technologies that will be used in these elements. It should also identify which parts of the architecture are off-the-shelf and which are custom designed.
  • Requirements matrix (or other requirement specification) for the system and for each defined subsystem. Each matrix should include target values for all performance measures.
  • Architecture selection (or justification) document that explains why the architecture was chosen. It should show that the architecture is expected to lead to a desirable design. Normally, this evolves from the Concept Selection Report prepared for the Concept Review, with information on subsystems and interfaces added.
  • Prototype testing and modeling reports whose results demonstrate that a final design based on your concept is likely to achieve the target values.
  • Approved Project Success Agreement. This agreement formalizes the understanding between the team, the sponsor, and the Capstone administration about how project success will be evaluated. The team, sponsor, pod instructor, and External Relations negotiate the agreement. The team submits the agreement to the External Relations Manager via the capstonereports@byu.edu email address, and the External Relations Manager obtains sponsor approval of the Project Success Agreement.
  • Updated Project Milestones Table

Supporting Artifacts

Supporting Artifacts may include the following, plus any other necessary artifacts (if the information is sufficiently covered in the Primary Artifacts, you may not need all of the supporting artifacts). See page 68 of Product Development for more information on these artifacts.

  • Concept screening and/or scoring matrices with justification for ratings
  • Justification of target values that are less than ideal due to necessary tradeoffs
  • Definitions of models and/or prototypes used in testing
  • Test procedures used to obtain the results listed above
  • Computer files used in the testing work
  • Geometric and other appropriate definition of the concept. This is the beginning of the System Design Package. The characteristics of a complete System Design Package are listed under the Supporting Artifacts for the Subsystem Engineering Review.
  • System decomposition.
  • Subsystem interface definitions.
  • Preliminary Bill of Materials. It is very important that the BOM contains information about long-lead-time items. It is in your best interest to order long-lead-time items as early in Fall semester as possible, so they can be shipped over the semester break. It is also helpful to identify the major custom-designed components of your design so you can get started ASAP on their design.
  • List of concepts considered, preferably including structure and/or organization showing how concepts are related
  • Evaluation of concept set Novelty, Variety, Quantity, and Quality

Evaluation Rubric

The Fall Semester Design Report will be reviewed for how clearly and completely it answers the questions above. The concept and tests presented in the artifacts will also be reviewed for their quality.

The rubrics used in the evaluation are shown below. There are different rubrics for hardware or mixed projects and for software projects. Discuss with your pod instructor which rubric you will use.

The scoresheets used by the evaluators are available on Box for hardware or mixed projects and for software projects.

Evaluation rubric for the Fall Semester Design Report

Each item is scored from 0 to 10: Excellent (9-10), Good (7-8), or Poor (0-6). Item weights are shown in parentheses.

Project Success (1.5)
  The reviewer's independent assessment of the expected product performance based on the Key Success Measures and the other performance measures in the PSA. The reviewer makes a best judgement of where he or she believes the team will end up. The team provides their assessment of the expected performance in the Executive Summary. Your assessment should range from 0 to 10.

Executive Summary – Completeness (0.5)
  • Excellent: Contains all of the following items, and they are thoughtfully and carefully presented: Title page, Introduction, Selected Architecture, Justification for Architecture Selection. Body of Summary stays within two-page length.
  • Good: Contains most of the required items. Some required items are missing or are of low quality.
  • Poor: There are multiple missing items, or the items are haphazardly prepared and/or presented. As presented, the Executive Summary appears to have little value to the team or the sponsor.

Executive Summary – Selected Architecture (1.a.i) (0.5)
  • Excellent: Briefly and clearly describes the selected concept with words and appropriate graphics. Refers to system architecture definition artifact for more detail. Makes it easy for the reader to understand the selected architecture. Reviewer is convinced that the architecture is of high quality.
  • Good: Architecture is described, but description is not clear or otherwise has some challenges to understand. Reviewer has some questions about the quality of the architecture.
  • Poor: Architecture is poorly described and is difficult or impossible to understand. Reviewer believes the selected architecture is of poor quality and will not meet the needs.

Executive Summary – Justification for Architecture Selection (2.a.i) (0.5)
  • Excellent: Summarizes the evidence presented in the Architecture Justification that shows the selected architecture is the best one considered. Summarizes quantitative prototype testing results that show how a design based on the selected architecture is expected to meet the product requirements. Has summary tables and/or figures copied from the primary artifacts as appropriate. Summary is believable. Summarized evidence is sufficient to justify the selected architecture. Refers to appropriate artifacts that have more details on the evidence.
  • Good: Summarizes evidence for architecture selection, but does not present summary evidence. Refers to artifacts for more information. Summary is plausible, but may have some holes in it.
  • Poor: Summary is weak on evidence and relies primarily on assertions. No data given. Either no references to artifacts, or artifacts referenced are inappropriate. Justification is not believable.

Primary Artifacts (reviewers evaluate all) – System Architecture Definition (1.a.i) (0.75)
  • Excellent: Uses both words and images to clearly define the system architecture. Describes the decomposition into principal components and subsystems, as well as interfaces and interface requirements. For physical products, allows basic cost, size, weight, and feasibility estimates. For software products, clearly shows overall system architecture, modules, and interfaces between modules. Allows evaluation relative to product requirements. Work is of high quality. Definition is under effective revision control.
  • Good: Uses primarily words or images to define the architecture, with some lack of clarity. Mentions the principal components and subsystems. Leaves some questions about interfaces and interface requirements. Somewhat allows basic cost, size, weight, and feasibility estimates for physical products. For software products, somewhat shows overall system architecture, modules, and interfaces between modules. Evaluation relative to product requirements may be hindered because of lack of detail in architecture. Work is of acceptable quality. Definition is under revision control.
  • Poor: Definition of architecture is unclear. Principal components and subsystems are poorly mentioned or missing. Interfaces and interface requirements are missing or poorly presented. Basic cost, size, weight, and feasibility estimates are not possible for physical products due to low level of detail in the definition. For software products, some elements of the overall system architecture, modules, and interfaces between modules are not shown. Evaluation relative to product requirements is not possible. Work is of poor quality. Revision control is missing or superficial.

Primary Artifacts – Architecture Justification (0.75)
  • Excellent: Clearly identifies the process used to select the final architecture (2.a.i). Shows excellent screening and/or scoring matrices (2.b.i). Ratings in matrices have a clearly-defined rationale and are believable (2.b.i). The advantages of the selected architecture are clearly shown. The reason for the selection is clearly explained beyond just the scores in the matrices. The selected architecture is clearly appropriate. The report is under effective revision control.
  • Good: Describes the process used to select the final architecture. Shows adequate screening and/or scoring matrices. Ratings in matrices appear to have a rationale and are somewhat believable. The advantages of the selected architecture can be figured out by the reader. The reason for the selection is not clearly explained beyond just the scores in the matrices. The selected architecture is probably not wrong. The report is under revision control.
  • Poor: Poorly describes the process used to select the final architecture. Has poor or missing screening and/or scoring matrices. Ratings in matrices are not believable. The advantages of the selected architecture are not apparent. The selection seems wrong or arbitrary. Revision control is missing or superficial.

Primary Artifacts – Requirements Matrices for System and Subsystems (1.a.ii, 2.a.ii) (0.5)
  • Excellent: Matrices contain Market Requirements, Performance Measures, Requirement-Measurement Relationships, Ideal Values, and Target Values. Meets all of the criteria for the requirements matrix information found in Table 4.1 or Table A.2 of the textbook. If used, the Software Requirements Specification document is professionally and thoughtfully prepared and thoroughly captures the requirements for the project. Matrix or SRS is under effective revision control. Artifact has been checked by someone other than the author. Artifact is included in vector or text (rather than bitmap) form so it can be zoomed up and read easily. All elements of specification are thoughtful and appear correct.
  • Good: Matrices or SRS are mostly well done, but fall short of the listed criteria in one or two areas. Some elements of the specification appear to be superficial. Commonly we would see lots of subjective or arbitrary-scale performance measures in this range; this shows the team has not really seriously thought about how to measure performance. One or two necessary requirements are missing. Revision control is superficial.
  • Poor: Matrices or SRS have some serious weaknesses that raise questions about their utility. In the opinion of the reviewer, the market requirements are a poor reflection of the actual market desires. In the opinion of the reviewer, the requirement specification is not sufficient to ensure successful project completion even if all of the listed specifications are met. Revision control is missing.

Primary Artifacts – Project Success Agreement (2.a.iii) (0.75)
  • Excellent: Scope and fidelity of final deliverable hardware and software are clearly defined. A high-quality Requirement Specification (or Requirements Matrices) is included. Key success measures are appropriately chosen to give a clear indication of the success of the product and to be trackable during the development process. Excellent, Good, and Fair performance values are consistent with target values from requirement specification. Selected stretch goals are appropriate. Thoughtfully presented. Approved by team, coach, instructor, and sponsor. Under effective revision control.
  • Good: Scope and fidelity are listed, but may be a little vague. Adequate requirement specification is included. Key success measures are mostly appropriate. There may be too many or too few. Key success measures mostly give indication of product success and are mostly trackable during development. Ratings of performance are mostly consistent with target values in requirements matrix. Approved by team, coach, instructor, external relations manager, and sponsor. Under revision control.
  • Poor: Scope and fidelity are not presented or are inappropriate for project. Key success measures are superficial or disconnected from the market. Measures are insufficient to measure attainment of market requirements. Ratings are unrelated to target values in requirements matrix. One or more approvals are missing. Revision control is missing or superficial.

Primary Artifacts – Project Milestones Table (2.a.iv) (0.75)
  • Excellent: Lists appropriate milestones for the project. Milestones include key project-specific milestones, not just Capstone-required milestones. Milestones show thoughtful work. Completion dates for each stage are reasonable. Approval Design Artifacts are clearly specific to and appropriate for the project. Table is under effective revision control.
  • Good: Table has all required elements, but some appear to be superficial or reflect careless thought. Some customization to the project, but mostly copied from examples. Under revision control.
  • Poor: Key elements are missing or incorrect. No evidence of thought about the specific project. Revision control is missing or superficial.

Primary Artifacts – Prototype Testing and Modelling Results (2.a.v) (0.75)
  • Excellent: Effective tests of prototypes and models have demonstrated that the concept is feasible and will lead to a product that meets the requirements. Test results are clearly and concisely written. Desirable and transferable test procedures, prototypes, and models are referenced to ensure that testing is reliable and repeatable. Test data is present. Conclusions are well-supported by test results. Artifacts are under effective revision control.
  • Good: Some testing of prototypes and models has occurred; the concept may be feasible. Test results are written. Test procedures, prototypes, and models are referred to, but these artifacts are not fully desirable or transferable. Some test data is present. Conclusions are consistent with test results. Artifacts are under revision control.
  • Poor: Testing of prototypes and models is missing or very inadequate. Test results are not supported with test data. Procedures, prototypes, and models are not referred to. Work could not be repeated. Revision control is missing or superficial.

Supporting Artifacts (reviewers spot check) – Completeness (0.75)
  • Excellent: All necessary supporting artifacts are included. There is nothing the reviewer felt should be included that is missing (this can include information covered in primary artifacts). For Concept Development, likely supporting artifacts include:
    • Geometric or other appropriate definition of the concept (1.b.i)
    • System decomposition (1.b.ii)
    • Subsystem interface definitions (1.b.iii)
    • Preliminary Bill of Materials (1.b.iv)
    • Structured list of concepts, with evaluation of Novelty, Variety, Quality, and Quantity (1.b.v, 1.b.vi)
    • Concept screening and/or scoring matrices, with justification for ratings (2.b.i)
    • Tradeoff studies for choosing target values that are not ideal (2.b.ii)
    • Definitions of models and/or prototypes (2.b.iii)
    • Test procedures used to obtain results in Primary Artifacts (2.b.iv)
    • Complete test results summarized in Primary Artifacts (1.b.v, 2.b.iv)
    • Computer files used in testing work (2.b.v)
  • Good: Most necessary artifacts are included. One or two things the reviewer felt should be included are missing.
  • Poor: The secondary artifacts are incomplete. There are not enough supporting artifacts to support approval.

Supporting Artifacts – Transferability (0.75)
  • Excellent: The supporting artifacts are clearly written, and allow the reader to replicate a test and/or use a prototype to repeat the work. The supporting artifacts have appropriate mechanics and are under effective revision control.
  • Good: The supporting artifacts are somewhat clear. They provide enough guidance to allow the reader to apply engineering judgment to replicate a test and/or use a prototype to repeat the work. The supporting artifacts have acceptable mechanics; they are under revision control.
  • Poor: Supporting artifacts are unclear and/or otherwise difficult to understand. They provide insufficient information to allow the reader to replicate a test and/or use a prototype to repeat the work. The artifact mechanics are unacceptable. Revision control is missing or superficial.

Supporting Artifacts – Quality (0.75)
  • Excellent: The supporting artifacts reflect thoughtful, appropriate engineering practice. The correct activities were performed and properly reported in order to support approval. The reviewer is confident in the information provided to support approval.
  • Good: The supporting artifacts reflect engineering practice. Some of the correct activities were performed and properly reported. There is some data supporting approval, but it is not thorough or complete. The reviewer may have some questions about the information provided to support approval.
  • Poor: The supporting artifacts reflect poor engineering practice. There is missing evidence for some expected activities. With the existing artifacts, the reviewer has insufficient information to make an informed judgment to approve stage completion.

Formatting (0.5)
  • Excellent: Complies with all formatting requirements in the Capstone Wiki and is carefully and thoughtfully formatted. Bookmarks and/or hyperlinks are provided to help the reader navigate the document. Free from grammar and spelling errors.
  • Good: Mostly complies with formatting requirements. One or two documents need to be fixed to comply with formatting requirements. Has a few grammar and/or spelling errors.
  • Poor: Shows little or no evidence of following formatting requirements. Haphazard formatting, difficult to read. Substantial grammar and/or spelling errors.

Rubric revision 1.5 made from ScoreSheetMaker revision 1.5

Evaluation rubric for the Fall Semester Design Report (Software Projects)

Each item is scored from 0 to 10: Excellent (9-10), Good (7-8), or Poor (0-6). Item weights are shown in parentheses.

Project Success (1.5)
  The reviewer's independent assessment of the expected product performance based on the Key Success Measures and the other performance measures in the PSA. The reviewer makes a best judgement of where he or she believes the team will end up. The team provides their assessment of the expected performance in the Executive Summary. Your assessment should range from 0 to 10.

Executive Summary – Completeness (0.5)
  • Excellent: Contains all of the following items, and they are thoughtfully and carefully presented: Title page, Introduction, Selected Architecture, Justification for Architecture Selection. Body of Summary stays within two-page length.
  • Good: Contains most of the required items. Some required items are missing or are of low quality.
  • Poor: There are multiple missing items, or the items are haphazardly prepared and/or presented. As presented, the Executive Summary appears to have little value to the team or the sponsor.

Executive Summary – Selected Architecture (0.5)
  • Excellent: Briefly and clearly describes the selected concept with words and appropriate graphics. Refers to system architecture definition artifact for more detail. Makes it easy for the reader to understand the selected architecture. Reviewer is convinced that the architecture is of high quality.
  • Good: Architecture is described, but description is not clear or otherwise has some challenges to understand. Reviewer has some questions about the quality of the architecture.
  • Poor: Architecture is poorly described and is difficult or impossible to understand. Reviewer believes the selected architecture is of poor quality and will not meet the needs.

Executive Summary – Justification for Architecture Selection (0.5)
  • Excellent: Summarizes the evidence presented in the Architecture Justification that shows the selected architecture is the best one considered. Code snippets and function outputs show how a design based on the selected architecture is expected to meet the project requirements. Summary is believable. Summarized evidence is sufficient to justify the selected architecture. Refers to appropriate artifacts that have more details on the evidence.
  • Good: Summarizes evidence for architecture selection but does not present summary evidence. Refers to artifacts for more information. Summary is plausible but may have some holes in it.
  • Poor: Summary is weak on evidence and relies primarily on assertions. No data given. Either no references to artifacts, or artifacts referenced are inappropriate. Justification is not believable.

Primary Artifacts (reviewers evaluate all) – System Architecture Definition (0.75)
  • Excellent: Uses both words and images to clearly define the system architecture. Defines the principal components and subsystems, as well as interfaces and interface requirements. Clearly shows overall system architecture, modules, and interfaces between modules. Allows evaluation relative to product requirements. Work is of high quality. Definition is under effective revision control.
  • Good: Uses primarily words or images to define the architecture, with some lack of clarity. Mentions the principal components and subsystems. Leaves some questions about interfaces and interface requirements. Somewhat shows overall system architecture, modules, and interfaces between modules. Evaluation relative to product requirements may be hindered because of lack of detail in architecture. Work is of acceptable quality. Definition is under revision control.
  • Poor: Definition of architecture is unclear. Principal components and subsystems are poorly mentioned or missing. Interfaces and interface requirements are missing or poorly presented. Some elements of the overall system architecture, modules, and interfaces between modules are not shown. Evaluation relative to product requirements is not possible. Work is of poor quality. Revision control is missing or superficial.

Primary Artifacts – Architecture Justification (0.75)
  • Excellent: Clearly identifies the process used to select the final architecture. The advantages of the selected architecture are clearly shown. The reason for the selection is clearly explained. The selected architecture is clearly appropriate. The report is under effective revision control.
  • Good: Describes the process used to select the final architecture. The advantages of the selected architecture can be figured out by the reader. The reason for the selection is not clearly explained. The selected architecture is probably not wrong. The report is under revision control.
  • Poor: Poorly describes the process used to select the final architecture. Has poor or missing justification. The advantages of the selected architecture are not apparent. The selection seems wrong or arbitrary. Revision control is missing or superficial.

Primary Artifacts – Software Requirements Specification (0.5)
  • Excellent: SRS is professionally and thoughtfully prepared and thoroughly captures the requirements for the project. All sections are complete. Significant additions and revisions have been made since Opportunity Development. Coding standard has been specified. User documentation that will be delivered has been specified. Consideration has been given to testability. SRS is under effective revision control. Artifact has been checked by someone other than the author. Artifact is included in vector or text (rather than bitmap) form so it can be zoomed up and read easily. All elements of specification are thoughtful and appear correct.
  • Good: SRS is mostly well done but falls short of the listed criteria in one or two areas. Some elements of the specification appear to be superficial or perfunctory. One or two necessary requirements are missing. Revision control is superficial.
  • Poor: SRS has some serious weaknesses that raise questions about its utility. In the opinion of the reviewer, the market requirements are a poor reflection of the actual market desires. In the opinion of the reviewer, the requirement specification is not sufficient to ensure successful project completion even if all of the listed specifications are met. Revision control is missing.

Primary Artifacts – Project Success Agreement (0.75)
  • Excellent: Scope and fidelity of final deliverable software are clearly defined. Key success measures are appropriately chosen to give a clear indication of the success of the product and to be trackable during the development process. Excellent, Good, and Fair performance values are consistent with SRS. Selected stretch goals are appropriate. Thoughtfully presented. Approved by team, coach, instructor, and sponsor. Under effective revision control.
  • Good: Scope and fidelity are listed but may be vague. Adequate requirement specification is included. Key success measures are mostly appropriate. There may be too many or too few. Key success measures mostly give indication of product success and are mostly trackable during development. Ratings of performance are mostly consistent with SRS. Approved by team, coach, instructor, external relations manager, and sponsor. Under revision control.
  • Poor: Scope and fidelity are not presented or are inappropriate for project. Key success measures are superficial or disconnected from the market. Measures are insufficient to measure attainment of market requirements. One or more approvals are missing. Revision control is missing or superficial.

Primary Artifacts – Project Milestones Table (0.75)
  • Excellent: Lists appropriate milestones for the project. Milestones include key project-specific milestones, not just Capstone-required milestones. Milestones show thoughtful work. Completion dates for each stage are reasonable. Approval Design Artifacts are clearly specific to and appropriate for the project. Table is under effective revision control.
  • Good: Table has all required elements, but some appear to be superficial or reflect careless thought. Some customizations to the project, but mostly copied from examples. Under revision control.
  • Poor: Key elements are missing or incorrect. No evidence of thought about the specific project. Revision control is missing or superficial.

Primary Artifacts – Prototype Testing and Modelling Results (0.75)
  • Excellent: Effective tests of prototypes and models have demonstrated that the concept is feasible and will lead to a product that meets the requirements. Test results are clearly and concisely written. Desirable and transferable test procedures, prototypes, and models are referenced to ensure that testing is reliable and repeatable. Test data is present. Conclusions are well-supported by test results. Artifacts are under effective revision control.
  • Good: Some testing of prototypes and models has occurred; the concept may be feasible. Test results are written. Test procedures, prototypes, and models are referenced, but these artifacts are not fully desirable or transferable. Some test data is present. Conclusions are consistent with test results. Artifacts are under revision control.
  • Poor: Testing of prototypes and models is missing or very inadequate. Test results are not supported with test data. Procedures, prototypes, and models are not referred to. Work could not be repeated. Revision control is missing or superficial.

Supporting Artifacts (reviewers spot check) – Completeness (0.75)
  • Excellent: All necessary supporting artifacts are included. There is nothing the reviewer felt should be included that is missing. Software projects will likely necessitate fewer supporting artifacts. For Concept Development, possible supporting artifacts include:
    • Summary of selection process
    • Information about various algorithms and functions that were considered during the concept selection process
    • Analysis of different functions or algorithms
    • Definitions of models and/or prototypes
    • Interface definition
    • Test procedures used to obtain results in Primary Artifacts
    • Test results reported in Primary Artifacts
    • Computer files used in testing work
  • Good: Most necessary artifacts are included. One or two things the reviewer felt should be included are missing.
  • Poor: The secondary artifacts are incomplete. There are not enough supporting artifacts to support approval.

Supporting Artifacts – Transferability (0.75)
  • Excellent: The supporting artifacts are clearly written, and allow the reader to replicate a test and/or use a prototype to repeat the work. The supporting artifacts have appropriate mechanics and are under effective revision control.
  • Good: The supporting artifacts are somewhat clear. They provide enough guidance to allow the reader to apply engineering judgment to replicate a test and/or use a prototype to repeat the work. The supporting artifacts have acceptable mechanics; they are under revision control.
  • Poor: Supporting artifacts are unclear and/or otherwise difficult to understand. They provide insufficient information to allow the reader to replicate a test and/or use a prototype to repeat the work. The artifact mechanics are unacceptable. Revision control is missing or superficial.

Supporting Artifacts – Quality (0.75)
  • Excellent: The supporting artifacts reflect thoughtful, appropriate engineering practice. The correct activities were performed and properly reported in order to support approval. The reviewer is confident in the information provided to support approval.
  • Good: The supporting artifacts reflect engineering practice. Some of the correct activities were performed and properly reported. There is some data supporting approval, but it is not thorough or complete. The reviewer may have some questions about the information provided to support approval.
  • Poor: The supporting artifacts reflect poor engineering practice. There is missing evidence for some expected activities. With the existing artifacts, the reviewer has insufficient information to make an informed judgment to approve stage completion.

Formatting (0.5)
  • Excellent: Complies with all formatting requirements in the Capstone Wiki and is carefully and thoughtfully formatted. Bookmarks and/or hyperlinks are provided to help the reader navigate the document. Free from grammar and spelling errors.
  • Good: Mostly complies with formatting requirements. One or two documents need to be fixed to comply with formatting requirements. Has a few grammar and/or spelling errors.
  • Poor: Shows little or no evidence of following formatting requirements. Haphazard formatting, difficult to read. Substantial grammar and/or spelling errors.

Rubric revision 0.3 made from ScoreSheetMaker revision 1.5

Review Mechanics

The mechanics of the Architecture Review (and Fall Semester Design Report submission) are as follows:

  1. No later than 10:00 AM three class days (not counting weekends or holidays) ahead of the scheduled review, the team submits the Fall Semester Design Report, as described above. The submission is done from the Capstone dashboard. The evaluating coaches and the pod instructor will receive an email when the submission is completed. If the submission is late, the evaluating coaches will not provide a grade for the initial submission.
  2. No later than 8:00 AM one class day (not counting weekends or holidays) ahead of the scheduled review, the two coaches who are assigned to evaluate the submission will complete their grading and written evaluation of the Fall Semester Design Report. These evaluations will be available for download from the Capstone website team summary page. If the initial submission was late, grades will not be available, and the written evaluations may not be available before the review.
  3. The team prepares an oral presentation no more than five minutes in length that includes the project objective statement and discusses why the team believes the architecture is ready for approval, including responses to the evaluator comments where appropriate.
  4. At the Architecture Review, the team will make their brief presentation and demonstrate prototypes used to test the performance of and/or demonstrate the feasibility of the system concept.
  5. The reviewers, the team, and the coach will discuss the answers to the questions as contained in the Fall Semester Design Report and the presentation for 18 minutes. Weaknesses in the answers or the report will be explored, and recommendations for improvement will be given.
  6. After consulting with the evaluation coaches, the instructor will take two minutes to communicate the results of the review to the team. Possible results are:
    • Stage approval is granted unconditionally. When approval is granted, the team can choose to allow the existing grades to stand and let the initial submission serve as the final submission. Or the team may choose to make a revised submission as described below.
    • Stage approval is conditionally granted, subject to the team making some required revisions. The team must clear the revisions with their pod instructor before approval is granted. The team presents the revised submission directly to the pod instructor for approval. After receiving approval, the team submits the revised submission on the Capstone website.
    • Stage approval is denied. The team will need to improve their work, schedule a new review, and return to step 1.
  7. When the team has received stage approval, they proceed to Subsystem Engineering beginning with the Winter Planning Review.
  8. If the approval was conditional, the team must submit a revised copy of the Fall Semester Design Report after the revisions are approved. The revised submission is due at 11:59 PM on the last day of classes for the semester. If no revised submission is received by the deadline, the team will receive a grade of zero for the stage.
  9. If approval was unconditional, the team may submit a revised copy of the Fall Semester Design Report to improve their grade. The revised submission is due at 11:59 PM on the last day of classes for the semester. If no revised submission is received by the deadline, the grade on the initial submission will be used.
  10. A revised submission must include the following:
    • A memo summarizing the changes to the Fall Semester Design Report, and indicating how these changes are responsive to the feedback received. This memo should not have the changed artifacts attached; it should just be a memo.
    • A final copy of the revised Fall Semester Design Report.
      We ask for these items to facilitate the work of those who must evaluate the resubmission.
      The revised submission will be graded by the same two coaches who graded the initial submission, and the grades will replace the initial grades.
  11. The average of the two coaches' scores will be the final grade for the stage. However, if the difference in the coaches' scores for the stage is greater than or equal to seven points, the instructor will also grade the Fall Semester Design Report, and the median of the three scores will then be recorded as the final grade.
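
The sketch below illustrates the grading rule in item 11. It is an informal example only (not an official grading tool), and the score values shown are made up for illustration.

  # Illustrative sketch of the final-grade rule described in item 11 above.
  # The scores and the instructor value here are made-up example numbers.
  from statistics import median

  def stage_grade(coach_a, coach_b, instructor=None):
      # If the coaches' scores differ by 7 or more points, the instructor
      # also grades and the median of the three scores becomes the grade.
      if abs(coach_a - coach_b) >= 7 and instructor is not None:
          return median([coach_a, coach_b, instructor])
      # Otherwise the grade is the average of the two coaches' scores.
      return (coach_a + coach_b) / 2

  print(stage_grade(90, 86))                 # difference of 4 -> average: 88.0
  print(stage_grade(95, 80, instructor=90))  # difference of 15 -> median: 90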

The final submission of the Fall Semester Design Report should explicitly reflect feedback given by the sponsor after the Fall Semester Design Presentation.

Formatting

The Fall Semester Design Report should be formatted according to the instructions in Artifact Formatting.

If your sponsor requires a bound printed report, follow the additional instructions in Formatting Printed Reports.

Delivering to your Sponsor

You should deliver the Fall Semester Design Report to your sponsor in PDF form and/or printed form, as desired by your sponsor. Ask them what they would like. If they would like a printed version, use your CAEDM group account to print it, and use the spiral binding system in the Capstone office to bind the report.

NOTE: No reimbursements will be given for printing or binding charges. You should use CAEDM and the Capstone office to accomplish these functions.
