No Validated Assessment Instrument Measures Engineering Ethics Competency as Distinct from Ethics Knowledge Recall
ABET requires all accredited engineering programs to demonstrate that graduates can "recognize ethical and professional responsibilities in engineering situations and make informed judgments." The mandate has existed since 2000, yet no validated instrument measures ethical reasoning competency — the ability to identify ethical dimensions of engineering decisions, analyze competing obligations, and make defensible judgments — as distinct from ethics knowledge recall (recognizing that codes of ethics exist, identifying which professional society governs a discipline). Current assessments are almost exclusively knowledge-based: multiple-choice questions about code provisions, and case-study analyses graded for "correct" identification of the ethical issue. These test whether students know the vocabulary of engineering ethics, not whether they can navigate ethical complexity in practice.
Engineering decisions with ethical dimensions — algorithmic bias, infrastructure life-safety tradeoffs, environmental justice, dual-use technology, informed consent in human subjects research — are becoming more frequent and more consequential. ABET's accreditation review accepts course-level evidence (syllabi, assignment samples) rather than outcome-level evidence of ethical reasoning ability, creating a compliance-without-competency pattern. Programs can satisfy ABET by offering an ethics module with a knowledge quiz, regardless of whether students develop genuine reasoning capacity. Analyses of the Boeing 737 MAX MCAS failure, the Volkswagen emissions scandal, and the Theranos fraud show engineers who did not lack knowledge of professional codes; they lacked the practiced capacity to recognize and act on ethical dimensions under organizational pressure.
The Defining Issues Test (DIT-2) is the most widely used instrument for measuring moral reasoning in engineering education, but it measures general moral development (Kohlberg stages), not domain-specific engineering ethics reasoning. The Engineering Ethical Reasoning Instrument (EERI) was developed as a domain-specific alternative but has been validated only in small samples and measures recognition of ethical issues, not reasoning quality. Rubric-based assessment of written ethical analyses can evaluate reasoning, but it requires trained raters, is expensive to scale, and typically achieves inter-rater reliability below 0.7, short of psychometric standards for consequential assessment. The fundamental challenge is that ethical reasoning is contextual and multi-dimensional and does not have "correct answers" the way technical problems do, which makes it resistant to the standardized assessment approaches engineering education relies on.
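The sub-0.7 figure refers to chance-corrected agreement statistics such as a weighted kappa or an intraclass correlation. As a minimal sketch of how such a check might be run on pilot rubric data, here is quadratic-weighted Cohen's kappa for two raters on an ordinal scale; the function, the 1-4 rubric, and the rater data are illustrative assumptions, not taken from any cited instrument.

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, categories):
    """Quadratic-weighted Cohen's kappa for two raters on an ordinal rubric.

    Returns 1.0 for perfect agreement and 0.0 for chance-level agreement;
    disagreements are penalized by the squared distance between scores.
    """
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)
    # Observed joint distribution of the two raters' scores.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[idx[a]][idx[b]] += 1.0 / n
    # Expected joint distribution under independence (product of marginals).
    pa, pb = Counter(rater_a), Counter(rater_b)
    exp = [[(pa[categories[i]] / n) * (pb[categories[j]] / n)
            for j in range(k)] for i in range(k)]
    # Quadratic disagreement weights: 0 on the diagonal, 1 in the corners.
    w = [[(i - j) ** 2 / (k - 1) ** 2 for j in range(k)] for i in range(k)]
    observed = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w[i][j] * exp[i][j] for i in range(k) for j in range(k))
    return 1.0 - observed / expected

# Hypothetical scores from two trained raters on a 1-4 reasoning rubric.
rater_a = [3, 2, 4, 1, 3, 2, 4, 3, 1, 2]
rater_b = [3, 3, 4, 2, 2, 2, 3, 3, 1, 1]
print(quadratic_weighted_kappa(rater_a, rater_b, categories=[1, 2, 3, 4]))
```

For more than two raters, or for continuous rubric scores, an intraclass correlation is the usual substitute; the 0.7 threshold applies either way.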
What is needed is a scalable assessment that presents realistic engineering scenarios requiring ethical reasoning (not just recognition) and scores responses for reasoning quality rather than conclusion correctness. Promising approaches include: (1) constructed-response items with AI-assisted scoring calibrated against expert rater judgments; (2) situational judgment tests (SJTs) adapted from medical education's extensive SJT research, presenting ethical dilemmas with ranked response options scored against expert consensus (one common scoring rule is sketched below); (3) behavioral observation in team-based design projects, using structured rubrics to assess how students handle ethical dimensions that emerge naturally in engineering design work. The key lesson from medical education's experience with clinical reasoning assessment is that scenario-based performance assessment is valid but requires large item banks and sophisticated scoring: the development cost is high, but the instrument serves an entire profession.
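To make option (2) concrete: one common SJT scoring rule from the medical-education literature is consensus proximity, where the expert panel's mean effectiveness rating for each response option becomes the scoring key, and an examinee's item score decreases with distance from that consensus. A minimal sketch under that assumption; the scenario, function name, and all ratings below are hypothetical.

```python
def sjt_consensus_score(examinee_ratings, expert_means, scale=(1, 5)):
    """Consensus-proximity score for one rate-each-option SJT item.

    Returns a value in [0, 1]: 1.0 means the examinee matched the expert
    panel's mean effectiveness rating on every option; 0.0 means maximal
    disagreement on every option.
    """
    lo, hi = scale
    span = hi - lo
    deviations = [abs(e - x) / span
                  for e, x in zip(expert_means, examinee_ratings)]
    return 1.0 - sum(deviations) / len(deviations)

# Hypothetical item: a junior engineer discovers a test report was edited
# after sign-off. Four response options, each rated 1 (very ineffective)
# to 5 (very effective) by an expert panel and by one examinee.
expert_means = [4.6, 1.8, 3.2, 2.1]   # panel consensus per option
examinee = [5, 2, 2, 2]               # one student's ratings
print(f"item score: {sjt_consensus_score(examinee, expert_means):.2f}")
```

Proximity scoring sidesteps the "no correct answer" problem: it rewards agreement with expert judgment across all options rather than selection of a single keyed response.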
A team could develop and pilot-test a situational judgment test for engineering ethics: creating scenarios from real engineering failures (anonymized), recruiting practicing engineers to establish expert consensus on response quality, and administering the instrument to engineering students to test item discrimination (a standard discrimination check is sketched below). Alternatively, a team could design a team-based design project with embedded ethical dimensions (e.g., specifying a product for a market where safety standards are lower) and develop a behavioral rubric to assess how teams navigate them. Relevant disciplines: psychometrics, engineering education, moral psychology, assessment design.
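In the pilot, "test discrimination" would typically mean checking that each scenario separates students who reason well overall from those who do not; the standard statistic is the corrected item-total correlation. A sketch assuming one score per student per scenario (Python 3.10+ for `statistics.correlation`); the data and the ~0.2 flagging threshold are illustrative conventions, not study results.

```python
import statistics

def corrected_item_total(score_matrix, item_index):
    """Discrimination statistic for one scenario in a pilot administration.

    score_matrix: one row per student, one column per scenario score.
    Correlates the target scenario's scores with each student's total on
    the remaining scenarios (the "corrected" total, so the item is not
    correlated with itself). By rough psychometric convention, values
    below ~0.2 flag a scenario for revision or removal.
    """
    item_scores = [row[item_index] for row in score_matrix]
    rest_totals = [sum(row) - row[item_index] for row in score_matrix]
    return statistics.correlation(item_scores, rest_totals)  # Pearson r

# Hypothetical pilot: 6 students x 4 scenarios, each scored 0-1.
pilot = [
    [0.9, 0.8, 0.7, 0.4],
    [0.8, 0.9, 0.6, 0.5],
    [0.5, 0.4, 0.5, 0.6],
    [0.3, 0.4, 0.2, 0.5],
    [0.7, 0.6, 0.6, 0.4],
    [0.2, 0.3, 0.3, 0.6],
]
for i in range(4):
    print(f"scenario {i}: discrimination = {corrected_item_total(pilot, i):.2f}")
```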
The "not-attempted" tag reflects that the assessment development effort required (large-scale item development, expert consensus panels, longitudinal validation) has never been funded for engineering ethics — unlike medical education, which has invested heavily in clinical reasoning assessment. The "ignored-context" tag reflects that ethics assessment imports generic educational testing approaches (multiple choice, short answer) that are structurally unable to capture the contextual, multi-stakeholder reasoning that defines ethical competency. Related briefs: education-curriculum-assessment-misalignment (same pattern: assessment doesn't measure what curriculum targets), education-stem-faculty-ebip-adoption-gap (institutional barriers to educational improvement).
ABET Criterion 3.4, "An ability to recognize ethical and professional responsibilities in engineering situations and make informed judgments, which must consider the impact of engineering solutions in global, economic, environmental, and societal contexts," 2019.
Hess & Fore, "A Systematic Literature Review of US Engineering Ethics Interventions," *Science and Engineering Ethics*, 2018.
Borenstein et al., "Assessing Ethical Reasoning in Engineering," *Journal of Engineering Education*, 2010.
Accessed 2026-02-25.