Difficulty: Medium
Correct Answer: means of measuring system-development time spent on the project
Explanation:
Introduction / Context:
DSS evaluation focuses on whether the system improves decision quality, usability, responsiveness, and stakeholder satisfaction. While project management metrics (like time spent) are useful for delivery oversight, they are not central to evaluating the effectiveness of the DSS itself. This question distinguishes product evaluation from project administration.
Given Data / Assumptions:
The question asks which item is not a key element of DSS evaluation. The options are evaluation criteria, monitoring, a formal review process, a means of measuring system-development time spent on the project, and "all of the above."
Concept / Approach:
An effective evaluation framework specifies criteria (what “good” looks like), monitoring (how progress toward those criteria is tracked), and a formal review process (how findings drive decisions). Measuring development time is relevant to schedule and cost management, but it neither proves nor disproves the DSS’s utility in supporting decisions.
Step-by-Step Solution:
1. Identify what a DSS evaluation framework requires: explicit criteria, monitoring of performance against those criteria, and a formal review process.
2. Check each option against that list: criteria, monitoring, and formal review all belong, so none of them can be the item that is "not key."
3. Measuring system-development time is a project-management metric for schedule and cost control; it says nothing about whether the DSS actually supports better decisions.
4. Therefore the development-time measure is the element that is not key, and "all of the above" cannot be correct.
Verification / Alternative check:
Best-practice frameworks (usability testing, KPI impact, A/B pilots) emphasize outcomes and user adoption, not engineering hours, when judging DSS success.
Why Other Options Are Wrong:
Criteria, monitoring, and formal reviews are core to systematic evaluation, so none of them can be the element that is "not key."
"All of the above" is incorrect because one option (the development-time measure) is not key to evaluation proper.
Common Pitfalls:
Equating on-time delivery with effectiveness; a timely but unhelpful DSS still fails users.
Final Answer:
means of measuring system-development time spent on the project