Computer Science Principles
Plagiarism Usability Test Plan (Sample)
Plagiarism Prevention E-Learning Module: Usability Testing Plan
A. Overview of E-Learning Solution
Module Name: “Academic Integrity Mastery Quest: Plagiarism Prevention & APA Citation Excellence”
Description: An interactive, quest-based e-learning module designed to teach students academic integrity principles through six progressive challenges (C1-C6). The module combines case studies, interactive demonstrations, hands-on practice, and AI-powered analysis tools to create an engaging learning experience culminating in a mastery certificate.
Target Audience: High school and college students who need to develop academic writing and citation skills.
B. Purpose of Testing
Primary Purpose
Evaluate the effectiveness, usability, and engagement level of the plagiarism prevention module before full deployment to ensure it meets educational objectives and provides an optimal user experience.
Potential Benefits
- Educational Effectiveness: Validate that the quest-based approach successfully teaches plagiarism prevention and APA citation skills
- User Experience Optimization: Identify navigation issues, content clarity problems, and engagement barriers
- Content Validation: Ensure case studies, examples, and interactive elements resonate with the target audience
- Technical Performance: Test functionality of interactive tools, AI analysis features, and assessment mechanisms
- Engagement Assessment: Determine if the gamified quest format maintains student interest throughout all six modules
C. Parts of E-Learning Solution to be Tested
Core Modules (Primary Focus)
- C1: Plagiarism Case Studies - Real-world consequences and impact scenarios
- C2: APA Reference Instruction - Interactive citation tools and Salem’s dilemma
- C3: Reference Correction Practice - Error identification and correction exercises
- C4: Plagiarism Avoidance Workshop - AI-powered writing analysis tools
Supporting Elements
- Navigation System - Module progression, menu usability, breadcrumb navigation
- Video Integration - Video playback, quality, and educational value
- Interactive Tools - APA citation generator, plagiarism checker interface
- Visual Design - Image clarity, layout effectiveness, responsive design
- Progress Tracking - Level progression indicators, completion status
Advanced Features (Secondary Focus)
- C5: Instructor Assessment - Portfolio review interface
- C6: Mastery Certificate - Certificate generation and feedback system
D. Test Objectives
Learning Objectives to Validate
- Comprehension Validation: Students can identify plagiarism scenarios after completing C1
- Skill Application: Students can create proper APA citations using C2 tools
- Error Recognition: Students can spot and correct citation errors through C3 practice
- Prevention Mastery: Students can use AI tools effectively to avoid plagiarism via C4
Usability Assumptions to Test
- Navigation Intuitiveness: Students can progress through modules without confusion
- Content Clarity: Instructions and examples are clear and actionable
- Engagement Sustainability: Quest format maintains interest across all modules
- Tool Accessibility: Interactive elements work across different devices and skill levels
Technical Assumptions to Validate
- Performance: Modules load quickly and function smoothly
- Compatibility: Content displays correctly on various devices/browsers
- Interactivity: All interactive elements respond appropriately to user input
E. Usability Metrics to be Gathered
Quantitative Metrics
- Task Completion Rate: Percentage of students completing each module successfully
- Time on Task: Average time spent on each module and specific activities
- Error Rate: Number of mistakes made during interactive exercises
- Navigation Efficiency: Number of clicks/steps required to complete tasks
- Drop-off Rate: Percentage of students discontinuing at each module
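A minimal sketch, in Python, of how the metrics above might be computed from per-participant session logs. The log format and field names are hypothetical; adapt them to whatever the screen-recording software or LMS export actually provides:

```python
from statistics import mean

# Hypothetical session log: one record per participant per module.
# Field names are placeholders; match them to your actual export.
sessions = [
    {"participant": "P01", "module": "C1", "completed": True,
     "minutes": 14.5, "errors": 1, "clicks": 22},
    {"participant": "P02", "module": "C1", "completed": False,
     "minutes": 9.0, "errors": 3, "clicks": 31},
    # ... one record per participant per module
]

def module_report(records, module):
    rows = [r for r in records if r["module"] == module]
    completed = [r for r in rows if r["completed"]]
    return {
        "completion_rate": len(completed) / len(rows),    # Task Completion Rate
        "avg_minutes": mean(r["minutes"] for r in rows),  # Time on Task
        "avg_errors": mean(r["errors"] for r in rows),    # Error Rate
        "avg_clicks": mean(r["clicks"] for r in rows),    # Navigation Efficiency
        # Drop-off approximated here as non-completion of the module
        "drop_off_rate": 1 - len(completed) / len(rows),
    }

print(module_report(sessions, "C1"))
```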
Qualitative and Self-Report Metrics
- User Satisfaction Scores: Post-module rating scales (1-10) for usefulness and engagement
- Perceived Learning Value: Self-reported confidence in applying learned skills
- Content Clarity Ratings: Understanding scores for instructions and examples
- Visual Appeal Assessment: Ratings for design, images, and overall presentation
- Engagement Feedback: Specific comments on what maintains or loses interest
Behavioral Metrics
- Help-Seeking Frequency: How often students request assistance or clarification
- Replay/Revisit Patterns: Which content students return to review
- Interactive Tool Usage: How students engage with citation generators and analysis tools
F. Usability Performance Goals
Completion and Accuracy Goals
- Primary Goal: 85% of participants complete C1-C4 modules within the testing session
- Accuracy Target: 80% success rate on C3 correction exercises without hints
- Efficiency Goal: Average completion time of 15-20 minutes per module
User Satisfaction Targets
- Overall Satisfaction: Average rating of 7.5/10 or higher for module usefulness
- Engagement Score: Average rating of 7/10 or higher for “interesting/engaging” content
- Clarity Rating: 90% of participants rate instructions as “clear” or “very clear”
Technical Performance Standards
- Load Time: All modules and interactive tools load within 3 seconds (see the probe sketch after this list)
- Error-Free Operation: 95% of interactive elements function without technical issues
- Cross-Platform Compatibility: Consistent experience across desktop, tablet, and mobile devices
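A rough load-time probe, as a minimal sketch. HTTP response time is only a proxy for perceived load time (it ignores browser rendering), so confirm results with a browser-based tool such as Lighthouse; the URLs below are placeholders:

```python
import time
import requests  # pip install requests

# Placeholder addresses; substitute the real module URLs.
MODULE_URLS = {
    "C1": "https://example.edu/quest/c1",
    "C2": "https://example.edu/quest/c2",
}

for name, url in MODULE_URLS.items():
    start = time.perf_counter()
    response = requests.get(url, timeout=10)
    elapsed = time.perf_counter() - start
    verdict = "PASS" if response.ok and elapsed <= 3.0 else "FAIL"
    print(f"{name}: {elapsed:.2f}s, HTTP {response.status_code} -> {verdict}")
```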
Learning Outcome Benchmarks
- Knowledge Retention: 75% accuracy on post-module quiz questions
- Skill Transfer: 80% of participants can correctly cite a new source using learned APA format
- Confidence Increase: Average self-reported confidence gain of 3+ points on a 10-point scale
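Because Section K will compare results against these targets, it helps to encode them in a form that can be checked mechanically. A minimal sketch, with placeholder observed values standing in for the metrics gathered under Section E:

```python
# (label, target, observed, higher_is_better); observed values are
# placeholders to be replaced with the Section E results.
GOALS = [
    ("C1-C4 completion rate", 0.85, 0.90, True),
    ("C3 accuracy without hints", 0.80, 0.72, True),
    ("Avg module time (minutes)", 20.0, 17.5, False),
    ("Overall satisfaction (/10)", 7.5, 7.8, True),
    ("Load time (seconds)", 3.0, 2.4, False),
]

for label, target, observed, higher_is_better in GOALS:
    met = observed >= target if higher_is_better else observed <= target
    print(f"{'MET ' if met else 'MISS'} {label}: observed {observed}, target {target}")
```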
G. Usability Tasks and Scenarios
Pre-Test Scenario Setup
Context: “You’re a student working on a research paper for your English class. You want to learn about academic integrity and proper citation to avoid plagiarism. You’ve heard that this quest-based module can help you become an expert in these skills.”
Module-Specific Tasks
C1: Case Studies Tasks
- Scenario: “Review the plagiarism case studies and identify the consequences in each scenario”
- Task: Navigate through case studies, watch embedded video, and complete the reflection activity
- Success Criteria: Correctly identify at least 3 of the 4 consequences and provide a meaningful reflection
C2: APA Reference Tasks
- Scenario: “You need to cite a book and a website for your research paper”
- Task: Use the interactive APA citation tool to create proper citations for provided sources
- Success Criteria: Generate correctly formatted citations within 10 minutes
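For facilitators scoring this task, correctly formatted APA (7th edition) citations follow these general patterns; note that titles are italicized in an actual paper, and the authors, dates, and URLs below are placeholders:
- Book: Author, A. A. (2020). Title of the book: Subtitle if any. Publisher Name.
- Website: Author, A. A. (2021, March 4). Title of the page. Site Name. https://example.com/page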
C3: Correction Practice Tasks
- Scenario: “Your classmate has shared their reference list, but you notice some errors”
- Task: Identify and correct 5 citation errors using the practice tool
- Success Criteria: Find and fix at least 4 of the 5 errors, with explanations
C4: Workshop Tasks
- Scenario: “You’ve written a paragraph for your paper and want to check for potential plagiarism”
- Task: Submit text to AI analysis tool and implement suggested improvements
- Success Criteria: Successfully use the tool and demonstrate understanding of the feedback provided
Cross-Module Navigation Tasks
- Quest Progress: “Check your overall progress and navigate back to review C2 content”
- Help Access: “Find and use the help or support feature when stuck”
- Mobile Transition: “Complete one module on desktop, then continue on mobile device”
H. Test Methodology
Test Type Components
- Format: In-person testing with remote follow-up survey
- Approach: Exploratory usability testing with assessment elements
- Moderation: Moderated sessions with think-aloud protocol
Participant Components
Number of Participants
- Primary Group: 12-15 students (three groups of 4-5 students each)
- Rationale: Sufficient for identifying major usability issues while managing logistics
Eligibility Requirements
- Currently enrolled high school or early college students (ages 16-20)
- Basic computer/internet skills (can navigate websites independently)
- Currently or recently enrolled in classes requiring research papers
- No prior experience with formal APA citation training
Participant Qualifications
- Academic Level: Mix of high school juniors/seniors and college freshmen
- Technology Comfort: Comfortable using computers, tablets, and smartphones
- Writing Experience: Have written at least one research-based assignment
- Motivation: Expressed interest in improving academic writing skills
Required Skills
- Basic Digital Literacy: Can navigate websites, click links, fill forms
- Reading Comprehension: Can understand high school level instructional text
- Task Focus: Able to concentrate on activities for 60-90 minutes
- Communication: Comfortable expressing thoughts and feedback aloud
Participant Training
Pre-Test Briefing (10 minutes):
- Explanation of think-aloud protocol: “Say what you’re thinking as you work”
- Clarification that we’re testing the system, not the participant
- Overview of the quest concept and module structure
- Practice think-aloud with a simple warm-up task (navigating a sample webpage)
- Permission to ask questions and request help during testing
- Consent for recording (screen capture and audio for analysis)
Test Procedures
Test Setting
- Location: Computer lab with individual workstations
- Equipment: Computers with reliable internet, screen recording software, audio recording
- Environment: Quiet space with minimal distractions
- Duration: 100-125 minutes per session, including briefing and debrief
Participant Steps
- Arrival & Setup (10 min): Check-in, consent forms, equipment testing
- Training & Briefing (10 min): Think-aloud training and context setting
- Module Testing (60-80 min): Complete C1-C4 with think-aloud commentary
- Post-Test Survey (10 min): Complete satisfaction and feedback questionnaire
- Debrief Interview (10-15 min): Open discussion about experience and suggestions
Observer Steps
- Pre-Session: Set up recording equipment, prepare observation sheets
- During Session: Take notes on user behavior, errors, and comments
- Minimal Intervention: Only assist if participant is completely stuck
- Post-Session: Review recordings and compile initial observations
I. Roles of Test Team Members
Primary Facilitator (You - Instructor)
- Responsibilities: Lead participant briefing, guide think-aloud process, ask follow-up questions
- Focus Areas: Educational content effectiveness, learning objective achievement
- Skills Required: Experience with educational assessment, familiarity with module content
UX Observer/Notetaker (Student Assistant or Colleague)
- Responsibilities: Document user behavior, navigation patterns, error occurrences
- Focus Areas: Interface usability, technical issues, user confusion points
- Tools: Observation checklist, timing sheets, behavior coding forms
Technical Monitor (Student or IT Support)
- Responsibilities: Manage recording equipment, troubleshoot technical issues
- Focus Areas: System performance, browser compatibility, interactive tool functionality
- Backup Role: Secondary notetaker for user comments and reactions
Subject Matter Expert (Optional - Education Technology Colleague)
- Responsibilities: Evaluate pedagogical effectiveness, content accuracy
- Focus Areas: Learning design principles, assessment validity, educational value
- Contribution: Post-test analysis of learning outcomes and instructional design
J. Problem Ranking and Reporting Framework
Severity Scale (4-Point System)
Level 4: Critical (Red Flag)
- Task Impact: Prevents task completion entirely
- Typical Frequency: Affects 75%+ of users
- Examples: Module won’t load, interactive tools non-functional, content completely unclear
- Response: Immediate fix required before any deployment
Level 3: Major (High Priority)
- Task Impact: Significantly delays task completion or causes multiple errors
- Typical Frequency: Affects 50-74% of users
- Examples: Confusing navigation, unclear instructions, broken citation tool features
- Response: Must be addressed before wide release
Level 2: Minor (Medium Priority)
- Task Impact: Causes brief confusion or minor delays
- Typical Frequency: Affects 25-49% of users
- Examples: Small design inconsistencies, minor text clarity issues, slow loading
- Response: Address in next iteration cycle
Level 1: Cosmetic (Low Priority)
- Task Impact: Minimal impact on task completion
- Typical Frequency: Affects <25% of users
- Examples: Color preferences, minor visual alignments, optional feature requests
- Response: Consider for future enhancements
Frequency Documentation
- Universal (4): Occurs for all or nearly all participants
- Common (3): Occurs for majority of participants
- Occasional (2): Occurs for some participants
- Rare (1): Occurs for few participants
Impact Assessment Matrix
Problems will be ranked using: Severity × Frequency = Priority Score
- Scores 12-16: Immediate action required
- Scores 8-11: High priority for next sprint
- Scores 4-7: Medium priority for future iterations
- Scores 1-3: Low priority, monitor for patterns
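A minimal sketch of this scoring rule in Python; the buckets mirror the ranges above, and the issue entries are illustrative placeholders:

```python
def priority(severity: int, frequency: int) -> tuple[int, str]:
    """Combine 1-4 severity and 1-4 frequency ratings into a priority bucket."""
    score = severity * frequency
    if score >= 12:
        return score, "Immediate action required"
    if score >= 8:
        return score, "High priority for next sprint"
    if score >= 4:
        return score, "Medium priority for future iterations"
    return score, "Low priority, monitor for patterns"

# Illustrative placeholder issues: (description, severity, frequency)
issues = [
    ("C2 citation tool button unresponsive", 4, 3),
    ("Breadcrumb labels ambiguous in C3", 2, 2),
]
for name, sev, freq in issues:
    score, action = priority(sev, freq)
    print(f"{name}: score {score} -> {action}")
```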
K. Post-Test Process Framework
Reporting and Describing Results
Quantitative Results Report
- Completion Metrics: Module completion rates, task success percentages, time on task averages
- Error Analysis: Types and frequency of errors, help-seeking patterns
- Performance Benchmarks: Comparison against established goals from Section F
- Statistical Summary: Means, medians, standard deviations for key metrics
Qualitative Results Documentation
- User Feedback Synthesis: Common themes from comments and suggestions
- Observational Insights: Patterns in user behavior, confusion points, engagement levels
- Quote Documentation: Representative participant quotes for each major finding
- Video Analysis: Key moments showing user struggles or successes
Evaluating Metrics and Goals
Goal Achievement Assessment
- Met Goals: Areas where performance targets were achieved or exceeded
- Missed Targets: Specific metrics that fell short of established benchmarks
- Gap Analysis: Quantify differences between actual and target performance
- Context Factors: External factors that may have influenced results
Metric Reliability Review
- Data Quality: Assess completeness and accuracy of collected metrics
- Measurement Validity: Evaluate whether metrics truly captured intended outcomes
- Baseline Establishment: Use results to set realistic benchmarks for future testing
Discussing Subjective Findings
Participant Satisfaction Analysis
- Overall Experience: Aggregate satisfaction scores and sentiment analysis
- Module-Specific Feedback: Strengths and weaknesses identified for each C1-C4 module
- Engagement Patterns: What kept students interested vs. what caused disengagement
- Learning Perception: Students’ self-reported confidence and skill development
Observational Insights
- Behavior Patterns: Common user strategies, workarounds, and adaptation methods
- Emotional Responses: Frustration points, excitement moments, confusion reactions
- Unexpected Discoveries: Surprising user behaviors or alternative use patterns
Making Recommendations for Addressing Problems
Immediate Actions (Critical Issues)
- Quick Fixes: Simple changes that can be implemented immediately
- Content Revisions: Text clarifications, instruction improvements, example updates
- Technical Repairs: Bug fixes, performance optimizations, compatibility issues
Short-Term Improvements (Major Issues)
- Design Iterations: Interface redesigns, navigation improvements, visual enhancements
- Content Restructuring: Module reordering, activity modifications, assessment updates
- Feature Additions: New interactive elements, help systems, progress indicators
Long-Term Enhancements (Minor/Future Considerations)
- Advanced Features: AI tool improvements, personalization options, adaptive learning
- Scalability Planning: Multi-language support, accessibility enhancements, mobile optimization
- Integration Opportunities: LMS compatibility, gradebook connections, analytics dashboard
Implementation Priority Matrix
Each recommendation will include:
- Effort Required: Time and resources needed for implementation
- Impact Potential: Expected improvement in user experience and learning outcomes
- Risk Assessment: Potential negative consequences of changes
- Success Metrics: How improvements will be measured in future testing
Next Steps for Implementation
- Recruit Participants: Identify and schedule 12-15 student testers from your current classes
- Prepare Test Environment: Set up recording equipment and testing space
- Create Assessment Materials: Develop observation sheets, post-test surveys, and debrief questions
- Conduct Pilot Test: Run through the process with 1-2 students to refine procedures
- Execute Full Testing: Complete all testing sessions within a 1-2 week window
- Analyze and Report: Compile findings and create actionable improvement plan
This testing plan will provide actionable insight into both the educational effectiveness and the usability of your plagiarism prevention quest, helping ensure it serves your students' learning needs while sustaining their engagement.