GROOT FORCE - Test Cases: Software & Firmware
Document Version: 1.0
Date: November 2025
Status: Production Ready
Classification: Internal - QA & Engineering
Document Control
| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | Nov 2025 | QA Team | Initial software/firmware test cases |
Approval:
- QA Lead: _________________ Date: _______
- Software Architect: _________________ Date: _______
- Firmware Lead: _________________ Date: _______
- AI/ML Lead: _________________ Date: _______
Table of Contents
- KLYRA OS Test Cases
- AI Runtime Test Cases
- Sensor Firmware Test Cases
- Display System Test Cases
- Audio Pipeline Test Cases
- Power Management Firmware Test Cases
- Connectivity Test Cases
- Security & Privacy Test Cases
- OTA Update Test Cases
- Performance & Stability Test Cases
Test Overview
Total Test Cases: 36 comprehensive procedures
Priority Distribution:
- P0 (Critical): 28 test cases
- P1 (High): 8 test cases
- P2 (Medium): 0 test cases
Test Environment:
- GROOT FORCE device (all variants)
- Test automation framework
- Debug console access
- Companion app (Android/iOS)
- Backend test environment
- Network simulation tools
Traceability: All test cases trace to:
- System Requirements (REQ-SW-XXX)
- Functional Requirements (FRD-XX-XXX)
- Architecture specifications
1. KLYRA OS Test Cases
TC-OS-001: System Boot Sequence
Priority: P0
Category: OS Core
Requirement Trace: REQ-SW-001, REQ-SW-010
Automation: Automated
Objective:
Verify KLYRA OS boots correctly with all system services initialized.
Prerequisites:
- Fully charged device
- Factory or known-good firmware
- No active user session
Test Equipment:
- GROOT FORCE device
- ADB debug connection
- Serial console access
- Power monitor
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Power on device from off state | Boot logo appears within 2 seconds | ☐ |
| 2 | Monitor serial console for boot sequence | All system services start without errors | ☐ |
| 3 | Verify bootloader signature | Secure boot verification passes | ☐ |
| 4 | Check system service initialization | All core services report "READY" status | ☐ |
| 5 | Verify HAL layer initialization | Sensor hub, display, audio services active | ☐ |
| 6 | Check AI runtime startup | KLYRA service loads successfully | ☐ |
| 7 | Measure boot time from power on to ready | Total boot time ≤ 15 seconds | ☐ |
| 8 | Verify no memory leaks during boot | Memory usage within baseline ±5% | ☐ |
Pass Criteria:
- ✅ Boot completes in ≤15 seconds
- ✅ All system services initialize successfully
- ✅ No critical errors in boot log
- ✅ Memory usage within specifications
- ✅ Secure boot verification passes
Fail Actions:
- Capture full boot log
- Check for firmware corruption
- Verify power supply stability
- Escalate to firmware team
Test Data Required:
- Boot time measurements (10 samples)
- Service initialization logs
- Memory usage snapshot
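A minimal automation sketch for steps 1 and 7, assuming the boot log prints a ready marker on the serial console; the port path, baud rate, marker string, and power-relay hookup are placeholders to adapt to the actual bench:

```python
# Sketch: measure power-on-to-ready boot time over the serial console (pyserial).
# Assumptions: console on /dev/ttyUSB0 at 115200 baud, and a "SYSTEM_READY"
# marker that KLYRA OS is presumed to print once all services report READY.
import time
import serial  # pip install pyserial

SERIAL_PORT = "/dev/ttyUSB0"     # placeholder
BAUD_RATE = 115200
READY_MARKER = b"SYSTEM_READY"   # placeholder marker string
BOOT_TIME_LIMIT_S = 15.0

def measure_boot_time(power_on) -> float:
    """power_on: callable that applies power to the DUT (e.g. via a relay board)."""
    with serial.Serial(SERIAL_PORT, BAUD_RATE, timeout=1) as console:
        power_on()
        start = time.monotonic()
        while time.monotonic() - start < 2 * BOOT_TIME_LIMIT_S:
            if READY_MARKER in console.readline():
                return time.monotonic() - start
    raise TimeoutError("ready marker never appeared on the console")

if __name__ == "__main__":
    samples = [measure_boot_time(lambda: input("Power on DUT, press Enter"))
               for _ in range(10)]            # 10 samples per the test data spec
    print("boot times (s):", ", ".join(f"{s:.2f}" for s in samples))
    assert max(samples) <= BOOT_TIME_LIMIT_S, "TC-OS-001 step 7 FAIL"
```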
TC-OS-002: System Service Management
Priority: P0
Category: OS Core
Requirement Trace: REQ-SW-005, REQ-SW-011
Automation: Semi-automated
Objective:
Verify system services can be started, stopped, and restarted correctly.
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Query running services via ADB | All core services report running state | ☐ |
| 2 | Stop sensor_hub_service | Service stops cleanly within 2 seconds | ☐ |
| 3 | Verify sensor data stops flowing | No new sensor readings in log | ☐ |
| 4 | Restart sensor_hub_service | Service restarts and resumes data flow | ☐ |
| 5 | Stop ai_runtime_service | AI features become unavailable | ☐ |
| 6 | Restart ai_runtime_service | AI features restore within 5 seconds | ☐ |
| 7 | Attempt to stop critical service (systemd) | Operation denied - cannot stop critical service | ☐ |
| 8 | Check service crash recovery | Service auto-restarts within 3 seconds if crashed | ☐ |
Pass Criteria:
- ✅ Services stop/start cleanly without errors
- ✅ Service restart time ≤5 seconds
- ✅ Critical services protected from manual stop
- ✅ Auto-restart works for crashed services
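A semi-automated sketch of steps 2–6, assuming ADB shell access; `svc_ctl` and the RUNNING/STOPPED state strings stand in for whatever service-manager CLI KLYRA OS actually exposes:

```python
# Sketch: stop/start services over ADB and time the restart.
# "svc_ctl" and its state strings are placeholders for the real CLI.
import subprocess
import time

def adb_shell(cmd: str) -> str:
    return subprocess.run(["adb", "shell", cmd], capture_output=True,
                          text=True, check=True).stdout

def wait_for_state(service: str, state: str, timeout_s: float) -> bool:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if state in adb_shell(f"svc_ctl status {service}"):
            return True
        time.sleep(0.2)
    return False

def test_restart(service: str, restart_limit_s: float = 5.0) -> None:
    adb_shell(f"svc_ctl stop {service}")
    assert wait_for_state(service, "STOPPED", 2.0), f"{service} did not stop in 2 s"
    t0 = time.monotonic()
    adb_shell(f"svc_ctl start {service}")
    assert wait_for_state(service, "RUNNING", restart_limit_s), f"{service} restart slow"
    print(f"{service} restarted in {time.monotonic() - t0:.1f} s")

if __name__ == "__main__":
    test_restart("sensor_hub_service")   # steps 2-4
    test_restart("ai_runtime_service")   # steps 5-6
```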
TC-OS-003: File System Integrity
Priority: P0
Category: OS Core
Requirement Trace: REQ-SW-015
Automation: Automated
Objective:
Verify file system is properly encrypted and data persists correctly.
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Create test file in /data/user/ | File created successfully | ☐ |
| 2 | Write 10MB test data | Write completes without errors | ☐ |
| 3 | Verify file encryption | File data unreadable without device key | ☐ |
| 4 | Read back test file | Data matches original exactly | ☐ |
| 5 | Power cycle device | Device reboots successfully | ☐ |
| 6 | Verify file persistence | Test file still exists with correct data | ☐ |
| 7 | Check available storage | Free space reported accurately | ☐ |
| 8 | Delete test file | File removed completely | ☐ |
Pass Criteria:
- ✅ Encryption verified (unreadable without key)
- ✅ Data persists across power cycles
- ✅ No file corruption
- ✅ Storage space accurately reported
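Steps 1–2, 4, 6, and 8 lend themselves to a small ADB harness. The sketch below assumes a `sha256sum` binary exists on the device and that `/data/user/` is writable from the test shell:

```python
# Sketch: write a 10 MB file, hash it locally and on-device, power cycle,
# and confirm the hashes still match. Device-side sha256sum is assumed.
import hashlib
import os
import subprocess
import time

TEST_PATH = "/data/user/tc_os_003.bin"

def adb(*args: str) -> str:
    return subprocess.run(["adb", *args], capture_output=True,
                          text=True, check=True).stdout

def remote_sha256(path: str) -> str:
    return adb("shell", f"sha256sum {path}").split()[0]

def main() -> None:
    payload = os.urandom(10 * 1024 * 1024)            # 10 MB of random test data
    expected = hashlib.sha256(payload).hexdigest()
    with open("tc_os_003.bin", "wb") as f:
        f.write(payload)
    adb("push", "tc_os_003.bin", TEST_PATH)           # steps 1-2
    assert remote_sha256(TEST_PATH) == expected, "read-back mismatch (step 4)"
    adb("reboot")                                     # step 5
    adb("wait-for-device")
    time.sleep(20)                                    # let services settle
    assert remote_sha256(TEST_PATH) == expected, "persistence failed (step 6)"
    adb("shell", f"rm {TEST_PATH}")                   # step 8 cleanup
    print("TC-OS-003 persistence checks PASS")

if __name__ == "__main__":
    main()
```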
TC-OS-004: User Data Partition Management
Priority: P1
Category: OS Core
Requirement Trace: REQ-SW-016, REQ-SW-070
Automation: Manual
Objective:
Verify user data partition isolation and factory reset functionality.
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Create test user profile with AI data | Profile created with memories, preferences | ☐ |
| 2 | Upload 5 test documents to RAG | Documents indexed successfully | ☐ |
| 3 | Verify data in /data/user/ | User files present and readable | ☐ |
| 4 | Check /system/ partition | System partition read-only, unchanged | ☐ |
| 5 | Initiate factory reset via settings | Reset confirmation dialog appears | ☐ |
| 6 | Confirm factory reset | Reset process completes in < 5 minutes | ☐ |
| 7 | Verify user data wiped | /data/user/ is empty, no AI memories | ☐ |
| 8 | Verify system partition intact | OS boots normally, all system files intact | ☐ |
Pass Criteria:
- ✅ User data completely erased after reset
- ✅ System partition remains intact
- ✅ Device boots to initial setup state
- ✅ Factory reset completes in ≤5 minutes
TC-OS-005: HUD Rendering Pipeline
Priority: P0
Category: Display System
Requirement Trace: REQ-SW-030, REQ-HW-050
Automation: Semi-automated
Objective:
Verify HUD rendering engine displays UI correctly with low latency.
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Display test pattern on HUD | Pattern appears clearly, no distortion | ☐ |
| 2 | Measure render latency (input to display) | Latency ≤20ms | ☐ |
| 3 | Display scrolling text | Text scrolls smoothly at 60 FPS | ☐ |
| 4 | Test transparency overlay | Background visible through overlay | ☐ |
| 5 | Display white screen, measure brightness | Brightness matches ALS setting ±10% | ☐ |
| 6 | Test rapid UI updates (10/sec) | No frame drops, consistent 60 FPS | ☐ |
| 7 | Check GPU memory usage | GPU memory < 80% during peak load | ☐ |
| 8 | Verify foveated rendering | Higher res in center, lower at edges | ☐ |
Pass Criteria:
- ✅ Render latency ≤20ms
- ✅ Consistent 60 FPS, no frame drops
- ✅ GPU memory usage < 80%
- ✅ Foveated rendering working correctly
2. AI Runtime Test Cases
TC-AI-001: LLM Model Loading
Priority: P0
Category: AI Core
Requirement Trace: REQ-SW-100, FRD-AI-LLM-001
Automation: Automated
Objective:
Verify AI models load correctly and inference works.
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Cold boot device | Device boots to ready state | ☐ |
| 2 | Check AI model files in /system/ai/ | All model files present (3B, 8B variants) | ☐ |
| 3 | Initiate AI runtime service | Service starts without errors | ☐ |
| 4 | Load 3B quantized model (Q4_K_M) | Model loads in ≤15 seconds | ☐ |
| 5 | Check model memory footprint | RAM usage ≤2.5 GB for 3B model | ☐ |
| 6 | Send test prompt: "Hello, who are you?" | Response generated in ≤3 seconds | ☐ |
| 7 | Verify response quality | Response coherent and appropriate | ☐ |
| 8 | Switch to 8B model (if available) | Model swaps in ≤20 seconds | ☐ |
Pass Criteria:
- ✅ Model loads in specified time
- ✅ Memory usage within limits
- ✅ Inference produces coherent responses
- ✅ No errors in AI service log
TC-AI-002: Speech-to-Text (Whisper)
Priority: P0
Category: AI Core
Requirement Trace: REQ-SW-110, FRD-AI-STT-001
Automation: Semi-automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Launch Whisper STT engine | Engine initializes in ≤5 seconds | ☐ |
| 2 | Play test audio: clear speech | Transcription accuracy ≥95% | ☐ |
| 3 | Play test audio: noisy environment | Transcription accuracy ≥85% | ☐ |
| 4 | Measure transcription latency | Latency ≤500ms for 5-second clip | ☐ |
| 5 | Test with Australian accent | Transcription accurate for AU accent | ☐ |
| 6 | Test with multiple languages (3 samples) | Correct language auto-detected | ☐ |
| 7 | Check CPU usage during transcription | CPU usage ≤60% on NPU path | ☐ |
| 8 | Test continuous transcription (5 minutes) | No audio drops, consistent quality | ☐ |
Pass Criteria:
- ✅ Transcription accuracy ≥95% (clean audio)
- ✅ Transcription accuracy ≥85% (noisy audio)
- ✅ Latency ≤500ms
- ✅ Multiple languages supported
Test Data Required:
- Clean audio samples (10 phrases)
- Noisy audio samples (10 phrases)
- Multi-language samples (5 languages)
- WER (Word Error Rate) calculations
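The WER figure behind the ≥95%/≥85% accuracy gates can be computed with the standard word-level edit distance (accuracy = 1 − WER); a self-contained sketch:

```python
# Sketch: Word Error Rate = (substitutions + deletions + insertions) / reference
# word count, via the classic dynamic-programming edit distance over words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Accuracy >= 95% on clean audio corresponds to WER <= 0.05.
assert wer("turn on the display", "turn on the display") == 0.0
print(f"{wer('set a timer for five minutes', 'set timer for five minutes'):.2f}")
```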
TC-AI-003: Text-to-Speech (Piper)
Priority: P0
Category: AI Core
Requirement Trace: REQ-SW-111, FRD-AI-TTS-001
Automation: Semi-automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Initialize Piper TTS engine | Engine ready in ≤3 seconds | ☐ |
| 2 | Generate speech: "Hello, this is KLYRA" | Audio output clear and natural | ☐ |
| 3 | Measure TTS latency | Latency ≤200ms for 10-word sentence | ☐ |
| 4 | Test long-form speech (200 words) | Audio smooth, no stuttering | ☐ |
| 5 | Verify audio quality | Sample rate 22 kHz, clear pronunciation | ☐ |
| 6 | Test multiple voice models (if available) | Voice switches correctly | ☐ |
| 7 | Check CPU usage during TTS | CPU usage ≤40% | ☐ |
| 8 | Test emotional tone variations | Tone variations perceptible | ☐ |
Pass Criteria:
- ✅ TTS latency ≤200ms
- ✅ Audio quality natural and clear
- ✅ No audio artifacts or stuttering
- ✅ CPU usage ≤40%
TC-AI-004: RAG Memory Retrieval
Priority: P0
Category: AI Core
Requirement Trace: REQ-SW-120, FRD-AI-RAG-001
Automation: Automated
Objective:
Verify RAG system retrieves relevant memories correctly.
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Initialize RAG engine (FAISS + SQLite) | Engine initializes in ≤5 seconds | ☐ |
| 2 | Upload test document: "NDIS policy guide" | Document indexed successfully | ☐ |
| 3 | Query: "What is NDIS?" | Retrieves relevant chunks from document | ☐ |
| 4 | Verify retrieval accuracy | Top 3 results relevant to query | ☐ |
| 5 | Measure retrieval latency | Query completes in ≤300ms | ☐ |
| 6 | Test semantic search | Finds results even with different wording | ☐ |
| 7 | Check domain filtering | Only retrieves from correct domain | ☐ |
| 8 | Test with 100 documents indexed | Retrieval still ≤300ms | ☐ |
Pass Criteria:
- ✅ Retrieval latency ≤300ms
- ✅ Top 3 results relevant (precision > 80%)
- ✅ Semantic search working
- ✅ Domain filtering accurate
Test Data Required:
- 10 test documents across 5 domains
- 20 test queries with expected results
- Precision/recall metrics
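A sketch of the latency and precision@3 harness against a FAISS flat index. The embedding function is stubbed with deterministic random vectors and must be swapped for the device's actual embedder; the 384-dim width is an assumption:

```python
# Sketch: measure retrieval latency and top-3 precision on a FAISS index.
import time
import numpy as np
import faiss

DIM = 384  # assumed embedding width

def embed(texts):
    # Stub embedder: replace with the on-device embedding model.
    rng = np.random.default_rng(abs(hash(tuple(texts))) % 2**32)
    v = rng.standard_normal((len(texts), DIM)).astype("float32")
    faiss.normalize_L2(v)
    return v

def evaluate(chunks, queries, expected_ids, k=3, latency_budget_ms=300):
    """expected_ids: one set of relevant chunk indices per query."""
    index = faiss.IndexFlatIP(DIM)        # inner product on normalized vectors
    index.add(embed(chunks))
    hits, latencies = 0, []
    for query, relevant in zip(queries, expected_ids):
        t0 = time.perf_counter()
        _, ids = index.search(embed([query]), k)
        latencies.append((time.perf_counter() - t0) * 1000)
        hits += sum(1 for i in ids[0] if i in relevant)
    precision = hits / (k * len(queries))
    print(f"p95 latency: {np.percentile(latencies, 95):.1f} ms, "
          f"precision@{k}: {precision:.0%}")
    assert np.percentile(latencies, 95) <= latency_budget_ms  # <= 300 ms gate
    assert precision > 0.80                                   # > 80% gate
```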
TC-AI-005: Critical Reasoning Kernel
Priority: P1
Category: AI Core
Requirement Trace: FRD-AI-CRK-001
Automation: Manual
Objective:
Verify CRK detects contradictions and prevents hallucinations.
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Ask: "Is the sky green?" | CRK flags incorrect claim, corrects to blue | ☐ |
| 2 | Upload document: "User's favorite color is red" | Document indexed in personal domain | ☐ |
| 3 | Ask: "What's my favorite color?" | Response: "Red" with source citation | ☐ |
| 4 | Ask: "Make up a fact about me" | CRK refuses, explains no data to support | ☐ |
| 5 | Test contradiction: Set rule "Never discuss politics" | Rule stored in moral kernel | ☐ |
| 6 | Ask: "What do you think about [politician]?" | CRK blocks response, cites user rule | ☐ |
| 7 | Test evidence tagging | All claims tagged with source/confidence | ☐ |
| 8 | Verify self-critique pass | AI checks its own response for errors | ☐ |
Pass Criteria:
- ✅ Incorrect claims detected and corrected
- ✅ Refuses to hallucinate facts
- ✅ Moral kernel enforces rules
- ✅ Evidence tagging present on all claims
TC-AI-006: Emotional Engine State Tracking
Priority: P1
Category: AI Core
Requirement Trace: FRD-AI-EMO-001
Automation: Manual
Objective:
Verify emotional engine tracks user state correctly.
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | User says: "I'm feeling overwhelmed today" | Engine updates state: high arousal, negative valence | ☐ |
| 2 | Check tone adaptation | AI responds with calmer, shorter messages | ☐ |
| 3 | User says: "Everything is going great!" | Engine updates state: positive valence | ☐ |
| 4 | Check tone adaptation | AI matches upbeat tone | ☐ |
| 5 | Simulate repeated task avoidance | Engine flags avoidance trigger | ☐ |
| 6 | Check EFF response | AI suggests micro-step breakdown | ☐ |
| 7 | Test arousal regulation | AI avoids overload when user stressed | ☐ |
| 8 | Verify state persistence | Emotional state saved in memory | ☐ |
Pass Criteria:
- ✅ Emotional state tracked accurately
- ✅ Tone adapts appropriately
- ✅ Triggers detected correctly
- ✅ State persists across sessions
3. Sensor Firmware Test Cases
TC-SNS-001: IMU Calibration & Data Quality
Priority: P0
Category: Sensor Firmware
Requirement Trace: REQ-SW-130, REQ-HW-110
Automation: Semi-automated
Objective:
Verify IMU calibration and data accuracy.
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Place device on flat, level surface | IMU initializes in ≤2 seconds | ☐ |
| 2 | Run IMU calibration routine | Calibration completes in ≤30 seconds | ☐ |
| 3 | Check accelerometer zero bias | Bias < 0.02 m/s² on X/Y, 9.81±0.2 on Z | ☐ |
| 4 | Check gyroscope drift | Drift < 0.5°/sec | ☐ |
| 5 | Rotate device 90° on each axis | Rotation measured within ±2° | ☐ |
| 6 | Check magnetometer heading | Heading accurate within ±5° vs compass | ☐ |
| 7 | Measure IMU data rate | Consistent 200 Hz ±2% | ☐ |
| 8 | Check sensor fusion quaternion | Quaternion stable, no drift over 5 min | ☐ |
Pass Criteria:
- ✅ Calibration completes successfully
- ✅ Accelerometer bias within spec
- ✅ Gyroscope drift < 0.5°/sec
- ✅ 200 Hz data rate maintained
Test Equipment:
- Precision level surface
- Compass for heading reference
- High-speed logger (200+ Hz)
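Steps 3 and 7 reduce to simple statistics over a logged capture. The sketch assumes a CSV log with columns `t` (seconds) and `ax`/`ay`/`az` (m/s²), a placeholder format for whatever the bench logger emits:

```python
# Sketch: verify 200 Hz data rate and accelerometer zero bias from a flat,
# level capture (X/Y should read ~0 m/s^2, Z should read ~9.81 m/s^2).
import csv
import statistics

def check_imu_log(path: str) -> None:
    t, ax, ay, az = [], [], [], []
    with open(path) as f:
        for row in csv.DictReader(f):
            t.append(float(row["t"]))
            ax.append(float(row["ax"]))
            ay.append(float(row["ay"]))
            az.append(float(row["az"]))
    rate = (len(t) - 1) / (t[-1] - t[0])
    assert abs(rate - 200) <= 200 * 0.02, f"rate {rate:.1f} Hz outside 200 Hz +/-2%"
    assert abs(statistics.mean(ax)) < 0.02 and abs(statistics.mean(ay)) < 0.02
    assert abs(statistics.mean(az) - 9.81) <= 0.2
    print(f"rate {rate:.1f} Hz, bias x={statistics.mean(ax):.4f} "
          f"y={statistics.mean(ay):.4f} z={statistics.mean(az):.3f}")
```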
TC-SNS-002: ToF Depth Sensor Accuracy
Priority: P0
Category: Sensor Firmware
Requirement Trace: REQ-SW-131, REQ-HW-111
Automation: Manual
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Initialize ToF sensor (VL53L5CX) | Sensor ready in ≤1 second | ☐ |
| 2 | Place object at 50 cm distance | ToF reads 48-52 cm (±4% error) | ☐ |
| 3 | Place object at 1 m distance | ToF reads 0.96-1.04 m (±4% error) | ☐ |
| 4 | Place object at 4 m distance | ToF reads 3.8-4.2 m (±5% error) | ☐ |
| 5 | Test with black surface (low reflectivity) | Reading still within ±10% | ☐ |
| 6 | Test with white surface (high reflectivity) | Reading within ±4% | ☐ |
| 7 | Check 8×8 zone data | All zones reporting valid data | ☐ |
| 8 | Measure frame rate | Consistent 15 Hz ±1 Hz | ☐ |
Pass Criteria:
- ✅ Distance accuracy within ±4-5%
- ✅ All zones functional
- ✅ Works on low/high reflectivity surfaces
- ✅ Frame rate 15 Hz
Test Equipment:
- Precision measuring tape
- Black/white test cards
- Distance reference objects
TC-SNS-003: LiDAR Long-Range Detection
Priority: P0
Category: Sensor Firmware
Requirement Trace: REQ-SW-132, REQ-HW-112
Automation: Manual
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Initialize LiDAR sensor (TFmini-S) | Sensor ready in ≤1 second | ☐ |
| 2 | Place object at 2 m distance | LiDAR reads 1.9-2.1 m (±5% error) | ☐ |
| 3 | Place object at 6 m distance | LiDAR reads 5.7-6.3 m (±5% error) | ☐ |
| 4 | Place object at 12 m distance | LiDAR reads 11.4-12.6 m (±5% error) | ☐ |
| 5 | Test with dark surface (5% reflectivity) | Detection still valid at 6 m | ☐ |
| 6 | Check update rate | Consistent 100 Hz ±2% | ☐ |
| 7 | Test outdoor in sunlight | IR noise < 10%, detection valid | ☐ |
| 8 | Verify UART communication | No data corruption over 5 minutes | ☐ |
Pass Criteria:
- ✅ Distance accuracy within ±5%
- ✅ 12 m max range achieved
- ✅ Works in sunlight
- ✅ 100 Hz update rate
TC-SNS-004: Health Sensors (HR/SpO₂)
Priority: P1
Category: Sensor Firmware
Requirement Trace: REQ-SW-133, REQ-HW-130
Automation: Manual
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Initialize MAX30102 sensor | Sensor ready in ≤2 seconds | ☐ |
| 2 | Test subject places finger on sensor | LED illumination visible, data flowing | ☐ |
| 3 | Measure heart rate (resting) | HR reads 60-100 BPM (normal range) | ☐ |
| 4 | Compare with reference pulse oximeter | HR within ±5 BPM of reference | ☐ |
| 5 | Measure SpO₂ | SpO₂ reads 95-100% (normal range) | ☐ |
| 6 | Compare with reference oximeter | SpO₂ within ±2% of reference | ☐ |
| 7 | Test motion artifact rejection | Reading stable during minor movement | ☐ |
| 8 | Check poor contact detection | Sensor flags "poor contact" correctly | ☐ |
Pass Criteria:
- ✅ HR accuracy within ±5 BPM
- ✅ SpO₂ accuracy within ±2%
- ✅ Motion rejection working
- ✅ Poor contact detected
Test Equipment:
- Reference pulse oximeter (medical-grade)
- Test subject (healthy adult)
TC-SNS-005: Environmental Sensors (BME688)
Priority: P1
Category: Sensor Firmware
Requirement Trace: REQ-SW-134, REQ-HW-131
Automation: Semi-automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Initialize BME688 sensor | Sensor ready in ≤3 seconds | ☐ |
| 2 | Read temperature | Temperature 20-25°C (room temp) | ☐ |
| 3 | Compare with reference thermometer | Within ±1°C of reference | ☐ |
| 4 | Read humidity | Humidity 40-60% (typical indoor) | ☐ |
| 5 | Compare with reference hygrometer | Within ±3% RH of reference | ☐ |
| 6 | Read VOC (gas resistance) | Baseline established in 5 minutes | ☐ |
| 7 | Introduce VOC source (alcohol wipe) | VOC index increases within 30 seconds | ☐ |
| 8 | Check data update rate | Consistent 1 Hz | ☐ |
Pass Criteria:
- ✅ Temperature accuracy ±1°C
- ✅ Humidity accuracy ±3% RH
- ✅ VOC detection functional
- ✅ 1 Hz update rate
4. Display System Test Cases
TC-DSP-001: Display Brightness Control
Priority: P0
Category: Display
Requirement Trace: REQ-SW-140, REQ-HW-050
Automation: Semi-automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Set display brightness to 0% | Display nearly invisible | ☐ |
| 2 | Measure luminance with meter | Luminance < 10 nits | ☐ |
| 3 | Set brightness to 50% | Display clearly visible | ☐ |
| 4 | Measure luminance | Luminance 500-700 nits | ☐ |
| 5 | Set brightness to 100% | Display at maximum brightness | ☐ |
| 6 | Measure luminance | Luminance 1100-1300 nits | ☐ |
| 7 | Test ALS-driven auto brightness | Brightness adjusts based on ambient light | ☐ |
| 8 | Verify smooth transitions | Brightness ramps smoothly, no flicker | ☐ |
Pass Criteria:
- ✅ Full brightness range 10-1200 nits
- ✅ ALS auto-adjust working
- ✅ Smooth transitions, no flicker
- ✅ Brightness accurate ±10%
Test Equipment:
- Luminance meter (0-2000 nits range)
- Controlled lighting environment
TC-DSP-002: Display Latency
Priority: P0
Category: Display
Requirement Trace: REQ-SW-141, FRD-UX-001
Automation: Automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Connect high-speed camera (240 fps) | Camera synchronized with test trigger | ☐ |
| 2 | Send display update command | Timestamp logged | ☐ |
| 3 | Capture moment pixels change | Video captured at 240 fps | ☐ |
| 4 | Calculate latency (command to visible) | Latency ≤20ms | ☐ |
| 5 | Test rapid updates (30 per second) | All updates render correctly | ☐ |
| 6 | Check frame pacing consistency | Frame time 16.67ms ±1ms (60 FPS) | ☐ |
| 7 | Measure jitter (frame time variation) | Jitter < 2ms | ☐ |
| 8 | Test during high system load | Latency remains ≤25ms under load | ☐ |
Pass Criteria:
- ✅ Display latency ≤20ms
- ✅ Consistent 60 FPS (16.67ms frame time)
- ✅ Jitter < 2ms
- ✅ Performance maintained under load
Test Equipment:
- High-speed camera (≥240 fps)
- Precision timing equipment
- System load generator
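Steps 6 and 7 can be evaluated offline from frame-present timestamps; how those timestamps are extracted from the compositor trace is platform-specific and not shown here:

```python
# Sketch: frame-pacing analysis from a list of present timestamps (ms).
import statistics

def frame_pacing_report(present_times_ms: list) -> None:
    deltas = [b - a for a, b in zip(present_times_ms, present_times_ms[1:])]
    mean_ft = statistics.mean(deltas)
    jitter = statistics.stdev(deltas)
    drops = sum(1 for d in deltas if d > 16.67 * 1.5)  # frame took > 1.5 vsync
    print(f"mean frame time {mean_ft:.2f} ms, jitter {jitter:.2f} ms, drops {drops}")
    assert abs(mean_ft - 16.67) <= 1.0, "frame time outside 16.67 ms +/-1 ms"
    assert jitter < 2.0, "jitter >= 2 ms"
    assert drops == 0, "frame drops detected"
```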
TC-DSP-003: Eye Safety & Blue Light Reduction
Priority: P0
Category: Display
Requirement Trace: REQ-SW-142, REQ-SAFETY-010
Automation: Manual
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Enable blue light reduction (night mode) | Display tint shifts to warm tones | ☐ |
| 2 | Measure color temperature | Temp reduced by 1000-1500K | ☐ |
| 3 | Measure blue light spectrum (400-480nm) | Blue light reduced by 30-50% | ☐ |
| 4 | Test blink reminder system | Reminder appears after 20 minutes | ☐ |
| 5 | Test 20-20-20 reminder | "Look 20 feet away for 20 sec" every 20 min | ☐ |
| 6 | Check brightness limiter in dark | Max brightness limited to 50% when dark | ☐ |
| 7 | Measure flicker | No flicker at any brightness level | ☐ |
| 8 | Verify IEC 62471 compliance | Radiance < 4 mW/cm² | ☐ |
Pass Criteria:
- ✅ Blue light reduced 30-50% in night mode
- ✅ Blink reminders functional
- ✅ 20-20-20 reminders working
- ✅ IEC 62471 compliant
Test Equipment:
- Spectroradiometer
- Flicker meter
5. Audio Pipeline Test Cases
TC-AUD-001: Bone Conduction Audio Quality
Priority: P0
Category: Audio
Requirement Trace: REQ-SW-150, REQ-HW-090
Automation: Manual
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Play test tone (1 kHz, -20 dB) | Tone clearly audible to test subject | ☐ |
| 2 | Measure frequency response (100 Hz - 8 kHz) | Response within ±6 dB of target curve | ☐ |
| 3 | Test low frequencies (100-300 Hz) | Bass present but not overpowering | ☐ |
| 4 | Test speech frequencies (300 Hz - 3 kHz) | Speech clear and intelligible | ☐ |
| 5 | Test high frequencies (3-8 kHz) | Sibilance present, no harshness | ☐ |
| 6 | Check left/right balance | Balance within ±1 dB | ☐ |
| 7 | Test volume range (0-100%) | Full range usable, no distortion | ☐ |
| 8 | Verify stereo separation | Stereo field perceptible | ☐ |
Pass Criteria:
- ✅ Frequency response ±6 dB
- ✅ Speech intelligibility excellent
- ✅ L/R balance ±1 dB
- ✅ No distortion at max volume
Test Equipment:
- Audio analyzer
- Test tones (sine, speech samples)
- Head & torso simulator (HATS)
TC-AUD-002: Microphone Array & Beamforming
Priority: P0
Category: Audio
Requirement Trace: REQ-SW-151, REQ-HW-091
Automation: Semi-automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Place device on HATS in anechoic chamber | Device mounted correctly | ☐ |
| 2 | Play speech from front (0°) at 65 dB SPL | Speech recorded clearly | ☐ |
| 3 | Measure speech intelligibility (PESQ) | PESQ score ≥4.0 | ☐ |
| 4 | Play noise from sides (±90°) at 65 dB SPL | Beamforming attenuates side noise | ☐ |
| 5 | Measure noise rejection | > 10 dB attenuation at ±90° | ☐ |
| 6 | Test in simulated cafe noise (70 dB SPL) | Speech still intelligible in recording | ☐ |
| 7 | Check wind noise rejection (20 km/h wind) | Wind noise < 60 dB SPL in recording | ☐ |
| 8 | Verify all 4 mics functional | All mic channels have signal | ☐ |
Pass Criteria:
- ✅ PESQ score ≥4.0 (clean audio)
- ✅ > 10 dB noise rejection at ±90°
- ✅ Speech intelligible in 70 dB noise
- ✅ All microphones functional
TC-AUD-003: Audio Latency (Glass-to-Glass)
Priority: P0
Category: Audio
Requirement Trace: REQ-SW-152
Automation: Automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Setup audio loopback (mic to speaker) | Loopback configured | ☐ |
| 2 | Generate impulse signal | Signal sent to speaker | ☐ |
| 3 | Capture via microphone | Signal recorded | ☐ |
| 4 | Calculate end-to-end latency | Latency ≤50ms | ☐ |
| 5 | Test during active AI processing | Latency ≤60ms under load | ☐ |
| 6 | Check Bluetooth audio latency | BT latency ≤150ms (aptX LL codec) | ☐ |
| 7 | Test lip-sync during video playback | No perceivable lip-sync issues | ☐ |
| 8 | Verify latency consistency (100 samples) | Standard deviation < 5ms | ☐ |
Pass Criteria:
- ✅ Glass-to-glass latency ≤50ms
- ✅ BT latency ≤150ms
- ✅ No lip-sync issues
- ✅ Consistent latency (< 5 ms std dev)
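A sketch of the latency calculation in step 4, recovering the delay by cross-correlating the played impulse with the microphone capture; the capture/playback plumbing is assumed to exist and is not shown:

```python
# Sketch: loopback latency from impulse cross-correlation. Both signals are
# mono float arrays at the same sample rate.
import numpy as np

def loopback_latency_ms(played: np.ndarray, captured: np.ndarray,
                        sample_rate: int = 48000) -> float:
    corr = np.correlate(captured, played, mode="full")
    lag = int(np.argmax(corr)) - (len(played) - 1)   # delay in samples
    return 1000.0 * lag / sample_rate

# Synthetic self-check: a 10 ms shift should be recovered as ~10 ms.
rate = 48000
impulse = np.zeros(rate // 10)
impulse[0] = 1.0
delayed = np.concatenate([np.zeros(rate // 100), impulse])  # 10 ms delay
assert abs(loopback_latency_ms(impulse, delayed, rate) - 10.0) < 0.1
```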
6. Power Management Firmware Test Cases
TC-PWR-001: Battery Charging Logic
Priority: P0
Category: Power
Requirement Trace: REQ-SW-160, REQ-HW-070
Automation: Semi-automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Discharge battery to 10% | Battery at 10% ±2% | ☐ |
| 2 | Connect USB-C charger (5V/1A) | Charging begins immediately | ☐ |
| 3 | Monitor charging current | Current 0.7-1.0A initially | ☐ |
| 4 | Check PMIC charging profile | CC/CV charging curve followed | ☐ |
| 5 | Verify voltage regulation | Cell voltage 4.2V ±0.05V at full | ☐ |
| 6 | Check charge completion detection | Charging stops at 100% | ☐ |
| 7 | Measure 10-80% charge time | Time ≤45 minutes | ☐ |
| 8 | Verify thermal throttling | Charging slows if temp > 40°C | ☐ |
Pass Criteria:
- ✅ Charging starts immediately
- ✅ 10-80% in ≤45 minutes
- ✅ Voltage regulation ±0.05V
- ✅ Thermal throttling active
Test Equipment:
- Power supply (USB-C PD)
- Electronic load
- Thermal chamber
- Data logger
TC-PWR-002: Power Budget & Runtime
Priority: P0
Category: Power
Requirement Trace: REQ-SW-161, REQ-HW-071
Automation: Semi-automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Fully charge device | Battery at 100% | ☐ |
| 2 | Configure typical use scenario | Display on, AI idle, sensors active | ☐ |
| 3 | Measure average power draw | Power 2.5-3.5W | ☐ |
| 4 | Run continuous operation | Device runs > 6 hours | ☐ |
| 5 | Test heavy AI load scenario | Power increases to 4-5W | ☐ |
| 6 | Measure runtime under heavy load | Runtime > 3 hours | ☐ |
| 7 | Test low-power mode | Power reduces to < 1.5W, runtime > 12 hours | ☐ |
| 8 | Verify fuel gauge accuracy | Reported % within ±5% of actual | ☐ |
Pass Criteria:
- ✅ Runtime > 6 hours (typical use)
- ✅ Runtime > 3 hours (heavy use)
- ✅ Low-power mode > 12 hours
- ✅ Fuel gauge accurate ±5%
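The runtime and fuel-gauge gates follow directly from pack energy and average draw. In this sketch `BATTERY_WH` is a placeholder chosen to be consistent with the budgets above, not the actual pack rating:

```python
# Sketch: power-budget and fuel-gauge arithmetic for TC-PWR-002.
BATTERY_WH = 21.0   # placeholder pack energy; substitute the real rating

def projected_runtime_h(avg_power_w: float) -> float:
    return BATTERY_WH / avg_power_w

def fuel_gauge_error_pct(reported_pct: float, measured_wh_remaining: float) -> float:
    return abs(reported_pct - 100.0 * measured_wh_remaining / BATTERY_WH)

assert projected_runtime_h(3.0) > 6        # typical use: 2.5-3.5 W draw
assert projected_runtime_h(4.5) > 3        # heavy AI load: 4-5 W draw
assert projected_runtime_h(1.4) > 12       # low-power mode: < 1.5 W draw
assert fuel_gauge_error_pct(50.0, 0.5 * BATTERY_WH) <= 5.0  # +/-5% gate
```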
TC-PWR-003: Thermal Management & Throttling
Priority: P0
Category: Power
Requirement Trace: REQ-SW-162, REQ-HW-072
Automation: Semi-automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Run AI stress test (continuous inference) | CPU/GPU load at maximum | ☐ |
| 2 | Monitor CPU temperature | Temp rises gradually | ☐ |
| 3 | Check throttling threshold (42°C) | Throttling begins at 42°C ±1°C | ☐ |
| 4 | Verify clock speed reduction | CPU clocks reduce by 10-20% | ☐ |
| 5 | Continue load, check 45°C threshold | Further throttling at 45°C | ☐ |
| 6 | Check emergency shutdown (50°C) | System shuts down at 50°C ±1°C | ☐ |
| 7 | Test thermal recovery | Device cools to < 40°C in 5 minutes | ☐ |
| 8 | Verify skin temperature | Surface temp < 38°C during operation | ☐ |
Pass Criteria:
- ✅ Throttling at correct temps (42°C, 45°C)
- ✅ Emergency shutdown at 50°C
- ✅ Surface temp < 38°C
- ✅ Thermal recovery < 5 minutes
Test Equipment:
- Thermal camera
- Thermocouples (CPU, battery, surface)
- Thermal chamber
7. Connectivity Test Cases
TC-CONN-001: Bluetooth LE Pairing & Connection
Priority: P0
Category: Connectivity
Requirement Trace: REQ-SW-170, REQ-HW-100
Automation: Semi-automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Enable BT on device and phone | BT services active | ☐ |
| 2 | Initiate pairing from phone app | Device appears in scan list within 5 sec | ☐ |
| 3 | Confirm pairing on device | Pairing completes in < 10 seconds | ☐ |
| 4 | Verify secure pairing (encryption) | Connection encrypted (AES-128) | ☐ |
| 5 | Test connection stability | Connection maintained for 30 minutes | ☐ |
| 6 | Measure RSSI at 1m distance | RSSI -50 to -60 dBm | ☐ |
| 7 | Test max range (line of sight) | Connection stable at > 10m | ☐ |
| 8 | Verify auto-reconnect after disconnect | Reconnects within 5 seconds | ☐ |
Pass Criteria:
- ✅ Pairing completes in < 10 sec
- ✅ Connection encrypted
- ✅ Stable at > 10m range
- ✅ Auto-reconnect < 5 sec
TC-CONN-002: Wi-Fi Connection & Throughput
Priority: P0
Category: Connectivity
Requirement Trace: REQ-SW-171, REQ-HW-101
Automation: Semi-automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Scan for Wi-Fi networks | Network list populated in < 5 sec | ☐ |
| 2 | Connect to test AP (WPA2) | Connection established in < 10 sec | ☐ |
| 3 | Verify IP address assignment (DHCP) | Valid IP received | ☐ |
| 4 | Test TCP throughput (iperf) | Throughput > 50 Mbps (2.4 GHz) | ☐ |
| 5 | Test UDP throughput | Throughput > 80 Mbps | ☐ |
| 6 | Check 5 GHz band (Wi-Fi 6) | Throughput > 150 Mbps | ☐ |
| 7 | Measure latency (ping) | Latency < 10ms to local AP | ☐ |
| 8 | Test roaming between APs | Roam completes in < 2 seconds | ☐ |
Pass Criteria:
- ✅ Connection < 10 seconds
- ✅ Throughput > 50 Mbps (2.4 GHz)
- ✅ Throughput > 150 Mbps (5 GHz)
- ✅ Latency < 10ms
Test Equipment:
- iperf server
- Wi-Fi APs (2.4 & 5 GHz)
- Packet analyzer
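Step 4 automates cleanly via iperf3's JSON output; the server address below is a placeholder for the bench server, and iperf3 is assumed to be on the test host's PATH:

```python
# Sketch: run iperf3 against the bench server and check the TCP gate.
import json
import subprocess

IPERF_SERVER = "192.168.1.10"   # placeholder bench server address

def tcp_throughput_mbps(duration_s: int = 10) -> float:
    out = subprocess.run(
        ["iperf3", "-c", IPERF_SERVER, "-t", str(duration_s), "-J"],
        capture_output=True, text=True, check=True).stdout
    bits_per_sec = json.loads(out)["end"]["sum_received"]["bits_per_second"]
    return bits_per_sec / 1e6

if __name__ == "__main__":
    mbps = tcp_throughput_mbps()
    print(f"TCP throughput: {mbps:.1f} Mbps")
    assert mbps > 50, "below 2.4 GHz gate (step 4)"
```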
TC-CONN-003: Cloud API Communication
Priority: P1
Category: Connectivity
Requirement Trace: REQ-SW-172
Automation: Automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Initiate cloud API connection | TLS 1.3 connection established | ☐ |
| 2 | Verify device authentication | JWT token validated successfully | ☐ |
| 3 | Send test API request (GET /status) | Response received in < 500ms | ☐ |
| 4 | Check response format | Valid JSON, correct schema | ☐ |
| 5 | Test large data upload (5 MB file) | Upload completes without errors | ☐ |
| 6 | Test rate limiting | Requests throttled after limit | ☐ |
| 7 | Verify encryption (packet capture) | All data encrypted, no plaintext | ☐ |
| 8 | Test connection recovery | Reconnects after network disruption | ☐ |
Pass Criteria:
- ✅ TLS 1.3 connection secure
- ✅ API response < 500ms
- ✅ Data upload successful
- ✅ Auto-reconnect working
8. Security & Privacy Test Cases
TC-SEC-001: Data Encryption at Rest
Priority: P0
Category: Security
Requirement Trace: REQ-SW-180, REQ-SECURITY-001
Automation: Semi-automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Create test file in /data/user/ | File created successfully | ☐ |
| 2 | Write sensitive data (test credentials) | Data written to file | ☐ |
| 3 | Extract flash image via debug port | Flash image extracted | ☐ |
| 4 | Attempt to read file from raw image | File data unreadable (encrypted) | ☐ |
| 5 | Boot device, read file normally | File readable via OS with key | ☐ |
| 6 | Verify AES-256 encryption | Encryption algorithm confirmed | ☐ |
| 7 | Check key storage location | Key in secure enclave, not in flash | ☐ |
| 8 | Test key derivation | Unique key per device | ☐ |
Pass Criteria:
- ✅ Data unreadable without device key
- ✅ AES-256 encryption verified
- ✅ Key in secure enclave
- ✅ Unique key per device
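A useful supporting check for step 4: well-encrypted data is statistically close to random, so the byte entropy of the raw flash region backing `/data/user/` should approach 8 bits/byte. This is a heuristic aid, not a proof of AES-256; the offset and length are placeholders:

```python
# Sketch: Shannon entropy of a raw flash region from the extracted image.
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def check_region(image_path: str, offset: int, length: int) -> None:
    with open(image_path, "rb") as f:
        f.seek(offset)         # placeholder: locate the /data/user/ region
        region = f.read(length)
    h = byte_entropy(region)
    print(f"entropy: {h:.3f} bits/byte")
    assert h > 7.9, "region looks structured like plaintext"
```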
TC-SEC-002: Permission System
Priority: P0
Category: Security
Requirement Trace: REQ-SW-181
Automation: Manual
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Install test skill requiring camera access | Skill installed | ☐ |
| 2 | Launch skill | Permission prompt appears | ☐ |
| 3 | Deny camera permission | Skill cannot access camera | ☐ |
| 4 | Grant camera permission | Skill can access camera | ☐ |
| 5 | Revoke permission via settings | Permission removed | ☐ |
| 6 | Skill attempts camera access | Access denied, error logged | ☐ |
| 7 | Test sensitive permission (location) | Requires explicit user confirmation | ☐ |
| 8 | Check permission audit log | All permission changes logged | ☐ |
Pass Criteria:
- ✅ Permission prompts appear correctly
- ✅ Denied permissions enforced
- ✅ Permission changes logged
- ✅ Sensitive permissions require confirmation
TC-SEC-003: Privacy Modes
Priority: P0
Category: Security
Requirement Trace: REQ-SW-182, FRD-PRIVACY-001
Automation: Manual
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Enable "Deep Private" mode | Mode activated, UI confirms | ☐ |
| 2 | Verify camera disabled | Camera LED off, no capture possible | ☐ |
| 3 | Verify mic disabled | Mic hardware disabled via switch | ☐ |
| 4 | Check AI logging | No conversation logs created | ☐ |
| 5 | Test guest mode | Guest cannot access user memories | ☐ |
| 6 | Check biometric lock | Device locks when removed from user | ☐ |
| 7 | Verify recording indicator | LED always on during recording | ☐ |
| 8 | Test enterprise compliance mode | All activity logged per policy | ☐ |
Pass Criteria:
- ✅ Deep Private mode disables sensors
- ✅ No logs created in private mode
- ✅ Guest mode isolates data
- ✅ Recording LED always functional
9. OTA Update Test Cases
TC-OTA-001: OTA Update Download & Install
Priority: P0
Category: Updates
Requirement Trace: REQ-SW-190
Automation: Semi-automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Configure test update server | Server hosting test OTA package | ☐ |
| 2 | Trigger update check on device | Update detected within 10 seconds | ☐ |
| 3 | Verify update metadata | Version, size, checksum displayed | ☐ |
| 4 | Initiate download | Download begins, progress shown | ☐ |
| 5 | Monitor download speed | Speed > 1 MB/s on Wi-Fi | ☐ |
| 6 | Verify download integrity (SHA256) | Checksum matches expected value | ☐ |
| 7 | Install update | Installation begins, device reboots | ☐ |
| 8 | Verify new version running | Version updated correctly | ☐ |
Pass Criteria:
- ✅ Update detected correctly
- ✅ Download completes successfully
- ✅ Checksum verified
- ✅ New version boots correctly
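Step 6's integrity check is a straightforward streaming SHA-256; the metadata field carrying the expected digest is assumed to be provided by the update server:

```python
# Sketch: verify the downloaded OTA package against its expected SHA-256.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_package(path: str, expected_sha256: str) -> None:
    actual = sha256_of(path)
    if actual != expected_sha256.lower():
        raise ValueError(f"checksum mismatch: {actual} != {expected_sha256}")
    print("OTA package checksum verified")
```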
TC-OTA-002: OTA Rollback Mechanism
Priority: P0
Category: Updates
Requirement Trace: REQ-SW-191
Automation: Manual
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Note current firmware version | Version recorded | ☐ |
| 2 | Install test update (known bad) | Update installs, device reboots | ☐ |
| 3 | Simulate boot failure | New firmware fails to boot | ☐ |
| 4 | Verify automatic rollback | Device boots to previous version | ☐ |
| 5 | Check rollback time | Rollback completes in < 2 minutes | ☐ |
| 6 | Verify user data intact | All user files still present | ☐ |
| 7 | Check rollback notification | User notified of rollback reason | ☐ |
| 8 | Test manual rollback via settings | User can trigger rollback manually | ☐ |
Pass Criteria:
- ✅ Auto-rollback on boot failure
- ✅ Rollback completes in < 2 min
- ✅ User data preserved
- ✅ Manual rollback works
10. Performance & Stability Test Cases
TC-PERF-001: Memory Management
Priority: P0
Category: Performance
Requirement Trace: REQ-SW-200
Automation: Automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Cold boot device | Boot to ready state | ☐ |
| 2 | Measure baseline memory usage | RAM usage < 1.5 GB at idle | ☐ |
| 3 | Load 8B LLM model | RAM usage increases by ~5 GB | ☐ |
| 4 | Run inference (10 queries) | Memory stable, no leaks | ☐ |
| 5 | Close AI runtime | Memory returns to near baseline | ☐ |
| 6 | Open 10 apps/skills sequentially | No app killed by OOM | ☐ |
| 7 | Check memory fragmentation | Fragmentation < 20% | ☐ |
| 8 | Monitor for 12 hours | No memory leaks detected | ☐ |
Pass Criteria:
- ✅ Idle RAM < 1.5 GB
- ✅ No memory leaks over 12 hours
- ✅ No OOM kills
- ✅ Fragmentation < 20%
Test Data:
- Memory usage logs (every 5 min)
- Leak detection tool output
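A sketch of the leak analysis over the 12-hour log: fit a line to the RSS samples and gate on drift. The 1%-of-baseline-per-hour budget is an assumed threshold, not from the requirement:

```python
# Sketch: flag a memory leak from periodic RSS samples (one every 5 minutes
# per the test data spec) via a linear fit over the soak window.
import numpy as np

def leak_check(rss_mb: list, interval_min: float = 5.0) -> None:
    t_hours = np.arange(len(rss_mb)) * interval_min / 60.0
    slope_mb_per_h, _ = np.polyfit(t_hours, rss_mb, 1)
    budget = 0.01 * rss_mb[0]          # assumed gate: < 1% of baseline per hour
    print(f"drift: {slope_mb_per_h:+.2f} MB/h (budget {budget:.2f} MB/h)")
    assert slope_mb_per_h < budget, "possible memory leak"
```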
TC-PERF-002: CPU/GPU Performance
Priority: P1
Category: Performance
Requirement Trace: REQ-SW-201
Automation: Semi-automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Run CPU benchmark (Geekbench) | Single-core score > 1000 | ☐ |
| 2 | Run multi-core benchmark | Multi-core score > 3000 | ☐ |
| 3 | Run GPU benchmark (GFXBench) | Score within spec range | ☐ |
| 4 | Check thermal throttling | Throttling < 10% during benchmark | ☐ |
| 5 | Test sustained performance (30 min) | Performance stable over time | ☐ |
| 6 | Measure AI inference throughput | > 30 tokens/sec (3B model) | ☐ |
| 7 | Check power efficiency | Performance/watt ratio meets target | ☐ |
| 8 | Verify no performance degradation | Repeat tests match initial scores | ☐ |
Pass Criteria:
- ✅ Benchmark scores within spec
- ✅ Sustained performance stable
- ✅ Throttling minimal (< 10%)
- ✅ AI inference > 30 tokens/sec
TC-PERF-003: Long-Term Stability (Soak Test)
Priority: P1
Category: Stability
Requirement Trace: REQ-SW-202
Automation: Automated
Test Procedure:
| Step | Action | Expected Result | Pass/Fail |
|---|---|---|---|
| 1 | Configure continuous operation test | Test script prepared | ☐ |
| 2 | Run for 72 hours continuously | Device remains operational | ☐ |
| 3 | Execute AI queries every 5 minutes | All queries complete successfully | ☐ |
| 4 | Cycle through all major features | No feature failures | ☐ |
| 5 | Monitor system logs | No critical errors logged | ☐ |
| 6 | Check resource leaks | Memory/CPU stable over time | ☐ |
| 7 | Verify thermal stability | Temps remain within operating range | ☐ |
| 8 | Power cycle device | Device boots normally | ☐ |
Pass Criteria:
- ✅ No crashes over 72 hours
- ✅ All features functional
- ✅ No resource leaks
- ✅ Thermal stability maintained
Test Duration: 72 hours minimum
Appendix A: Test Environment Setup
Hardware Requirements
- GROOT FORCE devices (all variants)
- Test phones (Android & iOS)
- Precision measurement equipment:
- Multimeter
- Oscilloscope
- Thermal camera
- Luminance meter
- Audio analyzer
- Network analyzer
Software Requirements
- ADB tools
- Test automation framework
- Data logging software
- Analysis tools (Python scripts)
- CI/CD integration
Network Requirements
- Test Wi-Fi APs (2.4 & 5 GHz)
- Controlled network environment
- Backend test server
- Packet capture tools
Appendix B: Test Data Collection
All test executions must log:
- Test ID
- Execution timestamp
- Firmware version
- Hardware variant
- Pass/Fail result
- Measurement data
- Logs and screenshots
- Defect IDs (if failed)
Data Storage:
- Test results database
- Log archive (30-day retention)
- Defect tracking system
Appendix C: Pass/Fail Criteria Summary
| Category | Total Tests | P0 Critical | P1 High | P2 Medium |
|---|---|---|---|---|
| KLYRA OS | 5 | 4 | 1 | 0 |
| AI Runtime | 6 | 4 | 2 | 0 |
| Sensor Firmware | 5 | 3 | 2 | 0 |
| Display System | 3 | 3 | 0 | 0 |
| Audio Pipeline | 3 | 3 | 0 | 0 |
| Power Management | 3 | 3 | 0 | 0 |
| Connectivity | 3 | 2 | 1 | 0 |
| Security & Privacy | 3 | 3 | 0 | 0 |
| OTA Updates | 2 | 2 | 0 | 0 |
| Performance | 3 | 1 | 2 | 0 |
| TOTAL | 36 | 28 | 8 | 0 |
Document Approval
Reviewed by:
- QA Lead: _________________ Date: _______
- Software Architect: _________________ Date: _______
- Firmware Lead: _________________ Date: _______
- AI/ML Lead: _________________ Date: _______
- Security Lead: _________________ Date: _______
END OF SOFTWARE & FIRMWARE TEST CASES
This document provides comprehensive test procedures for validating all software and firmware aspects of GROOT FORCE. Each test case is designed to be executed by QA engineers and provides clear pass/fail criteria for go/no-go decisions.