# Fenrir Test Suite
This directory contains automated tests for the Fenrir screen reader. Testing a screen reader that requires root access and hardware interaction presents unique challenges, so we use a multi-layered testing strategy.
## Test Strategy

### 1. Unit Tests (No Root Required)
Test individual components in isolation without requiring hardware access:
- Core Managers: Logic testing without driver dependencies
- Utility Functions: String manipulation, cursor calculations, text processing
- Settings Validation: Configuration parsing and validation
- Remote Command Parsing: Command/setting string processing
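For example, a utility-function test needs nothing beyond plain Python and pytest. The sketch below uses a hypothetical `clamp_cursor` helper as a stand-in for Fenrir's real cursor-calculation utilities, which are tested the same way:

```python
# Minimal sketch of a dependency-free unit test. clamp_cursor is a hypothetical
# stand-in for the real cursor-calculation helpers.
def clamp_cursor(column, line, columns, lines):
    """Clamp a cursor position to the visible screen area."""
    return min(max(column, 0), columns - 1), min(max(line, 0), lines - 1)


def test_clamp_cursor_stays_on_screen():
    """Positions outside the screen are pulled back to the nearest edge."""
    assert clamp_cursor(-5, 3, 80, 25) == (0, 3)
    assert clamp_cursor(120, 30, 80, 25) == (79, 24)
    assert clamp_cursor(10, 10, 80, 25) == (10, 10)
```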
### 2. Integration Tests (No Root Required)
Test component interactions using mocked drivers:
- Remote Control: Unix socket and TCP communication
- Command System: Command loading and execution flow
- Event Processing: Event queue and dispatching
- Settings Manager: Configuration loading and runtime changes
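The common pattern is to hand core logic a mocked driver object and assert on the calls it receives. The sketch below uses `unittest.mock` with a stand-in `announce` function rather than Fenrir's real speech API:

```python
# Sketch of the mocked-driver pattern. announce() is a stand-in for core logic
# that talks to a speech driver; the mock records calls instead of producing audio.
from unittest.mock import MagicMock


def announce(text, speech_driver):
    """Forward non-empty text to whatever speech driver it is given."""
    if text.strip():
        speech_driver.speak(text)


def test_announce_forwards_text_to_driver():
    driver = MagicMock()
    announce("hello world", driver)
    driver.speak.assert_called_once_with("hello world")


def test_announce_skips_empty_text():
    driver = MagicMock()
    announce("   ", driver)
    driver.speak.assert_not_called()
```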
### 3. Driver Tests (Root Required, Optional)
Test actual hardware interaction (run only when explicitly invoked by developers on real hardware; skipped in CI):
- VCSA Driver: Screen reading on real TTY
- Evdev Driver: Keyboard input capture
- Speech Drivers: TTS output validation
- Sound Drivers: Audio playback testing
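Driver tests should also guard themselves so an accidental run without root skips cleanly instead of failing. A standard pytest skip marker, sketched below, is enough; the device access details belong in the individual driver tests:

```python
# Skip marker for tests that need root. os.geteuid() is POSIX-only, which matches
# Fenrir's Linux/TTY target.
import os

import pytest

requires_root = pytest.mark.skipif(
    os.geteuid() != 0, reason="driver tests need root access to real hardware"
)


@requires_root
def test_placeholder_driver_access():
    """Placeholder; real driver tests open /dev/vcsa*, evdev devices, audio, etc."""
    assert os.geteuid() == 0
```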
### 4. End-to-End Tests (Root Required, Manual)
Real-world usage scenarios run manually by developers:
- Full Fenrir startup/shutdown cycle
- Remote control from external scripts
- VMenu navigation and command execution
- Speech output for screen changes
## Running Tests
```bash
# Install test dependencies
pip install pytest pytest-cov pytest-mock pytest-timeout

# Run all unit and integration tests (no root required)
pytest tests/

# Run with coverage report
pytest tests/ --cov=src/fenrirscreenreader --cov-report=html

# Run only unit tests
pytest tests/unit/

# Run only integration tests
pytest tests/integration/

# Run specific test file
pytest tests/unit/test_settings_manager.py

# Run with verbose output
pytest tests/ -v

# Run driver tests (requires root)
sudo pytest tests/drivers/ -v
```
## Test Organization
```
tests/
├── README.md                 # This file
├── conftest.py               # Shared pytest fixtures
├── unit/                     # Unit tests (fast, no mocking needed)
│   ├── test_settings_validation.py
│   ├── test_cursor_utils.py
│   ├── test_text_utils.py
│   └── test_remote_parsing.py
├── integration/              # Integration tests (require mocking)
│   ├── test_remote_control.py
│   ├── test_command_manager.py
│   ├── test_event_manager.py
│   └── test_settings_manager.py
└── drivers/                  # Driver tests (require root)
    ├── test_vcsa_driver.py
    ├── test_evdev_driver.py
    └── test_speech_drivers.py
```
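`conftest.py` is the place for fixtures shared across these directories. The fixtures below are illustrative only, not a description of what the current `conftest.py` provides:

```python
# Illustrative shared fixtures; names and contents are examples, not the actual
# conftest.py of this test suite.
import pytest


@pytest.fixture
def temp_settings_file(tmp_path):
    """Write a minimal settings file tests can load without touching /etc."""
    settings = tmp_path / "settings.conf"
    settings.write_text("[speech]\nrate=1.0\n\n[sound]\nenabled=True\n")
    return settings


@pytest.fixture
def received_commands():
    """Shared list that mocked remote drivers can append received commands to."""
    return []
```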
## Writing Tests

### Example Unit Test
```python
def test_speech_rate_validation():
    """Test that speech rate validation rejects out-of-range values."""
    manager = SettingsManager()

    # Valid values should pass
    manager._validate_setting_value('speech', 'rate', 0.5)
    manager._validate_setting_value('speech', 'rate', 3.0)

    # Invalid values should raise ValueError
    with pytest.raises(ValueError):
        manager._validate_setting_value('speech', 'rate', -1.0)
    with pytest.raises(ValueError):
        manager._validate_setting_value('speech', 'rate', 10.0)
```
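The same boundary checks can be expressed more compactly with `pytest.mark.parametrize`, which reports each invalid value as its own test case (this reuses the `SettingsManager` and `pytest` imports assumed by the example above):

```python
@pytest.mark.parametrize("rate", [-1.0, 10.0])
def test_speech_rate_rejects_out_of_range(rate):
    """Each out-of-range speech rate raises ValueError."""
    manager = SettingsManager()
    with pytest.raises(ValueError):
        manager._validate_setting_value('speech', 'rate', rate)
```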
### Example Integration Test
```python
def test_remote_control_unix_socket(tmp_path):
    """Test Unix socket remote control accepts commands."""
    socket_path = tmp_path / "test.sock"

    # Start mock remote driver
    driver = MockUnixDriver(socket_path)

    # Send command
    send_remote_command(socket_path, "command say Hello")

    # Verify command was received
    assert driver.received_commands[-1] == "command say Hello"
```
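`MockUnixDriver` and `send_remote_command` are test-only helpers, not part of Fenrir. One possible shape for them is sketched below; a real implementation would also need to synchronise (for example by joining the accept thread) before the assertion above, since the command arrives on a background thread:

```python
# Possible shape of the test helpers used above; both are assumptions, not Fenrir code.
import socket
import threading


class MockUnixDriver:
    """Listen on a Unix socket and record each command string received."""

    def __init__(self, socket_path):
        self.received_commands = []
        self._server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self._server.bind(str(socket_path))
        self._server.listen(1)
        self._thread = threading.Thread(target=self._accept_one, daemon=True)
        self._thread.start()

    def _accept_one(self):
        conn, _ = self._server.accept()
        with conn:
            data = conn.recv(4096)
            if data:
                self.received_commands.append(data.decode().strip())


def send_remote_command(socket_path, command):
    """Connect to the Unix socket and send a single command string."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
        client.connect(str(socket_path))
        client.sendall(command.encode())
```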
## Test Coverage Goals
- Unit Tests: 80%+ coverage on utility functions and validation logic
- Integration Tests: 60%+ coverage on core managers and command system
- Overall: 70%+ coverage on non-driver code
Driver code is excluded from coverage metrics as it requires hardware interaction.
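If coverage is configured through a `.coveragerc` (or the equivalent `[tool.coverage.run]` table in `pyproject.toml`), the exclusion can be expressed with `omit` patterns. The package paths below are assumptions and should be adjusted to the real driver layout:

```ini
# Example .coveragerc; the driver package paths are assumed, adjust to the source tree.
[run]
omit =
    */fenrirscreenreader/screenDriver/*
    */fenrirscreenreader/inputDriver/*
    */fenrirscreenreader/speechDriver/*
    */fenrirscreenreader/soundDriver/*
    */fenrirscreenreader/remoteDriver/*
```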
## Continuous Integration
Tests are designed to run in CI environments without root access:
- Unit and integration tests run on every commit
- Driver tests are skipped in CI (require actual hardware)
- Coverage reports are generated and tracked over time
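One way to keep the driver-test skip automatic is a collection-time hook in `tests/drivers/conftest.py`, so CI and non-root developer runs pass without touching hardware. This is a sketch of the pattern, not necessarily how the suite is currently wired:

```python
# tests/drivers/conftest.py sketch: mark every collected driver test as skipped
# unless the run has root privileges.
import os

import pytest


def pytest_collection_modifyitems(config, items):
    if os.geteuid() == 0:
        return
    skip_marker = pytest.mark.skip(reason="driver tests require root and real hardware")
    for item in items:
        item.add_marker(skip_marker)
```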
## Test Principles
- No Root by Default: Most tests should run without elevated privileges
- Fast Execution: Unit tests complete in <1 second each
- Isolated: Tests don't depend on each other or external state
- Deterministic: Tests produce same results every run
- Documented: Each test has a clear docstring explaining what it tests
- Realistic Mocks: Mocked components behave like real ones
## Future Enhancements
- Performance Tests: Measure input-to-speech latency
- Stress Tests: Rapid event processing, memory leak detection
- Accessibility Tests: Verify all features work without vision
- Compatibility Tests: Test across different Linux distributions