# Fenrir Testing Guide

Complete guide to running and writing tests for the Fenrir screen reader.

## Quick Start

### 1. Install Test Dependencies

```bash
# Install test requirements
pip install -r tests/requirements.txt

# Or install individually
pip install pytest pytest-cov pytest-mock pytest-timeout
```

### 2. Run Tests

```bash
# Run all tests (unit + integration)
pytest tests/

# Run only unit tests (fastest)
pytest tests/unit/ -v

# Run only integration tests
pytest tests/integration/ -v

# Run with coverage report
pytest tests/ --cov=src/fenrirscreenreader --cov-report=html
# Then open htmlcov/index.html in a browser

# Run specific test file
pytest tests/unit/test_settings_validation.py -v

# Run specific test class
pytest tests/unit/test_settings_validation.py::TestSpeechSettingsValidation -v

# Run specific test
pytest tests/unit/test_settings_validation.py::TestSpeechSettingsValidation::test_speech_rate_valid_range -v
```

### 3. Useful Test Options

```bash
# Stop on first failure
pytest tests/ -x

# Show test output (print statements, logging)
pytest tests/ -s

# Run tests in parallel (faster, requires: pip install pytest-xdist)
pytest tests/ -n auto

# Show slowest 10 tests
pytest tests/ --durations=10

# Run only tests matching a keyword
pytest tests/ -k "remote"

# Run tests with specific markers
pytest tests/ -m unit           # Only unit tests
pytest tests/ -m integration    # Only integration tests
pytest tests/ -m "not slow"     # Skip slow tests
```

## Test Structure

```
tests/
├── README.md                        # Test overview and strategy
├── TESTING_GUIDE.md                 # This file - detailed usage guide
├── requirements.txt                 # Test dependencies
├── conftest.py                      # Shared fixtures and pytest config
├── unit/                            # Unit tests (fast, isolated)
│   ├── __init__.py
│   ├── test_settings_validation.py  # Settings validation tests
│   ├── test_cursor_utils.py         # Cursor calculation tests
│   └── test_text_utils.py           # Text processing tests
├── integration/                     # Integration tests (require mocking)
│   ├── __init__.py
│   ├── test_remote_control.py       # Remote control functionality
│   ├── test_command_manager.py      # Command loading/execution
│   └── test_event_manager.py        # Event queue processing
└── drivers/                         # Driver tests (require root)
    ├── __init__.py
    ├── test_vcsa_driver.py          # TTY screen reading
    └── test_evdev_driver.py         # Keyboard input capture
```

## Writing New Tests

### Unit Test Example

```python
"""tests/unit/test_my_feature.py"""
import pytest


@pytest.mark.unit
def test_speech_rate_calculation():
    """Test that speech rate is calculated correctly."""
    rate = calculate_speech_rate(0.5)
    assert 0.0 <= rate <= 1.0
    assert rate == 0.5
```

### Integration Test Example

```python
"""tests/integration/test_my_integration.py"""
import pytest


@pytest.mark.integration
def test_remote_command_execution(mock_environment):
    """Test remote command execution flow."""
    manager = RemoteManager()
    manager.initialize(mock_environment)

    result = manager.handle_command_execution_with_response("say test")

    assert result["success"] is True
    mock_environment["runtime"]["OutputManager"].speak_text.assert_called_once()
```

### Using Fixtures

Common fixtures are defined in `conftest.py`:

```python
def test_with_mock_environment(mock_environment):
    """Use the shared mock environment fixture."""
    # mock_environment provides mocked runtime managers
    assert "runtime" in mock_environment
    assert "DebugManager" in mock_environment["runtime"]


def test_with_temp_config(temp_config_file):
    """Use a temporary config file."""
    # temp_config_file is a Path object to a valid test config
    assert temp_config_file.exists()
    content = temp_config_file.read_text()
    assert "[speech]" in content
```

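New shared fixtures are added to `tests/conftest.py` in the same way. As a point of reference, a mock-environment-style fixture could look roughly like the sketch below; the manager names and dictionary layout are illustrative, and the real `mock_environment` fixture in this repository may provide more than this:

```python
# tests/conftest.py (sketch - the real fixture may differ)
from unittest.mock import MagicMock

import pytest


@pytest.fixture
def mock_environment():
    """Provide a minimal Fenrir-like environment with mocked runtime managers."""
    return {
        "runtime": {
            "DebugManager": MagicMock(),
            "SettingsManager": MagicMock(),
            "OutputManager": MagicMock(),
        },
    }
```
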
## Test Markers

Tests can be marked to categorize them:

```python
@pytest.mark.unit         # Fast, isolated unit test
@pytest.mark.integration  # Integration test with mocking
@pytest.mark.driver       # Requires root access (skipped by default)
@pytest.mark.slow         # Takes > 1 second
@pytest.mark.remote       # Tests remote control functionality
@pytest.mark.settings     # Tests settings/configuration
@pytest.mark.commands     # Tests command system
@pytest.mark.vmenu        # Tests VMenu system
```

Run tests by marker:

```bash
pytest tests/ -m unit                   # Only unit tests
pytest tests/ -m "unit or integration"  # Unit and integration
pytest tests/ -m "not slow"             # Skip slow tests
```

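If you introduce a new marker, register it so pytest does not warn about unknown marks. Assuming the marker is not already declared in the project's pytest configuration, one way to do this is the `pytest_configure` hook in `conftest.py` (sketch, marker descriptions are illustrative):

```python
# tests/conftest.py (sketch - only needed if the marker is not already registered)
def pytest_configure(config):
    """Register custom markers so pytest does not warn about unknown marks."""
    for line in (
        "unit: fast, isolated unit test",
        "integration: integration test with mocking",
        "driver: requires root access (skipped by default)",
        "slow: takes more than 1 second",
    ):
        config.addinivalue_line("markers", line)
```
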
## Code Coverage

### View Coverage Report

```bash
# Generate HTML coverage report
pytest tests/ --cov=src/fenrirscreenreader --cov-report=html

# Open report in browser
firefox htmlcov/index.html  # Or your preferred browser

# Terminal coverage report
pytest tests/ --cov=src/fenrirscreenreader --cov-report=term-missing
```

### Coverage Goals

- **Unit Tests**: 80%+ coverage on utility functions and validation logic
- **Integration Tests**: 60%+ coverage on core managers
- **Overall**: 70%+ coverage on non-driver code

Driver code is excluded from coverage as it requires hardware interaction.
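The exclusion is handled by coverage.py configuration rather than by pytest itself. If it is not already set up in this repository, a sketch of the relevant `.coveragerc` would look like the following; the driver directory paths are illustrative and should be adjusted to the actual layout under `src/fenrirscreenreader`:

```ini
# .coveragerc (sketch - adjust paths to the real driver directories)
[run]
omit =
    src/fenrirscreenreader/screenDriver/*
    src/fenrirscreenreader/inputDriver/*
    src/fenrirscreenreader/speechDriver/*
    src/fenrirscreenreader/soundDriver/*
```

coverage.py and pytest-cov pick up `.coveragerc` from the working directory automatically; otherwise pass it with `--cov-config=.coveragerc`.
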

## Testing Best Practices

### 1. Test One Thing

```python
# Good - tests one specific behavior
def test_speech_rate_rejects_negative():
    with pytest.raises(ValueError):
        validate_rate(-1.0)


# Bad - tests multiple unrelated things
def test_speech_settings():
    validate_rate(0.5)    # Rate validation
    validate_pitch(1.0)   # Pitch validation
    validate_volume(0.8)  # Volume validation
```

### 2. Use Descriptive Names

```python
# Good - clear what's being tested
def test_speech_rate_rejects_values_above_three():
    ...


# Bad - unclear purpose
def test_rate():
    ...
```

### 3. Arrange-Act-Assert Pattern

```python
def test_remote_command_parsing():
    # Arrange - set up test data
    manager = RemoteManager()
    command = "say Hello World"

    # Act - execute the code being tested
    result = manager.parse_command(command)

    # Assert - verify the result
    assert result["action"] == "say"
    assert result["text"] == "Hello World"
```

### 4. Mock External Dependencies

```python
from unittest.mock import Mock


def test_clipboard_export(mock_environment, tmp_path):
    """Test clipboard export without real file operations."""
    # Use mock environment instead of real Fenrir runtime
    manager = RemoteManager()
    manager.initialize(mock_environment)

    # Use temporary path instead of /tmp
    clipboard_path = tmp_path / "clipboard"
    mock_environment["runtime"]["SettingsManager"].get_setting = Mock(
        return_value=str(clipboard_path)
    )

    manager.export_clipboard()

    assert clipboard_path.exists()
```

### 5. Test Error Paths

```python
def test_remote_command_handles_invalid_input():
    """Test that invalid commands are handled gracefully."""
    manager = RemoteManager()

    # Test with various invalid inputs
    result1 = manager.handle_command_execution_with_response("")
    result2 = manager.handle_command_execution_with_response("invalid")
    result3 = manager.handle_command_execution_with_response("command unknown")

    # All should return error results, not crash
    assert all(not r["success"] for r in [result1, result2, result3])
```

## Debugging Tests

### Run with More Verbosity

```bash
# Show test names and outcomes
pytest tests/ -v

# Show test names, outcomes, and print statements
pytest tests/ -v -s

# Show local variables on failure
pytest tests/ --showlocals

# Show full diff on assertion failures
pytest tests/ -vv
```

### Use pytest.set_trace() for Debugging

```python
def test_complex_logic():
    result = complex_function()
    pytest.set_trace()  # Drop into debugger here
    assert result == expected
```

### Run Single Test Repeatedly

```bash
# Useful for debugging flaky tests (requires: pip install pytest-repeat)
pytest tests/unit/test_my_test.py::test_specific_test --count=100
```

## Continuous Integration

### GitHub Actions Example

```yaml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.9'
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install -r tests/requirements.txt
      - name: Run tests
        run: pytest tests/ --cov=src/fenrirscreenreader --cov-report=xml
      - name: Upload coverage
        uses: codecov/codecov-action@v3
```

## Common Issues

### ImportError: No module named 'fenrirscreenreader'

**Solution**: Make sure you're running pytest from the project root, or set PYTHONPATH:

```bash
export PYTHONPATH="${PYTHONPATH}:$(pwd)/src"
pytest tests/
```
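Alternatively, a top-level `conftest.py` can put `src` on `sys.path` so the package resolves no matter how pytest is invoked. A minimal sketch, only needed if the project does not already do this:

```python
# conftest.py at the repository root (sketch)
import sys
from pathlib import Path

# Make src/fenrirscreenreader importable without setting PYTHONPATH manually.
sys.path.insert(0, str(Path(__file__).parent / "src"))
```
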
### Tests hang or timeout

**Solution**: Use pytest-timeout, either globally or per test:

```bash
pytest tests/ --timeout=30  # Global 30s timeout
```

Or mark specific tests:

```python
@pytest.mark.timeout(5)
def test_that_might_hang():
    ...
```

### Mocks not working as expected

**Solution**: Check that you're patching the right location:

```python
# Good - patch where it's used
@patch('fenrirscreenreader.core.remoteManager.OutputManager')

# Bad - patch where it's defined
@patch('fenrirscreenreader.core.outputManager.OutputManager')
```

## Advanced Topics

### Parametrized Tests

Test multiple inputs with one test:

```python
@pytest.mark.parametrize("rate,expected", [
    (0.0, True),
    (1.5, True),
    (3.0, True),
    (-1.0, False),
    (10.0, False),
])
def test_rate_validation(rate, expected):
    try:
        validate_rate(rate)
        assert expected is True
    except ValueError:
        assert expected is False
```

### Test Fixtures with Cleanup

```python
@pytest.fixture
def temp_fenrir_instance():
    """Start a test Fenrir instance."""
    fenrir = FenrirTestInstance()
    fenrir.start()

    yield fenrir  # Test runs here

    # Cleanup after test
    fenrir.stop()
    fenrir.cleanup()
```

### Testing Async Code

Requires the pytest-asyncio plugin (`pip install pytest-asyncio`):

```python
@pytest.mark.asyncio
async def test_async_speech():
    result = await async_speak("test")
    assert result.success
```

## Getting Help

- **Pytest Documentation**: https://docs.pytest.org/
- **Fenrir Issues**: https://github.com/chrys87/fenrir/issues
- **Test Coverage**: Run with `--cov-report=html` and inspect `htmlcov/index.html`

## Contributing Tests

When contributing tests (a minimal template is sketched after this list):

1. **Follow naming conventions**: `test_*.py` for files, `test_*` for functions
2. **Add docstrings**: Explain what each test verifies
3. **Use appropriate markers**: `@pytest.mark.unit`, `@pytest.mark.integration`, etc.
4. **Keep tests fast**: Unit tests should complete in <100ms
5. **Test edge cases**: Empty strings, None, negative numbers, etc.
6. **Update this guide**: If you add new test patterns or fixtures
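
A minimal template that follows these conventions; the `validate_rate` helper is a placeholder for whatever function your test exercises:

```python
"""tests/unit/test_example_feature.py - template for a new unit test."""
import pytest


@pytest.mark.unit
@pytest.mark.settings
def test_rate_rejects_none():
    """Verify that validate_rate() raises on None instead of failing later."""
    with pytest.raises((TypeError, ValueError)):
        validate_rate(None)
```
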
Happy testing! 🧪