A few bug fixes, plus better checking to make sure syntax and other errors do not make it into commits.

Author: Storm Dragon
Date: 2025-07-24 18:34:12 -04:00
18 changed files with 2283 additions and 18 deletions

RELEASE_CHECKLIST.md (new file, +192 lines)

@ -0,0 +1,192 @@
# Fenrir Release Validation Checklist
This checklist ensures thorough validation before releasing Fenrir packages.
## 🔧 Setup Tools (One-time setup)
### Install Pre-commit Hook
```bash
# Safely install composite hook (preserves existing version management)
./tools/install_validation_hook.sh
# Test the hook
./.git/hooks/pre-commit
```
### Validation Scripts
- `tools/validate_syntax.py` - Python syntax validation
- `tools/validate_pep8.py` - PEP8 compliance checking with safe auto-fix
- `tools/validate_release.py` - Comprehensive release validation
- `tools/cleanup_cache.py` - Remove Python cache files and directories
- `tools/pre-commit-hook` - Git pre-commit validation
## 📋 Pre-Release Checklist
### 1. Code Quality Validation ✅
```bash
# Comprehensive release validation (includes syntax, imports, structure)
python3 tools/validate_release.py
# If issues found, try auto-fix
python3 tools/validate_release.py --fix
# Quick validation (skips slow dependency checks)
python3 tools/validate_release.py --quick
```
**Expected Result**: All tests pass, no syntax errors
### 2. Dependency Validation ✅
```bash
# Validate all dependencies are available
python3 check-dependencies.py
```
**Expected Result**: All required dependencies reported as available
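The internals of `check-dependencies.py` are not shown in this commit; a minimal sketch of how such a probe can work (module names here are illustrative) uses `importlib.util.find_spec` to test availability without importing anything:

```python
import importlib.util

def probe(modules):
    """Return availability of each module without triggering import side effects."""
    return {m: importlib.util.find_spec(m) is not None for m in modules}

# "evdev" is one of Fenrir's runtime dependencies; "configparser" is stdlib
for mod, ok in probe(["configparser", "evdev"]).items():
    print(f"{mod}: {'available' if ok else 'MISSING'}")
```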
### 3. Core Functionality Test ✅
```bash
# Test core imports (safe to run without sudo)
cd src
python3 -c "
import fenrirscreenreader.core.fenrirManager
import fenrirscreenreader.core.commandManager
import fenrirscreenreader.core.eventManager
print('Core imports successful')
"
cd ..
```
**Expected Result**: No import errors
### 4. Installation Script Validation ✅
```bash
# Validate setup.py syntax
python3 -m py_compile setup.py
# Check setup.py can be parsed
python3 setup.py --help-commands >/dev/null
```
**Expected Result**: No syntax errors, setup.py functional
### 5. Configuration Validation ✅
```bash
# Verify core config files are present
ls -la config/settings/settings.conf
ls -la config/keyboard/desktop.conf
ls -la config/punctuation/default.conf
```
**Expected Result**: All core config files present
### 6. Manual Testing (User/Package Maintainer) ⚠️
**Important**: These require user interaction as they need sudo access or specific hardware.
```bash
# Test basic functionality (ask user to run)
sudo ./src/fenrir --help
# Test in emulation mode (safer for desktop environments)
sudo ./src/fenrir -e --version
# Quick functionality test (3-5 seconds)
sudo timeout 5 ./src/fenrir -e -f || echo "Timeout reached (expected)"
```
**Expected Result**: No immediate crashes, basic help/version output works
### 7. Package-Specific Validation ✅
```bash
# Test the same compilation process used by package managers
python3 -m compileall src/fenrirscreenreader/ -q
# Verify no __pycache__ permission issues
# (rm -rf handles non-empty __pycache__ directories, which find's -delete cannot)
find src/ -name "*.pyc" -delete
find src/ -type d -name "__pycache__" -prune -exec rm -rf {} +
```
**Expected Result**: Clean compilation, no permission errors
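`compileall` can also be driven from Python; this sketch (scratch paths only, not the real package) shows how a single syntax error of the kind this commit guards against makes the whole run report failure:

```python
import compileall
import pathlib
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    bad = pathlib.Path(tmp) / "broken.py"
    # The unterminated f-string failure mode addressed by this commit
    bad.write_text('msg = f"unterminated {\n')
    # compile_dir returns a true value only if every file compiled
    ok = compileall.compile_dir(tmp, quiet=2)
    print("clean" if ok else "compile errors found")
```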
## 🚨 Known Issue Categories
### Critical Issues (Block Release)
- **Python syntax errors** (SyntaxError, unterminated strings)
- **Missing core dependencies** (dbus-python, evdev, etc.)
- **Import failures in core modules** (fenrirManager, commandManager)
- **Missing critical config files** (settings.conf, desktop.conf)
### Warning Issues (Address if Possible)
- **PEP8 violations** (cosmetic, don't block release)
- **Missing optional dependencies** (for specific features)
- **Command structure issues** (missing methods in command files)
- **Very long lines** (>120 characters)
## 🔍 Root Cause Analysis
### Why These Errors Weren't Caught Previously
1. **No automated syntax validation** - The codebase relied on manual testing
2. **No pre-commit hooks** - Syntax errors could be committed
3. **No CI/CD pipeline** - Package compilation happens only during release
4. **Manual PEP8 cleanup** - F-string refactoring introduced syntax errors during batch cleanup
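The f-string failure mode is easy to reproduce: Python's built-in `compile()` rejects the pattern the batch cleanup produced. A quick illustration (the snippet below is a made-up example, not a line from the codebase):

```python
# A line broken the way the faulty batch cleanup broke it
snippet = 'print(f"Fenrir version {\n'

try:
    compile(snippet, "<example>", "exec")
except SyntaxError as e:
    # Message varies by Python version, e.g. "unterminated string literal"
    print(f"Caught: {e.msg}")
```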
## 📖 Usage Instructions
### For Developers
```bash
# Before committing changes
git add .
git commit # Pre-commit hook will run automatically
# Before creating tags/releases
python3 tools/validate_release.py
```
### For Package Maintainers
```bash
# Before packaging
python3 tools/validate_release.py
# If validation fails
python3 tools/validate_release.py --fix
# Quick check (if dependencies are known good)
python3 tools/validate_release.py --quick
```
### For Release Managers
```bash
# Complete validation before tagging
python3 tools/validate_release.py
# Manual verification (requires sudo)
sudo ./src/fenrir --version
# Tag release only after all validations pass
git tag -a v2.x.x -m "Release v2.x.x"
```
## 🎯 Future Improvements
### Recommended Additions
1. **GitHub Actions CI/CD** - Automated validation on every push
2. **Automated testing** - Unit tests for core functionality
3. **Integration testing** - Test driver interactions
4. **Package testing** - Validate actual package installation
### Modern Python Packaging
- Consider migrating to `pyproject.toml` (PEP 621)
- Use `build` instead of `setup.py` directly
- Add `tox.ini` for multi-environment testing
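A minimal `pyproject.toml` along those lines might look like this (metadata values are illustrative, not taken from the project):

```toml
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "fenrir-screenreader"   # illustrative; the actual name may differ
version = "2025.07.24"
description = "Fenrir TTY screen reader"
requires-python = ">=3.8"
```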
## 📞 Support
If validation fails and auto-fix doesn't resolve issues:
1. **Check the specific error messages** in validation output
2. **Review recent commits** that might have introduced issues
3. **Run individual validation steps** to isolate problems
Remember: **Working code is better than perfect code** - especially for accessibility software where reliability is critical.


@ -24,9 +24,8 @@ class command:
     def run(self):
         try:
             self.env["runtime"]["OutputManager"].present_text(
-                f"Fenrir screen reader version {
-                    fenrirVersion.version}-{
-                        fenrirVersion.code_name}",
+                f"Fenrir screen reader version "
+                f"{fenrirVersion.version}-{fenrirVersion.code_name}",
                 interrupt=True,
             )
         except Exception as e:


@ -393,12 +393,10 @@ class command:
     """Check if text contains URLs that might cause false progress detection"""
     import re
-    # Common URL patterns that might contain progress-like patterns
+    # Specific URL patterns - only match actual URLs, not filenames
     url_patterns = [
-        r"https?://[^\s]+",  # http:// or https:// URLs
-        r"ftp://[^\s]+",  # ftp:// URLs
-        r"www\.[^\s]+",  # www. domains
-        r"[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}[/\w.-]*",  # domain.com/path patterns
+        r"\S+://\S+\.\S{2,}",  # Any protocol:// with domain.ext
+        r"www\.[^\s]+\.[a-zA-Z]{2,}",  # www.domain.ext patterns
     ]
     for pattern in url_patterns:
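The effect of the tightened patterns can be checked in isolation: the old `domain.ext` pattern matched plain filenames like `download.tar.gz`, while the new patterns require a protocol or a `www.` prefix. A small sketch (not Fenrir's actual command code):

```python
import re

# The two patterns kept by this commit
url_patterns = [
    r"\S+://\S+\.\S{2,}",          # any protocol:// with domain.ext
    r"www\.[^\s]+\.[a-zA-Z]{2,}",  # www.domain.ext
]

def contains_url(text):
    return any(re.search(p, text) for p in url_patterns)

print(contains_url("see https://example.com/pkg.tar.gz"))  # True
print(contains_url("extracting download.tar.gz 45%"))      # False: no protocol, no www.
```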


@ -59,8 +59,7 @@ class command(config_command):
         except Exception as e:
             self.present_text(
-                f"Failed to reset configuration: {
-                    str(e)}",
+                f"Failed to reset configuration: {str(e)}",
                 interrupt=False,
                 flush=False,
             )


@ -34,8 +34,7 @@ class command:
             SpeechDriver.initialize(self.env)
         except Exception as e:
             print(
-                f"revert_to_saved SpeechDriver: Error reinitializing speech driver: {
-                    str(e)}"
+                f"revert_to_saved SpeechDriver: Error reinitializing speech driver: {str(e)}"
             )
         # Reinitialize sound system with restored settings


@ -45,8 +45,7 @@ class command:
             self.env["runtime"]["SpeechDriver"].set_rate(new_rate)
         except Exception as e:
             print(
-                f"adjust_speech_rate set_rate: Error setting speech rate: {
-                    str(e)}"
+                f"adjust_speech_rate set_rate: Error setting speech rate: {str(e)}"
             )
         new_percent = int(new_rate * 100)


@ -29,9 +29,7 @@ class DynamicVoiceCommand:
     def run(self):
         try:
             self.env["runtime"]["OutputManager"].present_text(
-                f"Testing voice {
-                    self.voice} from {
-                        self.module}. Please wait.",
+                f"Testing voice {self.voice} from {self.module}. Please wait.",
                 interrupt=True,
             )


@ -4,5 +4,5 @@
 # Fenrir TTY screen reader
 # By Chrys, Storm Dragon, and contributors.
-version = "2025.07.19"
+version = "2025.07.24"
 code_name = "master"


@ -561,6 +561,15 @@ class driver(inputDriver):
         # 0 = Numlock
         # 1 = Capslock
         # 2 = Rollen
+        # Use the first device with LED capability as authoritative source
+        # to avoid inconsistent readings from multiple devices during initialization
+        for fd, dev in self.iDevices.items():
+            # Check if device has LED capability (capability 17)
+            if 17 in dev.capabilities():
+                return led in dev.leds()
+        # Fallback to old behavior if no device has LED capability
         for fd, dev in self.iDevices.items():
             if led in dev.leds():
                 return True
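The intent of the change above can be isolated into a small, driver-agnostic sketch. The fake `capabilities()`/`leds()` methods below stand in for python-evdev input devices; 17 is the `EV_LED` event-type code referenced in the diff's comments:

```python
EV_LED = 17  # evdev event-type code for LEDs ("capability 17" in the diff)

def led_is_on(devices, led):
    """Trust the first LED-capable device; fall back to scanning all devices."""
    for dev in devices:
        if EV_LED in dev.capabilities():
            return led in dev.leds()
    # Old behavior: any device reporting the LED as lit
    return any(led in dev.leds() for dev in devices)

# Minimal stand-ins for evdev input devices (hypothetical, for illustration)
class FakeDevice:
    def __init__(self, caps, lit):
        self._caps, self._lit = caps, lit
    def capabilities(self):
        return self._caps
    def leds(self):
        return self._lit

keyboard = FakeDevice({EV_LED: [0, 1, 2]}, lit=[0])  # Numlock LED on
mouse = FakeDevice({}, lit=[])                        # no LED capability
print(led_is_on([mouse, keyboard], 0))  # True: first LED-capable device wins
```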

tools/cleanup_cache.py (new executable file, +288 lines)

@ -0,0 +1,288 @@
#!/usr/bin/env python3
"""
Fenrir Cache Cleanup Tool

Removes Python cache files and directories from the repository.
These files should never be committed and can cause issues.

Usage:
    python3 tools/cleanup_cache.py           # Show what would be removed
    python3 tools/cleanup_cache.py --remove  # Actually remove cache files
    python3 tools/cleanup_cache.py --check   # Exit with error if cache files found
"""

import os
import sys
import argparse
import shutil
from pathlib import Path


class CacheCleanup:
    def __init__(self, verbose=True):
        self.verbose = verbose
        self.cache_dirs = []
        self.cache_files = []

    def log(self, message, level="INFO"):
        """Log a message with appropriate formatting."""
        if not self.verbose and level == "INFO":
            return
        colors = {
            "INFO": "\033[0;36m",     # Cyan
            "SUCCESS": "\033[0;32m",  # Green
            "WARNING": "\033[1;33m",  # Yellow
            "ERROR": "\033[0;31m",    # Red
            "HEADER": "\033[1;34m",   # Bold Blue
        }
        reset = "\033[0m"
        color = colors.get(level, "")
        if level == "HEADER":
            print(f"\n{color}{'='*60}")
            print(f"{message}")
            print(f"{'='*60}{reset}")
        else:
            symbol = {
                "SUCCESS": "",
                "ERROR": "",
                "WARNING": "",
                "INFO": ""
            }.get(level, "")
            print(f"{color}{symbol} {message}{reset}")

    def find_cache_files(self, directory):
        """Find all Python cache files and directories."""
        directory = Path(directory)
        for root, dirs, files in os.walk(directory):
            root_path = Path(root)
            # Skip .git directory entirely
            if '.git' in root_path.parts:
                continue
            # Find __pycache__ directories
            if '__pycache__' in dirs:
                cache_dir = root_path / '__pycache__'
                self.cache_dirs.append(cache_dir)
                # Don't traverse into __pycache__ directories
                dirs.remove('__pycache__')
            # Find .pyc files outside of __pycache__
            for file in files:
                if file.endswith('.pyc'):
                    cache_file = root_path / file
                    self.cache_files.append(cache_file)

    def show_findings(self):
        """Display what cache files were found."""
        total_items = len(self.cache_dirs) + len(self.cache_files)
        if total_items == 0:
            self.log("No Python cache files found", "SUCCESS")
            return True
        self.log(f"Found {total_items} cache items:", "WARNING")
        if self.cache_dirs:
            self.log(f"\n__pycache__ directories ({len(self.cache_dirs)}):", "WARNING")
            for cache_dir in sorted(self.cache_dirs):
                # Show size of directory
                size = self.get_directory_size(cache_dir)
                self.log(f" {cache_dir} ({size} files)", "WARNING")
        if self.cache_files:
            self.log(f"\nLoose .pyc files ({len(self.cache_files)}):", "WARNING")
            for cache_file in sorted(self.cache_files):
                # Show file size
                try:
                    size = cache_file.stat().st_size
                    size_str = self.format_size(size)
                    self.log(f" {cache_file} ({size_str})", "WARNING")
                except OSError:
                    self.log(f" {cache_file} (size unknown)", "WARNING")
        return False

    def get_directory_size(self, directory):
        """Get the number of files in a directory."""
        try:
            return len(list(directory.rglob('*')))
        except OSError:
            return 0

    def format_size(self, size_bytes):
        """Format file size in human-readable format."""
        if size_bytes < 1024:
            return f"{size_bytes} B"
        elif size_bytes < 1024 * 1024:
            return f"{size_bytes // 1024} KB"
        else:
            return f"{size_bytes // (1024 * 1024)} MB"

    def remove_cache_files(self):
        """Actually remove the cache files and directories."""
        removed_count = 0
        errors = []
        # Remove __pycache__ directories
        for cache_dir in self.cache_dirs:
            try:
                if cache_dir.exists():
                    shutil.rmtree(cache_dir)
                    self.log(f"Removed directory: {cache_dir}", "SUCCESS")
                    removed_count += 1
            except OSError as e:
                error_msg = f"Failed to remove {cache_dir}: {e}"
                errors.append(error_msg)
                self.log(error_msg, "ERROR")
        # Remove .pyc files
        for cache_file in self.cache_files:
            try:
                if cache_file.exists():
                    cache_file.unlink()
                    self.log(f"Removed file: {cache_file}", "SUCCESS")
                    removed_count += 1
            except OSError as e:
                error_msg = f"Failed to remove {cache_file}: {e}"
                errors.append(error_msg)
                self.log(error_msg, "ERROR")
        if errors:
            self.log(f"Encountered {len(errors)} errors during cleanup", "ERROR")
            return False
        else:
            self.log(f"Successfully removed {removed_count} cache items", "SUCCESS")
            return True

    def check_gitignore(self):
        """Check if .gitignore properly excludes cache files."""
        gitignore_path = Path('.gitignore')
        if not gitignore_path.exists():
            self.log("Warning: No .gitignore file found", "WARNING")
            return False
        try:
            with open(gitignore_path, 'r') as f:
                content = f.read()
            has_pycache = '__pycache__' in content or '__pycache__/' in content
            has_pyc = '*.pyc' in content
            if has_pycache and has_pyc:
                self.log("✓ .gitignore properly excludes Python cache files", "SUCCESS")
                return True
            else:
                missing = []
                if not has_pycache:
                    missing.append("__pycache__/")
                if not has_pyc:
                    missing.append("*.pyc")
                self.log(f"Warning: .gitignore missing: {', '.join(missing)}", "WARNING")
                return False
        except OSError as e:
            self.log(f"Could not read .gitignore: {e}", "ERROR")
            return False

    def suggest_gitignore_additions(self):
        """Suggest additions to .gitignore."""
        self.log("\nRecommended .gitignore entries for Python:", "INFO")
        print("""
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
""")


def main():
    parser = argparse.ArgumentParser(description='Clean Python cache files from Fenrir repository')
    parser.add_argument('--remove', action='store_true',
                        help='Actually remove cache files (default is dry-run)')
    parser.add_argument('--check', action='store_true',
                        help='Exit with non-zero code if cache files found')
    parser.add_argument('--quiet', action='store_true',
                        help='Reduce output verbosity')
    parser.add_argument('--directory', default='.',
                        help='Directory to scan (default: current directory)')
    args = parser.parse_args()
    # Ensure we're in the project root
    if not Path("src/fenrirscreenreader").exists():
        print("Error: Must be run from Fenrir project root directory")
        sys.exit(1)
    cleanup = CacheCleanup(verbose=not args.quiet)
    cleanup.log("FENRIR CACHE CLEANUP", "HEADER")
    cleanup.log(f"Scanning directory: {Path(args.directory).absolute()}")
    # Find cache files
    cleanup.find_cache_files(args.directory)
    # Show what we found
    no_cache_found = cleanup.show_findings()
    if no_cache_found:
        # Check .gitignore anyway
        cleanup.check_gitignore()
        cleanup.log("\n✅ Repository is clean of Python cache files", "SUCCESS")
        sys.exit(0)
    # Check .gitignore
    gitignore_ok = cleanup.check_gitignore()
    if not gitignore_ok:
        cleanup.suggest_gitignore_additions()
    # Handle different modes
    if args.remove:
        cleanup.log("\n🧹 REMOVING CACHE FILES", "HEADER")
        success = cleanup.remove_cache_files()
        if success:
            cleanup.log("\n✅ Cache cleanup completed successfully", "SUCCESS")
            sys.exit(0)
        else:
            cleanup.log("\n❌ Cache cleanup completed with errors", "ERROR")
            sys.exit(1)
    elif args.check:
        cleanup.log("\n❌ Cache files found - validation failed", "ERROR")
        cleanup.log("Run with --remove to clean up cache files", "INFO")
        sys.exit(1)
    else:
        # Dry run mode
        cleanup.log("\n💡 DRY RUN MODE", "HEADER")
        cleanup.log("Add --remove to actually delete these files", "INFO")
        cleanup.log("Add --check to fail if cache files are present", "INFO")
        sys.exit(0)


if __name__ == '__main__':
    main()

tools/clipboard_sync.sh (new executable file, +105 lines)

@ -0,0 +1,105 @@
#!/bin/bash
# Fenrir X11 Clipboard Sync
# Synchronizes between X11 clipboard and Fenrir clipboard file
# Prevents loops using checksums and timestamps
# Check for root privileges
if [[ $(whoami) != "root" ]]; then
echo "Error: This script must be run as root to access Fenrir's clipboard file"
echo "Run with: sudo DISPLAY=:0 ./clipboard_sync.sh"
exit 1
fi
FENRIR_CLIPBOARD_FILE="${1:-/tmp/fenrirClipboard}"
STATE_FILE="/var/tmp/.fenrir_clipboard_state"
# Simple state tracking without complex locking
get_file_checksum() {
if [[ -f "$FENRIR_CLIPBOARD_FILE" ]]; then
md5sum "$FENRIR_CLIPBOARD_FILE" 2>/dev/null | cut -d' ' -f1
else
echo ""
fi
}
get_clipboard_checksum() {
xclip -o -selection clipboard 2>/dev/null | md5sum | cut -d' ' -f1
}
# Initialize state
rm -f "$STATE_FILE" 2>/dev/null
echo "Starting Fenrir clipboard sync..."
echo "Monitoring file: $FENRIR_CLIPBOARD_FILE"
# Check dependencies
if ! command -v xclip >/dev/null 2>&1; then
echo "Error: xclip is required but not installed"
echo "Install with: sudo apt install xclip"
exit 1
fi
if ! command -v inotifywait >/dev/null 2>&1; then
echo "Error: inotify-tools is required but not installed"
echo "Install with: sudo apt install inotify-tools"
exit 1
fi
# Create clipboard file if it doesn't exist
touch "$FENRIR_CLIPBOARD_FILE"
while true; do
# Read last state
if [[ -f "$STATE_FILE" ]]; then
read -r LAST_FILE_CHECKSUM LAST_CLIPBOARD_CHECKSUM LAST_UPDATE_TIME < "$STATE_FILE"
else
LAST_FILE_CHECKSUM=""
LAST_CLIPBOARD_CHECKSUM=""
LAST_UPDATE_TIME="0"
fi
# Get current checksums
CURRENT_FILE_CHECKSUM=$(get_file_checksum)
CURRENT_CLIPBOARD_CHECKSUM=$(get_clipboard_checksum)
CURRENT_TIME=$(date +%s)
# Skip update if we just made one (prevent immediate loops)
TIME_SINCE_LAST=$((CURRENT_TIME - LAST_UPDATE_TIME))
if [[ $TIME_SINCE_LAST -lt 3 ]]; then
sleep 1
continue
fi
# Clipboard changed
if [[ "$CURRENT_CLIPBOARD_CHECKSUM" != "$LAST_CLIPBOARD_CHECKSUM" ]]; then
echo "X11 clipboard changed, updating file..."
if xclip -o -selection clipboard > "$FENRIR_CLIPBOARD_FILE" 2>/dev/null; then
CURRENT_FILE_CHECKSUM=$(get_file_checksum)
echo "$CURRENT_FILE_CHECKSUM $CURRENT_CLIPBOARD_CHECKSUM $CURRENT_TIME" > "$STATE_FILE"
echo "File updated successfully"
else
echo "Failed to update file from clipboard"
fi
sleep 1
continue
fi
# File changed
if [[ "$CURRENT_FILE_CHECKSUM" != "$LAST_FILE_CHECKSUM" ]]; then
echo "Fenrir clipboard file changed, updating X11 clipboard..."
if xclip -i -selection clipboard < "$FENRIR_CLIPBOARD_FILE" 2>/dev/null; then
CURRENT_CLIPBOARD_CHECKSUM=$(get_clipboard_checksum)
echo "$CURRENT_FILE_CHECKSUM $CURRENT_CLIPBOARD_CHECKSUM $CURRENT_TIME" > "$STATE_FILE"
echo "X11 clipboard updated successfully"
else
echo "Failed to update clipboard from file"
fi
sleep 1
continue
fi
sleep 1
done

tools/install_validation_hook.sh (new executable file, +110 lines)

@ -0,0 +1,110 @@
#!/bin/bash
# Safe Installation of Fenrir Validation Hook
#
# This script safely installs the composite pre-commit hook that combines
# your existing version management with new code quality validation.
set -e
# Colors for output
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
BLUE='\033[1;34m'
NC='\033[0m' # No Color
echo -e "${BLUE}Fenrir Validation Hook Installation${NC}"
echo "===================================="
# Check we're in the right directory
if [ ! -f "CLAUDE.md" ] || [ ! -d "src/fenrirscreenreader" ]; then
echo -e "${RED}Error: Must be run from Fenrir project root directory${NC}"
exit 1
fi
# Check if there's already a pre-commit hook
if [ -f ".git/hooks/pre-commit" ]; then
echo -e "\n${YELLOW}Existing pre-commit hook detected${NC}"
# Check if it's a symlink (our validation hook) or a regular file (version hook)
if [ -L ".git/hooks/pre-commit" ]; then
echo -e "${YELLOW}Current hook appears to be our validation hook (symlink)${NC}"
read -p "Replace with composite hook that includes version management? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo -e "${YELLOW}Installation cancelled${NC}"
exit 0
fi
rm .git/hooks/pre-commit
else
echo -e "${GREEN}Current hook appears to be the version management hook (regular file)${NC}"
# Back up the existing hook
backup_name=".git/hooks/pre-commit.backup.$(date +%Y%m%d_%H%M%S)"
cp .git/hooks/pre-commit "$backup_name"
echo -e "${GREEN}✓ Existing hook backed up to: $backup_name${NC}"
# Verify the backup contains version management code
if grep -q "versionFile" "$backup_name"; then
echo -e "${GREEN}✓ Backup contains version management logic${NC}"
else
echo -e "${YELLOW}⚠ Backup doesn't appear to contain version management logic${NC}"
echo -e "${YELLOW} You may need to manually restore version management functionality${NC}"
fi
read -p "Install composite hook (version management + validation)? (Y/n): " -n 1 -r
echo
if [[ $REPLY =~ ^[Nn]$ ]]; then
echo -e "${YELLOW}Installation cancelled${NC}"
exit 0
fi
fi
else
echo -e "${YELLOW}No existing pre-commit hook found${NC}"
read -p "Install composite hook? (Y/n): " -n 1 -r
echo
if [[ $REPLY =~ ^[Nn]$ ]]; then
echo -e "${YELLOW}Installation cancelled${NC}"
exit 0
fi
fi
# Install the composite hook
echo -e "\n${YELLOW}Installing composite pre-commit hook...${NC}"
cp tools/pre-commit-composite .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
echo -e "${GREEN}✓ Composite hook installed${NC}"
# Test the hook
echo -e "\n${YELLOW}Testing the composite hook...${NC}"
if ./.git/hooks/pre-commit >/dev/null 2>&1; then
echo -e "${GREEN}✓ Composite hook test passed${NC}"
else
echo -e "${RED}⚠ Composite hook test found issues (this may be normal)${NC}"
echo " Run manually to see details: ./.git/hooks/pre-commit"
fi
# Final instructions
echo -e "\n${GREEN}Installation Complete!${NC}"
echo ""
echo "Your composite pre-commit hook now provides:"
echo " 1. ✓ Version management (existing functionality preserved)"
echo " 2. ✓ Python syntax validation"
echo " 3. ✓ Core module import testing"
echo " 4. ✓ Common issue detection"
echo ""
echo "Development workflow:"
echo " • Make your changes"
echo " • git add . && git commit"
echo " • Hook runs automatically (version update + validation)"
echo ""
echo "Manual validation (optional):"
echo " • python3 tools/validate_syntax.py --fix"
echo " • python3 tools/validate_release.py --quick"
echo ""
echo -e "${BLUE}Environment variables:${NC}"
echo -e "${BLUE} SKIP_VERSION_UPDATE=1 Skip version management${NC}"
echo ""
# [ -f ] breaks if the glob expands to more than one backup; use compgen instead
if compgen -G ".git/hooks/pre-commit.backup.*" > /dev/null; then
echo -e "${YELLOW}Note: Your original hook is backed up and can be restored if needed${NC}"
fi

tools/pre-commit-composite (new executable file, +268 lines)

@ -0,0 +1,268 @@
#!/bin/bash
# Fenrir Composite Pre-commit Hook
#
# This hook combines version management and code quality validation.
# It first runs the version management logic, then runs validation.
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[1;34m'
NC='\033[0m' # No Color
echo -e "${BLUE}Fenrir Pre-commit Validation${NC}"
echo "=================================="
# Get the repository root
REPO_ROOT=$(git rev-parse --show-toplevel)
cd "$REPO_ROOT"
# ============================================================================
# PART 1: VERSION MANAGEMENT (existing logic)
# ============================================================================
echo -e "\n${YELLOW}1. Version Management...${NC}"
# Check if SKIP_VERSION_UPDATE is set
if [[ "${SKIP_VERSION_UPDATE}" = "1" ]]; then
echo -e "${YELLOW}Notice: Skipping version update due to SKIP_VERSION_UPDATE=1${NC}"
else
# Verify .git/versionpath exists
if [[ ! -f ".git/versionpath" ]]; then
echo -e "${RED}Error: .git/versionpath not found. Please create it with contents:${NC}"
echo -e "${YELLOW}versionFile=\"path/to/your/version/file\"${NC}"
exit 1
fi
# Source the version path file
source ".git/versionpath"
# Validate that versionFile variable was set
if [[ -z "$versionFile" ]]; then
echo -e "${RED}Error: versionFile variable not set in .git/versionpath${NC}"
exit 1
fi
# Get current date components
year=$(date +%Y)
month=$(date +%m)
day=$(date +%d)
# Create new version string
newVersion="$year.$month.$day"
# Get current branch name
branchName=$(git rev-parse --abbrev-ref HEAD)
# Check if we're in the middle of a merge
if [[ -f ".git/MERGE_HEAD" ]]; then
echo -e "${YELLOW}Warning: In the middle of a merge. Skipping version update.${NC}"
else
# Check if file exists relative to git root
if [[ ! -f "$versionFile" ]]; then
echo -e "${RED}Error: Version file not found at $versionFile${NC}"
exit 1
fi
# Store original version file content
originalContent=$(cat "$versionFile")
# Check if version actually needs updating
if ! grep -q "version = \"$newVersion\"" "$versionFile"; then
# Update the version in the file
sed -i "s/version = [\"']\{0,1\}[0-9.]\+[\"']\{0,1\}/version = \"$newVersion\"/" "$versionFile"
fi
# Check if codeName exists and isn't "stable"
if grep -q "codeName.*=.*\"stable\"" "$versionFile"; then
# Don't modify stable codeName
:
elif grep -q "codeName.*=.*\"$branchName\"" "$versionFile"; then
# CodeName already matches branch name, no need to update
:
elif grep -q "codeName" "$versionFile"; then
# Update existing codeName
sed -i "s/codeName = [\"']\{0,1\}[^\"']*[\"']\{0,1\}/codeName = \"$branchName\"/" "$versionFile"
else
# Add codeName after the version line
sed -i "/version = / a\codeName = \"$branchName\"" "$versionFile"
fi
# Check if the file was actually modified
if [[ "$(cat "$versionFile")" != "$originalContent" ]]; then
echo -e "${GREEN}✓ Version file updated to $newVersion${NC}"
if ! git diff --cached --quiet "$versionFile"; then
echo -e "${YELLOW}Notice: Version file was already staged, updates made to staged version${NC}"
else
git add "$versionFile"
echo -e "${YELLOW}Notice: Version file has been staged${NC}"
fi
else
echo -e "${GREEN}✓ No version updates needed${NC}"
fi
fi
fi
# ============================================================================
# PART 2: CODE QUALITY VALIDATION (our new logic)
# ============================================================================
echo -e "\n${YELLOW}2. Code Quality Validation...${NC}"
# Track validation results
VALIDATION_FAILED=0
# 2a. Python Syntax Validation
echo -e "\n${YELLOW} 2a. Validating Python syntax...${NC}"
if python3 tools/validate_syntax.py --check-only >/dev/null 2>&1; then
echo -e "${GREEN} ✓ Syntax validation passed${NC}"
else
echo -e "${RED} ✗ Syntax validation failed${NC}"
echo " Run: python3 tools/validate_syntax.py --fix"
VALIDATION_FAILED=1
fi
# 2a2. PEP8/flake8 Validation (for staged Python files only)
echo -e "\n${YELLOW} 2a2. Checking PEP8 compliance...${NC}"
# Compute the staged file lists here; they were previously only set in
# section 2b, so this check always saw an empty $STAGED_PYTHON_FILES
STAGED_FILES=$(git diff --cached --name-only --diff-filter=ACM || true)
STAGED_PYTHON_FILES=$(echo "$STAGED_FILES" | grep '\.py$' || true)
if command -v flake8 >/dev/null 2>&1 && [ -n "$STAGED_PYTHON_FILES" ]; then
PEP8_ISSUES=0
# Check staged Python files with flake8
# Focus on critical issues, ignore cosmetic ones for pre-commit
FLAKE8_SELECT="E9,F63,F7,F82" # Critical syntax/import errors only
FLAKE8_IGNORE="E501,W503,E203" # Ignore line length and some formatting
for file in $STAGED_PYTHON_FILES; do
if [ -f "$file" ]; then
flake8_output=$(flake8 --select="$FLAKE8_SELECT" --ignore="$FLAKE8_IGNORE" "$file" 2>/dev/null || true)
if [ -n "$flake8_output" ]; then
if [ $PEP8_ISSUES -eq 0 ]; then
echo -e "${RED} ✗ Critical PEP8 issues found:${NC}"
fi
echo -e "${RED} $file:${NC}"
echo "$flake8_output" | sed 's/^/ /'
PEP8_ISSUES=1
fi
fi
done
if [ $PEP8_ISSUES -eq 0 ]; then
echo -e "${GREEN} ✓ No critical PEP8 issues in staged files${NC}"
else
echo -e "${RED} ✗ Critical PEP8 issues found${NC}"
echo -e "${YELLOW} Run: flake8 --select=E9,F63,F7,F82 <file> for details${NC}"
VALIDATION_FAILED=1
fi
elif [ -n "$STAGED_PYTHON_FILES" ]; then
echo -e "${YELLOW} ⚠ flake8 not available (install with: pip install flake8)${NC}"
else
echo -e "${GREEN} ✓ No Python files to check${NC}"
fi
# 2b. Check for common issues in modified files
echo -e "\n${YELLOW} 2b. Checking modified files for common issues...${NC}"
# Get list of staged files (all types)
STAGED_FILES=$(git diff --cached --name-only --diff-filter=ACM || true)
STAGED_PYTHON_FILES=$(echo "$STAGED_FILES" | grep '\.py$' || true)
if [ -n "$STAGED_FILES" ]; then
ISSUES_FOUND=0
# Check for cache files being committed
CACHE_FILES=$(echo "$STAGED_FILES" | grep -E '(__pycache__|\.pyc$)' || true)
if [ -n "$CACHE_FILES" ]; then
echo -e "${RED} ✗ Python cache files staged for commit:${NC}"
echo "$CACHE_FILES" | while read cache_file; do
echo -e "${RED} $cache_file${NC}"
done
echo -e "${RED} ✗ Run: python3 tools/cleanup_cache.py --remove${NC}"
ISSUES_FOUND=1
fi
# Check Python files for specific issues
if [ -n "$STAGED_PYTHON_FILES" ]; then
for file in $STAGED_PYTHON_FILES; do
if [ -f "$file" ]; then
# Check for unterminated strings (the main issue from the email)
if grep -n 'f".*{$' "$file" >/dev/null 2>&1; then
echo -e "${RED} ✗ $file: Potential unterminated f-string${NC}"
ISSUES_FOUND=1
fi
# Check for missing imports that are commonly used
if grep -q 'debug\.DebugLevel\.' "$file" && ! grep -q 'from.*debug' "$file" && ! grep -q 'import.*debug' "$file"; then
echo -e "${YELLOW} ⚠ $file: Uses debug.DebugLevel but no debug import found${NC}"
fi
fi
done
fi
if [ $ISSUES_FOUND -eq 0 ]; then
echo -e "${GREEN} ✓ No common issues found in staged files${NC}"
else
echo -e "${RED} ✗ Common issues found in staged files${NC}"
VALIDATION_FAILED=1
fi
else
echo -e "${GREEN} ✓ No files staged for commit${NC}"
fi
# 2c. Quick import test for core modules (informational only)
echo -e "\n${YELLOW} 2c. Testing core module imports...${NC}"
IMPORT_WARNINGS=0
# Test core imports that are critical (but don't fail on import issues - might be dependency related)
CORE_MODULES=(
"src.fenrirscreenreader.core.fenrirManager"
"src.fenrirscreenreader.core.commandManager"
"src.fenrirscreenreader.core.eventManager"
)
cd src
for module in "${CORE_MODULES[@]}"; do
if python3 -c "import $module" 2>/dev/null; then
echo -e "${GREEN} ✓ $module${NC}"
else
echo -e "${YELLOW} ⚠ $module (import failed - might be dependency related)${NC}"
IMPORT_WARNINGS=1
fi
done
cd "$REPO_ROOT"
if [ $IMPORT_WARNINGS -eq 1 ]; then
echo -e "${YELLOW} ⚠ Some core module imports failed (non-blocking)${NC}"
echo -e "${YELLOW} This may be due to missing runtime dependencies${NC}"
else
echo -e "${GREEN} ✓ Core module imports successful${NC}"
fi
# ============================================================================
# FINAL SUMMARY
# ============================================================================
echo -e "\n============================================================"
if [ $VALIDATION_FAILED -eq 0 ]; then
echo -e "${GREEN}✓ All pre-commit validations passed${NC}"
echo -e "${GREEN}✓ Version management completed${NC}"
echo -e "${GREEN}✓ Code quality checks passed${NC}"
echo -e "${GREEN}Commit allowed to proceed${NC}"
# Show skip option
echo -e "\n${BLUE}Tip: You can skip version updates with SKIP_VERSION_UPDATE=1${NC}"
exit 0
else
echo -e "${RED}✗ Pre-commit validation failed${NC}"
echo -e "${RED}Commit blocked - please fix issues above${NC}"
echo ""
echo "Quick fixes:"
echo " • Python syntax: python3 tools/validate_syntax.py --fix"
echo " • Review flagged files manually"
echo " • Re-run commit after fixes"
echo ""
echo -e "${BLUE}Note: Version management completed successfully${NC}"
exit 1
fi
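The version-bump `sed` used in part 1 of the hook can be exercised safely on a scratch file (the temp file below is a throwaway example, not the project's real version file):

```shell
# Create a scratch copy of a version file and apply the hook's substitution
tmpfile=$(mktemp)
printf 'version = "2025.07.19"\ncode_name = "master"\n' > "$tmpfile"
newVersion="2025.07.24"
sed -i "s/version = [\"']\{0,1\}[0-9.]\+[\"']\{0,1\}/version = \"$newVersion\"/" "$tmpfile"
cat "$tmpfile"
rm -f "$tmpfile"
```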

tools/pre-commit-hook (new executable file, +143 lines)

@ -0,0 +1,143 @@
#!/bin/bash
# Fenrir Pre-commit Hook
#
# This hook validates Python syntax and basic code quality before commits.
# Install with: ln -sf ../../tools/pre-commit-hook .git/hooks/pre-commit
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
echo -e "${GREEN}Fenrir Pre-commit Validation${NC}"
echo "=================================="
# Get the repository root
REPO_ROOT=$(git rev-parse --show-toplevel)
cd "$REPO_ROOT"
# Track validation results
VALIDATION_FAILED=0
# 1. Python Syntax Validation
echo -e "\n${YELLOW}1. Validating Python syntax...${NC}"
if python3 tools/validate_syntax.py --check-only; then
echo -e "${GREEN}✓ Syntax validation passed${NC}"
else
echo -e "${RED}✗ Syntax validation failed${NC}"
echo "Run: python3 tools/validate_syntax.py --fix"
VALIDATION_FAILED=1
fi
# 2. Check for common issues in modified files
echo -e "\n${YELLOW}2. Checking modified files for common issues...${NC}"
# Get list of staged Python files
STAGED_PYTHON_FILES=$(git diff --cached --name-only --diff-filter=ACM | grep '\.py$' || true)
if [ -n "$STAGED_PYTHON_FILES" ]; then
ISSUES_FOUND=0
for file in $STAGED_PYTHON_FILES; do
if [ -f "$file" ]; then
# Check for unterminated strings (the main issue from the email)
if grep -n 'f".*{$' "$file" >/dev/null 2>&1; then
echo -e "${RED}✗ $file: Potential unterminated f-string${NC}"
ISSUES_FOUND=1
fi
# Check for missing imports that are commonly used
if grep -q 'debug\.DebugLevel\.' "$file" && ! grep -q 'from.*debug' "$file" && ! grep -q 'import.*debug' "$file"; then
echo -e "${YELLOW}⚠ $file: Uses debug.DebugLevel but no debug import found${NC}"
fi
# Check for extremely long lines (over 120 chars) that might indicate issues
line_num=$(awk 'length($0) > 120 {print NR; exit}' "$file")
# Only warn, don't fail
if [ -n "$line_num" ]; then
echo -e "${YELLOW}⚠ $file:$line_num: Very long line (>120 chars)${NC}"
fi
fi
done
if [ $ISSUES_FOUND -eq 0 ]; then
echo -e "${GREEN}✓ No common issues found in modified files${NC}"
else
echo -e "${RED}✗ Common issues found in modified files${NC}"
VALIDATION_FAILED=1
fi
else
echo -e "${GREEN}✓ No Python files modified${NC}"
fi
# 3. Quick import test for core modules
echo -e "\n${YELLOW}3. Testing core module imports...${NC}"
IMPORT_FAILED=0
# Test core imports that are critical
CORE_MODULES=(
"src.fenrirscreenreader.core.fenrirManager"
"src.fenrirscreenreader.core.commandManager"
"src.fenrirscreenreader.core.eventManager"
)
cd src
for module in "${CORE_MODULES[@]}"; do
if python3 -c "import $module" 2>/dev/null; then
echo -e "${GREEN}✓ $module${NC}"
else
echo -e "${RED}✗ $module (import failed)${NC}"
IMPORT_FAILED=1
fi
done
cd "$REPO_ROOT"
if [ $IMPORT_FAILED -eq 1 ]; then
echo -e "${RED}✗ Core module import test failed${NC}"
VALIDATION_FAILED=1
else
echo -e "${GREEN}✓ Core module imports successful${NC}"
fi
# 4. Check for secrets or sensitive data
echo -e "\n${YELLOW}4. Checking for potential secrets...${NC}"
SECRETS_FOUND=0
if [ -n "$STAGED_PYTHON_FILES" ]; then
for file in $STAGED_PYTHON_FILES; do
if [ -f "$file" ]; then
# Check for potential passwords, keys, tokens
if grep -i -E "(password|passwd|pwd|key|token|secret|api_key).*=.*['\"][^'\"]{8,}['\"]" "$file" >/dev/null 2>&1; then
echo -e "${RED}✗ $file: Potential hardcoded secret detected${NC}"
SECRETS_FOUND=1
fi
fi
done
fi
if [ $SECRETS_FOUND -eq 0 ]; then
echo -e "${GREEN}✓ No potential secrets found${NC}"
else
echo -e "${RED}✗ Potential secrets found - please review${NC}"
VALIDATION_FAILED=1
fi
# Summary
echo -e "\n=================================================="
if [ $VALIDATION_FAILED -eq 0 ]; then
echo -e "${GREEN}✓ All pre-commit validations passed${NC}"
echo -e "${GREEN}Commit allowed to proceed${NC}"
exit 0
else
echo -e "${RED}✗ Pre-commit validation failed${NC}"
echo -e "${RED}Commit blocked - please fix issues above${NC}"
echo ""
echo "Quick fixes:"
echo " • Python syntax: python3 tools/validate_syntax.py --fix"
echo " • Review flagged files manually"
echo " • Re-run commit after fixes"
exit 1
fi
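The heuristics above are easy to sanity-check in isolation. The snippet below (illustrative only; it writes a throwaway temp file) shows what the unterminated f-string and hardcoded-secret patterns actually match:

```shell
# Demo of the hook's grep heuristics against a throwaway temp file.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
msg = f"value is {
api_key = "supersecretvalue123"
clean = "short"
EOF

# Unterminated f-string heuristic: an f-string opening a brace at end of line
if grep -q 'f".*{$' "$tmp"; then
    echo "f-string flagged"
fi

# Secret heuristic: key-like name assigned an 8+ character quoted literal
if grep -i -E "(password|passwd|pwd|key|token|secret|api_key).*=.*['\"][^'\"]{8,}['\"]" "$tmp" >/dev/null; then
    echo "secret flagged"
fi
rm -f "$tmp"
```

Both heuristics are deliberately loose: they trade false positives for simplicity, which is acceptable in a warn-and-review pre-commit flow.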

tools/setup_validation.sh Executable file

@@ -0,0 +1,98 @@
#!/bin/bash
# Fenrir Validation Setup Script
#
# Sets up the validation tools and pre-commit hooks for Fenrir development.
# Run this once after cloning the repository.
set -e
# Colors for output
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color
echo -e "${GREEN}Fenrir Development Environment Setup${NC}"
echo "======================================"
# Check we're in the right directory
if [ ! -f "CLAUDE.md" ] || [ ! -d "src/fenrirscreenreader" ]; then
echo -e "${RED}Error: Must be run from Fenrir project root directory${NC}"
exit 1
fi
# Make validation scripts executable
echo -e "\n${YELLOW}1. Making validation scripts executable...${NC}"
chmod +x tools/validate_syntax.py
chmod +x tools/validate_pep8.py
chmod +x tools/validate_release.py
chmod +x tools/cleanup_cache.py
chmod +x tools/pre-commit-hook
chmod +x tools/install_validation_hook.sh
chmod +x tools/pre-commit-composite
echo -e "${GREEN}✓ Scripts are now executable${NC}"
# Install pre-commit hook
echo -e "\n${YELLOW}2. Installing composite pre-commit hook...${NC}"
echo -e "${YELLOW}This preserves existing version management functionality.${NC}"
# Use the safe installation script
if ./tools/install_validation_hook.sh; then
echo -e "${GREEN}✓ Composite pre-commit hook installed${NC}"
else
echo -e "${RED}⚠ Hook installation encountered issues${NC}"
echo " You can install manually with: ./tools/install_validation_hook.sh"
fi
# Test validation tools
echo -e "\n${YELLOW}3. Testing validation tools...${NC}"
# Test syntax validator
if python3 tools/validate_syntax.py --check-only >/dev/null 2>&1; then
echo -e "${GREEN}✓ Syntax validator working${NC}"
else
echo -e "${RED}⚠ Syntax validator found issues${NC}"
echo " Run: python3 tools/validate_syntax.py --fix"
fi
# Test pre-commit hook
if ./tools/pre-commit-hook >/dev/null 2>&1; then
echo -e "${GREEN}✓ Pre-commit hook working${NC}"
else
echo -e "${RED}⚠ Pre-commit hook found issues${NC}"
echo " This is normal if there are uncommitted changes"
fi
# Verify dependencies for full validation
echo -e "\n${YELLOW}4. Checking validation dependencies...${NC}"
missing_deps=()
if ! command -v python3 >/dev/null 2>&1; then
missing_deps+=("python3")
fi
if ! python3 -c "import ast" >/dev/null 2>&1; then
missing_deps+=("python3-ast")
fi
if [ ${#missing_deps[@]} -eq 0 ]; then
echo -e "${GREEN}✓ All validation dependencies available${NC}"
else
echo -e "${RED}Missing dependencies: ${missing_deps[*]}${NC}"
fi
# Final instructions
echo -e "\n${GREEN}Setup complete!${NC}"
echo ""
echo "Development workflow:"
echo " 1. Make your changes"
echo " 2. python3 tools/validate_syntax.py --fix"
echo " 3. python3 tools/validate_release.py --quick"
echo " 4. git add . && git commit (pre-commit hook runs automatically)"
echo ""
echo "Before releases:"
echo " python3 tools/validate_release.py"
echo " cat RELEASE_CHECKLIST.md"
echo ""
echo -e "${YELLOW}Tip: The pre-commit hook will now run automatically on every commit${NC}"
echo -e "${YELLOW} and prevent syntax errors from being committed.${NC}"
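The dependency probe in step 4 above (an array of missing names, filled by `command -v` checks) is a reusable bash pattern. A standalone sketch, where the nonexistent command name is invented for the demo:

```shell
# Standalone sketch of the dependency-probe pattern from step 4 above.
# "definitely-not-a-real-cmd-xyz" is a made-up name that should not exist.
missing_deps=()
for dep in sh definitely-not-a-real-cmd-xyz; do
    if ! command -v "$dep" >/dev/null 2>&1; then
        missing_deps+=("$dep")
    fi
done
if [ ${#missing_deps[@]} -eq 0 ]; then
    echo "all dependencies available"
else
    echo "missing: ${missing_deps[*]}"
fi
```

Collecting names instead of failing on the first miss lets the script report every missing dependency in one pass.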

tools/validate_pep8.py Executable file

@@ -0,0 +1,365 @@
#!/usr/bin/env python3
"""
Fenrir PEP8 Validation and Auto-Fix Tool
Validates Python code style using flake8 and applies safe automatic fixes.
Designed to work with Fenrir's existing codebase while respecting timing-critical code.
Usage:
python3 tools/validate_pep8.py # Check all Python files
python3 tools/validate_pep8.py --fix-safe # Auto-fix safe issues
python3 tools/validate_pep8.py --check-only # Exit with error if issues found
python3 tools/validate_pep8.py --staged # Check only staged files
"""
import os
import sys
import argparse
import subprocess
import tempfile
from pathlib import Path
class PEP8Validator:
def __init__(self, verbose=True):
self.verbose = verbose
self.errors = []
self.warnings = []
self.fixes_applied = []
def log(self, message, level="INFO"):
"""Log a message with appropriate formatting."""
if not self.verbose and level == "INFO":
return
colors = {
"INFO": "\033[0;36m", # Cyan
"SUCCESS": "\033[0;32m", # Green
"WARNING": "\033[1;33m", # Yellow
"ERROR": "\033[0;31m", # Red
"HEADER": "\033[1;34m", # Bold Blue
}
reset = "\033[0m"
color = colors.get(level, "")
if level == "HEADER":
print(f"\n{color}{'='*60}")
print(f"{message}")
print(f"{'='*60}{reset}")
else:
symbol = {
"SUCCESS": "✓",
"ERROR": "✗",
"WARNING": "⚠",
"INFO": ""
}.get(level, "")
print(f"{color}{symbol} {message}{reset}")
def check_flake8_available(self):
"""Check if flake8 is available."""
try:
result = subprocess.run(["flake8", "--version"],
capture_output=True, text=True, timeout=5)
if result.returncode == 0:
version = result.stdout.strip().split('\n')[0]
self.log(f"Using flake8: {version}")
return True
else:
return False
except (subprocess.TimeoutExpired, FileNotFoundError):
return False
def get_python_files(self, directory=None, staged_only=False):
"""Get list of Python files to check."""
if staged_only:
try:
result = subprocess.run([
"git", "diff", "--cached", "--name-only", "--diff-filter=ACM"
], capture_output=True, text=True, timeout=10)
if result.returncode == 0:
files = [f for f in result.stdout.strip().split('\n')
if f.endswith('.py') and Path(f).exists()]
return [Path(f) for f in files if f]
else:
self.warnings.append("Could not get staged files, checking all files")
staged_only = False
except subprocess.TimeoutExpired:
self.warnings.append("Git command timed out, checking all files")
staged_only = False
if not staged_only:
directory = Path(directory or "src/fenrirscreenreader")
if not directory.exists():
self.errors.append(f"Directory {directory} does not exist")
return []
python_files = list(directory.rglob("*.py"))
# Filter out cache and build directories
python_files = [f for f in python_files if not any(
part.startswith(('__pycache__', '.git', 'build', 'dist'))
for part in f.parts)]
return python_files
return []
def run_flake8(self, files, select=None, ignore=None):
"""Run flake8 on the given files."""
if not files:
return True, ""
cmd = ["flake8"]
if select:
cmd.extend(["--select", select])
if ignore:
cmd.extend(["--ignore", ignore])
# Add files
cmd.extend([str(f) for f in files])
try:
result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
return result.returncode == 0, result.stdout
except subprocess.TimeoutExpired:
self.errors.append("flake8 command timed out")
return False, ""
except Exception as e:
self.errors.append(f"Failed to run flake8: {e}")
return False, ""
def categorize_issues(self, flake8_output):
"""Categorize flake8 issues by severity and safety for auto-fixing."""
lines = flake8_output.strip().split('\n')
issues = {'critical': [], 'safe_fixable': [], 'manual': []}
for line in lines:
if not line.strip():
continue
# Parse flake8 output: filename:line:col: code message
parts = line.split(':', 3)
if len(parts) < 4:
continue
filename = parts[0]
line_num = parts[1]
col = parts[2]
code_msg = parts[3].strip()
code = code_msg.split()[0] if code_msg else ""
# Categorize by error code
if code.startswith('E9') or code.startswith('F'):
# Critical syntax/import errors
issues['critical'].append(line)
elif code in ['E111', 'E114', 'E117', 'E121', 'E122', 'E123', 'E124',
'E125', 'E126', 'E127', 'E128', 'E129', 'E131', 'E133',
'W291', 'W292', 'W293']:
# Safe indentation and whitespace fixes
# But skip timing-critical files
if not any(critical in filename.lower() for critical in
['evdevdriver', 'vcsadriver', 'screenmanager', 'inputmanager']):
issues['safe_fixable'].append(line)
else:
issues['manual'].append(line)
else:
# Everything else needs manual review
issues['manual'].append(line)
return issues
def apply_safe_fixes(self, files):
"""Apply safe automatic fixes using autopep8."""
try:
# Check if autopep8 is available
result = subprocess.run(["autopep8", "--version"],
capture_output=True, text=True, timeout=5)
if result.returncode != 0:
self.warnings.append("autopep8 not available for auto-fixing")
return False
except (subprocess.TimeoutExpired, FileNotFoundError):
self.warnings.append("autopep8 not available for auto-fixing")
return False
fixed_count = 0
for file_path in files:
# Skip timing-critical files
if any(critical in str(file_path).lower() for critical in
['evdevdriver', 'vcsadriver', 'screenmanager', 'inputmanager']):
self.log(f"Skipping timing-critical file: {file_path}", "WARNING")
continue
try:
# Apply safe fixes only
cmd = [
"autopep8",
"--in-place",
"--select", "E111,E114,E117,E121,E122,E123,E124,E125,E126,E127,E128,E129,E131,E133,W291,W292,W293",
str(file_path)
]
result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
if result.returncode == 0:
self.fixes_applied.append(f"Applied safe PEP8 fixes to {file_path}")
fixed_count += 1
else:
self.warnings.append(f"Could not auto-fix {file_path}: {result.stderr}")
except subprocess.TimeoutExpired:
self.warnings.append(f"Auto-fix timed out for {file_path}")
except Exception as e:
self.warnings.append(f"Error auto-fixing {file_path}: {e}")
return fixed_count > 0
def validate_files(self, files, fix_safe=False):
"""Validate Python files for PEP8 compliance."""
if not files:
self.log("No Python files to validate")
return True
self.log(f"Validating {len(files)} Python files")
# Run comprehensive flake8 check
success, output = self.run_flake8(files)
if not output.strip():
self.log("All files pass PEP8 validation", "SUCCESS")
return True
# Categorize issues
issues = self.categorize_issues(output)
# Report critical issues
if issues['critical']:
self.log(f"Critical issues found ({len(issues['critical'])}):", "ERROR")
for issue in issues['critical'][:10]: # Limit output
self.log(f" {issue}", "ERROR")
if len(issues['critical']) > 10:
self.log(f" ... and {len(issues['critical']) - 10} more", "ERROR")
# Handle safe fixable issues
if issues['safe_fixable']:
if fix_safe:
self.log(f"Auto-fixing {len(issues['safe_fixable'])} safe issues...", "INFO")
# Get unique files from safe_fixable issues
fix_files = set()
for issue in issues['safe_fixable']:
filename = issue.split(':')[0]
fix_files.add(Path(filename))
if self.apply_safe_fixes(fix_files):
self.log("Safe auto-fixes applied", "SUCCESS")
# Re-run flake8 to see remaining issues
success, output = self.run_flake8(files)
if output.strip():
remaining_issues = self.categorize_issues(output)
issues = remaining_issues
else:
issues = {'critical': [], 'safe_fixable': [], 'manual': []}
else:
self.log(f"Safe fixable issues found ({len(issues['safe_fixable'])}):", "WARNING")
for issue in issues['safe_fixable'][:5]:
self.log(f" {issue}", "WARNING")
if len(issues['safe_fixable']) > 5:
self.log(f" ... and {len(issues['safe_fixable']) - 5} more", "WARNING")
self.log("Run with --fix-safe to auto-fix these", "INFO")
# Report manual issues
if issues['manual']:
self.log(f"Manual review needed ({len(issues['manual'])}):", "WARNING")
for issue in issues['manual'][:5]:
self.log(f" {issue}", "WARNING")
if len(issues['manual']) > 5:
self.log(f" ... and {len(issues['manual']) - 5} more", "WARNING")
# Return success if only manual issues remain (non-critical)
return len(issues['critical']) == 0
def generate_report(self):
"""Generate final validation report."""
total_issues = len(self.errors) + len(self.warnings)
if self.fixes_applied:
self.log(f"\nAUTO-FIXES APPLIED ({len(self.fixes_applied)}):", "HEADER")
for fix in self.fixes_applied:
self.log(fix, "SUCCESS")
if self.errors:
self.log(f"\nERRORS ({len(self.errors)}):", "HEADER")
for error in self.errors:
self.log(error, "ERROR")
if self.warnings:
self.log(f"\nWARNINGS ({len(self.warnings)}):", "HEADER")
for warning in self.warnings:
self.log(warning, "WARNING")
if len(self.errors) == 0:
self.log("\n✅ PEP8 VALIDATION PASSED", "SUCCESS")
if self.warnings:
self.log("Non-critical style issues found - consider manual review", "INFO")
return True
else:
self.log("\n❌ PEP8 VALIDATION FAILED", "ERROR")
self.log("Critical issues must be fixed", "ERROR")
return False
def main():
parser = argparse.ArgumentParser(description='Validate and fix PEP8 compliance in Fenrir')
parser.add_argument('--fix-safe', action='store_true',
help='Apply safe automatic fixes (avoids timing-critical files)')
parser.add_argument('--check-only', action='store_true',
help='Exit with non-zero code if issues found')
parser.add_argument('--staged', action='store_true',
help='Check only staged files')
parser.add_argument('--quiet', action='store_true',
help='Reduce output verbosity')
parser.add_argument('--directory', default='src/fenrirscreenreader',
help='Directory to scan (default: src/fenrirscreenreader)')
args = parser.parse_args()
validator = PEP8Validator(verbose=not args.quiet)
validator.log("FENRIR PEP8 VALIDATION", "HEADER")
# Check if flake8 is available
if not validator.check_flake8_available():
validator.log("flake8 is required but not available", "ERROR")
validator.log("Install with: pip install flake8", "INFO")
if args.fix_safe:
validator.log("For auto-fixing, also install: pip install autopep8", "INFO")
sys.exit(1)
# Get files to validate
files = validator.get_python_files(
directory=args.directory if not args.staged else None,
staged_only=args.staged
)
if not files:
validator.log("No Python files found to validate")
sys.exit(0)
# Validate files
success = validator.validate_files(files, fix_safe=args.fix_safe)
# Generate report
validation_passed = validator.generate_report()
if args.check_only and not validation_passed:
sys.exit(1)
elif validation_passed:
sys.exit(0)
else:
sys.exit(1)
if __name__ == '__main__':
main()
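`categorize_issues` above keys everything off the flake8 code at the start of each message. A condensed, self-contained sketch of just that bucketing rule (omitting the timing-critical-file exclusions):

```python
# Simplified sketch of the code-based bucketing in categorize_issues above.
SAFE_CODES = {
    "E111", "E114", "E117", "E121", "E122", "E123", "E124", "E125",
    "E126", "E127", "E128", "E129", "E131", "E133", "W291", "W292", "W293",
}

def categorize(line: str) -> str:
    """Classify one 'file:line:col: CODE message' flake8 line."""
    parts = line.split(":", 3)
    if len(parts) < 4:
        return "manual"
    msg = parts[3].strip()
    code = msg.split()[0] if msg else ""
    if code.startswith(("E9", "F")):
        return "critical"      # syntax errors and import problems
    if code in SAFE_CODES:
        return "safe_fixable"  # whitespace/indentation autopep8 can fix
    return "manual"

print(categorize("a.py:3:1: F401 'os' imported but unused"))  # critical
print(categorize("a.py:9:80: W291 trailing whitespace"))      # safe_fixable
print(categorize("a.py:2:1: E501 line too long"))             # manual
```

Splitting with `split(":", 3)` keeps colons inside the message intact, so the parse is robust for Windows-free paths and ordinary flake8 output.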

tools/validate_release.py Executable file

@@ -0,0 +1,459 @@
#!/usr/bin/env python3
"""
Fenrir Release Validation Tool
Comprehensive validation suite for Fenrir releases, including syntax validation,
dependency checking, import testing, and basic functionality validation.
Usage:
python3 tools/validate_release.py # Full validation
python3 tools/validate_release.py --quick # Skip slow tests
python3 tools/validate_release.py --fix # Auto-fix issues where possible
"""
import ast
import os
import sys
import argparse
import subprocess
import tempfile
import importlib.util
from pathlib import Path
import time
class ReleaseValidator:
def __init__(self, verbose=True):
self.verbose = verbose
self.errors = []
self.warnings = []
self.fixes_applied = []
self.tests_run = 0
self.tests_passed = 0
def log(self, message, level="INFO"):
"""Log a message with appropriate formatting."""
if not self.verbose and level == "INFO":
return
colors = {
"INFO": "\033[0;36m", # Cyan
"SUCCESS": "\033[0;32m", # Green
"WARNING": "\033[1;33m", # Yellow
"ERROR": "\033[0;31m", # Red
"HEADER": "\033[1;34m", # Bold Blue
}
reset = "\033[0m"
color = colors.get(level, "")
if level == "HEADER":
print(f"\n{color}{'='*60}")
print(f"{message}")
print(f"{'='*60}{reset}")
else:
symbol = {
"SUCCESS": "✓",
"ERROR": "✗",
"WARNING": "⚠",
"INFO": ""
}.get(level, "")
print(f"{color}{symbol} {message}{reset}")
def run_test(self, name, test_func, *args, **kwargs):
"""Run a test and track results."""
self.tests_run += 1
try:
result = test_func(*args, **kwargs)
if result:
self.tests_passed += 1
self.log(f"{name}: PASSED", "SUCCESS")
else:
self.log(f"{name}: FAILED", "ERROR")
return result
except Exception as e:
self.log(f"{name}: ERROR - {e}", "ERROR")
self.errors.append(f"{name}: {e}")
return False
def validate_python_syntax(self, directory, fix_mode=False):
"""Validate Python syntax across all files."""
python_files = list(Path(directory).rglob("*.py"))
# Filter out cache and build directories
python_files = [f for f in python_files if not any(part.startswith(('__pycache__', '.git', 'build', 'dist')) for part in f.parts)]
syntax_errors = []
fixed_files = []
for filepath in python_files:
try:
with open(filepath, 'r', encoding='utf-8') as f:
content = f.read()
ast.parse(content, filename=str(filepath))
except SyntaxError as e:
syntax_errors.append((filepath, e))
if fix_mode:
# Try to fix common f-string issues
fixed_content = self.fix_fstring_issues(content)
if fixed_content != content:
try:
ast.parse(fixed_content, filename=str(filepath))
# Fix worked, write it back
with open(filepath, 'w', encoding='utf-8') as f:
f.write(fixed_content)
fixed_files.append(filepath)
syntax_errors.pop() # Remove from errors
except SyntaxError:
pass # Fix didn't work
except Exception as e:
syntax_errors.append((filepath, e))
if fixed_files:
self.fixes_applied.extend([f"Fixed f-string syntax in {f}" for f in fixed_files])
if syntax_errors:
for filepath, error in syntax_errors[:5]: # Show only first 5
if isinstance(error, SyntaxError):
self.errors.append(f"Syntax error in {filepath}:{error.lineno}: {error.msg}")
else:
self.errors.append(f"Error in {filepath}: {error}")
if len(syntax_errors) > 5:
self.errors.append(f"... and {len(syntax_errors) - 5} more syntax errors")
return len(syntax_errors) == 0
def fix_fstring_issues(self, content):
"""Fix common f-string syntax issues."""
lines = content.split('\n')
for i, line in enumerate(lines):
# Look for f-strings that span multiple lines incorrectly
if ('f"' in line and line.count('"') % 2 == 1 and
i + 1 < len(lines) and lines[i + 1].strip()):
next_line = lines[i + 1]
# Common patterns to fix
if (next_line.strip().endswith('}"') or
'str(e)}' in next_line or
next_line.strip().startswith(('fenrirVersion.', 'self.'))):
# Merge the lines properly
fixed_line = line.rstrip() + next_line.strip()
lines[i] = fixed_line
lines[i + 1] = ''
return '\n'.join(line for line in lines if line.strip() or not line)
def validate_dependencies(self):
"""Run the existing dependency checker."""
try:
result = subprocess.run([
sys.executable, "check-dependencies.py"
], capture_output=True, text=True, timeout=30)
if result.returncode == 0:
return True
else:
self.errors.append(f"Dependency check failed: {result.stderr}")
return False
except subprocess.TimeoutExpired:
self.errors.append("Dependency check timed out")
return False
except Exception as e:
self.errors.append(f"Could not run dependency check: {e}")
return False
def validate_core_imports(self):
"""Test importing core Fenrir modules."""
# Change to src directory for imports
original_path = sys.path.copy()
src_dir = Path.cwd() / "src"
if src_dir.exists():
sys.path.insert(0, str(src_dir))
core_modules = [
"fenrirscreenreader.core.fenrirManager",
"fenrirscreenreader.core.commandManager",
"fenrirscreenreader.core.eventManager",
"fenrirscreenreader.core.screenManager",
"fenrirscreenreader.core.inputManager",
"fenrirscreenreader.core.outputManager",
]
import_failures = []
for module_name in core_modules:
try:
importlib.import_module(module_name)
except ImportError as e:
import_failures.append(f"{module_name}: {e}")
except Exception as e:
import_failures.append(f"{module_name}: Unexpected error: {e}")
# Restore path
sys.path = original_path
if import_failures:
self.errors.extend(import_failures)
return False
return True
def validate_command_structure(self):
"""Validate command file structure and naming."""
commands_dir = Path("src/fenrirscreenreader/commands")
if not commands_dir.exists():
self.errors.append("Commands directory not found")
return False
issues = []
# Check command directories
expected_dirs = ["commands", "onHeartBeat", "onKeyInput", "onCursorChange",
"onScreenUpdate", "onScreenChanged", "vmenu-profiles"]
for expected_dir in expected_dirs:
if not (commands_dir / expected_dir).exists():
issues.append(f"Missing expected directory: {expected_dir}")
# Check for critical issues only (skip template files and base classes)
for py_file in commands_dir.rglob("*.py"):
if (py_file.name.startswith("__") or
"template" in py_file.name.lower() or
"base" in py_file.name.lower()):
continue
try:
with open(py_file, 'r', encoding='utf-8') as f:
content = f.read()
# Critical structure checks only
if "class command" not in content:
issues.append(f"{py_file}: Missing 'class command' definition")
# Skip method checks for files that inherit from base classes
if ("super().__init__" in content or
"importlib.util" in content or
"_base.py" in content):
continue # These inherit methods from base classes
# Only check direct implementations
# Special case: Application profile commands use load/unload instead of run
if "onSwitchApplicationProfile" in str(py_file):
if "def load" not in content and "def unload" not in content:
issues.append(f"{py_file}: Missing 'load' or 'unload' method")
else:
critical_methods = ["run"] # Focus on the most critical method
for method in critical_methods:
if (f"def {method}" not in content and
"super()" not in content): # Skip if uses inheritance
issues.append(f"{py_file}: Missing '{method}' method")
except Exception as e:
issues.append(f"{py_file}: Could not validate structure: {e}")
# Only report critical issues, not template/base class warnings
critical_issues = [issue for issue in issues if not any(skip in issue.lower()
for skip in ["template", "base", "missing 'initialize'", "missing 'shutdown'"])]
if critical_issues:
self.warnings.extend(critical_issues[:5]) # Limit warnings
if len(critical_issues) > 5:
self.warnings.append(f"... and {len(critical_issues) - 5} more critical command structure issues")
# Return success if no critical issues (warnings are acceptable)
return len(critical_issues) == 0
def validate_configuration_files(self):
"""Validate configuration file structure."""
config_dir = Path("config")
if not config_dir.exists():
self.errors.append("Config directory not found")
return False
required_configs = [
"settings/settings.conf",
"keyboard/desktop.conf",
"punctuation/default.conf"
]
missing_configs = []
for config_file in required_configs:
if not (config_dir / config_file).exists():
missing_configs.append(config_file)
if missing_configs:
self.errors.extend([f"Missing config file: {f}" for f in missing_configs])
return False
return True
def validate_installation_scripts(self):
"""Validate installation and setup scripts."""
required_scripts = ["setup.py", "install.sh", "uninstall.sh"]
missing_scripts = []
for script in required_scripts:
if not Path(script).exists():
missing_scripts.append(script)
if missing_scripts:
self.warnings.extend([f"Missing installation script: {s}" for s in missing_scripts])
# Check setup.py syntax if it exists
if Path("setup.py").exists():
try:
with open("setup.py", 'r') as f:
content = f.read()
ast.parse(content, filename="setup.py")
except SyntaxError as e:
self.errors.append(f"setup.py syntax error: {e}")
return False
return len(missing_scripts) == 0
def validate_repository_cleanliness(self):
"""Check for cache files and other artifacts that shouldn't be in git."""
# Check for Python cache files in git tracking
try:
result = subprocess.run([
"git", "ls-files", "--cached"
], capture_output=True, text=True, timeout=10)
if result.returncode == 0:
tracked_files = result.stdout.strip().split('\n')
cache_files = [f for f in tracked_files if '__pycache__' in f or f.endswith('.pyc')]
if cache_files:
self.errors.extend([f"Python cache file tracked in git: {f}" for f in cache_files[:5]])
if len(cache_files) > 5:
self.errors.append(f"... and {len(cache_files) - 5} more cache files in git")
return False
else:
return True
else:
self.warnings.append("Could not check git tracked files")
return True
except subprocess.TimeoutExpired:
self.warnings.append("Git check timed out")
return True
except Exception as e:
self.warnings.append(f"Could not check repository cleanliness: {e}")
return True
def generate_report(self):
"""Generate final validation report."""
self.log("FENRIR RELEASE VALIDATION REPORT", "HEADER")
# Test Summary
success_rate = (self.tests_passed / self.tests_run * 100) if self.tests_run > 0 else 0
self.log(f"Tests run: {self.tests_run}")
self.log(f"Tests passed: {self.tests_passed}")
self.log(f"Success rate: {success_rate:.1f}%")
# Fixes Applied
if self.fixes_applied:
self.log("\nAUTO-FIXES APPLIED:", "HEADER")
for fix in self.fixes_applied:
self.log(fix, "SUCCESS")
# Errors
if self.errors:
self.log(f"\nERRORS ({len(self.errors)}):", "HEADER")
for error in self.errors:
self.log(error, "ERROR")
# Warnings
if self.warnings:
self.log(f"\nWARNINGS ({len(self.warnings)}):", "HEADER")
for warning in self.warnings:
self.log(warning, "WARNING")
# Final Status
if not self.errors and success_rate >= 80:
self.log("\n🎉 RELEASE VALIDATION PASSED", "SUCCESS")
self.log("The codebase appears ready for release", "SUCCESS")
return True
elif not self.errors:
self.log("\n⚠️ RELEASE VALIDATION PASSED WITH WARNINGS", "WARNING")
self.log("Release is possible but issues should be addressed", "WARNING")
return True
else:
self.log("\n❌ RELEASE VALIDATION FAILED", "ERROR")
self.log("Critical issues must be fixed before release", "ERROR")
return False
def main():
parser = argparse.ArgumentParser(description='Comprehensive Fenrir release validation')
parser.add_argument('--quick', action='store_true',
help='Skip slow tests (dependency checking)')
parser.add_argument('--fix', action='store_true',
help='Attempt to fix issues automatically where possible')
parser.add_argument('--quiet', action='store_true',
help='Reduce output verbosity')
args = parser.parse_args()
# Ensure we're in the project root
if not Path("src/fenrirscreenreader").exists():
print("Error: Must be run from Fenrir project root directory")
sys.exit(1)
validator = ReleaseValidator(verbose=not args.quiet)
validator.log("FENRIR RELEASE VALIDATION STARTING", "HEADER")
start_time = time.time()
# Run validation tests
validator.run_test(
"Python syntax validation",
validator.validate_python_syntax,
"src/fenrirscreenreader",
args.fix
)
if not args.quick:
validator.run_test(
"Dependency validation",
validator.validate_dependencies
)
validator.run_test(
"Core module imports",
validator.validate_core_imports
)
validator.run_test(
"Command structure validation",
validator.validate_command_structure
)
validator.run_test(
"Configuration files validation",
validator.validate_configuration_files
)
validator.run_test(
"Installation scripts validation",
validator.validate_installation_scripts
)
validator.run_test(
"Repository cleanliness validation",
validator.validate_repository_cleanliness
)
# Generate final report
elapsed_time = time.time() - start_time
validator.log(f"\nValidation completed in {elapsed_time:.1f} seconds")
success = validator.generate_report()
sys.exit(0 if success else 1)
if __name__ == '__main__':
main()
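Both validators rely on the same core trick: `ast.parse` compiles source without importing it, so nothing executes and no `__pycache__` is written. A minimal sketch:

```python
import ast

def has_valid_syntax(source: str, filename: str = "<string>") -> bool:
    """True if source parses; ast.parse never executes the code."""
    try:
        ast.parse(source, filename=filename)
        return True
    except SyntaxError:
        return False

print(has_valid_syntax("x = 1"))           # True
print(has_valid_syntax('msg = f"oops {'))  # False (unterminated f-string)
```

Because parsing is side-effect free, this check is safe to run on privileged code paths without sudo, unlike an actual import.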

tools/validate_syntax.py Executable file

@@ -0,0 +1,236 @@
#!/usr/bin/env python3
"""
Fenrir Syntax Validation Tool
Validates Python syntax across the entire Fenrir codebase without writing
cache files. Designed to catch syntax errors before packaging or releases.
Usage:
python3 tools/validate_syntax.py # Validate all Python files
python3 tools/validate_syntax.py --fix # Fix common issues automatically
python3 tools/validate_syntax.py --check-only # Exit with non-zero if errors found
"""
import ast
import os
import sys
import argparse
import tempfile
from pathlib import Path
class SyntaxValidator:
def __init__(self):
self.errors = []
self.warnings = []
self.fixed = []
def validate_file(self, filepath):
"""Validate syntax of a single Python file."""
try:
with open(filepath, 'r', encoding='utf-8') as f:
content = f.read()
# Parse with AST (catches syntax errors)
ast.parse(content, filename=str(filepath))
return True, content
except SyntaxError as e:
error_msg = f"{filepath}:{e.lineno}: {e.msg}"
self.errors.append((filepath, e, content))
return False, content
except UnicodeDecodeError as e:
error_msg = f"{filepath}: Unicode decode error: {e}"
self.errors.append((filepath, e, None))
return False, None
except Exception as e:
error_msg = f"{filepath}: Unexpected error: {e}"
self.errors.append((filepath, e, None))
return False, None
def fix_common_issues(self, filepath, content):
"""Attempt to fix common syntax issues automatically."""
if not content:
return False, content
original_content = content
fixed_issues = []
# Fix unterminated f-strings (the main issue from the email)
lines = content.split('\n')
modified = False
for i, line in enumerate(lines):
# Look for f-strings that span multiple lines incorrectly
if 'f"' in line and line.count('"') % 2 == 1:
# Check if this looks like a broken multi-line f-string
indent = len(line) - len(line.lstrip())
# Look ahead for continuation
j = i + 1
while j < len(lines) and lines[j].strip():
next_line = lines[j]
next_indent = len(next_line) - len(next_line.lstrip())
# If next line is indented more and has closing brace/quote
if (next_indent > indent and
('"' in next_line or '}' in next_line)):
# Try to fix by joining lines properly
combined_line = line.rstrip()
continuation = next_line.strip()
if continuation.startswith(('"', '}', 'str(e)', 'self.', 'fenrirVersion.')):
# Fix common patterns
if 'str(e)}' in continuation:
fixed_line = line.rstrip() + '{' + continuation.replace('"', '') + '}'
elif continuation.startswith('"'):
fixed_line = line + continuation
else:
fixed_line = line.rstrip() + continuation
lines[i] = fixed_line
lines[j] = '' # Remove the continuation line
modified = True
fixed_issues.append(f"Line {i+1}: Fixed multi-line f-string")
break
j += 1
if modified:
content = '\n'.join(lines)
# Clean up empty lines that were created
content = '\n'.join(line for line in content.split('\n') if line.strip() or not line)
# Verify the fix worked
try:
ast.parse(content, filename=str(filepath))
self.fixed.append((filepath, fixed_issues))
return True, content
except SyntaxError:
# Fix didn't work, return original
return False, original_content
return False, content
    def scan_directory(self, directory, fix_mode=False):
        """Scan a directory for Python files and validate them."""
        python_files = []
        # Find all Python files
        for root, dirs, files in os.walk(directory):
            # Skip cache and build directories
            dirs[:] = [d for d in dirs if not d.startswith(('__pycache__', '.git', 'build', 'dist'))]
            for file in files:
                if file.endswith('.py'):
                    python_files.append(Path(root) / file)
        print(f"Validating {len(python_files)} Python files...")
        valid_count = 0
        fixed_count = 0
        for filepath in sorted(python_files):
            is_valid, content = self.validate_file(filepath)
            if is_valid:
                valid_count += 1
                print(f"✓ {filepath}")
            else:
                print(f"✗ {filepath}")
                if fix_mode and content:
                    # Try to fix the file
                    was_fixed, fixed_content = self.fix_common_issues(filepath, content)
                    if was_fixed:
                        # Write the fixed content back
                        with open(filepath, 'w', encoding='utf-8') as f:
                            f.write(fixed_content)
                        print("  → Fixed automatically")
                        fixed_count += 1
                        # Re-validate
                        is_valid_now, _ = self.validate_file(filepath)
                        if is_valid_now:
                            valid_count += 1
        return valid_count, len(python_files), fixed_count
    def print_summary(self, valid_count, total_count, fixed_count=0):
        """Print validation summary."""
        print(f"\n{'=' * 60}")
        print("SYNTAX VALIDATION SUMMARY")
        print(f"{'=' * 60}")
        print(f"Valid files: {valid_count}/{total_count}")
        print(f"Invalid files: {total_count - valid_count}")
        if fixed_count > 0:
            print(f"Auto-fixed: {fixed_count}")
        if self.errors:
            print(f"\nERRORS ({len(self.errors)}):")
            for filepath, error, _ in self.errors:
                if isinstance(error, SyntaxError):
                    print(f"  {filepath}:{error.lineno}: {error.msg}")
                else:
                    print(f"  {filepath}: {error}")
        if self.fixed:
            print(f"\nAUTO-FIXES APPLIED ({len(self.fixed)}):")
            for filepath, fixes in self.fixed:
                print(f"  {filepath}:")
                for fix in fixes:
                    print(f"    - {fix}")
        success_rate = (valid_count / total_count) * 100 if total_count > 0 else 0
        print(f"\nSuccess rate: {success_rate:.1f}%")
        return len(self.errors) == 0
def main():
    parser = argparse.ArgumentParser(description='Validate Python syntax in the Fenrir codebase')
    parser.add_argument('--fix', action='store_true',
                        help='Attempt to fix common syntax issues automatically')
    parser.add_argument('--check-only', action='store_true',
                        help='Exit with a non-zero code if syntax errors are found')
    parser.add_argument('--directory', default='src/fenrirscreenreader',
                        help='Directory to scan (default: src/fenrirscreenreader)')
    args = parser.parse_args()
    # Find the project root relative to this script
    script_dir = Path(__file__).parent
    project_root = script_dir.parent
    target_dir = project_root / args.directory
    if not target_dir.exists():
        print(f"Error: Directory {target_dir} does not exist")
        sys.exit(1)
    print("Fenrir Syntax Validator")
    print(f"Target directory: {target_dir}")
    print(f"Fix mode: {'ON' if args.fix else 'OFF'}")
    print()
    validator = SyntaxValidator()
    valid_count, total_count, fixed_count = validator.scan_directory(target_dir, fix_mode=args.fix)
    all_valid = validator.print_summary(valid_count, total_count, fixed_count)
    if args.check_only and not all_valid:
        print(f"\nValidation failed: {total_count - valid_count} files have syntax errors")
        sys.exit(1)
    elif not all_valid:
        print(f"\nWarning: {total_count - valid_count} files have syntax errors")
        if not args.fix:
            print("Run with --fix to attempt automatic fixes")
        sys.exit(1)
    else:
        print(f"\n✓ All {total_count} files have valid syntax")
        sys.exit(0)


if __name__ == '__main__':
    main()
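
# Example invocations (illustrative; run from the repository root):
#   python3 tools/validate_syntax.py                # report syntax status for src/fenrirscreenreader
#   python3 tools/validate_syntax.py --fix          # attempt automatic f-string repairs
#   python3 tools/validate_syntax.py --check-only   # exit non-zero on any error (for CI/hooks)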