Research
BlinkTrace studies vision-based deepfake detection under generation shift, with a long-term goal of supporting reliable real-time detection workflows.
Research Thesis
BlinkTrace treats deepfake detection as a representation problem under generation shift, not a closed-set classification problem. The core question is which forensic signals continue to work when manipulation methods change, compression changes, and shortcut cues become unreliable. This work is intentionally vision-only: the current pipeline does not train on audio and does not perform audio detection.
Why Real-Time Detection Is Hard
A real-time system cannot depend on easy shortcuts or familiar generator fingerprints. If a detector only performs well on known distributions, it becomes fragile the moment the generation family, encoding path, or source conditions change. That is why BlinkTrace emphasizes held-out generation methods, source-matched validation, and explicit attention to shortcut risk.
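The held-out evaluation described above comes down to a family-disjoint split: no generation family seen in training ever appears in the test set. A minimal sketch, assuming samples tagged with a `family` field (the layout and function name here are illustrative, not BlinkTrace's actual code):

```python
# Illustrative family-disjoint split: held-out generation families
# never contribute training samples.

def split_by_family(samples, held_out_families):
    """Partition samples so test families are disjoint from training families."""
    held_out = set(held_out_families)
    train = [s for s in samples if s["family"] not in held_out]
    test = [s for s in samples if s["family"] in held_out]
    return train, test

samples = [
    {"id": 1, "family": "FaceFusion"},
    {"id": 2, "family": "LivePortrait"},
    {"id": 3, "family": "Diff2Lip"},
    {"id": 4, "family": "FaceFusion"},
]
train, test = split_by_family(samples, ["Diff2Lip"])
```

The point of the protocol is that a score on `test` measures transfer to an unseen manipulation method, not recognition of a familiar generator fingerprint.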
Methodology
Appearance-Based Detection
A CNN branch analyzes normalized face crops to learn transferable artifact signals, such as blending boundaries, shading inconsistencies, texture statistics, and resampling traces, evaluated under held-out generation families including FaceFusion, LivePortrait, HelloMeme, Diff2Lip, LatentSync, and Memo.
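One common way to normalize a face crop before a CNN branch is per-channel standardization; this is a generic preprocessing sketch, not BlinkTrace's documented scheme:

```python
import numpy as np

# Standardize a face crop to zero mean, unit variance per color channel,
# a common preprocessing step before CNN-based artifact detection.

def normalize_crop(crop):
    """crop: HxWx3 array of pixel values."""
    crop = crop.astype(np.float64)
    mean = crop.mean(axis=(0, 1), keepdims=True)
    std = crop.std(axis=(0, 1), keepdims=True) + 1e-8  # avoid divide-by-zero
    return (crop - mean) / std

rng = np.random.default_rng(0)
crop = rng.integers(0, 256, size=(64, 64, 3))
norm = normalize_crop(crop)
```

Standardizing per crop removes global brightness and contrast offsets, so the network is pushed toward local artifact structure rather than camera or lighting statistics.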
Temporal Facial Dynamics
A GRU branch models normalized landmark sequences to test whether facial motion geometry provides useful signal beyond still-frame appearance cues when those same manipulation families are held out for validation.
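Landmark normalization is what lets a sequence model see motion geometry rather than absolute position or face size. A hypothetical scheme, assuming (frames, landmarks, 2) coordinate arrays: center each frame on its landmark centroid, then divide by the mean distance to that centroid. The function name and exact scaling are assumptions for illustration:

```python
import numpy as np

# Hypothetical landmark-sequence normalization: remove translation and
# scale per frame so only relative facial geometry remains.

def normalize_landmarks(seq):
    """seq: (T, K, 2) array of K 2-D landmarks over T frames."""
    centered = seq - seq.mean(axis=1, keepdims=True)          # remove translation
    scale = np.linalg.norm(centered, axis=2).mean(axis=1)     # per-frame scale
    return centered / scale[:, None, None]                    # remove scale

rng = np.random.default_rng(1)
seq = rng.normal(size=(8, 68, 2))
base = normalize_landmarks(seq)
shifted = normalize_landmarks(seq * 3.0 + 10.0)  # same geometry, moved and zoomed
```

Because translation and uniform scale are divided out, `shifted` and `base` are identical, which is the invariance a motion-geometry branch needs.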
Fusion And Evaluation
Shared held-out validation enables comparison between branches and tests whether fusion adds new ranking power or mainly improves operating-point behavior.
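The distinction between ranking power (AUC) and operating-point behavior (accuracy at a fixed threshold) can be made concrete with a toy late-fusion sketch. All scores, weights, and the 0.5 threshold below are fabricated illustrations, not BlinkTrace results:

```python
# Toy late fusion: average per-branch probabilities, then compare
# ranking (AUC) against accuracy at a fixed operating point.

def auc(scores, labels):
    """Mann-Whitney AUC: P(fake score > real score), ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def fuse(appearance, temporal, w=0.5):
    return [w * a + (1 - w) * t for a, t in zip(appearance, temporal)]

labels     = [1, 1, 1, 0, 0, 0]                 # 1 = fake, 0 = real
appearance = [0.9, 0.6, 0.45, 0.5, 0.2, 0.1]    # stronger ranker
temporal   = [0.7, 0.8, 0.65, 0.4, 0.75, 0.3]   # weaker ranker
fused = fuse(appearance, temporal)
acc = sum((f > 0.5) == y for f, y in zip(fused, labels)) / len(labels)
```

In this toy data each branch misorders or misthresholds some clips, while the average pushes every fake above 0.5 and every real below it: fusion can repair hard decisions at a chosen operating point even when one branch dominates the ranking comparison.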
Curated Findings
0.82 AUC
Appearance branch ceiling in a held-out run
A representative CNN run climbs from near-chance at initialization to roughly 0.82 AUC, suggesting genuine learned signal rather than leakage that would already be visible at initialization.
0.62-0.67 AUC
Temporal branch on shared held-out tests
Landmark-only modeling carries some signal, but currently transfers less reliably than appearance-based detection.
0.95+ Accuracy
Fusion can improve hard decisions
Late fusion improves operating-point accuracy on shared held-out splits, even when ranking metrics remain dominated by the appearance branch.
1.0 AUC cases
A warning sign, not a victory lap
Some generation families may still be too easy: near-perfect scores on certain held-out runs, seen across FaceFusion, LivePortrait, HelloMeme, Diff2Lip, LatentSync, and Memo, can mean the split is easy rather than the detector strong, which is exactly why BlinkTrace treats evaluation rigor as part of the research problem.
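One cheap sanity probe for a suspiciously perfect score is to check whether a trivial per-clip scalar, such as encoded file size, already separates real from fake on that split. This is an assumed diagnostic, not a documented BlinkTrace step, and the sizes below are fabricated toy values:

```python
# Shortcut probe: if one trivial scalar perfectly splits the classes,
# a 1.0-AUC detector may be reading the codec, not the face.

def trivially_separable(feature, labels):
    """True if a single threshold on `feature` perfectly splits the classes."""
    fake = [f for f, y in zip(feature, labels) if y == 1]
    real = [f for f, y in zip(feature, labels) if y == 0]
    return max(real) < min(fake) or max(fake) < min(real)

file_size_kb = [480, 470, 465, 900, 910, 905]  # fabricated toy values
labels       = [1, 1, 1, 0, 0, 0]
```

When such a probe fires, the evaluation split needs scrutiny before the detector gets credit for the score.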
Toward Real-Time Deployment
Real-time detection requires more than a promising model. BlinkTrace is also building the surrounding research operations: delta-only processing, training triggers, validation reports, storage checks, and orchestration logic that can support repeatable inference-oriented experimentation over time.
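The delta-only processing idea reduces to comparing content hashes against a stored manifest and re-running a stage only for entries that changed. A minimal sketch under an assumed manifest format; the names and layout are hypothetical, not BlinkTrace's actual pipeline code:

```python
import hashlib

# Delta-only processing sketch: hash each artifact and flag only the
# entries whose content hash differs from the recorded manifest.

def changed_entries(old_manifest, files):
    """Return names whose content hash differs from the recorded one."""
    delta = []
    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        if old_manifest.get(name) != digest:
            delta.append(name)
    return delta

files = {"clip_a.mp4": b"frame-bytes-1", "clip_b.mp4": b"frame-bytes-2"}
manifest = {name: hashlib.sha256(d).hexdigest() for name, d in files.items()}
files["clip_b.mp4"] = b"frame-bytes-2-reencoded"  # simulate one changed clip
```

Only the re-encoded clip is flagged for reprocessing, which is what keeps repeated experiment runs cheap as the corpus grows.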
Limitations And Next Steps
Temporal modeling needs headroom
Current landmark-sequence models are informative, but they do not yet match the transfer strength of appearance-based detectors.
Shortcut risk remains real
Extremely strong results on some held-out families may still reflect generator-specific shortcuts, codec bias, or other easy signals.
Deployment-grade evaluation is the next bar
The next phase is tightening evaluation and presentation so the work increasingly reflects real-time operational constraints, not just offline research wins.
Research in Action
A visual overview of the BlinkTrace development process, spanning model training, security thinking, real-time testing, and data operations.