“Remote viewing” (RV) refers to double-blind, free-response descriptions of hidden, randomly selected targets, typically locations or images. The most substantial evidence base comes from U.S. government–sponsored programs at SRI/SAIC (a.k.a. Star Gate), independent statistical reviews commissioned by the CIA, and large academic programs such as Princeton’s PEAR “remote perception.” This review highlights the most methodologically influential and frequently cited supportive studies and evaluations. Effects are small and controversial, but multiple datasets report above-chance outcomes under blinded conditions.
Introduction
Core RV protocols isolate a “viewer” from any ordinary sensory access to the target, use hardware RNGs or secure computer randomization for target selection, and apply blind judging against decoy targets. SRI/SAIC introduced layered security, time-locking, and audit trails; later work (e.g., PEAR) developed algorithmic judging to replace human raters.
Key Lines of Evidence
1) CIA-Commissioned Evaluations of SRI/SAIC (1970s–1990s)
Jessica Utts’s 1995 statistical assessment, commissioned during the CIA’s declassification of Star Gate, concluded that the RV database exhibited results “far beyond what is expected by chance,” arguing that psychic functioning had been demonstrated by conventional statistical standards, while noting open questions about operational utility [1].
The American Institutes for Research (AIR) review (1995), the CIA’s external evaluation integrating Utts’s report and Ray Hyman’s critique, acknowledged statistically significant outcomes in laboratory series but remained skeptical about intelligence value and called for tighter prospective testing [2,3].
Program technical documents (SRI/SAIC) describe double-blind protocols, effect moderators (e.g., feedback timing), and series with independently significant viewers; several of these reports are now public in the CIA Reading Room [4,5,6].
2) Foundational Peer-Reviewed SRI Studies
Targ & Puthoff (1974, Nature) reported “information transmission under conditions of sensory shielding,” presenting early controlled RV-style experiments with statistically positive outcomes and laying out the methodological template that later programs refined [7,8].
3) Princeton Engineering Anomalies Research (PEAR): Remote Perception
PEAR conducted hundreds of free-response “remote perception” trials over ~two decades, emphasizing algorithmic/analytical judging to reduce rater bias. Replication-style papers (e.g., Nelson, 1996) summarize databases with small but significant positive deviations and discuss correlations with protocol features [9,10]. Earlier work also explored precognitive versions of RV and formal replications of SRI paradigms [11].
Methodological Advances Across Programs
- Randomization & Blinding: RNG-based target selection, sealed archives, and independent blind judging reduce cueing and experimenter effects.
- Audit Trails: Time-stamped session records (Star Gate) and public archiving (CIA Reading Room) enable re-analysis.
- Analytical Judging: PEAR’s algorithmic scoring attempts to remove human rater subjectivity in rank-ordering matches.
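As a concrete illustration of rank-order analysis — the kind of statistic used to evaluate free-response judging — the sum of target ranks across a series can be compared with its chance distribution. This sketch is mine, not a reproduction of PEAR's actual scoring algorithm:

```python
import math

def sum_of_ranks_z(ranks, k):
    """Sum-of-ranks test for n rank-order trials with k candidates each.
    Under the null, each rank is uniform on 1..k with mean (k + 1) / 2
    and variance (k**2 - 1) / 12. Lower rank sums mean better matching,
    so z > 0 indicates above-chance performance."""
    n = len(ranks)
    expected = n * (k + 1) / 2
    variance = n * (k**2 - 1) / 12
    return (expected - sum(ranks)) / math.sqrt(variance)

# Four trials, four candidates each: perfect matching vs. chance-level.
print(sum_of_ranks_z([1, 1, 1, 1], 4))  # strongly positive z
print(sum_of_ranks_z([1, 2, 3, 4], 4))  # z = 0, exactly chance
```

With small per-trial effects, z grows only as the square root of the number of trials, which is why the databases discussed here needed hundreds of sessions to reach significance.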
Common Objections & Replies
- Operational value vs. statistical effects: the CIA’s AIR review judged lab effects insufficient for intelligence use, even if above chance in controlled series [2,3]. Proponents reply that operational success criteria differ from scientific detectability.
- Leakage/artifacts: Critics note potential cueing in early work; later protocols (secure randomization, strict blinding, algorithmic judging) were designed to close these loopholes [4,5,9,10].
- Replicability: Effect sizes are modest and heterogeneous; multi-lab replications with preregistered analyses remain the decisive next step.
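A rough sample-size calculation shows what “high-power” means in practice for the replicability point. The hit rates below are illustrative assumptions for this sketch, not figures from the cited studies:

```python
import math

def trials_needed(p0, p1, z_alpha=1.645, z_power=0.842):
    """Normal-approximation sample size for a one-sided binomial test:
    number of trials needed to detect a true hit rate p1 against a
    chance rate p0 at alpha ~= 0.05 (z = 1.645) with ~80% power
    (z = 0.842)."""
    s = (z_alpha * math.sqrt(p0 * (1 - p0))
         + z_power * math.sqrt(p1 * (1 - p1)))
    return math.ceil((s / (p1 - p0)) ** 2)

# Illustrative: 1-in-4 target pools (chance 0.25) vs. a hypothetical
# true hit rate of 0.32 -- a small effect requires a few hundred
# preregistered trials per lab.
print(trials_needed(0.25, 0.32))
```

The quadratic dependence on 1 / (p1 − p0) is the practical obstacle: halving the assumed effect roughly quadruples the required trial count.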
Assessment
The strongest pro-RV case rests on (a) CIA-commissioned statistical evaluations acknowledging non-chance outcomes in controlled settings, (b) peer-reviewed early SRI papers that inspired subsequent standards, and (c) large academic datasets (PEAR) that converged on small but positive effects while tightening judging methods. The literature leaves open whether these departures from chance reflect genuine anomalous cognition or residual artifacts. High-power, preregistered, cross-lab replications using standardized targets and automated scoring are the clearest path forward.
References (Open Sources)
- [1] Utts, J. (1995). An Assessment of the Evidence for Psychic Functioning. Statistical review prepared for the CIA’s declassification of Star Gate.
- [2] American Institutes for Research (1995). An Evaluation of Remote Viewing: Research and Applications.
- [3] Hyman, R. (1996). Evaluation of the Military’s Twenty-Year Program on Psychic Functioning.
- [4] CIA Reading Room. Review of the Psychoenergetic Research Conducted at SRI International (1973–1988).
- [5] CIA Reading Room. An Evaluation of the Remote Viewing Program (report and appendix).
- [6] CIA Reading Room. Feedback and Precognition-Dependent Remote Viewing (selected experiments).
- [7] Targ, R., & Puthoff, H. (1974). Information transmission under conditions of sensory shielding. Nature, 251, 602–607.
- [8] Targ & Puthoff (1974). Mirror copy of the Nature article (ResearchGate).
- [9] Nelson, R. D. (1996). Precognitive Remote Perception: Replication of Remote Viewing (ICRL/PEAR).
- [10] PEAR Remote Perception program overview (ResearchGate summary).
- [11] Dunne, B. J. (1979). Precognitive Remote Viewing: Replication of the Stanford Experiment (PEAR predecessor work).