Ed May and James Spottiswoode have contributed substantial effort to
critical examination of the Radin analysis, and have done independent
calculations. Some comments, primarily from Ed, are summarized here.
The upshot is that
although the calculations generally confirm Dean's outcome, it becomes clear
that small changes in the procedure may result in large changes in the result.
Date: Thu, 3 Feb 2000 03:04:43 -0800
From: Edwin C. May
To: 'Dean Radin' ,
"'S. James P. Spottiswoode (Email)'" ,
"'Richard Shoup (Email)'"
Cc: rdnelson@Princeton.EDU
Subject: RE: The answer
[snip on the new method]
>
> Regardless of what might be responsible for the Y2K result I see, and for GCP
> as a whole, it's statistically quite clear that something is going on that
> requires an explanation. The answer might be DAT, but then again, it might
> not. The results I see are almost exactly what one would expect to see if a
> free-running random system was perturbed repeatedly at a predefined time over
> the course of a 24-hour day.
The problem is that the world is NOT divided into 24 time zones. 10^9 people
(1/6 of the world's population!) live in 1/2-hour time zones. So I profoundly
disagree with that statement above.
> DAT might play a role in determining how I analyzed the data, but even
> correcting for multiple analyses you still get a healthy result. And if I
> figure out a way to do the original analysis I was going to try, and that
> works, then even multiple analysis is not an issue.
> (Actually, I just thought of a way to beat this problem!)
>
> Bottom line: I'm not so sure we can so easily dismiss GCP as nothing but DAT.
> One of the reasons I am suspicious of DAT as TOE is because theoretical
> prejudices always act as blinders. Witness von Lucadou seeing everything as
> the theory of pragmatic information, Walker seeing everything as quantum
> whatever, skeptics seeing everything as fraud, etc. Please, let's not fall
> into conceptual sclerosis.
I agree that we should not fall into conceptual sclerosis, and I admit that I
think DAT, or experimenter effect if you like, is the likely or even best
description of what is happening. But Dean, you too fell into this trap by not
acknowledging to Roger or us ...
(from my last email)
=================
My earlier note reproduced:
I have run 100 Monte Carlo passes, regenerating the raw data matrix on each
pass, for the case of +/-900 sec and all 36 time zones, using Dean's
e^kurtosis measure. (Dean's and my analysis codes (his spreadsheet, mine WAVE)
give the same answers on the same data.)
The observed minimum in the Y2K EGG data was at +9 seconds, with a value of 1.057.
From the Monte Carlo, 78 cases out of 100 had a lower minimum: p = 0.78 -> z = -0.772.
The attached graph is the distribution of where the minimum on each pass
occurred. To first order it is flat over the 1800 s region, as expected and as
Dean finds too. Thus the p-value of finding the smallest peak within +/-9 sec
is simply p = 18/1800 = 0.01 -> z = 2.33.
Stouffer's Z = 1.10 -> p = 0.136.
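The Stouffer combination here can be checked directly; a minimal sketch using SciPy, with the 0.78 and 0.01 one-tailed p-values quoted above:

```python
from scipy.stats import norm

# One-tailed p-values from Ed's note
p_depth = 0.78       # fraction of Monte Carlo minima deeper than observed
p_location = 0.01    # chance of the minimum landing within +/-9 s (18/1800)

# Convert each p-value to a z-score (inverse survival function)
z_depth = norm.isf(p_depth)        # -0.772 (a p above 0.5 gives a negative z)
z_location = norm.isf(p_location)  #  2.326

# Stouffer's method: sum the z's and divide by sqrt(number of z's)
z_stouffer = (z_depth + z_location) / 2 ** 0.5
p_combined = norm.sf(z_stouffer)

print(round(z_stouffer, 2), round(p_combined, 3))  # 1.1 0.136
```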
This null result, which includes the many 1/2-hour time zones, has no overlap
of data by construction. There is no argument I can think of for including
only Dean's integral ones.
By tomorrow morning we will have 1000 new-data Monte Carlo passes for
Dean's subset of time zones. I suspect we will confirm his p-values, or come
close. Even if we do, I do not see a way past the analysis of all the time
zones if we want to say that somehow the worldwide distribution of EGGs
"knew" about the human affair of Y2K. I do not buy it. Am I missing something?
==============
This is a VERY important result that directly addresses experimenter effect.
Was it concept hardening that prevented you from even mentioning this?
You might then have simply asked me to run the case with the 36 time zones.
Now it is a few lines of code that must be changed. Later this AM I
will do that.
BTW: Since there is no overlap for the 1800-second case, you can do this
in your spreadsheet too. If this is some egg-centered real effect, differing
from selection of some sort, and if you claim something interesting near zero,
surely it will not go away when you reduce the window of data collection
from 3600 to 1800 seconds, keeping the sliding average at 300 seconds.
Oh, while we are on the case of selective reporting: you might have told
Roger that the effect is highly dependent upon the sliding window width,
something we must understand before we make claims of an egg-centered effect.
Back to bed.
Ed
======================================================================
Edwin C. May, Ph.D.
President, Laboratories for Fundamental Research
650.327.2007 Voice
650.322.7960 Fax
http://www.lfr.org Web
Date: Thu, 3 Feb 2000 04:46:47 -0800
From: Edwin C. May
To: "Dean I. Radin (Email)" ,
"S. James P. Spottiswoode (Email)" ,
"Roger D. Nelson (Email)" ,
"Richard Shoup (Email)"
Subject: My replication attempt at Dean's new analysis
Dean, I believe I have done what you suggest. For each tick, compute the
mean and sd of the variances across the time zones. Then normalize those
variances to a z-score. Then I take the average of these and compute a
sliding window of 300 seconds.
The following figure is the result.
First I notice that the average z is MDCZ (mighty damn close to zero! :) ).
But there is a big negative peak at about +3.5 minutes when I use the integer
time zones. Using all the time zones (minus the 45-minute ones) I get the
red curve, which has a peak at +7 min.
Admittedly I use +/-900 seconds instead of +/-1800, so that there will be no
overlap of data. But when I did this with your kurtosis approach you and I
got the same numbers, so I think this is okay.
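The per-tick procedure Ed describes can be sketched as below. This is a minimal reading on synthetic data: the real EGG variance matrix is not reproduced here, the 36-zone shape and the chi-square stand-in are assumptions, and the normalization uses expected null moments rather than the per-tick sample moments (normalizing by the per-tick cross-zone mean and sd would force the cross-zone average to zero identically).

```python
import numpy as np

rng = np.random.default_rng(1)

N_TICKS = 1801   # one tick per second across +/-900 s of Y2K
N_ZONES = 36     # time-zone count in the full analysis (an assumption)
WINDOW = 300     # sliding-window width in seconds

# Stand-in for the per-zone variance series (ticks x zones); sample
# variances of normal data are chi-square distributed up to scale
DF = 10
variances = rng.chisquare(df=DF, size=(N_TICKS, N_ZONES))

# Normalize each variance to a z-score using its expected null moments
# (a chi-square variate with df degrees of freedom has mean df, variance 2*df)
z = (variances - DF) / np.sqrt(2 * DF)

# Average the z's across time zones at each tick ...
z_bar = z.mean(axis=1)

# ... then smooth with a 300 s sliding (moving-average) window
kernel = np.ones(WINDOW) / WINDOW
smoothed = np.convolve(z_bar, kernel, mode="valid")
```

Under a null like this, the smoothed trace hovers near zero, consistent with Ed's "mighty damn close to zero" observation for the average z.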
Do you get such small numbers for the average Z? I also do not have a
feel for the significance of this peak. I do not want to run Monte Carlos
until we agree on the graph.
Ed
Date: Thu, 3 Feb 2000 05:25:17 -0800
From: Edwin C. May
To: "Dean I. Radin (Email)" ,
"S. James P. Spottiswoode (Email)" ,
"Roger D. Nelson (Email)" ,
"Richard Shoup (Email)"
Subject: Final Kurtosis Result
Between James and me, we ran 954 Monte Carlo passes in which we randomly
created new data at the pseudo-egg level rather than permuting anything. There
were 14 passes that produced negative peaks less than or equal to 1.082, for a
p-value = 0.01467, anywhere in +/-900 s of Y2K. Given that the observed peak
was 2 s from Y2K and that the distribution of where the minimum fell on each
pass was flat, the associated p-value for being within +/-2 s is 4/1800, or
p = 0.0022. Converting these two p-values to z-scores and computing a
Stouffer's Z, I get Zs = 3.55 (p = 0.00019). Dean, is this not consistent with
your permutation results?
As I said in the previous kurtosis comment: given that with the integer TZs we
get this, but using all the non-overlapping time zones we get p = 0.136, I
think there is a problem of interpretation. From my conceptually hardened
view, this is psi data selection. :)
Ed
