wandelaar

Tests for the non-random character of the I Ching


I wasn't happy with just hexagrams as the output. I wanted to capture the changing lines too, so I modified the program to give me the full set of yarrow data instead of the post-processed data. The results are interesting. Here is a sample of the output when run for one million iterations.

 

$ gcc i-ching-blaster.c -o i-ching-blaster
$ # I-Ching, what do you think of Wandelaar's idea to analyze the yarrow data sets?
$ ./i-ching-blaster 1000000 2>/dev/null | sort | uniq -c | sort -n > wandelaar.txt
$ head -5 wandelaar.txt 
      1 544544544544984984
      1 544544544584944988
      1 544544544588944984
      1 544544544944944988
      1 544544544944948944
$ tail -5 wandelaar.txt 
     98 548548544544584548
     99 544548544548584548
    100 548548544548544548
    101 544548544548544588
    110 544548548544548544
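
In case anyone wants to tinker with this: i-ching-blaster.c isn't posted here, but the yarrow divisions behind that raw output can be sketched in a few dozen lines of C. This is only an illustration, not the actual program; in particular it models the random split of the heap as a plain uniform pick, which is exactly the part that is open to debate.

/* yarrow-sketch.c - a minimal sketch of the yarrow-stalk divisions.
 * Prints one 18-digit raw reading per iteration, bottom line first,
 * three division values per line. Compile: gcc yarrow-sketch.c */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* One division ("change"): split the heap, hold one stalk from the right
 * pile, count both piles off by fours and set the remainders aside.
 * Yields 5 or 9 on the first change of a line, 4 or 8 afterwards. */
static int change(int *heap)
{
    int left  = 1 + rand() % (*heap - 2);  /* crude uniform split, both piles non-empty */
    int right = *heap - left - 1;          /* one stalk held from the right pile */
    int rem_l = left  % 4; if (rem_l == 0) rem_l = 4;
    int rem_r = right % 4; if (rem_r == 0) rem_r = 4;
    int taken = 1 + rem_l + rem_r;
    *heap -= taken;
    return taken;
}

int main(int argc, char **argv)
{
    long iterations = argc > 1 ? atol(argv[1]) : 1;
    srand((unsigned)time(NULL));

    for (long i = 0; i < iterations; i++) {
        for (int line = 0; line < 6; line++) {       /* six lines, bottom first */
            int heap = 49;                           /* 50 stalks minus the one set aside */
            for (int pass = 0; pass < 3; pass++)
                printf("%d", change(&heap));
            /* heap is now 24, 28, 32 or 36, i.e. four times the line value */
        }
        printf("\n");
    }
    return 0;
}

Each line's first division prints a 5 or a 9 and the next two print a 4 or an 8, which is where triples like 544 and 948 come from.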

 

The clear winner is 544548548544548544. Let's translate that to something meaningful.

 

544 333 9 ---------      ---   ---
548 332 8 ---   ---      ---   ---
544 333 9 ---------      ---   ---
548 332 8 ---   ---      ---   ---
548 332 8 ---   ---      ---   ---
544 333 9 ---------      ---   ---
          #21            #2

 

Interesting. The "winning" hexagram is fire over thunder, or #21 Shih Ho / Biting Through. The changing lines are 1, 4, and 6. The transformed hexagram is #2 K'un / The Receptive.
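
Since the digit mapping above is completely mechanical (a 4 or 5 counts as 3, an 8 or 9 counts as 2, and each triple sums to 6, 7, 8 or 9), decoding a raw string takes only a few lines of C. A sketch, reading the string bottom line first as in the translation above:

#include <stdio.h>

/* Decode one 18-digit raw yarrow string into six line values (6,7,8,9).
 * Digits 4 and 5 count as 3, digits 8 and 9 count as 2; each triple sums
 * to 6 (old yin), 7 (young yang), 8 (young yin) or 9 (old yang). */
static void decode(const char *raw, int lines[6])
{
    for (int i = 0; i < 6; i++) {
        int sum = 0;
        for (int j = 0; j < 3; j++) {
            char d = raw[3 * i + j];
            sum += (d == '4' || d == '5') ? 3 : 2;
        }
        lines[i] = sum;   /* lines[0] is the bottom line */
    }
}

int main(void)
{
    int lines[6];
    decode("544548548544548544", lines);   /* the "winner" from above */
    for (int i = 0; i < 6; i++)
        printf("%d", lines[i]);
    printf("\n");                          /* prints 988989: lines 1, 4 and 6 are moving */
    return 0;
}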

 

I'll leave it to you to find meaning in this.

 

  • Like 1
  • Thanks 1


My first intuitive interpretation was that putting some hard work into further developing the method would open a new channel for the I Ching to deliver its messages. But then I read the Wilhelm translation of hexagram #21 and thought that maybe the I Ching is giving me a warning to stop with this "criminal method". So I am not sure what the hexagram means.

 

But what I do like is that there is a clearly winning hexagram despite the huge number of iterations.

Edited by wandelaar
  • Like 2


It would also be interesting to see how the appearance of "clearly winning hexagrams" depends on the number of iterations when only pure chance is operating (that is: when the results are not used for consultation). That would give us a baseline to compare against.

15 hours ago, wandelaar said:

It would also be interesting to see how the appearance of "clearly winning hexagrams" depends on the number of iterations when only pure chance is operating (that is: when the results are not used for consultation). That would give us a baseline to compare against.

 

It's funny you mention this. I have had a script running for about the last 8 hours calculating the yarrow stalk results one billion times (yes, 1,000,000,000 times). I want to see what the most common yarrow stalk result is. As of 200 million iterations the most common is mountain/mountain transforming into heaven/fire. We'll see what happens in another 800 million iterations. :)
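
In the meantime, the pure-chance baseline wandelaar asked for can be approximated directly: draw each line with the textbook yarrow probabilities (1/16 old yin, 3/16 old yang, 5/16 young yang, 7/16 young yin), tally the 4^6 = 4096 possible readings, and look at how big a lead the "winner" has. A rough sketch (not the program I'm running, just an illustration with a crude RNG):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Pure-chance baseline: draw each line with the textbook yarrow
 * probabilities, tally the 4^6 = 4096 possible readings and report
 * the rarest and most common counts for a given number of iterations. */
static int random_line(void)
{
    int r = rand() % 16;                 /* sixteenths */
    if (r < 1)  return 6;                /* old yin    1/16 */
    if (r < 6)  return 7;                /* young yang 5/16 */
    if (r < 13) return 8;                /* young yin  7/16 */
    return 9;                            /* old yang   3/16 */
}

int main(int argc, char **argv)
{
    long n = argc > 1 ? atol(argv[1]) : 1000000;
    static long counts[4096] = {0};
    srand((unsigned)time(NULL));

    for (long i = 0; i < n; i++) {
        int key = 0;
        for (int line = 0; line < 6; line++)
            key = key * 4 + (random_line() - 6);   /* base-4 index of the reading */
        counts[key]++;
    }

    long min = counts[0], max = counts[0];
    for (int k = 1; k < 4096; k++) {
        if (counts[k] < min) min = counts[k];
        if (counts[k] > max) max = counts[k];
    }
    printf("iterations %ld: rarest reading seen %ld times, most common %ld times\n",
           n, min, max);
    return 0;
}

Running it at several iteration counts would show how far apart the most and least common readings drift under pure chance alone.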

  • Like 2
  • Thanks 1


According to the law of large numbers we would expect the clearly winning hexagrams to disappear, as the relative frequencies of all hexagrams approach their specific (pure chance) probabilities when the number of iterations becomes extremely large. Great experiment! :D

1 hour ago, Michael Sternbach said:

Don't tell me the result is hexagram 42... :o

 

Well, now that you mention it... Here are the results!

 

count:  262,144 unique readings
min:    8       948984988984948948 (1 reading with this count)
max:    83,966  548548544548548544 (1 reading with this count)
avg:    3,815   (15 readings with this count)
mode:   80      (1,026 readings with this count)

 

There were 262,144 unique results after 1 billion attempts. This number is no accident: each of the eighteen digits in a raw reading can take only two values (5 or 9 for the first digit of each triple, 4 or 8 for the other two), so a reading is essentially an eighteen-digit binary number. 2^18 equals 262,144, which means my program hit every permutation the yarrow stalk method can produce.

 

The rarest reading was 948984988984948948, which translates to #10 ䷉ Lu / Treading and transforms to #1 ䷀ Ch'ien / The Creative.

 

MINIMUM
948 232 7 ---  ---
948 232 7 ---  ---
984 223 7 ---  ---
988 222 6 - -  ---
984 223 7 ---  ---
948 232 7 ---  ---
          #10  #1

 

The most common was 548548544548548544, which translates to #52 ䷳ Ken / Keeping Still (Mountain) and transforms to #2 ䷁ K'un / The Receptive.

 

MAXIMUM
544 333 9 ---  - -
548 332 8 - -  - -
548 332 8 - -  - -
544 333 9 ---  - -
548 332 8 - -  - -
548 332 8 - -  - -
          #52  #2

 

The average number of occurrences per reading is 3,815; fifteen readings hit that count exactly. The mode was 80, with 1,026 readings landing on that count.
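
For anyone who wants to reproduce those summary numbers from a sorted count file, a pass like this does it (a sketch only; the file name below is just a placeholder for the "count reading" file produced by sort | uniq -c):

#include <stdio.h>

/* Summarise a "count reading" file: number of unique readings, min/max
 * count, average count, and the mode of the counts. The counts in this
 * data set stay well under 100,000, hence the array bound. */
int main(void)
{
    FILE *f = fopen("yarrow-1-billion.txt", "r");   /* hypothetical file name */
    if (!f) { perror("fopen"); return 1; }

    long count, unique = 0, total = 0, min = -1, max = 0;
    static long count_freq[100000] = {0};           /* how often each count occurs */
    char reading[32];

    while (fscanf(f, "%ld %31s", &count, reading) == 2) {
        unique++;
        total += count;
        if (min < 0 || count < min) min = count;
        if (count > max) max = count;
        if (count < 100000) count_freq[count]++;
    }
    fclose(f);
    if (unique == 0) { fprintf(stderr, "empty file\n"); return 1; }

    long mode = 1;
    for (long c = 2; c < 100000; c++)
        if (count_freq[c] > count_freq[mode]) mode = c;

    printf("count: %ld\nmin:   %ld\nmax:   %ld\navg:   %ld\nmode:  %ld (%ld readings)\n",
           unique, min, max, total / unique, mode, count_freq[mode]);
    return 0;
}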

 

If you want the final results I can send them to you; it's a 992 KB (compressed) text file. PM me with your email address.

Edited by Lost in Translation
Edited for accuracy
  • Thanks 3


Now, according to the law of large numbers, the relative frequencies of the hexagrams found are expected to approach the theoretical probabilities of the hexagrams in a way similar to this:

 

[Image: law-large-numbers.png - a plot showing sample averages converging to the expected value as the number of trials grows]

Source: https://en.wikipedia.org/wiki/Law_of_large_numbers#/media/File:Lawoflargenumbers.svg

 

Some of the theoretical probabilities of hexagrams can be found here:

 

[Image: table.png - table of theoretical probabilities of hexagrams from the book]

 

It is interesting to see whether the relative frequencies found with the computer program are the same as the theoretical probabilities in the book. If that is so, then the next question is: what is the minimal number of iterations necessary to get relative frequencies that correspond with the theoretical probabilities?

  • Like 1


I still need to transform the raw yarrow results into hexagrams with changing lines, since multiple yarrow results yield the same output. For example, 548 yields the same line as 584. I'll do this tonight.

Edited by Lost in Translation
  • Thanks 2


More stats! I converted the raw Yarrow Stalk results into the standard 6789 format.

 

$ wc -l i-ching-converted-1-billion.txt 
4096 i-ching-converted-1-billion.txt

 

4096 is 4^6, the total number of possible I-Ching readings (hexagram plus changing-line information): six lines, each a 6, 7, 8 or 9.


$ head -15 i-ching-converted-1-billion.txt 
24    666666
73    669666
77    666696
80    696666
87    966666
89    666966
100    666669
100    667666
108    666766
114    766666
115    666667
115    666676
122    676666
164    666668
170    866666

 

24/1,000,000,000 or 0.0000024 % chance of perfect earth transforming to perfect heaven. If you ever receive this then consider yourself very special!


$ tail -15 i-ching-converted-1-billion.txt 
3393826    778888
3394406    877888
3741614    898888
3741643    888988
3742322    888889
3742558    889888
3745425    888898
3747578    988888
5210630    888887
5213124    788888
5213837    888788
5214520    888878
5215113    878888
5218324    887888
8018967    888888

 

8,018,967/1,000,000,000 or 0.8018967 % chance of receiving perfect earth with no transformation. If you ever get this, eh - not so special. ;)

 

Well, there you have it. We've isolated the most and least common raw yarrow calculation and the most and least common transformation (hexagram plus changing lines) derived via the same method. I might run the numbers again via the coin method, but something tells me it won't be nearly as exciting.

 

 

Edited by Lost in Translation
corrected mistakes with transformation explanations.
  • Like 1


Just in case you are curious.

 

$ grep 999999 i-ching-converted-1-billion.txt 
82958    999999

 

82,958/1,000,000,000 = 0.0082958 % - about 8 in 100,000. Pretty good.

 

$ grep 777777 i-ching-converted-1-billion.txt 
607036    777777

 

607,036/1,000,000,000 = 0.0607036 % - about 6 in 10,000.

  • Like 1

43 minutes ago, Lost in Translation said:

24/1,000,000,000 or 0.0000024 % chance of perfect earth transforming to perfect heaven. If you ever receive this then consider yourself very special!

 

If that is the least probable outcome, then the unhappy conclusion of all this is that we do indeed need to throw 1,000,000,000 or more hexagrams to get a good impression of the probabilities of all the hexagrams. And that would make the method impractical, because it would take many hours (as I understand it did) to run the program. :(

 

Am I correct?


To compare with the theoretical value of the book we need the number in the red circle (if I understand correctly):

 

[Image: table2.png - probability table from the book with the relevant number circled in red]

 

  • Like 1

5 hours ago, wandelaar said:

 

If that is the least probable outcome, then the unhappy conclusion of all this is that we do indeed need to throw 1,000,000,000 or more hexagrams to get a good impression of the probabilities of all the hexagrams. And that would make the method impractical, because it would take many hours (as I understand it did) to run the program. :(

 

Am I correct?

 

It did take about 15 hours to generate the original data. Of course I ran the program on a very poor machine (Celeron 1.6 GHz processor). With a better machine it would have completed much faster.

 

Here's something you might find interesting. Remember this thread?

 

 

I had run another program to calculate the percentage of each line type based upon variance. At variance 5 I got these results:

 

Quote

Variance: 5
Permutations: 1000
Yang (old) percent: 20.000000
Yang (young) percent: 30.000000
Yin (old) percent: 5.000000
Yin (young) percent: 45.000000

 

Using the data from last night's run I reverse-engineered the line occurrence percentages by looking at the readings 999999, 777777, 666666 and 888888.

 

999999

6 hours ago, Lost in Translation said:

82,958/1,000,000,000 = 0.0082958 %

 

777777

6 hours ago, Lost in Translation said:

607,036/1,000,000,000 = 0.0607036 %

 

666666

6 hours ago, Lost in Translation said:

24/1,000,000,000 or 0.0000024 %

 

888888

6 hours ago, Lost in Translation said:

8,018,967/1,000,000,000 or 0.8018967 %

 

Here are the reverse-engineered line percentages:

 

Observed frequency of each all-same-line reading, with its 6th root (the per-line rate):

999999: 0.000082958  (old yang   20.88% per line)
777777: 0.000607036  (young yang 29.10% per line)
666666: 0.000000024  (old yin     5.37% per line)
888888: 0.008018967  (young yin  44.74% per line)
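
For anyone who wants to check that arithmetic: the chance of an all-same-line reading is the per-line probability raised to the sixth power, so the per-line rate is the sixth root of the observed frequency. A few lines of C reproduce the percentages above, using the counts quoted earlier (compile with -lm):

#include <stdio.h>
#include <math.h>

/* Per-line rates reverse-engineered from the all-same-line readings:
 * p^6 = observed frequency, so p = (count / 1e9)^(1/6). */
int main(void)
{
    const char *label[] = { "old yang (999999)", "young yang (777777)",
                            "old yin (666666)",  "young yin (888888)" };
    const double count[] = { 82958.0, 607036.0, 24.0, 8018967.0 };

    for (int i = 0; i < 4; i++)
        printf("%-20s %.2f%% per line\n", label[i],
               100.0 * pow(count[i] / 1e9, 1.0 / 6.0));
    return 0;
}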

 

As you can see, the numbers are almost identical to those we calculated with our hypothetical ±5 variance!

 

How awesome is that?

 

EDIT:

 

Just for completeness I'll include the theoretical 3/16, 5/16, 1/16 and 7/16 values as percentages.

 

3/16 = 18.75%  (old yang)
5/16 = 31.25%  (young yang)
1/16 =  6.25%  (old yin)
7/16 = 43.75%  (young yin)

 

So, yes, the theoretical numbers are quite close, but they assume illogical stack sizes (like 1). The "probable" stack sizes skew the results, giving slightly different (but more accurate) values.
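
To see how much those small per-line differences matter at the hexagram level, raise them to the sixth power. With the textbook 1/16 for old yin, an all-moving-yin reading (666666) should turn up about (1/16)^6 × 10^9 ≈ 60 times per billion; with the 5.37% rate above it works out to about 24 per billion, which is of course the count the 5.37% was derived from. A two-line check (compile with -lm):

#include <stdio.h>
#include <math.h>

/* How a small per-line difference compounds over six lines: expected
 * 666666 counts per billion readings under the textbook old-yin rate
 * (1/16) versus the observed rate (5.37%). */
int main(void)
{
    printf("textbook 1/16:  %.1f per billion\n", pow(1.0 / 16.0, 6) * 1e9);
    printf("observed 5.37%%: %.1f per billion\n", pow(0.0537, 6) * 1e9);
    return 0;
}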

 

Can I put this to rest now? ;)

Edited by Lost in Translation
  • Like 1
  • Thanks 1

5 hours ago, wandelaar said:

To compare with the theoretical value of the book we need the number in the red circle (if I understand correctly):

 

[Image: table2.png - probability table from the book with the relevant number circled in red]

 

 

If I am understanding this correctly then the numbers across the top are the "from" hexagram and the numbers on the left are the "to" hexagram. The sample size is 16,777,216. To convert this sample size to one billion we multiply by 59.6 (approx). This means the chance of moving from hex 2 to hex 1 is 59 out of a billion, since the listed item is 1.

 

My results show the chance of moving from hex 2 to hex 1 is 24 out of a billion - approximately half of what is listed here. Given the sample size I think that's probably correct.

 

 

  • Like 1

6 hours ago, wandelaar said:

Could you explain your last post? What are you calculating there?

 

6 hours ago, Lost in Translation said:

Just in case you are curious.

 

$ grep 999999 i-ching-converted-1-billion.txt 
82958    999999

 

82,958/1,000,000,000 = 0.0082958 % - about 8 in 100,000. Pretty good.

 

$ grep 777777 i-ching-converted-1-billion.txt 
607036    777777

 

607,036/1,000,000,000 = 0.0607036 % - about 6 in 10,000.

 

I am showing the occurrence count (out of one billion) and the calculated percentage for hex #1 transforming to hex #2 (i.e. 999999) and for hex #1 with no changing lines (i.e. 777777).

  • Like 2


Crucial to the whole procedure are the repeated divisions of the heap of stalks; that is where the randomness comes in. Now the probabilities of the possible divisions of the heap may be slightly different in the book and in the computer program. That would then explain the difference.

 

How long do you think it would take a fast computer to run the program?

 

43 minutes ago, Lost in Translation said:

If I am understanding this correctly then the numbers across the top are the "from" hexagram and the numbers on the left are the "to" hexagram. The sample size is 16,777,216. To convert this sample size to one billion we multiply by 59.6 (approx). This means the chance of moving from hex 2 to hex 1 is 59 out of a billion, since the listed item is 1.

 

My results show the chance of moving from hex 2 to hex 1 is 24 out of a billion - approximately half of what is listed here. Given the sample size I think that's probably correct.

 

The table in the book contains calculated probabilities, so the number 16,777,216 is not a sample size; it is simply 16^6, the denominator you get when each of the six lines has a probability of n/16. See:

 

[Image: book.png - excerpt from the book on the calculated probabilities]

  • Like 1

1 hour ago, Lost in Translation said:

EDIT:

 

Just for completeness I'll include the theoretical 3/16, 5/16, 1/16 and 7/16 values as percentages.

 

3/16 = 18.75%  (old yang)
5/16 = 31.25%  (young yang)
1/16 =  6.25%  (old yin)
7/16 = 43.75%  (young yin)

 

So, yes, the theoretical numbers are quite close, but they assume illogical stack sizes (like 1). The "probable" stack sizes skew the results, giving slightly different (but more accurate) values.

 

Can I put this to rest now? ;)

 

Ah! Sorry. Didn't see the edit before.

40 minutes ago, wandelaar said:

How long do you think it would take a fast computer to run the program?

 

Perhaps an hour, maybe less. I could spread the processing out over a cluster and process results in a matter of minutes if I were so motivated.

 
