Patterns and perception


tapang

Many studies show that we humans are experts at finding patterns. We see patterns even in the seemingly random occurrences around us: stars forming constellations, clouds forming familiar shapes, or numbers lining up to our visual delight. We cannot resist putting the familiar into what we perceive through our eyes and other senses.

One has to dig deeper beyond perception in order to validate the existence of such patterns. The constellations that we form in the night sky turn out to be stars separated by vast distances that only appear to line up with each other when viewed from the Earth. Worse, these constellations change as the stars move ever so slowly relative to our solar system. What is the North Star for us right now, Polaris, would not be at that position in the distant future, just as it was not the North Star in the distant past.

Humans, according to some scientists, are hard-wired to identify human faces. The smiley in our emails tends to verify this phenomenon. Despite the unrealistic representation, even children can identify emotions from such stick figures. A recent internet meme was the so-called Mars rat, where some stones near a Mars rover were photographed in such a way that they resembled an ordinary rat. We even see faces in satellite images of Mars, the man in the Moon, as well as faces in our bread toast, trees and other natural objects. This psychological phenomenon of finding regular objects in a random field is sometimes called pareidolia.

This experience is especially true when looking at the results of the recent elections. Many internet posts and opinions have already been exchanged on the existence of a 60:30:10 ratio in the aggregated voter preferences in the May 2013 elections. The data show that there is a 6:3:1 ratio for LP vs UNA vs others at the national level. If we look at the various visualizations of the data at lower levels, there is a pattern (although not as pronounced) that we can see even down to the precinct level.

What does this ratio tell us? Aside from the observed ratios of “aggregated voter preferences”—none. The problem lies in three things: the “arbitrary” aggregation of senators, the law of large numbers, and the veracity of the data being used.

The aggregation is “arbitrary” since, aside from the LP and UNA categories, the third category “others” seems to be just a catch-all despite the rather wide differences among the candidates under it. Even from the number of candidates fielded by the two main parties, there already exists an uneven ratio. LP fielded a full slate of 12 while UNA had only nine, although both teams shared guest candidates at the start. From this 12:9 ratio alone, LP already holds a 4:3 advantage over UNA. Even assuming random preferences, LP would have an edge in winning seats simply by having more candidates than UNA (on top of the whole administration machinery).
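This slate-size advantage can be illustrated with a quick simulation. The sketch below is our own toy illustration, not anyone's published analysis: it assumes 33 equally appealing senatorial candidates, so the 12 winners are a uniformly random subset. A party fielding 12 candidates then takes about 4.4 seats on average against about 3.3 for a party fielding nine, which is exactly the 4:3 ratio noted above.

```python
import random

def simulate_seats(n_trials=10000, seats=12, slate_a=12, slate_b=9,
                   n_candidates=33):
    """Monte Carlo sketch: if all candidates were equally appealing, the
    winners would be a uniformly random subset of all candidates, and the
    seats would split roughly in proportion to slate size."""
    total_a = total_b = 0.0
    for _ in range(n_trials):
        # Draw a uniformly random set of winners; candidates 0..slate_a-1
        # belong to party A, the next slate_b candidates to party B.
        winners = random.sample(range(n_candidates), seats)
        total_a += sum(1 for c in winners if c < slate_a)
        total_b += sum(1 for c in winners if slate_a <= c < slate_a + slate_b)
    return total_a / n_trials, total_b / n_trials

avg_a, avg_b = simulate_seats()
# The exact expectations are 12*12/33 ≈ 4.36 and 12*9/33 ≈ 3.27 seats.
```

Under this deliberately naive assumption of purely random preferences, the bigger slate wins more seats simply by occupying more of the ballot.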

It turns out that if one picks an arbitrary but different categorization, similar consistent patterns appear in both national and local results. We can demonstrate that a consistent pattern appears if one aggregates Team Buhay:Team Patay:Others (33:25:42), or Male:Female (70:30), or even the absurd categorization of the first 11:second 11:last 11 (29:36:35) when the senatoriables are arranged alphabetically. We wonder how the discourse would have gone if the first “suspicious” pattern posted had turned out to be the Team Buhay/Team Patay/Others ratio.

Much has been written on the second point of why these patterns approach an average behavior, especially at the national level. There is indeed a tendency for large collections of samples to approach an average behavior. This is just another way to state the law of large numbers.
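The law of large numbers is easy to demonstrate numerically. In the toy sketch below (simulated draws, not election data), averages over a handful of random draws scatter widely while averages over thousands of draws cluster tightly around the true mean, much as precinct-level results vary while national aggregates look smooth.

```python
import random

def sample_mean(n):
    """Average of n independent uniform draws from [0, 1)."""
    return sum(random.random() for _ in range(n)) / n

# 1000 "precinct-sized" averages versus 1000 "national-sized" averages.
small = [sample_mean(10) for _ in range(1000)]
large = [sample_mean(10000) for _ in range(1000)]

def spread(xs):
    """Range of a list: how far apart the extremes sit."""
    return max(xs) - min(xs)

# spread(small) is many times spread(large): the larger the aggregate,
# the closer every sample sits to the true mean of 0.5.
```

The same convergence is why national totals for any broad categorization tend to settle into stable-looking ratios, regardless of what happens in individual precincts.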

Lastly, we have already noted last week in this column that even if these patterns exist, and they turn out to be more interesting than what we have pointed out above, we still face the problem of the veracity of the data that we are processing. Smartmatic and Comelec have not demonstrated in a transparent manner that the PCOS machines do indeed translate our ballots into the correct votes that will be counted correctly up the line.

There are ways to check the veracity of a list of numbers such as those coming out of the election returns or the statements of votes. We can check whether these numbers were generated suspiciously or are the natural result of a fair election. We now have a team of scientists and volunteers crunching the data of both the 2010 and 2013 elections at all levels—from the national level down to the precincts—and checking for suspicious values and turnouts.
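One widely used screen for such lists, mentioned here as an illustration rather than as the team's exact method, is Benford's law: in many naturally occurring collections of counts, the leading digit 1 appears about 30 percent of the time, digit 2 about 18 percent, and so on downward. A list whose leading-digit profile deviates sharply from this can be flagged for closer scrutiny.

```python
import math
from collections import Counter

def benford_expected(d):
    """Benford's law: P(leading digit = d) = log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)

def leading_digit_profile(counts):
    """Observed fraction of positive values whose decimal form starts with
    each digit 1-9 (e.g. vote counts pulled from election returns)."""
    digits = [int(str(c)[0]) for c in counts if c > 0]
    tally = Counter(digits)
    return {d: tally[d] / len(digits) for d in range(1, 10)}

# benford_expected(1) ≈ 0.301: about 30% of entries should start with 1.
```

Such digit tests are only a screen, not proof of fraud; deviations call for closer examination of the underlying returns, not a verdict by themselves.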

The presence (or absence) of a 60:30:10 ratio is not why we should call for the junking of the PCOS machines. It is the fact that more than 18,000 machines failed to do what they were supposed to do on election day. It is the fact that the Smartmatic-Comelec automated election system is not transparent and has sacrificed accuracy and voter verifiability for speed. It is the fact that the Comelec has surrendered its constitutional function to a foreign-controlled lemon of a system that is the Smartmatic AES.
