Archive for December, 2009

Defining Global Consciousness

December 24, 2009


The emergence of the next-generation Web forces us to engage in ever deeper conversations on the topic of global consciousness. While the concept has been around for decades, it is gaining more and more attention as we crawl closer to a new Web experience in which the volume of information and the number of connections between peers are reaching heights never seen before.

Semantic definition

When we talk about global consciousness, it is not always clear what we mean by it. Jay Earley describes global consciousness as the next step in human evolution. His definition reads as follows:

By global consciousness I mean “collective global reflexive consciousness.” The word “reflexive” refers to the ability of consciousness to reflect back on itself, the ability of a being to be aware of itself. “Global” refers to all of humanity. I use the word “collective” because I am referring to the consciousness of the human race as a whole, not a global consciousness that any of us might have as individuals.

While Jay’s semantic definition is very clear and helps us a great deal in understanding and visualizing the meaning of global consciousness, that very semantic nature is what makes it unsuitable for analysis.

The Collective Entity Space

Jay Earley’s definition, however, provides a framework in which a multitude of global consciousnesses may co-exist. These autonomous entities might agree in their components, but they may differ in the extent of individual engagement and awareness, as well as in the nature of the connections between components and the degree to which those connections depend on technology.

These independent properties (components, engagement, awareness, nature, technology) form the grid of a multi-dimensional space in which a collective entity (a collective reflexive consciousness), such as a global consciousness, manifests.

  • Components: A collective entity is formed by its components. Depending on the theory or technology the entity is fundamentally based on, components may be constituted by living, inanimate or even abstract entities.
  • Engagement: Engagement expresses the level of activeness shown by components. A collective entity is considered active if its formation and functioning require its components to actively engage in the process; if they don’t, the entity is passive.
  • Awareness: Whether components are aware of their contribution to the collective entity.
  • Nature: The process of maintaining connections between components, which is essential to the entity’s existence, must be rooted in natural phenomena: physical, chemical, bio-chemical, biological, psychical or social, or their combination. The exact mixture is often specific to the entity.
  • Technology: Connections, besides being rooted in a mix of natural phenomena, may also emerge through the use of technology.

Given the range of values each property may take, the number of possible collective entities is endless.

Quantitative definition

Using the concept of the Collective Entity Space (CES), we can now rephrase the definition of global consciousness.

A global consciousness is a finite volume in the Collective Entity Space which, projected onto the components axis, reveals the relevant components of Earth.

This definition has two advantages.

  • It’s applicable to analysis as it’s derived from the Collective Entity Space.
  • The possible existence of multiple global consciousnesses is obvious, since the projection is many-to-one: distinct volumes in the space can share the same projection onto the components axis.

In the definition, I used “relevant components of Earth” as opposed to Jay Earley’s “humanity”, because there are theories out there that do not restrict components to human beings (e.g. the Gaia Theory).

Examples

The viability of the Collective Entity Space concept is demonstrated by how well existing theories and projects concerning global consciousness fit into it. Four examples are listed below, each with a short description and its CES properties.

Gaia Theory

The Gaia Theory presumes that the entire planet, including the biosphere, forms an organic system that maintains a homeostasis in favor of life.

  • Components: All components of Earth
  • Engagement: Passive
  • Awareness: Unaware
  • Nature: Mixed
  • Technology: None

GCP

The Global Consciousness Project (GCP) is an experimental network of quantum random number generators whose output patterns are observed, recorded, and correlated with world events.

  • Components: All humanity
  • Engagement: Passive
  • Awareness: Unaware
  • Nature: Physical (quantum), Psychical
  • Technology: None

Global Brain

The Global Brain theory, as described by Dean Pomerleau, compares the characteristics of real-time social networks, such as Twitter, to those of the human brain. Consciousness is reached through the ever-improving complexity and responsiveness of these networks.

  • Components: Twitter users
  • Engagement: Active (retweeting)
  • Awareness: Unaware
  • Nature: Physical (electrodynamics)
  • Technology: Internet

Lightworkers

Lightworkers is a spiritual group aiming to create global consciousness (“divine consciousness”, or “collective whole”) on a spiritual level through meditation (“Global Unity Meditation”).

  • Components: Group followers
  • Engagement: Active (meditation)
  • Awareness: Aware
  • Nature: Psychical (spiritual)
  • Technology: “Hemisynch Meditation”

Conclusion

Comparing and analyzing instances of any concept starts with a clear, quantitative definition. In the case of global consciousness, one way to obtain such a definition is through the Collective Entity Space, which is constructed from the generalizations made while deconstructing global consciousness itself.


Querying randomly from uneven distributions

December 20, 2009


Not long ago I was looking for a way to select random elements from an unevenly distributed set, i.e. one where each element carries its own weight.

Solutions out there

First thing, I googled every combination of keywords that made sense, yet all I found on the subject was a single forum thread. Its participants came up with the following scheme (a rough MySQL sketch follows the list).

  • On each insert, increase the row ID by the element’s weight.
  • To query a random element, generate a random ID between the first and the latest, and return the row with the nearest lower ID.
  • To update a weight, rebuild the table from the row in question onward.
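
A rough MySQL sketch of that scheme might look like the following. The table name, the payload column, and the concrete values are my own assumptions for illustration, not something given in the thread.

-- Each row's id is the running total of weights, so the gap below a row's id
-- is proportional to that row's weight.
CREATE TABLE Weighted (
  id      BIGINT PRIMARY KEY,   -- cumulative weight up to and including this element
  weight  INT NOT NULL,
  payload VARCHAR(255)          -- the element's actual properties
);

-- Insert: the new id is the previous maximum id plus the element's weight.
INSERT INTO Weighted (id, weight, payload)
SELECT COALESCE(MAX(id), 0) + 5, 5, 'some element' FROM Weighted;

-- Query: pick a random point in the id range and return the row with the
-- nearest lower (or equal) id, as the thread suggests.
SELECT w.*
FROM Weighted w,
     (SELECT FLOOR(MIN(id) + RAND() * (MAX(id) - MIN(id))) AS r FROM Weighted) t
WHERE w.id <= t.r
ORDER BY w.id DESC
LIMIT 1;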

Needless to say, this solution scales poorly with a high element count and is outright unacceptable in real-time web applications. So I was forced to put together a solution of my own.

Scratching the surface

The first idea that popped into my mind was an SQL approach with two tables: ‘Elements’ and ‘Multiplicity’. In ‘Elements’ go all the properties of the actual elements of the distribution, one row for each. ‘Multiplicity’ holds one row per unit of weight, and rows are inserted or deleted each time an element’s weight changes. A query is as simple as this (MySQL):


SELECT * FROM Elements NATURAL JOIN Multiplicity ORDER BY RAND() LIMIT 1

(The join is there only to show that it’s the information in Elements that carries value for us. I could just as well select from Multiplicity and then look up the corresponding element by key. Also, I looked past the inefficiency of ORDER BY RAND() for the sake of compactness.)
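
For concreteness, the two tables could be shaped roughly as follows; the column names are my own assumptions. The shared element_id column is what makes the NATURAL JOIN above work, and a weight change simply adds or removes rows in Multiplicity.

CREATE TABLE Elements (
  element_id INT PRIMARY KEY,
  payload    VARCHAR(255)        -- the actual properties of the element
);

CREATE TABLE Multiplicity (
  row_id     INT AUTO_INCREMENT PRIMARY KEY,
  element_id INT NOT NULL        -- one row per unit of the element's weight
);

-- Raising element 42 from weight 3 to weight 5 means inserting two more rows;
-- lowering its weight would mean deleting rows instead.
INSERT INTO Multiplicity (element_id) VALUES (42), (42);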

While the above does work, there are still problems. For one, only integer weights are supported. But I could live with that. The real problem is that every single unit of weight is another row in Multiplicity. Imagine just 1 million rows in Elements, each with an average weight of 1000: that adds three orders of magnitude to my data storage needs. No doubt this concept also fails at large scale, albeit for a different reason.

One that works

So, what’s a solution that is fast and doesn’t consume the entire Internet? Right after the SQL approach I started experimenting with something similar to the first solution (inserting and querying work the same way as there), but I already knew that updating a row’s weight was not an option. And that was the key: I cannot update the weight in a row, but I can update the element’s properties.

What I do is insert the same element again with the new weight, and update the original row by removing all element properties and leaving only the weight, thus creating an empty but weighted row. This empty row becomes a placeholder for a future element of the same weight. So before inserting an element, I first check for a placeholder and insert only if there is none; otherwise I just update the placeholder. This way the IDs and weights remain consistent, and the method is applicable to real-time applications even with huge datasets.
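
To make the idea concrete, here is a rough MySQL sketch of a weight change under this scheme, reusing the assumed single-table layout from the first sketch (id encoding the running weight total) and marking placeholders with a NULL payload. The concrete ids, weights, and values are illustrative assumptions, not the original implementation.

-- 1. Blank out the element's current row (say id = 1234), keeping its id and
--    weight, so it becomes an empty but weighted placeholder.
UPDATE Weighted SET payload = NULL WHERE id = 1234;

-- 2. Re-insert the element under its new weight (8 here), reusing an existing
--    placeholder that already carries that weight, if there is one.
UPDATE Weighted SET payload = 'element properties'
WHERE payload IS NULL AND weight = 8
LIMIT 1;

-- 3. Only if step 2 matched no row, append a brand-new row, its id computed
--    exactly as in the original insert rule.
INSERT INTO Weighted (id, weight, payload)
SELECT MAX(id) + 8, 8, 'element properties' FROM Weighted;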

There are only two minor problems. First, to keep the number of placeholders low, only a predefined set of weights may be used: powers of two, Fibonacci numbers, et cetera. Second, placeholders are counted in the querying process too, distorting the relative weights of real elements. But this is easily resolved by re-running the query until it returns a real element.