Posts Tagged ‘Thought’

Thinking About Thinking: Your Inner Avatar and You.

November 6, 2009

I don’t think the Wikipedia definition of “Avatar” does the concept justice.

When you enter a virtual space – such as a video game or an online community – you do so through a kind of logical entity which maps your actions into the designated space (for instance, you hit a button and your guy fires his rocket launcher) and which in turn maps the virtual actions in that space and presents them back to you (for instance, if your character dies, your screen might turn all red and say ‘You are dead’).

In that sense, an avatar isn’t just a representation of you, but a medium for you, allowing you to interact in other, artificial worlds. An avatar is used to make you – your identity, your thoughts and actions – extend beyond the physical world and into the logical one.
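If it helps to think of it in programming terms, an avatar is basically a two-way translation layer between you and the virtual space. Here’s a minimal sketch of that idea in Python – every name and event in it is invented for illustration, not taken from any real game:

class World:
    def fire_rocket(self, avatar):
        print("Rocket fired!")
        avatar.notify("died")  # toy consequence, just to show the return path

class Avatar:
    """A two-way mapping between a player and a virtual world (toy sketch)."""
    def __init__(self, world):
        self.world = world

    # Outbound: a physical input becomes an action in the virtual space.
    def press(self, button):
        if button == "fire":
            self.world.fire_rocket(self)

    # Inbound: a virtual event becomes something the player perceives.
    def notify(self, event):
        if event == "died":
            print("You are dead")  # e.g. the screen turns red

player = Avatar(World())
player.press("fire")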

Seen that way, the avatar is not just a concept in computers. It’s a strong aspect of people’s everyday lives.

Whenever you construct yourself in any artificial or imaginary space, and then use that extension of yourself to interact in that space, you’re making use of your avatar. A good example might be the fictional character type of the “Mary Sue” – an authorial self-insertion into a story for wish fulfillment purposes. Here, the avatar exists in a world created, in part or in whole, by the avatar’s designer.

And yet even that, I think, only represents a small fraction of our everyday avatar usage. I think, rather, that the ultimate origin of the concept of the avatar is our own imaginations.

Imagining ourselves in a different situation, to envision how we would act, or how we should or would like to act, is the most fundamental and vivid use of the concept of the avatar – our inner avatar, if you will.

Which all finally leads me to the point of this – thoughts on how to be better aware of that inner avatar and how to better use it.

Know when to be realistic versus being idealistic: It may be fun to imagine yourself effortlessly accomplishing everything in front of you, but you don’t necessarily benefit from that mental exercise.

Framing a challenge for your avatar in detail, as the challenge you expect it to be, can provide intellectual or behavioral insight. It can show you how to behave, or it can show you what information you may be missing about the challenge.

But don’t think that it’s never beneficial to use your avatar in a powerfully emotional manner, even if the scenario you envision isn’t very likely. The insights you gain from such an exercise are emotional ones. For instance, you can use your avatar in this way to bolster yourself against your fears and anxieties, or to energize yourself to take action on behalf of someone else.

Understand when you want to use your inner avatar for immersion or interaction with a fictional world: This is a choice similar to the above realistic/idealistic choice, but tweaked a bit for the relevant context.

If you don’t do this already, don’t be afraid to place your avatar into the fictional worlds you’re experiencing – movies, TV shows, books, comics, anything. If there is a fictional story, you can imagine yourself interacting with it, to your ultimate enrichment.

Using your avatar for immersion falls under the ‘Mary Sue’ form of self-idealization that I mentioned earlier in the article. It increases your emotional involvement in a fictional environment, and thus increases the energy your brain spends drawing potentially useful information from that environment.

Using your avatar for interaction, however, is more like constructing a video game in your head and then operating your inner avatar as a character in that game, obeying the internal rules of the universe. By doing so, you increase your intellectual understanding of that fictional environment and its rules, allowing you to better explore any themes the author might have written into the work.

Anyway, those are my thoughts on the subject. Now, if you will excuse me, I’m going to go imagine I’m an ancient Greek hero.


Thinking About Thinking: Start Thinking About What You’re Thinking!

October 24, 2009

I’d like to suggest a new concept today. Well, kind of. It’s kind of an addition to an old idea, too.

Imagine what happens when you say something to someone else. If they’re paying attention, they remember what you said, and those words and ideas are, in a sense, absorbed into them. After that, everything they do and say is influenced, to some degree (however potentially small), by what they heard. And everything they say to other people becomes part of those people in turn, propagating not only their own ideas, but the influence your ideas had upon theirs.

Thus, everything you do and say can potentially, over time, influence any and every other person in the human species, to some degree. Ideas don’t just get propagated consciously; they eventually become sublimated to some degree into the behavior of many more individuals, perhaps to emerge without prompting, or even without the understanding of the person taking the action.

This produces a massively intricate system of human thoughts and behaviors, kind of like a cognitive ecosystem. Unlike most ecosystems, however, this one is entirely artificial, created by people, for people. Yet, in many ways, it behaves similarly.

A natural ecosystem is a robust thing, able to absorb and adapt to radical changes, though individual parts of it may disintegrate. The components of an ecosystem, however, grow more delicate the more complex the ecosystem gets. And ecosystems tend to change slowly but respond to sufficient change quickly – changes build up toward a critical mass that starts a chain reaction, which, when the criteria are met, flashes through the ecosystem.

Our mental ecosystem behaves similarly in all these ways, right down to the fascinating parallels with punctuated equilibrium. And this is the important part, as it implies that for something really big to happen in our society, the idea has to sufficiently suffuse itself among us – a process that basically requires us to talk with each other about it a lot, until eventually, one day, when we try to talk about it, we find that we all already agree with each other. And from there, we do something about it, all at once.
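To make that buildup-then-flash dynamic concrete, here’s a toy simulation in Python, loosely in the spirit of threshold models of collective behavior. Every number in it is invented for illustration: each step, one person adopts an idea on their own, and anyone with three or more adopters among their ten contacts converts too. The count creeps along for a while, then flashes through the whole population at once.

import random

# Toy cascade model (all numbers invented). One person adopts the idea on
# their own each step (the slow buildup); anyone with three or more adopters
# among their ten contacts converts as well (the chain reaction).
random.seed(1)
n = 200
contacts = {i: random.sample(range(n), 10) for i in range(n)}
adopted = set()

for step in range(40):
    adopted.add(random.randrange(n))      # slow, steady buildup
    while True:                           # let any chain reaction run to completion
        new = {i for i in range(n) if i not in adopted
               and sum(c in adopted for c in contacts[i]) >= 3}
        if not new:
            break
        adopted |= new
    print(f"step {step:2d}: {len(adopted):3d} adopters")
    if len(adopted) == n:
        break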

So, what does all this mean, you ask? It means that if our minds, our culture, are like an ecosystem, then we desperately need some way to control pollution. And that control has to come from us as individuals.

It means that you should think about every word you say.

And to do that, it means you need to understand every word you say. If you’re about to say something, you need to ask yourself, “Do I know this is true, or is it just something I heard but never thought about? How do I know it’s true? What does it mean if it’s true?”

If you don’t know something is true, then don’t say it – or at least, make clear that you don’t know if it’s true. Always be able to answer the question, “But how do you know that?”, and always be willing to ask it of others.

The same applies to everything you do, too. “What’s the reason I do this? Do I know it’s a good reason? How do I know it’s a good reason?”

If you don’t have a good reason to do something, then don’t do it – or at least, do it with the understanding that you might not be doing the right thing. Always be able to answer the question, “Why do you act like that?”, and always be willing to ask it of others.

It also means that we might not see the impact of what we’re doing – at least, not immediately. But thoughts and actions build up inertia, until eventually they burst, all of a sudden, into very real and very visible effects.

Thinking About Thinking: The Forer Effect.

September 20, 2009

Courtesy of Wikipedia:

In 1948, psychologist Bertram R. Forer gave a personality test to his students. Afterward, he told his students they were each receiving a unique personality analysis that was based on the test’s results and to rate their analysis on a scale of 0 (very poor) to 5 (excellent) on how well it applied to themselves. In reality, each received the same analysis:

You have a need for other people to like and admire you, and yet you tend to be critical of yourself. While you have some personality weaknesses you are generally able to compensate for them. You have considerable unused capacity that you have not turned to your advantage. Disciplined and self-controlled on the outside, you tend to be worrisome and insecure on the inside. At times you have serious doubts as to whether you have made the right decision or done the right thing. You prefer a certain amount of change and variety and become dissatisfied when hemmed in by restrictions and limitations. You also pride yourself as an independent thinker; and do not accept others’ statements without satisfactory proof. But you have found it unwise to be too frank in revealing yourself to others. At times you are extroverted, affable, and sociable, while at other times you are introverted, wary, and reserved. Some of your aspirations tend to be rather unrealistic.

On average, the rating was 4.26, but only after the ratings were turned in was it revealed that each student had received identical copies assembled by Forer from various horoscopes.

To summarize, the Forer Effect describes how people take things they hear – particularly positive things – and believe they apply to them, for the sole reason that they think they’re supposed to apply to them.

The most obvious consequence of the Forer Effect is that it encourages people to believe in systems that tell them things about themselves, even if those systems make no sense.

A couple of examples of obviously Forer-influenced systems:

  • Horoscopes – a system about how the stars at the time of your birth dictate things about you.
  • Japanese blood type personality prediction – a system about how your blood type dictates things about you. (The Wikipedia article on it reads as though written by a non-native English speaker.)
  • Phrenology – a system about how the bumps on your head dictate things about you.

The principle seems pretty simple: Humans have this little quirk, this ‘feature’, that can make us believe in silly things because they tell us things that we’re hardwired to want to hear and that we’re inherently inclined to believe.

But it’s a bit more significant than that. Vague statements make the effect more obvious, but the key to the effect is not the vagueness, but the impression of legitimacy of the source, which is what makes us inherently more likely to believe it.

Since the Forer effect applies to any system that people believe legitimate and that tells them things about themselves, this behavioral quirk extends to all systems which humans use to describe their own behavior.

This is a bit more significant, ’cause there are a lot of those systems, and some are considered pretty serious science in this day and age.

Examples of systems to which the Forer effect applies that are (more) widely considered legitimate include:

  • The Myers-Briggs Type Indicator – a personality inventory that, given a wide battery of tests about what you prefer doing, what you like or don’t like, etc., tells you a bunch of stuff about yourself. Ditto any other personality inventory.
  • Pretty much any measurement of Emotional Intelligence, by definition, is going to end up telling you a lot about yourself.
  • While we’re talking about intelligence, IQ testing itself invokes the effect – it tells you about yourself and it’s very widely considered to be legitimate in doing so.

Even what seem to be very simple concepts invoke this inherently irrational feature in our behavior – introversion and extroversion, imagination and practicality, intelligence, wisdom…

It’s a phenomenon intricately tied into how we see ourselves – every time someone or something we trust says something about us, we’re inclined to believe it for no other reason than that it was said about us.

Given all that, how much of our self-identities really describe us, versus just being stuff we’ve been told describes us?

Anyway, all that philosophical navel-gazing aside, I think there’s a very practical application of this effect.

Human beings aren’t just inclined to believe things we think are about us. We enjoy hearing things that are about us. Invoking the Forer effect, even in nonsensical ways, appears to be emotionally and intellectually stimulating, and something that we inherently seek out.

My pet theory is that this is because being told things about ourselves is good for us – emotionally and mentally. It strengthens our self-identity (for better or for worse, admittedly), and it ingrains concepts in our minds by associating them with something we obviously consider important: ourselves.

So being told about ourselves, even if those things are just outright false, can sharpen our focus and broaden our horizons – which I also imagine is why so many people tell themselves things about themselves so often.

So, knowing all this, why not use it? Take fancy personality, intelligence, and emotional intelligence tests. Read horoscopes. Find out which of the four humours you are (I’m phlegmatic!). Take the second half of this article seriously.

It’s good for you. Probably. After all, what’re the chances all the students in Dr. Forer’s class are wrong?

The Necessity of Mortality

August 29, 2009

Think of your parents. Compared to you, how well do they fit into modern society? Do they know how to use their computers? Do they even have computers?

Now one step further. If you knew your grandparents, how about them? Did they ever own a computer? Or maybe they stick out in a more noticeable sense – maybe they don’t believe people should marry outside their race, for example, or believe that <enter religion or lack thereof here> is evil and Unamerican and all its adherents should be exterminated, or maybe they hold a number of wildly fictional beliefs about the world around them.

Now let’s go back to our great-grandparents. Maybe they lived with, and were okay with, slavery, or women not being able to vote. Maybe they thought the world was six thousand years old or that space was a painted shell around the earth or something like that (okay, I admit, I’m exaggerating a bit for effect).

My point here is that sometimes people learn things they can’t unlearn. Our ability as human beings to relearn as the world and our society change around us is limited, and we tend to leave our elderly behind, as our children will leave us behind in turn.

Within a couple generations, we’re likely to discover a way to attain clinical immortality – that is to say, diseases will no longer be able to kill us, and we may also overcome the process of aging itself.

Now, that introduces problems, of course – the earth can only hold so many people, for instance. But I think the biggest problem with that would be cultural.

What if, in America, all our great-grandparents were alive to vote in this last presidential election? Would the black or female candidates have had any chance? If we went back another generation or two, we’d need to ask if black people or women would even be able to vote.

Since the start of history, human culture has progressed continuously. And one of the primary motivators of that change – perhaps the primary motivator, and certainly a vital one – is that old people die.

Without old people dying, essentially getting out of the way for better-educated, slightly wiser future generations to take the reins of civilization, we would almost certainly stagnate.

This is a problem made worse by the fact that the first individuals likely to benefit from immortality are the ones who most need to die for society to proceed – wealthy, powerful, influential elderly.

A demographic that since time immemorial has resisted change – for themselves and for all of us – until the day they die will no longer be dying.

This is a problem compounded by other complications of immortality – what if our solution to overpopulation is to severely restrict the birthrate, causing the elderly to far outnumber young people with fresh, new ideas? There might not even be any change for such a methuselah population to resist.
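For a rough sense of that demographic squeeze, here’s a back-of-the-envelope sketch in Python. All the numbers are invented: nobody ever dies, births are tightly capped, and the share of people under 30 withers away.

# Back-of-the-envelope sketch with invented numbers: nobody dies, and
# births are severely restricted. Track the share of people under 30.
ages = [age for age in range(90) for _ in range(10)]  # 900 people, even age spread
births_per_year = 3                                   # tightly capped births

for year in range(0, 201, 50):
    young = sum(a < 30 for a in ages) / len(ages)
    print(f"year {year:3d}: {len(ages):4d} people, {young:.0%} under 30")
    for _ in range(50):                               # advance 50 years
        ages = [a + 1 for a in ages] + [0] * births_per_year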

Mind that, when I refer to the elderly, I’m not referring to our parents and grandparents. They’re very unlikely to see clinical immortality. Younger generations alive now – we are the ones most likely to be the oldest generations to experience immortality.

I see a few different possible scenarios unfolding for such a future:

  • The Methuselah Scenario: Mankind attains clinical immortality and deals with the subsequent sustainability problem by severely restricting new births. The elderly hold the overwhelming majority of economic and social power in civilization, and have hundreds of years to indoctrinate the youthful minority into the exact same civilization. Our progress as a society slows to a snail’s pace as the elderly are still replaced through attrition, but only the very slow attrition of tragic catastrophe.
  • The Olympian Scenario: Enough youths are born to form a notable subculture. These individuals are creative, energetic, and increasingly resentful of the immortals who continue to control economic and social power. Eventually they reach their breaking point and attempt to impose a new order on society by any means, likely through violence, as they lack the power to do it any other way. Eventually such a group succeeds and replaces the elderly, becoming the establishment that a new group of youths rails against, and the cycle repeats indefinitely – if civilization is lucky.
  • Planned Obsolescence: Maybe we’re just not built to be immortal. People weary of life arrange their own deaths whenever they feel ready to take that step, and clinical immortality only works out to be a significant age increase. Society continues to progress, perhaps a bit slower than before, but not at an intolerable rate.
  • Future Generations Just Deal With It: As a civilization, we also figure out a way to relearn and/or reeducate ourselves sufficiently such that our brains become culturally sustainable across both young and old generations. I don’t see any living generation pulling that off, but maybe I’m just not being imaginative enough.

It could well be in our power to place our collective hands upon the march of human history, and with our undying strength, force that march to halt. Should we allow that to happen?

Thinking About Thinking: This Stuff Is Hard!

July 3, 2009

So, as a follow-up to one of my earlier Thinking About Thinking posts, I thought I could try classifying and describing in detail how my memory is structured, then look into how much of that might be generalizable to humanity as a whole – it might provide insight as to how we identify and treat our own ideas, which would be nifty.

Yeah, turns out, that’s not easy. In fact, it’s really hard to do. Brain-hurtingly hard. But it nonetheless seems interesting enough to keep working at.

Sooo, might not be making too many other blog posts until I hammer that whole thing out.

(I guess this post would be the first time I’m using the blog format as an actual blog, rather than as a dumping ground for my brain. Go figure)

An Idea for a Study on Disgust in Regards to Cognition

June 13, 2009

Establish two groups, a control group and an experimental group.

Show both groups a series of pictures, both disgusting and not disgusting, and ask the individuals how disgusting they found each picture, to get a baseline. (100 pictures – 50 more disgusting and 50 less or not disgusting – sounds like a good set. Experimenters would be expected to eyeball the pictures for pre-baseline categorization, and reevaluate them after the baseline based on the mean disgust ratings.)

Then, ask the persons in the control group to answer detailed questions about the pictures found less or not disgusting (providing them copies of the pictures for analysis), and meanwhile ask the persons in the experimental group to answer similar questions about the pictures found more disgusting.

Let’s say, 10 pictures with 3 questions regarding each, the pictures being selected randomly per individual.

Afterwards, show both groups a second set of pictures and ask the individuals how disgusting they found each picture.

I suspect the experimental group will show a greater decrease in degree of disgust from the baseline measurements than the control group, despite the second set of pictures being different.

More in-depth examination along this line would involve more engaging analysis of the items found disgusting, and successively more distinct pictures in the second set, particularly in terms of content (e.g. if there are spiders in the first set, the second set should not have spiders).

The objective is to see whether repeatedly articulating one’s disgust reactions can instill a general shift in individual disgust response from a purely emotional reaction to a more reasoned, articulated one.
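For what it’s worth, the core comparison at the end would be simple to compute. A sketch in Python, with every rating invented purely for illustration:

from statistics import mean

# Hypothetical sketch of the proposed comparison (all ratings invented).
# Each participant rates pictures for disgust before and after the
# question-answering task; compare the average shift across groups.
def disgust_shift(baseline, followup):
    """Mean change in one participant's disgust ratings (second set minus first)."""
    return mean(followup) - mean(baseline)

# Invented ratings on a 0-10 disgust scale, two participants per group.
control = [disgust_shift([7, 8, 6], [7, 8, 6]),
           disgust_shift([6, 9, 7], [6, 8, 7])]
experimental = [disgust_shift([7, 8, 6], [5, 6, 4]),
                disgust_shift([6, 9, 7], [4, 7, 5])]

# The prediction: the experimental group's shift is more negative.
print(f"control mean shift:      {mean(control):+.2f}")
print(f"experimental mean shift: {mean(experimental):+.2f}")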

The Cognitive Power of Belief

May 29, 2009

There are people who believe in very, very many things out there. That medicine can become more powerful the less of it you take, that Atlantis was a super-advanced civilization that interacted with aliens, that the universe is six thousand and change years old, that the leader of a group is in fact a god come to earth, that everything, including natural objects like rivers and mountains, has a soul, that our government is secretly working against us all, that our government isn’t working against us all… and so on. The gamut of human belief is absurdly wide, and the depth of human belief is similarly stunning.

I can’t help but think there’s something to that.

It seems clear from these beliefs that human beings are not rational creatures – that we function using some other fundamental thought method which only incidentally supports rationality. But what method could that be?

I’m of the personal opinion that what we describe as ‘spiritualism’, ‘mysticism’, ‘religiosity’, and so on (henceforth to be called ‘mysticism’ ’cause I think the word sounds cool) functions as a kind of emotional interface to our conscious, cognitive, and linguistic abilities – a system of triggers that, when activated, causes us to learn associated ideas and behaviors quickly and persistently. So persistently, in fact, that we could have difficulty overcoming them consciously even if on a rational level we know they are incorrect.

The phenomenon of cognitive dissonance in regard to mysticism particularly fascinates me. What I imagine is that something learned with strong facilitation by such a ‘mystical’ drive is something well-learned indeed, and unlikely to be overridden except through use of a comparably powerful cognitive tool.

All in all, it seems that mysticism, long used by hucksters and cult leaders to mislead and confuse people, could be used as a tool for good – and considering its power, used very potently indeed.

The first question would regard what we could use mysticism for – to that end, I would posit that the collective functions of mysticism serve two purposes for the human mind: to describe the world around us in an intricate and emotionally engaging way (descriptive mysticism), and to prescribe to us correct behaviors on a largely emotional level (prescriptive mysticism).

It seems to me the bigger question, though, is precisely how to go about using mysticism constructively, a task which additionally entails coming to understand mystical functions in the human mind in greater depth.

I’m musing (pun appreciated, but not intended) on that one.

Thinking About Thinking: Relational Databases and You (You being anybody)

May 25, 2009

Okay, first a bit of background information.

A relational database is a method computers use to organize information (based on the relational model). Information is organized into ‘Tables’ by content (each Table represents a single subject of information), and generally each piece of unique information is given a corresponding identifier called a ‘Primary Key’.

A quick example, number of days in a month:
Month       #Days
January     31
February    28 (29)
March       31
April       30
May         31
June        30
July        31
August      31
September   30
October     31
November    30
December    31

This table organizes the months and contains how many days are in each month.

Relational data is awesome. Its neat organizational structure allows for quick recall of detailed facts and sophisticated examinations of massive amounts of data – even when the data has to be calculated from scratch*.
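To make that concrete, here’s roughly what the months table above looks like in an actual relational database – a small sketch using Python’s built-in SQLite module:

import sqlite3

# A small sketch of the months table in a real relational database.
# The month name serves as the primary key: it uniquely identifies each row.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE months (name TEXT PRIMARY KEY, days INTEGER)")
db.executemany("INSERT INTO months VALUES (?, ?)", [
    ("January", 31), ("February", 28), ("March", 31), ("April", 30),
    ("May", 31), ("June", 30), ("July", 31), ("August", 31),
    ("September", 30), ("October", 31), ("November", 30), ("December", 31),
])

# Quick recall of a detailed fact...
print(db.execute("SELECT days FROM months WHERE name = 'September'").fetchone())

# ...and an examination calculated from scratch: total days in a common year.
print(db.execute("SELECT SUM(days) FROM months").fetchone())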

Humans don’t naturally think in relational terms (well, not efficiently, and not usually by default, anyway) – but we can learn to do so, either by ourselves or by picking it up from other people.

It’s all about internally organizing the way you think in a couple different ways.

First, group up things you know into specific subjects in which each bit of information has a unique identifier.

To extend the earlier example, the subject there would be “Months of the (modern Gregorian calendar) year”. Note all the implied information in “Months of the Year” – if you were to learn a different calendar than the modern Gregorian one, you would have to split that into two different subjects. Expect to reevaluate how you divide what you know into categories as you learn more about things and thus have to draw more distinctions between subjects to keep them well-organized in your head.

Now, for this, we need a unique identifier – in the case of our months, the names of the months should do just fine – each month has a different name from all the others (Another unique identifier for months could be the order in which each appears on the calendar). That unique identifier is the subject’s “Key”.

Note that the identifier doesn’t have to be the same as the subject: If you were to memorize each month based on their order rather than their names, the subject would still be months, even though you would think more like, “The first month in the year is named January” instead of “January is the first month in the year.” You could even use both as unique identifiers at the same time (and if you have to deal with the months of the year frequently enough, you probably do).

Second, group up as much information as you can into those subjects.

To extend the months example, you don’t have to keep track of just the months’ names and how many days are in each. You could make it look more like:

Month       Order in Year   #Days     Season
January     1               31        Winter
February    2               28 (29)   Winter
March       3               31        Spring
April       4               30        Spring
May         5               31        Spring
June        6               30        Summer
July        7               31        Summer
August      8               31        Summer
September   9               30        Fall
October     10              31        Fall
November    11              30        Fall
December    12              31        Winter

Third, well, from there it’s all a matter of how you think.

When you learn new information, restate it to yourself in a manner consistent with how you’re storing information (“January is a Month. January is the first month of the year. January has 31 days. January is in Winter.”)

When you recall information, try to access it by asking yourself questions that access data based on however you stored it. So if someone asks, “What’s the month after March?” and you don’t have that bit of information memorized, you can break it down into: “Which month is March? (Third)” followed by “What’s the fourth month? (April)”
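In code terms, that two-step recall is just the same facts stored under two different unique identifiers, so one keyed lookup can chain into the next. An illustrative Python sketch:

# The same facts keyed two ways, so recall can chain lookups together.
month_names = ["January", "February", "March", "April", "May", "June",
               "July", "August", "September", "October", "November", "December"]

order_by_name = {name: i for i, name in enumerate(month_names, start=1)}
name_by_order = {i: name for i, name in enumerate(month_names, start=1)}

# "What's the month after March?" broken into two keyed lookups:
order = order_by_name["March"]      # Which month is March? (third)
print(name_by_order[order + 1])     # What's the fourth month? (April)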

Of course, this might not work for everyone. This is just something I do to organize information I learn, and it doesn’t work out badly. On the other hand, you might already do this sort of thing subconsciously.

Mind also that there are tons of other ways to store and recall information, some of which work better than this sometimes (like a catchy rhyme for learning a one-off fact, for instance). It’s one of the awesome features of the brain that we can employ multiple different setups to be able to store and recall information.

* Describing in depth the power of the relational data structure is well beyond the scope of this article – this article is just about touching upon the ideas involved, not going into full depth on them.

Thinking About Thinking: Why you should talk to yourself (or keep it up if you do already)

May 21, 2009

It’s actually really simple: We remember things we perceive. We also remember things we think.

When you say something to yourself, you’re simultaneously thinking it and perceiving it – so you’re more likely to remember it. Ditto with writing it down or typing it out on something, even if you never happen to read whatever you wrote again.

And it doesn’t have to be socially awkward, either – if you’re in a place where it would be inappropriate to talk to yourself, you can (and probably do) resort to imagining that you’re talking to yourself. Writing something down generally doesn’t seem as creepy as mumbling to nobody, so that can work too.

Now, if you find you need to talk to yourself to remember things on a regular basis, that might be a problem – one with your memory. It might merit talking to a professional (frankly, though, memory isn’t all too well understood, so you might not get anything from it). It might also be worth keeping a diary.

Thinking About Thinking: How to know what’s worth knowing?

May 17, 2009

Knowledge is power. But it’s power that you need knowledge in order to use – being able to evaluate information sources without bias, and being able to select sources to obtain further information from, are both learned skills.

Those are really hard skills to learn – time-consuming and chancy to acquire, but ultimately really important. Being able to trust what you know, and being able to learn more things effectively, is vital to being able to do things and get what you want out of life, no matter what that may be (after all, we make all our decisions based on what we know, and if you don’t know what’s going on, your decisions could easily be wrong).

So I thought I’d write down a quick starter’s guide to ‘knowing knowledge’, as it were – something that wouldn’t require too much time and energy but can still get fairly good results.

Anyway:

First, acknowledge the problem: There’s a lot of information out there and you can’t be sure what’s true and what isn’t.

Now, you could investigate and analyze each and every bit of information you get, like some sort of scientist (frankly, not even scientists do that). But let’s be honest, here: Nobody has that sort of time to spend just on going over information. So the trick is to save time and energy while still being able to trust what you know.

Ultimately, it’s all about trust. All information comes from somewhere, and you need to be able to get information from places you can trust. That makes your decisions about who and what you place your trust in, who and what you believe, vital to understanding the world around you.

Because you’re trusting information based on where you get it, you need to remember that all information comes from somewhere. If you have a good friend who relates to you something they heard from someone you don’t trust, should you trust the information just because your friend is saying it?

This ties into something else you should remember: no matter how much you trust someone, they may not be right. This is of importance since you will encounter information that contradicts information you already have, and sometimes this will merit closer inspection.

This inspection is important, since it helps give strong support to your decisions to trust or not trust someone or something for information. So ultimately, even though you try to avoid needing to do it all the time, sometimes you have to really research something.

How you research something is important. I recommend something like the scientific method (it can’t be the same, ultimately, since you’re probably already starting with a ‘hypothesis’):

  • Start with the information you have.
  • Forget where the information came from – while you’re testing information, your trust of its source doesn’t matter.
  • Ask yourself, “Is this information disagreeing with information from other sources, or with information that I’m generating myself?” All information comes from somewhere, after all, and you’re just about the most trustworthy source of it you have.
  • See if you can set up a test for the information – something that can allow you to see for yourself if something is true or false. Mind that sometimes tests are only conclusive in one way, and that sometimes multiple tests, each testing the information in a different way, are best.
  • And remember these two things: An argument might mean someone is wrong, but it doesn’t always mean anyone is right. There’s nothing wrong with saying “I don’t know”, judging that you lack the information to say if something is true or false.

If you find that something you learned was wrong, don’t just stop trusting that information source outright. For instance, the information you gained might have been second-hand, and your source conveyed it to you out of trust for their source. When you find out something you heard was wrong, look into why it was wrong, and how that information made it to you.

When you do find something wrong with information you got directly from a source, it might be a good idea to look into other information from that source as well. Remember, it’s all about evaluating how much trust your information sources deserve.

Now, something to keep in mind: Try to have multiple different sources of information. And remember, all information comes from somewhere – having two sources of information that both come from the same place does not count!

The reason you need multiple sources of information is that you can only tell what information needs to be double-checked when it disagrees with something else. So you might have to go looking for information that disagrees with what you know.

And a last bit of actual, practical advice: Don’t make decisions based on information you only got from one source, unless you really trust it.

That’s it. I’d consider this a ‘beta’ version – there’s no doubt extensive room for improvement in readability, and probably in the method itself. I just wanted to get down what I thought up at the time.