Showing posts with label philosophy.

Friday, November 08, 2013

Creativity Quote of the Day: on Doubting

If you tried to doubt everything you would not get as far as doubting anything. The game of doubting itself presupposes certainty.
The child learns by believing the adult. Doubt comes after belief.

- Wittgenstein.

Sunday, July 15, 2012

Predicting people's future locations with mobile data.

MIT Tech Review reports on an algorithm that allows mobile tracking systems to predict our future locations:
Beyond merely tracking where you've been and where you are, your smartphone might soon actually know where you are going—in part by recording what your friends do.

Researchers in the U.K. have come up with an algorithm that follows your own mobility patterns and adjusts for anomalies by factoring in the patterns of people in your social group (defined as people who are mutual contacts on each other's smartphones).

The method is remarkably accurate. In a study on 200 people willing to be tracked, the system was, on average, less than 20 meters off when it predicted where any given person would be 24 hours later. The average error was 1,000 meters when the same system tried to predict a person's direction using only that person's past movements and not also those of his friends, says Mirco Musolesi, a computer scientist at the University of Birmingham who led the study.
From a philosophical point of view, in a dense social network one's freedom of the will seems to be quite limited.
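
Just to make the idea concrete, here is a very rough sketch of how such a prediction might blend a person's own mobility history with friends' patterns. It is purely illustrative; the function and variable names are mine, not the researchers', and this is not Musolesi's actual algorithm.

# Illustrative sketch only -- not the researchers' actual algorithm.
# Idea: estimate tomorrow's position as a weighted blend of a person's own
# historical positions (for the same hour) and the positions of their
# phone-book friends, weighted by how correlated their movements are.
import numpy as np

def predict_location(own_history, friends_histories, friend_weights, alpha=0.7):
    """own_history: array of shape (days, 2) with past (lat, lon) at the target hour.
    friends_histories: list of such arrays, one per friend.
    friend_weights: per-friend weights (e.g., movement correlation), summing to 1.
    alpha: how much to trust one's own routine vs. the social signal."""
    own_estimate = own_history.mean(axis=0)
    social_estimate = sum(w * h.mean(axis=0)
                          for w, h in zip(friend_weights, friends_histories))
    return alpha * own_estimate + (1 - alpha) * social_estimate

# Example: my routine says I'm near one spot at 9am; my friends' routines
# pull the estimate slightly toward where the group tends to be.
me = np.array([[51.450, -2.580], [51.451, -2.579], [51.449, -2.581]])
friends = [np.array([[51.455, -2.570], [51.456, -2.571]]),
           np.array([[51.460, -2.565], [51.459, -2.566]])]
print(predict_location(me, friends, friend_weights=[0.6, 0.4]))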

tags: social, networking, mobile, detection, control, aboutness

Saturday, January 07, 2012

Lunch Talk: (TED) Dan Dennett on human consciousness

Philosopher Dan Dennett makes a compelling argument that not only don't we understand our own consciousness, but that half the time our brains are actively fooling us.



link.

Sunday, January 01, 2012

How Anesthesia Changes the Mind.

Studying the human brain under anesthesia presents not only ethical, scientific, and philosophical problems, but also some down-to-earth technological challenges. For example, it is difficult to collect and correlate different types of vital signs, from brain imaging as well as from more traditional monitoring techniques. Here's how engineers and scientists managed to solve the problem.
January 1, 2012. MTR -- Brain imaging in human subjects undergoing anesthesia is tricky because it requires anesthetizing people within a scanner and outside a normal operating room. Brown and his colleagues found a way to solve the technical and safety problems: they recruited volunteers who had already received tracheostomies, or surgical holes in the throat. That meant a tube could readily be used to restore their breathing in an emergency. In 2009, the researchers demonstrated that they could safely record both EEG and fMRI data on people under anesthesia; now they are working to correlate the imaging and EEG data with the observable changes seen as patients enter an anesthetized state.
Rather than "modifying" people to expose their key vital signs, they found people who had already been "modified" for other purposes. (This approach is generally outlined in Principles 9 through 11 of the classical TRIZ problem-solving recommendations; these are instances of Separation in Time among the dilemma-resolution techniques.)
Anesthesia studies have already cast doubt on one popular theory, which links consciousness to a particular type of brain wave with a frequency around 40 hertz. Mashour points out that research in anesthesia shows these waves can exist even when patients are unconscious. But the patterns that anesthesiologists see do support another theory: that consciousness emerges from the integration of information across large networks in the brain. 
I wonder how much of our "everything is a network" thinking is determined by everyday exposure to the Internet. The brain is much more than a network, but we don't have the right words to describe it yet.


tags: mind, brain, biology, philosophy, problem, solution, triz

Monday, December 19, 2011

Invention of the Day: Systematic Doubt.


Descartes (1596-1650), the founder of modern philosophy, invented a method which may still be used with profit—the method of systematic doubt. He determined that he would believe nothing which he did not see quite clearly and distinctly to be true.
Whatever he could bring himself to doubt, he would doubt, until he saw reason for not doubting it. By applying this method he gradually became convinced that the only existence of which he could be quite certain was his own.
He imagined a deceitful demon, who presented unreal things to his senses in a perpetual phantasmagoria; it might be very improbable that such a demon existed, but still it was possible, and therefore doubt concerning things perceived by the senses was possible. (Bertrand Russell. Problems of Philosophy.)

There seems to be a natural tension between the systematic doubt, or reductionism, invented by René Descartes and the "reality distortion field" perfected by Thomas Edison and Steve Jobs. The two approaches require different kinds of creativity.

doubt <--------------------------------o--------------------------------> faith

As a side note, I can see how Descartes personalizes an abstract problem by presenting it as a deceitful demon. Similarly, Maxwell personalized a problem in thermodynamics with another demon. Einstein came up with his theory of relativity by imagining somebody riding on a particle moving at the speed of light. Schrödinger had his cat, Altshuller his tiny mighty men, Kahneman his Systems 1 and 2.

Among all of them, Kahneman, the psychologist, used this approach consciously and deliberately. In the talk I posted in this journal last month he explained that people intuitively understand agents and spaces, but have trouble relating to abstract distributed processes. Therefore, it is useful to invent a personalized agent to explain and understand a difficult concept.
Do you see his point? Paradoxically, we have to distort reality in order to understand it better. But after that we need somebody like Descartes to apply systematic doubt and destroy this useful but false understanding.

tags: creativity, invention, philosophy, tools, method

Tuesday, December 13, 2011

The quirky philosophy of social media.

Bertrand Russell writes about Idealism:
The first serious attempt to establish idealism .... was that of Bishop Berkeley.
He fully admits that the tree must continue to exist even when we shut our eyes or when no human being is near it. But this continued existence, he says, is due to the fact that God continues to perceive it; the 'real' tree, which corresponds to what we called the physical object, consists of ideas in the mind of God, ideas more or less like those we have when we see the tree, but differing in the fact that they are permanent in God's mind so long as the tree continues to exist. All our perceptions, according to him, consist in a partial participation in God's perceptions, and it is because of this participation that different people see more or less the same tree.
Let's run a thought experiment and think of Berkeley's 'tree' as a 'customized webpage.' On Facebook, all pages are built on demand: they are assembled on the fly from bits and pieces for a particular individual. Therefore, a customized webpage exists only while the individual invokes and perceives it. Once the individual turns off her demand for the page, it disappears. In this case, Facebook's infrastructure plays the role of God's mind, ensuring that the page will be available the next time the individual invokes it.
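
Here is a toy sketch of the analogy (the data and function names are made up for illustration, not Facebook's actual architecture): the "infrastructure" keeps only the ingredients, and the page itself exists only while someone is looking at it.

# Toy model of Berkeley's 'tree' as an on-demand page; names are hypothetical.
# The stored ingredients play the role of "God's mind"; the assembled page
# exists only while it is being perceived.
STORED_INGREDIENTS = {
    "alice": ["photo:beach", "post:quote-of-the-day", "ad:shoes"],
    "bob":   ["post:lunch", "photo:cat", "ad:camera"],
}

def render_page(user):
    """Assemble the page on the fly for this particular viewer."""
    return f"<page for {user}>" + "".join(f"<item {i}>" for i in STORED_INGREDIENTS[user])

page = render_page("alice")   # the page comes into being at the moment of perception
print(page)
del page                      # demand ends; only the ingredients remain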

When Facebook goes bankrupt or runs out of power, the world of social media disappears. All 'trees' of our perceptions are gone, just as Bishop Berkeley had predicted.

Maybe people keep coming back to their Facebook pages all the time because they are subconsciously afraid that their world will vanish without them.

Sunday, December 04, 2011

Invention as a language paradox.


Picture courtesy Gizmodo


While working on lectures for the Patent Paradox, I finally figured out (maybe) why it is so difficult to understand the true nature of the inventive process.

The language itself does not give us an easy way to express the idea that the novel device you are holding in your hand is an amalgamation of inventions. That is, you can naturally say "iPhone is an invention," despite the well-documented fact that Steve Jobs and his team put the iPhone together from a large number of ideas, rejecting some and accepting others. The process of selecting and implementing these inventions involved thousands of people and took several years. (See Steve Jobs, by Walter Isaacson.)

Nevertheless, a true statement like "iPhone is inventions" is neither grammatically nor syntactically correct. Moreover, if you make this simple true statement in public you will sound plain stupid. "The lightbulb is inventions" also sounds stupid and, despite all the facts, it is easy to say and believe that "Edison invented the lightbulb."

The same goes for statements like "the Invention of Agriculture" or "Tim Berners-Lee invented the World Wide Web," etc. As Daniel Kahneman showed, we tend to believe what feels intuitively right, not what is right. (System 1 vs. System 2.)

Back to The Patent Paradox. To explain to people why we have hundreds of patents covering iPhone-like devices and why we have patent wars between Apple, Samsung, Motorola, Nokia and others, I cannot simply say "Smartphone is inventions, therefore it is covered by hundreds and thousands of patents." No, to convey this simple true thought, I have to show a bunch of examples, explain that "unimportant" patents are as important as the "important" ones, etc.

It's remarkable that our legal system is also stuck in the wrong language paradigm and cannot deal with the fact that a successful device "is inventions." As a result, juries award multimillion-dollar settlements for one patent out of the hundreds applicable.

Background reading:
1. Walter Isaacson. Steve Jobs.
2. H. C. Anawalt. Idea Rights.
3. D. Kahneman. Thinking, Fast and Slow. (His lunch talk, posted here a week or so ago, is also very good.)
4. John Searle. Philosophy of Language. Phil 133, UC Berkeley podcast, Fall 2011.
5. Bertrand Russell. The Problems of Philosophy.

tags: philosophy, patent, control, information




Sunday, November 06, 2011

The problem with problem definitions.

I regard as no less pertinent a warning against apparent proper names having no reference. ... This lends itself to demagogic abuse as easily as ambiguity -- perhaps more easily. 'The will of the people' can serve as an example; for it is easy to establish that there is at any rate no generally accepted reference for this expression.

Gottlob Frege. On Sense and Reference. (Über Sinn und Bedeutung, 1892.)

Problem definitions frame our approach to problem solving. Bad problem definitions can make the search for a good solution very difficult or even impossible. I often find that people don't understand that putting a label on a bad situation is not enough for a problem definition. The confusion between labels and definitions is rampant in everyday thinking. For example, when Gallup formulates poll questions, it goes for simplicity rather than clarity.


We might argue whether or not Unemployment/Jobs is a problem (it is not, because unemployment is an abstraction that aggregates millions of individual cases, which most likely require different solutions), but "Economy in general" is definitely not a problem that can be solved. How do you solve the Economy? Would it be the same way we solved the "Osama bin Laden" problem, by killing it?

Thursday, October 06, 2011

A fundamental failure of imagination.

Philosopher Bertrand Russell remarks on a priori knowledge, essential for deductive [mathematical] reasoning:
When Swift invites us to consider the race of Struldbugs who never die, we are able to acquiesce in imagination. But a world where two and two make five seems quite on a different level. We feel that such a world, if there were one, would upset the whole fabric of our knowledge and reduce us to utter doubt.
Children before the age of 3 or 4 live in this wonderful world where 2+2=5. It is quite obvious to them that you can take two pieces of playdough, add another two pieces of playdough, and out of them make any natural number of playdough pieces: 5, or 1, or whatever. Then they grow up, become adults, and a simple statement like 2+2=5 throws their world into utter doubt. It is amazing how fragile the world of adults is.

tags: psychology, philosophy, logic

Friday, September 30, 2011

Problem-solving in action.

Some time ago I mentioned that to produce a high-quality solution, an inventor has to break the laws of conventional thinking. For example, even though the so-called first principle of economics states that "everything is a trade-off," breakthrough inventions destroy trade-offs rather than enforce them.

Today I found another instance of "inventing by breaking the law," this time in formal logic. Here's what Bertrand Russell writes about one of the classical laws of thought:

Let us take as an illustration the law of contradiction. This is commonly stated in the form 'Nothing can both be and not be', which is intended to express the fact that nothing can at once have and not have a given quality. Thus, for example, if a tree is a beech it cannot also be not a beech; if my table is rectangular it cannot also be not rectangular, and so on.

In contrast, classical TRIZ requires the problem-solver to break this law by formulating the problem as a dilemma: element X has property A, and element X has property anti-A. At the same time, we focus on the useful and harmful functions provided by the element, which allows us to escape the constraints imposed by existing implementations.
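
To make the formulation concrete, here is a minimal way to write such a dilemma down so that the useful and harmful functions, rather than the existing implementation, stay in view. The structure and field names below are just my own shorthand, not canonical TRIZ notation, and the packaging example is hypothetical.

# A personal shorthand for stating a TRIZ-style dilemma; not canonical notation.
from dataclasses import dataclass

@dataclass
class Dilemma:
    element: str           # element X
    property_a: str        # X must have property A ...
    property_anti_a: str   # ... and X must also have property anti-A
    useful_function: str   # what property A is needed for
    harmful_effect: str    # what property A harms (why anti-A is needed)

# Hypothetical example: a packaging wall that must be both thick and thin.
example = Dilemma(
    element="packaging wall",
    property_a="thick, to protect the product in transit",
    property_anti_a="thin, to keep weight and material cost down",
    useful_function="protect the product",
    harmful_effect="added weight and cost",
)
print(example)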

Just the other day, when I was working with a client on a problem considered to be almost insolvable, we did find a solution by systematically applying the dilemma-busting rule. Psychologically, it was very difficult. But once we managed to overcome the inertia of taking the existing implementations for granted, the solution became almost obvious.

Though I can't disclose the client's solution, I can show a case study from my Principles of Invention class. Here's an example of a real-life technology dilemma I solved to get US Patent 7,529,806.



tags: trade-off, dilemma, problem, solution, philosophy, logic, invention

Tuesday, September 20, 2011

The ancient roots of modern misconceptions.

Finally, I have found the philosophical origins of the common view that our abilities, including creativity, are predetermined at birth. In Lecture 2 of "Great ideas in psychology" Professor Daniel N. Robinson mentions that the idea goes back to Plato's Republic, a highly influential philosophical treatise written about 2,400 years ago. Here's the paragraph I believe Robinson refers to:

Citizens, we shall say to them in our tale, you are brothers, yet God has framed you differently. Some of you have the power of command, and in the composition of these he has mingled gold, wherefore also they have the greatest honour; others he has made of silver, to be auxiliaries; others again who are to be husbandmen and craftsmen he has composed of brass and iron; and the species will generally be preserved in the children.
Nevertheless, Plato did not see this as a Nature vs. Nurture issue. Rather, he believed that education was essential to the development of one's natural abilities. Over time, though, his Nature-and-Nurture approach turned into today's Nature vs. Nurture debate about IQ and the general intelligence factor.

tags: creativity, philosophy, psychology

Friday, May 20, 2011

Inventing the future

Peter Norvig, the Director of Research at Google, sums up the 10,000-hour (10-year) rule for becoming an expert:



... it takes about ten years to develop expertise in any of a wide variety of areas... The key is deliberative practice: not just doing it again and again, but challenging yourself with a task that is just beyond your current ability, trying it, analyzing your performance while and after doing it, and correcting any mistakes. Then repeat. And repeat again.


Now, if it takes 10 years to become an expert, one of the most important questions a would-be expert faces on day one of his or her 10-year term is: "In which area should I become an expert, so that my expertise will not become obsolete by the end of the full term?" One way (the best way?) to answer it is to create a new domain of expertise, as Thomas Edison, Henry Ford, James Watson, Tom Perkins, Bill Gates, Steve Jobs, Jeff Bezos, Mark Zuckerberg, and others did before.



Another question, which arises on or beyond the 10-year expertise boundary, is how to avoid the curse of knowledge, a mindset that locks one's creativity within a set of "expert" assumptions.



tags: creativity, brain, system, mind, philosophy, technology, quote, timing, inertia, psychology

Thursday, January 06, 2011

"And in her eyes you see nothing"

A study in Science magazine (via Bloomberg) shows that women's tears contain a chemical component that reduces sexual arousal in men.



We found that merely sniffing negative-emotion–related odorless tears obtained from women donors, induced reductions in sexual appeal attributed by men to pictures of women’s faces. Moreover, after sniffing such tears, men experienced reduced self-rated sexual arousal, reduced physiological measures of arousal, and reduced levels of testosterone. Finally, functional magnetic resonance imaging revealed that sniffing women's tears selectively reduced activity in brain-substrates of sexual arousal in men.

The picture shows how men in the study were made to smell tears during the experiments. You can see that under these circumstances it is impossible not to smell them, and I doubt that in everyday life we can catch the smell from a socially acceptable distance between a man and a woman. All this makes me believe that tears started as a chemical signal but later evolved into a social (visual) signal.
In any case, the study offers an insight into the brain's ability to morph a physical phenomenon into a social one.

tags: detection, science, social, networking, signal, information, philosophy, biology


DOI: 10.1126/science.1198331

Monday, December 07, 2009

Wikipedia knows everything. It even remembers the list of unusual software bugs. My favorite one is Schroedinbug:

A schroedinbug is a bug that manifests only after someone reading source code or using the program in an unusual way notices that it never should have worked in the first place, at which point the program promptly stops working for everybody until fixed.

The funny thing is that the concept is completely in agreement with John Searle's theory of social reality. The theory says that we create reality by collectively believing in it. For example, paper or any other kind of money has value only because we all believe that it has value. In football, a team gets 6 points for a touchdown because everybody, including the opposing team, agrees that a touchdown is worth exactly 6 points.

I think the schroedinbug works somewhat differently, though. Once it's noticed, people try to fix it and, due to a multitude of side effects, the whole system promptly falls apart.
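
For a toy illustration of code that "never should have worked" (a hypothetical Python example of my own, not one from the Wikipedia list): an identity check against a small integer behaves like an equality check only because CPython happens to cache small ints, so the code below can quietly "work" for years.

def is_default_port(port):
    # Should be `port == 80`. It "works" only because CPython caches small
    # integers (-5..256), so `is` happens to behave like `==` for 80.
    # Newer Pythons even emit a SyntaxWarning here. Once someone reads this
    # and realizes `is` compares identity, not value, the schroedinbug is out.
    return port is 80

print(is_default_port(80))    # True, by accident of the interpreter
print(is_default_port(8080))  # False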

tags: construction, philosophy, computers, network, background, artifact, problem