
Wednesday, September 27, 2017

Invention of the Day: Hypodermic Syringe

I'm reading a wonderful book by Roger Bridgman, 1,000 Inventions and Discoveries. It documents an incredible range of human ingenuity, from thousands of years ago to the present day. For example, here's an invention that we take for granted today: the hypodermic syringe.



Remarkably, it was invented by two people in different countries. As the book says, "[in 1853] In Scotland, physician Alexander Wood invented the hollow needle and adapted Pravaz’s device to go with it, forming the first hypodermic syringe." That is, the invention cannot be attributed to either of them separately, because the new system, the syringe, provides functionality beyond the sum of its parts. A well-defined interface between the two parts, the cylinder and the needle, enabled rapid innovation in manufacturing technologies and in use. For example, here's how hollow needles are produced today.


From an innovation timing perspective, we should note that the business success of the new injection technology was determined by a major discovery that came much later.
By the late 1800s, hypodermic syringes were widely available, but there were few injectable drugs (less than 2% of drugs in 1905). Insulin was discovered in 1921. Because the new drug could only be delivered by injection, it created a new market for manufacturers of hypodermic needles and injectable drugs.

Overall, the invention of the hypodermic syringe illustrates a number of important principles for pragmatic creativity:
- a new combination of parts has to produce a new system effect;
- no new science is necessary for making a technology breakthrough;
- a well-defined interface between parts enables rapid innovation on both sides, e.g. the cylinder and the needle;
- the success of the invention comes from a new use, which may require a new science, e.g. liquid penicillin;
- the combination of new parts (cylinder + needle) and a new use (liquid drug) forms Dominant Design and Dominant Use patterns that remain stable for decades, if not centuries.

Thursday, September 14, 2017

Lunch Talk: Artificial Intelligence 65 years ago

Claude Shannon demonstrates an electro-mechanical mouse that navigates a labyrinth, computing and remembering the optimal path (Bell Labs, 1950s).



tags: innovation, science, technology, lunchtalk, BUS239

Friday, September 08, 2017

Stanford CSP BUS 152, Innovation Timing, Session 2 Quiz 1

Background

In a 2017 Hype Cycle-related article, Gartner, an American research and advisory firm, put Deep Learning and Machine Learning at the top of the Hype Cycle.


In a follow-up article about Artificial Intelligence, Gartner mentioned 5 AI myths, including #1 "Buy an AI to solve your problems." The article says, "Enterprises don’t need an “AI.” They need business results in which AI technologies may play a role."

An independent investigation by STAT (statnews.com) seems to confirm this conclusion. The publication examined the track record of IBM's Watson for Oncology, an AI system deployed in many hospitals around the world, and found that "the supercomputer isn’t living up to the lofty expectations IBM created for it." Furthermore,
Perhaps the most stunning overreach is in the company’s claim that Watson for Oncology, through artificial intelligence, can sift through reams of data to generate new insights and identify, as an IBM sales rep put it, “even new approaches” to cancer care. STAT found that the system doesn’t create new knowledge and is artificially intelligent only in the most rudimentary sense of the term.
IBM denies the assertions and says that the technology is on track "to offer guidance about treatment for 12 cancers that account for 80 percent of the world’s cases" by the end of the year.

Questions

1) Do you agree with Gartner's assessment that AI in general, and Deep Learning and Machine Learning in particular, are overhyped? Explain your opinion and provide supporting evidence.

2) In your opinion, which technology and business areas will benefit the most from rapid adoption of various forms of AI? Which forms of AI will play the most significant role? Explain briefly.



Thursday, September 07, 2017

Lunch Talk: Quantum Computing



A discussion of why now is the right time to be thinking about this new technology, and of some of the recent developments that are laying the groundwork for the future of this computing model.

Friday, July 21, 2017

Stanford CSP BUS 74. Session 3 Quiz 1.

Background:

MIT Technology Review lists face-detecting systems as one of the top 10 innovations for 2017.


The technology figures to take off in China first because of the country’s attitudes toward surveillance and privacy. Unlike, say, the United States, China has a large centralized database of ID card photos. During my time at Face++, I saw how local governments are using its software to identify suspected criminals in video from surveillance cameras, which are omnipresent in the country. This is especially impressive—albeit somewhat dystopian—because the footage analyzed is far from perfect, and because mug shots or other images on file may be several years old.

Facial recognition has existed for decades, but only now is it accurate enough to be used in secure financial transactions. The new versions use deep learning, an artificial-intelligence technique that is especially effective for image recognition because it makes a computer zero in on the facial features that will most reliably identify a person.
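(An aside, not part of the quoted article: a minimal sketch of the idea behind deep-learning-based face recognition. A trained network maps each face image to an embedding vector, and two images are judged to show the same person when their embeddings are close. The vectors, the threshold, and the function names below are illustrative assumptions, not any vendor's actual API.)

```python
# Minimal sketch: compare face embeddings to decide whether two images match.
# The embedding values and the threshold are made-up illustrative numbers.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def same_person(emb_a, emb_b, threshold=0.8):
    """Declare a match when the embeddings are similar enough."""
    return cosine_similarity(emb_a, emb_b) >= threshold

if __name__ == "__main__":
    enrolled = [0.12, 0.85, 0.33, 0.41]   # embedding from an ID-card photo (illustrative)
    probe    = [0.10, 0.80, 0.35, 0.45]   # embedding from a live camera frame (illustrative)
    print("match" if same_person(enrolled, probe) else "no match")
```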

Quiz:

Read the entire article and answer the following questions:
1. Does facial recognition covered in the article represent a new technology? Explain briefly.
2. Will the technology become important outside of China? Explain briefly:
2.1. If the answer is yes, what markets/applications will benefit from it?
2.2. If the answer is no, what barriers will prevent its diffusion?

Thursday, June 22, 2017

The Services Revolution: Why Social Networks Turned Into an Institution

Last month I gave a talk (pdf) on innovation timing at OpenWay Club. The presentation covered, among other topics, the unfolding technology revolution in services. The talk drew on several key sources, including the work of Oliver E. Williamson, a Nobel Prize winner in economics from UC Berkeley, Cesar Hidalgo's book "Why Information Grows," and "Scalable Innovation," the book I co-wrote with Max Shtein.

My goal was to show that new technologies have fundamentally changed the nature of services because they commoditized "specificity" and "recurrence" (see figures below). That is, in a networked digital world, knowing your customers and interacting with them on a regular basis is dramatically less expensive than in a "stand-alone, brick-and-mortar" world. To illustrate the main points, here's a screenshot of a relevant page from Hidalgo's book (with my annotations) and several slides from the talk.




(The recent purchase of Whole Foods by Amazon is another example of the shift to Groceries-As-Service model, where Amazon leverages its customer insights into recurring retail sales.)

Even more importantly, the new service models have become a major global institution because they address the fundamental issue that has plagued service businesses since ancient times. Douglass C. North (Nobel Prize in Economics, 1993) described the problem in game theory terms:
In the world of personal exchange (recurring-specific - ES), it pays for parties to an exchange to cooperate, because the parties have personal knowledge of the other players and there is the possibility for repeat dealings between the parties. But in a world of impersonal exchange, it pays for the parties to defect, ceteris paribus. With impersonal exchange, the world is one in which there is not an iterated game.... One does not know anything about the other players, and indeed there are a large number of players.
That is, in traditional transactions players on both sides have incentives to cheat because they don't know each other personally or through a personal network. Therefore, in 1999 North suggested that to make the global marketplace efficient and scalable a new model had to be invented:
...we are going to have to devise institutions de novo that attempt to confront and deal with worlds of impersonal exchange.
Remarkably, new service models, such as Airbnb, Uber, Amazon, Alibaba, Instaply and others, provide a glimpse of the institutions to come. Since the identities of sellers, buyers and recommenders are known, parties are less likely to cheat; therefore, both the number and the quality of transactions grow rapidly. Although the solution is not perfect, it is far more efficient than any attempt to introduce global regulations. It's exciting to see how social networking technologies are redefining the rules of commerce and providing a working alternative to formal law.
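North's game-theory point can be made concrete with a toy calculation. The sketch below is my own illustration, not from North or from the talk: in a one-shot, anonymous exchange defection pays, while in a repeated exchange between identified parties cooperation earns more over time. The payoff numbers are arbitrary.

```python
# Toy prisoner's-dilemma payoffs (arbitrary illustrative numbers):
# both cooperate -> 3 each; both defect -> 1 each;
# a lone defector gets 5 and the cheated party gets 0.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def total_payoff(my_move: str, rounds: int) -> int:
    """My total against a partner who cooperates until cheated once,
    then defects forever (possible only when identities are remembered)."""
    total, partner_move = 0, "C"
    for _ in range(rounds):
        mine, _theirs = PAYOFF[(my_move, partner_move)]
        total += mine
        if my_move == "D":  # the cheated partner remembers and retaliates
            partner_move = "D"
    return total

if __name__ == "__main__":
    # One-shot, anonymous exchange: defection pays (5 vs 3).
    print("1 round:   cooperate =", total_payoff("C", 1), " defect =", total_payoff("D", 1))
    # Repeated exchange between identified parties: cooperation pays (30 vs 14).
    print("10 rounds: cooperate =", total_payoff("C", 10), " defect =", total_payoff("D", 10))
```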

Sunday, January 08, 2017

The Structure of Technology Revolutions

Since last summer, I've been working on a book project tentatively (and modestly!) titled "The Structure of Technology Revolutions." The purpose of the book is to show how technology enables completely new possibilities, by breaking trade-offs that are considered unbreakable.

To demonstrate the underlying structure of the innovation process, I'm using Category Theory tools (OLOGs) originally created by D.I. Spivak from MIT.

Here's a series of draft figures with an example of how the logic of innovation worked in the technology revolution initiated by the automobile with the internal combustion engine (see below).

Note that the same logic can be applied to the modern autonomous vehicle. The technology is going to be successful because it creates incredible maneuverability at the "traffic" level of abstraction.

Now, back to the horses example:

Fig. 1 introduces the trade-off between Power and Maneuverability. An eight-horse carriage has a lot of power, but it's difficult to maneuver, and adding more horses only makes the maneuverability problem worse. On the other hand, a horse rider is highly maneuverable but lacks the carrying capacity of the carriage.


Fig. 2 introduces a logical representation of a horse carriage and maps it onto a "Conflicting Desires Diagram." That is, we show that any "designer" of a horse carriage faces a trade-off between Power and Maneuverability.


Fig. 3 drops the horse pictures and shows a logical generalization: a horse carriage is a kind of power-driven vehicle.


Fig. 4 indicates the desired situation (the green dot on the right): we want a vehicle that has the best of both worlds, highly powerful and highly maneuverable.

Fig. 5 shows that the Automobile breaks the trade-off and creates a vehicle with the potential to hit the green dot. That is, the new technology disentangles control from power: the driver no longer has to manage a team of horses to command a powerful vehicle. Thus, we achieve a state that was considered impossible before.



To model the Autonomous Vehicle technology revolution we need to abstract from "a vehicle" to "traffic" and show how the new technology breaks the traffic congestion trade-off. In general, congestion trade-offs are ubiquitous in economic systems and technology revolutions break through them quite often.

Fig. 6 is a generalized diagram of how technological innovations make the impossible possible.



tags: innovation, trade-off, logic, technology, revolution

Wednesday, January 06, 2016

3D Printing - the new Clay Age

Consider a recent MIT Technology Review article about the latest 3D printing lab experiments. What is their importance to inventors, and what can we use to predict the evolution of this technology?


When we study history, especially the history of innovation, we conventionally speak of the Stone Age, the Bronze Age, the Iron Age, and so on. At the core of each such description lies a wonder material (stone, bronze, iron, steel, silicon) that enables a huge range of applications and powers technology development for decades, or even for hundreds or thousands of years.

Paradoxically, there's no Clay Age (see fig below).

This is really unfortunate, because clay turned out to be the ultimate material: it served us humans for thousands of years and enabled us to produce an amazing range of objects and technologies, from bricks to construction and architecture, from jars to storage and shipping, from ceramics to chemistry and modern waterworks, from concrete to skyscrapers and highway transportation systems. From an inventor's perspective, clay-based technologies were the first example of what we call today additive manufacturing.

Let's go back a few thousand years and compare stone (Before) and clay (After) as manufacturing materials. If you live in a cave and use stone to make your tools, you have to chip away, blow by blow, the parts of the original piece of rock that don't fit your design.




Even if we consider "raw" rocks to be cheap and disregard the wasted material itself, our ability to shape the rock or change its internal physical structure is severely limited by what we can find in nature. By contrast, clay is extremely malleable: you can shape it, add filaments, make it hollow, make it solid, make it hard, glaze it, and much more. If you are a hunter, by combining clay and fire you can create all kinds of sharp weapons that your Stone Age competition can't even imagine. If you are a gatherer, you can create jars and jugs, using one of the cornerstone inventions of human civilization: the Potter's Wheel.


If you are a house builder, even a primitive one, you can use mud bricks and reinforce them with straw. As you master fire and masonry, you learn how to make bricks and construct buildings that last decades and centuries, instead of years. You can even print money tokens with appropriate clay technologies! Furthermore, with advanced firing techniques, you learn how to melt and shape metals and discover important alloys, such as bronze. Ultimately, you develop communities of innovation and economies of scale unheard of in the Stone Age.

Why is thinking about the Clay Age important today, when we are well beyond using mud to build cities? The main goal is to gain insight into what additive manufacturing can do for us in the years to come. Just like clay, 3D printing represents a technology approach with promising long-term potential. That is, when working with either clay or 3D printing, instead of removing and wasting excess material, we add material and shape surfaces to achieve the desired design. Luckily, for 3D printing we can leverage what we have learned from clay.

Over thousands of years, humans learned to work with clay by combining six key modifying methods:
1. Shape - change the outer geometry (e.g. a brick).
2. Thin or thicken - change the inner geometry (e.g. a thin-walled jar).
3. Fill - change the inner structure (e.g. reinforced concrete).
4. Fire - modify inner and/or outer hardness or other material properties (e.g. a hardened stove brick).
5. Slip - modify or create an outer layer with specific properties (e.g. ceramic glazes).
6. Decorate - apply paint or other exterior designs to make things aesthetically appealing.

With 3D printing we are still working on items 1 and 2, barely touching 3. Some research labs are approaching item 4 on our list - firing, or its equivalents. For example, the MIT article that I mentioned at the beginning of the post uses the ancient sequence of a clay-based technology: shape your piece from a soft material with special additives, then fire it in a kiln to achieve the desired hardness and durability. Remarkably, modern 3D printing combines an ancient material (ceramics) with modern design techniques (computer modeling and manufacturing).

In the short term, 3D printing went through a lot of hype that has fizzled somewhat by now. In the long term, the age of 3D printing, just like the Clay Age, is going to create a strong foundation for a broad range of human technologies. Basically, we are at the hunter-gatherer stage of our 3D printing evolution curve.

tags: technology, innovation, history, invention, creativity

Lunch Talk: Nanotechnology at work


A 2015 Nova documentary shows science and tech advances that power applications of nanotechnology in electronics, healthcare, optics, energy, and other fields.

tags: lunchtalk, technology, materials

Tuesday, January 05, 2016

Life Sciences vs Computer Science - a challenge for the 21st century

Investor Peter Thiel captures the core difference between bio and computer tech in his recent interview with MIT Technology Review:
This goes back to that famous Bill Gates line, where he said he liked programming computers as a kid because they always did what he told them to. They would never do anything different. A big difference between biology and software is that software does what it is told, and biology doesn’t.

One of the challenges with biotechnology generally is that biology feels too complicated and too random. It feels like there are too many things that can go wrong. You do this one little experiment and you can get a good result. But then there are five other contingencies that have to work the right way as well. I think that creates a world where the researchers, the scientists, and the entrepreneurs that start companies don’t really feel that they have agency.
Unlike computer science, biology doesn't have an equivalent of the Church-Turing thesis, which essentially guarantees the implementability of a valid algorithm. The success of Silicon Valley is built on top of this important discovery of the 20th century. That is, once a "computation" entrepreneur, whether in software or hardware, finds a way to express a useful idea algorithmically, he or she can be sure that it will work, provided that computational power, storage, and networking capacity keep growing exponentially. Most famously, Larry Ellison created his relational database business in the mid-1970s, when people did not yet understand the implications of Moore's Law.



Biology is different. Vernor Vinge, a science fiction writer, aptly calls our future successes in medicine "A Minefield Made in Heaven," because it's hard to predict the specific locations of the magical "mines" that we will discover and use to cure various diseases.

Peter Thiel uses the word "random" to describe biology, but from a practical perspective it's actually worse than that. If it were merely random, we could use known randomization techniques from computer science and make new biological discoveries almost by brute force. We can't. Therefore, I'd rather use a different term, arbitrary: there is no algorithm for generating useful arbitrariness yet, only human ingenuity.

The good news is that some fields of the life sciences are compatible with computation. We are going to make a lot of progress in areas where we can hook up analog biological experiments to exponentially growing computing platforms. Diagnostics and pattern matching for known problems seem to be the most promising areas.

tags: biology, innovation, science, technology, silicon valley

Monday, September 07, 2015

Predicting smartphone addiction in kids

A study of South Korean elementary school kids has found that stress and lack of self-control are the strongest predictors of smartphone addiction. Although the smartphone is the device that delivers the addiction, the real hooks are social networking (SNS) and entertainment services (via BBC News).


Since mobile has become the dominant platform for delivering entertainment services, within two generations we can expect television advertising money to migrate to online services. TV and the web are going to fade into oblivion, like newsprint. We can also expect that Twitter will not catch up with Facebook or other major SNSs.

Also, it appears that humanity is running a large-scale Stanford Marshmallow Experiment, dividing kids into those who can exert self-control and those who cannot.

The first follow-up study, in 1988, showed that "preschool children who delayed gratification longer in the self-imposed delay paradigm, were described more than 10 years later by their parents as adolescents who were significantly more competent."
A second follow-up study, in 1990, showed that the ability to delay gratification also correlated with higher SAT scores.



From an innovation theory perspective, the smartphone represents the Dominant Design, while online services represent the Dominant Use.

Saturday, August 08, 2015

Is Apple in long-term trouble?

A survey of software developers shows a sharp drop in the popularity of Objective-C, the main programming language for Apple's iOS.
Source: tiobe.com

Fewer developers means fewer apps for consumers and businesses. One could argue that, with hundreds of thousands of apps already available in the App Store, Apple should not worry about the trend. Furthermore, Apple's move into its own services, including media streaming, may also decrease the need for independent developers. In general, the mobile apps space has matured well beyond its heyday.

Nevertheless, it's hard to imagine a popular software development platform that is of limited interest to developers. We might be seeing the beginning of the end of Apple's rapid expansion.

tags: technology, apple, software, services, dominant design

Tuesday, June 30, 2015

Lunch Talk: Steve Fodor of Affymetrix gives a talk at Stanford eCorner

Dr. Fodor and colleagues were the first to develop and describe microarray technologies and combinatorial chemistry synthesis.

In 1993, Dr. Fodor co-founded Affymetrix, where the chip technology has been used to synthesize many varieties of high-density oligonucleotide arrays containing hundreds of thousands of DNA probes.

The Market or The Technology


The Scientist vs The Entrepreneur


Monday, June 29, 2015

Google's anti-trust problem: users

Many news agencies reported on a new study about Google search results, painting it in anti-trust tones, e.g.,
(BloombergBusiness, June 29, 2015) The new study, which was presented at the Antitrust Enforcement Symposium in Oxford, U.K., over the weekend, says the content Google displays at the top of many search results pages is inferior to material on competing websites. For this reason, the paper asserts, the practice has the effect of harming consumers.
-----------
In reality, Google's biggest anti-trust problem is its users, who believe that the Google search engine can provide them with the best results. The belief still holds true for the web, because Google has the ability to access, index, and rank web pages. As information and (more importantly!) user interactions shift toward the social world and proprietary mobile applications, Google gradually loses its ability to access the data and make the best judgments. In Scalable Innovation (Chapter 22: Google vs Facebook) we identify at least three major consequences of this shift: no full access to social feedback, e.g. "likes"; the reactive nature of web search itself; and Google's lack of access to app-specific data. As a result, people who use search to ask questions like “What’s the best pediatrician in San Francisco?” are not going to get the best answer, because Google simply doesn't have it.

On the surface, it looks as if a big monopoly is trying to hurt consumers. That's not the case. The study presented in Oxford assumes that Google is omnipotent and omnipresent. That is, the authors seem not to realize that the information world has changed and our information habits have to change accordingly. Today, consumers hurt themselves by thinking that googling will give them the right answers. Although this powerful illusion still works on the web, it begins to fall apart as we enmesh ourselves in social networks and mobile apps.

tags: innovation, search, google, facebook, science, technology, 3x3, world

Tuesday, April 21, 2015

Lunch Talk: Chemistry of Dyes

For most of human history, people couldn't afford brightly colored clothes. Moreover, certain colors were reserved for the highest authority. For example, during the time of Roman Emperor Diocletian (245–311), purple silk was to be used only at the direction of the Emperor, under penalty of death.

The chemistry revolution of the 19th century changed all of that. Back then, synthetic dyes were the equivalent of silicon-based electronics in the second half of the 20th century and mobile apps of the early 21st century. If you wanted to do a technology startup, you would think "chemistry."



tags: invention, innovation, startup, science, technology, lunchtalk

Sunday, October 12, 2014

Invention of the Day: the Integrated Circuit

During Nobel Prize week it's only appropriate to remember Nobel-worthy inventions that changed the world. Today it's almost impossible to imagine our lives without some use of the Integrated Circuit (IC), because the technology has become a fundamental building block in modern computing, communications, data storage, power supplies, sensors, and many other applications (see the Wiki article linked above).

In 2000, Jack S. Kilby received 1/2 of the Nobel Prize in Physics for his 1958 invention of the IC. Here's a picture and diagram of his original invention:


Reading Kilby's Nobel Prize lecture, several passages struck me as remarkable because they show how difficult it is for contemporaries to recognize a great innovation:


Note that one of the core objections was that the new system didn't use the best individual elements (resistors and transistors). Similarly, many years later, in the early 1990s, people doubted that video would ever be streamed over the Web, because the Internet Protocol was poorly suited for synchronous data transmission. As innovators, we often have to remind people that the system is greater than a simple sum of its parts.

Kilby's speech also gives us a new perspective on Moore's Law. Here's what Kilby says:


Just one year later, Gordon Moore published his now famous article in which he formulated his "law," stating that the number of elements on an integrated circuit would double roughly every year (a rate he later revised to about every two years):

The chart from that article promised exponential performance riches to the "few adventurous companies" that were willing to bet on the new technology. Perhaps it is not a coincidence that tiny, unknown Silicon Valley startups, rather than large established companies, took full advantage of the opportunity and eventually created a new reality that we all live in today.
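As a back-of-the-envelope illustration of what that promise amounts to, here is my own sketch of the compounding; the starting component count and doubling period are illustrative assumptions, not figures from Moore's article.

```python
# Back-of-the-envelope sketch of the compounding behind Moore's Law.
# The starting count and doubling period are illustrative assumptions.

def projected_components(start: int, years: float, doubling_period: float = 1.0) -> float:
    """Project the component count after `years` of steady doubling."""
    return start * 2 ** (years / doubling_period)

if __name__ == "__main__":
    start = 64  # hypothetical components per chip at year zero
    for years in (1, 5, 10):
        count = projected_components(start, years)
        print(f"after {years:2d} year(s): ~{count:,.0f} components")
    # Doubling every year turns 64 components into ~65,000 in a decade,
    # roughly a 1000x improvement for whoever bet on the trend early.
```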

tags: invention, innovation, technology, market, 10X, cinderella

Wednesday, August 20, 2014

Facebook's market power

The Facebook patent I briefly discussed yesterday points to a business and technology revolution similar to the one that made Chicago a major commercial center in the United States in the 19th century. Back then, the proliferation of railroads helped move grain and cattle from small, scattered farms to large grain elevators and slaughterhouses. As a result, Chicago merchants benefited enormously from the new economies of scale. Similarly, Facebook enjoys enormous economies of scale by aggregating and processing huge amounts of scattered user preference data.


Furthermore, Chicago merchants developed a new standardization system that
...partitioned a natural material - a steer or a bushel of wheat - into a multitude of standardized commodities, each with a different price, each with a different market (Nature's Metropolis: Chicago and the Great West, by William Cronon).
The new partitioning system allowed the merchants to sell their commodities to those consumers who were interested in a particular grain variety or beef cut and willing to pay the right price for the right commodity.

Similarly, Facebook has the ability to partition its users' social graphs (and even individual users like you and me) into a multitude of parts that can be sold to advertisers and content providers for the right price, at the right time, and in the right place. The only difference is that instead of the Beef Chart of the 19th century, they have the User Interest Chart of the 21st century.

tags: innovation, technology, control, packaged payload, distribution, scale, facebook, social, advertisement

Tuesday, February 04, 2014

The Web is Dead - mobile edition.

The hyperlink (or URL) is one of the greatest inventions of the web era. It is excellent for linking pages, navigating between sites, downloading content, etc. Unfortunately, the URL is largely useless on mobile devices because it doesn't work outside the browser. To solve the problem, mobile technology companies are developing alternatives that allow launching one app from another. MIT Tech Review reports that,

Today mobile apps increasingly rule our free time and require us to dive into separate, walled-off digital containers that don’t link up.

The new kind of hyperlink could make apps seem less walled off from one another. Deep linking, as the technology is called, is also seen as a way to open up new forms of advertising that will provide revenue to make mobile advertising more closely match its online counterpart (see “Why No One Likes Mobile Ads”).
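To make the idea concrete, here is a minimal sketch, my own illustration rather than anything from the article: an app-specific URI is routed to a screen inside the app, while an ordinary web URL is handed back to the browser. The "myapp" scheme, the route names, and the handlers are all hypothetical.

```python
# Illustrative sketch of deep-link routing inside a mobile app.
# The "myapp" scheme, screen names, and route table are hypothetical
# examples, not any real platform's API.
from urllib.parse import urlparse

ROUTES = {
    "listing": lambda item_id: f"open ListingScreen for item {item_id}",
    "profile": lambda user_id: f"open ProfileScreen for user {user_id}",
}

def handle_link(uri: str) -> str:
    """Send an app-specific URI to an in-app screen; pass anything else to the browser."""
    parsed = urlparse(uri)
    if parsed.scheme != "myapp":
        return f"not a deep link: open {uri} in the browser"
    screen = parsed.netloc            # e.g. "listing"
    ident = parsed.path.lstrip("/")   # e.g. "123"
    handler = ROUTES.get(screen)
    return handler(ident) if handler else "unknown screen: open the app's home view"

if __name__ == "__main__":
    print(handle_link("myapp://listing/123"))     # routed inside the app
    print(handle_link("https://example.com/x"))   # falls back to the browser
```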

Once a suitable replacement for the URL is found, the decline of the web will become inevitable. Here's a cheesy Facebook video that explains the concept:




tags: web, technology, evolution, aboutness, mobile, application

Wednesday, January 15, 2014

Lab Notebook: A revolution in Human Resources.

Since the early days of Silicon Valley, venture capitalists (VCs) have considered team quality to be a major factor in predicting the success of a startup. In the US, successful entrepreneurial teams typically emerge from universities and high-tech companies. In Israel, startup team formation often happens in the army, including its high-tech units. Innovative, risk-tolerant, hard-working, highly skilled people who can work together effectively are usually called "the A team." Startups succeed when the team discovers and takes advantage of a new, profitable business model, e.g. one based on the latest and greatest technology. (Examples: Fairchild Semiconductor, Atari, Apple, Sun Microsystems, Netscape, PayPal, Yahoo, Google, Netflix, Facebook, Twitter, Waze, etc.)

Now, mature companies are trying to use the startup model to get employees to work as teams on specific projects, rather than as functional departments in charge of internal processes. Netflix pioneered this approach, and its former head of talent, Patty McCord, actively promotes it today (see the HBR article for more detail). LinkedIn and Facebook use internal hackathons to identify potential new products and the teams that can deliver them.

Another model is acqui-hiring, when an established company acquires a successful startup so that its team can work as an internal entrepreneurial unit. Google and Facebook practice this model extensively. For example, many consider the recent acquisition of Nest to be an acqui-hire play by Google to bring on board a team that can create a successful consumer device.

These approaches take into account that the startup-based innovation model created in Silicon Valley differs dramatically from the successful industrial innovation model pioneered by Henry Ford. He not only invented the mass-production logistics and manufacturing system, but also revolutionized labor hiring. Instead of an ethnically based team of low-skilled workers hired to build a railroad or unload a ship, Ford wanted to see an English-speaking specialist who could fit into his production process. Similar to the quality control process in parts manufacturing, Ford tasked his newly invented human resources (HR) department with selecting people who fit specific job criteria: work skills, education level, family history, ethnicity, etc. Elements of his approach were based on the then-popular theory of eugenics, which advocated selecting people based on certain inherent traits. (Today, eugenics is broadly associated with racism and bigotry.) Ford's HR system proved to be highly successful for American industry because it allowed companies to plug individuals into well-defined work and social roles. (For example, when hiring engineers, GE routinely interviewed candidates' wives to make sure they possessed proper moral values.)

After World War II, when educational and social roles started to shift, the new idea of human capital eventually took hold. Large companies started treating their employees (often with an implied lifetime job guarantee, e.g. a trade-union contract) as capital. As workplace requirements changed, employees were provided with on-the-job education and training opportunities to maintain or improve their skills. To manage the workforce and comply with labor laws, HR departments set up performance evaluation processes, with regular manager feedback, promotions, and incentive structures. Nevertheless, at its core was Henry Ford's old idea that an individual had to fit into a pre-defined corporate production process.

Of course, this model doesn't work in an environment where breakthrough innovations are required for company growth. First, startups simply can't afford the overhead. Second, large corporations, despite their incredible R&D capabilities, have proved unable to accommodate innovations from within. Clayton Christensen described this problem in his seminal book "The Innovator's Dilemma." Unfortunately, the solutions he offered don't seem to work, despite people trying hard to implement them. Research In Motion, the maker of the BlackBerry phone, is a good example of such a failure (see the FT article). The company did everything by the book, but eventually could not compete with Apple's iPhone.

The current revolution in HR promises to solve the Innovator's Dilemma by giving internal entrepreneurs enough freedom inside the company to create innovations. It also lets companies get rid of employees who don't fit into their culture or whose skills are outdated. As a result, they get a mobile, active, motivated workforce that can move quickly and carry innovation risks and rewards itself, rather than offloading them to the parent company. Of course, traditional companies and their HR departments will lag behind in implementing the new model. Most likely, it will continue to spread through new successful startups capable of scaling their initial business into new technology markets.

tags: invention, innovation, control