
Wednesday, May 28, 2014

Scalability Problem of the Day: Neural Networks

Deep Learning is the next frontier in computer science. After some initial breakthroughs, scientists and engineers are running into a major scalability problem: increasing the number of neurons doesn't necessarily improve a neural network's performance.

(MTR, 5/21/14) We found that if you put a lot of GPUs [specialized graphics processors] together we could make a much bigger neural network—10 billion nodes, with 16 machines instead of 1,000.

We used that same benchmark [images from YouTube videos] that the Google team did. But even though we could train a much larger neural net, we didn’t necessarily get a better cat detector. Right now we can run neural networks that are larger than we know what to do with.

This is a typical situation in Silicon Valley (as I described in an earlier post). We are at a point where Machine 1 (exponential growth in computing power) is ahead of Machine 2 (applications). Most likely, the next S-curve, i.e. a new growth cycle, will begin within the next 5-7 years.

tags: problem, scalability, constraint, machine1, machine2, silicon valley

Wednesday, January 22, 2014

BitCoin vs PayPal: a 100X+ difference

Both BitCoin and PayPal allow parties who don't trust each other's identity to exchange money electronically. The big difference is transaction costs. Because PayPal uses the existing system for electronic payments, it extracts high fees from the seller; the smaller the transaction, the larger the percentage of the fee. In short, PayPal doesn't scale down.
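To see why a percentage-plus-fixed fee doesn't scale down, here is a minimal sketch. The 2.9% + $0.30 schedule and the `effective_fee_pct` helper are illustrative assumptions, not PayPal's actual published rates for any particular date:

```python
# Effective fee percentage for a PayPal-style schedule: a flat
# percentage plus a fixed per-transaction charge. The numbers
# (2.9% + $0.30) are an illustrative assumption.
def effective_fee_pct(amount, rate=0.029, fixed=0.30):
    fee = amount * rate + fixed
    return 100 * fee / amount

for amount in [100.00, 10.00, 1.00, 0.10]:
    print(f"${amount:>6.2f} -> {effective_fee_pct(amount):.1f}% fee")
```

The fixed charge dominates as the transaction shrinks: a $100 payment loses about 3.2% to fees, a $1 payment loses about 33%, and a 10-cent payment loses more than its entire value.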


In contrast, BitCoin transactions are almost free. Moreover, their costs scale down as computing power (thanks to Moore's Law) becomes cheaper over time. The adoption of BitCoin, or any similar monetary transaction system, should stimulate the development of businesses that involve high volumes of small payments. If posting on Twitter is free, BitCoin transactions should be nearly free as well.

Companies that will make Bitcoin payments reliable and secure are going to reap huge benefits in the mobile and financial markets.

tags: money, deontic, payload, control, machine1, machine2, finance, commerce, 10x, innovation

Monday, January 20, 2014

The PC: How Steve Jobs created a huge market for Bill Gates' software.

If you believe Wikipedia, the story of the PC begins in the 1950s with IBM 610, a Personal Automatic Computer (PAC).

Rule #1: Don't believe Wikipedia when it comes to understanding innovation. Wiki editors know how to compile a myriad of data points, and I love them for that, but they don't understand that for technology innovation, scale is all that matters. The IBM 610 was the sound of one hand clapping in the world of computer projects. Very few people heard about it, and even fewer people, if any, used it for practical purposes.

The first real Personal Computer was the Apple II, conceived by Steve Jobs and Steve Wozniak in 1977. Steve Wozniak put together the computing hardware, but Steve Jobs turned the kit into a real innovation when he thought of incorporating the power supply and keyboard into one box. With the addition of the floppy drive, the box turned into a breakthrough innovation.

Source: Triumph of the Nerds

Rule #2: Don't believe people when they apply a modern technology term like "Personal Computer" retroactively. The Personal Computer became a major innovation when it took the shape of a consumer box that you use for running shrink-wrap software. Computing machines before the Apple II were either hardware kits for hobbyists or closed microprocessor boxes with proprietary software for industrial use.

Bill Gates and Paul Allen anticipated the era of the Personal Computer, but originally they thought about it as a machine for hobbyists. In 1976, Bill Gates even wrote a letter to hobbyists who, in Gates' words, were stealing his work:

[Bill Gates, "An Open Letter to Hobbyists," 1976]


Before Apple II, the majority of users were hardware tinkerers. After Apple II, the majority of users were consumers who used the PC to run apps. The hobbyists became software developers interested in selling their apps to consumers, rather than stealing software from Bill Gates and Paul Allen. A new large market for software was born.

When IBM entered the PC market in 1981, it contracted Microsoft to write an operating system for its computer. The rest is history. Just as young Bill Gates had anticipated, the world of personal computing needed a lot of software. Steve Jobs of Apple Computer happened to create that world, helping materialize Gates' business vision.

Why do we care today about the old PC? Because the same innovation pattern keeps repeating in Silicon Valley. Apple ][ eliminated the requirement for software enthusiasts to know a lot about hardware. Instead of tinkering with hardware components, they could now concentrate on writing cool apps to benefit consumers, while hardware engineers focused on driving the performance of the box.

Similarly, in the 1980s Sun Microsystems enabled developers to write ubiquitous UNIX software that powered the Internet server revolution of the 1990s and eventually migrated to the Linux platform.

Most recently, in the 2000s Google created the MapReduce technology, which turned large clusters of failure-prone commodity servers into a reliable distributed platform for developers of networked data services. Another example would be the Android OS.

As we explain in Scalable Innovation, Chapter 4 (System Interfaces: How the Elements Work Together), the emergence of new interfaces between system elements decouples innovation cycles and leads to rapid growth. To innovate effectively, we should recognize the "PC moments" in major technology developments.

tags: invention, innovation, scale, machine1, machine2, microsoft, tgisv

Saturday, January 11, 2014

Lab Notebook: How to explain Silicon Valley to a Martian.

Explaining Silicon Valley to humans is quite satisfying but can be very hard. John Kelley and I spend an entire academic quarter on this task, and we manage to cover only the basics. Human students differ significantly in their cognitive, motivational, and emotional backgrounds. Some of them are locals who know a lot about hi-tech businesses; some are newcomers who want to better understand the place they happen to be visiting. Some are interested in the personal stories of inventors and entrepreneurs; some are eager to learn the core principles behind SV-style innovations. The list of student interests and motivations could go on and on, but inevitably, our explanations of how Silicon Valley works are subjective.

On the other hand, to a Martian, all human individuals, in how they live and do business, look pretty much the same. To explain Silicon Valley to a Martian, we would have to rely on objective truths, not our human emotions and motivations. Therefore, universal mathematical principles would be a logical place to start.

In the 1930s, a human mathematician, Alan Turing, came up with an abstract model of an automatic machine that can execute computational instructions. According to the model, the abstract machine is fed an infinite tape with symbols on it. The machine can move the tape and manipulate the symbols: read, write, or erase them. The machine's state and its next manipulations depend on its earlier interactions with the symbols. A Universal Turing Machine can simulate any Turing Machine on any symbolic input.
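The tape-and-rules model above can be sketched in a few lines of Python. The rule table, the `run` helper, and the bit-flipping machine below are illustrative assumptions, not Turing's original notation:

```python
# A minimal Turing machine: the transition table maps
# (state, symbol) -> (symbol to write, head move, next state).
def run(tape, rules, state="start", blank="_", steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write                 # write (or erase) the symbol
        head += {"R": 1, "L": -1}[move]    # move the tape head
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Rules for a bit-flipper: scan right, flipping 0 <-> 1, halt on a blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run("1011", flip))  # -> 0100
```

The point of the model is that this tiny interpreter, given a large enough rule table, can carry out any computation a modern computer can.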

Silicon Valley is a place on Earth where people know how to make physical machines that can perform the operations of a Universal Turing Machine. More importantly, they know how to make such machines run exponentially better over time. That is, humans in Silicon Valley can double the computational speed of the machines every 1.5 Earth years. We humans call this technology pattern "Moore's Law." It has held true for the last 50 years, making our computations roughly ten billion times faster in the process.
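A quick sanity check of that arithmetic: 50 years at one doubling every 1.5 years works out to about 33 doublings, i.e. roughly a ten-billion-fold speedup.

```python
# Doublings implied by Moore's-Law-style growth: one doubling
# every 1.5 years, sustained over 50 years.
years, doubling_period = 50, 1.5
doublings = years / doubling_period   # ~33.3 doublings
speedup = 2 ** doublings              # ~1e10, i.e. ~ten billion times
print(f"{doublings:.1f} doublings -> ~{speedup:.2e}x speedup")
```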

In Silicon Valley, humans also learned how to grow data storage capacity even faster than computation. We humans call this "Kryder's Law." To top it off, in Silicon Valley humans learned how to connect all kinds of exponentially better-performing computational and memory devices with exponentially faster networks. We humans sometimes call this pattern "Nielsen's Law." The interaction of these three laws made Silicon Valley a computational technology powerhouse. To simplify our conversation, I will call this amazing, exponentially growing technological capacity "Silicon Valley Machine 1" or "Machine 1."

Examples of Machine 1: the integrated circuit, the magnetic hard drive, Ethernet, the Internet, Google Spanner, etc.


Note that Machine 1 by itself is meaningless to humans. Although it can shuffle symbols back and forth exponentially faster, somebody needs to figure out how to make this symbol shuffling useful. Fortunately for humans in Silicon Valley, mathematicians back in the 1930s formulated another principle, the Church-Turing thesis. Based on the thesis, we can explain to an ordinary Martian that any computable logic, no matter how long or complicated, can be executed by a Universal Turing Machine, implemented in hardware, software, or both. Therefore, if somebody (let's call him Steve Jobs) comes up with an idea for a logically consistent, useful computing device or application, it is guaranteed to be realizable. Guess what, we say to a curious Martian: smart humans in Silicon Valley figured out a way to regularly come up with useful applications. To simplify our conversation, I will call this ingenious creative capability "Silicon Valley Machine 2" or "Machine 2."

Examples of Machine 2: computer gaming, the spreadsheet, digital movies, the world-wide web, the iPhone, Google Search, social networking, etc.

Over the last 50 years, the folks in Silicon Valley learned how to hook up the ingenious Machine 2 to the exponentially growing Machine 1. As a result, products, services, and experiences produced in Silicon Valley are becoming, year after year, exponentially faster and more useful to other humans. This is it. The success of Silicon Valley is based on fundamental mathematical truths, as well as on our ability to make abstract concepts objectively real and subjectively useful to other humans.

tags: innovation, book, machine1, machine2, silicon valley

Saturday, December 21, 2013

Lab Notebook: Silicon Valley and the Law of Diminishing Marginal Utility

Silicon Valley seems to defy one of the fundamental laws of economics: the Law of Diminishing Marginal Utility. The law, according to an Investopedia article (I still need to find a textbook reference), says:

A law of economics stating that as a person increases consumption of a product - while keeping consumption of other products constant - there is a decline in the marginal utility that person derives from consuming each additional unit of that product.


They illustrate the law with a short video about a hungry girl with a pizza. The first slice of pizza tastes better than anything before.


The second slice of pizza still tastes good, but by the fifth piece, the girl is already full and she hates the fact that she has to eat the entire thing.
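The pizza illustration can be put into numbers. The utility values below are made up purely for illustration; the point is only the shape of the curve, with each successive slice adding less than the one before, and eventually subtracting:

```python
# Illustrative (made-up) utility of each successive pizza slice,
# showing marginal utility declining and eventually going negative.
marginal_utility = [10, 7, 4, 1, -2]   # slice 1 .. slice 5

total = 0
for slice_no, mu in enumerate(marginal_utility, start=1):
    total += mu
    print(f"slice {slice_no}: marginal={mu:+d}, total={total}")
```

By slice five the marginal utility is negative: eating it makes the girl worse off even though total utility was rising at the start.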


Let's try to apply this law to a major Silicon Valley innovation: web-based email. We skip the Hotmail vs Rocketmail controversy and go directly to the Yahoo vs Google email rivalry. In the early 2000s, Yahoo Mail was the undisputed leader in this web-services domain. The company gave users 10 or 20 MB of storage for free and charged for extra. Then came Google, offering at least 1 GB of free storage with its brand-new Gmail. According to the law of diminishing marginal utility, this move would be "unlawful." That is, why offer people a lot more if we know from economics that usefulness declines with the quantity of goods consumed?

Paradoxically, the more storage Gmail offered, the more useful its service proved to be to users, myself included. Millions flocked to Gmail, abandoning their Yahoo mailboxes.

In general, the exponential growth in processing power, storage, and bandwidth (what I call "Machine 1") that Silicon Valley companies have been driving for the last 50 years makes users happier, defying the economics-textbook wisdom taken at face value.

tags: machine1, economics, 10X, innovation, book, google, yahoo

Thursday, December 19, 2013

Lab Notebook: an illustration to Machines 1 & 2 concepts

Two Silicon Valley companies merge, citing the need to build computing capabilities (Machine 1 is a bottleneck for Machine 2):

Rosati [the CEO of the merged company] added, “If we are to become the workplace for the world, we need to make massive investments in technology, massive investments in data science, massive investments in predictable business outcomes. Fundamentally, we have a shot at building a business on the scale of Amazon or LinkedIn or iTunes.” (ref: Gannes, Liz. oDesk and Elance Merge to Create One Big Freelancer Company. All Things D, December 18, 2013.)

I can use this example when explaining Machines 1 & 2 in the Greatest Innovations of Silicon Valley course/book.

tags: tgisv, book, example, machine1, machine2