No one’s immune from the effects of deep practice

I love this video. Thanks to John Burk for sending it my way. In this tutorial, the programming mavens at Google introduce us to a new feature that will boost their cred as the world’s “data brokers.” This will increase revenue by bringing in more advertising business, which will further fund Eric Schmidt’s mission of “collecting all the world’s data and making it accessible to everyone.” Watch.


Job well done, Google. But in this video you’ve done much more than tout your own products: you’ve made the notion of deep practice universal.

In the video the narrator explains that, contrary to common suspicion, Google does not have a workforce of elves translating our documents from a cubicle-lined warehouse somewhere in Mountain View, CA.

Instead, Google uses a process called “statistical machine translation,” which he describes as a process in which machines “generate translations based on large amounts of text.”

Rather than learning a language by applying vocabulary to a set of rules, Google’s computers analyze millions upon millions of documents that have already been translated, looking for “statistically significant patterns.” Once the system detects a pattern, it attempts to apply that pattern in future translations. When a user rejects a translation, the computer learns that the pattern is not as consistent as it once thought. It makes adjustments, searches for “sub-patterns” (my word), and continues its analysis.

Over and over again, Google’s computers are testing patterns, detecting new patterns, and finding new documents to analyze, further increasing the possible combinations of patterns. As the narrator says, “when you have billions of documents, you have billions of patterns.”
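To make the loop concrete, here is a toy sketch of the cycle the narrator describes, not Google’s actual system and not real statistical machine translation: a hypothetical `ToyPatternTranslator` scores each candidate translation by how often it appears in already-translated text, and a user’s rejection lowers a pattern’s score so that a rival pattern can win next time. All names and numbers here are illustrative assumptions.

```python
# Toy illustration of pattern-scoring with user feedback (NOT Google's
# actual algorithm): frequency in a translated corpus builds a pattern's
# score; a rejected translation weakens it, letting sub-patterns emerge.
from collections import defaultdict

class ToyPatternTranslator:
    def __init__(self):
        # source phrase -> candidate translation -> score
        self.scores = defaultdict(lambda: defaultdict(float))

    def learn(self, source_phrase, target_phrase):
        # Each aligned example in the corpus strengthens a pattern.
        self.scores[source_phrase][target_phrase] += 1.0

    def translate(self, source_phrase):
        # Pick the statistically strongest pattern seen so far.
        candidates = self.scores[source_phrase]
        if not candidates:
            return source_phrase  # no pattern yet: pass text through
        return max(candidates, key=candidates.get)

    def reject(self, source_phrase, bad_translation):
        # User feedback: this pattern is less reliable than it seemed.
        self.scores[source_phrase][bad_translation] *= 0.5

t = ToyPatternTranslator()
for _ in range(3):
    t.learn("banco", "bank")   # seen often, e.g. in financial documents
t.learn("banco", "bench")      # seen once, e.g. in a parks brochure

print(t.translate("banco"))    # "bank" dominates on raw frequency
t.reject("banco", "bank")      # a user corrects it in a park context
t.reject("banco", "bank")      # ...and again
print(t.translate("banco"))    # after adjustment, "bench" now wins
```

The point of the sketch is the feedback step: the system doesn’t discard the rejected pattern, it just trusts it less, which is exactly the “makes adjustments, seeks sub-patterns” behavior described above.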

Question: Are Google’s computers engaging in “deep practice”?