You may have noticed a lot of sharing of OpenAI’s Generative Pre-trained Transformer 3 (GPT-3) and ChatGPT screenshots. The most popular tend to be funny. I asked ChatGPT to describe, in the style Stephen King would write, a scene of somebody struggling to decide which brand of toilet paper to buy. The result is fun. Here’s one about writing a script in Seinfeld style: In the sense that you can recognize the blend of styles. Elements of relevance and divergence are provided by the prompts. The evaluation, on the part of this author, is entirely subjective. And in some ways, carbon dated. Years from now, the bubble sort will be around. Probably. The knowledge of what a typical[…]

There are at least two systems of achieving productivity growth: path dependence and disruption. What if there is a third way? This post unpacks that paragraph and explores ways through. It will start by explaining lock-in and path dependence. We’ll cover the application of narrow machine intelligence in a very narrow industry. It will end with a small scenario and a few what-ifs. Lock In Consider banner advertising. This is a relatively old industry. Its roots predate the Internet by at least a couple hundred years. It may have started thousands of years ago. It starts out with a person with a problem. They need to get the word out about their product or service. Reframed, they need to[…]

Geoffrey Hinton, the father of deep learning, said a few things at the ReWork Deep Learning Summit in Toronto last week. Hinton often looks to biology as a source of inspiration. I’ll share and expand on them in this post. Hinton started off with an analogy. A caterpillar is really a leaf-eating machine. It’s optimized to eat leaves. Then it turns itself into goo and becomes something else, a butterfly, to serve a different purpose. Similarly, the planet has minerals. Humans build an infrastructure to transform earth into paydirt. And then a different set of chemical reactions is applied to paydirt to yield gold, which has some purpose. This is much the same way that training data is converted into a set[…]

This post describes a fast-follow startup and the implications for how that startup learns. Define Startup A startup is a market hypothesis looking for validation. It’s an organization in search of a business. If they’ve accepted funding, then it’s a group of people looking for a liquidity event. Define Follow Follow means imitation. It means that an entrepreneur, or a herd of entrepreneurs, has been observed pursuing a particular product-solution-market fit, or a hypothesis, and some founder wants to join the herd. Define Fast Fast means that the organization is imitating fast enough to nip at the heels of the lead innovator. It is imitating fast enough to be in contention to overtake the leader, or close enough to experience a[…]

It was a treat to see these three – Yoshua Bengio, Yann LeCun, and Geoffrey Hinton – for an afternoon. Easily the best three consecutive hours I’ve ever seen at a conference. They remarked that Canada continues to invest in primary research. And this is a strength. Much of the exploratory work these three executed in the ’80s, ’90s, and noughties was foundational to the industrial applications that came after. Much of reinforcement and deep learning has moved on into industrial application. For the three grandfathers of deep learning, all of these algorithms and methods have moved into the realm of solved problems. For those of us in industry, there remains a lot of work to realize the benefits of deep learning.[…]

Hinton is quoted as saying, with respect to back propagation, “I don’t think it’s how the brain works”. You can read the full article here. Back Propagation To oversimplify, in Back Propagation each neuron’s influence is adjusted based on how well it contributes to a prediction. Accurate predictions are rewarded with more influence. Bad predictions are punished with less. This is how the machine learns. And there’s a lot of optimism about Back Propagation. It’s really useful and it generates fairly predictable machines. As data scientists, we like this. And as data scientists, we should also like what Hinton is hinting at. Kuhn It’s much more likely than not that we’re approaching a local maximum on this thread of research. I’m[…]
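To make that oversimplification concrete, here is a minimal sketch of Back Propagation in Python: a tiny network learns XOR, and each weight is adjusted in proportion to its share of the prediction error. The task, network size, and learning rate are illustrative assumptions, not anything from Hinton’s talk or the article.

```python
# A minimal, illustrative sketch of back propagation (numpy only).
# The toy task (XOR), network shape, and learning rate are assumptions.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(scale=1.0, size=(4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: make a prediction.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: the prediction error flows back, and each weight
    # receives a gradient proportional to its share of the blame.
    dp = (p - y) * p * (1 - p)
    dW2, db2 = h.T @ dp, dp.sum(axis=0, keepdims=True)
    dh = (dp @ W2.T) * h * (1 - h)
    dW1, db1 = X.T @ dh, dh.sum(axis=0, keepdims=True)

    # Weights that helped are left mostly alone; weights that hurt the
    # prediction are corrected. This is how the machine learns.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(p, 2))  # should approach [[0], [1], [1], [0]]
```

Scaled up, that same error-flows-backward loop is what today’s deep networks run; Hinton’s point is that the brain probably isn’t running it.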

The other day I likened the process for taking apart a Job To Be Done to taking apart a lobster. There’s a very effective way to decompose any problem with enough energy. And then I watched The Founder on Netflix and admired the McDonald brothers using a classic technique in management science to refine a system on a tennis court. And I loved it. They really refined hamburger and French fry delivery. And then this morning I read that Andrew Ng is working on a new Coursera course for AI. And I’m thankful for his initiative and optimism. Out of those three threads comes this one post. The Assembly Line The assembly line was an American invention for Americans. It could[…]

Earlier in the month, I dined under the space shuttle Endeavour with some of the best minds in marketing science. One mind remarked: “That’s why I bring a glossary with me. Oh, you want to do supervised learning? Oh, you mean regression? Oh, okay, now we can talk… We’ve been talking to managers about these methods for decades, but it’s just suddenly sexy because it’s all machine learning and deep learning and reinforcement learning.” A lot of the math that underlies much of machine intelligence and artificial intelligence is indeed remarketed marketing science. And, hipsterism aside, the annoyance is understandable. Marketing science started out as a bit of a revolt against the Mad Men. Some of the early stories feature post-war[…]
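The glossary point lands because, for a linear model and a continuous target, “supervised learning” and ordinary least squares regression are the same estimator. Here is a small sketch with made-up, marketing-flavoured data (channel spend predicting conversions); the variable names and numbers are purely illustrative.

```python
# Illustration: supervised learning with a linear model is OLS regression.
# The synthetic data below is an assumption for the example only.
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))                  # e.g. spend on three channels
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.3 + rng.normal(scale=0.1, size=n)   # e.g. conversions

# Textbook OLS via the least-squares solver; scikit-learn's
# LinearRegression().fit(X, y) would recover the same coefficients.
Xd = np.column_stack([np.ones(n), X])        # add an intercept column
beta_ols = np.linalg.lstsq(Xd, y, rcond=None)[0]
print(np.round(beta_ols, 2))                 # ~[0.3, 2.0, -1.0, 0.5]
```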

You’re going to hear a lot more about Artificial Intelligence (AI) more generally, and Machine Intelligence more specifically. Valuation is the core causal factor. Here’s why: We’ve gotten pretty good at training a machine on niche problems. They can be trained to the point of replacing a median-skilled, low-motivation human in many industries. Sometimes they can make predictions that agree with a human’s judgement 85 to 90% of the time, and sometimes it’s the human causing the bulk of the error when they disagree with the machine. We’re confident that we can train a machine to learn a very specific domain. And these days we’re in the midst of that great automation revolution. Most of the organizations that build those machines can[…]
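That 85 to 90% figure is usually plain percent agreement between the machine’s labels and a human reviewer’s labels on the same items. A minimal sketch, with invented labels:

```python
# A sketch of measuring human-machine agreement; the labels are made up.
import numpy as np

human   = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
machine = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])

agreement = (human == machine).mean()
print(f"agreement: {agreement:.0%}")   # 80% for this toy sample
```

Auditing the disagreements item by item is what reveals whether the machine or the human is contributing the bulk of the error.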