The DeepMind Strategy
How AI is revolutionizing business models
-
Overview
AI is introducing radical innovation even in the way we think about business, and the aim of this section is to categorize the different AI companies and business models.
The AI sector can be seen as very similar, in terms of business models, to the biopharma industry: expensive and long R&D; long investment cycles; low-probability, enormous returns; and a concentration of funding toward specific phases of development. There are, however, two differences between the two fields: the experimentation phase, which is much faster and less painful for AI, and the absence of a patenting period, which forces AI companies to evolve continuously and to use alternative revenue models (e.g., the freemium model).
-
The DeepMind Strategy and the Open Source Model
If we look from the incumbents' side, we might notice two different nuances in the evolution of their business models. First, the growth model is changing: instead of competing with emerging startups, the biggest incumbents are pursuing an aggressive acquisition strategy.
I named this new expansion strategy the "DeepMind strategy" because it has become extremely common since Google's acquisition of DeepMind.
The companies are purchased when they are still early stage, in their first 1–3 years of life, when the focus is more on people and pure technological advancement than on revenues (AI is the only sector in which the pure value of the team exceeds that of the business). They maintain elements of their original brand and retain the entire existing team ("acqui-hire"). The companies maintain full independence, both physically (often they keep their original headquarters) and operationally. This independence is so extensive that it allows them to pursue acquisition strategies in turn (DeepMind bought Dark Blue Labs and Vision Factory in 2014). The parent company uses the subsidiary's services and integrates, rather than replaces, the existing business (e.g., Google Brain and DeepMind).
It seems, then, that the acquisition costs are much lower than the opportunity cost of leaving so many brains on the market, and it works better to (over)pay for a company today than to be cut out a few years later. In this sense, these acquisitions are pure real-option tools: they represent possible future revenues and possible future underlying layers on top of which incumbents might end up building.
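To make the real-option framing concrete, here is a minimal one-period sketch of how an acqui-hire could be valued as an option on future technology. The function and every figure in it are hypothetical illustrations, not data from any actual deal:

```python
def real_option_value(up_value, down_value, cost_today, p_up):
    """Toy one-period real-option view of an acqui-hire.

    up_value:   payoff if the acquired team's technology becomes a core layer
    down_value: payoff if it does not (residual talent/product value)
    cost_today: acquisition price paid now
    p_up:       subjective probability of the upside scenario
    """
    expected_payoff = p_up * up_value + (1 - p_up) * down_value
    return expected_payoff - cost_today

# Hypothetical numbers (in $M): a 500 acquisition with a 20% chance of a
# 5,000 strategic payoff and an 800 downside.
value = real_option_value(up_value=5_000, down_value=800, cost_today=500, p_up=0.2)
print(value)  # 1140.0 under these invented assumptions
```

Even with a small probability of the upside scenario, the expected value can dwarf the acquisition price, which is why (over)paying today can be rational.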
The second nuance to point out is the emergence of the open source model in the AI sector, which is quite difficult to reconcile with the traditional SaaS model. Many cutting-edge technologies and algorithms are indeed provided for free and can be easily downloaded. So why are incumbents paying huge sums, and why are startups working so hard, only to give everything away for free?
Well, there are a series of considerations to be made here. First, AI companies and departments are driven by scientists and academics, and their mindset encourages sharing and publicly presenting their findings. Second, open sourcing raises the bar of the current state of the art for potential competitors in the field: if what can be built with TensorFlow is public knowledge, another company that wants to take on Google has to publicly prove it can provide at least what TensorFlow allows. It also fosters use cases that were not envisioned at all by the providing company, and it establishes those tools as the underlying technology on top of which everything else should be built.
-
Implications of Open-Source
Tools are nowadays less relevant than people or data, and the sharing mindset is a key asset for organizations. Based on this statement, we can divide the considerations on open source into two clusters: business considerations and individual considerations.
From a business perspective, the basic idea is that it is really hard to keep pace with current technological development, and you don't want your technology to become obsolete in three months' time. It is better to give it away for free and set the benchmark than to keep it proprietary and discard it after a few months. Furthermore, open sourcing:
- Raises the bar of the current state of the art for potential competitors in the field;
- Creates a competitive advantage in data creation/collection, in attracting talent (because of stronger technical branding), and in creating additive software/packages/products based on that underlying technology;
- Drives progress and innovation in foundational technologies;
- Increases the overall value, interoperability and sustainability of internal closed source systems;
- Raises awareness of the problems faced at scale on real-world data;
- Lowers the barrier to adoption, and gets traction for products that would not have it otherwise;
- Shortens the product cycle, because from the moment a technical paper is published or a piece of software is released, it takes only weeks to see augmentations of that product;
- More importantly, it can generate a data network effect, i.e., a situation in which more (final or intermediate) users create more data by using the software, which in turn makes the algorithms smarter, the product better, and eventually attracts more users.
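The data network effect in the last bullet can be sketched as a simple feedback loop. The simulation below is a toy under purely illustrative assumptions; the growth and quality functions are invented, not empirical:

```python
def simulate_data_network_effect(steps=5, users=1_000):
    """Toy loop: users -> data -> model quality -> more users.

    The functional forms (square-root quality, percentage user growth)
    are illustrative assumptions, not an empirical model.
    """
    data = 0.0
    history = []
    for _ in range(steps):
        data += users * 10                         # each user contributes ~10 data points
        quality = data ** 0.5 / 100                # diminishing returns to more data
        users = int(users * (1 + 0.01 * quality))  # a better product attracts more users
        history.append((users, round(quality, 2)))
    return history

for step, (u, q) in enumerate(simulate_data_network_effect(), start=1):
    print(f"step {step}: users={u}, quality={q}")
```

Each iteration compounds: more users generate more data, the data improves quality, and quality accelerates user growth, which is exactly the loop that makes giving the product away rational.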
From the developer's point of view, instead, there are several interesting considerations:
- GitHub accounts and packages look better and have a greater impact than a well-written résumé in this world;
- Data scientists and developers are first of all scientists with a sharing mindset, and part of the industry's power to attract and retain talent comes from augmenting the academic offer (i.e., better datasets, more interesting problems, better compensation packages, intellectual freedom);
- Academia has been drained of talent that moved to industry, and the concept of 'academic publication review' has been translated into 'peer review' (crowd-reviewing). This in turn translates into i) better troubleshooting and ii) a deeper understanding of a technology's potential and implications;
- Writing code that others can read and understand is what makes you a better developer and scientist. This is something you know only if you have done it;
- As a general rule of thumb, contributors from academia usually push innovation forward, while industry contributors prefer system stability. Releasing open source software helps you think about who will use it, and so to design the entire system to be more reliable and stable in the first place.
These are the many reasons why this model is working, even though there are advocates who claim that incumbents are not really maximally open (Bostrom, 2016) and only release technology that is already somewhat old to them. My personal view is that companies are getting the best out of spreading their technologies around without paying any cost or suffering any counter-effect: they still have uniquely large datasets, platforms, and a huge investment capacity that allow only them to scale up.
Regardless of the real reasons behind this strategy, the effect of this business model on AI development is controversial. According to Bostrom (2016), in the short term greater openness could increase the diffusion of AI. Software and knowledge are non-rival goods, and this would enable more people to use and build on top of previous applications and technologies at a low marginal cost, and to fix bugs. There would be strong brand implications for companies too.
In the long term, though, we might observe less incentive to invest in research and development because of free riding. Hence, there should exist a way to earn monopoly rents from the ideas individuals generate. On the other hand, open research is implemented to build absorptive capacity (i.e., it is a means of building skills and keeping up with the state of the art); it might bring extra profit from owning complementary assets whose value is increased by new technologies or ideas; and finally, it is going to be fostered by individuals who want to demonstrate their skills, build their reputation, and eventually increase their market value.
I am adding a final concept that I find intriguing, albeit a bit speculative. It concerns the war for talent and the uncanny vicious circle we are observing in the academia-industry relationship.
The problem is indeed twofold:
- Universities are losing faculty and researchers to the benefit of private companies. This does not allow universities to train the next generation of PhD students, who are now driving the AI wave;
- Things are already moving, and many graduate students are deciding not to pursue a PhD at all and to jump directly into the private tech sector. This means that we might not have a new generation of PhDs at all.
So no students, no teachers. What are we left with? Training on the job. I believe open source is helping private companies gradually become recognized as the new 'knowledge labs' (they already were, in my opinion, but traditionally this role has been assigned to universities).
It is an additional way in which big incumbents are taking over from universities, with a new indirect approach to education that might eventually disrupt the traditional learning we all know.
-
Conclusion
Despite these notes on the effect of open research on AI advancement in the short versus the long term, it is not clear where this innovation will be promoted. We are looking at a transition from universities, where historically innovation and research have resided, to industry. This is not a new concept, but it is really emphasized in the AI context. A vicious circle has been created, in which universities lose faculty and researchers to the benefit of private companies, which can offer a combination of higher salaries, more interesting problems, relevant large unique datasets, and virtually infinite resources. This does not allow universities to train the next generation of PhD students, who would be in charge of pushing research one step ahead. The policy suggestion is then to fund pure research institutes (e.g., OpenAI) or even research-oriented companies (for instance, Numenta) so as not to lose the invaluable contribution that pure research has given to the field.
Most of the considerations made so far were either general or specific to big players, but we have not yet focused on the different startup business models. An early stage company has to face a variety of challenges to succeed; usually, these are financial challenges, commercial problems, or operational issues.
The AI sector is very specific with respect to each of them. From a financial point of view, the main problem is the scarcity of specialized investors who could really increase the value of a company with more than mere money. The commercial issues concern instead the difficulty of identifying target customers and of getting one's head around the open source model: the products are highly novel and not always understood, and there might be more profitable ways to release them.
Finally, the operational issues are slightly more cumbersome: as mentioned above, large datasets and consistent upfront investments are essential and might be detrimental to a shorter-term monetization strategy. A solution to the data problem may be found in the "data trap" strategy, which, in venture capitalist Matt Turck's words, consists of offering (often for free) products that can initialize a data network effect. In addition, user experience and design are becoming tangibly relevant for AI, and this creates friction in early stage companies with limited resources to allocate between engineering, business, and design.
References
Bostrom, N. (2016). “Strategic Implications of Openness in AI Development”. Working paper.
Abhishek Mishra
19 Apr 2020 06:16:59 PM