Mobile AI apps like Claude and ChatGPT are evolving from chat apps into something that more closely resembles operating systems. One simple but powerful way to conceptualize the space: computers as a whole, and apps more specifically, are just giant functions that take in inputs and return outputs.
Right now they’re very good at taking in fuzzy human input and making sense of it (especially in voice and text), but still need other apps to really nail output.
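The "apps as functions" framing can be made concrete with a toy sketch. Everything here is illustrative stand-in code, not any real app's API: the point is just that the whole app collapses into one input-to-output function.

```python
# Toy model of an AI app as one big function: fuzzy human input in,
# structured output out. All function names here are illustrative.

def transcribe(voice_note: str) -> str:
    """Stand-in for speech-to-text, which today's apps already handle well."""
    return voice_note.strip().lower()

def interpret(text: str) -> dict:
    """Stand-in for the model making sense of fuzzy input."""
    return {"intent": "set_reminder", "raw": text}

def ai_app(fuzzy_input: str) -> dict:
    """The whole app, viewed as a single function: input -> output."""
    return interpret(transcribe(fuzzy_input))

print(ai_app("  Remind me to call mom tomorrow  "))
```

The gap the post describes lives on the output side: today the `interpret` step works well, but acting on the result still gets handed off to other apps.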
Other relevant convergent themes are Artifact Centric Computing and Audio-UIs. Also tangentially related are Slow Computing and Computation without Computers.
The Current Landscape
The major players each have distinct approaches:
- OpenAI (ChatGPT): Offers the most polished design and versatile platform, though their Actions API remains somewhat primitive. Tends to fast-follow strategic innovations from competitors.
- Anthropic (Claude): Excels in model quality and instruction-following, but the app lags behind in features. Could differentiate itself by offering more open API access to the app itself, not just the model.
- Other, cross-model players (like Poe): Have all the models and are theoretically well positioned, but face the challenging balancing act of serving casual users, power users, and developers simultaneously.
The Platform Strategy
A key tension in this space is between being open and being insular. While these apps currently provide 80-90% of needed functionality within their interfaces, their approach to external integrations varies:
- Some platforms seem hesitant to let users leave their ecosystem
- Others could differentiate by offering more permissive API access (similar to Facebook’s early platform strategy)
- Revenue sharing models for specialized processing (like image analysis or weather data) could create win-win scenarios
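One way to picture the revenue-sharing idea is a tool registry where each external capability carries its own revenue-split terms. This is a hypothetical sketch, assuming a registry keyed by tool name and a per-call `rev_share` field; none of these names correspond to a real platform API.

```python
# Hypothetical tool registry with revenue-share terms attached to each
# entry. Field names (input_schema, rev_share) are assumptions for
# illustration, not any real platform's schema.

WEATHER_TOOL = {
    "name": "weather_lookup",
    "description": "Fetch current conditions for a city",
    "input_schema": {"city": "string"},
    "rev_share": 0.15,  # platform passes 15% of per-call revenue to the developer
}

def register_tool(registry: dict, tool: dict) -> None:
    """Add a tool to the platform's registry, keyed by its name."""
    registry[tool["name"]] = tool

registry: dict = {}
register_tool(registry, WEATHER_TOOL)
```

The win-win framing falls out of the split: the platform keeps the user relationship, and the specialist (weather, image analysis) gets paid per call.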
The Data Retention Strategy
A critical aspect of these AI operating systems is their approach to user data retention. As users generate more complex knowledge structures through their interactions:
- Users build up valuable personal knowledge bases through their chat histories
- The ability to query across historical conversations creates compound value
- Complex data structures and relationships emerge that become harder to export
- This creates natural lock-in as users’ knowledge becomes more deeply embedded in the platform
This “data moat” strategy mirrors traditional operating systems, where the difficulty of migrating years of accumulated data and workflows becomes a powerful retention mechanism.
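The compound value of queryable history can be shown with a deliberately naive sketch. Real apps would use embeddings and retrieval rather than keyword overlap; this just illustrates why a searchable pile of past threads behaves like a personal knowledge base.

```python
# Minimal illustration of querying across stored chat threads. Keyword
# overlap stands in for real retrieval (embeddings, vector search).

threads = [
    "Notes on our Q3 pricing experiment and churn numbers",
    "Brainstorm: onboarding flow redesign ideas",
    "Follow-up on pricing annual plan discount results",
]

def search_history(query: str, threads: list[str]) -> list[str]:
    """Rank stored threads by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = [(len(q & set(t.lower().split())), t) for t in threads]
    return [t for score, t in sorted(scored, reverse=True) if score > 0]

print(search_history("pricing results", threads))
```

Every additional thread makes queries like this more useful, and every answer that draws on old threads deepens the lock-in the post describes.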
The Developer Opportunity
The key to success in this space may lie in creating a low-code environment where developers can:
- Build tools that are easily remixable
- Create applications that non-technical users can readily adopt
- Integrate with existing workflows and tools
- Build widgets and extensions that work within existing AI apps
- Access pull/push APIs to integrate with the chat history and artifacts
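What pull/push access to history and artifacts might look like for developers can be sketched as a small client. This is purely hypothetical: the class, method names, and data shapes are assumptions, not an existing platform API.

```python
# Hypothetical developer client for an AI app platform: pull threads out,
# push artifacts/widgets in. All names and shapes here are assumptions.

import json

class ChatPlatformClient:
    """Illustrative client backed by in-memory storage for the sketch."""

    def __init__(self) -> None:
        self._threads: dict[str, list[str]] = {}
        self._artifacts: dict[str, str] = {}

    def pull_thread(self, thread_id: str) -> list[str]:
        """Pull API: read a conversation's messages."""
        return self._threads.get(thread_id, [])

    def push_artifact(self, artifact_id: str, content: str) -> None:
        """Push API: write a widget or artifact into the user's workspace."""
        self._artifacts[artifact_id] = content

    def export_all(self) -> str:
        """Cheap, complete export is what keeps a platform open
        rather than turning accumulated data into a moat."""
        return json.dumps({"threads": self._threads, "artifacts": self._artifacts})

client = ChatPlatformClient()
client.push_artifact("budget-widget", "<table><tr><td>Q3</td></tr></table>")
print(client.export_all())
```

Whether a platform ships something like `export_all` is exactly the open-vs.-insular choice from the platform strategy section above.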
Raw
I want to add a note to the blog post I have about AI apps as operating systems. The retention play here, of course, is in data and increasingly complex data structures. For example, projects where users can not only store a bunch of their threads but then start asking questions about the knowledge they've already generated in those threads. At this point, we're getting to the level where only very sophisticated users will be able to pull their data out.