Killing the "blank page problem"
Image and text have been the most popular modalities for generative models - this magic will come to every type of content! Generation products create content from "blank pages" (ex. turning a text prompt into a slide deck), or take incremental assets (ex. a sketch or an outline) and flesh them out. Some companies will do this via their own proprietary model, while others may mix or stitch together multiple public models.
Making open source models accessible
Some of the most interesting work in content generation is happening in the open source ecosystem, with developers finetuning and combining base models to push the limits of what users can create. However, it’s hard for everyday consumers to download and run these models locally - we’re excited about products that create the interface to utilize this tech in the browser (or an app).
Creating remixable outputs
AI makes content uniquely flexible. Every image, clip, or song can inspire another iteration or combination of parts. The current workflow for this is copy/pasting prompts, which isn't ideal. We predict platforms will "productize" this by allowing creators to expose their prompts and make their work instantly remixable - earning social and potentially even financial capital for doing so.
Enabling consumers to build content creation apps
As users turn to AI to generate more complex content (think a five-minute film, not a four-second clip), it's unlikely one prompt or even one model will be able to handle every step of generation. We hope to see products that help users "chain" together models and prompts behind the scenes - and then save these workflows and/or publish them for others to use.
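A minimal sketch of what this kind of prompt chaining looks like under the hood: each step's output feeds the next step's prompt template. The model call below is a stub - in a real product it would hit one or more generation APIs (the function names and templates here are our own illustrative assumptions, not any specific product's design).

```python
# Minimal sketch of "prompt chaining": each step substitutes the previous
# output into its template. The model call is stubbed out; a real workflow
# would route each step to a text/image/video model.

def fake_model(prompt: str) -> str:
    """Stand-in for a generative model call (assumption: returns text)."""
    return f"<output for: {prompt}>"

def run_chain(steps, user_input, model=fake_model):
    """Run prompt templates in sequence, feeding each output into {prev}."""
    result = user_input
    for template in steps:
        result = model(template.format(prev=result))
    return result

# A saved, shareable workflow is just this list of templates.
workflow = [
    "Write a 3-scene outline for: {prev}",
    "Expand each scene into a shot list: {prev}",
    "Generate a video clip per shot: {prev}",
]
print(run_chain(workflow, "a five minute film about a lighthouse"))
```

The "publish for others" piece falls out naturally: the workflow is just data (a list of templates), so it can be saved and rerun on anyone's input.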
Owning multi-media workflows
Many creative projects require more than one type of content — users want to combine an image with text, music with video, or animation with a voiceover. As of now, there isn’t one model that can generate all of these asset types. This creates an opportunity for workflow products which allow users to generate, refine, and stitch different content types in one workspace.
Enabling in-platform refinement
The final 10% of polish is often the difference between creating something good and something great. AI products can help users identify what can be improved, and then automatically make these changes. Think of this like Apple’s “auto-retouch” feature on photos, but for anything! So far, we've mostly seen this via upscaling - but expect new primitives to emerge here.
Iterating with intelligent editors
Almost no work product is “one shot” — especially with AI, when there’s inherent randomness in every generation. Hitting the re-generate button or revising your prompt is a critical, but time-consuming and frustrating, part of the process. We’re excited to see products that enable users to take an existing output and refine it (ex. regenerate one frame or feature) without completely starting from scratch.
Automatically repurposing content
An enormous amount of manual editing goes into repurposing a piece of content for different platforms. A classic example is turning a long form YouTube video into TikToks / IG Reels, podcast audio, and even a blog post. AI can do this instantly — and incorporate data-driven predictions into which clips or elements will engage different audiences. This drastically increases the "shelf life" of media.
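At its core, repurposing is a ranking problem: score candidate clips per destination platform, then keep the best fits. A hedged sketch below - the clips, scores, and length limits are made up for illustration; a real product would predict engagement from the content itself.

```python
# Sketch of clip selection for repurposing: filter candidate clips by the
# platform's length limit, then rank by a (here, hardcoded) predicted
# engagement score for that platform.

def best_clips(clips, platform, max_seconds, top_n=2):
    """Pick the highest-scoring clips that fit the platform's constraints."""
    fits = [c for c in clips if c["seconds"] <= max_seconds]
    ranked = sorted(fits, key=lambda c: c["score"][platform], reverse=True)
    return [c["id"] for c in ranked[:top_n]]

clips = [
    {"id": "intro", "seconds": 45,  "score": {"tiktok": 0.2, "blog": 0.9}},
    {"id": "demo",  "seconds": 50,  "score": {"tiktok": 0.8, "blog": 0.4}},
    {"id": "deep",  "seconds": 600, "score": {"tiktok": 0.9, "blog": 0.7}},
]
print(best_clips(clips, "tiktok", max_seconds=60))  # ['demo', 'intro']
```

The same source content produces different cuts per platform simply by swapping the scoring key and constraints - which is exactly the "shelf life" extension described above.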
Agents that act as systems of action
We expect to see general agents that can complete common consumer tasks like booking a restaurant or finding and sending a gift to a friend. However, we're also anticipating specialized agents fine-tuned for specific and complex tasks, like data analysis and marketing automation. The latter may be "first to market" as they narrow the scope of requests and actions they need to reliably fulfill.
Voice-first apps
Most AI products now are text-first (natural language prompt → output). But voice is often a more convenient and natural medium to communicate, allowing consumers to share more complex and even unfinished thoughts. We expect to see AI apps embrace this — the simplest version is voice dictation and summarization, but this will expand into utilizing ambient audio captured throughout the day.
Apps that provide in-flow assistance
Context switching is deadly. With AI, users should never have to break the flow of their work. Information, ideas, or examples magically appear where and when needed. This might manifest in tools that own an entire workflow, like translating research notes to a final blog post and graphics. Or, it may be an assistant that "lives" wherever a user does work and can interject appropriate context.
"Build your own" workflows
AI finally allows non-programmers to build automations that streamline their work. These may be information-based (ex. email me if an IPO is announced) or systems-based (ex. Slack me if someone submits a support ticket). Users will be able to specify what they want in natural language, with LLMs acting as intermediaries to allow users to stitch together much more complex flows than pre-AI.
Differentiated value prop vs. generalist chat products
To drive strong user adoption and retention, AI companion products need to offer something different or better than generalist products like ChatGPT. This could be a model that specializes in content that mainstream models aren't good at (or don't allow), like fictional roleplays or erotica. Or, products can differentiate on UX — building a more engaging or gamified chat experience.
New methods of interaction
Today, most interactions with AI companions happen via a simple chat interface. We expect these companions will become more dynamic and "live" beyond the text box, across all of the software and even hardware we use now (as well as new devices 👀). Imagine a companion you can summon anywhere, with a voice, avatar, and animation that feels like a real friend hanging out with you.
Apps that enable memory and progression
Just like a human relationship, your relationship with an AI companion should evolve. A companion should get to know you better over time, remember your previous conversations, and change the nature of the relationship. Some companion products may even up the "stakes" - with the AI able to evolve or even pause their communications if the user isn't engaging in a specific way.
Hybrid AI x human communities
AI is a dynamic and often surprising conversation partner — we expect to see messaging & social apps where bots are treated as equal citizens. Bots should be able to join chats with you and friends, and weigh in or spark discussions (no more dry group chats). We've also seen early versions of bot-centric social apps. Think Instagram or X, but where AI is creating content, and humans can jump in.
Live, interactive entertainment at massive scale
AI allows anyone to be an entertainer with the help of generative avatars — VTuber Neuro-sama (500k+ subscribers) is one example. And AI makes live content much more scalable, with no humans behind the scenes controlling the storyline or speaking to viewers. AI characters are already hosting interactive streams and shows in which they respond to audience questions, comments, or votes in real-time.
Next-gen avatars for next-level communication
Memes, GIFs, and images are a language of their own - and a very effective one. AI gives consumers a hyperrealistic digital likeness they can use to instantly generate and share these assets. If you want to convince your friend to go on a trip, generate a photo of you two on the beach! So far, we've mostly seen this manifest in LoRAs for images and GIFs, but we imagine it will come to audio, video, and more.
AI to make IRL matches
Today's matchmaking apps — whether for dating, friendship, or professional connections — are inefficient. They rely on users: (1) building a good profile; (2) knowing what they are looking for; and (3) swiping enough to find a fit. What if you could instead chat with a bot that learns about you on a deeper level and uses this information to make a curated set of matches? This brings the real-world matchmaker experience to everyone, but "upgraded" as AI can parse a more expansive range of options.
Multimodal ability
Most early AI education apps are chat interfaces that ingest and output text. However, learning is most effective when multiple modalities are combined to teach a topic, especially for different types of learners. We expect to see apps expand to allow users to "input" questions, topics, or ideas in all forms (audio, image, text, and even video) and get a response in the media type that helps them learn best.
New interfaces that break the "edtech" mold
While some AI tutors may operate via more traditional formats like lectures or Q&A, we also anticipate the rise of more casual, experiential learning at scale. This may not even look like "learning" — for example, a new browser geared towards exploration, or a toy that talks to you. Products here will provide personalized, adaptive interfaces that users learn from over time via deep engagement.
Go-to-market that doesn't rely on schools
AI doesn't solve the structural problems of selling into schools. We believe the strongest companies will "force" their way into the education system through parents and even individual teachers, who will pay for products that save them time and help their students learn better. Bottom-up adoption and engagement is key.
Hyper-specialized products
Building an effective and engaging edtech platform is hard. We expect the most successful products will live at the intersection of a specific stage (ex. kindergarten vs. high school) and subject (ex. math vs. reading), with an interface that uniquely serves those learners. Platforms here may even look overly narrow to start with (ex. "reading tutors for preschoolers") but can expand over time.
Cross-account visibility and management
Consumers have shown a clear preference for having many investing apps - you might use Robinhood for day trading, Fidelity for actively managed funds, and Wealthfront for index funds. But there is no app that gives a global portfolio view, and no easy way to optimize that portfolio. Today's AI products can analyze and move money between accounts - as agents improve, they will make trades across accounts.
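The first step of that global view is just aggregation: merging holdings from every account into one portfolio. A hedged sketch, with hardcoded account data - a real product would sync positions via broker or aggregator APIs (the tickers and values below are purely illustrative).

```python
# Sketch of a global portfolio view aggregated across investing apps:
# merge per-account holdings into one ticker → total value map.

from collections import defaultdict

def global_view(accounts):
    """Sum holdings across accounts, keyed by ticker."""
    totals = defaultdict(float)
    for account in accounts:
        for ticker, value in account["holdings"].items():
            totals[ticker] += value
    return dict(totals)

accounts = [
    {"app": "Robinhood",   "holdings": {"AAPL": 1200.0, "TSLA": 800.0}},
    {"app": "Fidelity",    "holdings": {"AAPL": 3000.0, "VTI": 5000.0}},
    {"app": "Wealthfront", "holdings": {"VTI": 2500.0}},
]
print(global_view(accounts))  # AAPL 4200.0, TSLA 800.0, VTI 7500.0
```

Optimization and cross-account trading build on top of exactly this merged view - once the portfolio exists as one data structure, an agent can reason about it as a whole.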
Auto-optimization across products
Many consumers are "overpaying" for their debt, insurance, and bills. But finding cheaper options, calculating the new net cost, and going to the effort to switch or negotiate with providers is a very tall ask. AI agents can take over this process, by constantly monitoring the landscape and executing on a transfer if needed, with little to no action required from the user.
Programmatic investing using natural language
One of the earliest waves of AI-enabled fintech apps gave individuals the power of institutional traders — buying and selling assets programmatically. Using natural language or decision trees, consumers with no knowledge of code can build algorithms that execute trades for them. These products have technically been around since the early 2020s, but are becoming more sophisticated and generative.
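To make the decision-tree idea concrete, here is a minimal sketch of the kind of rule a no-code builder might compile from "buy when the price dips 5% below its 30-day average". The price history and thresholds are invented for illustration; a real product would pull live quotes through a brokerage API.

```python
# Sketch of a compiled natural-language trading rule: compare today's price
# to a trailing moving average and emit a buy/hold signal.

def moving_average(prices, window):
    """Average of the last `window` prices (or fewer, early on)."""
    recent = prices[-window:]
    return sum(recent) / len(recent)

def signal(prices, window=30, dip=0.05):
    """Return 'buy' if the latest price is `dip` below the trailing average."""
    avg = moving_average(prices[:-1], window)  # average excludes today
    if prices[-1] <= avg * (1 - dip):
        return "buy"
    return "hold"

history = [100.0] * 30 + [94.0]  # steady at 100, then a 6% dip
print(signal(history))           # → buy
```

The natural-language layer's job is only to choose the parameters (window, dip, asset) - the executable rule underneath stays this simple, which is what makes it reliable enough to run unattended.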
Complex transactions move from services → AI
Many financial decisions are too complex to be automated with a single click - think tax planning or wealth management - as human judgment and consumer preferences have a real influence on the approach. We expect these white glove services to be augmented and in some cases replaced with AI. This will lower costs and make 1:1 financial services available to every consumer who wants them.
Data-first approach
AI will allow consumers to unlock real-time insights informed by detailed and personal data - instead of general tips or advice from a check-up a few months (or years) ago. We predict the most successful platforms here will capture net new data (ex. vocal sentiment for mental health), but also seamlessly integrate with existing apps to give recommendations informed by a user's broader health picture.
Humanlike interaction
Health is a sensitive subject, which makes it crucial for products to exhibit emotional intelligence, offer diverse interaction modes, and respond quickly. LLMs are uniquely able to check all of these boxes, and create an experience that feels akin to interacting with a caring and competent human. In early studies, some AI systems have already shown better diagnostic accuracy - and better bedside manner - than human doctors!
Daily use case with memory
Health is a journey that demands consistent commitment. We're excited about products that use AI to create daily (or even 2-3x+ per day) check-in or adherence behavior, by unlocking new insights. This might involve scanning a meal to get an estimated calorie count, or recording your mood. Ideally, these products are "low touch" in input time but can transform this data into real-time recommendations.