People Who Ship: From Prototype to Production
Welcome to People Who Ship! In this video and blog series, we’ll be bringing you behind-the-scenes stories and hard-won insights from developers building and shipping production-grade AI applications using MongoDB.
In each month’s episode, your host (that’s me, a Senior AI Developer Advocate at MongoDB) will chat with developers from both inside and outside MongoDB about their projects, tools, and lessons learned along the way. Are you a developer? Great! This is the place for you; People Who Ship is by developers, for developers. And if you’re not (yet) a developer, that’s great too! Stick around to learn how your favorite applications are built.
In this episode, Noam Rubin, a software engineer on Vanta’s AI team, shares his insights into what it takes to bring generative AI (gen AI) applications from prototype to production. We talk about identifying the right problems for AI, the importance of experimentation, finding trusted testers, and much more!
Noam’s top three insights
During our conversation, Noam shared several interesting nuggets about his experience shipping gen AI applications at Vanta. Here are my top three takeaways:
1. Place your bets on individuals who show they can adapt and learn quickly
Gen AI has leveled the playing field across software engineers, machine learning engineers (MLEs), and data scientists. Organizations across tech—including Vanta and MongoDB—are forming small, nimble centralized teams of individuals who can ship gen AI features quickly.
Given the shortage of talent with extensive gen AI experience, these centers of excellence typically comprise existing employees who demonstrate an appetite for experimenting with new technology, or external hires with MLE/data science backgrounds (including those with only personal projects or hobby-level gen AI experience).
Hackathons, both internal and external, are an effective way to identify this talent. Internal hackathons give engineers a forum to upskill and experiment with the technology, while external hackathons let you evaluate candidates in action rather than just on paper.
2. Use gen AI to prototype gen AI features
Gen AI feature development is inherently experimental, with unclear success metrics and unpredictable outcomes. Unlike traditional software or ML features, where requirements and expected results are well-defined, gen AI features involve exploring the unknown. For instance, Noam cites an example of using AI to help customers review documentation from third-party vendors in their vendor risk management product—a case where it would be difficult to predict the feature’s usefulness upfront.
This uncertainty demands an approach involving rapid experimentation. Teams need to quickly prototype, test with real users, measure impact, and either iterate or pivot entirely. This is where Noam’s team leverages gen AI tools like Cursor to quickly build proofs of concept and to get them in front of product teams, subject matter experts (SMEs), and trusted testers, tightening the experimentation and feedback loop.
3. Finding trusted testers for gen AI applications is not hard
Finding trusted testers for gen AI features is easier than you might expect. The biggest incentive for users to try a gen AI feature is the sense that it might solve a real problem or pain point they have. Among Vanta’s 10,000+ customers, three groups typically volunteer to be early testers of new features: those with the right risk appetite, self-selecting AI believers and promoters, and those whose organizations mandate gen AI use. However, Noam finds self-selection (i.e., AI belief) to be the strongest signal for a trusted tester.
Overall, trust works both ways. If you want honest, dependable feedback from your users, you need to show them that you are listening and acting on their suggestions. With gen AI features, you can often iterate on user feedback much faster than in traditional development cycles, especially for prompt refinements, output formatting, and basic guardrails.
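To make the "basic guardrails" idea concrete, here is a minimal sketch of an output guardrail around a model call: validate that the model returned well-formed JSON with the expected keys, and retry if it didn’t. This is an illustration only, not Vanta’s implementation; the `fake_llm` function, the prompt, and the required keys are all hypothetical stand-ins.

```python
import json

def fake_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call, for illustration only.
    return '{"summary": "Vendor doc reviewed.", "risk": "low"}'

# Keys the downstream feature expects in the model's JSON output (assumed).
REQUIRED_KEYS = {"summary", "risk"}

def guarded_call(prompt: str, retries: int = 2) -> dict:
    """Call the model, validate its JSON output, and retry on failure."""
    for _ in range(retries + 1):
        raw = fake_llm(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed JSON: try again
        if REQUIRED_KEYS <= data.keys():
            return data  # output passes the guardrail
    raise ValueError("model output failed validation")

result = guarded_call("Summarize this vendor document ...")
print(result["risk"])  # -> low
```

Because checks like this live entirely in application code, tightening them in response to tester feedback (a new required field, a stricter format) is a small, fast change compared to a traditional release cycle.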
Noam’s AI tool recommendations
Noam mentions Cursor and Vercel’s v0 as his go-to tools for rapidly prototyping AI features. These AI-assisted development tools go beyond code autocomplete to generate, rewrite, refactor, and debug code—with v0 able to create complete UI components and applications from natural language descriptions. This makes it easier for Noam’s team to build proofs of concept and to iterate on them, thus speeding time to production.
Identifying product-market fit
Throughout the episode, Noam underscores the importance of getting early user feedback to prove out the usefulness of gen AI products and features. If early adopters—who are typically the most motivated to try new solutions—aren’t engaging with your feature, it’s a strong indicator that the problem isn’t significant enough and that you don’t yet have product-market fit.
If you’re actively building and prototyping AI applications and want to learn about how MongoDB can help, submit a request to speak with one of our specialists! If you would like to explore on your own, check out our self-paced AI Learning Hub and our gen AI examples GitHub repository.