July 06, 2025
What insights does ICONIQ Capital's 2025 annual report, "The Builder's Playbook", offer for the implementation of AI within enterprises?
The Enterprise AI Blueprint: Translating ICONIQ’s “The Builder’s Playbook” into an Internal Competitive Advantage
Part 1: Executive Summary: The Execution Imperative in the AI Era
The competitive landscape of Artificial Intelligence (AI) has decisively shifted from an initial “adoption race” to a demanding “battle of execution.” ICONIQ Capital’s 2025 annual report, “The Builder’s Playbook,” clearly captures this market pulse, pivoting its focus from the “buying journey” of previous years to a deep dive into “how to build”.¹ This playbook is not just for startups; more importantly, it provides a critical strategic framework for any enterprise seeking to transform AI from a promising concept into a reliable, productivity-driving internal asset.¹
The report’s most striking finding is the vast chasm in velocity and maturity between “AI-native” and “AI-enabled” companies. Data shows that nearly half (47%) of AI-native companies have already achieved critical scale and product-market fit with their core offerings, a milestone reached by only 13% of their AI-enabled counterparts.³ This gap reveals a profound reality: this is not merely a technological lead, but a victory of operating models. The success of AI-native companies stems from an operational ethos centered on speed, aggressive resource allocation, and a culture of rapid experimentation.³ Therefore, the central thesis of this report is: for internal enterprise AI projects to succeed, they must adopt an “Internal AI-Native” mindset and operational approach.
A closer look at the data reveals that this superior performance is tightly correlated with specific organizational behaviors. High-growth companies, a proxy for the AI-native mindset, dedicate a significantly higher proportion of their engineering talent (37%) to AI compared to other companies (28%); they are also far more aggressive in experimenting with and adopting new tools (92% vs. 80%).¹ This indicates that the key success factor is not merely possessing AI technology, but operating like a company built around AI. This presents a profound challenge to traditional enterprises: the primary obstacle to internal AI success is often not the technology itself, but the inherent friction between the agile, experimental “AI-native” model and conventional corporate governance, budget cycles, and risk-averse cultures. “The Builder’s Playbook,” therefore, is not just a technical guide but a roadmap for enterprises to reinvent their operational DNA.
This report will be structured around the five core pillars of the playbook, reinterpreted for the unique context of internal enterprise AI implementation, providing leaders with a clear, actionable framework¹:
Architecture & Product Strategy
People & Culture
Financials & ROI
Internal Value & Adoption
Governance & Scaling
Through a deep analysis of these five pillars, this report aims to help enterprises transform AI from a technology project into a durable engine of core business value.
Part 2: Pillar 1 - Architecture & Product Strategy: Building the Internal AI Engine
The Core Strategic Challenge: From “What Can AI Do?” to “What Should AI Do?”
The primary challenge for enterprises implementing AI internally is not technical feasibility, but strategic positioning. ICONIQ’s report clearly states that for internal AI projects, the main deployment challenges are strategic, not technical. “Finding the right use cases” is cited as the top difficulty by 46% of respondents, while “proving ROI” plagues 42%.¹ This stands in stark contrast to external product development, which is more concerned with technical issues like model hallucinations (39%) and explainability (38%).¹ This finding reveals a fundamental issue: before committing resources to technical development, enterprises must first solve the strategic question of “what to build.” This section provides a framework to systematically address this challenge.
What to Build: Focusing on “Agentic Workflows” and “Vertical Applications”
Market trends point enterprises in a clear direction. The report shows that nearly 80% of top AI builders are investing heavily in two key areas: Agentic workflows and Vertical applications.³ Agentic workflows refer to autonomous systems capable of executing multi-step, complex tasks on behalf of a user.
For internal enterprise applications, this means the strategic focus should shift from developing simple, single-turn chatbots to building AI agents that can automate high-friction, complex internal processes. Examples include automating multi-level procurement approval processes, coordinating cross-departmental tasks for new employee onboarding, or performing financial reconciliation across multiple systems. Companies in ICONIQ’s portfolio provide excellent examples of this vertical workflow-centric approach. For instance, Tennr automates the entire patient referral workflow in the healthcare sector, handling everything from document reading and information extraction to process routing.¹ Similarly, Legora focuses on automating complex legal research and document drafting, providing powerful professional augmentation for lawyers.⁷ These cases demonstrate that the path to creating maximum value lies in going deep into vertical business domains and solving core workflow bottlenecks.
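The multi-step pattern behind these agentic workflows can be sketched as a simple pipeline: extract structured fields from the request, apply a deterministic policy check, then route the result for approval. The step names, field names, and the $5,000 approval threshold below are illustrative assumptions for the procurement example above, not details from the report.

```python
# Illustrative sketch of an internal agentic workflow: a procurement
# request moves through extract -> policy check -> approval routing.
# In a real system, an LLM would handle the extraction step from free text.

def extract_fields(raw_request: dict) -> dict:
    """Step 1: normalize the incoming request into structured fields."""
    return {"item": raw_request["item"], "amount": float(raw_request["amount"])}

def policy_check(fields: dict) -> bool:
    """Step 2: apply a deterministic policy rule before any routing."""
    return fields["amount"] > 0

def route_for_approval(fields: dict) -> str:
    """Step 3: pick the approver based on spend level ($5,000 is an assumed threshold)."""
    return "department_head" if fields["amount"] < 5000 else "finance_committee"

def procurement_agent(raw_request: dict) -> dict:
    """Run the three steps end to end, short-circuiting on a failed policy check."""
    fields = extract_fields(raw_request)
    if not policy_check(fields):
        return {"status": "rejected", "reason": "failed policy check"}
    return {"status": "pending_approval", "approver": route_for_approval(fields)}
```

A $12,000 laptop order would route to the finance committee, a $40 stationery order to the department head, and a malformed request would be rejected before it reaches any human, which is what makes the pattern cheaper than a chatbot that merely answers questions about the process.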
How to Build: The Strategic Divergence of the Internal Tech Stack
The technology stack for an internal AI project must follow a fundamentally different logic than that for an external product because it serves a different optimization goal.
The report’s data clearly illustrates this divergence in priorities. For customer-facing products, Model accuracy is the overwhelming top consideration, cited by 74% of respondents. However, when the context shifts to internal enterprise applications, Cost becomes the most important factor (74%), slightly surpassing accuracy (72%) and privacy (50%).¹
This fundamental shift from “accuracy-first” to “cost-first” must be the cornerstone of the entire internal AI technology strategy. This cost-driven principle directly explains and validates the growing industry trend of multi-model architectures. The report shows that companies use an average of 2.8 different models for each customer-facing product ³ and are building architectures that support rapid model swapping to optimize the cost-performance trade-off for each specific task. Therefore, a mature internal AI team must possess this capability: using expensive, high-precision frontier models only when absolutely necessary, while handling the majority of routine workloads with lower-cost, faster open-source or specialized models.
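This cost-first routing capability can be sketched as a small catalog-plus-router pattern: each task declares the quality tier it needs, and the router picks the cheapest model that meets it. The model names, prices, and tiers below are invented placeholders, not quotes from any vendor’s price list.

```python
from dataclasses import dataclass

@dataclass
class ModelOption:
    name: str
    cost_per_1k_tokens: float  # USD; illustrative figures only
    quality_tier: int          # 1 = basic, 2 = mid, 3 = frontier

# Hypothetical model catalog -- names and prices are assumptions for the sketch.
CATALOG = [
    ModelOption("small-oss-model", 0.0002, 1),
    ModelOption("mid-tier-model", 0.002, 2),
    ModelOption("frontier-model", 0.03, 3),
]

def route(task_complexity: int) -> ModelOption:
    """Pick the cheapest model whose quality tier meets the task's needs."""
    eligible = [m for m in CATALOG if m.quality_tier >= task_complexity]
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)
```

Under this scheme, routine classification or summarization work (tier 1) lands on the cheap open-source model, and only genuinely hard tasks pay frontier-model prices; swapping a model in or out is a one-line change to the catalog, which is the architectural flexibility the report describes.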
At the infrastructure level, a cloud-first strategy has become the dominant paradigm. 68% of companies opt for fully cloud-based solutions, and 64% rely on external AI API services to minimize upfront capital expenditure and accelerate time-to-market.¹ In terms of data strategy, Retrieval-Augmented Generation (RAG) and Fine-tuning are the most common techniques for securely applying proprietary data to AI models.¹
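RAG’s core loop — retrieve the relevant internal documents, then ground the prompt in them — can be sketched without any ML dependencies. The keyword-overlap retriever below is a deliberately simplified stand-in for the embedding-based vector search a production system would use; the prompt wording is likewise just an illustration.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (a stand-in for vector search)."""
    query_terms = set(query.lower().split())
    return sorted(
        documents,
        key=lambda d: len(query_terms & set(d.lower().split())),
        reverse=True,
    )[:k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Assemble a prompt that instructs the model to answer only from retrieved context."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The security property the report highlights comes from the structure, not the retriever: proprietary documents never leave the retrieval store wholesale, and only the few passages relevant to a given query are placed in the model’s context window.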
To help enterprises systematically address the primary challenge of “finding the right use cases,” the following structured decision-making tool is provided.
Table 1: Internal AI Use Case Prioritization Matrix
[TABLE]
This matrix provides a data-driven framework to help leaders filter a strategic portfolio of AI projects from a collection of scattered ideas, ensuring that limited resources are focused where they can create value fastest and most effectively.
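A minimal scoring version of such a matrix can be expressed in code. Since the table itself is not reproduced here, the two axes (business value and feasibility, each scored 1–5), the 3-point threshold, and the quadrant names — chosen to echo the “Quick Wins” language used in the roadmap in Part 7 — are all assumptions for illustration.

```python
def classify_use_case(business_value: int, feasibility: int) -> str:
    """Map a use case scored 1-5 on each axis to a prioritization quadrant.
    The 3-point threshold splitting the quadrants is an illustrative assumption."""
    for score in (business_value, feasibility):
        if not 1 <= score <= 5:
            raise ValueError("scores must be between 1 and 5")
    if business_value >= 3 and feasibility >= 3:
        return "Quick Win"        # high value, easy to deliver: pilot first
    if business_value >= 3:
        return "Strategic Bet"    # high value, hard to deliver: staff deliberately
    if feasibility >= 3:
        return "Fill-In"          # easy but low value: do opportunistically
    return "Deprioritize"         # low value, hard to deliver: park it
```

Running every candidate idea through the same function forces the consistency the matrix is meant to provide: an invoice-automation agent scoring (5, 4) is a Quick Win, while a speculative forecasting project scoring (4, 2) is flagged as a Strategic Bet that needs dedicated resourcing rather than a quick pilot.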
Part 3: Pillar 2 - People & Culture: Assembling and Empowering the AI Vanguard
Talent: The Core Differentiator and Primary Bottleneck
ICONIQ’s report unequivocally states that talent strategy is no longer a simple support function but a core differentiator for building a competitive advantage.³ However, it is also the biggest bottleneck enterprises face. Data shows that a staggering 54% of companies report being behind schedule in their AI talent acquisition plans.¹ This contradiction highlights that in the AI era, the ability to both possess and effectively leverage talent will directly determine a company’s success or failure.
The Winning Organizational Structure: Leadership and Team Composition
To successfully implement AI internally, a matching organizational structure must be established.
Dedicated AI Leadership: The report reveals a strong correlation between company size and dedicated AI leadership. When a company’s annual revenue crosses the $100 million threshold, the proportion with a dedicated AI leader jumps to 50%; for companies with over $1 billion in revenue, this figure rises to 61%.¹ This trend is driven by the growing operational complexity of AI, requiring a centralized owner to unify AI strategy, coordinate cross-departmental resources, manage organizational complexity, and ultimately be accountable for the business outcomes of AI projects.
Cross-Functional Vanguard Teams: The most effective “builders” universally adopt a cross-functional team model, with core members including AI/ML Engineers, Data Scientists, and AI Product Managers.³ The report’s data confirms this, as these three roles are the most common existing AI positions in companies (at 88%, 72%, and 54% respectively) and are also the top hiring priorities.¹ For internal projects, the role of the AI Product Manager is particularly critical. They act as the bridge between the technical team and business units, responsible for translating business needs into executable technical specifications and ensuring the final AI solution genuinely solves business pain points.
The Talent Paradox: Why You Can’t Win by Hiring Alone
Enterprises face a stark paradox in the war for AI talent: this war cannot be won by external hiring alone. A systematic approach is required to maximize the leverage of existing talent.
A deep dive into the data reveals the root of this dilemma. The report indicates that the primary factor hindering rapid hiring is not budget (cited by only 49% of respondents), but the extreme scarcity of qualified candidates in the market (cited by 60%).¹ Meanwhile, the average time to hire a key AI/ML engineer exceeds 70 days.¹ This dual pressure of “insufficient talent supply” and “long hiring cycles” makes any strategy relying solely on recruitment destined to fail.
Therefore, the solution must be a system that multiplies the impact of the existing team. This system should comprise three core strategies:
Augment Existing Teams: Actively deploy AI-driven development tools to empower the current engineering workforce. The report shows that 77% of respondents are already using coding assistants, and in high-growth companies, an average of 33% of code is written by AI.¹ This dramatically increases development efficiency, allowing limited AI experts to focus on more creative work.
Leverage External Tools: Make full use of managed services and AI APIs provided by cloud platforms. This frees the team from the heavy lifting of underlying infrastructure maintenance, allowing them to focus on application-level innovation and value creation.
Internal Upskilling and Transformation: Establish systematic internal training and upskilling programs to help existing high-performing employees transition into AI-related roles. This not only alleviates the pressure of external hiring but also retains valuable business domain knowledge.
Cultivating an AI-First Culture
Technology and talent must be supported by a compatible cultural soil. The report reveals significant differences in AI culture across companies. A full 92% of high-growth companies exhibit a culture of actively embracing and experimenting with new AI tools, while other companies are generally more cautious.¹
To successfully promote AI within an enterprise, simply providing access to tools is far from enough. The report offers a key insight: organizations with high adoption rates (over 50% of employees actively using AI tools) share a common trait—they have deployed a broad range of AI use cases internally, averaging 7.1.¹ The strategic implication of this finding is that the best path to driving enterprise-wide adoption and cultural change may not be to pour all resources into a single “killer app,” but rather to implement a portfolio of smaller AI applications across multiple departments. This strategy creates more employee touchpoints, normalizes AI in daily work, and fosters a virtuous cycle of “contact-use-feedback-improvement,” ultimately embedding an AI-first mindset into the corporate culture.
Part 4: Pillar 3 - Financials & ROI: Mastering the Economics of Internal AI
Budget Planning: Treating AI as a Core Business Function
AI is no longer an experimental budget item for the R&D department; it is becoming a core business function that impacts the company’s profit and loss (P&L) statement. The report shows that AI-enabled enterprises are allocating 10-20% of their R&D budgets to AI development, and this percentage is growing across all revenue bands.³ This marks a significant shift: companies must plan and manage AI financial investments just as they would for any core business operation.
Understanding the Evolving Cost Structure
For effective financial planning, leaders must understand how the cost structure of an AI project evolves with its maturity. The report points out that in the early stages, project costs are primarily composed of talent costs, including expenses for recruitment, training, and upskilling. However, as the project moves from development to scaled deployment, the center of gravity for costs will decisively shift, with cloud infrastructure costs, model inference costs, and governance costs accounting for the vast majority of expenditures.³
Model inference costs, in particular, can skyrocket after a product goes live. Among these, API usage fees for third-party models are considered the most difficult infrastructure cost to control.¹ This stark financial reality provides the ultimate and most compelling business case for the “cost-first, multi-model” internal tech stack architecture proposed in Part 2. Enterprises must make cost control a core design principle from the very beginning of technology selection, or they will face runaway operational expenses and profit erosion at the scaling stage.
The “Value-Center GTM” Model for Internal ROI
To drive internal adoption and prove its business value, the internal AI team must draw from the successes of the external product market and adopt a “value-center” operating model.
The report clearly states that “proving ROI” is one of the top three deployment challenges for both internal and external AI projects.¹ In the external market, this pressure is forcing pricing models to evolve towards usage-based and outcome-based models, with the core logic being to directly link cost to the value received by the customer.³
This trend holds profound implications for internal AI teams. While an internal team’s “Go-to-Market (GTM)” strategy does not involve sales, its core objectives are the same: driving adoption and value attribution. This means the internal AI team must operate like an independent business unit. They need to clearly articulate the value proposition of their solutions to other business departments and, more importantly, must establish a rigorous system to track and report the value they create. ICONIQ’s investment case, Tennr, provides a perfect reference. Tennr offers its healthcare clients not just technology, but hard ROI data, such as “reducing processing time by 60%” and “doubling a team’s daily capacity”.⁶ This is the gold standard that internal AI teams need to emulate.
Measuring What Matters: A Blueprint for the Internal AI Scorecard
The report provides clear guidance on how to measure the success of internal AI. For internal applications, value is primarily demonstrated through two core metrics:
Productivity Improvements: 75% of respondents use this as their primary measure.¹
Cost Savings: 51% of respondents focus on this metric.¹
The importance of these two metrics far outweighs others typically used for external products, such as revenue uplift (20%) or customer retention (20%).¹ This further emphasizes that the core mission of internal AI is to optimize operational efficiency. The report also notes that surveyed companies have generally achieved significant productivity gains of 15-30% across various generative AI applications.¹
To put these measurement principles into practice, here is a standardized AI project ROI dashboard template to help AI teams demonstrate their value in a quantitative, business-friendly language to financial and business leaders.
Table 2: AI Project Return on Investment (ROI) Dashboard Template
Project Name: Accounts Payable Invoice Automation Agent
Evaluation Period: July 2025
| Metric Category | Specific Metric | This Month’s Data | Quantified Value (Monthly) |
|---|---|---|---|
| Investment (Costs) | 1. Model API/Inference Costs | $1,500 | -$1,500 |
| | 2. Cloud/Data Storage Costs | $500 | -$500 |
| | 3. Engineering/Maintenance Labor Costs (0.5 FTE) | $6,000 | -$6,000 |
| | Total Investment | | -$8,000 |
| Return (Value) | 4. Average Invoice Processing Time Reduction | From 15 mins to 3 mins | |
| | 5. Invoices Processed per FTE Increase | From 30/day to 150/day | |
| | 6. Manual Error Rate Reduction | From 2% to 0.1% | |
| | 7. Total Hours Saved (Calculated from Metrics 4 & 5) | 350 hours | +$17,500 (at $50/hour) |
| | 8. Cost of Errors Avoided (Calculated from Metric 6) | | +$2,000 (estimated) |
| | Total Return | | +$19,500 |
| Net Return & ROI | Monthly Net Return (Total Return - Total Investment) | | +$11,500 |
| | Monthly Return on Investment (ROI) | | 143.75% |
This dashboard template translates abstract “value” into concrete financial figures, directly addressing the core challenge of “proving ROI.” It provides a standardized language for AI teams to clearly showcase their results, thereby transforming themselves from a cost center into a clear value-creation engine.
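The dashboard’s arithmetic can be wired into a small helper so that every project reports the same way; the figures below reproduce the Table 2 example, and the cost-category keys are just illustrative labels.

```python
def roi_summary(monthly_costs: dict[str, float],
                hours_saved: float,
                hourly_rate: float,
                errors_avoided_value: float) -> dict[str, float]:
    """Compute total investment, total return, net return, and ROI%
    from the dashboard's line items."""
    total_investment = sum(monthly_costs.values())
    total_return = hours_saved * hourly_rate + errors_avoided_value
    net_return = total_return - total_investment
    return {
        "total_investment": total_investment,
        "total_return": total_return,
        "net_return": net_return,
        "roi_pct": round(100 * net_return / total_investment, 2),
    }

# Figures from Table 2 (invoice automation agent):
summary = roi_summary(
    monthly_costs={"model_api": 1500, "cloud_storage": 500, "labor_0_5_fte": 6000},
    hours_saved=350,
    hourly_rate=50,
    errors_avoided_value=2000,
)
# Yields a net monthly return of $11,500 and an ROI of 143.75%, matching the table.
```

Standardizing the calculation in one place also prevents each team from inventing its own ROI formula, which is what makes cross-project comparisons by finance leaders credible.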
Part 5: Pillar 5 - Governance & Operations: Scaling with Trust and Efficiency
Navigating Deployment Challenges
As AI projects transition from pilot to scale, enterprises must confront a series of complex operational challenges. The report indicates that even for internal tools, model hallucinations and the need for explainability and trust are significant, non-negotiable hurdles.¹ These issues are particularly acute in highly regulated functions such as Human Resources, Legal, and Finance. An unreliable or inexplicable AI system not only fails to improve efficiency but can introduce new operational and compliance risks.
Building a Governance Framework for Trustworthy AI
To build and maintain trust during scaled deployment, enterprises must adopt a multi-layered governance strategy. Insights from “The Builder’s Playbook” point to several key components:
Human-in-the-Loop (HITL) Oversight: The report confirms that the vast majority of companies use HITL as a critical guardrail to ensure the fairness, safety, and accuracy of AI systems.¹ This should not be viewed as a temporary crutch to be discarded once the technology matures, but as a core design principle for responsible AI deployment, especially when dealing with high-stakes decisions.
Proactive Monitoring and Guardrails: As project scale increases, passive HITL oversight is no longer sufficient to manage risk. Enterprises need to shift towards more proactive monitoring mechanisms. The report highlights that leading teams are adopting specialized guardrail libraries to implement automated safety checks. Concurrently, as products mature, more sophisticated performance monitoring systems are deployed to track data drift, establish real-time feedback loops, and ensure the continued stability of model performance.¹
Compliance and Explainability: Adherence to data privacy regulations like GDPR and CCPA is the cornerstone of enterprise AI governance. To go a step further, build user trust, and meet growing regulatory demands, leading teams have begun providing users with basic insights into how AI models influence final decisions. This transparency is key to driving widespread acceptance and trust in AI within the enterprise.¹
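The layered approach above — automated guardrails first, human review as the fallback — can be sketched as a simple output filter. The PII patterns and checks below are illustrative placeholders, not a complete safety suite; a production deployment would use a dedicated guardrail library.

```python
import re

# Illustrative patterns only -- a real guardrail library covers far more cases.
PII_PATTERNS = {
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def guardrail_check(model_output: str) -> dict:
    """Run automated checks on a model response; anything flagged is
    routed to human-in-the-loop review instead of being served."""
    flags = [name for name, pattern in PII_PATTERNS.items()
             if pattern.search(model_output)]
    if not model_output.strip():
        flags.append("empty_response")
    return {
        "approved": not flags,
        "flags": flags,
        "route_to_human": bool(flags),
    }
```

The design point is that HITL oversight and automated guardrails are complements, not alternatives: the automated layer handles the high-volume routine checks, and the `route_to_human` path reserves scarce reviewer attention for the genuinely ambiguous cases.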
The Engine of Scale: The “Portfolio” Approach to Adoption
A crucial finding in the report provides a clue to solving the persistent problem of low internal AI adoption rates. Many companies, even after providing AI tools to their employees, still face a lack of engagement.³ However, organizations that have achieved high adoption rates (over 50% of employees using the tools) exhibit a common strategy: they have deployed a wide variety of AI use cases across the enterprise, averaging 7.1.¹
This finding has significant strategic implications. It suggests that the best way to drive enterprise-wide AI adoption and cultural change may not be to concentrate resources on building a single, perfect “killer app,” but rather to pursue a portfolio of smaller AI projects across multiple different departments simultaneously. This approach can:
Create More Touchpoints: Allow more employees to encounter and experience AI in their own daily work.
Lower the Adoption Barrier: Smaller, targeted applications that solve specific pain points are easier to understand and accept.
Normalize AI: When AI appears in multiple work contexts, it ceases to be a distant concept and becomes a natural part of the workflow.
Accelerate the Learning Cycle: Multiple parallel projects can gather user feedback more quickly, accelerating the entire organization’s AI learning and iteration speed.
Through this “cast a wide net” approach, enterprises can create an atmosphere where AI is ubiquitous, fostering a virtuous cycle of engagement, feedback, and improvement, ultimately embedding an AI-first culture deep within the entire organization.
Part 6: The Playbook in Practice: Lessons from Vertical AI Pioneers
This section moves from theory to practice, analyzing pioneer companies from ICONIQ’s portfolio to demonstrate how the principles of “The Builder’s Playbook” are successfully executed in the real business world.⁹ These case studies provide a tangible vision for internal enterprise AI teams, revealing what successful AI strategy execution looks like in specialized, high-value domains.
Case Study 1: Tennr - The Integration and ROI Playbook (Healthcare)
Tennr perfectly illustrates how to build a vertical, agentic AI to solve a high-friction workflow common throughout an industry—patient referrals.⁵
Playbook in Practice: Integrate, Don’t Disrupt. A key to Tennr’s success lies in its strategic choice. It did not try to force healthcare organizations to abandon their existing, chaotic but deeply entrenched workflows (like faxes and emails), but instead chose to adapt to and integrate with them. Its AI model can read and understand various unstructured documents, thereby automating the back-end process without changing front-end work habits. This “work with the system, not against it” strategy dramatically lowers the adoption barrier for customers.⁵ This is an invaluable lesson for internal AI teams dealing with numerous legacy systems: the most effective solutions are often those that seamlessly integrate into existing workflows, rather than demanding disruptive changes from users.
Playbook in Practice: Clear, Quantifiable ROI. Tennr’s value proposition is not an abstract “efficiency improvement.” It delivers hard metrics to its customers that could be written directly into our “ROI Dashboard.” For example, at its client NEB Medical, intake processing time dropped by over 60%, enabling the team to triple its daily capacity. Eastern MedTech scaled its document handling eightfold without adding headcount.⁶ This sets the gold standard for how to demonstrate AI’s value to business units: communicate in a quantifiable language that business departments understand, translating technical achievements into tangible business returns.
Case Study 2: Legora - The Expert Augmentation Playbook (Legal)
Legora showcases another powerful paradigm for AI application: not replacement, but augmentation of high-skilled knowledge workers (lawyers).⁷
Playbook in Practice: Platform, Not Point Solution. Legora recognized that the value of legal work comes from the synergy of multiple steps. Therefore, it provides a unified AI platform that integrates seamlessly into the lawyer’s core workflow, for instance, by offering a Microsoft Word plug-in instead of making them switch between multiple isolated tools.⁷ This confirms the importance of platform thinking: true value comes from the systematic enhancement of core workflows, not from providing scattered features.
Playbook in Practice: Deep Collaboration with Users. Legora’s success is largely attributable to its deep collaboration and partnership with its clients (law firms). They co-develop and embed AI features, ensuring the product truly fits the nuanced needs of professional users.¹ This model is analogous to the role of the “AI Product Manager” on an internal AI team, emphasizing that understanding and serving the end-user is a prerequisite for product success.
Playbook in Practice: Achieving Step-Function Productivity Gains. The ROI delivered by Legora represents a fundamental change in work patterns. It reduces tasks that used to take weeks, such as data room review during due diligence, to a matter of hours, without sacrificing accuracy.¹ This reveals the ultimate goal of internal AI: not just to achieve incremental efficiency improvements, but to create step-function leaps in productivity by augmenting the core capabilities of human experts, allowing them to focus on the higher-value, strategic work that only humans can perform.
Part 7: Strategic Synthesis: A Tailored Action Plan for Your Enterprise
Core Insights Revisited
ICONIQ’s “The Builder’s Playbook” offers profound insights for internal enterprise AI implementation. Synthesizing the analysis, the core strategic takeaways are as follows:
Mindset Shift is the Prerequisite: Success hinges on adopting an “internal AI-native” mindset, injecting a culture of agility, experimentation, and rapid iteration into the lifeblood of corporate operations.
Strategic Positioning is the Core: Internal AI success begins with correct strategic positioning. Its tech stack must be built around a “cost-first” principle, and its project selection must focus on solving high-value vertical workflows.
Value Demonstration is Key: Internal AI teams must operate like business units, adopting a “value-center” model to drive adoption, prove value, and secure continuous resource investment through quantified ROI.
Talent Leverage is the Guarantee: Faced with the reality of talent scarcity, enterprises must move beyond mere recruitment and systematically maximize talent leverage by augmenting existing teams, leveraging external tools, and fostering internal upskilling.
A Phased Implementation Roadmap
To translate the above strategies into executable actions, we provide a phased implementation roadmap for enterprise leaders.
Phase 1: Laying the Foundation (0-6 Months)
Action: Appoint a dedicated AI leader and assemble a cross-functional “vanguard team” of AI/ML engineers, data scientists, and an AI product manager.
Action: Use the “Internal AI Use Case Prioritization Matrix” to identify and launch 2-3 pilot projects from the “Quick Wins” quadrant to demonstrate visible results in the short term.
Action: Establish a foundational AI governance framework and data privacy guidelines, select a primary cloud platform partner, and begin creating the first “AI Project ROI Dashboard” template.
Phase 2: Expansion & Evangelism (6-18 Months)
Action: Rigorously track and communicate the ROI results of pilot projects across the company to build internal credibility and secure broader business unit support and resource allocation.
Action: Begin building a multi-model technical architecture that supports rapid switching to proactively manage and optimize growing model inference costs.¹
Action: Launch an internal AI upskilling program and expand the AI application portfolio to 5-7+ use cases to drive broader employee adoption and cultural penetration.¹
Phase 3: Scale & Transformation (18+ Months)
Action: Scale successful AI solutions enterprise-wide, with a strategic focus on building agentic workflows that can reshape core business processes.
Action: Formally establish an internal AI value attribution model to ensure that funding for AI projects remains continuously aligned with business unit goals and returns.
Action: Drive the organization towards the benchmark goal of dedicating 20-30% of engineering resources to AI development and operations, ultimately solidifying AI as a core business capability rather than an ancillary technology project.¹
Conclusion
ICONIQ Capital’s “The Builder’s Playbook” is more than a market report; it is a signal of the times. It marks the moment AI has moved past the dawn of proof-of-concept and into the high noon of value realization. For every enterprise in this new era, the message is clear and urgent: the time for isolated experiments is over. The time to systematically build a durable, efficient internal AI engine capable of driving business value—and the decisive competitive advantage that comes with it—is now.
Cited works
1. The Builder’s Playbook - My AI, https://my.ai.se/resources/6272
2. 2025 State of AI Report: The Builder’s Playbook - ICONIQ Capital, https://www.iconiqcapital.com/growth/reports/2025-state-of-ai
3. ICONIQ: The Builder’s Playbook - 2025 State of AI Report - YouTube, https://www.youtube.com/watch?v=6hHz_ejt4_A
4. Tennr clinches $101M to build out AI that automates patient referral process, https://www.fiercehealthcare.com/health-tech/tennr-clinches-101m-build-out-ai-automates-patient-referral-workflows
5. Revolutionizing Patient Referrals with AI: Our Partnership with Tennr - ICONIQ Capital, https://www.iconiqcapital.com/growth/insights/revolutionizing-patient-referrals-with-ai-our-partnership-with-tennr
6. Powering the Future of Legal with AI: Our Partnership with Legora - ICONIQ Capital, https://www.iconiqcapital.com/growth/insights/powering-the-future-of-legal-with-ai-our-partnership-with-legora
7. Legora Attracts $80 Million Series B Funding as Top Global Law Firms and Legal Teams Rush to Adopt Its Collaborative AI - Business Wire, https://www.businesswire.com/news/home/20250521422959/en/Legora-Attracts-%2480-Million-Series-B-Funding-as-Top-Global-Law-Firms-and-Legal-Teams-Rush-to-Adopt-Its-Collaborative-AI
8. Venture & Growth AI - ICONIQ Capital, https://www.iconiqcapital.com/growth/ai
9. Our $80 million Series B led by ICONIQ and General Catalyst - Legora, https://legora.com/blog/series-b