#2 - Risks and mitigation strategies for AI-provided services
As AI increasingly takes on roles in delivering professional services - from accounting and legal advice to customer support - the potential for technical failures to cause serious financial and legal consequences is growing. Issues like AI hallucinations, biased outputs, or overlooked anomalies are no longer confined to technical domains; they can escalate into real-world liabilities.
Take the example of an accounting firm relying on AI to generate reports. If the AI produces an incorrect financial statement or miscalculates tax obligations, the firm - not the AI itself - bears the responsibility.
The conventional product model - where a tech company sells a tool and leaves the user responsible - may not be sufficient in this context. New tech companies will provide AI agents that do the work end to end. If they are no longer tool providers but service providers, the final liability could rest with the tech company itself rather than with the user of the agent. This means tech startups may no longer be able to do just tech; they may need mixed talent, including specialists in the field, to review the agents' outputs.
A concern is how current workers will fare as AI takes over their workload. Some of them will switch roles, while others will evolve from manually serving a handful of customers to overseeing agents serving thousands.
Mitigating the risks
New insurance categories are emerging as a natural response to this shift. Just as businesses purchase policies for legal and financial risks, new insurance products may become essential to cover potential AI errors. These policies could create a financial safety net that allows companies to deploy AI in service contexts with confidence, and this kind of coverage could become one of the main infrastructure layers that nobody is paying enough attention to.
Regulators are increasingly scrutinizing AI-driven services, with some jurisdictions exploring requirements for AI explainability, human oversight, and accountability measures. The European Union has already adopted the AI Act, a set of rules that AI systems must follow depending on their risk category.
Deploying AI systems, and specifically AI-powered services, is not just a hard technical challenge; it’s also becoming a legal and operational one. Firms must rethink their service models, integrate oversight mechanisms, and explore risk mitigation strategies, all while keeping pace with regulatory expectations.
What are others doing?
A smart move multiple tech entrepreneurs are making is to acquire an existing small firm instead of starting from scratch. This is not new - it has been done for years - but in this AI wave it gives you a greater advantage than before.
If you were a product company, acquiring a small service firm in your niche allowed you to buy the domain knowledge and iterate on the product faster, to later start selling it to others. A similar approach was to start by offering consultancy services and later pivot to building a specific product for the space. Both are different ways to gain domain knowledge fast.
But now, acquiring a small service firm is about scaling its operations with AI agents. The goal is no longer to build a tool for the current employees, but to build something that does the job while they transition into human supervisors. Instead of serving a hundred customers, the firm can serve a thousand while maintaining headcount and growing margins. Having the talent to provide the specific service in-house is also a great way of mitigating the financial risks of agents going wrong.
The existing employees, who have the knowledge, will also be responsible for contributing to the evaluations, feedback loops, and all the mechanisms that ensure the AI's outputs are appropriate - which, taken together, is what gives the AI its guardrails.
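To make that concrete, here is a minimal sketch of what such a guardrail loop could look like. Everything in it is an illustrative assumption - the check rules, the function names (`automated_checks`, `human_review`, `guarded_delivery`), and the escalation logic are hypothetical, not taken from any specific framework. The point is only the shape: every agent output passes cheap automated evals, and anything suspect is routed to a human supervisor before it reaches a customer.

```python
# Minimal sketch of a human-in-the-loop guardrail for agent outputs.
# All names, rules, and thresholds are illustrative assumptions, not a
# specific framework's API.

from dataclasses import dataclass, field


@dataclass
class Review:
    approved: bool
    issues: list[str] = field(default_factory=list)


def automated_checks(output: str) -> Review:
    """Cheap, deterministic evals run on every output (hypothetical rules)."""
    issues = []
    if not output.strip():
        issues.append("empty output")
    if "TODO" in output:
        issues.append("unfinished draft leaked")
    # In practice, domain-specific evals would live here: numeric
    # reconciliation for accounting, citation checks for legal, etc.
    return Review(approved=not issues, issues=issues)


def human_review(output: str, issues: list[str]) -> Review:
    """Placeholder for the supervisor queue; a real system would route
    the case to one of the domain specialists mentioned above."""
    print(f"Escalated to human supervisor, flagged for: {issues}")
    return Review(approved=False, issues=issues)


def guarded_delivery(output: str) -> Review:
    """Auto-approve only outputs that pass every automated check;
    everything else goes to a human before reaching the customer."""
    review = automated_checks(output)
    if review.approved:
        return review
    return human_review(output, review.issues)


if __name__ == "__main__":
    print(guarded_delivery("Q3 revenue reconciled: 1,204,300 EUR"))
    print(guarded_delivery("TODO: verify tax rate"))
```

The design choice worth noting is the asymmetry: automated checks can approve on their own, but only a human can clear something the checks have flagged - which mirrors the supervisor role described above.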

