CalypsoAI raises $23M to stop employees sharing sensitive data with generative AI chatbots
Calypso AI Corp. said today it has raised $23 million in an early-stage financing as it looks to carve out a niche for itself providing security for companies using generative artificial intelligence models such as ChatGPT.
Today’s Series A-1 round was led by Paladin Capital Group and included participation from existing investor Lockheed Martin Ventures, new investors Hakluyt Capital and Expeditions Fund, plus angel investors Auren Hoffman and Anne and Susan Wojcicki. All told, CalypsoAI has now raised $38.2 million in funding since its founding in 2018.
CalypsoAI originally pitched itself as the world’s first AI security firm, with its mission being to help enterprises adopt AI in a way that’s safe and secure, primarily by helping them protect their data from being misused. Clearly, it sees a big opportunity for such a service with the explosion of interest in generative AI, which has captured the imagination of the public and enterprises alike.
Generative AI is the technology that powers AI chatbots such as OpenAI LP’s ChatGPT and Google LLC’s Bard, which can hold humanlike conversations with users. Other generative AI models include OpenAI’s DALL-E 2, which creates photorealistic images from text prompts.
Enterprises see clear benefits in adopting generative AI for internal business workloads. For instance, generative AI models can field customer calls and help employees write marketing materials. But using these models can also be dangerous: there’s a real risk that employees will share confidential corporate data with them, and that this sensitive information will then leak into the public domain.
That’s where the startup’s CalypsoAI Moderator tool is intended to be useful. Launched earlier this year, CalypsoAI Moderator is a generative AI governance tool that monitors, in real time, the use of large language models offered by companies including OpenAI, Google, Cohere Inc. and AI21 Labs Inc., providing full auditability, traceability and attribution for costs, content and user engagement.
It helps prevent data loss by blocking sensitive company information from being shared with public LLMs, while also working to stop malicious attacks from being delivered through generative AI tools. In addition, it can verify all generative AI outputs, helping to increase accuracy, which is particularly important for enterprises operating consumer-facing generative AI models.
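CalypsoAI hasn’t published the implementation details of Moderator, but the general pattern of a prompt-screening gateway is straightforward to illustrate. The minimal Python sketch below is purely hypothetical: the detection rules are placeholder regexes, and `send_to_llm` stands in for whatever SDK call would actually reach a public model. It shows how a request can be scanned, logged and blocked before leaving the enterprise:

```python
import re

# Hypothetical patterns standing in for an enterprise's sensitive-data rules;
# a real deployment would rely on much richer detection than a few regexes.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_codename": re.compile(r"\bPROJECT-[A-Z]{3,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def moderated_completion(prompt: str, send_to_llm) -> str:
    """Scan a prompt and either block it or forward it to the model.

    `send_to_llm` is a placeholder for whatever client call actually
    reaches the public model (an OpenAI, Google, Cohere or AI21 SDK call).
    """
    findings = scan_prompt(prompt)
    if findings:
        # Logging the block provides the auditability the article describes.
        print(f"BLOCKED: prompt matched {findings}")
        return "Request blocked: remove sensitive data and retry."
    return send_to_llm(prompt)

# Usage: the first prompt is blocked, the second is forwarded.
fake_llm = lambda p: f"(model response to: {p!r})"
print(moderated_completion("Summarize API key sk-abc123def456ghi789jkl000", fake_llm))
print(moderated_completion("Draft a product launch email", fake_llm))
```

A production system would swap the regexes for far more sophisticated classifiers, but the flow — scan, log, then block or forward — is the core idea behind this class of tooling.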
Andy Thurai, vice president and principal analyst at Constellation Research Inc., told SiliconANGLE that CalypsoAI’s offering may intrigue some enterprises because user education and general governance policies might not be enough to prevent harmful use of LLMs. Enterprises need to find a way to use LLMs safely, the analyst said, because simply blocking their use could leave them as digital laggards.
According to the analyst, CalypsoAI protects enterprises from potential harm in several ways, chiefly by monitoring what data users share with LLMs in order to stop sensitive information being handed over, and by tracking conversations so they can be audited. “It works by blocking the transaction, and once the confidential information is removed then users can proceed normally,” Thurai explained. “It also supports scanning and validation of returned code to find anything suspicious or malicious that could be used unknowingly by developers. It can flag or remove vulnerable code that might not be detectable by existing antimalware or antivirus tools.”
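CalypsoAI’s actual scanner isn’t public either, but as a rough analogue of the output validation Thurai describes, the hypothetical sketch below walks the abstract syntax tree of model-returned Python code and flags calls that appear on a denylist:

```python
import ast

# Hypothetical denylist of call names that warrant review in generated code;
# a production scanner would combine static analysis with vulnerability feeds.
RISKY_CALLS = {"eval", "exec", "compile", "__import__", "system", "popen"}

def flag_generated_code(source: str) -> list[str]:
    """Flag suspicious calls in model-returned Python code."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return ["code does not parse; reject it or review manually"]
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            # Handle bare calls (eval(...)) and attribute calls (os.system(...)).
            name = getattr(node.func, "id", getattr(node.func, "attr", ""))
            if name in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {name}()")
    return findings

# Usage: flag risky calls in a snippet an LLM might have suggested.
generated = "import os\nresult = eval(user_input)\nos.system(cmd)\n"
for finding in flag_generated_code(generated):
    print(finding)
```

This only catches patterns the scanner already knows about, which is why Thurai frames the value as catching code “that might not be detectable by existing antimalware or antivirus tools” rather than replacing them.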
CalypsoAI also offers something quite unique, Thurai said. “With LLMs, it is hard to verify the source and information provenance, making it hard to trust their responses,” the analyst explained. “CalypsoAI can help verify the information’s provenance, cite the source, and provide a link that can be used.”
CalypsoAI and its backers believe there’s a massive opportunity for these kinds of LLM protections, with data from PitchBook suggesting that the global generative AI market will reach $42.6 billion this year. McKinsey said in June that it expects generative AI to generate $4.4 trillion in economic value globally.
As adoption grows, so will the opportunities for malicious actors to exploit vulnerabilities in generative AI tools. Other risks include employees unknowingly sharing intellectual property with LLMs.
CalypsoAI founder and Chief Executive Neil Serebryany said many organizations might be tempted simply to ban the use of LLMs in order to mitigate these risks. But by doing so, he said they’ll become “digital laggards” compared with their competitors. “By adopting CalypsoAI’s solutions, every enterprise or government organization should be able to enable the benefits AI solutions deliver while having confidence that they are trusted, resilient and secure,” he said.
Paladin Capital Group Managing Director Mourad Yesayan said AISec technology will become critical for enterprises as they accelerate their use of generative AI. “CalypsoAI has been leading the charge since the beginning,” he said. “The company’s unique mix of cybersecurity and AI talent, combined with its battle-tested technology, makes it a clear driver of the AI revolution.”
With the money from today’s round, CalypsoAI said, it will look to accelerate development of its LLM security offerings, hire more talent and invest in its go-to-market teams.
Image: rawpixel/Freepik