Tumeryk, the AI security company, announced the launch of its AI Trust Score™, the industry’s first scientifically based metric designed to quantify the risks associated with generative AI systems. This groundbreaking tool enables CISOs to ensure their AI deployments are secure, compliant, and trustworthy, and offers developers solutions for addressing issues in their AI applications.


The introduction of the AI Trust Score™ comes at a critical time: the U.S. AI Safety Institute (AISI) faces significant budget cuts that could impede the development of essential AI safety standards and regulations, underscoring the urgent need for private-sector solutions like Tumeryk’s. The AI Trust Score™ aligns with NIST guidance and the OWASP LLM Top 10, evaluating AI systems against nine critical factors: Prompt Injection, Hallucinations, Insecure Output Handling, Security, Toxicity, Sensitive Information Disclosure, Supply Chain Vulnerability, Psychological Safety, and Fairness.

By assessing these dimensions, Tumeryk provides a comprehensive trustworthiness score ranging from 0 to 1000, with higher scores indicating greater trust. This allows organizations to identify and mitigate potential risks in their AI applications effectively.
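
Tumeryk has not published the formula behind the AI Trust Score™, so the following is only a minimal, hypothetical sketch of how nine per-factor assessments might be combined into a single 0–1000 score. The factor names, weights, and aggregation method are illustrative assumptions, not the vendor’s actual methodology.

```python
# Illustrative sketch only: Tumeryk has not disclosed its scoring formula.
# Assumption: each of the nine factors is assessed on a 0-1000 scale and
# combined via a weighted average into one overall 0-1000 trust score.

FACTORS = [
    "prompt_injection",
    "hallucinations",
    "insecure_output_handling",
    "security",
    "toxicity",
    "sensitive_information_disclosure",
    "supply_chain_vulnerability",
    "psychological_safety",
    "fairness",
]

def trust_score(factor_scores: dict[str, float],
                weights: dict[str, float] | None = None) -> int:
    """Combine per-factor scores (0-1000 each) into a single 0-1000 score."""
    weights = weights or {f: 1.0 for f in FACTORS}   # equal weights by default
    total_weight = sum(weights[f] for f in FACTORS)
    weighted_sum = sum(weights[f] * factor_scores[f] for f in FACTORS)
    return round(weighted_sum / total_weight)        # higher = more trustworthy

# Example: a system strong on disclosure handling but weaker on hallucinations.
example = {f: 800 for f in FACTORS}
example["sensitive_information_disclosure"] = 910
example["hallucinations"] = 640
print(trust_score(example))  # 794
```

In practice, a risk-weighted scheme (e.g., weighting Prompt Injection or Sensitive Information Disclosure more heavily for a given deployment) would be a natural extension, but any such weighting here is purely assumed.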

Recent assessments using the AI Trust Score™ have revealed that certain Chinese AI models, such as those from DeepSeek and Alibaba, exhibit higher safety and compliance standards than previously reported. Notably, DeepSeek operates on U.S.-based platforms like NVIDIA and SambaNova, supporting data security and adherence to international regulations. These findings challenge prevailing perceptions and underscore the importance of objective, data-driven evaluations in the AI industry. For example, in the Sensitive Information Disclosure category, DeepSeek NIM on NVIDIA scored 910, compared with 687 for Anthropic Claude 3.5 Sonnet and 557 for Meta Llama 3.1 405B.

“For Chief Information Security Officers and security professionals, Tumeryk offers the AI Trust Manager, a robust platform for monitoring and remediating AI applications. This tool provides real-time insights into AI system performance, identifies vulnerabilities, and recommends actionable steps to enhance security and compliance,” said Rohit Valia, Tumeryk CEO. “By integrating the AI Trust Manager, organizations can proactively manage risks and ensure their AI deployments align with regulatory standards and ethical guidelines.”

“The availability of Tumeryk’s AI Trust Score™ on AWS GovCloud significantly streamlines access for our public sector clients,” said Keith Mortier, Chief Information Security Officer at Redport Information Assurance. “Redport Information Assurance is proud to partner with Tumeryk to deliver this critical AI security capability, enabling government agencies to confidently and responsibly innovate with AI.”



Source – PRWeb