DeepKeep is Ensuring AI Systems of Today and Tomorrow are Safe, Trusted and Secure


The AI explosion is happening faster and with greater ferocity than any of us could have imagined.


Fueled by the mass acceptance and popularity of tools such as OpenAI’s GPT-3 and Midjourney, AI has become part of everyday conversation. Every individual and every business across every industry is now looking at how AI can be leveraged, while at the same time flagging very real concerns about the threats it potentially represents.


In an open letter recently published by the Future of Life Institute, more than 27,000 signatories, including Tesla and SpaceX founder Elon Musk and Apple co-founder Steve Wozniak, called for all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4, so that development could be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

The letter also called for AI developers to work with policymakers to dramatically accelerate the development of robust AI governance systems.


This is because AI models, as accurate and efficient as they might be, expose businesses to new types of threats. Recent attack vectors exploit inherent vulnerabilities in AI models, causing them to yield skewed results. These adversarial AI attacks hamper the deployment of powerful AI models into production environments.
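To make the threat concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the best-known adversarial attacks of the kind described above. The model, inputs and epsilon value are placeholders for illustration; nothing here is taken from DeepKeep's tooling.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    # Perturb input x just enough to skew the model's prediction,
    # while keeping the change imperceptibly small (bounded by epsilon).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that maximally increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixel values in a valid range

A perturbation this small is typically invisible to a human reviewer, which is exactly why such attacks are hard to catch without dedicated defenses.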


“We are navigating uncharted waters. AI advancement is racing forward at a very quick pace. It’s like the gold rush. Sure, there’s a treasure out there, but you can also get devoured by a bear if you take the wrong turn,” says Guy Sheena, Chief Business Officer (CBO) for DeepKeep.


Headquartered in Israel and funded by AWZ Ventures, DeepKeep offers an enterprise software platform, ML Protect, that allows data scientists, MLOps engineers, IT teams, and CISOs to protect the AI models they are building from cyberattacks. By implementing defence methods such as AI firewalls, detectors, and pre-processing and post-processing algorithms, ML Protect gives large enterprises a higher level of explainability, enhances robustness, and hardens models against attack, while providing IT teams with full protection and visibility into models running in production.
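As a rough illustration of what pre- and post-processing defences can look like in practice, the following hypothetical wrapper denoises inputs before inference and flags low-confidence outputs afterwards. The class name, quantisation step and confidence threshold are invented for this sketch and are not ML Protect's actual API.

import torch

class DefendedModel(torch.nn.Module):
    def __init__(self, model, confidence_floor=0.5):
        super().__init__()
        self.model = model
        self.confidence_floor = confidence_floor

    def forward(self, x):
        # Pre-processing: quantise inputs to a coarser grid to squash
        # the high-frequency noise many adversarial perturbations rely on.
        x = torch.round(x * 32) / 32
        logits = self.model(x)
        # Post-processing: flag low-confidence predictions for review
        # rather than silently returning a possibly skewed result.
        confidence = torch.softmax(logits, dim=-1).max(dim=-1).values
        suspicious = confidence < self.confidence_floor
        return logits, suspicious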


The DeepKeep product is divided into four core pillars, explains Guy. The first centres around risk assessment and penetration testing: it assesses the performance of the model and the data sets it uses to identify weak spots and biases related to trust and security. The second pillar focuses on detection and protection, providing an AI shielding and detection suite that actively safeguards model pipelines. The third pillar of ML Protect is a monitoring infrastructure that keeps a watchful eye on models operating in production environments, looking for attacks, biases and other threats that undermine trust. And finally, the fourth pillar of ML Protect focuses on mitigation, instructing IT organizations on what to do if an issue arises: how to fix the problem and prevent it from happening again.
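As a hypothetical sketch of the monitoring idea behind the third pillar, the snippet below tracks the mix of predictions a production model emits and raises an alert when it drifts from a baseline. The class, baseline format and threshold are invented for illustration and do not come from ML Protect.

from collections import Counter

class PredictionMonitor:
    def __init__(self, baseline, alert_threshold=0.2):
        self.baseline = baseline  # expected class frequencies, e.g. {"approve": 0.7, "reject": 0.3}
        self.counts = Counter()
        self.alert_threshold = alert_threshold

    def record(self, predicted_class):
        self.counts[predicted_class] += 1

    def drifted(self):
        total = sum(self.counts.values())
        if total == 0:
            return False
        # Total variation distance between the observed prediction mix
        # and the baseline; a large gap can signal an attack or data drift.
        classes = set(self.counts) | set(self.baseline)
        tvd = 0.5 * sum(abs(self.counts[c] / total - self.baseline.get(c, 0.0)) for c in classes)
        return tvd > self.alert_threshold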


The company's go-to-market strategy has two prongs. Commercially, the business focuses on large corporations and entities operating in the defence/homeland security, automotive, fintech and communications sectors, and has amassed customers in Israel, Japan and Europe.


Guy points to a couple of interesting use cases from the automotive and insurance sectors where the ML Protect platform plays an important defensive role. “Insurance damage claims fraud accounts for hundreds of millions of dollars in revenue loss for insurers every year. DeepKeep’s Car Damage Protection suite allows insurance companies to significantly reduce their fraud losses and, in turn, increase their net revenues.

Furthermore, autonomous driving deployment depends directly on the security and accuracy of the AI computer vision models governing the behaviour of vehicles and their surroundings. ML Protect allows insurance companies to avoid attacks on insurance claims related to accidents involving autonomous vehicles, which again boosts their credibility and increases their revenues.”


With the deployment of AI models happening at breakneck speed and with much of AI development taking place in the open-source world, DeepKeep’s founding team was also passionate about engaging the open-source community in advancing AI governance and trust.


“AI regulation is also progressing at a rapid pace to ensure AI is employed for good and with the best of intent,” says Guy. “You have the AI Act in Europe and the AI Bill of Rights in the United States, as well as regulations in other countries including China, Japan, Brazil and Canada. We need governance to protect end users from the problems AI can create, and because of the release of all the generative AI technologies, we believe regulation will progress very quickly. But as technologists all know, regulation can also hinder innovation.

So our ambition at DeepKeep is to be the platform that helps companies building AI models comply with regulation. Our system will test, validate and continuously monitor the model, certifying that it operates in a trustworthy way.”


DeepKeep joined the 5G Open Innovation Lab (5G OI Lab) in the Fall of 2022 as part of the Lab’s latest batch of startup companies. The relationship with the Lab and its team is opening doors to joint initiatives with 5G OI Lab corporate partners, including Amdocs and Intel, who are incorporating AI into their tech layers.


“Our work is just beginning with companies who are incorporating AI elements into their 5G pipeline. From signal-based protection to AI-enabled data processing at the Edge, to 5G-enabled private networks, where AI is involved, there are layers upon layers of security required. Having 5G OI Lab at our side will accelerate our ability to reach those markets and solve those challenges more quickly,” says Guy.

Posted June 13, 2023
