TechCrunch, May 21, 2024
Generative AI makes stuff up. It can be biased. Sometimes it spits out toxic text. So can it be “safe”?
Rick Caccia, the CEO of WitnessAI, believes it can.
“Securing AI models is a real problem, and it’s one that’s especially shiny for AI researchers, but it’s different from securing use,” Caccia, formerly SVP of marketing at Palo Alto Networks, told TechCrunch in an interview. “I think of it like a sports car: having a more powerful engine — i.e., model — doesn’t buy you anything unless you have good brakes and steering, too. The controls are just as important for fast driving as the engine.”
There’s certainly demand for such controls among enterprises, which — while cautiously optimistic about generative AI’s productivity-boosting potential — have concerns about the tech’s limitations.
Fifty-one percent of CEOs are hiring for generative AI-related roles that didn’t exist until this year, according to an IBM poll. Yet only 9% of companies say they’re prepared to manage threats — including threats to privacy and intellectual property — arising from their use of generative AI, per a Riskonnect survey.
Read the full article in TechCrunch.