What Kore.ai does differently, Koneru asserts, is offer greater flexibility in where companies can deploy their AI apps — in the cloud, locally or in virtual machines — and in the degree to which they can fine-tune those apps. For certain applications (e.g. text summarization, question answering, topic discovery and sentiment analysis), Koneru makes the case that fine-tuned models — Kore.ai’s specialty — are both superior to and more cost-effective than the larger, more powerful models available from vendors like Anthropic and OpenAI.
There’s a privacy argument to be made, too, for smaller, offline models.
A 2023 Predibase survey found that more than 75% of enterprises don’t plan on using commercial, cloud-hosted LLMs in production over fears that the models will compromise sensitive information. In a separate poll from GenAI platform Portal26 and data research firm CensusWide, 85% of businesses said they’re concerned about GenAI’s privacy and security risks.
“Over the past 18 months, we’ve observed that fine-tuned models are very effective compared to pre-trained models for specific enterprise use cases,” Koneru said. “Compared to a large pre-trained model, it takes less than 2% of the enterprise data to train and create a fine-tuned model that companies can deploy safely for enterprise use cases. We’ve successfully built smaller enterprise LLMs that provide higher efficiency, better accuracy, the ability to control responses and — most importantly — reduce latency and cost.”