
AI is finding use cases across everyday operations such as HR, customer management, knowledge management, research, sales and marketing, and many other functions. There is now broad acceptance that AI brings significant advantages, especially in efficiency and speed. However, for many applications these AI tools remain expensive and deliver limited accuracy. This is because most businesses are using general-purpose AI models, which often miss the mark for the following reasons:

  1. High opex – the cost of training, retraining, and running a generic model is steep
  2. Accuracy – retrofitting domain logic onto a generic model does not deliver the desired results
  3. Data security – concerns around data loom large, especially in finance, healthcare, and government
  4. Limited precision – limited reasoning and degraded accuracy are disastrous for mission-critical tasks
  5. Compliance – inability to adhere to guidelines and to industry-specific laws and regulations, especially where those regulations are still evolving

Large language models (LLMs) have billions (sometimes trillions) of parameters and are trained on generic internet data. They require massive infrastructure, consume enormous amounts of energy, and still produce limited results. Hallucinations, irrelevant responses, and sky-high compute costs are common. A simple query can activate the entire model, which is like firing a missile at a paper target: it burns tokens, energy, and time, yet doesn’t guarantee accuracy.

Can the much-hyped generic AIs truly address the real challenges faced by industry? We believe they offer only limited value and fall far short of the magic many expect. Even OpenAI’s CEO remarked at an MIT event that “the era of giant AI models is over.”

Domain-specific LLMs address many of these issues. They are purpose-built models for industry sectors, horizontal functions, or geographies, trained from the ground up on a particular subject. They don’t just “speak the language”; they understand the logic, constraints, jargon, and context of that subject. They are leaner, require less compute, and can be deployed with lower latency. Unlike generic AI, they understand domain logic, workflows, and compliance norms, making them ideal for sectors such as finance, healthcare, insurance, legal, and government. They are also region-aware, easily adapted to local languages and cultural nuances, and can be combined with retrieval-augmented generation (RAG) and fine-tuning for faster deployment and continuous learning. As proprietary data accumulates, the model improves, reducing dependence on external sources.
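To make the RAG idea concrete, the sketch below shows the basic loop in Python: retrieve the most relevant passages from a proprietary domain corpus, then prompt the model with only that context. It is a minimal illustration under assumptions, not a production pipeline: the embed() and generate() functions are placeholders for whichever embedding model and domain-tuned LLM a deployment actually uses, and the sample documents are invented.

    # Minimal sketch of retrieval-augmented generation (RAG) over a domain corpus.
    # embed() and generate() are stand-ins, not a specific vendor API.
    from dataclasses import dataclass

    @dataclass
    class Document:
        doc_id: str
        text: str

    def embed(text: str) -> set[str]:
        # Placeholder "embedding": a bag of lowercased tokens.
        # A real system would call a sentence-embedding model here.
        return set(text.lower().split())

    def retrieve(query: str, corpus: list[Document], k: int = 3) -> list[Document]:
        # Rank documents by token overlap with the query (stand-in for vector similarity).
        q = embed(query)
        ranked = sorted(corpus, key=lambda d: len(q & embed(d.text)), reverse=True)
        return ranked[:k]

    def generate(prompt: str) -> str:
        # Stand-in for a call to a domain-specific LLM.
        return f"[domain LLM would answer here, grounded in]\n{prompt}"

    def answer(query: str, corpus: list[Document]) -> str:
        # Build a grounding prompt from the retrieved passages and pass it to the model.
        context = "\n".join(f"- {d.text}" for d in retrieve(query, corpus))
        prompt = f"Use only the context below to answer.\nContext:\n{context}\nQuestion: {query}"
        return generate(prompt)

    if __name__ == "__main__":
        corpus = [
            Document("pol-1", "Claims above $10,000 require a second adjuster review."),
            Document("pol-2", "Standard claims are settled within 14 business days."),
        ]
        print(answer("How long does a standard claim take to settle?", corpus))

In a real deployment, the token-overlap ranking would be replaced by vector similarity over stored embeddings, and the grounding prompt would carry the compliance and formatting instructions the domain demands.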

With higher precision, better security, and higher ROI, domain-specific LLMs are expected to be the next leap in enterprise AI. Just as the 1990s saw an explosion of specialised web-based services built on the backbone of the internet, domain-specific AI is poised to power the next wave of enterprise applications.

Domain-specific LLMs are already being built or used in several areas, although many aren’t accessible to individuals. Here are a few examples:

  • Medical LLM – e.g. Med-PaLM, trained on a vast dataset of medical literature. It can accurately answer medical questions and provide insights into diseases, treatments, and procedures. The model can assist clinicians by analyzing patient data for potential diagnoses or treatment plans and can also analyze medical images. It currently achieves over 86% accuracy on medical exam benchmark questions.
  • Finance LLM – e.g. BloombergGPT, which can handle a wide range of tasks within the financial industry. With 50 billion parameters trained on over 700 billion tokens, the model achieves state-of-the-art results on various financial tasks.
  • Legal LLM – e.g. Legal-BERT, a family of models trained on large datasets of legal documents, including legislation, court cases, and contracts, enabling them to capture the unique characteristics of legal language.

And there are many other areas where domain-specific LLMs are attracting investment and will be in great demand:

  • Customer support, for enhanced customer understanding, automated but personalised interactions, multilingual support
  • HR, for resume and candidate screening, interview scheduling, training and skill-gap assessment, detecting red flags, etc.
  • Fraud detection, for real-time identification of fraudulent transactions, phishing attempts, deviations from typical behaviour, etc.

As the limitations of general-purpose AI become increasingly clear, the shift toward domain-specific LLMs is becoming a necessity. At AGR, we are not just observing this shift but enabling it. We are collaborating closely with a leading LLM provider, bringing domain and functional expertise to help make the models more specialized and industry-relevant.


About the author

Rishikesh Deshpande, CEO of AGR Knowledge Services, has over 27 years of experience in business strategy, cross-border expansion, and operational management. He has successfully driven sustainable growth and transformed organizations, focusing on process optimization and long-term customer relationships. Rishikesh has also represented India on international platforms, promoting global business collaboration.
