Hybrid Transformer Architecture
We deploy a multi-layered ensemble that combines fine-tuned encoder-only Transformers (RoBERTa/DeBERTa) for discriminative classification with Large Language Models (LLMs) for zero-shot intent extraction. Because the LLM's zero-shot predictions require no task-specific labeled examples, this hybrid topology mitigates the 'cold-start' problem, sustaining high accuracy even when the initial training dataset is sparse.
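The sketch below shows one plausible wiring of such a hybrid using the Hugging Face `pipeline` API. The text does not specify how the ensemble combines its two components, so the confidence-threshold router here is our assumption; the model checkpoints, candidate intent labels, and threshold value are likewise illustrative placeholders, not the actual configuration.

```python
from transformers import pipeline

# Hypothetical routing sketch: a fine-tuned encoder-only classifier handles
# inputs it is confident about; low-confidence ("cold-start") cases fall back
# to a zero-shot model. Checkpoints, labels, and threshold are assumptions.

# Stand-in for a fine-tuned RoBERTa/DeBERTa checkpoint; in practice this
# would point at a model fine-tuned on the task's intent labels.
clf = pipeline("text-classification", model="roberta-base")

# Zero-shot fallback; facebook/bart-large-mnli is a common NLI backbone
# for zero-shot classification (used here as an illustrative choice).
zero_shot = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

CANDIDATE_INTENTS = ["refund_request", "technical_support", "account_query"]
CONFIDENCE_THRESHOLD = 0.85  # assumed routing threshold


def classify_intent(text: str) -> dict:
    """Route through the fine-tuned encoder first; defer to zero-shot otherwise."""
    primary = clf(text)[0]
    if primary["score"] >= CONFIDENCE_THRESHOLD:
        return {"intent": primary["label"], "source": "fine-tuned"}
    # Sparse-data path: the discriminative model is unsure, so rank the
    # candidate intents with the zero-shot model's NLI-style scoring.
    fallback = zero_shot(text, candidate_labels=CANDIDATE_INTENTS)
    return {"intent": fallback["labels"][0], "source": "zero-shot"}


print(classify_intent("I was charged twice for my subscription."))
```

Threshold-based routing is only one ensembling strategy; weighted score fusion or using the LLM's intent as an additional feature for the classifier would realize the same hybrid idea with different trade-offs.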