I’ve upgraded my application by integrating a transformer-based model for intent classification, moving beyond the basic, rule-based system I used initially. Instead of relying on simple keyword checks, my app now calls DistilBERT, a compact, distilled version of BERT that picks up on more nuanced language patterns. This change makes my pipeline more capable and better prepared for future improvements, such as fine-tuning on my own labeled dataset for domain-specific accuracy.
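To make this concrete, here is a minimal sketch of how such a classifier can be wired up with the Hugging Face `transformers` pipeline API; the checkpoint name and the `classify_intent` helper are illustrative placeholders rather than my exact code, and a base DistilBERT checkpoint like this would still need fine-tuning before its labels map to real intents.

```python
from transformers import pipeline

# A DistilBERT checkpoint fine-tuned on my intent labels would go here;
# "distilbert-base-uncased" is only a stand-in and will emit generic
# labels (LABEL_0, LABEL_1) until it has been fine-tuned.
intent_classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased",
    device=-1,  # -1 pins the pipeline to the CPU
)

def classify_intent(text: str) -> str:
    """Return the top-scoring intent label for a user message."""
    prediction = intent_classifier(text, truncation=True)[0]
    return prediction["label"]
```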

In addition, I’ve tackled the stability and resource issues I ran into before by using a smaller model and explicitly pinning it to the CPU, which reduces the risk of crashes or silent failures. I’ve also kept my spaCy-based entity extraction and GPT‑4 integration for generating insights, so the app still returns refined prompts and thorough AI responses. Overall, my setup now feels more robust, extensible, and aligned with best practices in modern NLP.
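The entity-extraction step is unchanged; here is a minimal sketch of it, assuming spaCy's small English pipeline is installed (the `extract_entities` helper name is illustrative).

```python
import spacy

# Assumes the small English pipeline has been installed with
# `python -m spacy download en_core_web_sm`.
nlp = spacy.load("en_core_web_sm")

def extract_entities(text: str) -> list[tuple[str, str]]:
    """Return (entity text, entity label) pairs spaCy finds in the message."""
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents]
```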
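Finally, a sketch of how the detected intent and entities might feed a refined prompt into GPT‑4, assuming the current `openai` Python client; the prompt wording and the `generate_insight` helper are illustrative rather than the exact code in my app.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_insight(message: str, intent: str,
                     entities: list[tuple[str, str]]) -> str:
    """Build a refined prompt from the classifier output and ask GPT-4."""
    refined_prompt = (
        f"User message: {message}\n"
        f"Detected intent: {intent}\n"
        f"Detected entities: {entities}\n"
        "Using this context, give a concise, actionable insight."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": refined_prompt}],
    )
    return response.choices[0].message.content
```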