Plutonic Rainbows

Prompt Refiner Updates

I significantly refined my application’s interface and user experience today: I introduced Montserrat as the main font, aligned the two columns so the refined prompt and AI insights start at the same height, enlarged the text areas for more comfortable typing, and added a loading spinner that appears whenever a request is processing. I also added a subtle highlight animation for updated content, giving the whole workflow a smoother, more polished feel.

Prompt Refiner

I’ve upgraded my application by integrating a transformer-based model for intent classification, which moves beyond the basic, rule-based system I used initially. Now, instead of relying on simple keyword checks, my app calls a smaller, efficient DistilBERT model that can pick up on more nuanced language patterns. This change makes my pipeline more sophisticated and better prepared for future improvements, such as fine-tuning on my own dataset to achieve domain-specific accuracy.
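A minimal sketch of how that kind of classifier can be wired up with the Hugging Face pipeline API, assuming a DistilBERT checkpoint fine-tuned on NLI so it can be used zero-shot; the model name and candidate intent labels below are illustrative placeholders, not necessarily what my app uses:

```python
# Sketch: zero-shot intent classification with a small DistilBERT model.
# The checkpoint name and candidate labels are illustrative placeholders.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="typeform/distilbert-base-uncased-mnli",  # a DistilBERT NLI checkpoint
    device=-1,  # run on CPU
)

def classify_intent(prompt: str) -> str:
    labels = ["code generation", "summarisation", "creative writing", "question answering"]
    result = classifier(prompt, candidate_labels=labels)
    # The pipeline returns labels sorted by score, highest first.
    return result["labels"][0]

print(classify_intent("Write a Python script that renames files in bulk."))
```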

In addition, I’ve tackled the stability and resource issues I faced before by using a smaller model and explicitly setting it to run on the CPU. This reduces the risk of crashes or silent failures. I’ve also maintained my spaCy-based entity extraction and GPT‑4 integration for generating insights, so my app still returns refined prompts and thorough AI responses. Overall, I feel that my setup is now more robust, extensible, and aligned with best practices in modern NLP.
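For the rest of the pipeline, here is a rough sketch of what the entity extraction and insight steps might look like, assuming spaCy’s small English model and the official OpenAI Python client; the function names and prompt wording are illustrative rather than my exact code:

```python
# Sketch: spaCy entity extraction plus a GPT-4 call for insights.
# Assumes `en_core_web_sm` is installed and OPENAI_API_KEY is set in the environment.
import spacy
from openai import OpenAI

nlp = spacy.load("en_core_web_sm")
client = OpenAI()

def extract_entities(prompt: str) -> list[tuple[str, str]]:
    # Return (text, label) pairs for every named entity spaCy finds.
    doc = nlp(prompt)
    return [(ent.text, ent.label_) for ent in doc.ents]

def generate_insights(refined_prompt: str) -> str:
    # Ask GPT-4 for analysis of the refined prompt.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You analyse prompts and suggest improvements."},
            {"role": "user", "content": refined_prompt},
        ],
    )
    return response.choices[0].message.content
```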

GPT-4.5 Preview

Now available for Pro users, with Plus users gaining access next week. I tested it through the API — it’s impressive but significantly more expensive than other models. Hopefully, the cost will decrease soon.
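For reference, this is roughly how I called it, assuming the preview model id is gpt-4.5-preview; check the models list on your own account for the exact name:

```python
# Sketch: a quick GPT-4.5 test call via the OpenAI Python client.
# The model id "gpt-4.5-preview" is an assumption; confirm it against your account's model list.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4.5-preview",
    messages=[{"role": "user", "content": "Give me a one-line summary of what you can do."}],
)
print(response.choices[0].message.content)
```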

New Updates

  • Added wan-i2v templates for file upload and video generation.

  • OpenAI have added ten Deep Research requests a month for Plus users.

  • Started building the Prompt Refiner application.

  • Built an application launcher for Flux.1 [Dev] templates.

Veo2 Updates

I discovered that my app was failing to display the generated video because I was incorrectly extracting the video URL from the Fal.ai API response. Initially, my code assumed the video data was inside a property called data (i.e., final_obj.data), but in reality, the final result was returned directly as a plain dictionary in final_obj with the structure {"video": {"url": "..."}}. Once I logged the final API response, I realised I needed to use final_obj directly to extract the video URL. This change fixed the issue, and now the correct URL is passed to the template, allowing the video to display as intended.
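In practice the fix came down to a single line, sketched below with a hypothetical result object standing in for the real API response; the broken version is kept as a comment for contrast:

```python
# Sketch: extracting the video URL from the Fal.ai result.
# `final_obj` is the final response; in my case it was a plain dict shaped like
# {"video": {"url": "..."}} rather than being wrapped in a `data` property.
final_obj = {"video": {"url": "https://example.com/generated.mp4"}}  # illustrative value

# Broken: assumed the payload lived under a `data` attribute.
# video_url = final_obj.data["video"]["url"]  # AttributeError: dict has no attribute 'data'

# Fixed: read the URL straight from the dictionary.
video_url = final_obj["video"]["url"]
print(video_url)
```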