Today, I integrated several new features into my prompt evaluation app, with the aim of making it more data-driven and personalised. I expanded the rating scale from a simple binary choice to a 1-to-10 scale, allowing for more nuanced user feedback. I also updated the prompt generation process to keep prompt text and seed words separate, so the descriptive content is richer and more varied. Finally, I incorporated a trained machine learning model into the prompt selection function: it weights candidate one-liner prompts by their historical user ratings, and gracefully falls back to random selection when no trained model is available.
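
As a rough sketch of that selection logic (the `model` wrapper and its `predict_rating` method here are stand-ins for my actual code, not a definitive implementation):

```python
import random

def select_prompt(candidates, model=None):
    """Pick a one-liner prompt, weighting by predicted rating when a model is available."""
    # Graceful fallback: with no trained model loaded, choose uniformly at random.
    if model is None:
        return random.choice(candidates)
    # Score each candidate; the model maps a prompt to an expected 1-10 rating
    # learned from historical user feedback.
    scores = [model.predict_rating(text) for text in candidates]
    # Shift the scores so every weight is strictly positive before sampling.
    min_score = min(scores)
    weights = [s - min_score + 1e-6 for s in scores]
    return random.choices(candidates, weights=weights, k=1)[0]
```

Sampling by weight, rather than always taking the top-scored prompt, keeps some variety in what users see while still favouring prompts that have rated well.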

I further enhanced the admin dashboard to provide a clear visual representation of prompt performance. Using Chart.js, I set up a bar chart that displays the average rating and count for each prompt description, and I implemented a feature to highlight the top three performing prompts with distinct colours. These updates, along with maintaining user authentication and basic administrative routes, have made my application more sophisticated and have paved the way for future improvements in personalisation and data analysis.
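
The chart itself is rendered by Chart.js on the front end; behind it, a small endpoint aggregates the numbers it plots. A minimal sketch, assuming a Flask app over SQLite (the route path, database file, and table/column names are illustrative, not the exact ones in my project):

```python
from flask import Flask, jsonify
import sqlite3

app = Flask(__name__)

@app.route("/admin/prompt-stats")
def prompt_stats():
    """Return per-prompt average rating and rating count for the dashboard chart."""
    conn = sqlite3.connect("app.db")
    rows = conn.execute(
        """
        SELECT p.description, AVG(r.rating) AS avg_rating, COUNT(r.id) AS n
        FROM prompts p JOIN ratings r ON r.prompt_id = p.id
        GROUP BY p.id
        ORDER BY avg_rating DESC
        """
    ).fetchall()
    conn.close()
    # Results are sorted by average rating, so the first three rows are the
    # top performers that the front end highlights with distinct colours.
    return jsonify([
        {"description": d, "avg_rating": round(a, 2), "count": n}
        for d, a, n in rows
    ])
```

On the page, Chart.js reads this JSON into the bar chart's labels and data arrays, and the first three entries get their own `backgroundColor` values so the top three prompts stand out.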

Later in the day, I added a comments section for users; these comments will also be saved to the SQL database.
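
A minimal sketch of how that persistence might look, again assuming SQLite (the `comments` schema below is my working guess, not the final table definition):

```python
import sqlite3
from datetime import datetime, timezone

def save_comment(db_path, user_id, prompt_id, body):
    """Persist a user comment alongside the existing ratings data."""
    conn = sqlite3.connect(db_path)
    # Create the table on first use; assumed columns, linking each comment
    # to the commenting user and the prompt it refers to.
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS comments (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            user_id INTEGER NOT NULL,
            prompt_id INTEGER NOT NULL,
            body TEXT NOT NULL,
            created_at TEXT NOT NULL
        )
        """
    )
    conn.execute(
        "INSERT INTO comments (user_id, prompt_id, body, created_at) VALUES (?, ?, ?, ?)",
        (user_id, prompt_id, body, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()
    conn.close()
```

Storing comments in the same database as the ratings should make it straightforward to join the two later, for example to see whether highly rated prompts also attract more discussion.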