I was having trouble with the suggestions list persisting on screen even after I selected a suggested prompt, and the text area kept losing focus. The root cause: the browser moves focus on mousedown, which fires before click, so by the time my onClick handler ran the text area had already blurred. Switching from onClick to onMouseDown with e.preventDefault() stops the text area from losing focus when I interact with the suggestions. Then, with a small setTimeout to refocus the text area, the suggestions list disappears as soon as I choose an option and my cursor stays in place so I can keep typing.
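Here's a minimal, framework-agnostic sketch of that pattern. The names (`makeSuggestionHandlers`, `applySuggestion`, `textAreaRef`) are illustrative, not my actual code:

```javascript
// Sketch of the focus-preserving suggestion handler described above.
// All names here are illustrative placeholders.
function makeSuggestionHandlers(applySuggestion, textAreaRef) {
  return {
    // mousedown fires before the browser moves focus, so preventDefault()
    // keeps the text area focused while the suggestion is applied.
    onMouseDown(e) {
      e.preventDefault();
      applySuggestion();
      // Refocus on the next tick, after the suggestions list has closed.
      setTimeout(() => {
        if (textAreaRef.current) textAreaRef.current.focus();
      }, 0);
    },
  };
}
```

In React this gets wired onto each suggestion item as `onMouseDown={handlers.onMouseDown}` in place of the old `onClick`.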

I’ve now built a solid framework for reinforcement learning from human feedback (RLHF).

  • Feedback Collection: I set up a FastAPI backend with endpoints for submitting feedback, refining prompts, and generating insights. Users submit feedback through these endpoints, and each submission is stored in a SQLite database.

  • Data Management: I integrated SQLAlchemy to handle my SQLite database. The system automatically creates a new feedback.db if one doesn’t exist, giving me a clean slate when needed.

  • Training Simulation: I created a script (rlhf_training.py) that retrieves the feedback data, processes it in a dummy training loop, and saves a model checkpoint. This simulates how I could fine-tune my model using the collected human feedback.

  • Model Setup: I ensured my model is loaded with the correct number of labels (matching my feedback rating scale) and plugs into both the feedback-collection and training processes.

This framework sets the stage for continuous improvement. Now, as I gather more feedback, I can use this data to progressively refine and retrain my model.