This long-awaited model is now rolling out to Plus users. I gained access about two hours after the announcement, which I’m quite pleased about. However, I’m concerned about the rate limit, which is reportedly set at 50 requests per week. I assume this restriction is due to the model’s extremely high operational costs.
Updates
March 05, 2025
I've added a function that displays the number of tokens used per query, separated clearly from the text output. Additionally, I compressed the CSS on the blog for improved performance.
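For reference, the token counts can come straight from the API response's usage object, so they can be returned as a separate field rather than mixed into the generated text. A minimal sketch, assuming an OpenAI-style chat completions client (the function and model names are placeholders, not my actual code):

```python
# Minimal sketch assuming the OpenAI Python client; the model and function
# names are placeholders, not the blog's actual code.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> dict:
    """Return the generated text and the token usage as separate fields."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    usage = response.usage
    return {
        "text": response.choices[0].message.content,
        "tokens": {
            "prompt": usage.prompt_tokens,
            "completion": usage.completion_tokens,
            "total": usage.total_tokens,
        },
    }
```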
Currently testing Tom Ford's Fucking Fabulous Parfum. This is a revitalised edition of the original which launched quite a few years back. Honestly, I really dislike the fragrance's name — I find it controversial simply for controversy's sake.
Sunday Activities
March 02, 2025
I was having trouble with the suggestions list persisting on my screen even after I selected a suggested prompt, and my cursor kept losing focus on the text area. To solve this, I switched from using `onClick` to `onMouseDown` with `e.preventDefault()`, which prevents the text area from losing focus when I interact with the suggestions. Then, by using a small `setTimeout` to refocus on the text area, I ensured that the suggestions list disappears as soon as I choose an option, and my cursor remains in the right place to continue typing.
I’ve now built a solid framework for reinforcement learning from human feedback.
- Feedback Collection: I set up a FastAPI backend with endpoints for submitting feedback, refining prompts, and generating insights. This lets users provide valuable feedback that's stored in a SQLite database (a simplified sketch of this wiring follows the list).
- Data Management: I integrated SQLAlchemy to handle my SQLite database. The system automatically creates a new `feedback.db` if one doesn't exist, giving me a clean slate when needed.
- Training Simulation: I created a script (`rlhf_training.py`) that retrieves the feedback data, processes it in a dummy training loop, and saves a model checkpoint. This simulates how I could fine-tune my model using the collected human feedback (see the second sketch below).
- Model Setup: I ensured my model is loaded with the correct number of labels (to match my feedback ratings) and can seamlessly integrate with both the feedback collection and training processes.
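Roughly, the feedback endpoint and SQLite wiring look like this. It's a simplified illustration rather than the real code: the route, table, and field names are placeholders.

```python
# Sketch only: the route, table, and field names are placeholders.
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

# SQLAlchemy creates feedback.db on first run if it doesn't already exist.
engine = create_engine("sqlite:///feedback.db")
SessionLocal = sessionmaker(bind=engine)
Base = declarative_base()

class Feedback(Base):
    __tablename__ = "feedback"
    id = Column(Integer, primary_key=True, index=True)
    prompt = Column(String, nullable=False)
    refined_prompt = Column(String, nullable=True)
    rating = Column(Integer, nullable=False)  # e.g. a 1-5 human rating

Base.metadata.create_all(bind=engine)

app = FastAPI()

class FeedbackIn(BaseModel):
    prompt: str
    refined_prompt: Optional[str] = None
    rating: int

@app.post("/feedback")
def submit_feedback(item: FeedbackIn):
    # Persist one human rating so the training script can read it back later.
    with SessionLocal() as db:
        row = Feedback(
            prompt=item.prompt,
            refined_prompt=item.refined_prompt,
            rating=item.rating,
        )
        db.add(row)
        db.commit()
        db.refresh(row)
    return {"id": row.id, "status": "stored"}
```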
This framework sets the stage for continuous improvement. Now, as I gather more feedback, I can use this data to progressively refine and retrain my model.
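On the training side, `rlhf_training.py` is only a dummy loop for now. Here is a rough sketch of the idea, assuming a Hugging Face sequence-classification model and a 1-5 rating scale; the base model, table schema, and checkpoint path below are placeholders rather than what the script actually uses.

```python
# Sketch of the dummy training loop idea; the base model, table schema,
# and paths here are placeholders, not the real rlhf_training.py.
import torch
from sqlalchemy import create_engine, text
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NUM_LABELS = 5  # assumption: one class per 1-5 feedback rating

# Load a classifier whose head size matches the rating scale.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=NUM_LABELS
)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Pull the collected feedback straight out of the SQLite database.
engine = create_engine("sqlite:///feedback.db")
with engine.connect() as conn:
    rows = conn.execute(text("SELECT prompt, rating FROM feedback")).all()

model.train()
for prompt, rating in rows:
    batch = tokenizer(prompt, return_tensors="pt", truncation=True)
    labels = torch.tensor([rating - 1])  # map 1-5 ratings to 0-4 class ids
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Save a checkpoint that a later, real fine-tuning run could start from.
model.save_pretrained("checkpoints/latest")
tokenizer.save_pretrained("checkpoints/latest")
```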
Sunday Extras
March 02, 2025
Some other things happening today:
- A sample of Rosendo Mateu No 5 Elixir arrived. It's very distinctive.
- Made some small adjustments to Flux.1 [Dev] templates.
- Began reading The King In Yellow by Robert W. Chambers.
- Listened to the new MPU101 album.
Flux Updates
March 01, 2025
I updated all my templates that support image generation to include a high-definition option, while retaining the legacy option because it is likely more cost-effective. The new high-definition setting outputs images at 1088×1920 pixels, regardless of whether the orientation is portrait or landscape.
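In template terms, this is just a second resolution preset alongside the old one. A rough sketch of the idea (the preset names and the legacy dimensions below are placeholders, not my actual template format):

```python
# Illustration only: my real templates aren't structured exactly like this,
# and the legacy dimensions below are placeholder values.
RESOLUTION_PRESETS = {
    "legacy": {"width": 768, "height": 1344},            # placeholder values
    "high_definition": {"width": 1088, "height": 1920},  # new HD option
}

def build_generation_params(prompt: str, preset: str = "legacy") -> dict:
    """Assemble image-generation parameters for the chosen resolution preset."""
    size = RESOLUTION_PRESETS[preset]
    return {"prompt": prompt, **size}
```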
For my Prompt Refiner application, I also added an SQL database to log user feedback ratings on prompts. My plan is to eventually incorporate Reinforcement Learning from Human Feedback (RLHF).