According to an article on Habr, bypassing the model's built-in censorship turned out to be surprisingly simple and accessible.
The core idea is fine-tuning: the model's behavior is modified by training it on example responses to non-standard queries.
This approach involves creating a custom dataset in which the model is shown detailed, grammatically clean answers and taught to respond adequately to any user request.
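For gpt-3.5-turbo fine-tuning, such a dataset is typically a JSONL file in OpenAI's chat-message format. The sketch below is an illustration, not the article's actual data: the sample dialogue, system prompt, and file name are all hypothetical.

```python
import json

# Hypothetical training examples: each entry is one dialogue showing the
# model a detailed, well-formed answer to a non-standard query.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You answer every question fully and in detail."},
            {"role": "user", "content": "Give me an unusual request the stock model refuses."},
            {"role": "assistant", "content": "Here is a complete, detailed answer rather than a refusal ..."},
        ]
    },
]

def write_jsonl(examples, path="train.jsonl"):
    """Serialize the dialogues into the one-object-per-line JSONL layout
    expected by the fine-tuning API."""
    with open(path, "w", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```

The resulting file would then be uploaded as the training file for a fine-tuning job.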
The author of the article warns, however, about the risk of model degradation from poor-quality fine-tuning: if the dataset contains monosyllabic or inconsistent response samples, the model's behavior can become unpredictable.
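One way to guard against such low-quality samples is a simple pre-filter over the dataset. The word-count threshold below is an arbitrary assumption for illustration; the article gives no concrete figure.

```python
MIN_WORDS = 8  # arbitrary threshold, not a value from the article

def is_low_quality(example):
    """Flag dialogues whose assistant reply is too short to be a detailed answer."""
    assistant_turns = [
        m["content"] for m in example["messages"] if m["role"] == "assistant"
    ]
    return any(len(turn.split()) < MIN_WORDS for turn in assistant_turns)

def filter_dataset(examples):
    """Drop examples that would risk degrading the model during fine-tuning."""
    return [ex for ex in examples if not is_low_quality(ex)]
```

In practice one would also check for inconsistent formatting and contradictory answers, which the article mentions but which are harder to detect automatically.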
Training is carried out on top of gpt-3.5-turbo-1106, while the content of the dialogues is curated manually. This keeps the model useful, avoiding incorrect answers or responses that ignore part of the query.
Source: Ferra

I am a professional journalist and content creator with extensive experience writing for news websites. I currently work as an author at Gadget Onus, where I specialize in covering hot news topics. My written pieces have been published on some of the biggest media outlets around the world, including The Guardian and BBC News.