The Bagel family of models was fine-tuned on a large amount of data across a variety of prompting formats. A large fraction of the data is writing- and role-play-related, making these models a strong choice for those tasks.
The 7B models are based on Mistral 7B.
The DPO models underwent direct preference optimization (DPO) on top of the fine-tuning. Many users report that the DPO versions are less suitable for writing and role-play, but better for general "assistant" tasks.
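For reference, here is a minimal sketch of loading one of these models with Hugging Face transformers and prompting it in an Alpaca-style format. The model ID and the exact prompt template are assumptions for illustration; consult the individual model cards for the available variants and the full list of prompt formats each was trained on.

```python
# Minimal sketch: loading a Bagel 7B variant with Hugging Face transformers.
# The model ID and the Alpaca-style template below are assumptions; check the
# model card for exact IDs and supported prompt formats.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jondurbin/bagel-dpo-7b-v0.1"  # assumed ID; substitute the variant you want

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so a 7B model fits on a single GPU
    device_map="auto",
)

# One of several prompting formats; an Alpaca-style template is shown here.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short scene set in a lighthouse.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```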