
First impressions with OpenAI’s Sora
OpenAI’s Sora text-to-video model became available in Europe on February 28th. Being the tech geek that I am, I immediately tested it to see what it could do. My first impression was that Sora can be surprisingly good at times but also quite terrible at others.

Currently, Sora is available to users with either a ChatGPT Plus or Pro subscription. With Plus you get 1,000 credits per month (50 videos) and can create videos up to 10 seconds long in up to 720p resolution. Pro includes 10,000 credits (500 videos) and lets you create videos up to 20 seconds long in up to 1080p resolution. When you run out of credits you can use the relaxed queue, which completes your videos when site traffic is low. In the relaxed queue, higher-tier plans are prioritized, though I’ve noticed that even there videos are generated fairly quickly. That said, Plus users are limited to 5-second clips once their credits are exhausted. With the Pro plan you can generate five videos concurrently, whereas Plus users can only generate one at a time.
I remember when I first started experimenting with DALL-E some years ago. One of the first images I ever generated was of a Bernese Mountain Dog. So naturally, the first video I generated with Sora featured a dog of the same breed. And Sora did a pretty neat job!
After this I asked Sora to create a video of a Bernese Mountain Dog walking the streets of New York, and here some of its struggles become visible: at one point, the dog had a tail on both ends. Otherwise, the video looks pretty nice.
Then I thought I’d try something funnier and prompted Sora to create a video of a Bernese Mountain Dog sitting behind a table filling out papers. Well, the dog doesn’t exactly do anything, but it does indeed sit behind a table, the video looks surprisingly good, and the dog looks much more realistic than, for example, in the forest video.
Since AI generators often struggle with human characters, I wanted to see how well Sora could handle this challenge. Once again, I was pleasantly surprised by the outcome: a cloud architect with long hair and a beard celebrating a successful deployment. In this case, though, the laptop should have been facing the architect.
Conclusions
I was pleasantly surprised by the abilities of OpenAI’s Sora. I know it has been out in the open since December 2024, but I was only able to test it now that it became available here in Europe at the end of February 2025. I had seen videos about Sora, but of course it’s a different thing when you can test it yourself with your own prompts. It already seems far more advanced than DALL-E was when I first experimented with it. Given how rapidly other text-to-something models have advanced over the past few years, I’m confident that Sora will continue to improve at a fast pace. I would have liked to create some animation-style videos, but that’s either not yet possible or I’m not skilled enough yet. I think Sora can already be a great tool for generating b-roll to include in videos, but I wouldn’t start creating any full-length movies with it just yet. For now, Sora is only available with a paid ChatGPT subscription. However, since OpenAI eventually made DALL-E free for all users, there’s hope that Sora might follow the same path in the future.
Update 3.3.2025
In my conclusions, I mentioned that I wasn’t able to generate animation-style videos. However, since Sora can create videos from images, I was ultimately able to do so. One such example can be seen below; Sora does an excellent job of bringing the picture to life.
Useful links:
https://sora.com
https://chatgpt.com/
https://openai.com/chatgpt/pricing/