OpenAI’s Sora can create realistic video clips from text prompts

  • BY Sharil Abdul Rahman
  • 16 February 2024
  • 3:25 pm

OpenAI, the same company that created ChatGPT and DALL·E, has just unveiled its latest video-generation model called Sora. The model takes text prompts and turns them into ‘realistic and imaginative scenes’, and can currently create minute-long clips based purely on the text prompts users have written.

Introducing Sora, our text-to-video model.

Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. https://t.co/7j2JN27M3W

Prompt: “Beautiful, snowy… pic.twitter.com/ruTEWn87vf

— OpenAI (@OpenAI) February 15, 2024

OpenAI’s blog post says the model can “generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background.” More scarily, OpenAI says the model not only understands what the user is asking for in the prompt, but also how those things exist in the physical world.

https://cdn.openai.com/sora/videos/tokyo-walk.mp4
One of the sample video clips created with Sora using text prompts. Video credit: OpenAI

The result is truly amazing and scary. Because the model has a deep understanding of language, it can accurately interpret prompts and generate compelling characters that express vibrant emotions. Sora can also create multiple shots within a single generated video that accurately portray the characters and visual style.

The model can also accept an image as input and generate a video based on that image. It can fill in missing frames in an existing video, or even extend a video when needed.

https://cdn.openai.com/sora/videos/art-museum.mp4
Prompt used: Tour of an art gallery with many beautiful works of art in different styles. Video credit: OpenAI

According to OpenAI, Sora is a diffusion model: it generates a video by starting with one that looks like static noise and gradually transforms it by removing the noise over many steps. Similar to GPT models, Sora uses a transformer architecture, which unlocks superior scaling performance.
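
To make the diffusion idea more concrete, here is a minimal, purely illustrative Python sketch of a denoising loop. This is not OpenAI's code and says nothing about how Sora works internally; the toy_noise_predictor function, the step count and the tensor shapes are all assumptions, used only to show the "start from noise, remove it step by step" process the blog post describes.

```python
import numpy as np

# Toy stand-in for a learned model that estimates the noise left in the sample.
# A real diffusion model would be a large neural network conditioned on the
# text prompt; this placeholder just returns a fraction of the current values.
def toy_noise_predictor(video, step):
    return 0.1 * video

def generate_video(noise_predictor, num_frames=60, height=64, width=64,
                   channels=3, num_steps=50):
    # Start from pure random noise shaped like the target video clip.
    video = np.random.randn(num_frames, height, width, channels)

    # Gradually transform the noise by subtracting the predicted noise
    # over many small steps, counting down from the noisiest step.
    for step in reversed(range(num_steps)):
        predicted_noise = noise_predictor(video, step)
        video = video - predicted_noise

    return video

clip = generate_video(toy_noise_predictor)
print(clip.shape)  # (60, 64, 64, 3): 60 "frames" of 64x64 RGB values
```

In a real diffusion model the per-step update also follows a fixed noise schedule and the predictor is conditioned on the text prompt, but the overall loop has this same shape.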

The quality of the video is pretty good, but there are still visual glitches in some of the clips. Sora struggles to render fast movements correctly, including fast-moving backgrounds, and some clips even show the extra-limbs glitch often associated with AI-generated content.

Glitch examples (image captions): the green foliage looks grainy and blotchy; the cat has three front paws?

Currently, Sora is only available to “red teamers” who are assessing the model for potential harms and risks. OpenAI says it is using the same safety methods built into DALL·E 3 to ensure bad actors will not be able to create content that violates its usage policies, so no violent, explicit, hateful, deepfake or similar content will be allowed past the text or image classifier.

OpenAI did not share when Sora will be available to the public, only that it is currently working with stakeholders (policymakers, educators and artists) around the world to understand their concerns and to identify positive use cases for the new technology.

[SOURCE]

Tags: AI, ChatGPT, DALL·E, Generative AI, OpenAI, Sora