<h1>Tencent’s HunyuanVideo 1.5 Just Made Hollywood-Level Video Generation Truly Open-Source</h1>

<p>In a move that has sent shockwaves through the AI filmmaking community, Tencent has released HunyuanVideo 1.5, an 8.3-billion-parameter open-source video generation model that is now widely regarded as the strongest fully open video foundation model available.</p>

<p><picture class="image-align-right"><source srcset="https://cdn.quasa.io/photos/00/image-2025-11-23t102721140.webp" type="image/webp"><img alt="Tencent’s HunyuanVideo 1.5 Just Made Hollywood-Level Video Generation Truly Open-Source" class="image-align-right" height="447" src="https://cdn.quasa.io/photos/00/image-2025-11-23t102721140.jpg" width="300" /></picture>While closed rivals such as OpenAI’s Sora, Google’s Veo 2, and Kling 1.5 still lock their best weights behind APIs and enterprise contracts, HunyuanVideo 1.5 is completely free for commercial and research use under an Apache 2.0 license. Every weight, every line of inference code, and even the training data recipe are now public on GitHub and Hugging Face.</p>

<h4><strong>What actually sets it apart</strong></h4>

<ul>
	<li>True 1080p cinematic output: native generation is 768×512 at 24 fps for 5–10 seconds, and a built-in two-stage super-resolution module (trained jointly with the base DiT) pushes final renders to a crisp 1920×1080 with film-grade texture and lighting fidelity.</li>
	<li>Runs on consumer hardware: the full 8.3B model fits in roughly 13.6 GB of VRAM at BF16 precision. Users are already producing 1080p clips on a single RTX 4090, or even an RTX 3090 Ti, in under 4 minutes per 5-second clip using TensorRT-LLM optimizations.</li>
	<li>Motion coherence that finally competes with the closed leaders: independent benchmarks (VBench, T2V-Score, and human preference studies on GenAI-Arena) place HunyuanVideo 1.5 neck-and-neck with Kling 1.5 and ahead of Runway Gen-3, Luma Dream Machine, and Pika 1.5 in complex motion, camera control, and prompt adherence.</li>
	<li>Multi-modal conditioning out of the box: text-to-video, image-to-video, video-to-video, depth-map control, and reference-image styling all ship in the same checkpoint.</li>
</ul>

<hr />
<h4><strong>Architecture highlights</strong></h4>

<p><picture class="image-align-left"><source srcset="https://cdn.quasa.io/photos/00/image-2025-11-23t102722374.webp" type="image/webp"><img alt="Tencent’s HunyuanVideo 1.5 Just Made Hollywood-Level Video Generation Truly Open-Source" class="image-align-left" height="298" src="https://cdn.quasa.io/photos/00/image-2025-11-23t102722374.jpg" width="200" /></picture>HunyuanVideo 1.5 is built on a pure Diffusion Transformer (DiT) backbone with several clever departures from earlier open models such as Open-Sora and Stable Video Diffusion:</p>

<ul>
	<li>A 3D causal VAE with 8×8×4 spatio-temporal compression (instead of the usual 8×8×8) that preserves significantly more high-frequency detail.</li>
	<li>Rotary positional embeddings extended to the temporal dimension, giving the model a native understanding of camera motion and physics.</li>
	<li>Flow-matching training in the latent space (a trick borrowed from recent image-generation papers) that yields dramatically cleaner trajectories than standard denoising objectives.</li>
	<li>A lightweight 2-billion-parameter super-resolution DiT that was jointly trained with the base model, eliminating the “blurry upscaling” look that has plagued most open-source attempts.</li>
</ul>

<hr />
<h4><strong>Real-world impact already happening</strong></h4>

<p><strong>Within 72 hours of release:</strong></p>

<ul>
	<li>Indie filmmakers on X and Reddit reported generating entire mood reels and pre-vis sequences that previously cost $50–$200 per minute on paid APIs.</li>
	<li>ComfyUI and Automatic1111 forks added native HunyuanVideo nodes; the most popular one already has 40k+ downloads.</li>
	<li>Chinese studios are using it in production for virtual-production backgrounds and VFX plate generation, citing cost savings of 70–90% compared to Kling Pro or Runway credits.</li>
</ul>

<hr />
<h4><strong>The new democratisation line</strong></h4>

<p>For the first time, a single hobbyist with a $1,500 GPU can generate video that rivals what Hollywood studios were paying six-figure sums for just twelve months ago. The gap between “closed corporate AI” and “what anyone can run at home” has never been smaller.</p>

<p>HunyuanVideo 1.5 isn’t just another research checkpoint; it’s the moment when cinematic video synthesis officially escaped the walled gardens.</p>

<p>Model, code, and demos: <a href="https://hunyuan.tencent.com/video/en">https://hunyuan.tencent.com/video/en</a><br />
GitHub: <a href="https://github.com/Tencent-Hunyuan/HunyuanVideo">https://github.com/Tencent-Hunyuan/HunyuanVideo</a><br />
Hugging Face: <a href="https://huggingface.co/Tencent-Hunyuan/HunyuanVideo-1.5">https://huggingface.co/Tencent-Hunyuan/HunyuanVideo-1.5</a></p>

<p>The age of truly open cinematic AI has arrived, and it runs on a gaming card.</p>
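<p>For readers curious what the flow-matching objective mentioned in the architecture section looks like in practice, here is a minimal toy sketch. It illustrates the generic rectified-flow / flow-matching recipe (linear data-to-noise path, velocity regression target) on random vectors; it is not Tencent’s training code, and the placeholder <code>model</code> function stands in for the real DiT.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for video latents: a batch of 4 flat vectors.
x0 = rng.normal(size=(4, 16))   # "clean" latents
x1 = rng.normal(size=(4, 16))   # Gaussian noise
t = rng.uniform(size=(4, 1))    # per-sample timesteps in [0, 1]

# Linear interpolation path between data and noise ...
xt = (1.0 - t) * x0 + t * x1

# ... whose constant velocity (x1 - x0) is the regression target.
target_velocity = x1 - x0

def model(xt, t):
    """Placeholder network; a real DiT would predict the velocity."""
    return np.zeros_like(xt)

# Flow-matching loss: MSE between predicted and true velocity.
loss = float(np.mean((model(xt, t) - target_velocity) ** 2))
print(f"toy flow-matching loss: {loss:.4f}")
```

<p>Compared with a standard denoising objective, the target here is a straight-line velocity rather than the noise itself, which is why sampling trajectories tend to be straighter and need fewer steps.</p>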
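<p>As a back-of-envelope check on the consumer-hardware claims, the 8×8×4 compression figure from the architecture section implies a pleasantly small latent grid for the DiT to denoise. The frame count (24 fps × 5 s + 1) and the causal-VAE rounding convention below are assumptions for illustration, not published specs:</p>

```python
# Latent shape for a native 768x512, 24 fps, 5-second clip
# under 8x8x4 spatio-temporal compression.
# Assumption: latent_T = (T - 1) // 4 + 1, the common causal-VAE
# convention; actual channel width and rounding may differ.
frames = 24 * 5 + 1               # 121 sampled frames
latent_t = (frames - 1) // 4 + 1  # temporal compression x4
latent_h = 512 // 8               # spatial compression x8
latent_w = 768 // 8
print(latent_t, latent_h, latent_w)  # 31 64 96
```

<p>A 31×64×96 latent grid, versus 121×512×768 in pixel space, is the kind of reduction that makes 5-second clips tractable on a single gaming GPU.</p>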