[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"nav-categories":3,"article-nvidia-lyra-2-0-solves-spatial-forgetting-and-temporal-drift-in-generative-video":70},{"data":4},[5,37,57,64],{"name":6,"slug":7,"categories":8},"Productivity","productivity",[9,13,17,21,25,29,33],{"id":10,"title":11,"slug":12},17,"Branding","branding",{"id":14,"title":15,"slug":16},19,"Marketing","marketing",{"id":18,"title":19,"slug":20},20,"Work","work",{"id":22,"title":23,"slug":24},34,"Community","community",{"id":26,"title":27,"slug":28},21,"For newbies","for-newbies",{"id":30,"title":31,"slug":32},24,"Investment","investment",{"id":34,"title":35,"slug":36},22,"Finance","finance",{"name":38,"slug":39,"categories":40},"Tech","tech",[41,45,49,53],{"id":42,"title":43,"slug":44},28,"Technology","technology",{"id":46,"title":47,"slug":48},32,"Artificial Intelligence","artificial-intelligence",{"id":50,"title":51,"slug":52},26,"Security and protection","security-and-protection",{"id":54,"title":55,"slug":56},31,"YouTube Blog","youtube-blog",{"name":58,"slug":59,"categories":60},"News","news",[61],{"id":62,"title":58,"slug":63},18,"quasanews",{"name":65,"slug":66,"categories":67},"Business","business",[68],{"id":69,"title":65,"slug":66},16,{"post":71,"published_news":93,"popular_news":148,"categories":219},{"title":72,"description":73,"meta_title":72,"meta_description":73,"meta_keywords":74,"text":75,"slug":76,"created_at":77,"publish_at":77,"formatted_created_at":78,"category_id":46,"links":79,"view_type":84,"video_url":85,"views":86,"likes":87,"lang":88,"comments_count":87,"category":89},"NVIDIA Lyra 2.0 Solves Spatial Forgetting and Temporal Drift in Generative Video","NVIDIA has unveiled Lyra 2.0, a new framework that generates persistent, explorable 3D worlds from a single image.","Modern generative video models produce stunning short clips, but their \"memory\" is notoriously short — more like a goldfish than a reliable scene builder.","\u003Cp>\u003Ca 
href=\"https://research.nvidia.com/labs/sil/projects/lyra2/\">NVIDIA has unveiled \u003Cstrong>Lyra 2.0\u003C/strong>\u003C/a>, a new framework that generates persistent, explorable 3D worlds from a single image. Developed by NVIDIA Research, it tackles one of the biggest headaches in generative video AI: the inability of models to maintain coherent, long-horizon scenes when the virtual camera moves freely, especially when it revisits previously seen areas or makes sharp viewpoint changes.\u003C/p>\n\n\u003Chr />\n\u003Ch4>\u003Cstrong>The Persistent Problem with Generative Video Models\u003C/strong>\u003C/h4>\n\n\u003Cp>\u003Cpicture>\u003Csource srcset=\"https://cdn.quasa.io/photos/0001/image-2026-04-20t210021570.webp\" type=\"image/webp\">\u003Cimg alt=\"NVIDIA Lyra 2.0 Solves Spatial Forgetting and Temporal Drift in Generative Video\" class=\"image-align-left\" height=\"372\" src=\"https://cdn.quasa.io/photos/0001/image-2026-04-20t210021570.jpg\" width=\"250\" />\u003C/picture>Modern generative video models produce stunning short clips, but their &quot;memory&quot; is notoriously short &mdash; more like a goldfish than a reliable scene builder. When the camera turns away from an object and then looks back, the model often hallucinates entirely new details or forgets what was there before.\u003C/p>\n\n\u003Cp>Over longer sequences, small errors compound: colors shift, object shapes warp, geometry drifts, and the entire scene gradually falls apart. This makes it nearly impossible to create believable, navigable environments for applications beyond simple TikTok-style videos.\u003C/p>\n\n\u003Cp>\u003Ca href=\"https://the-decoder.com/nvidia-wants-to-scale-robot-simulation-training-with-lyra-2-0/\">NVIDIA&#39;s engineers claim to have cracked this issue with a surprisingly practical approach\u003C/a>. 
Instead of forcing the model to remember everything internally, they bolted on an explicit \u003Cstrong>3D cache\u003C/strong>&nbsp;that acts as an external spatial memory.\u003C/p>\n\n\u003Chr />\n\u003Ch4>\u003Cstrong>How Lyra 2.0 Works: 3D Cache + Smart Retrieval\u003C/strong>\u003C/h4>\n\n\u003Cp>The pipeline starts with a single input image (and an optional text prompt). \u003Ca href=\"https://www.i-scoop.eu/nvidia-lyra-2-0-generates-explorable-3d-worlds-from-a-single-image/\">Users define a camera trajectory through an interactive 3D explorer interface.\u003C/a>\u003C/p>\n\n\u003Cp>\u003Cstrong>\u003Cpicture>\u003Csource srcset=\"https://cdn.quasa.io/photos/0001/image-2026-04-20t210023482.webp\" type=\"image/webp\">\u003Cimg alt=\"NVIDIA Lyra 2.0 Solves Spatial Forgetting and Temporal Drift in Generative Video\" class=\"image-align-right\" height=\"447\" src=\"https://cdn.quasa.io/photos/0001/image-2026-04-20t210023482.jpg\" width=\"300\" />\u003C/picture>The system then generates the video in autoregressive segments, but with crucial enhancements for consistency:\u003C/strong>\u003C/p>\n\n\u003Cul>\n\t\u003Cli>For every generated frame, Lyra 2.0 estimates depth and stores camera parameters along with a downsampled point cloud in the growing \u003Cstrong>3D cache\u003C/strong>.\u003C/li>\n\t\u003Cli>When generating a new frame (especially after a camera turn or revisit), the system retrieves the most relevant past frames based on visibility from the target viewpoint.\u003C/li>\n\t\u003Cli>It warps these historical frames into the current coordinate system using the cached 3D geometry, establishing dense correspondences.\u003C/li>\n\t\u003Cli>These correspondences, along with compressed temporal history, are injected into the Diffusion Transformer (DiT) via attention mechanisms. 
The model still relies on its strong generative prior for appearance synthesis, but the geometry acts as a reliable &quot;scaffold&quot; to prevent hallucination in already-explored regions.\u003C/li>\n\u003C/ul>\n\n\u003Cp>This geometry-aware retrieval effectively solves \u003Ca href=\"https://www.vp-land.com/p/nvidia-s-lyra-2-0-builds-walkable-3d-worlds-from-generated-video\">\u003Cstrong>spatial forgetting\u003C/strong>\u003C/a>&nbsp;&mdash; the model no longer has to reinvent the world from scratch when the camera looks back.\u003C/p>\n\n\u003Chr />\n\u003Ch4>\u003Cstrong>Fixing Temporal Drift with Self-Augmented Training\u003C/strong>\u003C/h4>\n\n\u003Cp>\u003Cpicture>\u003Csource srcset=\"https://cdn.quasa.io/photos/0001/image-2026-04-20t210025738.webp\" type=\"image/webp\">\u003Cimg alt=\"NVIDIA Lyra 2.0 Solves Spatial Forgetting and Temporal Drift in Generative Video\" class=\"image-align-left\" height=\"223\" src=\"https://cdn.quasa.io/photos/0001/image-2026-04-20t210025738.jpg\" width=\"150\" />\u003C/picture>The second major innovation addresses \u003Cstrong>temporal drift\u003C/strong>, where small synthesis errors accumulate over time and distort both appearance and geometry.\u003C/p>\n\n\u003Cp>During training, NVIDIA researchers deliberately feed the model its own slightly degraded predictions as part of the history. 
This self-augmented approach teaches the network to correct and clean up its own mistakes rather than propagating and amplifying them frame by frame.\u003C/p>\n\n\u003Cp>Combined with context compression for longer histories, it results in significantly more stable long-range video generation.\u003C/p>\n\n\u003Chr />\n\u003Ch4>\u003Cstrong>From Video to Interactive 3D Worlds\u003C/strong>\u003C/h4>\n\n\u003Cp>\u003Cpicture>\u003Csource srcset=\"https://cdn.quasa.io/photos/0001/image-2026-04-20t210019548.webp\" type=\"image/webp\">\u003Cimg alt=\"NVIDIA Lyra 2.0 Solves Spatial Forgetting and Temporal Drift in Generative Video\" class=\"image-align-right\" height=\"447\" src=\"https://cdn.quasa.io/photos/0001/image-2026-04-20t210019548.jpg\" width=\"300\" />\u003C/picture>Once the consistent video walkthrough is generated, \u003Ca href=\"https://curiousrefuge.com/blog/nvidias-new-ai-filmmaking-tool-is-bonkers\">Lyra 2.0 lifts the sequence into explicit 3D representations through a fast feed-forward reconstruction step\u003C/a>.\u003C/p>\n\n\u003Cp>\u003Cstrong>The output can be exported as:\u003C/strong>\u003C/p>\n\n\u003Cul>\n\t\u003Cli>\u003Cstrong>3D Gaussian Splatting\u003C/strong>&nbsp;scenes for high-quality, real-time rendering;\u003C/li>\n\t\u003Cli>Point clouds or meshes;\u003C/li>\n\t\u003Cli>Fully navigable environments suitable for VR experiences.\u003C/li>\n\u003C/ul>\n\n\u003Cp>The scenes are coherent enough that users can freely explore them, revisit locations, and even extend the world into previously unseen areas while maintaining consistency with what came before.\u003C/p>\n\n\u003Cp>Beyond entertainment, the system supports practical downstream use cases. Generated scenes can be exported directly into physics engines like \u003Cstrong>NVIDIA Isaac Sim\u003C/strong>, enabling physically grounded robot navigation, interaction, and training for embodied AI. 
This makes Lyra 2.0 particularly relevant for simulation, robotics, and scalable world model development.\u003C/p>\n\n\u003Chr />\n\u003Cp>\u003Cstrong>Also read:\u003C/strong>\u003C/p>\n\n\u003Cul>\n\t\u003Cli>\u003Ca href=\"https://quasa.io/media/cloudflare-just-made-email-a-first-class-citizen-for-ai-agents-and-traditional-email-services-are-feeling-it\">Cloudflare Just Made Email a First-Class Citizen for AI Agents &mdash; And Traditional Email Services Are Feeling It\u003C/a>\u003C/li>\n\t\u003Cli>\u003Ca href=\"https://quasa.io/media/mozilla-nails-it-thunderbolt-brings-chatgpt-at-home-to-the-enterprise-without-vendor-lock-in\">Mozilla Nails It: Thunderbolt Brings &ldquo;ChatGPT at Home&rdquo; to the Enterprise &mdash; Without Vendor Lock-In\u003C/a>\u003C/li>\n\t\u003Cli>\u003Ca href=\"https://quasa.io/media/x-is-finally-cracking-down-on-unlabeled-ads-and-it-s-personal\">X Is Finally Cracking Down on Unlabeled Ads &mdash; And It&rsquo;s Personal\u003C/a>\u003C/li>\n\u003C/ul>\n\n\u003Chr />\n\u003Ch4>\u003Cstrong>Implications for Creators and Developers\u003C/strong>\u003C/h4>\n\n\u003Cp>\u003Cpicture>\u003Csource srcset=\"https://cdn.quasa.io/photos/0001/image-2026-04-20t210028338.webp\" type=\"image/webp\">\u003Cimg alt=\"NVIDIA Lyra 2.0 Solves Spatial Forgetting and Temporal Drift in Generative Video\" class=\"image-align-right\" height=\"447\" src=\"https://cdn.quasa.io/photos/0001/image-2026-04-20t210028338.jpg\" width=\"300\" />\u003C/picture>The results are impressive: demos show long camera trajectories (tens of meters) with stable geometry, consistent objects even after sharp turns or revisits, and seamless switching between the generated video and real-time Gaussian Splatting renders.\u003C/p>\n\n\u003Cp>For 3D artists, level designers, and game developers, this doesn&#39;t mean the end of traditional tools just yet &mdash; but it signals a shift. 
Generating large, coherent environments from a single image and a camera path could dramatically speed up prototyping and world-building. The ability to drop a robot into a physically plausible version of the generated scene opens new doors for AI training and simulation.\u003C/p>\n\n\u003Cp>Lyra 2.0 is detailed in a new arXiv paper (arXiv:2604.13036), with interactive demos, video examples, and a gallery available on the official NVIDIA Research project page. The model weights and code are hosted on Hugging Face under NVIDIA&#39;s organization, and the framework represents a meaningful step toward truly persistent generative 3D worlds.\u003C/p>\n\n\u003Cp>In short, NVIDIA has shown that combining video diffusion models with explicit 3D memory and clever self-correction can turn fleeting generative clips into explorable, expandable realities. The era of AI-built virtual worlds you can actually walk through &mdash; and come back to without everything falling apart &mdash; is getting closer.\u003C/p>","nvidia-lyra-2-0-solves-spatial-forgetting-and-temporal-drift-in-generative-video","2026-04-20T19:05:53.000000Z","20.04.2026",{"image":80,"image_webp":81,"thumb":82,"thumb_webp":83},"https://cdn.quasa.io/images/news/Za28inhjfgkp2cMS3eCUHAQ3UP8dJncCvITAKe0G.jpg","https://cdn.quasa.io/images/news/Za28inhjfgkp2cMS3eCUHAQ3UP8dJncCvITAKe0G.webp","https://cdn.quasa.io/thumbs/news-thumb/images/news/Za28inhjfgkp2cMS3eCUHAQ3UP8dJncCvITAKe0G.jpg","https://cdn.quasa.io/thumbs/news-thumb/images/news/Za28inhjfgkp2cMS3eCUHAQ3UP8dJncCvITAKe0G.webp","small",null,23,0,"en",{"id":46,"title":47,"slug":48,"meta_title":47,"meta_description":90,"meta_keywords":90,"deleted_at":85,"created_at":91,"updated_at":92,"lang":88},"Artificial Intelligence, ai, ml, machine learning, chatgpt, 
future","2024-09-22T08:08:27.000000Z","2024-09-23T12:49:38.000000Z",[94,106,109,122,135],{"title":95,"description":96,"slug":97,"created_at":98,"publish_at":98,"formatted_created_at":78,"category":99,"links":100,"view_type":84,"video_url":85,"views":18,"likes":87,"lang":88,"comments_count":87,"is_pinned":105},"Claude Design Looks Great — But It Devours Your Token Limits. Here’s How to Use It Smartly","Anthropic designer Ryan Mather (@Flomerboy) breaks down the fresh launch of Claude Design: its strengths, its main pain point (high token usage during initial setup), and practical tips from the development team.","claude-design-looks-great-but-it-devours-your-token-limits-here-s-how-to-use-it-smartly","2026-04-20T19:33:38.000000Z",{"title":47,"slug":48},{"image":101,"image_webp":102,"thumb":103,"thumb_webp":104},"https://cdn.quasa.io/images/news/MWAqWKyMuj9LRYzXpAOSU6ohIA3IRlA3RoiqdHw2.jpg","https://cdn.quasa.io/images/news/MWAqWKyMuj9LRYzXpAOSU6ohIA3IRlA3RoiqdHw2.webp","https://cdn.quasa.io/thumbs/news-thumb/images/news/MWAqWKyMuj9LRYzXpAOSU6ohIA3IRlA3RoiqdHw2.jpg","https://cdn.quasa.io/thumbs/news-thumb/images/news/MWAqWKyMuj9LRYzXpAOSU6ohIA3IRlA3RoiqdHw2.webp",false,
twelve months ago.","github-s-ai-agent-tsunami-275-million-commits-a-week-14-billion-projected-for-2026-and-the-platform-is-starting-to-crack","2026-04-17T17:10:50.000000Z","2026-04-20T11:57:00.000000Z",{"title":58,"slug":63},{"image":117,"image_webp":118,"thumb":119,"thumb_webp":120},"https://cdn.quasa.io/images/news/XZTYmDGdaBEeRqB4Fv56AeOVoRtMa9PgoABzA3uj.jpg","https://cdn.quasa.io/images/news/XZTYmDGdaBEeRqB4Fv56AeOVoRtMa9PgoABzA3uj.webp","https://cdn.quasa.io/thumbs/news-thumb/images/news/XZTYmDGdaBEeRqB4Fv56AeOVoRtMa9PgoABzA3uj.jpg","https://cdn.quasa.io/thumbs/news-thumb/images/news/XZTYmDGdaBEeRqB4Fv56AeOVoRtMa9PgoABzA3uj.webp",64,{"title":123,"description":124,"slug":125,"created_at":126,"publish_at":127,"formatted_created_at":78,"category":128,"links":129,"view_type":84,"video_url":85,"views":134,"likes":87,"lang":88,"comments_count":87,"is_pinned":105},"Time’s Up for SaaS: Grow Faster or Disappear","The public markets have spoken — and the verdict is brutal. In early 2026, the software sector is in freefall. 
The Meritech Public SaaS Index has plunged 37% since the end of Q3 2025.","time-s-up-for-saas-grow-faster-or-disappear","2026-04-17T16:42:27.000000Z","2026-04-20T09:32:00.000000Z",{"title":31,"slug":32},{"image":130,"image_webp":131,"thumb":132,"thumb_webp":133},"https://cdn.quasa.io/images/news/Pgd6kFN0MEzRDCuMrQRslHnVu4QNjjMMfuhEgOWY.jpg","https://cdn.quasa.io/images/news/Pgd6kFN0MEzRDCuMrQRslHnVu4QNjjMMfuhEgOWY.webp","https://cdn.quasa.io/thumbs/news-thumb/images/news/Pgd6kFN0MEzRDCuMrQRslHnVu4QNjjMMfuhEgOWY.jpg","https://cdn.quasa.io/thumbs/news-thumb/images/news/Pgd6kFN0MEzRDCuMrQRslHnVu4QNjjMMfuhEgOWY.webp",75,{"title":136,"description":137,"slug":138,"created_at":139,"publish_at":140,"formatted_created_at":78,"category":141,"links":142,"view_type":84,"video_url":85,"views":147,"likes":87,"lang":88,"comments_count":87,"is_pinned":105},"Twitter Dev Builds “Stukach-Claw” — An AI Snitch Bot That’s Already Reported 4,250 People to the IRS for Tax Jokes","A crypto trader and developer known as @camolNFT has gone viral after revealing he built an autonomous AI agent called OpenClaw (affectionately dubbed Stukach-Claw by Russian-speaking users) that actively hunts for “jokes” about tax evasion on social media and automatically files whistleblower reports with the 
IRS.","twitter-dev-builds-stukach-claw-an-ai-snitch-bot-that-s-already-reported-4-250-people-to-the-irs-for-tax-jokes","2026-04-17T12:09:57.000000Z","2026-04-20T06:06:00.000000Z",{"title":19,"slug":20},{"image":143,"image_webp":144,"thumb":145,"thumb_webp":146},"https://cdn.quasa.io/images/news/Bknfo3h65dH5eqJ5coCdgVsAxMYJsDIOPyZplmaX.jpg","https://cdn.quasa.io/images/news/Bknfo3h65dH5eqJ5coCdgVsAxMYJsDIOPyZplmaX.webp","https://cdn.quasa.io/thumbs/news-thumb/images/news/Bknfo3h65dH5eqJ5coCdgVsAxMYJsDIOPyZplmaX.jpg","https://cdn.quasa.io/thumbs/news-thumb/images/news/Bknfo3h65dH5eqJ5coCdgVsAxMYJsDIOPyZplmaX.webp",93,[149,162,178,190,205],{"title":150,"description":151,"slug":152,"created_at":153,"publish_at":154,"formatted_created_at":155,"category":156,"links":157,"view_type":84,"video_url":85,"views":160,"likes":161,"lang":88,"comments_count":87,"is_pinned":105},"The Anatomy of an Entrepreneur","Entrepreneur is a French word that means an enterpriser. Enterprisers are people who undertake a business or enterprise with the chance of earning profits or suffering from loss.","the-anatomy-of-an-entrepreneur","2021-08-04T15:18:21.000000Z","2025-12-14T06:09:00.000000Z","14.12.2025",{"title":65,"slug":66},{"image":158,"image_webp":85,"thumb":159,"thumb_webp":159},"https://cdn.quasa.io/images/news/mVsXPTMuHZuI7UXCsENgL1Qwp1uSOf7Rz3uVPMfm.webp","https://cdn.quasa.io/thumbs/news-thumb/images/news/mVsXPTMuHZuI7UXCsENgL1Qwp1uSOf7Rz3uVPMfm.webp",70971,2,{"title":163,"description":164,"slug":165,"created_at":166,"publish_at":167,"formatted_created_at":168,"category":169,"links":170,"view_type":175,"video_url":85,"views":176,"likes":177,"lang":88,"comments_count":87,"is_pinned":105},"Advertising on QUASA","QUASA MEDIA is read by more than 400 thousand people a month. 
We can place your article, add a link, or write a custom article for publication.","advertising-on-quasa","2022-07-06T07:33:02.000000Z","2025-12-15T17:33:02.000000Z","15.12.2025",{"title":58,"slug":63},{"image":171,"image_webp":172,"thumb":173,"thumb_webp":174},"https://cdn.quasa.io/images/news/45SvmdsTQbiyc3nxgbyHY1mpVbisYyub2BCHjqBL.jpg","https://cdn.quasa.io/images/news/45SvmdsTQbiyc3nxgbyHY1mpVbisYyub2BCHjqBL.webp","https://cdn.quasa.io/thumbs/news-thumb/images/news/45SvmdsTQbiyc3nxgbyHY1mpVbisYyub2BCHjqBL.jpg","https://cdn.quasa.io/thumbs/news-thumb/images/news/45SvmdsTQbiyc3nxgbyHY1mpVbisYyub2BCHjqBL.webp","large",70738,4,
Who wouldn’t want to earn a living online?","top-5-tips-to-make-more-money-as-a-content-creator","2022-01-17T17:31:51.000000Z","2026-01-17T11:30:00.000000Z","17.01.2026",{"title":19,"slug":20},{"image":199,"image_webp":200,"thumb":201,"thumb_webp":202},"https://cdn.quasa.io/images/news/gP8kiumBPpJmQv6SMieXiX1tDetx43VwFfO1P4Ca.jpg","https://cdn.quasa.io/images/news/gP8kiumBPpJmQv6SMieXiX1tDetx43VwFfO1P4Ca.webp","https://cdn.quasa.io/thumbs/news-thumb/images/news/gP8kiumBPpJmQv6SMieXiX1tDetx43VwFfO1P4Ca.jpg","https://cdn.quasa.io/thumbs/news-thumb/images/news/gP8kiumBPpJmQv6SMieXiX1tDetx43VwFfO1P4Ca.webp",42332,1,{"title":206,"description":207,"slug":208,"created_at":209,"publish_at":210,"formatted_created_at":211,"category":212,"links":213,"view_type":175,"video_url":85,"views":218,"likes":161,"lang":88,"comments_count":87,"is_pinned":105},"8 Logo Design Tips for Small Businesses","Your logo tells the story of your business and the values you stand for.","8-logo-design-tips-for-small-businesses","2021-12-04T21:59:52.000000Z","2025-05-05T03:30:00.000000Z","05.05.2025",{"title":15,"slug":16},{"image":214,"image_webp":215,"thumb":216,"thumb_webp":217},"https://cdn.quasa.io/images/news/Wbx2NtS1CnTupgoQbpFMGspJ5jm4uob2hDOq33r0.jpg","https://cdn.quasa.io/images/news/Wbx2NtS1CnTupgoQbpFMGspJ5jm4uob2hDOq33r0.webp","https://cdn.quasa.io/thumbs/news-thumb/images/news/Wbx2NtS1CnTupgoQbpFMGspJ5jm4uob2hDOq33r0.jpg","https://cdn.quasa.io/thumbs/news-thumb/images/news/Wbx2NtS1CnTupgoQbpFMGspJ5jm4uob2hDOq33r0.webp",41423,[220,221,222,223,224,225,226,227,228,229,230,231,232],{"title":23,"slug":24},{"title":47,"slug":48},{"title":55,"slug":56},{"title":43,"slug":44},{"title":51,"slug":52},{"title":31,"slug":32},{"title":35,"slug":36},{"title":27,"slug":28},{"title":19,"slug":20},{"title":15,"slug":16},{"title":58,"slug":63},{"title":11,"slug":12},{"title":65,"slug":66}]