<h1>When Images Are Born from Light Itself: UCLA&rsquo;s All-Optical Generative AI</h1>

<p><em>Published 03.12.2025 on quasa.io</em></p>

<p>Forget massive GPU clusters sipping megawatts of electricity. In a laboratory at the University of California, Los Angeles, researchers have built and experimentally demonstrated a system that generates complex, high-quality color images using nothing but coherent light, passive optical elements, and a single camera sensor. The heavy lifting that normally requires billions of multiply-accumulate operations on silicon is now performed instantly by physics.</p>

<p><img alt="" class="image-align-right" height="342" src="https://quasa.io/storage/photos/00/photo_2025-11-24_13-44-05.jpg" width="300" />The core invention is called the Optical Generative Model (OGM): a fully analog, diffraction-based architecture that turns random noise into recognizable pictures in one or a few light-propagation steps.</p>

<h4><strong>How the Magic Happens</strong></h4>

<p><strong>The pipeline has only three main stages:</strong></p>

<ol>
	<li>A lightweight digital phase encoder (a shallow neural network with ~580 million trainable parameters) receives pure Gaussian noise and outputs a 2D phase pattern &phi;(x,y);</li>
	<li>This phase pattern is displayed on a high-resolution spatial light modulator (SLM) &mdash; essentially a programmable liquid-crystal screen that can retard different parts of an incoming laser beam by precise fractions of a wavelength;</li>
	<li>Collimated laser light passes through the SLM, then propagates through free space and a trained diffractive optical decoder (a stack of 3D-printed phase masks or passive diffractive layers). After a few centimeters of propagation, the light-intensity distribution that lands on an ordinary color CMOS sensor is the final generated image.</li>
</ol>

<p><img alt="" class="image-align-right" height="351" src="https://quasa.io/storage/photos/00/photo_2025-11-24_13-44-03.jpg" width="300" />No active electronics sit between the SLM and the sensor.</p>

<p>The &ldquo;neural network&rdquo; is literally etched into the physical structure of light waves.</p>

<p><strong>The team demonstrated two variants:</strong></p>

<ul>
	<li>Snapshot OGM: a single forward pass of light produces the final image in &lt;1 nanosecond of optical computation (limited only by the speed of light over ~10 cm);</li>
	<li>Iterative OGM: the current sensor image is fed back digitally, slightly denoised, re-encoded onto the SLM, and passed through the same diffractive decoder again &mdash; exactly mimicking digital diffusion&rsquo;s iterative refinement, but with optical speed at each step.</li>
</ul>

<h4><strong>What They Actually Generated in the Lab</strong></h4>

<p><img alt="" class="image-align-right" height="313" src="https://quasa.io/storage/photos/00/photo_2025-11-24_13-44-07.jpg" width="300" /><strong>Using ordinary visible lasers (red 638 nm, green 520 nm, and blue 450 nm, combined through beam splitters), the researchers produced:</strong></p>

<ul>
	<li>MNIST handwritten digits with an FID of 131 (comparable to early digital diffusion models);</li>
	<li>Fashion-MNIST items (shoes, bags, clothing) with an FID of 180;</li>
	<li>Celebrity faces from a small CelebA subset;</li>
	<li>Realistic colored butterflies;</li>
	<li>Full-color artworks in Van Gogh&rsquo;s swirling post-impressionist style when conditioned on simple text prompts like &ldquo;sunflowers,&rdquo; &ldquo;church,&rdquo; or &ldquo;self-portrait.&rdquo;</li>
</ul>

<p>All of these were captured directly by the camera sensor &mdash; no post-processing beyond basic white balance. The physical setup fits on a standard optical breadboard roughly 40 &times; 60 cm.</p>

<hr />
<h4><strong>Why This Changes Everything</strong></h4>

<p><img alt="" class="image-align-right" height="340" src="https://quasa.io/storage/photos/00/photo_2025-11-24_13-44-09.jpg" width="300" />Energy &amp; speed<br />
A single 4090 GPU generating one 512&times;512 image over 50 diffusion steps consumes roughly 10&ndash;15 joules and takes ~1&ndash;3 seconds. The optical system uses ~50 mW of laser power and finishes the equivalent computation in the time light takes to travel 30 cm &mdash; about 1 billion times more energy-efficient per image and millions of times faster for the optical core.</p>

<p>Edge AI becomes realistic<br />
Because almost all of the computation is passive, future versions could run on milliwatt power budgets inside smartphones, AR glasses, drones, or even contact-lens displays. Real-time style transfer, infinite image continuation, and private on-device generation that never touches a GPU suddenly look feasible.</p>

<p>Scalability path<br />
The diffractive decoder is parallel by nature: every pixel computes simultaneously. Resolution is currently limited by the SLM pixel pitch (~4 &micro;m), but 8K and 16K modulators are already commercially available, and metasurface alternatives are reaching sub-wavelength precision.</p>

<hr />
<h4><strong>Current Limitations (They Are Real)</strong></h4>

<ul>
	<li>Optical alignment is painful: sub-micron precision is required across the entire beam path;</li>
	<li>Phase-only modulation throws away amplitude information, forcing the system to be creative in how it encodes data;</li>
	<li>Laser speckle and sensor noise still degrade fidelity compared with the best digital models;</li>
	<li>Color separation requires three separate laser paths and perfect overlay on the sensor;</li>
	<li>Conditioning on complex text prompts is still primitive (the current encoder is tiny compared to CLIP).</li>
</ul>

<p><img alt="" class="image-align-right" height="344" src="https://quasa.io/storage/photos/00/photo_2025-11-24_13-44-13.jpg" width="300" />Yet even with these constraints, the experimental FID scores are within striking distance of 2022-era digital diffusion models that used 10&ndash;100&times; more parameters and orders of magnitude more energy.</p>

<p>Also read:</p>

<ul>
	<li><a href="https://quasa.io/media/sam-altman-how-ai-is-flipping-the-value-of-professions">Sam Altman: How AI Is Flipping the Value of Professions</a></li>
	<li><a href="https://quasa.io/media/china-s-electric-truck-boom-a-green-freight-revolution-reshaping-global-energy-rivalries">China&#39;s Electric Truck Boom: A Green Freight Revolution Reshaping Global Energy Rivalries</a></li>
	<li><a href="https://quasa.io/media/the-solar-revolution-is-no-longer-coming-it-s-already-here-and-it-speaks-chinese">The Solar Revolution Is No Longer Coming &ndash; It&rsquo;s Already Here (and It Speaks Chinese)</a></li>
	<li><a href="https://quasa.io/media/exchange-dogecoin-doge-to-euro-eur-exchange">Exchange Dogecoin (DOGE) to euro (EUR) exchange</a></li>
</ul>

<h4><strong>The Bigger Picture</strong></h4>

<p>This is not just another &ldquo;optical neural network&rdquo; paper. For the first time, a physical optical system trained end to end with backpropagation (via differentiable wave-propagation simulation) has crossed the threshold from toy proofs of concept to generating recognizable, aesthetically pleasing color images that rival early Stable Diffusion outputs &mdash; all while consuming less power than a smartwatch display.</p>

<p>If the roadmap holds, within 5&ndash;10 years we may see commercial optical generative chips the size of a postage stamp producing 4K video frames at thousands per second, powered only by ambient light or a coin cell.</p>

<p>The GPU era gave us generative AI.<br />
The coming optical era may make it ubiquitous, invisible, and essentially free.</p>
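<p>The three-stage pipeline and the physics it relies on can be sketched numerically. The following is a minimal simulation sketch, not the UCLA implementation: the single-layer "encoder," the random untrained masks, and all dimensions (32&times;32 grid, 4 &micro;m pitch, 2 cm hops, 520 nm light) are illustrative placeholders; only the angular-spectrum propagation step is standard optics.</p>

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z through free space
    using the angular-spectrum method (FFT, transfer function, inverse FFT)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies (cycles/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def ogm_forward(noise, encoder_weights, decoder_masks,
                wavelength=520e-9, dx=4e-6, z=0.02):
    # 1) Digital phase encoder: noise -> SLM phase pattern.
    #    Stub: one linear layer squashed into [-pi, pi]; the real encoder
    #    is a trained shallow network.
    phase = np.tanh(noise @ encoder_weights).reshape(decoder_masks[0].shape) * np.pi
    field = np.exp(1j * phase)            # collimated beam after the SLM
    # 2) Passive diffractive decoder: alternate free-space hops and phase masks.
    for mask in decoder_masks:
        field = angular_spectrum_propagate(field, wavelength, dx, z)
        field = field * np.exp(1j * mask)
    field = angular_spectrum_propagate(field, wavelength, dx, z)
    # 3) The camera records intensity: this IS the generated image.
    return np.abs(field) ** 2

rng = np.random.default_rng(0)
N = 32
W = rng.normal(size=(N * N, N * N)) / N                     # untrained placeholder weights
masks = [rng.uniform(0, 2 * np.pi, size=(N, N)) for _ in range(3)]
img = ogm_forward(rng.normal(size=N * N), W, masks)
print(img.shape)  # (32, 32)
```

<p>With untrained weights the output is just structured speckle; training would backpropagate through this same differentiable chain to fit the encoder and the (then fabricated) masks. Note that phase-only modulation and unit-modulus propagation conserve total optical power, which is one reason the decoder can be entirely passive.</p>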