    Decentralized AI Training Turns Homes Into Data Hubs

By Team_Benjamin Franklin Institute · April 7, 2026

Artificial intelligence has an enormous appetite for energy. That appetite is evident in the hefty carbon footprint of the data centers behind the AI boom and in the steadily rising carbon emissions from training frontier AI models.

    No wonder big tech companies are warming up to nuclear energy, envisioning a future fueled by reliable, carbon-free sources. But while nuclear-powered data centers might still be years away, some in the research and industry spheres are taking action right now to curb AI’s growing energy demands. They’re tackling training as one of the most energy-intensive phases in a model’s life cycle, focusing their efforts on decentralization.

Decentralization allocates model training across a network of independent nodes rather than relying on one platform or provider. It allows compute to go where the energy is, whether that's a dormant server sitting in a research lab or a computer in a solar-powered home. Instead of constructing more data centers that force electric grids to scale up their infrastructure and capacity, decentralization harnesses energy from existing sources, avoiding adding new demand to the grid.

    Hardware in harmony

Training AI models has long been a data center sport, synchronized across clusters of closely connected GPUs. But as hardware improvements struggle to keep pace with the rapid growth of large language models, even massive single data centers are no longer cutting it.

    Tech firms are turning to the pooled power of multiple data centers—no matter their location. Nvidia, for instance, launched the Spectrum-XGS Ethernet for scale-across networking, which “can deliver the performance needed for large-scale single job AI training and inference across geographically separated data centers.” Similarly, Cisco introduced its 8223 router designed to “connect geographically dispersed AI clusters.”

Other companies are harvesting idle compute in servers, sparking the emergence of a GPU-as-a-Service business model. Take Akash Network, a peer-to-peer cloud computing marketplace that bills itself as the "Airbnb for data centers." Those with unused or underused GPUs in offices and smaller data centers register as providers, while those in need of computing power become tenants who can choose among providers and rent their GPUs.

    “If you look at [AI] training today, it’s very dependent on the latest and greatest GPUs,” says Akash cofounder and CEO Greg Osuri. “The world is transitioning, fortunately, from only relying on large, high-density GPUs to now considering smaller GPUs.”

    Software in sync

    In addition to orchestrating the hardware, decentralized AI training also requires algorithmic changes on the software side. This is where federated learning, a form of distributed machine learning, comes in.

    It starts with an initial version of a global AI model housed in a trusted entity such as a central server. The server distributes the model to participating organizations, which train it locally on their data and share only the model weights with the trusted entity, explains Lalana Kagal, a principal research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) who leads the Decentralized Information Group. The trusted entity then aggregates the weights, often by averaging them, integrates them into the global model, and sends the updated model back to the participants. This collaborative training cycle repeats until the model is considered fully trained.
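What Kagal describes is the classic federated averaging scheme. Below is a minimal sketch in PyTorch, assuming each participant exposes an ordinary DataLoader over its private data; the function names are illustrative, not taken from any of the systems mentioned in this article:

```python
import copy
from typing import Dict, List

import torch
import torch.nn as nn

def local_update(global_model: nn.Module, loader, epochs: int = 1,
                 lr: float = 0.01) -> Dict[str, torch.Tensor]:
    """Train a private copy of the global model on one participant's local
    data; only the resulting weights leave the node, never the data."""
    local = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(local(x), y).backward()
            optimizer.step()
    return local.state_dict()

def federated_round(global_model: nn.Module, participant_loaders: List) -> None:
    """One collaborative training cycle: distribute the model, train it
    locally at each participant, then aggregate by averaging the weights."""
    weights = [local_update(global_model, loader) for loader in participant_loaders]
    averaged = {k: torch.stack([w[k].float() for w in weights]).mean(dim=0)
                for k in weights[0]}
    global_model.load_state_dict(averaged)  # integrate into the global model
```

The privacy property Kagal highlights is visible in the return value of local_update: only model weights cross the network, never the underlying data.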

But distributing both data and computation has drawbacks. The constant back-and-forth exchange of model weights, for instance, results in high communication costs. Fault tolerance is another issue.

    “A big thing about AI is that every training step is not fault-tolerant,” Osuri says. “That means if one node goes down, you have to restore the whole batch again.”

To overcome these hurdles, researchers at Google DeepMind developed DiLoCo, a distributed low-communication optimization algorithm. DiLoCo forms what Google DeepMind research scientist Arthur Douillard calls "islands of compute," where each island consists of a group of chips. Chips within an island must all be of the same type, but different islands can use different chip types. The islands are decoupled from each other, and knowledge is synchronized between them only occasionally. This decoupling means islands can perform training steps independently without communicating as often, and chips can fail without interrupting the remaining healthy ones. However, the team's experiments found that performance diminished beyond eight islands.
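In outline, the algorithm wraps an infrequent outer synchronization step around an inner optimizer that runs independently on each island. The single-process sketch below is a simplification of the published method (the DiLoCo paper uses AdamW inside and Nesterov-momentum SGD outside); island.next_batch() is a hypothetical stand-in for each island's local data feed, and the step counts are illustrative:

```python
import copy

import torch
import torch.nn.functional as F

def diloco_train(global_model, islands, inner_steps=500, outer_rounds=10):
    """Simplified DiLoCo loop: each island of compute trains its own copy
    for many inner steps with no communication; islands then exchange only
    parameter deltas, which an outer optimizer folds into the global model."""
    outer_opt = torch.optim.SGD(global_model.parameters(),
                                lr=0.7, momentum=0.9, nesterov=True)
    for _ in range(outer_rounds):
        snapshot = [p.detach().clone() for p in global_model.parameters()]
        deltas = []
        for island in islands:  # in a real system these run in parallel
            local = copy.deepcopy(global_model)
            inner_opt = torch.optim.AdamW(local.parameters(), lr=1e-4)
            for _ in range(inner_steps):
                x, y = island.next_batch()  # island-local data shard
                inner_opt.zero_grad()
                F.cross_entropy(local(x), y).backward()
                inner_opt.step()
            deltas.append([s - p.detach()
                           for s, p in zip(snapshot, local.parameters())])
        # Outer step: the averaged delta acts as a pseudo-gradient, so
        # synchronization happens once per round, not once per step.
        outer_opt.zero_grad()
        for i, p in enumerate(global_model.parameters()):
            p.grad = torch.stack([d[i] for d in deltas]).mean(dim=0)
        outer_opt.step()
```

The inner loop is where the communication savings come from: an island that crashes mid-round loses only its own delta, and the other islands keep training undisturbed.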

    An improved version dubbed Streaming DiLoCo further reduces the bandwidth requirement by synchronizing knowledge “in a streaming fashion across several steps and without stopping for communicating,” says Douillard. The mechanism is akin to watching a video even if it hasn’t been fully downloaded yet. “In Streaming DiLoCo, as you do computational work, the knowledge is being synchronized gradually in the background,” he adds.
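One way to picture the streaming variant is as a staggered schedule: the model's layers are split into fragments, and each fragment is synchronized on its own offset, so communication trickles out in the background rather than stopping training for one big exchange. A toy schedule, with made-up fragment counts and intervals, might look like this:

```python
def streaming_sync_plan(num_layers, num_fragments, sync_every, total_steps):
    """Toy schedule for Streaming DiLoCo-style synchronization: layers are
    split into contiguous fragments, and each fragment is sent on its own
    staggered offset instead of everything being sent at once."""
    per_fragment = num_layers // num_fragments
    offset = sync_every // num_fragments  # stagger fragments across the interval
    for step in range(total_steps):
        for f in range(num_fragments):
            if step % sync_every == f * offset:
                layers = list(range(f * per_fragment, (f + 1) * per_fragment))
                yield step, f, layers

# Example: 12 layers in 3 fragments, each fragment synchronized every 30 steps
for step, frag, layers in streaming_sync_plan(12, 3, 30, 90):
    print(f"step {step:2d}: send fragment {frag} (layers {layers})")
```

Each fragment still synchronizes at the same frequency; the staggering simply spreads the traffic out so that, as in Douillard's video analogy, work and download proceed at the same time.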

AI development platform Prime Intellect implemented a variant of the DiLoCo algorithm as a vital component of its 10-billion-parameter INTELLECT-1 model, trained across five countries spanning three continents. Upping the ante, 0G Labs, makers of a decentralized AI operating system, adapted DiLoCo to train a 107-billion-parameter foundation model over a network of segregated clusters with limited bandwidth. Meanwhile, the popular open-source deep learning framework PyTorch has included DiLoCo in its repository of fault-tolerance techniques.

    “A lot of engineering has been done by the community to take our DiLoCo paper and integrate it in a system learning over consumer-grade internet,” Douillard says. “I’m very excited to see my research being useful.”

    A more energy-efficient way to train AI

    With hardware and software enhancements in place, decentralized AI training is primed to help solve AI’s energy problem. This approach offers the option of training models “in a cheaper, more resource-efficient, more energy-efficient way,” says MIT CSAIL’s Kagal.

Douillard acknowledges that training methods like DiLoCo "are arguably more complex," but notes that "they provide an interesting tradeoff of system efficiency." For instance, you can now use data centers in far-apart locations without needing to build ultrafast links between them. Douillard adds that fault tolerance is baked in because "the blast radius of a chip failing is limited to its island of compute."

    Even better, companies can take advantage of existing underutilized processing capacity rather than continuously building new energy-hungry data centers. Betting big on such an opportunity, Akash created its Starcluster program. One of the program’s aims involves tapping into solar-powered homes and employing the desktops and laptops within them to train AI models. “We want to convert your home into a fully functional data center,” Osuri says.

    Osuri acknowledges that participating in Starcluster will not be trivial. Beyond solar panels and devices equipped with consumer-grade GPUs, participants would also need to invest in batteries for backup power and redundant internet to prevent downtime. The Starcluster program is figuring out ways to package all these aspects together and make it easier for homeowners, including collaborating with industry partners to subsidize battery costs.

    Backend work is already underway to enable homes to participate as providers in the Akash Network, and the team hopes to reach its target by 2027. The Starcluster program also envisions expanding into other solar-powered locations, such as schools and local community sites.

    Decentralized AI training holds much promise to steer AI toward a more environmentally sustainable future. For Osuri, such potential lies in moving AI “to where the energy is instead of moving the energy to where AI is.”
