A Digital Twin of the Earth

A few months ago I wrote a long-form post on the potential impact of AI on how we communicate. One of the themes I explored was how AI can increase the speed and efficiency of video transmission: neural networks model the underlying system and assist with frame interpolation, while generative AI super-samples or upscales the images.
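
For a rough sense of how that pattern works, here is a minimal Python sketch. The `interpolate_frame` and `upscale` functions are toy NumPy stand-ins I've made up for illustration, not a real codec or model: the sender transmits fewer, smaller frames, and the receiver reconstructs the rest.

```python
import numpy as np

def interpolate_frame(frame_a: np.ndarray, frame_b: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Stand-in for a learned frame-interpolation model: a simple linear blend
    between two transmitted frames to reconstruct a skipped one."""
    return (1 - t) * frame_a + t * frame_b

def upscale(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Stand-in for a generative super-sampler: nearest-neighbour upsampling.
    A real model would generate plausible detail rather than repeat pixels."""
    return np.kron(frame, np.ones((factor, factor)))

# The sender transmits every other frame at half resolution; the receiver
# reconstructs the missing frame and upscales all three.
sent_a = np.random.rand(180, 320)            # low-res frame at time t
sent_b = np.random.rand(180, 320)            # low-res frame at time t+2
middle = interpolate_frame(sent_a, sent_b)   # reconstruct frame at t+1
full_res = [upscale(f) for f in (sent_a, middle, sent_b)]
print([f.shape for f in full_res])           # three (360, 640) frames from two small ones
```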

In the recent NVIDIA keynote, Jensen Huang shared how a short-to-medium range weather forecasting model called FourCastNet (based on a Neural Network) was combined with a Generative AI model called CorrDiff to super-sample predictions, performing weather simulations 1000x faster with 3000x less energy than conventional modelling. Jensen went on to share that while the CorrDiff model is currently trained on weather patterns around the region of Taiwan, in the future it will be expanded to other regions and become part of the Earth-2 Inference service, a collaboration between NVIDIA and The Weather Company. He also noted that extreme weather conditions cost the US $150 billion per year through direct impacts such as infrastructure damage, worker injuries and agricultural losses. Earth-2 Inference has the potential to save lives and protect property by giving people and authorities more notice of the impact of adverse weather conditions.
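
Conceptually, the pipeline Jensen described looks something like the sketch below: a fast neural surrogate rolls the forecast forward on a coarse grid, then a generative model super-samples it onto a finer regional grid. This is a toy Python illustration with made-up stand-in functions, not NVIDIA's actual FourCastNet or CorrDiff code.

```python
import numpy as np

def coarse_forecast(state: np.ndarray, steps: int, seed: int = 0) -> np.ndarray:
    """Stand-in for a fast neural surrogate like FourCastNet: rolls a coarse
    global state forward several timesteps. Here each step just applies a small
    random perturbation so the example stays self-contained."""
    rng = np.random.default_rng(seed)
    out = state.copy()
    for _ in range(steps):
        out = out + 0.1 * rng.standard_normal(out.shape)
    return out

def generative_downscale(coarse: np.ndarray, factor: int, seed: int = 1) -> np.ndarray:
    """Stand-in for a generative super-sampler like CorrDiff: maps the coarse
    grid onto a finer one and injects fine-scale detail. A real diffusion model
    would generate physically plausible structure; here it is just noise."""
    rng = np.random.default_rng(seed)
    fine = np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)
    return fine + 0.02 * rng.standard_normal(fine.shape)

# Toy 32x32 "global" grid -> 6 forecast steps -> 4x regional super-sampling.
initial_state = np.zeros((32, 32))
forecast = coarse_forecast(initial_state, steps=6)
regional = generative_downscale(forecast, factor=4)
print(forecast.shape, "->", regional.shape)   # (32, 32) -> (128, 128)
```

The claimed speed and energy gains come from the fact that both stages are neural inference passes rather than a full numerical weather simulation.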

While Generative AI is stealing most of the limelight in the media, it's not the only element of the AI stack delivering value. The greatest gains in speed and efficiency come from combining Generative AI with Neural Networks trained to model the underlying system. I believe this combination will become increasingly pervasive as organisations look to get the most value from AI. Digital Twins assisted by AI are now moving to a planetary scale.