NEDTalks: How Can Artificial Intelligence Contribute to NOAA’s Mission?

Hi, my name is David Hall. I'm a senior solution architect for NVIDIA, and I'm here to speak with you today about how artificial intelligence can benefit NOAA's mission. I'll cover this in a number of sections. First, I'll introduce what's so impressive about artificial intelligence these days and why people are talking about it so much. I'll describe a little bit about how it works at a high level, and then I'll tell you why it's so important to use GPUs and why the two are so closely connected. I'll go on to give lots of examples of how we can apply this to things that NOAA cares about, and then we'll take a deeper dive into how we detect tropical storms in some of the projects we're working on together with NOAA.

So, let's start by talking about why AI has done so many things that interest people lately, and why there's so much buzz and excitement about it. Automation has advanced rapidly with the recent developments in artificial intelligence, with particularly strong showings in the areas of strategy, computer vision, machine translation, and sample generation. One particularly impressive example is the defeat of the world champion Go player by the program AlphaGo. Go was considered a game that was impossible for machines to win, because there are so many possible moves to explore that a brute-force search fails entirely. People didn't expect machines to beat the world's champion any time in the near future, but rather 10, 20, or 30 years from now, because machines simply wouldn't be fast enough. Then along came the recent advances in deep learning and artificial intelligence, which found a more intelligent way to play the game rather than a faster one, and it was able to beat the world's best minds at this game. It also went on to defeat not just the world's best humans but the world's best computer programs in chess and in shogi (Japanese chess) as well. So, in addition to playing perfect-information games like Go, AI has recently shown very good results in playing imperfect-information games like poker, where you don't know what the other player has in their hand, and even real-time-strategy esports like Dota 2 and StarCraft II, which are very long games with very complex strategy played against very talented humans. It has defeated them all across the board.

Another area where AI has proven very strong is computer vision. You can use artificial intelligence to automatically detect objects in video. You can detect people's faces and do automatic face recognition, with on-screen labels identifying each person. You can do automatic pose estimation: as I move my arms around, the AI can tell where my arms and legs are. And when you combine these capabilities, you can even work toward autonomous driving, which is still in the works but getting very good.

Another field this has transformed is machine translation, where you can go from text to speech, from speech to text, and directly between languages without an intermediate step, all using artificial intelligence at a very high level of quality. With these technologies in place, you can create natural user interfaces, such as Google Now, Alexa, and Siri, which allow you to just talk to the computer and have it return the responses you're expecting.

Another thing that's been really impressive lately is sample generation.
Artificial intelligence is able to generate all kinds of convincing new samples of images, video, music, and text. In the picture at the upper left, we see images of people, but none of these people actually exist. Essentially, the system has taken a set of photographs of people, learned their statistics, and then created new people who could hypothetically exist but do not. You can also use it to animate video. Here, we have a comedian essentially using President Obama as a puppet, making him say whatever the comedian wants him to say. In this example you can still see some anomalies, but in the future it will be seamless, and you won't be able to tell real video from fake. At the bottom, they've generated an entire concert in the style of Wolfgang Mozart. And here, just by inputting a few lines of text, you can generate the entire remainder of an article covering the subject you've asked for, simply by predicting which words should come next. So, there's some really amazing work in sample generation, with both good and bad implications.

The next question is: how are these breakthroughs accomplished? Pretty much all of the examples I've shown were achieved through supervised deep learning or deep reinforcement learning, which are two closely related technologies. There are many different types of artificial intelligence, and they fall along an automation spectrum, with expert systems and classical machine learning being somewhat more manual, and deep learning and reinforcement learning being the most autonomous. Their power comes from their ability to do things on their own.

So, what is supervised deep learning? Is it something really fancy? Is it an intelligent 'Terminator' robot? No, it's just a way of creating functions that map your inputs to your outputs: you provide example input-output pairs, and the computer automatically finds a function that connects the dots. If that sounds a lot like curve fitting to you, that's good, because that's pretty much all it is. Here, we see an example of low-dimensional curve fitting. I've got a Taylor series across the top; I draw in some dots, and then I let the computer minimize a loss function in order to find the best fit to this curve. There we go: we've fit the examples we've given it, and we've got a curve that generalizes to new values. That's what curve fitting gives you; it allows you to make new predictions. If you put in new x values, it predicts new y values.
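To make the curve-fitting picture concrete, here is a minimal sketch of that idea (not code from the talk): fitting a low-order polynomial, a truncated Taylor series, to a handful of points by gradient descent on a mean-squared-error loss. The data points and learning rate are made up for illustration.

```python
import numpy as np

# A few (x, y) example points to fit; made-up values for illustration.
x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
y = np.array([0.8, 0.2, -0.1, 0.3, 0.9])

# Model: a cubic polynomial y = c0 + c1*x + c2*x^2 + c3*x^3,
# i.e. a truncated Taylor series with four adjustable parameters.
c = np.zeros(4)
powers = np.stack([x**i for i in range(4)], axis=1)  # shape (5, 4)

lr = 0.1
for step in range(5000):
    pred = powers @ c                    # current predictions
    err = pred - y                       # residuals
    grad = 2 * powers.T @ err / len(x)   # gradient of the mean-squared error
    c -= lr * grad                       # small step downhill on the loss

print("fitted coefficients:", c)
print("prediction at a new x:", np.polyval(c[::-1], 0.25))
```

Once the loss stops decreasing, the fitted polynomial passes through the dots and predicts a y for any new x, which is exactly the behavior deep learning scales up to far higher dimensions.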

Deep learning differs from ordinary curve fitting in that it operates in very high-dimensional spaces, so calling it curve fitting is a bit of a dismissive way to describe it, because it's a very powerful technique. You can input images like the ones shown here on the left, or sounds, or large three- or four-dimensional fields, and connect them to outputs on the right-hand side, which can themselves be numbers, images, sounds, whatever you like. So, it's curve fitting, but in a very, very high-dimensional space. It's able to find complex hierarchical relationships between the inputs and the outputs, and it typically adjusts millions of parameters rather than the four or five we saw in the previous example.

Deep reinforcement learning is very similar to supervised deep learning, except that instead of supplying inputs and outputs in a large corpus of training data, we don't have the outputs: we don't know what the correct answers are. Instead, we give it a reinforcement function that tells it when it's doing a good job. It interacts with an environment, and through trial and error it gets the feedback it needs to figure out whether it has the right answers. Internally, it's still training supervised deep learning networks; you add the trial-and-error interaction on top of that, and you're able to do really impressive things in the areas of strategy and planning, whereas supervised deep learning is more appropriate for vision, translation, and generation tasks.

Perhaps the best way to think of deep learning is as a new way to generate software directly from data. Software is really just a way of manipulating numbers inside the computer's memory: you give it some numbers, it does some manipulations, and it produces an answer. Numbers go in, like temperature, pressure, and moisture. Numbers come out, like the predicted probability of rain in your region. The same thing can be done with either a handwritten function or a deep learning function. They both take the same inputs and return the same outputs; the only difference is what happens in between. With a handwritten function, you break your algorithm down into a set of steps that are human-readable and human-understandable: updating momentum, energy, macrophysics. Deep learning, on the other hand, starts from a generic parameterized equation and adjusts its parameters, the weights and the biases, until it maps the inputs to the outputs in the appropriate manner. That's not as easy for people to understand, but it lets you generate code that's too complex to write in the old-fashioned sense.
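As a toy illustration of that point (not from the talk), here are two rain-probability functions with the same inputs and the same kind of output; only the middle differs. All thresholds and weights below are invented for illustration.

```python
import numpy as np

def rain_probability_handwritten(temperature, pressure, moisture):
    # Human-readable steps; the thresholds are invented for illustration.
    score = 0.0
    if moisture > 0.7:
        score += 0.5
    if pressure < 1000.0:
        score += 0.3
    if temperature > 15.0:
        score += 0.2
    return min(score, 1.0)

def rain_probability_learned(temperature, pressure, moisture, W, b):
    # Same inputs, same kind of output; the middle is just learned numbers,
    # a weighted sum pushed through a squashing nonlinearity.
    x = np.array([temperature, pressure, moisture])
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))  # value in (0, 1)

# Hypothetical weights; in practice these come out of training.
W = np.array([0.01, -0.02, 2.0])
b = 18.0

print(rain_probability_handwritten(20.0, 995.0, 0.8))   # -> 1.0
print(rain_probability_learned(20.0, 995.0, 0.8, W, b))  # -> about 0.48
```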
So, let's talk a little bit about why GPUs are so closely linked to deep learning, and why NVIDIA, the company I represent, is so interested. It all starts with the 2012 ImageNet competition, in which experts in computer vision would get together every year to present their best programs for automatically identifying a thousand different types of images, using a database of 1.2 million images. In 2012, Alex Krizhevsky, a student of Geoffrey Hinton at the University of Toronto, came along with a deep neural network, an algorithm revived from previous decades, threw it on a GPU, which made it practical to train, submitted it to the competition, and absolutely destroyed the field. The prediction error went from 26% down to about 16%, a huge improvement in accuracy, and it surprised the heck out of everyone. These green bars show how many teams were using GPUs each year: the next year it went from four groups to sixty, and pretty much everyone was using GPUs and deep learning in the following years, because it simply worked better than what the experts had achieved in forty years' worth of computer vision research, even though this guy didn't really know much of anything about computer vision (my apologies to the computer vision experts!).

In addition to doing a really good job on the ImageNet competition and working really well with deep learning, GPUs have stepped in to fill the gap in CPU performance, because CPU performance has plateaued in recent years. It used to be that you could wait a year and a half or so and the speed of the fastest computers would double; you'd just go buy a new computer and everything would run twice as fast, without changing your software. These days, CPU performance only increases by about 10% per year, so if you wait around, your software doesn't really get any faster in that magic fashion like it used to. GPUs, on the other hand, are still exhibiting this sort of performance gain, because they achieve performance in a very different way, using many thousands of parallel threads instead of one very, very fast thread. The people who design and build the world's fastest supercomputers have taken note of this, and all of the fastest supercomputers (at least as of the time of this presentation) are absolutely jam-packed with GPUs: 27,000 GPUs on Summit, a bunch more on Sierra and Piz Daint, and the fastest computers in Japan and Europe all use lots and lots of GPUs. So it makes sense to learn how to use them well.

There are a lot of different ways to take advantage of GPU parallelism: you can drop in libraries, you can add OpenACC directives to your code, or you can go right down to the full-control level and write CUDA code. But perhaps the most interesting way, particularly for me, is to use deep learning to program your GPU in an automatic fashion. Deep learning essentially converts your data into GPU-optimized code: you input your data and train a neural network, which turns it into a series of simple matrix multiplications and nonlinear function evaluations, operations that are already highly optimized on the GPU. Data in, software out; no porting of your code, no CUDA, none of the other complications you might expect. The two work amazingly well together: deep learning and GPUs!
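You can see this directly: a trained network's forward pass is nothing but matrix multiplies and elementwise nonlinearities, which frameworks dispatch to the GPU automatically. A minimal sketch, with made-up sizes and untrained random weights:

```python
import tensorflow as tf

# A tiny two-layer network: its forward pass is just matrix
# multiplications and nonlinear function evaluations, exactly the
# operations GPUs already execute in a highly optimized fashion.
W1 = tf.Variable(tf.random.normal((3, 64)))
W2 = tf.Variable(tf.random.normal((64, 1)))

@tf.function  # compiles the computation into an optimized graph
def forward(x):
    h = tf.nn.relu(tf.matmul(x, W1))     # runs as a GPU kernel if one is present
    return tf.sigmoid(tf.matmul(h, W2))  # likewise; no hand-written CUDA

x = tf.constant([[280.0, 1012.0, 0.6]])  # made-up inputs
print(forward(x))
```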
Alright! Now that we know roughly what AI can do, and a little bit about how it works, how can we use it to accomplish things that NOAA might care about? What sorts of problems can we apply it to? There's a really long list here, and I'll power through a few items. You can use it for the most straightforward thing, which is feature detection. One idea that I'm really excited about is attaching to every satellite a deep learning algorithm that runs in microseconds and automatically detects important features in the incoming data stream. You could automatically detect tropical storms and all kinds of other important weather features of interest on each of your satellites, flagging what's going on right now without having an expert look at each and every image; there are tons of images streaming in, and this can make the process a lot more efficient. Another thing you can do is train a deep neural network to automatically detect important features on the surface of the Sun. Here we see a network that I trained to detect coronal holes, and it's doing a pretty good job. You're probably also interested in detecting solar flares and coronal mass ejections, because they have a large impact on life here on Earth. You can also use it to monitor environmental change: here we see the effects of drought, and you can automatically detect these changes using artificial intelligence and also classify exactly what type of change is occurring, whether it's drought, flooding, deforestation, urbanization, melting glaciers, or sea-level rise. You can monitor the health and condition of the planet.

I've already shown that AI is very, very good at strategy, playing Go for example. Anything that you can put in the form of a game, where you have a set of actions and you want to choose those actions automatically, deep reinforcement learning can excel at. So you can use it to optimize disaster planning: how do I best evacuate a city? How do I allocate resources if there is a disaster now, or if one occurs in the future? You can develop autonomous agents that operate independently when you're not in communication with them. For example, a rover on Mars can't have commands streaming to it all the time, because there's a huge lag; it needs to know what to do when you're not talking to it. You can apply this to rovers, sensors, and automated aircraft, basically anything that you want to act in a semi-intelligent fashion on its own. You can also search for the best long-term strategies. Climate change is altering the planet, and we may reach the point where we have to do some sort of geoengineering to keep it livable. There are a lot of strategies for geoengineering, and all of them have different trade-offs: you can put mirrors in space, seed clouds, put aerosols in the atmosphere to reflect the light, capture carbon directly from the atmosphere, or seed the ocean with iron. All of these have different trade-offs and different geographic impacts. It's a very complex set of choices, and you can use artificial intelligence to explore the trade-offs and essentially play this game to find the best strategy possible.

Another really exciting thing you can do with artificial intelligence is use it to accelerate expensive models, like your weather and climate models, or any other expensive calculation you might be doing. Think of an expensive calculation that takes, say, an entire hour to compute a single value. You could just tabulate a bunch of those values ahead of time and look them up in a table later, because online, you don't want to be recomputing the same value over and over again.

But you can't possibly store all the values that you might want to tabulate; that's just impractical! What you can do is store some of them and interpolate between them, basically connecting the dots. That's fine, but it's not particularly accurate, especially in high-dimensional spaces. Here, we see a two-dimensional curve where it works; in a high-dimensional space, it generally won't. But you can use artificial intelligence to fit a high-dimensional curve to your points, so that it interpolates through them in an intelligent fashion. Then you don't need the original algorithm anymore: you can just use this super-fast lookup table to get all of your values, and it can be thousands of times as fast as the original computation. This was used in the Deep Density Displacement Model (D3M) to accelerate gravitational simulations of star and galaxy formation. Once the deep neural network was trained, it could run a simulation in literally microseconds, so they can now explore very large swathes of parameter space, whereas before it took minutes with the approximation techniques they had, or hours for the full computation. It also has accuracy comparable to or better than all the approximation techniques they used before.

Another way to use emulation (which is the label for this kind of acceleration) is to speed up the data assimilation process. Data assimilation is important for getting an accurate weather prediction: you need a very accurate initial state, so you take all the data from lots of satellites, use it appropriately to initialize your model, and then run the model forward. It's very computationally expensive: you have to iterate over, and over, and over again, using an expensive forward operator, in order to integrate each of these observations, so you can only use a very small fraction of the available data in the time you have to do the data assimilation. By speeding up the forward operator, or even the inverse process, which is what the iteration is doing, you can do data assimilation much faster, use a much larger quantity of data, get a more accurate initial condition, and therefore a more accurate weather prediction.
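Here is a minimal sketch of the emulation idea described above, with a cheap analytic function standing in for the expensive model; the names and sizes are illustrative, and this is not the D3M code. You tabulate samples offline, fit a small network through them, and then use the network as a fast, smooth lookup table.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(42)

def expensive_model(x):
    # Cheap stand-in for a computation that might take hours per value.
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1]) + 0.5 * x[:, 2] ** 2

# Offline: tabulate training samples by running the expensive model.
X = rng.uniform(-1, 1, size=(20000, 3))
y = expensive_model(X)

# A small fully connected network acts as a smooth, high-dimensional
# interpolation table through those tabulated points.
emulator = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
emulator.compile(optimizer="adam", loss="mse")
emulator.fit(X, y, epochs=10, batch_size=256, verbose=0)

# Online: evaluating the emulator is a handful of matrix multiplies,
# potentially thousands of times faster than the original model.
x_new = rng.uniform(-1, 1, size=(5, 3))
print("true:    ", expensive_model(x_new))
print("emulated:", emulator.predict(x_new, verbose=0).ravel())
```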
In addition to accelerating models, you can enhance your data, and there are lots of different ways to do that; I've picked just a few, because this list is getting rather long. Here's a really fun one: you can use in-painting to fill in missing data. Here, we see a photograph taken by someone, but there are all these things in the scene that they don't want to be there. This person intentionally damages their image by erasing some of the pixels, and then lets a trained deep learning model fill in the pixels that are most likely to belong there, given what it sees in the rest of the scene. It fills in natural-looking content, because that's what you would usually find in such a scene. You can use this same technique for other applications. The GOES-17 satellite has a cooling problem: at certain times of year, the data is corrupted because the instrument is not being cooled correctly, and you get all this damage in the images GOES-17 collects. Well, these damaged pixels can be replaced automatically using the same in-painting technique, drawing both on information from the other channels (where they are not damaged) and on the many thousands of correctly observed images taken before, whose statistics the network has learned. So you can fix these images and turn a useless image into something useful.

Another thing you can do is increase the resolution of your imagery. Here, we see a low-resolution picture of a car and a high-resolution version in which the image is very sharp: an AI super-resolution (up-res) technique has converted one into the other, filling in the missing pixels to make a good high-resolution image. The same technique, or something quite similar, was used in the paper shown here at the bottom: the authors had a sparse set of wind-speed observations across mainland China. They filled those in to make a continuous field, and then used an AI technique to automatically increase the resolution of that map, creating a very high-resolution wind map showing where the winds are strong and where they are not.

Another thing you can do is take a video shot at, say, 24 frames per second and fill in a bunch of extra frames in between to make a super-slow-motion video. This is the video that was taken, and this is the video automatically generated by a deep learning interpolation technique; it allows you to create a very high-quality, smooth video from a low-frame-rate original. Here, I've taken the same technique and applied it to some satellite observations: a loop from the GOES-15 satellite. We see 10 input frames and, on output, 110 frames, making a fairly nice smooth video by interpolating the missing frames in an intelligent fashion.

You can also accelerate and improve the physical parameterizations in your global and regional climate models and in your weather models.

You can take the existing parameterizations you have and accelerate them in the same fashion I described before. This can be applied to cloud microphysics and macrophysics parameterizations, boundary-layer turbulence, and radiative transfer models, which are particularly expensive, so expensive that they can't be run every time step. You can speed all of these up so they run faster, call them more often, and get more accuracy out of your model in that fashion. You can also create brand-new parameterizations of higher quality directly from data. Instead of building these parameterizations by hand, you can run very high-resolution simulations, as Noah Brenowitz and Chris Bretherton did at the University of Washington: they used a very high-resolution aquaplanet simulation to resolve the clouds, microphysics, and everything else going on in the model without the parameterizations, then averaged over each of these grid squares to get what would have happened at low resolution, so the parameterization can be learned directly from that data. More accuracy and more speed!

You can create more accurate time-series predictions as well. Earlier, I showed a language model where you enter a few sentences of text and it predicts the text that comes next; that is simply a time series. You can do the same thing with any other time-series data: you enter some of the data, and from previous examples, the model guesses what might happen next. This could be applied, for example, to predict the intensity of a tropical storm or its path, or any of the many other important time series you may want to predict. You can also combine the spatial prediction capabilities of deep learning with time-series prediction to do nowcasting! Nowcasting is looking at the weather just an hour or two into the future, essentially building an approximate model of how these clouds are going to turn into other clouds. Deep learning is very well suited to this; you just combine the two technologies, time-series and spatial prediction, together.
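A hedged sketch of that time-series idea, using a small LSTM with a synthetic sine-wave series standing in for real storm-intensity records; the window length, sizes, and data are all invented for illustration.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)

# Synthetic stand-in for a storm-intensity time series: given the last
# 24 values, predict the next one. Real inputs would be observed records.
t = np.arange(5000, dtype=np.float32)
series = np.sin(0.05 * t) + 0.1 * rng.standard_normal(len(t)).astype(np.float32)

window = 24
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]  # shape (samples, window, 1 feature)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=128, verbose=0)

# Feed in the most recent window to guess what happens next.
print(model.predict(series[-window:][None, :, None], verbose=0))
```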
Finally, you can use AI to make your tools themselves better: augmented reality, virtual reality, natural language interfaces. Here is a particularly fun and interesting application. A group called TabNine used a very powerful GPT-2 language model, not to predict what you're going to say next, but to predict what you're going to type next. As a person enters code, it predicts the entire set of what's likely to come next; it can suggest whole blocks, and you can just hit Tab to complete the remainder of your code.

Okay, so we've looked at many of the different use cases, and there are many, many more that I left out, simply because I don't have time to discuss them. But you can probably think of some on your own for your own particular needs. Let's take a closer look at one of these use cases, to make it a little less mysterious: how does this actually work? I've been working together with ESRL, the Earth System Research Lab, with Mark Govett and Jebb Stewart in Boulder, and I've been collaborating with Sid Boukabara, here in the College Park, Maryland area, on some of these projects: some of the ones we've talked about, and some others that I haven't. The one I'd like to focus on is how to detect extreme weather, and in particular, tropical storms.

It would be great if we could automatically detect extreme events in the atmosphere, both so we can select data for better data assimilation, and simply because people would like to be warned when bad things are coming. We would like to detect lots of interesting features, like tropical cyclones, atmospheric rivers, storm fronts, tornadoes, wildfires, and volcanic eruptions, as well as things that are hard for people to see, like the cyclogenesis of a cyclone and convection initiation, which are a little harder to detect with your eye. So, how do you go about training a neural network to detect a tropical storm? Since we're using supervised deep learning, the first thing we need to do is create a labelled data set: we need our X's and our Y's. Here, we have a water vapor field, and I've overlaid on top of it the expert labels for the named storms, so we know where the storms are within this water vapor field. You can then use those locations to extract images in the vicinity of each of those storms; these become your positive training set, the images that have storms in them. You also want a negative training set, images that don't have storms in them. Then you can classify each patch automatically: is this a storm, or is it not?
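A sketch of how that labelled patch set might be built, assuming you have the water-vapor field as a 2-D array and the expert storm locations as (row, column) pairs; the names, sizes, and random field below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(field, storm_centers, patch=64, n_negative=200):
    """Cut positive patches around expert-labelled storm centers and
    random negative patches elsewhere. `field` is a 2-D water-vapor
    array; `storm_centers` is a list of (row, col) labels."""
    half = patch // 2
    X, y = [], []
    for (r, c) in storm_centers:                      # positive examples
        if half <= r < field.shape[0] - half and half <= c < field.shape[1] - half:
            X.append(field[r - half:r + half, c - half:c + half])
            y.append(1)
    while sum(label == 0 for label in y) < n_negative:  # negative examples
        r = int(rng.integers(half, field.shape[0] - half))
        c = int(rng.integers(half, field.shape[1] - half))
        # Keep it only if it is far from every labelled storm.
        if all(abs(r - sr) > patch or abs(c - sc) > patch
               for (sr, sc) in storm_centers):
            X.append(field[r - half:r + half, c - half:c + half])
            y.append(0)
    return np.stack(X), np.array(y)

# Hypothetical inputs: a random field and two made-up storm locations.
field = rng.standard_normal((512, 512))
X, y = extract_patches(field, [(100, 200), (300, 400)])
print(X.shape, y.sum(), "positive patches")
```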

You input your storm images on the left, and you output the appropriate labels on the right: zero means no storm; one means there is a storm. Then you try to solve for an equation that connects these dots in an appropriate manner: you adjust the weights, the W's and b's, until the equation does a really good job of mapping the X's to the Y's. People don't usually look at it in that fashion; they usually represent a neural network by a diagram like this, where the nodes, called neurons, are where the activation functions are computed, and the weights are represented by these green lines, which are strengthened or weakened to adjust your function. The inputs are on the left side; the outputs are on the right-hand side.

Let's zoom in a little bit and see how this works. You start out with the weights initialized to completely random numbers, usually in the range of -1 to 1. You input your values on the left-hand side and then fire off your neural network, and it makes a prediction. It says, "I give this a 50/50 chance of being a storm; I have no idea." It gets the answer wrong, because it's just been initialized with random, nonsensical weights. Then you compare with the right answer: you compute a loss function, which is basically the distance between the two, how close you are to the right answer. You see how wrong you are and propagate that error backward through your network, attributing it to each of the neurons and weights that produced the incorrect values, and you adjust them by a small amount, up or down, to get a slightly better answer. If you do this over and over again, many thousands or millions of times, eventually (if you're lucky!) your neural network will converge, and you'll have a function that maps all of your images to your output values.

What I've just described is essentially a search through a loss-function landscape: you have a bunch of parameters, you start from a random initial point, and you descend the surface of that landscape to find an answer with a lower loss, a lower error. It's basically a search procedure in which you're searching for the correct function. Here, we see just two parameters, so you can actually plot it, but remember that there may be a hundred thousand or even a million parameters being searched. There are a number of different optimizers you can use to take these steps; some use momentum and other clever techniques for finding the minimum. I use the Adam optimizer (adaptive moment estimation), and most of the time it works really well. Other possibilities are RMSProp (root-mean-square propagation), stochastic gradient descent, and variants with momentum.

Okay, this might all still sound a little bit mysterious, so let me make it really clear how this works. This is what you actually do with your code; this is a minimal functional example. You start by generating some data, or loading your images from files. You define your model, which in this case has a few dense, fully connected layers. You choose your loss function. You choose your optimizer. You compile your model, and then you train it, which is the part that takes a really long time: it goes for 20 or 30 epochs through all the data, trying out all the images while looking for the minimum of that loss function. The loss function is the error measure we've chosen (binary cross-entropy), and the batch size tells it to look at 128 images before taking each little step. Then, at the end, you evaluate how good a job it did. With these same twenty or so lines of code, you can generate very, very powerful pieces of software: give it a different set of data, and you get a different piece of software out. A tropical storm detector, a sunspot detector... This is obviously an oversimplification, and we use more sophisticated models than this, but this is basically all there is to it, when it works.
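The slide shows roughly the following. This is a reconstruction along the lines just described (dense layers, binary cross-entropy, a batch size of 128), with random stand-in data in place of the real image patches, so treat the details as illustrative rather than the exact code from the talk.

```python
import numpy as np
import tensorflow as tf

# Stand-in data: in the real application these are image patches with
# storm / no-storm labels; here they are random for illustration.
X_train = np.random.rand(2000, 64 * 64)
y_train = np.random.randint(0, 2, size=2000)
X_test = np.random.rand(500, 64 * 64)
y_test = np.random.randint(0, 2, size=500)

# Define the model: a few dense, fully connected layers.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(64 * 64,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(storm)
])

# Choose the loss function and the optimizer, then compile.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Train: the slow part. 20 epochs, looking at 128 images per step.
model.fit(X_train, y_train, epochs=20, batch_size=128)

# Evaluate how good a job it did on held-out data.
loss, acc = model.evaluate(X_test, y_test)
print("test accuracy:", acc)
```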
So, once you've trained a tropical storm detector, you then apply it to each and every set of pixels in your image to ask: is there a tropical storm at this point? You could just slide this window across your image; wherever it finds tropical storms, those pixels light up, and the others remain dark. You can create a mask in that fashion, segmenting your image into storm and no-storm pixels. But this is very inefficient, sending these image patches through one at a time. It's more efficient to use something called a convolutional neural network, which does the same thing, but in parallel. A convolution is a very small matrix (like this one): you enter some numbers and apply it everywhere within your image, and it performs some transformation of your image. It can detect edges, or it can do the things we've just shown, like detecting tropical storms, if you have the right numbers in there! Instead of choosing the numbers by hand, a convolutional neural network learns these sets of numbers; they are the weights that it adjusts. So, we go back to our picture and replace our fully connected network with convolutional filters. Here, there are three filters per layer, and these are the things that are learned. Now, the output we train it on is a set of pixels: white shows where I would like it to tell me there's a tropical storm; black shows where there isn't one. And this shows the output it has learned so far. Then, you can combine these convolutional neural networks in interesting ways to create advanced structured neural networks. Here, we see something called a U-Net: a U-shaped convolutional neural network where each of these blue layers is a convolution, and the image is downscaled over and over again so that features can be found at multiple resolutions; then everything is combined back together with skip connections. So, this is the mathematical model we're now using in place of that fully connected network. It's a network I use quite often: very simple, but very powerful, and it can do many things.
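For concreteness, here is a compact sketch of a U-Net-style network in the same framework, with far fewer levels and filters than a production model; the network actually used in the project is more elaborate.

```python
import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(size=128):
    """A small U-Net: convolutions, downsampling to see features at
    multiple resolutions, then upsampling with skip connections."""
    inp = layers.Input((size, size, 1))            # e.g. a water-vapor field

    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)                 # 1/2 resolution
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)                 # 1/4 resolution

    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)

    u2 = layers.UpSampling2D()(b)
    u2 = layers.Concatenate()([u2, c2])            # skip connection
    c3 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)
    u1 = layers.UpSampling2D()(c3)
    u1 = layers.Concatenate()([u1, c1])            # skip connection
    c4 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)

    # One value per pixel: probability that the pixel is part of a storm.
    out = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return tf.keras.Model(inp, out)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```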

Working with the Earth System Research Lab (ESRL) group in Boulder, we applied this U-Net and trained it to detect tropical storms in the water vapor field. We input the water vapor field; here are the ground-truth points it was trained on, at the top, and here is the predicted segmentation at the bottom, where it thinks it found tropical storms. You can see that it's doing a pretty good job. What's interesting about this application is that the experts who created the original labels used many, many different variables: temperature, pressure, wind fields, local observations. All the neural network needs, now that it's been trained, is the water vapor field. Without all those other fields, it can predict whether there's a tropical storm there or not. This worked well enough that they applied it to satellite observational data as well. Here, they've used the GOES geostationary satellite: here are the observations of upper-tropospheric water vapor, here are the correct answers, and here are the segmentations they obtained. Again, it's doing quite a good job of automatically detecting tropical storms. A very similar task was performed by Prabhat and others at Lawrence Berkeley National Lab: they automatically detected both tropical storms and atmospheric rivers. Once they had trained their neural network, they scaled it up to the entire Summit supercomputer, using all 27,000 of its GPUs, and in doing so they won the Gordon Bell Prize for supercomputing, because they were able to run at roughly an exaflop of performance. They were also able to power through a thousand years of climate-model data in only a few hours. Once you train these networks, given enough compute, you can look at all of your data rather than just a subset.

Alright, I've gone over a lot of things! Allow me to sum up and revisit some of the most important points. Artificial intelligence, particularly the rebirth of deep learning and reinforcement learning, can do some amazing things and has produced some really impressive results in the last few years; the most impressive things, I'm sure, are still to come. It is essentially nothing more mysterious than curve fitting, but in a very high-dimensional space: it allows you to connect your inputs to your outputs automatically. The result is best thought of as a piece of software, a function, generated directly from data, which runs in an optimized fashion on a GPU. The GPU makes it practical and fast, at much lower power; you would have to use many, many CPUs to do the same thing. Finally, we can apply these techniques to a really wide spectrum of problems at NOAA. I gave a few examples, but there are many more I didn't have time to discuss; the field is wide open, and there's an almost unlimited number of things you can do with this. Thank you for inviting me here to talk with you today. If you would like to contact me, my email address is [email protected]. Thanks for your attention.