VIDEO_ID: izVyptLrkYA URL: https://www.youtube.com/watch?v=izVyptLrkYA TIMESTAMP_FLAGGED: 132 LANGUAGE: en SNIPPET_COUNT: 231
================================================================================

So this week a new official skill was launched for the Gemini API, and it solves one of the most time-consuming problems we've all been dealing with as we start building AI applications or even use AI agents. Let me explain what I mean.

The Gemini ecosystem has been moving really fast. We went all the way from Gemini 1.5 to 2.0 to 2.5 to now Gemini 3. In the same time, the SDK also got completely rewritten, all in about 12 months or so. So when you use your favorite vibe-coding tools like Cursor, Claude Code, or even Antigravity, the agent may have stale information about the Gemini platform. And this is where this skill is super, super interesting: Google packaged the current state of the entire Gemini API (the current models, the current SDKs across Python, JavaScript, Go, and Java) plus a live documentation index into one official skill that any coding agent can now use and refer to.

Without this skill, here's what happens. Your agent defaults to the old, deprecated google-generativeai SDK. It picks legacy models like Gemini 2.0 Flash. It doesn't know about Gemini 3's new capabilities, like native image generation with search grounding. And you end up with code that looks right but is already outdated the moment it's written.

Let me show you exactly how this works, how you can install it, and, most importantly, a quick real demo so you can see the difference with and without the skill. As always, opinions are my own and do not belong to my employer. So with that, let's get into it.

All right, so in order to show how this skill is really valuable when you're building an AI application, I'm going to build one together with you.
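To make that "stale agent" failure mode concrete, here is a minimal Python sketch contrasting the two call shapes: the deprecated google-generativeai package a stale agent reaches for versus the current google-genai SDK the skill points to. The model IDs are taken from the video where possible; `gemini-3-pro-preview` is an assumed current ID, and the third-party imports are kept inside the functions so the sketch loads without either package installed.

```python
# Contrast sketch (hedged): legacy vs. current Gemini SDK call shapes.
# Model IDs and call shapes should be verified against current docs.

LEGACY_MODELS = {"gemini-1.5-pro", "gemini-1.5-flash", "gemini-2.0-flash"}

def is_legacy_model(name: str) -> bool:
    # Pure helper: models the skill would flag as outdated choices.
    return name in LEGACY_MODELS

def legacy_call(prompt: str) -> str:
    # What a stale agent tends to write: the deprecated SDK package.
    import google.generativeai as genai            # deprecated SDK
    model = genai.GenerativeModel("gemini-2.0-flash")
    return model.generate_content(prompt).text

def current_call(prompt: str) -> str:
    # What the skill steers the agent to: the rewritten google-genai SDK.
    from google import genai                       # current SDK
    client = genai.Client()                        # reads GEMINI_API_KEY
    resp = client.models.generate_content(
        model="gemini-3-pro-preview",              # assumed current model ID
        contents=prompt,
    )
    return resp.text
```

The point is not the one-line difference per call but that the package name, client construction, and model ID all changed within a year, which is exactly the kind of drift a coding agent's training data misses.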
So I'm in Antigravity, and I'm going to build a weather application where you can select the name of a city from a drop-down, and the application will create an infographic about that city with real-time weather information. I've asked Antigravity to give me an implementation plan.

Now, I know that the latest Gemini 3 Nano Banana Pro model has real-time Google Search grounding built right into it, so it should be able to do this without calling any external weather API. But let's see what Antigravity actually does. Right now it's creating the implementation plan we asked for, so we'll wait a few seconds to read it. And there we have it.

All right, let's look at the implementation plan it has given me. Number one, it is using the right SDK, which is fantastic. Keep in mind that Antigravity also has access to Google Search, so it can obviously search for the latest information. But here it's using the Gemini 2.0 Flash model for orchestration and data fetching, and a different model for image generation. So it's using two different models, and it is also going to use the Google Search tool. It's not leveraging a weather API, which is good, but what I really wanted was a single API call. It has actually addressed this: it says that to satisfy the single-API-call requirement as closely as possible, it will use the SDK's tool-calling capability, but technically text and image generation are handled by different model endpoints. So this is where it falls a little short.

And this is where I want to talk about what the Gemini API skill actually gives you. It provides access to the latest information about all the current Gemini models.
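For contrast, the two-model "bridge" plan the stale agent produced can be sketched roughly like this: one call to a text model with Google Search to fetch the weather, then a second call to a separate image model. The call shapes follow the google-genai SDK, but the image model ID here is an illustrative assumption, not something named in the video, and the SDK imports are kept inside the function so the sketch loads without the package installed.

```python
# Hedged sketch of the two-step "text-to-image bridge" the stale plan
# proposed. Exact model IDs are assumptions; verify against current docs.

def weather_summary_prompt(city: str) -> str:
    # Pure helper for the first (text) call.
    return f"Search the web for the current weather in {city} and summarize it."

def two_step_infographic(city: str, api_key: str) -> bytes:
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=api_key)
    # Step 1: orchestration and data fetch with a text model plus search.
    weather = client.models.generate_content(
        model="gemini-2.0-flash",   # the legacy model the stale plan picked
        contents=weather_summary_prompt(city),
        config=types.GenerateContentConfig(
            tools=[types.Tool(google_search=types.GoogleSearch())],
        ),
    ).text
    # Step 2: feed that text into a separate image-generation call.
    image_resp = client.models.generate_content(
        model="gemini-2.0-flash-preview-image-generation",  # assumed image model
        contents=f"Create a weather infographic for {city}: {weather}",
        config=types.GenerateContentConfig(
            response_modalities=["TEXT", "IMAGE"],
        ),
    )
    for part in image_resp.candidates[0].content.parts:
        if part.inline_data is not None:
            return part.inline_data.data
    raise RuntimeError("no image returned")
```

Two round trips, two model endpoints, and the weather text has to be re-serialized into the image prompt: that is the bridge the single-call approach later in the video removes.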
The assumption is that Google will keep updating the skill as they update the models themselves. It also tells the agent which models are legacy or deprecated and should not be used: if you're generating code against one of those models, your knowledge is outdated. It tells you which SDK is the latest, and it warns against using the old, legacy SDK. So you can see that without the skill, the agent is not using the latest model. What we'll do now is install the skill, run the same prompt, and see the difference.

All right, to install the skill I'm going to open the terminal inside Antigravity, so there's no need to go anywhere outside. Here I'm going to type this particular command, which is "npm skills at google gemini". Let me expand this so you can see. You can see that this skill is available for all types of agents. In this case we'll select Antigravity, then select "copy to all agents" with a symlink, and proceed with the installation. The installation is now complete, and you can see that it has installed the skill in this particular folder.

Okay, now that we have the skill installed, I'm going to run the same exact prompt in a new window. I'll also add "use the Gemini API dev skill" so that it knows what it needs to do. Now it's going to write a new implementation plan for us, so I'll wait on that, and then we'll compare and contrast. You can see it is starting to analyze the skill and understand the key models and so on. We'll come back and see the new implementation plan it provides.

All right, it has generated the implementation plan, and there you go: you can immediately see the difference.
So the SDK is the latest one, and the model it's using is Gemini 3 Pro Image Preview. The weather fetching is handled by Gemini using the Google Search tool directly within the image generation call for real-time data. This is what I was really expecting: a true single API call. The plan says it will leverage the multimodal nature of Gemini 3 Pro Image, unlike previous models that required a text-to-image bridge, and it mentions search grounding, long context, and 4K rendering. So this is completely different, and you've just seen the implementation plans before and after the skill.

I'm going to say "proceed with building the application" so we can also see what this application really looks like. You've already seen the meat of the demo, which is the plan before and after installing the skill; all we're looking at now is how the application turns out. We'll give it a few seconds and then run the end output, so we can see how Nano Banana Pro is able to use real-time information from Google Search while creating the images.

All right, this is the application it was able to build. Here I provide my API key, and here I have a drop-down of different cities to select from. Let's say I select one of my favorite cities, Tokyo, and click on "generate infographic". What it should do now is go and find the latest weather. I'm going to look up Tokyo's weather right now, and you can see it is 13°C or 56°F. So let's see what the app comes back with. It's still generating, so we'll give it a few seconds.

All right, this is what it was able to generate, with real-time weather data: 13°C / 56°F.
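The single-call pattern the new plan describes can be sketched like this: one `generate_content` request to Gemini 3 Pro Image Preview with the Google Search tool enabled, so the model grounds the infographic in live weather data itself. The model ID comes from the plan shown in the video; the SDK shapes (`types.Tool`, `GoogleSearch`, `response_modalities`) follow the google-genai SDK and should be verified against the current docs. The import is kept inside the function so the sketch loads without the SDK installed.

```python
# Hedged sketch of the single-call approach: search grounding and image
# generation in one request. Treat exact names as assumptions.

def build_prompt(city: str) -> str:
    # Pure helper: the instruction sent alongside the search tool.
    return (
        f"Look up the current weather in {city} and generate an "
        f"infographic showing the temperature, conditions, and city name."
    )

def generate_infographic(city: str, api_key: str) -> bytes:
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=api_key)
    resp = client.models.generate_content(
        model="gemini-3-pro-image-preview",   # model named in the plan
        contents=build_prompt(city),
        config=types.GenerateContentConfig(
            tools=[types.Tool(google_search=types.GoogleSearch())],
            response_modalities=["TEXT", "IMAGE"],  # ask for an image back
        ),
    )
    # Return the first inline image part, if any.
    for part in resp.candidates[0].content.parts:
        if part.inline_data is not None:
            return part.inline_data.data
    raise RuntimeError("no image returned")
```

Compared with the two-step bridge, there is one round trip and no hand-off of weather text between models; the grounding happens inside the image-generation call itself.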
If I go back to the live weather information over here, you can see exactly the same thing. So this is now possible directly as part of the image, because the model is able to get real-time information from Google Search data and create an image automatically. This was not possible before: you would have had to have a text model fetch that information and then embed it into a prompt for a separate image model. Now just one model is able to do that.

So the model itself is not new, but the way you can build an application leveraging the latest information, without having to provide all of that detail yourself, is what this particular skill helps with. And that is what I wanted to show you today.

I hope this video was helpful. Please let me know in the comment section if you have any questions. If you're new to the channel, please hit that subscribe button, and if you're an existing subscriber and you liked the video, please hit that like button as well. Thank you once again for your time, thank you for watching, and I will see you in the next one.