Google has rolled out a new update to the Gemini app, called ‘Personal Intelligence’, for its Nano Banana 2 image generation model. With this update, Gemini can pull context directly from services like Google Photos to create images that actually reflect your real life. You no longer need to upload reference images or spell everything out in detail: a simple prompt like “design my dream house” can now generate results based on your taste and lifestyle.

This is powered by Nano Banana 2, Google’s latest image model, which works with Personal Intelligence to fill in gaps using data from across your Google account. The goal is to reduce the effort needed to write prompts while making the final output feel more relevant and useful.
Another standout feature is the ability to include real people in generated images. By linking Google Photos, Gemini can recognise tagged photos of friends and family and use them as references. You can also tweak results, swap images, or try different styles like watercolour or clay animation.
Under the hood, the system also uses metadata such as photo labels and activity context to identify people and preferences, helping the model generate more accurate and consistent visuals while keeping output quality high and image generation fast.
Google says privacy is still a priority here. Your personal photos are not used to train AI models, and the feature is completely opt-in.
The rollout has already started for Gemini AI Plus, Pro, and Ultra users in select regions, with a wider release expected soon. The move shows Google pushing towards more context-aware AI tools whose outputs feel less generic and far more personal.
This update could also make everyday creative tasks faster, especially for casual users who want quick results without learning prompt tricks. It lowers the barrier to entry, making advanced image generation feel more accessible and practical.
