Antigravity for Science

Google’s new code editor, Antigravity, is an AI-assisted IDE (Integrated Development Environment) with native agentic and generative AI capabilities built into the platform to accelerate computational and data-driven development and scientific research. Antigravity is built on top of the Visual Studio Code editor for a familiar but accelerated user experience. The embedded agentic assistance lets users spend more time solving research problems and less time debugging problematic code. This workshop is designed for researchers of all coding skill levels. We will walk through simple, research-grounded examples to help you apply the Antigravity framework immediately to your own projects.

Learn how to use Antigravity to:

Watch the recording!

Watch Rahul Sundar present this content live: https://youtu.be/34dS94ONuDM

This workshop is part of the GDG AI for Science workshop series. Join the community (in Australia, Korea, Japan, or India) for talks, events, collaborations and more.

Target Audience

This workshop is aimed at anyone who writes code for scientific research, including:

  • Research students (undergraduate, Masters, PhD, etc.)
  • Researchers requiring computational tools
  • Data Scientists and professionals supporting scientific projects

Requirements

Before the session please download and install Antigravity for your system by following the instructions here: https://antigravity.google/download

Prerequisites:

  • Basic coding knowledge will be helpful.
  • A motivation to experiment.
  • Familiarity with the Visual Studio Code editor will be helpful; otherwise, treat this session as a crash course.

Welcome to Antigravity

After installation, the first time you launch Antigravity you will answer a few basic configuration steps (all options can be left as default and changed later), then you will log in with your Google credentials. You can also change user accounts later if needed. From there you will land in the VS Code IDE and can open a folder/project and immediately start hacking!

We will run through a few different examples for how you can use Antigravity.

Example 1: First prompt

You can immediately get started with a prompt in the Agent Manager sidebar or window:

Your goal is to show if temperature has changed in Sydney, Australia.

Keep the conversation going now that it has developed more context:

Your goal is to prove this using data and code

Make sure you have the required tools (like Python) installed and that the correct environments are being used. Put comments into the generated artefacts to give the agent helpful context hints:

run the tests in conda environment oztemp

Keep the agent working with more complex tasks:

Please extend this to make an animated heatmap of Australia's Annual Mean Max Temperature
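Under the hood, the agent will typically answer the first prompt with a trend analysis. Below is a minimal, dependency-free sketch of that pattern; the data here is synthetic (a real run would pull Bureau of Meteorology observations), so it only illustrates the shape of the analysis, not an actual Sydney result.

```python
import math

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1960, 2024))
# Synthetic annual mean maxima: ~0.02 degC/year warming plus a bounded wiggle.
temps = [22.0 + 0.02 * (y - 1960) + 0.3 * math.sin(y) for y in years]

trend = ols_slope(years, temps)
print(f"Estimated trend: {trend * 10:.2f} degC per decade")
```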

Example 2: Working with your data

Whilst you can start from scratch to develop a complete workflow, you will likely already have the bones of a repo. Let’s create a new project folder with some data and get Antigravity to work with that context:

mkdir demofaces
cd demofaces
curl -L -o human-face-emotions.zip https://www.kaggle.com/api/v1/datasets/download/samithsachidanandan/human-face-emotions
unzip human-face-emotions.zip

Now give it a prompt:

Write a TensorFlow model to classify the images of facial expressions in the Data folder into 5 classes based on the sub-folder names (Angry, Fear, Happy, Sad, Surprise)
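Before prompting, it can help to verify that the dataset really has the layout the prompt describes. A small sketch of that sanity check (the Data folder name and class list are taken from the prompt above, not from an inspected copy of the Kaggle dataset):

```python
from pathlib import Path

EXPECTED_CLASSES = {"Angry", "Fear", "Happy", "Sad", "Surprise"}
IMAGE_SUFFIXES = {".jpg", ".jpeg", ".png"}

def class_counts(root):
    """Map each class sub-folder under `root` to its number of image files."""
    counts = {}
    for sub in Path(root).iterdir():
        if sub.is_dir():
            counts[sub.name] = sum(
                1 for f in sub.iterdir() if f.suffix.lower() in IMAGE_SUFFIXES
            )
    return counts

def layout_ok(root):
    """True when every expected class folder is present and non-empty."""
    counts = class_counts(root)
    return EXPECTED_CLASSES <= set(counts) and all(
        counts[c] > 0 for c in EXPECTED_CLASSES
    )
```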

Example 3: Updating existing code

You may have been given some code or pulled it from a paper. You need to refactor it to run on your data or a high performance computing cluster. Download the example script or navigate through the repo.

cd EXAMPLES/BIO

Ask about the code:

What does this code do? Can you make my code run faster on my laptop?

Now parallelise it, and ask for the methodology for running it:

Use the conda environment "bio" for any python processing
I have been given access to the NCI HPC, how do I run this on many fastq files on that machine?
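The fan-out pattern the agent is likely to generate for many FASTQ files looks roughly like this sketch. count_reads is a hypothetical stand-in for your real per-file analysis, and for CPU-bound work you would swap ThreadPoolExecutor for ProcessPoolExecutor:

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def count_reads(path):
    """Toy per-file task: a FASTQ record spans 4 lines, so reads = lines // 4."""
    with open(path) as fh:
        return str(path), sum(1 for _ in fh) // 4

def process_all(fastq_dir, workers=4):
    """Run count_reads over every *.fastq file in parallel."""
    files = sorted(Path(fastq_dir).glob("*.fastq"))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(count_reads, files))
```

On an HPC system like NCI, the same per-file function would instead be dispatched as one job (or array task) per file by the scheduler.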

Example 4: Reproduce results

Antigravity has a number of built-in tools that will work for you. In this example we will ask it to recreate a figure from a paper. You could help it along by putting the PDF or figure into a file, or make it do all the work with a prompt like:

Recreate the Fig 1 from the paper at https://www.nature.com/articles/s41586-025-09840-z

Example 5: Multi-agent orchestration

From the Agent Manager you can launch multiple agents to work on different parts of a problem concurrently. But let’s hand complete control to the agent, set it running, and go get a cup of coffee. Start a new conversation for each of the following prompts:

Prompt 1:

I want to prove or disprove the correlation between earthquakes and Lunar cycles. Write a literature review with extensive referencing on this topic.

Prompt 2:

Pull the required data and set up the project to prove or disprove the correlation between earthquakes and Lunar cycles.

Prompt 3:

Write the code to prove or disprove the correlation between earthquakes and Lunar cycles.

Prompt 4:

Write the conclusion based on the data and review about the link between Earthquakes and Lunar cycles
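To make the division of labour concrete, the code-writing agent (Prompt 3) would ultimately produce something like this correlation test. Everything here is a synthetic stand-in: a real run would use an earthquake catalogue (e.g. USGS) and computed lunar phases rather than these toy series.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

SYNODIC_DAYS = 29.53  # mean lunar synodic period
days = range(365)
phase = [(d % SYNODIC_DAYS) / SYNODIC_DAYS for d in days]
quakes = [3 + (d * 17) % 5 for d in days]  # phase-independent toy daily counts

r = pearson(phase, quakes)
```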

Example 6: Targeted literature review and summary

Perform a systematic search across ArXiv, PubMed, and Semantic Scholar for 'Non-pharmacological interventions for Neuroplasticity (2023-2026)'. Extract the N-values, p-values, and effect sizes from the top 40 relevant papers. Use Python to aggregate this data into a structured CSV. Identify and highlight 'Evidence Contradictions' where two or more papers report conflicting results for the same intervention. Generate a summary identifying the 'consensus gap' and suggest three specific hypotheses for future experimental validation.
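A sketch of the aggregation and contradiction-flagging steps, using only the standard library. The records below are invented placeholders for what the agent's search would actually extract:

```python
import csv
import io

# Hypothetical extracted records: paper ID, intervention, sample size n,
# p-value, and effect size d. Real values would come from the agent's search.
records = [
    {"paper": "A2024", "intervention": "tDCS", "n": 40, "p": 0.03, "d": 0.50},
    {"paper": "B2025", "intervention": "tDCS", "n": 55, "p": 0.04, "d": -0.40},
    {"paper": "C2024", "intervention": "exercise", "n": 80, "p": 0.01, "d": 0.60},
]

def to_csv(rows):
    """Serialize the extracted statistics to structured CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["paper", "intervention", "n", "p", "d"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def contradictions(rows):
    """Interventions whose reported effect sizes disagree in sign."""
    by_intervention = {}
    for row in rows:
        by_intervention.setdefault(row["intervention"], []).append(row["d"])
    return sorted(name for name, ds in by_intervention.items() if min(ds) < 0 < max(ds))
```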

Example 7: Grant writing assistance

Read the draft grant proposal in grant_v1.docx. Cross-reference all technical claims in the 'Methodology' section with existing literature using search. Identify any 'Innovation Claims' that are actually already solved in recent 2025 publications. Check the budget section against the proposed computational requirements (e.g., GPU hours for training the specified model) and flag if the requested funds are insufficient. Suggest three peer-reviewed citations that would strengthen the 'Prior Art' section.
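The budget check in that prompt reduces to arithmetic the agent can verify explicitly. A sketch, with a made-up GPU-hour rate (real pricing depends on the provider and hardware):

```python
def budget_check(gpu_hours, rate_per_hour, requested_funds):
    """Flag whether requested funds cover the estimated compute cost."""
    cost = gpu_hours * rate_per_hour
    return {
        "estimated_cost": cost,
        "sufficient": requested_funds >= cost,
        "shortfall": max(0.0, cost - requested_funds),
    }
```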

Example 8: Building a PINN Testbench with Antigravity

This document outlines how to use Antigravity (your agentic coding assistant) to build a robust, modular Physics Informed Neural Network (PINN) framework from scratch.

1. Defining the Core Architecture

Start by asking Antigravity to create a modular structure. A professional PINN testbench requires separation of concerns:

  • data.py: Handles geometry and point sampling (collocation, boundary, and initial conditions).
  • model.py: Defines the neural network (MLP, ResNet, etc.).
  • solver.py: The “engine” that computes PDE residuals using automatic differentiation and manages the training loop.
  • utils.py: Visualization tools for 1D/2D solutions and error metrics.

Prompt Example:

Build a modular PINN framework in TensorFlow for solving the Burgers' equation. I need separate files for data handling, model definition, and a solver that uses GradientTape for physics loss.
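Whatever scaffolding the agent produces, the heart of solver.py is the PDE residual it drives to zero at collocation points. Here is a framework-agnostic sketch of that residual for Burgers' equation, checked with finite differences against a known stationary viscous-shock solution (in the generated TensorFlow solver, the derivatives would come from GradientTape instead):

```python
import math

NU = 0.1  # viscosity

def exact(x, t):
    """Stationary viscous-shock solution of Burgers' equation: -tanh(x / 2*NU)."""
    return -math.tanh(x / (2 * NU))

def residual(u, x, t, h=1e-4):
    """Finite-difference estimate of u_t + u*u_x - NU*u_xx at (x, t)."""
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    return u_t + u(x, t) * u_x - NU * u_xx
```

The PINN's physics loss is the mean squared residual over sampled collocation points; the exact solution above gives a residual near zero, while an arbitrary function does not.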

2. Implementing Advanced Features

Once the basics are working, use Antigravity to implement state-of-the-art components.

Linear Attention ResNet

Ask Antigravity to implement a more complex architecture like a Linear Attention ResNet to handle global dependencies efficiently.

  • Layers: Multi-head or Linear Attention.
  • Input Projection: High-dimensional coordinate injection.
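The efficiency argument behind linear attention is matrix associativity: without the softmax, Q(KᵀV) needs only a d×d intermediate instead of the n×n one in (QKᵀ)V. A dependency-free sketch of that equivalence (real linear attention also applies a kernel feature map and row normalization, omitted here):

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# n = 3 tokens, d = 2 channels
Q = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
K = [[0.5, 1.0], [1.0, 0.0], [2.0, 1.0]]
V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

quadratic = matmul(matmul(Q, transpose(K)), V)  # n x n intermediate, O(n^2 d)
linear = matmul(Q, matmul(transpose(K), V))     # d x d intermediate, O(n d^2)
```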

Reliability & Logging

Request features like File Logging, TensorBoard integration, and Model Checkpointing to ensure you never lose the best performing state.

Prompt Example:

Update the solver to support model checkpointing (save best) and TensorBoard logging. Also, add a separate evaluation script that loads a model and plots pointwise error heatmaps.
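The "save best" logic is framework-agnostic; in the generated solver it would wrap model.save_weights or tf.train.Checkpoint rather than the JSON dump used in this sketch:

```python
import json
from pathlib import Path

def save_if_best(state, loss, ckpt_path, best_path):
    """Persist `state` only when `loss` beats the best loss recorded so far."""
    best_path = Path(best_path)
    best = json.loads(best_path.read_text())["loss"] if best_path.exists() else float("inf")
    if loss < best:
        Path(ckpt_path).write_text(json.dumps(state))
        best_path.write_text(json.dumps({"loss": loss}))
        return True
    return False
```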

3. Creating a Frontend Dashboard

Wrap the framework in a Streamlit UI to make it interactive. This allows:

  • Launching training runs with varying hyperparameters.
  • Monitoring live logs and CPU/resource usage.
  • Navigating and comparing past results.

Prompt Example:

Build a Streamlit dashboard that lets me launch 'train_burgers.py' in the background, monitor its logs live, and visualize the output plots from the results directory.
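A dashboard like this mostly wraps two primitives: launching the training script as a background process and reading back its log. A Streamlit-free sketch of that core (the command and log path are whatever your project uses):

```python
import subprocess
import time
from pathlib import Path

def launch(cmd, log_path):
    """Start `cmd` in the background, redirecting all output to `log_path`."""
    fh = open(log_path, "w")
    return subprocess.Popen(cmd, stdout=fh, stderr=subprocess.STDOUT)

def wait_and_read(proc, log_path, timeout=30, poll=0.1):
    """Poll until the process exits (or timeout), then return (rc, log text)."""
    deadline = time.time() + timeout
    while proc.poll() is None and time.time() < deadline:
        time.sleep(poll)
    return proc.returncode, Path(log_path).read_text()
```

A Streamlit page would call launch once, then re-read the log file on each rerun to show live progress.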

4. Key Lessons from Antigravity’s Workflow

  • Iterative Refinement: Start with a simple MLP and verify with unit tests before adding Attention or Dashboards.
  • Environment Management: Use Antigravity to set up dedicated Conda environments (conda create -n pinn ...) to avoid dependency conflicts.
  • Verification: Always ask Antigravity to create a test_pinn.py to ensure the mathematical logic (derivatives) is correct early on.
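The essential pattern for such a test_pinn.py is comparing computed derivatives against closed-form ones. In the real testbench the derivative under test would come from automatic differentiation; this sketch uses a finite-difference stand-in so the pattern is runnable anywhere:

```python
import math

def numdiff(f, x, h=1e-6):
    """Central finite-difference derivative of f at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

def check_derivatives():
    # sin' = cos exactly, so any differentiation machinery that
    # disagrees here is broken.
    for x in (0.0, 0.5, 1.3):
        assert abs(numdiff(math.sin, x) - math.cos(x)) < 1e-8
    return True
```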

Final thoughts

  • To get the best results from an agentic IDE, it can be helpful to move from instructional prompting (“do this”) to architectural prompting (“build a system that functions like this”). You don’t have to be the developer or the project manager; instead, think of yourself as the agent orchestration engineer!

  • Try using the Gemini chat interface to make your prompts more strategic, for example you can use this meta-prompt:

# Role
You are a Lead Prompt Engineer specializing in Google's Antigravity IDE and its agentic capabilities. Your goal is to write a high-fidelity, instruction-dense prompt that will force Antigravity Agents to utilize their maximum reasoning depth and agentic research tools.
# Task
Write a well-crafted and optimal prompt for input into Antigravity so the returned output perfectly answers the following query:
"[INSERT QUESTION/TOPIC HERE]"
  • Read the generated artefacts and make changes to suit your needs.

  • Be somewhat careful with the permissions you give Antigravity. It is very powerful and it can delete things! Browse through the options and think.

  • Experiment and see what works best for you - and share it with the rest of us!

  • Explore using different models for different phases of your workflow; some models are better at different tasks.