How to train AI in Godot 4 – Reinforcement learning beginner guide

This guide helps you set up reinforcement learning (RL) in Godot 4 using the godot_rl_agents plugin. Follow these steps to install dependencies, configure the project, and run an RL example.



Prerequisites

  • Godot 4 with .NET support (C# is used for the ONNX runtime).
  • Python 3 and pip.
  • The .NET SDK (for the dotnet command used below).

Setup Instructions

1. Create Project Structure

Create a project folder with the following structure:

/project_root/
├── app/                # Godot project folder
├── stable_baselines3_example.py  # RL example script

2. Download Required Files

Download the Godot RL Agents plugin (the addons/ folder) into app/, and place the stable_baselines3_example.py example script in /project_root/; both are available from the Godot RL Agents repository. Your project root should then match the structure shown above.

3. Install Python Dependencies

  1. Install uv for Python package management:

    pip install uv
    
    
  2. Initialize a Python environment in /project_root/:

    uv init
    uv add godot-rl
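
To confirm the package is importable before moving on, you can run a quick check (the exact output does not matter, only that the import succeeds):

    uv run python -c "import godot_rl; print('godot-rl is installed')"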
    
    

4. Install .NET Dependencies


  1. Create a dummy C# script inside the Godot editor:
    • In the FileSystem dock (bottom left of the Godot editor), right-click, select Create New > Script, choose C# as the language, and name it something like Dummy.cs.
  2. Verify the C# setup:
    • After creating the script, a 🔨 (hammer) icon should appear in the top right corner of the Godot editor, indicating C# support is enabled.
  3. Install the ONNX runtime for .NET:
    • Open a terminal in the app/ directory.

    • Run the following command:

      dotnet add package Microsoft.ML.OnnxRuntime --version 1.21.0
      
      

5. Enable the Plugin in Godot

  1. Open the app/ folder as a project in Godot 4.
  2. Go to Project > Project Settings > Plugins.
  3. Enable the Godot RL Agents plugin.

Implementing RL in Godot

Add an AIController2D or AIController3D node to your scene (typically as a child of the player it will control), attach a script that extends that class, and override these required functions (a fuller example follows the skeleton below):

extends AIController2D  # use AIController3D for a 3D scene

func get_obs() -> Dictionary:
    # Return observations (e.g., player position, environment state)
    return {"obs": []}

func get_reward() -> float:
    # Define the reward (e.g., +1 for reaching a goal)
    return 0.0

func get_action_space() -> Dictionary:
    # Define action space (continuous or discrete)
    return {
        "example_actions_continuous": {
            "size": 2,
            "action_type": "continuous"
        },
        "example_actions_discrete": {
            "size": 2,
            "action_type": "discrete"
        }
    }

func set_action(action) -> void:
    # Apply the action (e.g., move character based on action values)
    pass
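
As a concrete illustration, here is a minimal sketch of what these functions might look like in a simple 2D scene. It assumes the AIController2D node is a child of a CharacterBody2D player, that the scene contains a Goal node at the hypothetical path ../../Goal, and that the player script exposes a move_dir variable which it reads in _physics_process(); adapt the node paths, observations, and reward to your own game.

extends AIController2D

# Assumed scene layout: Player (CharacterBody2D) -> AIController2D (this node),
# with a Goal node that is a sibling of the player, reachable at ../../Goal.
@onready var player := get_parent() as CharacterBody2D
@onready var goal: Node2D = get_node("../../Goal")

func get_obs() -> Dictionary:
    # Observe the player's velocity and the offset to the goal,
    # roughly normalized so values stay in a small range.
    var to_goal: Vector2 = (goal.global_position - player.global_position) / 1000.0
    return {"obs": [
        player.velocity.x / 500.0,
        player.velocity.y / 500.0,
        to_goal.x,
        to_goal.y,
    ]}

func get_reward() -> float:
    # Simple shaping reward: the closer the player is to the goal, the higher the reward.
    var dist := player.global_position.distance_to(goal.global_position)
    return -dist / 1000.0

func get_action_space() -> Dictionary:
    # One continuous action with two components: desired movement on x and y.
    return {
        "move": {
            "size": 2,
            "action_type": "continuous"
        }
    }

func set_action(action) -> void:
    # The trainer sends values in roughly [-1, 1]; store them so the player's
    # _physics_process() can turn them into movement (move_dir is a variable
    # you would add to the player script yourself).
    player.move_dir = Vector2(action["move"][0], action["move"][1])

Keeping observations roughly normalized, as in the divisions above, tends to make training more stable.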


Running the RL Example

Use the provided Python script to train or test the RL model. Change the save or export path if needed (e.g., pass /yourpath/model.zip), then run the following command in /project_root/:

uv run stable_baselines3_example.py

Optional Arguments

  • --save_model_path=model.zip: Save the trained model.
  • --resume_model_path=model.zip: Load a pre-trained model.
  • --onnx_export_path=model.onnx: Export the model to ONNX format.
  • --timesteps=100_000: Stop training automatically after 100,000 steps.
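
For example, to train for 100,000 steps, save the model, and export it to ONNX in a single run (the file names here are only placeholders), the arguments can be combined:

    uv run stable_baselines3_example.py --timesteps=100_000 --save_model_path=model.zip --onnx_export_path=model.onnx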

Exporting the Model


  • Export the trained policy with --onnx_export_path=model.onnx (see the optional arguments above).
  • Ensure the .onnx file is in the same directory as the exported game executable (e.g., next to the final .exe on Windows).

Tips for Beginners

  • Start Simple: Use a basic Godot scene (e.g., a 2D platformer) to test RL.
  • Debug Observations: Ensure get_obs() returns meaningful data.

For more details, visit the Godot RL Agents repository.
