reinforce-tactics

Reinforce Tactics - 2D Turn-Based Strategy Game

A modular turn-based strategy game built with Pygame and Gymnasium for reinforcement learning.

Features

Installation

Basic Installation (GUI Mode)

pip install pygame pandas numpy

Full Installation (with RL)

pip install pygame pandas numpy gymnasium stable-baselines3[extra]

Optional (for replay video recording)

pip install opencv-python

Project Structure

strategy_game/
├── constants.py              # Game constants
├── main.py                   # GUI entry point
├── train_rl_agent.py         # RL training script
├── core/                     # Core game logic (no rendering)
│   ├── tile.py
│   ├── unit.py
│   ├── grid.py
│   └── game_state.py
├── game/                     # Game mechanics
│   ├── mechanics.py
│   └── bot.py
├── ui/                       # Pygame UI
│   ├── renderer.py
│   └── menus.py
├── rl/                       # Reinforcement learning
│   ├── gym_env.py
│   └── action_space.py
├── utils/                    # Utilities
│   └── file_io.py
└── maps/                     # Map files
    └── 1v1/

Quick Start

Play the Game (GUI)

python main.py

Train an RL Agent

# Train against bot opponent
python train_rl_agent.py train --opponent bot --total-timesteps 1000000

# Train with self-play
python train_rl_agent.py train --opponent self --total-timesteps 1000000

# Custom rewards (dense rewards for faster learning)
python train_rl_agent.py train --reward-income 0.01 --reward-units 10 --reward-structures 5
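
The CLI presumably wraps stable-baselines3; roughly the same run can be set up directly in Python (a sketch with illustrative hyperparameters):

from stable_baselines3 import PPO
from rl.gym_env import StrategyGameEnv

# MultiInputPolicy is stable-baselines3's policy for Dict observation spaces
env = StrategyGameEnv(opponent='bot')
model = PPO('MultiInputPolicy', env, verbose=1)
model.learn(total_timesteps=1_000_000)
model.save('./models/PPO_final')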

Test a Trained Agent

python train_rl_agent.py test --model-path ./models/PPO_final.zip --n-episodes 5
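
The Python equivalent (a sketch, assuming the model was trained with stable-baselines3 PPO as above):

from stable_baselines3 import PPO
from rl.gym_env import StrategyGameEnv

model = PPO.load('./models/PPO_final.zip')
env = StrategyGameEnv(opponent='bot')
obs, info = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    # deterministic=True always picks the highest-scoring action
    action, _state = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)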

Use as Gymnasium Environment

from rl.gym_env import StrategyGameEnv

# Create environment
env = StrategyGameEnv(
    map_file='maps/1v1/test_map.csv',  # or None for random
    opponent='bot',  # 'bot', 'random', or 'self'
    render_mode=None  # None, 'human', or 'rgb_array'
)

# Standard Gym API
obs, info = env.reset()
action = env.action_space.sample()
obs, reward, terminated, truncated, info = env.step(action)
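
Putting it together, a full random-agent episode looks like this:

# Roll out one episode with random actions
obs, info = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
env.close()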

Headless Mode (Fast Training)

from core.game_state import GameState
from game.bot import SimpleBot
from utils.file_io import FileIO

# Load map
map_data = FileIO.load_map('maps/1v1/test_map.csv')

# Create game without rendering
game = GameState(map_data)
bot = SimpleBot(game, player=2)

# Game loop
while not game.game_over:
    # Your agent's actions here
    # ...
    
    game.end_turn()
    
    # Bot plays
    bot.take_turn()
    game.end_turn()
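
For a quick smoke test of the headless loop, two bots can play each other (a sketch; it assumes SimpleBot accepts player=1 the same way it accepts player=2):

# Fresh game, bot vs. bot
game = GameState(map_data)
bot1 = SimpleBot(game, player=1)
bot2 = SimpleBot(game, player=2)

while not game.game_over:
    bot1.take_turn()
    game.end_turn()
    bot2.take_turn()
    game.end_turn()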

Game Rules

Units

Combat

Structures

Economy

Map Format

Maps are CSV files with tile codes:

p,p,p,b_1,h_1,b_1,p,p,p
p,p,p,p,p,p,p,p,p
p,t,p,p,p,p,p,t,p
p,p,p,p,p,p,p,p,p
p,p,p,b_2,h_2,b_2,p,p,p

Tile codes:

RL Environment Details

Observation Space

Dict observation space with:

Action Space

Multi-discrete: [action_type, unit_type, from_x, from_y, to_x, to_y]

Action types:

  1. Create unit
  2. Move
  3. Attack
  4. Paralyze
  5. Heal
  6. Cure
  7. Seize
  8. End turn
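
For example, a move order might be encoded as below. This is a sketch: it assumes the action types are zero-indexed in the order listed above (so 1 = move) and that unit_type only matters for unit creation; check rl/action_space.py for the actual mapping.

import numpy as np

# [action_type, unit_type, from_x, from_y, to_x, to_y]
# Move the unit at (3, 4) to (5, 4)
action = np.array([1, 0, 3, 4, 5, 4])
obs, reward, terminated, truncated, info = env.step(action)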

Reward Configuration

reward_config = {
    'win': 1000.0,           # Win game
    'loss': -1000.0,         # Lose game
    'income_diff': 0.0,      # Gold advantage per turn
    'unit_diff': 0.0,        # Unit count advantage
    'structure_control': 0.0,  # Structure control bonus
    'invalid_action': -10.0,   # Invalid action penalty
}

Start with sparse rewards (only win/loss), then add dense rewards if learning is slow.
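
As a sketch, assuming StrategyGameEnv accepts this dict through a reward_config keyword (check rl/gym_env.py for the actual parameter name):

from rl.gym_env import StrategyGameEnv

sparse_rewards = {
    'win': 1000.0,
    'loss': -1000.0,
    'income_diff': 0.0,
    'unit_diff': 0.0,
    'structure_control': 0.0,
    'invalid_action': -10.0,
}
env = StrategyGameEnv(opponent='bot', reward_config=sparse_rewards)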

Development

Running Tests

# Test environment
python train_rl_agent.py train --check-env --total-timesteps 1000

# Quick game test
python -c "from rl import StrategyGameEnv; env = StrategyGameEnv(); env.reset(); print('OK')"

Creating Custom Maps

  1. Create CSV file in maps/1v1/
  2. Use tile codes (see Map Format above)
  3. Ensure each player has at least 1 HQ
  4. Minimum size is 20x20 (auto-padded if smaller)
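
For example, the sample map from Map Format can be written with Python's csv module (the filename is illustrative):

import csv

rows = [
    ['p', 'p', 'p', 'b_1', 'h_1', 'b_1', 'p', 'p', 'p'],
    ['p'] * 9,
    ['p', 't', 'p', 'p', 'p', 'p', 'p', 't', 'p'],
    ['p'] * 9,
    ['p', 'p', 'p', 'b_2', 'h_2', 'b_2', 'p', 'p', 'p'],
]
with open('maps/1v1/my_map.csv', 'w', newline='') as f:
    csv.writer(f).writerows(rows)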

Extending the Game

Troubleshooting

"No maps found": Create the maps/1v1/ directory and add CSV map files.

Pygame window not appearing: Check whether you are running in a headless environment.

RL training slow: Use headless mode (render_mode=None).

Invalid actions during training: Action masking is not fully implemented yet; the agent learns to avoid invalid actions through the invalid-action penalty.

Documentation Site

This project includes a comprehensive Docusaurus-based documentation site located in the docs-site/ directory.

Running the Documentation Site Locally

# Navigate to the docs site directory
cd docs-site

# Install dependencies (first time only)
npm install

# Start the development server
npm start

The site will be available at http://localhost:3000.

Building for Production

cd docs-site
npm run build

The static files will be generated in the docs-site/build/ directory.

Deploying to GitHub Pages

cd docs-site
npm run deploy

This will build the site and deploy it to the gh-pages branch.

Configuring Google Analytics

The documentation site includes Google Analytics integration with GDPR-compliant cookie consent. To set up tracking:

  1. Open docs-site/docusaurus.config.ts
  2. Find the gtag configuration section
  3. Replace 'G-XXXXXXXXXX' with your actual Google Analytics tracking ID
  4. The tracking ID can be found in your Google Analytics account under Admin > Data Streams

Example:

gtag: {
  trackingID: 'G-YOUR-TRACKING-ID', // Replace with your actual tracking ID
  anonymizeIP: true,
},

Cookie Consent Banner: The site includes a cookie consent banner that appears on first visit. Analytics tracking is disabled by default and only enabled when users accept cookies. The user's choice is stored in localStorage.

Documentation Structure

The documentation includes:

License

MIT License - feel free to use and modify!

Contributing

Contributions welcome! Areas for improvement:

Credits

Built with:

To Do List