# Epic 2: Event Backbone & Dedicated Event Store — Jira Ticket

**LOE:** 8 story points, 32–48 hours.

**Main drivers:** This spans new GCP infrastructure/IAM, a central Laravel migration, and an end-to-end validation harness that must prove ordered delivery, DLQ routing, and least-privilege access across Pub/Sub and GCS.

**Assumptions/risks:** This assumes an existing GCP project/IaC pattern is already in place and IAM approvals are straightforward; if Terraform conventions, workload identity/security handoff, or central-vs-tenant DB patterns need to be established, the effort could rise to 13 points / 48–64 hours.

## Problem Statement

We need an immutable ledger (the Event Store) and a messaging queue to handle our NetSuite sync. To protect our existing SuiteX application from performance degradation and to avoid a maintenance-window outage, we are standing up a completely isolated, highly available database for this sync engine.

## Proposed Solution

Provision **Google Cloud Pub/Sub** for message routing, a **GCS bucket** for oversized payloads, and a brand new, dedicated **HA Cloud SQL** instance for the `event_store` table. Update Laravel to securely connect to this secondary database strictly for background worker writes.

## Technical Requirements

### Google Cloud Infrastructure (Terraform / gcloud)

- **Pub/Sub Topics:** Create `events.raw`, `events.merged`, and `events.dlq`, with message ordering enabled on the delivery subscriptions for `events.raw` and `events.merged`.
- **Dead-Letter Routing:** Configure the `events.raw` and `events.merged` subscriptions to route messages to `events.dlq` after 5 failed delivery attempts.
- **Cloud Storage (GCS):** Provision `suitex-payload-snapshots-{environment}` (Uniform access, 30-day deletion lifecycle). Environments: dev, staging, prod.
- **Dedicated Cloud SQL Instance:** Provision a new MySQL 8.x instance. Must have: High Availability (Regional), Automated Backups, and Point-in-Time Recovery.
- **Network Security:** The new Cloud SQL instance must have its Public IP explicitly disabled. It must be provisioned with a Private IP and peered to the existing SuiteX VPC network to ensure only existing internal VMs/workers can route to the database.
- **IAM Service Account:** Create a worker service account with `roles/pubsub.publisher`, `roles/pubsub.subscriber`, and `roles/storage.objectAdmin` (grant the storage role on the payload bucket only, not project-wide, to keep the account least-privilege).
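One possible `gcloud` sketch of the resources above (Terraform should mirror the same settings). Subscription names, the `suitex-event-store` instance name, the `sync-worker` account name, and `PROJECT_ID`/network paths are placeholders, not decided names; the dev bucket is shown and the pattern repeats for staging/prod:

```shell
# Topics (ordering is enforced per subscription; publishers must set ordering keys)
gcloud pubsub topics create events.raw events.merged events.dlq

# Ordered subscription with dead-lettering to events.dlq after 5 attempts
# (repeat for events.merged)
gcloud pubsub subscriptions create events.raw-sub \
  --topic=events.raw \
  --enable-message-ordering \
  --dead-letter-topic=events.dlq \
  --max-delivery-attempts=5

# Payload bucket: uniform access; attach a 30-day deletion lifecycle policy
gcloud storage buckets create gs://suitex-payload-snapshots-dev \
  --uniform-bucket-level-access
gcloud storage buckets update gs://suitex-payload-snapshots-dev \
  --lifecycle-file=lifecycle-30d-delete.json

# Dedicated HA Cloud SQL: regional, private IP only, backups + PITR
gcloud sql instances create suitex-event-store \
  --database-version=MYSQL_8_0 \
  --availability-type=REGIONAL \
  --no-assign-ip \
  --network=projects/PROJECT_ID/global/networks/suitex-vpc \
  --backup \
  --enable-point-in-time-recovery

# Worker service account (bind the three roles per the bullet above)
gcloud iam service-accounts create sync-worker
```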

### Laravel Multi-Database Configuration

**.env** — Add the new connection variables:

```env
DB_EVENT_STORE_CONNECTION=mysql
DB_EVENT_STORE_HOST=[NEW_HA_INSTANCE_IP]
DB_EVENT_STORE_PORT=3306
DB_EVENT_STORE_DATABASE=events
DB_EVENT_STORE_USERNAME=sync_worker
DB_EVENT_STORE_PASSWORD=[SECURE_PASSWORD]
```

- **config/database.php:** Duplicate the `mysql` array and name it `event_store`, pointing to the new env variables.
- **Eloquent Model:** Create `app/Models/EventStore.php` and explicitly bind the connection:

```php
<?php
namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class EventStore extends Model
{
    protected $connection = 'event_store';
    protected $table = 'event_store';
    const UPDATED_AT = null; // append-only ledger; schema has no updated_at column
    protected $fillable = ['account_id', 'event_id', 'aggregate_type', 'aggregate_id', 'payload'];
}
```
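For the `config/database.php` step, a minimal sketch of the duplicated connection entry (key names follow Laravel's stock `mysql` connection; carry over any charset/options the existing array already sets):

```php
<?php
// config/database.php — inside the 'connections' array
'event_store' => [
    'driver'    => 'mysql',
    'host'      => env('DB_EVENT_STORE_HOST'),
    'port'      => env('DB_EVENT_STORE_PORT', '3306'),
    'database'  => env('DB_EVENT_STORE_DATABASE', 'events'),
    'username'  => env('DB_EVENT_STORE_USERNAME'),
    'password'  => env('DB_EVENT_STORE_PASSWORD'),
    'charset'   => 'utf8mb4',
    'collation' => 'utf8mb4_unicode_ci',
    'strict'    => true,
],
```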

### The Event Store Schema (Migration)

Create the migration to run on the new connection (`Schema::connection('event_store')->create(...)`).

**Columns:**

- `id` (Auto-increment)
- `account_id` (String)
- `event_id` (UUID — Unique)
- `aggregate_type` (String)
- `aggregate_id` (String)
- `payload` (JSON)
- `created_at` (Timestamp)

**Indexes:** Create a composite B-Tree index on `['account_id', 'aggregate_type', 'aggregate_id']`.
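The columns and index above translate to a migration along these lines (a sketch using Laravel's anonymous-class migration style; column types for the string fields are assumptions to be confirmed against upstream identifiers):

```php
<?php
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::connection('event_store')->create('event_store', function (Blueprint $table) {
            $table->id();                               // auto-increment primary key
            $table->string('account_id');
            $table->uuid('event_id')->unique();
            $table->string('aggregate_type');
            $table->string('aggregate_id');
            $table->json('payload');
            $table->timestamp('created_at')->useCurrent();

            // Composite B-Tree index for per-tenant aggregate lookups
            $table->index(['account_id', 'aggregate_type', 'aggregate_id']);
        });
    }

    public function down(): void
    {
        Schema::connection('event_store')->dropIfExists('event_store');
    }
};
```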

### Connection Validation Script

Write a standalone test script to prove the IAM credentials can publish to `events.raw` and that Laravel can successfully insert a dummy row into the new `event_store` database.
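A possible shape for that script, using the official `google/cloud-pubsub` client and the `EventStore` model. The entry point (e.g. a throwaway artisan command or `php artisan tinker`) is left to the implementer, and the client is assumed to pick up the scoped service-account key via `GOOGLE_APPLICATION_CREDENTIALS`:

```php
<?php
use App\Models\EventStore;
use Google\Cloud\PubSub\PubSubClient;
use Illuminate\Support\Str;

// 1. Prove the IAM credentials can publish to events.raw (with an ordering key)
$pubsub = new PubSubClient(); // reads GOOGLE_APPLICATION_CREDENTIALS
$topic  = $pubsub->topic('events.raw');
$topic->publish([
    'data'        => json_encode(['ping' => now()->toIso8601String()]),
    'orderingKey' => 'validation',
]);

// 2. Prove Laravel can insert through the secondary 'event_store' connection
$row = EventStore::create([
    'account_id'     => 'validation',
    'event_id'       => (string) Str::uuid(),
    'aggregate_type' => 'HealthCheck',
    'aggregate_id'   => 'conn-test',
    'payload'        => json_encode(['ok' => true]),
]);

echo "Published to events.raw and inserted event_store row {$row->id}\n";
```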

## Acceptance Criteria (Definition of Done)

- **Scenario 1:** The new HA database is online, and the existing SuiteX database was not restarted or affected during provisioning.
- **Scenario 2:** Laravel successfully writes to the `event_store` database using the secondary connection without interfering with primary tenant database queries.
- **Scenario 3:** The validation script successfully publishes to Pub/Sub and reads from the subscription using the scoped IAM credentials.
