# SuiteX Bulk Operations Design Document

## 1. Goal
To design and implement a robust **Bulk Operations** feature within SuiteX. This feature allows users to filter, select, and update large sets of records via a dedicated workflow. The system must support background processing, detailed history tracking, and per-record outcome visibility (success/failure logs).

## 2. Scope
*   **In:**
    *   **Dedicated Workflow**: A 5-step UI flow (Filter, Select, Configure, Review, Monitor) on a new page.
    *   **Backend Architecture**: Laravel Queue/Job design using `Bus::batch` and a detailed history schema.
    *   **Data Structure**: DTOs for type-safe requests (`BulkOperationRequestData`) and detailed status reporting.
    *   **History & Monitoring**: A dedicated "Bulk Operations History" page with drill-down into specific record outcomes.
*   **Out:**
    *   Integration with existing Grids (this is a net-new standalone feature).
    *   Real-time WebSockets (Polling will be used).

## 3. Inputs & Preconditions
*   **Existing Architecture**: Laravel 10+, Eloquent, `spatie/laravel-data`.
*   **Permissions**: User must have `BULK_UPDATE` permission.

## 4. Architecture & Standards Alignment
*   **API**: All endpoints live under the `/api/v1/bulk/*` use-case prefix.
*   **DTOs**: All payloads encapsulated in DTOs.
*   **Queues**: Jobs dispatch to `bulk_ops` queue.
*   **Tenancy**: Strict `tenant_id` scoping for all queries.

## 5. Constraints
*   **Performance**: Interactive filter queries must return in under 500ms. Job throughput target: ~500 records/sec.
*   **Visibility**: Must track *every* record's outcome (Success/Fail + Message) for the "Details" modal.

## 6. Proposed Solution

### 6.1 Database Schema
We need two tables: one for the operation summary and one for the individual record logs.

```sql
-- The High Level Job Summary
CREATE TABLE bulk_operations (
    id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    tenant_id BIGINT UNSIGNED NOT NULL,
    user_id BIGINT UNSIGNED NOT NULL,
    batch_id VARCHAR(255), -- Link to Laravel Job Batch
    record_type VARCHAR(50) NOT NULL, -- e.g., 'project_task'
    operation_type VARCHAR(50) NOT NULL DEFAULT 'update',
    filters JSON NOT NULL, -- Snapshot of filters used
    payload JSON NOT NULL, -- The changes applied
    total_records INT DEFAULT 0,
    processed_records INT DEFAULT 0,
    failed_records INT DEFAULT 0,
    status VARCHAR(20) DEFAULT 'pending', -- pending, processing, completed, failed, partial
    created_at TIMESTAMP,
    updated_at TIMESTAMP,
    INDEX (tenant_id, user_id)
);

-- The Individual Record Logs (for the "Details" modal)
CREATE TABLE bulk_operation_records (
    id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    bulk_operation_id BIGINT UNSIGNED NOT NULL,
    record_id BIGINT UNSIGNED NOT NULL, -- The ID of the record that was updated
    status VARCHAR(20) NOT NULL, -- 'success', 'error'
    message TEXT NULL, -- Error message or success details
    created_at TIMESTAMP,
    FOREIGN KEY (bulk_operation_id) REFERENCES bulk_operations(id) ON DELETE CASCADE,
    INDEX (bulk_operation_id, status) -- For filtering success vs error tabs
);
```

### 6.2 API Contract

**Submission Endpoint**
`POST /api/v1/bulk/update`

Request Body (`BulkOperationRequestData`):
```json
{
  "record_type": "project_task",
  "filters": { ... }, // Used if selection_mode = 'all'
  "selection_mode": "all", // 'all' (use filters) or 'include' (use record_ids)
  "record_ids": [1, 2, 3], // Optional, used if selection_mode = 'include'
  "updates": {
    "status": "archived"
  }
}
```
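The `selection_mode` rules above can be sketched client-side. This is an illustrative TypeScript shape mirroring `BulkOperationRequestData`; the `buildBulkUpdateRequest` helper is an assumption for demonstration, not part of the DTO contract.

```typescript
// Client-side types mirroring the BulkOperationRequestData contract.
type SelectionMode = "all" | "include";

interface BulkUpdateRequest {
  record_type: string;
  selection_mode: SelectionMode;
  filters?: Record<string, unknown>; // required when selection_mode = "all"
  record_ids?: number[];             // required when selection_mode = "include"
  updates: Record<string, unknown>;
}

// Hypothetical helper: enforces that exactly one of filters/record_ids
// is sent, matching the selection_mode semantics described above.
function buildBulkUpdateRequest(
  recordType: string,
  updates: Record<string, unknown>,
  selection:
    | { mode: "all"; filters: Record<string, unknown> }
    | { mode: "include"; ids: number[] }
): BulkUpdateRequest {
  if (selection.mode === "include" && selection.ids.length === 0) {
    throw new Error("selection_mode 'include' requires at least one record id");
  }
  return selection.mode === "all"
    ? { record_type: recordType, selection_mode: "all", filters: selection.filters, updates }
    : { record_type: recordType, selection_mode: "include", record_ids: selection.ids, updates };
}
```

Keeping this branching in one helper means the wizard's "Select All" vs. checkbox paths produce payloads that never mix `filters` with `record_ids`.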

**Status Endpoint**
`GET /api/v1/bulk/status/{id}`

Response (`BulkOperationStatusData`):
```json
{
  "id": 1024,
  "status": "processing",
  "progress": 45,
  "total": 1000,
  "processed": 450,
  "failures": 12
}
```
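The relationship between these fields can be made explicit. The following TypeScript sketch (illustrative, assuming the field names above) shows how `progress` is derived from `processed`/`total`, and how a finished run with failures maps to the `partial` status from the schema.

```typescript
// Mirrors the BulkOperationStatusData fields from the status endpoint.
interface BulkStatus {
  total: number;
  processed: number;
  failures: number;
}

// progress = percentage of records processed so far, floored.
function progressPercent(s: BulkStatus): number {
  return s.total === 0 ? 0 : Math.floor((s.processed / s.total) * 100);
}

// Maps counters to the status values defined in the bulk_operations schema:
// pending/processing while running; completed, partial, or failed at the end.
function finalStatus(s: BulkStatus): "processing" | "completed" | "failed" | "partial" {
  if (s.processed < s.total) return "processing";
  if (s.failures === 0) return "completed";
  return s.failures === s.total ? "failed" : "partial";
}
```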

**Record Details Endpoint (Lazy Loaded)**
`GET /api/v1/bulk/{id}/records?status=error&page=1`

Response:
```json
{
  "data": [
    { "record_id": 55, "status": "error", "message": "Record is locked" }
  ],
  "meta": { "total": 12, "per_page": 100 }
}
```
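The lazy-loading math for the Details modal follows directly from the `meta` block. A minimal sketch, assuming the field names in the response above (the helpers themselves are illustrative):

```typescript
// Shape of the meta block returned by GET /api/v1/bulk/{id}/records.
interface PageMeta {
  total: number;    // total matching log rows for the given status filter
  per_page: number; // page size
}

// Should the modal fetch another page (infinite scroll / load-more)?
function hasMorePages(loadedCount: number, meta: PageMeta): boolean {
  return loadedCount < meta.total;
}

// Pages are 1-indexed, matching ?page=1 in the URL above.
function nextPage(loadedCount: number, meta: PageMeta): number {
  return Math.floor(loadedCount / meta.per_page) + 1;
}
```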

### 6.3 Job Architecture & Implementation Guide

#### 1. Coordinator: `PrepareBulkOperationJob`
*   **Logic**:
    *   Resolve IDs: If `selection_mode` is 'include', use `record_ids`. If 'all', run the filter query to fetch IDs.
    *   Create `bulk_operations` entry.
    *   Chunk IDs (e.g., 100/chunk).
    *   Dispatch Laravel Batch of `ExecuteBulkUpdateJob`.
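The chunking step above is simple but worth pinning down. A sketch in TypeScript (the real job is a Laravel class; this only illustrates the slicing, with 100 matching the example chunk size):

```typescript
// Split resolved record IDs into fixed-size slices; the coordinator
// dispatches one ExecuteBulkUpdateJob per slice via the batch.
function chunkIds(ids: number[], size = 100): number[][] {
  const chunks: number[][] = [];
  for (let i = 0; i < ids.length; i += size) {
    chunks.push(ids.slice(i, i + size));
  }
  return chunks;
}
```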

#### 2. Worker: `ExecuteBulkUpdateJob`
*   **Logic**:
    *   Iterate IDs.
    *   Try Update.
    *   **On Success**: Insert into `bulk_operation_records` (status='success'). Increment `processed_records`.
    *   **On Failure**: Insert into `bulk_operation_records` (status='error', message=$e->getMessage()). Increment `failed_records`.
    *   *Note: bulk-inserting the log rows once at the end of the chunk is significantly faster than one insert per record.*
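The worker's bookkeeping can be sketched as a pure function. This is illustrative TypeScript (the real worker is a Laravel job); `applyUpdate` is a stand-in for the per-record update and is not part of the spec. The point is that log rows and counter deltas are accumulated in memory and written once per chunk.

```typescript
// One row per record, matching the bulk_operation_records schema.
interface RecordLog {
  record_id: number;
  status: "success" | "error";
  message: string | null;
}

// Process a chunk of IDs; return the accumulated logs (for one bulk
// insert) plus the deltas for processed_records / failed_records.
function processChunk(
  ids: number[],
  applyUpdate: (id: number) => void
): { logs: RecordLog[]; processed: number; failed: number } {
  const logs: RecordLog[] = [];
  let processed = 0;
  let failed = 0;
  for (const id of ids) {
    try {
      applyUpdate(id);
      logs.push({ record_id: id, status: "success", message: null });
      processed++;
    } catch (e) {
      logs.push({ record_id: id, status: "error", message: (e as Error).message });
      failed++;
    }
  }
  return { logs, processed, failed };
}
```

Note that a failure does not abort the chunk; every ID gets an outcome row, which is what powers the Details modal.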

### 6.4 UI/UX Workflow (The 5-Step Process)

1.  **Filter**: User enters the "Bulk Updates" page. Selects "Record Type" and applies filters. Clicks "Next".
2.  **Select**: A table appears showing results. User checks specific rows or clicks "Select All". Clicks "Next".
3.  **Configure**: A modal/panel opens. User selects fields (e.g., "Status") and new values (e.g., "Closed").
4.  **Review**: UI summarizes: "Update 150 records: Status -> Closed". User confirms.
5.  **Monitor**: User is redirected to "History" page. Top row shows the new job running.
    *   User clicks "View Details".
    *   Modal opens with tabs: "Overview", "Updated Records" (lazy loaded), "Errors" (lazy loaded).
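The step progression above can be sketched as a tiny transition function. The step names come from the workflow; the helper itself is illustrative, not a prescribed component API.

```typescript
// The four interactive wizard steps; confirming Review hands off
// to the History page ("monitor"), which is not a wizard step.
const STEPS = ["filter", "select", "configure", "review"] as const;
type Step = (typeof STEPS)[number] | "monitor";

function nextStep(current: Step): Step {
  const i = STEPS.indexOf(current as (typeof STEPS)[number]);
  // After Review (or any non-wizard state), redirect to monitoring.
  if (i === -1 || i === STEPS.length - 1) return "monitor";
  return STEPS[i + 1];
}
```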

## 7. Deliverables & Files

### Phase 1: Foundation (Backend)
*   **Migrations**: `create_bulk_operations_table`, `create_bulk_operation_records_table`.
*   **DTOs**: `BulkOperationRequestData`, `BulkOperationStatusData`, `BulkRecordLogData`.
*   **Jobs**: `PrepareBulkOperationJob`, `ExecuteBulkUpdateJob`.
*   **Controllers**: `BulkOperationController` (submit, status), `BulkRecordController` (list logs).

### Phase 2: Frontend - Creation Flow
*   **Pages**: `BulkUpdatePage` (Wizard flow).
*   **Components**: `FilterStep`, `SelectionStep` (Table), `ConfigurationStep` (Form), `ReviewStep`.

### Phase 3: Frontend - History & Monitoring
*   **Pages**: `BulkHistoryPage`.
*   **Components**: `HistoryTable` (with polling), `JobDetailsModal` (with lazy-loaded tabs).

## 8. Phased Rollout Plan & Acceptance Criteria

### Phase 1: Core Backend & API (Weeks 1-2)
**Goal**: Build the engine that processes updates and logs detailed outcomes.

*   **Developer Guidance**:
    *   Focus on `bulk_operation_records` write efficiency. Ensure `ExecuteBulkUpdateJob` performs a single bulk insert of log rows at the end of its chunk rather than one insert per record.
    *   Ensure strict tenant scoping in the `Prepare` job query.
*   **Deliverables**:
    *   Migrations & Models.
    *   API Endpoint: `POST /api/v1/bulk/update` (Submit Job)
    *   API Endpoint: `GET /api/v1/bulk/status/{id}` (Job Status)
    *   API Endpoint: `GET /api/v1/bulk/{id}/records` (Details Log)
    *   Tests: Submit a job, verify it runs, verify logs appear in `bulk_operation_records`.
*   **Acceptance Criteria**:
    *   API accepts valid JSON payload to `POST /api/v1/bulk/update` and rejects invalid ones (422).
    *   Job correctly logs 5 successes and 1 failure (mocked) to the log table.
    *   API pagination for `/api/v1/bulk/{id}/records` endpoint works correctly.

### Phase 2: UI - Creation Flow (Weeks 3-4)
**Goal**: Build the Wizard flow (Filter -> Select -> Configure -> Review -> Submit).

*   **Developer Guidance**:
    *   State management is key here. Keep the "Wizard" state (filters, selection, config) in a React Context or similar store until submission.
    *   The "Selection" table doesn't need to be a full-featured datagrid, just a selectable list of results.
*   **Deliverables**:
    *   `BulkUpdateWizard` component.
    *   Integration with `POST /api/v1/bulk/update`.
*   **Acceptance Criteria**:
    *   User can step through all four wizard stages (Filter, Select, Configure, Review).
    *   "Select All" sends `selection_mode: all` to `POST /api/v1/bulk/update`. Checking boxes sends `selection_mode: include` + IDs.

### Phase 3: UI - History & Monitoring (Weeks 5-6)
**Goal**: The History Page and Details Modal.

*   **Developer Guidance**:
    *   The History page should poll for active jobs every ~5s.
    *   The Details Modal tabs must use the `GET /api/v1/bulk/{id}/records` endpoint with `status=success` or `status=error` params. Implement infinite scroll or load-more buttons.
*   **Deliverables**:
    *   `BulkHistoryPage`.
    *   `JobDetailsModal` with 3 tabs.
*   **Acceptance Criteria**:
    *   Clicking a row opens the modal.
    *   "Errors" tab shows the error message for failed records (from `GET /api/v1/bulk/{id}/records`).
    *   "Updated" tab allows text search to find a specific updated record.
