# mindzieAPI - Complete Documentation > RESTful API for mindzieStudio process mining platform. This document contains the complete API reference for managing tenants, users, projects, datasets, dashboards, and executing analysis workflows programmatically. **API Base URL:** https://your-mindzie-instance.com/api **Authentication:** All endpoints require a Bearer token. Use either a Global API Key (for tenant/user management) or a Tenant API Key (for project/dataset operations). --- # Getting Started **Your Journey to API Mastery** Everything you need to start building powerful integrations with the mindzieAPI. From authentication setup to your first successful API calls. ## Quick Links - **Authentication**: Learn how to securely authenticate with the mindzieAPI using Bearer tokens and configure access credentials. [Setup Authentication →](/mindzie_api/authentication) - **Quick Start**: Follow our step-by-step guide to make your first API calls and verify your setup in minutes. [Start Building →](/mindzie_api/quick-start) - **Response Formats**: Understand API response structures, status codes, and error handling for robust integrations. [Learn Formats →](/mindzie_api/response-formats) - **AI Coding Tools**: Configure Claude Code, Cursor, Windsurf, and other AI assistants to understand the mindzieAPI instantly. 
[Configure AI Tools →](/mindzie_api/llm-access) ## Prerequisites Before getting started with the mindzieAPI, ensure you have: ### API Access - Enterprise Server or SaaS deployment with API access enabled ### Credentials - Access token, tenant ID, and project ID from your administrator ### Network Access - HTTPS connectivity to your mindzie instance ## Learning Path Follow this recommended sequence for the best learning experience: ### Step 1: Authentication Set up your API credentials and understand security requirements [Start Here →](/mindzie_api/authentication) ### Step 2: Quick Start Make your first API calls and verify everything works correctly [Try It →](/mindzie_api/quick-start) ### Step 3: Response Formats Learn to handle responses and implement proper error handling [Master Responses →](/mindzie_api/response-formats) ## What's Next? After completing the getting started guides, explore these API sections: ### [Actions](/mindzie_api/action) System operations, health monitoring, and execution tracking ### [Blocks](/mindzie_api/block) Analysis blocks, filters, calculators, and notebook management ### [Datasets](/mindzie_api/dataset) Data upload, management, and transformation capabilities --- ## AI-Friendly Documentation mindzieAPI follows the [llms.txt standard](https://llmstxt.org/) to provide documentation optimized for AI coding assistants and Large Language Models. 
### Available Files | File | Size | Purpose | |------|------|---------| | [`/llms.txt`](https://docs.mindziestudio.com/llms.txt) | ~6 KB | Documentation index with categorized links | | [`/llms-full.txt`](https://docs.mindziestudio.com/llms-full.txt) | ~470 KB | Complete API documentation in one file | ### Supported Tools These AI coding assistants can use our documentation directly: - **Claude Code** - Fetch via WebFetch or add to project instructions - **Cursor** - Add via @Docs feature - **Windsurf** - Add to knowledge base - **GitHub Copilot** - Include in project context - **Cody (Sourcegraph)** - Add to context sources [Configure Your AI Coding Tool →](/mindzie_api/llm-access) --- **Ready to Start?** Begin with [Authentication Setup](/mindzie_api/authentication) to configure your API access credentials and security settings. --- # Quick Start Guide **Get Up and Running in Minutes** Follow this step-by-step guide to make your first successful API calls to mindzieStudio and start integrating process mining capabilities into your applications. ## Prerequisites - **API Credentials:** Access token, tenant ID, and project ID - **Base URL:** Your mindzie instance API endpoint - **HTTPS Access:** Secure connection to your mindzie instance - **Development Environment:** Your preferred programming language and HTTP client **Don't have credentials?** Check the [Authentication Guide](/mindzie_api/authentication) to learn how to obtain your API access credentials. 
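Before making any calls, it can help to sanity-check that the credentials listed above are present and structurally valid. The sketch below is illustrative (the environment variable names match the examples later in this guide, but the helper itself is not part of the API):

```python
import os
import uuid

def load_credentials():
    """Read mindzie credentials from environment variables (illustrative helper)."""
    creds = {
        "base_url": os.environ.get("MINDZIE_API_URL", ""),
        "token": os.environ.get("MINDZIE_ACCESS_TOKEN", ""),
        "tenant_id": os.environ.get("MINDZIE_TENANT_ID", ""),
        "project_id": os.environ.get("MINDZIE_PROJECT_ID", ""),
    }
    # Tenant and project IDs are GUIDs; uuid.UUID raises ValueError on bad input.
    for key in ("tenant_id", "project_id"):
        uuid.UUID(creds[key])
    if not creds["base_url"].startswith("https://"):
        raise ValueError("Base URL must use HTTPS")
    if not creds["token"]:
        raise ValueError("Missing access token")
    return creds
```

Catching a malformed GUID or a missing token here is much cheaper than debugging a 400/401 response later.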
## Step 1: Test Basic Connectivity Start by testing basic connectivity to ensure your mindzie instance is accessible: ```bash curl -X GET "https://your-mindzie-instance.com/api/Action/ping" ``` **Expected Response:** ```json { "status": "ok", "timestamp": "2024-01-15T10:30:00Z", "version": "1.0.0" } ``` ## Step 2: Verify Authentication Test your authentication credentials with the authenticated ping endpoint: ```bash curl -X GET "https://your-mindzie-instance.com/api/Action/ping/authenticated" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \ -H "X-Tenant-Id: YOUR_TENANT_GUID" \ -H "X-Project-Id: YOUR_PROJECT_GUID" \ -H "Content-Type: application/json" ``` **Expected Response:** ```json { "status": "authenticated", "timestamp": "2024-01-15T10:30:00Z", "tenantId": "12345678-1234-1234-1234-123456789012", "projectId": "87654321-4321-4321-4321-210987654321", "userId": "user@company.com", "permissions": ["read", "write", "admin"] } ``` ## Step 3: Your First API Call Let's make a practical API call to retrieve action history: ```bash curl -X GET "https://your-mindzie-instance.com/api/Action/history?limit=5" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \ -H "X-Tenant-Id: YOUR_TENANT_GUID" \ -H "X-Project-Id: YOUR_PROJECT_GUID" \ -H "Content-Type: application/json" ``` **Example Response:** ```json { "actions": [ { "actionId": "87654321-4321-4321-4321-210987654321", "actionType": "analyze", "status": "completed", "startTime": "2024-01-15T10:30:00Z", "endTime": "2024-01-15T10:32:15Z", "duration": 135, "userId": "user@company.com" } ], "pagination": { "currentPage": 1, "totalPages": 1, "totalItems": 1, "itemsPerPage": 5 } } ``` ## Language-Specific Examples ### JavaScript Use fetch API or axios for modern web applications and Node.js backends. ### Python Use requests library for data science workflows and backend automation. ### C#/.NET Use HttpClient for enterprise applications and microservices. 
## JavaScript Example Complete example using modern JavaScript and fetch API: ```javascript // Configuration const API_CONFIG = { baseURL: 'https://your-mindzie-instance.com/api', token: 'YOUR_ACCESS_TOKEN', tenantId: 'YOUR_TENANT_GUID', projectId: 'YOUR_PROJECT_GUID' }; // Helper function for API requests async function callMindzieAPI(endpoint, options = {}) { const url = `${API_CONFIG.baseURL}${endpoint}`; const defaultHeaders = { 'Authorization': `Bearer ${API_CONFIG.token}`, 'X-Tenant-Id': API_CONFIG.tenantId, 'X-Project-Id': API_CONFIG.projectId, 'Content-Type': 'application/json' }; try { const response = await fetch(url, { ...options, headers: { ...defaultHeaders, ...options.headers } }); if (!response.ok) { throw new Error(`HTTP ${response.status}: ${response.statusText}`); } return await response.json(); } catch (error) { console.error('API call failed:', error); throw error; } } // Example usage async function quickStartExample() { try { // 1. Test connectivity console.log('Testing connectivity...'); const pingResult = await callMindzieAPI('/Action/ping'); console.log('Ping successful:', pingResult); // 2. Test authentication console.log('Testing authentication...'); const authResult = await callMindzieAPI('/Action/ping/authenticated'); console.log('Authentication successful:', authResult); // 3. 
Get action history
    console.log('Fetching action history...');
    const history = await callMindzieAPI('/Action/history?limit=5');
    console.log('Action history:', history);

    console.log('Quick start completed successfully!');
    return history;
  } catch (error) {
    console.error('Quick start failed:', error);
    throw error;
  }
}

// Run the example
quickStartExample();
```

## Python Example

Complete example using the Python requests library:

```python
import requests
import json
from typing import Dict, Any

class MindzieQuickStart:
    def __init__(self, base_url: str, token: str, tenant_id: str, project_id: str):
        self.base_url = base_url.rstrip('/')
        self.headers = {
            'Authorization': f'Bearer {token}',
            'X-Tenant-Id': tenant_id,
            'X-Project-Id': project_id,
            'Content-Type': 'application/json'
        }

    def call_api(self, endpoint: str, method: str = 'GET', **kwargs) -> Dict[str, Any]:
        """Make an API call to mindzie."""
        url = f"{self.base_url}{endpoint}"
        try:
            response = requests.request(
                method=method,
                url=url,
                headers=self.headers,
                **kwargs
            )
            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as e:
            print(f"API call failed: {e}")
            raise

    def run_quick_start(self):
        """Execute the quick start sequence."""
        print("Starting mindzie API Quick Start...")
        try:
            # 1. Test connectivity. Note: base_url already ends in /api,
            #    so endpoint paths must not repeat the /api prefix.
            print("1. Testing connectivity...")
            ping_result = requests.get(f"{self.base_url}/Action/ping")
            ping_result.raise_for_status()
            print(f"   Connectivity OK: {ping_result.json()}")

            # 2. Test authentication
            print("2. Testing authentication...")
            auth_result = self.call_api('/Action/ping/authenticated')
            print(f"   Authentication OK: {auth_result['status']}")

            # 3. Get action history
            print("3. Fetching action history...")
            history = self.call_api('/Action/history?limit=5')
            print(f"   Retrieved {len(history['actions'])} actions")

            print("Quick start completed successfully!")
            return history
        except Exception as e:
            print(f"Quick start failed: {e}")
            raise

# Usage example
if __name__ == "__main__":
    # Configure your credentials
    quick_start = MindzieQuickStart(
        base_url='https://your-mindzie-instance.com/api',
        token='YOUR_ACCESS_TOKEN',
        tenant_id='YOUR_TENANT_GUID',
        project_id='YOUR_PROJECT_GUID'
    )

    # Run the quick start
    result = quick_start.run_quick_start()
    print(f"Final result: {json.dumps(result, indent=2)}")
```

## C#/.NET Example

Complete example using C# HttpClient:

```csharp
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public class MindzieQuickStart : IDisposable
{
    private readonly HttpClient _httpClient;
    private readonly string _baseUrl;

    public MindzieQuickStart(string baseUrl, string token, string tenantId, string projectId)
    {
        _baseUrl = baseUrl.TrimEnd('/');
        _httpClient = new HttpClient();
        _httpClient.DefaultRequestHeaders.Add("Authorization", $"Bearer {token}");
        _httpClient.DefaultRequestHeaders.Add("X-Tenant-Id", tenantId);
        _httpClient.DefaultRequestHeaders.Add("X-Project-Id", projectId);
    }

    public async Task<T> CallApiAsync<T>(string endpoint)
    {
        try
        {
            var response = await _httpClient.GetAsync($"{_baseUrl}{endpoint}");
            response.EnsureSuccessStatusCode();
            var content = await response.Content.ReadAsStringAsync();
            return JsonSerializer.Deserialize<T>(content,
                new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
        }
        catch (HttpRequestException ex)
        {
            Console.WriteLine($"API call failed: {ex.Message}");
            throw;
        }
    }

    public async Task RunQuickStartAsync()
    {
        Console.WriteLine("Starting mindzie API Quick Start...");
        try
        {
            // 1. Test connectivity (unauthenticated, so use a bare client;
            //    _baseUrl already ends in /api)
            Console.WriteLine("1. Testing connectivity...");
            using var pingClient = new HttpClient();
            var pingResponse = await pingClient.GetAsync($"{_baseUrl}/Action/ping");
            pingResponse.EnsureSuccessStatusCode();
            Console.WriteLine("   Connectivity OK");

            // 2. Test authentication
            Console.WriteLine("2. Testing authentication...");
            var authResult = await CallApiAsync<AuthResponse>("/Action/ping/authenticated");
            Console.WriteLine($"   Authentication OK: {authResult.Status}");

            // 3. Get action history
            Console.WriteLine("3. Fetching action history...");
            var history = await CallApiAsync<ActionHistoryResponse>("/Action/history?limit=5");
            Console.WriteLine($"   Retrieved {history.Actions.Length} actions");

            Console.WriteLine("Quick start completed successfully!");
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Quick start failed: {ex.Message}");
            throw;
        }
    }

    public void Dispose()
    {
        _httpClient?.Dispose();
    }
}

// Data models
public class AuthResponse
{
    public string Status { get; set; }
    public string TenantId { get; set; }
    public string ProjectId { get; set; }
    public string UserId { get; set; }
}

public class ActionHistoryResponse
{
    public ActionItem[] Actions { get; set; }
    public PaginationInfo Pagination { get; set; }
}

public class ActionItem
{
    public string ActionId { get; set; }
    public string ActionType { get; set; }
    public string Status { get; set; }
    public DateTime StartTime { get; set; }
    public DateTime? EndTime { get; set; }
}

public class PaginationInfo
{
    public int CurrentPage { get; set; }
    public int TotalPages { get; set; }
    public int TotalItems { get; set; }
    public int ItemsPerPage { get; set; }
}

// Usage
class Program
{
    static async Task Main(string[] args)
    {
        using var quickStart = new MindzieQuickStart(
            "https://your-mindzie-instance.com/api",
            "YOUR_ACCESS_TOKEN",
            "YOUR_TENANT_GUID",
            "YOUR_PROJECT_GUID"
        );
        await quickStart.RunQuickStartAsync();
    }
}
```

## Common Issues & Solutions

### Authentication Failures
- **401 Unauthorized:** Verify your access token is correct and not expired
- **403 Forbidden:** Check tenant/project IDs and user permissions
- **400 Bad Request:** Ensure all required headers are included

### Connection Issues
- **Network timeouts:** Check firewall settings and network connectivity
- **SSL/TLS errors:** Verify certificate validity and HTTPS configuration
- **DNS resolution:** Confirm the mindzie instance URL is correct

### Rate Limiting
- **429 Too Many Requests:** Implement exponential backoff retry logic
- **Monitor rate limits:** Check response headers for rate limit information
- **Optimize requests:** Use pagination and filtering to reduce API calls

## Next Steps

**Congratulations!** You've successfully completed the mindzieAPI quick start. Next, explore the [Actions API](/mindzie_api/action) or [Blocks API](/mindzie_api/block) to start building powerful integrations.

---

# Authentication

**Secure Access to the mindzieAPI**

Learn how to authenticate with the mindzieAPI using Bearer tokens, manage tenant and project access, and implement secure API integration patterns.

## Authentication Overview

The mindzieAPI uses Bearer token authentication combined with tenant and project identifiers to provide secure, multi-tenant access to mindzie resources.
## API Key Types The mindzieAPI supports two types of API keys with different access levels: ### Tenant API Keys (Standard) Tenant API Keys are scoped to a specific tenant and are used for most API operations: - Access projects, datasets, investigations, and dashboards within the tenant - Execute notebooks and blocks - Manage project-level resources **Create at:** Settings -> API Keys (within mindzieStudio) ### Global API Keys (Server API Keys) Global API Keys have system-wide administrative access and are required for: - **Tenant API** - Create, list, update, and delete tenants - **User API (Global)** - Create and manage users across all tenants - Assign users to tenants **Create at:** `/admin/global-api-keys` (Administrator access required) **IMPORTANT:** The Tenant API endpoints (`/api/tenant`) require a Global API Key. Regular tenant-specific API keys cannot access these endpoints and will receive a 401 Unauthorized response. ## Required Headers ### For Tenant-Scoped Operations ``` Authorization: Bearer YOUR_TENANT_API_KEY Content-Type: application/json ``` The tenant ID is typically included in the URL path (e.g., `/api/{tenantId}/project`). ### For Global Operations (Tenant/User Management) ``` Authorization: Bearer YOUR_GLOBAL_API_KEY Content-Type: application/json ``` **Security Note:** Always use HTTPS when making API requests to protect your access tokens in transit. 
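The two header layouts above differ only in which key goes into the `Authorization` header; tenant-scoped routes additionally carry the tenant ID in the URL path. A small sketch (helper names are illustrative, not part of the API):

```python
def auth_headers(api_key: str) -> dict:
    """Headers shared by tenant-scoped and global operations, per the layouts above."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

def tenant_url(base_url: str, tenant_id: str, resource: str) -> str:
    """Tenant-scoped routes embed the tenant ID in the path, e.g. /api/{tenantId}/project."""
    return f"{base_url.rstrip('/')}/{tenant_id}/{resource}"
```

For global operations (tenant/user management), pass a Global API Key to `auth_headers` and call the `/api/tenant` routes directly; for everything else, pass a Tenant API Key.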
## Obtaining Access Tokens ### Enterprise Server For Enterprise Server deployments, contact your mindzie administrator to obtain: - API access token - Tenant ID (GUID format) - Project ID (GUID format) - Base API URL for your instance ### SaaS Deployment For SaaS users, access tokens can be generated through: - mindzie Studio user interface (Settings → API Keys) - Contacting your account administrator - Using the authentication endpoints (if enabled) ## Testing Authentication Use the ping endpoints to verify your authentication setup: ### Basic Connectivity Test ```bash curl -X GET "https://your-mindzie-instance.com/api/Action/ping" ``` ### Authenticated Test ```bash curl -X GET "https://your-mindzie-instance.com/api/Action/ping/authenticated" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \ -H "X-Tenant-Id: YOUR_TENANT_GUID" \ -H "X-Project-Id: YOUR_PROJECT_GUID" ``` ### Successful Response ```json { "status": "authenticated", "timestamp": "2024-01-15T10:30:00Z", "tenantId": "12345678-1234-1234-1234-123456789012", "projectId": "87654321-4321-4321-4321-210987654321", "userId": "user@company.com", "permissions": ["read", "write", "admin"] } ``` ## Security Best Practices ### Token Security Store tokens securely using environment variables or secure credential management systems. ### Token Expiration Monitor token expiration and implement refresh mechanisms to maintain uninterrupted access. ### Multi-Tenant Each token is scoped to specific tenants and projects for secure data isolation. 
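The token-expiration best practice above boils down to checking expiry with some leeway before each batch of calls. How you learn the expiry timestamp depends on your deployment (error responses include an `expiresAt` field); the helper below only does the comparison and is a sketch, not an API feature:

```python
from datetime import datetime, timedelta, timezone

def needs_refresh(expires_at: str, leeway_minutes: int = 5) -> bool:
    """True if an ISO 8601 expiry timestamp falls within the leeway window."""
    expiry = datetime.fromisoformat(expires_at.replace("Z", "+00:00"))
    return datetime.now(timezone.utc) >= expiry - timedelta(minutes=leeway_minutes)
```

Refreshing slightly early avoids a burst of 401 responses mid-workflow.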
## Implementation Examples ### JavaScript/Node.js ```javascript const apiConfig = { baseURL: process.env.MINDZIE_API_URL, token: process.env.MINDZIE_ACCESS_TOKEN, tenantId: process.env.MINDZIE_TENANT_ID, projectId: process.env.MINDZIE_PROJECT_ID }; const makeAuthenticatedRequest = async (endpoint, options = {}) => { const url = `${apiConfig.baseURL}${endpoint}`; const headers = { 'Authorization': `Bearer ${apiConfig.token}`, 'X-Tenant-Id': apiConfig.tenantId, 'X-Project-Id': apiConfig.projectId, 'Content-Type': 'application/json', ...options.headers }; try { const response = await fetch(url, { ...options, headers }); if (!response.ok) { throw new Error(`API request failed: ${response.status} ${response.statusText}`); } return await response.json(); } catch (error) { console.error('API request error:', error); throw error; } }; ``` ### Python ```python import os import requests from typing import Dict, Any class MindzieAPIClient: def __init__(self): self.base_url = os.getenv('MINDZIE_API_URL') self.token = os.getenv('MINDZIE_ACCESS_TOKEN') self.tenant_id = os.getenv('MINDZIE_TENANT_ID') self.project_id = os.getenv('MINDZIE_PROJECT_ID') if not all([self.base_url, self.token, self.tenant_id, self.project_id]): raise ValueError("Missing required environment variables") def _get_headers(self) -> Dict[str, str]: return { 'Authorization': f'Bearer {self.token}', 'X-Tenant-Id': self.tenant_id, 'X-Project-Id': self.project_id, 'Content-Type': 'application/json' } def make_request(self, method: str, endpoint: str, **kwargs) -> Dict[str, Any]: url = f"{self.base_url.rstrip('/')}{endpoint}" headers = self._get_headers() if 'headers' in kwargs: headers.update(kwargs['headers']) kwargs['headers'] = headers try: response = requests.request(method, url, **kwargs) response.raise_for_status() return response.json() except requests.exceptions.RequestException as e: raise Exception(f"API request failed: {e}") # Usage client = MindzieAPIClient() result = client.make_request('GET', 
    '/Action/ping/authenticated')  # MINDZIE_API_URL already ends in /api
```

### C#/.NET

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public class MindzieApiClient
{
    private readonly HttpClient _httpClient;
    private readonly string _baseUrl;
    private readonly string _tenantId;
    private readonly string _projectId;

    public MindzieApiClient(string baseUrl, string accessToken, string tenantId, string projectId)
    {
        _baseUrl = baseUrl.TrimEnd('/');
        _tenantId = tenantId;
        _projectId = projectId;
        _httpClient = new HttpClient();
        _httpClient.DefaultRequestHeaders.Add("Authorization", $"Bearer {accessToken}");
        _httpClient.DefaultRequestHeaders.Add("X-Tenant-Id", tenantId);
        _httpClient.DefaultRequestHeaders.Add("X-Project-Id", projectId);
    }

    public async Task<T> GetAsync<T>(string endpoint)
    {
        var response = await _httpClient.GetAsync($"{_baseUrl}{endpoint}");
        response.EnsureSuccessStatusCode();
        var content = await response.Content.ReadAsStringAsync();
        return JsonSerializer.Deserialize<T>(content);
    }

    public async Task<T> PostAsync<T>(string endpoint, object data)
    {
        var json = JsonSerializer.Serialize(data);
        var content = new StringContent(json, Encoding.UTF8, "application/json");
        var response = await _httpClient.PostAsync($"{_baseUrl}{endpoint}", content);
        response.EnsureSuccessStatusCode();
        var responseContent = await response.Content.ReadAsStringAsync();
        return JsonSerializer.Deserialize<T>(responseContent);
    }
}

// Usage
var client = new MindzieApiClient(
    Environment.GetEnvironmentVariable("MINDZIE_API_URL"),
    Environment.GetEnvironmentVariable("MINDZIE_ACCESS_TOKEN"),
    Environment.GetEnvironmentVariable("MINDZIE_TENANT_ID"),
    Environment.GetEnvironmentVariable("MINDZIE_PROJECT_ID")
);
```

## Error Handling

### Common Authentication Errors

| Status Code | Error | Description | Solution |
|-------------|-------|-------------|----------|
| `401` | Unauthorized | Invalid or missing access token | Verify token and ensure it's not expired |
| `403` | Forbidden | Valid token but insufficient permissions | Check tenant/project access or request permissions |
| `400` | Bad Request | Missing required headers | Ensure X-Tenant-Id and X-Project-Id are provided |

### Example Error Response

```json
{
  "error": "invalid_token",
  "message": "The provided access token is invalid or expired",
  "timestamp": "2024-01-15T10:30:00Z",
  "requestId": "req_12345"
}
```

## Next Steps

Once authentication is working, try the [Quick Start Guide](/mindzie_api/quick-start) to make your first API calls or explore the [Response Formats](/mindzie_api/response-formats) documentation.

---

# Response Formats

**Understanding API Response Structures**

Learn about mindzieAPI response formats, status codes, error handling patterns, and data structures to build robust integrations.

## Standard Response Format

All mindzieAPI responses follow consistent JSON formatting with predictable structures:

### Successful Response

```json
{
  "data": {
    // Primary response data
  },
  "metadata": {
    "timestamp": "2024-01-15T10:30:00Z",
    "requestId": "req_12345",
    "version": "1.0.0"
  },
  "pagination": {
    // Present for paginated responses
    "currentPage": 1,
    "totalPages": 5,
    "totalItems": 100,
    "itemsPerPage": 20,
    "hasNext": true,
    "hasPrevious": false
  }
}
```

### Error Response

```json
{
  "error": {
    "code": "validation_failed",
    "message": "Request validation failed",
    "details": {
      "field": "datasetId",
      "reason": "Invalid GUID format"
    },
    "timestamp": "2024-01-15T10:30:00Z",
    "requestId": "req_12345"
  }
}
```

## Response Types

### Success Responses
HTTP 2xx status codes with structured JSON data and metadata.

### Error Responses
HTTP 4xx/5xx status codes with detailed error information.

### Pagination
Consistent pagination format for large dataset responses.
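Because every response follows the same envelope (a `data`/`metadata` body on success, an `error` object on failure), client code can dispatch on the envelope before touching the payload. A minimal sketch, assuming responses match the shapes above exactly (the helper name is illustrative):

```python
def unwrap(payload: dict) -> dict:
    """Return the 'data' portion of a standard envelope, or raise on an error envelope."""
    if "error" in payload:
        err = payload["error"]
        raise RuntimeError(f"{err.get('code', 'unknown')}: {err.get('message', '')}")
    # Some endpoints return the resource directly rather than under 'data'.
    return payload.get("data", payload)
```

Centralizing this check keeps per-endpoint code from re-implementing error detection.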
## HTTP Status Codes ### Success Codes (2xx) | Code | Status | Description | Usage | |------|--------|-------------|-------| | `200` | OK | Request successful, data returned | GET requests, successful operations | | `201` | Created | Resource successfully created | POST requests creating new resources | | `202` | Accepted | Request accepted for async processing | Long-running operations, queued tasks | | `204` | No Content | Request successful, no data returned | DELETE requests, updates without return data | ### Client Error Codes (4xx) | Code | Status | Description | Common Causes | |------|--------|-------------|---------------| | `400` | Bad Request | Invalid request format or parameters | Missing headers, invalid JSON, malformed data | | `401` | Unauthorized | Authentication required or failed | Missing/invalid token, expired credentials | | `403` | Forbidden | Valid auth but insufficient permissions | Limited user access, wrong tenant/project | | `404` | Not Found | Requested resource doesn't exist | Invalid endpoint, non-existent resource ID | | `422` | Unprocessable Entity | Valid format but business logic validation failed | Invalid business rules, constraint violations | | `429` | Too Many Requests | Rate limit exceeded | Too many API calls in time window | ### Server Error Codes (5xx) | Code | Status | Description | Action | |------|--------|-------------|--------| | `500` | Internal Server Error | Unexpected server error | Retry with exponential backoff | | `502` | Bad Gateway | Upstream service error | Check service status, retry later | | `503` | Service Unavailable | Service temporarily unavailable | Retry after delay, check maintenance | | `504` | Gateway Timeout | Request timeout | Increase timeout, optimize request | ## Common Response Patterns ### Single Resource Response ```json { "actionId": "87654321-4321-4321-4321-210987654321", "actionType": "analyze", "status": "completed", "startTime": "2024-01-15T10:30:00Z", "endTime": 
"2024-01-15T10:32:15Z", "duration": 135, "result": { "outputId": "98765432-8765-4321-4321-987654321098", "recordsProcessed": 10000 } } ``` ### Collection Response with Pagination ```json { "actions": [ { "actionId": "87654321-4321-4321-4321-210987654321", "actionType": "analyze", "status": "completed" }, { "actionId": "11111111-2222-3333-4444-555555555555", "actionType": "export", "status": "processing" } ], "pagination": { "currentPage": 1, "totalPages": 5, "totalItems": 100, "itemsPerPage": 20, "hasNext": true, "hasPrevious": false, "links": { "first": "/api/Action/history?page=1&limit=20", "next": "/api/Action/history?page=2&limit=20", "last": "/api/Action/history?page=5&limit=20" } } } ``` ### Async Operation Response ```json { "operationId": "op_12345678-1234-1234-1234-123456789012", "status": "processing", "progress": { "percentage": 45, "currentStep": "data_analysis", "totalSteps": 5, "estimatedCompletion": "2024-01-15T10:35:00Z" }, "trackingUrl": "/api/Execution/status/op_12345678-1234-1234-1234-123456789012", "message": "Processing dataset analysis..." 
} ``` ## Error Response Details ### Validation Error ```json { "error": { "code": "validation_failed", "message": "Request validation failed", "details": { "errors": [ { "field": "datasetId", "code": "invalid_format", "message": "Must be a valid GUID" }, { "field": "parameters.timeout", "code": "out_of_range", "message": "Must be between 1 and 3600 seconds" } ] }, "timestamp": "2024-01-15T10:30:00Z", "requestId": "req_12345" } } ``` ### Authentication Error ```json { "error": { "code": "invalid_token", "message": "The provided access token is invalid or expired", "details": { "tokenType": "bearer", "expiresAt": "2024-01-15T09:00:00Z", "suggestion": "Please refresh your access token" }, "timestamp": "2024-01-15T10:30:00Z", "requestId": "req_12345" } } ``` ### Rate Limiting Error ```json { "error": { "code": "rate_limit_exceeded", "message": "Rate limit exceeded for this endpoint", "details": { "limit": 100, "remaining": 0, "resetTime": "2024-01-15T11:00:00Z", "retryAfter": 1800 }, "timestamp": "2024-01-15T10:30:00Z", "requestId": "req_12345" } } ``` ## Response Headers ### Standard Headers | Header | Description | Example | |--------|-------------|---------| | `Content-Type` | Response format | application/json; charset=utf-8 | | `X-Request-Id` | Unique request identifier | req_12345678 | | `X-Response-Time` | Server processing time | 145ms | | `X-API-Version` | API version used | 1.0.0 | ### Rate Limiting Headers | Header | Description | Example | |--------|-------------|---------| | `X-RateLimit-Limit` | Maximum requests per window | 100 | | `X-RateLimit-Remaining` | Remaining requests in window | 95 | | `X-RateLimit-Reset` | Window reset timestamp | 1642251600 | | `Retry-After` | Seconds to wait before retry | 3600 | ## Best Practices for Error Handling ### JavaScript Example ```javascript async function handleAPIResponse(response) { // Check if response is ok if (!response.ok) { const errorData = await response.json(); switch (response.status) { case 400: throw 
new ValidationError(errorData.error.message, errorData.error.details);
      case 401:
        throw new AuthenticationError('Authentication failed');
      case 403:
        throw new AuthorizationError('Insufficient permissions');
      case 404:
        throw new NotFoundError('Resource not found');
      case 429: {
        const retryAfter = response.headers.get('Retry-After');
        throw new RateLimitError(`Rate limited. Retry after ${retryAfter} seconds`, retryAfter);
      }
      case 500:
      case 502:
      case 503:
      case 504:
        throw new ServerError('Server error occurred. Please retry.');
      default:
        throw new APIError(`Unexpected error: ${response.status}`);
    }
  }
  return await response.json();
}

// Usage with retry logic
async function apiCallWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url, options);
      return await handleAPIResponse(response);
    } catch (error) {
      if (error instanceof RateLimitError) {
        const retryAfter = parseInt(error.retryAfter) || Math.pow(2, attempt);
        await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
        continue;
      }
      if (error instanceof ServerError && attempt < maxRetries) {
        const delay = Math.pow(2, attempt) * 1000; // Exponential backoff
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw error;
    }
  }
}

// Minimal error classes referenced above
class APIError extends Error {}
class ValidationError extends APIError {
  constructor(message, details) { super(message); this.details = details; }
}
class AuthenticationError extends APIError {}
class AuthorizationError extends APIError {}
class NotFoundError extends APIError {}
class ServerError extends APIError {}
class RateLimitError extends APIError {
  constructor(message, retryAfter) { super(message); this.retryAfter = retryAfter; }
}
```

### Python Example

```python
import requests
import time
from typing import Dict, Any

class APIError(Exception):
    def __init__(self, message: str, status_code: int = None, details: Dict = None):
        super().__init__(message)
        self.status_code = status_code
        self.details = details or {}

def handle_api_response(response: requests.Response) -> Dict[str, Any]:
    """Handle API response with proper error handling."""
    if response.ok:
        return response.json()

    try:
        error_data = response.json()
    except ValueError:
        error_data = {"error": {"message": response.text}}

    error_info = error_data.get("error", {})
    message = error_info.get("message", f"HTTP {response.status_code}")
    details = error_info.get("details", {})

    if response.status_code == 429:
        retry_after = response.headers.get('Retry-After', '60')
        raise APIError(f"Rate limited. Retry after {retry_after} seconds",
                       response.status_code, details)
    elif response.status_code >= 500:
        raise APIError(f"Server error: {message}", response.status_code, details)
    elif response.status_code >= 400:
        raise APIError(f"Client error: {message}", response.status_code, details)

    raise APIError(f"Unexpected error: {message}", response.status_code, details)

def api_call_with_retry(url: str, method: str = 'GET', max_retries: int = 3, **kwargs) -> Dict[str, Any]:
    """Make API call with automatic retry logic."""
    for attempt in range(1, max_retries + 1):
        try:
            response = requests.request(method, url, **kwargs)
            return handle_api_response(response)
        except APIError as e:
            if e.status_code == 429:
                retry_after = int(e.details.get('retryAfter', 60))
                time.sleep(retry_after)
                continue
            elif e.status_code and e.status_code >= 500 and attempt < max_retries:
                delay = 2 ** attempt  # Exponential backoff
                time.sleep(delay)
                continue
            raise

    raise APIError(f"Max retries ({max_retries}) exceeded")
```

## Next Steps

Now that you understand response formats, explore specific API sections like [Actions](/mindzie_api/action), [Blocks](/mindzie_api/block), or [Datasets](/mindzie_api/dataset) to see these patterns in action.

---

# Tenant API

System-level tenant management operations for mindzieStudio. Create, list, update, and delete tenants across the platform.

**IMPORTANT:** All Tenant API endpoints require a **Global API Key**. Regular tenant-specific API keys cannot access these endpoints.

## Features

### Management
List all tenants, retrieve tenant details, create new tenants, and update tenant settings. Configure user limits, case limits, and tenant properties.
[View Management](/mindzie_api/tenant/management)

### Deletion
Permanently delete tenants with triple verification for safety. Includes best practices for data export and safe deletion workflows.
[View Deletion](/mindzie_api/tenant/deletion) --- ## Available Endpoints | Method | Endpoint | Description | |--------|----------|-------------| | GET | `/api/tenant` | List all tenants | | GET | `/api/tenant/{tenantId}` | Get tenant by ID | | POST | `/api/tenant` | Create a tenant | | PUT | `/api/tenant` | Update a tenant | | DELETE | `/api/tenant` | Delete a tenant | --- ## Tenant Object Fields | Field | Type | Description | |-------|------|-------------| | `tenantId` | GUID | Unique identifier for the tenant | | `name` | string | Unique system name (used in URLs) | | `displayName` | string | Human-readable display name | | `description` | string | Description of the tenant | | `caseCount` | integer | Total number of cases | | `maxUserCount` | integer | Maximum allowed users | | `maxAnalystCount` | integer | Maximum allowed analysts | | `userCount` | integer | Current number of users | | `analystCount` | integer | Current number of analysts | | `isDisabled` | boolean | Whether the tenant is disabled | | `isAcademic` | boolean | Whether this is an academic tenant | | `preRelease` | boolean | Whether tenant has pre-release features | | `dateCreated` | datetime | When the tenant was created | --- ## Authentication | Endpoint | API Key Type | Access | |----------|--------------|--------| | All `/api/tenant` endpoints | Global API Key | Required | | | Tenant API Key | 401 Unauthorized | Global API keys can be created through the admin interface at `/admin/global-api-keys`. See [Authentication](/mindzie_api/authentication) for details on API key types and usage. 
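The tenant fields above (`userCount`/`maxUserCount`, `analystCount`/`analystCount` caps) make it straightforward to monitor how close a tenant is to its license limits. A small sketch using only the documented fields (the helper itself is illustrative, not an API endpoint):

```python
def license_utilization(tenant: dict) -> dict:
    """Compute user and analyst utilization ratios from a tenant object."""
    def ratio(used, cap):
        # Guard against a zero/unset cap rather than dividing by zero.
        return used / cap if cap else 0.0
    return {
        "users": ratio(tenant["userCount"], tenant["maxUserCount"]),
        "analysts": ratio(tenant["analystCount"], tenant["maxAnalystCount"]),
    }
```

Run this over the output of `GET /api/tenant` (Global API Key required) to flag tenants approaching their limits.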
--- ## Quick Start ```bash # List all tenants (Global API key required) curl -X GET "https://your-mindzie-instance.com/api/tenant" \ -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" # Get a specific tenant curl -X GET "https://your-mindzie-instance.com/api/tenant/{tenantId}" \ -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" ``` --- ## Important Notes - **Global API Keys Only**: All tenant endpoints require Global API keys - **License Limits**: Monitor tenant counts against license limits - **Destructive Operations**: Tenant deletion is permanent and irreversible - **Triple Verification**: Delete operations require ID, name, and display name to match exactly - **Disable vs Delete**: Consider disabling tenants to preserve data while preventing access --- Create, list, retrieve, and update tenants in the mindzieStudio platform. **IMPORTANT:** All endpoints on this page require a **Global API Key**. Tenant-specific API keys will receive a 401 Unauthorized error. --- ## List All Tenants **GET** `/api/tenant` Retrieves a paginated list of all tenants in the system with summary statistics. 
### Query Parameters | Parameter | Type | Default | Description | |-----------|------|---------|-------------| | `page` | integer | 1 | Page number for pagination | | `pageSize` | integer | 50 | Number of items per page (max: 100) | ### Response (200 OK) ```json { "tenants": [ { "tenantId": "12345678-1234-1234-1234-123456789012", "name": "acme-corp", "displayName": "Acme Corporation", "description": "Main tenant for Acme Corporation", "caseCount": 50000, "maxUserCount": 100, "maxAnalystCount": 20, "analystCount": 12, "userCount": 45, "preRelease": false, "isAcademic": false, "autoload": true, "dateCreated": "2024-01-15T10:30:00Z", "isDisabled": false } ], "totalCount": 5, "page": 1, "pageSize": 50 } ``` ### Tenant Object Fields | Field | Type | Description | |-------|------|-------------| | `tenantId` | GUID | Unique identifier for the tenant | | `name` | string | Unique system name (used in URLs) | | `displayName` | string | Human-readable display name | | `description` | string | Description of the tenant | | `caseCount` | integer | Total number of cases across all datasets | | `maxUserCount` | integer | Maximum allowed users | | `maxAnalystCount` | integer | Maximum allowed analysts | | `analystCount` | integer | Current number of analysts | | `userCount` | integer | Current number of users | | `preRelease` | boolean | Whether tenant has pre-release features | | `isAcademic` | boolean | Whether this is an academic tenant | | `autoload` | boolean | Whether to auto-load projects | | `dateCreated` | datetime | When the tenant was created | | `isDisabled` | boolean | Whether the tenant is disabled | ### Error Responses **Unauthorized (401):** ```json { "error": "This endpoint requires a Global API key. Tenant-specific API keys cannot list all tenants.", "hint": "Global API keys can be created at /admin/global-api-keys" } ``` --- ## Get Tenant by ID **GET** `/api/tenant/{tenantId}` Retrieves detailed information for a specific tenant by its ID. 
### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The unique identifier of the tenant | ### Response (200 OK) ```json { "tenantId": "12345678-1234-1234-1234-123456789012", "name": "acme-corp", "displayName": "Acme Corporation", "description": "Main tenant for Acme Corporation", "isAcademic": false, "preRelease": false, "maxUserCount": 100, "maxAnalystCount": 20, "maxCases": 100000, "dateCreated": "2024-01-15T10:30:00Z", "isDisabled": false } ``` ### Error Responses **Not Found (404):** ```json { "error": "Tenant with ID '12345678-1234-1234-1234-123456789012' not found" } ``` --- ## Create Tenant **POST** `/api/tenant` Creates a new tenant in the system with all necessary infrastructure. ### Request Body ```json { "name": "new-tenant", "displayName": "New Tenant Corp", "description": "Description of the new tenant", "maxUsers": 50, "maxAnalyst": 10, "maxCases": 100000, "timeZone": "America/New_York" } ``` ### Request Fields | Field | Type | Required | Description | |-------|------|----------|-------------| | `name` | string | Yes | Unique system name (3-63 chars, lowercase alphanumeric with hyphens) | | `displayName` | string | Yes | Human-readable display name | | `description` | string | No | Description of the tenant | | `maxUsers` | integer | Yes | Maximum number of users | | `maxAnalyst` | integer | Yes | Maximum number of analysts | | `maxCases` | integer | Yes | Maximum number of cases | | `timeZone` | string | No | Timezone for the tenant | ### Tenant Name Requirements - 3-63 characters - Lowercase alphanumeric with hyphens only - No spaces or special characters - Must be unique across all tenants ### Response (201 Created) ```json { "tenantId": "aabbccdd-1234-1234-1234-123456789012", "name": "new-tenant", "displayName": "New Tenant Corp", "message": "Tenant 'New Tenant Corp' created successfully", "storageContainerCreated": true } ``` ### Error Responses **Conflict 
(409):** ```json { "error": "A tenant with name 'new-tenant' already exists" } ``` **License Limit (429):** ```json { "error": "Maximum number of tenants reached. Your license allows 10 tenants.", "hint": "Upgrade your license to create more tenants" } ``` **Validation Error (400):** ```json { "error": "Validation failed", "validationErrors": [ "Name must be between 3 and 63 characters", "Name can only contain lowercase letters, numbers, and hyphens" ] } ``` --- ## Update Tenant **PUT** `/api/tenant` Updates an existing tenant's properties. Only provided fields will be updated; null values are ignored. ### Request Body ```json { "tenantId": "12345678-1234-1234-1234-123456789012", "displayName": "Acme Corporation Updated", "description": "Updated description", "maxUsers": 100, "maxAnalyst": 25, "maxCases": 200000, "timeZone": "America/New_York", "isAcademic": false, "preRelease": true, "isDisabled": false } ``` ### Request Fields | Field | Type | Required | Description | |-------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant to update | | `displayName` | string | No | New display name (null = no change) | | `description` | string | No | New description (null = no change, "" = clear) | | `maxUsers` | integer | No | Maximum users (null = no change) | | `maxAnalyst` | integer | No | Maximum analysts (null = no change) | | `maxCases` | integer | No | Maximum cases (null = no change, -1 = unlimited) | | `timeZone` | string | No | TimeZone ID (null = no change) | | `isAcademic` | boolean | No | Academic flag (null = no change) | | `preRelease` | boolean | No | Pre-release features (null = no change) | | `isDisabled` | boolean | No | Disable tenant (null = no change) | **Note:** The tenant `name` cannot be changed after creation. 
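Because the API treats absent or null fields as "no change", a client helper must distinguish "not provided" from deliberate values such as `""` (clear) or `-1` (unlimited). A sketch using a sentinel default; the field names match the request table above, but the helper itself is illustrative, not part of the API:

```python
_UNSET = object()  # sentinel: "caller did not provide this field"

def build_tenant_update(tenant_id, display_name=_UNSET, description=_UNSET,
                        max_users=_UNSET, max_analyst=_UNSET, max_cases=_UNSET,
                        time_zone=_UNSET, is_academic=_UNSET,
                        pre_release=_UNSET, is_disabled=_UNSET):
    """Build a PUT /api/tenant body containing only the fields to change.

    Omitted fields are left out of the payload entirely (no change);
    pass description="" to clear it or max_cases=-1 for unlimited.
    """
    candidates = {
        "displayName": display_name, "description": description,
        "maxUsers": max_users, "maxAnalyst": max_analyst,
        "maxCases": max_cases, "timeZone": time_zone,
        "isAcademic": is_academic, "preRelease": pre_release,
        "isDisabled": is_disabled,
    }
    payload = {"tenantId": tenant_id}
    payload.update({k: v for k, v in candidates.items() if v is not _UNSET})
    return payload
```

For example, `build_tenant_update("t-1", description="", max_cases=-1)` produces a body that clears the description and removes the case limit while leaving every other property untouched.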
### Response (200 OK) ```json { "tenantId": "12345678-1234-1234-1234-123456789012", "name": "acme-corp", "displayName": "Acme Corporation Updated", "message": "Tenant 'acme-corp' updated successfully", "isDisabled": false } ``` ### Error Responses **Not Found (404):** ```json { "error": "Tenant with ID '12345678-1234-1234-1234-123456789012' not found" } ``` **Validation Error (400):** ```json { "error": "Validation failed", "validationErrors": ["Display name cannot exceed 255 characters"] } ``` --- ## Implementation Examples ### cURL ```bash # List all tenants curl -X GET "https://your-mindzie-instance.com/api/tenant?page=1&pageSize=50" \ -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" # Get specific tenant by ID curl -X GET "https://your-mindzie-instance.com/api/tenant/12345678-1234-1234-1234-123456789012" \ -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" # Create tenant curl -X POST "https://your-mindzie-instance.com/api/tenant" \ -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "name": "new-tenant", "displayName": "New Tenant Corp", "description": "Test tenant", "maxUsers": 50, "maxAnalyst": 10, "maxCases": 100000 }' # Update tenant curl -X PUT "https://your-mindzie-instance.com/api/tenant" \ -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "tenantId": "12345678-1234-1234-1234-123456789012", "displayName": "New Tenant Corp Updated", "maxUsers": 100, "isDisabled": false }' ``` ### Python ```python import requests BASE_URL = 'https://your-mindzie-instance.com' class TenantManager: def __init__(self, global_api_key): """Initialize with a GLOBAL API key (not tenant-specific).""" self.headers = { 'Authorization': f'Bearer {global_api_key}', 'Content-Type': 'application/json' } def list_tenants(self, page=1, page_size=50): """List all tenants in the system.""" url = f'{BASE_URL}/api/tenant' params = {'page': page, 'pageSize': page_size} response = requests.get(url, 
headers=self.headers, params=params) response.raise_for_status() return response.json() def get_tenant(self, tenant_id): """Get a specific tenant by ID.""" url = f'{BASE_URL}/api/tenant/{tenant_id}' response = requests.get(url, headers=self.headers) response.raise_for_status() return response.json() def create_tenant(self, name, display_name, description='', max_users=50, max_analyst=10, max_cases=100000, timezone=None): """Create a new tenant.""" url = f'{BASE_URL}/api/tenant' payload = { 'name': name, 'displayName': display_name, 'description': description, 'maxUsers': max_users, 'maxAnalyst': max_analyst, 'maxCases': max_cases } if timezone: payload['timeZone'] = timezone response = requests.post(url, json=payload, headers=self.headers) response.raise_for_status() return response.json() def update_tenant(self, tenant_id, display_name=None, description=None, max_users=None, max_analyst=None, is_disabled=None): """Update an existing tenant.""" url = f'{BASE_URL}/api/tenant' payload = {'tenantId': tenant_id} if display_name is not None: payload['displayName'] = display_name if description is not None: payload['description'] = description if max_users is not None: payload['maxUsers'] = max_users if max_analyst is not None: payload['maxAnalyst'] = max_analyst if is_disabled is not None: payload['isDisabled'] = is_disabled response = requests.put(url, json=payload, headers=self.headers) response.raise_for_status() return response.json() # Usage manager = TenantManager('your-global-api-key') # List all tenants result = manager.list_tenants() print(f"Total tenants: {result['totalCount']}") for tenant in result['tenants']: print(f"- {tenant['displayName']} ({tenant['name']})") print(f" Users: {tenant['userCount']}/{tenant['maxUserCount']}") # Create a new tenant new_tenant = manager.create_tenant( name='test-tenant', display_name='Test Tenant', description='Created via API', max_users=25, max_analyst=5, max_cases=50000 ) print(f"Created tenant: {new_tenant['tenantId']}") 
# Update tenant limits manager.update_tenant( tenant_id=new_tenant['tenantId'], max_users=50, max_analyst=10 ) print("Tenant limits updated") ``` ### JavaScript ```javascript const BASE_URL = 'https://your-mindzie-instance.com'; class TenantManager { constructor(globalApiKey) { this.headers = { 'Authorization': `Bearer ${globalApiKey}`, 'Content-Type': 'application/json' }; } async listTenants(page = 1, pageSize = 50) { const url = `${BASE_URL}/api/tenant?page=${page}&pageSize=${pageSize}`; const response = await fetch(url, { headers: this.headers }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return await response.json(); } async getTenant(tenantId) { const url = `${BASE_URL}/api/tenant/${tenantId}`; const response = await fetch(url, { headers: this.headers }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return await response.json(); } async createTenant(config) { const url = `${BASE_URL}/api/tenant`; const response = await fetch(url, { method: 'POST', headers: this.headers, body: JSON.stringify(config) }); if (!response.ok) { const error = await response.json(); throw new Error(error.error || `Failed: ${response.status}`); } return await response.json(); } async updateTenant(tenantId, updates) { const url = `${BASE_URL}/api/tenant`; const response = await fetch(url, { method: 'PUT', headers: this.headers, body: JSON.stringify({ tenantId, ...updates }) }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return await response.json(); } } // Usage const manager = new TenantManager('your-global-api-key'); // List tenants const tenants = await manager.listTenants(); console.log(`Found ${tenants.totalCount} tenants`); // Create tenant const newTenant = await manager.createTenant({ name: 'new-tenant', displayName: 'New Tenant', maxUsers: 50, maxAnalyst: 10, maxCases: 100000 }); console.log(`Created: ${newTenant.tenantId}`); // Update tenant await manager.updateTenant(newTenant.tenantId, { displayName: 'Updated 
Tenant Name', maxUsers: 100 }); ``` --- ## Best Practices 1. **Global API Keys**: Only use global API keys for tenant management - they have significant system-wide privileges 2. **License Awareness**: Monitor tenant counts against license limits before creating new tenants 3. **Capacity Planning**: Set appropriate user and analyst limits based on expected usage 4. **Naming Conventions**: Use consistent, lowercase naming with hyphens for tenant names --- Permanently delete a tenant and all related data. This operation requires triple verification for safety. **IMPORTANT:** All endpoints on this page require a **Global API Key**. This is a **dangerous operation** that cannot be undone. --- ## Delete Tenant **DELETE** `/api/tenant` Permanently deletes a tenant and all related data. Requires triple verification for safety. ### WARNING This operation is **IRREVERSIBLE**. All tenant data will be permanently deleted including: - All projects and their datasets - All investigations, notebooks, and dashboards - Blob storage container and files - Database records and settings - User assignments to the tenant (users themselves are not deleted) **Always export important data before deleting a tenant.** --- ## Request Body ```json { "tenantId": "12345678-1234-1234-1234-123456789012", "name": "acme-corp", "displayName": "Acme Corporation" } ``` ### Triple Verification All three identifiers must match exactly for the deletion to proceed: | Field | Type | Required | Description | |-------|------|----------|-------------| | `tenantId` | GUID | Yes | Tenant ID to delete | | `name` | string | Yes | Tenant name (must match exactly) | | `displayName` | string | Yes | Display name (must match exactly) | This triple verification prevents accidental deletions by requiring you to know and confirm all three identifiers. 
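You can mirror the server's check client-side before ever sending the DELETE: fetch the tenant with `GET /api/tenant/{tenantId}` and compare all three identifiers locally. A minimal sketch of that comparison as a pure function (the function name is illustrative):

```python
def verify_delete_identifiers(tenant: dict, tenant_id: str, name: str,
                              display_name: str) -> list:
    """Compare a fetched tenant object against the identifiers you intend
    to send in the DELETE body.

    Returns a list of mismatch descriptions; an empty list means all three
    identifiers match and the deletion request should pass verification.
    """
    errors = []
    if tenant.get("tenantId") != tenant_id:
        errors.append("tenantId does not match the fetched tenant")
    if tenant.get("name") != name:
        errors.append(f"name mismatch: server has '{tenant.get('name')}'")
    if tenant.get("displayName") != display_name:
        errors.append(f"displayName mismatch: server has '{tenant.get('displayName')}'")
    return errors
```

Only issue the DELETE when the list comes back empty; the implementation examples below build this pattern into a full verify-then-confirm workflow.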
--- ## Response (200 OK) ```json { "message": "Tenant 'Acme Corporation' deleted successfully", "tenantName": "acme-corp", "tenantDisplayName": "Acme Corporation", "storageContainerDeleted": true } ``` ### Response Fields | Field | Type | Description | |-------|------|-------------| | `message` | string | Success confirmation message | | `tenantName` | string | The deleted tenant's system name | | `tenantDisplayName` | string | The deleted tenant's display name | | `storageContainerDeleted` | boolean | Whether the blob storage was deleted | --- ## Error Responses ### Not Found (404) ```json { "error": "Tenant not found with ID '12345678-1234-1234-1234-123456789012'" } ``` ### Verification Failed (400) When the tenant name doesn't match: ```json { "error": "Tenant name 'wrong-name' does not match the tenant with ID '12345678-1234-1234-1234-123456789012'. Expected 'acme-corp'.", "hint": "All three identifiers (ID, name, display name) must match exactly for safety" } ``` When the display name doesn't match: ```json { "error": "Display name 'Wrong Name' does not match the tenant with ID '12345678-1234-1234-1234-123456789012'. 
Expected 'Acme Corporation'.", "hint": "All three identifiers (ID, name, display name) must match exactly for safety" } ``` ### Unauthorized (401) ```json { "error": "This endpoint requires a Global API key.", "hint": "Global API keys can be created at /admin/global-api-keys" } ``` --- ## Implementation Examples ### cURL ```bash # Delete tenant (requires triple verification) curl -X DELETE "https://your-mindzie-instance.com/api/tenant" \ -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "tenantId": "12345678-1234-1234-1234-123456789012", "name": "test-tenant", "displayName": "Test Tenant" }' ``` ### Python ```python import requests BASE_URL = 'https://your-mindzie-instance.com' class TenantDeleter: def __init__(self, global_api_key): """Initialize with a GLOBAL API key (not tenant-specific).""" self.headers = { 'Authorization': f'Bearer {global_api_key}', 'Content-Type': 'application/json' } def get_tenant(self, tenant_id): """Get tenant details for verification.""" url = f'{BASE_URL}/api/tenant/{tenant_id}' response = requests.get(url, headers=self.headers) response.raise_for_status() return response.json() def delete_tenant(self, tenant_id, name, display_name, confirm=False): """ Delete a tenant with triple verification. 
Args: tenant_id: The tenant GUID name: The tenant system name (must match exactly) display_name: The tenant display name (must match exactly) confirm: Set to True to actually perform deletion """ if not confirm: # Verification mode - check that values match tenant = self.get_tenant(tenant_id) errors = [] if tenant['name'] != name: errors.append(f"Name mismatch: expected '{tenant['name']}', got '{name}'") if tenant['displayName'] != display_name: errors.append(f"Display name mismatch: expected '{tenant['displayName']}', got '{display_name}'") if errors: raise ValueError("Verification failed:\n" + "\n".join(errors)) print(f"Verification passed for tenant '{display_name}' ({name})") print(f" - ID: {tenant_id}") print(f" - Users: {tenant.get('userCount', 'N/A')}") print(f" - Cases: {tenant.get('caseCount', 'N/A')}") print("\nCall with confirm=True to proceed with deletion.") return None # Perform the deletion url = f'{BASE_URL}/api/tenant' payload = { 'tenantId': tenant_id, 'name': name, 'displayName': display_name } response = requests.delete(url, json=payload, headers=self.headers) if response.ok: return response.json() elif response.status_code == 404: raise Exception(f'Tenant not found: {tenant_id}') elif response.status_code == 400: error = response.json() raise Exception(f"Verification failed: {error.get('error', 'Unknown error')}") else: raise Exception(f'Failed to delete tenant: {response.text}') # Usage - Safe deletion workflow deleter = TenantDeleter('your-global-api-key') tenant_id = '12345678-1234-1234-1234-123456789012' tenant_name = 'test-tenant' display_name = 'Test Tenant' # Step 1: Verify (no deletion occurs) try: deleter.delete_tenant(tenant_id, tenant_name, display_name, confirm=False) except ValueError as e: print(f"Verification failed: {e}") exit(1) # Step 2: Confirm deletion (uncomment when ready) # result = deleter.delete_tenant(tenant_id, tenant_name, display_name, confirm=True) # print(f"Deleted: {result['message']}") ``` ### JavaScript 
```javascript const BASE_URL = 'https://your-mindzie-instance.com'; class TenantDeleter { constructor(globalApiKey) { this.headers = { 'Authorization': `Bearer ${globalApiKey}`, 'Content-Type': 'application/json' }; } async getTenant(tenantId) { const url = `${BASE_URL}/api/tenant/${tenantId}`; const response = await fetch(url, { headers: this.headers }); if (!response.ok) throw new Error(`Failed to get tenant: ${response.status}`); return await response.json(); } async deleteTenant(tenantId, name, displayName, confirm = false) { if (!confirm) { // Verification mode const tenant = await this.getTenant(tenantId); const errors = []; if (tenant.name !== name) { errors.push(`Name mismatch: expected '${tenant.name}', got '${name}'`); } if (tenant.displayName !== displayName) { errors.push(`Display name mismatch: expected '${tenant.displayName}', got '${displayName}'`); } if (errors.length > 0) { throw new Error('Verification failed:\n' + errors.join('\n')); } console.log(`Verification passed for tenant '${displayName}' (${name})`); console.log(` - ID: ${tenantId}`); console.log(` - Users: ${tenant.userCount || 'N/A'}`); console.log('\nCall with confirm=true to proceed with deletion.'); return null; } // Perform the deletion const url = `${BASE_URL}/api/tenant`; const response = await fetch(url, { method: 'DELETE', headers: this.headers, body: JSON.stringify({ tenantId, name, displayName }) }); if (response.ok) { return await response.json(); } const error = await response.json(); throw new Error(error.error || `Delete failed: ${response.status}`); } } // Usage - Safe deletion workflow const deleter = new TenantDeleter('your-global-api-key'); const tenantId = '12345678-1234-1234-1234-123456789012'; const tenantName = 'test-tenant'; const displayName = 'Test Tenant'; // Step 1: Verify (no deletion occurs) try { await deleter.deleteTenant(tenantId, tenantName, displayName, false); } catch (e) { console.error(`Verification failed: ${e.message}`); process.exit(1); } // Step 
2: Confirm deletion (uncomment when ready) // const result = await deleter.deleteTenant(tenantId, tenantName, displayName, true); // console.log(`Deleted: ${result.message}`); ``` --- ## Safety Best Practices ### Before Deletion 1. **Export All Data**: Use the Project API to export all projects as .mpz files 2. **Verify Users**: Check which users are assigned and notify them 3. **Document**: Record what is being deleted and why for audit purposes 4. **Double-Check**: Verify the tenant ID, name, and display name are correct ### During Deletion 1. **Use the Verification Pattern**: Call the delete endpoint in verify-only mode first 2. **Check Response**: Verify the response shows the correct tenant information 3. **Confirm Deliberately**: Only pass `confirm=True` after manual verification ### After Deletion 1. **Verify Deletion**: Confirm the tenant no longer appears in the tenant list 2. **Check Users**: Verify affected users can no longer access the deleted tenant 3. **Update Documentation**: Record the deletion in your system documentation --- ## Alternative: Disable Instead of Delete Consider disabling a tenant instead of deleting it to preserve data while preventing access: ```bash # Disable tenant (preserves data) curl -X PUT "https://your-mindzie-instance.com/api/tenant" \ -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "tenantId": "12345678-1234-1234-1234-123456789012", "isDisabled": true }' ``` Disabled tenants: - Users cannot log in to the tenant - All data is preserved - Can be re-enabled later by setting `isDisabled: false` - Appear in the tenant list with `isDisabled: true` This is often a safer choice than permanent deletion. --- # User API Manage users across the mindzieStudio platform. Create, update, and assign users to tenants with flexible API scopes. ## Features ### Global Operations System-wide user management with a Global API Key. 
List all users, create users, update properties, and manage tenant assignments across the entire platform. [View Global Operations](/mindzie_api/user/global) ### Tenant Operations Tenant-scoped user management that works with either Global or Tenant API Keys. Manage users within a specific tenant context. [View Tenant Operations](/mindzie_api/user/tenant-scoped) ### Roles & Permissions User roles define access levels and capabilities. Understand role hierarchy, service accounts, and best practices for access management. [View Roles & Permissions](/mindzie_api/user/roles) --- ## API Scopes The User API has two scopes: | Scope | Base Path | API Key Required | |-------|-----------|------------------| | Global | `/api/user` | Global API Key | | Tenant-scoped | `/api/tenant/{tenantId}/user` | Global or Tenant API Key | --- ## Available Endpoints ### Global User Endpoints | Method | Endpoint | Description | |--------|----------|-------------| | GET | `/api/user` | List all users | | POST | `/api/user` | Create a user | | GET | `/api/user/{userId}` | Get user by ID | | PUT | `/api/user/{userId}` | Update user | | GET | `/api/user/by-email/{email}` | Get user by email | | GET | `/api/user/{userId}/tenants` | Get user's tenants | ### Tenant-Scoped User Endpoints | Method | Endpoint | Description | |--------|----------|-------------| | GET | `/api/tenant/{tenantId}/user` | List tenant users | | POST | `/api/tenant/{tenantId}/user` | Create user in tenant | | GET | `/api/tenant/{tenantId}/user/{userId}` | Get user in tenant | | PUT | `/api/tenant/{tenantId}/user/{userId}` | Update user in tenant | | GET | `/api/tenant/{tenantId}/user/by-email/{email}` | Get by email in tenant | | POST | `/api/tenant/{tenantId}/user/{userId}` | Assign user to tenant | | DELETE | `/api/tenant/{tenantId}/user/{userId}` | Remove from tenant | --- ## User Roles | Role | Level | Description | |------|-------|-------------| | **Administrator** | System | Full system access across all tenants | | 
**TenantAdmin** | Tenant | Full access within assigned tenants | | **Analyst** | Project | Create and manage analyses within projects | | **Viewer** | Read-only | View dashboards and reports only | --- ## Authentication | Endpoint Scope | API Key Type | Access | |----------------|--------------|--------| | Global (`/api/user`) | Global API Key | All tenants | | Tenant-scoped | Global API Key | All tenants | | Tenant-scoped | Tenant API Key | Own tenant only | See [Authentication](/mindzie_api/authentication) for details on API key types and usage. --- ## Quick Start ```bash # List all users (Global API key required) curl -X GET "https://your-mindzie-instance.com/api/user" \ -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" # List users in a tenant (Tenant API key works) curl -X GET "https://your-mindzie-instance.com/api/tenant/{tenantId}/user" \ -H "Authorization: Bearer YOUR_TENANT_API_KEY" ``` --- ## Important Notes - **Global vs Tenant Keys**: Use tenant-scoped keys for most operations; reserve global keys for system administration - **User Deactivation**: Use `disabled: true` instead of deleting users to preserve audit trails - **Service Accounts**: Only Administrator and TenantAdmin roles can be service accounts - **Capacity Limits**: Tenants have configurable user and analyst limits --- Global user endpoints provide system-wide user management capabilities. These endpoints require a **Global API Key** and can access users across all tenants. ## Authentication All endpoints on this page require a **Global API Key**. Tenant-scoped API keys will receive a 401 Unauthorized error. --- ## List All Users **GET** `/api/user` Retrieves a paginated list of all users across all tenants. 
### Query Parameters | Parameter | Type | Default | Description | |-----------|------|---------|-------------| | `page` | integer | 1 | Page number for pagination | | `pageSize` | integer | 50 | Number of items per page (max: 1000) | | `includeDisabled` | boolean | false | Include disabled users | | `role` | string | null | Filter by role name | | `search` | string | null | Search by email or display name | ### Response (200 OK) ```json { "users": [ { "userId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", "email": "john.smith@example.com", "displayName": "John Smith", "firstName": "John", "lastName": "Smith", "roleName": "Analyst", "disabled": false, "isServiceAccount": false, "homeTenantId": null, "homeTenantName": null, "lastLogin": "2024-01-15T10:30:00Z", "tenantCount": 2, "tenantNames": "acme-corp, globex-inc", "dateCreated": "2024-01-01T00:00:00Z" } ], "totalCount": 150, "page": 1, "pageSize": 50 } ``` ### User Object Fields | Field | Type | Description | |-------|------|-------------| | `userId` | GUID | Unique identifier for the user | | `email` | string | User's email address (unique) | | `displayName` | string | User's display name | | `firstName` | string | User's first name | | `lastName` | string | User's last name | | `roleName` | string | User's role (Administrator, TenantAdmin, Analyst, etc.) | | `disabled` | boolean | Whether the user account is disabled | | `isServiceAccount` | boolean | Whether this is a service account | | `homeTenantId` | GUID | Home tenant for service accounts | | `homeTenantName` | string | Home tenant name for service accounts | | `lastLogin` | datetime | Last login timestamp | | `tenantCount` | integer | Number of tenants user is assigned to | | `tenantNames` | string | Comma-separated list of tenant names | | `dateCreated` | datetime | Account creation date | ### Error Responses **Unauthorized (401):** ```json { "error": "This endpoint requires a Global API key. 
Tenant-specific API keys cannot list all users.", "hint": "Use /api/tenant/{tenantId}/user to list users for a specific tenant, or create a Global API key at /admin/global-api-keys" } ``` --- ## Create User **POST** `/api/user` Creates a new user in the system. This does NOT assign the user to any tenants. ### Request Body ```json { "email": "john.smith@example.com", "displayName": "John Smith", "firstName": "John", "lastName": "Smith", "roleName": "Analyst" } ``` ### Request Fields | Field | Type | Required | Description | |-------|------|----------|-------------| | `email` | string | Yes | User's email (must be unique) | | `displayName` | string | Yes | Display name (2-100 characters) | | `firstName` | string | No | First name (max 50 characters) | | `lastName` | string | No | Last name (max 50 characters) | | `roleName` | string | Yes | Role name (see [Roles & Permissions](/mindzie_api/user/roles)) | ### Response (201 Created) ```json { "userId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", "email": "john.smith@example.com", "displayName": "John Smith", "message": "User created successfully" } ``` ### Error Responses **Conflict (409):** ```json { "error": "A user with email 'john.smith@example.com' already exists" } ``` --- ## Get User by ID **GET** `/api/user/{userId}` Retrieves detailed information for a specific user. ### Path Parameters | Parameter | Type | Description | |-----------|------|-------------| | `userId` | GUID | The user identifier | ### Response (200 OK) Returns a full user object with tenant assignments. ### Error Responses **Not Found (404):** ```json { "error": "User not found with ID 'a1b2c3d4-e5f6-7890-abcd-ef1234567890'", "userId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890" } ``` --- ## Update User **PUT** `/api/user/{userId}` Updates user properties. Only provided fields will be updated. 
### Path Parameters | Parameter | Type | Description | |-----------|------|-------------| | `userId` | GUID | The user identifier | ### Request Body ```json { "displayName": "Jane Smith", "roleName": "TenantAdmin", "disabled": false, "isServiceAccount": true, "homeTenantId": "12345678-1234-1234-1234-123456789012" } ``` ### Request Fields | Field | Type | Required | Description | |-------|------|----------|-------------| | `displayName` | string | No | New display name | | `roleName` | string | No | New role name | | `disabled` | boolean | No | Enable/disable account | | `isServiceAccount` | boolean | No | Service account flag | | `homeTenantId` | GUID | Conditional | Required if making service account | ### Service Account Rules - Only **Administrator** and **TenantAdmin** roles can be service accounts - When promoting to service account, `homeTenantId` is **required** - When demoting from service account, `homeTenantId` is automatically cleared ### Response (200 OK) ```json { "message": "User updated successfully" } ``` --- ## Get User by Email **GET** `/api/user/by-email/{email}` Retrieves a user by their email address. ### Path Parameters | Parameter | Type | Description | |-----------|------|-------------| | `email` | string | The user's email address (URL encoded) | ### Response (200 OK) Returns a full user object. --- ## Get User's Tenants **GET** `/api/user/{userId}/tenants` Retrieves all tenant assignments for a user. 
### Path Parameters | Parameter | Type | Description | |-----------|------|-------------| | `userId` | GUID | The user identifier | ### Response (200 OK) ```json { "userId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", "email": "john.smith@example.com", "displayName": "John Smith", "tenants": [ { "tenantId": "12345678-1234-1234-1234-123456789012", "tenantName": "acme-corp", "displayName": "Acme Corporation", "dateAssigned": "2024-01-15T10:30:00Z" } ] } ``` --- ## Implementation Examples ### cURL ```bash # List all users (Global API key required) curl -X GET "https://your-mindzie-instance.com/api/user?page=1&pageSize=50" \ -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" # Search for users by name curl -X GET "https://your-mindzie-instance.com/api/user?search=john" \ -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" # Filter by role curl -X GET "https://your-mindzie-instance.com/api/user?role=Analyst" \ -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" # Create a new user curl -X POST "https://your-mindzie-instance.com/api/user" \ -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "email": "john.smith@example.com", "displayName": "John Smith", "roleName": "Analyst" }' # Get user by ID curl -X GET "https://your-mindzie-instance.com/api/user/a1b2c3d4-e5f6-7890-abcd-ef1234567890" \ -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" # Get user by email curl -X GET "https://your-mindzie-instance.com/api/user/by-email/john.smith%40example.com" \ -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" # Get user's tenants curl -X GET "https://your-mindzie-instance.com/api/user/a1b2c3d4-e5f6-7890-abcd-ef1234567890/tenants" \ -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" ``` ### Python ```python import requests BASE_URL = 'https://your-mindzie-instance.com' class GlobalUserManager: def __init__(self, global_api_key): """Initialize with a GLOBAL API key (not tenant-specific).""" self.headers = { 'Authorization': f'Bearer {global_api_key}', 'Content-Type': 
'application/json' } def list_users(self, page=1, page_size=50, include_disabled=False, role=None, search=None): """List all users across all tenants.""" url = f'{BASE_URL}/api/user' params = { 'page': page, 'pageSize': page_size, 'includeDisabled': include_disabled } if role: params['role'] = role if search: params['search'] = search response = requests.get(url, headers=self.headers, params=params) response.raise_for_status() return response.json() def create_user(self, email, display_name, role_name, first_name=None, last_name=None): """Create a new user (not assigned to any tenant).""" url = f'{BASE_URL}/api/user' payload = { 'email': email, 'displayName': display_name, 'roleName': role_name } if first_name: payload['firstName'] = first_name if last_name: payload['lastName'] = last_name response = requests.post(url, json=payload, headers=self.headers) response.raise_for_status() return response.json() def get_user(self, user_id): """Get user by ID.""" url = f'{BASE_URL}/api/user/{user_id}' response = requests.get(url, headers=self.headers) response.raise_for_status() return response.json() def get_user_by_email(self, email): """Get user by email address.""" from urllib.parse import quote url = f'{BASE_URL}/api/user/by-email/{quote(email, safe="")}' response = requests.get(url, headers=self.headers) response.raise_for_status() return response.json() def update_user(self, user_id, display_name=None, role_name=None, disabled=None, is_service_account=None, home_tenant_id=None): """Update user properties.""" url = f'{BASE_URL}/api/user/{user_id}' payload = {} if display_name is not None: payload['displayName'] = display_name if role_name is not None: payload['roleName'] = role_name if disabled is not None: payload['disabled'] = disabled if is_service_account is not None: payload['isServiceAccount'] = is_service_account if home_tenant_id is not None: payload['homeTenantId'] = home_tenant_id response = requests.put(url, json=payload, headers=self.headers) 
response.raise_for_status() return response.json() def get_user_tenants(self, user_id): """Get all tenant assignments for a user.""" url = f'{BASE_URL}/api/user/{user_id}/tenants' response = requests.get(url, headers=self.headers) response.raise_for_status() return response.json() # Usage manager = GlobalUserManager('your-global-api-key') # List all analysts analysts = manager.list_users(role='Analyst') print(f"Total analysts: {analysts['totalCount']}") # Create a new user new_user = manager.create_user( email='new.analyst@example.com', display_name='New Analyst', role_name='Analyst', first_name='New', last_name='Analyst' ) print(f"Created user: {new_user['userId']}") # Get user's tenant assignments user_id = new_user['userId'] tenants = manager.get_user_tenants(user_id) print(f"User is assigned to {len(tenants['tenants'])} tenants") ``` ### JavaScript/Node.js ```javascript const BASE_URL = 'https://your-mindzie-instance.com'; class GlobalUserManager { constructor(globalApiKey) { this.headers = { 'Authorization': `Bearer ${globalApiKey}`, 'Content-Type': 'application/json' }; } async listUsers(options = {}) { const params = new URLSearchParams({ page: options.page || 1, pageSize: options.pageSize || 50, includeDisabled: options.includeDisabled || false }); if (options.role) params.append('role', options.role); if (options.search) params.append('search', options.search); const url = `${BASE_URL}/api/user?${params}`; const response = await fetch(url, { headers: this.headers }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return await response.json(); } async createUser(email, displayName, roleName) { const url = `${BASE_URL}/api/user`; const response = await fetch(url, { method: 'POST', headers: this.headers, body: JSON.stringify({ email, displayName, roleName }) }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return await response.json(); } async getUser(userId) { const url = `${BASE_URL}/api/user/${userId}`; const response 
= await fetch(url, { headers: this.headers }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return await response.json(); } async getUserTenants(userId) { const url = `${BASE_URL}/api/user/${userId}/tenants`; const response = await fetch(url, { headers: this.headers }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return await response.json(); } } // Usage const manager = new GlobalUserManager('your-global-api-key'); // List all users const users = await manager.listUsers(); console.log(`Total users: ${users.totalCount}`); // Create and check tenant assignments const newUser = await manager.createUser( 'new@example.com', 'New User', 'Analyst' ); const tenants = await manager.getUserTenants(newUser.userId); console.log(`Assigned to ${tenants.tenants.length} tenants`); ``` --- Tenant-scoped user endpoints manage users within a specific tenant. These endpoints can be accessed with either a **Global API Key** or a **Tenant API Key**. ## Authentication | API Key Type | Access | |--------------|--------| | Global API Key | Can access any tenant | | Tenant API Key | Can only access its own tenant | --- ## List Users for Tenant **GET** `/api/tenant/{tenantId}/user` Retrieves users assigned to a specific tenant. ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | ### Query Parameters | Parameter | Type | Default | Description | |-----------|------|---------|-------------| | `page` | integer | 1 | Page number for pagination | | `pageSize` | integer | 50 | Number of items per page (max: 1000) | | `includeDisabled` | boolean | false | Include disabled users | | `role` | string | null | Filter by role name | | `search` | string | null | Search by email or display name | ### Response (200 OK) Same structure as the global List All Users, but filtered to the specified tenant. 
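Because results are paginated, retrieving a complete user list means walking `page` until the reported `totalCount` is exhausted. A minimal Python sketch of that loop, assuming the paginated envelope shown above (`users`, `totalCount`); the injectable `get` parameter is our own testing convenience, not part of the mindzieAPI:

```python
import requests

BASE_URL = 'https://your-mindzie-instance.com'

def iter_tenant_users(api_key, tenant_id, page_size=100, get=requests.get):
    """Yield every user in a tenant, following the paginated envelope.

    `get` is injectable so the paging logic can be exercised without a
    live server (a local convenience, not part of the API).
    """
    headers = {'Authorization': f'Bearer {api_key}'}
    page = 1
    while True:
        response = get(
            f'{BASE_URL}/api/tenant/{tenant_id}/user',
            headers=headers,
            params={'page': page, 'pageSize': page_size},
        )
        response.raise_for_status()
        data = response.json()
        yield from data['users']
        # Stop once this page reaches the reported total.
        if page * page_size >= data['totalCount']:
            break
        page += 1
```

Keeping `pageSize` at or below the documented maximum avoids a round-trip per user while staying within server limits.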
--- ## Create User in Tenant **POST** `/api/tenant/{tenantId}/user` Creates a new user AND assigns them to the tenant, or assigns an existing user to the tenant. **Note:** If a user with the specified email already exists, they will be assigned to the tenant instead of creating a duplicate. ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | ### Request Body ```json { "email": "john.smith@example.com", "displayName": "John Smith", "firstName": "John", "lastName": "Smith", "roleName": "Analyst" } ``` ### Request Fields | Field | Type | Required | Description | |-------|------|----------|-------------| | `email` | string | Yes | User's email address | | `displayName` | string | Yes | Display name (2-100 characters) | | `firstName` | string | No | First name (max 50 characters) | | `lastName` | string | No | Last name (max 50 characters) | | `roleName` | string | Yes | Role name (see [Roles & Permissions](/mindzie_api/user/roles)) | ### Capacity Validation The operation validates tenant capacity limits: - **MaxUsers** limit is checked for all roles - **MaxAnalyst** limit is checked for Analyst roles ### Response (201 Created) ```json { "userId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", "email": "john.smith@example.com", "displayName": "John Smith", "message": "User created and assigned to tenant successfully" } ``` ### Error Responses **Conflict (409):** ```json { "error": "User is already assigned to this tenant" } ``` **Capacity Exceeded (400):** ```json { "error": "Cannot add user: tenant has reached its maximum user limit (100)", "hint": "Increase the tenant's user or analyst limit to add more users" } ``` --- ## Get User in Tenant **GET** `/api/tenant/{tenantId}/user/{userId}` Retrieves a specific user within the tenant context. 
### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `userId` | GUID | Yes | The user identifier | ### Response (200 OK) Returns the user object if they are assigned to the tenant. --- ## Get User by Email in Tenant **GET** `/api/tenant/{tenantId}/user/by-email/{email}` Retrieves a user by email within the tenant context. ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `email` | string | Yes | The user's email (URL encoded) | ### Response (200 OK) Returns the user object if they are assigned to the tenant. --- ## Assign Existing User to Tenant **POST** `/api/tenant/{tenantId}/user/{userId}` Assigns an existing user to a tenant. ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `userId` | GUID | Yes | The user identifier | ### Request Body (Optional) ```json { "roleName": "Analyst" } ``` ### Response (200 OK) ```json { "message": "User assigned to tenant successfully" } ``` --- ## Update User in Tenant **PUT** `/api/tenant/{tenantId}/user/{userId}` Updates a user's properties within the tenant context. ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `userId` | GUID | Yes | The user identifier | ### Request Body ```json { "displayName": "Updated Name", "roleName": "TenantAdmin" } ``` ### Response (200 OK) ```json { "message": "User updated successfully" } ``` --- ## Remove User from Tenant **DELETE** `/api/tenant/{tenantId}/user/{userId}` Removes a user's assignment from a tenant. This does NOT delete the user from the system. 
### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `userId` | GUID | Yes | The user identifier | ### Response (200 OK) ```json { "message": "User removed from tenant successfully" } ``` ### Error Responses **Not Found (404):** ```json { "error": "User is not assigned to this tenant" } ``` --- ## Implementation Examples ### cURL ```bash # List users for a tenant (Tenant API key works) curl -X GET "https://your-mindzie-instance.com/api/tenant/12345678-1234-1234-1234-123456789012/user" \ -H "Authorization: Bearer YOUR_TENANT_API_KEY" # Search users in tenant curl -X GET "https://your-mindzie-instance.com/api/tenant/12345678-1234-1234-1234-123456789012/user?search=john" \ -H "Authorization: Bearer YOUR_TENANT_API_KEY" # Create user in tenant (creates user AND assigns) curl -X POST "https://your-mindzie-instance.com/api/tenant/12345678-1234-1234-1234-123456789012/user" \ -H "Authorization: Bearer YOUR_TENANT_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "email": "new.user@example.com", "displayName": "New User", "roleName": "Analyst" }' # Assign existing user to tenant curl -X POST "https://your-mindzie-instance.com/api/tenant/12345678-1234-1234-1234-123456789012/user/a1b2c3d4-e5f6-7890-abcd-ef1234567890" \ -H "Authorization: Bearer YOUR_TENANT_API_KEY" \ -H "Content-Type: application/json" \ -d '{"roleName": "Analyst"}' # Remove user from tenant curl -X DELETE "https://your-mindzie-instance.com/api/tenant/12345678-1234-1234-1234-123456789012/user/a1b2c3d4-e5f6-7890-abcd-ef1234567890" \ -H "Authorization: Bearer YOUR_TENANT_API_KEY" ``` ### Python ```python import requests BASE_URL = 'https://your-mindzie-instance.com' class TenantUserManager: def __init__(self, api_key, tenant_id): """ Initialize with an API key and tenant ID. Works with either Global or Tenant API keys. 
""" self.headers = { 'Authorization': f'Bearer {api_key}', 'Content-Type': 'application/json' } self.tenant_id = tenant_id def list_users(self, page=1, page_size=50, role=None, search=None): """List users assigned to this tenant.""" url = f'{BASE_URL}/api/tenant/{self.tenant_id}/user' params = {'page': page, 'pageSize': page_size} if role: params['role'] = role if search: params['search'] = search response = requests.get(url, headers=self.headers, params=params) response.raise_for_status() return response.json() def create_user(self, email, display_name, role_name, first_name=None, last_name=None): """Create a user and assign to tenant (or assign existing user).""" url = f'{BASE_URL}/api/tenant/{self.tenant_id}/user' payload = { 'email': email, 'displayName': display_name, 'roleName': role_name } if first_name: payload['firstName'] = first_name if last_name: payload['lastName'] = last_name response = requests.post(url, json=payload, headers=self.headers) response.raise_for_status() return response.json() def assign_user(self, user_id, role_name=None): """Assign an existing user to this tenant.""" url = f'{BASE_URL}/api/tenant/{self.tenant_id}/user/{user_id}' payload = {} if role_name: payload['roleName'] = role_name response = requests.post(url, json=payload, headers=self.headers) response.raise_for_status() return response.json() def remove_user(self, user_id): """Remove a user from this tenant (does not delete the user).""" url = f'{BASE_URL}/api/tenant/{self.tenant_id}/user/{user_id}' response = requests.delete(url, headers=self.headers) response.raise_for_status() return response.json() def get_user(self, user_id): """Get a specific user in this tenant.""" url = f'{BASE_URL}/api/tenant/{self.tenant_id}/user/{user_id}' response = requests.get(url, headers=self.headers) response.raise_for_status() return response.json() # Usage with Tenant API key tenant_id = '12345678-1234-1234-1234-123456789012' manager = TenantUserManager('your-tenant-api-key', tenant_id) # 
List all analysts in tenant analysts = manager.list_users(role='Analyst') print(f"Tenant has {analysts['totalCount']} analysts") for user in analysts['users']: print(f" - {user['displayName']} ({user['email']})") # Add a new analyst new_user = manager.create_user( email='new.analyst@example.com', display_name='New Analyst', role_name='Analyst' ) print(f"Added user: {new_user['userId']}") # Remove user from tenant (user still exists in system) manager.remove_user(new_user['userId']) print("User removed from tenant") ``` ### JavaScript/Node.js ```javascript const BASE_URL = 'https://your-mindzie-instance.com'; class TenantUserManager { constructor(apiKey, tenantId) { this.headers = { 'Authorization': `Bearer ${apiKey}`, 'Content-Type': 'application/json' }; this.tenantId = tenantId; } async listUsers(options = {}) { const params = new URLSearchParams({ page: options.page || 1, pageSize: options.pageSize || 50 }); if (options.role) params.append('role', options.role); if (options.search) params.append('search', options.search); const url = `${BASE_URL}/api/tenant/${this.tenantId}/user?${params}`; const response = await fetch(url, { headers: this.headers }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return await response.json(); } async createUser(email, displayName, roleName) { const url = `${BASE_URL}/api/tenant/${this.tenantId}/user`; const response = await fetch(url, { method: 'POST', headers: this.headers, body: JSON.stringify({ email, displayName, roleName }) }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return await response.json(); } async assignUser(userId, roleName = null) { const url = `${BASE_URL}/api/tenant/${this.tenantId}/user/${userId}`; const body = roleName ? 
{ roleName } : {}; const response = await fetch(url, { method: 'POST', headers: this.headers, body: JSON.stringify(body) }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return await response.json(); } async removeUser(userId) { const url = `${BASE_URL}/api/tenant/${this.tenantId}/user/${userId}`; const response = await fetch(url, { method: 'DELETE', headers: this.headers }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return await response.json(); } } // Usage const tenantId = '12345678-1234-1234-1234-123456789012'; const manager = new TenantUserManager('your-tenant-api-key', tenantId); // List users const users = await manager.listUsers(); console.log(`Tenant has ${users.totalCount} users`); // Add new user const newUser = await manager.createUser( 'new@example.com', 'New User', 'Analyst' ); console.log(`Added: ${newUser.userId}`); ``` --- ## Best Practices 1. **Use Tenant API Keys**: For most operations, tenant-scoped keys are more secure 2. **Check Capacity**: Verify tenant limits before bulk user creation 3. **Remove vs Delete**: Removing from tenant keeps user in system for other tenants 4. **Search Before Create**: Check if user exists before creating to avoid duplicates --- User roles define access levels and capabilities within mindzieStudio. Each user is assigned a single role that determines their permissions across the platform. ## Available Roles | Role | Level | Description | |------|-------|-------------| | **Administrator** | System | Full system access across all tenants | | **TenantAdmin** | Tenant | Full access within assigned tenants | | **Analyst** | Project | Create and manage analyses within projects | | **Viewer** | Read-only | View dashboards and reports only | --- ## Role Details ### Administrator The highest privilege level with complete system access. 
**Capabilities:** - Access all tenants and projects - Create and delete tenants - Manage all users across the system - Create Global API keys - Access admin interfaces - All TenantAdmin, Analyst, and Viewer capabilities **Use Cases:** - System administrators - Platform administrators - IT operations staff ### TenantAdmin Full administrative access within assigned tenants. **Capabilities:** - Manage all projects within their tenants - Add and remove users from tenants - Create Tenant API keys - Manage tenant settings - All Analyst and Viewer capabilities **Use Cases:** - Department heads - Team leads - Tenant administrators ### Analyst Standard user role for process mining analysis. **Capabilities:** - Create and manage investigations - Upload and configure datasets - Create dashboards and reports - Execute notebooks and blocks - Export analysis results - All Viewer capabilities **Use Cases:** - Process analysts - Data scientists - Business analysts ### Viewer Read-only access for consuming reports. **Capabilities:** - View shared dashboards - Access published reports - View project summaries - Cannot modify any data **Use Cases:** - Executives - Stakeholders - External reviewers --- ## Role Hierarchy ``` Administrator | +-- Can do everything TenantAdmin can do | +-- Can do everything Analyst can do | +-- Can do everything Viewer can do ``` --- ## Service Accounts Service accounts are special user accounts designed for API integrations and automated workflows. ### Requirements - Only **Administrator** and **TenantAdmin** roles can be service accounts - Service accounts must have a **home tenant** assigned - Service accounts can authenticate via API without user login ### Configuration To promote a user to a service account: ```json { "isServiceAccount": true, "homeTenantId": "12345678-1234-1234-1234-123456789012" } ``` To demote back to regular user: ```json { "isServiceAccount": false } ``` The `homeTenantId` is automatically cleared when demoting. 
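The rules above can be pre-validated client-side before issuing the `PUT /api/user/{userId}` request, which turns a server-side rejection into an immediate local error. A minimal sketch; the helper name and its validation are ours, not part of the API:

```python
def service_account_update(role_name, promote, home_tenant_id=None):
    """Build an update payload that respects the service account rules.

    Mirrors the documented constraints: only Administrator and
    TenantAdmin roles may be service accounts, and promotion requires
    a homeTenantId. This is a client-side sketch, not an API call.
    """
    if promote:
        if role_name not in ('Administrator', 'TenantAdmin'):
            raise ValueError(
                'Only Administrator and TenantAdmin roles can be service accounts'
            )
        if not home_tenant_id:
            raise ValueError(
                'homeTenantId is required when promoting to a service account'
            )
        return {'isServiceAccount': True, 'homeTenantId': home_tenant_id}
    # On demotion the server clears homeTenantId automatically,
    # so the payload only needs the flag.
    return {'isServiceAccount': False}
```

The returned dictionary is the request body for the user update endpoint.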
### Use Cases - CI/CD pipeline integrations - Automated data import scripts - Scheduled report generation - ETL processes - Monitoring and alerting systems --- ## Role Assignment ### When Creating Users Specify the role in the creation request: ```json { "email": "john.smith@example.com", "displayName": "John Smith", "roleName": "Analyst" } ``` ### When Updating Users Change the role with an update request: ```json { "roleName": "TenantAdmin" } ``` --- ## API Key Types and Roles | API Key Type | Created By | Access Scope | |--------------|------------|--------------| | Global API Key | Administrator | All tenants, all endpoints | | Tenant API Key | TenantAdmin or Administrator | Single tenant only | ### Global API Key Endpoints Only Global API keys can access: - `/api/user` - Global user management - `/api/tenant` - Tenant management - Cross-tenant operations ### Tenant API Key Endpoints Tenant API keys can access: - `/api/tenant/{tenantId}/user` - Tenant user management - `/api/{tenantId}/project` - Project operations - `/api/{tenantId}/dataset` - Dataset operations - All other tenant-scoped endpoints --- ## Best Practices ### Least Privilege Assign the minimum role necessary for each user's job function. 
``` Executive viewing dashboards -> Viewer Analyst running investigations -> Analyst Team lead managing projects -> TenantAdmin IT admin managing system -> Administrator ``` ### Service Account Security - Create dedicated service accounts for each integration - Use descriptive display names (e.g., "CI/CD Pipeline Service") - Regularly rotate API keys - Monitor service account activity ### Role Transitions - When promoting users, verify they understand new responsibilities - When demoting users, ensure they have access to complete their work - Document role changes for audit purposes ### Disable vs Delete - Prefer disabling users over deleting to preserve audit trails - Disabled users cannot log in but their history is preserved - Delete only when required for data privacy --- ## Implementation Examples ### Python ```python import requests BASE_URL = 'https://your-mindzie-instance.com' class RoleManager: def __init__(self, global_api_key): self.headers = { 'Authorization': f'Bearer {global_api_key}', 'Content-Type': 'application/json' } def get_users_by_role(self, role_name): """Get all users with a specific role.""" url = f'{BASE_URL}/api/user' params = {'role': role_name, 'pageSize': 1000} response = requests.get(url, headers=self.headers, params=params) response.raise_for_status() return response.json() def promote_to_service_account(self, user_id, home_tenant_id): """Promote a user to service account.""" url = f'{BASE_URL}/api/user/{user_id}' payload = { 'isServiceAccount': True, 'homeTenantId': home_tenant_id } response = requests.put(url, json=payload, headers=self.headers) response.raise_for_status() return response.json() def demote_from_service_account(self, user_id): """Demote a service account back to regular user.""" url = f'{BASE_URL}/api/user/{user_id}' payload = {'isServiceAccount': False} response = requests.put(url, json=payload, headers=self.headers) response.raise_for_status() return response.json() def change_role(self, user_id, new_role): 
"""Change a user's role.""" url = f'{BASE_URL}/api/user/{user_id}' payload = {'roleName': new_role} response = requests.put(url, json=payload, headers=self.headers) response.raise_for_status() return response.json() def disable_user(self, user_id): """Disable a user account.""" url = f'{BASE_URL}/api/user/{user_id}' payload = {'disabled': True} response = requests.put(url, json=payload, headers=self.headers) response.raise_for_status() return response.json() # Usage manager = RoleManager('your-global-api-key') # List all administrators admins = manager.get_users_by_role('Administrator') print(f"System has {admins['totalCount']} administrators") # Promote user to service account manager.promote_to_service_account( user_id='a1b2c3d4-e5f6-7890-abcd-ef1234567890', home_tenant_id='12345678-1234-1234-1234-123456789012' ) # Change role from Analyst to TenantAdmin manager.change_role( user_id='a1b2c3d4-e5f6-7890-abcd-ef1234567890', new_role='TenantAdmin' ) # Disable a user instead of deleting manager.disable_user('departing-user-id') ``` ### JavaScript ```javascript class RoleManager { constructor(globalApiKey) { this.headers = { 'Authorization': `Bearer ${globalApiKey}`, 'Content-Type': 'application/json' }; } async getUsersByRole(roleName) { const url = `${BASE_URL}/api/user?role=${roleName}&pageSize=1000`; const response = await fetch(url, { headers: this.headers }); return await response.json(); } async promoteToServiceAccount(userId, homeTenantId) { const url = `${BASE_URL}/api/user/${userId}`; const response = await fetch(url, { method: 'PUT', headers: this.headers, body: JSON.stringify({ isServiceAccount: true, homeTenantId }) }); return await response.json(); } async changeRole(userId, newRole) { const url = `${BASE_URL}/api/user/${userId}`; const response = await fetch(url, { method: 'PUT', headers: this.headers, body: JSON.stringify({ roleName: newRole }) }); return await response.json(); } } // Usage const manager = new RoleManager('your-global-api-key'); // Get 
all analysts const analysts = await manager.getUsersByRole('Analyst'); console.log(`${analysts.totalCount} analysts in system`); // Promote to TenantAdmin await manager.changeRole('user-id', 'TenantAdmin'); ``` --- # Project API Manage projects within mindzieStudio tenants. Projects are the top-level containers for datasets, investigations, dashboards, and analysis workflows. ## Features ### Project Management Create, retrieve, update, and delete projects. List all projects in a tenant with pagination support. [View Management API](/mindzie_api/project/management) ### Cache Operations Load projects into memory for fast access during API operations. Essential for executing notebooks and blocks efficiently. [View Cache API](/mindzie_api/project/cache) ### User Permissions Manage user access to projects. Add users, update permission levels (owner vs member), and remove access. [View Users API](/mindzie_api/project/users) ### Import & Export Export projects as portable .mpz files for backup or transfer. Import projects from .mpz files. Manage project thumbnails. 
[View Import & Export API](/mindzie_api/project/import-export) --- ## Available Endpoints ### Connectivity Testing | Method | Endpoint | Description | |--------|----------|-------------| | GET | `/api/{tenantId}/project/unauthorized-ping` | Public connectivity test | | GET | `/api/{tenantId}/project/ping` | Authenticated connectivity test | ### Project CRUD | Method | Endpoint | Description | |--------|----------|-------------| | GET | `/api/{tenantId}/project` | List all projects | | GET | `/api/{tenantId}/project/{projectId}` | Get project details | | POST | `/api/{tenantId}/project` | Create a project | | PUT | `/api/{tenantId}/project/{projectId}` | Update a project | | DELETE | `/api/{tenantId}/project/{projectId}` | Delete a project | | GET | `/api/{tenantId}/project/{projectId}/summary` | Get project statistics | ### Cache Management | Method | Endpoint | Description | |--------|----------|-------------| | GET | `/api/{tenantId}/project/{projectId}/load` | Load project into cache | | DELETE | `/api/{tenantId}/project/{projectId}/unload` | Unload project from cache | ### User Permissions | Method | Endpoint | Description | |--------|----------|-------------| | GET | `/api/{tenantId}/project/{projectId}/users` | List project users | | POST | `/api/{tenantId}/project/{projectId}/users/{userId}` | Add user to project | | PUT | `/api/{tenantId}/project/{projectId}/users/{userId}` | Update user permission | | DELETE | `/api/{tenantId}/project/{projectId}/users/{userId}` | Remove user | ### Import/Export | Method | Endpoint | Description | |--------|----------|-------------| | GET | `/api/{tenantId}/project/{projectId}/download` | Export as .mpz | | POST | `/api/{tenantId}/project/import` | Import from .mpz | ### Thumbnails | Method | Endpoint | Description | |--------|----------|-------------| | GET | `/api/{tenantId}/project/{projectId}/thumbnail` | Get thumbnail | | POST | `/api/{tenantId}/project/{projectId}/thumbnail` | Update thumbnail | | DELETE | 
`/api/{tenantId}/project/{projectId}/thumbnail` | Remove thumbnail | --- ## Authentication All Project API endpoints require a valid API key. Use tenant-scoped API keys for project operations. See [Authentication](/mindzie_api/authentication) for details on API key types and usage. --- ## Quick Start ```bash # List all projects in a tenant curl -X GET "https://your-mindzie-instance.com/api/{tenantId}/project" \ -H "Authorization: Bearer YOUR_API_KEY" # Load a project into cache before executing notebooks curl -X GET "https://your-mindzie-instance.com/api/{tenantId}/project/{projectId}/load" \ -H "Authorization: Bearer YOUR_API_KEY" ``` --- ## Important Notes - **CASCADE Delete**: Deleting a project permanently removes all datasets, investigations, dashboards, and files - **Cache Required**: Load projects into cache before executing notebooks or blocks - **Cache Duration**: Projects remain cached for 30 minutes after last access - **Export Before Delete**: Always export projects before deletion as a backup --- Manage projects within mindzieStudio tenants. Create, retrieve, update, and delete projects that contain datasets, investigations, dashboards, and analysis workflows. ## Connectivity Testing ### Unauthorized Ping **GET** `/api/{tenantId}/project/unauthorized-ping` Test endpoint that does not require authentication. Use this to verify network connectivity. #### Response ``` Ping Successful ``` ### Authenticated Ping **GET** `/api/{tenantId}/project/ping` Authenticated ping endpoint to verify API access for a specific tenant. #### Response (200 OK) ``` Ping Successful (tenant id: {tenantId}) ``` --- ## List All Projects **GET** `/api/{tenantId}/project` Retrieves a paginated list of all projects the authenticated user has access to within the specified tenant. 
### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | ### Query Parameters | Parameter | Type | Default | Description | |-----------|------|---------|-------------| | `page` | integer | 1 | Page number for pagination | | `pageSize` | integer | 50 | Number of items per page (max recommended: 100) | ### Response (200 OK) ```json { "projects": [ { "projectId": "87654321-4321-4321-4321-210987654321", "tenantId": "12345678-1234-1234-1234-123456789012", "projectName": "Purchase Order Analysis", "projectDescription": "Process mining analysis of P2P workflow", "dateCreated": "2024-01-15T10:30:00Z", "dateModified": "2024-01-20T14:45:00Z", "createdBy": "user@example.com", "modifiedBy": "user@example.com", "isActive": true, "datasetCount": 3, "investigationCount": 5, "dashboardCount": 2, "userCount": 8 } ], "totalCount": 15, "page": 1, "pageSize": 50 } ``` ### Project Object Fields | Field | Type | Description | |-------|------|-------------| | `projectId` | GUID | Unique identifier for the project | | `tenantId` | GUID | Tenant this project belongs to | | `projectName` | string | Display name of the project | | `projectDescription` | string | Description of the project | | `dateCreated` | datetime | When the project was created | | `dateModified` | datetime | When the project was last modified | | `createdBy` | string | User who created the project | | `modifiedBy` | string | User who last modified the project | | `isActive` | boolean | Whether the project is active | | `datasetCount` | integer | Number of datasets in the project | | `investigationCount` | integer | Number of investigations | | `dashboardCount` | integer | Number of dashboards | | `userCount` | integer | Number of users with access | --- ## Get Project Details **GET** `/api/{tenantId}/project/{projectId}` Retrieves detailed information for a specific project. 
### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `projectId` | GUID | Yes | The project identifier | ### Response (200 OK) Same structure as the project object in the list response. ### Error Responses **Not Found (404):** ```json { "error": "Project not found with ID '{projectId}'. The project may have been deleted or the ID is incorrect.", "projectId": "87654321-4321-4321-4321-210987654321" } ``` --- ## Get Project Summary **GET** `/api/{tenantId}/project/{projectId}/summary` Retrieves aggregated statistics and key metrics for the project. ### Response (200 OK) ```json { "projectId": "87654321-4321-4321-4321-210987654321", "projectName": "Purchase Order Analysis", "projectDescription": "Process mining analysis of P2P workflow", "dateCreated": "2024-01-15T10:30:00Z", "dateModified": "2024-01-20T14:45:00Z", "statistics": { "totalDatasets": 3, "totalInvestigations": 5, "totalDashboards": 2, "totalNotebooks": 12, "totalUsers": 8 } } ``` --- ## Create Project **POST** `/api/{tenantId}/project` Creates a new project in the specified tenant. ### Request Body ```json { "projectName": "New Analysis Project", "projectDescription": "Process mining analysis for procurement workflow" } ``` ### Request Fields | Field | Type | Required | Description | |-------|------|----------|-------------| | `projectName` | string | Yes | Project name (max 255 characters) | | `projectDescription` | string | No | Description (max 1000 characters) | ### Response (201 Created) Returns the created project object (same structure as Get Project). ### Error Responses **Bad Request (400):** ```json { "error": "Validation failed", "validationErrors": ["Project name is required"] } ``` --- ## Update Project **PUT** `/api/{tenantId}/project/{projectId}` Updates an existing project's properties. 
### Path Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| `tenantId` | GUID | The tenant identifier |
| `projectId` | GUID | The project identifier |

### Request Body

```json
{
  "projectName": "Updated Project Name",
  "projectDescription": "Updated description",
  "isActive": true
}
```

### Request Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `projectName` | string | Yes | New project name |
| `projectDescription` | string | No | New description |
| `isActive` | boolean | No | Enable/disable project |

### Response (200 OK)

Returns the updated project object.

---

## Delete Project

**DELETE** `/api/{tenantId}/project/{projectId}`

Permanently deletes a project and ALL associated data.

**WARNING: This is a DESTRUCTIVE operation that CANNOT be undone.**

### Cascade Delete Includes

- All datasets in the project
- All investigations and notebooks
- All dashboards
- All user permissions
- All blob storage files (event logs, attachments)

### Path Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| `tenantId` | GUID | The tenant identifier |
| `projectId` | GUID | The project identifier |

### Response (200 OK)

```json
{
  "message": "Project deleted successfully",
  "projectId": "87654321-4321-4321-4321-210987654321"
}
```

---

## Implementation Examples

### cURL

```bash
# List all projects
curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"

# Get project details
curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project/87654321-4321-4321-4321-210987654321" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"

# Create a new project
curl -X POST "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "projectName": "Q4 Analysis",
    "projectDescription": "Quarterly procurement analysis"
  }'

# Update a project
curl -X PUT "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project/87654321-4321-4321-4321-210987654321" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "projectName": "Q4 Analysis - Final",
    "projectDescription": "Updated description"
  }'

# Delete a project (CAUTION: Irreversible!)
curl -X DELETE "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project/87654321-4321-4321-4321-210987654321" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"
```

### Python

```python
import requests

TENANT_ID = '12345678-1234-1234-1234-123456789012'
BASE_URL = 'https://your-mindzie-instance.com'


class ProjectManager:
    def __init__(self, token):
        self.headers = {
            'Authorization': f'Bearer {token}',
            'Content-Type': 'application/json'
        }

    def list_projects(self, page=1, page_size=50):
        """List all projects in the tenant."""
        url = f'{BASE_URL}/api/{TENANT_ID}/project'
        params = {'page': page, 'pageSize': page_size}
        response = requests.get(url, headers=self.headers, params=params)
        response.raise_for_status()
        return response.json()

    def get_project(self, project_id):
        """Get project details."""
        url = f'{BASE_URL}/api/{TENANT_ID}/project/{project_id}'
        response = requests.get(url, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def create_project(self, name, description=''):
        """Create a new project."""
        url = f'{BASE_URL}/api/{TENANT_ID}/project'
        payload = {
            'projectName': name,
            'projectDescription': description
        }
        response = requests.post(url, json=payload, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def update_project(self, project_id, name=None, description=None, is_active=None):
        """Update an existing project."""
        url = f'{BASE_URL}/api/{TENANT_ID}/project/{project_id}'
        payload = {}
        if name is not None:
            payload['projectName'] = name
        if description is not None:
            payload['projectDescription'] = description
        if is_active is not None:
            payload['isActive'] = is_active
        response = requests.put(url, json=payload, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def delete_project(self, project_id):
        """Delete a project (CAUTION: Irreversible!)."""
        url = f'{BASE_URL}/api/{TENANT_ID}/project/{project_id}'
        response = requests.delete(url, headers=self.headers)
        response.raise_for_status()
        return response.json()


# Usage
manager = ProjectManager('your-auth-token')

# List all projects
result = manager.list_projects()
print(f"Total projects: {result['totalCount']}")
for project in result['projects']:
    print(f"- {project['projectName']}: {project['datasetCount']} datasets")

# Create a new project
new_project = manager.create_project(
    name='API Test Project',
    description='Created via API'
)
print(f"Created: {new_project['projectId']}")
```

### JavaScript/Node.js

```javascript
const TENANT_ID = '12345678-1234-1234-1234-123456789012';
const BASE_URL = 'https://your-mindzie-instance.com';

class ProjectManager {
  constructor(token) {
    this.headers = {
      'Authorization': `Bearer ${token}`,
      'Content-Type': 'application/json'
    };
  }

  async listProjects(page = 1, pageSize = 50) {
    const url = `${BASE_URL}/api/${TENANT_ID}/project?page=${page}&pageSize=${pageSize}`;
    const response = await fetch(url, { headers: this.headers });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return await response.json();
  }

  async getProject(projectId) {
    const url = `${BASE_URL}/api/${TENANT_ID}/project/${projectId}`;
    const response = await fetch(url, { headers: this.headers });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return await response.json();
  }

  async createProject(name, description = '') {
    const url = `${BASE_URL}/api/${TENANT_ID}/project`;
    const response = await fetch(url, {
      method: 'POST',
      headers: this.headers,
      body: JSON.stringify({ projectName: name, projectDescription: description })
    });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return await response.json();
  }

  async deleteProject(projectId) {
    const url = `${BASE_URL}/api/${TENANT_ID}/project/${projectId}`;
    const response = await fetch(url, { method: 'DELETE', headers: this.headers });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return await response.json();
  }
}

// Usage
const manager = new ProjectManager('your-auth-token');
const projects = await manager.listProjects();
console.log(`Found ${projects.totalCount} projects`);
```

---

The Project Cache API manages in-memory project loading for API operations. Understanding when projects need to be loaded is essential for efficient API usage.

## Key Concepts

### Unified Cache Architecture

The API and UI share the same in-memory cache. When you load a project via the API, it's the same cache the UI uses. This means:

- **Shared State**: API operations see the same data as UI users
- **Shared Results**: Execution results are visible to both API and UI
- **No Divergence**: The API and UI can never hold different views of a project

### Operation Categories

API operations fall into three categories with different caching requirements:

| Category | Description | Project Load Required? | Examples |
|----------|-------------|------------------------|----------|
| **Direct DB** | Read-only operations | No | GET endpoints, tenant/user management |
| **Auto-Load** | Modification operations | **No** (auto-loads) | POST/PUT/DELETE on investigations, notebooks, blocks |
| **Requires Load** | Execution operations | **Yes** | Execute notebook, get execution results |

### Auto-Load Pattern (Simplified Workflow)

For most CRUD operations, **you don't need to explicitly load the project**. The API automatically loads the project when needed:

```python
# OLD workflow (no longer needed for CRUD):
# manager.load_project(project_id)  # Not required!

# NEW workflow - just call the operation directly:
response = requests.put(
    f"{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/notebook/{notebook_id}",
    json={"Name": "Updated Name"},
    headers=headers
)
# Project loads automatically if needed
```

### When Explicit Load IS Required

Explicit project loading is still required for **execution operations**:

- `POST /execution/notebook/{notebookId}` - Execute notebook
- `GET /execution/notebook/{notebookId}/results` - Get execution results
- `GET /execution/status/{notebookId}` - Check execution status

---

## Load Project into Cache

**GET** `/api/{tenantId}/project/{projectId}/load`

Loads a project into the shared cache. Use this before executing notebooks.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project identifier |

### Response (200 OK)

```json
{
  "projectId": "87654321-4321-4321-4321-210987654321",
  "projectName": "Purchase Order Analysis",
  "tenantName": "acme-corp",
  "investigationCount": 5,
  "notebookCount": 12,
  "datasetCount": 3,
  "loadedFromCache": false,
  "message": "Project loaded from database"
}
```

### Response Fields

| Field | Type | Description |
|-------|------|-------------|
| `projectId` | GUID | Project identifier |
| `projectName` | string | Name of the project |
| `tenantName` | string | Name of the tenant |
| `investigationCount` | integer | Number of investigations |
| `notebookCount` | integer | Number of notebooks |
| `datasetCount` | integer | Number of datasets |
| `loadedFromCache` | boolean | True if already in cache, false if loaded from database |
| `message` | string | Human-readable status message |

### Cache Behavior

| Scenario | Response | Performance |
|----------|----------|-------------|
| First call (cache miss) | `loadedFromCache: false` | ~1000ms (database query) |
| Subsequent calls (cache hit) | `loadedFromCache: true` | ~75ms (13x faster) |
| After 30 min inactivity | Cache expires | Next call reloads |

### Cache Properties

- **Duration**: 30 minutes after last access
- **Auto-refresh**: Any API call to the project resets the 30-minute timer
- **Shared**: Same cache used by UI and API
- **Memory Management**: Automatic cleanup at 90% memory pressure

---

## Unload Project from Cache

**DELETE** `/api/{tenantId}/project/{projectId}/unload`

Removes a project from the cache, freeing memory.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project identifier |

### Response (200 OK)

```json
{
  "projectId": "87654321-4321-4321-4321-210987654321",
  "wasInCache": true,
  "message": "Project unloaded from cache successfully"
}
```

---

## Workflow Examples

### Workflow A: CRUD Operations (Auto-Load)

For creating, updating, or deleting investigations, notebooks, or blocks:

```python
import requests

headers = {"Authorization": f"Bearer {API_KEY}"}

# Just call the operation directly - no load needed!
response = requests.post(
    f"{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/investigation",
    json={"name": "New Investigation", "description": "Created via API"},
    headers=headers
)
# Project auto-loads if needed
```

### Workflow B: Notebook Execution (Requires Load)

For executing notebooks and retrieving results:

```python
import requests
import time

headers = {"Authorization": f"Bearer {API_KEY}"}

# Step 1: Load project (REQUIRED for execution)
response = requests.get(
    f"{BASE_URL}/api/{TENANT_ID}/project/{PROJECT_ID}/load",
    headers=headers
)
print(f"Project loaded: {response.json()['projectName']}")

# Step 2: Execute notebook
response = requests.post(
    f"{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/execution/notebook/{NOTEBOOK_ID}",
    headers=headers
)
print(f"Execution queued: {response.json()['status']}")

# Step 3: Poll for completion
while True:
    response = requests.get(
        f"{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/execution/status/{NOTEBOOK_ID}",
        headers=headers
    )
    status = response.json()
    print(f"Status: {status['status']} ({status['progress']}%)")
    if status['status'] == 'Completed':
        break
    time.sleep(2)

# Step 4: Get results
response = requests.get(
    f"{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/execution/notebook/{NOTEBOOK_ID}/results",
    headers=headers
)
results = response.json()

# Step 5: Unload project (optional cleanup)
requests.delete(
    f"{BASE_URL}/api/{TENANT_ID}/project/{PROJECT_ID}/unload",
    headers=headers
)
```

---

## Implementation Examples

### cURL

```bash
# Load project into cache
curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project/87654321-4321-4321-4321-210987654321/load" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Unload project from cache
curl -X DELETE "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project/87654321-4321-4321-4321-210987654321/unload" \
  -H "Authorization: Bearer YOUR_API_KEY"
```

### Python

```python
import requests

TENANT_ID = '12345678-1234-1234-1234-123456789012'
BASE_URL = 'https://your-mindzie-instance.com'


class ProjectCacheManager:
    def __init__(self, token):
        self.headers = {
            'Authorization': f'Bearer {token}',
            'Content-Type': 'application/json'
        }
        self.loaded_projects = set()

    def load_project(self, project_id):
        """Load project into cache (required for execution operations)."""
        url = f'{BASE_URL}/api/{TENANT_ID}/project/{project_id}/load'
        response = requests.get(url, headers=self.headers)
        response.raise_for_status()
        result = response.json()
        self.loaded_projects.add(project_id)
        status = "from cache" if result['loadedFromCache'] else "from database"
        print(f"Project '{result['projectName']}' loaded {status}")
        return result

    def unload_project(self, project_id):
        """Unload project from cache."""
        url = f'{BASE_URL}/api/{TENANT_ID}/project/{project_id}/unload'
        response = requests.delete(url, headers=self.headers)
        response.raise_for_status()
        self.loaded_projects.discard(project_id)
        return response.json()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        for project_id in list(self.loaded_projects):
            self.unload_project(project_id)


# Usage with context manager
with ProjectCacheManager('your-api-key') as cache:
    result = cache.load_project('87654321-4321-4321-4321-210987654321')
    # Execute notebooks here...
# Projects automatically unloaded when exiting
```

### JavaScript/Node.js

```javascript
const TENANT_ID = '12345678-1234-1234-1234-123456789012';
const BASE_URL = 'https://your-mindzie-instance.com';

class ProjectCacheManager {
  constructor(token) {
    this.headers = {
      'Authorization': `Bearer ${token}`,
      'Content-Type': 'application/json'
    };
    this.loadedProjects = new Set();
  }

  async loadProject(projectId) {
    const url = `${BASE_URL}/api/${TENANT_ID}/project/${projectId}/load`;
    const response = await fetch(url, { headers: this.headers });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    const result = await response.json();
    this.loadedProjects.add(projectId);
    console.log(`Loaded: ${result.projectName} (from cache: ${result.loadedFromCache})`);
    return result;
  }

  async unloadProject(projectId) {
    const url = `${BASE_URL}/api/${TENANT_ID}/project/${projectId}/unload`;
    const response = await fetch(url, { method: 'DELETE', headers: this.headers });
    this.loadedProjects.delete(projectId);
    return response.json();
  }

  async unloadAll() {
    await Promise.all(
      Array.from(this.loadedProjects).map(id => this.unloadProject(id))
    );
  }
}

// Usage
const cache = new ProjectCacheManager('your-api-key');
try {
  await cache.loadProject('87654321-4321-4321-4321-210987654321');
  // Execute notebooks here...
} finally {
  await cache.unloadAll();
}
```

---

## Best Practices

1. **CRUD Operations**: Don't explicitly load - let auto-load handle it
2. **Execution Operations**: Always load the project first
3. **Long-Running Clients**: Unload projects when done to free memory
4. **Context Managers**: Use `with` statements (Python) or try/finally for cleanup
5. **Memory Awareness**: The cache auto-cleans at 90% memory pressure, but explicit unloading is better
6. **Shared Cache**: Remember that UI users see the same project state as your API operations

---

Manage user access and permissions for projects. Add users to projects, update their permission levels, and remove access when needed.
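As a quick orientation before the endpoint reference, the sketch below splits a list-users response into owner and member emails and warns when a project has no owner — useful for the periodic access reviews recommended at the end of this section. The helper names (`summarize_permissions`, `audit_project_users`) are illustrative, not part of the API; only the endpoint path follows the reference below.

```python
def summarize_permissions(users_payload):
    """Split a list-users response (shape documented below) into owner and member emails."""
    owners = [u["email"] for u in users_payload["users"] if u["isOwner"]]
    members = [u["email"] for u in users_payload["users"] if not u["isOwner"]]
    return owners, members


def audit_project_users(base_url, tenant_id, project_id, token):
    """Fetch a project's user list and flag projects with no owner (illustrative)."""
    import requests  # deferred import so the pure helper above has no dependencies

    url = f"{base_url}/api/{tenant_id}/project/{project_id}/users"
    response = requests.get(url, headers={"Authorization": f"Bearer {token}"})
    response.raise_for_status()
    owners, members = summarize_permissions(response.json())
    if not owners:
        print(f"WARNING: project {project_id} has no owner")
    return owners, members
```

Because `summarize_permissions` is pure, it can be unit-tested against the sample payload shown below without any network access.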
## Permission Levels

| Level | Description |
|-------|-------------|
| **Owner** (`isOwner: true`) | Full control - can modify project settings, manage users, delete project |
| **Member** (`isOwner: false`) | Can view and work with project content, cannot manage users or delete |

---

## List Project Users

**GET** `/api/{tenantId}/project/{projectId}/users`

Retrieves all users with access to the project.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project identifier |

### Response (200 OK)

```json
{
  "users": [
    {
      "permissionId": "11111111-1111-1111-1111-111111111111",
      "userId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
      "email": "john.smith@example.com",
      "displayName": "John Smith",
      "isOwner": true,
      "dateAssigned": "2024-01-15T10:30:00Z"
    },
    {
      "permissionId": "22222222-2222-2222-2222-222222222222",
      "userId": "b2c3d4e5-f6a7-8901-bcde-f23456789012",
      "email": "jane.doe@example.com",
      "displayName": "Jane Doe",
      "isOwner": false,
      "dateAssigned": "2024-01-20T14:00:00Z"
    }
  ],
  "totalCount": 2
}
```

### User Permission Fields

| Field | Type | Description |
|-------|------|-------------|
| `permissionId` | GUID | Unique permission record ID |
| `userId` | GUID | User identifier |
| `email` | string | User's email address |
| `displayName` | string | User's display name |
| `isOwner` | boolean | Whether user is a project owner |
| `dateAssigned` | datetime | When access was granted |

---

## Add User to Project

**POST** `/api/{tenantId}/project/{projectId}/users/{userId}`

Adds a user to the project with specified permissions.
### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project identifier |
| `userId` | GUID | Yes | The user to add |

### Request Body (Optional)

```json
{
  "isOwner": false
}
```

### Request Fields

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `isOwner` | boolean | false | Grant owner permissions |

### Response (201 Created)

```json
{
  "message": "User added to project successfully"
}
```

### Error Responses

**Conflict (409):**

```json
{
  "error": "User is already a member of this project"
}
```

**Not Found (404):**

```json
{
  "error": "User not found with ID '{userId}'"
}
```

---

## Update User Permission

**PUT** `/api/{tenantId}/project/{projectId}/users/{userId}`

Updates a user's permission level on the project.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project identifier |
| `userId` | GUID | Yes | The user to update |

### Request Body

```json
{
  "isOwner": true
}
```

### Response (200 OK)

```json
{
  "message": "User permission updated successfully"
}
```

---

## Remove User from Project

**DELETE** `/api/{tenantId}/project/{projectId}/users/{userId}`

Removes a user's access to the project.
### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project identifier |
| `userId` | GUID | Yes | The user to remove |

### Response (200 OK)

```json
{
  "message": "User removed from project successfully"
}
```

### Error Responses

**Not Found (404):**

```json
{
  "error": "User is not a member of this project"
}
```

---

## Implementation Examples

### cURL

```bash
# List project users
curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project/87654321-4321-4321-4321-210987654321/users" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"

# Add user to project (as member)
curl -X POST "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project/87654321-4321-4321-4321-210987654321/users/a1b2c3d4-e5f6-7890-abcd-ef1234567890" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"isOwner": false}'

# Add user as owner
curl -X POST "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project/87654321-4321-4321-4321-210987654321/users/a1b2c3d4-e5f6-7890-abcd-ef1234567890" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"isOwner": true}'

# Promote user to owner
curl -X PUT "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project/87654321-4321-4321-4321-210987654321/users/a1b2c3d4-e5f6-7890-abcd-ef1234567890" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"isOwner": true}'

# Remove user from project
curl -X DELETE "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project/87654321-4321-4321-4321-210987654321/users/a1b2c3d4-e5f6-7890-abcd-ef1234567890" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"
```

### Python

```python
import requests

TENANT_ID = '12345678-1234-1234-1234-123456789012'
BASE_URL = 'https://your-mindzie-instance.com'


class ProjectUserManager:
    def __init__(self, token):
        self.headers = {
            'Authorization': f'Bearer {token}',
            'Content-Type': 'application/json'
        }

    def list_users(self, project_id):
        """List all users with access to the project."""
        url = f'{BASE_URL}/api/{TENANT_ID}/project/{project_id}/users'
        response = requests.get(url, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def add_user(self, project_id, user_id, is_owner=False):
        """Add a user to the project."""
        url = f'{BASE_URL}/api/{TENANT_ID}/project/{project_id}/users/{user_id}'
        payload = {'isOwner': is_owner}
        response = requests.post(url, json=payload, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def update_permission(self, project_id, user_id, is_owner):
        """Update a user's permission level."""
        url = f'{BASE_URL}/api/{TENANT_ID}/project/{project_id}/users/{user_id}'
        payload = {'isOwner': is_owner}
        response = requests.put(url, json=payload, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def remove_user(self, project_id, user_id):
        """Remove a user from the project."""
        url = f'{BASE_URL}/api/{TENANT_ID}/project/{project_id}/users/{user_id}'
        response = requests.delete(url, headers=self.headers)
        response.raise_for_status()
        return response.json()


# Usage
manager = ProjectUserManager('your-auth-token')
project_id = '87654321-4321-4321-4321-210987654321'

# List current users
result = manager.list_users(project_id)
print(f"Project has {result['totalCount']} users:")
for user in result['users']:
    role = 'Owner' if user['isOwner'] else 'Member'
    print(f"  - {user['displayName']} ({user['email']}) - {role}")

# Add a new user as member
new_user_id = 'a1b2c3d4-e5f6-7890-abcd-ef1234567890'
manager.add_user(project_id, new_user_id, is_owner=False)
print(f"Added user {new_user_id} as member")

# Promote user to owner
manager.update_permission(project_id, new_user_id, is_owner=True)
print(f"Promoted user {new_user_id} to owner")

# Remove user
manager.remove_user(project_id, new_user_id)
print(f"Removed user {new_user_id}")
```

### JavaScript/Node.js

```javascript
const TENANT_ID = '12345678-1234-1234-1234-123456789012';
const BASE_URL = 'https://your-mindzie-instance.com';

class ProjectUserManager {
  constructor(token) {
    this.headers = {
      'Authorization': `Bearer ${token}`,
      'Content-Type': 'application/json'
    };
  }

  async listUsers(projectId) {
    const url = `${BASE_URL}/api/${TENANT_ID}/project/${projectId}/users`;
    const response = await fetch(url, { headers: this.headers });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return await response.json();
  }

  async addUser(projectId, userId, isOwner = false) {
    const url = `${BASE_URL}/api/${TENANT_ID}/project/${projectId}/users/${userId}`;
    const response = await fetch(url, {
      method: 'POST',
      headers: this.headers,
      body: JSON.stringify({ isOwner })
    });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return await response.json();
  }

  async updatePermission(projectId, userId, isOwner) {
    const url = `${BASE_URL}/api/${TENANT_ID}/project/${projectId}/users/${userId}`;
    const response = await fetch(url, {
      method: 'PUT',
      headers: this.headers,
      body: JSON.stringify({ isOwner })
    });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return await response.json();
  }

  async removeUser(projectId, userId) {
    const url = `${BASE_URL}/api/${TENANT_ID}/project/${projectId}/users/${userId}`;
    const response = await fetch(url, { method: 'DELETE', headers: this.headers });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return await response.json();
  }
}

// Usage
const manager = new ProjectUserManager('your-auth-token');
const projectId = '87654321-4321-4321-4321-210987654321';

// List users
const users = await manager.listUsers(projectId);
console.log(`Project has ${users.totalCount} users`);
users.users.forEach(user => {
  const role = user.isOwner ? 'Owner' : 'Member';
  console.log(`  - ${user.displayName} (${role})`);
});

// Add user as member, then promote to owner
await manager.addUser(projectId, 'user-id-here', false);
await manager.updatePermission(projectId, 'user-id-here', true);
```

---

## Best Practices

1. **Limit Owners**: Only grant owner access to users who need to manage the project
2. **Audit Access**: Regularly review project users and remove unnecessary access
3. **Use Members for Analysts**: Regular analysts should be members, not owners
4. **Document Changes**: Log permission changes for audit purposes

---

Export projects as portable .mpz files for backup or transfer, and import them into other tenants. Also manage project thumbnail images.

## Project Packages (.mpz)

The `.mpz` format is a mindzie Package Zip containing:

- Project settings and metadata
- All datasets and their configurations
- Investigations and notebooks
- Dashboards and panels
- Blob storage files (event logs, attachments)

---

## Export Project

**GET** `/api/{tenantId}/project/{projectId}/download`

Exports the project as a .mpz (mindzie Package Zip) file.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project to export |

### Response

Returns a binary file download with:

- **Content-Type**: `application/octet-stream`
- **Filename**: `{projectName}.mpz`

### Use Cases

- **Backup**: Create regular backups of important projects
- **Migration**: Move projects between tenants or instances
- **Templates**: Export a configured project as a template for new analyses

---

## Import Project

**POST** `/api/{tenantId}/project/import`

Imports a project from a .mpz file.
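Before uploading, a client can pre-check a file against the constraints documented for this endpoint (1 GB maximum size, `.mpz` extension), failing fast instead of waiting for a 400 response. The `validate_mpz` helper below is a sketch, not part of the API:

```python
from pathlib import Path

MAX_MPZ_BYTES = 1 * 1024 ** 3  # documented maximum: 1 GB


def validate_mpz(file_path):
    """Check extension and size before POSTing to /project/import."""
    path = Path(file_path)
    if path.suffix.lower() != ".mpz":
        raise ValueError(f"Expected a .mpz file, got '{path.suffix}'")
    if path.stat().st_size > MAX_MPZ_BYTES:
        raise ValueError(f"File exceeds the 1 GB import limit")
    return path
```

The server still validates that the archive is a genuine mindzie project export; this check only catches the two cheap-to-detect errors locally.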
### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The target tenant | ### Request - **Content-Type**: `multipart/form-data` - **File parameter**: `file` (the .mpz file) ### Constraints | Constraint | Value | |------------|-------| | Maximum file size | 1 GB | | File extension | Must be `.mpz` | | File format | Must be a valid mindzie project export | ### Response (200 OK) ```json { "success": true, "projectId": "99999999-9999-9999-9999-999999999999", "projectName": "Imported Project", "datasetsImported": 2, "investigationsImported": 3, "dashboardsImported": 1, "message": "Project imported successfully" } ``` ### Response Fields | Field | Type | Description | |-------|------|-------------| | `success` | boolean | Whether import succeeded | | `projectId` | GUID | ID of the newly created project | | `projectName` | string | Name of the imported project | | `datasetsImported` | integer | Number of datasets imported | | `investigationsImported` | integer | Number of investigations imported | | `dashboardsImported` | integer | Number of dashboards imported | | `message` | string | Human-readable status | ### Error Responses **Bad Request (400):** ```json { "success": false, "errorMessage": "Invalid file format. Expected .mpz file." } ``` --- ## Thumbnail Management Project thumbnails are displayed in the project list and provide visual identification. ### Get Thumbnail **GET** `/api/{tenantId}/project/{projectId}/thumbnail` Retrieves the project's thumbnail image. ### Response (200 OK) ```json { "projectId": "87654321-4321-4321-4321-210987654321", "hasThumbnail": true, "base64Image": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD..." 
} ``` ### Response Fields | Field | Type | Description | |-------|------|-------------| | `projectId` | GUID | Project identifier | | `hasThumbnail` | boolean | Whether a thumbnail exists | | `base64Image` | string | Base64-encoded image with data URI prefix | ### Update Thumbnail **POST** `/api/{tenantId}/project/{projectId}/thumbnail` Updates the project's thumbnail image. ### Request Body ```json { "base64Image": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD..." } ``` **Note:** The base64 string should include the data URI prefix (e.g., `data:image/jpeg;base64,` or `data:image/png;base64,`). ### Response (200 OK) ```json { "message": "Thumbnail updated successfully" } ``` ### Remove Thumbnail **DELETE** `/api/{tenantId}/project/{projectId}/thumbnail` Removes the project's thumbnail image. ### Response (200 OK) ```json { "message": "Thumbnail removed successfully" } ``` --- ## Implementation Examples ### cURL ```bash # Export project to file curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project/87654321-4321-4321-4321-210987654321/download" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \ --output project_backup.mpz # Import project from file curl -X POST "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project/import" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \ -F "file=@project_backup.mpz" # Get thumbnail curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project/87654321-4321-4321-4321-210987654321/thumbnail" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" # Update thumbnail (from base64 file) curl -X POST "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project/87654321-4321-4321-4321-210987654321/thumbnail" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \ -H "Content-Type: application/json" \ -d '{"base64Image": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUg..."}' # Remove thumbnail curl -X DELETE 
"https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project/87654321-4321-4321-4321-210987654321/thumbnail" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" ``` ### Python ```python import requests import base64 from pathlib import Path TENANT_ID = '12345678-1234-1234-1234-123456789012' BASE_URL = 'https://your-mindzie-instance.com' class ProjectExportManager: def __init__(self, token): self.headers = { 'Authorization': f'Bearer {token}' } def export_project(self, project_id, output_path): """Export project to .mpz file.""" url = f'{BASE_URL}/api/{TENANT_ID}/project/{project_id}/download' response = requests.get(url, headers=self.headers, stream=True) response.raise_for_status() with open(output_path, 'wb') as f: for chunk in response.iter_content(chunk_size=8192): f.write(chunk) print(f"Exported to {output_path}") return output_path def import_project(self, file_path): """Import project from .mpz file.""" url = f'{BASE_URL}/api/{TENANT_ID}/project/import' with open(file_path, 'rb') as f: files = {'file': (Path(file_path).name, f, 'application/octet-stream')} response = requests.post(url, headers=self.headers, files=files) response.raise_for_status() result = response.json() print(f"Imported: {result['projectName']}") print(f" Datasets: {result['datasetsImported']}") print(f" Investigations: {result['investigationsImported']}") print(f" Dashboards: {result['dashboardsImported']}") return result def get_thumbnail(self, project_id): """Get project thumbnail as base64.""" url = f'{BASE_URL}/api/{TENANT_ID}/project/{project_id}/thumbnail' response = requests.get(url, headers=self.headers) response.raise_for_status() return response.json() def set_thumbnail(self, project_id, image_path): """Set project thumbnail from image file.""" url = f'{BASE_URL}/api/{TENANT_ID}/project/{project_id}/thumbnail' # Read and encode image with open(image_path, 'rb') as f: image_data = f.read() # Determine MIME type ext = Path(image_path).suffix.lower() mime_type = 
'image/jpeg' if ext in ['.jpg', '.jpeg'] else 'image/png'

        # Create base64 data URI
        base64_data = base64.b64encode(image_data).decode('utf-8')
        data_uri = f'data:{mime_type};base64,{base64_data}'
        headers = {**self.headers, 'Content-Type': 'application/json'}
        response = requests.post(url, json={'base64Image': data_uri}, headers=headers)
        response.raise_for_status()
        print(f"Thumbnail updated for project {project_id}")
        return response.json()

    def remove_thumbnail(self, project_id):
        """Remove project thumbnail."""
        url = f'{BASE_URL}/api/{TENANT_ID}/project/{project_id}/thumbnail'
        response = requests.delete(url, headers=self.headers)
        response.raise_for_status()
        return response.json()

# Usage
manager = ProjectExportManager('your-auth-token')
project_id = '87654321-4321-4321-4321-210987654321'

# Export project for backup
manager.export_project(project_id, 'my_project_backup.mpz')

# Import into same or different tenant
result = manager.import_project('my_project_backup.mpz')
new_project_id = result['projectId']

# Set a thumbnail
manager.set_thumbnail(project_id, 'project_thumbnail.png')

# Get thumbnail
thumbnail = manager.get_thumbnail(project_id)
if thumbnail['hasThumbnail']:
    print("Thumbnail exists")
```

### JavaScript/Node.js

```javascript
// Requires Node.js 18+, which provides fetch, FormData, and Blob as globals.
const fs = require('fs');
const path = require('path');

const TENANT_ID = '12345678-1234-1234-1234-123456789012';
const BASE_URL = 'https://your-mindzie-instance.com';

class ProjectExportManager {
  constructor(token) {
    this.headers = { 'Authorization': `Bearer ${token}` };
  }

  async exportProject(projectId, outputPath) {
    const url = `${BASE_URL}/api/${TENANT_ID}/project/${projectId}/download`;
    const response = await fetch(url, { headers: this.headers });
    if (!response.ok) {
      throw new Error(`Export failed: ${response.status}`);
    }
    const buffer = await response.arrayBuffer();
    fs.writeFileSync(outputPath, Buffer.from(buffer));
    console.log(`Exported to ${outputPath}`);
  }

  async importProject(filePath) {
    const url = `${BASE_URL}/api/${TENANT_ID}/project/import`;
    const formData = new FormData();
    formData.append('file', new Blob([fs.readFileSync(filePath)]), path.basename(filePath));
    const response = await fetch(url, { method: 'POST', headers: this.headers, body: formData });
    if (!response.ok) {
      throw new Error(`Import failed: ${response.status}`);
    }
    const result = await response.json();
    console.log(`Imported: ${result.projectName}`);
    return result;
  }

  async setThumbnail(projectId, imagePath) {
    const url = `${BASE_URL}/api/${TENANT_ID}/project/${projectId}/thumbnail`;
    // Read and encode image
    const imageBuffer = fs.readFileSync(imagePath);
    const ext = path.extname(imagePath).toLowerCase();
    const mimeType = ext === '.png' ? 'image/png' : 'image/jpeg';
    const base64Data = imageBuffer.toString('base64');
    const dataUri = `data:${mimeType};base64,${base64Data}`;
    const response = await fetch(url, {
      method: 'POST',
      headers: { ...this.headers, 'Content-Type': 'application/json' },
      body: JSON.stringify({ base64Image: dataUri })
    });
    if (!response.ok) {
      throw new Error(`Thumbnail update failed: ${response.status}`);
    }
    return await response.json();
  }
}

// Usage
const manager = new ProjectExportManager('your-auth-token');

// Export and import
await manager.exportProject('project-id', 'backup.mpz');
const imported = await manager.importProject('backup.mpz');

// Set thumbnail
await manager.setThumbnail('project-id', 'thumbnail.png');
```

---

## Best Practices

1. **Regular Backups**: Schedule regular exports of important projects
2. **Version Naming**: Include dates in export filenames (e.g., `project_2024-01-15.mpz`)
3. **Test Imports**: Test imports in a non-production tenant before production
4. **Thumbnail Size**: Keep thumbnails under 100KB for fast loading
5.
**Thumbnail Format**: Use JPEG for photos, PNG for graphics with transparency --- # Dataset API ## Data Management API Upload, manage, and update datasets with support for multiple file formats including CSV, ZIP packages, and binary formats. ## Features ### Dataset Creation Create new datasets from CSV, ZIP packages, or binary files. [Create Datasets](/mindzie_api/dataset/creation) ### Data Import Import data with column mapping for process mining analysis. [Import Data](/mindzie_api/dataset/import) ### Dataset Updates Update existing datasets with new data while preserving configurations. [Update Datasets](/mindzie_api/dataset/updates) ### File Formats Supported file formats and data structures. [View Formats](/mindzie_api/dataset/formats) ## Available Endpoints ### Connectivity Testing - **GET** `/api/{tenantId}/{projectId}/dataset/unauthorized-ping` - Public connectivity test (no auth required) - **GET** `/api/{tenantId}/{projectId}/dataset/ping` - Authenticated connectivity test ### Dataset Operations - **GET** `/api/{tenantId}/{projectId}/dataset` - List all datasets in a project ### Dataset Creation - **POST** `/api/{tenantId}/{projectId}/dataset/csv` - Create dataset from CSV file - **POST** `/api/{tenantId}/{projectId}/dataset/package` - Create dataset from ZIP package - **POST** `/api/{tenantId}/{projectId}/dataset/binary` - Create dataset from binary file ### Dataset Updates - **PUT** `/api/{tenantId}/{projectId}/dataset/{datasetId}/csv` - Update dataset from CSV - **PUT** `/api/{tenantId}/{projectId}/dataset/{datasetId}/package` - Update dataset from ZIP package - **PUT** `/api/{tenantId}/{projectId}/dataset/{datasetId}/binary` - Update dataset from binary file ## Supported File Formats mindzieStudio supports multiple data formats for process mining: ### CSV Files Comma-separated values with flexible column mapping. 
- Event logs with case ID, activity, timestamp - Custom culture settings for date/number parsing - UTF-8 encoding support ### ZIP Packages Compressed packages containing multiple related files. - Complex datasets with multiple tables - Metadata and configuration files - mindzie dataset packaging standards ### Binary Files Native binary format for efficient data transfer. - Pre-processed event log data - Optimized for large datasets - Column mappings required ## Dataset Structure Understanding the expected data structure for process mining analysis: ### Required Columns | Column | Description | |--------|-------------| | Case ID | Unique identifier for each process instance | | Activity | Name of the activity or event | | Timestamp | When the activity occurred | ### Optional Columns | Column | Description | |--------|-------------| | Resource | User or system that performed the activity | | Start Time | Activity start time (for duration calculations) | | Expected Order | Sequence ordering column | ### Response Structure ```json { "datasetId": "550e8400-e29b-41d4-a716-446655440000", "datasetName": "Purchase Order Process", "datasetDescription": "Event log from SAP procurement", "projectId": "660e8400-e29b-41d4-a716-446655440000", "caseIdColumnName": "CaseID", "activityColumnName": "Activity", "timeColumnName": "Timestamp", "resourceColumnName": "Resource", "beginTimeColumnName": "StartTime", "useDateOnlySorting": false, "useOnlyEventColumns": false, "dateCreated": "2024-01-15T10:30:00Z", "dateModified": "2024-01-15T14:45:00Z", "createdBy": "user@example.com", "modifiedBy": "user@example.com" } ``` ## Upload Response Structure Dataset creation and update endpoints return import statistics: ```json { "datasetId": "550e8400-e29b-41d4-a716-446655440000", "caseCount": 5200, "eventCount": 150000, "invalidValueCount": 12, "skippedRowsCount": 3, "errors": [], "rowIssues": [], "statusCode": 200 } ``` ## Common Use Cases - **Event Log Import:** Upload process event data from 
ERP, CRM, or BPM systems - **Data Refresh:** Update existing datasets with new data while preserving analysis configurations - **Multi-Format Support:** Import data from CSV exports or proprietary binary formats - **Batch Processing:** Upload large datasets up to 1GB with progress tracking ## File Size Limits All upload endpoints support files up to **1GB** in size. For larger datasets, consider: - Breaking data into multiple uploads - Using the binary format for efficiency - Contacting support for enterprise data solutions ## Authentication All Dataset API endpoints (except `unauthorized-ping`) require valid authentication with appropriate permissions for the target project and tenant. ## Getting Started Begin with [Dataset Creation](/mindzie_api/dataset/creation) to learn how to create datasets, then explore [Data Import](/mindzie_api/dataset/import) for column mapping details. --- ## Create New Datasets Create datasets from CSV files, ZIP packages, or binary files. Each format requires specific column mappings for process mining analysis. ## Connectivity Testing ### Unauthorized Ping **GET** `/api/{tenantId}/{projectId}/dataset/unauthorized-ping` Test endpoint that does not require authentication. #### Response ``` Ping Successful ``` ### Authenticated Ping **GET** `/api/{tenantId}/{projectId}/dataset/ping` Authenticated ping endpoint to verify API access. #### Response (200 OK) ``` Ping Successful (tenant id: {tenantId}) ``` ## List All Datasets **GET** `/api/{tenantId}/{projectId}/dataset` Retrieves all datasets within the specified project. 
### Response (200 OK) ```json { "datasets": [ { "datasetId": "550e8400-e29b-41d4-a716-446655440000", "datasetName": "Purchase Order Process", "datasetDescription": "Event log from SAP", "projectId": "660e8400-e29b-41d4-a716-446655440000", "caseIdColumnName": "CaseID", "activityColumnName": "Activity", "timeColumnName": "Timestamp", "resourceColumnName": "Resource", "beginTimeColumnName": null, "expectedOrderColumnName": null, "useDateOnlySorting": false, "useOnlyEventColumns": false, "dateCreated": "2024-01-15T10:30:00Z", "dateModified": "2024-01-15T14:45:00Z", "createdBy": "user@example.com", "modifiedBy": "user@example.com" } ] } ``` ## Create Dataset from CSV **POST** `/api/{tenantId}/{projectId}/dataset/csv` Creates a new dataset from a CSV file upload with column mappings. ### Request (multipart/form-data) | Field | Type | Required | Description | |-------|------|----------|-------------| | `file` | file | Yes | CSV file to upload (max 1GB) | | `datasetName` | string | Yes | Name for the new dataset | | `caseIdColumn` | string | Yes | Column name containing case IDs | | `activityNameColumn` | string | Yes | Column name containing activity names | | `activityTimeColumn` | string | Yes | Column name containing timestamps | | `resourceColumn` | string | No | Column name containing resource/performer | | `startTimeColumn` | string | No | Column name for activity start times | | `cultureInfo` | string | No | Culture for parsing (default: "en-US") | ### Response (200 OK) ```json { "datasetId": "550e8400-e29b-41d4-a716-446655440000", "caseCount": 5200, "eventCount": 150000, "invalidValueCount": 0, "skippedRowsCount": 0, "errors": [], "rowIssues": [], "statusCode": 200 } ``` ### Error Response (422 Unprocessable Entity) ```json { "errors": ["Column 'CaseID' not found in CSV file"], "statusCode": 422 } ``` ## Create Dataset from ZIP Package **POST** `/api/{tenantId}/{projectId}/dataset/package` Creates a new dataset from a ZIP package containing data files. 
### Request (multipart/form-data) | Field | Type | Required | Description | |-------|------|----------|-------------| | `file` | file | Yes | ZIP package file (max 1GB) | | `datasetName` | string | Yes | Name for the new dataset | | `cultureInfo` | string | No | Culture for parsing (default: "en-US") | ### Response (200 OK) ```json { "datasetId": "550e8400-e29b-41d4-a716-446655440000", "caseCount": 5200, "eventCount": 150000, "invalidValueCount": 0, "skippedRowsCount": 0, "errors": [], "rowIssues": [], "statusCode": 200 } ``` ## Create Dataset from Binary **POST** `/api/{tenantId}/{projectId}/dataset/binary` Creates a new dataset from a binary format file with column mappings. ### Request (multipart/form-data) | Field | Type | Required | Description | |-------|------|----------|-------------| | `file` | file | Yes | Binary file to upload (max 1GB) | | `datasetName` | string | Yes | Name for the new dataset | | `caseIdColumn` | string | Yes | Column name containing case IDs | | `activityNameColumn` | string | Yes | Column name containing activity names | | `activityTimeColumn` | string | Yes | Column name containing timestamps | | `resourceColumn` | string | No | Column name containing resource/performer | | `startTimeColumn` | string | No | Column name for activity start times | ### Response (200 OK) Same structure as CSV creation response. 
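The Implementation Examples below cover CSV and ZIP package uploads; the binary endpoint follows the same multipart pattern with the column-mapping fields documented above. A minimal Python sketch (the helper names, placeholder GUIDs, and token are illustrative, not part of the API):

```python
import requests
from pathlib import Path

BASE_URL = 'https://your-mindzie-instance.com'  # placeholder instance
TENANT_ID = '12345678-1234-1234-1234-123456789012'
PROJECT_ID = '87654321-4321-4321-4321-210987654321'

def binary_form_fields(dataset_name, case_id_col, activity_col, time_col,
                       resource_col=None, start_time_col=None):
    """Build the form fields documented above; optional columns are
    omitted entirely when not supplied."""
    fields = {
        'datasetName': dataset_name,
        'caseIdColumn': case_id_col,
        'activityNameColumn': activity_col,
        'activityTimeColumn': time_col,
    }
    if resource_col:
        fields['resourceColumn'] = resource_col
    if start_time_col:
        fields['startTimeColumn'] = start_time_col
    return fields

def create_from_binary(token, file_path, dataset_name,
                       case_id_col, activity_col, time_col, **optional):
    """POST the binary file plus column mappings as multipart/form-data."""
    url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/dataset/binary'
    data = binary_form_fields(dataset_name, case_id_col, activity_col,
                              time_col, **optional)
    with open(file_path, 'rb') as f:
        files = {'file': (Path(file_path).name, f, 'application/octet-stream')}
        response = requests.post(url, headers={'Authorization': f'Bearer {token}'},
                                 files=files, data=data)
    response.raise_for_status()
    return response.json()
```

As with the CSV endpoint, the response carries the import statistics (`caseCount`, `eventCount`, `rowIssues`, and so on), so the same error-handling code can be reused for both formats.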
## Implementation Examples ### cURL - CSV Upload ```bash curl -X POST "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/dataset/csv" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \ -F "file=@event_log.csv" \ -F "datasetName=Purchase Orders" \ -F "caseIdColumn=CaseID" \ -F "activityNameColumn=Activity" \ -F "activityTimeColumn=Timestamp" \ -F "resourceColumn=User" \ -F "cultureInfo=en-US" ``` ### cURL - ZIP Package Upload ```bash curl -X POST "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/dataset/package" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \ -F "file=@data_package.zip" \ -F "datasetName=SAP Export" \ -F "cultureInfo=en-US" ``` ### Python ```python import requests TENANT_ID = '12345678-1234-1234-1234-123456789012' PROJECT_ID = '87654321-4321-4321-4321-210987654321' BASE_URL = 'https://your-mindzie-instance.com' class DatasetUploader: def __init__(self, token): self.headers = {'Authorization': f'Bearer {token}'} def create_from_csv(self, file_path, dataset_name, case_id_col, activity_col, time_col, resource_col=None, start_time_col=None, culture='en-US'): """Create dataset from CSV file.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/dataset/csv' with open(file_path, 'rb') as f: files = {'file': (file_path, f, 'text/csv')} data = { 'datasetName': dataset_name, 'caseIdColumn': case_id_col, 'activityNameColumn': activity_col, 'activityTimeColumn': time_col, 'cultureInfo': culture } if resource_col: data['resourceColumn'] = resource_col if start_time_col: data['startTimeColumn'] = start_time_col response = requests.post(url, headers=self.headers, files=files, data=data) if response.ok: return response.json() else: raise Exception(f'Upload failed: {response.text}') def create_from_package(self, file_path, dataset_name, culture='en-US'): """Create dataset from ZIP package.""" url = 
f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/dataset/package' with open(file_path, 'rb') as f: files = {'file': (file_path, f, 'application/zip')} data = { 'datasetName': dataset_name, 'cultureInfo': culture } response = requests.post(url, headers=self.headers, files=files, data=data) if response.ok: return response.json() else: raise Exception(f'Upload failed: {response.text}') def list_datasets(self): """List all datasets in the project.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/dataset' response = requests.get(url, headers=self.headers) return response.json() # Usage uploader = DatasetUploader('your-auth-token') # Create from CSV result = uploader.create_from_csv( 'event_log.csv', 'Purchase Order Process', 'CaseID', 'Activity', 'Timestamp', resource_col='User' ) print(f"Created dataset: {result['datasetId']}") print(f"Cases: {result['caseCount']}, Events: {result['eventCount']}") # List all datasets datasets = uploader.list_datasets() for ds in datasets['datasets']: print(f"- {ds['datasetName']} ({ds['datasetId']})") ``` ### JavaScript/Node.js ```javascript const TENANT_ID = '12345678-1234-1234-1234-123456789012'; const PROJECT_ID = '87654321-4321-4321-4321-210987654321'; const BASE_URL = 'https://your-mindzie-instance.com'; class DatasetUploader { constructor(token) { this.token = token; } async createFromCsv(file, datasetName, caseIdCol, activityCol, timeCol, options = {}) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/dataset/csv`; const formData = new FormData(); formData.append('file', file); formData.append('datasetName', datasetName); formData.append('caseIdColumn', caseIdCol); formData.append('activityNameColumn', activityCol); formData.append('activityTimeColumn', timeCol); formData.append('cultureInfo', options.culture || 'en-US'); if (options.resourceColumn) { formData.append('resourceColumn', options.resourceColumn); } if (options.startTimeColumn) { formData.append('startTimeColumn', options.startTimeColumn); } const response = await 
fetch(url, { method: 'POST', headers: { 'Authorization': `Bearer ${this.token}` }, body: formData }); if (response.ok) { return await response.json(); } else { throw new Error(`Upload failed: ${await response.text()}`); } } async listDatasets() { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/dataset`; const response = await fetch(url, { headers: { 'Authorization': `Bearer ${this.token}` } }); return await response.json(); } } // Usage (browser) const uploader = new DatasetUploader('your-auth-token'); const fileInput = document.getElementById('csvFile'); fileInput.addEventListener('change', async (e) => { const file = e.target.files[0]; const result = await uploader.createFromCsv( file, 'My Dataset', 'CaseID', 'Activity', 'Timestamp', { resourceColumn: 'User' } ); console.log(`Created: ${result.datasetId}`); console.log(`Cases: ${result.caseCount}, Events: ${result.eventCount}`); }); ``` ## Response Fields | Field | Type | Description | |-------|------|-------------| | `datasetId` | GUID | ID of the created dataset | | `caseCount` | integer | Number of unique cases imported | | `eventCount` | integer | Total number of events imported | | `invalidValueCount` | integer | Number of invalid values encountered | | `skippedRowsCount` | integer | Number of rows skipped due to errors | | `errors` | array | List of error messages | | `rowIssues` | array | Detailed information about row-level issues | | `statusCode` | integer | HTTP status code | ## Best Practices - **Validate Column Names:** Ensure column names match exactly (case-sensitive) - **Check Culture Settings:** Use appropriate culture for date/number formats - **Handle Large Files:** Monitor upload progress for files approaching 1GB - **Review Row Issues:** Check `rowIssues` array for data quality problems - **Unique Dataset Names:** Dataset names must be unique within a project --- **Import Data from Multiple Sources** Import data from CSV, Excel, JSON, and other formats. 
Handle large datasets with streaming uploads. ## Upload Data File **POST** `/api/{tenantId}/{projectId}/dataset/{datasetId}/import` Upload and import data from various file formats including CSV, Excel, and XES. Supports large file uploads with chunked transfer. ### Parameters | Parameter | Type | Location | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Path | The tenant identifier | | `projectId` | GUID | Path | The project identifier | | `datasetId` | GUID | Path | The dataset identifier | | `file` | File | Form Data | Data file to upload | | `columnMapping` | JSON | Form Data | Column mapping configuration | ### Column Mapping Configuration ```json { "mapping": [ { "sourceColumn": "Case_ID", "targetColumn": "CaseID", "dataType": "string" }, { "sourceColumn": "Event_Name", "targetColumn": "Activity", "dataType": "string" }, { "sourceColumn": "Event_Time", "targetColumn": "Timestamp", "dataType": "datetime", "format": "yyyy-MM-dd HH:mm:ss" } ], "options": { "hasHeader": true, "delimiter": ",", "encoding": "UTF-8", "skipRows": 0 } } ``` ### Response ```json { "importId": "import-550e8400-e29b-41d4-a716-446655440000", "datasetId": "550e8400-e29b-41d4-a716-446655440000", "status": "Processing", "fileName": "process_events.csv", "fileSize": 15728640, "rowsProcessed": 0, "rowsTotal": 50000, "errors": [], "warnings": [], "startTime": "2024-01-15T10:30:00Z" } ``` ## Import CSV with Mapping **POST** `/api/{tenantId}/{projectId}/dataset/{datasetId}/import/csv` Import CSV data with advanced column mapping, data transformation, and validation options. 
### Request Body ```json { "fileUrl": "https://your-storage.com/data/events.csv", "mapping": [ { "sourceColumn": "order_id", "targetColumn": "CaseID", "dataType": "string", "required": true }, { "sourceColumn": "step_name", "targetColumn": "Activity", "dataType": "string", "required": true }, { "sourceColumn": "timestamp", "targetColumn": "Timestamp", "dataType": "datetime", "format": "ISO8601", "required": true }, { "sourceColumn": "user_name", "targetColumn": "Resource", "dataType": "string", "required": false } ], "options": { "hasHeader": true, "delimiter": ",", "encoding": "UTF-8", "skipRows": 1, "validateData": true, "replaceExisting": false }, "transformations": [ { "column": "Activity", "type": "replace", "find": "ORDER_", "replace": "Order " } ] } ``` ### Response Returns the same import status object as the file upload endpoint. ## Get Import Status **GET** `/api/{tenantId}/{projectId}/dataset/{datasetId}/import/{importId}/status` Monitor the progress and status of a data import operation including validation results and error details. 
### Response ```json { "importId": "import-550e8400-e29b-41d4-a716-446655440000", "datasetId": "550e8400-e29b-41d4-a716-446655440000", "status": "Completed", "fileName": "process_events.csv", "fileSize": 15728640, "rowsProcessed": 49876, "rowsTotal": 50000, "rowsSkipped": 124, "startTime": "2024-01-15T10:30:00Z", "endTime": "2024-01-15T10:45:23Z", "duration": "00:15:23", "errors": [ { "row": 1532, "column": "Timestamp", "error": "Invalid date format", "value": "2024-13-01 25:00:00" } ], "warnings": [ { "row": 2847, "column": "Resource", "warning": "Empty value for optional field", "value": "" } ], "statistics": { "uniqueCases": 1247, "uniqueActivities": 12, "dateRange": { "earliest": "2024-01-01T08:00:00Z", "latest": "2024-01-31T17:30:00Z" } } } ``` ## Supported File Formats mindzieStudio supports multiple data formats for seamless process mining data import: ### CSV Files Comma-separated values with flexible parsing options. - Custom delimiters (comma, semicolon, tab) - UTF-8, ISO-8859-1 encoding support - Header row detection - Quote character handling ### Excel Files Microsoft Excel workbooks (.xlsx, .xls). - Multiple worksheet support - Cell formatting preservation - Date/time recognition - Large file streaming ### XES Format IEEE XES standard for process mining. - Full XES specification support - Event attributes and extensions - Lifecycle information - Process mining tool compatibility ### JSON Files Structured JSON data for complex events. 
- Nested object support - Array handling - Custom schema mapping - Streaming JSON processing ## JavaScript Example: File Upload with Progress ```javascript class DataImporter { constructor(baseUrl, tenantId, projectId, token) { this.baseUrl = baseUrl; this.tenantId = tenantId; this.projectId = projectId; this.headers = { 'Authorization': `Bearer ${token}` }; } async uploadFile(datasetId, file, columnMapping) { const formData = new FormData(); formData.append('file', file); formData.append('columnMapping', JSON.stringify(columnMapping)); const url = `${this.baseUrl}/api/${this.tenantId}/${this.projectId}/dataset/${datasetId}/import`; const response = await fetch(url, { method: 'POST', headers: this.headers, body: formData }); return await response.json(); } async importCsv(datasetId, csvConfig) { const url = `${this.baseUrl}/api/${this.tenantId}/${this.projectId}/dataset/${datasetId}/import/csv`; const response = await fetch(url, { method: 'POST', headers: { ...this.headers, 'Content-Type': 'application/json' }, body: JSON.stringify(csvConfig) }); return await response.json(); } async getImportStatus(datasetId, importId) { const url = `${this.baseUrl}/api/${this.tenantId}/${this.projectId}/dataset/${datasetId}/import/${importId}/status`; const response = await fetch(url, { headers: this.headers }); return await response.json(); } async monitorImport(datasetId, importId, callback) { const checkStatus = async () => { try { const status = await this.getImportStatus(datasetId, importId); callback(status); if (status.status === 'Processing') { setTimeout(checkStatus, 2000); // Check every 2 seconds } } catch (error) { callback({ error: error.message }); } }; checkStatus(); } buildStandardMapping() { return { mapping: [ { sourceColumn: 'case_id', targetColumn: 'CaseID', dataType: 'string' }, { sourceColumn: 'activity', targetColumn: 'Activity', dataType: 'string' }, { sourceColumn: 'timestamp', targetColumn: 'Timestamp', dataType: 'datetime', format: 'ISO8601' } ], 
options: { hasHeader: true, delimiter: ',', encoding: 'UTF-8', validateData: true } }; } } // Usage example const importer = new DataImporter( 'https://your-mindzie-instance.com', 'tenant-guid', 'project-guid', 'your-token' ); // Handle file upload with progress monitoring document.getElementById('fileInput').addEventListener('change', async (e) => { const file = e.target.files[0]; if (!file) return; const mapping = importer.buildStandardMapping(); try { const result = await importer.uploadFile('dataset-guid', file, mapping); console.log('Upload started:', result.importId); // Monitor progress importer.monitorImport('dataset-guid', result.importId, (status) => { if (status.error) { console.error('Import failed:', status.error); } else { const progress = (status.rowsProcessed / status.rowsTotal) * 100; console.log(`Progress: ${progress.toFixed(1)}% (${status.rowsProcessed}/${status.rowsTotal})`); if (status.status === 'Completed') { console.log('Import completed successfully!'); console.log(`Processed ${status.rowsProcessed} rows with ${status.errors.length} errors`); } } }); } catch (error) { console.error('Upload failed:', error); } }); ``` --- ## Update Existing Datasets Update existing datasets with new data from CSV files, ZIP packages, or binary files. Updates preserve the dataset ID and associated configurations. ## Update Dataset from CSV **PUT** `/api/{tenantId}/{projectId}/dataset/{datasetId}/csv` Replaces the data in an existing dataset with new data from a CSV file. The system automatically detects column mappings from the dataset configuration. 
### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `projectId` | GUID | Yes | The project identifier | | `datasetId` | GUID | Yes | The dataset identifier to update | ### Request (multipart/form-data) | Field | Type | Required | Description | |-------|------|----------|-------------| | `file` | file | Yes | CSV file with new data (max 1GB) | | `cultureInfo` | string | No | Culture for parsing (default: "en-US") | ### Response (200 OK) ```json { "datasetId": "550e8400-e29b-41d4-a716-446655440000", "caseCount": 5500, "eventCount": 165000, "invalidValueCount": 0, "skippedRowsCount": 0, "errors": [], "rowIssues": [], "statusCode": 200 } ``` ### Error Responses **Bad Request (400):** ``` dataset with id '{datasetId}' not found ``` ``` can't update '{datasetName}' because it's not an original dataset ``` ## Update Dataset from ZIP Package **PUT** `/api/{tenantId}/{projectId}/dataset/{datasetId}/package` Replaces the data in an existing dataset with new data from a ZIP package. ### Request (multipart/form-data) | Field | Type | Required | Description | |-------|------|----------|-------------| | `file` | file | Yes | ZIP package file with new data (max 1GB) | | `cultureInfo` | string | No | Culture for parsing (default: "en-US") | ### Response (200 OK) Same structure as CSV update response. ### Error Response (422 Unprocessable Entity) ```json { "errors": ["Invalid package structure"], "rowIssues": [ { "rowIndex": 15, "columnName": "Timestamp", "errorType": "ParseError", "outcome": "Skipped", "message": "Unable to parse date value" } ], "statusCode": 422 } ``` ## Update Dataset from Binary **PUT** `/api/{tenantId}/{projectId}/dataset/{datasetId}/binary` Replaces the data in an existing dataset with new data from a binary format file. 
### Request (multipart/form-data) | Field | Type | Required | Description | |-------|------|----------|-------------| | `file` | file | Yes | Binary file with new data (max 1GB) | ### Response (200 OK) Same structure as CSV update response. ## Update Restrictions - **Original Datasets Only:** Only original datasets can be updated. Datasets derived from filters or other transformations cannot be updated directly. - **Preserve Configuration:** Updates preserve the dataset ID and all associated configurations (notebooks, blocks, etc.) - **Column Consistency:** The new data should have the same column structure as the original dataset ## Implementation Examples ### cURL - Update from CSV ```bash curl -X PUT "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/dataset/550e8400-e29b-41d4-a716-446655440000/csv" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \ -F "file=@updated_event_log.csv" \ -F "cultureInfo=en-US" ``` ### cURL - Update from ZIP Package ```bash curl -X PUT "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/dataset/550e8400-e29b-41d4-a716-446655440000/package" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \ -F "file=@updated_data_package.zip" \ -F "cultureInfo=en-US" ``` ### Python ```python import requests TENANT_ID = '12345678-1234-1234-1234-123456789012' PROJECT_ID = '87654321-4321-4321-4321-210987654321' BASE_URL = 'https://your-mindzie-instance.com' class DatasetUpdater: def __init__(self, token): self.headers = {'Authorization': f'Bearer {token}'} def update_from_csv(self, dataset_id, file_path, culture='en-US'): """Update dataset from CSV file.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/dataset/{dataset_id}/csv' with open(file_path, 'rb') as f: files = {'file': (file_path, f, 'text/csv')} data = {'cultureInfo': culture} response = requests.put(url, headers=self.headers, files=files, data=data) if response.ok: return 
response.json() else: raise Exception(f'Update failed: {response.text}') def update_from_package(self, dataset_id, file_path, culture='en-US'): """Update dataset from ZIP package.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/dataset/{dataset_id}/package' with open(file_path, 'rb') as f: files = {'file': (file_path, f, 'application/zip')} data = {'cultureInfo': culture} response = requests.put(url, headers=self.headers, files=files, data=data) if response.ok: return response.json() elif response.status_code == 422: result = response.json() print(f"Validation errors: {result['errors']}") for issue in result.get('rowIssues', []): print(f" Row {issue['rowIndex']}: {issue['message']}") raise Exception('Data validation failed') else: raise Exception(f'Update failed: {response.text}') def update_from_binary(self, dataset_id, file_path): """Update dataset from binary file.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/dataset/{dataset_id}/binary' with open(file_path, 'rb') as f: files = {'file': (file_path, f, 'application/octet-stream')} response = requests.put(url, headers=self.headers, files=files) if response.ok: return response.json() else: raise Exception(f'Update failed: {response.text}') # Usage updater = DatasetUpdater('your-auth-token') # Update from CSV result = updater.update_from_csv( '550e8400-e29b-41d4-a716-446655440000', 'updated_event_log.csv' ) print(f"Updated dataset: {result['datasetId']}") print(f"New case count: {result['caseCount']}") print(f"New event count: {result['eventCount']}") # Check for issues if result['skippedRowsCount'] > 0: print(f"Warning: {result['skippedRowsCount']} rows were skipped") if result['invalidValueCount'] > 0: print(f"Warning: {result['invalidValueCount']} invalid values found") ``` ### JavaScript/Node.js ```javascript const TENANT_ID = '12345678-1234-1234-1234-123456789012'; const PROJECT_ID = '87654321-4321-4321-4321-210987654321'; const BASE_URL = 'https://your-mindzie-instance.com'; class DatasetUpdater { 
constructor(token) { this.token = token; } async updateFromCsv(datasetId, file, culture = 'en-US') { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/dataset/${datasetId}/csv`; const formData = new FormData(); formData.append('file', file); formData.append('cultureInfo', culture); const response = await fetch(url, { method: 'PUT', headers: { 'Authorization': `Bearer ${this.token}` }, body: formData }); if (response.ok) { return await response.json(); } else if (response.status === 422) { const result = await response.json(); throw new Error(`Validation failed: ${result.errors.join(', ')}`); } else { throw new Error(`Update failed: ${await response.text()}`); } } async updateFromPackage(datasetId, file, culture = 'en-US') { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/dataset/${datasetId}/package`; const formData = new FormData(); formData.append('file', file); formData.append('cultureInfo', culture); const response = await fetch(url, { method: 'PUT', headers: { 'Authorization': `Bearer ${this.token}` }, body: formData }); if (response.ok) { return await response.json(); } else { const error = await response.json(); throw new Error(`Update failed: ${error.errors?.join(', ') || response.statusText}`); } } } // Usage (browser) const updater = new DatasetUpdater('your-auth-token'); const fileInput = document.getElementById('updateFile'); fileInput.addEventListener('change', async (e) => { const file = e.target.files[0]; const datasetId = '550e8400-e29b-41d4-a716-446655440000'; try { const result = await updater.updateFromCsv(datasetId, file); console.log(`Updated: ${result.datasetId}`); console.log(`New cases: ${result.caseCount}`); console.log(`New events: ${result.eventCount}`); if (result.skippedRowsCount > 0) { console.warn(`Skipped ${result.skippedRowsCount} rows`); } } catch (error) { console.error('Update failed:', error.message); } }); ``` ## Response Fields | Field | Type | Description | |-------|------|-------------| | `datasetId` | GUID | 
ID of the updated dataset |
| `caseCount` | integer | Number of unique cases in updated data |
| `eventCount` | integer | Total number of events in updated data |
| `invalidValueCount` | integer | Number of invalid values encountered |
| `skippedRowsCount` | integer | Number of rows skipped due to errors |
| `errors` | array | List of error messages |
| `rowIssues` | array | Detailed information about row-level issues |
| `statusCode` | integer | HTTP status code |

## Row Issue Structure

```json
{
  "rowIndex": 15,
  "columnName": "Timestamp",
  "errorType": "ParseError",
  "outcome": "Skipped",
  "message": "Unable to parse date value '2024-13-45'"
}
```

| Field | Description |
|-------|-------------|
| `rowIndex` | Row number with the issue |
| `columnName` | Column containing the problematic value |
| `errorType` | Type of error (ParseError, ValidationError, etc.) |
| `outcome` | What happened (Skipped, DefaultValue, etc.) |
| `message` | Human-readable error description |

## Best Practices

- **Backup First:** Consider exporting current data before updates
- **Verify Structure:** Ensure new data has the same column structure
- **Check Results:** Review `rowIssues` and `skippedRowsCount` after updates
- **Test First:** Test updates on a non-production dataset first
- **Culture Settings:** Use the correct culture for date and number formats

---

## Supported Data Formats

Learn about supported file formats, data structures, and column mapping requirements for process mining datasets.

## CSV (Comma-Separated Values)

The most commonly used format for process mining data with flexible parsing options.
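Given the required columns listed later under Format Validation and Errors, it can save a failed upload round trip to check a CSV header locally first. A minimal sketch, assuming the header uses the canonical column names from this page's sample CSV (your column mapping may translate different source names):

```python
import csv
import io

# Required event-log columns (see "Required Columns" under Format
# Validation and Errors). Names follow this page's sample CSV; your
# dataset's mapping may use different source column names.
REQUIRED_COLUMNS = {"CaseID", "Activity", "Timestamp"}

def find_missing_columns(csv_text: str, delimiter: str = ",") -> set:
    """Return the set of required columns absent from the CSV header row."""
    reader = csv.reader(io.StringIO(csv_text), delimiter=delimiter)
    header = next(reader, [])
    return REQUIRED_COLUMNS - {col.strip() for col in header}

sample = (
    "CaseID,Activity,Timestamp,Resource\n"
    "PO-001,Create Order,2024-01-15T09:00:00Z,buyer.smith\n"
)
print(find_missing_columns(sample))  # set() -> nothing missing
```

The same check works for semicolon- or tab-delimited files by passing the matching `delimiter`.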
### Format Specifications | Option | Description | Default | Example | |--------|-------------|---------|---------| | `delimiter` | Field separator character | comma (,) | semicolon (;), tab (\t) | | `encoding` | Character encoding | UTF-8 | ISO-8859-1, Windows-1252 | | `hasHeader` | First row contains column names | true | true, false | | `quoteChar` | Text qualifier character | double quote (") | single quote (') | ### Sample CSV Structure ```csv CaseID,Activity,Timestamp,Resource,Amount PO-001,Create Order,2024-01-15T09:00:00Z,buyer.smith,1500.00 PO-001,Approve Order,2024-01-15T10:30:00Z,manager.jones,1500.00 PO-001,Send to Supplier,2024-01-15T11:00:00Z,system.auto,1500.00 PO-002,Create Order,2024-01-15T09:15:00Z,buyer.brown,2750.50 ``` ### Column Mapping Configuration ```json { "mapping": [ { "sourceColumn": "CaseID", "targetColumn": "CaseID", "dataType": "string", "role": "case_id" }, { "sourceColumn": "Activity", "targetColumn": "Activity", "dataType": "string", "role": "activity" }, { "sourceColumn": "Timestamp", "targetColumn": "Timestamp", "dataType": "datetime", "role": "timestamp", "format": "ISO8601" } ], "options": { "hasHeader": true, "delimiter": ",", "encoding": "UTF-8" } } ``` ## Excel Files (.xlsx, .xls) Microsoft Excel workbooks with support for multiple worksheets and advanced formatting. 
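Import configurations like the Excel mapping example shown below can be assembled programmatically rather than hand-written. A sketch, assuming the JSON field names used in this page's examples (`worksheetName`, `sourceColumn`, `targetColumn`, and so on) match your deployment:

```python
import json

def excel_import_config(worksheet, mappings, has_header=True, start_row=1):
    """Assemble an Excel import configuration dict.

    `mappings` is a list of (source, target, data_type, date_format)
    tuples; date_format may be None for non-datetime columns. Field
    names mirror this page's examples, not a verified schema reference.
    """
    mapping = []
    for source, target, data_type, date_format in mappings:
        entry = {"sourceColumn": source, "targetColumn": target, "dataType": data_type}
        if date_format:
            entry["format"] = date_format
        mapping.append(entry)
    return {
        "worksheetName": worksheet,
        "hasHeader": has_header,
        "startRow": start_row,
        "mapping": mapping,
    }

config = excel_import_config("ProcessEvents", [
    ("Order ID", "CaseID", "string", None),
    ("Event Date", "Timestamp", "datetime", "MM/dd/yyyy HH:mm:ss"),
])
print(json.dumps(config, indent=2))
```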
### Supported Features

#### File Types
- .xlsx (Excel 2007+)
- .xls (Excel 97-2003)
- .xlsm (Macro-enabled)

#### Worksheet Handling
- Multiple worksheet support
- Specific sheet selection
- Range-based import

#### Data Recognition
- Automatic date/time detection
- Numeric format preservation
- Text formatting cleanup

### Excel Import Configuration

```json
{
  "worksheetName": "ProcessEvents",
  "range": "A1:E1000",
  "hasHeader": true,
  "startRow": 1,
  "mapping": [
    {
      "sourceColumn": "Order ID",
      "targetColumn": "CaseID",
      "dataType": "string"
    },
    {
      "sourceColumn": "Event Date",
      "targetColumn": "Timestamp",
      "dataType": "datetime",
      "format": "MM/dd/yyyy HH:mm:ss"
    }
  ]
}
```

## XES (eXtensible Event Stream)

IEEE standard format for process mining with full support for event attributes and extensions.

### XES Specification Support

| Element | Support Level | Description |
|---------|---------------|-------------|
| Log | Full | Log-level attributes and metadata |
| Trace | Full | Case-level attributes and events |
| Event | Full | Activity-level data and attributes |
| Extensions | Partial | Standard extensions (concept, time, lifecycle) |

### Sample XES Structure

A minimal, abbreviated example of the log/trace/event hierarchy:

```xml
<!-- Minimal illustrative XES log (abbreviated) -->
<log xes.version="1.0" xmlns="http://www.xes-standard.org/">
  <extension name="Concept" prefix="concept" uri="http://www.xes-standard.org/concept.xesext"/>
  <extension name="Time" prefix="time" uri="http://www.xes-standard.org/time.xesext"/>
  <trace>
    <string key="concept:name" value="PO-001"/>
    <event>
      <string key="concept:name" value="Create Order"/>
      <date key="time:timestamp" value="2024-01-15T09:00:00Z"/>
    </event>
    <event>
      <string key="concept:name" value="Approve Order"/>
      <date key="time:timestamp" value="2024-01-15T10:30:00Z"/>
    </event>
  </trace>
</log>
```

## JSON (JavaScript Object Notation)

Structured JSON format for complex event data with nested attributes and flexible schema.

### JSON Schema Options

#### Array of Events
Simple flat structure with event objects.

```json
[
  {
    "caseId": "PO-001",
    "activity": "Create Order",
    "timestamp": "2024-01-15T09:00:00Z",
    "resource": "buyer.smith"
  }
]
```

#### Nested Structure
Hierarchical data with case and event nesting.
```json { "cases": [ { "caseId": "PO-001", "events": [ { "activity": "Create Order", "timestamp": "2024-01-15T09:00:00Z" } ] } ] } ``` ### JSON Mapping Configuration ```json { "schema": "flat", "mapping": [ { "jsonPath": "$.caseId", "targetColumn": "CaseID", "dataType": "string" }, { "jsonPath": "$.activity", "targetColumn": "Activity", "dataType": "string" }, { "jsonPath": "$.timestamp", "targetColumn": "Timestamp", "dataType": "datetime" } ] } ``` ## Data Type Requirements Understanding data types and validation rules for proper dataset structure: ### String Fields Text data with length and character validation. - UTF-8 encoding required - Maximum length: 1000 characters - Special character handling - Null value support ### DateTime Fields Timestamp data with timezone support. - ISO 8601 format preferred - Custom format support - Timezone conversion - Precision to milliseconds ### Numeric Fields Integer and decimal number handling. - 64-bit integer support - Double precision decimals - Scientific notation - Currency formatting ### Boolean Fields True/false value interpretation. 
- true/false (case insensitive) - 1/0 numeric values - yes/no text values - Null handling options ## Format Validation and Errors Common validation rules and error handling for different file formats: ### Required Columns Every process mining dataset must include these essential columns: - **Case ID:** Unique identifier for each process instance - **Activity:** Name or description of the process step - **Timestamp:** When the activity occurred (with timezone) ### Common Validation Errors | Error Type | Description | Resolution | |------------|-------------|------------| | Missing Required Column | CaseID, Activity, or Timestamp not found | Add missing column or update mapping | | Invalid Date Format | Timestamp not in recognized format | Specify custom date format pattern | | Empty Case ID | Null or empty values in Case ID column | Clean data or use row filtering | | Duplicate Headers | Multiple columns with same name | Rename columns or use column indices | ## Best Practices - **Data Quality:** Validate data before import using built-in validation options - **Performance:** Use streaming uploads for files larger than 100MB - **Encoding:** Always specify UTF-8 encoding for international character support - **Timestamps:** Include timezone information in all timestamp data - **Testing:** Use small sample files to test column mappings before full import - **Documentation:** Document custom formats and mappings for future reference --- # Block API ## Analysis Block Management Manage analysis blocks within notebooks including filters, calculators, and alerts. Complete operations for working with process mining analysis blocks. ## Features ### Block Type Discovery Discover all available block types (filters, calculators, enrichments) in a single API call with complete metadata. [Discover Block Types](/mindzie_api/block/discovery) ### Block Management Get, update, and delete analysis blocks. Create blocks via the Notebook API. 
[Manage Blocks](/mindzie_api/block/management) ### Block Execution Execute individual blocks and monitor processing status with asynchronous queuing. [Execute Blocks](/mindzie_api/block/execution) ### Block Results Retrieve analysis results and execution history from completed block executions. [Get Results](/mindzie_api/block/results) ### Block Types Explore different block types including filters, calculators, and alert configurations. [Explore Types](/mindzie_api/block/types) ## Available Endpoints ### Block Type Discovery - **GET** `/api/tenant/{tenantId}/project/{projectId}/block/types/all` - Get all block types (filters, calculators, enrichments) in a single call ### Connectivity Testing - **GET** `/api/{tenantId}/{projectId}/block/unauthorized-ping` - Public connectivity test (no auth required) - **GET** `/api/{tenantId}/{projectId}/block/ping` - Authenticated connectivity test ### Block Operations - **GET** `/api/{tenantId}/{projectId}/block/{blockId}` - Get block details - **PUT** `/api/{tenantId}/{projectId}/block/{blockId}` - Update block metadata - **DELETE** `/api/{tenantId}/{projectId}/block/{blockId}` - Delete a block ### Block Execution - **POST** `/api/{tenantId}/{projectId}/block/{blockId}/execute` - Queue block for execution ### Block Results - **GET** `/api/{tenantId}/{projectId}/block/{blockId}/results` - Get execution results - **GET** `/api/{tenantId}/{projectId}/block/{blockId}/output-data` - Get output data (via ExecutionController) ## Creating Blocks Block creation is handled through the Notebook API: **POST** `/api/{tenantId}/{projectId}/notebook/{notebookId}/blocks` See [Notebook API](/mindzie_api/notebook/overview) for block creation details. ## Block Types mindzieStudio supports various types of analysis blocks: ### Filter Blocks Apply filters to focus analysis on specific data subsets and conditions. 
- Activity filters - Time period filters - Case attribute filters ### Calculator Blocks Perform calculations and generate metrics from process mining data. - Duration calculations - Frequency analysis - Performance metrics ### Alert Blocks Configure monitoring alerts and notifications for process deviations. - Threshold alerts - Pattern detection - Compliance monitoring ## Common Use Cases - **Dynamic Analysis:** Build and modify analysis workflows programmatically - **Automated Reporting:** Execute blocks on schedule and export results - **Custom Dashboards:** Create tailored visualizations with specific block configurations - **Data Processing Pipelines:** Chain multiple blocks for complex analysis workflows - **Real-time Monitoring:** Set up alert blocks for continuous process monitoring ## Authentication All Block API endpoints (except `unauthorized-ping`) require valid authentication with appropriate permissions for the target tenant and project. ## Getting Started Begin with [Block Management](/mindzie_api/block/management) to learn how to work with blocks, then explore [Block Types](/mindzie_api/block/types) for specific analysis configurations. --- ## Analysis Block Categories Explore different block types including filters, calculators, and alert configurations. Learn about each type's capabilities, configuration options, and creation endpoints. ## Filter Blocks **POST** `/api/{tenantId}/{projectId}/block/filter` Filter blocks apply data filtering criteria to datasets, reducing data by applying conditions such as date ranges, value filters, or logical expressions. They are the foundation for focused analysis on specific data subsets. 
### Capabilities - **Date Range Filtering:** Filter data within specific time periods - **Activity Filtering:** Include or exclude specific process activities - **Case Attribute Filtering:** Filter based on case properties and metadata - **Value Range Filtering:** Apply numeric and text value conditions - **Complex Logic:** Combine multiple filters with AND/OR operations ### Request Body ```json { "notebookId": "660e8400-e29b-41d4-a716-446655440000", "blockTitle": "Date Range Filter", "blockDescription": "Filters process data for the last 30 days" } ``` ### Configuration Examples ```javascript // Date range filter configuration { "filterType": "dateRange", "startDate": "2024-01-01T00:00:00Z", "endDate": "2024-01-31T23:59:59Z", "dateField": "timestamp" } // Activity filter configuration { "filterType": "activity", "include": ["Order Created", "Payment Processed"], "exclude": ["System Log"] } // Case attribute filter configuration { "filterType": "caseAttribute", "attribute": "customerType", "operator": "equals", "value": "Premium" } ``` ## Calculator Blocks **POST** `/api/{tenantId}/{projectId}/block/calculator` Calculator blocks perform mathematical operations and analytical calculations on datasets. They compute metrics, aggregations, statistical measures, and derived values for process mining analysis. 
### Capabilities - **Duration Calculations:** Process cycle times and lead times - **Frequency Analysis:** Activity occurrence rates and patterns - **Performance Metrics:** Throughput, efficiency, and utilization - **Statistical Analysis:** Mean, median, percentiles, and distributions - **Custom Formulas:** Complex mathematical expressions and KPIs ### Request Body ```json { "notebookId": "660e8400-e29b-41d4-a716-446655440000", "blockTitle": "Process Duration Calculator", "blockDescription": "Calculates average case duration and cycle times" } ``` ### Configuration Examples ```javascript // Duration calculation configuration { "calculationType": "duration", "startActivity": "Order Created", "endActivity": "Order Completed", "unit": "hours", "aggregation": "average" } // Frequency calculation configuration { "calculationType": "frequency", "groupBy": "activity", "timeWindow": "daily", "metric": "count" } // Custom KPI calculation configuration { "calculationType": "custom", "formula": "(completedCases / totalCases) * 100", "resultUnit": "percentage", "name": "Completion Rate" } ``` ## Alert Blocks **POST** `/api/{tenantId}/{projectId}/block/alert` Alert blocks monitor data conditions and trigger notifications when specific criteria are met. They provide automated monitoring and exception detection for process mining workflows and compliance requirements. 
### Capabilities - **Threshold Monitoring:** Alert when metrics exceed defined limits - **Pattern Detection:** Identify unusual process behavior patterns - **Compliance Monitoring:** Track adherence to business rules - **Performance Alerts:** Monitor SLA violations and performance degradation - **Real-time Notifications:** Immediate alerts for critical conditions ### Request Body ```json { "notebookId": "660e8400-e29b-41d4-a716-446655440000", "blockTitle": "SLA Violation Alert", "blockDescription": "Alerts when case duration exceeds SLA threshold" } ``` ### Configuration Examples ```javascript // Threshold alert configuration { "alertType": "threshold", "metric": "caseDuration", "operator": "greaterThan", "threshold": 48, "unit": "hours", "severity": "high" } // Pattern deviation alert configuration { "alertType": "patternDeviation", "baselinePattern": "Order -> Payment -> Fulfillment", "deviationTolerance": 0.1, "minOccurrences": 10 } // Compliance alert configuration { "alertType": "compliance", "rule": "approvalRequired", "condition": "amount > 1000", "requiredActivity": "Manager Approval" } ``` ## Block Type Comparison Choose the right block type for your analysis needs: | Block Type | Primary Purpose | Input | Output | Use Cases | |------------|-----------------|-------|--------|-----------| | **Filter** | Data reduction and focusing | Full dataset | Filtered dataset | Time period analysis, specific process paths | | **Calculator** | Metrics and KPI computation | Dataset (filtered or full) | Calculated values/metrics | Performance measurement, statistical analysis | | **Alert** | Monitoring and notifications | Metrics or dataset | Alert conditions/notifications | SLA monitoring, exception detection | ## Example: Complete Block Workflow This example demonstrates creating different block types for a comprehensive analysis: ```javascript // Create a filter block to focus on recent data const createDateFilter = async (notebookId) => { const response = await 
fetch(`/api/${tenantId}/${projectId}/block/filter`, { method: 'POST', headers: { 'Content-Type': 'application/json', 'Authorization': `Bearer ${token}` }, body: JSON.stringify({ notebookId: notebookId, blockTitle: 'Last 30 Days Filter', blockDescription: 'Focus analysis on recent process data' }) }); return await response.json(); }; // Create a calculator block to compute metrics const createDurationCalculator = async (notebookId) => { const response = await fetch(`/api/${tenantId}/${projectId}/block/calculator`, { method: 'POST', headers: { 'Content-Type': 'application/json', 'Authorization': `Bearer ${token}` }, body: JSON.stringify({ notebookId: notebookId, blockTitle: 'Average Duration Calculator', blockDescription: 'Calculate average case processing time' }) }); return await response.json(); }; // Create an alert block for monitoring const createSLAAlert = async (notebookId) => { const response = await fetch(`/api/${tenantId}/${projectId}/block/alert`, { method: 'POST', headers: { 'Content-Type': 'application/json', 'Authorization': `Bearer ${token}` }, body: JSON.stringify({ notebookId: notebookId, blockTitle: 'SLA Violation Alert', blockDescription: 'Monitor for SLA breaches' }) }); return await response.json(); }; // Build complete analysis workflow const buildAnalysisWorkflow = async (notebookId) => { try { // 1. Filter data to recent period const filterBlock = await createDateFilter(notebookId); console.log('Created filter block:', filterBlock.blockId); // 2. Calculate performance metrics const calculatorBlock = await createDurationCalculator(notebookId); console.log('Created calculator block:', calculatorBlock.blockId); // 3. 
Set up monitoring alerts const alertBlock = await createSLAAlert(notebookId); console.log('Created alert block:', alertBlock.blockId); return { filter: filterBlock, calculator: calculatorBlock, alert: alertBlock }; } catch (error) { console.error('Error building workflow:', error); throw error; } }; ``` ## Python Implementation ```python import requests from typing import Dict, Any class BlockTypeManager: def __init__(self, base_url: str, tenant_id: str, project_id: str, token: str): self.base_url = base_url self.tenant_id = tenant_id self.project_id = project_id self.headers = { 'Authorization': f'Bearer {token}', 'Content-Type': 'application/json' } def create_filter_block(self, notebook_id: str, title: str, description: str) -> Dict[str, Any]: """Create a filter block for data reduction""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/block/filter" payload = { 'notebookId': notebook_id, 'blockTitle': title, 'blockDescription': description } response = requests.post(url, json=payload, headers=self.headers) response.raise_for_status() return response.json() def create_calculator_block(self, notebook_id: str, title: str, description: str) -> Dict[str, Any]: """Create a calculator block for metrics computation""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/block/calculator" payload = { 'notebookId': notebook_id, 'blockTitle': title, 'blockDescription': description } response = requests.post(url, json=payload, headers=self.headers) response.raise_for_status() return response.json() def create_alert_block(self, notebook_id: str, title: str, description: str) -> Dict[str, Any]: """Create an alert block for monitoring""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/block/alert" payload = { 'notebookId': notebook_id, 'blockTitle': title, 'blockDescription': description } response = requests.post(url, json=payload, headers=self.headers) response.raise_for_status() return response.json() def 
create_analysis_pipeline(self, notebook_id: str) -> Dict[str, Any]: """Create a complete analysis pipeline with all block types""" pipeline = {} # Create filter block pipeline['filter'] = self.create_filter_block( notebook_id, 'Data Filter', 'Filter dataset for analysis scope' ) # Create calculator block pipeline['calculator'] = self.create_calculator_block( notebook_id, 'Performance Calculator', 'Calculate key performance metrics' ) # Create alert block pipeline['alert'] = self.create_alert_block( notebook_id, 'Performance Alert', 'Monitor performance thresholds' ) return pipeline # Usage example block_manager = BlockTypeManager( 'https://your-mindzie-instance.com', 'tenant-guid', 'project-guid', 'your-auth-token' ) # Create complete analysis pipeline pipeline = block_manager.create_analysis_pipeline('notebook-guid') print(f"Created pipeline with {len(pipeline)} blocks") ``` ## Important Notes **Block Dependencies:** Blocks can be chained together where filter blocks reduce data, calculator blocks compute metrics, and alert blocks monitor the results for exceptions. **Best Practice:** Start with filter blocks to reduce data scope, then use calculator blocks for analysis, and finally add alert blocks for ongoing monitoring and notifications. --- ## Execute Analysis Blocks Execute individual blocks asynchronously and monitor their processing status. Blocks process data according to their type and configuration. ## Execute Block **POST** `/api/{tenantId}/{projectId}/block/{blockId}/execute` Queues a block for asynchronous execution. The block will process data according to its type and configuration (filter, calculator, alert, etc.). Returns an execution ID for tracking progress. 
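Because execution is queued rather than performed synchronously, callers typically poll until one of the terminal statuses documented below is reached. A minimal loop sketch; how the current status is obtained (for example, via the block results endpoint) is left as an injected callable, since result payload shapes vary by deployment:

```python
import time

# Terminal states from the "Execution Status Values" table on this page.
TERMINAL_STATUSES = {"Success", "Failed", "Cancelled"}

def poll_until_done(get_status, interval=2.0, timeout=300.0):
    """Poll `get_status` (a zero-argument callable returning the current
    execution status string) until a terminal status or the timeout.
    Obtaining the status string from your deployment is up to the caller."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(interval)
    raise TimeoutError("block execution did not finish in time")

# Example with a stubbed status source:
statuses = iter(["Queued", "Running", "Success"])
print(poll_until_done(lambda: next(statuses), interval=0.01))  # Success
```

For long-running blocks, prefer a generous `interval` to avoid hammering the API.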
### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `projectId` | GUID | Yes | The project identifier | | `blockId` | GUID | Yes | The block identifier to execute | ### Request Body (Optional) ```json { "parameters": { "dateFrom": "2024-01-01", "dateTo": "2024-12-31", "threshold": 1000 } } ``` ### Response (202 Accepted) ```json { "blockId": "550e8400-e29b-41d4-a716-446655440000", "executionId": "770e8400-e29b-41d4-a716-446655440000", "status": "Queued", "dateQueued": "2024-01-15T10:30:00Z", "dateStarted": null, "dateEnded": null, "result": null, "errorMessage": null, "message": "Block execution has been queued" } ``` ### Response Fields | Field | Type | Description | |-------|------|-------------| | `blockId` | GUID | The block that was queued | | `executionId` | GUID | Unique identifier for this execution | | `status` | string | Current execution status | | `dateQueued` | datetime | When the execution was queued | | `dateStarted` | datetime | When execution started (null if not started) | | `dateEnded` | datetime | When execution completed (null if not finished) | | `result` | object | Execution result data (null until complete) | | `errorMessage` | string | Error details if execution failed | | `message` | string | Human-readable status message | ### Error Responses **Not Found (404):** ```json { "Error": "Block not found", "BlockId": "550e8400-e29b-41d4-a716-446655440000" } ``` **Unauthorized (401):** ``` {error message describing authorization failure} ``` ## Execution Status Values Block execution progresses through these status values: | Status | Description | |--------|-------------| | `Queued` | Block is waiting in the execution queue | | `Running` | Block is currently processing data | | `Success` | Block completed execution successfully | | `Failed` | Block execution failed with errors | | `Cancelled` | Block execution was cancelled | ## 
Implementation Examples ### cURL ```bash # Execute block without parameters curl -X POST "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/block/550e8400-e29b-41d4-a716-446655440000/execute" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \ -H "Content-Type: application/json" # Execute block with parameters curl -X POST "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/block/550e8400-e29b-41d4-a716-446655440000/execute" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \ -H "Content-Type: application/json" \ -d '{"parameters": {"dateFrom": "2024-01-01", "dateTo": "2024-12-31"}}' ``` ### JavaScript/Node.js ```javascript const TENANT_ID = '12345678-1234-1234-1234-123456789012'; const PROJECT_ID = '87654321-4321-4321-4321-210987654321'; const BASE_URL = 'https://your-mindzie-instance.com'; class BlockExecutor { constructor(token) { this.headers = { 'Authorization': `Bearer ${token}`, 'Content-Type': 'application/json' }; } async executeBlock(blockId, parameters = null) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/block/${blockId}/execute`; const body = parameters ? 
JSON.stringify({ parameters }) : null; const response = await fetch(url, { method: 'POST', headers: this.headers, body: body }); if (response.status === 202) { return await response.json(); } throw new Error(`Execution failed: ${response.statusText}`); } } // Usage const executor = new BlockExecutor('your-auth-token'); // Execute without parameters const result = await executor.executeBlock('block-guid'); console.log(`Execution queued: ${result.executionId}`); // Execute with parameters const resultWithParams = await executor.executeBlock('block-guid', { dateFrom: '2024-01-01', dateTo: '2024-12-31' }); ``` ### Python ```python import requests TENANT_ID = '12345678-1234-1234-1234-123456789012' PROJECT_ID = '87654321-4321-4321-4321-210987654321' BASE_URL = 'https://your-mindzie-instance.com' class BlockExecutor: def __init__(self, token): self.headers = { 'Authorization': f'Bearer {token}', 'Content-Type': 'application/json' } def execute_block(self, block_id, parameters=None): """Queue block for execution.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/block/{block_id}/execute' payload = {'parameters': parameters} if parameters else {} response = requests.post(url, json=payload, headers=self.headers) if response.status_code == 202: return response.json() else: raise Exception(f'Failed to execute block: {response.text}') # Usage executor = BlockExecutor('your-auth-token') # Execute without parameters result = executor.execute_block('block-guid') print(f"Execution queued: {result['executionId']}") # Execute with parameters result = executor.execute_block('block-guid', { 'dateFrom': '2024-01-01', 'dateTo': '2024-12-31', 'threshold': 1000 }) print(f"Status: {result['status']}") ``` ### C# ```csharp using System; using System.Net.Http; using System.Text; using System.Text.Json; using System.Threading.Tasks; public class ExecuteBlockReturn { public Guid BlockId { get; set; } public Guid ExecutionId { get; set; } public string Status { get; set; } public DateTime 
DateQueued { get; set; }
    public DateTime? DateStarted { get; set; }
    public DateTime? DateEnded { get; set; }
    public object Result { get; set; }
    public string ErrorMessage { get; set; }
    public string Message { get; set; }
}

public class BlockExecutorClient
{
    private readonly HttpClient _httpClient;
    private readonly string _baseUrl;
    private readonly Guid _tenantId;
    private readonly Guid _projectId;

    public BlockExecutorClient(string baseUrl, Guid tenantId, Guid projectId, string accessToken)
    {
        _baseUrl = baseUrl;
        _tenantId = tenantId;
        _projectId = projectId;
        _httpClient = new HttpClient();
        _httpClient.DefaultRequestHeaders.Authorization =
            new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken);
    }

    public async Task<ExecuteBlockReturn> ExecuteBlockAsync(Guid blockId, object parameters = null)
    {
        var url = $"{_baseUrl}/api/{_tenantId}/{_projectId}/block/{blockId}/execute";
        // Serialize the optional parameters wrapper (empty object when none).
        var payload = parameters != null
            ? JsonSerializer.Serialize(new { parameters })
            : "{}";
        var content = new StringContent(payload, Encoding.UTF8, "application/json");

        var response = await _httpClient.PostAsync(url, content);
        if (response.StatusCode == System.Net.HttpStatusCode.Accepted)
        {
            var json = await response.Content.ReadAsStringAsync();
            return JsonSerializer.Deserialize<ExecuteBlockReturn>(json,
                new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
        }
        throw new Exception($"Failed to execute block: {response.StatusCode}");
    }
}

// Usage
var executor = new BlockExecutorClient(
    "https://your-mindzie-instance.com",
    Guid.Parse("12345678-1234-1234-1234-123456789012"),
    Guid.Parse("87654321-4321-4321-4321-210987654321"),
    "your-access-token");

// Execute block
var result = await executor.ExecuteBlockAsync(
    Guid.Parse("block-guid"),
    new { dateFrom = "2024-01-01", dateTo = "2024-12-31" });
Console.WriteLine($"Execution queued: {result.ExecutionId}");
```

## Best Practices

- **Check Block Status:** Verify the block is not disabled before executing
- **Use Parameters Wisely:** Pass runtime parameters to customize
execution without modifying block configuration - **Handle Async Nature:** Block execution is asynchronous - use the results endpoint to check completion - **Monitor Execution:** For long-running blocks, poll the results endpoint to track progress --- ## Retrieve Analysis Results Access execution history and results from completed block executions. Get processed results from filters, calculations, and alerts. ## Get Block Results **GET** `/api/{tenantId}/{projectId}/block/{blockId}/results` Retrieves execution history metadata for a specific block. Results include execution timestamps and status information. ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `projectId` | GUID | Yes | The project identifier | | `blockId` | GUID | Yes | The block identifier | ### Response (200 OK) ```json { "blockId": "550e8400-e29b-41d4-a716-446655440000", "message": "Block execution history not yet implemented", "executions": [] } ``` ### Response Codes - `200 OK` - Request successful - `401 Unauthorized` - Invalid authentication or insufficient permissions - `404 Not Found` - Block not found or no access - `500 Internal Server Error` - Server error occurred ## Get Block Output Data **GET** `/api/{tenantId}/{projectId}/block/{blockId}/output-data` Retrieves the transformed dataset produced by the block. **Important:** This endpoint returns guidance to use the ExecutionController workflow for actual output data retrieval, as block output data is only available through the in-memory project cache. 
### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `projectId` | GUID | Yes | The project identifier | | `blockId` | GUID | Yes | The block identifier | ### Response (501 Not Implemented) ```json { "Error": "Block output data retrieval not implemented via database API", "BlockId": "550e8400-e29b-41d4-a716-446655440000", "Message": "To retrieve block output data, use the ExecutionController workflow: 1. Load project into cache via ProjectController.LoadProject, 2. Execute notebook via ExecutionController.ExecuteNotebook, 3. Retrieve results via ExecutionController.GetNotebookResults", "Recommendation": "GET /api/{tenantId}/{projectId}/execution/notebook/{notebookId}/results" } ``` ## Recommended Workflow for Block Output To retrieve actual block output data, follow this workflow using the Execution API: 1. **Load Project into Cache** ``` GET /api/{tenantId}/{projectId}/project/{projectId}/load ``` 2. **Execute Notebook** ``` POST /api/{tenantId}/{projectId}/execution/notebook/{notebookId} ``` 3. **Get Notebook Results** ``` GET /api/{tenantId}/{projectId}/execution/notebook/{notebookId}/results ``` See [Execution API](/mindzie_api/execution/overview) for complete documentation. 
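The three-step workflow above can be wrapped in a small helper. The paths below are taken directly from the steps in this section; the response shapes of these endpoints are not documented on this page, so the final JSON is returned as-is (a sketch, not a verified client):

```python
import requests

def execution_workflow_urls(base_url, tenant_id, project_id, notebook_id):
    """Return the (method, url) pairs for the three workflow steps above."""
    prefix = f"{base_url}/api/{tenant_id}/{project_id}"
    return [
        ("GET", f"{prefix}/project/{project_id}/load"),                 # 1. load project into cache
        ("POST", f"{prefix}/execution/notebook/{notebook_id}"),         # 2. execute notebook
        ("GET", f"{prefix}/execution/notebook/{notebook_id}/results"),  # 3. fetch results
    ]

def get_notebook_results(base_url, tenant_id, project_id, notebook_id, token):
    """Run the workflow sequentially and return the final results payload."""
    headers = {"Authorization": f"Bearer {token}"}
    response = None
    for method, url in execution_workflow_urls(base_url, tenant_id, project_id, notebook_id):
        response = requests.request(method, url, headers=headers)
        response.raise_for_status()
    return response.json()

for method, url in execution_workflow_urls(
        "https://your-mindzie-instance.com", "tenant-guid", "project-guid", "notebook-guid"):
    print(method, url)
```

Note that execution here is notebook-scoped: the notebook containing the block is executed, and the block's output is read from the notebook results.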
## Implementation Examples ### cURL ```bash # Get block results metadata curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/block/550e8400-e29b-41d4-a716-446655440000/results" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" # Attempt to get output data (returns guidance) curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/block/550e8400-e29b-41d4-a716-446655440000/output-data" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" ``` ### JavaScript/Node.js ```javascript const TENANT_ID = '12345678-1234-1234-1234-123456789012'; const PROJECT_ID = '87654321-4321-4321-4321-210987654321'; const BASE_URL = 'https://your-mindzie-instance.com'; class BlockResultsClient { constructor(token) { this.headers = { 'Authorization': `Bearer ${token}`, 'Content-Type': 'application/json' }; } async getBlockResults(blockId) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/block/${blockId}/results`; const response = await fetch(url, { headers: this.headers }); if (response.ok) { return await response.json(); } throw new Error(`Failed to get results: ${response.status}`); } async getBlockOutputData(blockId) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/block/${blockId}/output-data`; const response = await fetch(url, { headers: this.headers }); // Note: This endpoint returns 501 with guidance return await response.json(); } } // Usage const client = new BlockResultsClient('your-auth-token'); // Get results metadata const results = await client.getBlockResults('block-guid'); console.log(`Block: ${results.blockId}`); console.log(`Executions: ${results.executions.length}`); // Check output data guidance const outputInfo = await client.getBlockOutputData('block-guid'); console.log(`Recommendation: ${outputInfo.Recommendation}`); ``` ### Python ```python import requests TENANT_ID = '12345678-1234-1234-1234-123456789012' PROJECT_ID = 
'87654321-4321-4321-4321-210987654321' BASE_URL = 'https://your-mindzie-instance.com' class BlockResultsClient: def __init__(self, token): self.headers = { 'Authorization': f'Bearer {token}', 'Content-Type': 'application/json' } def get_block_results(self, block_id): """Get block execution results metadata.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/block/{block_id}/results' response = requests.get(url, headers=self.headers) if response.ok: return response.json() else: raise Exception(f'Failed to get results: {response.status_code}') def get_block_output_data(self, block_id): """Get block output data guidance.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/block/{block_id}/output-data' response = requests.get(url, headers=self.headers) return response.json() # Usage client = BlockResultsClient('your-auth-token') # Get results results = client.get_block_results('block-guid') print(f"Block: {results['blockId']}") # Get output data guidance output_info = client.get_block_output_data('block-guid') print(f"Recommendation: {output_info.get('Recommendation', 'N/A')}") ``` ### C# ```csharp using System; using System.Net.Http; using System.Text.Json; using System.Threading.Tasks; public class BlockResultsClient { private readonly HttpClient _httpClient; private readonly string _baseUrl; private readonly Guid _tenantId; private readonly Guid _projectId; public BlockResultsClient(string baseUrl, Guid tenantId, Guid projectId, string accessToken) { _baseUrl = baseUrl; _tenantId = tenantId; _projectId = projectId; _httpClient = new HttpClient(); _httpClient.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken); } public async Task<JsonDocument> GetBlockResultsAsync(Guid blockId) { var url = $"{_baseUrl}/api/{_tenantId}/{_projectId}/block/{blockId}/results"; var response = await _httpClient.GetAsync(url); if (response.IsSuccessStatusCode) { var json = await response.Content.ReadAsStringAsync(); return 
JsonDocument.Parse(json); } throw new Exception($"Failed to get results: {response.StatusCode}"); } public async Task<JsonDocument> GetBlockOutputDataAsync(Guid blockId) { var url = $"{_baseUrl}/api/{_tenantId}/{_projectId}/block/{blockId}/output-data"; var response = await _httpClient.GetAsync(url); // Returns 501 with guidance var json = await response.Content.ReadAsStringAsync(); return JsonDocument.Parse(json); } } ``` ## Important Notes - **Results Endpoint:** Currently returns a placeholder with an empty executions list; full execution history retrieval is being developed. - **Output Data:** Block output data is only available through the in-memory project cache. Use the ExecutionController workflow for actual data retrieval. - **Best Practice:** For complete block output, load the project, execute the notebook, and retrieve results via the Execution API. --- ## Get, Update, and Delete Analysis Blocks Manage analysis blocks within notebooks. Blocks are the fundamental analysis units that perform data transformations, calculations, filtering operations, and alerting. ## Connectivity Testing ### Unauthorized Ping **GET** `/api/{tenantId}/{projectId}/block/unauthorized-ping` Test endpoint that does not require authentication. Use this to verify the Block API is accessible. #### Response ``` Ping Successful ``` ### Authenticated Ping **GET** `/api/{tenantId}/{projectId}/block/ping` Authenticated ping endpoint to verify API access for a specific tenant and project. #### Response (200 OK) ``` Ping Successful (tenant id: {tenantId}) ``` #### Response (401 Unauthorized) ``` {error message describing authorization failure} ``` ## Get Block Details **GET** `/api/{tenantId}/{projectId}/block/{blockId}` Retrieves comprehensive information about a specific analysis block including its configuration, execution history, and metadata. 
### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `projectId` | GUID | Yes | The project identifier | | `blockId` | GUID | Yes | The block identifier | ### Response (200 OK) ```json { "blockId": "550e8400-e29b-41d4-a716-446655440000", "notebookId": "660e8400-e29b-41d4-a716-446655440000", "blockType": "Filter", "blockTitle": "Date Range Filter", "blockDescription": "Filters data for the last 30 days", "blockOrder": 0, "configuration": "{\"filterType\": \"dateRange\", \"startDate\": \"2024-01-01\"}", "isDisabled": false, "dateCreated": "2024-01-15T10:30:00Z", "dateModified": "2024-01-15T14:45:00Z", "createdBy": "user@example.com", "modifiedBy": "user@example.com", "lastExecutionDate": "2024-01-15T14:45:00Z", "lastExecutionStatus": "Success", "executionCount": 12 } ``` ### Response Fields | Field | Type | Description | |-------|------|-------------| | `blockId` | GUID | Unique identifier for the block | | `notebookId` | GUID | ID of the notebook containing this block | | `blockType` | string | Type of block (Filter, Calculator, Alert, etc.) 
| | `blockTitle` | string | Display title of the block | | `blockDescription` | string | Description of the block's purpose | | `blockOrder` | integer | Order of the block in the notebook (default: 0) | | `configuration` | string | JSON string containing block settings | | `isDisabled` | boolean | Whether the block is disabled | | `dateCreated` | datetime | When the block was created | | `dateModified` | datetime | When the block was last modified | | `createdBy` | string | User who created the block | | `modifiedBy` | string | User who last modified the block | | `lastExecutionDate` | datetime | When the block was last executed | | `lastExecutionStatus` | string | Status of last execution | | `executionCount` | integer | Number of times block has been executed | ### Error Responses **Not Found (404):** ```json { "Error": "Block not found", "BlockId": "550e8400-e29b-41d4-a716-446655440000" } ``` ## Update Block **PUT** `/api/{tenantId}/{projectId}/block/{blockId}` Updates an existing block's metadata. Preserves execution history while updating the specified fields. ### Request Body ```json { "blockTitle": "Updated Date Filter", "blockDescription": "Filters data for custom date range", "isDisabled": false } ``` ### Request Fields | Field | Type | Required | Description | |-------|------|----------|-------------| | `blockTitle` | string | No | New title for the block | | `blockDescription` | string | No | New description for the block | | `isDisabled` | boolean | No | Whether to disable the block | ### Response (200 OK) Returns the updated block object with the same structure as the GET endpoint. ### Error Responses **Bad Request (400):** ``` Failed to update block. ``` ## Delete Block **DELETE** `/api/{tenantId}/{projectId}/block/{blockId}` Permanently removes a block and all its execution history from the notebook. This operation cannot be undone. 
### Response Codes - `204 No Content` - Block deleted successfully - `400 Bad Request` - Failed to delete block - `401 Unauthorized` - Not authenticated or lacks access - `404 Not Found` - Block not found ## Creating Blocks Block creation is handled through the Notebook API, not the Block API. **POST** `/api/{tenantId}/{projectId}/notebook/{notebookId}/blocks` See [Notebook API](/mindzie_api/notebook/overview) for complete block creation documentation. ## Implementation Examples ### cURL ```bash # Test connectivity (no auth) curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/block/unauthorized-ping" # Test connectivity (authenticated) curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/block/ping" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" # Get block details curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/block/550e8400-e29b-41d4-a716-446655440000" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" # Update block curl -X PUT "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/block/550e8400-e29b-41d4-a716-446655440000" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \ -H "Content-Type: application/json" \ -d '{"blockTitle": "Updated Filter", "isDisabled": false}' # Delete block curl -X DELETE "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/block/550e8400-e29b-41d4-a716-446655440000" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" ``` ### JavaScript/Node.js ```javascript const TENANT_ID = '12345678-1234-1234-1234-123456789012'; const PROJECT_ID = '87654321-4321-4321-4321-210987654321'; const BASE_URL = 'https://your-mindzie-instance.com'; class BlockManager { constructor(token) { this.token = token; this.headers = { 
'Authorization': `Bearer ${token}`, 'Content-Type': 'application/json' }; } async getBlock(blockId) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/block/${blockId}`; const response = await fetch(url, { headers: this.headers }); if (response.ok) { return await response.json(); } else if (response.status === 404) { throw new Error('Block not found'); } else { throw new Error(`Failed to get block: ${response.status}`); } } async updateBlock(blockId, updates) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/block/${blockId}`; const response = await fetch(url, { method: 'PUT', headers: this.headers, body: JSON.stringify(updates) }); if (response.ok) { return await response.json(); } else { throw new Error(`Failed to update block: ${response.status}`); } } async deleteBlock(blockId) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/block/${blockId}`; const response = await fetch(url, { method: 'DELETE', headers: this.headers }); return response.status === 204; } } // Usage const manager = new BlockManager('your-auth-token'); // Get block details const block = await manager.getBlock('block-guid'); console.log(`Block: ${block.blockTitle} (${block.blockType})`); // Update block const updated = await manager.updateBlock('block-guid', { blockTitle: 'Updated Title', isDisabled: false }); // Delete block const deleted = await manager.deleteBlock('block-guid'); ``` ### Python ```python import requests TENANT_ID = '12345678-1234-1234-1234-123456789012' PROJECT_ID = '87654321-4321-4321-4321-210987654321' BASE_URL = 'https://your-mindzie-instance.com' class BlockManager: def __init__(self, token): self.headers = { 'Authorization': f'Bearer {token}', 'Content-Type': 'application/json' } def get_block(self, block_id): """Get block details.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/block/{block_id}' response = requests.get(url, headers=self.headers) if response.ok: return response.json() elif response.status_code == 404: raise Exception('Block not 
found') else: raise Exception(f'Failed to get block: {response.status_code}') def update_block(self, block_id, title=None, description=None, disabled=None): """Update block metadata.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/block/{block_id}' payload = {} if title is not None: payload['blockTitle'] = title if description is not None: payload['blockDescription'] = description if disabled is not None: payload['isDisabled'] = disabled response = requests.put(url, json=payload, headers=self.headers) if response.ok: return response.json() else: raise Exception(f'Failed to update block: {response.status_code}') def delete_block(self, block_id): """Delete a block.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/block/{block_id}' response = requests.delete(url, headers=self.headers) return response.status_code == 204 # Usage manager = BlockManager('your-auth-token') # Get block block = manager.get_block('block-guid') print(f"Block: {block['blockTitle']} ({block['blockType']})") print(f"Execution count: {block['executionCount']}") # Update block updated = manager.update_block('block-guid', title='New Title', disabled=False) print(f"Updated: {updated['blockTitle']}") # Delete block if manager.delete_block('block-guid'): print('Block deleted successfully') ``` ### C# ```csharp using System; using System.Net.Http; using System.Text; using System.Text.Json; using System.Threading.Tasks; public class BlockReturn { public Guid BlockId { get; set; } public Guid NotebookId { get; set; } public string BlockType { get; set; } public string BlockTitle { get; set; } public string BlockDescription { get; set; } public int BlockOrder { get; set; } public string Configuration { get; set; } public bool IsDisabled { get; set; } public DateTime DateCreated { get; set; } public DateTime DateModified { get; set; } public string CreatedBy { get; set; } public string ModifiedBy { get; set; } public DateTime? 
LastExecutionDate { get; set; } public string LastExecutionStatus { get; set; } public int ExecutionCount { get; set; } } public class BlockApiClient { private readonly HttpClient _httpClient; private readonly string _baseUrl; private readonly Guid _tenantId; private readonly Guid _projectId; public BlockApiClient(string baseUrl, Guid tenantId, Guid projectId, string accessToken) { _baseUrl = baseUrl; _tenantId = tenantId; _projectId = projectId; _httpClient = new HttpClient(); _httpClient.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken); } public async Task<BlockReturn> GetBlockAsync(Guid blockId) { var url = $"{_baseUrl}/api/{_tenantId}/{_projectId}/block/{blockId}"; var response = await _httpClient.GetAsync(url); if (response.IsSuccessStatusCode) { var json = await response.Content.ReadAsStringAsync(); return JsonSerializer.Deserialize<BlockReturn>(json, new JsonSerializerOptions { PropertyNameCaseInsensitive = true }); } throw new Exception($"Failed to get block: {response.StatusCode}"); } public async Task<BlockReturn> UpdateBlockAsync(Guid blockId, string title, string description, bool? 
isDisabled) { var url = $"{_baseUrl}/api/{_tenantId}/{_projectId}/block/{blockId}"; var payload = new { blockTitle = title, blockDescription = description, isDisabled = isDisabled }; var content = new StringContent( JsonSerializer.Serialize(payload), Encoding.UTF8, "application/json"); var response = await _httpClient.PutAsync(url, content); if (response.IsSuccessStatusCode) { var json = await response.Content.ReadAsStringAsync(); return JsonSerializer.Deserialize<BlockReturn>(json, new JsonSerializerOptions { PropertyNameCaseInsensitive = true }); } throw new Exception($"Failed to update block: {response.StatusCode}"); } public async Task<bool> DeleteBlockAsync(Guid blockId) { var url = $"{_baseUrl}/api/{_tenantId}/{_projectId}/block/{blockId}"; var response = await _httpClient.DeleteAsync(url); return response.StatusCode == System.Net.HttpStatusCode.NoContent; } } ``` --- # Dashboard API ## Dashboard Management API Retrieve dashboards, access panel configurations, and generate shareable URLs for process mining insights and visualization management. ## Features ### Dashboard Retrieval List and retrieve dashboards with comprehensive metadata and panel counts. [View Dashboards](/mindzie_api/dashboard/management) ### Panel Information Access dashboard panel configurations including layout and visualization settings. [View Panels](/mindzie_api/dashboard/panels) ### Sharing & URLs Generate shareable links and embed URLs for dashboard access. [Share Dashboards](/mindzie_api/dashboard/sharing) ## Available Endpoints ### Connectivity Testing - **GET** `/api/{tenantId}/{projectId}/dashboard/unauthorized-ping` - Public connectivity test (no auth required) - **GET** `/api/{tenantId}/{projectId}/dashboard/ping` - Authenticated connectivity test ### Dashboard Operations Core operations for accessing dashboard resources. 
- **GET** `/api/{tenantId}/{projectId}/dashboard` - List all dashboards in a project - **GET** `/api/{tenantId}/{projectId}/dashboard/{dashboardId}` - Get specific dashboard details ### Panel Operations Access visualization panels within dashboards. - **GET** `/api/{tenantId}/{projectId}/dashboard/{dashboardId}/panels` - Get all panels in a dashboard ### URL Operations Generate shareable URLs and embed codes. - **GET** `/api/{tenantId}/{projectId}/dashboard/{dashboardId}/url` - Get shareable dashboard URLs ## Creating Dashboards Dashboards are created within investigations through the mindzieStudio UI. Dashboard creation requires investigation context and notebook relationships that are managed through the application: 1. Create an investigation via the Investigation API or UI 2. Add dashboard blocks to notebooks within that investigation 3. Dashboards become accessible via this API ## Dashboard Components mindzieStudio dashboards provide powerful visualization capabilities: ### Process Mining Visualizations Charts and graphs for process analysis and discovery. - Process maps and flowcharts - Performance metrics charts - Timeline visualizations ### KPI Dashboards Key performance indicators and business metrics. - Real-time KPI widgets - Trend analysis charts - Comparison dashboards ### Interactive Panels Configurable panels with drag-and-drop positioning. - Flexible layout management - Responsive panel sizing - Custom panel configurations ## Common Use Cases - **Executive Dashboards:** Access high-level KPI dashboards for management reporting - **Operational Monitoring:** View real-time dashboards for process monitoring - **Embedded Insights:** Integrate dashboard panels into external applications - **Stakeholder Sharing:** Generate shareable URLs for collaborative analysis ## Authentication All Dashboard API endpoints (except `unauthorized-ping`) require valid authentication with appropriate permissions for the target project and tenant. 
## Get Started Begin with [Dashboard Management](/mindzie_api/dashboard/management) to learn how to list and retrieve dashboards, then explore [Sharing](/mindzie_api/dashboard/sharing) for URL generation. --- ## List and Retrieve Dashboards Access dashboards that contain visualization panels for process mining insights, KPIs, and analytics. Dashboards are containers for displaying your analytical results in an organized, shareable format. ## Connectivity Testing ### Unauthorized Ping **GET** `/api/{tenantId}/{projectId}/dashboard/unauthorized-ping` Test endpoint that does not require authentication. Use this to verify the Dashboard API is accessible. #### Response ``` Ping Successful ``` ### Authenticated Ping **GET** `/api/{tenantId}/{projectId}/dashboard/ping` Authenticated ping endpoint to verify API access for a specific tenant and project. #### Response (200 OK) ``` Ping Successful (tenant id: {tenantId}) ``` #### Response (401 Unauthorized) ``` {error message describing authorization failure} ``` ## List All Dashboards **GET** `/api/{tenantId}/{projectId}/dashboard` Retrieves a paginated list of all dashboards within the specified project. Each dashboard includes metadata, panel count, and a shareable URL. 
### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `projectId` | GUID | Yes | The project identifier | ### Query Parameters | Parameter | Type | Default | Description | |-----------|------|---------|-------------| | `page` | integer | 1 | Page number for pagination | | `pageSize` | integer | 50 | Number of items per page (max recommended: 100) | ### Response (200 OK) ```json { "dashboards": [ { "dashboardId": "880e8400-e29b-41d4-a716-446655440000", "projectId": "660e8400-e29b-41d4-a716-446655440000", "name": "Process Overview Dashboard", "description": "Main dashboard showing key process metrics", "panelCount": 8, "url": "https://your-instance.com/dashboard/880e8400-e29b-41d4-a716-446655440000", "dateCreated": "2024-01-15T10:30:00Z", "dateModified": "2024-01-20T14:45:00Z", "createdBy": "user@example.com", "modifiedBy": "user@example.com" } ], "totalCount": 25, "page": 1, "pageSize": 50 } ``` ### Response Fields | Field | Type | Description | |-------|------|-------------| | `dashboards` | array | List of dashboard objects | | `totalCount` | integer | Total number of dashboards | | `page` | integer | Current page number | | `pageSize` | integer | Items per page | ### Dashboard Object Fields | Field | Type | Description | |-------|------|-------------| | `dashboardId` | GUID | Unique identifier for the dashboard | | `projectId` | GUID | Project this dashboard belongs to | | `name` | string | Display name of the dashboard | | `description` | string | Description of the dashboard | | `panelCount` | integer | Number of panels in the dashboard | | `url` | string | Shareable URL for the dashboard | | `dateCreated` | datetime | When the dashboard was created | | `dateModified` | datetime | When the dashboard was last modified | | `createdBy` | string | User who created the dashboard | | `modifiedBy` | string | User who last modified the dashboard | ## Get 
Dashboard Details **GET** `/api/{tenantId}/{projectId}/dashboard/{dashboardId}` Retrieves comprehensive information about a specific dashboard including metadata, panel count, and shareable URL. ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `projectId` | GUID | Yes | The project identifier | | `dashboardId` | GUID | Yes | The dashboard identifier | ### Response (200 OK) ```json { "dashboardId": "880e8400-e29b-41d4-a716-446655440000", "projectId": "660e8400-e29b-41d4-a716-446655440000", "name": "Process Overview Dashboard", "description": "Main dashboard showing key process metrics and performance indicators", "panelCount": 8, "url": "https://your-instance.com/dashboard/880e8400-e29b-41d4-a716-446655440000", "dateCreated": "2024-01-15T10:30:00Z", "dateModified": "2024-01-20T14:45:00Z", "createdBy": "user@example.com", "modifiedBy": "user@example.com" } ``` ### Error Responses **Not Found (404):** ```json { "Error": "Dashboard not found", "DashboardId": "880e8400-e29b-41d4-a716-446655440000" } ``` ## Creating Dashboards Dashboard creation is managed through the mindzieStudio UI as it requires investigation context and notebook relationships. See [Dashboard Overview](/mindzie_api/dashboard/overview) for details. 
## Implementation Examples ### cURL ```bash # Test connectivity (no auth) curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/dashboard/unauthorized-ping" # Test connectivity (authenticated) curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/dashboard/ping" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" # List all dashboards curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/dashboard?page=1&pageSize=50" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" # Get specific dashboard curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/dashboard/880e8400-e29b-41d4-a716-446655440000" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" ``` ### JavaScript/Node.js ```javascript const TENANT_ID = '12345678-1234-1234-1234-123456789012'; const PROJECT_ID = '87654321-4321-4321-4321-210987654321'; const BASE_URL = 'https://your-mindzie-instance.com'; class DashboardManager { constructor(token) { this.headers = { 'Authorization': `Bearer ${token}`, 'Content-Type': 'application/json' }; } async getAllDashboards(page = 1, pageSize = 50) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/dashboard?page=${page}&pageSize=${pageSize}`; const response = await fetch(url, { headers: this.headers }); return await response.json(); } async getDashboard(dashboardId) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/dashboard/${dashboardId}`; const response = await fetch(url, { headers: this.headers }); if (response.ok) { return await response.json(); } else if (response.status === 404) { throw new Error('Dashboard not found'); } else { throw new Error(`Failed to get dashboard: ${response.status}`); } } async listAllDashboards() { const allDashboards = []; let page = 1; while (true) { const result = 
await this.getAllDashboards(page); allDashboards.push(...result.dashboards); if (allDashboards.length >= result.totalCount) { break; } page++; } return allDashboards; } } // Usage const manager = new DashboardManager('your-auth-token'); // Get all dashboards const result = await manager.getAllDashboards(); console.log(`Found ${result.totalCount} dashboards`); result.dashboards.forEach(d => { console.log(`- ${d.name}: ${d.panelCount} panels`); console.log(` URL: ${d.url}`); }); ``` ### Python ```python import requests TENANT_ID = '12345678-1234-1234-1234-123456789012' PROJECT_ID = '87654321-4321-4321-4321-210987654321' BASE_URL = 'https://your-mindzie-instance.com' class DashboardManager: def __init__(self, token): self.headers = { 'Authorization': f'Bearer {token}', 'Content-Type': 'application/json' } def get_all_dashboards(self, page=1, page_size=50): """Get paginated list of dashboards.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/dashboard' params = {'page': page, 'pageSize': page_size} response = requests.get(url, headers=self.headers, params=params) return response.json() def get_dashboard(self, dashboard_id): """Get dashboard details.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/dashboard/{dashboard_id}' response = requests.get(url, headers=self.headers) if response.ok: return response.json() elif response.status_code == 404: raise Exception('Dashboard not found') else: raise Exception(f'Failed to get dashboard: {response.status_code}') def list_all_dashboards(self): """Get all dashboards (handling pagination).""" all_dashboards = [] page = 1 while True: result = self.get_all_dashboards(page=page) all_dashboards.extend(result['dashboards']) if len(all_dashboards) >= result['totalCount']: break page += 1 return all_dashboards # Usage manager = DashboardManager('your-auth-token') # Get all dashboards dashboards = manager.get_all_dashboards() print(f"Total dashboards: {dashboards['totalCount']}") for dashboard in dashboards['dashboards']: 
print(f"\nDashboard: {dashboard['name']}") print(f"  Panels: {dashboard['panelCount']}") print(f"  URL: {dashboard['url']}") ``` ### C# ```csharp using System; using System.Collections.Generic; using System.Net.Http; using System.Text.Json; using System.Threading.Tasks; public class DashboardListReturn { public List<DashboardReturn> Dashboards { get; set; } public int TotalCount { get; set; } public int Page { get; set; } public int PageSize { get; set; } } public class DashboardReturn { public Guid DashboardId { get; set; } public Guid ProjectId { get; set; } public string Name { get; set; } public string Description { get; set; } public int PanelCount { get; set; } public string Url { get; set; } public DateTime DateCreated { get; set; } public DateTime DateModified { get; set; } public string CreatedBy { get; set; } public string ModifiedBy { get; set; } } public class DashboardApiClient { private readonly HttpClient _httpClient; private readonly string _baseUrl; private readonly Guid _tenantId; private readonly Guid _projectId; public DashboardApiClient(string baseUrl, Guid tenantId, Guid projectId, string accessToken) { _baseUrl = baseUrl; _tenantId = tenantId; _projectId = projectId; _httpClient = new HttpClient(); _httpClient.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken); } public async Task<DashboardListReturn> GetAllDashboardsAsync(int page = 1, int pageSize = 50) { var url = $"{_baseUrl}/api/{_tenantId}/{_projectId}/dashboard?page={page}&pageSize={pageSize}"; var response = await _httpClient.GetAsync(url); if (response.IsSuccessStatusCode) { var json = await response.Content.ReadAsStringAsync(); return JsonSerializer.Deserialize<DashboardListReturn>(json, new JsonSerializerOptions { PropertyNameCaseInsensitive = true }); } throw new Exception($"Failed to get dashboards: {response.StatusCode}"); } public async Task<DashboardReturn> GetDashboardAsync(Guid dashboardId) { var url = $"{_baseUrl}/api/{_tenantId}/{_projectId}/dashboard/{dashboardId}"; var response = 
await _httpClient.GetAsync(url); if (response.IsSuccessStatusCode) { var json = await response.Content.ReadAsStringAsync(); return JsonSerializer.Deserialize<DashboardReturn>(json, new JsonSerializerOptions { PropertyNameCaseInsensitive = true }); } else if (response.StatusCode == System.Net.HttpStatusCode.NotFound) { throw new Exception($"Dashboard {dashboardId} not found"); } throw new Exception($"Failed to get dashboard: {response.StatusCode}"); } } ``` --- ## Access Dashboard Panel Configurations View and create visualization panels in dashboards including their layout, positioning, and configuration settings. ## Get Dashboard Panels **GET** `/api/{tenantId}/{projectId}/dashboard/{dashboardId}/panels` Retrieves all visualization panels configured in a dashboard, including panel types, positions, dimensions, and configuration settings. ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `projectId` | GUID | Yes | The project identifier | | `dashboardId` | GUID | Yes | The dashboard identifier | ### Response (200 OK) ```json { "dashboardId": "880e8400-e29b-41d4-a716-446655440000", "panels": [ { "panelId": "990e8400-e29b-41d4-a716-446655440000", "name": "Process Flow Chart", "panelType": "DashboardPanelProcessMap", "position": "Row: 1, Col: 1", "width": 6, "height": 4, "configuration": "{\"dataSource\": \"MainProcess\", \"visualization\": \"flow\"}" }, { "panelId": "aa0e8400-e29b-41d4-a716-446655440000", "name": "KPI Summary", "panelType": "DashboardPanelSingleValue", "position": "Row: 1, Col: 7", "width": 3, "height": 2, "configuration": "{\"metric\": \"avgCycleTime\", \"format\": \"duration\"}" } ] } ``` ### Response Fields | Field | Type | Description | |-------|------|-------------| | `dashboardId` | GUID | The dashboard containing these panels | | `panels` | array | List of panel objects | ### Panel Object Fields | Field | Type | Description | 
|-------|------|-------------| | `panelId` | GUID | Unique identifier for the panel | | `name` | string | Display title of the panel | | `panelType` | string | Type of visualization (see Panel Types section) | | `position` | string | Grid position as "Row: X, Col: Y" | | `width` | integer | Panel width in grid units | | `height` | integer | Panel height in grid units | | `configuration` | string | JSON string containing panel-specific settings | ### Error Responses **Not Found (404):** ```json { "Error": "Dashboard not found", "DashboardId": "880e8400-e29b-41d4-a716-446655440000" } ``` ## Create Dashboard Panel **POST** `/api/{tenantId}/{projectId}/dashboard/{dashboardId}/panel` Creates a new visualization panel in a dashboard. The API automatically creates the appropriate selector block based on the panel type and settings provided. ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `projectId` | GUID | Yes | The project identifier | | `dashboardId` | GUID | Yes | The dashboard identifier | ### Request Body ```json { "title": "string (required)", "description": "string (optional)", "panelType": "string (required)", "blockId": "GUID (required for data panels)", "row": "integer (required, 0-based)", "column": "integer (required, 0-based)", "width": "integer (required, 1-12)", "height": "integer (required, 1-20)", "settings": "string (optional, JSON for selector configuration)" } ``` ### Request Body Fields | Field | Type | Required | Description | |-------|------|----------|-------------| | `title` | string | Yes | Display title of the panel | | `description` | string | No | Optional description | | `panelType` | string | Yes | Panel type class name (see Panel Types section) | | `blockId` | GUID | Yes* | Calculator block ID to display (*not required for DashboardPanelDashboardNote) | | `row` | integer | Yes | Row position (0-based) | | `column` | 
integer | Yes | Column position (0-based) | | `width` | integer | Yes | Panel width in grid units (1-12) | | `height` | integer | Yes | Panel height in grid units (1-20) | | `settings` | string | No | JSON string with selector configuration | ### Response (201 Created) ```json { "panelId": "5b54172c-b15c-4dd9-bab6-6cfceb7389df", "title": "My KPI Panel", "dashboardPanelClassName": "DashboardPanelSingleValue", "row": 0, "column": 0, "width": 2, "height": 1, "blockId": "db0dd1b9-63b5-4754-92b1-2764f4a3fb68" } ``` **Note:** The `blockId` in the response is the SELECTOR block ID, not the original calculator block ID. The API automatically creates a selector block that wraps the calculator. ### Error Responses **Bad Request (400) - Missing Block:** ```json { "Error": "VALIDATION_ERROR", "Message": "BlockId is required for panel type 'DashboardPanelSingleValue'" } ``` **Bad Request (400) - Invalid Panel Type:** ```json { "Error": "VALIDATION_ERROR", "Message": "Invalid panel type 'InvalidType'. Valid types: DashboardPanelSingleValue, DashboardPanelCalculator, ..." 
} ``` **Bad Request (400) - Block Not Found:** ```json { "Error": "VALIDATION_ERROR", "Message": "Block 'guid' does not exist" } ``` ## Panel Types mindzieStudio dashboards support the following panel types: | Panel Type Class | Selector Created | Description | |------------------|------------------|-------------| | `DashboardPanelCalculator` | SelectorFullCalculator | Full calculator output (process maps, trends, charts) | | `DashboardPanelSingleValue` | SelectorSingleValueFromLabel | Single value / KPI card | | `DashboardPanelHorizontalBarChart` | SelectorMultiColumns | Horizontal bar chart, ranked lists | | `DashboardPanelDataTable` | SelectorFullCalculator | Data table display | | `DashboardPanelProcessMap` | SelectorFullCalculator | Process map visualization | | `DashboardPanelDashboardNote` | (none) | Text/note panel (no blockId required) | | `DashboardPanelRecommendationSentence` | (none) | AI recommendation display | ## Panel Settings by Type ### DashboardPanelSingleValue (KPI Cards) When `panelType` is `DashboardPanelSingleValue`, the settings configure how to extract a single value from a calculator's output: ```json { "tableIndex": 0, "labelColumnName": "Name", "labelName": "Total Case Count", "valueColumnName": "Value", "formatText": "N0" } ``` | Setting | Type | Description | |---------|------|-------------| | `tableIndex` | integer | Which output table to use (usually 0) | | `labelColumnName` | string | Column containing labels (usually "Name") | | `labelName` | string | The label to find (e.g., "Total Case Count") | | `valueColumnName` | string | Column containing the value (usually "Value") | | `formatText` | string | .NET format string (N0, F2, P0, etc.) 
| **Best used with:** CalculatorDataInformation, CalculatorOverview ### DashboardPanelHorizontalBarChart (Bar Charts) When `panelType` is `DashboardPanelHorizontalBarChart`, include `columnNames` in settings to trigger SelectorMultiColumns: ```json { "tableIndex": 0, "columnNames": ["ActivityName", "Count"], "sortColumnName": "Count", "sortAscending": false, "maxRows": 10 } ``` | Setting | Type | Description | |---------|------|-------------| | `tableIndex` | integer | Which output table to use | | `columnNames` | array | Columns to include in chart (MUST match calculator output) | | `sortColumnName` | string | Column to sort by | | `sortAscending` | boolean | Sort direction | | `maxRows` | integer | Maximum rows to display | **IMPORTANT:** Column names must exactly match the calculator's output columns. For example, `CalculatorActivityFrequency` uses the dataset's activity column name (e.g., "ActivityName"), not a generic "Activity". **Best used with:** CalculatorActivityFrequency, CalculatorResourceFrequency ### DashboardPanelCalculator (Full Visualizations) For full calculator output (process maps, trends), no special settings are required: ```json {} ``` Or omit the settings parameter entirely. ## Panel Architecture Dashboard panels don't connect directly to calculator blocks. Instead, they use a three-layer architecture: ``` Panel.InputBlockId -> Selector Block -> Calculator Block (via parent_id) ``` **Why Selectors?** Selector blocks define HOW to display calculator output: - **SelectorFullCalculator**: Display entire output with visualization settings - **SelectorSingleValueFromLabel**: Extract one value by label lookup - **SelectorMultiColumns**: Extract specific columns with sorting/limiting The API automatically creates the appropriate selector block when you create a panel pointing to a calculator. 
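Because the `settings` field is a JSON string rather than a JSON object, it is easy to get the escaping wrong when building requests by hand. As a convenience, the small Python helpers below (illustrative only, not part of any official SDK) build the settings strings for the two selector-driven panel types documented above:

```python
import json

def single_value_settings(label_name, table_index=0, label_column="Name",
                          value_column="Value", format_text="N0"):
    """Settings string for a DashboardPanelSingleValue (KPI card) panel."""
    return json.dumps({
        "tableIndex": table_index,
        "labelColumnName": label_column,
        "labelName": label_name,
        "valueColumnName": value_column,
        "formatText": format_text,
    })

def bar_chart_settings(column_names, sort_column, table_index=0,
                       sort_ascending=False, max_rows=10):
    """Settings string for a DashboardPanelHorizontalBarChart panel.

    column_names must exactly match the calculator's output columns.
    """
    return json.dumps({
        "tableIndex": table_index,
        "columnNames": column_names,
        "sortColumnName": sort_column,
        "sortAscending": sort_ascending,
        "maxRows": max_rows,
    })
```

Pass the returned string directly as the `settings` field of the create-panel request body; `json.dumps` takes care of producing a properly escaped JSON string.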
## Panel Layout System Dashboard panels use a grid-based layout system: - **Grid Units:** 12-column grid system - **Position:** Row and column coordinates (0-based) - **Size:** Width (1-12 columns), Height (1-20 rows) - **Responsive:** Automatic scaling on different screen sizes ## Implementation Examples ### Get Dashboard Panels #### cURL ```bash curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/dashboard/880e8400-e29b-41d4-a716-446655440000/panels" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" ``` ### Create KPI Panel (Single Value) ```bash curl -X POST "https://your-mindzie-instance.com/api/{tenantId}/{projectId}/dashboard/{dashboardId}/panel" \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "title": "Total Cases", "panelType": "DashboardPanelSingleValue", "blockId": "calculator-data-information-guid", "row": 0, "column": 0, "width": 2, "height": 1, "settings": "{\"tableIndex\":0,\"labelColumnName\":\"Name\",\"labelName\":\"Total Case Count\",\"valueColumnName\":\"Value\",\"formatText\":\"N0\"}" }' ``` ### Create Horizontal Bar Chart ```bash curl -X POST "https://your-mindzie-instance.com/api/{tenantId}/{projectId}/dashboard/{dashboardId}/panel" \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "title": "Activity Distribution", "panelType": "DashboardPanelHorizontalBarChart", "blockId": "calculator-activity-frequency-guid", "row": 1, "column": 0, "width": 6, "height": 3, "settings": "{\"tableIndex\":0,\"columnNames\":[\"ActivityName\",\"Count\"],\"sortColumnName\":\"Count\",\"sortAscending\":false,\"maxRows\":10}" }' ``` ### Create Full Calculator Panel ```bash curl -X POST "https://your-mindzie-instance.com/api/{tenantId}/{projectId}/dashboard/{dashboardId}/panel" \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "title": "Case Count", "panelType": 
"DashboardPanelCalculator", "blockId": "calculator-case-count-guid", "row": 0, "column": 2, "width": 2, "height": 1 }' ``` ### JavaScript/Node.js ```javascript const TENANT_ID = '12345678-1234-1234-1234-123456789012'; const PROJECT_ID = '87654321-4321-4321-4321-210987654321'; const BASE_URL = 'https://your-mindzie-instance.com'; async function getDashboardPanels(dashboardId, token) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/dashboard/${dashboardId}/panels`; const response = await fetch(url, { headers: { 'Authorization': `Bearer ${token}`, 'Content-Type': 'application/json' } }); if (response.ok) { return await response.json(); } else if (response.status === 404) { throw new Error('Dashboard not found'); } else { throw new Error(`Failed to get panels: ${response.status}`); } } async function createDashboardPanel(dashboardId, panelData, token) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/dashboard/${dashboardId}/panel`; const response = await fetch(url, { method: 'POST', headers: { 'Authorization': `Bearer ${token}`, 'Content-Type': 'application/json' }, body: JSON.stringify(panelData) }); if (response.ok) { return await response.json(); } else { const error = await response.json(); throw new Error(error.Message || `Failed to create panel: ${response.status}`); } } // Usage - Get panels const panels = await getDashboardPanels('dashboard-guid', 'your-auth-token'); console.log(`Dashboard: ${panels.dashboardId}`); console.log(`Panels: ${panels.panels.length}`); // Usage - Create KPI panel const newPanel = await createDashboardPanel('dashboard-guid', { title: 'Total Cases', panelType: 'DashboardPanelSingleValue', blockId: 'calculator-data-information-guid', row: 0, column: 0, width: 2, height: 1, settings: JSON.stringify({ tableIndex: 0, labelColumnName: 'Name', labelName: 'Total Case Count', valueColumnName: 'Value', formatText: 'N0' }) }, 'your-auth-token'); console.log(`Created panel: ${newPanel.panelId}`); ``` ### Python ```python import 
requests import json TENANT_ID = '12345678-1234-1234-1234-123456789012' PROJECT_ID = '87654321-4321-4321-4321-210987654321' BASE_URL = 'https://your-mindzie-instance.com' def get_dashboard_panels(dashboard_id, token): """Get all panels in a dashboard.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/dashboard/{dashboard_id}/panels' headers = { 'Authorization': f'Bearer {token}', 'Content-Type': 'application/json' } response = requests.get(url, headers=headers) if response.ok: return response.json() elif response.status_code == 404: raise Exception('Dashboard not found') else: raise Exception(f'Failed to get panels: {response.status_code}') def create_dashboard_panel(dashboard_id, panel_data, token): """Create a new panel in a dashboard.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/dashboard/{dashboard_id}/panel' headers = { 'Authorization': f'Bearer {token}', 'Content-Type': 'application/json' } response = requests.post(url, headers=headers, json=panel_data) if response.ok: return response.json() else: error = response.json() raise Exception(error.get('Message', f'Failed to create panel: {response.status_code}')) # Usage - Get panels panels = get_dashboard_panels('dashboard-guid', 'your-auth-token') print(f"Dashboard: {panels['dashboardId']}") print(f"Panel count: {len(panels['panels'])}") # Usage - Create KPI panel settings = { 'tableIndex': 0, 'labelColumnName': 'Name', 'labelName': 'Total Case Count', 'valueColumnName': 'Value', 'formatText': 'N0' } new_panel = create_dashboard_panel('dashboard-guid', { 'title': 'Total Cases', 'panelType': 'DashboardPanelSingleValue', 'blockId': 'calculator-data-information-guid', 'row': 0, 'column': 0, 'width': 2, 'height': 1, 'settings': json.dumps(settings) }, 'your-auth-token') print(f"Created panel: {new_panel['panelId']}") ``` ### C# ```csharp using System; using System.Collections.Generic; using System.Net.Http; using System.Text; using System.Text.Json; using System.Threading.Tasks; public class 
DashboardPanelsResponse { public Guid DashboardId { get; set; } public List<PanelInfo> Panels { get; set; } } public class PanelInfo { public Guid PanelId { get; set; } public string Name { get; set; } public string PanelType { get; set; } public string Position { get; set; } public int Width { get; set; } public int Height { get; set; } public string Configuration { get; set; } } public class CreatePanelRequest { public string Title { get; set; } public string Description { get; set; } public string PanelType { get; set; } public Guid BlockId { get; set; } public int Row { get; set; } public int Column { get; set; } public int Width { get; set; } public int Height { get; set; } public string Settings { get; set; } } public class CreatePanelResponse { public Guid PanelId { get; set; } public string Title { get; set; } public string DashboardPanelClassName { get; set; } public int Row { get; set; } public int Column { get; set; } public int Width { get; set; } public int Height { get; set; } public Guid BlockId { get; set; } } public class DashboardPanelClient { private readonly HttpClient _httpClient; private readonly string _baseUrl; private readonly Guid _tenantId; private readonly Guid _projectId; private readonly JsonSerializerOptions _jsonOptions; public DashboardPanelClient(string baseUrl, Guid tenantId, Guid projectId, string accessToken) { _baseUrl = baseUrl; _tenantId = tenantId; _projectId = projectId; _httpClient = new HttpClient(); _httpClient.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken); _jsonOptions = new JsonSerializerOptions { PropertyNameCaseInsensitive = true }; } public async Task<DashboardPanelsResponse> GetDashboardPanelsAsync(Guid dashboardId) { var url = $"{_baseUrl}/api/{_tenantId}/{_projectId}/dashboard/{dashboardId}/panels"; var response = await _httpClient.GetAsync(url); if (response.IsSuccessStatusCode) { var json = await response.Content.ReadAsStringAsync(); return JsonSerializer.Deserialize<DashboardPanelsResponse>(json, 
_jsonOptions); } else if (response.StatusCode == System.Net.HttpStatusCode.NotFound) { throw new Exception($"Dashboard {dashboardId} not found"); } throw new Exception($"Failed to get panels: {response.StatusCode}"); } public async Task<CreatePanelResponse> CreatePanelAsync(Guid dashboardId, CreatePanelRequest request) { var url = $"{_baseUrl}/api/{_tenantId}/{_projectId}/dashboard/{dashboardId}/panel"; var json = JsonSerializer.Serialize(request, _jsonOptions); var content = new StringContent(json, Encoding.UTF8, "application/json"); var response = await _httpClient.PostAsync(url, content); if (response.IsSuccessStatusCode) { var responseJson = await response.Content.ReadAsStringAsync(); return JsonSerializer.Deserialize<CreatePanelResponse>(responseJson, _jsonOptions); } var errorJson = await response.Content.ReadAsStringAsync(); throw new Exception($"Failed to create panel: {errorJson}"); } } // Usage var client = new DashboardPanelClient( "https://your-mindzie-instance.com", Guid.Parse("12345678-1234-1234-1234-123456789012"), Guid.Parse("87654321-4321-4321-4321-210987654321"), "your-access-token"); // Get panels var panels = await client.GetDashboardPanelsAsync(Guid.Parse("dashboard-guid")); Console.WriteLine($"Dashboard: {panels.DashboardId}"); foreach (var panel in panels.Panels) { Console.WriteLine($"- {panel.Name} ({panel.PanelType})"); } // Create KPI panel var newPanel = await client.CreatePanelAsync(Guid.Parse("dashboard-guid"), new CreatePanelRequest { Title = "Total Cases", PanelType = "DashboardPanelSingleValue", BlockId = Guid.Parse("calculator-data-information-guid"), Row = 0, Column = 0, Width = 2, Height = 1, Settings = JsonSerializer.Serialize(new { tableIndex = 0, labelColumnName = "Name", labelName = "Total Case Count", valueColumnName = "Value", formatText = "N0" }) }); Console.WriteLine($"Created panel: {newPanel.PanelId}"); ``` ## Important Notes ### API Capabilities - **Read Access:** Use GET endpoint to retrieve panel configurations - **Create Access:** Use POST endpoint to create 
new panels with automatic selector block creation - **Modify Access:** Panel modification is currently done through the mindzieStudio UI - **Delete Access:** Panel deletion is currently done through the mindzieStudio UI ### Configuration Parsing The `configuration` field contains a JSON string that must be parsed to access settings. ### Layout System Panel positions use a row/column grid system with flexible sizing. Row and column values are 0-based when creating panels. --- ## Generate Shareable Dashboard URLs Generate shareable links and embed URLs for dashboard access. Share dashboards with stakeholders or embed them in external applications. ## Get Dashboard URLs **GET** `/api/{tenantId}/{projectId}/dashboard/{dashboardId}/url` Generates shareable URLs for a dashboard, including standard view URLs and embed URLs for iframe integration. ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `projectId` | GUID | Yes | The project identifier | | `dashboardId` | GUID | Yes | The dashboard identifier | ### Response (200 OK) ```json { "dashboardId": "880e8400-e29b-41d4-a716-446655440000", "url": "https://your-instance.com/dashboard/880e8400-e29b-41d4-a716-446655440000", "embedUrl": "https://your-instance.com/embed/dashboard/880e8400-e29b-41d4-a716-446655440000" } ``` ### Response Fields | Field | Type | Description | |-------|------|-------------| | `dashboardId` | GUID | The dashboard identifier | | `url` | string | Standard dashboard URL (requires authentication) | | `embedUrl` | string | Embed URL for iframe integration | ### Error Responses **Not Found (404):** ```json { "Error": "Dashboard not found", "DashboardId": "880e8400-e29b-41d4-a716-446655440000" } ``` ## Dashboard Embedding mindzieStudio dashboards can be embedded in external applications using iframe technology. ### Basic Embedding ```html <iframe src="https://your-instance.com/embed/dashboard/880e8400-e29b-41d4-a716-446655440000" width="800" height="600" frameborder="0"></iframe> ``` ### Responsive Embedding ```html <div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;"> <iframe src="https://your-instance.com/embed/dashboard/880e8400-e29b-41d4-a716-446655440000" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" frameborder="0"></iframe> </div>
``` ## URL Types ### Standard URL The standard dashboard URL requires user authentication: - Users must log in to mindzieStudio to view the dashboard - Provides full dashboard interactivity - Suitable for internal team sharing ### Embed URL The embed URL is designed for iframe integration: - Simplified interface optimized for embedding - May require additional authentication configuration - Suitable for portals and external applications ## Implementation Examples ### cURL ```bash # Get dashboard URLs curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/dashboard/880e8400-e29b-41d4-a716-446655440000/url" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" ``` ### JavaScript/Node.js ```javascript const TENANT_ID = '12345678-1234-1234-1234-123456789012'; const PROJECT_ID = '87654321-4321-4321-4321-210987654321'; const BASE_URL = 'https://your-mindzie-instance.com'; async function getDashboardUrls(dashboardId, token) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/dashboard/${dashboardId}/url`; const response = await fetch(url, { headers: { 'Authorization': `Bearer ${token}`, 'Content-Type': 'application/json' } }); if (response.ok) { return await response.json(); } else if (response.status === 404) { throw new Error('Dashboard not found'); } else { throw new Error(`Failed to get URLs: ${response.status}`); } } // Generate embed code function generateEmbedCode(embedUrl, width = '100%', height = 600) { return `<iframe src="${embedUrl}" width="${width}" height="${height}" frameborder="0"></iframe>`; } // Usage const urls = await getDashboardUrls('dashboard-guid', 'your-auth-token'); console.log(`Dashboard URL: ${urls.url}`); console.log(`Embed URL: ${urls.embedUrl}`); console.log('\nEmbed code:'); console.log(generateEmbedCode(urls.embedUrl)); ``` ### Python ```python import requests TENANT_ID = '12345678-1234-1234-1234-123456789012' PROJECT_ID = '87654321-4321-4321-4321-210987654321' BASE_URL = 'https://your-mindzie-instance.com' def get_dashboard_urls(dashboard_id, token): 
"""Get shareable URLs for a dashboard.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/dashboard/{dashboard_id}/url' headers = { 'Authorization': f'Bearer {token}', 'Content-Type': 'application/json' } response = requests.get(url, headers=headers) if response.ok: return response.json() elif response.status_code == 404: raise Exception('Dashboard not found') else: raise Exception(f'Failed to get URLs: {response.status_code}') def generate_embed_code(embed_url, width='100%', height=600): """Generate HTML embed code for a dashboard.""" return f'''<iframe src="{embed_url}" width="{width}" height="{height}" frameborder="0"></iframe>''' # Usage urls = get_dashboard_urls('dashboard-guid', 'your-auth-token') print(f"Dashboard URL: {urls['url']}") print(f"Embed URL: {urls['embedUrl']}") print('\nEmbed code:') print(generate_embed_code(urls['embedUrl'])) ``` ### C# ```csharp using System; using System.Net.Http; using System.Text.Json; using System.Threading.Tasks; public class DashboardUrlResponse { public Guid DashboardId { get; set; } public string Url { get; set; } public string EmbedUrl { get; set; } } public class DashboardSharingClient { private readonly HttpClient _httpClient; private readonly string _baseUrl; private readonly Guid _tenantId; private readonly Guid _projectId; public DashboardSharingClient(string baseUrl, Guid tenantId, Guid projectId, string accessToken) { _baseUrl = baseUrl; _tenantId = tenantId; _projectId = projectId; _httpClient = new HttpClient(); _httpClient.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken); } public async Task<DashboardUrlResponse> GetDashboardUrlsAsync(Guid dashboardId) { var url = $"{_baseUrl}/api/{_tenantId}/{_projectId}/dashboard/{dashboardId}/url"; var response = await _httpClient.GetAsync(url); if (response.IsSuccessStatusCode) { var json = await response.Content.ReadAsStringAsync(); return JsonSerializer.Deserialize<DashboardUrlResponse>(json, new JsonSerializerOptions { PropertyNameCaseInsensitive = true }); } else if (response.StatusCode == System.Net.HttpStatusCode.NotFound) { 
throw new Exception($"Dashboard {dashboardId} not found"); } throw new Exception($"Failed to get URLs: {response.StatusCode}"); } public string GenerateEmbedCode(string embedUrl, string width = "100%", int height = 600) { return $@"<iframe src=""{embedUrl}"" width=""{width}"" height=""{height}"" frameborder=""0""></iframe>"; } } // Usage var client = new DashboardSharingClient( "https://your-mindzie-instance.com", Guid.Parse("12345678-1234-1234-1234-123456789012"), Guid.Parse("87654321-4321-4321-4321-210987654321"), "your-access-token"); var urls = await client.GetDashboardUrlsAsync(Guid.Parse("dashboard-guid")); Console.WriteLine($"Dashboard URL: {urls.Url}"); Console.WriteLine($"Embed URL: {urls.EmbedUrl}"); Console.WriteLine("\nEmbed code:"); Console.WriteLine(client.GenerateEmbedCode(urls.EmbedUrl)); ``` ## Best Practices - **Authentication:** Standard URLs require user authentication. Plan your sharing strategy accordingly. - **Embedding:** Use embed URLs when integrating dashboards into external applications or portals. - **Responsive Design:** Use responsive iframe techniques for mobile-friendly embedding. - **Security:** Consider your organization's security policies when sharing dashboard URLs externally. ## Important Notes - **Authentication Required:** Both URL types may require authentication depending on your security configuration. - **Access Control:** Users accessing shared URLs must have appropriate permissions for the tenant and project. - **Public Sharing:** Extended public sharing features (password protection, expiration, etc.) are managed through the mindzieStudio UI. --- # Enrichment API **Data Enrichment API** Enhance your datasets with AI-powered enrichments, custom pipelines, and integrated Python notebooks for advanced analytics. ## Features ### Enrichment Pipelines Build and manage data enrichment pipelines. [View Pipelines](/mindzie_api/enrichment/pipelines) ### Pipeline Execution Execute enrichment pipelines on your datasets. 
[Execute Pipelines](/mindzie_api/enrichment/execution) ### Python Notebooks Use Jupyter notebooks for custom enrichments. [View Notebooks](/mindzie_api/enrichment/notebooks) ## Available Endpoints ### Enrichment Management Core operations for managing enrichment pipelines and configurations. | Method | Endpoint | Description | |--------|----------|-------------| | GET | `/api/{tenantId}/{projectId}/enrichments` | List all enrichment pipelines | | POST | `/api/{tenantId}/{projectId}/enrichment` | Create new enrichment pipeline | | GET | `/api/{tenantId}/{projectId}/enrichment/{enrichmentId}` | Get enrichment details | | PUT | `/api/{tenantId}/{projectId}/enrichment/{enrichmentId}` | Update enrichment configuration | | DELETE | `/api/{tenantId}/{projectId}/enrichment/{enrichmentId}` | Delete enrichment pipeline | ### Pipeline Execution Execute enrichment pipelines and monitor processing status. | Method | Endpoint | Description | |--------|----------|-------------| | POST | `/api/{tenantId}/{projectId}/enrichment/{enrichmentId}/execute` | Execute enrichment pipeline | | GET | `/api/{tenantId}/{projectId}/enrichment/{enrichmentId}/status` | Get execution status | | GET | `/api/{tenantId}/{projectId}/enrichment/{enrichmentId}/results` | Get enrichment results | ### Notebook Integration Manage Python notebooks for custom enrichment logic. | Method | Endpoint | Description | |--------|----------|-------------| | GET | `/api/{tenantId}/{projectId}/enrichment/notebooks` | List available notebooks | | POST | `/api/{tenantId}/{projectId}/enrichment/notebook/execute` | Execute notebook enrichment | ## Enrichment Types mindzieStudio supports various types of data enrichment for enhanced process mining analysis: ### AI-Powered Enrichments Leverage artificial intelligence for intelligent data enhancement. - Activity classification - Anomaly detection - Pattern recognition - Predictive insights ### Statistical Enrichments Add calculated metrics and statistical insights. 
- Duration calculations - Frequency analysis - Performance indicators - Trend analysis ### Business Rules Apply custom business logic and validation rules. - Compliance checking - Business rule validation - Data quality assessment - Custom transformations ### External Integration Enrich data with external system information. - ERP data lookup - CRM integration - Third-party APIs - Master data enrichment ## Pipeline Configuration Understanding enrichment pipeline structure and configuration options: ### Basic Pipeline Structure ```json { "enrichmentId": "enrich-550e8400-e29b-41d4-a716-446655440000", "name": "Process Performance Enrichment", "description": "Calculate KPIs and performance metrics", "type": "statistical", "inputDataset": "dataset-660e8400-e29b-41d4-a716-446655440000", "steps": [ { "stepId": "step-001", "type": "duration_calculation", "config": { "fromActivity": "Order Created", "toActivity": "Order Completed", "outputColumn": "TotalDuration" } }, { "stepId": "step-002", "type": "frequency_analysis", "config": { "groupBy": "Activity", "outputColumn": "ActivityFrequency" } } ], "schedule": { "enabled": true, "frequency": "daily", "time": "02:00" } } ``` ## Common Use Cases - **Process Intelligence:** Add AI-powered insights and pattern recognition to event logs - **Performance Analysis:** Calculate KPIs, durations, and performance metrics automatically - **Data Quality:** Validate and clean process data using business rules - **Compliance Monitoring:** Check adherence to business rules and regulations - **Predictive Analytics:** Generate predictions for process outcomes and bottlenecks - **External Context:** Enrich process data with information from other business systems > **Note:** All Enrichment API endpoints require valid authentication with appropriate permissions for the target project and tenant. 
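Putting the execution endpoints from the table above together, a run-and-poll workflow can be sketched in Python as below. This is illustrative only: the `status` field name and its terminal values (`Success`, `Failed`) are assumptions about the status response shape, so check them against your instance before relying on this pattern.

```python
import time

BASE_URL = "https://your-mindzie-instance.com"

def enrichment_url(tenant_id, project_id, enrichment_id, action=""):
    """Build an Enrichment API URL; action is '', 'execute', 'status', or 'results'."""
    url = f"{BASE_URL}/api/{tenant_id}/{project_id}/enrichment/{enrichment_id}"
    return f"{url}/{action}" if action else url

def run_enrichment(tenant_id, project_id, enrichment_id, token,
                   poll_interval=5, timeout=300):
    """Trigger an enrichment pipeline, then poll its status until it finishes."""
    import requests  # imported lazily so the URL helper stays dependency-free
    headers = {"Authorization": f"Bearer {token}"}
    # Start the pipeline
    requests.post(enrichment_url(tenant_id, project_id, enrichment_id, "execute"),
                  headers=headers).raise_for_status()
    # Poll until a terminal status or the timeout
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = requests.get(enrichment_url(tenant_id, project_id, enrichment_id, "status"),
                              headers=headers).json()
        # NOTE: the "status" key and its values are assumed, not documented here
        if status.get("status") in ("Success", "Failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"Enrichment {enrichment_id} did not finish within {timeout}s")
```

Once a terminal status is reached, fetch the output from the `results` endpoint with the same URL helper.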
> **Get Started:** Begin with [Pipeline Management](/mindzie_api/enrichment/pipelines) to create enrichment pipelines, then explore [Pipeline Execution](/mindzie_api/enrichment/execution) for running enrichments on your datasets. --- **Build Data Enrichment Workflows** Create and manage enrichment pipelines to transform and enhance your process mining datasets. ## Get Pipeline Details **GET** `/api/{tenantId}/{projectId}/enrichment/pipeline/{pipelineId}` Retrieves comprehensive information about a specific enrichment pipeline including its stages, configuration, and execution metadata. ### Parameters | Parameter | Type | Location | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Path | The tenant identifier | | `projectId` | GUID | Path | The project identifier | | `pipelineId` | GUID | Path | The pipeline identifier | ### Response ```json { "pipelineId": "770e8400-e29b-41d4-a716-446655440000", "projectId": "660e8400-e29b-41d4-a716-446655440000", "pipelineName": "Process Mining Data Enrichment", "pipelineDescription": "Enriches event logs with additional attributes and calculations", "status": "Active", "stages": [ { "stageId": "stage-001", "stageName": "Data Validation", "stageType": "Validation", "order": 1, "configuration": { "validateCaseId": true, "validateTimestamps": true, "requireActivityNames": true } }, { "stageId": "stage-002", "stageName": "Time Enrichment", "stageType": "TimeCalculation", "order": 2, "configuration": { "addDayOfWeek": true, "addBusinessHours": true, "timezoneId": "UTC" } } ], "triggers": { "automatic": true, "schedule": "0 2 * * *", "onDataUpdate": true }, "dateCreated": "2024-01-15T10:30:00Z", "dateModified": "2024-01-20T14:45:00Z", "createdBy": "user123", "lastExecutionDate": "2024-01-20T02:00:00Z", "lastExecutionStatus": "Success", "executionCount": 45 } ``` ## List All Pipelines **GET** `/api/{tenantId}/{projectId}/enrichment/pipelines` Retrieves a list of all enrichment pipelines in the project 
with basic metadata and status information. ### Query Parameters | Parameter | Type | Description | |-----------|------|-------------| | `status` | string | Filter by pipeline status: Active, Inactive, Failed | | `page` | integer | Page number for pagination (default: 1) | | `pageSize` | integer | Number of items per page (default: 20, max: 100) | ### Response ```json { "pipelines": [ { "pipelineId": "770e8400-e29b-41d4-a716-446655440000", "pipelineName": "Process Mining Data Enrichment", "status": "Active", "stageCount": 5, "lastExecutionDate": "2024-01-20T02:00:00Z", "lastExecutionStatus": "Success", "dateCreated": "2024-01-15T10:30:00Z" } ], "totalCount": 12, "page": 1, "pageSize": 20, "hasNextPage": false } ``` ## Create New Pipeline **POST** `/api/{tenantId}/{projectId}/enrichment/pipeline` Creates a new enrichment pipeline with specified stages and configuration. The pipeline can be configured to run automatically or manually. ### Request Body ```json { "pipelineName": "Customer Journey Enrichment", "pipelineDescription": "Enriches customer journey data with demographics and behavior patterns", "stages": [ { "stageName": "Customer Data Lookup", "stageType": "DataLookup", "order": 1, "configuration": { "lookupTable": "customer_demographics", "joinKey": "customerId", "selectFields": ["age", "segment", "region"] } }, { "stageName": "Journey Metrics", "stageType": "Calculation", "order": 2, "configuration": { "calculations": [ { "fieldName": "journeyDuration", "formula": "LAST_TIMESTAMP - FIRST_TIMESTAMP", "groupBy": "caseId" }, { "fieldName": "touchpointCount", "formula": "COUNT(*)", "groupBy": "caseId" } ] } } ], "triggers": { "automatic": false, "schedule": null, "onDataUpdate": true } } ``` ### Response Returns `201 Created` with the complete pipeline object including generated IDs and timestamps. ## Update Pipeline **PUT** `/api/{tenantId}/{projectId}/enrichment/pipeline/{pipelineId}` Updates an existing pipeline's configuration, stages, or triggers. 
Changes take effect on the next execution. ### Request Body ```json { "pipelineName": "Updated Customer Journey Enrichment", "pipelineDescription": "Enhanced customer journey data enrichment with ML insights", "status": "Active", "triggers": { "automatic": true, "schedule": "0 3 * * *", "onDataUpdate": true } } ``` ### Response Returns the updated pipeline object with the same structure as the GET endpoint. ## Delete Pipeline **DELETE** `/api/{tenantId}/{projectId}/enrichment/pipeline/{pipelineId}` Permanently removes a pipeline and all its execution history. This operation cannot be undone and will stop any currently running executions. ### Response Codes - `204 No Content` - Pipeline deleted successfully - `404 Not Found` - Pipeline not found or access denied - `409 Conflict` - Pipeline is currently executing and cannot be deleted ## Add Stage to Pipeline **POST** `/api/{tenantId}/{projectId}/enrichment/pipeline/{pipelineId}/stage` Adds a new processing stage to an existing pipeline. The stage will be inserted at the specified order position. ### Request Body ```json { "stageName": "Process Performance Metrics", "stageType": "PerformanceCalculation", "order": 3, "configuration": { "metrics": [ { "name": "cycleTime", "calculation": "CASE_DURATION", "unit": "hours" }, { "name": "waitTime", "calculation": "ACTIVITY_WAITING_TIME", "unit": "hours" } ], "aggregations": ["AVG", "MAX", "MIN", "P95"] } } ``` ### Response Returns `201 Created` with the complete stage object including generated stage ID. ## Remove Stage from Pipeline **DELETE** `/api/{tenantId}/{projectId}/enrichment/pipeline/{pipelineId}/stage/{stageId}` Removes a specific stage from the pipeline. Subsequent stages will be reordered automatically. 
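The automatic reordering means clients never need to renumber `order` values themselves after a removal. The expected effect can be illustrated with a small client-side sketch (the helper below is illustrative only — the renumbering itself happens server-side and is not part of any SDK):

```python
def reorder_after_removal(stages, removed_stage_id):
    """Drop one stage and renumber the rest so `order` stays contiguous from 1."""
    remaining = [s for s in stages if s["stageId"] != removed_stage_id]
    for new_order, stage in enumerate(sorted(remaining, key=lambda s: s["order"]), start=1):
        stage["order"] = new_order
    return remaining

# Example: removing the middle stage of a three-stage pipeline
stages = [
    {"stageId": "stage-001", "stageName": "Data Validation", "order": 1},
    {"stageId": "stage-002", "stageName": "Time Enrichment", "order": 2},
    {"stageId": "stage-003", "stageName": "Performance Metrics", "order": 3},
]
result = reorder_after_removal(stages, "stage-002")
# Remaining stages are renumbered 1, 2
```

After fetching the pipeline again with the GET endpoint, the surviving stages should show this same contiguous ordering.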
### Response Codes

- `204 No Content` - Stage removed successfully
- `404 Not Found` - Stage not found in pipeline
- `409 Conflict` - Cannot remove stage while pipeline is executing

## Example: Complete Pipeline Workflow

This example demonstrates creating and managing an enrichment pipeline:

```javascript
// 1. Create a new enrichment pipeline
const createPipeline = async () => {
  const response = await fetch('/api/{tenantId}/{projectId}/enrichment/pipeline', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`
    },
    body: JSON.stringify({
      pipelineName: 'Order Processing Enrichment',
      pipelineDescription: 'Enriches order data with fulfillment metrics',
      stages: [
        {
          stageName: 'Order Validation',
          stageType: 'Validation',
          order: 1,
          configuration: {
            validateOrderId: true,
            validateCustomerId: true,
            validateAmounts: true
          }
        },
        {
          stageName: 'Fulfillment Time Calculation',
          stageType: 'TimeCalculation',
          order: 2,
          configuration: {
            startActivity: 'Order Received',
            endActivity: 'Order Shipped',
            outputField: 'fulfillmentTime'
          }
        }
      ],
      triggers: {
        automatic: true,
        onDataUpdate: true
      }
    })
  });
  return await response.json();
};

// 2. Add a new stage to existing pipeline
const addStage = async (pipelineId) => {
  const response = await fetch(`/api/{tenantId}/{projectId}/enrichment/pipeline/${pipelineId}/stage`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`
    },
    body: JSON.stringify({
      stageName: 'Customer Segmentation',
      stageType: 'Classification',
      order: 3,
      configuration: {
        segmentationRules: [
          { segment: 'VIP', condition: 'orderValue > 1000' },
          { segment: 'Regular', condition: 'orderValue <= 1000' }
        ]
      }
    })
  });
  return await response.json();
};

// 3. Get pipeline status
const getPipelineStatus = async (pipelineId) => {
  const response = await fetch(`/api/{tenantId}/{projectId}/enrichment/pipeline/${pipelineId}`, {
    headers: { 'Authorization': `Bearer ${token}` }
  });
  return await response.json();
};
```

## Python Example

```python
import requests


class EnrichmentPipelineManager:
    def __init__(self, base_url, tenant_id, project_id, token):
        self.base_url = base_url
        self.tenant_id = tenant_id
        self.project_id = project_id
        self.headers = {
            'Authorization': f'Bearer {token}',
            'Content-Type': 'application/json'
        }

    def create_pipeline(self, name, description, stages, triggers=None):
        """Create a new enrichment pipeline"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/enrichment/pipeline"
        payload = {
            'pipelineName': name,
            'pipelineDescription': description,
            'stages': stages,
            'triggers': triggers or {'automatic': False, 'onDataUpdate': True}
        }
        response = requests.post(url, json=payload, headers=self.headers)
        return response.json()

    def get_pipeline(self, pipeline_id):
        """Get pipeline details"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/enrichment/pipeline/{pipeline_id}"
        response = requests.get(url, headers=self.headers)
        return response.json()

    def list_pipelines(self, status=None, page=1, page_size=20):
        """List all pipelines with optional filtering"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/enrichment/pipelines"
        params = {'page': page, 'pageSize': page_size}
        if status:
            params['status'] = status
        response = requests.get(url, params=params, headers=self.headers)
        return response.json()

    def add_stage(self, pipeline_id, stage_name, stage_type, order, configuration):
        """Add a new stage to an existing pipeline"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/enrichment/pipeline/{pipeline_id}/stage"
        payload = {
            'stageName': stage_name,
            'stageType': stage_type,
            'order': order,
            'configuration': configuration
        }
        response = requests.post(url, json=payload, headers=self.headers)
        return response.json()

    def update_pipeline(self, pipeline_id, name=None, description=None, status=None, triggers=None):
        """Update pipeline configuration"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/enrichment/pipeline/{pipeline_id}"
        payload = {}
        if name:
            payload['pipelineName'] = name
        if description:
            payload['pipelineDescription'] = description
        if status:
            payload['status'] = status
        if triggers:
            payload['triggers'] = triggers
        response = requests.put(url, json=payload, headers=self.headers)
        return response.json()

    def delete_pipeline(self, pipeline_id):
        """Delete a pipeline"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/enrichment/pipeline/{pipeline_id}"
        response = requests.delete(url, headers=self.headers)
        return response.status_code == 204


# Usage example
manager = EnrichmentPipelineManager(
    'https://your-mindzie-instance.com',
    'tenant-guid',
    'project-guid',
    'your-auth-token'
)

# Create a comprehensive enrichment pipeline
stages = [
    {
        'stageName': 'Data Quality Check',
        'stageType': 'Validation',
        'order': 1,
        'configuration': {
            'checkDuplicates': True,
            'validateTimestamps': True,
            'checkMissingValues': True
        }
    },
    {
        'stageName': 'Process Mining Metrics',
        'stageType': 'ProcessCalculation',
        'order': 2,
        'configuration': {
            'calculateCycleTime': True,
            'calculateWaitingTime': True,
            'calculateResourceUtilization': True,
            'detectBottlenecks': True
        }
    },
    {
        'stageName': 'Anomaly Detection',
        'stageType': 'AnomalyDetection',
        'order': 3,
        'configuration': {
            'algorithm': 'isolation_forest',
            'threshold': 0.1,
            'features': ['duration', 'cost', 'resourceCount']
        }
    }
]

pipeline = manager.create_pipeline(
    'Comprehensive Process Analysis',
    'End-to-end process analysis with anomaly detection',
    stages,
    {'automatic': True, 'schedule': '0 1 * * *', 'onDataUpdate': True}
)
print(f"Created pipeline: {pipeline['pipelineId']}")

# List all active pipelines
active_pipelines = manager.list_pipelines(status='Active')
print(f"Found {active_pipelines['totalCount']} active pipelines")
```

---

**Jupyter Notebook Integration**

Integrate Jupyter notebooks for custom enrichments, data analysis, and machine learning workflows.

## Get Notebook Details

**GET** `/api/{tenantId}/{projectId}/notebook/{notebookId}`

Retrieves comprehensive information about a Jupyter notebook including its cells, execution status, and integration parameters.

### Parameters

| Parameter | Type | Location | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Path | The tenant identifier |
| `projectId` | GUID | Path | The project identifier |
| `notebookId` | GUID | Path | The notebook identifier |

### Response

```json
{
  "notebookId": "aa0e8400-e29b-41d4-a716-446655440000",
  "projectId": "660e8400-e29b-41d4-a716-446655440000",
  "notebookName": "Process Mining Analysis",
  "notebookDescription": "Custom analysis for customer journey optimization",
  "notebookVersion": "1.3.2",
  "kernelType": "python3",
  "status": "Ready",
  "integration": {
    "enrichmentMode": true,
    "datasetBinding": "880e8400-e29b-41d4-a716-446655440000",
    "outputFormat": "enriched_dataframe",
    "autoExecution": false
  },
  "cells": [
    {
      "cellId": "cell-001",
      "cellType": "code",
      "executionCount": 15,
      "hasOutput": true,
      "lastExecuted": "2024-01-20T10:30:00Z",
      "executionStatus": "Success"
    },
    {
      "cellId": "cell-002",
      "cellType": "markdown",
      "lastModified": "2024-01-19T14:20:00Z"
    }
  ],
  "environment": {
    "pythonVersion": "3.9.18",
    "packages": ["pandas", "numpy", "scikit-learn", "mindzie-sdk"],
    "customLibraries": ["process_mining_utils", "customer_analytics"]
  },
  "dateCreated": "2024-01-15T10:30:00Z",
  "dateModified": "2024-01-20T10:30:00Z",
  "createdBy": "user123",
  "lastExecutionDate": "2024-01-20T10:30:00Z",
  "executionCount": 47
}
```

## List All Notebooks

**GET** `/api/{tenantId}/{projectId}/notebooks`

Retrieves a list of all Jupyter notebooks in the project with basic metadata and execution status.
### Query Parameters | Parameter | Type | Description | |-----------|------|-------------| | `status` | string | Filter by status: Ready, Running, Error, Kernel_Dead | | `kernelType` | string | Filter by kernel type: python3, r, scala | | `enrichmentMode` | boolean | Filter notebooks configured for data enrichment | | `page` | integer | Page number for pagination (default: 1) | | `pageSize` | integer | Number of items per page (default: 20, max: 100) | ### Response ```json { "notebooks": [ { "notebookId": "aa0e8400-e29b-41d4-a716-446655440000", "notebookName": "Process Mining Analysis", "kernelType": "python3", "status": "Ready", "enrichmentMode": true, "cellCount": 12, "lastExecutionDate": "2024-01-20T10:30:00Z", "dateCreated": "2024-01-15T10:30:00Z" } ], "totalCount": 8, "page": 1, "pageSize": 20, "hasNextPage": false } ``` ## Create New Notebook **POST** `/api/{tenantId}/{projectId}/notebook` Creates a new Jupyter notebook with specified configuration and optional template. The notebook is automatically configured for mindzie data integration. ### Request Body ```json { "notebookName": "Advanced Customer Analytics", "notebookDescription": "Machine learning models for customer behavior prediction", "kernelType": "python3", "template": "process_mining_starter", "integration": { "enrichmentMode": true, "datasetBinding": "880e8400-e29b-41d4-a716-446655440000", "outputFormat": "enriched_dataframe", "autoExecution": false }, "environment": { "packages": ["pandas", "numpy", "scikit-learn", "matplotlib", "seaborn"], "customLibraries": ["process_mining_utils"] }, "initialCells": [ { "cellType": "markdown", "content": "# Customer Analytics Notebook\n\nThis notebook analyzes customer journey data using process mining techniques." 
}, { "cellType": "code", "content": "import pandas as pd\nimport numpy as np\nfrom mindzie_sdk import ProcessMiningClient\n\n# Initialize mindzie client\nclient = ProcessMiningClient()" } ] } ``` ### Response Returns `201 Created` with the complete notebook object including generated notebook ID and initial session information. ## Execute Notebook **POST** `/api/{tenantId}/{projectId}/notebook/{notebookId}/execute` Executes all cells in the notebook or specified cell range. Execution runs asynchronously and results are stored for retrieval. ### Request Body ```json { "executionMode": "all", "cellRange": { "startCell": "cell-001", "endCell": "cell-010" }, "parameters": { "dataset_id": "880e8400-e29b-41d4-a716-446655440000", "analysis_period": "2024-01", "include_weekends": false }, "outputOptions": { "captureOutputs": true, "saveIntermediateResults": true, "generateReport": true }, "timeout": 1800, "priority": "Normal" } ``` ### Response ```json { "executionId": "bb0e8400-e29b-41d4-a716-446655440000", "notebookId": "aa0e8400-e29b-41d4-a716-446655440000", "status": "Running", "startTime": "2024-01-20T10:30:00Z", "estimatedDuration": "15-20 minutes", "currentCell": "cell-003", "progress": { "totalCells": 12, "completedCells": 2, "currentCellIndex": 3, "percentComplete": 17 }, "parameters": { "dataset_id": "880e8400-e29b-41d4-a716-446655440000", "analysis_period": "2024-01", "include_weekends": false } } ``` ## Get Execution Status **GET** `/api/{tenantId}/{projectId}/notebook/{notebookId}/execution/{executionId}` Retrieves the current status and progress of a notebook execution, including cell-by-cell execution details and any errors. 
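The `percentComplete` field in the progress object tracks completed cells against the total. A small helper for reading progress out of a status payload (field names as in the sample responses; this is an illustrative sketch, not SDK code):

```python
def execution_progress(status_payload):
    """Return (completed, total, percent) from an execution status response."""
    progress = status_payload["progress"]
    completed = progress["completedCells"]
    total = progress["totalCells"]
    percent = round(100 * completed / total) if total else 0
    return completed, total, percent

# Values from the Execute Notebook sample response: 2 of 12 cells done
sample = {"progress": {"totalCells": 12, "completedCells": 2}}
# execution_progress(sample) → (2, 12, 17)
```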
### Response

```json
{
  "executionId": "bb0e8400-e29b-41d4-a716-446655440000",
  "notebookId": "aa0e8400-e29b-41d4-a716-446655440000",
  "status": "Completed",
  "startTime": "2024-01-20T10:30:00Z",
  "endTime": "2024-01-20T10:47:00Z",
  "totalDuration": "17 minutes",
  "progress": {
    "totalCells": 12,
    "completedCells": 12,
    "successfulCells": 11,
    "failedCells": 1,
    "percentComplete": 100
  },
  "cellResults": [
    {
      "cellId": "cell-001",
      "status": "Success",
      "executionTime": "0.5 seconds",
      "hasOutput": false
    },
    {
      "cellId": "cell-002",
      "status": "Success",
      "executionTime": "3.2 seconds",
      "hasOutput": true,
      "outputType": "display_data"
    },
    {
      "cellId": "cell-003",
      "status": "Error",
      "executionTime": "1.1 seconds",
      "errorType": "KeyError",
      "errorMessage": "'customer_id' column not found in dataset"
    }
  ],
  "outputs": {
    "dataFrames": 3,
    "plots": 5,
    "models": 2,
    "enrichedData": {
      "recordCount": 15420,
      "newColumns": ["customer_segment", "journey_score", "anomaly_flag"]
    }
  },
  "resources": {
    "peakMemoryUsage": "2.3 GB",
    "cpuTime": "8.5 minutes",
    "diskUsage": "450 MB"
  }
}
```

## Get Execution Results

**GET** `/api/{tenantId}/{projectId}/notebook/{notebookId}/execution/{executionId}/results`

Retrieves the outputs and results from a completed notebook execution, including generated data, plots, and enriched datasets.

### Query Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| `outputType` | string | Filter by output type: all, data, plots, models, reports |
| `format` | string | Response format: summary, detailed, download |
| `cellId` | string | Get results from specific cell only |

### Response

```json
{
  "executionId": "bb0e8400-e29b-41d4-a716-446655440000",
  "status": "Completed",
  "outputs": [
    {
      "cellId": "cell-002",
      "outputType": "display_data",
      "contentType": "text/html",
      "title": "Dataset Overview",
      "content": "Dataset contains 15,420 records...",
      "downloadUrl": "https://api.mindzie.com/downloads/cell-002-bb0e8400.html"
    },
    {
      "cellId": "cell-005",
      "outputType": "image/png",
      "title": "Customer Journey Flow Chart",
      "dimensions": {"width": 800, "height": 600},
      "downloadUrl": "https://api.mindzie.com/downloads/cell-005-bb0e8400.png"
    },
    {
      "cellId": "cell-008",
      "outputType": "application/json",
      "title": "Process Mining Metrics",
      "content": {
        "avgCycleTime": "4.2 hours",
        "bottleneckActivities": ["Review Application", "Manager Approval"],
        "processEfficiency": 78.5,
        "customerSatisfactionScore": 8.2
      },
      "downloadUrl": "https://api.mindzie.com/downloads/cell-008-bb0e8400.json"
    }
  ],
  "enrichedDatasets": [
    {
      "name": "customer_journey_enhanced",
      "recordCount": 15420,
      "newColumns": ["customer_segment", "journey_score", "anomaly_flag"],
      "format": "pandas_dataframe",
      "downloadUrl": "https://api.mindzie.com/downloads/enriched-bb0e8400.csv"
    }
  ],
  "models": [
    {
      "name": "customer_churn_predictor",
      "modelType": "RandomForestClassifier",
      "accuracy": 0.87,
      "features": ["journey_score", "cycle_time", "touchpoint_count"],
      "downloadUrl": "https://api.mindzie.com/downloads/model-bb0e8400.pkl"
    }
  ],
  "reports": [
    {
      "name": "Customer Analytics Summary",
      "format": "html",
      "downloadUrl": "https://api.mindzie.com/downloads/report-bb0e8400.html"
    }
  ]
}
```

## Update Notebook

**PUT** `/api/{tenantId}/{projectId}/notebook/{notebookId}`

Updates notebook configuration, cells, or integration settings. Changes to cells will trigger a new notebook version.
### Request Body ```json { "notebookName": "Advanced Customer Analytics v2", "notebookDescription": "Enhanced ML models with real-time prediction capabilities", "integration": { "enrichmentMode": true, "datasetBinding": "880e8400-e29b-41d4-a716-446655440000", "outputFormat": "enriched_dataframe", "autoExecution": true, "scheduleExecution": "0 2 * * *" }, "environment": { "packages": ["pandas", "numpy", "scikit-learn", "tensorflow", "matplotlib"], "customLibraries": ["process_mining_utils", "ml_models"] } } ``` ### Response Returns the updated notebook object with incremented version number and modification timestamps. ## Delete Notebook **DELETE** `/api/{tenantId}/{projectId}/notebook/{notebookId}` Permanently removes a notebook and all its execution history. This operation cannot be undone and will stop any currently running executions. ### Response Codes - `204 No Content` - Notebook deleted successfully - `404 Not Found` - Notebook not found or access denied - `409 Conflict` - Notebook is currently executing and cannot be deleted ## Upload Existing Notebook **POST** `/api/{tenantId}/{projectId}/notebook/upload` Uploads an existing Jupyter notebook (.ipynb file) and configures it for mindzie integration. The notebook will be parsed and cells will be validated. ### Request (Multipart Form Data) ``` Content-Type: multipart/form-data --boundary Content-Disposition: form-data; name="file"; filename="analysis.ipynb" Content-Type: application/json {notebook content} --boundary Content-Disposition: form-data; name="notebookName" Customer Journey Analysis --boundary Content-Disposition: form-data; name="enrichmentMode" true --boundary Content-Disposition: form-data; name="datasetBinding" 880e8400-e29b-41d4-a716-446655440000 --boundary-- ``` ### Response Returns `201 Created` with the uploaded notebook object including parsing results and any validation warnings. 
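Since the server validates cells when parsing an upload, it can be worth sanity-checking the `.ipynb` file locally first. A rough pre-flight check (the field names follow the standard nbformat layout, not a mindzie-specific schema):

```python
import json

def preflight_check(notebook_json):
    """Return a list of problems found in a notebook dict (nbformat layout)."""
    problems = []
    if "cells" not in notebook_json:
        problems.append("missing 'cells' list")
        return problems
    for i, cell in enumerate(notebook_json["cells"]):
        if cell.get("cell_type") not in ("code", "markdown", "raw"):
            problems.append(f"cell {i}: unknown cell_type {cell.get('cell_type')!r}")
        if "source" not in cell:
            problems.append(f"cell {i}: missing 'source'")
    return problems

# A minimal, well-formed notebook passes with no problems reported
nb = json.loads('{"nbformat": 4, "cells": [{"cell_type": "code", "source": "print(1)"}]}')
issues = preflight_check(nb)
```

Running this before the multipart upload avoids a round-trip just to learn the file is malformed; in practice you would load the dict with `json.load(open(path))` from the `.ipynb` file on disk.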
## Example: Complete Notebook Workflow

This example demonstrates creating, executing, and retrieving results from a Jupyter notebook:

```javascript
// 1. Create a new notebook
const createNotebook = async () => {
  const response = await fetch('/api/{tenantId}/{projectId}/notebook', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`
    },
    body: JSON.stringify({
      notebookName: 'Process Mining Analysis',
      notebookDescription: 'Advanced analytics for process optimization',
      kernelType: 'python3',
      template: 'process_mining_starter',
      integration: {
        enrichmentMode: true,
        datasetBinding: '880e8400-e29b-41d4-a716-446655440000',
        outputFormat: 'enriched_dataframe',
        autoExecution: false
      },
      environment: {
        packages: ['pandas', 'numpy', 'scikit-learn', 'matplotlib'],
        customLibraries: ['process_mining_utils']
      },
      initialCells: [
        {
          cellType: 'markdown',
          content: '# Process Mining Analysis\n\nAnalyzing process efficiency and bottlenecks.'
        },
        {
          cellType: 'code',
          content: 'import pandas as pd\nfrom mindzie_sdk import ProcessMiningClient\n\nclient = ProcessMiningClient()'
        }
      ]
    })
  });
  return await response.json();
};

// 2. Execute the notebook
const executeNotebook = async (notebookId) => {
  const response = await fetch(`/api/{tenantId}/{projectId}/notebook/${notebookId}/execute`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`
    },
    body: JSON.stringify({
      executionMode: 'all',
      parameters: {
        dataset_id: '880e8400-e29b-41d4-a716-446655440000',
        analysis_period: '2024-01',
        include_weekends: false
      },
      outputOptions: {
        captureOutputs: true,
        saveIntermediateResults: true,
        generateReport: true
      },
      timeout: 1800,
      priority: 'High'
    })
  });
  return await response.json();
};

// 3. Monitor execution progress
const monitorNotebookExecution = async (notebookId, executionId) => {
  const checkStatus = async () => {
    const response = await fetch(`/api/{tenantId}/{projectId}/notebook/${notebookId}/execution/${executionId}`, {
      headers: { 'Authorization': `Bearer ${token}` }
    });
    const execution = await response.json();
    console.log(`Status: ${execution.status}, Progress: ${execution.progress.percentComplete}%`);
    if (execution.status === 'Running') {
      setTimeout(() => checkStatus(), 15000);
    } else if (execution.status === 'Completed') {
      console.log('Notebook execution completed!');
      await getNotebookResults(notebookId, executionId);
    } else if (execution.status === 'Error') {
      console.log('Execution failed:', execution.cellResults.filter(c => c.status === 'Error'));
    }
  };
  await checkStatus();
};

// 4. Get execution results
const getNotebookResults = async (notebookId, executionId) => {
  const response = await fetch(`/api/{tenantId}/{projectId}/notebook/${notebookId}/execution/${executionId}/results?format=detailed`, {
    headers: { 'Authorization': `Bearer ${token}` }
  });
  const results = await response.json();
  console.log('Execution Results:', results);
  console.log('Enriched Datasets:', results.enrichedDatasets);
  console.log('Generated Models:', results.models);
  return results;
};

// Execute the workflow
createNotebook()
  .then(notebook => {
    console.log(`Created notebook: ${notebook.notebookId}`);
    return executeNotebook(notebook.notebookId);
  })
  .then(execution => {
    console.log(`Started execution: ${execution.executionId}`);
    return monitorNotebookExecution(execution.notebookId, execution.executionId);
  })
  .catch(error => console.error('Notebook workflow failed:', error));
```

## Python Example

```python
import requests
import time
from pathlib import Path


class NotebookManager:
    def __init__(self, base_url, tenant_id, project_id, token):
        self.base_url = base_url
        self.tenant_id = tenant_id
        self.project_id = project_id
        self.headers = {
            'Authorization': f'Bearer {token}',
            'Content-Type': 'application/json'
        }

    def create_notebook(self, name, description, kernel_type="python3", template=None, integration=None):
        """Create a new Jupyter notebook"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/notebook"
        payload = {
            'notebookName': name,
            'notebookDescription': description,
            'kernelType': kernel_type,
            'template': template,
            'integration': integration or {
                'enrichmentMode': True,
                'outputFormat': 'enriched_dataframe',
                'autoExecution': False
            }
        }
        response = requests.post(url, json=payload, headers=self.headers)
        return response.json()

    def upload_notebook(self, file_path, name, dataset_binding=None):
        """Upload an existing notebook file"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/notebook/upload"
        with open(file_path, 'rb') as file:
            files = {'file': (Path(file_path).name, file, 'application/json')}
            data = {
                'notebookName': name,
                'enrichmentMode': 'true',
                'datasetBinding': dataset_binding or ''
            }
            # Remove Content-Type header for multipart upload
            headers = {k: v for k, v in self.headers.items() if k != 'Content-Type'}
            response = requests.post(url, files=files, data=data, headers=headers)
        return response.json()

    def execute_notebook(self, notebook_id, parameters=None, timeout=1800):
        """Execute all cells in a notebook"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/notebook/{notebook_id}/execute"
        payload = {
            'executionMode': 'all',
            'parameters': parameters or {},
            'outputOptions': {
                'captureOutputs': True,
                'saveIntermediateResults': True,
                'generateReport': True
            },
            'timeout': timeout,
            'priority': 'Normal'
        }
        response = requests.post(url, json=payload, headers=self.headers)
        return response.json()

    def get_execution_status(self, notebook_id, execution_id):
        """Get notebook execution status"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/notebook/{notebook_id}/execution/{execution_id}"
        response = requests.get(url, headers=self.headers)
        return response.json()

    def wait_for_completion(self, notebook_id, execution_id, poll_interval=15, timeout=3600):
        """Wait for notebook execution to complete"""
        start_time = time.time()
        while time.time() - start_time < timeout:
            status = self.get_execution_status(notebook_id, execution_id)
            print(f"Notebook {notebook_id}: {status['status']} ({status['progress']['percentComplete']}%)")
            if status['status'] in ['Completed', 'Error', 'Cancelled']:
                return status
            time.sleep(poll_interval)
        raise TimeoutError(f"Notebook execution {execution_id} did not complete within {timeout} seconds")

    def get_execution_results(self, notebook_id, execution_id, output_type="all", format_type="detailed"):
        """Get notebook execution results"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/notebook/{notebook_id}/execution/{execution_id}/results"
        params = {
            'outputType': output_type,
            'format': format_type
        }
        response = requests.get(url, params=params, headers=self.headers)
        return response.json()

    def list_notebooks(self, status=None, enrichment_mode=None, page=1, page_size=20):
        """List all notebooks with optional filtering"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/notebooks"
        params = {'page': page, 'pageSize': page_size}
        if status:
            params['status'] = status
        if enrichment_mode is not None:
            params['enrichmentMode'] = str(enrichment_mode).lower()
        response = requests.get(url, params=params, headers=self.headers)
        return response.json()


# Usage example
manager = NotebookManager(
    'https://your-mindzie-instance.com',
    'tenant-guid',
    'project-guid',
    'your-auth-token'
)

try:
    # Create a process mining notebook
    notebook = manager.create_notebook(
        'Advanced Process Analytics',
        'Machine learning-based process analysis with anomaly detection',
        'python3',
        'process_mining_starter',
        {
            'enrichmentMode': True,
            'datasetBinding': 'dataset-guid',
            'outputFormat': 'enriched_dataframe',
            'autoExecution': False
        }
    )
    print(f"Created notebook: {notebook['notebookId']}")

    # Execute with custom parameters
    execution_params = {
        'dataset_id': 'dataset-guid',
        'analysis_type': 'full_analysis',
        'time_window': '30_days',
        'ml_models': ['anomaly_detection', 'process_prediction'],
        'generate_visualizations': True
    }
    execution = manager.execute_notebook(
        notebook['notebookId'],
        execution_params,
        timeout=2400  # 40 minutes
    )
    print(f"Started execution: {execution['executionId']}")
    print(f"Estimated duration: {execution['estimatedDuration']}")

    # Wait for completion
    final_status = manager.wait_for_completion(
        notebook['notebookId'],
        execution['executionId']
    )

    if final_status['status'] == 'Completed':
        # Get detailed results
        results = manager.get_execution_results(
            notebook['notebookId'],
            execution['executionId'],
            'all',
            'detailed'
        )
        print("Notebook execution completed successfully!")
        print(f"Generated outputs: {len(results['outputs'])}")
        print(f"Enriched datasets: {len(results['enrichedDatasets'])}")
        print(f"ML models created: {len(results['models'])}")

        # Download enriched data
        for dataset in results['enrichedDatasets']:
            print(f"Download enriched data: {dataset['downloadUrl']}")

        # Download models
        for model in results['models']:
            print(f"Download model '{model['name']}': {model['downloadUrl']}")
    else:
        print(f"Notebook execution failed with status: {final_status['status']}")
        failed_cells = [cell for cell in final_status['cellResults'] if cell['status'] == 'Error']
        for cell in failed_cells:
            print(f"Cell {cell['cellId']} failed: {cell['errorMessage']}")
except Exception as e:
    print(f"Error in notebook workflow: {e}")
```

---

**Execute Enrichment Pipelines**

Run enrichment pipelines on datasets, monitor progress, and retrieve enhanced results.

## Execute Pipeline

**POST** `/api/{tenantId}/{projectId}/enrichment/pipeline/{pipelineId}/execute`

Triggers the execution of an enrichment pipeline on a specified dataset. The execution runs asynchronously and returns an execution ID for tracking progress.
### Parameters | Parameter | Type | Location | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Path | The tenant identifier | | `projectId` | GUID | Path | The project identifier | | `pipelineId` | GUID | Path | The pipeline identifier | ### Request Body ```json { "datasetId": "880e8400-e29b-41d4-a716-446655440000", "executionName": "Monthly Process Analysis", "executionDescription": "Enrichment for monthly performance review", "parameters": { "timeRange": { "startDate": "2024-01-01", "endDate": "2024-01-31" }, "filterCriteria": { "includeWeekends": false, "minCaseDuration": "1h" }, "outputOptions": { "includeRawData": true, "generateSummary": true, "exportFormat": "CSV" } }, "priority": "Normal", "notifyOnCompletion": true } ``` ### Response ```json { "executionId": "990e8400-e29b-41d4-a716-446655440000", "pipelineId": "770e8400-e29b-41d4-a716-446655440000", "datasetId": "880e8400-e29b-41d4-a716-446655440000", "status": "Queued", "estimatedDuration": "15-20 minutes", "executionName": "Monthly Process Analysis", "dateSubmitted": "2024-01-20T10:30:00Z", "priority": "Normal", "stages": [ { "stageId": "stage-001", "stageName": "Data Validation", "status": "Pending", "estimatedDuration": "2-3 minutes" }, { "stageId": "stage-002", "stageName": "Time Enrichment", "status": "Pending", "estimatedDuration": "8-10 minutes" } ] } ``` ## Get Execution Status **GET** `/api/{tenantId}/{projectId}/enrichment/execution/{executionId}` Retrieves the current status and progress information for a pipeline execution, including detailed stage-by-stage progress. 
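Because executions move through `Queued` and `Running` before settling, clients typically poll this endpoint until one of the documented terminal statuses appears. A sketch of just the decision logic (the `fetch_status` callable stands in for the HTTP call):

```python
TERMINAL_STATUSES = {"Completed", "Failed", "Cancelled"}

def poll_until_done(fetch_status, max_polls=100):
    """Poll a status-returning callable until a terminal status appears."""
    for _ in range(max_polls):
        payload = fetch_status()
        if payload["status"] in TERMINAL_STATUSES:
            return payload
    raise TimeoutError("execution did not reach a terminal state")

# Simulated sequence of status responses
responses = iter([{"status": "Queued"}, {"status": "Running"}, {"status": "Completed"}])
final = poll_until_done(lambda: next(responses))
# final["status"] → "Completed"
```

In real use, sleep between polls (the examples in this document use 15-30 second intervals) rather than hammering the endpoint.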
### Response ```json { "executionId": "990e8400-e29b-41d4-a716-446655440000", "pipelineId": "770e8400-e29b-41d4-a716-446655440000", "datasetId": "880e8400-e29b-41d4-a716-446655440000", "status": "Running", "progress": 45, "currentStage": { "stageId": "stage-002", "stageName": "Time Enrichment", "status": "Running", "progress": 60, "startTime": "2024-01-20T10:35:00Z", "estimatedCompletion": "2024-01-20T10:45:00Z" }, "executionName": "Monthly Process Analysis", "dateSubmitted": "2024-01-20T10:30:00Z", "dateStarted": "2024-01-20T10:32:00Z", "estimatedCompletion": "2024-01-20T10:50:00Z", "priority": "Normal", "stages": [ { "stageId": "stage-001", "stageName": "Data Validation", "status": "Completed", "progress": 100, "startTime": "2024-01-20T10:32:00Z", "endTime": "2024-01-20T10:35:00Z", "duration": "3 minutes", "recordsProcessed": 15420, "validationResults": { "totalRecords": 15420, "validRecords": 15418, "errors": 2, "warnings": 15 } }, { "stageId": "stage-002", "stageName": "Time Enrichment", "status": "Running", "progress": 60, "startTime": "2024-01-20T10:35:00Z", "recordsProcessed": 9252, "totalRecords": 15418 } ], "metrics": { "totalRecords": 15420, "processedRecords": 9252, "errorCount": 2, "warningCount": 15 } } ``` ## Get Execution Results **GET** `/api/{tenantId}/{projectId}/enrichment/execution/{executionId}/results` Retrieves the final results of a completed pipeline execution, including enriched data, summary statistics, and downloadable outputs. 
### Query Parameters | Parameter | Type | Description | |-----------|------|-------------| | `format` | string | Response format: summary, full, download (default: summary) | | `includeRawData` | boolean | Include original dataset in response (default: false) | | `limit` | integer | Limit number of records returned (max: 10000) | ### Response ```json { "executionId": "990e8400-e29b-41d4-a716-446655440000", "status": "Completed", "completionDate": "2024-01-20T10:48:00Z", "totalDuration": "18 minutes", "summary": { "originalRecords": 15420, "enrichedRecords": 15418, "newAttributes": 8, "dataQualityScore": 98.7, "enrichmentCoverage": 99.9 }, "enrichedAttributes": [ { "attributeName": "dayOfWeek", "attributeType": "string", "coverage": 100, "uniqueValues": 7, "description": "Day of the week for each event" }, { "attributeName": "businessHours", "attributeType": "boolean", "coverage": 100, "description": "Whether event occurred during business hours" }, { "attributeName": "cycleTime", "attributeType": "duration", "coverage": 99.8, "averageValue": "4.2 hours", "description": "Time from case start to completion" } ], "dataQuality": { "completeness": 99.9, "accuracy": 98.5, "consistency": 99.2, "validity": 97.8, "issues": [ { "type": "Missing Timestamp", "count": 2, "severity": "High" }, { "type": "Invalid Duration", "count": 15, "severity": "Medium" } ] }, "downloadUrls": { "enrichedDataset": "https://api.mindzie.com/downloads/enriched-990e8400.csv", "summary": "https://api.mindzie.com/downloads/summary-990e8400.pdf", "dataQualityReport": "https://api.mindzie.com/downloads/quality-990e8400.html" } } ``` ## List Pipeline Executions **GET** `/api/{tenantId}/{projectId}/enrichment/executions` Retrieves a list of all pipeline executions with filtering and pagination options. Useful for monitoring execution history and performance. 
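Like the other list endpoints, this one pages with `page`/`pageSize` and signals remaining data via `hasNextPage`. A generic pagination loop that works for any of the list responses in this document (the `fetch_page` callable stands in for the HTTP request):

```python
def iter_all(fetch_page, items_key, page_size=20):
    """Yield every item across pages, advancing while hasNextPage is true."""
    page = 1
    while True:
        payload = fetch_page(page=page, page_size=page_size)
        yield from payload[items_key]
        if not payload.get("hasNextPage"):
            break
        page += 1

# Simulated two-page result set
pages = {
    1: {"executions": [{"executionId": "a"}, {"executionId": "b"}], "hasNextPage": True},
    2: {"executions": [{"executionId": "c"}], "hasNextPage": False},
}
all_items = list(iter_all(lambda page, page_size: pages[page], "executions", page_size=2))
```

The same loop handles `/enrichment/pipelines` (with `items_key="pipelines"`) and `/notebooks` (with `items_key="notebooks"`).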
### Query Parameters | Parameter | Type | Description | |-----------|------|-------------| | `pipelineId` | GUID | Filter by specific pipeline | | `status` | string | Filter by status: Queued, Running, Completed, Failed, Cancelled | | `dateFrom` | datetime | Filter executions from this date | | `dateTo` | datetime | Filter executions to this date | | `page` | integer | Page number for pagination (default: 1) | | `pageSize` | integer | Number of items per page (default: 20, max: 100) | ### Response ```json { "executions": [ { "executionId": "990e8400-e29b-41d4-a716-446655440000", "pipelineId": "770e8400-e29b-41d4-a716-446655440000", "pipelineName": "Process Mining Data Enrichment", "executionName": "Monthly Process Analysis", "status": "Completed", "dateSubmitted": "2024-01-20T10:30:00Z", "dateCompleted": "2024-01-20T10:48:00Z", "duration": "18 minutes", "recordsProcessed": 15418, "priority": "Normal", "submittedBy": "user123" } ], "totalCount": 47, "page": 1, "pageSize": 20, "hasNextPage": true } ``` ## Cancel Execution **DELETE** `/api/{tenantId}/{projectId}/enrichment/execution/{executionId}` Cancels a running or queued pipeline execution. Completed stages will be preserved, but the execution will stop at the current stage. ### Request Body (Optional) ```json { "reason": "User requested cancellation", "preservePartialResults": true } ``` ### Response Codes - `200 OK` - Execution cancelled successfully - `404 Not Found` - Execution not found - `409 Conflict` - Execution already completed or cannot be cancelled ## Restart Failed Execution **POST** `/api/{tenantId}/{projectId}/enrichment/execution/{executionId}/restart` Restarts a failed pipeline execution from the point of failure. Previously completed stages will be skipped unless explicitly requested to re-run. 
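A minimal sketch of calling the restart endpoint with `requests`; the helper that builds the path and body is an illustrative convenience, not part of the API, and uses only the request fields documented below:

```python
import requests

def restart_request(tenant_id, project_id, execution_id,
                    from_stage=None, rerun_completed=False):
    """Build the restart endpoint path and JSON body."""
    path = (f"/api/{tenant_id}/{project_id}"
            f"/enrichment/execution/{execution_id}/restart")
    body = {"rerunCompletedStages": rerun_completed}
    if from_stage:
        body["restartFromStage"] = from_stage
    return path, body

def restart_execution(base_url, token, *args, **kwargs):
    """POST the restart request; returns the new execution object."""
    path, body = restart_request(*args, **kwargs)
    response = requests.post(base_url + path, json=body,
                             headers={"Authorization": f"Bearer {token}"})
    response.raise_for_status()
    return response.json()
```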
### Request Body ```json { "restartFromStage": "stage-003", "rerunCompletedStages": false, "updateParameters": { "retryFailedRecords": true, "increaseTimeout": true } } ``` ### Response Returns `200 OK` with a new execution object containing updated execution ID and status. ## Example: Complete Execution Workflow This example demonstrates executing a pipeline and monitoring its progress: ```javascript // 1. Execute pipeline const executeEnrichment = async () => { const response = await fetch('/api/{tenantId}/{projectId}/enrichment/pipeline/{pipelineId}/execute', { method: 'POST', headers: { 'Content-Type': 'application/json', 'Authorization': `Bearer ${token}` }, body: JSON.stringify({ datasetId: '880e8400-e29b-41d4-a716-446655440000', executionName: 'Customer Journey Analysis', executionDescription: 'Enriching customer data with journey metrics', parameters: { timeRange: { startDate: '2024-01-01', endDate: '2024-01-31' }, outputOptions: { includeRawData: true, generateSummary: true, exportFormat: 'CSV' } }, priority: 'High', notifyOnCompletion: true }) }); return await response.json(); }; // 2. Monitor execution progress const monitorExecution = async (executionId) => { const checkStatus = async () => { const response = await fetch(`/api/{tenantId}/{projectId}/enrichment/execution/${executionId}`, { headers: { 'Authorization': `Bearer ${token}` } }); const execution = await response.json(); console.log(`Status: ${execution.status}, Progress: ${execution.progress}%`); if (execution.status === 'Running' || execution.status === 'Queued') { // Check again in 30 seconds setTimeout(() => checkStatus(), 30000); } else if (execution.status === 'Completed') { console.log('Execution completed successfully!'); await getResults(executionId); } else if (execution.status === 'Failed') { console.log('Execution failed:', execution.error); } }; await checkStatus(); }; // 3. 
Get results when completed const getResults = async (executionId) => { const response = await fetch(`/api/{tenantId}/{projectId}/enrichment/execution/${executionId}/results?format=summary`, { headers: { 'Authorization': `Bearer ${token}` } }); const results = await response.json(); console.log('Enrichment Summary:', results.summary); console.log('Download URLs:', results.downloadUrls); return results; }; // Execute the workflow executeEnrichment() .then(execution => { console.log(`Started execution: ${execution.executionId}`); return monitorExecution(execution.executionId); }) .catch(error => console.error('Execution failed:', error)); ``` ## Python Example ```python import requests import time import json from datetime import datetime, timedelta class PipelineExecutionManager: def __init__(self, base_url, tenant_id, project_id, token): self.base_url = base_url self.tenant_id = tenant_id self.project_id = project_id self.headers = { 'Authorization': f'Bearer {token}', 'Content-Type': 'application/json' } def execute_pipeline(self, pipeline_id, dataset_id, execution_name, parameters=None, priority="Normal"): """Execute an enrichment pipeline""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/enrichment/pipeline/{pipeline_id}/execute" payload = { 'datasetId': dataset_id, 'executionName': execution_name, 'parameters': parameters or {}, 'priority': priority, 'notifyOnCompletion': True } response = requests.post(url, json=payload, headers=self.headers) return response.json() def get_execution_status(self, execution_id): """Get current execution status""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/enrichment/execution/{execution_id}" response = requests.get(url, headers=self.headers) return response.json() def wait_for_completion(self, execution_id, poll_interval=30, timeout=3600): """Wait for execution to complete with periodic status checks""" start_time = time.time() while time.time() - start_time < timeout: status = 
self.get_execution_status(execution_id) print(f"Execution {execution_id}: {status['status']} ({status.get('progress', 0)}%)") if status['status'] in ['Completed', 'Failed', 'Cancelled']: return status time.sleep(poll_interval) raise TimeoutError(f"Execution {execution_id} did not complete within {timeout} seconds") def get_execution_results(self, execution_id, format_type="summary", include_raw_data=False): """Get execution results""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/enrichment/execution/{execution_id}/results" params = { 'format': format_type, 'includeRawData': include_raw_data } response = requests.get(url, params=params, headers=self.headers) return response.json() def cancel_execution(self, execution_id, reason="User cancellation"): """Cancel a running execution""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/enrichment/execution/{execution_id}" payload = { 'reason': reason, 'preservePartialResults': True } response = requests.delete(url, json=payload, headers=self.headers) return response.status_code == 200 def list_executions(self, pipeline_id=None, status=None, date_from=None, date_to=None, page=1, page_size=20): """List pipeline executions with filtering""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/enrichment/executions" params = {'page': page, 'pageSize': page_size} if pipeline_id: params['pipelineId'] = pipeline_id if status: params['status'] = status if date_from: params['dateFrom'] = date_from.isoformat() if date_to: params['dateTo'] = date_to.isoformat() response = requests.get(url, params=params, headers=self.headers) return response.json() # Usage example manager = PipelineExecutionManager( 'https://your-mindzie-instance.com', 'tenant-guid', 'project-guid', 'your-auth-token' ) # Execute pipeline with custom parameters execution_params = { 'timeRange': { 'startDate': '2024-01-01', 'endDate': '2024-01-31' }, 'filterCriteria': { 'includeWeekends': False, 'minCaseDuration': '1h' }, 
'outputOptions': { 'includeRawData': True, 'generateSummary': True, 'exportFormat': 'CSV' } } try: # Start execution execution = manager.execute_pipeline( 'pipeline-guid', 'dataset-guid', 'Monthly Process Analysis', execution_params, 'High' ) print(f"Started execution: {execution['executionId']}") print(f"Estimated duration: {execution['estimatedDuration']}") # Wait for completion final_status = manager.wait_for_completion(execution['executionId']) if final_status['status'] == 'Completed': # Get results results = manager.get_execution_results(execution['executionId']) print(f"Enrichment completed successfully!") print(f"Original records: {results['summary']['originalRecords']}") print(f"Enriched records: {results['summary']['enrichedRecords']}") print(f"Data quality score: {results['summary']['dataQualityScore']}") print(f"Download enriched data: {results['downloadUrls']['enrichedDataset']}") else: print(f"Execution failed with status: {final_status['status']}") except Exception as e: print(f"Error executing pipeline: {e}") ``` --- # Action API Execute and manage workflow actions programmatically. ## Overview The Actions API provides endpoints for managing and executing workflow actions within mindzieStudio. Actions are automated workflow components that can be executed on demand or on schedule. ## Base URL Structure All Action API endpoints follow this pattern: ``` /api/{tenantId}/{projectId}/action/... ``` **Path Parameters:** | Parameter | Type | Description | |-----------|------|-------------| | `tenantId` | GUID | Your tenant identifier | | `projectId` | GUID | Your project identifier | ## Available Endpoints ### Health Monitoring Test connectivity and validate authentication. 
- **GET** `/api/{tenantId}/{projectId}/action/unauthorized-ping` - Basic connectivity test (no auth required) - **GET** `/api/{tenantId}/{projectId}/action/ping` - Authenticated connectivity test [View Ping Documentation](/mindzie_api/action/ping) ### Action Management List and retrieve action details. - **GET** `/api/{tenantId}/{projectId}/action` - List all actions for a project - **GET** `/api/{tenantId}/{projectId}/action/{actionId}` - Get specific action details [View Action Management Documentation](/mindzie_api/action/list-actions) ### Execute Actions Trigger action execution programmatically. - **GET** `/api/{tenantId}/{projectId}/action/execute/{actionId}` - Execute an action [View Execute Documentation](/mindzie_api/action/execute) ## Authentication Most endpoints require authentication via Bearer token or API key. The only exception is the `unauthorized-ping` endpoint which can be called without authentication. **Example authenticated request:** ```http GET https://your-mindzie-instance.com/api/{tenantId}/{projectId}/action/ping Authorization: Bearer {your-access-token} ``` ## Common Use Cases - **Health Monitoring:** Use ping endpoints to verify API connectivity and authentication - **Automation:** Execute actions programmatically as part of ETL pipelines or scheduled jobs - **Integration:** Trigger mindzieStudio workflows from external systems - **Monitoring:** List and inspect actions to track workflow status ## Error Responses All endpoints return standard HTTP status codes: | Status | Description | |--------|-------------| | 200 | Success | | 401 | Unauthorized - invalid or missing authentication | | 404 | Not found - action or resource does not exist | ## Get Started 1. Start with [Ping Endpoints](/mindzie_api/action/ping) to test connectivity 2. Use [Action Management](/mindzie_api/action/list-actions) to discover available actions 3. 
[Execute Actions](/mindzie_api/action/execute) to trigger workflows --- Health monitoring and connectivity testing for the Actions API. ## Overview Ping endpoints provide a simple way to test connectivity and validate authentication with the mindzieAPI. These endpoints are essential for monitoring system health and troubleshooting connection issues. ## Unauthorized Ping **GET** `/api/{tenantId}/{projectId}/action/unauthorized-ping` Basic connectivity test that does not require authentication. Use this to verify the API is accessible. ### Request ```http GET https://your-mindzie-instance.com/api/{tenantId}/{projectId}/action/unauthorized-ping ``` ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | Your tenant identifier | | `projectId` | GUID | Yes | Your project identifier | ### Response ``` HTTP/1.1 200 OK Content-Type: text/plain Ping Successful ``` ### Use Cases - Basic connectivity testing - Load balancer health checks - Network troubleshooting - Service availability monitoring ## Authenticated Ping **GET** `/api/{tenantId}/{projectId}/action/ping` Authenticated connectivity test that validates credentials and verifies access to the specified tenant and project. 
### Request ```http GET https://your-mindzie-instance.com/api/{tenantId}/{projectId}/action/ping Authorization: Bearer {your-access-token} ``` ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | Your tenant identifier | | `projectId` | GUID | Yes | Your project identifier | ### Response **Success (200 OK):** ``` HTTP/1.1 200 OK Content-Type: text/plain Ping Successful (tenant id: 12345678-1234-1234-1234-123456789012) ``` **Unauthorized (401):** ``` HTTP/1.1 401 Unauthorized Content-Type: text/plain {error message describing authorization failure} ``` ### Use Cases - Authentication validation - Permission verification - Token validity testing - Tenant/project access validation ## Implementation Examples ### cURL ```bash # Unauthorized ping curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/action/unauthorized-ping" # Authenticated ping curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/action/ping" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" ``` ### JavaScript/Node.js ```javascript const TENANT_ID = '12345678-1234-1234-1234-123456789012'; const PROJECT_ID = '87654321-4321-4321-4321-210987654321'; const BASE_URL = 'https://your-mindzie-instance.com'; // Unauthorized ping const unauthorizedPing = async () => { const response = await fetch( `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/action/unauthorized-ping` ); const text = await response.text(); console.log('Unauthorized ping:', text); return response.ok; }; // Authenticated ping const authenticatedPing = async (token) => { const response = await fetch( `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/action/ping`, { headers: { 'Authorization': `Bearer ${token}` } } ); if (response.ok) { const text = await response.text(); console.log('Authenticated ping:', text); return true; } else { 
console.error('Authentication failed:', response.status); return false; } }; ``` ### Python ```python import requests TENANT_ID = '12345678-1234-1234-1234-123456789012' PROJECT_ID = '87654321-4321-4321-4321-210987654321' BASE_URL = 'https://your-mindzie-instance.com' def unauthorized_ping(): """Basic connectivity test without authentication.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/action/unauthorized-ping' response = requests.get(url) print(f'Unauthorized ping: {response.text}') return response.ok def authenticated_ping(token): """Authenticated connectivity test.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/action/ping' headers = { 'Authorization': f'Bearer {token}' } response = requests.get(url, headers=headers) if response.ok: print(f'Authenticated ping: {response.text}') return True else: print(f'Authentication failed: {response.status_code}') return False ``` ### C# ```csharp using System; using System.Net.Http; using System.Threading.Tasks; public class ActionApiClient { private readonly HttpClient _httpClient; private readonly string _baseUrl; private readonly Guid _tenantId; private readonly Guid _projectId; public ActionApiClient(string baseUrl, Guid tenantId, Guid projectId, string accessToken) { _baseUrl = baseUrl; _tenantId = tenantId; _projectId = projectId; _httpClient = new HttpClient(); _httpClient.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken); } public async Task<bool> UnauthorizedPingAsync() { var url = $"{_baseUrl}/api/{_tenantId}/{_projectId}/action/unauthorized-ping"; var response = await _httpClient.GetAsync(url); var content = await response.Content.ReadAsStringAsync(); Console.WriteLine($"Unauthorized ping: {content}"); return response.IsSuccessStatusCode; } public async Task<bool> AuthenticatedPingAsync() { var url = $"{_baseUrl}/api/{_tenantId}/{_projectId}/action/ping"; var response = await _httpClient.GetAsync(url); var content = await response.Content.ReadAsStringAsync(); if 
(response.IsSuccessStatusCode) { Console.WriteLine($"Authenticated ping: {content}"); return true; } else { Console.WriteLine($"Authentication failed: {response.StatusCode}"); return false; } } } ``` ## Best Practices - **Health Checks:** Use the unauthorized ping for automated health monitoring systems - **Pre-flight Validation:** Call authenticated ping before executing actions to validate credentials - **Error Handling:** Always handle network timeouts and authentication failures gracefully - **Monitoring:** Set up automated alerts based on ping failures to detect service outages early --- Full CRUD operations for managing actions in your mindzieStudio project. Actions are workflow components that can be executed to perform automated tasks. --- ## API Endpoints | Method | Endpoint | Description | |--------|----------|-------------| | GET | `/api/{tenantId}/{projectId}/action` | List all actions | | GET | `/api/{tenantId}/{projectId}/action/{actionId}` | Get action details | | POST | `/api/{tenantId}/{projectId}/action` | Create action | | PUT | `/api/{tenantId}/{projectId}/action/{actionId}` | Update action | | DELETE | `/api/{tenantId}/{projectId}/action/{actionId}` | Delete action | | POST | `/api/{tenantId}/{projectId}/action/{actionId}/enable` | Enable action | | POST | `/api/{tenantId}/{projectId}/action/{actionId}/disable` | Disable action | --- ## List All Actions **GET** `/api/{tenantId}/{projectId}/action` Retrieve all actions configured for a project. 
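Since each action in the list response carries `isEnabled` and `lastRunResult`, a common monitoring pattern is to flag enabled actions whose last run did not succeed. A small illustrative sketch over the response shape documented below; the filtering rule is an assumption for monitoring purposes, not API behavior:

```python
def failed_enabled_actions(list_response):
    """Return enabled actions whose lastRunResult is present and not 'Success'."""
    return [
        action for action in list_response.get("actions", [])
        if action.get("isEnabled")
        and action.get("lastRunResult") not in (None, "Success")
    ]
```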
### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | Your tenant identifier | | `projectId` | GUID | Yes | Your project identifier | ### Response (200 OK) ```json { "actions": [ { "actionId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", "projectId": "87654321-4321-4321-4321-210987654321", "name": "Daily Data Refresh", "description": "Refreshes data from source systems daily", "isEnabled": true, "maxRunTime": 3600, "actionStatus": "Idle", "nextRunTime": "2024-01-16T06:00:00Z", "lastRunTime": "2024-01-15T06:00:00Z", "lastRunResult": "Success", "dateCreated": "2024-01-01T10:00:00Z", "dateModified": "2024-01-15T14:30:00Z", "createdBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", "modifiedBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", "triggers": [...], "steps": [...] } ], "totalCount": 1 } ``` --- ## Get Action Details **GET** `/api/{tenantId}/{projectId}/action/{actionId}` Retrieve detailed information about a specific action. 
### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | Your tenant identifier | | `projectId` | GUID | Yes | Your project identifier | | `actionId` | GUID | Yes | The action to retrieve | ### Response (200 OK) ```json { "actionId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", "projectId": "87654321-4321-4321-4321-210987654321", "name": "Daily Data Refresh", "description": "Refreshes data from source systems daily", "isEnabled": true, "maxRunTime": 3600, "actionStatus": "Idle", "nextRunTime": "2024-01-16T06:00:00Z", "lastRunTime": "2024-01-15T06:00:00Z", "lastRunResult": "Success", "dateCreated": "2024-01-01T10:00:00Z", "dateModified": "2024-01-15T14:30:00Z", "createdBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", "modifiedBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", "triggers": [ { "triggerId": "11111111-1111-1111-1111-111111111111", "actionId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", "triggerType": "DailyScheduler", "settings": "{}", "frequency": 1, "eventName": null, "startDate": "2024-01-01", "dateCreated": "2024-01-01T10:00:00Z" } ], "steps": [ { "stepId": "22222222-2222-2222-2222-222222222222", "actionId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", "stepNumber": 1, "stepType": "Python", "description": "Execute data refresh script", "settings": "{\"script\": \"refresh_data.py\"}", "dateCreated": "2024-01-01T10:00:00Z" } ] } ``` ### Response Fields | Field | Type | Description | |-------|------|-------------| | `actionId` | GUID | Unique action identifier | | `projectId` | GUID | Project this action belongs to | | `name` | string | Display name | | `description` | string | Action description | | `isEnabled` | boolean | Whether the action is enabled | | `maxRunTime` | integer | Maximum run time in seconds | | `actionStatus` | string | Current status (Idle, Running, etc.) 
| | `nextRunTime` | datetime | Next scheduled execution | | `lastRunTime` | datetime | Last execution time | | `lastRunResult` | string | Result of last execution | | `triggers` | array | Trigger configurations | | `steps` | array | Action step definitions | --- ## Create Action **POST** `/api/{tenantId}/{projectId}/action` Create a new action in the project. ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | Your tenant identifier | | `projectId` | GUID | Yes | Your project identifier | ### Request Body ```json { "name": "Weekly Report", "description": "Generate weekly analysis report", "isEnabled": true, "maxRunTime": 1800, "steps": [ { "stepNumber": 1, "stepType": "Python", "description": "Generate report", "settings": "{\"script\": \"generate_report.py\"}" }, { "stepNumber": 2, "stepType": "Email", "description": "Send report via email", "settings": "{\"recipients\": [\"team@company.com\"]}" } ], "triggers": [ { "triggerType": "WeeklyScheduler", "frequency": 1, "startDate": "2024-01-08" } ] } ``` ### Request Fields | Field | Type | Required | Description | |-------|------|----------|-------------| | `name` | string | Yes | Action name (must be unique in project) | | `description` | string | No | Action description | | `isEnabled` | boolean | No | Whether action is enabled (default: true) | | `maxRunTime` | integer | No | Max run time in seconds (default: 3600) | | `steps` | array | Yes | At least one step is required | | `triggers` | array | No | Optional trigger configurations | ### Step Object | Field | Type | Required | Description | |-------|------|----------|-------------| | `stepNumber` | integer | No | Execution order (auto-assigned if not provided) | | `stepType` | string | Yes | Type: Python, Email, Webhook, etc. 
| | `description` | string | No | Step description | | `settings` | string | Yes | JSON configuration for the step | ### Trigger Object | Field | Type | Required | Description | |-------|------|----------|-------------| | `triggerType` | string | Yes | Type: HourlyScheduler, DailyScheduler, WeeklyScheduler, MonthlyScheduler, EventTrigger | | `frequency` | integer | No | Frequency multiplier | | `startDate` | date | No | When to start the schedule | | `eventName` | string | No | Event name (for EventTrigger) | | `settings` | string | No | Additional trigger settings | ### Response (201 Created) Returns the created action with assigned IDs. ### Error Responses **Conflict (409) - Duplicate Name** ```json { "Error": "An action with this name already exists in the project" } ``` --- ## Update Action **PUT** `/api/{tenantId}/{projectId}/action/{actionId}` Update an existing action. ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | Your tenant identifier | | `projectId` | GUID | Yes | Your project identifier | | `actionId` | GUID | Yes | The action to update | ### Request Body ```json { "name": "Updated Weekly Report", "description": "Updated description", "isEnabled": true, "maxRunTime": 2400, "steps": [ { "stepId": "22222222-2222-2222-2222-222222222222", "stepNumber": 1, "stepType": "Python", "description": "Updated step", "settings": "{\"script\": \"updated_report.py\"}" } ], "triggers": [ { "triggerId": "11111111-1111-1111-1111-111111111111", "triggerType": "DailyScheduler", "frequency": 1, "startDate": "2024-02-01" } ] } ``` All fields are optional - only provided fields will be updated. ### Response (200 OK) Returns the updated action. --- ## Delete Action **DELETE** `/api/{tenantId}/{projectId}/action/{actionId}` Permanently delete an action. 
### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | Your tenant identifier | | `projectId` | GUID | Yes | Your project identifier | | `actionId` | GUID | Yes | The action to delete | ### Response (204 No Content) Empty response on success. --- ## Enable Action **POST** `/api/{tenantId}/{projectId}/action/{actionId}/enable` Enable a disabled action. ### Response (200 OK) Returns the updated action with `isEnabled: true`. --- ## Disable Action **POST** `/api/{tenantId}/{projectId}/action/{actionId}/disable` Disable an action. ### Response (200 OK) Returns the updated action with `isEnabled: false`. --- ## Implementation Examples ### cURL ```bash # List all actions curl -X GET "https://your-mindzie-instance.com/api/{tenantId}/{projectId}/action" \ -H "Authorization: Bearer YOUR_API_KEY" # Create an action curl -X POST "https://your-mindzie-instance.com/api/{tenantId}/{projectId}/action" \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "name": "Daily Report", "description": "Generate daily report", "isEnabled": true, "steps": [ { "stepNumber": 1, "stepType": "Python", "description": "Run report script", "settings": "{\"script\": \"daily_report.py\"}" } ], "triggers": [ { "triggerType": "DailyScheduler", "frequency": 1, "startDate": "2024-01-15" } ] }' # Update an action curl -X PUT "https://your-mindzie-instance.com/api/{tenantId}/{projectId}/action/{actionId}" \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{"name": "Updated Daily Report"}' # Delete an action curl -X DELETE "https://your-mindzie-instance.com/api/{tenantId}/{projectId}/action/{actionId}" \ -H "Authorization: Bearer YOUR_API_KEY" # Enable/Disable action curl -X POST "https://your-mindzie-instance.com/api/{tenantId}/{projectId}/action/{actionId}/enable" \ -H "Authorization: Bearer YOUR_API_KEY" curl -X POST 
"https://your-mindzie-instance.com/api/{tenantId}/{projectId}/action/{actionId}/disable" \ -H "Authorization: Bearer YOUR_API_KEY" ``` ### Python ```python import requests BASE_URL = 'https://your-mindzie-instance.com' TENANT_ID = '12345678-1234-1234-1234-123456789012' PROJECT_ID = '87654321-4321-4321-4321-210987654321' class ActionManager: def __init__(self, api_key): self.headers = { 'Authorization': f'Bearer {api_key}', 'Content-Type': 'application/json' } def list_actions(self): """List all actions.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/action' response = requests.get(url, headers=self.headers) response.raise_for_status() return response.json() def get_action(self, action_id): """Get action details.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/action/{action_id}' response = requests.get(url, headers=self.headers) response.raise_for_status() return response.json() def create_action(self, name, steps, description=None, triggers=None): """Create a new action.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/action' data = { 'name': name, 'description': description, 'isEnabled': True, 'steps': steps, 'triggers': triggers or [] } response = requests.post(url, json=data, headers=self.headers) response.raise_for_status() return response.json() def update_action(self, action_id, **kwargs): """Update an action.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/action/{action_id}' response = requests.put(url, json=kwargs, headers=self.headers) response.raise_for_status() return response.json() def delete_action(self, action_id): """Delete an action.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/action/{action_id}' response = requests.delete(url, headers=self.headers) response.raise_for_status() def enable_action(self, action_id): """Enable an action.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/action/{action_id}/enable' response = requests.post(url, headers=self.headers) response.raise_for_status() return response.json() def 
disable_action(self, action_id): """Disable an action.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/action/{action_id}/disable' response = requests.post(url, headers=self.headers) response.raise_for_status() return response.json() # Usage manager = ActionManager('your-api-key') # Create an action action = manager.create_action( name='Daily Report', description='Generate daily analysis report', steps=[ { 'stepNumber': 1, 'stepType': 'Python', 'description': 'Generate report', 'settings': '{"script": "daily_report.py"}' } ], triggers=[ { 'triggerType': 'DailyScheduler', 'frequency': 1, 'startDate': '2024-01-15' } ] ) print(f"Created action: {action['actionId']}") # Disable then enable manager.disable_action(action['actionId']) print("Action disabled") manager.enable_action(action['actionId']) print("Action enabled") # Update the action updated = manager.update_action(action['actionId'], name='Updated Daily Report') # Delete the action manager.delete_action(action['actionId']) print("Action deleted") ``` ### JavaScript/Node.js ```javascript const BASE_URL = 'https://your-mindzie-instance.com'; const TENANT_ID = '12345678-1234-1234-1234-123456789012'; const PROJECT_ID = '87654321-4321-4321-4321-210987654321'; class ActionManager { constructor(apiKey) { this.headers = { 'Authorization': `Bearer ${apiKey}`, 'Content-Type': 'application/json' }; } async listActions() { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/action`; const response = await fetch(url, { headers: this.headers }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return response.json(); } async createAction(name, steps, description = null, triggers = []) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/action`; const response = await fetch(url, { method: 'POST', headers: this.headers, body: JSON.stringify({ name, description, isEnabled: true, steps, triggers }) }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return response.json(); } async 
updateAction(actionId, updates) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/action/${actionId}`; const response = await fetch(url, { method: 'PUT', headers: this.headers, body: JSON.stringify(updates) }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return response.json(); } async deleteAction(actionId) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/action/${actionId}`; const response = await fetch(url, { method: 'DELETE', headers: this.headers }); if (!response.ok) throw new Error(`Failed: ${response.status}`); } async enableAction(actionId) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/action/${actionId}/enable`; const response = await fetch(url, { method: 'POST', headers: this.headers }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return response.json(); } async disableAction(actionId) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/action/${actionId}/disable`; const response = await fetch(url, { method: 'POST', headers: this.headers }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return response.json(); } } // Usage const manager = new ActionManager('your-api-key'); // Create action const action = await manager.createAction( 'Daily Report', [{ stepNumber: 1, stepType: 'Python', description: 'Run script', settings: '{}' }], 'Generate daily report', [{ triggerType: 'DailyScheduler', frequency: 1, startDate: '2024-01-15' }] ); // Toggle enable/disable await manager.disableAction(action.actionId); await manager.enableAction(action.actionId); // Delete await manager.deleteAction(action.actionId); ``` --- ## Best Practices 1. **Unique Names**: Action names must be unique within a project 2. **Step Order**: Steps execute in stepNumber order 3. **Enable/Disable**: Use enable/disable endpoints rather than deleting and recreating 4. **Triggers**: Use appropriate trigger types for your scheduling needs 5. 
**MaxRunTime**: Set reasonable timeouts to prevent runaway actions --- Trigger action execution programmatically within mindzieStudio. ## Overview The Execute Action endpoint allows you to trigger a specific action within mindzieStudio. Actions are queued for asynchronous execution and you receive an execution ID to track progress. ## Execute Action **GET** `/api/{tenantId}/{projectId}/action/execute/{actionId}` Execute a specific action by its ID. The action is added to an execution queue and processed asynchronously. ### Request ```http GET https://your-mindzie-instance.com/api/{tenantId}/{projectId}/action/execute/{actionId} Authorization: Bearer {your-access-token} ``` ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | Your tenant identifier | | `projectId` | GUID | Yes | Your project identifier | | `actionId` | GUID | Yes | The action to execute | ### Response **Success (200 OK):** ```json { "actionId": "87654321-4321-4321-4321-210987654321", "actionExecutionId": "11111111-2222-3333-4444-555555555555", "dateStarted": "2024-01-15T10:30:00Z", "dateEnded": null, "status": "Queued", "notes": null } ``` ### Response Fields | Field | Type | Description | |-------|------|-------------| | `actionId` | GUID | The ID of the action being executed | | `actionExecutionId` | GUID | Unique identifier for this execution instance | | `dateStarted` | datetime | When the execution was queued | | `dateEnded` | datetime | When execution completed (null if still running) | | `status` | string | Current execution status | | `notes` | string | Additional execution notes or error messages | ### Error Responses **Action Not Found (404):** ```json { "error": "Action not found", "actionId": "87654321-4321-4321-4321-210987654321" } ``` **Execution Creation Failed (404):** ```json { "error": "Action can't create execution", "actionId": "87654321-4321-4321-4321-210987654321" } ``` **Unauthorized 
(401):** ``` HTTP/1.1 401 Unauthorized {error message describing authorization failure} ``` ## Execution Status Values | Status | Description | |--------|-------------| | Queued | Action is queued and waiting to be processed | | Running | Action is currently executing | | Completed | Action completed successfully | | Failed | Action execution failed | ## Implementation Examples ### cURL ```bash curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/action/execute/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" ``` ### JavaScript/Node.js ```javascript const TENANT_ID = '12345678-1234-1234-1234-123456789012'; const PROJECT_ID = '87654321-4321-4321-4321-210987654321'; const BASE_URL = 'https://your-mindzie-instance.com'; const executeAction = async (actionId, token) => { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/action/execute/${actionId}`; try { const response = await fetch(url, { method: 'GET', headers: { 'Authorization': `Bearer ${token}` } }); if (response.ok) { const result = await response.json(); console.log('Action queued:', result); console.log('Execution ID:', result.actionExecutionId); return result; } else if (response.status === 404) { const error = await response.json(); console.error('Action not found:', error); throw new Error(error.error); } else { throw new Error(`Execution failed: ${response.status}`); } } catch (error) { console.error('Error executing action:', error); throw error; } }; // Example usage executeAction('aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee', 'your_token') .then(result => { // Store actionExecutionId for tracking const executionId = result.actionExecutionId; }); ``` ### Python ```python import requests TENANT_ID = '12345678-1234-1234-1234-123456789012' PROJECT_ID = '87654321-4321-4321-4321-210987654321' BASE_URL = 'https://your-mindzie-instance.com' def execute_action(action_id, token): """Execute an action and 
return execution details.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/action/execute/{action_id}' headers = { 'Authorization': f'Bearer {token}' } response = requests.get(url, headers=headers) if response.ok: result = response.json() print(f'Action queued: {result}') print(f'Execution ID: {result["actionExecutionId"]}') return result elif response.status_code == 404: error = response.json() print(f'Action not found: {error}') raise Exception(error['error']) else: raise Exception(f'Execution failed: {response.status_code}') # Example usage result = execute_action('aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee', 'your_token') execution_id = result['actionExecutionId'] ``` ### C# ```csharp using System; using System.Net.Http; using System.Text.Json; using System.Threading.Tasks; public class ActionExecutionResult { public Guid ActionId { get; set; } public Guid ActionExecutionId { get; set; } public DateTime? DateStarted { get; set; } public DateTime? DateEnded { get; set; } public string Status { get; set; } public string Notes { get; set; } } public class ActionApiClient { private readonly HttpClient _httpClient; private readonly string _baseUrl; private readonly Guid _tenantId; private readonly Guid _projectId; public ActionApiClient(string baseUrl, Guid tenantId, Guid projectId, string accessToken) { _baseUrl = baseUrl; _tenantId = tenantId; _projectId = projectId; _httpClient = new HttpClient(); _httpClient.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken); } public async Task<ActionExecutionResult> ExecuteActionAsync(Guid actionId) { var url = $"{_baseUrl}/api/{_tenantId}/{_projectId}/action/execute/{actionId}"; var response = await _httpClient.GetAsync(url); if (response.IsSuccessStatusCode) { var json = await response.Content.ReadAsStringAsync(); var result = JsonSerializer.Deserialize<ActionExecutionResult>(json, new JsonSerializerOptions { PropertyNameCaseInsensitive = true }); Console.WriteLine($"Action queued. 
Execution ID: {result.ActionExecutionId}"); return result; } else if (response.StatusCode == System.Net.HttpStatusCode.NotFound) { throw new Exception($"Action {actionId} not found"); } else { throw new Exception($"Execution failed: {response.StatusCode}"); } } } ``` ## Best Practices - **Store Execution ID:** Always store the `actionExecutionId` returned to track execution progress - **Check Action Exists:** Use the Get Action endpoint first to verify the action exists and is enabled - **Handle Async Nature:** Actions execute asynchronously - the response indicates the action was queued, not completed - **Error Handling:** Implement proper error handling for 404 responses (action not found) and 401 (unauthorized) - **Idempotency:** Each call creates a new execution - avoid duplicate calls if not intended --- Track and monitor action execution history and download result packages. ## Overview The Action Execution API provides endpoints for tracking action execution history, monitoring status, and downloading execution results. This API uses a separate controller from the main Actions API. **Base URL:** `/api/{tenantId}/{projectId}/actionexecution` ## Get Execution History for an Action **GET** `/api/{tenantId}/{projectId}/actionexecution/action/{actionId}` Retrieve all execution history for a specific action. 
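For example, the history returned by this endpoint can be summarized to spot actions that fail frequently. A minimal sketch, assuming the `requests` library and the sample tenant/project GUIDs used throughout these examples (`summarize_history` is an illustrative helper, not part of the API):

```python
import requests

BASE_URL = 'https://your-mindzie-instance.com'
TENANT_ID = '12345678-1234-1234-1234-123456789012'
PROJECT_ID = '87654321-4321-4321-4321-210987654321'

def get_execution_history(action_id, token):
    """Fetch every recorded execution for an action."""
    url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/actionexecution/action/{action_id}'
    response = requests.get(url, headers={'Authorization': f'Bearer {token}'})
    response.raise_for_status()
    return response.json()['items']

def summarize_history(items):
    """Count executions by status, e.g. {'Completed': 2, 'Failed': 1}."""
    counts = {}
    for item in items:
        counts[item['status']] = counts.get(item['status'], 0) + 1
    return counts
```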
### Request ```http GET https://your-mindzie-instance.com/api/{tenantId}/{projectId}/actionexecution/action/{actionId} Authorization: Bearer {your-access-token} ``` ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | Your tenant identifier | | `projectId` | GUID | Yes | Your project identifier | | `actionId` | GUID | Yes | The action to get execution history for | ### Response **Success (200 OK):** ```json { "items": [ { "actionId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", "actionExecutionId": "11111111-2222-3333-4444-555555555555", "dateStarted": "2024-01-15T10:30:00Z", "dateEnded": "2024-01-15T10:32:15Z", "status": "Completed", "notes": null }, { "actionId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", "actionExecutionId": "22222222-3333-4444-5555-666666666666", "dateStarted": "2024-01-14T10:30:00Z", "dateEnded": "2024-01-14T10:31:45Z", "status": "Completed", "notes": null } ] } ``` ## Get Last Execution for an Action **GET** `/api/{tenantId}/{projectId}/actionexecution/lastaction/{actionId}` Retrieve the most recent execution for a specific action. 
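One practical use of this endpoint is the idempotency guard recommended in the Execute Action best practices: check the most recent execution before triggering a new one. A sketch, assuming the `requests` library (the helper names are illustrative):

```python
import requests

BASE_URL = 'https://your-mindzie-instance.com'
TENANT_ID = '12345678-1234-1234-1234-123456789012'
PROJECT_ID = '87654321-4321-4321-4321-210987654321'
IN_FLIGHT = ('Queued', 'Running')

def should_execute(last_execution):
    """True if the action is safe to trigger again."""
    if last_execution is None:  # action has never run
        return True
    return last_execution['status'] not in IN_FLIGHT

def execute_if_idle(action_id, token):
    """Trigger the action only when its previous run has finished."""
    headers = {'Authorization': f'Bearer {token}'}
    url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/actionexecution/lastaction/{action_id}'
    response = requests.get(url, headers=headers)
    last = None if response.status_code == 404 else response.json()
    if not should_execute(last):
        return None  # previous execution still Queued/Running; skip
    run = requests.get(
        f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/action/execute/{action_id}',
        headers=headers)
    run.raise_for_status()
    return run.json()
```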
### Request ```http GET https://your-mindzie-instance.com/api/{tenantId}/{projectId}/actionexecution/lastaction/{actionId} Authorization: Bearer {your-access-token} ``` ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | Your tenant identifier | | `projectId` | GUID | Yes | Your project identifier | | `actionId` | GUID | Yes | The action to get the last execution for | ### Response **Success (200 OK):** ```json { "actionId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", "actionExecutionId": "11111111-2222-3333-4444-555555555555", "dateStarted": "2024-01-15T10:30:00Z", "dateEnded": "2024-01-15T10:32:15Z", "status": "Completed", "notes": null } ``` **Not Found (404):** ```json { "error": "Can't find action", "actionId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" } ``` ## Get Specific Execution Details **GET** `/api/{tenantId}/{projectId}/actionexecution/{executionId}` Retrieve details for a specific execution by its execution ID. 
### Request ```http GET https://your-mindzie-instance.com/api/{tenantId}/{projectId}/actionexecution/{executionId} Authorization: Bearer {your-access-token} ``` ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | Your tenant identifier | | `projectId` | GUID | Yes | Your project identifier | | `executionId` | GUID | Yes | The execution to retrieve | ### Response **Success (200 OK):** ```json { "actionId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", "actionExecutionId": "11111111-2222-3333-4444-555555555555", "dateStarted": "2024-01-15T10:30:00Z", "dateEnded": "2024-01-15T10:32:15Z", "status": "Completed", "notes": null } ``` **Not Found (404):** ```json { "error": "Can't find action", "executionId": "11111111-2222-3333-4444-555555555555" } ``` ## Download Execution Package **GET** `/api/{tenantId}/{projectId}/actionexecution/downloadpackage/{executionId}` Download the results package (ZIP file) for a completed execution. ### Request ```http GET https://your-mindzie-instance.com/api/{tenantId}/{projectId}/actionexecution/downloadpackage/{executionId} Authorization: Bearer {your-access-token} ``` ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | Your tenant identifier | | `projectId` | GUID | Yes | Your project identifier | | `executionId` | GUID | Yes | The execution to download results for | ### Response **Success (200 OK):** Returns a ZIP file download containing execution results, reports, and artifacts. 
``` HTTP/1.1 200 OK Content-Type: application/zip Content-Disposition: attachment; filename="{executionId}.zip" [binary ZIP file content] ``` **Not Found (404):** ```json { "error": "Execution not found", "executionId": "11111111-2222-3333-4444-555555555555" } ``` ```json { "error": "Zip file not found", "executionId": "11111111-2222-3333-4444-555555555555" } ``` ## Execution Response Fields | Field | Type | Description | |-------|------|-------------| | `actionId` | GUID | The action that was executed | | `actionExecutionId` | GUID | Unique identifier for this execution | | `dateStarted` | datetime | When execution started | | `dateEnded` | datetime | When execution completed (null if still running) | | `status` | string | Current execution status | | `notes` | string | Execution notes or error messages | ## Execution Status Values | Status | Description | |--------|-------------| | Queued | Execution is queued, waiting to start | | Running | Execution is currently in progress | | Completed | Execution finished successfully | | Failed | Execution encountered an error | ## Implementation Examples ### cURL ```bash # Get execution history for an action curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/actionexecution/action/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" # Get last execution curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/actionexecution/lastaction/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" # Get specific execution curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/actionexecution/11111111-2222-3333-4444-555555555555" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" # Download execution package curl -X GET 
"https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/actionexecution/downloadpackage/11111111-2222-3333-4444-555555555555" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \ -o execution_results.zip ``` ### JavaScript/Node.js ```javascript const TENANT_ID = '12345678-1234-1234-1234-123456789012'; const PROJECT_ID = '87654321-4321-4321-4321-210987654321'; const BASE_URL = 'https://your-mindzie-instance.com'; // Get execution history for an action const getExecutionHistory = async (actionId, token) => { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/actionexecution/action/${actionId}`; const response = await fetch(url, { headers: { 'Authorization': `Bearer ${token}` } }); if (response.ok) { const result = await response.json(); return result.items; } throw new Error(`Failed: ${response.status}`); }; // Get last execution for an action const getLastExecution = async (actionId, token) => { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/actionexecution/lastaction/${actionId}`; const response = await fetch(url, { headers: { 'Authorization': `Bearer ${token}` } }); if (response.ok) { return await response.json(); } else if (response.status === 404) { return null; } throw new Error(`Failed: ${response.status}`); }; // Get specific execution details const getExecution = async (executionId, token) => { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/actionexecution/${executionId}`; const response = await fetch(url, { headers: { 'Authorization': `Bearer ${token}` } }); if (response.ok) { return await response.json(); } throw new Error(`Failed: ${response.status}`); }; // Download execution package const downloadPackage = async (executionId, token) => { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/actionexecution/downloadpackage/${executionId}`; const response = await fetch(url, { headers: { 'Authorization': `Bearer ${token}` } }); if (response.ok) { const blob = await 
response.blob(); // Save or process the ZIP file return blob; } throw new Error(`Failed: ${response.status}`); }; ``` ### Python ```python import requests TENANT_ID = '12345678-1234-1234-1234-123456789012' PROJECT_ID = '87654321-4321-4321-4321-210987654321' BASE_URL = 'https://your-mindzie-instance.com' def get_execution_history(action_id, token): """Get all executions for an action.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/actionexecution/action/{action_id}' headers = {'Authorization': f'Bearer {token}'} response = requests.get(url, headers=headers) response.raise_for_status() return response.json()['items'] def get_last_execution(action_id, token): """Get the most recent execution for an action.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/actionexecution/lastaction/{action_id}' headers = {'Authorization': f'Bearer {token}'} response = requests.get(url, headers=headers) if response.status_code == 404: return None response.raise_for_status() return response.json() def get_execution(execution_id, token): """Get details for a specific execution.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/actionexecution/{execution_id}' headers = {'Authorization': f'Bearer {token}'} response = requests.get(url, headers=headers) response.raise_for_status() return response.json() def download_package(execution_id, token, output_path): """Download the execution results package.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/actionexecution/downloadpackage/{execution_id}' headers = {'Authorization': f'Bearer {token}'} response = requests.get(url, headers=headers) response.raise_for_status() with open(output_path, 'wb') as f: f.write(response.content) return output_path # Example: Monitor execution until complete def wait_for_completion(execution_id, token, max_wait_seconds=300): import time start_time = time.time() while time.time() - start_time < max_wait_seconds: execution = get_execution(execution_id, token) status = execution['status'] if status == 
'Completed': print('Execution completed successfully') return execution elif status == 'Failed': print(f'Execution failed: {execution.get("notes", "Unknown error")}') return execution else: print(f'Status: {status}, waiting...') time.sleep(5) raise TimeoutError('Execution did not complete within timeout') ``` ## Best Practices - **Poll for Status:** After executing an action, poll the execution endpoint to monitor progress - **Handle Long-Running Actions:** Use appropriate timeouts when waiting for completion - **Download Results:** For actions that produce output, download the package after completion - **Error Handling:** Check the `notes` field for error details when status is "Failed" - **Execution History:** Use history endpoints for auditing and debugging past executions --- # Execution API Manage and monitor the execution of process mining jobs, handle asynchronous operations, and track job progress in real-time. ## Features ### Job Queue Manage the job queue and priorities. [View Queue](/mindzie_api/execution/queue) ### Job Tracking Track job status and progress. [Track Jobs](/mindzie_api/execution/tracking) ### Async Operations Handle long-running asynchronous operations. [Async Operations](/mindzie_api/execution/async) ## Get Job Status **GET** `/api/{tenantId}/{projectId}/execution/job/{jobId}` Retrieves the current status and details of any execution job, including progress information, execution metrics, and completion status. 
### Parameters | Parameter | Type | Location | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Path | The tenant identifier | | `projectId` | GUID | Path | The project identifier | | `jobId` | GUID | Path | The job identifier | ### Response ```json { "jobId": "cc0e8400-e29b-41d4-a716-446655440000", "projectId": "660e8400-e29b-41d4-a716-446655440000", "jobType": "ProcessMining", "jobName": "Customer Journey Analysis", "jobDescription": "Comprehensive analysis of customer touchpoints and behaviors", "status": "Running", "priority": "High", "progress": { "percentage": 65, "currentStage": "Data Processing", "estimatedCompletion": "2024-01-20T11:15:00Z", "elapsedTime": "8 minutes 32 seconds" }, "resource": { "resourceType": "Pipeline", "resourceId": "770e8400-e29b-41d4-a716-446655440000", "resourceName": "Customer Analytics Pipeline" }, "execution": { "startTime": "2024-01-20T10:30:00Z", "submittedBy": "user123", "executionNode": "worker-node-02", "memoryUsage": "2.1 GB", "cpuUsage": "45%", "diskUsage": "890 MB" }, "metrics": { "recordsProcessed": 125430, "totalRecords": 192850, "errorCount": 3, "warningCount": 12, "averageProcessingRate": "1250 records/second" }, "dateCreated": "2024-01-20T10:28:00Z", "lastUpdated": "2024-01-20T10:38:45Z" } ``` ## List All Jobs **GET** `/api/{tenantId}/{projectId}/execution/jobs` Retrieves a paginated list of all execution jobs in the project with filtering options for status, job type, and date ranges. 
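Because results are paginated, callers typically follow the `page` and `hasNextPage` fields until the list is exhausted. A sketch, assuming the `requests` library:

```python
import requests

def iter_jobs(base_url, tenant_id, project_id, token, **filters):
    """Yield every job, following pagination until hasNextPage is false."""
    headers = {'Authorization': f'Bearer {token}'}
    page = 1
    while True:
        params = dict(filters, page=page, pageSize=100)
        response = requests.get(
            f'{base_url}/api/{tenant_id}/{project_id}/execution/jobs',
            headers=headers, params=params)
        response.raise_for_status()
        data = response.json()
        yield from data['jobs']
        if not data.get('hasNextPage'):
            break
        page += 1
```

For example, `list(iter_jobs(BASE_URL, TENANT_ID, PROJECT_ID, token, status='Failed'))` collects all failed jobs across every page.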
### Query Parameters | Parameter | Type | Description | |-----------|------|-------------| | `status` | string | Filter by status: Queued, Running, Completed, Failed, Cancelled | | `jobType` | string | Filter by job type: ProcessMining, DataEnrichment, Notebook, Analysis | | `priority` | string | Filter by priority: Low, Normal, High, Critical | | `submittedBy` | string | Filter by user who submitted the job | | `dateFrom` | datetime | Filter jobs from this date | | `dateTo` | datetime | Filter jobs to this date | | `page` | integer | Page number for pagination (default: 1) | | `pageSize` | integer | Number of items per page (default: 20, max: 100) | ### Response ```json { "jobs": [ { "jobId": "cc0e8400-e29b-41d4-a716-446655440000", "jobType": "ProcessMining", "jobName": "Customer Journey Analysis", "status": "Running", "priority": "High", "progress": 65, "startTime": "2024-01-20T10:30:00Z", "estimatedCompletion": "2024-01-20T11:15:00Z", "submittedBy": "user123", "resourceName": "Customer Analytics Pipeline" }, { "jobId": "dd0e8400-e29b-41d4-a716-446655440000", "jobType": "DataEnrichment", "jobName": "Daily Sales Enrichment", "status": "Completed", "priority": "Normal", "progress": 100, "startTime": "2024-01-20T09:00:00Z", "endTime": "2024-01-20T09:23:00Z", "duration": "23 minutes", "submittedBy": "system", "resourceName": "Sales Data Pipeline" } ], "summary": { "totalJobs": 156, "runningJobs": 3, "queuedJobs": 7, "completedJobs": 142, "failedJobs": 4 }, "page": 1, "pageSize": 20, "hasNextPage": true } ``` ## Submit New Job **POST** `/api/{tenantId}/{projectId}/execution/job` Submits a new execution job to the system. The job will be queued and processed based on priority and resource availability. 
### Request Body ```json { "jobName": "Weekly Process Analysis", "jobDescription": "Automated weekly analysis of process performance", "jobType": "ProcessMining", "priority": "Normal", "resource": { "resourceType": "Pipeline", "resourceId": "770e8400-e29b-41d4-a716-446655440000" }, "parameters": { "datasetId": "880e8400-e29b-41d4-a716-446655440000", "analysisType": "comprehensive", "timeWindow": { "startDate": "2024-01-01", "endDate": "2024-01-07" }, "includeAnomalyDetection": true, "outputFormat": "detailed_report" }, "scheduling": { "executeImmediately": true, "scheduledTime": null, "timeoutMinutes": 120 }, "notifications": { "onCompletion": true, "onFailure": true, "emailRecipients": ["analyst@company.com"] } } ``` ### Response ```json { "jobId": "ee0e8400-e29b-41d4-a716-446655440000", "status": "Queued", "queuePosition": 3, "estimatedStartTime": "2024-01-20T10:45:00Z", "estimatedDuration": "45-60 minutes", "jobName": "Weekly Process Analysis", "priority": "Normal", "dateSubmitted": "2024-01-20T10:30:00Z", "submittedBy": "user123" } ``` ## Cancel Job **DELETE** `/api/{tenantId}/{projectId}/execution/job/{jobId}` Cancels a queued or running job. Completed jobs cannot be cancelled. Running jobs will be stopped gracefully when possible. ### Request Body (Optional) ```json { "reason": "User requested cancellation", "forceTermination": false, "preservePartialResults": true } ``` ### Response Codes - `200 OK` - Job cancelled successfully - `404 Not Found` - Job not found - `409 Conflict` - Job already completed or cannot be cancelled ## Get Job Results **GET** `/api/{tenantId}/{projectId}/execution/job/{jobId}/results` Retrieves the results and outputs from a completed job execution, including generated artifacts, reports, and data files. 
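Each output in the response carries a `downloadUrl`. As a sketch (assuming the `requests` library; `primary_download_url` is an illustrative helper, not part of the API), the primary artifact can be pulled out of the payload and streamed to disk:

```python
import requests

def primary_download_url(results_json):
    """Extract the primary output's download link from a results payload."""
    return results_json['results']['primaryOutput']['downloadUrl']

def download_primary_output(base_url, tenant_id, project_id, job_id, token, dest_path):
    """Fetch job results, then stream the primary artifact to dest_path."""
    headers = {'Authorization': f'Bearer {token}'}
    results = requests.get(
        f'{base_url}/api/{tenant_id}/{project_id}/execution/job/{job_id}/results',
        headers=headers, params={'format': 'summary'})
    results.raise_for_status()
    with requests.get(primary_download_url(results.json()),
                      headers=headers, stream=True) as r:
        r.raise_for_status()
        with open(dest_path, 'wb') as f:
            for chunk in r.iter_content(chunk_size=1 << 16):
                f.write(chunk)
    return dest_path
```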
### Query Parameters | Parameter | Type | Description | |-----------|------|-------------| | `format` | string | Response format: summary, detailed, download (default: summary) | | `includeArtifacts` | boolean | Include downloadable artifacts in response (default: true) | | `outputType` | string | Filter by output type: reports, data, models, visualizations | ### Response ```json { "jobId": "cc0e8400-e29b-41d4-a716-446655440000", "status": "Completed", "completionTime": "2024-01-20T11:12:00Z", "totalDuration": "42 minutes", "success": true, "summary": { "recordsProcessed": 192850, "outputsGenerated": 7, "dataQualityScore": 94.2, "processingEfficiency": 87.5 }, "results": { "primaryOutput": { "type": "ProcessMiningReport", "title": "Customer Journey Analysis Report", "format": "html", "size": "2.3 MB", "downloadUrl": "https://api.mindzie.com/downloads/report-cc0e8400.html" }, "additionalOutputs": [ { "type": "EnrichedDataset", "title": "Customer Journey Data Enhanced", "format": "csv", "recordCount": 192850, "size": "45.7 MB", "downloadUrl": "https://api.mindzie.com/downloads/data-cc0e8400.csv" }, { "type": "ProcessMap", "title": "Customer Journey Process Map", "format": "svg", "size": "890 KB", "downloadUrl": "https://api.mindzie.com/downloads/map-cc0e8400.svg" }, { "type": "AnalyticsModel", "title": "Journey Prediction Model", "format": "pkl", "accuracy": 0.89, "size": "12.4 MB", "downloadUrl": "https://api.mindzie.com/downloads/model-cc0e8400.pkl" } ] }, "executionMetrics": { "totalCpuTime": "38.5 minutes", "peakMemoryUsage": "3.2 GB", "diskIoOperations": 45672, "networkDataTransfer": "567 MB" }, "qualityMetrics": { "dataValidation": { "totalRecords": 195000, "validRecords": 192850, "duplicatesRemoved": 1890, "invalidRecords": 260 }, "processingErrors": [], "warnings": [ { "type": "DataQuality", "message": "Some timestamps had to be inferred", "count": 125 } ] } } ``` ## Retry Failed Job **POST** `/api/{tenantId}/{projectId}/execution/job/{jobId}/retry` Retries a 
failed job execution with optional parameter modifications. The job will be re-queued with the same or updated configuration. ### Request Body ```json { "retryReason": "Infrastructure issue resolved", "modifyParameters": true, "updatedParameters": { "timeoutMinutes": 180, "retryFailedRecords": true, "increaseMemoryLimit": true }, "priority": "High", "immediateExecution": false } ``` ### Response Returns `200 OK` with a new job object containing updated job ID and retry information. ## Get System Execution Status **GET** `/api/{tenantId}/execution/system/status` Retrieves the current system-wide execution status including resource utilization, queue health, and performance metrics. ### Response ```json { "systemStatus": "Healthy", "timestamp": "2024-01-20T10:45:00Z", "executionNodes": [ { "nodeId": "worker-node-01", "status": "Active", "cpuUsage": 67, "memoryUsage": 78, "activeJobs": 2, "jobCapacity": 4 }, { "nodeId": "worker-node-02", "status": "Active", "cpuUsage": 45, "memoryUsage": 56, "activeJobs": 1, "jobCapacity": 4 } ], "queueStatistics": { "totalQueuedJobs": 15, "highPriorityJobs": 3, "normalPriorityJobs": 10, "lowPriorityJobs": 2, "averageWaitTime": "4.2 minutes", "estimatedProcessingTime": "23 minutes" }, "performanceMetrics": { "jobsCompletedToday": 847, "averageJobDuration": "18.5 minutes", "successRate": 97.8, "throughputPerHour": 35.2 }, "resourceUtilization": { "totalCpuCapacity": 1600, "usedCpuCapacity": 896, "totalMemoryCapacity": "64 GB", "usedMemoryCapacity": "38.4 GB", "diskSpaceAvailable": "2.3 TB" } } ``` ## Example: Complete Job Management Workflow This example demonstrates submitting a job, monitoring its progress, and retrieving results: ```javascript // 1. 
Submit a new job const submitJob = async () => { const response = await fetch('/api/{tenantId}/{projectId}/execution/job', { method: 'POST', headers: { 'Content-Type': 'application/json', 'Authorization': `Bearer ${token}` }, body: JSON.stringify({ jobName: 'Customer Behavior Analysis', jobDescription: 'Weekly analysis of customer interaction patterns', jobType: 'ProcessMining', priority: 'High', resource: { resourceType: 'Pipeline', resourceId: '770e8400-e29b-41d4-a716-446655440000' }, parameters: { datasetId: '880e8400-e29b-41d4-a716-446655440000', analysisType: 'comprehensive', timeWindow: { startDate: '2024-01-13', endDate: '2024-01-19' }, includeAnomalyDetection: true, outputFormat: 'detailed_report' }, scheduling: { executeImmediately: true, timeoutMinutes: 90 }, notifications: { onCompletion: true, onFailure: true, emailRecipients: ['analyst@company.com'] } }) }); return await response.json(); }; // 2. Monitor job progress const monitorJob = async (jobId) => { const checkStatus = async () => { const response = await fetch(`/api/{tenantId}/{projectId}/execution/job/${jobId}`, { headers: { 'Authorization': `Bearer ${token}` } }); const job = await response.json(); console.log(`Job ${jobId}: ${job.status} (${job.progress.percentage}%)`); console.log(`Current stage: ${job.progress.currentStage}`); console.log(`ETA: ${job.progress.estimatedCompletion}`); if (job.status === 'Running' || job.status === 'Queued') { setTimeout(() => checkStatus(), 30000); // Check every 30 seconds } else if (job.status === 'Completed') { console.log('Job completed successfully!'); await getJobResults(jobId); } else if (job.status === 'Failed') { console.log('Job failed:', job.error); } }; await checkStatus(); }; // 3. 
Get job results const getJobResults = async (jobId) => { const response = await fetch(`/api/{tenantId}/{projectId}/execution/job/${jobId}/results?format=detailed&includeArtifacts=true`, { headers: { 'Authorization': `Bearer ${token}` } }); const results = await response.json(); console.log('Job Results:', results.summary); console.log('Primary Output:', results.results.primaryOutput.downloadUrl); // Download additional outputs for (const output of results.results.additionalOutputs) { console.log(`Download ${output.type}: ${output.downloadUrl}`); } return results; }; // 4. Get system status const getSystemStatus = async () => { const response = await fetch('/api/{tenantId}/execution/system/status', { headers: { 'Authorization': `Bearer ${token}` } }); const status = await response.json(); console.log(`System Status: ${status.systemStatus}`); console.log(`Queue: ${status.queueStatistics.totalQueuedJobs} jobs waiting`); console.log(`Average wait time: ${status.queueStatistics.averageWaitTime}`); return status; }; // Execute the workflow submitJob() .then(job => { console.log(`Submitted job: ${job.jobId}`); console.log(`Queue position: ${job.queuePosition}`); console.log(`Estimated start: ${job.estimatedStartTime}`); return monitorJob(job.jobId); }) .catch(error => console.error('Job workflow failed:', error)); ``` ## Python Example ```python import requests import time import json from datetime import datetime, timedelta class ExecutionManager: def __init__(self, base_url, tenant_id, project_id, token): self.base_url = base_url self.tenant_id = tenant_id self.project_id = project_id self.headers = { 'Authorization': f'Bearer {token}', 'Content-Type': 'application/json' } def submit_job(self, job_name, job_type, resource_type, resource_id, parameters=None, priority="Normal"): """Submit a new execution job""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/job" payload = { 'jobName': job_name, 'jobType': job_type, 'priority': priority, 
            'resource': {
                'resourceType': resource_type,
                'resourceId': resource_id
            },
            'parameters': parameters or {},
            'scheduling': {
                'executeImmediately': True,
                'timeoutMinutes': 120
            },
            'notifications': {
                'onCompletion': True,
                'onFailure': True
            }
        }
        response = requests.post(url, json=payload, headers=self.headers)
        return response.json()

    def get_job_status(self, job_id):
        """Get current job status"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/job/{job_id}"
        response = requests.get(url, headers=self.headers)
        return response.json()

    def list_jobs(self, status=None, job_type=None, date_from=None, date_to=None, page=1, page_size=20):
        """List jobs with optional filtering"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/jobs"
        params = {'page': page, 'pageSize': page_size}
        if status:
            params['status'] = status
        if job_type:
            params['jobType'] = job_type
        if date_from:
            params['dateFrom'] = date_from.isoformat()
        if date_to:
            params['dateTo'] = date_to.isoformat()
        response = requests.get(url, params=params, headers=self.headers)
        return response.json()

    def wait_for_completion(self, job_id, poll_interval=30, timeout=3600):
        """Wait for job to complete with periodic status checks"""
        start_time = time.time()
        while time.time() - start_time < timeout:
            job = self.get_job_status(job_id)
            print(f"Job {job_id}: {job['status']} ({job['progress']['percentage']}%)")
            print(f"  Current stage: {job['progress']['currentStage']}")
            print(f"  Elapsed time: {job['progress']['elapsedTime']}")
            if job['status'] in ['Completed', 'Failed', 'Cancelled']:
                return job
            time.sleep(poll_interval)
        raise TimeoutError(f"Job {job_id} did not complete within {timeout} seconds")

    def get_job_results(self, job_id, format_type="detailed", include_artifacts=True):
        """Get job execution results"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/job/{job_id}/results"
        params = {
            'format': format_type,
            'includeArtifacts': str(include_artifacts).lower()
        }
        response = requests.get(url, params=params, headers=self.headers)
        return response.json()

    def cancel_job(self, job_id, reason="User cancellation", force=False):
        """Cancel a running or queued job"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/job/{job_id}"
        payload = {
            'reason': reason,
            'forceTermination': force,
            'preservePartialResults': True
        }
        response = requests.delete(url, json=payload, headers=self.headers)
        return response.status_code == 200

    def retry_job(self, job_id, reason="Retry after failure", priority=None, modify_params=None):
        """Retry a failed job"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/job/{job_id}/retry"
        payload = {
            'retryReason': reason,
            'modifyParameters': modify_params is not None,
            'immediateExecution': False
        }
        if priority:
            payload['priority'] = priority
        if modify_params:
            payload['updatedParameters'] = modify_params
        response = requests.post(url, json=payload, headers=self.headers)
        return response.json()

    def get_system_status(self):
        """Get system-wide execution status"""
        url = f"{self.base_url}/api/{self.tenant_id}/execution/system/status"
        response = requests.get(url, headers=self.headers)
        return response.json()

# Usage example
manager = ExecutionManager(
    'https://your-mindzie-instance.com',
    'tenant-guid',
    'project-guid',
    'your-auth-token'
)

try:
    # Check system status
    system_status = manager.get_system_status()
    print(f"System Status: {system_status['systemStatus']}")
    print(f"Jobs in queue: {system_status['queueStatistics']['totalQueuedJobs']}")
    print(f"Average wait time: {system_status['queueStatistics']['averageWaitTime']}")

    # Submit a comprehensive process mining job
    job_params = {
        'datasetId': 'dataset-guid',
        'analysisType': 'comprehensive',
        'timeWindow': {
            'startDate': '2024-01-01',
            'endDate': '2024-01-31'
        },
        'includeAnomalyDetection': True,
        'includeProcessVariants': True,
        'generateInsights': True,
        'outputFormat': 'detailed_report',
        'performanceMetrics': ['cycle_time', 'waiting_time', 'resource_utilization'],
        'qualityChecks': {
            'validateTimestamps': True,
            'checkDuplicates': True,
            'validateActivities': True
        }
    }

    job = manager.submit_job(
        'Monthly Process Analytics',
        'ProcessMining',
        'Pipeline',
        'pipeline-guid',
        job_params,
        'High'
    )
    print(f"Submitted job: {job['jobId']}")
    print(f"Queue position: {job['queuePosition']}")
    print(f"Estimated start: {job['estimatedStartTime']}")

    # Wait for completion
    final_job = manager.wait_for_completion(job['jobId'])

    if final_job['status'] == 'Completed':
        # Get detailed results
        results = manager.get_job_results(job['jobId'])
        print("Job completed successfully!")
        print(f"Records processed: {results['summary']['recordsProcessed']:,}")
        print(f"Data quality score: {results['summary']['dataQualityScore']}")
        print(f"Processing efficiency: {results['summary']['processingEfficiency']}%")

        # Download primary report
        print(f"Download report: {results['results']['primaryOutput']['downloadUrl']}")

        # List all additional outputs
        for output in results['results']['additionalOutputs']:
            print(f"Download {output['type']}: {output['downloadUrl']}")
    else:
        print(f"Job failed with status: {final_job['status']}")
        if 'error' in final_job:
            print(f"Error: {final_job['error']}")

except Exception as e:
    print(f"Error in execution workflow: {e}")
```

---

# Long-Running Operations

Handle asynchronous operations with callbacks, webhooks, and real-time status updates.

## Start Async Operation

**POST** `/api/{tenantId}/{projectId}/async/operation`

Initiates a long-running asynchronous operation and returns an operation ID for tracking. Supports callback URLs and webhook notifications.
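Before the full parameter reference, a minimal sketch of assembling the request body in Python. Which fields are strictly required is an assumption made here for illustration; the tables and examples below show the complete schema, and optional blocks such as `callbacks` and `notifications` can be merged into the returned dictionary.

```python
def start_operation_payload(operation_type, name, parameters, priority="Normal", timeout=3600):
    """Assemble a minimal start-operation request body.

    Treating only these fields as required is an assumption for
    illustration -- consult the request schema for the full shape.
    """
    return {
        "operationType": operation_type,
        "operationName": name,
        "priority": priority,
        "parameters": parameters,
        "timeout": timeout,
    }
```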
### Parameters

| Parameter | Type | Location | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Path | The tenant identifier |
| `projectId` | GUID | Path | The project identifier |

### Request Body

```json
{
  "operationType": "ProcessMiningAnalysis",
  "operationName": "Comprehensive Customer Journey Analysis",
  "operationDescription": "Deep analysis of customer interaction patterns with advanced ML algorithms",
  "priority": "High",
  "parameters": {
    "datasetId": "880e8400-e29b-41d4-a716-446655440000",
    "analysisType": "comprehensive",
    "timeWindow": {
      "startDate": "2024-01-01",
      "endDate": "2024-01-31"
    },
    "algorithmSettings": {
      "useAdvancedML": true,
      "enableAnomalyDetection": true,
      "performanceOptimization": "high_accuracy"
    },
    "outputOptions": {
      "generateReports": true,
      "createVisualizations": true,
      "exportFormats": ["PDF", "CSV", "JSON"]
    }
  },
  "callbacks": {
    "onProgress": "https://your-app.com/webhooks/progress",
    "onCompletion": "https://your-app.com/webhooks/completion",
    "onError": "https://your-app.com/webhooks/error"
  },
  "notifications": {
    "email": ["analyst@company.com", "manager@company.com"],
    "slack": {
      "channel": "#process-mining",
      "mentionUsers": ["@analyst", "@data-team"]
    }
  },
  "timeout": 7200,
  "retryPolicy": {
    "maxRetries": 3,
    "retryDelay": 300,
    "backoffMultiplier": 2.0
  }
}
```

### Response

```json
{
  "operationId": "op-ff0e8400-e29b-41d4-a716-446655440000",
  "operationType": "ProcessMiningAnalysis",
  "operationName": "Comprehensive Customer Journey Analysis",
  "status": "Initiated",
  "estimatedDuration": "45-60 minutes",
  "estimatedCompletion": "2024-01-20T12:15:00Z",
  "trackingUrl": "/api/{tenantId}/{projectId}/async/operation/op-ff0e8400-e29b-41d4-a716-446655440000",
  "webhooksRegistered": 3,
  "priority": "High",
  "dateCreated": "2024-01-20T11:15:00Z",
  "timeoutAt": "2024-01-20T13:15:00Z",
  "resourcesAllocated": {
    "cpuUnits": 4,
    "memoryGB": 8,
    "estimatedCost": "$2.45"
  }
}
```

## Get Operation Status

**GET**
`/api/{tenantId}/{projectId}/async/operation/{operationId}`

Retrieves the current status and progress of an asynchronous operation, including detailed execution information and estimated completion times.

### Parameters

| Parameter | Type | Location | Description |
|-----------|------|----------|-------------|
| `operationId` | string | Path | The operation identifier |

### Response

```json
{
  "operationId": "op-ff0e8400-e29b-41d4-a716-446655440000",
  "operationType": "ProcessMiningAnalysis",
  "operationName": "Comprehensive Customer Journey Analysis",
  "status": "Running",
  "progress": {
    "percentage": 67,
    "currentPhase": "Machine Learning Analysis",
    "phasesCompleted": 2,
    "totalPhases": 3,
    "startTime": "2024-01-20T11:18:00Z",
    "elapsedTime": "23 minutes 15 seconds",
    "estimatedRemaining": "15-20 minutes",
    "estimatedCompletion": "2024-01-20T12:05:00Z"
  },
  "execution": {
    "executionId": "exec-aa1e8400-e29b-41d4-a716-446655440000",
    "workerNode": "async-worker-03",
    "resourceUsage": {
      "cpuUsage": 78,
      "memoryUsage": "6.2 GB",
      "diskUsage": "1.3 GB",
      "networkIO": "45 MB"
    },
    "processedRecords": 125430,
    "totalRecords": 187250,
    "processingRate": "1890 records/minute"
  },
  "phases": [
    {
      "phaseName": "Data Loading & Validation",
      "status": "Completed",
      "startTime": "2024-01-20T11:18:00Z",
      "endTime": "2024-01-20T11:25:00Z",
      "duration": "7 minutes",
      "recordsProcessed": 187250,
      "validationResults": {
        "validRecords": 187248,
        "errorRecords": 2,
        "dataQualityScore": 99.9
      }
    },
    {
      "phaseName": "Process Discovery",
      "status": "Completed",
      "startTime": "2024-01-20T11:25:00Z",
      "endTime": "2024-01-20T11:38:00Z",
      "duration": "13 minutes",
      "results": {
        "activitiesDiscovered": 52,
        "processVariants": 347,
        "uniquePaths": 289
      }
    },
    {
      "phaseName": "Machine Learning Analysis",
      "status": "Running",
      "startTime": "2024-01-20T11:38:00Z",
      "progress": 72,
      "currentActivity": "Training anomaly detection models",
      "modelsTraining": 3,
      "modelsCompleted": 2
    },
    {
      "phaseName": "Report Generation",
      "status": "Pending",
      "estimatedStartTime": "2024-01-20T11:55:00Z",
      "estimatedDuration": "8-10 minutes"
    }
  ],
  "callbacks": {
    "progressCallbacksSent": 15,
    "lastProgressCallback": "2024-01-20T11:40:00Z",
    "callbacksSuccessful": 15,
    "callbacksFailed": 0
  },
  "dateCreated": "2024-01-20T11:15:00Z",
  "timeoutAt": "2024-01-20T13:15:00Z",
  "priority": "High"
}
```

## List Async Operations

**GET** `/api/{tenantId}/{projectId}/async/operations`

Retrieves a list of asynchronous operations with filtering and pagination options. Useful for monitoring multiple long-running operations.

### Query Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| `status` | string | Filter by status: Initiated, Running, Completed, Failed, Cancelled, Timeout |
| `operationType` | string | Filter by operation type: ProcessMiningAnalysis, DataEnrichment, ReportGeneration |
| `priority` | string | Filter by priority: Low, Normal, High, Critical |
| `dateFrom` | datetime | Filter operations from this date |
| `dateTo` | datetime | Filter operations to this date |
| `includeDetails` | boolean | Include detailed execution information (default: false) |
| `page` | integer | Page number for pagination (default: 1) |
| `pageSize` | integer | Number of items per page (default: 20, max: 100) |

### Response

```json
{
  "operations": [
    {
      "operationId": "op-ff0e8400-e29b-41d4-a716-446655440000",
      "operationType": "ProcessMiningAnalysis",
      "operationName": "Comprehensive Customer Journey Analysis",
      "status": "Running",
      "progress": 67,
      "priority": "High",
      "startTime": "2024-01-20T11:18:00Z",
      "estimatedCompletion": "2024-01-20T12:05:00Z",
      "currentPhase": "Machine Learning Analysis",
      "resourceUsage": {
        "cpuUsage": 78,
        "memoryUsage": "6.2 GB"
      }
    },
    {
      "operationId": "op-gg1e8400-e29b-41d4-a716-446655440000",
      "operationType": "DataEnrichment",
      "operationName": "Sales Data Processing",
      "status": "Completed",
      "progress": 100,
      "priority": "Normal",
      "startTime": "2024-01-20T10:45:00Z",
      "endTime": "2024-01-20T11:10:00Z",
      "duration": "25 minutes",
      "recordsProcessed": 89420
    }
  ],
  "summary": {
    "totalOperations": 47,
    "running": 3,
    "completed": 41,
    "failed": 2,
    "cancelled": 1
  },
  "page": 1,
  "pageSize": 20,
  "hasNextPage": true
}
```

## Cancel Async Operation

**DELETE** `/api/{tenantId}/{projectId}/async/operation/{operationId}`

Cancels a running or pending asynchronous operation. The operation will be stopped gracefully, preserving any completed work.

### Request Body (Optional)

```json
{
  "reason": "User requested cancellation due to changed requirements",
  "preservePartialResults": true,
  "forceTermination": false,
  "notifyCallbacks": true
}
```

### Response

```json
{
  "operationId": "op-ff0e8400-e29b-41d4-a716-446655440000",
  "status": "Cancelled",
  "cancellationTime": "2024-01-20T11:42:00Z",
  "reason": "User requested cancellation due to changed requirements",
  "progressAtCancellation": 67,
  "phaseAtCancellation": "Machine Learning Analysis",
  "partialResults": {
    "available": true,
    "completedPhases": 2,
    "downloadUrls": [
      "https://api.mindzie.com/downloads/partial-results-ff0e8400.zip"
    ]
  },
  "resourcesReleased": {
    "cpuUnits": 4,
    "memoryGB": 8,
    "costSaved": "$1.20"
  },
  "cancelledBy": "user123"
}
```

## Get Operation Results

**GET** `/api/{tenantId}/{projectId}/async/operation/{operationId}/results`

Retrieves the complete results of a finished asynchronous operation, including all generated outputs, reports, and downloadable artifacts.
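The `artifacts` array in the results payload pairs each output with a `downloadUrl`. A hedged sketch of saving them locally, assuming the URLs are plain HTTPS downloads (whether they are pre-signed or require the bearer token is not specified here); `artifact_filename` is a helper invented for this example:

```python
import os
import re

def artifact_filename(artifact):
    """Build a safe local filename from an artifact's name and format."""
    stem = re.sub(r"[^a-z0-9]+", "-", artifact["name"].lower()).strip("-")
    return f"{stem}.{artifact['format'].lower()}"

def download_artifacts(results, dest_dir="."):
    """Stream every artifact listed in a results payload to disk.

    Assumes the third-party `requests` package is installed and that
    the download URLs are directly fetchable.
    """
    import requests
    saved = []
    for artifact in results.get("artifacts", []):
        path = os.path.join(dest_dir, artifact_filename(artifact))
        with requests.get(artifact["downloadUrl"], stream=True) as resp:
            resp.raise_for_status()
            with open(path, "wb") as fh:
                for chunk in resp.iter_content(chunk_size=65536):
                    fh.write(chunk)
        saved.append(path)
    return saved
```

Streaming with `iter_content` keeps memory flat even for large outputs such as the multi-megabyte model archives shown in the response example.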
### Query Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| `format` | string | Response format: summary, detailed, download (default: summary) |
| `includeArtifacts` | boolean | Include downloadable artifacts in response (default: true) |
| `phase` | string | Get results from specific phase only |

### Response

```json
{
  "operationId": "op-ff0e8400-e29b-41d4-a716-446655440000",
  "operationType": "ProcessMiningAnalysis",
  "operationName": "Comprehensive Customer Journey Analysis",
  "status": "Completed",
  "completionTime": "2024-01-20T12:03:00Z",
  "totalDuration": "45 minutes",
  "success": true,
  "summary": {
    "recordsAnalyzed": 187248,
    "processVariants": 347,
    "anomaliesDetected": 23,
    "modelsGenerated": 3,
    "reportsCreated": 5,
    "dataQualityScore": 94.7,
    "overallConfidenceScore": 91.2
  },
  "phaseResults": [
    {
      "phaseName": "Data Loading & Validation",
      "status": "Completed",
      "results": {
        "recordsLoaded": 187250,
        "validRecords": 187248,
        "dataQualityScore": 99.9,
        "validationErrors": [
          {
            "type": "Missing Timestamp",
            "count": 2,
            "resolved": true
          }
        ]
      }
    },
    {
      "phaseName": "Process Discovery",
      "status": "Completed",
      "results": {
        "processModel": {
          "activities": 52,
          "transitions": 178,
          "variants": 347,
          "complexity": "Medium-High"
        },
        "performanceMetrics": {
          "averageCycleTime": "4.2 hours",
          "medianCycleTime": "3.1 hours",
          "bottleneckActivities": ["Review Application", "Manager Approval"],
          "efficiency": 78.3
        }
      }
    },
    {
      "phaseName": "Machine Learning Analysis",
      "status": "Completed",
      "results": {
        "anomalies": {
          "detected": 23,
          "highSeverity": 5,
          "mediumSeverity": 12,
          "lowSeverity": 6,
          "falsePositiveRate": 0.03
        },
        "predictions": {
          "cycleTimePrediction": {
            "accuracy": 0.89,
            "meanAbsoluteError": "0.3 hours"
          },
          "pathPrediction": {
            "accuracy": 0.92,
            "confidence": 0.87
          }
        },
        "patterns": {
          "frequentPatterns": 15,
          "rarePatterns": 8,
          "criticalPaths": 3
        }
      }
    }
  ],
  "artifacts": [
    {
      "name": "Process Mining Analysis Report",
      "type": "Report",
      "format": "PDF",
      "size": "3.2 MB",
      "downloadUrl": "https://api.mindzie.com/downloads/report-ff0e8400.pdf",
      "description": "Comprehensive analysis report with insights and recommendations"
    },
    {
      "name": "Process Model Visualization",
      "type": "Visualization",
      "format": "SVG",
      "size": "890 KB",
      "downloadUrl": "https://api.mindzie.com/downloads/process-map-ff0e8400.svg",
      "description": "Interactive process flow diagram"
    },
    {
      "name": "Anomaly Detection Results",
      "type": "Dataset",
      "format": "CSV",
      "size": "1.8 MB",
      "downloadUrl": "https://api.mindzie.com/downloads/anomalies-ff0e8400.csv",
      "description": "Detailed anomaly analysis with severity scores"
    },
    {
      "name": "Predictive Models",
      "type": "Model",
      "format": "PKL",
      "size": "45.7 MB",
      "downloadUrl": "https://api.mindzie.com/downloads/models-ff0e8400.zip",
      "description": "Trained ML models for cycle time and path prediction"
    }
  ],
  "performance": {
    "totalExecutionTime": "45 minutes",
    "resourceUtilization": {
      "averageCpuUsage": 72,
      "peakMemoryUsage": "7.8 GB",
      "totalCpuHours": 3.0,
      "totalCost": "$2.31"
    },
    "throughput": "4161 records/minute",
    "efficiency": 87.2
  },
  "recommendations": [
    {
      "category": "Process Optimization",
      "priority": "High",
      "recommendation": "Focus on reducing wait times in 'Manager Approval' activity",
      "expectedImprovement": "25% reduction in overall cycle time"
    },
    {
      "category": "Data Quality",
      "priority": "Medium",
      "recommendation": "Implement automated timestamp validation",
      "expectedImprovement": "Improved data quality score to 99.5%"
    }
  ]
}
```

## Register Webhook

**POST** `/api/{tenantId}/{projectId}/async/webhooks`

Registers a webhook endpoint to receive real-time notifications about asynchronous operations. Supports multiple event types and custom filtering.
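On the receiving side, the `hmac-sha256` authentication type means each delivery can be verified against the shared secret before the payload is trusted. A minimal sketch, assuming the signature arrives as `sha256=<hexdigest>` in a request header (the exact header name is not specified here and is an assumption):

```python
import hashlib
import hmac

def verify_webhook(body: bytes, signature_header: str, secret: str) -> bool:
    """Check an incoming webhook delivery against the shared HMAC-SHA256 secret.

    `signature_header` is assumed to carry "sha256=<hexdigest>" of the raw
    request body; compare_digest avoids timing side channels.
    """
    expected = hmac.new(secret.encode("utf-8"), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature_header, f"sha256={expected}")
```

Verify against the raw request bytes, not a re-serialized JSON object, since any whitespace difference changes the digest.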
### Request Body

```json
{
  "webhookUrl": "https://your-app.com/webhooks/async-operations",
  "webhookName": "Main Operations Webhook",
  "events": [
    "operation.started",
    "operation.progress",
    "operation.phase.completed",
    "operation.completed",
    "operation.failed",
    "operation.cancelled"
  ],
  "filters": {
    "operationTypes": ["ProcessMiningAnalysis", "DataEnrichment"],
    "priorities": ["High", "Critical"],
    "minProgressIncrement": 10
  },
  "authentication": {
    "type": "hmac-sha256",
    "secret": "your-webhook-secret-key"
  },
  "retryPolicy": {
    "maxRetries": 5,
    "retryDelay": 60,
    "backoffMultiplier": 2.0,
    "maxDelay": 3600
  },
  "headers": {
    "X-Source": "mindzie-api",
    "X-Environment": "production"
  }
}
```

### Response

```json
{
  "webhookId": "wh-123e8400-e29b-41d4-a716-446655440000",
  "webhookUrl": "https://your-app.com/webhooks/async-operations",
  "webhookName": "Main Operations Webhook",
  "status": "Active",
  "eventsSubscribed": [
    "operation.started",
    "operation.progress",
    "operation.phase.completed",
    "operation.completed",
    "operation.failed",
    "operation.cancelled"
  ],
  "filters": {
    "operationTypes": ["ProcessMiningAnalysis", "DataEnrichment"],
    "priorities": ["High", "Critical"],
    "minProgressIncrement": 10
  },
  "createdAt": "2024-01-20T11:45:00Z",
  "lastDelivery": null,
  "deliveryStats": {
    "totalDeliveries": 0,
    "successfulDeliveries": 0,
    "failedDeliveries": 0,
    "averageResponseTime": null
  }
}
```

## Retry Failed Operation

**POST** `/api/{tenantId}/{projectId}/async/operation/{operationId}/retry`

Retries a failed asynchronous operation with optional parameter modifications. Can resume from the point of failure or restart completely.
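The `retryPolicy` objects used throughout this API (`maxRetries`, `retryDelay`, `backoffMultiplier`, optional `maxDelay`) describe exponential backoff. A sketch of the implied wait schedule; the exact server-side formula is an assumption here, shown only to make the fields concrete:

```python
def backoff_delays(max_retries, retry_delay, backoff_multiplier, max_delay=None):
    """Yield the wait in seconds before each retry attempt.

    Assumed formula: delay_k = retry_delay * backoff_multiplier**k,
    capped at max_delay when one is given.
    """
    delay = retry_delay
    for _ in range(max_retries):
        yield delay if max_delay is None else min(delay, max_delay)
        delay *= backoff_multiplier
```

With the webhook example's policy (`maxRetries: 5`, `retryDelay: 60`, `backoffMultiplier: 2.0`, `maxDelay: 3600`), the waits would be 60, 120, 240, 480, and 960 seconds.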
### Request Body

```json
{
  "retryMode": "resume",
  "retryReason": "Infrastructure issue resolved, retrying with increased resources",
  "modifyParameters": true,
  "updatedParameters": {
    "algorithmSettings": {
      "useAdvancedML": true,
      "enableAnomalyDetection": true,
      "performanceOptimization": "high_throughput"
    },
    "resourceAllocation": {
      "cpuUnits": 6,
      "memoryGB": 12,
      "priority": "Critical"
    }
  },
  "retryPolicy": {
    "maxRetries": 2,
    "retryDelay": 180,
    "backoffMultiplier": 1.5
  },
  "newTimeout": 10800,
  "preserveOriginalResults": true
}
```

### Response

```json
{
  "originalOperationId": "op-ff0e8400-e29b-41d4-a716-446655440000",
  "newOperationId": "op-retry-ff0e8400-e29b-41d4-a716-446655440000",
  "retryMode": "resume",
  "retryNumber": 1,
  "resumeFromPhase": "Machine Learning Analysis",
  "status": "Initiated",
  "estimatedDuration": "20-25 minutes",
  "estimatedCompletion": "2024-01-20T12:30:00Z",
  "preservedResults": {
    "phasesPreserved": 2,
    "recordsProcessed": 187248,
    "progressSaved": 45
  },
  "resourcesAllocated": {
    "cpuUnits": 6,
    "memoryGB": 12,
    "estimatedCost": "$3.20"
  },
  "retryAttemptDate": "2024-01-20T12:05:00Z"
}
```

## Submit Batch Operations

**POST** `/api/{tenantId}/{projectId}/async/batch`

Submits multiple asynchronous operations as a batch with dependencies and coordination. Useful for complex workflows requiring multiple interconnected operations.
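Each batch operation names its prerequisites in `dependencies` by `operationKey`, so the server can only start an operation once everything it depends on has completed. The scheduling is server-side; as a purely illustrative sketch, the same dependency graph can be grouped into concurrent "waves" with Kahn-style topological sorting:

```python
def execution_waves(operations):
    """Group batch operations into waves that could run concurrently,
    honoring each operation's `dependencies` (listed by operationKey).

    Illustrative only -- the mindzie server performs the real scheduling.
    """
    keys = {op["operationKey"] for op in operations}
    pending = {op["operationKey"]: set(op.get("dependencies", [])) & keys
               for op in operations}
    waves = []
    while pending:
        # Operations with no unmet dependencies are ready to run.
        ready = sorted(k for k, deps in pending.items() if not deps)
        if not ready:
            raise ValueError("Cyclic or unresolvable dependency in batch")
        waves.append(ready)
        for k in ready:
            del pending[k]
        for deps in pending.values():
            deps -= set(ready)
    return waves
```

For the pipeline in the request example below, this yields three sequential waves: `data-prep`, then `discovery`, then `performance`.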
### Request Body

```json
{
  "batchName": "Monthly Process Mining Pipeline",
  "batchDescription": "Complete monthly analysis workflow with multiple datasets",
  "operations": [
    {
      "operationName": "Data Preparation",
      "operationType": "DataEnrichment",
      "priority": "High",
      "operationKey": "data-prep",
      "parameters": {
        "datasetId": "dataset-1",
        "cleaningRules": ["remove_duplicates", "fix_timestamps"],
        "outputFormat": "processed_csv"
      }
    },
    {
      "operationName": "Process Discovery",
      "operationType": "ProcessMiningAnalysis",
      "priority": "High",
      "operationKey": "discovery",
      "dependencies": ["data-prep"],
      "parameters": {
        "algorithm": "alpha_miner_enhanced",
        "enableVariantAnalysis": true
      }
    },
    {
      "operationName": "Performance Analysis",
      "operationType": "ProcessMiningAnalysis",
      "priority": "Normal",
      "operationKey": "performance",
      "dependencies": ["discovery"],
      "parameters": {
        "enableBottleneckDetection": true,
        "generateOptimizationRecommendations": true
      }
    }
  ],
  "batchCallbacks": {
    "onBatchStart": "https://your-app.com/webhooks/batch-start",
    "onOperationComplete": "https://your-app.com/webhooks/operation-complete",
    "onBatchComplete": "https://your-app.com/webhooks/batch-complete"
  },
  "failurePolicy": {
    "stopOnFirstFailure": false,
    "continueIndependentOperations": true,
    "retryFailedOperations": true
  }
}
```

### Response

```json
{
  "batchId": "batch-567e8400-e29b-41d4-a716-446655440000",
  "batchName": "Monthly Process Mining Pipeline",
  "status": "Initiated",
  "operations": [
    {
      "operationKey": "data-prep",
      "operationId": "op-prep-890e8400-e29b-41d4-a716-446655440000",
      "status": "Running",
      "dependencies": [],
      "estimatedDuration": "15 minutes"
    },
    {
      "operationKey": "discovery",
      "operationId": "op-disc-901e8400-e29b-41d4-a716-446655440000",
      "status": "Pending",
      "dependencies": ["data-prep"],
      "estimatedStartTime": "2024-01-20T12:20:00Z"
    },
    {
      "operationKey": "performance",
      "operationId": "op-perf-012e8400-e29b-41d4-a716-446655440000",
      "status": "Pending",
      "dependencies": ["discovery"],
      "estimatedStartTime": "2024-01-20T12:45:00Z"
    }
  ],
  "totalOperations": 3,
  "estimatedBatchDuration": "75-90 minutes",
  "estimatedBatchCompletion": "2024-01-20T13:45:00Z",
  "batchStartTime": "2024-01-20T12:05:00Z",
  "trackingUrl": "/api/{tenantId}/{projectId}/async/batch/batch-567e8400-e29b-41d4-a716-446655440000"
}
```

## Example: Complete Async Operation Workflow

This example demonstrates the full lifecycle of asynchronous operations:

```javascript
// 1. Register webhook for real-time notifications
const registerWebhook = async () => {
  const response = await fetch('/api/{tenantId}/{projectId}/async/webhooks', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`
    },
    body: JSON.stringify({
      webhookUrl: 'https://your-app.com/webhooks/async-operations',
      webhookName: 'Process Mining Webhook',
      events: [
        'operation.started',
        'operation.progress',
        'operation.completed',
        'operation.failed'
      ],
      filters: {
        operationTypes: ['ProcessMiningAnalysis'],
        priorities: ['High', 'Critical'],
        minProgressIncrement: 15
      },
      authentication: {
        type: 'hmac-sha256',
        secret: 'your-secret-key'
      }
    })
  });
  return await response.json();
};

// 2. Start a complex async operation
const startAsyncAnalysis = async () => {
  const response = await fetch('/api/{tenantId}/{projectId}/async/operation', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`
    },
    body: JSON.stringify({
      operationType: 'ProcessMiningAnalysis',
      operationName: 'Advanced Customer Journey Analysis',
      operationDescription: 'Deep ML-powered analysis with anomaly detection',
      priority: 'High',
      parameters: {
        datasetId: '880e8400-e29b-41d4-a716-446655440000',
        analysisType: 'comprehensive',
        timeWindow: {
          startDate: '2024-01-01',
          endDate: '2024-01-31'
        },
        algorithmSettings: {
          useAdvancedML: true,
          enableAnomalyDetection: true,
          performanceOptimization: 'high_accuracy'
        },
        outputOptions: {
          generateReports: true,
          createVisualizations: true,
          exportFormats: ['PDF', 'CSV', 'JSON']
        }
      },
      callbacks: {
        onProgress: 'https://your-app.com/webhooks/progress',
        onCompletion: 'https://your-app.com/webhooks/completion',
        onError: 'https://your-app.com/webhooks/error'
      },
      notifications: {
        email: ['analyst@company.com'],
        slack: {
          channel: '#process-mining',
          mentionUsers: ['@analyst']
        }
      },
      timeout: 7200
    })
  });
  return await response.json();
};

// 3. Monitor operation progress
const monitorOperation = async (operationId) => {
  const checkStatus = async () => {
    const response = await fetch(`/api/{tenantId}/{projectId}/async/operation/${operationId}`, {
      headers: { 'Authorization': `Bearer ${token}` }
    });
    const operation = await response.json();

    console.log(`Operation ${operationId}: ${operation.status} (${operation.progress.percentage}%)`);
    console.log(`Current phase: ${operation.progress.currentPhase}`);
    console.log(`ETA: ${operation.progress.estimatedCompletion}`);

    if (operation.status === 'Running') {
      setTimeout(() => checkStatus(), 60000); // Check every minute
    } else if (operation.status === 'Completed') {
      console.log('Operation completed successfully!');
      await getOperationResults(operationId);
    } else if (operation.status === 'Failed') {
      console.log('Operation failed, attempting retry...');
      await retryOperation(operationId);
    }
  };
  await checkStatus();
};

// 4. Get operation results
const getOperationResults = async (operationId) => {
  const response = await fetch(`/api/{tenantId}/{projectId}/async/operation/${operationId}/results?format=detailed&includeArtifacts=true`, {
    headers: { 'Authorization': `Bearer ${token}` }
  });
  const results = await response.json();

  console.log('Operation Results:', results.summary);
  console.log('Generated Artifacts:');
  results.artifacts.forEach(artifact => {
    console.log(`- ${artifact.name} (${artifact.format}): ${artifact.downloadUrl}`);
  });
  return results;
};

// 5. Retry failed operation
const retryOperation = async (operationId) => {
  const response = await fetch(`/api/{tenantId}/{projectId}/async/operation/${operationId}/retry`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`
    },
    body: JSON.stringify({
      retryMode: 'resume',
      retryReason: 'Automatic retry with increased resources',
      modifyParameters: true,
      updatedParameters: {
        resourceAllocation: {
          cpuUnits: 6,
          memoryGB: 12,
          priority: 'Critical'
        }
      },
      newTimeout: 10800
    })
  });
  const retryResult = await response.json();
  console.log(`Retry operation started: ${retryResult.newOperationId}`);

  // Monitor the retry operation
  await monitorOperation(retryResult.newOperationId);
  return retryResult;
};

// 6. Submit batch operations
const submitBatchOperations = async () => {
  const response = await fetch('/api/{tenantId}/{projectId}/async/batch', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`
    },
    body: JSON.stringify({
      batchName: 'Complete Process Mining Pipeline',
      batchDescription: 'End-to-end analysis with data prep and reporting',
      operations: [
        {
          operationName: 'Data Cleaning',
          operationType: 'DataEnrichment',
          priority: 'High',
          operationKey: 'clean',
          parameters: {
            datasetId: 'raw-dataset-123',
            cleaningRules: ['remove_duplicates', 'fix_timestamps', 'validate_activities']
          }
        },
        {
          operationName: 'Process Analysis',
          operationType: 'ProcessMiningAnalysis',
          priority: 'High',
          operationKey: 'analyze',
          dependencies: ['clean'],
          parameters: {
            analysisType: 'comprehensive',
            enableML: true,
            generateInsights: true
          }
        },
        {
          operationName: 'Report Generation',
          operationType: 'ReportGeneration',
          priority: 'Normal',
          operationKey: 'report',
          dependencies: ['analyze'],
          parameters: {
            reportType: 'executive_summary',
            includeVisualizations: true,
            exportFormats: ['PDF', 'PowerPoint']
          }
        }
      ],
      failurePolicy: {
        stopOnFirstFailure: false,
        continueIndependentOperations: true,
        retryFailedOperations: true
      }
    })
  });
  return await response.json();
};

// Execute complete async workflow
const runAsyncWorkflow = async () => {
  try {
    console.log('Starting async operation workflow...');

    // Register webhook
    const webhook = await registerWebhook();
    console.log(`Webhook registered: ${webhook.webhookId}`);

    // Start operation
    const operation = await startAsyncAnalysis();
    console.log(`Operation started: ${operation.operationId}`);
    console.log(`Estimated completion: ${operation.estimatedCompletion}`);

    // Monitor progress
    await monitorOperation(operation.operationId);
  } catch (error) {
    console.error('Async workflow failed:', error);
  }
};

// Start the workflow
runAsyncWorkflow();
```

## Python Example

```python
import requests
import time
import json
import hmac
import hashlib
from datetime import datetime, timedelta

class AsyncOperationManager:
    def __init__(self, base_url, tenant_id, project_id, token):
        self.base_url = base_url
        self.tenant_id = tenant_id
        self.project_id = project_id
        self.headers = {
            'Authorization': f'Bearer {token}',
            'Content-Type': 'application/json'
        }

    def start_operation(self, operation_type, name, parameters, priority='Normal', timeout=3600):
        """Start an asynchronous operation"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/async/operation"
        payload = {
            'operationType': operation_type,
            'operationName': name,
            'priority': priority,
            'parameters': parameters,
            'timeout': timeout,
            'callbacks': {
                'onProgress': 'https://your-app.com/webhooks/progress',
                'onCompletion': 'https://your-app.com/webhooks/completion',
                'onError': 'https://your-app.com/webhooks/error'
            }
        }
        response = requests.post(url, json=payload, headers=self.headers)
        return response.json()

    def get_operation_status(self, operation_id):
        """Get current operation status"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/async/operation/{operation_id}"
        response = requests.get(url, headers=self.headers)
        return response.json()

    def list_operations(self, status=None, operation_type=None, page=1,
page_size=20): """List async operations with filtering""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/async/operations" params = {'page': page, 'pageSize': page_size} if status: params['status'] = status if operation_type: params['operationType'] = operation_type response = requests.get(url, params=params, headers=self.headers) return response.json() def cancel_operation(self, operation_id, reason="User cancellation"): """Cancel a running operation""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/async/operation/{operation_id}" payload = { 'reason': reason, 'preservePartialResults': True, 'notifyCallbacks': True } response = requests.delete(url, json=payload, headers=self.headers) return response.json() def get_operation_results(self, operation_id, format_type='detailed', include_artifacts=True): """Get operation results""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/async/operation/{operation_id}/results" params = { 'format': format_type, 'includeArtifacts': str(include_artifacts).lower() } response = requests.get(url, params=params, headers=self.headers) return response.json() def retry_operation(self, operation_id, retry_mode='resume', updated_params=None): """Retry a failed operation""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/async/operation/{operation_id}/retry" payload = { 'retryMode': retry_mode, 'retryReason': 'Automatic retry with optimization', 'modifyParameters': updated_params is not None } if updated_params: payload['updatedParameters'] = updated_params response = requests.post(url, json=payload, headers=self.headers) return response.json() def register_webhook(self, webhook_url, events, filters=None): """Register webhook for operation notifications""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/async/webhooks" payload = { 'webhookUrl': webhook_url, 'webhookName': 'Python SDK Webhook', 'events': events, 'filters': filters or {}, 'authentication': { 'type': 
'hmac-sha256', 'secret': 'your-webhook-secret' } } response = requests.post(url, json=payload, headers=self.headers) return response.json() def submit_batch_operations(self, batch_name, operations, failure_policy=None): """Submit multiple operations as a batch""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/async/batch" payload = { 'batchName': batch_name, 'operations': operations, 'failurePolicy': failure_policy or { 'stopOnFirstFailure': False, 'continueIndependentOperations': True, 'retryFailedOperations': True } } response = requests.post(url, json=payload, headers=self.headers) return response.json() def wait_for_completion(self, operation_id, check_interval=60, timeout=7200): """Wait for operation to complete with periodic status checks""" start_time = time.time() while time.time() - start_time < timeout: operation = self.get_operation_status(operation_id) status = operation['status'] progress = operation['progress']['percentage'] print(f"Operation {operation_id}: {status} ({progress}%)") if operation['progress']['currentPhase']: print(f" Current phase: {operation['progress']['currentPhase']}") if status == 'Completed': print("Operation completed successfully!") return operation elif status in ['Failed', 'Cancelled', 'Timeout']: print(f"Operation ended with status: {status}") return operation time.sleep(check_interval) raise TimeoutError(f"Operation {operation_id} did not complete within {timeout} seconds") def verify_webhook_signature(self, payload, signature, secret): """Verify webhook signature for security""" expected_signature = hmac.new( secret.encode('utf-8'), payload.encode('utf-8'), hashlib.sha256 ).hexdigest() return hmac.compare_digest(signature, f"sha256={expected_signature}") # Usage example manager = AsyncOperationManager( 'https://your-mindzie-instance.com', 'tenant-guid', 'project-guid', 'your-auth-token' ) try: # Register webhook for notifications webhook = manager.register_webhook( 'https://your-app.com/webhooks/async', 
['operation.started', 'operation.progress', 'operation.completed', 'operation.failed'], {'operationTypes': ['ProcessMiningAnalysis'], 'priorities': ['High', 'Critical']} ) print(f"Webhook registered: {webhook['webhookId']}") # Start comprehensive process mining operation operation_params = { 'datasetId': 'dataset-guid', 'analysisType': 'comprehensive', 'timeWindow': { 'startDate': '2024-01-01', 'endDate': '2024-01-31' }, 'algorithmSettings': { 'useAdvancedML': True, 'enableAnomalyDetection': True, 'performanceOptimization': 'high_accuracy' }, 'outputOptions': { 'generateReports': True, 'createVisualizations': True, 'exportFormats': ['PDF', 'CSV', 'JSON'] } } operation = manager.start_operation( 'ProcessMiningAnalysis', 'Advanced Customer Journey Analysis', operation_params, 'High', 7200 ) print(f"Operation started: {operation['operationId']}") print(f"Estimated duration: {operation['estimatedDuration']}") print(f"Estimated completion: {operation['estimatedCompletion']}") # Wait for completion final_operation = manager.wait_for_completion(operation['operationId']) if final_operation['status'] == 'Completed': # Get detailed results results = manager.get_operation_results(operation['operationId']) print("Operation completed successfully!") print(f"Records analyzed: {results['summary']['recordsAnalyzed']:,}") print(f"Process variants: {results['summary']['processVariants']}") print(f"Anomalies detected: {results['summary']['anomaliesDetected']}") print(f"Data quality score: {results['summary']['dataQualityScore']}") print("\nGenerated artifacts:") for artifact in results['artifacts']: print(f"- {artifact['name']} ({artifact['format']}): {artifact['downloadUrl']}") print("\nRecommendations:") for rec in results['recommendations']: print(f"- {rec['category']} ({rec['priority']}): {rec['recommendation']}") else: print(f"Operation failed with status: {final_operation['status']}") # Try to retry if failed if final_operation['status'] == 'Failed': print("Attempting to retry 
operation...") retry_result = manager.retry_operation( operation['operationId'], 'resume', {'resourceAllocation': {'cpuUnits': 6, 'memoryGB': 12}} ) print(f"Retry operation started: {retry_result['newOperationId']}") except Exception as e: print(f"Error in async operation workflow: {e}") ``` --- Manage Execution Queue View and manage the job execution queue, set priorities, and control job scheduling. ## Get Queue Status **GET** `/api/{tenantId}/{projectId}/execution/queue` Retrieves the current status of the execution queue including queued jobs, their priorities, and estimated processing times. ### Parameters | Parameter | Type | Location | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Path | The tenant identifier | | `projectId` | GUID | Path | The project identifier | ### Query Parameters | Parameter | Type | Description | |-----------|------|-------------| | `priority` | string | Filter by priority: Critical, High, Normal, Low | | `jobType` | string | Filter by job type: ProcessMining, DataEnrichment, Notebook, Analysis | | `includeEstimates` | boolean | Include detailed timing estimates (default: true) | ### Response ```json { "queueStatus": "Active", "timestamp": "2024-01-20T10:45:00Z", "summary": { "totalQueuedJobs": 23, "criticalPriorityJobs": 2, "highPriorityJobs": 7, "normalPriorityJobs": 12, "lowPriorityJobs": 2, "averageWaitTime": "8.5 minutes", "estimatedProcessingTime": "47 minutes" }, "processingCapacity": { "activeWorkers": 4, "totalWorkers": 6, "currentLoad": 67, "maxConcurrentJobs": 8, "currentlyRunning": 3 }, "queuedJobs": [ { "jobId": "ff0e8400-e29b-41d4-a716-446655440000", "jobName": "Customer Analytics Pipeline", "jobType": "ProcessMining", "priority": "Critical", "queuePosition": 1, "estimatedStartTime": "2024-01-20T10:47:00Z", "estimatedDuration": "12-15 minutes", "submittedBy": "user456", "dateSubmitted": "2024-01-20T10:44:00Z", "resourceRequirements": { "cpuUnits": 2, "memoryGB": 4, 
"estimatedDiskUsage": "1.2 GB" } }, { "jobId": "00fe8400-e29b-41d4-a716-446655440000", "jobName": "Daily Sales Analysis", "jobType": "DataEnrichment", "priority": "High", "queuePosition": 2, "estimatedStartTime": "2024-01-20T11:02:00Z", "estimatedDuration": "8-10 minutes", "submittedBy": "system", "dateSubmitted": "2024-01-20T10:30:00Z", "resourceRequirements": { "cpuUnits": 1, "memoryGB": 2, "estimatedDiskUsage": "500 MB" } } ], "performanceMetrics": { "averageJobDuration": "16.3 minutes", "throughputLastHour": 12, "queueTrends": { "currentHourSubmissions": 8, "peakHourToday": "09:00-10:00", "averageQueueSize": 15.7 } } } ``` ## Get Jobs by Priority **GET** `/api/{tenantId}/{projectId}/execution/queue/priority/{priority}` Retrieves jobs in the queue filtered by specific priority level with detailed position and timing information. ### Parameters | Parameter | Type | Location | Description | |-----------|------|----------|-------------| | `priority` | string | Path | Priority level: Critical, High, Normal, Low | ### Response ```json { "priority": "High", "jobCount": 7, "averageWaitTime": "6.2 minutes", "estimatedProcessingTime": "31 minutes", "jobs": [ { "jobId": "00fe8400-e29b-41d4-a716-446655440000", "jobName": "Daily Sales Analysis", "jobType": "DataEnrichment", "queuePosition": 2, "overallQueuePosition": 3, "estimatedStartTime": "2024-01-20T11:02:00Z", "estimatedCompletion": "2024-01-20T11:12:00Z", "submittedBy": "system", "dateSubmitted": "2024-01-20T10:30:00Z", "waitTime": "15 minutes", "dependencies": [], "resourceRequirements": { "cpuUnits": 1, "memoryGB": 2, "estimatedDiskUsage": "500 MB" } } ] } ``` ## Change Job Priority **PUT** `/api/{tenantId}/{projectId}/execution/queue/job/{jobId}/priority` Updates the priority of a queued job, which may change its position in the queue and estimated start time. 
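A minimal Python sketch of this call using only the standard library; the base URL, IDs, and token are placeholders you must supply:

```python
import json
import urllib.request

def build_priority_request(base_url, tenant_id, project_id, job_id, token,
                           new_priority, reason, notify_user=True):
    """Build the PUT request for the priority-change endpoint (sketch)."""
    url = (f"{base_url}/api/{tenant_id}/{project_id}"
           f"/execution/queue/job/{job_id}/priority")
    body = json.dumps({
        "newPriority": new_priority,
        "reason": reason,
        "notifyUser": notify_user,
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        method="PUT",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# Against a live instance (placeholder values):
# req = build_priority_request("https://your-mindzie-instance.com",
#                              "tenant-guid", "project-guid", "job-guid",
#                              "your-auth-token", "Critical", "Urgent analysis")
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
```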
### Request Body ```json { "newPriority": "Critical", "reason": "Business critical analysis required urgently", "notifyUser": true } ``` ### Response ```json { "jobId": "00fe8400-e29b-41d4-a716-446655440000", "previousPriority": "High", "newPriority": "Critical", "previousQueuePosition": 3, "newQueuePosition": 1, "previousEstimatedStart": "2024-01-20T11:02:00Z", "newEstimatedStart": "2024-01-20T10:47:00Z", "timeSaved": "15 minutes", "updatedBy": "user123", "updateTime": "2024-01-20T10:46:00Z" } ``` ## Move Job Position **PUT** `/api/{tenantId}/{projectId}/execution/queue/job/{jobId}/position` Manually adjusts a job's position within its priority tier. Position changes are limited to the same priority level. ### Request Body ```json { "newPosition": 1, "reason": "Dependencies resolved, can execute earlier", "respectPriorityBoundaries": true } ``` ### Response ```json { "jobId": "00fe8400-e29b-41d4-a716-446655440000", "priority": "High", "previousPosition": 3, "newPosition": 1, "previousEstimatedStart": "2024-01-20T11:02:00Z", "newEstimatedStart": "2024-01-20T10:55:00Z", "affectedJobs": [ { "jobId": "11fe8400-e29b-41d4-a716-446655440000", "newPosition": 2, "newEstimatedStart": "2024-01-20T11:05:00Z" } ], "updateTime": "2024-01-20T10:46:00Z" } ``` ## Control Queue Processing **POST** `/api/{tenantId}/{projectId}/execution/queue/control` Pauses or resumes queue processing for maintenance or emergency situations. Running jobs continue but no new jobs will start when paused. 
### Request Body ```json { "action": "pause", "reason": "System maintenance window", "duration": 30, "allowRunningJobsToComplete": true, "notifyUsers": true, "scheduledResume": "2024-01-20T12:00:00Z" } ``` ### Response ```json { "action": "pause", "status": "Paused", "pausedAt": "2024-01-20T10:47:00Z", "scheduledResume": "2024-01-20T12:00:00Z", "affectedJobs": 23, "runningJobsCount": 3, "estimatedDelayMinutes": 30, "reason": "System maintenance window", "pausedBy": "admin123" } ``` ## Get Queue History **GET** `/api/{tenantId}/{projectId}/execution/queue/history` Retrieves historical queue performance data and metrics for analysis and optimization purposes. ### Query Parameters | Parameter | Type | Description | |-----------|------|-------------| | `dateFrom` | datetime | Start date for historical data | | `dateTo` | datetime | End date for historical data | | `aggregation` | string | Data aggregation level: hour, day, week (default: hour) | | `metrics` | string | Comma-separated metrics: queue_size, wait_time, throughput, efficiency | ### Response ```json { "period": { "startDate": "2024-01-19T00:00:00Z", "endDate": "2024-01-20T10:47:00Z", "aggregation": "hour" }, "summary": { "totalJobsProcessed": 847, "averageQueueSize": 12.3, "averageWaitTime": "7.8 minutes", "peakQueueSize": 45, "peakWaitTime": "23 minutes", "throughputPerHour": 24.8, "efficiency": 87.2 }, "hourlyData": [ { "timestamp": "2024-01-20T09:00:00Z", "queueSize": { "average": 18, "peak": 25, "minimum": 8 }, "waitTime": { "average": "9.5 minutes", "maximum": "18 minutes", "minimum": "2 minutes" }, "throughput": { "jobsCompleted": 28, "jobsSubmitted": 31, "efficiency": 89.3 }, "priorityDistribution": { "critical": 2, "high": 8, "normal": 14, "low": 1 } } ], "trends": { "queueSizeGrowth": -2.3, "waitTimeImprovement": 5.7, "throughputIncrease": 12.1, "efficiencyChange": 3.4 }, "bottlenecks": [ { "timeframe": "2024-01-20T08:30:00Z - 2024-01-20T09:15:00Z", "issue": "High memory usage jobs accumulated", 
"impact": "15 minute delay", "resolution": "Additional worker allocated" } ] } ``` ## Cancel User's Queued Jobs **DELETE** `/api/{tenantId}/{projectId}/execution/queue/user/{userId}` Cancels all queued jobs submitted by a specific user. Running jobs by the user will continue to completion. ### Request Body (Optional) ```json { "reason": "User account deactivated", "notifyUser": false, "cancelJobTypes": ["ProcessMining", "DataEnrichment"], "excludeJobIds": ["important-job-id-1", "important-job-id-2"] } ``` ### Response ```json { "userId": "user123", "cancelledJobsCount": 5, "preservedJobsCount": 2, "cancelledJobs": [ { "jobId": "job1-guid", "jobName": "Weekly Analysis", "priority": "Normal", "queuePosition": 8 } ], "preservedJobs": [ { "jobId": "important-job-id-1", "jobName": "Critical Business Report", "reason": "Explicitly excluded" } ], "cancelledAt": "2024-01-20T10:47:00Z", "cancelledBy": "admin123" } ``` ## Get Queue Predictions **GET** `/api/{tenantId}/{projectId}/execution/queue/predictions` Provides AI-powered predictions for queue behavior, optimal submission times, and resource allocation recommendations. 
### Query Parameters | Parameter | Type | Description | |-----------|------|-------------| | `horizon` | integer | Prediction horizon in hours (1-24, default: 4) | | `jobType` | string | Predict for specific job type | | `includeRecommendations` | boolean | Include optimization recommendations (default: true) | ### Response ```json { "predictionTime": "2024-01-20T10:47:00Z", "horizon": 4, "predictions": { "queueSizeProjection": [ { "time": "2024-01-20T11:00:00Z", "expectedQueueSize": 18, "confidence": 0.87 }, { "time": "2024-01-20T12:00:00Z", "expectedQueueSize": 12, "confidence": 0.82 } ], "waitTimeProjection": [ { "time": "2024-01-20T11:00:00Z", "averageWaitTime": "6.5 minutes", "confidence": 0.85 } ], "resourceUtilization": [ { "time": "2024-01-20T11:00:00Z", "cpuUtilization": 78, "memoryUtilization": 65, "efficiency": 89.2 } ] }, "recommendations": { "optimalSubmissionTimes": [ { "timeWindow": "2024-01-20T13:00:00Z - 2024-01-20T15:00:00Z", "expectedWaitTime": "3-5 minutes", "reason": "Low queue activity period" } ], "resourceOptimization": [ { "recommendation": "Add 1 additional worker node", "expectedImprovement": "25% reduction in wait times", "cost": "Low", "priority": "Medium" } ], "jobScheduling": [ { "jobType": "ProcessMining", "recommendation": "Schedule during off-peak hours (14:00-16:00)", "reason": "Memory-intensive jobs perform better with less contention" } ] }, "modelInfo": { "modelVersion": "2.1.3", "lastTrained": "2024-01-19T02:00:00Z", "accuracy": 0.84, "dataPoints": 10080 } } ``` ## Example: Queue Management Workflow This example demonstrates monitoring and managing the job queue: ```javascript // 1. 
Get current queue status const getQueueStatus = async () => { const response = await fetch(`/api/${tenantId}/${projectId}/execution/queue?includeEstimates=true`, { headers: { 'Authorization': `Bearer ${token}` } }); const queue = await response.json(); console.log(`Queue Status: ${queue.queueStatus}`); console.log(`Total jobs: ${queue.summary.totalQueuedJobs}`); console.log(`Average wait time: ${queue.summary.averageWaitTime}`); return queue; }; // 2. Change job priority if needed const updateJobPriority = async (jobId, newPriority, reason) => { const response = await fetch(`/api/${tenantId}/${projectId}/execution/queue/job/${jobId}/priority`, { method: 'PUT', headers: { 'Content-Type': 'application/json', 'Authorization': `Bearer ${token}` }, body: JSON.stringify({ newPriority: newPriority, reason: reason, notifyUser: true }) }); const result = await response.json(); console.log(`Job ${jobId} priority changed from ${result.previousPriority} to ${result.newPriority}`); console.log(`New position: ${result.newQueuePosition} (was ${result.previousQueuePosition})`); console.log(`Time saved: ${result.timeSaved}`); return result; }; // 3. Get queue predictions for optimization const getQueuePredictions = async () => { const response = await fetch(`/api/${tenantId}/${projectId}/execution/queue/predictions?horizon=4&includeRecommendations=true`, { headers: { 'Authorization': `Bearer ${token}` } }); const predictions = await response.json(); console.log('Queue Predictions:'); predictions.predictions.queueSizeProjection.forEach(prediction => { console.log(` ${prediction.time}: ${prediction.expectedQueueSize} jobs (${Math.round(prediction.confidence * 100)}% confidence)`); }); console.log('Recommendations:'); predictions.recommendations.optimalSubmissionTimes.forEach(rec => { console.log(` Submit during: ${rec.timeWindow} (${rec.expectedWaitTime} wait)`); }); return predictions; }; // 4.
Monitor queue for specific job const monitorJobInQueue = async (jobId) => { const checkQueue = async () => { const queue = await getQueueStatus(); const job = queue.queuedJobs.find(j => j.jobId === jobId); if (job) { console.log(`Job ${jobId} is at position ${job.queuePosition}`); console.log(`Estimated start: ${job.estimatedStartTime}`); console.log(`Estimated duration: ${job.estimatedDuration}`); // Check again in 2 minutes setTimeout(() => checkQueue(), 120000); } else { console.log(`Job ${jobId} is no longer in queue (likely started or cancelled)`); } }; await checkQueue(); }; // 5. Emergency queue management const pauseQueue = async (reason, duration) => { const response = await fetch(`/api/${tenantId}/${projectId}/execution/queue/control`, { method: 'POST', headers: { 'Content-Type': 'application/json', 'Authorization': `Bearer ${token}` }, body: JSON.stringify({ action: 'pause', reason: reason, duration: duration, allowRunningJobsToComplete: true, notifyUsers: true }) }); const result = await response.json(); console.log(`Queue paused: ${result.status}`); console.log(`${result.affectedJobs} jobs affected`); console.log(`Estimated delay: ${result.estimatedDelayMinutes} minutes`); return result; }; // Execute queue management workflow getQueueStatus() .then(queue => { console.log('Current queue status retrieved'); // Check if queue is getting long if (queue.summary.totalQueuedJobs > 30) { console.log('Queue is getting long, checking predictions...'); return getQueuePredictions(); } return null; }) .then(predictions => { if (predictions) { console.log('Queue predictions retrieved'); // If predictions show continued growth, consider resource optimization const futureQueueSize = predictions.predictions.queueSizeProjection[predictions.predictions.queueSizeProjection.length - 1]; if (futureQueueSize.expectedQueueSize > 25) { console.log('Consider implementing resource optimization recommendations'); predictions.recommendations.resourceOptimization.forEach(rec => {
console.log(`- ${rec.recommendation}: ${rec.expectedImprovement}`); }); } } }) .catch(error => console.error('Queue management failed:', error)); ``` ## Python Example ```python import requests import time import json from datetime import datetime, timedelta class QueueManager: def __init__(self, base_url, tenant_id, project_id, token): self.base_url = base_url self.tenant_id = tenant_id self.project_id = project_id self.headers = { 'Authorization': f'Bearer {token}', 'Content-Type': 'application/json' } def get_queue_status(self, priority=None, job_type=None, include_estimates=True): """Get current queue status""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/queue" params = {'includeEstimates': str(include_estimates).lower()} if priority: params['priority'] = priority if job_type: params['jobType'] = job_type response = requests.get(url, params=params, headers=self.headers) return response.json() def get_jobs_by_priority(self, priority): """Get jobs filtered by priority level""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/queue/priority/{priority}" response = requests.get(url, headers=self.headers) return response.json() def change_job_priority(self, job_id, new_priority, reason, notify_user=True): """Change priority of a queued job""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/queue/job/{job_id}/priority" payload = { 'newPriority': new_priority, 'reason': reason, 'notifyUser': notify_user } response = requests.put(url, json=payload, headers=self.headers) return response.json() def move_job_position(self, job_id, new_position, reason): """Move job to new position within its priority tier""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/queue/job/{job_id}/position" payload = { 'newPosition': new_position, 'reason': reason, 'respectPriorityBoundaries': True } response = requests.put(url, json=payload, headers=self.headers) return response.json() def 
control_queue(self, action, reason, duration=None, scheduled_resume=None): """Pause or resume queue processing""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/queue/control" payload = { 'action': action, 'reason': reason, 'allowRunningJobsToComplete': True, 'notifyUsers': True } if duration: payload['duration'] = duration if scheduled_resume: payload['scheduledResume'] = scheduled_resume.isoformat() response = requests.post(url, json=payload, headers=self.headers) return response.json() def get_queue_history(self, date_from, date_to, aggregation='hour', metrics=None): """Get historical queue performance data""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/queue/history" params = { 'dateFrom': date_from.isoformat(), 'dateTo': date_to.isoformat(), 'aggregation': aggregation } if metrics: params['metrics'] = ','.join(metrics) response = requests.get(url, params=params, headers=self.headers) return response.json() def cancel_user_jobs(self, user_id, reason, job_types=None, exclude_job_ids=None): """Cancel all queued jobs for a specific user""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/queue/user/{user_id}" payload = { 'reason': reason, 'notifyUser': False } if job_types: payload['cancelJobTypes'] = job_types if exclude_job_ids: payload['excludeJobIds'] = exclude_job_ids response = requests.delete(url, json=payload, headers=self.headers) return response.json() def get_queue_predictions(self, horizon=4, job_type=None, include_recommendations=True): """Get AI-powered queue predictions""" url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/queue/predictions" params = { 'horizon': horizon, 'includeRecommendations': str(include_recommendations).lower() } if job_type: params['jobType'] = job_type response = requests.get(url, params=params, headers=self.headers) return response.json() def monitor_queue_health(self, alert_threshold=30, check_interval=300): """Continuously 
monitor queue health and alert on issues""" while True: try: queue_status = self.get_queue_status() total_jobs = queue_status['summary']['totalQueuedJobs'] avg_wait = queue_status['summary']['averageWaitTime'] print(f"Queue Health Check: {total_jobs} jobs, avg wait: {avg_wait}") if total_jobs > alert_threshold: print(f"ALERT: Queue size ({total_jobs}) exceeds threshold ({alert_threshold})") # Get predictions to understand if this will improve predictions = self.get_queue_predictions() future_size = predictions['predictions']['queueSizeProjection'][-1]['expectedQueueSize'] if future_size > total_jobs: print("WARNING: Queue expected to grow further") print("Resource optimization recommendations:") for rec in predictions['recommendations']['resourceOptimization']: print(f" - {rec['recommendation']}: {rec['expectedImprovement']}") time.sleep(check_interval) except Exception as e: print(f"Queue monitoring error: {e}") time.sleep(60) # Usage example manager = QueueManager( 'https://your-mindzie-instance.com', 'tenant-guid', 'project-guid', 'your-auth-token' ) try: # Get comprehensive queue status queue_status = manager.get_queue_status(include_estimates=True) print(f"Queue Status: {queue_status['queueStatus']}") print(f"Total jobs in queue: {queue_status['summary']['totalQueuedJobs']}") print(f"Average wait time: {queue_status['summary']['averageWaitTime']}") print(f"Processing capacity: {queue_status['processingCapacity']['currentLoad']}%") # Check high priority jobs specifically high_priority_jobs = manager.get_jobs_by_priority('High') print(f"High priority jobs: {high_priority_jobs['jobCount']}") # Get queue predictions for the next 4 hours predictions = manager.get_queue_predictions(horizon=4) print("Queue predictions:") for pred in predictions['predictions']['queueSizeProjection']: confidence_pct = round(pred['confidence'] * 100) print(f" {pred['time']}: {pred['expectedQueueSize']} jobs ({confidence_pct}% confidence)") # Check recommendations if 
predictions['recommendations']['optimalSubmissionTimes']: print("Optimal submission times:") for rec in predictions['recommendations']['optimalSubmissionTimes']: print(f" {rec['timeWindow']}: {rec['expectedWaitTime']} wait time") # Example: Elevate a job priority if needed if queue_status['summary']['totalQueuedJobs'] > 20: # Find a normal priority job to elevate normal_jobs = [job for job in queue_status['queuedJobs'] if job['priority'] == 'Normal'] if normal_jobs: job_to_elevate = normal_jobs[0] result = manager.change_job_priority( job_to_elevate['jobId'], 'High', 'Queue congestion - elevating business critical job' ) print(f"Elevated job {job_to_elevate['jobName']} to High priority") print(f"New position: {result['newQueuePosition']} (was {result['previousQueuePosition']})") # Get queue history for analysis history = manager.get_queue_history( datetime.now() - timedelta(hours=24), datetime.now(), 'hour', ['queue_size', 'wait_time', 'throughput'] ) print(f"24h summary: {history['summary']['totalJobsProcessed']} jobs processed") print(f"Peak queue size: {history['summary']['peakQueueSize']}") print(f"Average throughput: {history['summary']['throughputPerHour']} jobs/hour") # If there are bottlenecks, report them if history['bottlenecks']: print("Recent bottlenecks:") for bottleneck in history['bottlenecks']: print(f" {bottleneck['timeframe']}: {bottleneck['issue']} (Impact: {bottleneck['impact']})") except Exception as e: print(f"Error in queue management: {e}") ``` --- # Monitor Job Progress Track job execution status, monitor progress, and retrieve detailed execution logs. ## Get Job Execution Logs **GET** `/api/{tenantId}/{projectId}/execution/job/{jobId}/logs` Retrieves detailed execution logs for a specific job, including progress updates, error messages, and performance metrics.
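A minimal standard-library sketch of fetching error-level logs; the base URL, IDs, and token are placeholders:

```python
import json
import urllib.parse
import urllib.request

def build_logs_request(base_url, tenant_id, project_id, job_id, token,
                       level="ERROR", limit=100):
    """Build a GET request for the job-logs endpoint, filtered by log level."""
    query = urllib.parse.urlencode({"level": level, "limit": limit})
    url = (f"{base_url}/api/{tenant_id}/{project_id}"
           f"/execution/job/{job_id}/logs?{query}")
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"})

# Against a live instance (fill in real values):
# with urllib.request.urlopen(build_logs_request(...)) as resp:
#     for entry in json.load(resp)["logs"]:
#         print(entry["timestamp"], entry["level"], entry["message"])
```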
### Parameters | Parameter | Type | Location | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Path | The tenant identifier | | `projectId` | GUID | Path | The project identifier | | `jobId` | GUID | Path | The job identifier | ### Query Parameters | Parameter | Type | Description | |-----------|------|-------------| | `level` | string | Filter by log level: DEBUG, INFO, WARN, ERROR (default: INFO) | | `fromTime` | datetime | Get logs from this timestamp | | `toTime` | datetime | Get logs until this timestamp | | `limit` | integer | Maximum number of log entries (default: 1000, max: 10000) | | `format` | string | Response format: structured, raw (default: structured) | ### Response ```json { "jobId": "cc0e8400-e29b-41d4-a716-446655440000", "jobName": "Customer Journey Analysis", "jobStatus": "Running", "logsSummary": { "totalEntries": 247, "debugEntries": 89, "infoEntries": 145, "warnEntries": 11, "errorEntries": 2, "timeRange": { "startTime": "2024-01-20T10:30:00Z", "endTime": "2024-01-20T10:45:00Z" } }, "logs": [ { "timestamp": "2024-01-20T10:30:15Z", "level": "INFO", "component": "DataLoader", "stage": "Initialization", "message": "Starting data load for dataset 880e8400-e29b-41d4-a716-446655440000", "details": { "datasetSize": "45.7 MB", "expectedRecords": 192850, "format": "CSV" } }, { "timestamp": "2024-01-20T10:32:45Z", "level": "INFO", "component": "ProcessMiner", "stage": "Data Processing", "message": "Processing batch 1 of 15", "details": { "batchSize": 12856, "progress": 6.7, "recordsPerSecond": 1247 } }, { "timestamp": "2024-01-20T10:38:22Z", "level": "WARN", "component": "DataValidator", "stage": "Data Processing", "message": "Found 125 records with missing timestamps", "details": { "affectedRecords": 125, "action": "Timestamp inferred from surrounding events", "impactOnAnalysis": "Minimal" } }, { "timestamp": "2024-01-20T10:41:10Z", "level": "ERROR", "component": "AnalyticsEngine", "stage": "Analysis", "message": 
"Memory limit exceeded during bottleneck analysis", "details": { "memoryUsage": "3.8 GB", "memoryLimit": "4.0 GB", "action": "Switching to disk-based processing", "retry": true } }, { "timestamp": "2024-01-20T10:45:33Z", "level": "INFO", "component": "ReportGenerator", "stage": "Output Generation", "message": "Generating process map visualization", "details": { "activitiesCount": 47, "pathsCount": 156, "formatRequested": "SVG" } } ], "executionMetrics": { "currentStage": "Output Generation", "stageProgress": 78, "overallProgress": 85, "processingRate": "1250 records/second", "memoryUsage": "2.3 GB", "cpuUsage": 67, "estimatedCompletion": "2024-01-20T10:52:00Z" } } ``` ## Track Job Progress **GET** `/api/{tenantId}/{projectId}/execution/job/{jobId}/progress` Retrieves real-time progress information for a running job, including stage-by-stage completion and performance metrics. ### Response ```json { "jobId": "cc0e8400-e29b-41d4-a716-446655440000", "jobName": "Customer Journey Analysis", "status": "Running", "overallProgress": { "percentage": 85, "startTime": "2024-01-20T10:30:00Z", "elapsedTime": "15 minutes 33 seconds", "estimatedRemaining": "2 minutes 27 seconds", "estimatedCompletion": "2024-01-20T10:52:00Z" }, "stages": [ { "stageName": "Data Loading", "stageOrder": 1, "status": "Completed", "progress": 100, "startTime": "2024-01-20T10:30:00Z", "endTime": "2024-01-20T10:32:15Z", "duration": "2 minutes 15 seconds", "recordsProcessed": 192850, "metrics": { "throughput": "1427 records/second", "dataValidated": true, "errorsFound": 0 } }, { "stageName": "Process Discovery", "stageOrder": 2, "status": "Completed", "progress": 100, "startTime": "2024-01-20T10:32:15Z", "endTime": "2024-01-20T10:41:30Z", "duration": "9 minutes 15 seconds", "recordsProcessed": 192850, "metrics": { "activitiesDiscovered": 47, "variantsFound": 234, "pathsIdentified": 156 } }, { "stageName": "Performance Analysis", "stageOrder": 3, "status": "Running", "progress": 78, "startTime": 
"2024-01-20T10:41:30Z", "estimatedEndTime": "2024-01-20T10:48:00Z", "recordsProcessed": 150243, "totalRecords": 192850, "metrics": { "bottlenecksIdentified": 8, "waitTimeCalculated": 150243, "cycleTimeCalculated": 150243 } }, { "stageName": "Report Generation", "stageOrder": 4, "status": "Pending", "progress": 0, "estimatedStartTime": "2024-01-20T10:48:00Z", "estimatedEndTime": "2024-01-20T10:52:00Z" } ], "currentActivity": { "component": "PerformanceAnalyzer", "operation": "Calculating resource utilization metrics", "details": "Processing activity transitions for efficiency analysis" }, "resourceUsage": { "memoryUsage": "2.3 GB", "memoryLimit": "4.0 GB", "cpuUsage": 67, "diskUsage": "890 MB", "networkIO": "12 MB", "processingRate": "1250 records/second" }, "qualityMetrics": { "dataQualityScore": 94.8, "validationsPassed": 15, "validationsFailed": 1, "warningsGenerated": 11, "errorsEncountered": 2 } } ``` ## Get Job Execution Timeline **GET** `/api/{tenantId}/{projectId}/execution/job/{jobId}/timeline` Retrieves a detailed timeline of job execution events, including stage transitions, resource allocation changes, and significant milestones. 
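Because the timeline response is a flat list of events, collapsing it into a readable audit trail takes only a few lines. A sketch that works on the documented response shape, keeping one line per event:

```python
def summarize_timeline(timeline_doc, event_types=None):
    """Collapse a parsed timeline response into one line per event.

    `event_types` optionally restricts output, e.g. {"Error", "Warning"}.
    """
    lines = []
    for event in timeline_doc.get("timeline", []):
        if event_types and event["eventType"] not in event_types:
            continue
        lines.append(
            f"{event['timestamp']}  {event['eventType']}: {event['description']}")
    return lines

# Example with a trimmed response body:
sample = {"timeline": [
    {"timestamp": "2024-01-20T10:30:00Z", "eventType": "JobStarted",
     "description": "Job execution initiated"},
    {"timestamp": "2024-01-20T10:38:22Z", "eventType": "Warning",
     "description": "Data quality issue detected"},
]}
for line in summarize_timeline(sample, {"Warning"}):
    print(line)
```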
### Query Parameters | Parameter | Type | Description | |-----------|------|-------------| | `includeSubEvents` | boolean | Include detailed sub-events (default: false) | | `granularity` | string | Timeline granularity: seconds, minutes, major_events (default: minutes) | ### Response ```json { "jobId": "cc0e8400-e29b-41d4-a716-446655440000", "jobName": "Customer Journey Analysis", "timelineScope": { "startTime": "2024-01-20T10:30:00Z", "currentTime": "2024-01-20T10:45:33Z", "endTime": null, "granularity": "minutes" }, "timeline": [ { "timestamp": "2024-01-20T10:30:00Z", "eventType": "JobStarted", "description": "Job execution initiated", "details": { "submittedBy": "user123", "priority": "High", "estimatedDuration": "20-25 minutes", "resourcesAllocated": { "cpuUnits": 2, "memoryGB": 4, "workerNode": "worker-node-02" } } }, { "timestamp": "2024-01-20T10:30:15Z", "eventType": "StageStarted", "description": "Data Loading stage initiated", "details": { "stageName": "Data Loading", "expectedDuration": "2-3 minutes", "datasetSize": "45.7 MB", "recordCount": 192850 } }, { "timestamp": "2024-01-20T10:32:15Z", "eventType": "StageCompleted", "description": "Data Loading stage completed successfully", "details": { "stageName": "Data Loading", "actualDuration": "2 minutes 15 seconds", "recordsLoaded": 192850, "dataQualityScore": 98.2, "errorsFound": 0 } }, { "timestamp": "2024-01-20T10:32:15Z", "eventType": "StageStarted", "description": "Process Discovery stage initiated", "details": { "stageName": "Process Discovery", "expectedDuration": "8-12 minutes", "algorithm": "Alpha Miner Enhanced" } }, { "timestamp": "2024-01-20T10:35:30Z", "eventType": "Milestone", "description": "Process model discovered", "details": { "activitiesFound": 47, "uniqueActivities": 47, "processComplexity": "Medium" } }, { "timestamp": "2024-01-20T10:38:22Z", "eventType": "Warning", "description": "Data quality issue detected", "details": { "issue": "Missing timestamps", "affectedRecords": 125, 
"resolution": "Timestamps inferred", "impact": "Minimal" } }, { "timestamp": "2024-01-20T10:41:10Z", "eventType": "Error", "description": "Memory limit approached", "details": { "memoryUsage": "3.8 GB", "memoryLimit": "4.0 GB", "action": "Switched to disk-based processing", "performanceImpact": "15% slower processing" } }, { "timestamp": "2024-01-20T10:41:30Z", "eventType": "StageCompleted", "description": "Process Discovery stage completed", "details": { "stageName": "Process Discovery", "actualDuration": "9 minutes 15 seconds", "processVariants": 234, "pathsDiscovered": 156 } }, { "timestamp": "2024-01-20T10:41:30Z", "eventType": "StageStarted", "description": "Performance Analysis stage initiated", "details": { "stageName": "Performance Analysis", "expectedDuration": "6-8 minutes", "analysisTypes": ["Bottleneck", "Resource Utilization", "Cycle Time"] } }, { "timestamp": "2024-01-20T10:45:33Z", "eventType": "Progress", "description": "Performance Analysis 78% complete", "details": { "stageName": "Performance Analysis", "progress": 78, "currentOperation": "Resource utilization analysis", "recordsProcessed": 150243, "remainingRecords": 42607 } } ], "upcomingEvents": [ { "estimatedTime": "2024-01-20T10:48:00Z", "eventType": "StageCompletion", "description": "Performance Analysis stage completion expected" }, { "estimatedTime": "2024-01-20T10:48:00Z", "eventType": "StageStart", "description": "Report Generation stage start expected" }, { "estimatedTime": "2024-01-20T10:52:00Z", "eventType": "JobCompletion", "description": "Job completion expected" } ] } ``` ## Get Job Performance Metrics **GET** `/api/{tenantId}/{projectId}/execution/job/{jobId}/metrics` Retrieves detailed performance metrics for a job execution, including resource utilization, throughput, and efficiency measurements. 
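Memory and disk figures in the metrics payload are human-readable strings such as "2.3 GB", so client code that computes utilization needs a small parser. A sketch, assuming only the MB/GB units that appear in the documented responses:

```python
def to_mb(size):
    """Convert strings like '2.3 GB' or '450 MB' into megabytes."""
    number, unit = size.split()
    factor = {"MB": 1, "GB": 1024}[unit]  # only units seen in the payload
    return float(number) * factor

def memory_utilization(memory_block):
    """Percent of allocated memory in use, from resourceUtilization.memory."""
    return 100 * to_mb(memory_block["used"]) / to_mb(memory_block["allocated"])

print(memory_utilization({"used": "2.3 GB", "allocated": "4.0 GB"}))  # ~57.5
```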
### Query Parameters | Parameter | Type | Description | |-----------|------|-------------| | `interval` | string | Metrics collection interval: 1m, 5m, 15m (default: 5m) | | `metrics` | string | Comma-separated metrics: cpu, memory, disk, network, throughput | | `includeHistory` | boolean | Include historical metrics data (default: false) | ### Response ```json { "jobId": "cc0e8400-e29b-41d4-a716-446655440000", "metricsCollectionTime": "2024-01-20T10:45:33Z", "currentMetrics": { "resourceUtilization": { "cpu": { "usage": 67, "cores": 2, "efficiency": 89.2 }, "memory": { "used": "2.3 GB", "allocated": "4.0 GB", "peak": "3.8 GB", "efficiency": 87.5 }, "disk": { "reads": "450 MB", "writes": "89 MB", "iops": 145, "latency": "12ms" }, "network": { "bytesIn": "67 MB", "bytesOut": "12 MB", "connections": 8 } }, "processing": { "recordsPerSecond": 1250, "recordsProcessed": 150243, "totalRecords": 192850, "processingEfficiency": 78.3, "errorRate": 0.001, "retryRate": 0.015 }, "stages": { "completed": 2, "running": 1, "pending": 1, "averageStageTime": "5.75 minutes", "stageEfficiency": 91.2 } }, "historicalMetrics": [ { "timestamp": "2024-01-20T10:30:00Z", "cpu": 15, "memory": 0.8, "recordsPerSecond": 0, "stage": "Initialization" }, { "timestamp": "2024-01-20T10:35:00Z", "cpu": 85, "memory": 1.9, "recordsPerSecond": 1427, "stage": "Data Loading" }, { "timestamp": "2024-01-20T10:40:00Z", "cpu": 72, "memory": 3.2, "recordsPerSecond": 1156, "stage": "Process Discovery" }, { "timestamp": "2024-01-20T10:45:00Z", "cpu": 67, "memory": 2.3, "recordsPerSecond": 1250, "stage": "Performance Analysis" } ], "performanceTrends": { "cpuTrend": "Stable", "memoryTrend": "Declining", "throughputTrend": "Improving", "overallEfficiency": "Good", "predictionAccuracy": 94.2 }, "benchmarks": { "jobType": "ProcessMining", "averageJobDuration": "18.5 minutes", "averageThroughput": "1180 records/second", "currentPerformanceRank": "85th percentile", "similarJobsComparison": { "fasterThan": 85, 
"similarTo": 12, "slowerThan": 3 } } } ``` ## Track Multiple Jobs **GET** `/api/{tenantId}/{projectId}/execution/tracking/batch` Retrieves tracking information for multiple jobs simultaneously, useful for dashboard displays and batch monitoring. ### Query Parameters | Parameter | Type | Description | |-----------|------|-------------| | `jobIds` | string | Comma-separated list of job IDs to track | | `status` | string | Filter by status: Running, Queued, Completed, Failed | | `submittedBy` | string | Filter by user who submitted jobs | | `includeMetrics` | boolean | Include performance metrics for each job (default: false) | | `refreshInterval` | integer | Auto-refresh interval in seconds for real-time tracking | ### Response ```json { "trackingTime": "2024-01-20T10:45:33Z", "jobCount": 5, "summary": { "running": 3, "queued": 1, "completed": 1, "failed": 0 }, "jobs": [ { "jobId": "cc0e8400-e29b-41d4-a716-446655440000", "jobName": "Customer Journey Analysis", "status": "Running", "progress": 85, "startTime": "2024-01-20T10:30:00Z", "estimatedCompletion": "2024-01-20T10:52:00Z", "currentStage": "Performance Analysis", "submittedBy": "user123", "priority": "High", "resourceUsage": { "cpu": 67, "memory": "2.3 GB", "processingRate": "1250 records/second" } }, { "jobId": "dd0e8400-e29b-41d4-a716-446655440000", "jobName": "Sales Data Enrichment", "status": "Running", "progress": 45, "startTime": "2024-01-20T10:35:00Z", "estimatedCompletion": "2024-01-20T11:05:00Z", "currentStage": "Data Enrichment", "submittedBy": "system", "priority": "Normal", "resourceUsage": { "cpu": 52, "memory": "1.8 GB", "processingRate": "890 records/second" } }, { "jobId": "ee0e8400-e29b-41d4-a716-446655440000", "jobName": "Weekly Report Generation", "status": "Queued", "progress": 0, "queuePosition": 2, "estimatedStartTime": "2024-01-20T10:55:00Z", "estimatedCompletion": "2024-01-20T11:20:00Z", "submittedBy": "user456", "priority": "Normal" } ], "systemHealth": { "overallLoad": 73, 
"queueHealth": "Good", "resourceAvailability": "Medium", "estimatedCapacity": "6 additional jobs" } } ``` ## Subscribe to Job Events **POST** `/api/{tenantId}/{projectId}/execution/job/{jobId}/subscribe` Establishes a real-time subscription to job events for live tracking. Supports WebSocket connections and webhook notifications. ### Request Body ```json { "subscriptionType": "webhook", "webhookUrl": "https://your-app.com/webhooks/job-events", "events": [ "stageStarted", "stageCompleted", "progressUpdate", "error", "warning", "jobCompleted" ], "filters": { "minProgressIncrement": 5, "includeDebugEvents": false, "notifyOnErrors": true }, "authentication": { "type": "bearer", "token": "your-webhook-auth-token" } } ``` ### Response ```json { "subscriptionId": "sub-123e8400-e29b-41d4-a716-446655440000", "jobId": "cc0e8400-e29b-41d4-a716-446655440000", "subscriptionType": "webhook", "status": "Active", "webhookUrl": "https://your-app.com/webhooks/job-events", "eventsSubscribed": [ "stageStarted", "stageCompleted", "progressUpdate", "error", "warning", "jobCompleted" ], "createdAt": "2024-01-20T10:45:33Z", "expiresAt": "2024-01-20T18:45:33Z" } ``` ## Get Job Dependencies **GET** `/api/{tenantId}/{projectId}/execution/job/{jobId}/dependencies` Retrieves information about job dependencies, including prerequisite jobs, dependent resources, and blocking conditions. 
### Response ```json { "jobId": "cc0e8400-e29b-41d4-a716-446655440000", "dependencies": { "prerequisiteJobs": [ { "jobId": "aa0e8400-e29b-41d4-a716-446655440000", "jobName": "Data Preparation", "status": "Completed", "completedAt": "2024-01-20T10:25:00Z", "dependency": "Dataset must be validated before analysis" } ], "resourceDependencies": [ { "resourceType": "Dataset", "resourceId": "880e8400-e29b-41d4-a716-446655440000", "resourceName": "Customer Journey Data", "status": "Available", "lastModified": "2024-01-20T10:25:00Z" }, { "resourceType": "ComputeNode", "resourceId": "worker-node-02", "status": "Allocated", "allocatedAt": "2024-01-20T10:30:00Z" } ], "dependentJobs": [ { "jobId": "ee0e8400-e29b-41d4-a716-446655440000", "jobName": "Weekly Report Generation", "status": "Queued", "waitingFor": "Customer Journey Analysis results" } ] }, "blockingConditions": [ { "condition": "Memory allocation below 2GB", "status": "Resolved", "resolvedAt": "2024-01-20T10:30:00Z", "resolution": "Additional memory allocated" } ], "dependencyGraph": { "nodes": [ { "id": "aa0e8400-e29b-41d4-a716-446655440000", "type": "PrerequisiteJob", "status": "Completed" }, { "id": "cc0e8400-e29b-41d4-a716-446655440000", "type": "CurrentJob", "status": "Running" }, { "id": "ee0e8400-e29b-41d4-a716-446655440000", "type": "DependentJob", "status": "Queued" } ], "edges": [ { "from": "aa0e8400-e29b-41d4-a716-446655440000", "to": "cc0e8400-e29b-41d4-a716-446655440000", "type": "prerequisite" }, { "from": "cc0e8400-e29b-41d4-a716-446655440000", "to": "ee0e8400-e29b-41d4-a716-446655440000", "type": "dependency" } ] } } ``` ## Example: Complete Job Tracking Workflow This example demonstrates comprehensive job tracking and monitoring: ```javascript // 1. 
// 1. Start tracking a job
const trackJob = async (jobId) => {
  // Get initial job status
  const progress = await getJobProgress(jobId);
  console.log(`Tracking job: ${progress.jobName}`);
  console.log(`Current progress: ${progress.overallProgress.percentage}%`);

  // Subscribe to real-time events
  await subscribeToJobEvents(jobId);
  return progress;
};

// 2. Get detailed job progress
const getJobProgress = async (jobId) => {
  const response = await fetch(`/api/${tenantId}/${projectId}/execution/job/${jobId}/progress`, {
    headers: { 'Authorization': `Bearer ${token}` }
  });
  return await response.json();
};

// 3. Subscribe to real-time job events
const subscribeToJobEvents = async (jobId) => {
  const response = await fetch(`/api/${tenantId}/${projectId}/execution/job/${jobId}/subscribe`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`
    },
    body: JSON.stringify({
      subscriptionType: 'webhook',
      webhookUrl: 'https://your-app.com/webhooks/job-events',
      events: ['stageStarted', 'stageCompleted', 'progressUpdate', 'error', 'warning', 'jobCompleted'],
      filters: {
        minProgressIncrement: 10,
        includeDebugEvents: false,
        notifyOnErrors: true
      }
    })
  });
  const subscription = await response.json();
  console.log(`Subscribed to job events: ${subscription.subscriptionId}`);
  return subscription;
};

// 4. Get job performance metrics
const getJobMetrics = async (jobId) => {
  const response = await fetch(`/api/${tenantId}/${projectId}/execution/job/${jobId}/metrics?interval=5m&includeHistory=true`, {
    headers: { 'Authorization': `Bearer ${token}` }
  });
  const metrics = await response.json();
  console.log('Performance Metrics:');
  console.log(`  CPU Usage: ${metrics.currentMetrics.resourceUtilization.cpu.usage}%`);
  console.log(`  Memory Usage: ${metrics.currentMetrics.resourceUtilization.memory.used}`);
  console.log(`  Processing Rate: ${metrics.currentMetrics.processing.recordsPerSecond} records/sec`);
  return metrics;
};

// 5. Get job execution logs
const getJobLogs = async (jobId, level = 'INFO') => {
  const response = await fetch(`/api/${tenantId}/${projectId}/execution/job/${jobId}/logs?level=${level}&limit=100`, {
    headers: { 'Authorization': `Bearer ${token}` }
  });
  const logs = await response.json();
  console.log(`Retrieved ${logs.logs.length} log entries`);

  // Display recent important logs
  logs.logs.filter(log => log.level !== 'DEBUG').forEach(log => {
    console.log(`[${log.timestamp}] ${log.level}: ${log.message}`);
  });
  return logs;
};

// 6. Track multiple jobs in a dashboard
const trackMultipleJobs = async (jobIds) => {
  const response = await fetch(`/api/${tenantId}/${projectId}/execution/tracking/batch?jobIds=${jobIds.join(',')}&includeMetrics=true`, {
    headers: { 'Authorization': `Bearer ${token}` }
  });
  const tracking = await response.json();
  console.log(`Tracking ${tracking.jobCount} jobs:`);
  tracking.jobs.forEach(job => {
    console.log(`  ${job.jobName}: ${job.status} (${job.progress}%)`);
  });
  return tracking;
};

// 7. Monitor job timeline
const getJobTimeline = async (jobId) => {
  const response = await fetch(`/api/${tenantId}/${projectId}/execution/job/${jobId}/timeline?includeSubEvents=true&granularity=minutes`, {
    headers: { 'Authorization': `Bearer ${token}` }
  });
  const timeline = await response.json();
  console.log('Job Timeline:');
  timeline.timeline.forEach(event => {
    console.log(`[${event.timestamp}] ${event.eventType}: ${event.description}`);
  });
  return timeline;
};

// Execute comprehensive tracking workflow
const runTrackingWorkflow = async (jobId) => {
  try {
    console.log('Starting comprehensive job tracking...');

    // Track job progress
    const progress = await trackJob(jobId);

    // Get performance metrics
    const metrics = await getJobMetrics(jobId);

    // Get execution logs
    const logs = await getJobLogs(jobId, 'WARN');

    // Get timeline
    const timeline = await getJobTimeline(jobId);

    // Monitor until completion
    const monitoring = setInterval(async () => {
      const currentProgress = await getJobProgress(jobId);
      if (currentProgress.status === 'Completed') {
        console.log('Job completed successfully!');
        clearInterval(monitoring);
        // Get final metrics
        const finalMetrics = await getJobMetrics(jobId);
        console.log(`Final efficiency: ${finalMetrics.performanceTrends.overallEfficiency}`);
      } else if (currentProgress.status === 'Failed') {
        console.log('Job failed!');
        clearInterval(monitoring);
        // Get error logs
        const errorLogs = await getJobLogs(jobId, 'ERROR');
        console.log('Error details:', errorLogs.logs);
      } else {
        console.log(`Progress update: ${currentProgress.overallProgress.percentage}%`);
      }
    }, 30000); // Check every 30 seconds
  } catch (error) {
    console.error('Tracking workflow failed:', error);
  }
};

// Start tracking
runTrackingWorkflow('cc0e8400-e29b-41d4-a716-446655440000');
```

## Python Example

```python
import requests
import time
from datetime import datetime, timedelta

class JobTracker:
    def __init__(self, base_url, tenant_id, project_id, token):
        self.base_url = base_url
        self.tenant_id = tenant_id
        self.project_id = project_id
        self.headers = {
            'Authorization': f'Bearer {token}',
            'Content-Type': 'application/json'
        }

    def get_job_progress(self, job_id):
        """Get current job progress"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/job/{job_id}/progress"
        response = requests.get(url, headers=self.headers)
        return response.json()

    def get_job_logs(self, job_id, level='INFO', limit=1000):
        """Get job execution logs"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/job/{job_id}/logs"
        params = {'level': level, 'limit': limit}
        response = requests.get(url, params=params, headers=self.headers)
        return response.json()

    def get_job_metrics(self, job_id, interval='5m', include_history=False):
        """Get job performance metrics"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/job/{job_id}/metrics"
        params = {
            'interval': interval,
            'includeHistory': str(include_history).lower()
        }
        response = requests.get(url, params=params, headers=self.headers)
        return response.json()

    def get_job_timeline(self, job_id, include_sub_events=False, granularity='minutes'):
        """Get job execution timeline"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/job/{job_id}/timeline"
        params = {
            'includeSubEvents': str(include_sub_events).lower(),
            'granularity': granularity
        }
        response = requests.get(url, params=params, headers=self.headers)
        return response.json()

    def track_multiple_jobs(self, job_ids, include_metrics=False):
        """Track multiple jobs simultaneously"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/tracking/batch"
        params = {
            'jobIds': ','.join(job_ids),
            'includeMetrics': str(include_metrics).lower()
        }
        response = requests.get(url, params=params, headers=self.headers)
        return response.json()

    def subscribe_to_job_events(self, job_id, webhook_url, events=None):
        """Subscribe to real-time job events"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/job/{job_id}/subscribe"
        payload = {
            'subscriptionType': 'webhook',
            'webhookUrl': webhook_url,
            'events': events or [
                'stageStarted', 'stageCompleted', 'progressUpdate',
                'error', 'warning', 'jobCompleted'
            ],
            'filters': {
                'minProgressIncrement': 10,
                'includeDebugEvents': False,
                'notifyOnErrors': True
            }
        }
        response = requests.post(url, json=payload, headers=self.headers)
        return response.json()

    def get_job_dependencies(self, job_id):
        """Get job dependencies and blocking conditions"""
        url = f"{self.base_url}/api/{self.tenant_id}/{self.project_id}/execution/job/{job_id}/dependencies"
        response = requests.get(url, headers=self.headers)
        return response.json()

    def monitor_job_until_completion(self, job_id, check_interval=30, timeout=3600):
        """Monitor job until completion with detailed tracking"""
        start_time = time.time()
        print(f"Starting monitoring for job {job_id}")

        # Get initial state
        progress = self.get_job_progress(job_id)
        print(f"Job: {progress['jobName']}")
        print(f"Initial progress: {progress['overallProgress']['percentage']}%")

        while time.time() - start_time < timeout:
            try:
                # Get current progress
                progress = self.get_job_progress(job_id)
                status = progress['status']
                percentage = progress['overallProgress']['percentage']
                print(f"Progress: {percentage}% - Status: {status}")

                if status == 'Completed':
                    print("Job completed successfully!")
                    # Get final metrics
                    metrics = self.get_job_metrics(job_id, include_history=True)
                    print(f"Final efficiency: {metrics['performanceTrends']['overallEfficiency']}")
                    print(f"Total duration: {progress['overallProgress']['elapsedTime']}")
                    return progress
                elif status == 'Failed':
                    print("Job failed!")
                    # Get error logs
                    logs = self.get_job_logs(job_id, level='ERROR')
                    print("Error logs:")
                    for log in logs['logs']:
                        print(f"  [{log['timestamp']}] {log['message']}")
                    return progress
                elif status == 'Running':
                    # Get performance metrics
                    metrics = self.get_job_metrics(job_id)
                    cpu_usage = metrics['currentMetrics']['resourceUtilization']['cpu']['usage']
                    memory_used = metrics['currentMetrics']['resourceUtilization']['memory']['used']
                    processing_rate = metrics['currentMetrics']['processing']['recordsPerSecond']
                    print(f"  CPU: {cpu_usage}%, Memory: {memory_used}, Rate: {processing_rate} rec/sec")

                    # Check for warnings raised in the last minute
                    # (compare timezone-aware datetimes to avoid naive/aware errors)
                    recent_logs = self.get_job_logs(job_id, level='WARN')
                    recent_warnings = [
                        log for log in recent_logs['logs']
                        if datetime.fromisoformat(log['timestamp'].replace('Z', '+00:00'))
                        > datetime.now().astimezone() - timedelta(minutes=1)
                    ]
                    if recent_warnings:
                        for warning in recent_warnings:
                            print(f"  WARNING: {warning['message']}")

                time.sleep(check_interval)
            except Exception as e:
                print(f"Monitoring error: {e}")
                time.sleep(60)

        raise TimeoutError(f"Job {job_id} monitoring timed out after {timeout} seconds")

    def create_tracking_dashboard(self, job_ids):
        """Create a simple tracking dashboard for multiple jobs"""
        print("Job Tracking Dashboard")
        print("=" * 50)

        while True:
            try:
                tracking = self.track_multiple_jobs(job_ids, include_metrics=True)
                print(f"\nUpdate: {tracking['trackingTime']}")
                print(f"System Load: {tracking['systemHealth']['overallLoad']}%")
                print(f"Queue Health: {tracking['systemHealth']['queueHealth']}")
                print()

                for job in tracking['jobs']:
                    if job['status'] == 'Completed':
                        status_icon = "Done"
                    elif job['status'] == 'Running':
                        status_icon = "Running"
                    else:
                        status_icon = "Waiting"
                    print(f"{status_icon} {job['jobName']}: {job['progress']}% ({job['status']})")

                    if job['status'] == 'Running' and 'resourceUsage' in job:
                        print(f"  CPU: {job['resourceUsage']['cpu']}%, Memory: {job['resourceUsage']['memory']}")
                        print(f"  Rate: {job['resourceUsage']['processingRate']}")

                    if job['status'] == 'Queued':
                        print(f"  Queue position: {job.get('queuePosition', 'Unknown')}")
                        print(f"  Estimated start: {job.get('estimatedStartTime', 'Unknown')}")

                print("\n" + "=" * 50)

                # Stop once no jobs remain running or queued
                active_jobs = [job for job in tracking['jobs'] if job['status'] in ['Running', 'Queued']]
                if not active_jobs:
                    print("All jobs completed!")
                    break

                time.sleep(30)
            except KeyboardInterrupt:
                print("\nDashboard stopped by user")
                break
            except Exception as e:
                print(f"Dashboard error: {e}")
                time.sleep(60)

# Usage example
tracker = JobTracker(
    'https://your-mindzie-instance.com',
    'tenant-guid',
    'project-guid',
    'your-auth-token'
)

try:
    # Monitor a single job with comprehensive tracking
    job_id = 'cc0e8400-e29b-41d4-a716-446655440000'

    # Get initial job state
    progress = tracker.get_job_progress(job_id)
    print(f"Tracking job: {progress['jobName']}")
    print(f"Status: {progress['status']}")
    print(f"Progress: {progress['overallProgress']['percentage']}%")

    # Get job dependencies
    dependencies = tracker.get_job_dependencies(job_id)
    if dependencies['dependencies']['prerequisiteJobs']:
        print("Prerequisites:")
        for prereq in dependencies['dependencies']['prerequisiteJobs']:
            print(f"  - {prereq['jobName']}: {prereq['status']}")

    # Monitor until completion
    final_status = tracker.monitor_job_until_completion(job_id)

    # Get final timeline
    timeline = tracker.get_job_timeline(job_id, include_sub_events=True)
    print("\nExecution Timeline:")
    for event in timeline['timeline'][-5:]:  # Last 5 events
        print(f"  [{event['timestamp']}] {event['eventType']}: {event['description']}")
except Exception as e:
    print(f"Error in job tracking: {e}")
```

---

# Investigation API

Manage investigations within mindzieStudio projects. Investigations are the primary containers for process mining analysis, linking datasets to notebooks that define analytical workflows.

## Features

### Investigation Management

Create, retrieve, update, and delete investigations. List all investigations in a project with pagination support. [View Management API](/mindzie_api/investigation/management)

### Notebook Access

Retrieve notebooks within an investigation, including the main notebook that is automatically created with each investigation.
[View Notebooks API](/mindzie_api/investigation/notebooks)

---

## Available Endpoints

### Connectivity Testing

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/api/{tenantId}/{projectId}/investigation/unauthorized-ping` | Public connectivity test |
| GET | `/api/{tenantId}/{projectId}/investigation/ping` | Authenticated connectivity test |

### Investigation CRUD

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/api/{tenantId}/{projectId}/investigation` | List all investigations |
| GET | `/api/{tenantId}/{projectId}/investigation/{investigationId}` | Get investigation details |
| POST | `/api/{tenantId}/{projectId}/investigation` | Create an investigation |
| PUT | `/api/{tenantId}/{projectId}/investigation/{investigationId}` | Update an investigation |
| DELETE | `/api/{tenantId}/{projectId}/investigation/{investigationId}` | Delete an investigation |

### Notebook Access

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/api/{tenantId}/{projectId}/investigation/{investigationId}/notebooks` | List all notebooks |
| GET | `/api/{tenantId}/{projectId}/investigation/{investigationId}/main-notebook` | Get main notebook |

---

## Authentication

All Investigation API endpoints require a valid Tenant API key. Include the API key in the `Authorization` header. See [Authentication](/mindzie_api/authentication) for details on API key types and usage.
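As a quick sanity check of your credentials, you can send the Bearer token against the authenticated ping endpoint documented below. This sketch uses only the Python standard library (`urllib`) to avoid extra dependencies; the helper name is illustrative.

```python
from urllib import request

def build_ping_request(base_url, tenant_id, project_id, api_key):
    """Build an authenticated request for the investigation ping
    endpoint. Only the Authorization header is required; the
    helper name is illustrative, not part of the API."""
    url = f"{base_url}/api/{tenant_id}/{project_id}/investigation/ping"
    return request.Request(url, headers={"Authorization": f"Bearer {api_key}"})

req = build_ping_request(
    "https://your-mindzie-instance.com", "tenant-guid", "project-guid", "YOUR_API_KEY"
)
# Against a live instance, request.urlopen(req).read() should return the
# documented ping body; here we only construct the request.
```

The same header construction applies to every endpoint in this section; only the URL path changes.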
---

## Quick Start

```bash
# List all investigations in a project
curl -X GET "https://your-mindzie-instance.com/api/{tenantId}/{projectId}/investigation" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Create a new investigation
curl -X POST "https://your-mindzie-instance.com/api/{tenantId}/{projectId}/investigation" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"investigationName": "Order Analysis", "datasetId": "your-dataset-guid"}'
```

---

## Important Notes

- **Dataset Required**: Every investigation must be linked to an existing dataset
- **Main Notebook**: A main notebook is automatically created when an investigation is created
- **Cache Required**: Load the project into cache before accessing notebooks via the notebooks endpoint
- **CASCADE Delete**: Deleting an investigation permanently removes all its notebooks and analysis history
- **Operation Center**: Set `isUsedForOperationCenter` to true for investigations used in real-time monitoring

---

Manage investigations within mindzieStudio projects. Create, retrieve, update, and delete investigations that contain notebooks and define process mining analysis workflows.

## Connectivity Testing

### Unauthorized Ping

**GET** `/api/{tenantId}/{projectId}/investigation/unauthorized-ping`

Test endpoint that does not require authentication. Use this to verify network connectivity.

#### Response

```
Ping Successful
```

### Authenticated Ping

**GET** `/api/{tenantId}/{projectId}/investigation/ping`

Authenticated ping endpoint to verify API access for a specific tenant and project.

#### Response (200 OK)

```
Ping Successful (tenant id: {tenantId})
```

---

## List All Investigations

**GET** `/api/{tenantId}/{projectId}/investigation`

Retrieves a paginated list of all investigations within the specified project.
### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `projectId` | GUID | Yes | The project identifier | ### Query Parameters | Parameter | Type | Default | Description | |-----------|------|---------|-------------| | `page` | integer | 1 | Page number for pagination | | `pageSize` | integer | 50 | Number of items per page (max recommended: 100) | ### Response (200 OK) ```json { "investigations": [ { "investigationId": "11111111-2222-3333-4444-555555555555", "projectId": "87654321-4321-4321-4321-210987654321", "investigationName": "Order Analysis", "investigationDescription": "Process mining analysis of order workflow", "datasetId": "12345678-1234-1234-1234-123456789012", "dateCreated": "2024-01-15T10:30:00Z", "dateModified": "2024-01-20T14:45:00Z", "createdBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", "modifiedBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", "investigationOrder": 1.0, "isUsedForOperationCenter": false, "investigationFolderId": null, "notebookCount": 3 } ], "totalCount": 5, "page": 1, "pageSize": 50 } ``` ### Investigation Object Fields | Field | Type | Description | |-------|------|-------------| | `investigationId` | GUID | Unique identifier for the investigation | | `projectId` | GUID | Project this investigation belongs to | | `investigationName` | string | Display name of the investigation | | `investigationDescription` | string | Description of the investigation | | `datasetId` | GUID | The dataset this investigation analyzes | | `dateCreated` | datetime | When the investigation was created | | `dateModified` | datetime | When the investigation was last modified | | `createdBy` | GUID | User who created the investigation | | `modifiedBy` | GUID | User who last modified the investigation | | `investigationOrder` | decimal | Display order within the project | | `isUsedForOperationCenter` | boolean | Whether used for real-time monitoring | 
| `investigationFolderId` | GUID | Optional folder for organization | | `notebookCount` | integer | Number of notebooks in the investigation | --- ## Get Investigation Details **GET** `/api/{tenantId}/{projectId}/investigation/{investigationId}` Retrieves detailed information for a specific investigation. ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `projectId` | GUID | Yes | The project identifier | | `investigationId` | GUID | Yes | The investigation identifier | ### Response (200 OK) Same structure as the investigation object in the list response. ### Error Responses **Not Found (404):** ```json { "error": "Investigation not found", "investigationId": "11111111-2222-3333-4444-555555555555" } ``` --- ## Create Investigation **POST** `/api/{tenantId}/{projectId}/investigation` Creates a new investigation linked to an existing dataset. A main notebook is automatically created. ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `projectId` | GUID | Yes | The project identifier | ### Request Body ```json { "investigationName": "Order Analysis", "investigationDescription": "Process mining analysis of order workflow", "datasetId": "12345678-1234-1234-1234-123456789012", "isUsedForOperationCenter": false } ``` ### Request Fields | Field | Type | Required | Description | |-------|------|----------|-------------| | `investigationName` | string | Yes | Investigation name | | `investigationDescription` | string | No | Description of the investigation | | `datasetId` | GUID | Yes | The dataset to analyze | | `isUsedForOperationCenter` | boolean | No | Enable for real-time monitoring (default: false) | ### Response (201 Created) Returns the created investigation object (same structure as Get Investigation). 
### Error Responses **Bad Request (400):** ```json { "error": "Dataset not found with ID '12345678-1234-1234-1234-123456789012'" } ``` --- ## Update Investigation **PUT** `/api/{tenantId}/{projectId}/investigation/{investigationId}` Updates an existing investigation's properties. All fields are optional. ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `projectId` | GUID | Yes | The project identifier | | `investigationId` | GUID | Yes | The investigation identifier | ### Request Body ```json { "investigationName": "Updated Analysis Name", "investigationDescription": "Updated description", "isUsedForOperationCenter": true } ``` ### Request Fields | Field | Type | Required | Description | |-------|------|----------|-------------| | `investigationName` | string | No | New investigation name | | `investigationDescription` | string | No | New description | | `isUsedForOperationCenter` | boolean | No | Enable/disable operation center | ### Response (200 OK) Returns the updated investigation object. ### Error Responses **Not Found (404):** ```json { "error": "Investigation not found", "investigationId": "11111111-2222-3333-4444-555555555555" } ``` --- ## Delete Investigation **DELETE** `/api/{tenantId}/{projectId}/investigation/{investigationId}` Permanently deletes an investigation and ALL associated notebooks. 
**WARNING: This is a DESTRUCTIVE operation that CANNOT be undone.** ### Cascade Delete Includes - All notebooks in the investigation - All block configurations - All execution history - All analysis results ### Path Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `tenantId` | GUID | Yes | The tenant identifier | | `projectId` | GUID | Yes | The project identifier | | `investigationId` | GUID | Yes | The investigation identifier | ### Response (204 No Content) No response body on successful deletion. ### Error Responses **Not Found (404):** ```json { "error": "Investigation not found", "investigationId": "11111111-2222-3333-4444-555555555555" } ``` --- ## Implementation Examples ### cURL ```bash # List all investigations curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/investigation" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" # Get investigation details curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/investigation/11111111-2222-3333-4444-555555555555" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" # Create a new investigation curl -X POST "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/investigation" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \ -H "Content-Type: application/json" \ -d '{ "investigationName": "Q4 Analysis", "investigationDescription": "Quarterly order analysis", "datasetId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" }' # Update an investigation curl -X PUT "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/investigation/11111111-2222-3333-4444-555555555555" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \ -H "Content-Type: application/json" \ -d '{ "investigationName": "Q4 Analysis - Final", 
"investigationDescription": "Updated description" }' # Delete an investigation (CAUTION: Irreversible!) curl -X DELETE "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/investigation/11111111-2222-3333-4444-555555555555" \ -H "Authorization: Bearer YOUR_ACCESS_TOKEN" ``` ### Python ```python import requests TENANT_ID = '12345678-1234-1234-1234-123456789012' PROJECT_ID = '87654321-4321-4321-4321-210987654321' BASE_URL = 'https://your-mindzie-instance.com' class InvestigationManager: def __init__(self, token): self.headers = { 'Authorization': f'Bearer {token}', 'Content-Type': 'application/json' } def list_investigations(self, page=1, page_size=50): """List all investigations in the project.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/investigation' params = {'page': page, 'pageSize': page_size} response = requests.get(url, headers=self.headers, params=params) response.raise_for_status() return response.json() def get_investigation(self, investigation_id): """Get investigation details.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/investigation/{investigation_id}' response = requests.get(url, headers=self.headers) response.raise_for_status() return response.json() def create_investigation(self, name, dataset_id, description='', is_operation_center=False): """Create a new investigation.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/investigation' payload = { 'investigationName': name, 'investigationDescription': description, 'datasetId': dataset_id, 'isUsedForOperationCenter': is_operation_center } response = requests.post(url, json=payload, headers=self.headers) response.raise_for_status() return response.json() def update_investigation(self, investigation_id, name=None, description=None, is_operation_center=None): """Update an existing investigation.""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/investigation/{investigation_id}' payload = {} if name: payload['investigationName'] = 
name if description is not None: payload['investigationDescription'] = description if is_operation_center is not None: payload['isUsedForOperationCenter'] = is_operation_center response = requests.put(url, json=payload, headers=self.headers) response.raise_for_status() return response.json() def delete_investigation(self, investigation_id): """Delete an investigation (CAUTION: Irreversible!).""" url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/investigation/{investigation_id}' response = requests.delete(url, headers=self.headers) response.raise_for_status() return None # 204 No Content # Usage manager = InvestigationManager('your-auth-token') # List all investigations result = manager.list_investigations() print(f"Total investigations: {result['totalCount']}") for inv in result['investigations']: print(f"- {inv['investigationName']}: {inv['notebookCount']} notebooks") # Create a new investigation new_inv = manager.create_investigation( name='API Test Investigation', dataset_id='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee', description='Created via API' ) print(f"Created: {new_inv['investigationId']}") ``` ### JavaScript/Node.js ```javascript const TENANT_ID = '12345678-1234-1234-1234-123456789012'; const PROJECT_ID = '87654321-4321-4321-4321-210987654321'; const BASE_URL = 'https://your-mindzie-instance.com'; class InvestigationManager { constructor(token) { this.headers = { 'Authorization': `Bearer ${token}`, 'Content-Type': 'application/json' }; } async listInvestigations(page = 1, pageSize = 50) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/investigation?page=${page}&pageSize=${pageSize}`; const response = await fetch(url, { headers: this.headers }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return await response.json(); } async getInvestigation(investigationId) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/investigation/${investigationId}`; const response = await fetch(url, { headers: this.headers }); if (!response.ok) 
throw new Error(`Failed: ${response.status}`); return await response.json(); } async createInvestigation(name, datasetId, description = '', isOperationCenter = false) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/investigation`; const response = await fetch(url, { method: 'POST', headers: this.headers, body: JSON.stringify({ investigationName: name, investigationDescription: description, datasetId: datasetId, isUsedForOperationCenter: isOperationCenter }) }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return await response.json(); } async updateInvestigation(investigationId, updates) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/investigation/${investigationId}`; const response = await fetch(url, { method: 'PUT', headers: this.headers, body: JSON.stringify(updates) }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return await response.json(); } async deleteInvestigation(investigationId) { const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/investigation/${investigationId}`; const response = await fetch(url, { method: 'DELETE', headers: this.headers }); if (!response.ok) throw new Error(`Failed: ${response.status}`); return null; // 204 No Content } } // Usage const manager = new InvestigationManager('your-auth-token'); const investigations = await manager.listInvestigations(); console.log(`Found ${investigations.totalCount} investigations`); investigations.investigations.forEach(inv => { console.log(`- ${inv.investigationName}: ${inv.notebookCount} notebooks`); }); // Create a new investigation const newInv = await manager.createInvestigation( 'API Test Investigation', 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee', 'Created via API' ); console.log(`Created: ${newInv.investigationId}`); ``` --- Access notebooks within an investigation. Notebooks contain the analysis blocks that define process mining workflows. 
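The investigation list endpoint above is paginated (`page`/`pageSize`), so fetching every investigation means walking pages until `totalCount` is reached. The following sketch shows one way to do that; the `fetch_page` callable and the stub below are illustrative stand-ins for `InvestigationManager.list_investigations`, not part of the API itself.

```python
def iter_investigations(fetch_page, page_size=50):
    """Yield every investigation across all pages.

    fetch_page(page, page_size) must return the list-endpoint JSON body:
    a dict with 'investigations' (list) and 'totalCount' (int).
    """
    page = 1
    seen = 0
    while True:
        body = fetch_page(page, page_size)
        items = body.get('investigations', [])
        for inv in items:
            seen += 1
            yield inv
        # Stop once everything has been seen, or on an empty page
        # (defensive: avoids looping forever if totalCount is stale).
        if seen >= body.get('totalCount', 0) or not items:
            break
        page += 1

# Demo with a stubbed fetcher standing in for manager.list_investigations:
def fake_fetch(page, page_size):
    data = [{'investigationName': f'inv-{i}'} for i in range(5)]
    start = (page - 1) * page_size
    return {'totalCount': 5, 'investigations': data[start:start + page_size]}

names = [inv['investigationName']
         for inv in iter_investigations(fake_fetch, page_size=2)]
# names now holds all five investigations, gathered across three pages
```

In real use, pass `lambda page, page_size: manager.list_investigations(page, page_size)` as `fetch_page`.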
## Prerequisites

Before accessing notebooks via this API, you must load the project into cache using the Project API.

```bash
# Load project into cache first
curl -X GET "https://your-mindzie-instance.com/api/{tenantId}/project/{projectId}/load" \
  -H "Authorization: Bearer YOUR_API_KEY"
```

See [Project Cache API](/mindzie_api/project/cache) for details.

---

## List Investigation Notebooks

**GET** `/api/{tenantId}/{projectId}/investigation/{investigationId}/notebooks`

Retrieves all notebooks within an investigation from the project cache.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project identifier |
| `investigationId` | GUID | Yes | The investigation identifier |

### Response (200 OK)

```json
{
  "notebooks": [
    {
      "notebookId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
      "investigationId": "11111111-2222-3333-4444-555555555555",
      "name": "Main",
      "description": "",
      "dateCreated": "2024-01-15T10:30:00Z",
      "dateModified": "2024-01-20T14:45:00Z",
      "createdBy": null,
      "modifiedBy": null,
      "notebookType": null,
      "notebookOrder": 1.0,
      "lastExecutionDuration": 0,
      "blockCount": 12
    },
    {
      "notebookId": "bbbbbbbb-cccc-dddd-eeee-ffffffffffff",
      "investigationId": "11111111-2222-3333-4444-555555555555",
      "name": "Variant Analysis",
      "description": "",
      "dateCreated": "2024-01-16T09:00:00Z",
      "dateModified": "2024-01-18T11:30:00Z",
      "createdBy": null,
      "modifiedBy": null,
      "notebookType": null,
      "notebookOrder": 2.0,
      "lastExecutionDuration": 0,
      "blockCount": 8
    }
  ],
  "totalCount": 2
}
```

### Notebook Object Fields

| Field | Type | Description |
|-------|------|-------------|
| `notebookId` | GUID | Unique identifier for the notebook |
| `investigationId` | GUID | Investigation this notebook belongs to |
| `name` | string | Display name of the notebook |
| `description` | string | Description of the notebook |
| `dateCreated` | datetime | When the notebook was created |
| `dateModified` | datetime | When the notebook was last modified |
| `createdBy` | GUID | User who created the notebook |
| `modifiedBy` | GUID | User who last modified the notebook |
| `notebookType` | integer | Type of notebook (0 = standard) |
| `notebookOrder` | decimal | Display order within the investigation |
| `lastExecutionDuration` | double | Last execution time in seconds |
| `blockCount` | integer | Number of blocks in the notebook |

### Error Responses

**Not Found (404) - Project not cached:**

```json
"Project not found in cache. Please load the project first using the ProjectController.LoadProject endpoint."
```

**Not Found (404) - Investigation not found:**

```json
{
  "error": "Investigation not found",
  "investigationId": "11111111-2222-3333-4444-555555555555"
}
```

---

## Get Main Notebook

**GET** `/api/{tenantId}/{projectId}/investigation/{investigationId}/main-notebook`

Retrieves the main notebook for an investigation. The main notebook is created automatically along with its investigation and typically contains the core filtering and analysis workflow.
### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project identifier |
| `investigationId` | GUID | Yes | The investigation identifier |

### Response (200 OK)

```json
{
  "notebookId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "investigationId": "11111111-2222-3333-4444-555555555555",
  "name": "Main",
  "description": "Primary analysis notebook",
  "dateCreated": "2024-01-15T10:30:00Z",
  "dateModified": "2024-01-20T14:45:00Z",
  "createdBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "modifiedBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "notebookType": 0,
  "notebookOrder": 1.0,
  "lastExecutionDuration": 2.5,
  "blockCount": 12
}
```

### Error Responses

**Not Found (404) - Investigation not found:**

```json
{
  "error": "Investigation not found",
  "investigationId": "11111111-2222-3333-4444-555555555555"
}
```

**Not Found (404) - Main notebook not found:**

```json
{
  "error": "Main notebook not found for investigation",
  "investigationId": "11111111-2222-3333-4444-555555555555"
}
```

---

## Implementation Examples

### cURL

```bash
# Step 1: Load project into cache first
curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/project/87654321-4321-4321-4321-210987654321/load" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"

# Step 2: List all notebooks in an investigation
curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/investigation/11111111-2222-3333-4444-555555555555/notebooks" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"

# Get the main notebook directly
curl -X GET "https://your-mindzie-instance.com/api/12345678-1234-1234-1234-123456789012/87654321-4321-4321-4321-210987654321/investigation/11111111-2222-3333-4444-555555555555/main-notebook" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"
```

### Python

```python
import requests

TENANT_ID = '12345678-1234-1234-1234-123456789012'
PROJECT_ID = '87654321-4321-4321-4321-210987654321'
BASE_URL = 'https://your-mindzie-instance.com'

class NotebookAccessor:
    def __init__(self, token):
        self.headers = {
            'Authorization': f'Bearer {token}',
            'Content-Type': 'application/json'
        }

    def load_project(self):
        """Load project into cache before accessing notebooks."""
        url = f'{BASE_URL}/api/{TENANT_ID}/project/{PROJECT_ID}/load'
        response = requests.get(url, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def list_notebooks(self, investigation_id):
        """List all notebooks in an investigation."""
        url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/investigation/{investigation_id}/notebooks'
        response = requests.get(url, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def get_main_notebook(self, investigation_id):
        """Get the main notebook for an investigation."""
        url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/investigation/{investigation_id}/main-notebook'
        response = requests.get(url, headers=self.headers)
        response.raise_for_status()
        return response.json()

# Usage
accessor = NotebookAccessor('your-auth-token')
investigation_id = '11111111-2222-3333-4444-555555555555'

# Step 1: Load project into cache
print("Loading project into cache...")
load_result = accessor.load_project()
print(f"Project loaded: {load_result['projectName']}")

# Step 2: List notebooks
notebooks = accessor.list_notebooks(investigation_id)
print(f"\nFound {notebooks['totalCount']} notebooks:")
for nb in notebooks['notebooks']:
    print(f"  - {nb['name']}: {nb['blockCount']} blocks")

# Get main notebook
main = accessor.get_main_notebook(investigation_id)
print(f"\nMain notebook: {main['name']} ({main['blockCount']} blocks)")
```

### JavaScript/Node.js

```javascript
const TENANT_ID = '12345678-1234-1234-1234-123456789012';
const PROJECT_ID = '87654321-4321-4321-4321-210987654321';
const BASE_URL = 'https://your-mindzie-instance.com';

class NotebookAccessor {
  constructor(token) {
    this.headers = {
      'Authorization': `Bearer ${token}`,
      'Content-Type': 'application/json'
    };
  }

  async loadProject() {
    const url = `${BASE_URL}/api/${TENANT_ID}/project/${PROJECT_ID}/load`;
    const response = await fetch(url, { headers: this.headers });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return await response.json();
  }

  async listNotebooks(investigationId) {
    const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/investigation/${investigationId}/notebooks`;
    const response = await fetch(url, { headers: this.headers });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return await response.json();
  }

  async getMainNotebook(investigationId) {
    const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/investigation/${investigationId}/main-notebook`;
    const response = await fetch(url, { headers: this.headers });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return await response.json();
  }
}

// Usage
const accessor = new NotebookAccessor('your-auth-token');
const investigationId = '11111111-2222-3333-4444-555555555555';

// Step 1: Load project
console.log('Loading project into cache...');
const loadResult = await accessor.loadProject();
console.log(`Project loaded: ${loadResult.projectName}`);

// Step 2: List notebooks
const notebooks = await accessor.listNotebooks(investigationId);
console.log(`\nFound ${notebooks.totalCount} notebooks:`);
notebooks.notebooks.forEach(nb => {
  console.log(`  - ${nb.name}: ${nb.blockCount} blocks`);
});

// Get main notebook
const main = await accessor.getMainNotebook(investigationId);
console.log(`\nMain notebook: ${main.name} (${main.blockCount} blocks)`);
```

---

## Best Practices

1. **Always Load Project First**: Before accessing notebooks, ensure the project is loaded into cache
2. **Cache Duration**: Projects remain in cache for 30 minutes after last access
3. **Touch Session**: API calls automatically extend cache lifetime
4. **Use Main Notebook**: For basic analysis, the main notebook contains the primary workflow
5. **Notebook Order**: Notebooks are ordered by `notebookOrder` for consistent display

---

# Notebook API

Notebooks are containers for analysis blocks that define process mining workflows within investigations. Use the Notebook API to create, manage, and execute analysis workflows programmatically.

## Key Concepts

### What Are Notebooks?

Notebooks contain ordered sequences of analysis blocks:

- **Filters**: Narrow down the data to specific cases or events
- **Calculators**: Compute metrics, durations, and derived values
- **Insights**: Generate visualizations and statistical analysis
- **Dashboards**: Create shareable reports

Blocks are executed in order, with each block receiving data from its parent block.

### Notebook Types

| Type | Value | Description |
|------|-------|-------------|
| Standard | 0 | Regular analysis notebook |
| Template | 1 | Template-based notebook |
| BaseKnowledge | 2 | Foundational knowledge notebook |

### Auto-Load Behavior

Notebook CRUD operations **automatically load the project** into the shared cache. You don't need to explicitly call `/project/{id}/load` before creating, updating, or deleting notebooks.
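By contrast, the read-only notebook access endpoints described earlier require an explicit project load and return 404 on a cache miss. A thin wrapper can recover by loading the project and retrying once. This is a sketch with illustrative stand-ins (the `load_project`/`call` callables are hypothetical wiring, not API methods); a real client should also inspect the 404 body to distinguish a cache miss from a genuinely missing resource.

```python
def with_project_cache(load_project, call, max_attempts=2):
    """Run a read call, loading the project into cache on a 404 cache miss.

    load_project() should issue GET /api/{tenantId}/project/{projectId}/load;
    call() should perform the read and return (status_code, body).
    """
    for attempt in range(max_attempts):
        status, body = call()
        if status == 404 and attempt + 1 < max_attempts:
            load_project()  # cache miss: load the project, then retry
            continue
        return status, body

# Demo with a stubbed transport: the first call misses the cache.
state = {'cached': False}

def load():
    state['cached'] = True

def list_notebooks():
    if not state['cached']:
        return 404, 'Project not found in cache.'
    return 200, {'notebooks': [], 'totalCount': 0}

status, body = with_project_cache(load, list_notebooks)
# After one load-and-retry, the call succeeds with a 200
```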
```python
# Just call the operation directly - project loads automatically
response = requests.post(
    f"{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/notebook/investigation/{investigation_id}",
    json={"name": "New Notebook"},
    headers=headers
)
```

### Optimistic Locking

Update operations support optional conflict detection using `DateModified`:

```python
# Include DateModified to detect concurrent modifications
response = requests.put(
    f"{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/notebook/{notebook_id}",
    json={
        "name": "Updated Name",
        "dateModified": "2024-01-15T10:30:00Z"  # From your last GET
    },
    headers=headers
)
# If another user modified the notebook, returns 409 Conflict
```

---

## API Endpoints

### Notebook CRUD

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/api/{tenantId}/{projectId}/notebook/investigation/{investigationId}` | List notebooks in investigation |
| POST | `/api/{tenantId}/{projectId}/notebook/investigation/{investigationId}` | Create notebook |
| POST | `/api/{tenantId}/{projectId}/notebook/investigation/{investigationId}/from-template` | Create from template |
| GET | `/api/{tenantId}/{projectId}/notebook/{notebookId}` | Get notebook details |
| PUT | `/api/{tenantId}/{projectId}/notebook/{notebookId}` | Update notebook |
| DELETE | `/api/{tenantId}/{projectId}/notebook/{notebookId}` | Delete notebook |
| POST | `/api/{tenantId}/{projectId}/notebook/{notebookId}/copy` | Copy notebook |

### Block Operations

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/api/{tenantId}/{projectId}/notebook/{notebookId}/blocks` | List blocks |
| POST | `/api/{tenantId}/{projectId}/notebook/{notebookId}/blocks` | Create block |
| GET | `/api/{tenantId}/{projectId}/notebook/{notebookId}/blocks/order` | Get block order |
| PUT | `/api/{tenantId}/{projectId}/notebook/{notebookId}/blocks/order` | Reorder blocks |

### Utility

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/api/{tenantId}/{projectId}/notebook/{notebookId}/url` | Generate shareable URL |
| GET | `/api/{tenantId}/{projectId}/notebook/ping` | Authenticated connectivity test |
| GET | `/api/{tenantId}/{projectId}/notebook/unauthorized-ping` | Public connectivity test |

---

## Quick Start

### List Notebooks in an Investigation

```bash
curl -X GET "https://your-mindzie-instance.com/api/{tenantId}/{projectId}/notebook/investigation/{investigationId}" \
  -H "Authorization: Bearer YOUR_API_KEY"
```

### Create a Notebook

```bash
curl -X POST "https://your-mindzie-instance.com/api/{tenantId}/{projectId}/notebook/investigation/{investigationId}" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "My Analysis", "description": "Process analysis workflow"}'
```

### Create from Template

```bash
curl -X POST "https://your-mindzie-instance.com/api/{tenantId}/{projectId}/notebook/investigation/{investigationId}/from-template" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"templateId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", "name": "My Analysis"}'
```

---

## Response Structure

### Notebook Response

```json
{
  "notebookId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "investigationId": "11111111-2222-3333-4444-555555555555",
  "name": "Main Analysis",
  "description": "Primary process analysis workflow",
  "dateCreated": "2024-01-15T10:30:00Z",
  "dateModified": "2024-01-20T14:45:00Z",
  "createdBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "modifiedBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "notebookType": 0,
  "notebookOrder": 1.0,
  "lastExecutionDuration": 2.5,
  "blockCount": 12
}
```

### Response Fields

| Field | Type | Description |
|-------|------|-------------|
| `notebookId` | GUID | Unique notebook identifier |
| `investigationId` | GUID | Parent investigation ID |
| `name` | string | Notebook name |
| `description` | string | Notebook description |
| `dateCreated` | datetime | Creation timestamp |
| `dateModified` | datetime | Last modification timestamp |
| `createdBy` | GUID | Creator user ID |
| `modifiedBy` | GUID | Last modifier user ID |
| `notebookType` | integer | Type (0=Standard, 1=Template, 2=BaseKnowledge) |
| `notebookOrder` | decimal | Display order within investigation |
| `lastExecutionDuration` | double | Last execution time in seconds |
| `blockCount` | integer | Number of blocks in notebook |

---

## What's Next?

- [Notebook Management](/mindzie_api/notebook/management) - Full CRUD operations
- [Block Operations](/mindzie_api/notebook/blocks) - Create and manage blocks
- [Template API](/mindzie_api/template) - Create notebooks from templates
- [Execution API](/mindzie_api/execution) - Execute notebooks

---

Full CRUD operations for managing notebooks within investigations. All modification operations automatically load the project into the shared cache.

---

## List Notebooks

**GET** `/api/{tenantId}/{projectId}/notebook/investigation/{investigationId}`

Returns all notebooks within an investigation.
### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project identifier |
| `investigationId` | GUID | Yes | The investigation identifier |

### Response (200 OK)

```json
[
  {
    "notebookId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
    "investigationId": "11111111-2222-3333-4444-555555555555",
    "name": "Main",
    "description": "Primary analysis notebook",
    "dateCreated": "2024-01-15T10:30:00Z",
    "dateModified": "2024-01-20T14:45:00Z",
    "createdBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "modifiedBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "notebookType": 0,
    "notebookOrder": 1.0,
    "lastExecutionDuration": 2.5,
    "blockCount": 12
  },
  {
    "notebookId": "bbbbbbbb-cccc-dddd-eeee-ffffffffffff",
    "investigationId": "11111111-2222-3333-4444-555555555555",
    "name": "Variant Analysis",
    "description": "Process variant exploration",
    "dateCreated": "2024-01-16T09:00:00Z",
    "dateModified": "2024-01-18T11:30:00Z",
    "createdBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "modifiedBy": null,
    "notebookType": 0,
    "notebookOrder": 2.0,
    "lastExecutionDuration": 1.2,
    "blockCount": 8
  }
]
```

### Error Responses

**Not Found (404)**

```json
{
  "Error": "Investigation not found",
  "InvestigationId": "11111111-2222-3333-4444-555555555555"
}
```

---

## Get Notebook

**GET** `/api/{tenantId}/{projectId}/notebook/{notebookId}`

Returns detailed information for a specific notebook.
### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project identifier |
| `notebookId` | GUID | Yes | The notebook identifier |

### Response (200 OK)

```json
{
  "notebookId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "investigationId": "11111111-2222-3333-4444-555555555555",
  "name": "Main",
  "description": "Primary analysis notebook",
  "dateCreated": "2024-01-15T10:30:00Z",
  "dateModified": "2024-01-20T14:45:00Z",
  "createdBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "modifiedBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "notebookType": 0,
  "notebookOrder": 1.0,
  "lastExecutionDuration": 2.5,
  "blockCount": 12
}
```

---

## Create Notebook

**POST** `/api/{tenantId}/{projectId}/notebook/investigation/{investigationId}`

Creates a new empty notebook in the investigation.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project identifier |
| `investigationId` | GUID | Yes | The investigation identifier |

### Request Body

```json
{
  "name": "Process Analysis",
  "description": "Detailed process analysis workflow",
  "notebookType": 0
}
```

### Request Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `name` | string | Yes | Notebook name (unique within investigation) |
| `description` | string | No | Notebook description |
| `notebookType` | integer | No | Type (0=Standard, default) |

### Response (201 Created)

```json
{
  "notebookId": "cccccccc-dddd-eeee-ffff-000000000000",
  "investigationId": "11111111-2222-3333-4444-555555555555",
  "name": "Process Analysis",
  "description": "Detailed process analysis workflow",
  "dateCreated": "2024-03-01T10:00:00Z",
  "dateModified": "2024-03-01T10:00:00Z",
  "createdBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "modifiedBy": null,
  "notebookType": 0,
  "notebookOrder": 3.0,
  "lastExecutionDuration": 0,
  "blockCount": 0
}
```

### Error Responses

**Bad Request (400)**

```json
{
  "Error": "Failed to create notebook. The name may already exist in this investigation."
}
```

---

## Create from Template

**POST** `/api/{tenantId}/{projectId}/notebook/investigation/{investigationId}/from-template`

Creates a notebook from a pre-defined template, including all blocks and configurations.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project identifier |
| `investigationId` | GUID | Yes | The investigation identifier |

### Request Body

```json
{
  "templateId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "name": "Process Discovery Analysis",
  "description": "Analysis using Process Discovery template"
}
```

### Request Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `templateId` | GUID | Yes | Template to use |
| `name` | string | No | Override template name |
| `description` | string | No | Override template description |

### Response (201 Created)

Returns the created notebook with blocks from the template.

### Error Responses

**Not Found (404)**

```json
{
  "Error": "Template not found"
}
```

---

## Update Notebook

**PUT** `/api/{tenantId}/{projectId}/notebook/{notebookId}`

Updates notebook metadata. Supports optimistic locking via `DateModified`.
### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project identifier |
| `notebookId` | GUID | Yes | The notebook identifier |

### Request Body

```json
{
  "name": "Updated Notebook Name",
  "description": "Updated description",
  "notebookOrder": 2.5,
  "dateModified": "2024-01-20T14:45:00Z"
}
```

### Request Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `name` | string | Yes | Notebook name |
| `description` | string | No | Notebook description |
| `notebookOrder` | decimal | No | Display order |
| `dateModified` | datetime | No | For optimistic locking |

### Response (200 OK)

Returns the updated notebook.

### Optimistic Locking

If `dateModified` is provided and doesn't match the server's current value, returns 409 Conflict:

```json
{
  "Error": "CONFLICT",
  "Message": "Notebook was modified by another user since you last fetched it",
  "YourDateModified": "2024-01-20T14:45:00Z",
  "CurrentDateModified": "2024-01-21T09:30:00Z",
  "ModifiedBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "Resolution": "GET /api/{tenantId}/{projectId}/notebook/{notebookId} to fetch current state, then retry"
}
```

---

## Delete Notebook

**DELETE** `/api/{tenantId}/{projectId}/notebook/{notebookId}`

Permanently deletes a notebook and all its blocks.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project identifier |
| `notebookId` | GUID | Yes | The notebook identifier |

### Response (200 OK)

```json
{
  "Message": "Notebook successfully deleted",
  "NotebookId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
}
```

---

## Copy Notebook

**POST** `/api/{tenantId}/{projectId}/notebook/{notebookId}/copy`

Creates a complete copy of a notebook, including all blocks. The copy can target the same or a different investigation.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project identifier |
| `notebookId` | GUID | Yes | Source notebook identifier |

### Request Body

```json
{
  "name": "Copy of Main Analysis",
  "destinationInvestigationId": "22222222-3333-4444-5555-666666666666"
}
```

### Request Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `name` | string | No | Name for copy (default: "Copy of {original}") |
| `destinationInvestigationId` | GUID | No | Target investigation (default: same investigation) |

### Response (201 Created)

Returns the newly created notebook copy.

---

## Get Shareable URL

**GET** `/api/{tenantId}/{projectId}/notebook/{notebookId}/url`

Generates a shareable URL for direct access to the notebook in the UI.

### Response (200 OK)

```json
{
  "notebookId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "url": "https://your-mindzie-instance.com/investigation/12345678/87654321/11111111/notebook/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "relativePath": "/investigation/12345678/87654321/11111111/notebook/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "expiresAt": null
}
```

---

## Implementation Examples

### Python

```python
import requests

BASE_URL = 'https://your-mindzie-instance.com'
TENANT_ID = '12345678-1234-1234-1234-123456789012'
PROJECT_ID = '87654321-4321-4321-4321-210987654321'

class NotebookManager:
    def __init__(self, api_key):
        self.headers = {
            'Authorization': f'Bearer {api_key}',
            'Content-Type': 'application/json'
        }

    def list_notebooks(self, investigation_id):
        """List all notebooks in an investigation."""
        url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/notebook/investigation/{investigation_id}'
        response = requests.get(url, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def get_notebook(self, notebook_id):
        """Get notebook details."""
        url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/notebook/{notebook_id}'
        response = requests.get(url, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def create_notebook(self, investigation_id, name, description=None):
        """Create a new notebook."""
        url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/notebook/investigation/{investigation_id}'
        data = {'name': name, 'description': description}
        response = requests.post(url, json=data, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def create_from_template(self, investigation_id, template_id, name=None):
        """Create notebook from template."""
        url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/notebook/investigation/{investigation_id}/from-template'
        data = {'templateId': template_id, 'name': name}
        response = requests.post(url, json=data, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def update_notebook(self, notebook_id, name, description=None, date_modified=None):
        """Update notebook with optimistic locking."""
        url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/notebook/{notebook_id}'
        data = {'name': name, 'description': description}
        if date_modified:
            data['dateModified'] = date_modified
        response = requests.put(url, json=data, headers=self.headers)
        if response.status_code == 409:
            conflict = response.json()
            raise Exception(f"Conflict: {conflict['Message']}")
        response.raise_for_status()
        return response.json()

    def delete_notebook(self, notebook_id):
        """Delete a notebook."""
        url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/notebook/{notebook_id}'
        response = requests.delete(url, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def copy_notebook(self, notebook_id, name=None, destination_investigation=None):
        """Copy a notebook."""
        url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/notebook/{notebook_id}/copy'
        data = {'name': name}
        if destination_investigation:
            data['destinationInvestigationId'] = destination_investigation
        response = requests.post(url, json=data, headers=self.headers)
        response.raise_for_status()
        return response.json()

# Usage
manager = NotebookManager('your-api-key')
investigation_id = '11111111-2222-3333-4444-555555555555'

# List notebooks
notebooks = manager.list_notebooks(investigation_id)
print(f"Found {len(notebooks)} notebooks")

# Create a notebook
notebook = manager.create_notebook(investigation_id, 'New Analysis', 'My workflow')
print(f"Created notebook: {notebook['notebookId']}")

# Copy the notebook
copy = manager.copy_notebook(notebook['notebookId'], 'Copy of New Analysis')
print(f"Created copy: {copy['notebookId']}")

# Update with optimistic locking
try:
    updated = manager.update_notebook(
        notebook['notebookId'],
        'Renamed Analysis',
        date_modified=notebook['dateModified']
    )
except Exception as e:
    print(f"Update failed: {e}")
```

### JavaScript/Node.js

```javascript
const BASE_URL = 'https://your-mindzie-instance.com';
const TENANT_ID = '12345678-1234-1234-1234-123456789012';
const PROJECT_ID = '87654321-4321-4321-4321-210987654321';

class NotebookManager {
  constructor(apiKey) {
    this.headers = {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    };
  }

  async listNotebooks(investigationId) {
    const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/notebook/investigation/${investigationId}`;
    const response = await fetch(url, { headers: this.headers });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return response.json();
  }

  async createNotebook(investigationId, name, description = null) {
    const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/notebook/investigation/${investigationId}`;
    const response = await fetch(url, {
      method: 'POST',
      headers: this.headers,
      body: JSON.stringify({ name, description })
    });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return response.json();
  }

  async updateNotebook(notebookId, name, description = null, dateModified = null) {
    const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/notebook/${notebookId}`;
    const body = { name, description };
    if (dateModified) body.dateModified = dateModified;
    const response = await fetch(url, {
      method: 'PUT',
      headers: this.headers,
      body: JSON.stringify(body)
    });
    if (response.status === 409) {
      const conflict = await response.json();
      throw new Error(`Conflict: ${conflict.Message}`);
    }
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return response.json();
  }

  async deleteNotebook(notebookId) {
    const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/notebook/${notebookId}`;
    const response = await fetch(url, { method: 'DELETE', headers: this.headers });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return response.json();
  }

  async copyNotebook(notebookId, name = null, destinationInvestigationId = null) {
    const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/notebook/${notebookId}/copy`;
    const body = {};
    if (name) body.name = name;
    if (destinationInvestigationId) body.destinationInvestigationId = destinationInvestigationId;
    const response = await fetch(url, {
      method: 'POST',
      headers: this.headers,
      body: JSON.stringify(body)
    });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return response.json();
  }
}

// Usage
const manager = new NotebookManager('your-api-key');
const investigationId = '11111111-2222-3333-4444-555555555555';

// List notebooks
const notebooks = await manager.listNotebooks(investigationId);
console.log(`Found ${notebooks.length} notebooks`);

// Create and copy
const notebook = await manager.createNotebook(investigationId, 'New Analysis');
const copy = await manager.copyNotebook(notebook.notebookId, 'Copy of Analysis');
```

---

## Best Practices

1. **Auto-Load**: Don't explicitly load projects for notebook CRUD - it's automatic
2. **Optimistic Locking**: Include `dateModified` in updates to detect conflicts
3. **Templates**: Use templates for consistent analysis workflows
4. **Naming**: Use descriptive names unique within each investigation
5. **Cleanup**: Delete unused notebooks to keep investigations organized

---

Create and manage analysis blocks within notebooks. Blocks are the individual steps that make up an analysis workflow.

---

## Block Types

| Type | Description |
|------|-------------|
| `Filter` | Narrow down data to specific cases or events |
| `Calculator` | Compute metrics, durations, and derived values |
| `Opportunity` | Identify improvement opportunities |
| `Insight` | Generate visualizations and statistics |
| `Dashboard` | Create shareable report panels |
| `Alert` | Define monitoring alerts |

---

## List Blocks

**GET** `/api/{tenantId}/{projectId}/notebook/{notebookId}/blocks`

Returns all blocks in a notebook in execution order.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project identifier |
| `notebookId` | GUID | Yes | The notebook identifier |

### Response (200 OK)

```json
{
  "notebookId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "blocks": [
    {
      "blockId": "11111111-1111-1111-1111-111111111111",
      "blockType": "Filter",
      "operatorName": "ActivityFilter",
      "name": "Select Key Activities",
      "description": "Filter to main process activities",
      "parentId": null,
      "order": 0,
      "configuration": "{\"activities\": [\"Create Order\", \"Approve\", \"Ship\"]}",
      "dateCreated": "2024-01-15T10:30:00Z",
      "createdBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
      "dateModified": "2024-01-15T10:30:00Z",
      "isActive": true
    },
    {
      "blockId": "22222222-2222-2222-2222-222222222222",
      "blockType": "Calculator",
      "operatorName": "DurationCalculator",
      "name": "Case Duration",
      "description": "Calculate total case duration",
      "parentId": "11111111-1111-1111-1111-111111111111",
      "order": 0,
      "configuration": "{\"unit\": \"days\"}",
      "dateCreated": "2024-01-15T10:35:00Z",
      "createdBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
      "dateModified": "2024-01-15T10:35:00Z",
      "isActive": true
    }
  ],
  "totalCount": 2
}
```

### Block Fields

| Field | Type | Description |
|-------|------|-------------|
| `blockId` | GUID | Unique block identifier |
| `blockType` | string | Type (Filter, Calculator, etc.) |
| `operatorName` | string | Specific operator name |
| `name` | string | Block display name |
| `description` | string | Block description |
| `parentId` | GUID | Parent block ID (data flows from parent) |
| `order` | integer | Execution order hint |
| `configuration` | string | JSON configuration for the operator |
| `dateCreated` | datetime | Creation timestamp |
| `createdBy` | GUID | Creator user ID |
| `dateModified` | datetime | Last modification timestamp |
| `isActive` | boolean | Whether block is enabled |

---

## Create Block

**POST** `/api/{tenantId}/{projectId}/notebook/{notebookId}/blocks`

Creates a new analysis block in the notebook.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |
| `projectId` | GUID | Yes | The project identifier |
| `notebookId` | GUID | Yes | The notebook identifier |

### Request Body

```json
{
  "blockType": "Filter",
  "operatorName": "ActivityFilter",
  "name": "Filter by Status",
  "description": "Keep only completed orders",
  "configuration": "{\"activities\": [\"Complete\", \"Shipped\"]}",
  "insertAfterBlockId": "11111111-1111-1111-1111-111111111111"
}
```

### Request Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `blockType` | string | No | Type (Filter, Calculator, etc.)
Default: Filter | | `operatorName` | string | Yes | Specific operator to use | | `name` | string | Yes | Block display name | | `description` | string | No | Block description | | `configuration` | string | No | JSON configuration for the operator | | `insertAfterBlockId` | GUID | No | Insert after this block (default: end) | ### Response (201 Created) ```json { "blockId": "33333333-3333-3333-3333-333333333333", "blockType": "Filter", "operatorName": "ActivityFilter", "name": "Filter by Status", "description": "Keep only completed orders", "parentId": "11111111-1111-1111-1111-111111111111", "order": 0, "configuration": "{\"activities\": [\"Complete\", \"Shipped\"]}", "dateCreated": "2024-03-01T10:00:00Z", "createdBy": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", "dateModified": "2024-03-01T10:00:00Z", "isActive": true } ``` --- ## Get Block Order **GET** `/api/{tenantId}/{projectId}/notebook/{notebookId}/blocks/order` Returns the current execution order of blocks. ### Response (200 OK) ```json { "notebookId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", "blockCount": 3, "blockOrder": [ { "blockId": "11111111-1111-1111-1111-111111111111", "parentId": null, "position": 1 }, { "blockId": "22222222-2222-2222-2222-222222222222", "parentId": "11111111-1111-1111-1111-111111111111", "position": 2 }, { "blockId": "33333333-3333-3333-3333-333333333333", "parentId": "22222222-2222-2222-2222-222222222222", "position": 3 } ] } ``` --- ## Reorder Blocks **PUT** `/api/{tenantId}/{projectId}/notebook/{notebookId}/blocks/order` Changes the execution order of blocks in the notebook. ### Request Body ```json { "blockIds": [ "11111111-1111-1111-1111-111111111111", "33333333-3333-3333-3333-333333333333", "22222222-2222-2222-2222-222222222222" ] } ``` ### Request Fields | Field | Type | Required | Description | |-------|------|----------|-------------| | `blockIds` | array | Yes | Block IDs in desired execution order | ### Response (200 OK) Returns the updated block order. 
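A typical reorder is a round trip: fetch the current order, rearrange the `blockIds` locally, then PUT the new sequence back. The list manipulation itself is pure and easy to get right in isolation. A minimal sketch of that local step (the helper name `move_block_to_front` is illustrative, not part of any SDK; the surrounding HTTP calls follow the same patterns as the other examples in this document):

```python
def move_block_to_front(order_response, block_id):
    """Turn a GET .../blocks/order response into a blockIds list for
    PUT .../blocks/order, with block_id promoted to the first position."""
    ids = [b["blockId"] for b in order_response["blockOrder"]]
    if block_id not in ids:
        raise ValueError(f"block {block_id} is not in this notebook")
    ids.remove(block_id)
    return [block_id] + ids

# Example using the Get Block Order response shape
order = {
    "notebookId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
    "blockCount": 3,
    "blockOrder": [
        {"blockId": "11111111-1111-1111-1111-111111111111", "parentId": None, "position": 1},
        {"blockId": "22222222-2222-2222-2222-222222222222", "parentId": "11111111-1111-1111-1111-111111111111", "position": 2},
        {"blockId": "33333333-3333-3333-3333-333333333333", "parentId": "22222222-2222-2222-2222-222222222222", "position": 3},
    ],
}
new_ids = move_block_to_front(order, "33333333-3333-3333-3333-333333333333")
# The PUT request body would then be {"blockIds": new_ids}
```

Because the server rebuilds the parent chain from the submitted order, computing the list locally and submitting it once is simpler than issuing several incremental moves.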
---

## Block Execution Flow

Blocks form a chain where each block receives data from its parent:

```
[Investigation Start]
        |
        v
[Filter: Activity Filter]   -> Filters to specific activities
        |
        v
[Calculator: Duration]      -> Calculates durations for filtered data
        |
        v
[Insight: Statistics]       -> Generates statistics on calculated data
```

When you reorder blocks, the parent chain is updated automatically.

---

## Implementation Examples

### Python

```python
import requests
import json

BASE_URL = 'https://your-mindzie-instance.com'
TENANT_ID = '12345678-1234-1234-1234-123456789012'
PROJECT_ID = '87654321-4321-4321-4321-210987654321'

class BlockManager:
    def __init__(self, api_key):
        self.headers = {
            'Authorization': f'Bearer {api_key}',
            'Content-Type': 'application/json'
        }

    def list_blocks(self, notebook_id):
        """List all blocks in a notebook."""
        url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/notebook/{notebook_id}/blocks'
        response = requests.get(url, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def create_block(self, notebook_id, block_type, operator_name, name,
                     description=None, configuration=None, insert_after=None):
        """Create a new block."""
        url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/notebook/{notebook_id}/blocks'
        data = {
            'blockType': block_type,
            'operatorName': operator_name,
            'name': name,
            'description': description,
            'configuration': json.dumps(configuration) if configuration else None,
            'insertAfterBlockId': insert_after
        }
        response = requests.post(url, json=data, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def get_block_order(self, notebook_id):
        """Get current block execution order."""
        url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/notebook/{notebook_id}/blocks/order'
        response = requests.get(url, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def reorder_blocks(self, notebook_id, block_ids):
        """Reorder blocks in notebook."""
        url = f'{BASE_URL}/api/{TENANT_ID}/{PROJECT_ID}/notebook/{notebook_id}/blocks/order'
        response = requests.put(url, json={'blockIds': block_ids}, headers=self.headers)
        response.raise_for_status()
        return response.json()

# Usage
manager = BlockManager('your-api-key')
notebook_id = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'

# List blocks
blocks = manager.list_blocks(notebook_id)
print(f"Found {blocks['totalCount']} blocks")

# Create a filter block
filter_block = manager.create_block(
    notebook_id,
    block_type='Filter',
    operator_name='ActivityFilter',
    name='Select Key Activities',
    configuration={'activities': ['Create Order', 'Approve', 'Ship']}
)
print(f"Created filter: {filter_block['blockId']}")

# Create a calculator block after the filter
calc_block = manager.create_block(
    notebook_id,
    block_type='Calculator',
    operator_name='DurationCalculator',
    name='Case Duration',
    configuration={'unit': 'days'},
    insert_after=filter_block['blockId']
)
print(f"Created calculator: {calc_block['blockId']}")

# Check execution order
order = manager.get_block_order(notebook_id)
print(f"Block order: {[b['blockId'] for b in order['blockOrder']]}")
```

### JavaScript/Node.js

```javascript
const BASE_URL = 'https://your-mindzie-instance.com';
const TENANT_ID = '12345678-1234-1234-1234-123456789012';
const PROJECT_ID = '87654321-4321-4321-4321-210987654321';

class BlockManager {
  constructor(apiKey) {
    this.headers = {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    };
  }

  async listBlocks(notebookId) {
    const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/notebook/${notebookId}/blocks`;
    const response = await fetch(url, { headers: this.headers });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return response.json();
  }

  async createBlock(notebookId, blockType, operatorName, name, options = {}) {
    const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/notebook/${notebookId}/blocks`;
    const body = {
      blockType,
      operatorName,
      name,
      description: options.description,
      configuration: options.configuration ? JSON.stringify(options.configuration) : null,
      insertAfterBlockId: options.insertAfter
    };
    const response = await fetch(url, {
      method: 'POST',
      headers: this.headers,
      body: JSON.stringify(body)
    });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return response.json();
  }

  async reorderBlocks(notebookId, blockIds) {
    const url = `${BASE_URL}/api/${TENANT_ID}/${PROJECT_ID}/notebook/${notebookId}/blocks/order`;
    const response = await fetch(url, {
      method: 'PUT',
      headers: this.headers,
      body: JSON.stringify({ blockIds })
    });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return response.json();
  }
}

// Usage
const manager = new BlockManager('your-api-key');
const notebookId = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee';

// List blocks
const blocks = await manager.listBlocks(notebookId);
console.log(`Found ${blocks.totalCount} blocks`);

// Create blocks
const filter = await manager.createBlock(notebookId, 'Filter', 'ActivityFilter', 'Key Activities', {
  configuration: { activities: ['Create', 'Approve'] }
});
const calc = await manager.createBlock(notebookId, 'Calculator', 'DurationCalculator', 'Duration', {
  insertAfter: filter.blockId
});
```

---

## Best Practices

1. **Block Order Matters**: Data flows from parent to child - plan your workflow
2. **Use Templates**: For complex workflows, create from templates
3. **Configuration JSON**: Store operator-specific settings as JSON strings
4. **InsertAfter**: Use `insertAfterBlockId` to control positioning
5. **Execution**: After creating blocks, execute the notebook to see results

---

# Template API

Templates are reusable notebook configurations that define analysis workflows. Use the Template API to list, retrieve, create, and manage notebook templates programmatically.

## Key Concepts

### What Are Templates?

Templates are pre-configured notebook definitions containing:

- **Blocks**: Filters, calculators, insights, and other analysis components
- **MCL Text**: The configuration text that defines the notebook structure
- **Metadata**: Name, description, category, and process context

When you create a notebook from a template, all blocks and configurations are automatically applied.

### Template Types

| Type | Scope | Can Create via API? | Can Delete via API? |
|------|-------|---------------------|---------------------|
| **Global** | All tenants | No | No |
| **Tenant-Specific** | Single tenant | Yes | Yes |

Global templates are system-wide and managed through the admin interface. Tenant-specific templates can be created and managed via this API.

### Template Categories

Templates are organized into categories:

| Category | Description |
|----------|-------------|
| `Templates` | Standard analysis templates |
| `Custom` | User-created custom templates |
| `BaseKnowledge` | Foundational knowledge templates |

---

## Authentication

All Template API endpoints require a **Global API Key**. Tenant API keys cannot access template operations.
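Because this failure mode is common when switching between tenant and global keys, a client can detect it explicitly instead of treating it as a generic error. A minimal sketch, where `requires_global_key` is an illustrative helper and the error shape is taken from the example response in this section:

```python
def requires_global_key(error_body):
    """Return True when a parsed JSON error body indicates the endpoint
    needs a Global API key rather than a tenant key."""
    message = error_body.get("error", "") if isinstance(error_body, dict) else ""
    return "Global API key" in message

# Error body as documented for template endpoints called with a tenant key
example = {
    "error": "This endpoint requires a Global API key.",
    "hint": "Global API keys can be created at /admin/global-api-keys",
}
print(requires_global_key(example))  # True
```

Surfacing the `hint` field to the caller makes the fix self-explanatory: the key must be created at `/admin/global-api-keys`.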
```bash
curl -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" \
  https://your-mindzie-instance.com/api/templates
```

If you use a non-global API key, you'll receive:

```json
{
  "error": "This endpoint requires a Global API key.",
  "hint": "Global API keys can be created at /admin/global-api-keys"
}
```

---

## API Endpoints

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/api/templates` | List all global templates |
| GET | `/api/templates/tenant/{tenantId}` | List templates for a tenant (global + tenant-specific) |
| GET | `/api/templates/category/{category}` | List templates by category |
| GET | `/api/templates/{templateId}` | Get template details with MCL text |
| GET | `/api/templates/{templateId}/thumbnail` | Get template thumbnail image |
| POST | `/api/templates/tenant/{tenantId}` | Create a tenant-specific template |
| PUT | `/api/templates/{templateId}` | Update a template |
| DELETE | `/api/templates/{templateId}` | Delete a template |

---

## Quick Start

### List All Templates for a Tenant

```bash
curl -X GET "https://your-mindzie-instance.com/api/templates/tenant/12345678-1234-1234-1234-123456789012" \
  -H "Authorization: Bearer YOUR_GLOBAL_API_KEY"
```

### Get Template Details

```bash
curl -X GET "https://your-mindzie-instance.com/api/templates/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" \
  -H "Authorization: Bearer YOUR_GLOBAL_API_KEY"
```

### Create a Notebook from Template

Use the Notebook API to create a notebook from a template:

```bash
curl -X POST "https://your-mindzie-instance.com/api/{tenantId}/{projectId}/notebook/investigation/{investigationId}/from-template" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "templateId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
    "name": "My Analysis"
  }'
```

---

## Response Structure

### Template List Response

```json
{
  "templates": [
    {
      "templateId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
      "name": "Process Discovery",
      "description": "Standard process discovery workflow",
      "category": "Templates",
      "processName": "Order to Cash",
      "tenantId": null,
      "isGlobal": true,
      "hasThumbnail": true,
      "autoAddedDefaultSortOrder": 100,
      "dateModified": "2024-01-15T10:30:00Z"
    }
  ],
  "totalCount": 1
}
```

### Template Detail Response

```json
{
  "templateId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "name": "Process Discovery",
  "description": "Standard process discovery workflow",
  "category": "Templates",
  "processName": "Order to Cash",
  "mclText": "// MCL configuration text here...",
  "tenantId": null,
  "isGlobal": true,
  "hasThumbnail": true,
  "autoAddedDefaultSortOrder": 100,
  "originatingNotebookId": null,
  "dateCreated": "2024-01-01T00:00:00Z",
  "dateModified": "2024-01-15T10:30:00Z",
  "createdBy": null,
  "createdByName": "System",
  "modifiedBy": null,
  "modifiedByName": "System"
}
```

---

## What's Next?

- [Template Management](/mindzie_api/template/management) - Full CRUD operations for templates
- [Notebook API](/mindzie_api/notebook) - Create notebooks from templates

---

Full CRUD operations for managing notebook templates. All endpoints require a Global API Key.

---

## List Global Templates

**GET** `/api/templates`

Returns all global templates available across all tenants.

### Response (200 OK)

```json
{
  "templates": [
    {
      "templateId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
      "name": "Process Discovery",
      "description": "Standard process discovery workflow",
      "category": "Templates",
      "processName": "Order to Cash",
      "tenantId": null,
      "isGlobal": true,
      "hasThumbnail": true,
      "autoAddedDefaultSortOrder": 100,
      "dateModified": "2024-01-15T10:30:00Z"
    }
  ],
  "totalCount": 1
}
```

---

## List Templates for Tenant

**GET** `/api/templates/tenant/{tenantId}`

Returns all templates available to a specific tenant, including both global and tenant-specific templates.
### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |

### Response (200 OK)

```json
{
  "templates": [
    {
      "templateId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
      "name": "Process Discovery",
      "description": "Standard process discovery workflow",
      "category": "Templates",
      "processName": "Order to Cash",
      "tenantId": null,
      "isGlobal": true,
      "hasThumbnail": true,
      "autoAddedDefaultSortOrder": 100,
      "dateCreated": "2024-01-01T00:00:00Z",
      "dateModified": "2024-01-15T10:30:00Z",
      "createdByName": "System",
      "modifiedByName": "System"
    },
    {
      "templateId": "bbbbbbbb-cccc-dddd-eeee-ffffffffffff",
      "name": "Custom Analysis",
      "description": "Tenant-specific custom analysis",
      "category": "Custom",
      "processName": null,
      "tenantId": "12345678-1234-1234-1234-123456789012",
      "isGlobal": false,
      "hasThumbnail": false,
      "autoAddedDefaultSortOrder": 0,
      "dateCreated": "2024-02-01T09:00:00Z",
      "dateModified": "2024-02-15T14:30:00Z",
      "createdByName": "API",
      "modifiedByName": "API"
    }
  ],
  "totalCount": 2
}
```

---

## List Templates by Category

**GET** `/api/templates/category/{category}`

Returns templates filtered by category.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `category` | string | Yes | Category name: `Templates`, `Custom`, or `BaseKnowledge` |

### Query Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | No | Filter to specific tenant's templates |

### Example

```bash
# Get all custom templates
curl -X GET "https://your-mindzie-instance.com/api/templates/category/Custom" \
  -H "Authorization: Bearer YOUR_GLOBAL_API_KEY"

# Get custom templates for a specific tenant
curl -X GET "https://your-mindzie-instance.com/api/templates/category/Custom?tenantId=12345678-1234-1234-1234-123456789012" \
  -H "Authorization: Bearer YOUR_GLOBAL_API_KEY"
```

---

## Get Template Details

**GET** `/api/templates/{templateId}`

Returns full template details including the MCL configuration text.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `templateId` | GUID | Yes | The template identifier |

### Response (200 OK)

```json
{
  "templateId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "name": "Process Discovery",
  "description": "Standard process discovery workflow with variant analysis",
  "category": "Templates",
  "processName": "Order to Cash",
  "mclText": "// MCL configuration defining notebook blocks and settings\n{\n \"blocks\": [...],\n \"settings\": {...}\n}",
  "tenantId": null,
  "isGlobal": true,
  "hasThumbnail": true,
  "autoAddedDefaultSortOrder": 100,
  "originatingNotebookId": null,
  "dateCreated": "2024-01-01T00:00:00Z",
  "dateModified": "2024-01-15T10:30:00Z",
  "createdBy": null,
  "createdByName": "System",
  "modifiedBy": null,
  "modifiedByName": "System"
}
```

### Response Fields

| Field | Type | Description |
|-------|------|-------------|
| `templateId` | GUID | Unique identifier |
| `name` | string | Template name |
| `description` | string | Template description |
| `category` | string | Category (Templates, Custom, BaseKnowledge) |
| `processName` | string | Associated process name |
| `mclText` | string | MCL configuration text |
| `tenantId` | GUID | Tenant ID (null for global templates) |
| `isGlobal` | boolean | True if this is a global template |
| `hasThumbnail` | boolean | True if thumbnail image exists |
| `autoAddedDefaultSortOrder` | integer | Display sort order |
| `originatingNotebookId` | GUID | Source notebook if created from existing notebook |
| `dateCreated` | datetime | Creation timestamp |
| `dateModified` | datetime | Last modification timestamp |
| `createdBy` | GUID | Creator user ID |
| `createdByName` | string | Creator user name |
| `modifiedBy` | GUID | Last modifier user ID |
| `modifiedByName` | string | Last modifier user name |

### Error Responses

**Not Found (404)**

```json
{
  "error": "Template with ID 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' not found"
}
```

---

## Get Template Thumbnail

**GET** `/api/templates/{templateId}/thumbnail`

Returns the thumbnail image for a template.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `templateId` | GUID | Yes | The template identifier |

### Response (200 OK)

Returns JPEG image data with `Content-Type: image/jpeg`.

### Error Responses

**Not Found (404)**

```json
{
  "error": "Thumbnail not found for template 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'"
}
```

---

## Create Template

**POST** `/api/templates/tenant/{tenantId}`

Creates a new tenant-specific template. Global templates cannot be created via API.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | GUID | Yes | The tenant identifier |

### Request Body

```json
{
  "name": "Custom Analysis Template",
  "description": "Custom analysis workflow for monthly reporting",
  "mclText": "// MCL configuration text",
  "category": "Custom",
  "processName": "Monthly Report",
  "isGlobal": false,
  "autoAddedDefaultSortOrder": 0
}
```

### Request Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `name` | string | Yes | Template name (must be unique) |
| `description` | string | No | Template description |
| `mclText` | string | Yes | MCL configuration text |
| `category` | string | No | Category (default: "Custom") |
| `processName` | string | No | Associated process name |
| `isGlobal` | boolean | No | Must be false for API creation |
| `autoAddedDefaultSortOrder` | integer | No | Display sort order |

### Response (201 Created)

```json
{
  "templateId": "cccccccc-dddd-eeee-ffff-000000000000",
  "name": "Custom Analysis Template",
  "description": "Custom analysis workflow for monthly reporting",
  "category": "Custom",
  "processName": "Monthly Report",
  "mclText": "// MCL configuration text",
  "tenantId": "12345678-1234-1234-1234-123456789012",
  "isGlobal": false,
  "hasThumbnail": false,
  "autoAddedDefaultSortOrder": 0,
  "dateCreated": "2024-03-01T10:00:00Z",
  "dateModified": "2024-03-01T10:00:00Z",
  "createdByName": "API"
}
```

### Error Responses

**Bad Request (400) - Validation Failed**

```json
{
  "error": "Validation failed",
  "validationErrors": ["Name is required", "MclText is required"]
}
```

**Conflict (409) - Duplicate Name**

```json
{
  "error": "A template with this name already exists"
}
```

---

## Update Template

**PUT** `/api/templates/{templateId}`

Updates an existing tenant-specific template. Global templates cannot be updated via API.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `templateId` | GUID | Yes | The template identifier |

### Request Body

```json
{
  "name": "Updated Template Name",
  "description": "Updated description",
  "mclText": "// Updated MCL configuration",
  "category": "Custom",
  "processName": "Updated Process",
  "autoAddedDefaultSortOrder": 10
}
```

All fields are optional - only provided fields will be updated.

### Response (200 OK)

Returns the updated template details.

### Error Responses

**Bad Request (400) - Global Template**

```json
{
  "error": "Cannot update global templates through the API"
}
```

**Not Found (404)**

```json
{
  "error": "Template not found"
}
```

**Conflict (409) - Duplicate Name**

```json
{
  "error": "A template with this name already exists"
}
```

---

## Delete Template

**DELETE** `/api/templates/{templateId}`

Deletes a tenant-specific template. Global templates cannot be deleted via API.

### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `templateId` | GUID | Yes | The template identifier |

### Response (200 OK)

```json
{
  "message": "Template deleted successfully"
}
```

### Error Responses

**Bad Request (400) - Global Template**

```json
{
  "error": "Cannot delete global templates"
}
```

**Not Found (404)**

```json
{
  "error": "Template not found"
}
```

---

## Implementation Examples

### cURL

```bash
# List all templates for a tenant
curl -X GET "https://your-mindzie-instance.com/api/templates/tenant/12345678-1234-1234-1234-123456789012" \
  -H "Authorization: Bearer YOUR_GLOBAL_API_KEY"

# Get template details
curl -X GET "https://your-mindzie-instance.com/api/templates/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" \
  -H "Authorization: Bearer YOUR_GLOBAL_API_KEY"

# Create a template
curl -X POST "https://your-mindzie-instance.com/api/templates/tenant/12345678-1234-1234-1234-123456789012" \
  -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "My Custom Template",
    "description": "Custom analysis workflow",
    "mclText": "// MCL configuration",
    "category": "Custom"
  }'

# Update a template
curl -X PUT "https://your-mindzie-instance.com/api/templates/cccccccc-dddd-eeee-ffff-000000000000" \
  -H "Authorization: Bearer YOUR_GLOBAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "Updated Name"}'

# Delete a template
curl -X DELETE "https://your-mindzie-instance.com/api/templates/cccccccc-dddd-eeee-ffff-000000000000" \
  -H "Authorization: Bearer YOUR_GLOBAL_API_KEY"
```

### Python

```python
import requests

BASE_URL = 'https://your-mindzie-instance.com'
TENANT_ID = '12345678-1234-1234-1234-123456789012'

class TemplateManager:
    def __init__(self, global_api_key):
        self.headers = {
            'Authorization': f'Bearer {global_api_key}',
            'Content-Type': 'application/json'
        }

    def list_templates(self, tenant_id=None):
        """List templates, optionally filtered by tenant."""
        if tenant_id:
            url = f'{BASE_URL}/api/templates/tenant/{tenant_id}'
        else:
            url = f'{BASE_URL}/api/templates'
        response = requests.get(url, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def get_template(self, template_id):
        """Get template details including MCL text."""
        url = f'{BASE_URL}/api/templates/{template_id}'
        response = requests.get(url, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def create_template(self, tenant_id, name, mcl_text, description=None, category='Custom'):
        """Create a new tenant-specific template."""
        url = f'{BASE_URL}/api/templates/tenant/{tenant_id}'
        data = {
            'name': name,
            'mclText': mcl_text,
            'description': description,
            'category': category
        }
        response = requests.post(url, json=data, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def update_template(self, template_id, **kwargs):
        """Update a template. Pass only fields to update."""
        url = f'{BASE_URL}/api/templates/{template_id}'
        response = requests.put(url, json=kwargs, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def delete_template(self, template_id):
        """Delete a tenant-specific template."""
        url = f'{BASE_URL}/api/templates/{template_id}'
        response = requests.delete(url, headers=self.headers)
        response.raise_for_status()
        return response.json()

# Usage
manager = TemplateManager('your-global-api-key')

# List templates
templates = manager.list_templates(TENANT_ID)
print(f"Found {templates['totalCount']} templates")
for t in templates['templates']:
    print(f"  - {t['name']} ({'Global' if t['isGlobal'] else 'Tenant'})")

# Create a template
new_template = manager.create_template(
    tenant_id=TENANT_ID,
    name='My Analysis Template',
    mcl_text='// MCL configuration here',
    description='Custom workflow'
)
print(f"Created template: {new_template['templateId']}")

# Update the template
updated = manager.update_template(
    new_template['templateId'],
    name='Renamed Template'
)

# Delete the template
manager.delete_template(new_template['templateId'])
print("Template deleted")
```

### JavaScript/Node.js

```javascript
const BASE_URL = 'https://your-mindzie-instance.com';
const TENANT_ID = '12345678-1234-1234-1234-123456789012';

class TemplateManager {
  constructor(globalApiKey) {
    this.headers = {
      'Authorization': `Bearer ${globalApiKey}`,
      'Content-Type': 'application/json'
    };
  }

  async listTemplates(tenantId = null) {
    const url = tenantId
      ? `${BASE_URL}/api/templates/tenant/${tenantId}`
      : `${BASE_URL}/api/templates`;
    const response = await fetch(url, { headers: this.headers });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return response.json();
  }

  async getTemplate(templateId) {
    const url = `${BASE_URL}/api/templates/${templateId}`;
    const response = await fetch(url, { headers: this.headers });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return response.json();
  }

  async createTemplate(tenantId, name, mclText, description = null, category = 'Custom') {
    const url = `${BASE_URL}/api/templates/tenant/${tenantId}`;
    const response = await fetch(url, {
      method: 'POST',
      headers: this.headers,
      body: JSON.stringify({ name, mclText, description, category })
    });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return response.json();
  }

  async updateTemplate(templateId, updates) {
    const url = `${BASE_URL}/api/templates/${templateId}`;
    const response = await fetch(url, {
      method: 'PUT',
      headers: this.headers,
      body: JSON.stringify(updates)
    });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return response.json();
  }

  async deleteTemplate(templateId) {
    const url = `${BASE_URL}/api/templates/${templateId}`;
    const response = await fetch(url, { method: 'DELETE', headers: this.headers });
    if (!response.ok) throw new Error(`Failed: ${response.status}`);
    return response.json();
  }
}

// Usage
const manager = new TemplateManager('your-global-api-key');

// List templates
const templates = await manager.listTemplates(TENANT_ID);
console.log(`Found ${templates.totalCount} templates`);

// Create a template
const newTemplate = await manager.createTemplate(
  TENANT_ID,
  'My Analysis Template',
  '// MCL configuration',
  'Custom workflow'
);
console.log(`Created: ${newTemplate.templateId}`);

// Delete the template
await manager.deleteTemplate(newTemplate.templateId);
```

---

## Best Practices

1. **Use Global API Keys**: Template operations require global API keys
2. **Unique Names**: Template names must be unique within their scope
3. **MCL Text**: Store complete MCL configuration for reproducible notebooks
4. **Categories**: Use standard categories for organization
5. **Thumbnails**: Templates created via API won't have thumbnails automatically

---

# End of Documentation

This file was auto-generated from the mindzieAPI documentation source files.

For the latest documentation, visit: https://docs.mindziestudio.com/mindzie_api

Generated: 2026-01-02 16:41:35 UTC