Training Day

Batch Processing

Exchanging data with other systems in batches

Batch Processing Overview

Batch processing has several key characteristics:

  • Efficiency: Processes multiple records in a single operation
  • Scheduling: Runs at predetermined times or intervals
  • Resource optimization: Concentrates heavy workloads into planned windows, freeing system resources at other times
  • Transaction grouping: Handles multiple operations as a single transaction
  • Volume handling: Well-suited for high-volume data processing

┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
│  Source System  │      │   Processing    │      │  Target System  │
│  (Collect Data) │─────►│   (Transform)   │─────►│  (Load/Update)  │
│                 │      │                 │      │                 │
└─────────────────┘      └─────────────────┘      └─────────────────┘
        │                                                  │
        │                      ┌──────────┐                │
        └─────────────────────►│ Schedule │────────────────┘
                               └──────────┘

When to Use Batch Processing

Batch processing is ideal for scenarios where:

  • High volumes of data need to be processed
  • Real-time processing is not required
  • Processing can be scheduled during off-hours
  • Efficiency is more important than immediacy
  • Operations are resource-intensive
  • Data needs to be processed as a cohesive set

FileMaker Implementation Approaches

FileMaker offers several powerful approaches to implement batch processing:

  • Server Schedules: Leverage built-in scheduling for automated execution
  • CSV Operations: Use FileMaker's robust import/export capabilities
  • API Integration: Connect with external systems through Data API and OData
  • Transaction Management: Ensure data integrity with atomic operations

FileMaker Server Schedules

Use FileMaker Server's built-in scheduling capabilities:

  1. Create a script that performs the batch operation
  2. Configure a server-side schedule to run at specific times
  3. Log results and handle notifications for failures
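
The script in step 1 typically follows a process-log-notify skeleton. A minimal Python sketch of that pattern (process_record, notify_admin, and the qty validation rule are hypothetical stand-ins for whatever the real job does):

```python
import logging
from datetime import datetime

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("batch")

def process_record(record):
    """Placeholder for the per-record work a scheduled script would do."""
    if record.get("qty", 0) < 0:                     # invented validation rule
        raise ValueError(f"negative qty for record {record['id']}")
    return {**record, "processed_at": datetime.now().isoformat()}

def notify_admin(failures):
    """Stub for step 3's failure notification (e.g. an email summary)."""
    log.warning("notifying admin of %d failed records", len(failures))

def run_batch(records):
    """Process a batch, logging results and collecting failures."""
    succeeded, failed = [], []
    for rec in records:
        try:
            succeeded.append(process_record(rec))
        except Exception as exc:
            log.error("record %s failed: %s", rec.get("id"), exc)
            failed.append((rec, str(exc)))
    if failed:
        notify_admin(failed)
    return succeeded, failed

ok, bad = run_batch([{"id": 1, "qty": 5}, {"id": 2, "qty": -1}])
```

The key design point is that one bad record is logged and reported rather than aborting the whole run; the all-or-nothing alternative is covered under Transaction Management below.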

CSV Import and Export

FileMaker's flexible CSV handling makes it well suited to batch processing:

  1. Automated exports: Generate CSV files on schedule for external system consumption
  2. Scheduled imports: Import data from CSV files deposited by other systems
  3. Data transformation: Use temp tables and calculations during import/export for data mapping, validation, and transformation
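
The mapping-and-validation pass in step 3 can be sketched generically. This Python example, assuming a hypothetical FIELD_MAP and a made-up "price is required" rule, mimics a staging-table pass: invalid rows are set aside while valid ones are renamed, converted, and written back out as CSV:

```python
import csv
import io

# In-memory stand-in for a CSV file deposited by another system.
raw = io.StringIO("sku,price,qty\nA-100,9.99,3\nB-200,,7\n")

# Hypothetical mapping from the source system's columns to ours.
FIELD_MAP = {"sku": "item_code", "price": "unit_price", "qty": "quantity"}

valid_rows, rejects = [], []
for row in csv.DictReader(raw):
    if not row["price"]:                 # validation: price is required
        rejects.append(row)
        continue
    mapped = {FIELD_MAP[k]: v for k, v in row.items()}          # data mapping
    mapped["unit_price"] = float(mapped["unit_price"])          # transformation
    valid_rows.append(mapped)

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["item_code", "unit_price", "quantity"])
writer.writeheader()
writer.writerows(valid_rows)
```

Rejected rows would typically be written to a separate reject file or error log rather than silently dropped.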

Data API and OData Integration

FileMaker Data API and OData provide powerful methods for batch processing:

  1. Bulk record operations: Use the Data API to fetch, create, or update multiple records in a single call
  2. Pagination handling: Process large datasets by working with manageable chunks of records
  3. OData queries: Leverage OData's query capabilities to filter and sort data before processing
  4. Scripted batch jobs: Create server-side scripts triggered by API calls to process batches of data
  5. Cross-platform integration: Enable other systems to initiate batch processes in FileMaker
  6. Scheduled API operations: Use external schedulers to make regular API calls for data exchange
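
The pagination pattern in step 2 can be sketched as follows. The network call is stubbed out so the example is self-contained; the real request would be a GET against the Data API's records endpoint using its _offset and _limit query parameters (offsets are 1-based), and the record counts here are invented for illustration:

```python
def fetch_page(offset, limit):
    """Stub standing in for a GET to
    /fmi/data/vLatest/databases/{db}/layouts/{layout}/records?_offset=..&_limit=..
    The pretend hosted table holds 250 records."""
    total = 250
    start = offset - 1                     # _offset is 1-based
    return [{"recordId": i} for i in range(start, min(start + limit, total))]

def fetch_all(limit=100):
    """Pull every record in manageable chunks rather than one huge request."""
    records, offset = [], 1
    while True:
        page = fetch_page(offset, limit)
        records.extend(page)
        if len(page) < limit:              # a short page means we're done
            return records
        offset += limit

rows = fetch_all()
```

Working in fixed-size chunks keeps memory use bounded and keeps each request comfortably inside server timeouts, which matters once tables grow into the tens of thousands of records.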

Transaction Management

Ensure data integrity with proper transaction handling. FileMaker can import records as a single transaction, making it a powerful technique for batch operations. If any record in the batch fails validation rules, the entire import can be rolled back, preserving data integrity.
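The all-or-nothing behavior can be illustrated with any transactional store; this sqlite3 sketch is an analogy, not FileMaker itself. A batch insert where one row fails a validation rule, so none of the rows are committed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER CHECK (qty > 0))")

batch = [(1, 5), (2, 3), (3, -1)]   # the third row violates validation

try:
    with conn:                      # one transaction: commits on success, rolls back on error
        conn.executemany("INSERT INTO orders VALUES (?, ?)", batch)
except sqlite3.IntegrityError:
    pass                            # the CHECK failed, so the whole batch was rolled back

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

Even though the first two rows were valid, the table ends up with zero rows: the batch succeeds or fails as a unit.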

Implementing comprehensive error logging is crucial for transaction management in batch processes. Create dedicated error log tables to record transaction details, including timestamps, affected record counts, validation failures, and specific error messages. This information is essential for troubleshooting failed batches and providing audit trails for business-critical operations.
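
A minimal version of such an error-log table might look like this; the column names and the use of sqlite3 are illustrative assumptions, not a FileMaker schema:

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE error_log (
        logged_at     TEXT,     -- timestamp of the batch run
        batch_name    TEXT,     -- which scheduled job this was
        record_count  INTEGER,  -- records the batch attempted
        failure_count INTEGER,  -- how many failed validation
        message       TEXT      -- specific error messages, or OK
    )
""")

def log_batch_result(batch_name, record_count, failures):
    """Write one audit row per batch run, whether it succeeded or failed."""
    conn.execute(
        "INSERT INTO error_log VALUES (?, ?, ?, ?, ?)",
        (
            datetime.now(timezone.utc).isoformat(),
            batch_name,
            record_count,
            len(failures),
            "; ".join(failures) or "OK",
        ),
    )
    conn.commit()

log_batch_result("nightly_orders", 250, ["record 17: missing SKU"])
row = conn.execute("SELECT batch_name, failure_count FROM error_log").fetchone()
```

Logging successful runs as well as failures is deliberate: a gap in the log is itself a signal that a scheduled batch never ran.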

Common Batch Processing Scenarios

Data Synchronization

  • Nightly sync: Synchronize systems during off-hours
  • Periodic reconciliation: Compare and align data periodically
  • Aggregation processes: Collect and summarize data for reporting

Example: A retail business uses FileMaker to manage inventory and customer data while running an e-commerce store on Shopify. Every night at 2:00 AM, a scheduled FileMaker script retrieves new orders from Shopify via API, creates corresponding order records in FileMaker, updates inventory levels, and flags items for restocking. Simultaneously, product information updates made in FileMaker during the day are pushed to Shopify to ensure consistent pricing and availability information across both systems. This bidirectional synchronization happens when customer activity is minimal, ensuring systems remain responsive during business hours.
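
The nightly flow described above reduces to a small orchestration sketch. Everything here is a stand-in: fetch_new_orders fakes the Shopify API call, local_inventory fakes the FileMaker side, and REORDER_POINT is an invented threshold:

```python
def fetch_new_orders():
    """Stub for the API call that pulls orders created since the last run."""
    return [{"order_id": "S-1001", "sku": "A-100", "qty": 2}]

def local_inventory():
    """Stub for current FileMaker inventory levels."""
    return {"A-100": 3, "B-200": 10}

REORDER_POINT = 2   # hypothetical restock threshold

def nightly_sync():
    inventory = local_inventory()
    restock = []
    for order in fetch_new_orders():
        inventory[order["sku"]] -= order["qty"]      # update stock levels
        if inventory[order["sku"]] <= REORDER_POINT:
            restock.append(order["sku"])             # flag for restocking
    return inventory, restock

inventory, restock = nightly_sync()
```

A real implementation would also record the last successful sync time so the next run fetches only orders created since then, and would push the day's product changes back the other way.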

Import/Export Operations

  • Scheduled exports: Generate reports or data extracts on schedule
  • Bulk imports: Process large incoming data files
  • Data migration: Move data between systems in batches

Maintenance Operations

  • Data archiving: Move old data to archive storage
  • Database maintenance: Optimize and clean up databases
  • Index rebuilding: Recreate search indexes periodically
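
The archiving pattern, copy old rows and then delete them inside one transaction, can be sketched with sqlite3 standing in for the database; the table names and retention cutoff are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE invoices (id INTEGER, invoice_date TEXT);
    CREATE TABLE invoices_archive (id INTEGER, invoice_date TEXT);
    INSERT INTO invoices VALUES (1, '2018-03-01'), (2, '2024-06-15');
""")

CUTOFF = "2020-01-01"   # hypothetical retention boundary

with conn:  # copy-then-delete in one transaction so no row is lost mid-move
    conn.execute(
        "INSERT INTO invoices_archive SELECT * FROM invoices WHERE invoice_date < ?",
        (CUTOFF,))
    conn.execute("DELETE FROM invoices WHERE invoice_date < ?", (CUTOFF,))

live = conn.execute("SELECT COUNT(*) FROM invoices").fetchone()[0]
archived = conn.execute("SELECT COUNT(*) FROM invoices_archive").fetchone()[0]
```

Wrapping the copy and the delete in a single transaction matters: if the process dies between the two statements, a rollback leaves every record exactly where it started.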

Advantages and Challenges

Advantages

  • Efficient use of system resources
  • Reduced per-record processing overhead
  • Simpler error handling and recovery
  • Ability to run work during off-peak hours
  • Often easier to implement than real-time solutions

Challenges

  • Data latency (not real-time)
  • Batch window constraints
  • Error recovery complexity
  • Process monitoring requirements
  • Potential for data inconsistency between batches