Batch Encoding

Overview

Batch encoding is the process of submitting multiple transcoding jobs at once — for example, encoding an entire video library, processing user uploads in bulk, or generating multiple renditions of a content catalog. Transcodely does not have a dedicated “batch” endpoint. Instead, you create individual jobs in parallel and use idempotency keys to make the process safe and repeatable.

This approach gives you full control over per-job configuration, error handling, and retry logic.


Creating Jobs in Parallel

Submit multiple jobs concurrently by making parallel API calls. Each job is independent and processes on its own worker.

Sequential

Process files one at a time with cURL:

for video in source1.mp4 source2.mp4 source3.mp4; do
  curl -X POST https://api.transcodely.com/transcodely.v1.JobService/Create \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer {{API_KEY}}" \
    -H "X-Organization-ID: {{ORG_ID}}" \
    -d "{
      \"input_origin_id\": \"ori_input12345\",
      \"input_path\": \"uploads/${video}\",
      \"output_origin_id\": \"ori_output6789\",
      \"idempotency_key\": \"batch_2026-02-28_${video}\",
      \"outputs\": [
        {
          \"type\": \"mp4\",
          \"video\": [{ \"codec\": \"h264\", \"resolution\": \"1080p\", \"quality\": \"standard\" }]
        }
      ]
    }"
done

Parallel

Submit all jobs concurrently for faster throughput:

import { createOrgApiClient } from "$lib/api/client";
import { JobService } from "$lib/gen/transcodely/v1/job_connect";

const jobClient = createOrgApiClient(JobService);

const videos = [
  "uploads/episode-01.mp4",
  "uploads/episode-02.mp4",
  "uploads/episode-03.mp4",
  "uploads/episode-04.mp4",
  "uploads/episode-05.mp4",
];

// Create all jobs in parallel
const results = await Promise.allSettled(
  videos.map((inputPath) =>
    jobClient.create({
      inputOriginId: "ori_input12345",
      inputPath,
      outputOriginId: "ori_output6789",
      idempotencyKey: `batch_2026-02-28_${inputPath}`,
      outputs: [
        {
          type: "mp4",
          video: [{ codec: "h264", resolution: "1080p", quality: "standard" }],
        },
        {
          type: "hls",
          video: [
            { codec: "h264", resolution: "1080p", quality: "standard" },
            { codec: "h264", resolution: "720p", quality: "standard" },
            { codec: "h264", resolution: "480p", quality: "economy" },
          ],
        },
      ],
    })
  )
);

// Separate successes and failures, pairing each result with its
// source video first so indices stay aligned with `videos`
const indexed = results.map((result, i) => ({ result, video: videos[i] }));

const created = indexed
  .filter((r) => r.result.status === "fulfilled")
  .map((r) => (r.result as PromiseFulfilledResult<any>).value.job);

const failed = indexed
  .filter((r) => r.result.status === "rejected")
  .map((r) => ({ video: r.video, error: (r.result as PromiseRejectedResult).reason }));

console.log(`Created ${created.length} jobs, ${failed.length} failures`);

The same pattern in Python, using asyncio and httpx:

import asyncio
import httpx

API_URL = "https://api.transcodely.com/transcodely.v1.JobService/Create"
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": "Bearer {{API_KEY}}",
    "X-Organization-ID": "{{ORG_ID}}",
}

videos = [
    "uploads/episode-01.mp4",
    "uploads/episode-02.mp4",
    "uploads/episode-03.mp4",
]

async def create_job(client: httpx.AsyncClient, input_path: str):
    response = await client.post(API_URL, headers=HEADERS, json={
        "input_origin_id": "ori_input12345",
        "input_path": input_path,
        "output_origin_id": "ori_output6789",
        "idempotency_key": f"batch_2026-02-28_{input_path}",
        "outputs": [{
            "type": "mp4",
            "video": [{"codec": "h264", "resolution": "1080p", "quality": "standard"}],
        }],
    })
    response.raise_for_status()
    return response.json()

async def main():
    async with httpx.AsyncClient() as client:
        tasks = [create_job(client, video) for video in videos]
        results = await asyncio.gather(*tasks, return_exceptions=True)

    for video, result in zip(videos, results):
        if isinstance(result, Exception):
            print(f"Failed: {video} - {result}")
        else:
            print(f"Created: {result['job']['id']} for {video}")

asyncio.run(main())

Idempotency Keys

Idempotency keys are critical for batch processing. They ensure that if a request is retried (due to network errors, timeouts, or application restarts), the same job is returned instead of creating a duplicate.

How Idempotency Works

  1. Include an idempotency_key in your create request
  2. If a job with that key already exists, the existing job is returned (no duplicate is created)
  3. The key is scoped to your app — different apps can use the same key without conflict
  4. Keys are permanent — they never expire
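Assuming some `create_job` function that posts to the API, a minimal retry helper might look like the sketch below. The key point is that the identical payload, and therefore the identical `idempotency_key`, is sent on every attempt, so a retry after a timeout returns the job created by the first attempt instead of a duplicate:

```python
import time

def create_with_retry(create_job, payload, retries=3, backoff=0.5):
    """Retry a create call, reusing the same idempotency key each attempt."""
    last_error = None
    for attempt in range(retries):
        try:
            return create_job(payload)
        except Exception as err:  # network error, timeout, 5xx, ...
            last_error = err
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise last_error

# Illustration: a stand-in server that fails once, then succeeds.
calls = []
def flaky_create(payload):
    calls.append(payload["idempotency_key"])
    if len(calls) == 1:
        raise ConnectionError("timeout")
    return {"job": {"id": "job_1", "idempotency_key": payload["idempotency_key"]}}

result = create_with_retry(
    flaky_create, {"idempotency_key": "batch_2026-02-28_episode-01.mp4"}
)
```

Both attempts carry the same key, so even if the "failed" first attempt had actually reached the server, the second attempt would return that same job.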

Key Design Patterns

Choose idempotency keys that uniquely identify the intent:

| Pattern | Example | Use Case |
| --- | --- | --- |
| Source file path | encode_uploads/video.mp4 | One encoding per source file |
| Batch + file | batch_2026-02-28_episode-01.mp4 | Daily batch runs |
| User + upload | user_usr_abc123_upload_12345 | Per-user upload processing |
| Content ID | content_cid_789_v2 | Versioned content library |
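Keys like these stay consistent across runs most easily when a small helper builds them; the function names below are illustrative, not part of any SDK:

```python
def batch_key(batch_date: str, input_path: str) -> str:
    """Build a 'batch + file' key, e.g. for daily batch runs."""
    filename = input_path.rsplit("/", 1)[-1]  # drop any directory prefix
    return f"batch_{batch_date}_{filename}"

def content_key(content_id: str, version: int) -> str:
    """Build a versioned 'content ID' key; bump the version to force a re-encode."""
    return f"content_{content_id}_v{version}"
```

Deriving the key from the same inputs every time is what makes a re-run deduplicate instead of double-encoding.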
For example, a daily batch run keyed by date and filename:

curl -X POST https://api.transcodely.com/transcodely.v1.JobService/Create \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {{API_KEY}}" \
  -H "X-Organization-ID: {{ORG_ID}}" \
  -d '{
    "input_origin_id": "ori_input12345",
    "input_path": "uploads/episode-01.mp4",
    "output_origin_id": "ori_output6789",
    "idempotency_key": "batch_2026-02-28_episode-01.mp4",
    "outputs": [
      {
        "type": "mp4",
        "video": [{ "codec": "h264", "resolution": "1080p", "quality": "standard" }]
      }
    ]
  }'

If you run this request again with the same idempotency_key, you get back the existing job without creating a new one. This makes your entire batch script safe to re-run.


Rate Limiting

When submitting large batches, be mindful of API rate limits. Transcodely applies per-app rate limits to prevent abuse:

| Tier | Rate Limit | Burst |
| --- | --- | --- |
| Standard | 100 requests/second | 200 |
| Premium | 500 requests/second | 1,000 |

For large batches (hundreds or thousands of videos), add concurrency control:

// Process in batches of 20 concurrent requests
const CONCURRENCY = 20;

async function processBatch(videos: string[]) {
  const results = [];

  for (let i = 0; i < videos.length; i += CONCURRENCY) {
    const batch = videos.slice(i, i + CONCURRENCY);
    const batchResults = await Promise.allSettled(
      batch.map((video) => createJob(video))
    );
    results.push(...batchResults);

    // Brief pause between batches
    if (i + CONCURRENCY < videos.length) {
      await new Promise((resolve) => setTimeout(resolve, 100));
    }
  }

  return results;
}
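The same throttling idea can be sketched in Python with an asyncio.Semaphore, which caps in-flight requests without splitting the work into fixed chunks. Here `create_job` is a stand-in for whatever client call you use:

```python
import asyncio

async def process_batch(videos, create_job, concurrency=20):
    """Run create_job for every video, with at most `concurrency` in flight."""
    sem = asyncio.Semaphore(concurrency)

    async def limited(video):
        async with sem:
            return await create_job(video)

    # return_exceptions=True keeps one failure from cancelling the rest
    return await asyncio.gather(*(limited(v) for v in videos), return_exceptions=True)

# Illustration with a stub that records peak concurrency.
active = peak = 0
async def stub_create(video):
    global active, peak
    active += 1
    peak = max(peak, active)
    await asyncio.sleep(0.01)  # simulate the API round-trip
    active -= 1
    return {"job": {"id": f"job_{video}"}}

results = asyncio.run(
    process_batch([f"v{i}" for i in range(50)], stub_create, concurrency=5)
)
```

Unlike the chunked loop above, a semaphore starts the next request as soon as any slot frees up, so slow outliers don't stall a whole batch.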

Monitoring Batch Progress

Polling All Jobs

After submitting a batch, poll all job statuses to track progress:

async function monitorBatch(jobIds: string[]) {
  const interval = setInterval(async () => {
    const jobs = await Promise.all(
      jobIds.map((id) => jobClient.get({ id }).then((r) => r.job))
    );

    const completed = jobs.filter((j) => j.status === "completed").length;
    const failed = jobs.filter((j) => j.status === "failed").length;
    const processing = jobs.filter(
      (j) => j.status === "processing" || j.status === "pending" || j.status === "probing"
    ).length;

    console.log(`Progress: ${completed} done, ${failed} failed, ${processing} in progress`);

    if (processing === 0) {
      clearInterval(interval);
      console.log("Batch complete!");
    }
  }, 10000); // Check every 10 seconds
}

Using Webhooks

For production systems, use webhooks instead of polling. Tag each job with metadata to identify the batch:

{
  "input_origin_id": "ori_input12345",
  "input_path": "uploads/episode-01.mp4",
  "output_origin_id": "ori_output6789",
  "webhook_url": "https://yourapp.com/webhooks/transcodely",
  "metadata": {
    "batch_id": "batch_2026-02-28",
    "content_id": "episode-01",
    "user_id": "usr_abc123"
  },
  "outputs": [
    {
      "type": "mp4",
      "video": [{ "codec": "h264", "resolution": "1080p", "quality": "standard" }]
    }
  ]
}

In your webhook handler, track batch completion:

async function handleJobCompleted(job: any) {
  const batchId = job.metadata.batch_id;

  // Record completion
  await db.batchJobs.update({
    where: { jobId: job.id },
    data: { status: "completed", completedAt: new Date() },
  });

  // Check if batch is complete
  const remaining = await db.batchJobs.count({
    where: { batchId, status: "pending" },
  });

  if (remaining === 0) {
    await notifyBatchComplete(batchId);
  }
}
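The db.batchJobs store in the handler above is application-specific; Transcodely does not provide it. One way to seed and update such a store, sketched here with SQLite for a single batch (table and column names are illustrative):

```python
import sqlite3

def init_batch(conn, batch_id, job_ids):
    """Record every job in a batch as pending before webhooks start arriving."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS batch_jobs "
        "(job_id TEXT PRIMARY KEY, batch_id TEXT, status TEXT)"
    )
    conn.executemany(
        "INSERT OR IGNORE INTO batch_jobs VALUES (?, ?, 'pending')",
        [(job_id, batch_id) for job_id in job_ids],
    )

def mark_completed(conn, job_id):
    """Webhook-handler hook: mark one job done, return how many are still pending."""
    conn.execute(
        "UPDATE batch_jobs SET status = 'completed' WHERE job_id = ?", (job_id,)
    )
    (remaining,) = conn.execute(
        "SELECT COUNT(*) FROM batch_jobs WHERE status = 'pending'"
    ).fetchone()
    return remaining

conn = sqlite3.connect(":memory:")
init_batch(conn, "batch_2026-02-28", ["job_a", "job_b"])
```

Seeding the rows before the first webhook can arrive avoids a race where a fast job completes before your app knows it belongs to the batch.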

Handling Partial Failures

In a batch, some jobs may fail while others succeed. Handle failures gracefully:

async function handleBatchResults(results: PromiseSettledResult<any>[]) {
  const failures = results
    .map((r, i) => ({ result: r, index: i }))
    .filter((r) => r.result.status === "rejected");

  if (failures.length === 0) {
    console.warn("All jobs created successfully");
    return;
  }

  console.warn(`${failures.length} jobs failed to create`);

  // Retry failed jobs
  for (const failure of failures) {
    console.warn(`Retrying job ${failure.index}:`, failure.result.reason);
    try {
      // Safe to retry because we use idempotency keys
      await createJob(videos[failure.index]);
    } catch (err) {
      console.error(`Retry failed for ${failure.index}:`, err);
    }
  }
}

Because idempotency keys are included, retrying a job that actually succeeded (e.g., the original request timed out but the job was created) will simply return the existing job.


Best Practices

| Practice | Rationale |
| --- | --- |
| Always use idempotency keys | Makes batch scripts safe to re-run after failures |
| Limit concurrency | Respect rate limits and avoid overwhelming your system |
| Use metadata for tracking | Tag jobs with batch_id and content_id for easy filtering |
| Prefer webhooks over polling | More efficient for monitoring large batches |
| Handle partial failures | Not all jobs in a batch will necessarily succeed |
| Use economy priority for bulk work | Lower cost for non-urgent batch processing |
| Log all job IDs | Essential for debugging and support |
| Use consistent key naming | Makes it easy to identify and deduplicate across runs |