
API Reference

The Transcodely API is built on Connect-RPC, a modern RPC framework that speaks both gRPC and HTTP/JSON. All endpoints accept and return JSON over HTTPS, making them accessible from any language or HTTP client.

Base URL

All API requests are made to:

https://api.transcodely.com

Authentication

Transcodely uses API keys for authentication. Include your key in the Authorization header as a Bearer token:

Authorization: Bearer {{API_KEY}}

API keys come in two environments:

| Environment | Prefix | Usage |
|---|---|---|
| Live | `ak_live_` | Production workloads, real billing |
| Test | `ak_test_` | Development and testing, no charges |

Organization scope

Most endpoints are scoped to an organization. Include the organization ID in the X-Organization-ID header:

X-Organization-ID: {{ORG_ID}}

Endpoints that do not require this header are noted in their documentation (e.g., GetMe, Create Organization).

Content type

All requests and responses use JSON:

Content-Type: application/json

Field names in request and response bodies use snake_case. Enum values are simple lowercase strings (e.g., "pending", "h264", "1080p").

Request format

The API uses Connect-RPC’s HTTP/JSON mapping. All methods use POST with a JSON request body, regardless of whether the operation is a read or write.

```shell
curl -X POST https://api.transcodely.com/transcodely.v1.JobService/Get \
  -H "Authorization: Bearer {{API_KEY}}" \
  -H "X-Organization-ID: {{ORG_ID}}" \
  -H "Content-Type: application/json" \
  -d '{"id": "job_a1b2c3d4e5f6"}'
```

ID format

All resource IDs use Stripe-style prefixed identifiers:

| Resource | Prefix | Example |
|---|---|---|
| Organization | `org_` | `org_f6g7h8i9j0` |
| App | `app_` | `app_k1l2m3n4o5` |
| API Key | `ak_` | `ak_live_abc123...` |
| User | `usr_` | `usr_a1b2c3d4e5` |
| Membership | `mem_` | `mem_a1b2c3d4e5` |
| Origin | `ori_` | `ori_x9y8z7w6v5` |
| Preset | `pst_` | `pst_x9y8z7w6v5` |
| Job | `job_` | `job_a1b2c3d4e5f6` |
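Prefixed IDs make it easy to sanity-check which resource type an ID refers to before making a request. A minimal sketch (the prefix map mirrors the table above; the helper name is our own):

```python
# Map of ID prefixes to resource types, mirroring the table above.
ID_PREFIXES = {
    "org_": "organization",
    "app_": "app",
    "ak_": "api_key",
    "usr_": "user",
    "mem_": "membership",
    "ori_": "origin",
    "pst_": "preset",
    "job_": "job",
}

def resource_type(resource_id: str) -> str:
    """Return the resource type implied by a prefixed ID, or raise ValueError."""
    for prefix, kind in ID_PREFIXES.items():
        if resource_id.startswith(prefix):
            return kind
    raise ValueError(f"unrecognized ID prefix: {resource_id!r}")
```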

Pagination

All list endpoints support cursor-based pagination to efficiently traverse large result sets. Cursors provide stable results even as new resources are created or deleted between requests.

Request parameters

Every list endpoint accepts a pagination object:

| Field | Type | Default | Description |
|---|---|---|---|
| `limit` | integer | 20 | Maximum items per page (1-100) |
| `cursor` | string | `""` | Cursor from previous response for the next page |
| `offset` | integer | 0 | Alternative to cursor; skip N items |

```shell
curl -X POST https://api.transcodely.com/transcodely.v1.JobService/List \
  -H "Authorization: Bearer {{API_KEY}}" \
  -H "X-Organization-ID: org_a1b2c3d4e5" \
  -H "Content-Type: application/json" \
  -d '{
    "pagination": {
      "limit": 10
    }
  }'
```

Response metadata

Every list response includes a pagination object:

| Field | Type | Description |
|---|---|---|
| `next_cursor` | string | Cursor for fetching the next page. Empty if no more pages. |
| `total_count` | integer (optional) | Total number of matching items, if available |

```json
{
  "jobs": [ "..." ],
  "pagination": {
    "next_cursor": "eyJpZCI6ImpvYl94OXk4ejd3NnY1In0",
    "total_count": 142
  }
}
```

Pass the next_cursor value as cursor in your next request to fetch the next page. An empty next_cursor means there are no more results.

Iterating all pages

Loop through pages using the cursor returned in each response:

```typescript
async function getAllJobs(client: JobServiceClient): Promise<Job[]> {
  const allJobs: Job[] = [];
  let cursor = '';

  do {
    const response = await client.list({
      pagination: { limit: 100, cursor },
    });

    allJobs.push(...response.jobs);
    cursor = response.pagination?.next_cursor ?? '';
  } while (cursor !== '');

  return allJobs;
}
```

```python
def get_all_jobs(client):
    all_jobs = []
    cursor = ""

    while True:
        response = client.list(
            pagination={"limit": 100, "cursor": cursor}
        )
        all_jobs.extend(response.jobs)

        cursor = response.pagination.next_cursor
        if not cursor:
            break

    return all_jobs
```

```go
func getAllJobs(ctx context.Context, client jobv1connect.JobServiceClient) ([]*jobv1.Job, error) {
	var allJobs []*jobv1.Job
	cursor := ""

	for {
		resp, err := client.List(ctx, connect.NewRequest(&jobv1.ListJobsRequest{
			Pagination: &commonv1.PaginationRequest{
				Limit:  100,
				Cursor: cursor,
			},
		}))
		if err != nil {
			return nil, err
		}

		allJobs = append(allJobs, resp.Msg.Jobs...)
		cursor = resp.Msg.Pagination.NextCursor
		if cursor == "" {
			break
		}
	}

	return allJobs, nil
}
```

Offset pagination

As an alternative to cursors, you can use offset-based pagination by setting the offset field. This is simpler but less stable — if items are created or deleted between pages, you may see duplicates or skip items.

```json
{
  "pagination": { "limit": 10, "offset": 20 }
}
```

Use offset pagination only when you need random access to a specific page (e.g., “jump to page 3”). For sequential traversal, always prefer cursors.
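For a jump-to-page UI, the offset for a 1-indexed page `n` is `(n - 1) * limit`. A small helper under that assumption (the function name is our own):

```python
def page_params(page: int, limit: int = 10) -> dict:
    """Build an offset-based pagination object for a 1-indexed page number."""
    if page < 1:
        raise ValueError("page numbers are 1-indexed")
    return {"pagination": {"limit": limit, "offset": (page - 1) * limit}}
```

For example, `page_params(3, 10)` produces the `offset: 20` request body shown above.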

Cursor vs offset

| Feature | Cursor | Offset |
|---|---|---|
| Stability | Stable across inserts/deletes | May skip or duplicate items |
| Performance | Consistent (index-based) | Slower on deep pages |
| Random access | Not supported | Supported |
| Recommended for | Sequential iteration, real-time data | Jump-to-page UIs |

Pagination best practices

  1. Use cursor pagination for iterating through results sequentially.
  2. Set limit to the maximum your UI can display — fewer requests means better performance.
  3. Stop when next_cursor is empty — this is the only reliable signal that you have reached the last page.
  4. Do not construct cursors manually — they are opaque tokens. Always use the value returned by the API.
  5. Cache total_count if needed — it may not be available on all endpoints and can be expensive to compute.

Error format

Errors follow a structured format with machine-readable codes and field-level detail:

```json
{
  "code": "invalid_argument",
  "message": "Request validation failed",
  "details": [
    {
      "type": "transcodely.v1.ErrorDetails",
      "value": {
        "code": "validation_error",
        "message": "Request validation failed",
        "field_violations": [
          {
            "field": "outputs[0].video[0].codec",
            "description": "codec is required"
          }
        ]
      }
    }
  ]
}
```
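When handling a validation error, the field-level detail lives under the `transcodely.v1.ErrorDetails` entries. A sketch of pulling out the violations from a decoded error body, assuming the JSON shape shown above (the helper name is our own):

```python
def field_violations(error_body: dict) -> list[tuple[str, str]]:
    """Collect (field, description) pairs from a structured error response."""
    violations = []
    for detail in error_body.get("details", []):
        if detail.get("type") == "transcodely.v1.ErrorDetails":
            for v in detail.get("value", {}).get("field_violations", []):
                violations.append((v["field"], v["description"]))
    return violations
```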

Error codes

The API uses standard Connect-RPC error codes:

| Code | HTTP Status | Description |
|---|---|---|
| `invalid_argument` | 400 | Request validation failed |
| `unauthenticated` | 401 | Missing or invalid API key |
| `permission_denied` | 403 | Insufficient permissions or suspended account |
| `not_found` | 404 | Resource does not exist |
| `already_exists` | 409 | Resource already exists (e.g., duplicate slug) |
| `failed_precondition` | 412 | Operation not allowed in current state |
| `resource_exhausted` | 429 | Rate limit exceeded |
| `internal` | 500 | Internal server error |
| `unavailable` | 503 | Service temporarily unavailable |
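The transient codes in this table (`resource_exhausted`, `internal`, `unavailable`) are generally worth retrying with backoff, while the 4xx-class codes indicate a request that will fail the same way again. A simple classifier under that assumption (the set and function name are our own, not part of the API):

```python
# Codes that usually indicate a transient condition; see the table above.
RETRYABLE_CODES = {"resource_exhausted", "internal", "unavailable"}

def is_retryable(code: str) -> bool:
    """Return True for error codes worth retrying with backoff."""
    return code in RETRYABLE_CODES
```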

Idempotency

Idempotency ensures that retrying a request produces the same result as the original, without creating duplicate resources. This is critical for handling network failures, timeouts, and other transient errors in production systems.

How it works

When creating a job, include an idempotency_key in the request body. If Transcodely receives a second request with the same key, it returns the result of the original request instead of creating a new job.

```shell
curl -X POST https://api.transcodely.com/transcodely.v1.JobService/Create \
  -H "Authorization: Bearer {{API_KEY}}" \
  -H "X-Organization-ID: org_a1b2c3d4e5" \
  -H "Content-Type: application/json" \
  -d '{
    "input_url": "gs://my-bucket/video.mp4",
    "output_origin_id": "ori_x9y8z7w6v5",
    "outputs": [
      {
        "type": "mp4",
        "video": [
          { "codec": "h264", "resolution": "1080p", "quality": "standard" }
        ]
      }
    ],
    "idempotency_key": "upload_usr12345_2026-01-15T10:30:00Z"
  }'
```

The first request creates the job and associates it with the key. Subsequent requests with the same key return the existing job without creating a new one.

Key format

Idempotency keys are free-form strings up to 128 characters. We recommend a format that ties the key to the specific operation:

| Strategy | Example | Best for |
|---|---|---|
| UUID v4 | `550e8400-e29b-41d4-a716-446655440000` | Simple, guaranteed uniqueness |
| Operation-based | `upload_usr12345_2026-01-15T10:30:00Z` | Readable, debuggable |
| Content hash | `sha256:a1b2c3d4e5f6...` | Deduplication based on input |
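Each strategy takes only a few lines to generate; content-hash keys are handy when the same input should never be transcoded twice. A sketch using the standard library (the helper names are our own):

```python
import hashlib
import uuid
from datetime import datetime

def uuid_key() -> str:
    """Random key: simplest option, guaranteed unique."""
    return str(uuid.uuid4())

def operation_key(user_id: str, when: datetime) -> str:
    """Operation-based key: readable and easy to trace in logs."""
    return f"upload_{user_id}_{when.isoformat()}"

def content_hash_key(payload: bytes) -> str:
    """Content-hash key: identical inputs always map to the same key."""
    return "sha256:" + hashlib.sha256(payload).hexdigest()
```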

Scope and replay behavior

Idempotency keys are scoped to the app associated with the API key. The same key can be used independently across different apps without conflict.

| Scenario | Behavior |
|---|---|
| Same key, same request body | Returns the original job |
| Same key, different request body | Returns the original job (request body is not compared) |
| Same key, different API key (same app) | Returns the original job |
| Same key, different app | Creates a new job (keys are app-scoped) |

Important: The API does not compare request bodies when replaying an idempotency key. If you reuse a key with a different request body, you will get back the original job — not a new job with the new parameters. Always use unique keys for distinct operations.

Expiration

Idempotency keys are stored for 24 hours. After expiration, a previously used key can be reused to create a new job.

When to use idempotency keys

  • Network retries — your HTTP client automatically retries on timeout or connection reset
  • Queue-based processing — a message queue may deliver the same message more than once
  • User-triggered actions — a user clicks “Submit” multiple times before the UI disables the button
  • Batch processing — processing a list of items where some may need to be retried

Example: safe retry logic

```typescript
import { ConnectError, Code } from "@connectrpc/connect";

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function createJobWithRetry(
  client: JobServiceClient,
  request: CreateJobRequest,
  maxRetries = 3
): Promise<Job> {
  // Generate the key once, before the first attempt, and reuse it on retries.
  const idempotencyKey = `job_${crypto.randomUUID()}`;

  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const response = await client.create({
        ...request,
        idempotency_key: idempotencyKey,
      });
      return response.job;
    } catch (err) {
      if (err instanceof ConnectError) {
        // Non-transient errors will fail the same way again; surface them.
        if (err.code === Code.InvalidArgument || err.code === Code.NotFound) {
          throw err;
        }
        if (attempt < maxRetries) {
          await sleep(Math.pow(2, attempt) * 1000); // exponential backoff
          continue;
        }
      }
      throw err;
    }
  }
  throw new Error('Max retries exceeded');
}
```

```python
import time
import uuid
from connectrpc.exceptions import ConnectError

def create_job_with_retry(client, request, max_retries=3):
    idempotency_key = f"job_{uuid.uuid4()}"

    for attempt in range(max_retries + 1):
        try:
            request.idempotency_key = idempotency_key
            response = client.create(request)
            return response.job
        except ConnectError as e:
            if e.code in ("invalid_argument", "not_found"):
                raise
            if attempt < max_retries:
                time.sleep(2 ** attempt)
                continue
            raise
```

Idempotency best practices

  1. Generate the key before the first attempt and reuse it across retries.
  2. Use descriptive, deterministic keys when possible — they make debugging easier.
  3. Never reuse a key for a different operation — always generate a new key for each distinct request.
  4. Store the key alongside your internal records so you can trace which Transcodely job maps to which internal entity.
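Practice 4 can be as simple as persisting the key next to your internal record before the first API call, so a crash between request and response is recoverable on restart. A sketch using SQLite (the table and column names are illustrative, not part of the API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE uploads (video_id TEXT PRIMARY KEY, idempotency_key TEXT, job_id TEXT)"
)

def begin_upload(video_id: str, idempotency_key: str) -> None:
    # Persist the key before calling the API so a retry after a crash reuses it.
    conn.execute(
        "INSERT INTO uploads (video_id, idempotency_key) VALUES (?, ?)",
        (video_id, idempotency_key),
    )

def record_job(video_id: str, job_id: str) -> None:
    # After the API responds, store which Transcodely job maps to this record.
    conn.execute("UPDATE uploads SET job_id = ? WHERE video_id = ?", (job_id, video_id))
```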

Metadata

Metadata lets you attach custom key-value pairs to jobs. This is useful for linking Transcodely jobs to your internal systems — tracking which user uploaded a video, tagging jobs by campaign, or storing any other context you need.

Setting metadata

Metadata is a flat map of string keys to string values, set at job creation time:

```json
{
  "input_url": "gs://my-bucket/video.mp4",
  "output_origin_id": "ori_x9y8z7w6v5",
  "outputs": [
    {
      "type": "mp4",
      "video": [
        { "codec": "h264", "resolution": "1080p", "quality": "standard" }
      ]
    }
  ],
  "metadata": {
    "user_id": "usr_12345",
    "campaign": "summer-2026",
    "source": "upload-api",
    "content_id": "vid_abc123"
  }
}
```

Metadata is returned in all job responses, including webhook payloads:

```json
{
  "job": {
    "id": "job_a1b2c3d4e5f6",
    "status": "completed",
    "metadata": {
      "user_id": "usr_12345",
      "campaign": "summer-2026",
      "source": "upload-api",
      "content_id": "vid_abc123"
    }
  }
}
```

Constraints

| Constraint | Limit |
|---|---|
| Maximum entries | 20 key-value pairs per job |
| Key length | 1-64 characters |
| Value length | Up to 1,024 characters |
| Key format | Free-form string |
| Value format | Free-form string |

Metadata is immutable after job creation. You cannot add, update, or remove metadata entries after the job has been created.
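Because metadata is immutable and a rejected request wastes a round trip, it can help to validate the constraints client-side before submitting. A sketch against the limits above (the helper name is our own):

```python
def validate_metadata(metadata: dict[str, str]) -> None:
    """Raise ValueError if metadata violates the documented limits."""
    if len(metadata) > 20:
        raise ValueError("at most 20 metadata entries per job")
    for key, value in metadata.items():
        if not 1 <= len(key) <= 64:
            raise ValueError(f"key length must be 1-64 characters: {key!r}")
        if len(value) > 1024:
            raise ValueError(f"value for {key!r} exceeds 1,024 characters")
```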

Common use cases

Link to internal records — map Transcodely jobs back to your own database entities. When a webhook fires, use these values to update the correct records in your system:

```json
{
  "metadata": {
    "user_id": "usr_12345",
    "video_id": "vid_abc123",
    "upload_session": "sess_x9y8z7w6"
  }
}
```

Categorization and reporting — tag jobs for analytics and cost reporting. Export metadata alongside job costs to build per-team or per-campaign reports:

```json
{
  "metadata": {
    "team": "content-team",
    "campaign": "summer-2026",
    "content_type": "ugc",
    "tier": "free"
  }
}
```

Debugging and tracing — include request tracing information:

```json
{
  "metadata": {
    "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
    "request_id": "req_n3o4p5q6r7s8",
    "environment": "staging"
  }
}
```

Batch processing — track batch position and source:

```json
{
  "metadata": {
    "batch_id": "batch_2026-01-15",
    "batch_index": "42",
    "total_in_batch": "100"
  }
}
```

Metadata in webhooks

All metadata is included in webhook payloads, making it easy to correlate events with your internal state:

```json
{
  "type": "job.completed",
  "data": {
    "job": {
      "id": "job_a1b2c3d4e5f6",
      "status": "completed",
      "metadata": {
        "user_id": "usr_12345",
        "video_id": "vid_abc123"
      }
    }
  }
}
```
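A webhook handler can use those metadata values to route the event back to your own records. A minimal sketch assuming the payload shape above (the `mark_video_ready` callback is hypothetical; wiring it into an HTTP endpoint is up to your framework):

```python
def handle_webhook(payload: dict, mark_video_ready) -> None:
    """Correlate a job.completed event with an internal video record."""
    if payload.get("type") != "job.completed":
        return
    job = payload["data"]["job"]
    # video_id was set by us at job creation time; see the metadata examples above.
    video_id = job.get("metadata", {}).get("video_id")
    if video_id:
        mark_video_ready(video_id, job["id"])
```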

Metadata best practices

  1. Use consistent key naming across your application — decide on a convention (e.g., snake_case) and stick with it.
  2. Do not store sensitive data in metadata. Values are visible in API responses and webhook payloads.
  3. Keep values short when possible. While values can be up to 1,024 characters, shorter values are easier to work with.
  4. Use metadata for correlation, not configuration. Metadata does not affect how a job is processed — use output specs and presets for encoding configuration.
  5. Plan your keys upfront. Since metadata is immutable after creation, decide what you need to track before submitting jobs.

Field emission

API responses emit all fields consistently:

| Field type | Behavior |
|---|---|
| Scalars (string, int, bool) | Always present, even if zero/empty |
| Repeated fields | Always present as empty array `[]` |
| Map fields | Always present as empty object `{}` |
| Message fields (unset) | Emitted as `null` |
| Timestamps (unset) | Emitted as `null` |
| Enum defaults | Emitted as `"unspecified"` |
| Optional scalars (unset) | Omitted entirely |
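These rules mean client code can rely on repeated and map fields being present, but must still handle `null` message fields and entirely absent optional scalars. A defensive-access sketch (`completed_at` is a hypothetical field name used only for illustration):

```python
def job_completed_at(job: dict):
    # Unset timestamps arrive as null; dict.get also covers a missing key,
    # so both cases collapse to None for the caller.
    return job.get("completed_at")

def job_metadata(job: dict) -> dict:
    # Map fields are always emitted as {}, but `or {}` also guards
    # hand-built fixtures that omit the key entirely.
    return job.get("metadata") or {}
```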

Available services

| Service | Description | Endpoints |
|---|---|---|
| Organizations | Billing entities that contain apps and users | CheckSlug, Create, Get, Update, List |
| Apps | Projects within organizations | Create, Get, Update, List, Archive |
| API Keys | Programmatic API access credentials | Create, Get, List, Revoke |
| Users | User profiles, authentication, and membership management | GetMe, Get, UpdateMe, List, ListMembers, UpdateRole, RemoveMember |
| Origins | Storage locations for inputs and outputs | Create, Get, List, Update, Validate, Archive |
| Presets | Reusable encoding configurations | Create, Get, GetBySlug, List, Update, Duplicate, Archive |
| Jobs | Video transcoding operations | Create, Get, List, Cancel, Confirm, Watch |
| Webhooks | Event delivery for job lifecycle notifications | Create, Get, List, Update, Delete, ListEvents |
| Health | Service health monitoring | Check |