API Reference
The Transcodely API is built on Connect-RPC, a modern RPC framework that speaks both gRPC and HTTP/JSON. All endpoints accept and return JSON over HTTPS, making them accessible from any language or HTTP client.
Base URL
All API requests are made to:
```
https://api.transcodely.com
```

Authentication
Transcodely uses API keys for authentication. Include your key in the Authorization header as a Bearer token:
```
Authorization: Bearer {{API_KEY}}
```

API keys come in two environments:
| Environment | Prefix | Usage |
|---|---|---|
| Live | ak_live_ | Production workloads, real billing |
| Test | ak_test_ | Development and testing, no charges |
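Because the environment is encoded in the key prefix, client code can assert that it is holding the right kind of key for a given deployment before making any requests. An illustrative check (the helper name is ours, not part of any SDK):

```python
def key_environment(api_key: str) -> str:
    """Classify an API key as 'live' or 'test' from its prefix."""
    if api_key.startswith("ak_live_"):
        return "live"
    if api_key.startswith("ak_test_"):
        return "test"
    raise ValueError("unrecognized API key prefix")
```

A check like this in application startup code can prevent, for example, a staging deploy from accidentally running with a live key.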
Organization scope
Most endpoints are scoped to an organization. Include the organization ID in the X-Organization-ID header:
```
X-Organization-ID: {{ORG_ID}}
```

Endpoints that do not require this header are noted in their documentation (e.g., GetMe, Create Organization).
Content type
All requests and responses use JSON:
```
Content-Type: application/json
```

Field names in request and response bodies use snake_case. Enum values are simple lowercase strings (e.g., "pending", "h264", "1080p").
Request format
The API uses Connect-RPC’s HTTP/JSON mapping. All methods use POST with a JSON request body, regardless of whether the operation is a read or write.
```shell
curl -X POST https://api.transcodely.com/transcodely.v1.JobService/Get \
  -H "Authorization: Bearer {{API_KEY}}" \
  -H "X-Organization-ID: {{ORG_ID}}" \
  -H "Content-Type: application/json" \
  -d '{"id": "job_a1b2c3d4e5f6"}'
```

ID format
All resource IDs use Stripe-style prefixed identifiers:
| Resource | Prefix | Example |
|---|---|---|
| Organization | org_ | org_f6g7h8i9j0 |
| App | app_ | app_k1l2m3n4o5 |
| API Key | ak_ | ak_live_abc123... |
| User | usr_ | usr_a1b2c3d4e5 |
| Membership | mem_ | mem_a1b2c3d4e5 |
| Origin | ori_ | ori_x9y8z7w6v5 |
| Preset | pst_ | pst_x9y8z7w6v5 |
| Job | job_ | job_a1b2c3d4e5f6 |
Pagination
All list endpoints support cursor-based pagination to efficiently traverse large result sets. Cursors provide stable results even as new resources are created or deleted between requests.
Request parameters
Every list endpoint accepts a pagination object:
| Field | Type | Default | Description |
|---|---|---|---|
| limit | integer | 20 | Maximum items per page (1-100) |
| cursor | string | "" | Cursor from previous response for next page |
| offset | integer | 0 | Alternative to cursor — skip N items |
```shell
curl -X POST https://api.transcodely.com/transcodely.v1.JobService/List \
  -H "Authorization: Bearer {{API_KEY}}" \
  -H "X-Organization-ID: org_a1b2c3d4e5" \
  -H "Content-Type: application/json" \
  -d '{
    "pagination": {
      "limit": 10
    }
  }'
```

Response metadata
Every list response includes a pagination object:
| Field | Type | Description |
|---|---|---|
| next_cursor | string | Cursor for fetching the next page. Empty if no more pages. |
| total_count | integer (optional) | Total number of matching items, if available |
```json
{
  "jobs": [ "..." ],
  "pagination": {
    "next_cursor": "eyJpZCI6ImpvYl94OXk4ejd3NnY1In0",
    "total_count": 142
  }
}
```

Pass the next_cursor value as cursor in your next request to fetch the next page. An empty next_cursor means there are no more results.
Iterating all pages
Loop through pages using the cursor returned in each response:
```typescript
async function getAllJobs(client: JobServiceClient): Promise<Job[]> {
  const allJobs: Job[] = [];
  let cursor = '';
  do {
    const response = await client.list({
      pagination: { limit: 100, cursor },
    });
    allJobs.push(...response.jobs);
    cursor = response.pagination?.next_cursor ?? '';
  } while (cursor !== '');
  return allJobs;
}
```

```python
def get_all_jobs(client):
    all_jobs = []
    cursor = ""
    while True:
        response = client.list(
            pagination={"limit": 100, "cursor": cursor}
        )
        all_jobs.extend(response.jobs)
        cursor = response.pagination.next_cursor
        if not cursor:
            break
    return all_jobs
```

```go
func getAllJobs(ctx context.Context, client jobv1connect.JobServiceClient) ([]*jobv1.Job, error) {
    var allJobs []*jobv1.Job
    cursor := ""
    for {
        resp, err := client.List(ctx, connect.NewRequest(&jobv1.ListJobsRequest{
            Pagination: &commonv1.PaginationRequest{
                Limit:  100,
                Cursor: cursor,
            },
        }))
        if err != nil {
            return nil, err
        }
        allJobs = append(allJobs, resp.Msg.Jobs...)
        cursor = resp.Msg.Pagination.NextCursor
        if cursor == "" {
            break
        }
    }
    return allJobs, nil
}
```

Offset pagination
As an alternative to cursors, you can use offset-based pagination by setting the offset field. This is simpler but less stable — if items are created or deleted between pages, you may see duplicates or skip items.
```json
{
  "pagination": { "limit": 10, "offset": 20 }
}
```

Use offset pagination only when you need random access to a specific page (e.g., “jump to page 3”). For sequential traversal, always prefer cursors.
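For a jump-to-page UI, the offset is simply derived from the page number and page size. A small illustration of the arithmetic (the helper name and 1-indexed convention are ours):

```python
def page_to_pagination(page: int, limit: int = 10) -> dict:
    """Build an offset-based pagination object for a 1-indexed page number."""
    if page < 1:
        raise ValueError("page numbers are 1-indexed")
    return {"limit": limit, "offset": (page - 1) * limit}
```

With a page size of 10, page 3 corresponds to the `{"limit": 10, "offset": 20}` request shown above.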
Cursor vs offset
| Feature | Cursor | Offset |
|---|---|---|
| Stability | Stable across inserts/deletes | May skip or duplicate items |
| Performance | Consistent (index-based) | Slower on deep pages |
| Random access | Not supported | Supported |
| Recommended for | Sequential iteration, real-time data | Jump-to-page UIs |
Pagination best practices
- Use cursor pagination for iterating through results sequentially.
- Set `limit` to the maximum your UI can display — fewer requests means better performance.
- Stop when `next_cursor` is empty — this is the only reliable signal that you have reached the last page.
- Do not construct cursors manually — they are opaque tokens. Always use the value returned by the API.
- Cache `total_count` if needed — it may not be available on all endpoints and can be expensive to compute.
Error format
Errors follow a structured format with machine-readable codes and field-level detail:
```json
{
  "code": "invalid_argument",
  "message": "Request validation failed",
  "details": [
    {
      "type": "transcodely.v1.ErrorDetails",
      "value": {
        "code": "validation_error",
        "message": "Request validation failed",
        "field_violations": [
          {
            "field": "outputs[0].video[0].codec",
            "description": "codec is required"
          }
        ]
      }
    }
  ]
}
```

Error codes
The API uses standard Connect-RPC error codes:
| Code | HTTP Status | Description |
|---|---|---|
| invalid_argument | 400 | Request validation failed |
| unauthenticated | 401 | Missing or invalid API key |
| permission_denied | 403 | Insufficient permissions or suspended account |
| not_found | 404 | Resource does not exist |
| already_exists | 409 | Resource already exists (e.g., duplicate slug) |
| failed_precondition | 412 | Operation not allowed in current state |
| resource_exhausted | 429 | Rate limit exceeded |
| internal | 500 | Internal server error |
| unavailable | 503 | Service temporarily unavailable |
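Clients typically branch on these codes: transient conditions are retried, while validation errors are surfaced to the caller along with their field_violations. A sketch against the error JSON shape shown earlier; the choice of which codes to treat as retryable is our suggestion, not an API guarantee:

```python
# Codes that usually indicate a transient condition worth retrying
# (our suggestion; tune for your workload).
RETRYABLE_CODES = {"resource_exhausted", "unavailable", "internal"}

def field_violations(error_body: dict) -> list[tuple[str, str]]:
    """Extract (field, description) pairs from a structured error response."""
    violations = []
    for detail in error_body.get("details", []):
        if detail.get("type") == "transcodely.v1.ErrorDetails":
            for v in detail.get("value", {}).get("field_violations", []):
                violations.append((v["field"], v["description"]))
    return violations
```

For the example error above, this would yield a single pair pointing at `outputs[0].video[0].codec`.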
Idempotency
Idempotency ensures that retrying a request produces the same result as the original, without creating duplicate resources. This is critical for handling network failures, timeouts, and other transient errors in production systems.
How it works
When creating a job, include an idempotency_key in the request body. If Transcodely receives a second request with the same key, it returns the result of the original request instead of creating a new job.
```shell
curl -X POST https://api.transcodely.com/transcodely.v1.JobService/Create \
  -H "Authorization: Bearer {{API_KEY}}" \
  -H "X-Organization-ID: org_a1b2c3d4e5" \
  -H "Content-Type: application/json" \
  -d '{
    "input_url": "gs://my-bucket/video.mp4",
    "output_origin_id": "ori_x9y8z7w6v5",
    "outputs": [
      {
        "type": "mp4",
        "video": [
          { "codec": "h264", "resolution": "1080p", "quality": "standard" }
        ]
      }
    ],
    "idempotency_key": "upload_usr12345_2026-01-15T10:30:00Z"
  }'
```

The first request creates the job and associates it with the key. Subsequent requests with the same key return the existing job without creating a new one.
Key format
Idempotency keys are free-form strings up to 128 characters. We recommend a format that ties the key to the specific operation:
| Strategy | Example | Best for |
|---|---|---|
| UUID v4 | 550e8400-e29b-41d4-a716-446655440000 | Simple, guaranteed uniqueness |
| Operation-based | upload_usr12345_2026-01-15T10:30:00Z | Readable, debuggable |
| Content hash | sha256:a1b2c3d4e5f6... | Deduplication based on input |
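The content-hash strategy derives the key from the request payload itself, so identical submissions dedupe automatically. A minimal sketch, assuming you serialize the request deterministically before hashing:

```python
import hashlib
import json

def content_hash_key(request_body: dict) -> str:
    """Derive an idempotency key from the request payload itself.

    Identical payloads always produce the same key, so duplicate
    submissions collapse into one job. The result ("sha256:" plus a
    64-character hex digest) stays well under the 128-character limit.
    """
    canonical = json.dumps(request_body, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode()).hexdigest()
    return f"sha256:{digest}"
```

Note that because request bodies are not compared on replay, a content-hash key only dedupes truly identical payloads; any change to the request produces a new key and a new job.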
Scope and replay behavior
Idempotency keys are scoped to the app associated with the API key. The same key can be used independently across different apps without conflict.
| Scenario | Behavior |
|---|---|
| Same key, same request body | Returns the original job |
| Same key, different request body | Returns the original job (request body is not compared) |
| Same key, different API key (same app) | Returns the original job |
| Same key, different app | Creates a new job (keys are app-scoped) |
Important: The API does not compare request bodies when replaying an idempotency key. If you reuse a key with a different request body, you will get back the original job — not a new job with the new parameters. Always use unique keys for distinct operations.
Expiration
Idempotency keys are stored for 24 hours. After expiration, a previously used key can be reused to create a new job.
When to use idempotency keys
- Network retries — your HTTP client automatically retries on timeout or connection reset
- Queue-based processing — a message queue may deliver the same message more than once
- User-triggered actions — a user clicks “Submit” multiple times before the UI disables the button
- Batch processing — processing a list of items where some may need to be retried
Example: safe retry logic
```typescript
async function createJobWithRetry(
  client: JobServiceClient,
  request: CreateJobRequest,
  maxRetries = 3
): Promise<Job> {
  const idempotencyKey = `job_${crypto.randomUUID()}`;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const response = await client.create({
        ...request,
        idempotency_key: idempotencyKey,
      });
      return response.job;
    } catch (err) {
      if (err instanceof ConnectError) {
        if (err.code === Code.InvalidArgument || err.code === Code.NotFound) {
          throw err;
        }
        if (attempt < maxRetries) {
          await sleep(Math.pow(2, attempt) * 1000);
          continue;
        }
      }
      throw err;
    }
  }
  throw new Error('Max retries exceeded');
}
```

```python
import time
import uuid

from connectrpc.exceptions import ConnectError

def create_job_with_retry(client, request, max_retries=3):
    idempotency_key = f"job_{uuid.uuid4()}"
    for attempt in range(max_retries + 1):
        try:
            request.idempotency_key = idempotency_key
            response = client.create(request)
            return response.job
        except ConnectError as e:
            if e.code in ("invalid_argument", "not_found"):
                raise
            if attempt < max_retries:
                time.sleep(2 ** attempt)
                continue
            raise
```

Idempotency best practices
- Generate the key before the first attempt and reuse it across retries.
- Use descriptive, deterministic keys when possible — they make debugging easier.
- Never reuse a key for a different operation — always generate a new key for each distinct request.
- Store the key alongside your internal records so you can trace which Transcodely job maps to which internal entity.
Metadata
Metadata lets you attach custom key-value pairs to jobs. This is useful for linking Transcodely jobs to your internal systems — tracking which user uploaded a video, tagging jobs by campaign, or storing any other context you need.
Setting metadata
Metadata is a flat map of string keys to string values, set at job creation time:
```json
{
  "input_url": "gs://my-bucket/video.mp4",
  "output_origin_id": "ori_x9y8z7w6v5",
  "outputs": [
    {
      "type": "mp4",
      "video": [
        { "codec": "h264", "resolution": "1080p", "quality": "standard" }
      ]
    }
  ],
  "metadata": {
    "user_id": "usr_12345",
    "campaign": "summer-2026",
    "source": "upload-api",
    "content_id": "vid_abc123"
  }
}
```

Metadata is returned in all job responses, including webhook payloads:
```json
{
  "job": {
    "id": "job_a1b2c3d4e5f6",
    "status": "completed",
    "metadata": {
      "user_id": "usr_12345",
      "campaign": "summer-2026",
      "source": "upload-api",
      "content_id": "vid_abc123"
    }
  }
}
```

Constraints
| Constraint | Limit |
|---|---|
| Maximum entries | 20 key-value pairs per job |
| Key length | 1-64 characters |
| Value length | Up to 1,024 characters |
| Key format | Free-form string |
| Value format | Free-form string |
Metadata is immutable after job creation. You cannot add, update, or remove metadata entries after the job has been created.
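Because metadata cannot be changed after creation, it can be worth validating it client-side before submitting a job. A sketch encoding the limits from the table above (the function and its messages are ours; exact server-side behavior on violation is not specified here):

```python
# Limits from the constraints table above.
MAX_ENTRIES = 20
MAX_KEY_LEN = 64
MAX_VALUE_LEN = 1024

def validate_metadata(metadata: dict[str, str]) -> list[str]:
    """Return a list of constraint violations; an empty list means valid."""
    problems = []
    if len(metadata) > MAX_ENTRIES:
        problems.append(f"too many entries: {len(metadata)} > {MAX_ENTRIES}")
    for key, value in metadata.items():
        if not 1 <= len(key) <= MAX_KEY_LEN:
            problems.append(f"key length out of range: {key!r}")
        if len(value) > MAX_VALUE_LEN:
            problems.append(f"value too long for key {key!r}")
    return problems
```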
Common use cases
Link to internal records — map Transcodely jobs back to your own database entities. When a webhook fires, use these values to update the correct records in your system:
```json
{
  "metadata": {
    "user_id": "usr_12345",
    "video_id": "vid_abc123",
    "upload_session": "sess_x9y8z7w6"
  }
}
```

Categorization and reporting — tag jobs for analytics and cost reporting. Export metadata alongside job costs to build per-team or per-campaign reports:
```json
{
  "metadata": {
    "team": "content-team",
    "campaign": "summer-2026",
    "content_type": "ugc",
    "tier": "free"
  }
}
```

Debugging and tracing — include request tracing information:
```json
{
  "metadata": {
    "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
    "request_id": "req_n3o4p5q6r7s8",
    "environment": "staging"
  }
}
```

Batch processing — track batch position and source:
```json
{
  "metadata": {
    "batch_id": "batch_2026-01-15",
    "batch_index": "42",
    "total_in_batch": "100"
  }
}
```

Metadata in webhooks
All metadata is included in webhook payloads, making it easy to correlate events with your internal state:
```json
{
  "type": "job.completed",
  "data": {
    "job": {
      "id": "job_a1b2c3d4e5f6",
      "status": "completed",
      "metadata": {
        "user_id": "usr_12345",
        "video_id": "vid_abc123"
      }
    }
  }
}
```

Metadata best practices
- Use consistent key naming across your application — decide on a convention (e.g., `snake_case`) and stick with it.
- Do not store sensitive data in metadata. Values are visible in API responses and webhook payloads.
- Keep values short when possible. While values can be up to 1,024 characters, shorter values are easier to work with.
- Use metadata for correlation, not configuration. Metadata does not affect how a job is processed — use output specs and presets for encoding configuration.
- Plan your keys upfront. Since metadata is immutable after creation, decide what you need to track before submitting jobs.
Field emission
API responses emit all fields consistently:
| Field type | Behavior |
|---|---|
| Scalars (string, int, bool) | Always present, even if zero/empty |
| Repeated fields | Always present as empty array [] |
| Map fields | Always present as empty object {} |
| Message fields (unset) | Emitted as null |
| Timestamps (unset) | Emitted as null |
| Enum defaults | Emitted as "unspecified" |
| Optional scalars (unset) | Omitted entirely |
Available services
| Service | Description | Endpoints |
|---|---|---|
| Organizations | Billing entities that contain apps and users | CheckSlug, Create, Get, Update, List |
| Apps | Projects within organizations | Create, Get, Update, List, Archive |
| API Keys | Programmatic API access credentials | Create, Get, List, Revoke |
| Users | User profiles, authentication, and membership management | GetMe, Get, UpdateMe, List, ListMembers, UpdateRole, RemoveMember |
| Origins | Storage locations for inputs and outputs | Create, Get, List, Update, Validate, Archive |
| Presets | Reusable encoding configurations | Create, Get, GetBySlug, List, Update, Duplicate, Archive |
| Jobs | Video transcoding operations | Create, Get, List, Cancel, Confirm, Watch |
| Webhooks | Event delivery for job lifecycle notifications | Create, Get, List, Update, Delete, ListEvents |
| Health | Service health monitoring | Check |