The Studio Automated REST API can be used to manage your servers, cameras and recordings.
Below is an overview of the landscape and its components.
The following hierarchy is used for the different entities in our landscape.
The following explains the (video) data flow from camera to end user. Standalone Servers locally encode, process and directly serve their content to the requested endpoints. Integrated Camera Servers rely on Studio Automated Cloud Processing to generate AI-directed broadcast streams.
The following explains the control flow to manage servers and recordings.
End users can either use the Studio Automated Dashboard or an integration made by a Studio Automated Partner who uses the REST API.
This allows scheduling recordings, managing servers and monitoring server health.
A tool to work with the API is also available: Studio Automated REST API Admin Portal
Authentication to the API is handled by either passing a cookie or an Authorization header:
curl /api/v3/... --cookie "access_token={MY_ACCESS_TOKEN}"
curl /api/v3/... -H "Authorization: Bearer {MY_ACCESS_TOKEN}"
A temporary token can be retrieved by performing a login. (Please note the sports domain instead of api.sports (sports-test for test) and v2, which is still used here.)
curl 'https://sports.studioautomated.com/api/v2/auth/login' -H 'Content-Type: application/json' -X POST -d '{"username":"me@example.com","password":"mypassword"}' -i
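The token from the login response can then be reused for subsequent calls. A minimal Python sketch for building the Authorization header; the exact shape of the login response body is an assumption here, so adjust the field name to match what your deployment actually returns:

```python
import json

# The "access_token" key is an assumption about the login response payload.
def bearer_header(login_response_body, token_key="access_token"):
    token = json.loads(login_response_body)[token_key]
    return {"Authorization": f"Bearer {token}"}

headers = bearer_header('{"access_token": "abc123"}')
# pass `headers` to your HTTP client for subsequent /api/v3/... calls
```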
Get a single item:
GET /api/v3/core/servers/432b9f88-2685-47b2-ba14-71c68a2c26ad
Get all items matching a filter:
GET /api/v3/core/servers?customer_id=432b9f88-2685-47b2-ba14-71c68a2c26ad
Using multiple values for a filter:
GET /api/v3/core/servers?customer_id=432b9f88-2685-47b2-ba14-71c68a2c26ad,422b9f88-2685-47b2-ba14-71c68a2c26ad
Using multiple filters:
GET /api/v3/core/servers?customer_id=432b9f88-2685-47b2-ba14-71c68a2c26ad&host_name=www.example.com
Using filters on nested fields:
GET /api/v3/core/servers?system.ram_total=31851
Using range filter operators to get all items with a value greater than:
GET /api/v3/core/servers?system.ram_total=gt:31851
Where the following operators are available:

- gt = greater than
- lt = less than
- ne = not equal to
- eq = equal to (default)
- lte = less or equal to
- gte = greater or equal to
- in = value in string or array (use ini for case-insensitive matching)
- nin = value not in string or array (use nini for case-insensitive matching)
- ex = field exists (ex:true) or not exists (ex:false)
- sz = size of array matches value
- em = string or array is empty (em:true) or non-empty (em:false)

Sorting is available using the sort parameter:
GET /api/v3/core/servers?sort=customer_id
GET /api/v3/core/servers?sort=customer_id:asc
GET /api/v3/core/servers?sort=customer_id:desc
GET /api/v3/core/servers?sort=customer_id:asc,host_name:desc
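As a hypothetical illustration of these filter and sort conventions (a sketch, not an official client library), a small helper that builds such query strings:

```python
from urllib.parse import urlencode

def build_query(base, filters=None, sort=None):
    """Build a query string following the filter/sort conventions above.

    filters: dict mapping a field name to a value, an "op:value" string,
             or a list of values (comma separated in the query).
    sort:    list of (field, direction) tuples.
    """
    params = {}
    for field, value in (filters or {}).items():
        if isinstance(value, (list, tuple)):
            value = ",".join(value)  # multiple values for one filter
        params[field] = value
    if sort:
        params["sort"] = ",".join(f"{field}:{direction}" for field, direction in sort)
    # keep ',' and ':' literal so operators and value lists stay readable
    return f"{base}?{urlencode(params, safe=',:')}"

url = build_query(
    "/api/v3/core/servers",
    filters={
        "customer_id": ["432b9f88-2685-47b2-ba14-71c68a2c26ad",
                        "422b9f88-2685-47b2-ba14-71c68a2c26ad"],
        "system.ram_total": "gt:31851",
    },
    sort=[("customer_id", "asc"), ("host_name", "desc")],
)
```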
Scheduling recordings and overlays
Recordings can be scheduled as jobs to be performed by the server / worker machines.
Currently there are two types of jobs: recordings and optional overlays, which are images that are superimposed on the broadcast.
Both have a schedule consisting of a start_time and an end_time. Overlays require a parent_id set to the id of the recording. For both recordings and overlays, the start_time and end_time are in UTC.
A recording can have multiple views. These are multiple stream outputs for a recording. An example would be a low resolution stream for mobile usage and a high resolution stream for broadband viewing.
A single view recording would look like this:
POST /api/v3/scheduler/recordings
{
"schedule": {
"start_time": "2022-01-28T15:00:00",
"end_time": "2022-01-28T15:45:00"
},
"server_id": "48a37397-89f4-40b7-b07b-5481bb5dd8f2",
"title": "my recording",
"type_settings": {
"sport": "football",
"views": [
{
"camera_name": "main",
"url": "rtmp://my_video_stream"
}
]
}
}
This will then return the id of that recording:

```
[{"id":"78a39997-89f4-40b7-b07b-5481bb5ee8f2"}]
```
Use GET /api/v3/scheduler/recordings/{recording_id} to retrieve the full recording status.
To make changes to the recording you can use partial update with PATCH
```
PATCH /api/v3/scheduler/recordings/{recording_id}
{
  "schedule": {
    "start_time": "2022-02-28T15:00:00",
    "end_time": "2022-02-28T15:45:00"
  }
}
```
These type settings depend on the license in which they are defined/enabled:
For more details, please see:
A single recording can have multiple views defined. This offers multi camera and multiple quality outputs.
An overlay is an image that is superimposed on top of a recording stream, for example a logo or sponsor message.
This image will be stretched full screen. When a partial overlay is required, for example a corner logo, supply an image that is mostly transparent and has the logo in the corner.
layer: overlays will be merged in order of the overlay number; the overlay with the higher number, compared to other overlays, ends up "on top".
```
POST /api/v3/scheduler/overlays
{
  "parent_id": "432b9f88-2685-47b2-ba14-71c68a2c26ad",
  "schedule": {
    "start": 0,
    "end": 0
  },
  "server_id": "48a37397-89f4-40b7-b07b-5481bb5dd8f2",
  "title": "logo",
  "type_settings": {
    "image_location": "https://www.logos.com/mylogo.png",
    "layer": 42
  }
}
```
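As a small illustration of the layer ordering described above (a sketch, not part of the API):

```python
def merge_order(overlays):
    """Return overlay titles in compositing order: lower `layer` values are
    drawn first, so the overlay with the highest layer ends up on top."""
    return [o["title"] for o in sorted(overlays, key=lambda o: o["layer"])]

print(merge_order([
    {"title": "logo", "layer": 42},
    {"title": "background banner", "layer": 1},
]))  # → ['background banner', 'logo']
```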
Since overlays are child jobs of recordings, they can use relative schedules using start and end instead of start_time and end_time respectively, or a combination:
This schedule will run for the full duration of the parent recording:
```
"schedule": { "start": 0, "end": 0 }
```
This schedule will start 30 seconds after the recording starts and end 30 seconds later (60 seconds after the recording started):
```
"schedule": { "start": 30, "end": 60}
```
This schedule will start 30 seconds after the recording starts and, by using a negative value, end 30 seconds before the recording ends:
```
"schedule": {"start": 30, "end": -30}
```
This schedule will start 60 seconds before the recording ends and end 30 seconds before the recording ends, by using negative values:
```
"schedule": {"start": -60,"end": -30}
```
This schedule will start 30 seconds after the recording starts and end at an absolute end time:
```
"schedule": {"start": 30, "end_time": "2022-01-28T15:05:00"}
```
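The relative schedule semantics above can be sketched as follows; this mirrors the documented examples and is not the server implementation:

```python
from datetime import datetime, timedelta

def resolve_overlay_schedule(parent_start, parent_end, start=0, end=0):
    """Resolve a relative overlay schedule against its parent recording:
    0 means the corresponding parent boundary, positive offsets count from
    the recording start, negative offsets count back from the recording end."""
    def resolve(offset, boundary_if_zero):
        if offset == 0:
            return boundary_if_zero
        base = parent_start if offset > 0 else parent_end
        return base + timedelta(seconds=offset)
    return resolve(start, parent_start), resolve(end, parent_end)

ps = datetime(2022, 1, 28, 15, 0, 0)
pe = datetime(2022, 1, 28, 15, 45, 0)
resolve_overlay_schedule(ps, pe, 30, -30)  # 15:00:30 .. 15:44:30
```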
For more details, please see:
Note: Requires Virtual Director >= v5.3.0
By providing a scoreboard_zip_url instead of an image_location to an overlay, it can be updated dynamically using score and time (clock) data.
The zip should contain the following file structure:
The OCR scoreboard detection or the FastData API can be used to provide input to the dynamic overlay.
Note: Requires Virtual Director >= v5.3.0
Note: Only a limited SVG feature set is implemented. A browser side preview may not be accurate. Please test beforehand.
By providing an svg_template instead of an image_location to an overlay, it can be updated dynamically using score, time (clock) and other custom data defined in SVG format. The {{variable_name}} notation is used to dynamically insert data. This data can be updated using the OCR scoreboard detection or the FastData API.
```
POST /api/v3/scheduler/overlays
{
  "parent_id": "432b9f88-2685-47b2-ba14-71c68a2c26ad",
  "schedule": {
    "start": 0,
    "end": 0
  },
  "server_id": "48a37397-89f4-40b7-b07b-5481bb5dd8f2",
  "title": "my custom svg scoreboard",
  "type_settings": {
    "svg_template": ""
  }
}
```
Note: Requires Virtual Director >= v5.3.0
By providing a toggle name, an overlay can be shown or hidden dynamically.
```
POST /api/v3/scheduler/overlays
{
  ...
  "title": "sponsors",
  "type_settings": {
    "image_location": "https://www.logos.com/sponsors.png",
    "toggle": "show_sponsors"
  }
}
```
Pushing Bshow_sponsors:1 to the FastData websocket will show the overlay, while Bshow_sponsors:0 will hide it. (The B is short for boolean toggle.)
(See also FastData API)
Note: Requires Virtual Director >= v5.4.0
Note: Requires LIVE_COMMENTARY license
Retrieve the stream_key property from the view you want to add audio to using GET https://.../api/v3/scheduler/recordings/{recording_id}. This can also be found in the dashboard on the recording's page under the properties tab, in recording.type_settings.views[{view of your liking}].commentary_stream_key. Note that this is different from the recording or view IDs.
The best way to stream audio to the recording is to use WebRTC. To use WebRTC, stream audio to:
https://commentary.sports.studioautomated.com/{stream_key}
Make sure to replace {stream_key} with the stream_key of the view you want to add audio to.
To test adding live commentary to a recording using WebRTC you can go to the following address in your browser:
This is only for testing, do not use this in a production environment
https://commentary.sports.studioautomated.com/{stream_key}/publish
Connecting to the SA Live commentary:
Prerequisites:
- Basic knowledge of the WebRTC protocol as well as its flow
- The view of a recording needs to have a commentary_stream_key
- An SA authorization token
Live commentary allows users to add commentary from any device that can send audio. (In a browser, the easiest way is to combine navigator.mediaDevices with a MediaRecorder instance created from the received stream.) Streaming the audio to the recording relies on the WHIP part of the WebRTC protocol.
The main URL (named main_url further down) for the SA live commentary feature is https://commentary.sports.studioautomated.com, while the target URL (named target_url further down) to send the stream to a concrete recording is https://commentary.sports.studioautomated.com/{commentary_stream_key}/whip.
WHIP Connection basic overview:
Initialize the Client: Start by creating an instance of the WHIP client. This sets up the necessary properties for managing the connection.
Prepare the Media Stream: Obtain the media stream (audio, video, or both) that you want to send over the connection. This can be from a user's camera or microphone.
Request ICE Servers (Call OPTIONS target_url): Send a request to the server to retrieve ICE servers, which are essential for establishing connectivity between peers. This request typically uses an HTTP OPTIONS method.
Handle ICE Servers Response: Once you receive the response with the ICE servers, parse the information to extract the necessary details (URLs, usernames, credentials) for setting up the peer connection. The information is inside the Link header of the response.
Create a Peer Connection: Use the extracted ICE server information to create a new RTCPeerConnection. This connection will manage the media streams and ICE candidates.
Add Media Tracks: Add the tracks from your media stream to the peer connection. This step ensures that the audio and/or video data can be sent through the connection.
Generate and Send an SDP Offer (Call POST target_url with body: offer.sdp): Create an SDP offer that describes the media capabilities of the connection. Send this offer to the remote peer using an HTTP POST request. Make sure to save the Location header information somewhere, as it represents the established session id.
Modify the SDP Offer (if necessary): Optionally, adjust the SDP offer to optimize settings such as audio codec and bitrate for better performance before sending it.
Monitor Connection State: Set up event listeners to monitor the connection state. This includes handling events such as disconnection and ICE candidate gathering.
Handle Remote Answer: When the remote peer responds with an SDP answer, set this as the remote description for your connection. This step finalizes the connection setup.
Gather and Send ICE Candidates: As ICE candidates are generated, either queue them for later or send them immediately to the remote peer. This ensures that the connection can be established successfully. You can send the ICE candidates with PATCH main_url/{session_id} (the session id was stored in the previous step from the Location header).
Implement Reconnection Logic: If the connection experiences issues, implement logic to close the current connection and attempt to reconnect after a brief pause.
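Step 4 above (extracting ICE servers from the Link header of the OPTIONS response) can be sketched as follows. This is a simplified parser for the common `rel="ice-server"` form, not a complete Link-header implementation:

```python
import re

def parse_ice_servers(link_header):
    """Parse ICE server entries from a WHIP OPTIONS response `Link` header.

    Returns a list of dicts with `urls` and, when present, `username` and
    `credential`, suitable for configuring an RTCPeerConnection.
    """
    servers = []
    # entries are comma separated; split only before the next <...> target
    for entry in re.split(r",\s*(?=<)", link_header):
        m = re.match(r"<([^>]+)>(.*)", entry)
        if not m:
            continue
        url, params = m.groups()
        attrs = dict(re.findall(r'(\w[\w-]*)="([^"]*)"', params))
        if attrs.get("rel") != "ice-server":
            continue
        server = {"urls": url}
        if "username" in attrs:
            server["username"] = attrs["username"]
        if "credential" in attrs:
            server["credential"] = attrs["credential"]
        servers.append(server)
    return servers
```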
Another way would be to stream audio to the recording with RTSP. To use RTSP, stream audio to:
rtsp://commentary.sports.studioautomated.com:28554/{stream_key}
One of the ways audio can be streamed to the recording with RTSP is to use ffmpeg on a system that has a microphone. The following command could be a good starting point to get things up and running:
ffmpeg -f alsa -i hw:1,{audio_device_id} -acodec aac -b:a 128k -f rtsp rtsp://commentary.sports.studioautomated.com:28554/{stream_key}
The audio_device_id is a number that corresponds to the input device you want to use. You can find this device id on Linux machines with the following command:
arecord -l
Which will give you an output like:
```
**** List of CAPTURE Hardware Devices ****
card 1: PCH [HDA Intel PCH], device 0: ALC285 Analog [ALC285 Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
```
In this case the audio_device_id would be 0.
To mute or unmute a recording the following endpoints are available. These only work while the recording is running and will cause the output stream to be muted/unmuted; local recordings will remain available, one with the original audio and one without.
```
POST /api/v3/scheduler/recordings/{recording_id}/mute
```
```
POST /api/v3/scheduler/recordings/{recording_id}/unmute
```
A recording can also be scheduled that uses file input instead of a live camera feed. This can, for example, be used to process offline after a game and/or to reprocess with other settings, like using another AI sports model.
Note: this requires the 'video_file_input' license module
To start/schedule a recording from file input you can use:
```
POST /api/v3/scheduler/recordings
{
  "title": "Recording using video file input",
  "schedule": {
    "start": 0,    # start directly
    "end": 600     # end for example after 600 seconds
  },
  "type_settings": {
    "cameras": {
      "main": {
        "urls": [
          "/home/user/data/20230831T045601/video/videomaintest_1_000.ts",
          "/home/user/data/20230831T045601/video/videomaintest_2_000.ts",
          "/home/user/data/20230831T045601/video/videomaintest_3_000.ts",
          "/home/user/data/20230831T045601/video/videomaintest_4_000.ts"
        ],
        "offsets": [0, 0.1, 0.2, 0.3],  # relative synchronization offsets per lens in seconds, where 0 is the default for each lens
        "processing": "all_frames",     # or "keep_live", which is the default
        "camera_type": "panovu32"
      }
    },
    "views": [
      {
        "camera_name": "main",
        "url": "/home/user/data/recording_using_video_file_input_20230831T045601.avi",
        "quality": "high",
        "mode": "broadcast",
        "enable_audio": true,
        "enable_ocr": false,
        "sport": "hockey"
      }
    ]
  }
}
```
Processing enum explained:
- keep_live is a best-effort attempt to stream as live as possible, skipping past parts of the video and dropping frames in order to keep the stream as live as possible
- all_frames processes all frames and, depending on system specs, may take longer or shorter than the actual video duration
Stop recording:
```
POST /api/v3/scheduler/stop/{recording_id}
```
Note: locally /api/v3/scheduler/stop suffices.
Both the view url and the result_url support template parameters. This can be used to define default URLs to which metadata is added automatically.
For example you can configure your server with:
```
PATCH /api/v3/core/servers/{server_id}
{
  "recording_options": {
    "url": "rtmp://www.mysite.com/stream/{recording_id}",
    "result_url": "rtmp://www.mysite.com/view/{recording_id}"
  }
}
```
All subsequent recordings will automatically use this template to stream without user input required when scheduling.
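The substitution itself is performed server-side; as a hypothetical illustration of how such a template placeholder expands:

```python
def fill_template(template, **values):
    """Illustrative only: expand {placeholder} template parameters the way
    the server does when applying recording_options to a new recording."""
    for key, value in values.items():
        template = template.replace("{" + key + "}", value)
    return template

fill_template("rtmp://www.mysite.com/stream/{recording_id}",
              recording_id="78a39997-89f4-40b7-b07b-5481bb5ee8f2")
# → "rtmp://www.mysite.com/stream/78a39997-89f4-40b7-b07b-5481bb5ee8f2"
```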
For more information about available template place holders, see:
Schedule a recording with the event detection module. The event detection module requires a sport and a list of camera names (note that supplying a list is enforced). An Event detection license is required to use this feature. Currently, when supplying multiple camera names, only the first one will be used for actual event detection.
```
POST /api/v3/scheduler/recordings
{
  "schedule": {
    "start_time": "2022-01-28T15:00:00",
    "end_time": "2022-01-28T15:45:00"
  },
  "server_id": "48a37397-89f4-40b7-b07b-5481bb5dd8f2",
  "title": "my recording",
  "type_settings": {
    "views": [
      {
        "camera_name": "main",
        "enable_audio": true,
        "enable_ocr": false,
        "mode": "broadcast",
        "quality": "high",
        "sport": "football",
        "url": "rtmp://my_video_stream"
      }
    ],
    "modules": [
      {
        "type": "event_detection",
        "sport": "football",
        "camera_names": ["main"]
      }
    ]
  }
}
```
Right now, we support football for event detection, which is capable of detecting the following events:
More sports will follow.
Once the recording has finished, the events are available in the recording output:
```
GET /api/v3/scheduler/recordings/{id}
```
Note: Requires Virtual Director >= v5.26.1 (Local only, not available for Cloud Processing)
Note: Requires LOCAL_STREAM license
Passing a local_stream_key to a view:
```
"views" : [{"local_stream_key":"hello",...}]
```
will stream that view live on the local server at rtsp://localhost:554/api/v3/media/streams/{local_stream_key} or, for HLS, at http://localhost:8888/api/v3/media/streams/{local_stream_key}/stream.m3u8.
Monitor server and camera health status
Health data is continuously updated and can be retrieved for monitoring servers and cameras, for example to monitor the status of the server hardware, software, license and internet connection.
For more details, please see:
GET /api/v3/health/servers
GET /api/v3/health/servers/432b9f88-2685-47b2-ba14-71c68a2c26ad
The /core/servers endpoint also includes status and status_messages fields to indicate whether certain checks have failed. For example:
GET /api/v3/core/servers/432b9f88-2685-47b2-ba14-71c68a2c26ad
could return:
{
"status": "warning",
"status_messages" : {
"Hard_disk_almost_full" : true
},
...
}
This can be used to monitor, for example, the status of the camera hardware, connection, configuration and calibration. The camera calibration can be checked with the help of images specifically generated for evaluating the calibration.
For more details, please see:
Manage servers and camera configuration
For more details on the core data and endpoints, please see:
To edit your license:
- Navigate to your server in the Dashboard
- Go to the License tab
- Click on Edit License
- Modify your license
- Click save changes
WARNING! Changing some of the license options may result in changed billing. Please be careful when changing these options!
Retrieve or update (score) data during recordings
To retrieve or update score data a websocket connection can be used.
Authentication to the socket occurs via the same JWT tokens as the REST API.
A limited access token for a specific recording can be retrieved from:
GET /api/v3/auth/recordings/{recording_id}/token
Updates can be pushed by setting up a websocket connection to
wss://ws.sports.studioautomated.com/api/v3/websocket/events/recordings/{recording_id}?token={token}
You can subscribe to updates from a recording and specific variables by using:
wss://ws.sports.studioautomated.com/api/v3/websocket/events/recordings/{recording_id}?topic=score,time&token={token}
Using the topic query parameter it is possible to subscribe to a subset of the available topics.
Note: after a disconnect all subscriptions need to be renewed.
Topic = time
Messages will be encoded as follows:
Clock (time) updates are prefixed with a T and contain the fields:
- t: total time in seconds
- s: seconds
- m: minutes
- s_a, s_b: shot clock times in seconds [optional]
- p: period [optional]
Example
Tt:436545,s_a:123,s_b:0,p:2
To toggle auto increment of the clock the following commands can be used:
Tt:pause
Tt:resume
Topic = score
Score updates are prefixed with an S and contain the fields:
- a: score for team A
- b: score for team B
Example
Sa:3,b:1
Note that only the changed fields are sent, so if only score b is changed then the message will be Sb:1.
Topic = booleans
Toggle updates are prefixed with a B and can contain custom fields.
Toggles can be used to hide or show certain overlays dynamically. (See also Dynamic Overlays)
Example
Bx:0,y:1
Topic = data
Data updates are prefixed with a D and can contain full JSON content.
Data updates can be used to add dynamic content to overlays.
Example
D{"hello":"world"}
Note: Only available using the 'fast_data_view' license module
Topic = view
View updates are prefixed with a V and can contain the following fields:
- p: pan
- t: tilt
- f: focal (zoom)
- i: the view index
- T: the timestamp
Example
Vp:85.30209318169183,t:15.072756413648113,f:2124.8599678543183,i:0
Please note that multiple view messages within a short timeframe will be joined using the | symbol:
Vp:85.31065166918995|Vp:85.33455346897067|Vp:85.37084108262293|Vp:85.41711924147013|Vp:85.47148954801708|Vp:85.53237750445858|Vp:85.59835135040966|Vp:85.66823409580628|Vp:85.74128378395918|Vp:85.81704097548666|Vp:85.89510926052954|Vp:85.97506296354337|Vp:86.05648736201184|Vp:86.13916702309282|Vp:86.223066044064|Vp:86.3081592981031|Vp:86.39452517965891|Vp:86.48224928640666|Vp:86.57136023396733
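A small client-side parser for these message formats might look like this; it is a sketch based on the format descriptions above and assumes D payloads contain no `|`:

```python
import json

def parse_fastdata(message):
    """Parse FastData websocket messages by their topic prefix.

    Covers T (time), S (score), B (booleans), D (data) and V (view) messages
    as described above; joined view messages are split on `|`.
    """
    parsed = []
    for part in message.split("|"):
        prefix, body = part[0], part[1:]
        if prefix == "D":
            parsed.append({"topic": "data", "payload": json.loads(body)})
            continue
        # unknown prefixes raise a KeyError in this sketch
        topic = {"T": "time", "S": "score", "B": "booleans", "V": "view"}[prefix]
        fields = {}
        for pair in body.split(","):
            key, _, value = pair.partition(":")
            try:
                fields[key] = float(value) if "." in value else int(value)
            except ValueError:
                fields[key] = value  # e.g. Tt:pause / Tt:resume
        parsed.append({"topic": topic, "payload": fields})
    return parsed

parse_fastdata("Tt:436545,s_a:123,s_b:0,p:2")
parse_fastdata("Sa:3,b:1")
```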
Note: Only available using the 'fast_data_custom_url' license module
Passing a URL to the recording allows the FastData to be directed to a custom endpoint.
```
POST /scheduler/recordings
{
  "type_settings": {
    "fast_data": {
      "url": "wss://my-websocket-listener.com"
    }
    ...
  },
  ...
}
```
Manage files
File entities can be found and managed via the /api/v3/file/files endpoint. Using this endpoint requires the file content to be base64 encoded.
To upload and download files in their native (binary) encoding please use:
POST /api/v3/file/upload/{file_name}
This will create a file entity and return a file id.
To view a file directly use:
GET /api/v3/file/view/{file_id}
Virtual Coach Assistant
An automatically created video stream that enables you as a viewer to be your 'own director' of the match. Zoom in, freeze and playback to analyze key moments in the game. Streams can be shared directly.
Please note that this feature requires the vca module to be enabled in your license.
To maximize resolution in the desired field of view a calibration is required. This calibration will indicate the region of interest which will be recorded.
A VCA view can be scheduled alongside another view using mode=vca
Webhooks are triggered by events such as the starting of a recording.
A webhook is a URL that is registered to a specific event topic. This URL will then be called when such an event occurs.
The request made to the webhook will be a POST request with a JSON body containing the body of the event.
Please note: Due to caching it can take up to one hour for the hooks to become active.
PATCH /api/v3/core/partners/PARTNER_ID
{
"webhooks" : {
"job/started" : "https://www.test.com/my-webhook",
"job/stopped" : "https://www.test.com/my-other-webhook"
}
}
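A minimal receiver for such webhook calls might look like this; it is a sketch, and both the URL-path-to-topic mapping and the body fields used are assumptions:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_event(topic, body):
    """Dispatch an incoming event; the `id` and `message` fields used here
    are assumptions about the event body."""
    if topic == "job/started":
        return f"recording {body.get('id', '?')} started"
    if topic == "job/failed":
        return f"recording {body.get('id', '?')} failed: {body.get('message', '')}"
    return f"unhandled topic: {topic}"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        # assumes one registered webhook URL per topic, e.g. /job/started
        print(handle_event(self.path.lstrip("/"), body))
        self.send_response(200)
        self.end_headers()

# HTTPServer(("", 8080), WebhookHandler).serve_forever()
```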
Below is an overview of all events that can be subscribed to:
| Event | Description |
| --- | --- |
| job | |
| job/deleted | Base class for jobs, messages only, used to return messages by worker |
| job/failed | Job message containing an error message |
| job/finished | Output for finished updates |
| job/finishing | Base class for jobs, messages only, used to return messages by worker |
| job/retrying | Retry message, including outputs |
| job/scheduled | Base class for jobs, messages only, used to return messages by worker |
| job/shutdown | Server sent on message shutdown |
| job/started | Base class for jobs, messages only, used to return messages by worker |
| job/starting | Base class for jobs, messages only, used to return messages by worker |
| worker | |
| worker/core/multi_camera_details | Details for multiple cameras |
| worker/core/settings | Deprecated: The settings from settings.json |
| worker/health/camera | Server health details of all cameras |
| worker/health/server/audio | Audio devices currently configured on server |
| worker/health/server/camera_status | Server health status of all cameras |
| worker/health/server/connection | Server health status of its connection to the cloud |
| worker/health/server/hardware | Server health status of hardware |
| worker/health/server/internet | Server health status of internet connection |
| worker/health/server/schedule | Deprecated: Server health status of schedule(.json) |
| worker/health/server/settings | Deprecated: The settings from settings.json |
| worker/health/server/software | Server health status of software |
| worker/health/server/state | Server state |
| worker/health/shutdown | Server sent on message shutdown |