ImageServer - requests and responses
An ImageServer session uses a TCP socket connection to an XProtect Recording Server.
A Recording Server may be located on the same computer as the Management Server, but it can also be located on any other computer. The address of the Recording Server for a given device can be retrieved using ServerCommandService.GetConfiguration().
The default port number is 7563. Using your socket library, connect to this address using TCP. This connection can be kept open throughout the entire session with a device. You do not need to close the socket until you are done with that device.
All ImageServer requests consist of a small serialized XML document with a root element named <methodcall>. Likewise, many responses (but not all) consist of a serialized <methodresponse> XML document.
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<methodname>connect</methodname>
</methodcall>
<?xml version="1.0" encoding="UTF-8"?>
<methodresponse>
<methodname>connect</methodname>
</methodresponse>
- For most requests, there is a one-to-one relation with a response. The exception is requests for live video or audio.
- After requests for live video or audio, you can expect a series of responses. Responses will not end until you send the stop request. In this situation, you must always have a socket receive call active in a separate thread reserved for this purpose only.
The terminology from here on is to refer to any <methodcall>xyz</methodcall> as the "xyz request" and to any <methodresponse>xyz</methodresponse> as the "xyz response".
- Each request must always end with the four bytes CR-LF-CR-LF (decimal 13-10-13-10). If you forget these bytes, you will not get any response.
- Responses also end with these four bytes.
- Note: In the following, the XML is shown with line breaks to make it readable, but all requests should be sent without any line breaks in the middle of the XML.
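The framing rules above (single-line XML terminated by CR-LF-CR-LF) can be sketched in Python. The function names below are illustrative, not part of any Milestone SDK:

```python
import socket

CRLF2 = b"\r\n\r\n"  # decimal 13-10-13-10: terminates every request and response


def send_request(sock: socket.socket, xml: str) -> None:
    # The XML must be sent without line breaks, followed by CR-LF-CR-LF.
    sock.sendall(xml.encode("utf-8") + CRLF2)


def recv_xml_response(sock: socket.socket) -> bytes:
    # Read until the CR-LF-CR-LF terminator. This simple reader is only
    # suitable for XML responses; image responses carry binary payloads
    # and must be read using their Content-length header instead.
    buf = bytearray()
    while CRLF2 not in buf:
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("ImageServer closed the connection")
        buf.extend(chunk)
    return bytes(buf).partition(CRLF2)[0]
```

With a socket connected to the Recording Server on port 7563, one `send_request` followed by one `recv_xml_response` performs a request/response round trip for the XML-based requests.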
List of Requests and their Responses:
- connect – Simple
- connect – More options
- connectupdate
- goto
- next
- previous
- nextsequence
- previoussequence
- begin
- end
- alarms - Get recorded sequences between two points in time
- alarms - Get max N Recorded Sequences around a point in time
- live
- live – Change live adaptive streaming
- changelivecompressionrate
- stop
- ptz
- ptzcenter
- ptzrectangle
- preset
- output
- aviinformation
connect – Simple
The first thing to send after having successfully opened a TCP socket is always a connect request.
- Use the token you obtained from the SOAP login to authenticate.
- Obtain the device GUID using ServerCommandService.GetConfiguration.
- Fill in dummy elements like <username>dummy</username><password>dummy</password>. Do not omit these elements; that will cause connect to fail.
- Pass your token together with your device GUID in the element named <connectparam>, formatted as shown and in the specified order, where {guid} and {token} shall be replaced by actual values without the {} characters: <connectparam>id={guid}&connectiontoken={token}</connectparam>
Request - Simple example
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>1</requestid>
<methodname>connect</methodname>
<username>dummy</username>
<password>dummy</password>
<cameraid>[guid]</cameraid>
<connectparam>id=[guid]&connectiontoken=[token]</connectparam>
</methodcall>
Request - Simple example with StreamId
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>1</requestid>
<methodname>connect</methodname>
<username>dummy</username>
<password>dummy</password>
<cameraid>[guid]</cameraid>
<connectparam>id=[guid]&streamid=[guid]&connectiontoken=[token]</connectparam>
</methodcall>
The id is the guid of the device, and the streamid is the guid of a specific stream as configured in XProtect. The order of the parameters must be maintained.
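Building the simple connect request can be sketched as follows. The helper name is illustrative; the examples above show a literal & separator inside <connectparam>, so it is reproduced unescaped here:

```python
def build_connect_request(request_id, device_guid, token, stream_guid=""):
    # <connectparam> parameter order must be maintained:
    # id, then optional streamid, then connectiontoken.
    param = "id=" + device_guid
    if stream_guid:
        param += "&streamid=" + stream_guid
    param += "&connectiontoken=" + token
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        "<methodcall>"
        f"<requestid>{request_id}</requestid>"
        "<methodname>connect</methodname>"
        "<username>dummy</username>"  # dummy elements must be present
        "<password>dummy</password>"
        f"<cameraid>{device_guid}</cameraid>"
        f"<connectparam>{param}</connectparam>"
        "</methodcall>"
    )
```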
connect – More options
The <alwaysstdjpeg> element allows you to force transcoding to JPEG. It is recommended to specify no in order to receive the raw data in the format produced by the camera. If you set the <alwaysstdjpeg> element to yes, all live video data will be transcoded and returned in standard image/jpeg format.
- Settings can implicitly force transcoding of live video, even if <alwaysstdjpeg> is set to no. Examples are: <compressionrate> is set to a value < 100; <height> or <width> is set to a value different from the original stream's.
- A compressionrate of 75 reduces the data size considerably without blurring the image much.
If you have not forced transcoding, you will receive the raw camera data embedded in Milestone's "generic byte data" headers. Such data may be a lot more compact to transfer and will consume less CPU on the server, but currently we do not provide any library with which you can decode it.
The <clientcapabilities>
node specifies what the client supports.
- If <privacymask> is specified as yes, the privacy mask will be sent every time a live request or a first browse request is received, and you commit to applying the privacy mask in your code before using the image; see further below for the format of the mask data. The default is no, which means a dummy image is returned whenever a privacy mask is in effect for the current camera.
- If the privacy mask is changed while the client is viewing images, the updated privacy mask is sent as well.
The <privacymaskversion> element can be specified along with the <privacymask> element to state whether the client understands newer versions of the privacy mask format. If not specified, or if the client specifies '0', the server sends the privacy mask in the simple format, giving only the size of the grid followed by a '1' for each area not to be shown. If the client specifies '1', the server sends the privacy mask as a base64-encoded XML string, if supported by the server.
The <multipartdata>
element tells the server if the client supports the interpretation of multiple data packages in a single response. This will allow the server to send all the frames of an MPEG GOP as one response and also to optimize in other situations.
The <datarestriction>
element tells the server if the client implements data restriction.
- If the client sends a yes here, the server assumes that it can optimize and that it does not need to send dummy data when the end user is not authorized to receive data. The client must then be able to inform the end user correctly.
- If the server receives a no here (the default), dummy data like a 'padlock image' will be sent in situations where the end user has no access to data.
The <transcode>
node contains elements for configuring the transcoding of the video data. It does not explicitly force transcoding.
The <allframes>
element in the <transcode>
node applies to cameras with an MPEG stream. If set to no, which is the default, all sub-frames are skipped when transcoding, and only key-frames are transcoded. Typically, this will result in an effective frame rate of 1 per second. If set to yes, all sub-frames may be transcoded, and the effective transcoded frame rate may be higher than 1.
- If <allframes> is set to yes, every sub-frame is considered when transcoding. Frame rate options in the live request may filter data further, so you get every N'th sub-frame or key-frame.
The elements <width>, <height>, <keepaspectratio> and <allowupsizing> in the <transcode> node enable the application to specify the output image size measured in absolute pixels.
- The image aspect ratio will not be preserved unless the element keepaspectratio is included with the content yes. In that case width and height are considered maximum values, i.e. if the aspect ratio does not fit the given size, the actual image will be smaller on one side.
- If the original frame size is smaller than the requested width or height, the image will never be upsized unless the application explicitly includes the XML element <allowupsizing> with a content of yes.
- The values passed here are valid for all the session's live and goto requests and imply a JPEG quality of 75%, unless <compressionrate> is explicitly set with a live or goto request.
The <timerestriction> element is used when an export is about to begin. The purpose is to limit the audit logging to a single entry as long as the video frames being retrieved are within the specified start and end timestamps. When this element is not supplied, the server writes one line in the audit log for each minute of video being accessed.
Connect request – the full syntax
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>connect</methodname>
<username>[text]</username>
<password>[text]</password>
<cameraid>[guid or cameraname]</cameraid>
<alwaysstdjpeg>yes/no</alwaysstdjpeg> (Recommended)
<connectparam>[text]</connectparam>
<clientcapabilities> (optional)
<privacymask>yes/no</privacymask> (optional)
<privacymaskversion>0/1</privacymaskversion> (optional, def=0)
<multipartdata>yes/no</multipartdata> (optional)
<datarestriction>yes/no</datarestriction> (optional)
</clientcapabilities>
<transcode> (optional)
<allframes>yes/no</allframes> (optional, def=no)
<width>[number]</width> (optional, def=unaltered)
<height>[number]</height> (optional, def=unaltered)
<keepaspectratio>yes/no</keepaspectratio> (optional, def=no)
<allowupsizing>yes/no</allowupsizing> (optional, def=no)
</transcode>
<timerestriction> (only one log and control client behaviour – time forward only)
<starttime>[milliseconds since epoc]</starttime>
<endtime>[milliseconds since epoc]</endtime>
</timerestriction>
</methodcall>
Response
<?xml version="1.0" encoding="UTF-8"?>
<methodresponse>
<requestid>[actual]</requestid>
<methodname>connect</methodname>
<connected>yes/no</connected>
<errorreason>[text]</errorreason> (only if connected = no)
<alwaysstdjpeg>[actual]</alwaysstdjpeg>
<camera>
[camera info]
</camera>
<clientcapabilities>
<privacymask>[actual]</privacymask>
<privacymaskversion>[actual]</privacymaskversion>
<multipartdata>[actual]</multipartdata>
<datarestriction>[actual]</datarestriction>
</clientcapabilities>
<servercapabilities>
<connectupdate>[actual]</connectupdate>
<adaptivestreamingversion>[version]</adaptivestreamingversion>
</servercapabilities>
<transcode>
<allframes>[actual]</allframes>
<width>[actual]</width>
<height>[actual]</height>
<keepaspectratio>[actual]</keepaspectratio>
<allowupsizing>[actual]</allowupsizing>
</transcode>
</methodresponse>
connectupdate
Before every response, the server validates the token you passed with the initial connect request or the most recent connectupdate request.
- If it has expired, the server will disconnect the TCP session. To prevent this, your application must repeat the SOAP Login, and in due time pass the new token to each open Image Server session using this request.
- For each open session, the camera given in the content of the <connectparam> element must be the same camera which you stated in the <connectparam> element of the original connect request.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>connectupdate</methodname>
<connectparam>[text]</connectparam>
</methodcall>
Response
<?xml version="1.0" encoding="UTF-8"?>
<methodresponse>
<requestid>[number]</requestid>
<methodname>connectupdate</methodname>
<connected>yes/no</connected>
<errorreason>[text]</errorreason> (only if connected = no)
</methodresponse>
goto
Sets the database pointer for the currently connected camera to a specific timestamp and returns a recorded image close to the timestamp.
- If no recorded image is found very close, we go back in time until one is found. Compression rates can be requested as documented in the changelivecompressionrate request.
- Setting <keyframesonly> to no makes it possible to receive a sub-frame within an MPEG GOP, thereby getting images more accurate in time at the cost of more CPU being used on the server.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>goto</methodname>
<time>[milliseconds since UNIX epoch]</time>
<compressionrate>[number]</compressionrate> (optional)
<keyframesonly>yes/no</keyframesonly> (optional)
</methodcall>
Response - Live and browse without multipart
ImageResponse
RequestId: [number]
PrivacyMask: none/grid;3x3;110;110;000/[base64 encoded xml] (first time and when mask changes)
Restriction: none/time
Prev: [milliseconds since UNIX epoch]
Current: [milliseconds since UNIX epoch]
Next: [milliseconds since UNIX epoch]
SequenceNumber: [number] (if requested)
MotionLevel: [number] (if requested)
Content-length: [number]
Content-type: image/jpeg
[binary data]
You should note that this response is fundamentally different from the other ImageServer responses, in that it is not formatted as an XML document. Instead, it is formatted as an HTTP response, with the initial HTTP identification line containing the HTTP status code replaced by an initial line with just the identifier string ImageResponse.
The length of the [binary data] is equal to the Content-length parameter.
- If the Content-length and Current timestamp parameters are both 0, then no image could be retrieved from the database, probably because the database is empty.
- If Content-length is 0 and the Current timestamp is greater than 0, then no binary image data is attached.
The value returned in the Current header line indicates the actual time of the image. This may be different from the time you requested an image from.
- If the values returned in the Prev and Next headers are not 0, this indicates that there is a previous and/or a next recorded image in this recorded sequence.
- If Prev is 0, you are at the beginning of a sequence, and if Next is 0, you are at the end of a sequence.
- If a client specified <multipartdata> in the <clientcapabilities> of the connect request, it will receive in the response multiple data packages similar to a multipart MIME response.
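Splitting an ImageResponse into its header fields and binary payload can be sketched as below. This sketch assumes the HTTP convention of an empty line (CR-LF-CR-LF) separating the headers from the binary data; the function name is illustrative:

```python
def parse_image_response(raw: bytes):
    # Split one ImageResponse into (headers dict, binary payload).
    # `raw` starts with the literal line "ImageResponse", followed by
    # "Name: value" header lines, an empty line, then the binary data.
    head, _, rest = raw.partition(b"\r\n\r\n")
    lines = head.decode("ascii").split("\r\n")
    if lines[0] != "ImageResponse":
        raise ValueError("not an ImageResponse")
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    # Content-length gives the exact size of the binary payload;
    # it is 0 when no image data is attached.
    length = int(headers.get("Content-length", "0"))
    return headers, rest[:length]
```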
Response - Browse with multipart
ImageResponse
RequestId: [number]
PrivacyMask: none/grid;3x3;110;110;000/[base64 encoded xml] (first time and when mask changes)
Restriction: none/time
Prev: [milliseconds since UNIX epoch]
Current: [milliseconds since UNIX epoch]
Next: [milliseconds since UNIX epoch]
SequenceNumber: [number] (if requested)
MotionLevel: [number] (if requested)
Content-type: multipart/related; boundary=frontier
--frontier
Content-length: [number]
Content-type: application/x-genericbytedata-octet-stream
[binary data]
--frontier
Content-length: [number]
Content-type: application/x-genericbytedata-octet-stream
[binary data]
.
.
.
--frontier
Content-length: [number]
Content-type: application/x-genericbytedata-octet-stream
[binary data]
--frontier--
Privacy mask
The PrivacyMask will be returned the first time or whenever the mask changes. The mask can be one of the following three values:
- none if the mask is empty, disabled or removed.
- grid followed by the dimensions of the mask and then a semicolon-delimited series of '0' and '1' characters, where '1' specifies areas not to show, e.g. grid;3x3;110;110;000
- base64-encoded XML, if supported by both client and server. See the specification of the format below.
The privacy mask XML conforms to privacy_protection_mask_schema.xsd
. The following shows an example of the content of a privacy mask.
<?xml version="1.0" encoding="utf-8"?>
<mask>
<methods>
<solid value="1" removable="false">
<color>
<red>255</red>
<green>0</green>
<blue>0</blue>
</color>
</solid>
</methods>
<grid size="8x8">
<row index="1" values="0100000000000000" />
<row index="3" values="0000000000010100" />
</grid>
</mask>
The <methods>
element contains a number of method specific child elements such as <solid>
where each method can have zero or more method specific parameters. A method always has a unique value attribute, which is the value that is written in the values attribute of a row in the grid.
The removable attribute indicates whether the grid cells for the method may be lifted by the user.
The <grid>
element has the size attribute which specifies the number of rows and columns in the mask and thus the number of areas/cells.
The <row>
element has two attributes, index and values.
- The index attribute specifies the zero-based index of the row (top to bottom) in the grid and must be between 0 and the maximum grid size minus 1.
- The values attribute is a string containing the cell values (left to right) as byte values, where each byte is written as two characters.
The byte value indicates which method to use for the area/cell, where the method is found by looking up the method with the specified numeric value, e.g. the method /mask/methods/*[@value='numeric-value'].
A mask with no rows is valid and is considered an empty mask.
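The simple grid format described earlier (e.g. grid;3x3;110;110;000) can be parsed as sketched below. The documented example masks are square, so whether the first dimension is rows or columns cannot be confirmed from the text; this sketch assumes rows x columns:

```python
def parse_grid_mask(mask: str):
    # Parse the simple grid privacy-mask format, e.g. "grid;3x3;110;110;000".
    # Returns a list of rows of booleans; True marks a cell not to be shown.
    parts = mask.split(";")
    if parts[0] != "grid":
        raise ValueError("not a grid mask")
    rows, cols = (int(n) for n in parts[1].split("x"))  # assumption: rows x columns
    grid = [[c == "1" for c in row] for row in parts[2:]]
    if len(grid) != rows or any(len(r) != cols for r in grid):
        raise ValueError("grid dimensions do not match the mask data")
    return grid
```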
next
Moves the database pointer for the currently connected camera to the next image and returns the image. The element <time> is optional. If not included, the current value is used.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>next</methodname>
<time>[milliseconds since UNIX epoch]</time> (optional)
<compressionrate>[number]</compressionrate> (optional)
</methodcall>
Response
See the goto response.
previous
Moves the database pointer for the currently connected camera to the previous image and returns the image. The element <time> is optional. If not included, the current value is used.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>previous</methodname>
<time>[milliseconds since UNIX epoch]</time> (optional)
<compressionrate>[number]</compressionrate> (optional)
</methodcall>
Response
See the goto response.
nextsequence
Moves the database pointer for the currently connected camera to the first image in the next sequence and returns the image. The element <time> is optional. If not included, the current value is used.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>nextsequence</methodname>
<time>[milliseconds since UNIX epoch]</time> (optional)
<compressionrate>[number]</compressionrate> (optional)
</methodcall>
Response
See the goto response.
previoussequence
Moves the database pointer for the currently connected camera to the first image in the previous sequence and returns the image. The element <time> is optional. If not included, the current value is used.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>previoussequence</methodname>
<time>[milliseconds since UNIX epoch]</time> (optional)
<compressionrate>[number]</compressionrate> (optional)
</methodcall>
Response
See the goto response.
begin
Moves the database pointer for the currently connected camera to the first image in the database and returns the image. The response includes timestamps for the previous, the returned and the next image in the database and the image data in binary format.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>begin</methodname>
<compressionrate>[number]</compressionrate> (optional)
</methodcall>
Response
See the goto response.
end
Moves the database pointer for the currently connected camera to the last image in the database and returns the image. The response includes timestamps for the previous, the returned and the next image in the database and the image data in binary format.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>end</methodname>
<compressionrate>[number]</compressionrate> (optional)
</methodcall>
Response
See the goto response.
alarms – Get recorded sequences between two points in time
This requests information about all the current camera's recorded sequences within the specified period of time.
- The response is an XML document including start, alarm and end timestamps for each recorded sequence.
- Within this request only, recorded sequences are named "alarms".
- The items received here are recorded sequences; they are not the alarms stored with the much newer Event Server.
- All sequences that have any overlap with the specified time interval are returned.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>alarms</methodname>
<starttime>[milliseconds since UNIX epoch]</starttime>
<stoptime>[milliseconds since UNIX epoch]</stoptime>
</methodcall>
Response
<?xml version="1.0" encoding="UTF-8"?>
<methodresponse>
<requestid>[number]</requestid>
<methodname>alarms</methodname>
<dblimit startTime="[milliseconds since UNIX epoch]" endTime="[milliseconds since UNIX epoch]"/>
<alarms>
<alarm startTime="ms" alarmTime="ms" endTime="ms" numImages="[number]"/>
<alarm startTime="ms" alarmTime="ms" endTime="ms" numImages="[number]"/>
...
</alarms>
</methodresponse>
- If you use start- and stop time, the XML document returned can potentially be very large. You may consider using the same request differently as shown below.
The <dblimit> element contains the timestamps of the first and last images in the database.
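An alarms response can be parsed with the standard library as sketched here; the function name is illustrative:

```python
import xml.etree.ElementTree as ET


def parse_alarms_response(xml_text: str):
    # Extract the database limits and the recorded sequences from an
    # alarms response. All timestamps are milliseconds since the UNIX
    # epoch, exactly as sent by the server.
    root = ET.fromstring(xml_text)
    dblimit = root.find("dblimit")
    limits = (int(dblimit.get("startTime")), int(dblimit.get("endTime")))
    sequences = [
        {
            "start": int(a.get("startTime")),
            "alarm": int(a.get("alarmTime")),
            "end": int(a.get("endTime")),
            "images": int(a.get("numImages")),
        }
        for a in root.find("alarms")
    ]
    return limits, sequences
```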
alarms – Get max N recorded sequences around a point in time
This requests information about the current camera's recorded sequences around the specified center time.
- The response is an XML document including start, alarm and end timestamps for each recorded sequence.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>alarms</methodname>
<centertime>[milliseconds since UNIX epoch]</centertime>
<numalarms>[max number of alarms to return before and after center time]</numalarms>
<timespan>[max time before and after center to look for alarms]</timespan>
</methodcall>
Response
Same as the response listed in the first example of using alarms. With this variant of the request, you can compute the maximum possible size of the response.
live
Starts a live feed from the currently connected camera.
- The ImageServer streams live images in binary format until a stop command is received.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>live</methodname>
<compressionrate>[number]</compressionrate> (optional)
<sendinitialimage>yes/no</sendinitialimage> (optional)
<attributes [text]/> (optional)
<adaptivestreaming> (optional, if stated one option must be supplied)
<resolution> (option #1)
<widthhint>[number]</widthhint> (range 1 - Int32.Max)
<heighthint>[number]</heighthint> (range 1 - Int32.Max)
</resolution>
<maxresolution/> (option #2)
<disabled/> (option #3)
</adaptivestreaming>
</methodcall>
- The optional <sendinitialimage> node instructs the server whether it shall send an initial image before the actual live stream data is started. The initial image is a relatively recent JPEG image produced by the recorder at regular intervals for cameras with active connections.
- The initial image response is sent if, and only if, the node value is yes or the node is not present at all. A value of no disables sending of the initial image.
- The optional <attributes> node contains options on the form option="value" for the live session, separated by a space character. Supported attributes are:
Option | Value | Comments |
frameraterelative | Integer N | Send every N'th frame only |
framerateabsolute | Integer N | Send as close to N fps as possible (future enhancement) |
framerate | full/medium/low | Reduce frame rate by dropping frames |
motiononly | yes/no | Send only image sequence where motion is detected |
Example: <attributes framerate="full" motiononly="true" />
.
framerateabsolute is a future enhancement. It will overrule framerate when implemented.
- Setting framerateabsolute to 30 will request a stream as close to 30 frames per second as possible.
- Depending on the actual frame rate of the live camera stream, it may or may not be possible to generate such a regular stream.
- The actual frame rate will be as close to the requested one as can be achieved by selecting every N'th frame from the original stream, still providing frames evenly spaced in time, or at least very close to it.
frameraterelative overrules framerate.
- Setting frameraterelative to 5 will cause only every 5th frame to be sent to the client.
- If <allframes> was set to yes in the connect request, this will be every 5th frame of any kind; if not, it will be every 5th key-frame.
- Depending on how the individual cameras are configured, key-frames may be quite far apart, e.g. 1 second or more.
The three values for the framerate option aim to send all images at the full level (no filter), around one image per second at the low level, and a rate in between those two at the medium level. The frame reduction scheme is implemented by simply dropping frames coming from the monitor pipe, not taking time into account.
The scheme for dropping frames is listed in the following table, where 0 = drop all frames, 1 = never drop frames, x > 1 = send 1 out of x frames.
Stream type/Value | Low | Medium | Full |
JPEG | 20 | 4 | 1 |
MPEG - IFrame | 1 | 1 | 1 |
MPEG - PFrame | 0 | 0 | 1 |
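The drop table above amounts to a divisor lookup per stream type and level; a minimal sketch, where keep_frame is an illustrative helper:

```python
# Divisors from the table: 0 = drop all frames, 1 = never drop,
# x > 1 = send 1 out of x frames.
DROP_TABLE = {
    "jpeg":        {"low": 20, "medium": 4, "full": 1},
    "mpeg-iframe": {"low": 1,  "medium": 1, "full": 1},
    "mpeg-pframe": {"low": 0,  "medium": 0, "full": 1},
}


def keep_frame(stream_type: str, level: str, frame_index: int) -> bool:
    # True if frame number `frame_index` (0-based, counted per stream
    # type) survives the drop scheme for the given framerate level.
    divisor = DROP_TABLE[stream_type][level]
    if divisor == 0:
        return False  # e.g. MPEG P-frames at low/medium: all dropped
    return frame_index % divisor == 0
```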
- The optional <adaptivestreaming> element instructs the server to find the best suited stream, provided multiple streams are defined for the camera in question, based on the following options:
Option | Value | Comments |
resolution | Integer N Integer N |
Use the optimal stream, which doesn't require upscaling, according to width and height |
maxresolution | none | Use the highest resolution stream (width*height) |
disabled | none | Don't use Adaptive streaming |
Responses
The result of sending the live request is a series of responses of two different types, GoTo or LivePackage. Each response ends with a double CR-LF, which means four bytes with decimal values 13, 10, 13, 10. The GoTo response is described in the goto section.
The LivePackage responses arrive at regular intervals, so you should keep your socket receive active at all times.
- If you do not get any responses at all to this request, but you got responses to other requests, you have forgotten to append the double CR-LF to the live request.
Each LivePackage response contains status information for the currently connected camera. This includes information about camera-to-server connection problems, database errors, recording status and motion status. The package has the following format.
<?xml version="1.0" encoding="UTF-8"?>
<livepackage>
<status>
<statustime>[milliseconds since UNIX epoch]</statustime>
<statusitem id="[Number]" value="[Text]" description="[Text]"/>
...
</status>
</livepackage>
- Each status item contains information about a specific piece of status information identified with a unique id.
- The ImageServer sends a new package shortly after the live command has been received and whenever the value of at least one of the status items has changed. In addition, the ImageServer guarantees that the time between these packages never exceeds 5 seconds, even if no status changes have occurred. The client may use this as a kind of keep-alive signal from the ImageServer.
- If no LivePackage was received within 5 seconds, you may assume that the connection to the ImageServer has been lost.
The status time given for each information package is the (server) time at which the values of the status items have been read out. The time format for the status time is similar to all other time stamps from the ImageServer. The currently supported status items are listed in the table below. The description part of each status item is optional. A live information package will always contain exactly one status time and may contain from zero to all of the status items.
ID | Description | Value | Comments |
1 | Camera live feed started | 0, 1 | 1 if live feed from the camera is started. 0 if not. |
2 | Live feed motion | 0, 1 | 1 if the live feed contains motion. 0 if not. |
3 | Live feed recording | 0, 1 | 1 if the live feed is being recorded. 0 if not. |
4 | Live feed event notification | 0, 1 | 1 if there was an event notification during the live feed. 0 if not. |
5 | Camera connection lost | 0, 1 | 1 if the connection between the camera and the server is lost. 0 if not. |
6 | Database fail | 0, 1 | 1 if accessing the database failed. 0 if not. |
7 | Server is running out of disk space | 0, 1 | 1 if the server is running out of disk space. 0 if not. |
100 | Client live feed stopped | 0, 1 | 1 if the live feed to the client is stopped. 0 when the live feed is started. |
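The 5-second guarantee described above can be used to detect a lost connection by reading with a slightly larger socket timeout; a sketch with illustrative names:

```python
import socket

CRLF2 = b"\r\n\r\n"


def wait_for_livepackage(sock: socket.socket, timeout_s: float = 6.0) -> bytes:
    # The server guarantees a LivePackage at least every 5 seconds, so a
    # slightly larger receive timeout is a reasonable lost-connection signal.
    sock.settimeout(timeout_s)
    buf = bytearray()
    try:
        while CRLF2 not in buf:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("ImageServer closed the connection")
            buf.extend(chunk)
    except socket.timeout:
        raise ConnectionError(
            "no response within %.1f s; assume the connection is lost" % timeout_s)
    return bytes(buf).partition(CRLF2)[0]
```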
live – Change live adaptive streaming
Updates the Adaptive streaming options for the currently connected camera.
- The ImageServer reevaluates the best stream according to the supplied options and switches stream if possible.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>live</methodname>
<adaptivestreaming> (optional, if stated one option must be supplied)
<resolution> (option #1)
<widthhint>[number]</widthhint> (range 1 - Int32.Max)
<heighthint>[number]</heighthint> (range 1 - Int32.Max)
</resolution>
<maxresolution/> (option #2)
<disabled/> (option #3)
</adaptivestreaming>
</methodcall>
- Instructs the server to find the best suited stream, provided multiple streams are defined for the camera in question, based on the following options:
Option | Value | Comments |
resolution | Integer N Integer N |
Use the optimal stream, which doesn't require upscaling, according to width and height |
maxresolution | none | Use the highest resolution stream (width*height) |
disabled | none | Don't use Adaptive streaming |
Responses
None.
changelivecompressionrate
Immediately changes the compression rate for the live stream.
- Has an effect only if sent after a live request and before any matching stop request.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>changelivecompressionrate</methodname>
<compressionrate>[number]</compressionrate>
</methodcall>
Response
<?xml version="1.0" encoding="UTF-8"?>
<methodresponse>
<requestid>[number]</requestid>
<methodname>changelivecompressionrate</methodname>
</methodresponse>
- The number selected for compression rate determines the quality of the video data sent.
- A value of 100 gives you the original image data in full quality.
- Values under 100 give you less network traffic but result in a lower image quality, keeping the image width and height.
- Values other than 100 force data to be transcoded to JPEG.
This interpretation is valid for all commands in this API which have an element named <compressionrate>. Special values are 101-104:
- 100: Original resolution, full quality, known as full in the Smart Client
- 101: 4CIF (width), 25% quality, known as super high in the Smart Client
- 102: CIF (width), 25% quality, known as high in the Smart Client
- 103: 1/3 CIF (width), 25% quality, known as medium in the Smart Client
- 104: 1/2 CIF (width), 20% quality, known as low in the Smart Client
These values are still allowed, but if you use values 1-100 together with the newer features available in the <width> and <height> elements of the <transcode> node in the connect request and the frameraterelative attribute in the live request, you will have much more fine-grained control over the data you receive.
stop
Stops live feed streaming.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>stop</methodname>
</methodcall>
Response
<?xml version="1.0" encoding="UTF-8"?>
<methodresponse>
<requestid>[number]</requestid>
<methodname>stop</methodname>
</methodresponse>
ptz
A "PTZ (Pan, Tilt, Zoom)" command is executed on the camera you are connected to.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>ptz</methodname>
<ptzcommand>[ptzcommand]</ptzcommand>
</methodcall>
Response
<?xml version="1.0" encoding="UTF-8"?>
<methodresponse>
<requestid>[number]</requestid>
<methodname>ptz</methodname>
</methodresponse>
The [ptzcommand] is one of these text strings:
up
down
left
right
upleft
upright
downleft
downright
zoomin
zoomout
home
The camera will perform one step in the given direction, or move to home position for that command.
ptzcenter
A "PTZ Move" command is executed on the camera you are connected to.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>ptzcenter</methodname>
<refwidth>[number]</refwidth>
<refheight>[number]</refheight>
<centerx>[number]</centerx>
<centery>[number]</centery>
<zoom>[number]</zoom>
</methodcall>
- The content of the <refwidth> and <refheight> elements specifies a logical reference area that, together with the center coordinate, defines the point that the camera should center on.
- The zoom parameter specifies the absolute zoom level from 0 to 999. Setting the zoom parameter to -1 indicates that the current zoom level on the camera shall be maintained.
Response
<?xml version="1.0" encoding="UTF-8"?>
<methodresponse>
<requestid>[number]</requestid>
<methodname>ptzcenter</methodname>
</methodresponse>
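For example, centering on a point the user clicked in a displayed image can use the displayed image size as the logical reference area; build_ptzcenter_request is an illustrative helper, not part of any SDK:

```python
def build_ptzcenter_request(request_id, click_x, click_y,
                            view_width, view_height, zoom=-1):
    # The displayed image size serves as the logical reference area, and
    # the click position is the point to center on. zoom=-1 keeps the
    # current zoom level; 0-999 sets an absolute level.
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        "<methodcall>"
        f"<requestid>{request_id}</requestid>"
        "<methodname>ptzcenter</methodname>"
        f"<refwidth>{view_width}</refwidth>"
        f"<refheight>{view_height}</refheight>"
        f"<centerx>{click_x}</centerx>"
        f"<centery>{click_y}</centery>"
        f"<zoom>{zoom}</zoom>"
        "</methodcall>"
    )
```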
ptzrectangle
A "PTZ Rectangle" command is executed on the camera you are connected to.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>ptzrectangle</methodname>
<refwidth>[number]</refwidth>
<refheight>[number]</refheight>
<left>[number]</left>
<top>[number]</top>
<right>[number]</right>
<bottom>[number]</bottom>
</methodcall>
- The content of the <refwidth> and <refheight> elements specifies a logical reference area that, together with the rectangle coordinates, defines the rectangle that the camera should move to.
Response
<?xml version="1.0" encoding="UTF-8"?>
<methodresponse>
<requestid>[number]</requestid>
<methodname>ptzrectangle</methodname>
</methodresponse>
preset
A "PTZ Move to Preset" command is executed on the camera you are connected to.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>preset</methodname>
<presetname>[presetid]</presetname>
</methodcall>
Response
<?xml version="1.0" encoding="UTF-8"?>
<methodresponse>
<requestid>[number]</requestid>
<methodname>preset</methodname>
</methodresponse>
output
Activates a named output on the camera you are connected to.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>output</methodname>
<outputname>[outputid]</outputname>
</methodcall>
Response
<?xml version="1.0" encoding="UTF-8"?>
<methodresponse>
<requestid>[number]</requestid>
<methodname>output</methodname>
</methodresponse>
aviinformation
Returns AVI export related information about the recorded images within a specified time span for the currently connected camera.
- The response is an XML document including image dimensions, color depth and max frames per second.
Request
<?xml version="1.0" encoding="UTF-8"?>
<methodcall>
<requestid>[number]</requestid>
<methodname>aviinformation</methodname>
<start>[milliseconds since UNIX epoch]</start>
<stop>[milliseconds since UNIX epoch]</stop>
</methodcall>
Response
<?xml version="1.0" encoding="UTF-8"?>
<methodresponse>
<requestid>[number]</requestid>
<methodname>aviinformation</methodname>
<width>[number]</width>
<height>[number]</height>
<depth>[number]</depth>
<fps>[float]</fps>
</methodresponse>