REST Service
Use the REST Service origin to create a REST-based microservice. In a microservice pipeline, the REST Service origin works with one or more microservice destinations that specify the HTTP response code to pass back to the originating REST API client.
The REST Service origin generates a batch for each request that it receives. The REST Service origin can use multiple threads to enable parallel processing of requests from multiple clients. Before you configure the origin, perform additional steps to configure the clients.
When you configure the REST Service origin, you define the listening port, application ID, and the maximum message size. You can configure the maximum number of concurrent requests to determine how many threads to use. You can also configure SSL/TLS properties, including default transport protocols and cipher suites. And, you can require mutual SSL/TLS authentication. You also configure the format of generated responses.
The REST Service origin can read requests containing messages with no compression or with the Gzip or Snappy compression format.
Prerequisites
Before you run a pipeline with the REST Service origin, complete the following prerequisites to configure the REST API clients.
Include the Application ID in Requests
Configure the REST API clients to include the REST Service application ID in each request.
When you configure the REST Service origin, you define an application ID that is used to pass requests to the origin. All messages sent to the origin must include the application ID.
Include the application ID for each client request in one of the following ways:
- In request headers - Add the following information to the HTTP request header for all requests that you want the origin to process:
  X-SDC-APPLICATION-ID: <application_ID>
- In a query parameter in the URL - If you cannot configure the client request headers - for example, if the requests are generated by another system - then configure each REST API client to send data to a URL that includes the application ID in a query parameter.
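As a sketch, a client can attach the application ID in the request header like this. The host, port, path, payload, and application ID below are placeholder values, not defaults from this documentation:

```python
from urllib.request import Request

# Hypothetical values: replace with your Data Collector host, listening
# port, and the application ID configured on the REST Service origin.
SDC_URL = "https://localhost:8000/rest/v1/user"
APP_ID = "myAppId"

payload = b'{"ID": "103", "NAME": "Jack"}'

# Attach the application ID in the X-SDC-APPLICATION-ID request header
# so the origin accepts and processes the request.
request = Request(
    SDC_URL,
    data=payload,
    headers={
        "Content-Type": "application/json",
        "X-SDC-APPLICATION-ID": APP_ID,
    },
    method="POST",
)
```

The request would then be sent with `urllib.request.urlopen(request)` once the pipeline is running and listening on that port.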
Send Data to the Listening Port
Configure the REST API clients to send data to the REST Service listening port.
When you configure the origin, you define a listening port number where the origin listens for data. To pass data to the pipeline, configure each REST API client to send data to a URL that includes the listening port number.
<http | https>://<sdc_hostname>:<listening_port>/<path>?<queryString>&sdcApplicationId=<application_ID>
- <http | https> - Use https for secure HTTP connections.
- <sdc_hostname> - The Data Collector host name.
- <listening_port> - The port number where the origin listens for data.
- <path> - Optional. The path of the URL.
- <queryString> - Optional. The parameters of the URL that come after the path.
- <application_ID> - Optional. The application ID, used when it is not passed in the request header.

For example: https://localhost:8000/ or https://localhost:8000/rest/v1/user.
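A minimal sketch of assembling such a URL with the application ID as a query parameter, using placeholder host, port, path, and ID values:

```python
from urllib.parse import urlencode, urlunsplit

# Hypothetical values: substitute your Data Collector host, listening
# port, path, and the application ID configured on the origin.
host = "localhost"
port = 8000
path = "/rest/v1/user"
params = {"max": "10", "sdcApplicationId": "myAppId"}

# Assemble <scheme>://<host>:<port><path>?<queryString>&sdcApplicationId=<ID>
url = urlunsplit(("https", f"{host}:{port}", path, urlencode(params), ""))
```

Here `urlencode` takes care of escaping, so parameter values may safely contain reserved characters.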
Multithreaded Processing
The REST Service origin performs parallel processing and enables the creation of a multithreaded pipeline.
The REST Service origin uses multiple concurrent threads based on the Max Concurrent Requests property. When you start the pipeline, the origin creates the number of threads specified in the Max Concurrent Requests property. Each thread generates a batch from an incoming request and passes the batch to an available pipeline runner.
A pipeline runner is a sourceless pipeline instance - an instance of the pipeline that includes all of the processors and destinations in the pipeline and represents all pipeline processing after the origin. Each pipeline runner processes one batch at a time, just like a pipeline that runs on a single thread. When the flow of data slows, the pipeline runners wait idly until they are needed, generating an empty batch at regular intervals. You can configure the Runner Idle Time pipeline property to specify the interval or to opt out of empty batch generation.
Multithreaded pipelines preserve the order of records within each batch, just like a single-threaded pipeline. But since batches are processed by different pipeline runners, the order in which batches are written to destinations is not guaranteed.
For example, say you set the Max Concurrent Requests property to 5. When you start the pipeline, the origin creates five threads, and Data Collector creates a matching number of pipeline runners. Upon receiving data, the origin passes a batch to each of the pipeline runners for processing. The origin includes in each batch only the REST API requests that contain the specified application ID.
Each pipeline runner performs the processing associated with the rest of the pipeline. After a batch is written to pipeline destinations, the pipeline runner becomes available for another batch of data. Each batch is processed and written as quickly as possible, independent of the batches processed by other pipeline runners, so batches may be written to destinations in a different order than they were read.
At any given moment, the five pipeline runners can each process a batch, so this multithreaded pipeline processes up to five batches at a time. When incoming data slows, the pipeline runners sit idle, available for use as soon as the data flow increases.
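The threading model described above can be sketched roughly as follows. This is an illustrative analogy, not Data Collector internals: the runner function and sample batches are invented stand-ins:

```python
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT_REQUESTS = 5  # mirrors the Max Concurrent Requests property

def pipeline_runner(batch):
    # Stand-in for a sourceless pipeline instance: each runner processes
    # one batch at a time, independently of the other runners.
    return [record.upper() for record in batch]

# Each incoming request becomes one batch; with five workers, up to five
# batches are processed at a time, and the completion order across
# runners is not guaranteed.
incoming_requests = [["a", "b"], ["c"], ["d", "e", "f"]]
with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_REQUESTS) as pool:
    results = list(pool.map(pipeline_runner, incoming_requests))
```

Note that record order is preserved within each batch, matching the guarantee described above for multithreaded pipelines.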
For more information about multithreaded pipelines, see Multithreaded Pipeline Overview.
Generated Response
In a microservice pipeline, the REST Service origin can send a response back to the originating REST API client. The response can include the following records:
- Records received from microservice destinations
- Pipeline error records received when the pipeline is configured to use the Send Response to Origin pipeline error handling
The origin generates a single response for each batch of records received. The origin can generate the response in JSON or XML format. The response can include an envelope or only raw data.
Responses with an Envelope
| Response Key or Element | Value |
|---|---|
| httpStatusCode | The status code associated with the records in the response. If the records in the generated response share the same status code, the code is written to the httpStatusCode key or element. If the records have different status codes, the httpStatusCode is set to 207 for multiple statuses. |
| data | A list of records passed to the origin by the microservice destinations used in the pipeline. |
| error | A list of pipeline error records passed to the origin by the Send Response to Origin pipeline error handling. |
| errorMessage | The error message associated with the first error record in the response. Used only when the response includes error records. |
For example, a JSON response with an envelope uses the following structure:

{
  "httpStatusCode": <status code>,
  "data": [<list of success records>],
  "error": [<list of error records>],
  "errorMessage": <error message, if any>
}

Raw Responses
When configured to send a raw response, the origin generates a response that contains either the list of records passed from the microservice destinations or the list of error records passed by the Send Response to Origin pipeline error handling. If the origin receives data records from destinations and error records from the pipeline, then the origin includes only the error records in the response. If the origin receives no data records from destinations and no error records from the pipeline, then the origin generates an empty response.
Sample Responses
- Single record
  - The origin receives a single record from the Send Response to Origin destination. The destination is configured to use the 200 status code.

    For a response with an envelope, the origin sends the following response:

    { "httpStatusCode":200, "data":[{"ID":"103","NAME":"Jack","AGE":"37","STATE":"MD"}], "error":[], "errorMessage":null }

    For a raw response, the origin sends the following response:

    {"ID":"103","NAME":"Jack","AGE":"37","STATE":"MD"}

- Multiple data and error records
  - The origin receives several records, both data and error. Because each record has a different status code, the response uses status code 207 for multiple statuses. The errorMessage key includes the error associated with the first error record, which has a missing ID. The origin is configured to present multiple records as multiple JSON objects.

    For a response with an envelope, the origin sends the following response:

    { "httpStatusCode":207, "data":[{"ID":"248","NAME":"Pina","AGE":"24","STATE":"RI"}], "error":[{"NAME":"Liz","AGE":"37","STATE":"DE"}, {"ID":"302","NAME":"Roco","AGE":"","STATE":"CA"}], "errorMessage":"COMMON_0001 - Stage precondition: CONTAINER_0051 - Unsatisfied precondition(s) '${record:exists('/ID')}'" }

    For a raw response, the origin sends the following response:

    {"NAME":"Liz","AGE":"37","STATE":"DE"}, {"ID":"302","NAME":"Roco","AGE":"","STATE":"CA"}
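On the client side, an envelope response can be handled by parsing the JSON and branching on the status code. A sketch based on the multiple-record sample above (errorMessage abbreviated):

```python
import json

# The body a client might receive for the multiple-record sample above;
# the errorMessage value is abbreviated here.
response_body = '''{
  "httpStatusCode": 207,
  "data": [{"ID": "248", "NAME": "Pina", "AGE": "24", "STATE": "RI"}],
  "error": [{"NAME": "Liz", "AGE": "37", "STATE": "DE"},
            {"ID": "302", "NAME": "Roco", "AGE": "", "STATE": "CA"}],
  "errorMessage": "COMMON_0001 - Stage precondition"
}'''

envelope = json.loads(response_body)

# 207 signals mixed statuses: inspect both the data and error lists.
if envelope["httpStatusCode"] == 207:
    ok_records = envelope["data"]
    failed_records = envelope["error"]
```

With a raw response, by contrast, the client receives only the record list itself, so it cannot distinguish data records from error records by key.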
Record Header Attributes
The REST Service origin creates record header attributes that include information about the requested URL.
You can use the record:attribute or record:attributeOrDefault functions to access the information in the attributes. For more information about working with record header attributes, see Working with Header Attributes.
- method - The method for the request, such as GET, POST, or DELETE.
- path - The path of the URL.
- queryString - The parameters of the URL that come after the path.
- remoteHost - The name of the client or proxy that made the request.
The REST Service origin also includes HTTP request header fields, such as Host or Content-Type, in records as record header attributes. The attribute names match the original HTTP request header field names.
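For example, a Stream Selector condition that routes only POST requests might use the method attribute with the record:attribute function. This is an illustrative sketch, not an expression from this document:

```
${record:attribute('method') == 'POST'}
```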
Data Formats
The REST Service origin processes data differently based on the data format that you select. The origin processes the following types of data:
- Avro
  - Generates a record for every Avro record. The origin includes the Avro schema in the avroSchema record header attribute. It also includes a precision and a scale field attribute for each Decimal field.
- Binary
  - Generates a record with a single byte array field at the root of the record.
- Datagram
  - Generates a record for every message. The origin can process collectd messages, NetFlow 5 and NetFlow 9 messages, and syslog messages.
- Delimited
  - Generates a record for each delimited line. You can use the following delimited format types:
- Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
- RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
- MS Excel CSV - Microsoft Excel comma-separated file.
- MySQL CSV - MySQL comma-separated file.
- Tab-Separated Values - File that includes tab-separated values.
- PostgreSQL CSV - PostgreSQL comma-separated file.
- PostgreSQL Text - PostgreSQL text file.
- Custom - File that uses user-defined delimiter, escape, and quote characters.
- Multi Character Delimited - File that uses multiple user-defined characters to delimit fields and lines, and single user-defined escape and quote characters.
- JSON
- Generates a record for each JSON object. You can process JSON files that include multiple JSON objects or a single JSON array.
- Protobuf
- Generates a record for every protobuf message. By default, the origin assumes messages contain multiple protobuf messages.
- SDC Record
- Generates a record for every record. Use to process records generated by a Data Collector pipeline using the SDC Record data format.
- XML
- Generates records based on a user-defined delimiter element. Use an XML element directly under the root element or define a simplified XPath expression. If you do not define a delimiter element, the origin treats the XML file as a single record.
Configuring a REST Service Origin
Configure a REST Service origin to process REST API requests and pass responses back to the originating REST API client. Use the origin as part of a microservice pipeline.