Mezmo is pleased to announce the release of the HTTP Endpoint Source for Mezmo Edge.
You can use the HTTP Endpoint Source to send data to a Mezmo Edge Pipeline via an HTTP POST request. For example, you may want to send and process data from an uncommon open source application. As long as you can send the data to an endpoint with a RESTful POST, you can ingest it into the Edge Pipeline.
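As a minimal sketch of what this looks like from the sending side, the Node.js snippet below POSTs a small batch of JSON events to an Edge endpoint. The URL is a placeholder, and the actual ingest URL, port, and any required headers come from your HTTP Endpoint Source configuration; the snippet also assumes Node 18+ for the built-in fetch.

```javascript
// Minimal sketch: POST a batch of JSON events to an Edge HTTP Endpoint Source.
// The endpoint URL below is hypothetical -- substitute the values shown for
// your Source in the Edge Pipeline configuration.
const EDGE_ENDPOINT = "http://edge.internal.example:8080/ingest"; // placeholder

async function sendEvents(events) {
  const res = await fetch(EDGE_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(events),
  });
  if (!res.ok) {
    throw new Error(`Edge ingest failed: ${res.status} ${res.statusText}`);
  }
}

sendEvents([{ app: "legacy-exporter", level: "info", message: "job finished" }])
  .catch(console.error);
```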
For more information, check out our technical documentation.
Mezmo is pleased to announce the release of the Syslog Source for Mezmo Edge Pipelines.
With the Syslog Source, you can send Syslog data to your Mezmo Edge Pipeline, where it is automatically parsed for use in the Pipeline. An API key is not required, and the connection is not protected with encryption, so this Source is intended for use within your own network, isolated from the wide area network.
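For illustration, here is a minimal Node.js sketch that emits a single RFC 3164-style syslog line over UDP. The host, port, and transport here are placeholders; check your Syslog Source configuration for the actual listener address and protocol.

```javascript
// Minimal sketch: send one syslog line over UDP to the Edge Syslog Source.
// Host and port are hypothetical placeholders for your Edge instance.
const { createSocket } = require("node:dgram");

const EDGE_HOST = "edge.internal.example"; // placeholder
const EDGE_PORT = 514;                     // placeholder

const pri = 14; // facility 1 (user-level), severity 6 (informational)
const line = `<${pri}>Oct 11 22:14:15 app-host my-app: user login succeeded`;

const socket = createSocket("udp4");
socket.send(Buffer.from(line), EDGE_PORT, EDGE_HOST, (err) => {
  if (err) console.error(err);
  socket.close();
});
```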
For more information, check out our technical documentation.
Mezmo is pleased to announce the release of the Script Execution Processor.
With this Processor you can use a subset of JavaScript to transform your data, which can significantly simplify your Pipeline map.
You can combine multiple actions like filtering, dropping, mapping, and casting inside a single script.
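As a rough sketch of the idea, a single script might handle filtering, dropping, field mapping, and type casting together. The entry-point name and event shape below are assumptions for illustration only; see the documentation for the Processor's exact contract.

```javascript
// Illustrative sketch only -- the function name and event fields are
// assumptions, not the Processor's documented interface.
function processEvent(event) {
  // Filter and drop: discard debug-level noise entirely.
  if (event.level === "debug") {
    return null;
  }
  // Map: rename a field to a normalized name.
  event.service = event.app_name;
  delete event.app_name;
  // Cast: convert a numeric string to a number.
  event.duration_ms = Number(event.duration_ms);
  return event;
}
```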
For more information, check out our technical documentation.
Mezmo is pleased to announce System for Cross-domain Identity Management (SCIM) for Log Analysis.
With SCIM, you can provision/de-provision user accounts in Mezmo via your identity providers, such as Okta and Azure.
For more information, check out our technical documentation.
Now you can build and manage your Mezmo Telemetry Pipelines as code with the Mezmo Terraform Provider. After adding the Provider to your Terraform configuration, you can define Sources, Destinations, and Processors just as you would in the Mezmo Web App UI, and also create and re-use "code recipes" for Pipelines. For more information, check out our technical documentation.
Mezmo Telemetry Pipeline is built as a cloud-native SaaS application, which limits your ability to use it for cases where processing data within your environment is a requirement, such as working with sensitive medical or financial data.
Mezmo Edge lets you run a telemetry data pipeline within your own environment, with the same functionality available in the Mezmo Telemetry Pipeline cloud environment plus the ability to process your data locally. You can run any Pipeline as a satellite node within an Edge instance. All of the metrics and management of the Pipeline are still handled by the SaaS infrastructure, making it easy to build, test, and deploy without requiring any additional coding or configuration management.
Mezmo Edge can be deployed to any Kubernetes cluster using a Helm chart. For more details about Mezmo Edge, check out our technical documentation.
Mezmo is pleased to announce the GA release of the Mezmo Ingestion Agent 3.9. You can download this version of the Agent from the Mezmo GitHub repo, where you can also find more detailed documentation.
Major updates in this version of the Agent include:
Added an option to delay Kubernetes logs until pod metadata is available
The Agent uses the Kubernetes API to enrich log line information with the appropriate pod data. If the API is unresponsive, by default the Agent sends the log lines without the associated pod labels. This feature adds a new configuration parameter, MZ_METADATA_RETRY_DELAY, which makes the Agent wait a specified period of time for the label data to become available.
Replaced debouncer with a custom solution to simplify and reduce file events
On heavily loaded systems where the Agent is monitoring files with multiple symlink copies, if the Agent has queued data from one of the copies and the symlinks are deleted out of sequence, the Agent could send multiple copies of the same line. This change modifies the order in which events are processed, ensuring that no duplicates are sent. The deduplication also reduces the memory required and improves overall throughput in these scenarios.
Correctly handle truncation during create events
An edge case still exists where, when log rotation re-uses inodes, new files might be treated as truncations. Catching the truncation during a file create event instead of a create-write event helps reduce the potential for log drops in this case. If you're using logrotate, make sure the `create` option is enabled.
Additional updates include:
Hello everyone, and welcome to the new Mezmo Product Announcements newsfeed! Here is where you'll find the latest updates about the Mezmo Telemetry Pipeline product and its features. You can also find all our product announcements collected at announcements.mezmo.com.