Parent Events versus Base Events Concept
The diagram below highlights the SOC Triad, with core cybersecurity tools like Network Detection and Response (NDR) and Endpoint Detection and Response (EDR) forming its foundation. These tools generate critical telemetry data, which flows into a Security Information and Event Management (SIEM) system at the top of the triad. It’s important to note that in modern environments, additional sources beyond NDR and EDR, such as cloud and other specialized data, now contribute valuable logging data to enhance SOC visibility and response.
Logging systems such as Elastic Security continuously evaluate ingested log data against conditions that may signal a security issue; Elastic calls these definitions "alert rules." When an alert rule's criteria are met, an alert is generated in the logging system. This alert is logged as a distinct event, separate from the original log or logs that triggered it. It's worth noting that some SIEM systems process data in memory, evaluating alert rule conditions at the point of ingestion rather than by querying data after it has been written to the logging system.
In traditional SIEMs, logs received directly from sources are labeled "Base Events," while logs generated through alert rules are termed "Correlated Events." Elastic classifies these logs differently: original source logs are known as "Ancestors" or "Children," while the alert documents generated from them are referred to as "Parents."
Base Events and Parent Events in SIEM Triage
The diagram below introduces Security Orchestration, Automation, and Response (SOAR) in relation to SIEM. Using SOAR for "SIEM Triage" can standardize and automate numerous activities involved in handling SIEM alerts, enhancing efficiency and consistency in response.
When SOAR interacts with a SIEM, it retrieves alert events either through an API call or by receiving them via a push from the SIEM (such as a webhook). The example below shows an API call pulling current SIEM alerts. Once alerts are retrieved, SOAR can initiate actions like investigation, containment, notification, and documentation on the affected data, systems, and users.
To perform further actions, it's often essential to retrieve the base or child events that triggered the alert, as they may contain additional data needed for analysis. This step matters most for threshold alerts (e.g., "Logon failures exceeding 10"), because such alerts only include data for the aggregated fields (the "Group by" fields). Other fields are omitted since they may vary across the source documents that contributed to the threshold count. This requires an extra processing step: parsing the initial alert log to identify the base event (or "ancestor event" in Elastic) that initiated the alert. An additional API query can then be made to obtain detailed information from the base or child event.
The diagram below illustrates this process. The Elastic Signals API is queried to retrieve a specific type of signal event, such as Elastic Security events:
<kibana host>:<port>/api/detection_engine/signals
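As a rough sketch of this pull model, the signals endpoint can be searched with an Elasticsearch query DSL body. The Python snippet below is a minimal, hedged example: the `kibana_url` and `api_key` values, the use of an `ApiKey` authorization header, and the `/search` suffix on the endpoint are assumptions about a typical deployment, not a definitive implementation.

```python
import json
import urllib.request


def build_signal_search(status="open", size=10):
    """Elasticsearch query DSL body matching signals by signal.status."""
    return {"query": {"match": {"signal.status": status}}, "size": size}


def fetch_signals(kibana_url, api_key, status="open", size=10):
    """Pull alert signals from Elastic Security.

    kibana_url and api_key are placeholders for your deployment;
    the kbn-xsrf header is required by Kibana HTTP APIs.
    """
    req = urllib.request.Request(
        f"{kibana_url}/api/detection_engine/signals/search",
        data=json.dumps(build_signal_search(status, size)).encode(),
        headers={
            "kbn-xsrf": "true",
            "Content-Type": "application/json",
            "Authorization": f"ApiKey {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["hits"]["hits"]
```

From here, each returned hit can be handed to the SOAR playbook steps described above (investigation, containment, notification, documentation).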
The returned alert signal event has an event.kind value of "signal." This field, one of the four Elastic Common Schema (ECS) Categorization Fields, represents the highest level in the ECS hierarchy. The event.kind field accepts values such as alert, asset, enrichment, event, metric, state, pipeline_error, and signal. The "signal" type is specifically used in Elastic Security for alert documents created by rules within the Kibana alerting framework. Read more about ECS categorization: https://www.elastic.co/guide/en/ecs/current/ecs-allowed-values-event-kind.html
Two other critical fields are signal.ancestors.id and signal.ancestors.index, as they provide the ancestor ID and index needed to locate the base event that triggered the alert. These values are essential for querying the correct index to retrieve details of the base event.
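The parsing step can be sketched as a small helper that walks the ancestor list in a signal document and returns the index/ID pairs to look up. This assumes the ancestor references live under a nested `signal.ancestors` array in the document's `_source`; adjust the path if your Elastic version flattens or renames these fields.

```python
def extract_ancestors(alert_source):
    """Return (index, id) pairs for each ancestor of a signal.

    alert_source is the _source of a signal document; in this
    sketch the ancestor references live under signal.ancestors.
    """
    ancestors = alert_source.get("signal", {}).get("ancestors", [])
    return [(a["index"], a["id"]) for a in ancestors]


# Each pair can then be resolved with a follow-up document lookup,
# e.g. GET <elasticsearch host>:<port>/<index>/_doc/<id>
```

Each (index, id) pair drives the additional API query that retrieves the full base event.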
Other Important Concepts
Data sizes related to threshold alert ancestors
Another key consideration involves handling ancestors of threshold alerts (e.g., "Logon failures exceeding 10"), which are a type of Elastic ancestor referred to as "children." If the threshold alert rule is set with a high threshold value, it will have numerous child ancestors (e.g., each of the 10, 20, 30+ logon failure events). When using an API query to retrieve data from each of these events, be mindful that the SOAR object you intend to populate may not be able to store all of that data. If your SOAR data object has a size limit, you may need to cap the number of ancestor child events retrieved. The actual count of documents that exceeded the threshold is available in the kibana.alert.threshold_result.count field.
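One way to apply that cap is shown below. The `MAX_CHILD_EVENTS` limit is a hypothetical value standing in for your SOAR object's real size constraint, and the nested dictionary path for kibana.alert.threshold_result.count is an assumption about how the field appears in `_source`; both should be checked against your environment.

```python
MAX_CHILD_EVENTS = 25  # hypothetical cap imposed by the SOAR data object


def select_children_to_fetch(alert_source, limit=MAX_CHILD_EVENTS):
    """Cap the number of ancestor child events retrieved, while
    preserving the true count reported by the threshold rule.

    Returns (ancestors_to_fetch, total_count).
    """
    ancestors = alert_source.get("signal", {}).get("ancestors", [])
    total = (
        alert_source.get("kibana", {})
        .get("alert", {})
        .get("threshold_result", {})
        .get("count", len(ancestors))
    )
    return ancestors[:limit], total
```

Recording `total` alongside the truncated list lets analysts see the real magnitude of the threshold breach even when only a subset of child events is stored in the SOAR object.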
signal.status
When you pull data from a logging system into a platform like SOAR for standardization and automation, you are effectively shifting much of the alert handling away from the original system (e.g., Elastic Security). It's important to remember that alerts in Elastic Security have a status, such as "open," "acknowledged," or "closed." Once the necessary actions are taken in the external SOAR system (e.g., the alert is acknowledged or closed), ensure that your SOAR system sends a POST request back to update the signal.status field of the alert in Elastic. Failing to do so may result in alerts being closed in your external SOAR while remaining open in Elastic Security. Learn more about the Signals API: https://www.elastic.co/guide/en/security/current/signals-api-overview.html
POST <kibana host>:<port>/api/detection_engine/signals/status
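A minimal sketch of that write-back, under the same deployment assumptions as before (`kibana_url`, `api_key`, and `ApiKey` authorization are placeholders), and assuming the status endpoint accepts a `signal_ids` list plus a `status` value in its body:

```python
import json
import urllib.request

ALLOWED_STATUSES = {"open", "acknowledged", "closed"}


def build_status_update(signal_ids, status):
    """Request body for the signals status endpoint; rejects statuses
    that Elastic Security does not recognize."""
    if status not in ALLOWED_STATUSES:
        raise ValueError(f"unsupported status: {status}")
    return {"signal_ids": signal_ids, "status": status}


def update_signal_status(kibana_url, api_key, signal_ids, status="closed"):
    """POST the new status back so Elastic Security stays in sync
    with the external SOAR."""
    req = urllib.request.Request(
        f"{kibana_url}/api/detection_engine/signals/status",
        data=json.dumps(build_status_update(signal_ids, status)).encode(),
        headers={
            "kbn-xsrf": "true",
            "Content-Type": "application/json",
            "Authorization": f"ApiKey {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

Calling this as the final step of a SOAR playbook keeps signal.status in Elastic Security consistent with the alert's state in the external system.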