Not all Logs are Created Equal
Posted on March 12, 2026 • 8 minutes • 1515 words
To have a well-oiled incident response team, it is crucial to have strong detection rules. Although security teams know what good detection rules look like, many of the other teams that provide logging to enable detection do not know which information is useful to an incident response team or a detection team. This gap between what is useful and what is not can cause friction between the cyber teams (incident response and detection) and the teams that support them. This blog post is an attempt to help bridge that gap.
Understanding the Different Teams
Incident Response Team, Security Operations Team (SOC)
The incident response team or security operations team is responsible for responding to all potential security incidents. The main way the incident response team responds to incidents is by investigating alerts generated by a detection rule.
Detection Engineering Team
A detection team is responsible for creating rules that catch potentially malicious behavior or abnormal activity. A detection team creates these rules by reviewing logs over time, baselining what normal looks like, and creating logic based on threat actor activity, intelligence, or activity flagged as malicious or abnormal. Since intelligence changes frequently, a good detection engineer must understand their log sources well to develop rules that reflect the evolving threat landscape and are robust enough to withstand shifts in attacker behavior. Not all organizations separate detection teams from the incident response team. Depending on the organization’s size, a detection team might be a function of another team or part of the incident response team.
Logging Team
The logging team is not a team directly within a cybersecurity organization, but rather the team or teams that provide logs to the detection engineers, enabling them to write rules. The logging team could be the teams responsible for the specific applications or for maintaining them.
Difference between a Useful Log and a Not-So-Useful Log
A common example of a log that is enabled but not sufficient is the audit log. Some modern SaaS applications provide audit logging for free. An audit log records system activity, typically including who logged in, the timestamp, and the outcome. The example below shows an audit log:
{
  "timestamp": "2025-03-15T09:04:32.441Z",
  "event_type": "user_login",
  "username": "jsmith",
  "outcome": "success"
}
In plain English
On March 15th, 2025, at 9:04 am UTC, jsmith logged in successfully.
The log above might look useful, and many organizations enable audit logs for compliance. Still, for an enterprise application with thousands of users, the usefulness of audit logs to a security team is very limited. The reason is that audit logs lack the context that makes them actionable, and generate too much noise to be useful. For example, knowing that the user logged in at 9:04 am UTC provides very little context to the detection engineer. It does not tell them whether this is an unusual sign-in, which device the user logged on to, or what the user did after logging in. Without this context, an audit log like this cannot distinguish between a legitimate user and a malicious user.
In addition, audit logs not only provide little value, but they also create noise by generating an entry every time a user signs in. As a result, noisy audit logs hinder performance and unnecessarily inflate costs. In most SIEMs (Security Information and Event Management systems), logs are stored in a centralized location, so the more data the SIEM ingests, the more it costs and the slower the platform becomes. Noisy audit logs burn budget on nonessential data, crowding out more useful logs that could improve detection.
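To make the cost concern concrete, here is a rough back-of-the-envelope calculation. All of the figures below are assumptions for illustration; none come from a real deployment.

```python
# Illustrative, assumed numbers; none of these figures come from the post.
users = 50_000                # enterprise user count (assumption)
events_per_user_per_day = 20  # sign-ins, token refreshes, etc. (assumption)
bytes_per_event = 500         # rough size of one audit-log entry (assumption)

daily_bytes = users * events_per_user_per_day * bytes_per_event
daily_gb = daily_bytes / 1e9
yearly_gb = daily_gb * 365
print(f"~{daily_gb:.1f} GB/day, ~{yearly_gb:.1f} GB/year of login-only events")
```

Under these assumptions, minimal login events alone add up to roughly half a gigabyte of ingestion per day, well over a hundred gigabytes per year, none of which carries enough context to investigate anything.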
Example of a Useful Log
{
  "timestamp": "2025-03-15T09:04:32.441Z",
  "application": {
    "name": "HR Portal",
    "version": "4.2.1",
    "environment": "production"
  },
  "event_type": "user_login",
  "outcome": "success",
  "user": {
    "username": "jsmith"
  },
  "request": {
    "ip_address": "192.168.1.105",
    "network": "corporate",
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
  },
  "authentication": {
    "password_verified": true,
    "mfa_sent": true,
    "mfa_approved": true,
    "identity_provider": "Okta"
  },
  "session": {
    "session_id": "a3f92b1c-4d78-11ee-be56-0242ac120002"
  }
}
What this log says in plain English
On March 15th, 2025, at 9:04 am UTC, the HR Portal recorded a login from jsmith. This user initiated the login request using Chrome browser version 120. The application confirmed the password was correct, sent an MFA notification through Okta, and recorded that jsmith approved it. The application assigned a session ID to track all of jsmith’s activity for the duration of their visit, and logged that the request came from IP address 192.168.1.105 on the corporate network. It also shows that the application was running version 4.2.1.
Several characteristics make this a useful log. It answers the core questions: who took an action, what they did, where the request came from, and how they authenticated. Each field is distinct enough both to serve as a baseline for normal behavior and to support rules that can help distinguish a legitimate user from a potentially malicious one.
To further illustrate why this matters, consider a scenario where the same username logs in from two different IP addresses within minutes of each other, using an unrecognized user-agent, and MFA was never sent. On its own, any one of these data points provides very little insight. But when correlated together, they tell a different story: the IP addresses suggest the user is in two places at once, the unfamiliar user-agent indicates an unusual or automated client, and the missing MFA step raises the question of whether authentication was bypassed entirely. A detection engineer can only build a rule that catches this pattern if all three fields are present in the log.
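A minimal sketch of such a correlation rule might look like the following. The field names mirror the example log above; the 5-minute window, the set of "known" user agents, and the function name are assumptions for illustration, not part of any particular SIEM's rule language.

```python
from datetime import datetime, timedelta

# Correlation window is an assumption; tune it to the environment.
WINDOW = timedelta(minutes=5)

def suspicious_logins(events, known_agents):
    """Flag a user seen from multiple IPs within WINDOW, using an
    unrecognized user agent, where MFA was never sent."""
    alerts = []
    recent_by_user = {}
    for event in sorted(events, key=lambda e: e["timestamp"]):
        ts = datetime.fromisoformat(event["timestamp"].replace("Z", "+00:00"))
        user = event["user"]["username"]
        # Keep only this user's events that fall inside the window.
        window = [
            (t, e) for t, e in recent_by_user.get(user, []) if ts - t <= WINDOW
        ] + [(ts, event)]
        recent_by_user[user] = window
        ips = {e["request"]["ip_address"] for _, e in window}
        unknown_agent = any(
            e["request"]["user_agent"] not in known_agents for _, e in window
        )
        mfa_missing = any(
            not e["authentication"]["mfa_sent"] for _, e in window
        )
        # All three signals must be present before an alert fires.
        if len(ips) > 1 and unknown_agent and mfa_missing:
            alerts.append(user)
    return alerts
```

Note that the rule only works because the log carries all three fields (`ip_address`, `user_agent`, `mfa_sent`); drop any one of them from the log and this detection becomes impossible to write.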
How Context-rich Logs Lead to Better Detection Engineering and a Better Security Program
A context-rich log is the foundation of a good detection rule, and a good detection rule is the foundation of an efficient SOC. The relationship among the three is similar to that among fresh ingredients, a recipe, and a chef. No matter how talented the chef or how well-written the recipe, the quality of the meal is ultimately decided before anyone sets foot in the kitchen.
Just as a good recipe is only as strong as the ingredients behind it, a context-rich log gives a detection engineer the foundation needed to write high-fidelity rules. High-fidelity rules are rules that fire on genuine threats and rarely fire on legitimate activity. Making high-fidelity rules is only possible when a detection engineer has enough context to baseline what normal looks like. Without a baseline, there is no way to spot abnormalities. Context-rich logs provide that baseline, giving detection engineers the data points needed to identify suspicious activity and giving the SOC something worth investigating when an alert fires.
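The baselining idea can be sketched in a few lines. The field names follow the example log above; the per-user profile and the simple 0-to-3 "deviation count" are an illustrative heuristic of my own, not a production scoring method.

```python
from collections import defaultdict

def build_baseline(history):
    """Record the IPs, user agents, and UTC hours each user normally uses."""
    baseline = defaultdict(
        lambda: {"ips": set(), "agents": set(), "hours": set()}
    )
    for event in history:
        profile = baseline[event["user"]["username"]]
        profile["ips"].add(event["request"]["ip_address"])
        profile["agents"].add(event["request"]["user_agent"])
        profile["hours"].add(int(event["timestamp"][11:13]))  # UTC hour
    return baseline

def deviation_count(event, baseline):
    """Count how many of IP, user agent, and login hour fall outside
    the user's baseline (0 = looks normal, 3 = everything is new)."""
    profile = baseline[event["user"]["username"]]
    return sum([
        event["request"]["ip_address"] not in profile["ips"],
        event["request"]["user_agent"] not in profile["agents"],
        int(event["timestamp"][11:13]) not in profile["hours"],
    ])
```

The point is not the scoring scheme itself but the dependency: every line of `deviation_count` reads a field that the bare audit log earlier in this post simply does not have.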
Having high-fidelity rules is important because it prevents SOC alert fatigue. Alert fatigue occurs when analysts are overwhelmed by a high volume of alerts, most of which turn out to be false positives. Most SOC teams are underfunded due to budget cuts, so it is more important than ever to have high-quality alerts that make the most of the resources available. Just as a chef cannot save a dish made with bad ingredients, no matter how talented they are, a SOC analyst cannot investigate their way out of a flood of low-quality alerts rooted in poor log data.
Finally, by ingesting only context-rich logs that matter, organizations can improve SIEM performance, leading to faster investigations and lower costs. When irrelevant data does not bog down a SIEM, detection rules execute more quickly and analyst queries return results sooner, directly translating to faster investigations and quicker containment. In incident response, every minute counts. The longer a threat actor goes undetected, the more time they have to move through the environment, escalate privileges, and cause damage. On the cost side, most SIEM platforms charge based on the amount of data ingested; every low-value log in the pipeline is wasted money on data that does not improve security, money that could have been reinvested in analyst training and development instead.
Closing Thoughts
Good security is not just about tools or the right people. It is about having good data. Every detection rule, every alert, and every investigation depends on the quality of the data. Poor log context compromises a company’s security: detection engineers write rules that miss the mark, SOC analysts burn time chasing false positives, and threat actors move through the environment undetected.
For every team responsible for an application that feeds into a security pipeline, the question is simple:
Is what we are logging actually useful? Does it give the security team enough context to do their job?
This distinction is very important. Noisy, shallow logs do not just fail to help; they actively get in the way: they consume budget, slow down platforms, and bury the signals that actually matter. For many small security teams that are already stretched thin, a flood of low-context logs without anything actionable is barely more helpful than no logging at all.
The security team’s ability to detect breaches, respond quickly, and protect the organization often traces back to a decision made long before any incident.
So please, log with intention.