Querying Logs in Explore

Explore provides a comprehensive interface for investigating log data from sources like Loki, Elasticsearch, CloudWatch Logs, and other log aggregation systems. The logs view is optimized for rapid filtering, searching, and contextual analysis.

Logs Visualization

The logs panel in Explore displays log entries with several key features:
  • Timestamp - When the log entry was recorded
  • Log level - Color-coded severity (error, warning, info, debug, trace)
  • Log message - The actual log content
  • Labels/fields - Structured metadata attached to each log line
  • Detected fields - Automatically parsed fields from log content
Logs are displayed in reverse chronological order by default (newest first), but you can change the sort order using the sort dropdown.

Loki Log Queries

Loki is Grafana’s native log aggregation system. It uses LogQL, a query language designed specifically for logs.

Basic LogQL Queries

Stream Selection

Select log streams using label matchers:
{job="varlogs"}
This returns all logs from the varlogs job.

Multiple Label Filters

Combine multiple label filters:
{job="varlogs", env="production", namespace="default"}
All labels must match (AND logic).
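Label matchers are always combined with AND; to match several values of a single label, use a regex matcher instead. A sketch, where the env values are illustrative:
# Matches logs from either environment
{job="varlogs", env=~"staging|production"}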

Label Matching Operators

# Exact match
{job="varlogs"}

# Negative match
{job!="varlogs"}

# Regex match
{job=~".*varlogs.*"}

# Negative regex
{job!~".*test.*"}

Log Pipeline Operations

LogQL pipelines transform and filter log lines after stream selection:

Line Filtering

Filter logs containing specific text:
{job="varlogs"} |= "error"
Operators:
  • |= - Contains string
  • != - Does not contain string
  • |~ - Matches regex
  • !~ - Does not match regex

Chained Filters

Combine multiple filters:
{job="varlogs"} 
  |= "error" 
  != "timeout" 
  |~ "database|connection"

Parser Operations

Extract fields from log lines:
# JSON parsing
{job="varlogs"} | json

# Logfmt parsing  
{job="varlogs"} | logfmt

# Regex parsing
{job="varlogs"} | regexp "(?P<method>\\w+) (?P<path>[\\w/]+)"

# Pattern parsing
{job="varlogs"} | pattern `<_> <level> <_> <msg>`

Label Filtering

Filter on parsed labels:
{job="varlogs"} 
  | json 
  | level="error" 
  | duration > 1s
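Label filter expressions can also be combined with and and or (a comma behaves like and). A sketch, assuming the parsed fields level and duration exist:
{job="varlogs"} 
  | json 
  | level="error" or duration > 1s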

Line Formatting

Reformat log lines for display:
{job="varlogs"} 
  | json 
  | line_format "{{.level}} - {{.message}}"

Metric Queries from Logs

LogQL can aggregate logs into metrics:

Count Over Time

Count log lines per time interval:
count_over_time({job="varlogs"}[5m])

Rate

Calculate logs per second:
rate({job="varlogs"} |= "error" [5m])
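Two rate queries can be divided to compute an error ratio, which highlights regressions independently of traffic volume. A sketch using the same illustrative varlogs job:
sum(rate({job="varlogs"} |= "error" [5m]))
  /
sum(rate({job="varlogs"}[5m]))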

Aggregation Functions

Aggregate across labels:
sum by (level) (
  count_over_time({job="varlogs"}[5m])
)
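Aggregations compose with functions such as topk to surface the busiest streams. A sketch showing the five namespaces producing the most log lines (the env and namespace labels are illustrative):
topk(5, sum by (namespace) (
  count_over_time({env="production"}[5m])
))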

Extracted Values

Aggregate parsed numeric fields:
avg_over_time(
  {job="varlogs"} 
    | json 
    | unwrap duration [5m]
)
This calculates the average duration field over 5-minute windows.
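Beyond averages, unwrap works with other range functions; a sketch computing the 99th-percentile duration over the same windows:
quantile_over_time(0.99,
  {job="varlogs"} 
    | json 
    | unwrap duration [5m]
)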

Query Builder

The LogQL query builder helps construct queries visually:
  1. Select labels - Choose label filters from dropdowns. Available labels are auto-populated from your Loki instance.
  2. Add line filters - Add text or regex filters to match specific log content.
  3. Parse fields - Select a parser (JSON, logfmt, regex, pattern) to extract structured fields.
  4. Filter parsed labels - Apply filters on the extracted fields.
  5. Switch to code - Click Code to see the generated LogQL and make manual adjustments.
Start with the query builder when learning LogQL, then switch to code mode as you become more comfortable.

Interactive Log Filtering

Explore provides powerful interactive filtering directly from log lines:

Click to Filter

  1. Hover over any label or field in a log line
  2. Click the filter icon to show available actions:
    • Filter for value - Show only logs with this value
    • Filter out value - Hide logs with this value
  3. The query automatically updates with the new filter

Text Selection

Select any text within a log message:
  1. Highlight text in the log content
  2. Right-click or use the popup menu
  3. Choose:
    • Line contains - Add |= "text" filter
    • Line does not contain - Add != "text" filter
Interactive filters are added to your query, making them visible and easy to modify or remove.
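For example, clicking Filter for value on a namespace label and choosing Line contains on selected text would turn a base query into something like the following (the label and text values are illustrative):
# Before
{job="varlogs"}

# After interactive filtering
{job="varlogs", namespace="checkout"} |= "timeout"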

Log Context

View surrounding log lines for any log entry:
  1. Expand a log line - Click on any log line to expand its details.
  2. Open context - Click Show context to load log lines before and after this entry.
  3. Adjust range - Use the controls to load more lines before or after.
  4. Navigate context - Scroll through the context to understand what happened around this log.
Context is essential for understanding the sequence of events leading to an issue.

Logs Volume

The logs volume panel shows a histogram of log entries over time:
  • Visual overview - See spikes and patterns in log volume
  • Color-coded by level - Errors in red, warnings in yellow, info in blue
  • Interactive selection - Click a bar to filter to that time range
  • Auto-updating - Refreshes with your query results
Use logs volume to identify when issues started, then click that time period to focus your investigation.
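The volume histogram is essentially a level-grouped count over time; assuming your streams carry a level label (or one parsable via logfmt), a roughly equivalent metric query is:
sum by (level) (
  count_over_time({job="varlogs"} | logfmt [1m])
)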

Deduplication

Reduce noise from duplicate log lines:
  1. Click the Dedup dropdown in the logs panel
  2. Choose a strategy:
    • None - Show all logs
    • Exact - Hide exact duplicate lines
    • Numbers - Hide lines that differ only in numbers
    • Signature - Hide lines with similar patterns
Deduplication is applied client-side after query results are returned, so it doesn’t reduce query load.

Live Tail

Stream logs in real-time as they’re ingested:
  1. Start live tail - Click the Live button in the Explore toolbar.
  2. Watch logs stream - New log lines appear automatically as they are ingested.
  3. Pause to review - Click Pause to freeze the stream while maintaining the connection.
  4. Resume or stop - Click Resume to continue streaming, or Stop to exit live mode.
  5. Clear buffer - Click Clear to remove all streamed logs from the view.
Live tail works best with focused queries. Broad queries may stream too many logs to review effectively.

Log Details

Click any log line to view detailed information:

Detected Fields

Automatically parsed fields from the log content:
  • JSON object keys
  • Logfmt key-value pairs
  • Common patterns (IP addresses, URLs, etc.)
Each field shows:
  • Field name and value
  • Filter actions (filter for/out)
  • Copy value button
  • Statistics (for numeric fields)

Labels

Structured metadata attached to the log stream:
  • Kubernetes labels
  • Service identifiers
  • Environment tags
  • Custom labels from Loki configuration
Links

Configured links to related data:
  • Trace IDs linking to distributed traces
  • Service names linking to metrics
  • Custom correlations defined in Grafana
Click any link to open the related data in a split pane.

Performance Optimization

Label Filter First

Always start queries with specific label filters:
# Good - filters streams first
{job="api", env="prod"} |= "error"

# Bad - matches every stream, processes all logs
{job=~".+"} |= "error" | json | level="error"
Label filters reduce the data scanned before line processing.

Limit Time Range

Querying large time ranges (> 1 day) can be slow and resource-intensive. Start with smaller ranges and expand as needed.

Use Metric Queries for Aggregations

For counting and aggregations, use metric queries instead of loading individual logs:
# Efficient - returns aggregated metric
sum by (level) (count_over_time({job="api"}[1h]))

# Inefficient - loads all logs then counts client-side  
{job="api"}

Limit Results

For exploratory queries, lower the line limit using the Line limit option in the query editor. LogQL itself has no limit pipeline stage; the limit is sent as a query parameter, and smaller limits return faster and reduce load on Loki.

Advanced Techniques

Multi-Line Logs

Handle logs that span multiple lines:
{job="api"} 
  | regexp "(?s)(?P<log>ERROR.*?\n.*?\n.*?\n)"
  | line_format "{{.log}}"
The (?s) flag enables . to match newlines.

JSON Filtering

Query nested JSON fields by passing a path expression to the json parser:
{job="api"} 
  | json error="response.error" 
  | error != ""

Pattern Extraction

Extract structured data from unstructured logs:
{job="nginx"} 
  | pattern `<_> - - <_> "<method> <path> <_>" <status> <size> <_>`
  | status >= 400
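Extracted fields feed directly into metric queries; for example, counting error responses per status code with the same pattern:
sum by (status) (
  count_over_time(
    {job="nginx"} 
      | pattern `<_> - - <_> "<method> <path> <_>" <status> <size> <_>`
      | status >= 400 [5m]
  )
)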

Time Formatting

Format timestamps in log output using the built-in __timestamp__ variable:
{job="api"} 
  | json 
  | line_format `{{ __timestamp__ | date "2006-01-02 15:04:05" }} {{.message}}`

Elasticsearch and Other Sources

While this guide focuses on Loki, Explore supports other log sources:

Elasticsearch

  • Use Lucene query syntax
  • Access all indexed fields
  • Support for aggregations
  • Full-text search capabilities

CloudWatch Logs

  • Query using CloudWatch Logs Insights query language
  • Access AWS log groups and streams
  • Filter by log patterns
  • Time-based filtering

Splunk

  • SPL (Splunk Processing Language) support
  • Access to Splunk indexes
  • Field extraction and transformation
Each data source has its own query language and capabilities. Refer to the specific data source documentation for query syntax.

Exporting and Sharing

Share Query URL

  1. Build your query in Explore
  2. Click Share in the toolbar
  3. Copy the URL containing your query and time range
  4. Share with team members

Download Logs

Export log results:
  1. Run your query to load results
  2. Click the download icon in the logs panel
  3. Choose format (TXT or JSON)
  4. Logs export with all fields and metadata

Create Alert

Convert log queries into alerts (with supported data sources):
  1. Build a metric query from logs
  2. Click Create alert rule
  3. Configure alert conditions and notifications

Best Practices

  • Begin with basic label filters to see overall log volume
  • Use logs volume to identify time periods of interest
  • Add line filters and parsers to narrow results
  • Leverage interactive filtering for rapid iteration
  • Standardize labels across services (env, cluster, namespace)
  • Use meaningful job names that identify the log source
  • Document your labeling scheme for the team
  • Emit logs in JSON or logfmt for easier parsing
  • Include contextual fields (trace_id, request_id, user_id)
  • Use consistent field names across services
  • Set appropriate log levels
  • Use split view to correlate logs with metrics
  • Click trace IDs in logs to view distributed traces
  • Set up data links to connect related data
  • Enable exemplars in Prometheus to link metrics → traces → logs

Troubleshooting

No logs returned

  • Verify the time range includes log data
  • Check label filters match your log streams
  • Confirm logs are being ingested (check in Loki directly)
  • Try removing line filters to see if they’re too restrictive

Query timeout

  • Reduce the time range
  • Add more specific label filters
  • Remove expensive regex operations
  • Consider if you need individual logs or can use metric aggregations

Parsing errors

  • Verify log format matches the parser (JSON logs need | json)
  • Check regex patterns for syntax errors
  • Use the query inspector to see raw log content
  • Test parsers on a small time range first
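Lines that fail to parse are tagged with an __error__ label rather than dropped, so you can filter them explicitly while debugging:
# Keep only lines that parsed cleanly
{job="api"} | json | __error__ = ""

# Or inspect only the failures
{job="api"} | json | __error__ != ""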

Next Steps

Querying Metrics

Learn how to query time-series metrics data

Distributed Tracing

Investigate distributed traces in Explore