Querying Logs in Explore
Explore provides a comprehensive interface for investigating log data from sources like Loki, Elasticsearch, CloudWatch Logs, and other log aggregation systems. The logs view is optimized for rapid filtering, searching, and contextual analysis.

Logs Visualization
The logs panel in Explore displays log entries with several key features:
- Timestamp - When the log entry was recorded
- Log level - Color-coded severity (error, warning, info, debug, trace)
- Log message - The actual log content
- Labels/fields - Structured metadata attached to each log line
- Detected fields - Automatically parsed fields from log content
Logs are displayed in reverse chronological order by default (newest first), but you can change the sort order using the sort dropdown.
Loki Log Queries
Loki is Grafana’s native log aggregation system. It uses LogQL, a query language designed specifically for logs.

Basic LogQL Queries
Stream Selection
Select log streams using label matchers. The simplest selector matches a single label, such as all streams from the varlogs job.
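A minimal sketch (the `varlogs` job name is illustrative):

```logql
{job="varlogs"}
```

This returns all log lines from streams whose `job` label equals `varlogs`.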
Multiple Label Filters
Combine multiple label filters in a single selector to narrow the streams returned.

Label Matching Operators
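LogQL supports four label matching operators: `=` (exactly equal), `!=` (not equal), `=~` (regex matches), and `!~` (regex does not match). A selector combining them (the label names are illustrative):

```logql
{job="varlogs", env=~"prod|staging", level!="debug"}
```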
Log Pipeline Operations
LogQL pipelines transform and filter log lines after stream selection.

Line Filtering
Filter logs containing specific text:
- `|=` - Contains string
- `!=` - Does not contain string
- `|~` - Matches regex
- `!~` - Does not match regex
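For example, keeping only lines that contain `error`:

```logql
{job="varlogs"} |= "error"
```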
Chained Filters
Combine multiple filters in sequence; each filter further narrows the results of the previous one.

Parser Operations
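A sketch of a pipeline that chains two line filters and then applies a parser (assuming logfmt-formatted logs):

```logql
{job="varlogs"} |= "error" != "timeout" | logfmt
```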
Parsers such as `json`, `logfmt`, `pattern`, and `regexp` extract fields from log lines and add them as labels.

Label Filtering
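For example, keeping only lines whose parsed `status` field indicates a server error (assuming JSON logs with a numeric `status` field):

```logql
{job="varlogs"} | json | status >= 500
```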
Labels extracted by a parser can be filtered with comparison operators.

Line Formatting
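A sketch using `line_format`; the `method`, `path`, and `duration` fields are assumed to be extracted by the `json` parser:

```logql
{job="varlogs"} | json | line_format "{{.method}} {{.path}} took {{.duration}}"
```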
`line_format` reformats log lines for display using Go template syntax.

Metric Queries from Logs
LogQL can aggregate logs into metrics:

Count Over Time
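For example, counting log lines per stream over 5-minute windows:

```logql
count_over_time({job="varlogs"}[5m])
```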
`count_over_time` counts log lines per stream over each time interval.

Rate
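For example, the per-second rate of error lines over a 1-minute window:

```logql
rate({job="varlogs"} |= "error" [1m])
```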
`rate` calculates log lines per second.

Aggregation Functions
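For example, summing counts across streams grouped by a `level` label (assumed to exist on the streams):

```logql
sum by (level) (count_over_time({job="varlogs"}[5m]))
```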
Aggregation functions such as `sum`, `avg`, `min`, `max`, and `topk` aggregate results across labels.

Extracted Values
Numeric fields extracted by a parser can be unwrapped and aggregated, for example averaging a duration field over 5-minute windows.
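A sketch using `unwrap` to aggregate a parsed numeric field (assuming JSON logs with a numeric `duration` field):

```logql
avg_over_time({job="varlogs"} | json | unwrap duration [5m])
```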
Query Builder
The LogQL query builder helps construct queries visually:

Select labels
Choose label filters from dropdowns. Available labels are auto-populated from your Loki instance.
Interactive Log Filtering
Explore provides powerful interactive filtering directly from log lines:

Click to Filter
- Hover over any label or field in a log line
- Click the filter icon to show available actions:
  - Filter for value - Show only logs with this value
  - Filter out value - Hide logs with this value
- The query automatically updates with the new filter
Text Selection
Select any text within a log message:
- Highlight text in the log content
- Right-click or use the popup menu
- Choose:
  - Line contains - Add a `|= "text"` filter
  - Line does not contain - Add a `!= "text"` filter
Interactive filters are added to your query, making them visible and easy to modify or remove.
Log Context
View surrounding log lines for any log entry by hovering over the line and clicking the Show context button.
Context is essential for understanding the sequence of events leading to an issue.
Logs Volume
The logs volume panel shows a histogram of log entries over time:
- Visual overview - See spikes and patterns in log volume
- Color-coded by level - Errors in red, warnings in yellow, info in blue
- Interactive selection - Click a bar to filter to that time range
- Auto-updating - Refreshes with your query results
Deduplication
Reduce noise from duplicate log lines:
- Click the Dedup dropdown in the logs panel
- Choose a strategy:
  - None - Show all logs
  - Exact - Hide exact duplicate lines
  - Numbers - Hide lines that differ only in numbers
  - Signature - Hide lines with similar patterns
Deduplication is applied client-side after query results are returned, so it doesn’t reduce query load.
Live Tail
Stream logs in real-time as they’re ingested by clicking the Live button in the Explore toolbar.

Log Details
Click any log line to view detailed information:

Detected Fields
Automatically parsed fields from the log content:
- JSON object keys
- Logfmt key-value pairs
- Common patterns (IP addresses, URLs, etc.)
For each field, the details view shows:
- Field name and value
- Filter actions (filter for/out)
- Copy value button
- Statistics (for numeric fields)
Labels
Structured metadata attached to the log stream:
- Kubernetes labels
- Service identifiers
- Environment tags
- Custom labels from Loki configuration
Data Links
Configured links to related data:
- Trace IDs linking to distributed traces
- Service names linking to metrics
- Custom correlations defined in Grafana
Performance Optimization
Label Filter First
Always start queries with specific label filters, e.g. `{cluster="prod", job="api"}` rather than a broad selector. Label selectors determine which streams Loki reads, so they are the cheapest way to narrow a query.

Limit Time Range

Query the narrowest time range that covers your investigation; long ranges force Loki to scan more data.
Use Metric Queries for Aggregations
For counting and aggregations, use metric queries instead of loading individual log lines into the browser.

Limit Results
Add a line limit for exploratory queries using the Line limit option in the query editor.

Advanced Techniques
Multi-Line Logs
Handle logs that span multiple lines. In regex filters, the `(?s)` flag enables `.` to match newlines.
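For example, matching a stack trace whose frames continue on subsequent lines (the pattern is illustrative):

```logql
{job="varlogs"} |~ `(?s)Exception.*\s+at `
```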
JSON Filtering
Query nested JSON fields by passing parameters to the `json` parser, e.g. `| json user_id="user.id"` extracts the nested `user.id` value into a `user_id` label.

Pattern Extraction
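A sketch using the `pattern` parser on access-log-style lines (the format assumes nginx-like logs):

```logql
{job="nginx"} | pattern `<ip> - - <_> "<method> <path> <_>" <status> <_>`
```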
The `pattern` parser extracts structured data from unstructured logs without writing full regular expressions.

Time Formatting
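A sketch using template functions in `line_format` (assumes a `ts` field in RFC3339 format; `toDate` and `date` are Loki template functions):

```logql
{job="varlogs"} | json | line_format `{{ .ts | toDate "2006-01-02T15:04:05Z07:00" | date "15:04:05" }} {{ .msg }}`
```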
Template functions in `line_format` can reformat timestamps in log output.

Elasticsearch and Other Sources
While this guide focuses on Loki, Explore supports other log sources:

Elasticsearch
- Use Lucene query syntax
- Access all indexed fields
- Support for aggregations
- Full-text search capabilities
CloudWatch Logs
- Query using CloudWatch Logs Insights query language
- Access AWS log groups and streams
- Filter by log patterns
- Time-based filtering
Splunk
- SPL (Splunk Processing Language) support
- Access to Splunk indexes
- Field extraction and transformation
Exporting and Sharing
Share Query URL
- Build your query in Explore
- Click Share in the toolbar
- Copy the URL containing your query and time range
- Share with team members
Download Logs
Export log results:
- Run your query to load results
- Click the download icon in the logs panel
- Choose format (TXT or JSON)
- Logs export with all fields and metadata
Create Alert
Convert log queries into alerts (with supported data sources):
- Build a metric query from logs
- Click Create alert rule
- Configure alert conditions and notifications
Best Practices
Start broad, then filter down
- Begin with basic label filters to see overall log volume
- Use logs volume to identify time periods of interest
- Add line filters and parsers to narrow results
- Leverage interactive filtering for rapid iteration
Use consistent label names
- Standardize labels across services (env, cluster, namespace)
- Use meaningful job names that identify the log source
- Document your labeling scheme for the team
Structure your logs
- Emit logs in JSON or logfmt for easier parsing
- Include contextual fields (trace_id, request_id, user_id)
- Use consistent field names across services
- Set appropriate log levels
Combine with metrics and traces
- Use split view to correlate logs with metrics
- Click trace IDs in logs to view distributed traces
- Set up data links to connect related data
- Enable exemplars in Prometheus to link metrics → traces → logs
Troubleshooting
No logs returned
- Verify the time range includes log data
- Check label filters match your log streams
- Confirm logs are being ingested (check in Loki directly)
- Try removing line filters to see if they’re too restrictive
Query timeout
- Reduce the time range
- Add more specific label filters
- Remove expensive regex operations
- Consider if you need individual logs or can use metric aggregations
Parsing errors
- Verify log format matches the parser (JSON logs need `| json`)
- Check regex patterns for syntax errors
- Use the query inspector to see raw log content
- Test parsers on a small time range first
Next Steps
Querying Metrics
Learn how to query time-series metrics data
Distributed Tracing
Investigate distributed traces in Explore