This guide provides information on building searches.
In this section, we will introduce the following concepts:
📄️ Best Practices for Searches
Be specific with search scope
📄️ Dynamic Parsing
Dynamic Parsing allows automatic field extraction from your JSON log messages when you run a search. This allows you to view fields from JSON logs without having to manually specify parsing logic.
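For example, if your JSON logs contain a field such as `status` (a hypothetical field name used only for illustration), Dynamic Parsing would let a query reference that field without an explicit `json` or `parse` operator. A minimal sketch, assuming a made-up source category:

```
_sourceCategory=prod/checkout error
// status is auto-extracted from the JSON payload by Dynamic Parsing
| count by status
```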
📄️ Keyword Search Expressions
A Keyword Search Expression defines the scope of data for the query.
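As a sketch, a keyword search expression can combine metadata fields, quoted phrases, and Boolean operators to scope the query before any operators run. The source category below is hypothetical:

```
_sourceCategory=prod/api ("connection timed out" OR error) NOT debug
```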
📄️ Search Syntax Overview
The Sumo Logic Search Language operates on your entire log repository, no matter how many different log sources you have—in real time. The search query language is intuitive and efficient, allowing you to search terabytes of data and see results in seconds.
📄️ Search Templates
Search templates can help you simplify searches for your users by giving them a few easy input choices. You can have search templates replace any text in a query, including fields, keywords, and arguments to operators. You can also control which types of input are valid, such as text, strings, and keywords.
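As a rough sketch, a template parameter stands in for a literal value in the query; the `{{errorText}}` parameter name and source category below are assumptions for illustration:

```
// {{errorText}} is a template parameter the user fills in at run time
_sourceCategory=prod/web "{{errorText}}" | count by _sourceHost
```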
📄️ Set the Time Range
To set the time range for searches or metrics visualizations, click the time area.
📄️ Use Receipt Time
To search data based on the order in which Collectors received the messages, use Receipt Time. This option has the search reference the metadata field receiptTime instead of messageTime, giving you the ability to compare the parsed timestamp (messageTime) with the receipt time (receiptTime) and pinpoint Sources that may be parsing message timestamps incorrectly.
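If both timestamps are exposed to the query as epoch milliseconds under the metadata field names shown here (an assumption; confirm the field names in your account), a sketch for spotting Sources with a large gap between the two might look like:

```
*
// approximate ingest lag per message, in milliseconds (assumes epoch-ms fields)
| _receiptTime - _messageTime as lag_ms
| where lag_ms > 60000
| count by _sourceName
```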
📄️ Use a URL to Run a Search
You can create a custom URL to launch a log search in Sumo Logic.
What Data Do I Have?
It can be hard to create a search query if you don't know what data you have in your Sumo Logic environment.
You can use the following simple queries to identify possible values for your existing Source Categories, Source Names, and Source Hosts. You can also approximate data volume for each of the possible values using these queries.
We generally discourage the use of `*` on its own because it is not selective, but in this case it is an easy way to identify all messages received in the last 5 minutes and to approximate the volume for each value.
For Source Categories:

```
* | count_frequent(_sourceCategory)
```

For Source Hosts:

```
* | count_frequent(_sourceHost)
```

For Source Names:

```
* | count_frequent(_sourceName)
```
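To go a step beyond message counts and approximate ingest volume per Source Category, one option is the built-in `_size` field, which holds each message's size in bytes (assuming it is exposed in your account):

```
*
// rough volume per Source Category over the selected time range
| sum(_size) as bytes by _sourceCategory
| bytes / 1024 / 1024 as mbytes
| sort by mbytes
```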
Write Efficient Search Queries
Make the search as selective as possible
The more specific the query, the more efficiently it will run, as unnecessary messages are quickly thrown out of the mix. For example, the following two queries will generate the same result:
```
* | parse regex "uid=(?<userId>\d+)"
"uid=" | parse regex "uid=(?<userId>\d+)"
```
The second query returns results more efficiently: the first query's scope is `*`, which prompts Sumo Logic to comb through every message in the given time range, while the second limits the scope to messages that contain "uid=".
Use Field Extraction Rules
If your admin has created Field Extraction Rules, learn how to use them. Field Extraction Rules parse out fields from your organization's log files, meaning that you will not need to parse out fields in your query.
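For instance, if an admin has defined a rule that already extracts a field such as `status_code` from your web access logs (a hypothetical rule and field name), you can filter on that field directly instead of adding a `parse` step:

```
_sourceCategory=prod/apache
// status_code comes from a Field Extraction Rule, so no parse operator is needed
| where status_code = "500"
| count by _sourceHost
```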
Include the most selective filters first
It is best to filter data as early as possible in the query, using the most selective filters first.
For example, look at the following queries:
* | parse "queryTime=* " as queryTime | parse "uid=* " as uid | where queryTime\> 10000
* | parse "queryTime=* " as queryTime | where queryTime\> 10000 | parse "uid=* " as ``uid
Because most log lines have a `uid`, but only a small fraction have `queryTime > 10000`, the second query is more efficient: it discards messages that fail the `where` clause before spending time parsing `uid`.