Suppose you have events as follows:

11:45:23 code=239

Your goal is to get 7 events, one for each of the code values in a row: 239, 773, -1, 292, -1, 444, -1. You might be tempted to use the transaction command, but using transaction here is a case of applying the wrong tool for the job. As long as we don't really care about the number of repeated runs of duplicates, the more straightforward approach is to use dedup, which removes duplicates. By default, dedup removes all duplicate events (where an event is a duplicate if it has the same values for the specified fields). But that's not what we want; we want to remove duplicates that appear in a cluster. To do this, dedup has a consecutive=true option that tells it to remove only duplicates that are consecutive.

The following list contains the functions that you can use to compare values or specify conditional statements. For information about using string and numeric fields in functions, and nesting functions, see Overview of SPL2 evaluation functions.

case(<condition>, <value>, ...)

This function takes pairs of <condition> and <value> arguments and returns the first value for which the condition evaluates to TRUE. The arguments are Boolean expressions that are evaluated from first to last. When the first expression that evaluates to TRUE is encountered, the corresponding <value> argument is returned. The function defaults to NULL if none of the conditions are true. You can use this function with the eval and where commands, in the WHERE clause of the from command, and as part of evaluation expressions with other commands.

The following example returns descriptions for the corresponding HTTP status code:

| from my_dataset where sourcetype="access_*"
| eval description=case(status = 200, "OK", status = 404, "Not found", status = 500, "Internal Server Error")

In this example, the description column is empty for status=406 and status=408. To display a default value when the status does not match one of the values specified, use the literal true:

| eval description=case(status = 200, "OK", status = 404, "Not found", status = 500, "Internal Server Error", true, "Other")

The word Other now displays in the search results for status=406 and status=408.

The next example shows you how to use the case function in two different ways: to create categories and to create a custom sort order. It uses earthquake data downloaded from the USGS Earthquakes website. The data is a comma-separated ASCII text file that contains the magnitude (mag), coordinates (latitude, longitude), region (place), and so forth, for each earthquake recorded.

You want to classify earthquakes based on depth. Shallow-focus earthquakes occur at depths less than 70 km, mid-focus earthquakes occur at depths between 70 and 300 km, and deep-focus earthquakes occur at depths greater than 300 km. We'll use Low, Mid, and Deep for the category names:

| from my_dataset where source="all_month.csv"
| eval Description=case(depth<=70, "Low", depth>70 AND depth<=300, "Mid", depth>300, "Deep")
| stats count min(mag) max(mag) by Description

The eval command creates a field called Description, which takes the value of "Low", "Mid", or "Deep" based on the depth of the earthquake. The case() function specifies which range of depths fits each description.
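The first-match-wins behavior of case() can be sketched in Python. This is an illustrative emulation, not Splunk code; the `case` and `describe` helpers are hypothetical, and note that Python evaluates all the arguments eagerly, unlike SPL:

```python
def case(*args):
    """Return the value paired with the first true condition, else None.

    Mirrors the SPL2 case() function: arguments are condition/value
    pairs, evaluated first to last; None stands in for SPL's NULL.
    """
    if len(args) % 2 != 0:
        raise ValueError("case() expects condition/value pairs")
    for condition, value in zip(args[::2], args[1::2]):
        if condition:
            return value
    return None

def describe(status):
    # Same mapping as the HTTP status example, with True as the
    # catch-all default (the SPL literal true).
    return case(status == 200, "OK",
                status == 404, "Not found",
                status == 500, "Internal Server Error",
                True, "Other")
```

Here describe(200) returns "OK" and describe(406) falls through to "Other"; dropping the final True pair would make unmatched statuses return None, mirroring the empty description column.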
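The effect of dedup with consecutive=true can also be sketched in Python. The helper name and the sample event list (the run lengths for each code value) are illustrative assumptions; only the resulting sequence of 7 codes comes from the recipe above:

```python
def dedup_consecutive(events, field):
    """Keep an event only when its field value differs from the
    previous event's value -- emulating `dedup <field> consecutive=true`."""
    result = []
    previous = object()  # sentinel that compares unequal to any value
    for event in events:
        if event[field] != previous:
            result.append(event)
            previous = event[field]
    return result

# Hypothetical run lengths; the runs collapse to the 7 target codes.
codes = [239, 239, 773, 773, 773, -1, 292, 292, -1, 444, -1]
events = [{"code": c} for c in codes]
kept = dedup_consecutive(events, "code")
```

After the call, kept holds one event per run: 239, 773, -1, 292, -1, 444, -1.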