Unifying Parameters Across Databricks | Databricks Blog

Today, we're excited to announce support for named parameter markers in the SQL editor. This feature lets you write parameterized code in the SQL editor that can then be copied and run directly in a dashboard or notebook without any syntax changes. This marks a significant milestone on our journey to unify parameters across queries, notebooks, and dashboards.

Using named parameter markers

Parameters let you substitute values into dataset queries at runtime, allowing you to filter data by criteria such as dates and product categories. This results in more efficient querying and more precise analysis before data is aggregated in a SQL query.

Parameter markers are supported across queries, notebooks, dashboards, workflows, and the SQL Execution API. They are strongly typed and more resilient to SQL injection attacks because they clearly separate supplied values from the SQL statement. To use the named parameter marker syntax, simply add a colon (:) to the beginning of an alphabetic word, for example, :parameter_name or :`parameter with a space`.
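
For example, the following query filters the samples.nyctaxi.trips sample table (used throughout the examples below) with a plain parameter and a quoted one; the parameter names here are illustrative:

SELECT * FROM samples.nyctaxi.trips
WHERE fare_amount > :min_fare
  AND pickup_zip = :`pickup zip`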

To specify a keyword such as a column or table name, wrap the parameter in the identifier() clause. For example, identifier(:parameter_name).
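
For instance, a hypothetical :column_name parameter can choose which column to return at run time (a minimal sketch against the samples.nyctaxi.trips sample table):

SELECT IDENTIFIER(:column_name) AS selected_column FROM samples.nyctaxi.trips LIMIT 10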

We recommend updating existing parameters to the named parameter marker syntax. We will soon provide an assistant action to convert parameters automatically.
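
As a sketch of what that update looks like (assuming a query that previously used the SQL editor's older mustache-style {{ }} syntax), only the parameter reference itself needs to change:

-- Before: SELECT * FROM samples.nyctaxi.trips WHERE pickup_zip = {{ pickup_zip }}
-- After:
SELECT * FROM samples.nyctaxi.trips WHERE pickup_zip = :pickup_zip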

Common use cases

Here are a few use cases where parameters are helpful:

Add a parameterized date range to a query to select records within a specific time frame.

SELECT * FROM samples.nyctaxi.trips WHERE tpep_pickup_datetime BETWEEN :start_date AND :end_date

Dynamically select or create a catalog, schema, and table.

SELECT * FROM IDENTIFIER(:catalog || '.' || :schema || '.' || :table)

CREATE TABLE IDENTIFIER(:catalog || '.' || :schema || '.' || :table) AS SELECT 1;

Define the query schema with a parameter.

USE SCHEMA IDENTIFIER(:selected_schema)
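
Once the schema is set this way, subsequent statements in the same context can use unqualified table names; for instance (a minimal illustration, assuming a trips table exists in the selected schema):

USE SCHEMA IDENTIFIER(:selected_schema);
SELECT COUNT(*) FROM trips;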

Parameterize templated strings to format outputs, such as phone numbers.

SELECT format_string("(%d) %d", :area_code, :phone_number) as phone_number

Parameterize rollups by day, month, or year.

SELECT DATE_TRUNC(:date_granularity, tpep_pickup_datetime) AS date_rollup, COUNT(*) AS total_trips FROM trips GROUP BY date_rollup
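
Here, :date_granularity would typically be given a unit string accepted by DATE_TRUNC, such as 'DAY', 'MONTH', or 'YEAR'.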

Select multiple parameter values in a single query.

SELECT * FROM trips WHERE
  array_contains(
    TRANSFORM(SPLIT(:list_parameter, ','), s -> TRIM(s)),
    dropoff_zip
  )
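
In this example, :list_parameter is supplied as a single comma-separated string, such as '10001, 10003, 10007'; the query splits and trims the values before matching them against dropoff_zip.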

Coming soon

We look forward to providing even more simplicity and flexibility in field and parameter filtering with date range and multi-value parameters. For example:

SELECT * FROM trips WHERE tpep_pickup_datetime BETWEEN :date.min AND :date.max

and

SELECT * FROM trips WHERE array_contains(:zipcodes, dropoff_zip)

Get started with named parameter marker syntax

Named parameter marker syntax for queries, notebooks, dashboards, workflows, and the SQL Execution API is available today. If you have any feedback or questions, please contact us at [email protected]. Check the documentation page for more detailed resources on getting started with parameters on Databricks.

To learn more about Databricks SQL, visit our website or read the documentation. You can also check out the product tour for Databricks SQL. If you want to migrate your existing warehouse to a high-performance, serverless data warehouse with a great user experience and lower total cost, Databricks SQL is the solution; try it for free.
