Window functions, also known as analytic functions, perform calculations across a set of table rows that are somehow related to the current row. This is different from regular aggregate functions, which aggregate results for the entire set of rows. Both Oracle PL/SQL and Apache Hive support window functions, but there are some differences in their implementation and usage.
Window Functions in Oracle PL/SQL
Oracle provides a comprehensive set of window functions that can be used to perform complex calculations.
Syntax:
function_name(expression) OVER ( [PARTITION BY partition_expression] [ORDER BY order_expression] )
Common Oracle Window Functions:
| Function | Description | Example Usage |
|---|---|---|
| ROW_NUMBER | Assigns a unique number to each row within the partition. | SELECT employee_id, ROW_NUMBER() OVER (ORDER BY salary DESC) AS row_num FROM employees; |
| RANK | Assigns a rank to each row within the partition of a result set. | SELECT employee_id, RANK() OVER (ORDER BY salary DESC) AS rank FROM employees; |
| DENSE_RANK | Similar to RANK but without gaps in ranking. | SELECT employee_id, DENSE_RANK() OVER (ORDER BY salary DESC) AS dense_rank FROM employees; |
| NTILE | Divides rows into a specified number of approximately equal groups. | SELECT employee_id, NTILE(4) OVER (ORDER BY salary DESC) AS quartile FROM employees; |
| LAG | Provides access to a row at a specified physical offset before the current row. | SELECT employee_id, LAG(salary, 1) OVER (ORDER BY hire_date) AS prev_salary FROM employees; |
| LEAD | Provides access to a row at a specified physical offset after the current row. | SELECT employee_id, LEAD(salary, 1) OVER (ORDER BY hire_date) AS next_salary FROM employees; |
| FIRST_VALUE | Returns the first value in an ordered set of values. | SELECT employee_id, FIRST_VALUE(salary) OVER (ORDER BY hire_date) AS first_salary FROM employees; |
| LAST_VALUE | Returns the last value in an ordered set of values. | SELECT employee_id, LAST_VALUE(salary) OVER (ORDER BY hire_date RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS last_salary FROM employees; |
| SUM | Calculates the sum of a set of values. | SELECT department_id, SUM(salary) OVER (PARTITION BY department_id) AS dept_total_salary FROM employees; |
| AVG | Calculates the average of a set of values. | SELECT department_id, AVG(salary) OVER (PARTITION BY department_id) AS dept_avg_salary FROM employees; |
Window Functions in Apache Hive
Hive also supports window functions, although its implementation may have slight differences compared to Oracle.
Syntax:
function_name(expression) OVER ( [PARTITION BY partition_expression] [ORDER BY order_expression] [ROWS|RANGE BETWEEN start_expression AND end_expression] )
Common Hive Window Functions:
| Function | Description | Example Usage |
|---|---|---|
| ROW_NUMBER | Assigns a unique number to each row within the partition. | SELECT employee_id, ROW_NUMBER() OVER (ORDER BY salary DESC) AS row_num FROM employees; |
| RANK | Assigns a rank to each row within the partition of a result set. | SELECT employee_id, RANK() OVER (ORDER BY salary DESC) AS rank FROM employees; |
| DENSE_RANK | Similar to RANK but without gaps in ranking. | SELECT employee_id, DENSE_RANK() OVER (ORDER BY salary DESC) AS dense_rank FROM employees; |
| NTILE | Divides rows into a specified number of approximately equal groups. | SELECT employee_id, NTILE(4) OVER (ORDER BY salary DESC) AS quartile FROM employees; |
| LAG | Provides access to a row at a specified physical offset before the current row. | SELECT employee_id, LAG(salary, 1) OVER (ORDER BY hire_date) AS prev_salary FROM employees; |
| LEAD | Provides access to a row at a specified physical offset after the current row. | SELECT employee_id, LEAD(salary, 1) OVER (ORDER BY hire_date) AS next_salary FROM employees; |
| FIRST_VALUE | Returns the first value in an ordered set of values. | SELECT employee_id, FIRST_VALUE(salary) OVER (ORDER BY hire_date) AS first_salary FROM employees; |
| LAST_VALUE | Returns the last value in an ordered set of values. | SELECT employee_id, LAST_VALUE(salary) OVER (ORDER BY hire_date RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS last_salary FROM employees; |
| SUM | Calculates the sum of a set of values. | SELECT department_id, SUM(salary) OVER (PARTITION BY department_id) AS dept_total_salary FROM employees; |
| AVG | Calculates the average of a set of values. | SELECT department_id, AVG(salary) OVER (PARTITION BY department_id) AS dept_avg_salary FROM employees; |
Comparison of Window Functions in Oracle PL/SQL and Hive
| Feature | Oracle PL/SQL | Hive |
|---|---|---|
| Syntax for Window Functions | function_name(expression) OVER ([PARTITION BY] [ORDER BY] [window_clause]) | function_name(expression) OVER ([PARTITION BY] [ORDER BY] [ROWS or RANGE BETWEEN start AND end]) |
| ROW_NUMBER | ROW_NUMBER() OVER (ORDER BY column) | ROW_NUMBER() OVER (ORDER BY column) |
| RANK | RANK() OVER (ORDER BY column) | RANK() OVER (ORDER BY column) |
| DENSE_RANK | DENSE_RANK() OVER (ORDER BY column) | DENSE_RANK() OVER (ORDER BY column) |
| NTILE | NTILE(n) OVER (ORDER BY column) | NTILE(n) OVER (ORDER BY column) |
| LAG | LAG(column, offset, default) OVER (ORDER BY column) | LAG(column, offset, default) OVER (ORDER BY column) |
| LEAD | LEAD(column, offset, default) OVER (ORDER BY column) | LEAD(column, offset, default) OVER (ORDER BY column) |
| FIRST_VALUE | FIRST_VALUE(column) OVER (ORDER BY column) | FIRST_VALUE(column) OVER (ORDER BY column) |
| LAST_VALUE | LAST_VALUE(column) OVER (ORDER BY column) | LAST_VALUE(column) OVER (ORDER BY column) |
| SUM | SUM(column) OVER (PARTITION BY partition_column ORDER BY column) | SUM(column) OVER (PARTITION BY partition_column ORDER BY column) |
| AVG | AVG(column) OVER (PARTITION BY partition_column ORDER BY column) | AVG(column) OVER (PARTITION BY partition_column ORDER BY column) |
| Window Clause | Supports ROWS BETWEEN and RANGE BETWEEN | Supports ROWS BETWEEN and RANGE BETWEEN |
| Recursion in CTEs | Supported | Not supported natively; must be simulated (see the recursive CTE section below) |
Example Comparison
Oracle PL/SQL Example:
SELECT employee_id, department_id, salary, SUM(salary) OVER (PARTITION BY department_id ORDER BY salary DESC) AS running_total FROM employees;
Hive Example:
SELECT employee_id, department_id, salary, SUM(salary) OVER (PARTITION BY department_id ORDER BY salary DESC) AS running_total FROM employees;
Both Oracle PL/SQL and Hive support window functions with similar syntax and capabilities. However, Oracle generally offers more comprehensive and robust support for advanced SQL features due to its long history and broad usage in enterprise environments. Hive, while powerful, is tailored more towards big data processing in Hadoop ecosystems, which may affect performance and feature sets depending on the version and underlying infrastructure.
Complex Use Cases for Window Functions
Window functions go beyond basic ranking and aggregations, enabling powerful data manipulations within a result set. Here are some complex use cases that showcase their capabilities:
1. Identifying Customer Churn:
Scenario: You want to predict customer churn by analyzing purchase behavior.
Process:
Use ROW_NUMBER() to assign a sequential number to each customer purchase record, ordered by purchase date (most recent first).
Calculate the difference between the current purchase date and the previous purchase date using LAG() with an offset of 1. This gives the time gap between purchases.
Identify customers with a significant increase in time gap between purchases compared to their historical buying pattern. This could indicate potential churn.
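A minimal sketch of this pattern, assuming a hypothetical purchases table with customer_id and purchase_date columns (Oracle-style date arithmetic, where subtracting two dates yields a number of days):
SELECT customer_id,
       purchase_date,
       ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY purchase_date DESC) AS purchase_seq,
       -- Gap in days between this purchase and the previous one (NULL for the first purchase)
       purchase_date - LAG(purchase_date, 1) OVER (PARTITION BY customer_id ORDER BY purchase_date) AS days_since_prev
FROM purchases;
Customers whose recent days_since_prev values are much larger than their historical average are the churn candidates.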
2. Flagging Stock Price Volatility:
Scenario: You want to identify periods of unusual stock price volatility.
Process:
Calculate the daily rolling standard deviation of the stock price using STDDEV() over a window of the past X days (e.g., 30 days).
Compare the daily standard deviation with a historical average or a threshold. Flag days where the standard deviation is significantly higher, indicating potential volatility.
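A sketch of the rolling calculation, assuming a hypothetical stock_prices table with symbol, trade_date, and close_price columns:
SELECT symbol,
       trade_date,
       -- Standard deviation over the trailing 30 trading days (current row plus 29 before it)
       STDDEV(close_price) OVER (
           PARTITION BY symbol
           ORDER BY trade_date
           ROWS BETWEEN 29 PRECEDING AND CURRENT ROW
       ) AS rolling_stddev_30d
FROM stock_prices;
Days where rolling_stddev_30d is well above the symbol's long-run average can then be flagged as volatile.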
3. Analyzing User Engagement in Web Applications:
Scenario: You want to understand user engagement patterns in a web application.
Process:
Use DENSE_RANK() to assign a rank to each user session based on the total time spent on the site within a specific timeframe (e.g., day or week). This identifies the most engaged users.
Calculate the cumulative number of page views per user session using SUM() over a window defined by the session ID. This helps analyze browsing depth.
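A sketch of both steps, assuming hypothetical sessions (session_id, user_id, total_time_spent) and page_views (session_id, view_time) tables:
-- Rank sessions by total engagement time
SELECT session_id,
       user_id,
       DENSE_RANK() OVER (ORDER BY total_time_spent DESC) AS engagement_rank
FROM sessions;
-- Cumulative page views within each session, in viewing order
SELECT session_id,
       view_time,
       COUNT(*) OVER (PARTITION BY session_id ORDER BY view_time) AS cumulative_views
FROM page_views;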
4. Comparing Sales Performance Across Different Locations:
Scenario: You want to compare sales performance between stores in different regions while accounting for overall trends.
Process:
Calculate the day-over-day percentage change in sales for each store using LAG() to fetch the previous day's sales (there is no built-in PERCENT_CHANGE() window function; the ratio is computed from LAG(), as shown in the sketch after this list). This normalizes each store against its own baseline and dampens the influence of seasonal trends.
Use RANK() to compare the percentage change for each store within a specific region, identifying top and bottom performers relative to their regional peers.
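A sketch combining both steps, assuming a hypothetical daily_sales table with store_id, region, sale_date, and sales columns (the first day per store yields NULL, since there is no prior day):
WITH daily_change AS (
    SELECT store_id,
           region,
           sale_date,
           -- Day-over-day percentage change, computed from LAG()
           (sales - LAG(sales) OVER (PARTITION BY store_id ORDER BY sale_date))
             / LAG(sales) OVER (PARTITION BY store_id ORDER BY sale_date) * 100 AS pct_change
    FROM daily_sales
)
SELECT store_id,
       region,
       sale_date,
       pct_change,
       RANK() OVER (PARTITION BY region, sale_date ORDER BY pct_change DESC) AS regional_rank
FROM daily_change;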
5. Identifying Data Anomalies (Outliers):
Scenario: You want to detect outliers in a dataset that might indicate errors or suspicious activity.
Process:
Calculate the interquartile range (IQR) for a specific column using custom logic or dedicated functions (if available). The IQR represents the middle 50% of the data distribution.
Identify rows where the value falls outside the range defined by Q1 - 1.5 * IQR and Q3 + 1.5 * IQR. These could be potential outliers.
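One way to express this in Oracle, assuming a hypothetical measurements table with id and value columns (Hive would typically use percentile_approx instead of PERCENTILE_CONT):
WITH quartiles AS (
    SELECT id,
           value,
           -- Analytic form of PERCENTILE_CONT: same Q1/Q3 attached to every row
           PERCENTILE_CONT(0.25) WITHIN GROUP (ORDER BY value) OVER () AS q1,
           PERCENTILE_CONT(0.75) WITHIN GROUP (ORDER BY value) OVER () AS q3
    FROM measurements
)
SELECT id, value
FROM quartiles
WHERE value < q1 - 1.5 * (q3 - q1)
   OR value > q3 + 1.5 * (q3 - q1);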
Additional Considerations:
When dealing with complex calculations, ensure proper handling of NULL values within window functions to avoid unexpected results.
Frame clauses (e.g., ROWS BETWEEN 2 PRECEDING AND CURRENT ROW, or RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) define the window size or offset for calculations within the window; an example follows this list.
Explore LEAD() and LAG() with custom offsets for more intricate row-to-row comparisons.
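For example, a frame clause turns a plain aggregate into a moving calculation. A sketch of a three-row moving average, assuming a sales table with sale_date and amount columns:
SELECT sale_date,
       amount,
       -- Average of the current row and the two preceding rows
       AVG(amount) OVER (ORDER BY sale_date
                         ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS moving_avg_3
FROM sales;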
Write a query using a window function to calculate a running total of sales for each salesperson in your dataset, and explain how each row is processed by the window function.
Let's consider a dataset named sales with the following columns: salesperson, sale_date, and amount.
To calculate a running total of sales for each salesperson, we can use the SUM window function along with the PARTITION BY clause to group by each salesperson, and ORDER BY to ensure the sales are accumulated in chronological order.
SQL Query
SELECT
salesperson,
sale_date,
amount,
SUM(amount) OVER (PARTITION BY salesperson ORDER BY sale_date) AS running_total
FROM
sales
ORDER BY
salesperson,
sale_date;
Explanation of the Window Function
Window Function Basics:
A window function performs a calculation across a set of table rows that are somehow related to the current row. This is different from a regular aggregate function, which performs a calculation across a set of rows but returns a single value.
SUM(amount) OVER (PARTITION BY salesperson ORDER BY sale_date):
SUM(amount): This is the window function that calculates the sum of the amount column.
OVER: The OVER clause defines the window of rows over which the window function operates.
PARTITION BY salesperson: This clause divides the result set into partitions to which the window function is applied. Each partition contains all rows for a particular salesperson.
ORDER BY sale_date: This clause orders the rows within each partition. The running total is calculated in this order.
Processing Each Row:
The window function processes each row within the defined partitions. For each row, it calculates the sum of amount for all rows from the start of the partition up to the current row (inclusive), based on the ORDER BY clause.
For example, if the salesperson “Alice” has three sales on different dates, the window function calculates the running total as follows:
For the first sale, the running total is simply the amount of the first sale.
For the second sale, the running total is the sum of the first and second sales.
For the third sale, the running total is the sum of the first, second, and third sales.
Example Dataset
Let’s say we have the following sales table:
| salesperson | sale_date | amount |
|---|---|---|
| Alice | 2023-01-01 | 100 |
| Alice | 2023-01-02 | 200 |
| Bob | 2023-01-01 | 150 |
| Alice | 2023-01-03 | 50 |
| Bob | 2023-01-02 | 100 |
Result Set
After running the query, we would get the following result:
| salesperson | sale_date | amount | running_total |
|---|---|---|---|
| Alice | 2023-01-01 | 100 | 100 |
| Alice | 2023-01-02 | 200 | 300 |
| Alice | 2023-01-03 | 50 | 350 |
| Bob | 2023-01-01 | 150 | 150 |
| Bob | 2023-01-02 | 100 | 250 |
Detailed Step-by-Step Processing
Partitioning: The data is divided into two partitions: one for "Alice" and one for "Bob".
Ordering: Within each partition, the rows are ordered by sale_date.
Accumulation: For each row, SUM(amount) is evaluated over all rows from the start of the partition through the current row, producing the running totals shown above.
Using this method, the window function efficiently calculates a running total for each salesperson, providing a cumulative sum of sales up to each point in time.
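The query above relies on the default frame. Writing the frame explicitly makes the per-row accumulation visible; this sketch is equivalent for distinct sale_date values:
SELECT salesperson,
       sale_date,
       amount,
       SUM(amount) OVER (PARTITION BY salesperson
                         ORDER BY sale_date
                         ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_total
FROM sales;
Note that with an ORDER BY and no explicit frame, the default is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which treats rows with the same sale_date as peers and gives them identical running totals; the ROWS variant accumulates strictly row by row.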
Common Table Expressions (CTEs) are a useful feature in SQL for simplifying complex queries and improving readability. Both Oracle PL/SQL and Apache Hive support CTEs, although there may be slight differences in their syntax and usage.
Common Table Expressions in Oracle PL/SQL
In Oracle, CTEs are defined using the WITH clause. They are used to create a temporary result set that can be referenced within the main query.
Syntax:
WITH cte_name AS (
SELECT query
)
SELECT columns
FROM cte_name
WHERE condition;
Example Usage:
WITH department_totals AS (
SELECT department_id, SUM(salary) AS total_salary
FROM employees
GROUP BY department_id
)
SELECT d.department_id, d.department_name, t.total_salary
FROM departments d
JOIN department_totals t ON d.department_id = t.department_id;
In this example:
The CTE department_totals calculates the total salary for each department.
The main query then joins the departments table with the department_totals CTE to display the department name and total salary.
Common Table Expressions in Apache Hive
In Hive, CTEs are also defined using the WITH clause. They function similarly to those in Oracle, allowing you to create temporary result sets for use within a larger query.
Syntax:
WITH cte_name AS (
SELECT query
)
SELECT columns
FROM cte_name
WHERE condition;
Example Usage:
WITH department_totals AS (
SELECT department_id, SUM(salary) AS total_salary
FROM employees
GROUP BY department_id
)
SELECT d.department_id, d.department_name, t.total_salary
FROM departments d
JOIN department_totals t ON d.department_id = t.department_id;
In this example:
The syntax and functionality are essentially the same as in Oracle PL/SQL.
The CTE department_totals is used to calculate the total salary for each department.
The main query joins the departments table with the department_totals CTE to display the department name and total salary.
Key Points
Scope: CTEs in both Oracle and Hive are limited to the scope of the query they are defined in. Once the query execution is complete, the CTEs are discarded.
Multiple CTEs: You can define multiple CTEs in a single query by separating them with commas.
Recursion: Oracle supports recursive CTEs, which allow a CTE to refer to itself. Hive does not support recursive CTEs natively; the recursion has to be simulated, as discussed in the recursive CTE section later in this article.
Example of Multiple CTEs:
WITH cte1 AS (
SELECT department_id, SUM(salary) AS total_salary
FROM employees
GROUP BY department_id
),
cte2 AS (
SELECT department_id, COUNT(*) AS employee_count
FROM employees
GROUP BY department_id
)
SELECT c1.department_id, c1.total_salary, c2.employee_count
FROM cte1 c1
JOIN cte2 c2 ON c1.department_id = c2.department_id;
In both Oracle and Hive, this example:
Defines two CTEs: cte1 and cte2.
The main query joins these two CTEs to display the department ID, total salary, and employee count for each department.
CTEs are a powerful tool in both Oracle PL/SQL and Hive for breaking down complex queries, making them easier to read and maintain.
Here’s an example of a complex query using multiple CTEs and embedded CTEs in a single WITH statement. This query is designed to work in both MySQL and Spark SQL. It involves six layers of CTEs to illustrate a comprehensive use case.
Scenario:
You have two tables: products with columns (product_id, product_name, category_id, price) and categories with columns (category_id, category_name).
You want to perform various aggregations and calculations to ultimately determine the price difference between each product and the next most expensive product in the same category, while including additional metrics such as category-level statistics and rank.
SQL Query with Six Layers of CTEs
WITH
-- Layer 1: CTE to join products and categories
product_categories AS (
SELECT
p.product_id,
p.product_name,
p.category_id,
p.price,
c.category_name
FROM
products p
JOIN
categories c ON p.category_id = c.category_id
),
-- Layer 2: CTE to calculate the minimum and maximum price per category
category_min_max_prices AS (
SELECT
category_id,
MIN(price) AS min_price,
MAX(price) AS max_price
FROM
product_categories
GROUP BY
category_id
),
-- Layer 3: CTE to rank products within each category by price
ranked_products AS (
SELECT
pc.product_id,
pc.product_name,
pc.category_id,
pc.price,
pc.category_name,
ROW_NUMBER() OVER (PARTITION BY pc.category_id ORDER BY pc.price DESC) AS price_rank
FROM
product_categories pc
),
-- Layer 4: CTE to calculate the next higher price for each product
next_higher_prices AS (
SELECT
rp.product_id,
rp.product_name,
rp.category_id,
rp.price,
rp.category_name,
rp.price_rank,
LEAD(rp.price) OVER (PARTITION BY rp.category_id ORDER BY rp.price DESC) AS next_higher_price
FROM
ranked_products rp
),
-- Layer 5: CTE to calculate the price difference with the next higher price
price_differences AS (
SELECT
nhp.product_id,
nhp.product_name,
nhp.category_id,
nhp.price,
nhp.category_name,
nhp.price_rank,
nhp.next_higher_price,
nhp.price - nhp.next_higher_price AS price_difference
FROM
next_higher_prices nhp
),
-- Layer 6: CTE to calculate category-level statistics and combine all previous data
category_statistics AS (
SELECT
pd.category_id,
pd.category_name,
-- Window aggregates keep one row per product while still computing category-level stats
AVG(pd.price) OVER (PARTITION BY pd.category_id) AS avg_price,
COUNT(pd.product_id) OVER (PARTITION BY pd.category_id) AS total_products,
cm.min_price,
cm.max_price,
pd.product_id,
pd.product_name,
pd.price,
pd.price_rank,
pd.next_higher_price,
pd.price_difference
FROM
price_differences pd
JOIN
category_min_max_prices cm ON pd.category_id = cm.category_id
)
-- Final query to select all relevant columns
SELECT
cs.product_id,
cs.product_name,
cs.category_id,
cs.category_name,
cs.price,
cs.price_rank,
cs.next_higher_price,
cs.price_difference,
cs.avg_price,
cs.total_products,
cs.min_price,
cs.max_price
FROM
category_statistics cs
ORDER BY
cs.category_id, cs.price DESC;
Explanation:
product_categories CTE:
Joins the products and categories tables to select relevant columns.
category_min_max_prices CTE:
Calculates the minimum and maximum price for each category.
ranked_products CTE:
Ranks products within each category by price.
next_higher_prices CTE:
Calculates the next higher price for each product within the same category.
price_differences CTE:
Calculates the price difference between each product and the next higher price.
category_statistics CTE:
Calculates category-level statistics using window aggregates (so each product row is preserved) and combines data from all previous CTEs.
Final Query:
Selects the final results, including product details, price difference, and category-level statistics.
This approach demonstrates how to use multiple and embedded CTEs to build a complex query step-by-step, making it easier to understand and maintain.
Let us suppose a scenario in which six steps of PySpark scripts (mostly DataFrame transformations) have been converted into multiple CTEs under a single WITH statement in a Spark SQL query. Which will perform better?
When deciding whether to use multiple CTEs in a single WITH statement in Spark SQL versus writing equivalent transformations in PySpark, performance can depend on several factors:
Factors Affecting Performance:
Execution Plan Optimization:
Spark SQL: The Catalyst optimizer in Spark SQL can optimize the query plan, potentially reducing the execution time. Spark SQL queries are usually well-optimized because of this.
PySpark: The DataFrame API also benefits from the Catalyst optimizer. However, PySpark scripts can be more flexible in terms of optimization strategies and custom transformations.
Readability and Maintainability:
Spark SQL: Using multiple CTEs can make the query easier to read and maintain, especially for complex transformations. It provides a declarative approach.
PySpark: While PySpark scripts can be more verbose, they offer more control over the transformations and are often easier to debug step-by-step.
Lazy Evaluation:
Both Spark SQL and PySpark follow lazy evaluation principles, meaning transformations are not executed until an action (e.g., show(), count(), collect()) is called.
Resource Utilization:
The performance of Spark SQL and PySpark can vary depending on cluster resource configurations, data size, and complexity of transformations.
Example: Converting Multiple Steps of PySpark to CTEs in Spark SQL
PySpark Script Example:
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, avg, rank, lead
from pyspark.sql.window import Window
spark = SparkSession.builder.appName("Example").getOrCreate()
# Step 1: Read data
products = spark.read.csv("products.csv", header=True, inferSchema=True)
categories = spark.read.csv("categories.csv", header=True, inferSchema=True)
# Step 2: Calculate total number of products
product_totals = products.groupBy("category_id").count().withColumnRenamed("count", "total_products")
# Step 3: Calculate average price in each category
category_averages = products.groupBy("category_id").agg(avg("price").alias("avg_price"))
# Step 4: Identify products with prices above the average in their category
above_average_products = products.join(category_averages, "category_id").filter(col("price") > col("avg_price"))
# Step 5: Rank products within their categories by price
window_spec = Window.partitionBy("category_id").orderBy(col("price").desc())
ranked_products = products.withColumn("price_rank", rank().over(window_spec))
# Step 6: Calculate the next most expensive product price within each category
next_expensive_prices = products.withColumn("next_higher_price", lead("price").over(window_spec))
# Final Step: Combine all previous steps
final_result = (products
    .join(product_totals, "category_id")
    .join(category_averages, "category_id")
    .join(above_average_products, "product_id", "left")
    .join(ranked_products, "product_id")
    .join(next_expensive_prices, "product_id")
    .select(products["*"],
            product_totals["total_products"],
            category_averages["avg_price"],
            above_average_products["price"].alias("above_avg_price"),
            ranked_products["price_rank"],
            next_expensive_prices["next_higher_price"],
            (products["price"] - next_expensive_prices["next_higher_price"]).alias("price_difference")))
final_result.show()
Equivalent Spark SQL with Multiple CTEs:
WITH
product_totals AS (
SELECT
category_id,
COUNT(*) AS total_products
FROM
products
GROUP BY
category_id
),
category_averages AS (
SELECT
category_id,
AVG(price) AS avg_price
FROM
products
GROUP BY
category_id
),
above_average_products AS (
SELECT
p.product_id,
p.product_name,
p.category_id,
p.price,
ca.avg_price
FROM
products p
JOIN
category_averages ca ON p.category_id = ca.category_id
WHERE
p.price > ca.avg_price
),
ranked_products AS (
SELECT
p.product_id,
p.product_name,
p.category_id,
p.price,
RANK() OVER (PARTITION BY p.category_id ORDER BY p.price DESC) AS price_rank
FROM
products p
),
next_expensive_prices AS (
SELECT
p.product_id,
p.product_name,
p.category_id,
p.price,
LEAD(p.price) OVER (PARTITION BY p.category_id ORDER BY p.price DESC) AS next_higher_price
FROM
products p
),
final_result AS (
SELECT
p.product_id,
p.product_name,
p.category_id,
c.category_name,
p.price,
pt.total_products,
ca.avg_price,
ap.price AS above_avg_price,
rp.price_rank,
nep.next_higher_price,
p.price - nep.next_higher_price AS price_difference
FROM
products p
JOIN
categories c ON p.category_id = c.category_id
LEFT JOIN
product_totals pt ON p.category_id = pt.category_id
LEFT JOIN
category_averages ca ON p.category_id = ca.category_id
LEFT JOIN
above_average_products ap ON p.product_id = ap.product_id
LEFT JOIN
ranked_products rp ON p.product_id = rp.product_id
LEFT JOIN
next_expensive_prices nep ON p.product_id = nep.product_id
)
SELECT
product_id,
product_name,
category_name,
price,
total_products,
avg_price,
above_avg_price,
price_rank,
next_higher_price,
price_difference
FROM
final_result
ORDER BY
category_name, price DESC;
Performance Considerations:
Execution Plan:
Run EXPLAIN or EXPLAIN EXTENDED on the Spark SQL query, and call .explain() on the final DataFrame in PySpark, to analyze the execution plans. This will help you understand how Spark plans to execute each query and transformation.
Benchmarking:
Benchmark both approaches with representative data sizes. Measure the execution time, resource usage (CPU, memory), and shuffle read/write operations. This will give you practical insights into which approach performs better in your specific context.
Caching Intermediate Results:
In PySpark, you can cache intermediate DataFrames if they are reused multiple times, which can improve performance. This can be controlled more explicitly than in Spark SQL, although Spark SQL offers CACHE TABLE (sketched after this list).
Complex Transformations:
If your transformations are very complex and require custom logic, PySpark may offer more flexibility and control. However, for standard SQL operations, Spark SQL might be more concise and easier to optimize automatically.
In general, both approaches benefit from Spark’s Catalyst optimizer, but the best choice depends on your specific use case, data size, and complexity of transformations. Performing benchmarking and analyzing execution plans will provide the most accurate assessment of performance differences.
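As a starting point for such benchmarking, Spark SQL exposes both plan inspection and explicit caching. A minimal sketch, assuming the products table from the earlier example:
-- Inspect the optimized and physical plans for a query
EXPLAIN EXTENDED
SELECT category_id, COUNT(*) AS total_products
FROM products
GROUP BY category_id;
-- Cache a table that several CTEs re-read, then release it when done
CACHE TABLE products;
-- ... run the multi-CTE query here ...
UNCACHE TABLE products;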
Recursive CTE
Recursive Common Table Expressions (CTEs) are powerful tools for handling queries that involve hierarchical or tree-like data structures, and iterative or recursive operations. They allow you to define a base case and then repeatedly apply the recursive logic until the final result is obtained. Recursive CTEs are widely used in various real-world use cases.
Here are some common use cases for recursive CTEs:
1. Employee Hierarchies (Organizational Trees)
Recursive CTEs are often used to represent and query employee-manager relationships, where employees report to managers, and managers report to higher-level executives. This hierarchical structure can be easily modeled and queried using a recursive CTE.
Use Case: Find All Subordinates for a Manager
Given an employee table where each employee has a manager_id, you may want to find all employees who report directly or indirectly to a specific manager.
Example:
WITH RECURSIVE subordinates AS (
-- Base case: start with the direct reports of the given manager (e.g., manager_id=1)
SELECT emp_id, emp_name, manager_id
FROM employees
WHERE manager_id = 1
UNION ALL
-- Recursive step: Find employees who report to the employees from the previous step
SELECT e.emp_id, e.emp_name, e.manager_id
FROM employees e
INNER JOIN subordinates s ON e.manager_id = s.emp_id
)
SELECT * FROM subordinates;
Result:
This query will recursively find all subordinates who report directly or indirectly to the manager with manager_id = 1.
2. Analyzing Bill of Materials (BOM)
In manufacturing, a Bill of Materials (BOM) defines the components required to produce a product, and those components may have their own subcomponents, creating a multi-level hierarchy.
Use Case: Exploding a Bill of Materials
You want to recursively find all the subcomponents required to manufacture a given product.
Example:
WITH RECURSIVE bom AS (
-- Base case: Start with the main product (e.g., product_id=100)
SELECT product_id, component_id, quantity
FROM components
WHERE product_id = 100
UNION ALL
-- Recursive step: Find the subcomponents of the components from the previous step
SELECT c.product_id, c.component_id, c.quantity
FROM components c
INNER JOIN bom b ON c.product_id = b.component_id
)
SELECT * FROM bom;
Result:
This query will return the entire hierarchy of components needed to build the product, including all subcomponents.
3. Pathfinding in Graphs (Shortest Path)
Recursive CTEs can be used to perform graph traversal, such as finding the shortest path between two nodes in a graph. This is commonly used in transportation networks, social networks, and routing algorithms.
Use Case: Finding the Shortest Path Between Two Nodes
Given a table representing connections (edges) between nodes (vertices), you can use a recursive CTE to find the shortest path between two nodes.
Example:
WITH RECURSIVE paths AS (
-- Base case: Start with the source node (e.g., node_id=1)
SELECT node_id, connected_node_id, 1 AS path_length
FROM edges
WHERE node_id = 1
UNION ALL
-- Recursive step: Continue finding connected nodes
SELECT e.node_id, e.connected_node_id, p.path_length + 1
FROM edges e
INNER JOIN paths p ON e.node_id = p.connected_node_id
)
SELECT * FROM paths
WHERE connected_node_id = 5
ORDER BY path_length
LIMIT 1;
Result:
This query will return the shortest path (by hop count) from node 1 to node 5. Note that in graphs with cycles, you should also cap path_length or track visited nodes to keep the recursion from running forever.
4. Working with Family Trees
Recursive CTEs are useful in genealogical applications, where you need to model and query family trees. Each person can have a parent or a child, and you may want to trace ancestry or descendants.
Use Case: Find All Ancestors of a Person
Given a table of family relationships, you can use a recursive CTE to trace the ancestors of a specific person.
Example:
WITH RECURSIVE ancestors AS (
-- Base case: Start with the person (e.g., person_id=100)
SELECT person_id, parent_id
FROM family
WHERE person_id = 100
UNION ALL
-- Recursive step: Find the parents of the ancestors
SELECT f.person_id, f.parent_id
FROM family f
INNER JOIN ancestors a ON f.person_id = a.parent_id
)
SELECT * FROM ancestors;
Result:
This query will return all ancestors of the person with person_id = 100.
5. Time-Series Data (Running Totals)
Recursive CTEs can also be used to calculate running totals or cumulative sums over time-series data, where you need to recursively aggregate values.
Use Case: Calculate a Running Total of Sales by Date
You want to calculate the cumulative total of sales up to each date.
Example:
WITH RECURSIVE running_totals AS (
-- Base case: Start with the first date
SELECT sale_date, amount, amount AS running_total
FROM sales
WHERE sale_date = (SELECT MIN(sale_date) FROM sales)
UNION ALL
-- Recursive step: add the next date's amount to the running total
-- (an ORDER BY is not allowed inside the recursive member)
SELECT s.sale_date, s.amount, rt.running_total + s.amount AS running_total
FROM sales s
INNER JOIN running_totals rt ON s.sale_date = (SELECT MIN(sale_date) FROM sales WHERE sale_date > rt.sale_date)
)
SELECT * FROM running_totals;
Result:
This query will return a running total of sales for each date. (In practice, SUM(amount) OVER (ORDER BY sale_date) is the simpler and faster way to do this; the recursive form is shown for illustration.)
6. Data Lineage and Change Tracking (History)
In data warehousing and ETL processes, Recursive CTEs can be used to track the history of changes in records or tables, allowing you to understand how data has evolved over time.
Use Case: Trace the History of Changes to a Record
You may have a table that records changes made to data over time, and you want to trace how a particular record has changed.
Example:
WITH RECURSIVE history AS (
-- Base case: Start with the current version of the record
SELECT record_id, change_date, old_value, new_value
FROM changes
WHERE record_id = 100
UNION ALL
-- Recursive step: Trace back to the previous version
SELECT c.record_id, c.change_date, c.old_value, c.new_value
FROM changes c
INNER JOIN history h ON c.new_value = h.old_value
)
SELECT * FROM history;
Result:
This query will return the entire change history of the record with record_id = 100.
7. Recursive Summation of Hierarchical Data (Financial Reporting)
In financial reporting, recursive CTEs are useful for aggregating financial data across hierarchical levels, such as summing revenues or expenses for all subsidiaries in a corporate structure.
Use Case: Summing Revenues for All Subsidiaries
Given a table that lists companies and their parent companies, you can recursively calculate the total revenue for all subsidiaries.
Example:
WITH RECURSIVE revenue_summary AS (
-- Base case: start with the main company itself (e.g., company_id=1)
SELECT company_id, revenue
FROM companies
WHERE company_id = 1
UNION ALL
-- Recursive step: add each subsidiary's own revenue (summed at the end)
SELECT c.company_id, c.revenue
FROM companies c
INNER JOIN revenue_summary rs ON c.parent_company_id = rs.company_id
)
SELECT SUM(revenue) FROM revenue_summary;
Result:
This query will return the total revenue for the company and all its subsidiaries.
8. Data Cleanup and De-Duplication
Recursive CTEs can also be used in data cleaning operations, such as de-duplicating data where there are multiple records for the same entity, and you need to recursively eliminate duplicates.
Use Case: Removing Duplicate Entries Based on Conditions
You can use a recursive CTE to keep only the most recent or relevant record for each entity.
Example:
WITH RECURSIVE de_duplication AS (
-- Base case: select the earliest record for each entity
SELECT entity_id, data, change_date
FROM records r
WHERE change_date = (SELECT MIN(change_date) FROM records WHERE entity_id = r.entity_id)
UNION ALL
-- Recursive step: Select subsequent occurrences and decide whether to keep them
SELECT r.entity_id, r.data, r.change_date
FROM records r
INNER JOIN de_duplication dd ON r.entity_id = dd.entity_id AND r.change_date > dd.change_date
)
SELECT * FROM de_duplication;
Result:
This query will return a de-duplicated list of records based on the latest entry for each entity.
Points:
Recursive CTEs are incredibly versatile and can be used in various real-world scenarios such as:
Hierarchical Data (employee-manager relationships, family trees).
Graph and Network Traversal (shortest paths, pathfinding).
Bill of Materials (BOM) and multi-level hierarchies.
Data Cleanup (de-duplication and data correction).
Recursive CTE in PySpark (Simulating Recursion)
PySpark does not directly support Recursive CTEs. However, you can simulate the same using iterative loops or DataFrame joins.
Simulating Recursive CTE Using Iterative Joins in PySpark
Let’s try to mimic the recursive CTE using loops in PySpark to get all subordinates for a manager.
1. Sample Data (Employee Hierarchy in PySpark):
from pyspark.sql import SparkSession
# Initialize Spark session
spark = SparkSession.builder.appName("RecursiveCTE").getOrCreate()
# Sample data
data = [(1, 'John', None), (2, 'Mary', 1), (3, 'Mike', 1), (4, 'Jane', 2), (5, 'Peter', 3)]
columns = ['emp_id', 'emp_name', 'manager_id']
# Create the DataFrame
employees_df = spark.createDataFrame(data, columns)
# Show the initial data
employees_df.show()
| emp_id | emp_name | manager_id |
|---|---|---|
| 1 | John | NULL |
| 2 | Mary | 1 |
| 3 | Mike | 1 |
| 4 | Jane | 2 |
| 5 | Peter | 3 |
2. Recursive Query to Find Subordinates of a Manager:
In this example, we will simulate recursion by iteratively joining the DataFrame on itself.
from pyspark.sql.functions import col
# Define the starting manager (e.g., John, emp_id=1)
manager_id = 1
# Anchor: Start with the manager
result_df = employees_df.filter(col('emp_id') == manager_id)
# Iteratively find all subordinates (mimicking recursion)
for i in range(5):  # Fixed depth: assumes the hierarchy is at most 5 levels deep
    new_subordinates = (employees_df
        .join(result_df, employees_df['manager_id'] == result_df['emp_id'], 'inner')
        .select(employees_df['emp_id'], employees_df['emp_name'], employees_df['manager_id']))
    result_df = result_df.union(new_subordinates).dropDuplicates(['emp_id'])
# Show the final result (all subordinates)
result_df.show()
Result:
| emp_id | emp_name | manager_id |
|---|---|---|
| 1 | John | NULL |
| 2 | Mary | 1 |
| 3 | Mike | 1 |
| 4 | Jane | 2 |
| 5 | Peter | 3 |
Explanation:
Anchor (Base case): Start with the base manager (John, emp_id=1).
Iterative Join: In each iteration, we join the employees_df with the result of the previous iteration to find employees reporting to the current manager or subordinates.
Union: We accumulate the results by unioning the DataFrames.
Loop: The fixed-depth loop mimics recursion, expanding the set of subordinates level by level; a production version would instead loop until an iteration adds no new rows.
3. Alternative: Using GraphX (for Hierarchical or Graph-like Data)
If you are dealing with hierarchical data such as employee hierarchies, family trees, or network graphs, you may want to consider using GraphX or GraphFrames, which are part of Apache Spark.
Example Using GraphFrames:
from graphframes import GraphFrame
# Create vertices (employees)
vertices = spark.createDataFrame([(1, 'John'), (2, 'Mary'), (3, 'Mike'), (4, 'Jane'), (5, 'Peter')], ["id", "name"])
# Create edges (manager relationships)
edges = spark.createDataFrame([(1, 2), (1, 3), (2, 4), (3, 5)], ["src", "dst"])
# Create the GraphFrame
g = GraphFrame(vertices, edges)
# Run BFS (Breadth First Search) to find all subordinates of a manager (John)
results = g.bfs(fromExpr="id = 1", toExpr="id > 0")
results.show()
This approach can be more efficient when dealing with deeply hierarchical or graph-based data in PySpark.
Summary:
Recursive CTE is commonly used in SQL databases to handle hierarchical or tree-structured data.
While Spark SQL doesn’t natively support Recursive CTE, you can simulate it in PySpark using iterative JOIN operations.
Alternatively, for graph-related hierarchical data, consider using GraphFrames or GraphX in Spark for more efficient traversal of complex structures.
Oracle PL/SQL String Manipulation Cheatcode:-
| Function Name | Description | Example Usage | Result |
|---|---|---|---|
| INITCAP | Capitalizes the first letter of each word in a string. | SELECT INITCAP('oracle pl/sql') FROM dual; | Oracle Pl/Sql |
| SUBSTR | Returns a substring from the specified position. | SELECT SUBSTR('OraclePLSQL', 7, 5) FROM dual; | PLSQL |
| INSTR | Returns the position of the first occurrence of a substring. | SELECT INSTR('OraclePLSQL', 'PL') FROM dual; | 7 |
| REPLACE | Replaces all occurrences of a substring within a string. | SELECT REPLACE('OraclePLSQL', 'PL', 'SQL') FROM dual; | OracleSQLSQL |
| LPAD | Pads the left side of a string with a specified set of characters. | SELECT LPAD('Oracle', 10, '*') FROM dual; | ****Oracle |
| RPAD | Pads the right side of a string with a specified set of characters. | SELECT RPAD('Oracle', 10, '*') FROM dual; | Oracle**** |
| TRIM | Removes leading and trailing spaces (or specified characters) from a string. | SELECT TRIM('  Oracle  ') FROM dual; | Oracle |
| LTRIM | Removes leading spaces (or specified characters) from a string. | SELECT LTRIM('  Oracle') FROM dual; | Oracle |
| RTRIM | Removes trailing spaces (or specified characters) from a string. | SELECT RTRIM('Oracle  ') FROM dual; | Oracle |
| SOUNDEX | Returns a phonetic representation of a string. | SELECT SOUNDEX('Oracle') FROM dual; | O624 |
| ASCII | Returns the ASCII value of the first character in a string. | SELECT ASCII('A') FROM dual; | 65 |
| CHR | Returns the character corresponding to the ASCII value. | SELECT CHR(65) FROM dual; | A |
| TRANSLATE | Replaces characters in a string based on corresponding characters in two other strings. | SELECT TRANSLATE('SQL', 'SQ', 'PL') FROM dual; | PLL |
| REGEXP_LIKE | Determines if a string matches a regular expression pattern (a condition, used in WHERE). | SELECT 1 FROM dual WHERE REGEXP_LIKE('Oracle', 'Ora'); | 1 (row returned: true) |
| REGEXP_INSTR | Returns the position of the first occurrence of a substring matching a regular expression. | SELECT REGEXP_INSTR('OraclePLSQL', 'PL') FROM dual; | 7 |
| REGEXP_SUBSTR | Returns the substring matching a regular expression. | SELECT REGEXP_SUBSTR('OraclePLSQL', 'PL') FROM dual; | PL |
| REGEXP_REPLACE | Replaces substrings matching a regular expression with another substring. | SELECT REGEXP_REPLACE('OraclePLSQL', 'PL', 'SQL') FROM dual; | OracleSQLSQL |
| TO_CHAR | Converts a number or date to a string. | SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD') FROM dual; | 2024-06-04 |
| TO_NUMBER | Converts a string to a number. | SELECT TO_NUMBER('12345') FROM dual; | 12345 |
| TO_DATE | Converts a string to a date. | SELECT TO_DATE('2024-06-04', 'YYYY-MM-DD') FROM dual; | 04-Jun-24 |
These functions provide various capabilities to manipulate and format string and character data in Oracle PL/SQL, allowing for complex transformations and queries involving text data.
Hive QL String Manipulation Cheatcode:-
| Function Name | Description | Example Usage | Result |
|---|---|---|---|
| CONCAT | Concatenates two or more strings. | SELECT CONCAT('Hive', 'QL'); | HiveQL |
| CONCAT_WS | Concatenates strings with a separator. | SELECT CONCAT_WS('-', 'Hive', 'QL'); | Hive-QL |
| LENGTH | Returns the length of a string. | SELECT LENGTH('HiveQL'); | 6 |
| LOWER | Converts all characters in a string to lowercase. | SELECT LOWER('HiveQL'); | hiveql |
| UPPER | Converts all characters in a string to uppercase. | SELECT UPPER('HiveQL'); | HIVEQL |
| REVERSE | Reverses the characters in a string. | SELECT REVERSE('HiveQL'); | LQeviH |
| TRANSLATE | Replaces characters in a string. | SELECT TRANSLATE('HiveQL', 'HQL', 'hql'); | hiveql |
| TRIM | Removes leading and trailing spaces from a string. | SELECT TRIM('  HiveQL  '); | HiveQL |
| LTRIM | Removes leading spaces from a string. | SELECT LTRIM('  HiveQL'); | HiveQL |
| RTRIM | Removes trailing spaces from a string. | SELECT RTRIM('HiveQL  '); | HiveQL |
| SPACE | Returns a string of spaces of the specified length. | SELECT SPACE(3); | '   ' (three spaces) |
| REPEAT | Repeats a string a specified number of times. | SELECT REPEAT('Hive', 3); | HiveHiveHive |
| SPLIT | Splits a string into an array using a specified delimiter. | SELECT SPLIT('Hive,QL', ','); | ["Hive", "QL"] |
| SUBSTR | Returns a substring from the specified position. | SELECT SUBSTR('HiveQL', 1, 4); | Hive |
| INSTR | Returns the position of the first occurrence of a substring. | SELECT INSTR('HiveQL', 'QL'); | 5 |
| FIND_IN_SET | Returns the position of a string in a comma-separated list. | SELECT FIND_IN_SET('QL', 'Hive,QL,Hadoop'); | 2 |
| INITCAP | Capitalizes the first letter of each word in a string. | SELECT INITCAP('hive ql'); | Hive Ql |
| REGEXP_REPLACE | Replaces substrings that match a regular expression. | SELECT REGEXP_REPLACE('HiveQL', 'QL', 'SQL'); | HiveSQL |
Date and Time Manipulation in Oracle SQL
In Oracle SQL, date and time manipulation is essential for many database operations, ranging from basic date arithmetic to complex formatting and extraction. Here's a guide covering various common operations you might need.
1. Basic Date Arithmetic
Adding/Subtracting Days:
-- Add 5 days to a date
SELECT SYSDATE + 5 FROM dual;
-- Subtract 10 days from a date
SELECT SYSDATE - 10 FROM dual;
Adding/Subtracting Months:
-- Add 3 months to a date
SELECT ADD_MONTHS(SYSDATE, 3) FROM dual;
-- Subtract 2 months from a date
SELECT ADD_MONTHS(SYSDATE, -2) FROM dual;
Adding/Subtracting Years:
-- Add 1 year to a date
SELECT ADD_MONTHS(SYSDATE, 12) FROM dual;
-- Subtract 1 year from a date
SELECT ADD_MONTHS(SYSDATE, -12) FROM dual;
2. Extracting Components from Date
Extract Year, Month, Day, Hour, Minute, Second:
SELECT
EXTRACT(YEAR FROM SYSDATE) AS year,
EXTRACT(MONTH FROM SYSDATE) AS month,
EXTRACT(DAY FROM SYSDATE) AS day,
EXTRACT(HOUR FROM SYSTIMESTAMP) AS hour,
EXTRACT(MINUTE FROM SYSTIMESTAMP) AS minute,
EXTRACT(SECOND FROM SYSTIMESTAMP) AS second
FROM dual;
3. Formatting Dates
To Convert Date to String in a Specific Format:
-- Convert date to 'YYYY-MM-DD' format
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD') FROM dual;
-- Convert date to 'DD-Mon-YYYY HH24:MI:SS' format
SELECT TO_CHAR(SYSDATE, 'DD-Mon-YYYY HH24:MI:SS') FROM dual;
To Convert String to Date:
-- Convert string to date
SELECT TO_DATE('2024-06-02', 'YYYY-MM-DD') FROM dual;
-- Convert string with time to date
SELECT TO_DATE('02-Jun-2024 14:30:00', 'DD-Mon-YYYY HH24:MI:SS') FROM dual;
4. Finding Difference Between Dates
Difference in Days:
SELECT
TRUNC(TO_DATE('2024-12-31', 'YYYY-MM-DD') - TO_DATE('2024-01-01', 'YYYY-MM-DD')) AS difference_in_days
FROM dual;
Difference in Hours, Minutes, Seconds:
-- Difference in hours
SELECT
(SYSDATE - (SYSDATE - 1)) * 24 AS difference_in_hours
FROM dual;
-- Difference in minutes
SELECT
(SYSDATE - (SYSDATE - 1)) * 24 * 60 AS difference_in_minutes
FROM dual;
-- Difference in seconds
SELECT
(SYSDATE - (SYSDATE - 1)) * 24 * 60 * 60 AS difference_in_seconds
FROM dual;
5. Working with Time Zones
Convert between Time Zones:
-- Convert from current time zone to UTC
SELECT
FROM_TZ(CAST(SYSDATE AS TIMESTAMP), 'YOUR_TIME_ZONE') AT TIME ZONE 'UTC'
FROM dual;
-- Convert from one time zone to another
SELECT
FROM_TZ(CAST(SYSTIMESTAMP AS TIMESTAMP), 'America/New_York') AT TIME ZONE 'Asia/Kolkata'
FROM dual;
6. Miscellaneous Functions
Last Day of the Month:
SELECT LAST_DAY(SYSDATE) FROM dual;
First Day of the Month:
SELECT TRUNC(SYSDATE, 'MM') FROM dual;
Next Day (Find next specific weekday):
-- Find the next Monday
SELECT NEXT_DAY(SYSDATE, 'MONDAY') FROM dual;
Truncate Date (to start of day, month, year, etc.):
-- Truncate to the start of the current day
SELECT TRUNC(SYSDATE) FROM dual;
-- Truncate to the start of the current month
SELECT TRUNC(SYSDATE, 'MM') FROM dual;
-- Truncate to the start of the current year
SELECT TRUNC(SYSDATE, 'YYYY') FROM dual;
These are some fundamental operations for date and time manipulation in Oracle SQL. Depending on your specific needs, you can combine these functions to perform more complex operations.
Oracle PL/SQL Date Manipulation Cheatcode:-
| Function Name | Description | Example Usage | Result |
|---|---|---|---|
| SYSDATE | Returns the current date and time. | SELECT SYSDATE FROM dual; | 04-Jun-24 12:34:56 PM |
| CURRENT_DATE | Returns the current date in the session time zone. | SELECT CURRENT_DATE FROM dual; | 04-Jun-24 |
| CURRENT_TIMESTAMP | Returns the current date and time in the session time zone. | SELECT CURRENT_TIMESTAMP FROM dual; | 04-JUN-24 12:34:56.123456 PM -05:00 |
| SYSTIMESTAMP | Returns the current date and time with time zone. | SELECT SYSTIMESTAMP FROM dual; | 04-JUN-24 12:34:56.123456 PM -05:00 |
| EXTRACT | Extracts and returns a specified part of a date (e.g., year). | SELECT EXTRACT(YEAR FROM SYSDATE) FROM dual; | 2024 |
| TRUNC | Truncates a date to the specified unit (e.g., month). | SELECT TRUNC(SYSDATE, 'MM') FROM dual; | 01-Jun-24 |
| ROUND | Rounds a date to the specified unit (e.g., month). | SELECT ROUND(SYSDATE, 'MM') FROM dual; | 01-Jun-24 (rounds up to 01-Jul-24 from the 16th onward) |
| ADD_MONTHS | Adds a specified number of months to a date. | SELECT ADD_MONTHS(SYSDATE, 2) FROM dual; | 04-Aug-24 |
| MONTHS_BETWEEN | Returns the number of months between two dates. | SELECT MONTHS_BETWEEN(SYSDATE, '01-MAY-24') FROM dual; | ≈ 1.1 (fractional part based on a 31-day month) |
| NEXT_DAY | Returns the date of the next specified day of the week. | SELECT NEXT_DAY(SYSDATE, 'MONDAY') FROM dual; | 10-Jun-24 |
| LAST_DAY | Returns the last day of the month for a given date. | SELECT LAST_DAY(SYSDATE) FROM dual; | 30-Jun-24 |
| NEW_TIME | Converts a date from one time zone to another. | SELECT NEW_TIME(SYSDATE, 'EST', 'PST') FROM dual; | 04-Jun-24 9:34:56 AM |
| TO_DATE | Converts a string to a date. | SELECT TO_DATE('2024-06-04', 'YYYY-MM-DD') FROM dual; | 04-Jun-24 |
| TO_CHAR | Converts a date to a string in a specified format. | SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD') FROM dual; | 2024-06-04 |
| SYSDATE + n | Adds n days to the current date. | SELECT SYSDATE + 10 FROM dual; | 14-Jun-24 |
| SYSDATE - n | Subtracts n days from the current date. | SELECT SYSDATE - 10 FROM dual; | 25-May-24 |
| DATE 'YYYY-MM-DD' | Specifies a date literal in ANSI date format. | SELECT DATE '2024-06-04' FROM dual; | 04-Jun-24 |
| INTERVAL | Specifies a period of time. | SELECT SYSDATE + INTERVAL '1' MONTH FROM dual; | 04-Jul-24 |
| DBTIMEZONE | Returns the database time zone. | SELECT DBTIMEZONE FROM dual; | +00:00 (for example) |
| SESSIONTIMEZONE | Returns the session time zone. | SELECT SESSIONTIMEZONE FROM dual; | -05:00 (for example) |
(Results assume SYSDATE = 04-Jun-24 12:34:56 PM.)
Date and Time manipulation in Apache Hive QL
In Apache Hive, date and time manipulation is crucial for data analysis and ETL processes. Hive provides a rich set of functions to perform various date and time operations. Here is a guide covering common operations you might need.
1. Basic Date Arithmetic
Adding/Subtracting Days:
-- Add 5 days to the current date
SELECT DATE_ADD(CURRENT_DATE, 5);
-- Subtract 10 days from the current date
SELECT DATE_SUB(CURRENT_DATE, 10);
Adding/Subtracting Months:
-- Add 3 months to the current date
SELECT ADD_MONTHS(CURRENT_DATE, 3);
-- Subtract 2 months from the current date
SELECT ADD_MONTHS(CURRENT_DATE, -2);
2. Extracting Components from Date
Extract Year, Month, Day, Hour, Minute, Second:
SELECT
YEAR(CURRENT_DATE) AS year,
MONTH(CURRENT_DATE) AS month,
DAY(CURRENT_DATE) AS day,
HOUR(CURRENT_TIMESTAMP) AS hour,
MINUTE(CURRENT_TIMESTAMP) AS minute,
SECOND(CURRENT_TIMESTAMP) AS second;
3. Formatting Dates
To Convert Date to String in a Specific Format:
-- Convert date to 'YYYY-MM-DD' format
SELECT DATE_FORMAT(CURRENT_DATE, 'yyyy-MM-dd');
-- Convert timestamp to 'DD-Mon-YYYY HH24:MI:SS' format
SELECT DATE_FORMAT(CURRENT_TIMESTAMP, 'dd-MMM-yyyy HH:mm:ss');
To Convert String to Date:
-- Convert string to date (Hive's to_date takes a single argument)
SELECT TO_DATE('2024-06-02');
-- Convert a string with a custom format to a timestamp string via unix_timestamp
SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('02-Jun-2024 14:30:00', 'dd-MMM-yyyy HH:mm:ss'));
4. Finding Difference Between Dates
Difference in Days:
SELECT
DATEDIFF(TO_DATE('2024-12-31', 'yyyy-MM-dd'), TO_DATE('2024-01-01', 'yyyy-MM-dd')) AS difference_in_days;
Difference in Hours, Minutes, Seconds: Hive does not have built-in functions for calculating differences in hours, minutes, or seconds directly. You may need to use UNIX_TIMESTAMP and other functions for such calculations:
-- Difference in hours
SELECT
(UNIX_TIMESTAMP('2024-12-31 12:00:00') - UNIX_TIMESTAMP('2024-12-31 00:00:00')) / 3600 AS difference_in_hours;
-- Difference in minutes
SELECT
(UNIX_TIMESTAMP('2024-12-31 12:00:00') - UNIX_TIMESTAMP('2024-12-31 00:00:00')) / 60 AS difference_in_minutes;
-- Difference in seconds
SELECT
UNIX_TIMESTAMP('2024-12-31 12:00:00') - UNIX_TIMESTAMP('2024-12-31 00:00:00') AS difference_in_seconds;
5. Working with Time Zones
Hive's built-in time zone support is limited to UTC conversions: from_utc_timestamp and to_utc_timestamp convert a timestamp to and from a named time zone. Arbitrary zone-to-zone conversions are typically handled during data ingestion, by chaining these two functions, or with custom UDFs.
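A sketch of the UTC helpers Hive does provide (Asia/Kolkata is UTC+05:30):
-- Treat a timestamp as UTC and render it in another zone
SELECT from_utc_timestamp('2024-06-04 12:00:00', 'Asia/Kolkata');  -- 2024-06-04 17:30:00
-- Treat a timestamp as local to a zone and convert it to UTC
SELECT to_utc_timestamp('2024-06-04 17:30:00', 'Asia/Kolkata');    -- 2024-06-04 12:00:00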
6. Miscellaneous Functions
Last Day of the Month:
SELECT LAST_DAY('2024-06-02') AS last_day_of_month;
First Day of the Month:
SELECT TRUNC('2024-06-02', 'MM') AS first_day_of_month;
Next Day (Find next specific weekday):
-- Find the next Monday
SELECT NEXT_DAY('2024-06-02', 'MON');
Truncate Date (to start of day, month, year, etc.):
-- Truncate to the start of the current day
SELECT TO_DATE(CURRENT_TIMESTAMP) AS start_of_day;
-- Truncate to the start of the current month (Hive's trunc supports month and year units)
SELECT TRUNC(CURRENT_DATE, 'MM') AS start_of_month;
-- Truncate to the start of the current year
SELECT TRUNC(CURRENT_DATE, 'YY') AS start_of_year;
These operations should help you with most date and time manipulations in Hive. Depending on your specific requirements, you may need to combine these functions to achieve more complex transformations.
Hive treats dates as strings by default. Ensure consistent formatting (e.g., yyyy-MM-dd) when working with dates for accurate comparisons and manipulations. Consider using functions like from_unixtime or to_date for conversions if needed.
These are some core functions for manipulating dates and times in Hive. By combining them, you can achieve various tasks like finding dates N days before/after, calculating the difference between dates, etc., within your HiveQL queries.
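For example, a date stored as a string in a non-standard format can be normalized to the yyyy-MM-dd form by round-tripping through a Unix timestamp (the MM/dd/yyyy input format here is just an assumption for illustration):
-- Parse a custom-format string and normalize it to a date
SELECT to_date(from_unixtime(unix_timestamp('06/02/2024', 'MM/dd/yyyy'))) AS normalized_date;
-- 2024-06-02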
Examples:
What is the replacement for Oracle's last_day(add_months(sysdate, -1)) in HiveQL?
To achieve the equivalent of the Oracle function last_day(add_months(sysdate, -1)) in HiveQL, you need to:
Subtract one month from the current date: Use add_months(current_date, -1).
Get the last day of that month: Use the last_day function.
Here is the equivalent HiveQL query:
SELECT last_day(add_months(current_date, -1));
This query does the following:
add_months(current_date, -1): Subtracts one month from the current date.
last_day(...): Gets the last day of the month for the resulting date.
Hive QL Date Manipulation Cheatcode
| Function Name | Description | Example Usage | Result |
|---|---|---|---|
| current_date | Returns the current date. | SELECT current_date; | 2024-06-04 (if today is June 4, 2024) |
| current_timestamp | Returns the current timestamp. | SELECT current_timestamp; | 2024-06-04 12:34:56 |
| unix_timestamp | Returns the current Unix timestamp in seconds. | SELECT unix_timestamp(); | 1654257296 (for example) |
| from_unixtime | Converts a Unix timestamp to a string in the specified format. | SELECT from_unixtime(1654257296); | 2022-06-03 11:54:56 (UTC) |
| to_date | Extracts the date part from a timestamp string. | SELECT to_date('2024-06-04 12:34:56'); | 2024-06-04 |
| year | Extracts the year from a date or timestamp. | SELECT year('2024-06-04'); | 2024 |
| month | Extracts the month from a date or timestamp. | SELECT month('2024-06-04'); | 6 |
| day | Extracts the day from a date or timestamp. | SELECT day('2024-06-04'); | 4 |
| hour | Extracts the hour from a timestamp. | SELECT hour('2024-06-04 12:34:56'); | 12 |
| minute | Extracts the minute from a timestamp. | SELECT minute('2024-06-04 12:34:56'); | 34 |
| second | Extracts the second from a timestamp. | SELECT second('2024-06-04 12:34:56'); | 56 |
| add_months | Adds a specified number of months to a date or timestamp. | SELECT add_months('2024-06-04', 2); | 2024-08-04 |
| months_between | Calculates the number of months between two dates. | SELECT months_between('2024-08-04', '2024-06-04'); | 2.0 |
| next_day | Returns the first date later than the given date that falls on the specified day of the week. | SELECT next_day('2024-06-04', 'MON'); | 2024-06-10 (next Monday from June 4, 2024) |
| trunc | Truncates a date or timestamp to the specified unit (e.g., month). | SELECT trunc('2024-06-04', 'MM'); | 2024-06-01 (start of the month) |
| date_format | Formats a date or timestamp into a string. | SELECT date_format('2024-06-04', 'yyyy-MM'); | 2024-06 |
Note: the literal '2024-06-04' used in these examples is interpreted as a date value, not a plain string.
1. How to remove the time part from Oracle dates that are in string format or string literals, such as 2024-05-01 00:00:00?
To remove the time part from Oracle dates that are in string format or string literals, such as 2024-05-01 00:00:00, you can use various string manipulation functions available in SQL. Here are several methods to achieve this:
Using TO_DATE and TO_CHAR Functions
Convert String to Date and Format: If the date is in a string format, you can first convert it to a date and then format it back to a string without the time part.
SELECT TO_CHAR(TO_DATE('2024-05-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD') AS date_without_time FROM dual;
Using SUBSTR Function
Extract Only the Date Part: Use the SUBSTR function to extract only the date part from the string.
SELECT SUBSTR('2024-05-01 00:00:00', 1, 10) AS date_without_time FROM dual;
Using TRUNC Function for Date Type
Directly Truncate Date Type (if applicable): If you have an actual date column and you want to remove the time part, you can use the TRUNC function. However, this is for columns that are already in the DATE type.
SELECT TRUNC(TO_DATE('2024-05-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) AS date_without_time FROM dual;
Example Usage in a Table
Assuming you have a table my_table with a column date_column that stores dates as strings in the format YYYY-MM-DD HH24:MI:SS, here are the SQL queries you could use:
Using TO_DATE and TO_CHAR:
SELECT TO_CHAR(TO_DATE(date_column, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD') AS date_without_time FROM my_table;
Using SUBSTR:
SELECT SUBSTR(date_column, 1, 10) AS date_without_time FROM my_table;
Using TRUNC (for actual DATE type columns):
If date_column is of DATE type, you can use TRUNC directly:
SELECT TRUNC(date_column) AS date_without_time FROM my_table;
Full Example with a Data Table
Let’s create a full example with a sample table to illustrate these methods:
Create the Table:
CREATE TABLE my_table (
  id NUMBER,
  date_column VARCHAR2(19)
);
INSERT INTO my_table (id, date_column) VALUES (1, '2024-05-01 00:00:00');
INSERT INTO my_table (id, date_column) VALUES (2, '2024-06-15 12:34:56');
Query Using Different Methods:
Using TO_DATE and TO_CHAR:
SELECT id, TO_CHAR(TO_DATE(date_column, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD') AS date_without_time FROM my_table;
Using SUBSTR:
SELECT id, SUBSTR(date_column, 1, 10) AS date_without_time FROM my_table;
Output
The output for both queries would be:
ID | DATE_WITHOUT_TIME
-----------------------
1 | 2024-05-01
2 | 2024-06-15
2. How to translate the SAS code put(datepart(olddate), MONYY7.) to PySpark SQL, where olddate is a datetime value?
To translate the SAS code put(datepart(olddate), MONYY7.) to PySpark SQL, where olddate is a datetime value, you can use the combination of PySpark SQL functions to extract the date part and format it similarly to MONYY7. in SAS.
Here is how you can achieve this:
SAS Code Explanation:-
datepart(olddate): Extracts the date part from the datetime value.
put(..., MONYY7.): Formats the extracted date in the MONYY7. format (e.g., JAN2024).
Equivalent in PySpark SQL:
Extract the date part: use the to_date function.
Format the date part: use the date_format function.
Here's a step-by-step breakdown:
Create a DataFrame with a datetime value:
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date, date_format
# Initialize SparkSession
spark = SparkSession.builder.appName("DatePartExample").getOrCreate()
# Sample data
data = [("2024-05-01 15:30:00",)]
# Create DataFrame
df = spark.createDataFrame(data, ["olddate"])
df.show(truncate=False)
Extract the Date Part and Format:
# Extract the date part and format it as month-name plus year (e.g., May2024)
df_formatted = df.withColumn("formatted_date", date_format(to_date(col("olddate")), "MMMyyyy"))
df_formatted.show(truncate=False)
Putting It All Together
Here's the full PySpark code that accomplishes the translation from SAS to PySpark SQL:
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date, date_format
# Initialize SparkSession
spark = SparkSession.builder \
    .appName("DatePartExample") \
    .getOrCreate()
# Sample data
data = [("2024-05-01 15:30:00",), ("2024-07-21 09:15:00",)]
# Create DataFrame
df = spark.createDataFrame(data, ["olddate"])
# Show initial DataFrame
df.show(truncate=False)
# Extract the date part and format it as month-name plus year
df_formatted = df.withColumn("formatted_date", date_format(to_date(col("olddate")), "MMMyyyy"))
# Show the formatted DataFrame
df_formatted.show(truncate=False)
Expected Output
+-------------------+
|olddate |
+-------------------+
|2024-05-01 15:30:00|
|2024-07-21 09:15:00|
+-------------------+
+-------------------+--------------+
|olddate |formatted_date|
+-------------------+--------------+
|2024-05-01 15:30:00|May2024 |
|2024-07-21 09:15:00|Jul2024 |
+-------------------+--------------+
In this example, to_date extracts the date part from the datetime string, and date_format formats it with the pattern MMMyyyy (lowercase y for the year), which corresponds to the MONYY7. format in SAS, except that SAS prints the month in uppercase (MAY2024 rather than May2024).
If you need the month in uppercase, or the year in two digits like MAY24, adjust the format string and wrap the expression in the upper function from pyspark.sql.functions (Column objects have no .upper() method):
from pyspark.sql.functions import upper
df_formatted = df.withColumn("formatted_date", upper(date_format(to_date(col("olddate")), "MMMyy")))
This approach covers the translation of extracting and formatting date parts from a datetime value in PySpark SQL, providing functionality similar to the SAS put(datepart(olddate), MONYY7.).
In SAS, the DATEPART() and TIMEPART() functions are used to extract the date and time parts from datetime values, respectively. Here’s how each function works:
1. DATEPART():
The DATEPART() function extracts the date part from a datetime value and returns it as a date value. This function is particularly useful when you have datetime values and need to perform date-specific operations without considering the time component.
Syntax:
DATEPART(datetime)
Example: Suppose you have a datetime variable datetime_var containing the value '01Jan2022:12:30:45', and you want to extract the date part from it. You can use the DATEPART() function as follows:
data output;
    set input;
    date_only = DATEPART(datetime_var);
run;
In this example, date_only will contain the SAS date value for 01Jan2022 (apply a format such as DATE9. to display it that way).
2. TIMEPART():
The TIMEPART() function extracts the time part from a datetime value and returns it as a time value. Similarly to DATEPART(), this function is useful when you need to perform time-specific operations without considering the date component.
Syntax:
TIMEPART(datetime)
Example: Suppose you have the same datetime variable datetime_var containing the value '01Jan2022:12:30:45', and you want to extract the time part from it. You can use the TIMEPART() function as follows:
data output;
    set input;
    time_only = TIMEPART(datetime_var);
run;
In this example, time_only will contain the SAS time value for 12:30:45 (apply a format such as TIME8. to display it that way).
3. HOUR():
The HOUR() function extracts the hour component from a datetime value and returns it as an integer value representing the hour of the day (0-23).
Syntax:
HOUR(datetime)
Example:
data output;
    set input;
    hour_of_day = HOUR(datetime_var);
run;
4. MINUTE():
The MINUTE() function extracts the minute component from a datetime value and returns it as an integer value representing the minute of the hour (0-59).
Syntax:
MINUTE(datetime)
Example:
data output;
    set input;
    minute_of_hour = MINUTE(datetime_var);
run;
5. SECOND():
The SECOND() function extracts the second component from a datetime value and returns it as an integer value representing the second of the minute (0-59).
Syntax:
SECOND(datetime)
Example:
data output;
    set input;
    second_of_minute = SECOND(datetime_var);
run;
Additional Notes:
Both DATEPART() and TIMEPART() functions are part of Base SAS.
These functions are handy when dealing with datetime values in SAS, allowing you to perform specific operations on either the date or time component separately.
The output of DATEPART() is a SAS date value, while the output of TIMEPART() is a SAS time value.
You can use the extracted date or time values for various purposes such as filtering, grouping, or calculations in your SAS programs.
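For comparison with the PySpark translation in question 2, here is a minimal sketch of the closest PySpark equivalents of these SAS functions; the session name, DataFrame, and column names are illustrative assumptions:
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_timestamp, to_date, date_format, hour, minute, second

spark = SparkSession.builder.appName("SasDatetimeParts").getOrCreate()
df = spark.createDataFrame([("2022-01-01 12:30:45",)], ["datetime_var"])
ts = to_timestamp(col("datetime_var"))

df.select(
    to_date(ts).alias("date_only"),                   # ~ DATEPART()
    date_format(ts, "HH:mm:ss").alias("time_only"),   # ~ TIMEPART() (as a string)
    hour(ts).alias("hour_of_day"),                    # ~ HOUR()
    minute(ts).alias("minute_of_hour"),               # ~ MINUTE()
    second(ts).alias("second_of_minute"),             # ~ SECOND()
).show()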
There is a simple way: you can use the calendar module in Python to create a calendar for the current year. It somewhat defeats the purpose of getting your hands dirty by writing a longer Python program, but it is worth showing first. Here's a simple Python program to achieve that.
import calendar
import datetime

def create_calendar(year):
    # Create a TextCalendar instance; start the week on Sunday
    cal = calendar.TextCalendar(calendar.SUNDAY)
    # Generate the calendar for the given year:
    # day-column width 2, 1 line per week, 1 space between month columns, 3 months per row
    calendar_text = cal.formatyear(year, 2, 1, 1, 3)
    return calendar_text

def main():
    # Get the current year
    current_year = datetime.datetime.now().year
    # Create the calendar for the current year
    calendar_text = create_calendar(current_year)
    # Display the calendar
    print(f"Calendar for the year {current_year}:\n")
    print(calendar_text)

if __name__ == "__main__":
    main()
This program creates a calendar for the current year using the TextCalendar class from the calendar module, formats it as text, and prints it to the console. The week starts on Sunday, day columns are 2 characters wide, each week takes 1 line, month columns are separated by 1 space, and 3 months are displayed per row. You can adjust these parameters based on your preferences.
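As an even shorter alternative with the standard library's default Monday-first, 3-months-per-row layout, the calendar module also provides a module-level calendar() function:
import calendar
import datetime

# calendar.calendar(year) returns the whole year as one formatted string
print(calendar.calendar(datetime.datetime.now().year))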
Creating a calendar without importing the calendar package involves implementing the logic for generating the calendar manually. Here’s a Python program that generates a calendar for the current year without using the calendar package:
import datetime

def is_leap_year(year):
    """Check if a year is a leap year."""
    if year % 4 == 0 and (year % 100 != 0 or year % 400 == 0):
        return True
    return False

def get_month_days(year, month):
    """Get the number of days in a month."""
    if month in {1, 3, 5, 7, 8, 10, 12}:
        return 31
    elif month in {4, 6, 9, 11}:
        return 30
    elif is_leap_year(year):
        return 29
    else:
        return 28

def create_calendar(year):
    """Create a calendar for the given year."""
    months = ["January", "February", "March", "April", "May", "June",
              "July", "August", "September", "October", "November", "December"]

    calendar_text = ""

    for month_num, month_name in enumerate(months, start=1):
        calendar_text += f"\n{month_name}\n"
        calendar_text += "Mo Tu We Th Fr Sa Su\n"

        # Get the first day of the month (Monday = 0)
        first_day = datetime.date(year, month_num, 1).weekday()

        # Fill the leading spaces (each day column is 3 characters wide)
        calendar_text += "   " * first_day

        # Fill in the days of the month
        for day in range(1, get_month_days(year, month_num) + 1):
            calendar_text += f"{day:2d} "
            first_day += 1
            if first_day == 7:
                calendar_text += "\n"
                first_day = 0
        calendar_text += "\n"

    return calendar_text

def main():
    # Get the current year
    current_year = datetime.datetime.now().year

    # Create the calendar for the current year
    calendar_text = create_calendar(current_year)

    # Display the calendar
    print(f"Calendar for the year {current_year}:\n")
    print(calendar_text)

if __name__ == "__main__":
    main()
This program calculates the days of the week and fills in the dates for each month manually. It also accounts for leap years to determine the number of days in February. The generated calendar is printed to the console for the current year.
In SAS, the FIRST. and LAST. automatic variables are used within a DATA step to identify the first and last occurrences of observations within a BY group. These variables are particularly useful when working with sorted data or when you need to perform specific operations on the first or last observation of a group. Here’s how you can use FIRST. and LAST. with examples:
Syntax:
FIRST.variable: Returns a value of 1 for the first observation in a BY group, and 0 otherwise.
LAST.variable: Returns a value of 1 for the last observation in a BY group, and 0 otherwise.
Example:
Suppose we have a dataset named sales with the following structure:
ID Product Revenue
1 A 100
2 A 150
3 B 200
4 B 180
5 B 220
We want to calculate the total revenue for each product and display it only for the first and last observations of each product group.
SAS Code:
DATA sales_summary;
    SET sales;
    BY Product;

    /* Initialize Total_Revenue for the first observation of each product */
    IF FIRST.Product THEN Total_Revenue = 0;

    /* Accumulate revenue for each observation */
    Total_Revenue + Revenue;

    /* Output total revenue only for the last observation of each product group */
    IF LAST.Product THEN OUTPUT;
RUN;
Explanation:
The BY Product; statement ensures that the data is processed in groups based on the Product variable.
The IF FIRST.Product THEN Total_Revenue = 0; statement initializes the Total_Revenue variable to 0 for the first observation of each product group.
The Total_Revenue + Revenue; statement accumulates the revenue for each observation within the product group.
The IF LAST.Product THEN OUTPUT; statement outputs the total revenue only for the last observation of each product group.
Output (sales_summary dataset):
Product Total_Revenue
A       250
B       600
In this example, FIRST. and LAST. are used to identify the first and last observations within each product group. The total revenue is calculated and displayed only for the last observation of each product group. This technique allows you to perform specific operations or calculations based on the position of observations within BY groups in SAS.
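As elsewhere in this document, a PySpark comparison may help. PySpark has no direct FIRST./LAST. equivalent, so the sketch below (all names are illustrative) uses a plain grouped aggregation, plus a window variant that mirrors outputting on LAST.Product:
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("FirstLastEquivalent").getOrCreate()
sales = spark.createDataFrame(
    [(1, "A", 100), (2, "A", 150), (3, "B", 200), (4, "B", 180), (5, "B", 220)],
    ["ID", "Product", "Revenue"],
)

# Simplest equivalent of the FIRST./LAST. accumulation: group and sum
sales.groupBy("Product").agg(F.sum("Revenue").alias("Total_Revenue")).show()

# Window-based version that mirrors "output on LAST.Product":
# keep only the row with the highest ID within each product group
w = Window.partitionBy("Product")
(sales.withColumn("Total_Revenue", F.sum("Revenue").over(w))
      .withColumn("max_id", F.max("ID").over(w))
      .where(F.col("ID") == F.col("max_id"))
      .select("Product", "Total_Revenue")
      .show())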