DP-600 Microsoft Fabric Analytics Engineer - Practice Test 4

Question 1 (EASY)
In T-SQL (Fabric Data Warehouse), what does the following query return? SELECT ProductID, SUM(SalesAmount) AS TotalSales FROM FactSales GROUP BY ProductID HAVING SUM(SalesAmount) > 50000 ORDER BY TotalSales DESC;
The query groups fact sales by ProductID, aggregates SalesAmount, filters groups with total over 50,000 (HAVING), and sorts the result descending. The result is a ranked list of top-performing products by total sales, showing only those exceeding the threshold. See more: SQL in Fabric
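The GROUP BY / HAVING / ORDER BY pattern can be tried out locally. A minimal sketch using Python's built-in sqlite3 (not Fabric T-SQL; the table and values are invented for illustration):

```python
import sqlite3

# In-memory database with a toy FactSales table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE FactSales (ProductID INTEGER, SalesAmount REAL)")
conn.executemany(
    "INSERT INTO FactSales VALUES (?, ?)",
    [(1, 40000), (1, 30000), (2, 20000), (2, 10000), (3, 60000)],
)

# Group, aggregate, filter groups over 50,000, sort descending.
rows = conn.execute("""
    SELECT ProductID, SUM(SalesAmount) AS TotalSales
    FROM FactSales
    GROUP BY ProductID
    HAVING SUM(SalesAmount) > 50000
    ORDER BY TotalSales DESC
""").fetchall()

print(rows)  # [(1, 70000.0), (3, 60000.0)]
```

Product 2 (total 30,000) is dropped by HAVING; the remaining groups are ranked by total sales.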
Question 2 (EASY)
What is the correct SQL to add a new column to an existing table in a Fabric Data Warehouse?
ALTER TABLE tableName ADD columnName datatype; is the standard T-SQL syntax to add a column. INSERT adds rows, not columns. UPDATE modifies existing row data. The MODIFY keyword is MySQL syntax, not T-SQL. Fabric Data Warehouse uses T-SQL. See more: SQL in Fabric
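A quick sketch of adding a column, using Python's sqlite3 rather than Fabric (SQLite accepts an optional COLUMN keyword that T-SQL omits; names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Products (ID INTEGER)")

# ALTER TABLE ... ADD [COLUMN] adds a column; in T-SQL the COLUMN keyword is omitted.
conn.execute("ALTER TABLE Products ADD COLUMN Price REAL")

# Inspect the resulting schema.
cols = [c[1] for c in conn.execute("PRAGMA table_info(Products)")]
print(cols)  # ['ID', 'Price']
```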
Question 3 (MEDIUM)
What is a running total in SQL and how is it computed?
A running total is computed using a window function: SELECT OrderDate, Amount, SUM(Amount) OVER (ORDER BY OrderDate ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningTotal FROM Sales; The ROWS UNBOUNDED PRECEDING clause accumulates all prior rows from the partition start to the current row. Without ROWS/RANGE, the default is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. See more: SQL in Fabric
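The same window-function running total works in SQLite (3.25+), so it can be sketched with Python's stdlib; dates and amounts here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Sales (OrderDate TEXT, Amount INTEGER)")
conn.executemany(
    "INSERT INTO Sales VALUES (?, ?)",
    [("2024-01-01", 100), ("2024-01-02", 50), ("2024-01-03", 200)],
)

# Running total: accumulate all rows from the frame start to the current row.
rows = conn.execute("""
    SELECT OrderDate, Amount,
           SUM(Amount) OVER (
               ORDER BY OrderDate
               ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
           ) AS RunningTotal
    FROM Sales
""").fetchall()

print(rows)
# [('2024-01-01', 100, 100), ('2024-01-02', 50, 150), ('2024-01-03', 200, 350)]
```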
Question 4 (MEDIUM)
What SQL function returns the first non-NULL value from a list of expressions?
COALESCE(expr1, expr2, ..., exprN) returns the first non-NULL expression from the list. It is ANSI-standard SQL. ISNULL(column, replacement) is SQL Server-specific and only takes two arguments. NVL is Oracle syntax. IFNULL is MySQL syntax. COALESCE is the recommended portable approach in T-SQL/Fabric SQL. See more: SQL in Fabric
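Because COALESCE is ANSI-standard, its behavior can be demonstrated in any engine; a one-liner via Python's sqlite3 (illustrative values only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# COALESCE walks the arguments left to right and returns the first non-NULL.
row = conn.execute("SELECT COALESCE(NULL, NULL, 'first non-null', 'later')").fetchone()
print(row[0])  # first non-null
```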
Question 5 (EASY)
In Fabric, what is the SQL Analytics Endpoint of a Lakehouse?
Every Fabric Lakehouse automatically provides a SQL Analytics Endpoint - a read-only T-SQL interface to the Delta tables in the Tables section. This allows BI tools, Power BI, and developers to run SELECT queries using T-SQL without needing Spark notebooks. The SQL endpoint is auto-generated; it cannot be used for DML (INSERT/UPDATE/DELETE). The Fabric Data Warehouse, by contrast, supports full DML. See more: Lakehouse & Data Warehouse
Question 6 (MEDIUM)
What is Delta Lake time travel and how do you use it in SQL?
Delta Lake maintains a transaction log (_delta_log) that records every change. Time travel lets you query a table as it existed at a past version or timestamp: SELECT * FROM sales VERSION AS OF 5; SELECT * FROM sales TIMESTAMP AS OF '2024-01-01'; This is useful for auditing, debugging incorrect data loads, or comparing snapshots. How far back you can travel is bounded by retention: Delta log retention defaults to 30 days, and data files already removed by VACUUM (default 7-day file retention) can no longer be read. See more: Lakehouse & Data Warehouse
Question 7 (EASY)
Which SQL function calculates the number of distinct values in a column?
COUNT(DISTINCT column) counts the number of unique non-null values in a column. For example: SELECT COUNT(DISTINCT CustomerID) FROM Orders returns the number of unique customers who placed orders. COUNT(*) counts all rows including duplicates and nulls. SUM(DISTINCT) sums unique values, not counts them. See more: SQL in Fabric
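A small sketch contrasting COUNT(DISTINCT col) with COUNT(*), using Python's sqlite3 (invented Orders data; note the NULL CustomerID is ignored by COUNT(DISTINCT)):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (OrderID INTEGER, CustomerID INTEGER)")
conn.executemany(
    "INSERT INTO Orders VALUES (?, ?)",
    [(1, 10), (2, 10), (3, 20), (4, None)],
)

# COUNT(DISTINCT ...) counts unique non-null values; COUNT(*) counts all rows.
distinct_customers, total_rows = conn.execute(
    "SELECT COUNT(DISTINCT CustomerID), COUNT(*) FROM Orders"
).fetchone()

print(distinct_customers, total_rows)  # 2 4
```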
Question 8 (MEDIUM)
In a Fabric notebook, how do you read a Lakehouse Delta table as a Spark DataFrame?
In a Fabric notebook with an attached Lakehouse, you can read a Delta table as a Spark DataFrame using: df = spark.read.format("delta").load("Tables/sales") or using SparkSQL: df = spark.sql("SELECT * FROM lakehouse_name.sales") The Lakehouse name and tables are automatically registered in the notebook's Spark session. You can also use the table shorthand spark.table("sales") when a default Lakehouse is set. See more: Lakehouse & Data Warehouse
Question 9 (MEDIUM)
What is the purpose of VACUUM in Delta Lake?
VACUUM tableName [RETAIN n HOURS] deletes Parquet data files that are no longer referenced by recent versions of the Delta table. By default it retains 7 days of history; files older than the retention period are deleted. Running VACUUM too aggressively (with a very short retention) prevents time travel to older snapshots. In Fabric notebooks: spark.sql("VACUUM lakehouse.mytable RETAIN 168 HOURS") See more: Lakehouse & Data Warehouse
Question 10 (EASY)
What SQL statement creates a database view?
CREATE VIEW viewname AS SELECT ... is the standard T-SQL syntax for creating a view. A view is a named SQL query stored in the database that can be queried like a table. Views in Fabric Data Warehouse are supported. In a Lakehouse SQL endpoint, you can create views over Delta tables. Fabric views cannot contain ORDER BY (unless using TOP). See more: SQL in Fabric
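The view pattern can be sketched outside Fabric too; a minimal example with Python's sqlite3 (toy table and view names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Sales (Product TEXT, Amount INTEGER)")
conn.executemany(
    "INSERT INTO Sales VALUES (?, ?)",
    [("A", 100), ("A", 50), ("B", 25)],
)

# A view is a stored, named query; it holds no data of its own.
conn.execute("""
    CREATE VIEW ProductTotals AS
    SELECT Product, SUM(Amount) AS Total
    FROM Sales
    GROUP BY Product
""")

# Query the view exactly like a table.
rows = conn.execute("SELECT * FROM ProductTotals ORDER BY Product").fetchall()
print(rows)  # [('A', 150), ('B', 25)]
```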
Question 11 (MEDIUM)
What is the SQL CASE expression used for?
CASE provides conditional logic: SELECT OrderID, CASE WHEN Amount > 1000 THEN 'Large' WHEN Amount > 100 THEN 'Medium' ELSE 'Small' END AS OrderSize FROM Orders; CASE can appear in SELECT, WHERE, ORDER BY, and HAVING clauses. It is the SQL equivalent of IF/ELSE and is fundamental for data categorization in SQL. See more: SQL in Fabric
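The same CASE categorization, sketched with Python's sqlite3 (invented order amounts; WHEN branches are evaluated top to bottom, first match wins):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (OrderID INTEGER, Amount REAL)")
conn.executemany(
    "INSERT INTO Orders VALUES (?, ?)",
    [(1, 2500), (2, 400), (3, 50)],
)

rows = conn.execute("""
    SELECT OrderID,
           CASE WHEN Amount > 1000 THEN 'Large'
                WHEN Amount > 100  THEN 'Medium'
                ELSE 'Small'
           END AS OrderSize
    FROM Orders
    ORDER BY OrderID
""").fetchall()

print(rows)  # [(1, 'Large'), (2, 'Medium'), (3, 'Small')]
```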
Question 12 (EASY)
What SQL function returns the current date and time in T-SQL (Fabric Data Warehouse)?
In T-SQL, GETDATE() returns the current date and time as datetime. SYSDATETIME() returns higher precision (datetime2). CURRENT_TIMESTAMP (no parentheses) is the ANSI equivalent of GETDATE(). NOW() is MySQL syntax. DATE() is not a T-SQL function for getting current date (you'd use CAST(GETDATE() AS DATE)). See more: SQL in Fabric
Question 13 (MEDIUM)
What does the SQL MERGE statement do?
MERGE is used for upsert operations: MERGE target AS t USING source AS s ON t.ID = s.ID WHEN MATCHED THEN UPDATE SET t.Amount = s.Amount WHEN NOT MATCHED THEN INSERT (ID, Amount) VALUES (s.ID, s.Amount) WHEN NOT MATCHED BY SOURCE THEN DELETE; MERGE is supported in the Fabric Data Warehouse. Delta Lake in notebooks uses the .merge() DataFrame API or DeltaTable.forPath().merge() syntax. See more: SQL in Fabric
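SQLite has no MERGE statement, but the matched-update / not-matched-insert halves of an upsert can be illustrated with its INSERT ... ON CONFLICT syntax (the WHEN NOT MATCHED BY SOURCE THEN DELETE branch has no upsert equivalent). A sketch with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (ID INTEGER PRIMARY KEY, Amount REAL)")
conn.executemany("INSERT INTO target VALUES (?, ?)", [(1, 100.0), (2, 200.0)])

# Source rows: ID 2 already exists (update), ID 3 is new (insert).
source = [(2, 250.0), (3, 300.0)]
conn.executemany("""
    INSERT INTO target (ID, Amount) VALUES (?, ?)
    ON CONFLICT(ID) DO UPDATE SET Amount = excluded.Amount
""", source)

rows = conn.execute("SELECT * FROM target ORDER BY ID").fetchall()
print(rows)  # [(1, 100.0), (2, 250.0), (3, 300.0)]
```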
Question 14 (MEDIUM)
What is a bridge table in a Power BI data model used for?
A bridge table (junction table) resolves a many-to-many relationship. For example, a Customer can have multiple Accounts and an Account can have multiple Customers. Instead of a direct many-to-many, a CustomerAccount bridge table has one row per customer-account pair, with one-to-many relationships on each side. Filters flow through the bridge cleanly without double-counting. See more: Semantic Models
Question 15 (EASY)
What is the purpose of the SQL LIMIT (or TOP in T-SQL) clause?
In T-SQL (used in Fabric Data Warehouse), SELECT TOP(n) returns the first n rows: SELECT TOP(10) * FROM Sales ORDER BY SalesDate DESC. KQL uses take n (or its alias limit n), while dialects such as MySQL and PostgreSQL use LIMIT n. This is used to preview data, build top-N rankings, or limit result set sizes for performance. See more: SQL in Fabric
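SQLite is one of the LIMIT dialects, so the top-N pattern can be sketched with Python's stdlib (invented sales rows; ORDER BY first, then cut to the top two):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Sales (SalesDate TEXT, Amount INTEGER)")
conn.executemany(
    "INSERT INTO Sales VALUES (?, ?)",
    [(f"2024-01-0{d}", d * 10) for d in range(1, 6)],
)

# LIMIT here plays the role of T-SQL's TOP(2).
rows = conn.execute(
    "SELECT * FROM Sales ORDER BY SalesDate DESC LIMIT 2"
).fetchall()

print(rows)  # [('2024-01-05', 50), ('2024-01-04', 40)]
```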
Question 16 (MEDIUM)
In Fabric, what storage mode is best for very large tables that need near real-time data without full imports?
Direct Lake mode reads Delta/Parquet files from OneLake directly on demand using the VertiPaq engine. It avoids the full data import bottleneck - column segments are loaded into memory as needed. Since the data lives in a Fabric Lakehouse updated by pipelines or notebooks, Power BI sees fresh data quickly. Direct Lake requires Fabric capacity (not Power BI Pro or free). See more: Semantic Models
Question 17 (MEDIUM)
What does OPTIMIZE do in Delta Lake (Fabric notebook)?
OPTIMIZE tableName consolidates many small Parquet files (created by frequent small writes or streaming) into fewer, larger files (default target 1 GB each). This significantly improves SQL and Spark read performance. Optionally, ZORDER BY column co-locates related data within files, improving data skipping. In Fabric notebooks: spark.sql("OPTIMIZE lakehouse.mytable ZORDER BY (CustomerID)") See more: Lakehouse & Data Warehouse
Question 18 (EASY)
What SQL string function returns the number of characters in a string?
LEN(string) in T-SQL returns the number of characters (excluding trailing spaces). LENGTH() is the ANSI/MySQL equivalent. CONCAT combines strings. TRIM removes leading/trailing spaces. SUBSTRING extracts a portion. In DAX the equivalent is LEN() and in KQL it is strlen(). LEN is the T-SQL function used in Fabric Data Warehouse. See more: SQL in Fabric
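A short sqlite3 sketch of string length; note that SQLite's LENGTH counts trailing spaces, whereas T-SQL's LEN ignores them, so the second result would differ in Fabric:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Plain character count.
n = conn.execute("SELECT LENGTH('Fabric')").fetchone()[0]
print(n)  # 6

# SQLite counts the trailing spaces; T-SQL LEN('Fabric  ') would still return 6.
n2 = conn.execute("SELECT LENGTH('Fabric  ')").fetchone()[0]
print(n2)  # 8
```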
Question 19 (MEDIUM)
What is a CTE (Common Table Expression) in SQL?
A CTE is defined using WITH cte_name AS (SELECT ...) and is referenced in the query that immediately follows it. CTEs improve readability by breaking complex queries into named building blocks, and recursive CTEs handle hierarchical data. Unlike a derived-table subquery, a CTE can be referenced multiple times within that query and supports recursion. Supported in Fabric Data Warehouse T-SQL. See more: SQL in Fabric
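A sketch of a CTE referenced twice in one query, using Python's sqlite3 (invented regional sales data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Sales (Region TEXT, Amount INTEGER)")
conn.executemany(
    "INSERT INTO Sales VALUES (?, ?)",
    [("East", 100), ("East", 200), ("West", 50)],
)

# The CTE 'regional' is referenced twice: in FROM and in the subquery.
rows = conn.execute("""
    WITH regional AS (
        SELECT Region, SUM(Amount) AS Total
        FROM Sales
        GROUP BY Region
    )
    SELECT r.Region, r.Total
    FROM regional r
    WHERE r.Total = (SELECT MAX(Total) FROM regional)
""").fetchall()

print(rows)  # [('East', 300)]
```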
Question 20 (EASY)
What does the Power Query "Group By" transformation do?
The "Group By" transformation in Power Query Editor (Transform ribbon) reduces rows by grouping on key columns and applying an aggregation function (Sum, Count, Min, Max, Average, etc.) to other columns. It generates M code: Table.Group(#"Prev Step", {"Product"}, {{"TotalSales", each List.Sum([Amount]), type number}}). This is the Power Query equivalent of SQL GROUP BY. See more: Dataflows & Pipelines
Question 21 (MEDIUM)
What is the Fabric visual query builder?
In the Fabric Warehouse or Lakehouse SQL Analytics Endpoint, the Visual Query Builder provides a drag-and-drop canvas where you add tables, draw join lines, apply filters, and select columns - it automatically generates the corresponding T-SQL. You can switch to the SQL view to see and edit the generated code. It is ideal for users not fluent in SQL syntax. See more: Lakehouse & Data Warehouse
Question 22 (EASY)
What does the SQL DISTINCT keyword do?
SELECT DISTINCT col1, col2 FROM table returns only unique combinations of the selected columns. If two rows have identical values for all selected columns, only one row appears. DISTINCT considers ALL selected columns together as the uniqueness key. It is different from COUNT(DISTINCT col) which counts distinct values in a single column. See more: SQL in Fabric
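A small sqlite3 sketch showing that DISTINCT de-duplicates on the full selected column combination (invented rows; only the exact ('a','x') duplicate collapses):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, col2 TEXT)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?)",
    [("a", "x"), ("a", "x"), ("a", "y"), ("b", "x")],
)

# Uniqueness key = (col1, col2) together, not each column separately.
rows = conn.execute(
    "SELECT DISTINCT col1, col2 FROM t ORDER BY col1, col2"
).fetchall()

print(rows)  # [('a', 'x'), ('a', 'y'), ('b', 'x')]
```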
Question 23 (MEDIUM)
Choose the right Fabric storage item for this scenario: a team needs to store raw CSV files, apply transformations, and make the results available via a SQL endpoint for Power BI in Direct Lake mode.
The Fabric Lakehouse is the ideal fit: (1) Files section accepts raw CSVs, (2) Dataflow Gen2 or Spark notebook transforms and writes Delta tables to the Tables section, (3) the auto-generated SQL endpoint exposes tables for T-SQL queries, (4) Power BI connects via Direct Lake mode for fast, fresh analytics. This covers the full bronze -> silver -> gold medallion architecture within one item. See more: Lakehouse & Data Warehouse
Question 24 (EASY)
What does the SQL WHERE clause do when compared to HAVING?
SQL execution order: FROM -> WHERE (row filter) -> GROUP BY -> HAVING (group filter) -> SELECT -> ORDER BY. WHERE cannot reference aggregate functions because aggregation hasn't happened yet. HAVING is evaluated after GROUP BY so it can filter on aggregate results like SUM() or COUNT(). You can use both in the same query to filter rows first, then filter groups. See more: SQL in Fabric
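Both filters in one query, sketched with Python's sqlite3 (invented orders; WHERE drops cancelled rows before grouping, HAVING drops small groups after aggregation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (Region TEXT, Status TEXT, Amount INTEGER)")
conn.executemany("INSERT INTO Orders VALUES (?, ?, ?)", [
    ("East",  "Shipped",   300),
    ("East",  "Cancelled", 900),   # removed by WHERE before grouping
    ("West",  "Shipped",   400),
    ("West",  "Shipped",   200),
    ("North", "Shipped",   100),   # group total 100 fails HAVING
])

rows = conn.execute("""
    SELECT Region, SUM(Amount) AS Total
    FROM Orders
    WHERE Status = 'Shipped'       -- row filter, before GROUP BY
    GROUP BY Region
    HAVING SUM(Amount) > 250       -- group filter, after aggregation
    ORDER BY Total DESC
""").fetchall()

print(rows)  # [('West', 600), ('East', 300)]
```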
Question 25 (MEDIUM)
In Fabric Data Warehouse, what is a cross-database query?
Fabric Data Warehouse supports cross-database queries using three-part naming: [ItemName].[SchemaName].[TableName]. For example: SELECT s.*, p.ProductName FROM [MyWarehouse].[dbo].[FactSales] s JOIN [SalesLakehouse].[dbo].[Products] p ON s.ProductID = p.ProductID This allows joining data across Lakehouses and Warehouses in the same workspace without copying data. It requires appropriate permissions on both items. See more: Lakehouse & Data Warehouse
