AZ-305 Designing Azure Infrastructure Solutions - Practice Test 1
Question 1
EASY
Your company runs several Azure App Services and wants centralized collection of application logs and performance metrics. The operations team needs to query logs and create dashboards. Which Azure service should you recommend as the primary solution?
Azure Monitor with Log Analytics workspace is the recommended centralized logging and monitoring solution in Azure. It enables collection of logs and metrics from multiple Azure resources, supports powerful Kusto Query Language (KQL) queries, and integrates with Azure Dashboards and Workbooks for visualization. Event Hubs is for streaming data ingestion, not querying. Blob Storage lacks query capabilities, and SQL Database is not designed for log analytics at scale.
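As an illustration, a KQL query against a Log Analytics workspace might look like the following. This is a sketch only: the table name (`AppServiceConsoleLogs`) and columns depend on which diagnostic settings are enabled for the App Service.

```python
# A sample KQL query for a Log Analytics workspace. The table and column
# names are illustrative; actual tables depend on enabled diagnostics.
kql_query = """
AppServiceConsoleLogs
| where TimeGenerated > ago(1h)
| where ResultDescription contains "error"
| summarize ErrorCount = count() by bin(TimeGenerated, 5m)
| order by TimeGenerated desc
""".strip()

# The query could be pasted into the Logs blade in the portal, or run
# programmatically (for example with the azure-monitor-query package).
print(kql_query.splitlines()[0])  # table the query reads from
```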
See more: Logging and Monitoring
Question 2
MEDIUM
A financial services company requires that all resource changes in their Azure environment are audited and retained for 7 years to meet regulatory requirements. They need to detect unauthorized configuration changes in near real-time. Which combination provides the best solution?
The best approach combines Log Analytics for near real-time querying and alerting on configuration changes with a Storage Account using immutable blob storage (WORM policies) for 7-year regulatory retention. Activity Log alone in storage does not enable near real-time alerting. Azure Monitor alerts on individual resources would be difficult to manage at scale. Azure Policy enforces compliance but does not provide the auditing and retention capabilities required.
See more: Logging and Monitoring
Question 3
HARD
A global enterprise runs microservices across multiple Azure regions. They need distributed tracing across services, correlation of logs from different regions, and the ability to trigger automated remediation workflows when specific failure patterns are detected across regions. Retention must be at least 2 years. What architecture should you recommend?
Workspace-based Application Insights resources connected to a centralized Log Analytics workspace provide distributed tracing with correlation across regions through shared workspace queries. Setting retention to 2 years on the workspace meets the retention requirement. Azure Monitor alert rules with action groups can trigger Logic Apps for automated remediation workflows. Separate workspaces per region would complicate cross-region correlation. A single Application Insights resource with default retention does not meet 2-year requirements. Event Grid with Functions does not provide distributed tracing.
See more: Logging and Monitoring
Question 4
MEDIUM
Your organization has both Azure AD-based applications and several legacy on-premises applications that use header-based authentication. You need a single solution that provides SSO for users accessing both types of applications. What should you recommend?
Azure AD Application Proxy supports header-based SSO for legacy on-premises applications while Azure AD handles modern cloud applications natively. This combination provides a unified identity experience. Azure AD B2C is for customer-facing applications, not internal SSO. AD FS is legacy and does not simplify the architecture. Azure Front Door is a networking service, not an identity solution.
See more: Authentication and Authorization
Question 5
EASY
A development team is building a new web application that will be hosted in Azure. Users should sign in using their corporate Microsoft 365 credentials. Which identity platform should you use?
Microsoft Entra ID (formerly Azure AD) with the Microsoft identity platform is the standard identity solution for corporate users who already have Microsoft 365 credentials. It supports OpenID Connect and OAuth 2.0 for modern authentication. Azure AD B2C is for consumer/customer-facing scenarios. AD FS is legacy and unnecessary when Azure AD is available. Key Vault stores secrets, not user identities.
See more: Authentication and Authorization
Question 6
MEDIUM
A healthcare company requires that certain API endpoints are accessible only to applications that have been granted specific permissions by an administrator, not by individual users. The API is registered in Azure AD. Which authentication flow should you recommend?
The client credentials flow is designed for service-to-service (daemon) applications that access resources using application permissions granted by an administrator, rather than delegated user permissions. Authorization code flow with PKCE is for user-interactive scenarios. Implicit grant flow is deprecated for most scenarios. Device code flow is for devices without a browser.
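The shape of a client credentials token request against the Microsoft identity platform can be sketched with the standard library. This is a minimal sketch: the tenant ID, client ID, secret, and scope below are placeholders, and the request body is built but never sent.

```python
from urllib.parse import urlencode

# Placeholder tenant -- real values come from the app registration.
tenant_id = "00000000-0000-0000-0000-000000000000"
token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"

# Client credentials flow: the app authenticates as itself, using
# application permissions that an administrator has consented to.
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "<app-client-id>",              # application (client) ID
    "client_secret": "<app-client-secret>",      # or a certificate assertion
    "scope": "api://my-healthcare-api/.default", # all granted app permissions
})

print("POST", token_url)
```

Note there is no user interaction anywhere in the flow: the token carries application roles, not delegated user permissions.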
See more: Authentication and Authorization
Question 7
HARD
A multinational bank is building a customer-facing portal. Customers from different countries should be able to sign in using local identity providers (Google, Facebook, or national ID systems). The bank requires custom branding per region, step-up authentication for high-value transactions, and progressive profiling to collect user details over time. Which solution should you design?
Azure AD B2C with custom policies (Identity Experience Framework) is the correct choice for customer-facing (CIAM) scenarios requiring federation with social and national identity providers, per-region custom branding, step-up authentication for high-risk transactions, and progressive profiling to collect user attributes over multiple sessions. Azure AD is for workforce identities, not customer-facing scenarios at this scale. Guest accounts do not support custom branding or progressive profiling. AD FS is on-premises and legacy.
See more: Design Authentication
Question 8
MEDIUM
Your organization is implementing a Zero Trust authentication strategy for Azure resources. You need to ensure that users accessing sensitive applications must meet specific conditions before being granted access. Which two components should you configure together to enforce this?
Select all that apply (multiple correct answers)
In a Zero Trust authentication model, Conditional Access policies are the policy engine that evaluates conditions (location, device compliance, risk level) before granting access, and MFA enforcement adds explicit verification. Together, they ensure that users must prove their identity and meet compliance conditions. NSGs are network-layer controls, not authentication controls. DDoS Protection is for availability, not identity verification.
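A Conditional Access policy combining these controls can be sketched in the shape used by the Microsoft Graph API. The group and application IDs below are placeholders.

```python
# Conditional Access policy sketch: require MFA *and* a compliant device
# for a sensitive application. IDs are placeholders.
ca_policy = {
    "displayName": "Require MFA and compliant device for Finance app",
    "state": "enabled",
    "conditions": {
        "users": {"includeGroups": ["<finance-users-group-id>"]},
        "applications": {"includeApplications": ["<finance-app-id>"]},
    },
    "grantControls": {
        "operator": "AND",  # every listed control must be satisfied
        "builtInControls": ["mfa", "compliantDevice"],
    },
}

print(ca_policy["grantControls"]["builtInControls"])
```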
See more: Design Authentication
Question 9
MEDIUM
A company is deploying an Azure Kubernetes Service (AKS) cluster. Developers should authenticate to the cluster using their corporate Azure AD identities, and access should be tied to Kubernetes RBAC roles. What should you configure?
AKS-managed Azure AD integration enables developers to authenticate using their Azure AD credentials. Enabling Azure RBAC for Kubernetes authorization allows you to map Azure AD groups to Kubernetes RBAC roles, providing unified access management. Local accounts are less secure and harder to manage. Shared certificates do not support per-user access control. SSH access to nodes is for administration, not application-level RBAC.
See more: Design Authentication
Question 10
EASY
Your organization wants to ensure that only users in the Security Admins group can manage Azure Key Vault secrets, while application developers can only read secrets. What should you use?
Azure RBAC (Role-Based Access Control) is the recommended way to control access to Key Vault data plane operations. You assign the Key Vault Secrets Officer role to Security Admins for full management, and Key Vault Secrets User role to developers for read-only access. Azure Policy enforces governance rules but is not for fine-grained data access control. NSGs are network-level controls. SAS tokens are for Azure Storage, not Key Vault.
See more: Design Authorization
Question 11
MEDIUM
A company has multiple Azure subscriptions organized under a management group hierarchy. They need to allow the Network team to manage virtual networks across all subscriptions, but this team should not be able to create or manage virtual machines or other compute resources. What is the best approach?
Assigning the Network Contributor role at the management group level gives the Network team permissions to manage virtual networks and related networking resources across all subscriptions in the hierarchy, without granting compute resource permissions. The Contributor role is too broad. Assigning per subscription is viable but not the most efficient approach. Azure AD custom roles apply to Azure AD operations, not Azure resource management (you would need a custom Azure role instead, but the built-in Network Contributor already fits).
See more: Design Authorization
Question 12
HARD
An enterprise is implementing Privileged Identity Management (PIM) for their Azure environment. They require that Global Administrator role activations require approval from two separate approvers, include a business justification, are limited to 4 hours, and trigger alerts to the security team. How should you configure PIM?
PIM role settings allow you to configure all the specified requirements directly: require approval for activation with designated approvers, require justification, set maximum activation duration to 4 hours, and configure notification recipients for the security team. Default PIM settings do not include approval or justification by default. Conditional Access controls sign-in conditions, not role activation. Access reviews audit existing assignments but do not control the activation process.
See more: Design Authorization
Question 13
MEDIUM
Your company is expanding to multiple Azure subscriptions. Management requires that all resources must have a CostCenter and Environment tag, and no public IP addresses should be created in production subscriptions. How should you implement this governance?
Azure Policy with Deny effect is the correct governance tool for enforcing mandatory tags and restricting resource types. A policy requiring tags will block resource creation if the CostCenter or Environment tag is missing. A separate policy denying Public IP resource type on production subscriptions prevents public IP creation. Automation runbooks are reactive, not preventive. Blueprints are for environment setup, not ongoing enforcement. ARM templates do not prevent creation through other methods like the portal.
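The required-tag rule can be sketched as an Azure Policy definition; here the policy JSON is shown as a Python dict. This is a simplified sketch of the built-in "Require a tag on resources" pattern, covering one tag.

```python
import json

# Simplified policy rule: deny creation of any resource that is missing a
# CostCenter tag. A second rule (or a parameterized tag name) covers
# Environment; a similar rule denying type Microsoft.Network/publicIPAddresses
# blocks public IPs in production.
policy_rule = {
    "if": {
        "field": "tags['CostCenter']",
        "exists": "false",
    },
    "then": {
        "effect": "deny",
    },
}

print(json.dumps(policy_rule, indent=2))
```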
See more: Design Governance
Question 14
MEDIUM
A company with 20 Azure subscriptions needs to ensure that all subscriptions comply with a standard set of security policies, RBAC assignments, and resource groups. When new subscriptions are created, they must automatically inherit these configurations. What should you use?
Management groups provide a hierarchy above subscriptions where Azure Policy assignments are automatically inherited by all child subscriptions, including new ones. Azure Blueprints provide subscription-level scaffolding (resource groups, RBAC, policies) that can be assigned at the management group level. DevOps pipelines require manual or triggered execution for each new subscription. Azure Lighthouse is for cross-tenant management. ARM templates need explicit deployment.
See more: Design Governance
Question 15
EASY
Your organization needs to restrict Azure resource deployments to only the West Europe and North Europe regions. What is the simplest way to enforce this?
The built-in "Allowed locations" Azure Policy restricts where resources can be deployed. Assigning it with West Europe and North Europe as allowed values will deny resource creation in any other region. Network rules control traffic, not resource deployment locations. RBAC does not control region restrictions. You cannot remove regions from a subscription.
See more: Design Governance
Question 16
MEDIUM
A retail company has a globally distributed application. They need a database solution that supports multi-region writes, automatic failover, single-digit millisecond latency for reads, and multiple consistency levels. The data model is document-based with frequent schema changes. What should you recommend?
Azure Cosmos DB is the only Azure service that natively supports multi-region writes, guarantees single-digit millisecond latency, offers five well-defined consistency levels, and handles schema-flexible document data with the NoSQL (formerly SQL) API. Azure SQL Database supports geo-replication but only single-region writes. PostgreSQL read replicas are read-only in secondary regions. Table Storage does not provide multi-region writes or tunable consistency.
See more: Data Management Strategy
Question 17
HARD
An enterprise runs a transactional system in Azure SQL Database that requires near-zero RPO and RTO under 1 hour for regional disaster recovery. They also need readable secondaries for reporting workloads that do not impact production performance. The solution must support automatic failover without application connection string changes. What should you design?
Auto-failover groups provide automatic failover with near-zero RPO using asynchronous replication, and they expose read-write and read-only listener endpoints that redirect connections automatically during failover, so application connection strings do not need to change. The read-only listener endpoint directs reporting traffic to the secondary, offloading the primary. Active geo-replication alone requires manual failover and application connection string updates. Point-in-time restore has much higher RPO and RTO. Log shipping is not a native Azure SQL Database feature.
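The two listener endpoints can be illustrated as connection strings. This is a sketch: the failover group name `fog-trading` and database name are hypothetical.

```python
fog = "fog-trading"  # hypothetical failover group name

# Read-write listener: always resolves to the current primary, so the
# application connection string survives a failover unchanged.
primary = (
    f"Server=tcp:{fog}.database.windows.net,1433;"
    "Database=orders;Authentication=Active Directory Default;"
)

# Read-only listener: resolves to a readable secondary; ApplicationIntent
# routes reporting traffic away from the primary replica.
reporting = (
    f"Server=tcp:{fog}.secondary.database.windows.net,1433;"
    "Database=orders;ApplicationIntent=ReadOnly;"
    "Authentication=Active Directory Default;"
)

print(primary)
```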
See more: Data Management Strategy
Question 18
MEDIUM
A data analytics team needs to process large volumes of semi-structured log data (JSON format, 5 TB daily) from IoT devices. They need to run ad-hoc analytical queries and join this data with structured data in Azure SQL Database. Data should be stored cost-effectively. What should you recommend?
Azure Data Lake Storage Gen2 provides cost-effective storage for large volumes of semi-structured data. Azure Synapse Analytics serverless SQL pools allow ad-hoc T-SQL queries directly against files in the data lake without provisioning infrastructure, and support external tables or OPENROWSET to join with data in Azure SQL Database. Importing 5 TB daily into SQL Database is cost-prohibitive. Cosmos DB is not optimized for large-scale batch analytics. HDInsight adds cluster management overhead and does not natively support T-SQL joins with SQL Database.
See more: Data Management Strategy
Question 19
MEDIUM
A company stores sensitive customer PII data in Azure SQL Database. They need to ensure that database administrators can manage the database but cannot view the actual values in sensitive columns such as Social Security numbers and credit card numbers. What feature should you implement?
Always Encrypted ensures that sensitive data is encrypted at the column level and the encryption keys are only accessible to the application, not DBAs. This means DBAs can perform their management tasks but never see plaintext values. TDE encrypts data at rest but DBAs with query access can still see plaintext data. Dynamic Data Masking can be bypassed by users with UNMASK permission and does not encrypt the underlying data. Row-Level Security controls which rows are visible, not which column values.
See more: Data Protection Strategy
Question 20
EASY
Your company stores application secrets (API keys, database connection strings) in configuration files. You need to move these secrets to a centralized, secure location. Which Azure service should you use?
Azure Key Vault is the dedicated Azure service for securely storing and managing secrets, encryption keys, and certificates. It provides access control through RBAC, audit logging, and HSM-backed key protection. App Configuration is for feature flags and non-sensitive settings. Blob Storage is for general file storage. Azure AD manages identities, not application secrets.
See more: Data Protection Strategy
Question 21
MEDIUM
A company needs to encrypt data in Azure Blob Storage using keys that they fully control, including the ability to rotate, revoke, and audit key usage. The keys must be stored in a FIPS 140-2 Level 3 validated hardware module. What should you recommend?
Customer-managed keys (CMK) stored in Azure Key Vault Managed HSM provide FIPS 140-2 Level 3 validated key storage with full customer control over rotation, revocation, and auditing. The Standard tier of Key Vault only provides FIPS 140-2 Level 1 (software-based) protection. Microsoft-managed keys do not give customers control. Client-side encryption shifts key management to the application, which may not meet the HSM requirement.
See more: Data Protection Strategy
Question 22
MEDIUM
Your company is building a monitoring solution for an Azure-based e-commerce platform. The platform team needs to be alerted when the order processing Azure Function has an error rate above 5% or when average response time exceeds 3 seconds. Which two components should you configure?
Select all that apply (multiple correct answers)
Application Insights collects telemetry from Azure Functions including error rates and response times, and you can create alert rules with specific thresholds (error rate above 5%, response time above 3 seconds). Azure Monitor action groups define the notification channels (email, SMS, webhook) and are attached to alert rules to notify the platform team. Service Health alerts are for Azure platform-level issues, not application metrics. Advisor provides optimization recommendations, not real-time alerting.
See more: Monitoring Data Platform
Question 23
HARD
A large organization operates a hybrid environment with on-premises servers, Azure VMs, and multi-cloud workloads in AWS. They need a single pane of glass for security posture management, threat detection, and regulatory compliance scoring across all environments. The solution must provide actionable recommendations. What should you deploy?
Microsoft Defender for Cloud provides unified cloud security posture management (CSPM) with a secure score, regulatory compliance dashboards, and threat detection across Azure, on-premises (via Azure Arc), and multi-cloud environments (via native AWS and GCP connectors). It provides actionable security recommendations. Azure Monitor is for operational monitoring, not security posture. Sentinel is a SIEM/SOAR but not a CSPM tool. Azure Policy only covers Azure resources and does not provide threat detection.
See more: Monitoring Data Platform
Question 24
MEDIUM
A SaaS company wants to provide their customers with insights into application performance. They need to track custom business metrics such as "orders per minute" and "payment failures" alongside standard infrastructure metrics. Dashboards should be shareable with stakeholders. What should you use?
Application Insights supports custom metrics (trackMetric) and custom events (trackEvent) alongside standard performance metrics. Azure Workbooks provides rich interactive reports that combine metrics, logs, and text, and these can be shared through Azure Dashboards with fine-grained access control. Data Explorer and Power BI add complexity. Event Hubs with SQL and Blob Storage with Power BI require significant custom development for what Application Insights provides natively.
See more: Monitoring Data Platform
Question 25
EASY
Your company hosts a critical web application in the East US region. They need to ensure the application can recover to another Azure region if East US becomes unavailable. What Azure service should you use to replicate Azure VMs to a secondary region?
Azure Site Recovery (ASR) provides disaster recovery for Azure VMs by continuously replicating them to a secondary Azure region. During a regional outage, you can fail over to the secondary region with minimal data loss. Azure Backup is for data backup and restore, not live replication. Traffic Manager routes traffic but does not replicate VMs. Load Balancer distributes traffic within a region.
See more: Site Recovery Strategy
Question 26
MEDIUM
A company runs a three-tier application on Azure VMs. The business requires an RPO of less than 15 minutes and RTO of less than 2 hours for disaster recovery. They also need to test the DR plan quarterly without impacting production. What should you design?
Azure Site Recovery provides continuous replication with RPO typically under 15 minutes and supports recovery plans that orchestrate multi-tier failover within the RTO target. Test failover creates VMs in an isolated network so you can validate the DR plan without affecting production. GRS backups have much higher RPO and RTO. Snapshots every 15 minutes are not practical at scale. Active-active is over-engineering for a DR scenario with 2-hour RTO tolerance.
See more: Site Recovery Strategy
Question 27
MEDIUM
A manufacturing company is migrating on-premises VMware workloads to Azure. During the migration, they need to maintain business continuity by keeping the on-premises workloads protected with disaster recovery to Azure. Once migrated, the DR target changes to a secondary Azure region. Which approach should you recommend?
Azure Site Recovery supports both VMware-to-Azure and Azure-to-Azure replication scenarios. During migration, ASR protects on-premises VMware VMs with DR to Azure. After the workloads are migrated to Azure VMs, you reconfigure ASR to replicate Azure-to-Azure for cross-region DR. Azure Backup does not provide near-real-time replication. Manual export/import is slow, and availability sets provide only in-region availability. Third-party solutions add licensing cost and complexity.
See more: Site Recovery Strategy
Question 28
HARD
A financial trading platform requires 99.99% availability for its order matching engine. The system processes thousands of transactions per second and any downtime results in significant financial loss. The solution must survive a complete Azure region failure with minimal data loss. What architecture should you recommend?
For 99.99% availability that survives a complete region failure, an active-active multi-region architecture is required. Azure Front Door provides global load balancing with automatic failover. Zone-redundant resources in each region protect against datacenter failures. Cosmos DB with multi-region writes ensures data availability and minimal data loss during region failures. A single-region deployment, even with availability zones, provides at most 99.95-99.99% and cannot survive a full region outage. ASR introduces failover time that would violate the 99.99% SLA. VMSS does not automatically coordinate cross-region deployment.
See more: High Availability
Question 29
MEDIUM
Your company deploys an application across three Azure Availability Zones in a single region. The application uses Azure SQL Database and Azure App Service. You need to ensure maximum availability for the SQL Database tier. Which deployment option provides zone-redundant database availability?
Azure SQL Database Premium and Business Critical service tiers support zone-redundant configuration, which distributes database replicas across multiple availability zones within a region. This provides a 99.995% SLA. The Basic tier does not support zone redundancy. General Purpose with locally redundant storage keeps all replicas in a single zone. Hyperscale with a single replica does not provide zone-redundant high availability.
See more: High Availability
Question 30
EASY
You are designing an Azure App Service deployment. The application must remain available even if a single datacenter in the region fails. Which feature should you enable?
Zone redundancy on the App Service plan distributes instances across multiple Availability Zones (separate datacenters) within a region. If one datacenter fails, the instances in other zones continue serving traffic. Auto-scaling in the same datacenter does not protect against datacenter failure. Deployment slots are for release management. CDN caches content at edge locations but does not provide compute high availability.
See more: High Availability
Question 31
MEDIUM
A media company generates 10 TB of video files monthly. After 90 days, files are rarely accessed but must be retrievable within 1 hour. After 1 year, files must be retained for 7 more years for legal compliance with retrieval within 15 hours acceptable. What storage strategy minimizes cost?
Lifecycle management policies automate tier transitions: Hot for the first 90 days (frequent access), Cool from 90 days to 1 year (infrequent access, retrieval within hours), and Archive after 1 year (rare access, retrieval within 15 hours meets the rehydration time). Immutable storage policies with time-based retention ensure legal compliance for the full 8-year period. Hot tier for all data is expensive. Delete and re-upload is unnecessary with lifecycle policies. Azure Files Premium is cost-prohibitive for archival.
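The tier transitions can be sketched as a lifecycle management rule; the policy JSON is shown here as a Python dict, and the container prefix and rule name are hypothetical.

```python
import json

# Lifecycle rule: Hot -> Cool after 90 days, Cool -> Archive after 365 days.
# Days are measured from last modification; names are illustrative.
lifecycle_policy = {
    "rules": [
        {
            "enabled": True,
            "name": "tier-video-files",
            "type": "Lifecycle",
            "definition": {
                "filters": {
                    "blobTypes": ["blockBlob"],
                    "prefixMatch": ["video/"],  # hypothetical container path
                },
                "actions": {
                    "baseBlob": {
                        "tierToCool": {"daysAfterModificationGreaterThan": 90},
                        "tierToArchive": {"daysAfterModificationGreaterThan": 365},
                    }
                },
            },
        }
    ]
}

print(json.dumps(lifecycle_policy, indent=2))
```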
See more: Data Archiving Strategy
Question 32
MEDIUM
A pharmaceutical company needs to retain clinical trial data for regulatory audit purposes. Once written, the data must not be modified or deleted for a minimum of 10 years, even by administrators. What Azure feature satisfies this requirement?
Azure Blob Storage immutable storage with a time-based retention policy in locked mode creates a WORM (Write Once, Read Many) state. Once locked, the retention interval cannot be shortened and data cannot be modified or deleted until the retention period expires, even by storage account administrators or Microsoft support. Soft delete allows recovery of deleted data but does not prevent deletion. RBAC deny assignments and Azure Policy can be modified by privileged users.
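The time-based retention setting can be sketched with the immutability policy properties used by the Storage resource provider (shown as a Python dict; 10 years is approximated as 3,650 days).

```python
# Immutability (WORM) policy on a blob container. Once the state is
# "Locked", the retention period can be extended but never shortened,
# and blobs cannot be modified or deleted until it expires.
immutability_policy = {
    "properties": {
        "immutabilityPeriodSinceCreationInDays": 3650,  # ~10 years
        "state": "Locked",  # "Unlocked" allows testing before committing
    }
}

print(immutability_policy["properties"]["state"])
```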
See more: Data Archiving Strategy
Question 33
HARD
A government agency needs to archive petabytes of classified documents. The archive must support legal hold for ongoing investigations (preventing deletion regardless of retention policies), versioning for audit trails, and data must remain in a specific geographic region. Access latency for retrieval of up to 48 hours is acceptable. What architecture should you recommend?
Azure Blob Storage Archive tier provides the lowest-cost storage for petabyte-scale data with acceptable retrieval latency (up to 15 hours for standard rehydration). LRS (Locally Redundant Storage) keeps data within a single region. Blob versioning provides an audit trail. Immutable storage legal hold policies prevent deletion during active investigations regardless of other retention policies. Azure Policy enforces regional deployment. Data Lake Gen2 does not support Archive tier. Managed Disks are not designed for document archival. SQL Database is not cost-effective for petabyte-scale document storage.
See more: Data Archiving Strategy
Question 34
MEDIUM
A development team wants to deploy application updates to Azure App Service with zero downtime. They need to validate the new version with a subset of production traffic before completing the rollout. If issues are detected, they want instant rollback. What deployment strategy should you recommend?
Deployment slots allow you to deploy to a staging slot, route a percentage of production traffic to it for validation (canary testing), and then swap slots for a zero-downtime cutover. If issues are found, you can swap back instantly. Stopping the service causes downtime. Manual approval gates do not validate with live traffic. Creating a new App Service and updating DNS involves DNS propagation delays and is not instant.
See more: Design Deployments
Question 35
EASY
Your team is adopting Infrastructure as Code (IaC) for Azure deployments. Which two tools are natively supported by Azure for declarative infrastructure deployment?
Select all that apply (multiple correct answers)
ARM templates (JSON) and Bicep templates are Azure's native declarative Infrastructure as Code tools. They define the desired state of infrastructure and Azure Resource Manager handles the deployment. Bicep is a domain-specific language that compiles to ARM template JSON with a simpler syntax. Azure CLI and Azure PowerShell are imperative scripting tools that execute commands step by step, not declarative IaC.
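A minimal ARM template (the JSON that Bicep compiles to) can be sketched as a Python dict; the storage account name below is a placeholder.

```python
import json

# Declarative IaC: the template describes the desired end state, and Azure
# Resource Manager computes and applies the changes needed to reach it.
arm_template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2023-01-01",
            "name": "stcontosodemo001",  # placeholder account name
            "location": "westeurope",
            "sku": {"name": "Standard_LRS"},
            "kind": "StorageV2",
        }
    ],
}

print(json.dumps(arm_template["resources"][0]["type"]))
```

The equivalent Bicep is a single `resource` declaration with the same type, API version, and properties, in a terser syntax.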
See more: Design Deployments
Question 36
MEDIUM
An enterprise uses Azure Kubernetes Service (AKS) for their microservices platform. They want to implement a deployment strategy where a new version of a service is gradually rolled out, replacing instances of the old version one at a time. If a health check fails, the rollout should automatically stop. What should you configure?
Kubernetes RollingUpdate strategy incrementally replaces old pods with new ones. Combined with readiness probes, Kubernetes ensures new pods are healthy before continuing the rollout. If a readiness probe fails, the rollout pauses automatically. maxUnavailable and maxSurge control the pace. Recreate strategy causes downtime by terminating all old pods first. Manual approval is not automatic. Two clusters for blue-green is over-engineered for this scenario.
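The relevant Deployment settings can be sketched as follows (the manifest is shown as a Python dict for illustration; the service name, image, and probe path are hypothetical).

```python
# Rolling update: replace old pods gradually; readiness probes gate progress.
deployment_spec = {
    "replicas": 6,
    "strategy": {
        "type": "RollingUpdate",
        "rollingUpdate": {
            "maxUnavailable": 1,  # at most one pod down at a time
            "maxSurge": 1,        # at most one extra pod during the rollout
        },
    },
    "template": {
        "spec": {
            "containers": [{
                "name": "orders-svc",                    # hypothetical service
                "image": "registry.example/orders:v2",   # new version
                "readinessProbe": {  # a failing probe stalls the rollout
                    "httpGet": {"path": "/healthz", "port": 8080},
                    "periodSeconds": 5,
                },
            }]
        }
    },
}

print(deployment_spec["strategy"]["type"])
```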
See more: Design Deployments
Question 37
MEDIUM
A company is migrating 200 on-premises SQL Server databases to Azure. The databases range from 10 GB to 2 TB. They need to assess compatibility, choose the right Azure SQL target (SQL Database, Managed Instance, or SQL on VM), and estimate costs. Which tool should they use first?
Azure Migrate with Azure SQL Assessment discovers SQL Server instances, assesses compatibility with each Azure SQL target, recommends the best-fit Azure SQL deployment option, identifies migration blockers, and provides cost estimates. This should be the first step before migration. Database Migration Service performs the actual migration but does not help with target selection and assessment. SSMS backup/restore is a manual migration method. The Pricing Calculator does not assess compatibility.
See more: Design Migrations
Question 38
HARD
An enterprise is migrating a legacy monolithic Java application running on WebLogic Server to Azure. The application uses JMS queues, JDBC connection pooling, JNDI lookups, and has complex clustering requirements. The team has limited time and wants minimal code changes. After migration, they plan to gradually modernize. What target should you recommend?
For a legacy monolithic Java application with deep WebLogic dependencies (JMS, JDBC pooling, JNDI, clustering), the lift-and-shift approach using WebLogic on Azure VMs from the Marketplace is the fastest path with minimal code changes. Azure provides pre-configured WebLogic Marketplace images that simplify the setup. This enables a phased modernization approach. Refactoring to microservices requires significant time and effort. App Service does not support WebLogic-specific features. Azure Functions is for event-driven workloads, not monolithic applications.
See more: Design Migrations
Question 39
MEDIUM
A company needs to migrate a large on-premises data warehouse (50 TB) to Azure with minimal downtime. The existing warehouse uses complex stored procedures and ETL jobs. They want to modernize to a cloud-native analytics platform. What migration approach should you recommend?
Azure Synapse Analytics dedicated SQL pool is the cloud-native replacement for on-premises data warehouses, supporting T-SQL, stored procedures, and massive parallel processing. Azure Data Factory handles the large-scale data movement with built-in connectors to common on-premises data warehouse systems. Stored procedures can be converted with some modifications. Azure SQL Database is not designed for 50 TB data warehouse workloads. Blob Storage with Databricks loses the SQL-based stored procedure ecosystem. Cosmos DB is not a data warehouse platform.
See more: Design Migrations
Question 40
EASY
Your company needs to expose internal APIs to external partners securely. You need rate limiting, API key management, and a developer portal for partners. Which Azure service should you recommend?
Azure API Management (APIM) provides a full API gateway with rate limiting (throttling policies), subscription key management for API access, and a built-in developer portal where partners can discover, test, and subscribe to APIs. Application Gateway is a Layer 7 load balancer without API management capabilities. Front Door is a global CDN and load balancer. Load Balancer operates at Layer 4 and does not understand HTTP/API traffic.
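Rate limiting in APIM is expressed as a policy. A minimal inbound policy fragment (the call count and renewal period are illustrative values) throttles each subscription to 100 calls per 60 seconds:

```xml
<!-- Illustrative APIM inbound policy: throttle each subscription
     to 100 calls per 60-second window. -->
<inbound>
    <base />
    <rate-limit calls="100" renewal-period="60" />
</inbound>
```

Subscription key enforcement and the developer portal are configured on the API and product level rather than in this policy fragment.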
See more: API Integration Strategy
Question 41
MEDIUM
A company has multiple backend microservices that need to communicate asynchronously. Order events from the e-commerce frontend must be processed by the inventory, shipping, and notification services. Each service needs its own copy of the message. What messaging pattern and Azure service should you use?
Azure Service Bus Topics with subscriptions implement the publish-subscribe pattern where a single message published to a topic is delivered to multiple subscribers (inventory, shipping, notification services). Each subscription receives its own copy. Queue Storage delivers each message to a single consumer. Event Hubs is optimized for high-throughput telemetry streaming, not enterprise messaging patterns. Direct HTTP calls create tight coupling and do not provide message persistence or pub-sub capabilities.
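The topic-plus-subscriptions shape can be sketched with the Azure CLI (resource group, namespace, and topic names are placeholders); each subscription independently receives a copy of every message published to the topic:

```shell
# Sketch: one topic with a subscription per downstream service.
az servicebus topic create \
  --resource-group rg-commerce --namespace-name sb-orders-ns --name order-events

for svc in inventory shipping notification; do
  az servicebus topic subscription create \
    --resource-group rg-commerce --namespace-name sb-orders-ns \
    --topic-name order-events --name "$svc"
done
```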
See more: API Integration Strategy
Question 42
MEDIUM
Your company needs to integrate Azure services with a third-party SaaS platform. When a new blob is uploaded to Azure Storage, a workflow should be triggered that sends the file metadata to the SaaS platform via webhook over HTTPS. The solution should be serverless with no infrastructure management. What should you use?
Azure Event Grid natively integrates with Azure Blob Storage to publish events on blob creation. An Azure Function triggered by the Event Grid event processes the file metadata and sends it via HTTPS webhook to the SaaS platform. Both Event Grid and Azure Functions are serverless. Data Factory is for data movement and ETL, not event-driven workflows. A VM adds infrastructure management overhead. Service Bus requires a listener application, adding complexity.
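Wiring the storage account to the function can be sketched as an Event Grid event subscription (all resource names and the `<sub-id>` placeholder are illustrative):

```shell
# Sketch: subscribe an existing Azure Function to BlobCreated events
# from a storage account. Names are placeholders.
STORAGE_ID=$(az storage account show \
  --resource-group rg-media --name stmediauploads --query id -o tsv)

az eventgrid event-subscription create \
  --name blob-created-to-function \
  --source-resource-id "$STORAGE_ID" \
  --included-event-types Microsoft.Storage.BlobCreated \
  --endpoint-type azurefunction \
  --endpoint "/subscriptions/<sub-id>/resourceGroups/rg-media/providers/Microsoft.Web/sites/func-metadata/functions/OnBlobCreated"
```

The function itself would then make the outbound HTTPS webhook call to the SaaS platform.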
See more: API Integration Strategy
Question 43
HARD
A streaming media company needs to store and serve 500 TB of video content globally. Content is write-once, read-many with high throughput requirements. They need the lowest possible storage cost for frequently accessed content while maintaining high availability across regions. Access must be anonymous for public content. What storage architecture should you design?
Azure Blob Storage Hot tier provides cost-effective storage for frequently accessed large-scale data. RA-GRS (Read-Access Geo-Redundant Storage) provides high availability with read access from the secondary region. Azure CDN caches content at global edge locations for low-latency delivery worldwide. Anonymous blob access supports public content scenarios. Azure Files Premium is designed for file shares, not large-scale blob content. Data Lake Gen2 adds cost from hierarchical namespace without benefit for video serving. Managed Disks are for VM storage, not content distribution.
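The storage side of this design can be sketched with the Azure CLI (account and container names are placeholders): an RA-GRS StorageV2 account in the Hot tier, with a container permitting anonymous read access to its blobs:

```shell
# Sketch: RA-GRS account with Hot access tier and a container
# that allows anonymous blob reads. Names are placeholders.
az storage account create \
  --resource-group rg-media --name stvideoglobal \
  --sku Standard_RAGRS --kind StorageV2 \
  --access-tier Hot --allow-blob-public-access true

az storage container create \
  --account-name stvideoglobal --name videos --public-access blob
```

Azure CDN (or Front Door) would then be pointed at the blob endpoint as its origin.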
See more: Storage Strategy
Question 44
MEDIUM
An enterprise needs a shared file system accessible from multiple Azure VMs running Windows. The file share must support SMB protocol, Active Directory authentication, and provide at least 100 MB/s throughput. Data must be encrypted at rest and in transit. What should you recommend?
Azure Files Premium provides SMB file shares with SSD-backed performance exceeding 100 MB/s throughput. It supports identity-based authentication through Azure AD Domain Services or on-premises Active Directory, and provides encryption at rest (SSE) and in transit (SMB encryption). An NFS share cannot be mounted by Windows clients that require SMB. Azure NetApp Files Ultra tier would work but is more expensive and complex than needed. Shared disks require cluster-aware file systems and do not support SMB natively.
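Provisioning a premium share can be sketched as follows (names and quota are placeholders; premium file share throughput scales with the provisioned share size, so the quota would be sized to hit the 100 MB/s target):

```shell
# Sketch: premium (SSD-backed) file share on a FileStorage account.
# Account name, share name, and quota are illustrative.
az storage account create \
  --resource-group rg-shared --name stfilespremium \
  --sku Premium_LRS --kind FileStorage

az storage share-rm create \
  --resource-group rg-shared --storage-account stfilespremium \
  --name corpdata --quota 1024
```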
See more: Storage Strategy
Question 45
EASY
Your team is designing a storage solution for an Azure-based application. Which two Azure storage redundancy options protect against a regional outage?
Select all that apply (multiple correct answers)
GRS and GZRS replicate data to a secondary region, providing protection against a complete regional outage. GRS replicates three copies in the primary region (LRS) plus three copies in the secondary region. GZRS combines zone-redundant storage in the primary region (ZRS) with geo-replication to the secondary region. LRS keeps all copies in a single datacenter. ZRS distributes across availability zones within one region but does not protect against regional failures.
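The redundancy option is simply the storage account SKU; only the geo-redundant SKUs survive a regional outage (account names and resource group below are placeholders):

```shell
# Geo-redundant: LRS in the primary region + LRS in the secondary.
az storage account create \
  --resource-group rg-app --name stgeoredundant --sku Standard_GRS

# Geo-zone-redundant: ZRS in the primary region + geo-replication.
az storage account create \
  --resource-group rg-app --name stzonegeoredundant --sku Standard_GZRS
```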
See more: Storage Strategy
Question 46
MEDIUM
A company is building a web API that experiences highly variable traffic. During business hours, traffic is 100 requests per second, but it spikes to 10,000 requests per second during promotional events lasting a few hours. They want to minimize costs during low-traffic periods. What compute platform should you recommend?
Azure Functions Consumption plan charges only for actual execution time, making it the most cost-effective during low-traffic periods. It scales automatically to handle traffic spikes. For latency-sensitive API paths, a Premium plan with pre-warmed instances eliminates cold start delays. VMs have the highest baseline cost. App Service Premium tier has a fixed minimum cost. AKS adds management complexity and baseline node costs even during low traffic.
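A sketch of both options with the Azure CLI (app, plan, storage account names, region, and runtime are placeholders): a Consumption-plan function app for pay-per-execution, and an Elastic Premium plan when pre-warmed instances are needed:

```shell
# Sketch: Consumption-plan function app (billed per execution).
# All names, the region, and the runtime are illustrative.
az functionapp create \
  --resource-group rg-api --name func-orders-api \
  --storage-account stfuncorders \
  --consumption-plan-location westeurope \
  --runtime dotnet --functions-version 4

# Latency-sensitive alternative: Elastic Premium (EP1) plan
# with pre-warmed instances to avoid cold starts.
az functionapp plan create \
  --resource-group rg-api --name plan-orders-premium \
  --location westeurope --sku EP1
```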
See more: Compute Strategy
Question 47
MEDIUM
A software company needs to run CI/CD build agents that compile large codebases. Each build takes 20 minutes and requires 8 vCPUs and 32 GB RAM. Builds happen during business hours (8 AM to 6 PM) on weekdays only. Outside business hours, no compute resources should be running. What is the most cost-effective solution?
Azure Container Instances provide per-second billing with no infrastructure to manage. Using Azure Automation or a scheduled workflow to spin up containers only during business hours and tear them down afterward ensures zero cost outside those hours. The resources meet the 8 vCPU and 32 GB RAM requirement. Dedicated VMs running 24/7 waste money on idle hours. App Service plan does not scale down to zero cost. Azure Functions Consumption plan has a maximum execution time limit and memory constraints that would not support large compilation tasks.
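The schedule-driven lifecycle can be sketched as two CLI calls run by an Automation runbook or pipeline schedule (image, registry, and names are placeholders; an 8 vCPU / 32 GB container group must be available in the chosen region):

```shell
# Sketch: start a build-agent container at 8 AM on weekdays.
# Image and resource names are illustrative placeholders.
az container create \
  --resource-group rg-ci --name build-agent-01 \
  --image myregistry.azurecr.io/build-agent:latest \
  --cpu 8 --memory 32 --restart-policy Never

# Teardown at 6 PM so nothing bills outside business hours.
az container delete --resource-group rg-ci --name build-agent-01 --yes
```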
See more: Compute Strategy
Question 48
HARD
An enterprise is designing a hub-and-spoke network topology connecting 50 spoke virtual networks across 4 Azure regions. They need transitive routing between spokes, centralized firewall inspection in each regional hub, integration with on-premises networks via ExpressRoute, and simplified management. What networking architecture should you recommend?
Azure Virtual WAN with secured virtual hubs provides managed hub-and-spoke networking at scale. It supports automatic transitive routing between spokes (no custom UDRs needed), integrated Azure Firewall for centralized security inspection (via Firewall Manager), native ExpressRoute gateway integration, and global transit connectivity between hubs across regions. Managing 50 spokes with manual VNet peering and UDRs would be extremely complex. VPN site-to-site between hubs adds latency and management burden. Private Link is for accessing PaaS services privately, not inter-VNet routing.
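The core of this topology can be sketched for one region (names, region, address prefix, and the `<sub-id>` placeholder are illustrative; the hub and spoke-connection steps repeat per region and per spoke, and the secured-hub firewall is layered on via Firewall Manager):

```shell
# Sketch: Standard Virtual WAN, one regional hub, one spoke connection.
az network vwan create \
  --resource-group rg-network --name corp-vwan --type Standard

az network vhub create \
  --resource-group rg-network --name hub-westeurope \
  --vwan corp-vwan --location westeurope --address-prefix 10.100.0.0/23

az network vhub connection create \
  --resource-group rg-network --vhub-name hub-westeurope \
  --name spoke01 \
  --remote-vnet /subscriptions/<sub-id>/resourceGroups/rg-spokes/providers/Microsoft.Network/virtualNetworks/vnet-spoke01
```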
See more: Networking Strategy
Question 49
MEDIUM
A company hosts a public-facing web application on Azure App Service. They need to protect the application from common web attacks (SQL injection, XSS, bot attacks) and DDoS attacks while also improving performance with global content caching. What should you deploy in front of the application?
Azure Front Door combines a global load balancer, WAF (Web Application Firewall) for protection against SQL injection, XSS, and bot attacks, built-in DDoS protection, and CDN capabilities for global content caching, all in one service. Application Gateway with WAF v2 provides WAF but is regional, not global, and does not include CDN. Load Balancer operates at Layer 4 and does not understand HTTP attacks. Traffic Manager is DNS-based and does not inspect traffic.
See more: Networking Strategy
Question 50
MEDIUM
Your company wants Azure PaaS services (Azure SQL Database, Azure Storage) to be accessible only from within their virtual network, not from the public internet. The services must still be resolved via their standard DNS names. What networking feature should you implement?
Azure Private Endpoints create a private IP address in your VNet for the PaaS service, making it accessible only from within the VNet. Private DNS Zones resolve the standard DNS names (e.g., mydb.database.windows.net) to the private IP address, so applications work without connection string changes. Service Endpoints keep traffic on the Azure backbone but the PaaS service still has a public IP. NSGs control traffic between subnets but do not affect PaaS connectivity. Azure Firewall can filter outbound traffic but does not make PaaS services private.
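For Azure SQL Database, the pieces fit together roughly as below (server, VNet, and resource names plus the `<sub-id>` placeholder are illustrative): a private endpoint in the data subnet, the `privatelink.database.windows.net` zone linked to the VNet, and a DNS zone group so the endpoint's private IP is registered automatically:

```shell
# Sketch: private endpoint for an Azure SQL logical server,
# plus the private DNS zone that keeps the standard name resolving.
az network private-endpoint create \
  --resource-group rg-data --name pe-sql \
  --vnet-name vnet-app --subnet snet-data \
  --private-connection-resource-id /subscriptions/<sub-id>/resourceGroups/rg-data/providers/Microsoft.Sql/servers/mydb-server \
  --group-id sqlServer --connection-name pe-sql-conn

az network private-dns zone create \
  --resource-group rg-data --name privatelink.database.windows.net

az network private-dns link vnet create \
  --resource-group rg-data --zone-name privatelink.database.windows.net \
  --name link-vnet-app --virtual-network vnet-app --registration-enabled false

az network private-endpoint dns-zone-group create \
  --resource-group rg-data --endpoint-name pe-sql --name default \
  --private-dns-zone privatelink.database.windows.net --zone-name sql
```

With this in place, `mydb-server.database.windows.net` resolves to the private IP from inside the VNet, so connection strings are unchanged.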
See more: Networking Strategy