AZ-305 Designing Azure Infrastructure Solutions - Practice Test 2
Question 1
MEDIUM
A financial services company runs a microservices application on Azure Kubernetes Service (AKS). The operations team needs to correlate logs across multiple services, track request latency percentiles, and set up alerts when error rates exceed 5% over a 10-minute window.
Which solution should you recommend?
Application Insights with distributed tracing provides end-to-end correlation across microservices using correlation IDs. It natively supports latency percentile tracking (P50, P95, P99), automatic dependency mapping, and alert rules based on error rate thresholds. While Log Analytics can query logs, Application Insights is purpose-built for application performance monitoring (APM) scenarios and integrates seamlessly with AKS through the OpenTelemetry collector.
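The alert condition described here (error rate over a trailing 10-minute window) can be sketched in Python. This illustrates the rule an Application Insights alert evaluates, not its implementation; the event data is made up:

```python
def error_rate(events):
    """Fraction of failed requests among (timestamp_secs, succeeded) events."""
    if not events:
        return 0.0
    failures = sum(1 for _, ok in events if not ok)
    return failures / len(events)

def should_alert(events, now, window_secs=600, threshold=0.05):
    """Fire when the error rate over the trailing window exceeds the threshold."""
    recent = [(t, ok) for t, ok in events if now - t <= window_secs]
    return error_rate(recent) > threshold

# 100 requests in the last 10 minutes, 8 of them failed -> 8% error rate
requests = [(i, i % 13 != 0) for i in range(100)]
```

With 8 failures out of 100 requests inside the window, `should_alert(requests, now=100)` returns True because 8% exceeds the 5% threshold.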
See more: Logging and Monitoring
Question 2
EASY
A company wants to allow partner organizations to access a shared Azure web application. Partners already have their own Azure AD tenants. The company wants to avoid creating separate user accounts for each partner employee.
What should you recommend?
Azure AD B2B collaboration allows you to invite external users who authenticate using their own organization's Azure AD credentials. This eliminates the need to create and manage separate accounts. B2B supports self-service sign-up, access reviews, and conditional access policies for guest users. B2C is designed for consumer-facing applications, not business partner scenarios.
See more: Authentication and Authorization
Question 3
MEDIUM
A healthcare company is building a patient portal. Users must sign in using their email accounts and go through a multi-step registration process that includes accepting terms of service and verifying their patient ID. The portal is accessed by millions of patients who are not part of the company's Azure AD.
Which authentication solution should you recommend?
Azure AD B2C is designed for consumer-facing applications with millions of external users. It supports custom user flows for multi-step registration, API connectors for custom validation (such as patient ID verification), and custom branding. B2C can scale to millions of users and supports social and local account sign-up. Azure AD B2B is for business partner collaboration, not consumer-facing scenarios.
See more: Design Authentication
Question 4
MEDIUM
A company has a data analytics platform where different departments need access to different datasets in Azure Data Lake Storage Gen2. The security team requires that access permissions be based on the user's department attribute stored in Azure AD, and new employees should automatically receive the correct permissions.
Which authorization approach should you recommend?
Azure AD dynamic groups automatically add and remove members based on attribute rules such as department. By assigning RBAC roles (e.g., Storage Blob Data Reader) to these dynamic groups on the appropriate storage containers, new employees automatically inherit the correct permissions when their department attribute is set. This approach is scalable, auditable, and follows least-privilege principles without manual intervention.
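The behavior of a dynamic membership rule can be illustrated with a small Python sketch (the user records and UPNs are invented; Azure AD evaluates the real rule continuously as attributes change):

```python
def dynamic_members(users, attribute, value):
    """Evaluate a rule equivalent to: user.department -eq "Finance"."""
    return [u["upn"] for u in users if u.get(attribute) == value]

directory = [
    {"upn": "ana@contoso.com", "department": "Finance"},
    {"upn": "bo@contoso.com", "department": "HR"},
    {"upn": "chen@contoso.com", "department": "Finance"},
]
```

Granting an RBAC role to the dynamic group means any user whose department attribute later changes to Finance inherits the permission with no manual step.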
See more: Design Authorization
Question 5
EASY
A company is expanding to Azure and wants to ensure that all resources deployed across multiple subscriptions follow consistent naming conventions and are tagged with a cost center. Resources without the required tags should be automatically denied during deployment.
What should you recommend?
Azure Policy with a Deny effect evaluates resource properties at deployment time and blocks deployments that do not meet the defined rules. You can create a policy that requires specific tags (like cost center) and assign it across multiple subscriptions via a management group. This is the most direct and reliable way to enforce tag compliance. Azure Blueprints can include policies but is being deprecated in favor of direct policy management.
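The Deny effect's behavior can be mimicked in a few lines of Python (an illustration of the evaluation logic, not Azure Policy's actual engine; the tag name is the cost-center tag from the scenario):

```python
def evaluate_tag_policy(resource, required_tag="costCenter"):
    """Mimic an Azure Policy deny effect for a missing required tag."""
    tags = resource.get("tags") or {}
    if required_tag not in tags:
        return {"effect": "deny", "reason": f"required tag '{required_tag}' is missing"}
    return {"effect": "allow"}
```

A deployment without the tag is blocked at request time, before any resource is created.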
See more: Design Governance
Question 6
MEDIUM
An e-commerce company stores product catalog data that is read frequently but updated only a few times per day. The catalog data is stored in Azure SQL Database. During flash sales, read traffic increases 20x and causes database performance degradation.
Which data management strategy should you recommend to handle the read spikes?
Azure Cache for Redis is ideal for caching frequently read, infrequently updated data like a product catalog. A read-through or cache-aside pattern offloads 20x read spikes from the database to the cache layer, which can handle millions of requests per second with sub-millisecond latency. This is more cost-effective than scaling the database, and the data refresh frequency (few times per day) aligns well with cache invalidation patterns.
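The cache-aside pattern can be sketched as follows (a plain dict stands in for Azure Cache for Redis and for the SQL database; real code would use a Redis client with TTLs):

```python
class CacheAside:
    """Cache-aside: read from cache; on miss, load from the database and populate."""
    def __init__(self, db):
        self.db = db          # stand-in for Azure SQL Database
        self.cache = {}       # stand-in for Azure Cache for Redis
        self.db_reads = 0

    def get(self, key):
        if key in self.cache:
            return self.cache[key]       # cache hit: database untouched
        self.db_reads += 1
        value = self.db[key]             # cache miss: read through to the database
        self.cache[key] = value
        return value

    def invalidate(self, key):
        # call on the infrequent catalog updates so readers get fresh data
        self.cache.pop(key, None)
```

During a flash sale, repeated reads of the same catalog entries hit the cache, so the 20x spike never reaches the database.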
See more: Data Management Strategy
Question 7
MEDIUM
A government agency stores classified documents in Azure Blob Storage. Regulatory requirements mandate that all data must be encrypted with keys managed solely by the agency, and the agency must be able to revoke access to the encrypted data at any time without depending on Microsoft.
Which encryption approach should you recommend?
Customer-managed keys (CMK) stored in Azure Key Vault allow the agency to maintain full control over encryption keys. With purge protection enabled, deleted keys cannot be permanently purged until the retention period elapses, guarding against accidental or malicious key loss. The agency can revoke access to encrypted data at any time by disabling or deleting the key in Key Vault, which immediately renders the blob data inaccessible. This meets the requirement for sole key management while leveraging Azure's encryption infrastructure.
See more: Data Protection Strategy
Question 8
MEDIUM
A retail company collects telemetry data from 10,000 IoT devices in stores worldwide. The data must be ingested in real time, stored for long-term analysis, and available for dashboards that operations managers view daily. The solution must support both real-time anomaly detection and historical trend analysis.
Which two components should you include in the monitoring data platform? (Choose two.)
Azure Stream Analytics provides real-time processing of telemetry streams and has built-in anomaly detection capabilities using machine learning models. Azure Data Explorer (ADX) is optimized for large-scale time-series data ingestion and analysis, making it ideal for long-term storage of IoT telemetry and powering historical trend dashboards with near-instant query response times. Azure Event Grid is event-driven (not suitable for high-volume telemetry streaming), and Azure Monitor Metrics has retention limits that make it unsuitable for long-term IoT data storage.
See more: Monitoring Data Platform
Question 9
MEDIUM
A company runs a critical ERP system on Azure VMs in the East US region. The business requires that in the event of a complete regional outage, the ERP system must be operational in another region within 30 minutes. Data loss must be limited to no more than 5 minutes worth of transactions.
Which disaster recovery strategy should you recommend?
Azure Site Recovery (ASR) provides continuous replication of Azure VMs to a secondary region with an RPO (Recovery Point Objective) of typically seconds to minutes, meeting the 5-minute data loss requirement. ASR supports automated failover with recovery plans that can bring systems online within the 30-minute RTO (Recovery Time Objective). Azure Backup is designed for data protection, not rapid failover. Hourly snapshots would exceed the 5-minute RPO requirement.
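The elimination logic reduces to comparing each option's recovery figures against the stated objectives; a minimal sketch (the per-option RPO/RTO numbers are illustrative):

```python
def meets_objectives(rpo_minutes, rto_minutes, required_rpo=5, required_rto=30):
    """Check a DR option against the scenario's recovery objectives."""
    return rpo_minutes <= required_rpo and rto_minutes <= required_rto
```

Continuous replication (RPO of about a minute, automated failover within the RTO) passes, while hourly snapshots fail on RPO alone.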
See more: Site Recovery Strategy
Question 10
EASY
A company deploys a web application on Azure App Service. The application must remain available even if the underlying hardware in a single datacenter fails.
What is the simplest way to achieve this?
Enabling zone redundancy on the App Service plan distributes instances across multiple Availability Zones within a region. This ensures that if a single datacenter (zone) fails, the application continues to run on instances in other zones. This is the simplest approach as it requires only a configuration change on the App Service plan. Deploying to a single zone would not protect against datacenter failure.
See more: High Availability
Question 11
MEDIUM
A media company stores video files in Azure Blob Storage. Videos older than 90 days are rarely accessed but must be available within 12 hours when requested. Videos older than 2 years can take up to 48 hours to retrieve. The company wants to minimize storage costs.
Which data archiving strategy should you recommend?
Lifecycle management policies can automatically transition blobs between tiers based on age. Moving to the Cool tier after 90 days reduces costs for infrequently accessed data while keeping blobs online for immediate retrieval, well within the 12-hour window. Moving to the Archive tier after 2 years provides the lowest storage cost, and the Archive tier rehydration time (up to 15 hours for standard priority) fits within the 48-hour retrieval window. Moving directly to Archive after 90 days would not meet the 12-hour retrieval requirement for videos between 90 days and 2 years old.
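The tiering rules described here map cleanly to a small decision function (an illustration of the policy's effect, not the lifecycle engine itself):

```python
def target_tier(age_days):
    """Tier selection mirroring the lifecycle policy in this answer."""
    if age_days >= 730:   # ~2 years: Archive (standard rehydration up to ~15 h)
        return "Archive"
    if age_days >= 90:    # 90 days: Cool (online tier, immediate reads)
        return "Cool"
    return "Hot"
```

In the real policy these thresholds are the `daysAfterModificationGreaterThan` conditions on the tierToCool and tierToArchive actions.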
See more: Data Archiving Strategy
Question 12
MEDIUM
A software company needs to deploy updates to a production web application running on Azure App Service. The team wants to validate new deployments with 10% of production traffic before completing the rollout, with the ability to instantly roll back if issues are detected.
Which deployment strategy should you recommend?
App Service deployment slots support traffic routing, allowing you to direct a configurable percentage of production traffic to a staging slot. This enables canary testing with real production traffic. If issues are detected, you can instantly route 100% traffic back to the production slot (rollback) or proceed with a slot swap to promote the new version. This is the most integrated and cost-effective approach for App Service deployments.
See more: Design Deployments
Question 13
MEDIUM
A company is migrating 200 on-premises SQL Server databases to Azure. The databases range from 10 GB to 2 TB. The migration must minimize downtime, and the team needs to assess compatibility issues before migration begins.
Which migration approach should you recommend?
Azure Database Migration Service (DMS) supports online (minimal downtime) migration of SQL Server databases to Azure SQL. Combined with Azure Migrate, it provides compatibility assessment to identify issues before migration begins. DMS handles the continuous data sync during migration, reducing cutover time to minutes. BACPAC export/import requires downtime proportional to database size, making it impractical for large databases. Data Factory is not designed for schema-aware database migration.
See more: Design Migrations
Question 14
EASY
A company builds a mobile application that needs to call multiple backend microservices. The company wants a single entry point for the mobile app, with the ability to enforce rate limiting, transform request/response payloads, and manage API versioning.
What should you recommend?
Azure API Management (APIM) is purpose-built as an API gateway that provides a single entry point for backend APIs. It natively supports rate limiting (throttling policies), request/response transformation via policies, API versioning and revision management, developer portal for documentation, and analytics. Application Gateway and Front Door are load balancers/CDNs that lack API-specific management features like payload transformation and versioning.
See more: API Integration Strategy
Question 15
MEDIUM
A company needs to store and serve millions of small JSON documents (1-5 KB each) with single-digit millisecond read latency. The data model is simple key-value pairs, and the application performs approximately 100,000 reads per second with occasional writes.
Which storage solution should you recommend?
For simple key-value pair access with 100,000 reads per second and single-digit millisecond latency requirements, Azure Cache for Redis is the optimal choice. Redis is an in-memory data store designed exactly for this access pattern - high-throughput key-value reads with sub-millisecond latency. With data persistence enabled, it also handles durability. While Cosmos DB also offers low latency, Redis is more cost-effective for pure key-value workloads at this scale.
See more: Storage Strategy
Question 16
MEDIUM
A company runs a batch processing workload that processes thousands of independent image files. Each file takes 2-5 minutes to process. The workload runs daily and the number of files varies between 500 and 50,000. The company wants to minimize costs and processing time.
Which compute solution should you recommend?
Azure Batch is designed specifically for large-scale parallel batch processing workloads. It provides automatic scheduling of tasks across a managed pool of compute nodes, supports auto-scaling based on workload size (handling 500 to 50,000 files), and offers low-priority (Spot) nodes at up to 80% discount. Azure Functions has a 5-minute default timeout on Consumption plans and is not ideal for compute-intensive image processing. Azure Batch handles job orchestration, retry logic, and node management automatically.
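The fan-out of independent tasks across a pool can be sketched with a local thread pool (a stand-in for Batch's node pool; `process_image` is a made-up placeholder for the real 2-5 minute work):

```python
from concurrent.futures import ThreadPoolExecutor

def process_image(name):
    # placeholder for the real 2-5 minute per-file processing
    return f"{name}:done"

def run_batch(files, pool_size=8):
    """Fan independent tasks across a worker pool, as Batch does across nodes."""
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        return list(pool.map(process_image, files))
```

Because the files are independent, throughput scales with pool size, which is why Batch auto-scaling handles the 500-to-50,000 file variation well.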
See more: Compute Strategy
Question 17
MEDIUM
A company has a web application hosted in Azure that needs to communicate with an on-premises Oracle database. The connection must be encrypted, and the company does not want to expose the database to the public internet. The company already has an ExpressRoute circuit connecting their on-premises datacenter to Azure.
Which networking configuration should you recommend?
Since the company already has an ExpressRoute circuit, ExpressRoute private peering provides a dedicated private connection between the Azure VNet and the on-premises network that never traverses the public internet. Note that ExpressRoute traffic is not encrypted by default; to satisfy the encryption requirement, add MACsec (available on ExpressRoute Direct) or run IPsec over the private peering. By enabling VNet integration on the web application (App Service), the app can route traffic through the VNet and reach the on-premises Oracle database via ExpressRoute without exposing it to the public internet. This is the most straightforward and performant approach given the existing infrastructure.
See more: Networking Strategy
Question 18
HARD
A global financial trading platform runs on Azure and generates 500 GB of logs daily from application, infrastructure, and security sources. The compliance team requires 7-year log retention with tamper-proof storage. The security operations team needs real-time threat detection on security logs, while the development team only needs last-30-day application logs for debugging.
Which logging architecture should you recommend?
This architecture addresses all three requirements cost-effectively. Log Analytics provides 30-day interactive retention for the development team's debugging needs. Microsoft Sentinel running on top of the security log data provides real-time threat detection with analytics rules. Archiving all logs to immutable blob storage (with WORM policies) satisfies the 7-year tamper-proof retention requirement at the lowest cost tier. A single Log Analytics workspace with 7-year retention would be extremely expensive at 500 GB/day.
See more: Logging and Monitoring
Question 19
HARD
A multinational corporation is implementing a zero-trust security model for its Azure environment. The company requires that access to sensitive resources is granted only when the user is on a compliant device, authenticating from a trusted location, and presents a risk level of low or medium as assessed in real time.
Which two components are essential to implement this requirement? (Choose two.)
Conditional Access is the zero-trust policy engine that evaluates multiple signals (device compliance, location, risk level) to make access decisions. However, the real-time risk assessment (low/medium risk levels) is provided by Azure AD Identity Protection, which feeds risk signals into Conditional Access policies. Together, they enforce the zero-trust model: Identity Protection detects risk in real time, and Conditional Access enforces the access decision based on all conditions. PIM manages privileged roles but does not evaluate device compliance or location.
See more: Authentication and Authorization
Question 20
HARD
A company is designing a SaaS application that must support authentication for enterprise customers using their own identity providers (Okta, Ping Identity, on-premises ADFS). Each customer should be isolated in their own tenant context, and the onboarding process for new identity providers must be self-service without requiring code changes.
Which authentication architecture should you recommend?
Azure AD B2C with Identity Experience Framework (IEF) custom policies supports dynamic identity provider discovery, where new IdPs can be configured through metadata without code changes. IEF custom policies allow complex authentication orchestration, including tenant isolation by routing users to the correct IdP based on their email domain. This is the foundation for enterprise-grade SaaS multi-tenancy. Standard B2C user flows lack dynamic IdP discovery, while multi-tenant Azure AD does not support non-Azure AD identity providers natively.
See more: Design Authentication
Question 21
EASY
A company wants to ensure that virtual machine administrators can manage VMs but cannot modify the virtual network settings or create new virtual networks.
Which approach should you use?
The Virtual Machine Contributor built-in role grants permissions to manage virtual machines but not to manage virtual networks, storage accounts, or other resources beyond the VM itself. This follows the principle of least privilege by providing only the permissions needed for VM administration. Using broader roles like Contributor or Owner and then trying to restrict them adds unnecessary complexity.
See more: Design Authorization
Question 22
MEDIUM
A large enterprise has 15 Azure subscriptions organized under a single management group. Each subscription is used by a different business unit. The company wants to ensure that no business unit can deploy resources in regions outside of Europe (West Europe and North Europe) to comply with GDPR data residency requirements.
Which governance solution should you implement?
The built-in Azure Policy "Allowed locations" restricts which Azure regions resources can be deployed to. By assigning this policy at the management group level, it is inherited by all 15 subscriptions, ensuring consistent enforcement across all business units. This is the standard approach for data residency compliance. RBAC controls who can do what, not where resources are deployed. Subscription defaults do not prevent deployment to other regions.
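The effect of the assignment can be mimicked in a few lines (an illustration of the evaluation, not Azure Policy itself; region names are the canonical lowercase identifiers):

```python
ALLOWED = {"westeurope", "northeurope"}

def allowed_locations_effect(resource):
    """Mimic the built-in 'Allowed locations' policy inherited from the management group."""
    return "allow" if resource.get("location", "").lower() in ALLOWED else "deny"
```

Because the assignment sits at the management group, all 15 subscriptions evaluate every deployment against the same list.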
See more: Design Governance
Question 23
HARD
A global SaaS company needs to design a multi-region data platform. The application has a product catalog (read-heavy, eventually consistent), user sessions (low-latency reads/writes), and financial transactions (strong consistency, ACID compliance). Each data type has different consistency, latency, and compliance requirements.
Which combination of data services should you use?
This design matches each service to its optimal use case. Azure Cosmos DB with session consistency is ideal for the read-heavy, eventually consistent product catalog with global distribution. Azure Cache for Redis provides sub-millisecond latency for user session data (key-value access pattern). Azure SQL Database provides full ACID compliance and strong consistency required for financial transactions. Using Cosmos DB with strong consistency for financial transactions sacrifices its distributed performance advantages and may not meet all ACID requirements that SQL provides natively.
See more: Data Management Strategy
Question 24
EASY
A company stores sensitive customer data in Azure SQL Database. They want to ensure that database administrators can manage the database but cannot read the actual customer data stored in certain columns like Social Security numbers and credit card numbers.
Which data protection feature should you use?
Always Encrypted ensures that sensitive data is encrypted at the column level, and the encryption keys are managed by the application, not the database server. This means database administrators (who have server-level access) can perform management tasks but cannot decrypt the actual column data because they do not possess the column master key. TDE encrypts data at rest but does not prevent DBAs from reading data. Dynamic Data Masking can be bypassed by users with certain permissions.
See more: Data Protection Strategy
Question 25
MEDIUM
A company wants to create a unified monitoring dashboard that displays metrics from Azure resources, on-premises servers, and third-party SaaS applications. The dashboard must support custom visualizations, team collaboration, and role-based access to different dashboard sections.
Which monitoring platform combination should you recommend?
Azure Managed Grafana integrates natively with Azure Monitor and supports data sources from on-premises (via Prometheus, InfluxDB) and third-party SaaS applications through its extensive plugin ecosystem. Grafana provides advanced custom visualizations, team-based dashboards with folder permissions, and role-based access control. Azure Monitor workbooks are limited to Azure data sources, Power BI is more suited for business analytics, and Azure Dashboards have limited customization capabilities.
See more: Monitoring Data Platform
Question 26
HARD
A company runs a multi-tier application across two Azure regions (primary and secondary). The application includes a web tier on App Service, business logic on AKS, and data on Azure SQL with geo-replication. During disaster recovery testing, they discover that DNS failover takes 15 minutes and the AKS cluster in the secondary region does not have the latest container images.
How should you redesign the disaster recovery strategy to address both issues?
Azure Front Door performs health-based routing and can fail over instantly (no DNS TTL dependency) because it acts as a reverse proxy - client connections go to Front Door, which routes to healthy backends. Azure Container Registry (ACR) geo-replication automatically syncs container images across regions, ensuring the secondary AKS cluster always has access to the latest images. Traffic Manager relies on DNS TTL for failover (causing the 15-minute delay), and ASR does not support AKS natively.
See more: Site Recovery Strategy
Question 27
MEDIUM
A company deploys an Azure SQL Database that must maintain 99.995% availability. The database serves a payment processing application where any downtime directly results in revenue loss. The solution must protect against both zone-level and region-level failures.
Which configuration should you recommend?
To achieve 99.995% availability and protect against both zone-level and region-level failures, you need both zone redundancy (protects against datacenter failures within a region) and a failover group (protects against entire region outages). The Business Critical tier with zone redundancy provides an SLA of 99.995%. The auto-failover group adds automatic geo-failover capability with a read-write listener endpoint that automatically routes to the current primary, minimizing application changes during failover.
See more: High Availability
Question 28
MEDIUM
A legal firm stores millions of case documents in Azure Blob Storage. Documents must be retained for 10 years after the case is closed, and during this period they cannot be modified or deleted. After 3 years, documents are very rarely accessed.
Which archiving solution should you implement?
Immutable blob storage with time-based retention policies ensures documents cannot be modified or deleted for the specified 10-year period, meeting legal compliance requirements (WORM - Write Once, Read Many). Lifecycle management policies then automatically transition documents to the Archive tier after 3 years when they are rarely accessed, significantly reducing storage costs. The combination of immutability and lifecycle management provides both compliance and cost optimization.
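The WORM guarantee can be sketched as a guard on delete operations (illustrative only; the storage service enforces the real policy, and the day-based clock here is a simplification):

```python
class ImmutableBlob:
    """Time-based retention: deletes are rejected until the retention period elapses."""
    def __init__(self, created_day, retention_days=3650):  # ~10 years
        self.created_day = created_day
        self.retention_days = retention_days

    def delete(self, today):
        if today - self.created_day < self.retention_days:
            raise PermissionError("blob is protected by a time-based retention policy")
        return "deleted"
```

Reads remain allowed throughout, so the lifecycle policy can still move the blob to the Archive tier after 3 years while the retention lock stays in force.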
See more: Data Archiving Strategy
Question 29
HARD
A company operates a microservices application on AKS with 30 services. They want to implement a progressive delivery strategy where new versions are deployed to a small subset of pods, monitored for error rates using Application Insights, and automatically promoted or rolled back based on metrics thresholds without manual intervention.
Which deployment approach should you recommend?
Flagger is a progressive delivery operator for Kubernetes that automates the entire canary deployment lifecycle. It queries real-time metrics (error rates, latency) from a configured metrics provider (Prometheus natively; Application Insights data can be surfaced through a compatible provider), automatically shifts traffic to the canary in increments, and promotes or rolls back based on configurable metric thresholds without manual intervention. Standard Kubernetes rolling updates do not support metric-based automated rollback, and manual approval gates contradict the requirement for automated promotion/rollback.
See more: Design Deployments
Question 30
MEDIUM
A company needs to migrate 50 on-premises VMware virtual machines to Azure. The VMs run various Linux and Windows workloads. The company wants to assess the VMs for Azure readiness, right-size recommendations, and estimate monthly costs before migration.
Which tool should you use for assessment?
Azure Migrate provides a centralized hub for discovering, assessing, and migrating on-premises workloads to Azure. The discovery and assessment tool automatically discovers VMware VMs, collects performance data (CPU, memory, disk, network), evaluates Azure readiness, provides right-sizing recommendations based on actual utilization, and generates cost estimates. The TCO Calculator provides cost estimates but not readiness assessment. Azure Advisor only works for resources already in Azure.
See more: Design Migrations
Question 31
MEDIUM
A company is designing an event-driven architecture where an order placement in their e-commerce system must trigger inventory updates, send email notifications, and update the analytics dashboard. Each downstream system must process the event independently, and the failure of one system must not affect the others.
Which two components should you include in the integration design? (Choose two.)
Azure Service Bus Topics implement the publish-subscribe pattern where a single message (order placed event) is delivered to multiple subscriptions (inventory, email, analytics). Each subscription operates independently, so a failure in one does not affect others. Azure Functions triggered by each subscription process events independently with built-in retry logic and dead-letter queue support. Queue Storage is point-to-point (one consumer per message). A sequential Logic Apps workflow would mean one system failure blocks the others.
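The fan-out and failure isolation can be sketched in Python (the handlers and SKU are made-up stand-ins for the inventory, email, and analytics subscribers; in Service Bus a failed message retries and then dead-letters rather than being handled inline):

```python
def publish(event, subscriptions):
    """Deliver one event to every subscription; a failing handler doesn't block the rest."""
    results = {}
    for name, handler in subscriptions.items():
        try:
            results[name] = handler(event)
        except Exception as exc:
            # stand-in for Service Bus retry + dead-letter queue behavior
            results[name] = f"dead-lettered: {exc}"
    return results

def email_handler(event):
    raise RuntimeError("smtp down")  # simulate one failing subscriber

subs = {
    "inventory": lambda e: f"reserved {e['sku']}",
    "email": email_handler,
    "analytics": lambda e: "recorded",
}
```

Even with the email subscriber failing, inventory and analytics both process the order event, which is the independence the scenario requires.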
See more: API Integration Strategy
Question 32
EASY
A company needs to store large unstructured files such as images, videos, and documents. The files will be uploaded by a web application and served to users through a CDN. The total storage is expected to grow to 50 TB.
Which storage solution should you recommend?
Azure Blob Storage is purpose-built for storing large amounts of unstructured data (images, videos, documents). It integrates natively with Azure CDN for content delivery, supports HTTP-based access for web applications, and can scale to petabytes. The Hot tier is appropriate when files are frequently accessed (served to users). Azure Files is for file share scenarios (SMB/NFS), Managed Disks are for VM storage, and Table Storage is for structured NoSQL data.
See more: Storage Strategy
Question 33
MEDIUM
A company is building an application that processes user-uploaded documents. The processing involves OCR, text extraction, and sentiment analysis. Each document takes 30-90 seconds to process. The workload is unpredictable with quiet periods followed by bursts of hundreds of uploads. The company wants to pay only for actual processing time.
Which compute solution should you recommend?
Azure Functions on a Premium plan supports long execution durations (the default 30-minute timeout is configurable, easily covering 30-90 second processing), provides pre-warmed instances for fast scale-out during bursts, and scales in during quiet periods so you pay only for running instances (at least one pre-warmed instance is kept ready). The storage queue trigger provides reliable message-based processing with automatic retry. The Consumption plan has a 5-minute default timeout and slower cold-start scale-out. App Service incurs costs even during quiet periods. VMSS lacks the serverless pay-per-use model.
See more: Compute Strategy
Question 34
HARD
A company is designing a hub-and-spoke network architecture with 20 spoke VNets across 4 Azure regions. On-premises connectivity is required through ExpressRoute. The company needs centralized firewall inspection for all inter-spoke and spoke-to-internet traffic, with simplified routing management. The network team reports that managing hundreds of UDR entries is becoming unmanageable.
Which networking solution should you recommend?
Azure Virtual WAN with secured hubs provides a managed hub infrastructure that automatically handles routing between spokes, regions, and on-premises via ExpressRoute. Azure Firewall Manager enables centralized security policy across all hubs. The routing intent feature automatically programs routes to direct inter-spoke and internet-bound traffic through Azure Firewall without manually managing UDRs. This addresses the routing complexity issue that makes traditional hub-and-spoke unmanageable at scale with 20+ VNets across 4 regions.
See more: Networking Strategy
Question 35
MEDIUM
A company uses Azure Kubernetes Service and needs to be alerted within 2 minutes when any pod enters a CrashLoopBackOff state or when node CPU exceeds 85% for more than 5 minutes. The SRE team wants alerts sent to both a Slack channel and an on-call PagerDuty rotation.
Which monitoring configuration should you recommend?
Azure Monitor Container Insights provides native monitoring for AKS including pod state changes and node metrics. Metric alerts can detect conditions like CrashLoopBackOff within minutes. Action groups support webhook actions that integrate with both Slack (incoming webhooks) and PagerDuty (Events API). This provides a fully managed solution without the operational overhead of running self-managed Prometheus infrastructure. The 2-minute alerting requirement is met by near-real-time metric alerts.
See more: Logging and Monitoring
Question 36
MEDIUM
A company has a legacy application that uses service account credentials stored in configuration files to access Azure resources. The security team has mandated eliminating all stored credentials. The application runs on Azure Virtual Machines and accesses Azure Key Vault, Azure Storage, and Azure SQL Database.
Which authentication approach should you implement?
System-assigned managed identities eliminate the need for any stored credentials. Azure automatically manages the identity lifecycle and handles token acquisition and rotation, so credentials never appear in code or configuration files. You then grant the identity access on each target resource: a Key Vault access policy or Key Vault RBAC role, Storage Blob Data roles on the storage account, and a contained database user in Azure SQL Database mapped to the identity. This fully satisfies the security mandate to eliminate stored credentials. Key Vault alone would still require a credential for the initial connection, and client secrets are themselves stored credentials.
See more: Authentication and Authorization
Question 37
MEDIUM
A company requires that all employees use multi-factor authentication when accessing cloud applications. However, users accessing from the corporate network during business hours should have a streamlined experience with reduced authentication friction. Contractors must always complete full MFA regardless of location.
Which authentication configuration should you recommend?
Conditional Access provides the granular policy engine needed for this scenario. Named locations define the corporate network (by IP ranges), allowing policies to require MFA only when users are outside the trusted network. Authentication strengths can define different MFA requirements. A separate policy targeting the contractors group enforces full MFA regardless of location. Per-user MFA is a legacy approach that lacks the granularity of location-based and group-based conditions.
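The resulting policy logic can be sketched as a small decision function. The IP range, group names, and business-hours window below are hypothetical, and real Conditional Access evaluates this in the identity platform, not in application code:

```python
from ipaddress import ip_address, ip_network

# Hypothetical "named location" for the corporate network.
CORPORATE_NETWORKS = [ip_network("203.0.113.0/24")]

def mfa_required(user_groups, source_ip, hour):
    """Contractors always complete full MFA; employees skip MFA only
    from the corporate network during business hours (sketch)."""
    if "contractors" in user_groups:
        return True
    on_corp_net = any(ip_address(source_ip) in net for net in CORPORATE_NETWORKS)
    business_hours = 9 <= hour < 18
    return not (on_corp_net and business_hours)

print(mfa_required({"contractors"}, "203.0.113.10", 10))  # True
print(mfa_required({"employees"}, "203.0.113.10", 10))    # False
print(mfa_required({"employees"}, "198.51.100.7", 10))    # True
```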
See more: Design Authentication
Question 38
HARD
A company is implementing a privileged access strategy for their Azure environment. Requirements include: 1) Global administrators must request and justify access for a limited time period, 2) Standing access to production subscriptions must be eliminated, 3) All privileged operations must require approval from a security team member, and 4) Emergency access must be available when the approval workflow is unavailable.
Which combination of features addresses all requirements?
Azure AD Privileged Identity Management (PIM) addresses all four requirements: 1) Just-in-time activation requires administrators to request time-limited access with a business justification, 2) Eligible role assignments eliminate standing access (roles are not active until explicitly activated), 3) Approval workflows can be configured to require security team member approval before activation, 4) Break-glass (emergency access) accounts are dedicated accounts excluded from PIM and Conditional Access, providing access when the approval workflow or PIM is unavailable.
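The eligible-role model behind requirements 1-3 can be illustrated with a small sketch (class and field names are invented, not the PIM API): a role grants nothing until activated with a justification and approval, and the activation expires on its own:

```python
from datetime import datetime, timedelta

class EligibleRole:
    """Sketch of PIM-style just-in-time activation (illustrative only)."""

    def __init__(self, name, max_duration_hours=8):
        self.name = name
        self.max_duration = timedelta(hours=max_duration_hours)
        self.active_until = None  # no standing access by default

    def activate(self, justification, approved, now, hours):
        # Mirrors PIM settings: justification plus approver sign-off required.
        if not justification or not approved:
            raise PermissionError("justification and approval required")
        self.active_until = now + min(timedelta(hours=hours), self.max_duration)

    def is_active(self, now):
        return self.active_until is not None and now < self.active_until

now = datetime(2024, 1, 1, 9, 0)
role = EligibleRole("Global Administrator")
print(role.is_active(now))                       # False: eligible, not active
role.activate("Patch rollout", approved=True, now=now, hours=4)
print(role.is_active(now + timedelta(hours=3)))  # True: within activation window
print(role.is_active(now + timedelta(hours=5)))  # False: activation expired
```

Requirement 4 is intentionally outside this model: break-glass accounts bypass it entirely so that access survives a PIM or approval-workflow outage.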
See more: Design Authorization
Question 39
MEDIUM
A company has adopted a multi-cloud strategy using Azure and AWS. They need to maintain consistent governance policies across both clouds, including cost management, compliance reporting, and resource inventory. The governance team wants a single pane of glass for managing policies.
Which Azure governance approach should you recommend?
Azure Arc extends Azure management and governance to resources running outside of Azure, including AWS. With Azure Arc, you can apply Azure Policy to AWS resources, manage resource inventory through a single pane of glass, and enforce compliance across both clouds. While Defender for Cloud provides security posture management across clouds and Cost Management handles cost analysis, Azure Arc provides the broadest governance capability for multi-cloud resource management with policy enforcement.
See more: Design Governance
Question 40
MEDIUM
A healthcare company collects patient data from multiple hospitals. Each hospital's data must be logically isolated, but the analytics team needs to run cross-hospital aggregate queries. Individual hospital data must be encrypted with hospital-specific keys, and data residency must be maintained per country.
Which data management architecture should you recommend?
Azure Synapse Analytics workspaces per hospital provide logical data isolation. Each workspace can use hospital-specific customer-managed keys (CMKs) for encryption at rest, and workspaces can be deployed in country-specific regions for data residency. Serverless SQL pools can then run aggregate queries directly over the hospitals' storage accounts (for example, with OPENROWSET over the lake) without copying data. This architecture delivers isolation, per-hospital encryption, data residency, and cross-hospital analytics.
See more: Data Management Strategy
Question 41
MEDIUM
A company must comply with PCI DSS requirements for their Azure-hosted payment processing application. They need to ensure that credit card numbers are never stored in plain text in any database, log file, or application trace. Even if a breach occurs, the stolen data must be useless without access to additional systems.
Which data protection strategy should you implement?
Tokenization replaces sensitive card data with surrogate tokens throughout the application stack (databases, logs, and traces); the tokens have no mathematical relationship to the card numbers and cannot be reversed without the tokenization service. The actual payment card data is protected by Azure Payment HSM, a FIPS 140-2 Level 3 certified hardware security module. Even if the application database, logs, or traces are breached, the stolen tokens are meaningless without access to the HSM-backed detokenization capability. Tokenization is the gold standard for PCI DSS compliance because it reduces the PCI scope of the application.
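A toy tokenization round trip makes the property concrete. The in-memory dict below stands in for the HSM-backed token vault and is illustrative only, not a PCI-certified design:

```python
import secrets

class TokenVault:
    """Sketch of tokenization: the PAN-to-token mapping lives only in the
    vault, so a leaked token is useless on its own."""

    def __init__(self):
        self._forward = {}   # PAN -> token
        self._reverse = {}   # token -> PAN

    def tokenize(self, pan):
        if pan in self._forward:
            return self._forward[pan]          # same PAN, same token
        token = "tok_" + secrets.token_hex(8)  # random, no link to the PAN
        self._forward[pan] = token
        self._reverse[token] = pan
        return token

    def detokenize(self, token):
        return self._reverse[token]

vault = TokenVault()
t = vault.tokenize("4111111111111111")
print(t.startswith("tok_"))                       # the app stores only this
print(vault.detokenize(t) == "4111111111111111")  # only the vault reverses it
```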
See more: Data Protection Strategy
Question 42
EASY
A company wants to receive an email notification whenever the monthly Azure spending exceeds 80% of the allocated budget for any subscription.
Which monitoring feature should you configure?
Azure Cost Management allows you to create budgets for subscriptions and configure alert conditions at specified thresholds (such as 80% of the budget). When spending reaches the threshold, email notifications are automatically sent to configured recipients. This is the purpose-built feature for cost monitoring and budget alerts. Azure Monitor alerts do not directly track billing, and Advisor provides optimization recommendations rather than budget threshold notifications.
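The budget evaluation itself is a simple threshold comparison; a sketch (not the Cost Management API):

```python
def budget_alerts(spend, budget, thresholds=(0.8,)):
    """Return the thresholds current spend has crossed, mirroring the
    check a budget performs before sending email notifications."""
    return [t for t in thresholds if spend >= t * budget]

print(budget_alerts(spend=850.0, budget=1000.0))  # [0.8]: notify recipients
print(budget_alerts(spend=500.0, budget=1000.0))  # []: under threshold
```

In Cost Management you would configure the same 80% condition per subscription budget, with the recipient list attached to the alert.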
See more: Monitoring Data Platform
Question 43
MEDIUM
A company runs a mission-critical SAP HANA workload on Azure VMs. They need to ensure that during an Azure region outage, the SAP system can resume operations in a secondary region within 4 hours. The SAP database is 8 TB in size and changes by approximately 200 GB per day.
Which disaster recovery approach is most appropriate?
For SAP HANA disaster recovery, the recommended approach is HANA System Replication (HSR) in asynchronous mode for cross-region database replication, as it provides database-aware replication with minimal RPO. Azure Site Recovery handles the application tier VMs. ASR alone is not recommended for HANA database VMs because it performs block-level replication that may not ensure database consistency for an 8 TB in-memory database. Azure Backup's restore time for 8 TB would exceed the 4-hour RTO. ZRS only provides zone redundancy, not region redundancy.
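A back-of-the-envelope check shows why async replication comfortably handles the stated change rate; the 100 MB/s cross-region bandwidth figure below is an assumption for illustration:

```python
# Rough sizing for async HSR: can the cross-region link keep the RPO low?
daily_change_gb = 200
avg_mb_per_s = daily_change_gb * 1000 / 86_400  # steady-state average, decimal units
print(round(avg_mb_per_s, 2))  # 2.31 MB/s average replication throughput

# With an assumed 100 MB/s of usable cross-region bandwidth, even a 50 GB
# change burst drains in minutes, well inside the 4-hour RTO budget.
burst_gb = 50
drain_minutes = burst_gb * 1000 / 100 / 60
print(round(drain_minutes, 1))  # 8.3
```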
See more: Site Recovery Strategy
Question 44
HARD
A company runs a globally distributed application that requires 99.999% uptime SLA. The application consists of a web tier, an API tier, and a database tier. Users are located across North America, Europe, and Asia. The company needs to ensure that a complete regional failure does not result in any user-facing downtime.
Which two design patterns are essential to achieve this SLA? (Choose two.)
Select all that apply (multiple correct answers)
Achieving 99.999% uptime (approximately 5 minutes of downtime per year) requires an active-active multi-region deployment in which all regions serve traffic simultaneously. Azure Front Door provides near-instant failover without DNS propagation delay. Azure Cosmos DB with multi-region writes removes the database as a single point of failure: writes continue in any region during a regional outage. Active-passive adds failover delay that erodes the SLA, and a single region cannot survive a regional failure. Note: multi-region writes use multi-master replication with conflict resolution and support every consistency level except strong; session or bounded staleness consistency are typical choices for this pattern.
See more: High Availability
Question 45
EASY
A company wants to automatically move Azure SQL Database backups that are older than 1 year to a lower-cost storage tier to reduce costs while maintaining compliance with a 5-year backup retention requirement.
Which feature should you configure?
Azure SQL long-term retention (LTR) retains full database backups for up to 10 years in Azure Blob Storage. LTR policies let you configure weekly, monthly, and yearly backup retention schedules. The backups are stored automatically in cost-effective storage and can be used to restore the database from any retained full backup. Short-term retention supports at most 35 days, which does not meet the 5-year requirement.
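The retention limits quoted above suggest a simple decision rule; a sketch with the 35-day and 10-year caps hard-coded:

```python
def retention_plan(required_days):
    """Pick the Azure SQL backup retention feature for a compliance
    requirement (sketch based on documented limits)."""
    if required_days <= 35:
        return "short-term retention"
    if required_days <= 10 * 365:
        return "long-term retention (LTR)"
    return "export backups to your own storage"

print(retention_plan(30))       # short-term retention
print(retention_plan(5 * 365))  # long-term retention (LTR): this scenario
```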
See more: Data Archiving Strategy
Question 46
MEDIUM
A company needs to deploy identical infrastructure across development, staging, and production environments. The infrastructure includes VNets, App Services, SQL Databases, and Key Vaults. Changes must be tracked in source control, and deployments must be repeatable and idempotent.
Which deployment approach should you recommend?
Bicep is the recommended Infrastructure as Code (IaC) language for Azure that compiles to ARM templates. Storing Bicep files in Git provides source control and change tracking. Environment-specific parameter files allow the same templates to deploy to dev, staging, and production with appropriate settings. CI/CD pipelines ensure repeatable deployments. Bicep deployments are idempotent by design - deploying the same template multiple times produces the same result. CLI scripts are imperative and not inherently idempotent.
See more: Design Deployments
Question 47
EASY
A company runs a legacy .NET Framework web application on on-premises IIS servers. They want to move the application to Azure with minimal code changes. The application uses Windows-specific features like the Windows Registry and COM components.
Which migration target should you recommend?
Azure App Service on Windows supports .NET Framework applications with Windows-specific features like Registry access and COM components. This is a PaaS migration that requires minimal code changes (rehost/lift-and-shift) while eliminating infrastructure management. Linux-based services cannot support Windows-specific features. AKS with Windows nodes adds unnecessary complexity for a straightforward web application migration. Azure Functions is designed for event-driven workloads, not traditional web applications.
See more: Design Migrations
Question 48
MEDIUM
A company operates a SaaS platform and needs to expose REST APIs to third-party developers. The API must support OAuth 2.0 authentication, provide interactive API documentation, enforce usage quotas per developer, and allow developers to self-register and obtain API keys.
Which integration solution should you recommend?
Azure API Management provides all required capabilities: OAuth 2.0 authentication through policies, a built-in developer portal with interactive API documentation (based on OpenAPI specs), product-level quotas and rate limiting per subscription key, and self-service developer registration. Developers can sign up on the portal, browse API documentation, obtain subscription keys, and test APIs interactively. No other Azure service provides this complete API management lifecycle.
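Per-developer quota enforcement behaves like a counter per subscription key. A minimal sketch of that behaviour (in APIM this is declared with the quota or quota-by-key policy rather than written as code; key names here are invented):

```python
from collections import defaultdict

class QuotaGuard:
    """Sketch of APIM-style call quotas keyed by subscription key."""

    def __init__(self, calls_per_period):
        self.limit = calls_per_period
        self.used = defaultdict(int)

    def allow(self, subscription_key):
        if self.used[subscription_key] >= self.limit:
            return False  # APIM would respond 403 Quota Exceeded
        self.used[subscription_key] += 1
        return True

guard = QuotaGuard(calls_per_period=2)
print(guard.allow("dev-key-1"))  # True
print(guard.allow("dev-key-1"))  # True
print(guard.allow("dev-key-1"))  # False: quota exhausted for this developer
print(guard.allow("dev-key-2"))  # True: other developers are unaffected
```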
See more: API Integration Strategy
Question 49
HARD
A company needs to design a storage solution for a data lake that will store 500 TB of data. The data includes raw files (CSV, Parquet, JSON) and must support both batch processing with Spark and interactive SQL queries. Fine-grained access control at the folder and file level is required, and the solution must support POSIX-style permissions for compatibility with existing Hadoop-based ETL pipelines.
Which storage configuration should you recommend?
Azure Data Lake Storage Gen2 (ADLS Gen2) combines Azure Blob Storage scalability with a hierarchical namespace (file system semantics). The hierarchical namespace enables POSIX-style ACLs at the folder and file level, which is critical for Hadoop-compatible ETL pipelines. ADLS Gen2 integrates with Azure Synapse for both Spark-based batch processing and serverless SQL queries over Parquet, CSV, and JSON files. Standard Blob Storage does not support hierarchical namespace or POSIX ACLs, making fine-grained access control at the file level impractical.
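POSIX-style ACL checks reduce to rwx flags per principal on each folder or file; a sketch with hypothetical principal names:

```python
def allowed(acl, principal, action):
    """Check an rwx ACL entry of the kind the ADLS Gen2 hierarchical
    namespace attaches to folders and files (sketch, not the ADLS API)."""
    perms = acl.get(principal, "---")
    flag = {"read": "r", "write": "w", "execute": "x"}[action]
    return flag in perms

folder_acl = {
    "etl-pipeline": "rwx",   # Hadoop service principal: full access
    "analyst-group": "r-x",  # analysts: list (x on folders) and read only
}
print(allowed(folder_acl, "etl-pipeline", "write"))   # True
print(allowed(folder_acl, "analyst-group", "write"))  # False
print(allowed(folder_acl, "analyst-group", "read"))   # True
```

This per-entry model is what lets existing Hadoop ETL pipelines carry their permission scheme over unchanged.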
See more: Storage Strategy
Question 50
MEDIUM
A company is designing a multi-tenant SaaS application where each tenant's traffic must be isolated at the network level. The application runs on Azure App Service, and each tenant accesses the application through a custom domain. The company wants to inspect and filter traffic per tenant without deploying separate App Service instances.
Which networking solution should you recommend?
Azure Application Gateway supports multi-site listeners that route traffic based on the host header (custom domain per tenant). Each listener can have its own WAF policy for per-tenant traffic inspection and filtering, and path-based routing rules can direct tenant traffic to specific backends. This provides network-level tenant isolation without requiring separate App Service deployments. Azure Load Balancer operates at Layer 4 and cannot inspect HTTP headers or host names. Traffic Manager is DNS-based and does not provide traffic inspection.
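Multi-site listener routing keys off the HTTP Host header; a sketch of the selection logic (hostnames and pool names are hypothetical, and the real gateway also applies the listener's WAF policy before forwarding):

```python
# Hypothetical tenant domains mapped to backend pools, one listener each.
BACKEND_POOLS = {
    "tenant-a.example.com": "pool-a",
    "tenant-b.example.com": "pool-b",
}

def route(host_header):
    """Pick the backend pool for a request based on its Host header
    (host matching is case-insensitive, as in HTTP)."""
    pool = BACKEND_POOLS.get(host_header.lower())
    if pool is None:
        return "403: no listener for this host"
    return pool

print(route("tenant-a.example.com"))  # pool-a
print(route("Tenant-B.example.com"))  # pool-b
print(route("unknown.example.com"))   # 403: no listener for this host
```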
See more: Networking Strategy