Top SDET (2025) frequently asked interview questions | JavaInUse

In this post we will look at SDET (Software Development Engineer in Test) interview questions. Examples are provided with explanations.


  1. Verification vs Validation: Can you explain the difference between verification and validation in software testing?
  2. Regression vs Retesting: What's the difference between regression testing and retesting? When would you use each?
  3. Test Automation Framework: How would you design a test automation framework from scratch for a web application? Walk me through your approach.
  4. Selenium Waits: What's the difference between implicit and explicit waits in Selenium? When would you use each?
  5. API Testing: How would you test a REST API? What tools would you use and what aspects would you verify?
  6. Login Page Testing: Explain how you would test a login page. Include both positive and negative test scenarios.
  7. Password Validation Testing: Write a test case for validating a password field with these requirements:
    • Minimum 8 characters
    • At least one uppercase letter
    • At least one number
    • At least one special character
  8. Cyclomatic Complexity: What is cyclomatic complexity? How does it help in determining test cases?
  9. Dynamic Elements: How would you handle dynamic elements in web automation, especially when IDs or classes change frequently?
  10. Test Pyramids: Explain the concept of test pyramids. How would you determine the right balance of tests for a project?
  11. Mocking vs Stubbing: What's the difference between mocking and stubbing? Provide an example of when you'd use each.
  12. Performance Testing: How would you test a feature that involves processing large amounts of data? What performance metrics would you consider?
  13. Test Data Management: How do you handle test data in your automation framework? Explain your approach to test data management.
  14. SQL Testing: Write a SQL query to find duplicate records in a table. How would you test this query?
  15. E-commerce Testing: Design test cases for an e-commerce shopping cart. Consider edge cases and potential integration points.

Q: Can you explain the difference between verification and validation in software testing?

A:

  • Verification is checking whether we are building the product RIGHT
  • Validation is checking whether we are building the RIGHT product

Key differences:

  1. Timing:
    • Verification occurs early in development (reviews, inspections)
    • Validation occurs after the software is built (testing)
  2. Focus:
    • Verification: Reviews documentation, code, requirements
    • Validation: Tests actual software functionality
  3. Question answered:
    • Verification: "Are we building it according to specifications?"
    • Validation: "Does it fulfill the user's actual needs?"

Q: What's the difference between regression testing and retesting? When would you use each?

A:

Regression Testing:

  • Testing done to ensure new code changes haven't broken existing functionality
  • Covers wider scope of application
  • Usually automated
  • Performed after every significant change
  • Example: Running full test suite after adding new feature

Retesting (Confirmation Testing):

  • Testing specifically to verify fixed defects
  • Focused on specific bug fixes
  • Usually manual
  • Performed after bug fixes
  • Example: Verifying a specific bug fix works as expected

Q: How would you design a test automation framework from scratch for a web application?

A:

Step-by-step approach:

  1. Choose Core Technologies:
    • Programming language (e.g., Java, Python)
    • Testing framework (e.g., TestNG, PyTest)
    • Web automation tool (e.g., Selenium WebDriver)
    • Build tool (e.g., Maven, Gradle)
  2. Design Framework Architecture:
    • Page Object Model (POM) design pattern
    • Configuration management
    • Test data management
    • Reporting mechanism (e.g., Extent Reports)
    • Logging system
  3. Framework Components:
    • Base classes for common functions
    • Utility classes for reusable methods
    • Configuration files for different environments
    • Test data files (Excel, JSON, etc.)
    • Test runners and suites




Q: What's the difference between implicit and explicit waits in Selenium? When would you use each?

A:

Implicit Wait:

  • Global timeout for the entire session
  • Applies to all elements
  • Set once and applies throughout WebDriver instance
  • Use when elements generally take consistent time to load

Example:

driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(10));

Explicit Wait:

  • Specific to individual elements
  • More precise and conditional
  • Better for dynamic elements
  • Use when different elements need different wait conditions

Example:

WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
wait.until(ExpectedConditions.elementToBeClickable(By.id("button")));

Q: How would you test a REST API? What tools would you use and what aspects would you verify?

A:

Tools:

  • Postman for manual testing
  • Rest Assured for automation
  • JUnit/TestNG for assertions
  • Jenkins for CI/CD

Testing Aspects:

  1. Basic Verification:
    • Status codes (200, 404, 500, etc.)
    • Response time
    • Response format (JSON/XML)
  2. Data Validation:
    • Schema validation
    • Data types
    • Required fields
  3. Security Testing:
    • Authentication
    • Authorization
    • API key validation
  4. Edge Cases:
    • Boundary value testing
    • Negative testing
    • Invalid input handling

Q: Explain how you would test a login page. Include both positive and negative test scenarios.

A:

Positive Test Scenarios:

  • Valid username and valid password
  • Login with email instead of username (if supported)
  • Login with case-sensitive credentials
  • Remember me functionality
  • Session management after successful login

Negative Test Scenarios:

  • Invalid username with valid password
  • Valid username with invalid password
  • Empty username and password fields
  • SQL injection attempts
  • XSS vulnerability testing
  • Account lockout after multiple failed attempts

Non-Functional Testing:

  • Page load time
  • Browser compatibility
  • Mobile responsiveness
  • Password field masking
  • Error message validation
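The "account lockout after multiple failed attempts" scenario above implies server-side state worth unit-testing on its own. A minimal sketch of such a policy (the threshold of 3 and the in-memory map are assumptions for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative lockout policy for the "multiple failed attempts" scenario.
// Threshold and in-memory storage are assumptions for this sketch.
class LoginAttemptTracker {
    private static final int MAX_ATTEMPTS = 3;
    private final Map<String, Integer> failures = new HashMap<>();

    // Records a failed attempt; returns true once the account is locked.
    boolean recordFailure(String username) {
        int count = failures.merge(username, 1, Integer::sum);
        return count >= MAX_ATTEMPTS;
    }

    boolean isLocked(String username) {
        return failures.getOrDefault(username, 0) >= MAX_ATTEMPTS;
    }

    // A successful login resets the counter.
    void recordSuccess(String username) {
        failures.remove(username);
    }
}
```

Having the policy isolated like this lets you cover the boundary cases (exactly at the threshold, reset after success) without driving a browser at all.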

Q: Write a test case for validating a password field with specified requirements.

A:

Test Case ID:

TC_PW_001

Test Case Title:

Password Field Validation Testing

Preconditions:

  • User is on password creation/change page
  • Password field is visible and enabled

Test Scenarios:

  1. Valid Password Tests:
    • Input: "Test@123" (Expected: Pass)
    • Input: "Complex@Password123" (Expected: Pass)
    • Input: "Ab#12345" (Expected: Pass)
  2. Invalid Password Tests:
    • Length: "Test@12" (7 characters)
    • No uppercase: "test@123"
    • No number: "Test@abcd"
    • No special character: "Test12345"

Q: What is cyclomatic complexity? How does it help in determining test cases?

A:

Definition:

  • Cyclomatic complexity is a metric that measures the number of linearly independent paths through a program's source code
  • Formula: V(G) = E - N + 2P where:
    • E = Number of edges
    • N = Number of nodes
    • P = Number of connected components

How it helps in testing:

  1. Test Case Design:
    • Helps determine minimum number of test cases needed
    • Each path should be tested at least once
  2. Code Complexity Assessment:
    • 1-10: Simple code, low risk
    • 11-20: Moderate complexity, moderate risk
    • 21-50: Complex, high risk
    • >50: Untestable code, very high risk
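A concrete illustration of the "each path tested at least once" point: for a single method, complexity equals the number of decision points plus one. The method below has two decisions, so V(G) = 3, meaning three test cases cover every independent path:

```java
// Two decision points -> cyclomatic complexity V(G) = 3,
// so three test cases cover all independent paths.
class Grader {
    static String gradeOf(int score) {
        if (score >= 90) {   // decision 1
            return "A";
        }
        if (score >= 60) {   // decision 2
            return "Pass";
        }
        return "Fail";       // fall-through path
    }
}
```

One test per path (score 95, 70, 30) gives the minimum suite that the metric predicts.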

Q: How would you handle dynamic elements in web automation, especially when IDs or classes change frequently?

A:

Strategies:

  1. Use Stable Locators:
    • Custom data attributes (data-testid)
    • Name attributes
    • Static parts of IDs/classes
  2. XPath Strategies:
    • Use contains() function
    • Use parent-child relationships
    • Use text() function
  3. CSS Selectors:
    • Use partial match selectors
    • Use multiple class combinations
    • Use structural selectors
  4. Custom Functions:
    • Create dynamic wait functions
    • Implement retry mechanisms
    • Use relative locators
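The XPath and CSS strategies above can be captured as small locator builders. These produce plain locator strings; in Selenium they would be passed to `By.xpath(...)` or `By.cssSelector(...)`:

```java
// Builds locators from the stable part of a changing attribute.
// The strings are plain XPath/CSS; in Selenium they'd feed By.xpath / By.cssSelector.
class DynamicLocators {

    // e.g. id="submit-btn-8231" -> match on the stable prefix "submit-btn"
    static String xpathContains(String attr, String stablePart) {
        return String.format("//*[contains(@%s,'%s')]", attr, stablePart);
    }

    // Locate by visible text when attributes are unreliable.
    static String xpathByText(String tag, String text) {
        return String.format("//%s[text()='%s']", tag, text);
    }

    // CSS attribute prefix match, e.g. [id^='submit-btn']
    static String cssStartsWith(String attr, String prefix) {
        return String.format("[%s^='%s']", attr, prefix);
    }
}
```

Keeping locator construction in one place also makes it trivial to switch strategies when the front end changes its ID-generation scheme.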

Q: Explain the concept of test pyramids. How would you determine the right balance of tests for a project?

A:

Test Pyramid Layers:

  1. UI Tests (Top):
    • 10% of total tests
    • End-to-end testing
    • Slower and more brittle
  2. Integration Tests (Middle):
    • 20% of total tests
    • Testing multiple components
    • API and service testing
  3. Unit Tests (Base):
    • 70% of total tests
    • Testing individual components
    • Fast and reliable

Determining Balance:

  • Consider application architecture
  • Evaluate risk areas
  • Consider maintenance costs
  • Account for execution time
  • Consider team expertise

Q: What's the difference between mocking and stubbing? Provide an example of when you'd use each.

A:

Mocking:

  • Creates objects that simulate behavior of real objects
  • Verifies interactions between objects
  • Used for behavior verification

Example of Mocking:

// Testing email notification service
@Test
public void testEmailSent() {
    EmailService mockEmail = mock(EmailService.class);
    NotificationManager manager = new NotificationManager(mockEmail);
    manager.sendNotification("Test");
    verify(mockEmail).send("Test");
}

Stubbing:

  • Provides predefined responses to calls
  • Used for state verification
  • Simpler than mocks

Example of Stubbing:

// Testing weather service
// Note: Mockito has no stub() method; stubs are created with mock() and
// given canned answers via when(...).thenReturn(...).
@Test
public void testWeatherAlert() {
    WeatherService stubWeather = mock(WeatherService.class);
    when(stubWeather.getTemperature()).thenReturn(32.5);
    assertEquals(32.5, stubWeather.getTemperature(), 0.01);
}

Q: How would you test a feature that involves processing large amounts of data? What performance metrics would you consider?

A:

Testing Approach:

  1. Data Setup:
    • Create realistic test data sets
    • Use data generators for volume
    • Consider various data patterns
  2. Test Types:
    • Load testing
    • Stress testing
    • Volume testing
    • Scalability testing

Performance Metrics:

  • Response time
  • Throughput (transactions per second)
  • CPU utilization
  • Memory usage
  • Database performance
  • Network latency
  • Error rate under load

Monitoring Tools:

  • JMeter for load testing
  • New Relic for monitoring
  • Grafana for visualization
  • Prometheus for metrics collection

Q: How do you handle test data in your automation framework? Explain your approach to test data management.

A:

Data Management Strategies:

  1. Data Sources:
    • External files (Excel, CSV, JSON)
    • Database repositories
    • API responses
    • Test data generators
  2. Data Organization:
    • Environment-specific data
    • Test case-specific data
    • Shared test data
    • Master data

Implementation Approach:

  • Data Provider patterns
  • Factory methods for data creation
  • Data cleanup mechanisms
  • Version control for test data
  • Data security handling
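The factory-method approach listed above can be sketched as a data builder that generates unique, valid-by-default records, with overrides for scenario-specific cases. The `TestUser` fields here are illustrative, not from any particular framework:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Factory-method approach to test data: unique, valid-by-default records,
// with overrides for specific scenarios. Field names are illustrative.
class TestUserFactory {
    private static final AtomicInteger SEQ = new AtomicInteger(0);

    static class TestUser {
        final String username;
        final String email;
        TestUser(String username, String email) {
            this.username = username;
            this.email = email;
        }
    }

    // Each call yields a distinct user, so parallel tests never collide on data.
    static TestUser defaultUser() {
        int n = SEQ.incrementAndGet();
        return new TestUser("user" + n, "user" + n + "@test.example");
    }

    // Scenario-specific variant built on the same defaults.
    static TestUser withEmail(String email) {
        TestUser base = defaultUser();
        return new TestUser(base.username, email);
    }
}
```

Because every test creates its own data through the factory, cleanup is simpler and tests stay independent of shared fixtures.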

Q: Write a SQL query to find duplicate records in a table. How would you test this query?

A:

SQL Query:

SELECT column1, column2, COUNT(*) as count
FROM table_name
GROUP BY column1, column2
HAVING COUNT(*) > 1
ORDER BY count DESC;

Testing Approach:

  1. Test Data Setup:
    • Create test table with known duplicates
    • Include edge cases
    • Add null values
  2. Test Scenarios:
    • Single duplicate records
    • Multiple duplicate records
    • No duplicates
    • Partial duplicates
  3. Performance Testing:
    • Query execution time
    • Index usage
    • Resource consumption

Q: Design test cases for an e-commerce shopping cart. Consider edge cases and potential integration points.

A:

Functional Test Cases:

  1. Add to Cart:
    • Add single item
    • Add multiple items
    • Add out-of-stock items
    • Add maximum quantity allowed
  2. Update Cart:
    • Modify quantities
    • Remove items
    • Clear entire cart
    • Update with invalid quantities
  3. Price Calculations:
    • Subtotal calculation
    • Tax calculation
    • Shipping cost calculation
    • Discount application

Integration Points:

  • Inventory system sync
  • Price service integration
  • User account integration
  • Payment gateway integration
  • Shipping service integration

Edge Cases:

  • Cart session timeout
  • Network disconnection handling
  • Concurrent cart updates
  • Price changes during checkout
  • Items becoming unavailable
  • Maximum cart value handling

Non-Functional Testing:

  • Performance under load
  • Cart data persistence
  • Browser compatibility
  • Mobile responsiveness
  • Security testing
