7.1 Automated Test Case Creation

Generating a Full Set of Test Cases (Test ID, Scenario, Steps, and Expected Result) Directly from Use Case Descriptions

One of the most powerful advantages of a use case-driven approach is the direct traceability from requirements → design → testing. Visual Paradigm’s AI-Powered Use Case Modeling Studio capitalizes on this by automatically generating a comprehensive, structured set of test cases from the detailed use case specifications, decision tables, Activity Diagrams, and Sequence Diagrams created in earlier modules.

With a single click — typically labeled “Generate Test Cases”, “Create Test Suite”, or “Derive Tests from Use Case” — the AI analyzes the following inputs (sketched as a data structure after this list):

  • Main Success Scenario and numbered steps
  • Alternative Flows and their extension points
  • Exception Flows and error-handling paths
  • Preconditions and postconditions
  • Decision tables / decision matrix rules (from Module 6)
  • Key decision nodes and branches in Activity & Sequence Diagrams
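
To make these inputs concrete, here is a minimal sketch of how such a specification might look as data. It is purely illustrative — the field names are assumptions, not Visual Paradigm's internal schema:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class UseCaseSpec:
        """Illustrative input shape only -- not Visual Paradigm's actual schema."""
        use_case_id: str
        preconditions: List[str]
        postconditions: List[str]
        main_flow: List[str]                     # numbered main-success-scenario steps
        alternative_flows: Dict[str, List[str]]  # extension point -> alternative steps
        exception_flows: Dict[str, List[str]]    # error condition -> handling steps
        decision_rules: List[dict] = field(default_factory=list)  # decision-table rows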

It then produces a full set of test cases, each containing at minimum the following fields (sketched as a small record type after this list):

  • Test ID — unique identifier (e.g., UC001-TC01)
  • Scenario — short descriptive name of the specific path being tested
  • Preconditions — setup required before execution
  • Test Steps — clear, numbered actions the tester (or automation script) performs
  • Expected Result — precise post-execution state or system response
  • Priority — Critical / High / Medium / Low (often inferred from risk, frequency, or criticality)
  • Traceability — links back to the originating use case step, flow section, or decision table rule
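
For readers who want to post-process exported cases programmatically, these fields map naturally onto a small record type. A minimal Python sketch, with illustrative field names rather than Visual Paradigm's export schema:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestCase:
        test_id: str           # unique identifier, e.g. "UC001-TC01"
        scenario: str          # short name of the specific path under test
        preconditions: str     # setup required before execution
        steps: List[str]       # numbered tester/automation actions
        expected_result: str   # precise post-execution state or response
        priority: str = "Medium"                        # Critical/High/Medium/Low
        trace: List[str] = field(default_factory=list)  # originating steps/rules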

Generated test cases cover:

  • Positive/happy path
  • Negative/exceptional paths
  • Boundary conditions
  • Alternative flows
  • Combinations identified in decision tables (see the mapping sketch after this list)
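
The decision-table mapping is the most mechanical of these: each rule — a combination of condition values plus an expected outcome — yields one test case skeleton. A hand-rolled sketch, reusing the TestCase record above; the rules are invented, loosely mirroring the GourmetReserve example:

    # Illustrative rules, loosely mirroring the GourmetReserve decision table.
    RULES = [
        {"loyalty": "Gold", "party_size": 4, "peak": False,
         "outcome": "Reservation confirmed, no deposit requested"},
        {"loyalty": "Standard", "party_size": 10, "peak": True,
         "outcome": "Deposit required before confirmation"},
    ]

    def cases_from_rules(use_case_id, rules):
        """One test case skeleton per decision-table rule."""
        cases = []
        for n, rule in enumerate(rules, start=1):
            conditions = {k: v for k, v in rule.items() if k != "outcome"}
            cases.append(TestCase(
                test_id=f"{use_case_id}-TC{n:02d}",
                scenario=", ".join(f"{k}={v}" for k, v in conditions.items()),
                preconditions="; ".join(f"{k} is {v}" for k, v in conditions.items()),
                steps=["Search for a table", "Select an available slot",
                       "Review booking summary", "Confirm booking"],
                expected_result=rule["outcome"],
                trace=[f"Decision Table Rule R{n}"],
            ))
        return cases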

You can refine them in the integrated editor: adjust wording for clarity, add data values, mark for automation, assign to test cycles, or export (CSV, Excel, XML, or directly to testing tools like TestRail, Jira, Azure DevOps).
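
If your target tool is not directly integrated, flat CSV is the lowest-common-denominator exchange format. A minimal sketch that reuses the records from the sketches above; the column headers are illustrative, so match them to your tool's import template (TestRail, Jira, and Azure DevOps each expect their own):

    import csv

    def export_csv(test_cases, path="test_cases.csv"):
        """Flatten test cases into a CSV suitable for a generic import template."""
        with open(path, "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            writer.writerow(["Test ID", "Scenario", "Preconditions", "Test Steps",
                             "Expected Result", "Priority", "Trace"])
            for tc in test_cases:
                writer.writerow([
                    tc.test_id, tc.scenario, tc.preconditions,
                    "\n".join(f"{i}. {s}" for i, s in enumerate(tc.steps, start=1)),
                    tc.expected_result, tc.priority, "; ".join(tc.trace),
                ])

    export_csv(cases_from_rules("UC001", RULES))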

Practical Examples

Example 1: GourmetReserve – Use Case: Book a Table (UC-001)

AI-Generated Test Cases (selected subset):

  • Test ID: UC001-TC01
    Scenario: Happy path – no deposit required (Gold loyalty, non-peak, small party)
    Priority: High
    Preconditions: Diner logged in with Gold loyalty status, multiple tables available, current time is weekday 6 PM
    Test Steps:
    1. Open app and search for tables (Seattle, tomorrow 7 PM, party of 4)
    2. Select an available slot from results
    3. Review booking summary (no deposit shown)
    4. Tap “Confirm Booking”
    Expected Result: Reservation confirmed screen appears, confirmation push notification received, reservation status = Confirmed, no payment screen displayed
    Trace: Main flow steps 1–7, Decision Table Rule R1
  • Test ID: UC001-TC04
    Scenario: Deposit required – successful payment with promo code
    Priority: High
    Preconditions: Diner logged in (non-Gold), party size = 10, peak hours (Saturday 8 PM), valid promo code “WEEKEND10”
    Test Steps:
    1. Search and select table for party of 10
    2. Enter promo code “WEEKEND10” at checkout
    3. Review summary (discounted deposit shown)
    4. Proceed to payment → enter valid test card
    5. Submit payment
    Expected Result: Deposit processed successfully (10% reduced by promo), reservation confirmed, confirmation + discount receipt sent
    Trace: Main flow + Alt 4a, Decision Table Rule R4
  • Test ID: UC001-TC07
    Scenario: Payment failure – booking not created
    Priority: Critical
    Preconditions: Party size ≥ 8, peak hours, deposit required, valid card but insufficient funds
    Test Steps:
    1. Select slot requiring deposit
    2. Proceed to payment screen
    3. Enter card with insufficient funds
    4. Submit payment
    Expected Result: Error message “Payment declined – insufficient funds”, booking not created, remains on payment screen or returns to slot selection, no reservation record exists
    Trace: Exception flow 4b, Decision Table Rule R5

Example 2: SecureATM – Use Case: Withdraw Cash (UC-ATM-002)

  • Test ID: UC002-TC03
    Scenario: High-value withdrawal – biometric verification fails
    Priority: Critical
    Preconditions: User authenticated, sufficient funds & daily limit, ATM supports biometrics, test amount = $1,500
    Test Steps:
    1. Select “Withdraw Cash”
    2. Enter amount 1500
    3. When prompted, fail biometric scan (simulate rejection)
    Expected Result: Transaction aborted, message “Security verification failed – card retained”, card not returned, fraud alert sent to operations, transaction logged as failed security check
    Trace: Decision Table Rule 3, Sequence Diagram alt fragment [high-value + biometric fail]

Example 3: CorpLearn – Use Case: Take Final Assessment (UC-LRN-005)

  • Test ID: UC005-TC02
    Scenario: Passing score but mandatory privacy question failed
    Priority: High (compliance)
    Preconditions: All modules completed, assessment loaded, time remaining
    Test Steps:
    1. Answer all other questions correctly (score would be 92%)
    2. On mandatory data privacy question, select incorrect answer
    3. Submit assessment
    Expected Result: Assessment auto-failed due to compliance violation, message “Failed – Data privacy acknowledgment incorrect”, no certificate issued, attempt logged with fail reason, score not recorded toward completion
    Trace: Decision Table Rule 2, postcondition compliance check

Best Practices for Working with AI-Generated Test Cases

  • Review coverage first — Ensure every decision table rule and major flow branch has at least one corresponding test case.
  • Add concrete test data — Replace generic “valid card” with specific values (e.g., test card 4111111111111111, CVV 123, expiry 12/28).
  • Mark automation candidates — Flag UI/API tests suitable for Selenium, Appium, Postman, etc. (see the sketch after this list).
  • Prioritize ruthlessly — Focus High/Critical tests first; defer Low unless time permits.
  • Export & integrate — Push to your test management tool for execution tracking and defect linking.

By the end of Section 7.1, you have a robust, traceable test suite that is largely auto-generated yet fully customizable — dramatically reducing manual test writing effort while ensuring excellent functional coverage. These test cases serve as the primary verification mechanism that the implemented system matches the agreed-upon behavior defined throughout the project. With test cases generated, the final step is monitoring overall completeness and coverage in the Project Dashboard (next section).