This document provides a standardized guideline for writing functional test cases, ensuring consistent formatting, clear content, and compatibility with tools such as XMind.
**Equivalence Class Partitioning**
- Divide input data into valid and invalid equivalence classes.
- Select one or a few representative inputs from each class.
- Design at least one test case for each equivalence class.
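The idea can be sketched in code. The validator below is a hypothetical stand-in that implements the nickname rules used as a worked example later in this document (1–20 letters, digits, Chinese characters, underscores, or middle dots); the function name and regex are illustrative, not part of any real system.

```python
import re

# Hypothetical validator for the nickname rules used as a worked example
# later in this document: 1-20 letters, digits, Chinese characters,
# underscores, or middle dots.
NICKNAME_RE = re.compile(r"^[A-Za-z0-9_\u4e00-\u9fff\u00b7]{1,20}$")

def is_valid_nickname(name):
    return NICKNAME_RE.fullmatch(name) is not None

# One representative input per equivalence class is enough; other members
# of the same class are expected to behave identically.
valid_representatives = ["Alice_01", "用户·一号"]            # valid class
invalid_representatives = ["", "a" * 21, "bad!name", "!!!"]  # invalid classes

assert all(is_valid_nickname(n) for n in valid_representatives)
assert not any(is_valid_nickname(n) for n in invalid_representatives)
```

Each assertion exercises one class representative rather than enumerating every possible input, which is the point of the technique.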
**Boundary Value Analysis**
- Focus on testing boundary conditions of input and output (e.g., min/max values, just above/below limits, empty, critical lengths).
- A supplement to equivalence class partitioning.
- Includes normal, abnormal, and special boundary testing.
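A minimal sketch of boundary probing, assuming a hypothetical 1–20 character length rule: each limit is tested exactly at the boundary and one step outside it.

```python
# Boundary probes for a hypothetical 1-20 character length rule: test
# exactly at each limit and one step outside it.
MIN_LEN, MAX_LEN = 1, 20

def length_ok(name):
    return MIN_LEN <= len(name) <= MAX_LEN

boundary_cases = [
    ("", False),        # min - 1: empty input
    ("a", True),        # min: exactly 1 character
    ("a" * 20, True),   # max: exactly 20 characters
    ("a" * 21, False),  # max + 1: one character too long
]
for name, expected in boundary_cases:
    assert length_ok(name) is expected, (len(name), expected)
```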
**Decision Table Testing**
- Applicable to scenarios with multiple condition combinations resulting in different actions.
- List all condition and action stubs to form a decision table.
- Ensure all combinations are covered by test cases.
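As a sketch, the snippet below encodes a decision table for hypothetical login rules (three condition stubs, one action stub per row) and verifies that every condition combination is covered; the rules and action strings are invented for illustration only.

```python
from itertools import product

def login_action(registered, password_ok, activated):
    """Hypothetical login rules, used only to illustrate the technique."""
    if not registered:
        return "reject: account not found"
    if not password_ok:
        return "reject: wrong password"
    if not activated:
        return "reject: activation required"
    return "grant access"

# Decision table: condition stubs (registered, password_ok, activated)
# mapped to the expected action stub.
decision_table = {
    (True,  True,  True):  "grant access",
    (True,  True,  False): "reject: activation required",
    (True,  False, True):  "reject: wrong password",
    (True,  False, False): "reject: wrong password",
    (False, True,  True):  "reject: account not found",
    (False, True,  False): "reject: account not found",
    (False, False, True):  "reject: account not found",
    (False, False, False): "reject: account not found",
}

# Every combination of conditions is covered by exactly one expected action.
for conditions in product([True, False], repeat=3):
    assert login_action(*conditions) == decision_table[conditions], conditions
```

Enumerating the full cartesian product makes missing combinations impossible to overlook.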
**Scenario-Based Testing (Use Case Testing)**
- Design test cases based on how users actually use the system.
- Simulate user workflows to verify system behavior in real-world scenarios.
- Include normal, abnormal, and edge-case scenarios.
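A scenario-driven check can be sketched by driving a minimal in-memory account service through a register → activate → login journey. The service class and its method names are hypothetical stand-ins invented for this sketch, not an API from this document.

```python
class AccountService:
    """Minimal in-memory stand-in used only to illustrate workflow testing."""
    def __init__(self):
        self.users = {}  # email -> {"password": ..., "active": bool}

    def register(self, email, password):
        if email in self.users:
            return "Email already registered"
        self.users[email] = {"password": password, "active": False}
        return "Registration successful"

    def activate(self, email):
        self.users[email]["active"] = True

    def login(self, email, password):
        user = self.users.get(email)
        if user and user["password"] == password and user["active"]:
            return "Login successful"
        return "Account does not exist or password is incorrect"

svc = AccountService()

# Normal scenario: the full register -> activate -> login journey.
assert svc.register("user@example.com", "S3cret!") == "Registration successful"
svc.activate("user@example.com")
assert svc.login("user@example.com", "S3cret!") == "Login successful"

# Abnormal scenario: logging in before activation must fail.
assert svc.register("new@example.com", "pw123456") == "Registration successful"
assert svc.login("new@example.com", "pw123456") != "Login successful"
```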
**Error Guessing**
- Uses experience, intuition, and analysis of system weaknesses to predict likely defects.
- Often used to complement structured test methods.
- Focus on areas where the system is prone to errors.
**State Transition Testing**
- Suitable for systems/modules with defined state transitions.
- Focus on state changes and triggering events.
- Verify the correctness and completeness of state transitions.
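A sketch of the technique, borrowing the nickname states used as a worked example later in this document (Unset → Setting → Set → Modifying); the event names (`start_edit`, `save`, `cancel`) are hypothetical.

```python
# Allowed transitions: state -> {event: next_state}. Event names are
# hypothetical; the states come from the "Set Nickname" worked example.
TRANSITIONS = {
    "Unset":     {"start_edit": "Setting"},
    "Setting":   {"save": "Set", "cancel": "Unset"},
    "Set":       {"start_edit": "Modifying"},
    "Modifying": {"save": "Set", "cancel": "Set"},
}

def next_state(state, event):
    if event not in TRANSITIONS.get(state, {}):
        raise ValueError(f"invalid transition: {state} on {event}")
    return TRANSITIONS[state][event]

# Correctness: a normal path that visits every state.
state = "Unset"
for event in ["start_edit", "save", "start_edit", "save"]:
    state = next_state(state, event)
assert state == "Set"

# Completeness: undefined transitions must be rejected, not silently ignored.
try:
    next_state("Unset", "save")
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```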
Using the "Set Nickname" feature to demonstrate multiple test design methods:

**Equivalence Class Partitioning**
- Valid classes: 1–20 characters; letters, numbers, Chinese characters, underscores, middle dot.
- Invalid classes: empty, too long, prohibited special characters, only symbols.

**Boundary Value Analysis**
- Min boundary: 1-character nickname.
- Max boundary: 20-character nickname.
- Invalid boundaries: 0 characters (min − 1), 21 characters (max + 1).

**Scenario-Based Testing**
- Normal: first-time nickname setup.
- Modification: editing an existing nickname.
- Conflict: handling duplicate nicknames.
- Verification: display and confirmation after setting.

**Error Guessing**
- Security: special character/script injection.
- Compatibility: emojis, multilingual characters.
- Abnormal: network interruptions, concurrent operations.

**State Transition Testing**
- States: Unset → Setting → Set → Modifying.
- Transitions: normal and abnormal transitions between states.
- Validation: UI and feature availability in each state.
The structure of a functional test case document is as follows:

```markdown
# XX Functional Test Cases
## Feature Module Name
### Test Focus 1 (e.g., Avatar Settings)
#### Verification Point 1.1 (e.g., Upload Rule Validation)
##### Test Scenario 1.1.1
###### Expected Result
Expected result details 1
##### Test Scenario 1.1.2
###### Expected Result
Expected result details 2
#### Verification Point 1.2 (e.g., Default Logic Check)
##### Test Scenario 1.2.1
###### Expected Result
- Expected result details 1
### Test Focus 2 (e.g., Nickname Rules)
#### Verification Point 2.1 (e.g., Format Rule Check)
##### Test Scenario 2.1.1
###### Expected Result
Expected result details 1
```
The heading tags are used as follows:

| Tag | Purpose | Example |
| --- | --- | --- |
| `#` | Highest-level heading of the document. | `# Functional Test Cases` |
| `##` | Names a major feature module, user story, or test unit. Tip: avoid numbering to keep it clean. | `## User Login Feature` |
| `###` | Identifies a specific feature or related group of verifications. | `### Avatar Settings`<br>`### Nickname Rule and Uniqueness Checks` |
| `####` | Further categorizes aspects under a test focus. | `#### Upload Rule Validation`<br>`#### Uniqueness Check`<br>`#### Dynamic Display Formatting` |
| `#####` | Concisely and uniquely describes a test case scenario with its actions/conditions. | `##### User registers with a valid, unregistered email and a compliant password` |
| `######` | Typically titled `###### Expected Result`; describes the measurable system outcome after executing the scenario. | `###### Expected Result`<br>`- Registration successful.`<br>`- System displays "Registration successful" message.` |
A complete example:

```markdown
# User Management Test Cases
## User Registration
### Input Validation
#### Email and Password Rules
##### User registers with a valid, unregistered email and compliant password
###### Expected Result
- Registration succeeds.
- System displays “Registration successful, please check your email to activate your account.”
- New user record is created with status "Pending Activation".
##### User attempts to register with existing email (existing@example.com)
###### Expected Result
- Registration fails.
- System displays “Email already registered. Try a different email or log in.”
- No change to user record count in the database.
## User Login
### Username and Password Login
#### Credential Validation
##### Activated user logs in with correct credentials (user@example.com)
###### Expected Result
- Login successful.
- Redirect to user dashboard.
- User session is correctly initialized.
##### User attempts to log in with non-existent email
###### Expected Result
- Login fails.
- System shows “Account does not exist or password is incorrect.”
```

- Use version control for managing test case files.
- Regularly update test cases to reflect product changes.
- Maintain consistent structure and naming.
- Track test execution results promptly.
- Periodically review and optimize test cases.
- Generate test case files under: `test-cases/{version}/{feature-name}-test-cases.md`
- Markdown Format: Follow the H1–H6 hierarchy strictly, especially for tools like XMind to parse correctly.
- Clarity & Brevity: Scenario and title text should clearly describe the test condition and action.
- Missing Step Details: No dedicated section for "steps," so H5 titles must convey the operation and conditions succinctly. Split into smaller scenarios if complex.
- Verifiability: Expected results must be specific and testable.
- Independence: Each test scenario should be executable independently.
- Test Data: Include critical test data in the H5 title or under the relevant H4 if needed.
- XMind Compatibility: Each markdown header level becomes a node in XMind. Ensure syntax correctness.
- Independence: Each test case should run independently.
- Repeatability: Should yield consistent results on repeated execution.
- Verifiability: Outcomes must be specific and measurable.
- Clarity: Descriptions must be unambiguous.
- Completeness: Include all information needed for execution.
- Coverage First: Prioritize test coverage to ensure full feature validation.