Traceability is a cornerstone of change management. It tells you why things have changed, lets you identify the level of verification applied to particular changes, and links the requirements and requests of customers and product managers down through code, test cases and test run data.
Here's an example of a fully traced system development.
- Requirement R1: Build a CM system
- Request Q1: Make it easier to use
- Request Q2: It's not working, we need it to work
- Activity A1: Implement R1
- Activity A2: Implement Q1
- Problem P1: It doesn't work (spawned from Q2)
- Problem P2: Testing failed - see R1
- Change C1: Implement A1 part 1
- Change C2: Implement A1 part 2
- Create Build B1 using C1 and C2
- Change C3: Implement A2 and fix P1 and P2
- Create Build B2 using B1 and C3
- Testcase T1: Test the system to ensure it meets R1
- Testrun TR1: System failed T1 run against B1 - created P2
- Testrun TR2: System passed T1 run against B2
It doesn't really matter which system you're developing; you can probably shoe-horn your system into this template. The problem is that, although I have full traceability, the granularity of my traceability makes the data all but useless. Granularity will certainly dictate how useful the data is.
Although the above example gives little traceability information, it does cover a lot of bases. It identifies that requirements (Rs) and requests (Qs) are needed for development to proceed. It shows that the Rs and Qs spawn design activities (As) and problem reports (Ps), that changes (Cs) address the As and Ps, that builds (Bs) are created from certain Cs, that test cases (Ts) address Rs, and that test runs (TRs) run Ts against Bs and spawn Ps where necessary.
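The link model described above can be sketched as a small upstream-link graph. This is a minimal illustration, not any particular CM tool's schema; the `Record` type, its `links` field and the `trace_up` helper are all hypothetical names chosen for this example:

```python
from dataclasses import dataclass, field

# Hypothetical record type; each record carries the ids of the
# upstream records it traces to (its "links"). Illustrative only.
@dataclass
class Record:
    id: str
    links: list = field(default_factory=list)

# The example's trace links: As/Ps trace to Rs/Qs, Cs address As and Ps,
# Bs are built from Cs (and earlier Bs), Ts address Rs, TRs run Ts on Bs.
records = {
    "R1": Record("R1"),
    "Q1": Record("Q1"),
    "Q2": Record("Q2"),
    "A1": Record("A1", ["R1"]),
    "A2": Record("A2", ["Q1"]),
    "P1": Record("P1", ["Q2"]),
    "P2": Record("P2", ["R1"]),
    "C1": Record("C1", ["A1"]),
    "C2": Record("C2", ["A1"]),
    "B1": Record("B1", ["C1", "C2"]),
    "C3": Record("C3", ["A2", "P1", "P2"]),
    "B2": Record("B2", ["B1", "C3"]),
    "T1": Record("T1", ["R1"]),
    "TR1": Record("TR1", ["T1", "B1"]),
    "TR2": Record("TR2", ["T1", "B2"]),
}

def trace_up(rec_id: str) -> set:
    """Follow links upstream to find every record this one traces back to."""
    seen = set()
    stack = [rec_id]
    while stack:
        for parent in records[stack.pop()].links:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# The passing test run TR2 traces all the way back to R1, Q1 and Q2.
print(sorted(trace_up("TR2")))
```

Even this toy graph shows the point about granularity: asking "what did TR2 verify?" returns every record in the system, which is technically complete traceability but tells you almost nothing specific.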