What Else Does Application Lifecycle Management Need?


In his CM: the Next Generation series, Joe Farah gives us a glimpse into the trends that CM experts will need to tackle and master based upon industry trends and future technology challenges.

As we start a new year, the CM/ALM world continues to evolve.  We've seen a lot of migration to open source solutions, but we've also seen, in the past year, some impressive commercial ALM releases, especially from IBM (RTC) and from Neuma (CM+7).  It appears clear that those who can do with less will move from lower end commercial solutions to open source.  Those who need more comprehensive and integrated solutions, targeted at the wider project team will continue to seek them commercially.

I thought it might be fun to start off this column for 2011 with a look, not at CM/ALM functionality itself, but at what else ALM needs - the underpinnings and structures that are in place within a CM/ALM tool, which help give the tool significant character.  Rather than pointing at any one tool, or even characterizing tools as a whole, I'll present this month's contribution in the form of ALM tool requirements.  These are requirements for adopting a CM/ALM tool that fall outside the specifics of the CM/ALM functions themselves.

So for example, rather than focusing on the ability to automatically generate baselines, do fancy merges or create an elaborate branching scheme, we'll focus more on things such as ease-of-use, traceability navigation, and process support.

So that you may use these requirements to evaluate your current tools in these non-functional areas, I've included a way for you to score your current solution.  First, define what you see as your solution, and then work through the criteria to see how well it fits each one.  I'd be interested in hearing results.  Just post the solution description and your score at the end of the article.  Hopefully we'll get a representative sample.  After we get a few postings, perhaps I'll share my own environment's score.

In my opinion, it is in these areas, more than in the CM/ALM functional requirements, that we will see the biggest improvements over the coming year(s).  Yes, they are functional areas of the tool, but not requirements of each ALM function per se.  If you're looking at a prospective tool, look closely at these areas.  We would love to see your results, and I'll gladly publicize a spreadsheet of results if I hear from at least 3 of you. I'll post it against this article.

The scoring criteria are my own.  You may not agree with them, and your comments are more than welcome.  There is no weighting applied, so each criterion stands on its own merit.  If you have other criteria you think should be added to the list, let's hear from you too.

The evaluation is heavily geared toward ALM, rather than strict CM or SCC systems.  All criteria are evaluated on a scale of 1 to 5.  Pick the closest one or use .5 modifiers if you wish.  In some cases, the criteria are clear.  In others, there is a range of factors to consider.  In this case, an estimate of the percentage of factors your solution excels at will do.  However, results will vary because "excels" is subjective.
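For the factor-based criteria, one rough way to turn "the percentage of factors your solution excels at" into a 1-to-5 rating (with the optional .5 modifiers) is a simple linear mapping.  The sketch below is just one reasonable interpretation, not a prescribed formula:

```python
def factor_score(factors_excelled, total_factors):
    """Map the fraction of factors excelled at onto a 1-to-5 scale in 0.5 steps.

    This linear mapping is one reasonable reading of the guidance above,
    not a prescribed formula.
    """
    fraction = factors_excelled / float(total_factors)
    raw = 1 + 4 * fraction            # 0% of factors -> 1, 100% -> 5
    return round(raw * 2) / 2         # snap to the nearest 0.5 step

# Example: excelling at 7 of the 11 "Process Maturity" factors
print(factor_score(7, 11))            # -> 3.5
```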

End-to-end Functionality Coverage
OK, we're not looking at ALM functionality in detail here.  Rather, we're looking only at which areas of ALM your solution covers (i.e., covers sufficiently that you don't need additional tools to support that function).

Scoring:  Start with 5 and subtract 1 for every 3 functions missing (a small scoring sketch follows the list):

  • Multiple product management
  
  • User management
  
  • Requirements management
  
  • Document management
  
  • Problem/defect management
  
  • Feature/activity/project management
  
  • Version control/baseline management
  
  • Change package management
  
  • Build/iteration management
  
  • Test case management
  
  • Test suite management
  
  • Release management
  
  • Deployment/site management
  
  • Customer request tracking  
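For those who like to compute things mechanically, here is a minimal sketch of the scoring rule above; the floor of 1 is my assumption, since the scale bottoms out there:

```python
ALM_FUNCTIONS = [
    "Multiple product management", "User management", "Requirements management",
    "Document management", "Problem/defect management",
    "Feature/activity/project management", "Version control/baseline management",
    "Change package management", "Build/iteration management",
    "Test case management", "Test suite management", "Release management",
    "Deployment/site management", "Customer request tracking",
]

def coverage_score(covered):
    """Start at 5 and subtract 1 for every 3 functions the solution does not cover."""
    missing = sum(1 for f in ALM_FUNCTIONS if f not in covered)
    return max(1, 5 - missing // 3)   # floor of 1 assumed; the scale ends at 1

# Example: a solution covering only version control, builds, and defect tracking
print(coverage_score({"Version control/baseline management",
                      "Build/iteration management",
                      "Problem/defect management"}))      # 11 missing -> 5 - 3 = 2
```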

Reliability/Availability
There are many ways to evaluate reliability: How many modes of redundancy are there?  What are the disaster recovery capabilities? What is the mean time between server restarts?  And so on.  But we're going to put these aside and bring it down to nuts and bolts.

The bottom line here is the question of, in a 100 user environment, how many user hours are lost per year due to non-availability of the solution? It might be because of upgrades, problems, or even network outages.  If you have a smaller/larger environment for which you have figures, scale your actual figures (i.e., divide by your number of users and multiply by 100).  If you don't keep these metrics, take a few minutes to think about outages.  Did they affect one person (1 hour/hour) or everyone (100 hours/hour)?
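If you don't track this formally, a quick back-of-the-envelope normalization will do; here is a sketch with made-up outage figures:

```python
def user_hours_lost_per_100(outages, user_count):
    """Normalize lost user-hours to a 100-user environment.

    `outages` is a list of (duration_hours, users_affected) pairs.
    """
    lost = sum(duration * affected for duration, affected in outages)
    return lost * 100.0 / user_count

# Made-up example: a 40-user shop with a 4-hour full outage, a 2-hour upgrade
# window affecting everyone, and one user blocked for a day by a client problem
outages = [(4, 40), (2, 40), (8, 1)]
print(user_hours_lost_per_100(outages, 40))   # 620.0 user-hours per 100 users -> score 4 (<2K)
```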

Scoring: Number of user hours down per 100 users per year:

  1. >10K  
  2. <10K  
  3. <5K   
  4. <2K  
  5. <500

Global Operation at Multiple Sites
Does your solution support multiple sites?  We're not talking about a single function of your solution; we're talking about your whole solution.  It might be OK for some requirements management functions used by only one person not to be supported across multiple sites, but it's not OK for CM, testing, development, etc.  Instead, ask if you can go to another site and still perform your job.

Scoring:
 

  1. Single site operation
 
  2. Multiple site through partition/resync operation
 
  3. Central site operation with web/remote client
 
  4. Multiple site full operation from any site
 
  5. Full multiple site and can keep data from select sites

Low Level of Operational Administration
I've seen some great tools that are simply too expensive to operate.  That precludes usage by the majority of smaller shops.  How does yours measure up?  You may need someone trained as a backup administrator, but if that's all they are, you shouldn't count them as a separate administrator.  If you have 10 people each spending 25% of their time on CM/ALM operations that keep the solution running, that's 2.5 administrators.
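The arithmetic is just a weighted sum of time spent; here is a sketch with hypothetical staffing numbers:

```python
def admin_equivalents(time_fractions):
    """Sum the fraction of each person's time spent keeping the CM/ALM solution running."""
    return sum(time_fractions)

def admins_per_100_users(time_fractions, user_count):
    return admin_equivalents(time_fractions) * 100.0 / user_count

# Example from the text: 10 people each spending 25% of their time -> 2.5 administrators
print(admin_equivalents([0.25] * 10))             # 2.5
# Hypothetical: those same 10 people support 200 users -> 1.25 admins per 100 users (score 2)
print(admins_per_100_users([0.25] * 10, 200))     # 1.25
```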

Scoring:


  1. >2 administrators per 100 users
 
  2. <= 2 administrators per 100 users
 
  3. <= 1 administrator per 100 users
 
  4. <= 0.5 administrators per 100 users
 
  5. <= 0.1 administrators per 100 users

Rapid Installation/Upgrade and Data Population
(Based on 10,000 records, 2,500 files x 2 revisions)

The solution may be running now, but if you had to start from scratch, how long would it take you to put the solution in place, assuming you have good knowledge of how to use it?  Look at the time it takes to do an installation (some tools take just a few minutes).  If you have to install on multiple machines, multiply the average install time by the number of machines.  What about an upgrade to a new major release: how long will it take?  Again, multiply if you have multiple machines to update.  And what about initial data population?  We're talking about source code, documents, problem reports, etc.  But to be fair, normalize your result based on 10,000 records and 2,500 files with an average of 2 revisions each.
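To tally this, something like the sketch below will do.  How to scale data-population time to the 10,000-record / 2,500-file baseline isn't spelled out above, so the proportional scaling here is my own assumption:

```python
def total_install_hours(hours_per_machine, machine_count):
    """Multiply the average per-machine install (or upgrade) time by the machine count."""
    return hours_per_machine * machine_count

def normalized_population_hours(actual_hours, record_count, file_count):
    """Scale actual data-population time to the 10,000-record / 2,500-file baseline.

    Simple proportional scaling, weighting records and files equally (an assumption).
    """
    scale = ((10000.0 / record_count) + (2500.0 / file_count)) / 2
    return actual_hours * scale

# Hypothetical numbers: 30-minute installs on 6 machines, and 5 hours to load
# 40,000 records and 10,000 files (2 revisions each)
print(total_install_hours(0.5, 6))                    # 3.0 hours of installs
print(normalized_population_hours(5, 40000, 10000))   # 1.25 hours, normalized
```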

Scoring:
 

  1. Install, upgrade, data population each > 1 day
 
  2. Install, upgrade, data population any one < 1 day
 
  3. Install, upgrade, data population all in < 1 day
 
  4. Install, upgrade, data population all in < 3 hours
 
  5. Install, upgrade, data population all in < 1 hour

Level of Integration Across Functions
It's great to have multiple functions supported in your ALM tool.  But if each function (apart from the actual process) has to be learned separately, that's a big learning curve.  If each has to be supported separately on separate infrastructure, that's more learning curve, not to mention more difficulty of use.  How cleanly integrated are your ALM functions?  Look primarily at end-users, but also at administrators, CM staff and other users.  By multiple user interfaces, we don't mean separate role-based interfaces.  Instead, we are referring to a look and feel that differs across functions, or having to exit one interface and enter another to perform a role.  Either of these constitutes multiple interfaces.  For databases/repositories, your source code, documentation, problem records, baselines, etc. all have to be considered.  If you have separate servers for each of these, those are separate databases.  If your database has multiple servers, but is otherwise consistent (allowing consistent backups), you may consider it a single database.

Scoring:
 

  1. Multiple user interfaces, multiple databases
 
  2. Multiple user interfaces or multiple databases (not both)

  3. Single UI, single repository
 
  4. Traceability across functions, single UI/repository
 
  5. Dashboards, traceability, full reporting across functions

Process and Workflow Capabilities
Process is often buried in scripts.  This makes the process difficult to see and difficult to change.  In this case I've given you two extremes to choose from. Figure out which you're closest to and apply a 1 to 5 rating based on your best guess.

Scoring:
 

  1. Process/workflow supported through scripting

  5. All of the following:

     • State/transition-based record flow with rules/triggers

     • Role-based UI definition and data access permissions

     • Cross-function rules/triggers

     • Ability to define and customize new roles

     • On/off settings for common email and data change logging functions

     • Process definition common across all functions

     • Scripting, and the ability to invoke 3rd party tools from your scripts

     • Flow graph editing to define process, and flow graph generation from the process definition

     • Digital authorization of approval/sign-off/review

Process Maturity
Whereas the last topic dealt with process capabilities, this one deals with the maturity of the out-of-the-box process.  If the out-of-the-box process is customized through point and click selections, you may consider those options part of the out-of-the-box solution. Not so if you have to write scripts for additional options.

For evaluation, look at the factors given below and do a gut-feel evaluation, on a scale of 1 (low) to 5 (high), of how well your solution's process maturity complies.  If you have a solution that you've evolved over time, consider the effort you've put into it: a large process-customization effort is not out-of-the-box maturity.  But if you adopted a mature in-house solution from elsewhere in the organization, you can consider the maturity at which you adopted it.

Scoring Factors:
 

  • Good out of box process across all functions
  
  • Process minimizes risk of unauthorized resource waste
 
  • Full traceability of all process steps and artifact linkages
  
  • Automatic capture of context-specific data (i.e., which product, release, user, date)
  
  • Automation level of CM functions in general (baseline generation, functional audit, identification, etc.)
  
  • Support for agile and traditional methods
  
  • Change packaging support  (this is functional, but essential to process maturity)
  
  • Clear tracking of product/release/iteration backlogs
  
  • Visibility of state/transition flow and workflow
  
  • Promotion levels clearly defined and customizable
  
  • Good set of initial role definitions, easily customized

Documentation
Documentation refers, not to the ability of your solution to handle documentation, but rather to how well your solution is documented for its users.  Again, we'll do a generic scoring (1 low to 5 high) based on a consideration of the factors.  

Scoring Factors:
 

  • Completeness of user, reference and customization guide
  
  • Completeness of process and administration guides
  
  • Concepts/overview documentation
  
  • Getting started documentation
  
  • On-line (in-tool) documentation/guidance
 
  • Web-site and FAQ, including user forum
  
  • Video demos and video FAQ responses
  
  • Screen shots of typical useful situations
  
  • Clarity and ease of finding information

Extent of Potential Customization
Customization is a tricky subject.  Some claim to have extensive customization because there is a scripting tool (others, because the tool runs on a general-purpose computer).  Use reasonableness tests here.  Can you customize your solution in the following areas, or are you stuck with what the solution provides, apart from a few exceptions?

Scoring Factors:
   

  • Terminology customization
   
  • Dialog panels to match exact end-user requirements
  
  • Content of forms and selections on forms
   
  • State/transition flow/diagrams, rules and triggers
  
  • User interface pull-down, pop-up menus, tool bars
  
  • User interface status bars
   
  • User interface layout, command access, panel sizing
   
  • To-do lists, hot lists, quick links (adding, defining, removing)
   
  • Dashboard creation/modification
   
  • Canned reports (can you add in your own canned reports?)
  
  • Role definitions, new roles
 
  • User interface by project, role, user
   
  • In-application guidance/help
   
  • Data states, data values, data colors
  
  • Data schema:  add/change/remove fields
   
  • Data schema:  add/change/remove tables
   
  • Add-in custom native functions (mini-apps) such as lab equipment tracking
   
  • Integration with 3rd party tools


Ease of Customization
How quickly can you customize your solution?  You've just seen what the extent of customization can be.  Maybe you can add a new field in a minute, but it takes a few hours to add a new menu item.  What do you do most frequently, or what sort of customizations would you like to do most frequently?  How long do they take on average?

Scoring:
   

  1. Typically weeks for a customization (or next release)

  2. Typically several days for a customization

  3. Most customizations typically in 1 day

  4. Most customizations in a couple of hours or so

  5. Most customizations < 30 minutes

Data Navigation/Traceability
You may have great functionality in your ALM tool, and your solution may accurately capture the information required.  But if you can't navigate that information easily, you're in trouble.  Keep in mind that most data navigation is neglected not because the tools don't allow it, but because the tools make it so difficult that it is seldom performed.  This causes reduced productivity and quality.  If you can point-and-click, but then have to wait a minute to get the information, forget it.  The capability, though present, will seldom be used.

Scoring factors:

  • Point and click navigation of reference links
   
  • Automatic population of traceability data where possible
   
  • Customized navigation of traceability (i.e., typical data paths traced)
   
  • Easy scrolling through records
   
  • Arbitrary specification of records to navigate (i.e., subset selection for navigation)
   
  • Interactive graphs and tables with drill-down
   
  • Ability to enforce traceability links (i.e., make sure that the data is captured)
   
  • Hierarchical navigation (WBS, org chart, source code)
   
  • Historical navigation (revisions, changes, etc.)
   
  • Requirements/test case traceability matrix
  
  • Data filtering and data search
   
  • Text content filtering/search (i.e., source code, notes)

Dashboard Capabilities
Dashboards have evolved rapidly in the past 2 years.  They are a central focus for dealing with process and information.  Some dashboards simply present a static picture of a point in time.  Others are dynamic and allow customization, as well as the introduction of variables within the dashboard (i.e., which user, release, product, etc. do I want to look at?).  Dashboards tend to work well when they are generic or role-specific.  Consider what information is necessary to perform a role.  What's the best way to present this information to maximize the productivity of that role?  In some cases, a dashboard may be referred to as a work station, as it lets a person perform their work (i.e., peer reviews, build management, etc.).  Again, consider the factors, dream big, and choose a reasonable rating for your tool.

Scoring factors:

  • Significant pre-defined dashboards (i.e., out-of-box)
   
  • Customization of pre-defined dashboards allowed
   
  • Creation of custom dashboards
   
  • Change context within a dashboard (i.e., select new user, product, release)
   
  • Arbitrary items on dashboard (i.e., variety of graphs, tables, indicators, data summaries)
   
  • Ability to run meetings from a dashboard (consider your PCB, or CCB meeting in particular)
   
  • Fast presentation and frequent refresh rate (if you have to wait, forget it - on-demand refresh is good)
   
  • Easy drill-down, and direct action capability (i.e., promote a change, change priority, etc.)

Reporting
There will be many variations at this point.  Some people don't want any reports, just great dashboards.  Others live and die by reports, often because the tools are too difficult/technical for executives (too bad - this shouldn't be the case).  So again, a number of factors are presented.  A report can be viewed interactively (HTML, for example), but does not generally fall into the category of dashboards (even though HTML5 supports some great dashboard technology).

Scoring factors:
   

  • Valuable mix of pre-canned reports
  
  • Easy, interactive report generation
   
  • Multiple reporting formats (HTML, text, XML, interactive)
   
  • Context-based reporting options (i.e., for my staff, the product, and release I'm working with, etc.)
   
  • Org-chart based reporting (i.e., for a department/group)
   
  • Report scripting capability (can I get exactly what I need out?)
   
  • Live reports without (explicit) tool access (i.e., directly from the file system without learning the tools)
   
  • Automated report generation
   
  • Hierarchical reports with roll-ups/sub-totals

Per User License Cost for Full Functionality
This is a bit easier:  all options are in, all server costs are in, and annual maintenance is out.  What is the per-user cost for the tool?  If you use floating licenses, consider what the vendor recommends as a user:license ratio and divide the cost of the floating license appropriately.  This excludes operational costs, consulting, customization, etc. - just the license costs, but including all servers required, and all database or other infrastructure.  It does not include the hardware/OS platform costs, though.  Full functionality includes all multiple site access functionality, backup technology, etc.  If you want to consider lowest license cost for partial functionality, please split this into 14.a (full) and 14.b (lite).
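With floating licenses, the arithmetic is simple division; here is a sketch with hypothetical prices:

```python
def per_user_license_cost(floating_license_cost, users_per_license,
                          server_license_costs, user_count):
    """Per-user cost: the floating license spread over its recommended user count,
    plus each user's share of server/database/infrastructure licenses."""
    per_user_client = floating_license_cost / float(users_per_license)
    per_user_servers = sum(server_license_costs) / float(user_count)
    return per_user_client + per_user_servers

# Hypothetical example: $6,000 floating licenses at a recommended 4:1 user:license
# ratio, plus $20,000 of server/database licenses shared across 100 users
print(per_user_license_cost(6000, 4, [15000, 5000], 100))   # 1700.0 -> score 3 (<$2500)
```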

Scoring:
 

  1.  >$5000  
  2. <$5000  
  3. <$2500  
  4. <$1000  
  5. <$200

Per User Annual Support/Maintenance/Upgrades

Annual support typically runs near 20% of license fees.  Some vendors charge no annual support but then charge per-incident fees.  Some charge lower annual support but do not cover the cost of upgrades.  Assuming you upgrade every 2 years (i.e., include 1/2 the license cost of the upgrade), and assuming you have 24 incidents per year, what will you pay as your annual support and upgrade fees?  This is a per-user fee.  It does not include consulting or customization, although it does include upgrade support costs that are not internal.
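Putting those assumptions together (maintenance percentage, half an upgrade per year, per-incident fees where they apply), here is a sketch with hypothetical vendor pricing; treating the upgrade cost as half the license cost per year is my reading of the assumption above:

```python
def annual_support_per_user(license_cost_per_user, maintenance_rate,
                            per_incident_fee, incidents_per_year, user_count,
                            upgrades_covered):
    """Annual per-user support/maintenance/upgrade cost under the stated assumptions:
    an upgrade every 2 years (half an upgrade per year) and 24 incidents per year."""
    cost = license_cost_per_user * maintenance_rate
    cost += per_incident_fee * incidents_per_year / float(user_count)  # spread incident fees over users
    if not upgrades_covered:
        cost += 0.5 * license_cost_per_user          # half the upgrade license cost each year
    return cost

# Hypothetical vendor: $2,000/user licenses, 15% maintenance that excludes upgrades,
# $500 per-incident support, 24 incidents/year across 100 users
print(annual_support_per_user(2000, 0.15, 500, 24, 100, False))   # 1420.0 -> score 1 (>$1000)
```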

Scoring:

  1. >$1000
  2. <$1000
  3. <$500 
  4. <$200 
  5. <$50

Scalability:  Number of Users
Let's see how many ways we can slice and dice scalability.  We have three separate sections for this.  For number of users, the question is how many users you can support before you have to add a new server.  Servers should be standard technology (not top-of-the-line) for the purposes of this evaluation.  You may need 3 servers to get your first 10 users going, but might not need another server until user 50.  The result would be 50, assuming you could then get to supporting 100 users without adding yet another server.

Scoring (number of users before adding a new server):
  

  1. <20 
  2. <100  
  3. <500   
  4. <2000  
  5. >2000

Scalability: Number of Files
Maybe you can support 1000 users, but only as long as the number of files stays low.  Not impressive.  This measure identifies when you have to add a new server to support the number of files that you have.  If you can support half a million files with a single server, that's great.  Assume 100 average users on the system (i.e., it's no good if you can support 1 million files as long as you only have 1 user).  Again, if you can go to 200K files on one server, but need 3 servers to support 400K, you may want to average your results over a few servers.
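Averaging over servers is straightforward; here is a sketch of the 200K/400K example, under the assumption that a simple per-server average is what's intended:

```python
def files_per_server(total_files, server_count):
    """Average file capacity per server once you've had to add servers."""
    return total_files / float(server_count)

# Example from the text: 400K files needing 3 servers
print(files_per_server(400_000, 3))   # ~133,333 files per server -> score 3 (<200K)
```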

Scoring (number of files per server):

  1. <10K  
  2. <50K  
  3. <200K 
  4. <500K 
  5. >500K

Scalability:  Number of Data Records
In ALM, we're not just dealing with files, as in source code control.  We're dealing with a multitude of data records. We're also dealing with a multitude of data relationships.  For the purpose of evaluation, ignore records dealing strictly with relationships.  So if you have an RDBMS, and you need a separate table to relate change packages to problems, you can ignore those records.  Instead, count how many problems, documents, file issues, change packages/updates, activities/tasks, requests, test cases, test case results, etc. that you have.  How many records can be supported by your solution before you have to add a new server? Again, assume 100 average users doing their queries, CM work, etc.

Scoring:
 

  1. <100K 
  2. <250K 
  3. <1M
  4. <10M 
  5. > 10M

Ease of Use
If you have a great CM tool that is difficult to use, you may get by with appropriate training, though you may simply face a rebellion.  If you have a great ALM tool that is difficult to use, it's not a great ALM tool: ALM crosses too many disciplines for a difficult tool to be tolerated.  A number of the factors here have appeared above in various places.  This is an overall consideration of ease of use.  This is where the biggest focus has been, and will be, in ALM, and indeed in all technology, over the next decade (as well as the past).

Scoring factors:

  • Speed of tool response
  
  • User/Role-specific user interface
  
  • Customization of terminology to corporate culture
  
  • Minimal number of key clicks per operation
  
  • Use of intelligent and recently used defaults in forms
  
  • Good on-line process and tool guidance
  
  • Small training curve to use the tool
 
  • Needed information visible / single click away
  
  • Point-and-click traceability
  
  • Role-based dashboards instead of multiple panels/queries
  
  • Personal customization of tool interface

Cross Platform User Capability
This one is straightforward: what platforms can your users use?  Some platforms are excluded here (notably mainframes and smart phones); they are less significant at this point in time than Unix (including Linux), Windows, and Web interfaces.

Scoring:
  

5: Similar Web and native Windows/Unix interfaces
  
4: Web access and either Windows/Unix interfaces
  
3: Exclusively Web or exclusively Windows/Unix interface
  
2: Web-only interface
  
1: Windows-only or Unix-only interface (or other native interface)

Cross Platform InterOperation of Server and Client
Now let's move on to the server and related client restrictions.  ALM is interoperable if servers and clients can, during normal operation, mix and match platforms across the 3 boundaries:  Windows/Unix, 32-bit/64-bit architectures, and big-endian/little-endian architectures.  Partial interoperability comes if servers and clients can mix and match across at least two of these boundaries.  Platform independence means the ALM solution can run on any platform (but not necessarily interoperably).  If you're stuck on the platform you start with, then that's worse.  Keep in mind that proper operation assumes that line-end characters work properly (i.e., for interoperable or switchable solutions).

Scoring:

5: Fully interoperable Win/Unix; 32/64, big/little endian
4: Partially interoperable (2 of 3)
3: Can run on/switch to any platform but not interoperably
2: Can run on any one platform but not interoperably
1: Must run on a specific platform

There are 21 areas of ALM that are significant.  A next generation (3G) solution will excel in these areas and will typically have a score in excess of 80 (out of a possible 105).  A 4G solution will approach 100.  However, keep in mind that these criteria alone do not make a next generation solution.  There are dozens of other criteria that must be met when the functions are examined in detail:  workspace support, developer-oriented capabilities, project management capabilities, branching, etc.

However, these dozens of criteria are usually far easier to fit on top of a Next Gen ALM platform than vice versa.
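Totalling a self-assessment is then trivial; the scores below are purely hypothetical, one per evaluation area:

```python
# Purely hypothetical self-assessment: one 1-to-5 score per evaluation area
scores = [4, 3.5, 4, 5, 4, 4, 4, 3.5, 3, 4, 4.5,
          4, 3, 4, 3, 4, 4, 4, 3, 4, 3.5]

total = sum(scores)
print(f"{total} / {5 * len(scores)}")   # 80.0 / 105 -> right at the 3G threshold
```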

Vendors, your input (i.e., how do you assess your own ALM tools?) is more than welcome.  You may want to evaluate your own tools before someone beats you to the punch.  I will place raw results (from your feedback) into the spreadsheet mix.  You may think some of these categories are difficult to assess, but the scoring is generally clear enough and flexible enough that you should be able to do a first pass in a few minutes (if you're familiar with your solution).
