Guide to Reliability of Electrical/Electronic Equipment and Products--Software (part 2)

Level 2: The Repeatable Process Level

Organizations at the initial process level can improve their performance by instituting basic project controls. The most important are project management, management oversight, quality assurance, and change control.

The fundamental role of a project management system is to ensure effective control of commitments. This requires adequate preparation, clear responsibility, a public declaration, and a dedication to performance. For software, project management starts with an understanding of the job's magnitude. In all but the simplest projects, a plan must then be developed to determine the best schedule and the anticipated resources required. In the absence of such an orderly plan, no commitment can be better than an educated guess.

A suitably disciplined software development organization must have senior management oversight. This includes review and approval of all major development plans prior to the official commitment. Also, a quarterly review should be conducted of facility-wide process compliance, installed quality performance, schedule tracking, cost trends, computing service, and quality and productivity goals by project. The lack of such reviews typically results in uneven and generally inadequate implementation of the process as well as frequent over-commitments and cost surprises.

A quality assurance group is charged with assuring management that software work is done the way it is supposed to be done. To be effective, the assurance organization must have an independent reporting line to senior management and sufficient resources to monitor performance of all key planning, implementation, and verification activities.

Change control for software is fundamental to business and financial control as well as to technical stability. To develop quality software on a predictable schedule, requirements must be established and maintained with reasonable stability throughout the development cycle. While requirements changes are often needed, historical evidence demonstrates that many can be deferred and incorporated later. Design and code changes must be made to correct problems found in development and test, but these must be carefully introduced. If changes are not controlled, then orderly design, implementation and test is impossible and no quality plan can be effective.

The repeatable process level has one other important strength that the initial process does not: it provides control over the way the organization establishes its plans and commitments. This control provides such an improvement over the initial process level that people in the organization tend to believe they have mastered the software problem. They have achieved a degree of statistical control through learning to make and meet their estimates and plans. This strength, however, stems from their prior experience at doing similar work.

Some of the key practices for software project planning are:

The project's software development plan is developed according to a documented procedure.

Estimates for the size of the software work products are derived according to a documented procedure.

The software risks associated with the cost, resource, schedule, and technical aspects of the project are identified, assessed, and documented.

Organizations at the repeatable process level thus face major risks when they are presented with new challenges. Examples of the changes that represent the highest risk at this level are the following:

Unless they are introduced with great care, new tools and methods will affect the process, thus destroying the relevance of the intuitive historical base on which the organization relies. Without a defined process framework in which to address these risks, it is even possible for a new technology to do more harm than good.

When the organization must develop a new kind of product, it is entering new territory. For example, a software group that has experience developing compilers will likely have design, scheduling, and estimating problems when assigned to write a real-time control program. Similarly, a group that has developed small self-contained programs will not understand the interface and integration issues involved in large-scale projects. These changes again destroy the relevance of the intuitive historical basis for the organization's process.

Major organizational changes can also be highly disruptive. At the repeatable process level, a new manager has no orderly basis for understanding the organization's operation, and new team members must learn the ropes through word of mouth.

Level 3: The Defined Process

The key actions required to advance from the repeatable level to the next stage, the defined process level, are to establish a process group, establish a development process architecture, and introduce a family of software engineering methods and technologies.

Establish a process group. A process group is a technical resource that focuses exclusively on improving the software process. In software organizations at early maturity levels, all the people are generally devoted to product work.

Until some people are given full-time assignments to work on the process, little orderly progress can be made in improving it.

The responsibilities of process groups include defining the development process, identifying technology needs and opportunities, advising the projects, and conducting quarterly management reviews of process status and performance.

Because of the need for a variety of skills, groups smaller than about four professionals are unlikely to be fully effective. Small organizations that lack the experience base to form a process group should address these issues by using specially formed committees of experienced professionals or by retaining consultants.

The assurance group is focused on enforcing the current process, while the process group is directed at improving it. In a sense, they are almost opposites: assurance covers audit and compliance, and the process group deals with support and change.

Establish a software development process architecture. Also called a development life cycle, this describes the technical and management activities required for proper execution of the development process. This process must be attuned to the specific needs of the organization, and it will vary depending on the size and importance of the project as well as the technical nature of the work itself.

The architecture is a structural description of the development cycle specifying tasks, each of which has a defined set of prerequisites, functional descriptions, verification procedures, and task completion specifications. The process continues until each defined task is performed by an individual or single management unit.
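As an illustration, a task in such a process architecture can be sketched as a simple record with the four elements named above. The field and task names here are invented for the example, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class ProcessTask:
    """One task in the development process architecture.

    Field names mirror the four elements in the text; an organization
    would tailor both the fields and their contents."""
    name: str
    prerequisites: list   # entry criteria that must hold before the task starts
    description: str      # functional description of the work to be done
    verification: list    # procedures used to verify the task's outputs
    completion: list      # specifications that define when the task is done

# Example: a design-inspection task assigned to a single management unit.
design_inspection = ProcessTask(
    name="Design inspection",
    prerequisites=["Design document baselined", "Inspectors assigned"],
    description="Peer inspection of the module design against requirements",
    verification=["Inspection report filed", "Defect log recorded"],
    completion=["All major defects resolved", "Moderator sign-off"],
)
```

Decomposition continues until every such record can be performed by one individual or one management unit.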

If they are not already in place, a family of software engineering methods and technologies should be introduced. These include design and code inspections, formal design methods, library control systems, and comprehensive testing methods. Prototyping should also be considered, together with the adoption of modern implementation languages.

At the defined process level, the organization has achieved the foundation for major and continuing progress. For example, software teams, when faced with a crisis, will likely continue to use the process that has been defined. The foundation has now been established for examining the process and deciding how to improve it.

As powerful as the Defined Process is, it is still only qualitative: there are few data generated to indicate how much is accomplished or how effective the process is. There is considerable debate about the value of software measurements and the best ones to use. This uncertainty generally stems from a lack of process definition and the consequent confusion about the specific items to be measured.

With a defined process, an organization can focus the measurements on specific tasks. The process architecture is thus an essential prerequisite to effective measurement.

Level 4: The Managed Process

The key steps required to advance from the defined process to the next level are:

1. Establish a minimum basic set of process measurements to identify the quality and cost parameters of each process step. The objective is to quantify the relative costs and benefits of each major process activity, such as the cost and yield of error detection and correction methods.

2. Establish a process database and the resources to manage and maintain it. Cost and yield data should be maintained centrally to guard against loss, to make it available for all projects, and to facilitate process quality and productivity analysis.

3. Provide sufficient process resources to gather and maintain this process database and to advise project members on its use. Assign skilled professionals to monitor the quality of the data before entry in the database and to provide guidance on analysis methods and interpretation.

4. Assess the relative quality of each product and inform management where quality targets are not being met. An independent quality assurance group should assess the quality actions of each project and track its progress against its quality plan. When this progress is compared with the historical experience on similar projects, an informed assessment can generally be made.
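As a sketch of what steps 1 through 3 might look like in practice, the following keeps defect-removal cost and yield data in a small central database. The schema and all figures are invented for illustration.

```python
import sqlite3

# Minimal sketch of a central process database (schema names are invented).
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE defect_removal (
        project TEXT,
        activity TEXT,          -- e.g. 'design inspection', 'system test'
        hours REAL,             -- effort spent in the activity
        defects_found INTEGER   -- yield of the activity
    )""")
con.executemany(
    "INSERT INTO defect_removal VALUES (?, ?, ?, ?)",
    [("A", "design inspection", 120.0, 60),
     ("A", "system test", 900.0, 45)],
)

# Cost per defect by activity -- the kind of cost/yield analysis step 1 calls for.
for activity, cost in con.execute(
        "SELECT activity, SUM(hours)/SUM(defects_found) FROM defect_removal "
        "GROUP BY activity ORDER BY activity"):
    print(f"{activity}: {cost:.1f} hours/defect")
```

Keeping such data centrally, as step 2 recommends, is what makes cross-project analysis like this query possible at all.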

In advancing from the initial process through the repeatable and defined processes to the managed process, software organizations should expect to make substantial quality improvements. The greatest potential problem with the managed process level is the cost of gathering data. There is an enormous number of potentially valuable measures of the software process, but such data are expensive to gather and to maintain.

Data gathering should be approached with care, and each piece of data should be precisely defined in advance. Productivity data are essentially meaningless unless explicitly defined. Several examples serve to illustrate this point:

The simple measure of lines of source code per expended development month can vary by 100 times or more, depending on the interpretation of the parameters. The code count could include only new and changed code or all shipped instructions. For modified programs, this can cause variations of a factor of 10.

Non-comment nonblank lines, executable instructions, or equivalent assembler instructions can be counted with variations of up to seven times.

Management, test, documentation, and support personnel may or may not be counted when calculating labor months expended, with the variations running at least as high as a factor of 7.

When different groups gather data but do not use identical definitions, the results are not comparable, even when it would otherwise make sense to compare them. It is rare that two projects are comparable by any simple measures. The variations in task complexity caused by different product types can exceed 5 to 1. Similarly, the cost per line of code of small modifications is often two to three times that for new programs. The degree of requirements change can make an enormous difference, as can the design status of the base program in the case of enhancements.
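A small calculation shows how much the counting definitions alone can move a productivity figure. All numbers here are invented for illustration.

```python
# Illustrative only: how counting definitions swing a "lines per month" figure.
# The line counts and staffing below are invented for the example.

new_and_changed = 4_000     # lines actually written or modified this release
total_shipped = 40_000      # all shipped lines, unmodified base code included

developer_months = 10       # developers only
all_staff_months = 25       # developers plus test, documentation, management

# Same project, two defensible "productivity" numbers:
narrow = new_and_changed / all_staff_months    # 160 lines/month
broad = total_shipped / developer_months       # 4,000 lines/month

print(f"narrow definition: {narrow:.0f} lines/month")
print(f"broad definition:  {broad:.0f} lines/month")
print(f"spread: {broad / narrow:.0f}x from definitions alone")
```

Neither number is wrong; they simply answer different questions, which is why each measure must be precisely defined before any data are gathered.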

Process data must not be used to compare projects or individuals. Its purpose is to illuminate the product being developed and to provide an informed basis for improving the process. When such data are used by management to evaluate individuals or teams, the reliability of the data itself will deteriorate.

Level 5: The Optimizing Process

The two fundamental requirements for advancing from the managed process to the optimizing process level are:

1. Support automatic gathering of process data. All data are subject to error and omission, some data cannot be gathered by hand, and the accuracy of manually gathered data is often poor.

2. Use process data both to analyze and to modify the process to prevent problems and improve efficiency.

Process optimization goes on at all levels of process maturity. However, with the step from the managed to the optimizing process there is a major change.

Up to this point software development managers have largely focused on their products and typically gather and analyze only data that directly relate to product improvement. In the optimizing process, the data are available to tune the process itself. With a little experience, management will soon see that process optimization can produce major quality and productivity benefits.

For example, many types of errors can be identified and fixed far more economically by design or code inspections than by testing. A typically used rule of thumb states that it takes one to four working hours to find and fix a bug through inspections and about 15 to 20 working hours to find and fix a bug in function or system test. To the extent that organizations find that these numbers apply to their situations, they should consider placing less reliance on testing as the primary way to find and fix bugs.
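The rule of thumb above can be turned into a rough cost comparison. The defect count is invented; only the hour ranges come from the rule of thumb.

```python
# Rough arithmetic on the inspection-vs-test rule of thumb from the text.
defects = 100                     # invented defect count for illustration

inspect_low, inspect_high = 1, 4  # hours to find and fix one defect by inspection
test_low, test_high = 15, 20      # hours to find and fix one defect in test

inspect_cost = (defects * inspect_low, defects * inspect_high)   # (100, 400)
test_cost = (defects * test_low, defects * test_high)            # (1500, 2000)

# Even comparing the extremes, inspections come out well ahead:
worst_case_ratio = test_low / inspect_high    # 3.75x
best_case_ratio = test_high / inspect_low     # 20x

print(f"inspections: {inspect_cost[0]}-{inspect_cost[1]} hours")
print(f"testing:     {test_cost[0]}-{test_cost[1]} hours")
print(f"advantage:   {worst_case_ratio:.2f}x to {best_case_ratio:.0f}x")
```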

However, some kinds of errors are either uneconomical to detect or almost impossible to find except by machine. Examples are errors involving spelling and syntax, interfaces, performance, human factors, and error recovery. It would be unwise to eliminate testing completely since it provides a useful check against human frailties.

The data that are available with the optimizing process give a new perspective on testing. For most projects, a little analysis shows that there are two distinct activities involved: the removal of defects and the assessment of program quality.

To reduce the cost of removing defects, inspections should be emphasized, together with any other cost-effective techniques. The role of functional and system testing should then be changed to one of gathering quality data on the programs.

This involves studying each bug to see if it is an isolated problem or if it indicates design problems that require more comprehensive analysis.

With the optimizing process, the organization has the means to identify the weakest elements of the process and to fix them. At this point in process improvement, data are available to justify the application of technology to various critical tasks, and numerical evidence is available on the effectiveness with which the process has been applied to any given product. It is then possible to have confidence in the quality of the resulting products.

Clearly, any software process is dependent on the quality of the people who implement it. There are never enough good people, and even when you have them there is a limit to what they can accomplish. When they are already working 50 to 60 hours a week, it is hard to see how they could handle the vastly greater challenges of the future.

The optimizing process enhances the talents of quality people in several ways. It helps managers understand where help is needed and how best to provide people with the support they require. It lets the software developers communicate in concise, quantitative terms. This facilitates the transfer of knowledge and minimizes the likelihood of wasting time on problems that have already been solved.

It provides a framework for the developers to understand their work performance and to see how to improve it. This results in a highly professional environment and substantial productivity benefits, and it avoids the enormous amount of effort that is generally expended in fixing and patching other peoples' mistakes.

The optimizing process provides a disciplined environment for software development. Process discipline must be handled with care, however, for it can easily become regimentation. The difference between a disciplined environment and a regimented one is that discipline controls the environment and methods to specific standards, while regimentation applies to the actual conduct of the work.

Discipline is required in large software projects to ensure, for example, that the many people involved use the same conventions, don't damage each other's products, and properly synchronize their work. Discipline thus enables creativity by freeing the most talented software developers from the many crises that others have created.

Unless we dramatically improve software error rates, the increased volume of code to be generated will mean increased risk of error. At the same time, the complexity of our systems is increasing, which will make the systems progressively more difficult to test. In combination these trends expose us to greater risks of damaging errors as we attempt to use software in increasingly critical applications. These risks will thus continue to increase as we become more efficient at producing volumes of new code.

As well as being a management issue, quality is an economic one. It is always possible to do more reviews or to run more tests, but it costs both time and money to do so. It is only with the optimizing process that the data are available to understand the costs and benefits of such work. The optimizing process provides the foundation for significant advances in software quality and simultaneous improvements in productivity.

There are few data on how long it takes for software organizations to advance through the maturity levels toward the optimizing process. What can be said is that there is an urgent need for better and more effective software organizations. To meet this need, software managers and developers must establish the goal of moving to the optimizing process.

Example of Software Process Assessment

This section is an excerpt from an SEI software process assessment (including actual company and assessor dialog) that was conducted for a company that develops computer operating system software. The material is presented to facilitate learning, to identify the pertinent issues in software development up close, to provide a perspective on how an organization deals with the issues/items assessed, and to see the organization's views (interpretation) of these items.

Item: Process Focus and Definition.

Assessment: We do not have confidence in formal processes and we resist the introduction of new ones.

Process focus establishes the responsibility for managing the organization's software process activities.

Process definition involves designing, documenting, implementing, maintaining and, enforcing the organization's standard software development process.

The standard software development process defines the phases and deliverables of the software life cycle and the role of each responsible organization in the life cycle. It defines criteria for completion of each phase and standards for project documents. The standard process is flexible and adaptable, yet it is followed and enforced. The standard process is updated when necessary and improvements are implemented systematically (e.g., through controlled pilot tests).

Process definition also involves managing other "assets" related to the standard software process, such as guidelines and criteria for tailoring the standard process to individual project needs and a library of software process-related documentation. In addition, process related data are collected and analyzed for the purpose of continuous process improvement.

1. We have had experiences with inflexible processes that constricted our ability to do good work. Inflexible processes waste time, lower productivity, involve unnecessary paperwork, and keep people from doing the right thing. People in our company have dealt with inflexible processes and therefore resist the introduction of new processes.

2. Processes are not managed, defined, implemented, improved, and enforced consistently. Like products, key software processes are assets that must be managed. At our company, only some processes have owners and many of our processes are not fully documented. As such, people find it difficult to implement and/or improve these processes.

3. Processes (e.g., the use of coding standards) that are written down are not public, not consistently understood, not applied in any standard way, and not consistently enforced. Even when we define standards, we do not publicize them nor do we train people in the proper use of the standard. As a result, standards are applied inconsistently and we cannot monitor their use or make improvements.

4. We always go for the "big fix" rather than incremental improvements. People find the big fix (replacement of a process with a completely different process) to be disruptive and not necessarily an improvement. People are frustrated with our apparent inability to make systematic improvements to processes.

5. The same (but different) process is reinvented by many different groups because there is no controlling framework. Some software development groups and individual contributors follow excellent practices.

These methods and practices (even though not documented) are followed consistently within the group. However, since each project and development group develops its own methodology, the methods and practices are ad hoc and vary from group to group and project to project. For example, the change control process exists in many different forms, and people across our company use different platforms (Macintosh, SUN, PC, etc.) and different formats for documentation.

6. There is no incentive to improve processes because you get beaten up.

People who try to implement or improve processes are not supported by their peers and management. People do not work on process improvement or share best practices for several reasons, including

There is no reward for doing so.

Such activity is regarded by peers and managers as not being real work.

"Not invented here" resistance.

Why bother; it is too frustrating.

Management rewards the big fix.

7. Our core values reinforce the value of the individual over the process. We encourage individual endeavor and invention but this is often interpreted as being at odds with the use of standard practices.

People in our company believe that formal processes hinder an individual's ability to do good work, rather than see processes as a way to be more productive, enabling them to produce higher quality products. Therefore, we tend to do things "my way" rather than use standard practices. "Processes are okay; you can always work around them" is a typical sentiment.

The lack of process focus and definition affects customer perception of our company as well as our internal activities.

From an internal perspective, we cannot reliably repeat our successes because we do not reuse our processes. Instead of improving our processes, we spend time reinventing our processes; this reduces our productivity and impacts schedules. When people move within our company they spend time trying to learn the unique practices of the new group. When we hire new people, we cannot train them in processes that are not defined.

From the customer perspective, process audits provide the assurance that we can produce quality products. Lack of process definition means that our company may not be able to provide a strong response to such audits, leading to the perception that we are not a state-of-the-art organization and that we need better control over our software development and management practices. This has a direct effect on sales to our current and future customers.

Item: Project Commitment.

Assessment: At every level we commit to do more work than is possible.

Software project planning involves developing estimates for the work to be performed, producing a plan (including a schedule) to perform the work, negotiating commitments, and then tracking progress against the plan as the project proceeds. Over the course of the project, as circumstances change, it may be necessary to repeat these planning steps to produce a revised project plan.

The major problem with our project planning is that everyone, including individual contributors, first-line and second-line managers, and executives, commits to more work than is possible given the time and resources available.

Several factors contribute to this problem. Our initial estimates and schedules are often poor, causing us to underestimate the time and resources required to accomplish a task. Furthermore, we do not analyze the impact nor do we renegotiate schedules when requirements or resources change. These practices are perpetuated because we reward people for producing short schedules rather than accurate schedules.

1. Our planning assumptions are wrong. We make a number of mistakes in our planning assumptions. For example:

We do not allocate enough time for planning. We are often "too busy to plan." We schedule one high-priority project after another without allowing time for postmortems on the previous project and planning for the next project.

We do not allow enough time in our schedules for good engineering practices such as design inspections.

We schedule ourselves and the people who work for us at 100% instead of allowing for meetings, mail, education, vacation, illness, minor interruptions, time to keep up with the industry, time to help other groups, etc.

We do not keep project history, so we have no solid data from which to estimate new projects. We rely almost entirely on "swags."

We "time-slice" people too many ways. Ten people working 10% of their time on a project will not produce one person's worth of work because of loss of focus and time to switch context.
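The capacity and time-slicing points above can be made concrete with a little arithmetic. The overhead and context-switch figures below are assumptions for illustration, not measurements.

```python
# Sketch of why "100% scheduled" plans slip, with assumed overhead figures.
hours_per_week = 40.0

# Overhead that a plan scheduling people at 100% ignores (assumed values):
overhead = {
    "meetings_and_mail": 6.0,
    "education": 2.0,
    "vacation_and_illness": 3.0,   # averaged over the year
    "helping_other_groups": 3.0,
}

available = hours_per_week - sum(overhead.values())   # 26.0 hours
utilization = available / hours_per_week              # 0.65

print(f"plannable hours/week: {available:.1f} ({utilization:.0%})")

# Time-slicing: ten people at 10% each do not add up to one full person,
# because each slice pays a context-switch cost (assumed 25% of the slice).
slice_fraction, switch_cost = 0.10, 0.25
effective = 10 * slice_fraction * (1 - switch_cost)
print(f"ten people at 10% deliver about {effective:.2f} person's worth of work")
```

Under these assumed figures, a plan built on 100% availability overstates capacity by roughly a third before any time-slicing losses are counted.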

2. Commitments are not renegotiated when things change. In our business, priorities change and critical interrupts must be serviced. However, when requirements change and when we reassign resources (people, machines), we do not evaluate the impact of these changes on project schedules. Then we are surprised when the original project schedule slips.

3. It is not okay to have a 4-year end date, but it is okay to have a 2-year end date that slips 2 years. A "good" schedule is defined as a short schedule, not an accurate schedule. There is management pressure to cut schedules. People are rewarded for presenting a short schedule even if the project subsequently slips. There is fear that a project will be canceled if it has a long schedule.

4. We rely on heroic efforts to get regular work done. Heroic effort means long hours, pulling people off their assigned projects, overusing key people, etc. There will always be exceptional circumstances where these measures are required; however, this is our normal mode of operation. We count on both managers and developers putting in these heroic efforts to fix problems, keep regular projects moving forward, work with other groups, write appraisals, do planning, etc.

5. We have a history of unreliable project estimates. Most of our projects miss their schedules and exceed their staffing estimates. It is not uncommon for a project to overrun its schedule by 100 to 200%. It is common for a project to start, be put on hold, and not be restarted for several years.

Unreliable project estimates are taken for granted. Very rarely is there any formal analysis and recognition of the problem, for example, in the form of a project postmortem.

6. There are no company standards or training for project planning. Lack of organizational standards makes it difficult to compare or combine multiple project schedules, which in turn makes it difficult to recognize overcommitment. Lack of training perpetuates bad planning practices.

Our market is becoming more and more competitive, with much shorter product development cycles. When we miss our schedules because of overcommitment, we lose sales because we fail to deliver products within the narrow market window. We also lose credibility with our customers by making promises that we cannot keep. Customers complain that they do not know what new features and products are coming and when.

Overcommitment also impacts our internal operations (which indirectly affects our customers again). Our software development organization operates in crisis mode most of the time. We are so busy handling interrupts and trying to recover from the effects of those interrupts that we sacrifice essential software engineering and planning practices. We are "too busy to plan." We are also too busy to analyze our past mistakes and learn from them.

Overcommitment also has a serious effect on people. Both managers and developers get burned out, and morale suffers as they context switch and try to meet unrealistic schedules. They never feel like they've done a good job, they do not have time to keep up their professional skills, and creativity suffers. Low morale also means lower productivity now and higher turnover in the future.

Item: Software Engineering Practices.

Assessment: We do not invest enough time and effort in defect prevention in the front end of the development cycle.

Software engineering practices are those activities that are essential for the reliable and timely development of high-quality software. Some examples of software engineering practices are design documentation, design reviews, and code inspections. It is possible to develop software without such practices. However, software engineering practices, when used correctly, are not only effective in preventing defects in all phases of the software development life cycle, but also improve an organization's ability to develop and maintain many large and complex software products and to deliver products which meet customer requirements.

The phases of our software development life cycle are requirements definition, design, code, unit testing, product quality assurance (QA) testing, and integration testing. We spend a lot of time on and devote many resources to the backend product QA and integration testing trying to verify that products do not have defects. Much less effort is devoted to ensuring that defects are not introduced into products in the first place. In other words, we try to test in quality instead of designing in quality, even though we know that a defect discovered late in the product development life cycle is much more expensive to correct than one detected earlier. It would be far more cost effective to avoid defects in the first place and to detect them as early as possible.

1. We are attempting to get quality by increasing product testing. Customers are demanding higher quality; we are responding by increasing our testing efforts. But increased product testing has diminishing returns. As noted, it is far more cost effective to avoid defects altogether or at least detect defects prior to product QA and integration testing.

2. We do not recognize design as a necessary phase in the life cycle. Few groups do design reviews and very few groups use formal design methodologies or tools. Code inspections are often difficult because there is no internal design document. Consequently, code inspections often turn into design reviews. This approach is misguided; it assumes that code is the only deliverable. Design work is an essential activity and design inspections have distinct benefits and are necessary in addition to code inspection. Moreover, design is a necessary phase of the life cycle.

3. Our reward system does not encourage the use of good software engineering practices. Developers are primarily held responsible for delivery of code (software) on time. This is their primary goal; all else is secondary. Managers typically ask about code and test progress, not about progress in design activities.

4. We recognize people for heroic firefighting; we do not recognize the people who avoid these crises. Code inspections are a generally accepted practice, but they are not used consistently and are sometimes ineffective. Often, people are not given adequate time to prepare for inspections, especially when code freeze dates are imminent. Also, we do not use the inspection data for anything useful, so many people do not bother entering these data.

Unit testing by developers is not a general practice; it is not required, and is only done at the developer's discretion. Unit testing is sometimes omitted because of tight schedules and many developers rely on product QA testing to find the bugs.
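Developer-level unit testing need not be elaborate to catch defects before product QA. The sketch below uses Python's standard unittest module; parse_version is a hypothetical function standing in for whatever unit a developer would test before hand-off.

```python
# A minimal sketch of developer unit testing with Python's standard
# unittest module. parse_version is a hypothetical example function,
# not part of any product described in the text.
import unittest

def parse_version(text: str) -> tuple:
    """Parse a 'major.minor.patch' string into a tuple of ints."""
    parts = text.split(".")
    if len(parts) != 3:
        raise ValueError(f"expected three components, got {text!r}")
    return tuple(int(p) for p in parts)

class TestParseVersion(unittest.TestCase):
    def test_well_formed(self):
        self.assertEqual(parse_version("2.10.3"), (2, 10, 3))

    def test_rejects_malformed(self):
        # A malformed version must raise, not silently misparse.
        with self.assertRaises(ValueError):
            parse_version("2.10")

if __name__ == "__main__":
    unittest.main()
```

A defect caught by a test like this never reaches the product QA or integration testing phases at all.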

5. Few well-defined processes or state-of-the-art tools exist. Most groups have their own processes; these tend to be ad hoc and are often undocumented. There are very few processes which are well documented and used throughout software development.

Our company has no software design tools other than white boards or paper and pencil and has no standards for the design phase.

Developers have complained about the lack of state-of-the-art tools for a long time and are dismayed at the continued lack of such tools.

6. No formal software engineering training is required. Most developers know very little about software engineering. A few developers have learned about software engineering on their own; some of these have promoted software engineering practices within their own group. We don't offer any formal software engineering training.

7. We have very little internal design documentation. There are few internal design documents because there are no requirements for producing internal design documentation. When internal design documents exist, they are often out of date soon after code has been written because we do not have any processes for ensuring that the documentation is kept current. Furthermore, there is a lack of traceability between requirements, design, and code.

When there is no internal design documentation, developers have two choices: either spend a lot of time doing software archeology (i.e., reading a lot of code trying to figure out why a specific implementation was chosen) or take a chance that they understood the design and rely on testing to prove that they didn't break it.

Lack of internal design documentation makes code inspection much more difficult; code inspections often become design reviews.
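The traceability gap described above can be made mechanically checkable. The sketch below, with hypothetical requirement IDs, design documents, and code modules, shows a minimal traceability mapping and a check that flags requirements with no link to a given artifact type.

```python
# A sketch of requirement-to-design-to-code traceability as a plain
# mapping. All identifiers (REQ-1, file names) are hypothetical; the
# point is that untraced requirements can be found mechanically.
trace = {
    "REQ-1": {"design": ["login-design.md"], "code": ["auth.c"]},
    "REQ-2": {"design": [], "code": ["report.c"]},          # design gap
    "REQ-3": {"design": ["export-design.md"], "code": []},  # unimplemented
}

def untraced(trace: dict, artifact: str) -> list:
    """Return requirement IDs with no link to the given artifact type."""
    return [req for req, links in sorted(trace.items()) if not links[artifact]]

print(untraced(trace, "design"))  # ['REQ-2']
print(untraced(trace, "code"))    # ['REQ-3']
```

Even this crude form makes gaps visible: a requirement with no design link signals missing design documentation before code is written.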

When a defect is found, it means that all of the effort subsequent to the phase where the defect was introduced must be repeated. If a defect is introduced in the design phase and is found by integration testing, then fixing the defect requires redesign, recoding, re-inspecting the changed code, rereleasing the fix, retesting by QA, and retesting by the integration testing staff--all this effort is collectively known as rework.

The earlier a defect is introduced and the farther the product progresses through the life cycle before the defect is found, the more work is done by various groups. Industry studies have shown that 83% of defects are introduced into products before coding begins; there is no reason to believe that we are significantly better than the industry average in this respect. Anything we do to prevent or detect defects before writing code will have a significant payoff.

Rework may be necessary, but it adds no value to the product. Rework is also very expensive; it costs us about $4 per share in earnings. Ideally, if we produced defect-free software, we would not have to pay for any rework.
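The escalation of rework cost can be sketched numerically. The phase names below come from our life cycle; the relative cost multipliers are hypothetical round numbers chosen only to illustrate the shape of the curve, not measured company data.

```python
# Illustrative sketch of defect-cost escalation through the life cycle.
# The multipliers are assumed for illustration, not measured figures.
PHASES = ["requirements", "design", "code", "unit test",
          "product QA test", "integration test"]

# Assumed relative cost to fix a defect, by the phase in which it is found.
RELATIVE_FIX_COST = {
    "requirements": 1, "design": 2, "code": 5,
    "unit test": 10, "product QA test": 25, "integration test": 50,
}

def rework_cost(introduced: str, found: str) -> int:
    """Cost grows with how far a defect travels before detection."""
    if PHASES.index(found) < PHASES.index(introduced):
        raise ValueError("a defect cannot be found before it is introduced")
    return RELATIVE_FIX_COST[found]

# A design defect caught in a design review vs. in integration testing:
print(rework_cost("design", "design"))            # 2
print(rework_cost("design", "integration test"))  # 50
```

Under these assumed multipliers, a design defect that slips through to integration testing costs 25 times as much to fix as one caught in a design review, which is why preventing and detecting defects before coding has such a large payoff.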

It is difficult to hire and train people. People are not eager to have careers in software maintenance, yet that is what we do most of the time. We cannot have a standard training program if every development group uses different software development practices.

Increased testing extends our release cycle. We get diminishing returns on money spent doing testing.

Item: Requirements Management.

Assessment: We do a dreadful job of developing and managing requirements.

The purpose of requirements management is to define product requirements in order to proceed with software development, and then to control the flow of requirements as software development proceeds.

In the first phase of the software development life cycle it is important to identify the needs of the intended customers or market and establish a clear understanding of what we want to accomplish in a software product. These requirements must be stated precisely enough to allow for traceability and validation at all phases of the life cycle and must include technical details, delivery dates, and supportability specifications, among other things. Various functional organizations can then use the requirements as the basis for planning, developing, and managing the project throughout the life cycle.

It is common for requirements to change while a product is under development. When this happens, the changes must be controlled, reviewed, and agreed to by the affected organizations. The changes must also be reflected in project plans, documents, and development activities. Having a core initial set of requirements means that any changes during the life cycle can be better controlled and managed.

Our profitability and market viability depend on developing the right product to meet customer needs. Requirements identify these needs. When we do not manage requirements, it affects our profitability and internal operations.

If we do not know what our customers want, we end up developing and shipping a product that may not meet the marketplace needs; we miss the market window; and customers do not get what they are willing to pay for. We may produce a product that lacks necessary functionality, is not compatible with some standard, does not look like or work well with our other products, is too slow, or is not easy to support.

Because we do not do primary market research, we do not have a sense of what the marketplace will need. We gather the bulk of the requirements from customers who mostly identify their immediate needs; we can seldom meet these needs because it takes time to produce the product.

Even when we know the requirements, we continue to add new requirements and to recycle the product through the development process. This is one reason why our products take a long time to reach the marketplace.

An industry-conducted study shows that 56% of bugs are introduced because of bad requirements; as stated earlier, bugs introduced early in the life cycle cause a lot of expensive rework. We have no reason to believe that we do a better job than the rest of the industry in managing requirements.

Poorly understood and constantly changing requirements mean that our plans are always changing. Development has to change its project plans and resources; sales volume predictions have to change; and staffing levels for all the supporting organizations, such as education, documentation, logistics, and field engineering, go through constant change. All of these organizations rely on clear requirements, and on a product that meets those requirements, in order to respond in a timely and profitable manner.

Item: Cross-Group Coordination and Teamwork.

Assessment: Coordination is difficult between projects and between functional areas within a single project.

Cross-group coordination involves a software development group's participation with other software development groups to address program-level requirements, objectives, and issues.

Cross-group coordination is also required between the functional groups within a single project, that is, between Product Management, Software Development, Quality Assurance, Release, Training, and Support.

At our company cross-group coordination is typically very informal and relies on individual developers establishing relationships with other developers and QA people. Consequently, intergroup communication and cooperation often do not happen, and groups which should be working together sometimes end up being adversaries.

1. There is no good process for mediating between software development teams. Software development teams sometimes compete for resources; there is no standard procedure for resolving these conflicts.

2. It is left to individual effort to make cross-group coordination happen. There is usually very little planning or resource allocation for cross-group coordination. It is often left to the individual developer to decide when it is necessary and to initiate and maintain communication with other groups.

3. "The only reward for working on a team is a T-shirt or plaque." Individual effort is often recognized and rewarded. However, with few exceptions, there is very little reward or recognition for teamwork.

4. Development and release groups do not communicate well with each other. Many developers perceive the release group as an obstacle to releasing their product. People in the release group think that they are at the tail end of a bad process--they think that developers do not know how to correctly release their product and release defective software all the time. At times, the release group is not aware of development plans or changes to plans.

5. There is a brick wall between some development and QA groups. In some groups, development and QA people have little sense of being on the same team working toward a common goal.

6. Development does not know or understand what product managers do. Most developers do not know what the product management organization is supposed to do. They see no useful output from their product manager. In many cases, developers don't even know the product manager for their product.

7. There is too much dependency on broadcast e-mail. There are a lot of mail messages of the form "the ABC files are being moved to..." or "the XYZ interface is being changed in the TZP release. If you use it, you will need to..." These are sent to all developers in the hope that the information will reach those individuals who are affected. This use of broadcast e-mail requires people to read many messages that do not affect them. Also, if someone does not read such mail messages, they run the risk of not knowing about something that does affect them. This is a poor method of communication which results from people not knowing who their clients are.
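The broadcast problem above comes down to not knowing who one's clients are. A dependency registry fixes that: groups declare which interfaces they consume, and a change notice then goes only to the affected parties. The sketch below is illustrative; the team and interface names are hypothetical except for the XYZ example taken from the text.

```python
# A sketch of replacing broadcast e-mail with a dependency registry.
# Teams subscribe to the interfaces they consume, so notices are
# targeted. Team names here are hypothetical examples.
from collections import defaultdict

subscribers = defaultdict(set)  # interface name -> teams that consume it

def subscribe(team: str, interface: str) -> None:
    subscribers[interface].add(team)

def notify(interface: str, message: str) -> list:
    """Return (rather than e-mail) the notices for affected teams only."""
    return sorted(f"{team}: {message}" for team in subscribers[interface])

subscribe("reporting", "XYZ")
subscribe("billing", "XYZ")
subscribe("install", "ABC")

# Only XYZ consumers hear about an XYZ change; everyone else is spared.
print(notify("XYZ", "XYZ interface changes in the TZP release"))
```

With such a registry, nobody has to read messages that do not affect them, and nobody risks missing a message that does.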

Lack of coordination between projects has resulted in products that do not have similar user interfaces, do not use the same terminology for the same functions, do not look like they came from the same company, or simply do not work well together. This often results in rework, which increases our cost of development.

Poor coordination reduces productivity and hurts morale. Lack of effective coordination usually results in one group waiting for another to complete development of a software product; this causes the first group to be idle and the second group to be under pressure to deliver. It also causes some developers to work overtime to meet their deadline, only to discover that the other group is not ready to use the product.

SEI CMM Level Analysis.

Assessment: Our company's practices were evaluated at Level 1 of CMM.

The improvement areas that we have targeted are not all CMM Level 2 activities. Rather, these areas represent the most significant problem areas in our company and are crucial to resolve. We will, therefore, continue to use the SEI assessment and the CMM to guide us and not necessarily follow the model in sequence. Resolving the problem areas will certainly help us achieve at least Level 2 capability.

The assessment results offer no surprises, nor do they offer a silver bullet.

The assessment was a first step in identifying the highest priority areas for improvement. Knowing these priorities will help us target our efforts correctly--to identify and implement solutions in these high-priority areas. The assessment also generated enthusiasm and a high level of participation all across the company, which is encouraging and makes us believe that as an organization we want to improve our software development processes.

The SEI assessment was a collaborative effort between all organizations.

Future success of this project, and any process improvement effort, will depend on sustaining this collaboration. Some development groups are using excellent software development practices and we want to leverage their work. They can help propagate the practices that work for them throughout the company by participating in all phases of this project.

The SEI assessment evaluates processes, not products. The findings, therefore, are relevant to the processes used to develop products. As a company we produce successful products with high quality. However, we incur a tremendous cost in producing high-quality products. The assessment and follow-up activities will help us improve our processes. This in turn will have a significant impact on cost and productivity, leading to higher profits for us and higher reliability and lower cost of ownership for our customers.


1. Humphrey WS. Managing the Software Process. Addison-Wesley Publishing, 1989.
