Regardless of the type of software application one is creating, many of us in the software development world face a similar set of challenges and goals:

- high-quality applications;
- fast time to market; and
- lower cost.

In the real-time software application space, the list does not stop there. Additional challenges include:

- response time and latency requirements;
- system throughput requirements; and
- availability.

In the embedded real-time application space (an embedded system is one whose primary function is often not computing), the list grows even more:

- tight physical constraints (such as size, weight, and power);
- asynchronous and simultaneous events; and
- high reliability requirements.

In the embedded real-time world, resources like CPU, memory, and power are scarce because customers do not want to pay for large amounts of memory or CPU power. Neither do customers want to carry portable devices that weigh too much or consume large amounts of battery power. Performance must be managed. The goal is to fit as much application processing as possible onto as little hardware and software as possible!

If the embedded application is a programmable device, the performance of the software will be instrumental in overall system performance. However, it is common to ignore performance-related issues until late in the life cycle, after the application is up and running. This "fix it later" approach often leads to substantial redesign and refactoring at later stages of the project, such as integration and test, a time when the project is already on the critical path. These performance concerns span both the software engineering and performance evaluation communities.

A disciplined approach to managing system performance throughout the lifecycle is needed to ensure systems achieve full "performance entitlement" and meet performance objectives when the product is delivered. Software Performance Engineering (SPE) is the systematic process for planning and evaluating a system's performance throughout the life cycle of its development (1). The goals of SPE are to enhance the responsiveness and usability of systems while at the same time preserving quality.

The overall approach to SPE is straightforward: consider quantitative behavior from a system's requirements stage through maintenance and enhancement stages. The process consists of the following steps:

- Assess the performance risk. Not all real-time embedded systems carry the same performance risk. Soft real-time systems (where missing a deadline degrades the system but does not cause it to fail) may have less overall risk than hard real-time systems (where missing a deadline causes the system to fail). The amount of risk determines how much effort to invest in SPE.

- Identify critical use cases and performance scenarios. A use case is a "typical course of events" that describes the external behavior of a system from one particular viewpoint. A system can have many use cases. A "critical" use case is one that is important to the responsiveness or performance of the system. A performance scenario represents an interaction with the system that drives the performance of the system.

- Establish performance objectives. It is not good enough to state "the system must be fast" or "the responsiveness must be good." Performance objectives must be quantitative and measurable, such as "the total CPU utilization should not exceed 75%." These objectives need to be established early and managed throughout the lifecycle.

- Construct performance models. There are many modeling approaches for embedded systems, including software-modeling languages such as the Unified Modeling Language (UML) (2). When analyzing systems for performance, the modeling focuses on the performance scenarios identified earlier. Two modeling approaches are common in performance engineering. Software execution models provide a static analysis of best- and worst-case execution times in the absence of external factors; if these simple models reveal performance problems, they must be resolved before moving to more detailed models. System execution models are dynamic models that account for competing workloads and delays due to contention for shared resources.

- Determine software and computer resource requirements. During this phase, the computational needs of the software (e.g., how many CPU instructions and network messages are required) and the corresponding computer resource requirements (a mapping of the software resource requirements onto the amount of "service" they require from the key hardware devices in the system, such as CPUs, disks, etc.) are determined. A "what-if" analysis is performed to determine the best mapping to meet the system requirements for performance as well as cost and power.

- Perform validation, verification, and evaluation. Modeling requires a verification phase (Is the model correct?) and a validation phase (Is the right model being used?). If the models have indicated a performance problem, the product concept can be modified or the performance estimates can be revised.

Modeling is iterative. The steps above may iterate several times as the product is being developed. As the lifecycle proceeds, more accurate estimates are obtained and fed back into the models. Following this process, system problems are discovered earlier and can be addressed while the cost of fixing or modifying the system is relatively low, because all that exists at this stage are models, not the product. Addressing performance problems after hardware and software have already been developed is much more expensive.

Embedded system complexity is growing. Software technologies continue to evolve rapidly, time to market is getting shorter, documentation is often incomplete or absent, and system performance, power, and resource constraints are always a challenge. These factors make performance engineering of embedded systems essential.

Texas Instruments (TI)

References

1. Smith, C. U., and Williams, L. G., Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software, Addison-Wesley, Reading, MA, 2002.
2. Booch, G., Rumbaugh, J., and Jacobson, I., The Unified Modeling Language User Guide, Addison-Wesley, Reading, MA, 1999.

Please share your thoughts about this feature article with our readers by contacting the editor at: jbrowne@penton.com.