Consider the following dialogue between a systems professional, John Juan, and a manager of a department targeted for a new information system, Peter Pedro:
"Juan: The way to go about the analysis is to first examine the old system, such as reviewing key documents and observing the workers perform their tasks. Then we can determine which aspects are working well and which should be preserved.
Pedro: We have been through these types of projects before and what always ends up happening is that we do not get the new system we are promised; we get a modified version of the old system.
Juan: Well, I can assure you that will not happen this time. We just want a thorough understanding of what is working well and what isn’t.
Pedro: I would feel much more comfortable if we first started with a list of our requirements. We should spend some time up-front determining exactly what we want the system to do for my department. Then you systems people can come in and determine what portions to salvage if you wish. Just don’t constrain us to the old system."
Required:
a. Obviously these two workers have different views on how the systems analysis phase should be conducted. Comment on whose position you sympathize with the most.
b. What method would you propose they take? Why?
In this scenario, the two workers have different views on how the new system should be developed. Juan wants to study the old system first so the team can identify its strengths and weaknesses, while Pedro wants to start from a fresh list of requirements without being constrained by the old system.
If I were one of the workers, I would favor Juan's idea of examining the old system before moving to a new one, because doing so has several advantages. By studying the old system first, I would learn how it works and which of its algorithms, code, and features are worth reusing, which makes developing the new version easier. With that understanding, I could form a clear picture of how to upgrade the system so that it is user-friendly, secure, and competitive with rival systems.
I agree that it is hard to study an old system you know nothing about. It is time-consuming, and there is always a risk that the effort will not produce a workable new system. For me, the decision ultimately depends on the developer and his skills: if a capable developer says he can deliver a good new system after studying the old one for a reasonable amount of time, I would take that risk.
What method would you propose they take? Why?
The method I would propose is to follow a structured software development process; the particular model should depend on what management really wants from the system. A software development process is a structure imposed on the development of a software product; synonyms include software life cycle and software process. There are several models for such processes, each describing an approach to the tasks or activities that take place during development. The usual phases are planning, requirements analysis, design, implementation, testing, documentation, deployment, and maintenance. I include this definition so that Juan and Pedro have an overview of what developing a system involves.

The first step is planning, and the most important task here is extracting the requirements (requirements analysis). Customers typically have an abstract idea of what they want as an end result, but not of what the software should do. Skilled and experienced software engineers recognize incomplete, ambiguous, or even contradictory requirements at this point, and frequently demonstrating live code can help reduce the risk that the requirements are incorrect. Once the general requirements are gathered from the client, the scope of the development should be determined and clearly stated in what is often called a scope document. Certain functionality may be out of scope of the project because of cost, or because the requirements were unclear at the start of development.
If the development is done externally, the scope document can serve as a legal document, so that if disputes ever arise, any ambiguity about what was promised to the client can be clarified. The planning phase typically produces an SPMP (Software Project Management Plan), which lays out how the project will be managed. Once the SPMP is in place, you can proceed to the analysis phase, whose main deliverable is the SRS (Software Requirements Specification). In the SRS you identify everything the system must cover: the system features and the functional and non-functional requirements. When the SRS is finished, the next step is design. Once planning, analysis, and design are complete, you can proceed to implementation, testing, and documentation. Implementation is the part of the process where software engineers actually write the code for the project. Software testing is an integral and important part of the process; it ensures that bugs are recognized as early as possible. Documenting the internal design of the software, for the purpose of future maintenance and enhancement, is done throughout development and may include authoring an API, whether external or internal. Deployment starts after the code has been appropriately tested, approved for release, and sold or otherwise distributed into a production environment.
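To make the SRS step concrete, the sketch below models a few requirement entries and separates functional from non-functional ones. The field names and the example requirements are invented for illustration; they do not come from any standard SRS template.

```python
# A minimal, hypothetical model of SRS entries. The field names and the
# example requirements are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    description: str
    functional: bool  # True = functional, False = non-functional

srs = [
    Requirement("FR-1", "The system shall record a purchase order", True),
    Requirement("FR-2", "The system shall print a monthly summary report", True),
    Requirement("NFR-1", "Screens shall load in under 2 seconds", False),
    Requirement("NFR-2", "Only department staff may view records", False),
]

functional = [r.req_id for r in srs if r.functional]
non_functional = [r.req_id for r in srs if not r.functional]
print("functional:", functional)
print("non-functional:", non_functional)
```

Keeping requirements in a structured form like this, rather than scattered prose, is one way to give Pedro the up-front requirements list he asked for.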
Software training and support are important because a large percentage of software projects fail for a simple reason: it does not matter how much time and planning a development team puts into creating software if nobody in the organization ends up using it. People are often resistant to change and avoid venturing into unfamiliar territory, so as part of the deployment phase it is very important to hold training classes for new users of the software. Maintaining and enhancing software to cope with newly discovered problems or new requirements can take far more time than the initial development. It may be necessary to add code that does not fit the original design in order to correct an unforeseen problem, or a customer may request more functionality and code may be added to accommodate it. It is during this phase that customer calls come in and you find out whether your testing was extensive enough to uncover problems before the customers did. If the labor cost of the maintenance phase exceeds 25% of the labor cost of the prior phases, it is likely that the overall quality of at least one prior phase is poor; in that case, management should consider rebuilding the system (or portions of it) before maintenance costs get out of control. Bug-tracking tools are often deployed at this stage to let development teams interface with customer and field teams testing the software and identify any real or perceived issues. These tools, both open source and commercially licensed, provide a customizable process to acquire, review, acknowledge, and respond to reported issues. The other crucial decision in developing a system is choosing a process model. Well-known examples include the waterfall model, agile methods, and Extreme Programming, but other models should also be considered.
Iterative development prescribes the construction of initially small but ever larger portions of a software project, to help everyone involved uncover important issues early, before problems or faulty assumptions can lead to disaster. Iterative processes are preferred by commercial developers because they offer a way of reaching the design goals of a customer who does not know how to define what they want. Agile software development processes are built on the foundation of iterative development; to that foundation they add a lighter, more people-centric viewpoint than traditional approaches. Agile processes use feedback, rather than planning, as their primary control mechanism, and the feedback is driven by regular tests and releases of the evolving software. Extreme Programming (XP) is the best-known iterative process. In XP the phases are carried out in extremely small (or "continuous") steps compared to the older "batch" processes: the (intentionally incomplete) first pass through the steps might take a day or a week, rather than the months or years of each complete step in the waterfall model. First, one writes automated tests to provide concrete goals for development. Next comes coding (by a pair of programmers), which is complete when all the tests pass and the programmers cannot think of any more tests that are needed. Design and architecture emerge out of refactoring and come after coding, and design is done by the same people who do the coding. (Only the last feature, merging design and code, is common to all the other agile processes.) The incomplete but functional system is then deployed or demonstrated for some subset of the users, at least one of whom is on the development team, and the practitioners start again on writing tests for the next most important part of the system. The waterfall model, in contrast, has developers follow these steps in order:
1. Requirements specification (requirements analysis)
2. Design
3. Construction (AKA implementation or coding)
4. Integration
5. Testing and debugging (AKA validation)
6. Installation (AKA deployment)
7. Maintenance
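The test-first rhythm that XP prescribes can be sketched in a few lines of Python. This is an illustrative example, not from the text above: the `leap_year` function and its tests are hypothetical, but they show the cycle of writing the tests first and then coding only until all the tests pass.

```python
# XP-style test-first sketch: the tests exist before the implementation.
# 'leap_year' is a hypothetical example function, chosen only to
# illustrate the red/green rhythm described above.

def leap_year(year: int) -> bool:
    # Written *after* the tests below, and only until they all pass.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def run_tests() -> None:
    # Step 1 of XP: automated tests as concrete goals for development.
    assert leap_year(2000)       # divisible by 400 -> leap
    assert not leap_year(1900)   # divisible by 100 but not 400 -> not leap
    assert leap_year(2024)       # divisible by 4 -> leap
    assert not leap_year(2023)
    print("all tests pass")

if __name__ == "__main__":
    run_tests()
```

In a waterfall project the same checks would instead be written during the testing phase, after design and construction are complete; the difference is when the tests are written, not what they check.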
Formal Methods:
The complexity of software embedded in new aircraft and spacecraft has outpaced the capabilities of current verification and certification methods. Software performs safety- and mission-critical functions on these platforms, and correct operation is essential. Verification and certification based on manual reviews, process constraints, and testing are proving too expensive even for current products, let alone advanced software-based systems. Traditional methods cannot verify the correctness of applications such as adaptive control for upset recovery of aircraft, intelligent control of spacecraft, and control software for advanced military and unmanned aircraft (UAVs) operating in commercial airspace. Unless safety-critical embedded software can be developed and verified with less cost and effort, while still satisfying the highest reliability requirements, these new capabilities may never reach the market. Honeywell has recognized this challenge and has an active research program in advanced software development and verification tools and methodologies. Over the last five years, Formal Methods has emerged as a key component in the development and verification of the next generation of safety-critical systems.
Formal Methods
Formal Methods is the use of ideas and techniques from mathematics and formal logic to specify and reason about computing systems, in order to increase design assurance and eliminate defects. Formal Methods tools allow comprehensive analysis of requirements and design and complete exploration of system behavior, including fault conditions. Formal Methods provides a disciplined approach to analyzing complex safety-critical systems. The benefits of using Formal Methods include:
Product-focused measure of correctness. The use of Formal Methods provides an objective measure of the correctness of a system, as opposed to current process-based quality measures.

Early detection of defects. Formal Methods can be applied to the earliest design artifacts, leading to earlier detection and elimination of design defects and the associated late-cycle rework.

Guarantees of correctness. Unlike testing, formal analysis tools such as model checkers consider all possible execution paths through the system. If there is any way to reach a fault condition, a model checker will find it. In a multi-threaded system where concurrency is an issue, formal analysis can explore all possible interleavings and event orderings. This level of coverage is impossible to achieve through testing.

Analytical approach to complexity. The analytical nature of Formal Methods is better suited for verifying complex behaviors than testing alone. Provably correct abstractions can be used to bound the behavioral space of systems with adaptive or non-deterministic behaviors, and Formal Methods can also be used to perform "what-if" analyses to study the effects of proposed system changes. Though the basic techniques have been under development worldwide for over two decades, they have only now reached the maturity at which, in combination with increased processor speeds and cheaper memory, they can be used to address real-world systems.
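The claim that a model checker explores every interleaving can be illustrated with a toy explicit-state checker. The sketch below is a hypothetical, deliberately broken mutual-exclusion protocol (invented for this example, not from the Honeywell tools): each thread checks a `busy` flag and then sets it in two separate steps, so one interleaving puts both threads in the critical section. An exhaustive breadth-first search over all reachable states finds that path, which a handful of test runs could easily miss.

```python
# A toy explicit-state model checker: BFS over all reachable states.
# The modeled system is a hypothetical, broken mutual-exclusion protocol:
# each thread reads the 'busy' flag and sets it in two separate steps,
# so an unlucky interleaving admits both threads to the critical section.

from collections import deque

# Per-thread program counter: 0 = idle, 1 = saw busy == False, 2 = critical.
START = (0, 0, False)  # (pc of thread A, pc of thread B, busy flag)

def successors(state):
    pcs, busy = list(state[:2]), state[2]
    for t in (0, 1):
        if pcs[t] == 0 and not busy:      # step 1: read the flag
            nxt = list(pcs); nxt[t] = 1
            yield (nxt[0], nxt[1], busy)
        elif pcs[t] == 1:                 # step 2: set the flag, enter
            nxt = list(pcs); nxt[t] = 2
            yield (nxt[0], nxt[1], True)

def check(start, is_fault):
    """Explore every reachable state; return a fault state if one exists."""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        if is_fault(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None  # property holds in every reachable state

fault = check(START, lambda s: s[0] == 2 and s[1] == 2)
print("fault state found:", fault)  # both threads in the critical section
```

Industrial model checkers such as SPIN apply the same idea with far more sophisticated state compression and property languages, but the guarantee is the same: if a fault state is reachable, exhaustive search will reach it.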
Honeywell’s Experience
Honeywell has developed a wide array of capabilities in the application of Formal Methods to safety-critical systems. We can draw upon our expertise with many different Formal Methods technologies to choose the right tools and level of abstraction for each verification task. Honeywell's strength lies in our ability to apply this expertise to real systems, based on our deep understanding of the aerospace domain, the requirements for safety-critical systems, and actual development processes. Examples of how we have applied existing Formal Methods tools and developed new ones include:
Source code: Source code is frequently the only complete design artifact available for verification, so analysis of source code is an important capability. We have found explicit-state model checkers to be the best tools for verifying source code. We are currently developing automated tools for generating verification models from source code and are using this approach to verify the time-partitioning guarantees in Honeywell's Deos™ real-time operating system. Deos, a key element of the Primus Epic avionics suite, is implemented in C++ and incorporates many advanced features such as dynamic creation and deletion of processes and threads, slack-time reclamation, and aperiodic interrupts. While no test case can directly check system-level properties like time partitioning, we have been able to verify this property using the SPIN model checker.

High-integrity communication protocols: When high-level requirements documents are available, critical properties of high-integrity communication protocols can be analyzed using Formal Methods. In our verification of the synchronization protocol of the ASCB-D bus used in Primus Epic, we derived the model from textual design specifications. The verification proved that the protocol achieves synchronization of the timing frames within the required 200 msec start-up period, irrespective of the component start-up order, various bus faults, or clock drift.

Control flow diagrams: Control flow diagrams in Simulink™ are a common design representation in avionics control systems. We have found that symbolic model checkers capturing the synchronous transition structure of these designs are best suited for their verification. We are working on tools to automatically generate models from block diagrams, such as a triplex sensor voter design. This redundancy-management algorithm monitors three independent sensors, each with its own self-check validity flag; the output of the algorithm is a single sensor output and a validity flag computed from the inputs. We have verified that the algorithm computes the correct output and is tolerant to sensor faults, noise transients, and small differences in sensor measurements.

Real-time scheduling: The MetaH Architectural Description Language was developed by Honeywell for specifying real-time embedded systems. The specifications include information about configurations of tasks, their message and event connections, how these objects are mapped onto a specified hardware architecture, and timing, partitioning, and safety behaviors and requirements. We developed hybrid verification tools for real-time, fault-tolerant, high-assurance software and hardware architectures specified in the MetaH language. Dense-time linear hybrid automata models are generated automatically through instrumentation of the source code; the models result from the execution of the instrumented code during testing. Properties analyzed with this approach include schedulability and deadline satisfaction. We used this approach to analyze the portion of the MetaH real-time executive that implements uni-processor task scheduling, time partitioning, and error handling. Nine defects were discovered in the course of the verification. Of these, three were almost impossible to detect through testing, because multiple, carefully timed events were required to produce the erroneous behavior.
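Schedulability properties like the ones analyzed above can also be approximated with classical analytical tests. The sketch below is not from the Honeywell tools and the task set is invented; it applies the well-known Liu and Layland utilization bound for rate-monotonic scheduling, under which a set of n periodic tasks is guaranteed schedulable if total utilization does not exceed n(2^(1/n) − 1).

```python
# Liu & Layland utilization-bound test for rate-monotonic scheduling.
# The task set below is hypothetical; the bound itself is the classic
# sufficient (but not necessary) schedulability condition.

def rm_utilization_bound(n: int) -> float:
    # n periodic tasks are guaranteed schedulable if U <= n * (2^(1/n) - 1).
    return n * (2 ** (1.0 / n) - 1)

def schedulable(tasks):
    """tasks: list of (worst-case execution time, period), same time units."""
    u = sum(c / t for c, t in tasks)
    return u <= rm_utilization_bound(len(tasks)), u

# Three hypothetical periodic tasks: (execution time, period).
tasks = [(1, 4), (1, 5), (2, 10)]
ok, u = schedulable(tasks)
print(f"utilization = {u:.3f}, bound = {rm_utilization_bound(3):.3f}, guaranteed: {ok}")
```

A utilization test like this gives a quick yes/no answer, whereas the hybrid-automata analysis described above explores the actual timing behavior and can also expose defects in the executive itself.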
sources: http://www51.honeywell.com/aero/technology/common/documents/formal-methods.pdf
http://en.wikipedia.org/wiki/Software_development_process