Wednesday, Nov 28, 2012

Software Design Principles & Practices – Testing

Test Driven Development - In test-driven development the software is written to a test: if the test fails, the software is either incomplete or incorrect. In TDD the programmer knows what the result of an action is supposed to be (store or retrieve user information, perform a calculation, generate a report, etc.), so they first write a test that says the success scenario is x. If they run the test immediately, it fails, since no software has yet been written to pass it. They then write the software to do what they want and run the test repeatedly until it passes, at which point the produced software meets the requirements of the test. This is useful because at the end the programmer knows the software at least does what was planned from the start.
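
As a minimal sketch of the cycle (assuming JUnit 4 is on the classpath; the Calculator class here is hypothetical), the test below is written first and fails until the production code underneath it is written to satisfy it:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CalculatorTest {
        // Written before Calculator exists; it fails (or doesn't compile)
        // until Calculator.add() is implemented to satisfy it.
        @Test
        public void addReturnsSumOfTwoNumbers() {
            Calculator calc = new Calculator();
            assertEquals(5, calc.add(2, 3));
        }
    }

    // Production code written afterwards, just enough to make the test pass.
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }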
  
Abstraction principle (don't repeat yourself) - This principle says code should only be written once and shared between objects, instead of being repeated to replicate functionality across multiple objects. If code is repeated and something needs to change, every place the code is replicated must be updated, which makes it very easy to miss a component and fail to make the change everywhere. It also opens up the possibility of more mistakes: the more code that needs to be changed, the more likely it is that a mistake will be made and go unnoticed. All of this can result in code that doesn't compile properly, or software that doesn't work right, either doing things incorrectly or crashing and not working at all. When code exists in only one place and is used by multiple components, it only has to be updated once for everything to change.
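
A small sketch of the idea (the TaxCalculator name and the tax rate are just illustrative): the shared rule lives in one place and both classes reuse it, so a change to the rate only has to be made once.

    // Shared logic written once; Invoice and Receipt both reuse it instead of
    // each carrying their own copy of the tax rule.
    class TaxCalculator {
        static double withTax(double amount) {
            return amount * 1.08;   // change the rate here, and every caller is updated
        }
    }

    class Invoice {
        double total(double subtotal) {
            return TaxCalculator.withTax(subtotal);
        }
    }

    class Receipt {
        double total(double subtotal) {
            return TaxCalculator.withTax(subtotal);
        }
    }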
 
Single responsibility principle - Under this principle an object should only be responsible for a single unit of work. If an object has multiple responsibilities, they can negatively impact one another and cause the software to fail. This is similar in a way to the abstraction principle: a single responsibility makes the class more robust, since it can be applied wherever its singular function would be useful. With multiple responsibilities a class may do too much to be useful in a given situation, and another class would have to be created. Furthermore, if a class has two responsibilities and one of them needs to be updated or changed, the change could cause the other task to no longer work correctly, or at all.
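
An illustrative sketch (hypothetical class names): rather than one class that both formats and saves a report, each responsibility gets its own class, so changing one cannot break the other.

    // One responsibility per class: formatting and saving change for
    // different reasons, so they live in different classes.
    class ReportFormatter {
        String format(String data) {
            return "REPORT: " + data;
        }
    }

    class ReportSaver {
        void save(String formattedReport) {
            System.out.println("Saving: " + formattedReport);
        }
    }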
 
Separation of concerns - Not entirely distinct from SRP and DRY, SoC is where a computer program is broken up into specific features while attempting to minimize any overlap in functionality. This makes it easier for work to be done on individual pieces of the system, helps with reusability and maintainability, improves understanding of the system, and makes it easier to add new features in the future.
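
As a loose sketch (the layering and names here are hypothetical), separation of concerns often shows up as distinct pieces that each handle one aspect of the program with minimal overlap:

    // Each class handles one concern; changing how users are stored does not
    // touch how they are displayed.
    class UserRepository {              // data-access concern
        String findName(int id) { return "user-" + id; }
    }

    class UserView {                    // presentation concern
        void render(String name) { System.out.println("Hello, " + name); }
    }

    class UserController {              // coordination concern
        private final UserRepository repo = new UserRepository();
        private final UserView view = new UserView();
        void show(int id) { view.render(repo.findName(id)); }
    }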

 

Wednesday, Nov 28, 2012

Software Quality

Planning - One of the most important areas to focus on in order to achieve software quality is the initial planning. Getting a good idea of the user requirements up front and developing strong use cases will help the team efficiently create reliable software. Even if they can't get all the specific requirements up front, having a rough idea of what the completed software will need to accomplish helps ensure it is flexible enough to change or grow to meet the needs of the customer.

Testing - Regular testing throughout the development process should help eliminate bugs, as well as verify that the software meets the set requirements. Regular, ongoing testing has a positive impact on development: problems are discovered and can be fixed early on, instead of requiring huge amounts of fixes right before delivering the software, and it becomes easier to present real, working software to the customer throughout the process for feedback.

Feedback/Customer Involvement - Getting feedback from the customer during development will help improve software quality. Having the customer involved in the process means that the development team can better understand what the customer is expecting, and can make tweaks or changes to individual components early on. As components come together to form the full software package, the customer can get a better idea of what the team is producing and raise any concerns or requested changes during development so they can be addressed. If the customer were not involved until the end, the software would be delivered and it might be too difficult, costly, or time consuming to make changes that would improve the quality of the software at that point.

Define and Measure - The development team should not only strive for quality software, but should have a definition of quality and metrics they can use to measure success. This is often done by creating a weighted scoring system using five characteristics identified by the Consortium for IT Software Quality (CISQ): reliability, efficiency, security, maintainability, and size. Reliability, efficiency, and security mostly relate to using good practices for architecture and coding. Maintainability has to do with the ability of a new development team to understand, change, test, and reuse the software/code when responsibility is transferred from the original developers, and depends heavily on documentation, programming practices, and the original team's approach to complexity. Size relates to both the actual size of the application (lines of code, file size, databases, etc.) and the functional size (functionality, inputs, outputs, data, etc.).
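
A weighted score like that is simple arithmetic; in the sketch below the ratings and weights are made-up values for illustration, not anything defined by CISQ.

    public class QualityScore {
        public static void main(String[] args) {
            // Illustrative 1-10 ratings for reliability, efficiency, security,
            // maintainability, and size, plus team-chosen weights that sum to 1.0.
            double[] ratings = {8, 7, 9, 6, 7};
            double[] weights = {0.30, 0.20, 0.25, 0.15, 0.10};
            double score = 0;
            for (int i = 0; i < ratings.length; i++) {
                score += ratings[i] * weights[i];
            }
            System.out.println("Weighted quality score: " + score);   // 7.65 here
        }
    }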

Wednesday, Nov 28, 2012

Verification vs. Validation

Software verification and validation are the two primary components of software quality control. Software verification is checking that the software is being built correctly, and that it can successfully and correctly do what it is supposed to do; it happens throughout the software development process as code is tested to ensure it works properly. Software validation, on the other hand, is the process of confirming with the customer that the application meets their needs and expectations. It starts before any code is written, by making sure the customer's needs and expectations are understood, and continues as the working software is checked against those needs.

If accounting software were being developed, for example, validation would be understanding what the customer needs out of the accounting software and how it is going to be used, while verification is the process of testing the software to ensure it performs calculations correctly, stores and retrieves information from the correct places, categorizes things accurately, etc.
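
To make the distinction concrete, a verification activity in that example might be a small automated test that a calculation behaves as specified (the LedgerCalculator class is hypothetical, and JUnit 4 is assumed), while validation would be confirming with the customer that this is the calculation they actually need.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class LedgerCalculatorTest {
        // Verification: is the calculation performed correctly?
        @Test
        public void balanceIsCreditsMinusDebits() {
            LedgerCalculator ledger = new LedgerCalculator();
            assertEquals(150.0, ledger.balance(500.0, 350.0), 0.001);
        }
    }

    class LedgerCalculator {
        double balance(double credits, double debits) {
            return credits - debits;
        }
    }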

Wednesday, Nov 21, 2012

Design Process

Why is it important to have your choice of principles, patterns, and architectural decisions identified “PRIOR” to construction? How does this conflict, or not conflict, with the agile development methodology?

Making strong decisions about principles, patterns, and architecture before beginning construction of software makes the rest of the processes that much easier to complete. Principles and patterns seem to really be the same as good programming (and software design) practices, and the architecture seems to be more about planning a general framework for the rest of the project.

Not doing this could result in a lot of rework if two different components are being worked on by two different people who take different approaches and forget to account for the ways those components need to interact with one another. Alternatively, work could be duplicated across components, and other basic principles could be violated as well. Finally, without strong decisions up front the final software may not be modular enough to support updates, changes, or additions in the future without serious rework.

I do not believe this conflicts with agile methodologies whatsoever. Lots of up front planning to establish good, strong decisions about these items actually seems pretty par for the course for this methodology. It's almost more important in agile since individual components will be developed one at a time - this means that the team must make sure up front that they all follow the same principles and architecture so everything will fit together in the end.

Wednesday, Nov 21, 2012

Interfaces

1. Name 3-4 design principles/practices that an interface helps support.
 
Single responsibility principle - A class should only have one responsibility. If there are multiple classes with different responsibilities, but each class uses the same method, an interface can support this. For example, an alarm clock, watch, cell phone, wall clock, and clock on a computer all have different responsibilities, but no matter which one of them you 'grab' it still has to tell time, so "Time" could be an interface implemented by all of those classes.
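
A quick sketch of the clock example (the names are illustrative):

    interface Time {
        String tellTime();
    }

    class AlarmClock implements Time {
        public String tellTime() { return "07:00"; }
        void soundAlarm() { /* alarm-specific responsibility */ }
    }

    class WallClock implements Time {
        public String tellTime() { return "07:00"; }
    }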
 
Dependency inversion principle - High-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend upon abstractions. High-level and low-level components are separated into different libraries, with the behavior of the high-level component defined by interfaces. The low-level component then depends on the high-level interface to know what to do. A good example would be a car: instead of starting with all the details of how pressing the accelerator makes the car go, one could say "the car needs to stop, the car needs to go, the car needs to turn," and then each lower-level component (accelerator, brakes, steering wheel, wheels, engine) can be detailed based on those higher-level interface decisions.
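
Sketching the car example (hypothetical names): the high-level Car depends only on an abstraction such as Brakes, and the low-level detail implements it.

    // High-level policy: "the car needs to stop"; Car depends on the abstraction.
    interface Brakes {
        void stop();
    }

    class Car {
        private final Brakes brakes;
        Car(Brakes brakes) { this.brakes = brakes; }
        void stopCar() { brakes.stop(); }
    }

    // Low-level detail: how stopping actually happens.
    class DiscBrakes implements Brakes {
        public void stop() { System.out.println("Clamping the discs"); }
    }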
 
Interface segregation principle - An interface should only contain methods its clients will actually use, and clients should never be forced to depend on a method they do not use. Without interfaces this principle would not exist. If an interface is responsible for more than one thing (see the single responsibility principle) it should be broken down into more specific, individual interfaces. This results in fewer issues or complications as a system is modified, updated, refactored, or becomes more complicated over time.
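
For example (the interfaces here are illustrative), one fat interface that forced every printer to also implement scanning would violate ISP; splitting it keeps each client depending only on what it actually uses.

    // A client that only prints never depends on scan().
    interface Printing {
        void print(String document);
    }

    interface Scanning {
        void scan(String document);
    }

    class SimplePrinter implements Printing {
        public void print(String document) { System.out.println("Printing " + document); }
    }

    class MultiFunctionDevice implements Printing, Scanning {
        public void print(String document) { System.out.println("Printing " + document); }
        public void scan(String document) { System.out.println("Scanning " + document); }
    }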
 
 
2. Why is it better to "program to the interface" and not "the concrete" class?
 
An interface is essentially an abstract class that does not include any implementations of the underlying functions. As an abstract class, the interface is a base class from which other classes can be derived. A derived class (sub-class) contains the actual implementation of the functions declared in the base class; this derived sub-class is a concrete class.
 
If you are programming to the concrete class and something changes, then each of those concrete classes will need to be updated as well, which can mean a lot of work and numerous changes if there are many concrete classes. Furthermore, if the implementation of a method is only slightly different across concrete classes, you can declare the common method in a base type and let the subclasses supply the more specific, individualized behavior. So "Animal" would be an abstract type (interface) and individual animals can be subclasses with 'concrete' traits; since all animals eat, "eat" can be a method declared on the interface, with the derived sub-classes providing the 'concrete' details of what the animal eats, how it eats, etc.
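
A rough sketch of the animal example (hypothetical names), with the feeding code programmed to the interface rather than to any one concrete animal:

    interface Animal {
        void eat();
    }

    class Dog implements Animal {
        public void eat() { System.out.println("Dog chews kibble"); }
    }

    class Bird implements Animal {
        public void eat() { System.out.println("Bird pecks at seeds"); }
    }

    class Zoo {
        // Programmed to the interface: any Animal works, current or future.
        static void feed(Animal animal) {
            animal.eat();
        }

        public static void main(String[] args) {
            feed(new Dog());
            feed(new Bird());
        }
    }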
 
Or another example (and I really hope I'm understanding this correctly): Instead of writing code to "add to database" for every class and their details, you can create an "add to database" interface, then as long as the class can call the method to add to database, it can use any criteria specific to the class to figure out exactly HOW to add to the database.
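
Something like the following is what that might look like (the names are hypothetical): each class supplies its own details of how it is stored, and the calling code only knows about the interface.

    interface DatabaseStorable {
        void addToDatabase();
    }

    class Customer implements DatabaseStorable {
        public void addToDatabase() {
            System.out.println("INSERT INTO customers ...");   // class-specific details
        }
    }

    class Order implements DatabaseStorable {
        public void addToDatabase() {
            System.out.println("INSERT INTO orders ...");
        }
    }

    class Saver {
        static void save(DatabaseStorable item) {   // works for any implementer
            item.addToDatabase();
        }
    }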
 
 
3. What are some other uses for an interface that may not have been mentioned in the course content?
 
Gives the end user/developer additional flexibility - if, for example, the final product needs to generate some sort of report on a regular basis, but the report's contents, schedule, or other factors might change, then creating an interface that can generate reports, and maybe a separate interface to schedule reports, will be useful when the reports themselves change.
 
An interface can be used as 'notes' to ensure that appropriate functionality is included in more specific classes. If an interface declares get/set methods for a particular string, such as a name, it's easier to remember to make sure that string is set in any class that implements the interface.
 
The use of interfaces creates what is essentially a plug-and-play architecture making it easier to replace or swap individual components. Any interchangeable components will utilize and implement the same interface so it does not require any extra programming to use them.
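
For instance (illustrative names), two interchangeable payment components implementing the same interface can be swapped without touching the code that uses them:

    interface PaymentProcessor {
        void charge(double amount);
    }

    class CreditCardProcessor implements PaymentProcessor {
        public void charge(double amount) { System.out.println("Charging card: " + amount); }
    }

    class PayPalProcessor implements PaymentProcessor {
        public void charge(double amount) { System.out.println("Charging PayPal: " + amount); }
    }

    class Checkout {
        private final PaymentProcessor processor;
        Checkout(PaymentProcessor processor) { this.processor = processor; }   // swap the component here
        void pay(double amount) { processor.charge(amount); }
    }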