
Friday, November 30, 2018

Object-Oriented Software Development

Have you ever wondered why some organizations refer to the group responsible for computers and information systems as “data processing”?
  1. Structured programming and structured design (which grew out of structured programming) understand the mission of software as that of processing data.
  2. Structured programming and structured design are focused on the changes programs make in transforming input data to output data, seeing computer programs as action-oriented.
The early name of the computer programming profession—data processing—reflects this procedural perspective.

Object-oriented programming and design emphasize the view that software systems model the real world. 
Objects within an object-oriented system may still transform input data to output data, but this is not the only possible way to organize an object-oriented program.
From an object-oriented perspective, the group responsible for computers and information systems might aptly be named the Business Object Portfolio (BOP) group. Their function is to assemble and maintain a portfolio of objects that model their organization’s processes.
If, therefore, you’re less than enthusiastic about the prospect of being called a BOPper, never fear. You can choose to work in the health care industry, in which case you may come to be known as a HOPper (for “Health care Object Portfolio”) or a MOPper (for “Medical Object Portfolio”). That’s decidedly better than working in law enforcement, where you might come to be known as a COPper (for “Crime Object Portfolio”), or working in agriculture, where you might come to be known as a CROPper (for “Crop Rotation Object Portfolio”).


                            Procedural Programs

Computer programs, whether designed based on structured design or object-oriented design, usually model some process that exists in the real world.
A payroll program, for example, models the manual process that a real business goes through when it pays its employees. In a small business, the process might work something like this:
  1. Get the list of employees from the file over by the coffee machine.
  2. Get the federal and state withholding schedules out of the bottom right drawer of the desk.
  3. Get the general ledger from the supervisor’s office.
  4. For each employee on the list, do the following:
     4.1. Get the amount of pay from the employee record.
     4.2. Calculate the amount of taxes due, based on the withholding schedules.
     4.3. Calculate the net pay by subtracting the deductions and withholding from the gross pay.
     4.4. Prepare the check.
     4.5. Record the check in the general ledger.
  5. Take the stack of checks to the boss to be signed.
  6. Mail the checks at the post office.
  7. Return the general ledger, withholding schedules, and list of employees to their regular places.
Most of these operations could be performed by a computer, though the computer wouldn’t do them exactly the same way the payroll clerk would.
  1. When written as part of a computer program, the steps necessary to carry out a task are called a process.
  2. Each step within a process is known as a procedure.
  3. When a procedure is long or complex, it may consist of several smaller steps, called subprocedures or simply procedures.
  4. Procedures are the blocks used to build structured programs.
A typical payroll program, for example, would contain procedures to
  • open and read the files, 
  • perform the payroll calculations, and 
  • print the checks.
  • By using direct deposit, the program might even “sign” and “mail” the checks.
A procedural payroll program is structured as shown in the figure.

Each of the boxes in the figure represents a procedure that carries out a series of steps.
Each procedure
  1. receives input data, 
  2. processes the data, and 
  3. transmits the results of its processing, either to a subsequent procedure or to a human.
Data is fed to the procedures in much the same way that raw materials are fed to an assembly line—except the procedures produce information rather than cars or toasters.
A useful property of structured programs is that the “shape” of the solution (that is, the program) closely models the shape of the problem. 
Each of the procedures of the structured program in the figure relates to one or more of the steps in the process for manually preparing payroll checks.
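
To make that shape concrete, here is a minimal Java sketch of such a program, built only from static procedures. The simplified Employee record (Java 16+ records assumed), the withholding numbers, and the helper printCheck() are illustrative stand-ins; the other procedure names echo the ones discussed below.

```java
import java.util.List;

public class Payroll {

    // Simplified stand-in for an employee record in the file.
    record Employee(String name, double grossPay, double withholding) { }

    public static void main(String[] args) {
        processPayroll();
    }

    // Top of the pyramid: calls the lower-level procedures in a fixed order.
    static void processPayroll() {
        List<Employee> employees = openEmployeeFile();
        for (int i = 0; i < employees.size(); i++) {
            Employee e = readEmployeeRecord(employees, i);
            double net = e.grossPay() - e.withholding();   // calculate net pay
            printCheck(e.name(), net);                     // prepare the check
        }
    }

    static List<Employee> openEmployeeFile() {             // open and read the file
        return List.of(new Employee("Fred", 1000.0, 150.0),
                       new Employee("Ned",  1000.0, 150.0));
    }

    static Employee readEmployeeRecord(List<Employee> employees, int i) {
        return employees.get(i);                           // only works after the file is "open"
    }

    static void printCheck(String payee, double amount) {
        System.out.printf("Pay to the order of %s: $%.2f%n", payee, amount);
    }
}
```
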
Structured programs are designed by means of procedural decomposition.
Using procedural decomposition, a designer studies the problem and attempts to break it apart by identifying a series of actions that solve it.
When a designer is asked to automate an existing business process, the design process is often simple because procedural decomposition is easy to perform. The designer merely uses the steps of the manual process to identify the actions that the program must perform. Because these steps have successfully kept the business from devolving into chaos, using them as the basis for a computer program may be less risky than trying an entirely new series of steps.
Think again, though, about what would happen to the manual payroll process in your imaginary business if it grew to 20,000 employees instead of 20.
  • The employee file could no longer be kept in the filing cabinet over by the coffee machine. 
  • Fred, the part-time bookkeeper, could no longer finish his work each Tuesday afternoon. 
  • And, most importantly, the boss, who previously signed every check and would likely notice if Ms. Smallie’s check had $1,000 written on it instead of $100, could no longer sign each check personally—there simply wouldn’t be enough remaining time to properly watch over the business.
When businesses grow, they change their structure to handle the added complexity caused by their growth. Finance departments, vice-presidents, controllers, and auditors are added because the simple structure that worked fine for a 20-person company is no longer adequate.
Computer programs can suffer from a similar malady. The  procedural paradigm (paradigm is just a fancy word for pattern) works fine for automating routine office processes, like preparing payroll checks. But it fails to offer sufficient structure when applied to many other kinds of problems, such as simulations and interactive environments.

If you’ve been around a while, you might remember when the main job of computer programmers and designers was writing programs that solved “assembly-line” problems like
  • payroll, 
  • batch accounting, and 
  • monthly invoicing.
Things are different today.
  • Instead of being assigned to write a data-processing program to tally the month-end statements, a bank programmer is more likely to be responsible for writing code to control the ATM or the bank’s new World Wide Web site. 
  • A programmer for a stock broker might design automatic trading programs instead of a simple client billing application.
  • Such interactive or “reactive” programs are much more complex than traditional data-processing applications, because the flow of control is no longer linear. Data doesn’t come in at the start of the program, flow through a number of predefined procedures, and exit at the end, relaxed and refreshed. 
  • In a reactive program, the procedure DoThingC() might be called first, second, last, or not at all—unlike the procedural program where DoThingC() always follows DoThingA() and DoThingB().
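
As a rough illustration, here is a tiny Java sketch of a reactive dispatch loop; the event source (lines typed at the console) and the wiring of the doThingA/B/C handlers are purely hypothetical.

```java
import java.util.Map;
import java.util.Scanner;

// In a reactive program the user (or some external event source) decides which
// handler runs next, so doThingC() may run first, last, or never.
public class Reactive {
    static void doThingA() { System.out.println("A"); }
    static void doThingB() { System.out.println("B"); }
    static void doThingC() { System.out.println("C"); }

    public static void main(String[] args) {
        Map<String, Runnable> handlers =
                Map.of("a", Reactive::doThingA,
                       "b", Reactive::doThingB,
                       "c", Reactive::doThingC);
        try (Scanner in = new Scanner(System.in)) {
            while (in.hasNextLine()) {                     // events arrive in any order
                Runnable handler = handlers.get(in.nextLine().trim());
                if (handler != null) handler.run();
            }
        }
    }
}
```
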
Look back at the figure. What does it look like? A pyramid, right? The pyramid structure occurs because of the hierarchical nature of control in the program.
ReadEmployeeRecord() relies on the fact that ProcessPayroll() has already performed the OpenEmployeeFile() process. The data and the environment required by ReadEmployeeRecord() are available only because the OpenEmployeeFile() procedure has been called first.
If you attempt to write an interactive program that uses procedures as its basic building block, however, the program structure no longer resembles a neat pyramid. Instead, it begins to look like a dense web of interconnections.

If you remember your first programming class, this might set off a light bulb.
  1. Before the advent of structured programming, back in the days of “iron men,” when “big-iron” was not merely metaphorical, computer programs were largely monolithic—they had no procedures at all. Thus, when a programmer needed to execute a piece of code in another part of the program, an unconditional branch was used; such branches were called gotos. As programs got larger, the typical path of program execution began to resemble a large web. Such code became known as spaghetti code, code that was difficult or impossible to understand and thus difficult or impossible to maintain, fix, or change. The underlying problem was that programs were organized as a collection of source statements. Too many “blocks” (that is, source statements) were required to build large programs. 
  2. To solve this problem, structured programming introduced the procedure as a second, larger organizing unit. Source statements were used to build procedures, but procedures (not source statements) were used to build programs. Thus, the number of blocks required to build a program decreased, reducing the complexity of the program.

                  Object-Oriented Programs

Object-oriented programming attacks the complexity of today’s programs in a similar fashion. By grouping procedures into still larger organizing units called objects, programs require fewer blocks and are, therefore, simpler.

Studying object-oriented programming, it’s hard not to notice the fact that different folks have very different views when it comes to OOP.
Reading various OOP books and papers, it almost seems that people are talking about entirely different things.
When you finally cut through all the rhetoric, though, there are two points of view:
  1. the revolutionary: The advocate of the revolutionary view loudly proclaims that OOP is so different from traditional programming that you have to learn programming over again from scratch.
  2. the evolutionary: The evolutionists, in contrast, say that OOP is really just new packaging of old concepts.
Perhaps there’s some truth, as well as some error, in each of these views.


The evolutionists are correct when they assert that it is possible to write clear, well-commented, understandable code in a procedural language, and that it is possible to write incomprehensible, unmaintainable code in an object-oriented language.
The evolutionist generally fails to recognize, however, that an OOP program is organized in a fundamentally different manner than a procedural program.

The revolutionist is right in pointing out that the OOP design process uses different tools and different types of abstraction, and that no amount of functional decomposition will ever yield an object-oriented program.
The revolutionist overestimates, perhaps, the value of such an object-oriented design when weighed against factors of clarity and understandability.
A well-designed and implemented procedural program is definitely to be preferred over a poorly conceived and written OOP program. OOP and object-oriented languages provide tools to express ideas clearly, but are not instant, automatic panaceas.
Five fundamental concepts govern object-oriented programs:
  • Objects
  • Classes
  • Encapsulation
  • Inheritance
  • Polymorphism

                      What Are Objects?

Just as procedures are used to build structured programs, objects are used to build object-oriented programs.
An object-oriented program is a collection of objects that are organized for, and cooperate toward, the accomplishment of some goal. 
Every object:
  • Contains data. The data stores information that describes the state of the object.
  • Has a set of defined behaviors. These behaviors are the things that the object “knows” how to do and are triggered by sending the object a message. 
  • Has an individual identity. This makes it possible to distinguish one object from another, just as it’s possible to distinguish one program variable from another.
Like the records or structures used in procedural programs, objects contain data. In this sense, an object looks very much like one of the employee records that would be used in the payroll program. An object’s data is used to represent the object’s state. For example, data within an employee object might indicate whether an employee is full-time or part-time, hourly or salaried.

Unlike the employee record within a procedural program, however, an employee object can also contain operations. These operations may be used to read or change the object’s data.
In this sense, an object acts like a small “mini-program” that carries its own data around on its back.
If you want to do something to an object, or want to know something about it, you “ask it” to perform one of its operations. In object-oriented parlance, you send it a message.
In response, it performs some behavior.

The second characteristic of an object, then, is that it has some built-in behavior: An employee object may know how to tell you its salary, or how to print itself out to a mailing-address label.
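
Here is a minimal Java sketch of such an employee object; the particular fields and method names (getSalary(), raise(), mailingLabel()) are illustrative assumptions, not a prescribed design.

```java
// An object bundles state (the fields) with behavior (the methods).
public class Employee {
    private final String name;        // state
    private final String address;     // state
    private final boolean hourly;     // state: hourly or salaried
    private double salary;            // state: current pay

    public Employee(String name, String address, boolean hourly, double salary) {
        this.name = name;
        this.address = address;
        this.hourly = hourly;
        this.salary = salary;
    }

    // Behavior: the object "knows" how to report its own salary...
    public double getSalary() {
        return salary;
    }

    // ...how to give itself a raise...
    public void raise(double amount) {
        salary += amount;
    }

    // ...and how to print itself as a mailing-address label.
    public String mailingLabel() {
        return name + System.lineSeparator() + address;
    }
}
```

Sending the "getSalary" message to an employee object is written employee.getSalary() in Java.
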
The third characteristic of an object is that every object has a unique identity. This doesn’t mean that every object necessarily has an ID number, or a “primary key” like you find in relational databases. Objects are very much like program variables in a procedural language. 
The integer variables i and j may have exactly the same value—say 3—and yet they are distinct variables, stored at different locations within the computer’s memory.
Changing the value of i to 4, for example, does not change the value of j.
Similarly, two employee objects that represent the identical twins who work in shipping, Fred and Ned, may have the same data contents, yet still be distinct objects.
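
Continuing the sketch above, the following fragment (wrapped in a hypothetical IdentityDemo class and reusing the illustrative Employee class) shows two objects with the same data contents that nevertheless keep their own identities.

```java
public class IdentityDemo {
    public static void main(String[] args) {
        // Fred and Ned, the identical twins in shipping: same field values...
        Employee fred = new Employee("Twin", "Shipping Dept", true, 100.0);
        Employee ned  = new Employee("Twin", "Shipping Dept", true, 100.0);

        System.out.println(fred == ned);     // false: two distinct objects in memory
        fred.raise(10.0);                    // changing one...
        System.out.println(ned.getSalary()); // ...the other still reports 100.0
    }
}
```
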

Much, but not all, of the terminology used in object-oriented programming is the same from programming language to programming language. However, knowing about the differences in terminology might help you avoid some confusion when you find yourself “talking objects” to a Smalltalk or Object Pascal or C++ programmer.
In Java, the operations of an object or class are called methods, just as in Smalltalk.
C++ programmers call methods member functions.
While Smalltalk programmers always speak of sending a message, C++ programmers tend to refer to calling a member function.
Java programmers tend to split the difference, and speak either of sending a message to an object, or calling an object’s method, depending on whether it is the sender or the recipient of the message that is the focus of discussion.




[to be continued...]

Saturday, November 10, 2018

Concepts of Object-Oriented Design

                                       The Central Role of Objects

Object-orientation makes objects the centerpiece of software design.
The design of earlier systems was centered around processes, which were susceptible to change, and when this change came about, very little of the old system was ‘re-usable’.
The notion of an object is centered around
  1. a piece of data and 
  2. the operations (or methods) that could be used to modify it.
This makes possible the creation of an abstraction that is very stable since it is not dependent on the changing requirements of the application.
The execution of each process relies heavily on the objects to store the data and provide the necessary operations; with some additional work, the entire system is ‘assembled’ from the objects.
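
As a small, hypothetical illustration in Java, an account object keeps its piece of data together with the operations that modify it:

```java
// The data (the balance) and the operations that modify it live together.
public class Account {
    private double balance;             // the piece of data

    public void deposit(double amount)  { balance += amount; }   // operations that
    public void withdraw(double amount) { balance -= amount; }   // modify the data
    public double getBalance()          { return balance; }
}
```
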

                                                 The Notion of a Class


Classes allow a software designer to look at objects as different types of entities. Viewing objects this way allows us to use the mechanisms of classification to categorise these types, define hierarchies, and engage with the ideas of specialization and generalization of objects.
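
A tiny Java sketch of such a hierarchy, using illustrative class names: Shape is the generalization, and each subclass is a specialization of it.

```java
// Generalization: the common notion all the specialized classes share.
abstract class Shape {
    abstract double area();
}

// Specializations: each subclass refines the general idea of a shape.
class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    @Override double area() { return Math.PI * radius * radius; }
}

class Rectangle extends Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    @Override double area() { return width * height; }
}
```
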

                                 Abstract Specification of Functionality

In the course of the design process, the software engineer specifies the properties of objects (and by implication the classes) that are needed by a system.
  • This specification is abstract in that it does not place any restrictions on how the functionality is achieved.
  • This specification, called an interface or an abstract class, is like a contract for the implementer, and it also facilitates formal verification of the entire system.
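
For example, such a contract might be written as a Java interface like the hypothetical one below; any class that implements it fulfils the contract without revealing how the functionality is achieved.

```java
// The abstract specification: what a payroll calculator must do,
// with no commitment to how it is done.
public interface PayrollCalculator {
    double netPay(double grossPay);
}

// One possible implementer honouring the contract.
class FlatTaxCalculator implements PayrollCalculator {
    private final double rate;
    FlatTaxCalculator(double rate) { this.rate = rate; }

    @Override
    public double netPay(double grossPay) {
        return grossPay * (1 - rate);   // a flat deduction; other implementers may differ
    }
}
```
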


                               A Language to Define the System

The Unified Modelling Language (UML) has been chosen by consensus as the standard tool for describing the end products of the design activities. The documents generated in this language can be universally understood and are thus analogous to the ‘blueprints’ used in other engineering disciplines.



                                                    Standard Solutions

The existence of an object structure facilitates the documenting of standard solutions, called design patterns. Standard solutions and corresponding patterns are found at all stages of software development, but
design patterns are perhaps the most common form of reuse of solutions.
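
As one small illustration, here is a sketch of the well-known Strategy pattern in Java (class names are illustrative); the pattern documents a standard way of making an algorithm interchangeable behind an interface.

```java
// Strategy pattern: the varying algorithm is isolated behind an interface.
interface SortStrategy {
    void sort(int[] data);
}

class QuickSortStrategy implements SortStrategy {
    public void sort(int[] data) { java.util.Arrays.sort(data); }  // stand-in implementation
}

class Report {
    private final SortStrategy strategy;
    Report(SortStrategy strategy) { this.strategy = strategy; }    // strategy chosen by the client

    void print(int[] figures) {
        strategy.sort(figures);
        System.out.println(java.util.Arrays.toString(figures));
    }
}
```
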

                      An Analysis Process to Model a System

Object-orientation provides us with a systematic way to translate a functional specification to a conceptual design. This design describes the system in terms of conceptual classes from which the subsequent steps of the development process generate the implementation classes that constitute the finished software.
  1. functional specification -->
  2. conceptual design -->
  3. conceptual classes -->
  4. implementation classes

            The Notions of Extendability and Adaptability

Software has a flexibility that is not typically found in hardware, and this allows us to modify existing entities in small ways to create new ones.
  1. Inheritance, which creates a new descendant class that modifies the features of an existing (ancestor) class, and 
  2. composition, which uses objects belonging to existing classes as elements to constitute a new class,
are mechanisms that enable such modifications with classes and objects.
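
A brief Java sketch of the two mechanisms, with illustrative class names:

```java
// Existing classes.
class Vehicle {
    void describe() { System.out.println("a vehicle"); }
}

class Engine {
    void start() { System.out.println("engine started"); }
}

// 1. Inheritance: Truck is a new descendant class that modifies Vehicle's features.
class Truck extends Vehicle {
    @Override
    void describe() { System.out.println("a truck"); }
}

// 2. Composition: Car is constituted from objects of existing classes (here, an Engine).
class Car extends Vehicle {
    private final Engine engine = new Engine();

    void drive() {
        engine.start();   // delegate to the composed object
        describe();
    }
}
```
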

As the object-oriented methodology developed, the science of software design progressed too, and several desirable software properties were identified. Not central enough to be called object-oriented concepts, these ideas are nonetheless closely linked to them and are perhaps better understood because of these developments.

                                   Modular Design and Encapsulation

Modularity refers to the idea of putting together a large system by developing a
number of distinct components independently and then integrating these to provide the required functionality.
  1. This approach, when used properly, usually makes the individual modules relatively simple and thus the system easier to understand than one that is designed as a monolithic structure. In other words, such a design must be modular. 
  2. The system’s functionality must be provided by a number of well-designed, cooperating modules. 
  3. Each module must obviously provide certain functionality that is clearly specified by the module's interface. The interface also defines how other components may interact or communicate with the module. 
  4. We would like a module to clearly specify what it does, but not expose its implementation (internal workings). This separation of concerns gives rise to the notion of encapsulation, which means that the module hides details of its implementation from external agents. 
The abstract data type (ADT), the generalization of primitive data types such as integers and characters, is an example of applying encapsulation.
The programmer specifies the collection of operations on the data type and the data structures that are needed for data storage.
Users of the ADT perform the operations without concerning themselves with the implementation details.
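
A classic example is a stack ADT. The sketch below (an assumed IntStack class) hides its array-based representation; users see only the push, pop, and isEmpty operations.

```java
// Users call push/pop/isEmpty; the backing array and top index stay hidden.
public class IntStack {
    private int[] items = new int[16];   // hidden representation
    private int top = 0;

    public void push(int value) {
        if (top == items.length) {       // grow the hidden array when full
            items = java.util.Arrays.copyOf(items, items.length * 2);
        }
        items[top++] = value;
    }

    public int pop() {
        if (top == 0) throw new java.util.NoSuchElementException("stack is empty");
        return items[--top];
    }

    public boolean isEmpty() {
        return top == 0;
    }
}
```
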

                                             Cohesion and Coupling

Each module provides certain functionality;
cohesion of a module tells us how well the entities within a module work together to provide this functionality.
Cohesion is a measure of how focused the responsibilities of a module are. If the responsibilities of a module are unrelated or varied and use different sets of data, cohesion is reduced.
Highly cohesive modules tend to be more reliable, reusable, and understandable than less cohesive ones.
To increase cohesion, we would like all the constituents to contribute to some well-defined responsibility of the module. This may be quite a challenging task.
In contrast, the worst approach would be to arbitrarily assign responsibilities to modules, resulting in a module whose constituents have no obvious relationship.


Coupling refers to how dependent modules are on each other. 
The very fact that we split a program into multiple modules introduces some coupling into the system.

Coupling could result because of several factors:
  1. a module may refer to variables defined in another module or 
  2. a module may call methods of another module and use the return values.
The amount of coupling between modules can vary.
In general, if modules do not depend on each other's implementation, i.e., modules depend only on the published interfaces of other modules and not on their internals, we say that the coupling is low.
In such cases, changes in one module will not necessitate changes in other modules as long as the interfaces themselves do not change. 
Low coupling allows us to modify a module without worrying about the ramifications of the changes on the rest of the system. 
By contrast, high coupling means that changes in one module would necessitate changes in other modules, which may have a domino effect and also make it harder to understand the code.
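
A small Java sketch of low coupling through a published interface (the names are hypothetical): the reporting module depends only on EmployeeStore, so the storage module is free to change its internals without forcing changes elsewhere.

```java
import java.util.ArrayList;
import java.util.List;

// The published interface: the only thing other modules may depend on.
interface EmployeeStore {
    List<String> allNames();
}

// Storage module: its internals (here a plain in-memory list) can change at will.
class InMemoryEmployeeStore implements EmployeeStore {
    private final List<String> names = new ArrayList<>(List.of("Fred", "Ned"));
    public List<String> allNames() { return List.copyOf(names); }
}

// Reporting module: coupled only to the interface, not to the implementation.
class PayrollReport {
    private final EmployeeStore store;
    PayrollReport(EmployeeStore store) { this.store = store; }

    void print() {
        store.allNames().forEach(System.out::println);
    }
}
```
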

                                      Modifiability and Testability

A software component, unlike its hardware counterpart, can be easily modified in small ways. This modification can be done to change both functionality and design.
  1. The ability to change the functionality of a component allows for systems to be more adaptable; the advances in object-orientation have set higher standards for adaptability. 
  2. Improving the design through incremental change is accomplished by refactoring, again a concept that owes its origin to the development of the object-oriented approach. 
There is some risk associated with activities of both kinds; and in both cases, the organization of the system in terms of objects and classes has helped develop systematic procedures that mitigate the risk.


Testability of a concept, in general, refers to both:
  1. falsifiability, i.e., the ease with which we can find counterexamples, and
  2. the practical feasibility of reproducing such counterexamples.
In the context of software systems, testability can simply be stated as the ease with which we can find bugs in the software and the extent to which the structure of the system facilitates the detection of bugs.
Several concepts in software testing (e.g., the idea of unit testing) owe their prominence to concepts that came out of the development of the object-oriented paradigm.
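
For instance, a unit test exercises one small class in isolation; the sketch below assumes JUnit 5 and reuses the hypothetical FlatTaxCalculator from the earlier interface example. A failing assertion here is exactly a reproducible counterexample.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class FlatTaxCalculatorTest {

    @Test
    void netPayAppliesTheFlatRate() {
        PayrollCalculator calc = new FlatTaxCalculator(0.10);
        // 10% withheld from 100.00 should leave 90.00.
        assertEquals(90.0, calc.netPay(100.0), 1e-9);
    }
}
```
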



                  Benefits and Drawbacks of the Paradigm

From a practical standpoint, it is useful to examine how object-oriented methodology has modified the landscape of software development. As with any development, we do have pros and cons.
The advantages listed below are largely consequences of the ideas already presented here.
  1. Objects often reflect entities in application systems. This makes it easier for a designer to come up with classes in the design. In a process-oriented design, it is much harder to find such a connection that can simplify the initial design.
  2. Object-orientation helps increase productivity through reuse of existing software. Inheritance makes it relatively easy to extend and modify functionality provided by a class. Language designers often supply extensive libraries that users can extend.
  3. It is easier to accommodate changes. One of the difficulties with application
    development is changing requirements. With some care taken during design, it is possible to isolate the varying parts of a system into classes.
  4. The ability to isolate changes, encapsulate data, and employ modularity reduces the risks involved in system development. 
 The above advantages do not come without a price tag. 
  1. Perhaps the number one casualty of the paradigm is efficiency. The object-oriented development process introduces many layers of software, and this certainly increases overheads. 
  2. In addition, object creation and destruction is expensive. Modern applications tend to feature a large number of objects that interact with each other in complex ways and at the same time support a visual user interface. This is true whether it is a banking application with numerous account objects or a video game that often has a large number of objects. 
  3. Objects tend to have complex associations, which can result in non-locality, leading to poor memory access times. 
  4. Programmers and designers schooled in other paradigms, usually in the imperative paradigm, find it difficult to learn and use object-oriented principles. In coming up with classes, inexperienced designers may rely too heavily on the entities in the application system, ending up with systems that are ill-suited for reuse. Programmers also need acclimatisation; some people estimate that it takes as much as a year for a programmer to start feeling comfortable with these concepts. 
  5. Some researchers are of the opinion that the programming environments also have not kept up with research in language capabilities. They feel that many of the editors and testing and debugging facilities are still fundamentally geared to the imperative paradigm and do not directly support many of the advances such as design patterns.