Automated Hiring Process in Human Resource Management

Automated Hiring Process

In the enterprise world there have typically been process redesign groups and automation implementation teams, each with the common goal of improving efficiency and streamlining operations to enhance outcomes and lower cost.  Sometimes, though, improved efficiency and lower cost do not equal improved results.

There are multiple applications available that can scan and read resumes for organizations.  There are automated systems that can run background checks and credit reports on candidates.  For some occupations there are databases available for retrieving data about the performance of potential applicants.

Any of these, individually or collectively, can assist an organization's human resource department.

There is also the perception that by removing flawed human interpretation of data, one obtains an unbiased reading of that data.  When no human hands screen a resume, there is little chance that vital information about a potential associate will be missed.  By automating the process, there is less delay in retrieving answers.

Theoretically, this automated process would lower the time and cost of selecting and hiring new associates.  In turn, this would also improve the quality of the associates being hired by any organization.  The cost savings would be immediately recognized and visible on the bottom line of the financial reports.

Since all applicants were screened and selected through an automated process, the quality of those associates should be higher than that of associates chosen with human intervention.  There was no chance of favoritism playing a part in the hiring process.

Due to automation, the applicant was thoroughly screened, matched to an open position in the organization, made a job offer, accepted, and was hired without any possible human error.  Yet one step was missed in the automation, a critical step in building an effective team within any organization: the human insight step.

Without human involvement in the hiring process, by default, that process is flawed.  Numbers and data will never tell the whole story.  Data retrieved from some database will not reveal how well the applicant works under stress.  Data retrieved from the pages of a resume will never reveal the applicant's ability to integrate into a team.  A credit report won't reveal how willing the applicant would be to work floating shifts or unplanned overtime.

The interview process is the part of the hiring process that is the most difficult to eliminate.  Any automated system used as the sole means of hiring associates for any organization will fall short, no matter the skill level or job description of the candidates.  A custodial associate will still need to be screened for possible work ethic issues.  The hiring decision is made not solely on experience but also on ethical grounds.  How well did the associate perform the tasks?  How well did the associate work within the team?  How flexible was the associate with regard to work hour adjustments and overtime?  Some of these questions may be answered through reference checks, but most will need to be answered through an interview conducted by a skilled human resources associate who can ask the right questions to draw accurate answers from the applicant.

There are many circumstances where the numbers will not tell the whole story.  It is only through a structured interview that the best candidate can be identified.  There are disadvantages to the human-based hiring process as well: it takes man-hours for screening and reference checks, and it is open to favoritism when selecting the best candidate.

The best process is probably a hybrid of the two: use an automated process to identify the top 5-10% of candidates, then use human resources interaction and the interview process to make the final choice.  Mark Lange, with Brass Ring Solutions, an automated applicant screening firm, states, "we don't define quality, but we do provide tools for the company to use…" (Lange).
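The hybrid approach above can be sketched as a small script: an automated pass ranks applicants by a crude keyword score and only the top fraction is forwarded to human resources for interviews. The skill keywords, applicant names, and scoring rule are illustrative assumptions, not part of any real screening product such as Brass Ring's.

```python
import math

REQUIRED_SKILLS = {"scheduling", "teamwork", "inventory"}  # assumed job keywords

def score_resume(resume_text: str) -> int:
    """Count how many required skills appear in the resume text."""
    words = set(resume_text.lower().split())
    return len(REQUIRED_SKILLS & words)

def shortlist(applicants: dict[str, str], top_fraction: float = 0.10) -> list[str]:
    """Return the top fraction of applicants (by score) for human interviews."""
    ranked = sorted(applicants, key=lambda name: score_resume(applicants[name]),
                    reverse=True)
    keep = max(1, math.ceil(len(ranked) * top_fraction))
    return ranked[:keep]

applicants = {
    "A": "teamwork scheduling inventory experience",
    "B": "inventory only",
    "C": "no relevant keywords here",
}
print(shortlist(applicants))  # ['A'] — only the top 10% survive the automated pass
```

The point of the sketch is the division of labor: the machine narrows the pool cheaply, while the final decision stays with a human interviewer.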

Do not depend on an automated system alone; it is simply a tool organizations can use to augment a hiring process.  Automation gives us the best of both worlds; it does not replace one of them.


Lange, M. (2001). Brass Ring Systems Automated Hiring Systems: How to Impress a Robot. Retrieved February 16, 2009, from

Automated Rental System of Sam's Fashion Beauty

In this chapter, the researchers outline the preparations they had undertaken as preliminary steps for conducting this study. Chapter One consists of seven parts: (1) Background and Conceptual Framework of the Study; (2) Statement of the Problem; (3) Overview of the Current System and Related System; (4) Objectives of the Study; (5) Significance of the Study; (6) Definition of Terms; and (7) Delimitation of the Study.

Part One, Background of the Study and Theoretical Framework, discusses the topic of the study and describes how the proposed system can improve on the development of the current system.

Part Two, Statement of the Problem, presents the general and specific problems of the study. Part Three, Overview of the Current System and Related System, describes and discusses the operation of the current manual system used by the business and identifies the characteristics that are the subject of the study.

Part Four, Objectives of the Study, presents the goals of the study to be accomplished. Part Five, Significance of the Study, specifies the benefits that may be derived from the system designed by the researchers and the advantages it may provide to the system users.

Part Six, Definition of Terms, presents the conceptual and operational meanings of the essential terms used in the study.
Part Seven, Delimitation of the Study, specifies the areas to be included, the scope, and the limitations of this research. This also includes specific areas not covered in this study.

Background of the Study

The computer has become a way of life, since everything is made easy, just a click away from the problem.

It only needs two characteristics to make it work: one who knows how to use it and one who understands it. Though manual systems are still used, computerized processing remains the simplest and most trouble-free way to work, especially in business transactions. In most rental stores, daily transactions are still carried out manually. Transaction processing systems have become a competitive necessity and are almost always more profitable. A rental is a good option for customers who want a beautiful gown or formal dress without the expensive price. Instead of paying hundreds of pesos for a gown, barong, or any formal attire that you will wear only once, why not consider renting it?

Choosing rental products instead of shopping for one-time-wear gowns or formal dresses is a decision that can help customers trim their budget without trimming their big day. Customers will look beautiful in any gown or formal dress, no matter what it costs or where it is from. In the field of business, many establishments use computerized systems to process transactions, from simple to complex. That is the reason this study is proposed: to simplify the processes and procedures of this business. The researchers studied the typical problems that occur in this establishment.

Statement of the Problem

The primary concern of the proposed system is to solve the problems of the current system of Boutique de Marque. General Problem: Recording of all rental transactions is done manually. Specific Problems: (1) slow process of recording rental transactions and reports; (2) occasional data redundancy; (3) conflicts in reservations; (4) data not secured.

Overview of the Current System and Related Literature

Current System

The current operation of Boutique de Marque includes the rental of various types of gowns and any rental services requested and ordered by the customers. The quality, designs, and classification of the different packages they offer, and any other available type of dress, are all considered in determining the correct price for an order. The operation begins when a customer inquires for information. The customer orders the rental items through a manual procedure by asking the manager about the rental, reservation, services, quality, designs, and cost. Sometimes problems occur when reservation conflicts arise or when rental items are returned late.

Objectives of the Study

The study aims to automate the current system designed and implemented for Boutique de Marque, so that through automation the rental business will have fast, reliable, and efficient processing of rental transactions, reservations, and reports. General Objective: To automate the rental system of Sam's Fashion Beauty and Glamour. Specific Objectives: (1) to provide a fast process of recording rental transactions and reports; (2) to avoid data redundancy; (3) to avoid reservation conflicts; (4) to secure data.

Significance of the Study

The results of the study would be of great importance to the following: The Owner and Manager. The computer-based system can contribute greatly to the development of the business; it will allow them to manage the rental business, help the owner keep records of customers' data and of rented barongs and other formal attire, and ensure that the records are secured. The proposed system will help simplify the process of rental transactions. The Customer. This system can help the customer transact at Boutique de Marque faster and more easily when selecting their design and choice of clothes and gowns.

The Researchers. This study will provide the researchers valuable skills and knowledge in analyzing and encoding the system. It will also help them fulfill their college degree requirements, thus helping them to develop professionally in their chosen career in the future.

The Future Researchers. Future researchers will benefit because this book will serve as their guide on how to make a good, high-quality system. It will also serve as a basis for their studies and help them understand what a system is.

Delimitation of the Study

This study is confined only to the automation of Boutique de Marque, providing the fast process of recording rental transactions and reports that is essential to the future operations of the business.

This system will focus only on rental transactions, covering reservations, rent-a-new, and ready-to-wear items. Other transactions are not covered in this study.

Definition of Terms

Available. Ready to be used or obtained. In this study, Available refers to the stocks able to be rented by the customer.

Arrival. The act of arriving. In this study, Arrival refers to the return of rental stocks from the customer within the agreed period of time.

Customer. A person with whom one has dealings (Webster's New Universal Unabridged Dictionary). In this study, Customer refers to one who wants to rent gowns, barongs, or formal dresses.

Delivery. The action of delivering letters, packages, or ordered goods (Webster's New Universal Unabridged Dictionary). In this study, Delivery refers to rental items delivered to a particular event or place.

Description. The act of describing, or a representation in words of the qualities of a person or thing (Webster's New Universal Unabridged Dictionary). In this study, Description refers to describing the quality of the rental stocks.

Gowns. The outer dress of a woman. In this study, Gowns refers to the rental stocks and dresses that the boutique offers.

Receipt. A written acknowledgement of money received, or the act of receiving (Webster's New Universal Unabridged Dictionary). In this study, Receipt refers to the piece of paper the customer receives from the manager after the rental transaction is done.

Rental. An amount paid or received as rent (Webster's New Universal Unabridged Dictionary). In this study, Rental refers to the stocks that the customer wants to rent.

Reservation. The action of reserving something, or a qualification to an expression of agreement or approval. In this study, Reservation refers to the rental items or stocks that the customer wants to guarantee.

Type. A model, or a person or thing representative of a group or of a certain quality. In this study, Type refers to the quality of the rental stocks.

Chapter 2

Design of the Study

The researchers present in this chapter the methods and techniques for undertaking this study. This chapter is divided into eight parts: (1) Purpose of the Study, (2) Methods, (3) Procedures, (4) Software Design, (5) Data/Database Design, (6) Architectural Design, (7) Procedural Design, and (8) Statistical Treatment of Data. Part One, Purpose of the Study, establishes the need for careful research before implementing an information system and describes the type of research technique employed by the researchers.

Part Two, Methods, elaborates the problem-solving methods the researchers will apply to achieve the goals and solve the problems of the study. Part Three, Procedure, explains how the proposed system is designed using the System Development Life Cycle (SDLC). Part Four, Software Design, sketches the design of the proposed computer application to be used by the proposed system. Part Five, Database Design, outlines the data storage structure for the records of the new system. Part Six, Architectural Design, represents the features of the software in hierarchical form. Part Seven, Procedural Design, relates the flow of instructions and interactions within the program. Part Eight, Statistical Treatment of Data, sums up the possible effects of the new system in terms of efficiency.

Purpose of the Study

This study aims to improve the business processes of Boutique de Marque: to automate them and to provide methods and techniques for simplifying the processes of the business, in order to hasten the flow of daily rental transactions and secure the data and records of the boutique.


Sources of Information

To conduct the research, we, the researchers, gathered all needed information through interviews, observation, and a review of needed documents such as receipts, price lists, rental lists, and other important records. These helped in determining the various problems that exist in the current system and pointed us toward a solution.


The proposed system was developed using the System Development Life Cycle (SDLC) concept for planning and managing the system development process. The SDLC describes the activities and functions that the researchers perform regardless of which approach is used to develop the Automated Rental System of Boutique de Marque.

The researchers used the four steps of the System Development Life Cycle model: (1) System Planning Phase; (2) System Analysis Phase; (3) System Design Phase; and (4) System Implementation and Evaluation Phase.

In the first phase, the researchers conducted preliminary interviews to investigate and formulate ways to solve the problems of the current system.
In the second phase, the researchers investigated the business processes and documented what the proposed system should do. They carried out investigations and interviews to understand the flow of the current system and identify the problems.

In the third phase, the researchers created the user-friendly interface and prepared system requirements such as order slips, identifying all essential outputs, inputs, and processes.
Finally, in the implementation phase, the researchers built the program that fits the design. The program was written, tested, and documented.

Software Design

The researchers made use of Microsoft Access to develop the database for the Automated Rental System of Boutique de Marque. It is one of the applications well suited to creating databases for business and personal use. The system also uses Microsoft Visual Basic as its programming language, a user-friendly language that allows developers to easily create the system.

Data/Database Design

In designing the proposed software for the Automated Rental System of Boutique de Marque, the schema should first be defined. The researchers used a relational database management system, where data is stored in spreadsheet-like structures called tables, each consisting of rows and columns.
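As a minimal sketch of what such a relational schema could look like, the tables below model customers, rental stocks, and rental transactions, with a uniqueness rule that prevents the reservation conflicts named in the problem statement. This uses Python's built-in sqlite3 in place of Microsoft Access, and every table and column name is the author's assumption, not taken from the actual thesis database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    contact     TEXT
);
CREATE TABLE stock (                 -- gowns, barongs, formal dresses
    stock_id    INTEGER PRIMARY KEY,
    description TEXT NOT NULL,
    rental_fee  REAL NOT NULL
);
CREATE TABLE rental (
    rental_id   INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    stock_id    INTEGER NOT NULL REFERENCES stock(stock_id),
    date_out    TEXT NOT NULL,
    date_due    TEXT NOT NULL,
    UNIQUE (stock_id, date_out)      -- one item cannot go out twice on one date
);
""")
conn.execute("INSERT INTO customer VALUES (1, 'Juana', '0917-000-0000')")
conn.execute("INSERT INTO stock VALUES (1, 'Blue gown', 500.0)")
conn.execute("INSERT INTO rental VALUES (1, 1, 1, '2009-02-14', '2009-02-16')")
row = conn.execute("""
    SELECT c.name, s.description FROM rental r
    JOIN customer c ON c.customer_id = r.customer_id
    JOIN stock s ON s.stock_id = r.stock_id
""").fetchone()
print(row)  # ('Juana', 'Blue gown')
```

A second INSERT for the same stock item on the same date would violate the UNIQUE constraint, which is one simple way a relational design can rule out double bookings at the database level.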

Chapter 3

Presentation of the Proposed System and Evaluation Results

This chapter contains the following parts: (1) Proposed System, (2) Technical Specifications, (3) Implementation, (4) System Inputs and Outputs, (5) Evaluation Results: Jurors' Evaluation and suggestions for improvement.

Part One, Proposed System, shows the new processes and the proposed changes to the operation flow of the Automated Rental System of Boutique de Marque.
Part Two, Technical Specifications, specifies the technical requirements for implementing the proposed system, including software, hardware, and personnel requirements.
Part Three, Evaluation Results, states the jurors' evaluations and suggestions for improvement of the proposed system.

Proposed System

Using the proposed system, Boutique de Marque will no longer need to list all transactions manually and scan its notes to verify customers' information and transactions, because the new Automated Rental System of Boutique de Marque will provide the needed transaction files immediately.

The following data flow diagram illustrates the new Boutique de Marque rental transaction procedures to be followed in the proposed system.

Software Specification

In this proposed system, the program was developed using Microsoft Windows 7 Starter; a Microsoft Visual Basic application file with a size of 1.09 KB (1,120 bytes); and a Microsoft Access application file with a size of 2.58 KB (2,643 bytes).

Hardware Specification

In this proposed system, a single netbook will be used for the computerized processing of the business's transactions: an Intel® Atom™ processor, 2.00 GB of memory, a mouse, 1 GB and 4 GB flash drives, and a printer.

User Specification

Users of this proposed system will be trained by a professional information technologist so that they can be more effective in using the computer-based processes.

System Implementation

The implementation of the proposed system covers creating, testing, and installing the program for the new system.
We, the researchers, recommend the proper use and selection of hardware, software, and the other suggested essential instructions, which should be followed as illustrated in this documentation, as these were carefully studied and designed.

Chapter 4

Summary, Conclusion, Implications and Recommendations

Chapter Four consists of five parts: (1) Summary of Proposed System and Research Design, (2) Summary of Findings, (3) Conclusions, (4) Implications, and (5) Recommendations.
Part One, Summary of the Proposed System and Research Design, drafts the proposals of the researchers and shows how the whole study was designed.
Part Two, Summary of Findings, reflects the vital points of the study and presents the findings from analyzing the data gathered.
Part Three, Conclusions, presents the conclusions drawn from the results of the study.
Part Four, Implications, describes the positive effects of the new Rental System.
Part Five, Recommendations, provides suggestions in view of the findings.

Summary of Proposed System and Research Design

The Automated Rental System of Boutique de Marque was created and designed to enhance the services and operations of the Boutique de Marque rental business. The current system was analyzed and given a technological twist, revising its procedures to make it more efficient through the proposed system.

To achieve this system, we the researchers applied the method of system analysis and design. First, we analyzed the current system, its problems, and its points for improvement.
Then we designed and introduced a form of solution, which of course involved computerization.

Summary of Findings

The study aimed to design the Automated Rental System of Boutique de Marque and to find out its effects on the current system of the boutique.
Analysis of the gathered data revealed that the Automated Rental System of Boutique de Marque enabled the manager/owner to serve customers efficiently and to hasten rental transactions and reports.


After a thorough study of the data gathered, we the proponents conclude that the Automated Rental System of Boutique de Marque is needed in this boutique, so that it will be easier for the manager/owner to hasten the recording of rental transactions and reports, to avoid conflicts, and to secure data.


With the new Automated Rental System of Boutique de Marque, we the researchers hope that this new system will bring technological advancement to the boutique. Since the proposed system solves the problems of the current system, transactions and operations will certainly be more accurate and reliable if it is applied correctly.


We the proponents highly recommend that the Automated Rental System of Boutique de Marque be implemented, so that service to the customers gains in quality and the manager/owner's records are made accurate, secure, and easy to maintain.

Automated Intelligent Wireless Drip Irrigation

Drip irrigation is today's need, because water is nature's gift to the world and it is neither limitless nor free forever. The world's water resources are fast disappearing. The one and only solution to this problem is an automated drip irrigation system. In the field of agriculture, use of the proper method of irrigation is important, and it is well known that drip irrigation is very economical and efficient. In the conventional drip irrigation system, the farmer has to keep watch over the irrigation timetable, which is different for different crops.

In an automatic microcontroller-based drip irrigation system, irrigation takes place only when there is intense demand for water. An irrigation system uses valves to turn irrigation ON and OFF, and these valves can easily be automated using controllers and solenoids. The intent of this paper is to bring further automation to the agriculture field by using a wireless sensor network along with linear programming. The paper describes an application of a wireless sensor network for a low-cost, wirelessly controlled and monitored irrigation solution.

The developed irrigation method removes the need for manual labor for flood irrigation as well as drip irrigation. The use of linear programming helps us supply the available water to the crops if and only if there is real demand for it, so as to obtain maximum net profit at minimum cost. Linear programming also helps us manage the available water correctly.
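The allocation idea above can be sketched as a toy optimization: maximize total profit subject to a single water budget, with each crop irrigated fully or partially. For this one-constraint linear program (maximize Σ pᵢxᵢ subject to Σ wᵢxᵢ ≤ W, 0 ≤ xᵢ ≤ 1), serving crops greedily by profit earned per unit of water is provably optimal, by the fractional-knapsack argument. The crop names and figures are invented for illustration and are not from the paper.

```python
def allocate_water(crops, budget):
    """crops: list of (name, water_needed, profit). Returns {name: fraction irrigated}."""
    plan = {name: 0.0 for name, _, _ in crops}
    # Serve crops in order of profit per unit of water (highest first).
    for name, water, profit in sorted(crops, key=lambda c: c[2] / c[1], reverse=True):
        if budget <= 0:
            break
        frac = min(1.0, budget / water)   # irrigate fully, or partially if water is short
        plan[name] = frac
        budget -= frac * water
    return plan

# Assumed example: 250 units of water available, three crops competing for it.
crops = [("wheat", 100, 300), ("cotton", 200, 500), ("maize", 150, 375)]
plan = allocate_water(crops, budget=250)
print(plan)  # {'wheat': 1.0, 'cotton': 0.75, 'maize': 0.0}
```

A general-purpose LP solver would handle multiple constraints (per-field pressure limits, minimum irrigation per crop); the greedy rule is exact only for this single-budget form.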

Keywords – Surface Irrigation, Drip Irrigation, Wireless Sensor Network, Real-Time Monitoring, Automation.


Agricultural irrigation is extremely important to crop production everywhere in the world. In India, the economy is chiefly based on agriculture, yet the climatic conditions do not permit full use of agricultural resources. The chief reason is the lack of rain and the scarcity of ground reservoir water, so efficient water management plays an important role in irrigated agricultural cropping systems. The need for new water-saving techniques in irrigation is increasing rapidly right now. In order to produce "more crop per drop", agriculturists in (semi-)arid regions are currently exploring irrigation methods [1]. In modern drip irrigation systems, the most important benefit is that water is supplied near the root zone of the plants, drip by drip, so that a large quantity of water is saved. At present, farmers in India irrigate through manual control, watering the land at regular intervals. This process sometimes consumes more water than needed, or the water reaches the land late, causing the crops to dry. This problem can be completely rectified if farmers use an automated intelligent wireless drip irrigation system based on linear programming [2].


To save water, energy, and manpower in the agriculture sector

To handle the system manually as well as automatically

To detect the water level

To design a system that is efficient and reduces the drawbacks of the existing one


Irrigation is an artificial application of water to the soil. An irrigation system is a system that delivers water to an area where water is needed but not normally present in the required amounts. By and large, it is used for agricultural and landscaping purposes. The effectiveness of the irrigation is determined by a number of different factors, including the type of irrigation system and the conditions at its time of use. Additionally, irrigation has other uses in crop production, including protecting plants against frost, suppressing weed growth in grain fields, and helping prevent soil consolidation. In contrast, agriculture that relies only on direct rainfall is referred to as rain-fed or dry-land farming. [2]

Types of Irrigation. Surface Irrigation: Surface irrigation is defined as the group of application techniques where water is applied and distributed over the soil surface by gravity. It is by far the most common form of irrigation throughout the world. Surface irrigation is frequently referred to as flood irrigation.

Drip Irrigation: – Drip irrigation, apart from generally recognized as trickle irrigation or micro irrigation or localized irrigation, is an irrigation method which saves H2O and fertiliser by leting H2O to drip easy to the roots of workss, either onto the filth surface or straight onto the root zone, by way of an internet of valves, pipes, tube, and emitter. It is completed with the help of slender tubings which delivers H2O straight to the base of the works


In the existing automated drip irrigation system it is not possible to operate on decisions; it operates only on individual soil conditions such as soil moisture, pH value, temperature, and visible light, and on only one condition at a time. For example, if we use a soil moisture sensor to control the automated drip irrigation, then whenever the soil moisture level decreases, and only then, it directs the valve to change its position from OFF to ON; and when the soil moisture level returns to the correct pre-set level, the system switches OFF automatically. This drip irrigation was driven by solar-powered pumps: one of them (pump-1) carries water from the dam lake to a water tank, and the other (pump-2) is used to achieve the pressure needed for irrigation.
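The single-condition behavior described above amounts to a simple hysteresis loop: the valve opens when soil moisture falls below a low threshold and closes again once the pre-set level is restored. The sketch below illustrates that logic; the threshold values and readings are assumed examples, not figures from the paper.

```python
LOW, HIGH = 30.0, 45.0   # percent soil moisture; assumed set points

def next_valve_state(moisture: float, valve_on: bool) -> bool:
    """Return the new valve state given the latest moisture reading."""
    if moisture < LOW:
        return True            # too dry: switch ON
    if moisture >= HIGH:
        return False           # pre-set level reached: switch OFF
    return valve_on            # between thresholds: keep current state

readings = [50, 40, 28, 33, 44, 46]   # simulated sensor samples over time
state = False
history = []
for m in readings:
    state = next_valve_state(m, state)
    history.append(state)
print(history)  # [False, False, True, True, True, False]
```

Note that the controller reacts only to moisture; it never asks whether water is available or how much the crop is worth, which is exactly the limitation the proposed system addresses with linear programming.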

Figure: Overview of the Existing Automated Drip Irrigation System (relay, soil moisture sensor unit, temperature sensor unit, drip valve unit, farm limit)


In the current/existing automated drip irrigation system it is not possible to operate on decisions; it operates only on individual soil conditions such as soil moisture, pH value, temperature, and visible light, and on only one condition at a time.


It is somewhat similar to the existing automated drip irrigation system, but in addition my aim is to make the proposed system more intelligent; that is why I am going to use linear programming in it. In the current/existing automated drip irrigation system it is not possible to operate on decisions; it operates only on individual soil conditions such as soil moisture, pH value, temperature, and visible light, one condition at a time: if we use a soil moisture sensor to control the automated drip irrigation, then whenever the soil moisture level decreases, and only then, it directs the valve to change its position from OFF to ON, and when the soil moisture level reaches the correct pre-set level, the system switches OFF automatically. The existing system does not check the availability of water or the demand for water. My system will check both and operate on that basis. For that purpose I am using a linear programming approach, in order to make correct use of the available water across all the crops in the field where the system is implemented and acquire maximum net profit; linear programming also lets us easily identify the available water and the water needed by the crops.








Figure: Proposed System Architecture (remote monitoring and control; personal computer / server)

The purpose is to design a microcontroller- and PC-driven automated drip irrigation system. This system must be able to control the valve timings of the drippers automatically based on pre-programmed timings. The time intervals for all the valves can be fed into the PC for an entire week or month. A regional-language-based GUI should be developed so that novice users are able to feed in the timings or program the hardware. An ADC connected to the microcontroller should gather the humidity values for the soil at various points. These values should be visualized in software using 3D plots to help the user decide valve timings.
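The pre-programmed weekly timetable mentioned above can be represented as a small data structure: each valve gets (day, start hour, duration) entries fed in from the PC, and the controller asks whether a given valve should be open at the current time. The schedule contents below are invented for illustration only.

```python
SCHEDULE = {
    # valve_id: list of (weekday 0=Mon..6=Sun, start_hour, hours_open)
    1: [(0, 6, 2), (3, 6, 2)],   # Monday and Thursday, 06:00-08:00
    2: [(1, 18, 1)],             # Tuesday, 18:00-19:00
}

def valve_open(valve_id: int, weekday: int, hour: int) -> bool:
    """True if the timetable says this valve should be ON at that time."""
    return any(day == weekday and start <= hour < start + length
               for day, start, length in SCHEDULE.get(valve_id, []))

print(valve_open(1, 0, 7))   # True: Monday 07:00 falls in the 06:00-08:00 slot
print(valve_open(2, 1, 19))  # False: the Tuesday slot ends at 19:00
```

In a real deployment the PC would transmit such a table to the base station unit, and sensor readings could override it; this sketch covers only the timetable lookup itself.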

A PC interface is provided for easy programming of the hardware (no conventional keypad-LCD interface for tedious data entry). The 3D graphs generated from sensor values located across the entire field help us visualize, interpret, and take decisive action in a particular situation.

Figure: Wireless Sensor Network for Drip Irrigation System

Sensors (Light, Temperature, pH Value, Humidity): The sensors sense different physical parameters such as visible light, soil pH value, temperature, and humidity, and convert the sensed data into electrical signals (either voltage or current).

Signal Array: A collection of the various sensors; it takes the input from each sensor and feeds that data as input to the signal conditioning stage.

Signal Conditioning: This stage is essential. The signals obtained from sensors are generally weak, so signal conditioning is used to restore the signal to its original state; in effect it works as an amplifier.

ADC (Analog to Digital Converter): Converts the analog signal into a digital signal and feeds that digital signal to the microcontroller as input.

Microcontroller: The heart of the whole system; it controls all the activities of the system and has memory in which the control programs are stored.

Sensor Unit: the SU acquires the data given by the ADC and sends it to the BSU. The ADC value coming from a sensor is stored in a 10-bit register. Different kinds of sensors can easily be added in future developments.
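Scaling a 10-bit ADC register value into a physical quantity can be sketched as below. The 3.3 V reference voltage and the linear 0-100 %RH sensor curve are illustrative assumptions, not values given in the text.

```python
ADC_MAX = 1023        # largest value a 10-bit register can hold
V_REF = 3.3           # assumed ADC reference voltage, in volts

def adc_to_voltage(count: int) -> float:
    """Convert a raw 10-bit ADC count to the input voltage it represents."""
    if not 0 <= count <= ADC_MAX:
        raise ValueError("count out of 10-bit range")
    return count * V_REF / ADC_MAX

def adc_to_humidity(count: int) -> float:
    """Map the full ADC span linearly onto 0-100 %RH (assumed linear sensor)."""
    return 100.0 * count / ADC_MAX

print(adc_to_voltage(512))    # mid-scale voltage
print(adc_to_humidity(1023))  # full-scale reading
```

In a real SU this conversion would run on the base station or PC, so the node only transmits the raw register value.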

Base Station Unit: the BSU is a master device programmed to read and evaluate sensor data, to control the valves and to communicate with the other units.

PC (Personal Computer / Server): the PC is used for data acquisition as well as logging. Its graphical display shows 3D graphs generated from the sensor values located across the field.

Darlington Drivers: the control unit that drives the relays, fan, heater and water pump according to the soil conditions, supplying the conditions the soil requires in terms of humidity, pH value, light and temperature.

Valve Unit: the valve unit has the same wireless connection and the same properties as the SU. It has an output for controlling the valve, which is driven from the microcontroller's digital outputs through a transistor.
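The base-station logic tying these components together can be sketched as a simple threshold loop: compare each zone's soil humidity against a target and drive the corresponding valve output. The 55 %RH target and the actuate() stub are illustrative assumptions, not part of the described design.

```python
HUMIDITY_TARGET = 55.0   # assumed %RH below which a zone needs water

def actuate(valve_id: int, open_valve: bool) -> str:
    """Stand-in for the Darlington-driver output controlling one valve."""
    return f"valve {valve_id} {'OPEN' if open_valve else 'CLOSED'}"

def control_step(readings: dict) -> list:
    """One pass over all sensed zones: open valves where the soil is too dry."""
    return [actuate(zone, humidity < HUMIDITY_TARGET)
            for zone, humidity in sorted(readings.items())]

# Zone 1 is dry, zone 2 is wet enough.
print(control_step({1: 40.2, 2: 61.7}))  # ['valve 1 OPEN', 'valve 2 CLOSED']
```

In the actual system the decision would come from the LP-derived schedule rather than a fixed threshold; this loop only shows the sense-decide-actuate shape of the BSU.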


A wireless sensor network (WSN) consists of spatially distributed autonomous sensors that monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion or pollution, and cooperatively pass their data through the network to a main location. The more modern networks are bi-directional, also enabling control of sensor activity. The development of wireless sensor networks was motivated by military applications such as battlefield surveillance; today such networks are used in many industrial and consumer applications, such as industrial process monitoring and control, machine health monitoring, and so on. Wireless sensor networks have recently been proposed for a wide range of applications in home and industrial automation. A WSN consists of many tiny nodes, each with several sensors and a wireless interface based on the IEEE 802.15.4 standard, which supports a large number of embedded devices in one network. WSNs can be used for many purposes, such as environment monitoring, medical applications, robotic systems, and home and industrial automation.


Linear programming (LP, or linear optimisation) is a mathematical method for finding a way to achieve the best outcome (such as maximum profit or lowest cost) in a given mathematical model for some list of requirements represented as linear relationships. Linear programming is a special case of mathematical programming (mathematical optimisation).

It is used to evaluate control parameters, e.g. how much total water we have and what quantities should go to the various crops to obtain the optimal throughput (production).

E.g. how to split the drip water timings in order to achieve the best possible throughput.

Problem: 1000 litres of water.

Net profit: 4 Rs/litre for Crop 1; 5 Rs/litre for Crop 2.

Let x = litres for Crop 1 and y = litres for Crop 2.

Then PROFIT P = 4x + 5y (to maximise)

x + y <= 1000 ----------- (1)

Power required to deliver 1 litre of water to Crop 1 = 2 watts. Power required to deliver 1 litre of water to Crop 2 = 3 watts. Maximum power available = 2400 watts.

2x + 3y <= 2400 ----------- (2)

Solution:

Constraints: x >= 0, y >= 0

x + y <= 1000 ----------- (1)    2x + 3y <= 2400 ----------- (2)

For equation (1), putting x = 0 gives y = 1000 and putting y = 0 gives x = 1000; for equation (2), putting x = 0 gives y = 800 and putting y = 0 gives x = 1200.

Now solve the two equations to find the point of maximum profit:

2x + 3y = 2400 ----------- (2)

-2x - 2y = -2000 ----------- (1) multiplied by -2

Adding the two gives y = 400.

Putting y = 400 in equation (1) gives x = 600.

So we now have four corner points on the graph: (0, 0), (1000, 0), (0, 800), (600, 400).

To calculate the profit, substitute these values into P = 4x + 5y:

For (0, 0) we get profit P = 0

For (1000, 0) we get profit P = 4000

For (0, 800) we get profit P = 4000

For (600, 400) we get profit P = 2400 + 2000 = 4400 = maximum profit

So the optimum is:

600 litres of water for Crop 1

400 litres of water for Crop 2
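The hand calculation above can be cross-checked in a few lines by evaluating the objective at each corner point of the feasible region. (For larger problems one would use a real solver such as scipy.optimize.linprog; this brute-force check is only a sketch of the corner-point method.)

```python
def profit(x: float, y: float) -> float:
    """Objective P = 4x + 5y, in Rs."""
    return 4 * x + 5 * y

def feasible(x: float, y: float) -> bool:
    """Water budget (1), power budget (2), and non-negativity."""
    return x >= 0 and y >= 0 and x + y <= 1000 and 2 * x + 3 * y <= 2400

corner_points = [(0, 0), (1000, 0), (0, 800), (600, 400)]
best = max((p for p in corner_points if feasible(*p)), key=lambda p: profit(*p))

print(best, profit(*best))  # (600, 400) 4400
```

Because an LP optimum always lies at a vertex of the feasible region, checking only the four corner points is sufficient here.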


The goal is to map the physical parameter readings over areas of the farm where taking manual readings is not possible. E.g. if we have a reading at one point and another at a second point 25 metres away, we can interpolate values for the points at every metre between the two measured points.

Interpolation: interpolation is a method of constructing new data points within the range of a discrete set of known data points.

Extrapolation: the term extrapolation is used when we want to find data points outside the range of the known data points.
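The interpolation step for the 25-metre scenario described above can be sketched as simple linear interpolation. The humidity readings (40 %RH at 0 m, 60 %RH at 25 m) are assumed example values, not measurements from the text.

```python
def lerp(x0: float, v0: float, x1: float, v1: float, x: float) -> float:
    """Linearly interpolate the sensor value v at position x between two readings."""
    return v0 + (v1 - v0) * (x - x0) / (x1 - x0)

# Estimate a humidity value at every metre between the two measured points.
estimates = [round(lerp(0, 40.0, 25, 60.0, x), 2) for x in range(0, 26)]

print(estimates[0], estimates[13], estimates[25])
```

The same function evaluated at x outside [0, 25] would perform extrapolation, which is less reliable because it leaves the range of known data.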


The system can be stated as a set S, where S = {N, Pr, Po, C, LP, X};

where N = number of crops; Pr = {pr1, pr2, ..., prn}, the set of profits generated per litre for crops 1, 2, ..., n (input to the system); Po = {po1, po2, ..., pon}, the set of power values required to deliver 1 litre of water to crops 1, 2, ..., n (input to the system); C = {c1, c2, ...}, the set of constraints the system must obey (predefined); and LP is the linear programming function that takes the inputs Pr, Po and C and generates the unknowns X = {x1, x2, ..., xn}, where x1, x2, ..., xn are the optimal amounts of water to be supplied to each crop 1, 2, ..., n.

X = LP(Pr, Po, C)


Drip systems are comparatively easy to design and install.

The system increases productivity and reduces water consumption.

It is safe.

No manpower is required.

It reduces soil erosion and nutrient leaching.

Here we are using linear programming, which also has some advantages of its own, as follows:

LP is good for optimisation problems involving maximising profits and minimising costs.

The linear programming technique helps to make the best possible use of the available productive resources (such as time, labour, machines, etc.).

In a production process, bottlenecks may occur. For example, in a factory some machines may be in great demand while others may lie idle for some time. A significant advantage of linear programming is the highlighting of such bottlenecks.

It is relatively fast.

It is guaranteed to find the optimal solution.

It provides natural sensitivity analysis (shadow prices).


Compared to a conventional irrigation system, the equipment is more expensive.

It requires frequent maintenance for efficient operation.

The equipment has a limited life after installation because the plastic components deteriorate in a hot, arid climate when exposed to ultraviolet light.

Linear programming is applicable only to problems where the constraints and objective function are linear. In real-life situations, when constraints or objective functions are non-linear, this technique cannot be used.

Factors such as uncertainty, weather conditions, etc. are not considered.

Reducing the real world to a set of linear equations is usually very difficult.


The Automated Intelligent Wireless Drip Irrigation System Using Linear Programming proves to be a real-time feedback control system that monitors and controls all the activities of a drip irrigation system efficiently, and it helps us plan efficient water routing so as to gain more profit at less cost. Using this system, one can save manpower as well as water, improve productivity and, ultimately, the net profit.

In future, if modified properly, this system could also deliver agricultural chemicals such as calcium, sodium, ammonium and zinc to the field, together with fertilisers, by adding new sensors and valves.

It would also be possible for a registered farmer to download drip control timings from an agricultural university's website and control his own drip irrigation system according to the university's recommendations.

Automated Classroom Monitoring System

Every academic institution aims for outstanding scholastic performance from each and every pupil enrolled. Not only the schools but also the parents of these students hope to see their children excel in school with flying colours. To achieve this, students must attend their classes regularly so that they receive the best possible standard of learning experience. This system takes attendance checking to another level: an automated, secure and efficient system. There are good reasons why schools are encouraged to adopt this kind of technology.

Teachers nowadays have problems taking the daily attendance of students: sometimes they forget to take it, or they simply pass the class a blank sheet so students can write their names and sign as a record of attendance, and afterwards who knows whether that paper will be misplaced. This system will replace the obsolete swipe-card system with fingerprint authentication technology that is more reliable and accurate.

With this technology, attendance is recorded to the database faster, with just a touch of the thumb.

____________________ is an automated system placed in every classroom, equipped with biometric technology and automated SMS software, interfaced with a computer server where the database is stored.

Statement of the Problem

School instructors have responsibilities within the classroom towards the students. It is their duty to ensure that all the students enrolled in a subject benefit from the lesson the class is taking at the moment. But sometimes teachers forget to take class attendance, or the records are misplaced.

That is why parents cannot be assured that their children are inside the school premises, since the teachers do not have the data to support the situation. The main goal of this design is to improve the automated attendance monitoring system in academic institutions and to help the parents of the students monitor whether their child attended school or has been cutting classes.

Hypothesis of the Study

1. Ensure that the daily attendance of the class is taken, with a soft copy saved in a database and a hard copy printed for the instructor's documentation.
2. Limit the students from loitering in hallways during class hours.
3. Minimize the tardiness of students in attending classes.
4. Avoid the altering of school documents, which are accessible only to the instructor in authority.
5. Provide parents the means to monitor the attendance of students.
6. Enable the instructor to inform students if he/she will not be able to come to class on time or will be absent.
7. Replace the obsolete and inefficient swipe-card technology with fingerprint biometric authentication.
8. Save class time for discussion instead of the traditional roll-call of each student's name.

Theoretical framework of the study

This research is motivated by the development of the school's attendance monitoring system, provided with biometric technology and automation. It will help the school authorities maintain a permanent and more secure database for attendance monitoring. A study by Teleron (2000), "Data Acquisition on Class Hour Attendance of the Faculty in Southwestern University", features the same idea as ____________, but his research focused only on school staff. Teleron used barcodes on the staff IDs to hold the information, with a barcode reader reading the codes as staff swiped their IDs. He recommended the use of biometric technology and further enhancement of his study, which convinced the researchers to push through with _______________________.

Biometric devices nowadays are preferable to other authentication technologies such as barcodes and magnetic stripe scanners. This kind of technology is hard to alter or tamper with because it requires a unique pattern for authentication. Remarkably, the human finger possesses unique ridges and valleys that differ from one human being to another, even between identical twins.

The first advantage of using this technology is its uniqueness, which is also the main characteristic allowing biometric technology to become more and more important in our lives. With the uniqueness of biometric technology, each individual's identity becomes a single, reliable identifier for that person. The probability of two users having the same identity in a biometric security system is close to zero (Tistarelli, 2009).

Significance of the Study

The people who will benefit from this project design are the following:


This innovative technology will help teachers mark and update student class attendance quickly. The teacher can easily print hard copies of the attendance when necessary. It will reduce the time wasted on roll-calls, and the teacher can proceed directly to his/her lecture.


Our design will mostly benefit the students.


Parents will easily be able to track their sons' and daughters' attendance by simply sending the right keyword to the ____________ via Short Message Service (SMS). They will know at once whether the students are actually attending class or cutting lessons, because the system replies when they send the SMS. They can also ask for a hard copy of the attendance; for example, if they want the report for the whole month of January, the teacher will look up the records of that particular student and print them right away.

Scope and Delimitation

This design project aims to help the school and the entire student body of this institution by promoting safe monitoring of the students' attendance during class hours, so that parents may also be given the chance to know a student's status first-hand with just an SMS.

The researchers designed the biometric device to be placed in each classroom only. Since the device is powered by electricity, a sudden power loss will interrupt the whole system if an access is in progress, and it will take a few minutes for the generator to supply electricity temporarily so the system can resume.

The SMS function of the system is restricted to the parent's mobile number given to the administration and encoded in the database. Other mobile numbers unknown to the database or not recorded cannot be entertained by the system. In case parents want to add or change phone numbers in the system, they must submit a new form to the administration for approval and re-entry of data.

Definition of Terms

Listed below are the terminologies and their conceptual meanings as used in the study.


It is the measurement and analysis of unique physical or behavioral traits (such as fingerprint or voice patterns), especially as a means of verifying personal identity.


Fingerprint scanning essentially provides identification of a person based on the acquisition and recognition of the unique patterns and ridges in a fingerprint. The actual fingerprint identification process varies slightly between products and systems. The basis of identification, however, is nearly the same. A standard system comprises a sensor for scanning a fingerprint, a processor that stores the fingerprint database, and software that compares and matches the fingerprint against the predefined database. Within the database, a fingerprint is usually matched to a reference number or PIN, which is then matched to a person's name or account.
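The two-step lookup described above (template to reference number, reference number to account) can be sketched as a pair of mappings. The template strings, reference numbers and names here are placeholders for real minutiae data, not part of any actual product's database.

```python
fingerprint_db = {
    "template-a1": 1001,   # fingerprint template -> reference (PIN) number
    "template-b2": 1002,
}
accounts = {
    1001: "Alice",         # reference number -> account holder's name
    1002: "Bob",
}

def identify(template):
    """Match a scanned template against the database, then resolve the account."""
    pin = fingerprint_db.get(template)
    return accounts.get(pin) if pin is not None else None

print(identify("template-b2"))  # Bob
print(identify("unknown"))      # None
```

A real system would compare minutiae features with a tolerance rather than look up an exact string, but the database structure is the same.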


Automated Qualifying Entrance Examination



“Technology can change the way students think, learn and revolutionize,” says the Chief Executive Officer on Education and Technology (Courte, 2005). Technology also calls for broadening the definition of student achievement to include digital-age literacy, inventive thinking, effective communication and high-productivity skills necessary for students to thrive in the 21st century. According to the report, technology can help deliver significant results when combined with other key factors known to increase achievement, such as clear, measurable objectives; parental and community involvement; increased time spent on task; frequent feedback; and the teacher’s subject-matter expertise.

In this age of computers, many educators see it as inevitable that students will someday learn in classrooms without walls, desks, or face-to-face contact with teachers. The gradual degeneration of the conventional examination system is manifested in frequent leakage of question papers, manipulation of marks, copying and the use of unfair means by all involved (administration not ruled out). This conventional examination system is also referred to as paper-and-pencil testing. It is a fixed-item test in which every student and/or examinee answers the same questions.

Fixed-item tests waste students’ time because they give students a large number of items that are either too easy or too difficult. As a result, the tests give little information about the particular ability level of each student. With recent advances in measurement theory and the increased availability of microcomputers in schools, the practice of using electronic examination systems may change this. Computerized tests may replace paper-and-pencil tests in some instances.

These scenarios prompted the researchers to develop an Electronic Qualifying Examination that would be beneficial to the College of Science. The system being studied would facilitate the systematic storage, updating and retrieval of pertinent examinee data, as well as the checking and scoring of examinee answers to test questions. It is also able to generate reports of ratings and statistics on the test scores. However, it does not monitor the users’ actions and events in order to block users.

Statement of the Problem

Generally, this study sought to determine the operations and performance of an Electronic Qualifying Examination System compared with the traditional qualifying examination procedure and process.

Specifically, it endeavored to answer the following questions:

1. What are the existing problems being encountered with the current conventional qualifying examination?
2. What will be the design of an electronic qualifying examination system in terms of the following:

2.1 Process;

2.2 Data;
2.3 Language;

3. What is the level of acceptability of the proposed system in the College of Science?

Objective of the Study

In general, this study aimed to determine the performance and operation of an Electronic Qualifying Examination System compared with the current qualifying examination procedure and process.

In particular, it envisioned to:

1. Determine the existing problems being encountered with the current conventional qualifying examination;
2. Design an Electronic Qualifying Examination System in terms of the following:

2.1 Process;

2.2 Data;
2.3 Language; and,
3. Ascertain the level of acceptability of the proposed system in the College of Science.

Scope and Limitation of the Study

This study was conducted in the College of Science, University of Eastern Philippines. It is limited only to the performance of the specified functions such as scheduling, the actual examination and the retrieval of the examination results. It is meant to assist the users, especially the examination personnel to meet the needs of the students or applicants.

Although at present the facilities and equipment of the College of Science are inadequate, this system may be used in the future.

The proposed system draws its test items randomly. It will not monitor the user’s actions or block the user. Moreover, the system will not suggest which course would be appropriate for the examinee to take.

The system does not guarantee complete benefits to all users. It is bound to happen that some of them may experience technical difficulties that are not covered by the system, such as a malfunctioning computer. Such scenarios are beyond the control of the system.

Nevertheless, the system would be more comprehensive and interesting if it were introduced or presented covering the other services of the College.

Significance of the Study

The Electronic Qualifying Examination would replace the paper-and-pencil type of examination. It provides easy transactions between the test administrator and the examinee.
The results of this study would be beneficial to the following:

College of Science. The proposed system would benefit the College of Science by improving its management system. Through the existence of the proposed system, the workload during the qualifying examination would be minimized, human resources would be reduced, and security would be foolproof.

Examinees. This system will provide them a convenient way of taking the qualifying examination. The system provides instant checking and scoring of each examination, enabling examinees to get their results within a few hours. In this way, they will be able to minimize the time, effort and money spent on each activity.

College Guidance Personnel. In general, this system would greatly increase the flexibility of test management. It reduces the time they spend administering the examination, thus also reducing their fatigue, and provides them convenience throughout the examination process. They will likewise be able to get immediate feedback on whether the given examination is easy or difficult.

Future Researchers. This study can be used as a springboard for further study. This can be used as their reference or guide in the development of a system they are going to develop.

Definition of Terms
For easy understanding, the following terms were defined operationally and conceptually.

Conventional Examination. Operationally, it refers to the current system, which is the paper-pencil examination.

Data. It is information in a form suitable for processing by a computer, such as the digital representation of text, numbers, graphics, images and sound. Strictly speaking, it means an item of information (Cowart, 2000). In this study, this refers to the information extracted from the examinees: their profiles, schedules and results. It represents the facts, concepts or instructions produced by the examinee and the test administrator.

Database. Conceptually, it is an application used to store and manipulate data. The application may be a simple one that provides for flat files only and that cannot be programmable, or it may have the capability of producing databases that are programmable and relational (Dictionary of Information Technology, 1995). Operationally, this will be a storage device used to store important data and information in accordance to the system such as examinee profile, schedules and the results of examination.

End-User. Conceptually, it refers to the person who uses the application program and computer products to produce his or her own results. This is a person at the end of a long chain of people who design and make computer products. The end user is usually the person who buys the products (Cowart, 2000). It refers to the test administrator and examinees involved in this investigation.

Electronic Examination. According to the Webster dictionary, to be electronic is to incorporate your work with the use of the computer (The New Webster Pocket Computer Dictionary, 1998). In this study, it means taking an examination with the use of a computer system: its hardware, software and peripherals.

Error. A mistake. An error or bug in the system may cause the computer to crash (Dictionary of Information Technology, 1995).

Examinee. Generally speaking, it points to a person taking the actual examination.

Password. According to Webster’s dictionary, a password is a security code that is required in the use of a computer, a particular program, or a certain file. Computer files protected by a password require the user to type the needed password before the protected files can be made available (The New Webster’s Pocket Computer Dictionary, 1998). Operationally speaking, this is a secret word a user must input into the computer in order to gain access to the electronic qualifying examination.

Problems. Operationally, this refers to the existing obstacles that the Guidance Office is experiencing. The problems encountered were in scheduling, the actual examination and the retrieval of results. This is the main reason why the proponents conducted this study: to reduce and lessen the existing problems.

Procedure. Operationally and conceptually, it is the sequence of steps taken by the system to carry out its job.

Process. Operationally, it is to carry out an action, such as the scheduling process.

Profile. Operationally, it refers to the personal information of the examinee, such as last name, first name, age, gender, ID number, status, address and score in the test/examination.

Report. Conceptually speaking, it is a document from the computer, an output or hard copy, that summarizes the outcome of data processing (Cowart, 2000). This would be the printed report copy of the schedules and results of the examinee; it is data and information collected from the database.

System. According to the book, it is everything that is needed to carry out a certain task. Just like a computer system, it includes the hardware, software and the manuals (Cowart, 2000). Operationally, it refers to the Electronic Qualifying Examination, which will enable the College of Science to replace the current conventional system of qualifying examination. It involves three major processes: the scheduling, the actual examination and the retrieval of data.

Test Administrator/Examiner. Operationally, it refers to the person or persons involved in giving an examination. They are the ones responsible for operating the examination.

Automated Monitoring Attendance System

1.1 The problem and its scope

In this paper we propose a system that automates the whole process of taking attendance and maintaining its records in an academic institute. Managing people is a difficult task for most organizations, and maintaining attendance records is an important part of people management. In academic institutes, taking the attendance of students on a daily basis and maintaining the records is a major task. Taking attendance manually and maintaining the records for a long time adds to the difficulty of this task and wastes a lot of time.

For this reason an efficient system was designed. The system takes attendance electronically with the help of a fingerprint sensor, and all the records are saved on a computer server. Fingerprint sensors and LCD screens are placed at the entrance of each room. To mark attendance, a student has to place his/her thumb on the fingerprint sensor. On identification, the student’s attendance record is updated in the database and he/she is notified through the LCD screen. There is no need for stationery or special personnel for keeping the records; an automated system replaces the manual one.
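The flow just described (scan, identify, update record, notify on the LCD) can be sketched as follows. The enrolment data and message strings are illustrative assumptions, not part of the proposed system's actual implementation.

```python
import datetime

# Assumed enrolment table: sensor-assigned fingerprint id -> student name.
enrolled = {"fp-001": "Juan Cruz", "fp-002": "Maria Santos"}

# Attendance records held on the server, one list of dates per student.
attendance = {name: [] for name in enrolled.values()}

def mark_attendance(finger_id, when):
    """On identification, append the date to the record and return the LCD message."""
    student = enrolled.get(finger_id)
    if student is None:
        return "Not recognised - see the instructor"
    attendance[student].append(when.isoformat())
    return f"Attendance recorded for {student}"

msg = mark_attendance("fp-001", datetime.date(2024, 1, 15))
print(msg)                      # Attendance recorded for Juan Cruz
print(attendance["Juan Cruz"])  # ['2024-01-15']
```

Keeping the records as dated entries per student makes it straightforward to print a hard copy for the instructor or answer an SMS query for a given month.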

1.2 Introduction

Nowadays, industry is experiencing many technological advancements and changes in methods of learning. With the rise of globalization, it is becoming essential to find easier and more effective systems to help an organization or company. In spite of this, there are still business establishments and schools that do things the old-fashioned way; one thing that often remains a manual process is the recording of attendance. With these issues in mind, we developed an Automated Monitoring Attendance System, which automates the whole process of taking attendance and maintaining it, and which holds accurate records.

Biometric systems have been widely used for the purpose of recognition. These recognition methods refer to the automatic recognition of people based on some specific physiological or behavioral features [1]. There are many biometrics that can be utilized for specific systems, but the key structure of a biometric system is always the same [2]. Biometric systems are basically used for one of two objectives: identification [3] or verification [4]. Identification means finding a match between the query biometric sample and one that has already been stored in the database [5]. For example, to pass through a restricted area you may have to scan your finger on a biometric device. A new template is generated and then compared with the previously stored templates in the database. If a match is found, the person is allowed to pass through that area.

On the other hand, verification means the process of checking whether a query biometric sample belongs to the claimed identity or not [6]. Some of the most commonly used biometric systems are (i) iris recognition, (ii) facial recognition, (iii) fingerprint identification, (iv) voice identification, (v) DNA identification, (vi) hand geometry recognition and (vii) signature verification [5]. Previously, biometric techniques were used in many areas such as building security, ATMs, credit cards, criminal investigations and passport control [4]. The proposed system uses a fingerprint recognition technique [1] for obtaining students’ attendance. Human beings have been using fingerprints for recognition purposes for a very long time [7], because of the simplicity and accuracy of fingerprints.
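The distinction drawn above can be made concrete: identification searches the whole database for a match (1:N), while verification checks a single claimed identity (1:1). The match() stub and the stored templates are illustrative assumptions standing in for real biometric comparison.

```python
database = {"alice": "tpl-A", "bob": "tpl-B"}   # identity -> stored template

def match(query, stored):
    """Stand-in for a real biometric matcher; here, exact string equality."""
    return query == stored

def identification(query):
    """1:N search - return whose stored template the query matches, if anyone's."""
    return next((who for who, tpl in database.items() if match(query, tpl)), None)

def verification(claimed, query):
    """1:1 check - does the query match only the claimed identity's template?"""
    return claimed in database and match(query, database[claimed])

print(identification("tpl-B"))         # bob
print(verification("alice", "tpl-A"))  # True
print(verification("alice", "tpl-B"))  # False
```

Verification is cheaper because it compares against one template, which is why attendance systems that know who is expected at a door often prefer it.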

Fingerprint identification is based on two factors: (i) persistence: the basic characteristics and features do not change with time; and (ii) individuality: the fingerprint of every person in the world is unique [8]. Modern fingerprint matching techniques were initiated in the late 16th century [9] and advanced most in the 20th century. Fingerprints are considered one of the most mature biometric technologies and have been widely used in forensic laboratories and identification units [10]. Our proposed system uses fingerprint verification to automate the attendance system. It has been proved over the years that the fingerprints of each and every person are unique [8], so they help to identify students uniquely.

1.3 Theoretical Background

For over 100 years, fingerprints have been used to identify people. As a form of biometric identification, the fingerprint is quite possibly the most popular one. Besides the print being easy to obtain, identification does not require especially sophisticated hardware and software. In the old days, and even until now, fingerprints have usually been taken using merely ink and paper (whether one print, ten prints, or a latent print). A fingerprint is unique: there is no known case where two fingerprints have been found to be exactly identical.

During the fingerprint matching process, the ridges of the two fingerprints are compared. Besides ridges, some identification techniques also use minutiae. In brief, minutiae can be described as points of interest in a fingerprint. Many types of minutiae have been defined, such as pores, deltas, islands, ridge endings, bifurcations, spurs, bridges, crossovers, etc., but commonly only two minutiae are used, for their stability and robustness (4): ridge ending and bifurcation.

To help in fingerprint identification, fingerprint classification methods are implemented. Some classification schemes are applicable in the real world, such as the NCIC System (National Crime Information Center). Still used even now, the NCIC system classifies fingers according to a combination of patterns, ridge counts and whorl tracings. NCIC defines Fingerprint Classification (FPC) field codes to represent the fingerprint characteristics. The following are the field code tables:

Using NCIC FPC field codes eliminates the need for the fingerprint image and is thus very helpful for fingerprint identification by those who do not have access to an AFIS. Instead of relying on the image, NCIC relies on the finger-image information.

The Henry and American Classification Systems
The Henry and American classification systems, although they have much in common, are two different systems developed by two different people. The Henry Classification System (5) was developed by Sir Edward Henry in the late 1800s and was used to record criminals' fingerprints. The Henry system uses all ten fingerprints, with the right thumb denoted number 1, the right little finger number 5, the left thumb number 6, and the left little finger number 10.

The Henry system had two classifications, primary and secondary. In the primary classification, it was the presence of a whorl that gave a finger a value: even-numbered fingers contributed to the numerator and odd-numbered fingers to the denominator, and one was added to each sum. In the secondary classification, each hand's index finger was assigned a capital letter taken from the pattern types (radial loop (R), tented loop (T), ulnar loop (U), and arch (A)). All fingers other than the two index fingers were assigned small letters, also known as the small-letter group. Furthermore, a sub-secondary classification existed: a grouping of loops and whorls that coded the ridge counts of the loops and the ridge tracings of the whorls in the index, middle, and ring fingers. The following is the Henry system table.
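The primary classification described above can be sketched in code. This is a minimal illustration, assuming the standard Henry whorl point values (16, 16, 8, 8, 4, 4, 2, 2, 1, 1 for fingers 1 through 10); the function name and input format are our own:

```python
# Sketch of the Henry primary classification described above.
# Assumes the standard whorl point values for fingers 1..10;
# fingers without a whorl contribute 0.

WHORL_VALUES = {1: 16, 2: 16, 3: 8, 4: 8, 5: 4, 6: 4, 7: 2, 8: 2, 9: 1, 10: 1}

def henry_primary(whorl_fingers):
    """whorl_fingers: set of finger numbers (1..10) that bear a whorl.
    Returns the primary classification as a (numerator, denominator) pair."""
    even_sum = sum(WHORL_VALUES[f] for f in whorl_fingers if f % 2 == 0)
    odd_sum = sum(WHORL_VALUES[f] for f in whorl_fingers if f % 2 == 1)
    # One is added to both sums, so a hand set with no whorls classifies as 1/1.
    return (even_sum + 1, odd_sum + 1)

# Example: whorls on the right thumb (finger 1) and left index finger (finger 7)
print(henry_primary({1, 7}))  # (1, 19)
```

Because one is added to each sum, ten fingers with no whorls at all still receive the valid classification 1/1.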

The American Classification System was developed by Captain James Parke. The differences lie in how the primary values are assigned, the paper used to file the fingerprints, and how the primary values are calculated.

Filing Systems
In this system, all of the fingerprints are stored in cabinets. Each cabinet holds one classification, and the fingerprint cards are stored accordingly. The existence of AFIS greatly helps the classification process: there is no need even to store the physical fingerprint cards. AFIS does not need to count the primary values of all the fingers and does not have to be as complicated as the NCIC system. With the power of image recognition and classification algorithms, fingerprint identification can be done automatically by comparing the source digital image to a target database containing all saved digital images. Another important issue is the set of fingerprint classification patterns. These patterns have grown with each generation of AFIS and differ from one system to another; classifying prints reduces searching time and computational complexity.

The first known study of fingerprint classification was proposed in 1823 by Purkinje, who divided fingerprints into 9 categories: transverse curve, central longitudinal stria, oblique stripe, oblique loop, almond whorl, spiral whorl, ellipse, circle, and double whorl. Later, a more in-depth study was conducted by Francis Galton in 1892, which divided fingerprints into 3 major classes: arch, loop, and whorl. Ten years later, Edward Henry refined Galton's scheme, and this refinement was later used by many law enforcement agencies worldwide. Many variations of the Galton–Henry classification scheme exist; however, there are 5 most common patterns: arch, tented arch, left loop, right loop, and whorl. The following are the types of fingerprint classification patterns:

Since IDAFIS is an extended form of AFIS, we do not need to implement all the other classification systems. What we need to do is determine which classification patterns the algorithm can distinguish.

Fingerprint Matching

In general, fingerprint matching can be divided into three categories:
(1) Correlation-based matching: the matching process begins by superimposing the two fingerprints and calculating the correlation between them, taking displacement (e.g. translation, rotation) into account.
(2) Minutiae-based matching: minutiae are first extracted from each fingerprint, aligned, and then compared for matches.
(3) Ridge feature-based matching: ridge patterns are extracted from each fingerprint and compared with one another. The difference from minutiae-based matching is that instead of extracting minutiae (which is very difficult to do on a low-quality fingerprint image), ridge patterns such as local orientation and frequency, ridge shape, and texture information are used.
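As a rough illustration of the minutiae-based category, the following sketch pairs minutiae from two already-aligned prints by position, direction, and type. The tolerances, the tuple format, and the scoring rule are illustrative assumptions, not a production matcher (real systems must first estimate the translation and rotation between the prints):

```python
import math

# Minimal sketch of minutiae-based matching, assuming the two prints are
# already aligned. Each minutia is (x, y, angle_radians, type), where type
# is 'ending' or 'bifurcation' as described in the text.

def match_score(minutiae_a, minutiae_b, dist_tol=15.0, angle_tol=math.radians(20)):
    """Greedy one-to-one pairing; returns the fraction of matched minutiae."""
    unmatched_b = list(minutiae_b)
    matched = 0
    for (xa, ya, ta, kind_a) in minutiae_a:
        for m in unmatched_b:
            xb, yb, tb, kind_b = m
            close = math.hypot(xa - xb, ya - yb) <= dist_tol
            # compare angles modulo 2*pi
            dtheta = abs((ta - tb + math.pi) % (2 * math.pi) - math.pi)
            if close and dtheta <= angle_tol and kind_a == kind_b:
                matched += 1
                unmatched_b.remove(m)
                break
    return matched / max(len(minutiae_a), len(minutiae_b))

a = [(10, 10, 0.0, 'ending'), (40, 52, 1.0, 'bifurcation')]
b = [(12, 11, 0.05, 'ending'), (41, 50, 1.1, 'bifurcation'), (90, 90, 2.0, 'ending')]
print(match_score(a, b))  # 2 of 3 minutiae matched
```

The score can then be thresholded to decide whether the two prints come from the same finger.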

Chapter Two

Most attendance systems use paper-based methods for taking and calculating attendance, and this manual method requires paper sheets and a lot of stationery. Previously, very little work has been done on the academic attendance-monitoring problem. Some software has been designed to keep track of attendance [11], but it requires manual entry of data by staff workers, so the problem remains unsolved. Attendance-tracking systems using facial recognition techniques have also been proposed, but they require expensive apparatus while still not achieving the required accuracy [12]. The Automated Monitoring Attendance System is divided into three parts: hardware/software design, rules for marking attendance, and the online attendance report. Each of these is explained below.

2 System Description

2.1 Hardware

The hardware used should be easy to maintain, easy to implement, and readily available. The proposed hardware consists of the following parts:
(1) Fingerprint Scanner
(2) LCD Screen
(3) Computer

The fingerprint scanner will be used to input the fingerprints of teachers/students into the computer software. The LCD screen will display the roll numbers of those whose attendance has been marked. The computer software will interface the fingerprint scanner and LCD and will be connected to the network. It will input each fingerprint, process it, and extract features for matching. After matching, it will update the attendance records of the students in the database. A fingerprint sensor along with an LCD screen is placed at the entrance of each classroom. The fingerprint sensor captures the fingerprints of students, while the LCD screen notifies each student that his/her attendance has been marked.

2.2 Rules for marking attendance

This part explains how students and teachers will use the attendance management system. The following points ensure that attendance is marked correctly, without any problems: (1) All the hardware will be outside the classroom.

(2) When the teacher enters the classroom, attendance marking will start. The computer software starts the process after inputting the teacher's fingerprint; it finds the Subject ID and current semester using the teacher's ID, or these can be set manually in the software. If the teacher does not enter the classroom, attendance marking will not start. (3) After a set time span, say 15 minutes, any student who logs in will be marked as late on the attendance record. This time period can be increased or decreased per requirements.
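The marking rules above can be sketched as follows. The class and method names, and the 15-minute late window, are assumptions drawn from the description rather than the actual system code:

```python
from datetime import datetime, timedelta

# Sketch of the attendance-marking rules: marking starts only after the
# teacher's fingerprint is verified, and logins after the configurable
# window are marked late.

LATE_AFTER = timedelta(minutes=15)

class Session:
    def __init__(self):
        self.start = None  # set when the teacher's fingerprint is verified

    def teacher_login(self, now):
        self.start = now  # attendance marking starts here

    def mark_student(self, roll, now):
        if self.start is None:
            return None  # teacher absent: attendance marking does not start
        status = 'late' if now - self.start > LATE_AFTER else 'present'
        return (roll, status)

s = Session()
assert s.mark_student('R-07', datetime(2024, 1, 8, 9, 0)) is None
s.teacher_login(datetime(2024, 1, 8, 9, 0))
print(s.mark_student('R-07', datetime(2024, 1, 8, 9, 10)))  # ('R-07', 'present')
print(s.mark_student('R-12', datetime(2024, 1, 8, 9, 20)))  # ('R-12', 'late')
```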

2.3 Online Attendance Report

The database for attendance would be a table with the following fields as a combined primary key: (1) Day, (2) Roll, (3) Subject, and the following non-primary fields: (1) Attendance, (2) Semester. Using this table, all attendance can be managed for a student. For online reports, a simple website will be made that accesses this table to show students' attendance. SQL queries will be used for report generation: a query over this table gives the total number of classes held in a certain subject, from which the attendance percentage can easily be calculated.
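As a sketch of such report queries, assuming an `attendance` table with the fields above (the table name and sample data are our own), the class count and attendance percentage might be computed as:

```python
import sqlite3

# Hedged sketch of the report queries over the schema described above
# (primary key: Day, Roll, Subject; non-key fields: Attendance, Semester).

con = sqlite3.connect(':memory:')
con.execute("""CREATE TABLE attendance (
    Day TEXT, Roll TEXT, Subject TEXT,
    Attendance INTEGER, Semester INTEGER,
    PRIMARY KEY (Day, Roll, Subject))""")
rows = [('2024-01-08', 'R-07', 'CS101', 1, 1),
        ('2024-01-09', 'R-07', 'CS101', 0, 1),
        ('2024-01-10', 'R-07', 'CS101', 1, 1)]
con.executemany("INSERT INTO attendance VALUES (?,?,?,?,?)", rows)

# Total classes held in a subject = distinct days recorded for it.
held = con.execute(
    "SELECT COUNT(DISTINCT Day) FROM attendance WHERE Subject=?",
    ('CS101',)).fetchone()[0]
attended = con.execute(
    "SELECT SUM(Attendance) FROM attendance WHERE Subject=? AND Roll=?",
    ('CS101', 'R-07')).fetchone()[0]
print(f"attendance percent: {100 * attended / held:.1f}%")  # 66.7%
```

The website's report page would run the same two queries per student and subject.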

2.4 Using wireless network instead of LAN

We are using a LAN for communication among the servers and the hardware in the classrooms. We could instead use a wireless LAN with portable devices. Each portable device would have an embedded fingerprint scanner, a wireless connection, a microprocessor loaded with the software, memory, and a display terminal.


[1] D. Maltoni, D. Maio, A. K. Jain, S. Prabhakar, "Handbook of Fingerprint Recognition", Springer, New York, 2003.
[2] A. C. Weaver, "Biometric authentication", Computer, 39(2), pp. 96–97, 2006.
[3] J. Ortega-Garcia, J. Bigun, D. Reynolds and J. Gonzalez-Rodriguez, "Authentication gets personal with biometrics", IEEE Signal Processing Magazine, 21(2), pp. 50–62, 2004.
[4] Anil K. Jain, Arun Ross and Salil Prabhakar, "An introduction to biometric recognition", IEEE Transactions on Circuits and Systems for Video Technology, 14(1), pp. 4–20, Jan. 2004.
[5] Fakhreddine Karray, Jamil Abou Saleh, Mo Nours Arab and Milad Alemzadeh, "Multi Modal Biometric Systems: A State of the Art Survey", Pattern Analysis and Machine Intelligence Laboratory, University of Waterloo, Waterloo, Canada.
[6] Abdulmotaleb El Saddik, Mauricio Orozco, Yednek Asfaw, Shervin Shirmohammadi and Andy Adler, "A Novel Biometric System for Identification and Verification of Haptic Users", Multimedia Communications Research Laboratory (MCRLab), School of Information Technology and Engineering, University of Ottawa, Ottawa, Canada.
[7] H. C. Lee and R. E. Gaensslen, "Advances in Fingerprint Technology", Elsevier, New York.
[8] Sharath Pankanti, Salil Prabhakar, Anil K. Jain, "On the Individuality of Fingerprints", IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(8), August 2002.
[9] Federal Bureau of Investigation, "The Science of Fingerprints: Classification and Uses", U.S. Government Printing Office, Washington, D.C., 1984.
[10] H. C. Lee and R. E. Gaensslen (eds.), "Advances in Fingerprint Technology", Second Edition, CRC Press, New York, 2001.
[11] K.G.M.S.K. Jayawardana, T. N. Kadurugamuwa, R. G. Rage and S. Radhakrishnan, "Timesheet: An Attendance Tracking System", Proceedings of the Peradeniya University Research Sessions, Sri Lanka, Vol. 13, Part II, 18 December 2008.
[12] Yohei Kawaguchi, Tetsuo Shoji, Weijane Lin, Koh Kakusho, Michihiko Minoh, "Face Recognition-based Lecture Attendance System", Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University; Academic Center for Computing and Media Studies, Kyoto University.
[13] Digital Persona, Inc., 720 Bay Road, Redwood City, CA 94063, USA.

Table of Contents

Chapter One
1.1 The problem and its scope
1.2 Introduction
1.3 Theoretical Background
Chapter Two
2.1 Hardware and Software
2.2 Rules for marking attendance
2.3 Online Attendance Report
2.4 Using Wireless network instead of LAN
Chapter Three

Chapter Four
4.1 Summary
4.2 Conclusion and Recommendation
4.3 Bibliography

Automated Inventory System and POS

Many companies and organizations need the help of computers because of their speed, precision, and productivity. Many businesses have flourished because production increased, human errors were lessened, and management decisions were facilitated through accurate and reliable information generated by software applications. The business world has become dependent on the massive use of computers and electronics: nowadays, almost every company, great or small, enhances its business success rate and profitability through the use of computers. An inventory system with POS, whether automated or manual, comprises machines, people, and/or methods organized to process, disseminate, and transmit data that represent user information. An inventory system with POS supports a business in the monitoring of items and sales; it is also a computer process in which the computer responds immediately to user requests. Thus Malaya Lumber and Construction Supply, the subject of this study, is in need of inventory and sales software to help identify inventory requirements, set targets, and report actual and projected inventory status. The introduction of an automated system aims to optimize inventory levels and eliminate stock-outs.

BRIEF HISTORY

Malaya Lumber and Construction Supply has been a recognized name in the Makati hardware industry for over 40 years. It carries a wide range of construction supplies to suit every need, including industry-renowned hardware equipment from leading manufacturers such as YELE, CEMEX, and ABOY. Malaya Lumber and Construction Supply supplies a range of electrical items, sand, cement, steel, plywood, sinks, toilets, plumbing, and tiles to suit every budget.

1.1 Statement of the Problem
Malaya Lumber and Construction Supply is having difficulty monitoring its inventory and sales.
* How to design and develop a module to monitor the availability of their items.
* How to generate reports as per client needs, e.g. Sales Report, Inventory Sales Report.
* How to track the return/exchange of items.

1.2 Current State of the Technology
Malaya Lumber and Construction Supply currently uses a labor-intensive process for inventory, calculation of sales, and production of reports. Listed below are the classifications of internal operations that enable the company to do business with the public: Purchase Orders, Sales, Inventory.


These classifications are done by hand, most of the information is stored in a 'logbook', and each sales transaction is completed through a pre-printed numbered blank receipt. The needed reports are mostly encoded by a managerial-level employee in Microsoft Excel. The proposed system will minimize problems such as misplaced records and help relieve the deluge of work created by the age-old manual and logbook-based record keeping. The proposed Inventory System with Point of Sale will make daily operations effective and convenient to use as well.

1.3 Objective
1.3.1 General Objective
The proponents aim to develop a computerized Inventory System with Point-of-Sale for Malaya Lumber and Construction Supply that will aid their daily operations regarding their inventory and sales functions.

1.3.2 Specific Objectives

* To provide a monitoring module that will track the availability of items in the inventory.
* To create a module that will generate reports for Inventory and Sales.
* To develop a module that will track the records of returns and exchanges of items.

1.3.3 Scope and Limitations

This study is exclusively developed for Malaya Lumber and Construction Supply. It is concerned with developing an Inventory System with Point of Sale for Malaya Lumber and Construction Supply, covering sales transactions and the monitoring of stocks. The system provides the following functionalities:

1. Display inventory conditions of the products, including in-stock, out-of-stock, back-ordered, or pre-orderable.
2. Filter product listings to show only those products that are currently available in stock.
3. Decrement inventory levels when orders are processed.
4. Receive notifications when inventory levels reach an out-of-stock threshold.
* The system has the capability to keep track of customer and supplier information.
* A delivery module monitors products delivered by the supplier and products delivered to the customers.
* The system has the capability to create a back-up copy of the database file.
* The system has security to keep all information secured from unauthorized users.
* The system has a module that will prompt the user if a particular item has reached its critical level.
* The system will be implemented on a LAN-based network.
* The Report Module generates hard copies of record data on a daily, quarterly, and annual basis.
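The stock-decrement and critical-level behaviours listed above can be sketched as follows; the item names, quantities, and thresholds are illustrative assumptions, not the system's actual data:

```python
# Sketch of the inventory-monitoring behaviour: decrementing stock when an
# order is processed and prompting when an item reaches its critical level.

class Inventory:
    def __init__(self):
        self.stock = {}      # item -> quantity on hand
        self.critical = {}   # item -> critical (reorder) threshold

    def add_item(self, name, qty, critical_level):
        self.stock[name] = qty
        self.critical[name] = critical_level

    def process_sale(self, name, qty):
        if self.stock.get(name, 0) < qty:
            raise ValueError(f"insufficient stock for {name}")
        self.stock[name] -= qty
        if self.stock[name] <= self.critical[name]:
            return f"ALERT: {name} at critical level ({self.stock[name]} left)"
        return None

inv = Inventory()
inv.add_item('plywood 1/4"', 20, critical_level=5)
print(inv.process_sale('plywood 1/4"', 10))  # None: still above the threshold
print(inv.process_sale('plywood 1/4"', 6))   # alert: 4 left, at critical level
```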


* The system will not support bar-coding of items.
* The system is incapable of accepting credit cards as payment.
* There is no record entry for the computation of tax payments for submission to the BIR.
* The system does not support scheduling of delivery for the clients.


2.1 Introduction
Every computer system should be supported by theories. Given that the proponents intend to develop a sales and inventory system, theories concerning inventory control and transaction processing systems should be studied. Computer-related topics, such as databases, GUIs, and others, are also studied. These theories will eventually lead to the overall structure and design of the system; those mentioned in this chapter will be the foundation of the proposed system.

2.2 Inventory Control System

Some of the best inventory management software is equipped with a low-level warning system that alerts you when your stock is getting low, so you don't run out of something that is selling well. You also have the ability to see in real time what stock you have on hand at another location and keep track of it. If you offer item kits, it is important to use a program that lets you keep an eye on your sales and inventory so that your kits are all accounted for. [TOPT2013]

2.3 Software Prototyping

Prototyping is the process of building a model of a system. In terms of an information system, prototypes are employed to help system designers build an information system that is intuitive and easy for end users to manipulate. Prototyping is an iterative process that is part of the analysis phase of the systems development life cycle. [UMSL2012]

2.4 Transaction Processing System

A transaction processing system (TPS) is an information processing system for business transactions involving the collection, modification, and retrieval of all transaction data. Characteristics of a TPS include performance, reliability, and consistency. [TECH2013]

2.5 Graphical User Interface
A graphical user interface (GUI) is a human-computer interface (i.e., a way for humans to interact with computers) that uses windows, icons, and menus and that can be manipulated by a mouse (and often, to a limited extent, by a keyboard as well). GUIs stand in sharp contrast to command-line interfaces (CLIs), which use only text and are accessed solely by a keyboard. The most familiar example of a CLI to many people is MS-DOS. Another example is Linux when it is used in console mode (i.e., the entire screen shows text only). [LINF2004]

2.6 Database

A database is a set of data that has a regular structure and is organized in such a way that a computer can easily find the desired information. Data is a collection of distinct pieces of information, particularly information that has been formatted (i.e., organized) in some specific way for use in analysis or decision making. A database can generally be viewed as a collection of records, each of which contains one or more fields (i.e., pieces of data) about some entity (i.e., object), such as a person, organization, city, product, work of art, recipe, chemical, or sequence of DNA. For example, the fields for a database about people who work for a specific company might include the name, employee identification number, address, telephone number, date employment started, position, and salary of each worker. [LINF2006]

2.7 Database Normalization

Normalization is the process of organizing data in a database. This includes creating tables and establishing relationships between those tables according to rules designed both to protect the data and to make the database more flexible by eliminating redundancy and inconsistent dependency. [MICR2013]
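A small sketch of this idea: supplier details that would otherwise be repeated on every product row are moved into their own table and referenced by key. The table and column names are illustrative, not the system's actual schema:

```python
import sqlite3

# Sketch of eliminating redundancy through normalization: supplier details
# live in one table and products reference them by supplier_id.

con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE supplier (
    supplier_id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    phone TEXT);
CREATE TABLE product (
    product_id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    supplier_id INTEGER REFERENCES supplier(supplier_id));
INSERT INTO supplier VALUES (1, 'CEMEX', '555-0101');
INSERT INTO product VALUES (10, 'cement 40kg', 1), (11, 'cement 25kg', 1);
""")

# The supplier's phone number now lives in exactly one row; changing it
# once is visible for every product that references that supplier.
con.execute("UPDATE supplier SET phone='555-0202' WHERE supplier_id=1")
row = con.execute("""SELECT p.name, s.phone FROM product p
                     JOIN supplier s USING (supplier_id)
                     WHERE p.product_id=10""").fetchone()
print(row)  # ('cement 40kg', '555-0202')
```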

2.8 Computer Network
A computer network is a group of computer systems and other computing hardware devices that are linked together through communication channels to facilitate communication and resource sharing among a wide range of users. Networks are commonly categorized based on their characteristics. [TECH2013]

2.9 Back-up

In the computer world, a backup is a copy of some data. This copy can be used to restore the original data when the original information is lost or damaged. You can make backups of your data manually by copying your files to another place: a CD, another disk, another machine, a tape device, etc. Ideally, the copy should be stored in another physical place, not in the same room as the original; in case of a disaster, such as a fire, having both the original data and the backup in the same physical place could be fatal. Making multiple copies of your valuable data is recommended: for example, you can have one copy stored on another hard drive and another copy on a remote FTP server, for maximum security. [COBI2009]

2.10 Summary

The proponents took different theories into consideration to develop the sales and inventory system. They studied inventory and transaction theory to gain an idea of how these concepts work. Software prototyping will help the proponents and the customer get an overview of the outline of the system. Database normalization theory plays a huge role in an inventory system: the system will handle large amounts of data, so the database must be normalized properly. Back-up theory will help ensure the safety of the data. For internal cooperation within the company, the system will be implemented in a LAN environment. Graphical user interface theory will help make the design more user-friendly.

Chapter 4 Performance Analysis

4.1 Introduction
This chapter gives the procedures the proponents used to analyze and test the performance of the system. The proponents' objectives were to provide a monitoring module, to create a module that generates reports, and to develop a module that tracks the records. The group conducted proper testing procedures to prove that the system is capable of meeting the necessary requirements. The intended users of the system are the sales representative, purchaser, cashier, and administrator. The sales representative manages all the walk-in orders and delivery orders. The purchaser manages all the transactions in ordering products and monitoring inventory. The cashier manages the payments of the customers.

The administrator is the one who updates the file maintenance, processes access levels, makes backups, and restores the database of the system.

4.2 Experimental

The testing procedures listed below are the aspects the proponents used to measure all functions accurately against the specific objectives of the system.

4.2.1 Unit Testing

Unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level.
The proponents conducted intensive testing of all the validation rules implemented. Using the system, the proponents entered values into all data-entry forms to check valid inputs, invalid inputs, and input limits. In addition, the proponents checked what result, kind of value, and attributes each routine returns when called. The overall consistency of the system's application was also checked in this experiment.
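A unit test in the spirit of this validation testing might look like the following. The `validate_quantity` function is a hypothetical stand-in for one of the system's validation rules, checked with valid, invalid, and limit inputs:

```python
import unittest

def validate_quantity(value, minimum=1, maximum=999):
    """Accept only integer quantities within [minimum, maximum]."""
    if not isinstance(value, int) or isinstance(value, bool):
        return False
    return minimum <= value <= maximum

class TestValidateQuantity(unittest.TestCase):
    def test_valid_input(self):
        self.assertTrue(validate_quantity(10))

    def test_limit_inputs(self):
        self.assertTrue(validate_quantity(1))      # lower limit accepted
        self.assertTrue(validate_quantity(999))    # upper limit accepted
        self.assertFalse(validate_quantity(0))     # just below the limit
        self.assertFalse(validate_quantity(1000))  # just above the limit

    def test_invalid_type(self):
        self.assertFalse(validate_quantity("10"))  # strings rejected

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestValidateQuantity)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each data-entry form's rules would get its own test case of this shape, exercising the valid range, the boundary values, and invalid types.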

4.2.2 Integration testing

It is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together.

In this part of the experiment, the proponents' objective was to expose defects in the interfaces and interactions between integrated components. The proponents studied all the areas of inventory and sales to make sure that every module and its functions are integrated properly, especially the computation of commissions and the generation of all necessary reports.

4.2.3 System testing

The proponents examined the complete integrated system to verify that it meets its requirements.
The accuracy and consistency of the computerized system proved far more effective than the old business process, making clear the contrast between the new process and the old conventions of the business.

4.2.4 Alpha testing

Alpha testing is simulated or actual operational testing by prospective users or an independent test team at the developers' site.
Alpha testing was conducted at the developers' site by the sales representative to check whether there were any problems in using the system. The sales representative gave suggestions and comments about the system's processes, enabling the proponents to gain more information.

4.3 Results and Analysis

The proponents finished the experiments and proceeded to the analysis of all problems encountered. The following are the errors encountered and the proponents' actions during the testing of the system.

* Logic Errors

These errors occur when incorrect judgment and reasoning are used during system development. They usually occurred when loops were not properly terminated, incorrect assignments were made, or incorrect comparisons were made during filtering operations.

* Syntax Errors

Syntax errors occur when typographical errors are made or object properties and other keywords are used incorrectly. The group ensured that all syntax errors were eliminated.

* Non-Updateable Query
This error occurred when fields in the tables of the system's database were being updated while the database was in read-only mode.
* Expected Statement Error (End If without If)
This error occurs when the End If of an If statement is placed wrongly or the If statement does not have a corresponding End If.
* Integrity Constraints
This error occurs when a record having child records is deleted, or when a record being added contains the same primary key as an existing record in the database.

4.4 Summary

The results of the series of tests and analyses proved satisfactory for both the proponents and the customers. The system was able to perform the processes it is intended to do.
The system was able to efficiently record and monitor the company's sales and products, therefore making it easier for the administrator and the sales representative to monitor items and sales.
It also made the sales representative's work easier, because the system provided a file maintenance module that enables them to add, edit, delete, and back up files. The system was also able to automatically print all the important reports that customers needed.

Shifting the process of Malaya Lumber and Construction Supply from an unorganized to a well-organized system through automation is a big help to the company. Since the system was tested thoroughly, it performed well, making processes much easier for the Malaya personnel.

Chapter 5
The proponents have completed all the requirements and specifications of the system, which include a monitoring module, report generation, and a module that tracks the records. The developers have successfully met the objectives of the study.

Specifically, the system was able to secure all the records from unauthorized personnel, maintaining data integrity through the log-in feature of the system. For payment and cashiering, the system makes producing receipts easier and faster, without miscalculating the services rendered by each employee. With the use of the Computerized Sales and Inventory System for Malaya Lumber and Construction Supply, sales, deliveries, and inventory of products are efficiently monitored and recorded, and fast, accurate generation of reports is provided. Therefore, the proponents conclude that the outputs and operation of the Computerized Sales and Inventory System for Malaya Lumber and Construction Supply are proven to be enhanced and better than the company's current labor-intensive system.


World Wide Web:
[TOPT2013] toptenreviews (2013), 'Inventory Control System'
[UMSL2012] umsl (2012), 'Software Prototyping'
[TECH2013] techopedia (2013), 'Transaction Processing System'
[LINF2004] linfo (2004), 'Graphical User Interface'
[LINF2006] linfo (2006), 'Database'
[MICR2013] microsoft (2013), 'Database Normalization'
[TECH2013] techopedia (2013), 'Computer Network'
[COBI2009] cobiansoft (2009), 'Back-up'


Automated entrance exam

1.1 Background of the Problem
After the long summer vacation, enrollment is the next exciting moment for students before classes start in elementary, secondary, and college. It is the busiest activity in the school, and during it most of the problems arise that cause hassles for students and the class administration, because the unsystematic procedures established during enrollment terrify them. The long queues and slow movement take more than an hour of standing and waiting to finish; sometimes students decide to come back after two days or even a week so that they can be officially enrolled. On the other side, the school administration carries a heavy burden of work to process and officially enroll each student: in particular, calculation of payments (tuition fee, miscellaneous, and other school fees), scheduling of both students and instructors, evaluation of grades for sectioning, and lastly the breakdown of payments and class schedules in the form of a study load. This intensifying problem captured our attention, and we decided to enhance and develop the enrollment system of Mary Mount School of Koronadal Inc. to dissolve the problematic processes of the manual enrollment system.

1.2 Overview of the Current State of Technology
For more than a decade, the school has used a manually operated enrollment system. The student must fill in the registration form, with the registration fee and grade reports attached. Afterwards, enrollment and other required bills must be paid in full or in installments, and the enrollment process is then complete. The registrar is notified, using the receipt rendered by the cashier, that the student has already paid the bills and should be enrolled. Evaluation of grades is used for sectioning and scheduling of subjects. The cashier must provide and break down all the payments; receiving payments and releasing receipts is part of his role, and the receipt is used for the next transaction in the enrollment process. Compiling the information required for paying bills is also his role, and he provides the necessary billing reports. The registrar handles the enrollment process.

The evaluation of grades for sectioning and scheduling of students and instructors is the main goal, and the lists of students and instructors involved must be provided before classes start. The study aims to develop a Computerized Enrollment System for Mary Mount School of Koronadal Inc. This system provides an easy and convenient way of storing information with a systematic, computerized approach. With this, the burden on the person in charge would be lighter. A computerized system is recommended to avoid redundancy, inaccuracy, and mishandling of information. It provides easy generation of reports with accurate and satisfying results, accurate computation and breakdown of all payments with exact schedules for installment payments, and brief, concise study loads for students and instructors. Further, the enrollment system will provide a convenient and accessible way to satisfy students, faculty, and the class administration.
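The computation and installment breakdown of payments mentioned above can be sketched as follows; the fee names, amounts, and rounding rule are illustrative assumptions, not the school's actual fee structure:

```python
# Sketch of a fee breakdown: total fees are split into a down payment
# followed by equal installments, with any rounding remainder placed on
# the last installment.

def installment_schedule(fees, down_payment, n_installments):
    """fees: dict of fee name -> amount. Returns (total, list of payments)."""
    total = sum(fees.values())
    balance = total - down_payment
    per_installment = round(balance / n_installments, 2)
    payments = [per_installment] * n_installments
    # put any rounding remainder on the last installment
    payments[-1] = round(balance - per_installment * (n_installments - 1), 2)
    return total, payments

fees = {'tuition': 12000.0, 'miscellaneous': 2500.0, 'other': 500.0}
total, plan = installment_schedule(fees, down_payment=5000.0, n_installments=4)
print(total)  # 15000.0
print(plan)   # [2500.0, 2500.0, 2500.0, 2500.0]
```

A printed study load would then list these scheduled payments alongside the student's class schedule.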

2.1 Problem Statements
Enrollment plays a very serious role on every school's premises; it is very important in every school and acts as its foundation. Each school has its own system for handling enrollment, and to accommodate many students, schools need to computerize their enrollment systems to make the work easier to manage. A computer-based system is one in which the computer plays a major role, and this kind of system is needed by every company and institution nowadays. It is the best way of storing and retrieving data on a server or hard disk rather than using papers and file cabinets. This will help the registrar of Mary Mount School of Koronadal, Incorporated generate the data they need quickly and efficiently. Mary Mount School of Koronadal, Incorporated uses a manual enrollment system; as a result, they encounter unexpected problems like loss of information and slow transactions.

The inaccuracies in the information were minimal, but the possibility of encountering more difficult and tedious tasks remained. After conducting their research and analyzing the existing system, the proponents decided to recommend a computer-based enrollment system that enables the faculty and administration of the school to provide good service to every student and to address the problems caused by the manual enrollment system, so that the institution can become one of the leading institutions in the city. One of the problems the institution wanted included in the study was the lack of manpower in the accounting department: the accounting office, or cashier, had only two personnel to accommodate students paying their enrollment fees.

2.2 Proposed Statement
2.2.1 General Objectives
To be able to design, develop, and implement a Computerized Enrollment System for Mary Mount School of Koronadal, Incorporated.
2.2.2 Specific Objectives
To be able to create a module that will lessen the time spent on enrollment transactions. To be able to create a module that will manage the records of students. To be able to create a module that will lessen the time spent in generating reports.
2.2.3 Scope and Limitations

The proposed computerized enrollment system covered the major processes in the enrollment of Mary Mount School of Koronadal, Incorporated, namely: registration of current students, assessment of fees, file maintenance, and report generation (registration form, assessment slip, student master list, and other forms and reports essential to the system). The proposed system included the processing of students' personal records and the mode of payment that the student would choose.
Limitation

This study aims to develop an enrollment system for the school; the study is limited to the following functions:
1. Record Student’s Information
2. Record fees, collected and uncollected.
3. Use of the system is limited to the principal, faculty, and staff designated to do the work.
2.2.4 Methodology

The proponents conducted an interview to gain full knowledge of the system to be developed: the process of acquiring and retrieving information, updating, and file security. Developing a computerized enrollment system is difficult because there will be a series of tests and revisions before it becomes functional.

Therefore, there are useful tools for building an integrated system, such as the System Development Life Cycle models, which include waterfall, fountain, spiral, build-and-fix, rapid prototyping, incremental, and synchronize-and-stabilize.

Paper prototyping is a widely used method in the user-centered design process, a process that helps developers create software that meets the user's expectations and needs. It is a form of throwaway prototyping and involves creating rough, even hand-sketched, drawings of an interface to use as prototypes, or models, of a design. The spiral model combines the iterative nature of prototyping with the controlled and systematic aspects of the waterfall model, thereby providing the potential for rapid development of incremental versions of the software. In this model, the software is developed in a series of incremental releases, with the early stages being either paper models or prototypes. Later iterations become increasingly complete versions of the product.

Figure 2.2.4 Spiral
The study included creating rough drafts of how the proposed system would look and what the pages would contain. Through paper prototyping, the proponents had a more organized approach, and modifications of the system could be implemented more easily than by working with the system directly, where there is a great possibility that the internal workings of the system could encounter errors. The proponents developed a preliminary release or version of the system in which the key requirements and functionalities were used as a basis. With continuous testing and evaluation of the initial release, the proponents were able to come up with a series of incremental releases, developed by integrating the results gathered from the tests, evaluations, and feedback. When results are to be implemented, the proponents use paper prototyping before applying the modifications directly to the system itself.

Activities and steps of the spiral model:
Requirement Analysis
The first step encompassed the tasks that go into determining the needs or conditions to meet for a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users. This step was critical to the success of the development project. The requirements must be actionable, measurable, testable, related to identified needs or opportunities, and defined to a level of detail sufficient for system design. For the requirements analysis, the proponents conducted an interview to gather the data needed and went to the institution to observe how the manual system works.

Functional Specification
The second step was the documentation that described the requested behavior of the proposed system. The documentation determined the needs of the system users as well as the requested properties of inputs and outputs. The proponents consulted the Mary Mount School of Koronadal, Inc. (MMSKI) registrar regarding how they would like the system to behave and the way the users could interact with it, along with the inputs it needs and the outputs it would supply.

Software Architecture
The software architecture of a program or computing system is the structure or structures of the system, which comprise software components, the externally visible properties of those components, and the relationships between them. The proponents identified the essential parts of this type of system that they covered in the study, choosing only those necessary for the enrollment processes. Unnecessary features were set aside in order to focus on the essential processes of the enrollment system.

Software Design
Software design is a process of problem-solving and planning for a software solution. After the purpose and specifications of the software are determined, software developers design, or employ designers to develop, a plan for a solution. It includes low-level component and algorithm implementation issues as well as the architectural view. The proponents considered different aspects in the design of the enrollment system; each aspect must reflect the goals the proponents and the system were trying to achieve. Some of the aspects incorporated in the study are the following: compatibility, extensibility, fault tolerance, maintainability, reliability, reusability, and usability. For the design of the software, the proponents also used a data flow diagram and an entity relationship diagram, along with normalization.
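The entity relationship diagram and normalization mentioned above ultimately produce a database schema. A minimal sketch of what such a normalized schema could look like is shown below; the real system used MySQL, so SQLite is used here only as a convenient stand-in, and all table and column names are hypothetical, not the proponents' actual design.

```python
import sqlite3

# Hypothetical normalized schema: student data is stored once, and each
# enrollment row only links a student to a section for a school year,
# which avoids the redundancy the study set out to eliminate.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (
    student_id INTEGER PRIMARY KEY,
    last_name  TEXT NOT NULL,
    first_name TEXT NOT NULL
);
CREATE TABLE section (
    section_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL
);
CREATE TABLE enrollment (
    enrollment_id INTEGER PRIMARY KEY,
    student_id    INTEGER NOT NULL REFERENCES student(student_id),
    section_id    INTEGER NOT NULL REFERENCES section(section_id),
    school_year   TEXT NOT NULL
);
""")
conn.execute("INSERT INTO student VALUES (1, 'Reyes', 'Ana')")
conn.execute("INSERT INTO section VALUES (1, 'Grade 7 - Sampaguita')")
conn.execute("INSERT INTO enrollment VALUES (1, 1, 1, '2013-2014')")

# A report such as the student master list then becomes a simple join.
row = conn.execute("""
    SELECT s.last_name, sec.name
    FROM enrollment e
    JOIN student s  ON s.student_id  = e.student_id
    JOIN section sec ON sec.section_id = e.section_id
""").fetchone()
print(row)  # ('Reyes', 'Grade 7 - Sampaguita')
```

Because every fact is recorded in exactly one table, report generation (master lists, sectioning, assessments) reduces to joins rather than re-encoding the same information in several files.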

Implementation
Implementation is the process of writing, testing, debugging/troubleshooting, and maintaining the source code of computer programs. This source code is written in a programming language. The purpose of programming is to create a program that exhibits a certain desired behavior. Coding requires expertise in many different subjects, including knowledge of the application domain, specialized algorithms, and formal logic. The proponents used VB.NET for the coding and the interface, and for the system to be available online, the proponents uploaded it to a host. Upon being uploaded, errors were expected to emerge, since the code must also be compatible with the technology; further debugging was done until no errors were found.

Software Testing
Software testing is an empirical investigation conducted to provide the company with information about the quality of the product or service under test, with respect to the context in which it is intended to operate. It also provides an objective, independent view of the software to allow the business to appreciate and understand the risks of implementing the software. Test techniques include the process of executing a program or application with the intent of finding software bugs. Testing can also be the process of validating and verifying that the system meets the requirements that guided its design and development.

Software Deployment

Software deployment is all of the activities that make a software system available for use. The general deployment process consists of several interrelated activities with possible transitions between them. These activities can occur at the producer site, at the consumer site, or both. Deployment should be interpreted as a general process that has to be customized according to specific requirements or characteristics.

Maintenance

Software maintenance is the modification of a software product after delivery to correct faults, to improve performance or other attributes, or to adapt the product to a modified environment. When the system becomes ready and available for the institution, continuous improvements and modifications will be done as needed to correct errors that the system might encounter and that might make it unable to meet the needs of its users for online enrollment.

3 PROJECT MANAGEMENT

3.1 Calendar of Activities
Description of Activities
1. Project Planning
The proponents planned the project to provide a convenient system for the client, and selected a client with whom to conduct the project.
2. Approval from Mary Mount School of Koronadal
The proponents chose this school as their client and asked permission to conduct the project proposal. The proponents asked about the background of the school and about the existing system the client has.
3. Interview

The proponents conducted an interview with the principal, Mrs. Ma. Lourdes T. Juanillo, regarding the proposed system. Through it, the proponents learned all the important details of the school's transactions, so as to help make its obligations easier.
4. Making a Project Proposal

After conducting the interview, the proponents proposed their system, with the approval of the principal. The proponents came up with a solution that will enhance the school through an automated enrollment system. The proponents and the client discussed the flow of the system and its functionalities.

5. Project Proposal and Defense Schedule
After conducting the interview, the proponents prepared all documents for the project proposal and title defense before the panel members. The proponents kept a copy of the documentation to be submitted to the panel.
6. Project Construction

The proposed project will be designed and coded; the data and information gathered during the interview will be used in constructing the project. All the details should be applied in the said project.
7. Documentation

The proponents also started making documents out of the information gathered during the interview. The proponents applied the important details in the documentation and kept a secure copy of the document. The documents were then finalized by the proponents.
8. Testing

The proponents should test whether the system works efficiently and is ready to deploy to the client; before deployment, the adviser should test it first as an inspection of the said project. Likewise, the finished project will be tested by the administrator of Mary Mount School, Inc. After testing, the project will be evaluated.
9. Implementation

The proponents will be ready to deploy the project after it is finished and evaluated. The proponents will present the system at Mary Mount School, Inc. The client should evaluate the difference between the manual operation and the automated operation. The proponents will defend the project proposal before the panel members.
10. Maintenance

The proponents maintain and improve the system in preparation for handling problems identified during development.
Gantt Chart of Activities


The proponents used an Intel® Core™ i3-2100 CPU @ 3.10 GHz, 2 GB of memory, and a standard keyboard and mouse to develop their system, and an HP DeskJet Ink Advantage 2010 printer to print reports and receipts.

The software used by the proponents consists of Microsoft Windows 7 Ultimate (and other Microsoft Windows versions) as the operating system, MySQL as the database, and Crystal Reports and Microsoft Visual Studio 2010 as programming software.


Automated Rental System of Sam’s Fashion Beauty

In this chapter, the researchers outline the preparations they undertook as preliminary steps for conducting this study. Chapter One consists of seven parts: (1) Background and Conceptual Framework of the Study; (2) Statement of the Problem; (3) Overview of the Current System and Related System; (4) Objectives of the Study; (5) Significance of the Study; (6) Definition of Terms; and (7) Delimitation of the Study.

Part One, Background of the Study and Theoretical Framework, discusses the subject of the study and describes how the proposed system can improve upon the current system.
Part Two, Statement of the Problem, presents the general and specific problems of the study. Part Three, Overview of the Current System and Related System, describes and discusses the operation of the current manual system being used by the business and the characteristics of it that are subject to the study.

Part Four, Objectives of the Study, presents the aims of the study and what it seeks to solve and accomplish. Part Five, Significance of the Study, specifies the benefits that can be derived from the system designed by the researchers and the benefits it may provide to the system users.

Part Six, Definition of Terms, presents the conceptual and operational meanings of the important terms used in the study.
Part Seven, Delimitation of the Study, specifies the areas to be included, the scope and the limitations of this research attempt. This also includes specific areas not included in this research.

Background of the Study

The computer has become a way of life, since everything is just a click away. It needs only two things to make it work: someone who knows how to use it and someone who understands it. Though manual systems are still used, computerized processing is the simplest and most hassle-free way to work, especially for business transactions. In most rental stores, daily transactions are still done manually. Transaction processing systems have become a competitive necessity and are almost always more profitable. A rental is a great option for customers who want a beautiful gown or formal dress without the expensive price. Instead of paying thousands of pesos for a gown, barong, or other formal attire that you will wear only once, why not consider renting one?

Choosing rental products instead of buying one-time-wear gowns or formal dresses is a decision that can help customers trim their budget without trimming their special day. Customers will look beautiful in any gown or formal dress, no matter what it costs or where it is from. In the field of business, many establishments use computerized systems to process transactions from simple to complex. That is the reason this study is proposed: to simplify the processes and procedures of this business. The researchers studied the usual problems that occur in this establishment.

Statement of the Problem

The main concern of the proposed system is to solve the problems of the present system of Boutique De Marquee. General Problem: recording of all rental transactions is done manually. Specific Problems: (1) slow process of recording rental transactions and reports; (2) occasional data redundancy; (3) conflicts in reservation; (4) data not secured.

Overview of the Current System and Related Literature

Current System

The current operation of Boutique de Marquee involves the rental of different kinds of gowns and any rental services requested and ordered by customers. The quality, designs, and classification of the different packages offered, and any other available kinds of dresses, are highly considered in determining the proper price of an order. The operation starts when customers inquire for information. The customer orders rental items through a manual procedure by asking the manager about rental, reservation, services, quality, designs, and cost. Sometimes problems arise when reservation conflicts occur or when rental items are returned late.

Objectives of the Study

The study aims to design and implement an automated system for Boutique de Marquee so that, through automation, the rental business will have fast, reliable, and efficient processing of rental transactions, reservations, and reports. General Objective: to automate the rental system of Sam's Fashion Beauty and Glamour. Specific Objectives: (1) to provide a fast process for recording rental transactions and reports; (2) to avoid data redundancy; (3) to avoid reservation conflicts; (4) to secure data.
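Avoiding reservation conflicts comes down to a date-overlap check before a new reservation is accepted. The actual system was written in Visual Basic 6.0; the Python below is only an illustrative sketch, and the item names and dates are hypothetical.

```python
from datetime import date

def overlaps(start_a, end_a, start_b, end_b):
    """Two rental periods conflict when each one starts before the other ends."""
    return start_a <= end_b and start_b <= end_a

def has_conflict(reservations, item_id, start, end):
    """Check a requested period against existing reservations for one item."""
    return any(r["item"] == item_id and overlaps(r["start"], r["end"], start, end)
               for r in reservations)

# Hypothetical reservation book for one gown
book = [{"item": "gown-01", "start": date(2013, 5, 10), "end": date(2013, 5, 12)}]

print(has_conflict(book, "gown-01", date(2013, 5, 11), date(2013, 5, 13)))  # True
print(has_conflict(book, "gown-01", date(2013, 5, 14), date(2013, 5, 15)))  # False
```

If every new reservation is accepted only when this check returns False, the double-booking problem described in the current system cannot occur.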

Significance of the Study

The result of the study would be of great importance to the following: The Owner and Manager. The computer-based system can give a great contribution to the development of the business and will enable them to handle the rental business, helping the owner keep records of customers' information and of rented barongs and other formal attire, and to make sure that the records are secured. The proposed system will help simplify the rental transaction process. The Customer. This system can help customers transact at Boutique de Marquee faster and more easily in selecting their designs and choices of clothes and gowns.

The Researchers. This study will provide the researchers with valuable skills and knowledge in analyzing and encoding the system. This can also help them fulfill their school degree requirements, thus helping them to grow professionally in their chosen careers in the future.

The Future Researchers. Future researchers will benefit because this book will serve as their guide on how to make a good, quality system. It will also serve as a basis for their studies and will help them understand what a system is.

Delimitation of the Study

This study is confined to the automation of Boutique de Marquee, providing a fast process for recording the rental transactions and reports that are important to the business for future purposes.

This system will focus only on rental transactions, which cover reservation, rent-a-new, and ready-to-wear. Other transactions are not involved in this study.

Definition of Terms

Available. Able to be used or obtained. In this study, Available refers to stocks that a customer is able to rent.
Arrival. The act of arriving. In this study, Arrival refers to the arrival of rental stocks returned by the customer within the agreed period.
Customer. A person with whom one has to deal (Webster's New Universal Unabridged Dictionary). In this study, Customer refers to the person who wants to rent gowns, barongs, or formal dresses.
Delivery. The action of delivering letters, packages, or ordered goods (Webster's New Universal Unabridged Dictionary). In this study, Delivery refers to a rental item or items delivered on a particular occasion or to a particular place.
Description. The act of describing, or the representation in words of the qualities of a person or a thing (Webster's New Universal Unabridged Dictionary). In this study, Description refers to describing the quality of rental stocks.
Gowns. The outer dress of a woman. In this study, Gowns refers to the rental stocks and dresses that the boutique offers.
Receipt. A written acknowledgement of money received, or the act of receiving (Webster's New Universal Unabridged Dictionary). In this study, Receipt refers to the piece of paper the customer receives from the manager after the rental transaction is done.
Rental. An amount paid or received as rent (Webster's New Universal Unabridged Dictionary). In this study, Rental refers to the stocks that the customer wants to rent.
Reservation. The action of reserving something, or a qualification to an expression of agreement or approval. In this study, Reservation refers to the rental items or stocks that the customers reserve in advance.
Type. A model, or a person or thing representative of a group or of a certain quality. In this study, Type refers to the quality of the rental stocks.

Chapter 2

Design of the Study

The researchers present in this chapter the methods and techniques by which they will undertake this study. This chapter is divided into eight parts: (1) Purpose of the Study, (2) Methods, (3) Procedures, (4) Software Design, (5) Data/Database Design, (6) Architectural Design, (7) Procedural Design, and (8) Statistical Treatment of Data. Part One, Purpose of the Study, proves the need for a careful study before implementing an information system and explains the type of research method employed by the researchers.

Part Two, Methods, elaborates the system of problem solving the researchers will apply to achieve the objectives and solve the problems of the study. Part Three, Procedures, explains how the proposed system is designed using the System Development Life Cycle (SDLC). Part Four, Software Design, sketches the designs of the proposed computer application to be used by the proposed system. Part Five, Database Design, outlines the data storage structure for the records of the new system. Part Six, Architectural Design, represents the functions of the software in hierarchical form. Part Seven, Procedural Design, relates the flow of commands and interaction within the program. Part Eight, Statistical Treatment of Data, sums up the possible effects of the new system in terms of efficiency.

Purpose of the Study

This study aims to improve the business processes of Boutique De Marque: to automate the business and provide strategies and techniques to simplify its processes, in order to hasten the flow of daily rental transactions and secure the data and information of the said boutique.


Sources of Information

To conduct the study, the researchers gathered all needed information through interviews, observation, and examination of documents such as receipts, price lists, rental lists, and other important files. These helped determine the various problems that exist in the current system and pointed toward a solution.


The proposed system was developed with the use of the System Development Life Cycle (SDLC) concept in planning and managing the system development process. The SDLC describes activities and functions that the researchers perform regardless of which approach is used to develop the Automated Rental System of Boutique De Marque.

The researchers used the four phases of the System Development Life Cycle model: (1) System Planning Phase; (2) System Analysis Phase; (3) System Design Phase; and (4) System Implementation and Evaluation Phase.

In the first phase, the researchers conducted preliminary interviews to analyze and formulate ways to solve the problems of the current system.
In the second phase, the researchers investigated the business processes and documented what the proposed system can do. They conducted investigations and interviews to understand the flow of the current system and identify the problems.

In the third phase, the researchers created the user-friendly interface, prepared system requirements such as order slips, and identified all necessary outputs, inputs, and processes.
Finally, in the implementation phase, the researchers designed the program that fits the system. The program will be written, tested, and documented.

Software Design

The researchers made use of Microsoft Access to develop the program for the Automated Rental System of Boutique De Marque. It is one of the applications suited to creating databases for business and personal use. The system also uses Microsoft Visual Basic 6.0 as its programming language, a user-friendly language that enables developers to create the system easily.

Data/Database Design

In designing the proposed software for the Automated Rental System of Boutique De Marque, the schema should first be defined. The researchers used a relational database management system, in which data is stored in a spreadsheet-like structure called a table that contains rows and columns.
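The row-and-column idea behind the relational table described above can be sketched in a few lines. The real system used Microsoft Access via Visual Basic 6.0; this plain-Python sketch is only illustrative, and the field names and sample rows are hypothetical.

```python
# Each row is one rental transaction; each column holds one attribute of it.
columns = ("rental_id", "customer", "item", "date_rented")
rows = [
    (1, "A. Cruz",   "gown-01",   "2013-05-10"),
    (2, "B. Santos", "barong-03", "2013-05-11"),
]

# Pairing column names with row values gives record-style access,
# the way a table in a relational database is addressed.
records = [dict(zip(columns, r)) for r in rows]
print(records[0]["customer"])  # A. Cruz
print(len(records))            # 2 rows in the table
```

The rental_id column plays the role of a primary key: it uniquely identifies each row so that one transaction can be looked up, updated, or linked to from another table.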

Chapter 3

Presentation of the Proposed System and Evaluation Results

This chapter contains the following parts: (1) Proposed System, (2) Technical Specifications, (3) Implementation, (4) System Inputs and Outputs, and (5) Evaluation Results: the jurors' evaluation and suggestions for improvement.

Part One, Proposed System, shows the new processes and proposed changes of operation flow of Automated Rental System of Boutique de Marque.
Part Two, Technical Specification, specifies the needed technicalities to implement the proposed system. It includes software, hardware and people requirements.
Part Three, Evaluation Results, states the jurors' evaluation and suggestions for improvement of the proposed system.

Proposed System

Using the proposed system, Boutique de Marque will no longer have to list all transactions manually and scan its notes to verify customers' information and transactions, because the new Automated Rental Transaction System of Boutique de Marque will provide the needed transaction files immediately.

The following data flow diagram illustrates the new Boutique de Marque Rental Transaction procedures to be followed for the proposed system.

Software Specification

In this proposed system, the program was developed using Microsoft Windows 7 Starter; Microsoft Visual Basic 6.0 application software, with a size of 1.09 KB (1,120 bytes); and Microsoft Access application software, with a size of 2.58 KB (2,643 bytes).

Hardware Specification

In this proposed system, a single netbook will be used for the computerized transaction processes of the business: Intel® Atom™ processor, 2.00 GB of memory, a mouse, 1 GB and 4 GB flash drives, and a printer.

User Specification

The user of this proposed system will be trained by an expert information technologist so that he or she will be more effective in using the computer-based processes.

System Implementation

The implementation of the proposed system will cover developing, testing, and installing the program for the new system.
The researchers recommend the proper use and choice of hardware, software, and the other important instructions suggested, and that they be followed as illustrated in this documentation, as these were carefully studied and designed.

Chapter 4

Summary, Conclusion, Implications and Recommendations

Chapter 4 consists of five parts: (1) Summary of Proposed System and Research Design, (2) Summary of Findings, (3) Conclusions, (4) Implications, and (5) Recommendations.
Part One, Summary of the Proposed System and Research Design, drafts the proposals of the researchers and shows how the whole research was designed.
Part Two, Summary of Findings, reflects the vital points of the study and presents the findings after analyzing the data gathered.
Part Three, Conclusions, presents the conclusions drawn from the results of the study.
Part Four, Implications, describes the positive effects of the new system or the Rental System.
Part Five, Recommendations, offers certain recommendations in view of the conditions given.

Summary of Proposed System and Research Design

The Automated Rental System of Boutique de Marque was created and designed to enhance the services and operations of the Boutique de Marque rental business. The current system was analyzed and given a technological twist, with its procedures revised so as to become more efficient through the proposed system.

To achieve this system, the researchers applied the method of systems analysis and design. First, the researchers analyzed the current system, its problems, and its points for improvement.
Then, the researchers designed and introduced a solution, which of course involved computerization.

Summary of Findings

The study aimed to design the Automated Rental System of Boutique de Marque and to find out its effects on the existing system of the said boutique.
Findings from the analysis of the gathered data revealed that the Automated Rental System of Boutique de Marquee enabled the manager/owner to serve customers efficiently and to hasten rental transactions and reports.


Conclusions

After a thorough study of the data gathered, the proponents conclude that the Automated Rental System of Boutique de Marque is needed in this boutique, so that the manager/owner can more efficiently hasten the recording of rental transactions and reports, avoid conflicts, and secure data.


Implications

With the new Automated Rental System of Boutique de Marque, the researchers hope that the new system will bring technological advancement to the boutique. Since the proposed system solved the problems of the current system, transactions and operations will certainly be more accurate and reliable if it is implemented properly.


Recommendations

The proponents highly recommend that the Automated Rental System of Boutique de Marquee be implemented, so that service to the customers will be of higher quality and the manager/owner's records will be accurate, secure, and simple.

Automated Mapping and Recording System


Nowadays, the computer is one of the technologies most people use in their everyday lives. For instance, instead of waiting by a telephone for a call or for snail mail to arrive, a person can receive a document or dispatch with lightning speed using email or mobile phones; the use of projectors and video conferencing also helps significantly in the process of learning, and through these approaches different kinds of student intelligence can be addressed. The use of the computer is one of the fastest growing and most important developments of our time. As computer technology rapidly changes our world, it has permitted man not only to store his knowledge, but to organize, manipulate, and modify it systematically.

Organizations nowadays are adopting office automation systems. Some barangays use automated systems to lessen their work and minimize their problems. Barangay Pansol Proper, Quezon City is one of those barangays that would like to adopt computerization for a more productive output. Yet there are still problems the barangay cannot avoid, such as storing resident records or profiles and having a hard time finding where a certain family lives, in order to meet the needs of families requiring priority action and attention. Some files are inevitably misplaced or lost because of the manual recording process.

Barangay Pansol Proper, Quezon City, from Spanish times to the Commonwealth era, was a sitio of Balara, a barrio of Marikina, Rizal. The sitio was sparsely populated, with small-scale farming and a cottage industry of footwear making as the main sources of livelihood. The people of Marikina used to refer to Pansol as "bundok" (mountain) or "ulat," a corruption of the word "ulap" (cloud), as the place was almost always covered with mist or ground fog in the early mornings and late afternoons throughout the year. The area was almost entirely farm fields. The largest plantation was of sugar cane, owned by the Tuazon family.
Sugar cane fields in the area fed the sugar cane mill to produce muscovado sugar.

These farm fields were irrigated by a natural spring located in an area called Boliran. Its water never ran out and was also used by the residents for their daily household needs. Other natural springs are found in a nearby low mountain fondly called "Payong," resembling an opened umbrella. These springs could have been the source of the name Pansol, after Pansol, Laguna, known for its natural springs. The barrio was administered by a "Cabeza de Barangay," popularly designated by consensus, usually schooled and a man of means. Pansol's population increased due mainly to migrants. The earliest were the work forces that extended the Carriedo Water Supply System (1878) through developmental projects such as the Marikina River development – Montalban System (1908–1924), the Angat–Montalban System (1924–1944), and the post-World War II projects (1945–1964). Many of the families residing in Pansol now are descendants of these work forces. This influx was followed during the liberation of Metro Manila from the Japanese occupation forces: families from the nearby towns of Marikina, particularly from Montalban, evacuated to Pansol to avoid the dangers of bombardment of their towns by American forces and of Japanese atrocities. The last wave of migrants came after liberation to work at the U.S. Army camps at Pansol and neighboring areas. Pansol (as a sitio of Balara) became part of Quezon City in 1939, when President Manuel L. Quezon signed into law Commonwealth Act No. 502 on October 12, 1939, creating Quezon City. At present, the Chairman of Barangay Pansol, Quezon City, Chairman Dominic Flores, has many programs for the improvement of the barangay. And as the number of residents continually increases, handling a barangay can become increasingly difficult, especially if everything is done manually.
With this problem in mind, as a researcher I propose a study entitled Automated Mapping and Recording System in Barangay Pansol Proper, Quezon City, using Adobe Flash Professional CS5.5 or Visual Basic 6.0, which proposes full automation of their current manual system. The Automated Mapping and Recording System enables easy searching of records to locate and identify legitimate residents in the area, including profile information such as name, family name, telephone number, and address. It also enables personnel to easily locate on the map the areas that an ambulance or fire truck can easily pass through in case of emergency.
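A minimal sketch of the kind of record search such a system could provide is shown below. The `Resident` fields and the sample data are illustrative assumptions only, not the barangay's actual records or the proposed system's design:

```python
from dataclasses import dataclass

@dataclass
class Resident:
    first_name: str
    family_name: str
    telephone: str
    address: str

def search_residents(records, query):
    """Return all residents whose name, address, or phone number matches the query."""
    q = query.lower()
    return [r for r in records
            if q in r.first_name.lower()
            or q in r.family_name.lower()
            or q in r.address.lower()
            or q in r.telephone]

# Hypothetical sample records for illustration.
records = [
    Resident("Maria", "Santos", "555-0101", "12 Boliran St."),
    Resident("Jose", "Reyes", "555-0102", "45 Payong Rd."),
]

print(search_residents(records, "santos")[0].address)  # 12 Boliran St.
```

In an actual implementation the records would come from the barangay's stored files rather than an in-memory list, but the lookup idea is the same: one query matched against every profile field.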

Statement of the Problem

This study aims to create software entitled Automated Mapping and Recording System in Barangay Pansol Proper, Quezon City; to test its overall performance level as perceived by the respondents; and to minimize the errors of misplaced or lost records caused by the manual recording process. Specifically, this study sought answers to the following questions:

A. What are the features of the Automated Mapping and Recording System and the programming tools used?

B. What is the perception of the users of the Automated Mapping and Recording System in terms of:

a. usability?
b. handling?
c. design?
d. user friendliness?

C. What is the overall performance level of the system as perceived by the respondents?

Significance of the Study
The proposed system will introduce technology to Barangay Pansol, Quezon City, which until now has used the manual method of recording. The results of this study are beneficial to the barangay, the barangay officials and staff, the residents of the barangay, and future researchers. The system will simplify and automate the barangay's everyday tasks and can help minimize errors, thereby providing better service. This will also make it easier for the barangay officials and staff to manage the barangay. The residents of the barangay will benefit greatly in terms of convenience, because whenever they need barangay permits and certificates, the personnel can simply find the identity of the resident in the records. The proposed system will also benefit other researchers who wish to conduct similar studies, as they can get background information from the results of this study, which can serve as a template for their research.

Scope and Limitation

In general, the focus of this study is directed towards the design and development of an Automated Mapping and Recording System. About 25 residents, five per street, and some barangay personnel and officials were randomly selected from the streets located in the barangay. The study is largely dependent on the honesty, sincerity, and integrity of the respondents. In this proposed system, records and files are automated for accessibility and portability. However, the proponents limit the automated features of the system to barangay officials and staff only; the system has a secure log-in for barangay officials and staff.

Definition of Terms

The following terms were operationally defined for a better understanding of the study.

Performance. This term refers to the capability or ability of a system to work along with ongoing developments.

Usability. This term refers to one of the indicators of software performance, which focuses on the functionality of the system.

Design. This term refers to one of the indicators of software performance. It is a measure used to understand the process of problem-solving and planning for a software solution.

User friendliness. This term refers to one of the indicators of software performance. It is a measure used to determine whether the software will be easy to use.

Adobe Professional. This term refers to the software used by the researchers in enhancing images.

Theoretical Framework
The study is anchored on the Mapping L.A. project in Los Angeles. This project provides maps and information about demographics, crime, and schools in 272 neighborhoods across the county. In order to standardize how areas included in Mapping L.A. are referred to, the project uses the term “neighborhood” to encompass everything from unincorporated areas to standalone cities to neighborhoods within cities. The city of Los Angeles has posted hundreds of blue street signs denoting scores of neighborhoods, from Little Ethiopia to Little Tokyo to Little Armenia, but the city has never drawn the official boundaries of those districts. The Thomas Guide shows the names of many communities but does not try to make clear where neighborhood boundaries are. Neighborhood councils within the city sometimes reflect narrow political considerations, and many have a propensity for names like People Involved in Community Organizing, which do not do much to define a community. Many areas of the city have no neighborhood council, even as prized turf such as Occidental College is claimed by more than one council. The same problems apply with even greater force to homeowner associations. ZIP Codes provide many people with a community identity but are designed only to speed up the mail. That is a nice fit in some places but unworkable in others, such as the part of Los Angeles that falls in Beverly Hills 90210. Van Nuys and North Hollywood each have four ZIP Codes, and dozens of ZIP Codes within the city are identified only as Los Angeles. Mapping L.A. follows a set of principles intended to make it visually and statistically coherent: it gathers every block of the city into reasonably compact areas, leaving no enclaves, gaps, overlaps, or ambiguities. After nearly 100 revisions, a map of 114 city neighborhoods was released in June 2009, and in June 2010 the map was expanded beyond the city to cover all of Los Angeles County.

Figure 1. Paradigm of the Study

Figure 1 illustrates the paradigm of the study. It contains the performance of the system, the features and programming tools, as well as the perceptions of the users.

The Automated Testing Handbook

The Automated Testing Handbook ... 1
About the Author ... 3
Introduction ... 3
Why automate? ... 4
When not to automate ... 8
How not to automate ... 9
Setting realistic expectations ... 10
Getting and keeping management commitment ... 15
Terminology ... 17
Fundamentals of Test Automation ... 19
  Maintainability ... 20
  Optimization ... 22
  Independence ... 23
  Modularity ... 25
  Context ... 26
  Synchronization ... 29
  Documentation ... 30
The Test Framework ... 32
  Common functions ... 32
  Standard tests ... 37
  Test templates ... 39
  Application Map ... 41
Test Library Management ... 44
  Change Control ... 44
  Version Control ... 45
  Configuration Management ... 46
Selecting a Test Automation Approach ... 48
  Capture/Playback ... 50
    Structure ... 51
    Advantages ... 52
    Disadvantages ... 52
    Comparison Considerations ... 55
    Data Considerations ... 57
  Data-Driven ... 58
    Structure ... 60
    Advantages ... 61
    Disadvantages ... 62
    Data Considerations ... 63
  Table-Driven ... 64
    Structure ... 65
    Advantages ... 66
    Disadvantages ... 69
The Test Automation Process ... 70
  The Test Team ... 70
  Test Automation Plan ... 73
Planning the Test Cycle ... 76
  Test Suite Design ... 77
  Test Cycle Design ... 79
Test Execution ... 81
  Test log ... 81
  Error log ... 84
Analyzing Results ... 85
  Inaccurate results ... 85
  Defect tracking ... 87
Test Metrics ... 88
  Management Reporting ... 95
  Historical trends ... 97

Page 2  The Automated Testing Handbook

About the Author
Linda G. Hayes is an accountant and tax lawyer who has founded three software companies, including AutoTester – developer of the first PC-based test automation tool – and Worksoft – developer of the next generation of test automation solutions. She is an award-winning author and popular speaker on software quality. She has been a columnist continuously since 1996 in publications including Computerworld, Datamation and StickyMinds, and her work has been reprinted for universities and the Auerbach Systems Handbook.
She co-edited Dare to be Excellent with Alka Jarvis on best practices and has published numerous articles on software development and testing. But most importantly she brings two decades of personal experience with thousands of people and hundreds of companies that is distilled into practical advice.

Introduction

The risk of software failure has never been greater. The estimated annual economic impact ranges from $60 billion for poor testing to $100 billion in lost revenues and increased costs. Unfortunately, market pressure for the delivery of new functionality and applications has also never been stronger. This combination creates increasing pressure on software test organizations to improve test coverage while meeting ever-shorter deadlines with static or even declining resources. The only practical means to achieving quality goals within the constraints of schedules and budgets is to automate.


Since software testing is a labor-intensive task, especially if done thoroughly, automation sounds instantly appealing. But, as with anything, there is a cost associated with getting the benefits. Automation isn’t always a good idea, and sometimes manual testing is out of the question. The key is to know what the benefits and costs really are, then to make an informed decision about what is best for your circumstances. The unfortunate fact is that many test automation projects fail, even after significant expenditures of time, money and resources. The goal of this book is to improve your chances of being among the successful.

Why automate?
The need for speed is practically the mantra of the information age. Because technology is now being used as a competitive weapon on the front lines of customer interaction, delivery schedules are subject to market pressures. Late products can lose revenue, customers, and market share. But economic pressures also demand resource and cost reductions, leading many companies to adopt automation to reduce time to market as well as to cut testing budgets.

While it might be costly to be late to market, it can be catastrophic to deliver a defective product. Software failures can cost millions or even billions, and in some cases entire companies have been lost. So if you don't have enough people or time to perform adequate testing to begin with, adding automation will not reduce software instability and errors. Since it is well documented that software errors – even a single one – can cost millions more than your entire testing budget, the first priority should be to deliver reliable software. Once that is achieved, then focus on optimizing the time and costs. In other words, if your software doesn't work, it doesn't matter how fast or cheap you deliver it.

Page 4  The Automated Testing Handbook

Automated software tests deliver three key benefits: cumulative coverage to detect errors and reduce the cost of failure, repeatability to save time and reduce the cost to market, and leverage to improve resource productivity. But realize that the test cycle will be tight to begin with, so don't count on automation to shorten it – count on it to help you meet the deadline with a reliable product. By increasing your coverage and thus reducing the probability of failure, automation can help you avoid the costs of support and rework, as well as potentially devastating costs.

Cumulative coverage

It is a fact that applications change and gain complexity over their useful life. As depicted in the figure below, the feature set of an application grows steadily over time. Therefore, the number of tests that are needed for adequate coverage is also constantly increasing.

Just a 10% code change still requires that 100% of the features be tested. That is why manual testing can’t keep up – unless you constantly increase test resources and cycle time, your test coverage will constantly decline. Automation can help this by allowing you to accumulate your test cases over the life of the application so that both existing and new features can always be tested.

Ironically, when test time is short, testers will often sacrifice regression testing in favor of testing new features. The irony is that the greatest risk to the user is in the existing features, not the new ones! If something the customer is already doing stops working – or worse, starts doing the wrong thing – then you could halt operations. The loss of a new feature may be inconvenient or even embarrassing, but it is unlikely to be devastating. But this benefit will be lost if the automated tests are not designed to be maintainable as the application changes. If they either have to be rewritten or require significant modifications to be reused, you will keep starting over instead of building on prior efforts. Therefore, it is essential to adopt an approach to test library design that supports maintainability over the life of the application.


Leverage

True leverage from automated tests comes not only from repeating a test that was captured while performed manually, but from executing tests that were never performed manually at all. For example, by generating test cases programmatically, you could yield thousands or more – when only hundreds might be possible with manual resources. Enjoying this benefit requires the proper test case and script design to allow you to take advantage of external data files and other constructs.
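The idea of generating test cases programmatically can be sketched as follows. The login form and its value classes are purely hypothetical, invented for illustration; the point is that a cross product of a few data values yields far more cases than anyone would write by hand:

```python
from itertools import product

# Equivalence classes for each input field of a hypothetical login form.
usernames = ["valid_user", "", "x" * 256]   # valid, empty, too long
passwords = ["correct", "wrong", ""]        # valid, invalid, empty

# The cross product yields 9 cases here; with more fields and more values
# per field, thousands of cases come from a few lines of data.
test_cases = [
    {"username": u, "password": p,
     "expect": "success" if u == "valid_user" and p == "correct" else "failure"}
    for u, p in product(usernames, passwords)
]

print(len(test_cases))  # 9
```

Each generated case would then be fed to the test script as an external data record, which is exactly the kind of design the data-driven approach discussed later supports.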

Faster time to market

Because software has become a competitive weapon, time to market may be one of the key drivers for a project. In some cases, time is worth more than money, especially if it means releasing a new product or service that generates revenue. Automation can help reduce time to market by allowing test execution to happen 24x7. Once the test library is automated, execution is faster and can run longer than manual testing. Of course, this benefit is only available once your tests are automated.

Reduced cost of failure

Software is used for high risk, mission critical applications that represent revenue and productivity. A single failure could cost more than the entire testing budget for the next century! In one case a single bug resulted in costs of almost $2 billion. The National Institute of Standards and Technology estimates the cost of correcting defects at $59.5 billion a year, and USA Today claims a $100 billion annual cost to the US economy. Automation can reduce the cost of failure by allowing increased coverage so that errors are uncovered before they have a chance to do real damage in production.

Notice what was NOT listed as a benefit: reduced testing resources. The sad fact is that most test teams are understaffed already, and it makes no sense to try to reduce an already slim team. Instead, focus on getting a good job done with the time and resources you have. In this Handbook we will present practical advice on how to realize these benefits while keeping your expectations realistic and your management committed.


When not to automate
The cornerstone of test automation is the premise that the expected application behavior is known. When this is not the case, it is usually better not to automate.

Unstable design

There are certain applications that are inherently unstable by design. For example, a weather-mapping system or one that relies on realtime data will not demonstrate sufficiently predictable results for automation. Unless you have a simulator that can control the inputs, automation will be difficult because the expected results are not known. Also, if you can't control the application test environment and data, then automation will be almost impossible. The investment required to develop and maintain the automated tests will not be offset by the benefits, since repeatability will be doubtful. If your application is highly configurable, for example, or has other attributes that make its design variable, then either forget automation or focus on implementing only selected configuration profiles. Whatever you do, don't try to reproduce all of the configurability of the application in the test library; otherwise you will end up with excessive complexity, a high probability of test failure, and increased maintenance costs.

Inexperienced testers

If the person(s) automating the tests are not sufficiently experienced with the application to know the expected behavior, automating their tests is also of doubtful value. Their tests may not accurately reflect the correct behavior, causing later confusion and wasted effort. Remember, an automated test is only as good as the person who created it.


If you have inexperienced testers who are new to the team, they make the best manual testers because they will likely make the same mistakes that users will. Save automation for the experts.

Temporary testers

In other cases, the test team may be comprised primarily of personnel from other areas, such as users or consultants, who will not be involved over the long term. It is not at all uncommon to have a "testfest" where other departments contribute to the test effort. But because of the initial investment in training people to use the test tools and follow your library design, and the short payback period of their brief tenure, it is probably not time or cost effective to automate with a temporary team. Again, let them provide manual test support while permanent staff handles automation.

Insufficient time, resources

If you don't have enough time or resources to get your testing done manually in the short term, don't expect a tool to help you. The initial investment for planning, training and implementation will take more time in the short term than the tool can save you. Get through the current crisis, then look at automation for the longer term. Keep in mind that automation is a strategic solution, not a short term fix.

How not to automate
Whatever you do, do not simply distribute a testing tool among your testers and expect them to automate the test process. Just as you would never automate accounting by giving a program compiler to the accounting department, neither should you attempt to automate testing by just turning a testing tool over to the test group. It is important to realize that test automation tools are really just specialized programming languages, and developing an automated test library is a development project requiring commensurate skills.

Automation is more than capture/replay

If you acquired a test tool with the idea that all you have to do is record and playback the tests, you are due for disappointment. Although it is the most commonly recognized technique, capture/replay is not the most successful approach. As discussed in a later chapter, Selecting an Automation Approach, capture and replay does not result in a test library that is robust, maintainable or transferable as changes occur.

Don’t write a program to test a program!

The other extreme from capture/replay is pure programming. But if you automate your tests by trying to write scripts that anticipate the behavior of the underlying program and provide for each potential response, you will essentially end up developing a mirror version of the application under test! Where will it end? Who tests the tests? Although appealing to some, this strategy is doomed – no one has the time or resources to develop two complete systems. Ironically, developing an automated test library that provides comprehensive coverage would require more code than exists in the application itself! This is because tests must account for positive, negative, and otherwise invalid cases for each feature or function.

Automation is more than test execution

So if it isn’t capture/replay and it isn’t pure programming, what is it? Think of it this way. You are going to build an application that automates your testing, which is actually more than just running the tests. You need a complete process and environment for creating and documenting tests, managing and maintaining them, executing them and reporting the results, as well as managing the test environment. Just developing scores of individual tests does not comprise a strategic test automation system.

Duplication of effort

The problem is, if you just hand an automation tool out to individual testers and command that they automate their tests, each one of them will address all of these issues – in their own unique and personal way, of course. This leads to tremendous duplication of effort and can cause conflict when the tests are combined, as they must be.

Need for a framework

Instead, approach the automation of testing just as you would the automation of any application – with an overall framework and an orderly division of the responsibilities. This framework should make the test environment efficient to develop, manage and maintain. How to develop a framework and select the best automation approach are the focus of this handbook.

Remember, test tools aren’t magic – but, properly implemented, they can work wonders!

Setting realistic expectations
All too often, automated testing tools are expected to save the day by making up for too little time, resources, or expertise. Unfortunately, when these expectations are inevitably disappointed, automation or the tool itself gets a bad name. Before any effort can be deemed a success, realistic expectations must be set up front.


There are three important things to remember when setting expectations about test automation: one, an initial as well as ongoing investment in planning, training and development must be made before any benefits are possible; two, the time savings come only when automated tests can be executed more than once, by more than one person, and without undue maintenance requirements; three, no tool can compensate for the lack of expertise in the test process.

Test automation is strategic

If your test process is in crisis and management wants to throw money at a tool to fix it, don't fall for it. Test automation is a long term, strategic solution, not a short term band-aid. Buying a test tool is like joining a health club: the only weight you have lost is in your wallet! You must use the club, sweat it out and invest the time and effort before you can get the benefits.

Use consultants wisely

Along the same lines, be wary about expecting outside consultants to solve your problems. Although consultants can save you time by bringing experience to bear, they are not in and of themselves a solution. Think of consultants as you would a personal trainer: they are there to guide you through your exercises, not to do them for you! Paying someone else to do your sit-ups for you will not flatten your stomach.

Here's a good rule of thumb to follow when setting expectations for a test tool. Calculate what your existing manual test iteration requires, then multiply by five (5) for a text-based user interface or ten (10) for a GUI – GUI interfaces have inherently more complexity than text interfaces – then add the time scheduled for training and planning. This will approximate the time it will take to properly automate your manual tests. So, if it takes you two weeks to execute one iteration of tests manually, plan for ten to twenty weeks after training and planning are complete to get through your first automated iteration. From there on out, though, you can cut each iteration in half or more. Naturally, these are only approximations and your results may be different. For intensive manual test processes of stable applications, you may see an even faster payback.

Not everything can be automated

But remember, you must still allow time for tasks that can't be automated – you will still need to gather test requirements, define test cases, maintain your test library, administer the test environment, and review and analyze the test results. On an ongoing basis you will also need time to add new test cases based on enhancements or defects, so that your coverage can constantly be improving.

Accept gradual progress

If you can't afford the time in the short term, then do your automation gradually. Target those areas where you will get the biggest payback first, then reinvest the time savings in additional areas until you get it all automated. Some progress is better than none!

Plan to keep staff

As pointed out earlier, don't plan to jettison the majority of your testing staff just because you have a tool. In most cases, you don't have enough testers to begin with: automation can help the staff you have be more productive, but it can't work miracles. Granted, you may be able to reduce your dependence on temporary assistance from other departments or from contractors, but justifying testing tools based on reducing staffing requirements is risky, and it misses the point. The primary goal of automation should be to increase test coverage, not to cut testing costs. A single failure in some systems can cost more than the entire testing budget for the next millennium. The goal is not to trim an already slim testing staff; it is to reduce the risk and cost of software failure by expanding coverage.
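The rule of thumb described earlier – multiply the manual iteration time by five for a text-based interface or ten for a GUI, then add training and planning – can be expressed as a small calculation. The function itself is just an illustration; only the multipliers come from the text:

```python
def automation_estimate(manual_weeks, gui=True, training_weeks=0):
    """Approximate weeks needed to automate a manual test iteration.

    Multiplies the manual iteration time by 5 for a text-based UI
    or 10 for a GUI, then adds training and planning time.
    """
    factor = 10 if gui else 5
    return manual_weeks * factor + training_weeks

# Two weeks of manual testing against a GUI:
print(automation_estimate(2, gui=True))   # 20
print(automation_estimate(2, gui=False))  # 10
```

This matches the worked example in the text: a two-week manual iteration against a GUI lands at the upper end of the ten-to-twenty-week range before training and planning are added.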


Reinvest time savings

As your test automation starts to reap returns in the form of time savings, don’t automatically start shaving the schedule. The odds are that there are other types of tests that you never had time for before, such as configuration and stress testing. If you can free up room in the schedule, look for ways to test at high volumes of users and transactions, or consider testing different platform configurations. Testing is never over!

When setting expectations, ask yourself this question: Am I satisfied with everything about our existing test process, except for the amount of time it takes to perform manually? If the answer is yes, then automation will probably deliver like a dream. But if the answer is no, then realize that while automation can offer great improvements, it is not a panacea for all quality and testing problems.

The most important thing to remember about setting expectations is that you will be measured by them. If you promise management that a testing tool will cut your testing costs in half, yet you only succeed in saving a fourth, you will have failed! So take a more conservative approach: be up front about the initial investment that is required, and offer cautious estimates about future savings. In many cases, management can be satisfied with far less than you might be. For example, even if you only break even between the cost to automate and the related savings in direct costs, if you can show increased test coverage then there will be a savings in indirect costs as a result of improved quality. In many companies, better quality is more important than lower testing costs, because of the savings in other areas: failures can impact revenues, drive up support and development costs, and reduce customer confidence.


Getting and keeping management commitment
There are three types of management commitment needed for successful test automation: money, time and resources. And it is just as important to keep commitment as it is to get it in the first place! Keep in mind that test automation is a project that will continue for the life of the application under test.

Commit money

Acquiring a test automation tool involves spending money for software, training and perhaps consulting. It is easier to get money allocated all at once instead of piecemeal, so be careful not to buy the software first and then decide later that you need training or additional services. Although the tool itself may be advertised as "easy to use", this is different from "easy to implement". A hammer is easy to swing, but carpentry takes skill.

Do a pilot

Just because the money is allocated all at once, don't spend it that way! If this is your first time to automate, do a small pilot project to test your assumptions and prove the concept. Ideally, a pilot should involve a representative subset of your application and have a narrow enough scope that it can be completed in 2-4 weeks. Take the time to carefully document the resource investment during the pilot as well as the benefits, as these results can be used to estimate a larger implementation. Since you can be sure you don't know what you don't know, it is better to learn your lessons on a small scale. You don't learn to drive on a freeway!

Commit time

All too often tools are purchased with the expectation that the acquisition itself achieves automation, so disappointment sets in when results aren't promptly forthcoming. It is essential to educate management about the amount of time it takes to realize the benefits, but be careful about estimating the required time based on marketing literature: every organization and application is different. A pilot project can establish a sound basis for projecting a full scale rollout. When you ask for time, be clear about what will be accomplished and how it will be measured.

Commit resources

Remember that even though test automation saves resources in the long run, in the short term it will require more than a manual process. Make sure management understands this, or you may find yourself with a tool and no one to implement it. Also be sure to commit the right type of resources. As further described in the Test Team section of this Handbook, you will need a mix of skills that may or may not be part of your existing test group. Don't imagine that having a tool means you can get by with less skill or experience: the truth is exactly the opposite.

Track progress

Even though benefits most likely won't be realized for several months, it is important to show incremental progress on a regular basis – monthly at the least. Progress can be measured in a number of ways: team members trained on the tool, development of the test plan, test requirements identified, test cases created, test cases executed, defects uncovered, and so forth. Identify the activities associated with your test plan, track them and report them to management regularly. Nothing is more disconcerting than to wait for weeks or months with no word at all. Also, if you run up against obstacles, it is critical to let management know right away. Get bad news out as early as possible and good news out as soon as you can back it up.


Adjust as you go

If one of your assumptions changes, adjust the schedule and expectations accordingly and let management know right away. For example, if the application is not ready when expected, or if you lose resources, recast your original estimates and inform everyone concerned. Don’t wait until you are going to be late to start explaining why. No one likes surprises!

Plan for the long term

Be sure to keep focus on the fact that the test automation project will last as long as the application under test is being maintained. Achieving automation is not a sprint, it is a long distance run. Just as you are never through developing an application that is being actively used, the same applies to the test library.

In order for management to manage, they must know where things stand and what to expect. By letting them know up front what is needed, then keeping them
informed every step of the way, you can get their commitment and keep it.

Terminology

Throughout this Handbook we will be investing certain terms with specific meanings.

Requirement

A required feature or function of the application under test. A business requirement is a statement of function that is necessary for the application to meet its intended use by the customer: the "what" of the system. A design feature is an attribute of the way in which the functions are actually implemented: the "how" of the system. A performance requirement spells out the volume and speed of the application, such as the maximum acceptable response or processing time and the highest number of simultaneous users.

Test

This term will be used to describe the combination of a test case and a test script, as defined below.

Introduction  Page 17

Test Case

A test case is a set of inputs and expected application response that will confirm that a requirement has been met. Depending on the automation approach adopted, a test case may be stored as one or more data records, or may be stored within a test script.

Test Script

A test script is a series of commands or events stored in a script language file that execute a test case and report the results. Like a program, a test script may contain logical decisions that affect the execution of the script, creating multiple possible pathways. Also, depending on the automation approach adopted, it may contain constant values or variables whose values change during playback. The automation approach will also dictate the degree of technical proficiency required to develop the test script.
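A test script defined this way is just a small program. As a hedged sketch of that structure (the Application class and its methods are invented stand-ins, not any particular tool's API), a script that applies an input, makes a logical decision and reports a result might look like:

```python
class Application:
    """Minimal fake application used only to illustrate script structure."""
    def __init__(self):
        self.records = []

    def add_record(self, name):
        self.records.append(name)
        return "OK"

    def current_window(self):
        return "MAIN_MENU"

def run_add_record_test(app, record_name):
    """A test script: commands, a logical decision, and a reported result."""
    status = app.add_record(record_name)   # apply the test case input
    if status == "OK":                     # logical decision on the response
        result = "PASS"
    else:
        result = "FAIL"
    return result, app.current_window()    # report result plus ending context

app = Application()
result, context = run_add_record_test(app, "Smith")  # "Smith" is variable data
```

The `record_name` parameter illustrates the point about variables whose values change during playback: the same script can replay many test cases.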

Test Cycle

A test cycle is a set of individual tests that are executed as a package, in a particular sequence. Cycles are usually related to application operating cycles, or by the area of the application they exercise, or by their priority or content. For example, you may have a build verification cycle that is used to establish acceptance of a new software build, as well as a regression cycle to assure that previous functionality has not been disrupted by changes or new features.

Test Schedule

A test schedule consists of a series of test cycles and comprises a complete execution set, from the initial setup of the test environment through reporting and cleanup.
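The cycle and schedule concepts can be sketched in a few lines of code; the names, the lambda-based sample tests, and the setup/cleanup hooks are illustrative only:

```python
def run_cycle(cycle_name, tests):
    """Execute one cycle: an ordered package of individual named tests."""
    return [(cycle_name, name, fn()) for name, fn in tests]

def run_schedule(cycles, setup, cleanup):
    """A schedule: environment setup, a series of cycles, then cleanup."""
    setup()
    results = []
    for cycle_name, tests in cycles:
        results.extend(run_cycle(cycle_name, tests))
    cleanup()
    return results

events = []
cycles = [
    ("build_verification", [("smoke", lambda: "PASS")]),
    ("regression", [("add", lambda: "PASS"), ("delete", lambda: "FAIL")]),
]
results = run_schedule(cycles,
                       setup=lambda: events.append("setup"),
                       cleanup=lambda: events.append("cleanup"))
```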


Fundamentals of Test Automation
It is a mistake to assume that test automation is simply the capture and replay of a manual test process. In fact, automation is fundamentally different from manual testing: there are completely different issues and opportunities. And even the best automation will never completely replace manual testing, because automation is about predictability and users are inherently unpredictable. So, use automation to verify what you expect, and use manual testing to explore what you don’t. Your chances of success with automation will improve if you understand the fundamentals of test automation.

Test process must be well-defined

A key consideration is that you cannot automate a process that is not already well-defined. A fully manual process may not have the formality or documentation necessary to support a well-designed automation library. However, defining a complete test process is outside the scope of this handbook; entire books have been written about software testing. For our purposes, we will assume that you know what needs to be tested.

Testware is software

But even when the test process is reasonably well-defined, automation is still a challenge. The purpose of this handbook is to bridge the gap between what should be tested and how it should be automated. This begins by laying out certain fundamental principles that must be understood before success is possible. All of these principles can be summarized in one basic premise: testware is software!

Test automation is two different things

As odd as it sounds, test automation is really two different things. There is testing, which is one discipline, and automation, which is another. Automating software testing is no different than automating accounting or any other business function: in each case, a computer is being instructed to perform a task previously performed manually. Whether these instructions are stored in something called a script or a program, they both have all of the characteristics of source code.

Testing: application expertise, deciding what to test, expressed as test cases.

Automation: development expertise, deciding how to automate, expressed as test scripts.

The fact that testware is software is the single most important concept to grasp! Once this premise is understood, others follow.

Just as application software must be designed in order to be maintainable over its useful life, so must your automated tests.

Applications are maintained continuously

One reason maintainability is so important is that without it you cannot accumulate tests. On average, 25% of an application is rewritten each year; if the tests associated with the modified portions cannot be changed with a reasonable amount of effort, they will become obsolete. Therefore, instead of gradually improving your test coverage over time by accumulating more and more test cases, you will be discarding and recreating tests instead. Since each new version of the application most likely has increasing functionality, you will be lucky to stay even!

Changes must be known in advance

It is also important to know where and how to make changes to the test library in advance. Watching tests execute in hopes of finding application changes in the form of errors is not only extremely inefficient, it brings the validity of test results and metrics into question. A failed test may in fact be a correct result! If a person must watch the test to determine the results, then the test is not truly automated.



In most cases, the application source code will be managed by a source control or configuration management system. These systems maintain detailed change logs that document areas of change to the source code. If you can’t get information directly from development about changes to the application, ask to be copied on the change log. This will at least give you an early warning that changes are coming your way and which modules are affected.

Cross-reference tests to the application

Identifying needed changes is accomplished by cross-referencing testware components to the application under test, using consistent naming standards and conventions. For example, by using a consistent name for the same window throughout the test library, when it changes, each test case and test script which refers to it can be easily located and evaluated for potential modifications. These names and their usage are described more fully in the section on the Application Map.

Design to avoid regression

Maintainability is achieved not only by assuring that changes can be easily identified and made, but also that they do not have an unexpected impact on other areas. Unexpected impact can occur as a consequence of poor test
design or implementation. For example, a test script that selects an item from a list box based on its relative position in the list is subject to failure if the order or number of items in the list changes. In this case, a maintainable test script would be designed to enter the selected item or select it based on its text value. This type of capability may be limited by your test tool; if you are evaluating tools, look for those that use object-based commands (“select list box item XYZ”) instead of coordinate-based events (“click window @ 451,687”).
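The fragility of position-based selection can be illustrated with a toy model; the ListBox class below is invented for illustration and is not a real GUI control:

```python
class ListBox:
    """Toy list box used to contrast selection styles; not a real control."""
    def __init__(self, items):
        self.items = items
        self.selected = None

    def select_by_index(self, i):      # coordinate/position-style selection
        self.selected = self.items[i]

    def select_by_text(self, text):    # object-based selection by value
        self.selected = self.items[self.items.index(text)]

box_v1 = ListBox(["Apple", "Banana", "Cherry"])
box_v1.select_by_index(1)              # intends "Banana" and gets it

# A new release inserts an item at the top of the list.
box_v2 = ListBox(["Apricot", "Apple", "Banana", "Cherry"])
box_v2.select_by_index(1)
wrong_pick = box_v2.selected           # silently selects "Apple" instead
box_v2.select_by_text("Banana")
right_pick = box_v2.selected           # still correct after the change
```

The position-based script keeps "passing" in the sense that it still executes, which is exactly why this kind of failure is hard to spot.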

Fundamentals of Test Automation  Page 21

Maintainability can be designed into your test cases and scripts by adopting and adhering to an overall test framework, discussed in the next section.

When designing your tests, remember that more is not always better. The more tests you have, the more time it will take to develop, execute and maintain them. Optimization is important to be sure you have enough tests to do the job without having too many to manage.

One test, one requirement

Well-designed tests should not roam across the entire application, accessing a wide variety of areas and functions. Ideally, each test should be designed to map to a specific business or design requirement, or to a previously reported defect. This allows tests to be executed selectively when needed; for example, to confirm whether a defect has been corrected.

Having no requirements is no excuse

If you don’t have formally defined requirements, derive them from the tests you are going to perform instead of the other way around, but don’t just ignore them altogether. Examine each test and decide what feature or function it verifies, then state that as a requirement. This is important because you must know which test cases are affected if an application requirement changes; it is simply not practical to review every test case to see whether it remains valid.

Understanding test results

Another reason to specify as precisely as possible what each test case covers is that, if the test case fails, it reduces the level of diagnostics required to understand the error. A lengthy, involved test case that covers multiple features or functions may fail for any number of reasons; the time it takes to analyze a failure is directly related to the complexity of the test case itself. A crisp tie-in between requirements and test cases will quickly indicate the type and severity of the failure.


Requirements measure readiness

Once you have them, requirements can be assigned priorities and used to measure readiness for release. Having requirements tied to tests also reduces confusion about which requirements have been satisfied or failed based on the results of the test, thus simplifying the test and error log reports. Unless you know what requirements have been proven, you don’t really know whether the application is suitable for release.

A requirements matrix is a handy way of keeping track of which requirements have an associated test. A requirement that has too many tests may be too broadly defined, and should be broken down into separate instances, or it may simply have more tests than are needed to get the job done. Conversely, a test that is associated with too many requirements may be too complex and should be broken down into smaller, separate tests that are more targeted to specific requirements.
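A minimal requirements matrix can be kept as a simple mapping. In the sketch below, the requirement IDs, test IDs and the over-coverage threshold are all invented for illustration; the point is that both gaps and overloaded requirements fall out of the same structure:

```python
# Map each requirement to the tests that cover it (IDs are invented).
coverage = {
    "REQ-001": ["TC-ADD-01"],
    "REQ-002": ["TC-DEL-01", "TC-DEL-02", "TC-DEL-03", "TC-DEL-04"],
    "REQ-003": [],                        # no associated test: a coverage gap
}

# Beyond this many tests, a requirement may be too broadly defined,
# or it may simply have redundant tests. The threshold is arbitrary.
MAX_TESTS_PER_REQUIREMENT = 3

untested = sorted(r for r, tests in coverage.items() if not tests)
overloaded = sorted(r for r, tests in coverage.items()
                    if len(tests) > MAX_TESTS_PER_REQUIREMENT)
```

Inverting the same mapping (test to requirements) would flag tests that are associated with too many requirements, the other imbalance described above.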

There are tools available that will generate test cases based on your requirements. There are two primary approaches: one that is based on addressing all possible combinations, and one that is based on addressing the minimum possible combinations. Using the former method, requirements are easier to define because interdependencies are not as critical, but the number of tests generated is greater. The latter method produces fewer tests, but requires a more sophisticated means of defining requirements so that relationships among them are stated with the mathematical precision needed to optimize the number of tests.


Fundamentals of Test Automation  Page 23

Independence refers to the degree to which each test case stands alone. That is, does the success or failure of one test case depend on another, and if so, what is the impact of the sequence of execution? This is an issue because it may be necessary or desirable to execute less than all of the test cases within a given execution cycle; if dependencies exist, then planning the order of execution becomes more complex.

Independent data

Independence is most easily accomplished if each test case verifies at least one feature or function by itself, without reference to other tests. This can be a problem where the state of the data is key to the test. For example, a test case that exercises the delete capability for a record in a file should not depend on a previous test case that creates the record; otherwise, if the previous test is not executed, or fails to execute properly, the later test will also fail because the record will not be available for deletion. In this case, either the beginning state of the database should contain the necessary record, or the test that deletes the record should first add it.

Independent context

Independence is also needed where application context is concerned. For example, one test is expected to commence at a particular location, but it relies on a previous test to navigate through the application to that point. Again, if the first test is not successfully executed, the second test could fail for the wrong reason. Your test framework should give consideration to selecting common entry and exit points to areas of the application, and assuring that related tests begin and end at one of them.

Result independence

It is also risky for one test case to depend on the successful result of another. For example, a test case that does not expect an error message should provide assurance that, in fact, no message was issued. If one is found, steps should be added to clear the message. Otherwise, the next test case may expect the application to be ready for input when in fact it is in an error status.


If proper attention is paid to independence, the test execution cycle will be greatly simplified. In those cases where total independence is not possible or desirable, be certain that the dependencies are well documented; the sequence, for example, might be incorporated into the naming conventions for test cases (ADD RECORD 01, ADD RECORD 02, etc.).

Modularity in this context refers to test scripts, whereas independence refers to test cases. Given that your test library will include a number of scripts that together make up an automated test environment, modularity means scripts that can be efficiently assembled to produce a unified system without redundancy or omission.

Tie script design to application design

Ideally, the test scripts should be comprised of modules that correspond to the structure of the application itself, so that when a change is made to the application, script changes are as localized as possible. Depending on the automation approach selected, this may require separate scripts for each window, for example, or for each method of interacting with a control. But modularity should not be taken to an extreme: scripts should not be broken down so minutely that they lose all individual meaning. That raises the same issue that lengthy, convoluted scripts do: where should changes be made?

Identify common scripts

Modularity also means that common functions needed by all tests should not be duplicated within each individual script; instead, they should be shared as part of the overall test environment. Suggested common routines are described further in the Test Framework chapter.

Fundamentals of Test Automation  Page 25

As described earlier, context refers to the state of the application during test playback. Because an automated test is executing at the same time the application is, it is critical that they remain synchronized. Synchronization takes two forms: one, assuring that the application is in fact located at the point where the test expects to be; and two, assuring the test does not run ahead of the application while it is waiting or processing. We will cover the second type in the next section, Synchronization.

Context controls results

Because tests are performing inputs and verifying outputs, it is imperative that the inputs be applied at the proper location in the application, and that the outputs appear where expected. Otherwise, the test will report an incorrect result. Also, when multiple tests run one after the other, the result from one test can affect the next. If one test begins at the main menu and ends at a sub-menu, the following test must either expect to begin at the sub-menu or risk failure. Similarly, if a test which expects to complete at the main menu instead fails and aborts within a window, the next test will most likely begin out of context.


The Main menu approach

The simplest solution to beginning and ending context is to design all tests to begin and end at the same point in the application. This point must be one from which any area of the application can be accessed. In most cases, this will be the main menu or SIGNON area. By designing every test so that it commences at this point and ends there, tests can be executed in any order without considering context.

Enabling error recovery

Adopting a standard starting and ending context also simplifies recovery from unexpected results. A test which fails can, after logging its error, call a common recovery function to return context to the proper location so that the next test can be executed. Granted, some applications are so complex that a single point of context may make each individual test too long; in these cases, you may adopt several, such as sub-menus or other intermediate points. But be aware that your recovery function will become more complex, as it must have sufficient logic to know which context is appropriate. Designing test suites, or combinations of tests, will also be more complex as consideration must be given to grouping tests which share common contexts.
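A recovery routine of this kind might be sketched as follows; the window names and the `navigate`/`restart` hooks are hypothetical, standing in for tool-specific playback commands:

```python
# Known contexts: points from which any subsequent test can proceed.
KNOWN_CONTEXTS = {"MAIN_MENU", "ORDERS_MENU"}

def recover(current_window, navigate, restart):
    """Return context to a known location so the next test can be executed."""
    if current_window in KNOWN_CONTEXTS:
        return current_window           # already at a safe point
    target = navigate(current_window)   # try in-application navigation first
    if target in KNOWN_CONTEXTS:
        return target
    return restart()                    # last resort: terminate and re-SIGNON

reached = recover("EDIT_DIALOG",
                  navigate=lambda w: "MAIN_MENU",
                  restart=lambda: "MAIN_MENU")
```

With several known contexts, as the text notes, the routine needs the extra logic shown above to decide which context is appropriate rather than always returning to a single point.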

Fundamentals of Test Automation  Page 27

The key to context is to remember that your automated tests do not have the advantage that you have as a manual tester: they cannot make judgment calls about what to do next. Without consistency or logic to guide them, automated tests are susceptible to the slightest aberration. By proper test design, you can minimize the impact of one failed test on others, and simplify the considerations when combining tests into suites and cycles for execution.


Synchronization between the test and the application requires that they execute at the same rate. Because different conditions may exist at the time of playback than existed when the test was created, precise timing coincidence may not be possible. For example, if heavier system traffic increases processing time, the application may respond more slowly than it did previously. If the test does not have a means of compensating for fluctuating application speed, it may report a failure when the result simply does not appear in the expected time frame, or it may issue input when the application is not ready to receive it. Synchronization is complicated when there are multiple platforms involved: methods for synchronizing with a local application are different from those for synchronizing with a remote host or network server. In any case, synchronization can affect the results of your automated tests and must be accounted for.

Global indicators

Some test tools compensate for local synchronization by waiting for the application to cease processing. In Windows applications, for example, this may take the form of waiting while the hourglass cursor is displayed. In other cases, it may require that the tool check that all application activity has ceased. Unfortunately, neither method is infallible: not all applications use the hourglass cursor consistently, and some conduct constant polling activities that never indicate a steady state. Verify your tool’s synchronization ability against a subset of your application under varying circumstances before developing large volumes of tests that may later require rework.

Local indicators

Other tools automatically insert wait states between windows or even controls, causing the test script to suspend playback until the proper window or control is displayed. This method is more reliable, as it does not rely on global behavior that may not be consistent.

Fundamentals of Test Automation  Page 29

However, this approach also requires that some form of timeout processing be available; otherwise, a failed response may cause playback to suspend indefinitely.

Remote indicators

When a remote host or network server is involved, there is yet another dimension of synchronization. For example, the local application may send a data request to the host; while it is waiting, the application is not “busy”, thus risking a false indication that it has completed its response or is ready for input. In this case, the tool may provide protocol-specific drivers, such as IBM 3270 or 5250 emulation, which monitor the host status directly through HLLAPI (high-level language application program interface). If your tool does not provide this, you may have to modify your scripts to detect application readiness through more specific means, such as waiting for data to appear.

Synchronization is one of the issues that is unique to automated testing. A person performing a manual test instinctively waits for the application to respond or become ready before proceeding. With automated tests, you need techniques that make this decision consistently across a wide variety of situations.
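A timeout-protected wait is one way to avoid suspending playback indefinitely. This sketch polls a readiness condition, which is a placeholder for a real "window displayed" or "data appeared" check:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll condition() until it returns true or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False    # timed out: report failure instead of hanging playback

# Example: a simulated response that only becomes ready after a few polls.
state = {"polls": 0}
def response_ready():
    state["polls"] += 1
    return state["polls"] >= 3

ok = wait_until(response_ready, timeout=2.0)
```

Using a monotonic clock rather than wall-clock time keeps the deadline stable even if the system clock is adjusted during the run.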

Documentation of the testware means that, in a crunch, the test library could be executed manually. This may take the form of extensive comments sprinkled throughout the test cases or scripts, or of narrative descriptions stored either within the tests or in separate documentation files. Based on the automation approach selected, the form and location of the documentation may vary.


Document for transferability

It may not be evident from reading an undocumented capture/playback script, for example, that a new window is expected to appear at a certain point; the script may simply indicate that a mouse click is performed at a certain location. Only the person who created the script will know what was expected; anyone else attempting to execute the script may not understand what went wrong if the window does not appear and subsequent actions are out of context. So, without adequate documentation, transferability from one tester to another is limited.

Mystery tests accumulate

Ironically, mystery tests tend to accumulate: if you don’t know what a test script does or why, you will be reluctant to delete it! This leads to large volumes of tests that aren’t used, but nevertheless require storage, management and maintenance. Always provide enough documentation to tell what the test is expected to do.

More is better

Unlike some test library elements, the more documentation, the better! Assume as little knowledge as possible, and provide as much information as you can think of.

Document in context

The best documentation is inside the test itself, in the form of comments or description, so that it follows the test and explains it in context. Even during capture/playback recording, some test tools allow comments to be inserted. If this option is not available, then add documentation to test data files or even just on paper.

Fundamentals of Test Automation  Page 31

The Test Framework
The test framework is like an application architecture: it outlines the overall structure for the automated test environment, defines common functions and standard tests, provides templates for test structure, and spells out the ground rules for how tests are named, documented and managed, leading to a maintainable and transferable test library. The need for a well-defined and well-designed framework is especially great in testing. For an application, you can at least assume that the developer has a basic understanding of software design and development principles; for automated tests, the odds are high that the tester does not have a technical background and is not aware of, much less well-versed in, structured development techniques. The test framework presented in this chapter can be applied to any of the automation approaches described in this Handbook. The only difference between one approach and another is in how the individual test cases and scripts are structured. By adopting a framework, you can enjoy the efficiency that comes from sharing common functions, and the effectiveness that standard tests and templates provide.

Common functions
Common functions are those routines which automate tasks that are shared throughout the entire test library. Some functions may be shared by all tests, such as routines which recover from unexpected errors, log results and other similar tasks. These functions should usually be structured as subroutines, which means that they can be called from any point and return to the next step after the one which called them.


Other types of common functions are utility scripts: for example, refreshing the database or populating it with a known set of records, deleting temporary work files, or otherwise managing the test environment. Clearly defining and sharing these routines will reduce and simplify testware development and maintenance. These scripts should be structured so that they can be executed stand-alone, or linked together sequentially as part of an
integrated test cycle.

The Test Framework  Page 33

Following are suggested common functions:

SETUP

The SETUP function prepares the test environment for execution. It is executed at the beginning of each test cycle in order to verify that the proper configuration is present, the correct application version is installed, all necessary files are available, and all temporary or work files are deleted. It may also perform housekeeping tasks, such as making backups of permanent files so that later recovery is possible in the event of a failure that corrupts the environment. If necessary, it may also initialize data values, or even invoke sorts that improve database performance. Basically, SETUP means what it says: it performs the setup of the test environment. It should be designed to start and end at a known point, such as the program manager or the command prompt.

SIGNON

The SIGNON function loads the application and assures that it is available for execution. It may provide for the prompting of the user ID and password necessary to access the application from the point at which the SETUP routine ends, then operate the application to another known point, such as the main menu area. It may also be used to start the timer in order to measure the entire duration of the test cycle. SIGNON should be executed after SETUP at the beginning of each test execution cycle, but it may also be called as part of a recovery sequence in the event a test failure requires that the application be terminated and restarted.

DRIVER

The DRIVER function is one which calls a series of tests together as a suite or cycle. Some test tools provide this capability, but if yours does not you should plan to develop this function. Ideally, this function relies upon a data file or other means of storing the list of tests to be executed and their sequence; if not, there may be a separately developed and named DRIVER function for each test suite.

Remember, if you are using a DRIVER, to design each individual test to return to the DRIVER function when it ends, so that the next test can be called.
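A data-driven DRIVER might be sketched as below; the suite list stands in for the data file the text describes, and the sample tests and their names are invented:

```python
def driver(suite, registry, log):
    """Call each named test in sequence; each test returns control here."""
    for test_name in suite:        # the suite list stands in for a data file
        result = registry[test_name]()
        log.append((test_name, result))

def test_signon():
    return "PASS"

def test_add():
    return "PASS"

def test_delete():
    return "FAIL"

registry = {"SIGNON": test_signon, "ADD": test_add, "DELETE": test_delete}
log = []
driver(["SIGNON", "ADD", "DELETE"], registry, log)
```

Because the ordering lives in data rather than code, a different suite is just a different list, not a separately developed DRIVER.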
MONITOR

The MONITOR function may be called after each transaction is submitted, or at other regular intervals, in order to check the status of the system. For host-based applications, this may be the status line; for networked applications, this may be the area in which system messages are broadcast. The purpose of this script is to check for asynchronous messages or events – those which are not expected but which may nevertheless occur. Because result comparison is usually based on what is expected, some manner of checking for the unexpected is necessary; otherwise, host or network failures or warnings may go undetected.

RECOVER

The RECOVER function is most often called by the LOGERROR function, but in fact may be called by any script that loses context during playback. Instead of simply aborting test execution altogether, or blindly continuing to execute and generating even more errors, a routine like RECOVER can be used to attempt to restore context to a known location so that subsequent tests in the suite can be executed. This may include navigating through the application to reach a predefined point, such as the main menu, or terminating the application and restarting it. In the latter event, the RECOVER routine may also call the SIGNON script to reload the application. For instances where the steps to recover are not standard throughout the application and human intervention is needed, it may be helpful to insert an audible alarm of some type, or to halt playback and display a message, alerting the test operator that assistance is needed. If correctly designed, this intervention can be provided without interfering with continuation of the test cycle. For example, the displayed message might instruct the operator to suspend playback, return context to a particular window, then resume.

SIGNOFF

The SIGNOFF routine is the sibling script to SIGNON. It terminates the application and returns the system to a known point, such as the program manager or command prompt. It should be used at the end of the last test suite, before other shared routines such as CLEANUP are executed. SIGNOFF may also stop the test cycle timer, thus providing a measure of how long the entire cycle required for execution.

LOGTEST

The LOGTEST function is called at the end of each test case or script in order to log the results for the component just executed. This routine may report not only pass/fail status, but also elapsed time and other measurements. Results may be written to a text file, to the clipboard, or to any other medium that can later be used to derive reports. A test logging function may already be integrated into your test tool; if not, develop a function to provide it.

LOGERROR

The LOGERROR function is called by any test that fails. Its primary purpose is to collect as much information as possible about the state of the application at the time of the error, such as the actual context versus the expected; more sophisticated versions may invoke stack or memory dumps for later diagnostics. A secondary purpose may be to call the RECOVER function, so that context can be restored for the next test.

CLEANUP

The CLEANUP function is the sibling script to SETUP. It begins at a selected point, such as the program manager or command prompt, and it does what its name implies: it cleans up the test environment. This may include deleting temporary or work files, making backups of result files, and otherwise assuring that the test environment does not accumulate any detritus left behind by the test execution process. A properly designed CLEANUP routine will keep your test environment organized and efficient.

By designing your test framework to include common functions, you can prevent the redundancy that arises when each individual tester attempts to address the same issues. You can also promote the consistency and structure that provides maintainability.
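The LOGTEST and LOGERROR routines might cooperate as in this sketch; the field names and the JSON error detail are illustrative choices, not prescribed by any tool:

```python
import json

results = []   # stands in for a results file or the test tool's log

def logtest(name, status, elapsed):
    """LOGTEST: record the outcome of the component just executed."""
    results.append({"test": name, "status": status, "elapsed": elapsed})

def logerror(name, expected, actual):
    """LOGERROR: capture state at the time of failure, then log a FAIL."""
    detail = {"test": name, "expected": expected, "actual": actual}
    logtest(name, "FAIL", 0.0)
    return json.dumps(detail)   # e.g. written to a separate error log

logtest("ADD_RECORD", "PASS", 1.2)
err = logerror("DELETE_RECORD", "MAIN_MENU", "ERROR_DIALOG")
```

Recording expected versus actual context at failure time is what later lets RECOVER, or a human, work out where playback ended up.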

Standard tests
The concept of common functions can be extended even further when you consider standard tests. A common function is shared among tests; a standard test is shared among test suites, cycles or even applications. Certain types of standard tests might also be shared with the development or production support groups. For example, each test library might include a standard test that performs a complete walkthrough of the application, down each menu branch and through each window and control. Although this type of test could
be shared by all testers, it need not be developed by all of them. These tests should be structured just as any other test, so that they can be executed stand-alone or as part of a complete test suite.

The Test Framework  Page 37


WALKTHRU

As described above, the WALKTHRU standard test navigates through the application, assuring that each menu item, window and control is present and in the expected default state. It is useful for establishing that a working copy of the application has been installed and that there are no major obstacles to executing functional tests. Each test execution cycle can take advantage of this standard test in order to assure that fatal operational errors are uncovered before time and effort are expended on more detailed tests. This type of test could be executed by the development group after the system build, before the application is delivered for testing, or by the production support group after the application has been promoted into the production environment.


STANDARDS

The STANDARDS test is one which verifies that application design standards are met for a given component. While the WALKTHRU test assures that every menu item, window and control is present and accounted for, the STANDARDS test verifies that previously agreed-upon standards have been satisfied.

Page 38  The Automated Testing Handbook

For example, it may be a design criterion that every window have a maximize and minimize button, vertical and horizontal scroll bars, and both an OK and a CANCEL push button. It might also verify standard key behaviors, such as using the ESC key to cancel. This type of test could be executed by developers against each individual window as part of the unit test phase.

TESTHELP

The TESTHELP standard test, like the STANDARDS test, is one which might be useful on every screen or window. It assures that the help function is available at all points in the application. Depending on whether the help function is context-sensitive or not, this test may require additional logic to verify the correct response.

If your application has similar functionality that is common to multiple areas of the application, you may consider developing standard tests specific to those functions. It is well worth the time to think through the particulars of testing your application in order to identify those tests that are widely applicable. By developing as many common and standard functions and tests as possible, you can streamline and standardize your test library development.

Test templates
A test template provides the structure for the development of individual tests. It may be used to speed development by allowing a single format to be quickly copied and filled in, saving time for new tests and promoting consistency. Although naming conventions for tests and their contents are important, and are more fully described in the next section on the Application Map, it is also important that each individual test follow a common structure so that it can be easily linked into the test framework.


For example, tests which are expected to be called as subroutines and shared with other tests must be developed in order to permit a return to the calling test; likewise, tests which are to be executed from a driver or other control mechanism must be capable of returning control when they are completed. The precise means of accomplishing this will vary with each automation approach, and is discussed in the related section for each approach. However, some elements of structure are common to all approaches.

HEADER

Just as a document has a Header section that describes important information about its contents, a test case or script should contain an area that stores key data about the test. Depending on the tool and approach selected, this information may be found within the test script, the data file, or on paper. The Header is designed to provide later testers with enough information about the test to execute or modify it.

NEXT

The NEXT area is used for those tests that rely on external files, and it indicates the point at which the next record is read from the file. It is used as a branch point within the test after processing is complete for a single record.

END

At the end of each test there should be an ending area, which is the last section to be executed before the test terminates. For tests that read external files, this may be the branch point for an end-of-file condition. In most cases, this area would provide for the test to be logged, such as by calling the LOGTEST routine. For subroutine scripts or tests that are shared by other routines, this area would include the command(s) necessary to return control to the calling script, such as RESUME. For scripts that are executed stand-alone, this might simply say STOP.
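The HEADER/NEXT/END structure can be sketched in Python as a loose analogy (not the handbook's script language; the record format and routines here are hypothetical):

```python
import io

def run_test(data_file, apply_record, log_test):
    """Skeleton mirroring the structure above: read records until end of
    file (the NEXT area), apply each one, then log and return (the END area)."""
    for line in data_file:              # NEXT: read the next record
        record = line.strip()
        if not record:                  # skip blank lines
            continue
        result = apply_record(record)   # apply the test case
        log_test(record, result)        # log each test case (LOGTEST)
    # END: end of file reached; control returns to the caller here

results = []
run_test(io.StringIO("100-0000\n112-0000\n"),
         apply_record=lambda record: "pass",
         log_test=lambda record, status: results.append((record, status)))
```

Returning from the function plays the role of RESUME in a subroutine script; a stand-alone script would simply stop.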

Page 40  The Automated Testing Handbook

Test Case Header

Application: General Ledger 5.1.1        Test Case ID: 112-0000
Date Created: 01/01/2X                   By: Teresa Tester
Last Updated: 01/11/2X                   By: Lucinda Librarian

Test Description: This test case deletes an existing chart of accounts record that has a zero balance. The script DELETE_ACCTS is used to apply the test case.

Inputs: This test case begins at the Account Number edit control; the account number 112 and sub-account number 0000 are entered, then the OK button is clicked.

Outputs: The above referenced account is retrieved and displayed. Click DELETE button. The message “Account Deleted” appears. All fields are cleared and focus returns to Account Number field.

Special requirements: The security level for the initial SIGNON to the general ledger system must permit additions and deletions.

Dependencies: Test Case 112-0000 should be executed by the ADD_ACCTS script first so that the record will exist for deletion. Otherwise, the completed chart of accounts file ALL_ACCTS should be loaded into the database before execution.


Application Map
Similar to a data dictionary for an application, the Application Map names and describes the elements that comprise the application and provides the terminology used to tie the tests to the application. Depending on the type of automation approach you adopt, these elements may include the components that comprise the user interface of the application, such as windows, dialog boxes and data elements or controls.


Test Vocabulary

Think of your Application Map as defining the “vocabulary” of your automated tests. This vocabulary spells out what words can be used in the test library to refer to the application and what they mean. Assuring that everyone who contributes to the test process uses the same terminology will not only simplify test development, it will assure that all of the tests can be combined into a central test library without conflict or confusion.

Naming Conventions

In order to develop a consistent vocabulary, naming conventions are needed. A naming convention simply defines the rules by which names are assigned to elements of the application. The length and format of the names may be constrained by the operating system and/or test automation tool. In some cases, application elements will be identified as variables in the test script; therefore, the means by which variables are named by the tool may affect your naming conventions. Also, test scripts will be stored as individual files whose names must conform to the operating system’s conventions for file names.

Cross-reference names to application

Because your tests must ultimately be executed against the application, and the application will inevitably change over time, it is crucial that your tests are cross-referenced to the application elements they impact. By using consistent names for windows, fields and other objects, a change in the application can be quickly cross-referenced to the potentially affected test cases through a search for the name(s) of the modified elements.
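As a minimal sketch of this cross-referencing (in Python, with invented script names and a dictionary standing in for script files on disk):

```python
def affected_tests(test_library, element_name):
    """Return names of tests whose script text references the element.
    test_library maps test name to script text (a stand-in for files on disk)."""
    return [name for name, script in test_library.items()
            if element_name in script]

# Invented excerpt of a test library keyed by script name
library = {
    "ADD_ACCTS": 'Type ACCTNO\nPush button "Accept"',
    "DEL_ACCTS": 'Type ACCTNO\nPush button "Delete"',
    "SIGNON": "Type USERID\nType PASSWORD",
}
```

A change to the ACCTNO control, for example, would flag both account scripts but leave the sign-on script untouched.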


Following is an excerpt from the Application Map for the sample general ledger system; the Data-Driven approach is assumed.

Object Name Conventions: Sub-menus are named within the higher-level menu; windows are named within their parent menus. Controls are named within their parent window. Data files are named by the script file that applies them; script files are named by the parent window.


Description              Object Type
Chart of accounts        Window
Text file                .TXT
Script file              .SLF
Account number           Edit control
Sub account number       Edit control
Account description      Edit control
Statement type           Radio button
Account type             List box
Header                   Check box
Message                  Information box
Accept record            Push button
Cancel record            Push button



Test Library Management
Just as an application source library will get out of control if changes and different versions are not managed, so will the test library eventually become useless if it is not managed properly. Regardless of how many testers are involved, there must be a central repository of all test scripts, data
and related information that can be effectively managed over time and turnover. Individual, uncoordinated test libraries have no long term value to the organization; they are only as good – and around only as long – as the person who created them. Test library management includes change control, to assure that changes are made only by authorized persons, are documented, and are not made concurrently to different copies so that overwriting occurs; version control, to assure that tests for different versions of the same application are kept segregated; and configuration management to account for any changes to the test environment.

Change Control
Change control refers to the orderly process of introducing change to the test library. Changes may take the form of new tests being added or existing tests being modified or deleted.

Documentation is key

It is important to not only know that a change was made, but who made it, when and why. Documentation of the nature of the change to an existing module should ideally include a delta file, which contains the differences between the old module and the new one. At a minimum, an explanation should be provided of what was changed and where.
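A delta file can be produced with any file-comparison utility; in Python, for example, the standard difflib module generates a unified diff (the script contents below are invented):

```python
import difflib

def delta(old_script, new_script):
    """Produce a delta (unified diff) showing what changed in a test module."""
    return "".join(difflib.unified_diff(
        old_script.splitlines(keepends=True),
        new_script.splitlines(keepends=True),
        fromfile="old", tofile="new"))

d = delta('Type "100000"\n', 'Type "100100"\n')
```

The resulting text records exactly which lines were removed and added, which is what the change log needs to reference.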


Change log

The test librarian should manage the change control process, keeping either a written or electronic log of all changes to the test library. This change log should list each module affected by the change, the nature of the change, the person responsible, the date and time. Regular backups of the test library are critical, so that unintended or erroneous changes can be backed out if needed.
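An electronic change log can be as simple as a list of records with the fields named above; a Python sketch (the entry values are invented):

```python
from datetime import datetime

def log_change(change_log, module, nature, person, when=None):
    """Append one change-control entry: module affected, nature of the
    change, person responsible, and date/time."""
    change_log.append({
        "module": module,
        "nature": nature,
        "person": person,
        "when": when or datetime.now().isoformat(timespec="seconds"),
    })

change_log = []
log_change(change_log, "ADD_ACCTS", "added tab-order check",
           "Teresa Tester", when="2024-01-11T09:00:00")
```

Keeping the log append-only, alongside regular backups, makes it straightforward to reconstruct or back out any change.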

Test your tests

The librarian should also take steps to assure that the test being added to the library has itself been tested; that is, it should have been executed successfully at least once before being introduced into the permanent library.

Synchronize with source control

There should also be some level of correspondence between the change log for the application source and the test library. Since changes to the application will often require changes to the affected tests, the test librarian may take advantage of the application change log to monitor the integrity of the test library. In fact, it is ideal to use the same source control system whenever possible. If the change to a test reflects a new capability in a different application version, then the new test should be checked into a different version of the test library instead of overwriting the test for the prior version. See Version Control, following, for more information.

Version Control
Just as the application source code must be kept aligned by versions, so must the test library.

Multiple application versions

At any given time, more than one version of the application may require testing; for example, fixes may be added to the version in the field, while enhancements are being added to the next version planned for release.

Multiple test library versions

Proper version control of the test library allows a test execution cycle to be performed against the corresponding version of the application without confusing changes made to tests for application modifications in subsequent versions. This requires that more than one version of the test library be maintained at a time.

Configuration Management
A thorough test exercises more than just the application itself: it ultimately tests the entire environment, including all of the supporting hardware and surrounding software.

Multiple layers affect the test

In today’s complex, layered environments, there may be eight or more different variables in the environment: the workstation operating system and hardware configuration, the network protocol and hardware connection, the host or server communications protocol, the server’s hardware configuration and operating system, and the state of the database. It is risky to test an application in one environment and deliver it in another, since all of these variables will impact the functionality of the system.

Test integrity requires configuration management

This means that configuration management for the test environment is crucial to test integrity. It is not enough to know what version of the software was tested: you must know what version and/or configuration of every other variable was tested as well. Granted, you may not always be able to duplicate the production environment in its entirety, but if you at least know what the differences are, you know where to look if a failure occurs.


Selecting a Test Automation Approach
There are as many ways to approach test automation as there are testers, test tools and applications. This is one reason why it is important to develop an overall approach for your automated test environment: otherwise, each tester will adopt his or her own, leading to a fragmented test library and duplication of effort. For our purposes, we will refer to three major approaches as described below. These approaches are not exclusive of each other, or of other approaches; rather, they are intended to describe options that may be mixed and matched, based on the problem at hand. Indeed, a single test library may contain tests designed according to each approach, depending on the particular type of test being automated.

But before you can get started, you have to know where you are starting from. The following assessment is designed to help you evaluate where you stand in terms of your application, test team and test process. Based on the results, you will be able to select the automation approach that is right for your needs. Start by answering these questions:

What phase of development is the application in?
_____ Planning
_____ Analysis
_____ Design
_____ Code
_____ Test
_____ Maintenance


What is the skill set of the test team?
_____ Primarily technical
_____ Some technical, some non-technical
_____ Primarily non-technical

How well documented is the test process?
_____ Well-documented
_____ Somewhat documented
_____ Not documented

How stable is the application?
_____ Stable
_____ Somewhat stable
_____ Unstable

Based on your answers to these questions, you should select an automation approach that meets your needs. Each of the approaches is described in more detail below.




Capture/Playback:
- Application already in test phase or maintenance
- Primarily non-technical test team
- Somewhat or not documented test process
- Stable application


Data-Driven:
- Application in code or early test phase
- Some technical, some non-technical test team
- Well or somewhat documented test process
- Stable or somewhat stable application


Table-Driven:
- Application in planning, analysis or design
- Some technical, most non-technical test team
- Well documented test process
- Unstable or stable application


These profiles are not hard and fast, but they should indicate the type of approach you should consider. Remember that you have to start from where you are now, regardless of where you want to end up. With a little prior planning, it is usually possible to migrate from one method to another as time and expertise permits.

Capture/Playback
The capture/playback approach means that tests are performed manually while the inputs and outputs are captured in the background. During subsequent automated playback, the script repeats the same sequence of actions to apply the inputs and compare the actual responses to the captured results; differences are reported as errors. Capture/playback is available from almost all automated test tools, although it may be implemented differently. Following is an excerpt from an example capture/playback script:

Select menu item “Chart of Accounts>>Enter Accounts”
Type “100000”
Press Tab
Type “Current Assets”
Press Tab
Select Radio button “Balance Sheet”
Check box “Header” on
Select list box item “Asset”
Push button “Accept”
Verify text @ 562,167 “Account Added”

Notice that the inputs – selections from menus, radio buttons, list boxes, check boxes, and push buttons, as well as text and keystrokes – are stored in the script. In this particular case, the output – the expected message – is explicit in the script; this may or may not be true with all tools – some simply capture all application responses automatically, instead of allowing or requiring that they be explicitly declared. See Comparison Considerations below for more information.


In order to allow capture/playback script recording to be distributed among multiple testers, a common structure should be adopted.

One script, one requirement

The ideal structure is to have one script per requirement, although multiple instances of the requirement – i.e. test cases – might be grouped together. This also allows the requirements to be distributed among multiple testers, and the name of the script can be used as a cross-reference to the requirement and clearly indicate the content and purpose of each script.

Associate scripts by application areas

These scripts can be packaged together into test suites that are related by common characteristics, such as beginning and ending context and data requirements, and/or by the area of the application they exercise. This makes it easier for a single tester to focus on certain areas of the system, and simplifies later maintenance when changes are needed.

Callable scripts

Depending on the capabilities provided by your tool, you may need to build in the capability of tying scripts together into suites and/or test cycles. If your tool does not have a built-in mechanism, you should consider making each script callable from another, so that when it completes it returns processing to the next instruction in the calling script. A master or driver script can then be created which contains a series of calls to the individual scripts for execution.
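The driver idea can be sketched in Python (the script names are invented; in a real tool the entries would be recorded test scripts rather than functions):

```python
def driver(suite, scripts):
    """Master driver: call each named script in turn; each returns control
    (and a status) when complete, so execution continues with the next."""
    results = {}
    for name in suite:
        results[name] = scripts[name]()
    return results

# Invented scripts standing in for recorded test scripts
results = driver(["SIGNON", "ADD_ACCTS"],
                 {"SIGNON": lambda: "pass", "ADD_ACCTS": lambda: "fail"})
```

Because the suite is just an ordered list of names, scripts can be regrouped into different cycles without modifying the scripts themselves.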


Capture/playback is one of the earliest and most common automated test approaches and offers several advantages over other methods.

Little training or setup time

The main advantage of this approach is that it requires the least training and setup time. The learning curve is relatively short, even for non-technical test operators.

Develop tests on the fly

Tests need not be developed in advance, as they can be defined on the fly by the test operator. This allows experienced users to contribute to the test process on an ad hoc basis.

Audit trail

This approach also provides an excellent audit trail for ad hoc or usability testing; in the event an error occurs, the precise steps that created it are captured for later diagnosis or reproduction.

There are, however, several disadvantages of capture/playback, many of which have led to more advanced and sophisticated test tools, such as scripting languages.


Requires manual capture

Except for reproducing errors, this approach offers very little leverage in the short term; since the tests must be performed manually in order to be captured, there is no real leverage or time savings. In the example shown, the entire sequence of steps must be repeated for each account to be added, updated or deleted.

Application must be stable

Also, because the application must already exist and be stable enough for manual testing, there is little opportunity for early detection of errors; any test that uncovers an error will most likely have to be recaptured after the fix in order to preserve the correct result.

Redundancy and omission

Unless an overall strategy exists for how the functions to be tested will be distributed across the test team, the probability of redundancy and/or omission is high: each individual tester will decide what to test, resulting in some areas being repeated and others ignored. Assuring efficient coverage means you must plan for traceability of the test scripts to functions of the application so you will know what has been tested and what hasn’t.

Tests must be combined

It is also necessary to give overall consideration to what will happen when the tests are combined; this means you must consider naming conventions and script development standards to avoid the risk of overwriting tests or the complications of trying to execute them as a set.


Lack of maintainability

Although subsequent replay of the tests may offer time savings for future releases, this benefit is greatly curtailed by the lack of maintainability of the test scripts. Because the inputs and outputs are hard-coded into the scripts, relatively minor changes to the application may invalidate large groups of test scripts. For example, changing the number or sequence of controls in a window will impact any test script that traverses it, so a window which has one hundred test transactions executed against it would require one hundred or more modifications for a single change.

Short useful script life

This issue is exacerbated by the fact that the test developer will probably require additional training in the test tool in order to be able to locate and implement necessary modifications. Although it may not be necessary to know the script language to capture a test, it is crucial to understand the language when making changes. As a result, the reality is that it is easier to discard and recapture scripts, which leads to a short useful life and a lack of cumulative test coverage.

No logic means more tests fail

Note also that there is no logic in the script to be sure that the expected window is in fact displayed, or that the cursor or mouse is correctly
positioned before input occurs – all the decisions about what to do next are made by the operator at the time of capture and are not explicit in the script. This lack of any decision-making logic in the scripts means that any failure, regardless of its true severity, may abort the test execution session and/or invalidate all subsequent test results. If the application does not behave precisely as it did when the test was captured, the odds are high that all following tests will fail because of an improper context, resulting in many duplicate or false failures which require time and effort to review.


Comparison Considerations
The implied assumption in capture/playback is that the application behavior captured at the time the test is created represents the expected, or correct, result. As simple as this sounds, there are issues that must be addressed before this is effective.


Identify results to verify

For fixed screen format character-based applications, the comparison criteria often includes the entire screen by default, with the opportunity to exclude volatile areas such as time and date. In the case of windowed applications or those without a fixed screen format, it may become necessary to rely only on selected areas. In either event, it is critical to evaluate what areas of the display are pertinent to the verification and which are not.

Use text instead of bitmaps when possible

For graphical applications, full screen or window bitmap comparisons are usually impractical. Simply capturing, storing and comparing the huge amount of information present in a graphical image is a tremendously resource intensive task. Also, merely moving the test from one computer to another
may invalidate the comparison altogether, since different monitor resolutions return different values for bitmaps. Further, the very nature of graphical applications is to be fluid instead of fixed, which means that the same inputs may not result in precisely the same outputs. For example, the placement of a window is often determined by the window manager and not by the application. Therefore, it is usually more accurate to use text to define expected results instead of using images.

Verify by inclusion instead of exclusion

If your tool permits it, define the test results by inclusion rather than exclusion. That is, define what you are looking for instead of what you are not looking at – such as everything except what is masked out. Explicit result verification is easier to understand and maintain; there is no guesswork about what the test is attempting to verify. Having said that, however, also be aware that minimally defined results may allow errors to go unnoticed: if, for example, system messages may be broadcast asynchronously, then you might miss an error message if you are not checking the system message area.


Of course your tool will control, to some degree, the types of comparison available to you and how they are defined. Familiarize yourself with your options and adopt a consistent technique.

Data Considerations
Because capture/playback expects the same inputs to produce the same outputs, the state of the data is critical.

Static data

The beginning state of the application database is essential to predictable results in any automated test method. Assure that your test cycle contains steps to prepare the database to a known state, either by refreshing it with a new copy or populating it with known records.

Dynamic data

In some cases, the application will generate a data value dynamically that cannot be known in advance but must be used later during the test. For example, a unique transaction identifier may be assigned to each new record as it is entered, which must be input later in order to access the record. Because capture/playback hard-codes the test data in the script at the time of capture, in this situation subsequent playback will not produce the same results.

Using variables

In these cases, it may be necessary to implement variable capability for the dynamic field, so that the value can be retrieved during playback and saved for later reference. This will require at least one member of the test team to become familiar with how the test tool defines and manipulates variables in order to substitute them in place of the fixed values which were captured against the dynamic field.
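As a sketch of the variable technique in Python (the message format "Transaction N created" is invented; a real tool would read the response from the screen):

```python
import re

def capture_transaction_id(screen_text):
    """Retrieve a dynamically assigned identifier from the application's
    response so it can be substituted for the hard-coded captured value.
    The message format here is invented for illustration."""
    m = re.search(r"Transaction (\d+) created", screen_text)
    return m.group(1) if m else None

txn = capture_transaction_id("Transaction 4711 created")
# txn now holds the value to replay wherever the fixed value was captured
```

The captured value is held in a variable for the rest of the playback session instead of the fixed value recorded at capture time.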

If your test tool can store its scripts in a text format, you can use your favorite word processor to copy the script for a single transaction, then simply search and replace the data values for each iteration. That way, you can create new tests without having to perform them manually.
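The same search-and-replace cloning can itself be scripted; a Python sketch with invented data values:

```python
def clone_test(template_script, old_values, new_values):
    """Clone a captured script for a new data set by search-and-replace,
    as one might do in a word processor."""
    script = template_script
    for old, new in zip(old_values, new_values):
        script = script.replace(old, new)
    return script

new_script = clone_test('Type "100000"\nType "Current Assets"',
                        ["100000", "Current Assets"],
                        ["100100", "Petty Cash"])
```

Each cloned copy keeps the recorded sequence of actions but applies a different transaction's data.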



Data-Driven
The difference between classic capture/playback and Data-Driven is that in the former case the inputs and outputs are fixed, while in the latter the inputs and outputs are variable. This is accomplished by performing the test manually, then replacing the captured inputs and expected outputs with variables whose corresponding values are stored in data files external to the script. The sequence of actions remains fixed and stored in the test script. Data-Driven is available from most test tools that employ a script language with variable data capability, but may not be possible with pure capture/playback tools. Following is an example of the previous capture/playback script, modified to add an external file and replace the fixed values with variables. Comments have been added for documentation:


Select menu item “Chart of Accounts>>Enter Accounts”
Open file “CHTACCTS.TXT”         * Open test data file
Label “NEXT”                     * Branch point for next record
Read file “CHTACCTS.TXT”         * Read next record in file
End of file?                     * Check for end of file
If yes, goto “END”               * If last record, end test
Type ACCTNO                      * Enter data for account #
Press Tab
Type ACCTDESC                    * Enter data for description
Press Tab
Select Radio button STMTTYPE     * Select radio button for statement
Is HEADER = “H”?                 * Is account a header?
If yes, Check Box HEADER on      * If so, check header box
Select list box item ACCTTYPE    * Select list box item for type
Push button “Accept”
Verify text MESSAGE              * Verify message text
If no, Call LOGERROR             * If verify fails, log error
Press Esc                        * Clear any error condition
CALL LOGTEST                     * Log test case results
Goto “NEXT”                      * Read next record
Label “END”                      * End of test

Example file contents:
Test Case   ACCT NO   SUB ACCT   ACCT DESC        STMT TYPE       ACCT TYPE   HEADER   MESSAGE
1000000     100       0000       Current Assets   Balance Sheet   Asset       H        Account Added
1001000     100       1000       Cash in Banks    Balance Sheet   Asset                Account Added


In order to permit test cases to be defined as data records in files external to the scripts that process them, the application data elements associated with each process must be known. This should be provided by the Application Map. Also, the following structure should be followed:

One script, one process

A Data-Driven script is tied to a single processing sequence but will support multiple test cases. A sequence of steps that enters or processes data may include many test cases relating to individual elements of the data or steps in the sequence. Select a sequence of steps that require a consistent set of data for each iteration, and name the script for the process or application window it addresses.

One record, one test case

Each record in the test data file should relate to a single test case, and the test case identifier should be stored in the data record. This allows a single script to process multiple test cases while logging results for each. Notice in the example that the test results are logged for each record, instead of at the end of the script.

Data-intensive

Implied in this approach is that the application is fairly data-intensive; that is, the same steps tend to be repeated over and over with different data. In the general ledger example, the steps to enter one account are identical to those needed to enter hundreds.

Consistent behavior

Also assumed is that the same steps are repeated without significant variance, so that different inputs do not have a major impact on the sequence of actions. For example, if the value of one field causes a completely different processing path to take effect, then the amount of logic required to process each test case increases exponentially.
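The one-script, one-record-per-test-case structure can be sketched in Python using a CSV data file (the column names are illustrative, not the handbook's):

```python
import csv
import io

def data_driven(script, data_file):
    """Apply one script to every record in its data file, logging a result
    per test case (one record, one test case)."""
    log = []
    for row in csv.DictReader(data_file):
        ok = script(row)
        log.append((row["TestCase"], "pass" if ok else "fail"))
    return log

DATA = """TestCase,AcctNo,SubAcct,Desc
1000000,100,0000,Current Assets
1001000,100,1000,Cash in Banks
"""
log = data_driven(lambda row: bool(row["Desc"]), io.StringIO(DATA))
```

The test case identifier travels with each record, so the log shows a result per test case even though a single script did all the work.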


There are several advantages to Data-Driven over simple capture/playback.

Create test cases earlier

Data-Driven allows test cases – the inputs and expected outputs – to be created in advance of the application. The software does not have to be stable enough to operate before test cases can be prepared as data files; only the actual script has to await the application.

Flexible test case creation

Because they are stored as data, the sets of inputs and outputs can be entered through a spreadsheet, word processor, database or other familiar utility, then stored for later use by the test script. Familiarity with, or even use of, the test tool is not required to create test cases.

Leverage

Data-Driven provides leverage in the sense that a single test script can be used to apply many test cases, and test cases can be added later without modifications to the test script. Notice that the example script could be used to enter one or one thousand different accounts, while the capture/playback script enters only one. Cut and paste facilities of the selected utility can be used to rapidly “clone” and modify test cases, providing additional leverage.

Reduced maintenance

This approach reduces required maintenance by not repeating the sequence of actions and logic to apply each test case; therefore, should the steps to enter an account change, they would have to be changed only one time, instead of once for each account.

The main disadvantage of this approach is that it requires additional expertise in the test tool and in data file management.

Technical tool skills required

In order to convert the script to process variable data, at least one of the testers must be proficient in the test tool and understand the concept of variable values, how to implement external data files, and programming logic such as if/then/else expressions and processing loops.

Data file management needed

Similarly, the test case data will require someone with expertise in creating and managing the test files; large numbers of data elements in a test case may lead to long, unwieldy test case records and awkward file management. Depending on the utility used, this may require expertise in creating and manipulating spreadsheet macros, database forms or word processor templates, then exporting the data into a file compatible with the test tool.

Page 62  The Automated Testing Handbook

Data Considerations

Because the test data is stored externally in files, there must be a consistent mechanism for creating and maintaining the test data.

One script, one file

Generally, there will be a file for each script that contains records comprising the set of values needed for the entire script. Therefore, the content and layout of the file must be defined and organized for easy creation and maintenance. This may take the form of creating macros, templates or forms for spreadsheets, word processors or databases that lay out and describe the necessary fields; some level of editing may also be provided. Obviously, the more that can be done to expedite and simplify the data collection process, the easier it will be to add and change test cases.

Using multiple files

It is of course possible to have more than one test data file per script. For example, there may be standalone data files that contain a list of the equivalence tests for specialized types of fields, such as dates. Instead of repeating these values for every script in which dates appear, a single file may be read from multiple scripts. This approach will require additional script logic to accommodate nested processing loops for more than one file per script iteration.

Dynamic data

This approach may also require the same manipulation for dynamic variables that was described under Data Considerations for capture/playback, above.
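The nested processing loops for a shared data file can be sketched as follows. This is an illustrative fragment only: the lists stand in for the script's own data file and the shared equivalence file, and `apply_test` is a hypothetical callback for the tool commands that would apply one combination.

```python
def run_with_shared_data(account_cases, date_cases, apply_test):
    """Nested processing loops: for each record in the script's own file,
    replay every equivalence test from a shared file (for example, a
    standalone list of date tests reused by many scripts)."""
    results = []
    for account in account_cases:    # outer loop: the script's own data
        for date in date_cases:      # inner loop: shared equivalence data
            results.append(apply_test(account, date))
    return results
```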


Table-Driven differs from Data-Driven in that the sequence of actions to apply and evaluate the inputs and outputs is also stored external to the script. This means that the test does not have to be performed manually at all. The inputs and expected outputs, as well as the sequence of actions, are created as data records; the test scripts are modular, reusable routines that are executed in a sequence based on the test data. The logic to process these records and respond to application results is embedded in these routines.

Another key differentiator is that these script routines are reusable across applications. They are completely generic in that they are based on the type of field or object and the action to be performed. The exact instance of the field or object is defined in the Application Map and is provided to the routine when the test executes.

Below is an excerpt of the test script routines and test data file that would process the same entry of the chart of accounts:

Open file TESTDATA               * Open test data file
Label “NEXT”                     * Branch point for next test case
Read file @TESTDATA              * Read next record in file
End of file?                     * Check for end of file
If yes, goto “END”               * If last record, end test
Does @WINDOW have focus?         * Does the window have focus?
If no, Call LOGERROR             * If not, log error
Does @CONTROL have focus?        * Does the control have focus?
If no, set focus to @CONTROL     * Try to set the focus
Does @CONTROL have focus?        * Was set focus successful?
If no, Call LOGERROR             * If not, log error
Call @METHOD                     * Call test script for method
Call LOGTEST                     * Log test results
Goto “NEXT”
Label “END”

The Select method script for a menu item would be:

Does menu item VALUE exist?      * Does the menu item exist?
If no, Call LOGERROR             * If not, log error
Is menu item VALUE enabled?      * Is the menu item enabled?
If no, Call LOGERROR             * If not, log error
Select menu item VALUE           * Select the menu item
Resume                           * Return to main script

Example file contents:

Test Case     Window              Control               Method       Value                               On Pass    On Fail
Add Account   Chart of Accounts                         Select       Chart of Accounts>>Enter Accounts   Continue   Abort
Add Account   Chart of Accounts   Account Number                     100000
Add Account   Chart of Accounts   Account Description                Current Assets
Add Account   Chart of Accounts   Statement Type                     Balance Sheet
Add Account   Chart of Accounts   Account Type                       Header
Add Account   Chart of Accounts   OK
Add Account   Message Box                               Verify Text  Account Added



Like Data-Driven, this method requires that the names of the application data elements be known; however, it also requires that the types of objects and the valid methods they support be defined and named, as well as the windows and menus. In the earlier stages of development, the object types and methods may not yet be known, but that should not prevent test cases from being developed. A simple reference to input or output can later be refined as the precise methods are implemented. For example, the input of the statement type can eventually be converted to a check box action. This information will support the following structure:


One file, multiple scripts

In the Table-Driven approach, a single test data file is processed by multiple scripts. In addition to common and standard scripts, there will be a master script that reads the test file and calls the related method scripts. Each object and method will have its own script that contains the commands and logic necessary to execute it.
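The master-script loop and its method dispatch can be sketched in Python. This is a minimal illustration, not a real tool script: the application map entries, routine names and record fields are all invented; the point is the structure, namely generic routines keyed by method name and driven entirely by data records.

```python
# Hypothetical Application Map: logical names -> concrete identifiers.
APP_MAP = {"Chart of Accounts": "win_chart", "Account Number": "fld_acct_no"}

def method_enter(control, value, log):
    log.append(f"Enter {value} into {APP_MAP[control]}")

def method_select(control, value, log):
    log.append(f"Select menu item {value}")

METHODS = {"Enter": method_enter, "Select": method_select}  # routine table

def run_table_driven(records):
    """Master script: read each test record and dispatch to the generic
    routine for its method, as in the Table-Driven loop shown earlier."""
    log = []
    for rec in records:                      # Read file @TESTDATA
        routine = METHODS.get(rec["method"])
        if routine is None:                  # Unknown method: log error
            log.append(f"ERROR: unknown method {rec['method']}")
            continue
        routine(rec.get("control"), rec.get("value"), log)  # Call @METHOD
    return log
```

Adding a new kind of action means adding one routine to the table; the master loop never changes.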

Multiple records, one test case

A single test case is comprised of multiple records, each containing a single step. The test case identifier should be stored in each data record to which it relates. This allows a single set of scripts to process multiple test cases. Notice in the example that the test results are logged for each step, instead of at the end of the test case.
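Because the test case identifier is stored in every record, the processing scripts can collect the steps belonging to each case. A minimal sketch, with invented field names:

```python
from itertools import groupby

def group_test_cases(records):
    """Collect the steps belonging to each test case, using the test case
    identifier stored in every data record (records assumed to be in
    file order, with each case's records contiguous)."""
    return {case_id: list(steps)
            for case_id, steps in groupby(records,
                                          key=lambda r: r["test_case"])}
```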

The main advantage of this approach is that it provides the maximum maintainability and flexibility.

Develop test cases, scripts earlier

Test cases can be constructed much earlier in the development cycle, and can be developed as data files through utilities such as spreadsheets, word processors, databases, and so forth. The elements of the test cases can be easily modified and extended as the application itself becomes more fully defined. The scripts that process the data can be created as soon as the objects (screens, windows, controls) and methods have been defined.


Minimized maintenance

By constructing the test script library out of modular, reusable routines, maintenance is minimized. The script routines need not be modified unless a new type of object is added that supports different methods; adding new windows or controls simply requires additions to the tool’s variable file or GUI map, where the objects are defined.

Portable architecture between applications

Another key advantage of Table-Driven is that the test library can be easily ported from one application to another. Since most applications are composed of the same basic components – screens, fields and keys for character-based applications; windows and controls for graphical applications – all that is needed to move from one to another is to change the names and attributes of the components. Most of the logic and common routines can be left intact.

Portable architecture between tools

This approach is also portable between test tools. As long as the underlying script language has the equivalent set of commands, test cases in this format could be executed by a script library created in any tool. This means you are free to use different tools for different platforms if necessary, or to migrate to another tool.

No tool expertise needed to create test cases

Because logic is defined and stored only once per method, and there is substantial implied logic for verifying the context and state of the application to assure proper playback, individual testers may create test cases without understanding logic or programming. All that is needed is an understanding of the application components, their names, and the valid methods and values which apply to them.


The central disadvantage of the Table-Driven approach is the amount of training and setup time required to implement it.

Extensive technical and tool skills required

In order to properly design and construct the script library, extensive programming skills and test tool expertise are needed. Programming skills are needed to implement a library of modular scripts and the surrounding logic that ties them together and uses external data to drive them.

Data conventions and management critical

Because all test assets are maintained as data, it is essential to have a means of enforcing conventions and managing the data. While this can be done in spreadsheets, a database is far more powerful.


The Test Automation Process
In an ideal world, testing would parallel the systems development life cycle for the application. This cycle is generally depicted as the software phases feeding a parallel sequence of test phases:

Test Plan → Test Cases → Test Scripts → Test Execution/Maintenance

Unfortunately, not all test efforts commence at the earliest stage of the software development process. Depending on where your application is in the timeline, these activities may be compressed and slide to the right, but in general each of these steps must be completed.

The Test Team
Regardless of the approach you select, automating your testing will require assembling a dedicated test team and obtaining the assistance of other areas in the company. It is important to match the skills of the persons on your team with the responsibilities of their role. For example – although the type and level of skills will vary somewhat with the automation approach you adopt – developing test scripts is essentially a form of programming; for this role, a more technical background is needed.


You must also be sure that the person in each role has the requisite authority to carry out their responsibilities; for example, the team leader must have control over the workflow of the team members, and the test librarian must be able to enforce procedures for change and version control.

Following are suggested members of the test team and their respective responsibilities:

Team Leader

The Team Leader is responsible for developing the Test Plan and managing the team members according to it, as well as coordinating with other areas to accomplish the test effort. The Team Leader must have the authority to assign duties and control the workflow of those who are dedicated to the test team.

Test Developers

Test Developers are experts in the application functionality, responsible for developing the test cases, executing them, and analyzing and reporting the results. They should be trained in how to develop tests, whether as data records or as scripts, and in how to use the test framework.


Script Developers

Script Developers are experts in the testing tool, ideally with technical programming experience. They are responsible for developing and maintaining the test framework and supporting scripts and publishing the Application Map.

Test Librarian

The Test Librarian is responsible for managing the configuration, change and version control for all elements of the test library. This includes defining and enforcing check in and check out procedures for all files and related documentation.

Customer Liaison

The Customer Liaison represents the user community of the application under test and is responsible for final approval of the test plan or any changes to it, and for working with the Test Developers to identify test cases and gather sample documents and data. Even though the Customer Liaison may not be a dedicated part of the testing organization, he or she must have dotted line responsibility to the Test Team to assure the acceptance criteria are communicated and met.

Development Liaison

The Development Liaison represents the programmers who will provide the application software for test and is responsible for delivering unit test cases and informing the Test Librarian of any changes to the application or its environment. Even though the Development Liaison may not be a dedicated part of the testing organization, he or she must have dotted line responsibility to the Test Team to assure the software is properly unit tested and delivered in a known state to the Test Team.


Systems Liaison

The Systems Liaison represents the system or network support group and database administrator, and is responsible for supporting the test environment to assure that the Test Team has access to the proper platform configuration and database for test execution. The Systems Liaison must also inform the Test Librarian of any changes to the test platform, configuration or database.

Test Automation Plan
A Test Automation Plan describes the steps needed to automate testing. It outlines the necessary components of the test library, the resources which will be required, the schedule, and the entry/exit criteria for moving from one step to the next. Note that the Test Automation Plan is a living document that will be maintained throughout the test cycle as progress is made and additional information is obtained. Following is an example Plan:

Document Control
Activity     Initials   Date      Comments
Created      LL         5/1/XX    F:\GENLDGR\TESTPLAN.DOC
Updated      CC         5/21/XX   Added acceptance criteria
Updated      TT         5/31/XX   Added test cases
Revised      PP         6/3/XX    Modified schedule
Published    LL         6/15/XX   Circulated to team members for approval
Approved     CC         6/17/XX   Updates and revisions accepted

This section is used to control additions and changes to the plan. Because the plan will likely be modified over time, keeping track of the changes is important.


Application Under Test

Version 1.0 of the General Ledger system.

Describe the application under test in this section. Be sure to specify the version number. If only a subset is to be automated, describe it as well.

Scope of Test Automation
Black box tests for each of the listed requirements will be automated by the test team. Unit, string and integration testing will be performed by development manually, using a code debugger when necessary. Performance testing will be done using the Distributed Test Facility to create maximum simultaneous user load. Ad hoc testing for usability will be captured by the automated tool for reproducing conditions that lead to failure, but will not be checked into the test library.

The statement of scope is important for describing not only what will be tested but also what will not be, as well as who will be responsible for each area.

Test Team
Name                Role                  Initials
Mike Manager        Team Leader           MM
Tina Tester         Test Developer        TT
Steve Scripter      Script Developer      SS
Loretta Librarian   Test Librarian        LL
Carla Customer      Customer Liaison      CC
Dave Developer      Development Liaison   DD
Percy Production    Systems Liaison       PP

List the names and roles of the test team members, and cross-reference each of the steps to the responsible party(ies).


Be sure you have a handle on the test environment and configuration. These factors can affect compatibility and performance as much as the application itself. This includes everything about the environment, including operating systems, databases and any third party software.

Schedule

Phase                                      Scheduled Date   Entry Criteria             Exit Criteria               Date Complete
Test requirements defined                                   Planning completed         SIGNOFF by customer
Configuration of test environment                           Hardware and software      SIGNOFF by system support
Publication of Application Map                              Design completed           SIGNOFF by development
Development of test cases                                   Requirements defined       SIGNOFF by customer
Initial installation of application                         Coding completed           SIGNOFF by development
Development of test scripts                                 Application installed      SIGNOFF by team leader
Execution of tests and result reporting                     Cases, scripts completed   SIGNOFF by team leader
Result analysis and defect reporting                        Execution completed        SIGNOFF by team leader
Test cases for defects found                                Defect reporting           SIGNOFF by customer

Schedule (continued)

Phase                                            Scheduled Date   Entry Criteria               Exit Criteria               Date Complete
Test script modifications for execution errors                                                 SIGNOFF by team leader
Second installation of application                                Changes completed            SIGNOFF by development
Execution of tests                                                Application installed        SIGNOFF by team leader
Result analysis and defect reporting                              Execution completed          SIGNOFF by team leader
Test cases for defects found                                      Defect reporting             SIGNOFF by customer
Test script modifications for execution errors                    Result analysis              SIGNOFF by team leader
Third installation of application                                 Changes completed            SIGNOFF by development
Execution of tests                                                Application installed        SIGNOFF by team leader
Result analysis                                                                                SIGNOFF by team leader
Ad hoc and usability testing                                      No known or waived defects   SIGNOFF by customer
Performance testing                                               No known or waived defects   SIGNOFF by systems
Result analysis and defect reporting                              All tests executed           SIGNOFF by team leader
Application release                                               No known or waived defects   SIGNOFF by all test team

Planning the Test Cycle


In an automated environment, the test cycle must be carefully planned to minimize the amount of supervision or interaction required. Ideally, an execution cycle should be capable of automatically preparing and verifying the test environment, executing test suites or individual tests in sequence, producing test result reports, and performing final cleanup.

Test Suite Design
A test suite is a set of tests which are related, either by their function or by the area of the application they impact, and which are executed as a group. Not only the set of tests but their sequence within the suite should be considered. The execution of a suite may be a feature available from the test tool, or may have to be scripted within the tool itself using a driver.

Related tests

A test suite usually contains tests that are related by the area of the application they exercise, but they may also be selected by their priority. For example, each suite of tests may be designated by the priority level of the requirements they verify. This allows the most critical tests to be executed selectively, and less important ones to be segregated. Another means of differentiation is to identify tests by their type; for example, verifying error messaging or other types of design requirements.

Context

All tests in a suite should share the same beginning and ending context, as well as the expected state of the database. This allows the suite to be packaged so that the data is prepared at the beginning, and all tests that depend on each other are executed in the proper sequence. Any RECOVERY routine that is included should also coincide with the desired context. If the suite is packaged to be executed with a driver script, each individual test within the suite should end with a return to the calling driver.

Documentation

Test suite documentation should include the set and sequence of individual tests, the beginning and ending context, as well as any data or sequence dependencies with other test suites.


Test Cycle Design
In addition to designing test suites, the entire test execution cycle must also be designed. There may be different types of test cycles needed; for example, a regression cycle that exercises the entire test library, or a fix cycle that tests only targeted areas of the application where changes have been made. Although there may be varying cycles, certain aspects of each cycle must always be considered, such as the configuration of the test platform as well as initialization, setup and cleanup of the test environment.

Setup

The cycle should commence with the setup of the test environment, including verifying the configuration and all other variables that affect test execution. Preparing the test environment for execution requires that the platform be properly configured; a test cycle executed against the wrong platform configuration may be worthless. The configuration includes not only assuring that the hardware, operating system(s), and other utilities are present and of the expected model or version, but also that the version or level of the application and test library are properly synchronized. Certain portions of the configuration may be automated and included in a shared routine, such as SETUP; others may require human intervention, such as loading software. Whatever the required steps, the configuration of the test platform should be carefully documented and verified at the beginning of each test execution cycle.

Context

The beginning and ending context of a cycle should be the same point, usually the program manager or command prompt. Care should be taken to synchronize the suites within the cycle to assure that the context for the first and last suite meets this requirement.

Initialization

In addition to assuring that the test platform is configured, it may be important to initialize the state of the database or other data elements. For example, a clean version of the database may be restored, or a subset appended or rewritten, in order to assure that the data is in a known state before testing begins. Data elements, such as error counters, may also require initialization to assure that previous test results have been cleared.

Schedule sequence

A test schedule is often comprised of a set of test cycles. The sequence should reflect any dependencies of either context or data, and standard tests, such as a WALKTHRU, should be packaged as well. A test schedule template may be useful for assuring that all standard tests and tasks are included for each run.

Cleanup

The cycle should end with the cleanup of the test environment, such as deleting work files, making file backups, assembling historical results, and any other housekeeping tasks.
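The setup, execution and cleanup structure of a cycle can be sketched as a driver. This is an illustrative outline only; the function names are invented, and `current_context` stands in for however the tool determines where the application currently is.

```python
def run_cycle(suites, setup, cleanup, current_context):
    """Sketch of an execution cycle: prepare and verify the environment,
    run each suite in sequence, then clean up, checking that every suite
    returns the application to the cycle's starting context."""
    log = []
    setup(log)                          # configure platform, restore data
    start = current_context()
    for suite in suites:                # suites share beginning/ending context
        log.append(f"SUITE {suite.__name__}")
        suite(log)
        if current_context() != start:  # context lost: do not continue blind
            log.append("ERROR: context lost; invoking recovery")
            break
    cleanup(log)                        # delete work files, back up results
    return log
```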


Test Execution
Since an ideally automated test cycle does not depend on human intervention or supervision, the test execution process must thoroughly document results. This documentation must be sufficient to determine which tests passed or failed and what the performance was, as well as to provide additional information that may be needed to assist with diagnosis of failures.

Test log
The test log reports the results of the test execution for each test case. It is also useful to include the elapsed time for each test, as this may indicate performance problems or other issues. For example, a test which executes too quickly may indicate that it was not run all the way to completion; one that takes too long might raise questions about host response time.

Pass/fail

Each individual test case – whether as data record(s) or scripts – should be logged as to whether it executed successfully or not. Ideally, each case should be cross-referenced to a requirement that has a priority rating, so that high priority requirements can be easily tracked.

Performance

In addition to reporting the elapsed time for each test, if host or server based testing is involved the test log should track the highest response time for transactions as well as the average. Many service level agreements specify the maximum allowable response time, and/or the expected average, for given areas of the system.


Performance measurements may also include the overall time required to execute certain functions, such as a file update or other batch process. It is of course critical to establish the performance criteria for the application under test, then assure that the necessary tests are executed and measurements taken to confirm whether the requirements are in fact met.

Configuration

Each test log should clearly indicate the configuration against which it was executed. This may take the form of a header area or comments. If subsequent logs show widely varying results, such as in the area of performance, then any changes to the configuration may provide a clue.

Totals

Total test cases executed, passed and failed, as well as the overall elapsed time, should be provided at the end of the execution log to simplify the updating of historical trends.


TIME: HH:MM                                            Page: XXX
Version: 5.1.1                 Test Cycle: ALL
Scripts: NEW_ACCTS, DEL_ACCTS
------------------------------------------------------------------
Test Case     Begin       End         Status
100-0000      08:11:15    08:12:21    Passed
101-0000      08:12:23    08:13:25    Passed
102-0000      08:13:29    08:14:31    Passed
102-1000      08:14:34    08:15:42    Passed
102-2000      08:15:45    08:16:50    Passed
110-0000      08:16:53    08:18:01    Passed
111-0000      08:18:05    08:19:17    Passed
111-1000      08:19:20    08:19:28    Passed
111-2000      08:19:33    08:20:54    Passed
112-0000      08:21:02    08:22:19    Failed
------------------------------------------------------------------
Cases Passed: 9     Cases Failed: 1     Elapsed Time: 00:11:07

Test Log Summary

Failed:          Previous   New   Resolved   Remaining
Priority 1            9      10        9          10
Priority 2           58      10       22          46
Priority 3           70      25       30          65
Total               137      45       61         121

Total Passed:    172
Total Executed:  217
Ratios:          21% defects, 55% recurrence
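The arithmetic behind such a summary is simple: remaining failures for each priority are the previous count plus the new failures minus those resolved this cycle. A minimal sketch (the field names and priority keys are invented for illustration):

```python
def summarize(previous, new, resolved, executed):
    """Roll up failure counts per priority: remaining = previous + new
    - resolved, plus a defect ratio of new failures over tests executed."""
    remaining = {p: previous[p] + new[p] - resolved[p] for p in previous}
    total_new = sum(new.values())
    return {
        "remaining": remaining,
        "total_remaining": sum(remaining.values()),
        "defect_ratio": round(100 * total_new / executed),  # percent
    }
```

Using the figures in the summary above, the remaining counts work out to 10, 46 and 65 by priority, 121 in total, with 45 new failures out of 217 tests executed giving the 21% defect ratio.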


Error log
For every test which fails, there should be a corresponding entry in the error log. The error log provides more detailed information about a test failure to support diagnosis of the problem and determination of its cause.

Test case and script

The test case which failed, as well as the script being executed, should be documented in the error log to enable later review of the error condition. Errors do not necessarily indicate a defect in the application: the test case or script may contain errors, or the application context or data may be incorrect.

State of application

When an error occurs, it is important to document the actual state of the application for comparison against what was expected. This may include a snapshot of the screen at the time of failure, the date and time, the test case being executed, the expected result, and whether recovery was attempted.

Diagnostics

Whenever possible, the error log should include as much diagnostic information about the state of the application as is available: stack or memory dumps, file listings, and similar documentation may be generated to assist with later diagnosis of the error condition.


Analyzing Results
At the conclusion of each test cycle, the test results – in the form of the execution, performance and error logs – must be analyzed. Automated testing may yield results which are not necessarily accurate or meaningful; for example, the execution log may report hundreds of errors, but a closer examination may reveal that an early, critical test failed which in turn jeopardized the integrity of the database for all subsequent tests.

Inaccurate results
Inaccurate results occur when the test results do not accurately reflect the state of the application. There are generally three types of inaccurate results: false failures, duplicate failures, and false successes.

False failure from test environment

A false failure is a test which fails for a reason other than an error or defect in the application. A test may fail because the state of the database is not as expected due to an earlier test, because the test environment is not properly configured or set up, or because a different error has caused the test to lose context. Or, a test which relies on bitmap comparisons may have been captured against one monitor resolution and executed against another.

False failure from application changes

Another type of false failure can occur if a new field or control is added, causing the script to get out of context and report failures for other fields or controls that are actually functional. Any of these situations will waste resources and skew test results, confusing the metrics which are used to manage the test process.

False failure from test errors

It is unfortunately true that a failure may also be the result of an error in the test itself. For example, there may be a missing test case record or an error in the script. Just as programmers may introduce one problem while fixing another, test cases and scripts are subject to error when modifications are made.

Duplicate failure

A duplicate failure is a failure which is attributable to the same cause as another failure. For example, if a window title is misspelled, this should be reported as only one error; however, depending on what the test is verifying, the name of the window might be compared multiple times. It is not accurate to report the same failure over and over, as this will skew test results. For example, if a heavily-used transaction window has an error, this error may be reported for every transaction that is entered into it; so, if there are five hundred transactions, there will be five hundred errors reported. Once that error is fixed, the number of errors will drop by five hundred. Using these figures to measure application readiness or project the time for release is risky: it may appear that the application is seriously defective, but that errors are being corrected at an astronomical rate – neither of which is true.
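One practical safeguard is to count failures by their underlying cause rather than by occurrence before feeding them into the metrics. The following sketch assumes each failure record already carries an identified cause (the field names are invented):

```python
def unique_failures(failures):
    """Collapse duplicate failures: report each underlying cause once,
    however many test cases it tripped, so that one misspelled window
    title entered five hundred times does not count as five hundred defects."""
    by_cause = {}
    for f in failures:
        by_cause.setdefault(f["cause"], []).append(f["test_case"])
    return {cause: len(cases) for cause, cases in by_cause.items()}
```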

False success from test defect

A false success occurs when a test fails to verify one or more aspects of the behavior, thus reporting that the test was successful when in fact it was not. This may happen for several reasons. One reason might be that the test itself has a defect, such as a logic path that drops processing through the test so that it bypasses certain steps. This type of false success can be identified by measurements such as elapsed time: if the test completes too quickly, for example, this might indicate that it did not execute properly.


False success from missed error

Another false success might occur if the test is looking for only a specific response, thus missing an incorrect response that indicates an error. For example, the test may expect an error message to be reported in a certain area of the screen, and it instead appears elsewhere. Or, an asynchronous error message may appear, such as a broadcast message from the database or network, that the test is not looking for. This type of false success may be avoided by building in standard tests such as a MONITOR, described in this Handbook.

Defect tracking
Once a test failure is determined to be in fact caused by an error in the application, it becomes a defect that must be reported to development for resolution. Each reported defect should be given a unique identifier and tracked as to the test case that revealed it, the date it was logged as a defect, the developer it was assigned to, and when it was actually fixed.


Test Metrics
Metrics are simply measurements. Test metrics are those measurements from your test process that will help you determine where the application stands and when it will be ready for release. In an ideal world, you would measure your tests at every phase of the development cycle, thus gaining an objective and accurate view of how thorough your tests are and how closely the application complies with its requirements. In the real world, you may not have the luxury of the time, tools or tests to give you totally thorough metrics. For example, documented test requirements may not exist, or the set of test cases necessary to achieve complete coverage may not be known in advance. In these cases, you must use what you have as effectively as possible.

Measure progress

The most important point to make about test metrics is that they are essential to measuring progress. Testing is a never-ending task, and if you don’t have some means of establishing forward progress it is easy to get discouraged. Usually, testers don’t have any indication of success, only of failure: they don’t hear about the errors they catch, only the ones that make it into production. So, use metrics as a motivator. Even if you can’t test everything, you can take comfort from the fact that you test more now than before!

Code coverage

Code coverage is a measurement of what percentage of the underlying application source code was executed during the test cycle. Notice that it does not tell you how much of the code passed the test – only how much was executed during the test. Thus, 100% code coverage does not tell you whether your application is 100% ready.


A source level tool is required to provide this metric, and often it requires that the code itself be instrumented, or modified, in order to capture the measurement. Because of this, programmers are usually the only ones equipped to capture this metric, and then only during their unit test phase. Although helpful, code coverage is not an unerring indicator of test coverage. Just because the majority of code was executed during the test, it doesn't mean that errors are unlikely. It only takes a single line – or character – of code to cause a problem. Also, code coverage only measures the code that exists: it can't measure the code that is missing. When it is available, however, code coverage can be used to help you gauge how thorough your test cases are. If your coverage is low, analyze the areas which are not exercised to determine what types of tests need to be added.
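The calculation itself is simple once an instrumented tool has captured which lines ran. A toy sketch, with the line sets invented for illustration:

```python
# Toy illustration of the code-coverage calculation: the percentage of
# executable lines touched during a test run. Real tools derive these
# sets by instrumenting the source; the line numbers here are made up.

executable_lines = {1, 2, 3, 5, 8, 9, 12, 13, 15, 20}  # all lines that could run
executed_lines   = {1, 2, 3, 5, 8, 12, 15}             # lines the tests actually hit

coverage = 100 * len(executed_lines & executable_lines) / len(executable_lines)
print(f"{coverage:.0f}% code coverage")  # 70% code coverage

# Note what this does NOT say: nothing about whether those lines passed,
# and nothing about code that should exist but is missing.
missed = sorted(executable_lines - executed_lines)
print("not exercised:", missed)  # not exercised: [9, 13, 20]
```

The list of unexercised lines is the useful output: it tells you where to look for the tests that need to be added.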

Requirements coverage

Requirements coverage measures the percentage of the requirements that were tested. Again, like code coverage, this does not mean the requirements were met, only that they were tested. For this metric to be truly meaningful, you must keep track of the difference between simple coverage and successful coverage. There are two prerequisites to this metric: one, that the requirements are known and documented, and two, that the tests are cross-referenced to the requirements. In many cases, the application requirements are not documented sufficiently for this metric to be taken or be meaningful. If they are documented, though, this measurement can tell you how much of the expected functionality has been tested.


However, if you have taken care to associate requirements with your test cases, you may be able to measure the percentage of the requirements that were met – that is, the number that passed the test. Ultimately, this is a more meaningful measurement, since it tells you how close the application is to meeting its intended purpose.

Priority Requirements

Because requirements can vary from critical to important to desirable, simple percentage coverage may not tell you enough. It is better to rate requirements by priority, or risk, then measure coverage at each level. For example, priority level 1 requirements might be those that must be met for the system to be operational, priority 2 those that must be met for the system to be acceptable, level 3 those that are necessary but not critical, level 4 those that are desirable, and level 5 those that are cosmetic. In this scheme, 100% successful coverage of level 1 and 2 requirements would be more important than 90% coverage of all requirements; even missing a single level 1 could render the system unusable. If you are strapped for time and resources (and who isn't), it is well worth the extra time to rate your requirements so you can gauge your progress and the application's readiness in terms of the successful coverage of priority requirements, instead of investing precious resources in low priority testing.
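Successful coverage per priority level is easy to compute once each requirement carries a level and a tested/passed status. A minimal sketch, with requirement IDs and results invented:

```python
# Sketch of priority-weighted requirements coverage under a five-level
# scheme. Requirement IDs and results below are invented examples.

requirements = {
    # id: (priority level, tested?, passed?)
    "R1": (1, True,  True),
    "R2": (1, True,  True),
    "R3": (2, True,  False),
    "R4": (3, True,  True),
    "R5": (4, False, False),
}

def successful_coverage(level):
    # Percentage of requirements at this level that were tested AND passed.
    at_level = [(t, p) for (lvl, t, p) in requirements.values() if lvl == level]
    if not at_level:
        return None
    return 100 * sum(1 for t, p in at_level if t and p) / len(at_level)

print(successful_coverage(1))  # 100.0 -> level 1 fully met
print(successful_coverage(2))  # 0.0   -> not yet acceptable
```

Notice how the per-level view surfaces what a single overall percentage would hide: four of five requirements are tested, yet the system is still not acceptable because a level 2 requirement failed.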

Exit criteria

Successful requirements coverage is a useful exit criterion for the test process. The criterion for releasing the application into production, for example, could be successful coverage of all level 1 through 3 priority requirements. By measuring the percentage of requirements tested versus the number of discovered errors, you can extrapolate the number of remaining errors from the number of remaining requirements. But as with all metrics, don't use them to kid yourself: if you have only defined one requirement, 100% coverage is not meaningful!

Test case coverage

Test case coverage measures how many test cases have been executed. Again, be sure to differentiate between how many passed and how many were simply executed. In order to capture this metric, you must have an accurate count of how many test cases have been defined, and you must log each test case that is executed and whether it passed or failed.

Predicting time to release

Test case coverage is useful for tracking progress during a test cycle. By telling you how many of the test cases have been executed in a given amount of time, you can more accurately estimate how much time is needed to test the remainder. Further, by comparing the rate at which errors have been uncovered, you can also make a more educated guess about how many remain to be found. As a simple example, if you have executed 50% of your test cases in one week, you might predict that you will need another week to finish the cycle. If you have found ten errors so far, you could also estimate that there are that many again waiting to be found. By figuring in the rate at which errors are being corrected (more on this below), you could also extrapolate how long it will take to turn around fixes and complete another test cycle.
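The simple extrapolation described above can be written out directly. The figures are the text's own example, and the assumption (stated in the text) is that progress and defect discovery continue at the observed rate:

```python
# Simple extrapolation from the example in the text: assumes the pace of
# execution and the rate of defect discovery both stay constant.

executed, total = 50, 100          # 50% of test cases done...
weeks_elapsed = 1                  # ...in one week
errors_found = 10

remaining_weeks = (total - executed) / (executed / weeks_elapsed)
errors_remaining = errors_found * (total - executed) / executed

print(remaining_weeks)    # 1.0 more week to finish the cycle
print(errors_remaining)   # 10.0 errors still waiting to be found
```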

Defect ratio

The defect ratio measures how many errors are found as a percentage of tests executed. Since an error in the test may not necessarily be the result of a defect in the application, this measurement may not be derived directly from your error log; instead, it should be taken only after an error is confirmed
to be a defect. If you are finding one defect out of every ten tests, your defect ratio is 10%. Although it does not necessarily indicate the severity of the errors, this metric can help you predict how many errors are left to find based on the number of tests remaining to be executed.
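In code, with invented counts, the ratio and its use as a predictor look like this. Note the triage step: only confirmed defects count, not every logged error.

```python
# Defect ratio: confirmed defects as a percentage of tests executed.
# Counts below are invented. Only confirmed defects count -- a failed
# test may turn out to be an error in the test itself.

tests_executed = 200
errors_logged = 30
confirmed_defects = 20                 # what survives triage of the error log

defect_ratio = confirmed_defects / tests_executed
print(f"{defect_ratio:.0%}")           # 10%

# Predict defects left to find in the remaining tests.
tests_remaining = 100
print(tests_remaining * defect_ratio)  # 10.0
```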

Fix rate

Instead of a percentage, the fix rate measures how long it takes for a reported defect to be fixed. But before you know if a defect is fixed, it must be incorporated into a new build and tested to confirm that the defect is in fact corrected. For this metric to be meaningful, you have to take into account any delays that are built into the process. For example, it may only take two hours to correct an error, but if a new build is created only weekly and the test cycle performed only once every two weeks, it may appear as though it takes three weeks to fix a defect. Therefore, measure the fix rate from the time the defect is reported until the corresponding fix is introduced into the source library.

Recurrence ratio

If a code change that is purported to fix a defect does not, or introduces yet another defect, you have a recurrence. The recurrence ratio is the percentage of fixes that fail to correct the defect. This is important because although your developers may be able to demonstrate a very fast turnaround on fixes, if the recurrence ratio is high you are spinning your wheels. This ratio is extremely useful for measuring the quality of your unit and integration test practices. A high recurrence ratio means your developers are not thoroughly testing their work. This inefficiency may be avoided to some degree by providing the programmer with the test case that revealed the defect, so that he or she can verify that the code change in fact fixes the problem before resubmitting it for another round of testing. So temper your fix rate with the recurrence ratio. It is better to have a slower fix rate than a high recurrence ratio: defects that recur cost everyone time and effort.

Post-release defects

A post-release defect is a defect found after the application has been released. It is the most serious type of defect, since it not only reflects a weakness in the test process, it also may have caused mayhem in production. For this reason, it is important to know not just how many of these there are, but what their severity is and how they could have been prevented. As discussed earlier, requirements should be prioritized to determine their criticality. Post-release defects should likewise be rated. A priority 1 defect – one which renders the system unusable – should naturally get more attention than a cosmetic defect. Thus, a simple numerical count is not as meaningful.

Defect prevention

Once a defect is identified and rated, the next question should be when and how it could have been prevented. Note that this question is not about assessing blame, it is about continuous process improvement. If you don't learn from your mistakes, you are bound to repeat them. Determining when a defect could have been prevented refers to what phase of the development cycle it should have been identified in. For example, a crippling performance problem caused by inadequate hardware resources should probably have been revealed during the planning phase; a missing feature or function should have been raised during the requirements or design phases. In some cases, the defect may arise from a known requirement but schedule pressures during the test phase may have prevented the appropriate test cases from being developed and executed.

Continuous improvement

Whatever the phase, learn from the problem and institute measures to improve it. For example, when pressure arises during a later cycle to release the product without a thorough test phase, the known impact of doing so in a previous cycle can be weighed against the cost of delay. A known risk is easier to evaluate than an unknown one.


As to how a defect could be prevented, there are a wide range of possibilities. Although the most obvious means of preventing it from being released into production is to test for it, that is really not what this is about. Preventing a defect means keeping it from coming into existence, not finding it afterwards. It is far more expensive to find a defect than to
prevent one. Defect prevention is about the entire development cycle: how can you better develop high quality applications in the future? By keeping track of post-release defects as well as their root causes, you can not only measure the efficacy of your development and test processes, but also improve them.
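The fix rate and recurrence ratio described earlier in this chapter can be computed directly from the defect records. A sketch with invented dates and outcomes; note that fix rate is measured from report to source-library check-in, as recommended above:

```python
# Sketch of the fix-rate and recurrence-ratio calculations. Dates and
# outcomes are invented. Fix rate runs from report to check-in, not to
# the next test cycle, so built-in process delays don't distort it.
from datetime import date

fixes = [
    # (reported, fix checked into source library, did the defect recur?)
    (date(2024, 5, 1), date(2024, 5, 3), False),
    (date(2024, 5, 2), date(2024, 5, 4), True),   # "fix" failed to fix it
    (date(2024, 5, 6), date(2024, 5, 7), False),
    (date(2024, 5, 6), date(2024, 5, 10), False),
]

fix_days = [(done - reported).days for reported, done, _ in fixes]
fix_rate = sum(fix_days) / len(fix_days)
recurrence_ratio = sum(1 for *_, recurred in fixes if recurred) / len(fixes)

print(fix_rate)           # 2.25 days on average
print(recurrence_ratio)   # 0.25
```

A 25% recurrence ratio would temper an otherwise impressive two-day fix rate: one fix in four is costing a full extra round of reporting, fixing and retesting.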


Management Reporting
Although there are many sophisticated metrics for measuring the test process, management is usually interested in something very simple: when will the application be ready? If you can't answer this question, you run the risk that the application will be released arbitrarily, based on schedules, instead of based on readiness. Few organizations can make open-ended commitments about release dates. Once management has invested time and money in test automation, they will also want to know what their return was. This return could take three forms: savings in money, time, and/or improved quality. By assuring that you have these measurements at your fingertips, you can increase the odds of keeping management committed to the test automation effort.

Estimated time to release

Although you can never precisely predict when or even if an application will be defect-free, you can make an educated guess based on what you do know. The best predictor of readiness for release is the requirements coverage as affected by the defect ratio, fix rate and recurrence ratio. For example, if after four weeks you are 80% through with 100 test cases, with a 20% defect ratio, a two day fix rate and a 5% recurrence ratio, you can estimate time to release as:

  4 weeks = 80% of test cases, so 1 week = the remaining 20%
  20% defect ratio = 16 defects found so far
  5% recurrence ratio = 1 fix expected to fail
  2 day fix rate x 17 defects = 34 days of fixes
  1 week + (34 days / 5 days per week) + 5 weeks to test fixes = 13 weeks to release
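This arithmetic is easy to parameterize so the estimate can be refreshed as the metrics change. The function below is an illustrative sketch, not part of the handbook; its inputs are the chapter's example figures:

```python
# Rough release-readiness estimate from the metrics above. The function
# and parameter names are an illustrative sketch, not a standard formula.
import math

def estimate_weeks_to_release(total_cases, executed, weeks_elapsed,
                              defect_ratio, fix_days, recurrence_ratio,
                              retest_weeks, workdays_per_week=5):
    pace = executed / weeks_elapsed                    # test cases per week
    remaining_weeks = (total_cases - executed) / pace  # time to finish the cycle
    defects = executed * defect_ratio                  # defects found so far
    recurrences = defects * recurrence_ratio           # fixes expected to fail
    fix_weeks = (defects + recurrences) * fix_days / workdays_per_week
    return remaining_weeks + fix_weeks + retest_weeks

weeks = estimate_weeks_to_release(total_cases=100, executed=80, weeks_elapsed=4,
                                  defect_ratio=0.20, fix_days=2,
                                  recurrence_ratio=0.05, retest_weeks=5)
print(math.ceil(weeks))  # 13, matching the worked example
```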


Saving money

There are two kinds of savings from automated testing. The first is the productivity that comes from automating the repetition of manual tests. Even though you may not actually cut staff, you can get more done in less time. To measure this savings – the amount you would have spent to get the same level of test coverage – measure the time it takes to manually execute an average test, then automate that test and measure the time to execute it. Divide the automated time into the manual test time. If it takes two hours to perform the test manually but it will play back in thirty minutes, you get a productivity factor of 4. Next, execute a complete automated test cycle and measure the total elapsed time, then multiply that by the productivity factor. In this example, a twelve hour automated test cycle saves 48 hours of manual test time. So, if you have four releases per year and three test iterations per release, you are saving (4 times 3 times 48 hours) 576 hours per year. Multiply that by your cost per man hour; if it's $50, then you are saving $28,800 per year.
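The savings arithmetic, made explicit with the chapter's illustrative figures:

```python
# The automation-savings arithmetic above, step by step. All figures are
# the chapter's illustrative numbers, not real measurements.

manual_hours_per_test = 2.0        # time to run the test by hand
automated_hours_per_test = 0.5     # time for the same test to play back
productivity_factor = manual_hours_per_test / automated_hours_per_test  # 4.0

automated_cycle_hours = 12
manual_equivalent_hours = automated_cycle_hours * productivity_factor   # 48 hours

releases_per_year = 4
iterations_per_release = 3
hours_saved_per_year = (releases_per_year * iterations_per_release
                        * manual_equivalent_hours)                      # 576 hours

cost_per_hour = 50
dollars_saved = hours_saved_per_year * cost_per_hour
print(f"${dollars_saved:,.0f} saved per year")  # $28,800 saved per year
```

Substituting your own test times, iteration counts and labor rate gives a figure you can put in front of management.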

Saving time

Getting the application into the market or back into production faster also saves the company time. In our above example, you are shaving 3.6 weeks off the release time (3 iterations times 48 hours/40 hours per week). This is almost a month of time savings for each release. If the reason for the release is to correct errors, that extra time could translate into significant productivity.


Higher quality

It is hard to measure the impact of higher quality: you can’t really measure the amount of money you aren’t spending. If you do a thorough job of testing and prevent defects from entering into production, you have saved money by not incurring downtime or overhead from the error. Unfortunately, few companies know the cost to fix an error. The best way to tell if you are making progress is when the post-release defect rate declines.

Better coverage

Even if you can’t tell exactly what it is saving the company, just measure the increasing number of test cases that are executed for each release. If you assume that more tests mean fewer errors in production, this expanded coverage has value.

Historical trends
In all of these metrics, it is very useful to keep historical records so that you can measure trends. This may be as simple as keeping the numbers in a spreadsheet and plotting them graphically. Remember to also keep the numbers that went into the metric: not just test case coverage, for example, but the total number of test cases defined as well as executed that went into the calculation.

The reason historical trends are important is that they highlight progress – or, perish the thought, regression. For example, the number of requirements and test cases which have been defined for an application should be growing steadily. This indicates that enhancements, as well as problems found in production, are being added as new requirements and test cases, assuring that your test library is keeping pace with the application. A declining recurrence ratio might indicate that programming practices or unit testing has improved.


Another reason to analyze historical trends is that you can analyze the impact of changes in the process. For example, instituting design reviews or code walkthroughs might not show immediate results, but later might be reflected as a reduced defect ratio.
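One minimal way to keep the raw numbers per release and check the trends the text describes. The records and field names below are invented for illustration:

```python
# Keep the raw numbers behind each metric per release, so trends can be
# recomputed later. Releases, counts, and field names are invented.

history = [
    {"release": "1.0", "defined": 120, "executed": 100, "fixes": 20, "recurred": 3},
    {"release": "1.1", "defined": 150, "executed": 140, "fixes": 18, "recurred": 2},
    {"release": "1.2", "defined": 185, "executed": 180, "fixes": 15, "recurred": 1},
]

# The test library should grow steadily as enhancements and production
# problems are folded back in as new requirements and test cases.
growing = all(a["defined"] < b["defined"] for a, b in zip(history, history[1:]))

# A declining recurrence ratio suggests unit testing is improving.
ratios = [r["recurred"] / r["fixes"] for r in history]
declining = all(x > y for x, y in zip(ratios, ratios[1:]))
print(growing, declining)  # True True
```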



History of the Automated Teller Machine

An automatic teller machine, or ATM, allows a bank customer to conduct banking transactions from almost any other ATM in the world. As is often the case with inventions, many inventors contributed to the history of the ATM. In 1939, Luther Simjian patented an early and not-so-successful prototype of an ATM. However, some experts hold the opinion that James Goodfellow of Scotland holds the earliest patent, dated 1966, for a modern ATM, and John D White (also of Docutel) in the US is often credited with inventing the first free-standing ATM design. In 1967, John Shepherd-Barron invented and installed an ATM in a Barclays Bank in London. Don Wetzel invented an American-made ATM in 1968. However, it wasn't until the mid to late 1980s that ATMs became part of mainstream banking.

Luther Simjian's ATM

Luther Simjian came up with the idea of creating a "hole-in-the-wall machine" that would allow customers to make financial transactions. In 1939, he applied for 20 patents related to his ATM invention and field-tested his ATM in what is now Citicorp. After six months, the bank reported that there was little demand for the new invention and discontinued its use.

Luther Simjian Biography (1905 – 1997)

Luther Simjian was born in Turkey on January 28, 1905. While he studied medicine at school, he had a life-long passion for photography. In 1934, the inventor moved to New York. Luther Simjian is best known for his invention of the Bankmatic automatic teller machine, or ATM; however, his first big commercial invention was a self-posing and self-focusing portrait camera. The subject was able to look into a mirror and see what the camera was seeing before the picture was taken. Simjian also invented a flight speed indicator for airplanes, an automatic postage metering machine, a colored x-ray machine, and a teleprompter. Combining his knowledge of medicine and photography, he invented a way to project images from microscopes, and methods of photographing specimens under water. Simjian started his own company, called Reflectone, to further develop his inventions.