The theory which will underpin this study is General Systems Theory (GST). General Systems Theory, developed by Bertalanffy (1934) as cited in Tamas (1987), provides an analytical framework which can be used to explain the effect of planning on performance. According to Bertalanffy (1968), a system is an assemblage of things connected or interrelated so as to form a complex unity: a whole composed of parts and sub-parts in orderly arrangement according to some scheme or plan. A system has several defining features. It is basically a combination of parts, sub-parts and sub-systems, and each part may have various sub-parts. A system has mutually dependent parts, each of which may include many sub-systems. The parts and sub-parts of a system are mutually related to each other, some more, some less; some directly, some indirectly. The relationship is in the context of the whole, and any change in one part may affect other parts. A system is thus an interdependent framework in which various parts are arranged (Tamas, 1987).
A system transforms inputs into outputs, and this transformation is essential for the survival of the system. Three aspects are involved in this transformation process: inputs, a mediator, and outputs. Inputs are taken from the environment, transformed into outputs and given back to the environment. The various inputs may be in the form of information, money, materials, human resources, etc., while outputs may be in the form of goods and services. The total relationship may be called the input-output process, and the system works as a mediator in the process (Bertalanffy, 1968). Systems theory has been applied in a number of fields, such as community development. In this study, factors such as technical and technological capacity and the quality of M&E systems will be the inputs, and the utilization of M&E data by the public health units will be the output.
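To make the input-output view above concrete, the following minimal Python sketch models a system as a mediator that turns environmental inputs into outputs. It is purely illustrative: the input categories and the transformation rule are hypothetical assumptions, not drawn from Bertalanffy or the M&E literature cited here.

# A minimal sketch of the systems view: the system acts as a mediator,
# taking inputs from the environment and returning outputs to it.
# The input categories and the transformation rule are hypothetical.
def transform(inputs: dict) -> dict:
    # Resources (money, people) become goods and services; information
    # becomes reports. This rule is invented purely for illustration.
    return {
        "goods_and_services": inputs["money"] * inputs["human_resources"],
        "reports": inputs["information"],
    }

environment_inputs = {"information": 12, "money": 4.0, "human_resources": 3}
outputs = transform(environment_inputs)  # given back to the environment
print(outputs)  # {'goods_and_services': 12.0, 'reports': 12}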
2.3 Conceptual Review
This section presents literature from various scholars, reviewed in line with the study objectives and conceptual framework.
2.3.1 Technical capacity
Adequate personnel
Building an adequate supply of human resource capacity is critical for the sustainability of the M&E system and is generally an ongoing issue. Furthermore, it needs to be recognized that "growing" evaluators requires far more technically oriented M&E training and development than can usually be obtained from one or two workshops. Both formal training and on-the-job experience are important in developing evaluators, with options for training and development including the public sector, the private sector, universities, professional associations, job assignments, and mentoring programs (Acevedo et al., 2010:12).
Human capital, with proper training and experience, is vital for the production of M&E results. There is a need for effective M&E human resource capacity in terms of quantity and quality; hence, M&E human resource management is required in order to maintain and retain a stable M&E staff (World Bank, 2011). The availability of competent employees is also a major constraint in selecting M&E systems (Koffi-Tessio, 2002). Because M&E is a new professional field, it faces challenges in the effective delivery of results. There is therefore a great demand for skilled professionals, capacity building of M&E systems, and harmonization of training courses, as well as technical advice (Gorgens and Kusek, 2009).
The UNDP (2009) handbook on planning, monitoring and evaluation for development results emphasizes that human resources are vital for effective monitoring and evaluation, stating that staff should possess the required technical expertise in the area in order to ensure high-quality monitoring and evaluation. Implementing effective M&E demands that staff undergo training and possess skills in research and project management; hence, capacity building is critical (Nabris, 2002). In turn, numerous training manuals, handbooks and toolkits have been developed for NGO staff working on projects, in order to provide them with practical tools that enhance results-based management by strengthening awareness of M&E (Hunter, 2009). They also give many practical examples and exercises, which are useful since they provide staff with ways of becoming efficient and effective and of having an impact on their projects (Shapiro, 2011).
Qualified personnel
The M&E system cannot function without skilled people who effectively execute the M&E tasks for which they are responsible. Therefore, understanding the skills needed and the capacity of the people involved in the M&E system (undertaking human capacity assessments) and addressing capacity gaps (through structured capacity development programs) is at the heart of the M&E system (Gorgens & Kusek, 2010:43). In its framework for a functional M&E system, UNAIDS (2008) notes that not only is it necessary to have a dedicated and adequate number of M&E staff, it is also essential for these staff to have the right skills for the work. Moreover, M&E human capacity building requires a wide range of activities, including formal training, in-service training, mentorship, coaching and internships. Lastly, M&E capacity building should focus not only on the technical aspects of M&E, but also address skills in leadership, financial management, facilitation, supervision, advocacy and communication.
Experienced personnel
Monitoring and evaluation carried out by untrained and inexperienced people is bound to be time-consuming and costly, and the results generated could be impractical and irrelevant, which will ultimately undermine the success of projects (Nabris, 2002:17). In an assessment of CSOs in the Pacific, UNDP (2011:12) identifies inadequate monitoring and evaluation systems as one of the challenges of organizational development. Additionally, the lack of capability, and of opportunities to train staff in technical skills in this area, is clearly a factor to be considered. During the consultation processes, there was consensus among CSOs that their lack of monitoring and evaluation mechanisms and skills was a major systemic gap across the region. Furthermore, while there is no need for CSOs to possess extraordinarily complex monitoring and evaluation systems, there is certainly a need for them to possess a rudimentary knowledge of, and ability to utilize, reporting, monitoring and evaluation systems.
A study by White (2013:43) on monitoring and evaluation best practices in development INGOs indicates that INGOs encounter a number of challenges when implementing or managing M&E activities, one being insufficient M&E capacity: M&E staff usually advise more than one project at a time and hold a regional or sectoral assignment with a vast portfolio. Furthermore, taking on the M&E work of too many individual projects overextends limited M&E capacity and leads to rapid burnout of M&E staff; high burnout and turnover rates make recruitment of skilled M&E staff difficult and limit the organizational expertise available to support M&E development. Mibey's (2011:19) study on factors affecting the implementation of monitoring and evaluation programs in the Kazi Kwa Kijana project recommends that capacity building be added as a major component of the project across Kenya, which calls for enhanced investment in training and human resource development in the crucial technical area of monitoring and evaluation.
2.3.2 Financial capacity
Availability of funds
Financial capacity refers to the extent to which funds are available to finance and facilitate the monitoring and evaluation function in an organization (USAID, 2015). In this study, therefore, financial capacity will be measured in terms of whether financial resources are adequate, whether funds are released in a timely manner, and whether funds are accounted for.
There is empirical evidence to suggest that lack of adequate financial capacity constrains M&E systems. In Ghana, a study by CLEAR (2012) found that after several years of implementing the national M&E system, significant progress had been made. However, challenges remained, including severe financial constraints; institutional, operational and technical capacity constraints; and fragmented and uncoordinated information, particularly at the sector level. To address these challenges, the CLEAR report argues that the current institutional arrangements will have to be reinforced with adequate capacity to support and sustain effective monitoring and evaluation, and that existing M&E mechanisms must be strengthened, harmonized and effectively coordinated.
Timely funds
A study by Koffi-Tessio (2002) on the efficacy and efficiency of Monitoring-Evaluation Systems (MES) for projects financed by the Bank Group, conducted in Burkina Faso, Mauritania, Kenya, Rwanda and Mozambique through desk reviews and interviews covering projects approved between 1987 and 2000, found that monitoring-evaluation systems were not meeting their obligatory requirements because of financial constraints.
Accountability
A study conducted by Gamba (2016) to determine the factors affecting the utilization of evaluation findings in malaria control projects in Uganda found that management support was low and the financial resources allocated for M&E were insufficient. This greatly affected effective outcome and impact monitoring and evaluation of the projects.
2.3.3 Quality of Evaluation Findings
Timeliness
Quality of M&E findings refers to the extent to which the M&E systems and activities meet the specified requirements and standards (Mulandi, 2013:12). For this study, the quality of M&E findings will be assessed in terms of the extent to which the findings are timely, methodologically correct, relevant and well written.
The quality of evaluations is important to the credibility of reported results; hence, it is important to incorporate data from a variety of sources to validate findings. Furthermore, while primary data are collected directly by the M&E system for M&E purposes, secondary data are those collected by other organizations for purposes different from M&E (Gebremedhin, Getachew & Amha, 2010:24). In the design of an M&E system, the objective is to collect indicator data from various sources, including the target population, for monitoring project progress (Barton, 1997). The methods of data collection for an M&E system include discussions/conversations with concerned individuals, community/group interviews, field visits, reviews of records, key informant interviews, participant observation, focus group interviews, direct observation, questionnaires, one-time surveys, panel surveys, censuses, and field experiments. Moreover, developing key indicators to monitor outcomes enables managers to assess the degree to which intended or promised outcomes are being achieved (Kusek & Rist, 2004).
Frequent data collection means more data points, and more data points enable managers to track trends and understand intervention dynamics; hence, the more often measurements are taken, the less guesswork there will be regarding what happened between specific measurement intervals. Conversely, the more time that passes between measurements, the greater the chance that events and changes in the system will be missed (Gebremedhin et al., 2010). Mulandi (2013:75) concurs that, to be useful, information needs to be collected at optimal moments and with a certain frequency. Moreover, unless negotiated indicators are genuinely understood by all involved and everyone's timetable is consulted, optimal moments for collection and analysis will be difficult to identify.
Methodologically correct
According to Cornielje, Velema and Finkenflugel (2008:15), only when the monitoring system is owned by its users is it likely to generate valid and reliable information. However, all too often the very same users may be overwhelmed by the amount of daily work, which they see as more important than collecting data, and the system may subsequently become corrupted. They conclude that it is extremely important for front-line workers to be both involved in monitoring and evaluation and informed about the status of the services and activities they largely provide in interaction with other stakeholders and beneficiaries.
Evidence suggests that the quality of evaluations has an effect on the utilization of evaluation findings. According to an IFAD (2008:26) annual report on results and impact, recurrent criticisms of M&E systems include limited scope, complexity, low data quality, inadequate resources, weak institutional capacity, lack of baseline surveys and lack of use. Moreover, the most frequent criticism of M&E systems in IFAD projects relates to the type of information included in the system. Most IFAD projects collect and process information on project activities; however, the average IFAD project did not provide information on results achieved at the purpose or impact level. The M&E system of the Tafilalet and Dades Rural Development project in Morocco, for example, focused only on financial operations and could not be used for impact assessment. In the Pakistan IFAD Country Program Evaluation, cases were reported of contradictory logical frameworks combined with arbitrary and irrelevant indicators, while in Belize two different logical frameworks were generated, which increased confusion and complexity. The Ethiopia IFAD Country Program Evaluation found that project appraisal documents made limited provision for systematic baseline and subsequent beneficiary surveys. For example, in one project in Ethiopia the baseline survey was carried out two to three years after project start-up.
Relevant
In a study report of an Australian NGO conducted by Spooner and Dermott (2008:45), staff reported that, as WAYS evolved over time, they were unsure about what worked in the current system of monitoring and evaluation. Additionally, resources had not been dedicated to data analysis, and the data were rarely analyzed. A further problem with data analysis was that the responsibility for doing it lay with program managers, who had little time to analyze data that was not required by funding bodies. Some of the staff stated that they were required to collect and analyze information, but that their analysis was hampered because they had minimal research skills. Finally, some staff reported that there was no feedback loop built into the current system: while staff reported on their activities to management, they did not know what happened to the information once it was reported.
A problem in African countries, and perhaps in some other regions, is that while sector ministries collect a range of performance information, the quality of the data is often poor. This is partly because the burden of data collection falls on over-worked officials at the facility level, who are tasked with providing the data for other officials in district offices and the capital, but who rarely receive any feedback on how the data are actually being used, if at all. This leads to another problem: data are poor partly because they are not being used, and they are not used partly because their quality is poor. In such countries, therefore, there is too much data but not enough information (Mackay, 2006:7).
Well written
Obure (2008:5), in a study of RBM in Northern Ghana, indicates a problem associated with post-collection data management. As confessed by many field officers, the storage, processing and interpretation of data were ineffectively handled. Results from the study strongly point to a weakness in the system arising from the inability of stakeholders to handle and process data in a meaningful way. He concludes that this challenge could lead to the mere collection of large volumes of data which eventually might not be used in a helpful way. Data must be collected and analyzed regularly on the objectives and intermediate results. Furthermore, the PME&R system allows for three levels of information, by project, activity and organization, where the data for all organizations involved in a specific activity can be averaged up to the activity level, and the data for all activities can be averaged up to the project level (Booth, Ebrahim & Morin, 2008:31).
A study by Gamba (2016:65) to establish the factors affecting the utilization of M&E results in malaria control projects in Uganda found that evaluation quality and communication of the M&E results had a significant positive effect on utilization, and that the timeliness of M&E activities had a moderately significant positive effect on the utilization of M&E findings in the implementation of MCP activities across the organizations.
2.3.4 Utilization of M&E findings
Utilization of monitoring and evaluation findings refers to putting monitoring and evaluation results to use. The use of monitoring and evaluation findings for decision-making and project control ensures that there is a baseline against which to undertake new measurements (Mulandi, 2013).
Number of Decisions made
Gebremedhin, Getachew and Amha (2010) establish that the source of performance data is important to the credibility of reported results and thus to their utilization in the implementation of future programmes. The authors thus note that it is important to incorporate data from a variety of sources if the results are to be validated. The processes of planning, monitoring and evaluation make up the Results-Based Management (RBM) approach, which is intended to aid decision-making towards explicit goals. Planning helps to focus on results that matter, while M&E facilitates learning from past successes and from challenges encountered during implementation. The elements of an M&E system, which if developed together with all key stakeholders will encourage participation and increased ownership of a project/plan, are: (a) results frameworks or logframes ("RFs"), which are tools to organize intended results, i.e. measurable development changes; RFs inform the development of the M&E plan and the two must be consistent with each other; (b) the M&E plan, which describes the functions required to gather the relevant data on the set indicators and the methods and tools to do so; the M&E plan is used to systematically organize the collection of specific data, indicating the roles and responsibilities of project/plan stakeholders, and ensures that relevant progress and performance information is collected, processed and analyzed on a regular basis to allow for real-time, evidence-based decision-making; (c) the various processes and methods for monitoring (such as regular input and output data gathering and review, participatory monitoring and process monitoring) and for evaluation (including impact evaluation, thematic evaluation, surveys and economic analysis of efficiency); and (d) the Management Information System, an organized repository of data that assists in managing key numeric information related to the project/plan and its analysis.
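As a concrete illustration of how an M&E plan might organize the collection of indicator data, consider the short Python sketch below. The record fields and the example malaria-programme indicator are hypothetical assumptions made for illustration only; they are not taken from the sources cited above.

from dataclasses import dataclass

@dataclass
class Indicator:
    """One entry in a hypothetical M&E plan: what to measure,
    how, how often, and who is responsible."""
    name: str
    data_source: str        # e.g. facility records, household survey
    collection_method: str  # e.g. review of records, field visit
    frequency: str          # optimal moment / interval for collection
    responsible: str        # role accountable for gathering the data
    baseline: float         # value before the intervention
    target: float           # intended result to be achieved

# A hypothetical malaria-programme indicator, for illustration only.
bednet_coverage = Indicator(
    name="Households with at least one insecticide-treated net (%)",
    data_source="household survey",
    collection_method="one-time survey",
    frequency="annual",
    responsible="district M&E officer",
    baseline=42.0,
    target=80.0,
)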
Furthermore, while primary data are collected directly by the M&E system for M&E purposes, secondary data are those collected by other organizations for purposes different from M&E. A study by Booth, Ebrahim and Morin (2008) reports that the monitoring and evaluation system allows for three levels of information, by project, activity and organization: the data for all organizations involved in a specific activity can be averaged up to the activity level, and the data for all activities can be averaged up to the project level, easing utilization.
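The following minimal Python sketch illustrates this three-level roll-up, assuming (hypothetically) that each organization reports one numeric score per activity; the figures are invented purely for illustration.

from statistics import mean

# Hypothetical scores reported by organizations, grouped by activity.
project = {
    "activity_a": {"org_1": 70.0, "org_2": 80.0},
    "activity_b": {"org_1": 60.0, "org_3": 90.0},
}

# Organization-level data averaged up to the activity level.
activity_scores = {act: mean(orgs.values()) for act, orgs in project.items()}

# Activity-level data averaged up to the project level.
project_score = mean(activity_scores.values())

print(activity_scores)  # {'activity_a': 75.0, 'activity_b': 75.0}
print(project_score)    # 75.0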
Learning
M&E systems will only add value to project implementation through interpretation and analysis, by drawing on information from other sources and adapting it for use by project decision makers and a range of key partners. Knowledge generated by M&E efforts should never stop at the basic capture of information or rely exclusively on quantitative indicators, but should also address the "why" questions. Here, more qualitative and participatory approaches become particularly important for analyzing the relationship between project activities and results. Evaluation therefore serves to establish attribution and causality, and forms a basis for accountability and learning by staff, management and clients. The challenge early in project implementation is to design effective learning systems that can underpin management behavior towards results, and to come up with strategies to optimize impact.
In this context, learning is defined as formulating responses to identified constraints and implementing them in real time. Useful at project/plan level are knowledge-sharing and learning instruments that can pick up information and analysis from the M&E systems, such as: summarized studies and publications on lessons learned; case studies documenting successes and failures; publicity material, including newsletters and radio and television programmes; national and regional learning networks; periodic meetings and workshops to share knowledge and lessons learned; research-extension liaison or feedback meetings; national and regional study tours; the preparation and distribution of technical literature on improved practices; and routine supervision missions, mid-term reviews or evaluations, and project completion (end-of-project) reports.
A study by Guijt (1999) also finds that information needs to be collected at optimal moments and with a certain frequency if it is to be of quality. Moreover, unless negotiated indicators are genuinely understood by all involved, and everyone's timetable is consulted, optimal moments for collection and analysis will be difficult to identify. On the other hand, Cornielje, Velema and Finkenflugel (2008) report that it is only when the monitoring system is owned by its users that it can generate quality data that are valid and reliable for utilization in future projects. The authors, however, note that all too often the very same users may be overwhelmed by the amount of daily work, which they see as more important than collecting data, and that the system may subsequently become corrupted and thus unusable in subsequent implementations.
A study by Barton (1997) notes that in designing an M&E system, the objective is to collect indicator data from various sources, including the target population, for monitoring project progress. The methods of data collection for an M&E system include discussions/conversations with concerned individuals, community/group interviews, field visits, reviews of records, key informant interviews, participant observation, focus group interviews, direct observation, questionnaires, one-time surveys, panel surveys, censuses, and field experiments. Kusek and Rist (2004), however, report that developing key indicators to monitor outcomes enables managers to assess the degree to which intended or promised outcomes are being achieved. Frequent data collection means more data points, which enable managers to track trends and understand intervention dynamics, reducing guesswork regarding what happened between specific measurement intervals. In addition, Gebremedhin et al. (2010) report that the more time that passes between measurements, the greater the chance that events and changes in the system will be missed, which can have consequences if the data are utilized in subsequent studies. These studies, however, have contradictory results, a gap this study hopes to clarify in the context of the malaria programmes.
Project/program improvement
Bourckaert, Verhoest and De Corte (2009) observe that indicators for measuring programme performance are difficult to identify unless the M&E results are produced on time. It is therefore important to clearly define an appropriate system of indicators to measure and monitor programme performance over time. In support, a study by Cunnen (2006) finds that a system of over two thousand societal indicators used to measure Results for Canadians across all sectors needs to be timely.
Kusek et al. (2004) overwhelmingly support the assertion that the indicators measured are just as important as the timing of M&E. This means that it is imperative not only to get the measurement correct, but also to ensure that the information is readily available when it is needed. Kusek and colleagues note that the practice of using inappropriate baselines defeats the whole concept of the "data quality triangle", which encompasses data reliability, data validity and data timeliness, and hence usability.
2.7 Summary of empirical literature
2.8 Gaps Identified in the Literature
The review of literature indicates that organizational capacity factors, top management support and the quality of evaluation findings affect the utilization of M&E findings. Studies by CLEAR (2012:31), the Kenyan NGO Coordination Board (2009:17), White (2013:10), Mibey (2011:25) and Nyagah (2015:47) indicate that organizational capacity has a significant effect on the utilization of M&E findings. This means that utilization of M&E findings improves with the financial, technical and technological capacity of an organization. These studies, however, have a contextual gap in that they were conducted in organizations in other countries, which may have more organizational capacity to utilize M&E findings. It remains to be seen whether organizational capacity will have the same effect on the utilization of M&E findings within the health facilities in Ibanda District.
Studies by Turabi et al. and Kasule (2016) indicate that top management support affects the utilization of M&E findings. This means that utilization of M&E findings improves with top management support in an organization. However, it remains to be empirically seen whether top management support has any significant effect on the utilization of M&E findings, because the earlier literature is largely anecdotal. The literature also indicates that the quality of M&E findings affects the utilization of M&E findings (Mackay, 2006:7; Obure, 2008:5; Gamba, 2016:65; Booth et al., 2008:31). However, it likewise remains to be empirically seen whether the quality of M&E findings has any significant effect on their utilization. Generally, whereas the literature points out the effect of the above-mentioned factors on the utilization of M&E findings, most studies are based on anecdotal observations rather than empirical findings. There is therefore a need to conduct a study which will empirically document the factors affecting the utilization of M&E findings.