ASSESSMENT OF THE FUNCTIONALITY OF THE MONITORING AND EVALUATION SYSTEM AND THE PERFORMANCE OF
THE PUBLIC SECTOR PROGRAM. A CASE STUDY
OF MASAKA DISTRICT LOCAL
GOVERNMENT
TABLE OF CONTENTS
1.2 Background to the Study
1.2.1 Historical Background
1.2.2 Theoretical Background
1.2.3 Conceptual Background
1.2.4 Contextual Background
1.3 Statement of the Problem
1.9 Significance of the Study
1.10 Justification of the Study
1.12 Definition of Key Terms and Concepts
2.3.3 Quality of Evaluation Findings
2.3.4 Utilization of M&E Findings
2.7 Summary of Empirical Literature
3.5 Sampling Techniques and Procedure
3.6 Data Collection Methods
3.7 Data Collection Instruments
3.7.1 Self-Administered Questionnaire
3.7.3 Documentary Review Checklist
3.8 Validity and Reliability of the Research Instruments
3.9 Data Collection Procedure
3.10.1 Analysis of Quantitative Data
3.10.2 Analysis of Qualitative Data
List of Tables
Table 1: Sample Size Determination
LIST OF ACRONYMS
AEA – American Evaluation Association
AHSPR – Annual Health Sector Performance Report
B.C – Before Christ
IOCE – International Organization for Cooperation in Evaluation
M&E – Monitoring and Evaluation
MoH – Ministry of Health
NGO – Non-Governmental Organisation
NIMES – National Integrated Monitoring and Evaluation Strategy
NPM – New Public Management
OPM – Office of the Prime Minister
RBM – Results Based Management
US – United States
CHAPTER ONE
INTRODUCTION
1.1 Introduction
The chapter will focus on the background of the study, the statement of the problem, the purpose of the study, the specific objectives, the research questions, the research hypotheses, the conceptual framework, the scope of the study, the significance and justification of the study, and the operational definitions of key terms used in the study.
1.2 Background to the Study
1.2.1 Historical Background
Evaluation has been an integral part of human existence since time immemorial, evolving alongside the progress of human civilization (Basheka, 2016). The roots of evaluation practices extend back to biblical times, as evidenced in the extensive account of creation found in Genesis 1:31. According to this biblical narrative, at the end of the sixth day of creation, God observed the entirety of His creation and found it to be very good. The influence of ancient Greek philosophers such as Socrates, Plato, and Aristotle, as well as mathematical thinkers like Pythagoras and Euclid, significantly shaped various fields, including evaluation (Zanakis, Theofanides, Kontaratos, and Tassios, 2003).
Scholarly records indicate that the Delphic oracle, operating from the ninth to the third centuries BC, served as the ancient world's inaugural central intelligence database. Comprising around 90 priests considered the best-educated experts of their time, this interdisciplinary think tank engaged in collecting and evaluating information, offering counsel to both ordinary individuals and leaders, including figures like Alexander the Great (Zanakis et al., 2003). Notably, in the fourth century BC, substantial project management activities featured evaluation and monitoring as crucial components. Griffin (2005) highlights that the practice of management has deep historical roots, pointing to the construction of the great pyramids in 2900 B.C. as a classic example of effective management and coordination. The Egyptians applied management functions such as planning, organizing, and controlling in the realization of these monumental structures.
In the contemporary world, the international status of M&E research remains theoretically and methodologically influenced by the American tradition. The United States (US) is regarded as the motherland of the field in terms of its trends, the number of authors and their academic and professional influence, degree of professionalization, focus of academic programs, legislation and institutionalization of evaluation, development of models and approaches for evaluation, evaluation capacity building initiatives, evaluation standards and guiding principles, the number and attendance of evaluation conferences and workshops, publications and their impact factor, and guides and evaluation handbooks (Basheka, 2016:4). The American Evaluation Association (AEA), for example, remains the most dominant evaluation society in the world, with membership that grew from just over 3,000 members in 2001 to approximately 7,000 by mid-2015 (Basheka & Byamugisha, 2015:76). Other countries, however, have equally noticeable developments regarding evaluation. In Europe, the professionalization of evaluation has progressed to different levels across countries, with Sweden, the Netherlands, Great Britain, Germany, Denmark, Norway, France and Finland currently topping the list. Recent rankings further point to impressive developments of the field in Switzerland, Japan, Spain, Italy, Israel and Africa. In 2011, the International Organization for Cooperation in Evaluation (IOCE) identified 117 evaluation associations, 96 of which were national organizations located in 78 different countries. By 2013, the number had increased to 145 (IOCE, 2013:2; BaTall, 2009:7).
In Africa, the oldest evaluation association was established in 1997 in Ghana, while the African Evaluation Association was itself established in 1999, with the heyday of intense professional association formation reported between 2000 and 2004 (Basheka & Byamugisha, 2015). Domestic and global forces played a role in this growth. Globally, Mertens and Russon (2000:275) proclaim that the emergence of many new regional and national organizations illustrated the growing worldwide recognition of the importance of evaluation. Before 1995 there existed only five regional and/or national evaluation organizations in the world, but by 2000 there were more than 30, a 500% increase in a five-year period, and much of this growth was occurring in developing countries, particularly in Africa (p. 275). Malefetsane, Lungepi and Tembile (2014:5) observe that evaluation in Africa has been on the increase, a trend predicted to continue, especially with political recognition of the utility of evaluation to good governance. De Kool and Van Buuren (2004:173) concede that the rise of New Public Management (NPM), which was constructed around key philosophies emphasizing outputs and outcomes, transparency and accountability, created a demand for M&E in Africa.
In Uganda, over the past two decades, considerable efforts have been made to establish a strong and robust basis for assessing both private and public spending. To achieve this, M&E was considered a means by which Government and NGOs measure their development interventions. M&E was therefore enshrined in the National Development Plan and institutionalized in governance systems and processes (National Development Plan, 2010/11-2014/15). The Office of the Prime Minister (OPM) was given the constitutional mandate to oversee reforms and service delivery in all Government Ministries, Departments and Agencies, and established an M&E function to support this role (National M&E Policy, 2013).
A National Integrated Monitoring and Evaluation Strategy (NIMES) for Government programmes was developed with the aim of enhancing M&E capacity and ensuring that sound, evidence-based data and information are available to inform decision making (NIMES, 2006). Significant effort went into introducing planning, results-based budgets and monitoring systems, and into developing the institutional capacity to design ministry strategies and plans, to implement M&E arrangements to monitor results, and to provide a basis for performance improvement as provided for in the National Development Plan (Annual Performance Assessment Report, 2013/2014).
1.2.2 Theoretical Background
The study is grounded in the General Systems Theory developed by Bertalanffy (1934), as cited in Tamas (1987). This theoretical framework provides an analytical lens through which to examine the factors influencing the utilization of evaluation data, specifically in the context of assessing the functionality of the monitoring and evaluation system and the performance of public sector programs.
According to Bertalanffy (1968), a system is conceptualized as an assemblage of interconnected elements forming a complex unity. It comprises various parts and sub-parts arranged in an orderly manner according to a scheme or plan. The key features of a system include the combination of parts, sub-parts, and sub-systems, each of which may have its own set of sub-components. Importantly, these parts are mutually dependent, with relationships existing both directly and indirectly. Any alteration in one part can potentially affect other parts, illustrating the interconnected nature of systems. Furthermore, a system is described as an interdependent framework where various parts are organized within a specific context, emphasizing the holistic perspective of systems (Tamas, 1987).
One crucial function of a system, as outlined by Bertalanffy (1968), is the transformation of inputs into outputs. This process is vital for the survival of the system and involves three main aspects: inputs, mediator, and outputs. Inputs, drawn from the environment, undergo transformation into outputs, which are then returned to the environment. Inputs can take various forms such as information, money, materials, and human resources, while outputs manifest as goods and services. The entire process is termed the input-output process, and a system acts as a mediator in facilitating this transformation.
Applying these theoretical concepts to the study’s focus on assessing the functionality of the monitoring and evaluation system and the performance of public sector programs, the General Systems Theory provides a comprehensive framework. The monitoring and evaluation system can be seen as a subsystem within the broader system of public sector programs. The interconnectedness of different components, their mutual dependence, and the transformative nature of the system underscore the complexity inherent in evaluating the performance of public sector programs.
In conclusion, the General Systems Theory serves as a valuable theoretical foundation for understanding the dynamics of monitoring and evaluation systems within the broader context of public sector programs. By adopting this theoretical lens, the study aims to explore the interrelationships and dependencies that impact the utilization of evaluation data and contribute to a more comprehensive understanding of the functionality of the system and the performance of public sector programs.
1.2.3 Conceptual Background
Technical capacity, as defined by Byamugisha (2016), pertains to the capability of human resources within an organization to effectively handle evaluations. In the context of this study, technical capacity will be evaluated based on the qualifications, experience, and knowledge of staff members in utilizing Monitoring and Evaluation (M&E) systems.
Financial capacity, as outlined by USAID (2015), refers to the availability of funds to support and facilitate the monitoring and evaluation functions within an organization. In this study, financial capacity will be assessed by examining the adequacy of financial resources, the timely release of funds, and the proper accountability of these funds.
The quality of M&E systems, according to Mulandi (2013:12), is the degree to which M&E systems and activities adhere to specified requirements and standards. In the present study, the quality of M&E systems will be gauged by evaluating the extent to which these systems meet methodological and quality standards.
Utilization of Monitoring and Evaluation information involves the practical application of monitoring and evaluation results. By using these findings for decision-making and project control, organizations establish a baseline for subsequent measurements (Mulandi, 2013). In this study, the utilization of M&E data will be measured by assessing the extent to which the data informs decision-making, enhances organizational processes, and contributes to organizational learning.
1.2.4 Contextual Background
Masaka District has its own unique characteristics, such as its demographics, economic activities, and cultural aspects, and understanding this local context is crucial for evaluating the effectiveness of any public sector program. The study therefore identifies and describes the specific public sector programs being assessed, which could relate to health, education, infrastructure, agriculture, or other sectors; understanding the nature and objectives of these programs is essential for evaluating their functionality.
The M&E system is a critical component of public sector management. Understanding the design, structure, and functioning of the M&E system in Masaka District is necessary. This includes the indicators used, data collection methods, reporting mechanisms, and feedback loops.
Performance Metrics and Criteria: The study defines the criteria and metrics used to assess the performance of the public sector programs. This could involve measuring outcomes, outputs, and impacts, and consideration is given to whether the selected metrics align with the goals and objectives of the programs.
Stakeholder Involvement: The study identifies and analyzes the involvement of various stakeholders in the M&E process and in program implementation, including government officials, local communities, NGOs, and other partners, and assesses the level of collaboration and coordination among these stakeholders.
Challenges and Opportunities: The study explores the challenges faced by the M&E system and the public sector programs in Masaka District, including issues related to data quality, resource constraints, governance, and community engagement, and also identifies opportunities for improvement and innovation.
Policy and Regulatory Environment: The study further examines the broader policy and regulatory environment that influences the functioning of the M&E system and the public sector programs, since changes in policies, laws, or regulations may impact program implementation and evaluation.
1.3 Statement of the Problem
Findings from the M&E function ought to be used to inform decision making, planning and organizational learning (Basheka, 2016). However, in public sector facilities at level III and above in Masaka district, health information is not adequately used (Masaka district local government, 2017). In order to increase the use of health information, the Ministry of Health has devoted significant human, technical and financial resources to the monitoring and evaluation function. Despite this significant investment in M&E, there is widespread concern that the utilization of evaluation findings in the public sector in the district is low. According to the Annual Health Sector Performance Report (AHSPR) 2018/2019, Masaka district local government ranked 18th out of 128 districts in the whole country, and in the AHSPR 2019/2020 it ranked 76th. The AHSPR tracks the performance of the public sector in terms of coverage of education, administration and health services, quality, and reporting for the public sector. The current study thus seeks to assess the functionality of the monitoring and evaluation system and the performance of the public sector program.
1.4 Main objective
The study seeks to assess the functionality of the monitoring and evaluation system and the performance of the public sector program.
1.5 Specific objectives
- To establish the relationship between the technical capacity of M&E systems and the performance of the public sector program.
- To assess the relationship between the financial capacity of the M&E function and the performance of the public sector program.
- To examine the relationship between the quality of M&E systems and the performance of the public sector program.
1.6 Research Questions
- What is the relationship between the technical capacity of M&E systems and the performance of the public sector program?
- What is the relationship between the financial capacity of the M&E function and the performance of the public sector program?
- What is the relationship between the quality of M&E systems and the performance of the public sector program?
1.7 Study hypotheses
H1: There is no relationship between the technical capacity of M&E systems and the performance of the public sector program.
H2: There is no relationship between the financial capacity of the M&E function and the performance of the public sector program.
H3: There is no significant effect of the quality of M&E systems on the performance of the public sector program.
1.8 Conceptual Framework
Independent Variables (Factors):
- Technical Capacity: adequate personnel; experienced personnel; qualified personnel; knowledgeable elected officials
- Financial Capacity
- Quality of M&E findings: timeliness; relevance; well written; methodologically correct

Dependent Variable (Performance):
- Performance: timely delivery of services; efficient services; project/program improvement
Source: Adapted from Kasule (2016) and modified by the researcher
The conceptual framework above shows the factors affecting the utilization of M&E findings. The model assumes that the utilization of M&E findings is enhanced by improved technical capacity, financial capacity and quality of M&E systems within organizations.
1.9 Significance of the Study
The study will provide information to future scholars on the relationship between technical capacity and the utilization of M&E findings in level III and IV public sector facilities in Masaka district local government, enabling them to draw comparisons between the present situation and other periods.
The study will provide the government of Uganda and other policy makers with information regarding the relationship between financial capacity and the utilization of M&E findings in level III and IV public sector facilities in Masaka district local government. This will help the government make critical decisions that can help M&E findings be implemented in government health facilities.
The government will also be able to determine the quality of monitoring and evaluation, so as to understand how to improve it for the better implementation of government programs and better utilization of M&E results.
1.10 Justification of the Study
It is critical that the factors that affect the utilization of M&E findings are thoroughly examined and understood by the public sector implementing the monitoring and evaluation system. Without a clear understanding of these factors, the public health sector may continue underutilizing evaluation findings, which may lead to continued underperformance of the sector.
Recent health sector studies, as well as policies, strategies and plans, acknowledge that Human Resources for Health constraints are hampering health sector planning, service delivery and ultimately health outcomes in many African countries and in the world at large. Human Resources for Health inequities show that the Americas have 14% of the world's population compared with sub-Saharan Africa's 11%, yet sub-Saharan Africa carries 25% of the global disease burden against 10% for the Americas. Further, the Americas host 42% of the global health workforce compared with only 3% in sub-Saharan Africa. Equally, the Americas allocate 50% of their annual expenditure to health, compared with less than 1% annual budgetary allocation to health in sub-Saharan Africa (WHO, 2016). These disparities indicate the need for a strong commitment to examining the factors affecting the utilization of monitoring and evaluation findings in public health facilities.
The world faces a global shortage of well-trained health workers, which is considered one of the biggest barriers to quality health-care services for millions of people throughout the world (World Health Organization, 2018). It is estimated that there is currently a shortfall of approximately 7.2 million doctors, nurses and midwives, and that this shortfall is likely to rise to at least 12.9 million in the coming decades (Sidibe and Campbell, 2017). Although the health workforce crisis affects virtually all countries worldwide, including high-income countries, sub-Saharan Africa and parts of Asia are most affected, as these regions have the lowest health worker densities globally and are also strongly affected by poor attraction and retention as well as high attrition of health professionals (Kabbash et al., 2021). This warrants a study establishing the relationship between technical capacity and the utilization of M&E findings in level III and IV public sector facilities, as such recommendations are critical to improving the general quality of health institutions.
1.11 Scope of the Study
1.11.1 Content Scope
The study will examine the factors affecting the utilization of M&E findings. It will specifically focus on the effects of technical capacity, financial capacity and quality factors on the utilization of M&E findings in Masaka district local government.
1.11.2 Geographical scope
The study will be conducted in Masaka district local government which lies in central Uganda.
1.11.3 Time scope
The study will be carried out over a period of eight months.
1.12. Definition of Key Terms and Concepts
Technical capacity refers to the capacity of the human resources/people to use M&E findings.
Technological capacity refers to the ability of the organization to utilize technology to manage M&E activities.
Quality of M&E findings refers to the extent to which the evaluation findings meet the specified requirements and standards.
Functionality of Monitoring and Evaluation findings refers to the application of monitoring and evaluation findings in decision-making, quality improvement and learning.
CHAPTER TWO
LITERATURE REVIEW
2.1 Introduction
This chapter presents a review of literature on the topic under investigation. The chapter presents a review of the relevant theories. It also presents empirical literature on the assessment of the functionality of the monitoring and evaluation system and the performance of the public sector program.
2.2 Theoretical Review
The theory which will underpin this study is the General Systems Theory (GST). The General Systems Theory, developed by Bertalanffy (1934) as cited in Tamas (1987), provides an analytical framework which can be used to explain the effect of the functionality of M&E systems on performance. According to Bertalanffy (1968), a system is an assemblage of things connected or interrelated so as to form a complex unity: a whole composed of parts and sub-parts in an orderly arrangement according to some scheme or plan. The features of a system are as follows. A system is basically a combination of parts, sub-parts and sub-systems, and each part may have various sub-parts. A system has mutually dependent parts, each of which may include many sub-systems. Parts and sub-parts of a system are mutually related to each other, some more, some less; some directly, some indirectly. The relationship is in the context of the whole, and any change in one part may affect other parts. A system is an interdependent framework in which various parts are arranged (Tamas, 1987).
A system transforms inputs into outputs. This transformation is essential for the survival of the system. There are three aspects involved in this transformation process: inputs, mediator, and outputs. Inputs are taken from the environment, transformed into outputs and given back to the environment. The various inputs may be in the form of information, money, materials, human resources, etc., while outputs may be in the form of goods and services. The total relationship may be called the input-output process, and the system works as a mediator in the process (Bertalanffy, 1968). The systems theory has been used in a number of fields, such as community development. In this study, factors such as technical capacity, financial capacity and the quality of M&E systems will be the inputs, and the utilization of M&E data by the public health units will be the output.
2.3 Conceptual Review
This section presents a review of the literature by various scholars in line with the study objectives and the conceptual framework.
2.3.1 Technical capacity
Adequate personnel
Building an adequate supply of human resource capacity is critical for the sustainability of the M&E system and generally is an ongoing issue. Furthermore, it needs to be recognized that “growing” evaluators requires far more technically oriented M&E training and development than can usually be obtained with one or two workshops. Both formal training and on-the-job experience are important in developing evaluators with various options for training and development opportunities which include: the public sector, the private sector, universities, professional associations, job assignment, and mentoring programs (Acevedo et al., 2010:12).
Human capital, with proper training and experience, is vital for the production of M&E results. There is a need for effective M&E human resource capacity in terms of quantity and quality; hence M&E human resource management is required in order to maintain and retain stable M&E staff (World Bank, 2011). The scarcity of competent employees is also a major constraint in establishing M&E systems (Koffi-Tessio, 2002). M&E being a new professional field, it faces challenges in the effective delivery of results. There is therefore great demand for skilled professionals, capacity building of M&E systems, and harmonization of training courses as well as technical advice (Gorgens and Kusek, 2009).
The UNDP (2009) handbook on planning, monitoring and evaluation for development results emphasizes that human resources are vital for effective monitoring and evaluation, stating that staff should possess the required technical expertise in the area in order to ensure high-quality monitoring and evaluation. Implementing an effective M&E system demands that staff undergo training and possess skills in research and project management; hence capacity building is critical (Nabris, 2002). In turn, numerous training manuals, handbooks and toolkits have been developed for NGO staff working on projects, in order to provide them with practical tools that enhance results-based management by strengthening awareness of M&E (Hunter, 2009). They also give many practical examples and exercises, which are useful since they provide staff with ways of becoming efficient and effective and of having an impact on the projects (Shapiro, 2011).
Qualified Personnel: The M&E system cannot function without skilled people who effectively execute the M&E tasks for which they are responsible. Therefore, understanding the skills needed and the capacity of people involved in the M&E system (undertaking human capacity assessments) and addressing capacity gaps (through structured capacity development programs) is at the heart of the M&E system (Gorgens & Kusek, 2010:43). In its framework for a functional M&E system, UNAIDS (2008) notes that, not only is it necessary to have dedicated and adequate numbers of M&E staff, it is essential for this staff to have the right skills for the work. Moreover, M&E human capacity building requires a wide range of activities, including formal training, in-service training, mentorship, coaching and internships. Lastly, M&E capacity building should focus not only on the technical aspects of M&E, but also address skills in leadership, financial management, facilitation, supervision, advocacy and communication.
Experienced personnel: Monitoring and evaluation carried out by untrained and inexperienced people is bound to be time consuming and costly, and the results generated could be impractical and irrelevant; this will definitely impact the success of projects (Nabris, 2002:17). In an assessment of CSOs in the Pacific, UNDP (2011:12) discusses some of the challenges of organizational development, including inadequate monitoring and evaluation systems. Additionally, the lack of capabilities and opportunities to train staff in technical skills in this area is clearly a factor to be considered. During the consultation processes, there was consensus among CSOs that their lack of monitoring and evaluation mechanisms and skills was a major systemic gap across the region. Furthermore, while there is no need for CSOs to possess extraordinarily complex monitoring and evaluation systems, there is certainly a need for them to possess a rudimentary knowledge of, and ability to utilize, reporting, monitoring and evaluating systems.
A study by White (2013:43) on monitoring and evaluation best practices in development INGOs indicates that INGOs encounter a number of challenges when implementing or managing M&E activities, one being insufficient M&E capacity, where M&E staff usually advise more than one project at a time and have a regional or sectoral assignment with a vast portfolio. Furthermore, taking on the M&E work of too many individual projects overextends limited M&E capacity and leads to rapid burnout of M&E staff, whereby high burnout and turnover rates make the recruitment of skilled M&E staff difficult and limit the organizational expertise available to support M&E development. Mibey's (2011:19) study on factors affecting the implementation of monitoring and evaluation programs in the Kazi Kwa Vijana project recommends that capacity building be added as a major component of the project across Kenya, which calls for enhanced investment in training and human resource development in the crucial technical area of monitoring and evaluation.
2.3.2 Financial capacity
Availability of funds
Financial Capacity refers to the extent to which funds are available to finance and facilitate the monitoring and evaluation function in an organization (USAID, 2015). In this study therefore financial capacity will be measured in terms of whether there are adequate financial resources, whether the funds are released in a timely manner and whether the funds are accounted for.
There is empirical evidence to suggest that lack of adequate financial capacity constrains M& E systems. In Ghana, a study by CLEAR (2012) found that after several years of implementing the national M&E system, significant progress had been made. However, challenges include severe financial constraints; institutional, operational and technical capacity constraints; fragmented and uncoordinated information, particularly at the sector level. To address these challenges, the CLEAR report argues that the current institutional arrangements will have to be reinforced with adequate capacity to support and sustain effective monitoring and evaluation, and existing M&E mechanisms must be strengthened, harmonized and effectively coordinated.
Timely funds
A study by Koffi-Tessio (2002) on the efficacy and efficiency of Monitoring-Evaluation Systems (MES) for projects financed by the Bank Group, conducted in Burkina Faso, Mauritania, Kenya, Rwanda and Mozambique through desk review and interviews covering projects approved between 1987 and 2000, found that Monitoring-Evaluation systems were not meeting their obligatory requirements because of financial constraints.
Accountability
A study conducted by Gamba (2016) to determine the factors affecting the utilization of evaluation findings in malaria control projects in Uganda found that management support was low and the financial resources allocated for M&E were insufficient. This greatly affected effective outcome and impact monitoring and evaluation of the projects.
2.3.3 Quality of Evaluation Findings
Timeliness
Quality of M&E findings refers to the extent to which the M&E systems and activities meet the specified requirements and standards (Mulandi, 2013:12). For this study, quality is the extent to which the M&E findings meet such standards and requirements.
The quality of evaluations is important to the credibility of reported results; hence, it is important to incorporate data from a variety of sources to validate findings. Furthermore, while primary data are collected directly by the M&E system for M&E purposes, secondary data are those collected by other organizations for purposes different from M&E (Gebremedhin, Getachew & Amha, 2010:24). In the design of an M&E system, the objective is to collect indicator data from various sources, including the target population, for monitoring project progress (Barton, 1997). The methods of data collection for an M&E system include discussion/conversation with concerned individuals, community/group interviews, field visits, review of records, key informant interviews, participant observation, focus group interviews, direct observation, questionnaires, one-time surveys, panel surveys, censuses, and field experiments. Moreover, developing key indicators to monitor outcomes enables managers to assess the degree to which intended or promised outcomes are being achieved (Kusek & Rist, 2004).
Frequent data collection means more data points; more data points enable managers to track trends and understand intervention dynamics hence the more often measurements are taken, the less guess work there will be regarding what happened between specific measurement intervals. But, the more time that passes between measurements, the greater the chances that events and changes in the system might happen that may be missed (Gebremedhin et al., 2010). Mulandi (2013:75) concurs that to be useful, information needs to be collected at optimal moments and with a certain frequency. Moreover, unless negotiated indicators are genuinely understood by all involved and everyone’s timetable is consulted, optimal moments for collection and analysis will be difficult to identify.
Methodologically correct
According to Cornielje, Velema and Finkenflugel (2008:15), only when the monitoring system is owned by its users is it likely to generate valid and reliable information. However, all too often the very same users may be overwhelmed by the amount of daily work, which in their view is seen as more important than collecting data, and subsequently the system may become corrupted. They conclude that it is of extreme importance that front-line workers are both involved in monitoring and evaluation and informed about the status of the services and activities they largely provide in interaction with other stakeholders and beneficiaries.
Evidence suggests that the quality of evaluations has an effect on the utilization of evaluation findings. According to an IFAD (2008:26) annual report on results and impact, recurrent criticisms of M&E systems include limited scope, complexity, low data quality, inadequate resources, weak institutional capacity, lack of baseline surveys and lack of use. Moreover, the most frequent criticism of M&E systems in IFAD projects relates to the type of information included in the system. Most IFAD projects collect and process information on project activities; however, the average IFAD project did not provide information on results achieved at the purpose or impact level. The M&E system of the Tafilalet and Dades Rural Development project in Morocco, for example, only focused on financial operations and could not be used for impact assessment. In the Pakistan IFAD Country Program Evaluation, cases were reported of contradictory logical frameworks combined with arbitrary and irrelevant indicators, while in Belize two different logical frameworks were generated, which increased confusion and complexity. The Ethiopia IFAD Country Program Evaluation found that project appraisal documents made limited provision for systematic baseline and subsequent beneficiary surveys; for example, in one project in Ethiopia the baseline survey was carried out 2-3 years after project start-up.
Relevant
In a study report of an Australian NGO conducted by Spooner and Dermott (2008:45), staff reported that, as WAYS evolved over time, they were unsure about what works in the current system of monitoring and evaluation. Additionally, resources had not been dedicated to data analysis, and the data were rarely analyzed. A further problem found with data analysis was that the responsibility for doing the analysis lay with program managers, who had little time to analyze data that was not required by funding bodies. Some of the staff stated that they were required to collect information and analyze it, but that their analysis was hampered because they had minimal research skills. Finally, some staff reported that there was no feedback loop built into the current system, so while staff report on their activities to management, they do not know what happens to the information once it is reported.
A problem in African countries, and perhaps in some other regions, is that while sector ministries collect a range of performance information, the quality of the data is often poor. This is partly because the burden of data collection falls on over-worked officials at the facility level, who are tasked with providing the data for other officials in district offices and the capital, but who rarely receive any feedback on how the data are actually being used, if at all. This leads to another problem: data are poor partly because they are not being used, and they are not used partly because their quality is poor; in such countries, therefore, there is too much data but not enough information (Mackay, 2006:7).
Well written
Obure (2008:5), in a study of RBM in Northern Ghana, indicates a problem associated with post-collection data management. As confessed by many field officers, the storage, processing and interpretation of data were ineffectively handled. Results from the study strongly point to a weakness in the system arising from the inability of stakeholders to handle and process data in a meaningful way. He concludes that this challenge could lead to the mere collection of large volumes of data which eventually might not be used in a helpful way. Data must be collected and analyzed regularly on the objectives and intermediate results. Furthermore, the PME&R system allows for three levels of information by project, activity and organization, where the data for all organizations involved in a specific activity can be averaged up to the activity level, and the data for all activities can be averaged up to the project level (Booth, Ebrahim & Morin, 2008:31).
A study by Gamba (2016:65) to establish the factors affecting the utilization of M&E results in malaria control projects in Uganda found that evaluation quality and communication of the M&E results had a significant positive effect on utilization, and that the timeliness of M&E activities had a moderately positive effect on the utilization of M&E findings in the implementation of the MCP activities across the organizations.
2.3.4 Utilization of M&E findings
Utilization of Monitoring and Evaluation findings refers to putting monitoring and evaluation results to use. The use of monitoring and evaluation findings for decision making and project control ensures that there is a baseline against which to undertake new measurements (Mulandi, 2013).
Number of Decisions made
Gebremedhin, Getachew and Amha (2010) establish that the source of performance data is important to the credibility of reported results and thus to their utilization in future programme implementation. The authors therefore note that it is important to incorporate data from a variety of sources if the results are to be validated. The processes of planning, monitoring and evaluation make up the Results-Based Management (RBM) approach, which is intended to aid decision-making towards explicit goals. Planning helps to focus on results that matter, while M&E facilitates learning from past successes and from challenges encountered during implementation. The elements of an M&E system, which if developed together with all key stakeholders will encourage participation and increased ownership of a project/plan, are:
- Results Frameworks or logframes (RFs), which are tools to organize intended results, i.e. measurable development changes. RFs inform the development of the M&E plan, and the two must be consistent with each other.
- The M&E plan, which contains a description of the functions required to gather the relevant data on the set indicators and the required methods and tools to do so. The M&E plan is used to systematically organize the collection of specific data to be assessed, indicating the roles and responsibilities of project/plan stakeholders. It ensures that relevant progress and performance information is collected, processed and analyzed on a regular basis to allow for real-time, evidence-based decision-making.
- The various processes and methods for monitoring (such as regular input and output data gathering and review, participatory monitoring and process monitoring) and for evaluation (including impact evaluation, thematic evaluation, surveys and economic analysis of efficiency).
- The Management Information System, which is an organized repository of data to assist in managing key numeric information related to the project/plan and its analysis.
Furthermore, while primary data are collected directly by the M&E system for M&E purposes, secondary data are those collected by other organizations for purposes different from M&E. A study by Booth, Ebrahim and Morin (2008) reports that the monitoring and evaluation system allows for three levels of information, by project, activity and organization: the data for all organizations involved in a specific activity can be averaged up to the activity level, and the data for all activities can be averaged up to the project level, easing utilization.
Learning
M&E systems will only add value to project implementation through interpretation and analysis, by drawing on information from other sources and adapting it for use by project decision makers and a range of key partners. Knowledge generated by M&E efforts should never stop at the basic capturing of information or rely exclusively on quantitative indicators, but should also address the "why" questions. Here, more qualitative and participatory approaches become particularly important for analyzing the relationship between project activities and results. Evaluation therefore serves to establish attribution and causality, and forms a basis for accountability and learning by staff, management and clients. The challenge early in project implementation is to design effective learning systems that can underpin management behavior towards results, and to come up with strategies to optimize impact.
In this context, learning is defined as formulating responses to identified constraints and implementing them in real time. Useful at project/plan level are knowledge-sharing and learning instruments which can pick up information and analysis from the M&E systems, such as: summarized studies and publications on lessons learned; case studies documenting successes and failures; publicity material including newsletters, radio and television programmes; the formation of national and regional learning networks; periodic meetings and workshops to share knowledge and lessons learned; research-extension liaison or feedback meetings; national and regional study tours; the preparation and distribution of technical literature on improved practices; and routine supervision missions, mid-term reviews or evaluations and project completion (end-of-project) reports.
A study by Guijt (1999) also finds that useful information needs to be collected at optimal moments and with a certain frequency if it is to be of quality. Moreover, unless negotiated indicators are genuinely understood by all involved, and everyone's timetable is consulted, optimal moments for collection and analysis will be difficult to identify. On the other hand, Cornielje, Velema and Finkenflugel (2008) report that it is only when the monitoring system is owned by the users that it can generate quality data that are valid and reliable for utilization in future projects. The authors however note that all too often the very same users may be overwhelmed by the amount of daily work, which, in their view, is seen as more important than collecting data, and that subsequently the system may become corrupted and thus not usable in subsequent implementations.
Barton (2007) notes that in the design of an M&E system, the objective is to collect indicator data from various sources, including the target population, for monitoring project progress. The methods of data collection for an M&E system include discussion/conversation with concerned individuals, community/group interviews, field visits, review of records, key informant interviews, participant observation, focus group interviews, direct observation, questionnaires, one-time surveys, panel surveys, censuses, and field experiments. Kusek and Rist (2004), however, report that developing key indicators to monitor outcomes enables managers to assess the degree to which intended or promised outcomes are being achieved. Frequent data collection means more data points, which enables managers to track trends and understand intervention dynamics, reducing guesswork about what happened between specific measurement intervals. In supplement, Gebremedhin et al. (2010) report that the more time that passes between measurements, the greater the chances that events and changes in the system might be missed, which can have consequences if the data are utilized in subsequent studies. These studies have contradicting results, a gap this study hopes to clarify in the context of public sector programs.
Project/program improvement
Bouckaert, Verhoest and De Corte (2009) observe that indicators for measuring programme performance are difficult to identify unless M&E results are produced on time. It is therefore important to clearly define an appropriate system of indicators to measure and monitor programme performance over time. In support, a study by Cunnen (2006) finds that a system of over two thousand societal indicators used to measure results for Canadians across all sectors needs to be timely.
Kusek et al. (2004) overwhelmingly support the assertion that the indicators measured are just as important as the timing of M&E. This means that it is imperative not only to get the measurement correct, but also to do so in such a way that the information is readily available when it is needed. Kusek and colleagues note that the practice of using inappropriate baselines defeats the whole concept of the "data quality triangle", which encompasses the elements of data reliability, data validity and data timeliness.
2.7 Summary of empirical literature
2.8 Gaps Identified in the Literature
The review of literature indicates that organizational capacity factors, top management support and the quality of evaluation findings affect the utilization of M&E findings. Studies by CLEAR (2012:31), the Kenyan NGO Coordination Board (2009:17), White (2013:10), Mibey (2011:25) and Nyagah (2015:47) indicate that organizational capacity has a significant effect on the utilization of M&E findings. This means that the utilization of M&E findings improves with the financial, technical and technological capacity of an organization. These studies however have a contextual gap in that they were conducted in organizations in other countries, which may have more organizational capacity to utilize M&E findings. It remains to be seen whether organizational capacity will have the same effect on the utilization of M&E findings within the public sector in Masaka district local government.
Studies by Turabi et al. and Kasule (2016) indicate that top management support affects the utilization of M&E findings. This means that the utilization of M&E findings improves with top management support in an organization. However, it remains to be empirically seen whether top management support has any significant effect on the utilization of M&E findings, because earlier literature is largely anecdotal. The literature also indicates that the quality of M&E findings affects the utilization of M&E findings (Mackay, 2006:7; Obure, 2008:5; Gamba, 2016:65; Booth et al., 2008:31), although here too the evidence is largely anecdotal. Generally, whereas the literature points out the effect of the above-mentioned factors on the utilization of M&E findings, most studies are based on anecdotal observations rather than empirical findings. There is therefore a need to conduct a study which will empirically document the factors affecting the utilization of M&E findings.
CHAPTER THREE
METHODOLOGY
3.1 Introduction
This chapter presents the methodology that will be adopted during the study. It describes and discusses the research design, sample size and selection, the data collection methods and their corresponding data collection instruments, the data management and analysis procedures, the steps that will be taken to ensure validity and reliability during the study, and the measurement of variables.
3.2 Research Design
The study will adopt a cross-sectional design. Explanatory research helps a researcher to analyze patterns and formulate hypotheses that can guide future endeavors. According to Amin (2005), if a researcher is seeking a more complete understanding of a relationship between variables, explanatory research is a great place to start. Since the study seeks to examine the relationship between variables, a simple bivariate correlation design will be adopted to determine the relationship between technical capacity, financial capacity, the quality of evaluations and the utilization of evaluation findings.
The study will use both qualitative and quantitative approaches. The quantitative approach will be adopted because the study intends to examine the factors affecting the utilization of M&E findings. Such an endeavor can best be achieved when a quantitative approach is used because it allows for collecting numeric data on observable individual behavior of samples, then subjecting these data to statistical analysis (Amin, 2005:5).
A qualitative approach will also be adopted to enable the researcher capture data that will be left out by the quantitative approach. This will be aimed at capturing more in-depth information on the topic under investigation.
3.3 Study Population
Sekaran (2018) defines a population as the entire group of people, events or things that a researcher wishes to investigate. The population below is the total population that will be used in the study; it has been arrived at due to the respondents' experience and knowledge of the subject matter on the assessment of the functionality of the monitoring and evaluation system and the performance of the public sector program.
Population of Respondents

Category | No. of Health Centers | Population
Administrators | 2 | 30
Top Educational Managers | 6 | 52
Health Department | 8 | 8
Total | 16 | 90
3.4 Study Sample
Mugenda and Mugenda (2003) argue that it is impossible to study the whole target population and that the researcher shall therefore take a sample of the population. A sample is a subset of the population that comprises members selected from the population. Using Krejcie and Morgan's (1970) table for sample size determination, a sample size of 73 employees will be selected from the total population of 90 employees of Masaka district local government.
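For illustration, this figure can be cross-checked against the formula underlying Krejcie and Morgan's (1970) table; this is a sketch assuming the conventional values \(\chi^2 = 3.841\) (chi-square for 1 degree of freedom at 95% confidence), \(P = 0.5\) and \(d = 0.05\):

\[ s = \frac{\chi^2 N P(1-P)}{d^2(N-1) + \chi^2 P(1-P)} = \frac{3.841 \times 90 \times 0.25}{0.05^2 \times 89 + 3.841 \times 0.25} = \frac{86.42}{1.183} \approx 73 \]

which agrees with the sample size of 73 drawn from the population of 90.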
Table 1: Sample Size Determination
Category | Population | Sample Size | Sampling Technique |
Administrators | 30 | 24 | Simple random sampling |
Top education officials | 52 | 42 | Simple random sampling |
Health officials | 8 | 7 | Purposive sampling |
Total | 90 | 73 |
Source: Chief Administrative Office, Masaka district local government (2017)
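The category sample sizes in Table 1 are consistent with proportional allocation of the overall sample, a check computed here rather than taken from the source: \(n_h = (N_h/N) \times n\) gives \(30/90 \times 73 \approx 24\) administrators and \(52/90 \times 73 \approx 42\) top education officials; the exact figure for the health officials (6.5) is rounded up to 7 so that the categories sum to the overall sample of 73.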
3.5 Sampling Techniques and Procedure
A number of sampling techniques will be used to select respondents for the study, namely simple random and purposive sampling. The lower-level staff will be selected using the simple random sampling technique, because it ensures the generalizability of findings and minimizes bias (Sekaran, 2003). The purposive sampling technique will be used to select the health officials; these key informants will be purposively sampled because they are believed to have technical and specialized knowledge about the topic under investigation by virtue of the offices they hold. An illustrative sketch of the random draw is given below.
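As an illustration of how the simple random sample could be drawn once the staff lists are obtained, the following Python sketch is offered; the category shown, the list contents and the seed are hypothetical assumptions rather than part of the study design:

import random

# Hypothetical sampling frame: the 30 administrators on the staff list
# obtained from the Chief Administrative Office
administrators = [f"administrator_{i:02d}" for i in range(1, 31)]

random.seed(2024)  # fixed seed so the draw can be documented and reproduced
sample = random.sample(administrators, k=24)  # select 24 of 30, as per Table 1
print(sample)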
3.6 Data collection Methods
The section presents the data collection methods, which include the survey method, interviews and documentary review. These data collection methods have been chosen because of their numerous advantages.
3.6.1 Survey Method
The study will use the questionnaire method to collect data. The questionnaire will be used because it allows for the collection of data from a big group of respondents in a short period, as suggested by Mugenda and Mugenda (1999:107). The questionnaire will also be used because it allows busy respondents to fill it in at a convenient time, and allows respondents to express their views and opinions without fear of being victimized (Oso & Onen, 2008:18).
3.6.2 Interview Method
The study will employ the interview method. Interviews in this study will help the researcher obtain more information on the topic under investigation. Interviews will be used because they fetch a variety of ideas needed for the study and give a deeper understanding of the topic. This method will also offer the researcher an opportunity to adapt questions, clarify them using appropriate language, clear doubts, establish rapport and probe for more information (Sekaran, 2003:253).
3.6.3 Documentary Review Method
The researcher will review documents in order to obtain recorded information related to the issue under investigation. This method will be used because it enables the researcher to access data at a convenient time, to obtain data that are thoughtful in that the informants have given attention to producing them, and to obtain data in the language of the respondent (Oso & Onen, 2008:45).
3.7 Data Collection Instruments
The instruments used in this study will be a questionnaire, an interview guide and a document review checklist.
3.7.1 Self-Administered Questionnaire
The study will employ a questionnaire as a tool of data collection. The questionnaire for staff will have 5 sections (see appendix I). Section A will deal with the demographic characteristics of the respondents, section B will focus on technical capacity, Section C will focus on financial capacity, Section D will be concerned with the quality of evaluation activities and section E will be concerned with the utilization of M&E findings. The questionnaires will be closed ended. Closed ended questions will be developed to help respondents make quick decisions; in addition, closed ended questions will help the researcher to code the information easily for subsequent analysis and narrow down the error gap while analyzing data as observed by Sekaran (2003:231).
3.7.2 Interview Guide
An unstructured interview guide will be used as a tool for collecting in-depth information from the key informants. The guide will contain a list of topical issues and questions to be explored in the course of conducting the interviews, with questions soliciting the perceptions of the key informants regarding the factors affecting the utilization of M&E findings within the public health facilities. The interview guide will be used because it obtains in-depth data which may not be possible to obtain using self-administered questionnaires (Mugenda & Mugenda, 1999:17; Kakoza, 1999:27).
3.7.3 Documentary Review Checklist
A document review checklist will be used to collect more in-depth data on the topic under investigation. This will also enable the researcher to supplement the data acquired from the interviews and questionnaires. The researcher will analyze documents and publications related to the study topic; documents expected to be reviewed include Ministry of Health reports, journals, and newspapers.
3.8 Validity and Reliability of the Research Instruments
3.8.1 Validity
Validity is defined as the extent to which results can be accurately interpreted and generalized to other populations (Oso & Onen, 2008). Borg and Gall (1989), as cited in Onyinkwa (2013), define validity as the degree to which the results obtained by the research instrument correctly represent the phenomenon under study, while Mugenda and Mugenda (1999) define it as the accuracy and meaningfulness of inferences based on the research results.
Validity will be tested using the content validity index (CVI), which involves judges scoring the relevance of the questions in the instruments in relation to the study variables. Amin (2005) recommends that a minimum CVI of 0.7 be used.
The formula for the Content Validity Index is:
CVI = n / N
where CVI = Content Validity Index, n = number of items rated relevant, and N = total number of items in the instrument.
In this study, content validity will be established by using experts to assess the validity of the research instruments. The experts, especially research supervisors and consultants from UTAMU, will be given the data collection tools to assess whether the items in the instruments are valid in relation to the research topic, objectives and questions. Items declared invalid will be dropped or adjusted, while valid ones will be maintained. The CVI will then be computed by dividing the number of items declared valid by the total number of items in the data collection instrument.
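As a minimal illustrative sketch of this computation (the ratings list below is hypothetical illustration data, not study data), the CVI could be calculated in Python as follows:

```python
# Minimal sketch of the Content Validity Index (CVI) computation.
# Each item is marked relevant (1) or not relevant (0) by the experts;
# the ratings list is hypothetical illustration data.
ratings = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # one judgment per item

n_relevant = sum(ratings)   # n: items declared relevant
n_total = len(ratings)      # N: total items in the instrument
cvi = n_relevant / n_total

print(f"CVI = {cvi:.2f}")   # 0.80 here, above the 0.7 minimum (Amin, 2005)
```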
3.8.2 Reliability
According to Mugenda and Mugenda (2003), reliability is the measure of the extent to which research instruments provide the same results upon being tested repeatedly.
Cronbach's coefficient alpha (α), as recommended by Amin (2005, p. 302), will be used to test the reliability of the research instruments. An instrument is deemed reliable if a coefficient of 0.7 or above is obtained, in which case it will be adopted for use in data collection.
The formula for reliability is:
α = (k / (k − 1)) × (1 − Σσᵢ² / σₜ²)
where α = alpha reliability coefficient, k = number of items included in the questionnaire, Σσᵢ² = sum of the variances of the individual items, and σₜ² = variance of all items in the instrument.
To ensure the credibility and trustworthiness of the qualitative data, the researcher will ensure that only officials who are employees of Masaka District Local Government are interviewed.
The coefficient ranges from α = 0.00 (no reliability) to α = 1.00 (perfect reliability); the closer alpha is to 1.0, the better. A Cronbach's alpha of 0.7 or above will signify that the research instrument is good enough for the study. According to Amin (2005), all measurements in an instrument that show adequate levels of internal consistency, with a Cronbach's alpha of 0.7 or above, are accepted as reliable.
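As a minimal sketch of this reliability test (the score matrix below is hypothetical illustration data; the study's actual computation may be done in a statistical package), Cronbach's alpha can be calculated as follows:

```python
import numpy as np

# Minimal sketch of Cronbach's alpha: one row per respondent,
# one column per Likert item. The data are hypothetical.
scores = np.array([
    [5, 4, 4, 5],
    [3, 3, 4, 3],
    [4, 4, 5, 4],
    [2, 3, 2, 3],
])

k = scores.shape[1]                          # number of items
item_vars = scores.var(axis=0, ddof=1)       # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")     # ~0.89 here, above the 0.7 threshold
```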
3.9 Data Collection Procedure
The researcher will obtain a letter of introduction from UTAMU, which will be presented to the authorities at the public health units in Masaka District Local Government, after which she will obtain a list of all the staff in the organization.
The researcher will randomly select respondents to participate in the study; a self-administered questionnaire will then be used to collect information from these respondents.
The researcher will also purposively select senior and middle level managers who will be interviewed.
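As a minimal sketch of the random selection step (the staff names and sample size below are hypothetical illustration values, not figures from the study), simple random sampling could be performed as follows:

```python
import random

# Hypothetical list standing in for the staff list obtained from the
# organization; 200 names are generated purely for illustration.
staff_list = [f"staff_{i}" for i in range(1, 201)]

random.seed(42)                                 # reproducible selection
respondents = random.sample(staff_list, k=50)   # simple random sample of 50
print(respondents[:5])
```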
3.10 Data Analysis
3.10.1 Analysis of quantitative Data
Descriptive statistics, namely frequency counts and percentages, will be used to analyze the respondents' demographic characteristics, while the mean and standard deviation will be used to analyze the respondents' opinions on the factors affecting the utilization of M&E findings within the public health units in Masaka District Local Government.
Data will be correlated using the Pearson product-moment correlation coefficient to establish the relationship between technical capacity, financial capacity, quality of M&E findings and the utilization of M&E findings, as suggested by Sekaran (2003), Amin (2005) and Oso and Onen (2008). Regression analysis will be used to determine the strength of the relationship between the variables, as indicated by the R-squared value; the higher the R-squared value, the stronger the relationship. For this study, the utilization of M&E findings within the health facilities will be regressed on the three dimensions of technical capacity, financial capacity and quality of M&E findings, with the aim of determining the effect of each of these factors on the utilization of M&E findings.
The statistical package to be used for data analysis in this study is SPSS version 16.0. Different statistical techniques will be used, namely correlation and regression analysis. The upper level of statistical significance for hypothesis testing will be set at 5%, and all statistical test results will be computed at the 2-tailed level of significance.
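Although the analysis itself will be run in SPSS, the following Python sketch illustrates the equivalent descriptive, correlation and regression steps (the CSV path and column names are hypothetical assumptions, not part of the study instruments):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey data: one row per respondent, scaled variables.
df = pd.read_csv("survey_responses.csv")

predictors = ["technical_capacity", "financial_capacity", "quality_of_me"]

# Descriptive statistics (mean, standard deviation) for the scaled items.
print(df[predictors + ["utilization"]].describe())

# Pearson product-moment correlations among the factors and utilization.
print(df[predictors + ["utilization"]].corr(method="pearson"))

# Utilization of M&E findings regressed on the three dimensions;
# R-squared indicates the strength of the relationship, and p-values
# are two-tailed, tested at the 5% significance level.
X = sm.add_constant(df[predictors])
model = sm.OLS(df["utilization"], X).fit()
print(model.summary())
```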
3.10.2 Analysis of qualitative data
Qualitative data will be analyzed using content analysis. Responses from key informants will be grouped into recurrent issues, and the recurrent issues which emerge in relation to each guiding question will be presented in the results, with selected direct quotations from participants offered as illustrations.
Data on the respondents' views and opinions about the factors affecting the utilization of M&E findings will be obtained using scaled variables from a self-developed questionnaire.
A five-point Likert ordinal scale will be used to obtain responses on the variables: Strongly Agree will be assigned 5, Agree 4, Not Sure 3, Disagree 2 and Strongly Disagree 1. The Likert ordinal scale has been used by numerous scholars who have conducted similar studies, such as Bowling (1997).
The structured questions will be measured using the following variables;
Technical skills capacity: adequate personnel, experienced personnel, qualified personnel.
Financial capacity: availability of funds, timely funds, adequate funds, accountability.
Quality of M&E findings: timeliness, relevance, good writing, methodological correctness.
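As a minimal sketch of how the five-point Likert responses described above might be coded numerically for analysis (the raw responses below are hypothetical illustration data):

```python
import pandas as pd

# Mapping of the five-point Likert labels to their assigned codes.
likert_codes = {
    "Strongly Agree": 5,
    "Agree": 4,
    "Not Sure": 3,
    "Disagree": 2,
    "Strongly Disagree": 1,
}

# Hypothetical raw responses to one questionnaire item.
responses = pd.Series(["Agree", "Strongly Agree", "Not Sure", "Disagree"])
coded = responses.map(likert_codes)   # numeric codes ready for analysis
print(coded.tolist())                 # [4, 5, 3, 2]
```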
REFERENCES
Acevedo, G. L., Krause, P., & Mackay, K. (Eds.). (2012). Building better policies: The nuts and bolts of monitoring and evaluation systems. Washington, DC: World Bank.
Amin, M. E. (2005). Social science research: Conception, methodology and analysis. Kampala: Makerere University Printery.
Amkeni Wakenya. (2009). Strengthening the capacity of Kenyan civil society to participate more effectively in democratic governance reforms and in deepening democracy in Kenya. Amkeni Wakenya annual progress report.
Annual Health Sector Performance Report 2019/2020, p. 160. Available at: https://www.health.go.ug
Anywar, G., Kakudidi, E., Byamukama, R., Mukonzo, J., Schubert, A., & Oryem-Origa, H. (2020). Indigenous traditional knowledge of medicinal plants used by herbalists in treating opportunistic infections among people living with HIV/AIDS in Uganda. Journal of ethnopharmacology, 246, 112205.
CLEAR. (2012). Collaborative reflection and learning amongst peers. African monitoring and evaluation systems workshop report. Available at: http://www.theclearinitiative.org/african_M&E_workshop.pdf
Davis, F. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance in Information Technology. MIS Quarterly, 13(3), 319-340.
Davis, N., Preston, C., & Sahin, I. (2009). ICT teacher training: Evidence for multilevel evaluation from a national initiative. British Journal of Educational Technology, 40(1), 135–148
Duchoslav, J., & Cecchi, F. (2019). Do incentives matter when working for God? The impact of performance-based financing on faith-based healthcare in Uganda. World Development, 113, 309–319.
Gamba, P. (2016). Factors affecting utilization of monitoring and evaluation findings in implementation of malaria control programmes in Mukono District, Uganda. Unpublished Master's thesis, Uganda Technology and Management University.
Gebremedhin, B., Getachew, A., & Amha, R. (2010). Results based monitoring and evaluation for organizations working in agricultural development: A guide for practitioners.
Gorgens, M., & Kusek, J. Z. (2010). Making monitoring and evaluation systems work: A capacity development toolkit. Washington, DC: World Bank.
IFAD. (2002). A guide for project M&E: Managing for impact in rural development. Rome, Italy; International Livestock Research Institute, Nairobi, Kenya.
Kasule, J. S. (2016). Factors affecting application of results based monitoring and evaluation system by Nurture Africa. Unpublished Master's thesis, Uganda Technology and Management University.
Kothari, C.R. (2004). Research Methodology: Methods and techniques. Daryaganj, New Delhi: New Age International (P) Ltd.
Krejcie, R. V., & Morgan, D. W. (1970). Determining sample size for research activities. Educational and Psychological Measurement, 30, 607–610.
Kusek, J. Z., & Rist, R. C. (2004). Ten steps to a results-based monitoring and evaluation system. Washington, DC: World Bank.
Mackay, K. (2006). Institutionalization of monitoring and evaluation systems to improve public sector management. Evaluation Capacity Development working paper series No. 15. Independent Evaluation Group.
Mibey, H. K. (2011). Factors affecting implementation of monitoring and evaluation programs in Kazi Kwa Vijana project by government ministries in Kakamega Central District, Kenya (Unpublished master's thesis). University of Nairobi, Kenya.
Mugenda, O. M., & Mugenda, A. G. (1999). Research methods: Qualitative and quantitative approaches. Nairobi, Kenya: ACTS Publishers.
Mulandi, N. M. (2013). Factors influencing performance of monitoring and evaluation systems of non-governmental organizations in governance: A case of Nairobi, Kenya. Unpublished Master's thesis, University of Nairobi, Nairobi, Kenya.
Nabris, K. (2002). Monitoring and evaluation. Civil Society Empowerment. Jerusalem: PASSIA.
Nachmias, C. F., & Nachmias, D. (2007). Research methods in the social sciences (7th ed.). London: Worth Publishers Inc.
Obure, J. O. (2008). Participatory monitoring and evaluation: A meta-analysis of anti-poverty interventions in Northern Ghana (Unpublished master's thesis). University of Amsterdam.
Ogwal, A., Oyania, F., Nkonge, E., Makumbi, T., & Galukande, M. (2020). Prevalence and predictors of cancellation of elective surgical procedures at a Tertiary Hospital in Uganda: a cross-sectional study. Surgery research and practice, 2020.
Sekaran, U. (2003). Research methods for business: A skill-building approach (4th ed.). New York: John Wiley and Sons.
Spooner, C., & McDermott, S. (2008). Monitoring and evaluation framework for Waverley Action for Youth Service. Social Policy Research Centre report, University of New South Wales.
Turabi, A. E., Hallworth, M. T., & Grant, J. (2011). A novel performance monitoring framework for health systems: Experiences of the National Institute for Health Research. England.
UNDP Pacific Centre (2011). A Capacity Development Plan for CSOs in the Pacific. UNDP Pacific Centre. Fiji Islands.
White, K. (2013). Evaluating to Learn: Monitoring & Evaluation best practices in Development INGOs. Available at dukespace.lib.duke.edu
Zogo, N. Y. E. (2015). The State of Monitoring and Evaluation of NGOs’ Projects in Africa. Hill & Knowlton Strategies Regional Office of Eastern Africa.