Judicial Leadership Research Paper


Judicial leadership is what leaders of court systems do to translate values, visions, and goals into exceptional organizational performance. It means mobilizing and inspiring employees and other stakeholders to get extraordinary things done for their organizations. This research paper explores the relationship between leadership – widely considered the most important element of effective justice system administration – and self-governed, well-managed, and operationally efficient courts. It focuses on organizational performance measurement, a precondition of effective self-governance of courts.

The theme of this research paper is that the relationship between leadership and effective organizational self-governance of courts, especially the role of performance measurement, is uniquely important for courts. Buttressed by the separation of powers doctrine and the principle of judicial independence, courts enjoy, at least in theory, a greater degree of freedom from scrutiny and interference than agencies of the executive and legislative branches of government. This freedom and institutional independence cannot maintain legitimacy in practice without effective self-governance, and effective self-governance depends on judicial leadership. In practical terms, effective self-governance of courts is the manifestation of the separation of powers doctrine and the principle of judicial independence.

Achieving a truly independent (and, practically speaking, a self-governing) judiciary requires judicial system leadership to overcome resistance from various entrenched interests that view courts as not deserving of self-governance, not capable of it, or both. To be effective, self-governance must be driven by a judicial system leadership committed to transparency and accountability and to rigorous performance measurement and performance management that ensure adherence to the highest standards of organizational performance.

The organization of this research paper is as follows. It begins with a brief definition of court performance measurement and performance management. The next section makes the case for the unique importance of transparency and accountability to the self-governance of courts. Stated simply, without transparency and accountability for performance, it is unlikely that judicial systems can be institutionally independent. Judicial leaders must ensure that a transparent and accountable self-governance framework exists not just in the “law on the books” but also in the “law in practice.” The following section describes the imperative of value-oriented performance measurement and how this imperative folds into a broad vision of self-governed, well-managed, effective, and operationally efficient courts.

The difficulties of thinking in terms of organizational performance measurement, and of its relationship to effective self-governance, are the subject of the next section of this research paper, under the heading “How Are We Performing?” It describes two related impediments that have stood in the way of the advance of performance measurement, transparency, accountability, and the self-governance of courts. The first is the reliance of courts and justice systems throughout the world on third-party monitoring and evaluations, which differ from performance measurement on various dimensions, to assess their performance. The second is a related tendency to think in terms of a research or program evaluation paradigm when addressing the question of How Are We Performing? – a paradigm ill-suited to answering the question except in a very narrow sense. The theoretical foundations and methodology of the disciplines of performance measurement and research overlap, of course, but they differ in important aspects – purposes, sponsorship, organization, audience, functions, timing, and data interpretation rules – that are critical for the self-governance of courts.

The final section of this research paper concludes with the proposition that judicial system leaders should focus on ensuring effective self-governance of courts that is transparent and accountable and on committing to the primacy of performance measurement to drive success.

Court Performance Measurement Defined

Court performance measurement is the process of monitoring, analyzing, and using performance data on a regular and continuous basis for the purposes of transparency and accountability and for improvements in efficiency and effectiveness. This definition encompasses both performance measurement per se and the use of performance data in management (sometimes referred to as performance management). Measures of court performance include, for example, those promulgated by the National Center for State Courts (National Center for State Courts 2005, 2009):

  • Court user/citizen assessment and satisfaction with court services including access to the courts, fairness and integrity, timeliness and expedition, and their general trust and confidence in the court
  • Clearance rates, that is, the number of outgoing cases as a percentage of the number of incoming cases
  • Case finalization time (also referred to as “time to disposition,” “on-time case processing,” and other terms)
  • Time in custody prior to trial (median number of days in pretrial detention)
  • Reliability and accuracy of court files
  • Backlog, that is, percentage of cases in the system longer (“older”) than established timeframes
  • Trial date certainty, expressed as the proportion of all trials that are held when first scheduled
  • Court employee engagement (a proxy for court success)
  • Recovery of criminal and civil court fees as a proportion of fees imposed
  • Money expenditure per case (net costs per finalization)

These measures, used by United States courts and, on a more limited basis, by courts in other parts of the world, are linked to success factors and core values; they form a set of balanced measures of a court’s or court system’s performance; they focus on outcomes, that is, how well the courts are making things better as a result of their efforts rather than how much effort has been expended or how many resources have been used; they represent strategic planning in action; and, finally, they provide clarity and strategic focus to guide judicial leadership.

With the rapid advance of data analytics and business intelligence into judicial systems around the world, the capacity of courts to measure and manage their own performance has grown exponentially (Keilitz 2010). Court leaders and managers today can access and analyze performance data in seconds, a task that in the traditional world of courts would take weeks or months. It is not yet clear, however, to what extent courts have actually used performance data on a regular basis to guide planning, budgeting, strategy, and management decisions, a state of affairs that is probably true of governments in general, at least in the United States (Hatry 2010).
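
To make these definitions concrete, the following is a minimal sketch in Python of how three of the measures listed above (clearance rate, time to disposition, and backlog) might be computed from raw case records. The case data, the reporting period, and the time standard are hypothetical, and the calculation is illustrative rather than an official CourTools specification.

```python
# Illustrative sketch: computing clearance rate, time to disposition, and
# backlog from hypothetical case records (not an official CourTools
# implementation).
from datetime import date
from statistics import median

# Hypothetical case records: (filed_date, disposed_date or None if pending)
cases = [
    (date(2023, 1, 10), date(2023, 4, 2)),
    (date(2023, 2, 5),  date(2023, 9, 20)),
    (date(2023, 6, 1),  None),               # still pending
    (date(2023, 7, 15), date(2023, 8, 30)),
    (date(2022, 11, 3), None),               # pending and "old"
]

period_start, period_end = date(2023, 1, 1), date(2023, 12, 31)
time_standard_days = 365  # illustrative timeframe for counting backlog

incoming = [c for c in cases if period_start <= c[0] <= period_end]
outgoing = [c for c in cases if c[1] and period_start <= c[1] <= period_end]

# Clearance rate: outgoing cases as a percentage of incoming cases
clearance_rate = 100 * len(outgoing) / len(incoming)

# Time to disposition: median days from filing to disposition
median_days_to_disposition = median((d - f).days for f, d in cases if d)

# Backlog: percentage of pending cases older than the established timeframe
pending = [c for c in cases if c[1] is None]
old_pending = [c for c in pending if (period_end - c[0]).days > time_standard_days]
backlog_pct = 100 * len(old_pending) / len(pending) if pending else 0.0

print(f"Clearance rate: {clearance_rate:.1f}%")
print(f"Median time to disposition: {median_days_to_disposition} days")
print(f"Backlog: {backlog_pct:.1f}% of pending cases exceed the time standard")
```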

Judicial Independence, Transparency, And Accountability: The Law In Practice

In the United States, an independent judiciary is taken for granted as a matter of principle. Central to United States constitutional law is the doctrine of the separation of powers and the dynamic of checks and balances among three separate and independent functions of government, what we today see as the legislative, executive, and judicial branches. Article III of the United States Constitution vests judicial power in the third branch of government. The constitutions of 39 other states also expressly require that governmental power be divided among three coequal, separate branches. The purpose of this separation of powers is to prevent the concentration of governmental power and to provide for checks and balances. At the same time, the dynamic of checks and balances renders the judiciary dependent on the other two branches for resources.

Unfortunately, this constitutional framework and related legislative and policy mandates are not self-executing in practice, in the United States or elsewhere in the world. There are few constitutions and laws, if any, that require judicial systems to be self-governing or that specifically authorize the means to ensure that they are. Institutional independence does not necessarily mean self-governance. As some scholars have argued, the difference between the United States, where self-governance is arguably more in evidence, and other judiciaries has less to do with prevailing theories of how popular sovereignty relates to jurisprudence and political theory than with the institutional capacities of courts to act independently in practice.

Three critical elements of modern democracies – and, arguably, the preconditions of a market-based economy – are a capable state, the state’s subordination to the rule of law, and government accountability to its citizens. There can be little doubt that the ideas and principles underlying these elements, when enshrined in law, are fundamental to why some states fail and others prosper. Yet the “law on the books” is not sufficient (Dakolias 2005). It is, of course, important to know what the law establishes as formal practice, but this is not enough to understand the reality on the ground. Creating remedies for case processing delay and court congestion, for example, requires examining causes beyond the law, in organizational structure and practice, including the size of the demand for dispute resolution and the points in the processing of cases where perverse incentives contrary to efficiency might exist.

Rule of law reform, which has been underway throughout the world for quite some time, began with a focus on drafting and enacting laws, codes, and rules. These initiatives, however, soon gave way to attention paid to the “law in practice” as opposed to the “law on the books,” as it became clear that laws are quite literally of little use if no efforts are devoted to establishing the means of their implementation and enforcement.

In a 2002 study of 75 countries, Lars Feld and Stefan Voigt examined the effects of an independent judiciary (Feld and Voigt 2003). They distinguished between de jure independence (that expressed in formal law) and de facto independence (that which is implemented and enforced). Their de jure indicator was based on 12 variables comprising 23 characteristics concerning the legal basis for judicial independence (including institutional arrangements, appointments, tenure, salaries, and transparency). Their de facto indicator, based on eight variables measured over time (such as effective average term length, changes in the number of judges, judicial income, and court budgets), captured the degree of judicial independence in practice. Not surprisingly, the main finding of the study is that judicial independence “on the books” is insufficient. They found that de jure independence was not correlated with economic growth, whereas de facto independence was conducive to economic growth. Indeed, de jure and de facto independence appeared to be almost completely unrelated: there was no overlap whatsoever between the top ten countries on the two rankings.
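
To illustrate the mechanics of indicator construction (and only that; this is not Feld and Voigt’s actual methodology or data), the toy sketch below aggregates a hypothetical de facto independence score from a few normalized component variables. The country values, the variable choices, and the equal weighting are all assumptions made for the example.

```python
# Toy sketch of aggregating a composite "de facto independence" indicator
# from normalized component variables (hypothetical data, not Feld & Voigt's).
def normalize(values):
    """Rescale raw values to the 0-1 range (min-max normalization)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

countries = ["A", "B", "C"]
avg_term_length_years = [12.0, 6.0, 9.0]    # hypothetical
real_budget_growth_pct = [2.5, -1.0, 0.5]   # hypothetical
judge_turnover_rate = [0.05, 0.30, 0.15]    # hypothetical; lower is better

components = [
    normalize(avg_term_length_years),
    normalize(real_budget_growth_pct),
    [1.0 - x for x in normalize(judge_turnover_rate)],  # invert: lower turnover scores higher
]

# Simple unweighted average of the normalized components per country
for i, country in enumerate(countries):
    score = sum(component[i] for component in components) / len(components)
    print(f"Country {country}: de facto indicator = {score:.2f}")
```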

Because courts traditionally have a greater degree of freedom from scrutiny compared to other public institutions, courts need to balance this independence with transparency and accountability for performance. As noted by the former president of the International Association for Court Management, Marcus Zimmer, judicial leaders often “fail to fathom that attaining such independence is conditioned upon their willingness to accept the commensurate levels of individual and institutional accountability.” Institutional independence, he writes, means “having in place an accountable self-governance framework built on effective internal control and enforcement mechanisms to ensure the confidence of the other branches and the trust of the people in its capacity to independently administer justice” (Zimmer 2011, p. 138). Regular and continuous performance measurement addresses this need.

Performance measurement promotes an independent judiciary by providing transparency and accountability.

The Performance Measurement Imperative

An imperative of value-oriented performance measurement and performance management folds well into a broad vision of judicial leadership of self-governed, well-managed, effective, and operationally efficient courts. This imperative rests on five basic assumptions that speak to the relationship between judicial leadership and performance measurement.

First, performance matters. Successful leaders show a strong preference for outcome measurement that gauges the desired results of programs and services over measures of inputs (such as the number of staff, costs, or hours worked by judges and staff). Nothing else really matters as much as results defined in terms of quality, that is, the achievement of good results as efficiently as possible. A court’s success should not be measured by how many hearings are held, or even by the number of cases resolved, or by the number of programs and processes it operates. What matters are outcomes that matter to the people served by courts. Self-governance of courts should be organized for performance.

Effective performance measures focus on ends, not the means to achieve them. They emphasize the condition or status of the recipients of services or the participants in court programs (outcomes) rather than the internal aspects of processes, programs, and activities (inputs and outputs). They focus on results rather than the quantification of resources or level of effort. Traditionally, court managers have relied on measures of volume or frequency in three categories: (a) the amount of work demanded (such as the number of cases filed), (b) the number of products or services delivered (such as the number of cases resolved or hearings held), and (c) the number of people served. Increasingly, there is recognition that while such measures show demand and how much effort has been expended to meet that demand, they reveal nothing about whether the effort has made any difference – that is, whether anyone is better off as a result.

Second, performance is not about the numbers. Performance measurement uses numbers, but it is not about the numbers. It is about the perception, the understanding, and the insight required of effective leadership. Ultimately, it is not the measure itself that is important, but rather the questions that it compels judicial leaders to confront (Spitzer 2007), questions such as the following (a brief computational sketch follows the list):

  • How well is the court, court system, or justice system performing?
  • Where is the court now (performance level, baseline)? What is the current performance level compared to established upper and lower “controls” (e.g., performance targets, objectives, benchmarks, or tolerance levels)?
  • How well is the court performing over time? Is performance better, worse, or flat? How much variability is there? (trend analysis)
  • Why is this particular performance happening (analysis and problem diagnosis)? What happened to make performance decline, improve, or stay the same? What are some credible explanations?
  • What is the court doing to improve or maintain performance levels? (planning future outcomes)
  • What actions should be started, continued, or stopped altogether as a result of what the measure reveals? What should be done to improve poor performance, reverse a declining trend, or recognize good performance? (strategy formulation)
  • What performance targets and goals should we set for future performance (goals)?
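
The sketch below, using hypothetical monthly clearance-rate figures and illustrative target and tolerance values, shows the kind of baseline, control, and trend analysis these questions call for.

```python
# Illustrative trend-and-control analysis for a single performance measure
# (hypothetical monthly clearance rates and control values).
from statistics import mean, pstdev

monthly_clearance = [97.0, 95.5, 99.2, 101.0, 94.8, 92.3]  # percent, oldest to newest
target, lower_tolerance = 100.0, 95.0                       # illustrative controls

baseline, current = monthly_clearance[0], monthly_clearance[-1]
trend = ("improving" if current > baseline
         else "declining" if current < baseline
         else "flat")

print(f"Baseline: {baseline:.1f}%  Current: {current:.1f}%  Trend: {trend}")
print(f"Average: {mean(monthly_clearance):.1f}%  "
      f"Variability (std dev): {pstdev(monthly_clearance):.1f} points")

if current < lower_tolerance:
    print("Below the lower tolerance level: diagnose causes and plan corrective action.")
elif current >= target:
    print("Target met or exceeded: recognize and sustain good performance.")
else:
    print("Within tolerances but short of target: continue monitoring.")
```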

Third, courts must count what counts and measure what matters. Figuratively and literally, performance does not count unless it is related to the things that really matter and are critical to the success of a court. Key success factors have been referred to in the literature of organizational performance measurement as major performance areas, standards of success, perspectives, domains, performance criteria, key results factors, and key outcomes. Whatever they are called, they form the framework of a court’s accountability and transparency to the public and other stakeholders.

Alignment of performance measurement with purpose and fundamental responsibilities is vital. Well-conceived performance measures align an organization’s efforts with the achievement of its mission and link directly to a court system’s mission and strategic goals.

Fourth, performance measurement is a powerful antidote to too much information. Information overload is one of the biggest irritations of modern life, and it seems to be getting worse. It can make justice system executives and managers feel anxious and powerless, reduce their creativity, and render them less productive. A profusion of phrases describes the anxiety and anomie caused by too much information: data asphyxiation, data smog, information fatigue syndrome, and cognitive overload. Surveys have found that most managers believe the data deluge has made their jobs less satisfying or hurt their relationships; some think it has damaged their health, and many think most of the information they receive is useless. The explosion of information hitting the courts is going off within systems too fragmented and disorganized to absorb it. Performance measurement, supported by business intelligence tools like performance dashboards, is an antidote to this data fog: it focuses on what counts and filters out the rest.

Finally, measuring and managing court performance is an essential survival skill for court leaders and managers. The right performance measures effectively delivered are clear, unambiguous, and actionable. Focus and clarity are factors of effective leadership. Above all, leaders need to be clear. Performance measures such as clearance rates or court user/citizen satisfaction focus on a limited number of success factors like access, fairness, and timeliness of case processing. They count only what counts and measure only what matters.

The discipline of performance measurement provides a conceptual shortcut to a host of organizational competencies like strategic planning, resource management, and communication with stakeholders. The benefits of an effective court performance measurement and management system are the same, for example, as those of strategic planning – that is, accountability, consensus building, focus, coordination, control, learning, communication, hope, and inspiration.

To identify the right performance measures, a court system must address the same fundamental questions about guiding ideals, values, mission, goals, and broad strategies as it must address in strategic planning. When it identifies a core performance measure such as court user/citizen satisfaction with court services, for example, it communicates a clear, simple, and penetrating theory of its “business” – its ideals and purpose – that informs decisions and actions.

How Are We Performing?

The question How Are We Performing? lies at the heart of self-governance and effective leadership of courts. The capacity and political will to address this self-directed question regularly and continuously, using the tools of performance measurement, is the hallmark of a successful court organization. More and more court leaders and managers are turning to performance measurement to drive success. However, progress is slow, impeded by two entrenched ways in which the judicial sector has tended to measure its success: (1) reliance on third-party monitoring and evaluations of court performance and (2) adherence to a research paradigm for assessing court performance.

Self-governance, transparency, and accountability of courts will depend on the degree to which these two impediments are attenuated. Court leaders will need to champion and support the self-assessment of performance by courts, instead of relying on third-party evaluations. They will need to seek the replacement of the methodologies of the disciplines of research or program evaluation (or evaluation research), which have dominated justice sector assessment in the past, with those of performance measurement and management. These suggestions are consistent with the values, principles, and tools of the National Center for State Courts’ High Performing Courts Framework (Ostrom and Hanson 2010), which is oriented toward United States courts, and the International Framework for Court Excellence (International Consortium for Court Excellence 2010).

Self-Assessment Of Court Performance

As with most changes in life, self-directed change is the most meaningful and long lasting. Self-assessment, and not third-party monitoring and program evaluation (or evaluation research), is the hallmark of successful courts. It is integral to effective self-governance. A successful court has the capacity and the political will for self-directed rigorous performance measurement and management that addresses the question How Are We Doing? The “we” in the question suggests the critical differences between court performance measurement and third-party evaluations of courts.

Monitoring And Evaluation

Performance measurement is not yet the norm in courts in the United States or elsewhere, although it has gained a strong foothold in the United States and in large parts of the developed and developing world. Most assessments of programs, processes, and reform initiatives in courts are instead accomplished by monitoring and evaluations instigated and conducted by third parties, including funding agencies, donors, aid providers, and their agents (researchers, analysts, and consultants). The abiding concern of these third parties is the return on their investments, and this concern does not necessarily align with the expressed purposes and fundamental responsibilities of courts, at least as these might be conceived by court leaders and managers and in seminal authorities like the Trial Court Performance Standards (Keilitz 2000). For the most part, the focus of these third-party assessments is a specific initiative, program, or process (e.g., juvenile drug courts, small claims mediation, or summary jury trials).

Performance data produced by monitoring and evaluation efforts are used primarily in service of decisions to increase, decrease, or redirect funding or other support. While some performance data may be shared with courts and justice systems, as a practical matter most are collected, analyzed, interpreted, and used, first and foremost, by third parties, that is, funders, donors, aid providers, and their agents. The measures of success are defined, and the results interpreted, by these third parties with little or no shared responsibility or accountability (ownership) by the courts or justice systems implementing the programs, processes, and reform initiatives.

Because of their time requirements and costs, these evaluation efforts are, at best, limited and generally unsatisfactory responses to the question How Are We Doing? The research paradigm within which these third-party monitoring and evaluation efforts operate makes the results less than responsive to the question. Large-scale studies can take 2–3 years before data reports become available. Therefore, the results of regular and continuous performance measurement by courts themselves are likely to be the major source of information for addressing the question.

Indexing Performance

Another type of third-party monitoring and evaluation of the performance of courts takes the form of indexing of justice sector performance. Indexes are useful tools. The idea that complex things can be pinned down and quantified on a simple scale seems universally appealing. Indexes that reduce justice sector performance to a single number for purposes of comparing, ranking, and rating countries are being used throughout the world to understand everything from governance, corruption, economic vitality, health, and education to the quality of life. Governments and reformers take such indexes seriously, and they are closely watched. “There is nothing,” wrote The Economist in an October 9, 2010, report on the results of the Mo Ibrahim Foundation’s Index of African Governance, “like a bit of naming, shaming, and praising.”

The crowded field of performance indexes includes comprehensive indicators and indexes that encompass entire countries but include aspects of justice, like the Mo Ibrahim Foundation’s Index, and others more narrowly focused on the rule of law, like the World Justice Project’s WJP Rule of Law Index™ and the American Bar Association’s Judicial Reform Index. The World Justice Project’s WJP Rule of Law Index™ incorporates ten elements of the rule of law – such as limited government powers, fundamental rights, and clear, publicized laws – and 49 associated sub-factors that make general reference to various justice “systems,” but specific reference to courts is conspicuously absent.

While the indexes are successful in getting people’s attention by “naming, shaming, and praising” the jurisdictions rated and ranked, buy-in of the leaders and managers of courts and court systems may be limited. Well-known indexes like the World Justice Project’s WJP Rule of Law Index™ and the American Bar Association’s (ABA) Rule of Law Initiative’s Judicial Reform Index might be seen to reflect the ethos of the sponsoring organizations, and not necessarily the values, purposes, and fundamental responsibilities of courts. Both of these well-known indexes rely heavily on polls of commissioned experts who assess the factors the third parties deem important to judicial reform.

Many court leaders and managers and most elected officials are interested in comparing courts. However, they are not enthusiastic about having such comparisons reported externally unless they stack up well against others (Keilitz 2005). Perhaps stemming in part from a broad interpretation of judicial independence that does not embrace transparency and accountability, many judges have a viscerally negative reaction to public reporting on the quality of judicial services, and some may be appalled that their judicial systems are ranked numerically by outside “experts” on the basis of what they perceive as misinformation.

Third-party assessment of performance, whether it takes the form of indexing or of monitoring and evaluation, is the antithesis of self-governance. Because it is not self-directed by courts, it is less likely to be embraced by their leaders and managers or to lead to reform. As might be expected, when performance assessment is, or is perceived to be, wholly initiated and executed by external organizations, more energy and resources may be expended by the leaders of courts and court systems on refuting poor evaluation results or low rankings than on developing strategies for sustained reform.

Performance Measurement

Both the disciplines of performance measurement and research adhere to the scientific method and use statistical thinking to draw conclusions. But, as already suggested above, performance measurement and research, or evaluation research, are vastly different in their purposes, functions, sponsorship, uses, and the way they are funded and structured. Sorting out the differences between them, and choosing to employ one over the other, is not just academic hairsplitting.

First, the purposes of the two disciplines are quite different. At a very fundamental level, the purpose of performance measurement is to answer the question How Are We Doing? in response to the demands for transparency and accountability from stakeholders and to provide a basis for improvement. Performance measurement can give clues to why outcomes are as good or bad as they are, but it cannot go the full distance of determining why things are as they are. Performance measurement can help to identify variations in performance and to isolate where and when those variations occur (e.g., an upward trend in the public’s rating of the courts is largely due to attorneys’ increased satisfaction with case processing timeliness after the court initiated electronic filing) so that decisions and actions can target improvements. It does not determine causes. Causal inference is the domain of research, which helps us to understand why something has occurred. (Of course, an important value of performance data is to trigger in-depth evaluation research.) As noted above, the purpose of evaluation research in the justice sector, for the most part, is to answer the question “What has worked and what has not, and why?” in order to justify donors’ or funders’ investments in various initiatives, programs, and processes.

Second, performance measurement and research differ in their sponsorship and audience – that is, who is doing it and for (or “to”) whom. Performance measurement is done by the courts, for the courts. Results are made known to court leaders and managers. Distribution of performance data to the public and other stakeholders is done at the discretion and direction of the court, preferably in real (or near-real) time and in a wholly transparent manner. Evaluation research, on the other hand, is more often than not sponsored or instigated by third parties (e.g., administrative offices of the courts or outside funding agencies). In extreme circumstances, courts and other justice sector institutions are mere “subjects” of the research. Those conducting the research are under no obligation to share the research results with court leaders or managers except as a courtesy or as a quid pro quo for the courts’ participation in the research.

Third, the functions of performance measurement are specific and targeted: establishing a baseline for current performance, setting organizational goals, assessing whether performance is within determined boundaries or tolerances (controls), identifying and diagnosing problems, determining trends, and planning. Performance measurement is done for the utilitarian and practical purpose of making improvements in court programs, services, and policies. Court leaders and managers use performance information for a number of management purposes, such as:

  • Translating vision, mission, and broad goals into clear performance targets
  • Communicating progress and success succinctly to the public and other stakeholders
  • Responding to legislative and executive branch representatives’ and the public’s demands for accountability
  • Formulating and justifying budget requests
  • Responding quickly to performance downturns (corrections) and upturns (celebrations) with day-to-day resource allocation
  • Motivating employees to make improvements in programs and services
  • Setting future performance expectations based on past and current performance levels
  • Insulating the court from inappropriate performance audits and appraisals imposed by external agencies or groups

Evaluation research, on the other hand, seeks truth about the worth and merit of an initiative, program, or process and is intended to add to our general knowledge and understanding, especially regarding future investments in those initiatives, programs, or processes.

Fourth, performance measurement and evaluation research adhere to different design and data interpretation protocols. Consistent with self-governance, performance measurement is focused on the performance of individual courts with the aim of individual accountability. Research, on the other hand, is interested in the generalizability of findings to all courts.

Of course, both performance measurement and evaluation research must adhere to the requirements of the scientific method. Both use quantitative and qualitative methods including surveys and questionnaires, interviews, direct observation, recording, descriptive methods, tests and assessments, and statistical analysis. But these requirements and methods are applied differently in performance measurement and evaluation research. For example, sample sizes may be smaller and levels of confidence lower in performance measurement primarily because replication of results is done on a regular and continuous basis as a critical matter of design. Evaluation research, on the other hand, is episodic. It is done when time and funds permit.

The matter of replication of results highlights a critical design difference between performance measurement and evaluation research. Basically, replication means repeating the performance measurement or evaluation research to corroborate the results and to safeguard against overgeneralizations and other false claims. Repeated measurements – that is, replication – on a regular and continuous basis are part of the required methodology of performance measurement. Analyzing trends beyond initial baseline measurement requires replication of the same data collection and analysis on a monthly, weekly, daily, or, in the case of automated systems, on a near real-time basis. In contrast, replication in research is a methodological safeguard that is universally lauded by scientists, but seldom done in practice.
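
As a hedged illustration of the point about sample sizes and replication, the short sketch below computes an approximate 95% confidence interval for a hypothetical monthly court-user satisfaction survey. The survey counts are invented, and a real design would also need to account for sampling method and response bias.

```python
# Illustrative sketch: modest monthly survey samples with replication
# (hypothetical counts; normal-approximation confidence interval).
from math import sqrt

def proportion_ci(satisfied, sample_size, z=1.96):
    """Approximate 95% confidence interval for a satisfaction proportion."""
    p = satisfied / sample_size
    margin = z * sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Monthly replications: (number satisfied, number surveyed)
monthly_surveys = [(82, 100), (78, 100), (85, 100), (80, 100)]

for month, (satisfied, n) in enumerate(monthly_surveys, start=1):
    p, lo, hi = proportion_ci(satisfied, n)
    print(f"Month {month}: {p:.0%} satisfied (95% CI {lo:.0%}-{hi:.0%}, n={n})")
```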

Conclusion

Judicial leadership is the essential ingredient of the effective and efficient administration of courts. Effective judicial leaders fold the imperative of value-oriented performance measurement into a broad vision of self-governed, well-managed, effective, and operationally efficient courts. They are committed to transparency and accountability. They recognize that institutional independence and self-governance require courts to be open and accountable. Effective judicial leaders focus on outcomes and use performance measurement as a tool to mobilize and inspire employees and other stakeholders and to drive improvements.

Consider how this relationship between leadership and self-governed, well-managed, and operationally efficient courts plays out in the minds of two successful judicial leaders, Christine M. Durham and Daniel Becker, the chief justice and the state court administrator of Utah, respectively:

We in the courts should know exactly how productive we are, how well we are serving public need, and what parts of our system and services need attention and improvement. This includes measuring the accessibility and fairness of justice provided by the courts as measured by litigants’ perceptions and other performance indices. And we should make that knowledge a matter of public record. (Durham and Becker 2011)

There is no doubt that achieving a truly independent and self-governing judiciary requires vigorous judicial leadership to overcome resistance from various entrenched interests that view courts as not deserving of or not capable of self-governance. Most states in the United States today have some form of legislation requiring performance measurement of government institutions and agencies (see Lu et al. 2009). Who does it, how it is done, and whether it will lead to successful self-governance of courts will be determined by judicial leadership.

Bibliography:

  1. Dakolias M (1999) Court performance around the world: a comparative perspective. Available at: https://digitalcommons.law.yale.edu/yhrdlj/vol2/iss1/2/
  2. Dakolias M (2005) Methods for monitoring and evaluating the rule of law. Paper presented at the Center for International Legal Cooperation’s 20th Anniversary Conference. Applying the “Sectoral Approach” to the Legal and Judicial Domain, The Hague, Netherlands, 22 Nov 2005
  3. Durham CM, Becker D (2011) A case for court governance principles. Perspectives on state court leadership. Harvard University Executive Session for State Court leaders in the 21st century
  4. Feld LP, Voigt S (2003) Economic growth and judicial independence: cross country evidence using a new set of indicators. Eur J Polit Econ 19(3):497–527
  5. Hatry HP (2010) Looking into the crystal ball: performance management over the next decade. Public Administration Review, Special Issue, December 2010, S208–S211
  6. International Consortium for Court Excellence (2010) International framework for court excellence. Available at: http://www.courtexcellence.com/
  7. Keilitz I (2000) Standards and measures of court performance. In criminal justice 2000, vol 4. Measurement and analysis of crime and justice. U.S. Department of Justice, Office of Justice Programs, National Institute of Justice, Washington, DC, July 2000, pp 559–593. Available at: http://www.justicestudies.com/pubs/tcps.pdf
  8. Keilitz I (2005) How do we stack up against other courts? The challenges of comparative performance measurement. Court Manag 19(4):29–34, Winter 2004–2005
  9. Keilitz I (2010) Smart courts: performance dashboards and business intelligence. In: Future trends in state courts 2010. National Center for State Courts, Williamsburg. Available at: https://ncsc.contentdm.oclc.org/digital/collection/ctadmin/id/1613/
  10. Lu Y, Willoughby K, Arnett S (2009) Legislative results: examining the legal foundations of PBB systems in the states. Publ Perform Manag Rev 33(4):671–676
  11. National Center for State Courts (2005) CourTools: trial court performance measures. National Center for State Courts, Williamsburg. Available at: http://www.courtools.org/trial-court-performance-measures
  12. National Center for State Courts (2009) CourTools: appellate court performance measures. National Center for State Courts, Williamsburg. Available at: http://www.courtools.org/appellate-court-performance-measures
  13. Ostrom B, Hanson R (2010) Achieving high performance: a framework for courts. Working paper series. National Center for State Courts, Williamsburg. Available at: https://ncsc.contentdm.oclc.org/digital/collection/ctadmin/id/1874/
  14. Spitzer DR (2007) Transforming performance measurement: rethinking the way we measure and drive organizational success. AMACOM, New York
  15. Zimmer MB (2011) Judicial institutional frameworks: an overview of the interplay between self-governance and independence. Utah Law Rev 2011(1):121–139