Policy Information Research Paper


Information is used by governments for a wide variety of policy purposes. It can be used in the formulation of public policies, by helping to identify the need for a public policy, or to select one among alternatives. It can be used in the implementation of policies, to help target or improve programs. It can be used in the evaluation of policies, such as measures of progress toward desired outcomes, or of the intended or unintended social, economic, and environmental effects. Here, however, policy information is taken to mean information used in the formulation, design, and selection of public policies. It may also include information from the evaluation of public policies to the extent that effects of the policies inform consideration of alternative policies.



Policy information differs from political information, such as polls of public opinion or of support for a policy among legislators. Policy information is objective. It may be quantitative or qualitative, but it is based on evidence, rather than opinions, attitudes, or beliefs (see Citro and Hanushek 1991). Policy information comprises both data and analysis. Indeed, it is often the result of policy analysis, by which is meant the objective, prospective (and sometimes retrospective) evaluation of the social and economic implications of changes in public policy. For example, in considering choices among social welfare policies, decision makers typically want to know what populations would be affected, in what ways, and at what cost. Policy analysis provides answers to these ‘what if’ questions. In order to inform choices among policy alternatives, policy analysis involves forecasts or projections into the future. As a result, its estimates are more uncertain than evaluations of the past. Three key ingredients of policy analysis are data; analytical models, including their documentation and validation; and effective communication of the results to policy makers, in particular of the assumptions in models and the uncertainty in estimates.

1. Data

Accurate and relevant data are critical for policy analysis. Government surveys are a common source. Since the behavior of individuals in response to decisions of employers or government agencies is often what is being modeled, data may be needed on the interactions of individuals with organizations and institutions. For example, an individual’s access to health care services will depend on providers, insurers, and government health care programs and policies. Often, however, data are available only on a single unit, such as an individual, family, or firm. In this situation, analysts must meld together data from different sources.




Useful sources of data to augment survey data are administrative records, such as health care claims, tax returns, earnings reports for social security, and case files from public assistance programs. Such data may need to be linked, or matched, to the survey data. Increased concern in government statistical agencies about possible violations of the confidentiality of survey data, however, militates against such linking. Analysts in this situation must make assumptions whose validity they may not be able to determine without the linked data.

As important as the availability of data is, it is not sufficient. Data for public policy analysis must reflect relevant and accurate measurement. For this reason, government statistical agencies need to conduct research on concepts, definitions, and measurement, and assess the quality of their data. In the highly decentralized statistical system in the United States, however, it is not uncommon for agencies to report different measures from different surveys of ostensibly the same concept, such as income, employment status, or disability. Efforts may be made to explain the differences, or a result from one survey may be adjusted, or controlled, to agree with the result from a more prominent survey. Good policy analysis, however, requires integrated databases with the best estimates that can be developed from multiple sources.

Policy analysts, as well as other sophisticated users, cope with the lack of integrated data by performing additional adjustments to make the data suitable for modeling and analysis. But not all users have the information, much less the resources, to do so. And those who do may either duplicate the efforts of others or, much worse, wind up with different results.

2. Models

Models are fundamental tools of policy analysis. Microsimulation models are one important type of policy analysis tool. With data from individual records, a microsimulation model simulates how the records change over time and as a result of a proposed policy. To predict the effect of a change in tax law, for example, the model processes records for people as if they were filling out their tax forms. As Citro and Hanushek (1991) note:

Models based on microsimulation techniques are conceptually highly attractive because they operate at the appropriate decision level and take into account the diverse circumstances and characteristics of the relevant population, whether it be low-income families, taxpayers, or health care providers. Such models are able to respond to important needs of the policy process for information about the effects of very fine-grained as well as broader policy changes, the effects of changes that involve complicated interactions among more than one government program, and the effects of changes on specific population groups as well as total program costs and caseloads.
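The tax-form example above can be illustrated with a toy microsimulation. The sketch below is a minimal illustration only: the income records, the exemption, and both tax schedules are invented, and a real microsimulation model would operate on weighted survey or administrative records with far richer rules.

```python
# Toy microsimulation: apply current and proposed tax rules to each
# individual record, then aggregate to get the revenue effect and the
# distribution of winners and losers. All numbers are hypothetical.

def tax_current(income):
    """Current law (invented): flat 20% on income above a $10,000 exemption."""
    return max(income - 10_000, 0) * 0.20

def tax_proposed(income):
    """Proposed change (invented): 15% on the first $50,000 of taxable
    income, 30% above that."""
    taxable = max(income - 10_000, 0)
    return min(taxable, 50_000) * 0.15 + max(taxable - 50_000, 0) * 0.30

# Hypothetical individual income records, one per filer.
records = [12_000, 35_000, 60_000, 90_000, 250_000]

current = [tax_current(y) for y in records]
proposed = [tax_proposed(y) for y in records]

# Aggregate effect and who is affected, the two questions policy
# makers typically ask of such a model.
revenue_change = sum(proposed) - sum(current)
losers = sum(p > c for p, c in zip(proposed, current))
print(f"Revenue change: {revenue_change:+,.0f}")
print(f"Filers paying more: {losers} of {len(records)}")
```

Because the model processes each record individually, the same run yields both the total cost estimate and fine-grained distributional detail, which is precisely the strength the quotation above attributes to microsimulation.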

Microsimulation models, however, tend to be highly complex and expensive to develop and maintain. Because of the lack of integrated data, analysts using the models must often patch together a variety of data and research results of varying, and often unknown, quality. As a result, many assumptions must be made, some of them unsupported. Because of these problems, some analysts may rely on simpler models, akin to accounting methods, to understand the effects of changes in a program. But the value of microsimulation models in informing a policy debate is that they can model behavioral responses to changes in policies and programs.

Many other statistical models are also used in policy analysis, from linear regression models, to simultaneous equation models, to complex macroeconomic models. Macroeconomic models are also tools for ‘what if’ questions, although they are often used to make unconditional forecasts, such as of the rate of growth of the economy, rather than conditional projections. In addition to these models, a cornucopia of social science methods is also employed in policy analysis, including cost–benefit studies, risk assessments, meta-analyses, and even social experiments.

While microsimulation models seek to answer ‘what if’ questions, other models seek to answer ‘why’ questions, such as why people save, or fail to save, and to provide estimates of relationships that may be used in microsimulation models. Sometimes the research model may not be explicit, as when a database is constructed by piecing together data from different sources, in particular to determine a trend for which there is no survey-based indicator, such as the retention of children in the same grade in school. Sometimes the model may be difficult to ascertain, because political factors must also be taken into consideration; for example, in formulas used in legislation to allocate funds to school districts according to the numbers of children from families with incomes below the poverty level.

3. Uncertainty

Even when the model itself is explicit, however, the uncertainty in its estimates may not be. The estimates from models in policy analysis are conditional forecasts of the effects of a policy change. The effects may be poorly understood. They may be unknown or depend on unknown factors. The model relating cause and effect may be incorrect, or it may be uncertain which of many factors contribute to the effects. Moreover, the lack of data is a ubiquitous problem.

Understanding the effects of a proposed policy is important, however, if decisions on it are to balance the benefits against the costs. Key to that understanding are reasonable judgments about the quality of estimates of the likely effects of a proposed policy change. Policy makers need to understand the uncertainty or variability in the estimates and to judge the track record of a policy analysis model. Unfortunately, they are rarely given the information that would enable them to do so.

3.1 Model Validation

Assessing the quality of model estimates is a common problem in statistical research, but the problem is compounded in policy analysis, because it involves conditional forecasts. Some, if not all, of the policy alternatives studied may not be adopted. And, even for alternatives that are, data may never become available on the outcome. Yet it is crucial to understand the uncertainty in model estimates, if only to evaluate the utility of different models. Model validation is the process for measuring the uncertainty or variability in a model’s estimates and identifying the sources of that uncertainty (Citro and Hanushek 1991).

One technique of model validation is external validation, in which the model’s estimates are compared with the outcomes the model is forecasting. Because the outcomes may never be known, however, two variants of external validation may be used. One is ex post forecasting, in which one takes a policy or program change adopted in the past and, applying a model to data available then, forecasts the effects in a more recent past year when outcome data are available to compare. Another variant is back casting, in which the model and current data are used to simulate the effects of a reversion to program conditions in the past, and the results are compared to measures of the effects based on data in that period.

Internal validation refers to studies of the model, its components, and how it is used, as well as of the data used. Measures of internal validity derive from the data used and model estimates obtained, without any comparison to external reality. They include sampling variability and other errors in the data or other inputs, and errors of model misspecification. Estimates of variability may employ standard sample variance calculations or more modern resampling techniques. Sensitivity analysis looks at the effects on model estimates of changes in one or more assumptions, specifications, or components of the model. It is used to determine the factors that most affect the results, or how robust the results are to changes in assumptions.
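The resampling and sensitivity techniques just described can be sketched in a few lines. In the illustration below, the survey values, the toy cost model, and the take-up (participation) assumption are all hypothetical; the point is the shape of the two procedures, not the numbers.

```python
# Two internal-validation measures on a toy cost model:
# (1) bootstrap resampling to gauge sampling variability, and
# (2) a one-factor sensitivity analysis on a behavioral assumption.
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical survey input: monthly benefit amounts for sampled cases.
sample = [210, 340, 95, 480, 150, 300, 275, 60, 410, 220]

def model_estimate(data, take_up=0.8):
    """Toy model: projected program cost = mean benefit x assumed take-up
    rate x an invented caseload scale factor."""
    return statistics.mean(data) * take_up * 1_000_000

# (1) Bootstrap: resample the data with replacement, re-run the model on
# each replicate, and use the spread of replicate estimates as a measure
# of sampling variability.
replicates = [
    model_estimate(random.choices(sample, k=len(sample)))
    for _ in range(1000)
]
print(f"Point estimate: {model_estimate(sample):,.0f}")
print(f"Bootstrap std. error: {statistics.stdev(replicates):,.0f}")

# (2) Sensitivity analysis: vary the take-up assumption and observe how
# strongly the headline estimate responds to it.
for rate in (0.7, 0.8, 0.9):
    print(f"take-up {rate:.0%}: {model_estimate(sample, rate):,.0f}")
```

The bootstrap addresses variability given the model; the sensitivity loop shows how much the result hinges on an assumption the data cannot settle, which is the distinction the passage above draws between sampling error and model misspecification.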

3.2 Documentation

Documentation of data, models, and analyses is important, not only for validation studies but also for a better understanding of the model estimates and their quality. Documentation should include information on the policy alternatives under analysis, the data and other inputs, assumptions, interpretations, and changes made. The information should be specified in sufficient detail so that another person could replicate the results. Otherwise, the utility of the model may be limited to only a few analysts who are experienced in its use. Moreover, as models invariably grow in complexity, even experienced users may not understand the implications of underlying assumptions, or how components of the model interact.

3.3 Communication Of Uncertainty

Policy analysts should include information about the uncertainty of their estimates, and the sources of that uncertainty, in presentations of results to policy makers. Sampling variability is often the most straightforward component to calculate and report, but it neglects errors due to model misspecification, nonsampling errors in surveys, and other factors. Efforts should be made to convey a more complete description of uncertainty through a variety of techniques, including variance calculations, sensitivity analysis, and the proven record of a model.

It is important for policy makers to understand the uncertainty attendant in model estimates, for a number of reasons. First, information on uncertainty allows them to distinguish real differences from those that could occur solely by chance. Second, they can use the information to compare the quality of different estimates. Third, the information may help them decide how much weight to give to a policy analysis, as opposed to other, political considerations.

But there is an even more important reason for agencies that are responsible for policy analysis to measure and communicate the uncertainty in data and model estimates. It is only through such measures of quality that resources to improve the data and models can be justified. For example, in the United States, many national surveys are so sparse that they are unable to provide direct survey-based estimates for all but a handful of states. National policy makers have devolved some programs to the states, such as children’s health insurance and welfare reforms, and expect estimates of outcomes to hold states accountable. The estimates can be modeled, but their uncertainty, much larger than that of national estimates, must be communicated. Not only would policy makers know whether a reported difference reflects a real difference; they could also see the need to invest resources to improve data or models, or both. For example, the sample size of the Current Population Survey has been increased to improve estimates, in each state, of the numbers of children covered by health insurance.

4. Organization Of Policy Analysis

Agencies conducting research in support of public policies in a government department must anticipate policy issues if their research is to inform the policy debate when it occurs. As a result, a policy research agency may invest resources in research on policy issues for which it is never asked for results. When consideration of an unanticipated policy comes to the fore, the agency must be able to draw upon a synthesis of research or a review of extant data to provide the required policy information. For that reason, the policy research agency must develop data and research continually, and remain cognizant of work developed elsewhere in the important subject matters of its department.

The national government may provide for unsolicited basic and applied research, particularly in academic institutions, but policy analysis is rarely, if ever, a byproduct of this research. Moreover, policy research conducted by private organizations with a stake in the outcome may not be objective. Effective policy analysis, therefore, can be delegated neither to academia nor to interested organizations. Government policy analysis agencies must either conduct or direct the research. In doing so, they may draw upon research conducted by a wide variety of institutions, such as quasi-governmental organizations, universities and academic research institutes, private foundations and other nonprofit organizations, and for-profit consulting firms.

Policy analysis in a department of government may or may not be the responsibility of a distinct, independent unit. For example, the United States has many units in the executive branch with responsibilities for policy analysis, such as the Office of Tax Analysis in the Department of the Treasury, and the Office of the Assistant Secretary for Planning and Evaluation in the Department of Health and Human Services, but no single unit in the Department of Transportation is responsible for policy analysis. Moreover, policy analysis need not be the responsibility of just the executive branch. The US Congress has developed policy analysis capabilities, through such agencies as the Congressional Budget Office, the Joint Committee on Taxation, and the Congressional Research Service, to counterbalance the policy analysis capabilities in the executive branch.

For government policy research agencies, maintaining independence from the political process itself is essential to the agency’s effectiveness. ‘Research in support of public policy depends for its credibility, in great part, on its conduct by those with a greater interest in the quality of the work than in the substance of the conclusion’ (Geweke et al. 1999).

5. Conclusion

Developing information to inform debate on public policies requires data, models, and an understanding of the uncertainty in the resulting estimates. By far the most common problem, and the bane of policy analysts, is the lack of adequate data. When a panel of the National Academy of Sciences–National Research Council was asked by the US Department of Labor to advise on the development of models to assess policies for retirement income, the panel recommended instead that priority be given to funding data collection and behavioral research over significant investments in large-scale projection models:

Generally speaking, it takes more time to collect new data and analyze them than it does to build a model to use data and research results. There are more than a few instances in the history of policy analysis when complex projection models were built in a span of weeks or months. It is rare that a critical unmet data or analysis need can be supplied in that short a time. (Citro and Hanushek 1997)

Government, the panel concluded, must take the lead in data collection, so that high-quality, comprehensive databases are available for policy analysis use in all sectors—government, private, and academic.

Bibliography:

  1. Citro C F, Hanushek E A 1991 Improving Information for Social Policy Decisions: The Uses of Microsimulation Modeling, Vol. I. Panel to Evaluate Microsimulation Models for Social Welfare Programs, Committee on National Statistics, National Research Council. National Academy Press, Washington, DC
  2. Citro C F, Hanushek E A 1997 Assessing Policies for Retirement Income: Needs for Data, Research, and Models. Panel on Retirement Income Modeling, Committee on National Statistics, National Research Council. National Academy Press, Washington, DC
  3. Geweke J F, Bonnen J T, White A A, Koshel J J 1999 Sowing Seeds of Change: Informing Public Policy in the Economic Research Service of USDA. Panel to Study the Research Program of the Economic Research Service, Committee on National Statistics, National Research Council. National Academy Press, Washington, DC

 
