How is measurement applied to science?

Measurement of any quantity involves comparison with some precisely defined unit value of the quantity. Standard units of measure need to be identified and defined as accurately as possible. All of the SI units used in scientific measurements can be derived from just seven fundamental standards called base units.

Each of these units has a definition based on a physical constant — an unchanging property of nature — such as the speed of light. SI derived units are formed by multiplying, dividing or raising the base units to powers in various combinations. A significant number of SI derived units have been named in honour of individuals who did ground-breaking work in science. The article Expressing quantities describes how unit symbols and names should be written and used, and how the values of quantities should be expressed.
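
To make this concrete, here are a few standard derived-unit identities (standard SI relations, stated here for illustration rather than quoted from the article):

1 N (newton, the unit of force) = 1 kg · m / s²
1 J (joule, the unit of energy) = 1 N · m = 1 kg · m² / s²
1 W (watt, the unit of power) = 1 J / s = 1 kg · m² / s³

Each is a product or quotient of powers of the base units, and each is named after a scientist (Newton, Joule, Watt).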

The article Powers of 10 explains the prefix names and symbols for decimal multiples and submultiples of SI units. The SI was revised in May 2019: four of the base units — the kilogram, ampere, kelvin and mole — were redefined, and they now join the second, metre and candela in being defined by physical constants.

In the human sciences, measurement typically targets constructs: unobservable attributes, such as intelligence or well-being, that can only be assessed through their observable indications. Constructs are denoted by variables in a model that predicts which correlations would be observed among the indications of different measures if they are indeed measures of the same attribute.

In recent years, philosophers of science have become increasingly interested in psychometrics and the concept of validity. One debate concerns the ontological status of latent psychological attributes. Elina Vessonen has defended a moderate form of operationalism about psychological attributes, and argued that moderate operationalism is compatible with a cautious type of realism. Another recent discussion focuses on the justification for construct validation procedures.

According to Anna Alexandrova, construct validation is in principle a justified methodology, insofar as it establishes coherence with theoretical assumptions and background knowledge about the latent attribute.

In practice, however, validation often proceeds with little substantive theorizing about the attribute itself, for example by merely checking correlations with other existing measures. This defeats the purpose of construct validation and turns it into a narrow, technical exercise (Alexandrova and Haybron; Alexandrova; see also McClimans et al.). A more fundamental criticism leveled against psychometrics is that it dogmatically presupposes that psychological attributes can be quantified. Michell argues that psychometricians have not made serious attempts to test whether the attributes they purport to measure have quantitative structure, and instead adopted an overly loose conception of measurement that disguises this neglect.

In response, Borsboom and Mellenbergh argue that Item Response Theory provides probabilistic tests of the quantifiability of attributes. Psychometricians who construct a statistical model initially hypothesize that an attribute is quantitative, and then subject the model to empirical tests.
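
As a rough illustration, consider the Rasch model, a standard Item Response Theory model (mentioned here as an example; it is not named in the text above). It takes the probability that person i answers item j correctly to be

P(X_ij = 1) = exp(theta_i - b_j) / (1 + exp(theta_i - b_j)),

where theta_i is the person's level of the attribute and b_j is the item's difficulty, both located on a single quantitative scale. Good fit between observed response patterns and this model counts as indirect evidence that the attribute has quantitative structure; persistent misfit counts against that hypothesis.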

When successful, such tests provide indirect confirmation of the initial hypothesis. Several scholars have pointed out similarities between the ways models are used to standardize measurable quantities in the natural and social sciences. Others have raised doubts about the feasibility and desirability of adopting the example of the natural sciences when standardizing constructs in the social sciences.

Examples of Ballung concepts (a term borrowed from Otto Neurath for concepts characterized by a loose cluster of criteria rather than a single, sharply defined property) are race, poverty, social exclusion, and the quality of PhD programs. Alexandrova points out that ethical considerations bear on questions about the validity of measures of well-being no less than considerations of reproducibility. Such ethical considerations are context sensitive, and can only be applied piecemeal. In a similar vein, Leah McClimans argues that uniformity is not always an appropriate goal for designing questionnaires, as the open-endedness of questions is often both unavoidable and desirable for obtaining relevant information from subjects.

In such cases, small changes to the design of a questionnaire or the analysis of its results may result in significant harms or benefits to patients (McClimans; Stegenga).

These insights highlight the value-laden and contextual nature of the measurement of mental and social phenomena. Rather than emphasizing the mathematical foundations, metaphysics or semantics of measurement, philosophical work in recent years tends to focus on the presuppositions and inferential patterns involved in concrete practices of measurement, and on the historical, social and material dimensions of measuring.

In the broadest sense, the epistemology of measurement is the study of the relationships between measurement and knowledge. Central topics that fall under the purview of the epistemology of measurement include the conditions under which measurement produces knowledge; the content, scope, justification and limits of such knowledge; the reasons why particular methodologies of measurement and standardization succeed or fail in supporting particular knowledge claims, and the relationships between measurement and other knowledge-producing activities such as observation, theorizing, experimentation, modelling and calculation.

In pursuing these objectives, philosophers are drawing on the work of historians and sociologists of science, who have been investigating measurement practices for a longer period (Wise and Smith; Latour). The following subsections survey some of the topics discussed in this burgeoning body of literature.

A topic that has attracted considerable philosophical attention in recent years is the selection and improvement of measurement standards. Generally speaking, to standardize a quantity concept is to prescribe a determinate way in which that concept is to be applied to concrete particulars.

The word "standard" is used both for abstract prescriptions, such as unit definitions, and for the concrete artifacts and procedures that embody them. This duality in meaning reflects the dual nature of standardization, which involves both abstract and concrete aspects. In Section 4 it was noted that standardization involves choices among nontrivial alternatives, such as the choice among different thermometric fluids or among different ways of marking equal duration.

Appealing to theory to decide which standard is more accurate would be circular, since the theory cannot be determinately applied to particulars prior to a choice of measurement standard. This circularity is known as the problem of coordination. One response is to treat the choice of standard as a matter of convention, fixed by definition rather than by empirical discovery. A drawback of this solution is that it supposes that choices of measurement standard are arbitrary and static, whereas in actual practice measurement standards tend to be chosen based on empirical considerations and are eventually improved or replaced with standards that are deemed more accurate.

A new strand of writing on the problem of coordination has emerged in recent years, consisting most notably of the works of Hasok Chang, Barwich and Chang, and Bas van Fraassen. These works take a historical and coherentist approach to the problem. Rather than attempting to avoid the problem of circularity completely, as their predecessors did, they set out to show that the circularity is not vicious.

Chang argues that constructing a quantity-concept and standardizing its measurement are co-dependent and iterative tasks. The pre-scientific concept of temperature, for example, was associated with crude and ambiguous methods of ordering objects from hot to cold. Thermoscopes, and eventually thermometers, helped modify the original concept and made it more precise.

With each such iteration the quantity concept was re-coordinated to a more stable set of standards, which in turn allowed theoretical predictions to be tested more precisely, facilitating the subsequent development of theory and the construction of more stable standards, and so on. From either vantage point (Chang's or van Fraassen's), coordination succeeds because it increases coherence among elements of theory and instrumentation.

It is only when one adopts a foundationalist view and attempts to find a starting point for coordination free of presupposition that this historical process erroneously appears to lack epistemic justification. The new literature on coordination shifts the emphasis of the discussion from the definitions of quantity-terms to the realizations of those definitions.

A realization of a unit is a physical procedure or artifact that puts the unit's definition into practice (JCGM). Examples of metrological realizations are the official prototypes of the kilogram and the cesium fountain clocks used to standardize the second. Recent studies suggest that the methods used to design, maintain and compare realizations have a direct bearing on the practical application of concepts of quantity, unit and scale, no less than the definitions of those concepts (Riordan; Tal). The relationship between the definition and realizations of a unit becomes especially complex when the definition is stated in theoretical terms.

Several of the base units of the International System (SI) — including the meter, kilogram, ampere, kelvin and mole — are no longer defined by reference to any specific kind of physical system, but by fixing the numerical value of a fundamental physical constant.

The kilogram, for example, was redefined in 2019 as the unit of mass such that the numerical value of the Planck constant is exactly 6.62607015 × 10⁻³⁴ when expressed in units of kg m² s⁻¹. Realizing the kilogram under this definition is a highly theory-laden task. The study of the practical realization of such units has shed new light on the evolving relationships between measurement and theory (Tal; de Courtenay et al.; Wolff).
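
In rough outline, the definition works as follows (standard SI material, summarized here rather than quoted from the text): the second is fixed by setting the caesium-133 hyperfine transition frequency to exactly 9 192 631 770 Hz, and the metre by setting the speed of light to exactly 299 792 458 m/s. With the second and metre in place, fixing the Planck constant at exactly h = 6.62607015 × 10⁻³⁴ kg m² s⁻¹ leaves the kilogram as the only undetermined unit in that expression, so that

1 kg = h / (6.62607015 × 10⁻³⁴ m² s⁻¹).

Any experiment that links a macroscopic mass to h with sufficient accuracy, such as a Kibble (watt) balance, can then serve as a realization of the kilogram.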

As already discussed above (Sections 7 and 8), measurement and theory are interdependent both historically and conceptually. On the historical side, the development of theory and measurement proceeds through iterative and mutual refinements.

On the conceptual side, the specification of measurement procedures shapes the empirical content of theoretical concepts, while theory provides a systematic interpretation for the indications of measuring instruments. This interdependence of measurement and theory may seem like a threat to the evidential role that measurement is supposed to play in the scientific enterprise.

After all, measurement outcomes are thought to be able to test theoretical hypotheses, and this seems to require some degree of independence of measurement from theory. This threat is especially clear when the theoretical hypothesis being tested is already presupposed as part of the model of the measuring instrument. To cite an example from Franklin et al.: there would seem to be, at first glance, a vicious circularity if one were to use a mercury thermometer to measure the temperature of objects as part of an experiment to test whether or not objects expand as their temperature increases.

Nonetheless, Franklin et al. argue that this circularity is not vicious. The mercury thermometer could be calibrated against another thermometer whose principle of operation does not presuppose the law of thermal expansion, such as a constant-volume gas thermometer, thereby establishing the reliability of the mercury thermometer on independent grounds. To put the point more generally, in the context of local hypothesis-testing the threat of circularity can usually be avoided by appealing to other kinds of instruments and other parts of theory.

A different sort of worry about the evidential function of measurement arises on the global scale, when the testing of entire theories is concerned. As Thomas Kuhn argues, scientific theories are usually accepted long before quantitative methods for testing them become available. The reliability of newly introduced measurement methods is typically tested against the predictions of the theory rather than the other way around. Hence, Kuhn argues, the function of measurement in the physical sciences is not to test the theory but to apply it with increasing scope and precision, and eventually to allow persistent anomalies to surface that would precipitate the next crisis and scientific revolution.

Note that Kuhn is not claiming that measurement has no evidential role to play in science. For the logical positivists, who sought a sharp separation between observational and theoretical language, the theory-ladenness of measurement was correctly perceived as a threat to the possibility of a clear demarcation between the two languages. Contemporary discussions, by contrast, no longer present theory-ladenness as an epistemological threat but take for granted that some level of theory-ladenness is a prerequisite for measurements to have any evidential power.

Without some minimal substantive assumptions about the quantity being measured, such as its amenability to manipulation and its relations to other quantities, it would be impossible to interpret the indications of measuring instruments and hence impossible to ascertain the evidential relevance of those indications.

This point was already made by Pierre Duhem (see also Carrier). Moreover, contemporary authors emphasize that theoretical assumptions play crucial roles in correcting for measurement errors and evaluating measurement uncertainties. Indeed, physical measurement procedures become more accurate when the model underlying them is de-idealized, a process which involves increasing the theoretical richness of the model (Tal). This dependence of measurement on theoretical and statistical modelling is especially clear when one attempts to account for the increasing use of computational methods for performing tasks that were traditionally accomplished by measuring instruments.

As Margaret Morrison and Wendy Parker argue, there are cases where reliable quantitative information is gathered about a target system with the aid of a computer simulation, but in a manner that satisfies some of the central desiderata for measurement, such as being empirically grounded and backward-looking (see also Lusk). Such information does not rely on signals transmitted from the particular object of interest to the instrument, but on the use of theoretical and statistical models to process empirical data about related objects.

For example, data assimilation methods are customarily used to estimate past atmospheric temperatures in regions where thermometer readings are not available. These estimations are then used in various ways, including as data for evaluating forward-looking climate models.
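
In its simplest textbook form (a sketch offered here for illustration; the symbols are not drawn from the text above), data assimilation combines a model-based estimate of the atmospheric state with whatever observations are available. The updated, or "analysis", estimate is

x_a = x_b + K (y - H x_b),

where x_b is the background estimate produced by the model, y is the vector of available observations (for example, thermometer readings at nearby locations), H maps the model state onto the quantities actually observed, and K is a gain factor that weights the two sources of information according to their estimated uncertainties. The result is a temperature estimate for places and times where no direct reading exists.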

Two key aspects of the reliability of measurement outcomes are accuracy and precision. Consider a series of repeated weight measurements performed on a particular object with an equal-arms balance. On the error-based way of carving the distinction, the accuracy of these measurements is the closeness of their results to the true value of the object's weight, while their precision is the closeness of the results to one another (JCGM).
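
A minimal numerical sketch of the distinction, using made-up readings and Python's standard library (none of the numbers below come from the text):

import statistics

# Five hypothetical repeated indications of an equal-arms balance, in grams.
readings = [100.12, 100.15, 100.11, 100.14, 100.13]

best_estimate = statistics.mean(readings)  # value attributed to the object's weight
spread = statistics.stdev(readings)        # scatter across repeated trials (imprecision)

print(f"estimate = {best_estimate:.3f} g, spread = {spread:.3f} g")

# A small spread indicates high precision, but not necessarily accuracy:
# if the balance arms are slightly unequal, every reading is shifted by the
# same systematic offset, and averaging the readings will not remove that bias.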

Though intuitive, the error-based way of carving the distinction raises an epistemological difficulty. It is commonly thought that the exact true values of most quantities of interest to science are unknowable, at least when those quantities are measured on continuous scales. If this assumption is granted, the accuracy with which such quantities are measured cannot be known with exactitude, but only estimated by comparing inaccurate measurements to each other.

And yet it is unclear why convergence among inaccurate measurements should be taken as an indication of truth. After all, the measurements could be plagued by a common bias that prevents their individual inaccuracies from cancelling each other out when averaged. In the absence of cognitive access to true values, how is the evaluation of measurement accuracy possible?

One response distinguishes different senses of measurement accuracy: at least five have been identified, namely metaphysical, epistemic, operational, comparative and pragmatic (Tal). Under the uncertainty-based conception, accuracy is not defined by closeness to an unknowable true value. Instead, the accuracy of a measurement outcome is taken to be the closeness of agreement among values reasonably attributed to a quantity given available empirical data and background knowledge. Thus construed, measurement accuracy can be evaluated by establishing robustness among the consequences of models representing different measurement processes (Basso; Tal; Bokulich; Staley). On this conception, imprecision is a special type of inaccuracy.

The imprecision of these measurements is the component of inaccuracy arising from uncontrolled variations to the indications of the balance over repeated trials. Other sources of inaccuracy besides imprecision include imperfect corrections to systematic errors, inaccurately known physical constants, and vague measurand definitions, among others (see Section 7). Paul Teller raises a different objection to the error-based conception of measurement accuracy.

Teller argues that the assumption that quantities possess definite true values is false insofar as it concerns the quantities habitually measured in physics, because any specification of definite values or value ranges for such quantities involves idealization and hence cannot refer to anything in reality.

Removing these idealizations completely would require adding an infinite amount of detail to each specification. As Teller argues, measurement accuracy should itself be understood as a useful idealization, namely as a concept that allows scientists to assess coherence and consistency among measurement outcomes as if the linguistic expression of these outcomes latched onto anything in the world.

Overview

Modern philosophical discussions about measurement—spanning from the late nineteenth century to the present day—may be divided into several strands of scholarship. The following is a very rough overview of these perspectives:

Mathematical theories of measurement view measurement as the mapping of qualitative empirical relations to relations among numbers or other mathematical entities.

Information-theoretic accounts view measurement as the gathering and interpretation of information about a system.

Quantity and Magnitude: A Brief History

Although the philosophy of measurement formed as a distinct area of inquiry only during the second half of the nineteenth century, fundamental concepts of measurement such as magnitude and quantity have been discussed since antiquity. Bertrand Russell stated that measurement is any method by which a unique and reciprocal correspondence is established between all or some of the magnitudes of a kind and all or some of the numbers, integral, rational or real.
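
To give a schematic example of such a correspondence (a standard textbook-style formulation, not a quotation from the text): a real-valued function f on a collection of rigid rods counts as a measurement of length only if, for any rods a and b, a is at least as long as b exactly when f(a) ≥ f(b), and, when a and b are laid end to end, the value assigned to the combined rod is f(a) + f(b). Stating and justifying conditions of this sort is the central task of mathematical theories of measurement.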

Operationalism and Conventionalism

Above we saw that mathematical theories of measurement are primarily concerned with the mathematical properties of measurement scales and the conditions of their application. The strongest expression of operationalism appears in the early work of Percy Bridgman, who argued that we mean by any concept nothing more than a set of operations; the concept is synonymous with the corresponding set of operations.

Realist Accounts of Measurement

Realists about measurement maintain that measurement is best understood as the empirical estimation of an objective property or relation.

Information-Theoretic Accounts of Measurement

Information-theoretic accounts of measurement are based on an analogy between measuring systems and communication systems.

Model-Based Accounts of Measurement

Since the early 2000s a new wave of philosophical scholarship has emerged that emphasizes the relationships between measurement and theoretical and statistical modeling (Morgan; Boumans; Mari; Mari and Giordani; Tal; Parker; Miyake). Indications may be represented by numbers, but such numbers describe states of the instrument and should not be confused with measurement outcomes, which concern states of the object being measured.
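
For example (an illustration added here, not taken from the text): the indication of a digital thermometer is the number shown on its display, a state of the instrument; the measurement outcome is the temperature value attributed to the patient or the room on the basis of that indication, the instrument's calibration, and any corrections for known errors. Two instruments can show the same indication and yet warrant different outcomes if their calibration histories differ.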

As Luca Mari puts it, any measurement result reports information that is meaningful only in the context of a metrological model, such a model being required to include a specification for all the entities that explicitly or implicitly appear in the expression of the measurement result.

Archeological artifacts show us that systems of measurement date back more than 4,000 years. As ancient civilizations in parts of the world as disparate as Greece, China, and Egypt became more formalized, the acts of dividing up land and trading with others created a need for standardized techniques for measuring things.

The weight of one grain of wheat, for example, or the volume of liquid that could be held by one goatskin, served as standards. Interestingly, many of these systems originated with the human body: the Ancient Greeks and Romans used the units "pous" and "pes," both of which translate to "foot." However, as any trip to a clothing or shoe store will show, not all bodies are the same. In an effort to be fair to all its citizens, many civilizations moved to standardize measurements further.

One early example is the Egyptian royal cubit, whose master standard was cut in stone; it was approximately 52 cm in length. This provided a baseline for others and consistency across the kingdom: individuals could bring a stick or other object that could be marked, lay it against the marble and, in effect, create a ruler that they could use to measure length, width, or height elsewhere. As civilizations advanced and measurements became more standardized, systems of measurement grew in complexity. These civilizations also used the new crescent phase of the moon to mark the start of a new month.

Celestial objects like the Sun and stars were used to track the hours, through the use of sundials or the known seasonal positions of stars. Measurement has a long and complex history. Measurement gives us a way to communicate with one another and interact with our surroundings, but it only works if those you are communicating with understand the systems of measurement you are using. Imagine you open a recipe book and read the following:

Mix white sugar 10 with flour 1 and water. Wait for 1, and then bake for 1. How would you go about using this recipe?


