EDITORIAL
Year : 2010  |  Volume : 33  |  Issue : 4  |  Page : 160-166  

Concept and computation of radiation dose at high energies

P K Sarkar
Health Physics Division, Bhabha Atomic Research Centre, Mumbai, India

Date of Web Publication: 1-Dec-2011

Correspondence Address:
P K Sarkar
Health Physics Division, Bhabha Atomic Research Centre, Mumbai
India

Source of Support: None, Conflict of Interest: None


  Abstract 

Radiological safety in general, and neutron dosimetry in particular, around medium and high energy particle accelerators poses some unique challenges to practitioners of radiation protection. At such high energies the concept of dose becomes complicated, primarily because of the onset of electromagnetic and hadronic cascades. The role of computational dosimetry in such cases, as well as in radiation protection in general, is of growing significance, and it is anticipated that recently developed soft computing techniques will play an important role in radiation protection in the future. Estimation of uncertainty and propagation of errors (both statistical and systematic) assume a pivotal role in generating computed and measured data with confidence. Finally, the concept of dose and the radiological protection and operational quantities are discussed, along with a recommendation to use evidence theory in addition to Bayesian probability in assessing radiological risk.

Keywords: Computational dosimetry, high energy radiation, concept of dose, uncertainty analysis


How to cite this article:
Sarkar P K. Concept and computation of radiation dose at high energies. Radiat Prot Environ 2010;33:160-6

How to cite this URL:
Sarkar P K. Concept and computation of radiation dose at high energies. Radiat Prot Environ [serial online] 2010 [cited 2023 Jun 2];33:160-6. Available from: https://www.rpe.org.in/text.asp?2010/33/4/160/90447


1. Introduction


With the advent of high energy accelerators and their advanced technological applications in accelerator driven systems and radioactive ion beam facilities, together with increased high-altitude and space flights and proposed long duration space activity for humans, the need for dosimetry in complex radiation fields has gained importance. The production of electromagnetic and hadronic cascades, spallation and photonuclear reactions, and the significant increase in pair and triplet production cross sections prompt us to re-examine the concept of radiation "dose", which was defined for, and works well in, the low-energy regime (Sarkar, 2010). Absorbed dose (the only measurable quantity) is the amount of energy transferred to body tissue averaged over large volumes (whole organs, the whole body, or even entire populations). However, the vital target for radiation damage is not the whole body but individual cells or, more specifically, the DNA. To compensate for such anomalies, quantities like the dose equivalent are calculated using a factor by which the absorbed dose is multiplied to allow for differences in the biological effectiveness of different types of radiation, giving rise to the concept of relative biological effectiveness (RBE) and the radiation weighting factor. The usefulness of such factors becomes doubtful in a high energy complex radiation environment.

Computational dosimetry, a sub-discipline of computational physics devoted to radiation metrology, is the determination of absorbed dose and other dose-related quantities by numerical means. Computations are done separately for external and internal dosimetry. The methodology used in external dosimetry is necessarily a combination of experimental radiation dosimetry and theoretical dose computation, since it is not feasible to make physical dose measurements inside a living human body. Estimation of organ dose or effective dose is done using anthropomorphic mathematical phantoms or the recently developed voxel phantoms, which are constructed by proper segmentation of CT-scan (grey-scale) data (ICRP, 2009). Certain Monte Carlo transport codes can incorporate voxel geometry through suitably designed interfaces. However, care must be exercised so that no bias is introduced and statistical convergence is achieved for such complex problems while estimating low probability events. For internal dosimetry, biokinetic compartment models are used to calculate the dynamic distribution of radionuclides within body tissues (Fell et al., 2007). The computations can be done in two distinct ways: shared kinetics and independent kinetics. Shared kinetics, which is computationally easier, assumes that all members of the decay chain share the parent's model, while in the more elaborate independent kinetics each member follows an element-specific model. The coupled differential equations modeling a decay chain circulating within a biokinetic system can be expressed concisely in matrix form, with all biokinetic transfers and radioactive decays defined in a single rate matrix whose eigenvalues provide the required solution.
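
The matrix formulation mentioned above can be illustrated with a short Python sketch. The compartment structure, transfer rates and decay constants below are hypothetical, chosen only to show how a single rate matrix combining biokinetic transfers and radioactive decay is assembled and solved through its eigendecomposition.

```python
import numpy as np

# Hypothetical example: a parent/daughter nuclide pair circulating between two
# compartments (blood, organ).  State vector C = [P_blood, P_organ, D_blood, D_organ].
lam_P, lam_D = 0.01, 0.05          # radioactive decay constants (1/d), hypothetical
k_bo, k_ob = 0.20, 0.02            # blood <-> organ transfer rates (1/d), hypothetical

# Single rate matrix combining biokinetic transfers and radioactive decay: dC/dt = A @ C
A = np.array([
    [-(lam_P + k_bo),  k_ob,             0.0,              0.0            ],
    [  k_bo,          -(lam_P + k_ob),   0.0,              0.0            ],
    [  lam_P,           0.0,           -(lam_D + k_bo),    k_ob           ],
    [  0.0,             lam_P,           k_bo,            -(lam_D + k_ob) ],
])

C0 = np.array([1.0, 0.0, 0.0, 0.0])   # unit intake of the parent into blood

# Solve via eigendecomposition: C(t) = V diag(exp(w t)) V^-1 C0
w, V = np.linalg.eig(A)
coeff = np.linalg.solve(V, C0)

def content(t):
    """Compartment contents at time t (days)."""
    return (V @ (coeff * np.exp(w * t))).real

for t in (1.0, 10.0, 100.0):
    print(t, content(t))
```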

There are large uncertainties associated with both measured and calculated radiation dose data, particularly at high energies. These uncertainties are of two types: stochastic uncertainty (due to the random behavior of the system) and subjective uncertainty (due to lack of knowledge about the system). When there is little information, or when that information is nonspecific, ambiguous, or conflicting, probability theory lacks the ability to handle the situation. Evidence theory offers an alternative to traditional probability theory for the mathematical representation of such uncertainty. An important aspect of this theory is the combination of evidence obtained from multiple sources and the modeling of conflict between them. The possibility of using evidence theory in radiological risk assessment needs to be explored.

In this paper we shall first discuss the concept of dose, since this is the crucial first step for the development of any computational technique. Next we shall move on to recent advances in computational techniques used in radiation protection and dosimetry, including external dosimetry of high energy neutrons, electrons and gammas, as well as biokinetic models for internal dosimetry. Finally, the subject of uncertainty in measurement and computation will be addressed.


2. Concept of Dose and Dose-Equivalent Quantities


Dose, in the field of radiation protection and dosimetry, is a complicated concept. It is conventionally understood as energy transfer from ionising radiation into substantial volumes of body tissue, expressed in joules per kilogram, revealing the pivotal role of physics in this area. Absorbed dose is the amount of energy transferred to body tissue. This energy transfer is averaged over large volumes of tissue, for example whole organs, the whole body, or even entire populations, as in the case of collective dose. However, it is well known that the vital target for radiation damage is not the whole body but individual cells or, more specifically, the DNA. So the important effects are happening at the level of the individual cell and, additionally, we should remember that cancers are "monoclonal": DNA analysis of cancerous tissue indicates that all the cells are descended from a single mutated cell.

Radiation damage is caused by discrete events: single particles passing through tissue. Cells are either hit or they are missed. If they are hit, the energy transfer (dose) can be high; if not, the dose is zero. Cells that are hit may be damaged; the damage may or may not be repairable. If it is not repairable the cell dies and causes no further problems, but if the damage is repaired it may be mis-repaired, passing defects on to daughter cells. There are also newly discovered field effects, by which cells near a cell that is hit may show the same response as if they themselves had been hit (the bystander effect), and genomic instability, which shows up many cell generations after the exposure. These effects amplify the shortcomings of the conventional view, since they imply even more effects at low dose.

The present concept of "dose" is quite ambiguous and has attracted criticism as an "artifact of the system adopted", "with a strong focus on the energy transfer model at its heart" (Hunter, 2001). Every single radiation track may cause a mutation which may turn out to be deleterious or fatal to the individual person or to his/her descendants. Until one gets into the realm where the exposure is so high (acute) that the results can be predicted with fair certainty (deterministic effects), every exposure is associated with chance: harm may or may not happen, and the harm that does happen may not be detectable. These are known as stochastic effects. Some people and some cell types are more susceptible, and some stages of development (e.g. the foetus) are more sensitive. In other words, it is a matter of chance. Such effects do not vary in severity, but they do vary in frequency in proportion to dose. The shape of the dose-response curve is, however, disputed: it is unlikely that effects are strictly proportional to dose and, in any case, there are large problems with the concept of dose itself.

Defining quantities like the dose equivalent or equivalent dose is an attempt to take the biological effects of radiation into account and to solve the problem created by concentrating on the physical (and perhaps the only measurable) quantity, the absorbed dose. The dose equivalent is obtained using a factor by which the absorbed dose is multiplied to allow for differences in the biological effectiveness of different types of radiation. This gives rise to the concept of relative biological effectiveness (RBE) and the radiation weighting factor, w_R, for the type and energy of radiation incident on the body. The resulting weighted dose is designated the organ- or tissue-equivalent dose, H_T. The sum of the organ-equivalent doses weighted by the ICRP organ-weighting factors, w_T, is termed the effective dose, E. None of these quantities is measurable or derivable using the laws of physics. Measurements can be performed in terms of the operational quantities: the ambient dose equivalent, H*(d), and the personal dose equivalent, H_p(d). These quantities continue to be defined in terms of the absorbed dose at the reference point weighted by the quality factor, Q(L), a quantity dependent on the linear energy transfer (LET) of the radiation. The ambient dose equivalent is of great utility in some aspects of dosimetry, but there are significant problems with its application to neutron dosimetry, particularly at high energies.
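
Written out, the equivalent dose to a tissue T is H_T = Σ_R w_R D_T,R and the effective dose is E = Σ_T w_T H_T. A minimal Python sketch of this bookkeeping follows; the absorbed doses and the reduced sets of weighting factors are purely illustrative placeholders, not ICRP tabulations.

```python
# Illustrative only: the absorbed doses (Gy) per organ per radiation type and the
# weighting factors below are hypothetical placeholders, not ICRP values.
absorbed_dose = {            # D_T,R in Gy
    "lung":    {"photon": 1.0e-3, "neutron": 2.0e-4},
    "stomach": {"photon": 8.0e-4, "neutron": 1.0e-4},
    "thyroid": {"photon": 5.0e-4, "neutron": 5.0e-5},
}
w_R = {"photon": 1.0, "neutron": 10.0}                    # radiation weighting factors (placeholder)
w_T = {"lung": 0.12, "stomach": 0.12, "thyroid": 0.04}    # tissue weighting factors (placeholder)

# Equivalent dose per tissue: H_T = sum_R w_R * D_T,R   (Sv)
H_T = {T: sum(w_R[R] * D for R, D in doses.items())
       for T, doses in absorbed_dose.items()}

# Effective dose: E = sum_T w_T * H_T   (Sv)
E = sum(w_T[T] * H for T, H in H_T.items())

print("Equivalent doses (Sv):", H_T)
print("Effective dose (Sv):", E)
```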

ICRP Publication 74 (ICRP, 1996) provides sufficient data to show that the effective dose per unit fluence is strongly dependent upon the angular distribution of the irradiating neutron field. If w_R is defined, as is done now, to be independent of the directional distribution of the radiation field, errors of as much as a factor of 2 in the estimation of E are possible at some energies. It remains a matter of pure conjecture for neutron dosimetrists whether a concept such as w_R, so defined, is of any utility in their work. Although average radiation-weighting factors were of great value to the system of protection for neutrons several years ago, they are of much less interest at high energies or in more sophisticated detriment models. At high energies, and particularly in the accelerator environment, there is more interest in using conversion coefficients that relate field quantities (e.g., fluence) to the radiological protection quantities.

Following an intense debate of about a decade involving scientists, regulators and users, the ICRP published its general recommendations in 2007 as ICRP Publication 103 (ICRP, 2007), replacing the previous 1990 recommendations. In this report the ICRP has retained the concepts of the tissue weighting factor, w_T, and the radiation weighting factor, w_R, but with modified values. The w_R value for protons has been reduced from 5 to 2, while for neutrons a new continuous function of energy has been defined. For neutrons the w_R values have been reduced mainly in two energy regions: below 1 MeV and above 50 MeV. Using a single set of tissue weighting factors for all humans, ignoring the effect of sex, age and other individual properties, is considered an acceptable simplification by the ICRP. The definition of operational quantities like H*(10) has not been changed in ICRP-103, and hence with the lower estimated effective dose in the case of neutrons (due to the lowering of w_R) the H*(10) now gives a more conservative estimate of the effective dose. According to ICRP-103, the effective dose is to be used for prospective dose assessment for planning and optimization in radiological protection and is not recommended for epidemiological evaluations. ICRP-103 also states that the values of w_T are finally selected by "judgement" based on experimental radiobiological data. In fact, it appears that such a judicious choice has been based more on practicability than on firm radiobiological knowledge, since data for high energy neutrons are scanty.
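
For neutrons, the continuous w_R(E) function recommended in ICRP-103 can be evaluated as in the sketch below (energy in MeV); the piecewise expressions follow the form given in that publication.

```python
import math

def w_R_neutron(E_MeV):
    """Neutron radiation weighting factor as a continuous function of energy,
    following the piecewise expressions of ICRP Publication 103 (E in MeV)."""
    if E_MeV < 1.0:
        return 2.5 + 18.2 * math.exp(-(math.log(E_MeV)) ** 2 / 6.0)
    elif E_MeV <= 50.0:
        return 5.0 + 17.0 * math.exp(-(math.log(2.0 * E_MeV)) ** 2 / 6.0)
    else:
        return 2.5 + 3.25 * math.exp(-(math.log(0.04 * E_MeV)) ** 2 / 6.0)

for E in (0.01, 0.1, 1.0, 10.0, 50.0, 500.0):
    print(f"{E:8.2f} MeV  ->  w_R = {w_R_neutron(E):.2f}")
```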

ICRP-103 has recommended introducing sex-specific voxel phantoms to represent the reference male and the reference female in future updates of organ dose and effective dose conversion coefficients for both internal and external radiation sources. Technical descriptions of the voxel phantoms have been released as a common ICRP/ICRU publication, ICRP-110 (ICRP, 2009). Along with the development and intended use of the computational voxel phantoms, ICRP-110 gives graphical illustrations of conversion coefficients for some external and internal exposures computed using several Monte Carlo simulation codes. Sato et al. (2009) have tabulated fluence-to-organ-dose and effective dose conversion coefficients for neutrons and protons up to 100 GeV using the voxel phantom configurations and the updated values of w_R and w_T. Dietz (2009) has discussed the implications of the new ICRP recommendations for aircrew dosimetry and has concluded that the effective dose component from neutron radiation will be lowered only by a few percent compared to the earlier situation. From the recent calculated results following the new recommendations, it is apparent that the change in w_T and the incorporation of voxel geometry do not have any significant effect on the calculated effective dose. For neutrons, the only noticeable change is due to the change in w_R values below 1 MeV.

The ICRP recommends a model for the assessment of detriment. Because of intrinsic uncertainties in the fundamental science, the model is imperfect. The model has two components: physical and biological. Dosimetrists determine the physical components of this model with high precision and good absolute accuracy. The biological components of the model necessarily have poor absolute accuracy. However, since the ICRP "in certain circumstances" requires the assessment of values of the protection quantities "with a degree of precision beyond the accuracy of the underlying radiological information", it is possible, and indeed necessary, to achieve "rigour within uncertainty."


3. Computational Dosimetry for High Energy Neutrons and Gammas


The radiation environment around positive ion (proton, alpha, heavy ion, etc.) accelerators is dominated by neutrons. In fact, such particle accelerators are nowadays the primary source of neutrons commonly encountered in scientific laboratories. The production and behavior of neutrons at proton and ion accelerators have different characteristics as the energy of the beam particles is increased. The large dynamic range of neutron/gamma energies, strong interference from other radiations as well as radio frequency waves, and the pulsed nature of the radiation field make neutron dosimetry in such accelerators significantly different from other well-established, conventional techniques (Sarkar, 2010).

Although measurement is the best way to characterize the radiation environment in accelerators, it is not feasible to do so for all target + projectile combinations. Therefore, one has to rely heavily on computational procedures. The computational techniques can be classified into two categories: (i) calculation of the source term for the emitted radiations; and (ii) transport of the radiations through shield materials. While radiation transport is more or less standardized, calculation of source terms involves computations using complex nuclear reaction model codes. Different computational techniques exist for different types of projectiles (e.g. electrons, light ions and heavy ions) as well as for different projectile energy ranges. Extensive studies and validation of the models against experimental data are required before they can be used to generate the required quantities.

Three nuclear processes are important in calculating the source term for radiations emitted following positive ion-nucleus interactions: nuclear evaporation, pre-equilibrium emission and direct reactions. At low proton energies, the interaction of a proton with a nucleus is best explained by a compound nucleus model, in which the incident particle is absorbed into the target nucleus to create a new compound nucleus. At higher incident energies (above about 50 MeV) the development of an intranuclear cascade (including both pre-equilibrium and direct processes), in which an incident proton interacts with individual nucleons rather than with the nucleus as a whole, becomes important. The angular distribution of the neutrons from this process is forward-peaked rather than isotropic, and the neutrons generated are higher in energy.
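
As a rough illustration of the evaporation component, the neutron emission spectrum from a compound nucleus is often approximated by a Maxwellian, phi(E) proportional to (E/T^2) exp(-E/T). The sketch below uses a hypothetical nuclear temperature T merely to show how such a spectrum peaks at low energy and falls off exponentially, in contrast to the harder, forward-peaked cascade component.

```python
import math

def evaporation_spectrum(E_MeV, T_MeV=2.0):
    """Maxwellian approximation to the evaporation neutron spectrum,
    phi(E) ~ (E / T^2) * exp(-E / T).  T = 2 MeV is a hypothetical
    nuclear temperature chosen only for illustration."""
    return (E_MeV / T_MeV ** 2) * math.exp(-E_MeV / T_MeV)

for E in (0.5, 1.0, 2.0, 5.0, 10.0, 20.0):   # MeV
    print(f"{E:5.1f} MeV  phi = {evaporation_spectrum(E):.4f}")
```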

Data on the energy-angular distribution of secondary neutrons from targets thick enough to stop the accelerated ions completely, known as the thick target neutron yield (TTNY), are essential for estimating source terms in accelerator shield design. Such data are also useful for calculating the dose delivered to the patient by secondary neutron interactions during therapy with light or heavy ions. Furthermore, these data are expected to provide some insight into the nature of the neutron spectra produced by interactions of heavy ions present in galactic cosmic rays with the shielding materials (Heilbronn et al., 2005) used to protect humans engaged in long-term missions outside the geomagnetosphere. The measured neutron energy distributions for various target-projectile combinations can be analyzed theoretically using different nuclear reaction model codes, and consequently the codes can be validated and their input parameters optimized. [Figure 1] shows a typical comparison of experimental data and theoretically calculated results with different input options for the angle-integrated energy distribution of the thick target neutron yield from 110 MeV 19F ions incident on a thick 27Al target (Sunil et al., 2008). Open squares with a line through them represent measured (unfolded) data, which are compared with results from the nuclear reaction model codes EMPIRE-2.18 and PACE with different input options.
Figure 1: Angle integrated energy distributions of neutron yield from 110 MeV 19F on a thick 27Al target. Open squares with a line through them represent measured data. Calculated results using EMPIRE and PACE reaction model codes with different input parameters are plotted for comparison



In high energy electron accelerators, neutrons are generated by the interaction of bremsstrahlung photons with the surrounding materials. For photons with energies above the typical binding energy of nucleons (about 5-15 MeV), photonuclear interactions generally lead to the emission of photo-neutrons as well as photo-protons. Theoretical estimation of such photo-neutron energy distributions is important since measurements in the presence of a very high intensity gamma field are extremely difficult. To compute the neutron flux in high-energy electron accelerators, an empirical expression has been obtained for the energy differential neutron yield distributions from the giant dipole resonance (GDR) mechanism for 100 MeV to 2.5 GeV electrons incident on thick targets. Calculated neutron yield distributions from the FLUKA code have been used to generate an expression relating the neutron yield energy distribution per unit incident electron energy to the target mass, charge, and neutron separation energy (Sunil and Sarkar, 2007). [Figure 2] shows the neutron spectra for a few incident electron energies (100 MeV to 2.5 GeV) on a thick (10 radiation length) Pb target in the extreme forward direction (0 degrees) with respect to the incident beam.
Figure 2: Photoneutron energy distributions from 10 radiation length thick targets at different incident electron energies



There has been enormous activity in recent years investigating quantities and correction factors for the ionization chamber dosimetry of electron and high-energy photon beams. A large number of experiments and Monte Carlo calculations have been performed that mainly contribute to improving the understanding of basic phenomena in dosimetry, both at the level of the penetration properties of therapeutic beams and the influence of the different components of ionization chambers. They also yield a more precise estimation of the final uncertainty in the determination of absorbed dose in clinical beams. The dosimetry of beams with energies close to 50 MeV, mainly produced by a new type of accelerator, deserves, however, further investigation. It is in connection with these beams that newly available stopping-power data might be of relevance, although systematic differences at high electron energies due to different sets of density-effect corrections contribute to a larger uncertainty.


4. Uncertainty Estimation in Radiation Dosimetry


Uncertainty is a quantitative measure of the quality of a measurement or computation. Such a measure is needed since neither measurements nor computations provide exact results. Dose estimates based on measurements (or sometimes computations) usually require a number of assumptions and intermediate, often non-trivial, steps of data analysis, all of which contribute to the estimated uncertainty of the dose. Therefore, the values obtained from computation, from measurement or, as is frequently the case in dosimetry, from a combination of both are only acceptable if they are accompanied by a statement that allows their reliability or trustworthiness to be inferred.
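
As a minimal illustration of how such an uncertainty statement can be constructed, the sketch below propagates the standard uncertainties of the inputs of a simple, hypothetical dose model, D = N k, in quadrature to obtain the combined standard uncertainty of the result.

```python
import math

# Hypothetical measurement model: dose D = N * k, where N is an instrument reading
# and k a calibration coefficient.  First-order propagation for uncorrelated inputs:
# u_D^2 = (k * u_N)^2 + (N * u_k)^2
N,  u_N = 2.50e4, 1.0e2      # reading and its standard uncertainty (counts), hypothetical
k,  u_k = 4.0e-8, 1.2e-9     # calibration coefficient (Sv/count) and uncertainty, hypothetical

D   = N * k
u_D = math.sqrt((k * u_N) ** 2 + (N * u_k) ** 2)

print(f"D = {D:.3e} Sv, combined standard uncertainty u_D = {u_D:.2e} Sv")
print(f"relative uncertainty = {100 * u_D / D:.2f} %")
```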

Modern uncertainty evaluation is based on the principally incomplete knowledge both of the measurement and computation used and of the values of the input quantities that influence the output quantities. In this field, however, uncertainty assessment poses quite a number of practical problems, and one still finds results in the literature without an acceptable statement of uncertainty.

There are three different types of uncertainty: epistemic uncertainty, aleatory uncertainty and error. The term aleatory uncertainty describes the inherent variation of the physical system. Such variation is usually due to the random nature of the input data and can be represented mathematically by a probability distribution when enough experimental data are available. Epistemic uncertainty in non-deterministic systems arises from ignorance, lack of knowledge or incomplete information. Error is considered to be different from uncertainty, though some researchers refer to error as 'numerical uncertainty'. Error is defined as a recognizable deficiency in any phase or activity of modeling and simulation that is not due to lack of knowledge. Identification of the sources of uncertainty is the key to developing a general methodology to quantify epistemic uncertainty, variability and error. Uncertainty occurs in the different phases of modeling and simulation, which can be categorized as follows: conceptual modeling of the physical system, mathematical modeling of the physical system, discretization of the mathematical model, computer programming of the discrete model, numerical solution of the computer program model, and representation of the numerical solution. This is shown schematically in [Figure 3].
Figure 3: Schematic diagram of error propagation in computational models



The uncertainties involved in dose evaluation can be illustrated, as an example, using the methodologies adopted in internal dose evaluation with biokinetic modeling. A biokinetic model for an element is built around a framework, or structure, that determines how the data are reduced and extrapolated beyond the region of observation. Uncertainties may arise because the structure provides an oversimplified representation of the known processes, because unknown processes have been omitted from the model, or because part or all of the model formulation is based on mathematical convenience rather than consideration of processes.

Quantification of uncertainty, variability and error is a non-trivial challenge, and researchers have proposed different mathematical models to represent them. When sufficient data are available, any form of variability can be represented mathematically by probability density functions. Error, or numerical uncertainty, is a recognizable deficiency. For example, error is introduced when discretizing the solution of partial differential equations and through round-off (finite precision arithmetic on a computer). It occurs due to practical constraints and not because of lack of knowledge, and it is normally possible to estimate the error in such cases. The problem lies in developing a methodology to quantify epistemic uncertainty, which originates from incomplete information or from a lack of knowledge of some characteristic of the system or the environment. Examples of sources of epistemic uncertainty are incomplete modeling of the physical system, unavailability of experimental data for some parameters in the physical system, etc. When few or no experimental data are available, it becomes extremely difficult for the analyst to represent the epistemic uncertainty associated with a parameter. The application of traditional probabilistic methods to epistemic or subjective uncertainty is questionable because, in the absence of knowledge, there is no reason to prefer one probability distribution function over another.
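
When probability density functions are available for the inputs, aleatory uncertainty can be propagated through a dose model by straightforward Monte Carlo sampling. The sketch below is illustrative only; the dose model and the parameter distributions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical dose model: D = calibration * fluence * conversion coefficient
fluence = rng.normal(1.0e6, 5.0e4, N)               # particles/cm^2, Gaussian spread
conv    = rng.lognormal(np.log(4.0e-10), 0.2, N)    # Sv cm^2, lognormal spread
calib   = rng.normal(1.0, 0.03, N)                  # dimensionless calibration factor

dose = calib * fluence * conv                       # Sv

print(f"mean dose     : {dose.mean():.3e} Sv")
print(f"std (1 sigma) : {dose.std():.3e} Sv")
print(f"95% interval  : {np.percentile(dose, 2.5):.3e} -- {np.percentile(dose, 97.5):.3e} Sv")
```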

The Bayesian approach is generally recommended (by the ICRP and others) as a technique to handle uncertainties arising from insufficient data, particularly in the field of risk analysis. However, there are criticisms too. It is a statistical technique used to correct for random fluctuation of events in small populations by putting them into the context of what would be expected in a larger population, where variation would be smaller. Bayesian methods involve a specification of subjective prior knowledge of the relative risks that must be estimated. Thus an area with a larger population will be corrected to a lesser degree than one with a smaller population; if the population is small, the posterior estimate will be close to the overall mean. The article by Gelman (2008) and the references therein may be consulted for detailed discussions of objections to Bayesian statistics.
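
The shrinkage behaviour described above can be illustrated with a simple gamma-Poisson sketch in which observed case counts in areas with small expected counts are pulled strongly toward the prior mean; all numbers are hypothetical.

```python
# Gamma-Poisson model: the relative risk theta has a Gamma(alpha, beta) prior and the
# observed case count is Poisson(theta * expected).  The posterior mean is
# (alpha + observed) / (beta + expected), so areas with a small expected count are
# shrunk strongly toward the prior mean alpha / beta.
alpha, beta = 2.0, 2.0                      # hypothetical prior (mean relative risk = 1.0)

areas = [
    # (label, observed cases, expected cases) -- hypothetical numbers
    ("large population", 120, 100.0),
    ("small population",   3,   1.0),
]

for label, obs, exp in areas:
    crude = obs / exp
    posterior = (alpha + obs) / (beta + exp)
    print(f"{label}: crude relative risk = {crude:.2f}, posterior mean = {posterior:.2f}")
```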

Evidence theory (Yager et al., 1994) offers an alternative to traditional probability theory for the mathematical representation of uncertainty. It is a potentially valuable tool for the evaluation of risk and reliability when it is not possible to obtain a precise measurement from experiments, or when knowledge is obtained from expert elicitation. An important aspect of the theory is the combination of evidence obtained from multiple sources and the modeling of conflict between them. Evidence theory uses two measures of uncertainty, belief and plausibility, whereas probability theory uses just one measure, the probability of an event. Belief and plausibility measures are determined from the known evidence for a proposition without it being necessary to distribute that evidence over subsets of the proposition. This means that evidence in the form of experimental data or expert opinion can be obtained for a parameter value within an interval; the evidence does not single out a particular value within the interval or the likelihood of any value relative to any other value in the interval.
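
A minimal sketch of the two measures: given a basic probability assignment (mass function) over subsets of a frame of discernment, the belief in a proposition A sums the mass of all focal elements contained in A, while its plausibility sums the mass of all focal elements intersecting A. The mass values below are hypothetical.

```python
# Frame of discernment: three hypothetical dose intervals for a parameter.
# A mass function assigns evidence to *sets* of outcomes, not to single outcomes.
mass = {
    frozenset({"low"}):                   0.3,
    frozenset({"medium"}):                0.2,
    frozenset({"low", "medium"}):         0.4,   # evidence that cannot be split further
    frozenset({"low", "medium", "high"}): 0.1,   # total ignorance component
}

def belief(A):
    """Bel(A): total mass of focal elements fully contained in A."""
    return sum(m for B, m in mass.items() if B <= A)

def plausibility(A):
    """Pl(A): total mass of focal elements intersecting A."""
    return sum(m for B, m in mass.items() if B & A)

A = frozenset({"low", "medium"})
print(f"Bel(A) = {belief(A):.2f}, Pl(A) = {plausibility(A):.2f}")
# Bel(A) <= probability of A <= Pl(A): the interval expresses the epistemic uncertainty.
```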

In evidence theory the analyst does not need to make any assumptions beyond what is already available, whereas in the Bayesian approach the analyst has to assume prior distributions, and this assumption can affect the results significantly. Evidence theory also treats uncertainty due to imprecision differently from uncertainty due to randomness. The Bayesian approach yields a single estimate of the probability of failure of the system. While this makes it easier for a decision-maker to rank alternative options, it does not help the decision-maker assess the relative importance of imprecision over random uncertainty. The evidence theory approach yields maximum and minimum bounds on the probability of survival (and/or the probability of failure) of a system, which can help to assess the relative importance of the two types of uncertainty. These results could help a decision-maker decide whether it is worth collecting additional data to reduce imprecision. On the other hand, if the gap between the maximum and minimum probabilities is large, the decision-maker would have difficulty in ranking alternative options. When there is little information on which to evaluate a probability, or when that information is nonspecific, ambiguous, or conflicting, probability theory lacks the ability to handle such information. Where it is not possible to characterize uncertainty with a precise measure such as a precise probability, it is reasonable to consider a measure of probability as an interval or a set, and this can be accomplished by using evidence theory.
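
The combination of evidence from two independent sources, and the conflict between them, can be sketched with Dempster's rule of combination; the two mass functions below are hypothetical expert assignments over the same frame used in the previous sketch.

```python
from itertools import product

# Two hypothetical, independent mass functions over the same frame of discernment.
m1 = {frozenset({"low"}): 0.6, frozenset({"low", "medium", "high"}): 0.4}
m2 = {frozenset({"medium"}): 0.5, frozenset({"low", "medium"}): 0.5}

def dempster_combine(m1, m2):
    """Dempster's rule: redistribute the product masses of non-conflicting
    intersections, normalizing by (1 - K), where K is the total conflict."""
    combined, conflict = {}, 0.0
    for (B, mB), (C, mC) in product(m1.items(), m2.items()):
        inter = B & C
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mB * mC
        else:
            conflict += mB * mC
    return {A: v / (1.0 - conflict) for A, v in combined.items()}, conflict

m12, K = dempster_combine(m1, m2)
print("conflict K =", K)
for A, v in m12.items():
    print(set(A), round(v, 3))
```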


5. Summary and Conclusions


The ability to use numerical means to simulate practical situations is one of the most advantageous and powerful features of computational dosimetry. Computational dosimetry greatly facilitates the optimization of experiments, supplements the analysis of data and can even replace experiments in certain cases. The development of sophisticated computational techniques, particularly those based on Monte Carlo simulations, has made it possible to undertake otherwise intractable calculations. The results of these calculations serve as necessary adjuncts to experiments where measurements are too difficult or border on the impossible. It is proposed that more advanced numerical techniques, especially those based on soft computing methodologies such as evolutionary genetic algorithms, simulated annealing, polynomial chaos theory, fuzzy inference, human intervention analysis and wavelet analysis, be explored for possible use in this field for carrying out more sophisticated analyses. In particular, it is recommended that evidence theory based algorithms be explored in conjunction with Bayesian analysis for radiological risk estimation. In the field of dosimetry for ionizing radiation, assessing uncertainty as a quantitative measure of the quality of a measurement is necessary not only for scientific progress but perhaps even more so in applications where, in the end, the health of the environment and of human and non-human biota is at stake.


6. References


  1. Fell T.P., Phipps A.W., Smith T.J. (2007), The internal dosimetry code PLEIADES, Rad. Prot. Dosim., 124, 327.
  2. Heilbronn L., Nakamura T., Iwata Y., Kurosawa T., Iwase H., Townsend L.W. (2005), Rad. Prot. Dosim., 116, 140.
  3. Hunter G. (2001), Developments in radioecology in the new millennium, J. Environ. Radioactivity, 56, 1-6.
  4. ICRP (1991), 1990 Recommendations of the International Commission on Radiological Protection, ICRP Publication 60, Ann. ICRP 21 (1-3), Elsevier Science, Oxford.
  5. ICRP (1996), Conversion coefficients for use in radiological protection against external radiation, ICRP Publication 74, Ann. ICRP 26 (3-4), Elsevier Science, Oxford.
  6. ICRP (2003), Relative Biological Effectiveness (RBE), Quality Factor (Q), and Radiation Weighting Factor (w_R), ICRP Publication 92, Elsevier Science, Oxford.
  7. ICRP (2007), The 2007 Recommendations of the International Commission on Radiological Protection, ICRP Publication 103, Elsevier Science, Oxford.
  8. ICRP (2009), Adult Reference Computational Phantom, ICRP Publication 110, Elsevier Science, Oxford.
  9. Leggett R.W., Reliability of the ICRP's dose coefficients for members of the public. 1. Sources of uncertainty in biokinetic models, Rad. Prot. Dosim., 116, 140.
  10. Sarkar P.K. (2010), Neutron dosimetry in the particle accelerator environment, Radiat. Measurements, 45, 1476-1483.
  11. Sato T., Endo A., Zankl M., Petoussi-Henss N., Niita K. (2009), Fluence-to-dose conversion coefficients for neutrons and protons calculated using the PHITS code and ICRP/ICRU adult reference computational phantoms, Phys. Med. Biol., 54, 1997.
  12. Sunil C., Sarkar P.K. (2007), Empirical estimation of photoneutron energy distribution in high energy electron accelerators, Nucl. Instrum. Methods Phys. Res. A, 534, 518-530.
  13. Sunil C., Nandy M., Sarkar P.K. (2008), Measurement and analysis of energy and angular distributions of thick target neutron yields from 110 MeV 19F on 27Al, Phys. Rev. C, 78, 064607.
  14. Yager R.R., Fedrizzi M., Kacprzyk J. (Eds.) (1994), Advances in the Dempster-Shafer Theory of Evidence, Wiley.


