
Probe for EPMA

For those users who have used or will use the newly implemented Probe for EPMA software package (PfEPMA/PfE), the following information is part of the public domain, presented by an expert on the use of the PfEPMA software, and can be found on the web in its entirety with respect to the "Counting Statistics" section of this document.

We desire to count X-ray intensities of peaks and backgrounds, for both standards and unknowns, with high precision and accuracy. X-ray production is a random process (Poisson statistics), where each repeated measurement represents a sample of the same specimen volume. The expected distribution can be described by Poisson statistics, which for a large number of counts is closely approximated by the 'normal' (Gaussian) distribution. For Poisson distributions, 1 sigma = square root of the counts, and 68.3% of the sampled counts should fall within ±1 sigma, 95.4% within ±2 sigma, and 99.7% within ±3 sigma.
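
As a quick numerical illustration of the counting statistics above, here is a minimal Python sketch (the count total is hypothetical, not from the text) that computes the 1 sigma uncertainty and the corresponding 68.3/95.4/99.7% intervals:

    import math

    # Hypothetical example: 10,000 X-ray counts accumulated on a peak
    N = 10_000
    sigma = math.sqrt(N)          # for Poisson counting, 1 sigma = sqrt(counts)

    print(f"1 sigma = {sigma:.0f} counts ({100 * sigma / N:.1f}% relative)")
    for k, coverage in [(1, 68.3), (2, 95.4), (3, 99.7)]:
        print(f"~{coverage}% of repeated measurements expected within "
              f"{N - k * sigma:.0f} to {N + k * sigma:.0f} counts (±{k} sigma)")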

[Figure: Lifshin and Gauvin, 2001, Fig. 6, p. 172]

The precision of the composition ultimately is a combination of the counting statistics of both standard and unknown, and Ziebold (1967) developed an equation for it. Recall that the K-ratio is

K = \frac{P - B}{P' - B'}

where P and B refer to peak and background (counts on the unknown), and the primed quantities to the standard. The corresponding precision in the K-ratio is given by

\frac{\sigma_K^2}{K^2} = \frac{P + B}{n (P - B)^2} + \frac{P' + B'}{n' (P' - B')^2}

where n and n' are the number of repetitions of counts on the unknown and standard respectively. (The rearranged sK/K term -- with square roots taken -- was sometimes referred to as the 'sigma upon K' value.)
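
A minimal sketch of the precision calculation above, treating P and B as peak and background counts per repetition (all count values below are hypothetical, not from the text):

    import math

    def k_ratio_precision(P, B, n, Ps, Bs, ns):
        """Relative precision sigma_K / K of the K-ratio from counting statistics,
        using the squared form above: P, B are per-repetition peak and background
        counts on the unknown, Ps, Bs on the standard, n and ns the repetitions."""
        rel_var = (P + B) / (n * (P - B) ** 2) + (Ps + Bs) / (ns * (Ps - Bs) ** 2)
        return math.sqrt(rel_var)

    # Hypothetical counts: unknown 40,000 peak / 2,000 background; standard 60,000 / 2,500
    rel = k_ratio_precision(P=40_000, B=2_000, n=3, Ps=60_000, Bs=2_500, ns=3)
    print(f"sigma_K / K = {100 * rel:.2f}%")   # the 'sigma upon K' value, as a percentage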

Another format for considering cumulative precision of the unknown is the above graph. A maximum error at the 99% confidence level can be calculated, based upon the total counts acquired on both the standard and the unknown: e.g. to have a 1% maximum counting error you must have at least ~120,000 counts on the unknown and on the standard; you could get 2% with ~30,000 counts on each.
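
One hedged back-of-envelope way to arrive at figures like those quoted above, assuming equal counts N on the unknown and the standard, negligible background, and a 99% confidence factor of about 2.58 sigma with the two contributions combined in quadrature:

    import math

    def counts_for_max_error(max_rel_error, z=2.58):
        """Counts N needed on each of the unknown and the standard so the combined
        99%-confidence relative error, z * sqrt(2 / N), stays below max_rel_error."""
        return math.ceil(2 * (z / max_rel_error) ** 2)

    print(counts_for_max_error(0.01))   # ~133,000 counts -> the "~120,000" figure above
    print(counts_for_max_error(0.02))   # ~33,300 counts  -> the "~30,000" figure above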

Probe for EPMA Statistics

PfE provides several statistics in the normal default 'log window' printout for background-subtracted peak counts: average, standard deviation, 1 sigma, std dev/1 sigma (SIGR), standard error, and relative standard deviation. For Si: the average is 4479 cps, and the standard deviation (SDEV) of the 3 measurements is 15 cps. The counting error (1 sigma) is somewhat larger (21 cps), and the ratio of standard deviation to sigma is <1, indicating good homogeneity in Si.

For homogeneous samples, we can define a standard error for the average: here, 8 cps.

Finally, the printout shows the relative standard deviation as a percentage (0.3%, excellent).
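
These log-window statistics can be reproduced by hand. The sketch below uses three hypothetical Si count rates and an assumed 10 s count time per measurement, chosen so the results roughly match the numbers quoted above; the actual PfE data and acquisition times are not shown here, and the printed labels only loosely follow the printout columns:

    import math
    import statistics

    rates = [4464.0, 4479.0, 4494.0]   # hypothetical Si count rates, cps
    count_time = 10.0                  # assumed counting time per measurement, s

    avg = statistics.mean(rates)
    sdev = statistics.stdev(rates)            # standard deviation of the 3 measurements
    sigma = math.sqrt(avg / count_time)       # 1 sigma counting error, in cps
    sigr = sdev / sigma                       # SDEV / 1 sigma (SIGR); <1 suggests homogeneity
    serr = sdev / math.sqrt(len(rates))       # standard error of the average
    rsd = 100 * sdev / avg                    # relative standard deviation, %

    print(f"average {avg:.0f} cps, SDEV {sdev:.0f}, 1 sigma {sigma:.0f}, "
          f"SIGR {sigr:.2f}, std err {serr:.1f}, %RSD {rsd:.2f}")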

Please note that the two statistics described in the boxes above speak only to precision, in both counting variation and sample variation.

After the raw counts, the elemental weight percents are printed, with some of the same statistics, followed by the specific standard (number) used (see the box above, "Results in Elemental Weight Percent"). Following that are the standard K-ratio and the standard peak (P-B) count rate. Below that are the unknown K-ratio, the unknown peak count rate, and the unknown background count. Below that are the ZAF correction ("ZCOR") for the element, the raw K-ratio of the unknown, the peak-to-background ratio of the unknown, and any interference correction applied ("INT%", as a percentage of measured counts).

PfE software provides for additional optional statistics. One set relates to detection limits, i.e. what is the lowest level you can be confident in reporting.

The other set of statistics relates to the homogeneity of the unknowns as well as calculation of analytical error. We will now discuss these statistics.

This calculation is for the analytical sensitivity of each line (= one measurement), considering both peak and background count rates (Love and Scott, 1974). It is a similar type of statistic to the 1 sigma counting precision figure, but it is presented as a percentage.
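
The exact Love and Scott (1974) expression is not reproduced in this document. As a rough stand-in only, the counting-statistics core of such a figure can be sketched as the relative 1 sigma uncertainty of the net (peak minus background) counts, expressed as a percentage (hypothetical counts):

    import math

    def percent_sensitivity(peak_counts, bkg_counts):
        """Rough per-line sensitivity: the 1 sigma uncertainty of the net counts,
        relative to the net counts, as a percentage. A simplified stand-in for the
        Love and Scott (1974) figure, not the exact published expression."""
        net = peak_counts - bkg_counts
        return 100 * math.sqrt(peak_counts + bkg_counts) / net

    print(f"{percent_sensitivity(45_000, 1_800):.2f}%")   # hypothetical line: ~0.50%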

Probe for EPMA provides a more advanced set of calculations for analytical statistics. The calculations are based on the number of data points acquired in the sample and the measured standard deviation for each element. This is important because although x-ray counts theoretically have a standard deviation equal to square root of the mean, the actual standard deviation is usually larger due to variability of instrument drift, x-ray focusing errors, and x-ray production.

A common question is whether a phase being analyzed by EPMA is homogeneous, or whether it is the same as or distinct from another, separate sample. A simple calculation is to look at the average composition and see if all analyses fall within some number of sigmas of it (2 for 95%, 3 for 99% normal probability).

A more exacting criterion is to calculate a precise range (in wt%) and level (in %) of homogeneity. These calculations utilize the standard deviation of the measured values and the degree of statistical confidence in the determination of the average.

The degree of confidence means that we wish to avoid a risk α of rejecting a good result a large percentage (95 or 99%) of the time. Student's t distribution gives various confidence levels for the evaluation of data, i.e. whether a particular value could be said to be within the expected range of a population -- or, more likely, whether two compositions could be confidently said to be the same. The degree of confidence is given as 1 - α, usually 0.95 or 0.99. This means we can define a range of homogeneity, in wt%, where on average only 5% or 1% of repeated random points would fall outside this range.
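
A hedged sketch of how a range and level of homogeneity of this kind can be computed with Student's t (this is a plausible form, not necessarily PfE's exact implementation; the wt% values are hypothetical, chosen to land near the Si figures quoted in the olivine example below):

    import math
    import statistics
    from scipy import stats

    def homogeneity_range(values, confidence=0.95):
        """Range (in wt%) and level (in %) of homogeneity of the average,
        using Student's t for n measurements at the given confidence level."""
        n = len(values)
        mean = statistics.mean(values)
        s = statistics.stdev(values)                     # sample standard deviation
        t = stats.t.ppf(1 - (1 - confidence) / 2, n - 1)
        w = t * s / math.sqrt(n)                         # range of homogeneity, wt%
        return mean, w, 100 * w / mean                   # level of homogeneity, %

    mean, w, level = homogeneity_range([18.74, 18.81, 18.89, 18.97, 19.02])
    print(f"Si: {mean:.2f} +/- {w:.2f} wt% ({level:.2f}%) at 95% confidence")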

The general problem, where the sample size is small and the population variance is unknown, was first treated in 1905 by W.S. Gossett, who published his analysis under the pseudonym "Student". His employer, the Guinness Breweries of Ireland, had a policy of keeping all their research as proprietary secrets. The importance of his work argued for its being published, but it was felt that anonymity would protect the company. (S.L. Meyer, Data Analysis for Scientists and Engineers, 1975, p. 274.)

Test for Homogeneity

Olivine Analysis Example of Homogeneity Test:

The following data set of olivine analyses will be used to show how the test for homogeneity works -- these are the original analytical data:

What this means: for Si, at the highest level shown (95% confidence), we can say that only 5% of a large number of repeated random points would be expected to fall more than 0.14 wt% above or below 18.89 wt% (or, as a percentage, 0.7%).

PfE also provides a handy table to show if the sample is homogeneous at the 1% precision level, and if so, at what confidence level.

Counting Statistics

Analytical sensitivity is the ability to distinguish, for an element, between two measurements that are nearly equal.

So here, at the 95% confidence level, two samples would have to differ in Si by more than 0.20 wt% to be considered reliably different.
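
A sketch consistent with the numbers above, under the assumption (mine, not stated in the text) that the smallest reliably detectable difference between two averages of n points each is roughly sqrt(2) times the homogeneity range, i.e. ΔC ≈ sqrt(2) × t × S / sqrt(n):

    import math
    from scipy import stats

    def analytical_sensitivity(s, n, confidence=0.95):
        """Smallest difference between two averages (n points each, similar scatter s)
        that can be called real at the given confidence: sqrt(2) * t * s / sqrt(n)."""
        t = stats.t.ppf(1 - (1 - confidence) / 2, n - 1)
        return math.sqrt(2) * t * s / math.sqrt(n)

    # With the hypothetical Si scatter used in the homogeneity sketch (s ~ 0.11 wt%, n = 5):
    print(f"{analytical_sensitivity(0.11, 5):.2f} wt%")   # ~0.19 wt%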

There have been cases where people have taken reported compositions (i.e. wt% elements or oxides) from probe printouts and then faithfully reproduced them exactly as they got them. Once someone took figures that were reported to 3 decimal places and argued that a difference in the 3rd decimal place had some geochemical significance.

The number of significant figures reported in a printout is a "mere" programming format issue, and has nothing to do with scientific precision! (However, a feature of PfE is an option to output only the actual significant number of digits. This is not normally enabled.)

Having said that, it is "tradition" to report to 2 decimal places. However, that should not be taken to represent precision, without a statistical test, such as given before.

In the example of the olivine analysis above, where Si was printed out as 18.886 wt%, it would be reported as 18.89 -- but looking at the limited number of analyses and the homogeneity tests, I would feel uncomfortable telling someone that another analysis somewhere between 18.6 and 19.2 was not the same material. Nor would I be uncomfortable with someone reporting the Si as 18.9 wt% (though I stick to tradition).

Considering silicate mineral or glass compositions, Si is traditionally reported with 4 significant figures. If we were to be rigorous regarding significant figures, we would follow the rule that we would be bound by the least number of figures in a calculation where we multiply our measurement (K-ratio, which will have thousands of counts divided by thousands of counts) by the ZAF. As you can appreciate there are many calculations that comprise each part of the ZAF, and it would be stretching it to argue that the ZAF itself can have more than 3 significant figures. Ergo, we should not strictly report Si with more than 3 significant figures.
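
For what it's worth, trimming a printed value to a chosen number of significant figures is a one-liner; the sketch below rounds the olivine Si value to 4 and then 3 significant figures.

    def round_sig(x, sig):
        """Round x to `sig` significant figures using printf 'g' formatting."""
        return float(f"{x:.{sig}g}")

    print(round_sig(18.886, 4))   # -> 18.89
    print(round_sig(18.886, 3))   # -> 18.9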

When we enable the PfE Analytical Option "Display only statistically significant number of numerical digits" for the olivine analysis, here is the result:

For comparison, here's the original printout:

Errors in Matrix Correction

The K-ratio is multiplied by a matrix correction factor. There are various models (alpha factors, ZAF, φ(ρz)) and versions. Assuming that you are using the appropriate correction type, there may still be issues regarding specific parameters, e.g. mass absorption coefficients, or the φ(ρz) profile.

There is a possibility of error in certain situations, particularly for "light elements", as well as for compounds containing elements of drastically different Z where pure-element standards are used. The figure above shows that a small (2%) error in the mass absorption coefficient for Al in NiAl will yield an error of 1.5% in the matrix correction.

This is a strong incentive to use standards similar to the unknown, and/or to use secondary standards to verify the correctness of the EPMA analysis.

Summary: How to know if the EPMA results are "good"?

There are only 2 tests to prove your results are "good" -- actually, it is more correct to say that if your results can pass the test(s), then you know they are not necessarily bad analyses:

100 wt% totals (NOT 100 atomic % totals): the total should be near 100 wt%. Typically, a range from 98.5 to 100.5 wt% for silicates, glasses and other compounds is considered "good". It extends on the low side a little to accommodate the small amount of trace elements realistically present in most natural (earth) materials. These analyses typically "do oxygen by stoichiometry", which can introduce some undercounting where the Fe:O ratio has been set to a default of 1:1 but some of the iron is ferric (Fe:O = 2:3). So for spinels (e.g. Fe3O4), a perfectly good total could be 93 wt% (a quick check of this figure is sketched after these two tests).

Stoichiometry, if such a test is valid (e.g. the material is a line compound, or a mineral of a set stoichiometry).
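
A quick back-of-envelope check of the 93 wt% figure mentioned in the first test, assuming pure magnetite and standard atomic weights:

    # Pure magnetite, Fe3O4, with all Fe reported as FeO ("oxygen by stoichiometry", Fe:O = 1:1)
    FE, O = 55.845, 15.999
    fe_wt_pct = 100 * 3 * FE / (3 * FE + 4 * O)     # Fe in Fe3O4: ~72.4 wt%
    total_as_feo = fe_wt_pct * (FE + O) / FE        # reported as FeO: ~93 wt%
    print(f"Fe = {fe_wt_pct:.1f} wt%, total reported as FeO = {total_as_feo:.1f} wt%")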

Checking the olivine analysis

The total is excellent, 99.98 wt%. The stoichiometry is pretty good (not excellent): on the basis of 4 oxygens, there should be 1.00 Si atom and we have 0.985. The total cations Mg+Fe+Ca+Ni should be 2.00, and we have 2.03. The analysis is OK and could be published. If this were seen at the time of analysis, it might be useful to recheck the Si and Mg peak positions, and reacquire standard counts for Si and Mg. If this were only seen after the fact, you could re-examine the standard counts and see if there are any obvious outliers that were included and could be legitimately discarded.
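
Finally, a hedged sketch of the cations-per-4-oxygens check described above, using a hypothetical forsteritic olivine composition (not the exact analysis discussed) and standard oxide molecular weights:

    # oxide: (molecular weight, cations per oxide formula, oxygens per oxide formula)
    OXIDES = {
        "SiO2": (60.084, 1, 2),
        "MgO":  (40.304, 1, 1),
        "FeO":  (71.844, 1, 1),
        "CaO":  (56.077, 1, 1),
        "NiO":  (74.692, 1, 1),
    }
    analysis = {"SiO2": 40.4, "MgO": 49.2, "FeO": 9.8, "CaO": 0.15, "NiO": 0.40}  # hypothetical wt%

    moles = {ox: wt / OXIDES[ox][0] for ox, wt in analysis.items()}
    total_oxygen = sum(m * OXIDES[ox][2] for ox, m in moles.items())
    scale = 4.0 / total_oxygen                       # normalize to 4 oxygens (olivine)
    cations = {ox: m * OXIDES[ox][1] * scale for ox, m in moles.items()}

    print("Si =", round(cations["SiO2"], 3))
    print("Mg+Fe+Ca+Ni =", round(sum(c for ox, c in cations.items() if ox != "SiO2"), 3))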