smoothing versus repeated measurements
hi folks
the problem is the following:
we collected many measurements under certain conditions and created a "golden signature reference". subsequent parts' results are compared to the golden for dispositioning. this golden can be generated by curve fitting or by averaging all measurements. at this stage, metrology noise is not an issue. customer "a" insists on multiple measurements and argues that fitting is not acceptable, even though we use a monotonic function (residuals after fitting are normal, which would indicate the model is adequate). multiple measurements would be a burden on certain fabs and we would like to avoid that as much as possible.
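for concreteness, here is a minimal python sketch of the two routes to the golden (averaging versus fitting) plus the residual-normality check; the monotonic model and all data below are hypothetical stand-ins, not the actual process:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import shapiro

# hypothetical monotonic model: a saturating exponential
def monotonic_model(x, a, b, c):
    return a * (1.0 - np.exp(-b * x)) + c

# x: measurement positions; y_all: repeated measurements (rows = runs)
rng = np.random.default_rng(0)
x = np.linspace(0.1, 10.0, 50)
y_all = 5 * (1 - np.exp(-0.4 * x)) + 1 + rng.normal(0, 0.05, (20, x.size))

y_mean = y_all.mean(axis=0)               # averaging route to the golden
params, _ = curve_fit(monotonic_model, x, y_mean, p0=[5, 0.5, 1])
golden = monotonic_model(x, *params)      # curve-fitting route to the golden

residuals = y_mean - golden
W, p = shapiro(residuals)                 # normality check on the residuals
print(f"Shapiro-Wilk W = {W:.3f}, p = {p:.3f}")  # p > 0.05 -> no evidence against normality
```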
if someone could help me with some ideas or different approaches, it would be very much appreciated
if i understand you correctly, you draw a reference line (or curve) from your measurements and you test how far subsequent parts (or measurements) are from your calibration, or reference, or golden signature reference, as you call it, curve. perhaps you should construct 95% confidence limit lines on both sides of your curve (Excel cannot do that, by the way) and negotiate with your client what dispositioning is acceptable to him. by constructing 95% (if that is not enough, go to 98%) confidence limit lines you can use all your data, single measurements as well as replications. of course you need curve fitting to get the lines.
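one way to construct such limit lines, sketched here under the assumption of a nonlinear fit, is a pointwise bootstrap band; the model and data are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def monotonic_model(x, a, b, c):
    return a * (1.0 - np.exp(-b * x)) + c

rng = np.random.default_rng(1)
x = np.linspace(0.1, 10.0, 50)
y = 5 * (1 - np.exp(-0.4 * x)) + 1 + rng.normal(0, 0.05, x.size)

# bootstrap the fit to get pointwise 95% confidence limit lines
boot_curves = []
for _ in range(1000):
    idx = rng.integers(0, x.size, x.size)        # resample with replacement
    p, _ = curve_fit(monotonic_model, x[idx], y[idx], p0=[5, 0.5, 1])
    boot_curves.append(monotonic_model(x, *p))
boot_curves = np.array(boot_curves)
lower, upper = np.percentile(boot_curves, [2.5, 97.5], axis=0)
# a new part would be flagged where its trace falls outside (lower, upper)
```

widening the percentiles to [1, 99] gives the 98% band mentioned above.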
another way is to select some discrete points, make a set of calibration measurements at each point, compute statistics for each set, and then compare a measurement of a tested item, measured at the same independent-variable values, to the previously established "golden signature reference" statistics.
you do not need to make replications of your tested item, but then the results will be miserable and unreliable. replications cost time and money on one side, and buy reliability on the other.
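a minimal sketch of that discrete-point comparison (all numbers hypothetical; it assumes the golden statistics come from enough calibration replicates that a normal approximation is reasonable):

```python
import numpy as np
from scipy.stats import norm

# golden statistics per discrete point, from n calibration replicates
golden_mean = np.array([1.02, 2.48, 3.91])   # hypothetical values
golden_sd   = np.array([0.04, 0.05, 0.06])

# single (unreplicated) measurement of a tested part at the same points
tested = np.array([1.05, 2.60, 3.95])

z = (tested - golden_mean) / golden_sd       # standardized deviation per point
p_two_sided = 2 * norm.sf(np.abs(z))         # two-sided p-value per point
flagged = p_two_sided < 0.05
print(list(zip(z.round(2), p_two_sided.round(3), flagged)))
```

with only a single tested measurement, each z-score carries the full metrology noise of that one reading, which is exactly why unreplicated dispositioning is less reliable.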
it is my impression that your customer does not know much about quality control, or rather, he (or she) is totally ignorant of it. it may be a good idea to persuade him (or her) to build up a quality control relationship together.
m777182
thanks "m". very interesting points. about the customer, they over-engineer things a bit. you got it right, now it is taking longer and longer to replicate. let you know next week.rgds
can someone help me with this:
i fitted my data using a monotonic function and the residuals are normally distributed, mean ~ 0, constant sigma, so i assume that my function describes the data very well. however, a colleague did principal components analysis and found bumps. do we have other systematics or not?
probably.
your statement, "the residuals are normally distributed," begs lots of questions, because what you've written as apparent confirmation, "mean ~ 0, constant sigma," doesn't prove that your distribution is normal. it merely suggests that your distribution can be approximated by a normal distribution.
ttfn
good point. the normal quantile plots have very good linearity, and the normal fit passes the shapiro-wilk w test.
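a sketch of those two checks (assuming scipy and matplotlib; the residuals array here is a synthetic stand-in for the real fit residuals):

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# stand-in for the actual fit residuals
residuals = np.random.default_rng(2).normal(0, 0.05, 200)

W, p = stats.shapiro(residuals)
print(f"Shapiro-Wilk: W = {W:.4f}, p = {p:.4f}")

# normal quantile (Q-Q) plot: good linearity supports normality
stats.probplot(residuals, dist="norm", plot=plt)
plt.title("Normal quantile plot of fit residuals")
plt.show()
```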
back to pca: i have one variable and multiple data sets. can i do a pca in such a situation? does pca require that we have more than one variable?
the aim of pca is to find a subset of variables that accounts for the largest proportion of the variation in the standardized data set, so that you can simplify your model by omitting some of the variables. if you have only one variable, what can you omit? i would rather look into the fact that you have "one variable and multiple data sets", as you say. if your subsets were gathered on different occasions, they might contain some additional variables that were not under control and so were not included in your model; secondly, there could be a sort of nonlinearity, like a step response of some variable.
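one way pca can still apply here, sketched under the assumption that the multiple data sets are repeated traces of the same curve: treat each data set as an observation and the measurement points as variables, then look at the leading components. all data below are hypothetical:

```python
import numpy as np

# rows = data sets (repeated traces), columns = measurement points
rng = np.random.default_rng(3)
x = np.linspace(0, 10, 50)
base = 5 * (1 - np.exp(-0.4 * x))
bump = np.exp(-0.5 * ((x - 6) / 0.5) ** 2)   # hidden systematic "bump"
data = (base
        + rng.normal(0, 1, (30, 1)) * 0.1 * bump   # run-to-run systematic
        + rng.normal(0, 0.02, (30, x.size)))       # metrology noise

centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
explained = s**2 / np.sum(s**2)
print("variance explained by PC1, PC2:", explained[:2].round(3))
# Vt[0] localizes where the traces vary together; a peak near x = 6
# would reveal the bump even though the mean fit looked adequate
```

if PC1 explains much more variance than noise alone would and its loading vector is localized, that points to an uncontrolled systematic rather than a failure of the monotonic model itself.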
m777182