How to evaluate different fitting functions for a set of data

Posted 2009-9-5 20:57:13
I fit a set of data (around 10 points) with several polynomial functions. All of them fit well, with similar correlation coefficients. I want to compare these functions and choose the best one.
Are there any other parameters or methods that can be used to compare these fitting functions?
Thanks in advance.
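As a point of reference, here is a minimal sketch of the comparison described above, in Python with NumPy; the x and y arrays are hypothetical stand-ins for the roughly 10 points, since the original data was not posted:

```python
import numpy as np

# Hypothetical data standing in for the ~10 points described above.
x = np.array([0.0, 0.5, 1.1, 1.9, 2.4, 3.0, 3.6, 4.2, 4.8, 5.5])
y = np.array([1.2, 1.9, 3.1, 4.8, 5.9, 7.4, 8.6, 9.9, 11.5, 13.0])

for degree in (1, 2, 3, 4):
    coeffs = np.polyfit(x, y, degree)       # least-squares polynomial fit
    y_hat = np.polyval(coeffs, x)           # fitted values
    ss_res = np.sum((y - y_hat) ** 2)       # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot              # coefficient of determination
    print(f"degree {degree}: R^2 = {r2:.5f}")
```

With only ten points, R^2 alone will keep creeping toward 1 as the degree rises, which is exactly why it cannot pick the best model on its own.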
No, there are not.
Unless you have some other theoretical basis, you should simply pick the simplest one.
ttfn
An ANOVA table will show which of the curves is a significant fit to the data. It may show you, for example, that the significance of a linear fit is hardly any different from that of a quadratic fit; in that case, choose the linear fit.
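One way to carry out this comparison is the extra-sum-of-squares F-test that an ANOVA table summarizes for nested models. A minimal sketch in Python with NumPy and SciPy, using the same hypothetical data as above in place of the original points:

```python
import numpy as np
from scipy import stats

x = np.array([0.0, 0.5, 1.1, 1.9, 2.4, 3.0, 3.6, 4.2, 4.8, 5.5])
y = np.array([1.2, 1.9, 3.1, 4.8, 5.9, 7.4, 8.6, 9.9, 11.5, 13.0])

def rss(degree):
    """Residual sum of squares for a polynomial fit of the given degree."""
    resid = y - np.polyval(np.polyfit(x, y, degree), x)
    return np.sum(resid ** 2)

n = len(x)
p1, p2 = 2, 3                  # parameters in the linear and quadratic models
rss1, rss2 = rss(1), rss(2)

# Extra-sum-of-squares F statistic: does the quadratic term help?
F = ((rss1 - rss2) / (p2 - p1)) / (rss2 / (n - p2))
p_value = stats.f.sf(F, p2 - p1, n - p2)
print(f"F = {F:.3f}, p = {p_value:.4f}")  # large p => keep the linear fit
```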
corus
Is there any way to obtain a few more data points? Let's say that you have data points x = -15, -10.5, -0.75, 0, 5.25, 8.3. Is there any way that you can get at least 4 more data points, say x = -25, -5, 2, 20? If so, take each of your fitting functions and predict the y value at the new x points. This may help you determine which function actually represents your physical system best.
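A minimal sketch of this out-of-sample check, assuming the extra points really could be measured; the y values below are hypothetical placeholders:

```python
import numpy as np

x = np.array([-15.0, -10.5, -0.75, 0.0, 5.25, 8.3])  # original points (from the post)
y = np.array([4.1, 2.9, 0.6, 0.5, 1.8, 3.2])         # made-up responses

x_new = np.array([-25.0, -5.0, 2.0, 20.0])           # extra points suggested above
y_new = np.array([9.0, 1.4, 0.8, 12.5])              # hypothetical new measurements

for degree in (1, 2, 3):
    coeffs = np.polyfit(x, y, degree)                # fit on the original data only
    pred = np.polyval(coeffs, x_new)                 # predict at the new x values
    rmse = np.sqrt(np.mean((y_new - pred) ** 2))     # out-of-sample prediction error
    print(f"degree {degree}: prediction RMSE = {rmse:.3f}")
```

The model with the smallest prediction error on points it never saw is the one that most plausibly represents the underlying system, regardless of which had the highest R^2 on the original data.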
What are the polynomial functions?
As a general rule, a higher-order polynomial will fit the same curve as a simpler one, but with more bumps and wiggles that are not physically meaningful.
ttfn
In some cases a so-called "penalty" function is introduced to curb the urge to include too many parameters for a better fit: the purpose of a best-fit procedure is to describe your data as well as possible with as few parameters as possible. With each additional parameter that improves the fit, you move further from the basic idea of why you are doing the fit in the first place; in the end you have a perfect fit, but with as many parameters as you have data points, so what is the use of it?
In nonlinear best-fit procedures, one criterion (among others) that has been widely adopted is the so-called Akaike information criterion (AIC), usually calculated as AIC = -2*log(likelihood) + 2*(number of parameters in the model). We select the model with the smallest AIC.
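For least-squares fits with (assumed) Gaussian errors, -2*log(likelihood) reduces to n*log(RSS/n) plus a constant, so the AIC can be computed directly from the residuals. A minimal sketch, again with hypothetical data:

```python
import numpy as np

x = np.array([0.0, 0.5, 1.1, 1.9, 2.4, 3.0, 3.6, 4.2, 4.8, 5.5])
y = np.array([1.2, 1.9, 3.1, 4.8, 5.9, 7.4, 8.6, 9.9, 11.5, 13.0])
n = len(x)

for degree in (1, 2, 3, 4):
    k = degree + 1                                    # number of fitted coefficients
    resid = y - np.polyval(np.polyfit(x, y, degree), x)
    rss = np.sum(resid ** 2)
    # For Gaussian errors, -2*log(likelihood) = n*log(RSS/n) up to a constant,
    # so AIC = n*log(RSS/n) + 2*k; the model with the smallest AIC wins.
    aic = n * np.log(rss / n) + 2 * k
    print(f"degree {degree}: AIC = {aic:.2f}")
```

The 2*k term is exactly the penalty described above: an extra parameter must reduce the residual sum of squares by enough to pay for itself.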
m777182
Try the KS (Kolmogorov-Smirnov) test; it is very simple and easy to use.
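One possible reading of this suggestion is to apply the KS test to the residuals of each fit, checking whether they are consistent with random noise. A sketch with hypothetical data; note that estimating the scale from the same residuals makes the test only approximate (strictly, a Lilliefors-type correction would be needed):

```python
import numpy as np
from scipy import stats

x = np.array([0.0, 0.5, 1.1, 1.9, 2.4, 3.0, 3.6, 4.2, 4.8, 5.5])
y = np.array([1.2, 1.9, 3.1, 4.8, 5.9, 7.4, 8.6, 9.9, 11.5, 13.0])

for degree in (1, 2):
    resid = y - np.polyval(np.polyfit(x, y, degree), x)
    # KS test of the residuals against a zero-mean normal distribution.
    stat, p = stats.kstest(resid, 'norm', args=(0.0, resid.std(ddof=1)))
    print(f"degree {degree}: KS statistic = {stat:.3f}, p = {p:.3f}")
```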
Try a variation on bootstrapping. When testing the sensitivity of a linear fit previously, I have used this to good effect.
With such a small number of data points, just remove a single point and recalculate the fit. If the curve fit is good, then removing a single data point should not alter the fit greatly. You can try removing each of the data points in turn to see what effect each has on the fit returned. What you are really doing is determining how sensitive your fit is to the data you used to create it.
An example of this would be a linear fit on a tight cluster of points, with one outlier well outside the cluster. If you calculate a linear fit on this, you get an R-squared close to 1, but if you then remove that outlier, the descriptors (slope and intercept) change dramatically, as does the fit quality.
This method is quite good at testing a fit graphically, and requires very little knowledge of statistics.
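A minimal leave-one-out sketch of this sensitivity check, with hypothetical data:

```python
import numpy as np

x = np.array([0.0, 0.5, 1.1, 1.9, 2.4, 3.0, 3.6, 4.2, 4.8, 5.5])
y = np.array([1.2, 1.9, 3.1, 4.8, 5.9, 7.4, 8.6, 9.9, 11.5, 13.0])
degree = 1

full_fit = np.polyfit(x, y, degree)
print("full fit coefficients:", np.round(full_fit, 4))

for i in range(len(x)):
    # Drop point i and refit; a stable fit barely moves.
    mask = np.arange(len(x)) != i
    loo_fit = np.polyfit(x[mask], y[mask], degree)
    print(f"without point {i}: coefficients = {np.round(loo_fit, 4)}")
```

If one deleted point shifts the coefficients far more than the others do, that point is dominating the fit, exactly the outlier situation described above.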