Monday 19 December 2016

linear algebra - Improvement of Minimum description length (MDL) estimate


I apologize in advance if this question is inappropriate for this forum. The question has two parts, one technical and one non-technical. I would appreciate any response.


Let me consider a specific use of MDL: model order estimation. For radar/sonar this is equivalent to estimating the number of targets (target locations are not considered here). The MDL criterion has two parts: a maximum likelihood (ML) estimate of the parameters and a penalty term. Many variations of the penalty term exist in the literature, but the ML part is hardly touched, since ML is considered optimal. The model, the ML estimate and the MDL estimate are all fine for large sample sizes; but for small sample sizes or high noise this is not the case, because the 'model' is not accurate. The model is valid only as an expected value, i.e. asymptotically.
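For concreteness, here is a minimal sketch (not the algorithm discussed below) of the standard eigenvalue-based MDL criterion for estimating the number of sources from array snapshots, in the style of Wax and Kailath. The function name, the data layout (p sensors by N snapshots), and the toy usage at the end are my own illustrative assumptions.

```python
import numpy as np

def mdl_num_sources(X, max_k=None):
    """Estimate the number of sources/targets from snapshots X (p sensors x N snapshots)
    using the eigenvalue-based MDL criterion: log-likelihood term + penalty term."""
    p, N = X.shape
    if max_k is None:
        max_k = p - 1
    # Sample covariance and its eigenvalues, sorted in decreasing order
    R = X @ X.conj().T / N
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]
    mdl = np.empty(max_k + 1)
    for k in range(max_k + 1):
        tail = lam[k:]                       # the p-k smallest ("noise") eigenvalues
        geo = np.exp(np.mean(np.log(tail)))  # geometric mean
        ari = np.mean(tail)                  # arithmetic mean
        # ML (log-likelihood) part: based on the ratio of geometric to arithmetic mean
        loglik = -N * (p - k) * np.log(geo / ari)
        # Penalty part: half the number of free parameters times log N
        penalty = 0.5 * k * (2 * p - k) * np.log(N)
        mdl[k] = loglik + penalty
    return int(np.argmin(mdl)), mdl

# Toy usage: 2 sources observed on 8 sensors over 100 snapshots, plus noise
rng = np.random.default_rng(0)
p, N, k_true = 8, 100, 2
A = rng.standard_normal((p, k_true))   # mixing/steering matrix (illustrative)
S = rng.standard_normal((k_true, N))   # source signals
X = A @ S + 0.5 * rng.standard_normal((p, N))
k_hat, _ = mdl_num_sources(X)
print("estimated number of targets:", k_hat)
```

As the question notes, the penalty term is what is usually varied in the literature; the log-likelihood term relies on the model being correct, which only holds asymptotically.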



I have an algorithm which considerably improves the estimate of the number of targets for small sample sizes or high noise. The algorithm uses concepts from quasi-maximum likelihood.


My questions are: 1) For a model that is only asymptotically accurate, can we prove that at small sample sizes there can be a better estimator of its parameters than ML?
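As a toy illustration that ML need not be optimal at finite sample sizes (unrelated to MDL or target counting, and purely for intuition): for a correctly specified Gaussian model, the ML estimator of the variance (sum of squared deviations divided by n) is beaten in mean-squared error by the slightly shrunk estimator that divides by n+1. The simulation parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, true_var = 5, 200_000, 1.0
x = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))
ss = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2, axis=1)

mse_ml = np.mean((ss / n - true_var) ** 2)             # ML estimator (divide by n)
mse_shrunk = np.mean((ss / (n + 1) - true_var) ** 2)   # shrunk estimator (divide by n+1)

print(f"MSE of ML estimator:     {mse_ml:.4f}")
print(f"MSE of shrunk estimator: {mse_shrunk:.4f}")    # consistently smaller
```

This does not answer the question for an asymptotically valid model, but it shows that finite-sample optimality of ML is not automatic.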


The non-technical question is:


2) Is anyone interested in this problem, or does anyone use MDL to estimate the number of targets?


Thanks a lot.




