In most situations the best estimator of a function of the parameter exists, but sometimes it has a complex form and its variance cannot be computed explicitly. A lower bound for the variance of an estimator is therefore one of the fundamental tools of estimation theory, because it gives an idea of the accuracy of the estimator. It is well known in statistical inference that the Cramér-Rao inequality establishes a lower bound for the variance of an unbiased estimator: under regularity conditions, the variance of any unbiased estimator cannot be smaller than a certain quantity. The inequality alone, however, gives no indication of how sharp it is, i.e., how close the variance is to the lower bound. An important inequality following the Cramér-Rao inequality is that of Bhattacharyya (1946, 1947). We introduce Bhattacharyya lower bounds for the variance of an estimator and show that the Bhattacharyya inequality achieves a greater lower bound for the variance of an unbiased estimator of a parametric function, and that the bound becomes sharper and sharper as the order of the Bhattacharyya matrix increases.

We also study the structure and behavior of the Bhattacharyya bound for the natural exponential family (NEF), especially for the negative binomial and exponential distributions as members of the natural exponential family with quadratic variance function (NEF-QVF). Shanbhag (1972, 1979) showed that diagonality of the Bhattacharyya matrix characterizes the normal, Poisson, binomial, negative binomial, gamma, and Meixner hypergeometric distributions, which are the members of the NEF-QVF class. In view of Blight and Rao (1974), we approximate the variance of an unbiased estimator of the parameter $p$ in the negative binomial distribution and of the parametric function $e^{-\theta}$ in the exponential distribution by a simulation study.
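Throughout, the bound takes the standard form given by Bhattacharyya (1946): if $T$ is an unbiased estimator of a parametric function $\tau(\theta)$ and $J_k=(J_{rs})$ is the $k\times k$ Bhattacharyya matrix (its entries $J_{rs}$ are defined below), then

$$
\operatorname{Var}_\theta(T)\;\ge\;\boldsymbol{\tau}_k'\,J_k^{-1}\,\boldsymbol{\tau}_k,
\qquad
\boldsymbol{\tau}_k=\bigl(\tau^{(1)}(\theta),\ldots,\tau^{(k)}(\theta)\bigr)',
$$

where $\tau^{(r)}$ denotes the $r$th derivative of $\tau$ with respect to $\theta$. For $k=1$ this reduces to the Cramér-Rao bound $[\tau'(\theta)]^2/I(\theta)$, since $J_{11}$ is the Fisher information $I(\theta)$, and the bounds are nondecreasing in $k$.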
Furthermore, in this research, the structure and behavior of the Bhattacharyya bound for the natural exponential family with cubic variance function (NEF-CVF) is considered in two parts: (1) distributions whose variance is a cubic function of $\theta$ and whose mean $E(X)$ is a linear function of $\theta$, e.g., the inverse Gaussian distribution; and (2) distributions whose variance is a cubic function of $E(X)$ but whose mean $E(X)$ is not a linear function of $\theta$, e.g., the Abel and Takács distributions. In the first part we find that the general form of the $5\times 5$ Bhattacharyya matrix is as follows:
$$
\begin{pmatrix}
J_{11} & 0 & 0 & 0 & 0\\
0 & J_{22} & J_{23} & 0 & 0\\
0 & J_{23} & J_{33} & J_{34} & J_{35}\\
0 & 0 & J_{34} & J_{44} & J_{45}\\
0 & 0 & J_{35} & J_{45} & J_{55}
\end{pmatrix}
$$
where $J_{rs}=\operatorname{Cov}\!\left(\dfrac{f^{(r)}(X\mid\theta)}{f(X\mid\theta)},\,\dfrac{f^{(s)}(X\mid\theta)}{f(X\mid\theta)}\right)$ and $f^{(r)}(x\mid\theta)$ is the $r$th-order derivative of $f(x\mid\theta)$ with respect to $\theta$. We calculate the $5\times 5$ Bhattacharyya matrix for the inverse Gaussian distribution and evaluate the different Bhattacharyya bounds for the variance of estimators of the failure rate and of the coefficient of variation of the inverse Gaussian distribution.

In the second part, we calculate the $4\times 4$ Bhattacharyya matrix for the Abel distribution and evaluate the different Bhattacharyya bounds for the variance of an estimator of $\tau(\theta)=e^{-\theta}$ under the Abel distribution. We also calculate the $2\times 2$ Bhattacharyya matrix for the Takács distribution and evaluate the different Bhattacharyya bounds for the variance of an estimator of $\tau(\theta)=1/\theta$ under the Takács distribution.

The variance of an estimator can be approximated by Bhattacharyya bounds when the order of the Bhattacharyya matrix is greater than one. Via a simulation study, we show that this approximation is better than the approximation given by the Cramér-Rao bound.
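As a minimal sketch of this kind of comparison (not the study reported here), the following Python code computes the first-order (Cramér-Rao) and second-order Bhattacharyya bounds for $\tau(\theta)=e^{-\theta}$ from a single exponential observation, obtaining the entries $J_{rs}$ by finite-difference differentiation and numerical integration, and compares them with the Monte Carlo variance of the unbiased estimator $T=\mathbf{1}\{X>1\}$. The density, estimator, step size, and sample size are illustrative choices, not taken from the text.

```python
import numpy as np
from scipy import integrate

# Exponential density f(x | theta) = theta * exp(-theta * x), x > 0
# (an illustrative choice of family).
def f(x, theta):
    return theta * np.exp(-theta * x)

def df(x, theta, k, h=1e-3):
    """k-th derivative of f in theta (k = 1, 2) by central differences."""
    if k == 1:
        return (f(x, theta + h) - f(x, theta - h)) / (2.0 * h)
    return (f(x, theta + h) - 2.0 * f(x, theta) + f(x, theta - h)) / h**2

def J_entry(theta, r, s):
    """J_rs = E[(f^(r)/f) * (f^(s)/f)], an entry of the Bhattacharyya matrix."""
    integrand = lambda x: df(x, theta, r) * df(x, theta, s) / f(x, theta)
    value, _ = integrate.quad(integrand, 0.0, np.inf)
    return value

theta = 1.0
J = np.array([[J_entry(theta, r, s) for s in (1, 2)] for r in (1, 2)])

# tau(theta) = exp(-theta): tau' = -e^{-theta}, tau'' = e^{-theta}.
tau = np.array([-np.exp(-theta), np.exp(-theta)])

cr_bound = tau[0] ** 2 / J[0, 0]          # order 1: Cramer-Rao bound
b2_bound = tau @ np.linalg.solve(J, tau)  # order 2: Bhattacharyya bound

# Monte Carlo variance of T = 1{X > 1}, which is unbiased for e^{-theta}
# since P(X > 1) = e^{-theta}.
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0 / theta, size=200_000)
mc_var = (x > 1.0).astype(float).var()

print(f"Cramer-Rao bound     : {cr_bound:.4f}")
print(f"2nd-order bound      : {b2_bound:.4f}")
print(f"Monte Carlo variance : {mc_var:.4f}")
# Expected ordering: cr_bound <= b2_bound <= mc_var, i.e. the
# second-order bound tracks the true variance more closely.
```

Higher-order bounds follow the same recipe: append $\tau^{(3)}(\theta),\tau^{(4)}(\theta),\dots$ to the vector and the corresponding rows and columns to $J$.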