Modelling Nigeria's population growth rate
Abstract: Thomas Robert Malthus's theory of population highlighted the potential dangers of overpopulation. He stated that while the population of the world would increase in geometric proportion, the food resources available to it would increase only in arithmetic proportion. This study was carried out to identify the trend, fit a model and produce forecasts for the population growth rate of Nigeria. The data were the population growth rates of Nigeria from 1982 to 2012 obtained from World Bank Data (data.worldbank.org). Both the time plot and the autocorrelation plot were used to assess the stationarity of the data, the Dickey-Fuller test was used to test for a unit root, and the Ljung-Box test was used to check the fit of the fitted model. The time plot showed that the random fluctuations of the data are not constant over time: there was an initial decrease in the growth rate from 1983 to 1985, an increase in 1986 that remained roughly constant until 1989, slight fluctuations from 1990 to 2004, and a general increase in trend from 2005 to 2012. The slow decay of the ACF correlogram implied that the process is non-stationary. The series became stationary after second differencing (Dickey-Fuller = -4.7162, lag order = 0, p-value = 0.01 at α = 0.05); since the p-value (0.01) is less than 0.05, it was concluded that there is no unit root, i.e. the differenced series is stationary, giving d = 2. The correlogram and partial correlogram of the second-differenced data showed that the ACF exceeds the significance bounds at lags 1 and 5 and that the partial correlogram tails off at lag 2, so the order identified for the ARIMA(p,d,q) model was ARIMA(2,2,1). The estimated AR1 coefficient (1.5803) was statistically significant but did not conform strictly to the bounds of the stationarity condition and was therefore excluded from the model; the estimated AR2 coefficient (-0.9273) was statistically significant, conformed strictly to the stationarity bounds and was retained; the estimated MA1 coefficient (-0.1337) was statistically significant and conformed strictly to the invertibility bounds. For ARIMA(2,2,0), the estimated AR1 coefficient (1.5430) was statistically significant but did not conform strictly to the stationarity bounds and was excluded, while the estimated AR2 coefficient (-0.9000) was statistically significant, conformed strictly to the stationarity bounds and was retained. ARIMA(2,2,0) was considered the best model, having the smallest AIC. The Ljung-Box test showed that the residuals are random, implying that the model fits the data adequately. The forecast.Arima function gives forecasts of the population growth rate for the next thirty-eight (38) years, i.e. up to 2050, together with 80% and 95% prediction intervals for those forecasts.
Keywords: Modelling, ARIMA Model, Parameter, Dickey-Fuller, Stationarity
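As a rough illustration of the workflow described in the abstract, the following R sketch (using the tseries and forecast packages, which provide the Dickey-Fuller test and the Arima/forecast functions) reproduces the main steps; the vector name growth_rate and the Ljung-Box lag are placeholders, not values taken from the paper.

library(tseries)    # adf.test
library(forecast)   # Arima, forecast

growth <- ts(growth_rate, start = 1982)        # annual growth rates, 1982-2012
plot(growth)                                   # time plot: trend and fluctuations
acf(growth); pacf(growth)                      # slow ACF decay -> non-stationary

d2 <- diff(growth, differences = 2)            # second differencing, d = 2
adf.test(d2)                                   # Dickey-Fuller test for a unit root
acf(d2); pacf(d2)                              # tentative identification of p and q

fit <- Arima(growth, order = c(2, 2, 0))       # candidate model with smallest AIC
summary(fit)                                   # coefficient estimates and AIC
Box.test(residuals(fit), lag = 10, type = "Ljung-Box")  # residual randomness check

fc <- forecast(fit, h = 38, level = c(80, 95)) # forecasts to 2050 with 80%/95% intervals
plot(fc)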
Ordinal logistic model for finding the risk factors of HIV testing in injecting drug users
Ordinal regression is a method used when the dependent variable is ordinal and the independent variables may be dichotomous, polytomous, continuous, or a combination of these. Ordinal logistic regression predicts the "odds" of having a lower or a higher value of the dependent variable (y) based on the independent variables (x); in practice, the most frequently used form is the proportional odds model. HIV testing is necessary for preventing and reducing HIV transmission, but various socio-demographic and HIV-related behavioural factors contribute to high or low uptake of HIV testing in the general population and in high-risk groups. The aim of this study was to find the important factors associated with HIV testing in injecting drug users (IDUs). The ordinal logistic regression model makes assumptions about the nature of the relationship between the ordered response variable (HIV testing) and the predictors. Methods: Information was collected from a total of 139 IDU patients using a specific questionnaire in the district of Kamur in Bihar. Ordinal logistic regression analysis was used to determine which factors make a significant contribution to HIV testing; models were built with HIV testing as the dependent variable and age, marital status, education, occupation, stigma, income, STI/STD problems, needle sharing and HIV information as independent variables. Results: The proportional odds model was applied and its applicability to the data was confirmed, and the significance of all parameters was assessed. Needle sharing, abscess problems, abuse, having heard about STI and HIV, income, HIV knowledge and knowledge of HIV transmission through multiple partners showed a significant contribution to HIV testing among IDU patients. Conclusion: This study has attempted to identify the predictors of HIV testing for injecting drug users by developing an ordinal logistic regression model.
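A minimal R sketch of the proportional odds model described above, using polr from the MASS package; the data frame idu and the variable names are placeholders for the study variables listed in the abstract.

library(MASS)   # polr: proportional-odds ordinal logistic regression

idu$hiv_testing <- factor(idu$hiv_testing, ordered = TRUE)   # ordered response

fit <- polr(hiv_testing ~ age + marital_status + education + occupation +
              stigma + income + sti_std + needle_sharing + hiv_information,
            data = idu, Hess = TRUE)

ctable <- coef(summary(fit))                                  # estimates, SEs, t-values
pvals  <- 2 * pnorm(abs(ctable[, "t value"]), lower.tail = FALSE)
cbind(ctable, "p value" = pvals)                              # approximate p-values
exp(coef(fit))                                                # proportional odds ratios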
Big Data: The next frontier for advance, competition and efficiency
Nowadays organizations are starting to realize the importance of using more data to support decisions about their strategies. The volume of data in the world is growing day by day, driven by the widespread use of the internet, smartphones and social networks. Big data refers to collections of data sets that are very large as well as complex, typically on the scale of petabytes and exabytes. Traditional database systems are not able to capture, store and analyse such large amounts of data, and as the internet grows, the amount of big data continues to grow with it. Big data analytics provides new ways for businesses and governments to analyse unstructured data. Big data is currently one of the most talked-about topics in the IT industry and is going to play an important role in the future, changing the way data is managed and used. Applications include areas such as healthcare, defence, traffic management, banking, agriculture, retail and education. Organizations are becoming more flexible and more open, but new types of data will bring new challenges as well.
Bayesian Analysis of Shape Parameter of Frechet distribution using Non-Informative Prior
In this paper we study the Frechet distribution within the Bayesian paradigm. The posterior distribution is obtained using the uniform prior, the Jeffreys prior and a generalization of the non-informative prior, and quadrature numerical integration is used to evaluate the posterior distribution. Bayes estimators and their risks are obtained under four loss functions, and the performances of the Bayes estimators are compared through a Monte Carlo simulation study.
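As a sketch of the quadrature approach (not the paper's exact implementation), the R fragment below evaluates the posterior of the Frechet shape parameter under a uniform prior with the scale fixed at one; the data vector is illustrative, and the Jeffreys or generalized priors would only change the prior() function.

x <- c(1.2, 0.8, 2.5, 1.7, 0.9, 3.1)           # illustrative data, not from the paper

loglik <- function(a) length(x) * log(a) - (a + 1) * sum(log(x)) - sum(x^(-a))
prior  <- function(a) 1                        # uniform prior on the shape a > 0

post_kernel <- Vectorize(function(a) exp(loglik(a)) * prior(a))

norm_const <- integrate(post_kernel, 0, Inf)$value              # normalizing constant
post_mean  <- integrate(function(a) a * post_kernel(a), 0, Inf)$value / norm_const
post_mean                                      # Bayes estimator under squared-error loss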
Bayesian inference for exponential distribution based on progressive type-II censored data with random scheme
In this paper, we propose a Bayes estimator of the parameter of the exponential distribution under the General Entropy Loss Function (GELF) for progressive Type-II censored data with a random censoring scheme. The proposed estimator is compared with the corresponding Bayes estimator under the Squared Error Loss Function (SELF) and with the Maximum Likelihood Estimator (MLE) in terms of their risks, based on samples simulated from the exponential distribution.
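For orientation (and not as the paper's exact derivation), under a conjugate Gamma(a, b) prior the progressively Type-II censored exponential likelihood is proportional to \theta^{m} e^{-\theta T}, with T = \sum_{i=1}^{m} (R_i + 1)\, x_{i:m:n}, which yields a Gamma(a + m, b + T) posterior, and the SELF and GELF Bayes estimators then take the standard forms

\hat{\theta}_{\mathrm{SELF}} = \frac{a+m}{b+T},
\qquad
\hat{\theta}_{\mathrm{GELF}} = \big[ E(\theta^{-c}) \big]^{-1/c}
= \frac{1}{b+T}\left[ \frac{\Gamma(a+m)}{\Gamma(a+m-c)} \right]^{1/c},

where c is the shape parameter of the general entropy loss L(\hat{\theta},\theta) \propto (\hat{\theta}/\theta)^{c} - c\ln(\hat{\theta}/\theta) - 1.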
Comparison of similarity coefficients and clustering methods with amplified fragment length polymorphism markers in Colletotrichum gloeosporioides isolates from yam
Since the choice of similarity coefficient used in clustering can have a great impact on the resulting classification, these coefficients need to be studied and understood better so that the right choice can be made for specific situations. In this study, the variation produced by three similarity coefficients (Dice, Jaccard and simple matching) combined with five clustering methods (unweighted pair-group method with arithmetic mean (UPGMA), weighted pair-group method with arithmetic mean (WPGMA), complete linkage, single linkage and neighbour-joining) was assessed with AFLP markers in Colletotrichum gloeosporioides isolates from yam. Comparisons among the similarity coefficients and clustering methods were made using correlation analysis, multidimensional scaling and principal component analysis, and dendrogram topology was compared using the consensus fork index (CFI) and node counts. The grouping of the pathogens by the markers was not related to their agro-ecological zones. The CFI results showed varying levels of similarity among the cluster analysis (CA) methods. It was observed that a high correlation does not necessarily imply similarity in the topology of a tree, so care should be taken in its interpretation. The cophenetic correlation with the original distances suggests that the UPGMA method gives consistent groupings irrespective of the similarity coefficient; its use is therefore recommended for its consistency.
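A brief R sketch of one coefficient/linkage combination and the cophenetic check mentioned above; aflp is a placeholder for the 0/1 AFLP score matrix (isolates in rows, markers in columns).

d_jac <- dist(aflp, method = "binary")        # Jaccard distance for binary data
hc    <- hclust(d_jac, method = "average")    # UPGMA clustering
plot(hc)                                      # dendrogram

coph <- cophenetic(hc)
cor(d_jac, coph)                              # cophenetic correlation coefficient
# Dice and simple-matching coefficients (available in packages such as ade4) and
# other linkage methods ("complete", "single") can be substituted in the same way.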
A Class of Chain Ratio-Product Type Estimators for Population Mean Under Double Sampling Scheme in The Presence of Non-Response
In this paper, we propose conventional and alternative ratio-product type estimators for the population mean using two auxiliary variables in the presence of non-response. The proposed estimators are found to be more efficient than the relevant existing estimators for fixed sizes of the first-phase sample n' and of the sub-sample n (< n') taken from the first-phase sample, under the specified conditions. The proposed estimators are also more efficient than the corresponding estimators of the population mean (Ȳ) of a study variable y for a fixed cost, and incur less cost than the corresponding relevant estimators for a specified variance. The conditions under which the proposed estimators are more efficient than the relevant estimators have been obtained. Both an empirical study and a Monte Carlo simulation study have been carried out to demonstrate the efficiency of the proposed estimators over other relevant estimators.
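The proposed class itself is not reproduced here, but for context the classical building blocks are the Hansen-Hurwitz estimator for sub-sampled non-respondents and the two-phase (double sampling) ratio and product estimators,

\bar{y}^{*} = \frac{n_1 \bar{y}_1 + n_2 \bar{y}_{2r}}{n},
\qquad
\bar{y}_{Rd} = \bar{y}^{*}\,\frac{\bar{x}'}{\bar{x}},
\qquad
\bar{y}_{Pd} = \bar{y}^{*}\,\frac{\bar{x}}{\bar{x}'},

where n_1 and n_2 are the responding and non-responding units (of which r are re-contacted), \bar{x}' is the first-phase sample mean of the auxiliary variable and \bar{x} the second-phase mean; chain ratio-product estimators combine multiplicative factors of this kind.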
Algorithmic Modelling of Boosted Regression Trees on Environmental Big Data
When tackling a big dataset, a new and better approach is crucial. In this paper, to develop an algorithmic model for Boosted Regression Trees (BRT), the authors use the R statistical programming and data analysis tool. The data used in this research are hourly observations collected from 2009 to 2012 at an environmental station located in a coastal area in northern Malaysia. A step-by-step flowchart of the procedure, from the start of the analysis to the achievement of the objective, is provided. Sensitivity testing of the model was carried out on the three main parameters. Only the number of trees (nt) was determined, by estimating the optimal number of iterations using an independent test set (test), out-of-bag estimation (OOB) and five-fold cross-validation (CV), while the learning rate (lr) and interaction depth (tc) were fixed at 0.001 and 5 respectively. The results indicate that the BRT algorithm is best modelled with nt = 10000 combined with the fixed lr and tc, which achieves the minimum predictive error. From the boosting outputs of the relative influence plot and the partial dependence plots, the variables that significantly influence ozone are humidity, ambient temperature, NO and wind speed, with relative influences of 61.72%, 18.17%, 10.27% and 4.5% respectively. The algorithmic BRT model produced from these data provides guidance for use in the field of air pollution specifically, and the BRT algorithm can be applied to big datasets in a variety of fields.
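A compact R sketch of the fit described above, using the gbm package; the data frame env and the variable names are placeholders for the station's hourly records.

library(gbm)

fit <- gbm(ozone ~ ., data = env,
           distribution = "gaussian",
           n.trees = 10000,            # nt
           shrinkage = 0.001,          # learning rate (lr)
           interaction.depth = 5,      # tree complexity / interaction depth (tc)
           cv.folds = 5)               # five-fold cross-validation

best_nt <- gbm.perf(fit, method = "cv")     # optimal number of iterations
# method = "test" (with a train.fraction < 1) or "OOB" gives the alternative
# estimates of nt mentioned in the abstract

summary(fit, n.trees = best_nt)                        # relative influence of predictors
plot(fit, i.var = "humidity", n.trees = best_nt)       # partial dependence plot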
Estimation of Population Mean in Calibration Ratio-Type Estimator under Systematic Sampling
This paper introduces the theory of calibration estimation to ratio estimation under a stratified systematic sampling scheme and proposes a class of calibration ratio-type estimators for estimating the population mean Ȳ of the study variable y using an auxiliary variable x. The bias and variance of the proposed estimator are derived under large-sample approximation. The calibration asymptotic optimum estimator (CAOE) and its approximate variance estimator are also derived. An empirical study evaluating the relative performance of the proposed estimator against members of its class is carried out, and analytical and numerical results demonstrate the dominance of the new proposal.
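The specific estimator is not reproduced here, but the general calibration device it draws on (in the Deville-Sarndal sense) chooses weights w_i close to the design weights d_i subject to a constraint on the auxiliary total: minimizing \sum_{i \in s} (w_i - d_i)^{2} / (d_i q_i) subject to \sum_{i \in s} w_i x_i = X gives

w_i = d_i + \frac{d_i q_i x_i}{\sum_{j \in s} d_j q_j x_j^{2}} \Big( X - \sum_{j \in s} d_j x_j \Big),
\qquad
\hat{Y}_{cal} = \sum_{i \in s} w_i y_i,

and a ratio-type calibration estimator follows from a particular choice of the tuning constants q_i.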
Efficient Product-cum-Dual to Product Estimators of Population Mean in Systematic Sampling
This paper proposes, with justification, a class of product-cum-dual to product estimators for estimating the population mean in systematic sampling using auxiliary information. The bias and variance of the proposed class of estimators have been derived under large sample approximation. Asymptotic optimum estimator (AOE) and its approximate variance estimator are derived and efficiency comparisons made with existing related estimators in theory. Analytical and numerical results show that at optimal conditions, the proposed class of estimators is always more efficient than all existing estimators under review.
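For context (the paper's class itself is not reproduced here), the classical product estimator and the dual-to-product construction based on the transformed auxiliary mean are

\bar{y}_{P} = \bar{y}\,\frac{\bar{x}}{\bar{X}},
\qquad
\bar{x}^{*} = \frac{N\bar{X} - n\bar{x}}{N - n},
\qquad
\bar{y}_{DP} = \bar{y}\,\frac{\bar{X}}{\bar{x}^{*}},

and a product-cum-dual-to-product class is typically obtained by combining the two factors with a tuning constant whose optimal value yields the asymptotic optimum estimator (AOE).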