Forthcoming articles

International Journal of Computational Economics and Econometrics (IJCEE)

These articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Register for our alerting service, which notifies you by email when new issues are published online.

Open Access: Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.
We also offer RSS feeds, which provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Computational Economics and Econometrics (19 papers in press)

Regular Issues

  • Bootstrapping the Log-periodogram Estimator of the Long-Memory Parameter: is it Worth Weighting?   Order a copy of this article
    by Saeed Heravi, Kerry Patterson 
    Abstract: Estimation of the long-memory parameter from the log-periodogram (LP) regression, due to Geweke and Porter-Hudak (GPH), is a simple and frequently used method of semi-parametric estimation. However, the simple LP estimator suffers from a finite-sample bias that increases with the dependency in the short-run component of the spectral density. In a modification of the GPH estimator, Andrews and Guggenberger (AG, 2003) suggested a bias-reduced estimator, but this comes at the cost of inflating the variance. To avoid variance inflation, Guggenberger and Sun (2004, 2006) suggested a weighted LP (WLP) estimator using bands of frequencies, which potentially improves upon the simple LP estimator. In all cases, a key choice in these methods is the frequency bandwidth, m, which confines the chosen frequencies to a neighbourhood of zero. GPH suggested a square-root rule of thumb that has been widely used but has no optimality properties. An alternative, due to Hurvich and Deo (1999), is to derive the root mean square error (rmse) optimising value of m, which depends upon an unknown parameter, although that parameter can be consistently estimated to make the method feasible. More recently, Arteche and Orbe (2009a,b), in the context of the GPH estimator, suggested a promising bootstrap method, based on the frequency domain, to obtain the rmse-optimising value of m while avoiding estimation of the unknown parameter. We extend this bootstrap method to the AG and WLP estimators, to bootstrapping in both the frequency domain (FD) and the time domain (TD) and, in each case, to blind and local versions. We undertake a comparative simulation analysis of these methods for relative performance on the dimensions of bias, rmse, confidence interval width and fidelity.
    Keywords: Long memory; bootstrap; log-periodogram regression; variance inflation; weighted LP regression; time domain; frequency domain.
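    A rough sketch of the baseline GPH log-periodogram regression discussed above, using the square-root bandwidth rule of thumb (plain NumPy; the function and variable names are illustrative, and this is not the authors' weighted or bootstrap procedure):

```python
import numpy as np

def gph_estimate(x, m=None):
    """Simple GPH log-periodogram estimate of the long-memory parameter d.

    Regress the log periodogram at the first m Fourier frequencies on
    -2*log(2*sin(lambda_j/2)); the slope coefficient estimates d.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))                      # GPH square-root rule of thumb
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n  # Fourier frequencies near zero
    dft = np.fft.fft(x - x.mean())
    I = (np.abs(dft[1:m + 1]) ** 2) / (2.0 * np.pi * n)   # periodogram ordinates
    # Log-periodogram regression: log I = c - 2 d log(2 sin(lam/2)) + error
    regressor = -2.0 * np.log(2.0 * np.sin(lam / 2.0))
    X = np.column_stack([np.ones(m), regressor])
    beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return beta[1]   # estimate of d

# Example: white noise should give an estimate of d close to 0
rng = np.random.default_rng(0)
print(gph_estimate(rng.standard_normal(2048)))
```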

  • R&D cooperation performance inside innovation clusters   Order a copy of this article
    by B. G. Jean Jacques Iritié 
    Abstract: This paper theoretically analyzes the effects of an innovation cluster on R&D cooperation outcomes in terms of private R&D investment. We develop a two-stage strategic R&D game with a duopoly in an innovation cluster. The two firms first take their R&D decisions cooperatively in a research joint venture (RJV) before competing in Cournot fashion on the product market. The results show that an innovation cluster improves private R&D investment and social welfare through informational incentives, and that belonging to an innovation cluster strengthens the willingness to cooperate in R&D and partnership. However, the model also shows that innovation clusters can lead to a risk of monopolization of the market that could extend even beyond the cooperation; this risk puts the expected positive effects of innovation cluster policy on R&D cooperation into perspective.
    Keywords: Innovation cluster; research joint venture; localised knowledge spillovers; informational incentives; R&D cooperation performance.

  • Two algorithms in sign restrictions: An exploration in an empirical SVAR   Order a copy of this article
    by Lance Fisher, Hyeon-seung Huh 
    Abstract: This paper investigates whether the key choices for the implementation of algorithms in sign restrictions are consequential for the range of accepted impulse responses in an empirical SVAR. Two algorithms in sign restrictions are considered. In one algorithm, the key choices concern the method by which an initial set of orthogonal shocks is formed and the method by which those shocks are rotated. For this algorithm, the range of accepted responses is invariant to these choices. In the other algorithm, the key choice is the selection of the set of coefficients on the contemporaneous variables in the structural equations which are to be generated. Each selection corresponds to an ordering of the variables. For this algorithm, the range of accepted responses can be affected by the ordering of the variables, in which case the algorithm is extended to loop over all variable orderings.
    Keywords: structural vector autoregressions; sign restrictions; Givens rotations; QR decomposition; instrumental variables; impulse responses.
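    A minimal sketch of the QR-rotation device underlying the first class of algorithms, assuming the reduced-form covariance matrix has already been estimated (the names and the example restriction are hypothetical, and this is not the authors' full looping procedure):

```python
import numpy as np

def sign_restricted_impacts(sigma_u, check_signs, n_draws=1000, seed=0):
    """Generate candidate impact matrices chol(Sigma) @ Q, with Q taken from the
    QR decomposition of a random normal matrix, keeping those that satisfy the
    user-supplied sign restrictions."""
    rng = np.random.default_rng(seed)
    P = np.linalg.cholesky(sigma_u)           # one orthogonalisation of the shocks
    k = sigma_u.shape[0]
    accepted = []
    for _ in range(n_draws):
        Q, R = np.linalg.qr(rng.standard_normal((k, k)))
        Q = Q @ np.diag(np.sign(np.diag(R)))  # normalise the rotation
        A0 = P @ Q                            # candidate impact (impulse) matrix
        if check_signs(A0):
            accepted.append(A0)
    return accepted

# Hypothetical example: the first shock raises both variables on impact
sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
keep = sign_restricted_impacts(sigma, lambda A: np.all(A[:, 0] > 0))
print(len(keep), "accepted draws")
```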

  • The impact of the CAP Health Check on the price relations of the EU food supply chain: A dynamic panel data cointegration analysis   Order a copy of this article
    by Anthony Rezitis, Andreas Rokopanos 
    Abstract: The CAP Health Check of 2008 carried forward the liberalization process initiated with the 2003 CAP Reform. The changes agreed are expected to affect the causal relationships among the supply chain actors. This study investigates price causality along the EU food supply chain, employing a panel cointegration and error correction vector autoregressive (EC-VAR) approach. We utilize monthly data from January 2005 to September 2012 for nineteen European countries. The sample is split in December 2008 to examine how the Health Check affected causality among agricultural commodity prices (ACPs), producer prices (PPs) and consumer prices (CPs). The results indicate the existence of a long-run equilibrium. Furthermore, before the Health Check, ACPs are exogenous while both PPs and CPs are endogenous; after it, all prices become endogenous. Short-run bidirectional causality is found in both periods. The results suggest that decreased support has rendered ACPs more responsive to market signals.
    Keywords: panel cointegration; error correction vector autoregressive (EC-VAR) model; causality analysis.

  • Ensemble margin-based resampling approach for a cost-sensitive credit scoring problem   Order a copy of this article
    by Meryem Saidi, Nesma Settouti, Mostafa El Habib Daho, Mohammed El Amine Bechar 
    Abstract: In the past few years, a growing demand for credit has compelled banking institutions to consider machine learning techniques as a way to reach decisions in a reduced time. Different decision support systems have been used to distinguish defaulting loans from good loans. Despite the good results obtained by these systems, they still face problems such as imbalanced classes and imbalanced misclassification costs. In this work, we propose a cost-sensitive credit scoring system based on a two-step process: a resampling step which handles the imbalanced data problem, followed by a cost-sensitive classification step that recognizes potentially insolvent loans in order to reduce financial loss. A resampling algorithm called Ensemble Margin for Imbalanced Instance (EM2I) is suggested to manage imbalanced datasets in cost-sensitive learning. We compare our algorithm with other techniques from the state of the art, and experimental results on the German credit dataset demonstrate that EM2I leads to a significant reduction of the misclassification cost.
    Keywords: Cost sensitive learning; imbalanced problem; ensemble margin; credit scoring.
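    The EM2I algorithm itself is not reproduced here; the following sketch only illustrates generic cost-sensitive learning on a synthetic, imbalanced credit-style dataset with scikit-learn (the asymmetric costs and all names are assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data standing in for a credit-scoring problem (class 1 = defaulter)
X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Asymmetric misclassification costs: missing a defaulter (false negative)
# is assumed five times as costly as rejecting a good applicant (false positive)
cost_fn, cost_fp = 5.0, 1.0

clf = RandomForestClassifier(class_weight={0: cost_fp, 1: cost_fn}, random_state=0)
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("total misclassification cost:", cost_fp * fp + cost_fn * fn)
```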

  • A nonparametric estimator for stochastic volatility density   Order a copy of this article
    by Soufiane Ouamaliche, Awatef Sayah 
    Abstract: This paper aims at improving the accuracy of stochastic volatility density estimation in a high-frequency setting using a simple procedure involving a combination of kernel smoothing methods, namely kernel regression and kernel density estimation. The employed data, thirty years' worth of hourly observations, are simulated through a Constant Elasticity of Variance-Stochastic Volatility (CEV-SV) model, namely the Heston model, calibrated to fit the S&P500 index, in the form of a two-dimensional diffusion process (Yt, Vt) such that only (Yt) is an observable coordinate. Polynomials of different degrees are then adjusted using weighted least squares to filter the observations of the variance coordinate (Vt) from a convolution structure before applying a straightforward kernel density estimation. The obtained estimates compare well with previous results, displaying an improvement, linked to the degree of the fitted polynomial, that reduces the value of the Mean Integrated Squared Error (MISE) criterion computed with respect to a benchmark density suggested in the literature.
    Keywords: nonparametric estimation; kernel smoothing; kernel regression; kernel density estimation; convolution structure; stochastic volatility; Monte Carlo simulations.
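    A toy sketch of the two kernel ingredients mentioned above, kernel (Nadaraya-Watson) regression followed by kernel density estimation, applied to a simulated noisy variance proxy rather than the paper's CEV-SV filtering scheme (all values are illustrative):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
n = 2000

# Toy mean-reverting latent variance path (stand-in for the unobserved coordinate V_t)
v = np.empty(n); v[0] = 0.04
for t in range(1, n):
    v[t] = abs(v[t - 1] + 3.0 * (0.04 - v[t - 1]) / n
               + 0.3 * np.sqrt(v[t - 1] / n) * rng.standard_normal())

# Noisy observable proxy for the variance (e.g. squared high-frequency returns)
proxy = v + 0.01 * rng.standard_normal(n)

# Step 1: Nadaraya-Watson kernel regression over time filters the noisy proxy
t_idx = np.arange(n, dtype=float)
w = np.exp(-0.5 * ((t_idx[:, None] - t_idx[None, :]) / 5.0) ** 2)
filtered = (w @ proxy) / w.sum(axis=1)

# Step 2: kernel density estimate of the filtered volatility values
density = gaussian_kde(filtered)
print(density([0.03, 0.04, 0.05]))
```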

  • Simulating Long Run Structural Change with a Dynamic General Equilibrium Model   Order a copy of this article
    by Roberto Roson, Wolfgang Britz 
    Abstract: There is an increasing demand for the construction of long-term economic scenarios, possibly with a sectoral detail comparable with that provided by input-output or social accounting matrices. This paper illustrates how a specially designed recursive dynamic CGE model can be used for this purpose. The general equilibrium model (G-RDEM) features a non-homothetic demand system, endogenous saving rates, differentiated industrial productivity growth, interest payments on foreign debt and time-varying input-output coefficients. When the model is forced to follow given projections of GDP and population, it produces a wealth of detailed, consistent data on macroeconomic variables such as consumption patterns, industrial production volumes, resource use and trade flows. We present in this paper the results of a baseline construction exercise, which allows us to identify which structural adjustment processes are likely to be most relevant in shaping the future structure of the global economy. Our numerical tests suggest that one very important factor is the decline in aggregate saving rates (due to higher dependency ratios in the demographic structure), which influences capital stock accumulation, investment, the composition of final demand and productivity. In terms of employment of primary resources, we detected a general pattern of decline in the primary sector, compensated by an increase in several service industries. We also found that while some forces shape global outcomes, others produce implications that are relevant at the regional or industrial scale.
    Keywords: Computable General Equilibrium models; Long-run economic scenarios; Structural change; Economic growth.

  • Persistent dynamics in (in)determinate equilibrium rational expectations models   Order a copy of this article
    by Marco Maria Sorge 
    Abstract: Equilibrium indeterminacy in rational expectations models is often claimed to produce higher time series persistence relative to determinacy. Proceeding by means of a simple linear stochastic model, I formally show that, for reasonable parameter configurations, there exists an uncountable (continuously infinite) set of indeterminate equilibria in low-order AR(MA) representation, which exhibit strictly lower persistence than their determinate counterpart. Implications for empirical studies concerned with, e.g., testing for indeterminacy and macroeconomic forecasting are discussed.
    Keywords: rational expectations; indeterminacy; persistence.
    DOI: 10.1504/IJCEE.2021.10033208
     
  • Size-distribution analysis in the study of urban systems: evidence from Greece   Order a copy of this article
    by Dimitrios Tsiotas, Labros Sdrolias, Georgios Aspridis, Dagmar Škodová-Parmová, Zuzana Dvořáková-Líšková 
    Abstract: This paper empirically examines the utility of size-distribution analysis in the study of urban systems, on data referring to every urban settlement recorded in the 2011 national census of Greece. The study lowers the scale of the size-distribution analysis to the regional level, instead of the national level where it is commonly applied, examining two aspects of size-distributions, the rank-size and the city-size distribution, in comparison with three well-established statistical dispersion indices: the coefficient of variation, the Theil index and the Gini coefficient. The major research question is to detect how capable the size-distribution exponents are of operating as measures of statistical dispersion and of capturing socioeconomic information. Overall, the analysis concludes that the size-distribution assessment is useful for initialising the study of urban systems, where the available information is restricted to population size, and is capable of providing structural information about an urban system and its socioeconomic framework, but it is not more effective than other measures of statistical dispersion.
    Keywords: power-laws; Zipf's law; regional economics; cities; regional science; econophysics; Greece.
    DOI: 10.1504/IJCEE.2021.10033209
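    An illustrative sketch of the two kinds of measures compared in the paper, a rank-size (Zipf) exponent estimated by OLS and a Gini coefficient, applied to hypothetical settlement populations rather than the Greek census data:

```python
import numpy as np

def zipf_exponent(sizes):
    """OLS slope of log(rank) on log(size); a slope near -1 indicates Zipf's law."""
    sizes = np.sort(np.asarray(sizes, dtype=float))[::-1]
    ranks = np.arange(1, len(sizes) + 1)
    X = np.column_stack([np.ones(len(sizes)), np.log(sizes)])
    beta, *_ = np.linalg.lstsq(X, np.log(ranks), rcond=None)
    return beta[1]

def gini(sizes):
    """Gini coefficient of the size distribution (a comparison dispersion measure)."""
    s = np.sort(np.asarray(sizes, dtype=float))
    n = len(s)
    return (2.0 * np.sum(np.arange(1, n + 1) * s) / (n * s.sum())) - (n + 1) / n

# Hypothetical settlement populations (purely illustrative, not the census data)
pops = np.array([3_100_000, 800_000, 350_000, 170_000, 140_000, 120_000, 85_000, 65_000])
print(zipf_exponent(pops), gini(pops))
```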
     
  • Factor decomposition of disaggregate inflation: the case of Greece   Order a copy of this article
    by Nikolaos A. Krimpas, Paraskevi K. Salamaliki, Ioannis A. Venetis 
    Abstract: We use static and dynamic factor models to decompose Greek inflation into common components. Static factor analysis suggests the need to develop comprehensive underlying inflation measures for Greece. Dynamic factor analysis decomposes inflation into three components: pure inflation and relative price inflation, both driven by aggregate shocks, and an idiosyncratic component reflecting sector-specific shocks. We identify the idiosyncratic component as the main source of inflation variability, while pure inflation and its associated shocks are dominant compared with relative inflation. Based on pure inflation correlations, the relative weight of anticipated monetary shocks is large only for the spread between the ten-year government bond yield and a three-month short-run rate, and only in times of monetary stability.
    Keywords: disaggregate CPI; dynamic factor model; pure inflation; relative prices; Greece.
    DOI: 10.1504/IJCEE.2021.10033210
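    An illustrative sketch of the static (principal components) step: factors are extracted from a standardised toy panel standing in for disaggregate CPI sub-indices, and the variance share of the common component is reported (the data and the number of factors are assumptions; the paper's dynamic decomposition into pure and relative-price inflation is not shown):

```python
import numpy as np

def static_factors(panel, k=2):
    """Principal-components estimate of a static factor model X = F L' + e:
    standardise the panel and take the top-k eigenvectors of its covariance."""
    X = (panel - panel.mean(axis=0)) / panel.std(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(eigval)[::-1][:k]
    loadings = eigvec[:, order]
    factors = X @ loadings
    common = factors @ loadings.T                  # common component of each series
    share = 1.0 - ((X - common) ** 2).sum() / (X ** 2).sum()
    return factors, share                          # share of panel variance explained

# Toy panel standing in for disaggregate CPI sub-indices (T months x N sectors)
rng = np.random.default_rng(0)
T, N = 180, 30
f = rng.standard_normal((T, 1))
panel = f @ rng.standard_normal((1, N)) + 0.8 * rng.standard_normal((T, N))
factors, share = static_factors(panel, k=2)
print("variance share of the common component:", round(share, 3))
```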
     
  • Computational method for approximating the behaviour of a triopoly: an application to the mobile telecommunications sector in Greece   Order a copy of this article
    by Yiannis C. Bassiakos, Zacharoula Kalogiratou, Theodoros Monovasilis, Nicholas Tsounis 
    Abstract: Computational biology models of the Volterra-Lotka family, known as competing species models, are used for modelling a triopoly market, with an application to the mobile telecommunications sector in Greece. Using a data sample for 1999-2016, parameter estimation with nonlinear least squares is performed. The findings show that the proportional change in the market share of each of the two largest companies, Cosmote and Vodafone, depends negatively on the market share of the other. Further, the market share of the market leader, Cosmote, depends positively on the market share of the smallest company, Wind. The proportional change in the market share of Wind depends negatively on the market share of the largest company, Cosmote, but positively on the change in the market share of the second company, Vodafone. In the long run it was found that the market shares tend to the stable equilibrium point where all three companies survive, with Cosmote having a projected number of approximately 7.3 million subscribers after eleven years (in 2030), Vodafone 4.9 million and Wind 3.7 million, for a total projected market size of approximately 16 million customers.
    Keywords: Volterra-Lotka models; triopoly; mobile telecommunications sector; Greece.
    DOI: 10.1504/IJCEE.2021.10033211
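    A minimal sketch of a three-species competing-species (Volterra-Lotka) system integrated numerically; the growth rates, interaction coefficients and carrying capacities below are purely illustrative, not the nonlinear least squares estimates reported in the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Competing-species system for three operators' subscriber levels (millions).
# All parameter values are hypothetical placeholders.
r = np.array([0.5, 0.4, 0.3])               # intrinsic growth rates
a = np.array([[1.0, 0.2, 0.1],              # interaction (competition) matrix
              [0.3, 1.0, 0.2],
              [0.2, 0.1, 1.0]])
K = np.array([7.3, 4.9, 3.7])               # carrying capacities

def rhs(t, x):
    # dx_i/dt = r_i x_i (1 - sum_j a_ij x_j / K_i)
    return r * x * (1.0 - (a @ x) / K)

sol = solve_ivp(rhs, (0.0, 50.0), [2.0, 1.5, 0.5])
print(sol.y[:, -1])   # long-run subscriber levels approached by the three firms
```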
     
  • Separating yolk from white: a filter based on economic properties of trend and cycle   Order a copy of this article
    by Peng Zhou 
    Abstract: This paper proposes a new filter technique to separate trend and cycle based on stylised economic properties, rather than relying on ad hoc statistical properties such as frequency. Given the theoretical separation between economic growth and business cycle literature, it is necessary to make measures of trend and cycle match what the respective theories intend to explain. The proposed filter is applied to the long macroeconomic data collected by the Bank of England (1700-2015).
    Keywords: filter; trend; cycle.
    DOI: 10.1504/IJCEE.2021.10033212
     
  • Efficiency of microfinance institutions of South Asia: a bootstrap DEA approach   Order a copy of this article
    by Asif Khan, Rachita Gulati 
    Abstract: Microfinance institutions (MFIs) operate with the dual goals of financial sustainability and social outreach. The present paper therefore assesses these twin objectives for MFIs operating in four selected South Asian countries (Bangladesh, India, Nepal and Pakistan) during the financial years 2010 to 2015. First, we remove outliers from the dataset following the Banker and Gifford (1988) and Banker and Chang (2006) guidelines. Thereafter, the study uses bootstrap data envelopment analysis (DEA), with two separate models, to measure bias-corrected financial and social efficiency estimates. The empirical results confirm that South Asian MFIs were more financially than socially efficient during the study period. Further, Indian MFIs outperform on both aspects, followed by Nepali and Bangladeshi MFIs, respectively, while Pakistani MFIs are the weakest performers in terms of both social outreach and financial sustainability.
    Keywords: bias-corrected efficiency; financial efficiency; social efficiency; bootstrap data envelopment analysis; DEA; bootstrap DEA; microfinance institutions; MFIs; microfinance; South Asia.
    DOI: 10.1504/IJCEE.2021.10033213
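    For orientation, the sketch below computes plain input-oriented CCR DEA efficiency scores by linear programming on hypothetical MFI data; the bias correction in the paper would additionally bootstrap these scores, which is not shown:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, o):
    """Input-oriented CCR DEA efficiency score of unit o.

    X: (n_units, n_inputs), Y: (n_units, n_outputs). Decision variables are
    [theta, lambda_1, ..., lambda_n]; minimise theta subject to the usual
    envelopment constraints."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n); c[0] = 1.0                        # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):                                     # sum_j lam_j x_ji <= theta x_oi
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i]))); b_ub.append(0.0)
    for r_ in range(s):                                    # sum_j lam_j y_jr >= y_or
        A_ub.append(np.concatenate(([0.0], -Y[:, r_])));   b_ub.append(-Y[o, r_])
    bounds = [(None, None)] + [(0.0, None)] * n
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[0]

# Hypothetical MFI data: inputs = (assets, staff), outputs = (loan portfolio, borrowers)
X = np.array([[100.0, 20], [80, 25], [120, 30], [60, 15]])
Y = np.array([[90.0, 5000], [70, 6000], [100, 5500], [50, 4000]])
print([round(dea_ccr_input(X, Y, o), 3) for o in range(len(X))])
```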
     

Special Issue on: ICOAE2017 Applied Economics

  • Demand Monotonicity of a Pavement Cost Function Used to Determine Aumann-Shapley Values in Highway Cost Allocation   Order a copy of this article
    by Dong-Ju Lee, Saurav Kumar Dubey, Chang-Yong Lee, Alberto Garcia-Diaz 
    Abstract: Pavement thickness and traffic lanes are two essential requirements affecting the cost of a highway design project. The traffic loadings on a pavement are typically measured in 18-kip Equivalent Single-Axle Loads (ESALs). In this paper both ESALs and lanes are treated as two types of players, and a pavement cost function is developed to determine the average marginal cost for each type of player. These averages are known as the Aumann-Shapley (A-S) values and are used to allocate the highway cost among all vehicle classes. The proposed pavement cost function is proved to be monotonically increasing as the traffic loadings (ESALs) increase, a necessary condition for the function to be acceptable for computing Aumann-Shapley values. A severe limitation of the procedure to calculate marginal costs for the traffic-loading players is the extremely large number of permutations, since the number of players is enormously high. To overcome this limitation, this article derives a compact form for the discrete A-S values of ESALs and lanes that allows the Aumann-Shapley values to be calculated in a computationally effective manner.
    Keywords: Cooperative Game Theory; Discrete Aumann-Shapley Values; Linear Optimization; Demand Monotonicity; Transportation Economics; Computational Efficiency.
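    The following sketch illustrates continuum Aumann-Shapley pricing by integrating marginal costs along the ray from zero to the full demand vector; the cost function and demands are illustrative stand-ins, not the pavement cost function or the compact discrete formula derived in the paper:

```python
import numpy as np

def aumann_shapley_prices(cost, x, n_grid=1000, eps=1e-6):
    """Aumann-Shapley unit prices: for each demand type, integrate its marginal
    cost along the ray t*x for t in [0, 1] (midpoint rule, central differences)."""
    x = np.asarray(x, dtype=float)
    t = (np.arange(n_grid) + 0.5) / n_grid
    prices = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        mc = np.array([(cost(ti * x + e) - cost(ti * x - e)) / (2 * eps) for ti in t])
        prices[i] = mc.mean()
    return prices

# Hypothetical two-type cost function of ESALs (x[0]) and lanes (x[1]);
# this is NOT the cost function derived in the paper, just a stand-in.
cost = lambda x: 50.0 * x[1] + 12.0 * (x[0] ** 0.7) * (x[1] ** 0.3)
demand = np.array([200.0, 4.0])

p = aumann_shapley_prices(cost, demand)
# The allocation p @ demand should approximately recover C(x) - C(0).
print(p, "allocated:", p @ demand, "cost above C(0):", cost(demand) - cost([0.0, 0.0]))
```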

Special Issue on: Machine Learning, Artificial Intelligence, and Big Data Methods and New Perspectives

  • A SAS Macro for Examining Stationarity Under the Presence of Endogenous Structural Breaks   Order a copy of this article
    by Dimitrios Dadakas, Scott Fargher 
    Abstract: The endogenous structural break literature presents numerous computationally intensive procedures for examining stationarity under the presence of single or multiple structural breaks. Application of these grid-search procedures is rather complicated, and not many researchers have access to code that can easily be applied. In this article, we present a SAS macro that allows the examination of stationarity under the assumption of either one or two endogenously determined structural breaks, using the Zivot and Andrews (1992) and the Lumsdaine and Papell (1997) methodologies. We demonstrate the macro using the Nelson-Plosser (1982) data, which were also used by Zivot and Andrews (1992) and Lumsdaine and Papell (1997), to highlight the similarities and differences between the macro's results and the originally published ones.
    Keywords: Endogenous Structural Breaks; Stationarity; Time Series; SAS; Macro.
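    The macro itself is written in SAS; as a language-neutral illustration, the sketch below performs a simplified Zivot-Andrews style grid search (intercept-break model, no augmentation lags) in Python, returning the minimum t-statistic to be compared against the Zivot-Andrews critical values (the two-break Lumsdaine-Papell variant is not shown):

```python
import numpy as np

def zivot_andrews_tstat(y, trim=0.15):
    """For every candidate break date, regress dy_t on a constant, trend, break
    dummy and y_{t-1}; return the smallest t-statistic on y_{t-1} and its break."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    dy = np.diff(y)
    ylag = y[:-1]
    trend = np.arange(1, n)
    best = (np.inf, None)
    for tb in range(int(trim * n), int((1 - trim) * n)):
        du = (np.arange(1, n) > tb).astype(float)        # intercept shift after break tb
        X = np.column_stack([np.ones(n - 1), trend, du, ylag])
        beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
        resid = dy - X @ beta
        sigma2 = resid @ resid / (len(dy) - X.shape[1])
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[3, 3])
        t_stat = beta[3] / se
        if t_stat < best[0]:
            best = (t_stat, tb)
    return best    # compare the t-statistic against Zivot-Andrews critical values

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(200))                  # a random walk: should not reject
print(zivot_andrews_tstat(y))
```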

  • Automated detection of entry and exit nodes in traffic networks of irregular shape   Order a copy of this article
    by Simon Plakolb, Christian Hofer, Georg Jäger, Manfred Füllsack 
    Abstract: We devise an algorithm that can automatically identify the entry and exit nodes of an arbitrary traffic network. It is applicable even if the network is of irregular shape, which is the case for many cities. Additionally, the method can calculate the nodes' attractiveness to commuters. This technique is then used to improve a traffic model, so that it is no longer dependent on expert knowledge and manual steps and can thus be used to analyse arbitrary traffic systems. Evaluation of the algorithm is twofold: first, the positions of the identified entry nodes are compared to existing traffic data; second, a more in-depth analysis uses the traffic model to simulate a city in two ways, once with hand-picked entry nodes and once with automatically detected ones. The evaluation shows that the simulation yields a good match to the real-world data, substantiating the claim that the algorithm can fully replace a manual identification process.
    Keywords: traffic modelling; network analysis; commuting; automated detection; entry nodes; exit nodes; traffic simulation; mobility behaviour; agent-based model; road usage; congestion.

  • Depth based support vector classifiers to detect data nests of rare events   Order a copy of this article
    by Rainer Dyckerhoff, Hartmut Jakob Stenz 
    Abstract: The aim of this project is to combine data depth with support vector machines (SVMs) for binary classification. To this end, we introduce data depth functions and SVMs and discuss why a combination of the two is expected to work better in some cases than using SVMs alone. For two classes X and Y, one investigates whether an individual data point should be assigned to one of these classes. In this context, our focus lies on the detection of rare events, which are structured in data nests: class X contains many more data points than class Y, and Y has less dispersion than X. This form of classification problem is akin to finding the proverbial needle in a haystack. Data structures like these are important in churn prediction analyses, which serve as a motivation for possible applications. Beyond the analytical investigations, comprehensive simulation studies are also carried out.
    Keywords: Data depth; DD-plot; Mahalanobis depth function; support vector machines; binary classification; hybrid methods; rare events; data nest; churn prediction; big data.
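    A minimal sketch of the kind of hybrid described above: Mahalanobis depths with respect to each class form DD-plot style features on which an SVM is trained, using synthetic nested data (an illustrative combination under assumed settings, not the authors' exact procedure):

```python
import numpy as np
from sklearn.svm import SVC

def mahalanobis_depth(points, cloud):
    """Mahalanobis depth of each point with respect to a data cloud:
    1 / (1 + squared Mahalanobis distance to the cloud mean)."""
    mu = cloud.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(cloud, rowvar=False))
    d = points - mu
    md2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return 1.0 / (1.0 + md2)

rng = np.random.default_rng(0)
# Class X: large and dispersed; class Y: small, tight "nest" of rare events
X_major = rng.normal(0.0, 2.0, size=(2000, 2))
X_minor = rng.normal(1.5, 0.3, size=(60, 2))
X = np.vstack([X_major, X_minor])
y = np.r_[np.zeros(len(X_major)), np.ones(len(X_minor))]

# DD-plot style features: depth of every point with respect to each class
features = np.column_stack([mahalanobis_depth(X, X_major),
                            mahalanobis_depth(X, X_minor)])

clf = SVC(kernel='rbf', class_weight='balanced').fit(features, y)
print("training accuracy on the depth features:", clf.score(features, y))
```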

  • Does time-frequency-scale analysis predict inflation? Evidence from Tunisia   Order a copy of this article
    by Ammouri Bilel, Fakhri Issaoui, Habib Zitouna 
    Abstract: Forecasting macroeconomic indicators has always been an issue for economic policymakers. Different models are available in the literature, for example univariate and/or multivariate models and linear and/or non-linear models. This diversity requires a multiplicity of techniques, which can be classified as pre- and post-time series. This multiplicity, however, allows a better forecast of macroeconomic indicators during periods of unrest (be it political, economic and/or social). In this paper, we deal with the performance of macroeconomic models for predicting Tunisia's inflation during the instability that followed the 2011 revolution. To achieve this goal, time-frequency-scale analysis (the Fourier transform, the wavelet transform and the Stockwell transform) is used, and we are interested in the ability of these techniques to improve predictive performance. The results point to a good performance of the adopted time-frequency-scale approach, although this performance is not absolute: the approach is outperformed by a multivariate dynamic factor model during periods of economic instability.
    Keywords: Inflation forecast; uni-varied model; multi-varied model; time-frequency analysis; Fourier transform; wavelet transform; Stockwell transform.
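    A toy sketch of one wavelet-based forecasting scheme of the kind considered: decompose the series into scale components with PyWavelets, forecast each with a simple AR(1), and sum (the series, wavelet choice and AR order are assumptions; the Fourier and Stockwell variants are not shown):

```python
import numpy as np
import pywt

def wavelet_components(x, wavelet='db4', level=3):
    """Split a series into additive components, one per wavelet scale, by zeroing
    all but one coefficient band before reconstruction."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(kept, wavelet)[:len(x)])
    return comps

def ar1_forecast(x):
    """One-step-ahead forecast from an AR(1) with intercept fitted by OLS."""
    X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    beta, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    return beta[0] + beta[1] * x[-1]

rng = np.random.default_rng(0)
infl = 4.0 + 0.1 * np.cumsum(rng.normal(0, 0.2, 240)) + rng.normal(0, 0.3, 240)  # toy series

forecast = sum(ar1_forecast(c) for c in wavelet_components(infl))
print("one-step-ahead forecast:", forecast)
```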

Special Issue on: Spatial Analysis and Interaction in Economics and Econometrics Data and Modelling for Sustainable Spatial Systems

  • Exploring Brexit implications: the impact of longer journey times   Order a copy of this article
    by Bernard Fingleton 
    Abstract: Brexit implies longer journey times between UK and EU regions. In this paper the elasticity of trade with respect to journey time by goods vehicles is estimated, and the impact of this on employment is evaluated using a dynamic spatial panel data model. The estimator allows for the presence of endogenous and predetermined causal variables, regional interdependence, and attempts to control for common factors causing macro-economic variation over the estimation period. The estimates show that a job shortfall can be expected in both the UK and EU regions, with considerable diversity of outcome across regions.
    Keywords: journey times; dynamic spatial panel model; regional employment.