Research picks

  • The concept of rurality is well-trodden ground in policy discussions, but less attention has been given to its more elusive sibling, remote-rurality. A study in the World Review of Entrepreneurship, Management and Sustainable Development examines this concept, revealing the complexities of defining and addressing the needs of remote-rural areas, particularly in Scotland, where the distinction is not merely academic but vital to economic sustainability and infrastructure planning.

    Sayed Abdul Majid Gilani and Naveed Yasin of the Canadian University Dubai, UAE, and Peter Duncan and Anne M.J. Smith of Glasgow Caledonian University, UK, introduce five dimensions to discuss remote-rural regions: population size, proximity to urban centres, level of development, cultural characteristics, and social perception. These categories highlight the inadequacy of relying on a single definition for remote-rural areas, emphasizing the need for a multidimensional approach.

    Defining "rural" is no simple task, the team points out, as various countries use different metrics – such as population thresholds of under 2,500 in the USA and fewer than 10,000 in the UK. However, the addition of remote-rurality introduces further layers of isolation, limited services, and distinct cultural identities that demand attention from researchers and, in turn, from policymakers.

    In Scotland, the government distinguishes between "accessible-rural" and "remote-rural" regions, the latter being considerably more isolated from urban hubs. This distinction is more than theoretical – it has implications for infrastructure, most notably transport, food availability, and now, broadband connectivity, which remains alarmingly inadequate in many remote-rural areas. The research highlights that over 80% of businesses in Scotland's remote regions are small or medium-sized enterprises (SMEs), many of which cannot operate effectively or efficiently because they lack access to basic services those in urban regions take for granted.

    In this study, the team urges policymakers to adopt a more nuanced understanding of remote-rural areas when considering infrastructure investments. By addressing the challenges faced by such communities, governments might create conditions that enable businesses not just to survive but to thrive, and so preclude the exodus of SMEs to the cities. This would not only benefit those businesses but also reduce some of the pressure on already overcrowded cities and narrow the cultural and economic divide between urban and rural areas.

    The team emphasizes that enhanced broadband access, for instance, could allow SMEs to operate more efficiently and allow them to exploit national and global markets more effectively. The survival of these SMEs is often critical to the economic sustainability of remote-rural regions.

    Gilani, S.A.M., Yasin, N., Duncan, P. and Smith, A.M.J. (2024) 'What is remote-rural and why is it important?', World Review of Entrepreneurship, Management and Sustainable Development, Vol. 20, No. 5, pp.517–537.
    DOI: 10.1504/WREMSD.2024.140706

  • A new system aimed at improving the monitoring and detection of forest fires through advanced real-time image processing is reported in the International Journal of Information and Communication Technology. The work could lead to faster and more accurate detection and so help improve the emergency response to reduce the environmental, human, and economic impacts.

    Zhuangwei Ji and Xincheng Zhong of Changzhi College, in Shanxi, China, describe an image segmentation model based on STDCNet, an enhanced version of the BiseNet model. Image segmentation involves classifying areas within an image to allow flames and forest background to be differentiated. The STDCNet approach can extract relevant features efficiently without demanding excessive computational resources.

    The team explains how their approach uses a bidirectional attention module (BAM). This allows it to focus on distinct characteristics of different image features and determine the relationships between adjacent areas in the image within the same feature. This dual approach improves the precision of fire boundary detection, particularly for small-scale fires that are often missed until they have grown much larger.

    Tests with the model on a public dataset showed better performance than existing approaches in terms of both accuracy and computational efficiency. This bolsters the potential for real-time fire detection, where early identification can prevent fires from spreading uncontrollably.

    The new system has several advantages over standard fire detection methods, such as ground-based sensors and satellite imagery. These have limitations such as high maintenance costs, signal transmission issues, and interference from environmental factors such as clouds and rugged terrain. The researchers suggest that drones equipped with the new image processing technology could offer a more adaptable and cost-effective alternative to sensors or satellites, allowing fire detection to be carried out in different weather conditions and in challenging environments.

    Ji, Z. and Zhong, X. (2024) 'Bidirectional attention network for real-time segmentation of forest fires based on UAV images', Int. J. Information and Communication Technology, Vol. 25, No. 6, pp.38–51.
    DOI: 10.1504/IJICT.2024.141434

  • Online gaming is increasingly popular. As such, server efficiency is becoming an increasingly urgent priority. With millions of players interacting in real-time, game servers are under enormous pressure to process a huge amount of data without latency (delays) or crashes. Research in the International Journal of Information and Communication Technology discusses an innovative solution to the problem, offering a promising path to greater stability and performance in mobile real-time strategy games and beyond.

    WenZhen Wang of the Animation Art Department at Zibo Vocational Institute, Zibo, Shandong, China, hopes to address the most critical issue in online multiplayer games – load balancing to ensure there is a high level of concurrency and interactivity. Load balancing refers to the distribution of work across multiple servers to prevent any one server from being overwhelmed. If a server receives too many requests at once, it can slow down, leading to frustrating lag or even server crashes. Ensuring efficient distribution of this workload is essential to maintaining a seamless gaming experience.

    Wang has introduced a new method for handling load balancing using a "consistent hash" algorithm. In simple terms, a hash function takes an input – player activity or game data, say – and converts it into a fixed-size output, or hash, which determines the server responsible for that item. The consistency lies in mapping both servers and workload items onto the same hash space: when a server is added or removed, only the small fraction of items assigned to the affected portion of that space needs to be redistributed, rather than everything being reshuffled. The main advantage lies in its adaptability to the highly dynamic environments of multiplayer games, where the number of players and the complexity of in-game interactions change quickly throughout the game.
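
    The idea can be sketched in a few lines of code. The toy ring below illustrates the general technique only – it is not Wang's implementation, and the server names and virtual-node count are invented – but it demonstrates the property that matters for busy game servers: when one server drops out, only the keys that lived on it are reassigned, and every other player's mapping is untouched.

```python
import bisect
import hashlib

def ring_hash(key):
    """Map a string to a point on the hash ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Toy consistent-hash ring; virtual nodes smooth the distribution."""

    def __init__(self, servers, vnodes=100):
        self._points = []  # sorted ring positions
        self._owners = []  # server owning each position
        for server in servers:
            self.add_server(server, vnodes)

    def add_server(self, server, vnodes=100):
        for i in range(vnodes):
            point = ring_hash(f"{server}#{i}")
            idx = bisect.bisect(self._points, point)
            self._points.insert(idx, point)
            self._owners.insert(idx, server)

    def remove_server(self, server):
        kept = [(p, s) for p, s in zip(self._points, self._owners) if s != server]
        self._points = [p for p, _ in kept]
        self._owners = [s for _, s in kept]

    def lookup(self, key):
        """A key belongs to the first server clockwise from its point."""
        idx = bisect.bisect(self._points, ring_hash(key)) % len(self._points)
        return self._owners[idx]

ring = ConsistentHashRing(["s1", "s2", "s3"])
keys = [f"player-{i}" for i in range(1000)]
before = {k: ring.lookup(k) for k in keys}
ring.remove_server("s2")                 # server s2 crashes or is drained
after = {k: ring.lookup(k) for k in keys}
moved = sum(before[k] != after[k] for k in keys)
# Only the keys that were on s2 get new homes; everything else stays put.
```

    Hashing each server many times (the "virtual nodes") spreads its positions around the ring, so no single server takes a disproportionate share of the load.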

    To test the effectiveness of the algorithm, Wang ran virtual simulations replicating real-world game scenarios and demonstrated that the approach allowed for load balancing that led to stable server operations. The system could then handle large numbers of simultaneous player interactions while maintaining performance quality.

    Wang, W. (2024) 'Virtual simulation of game scene based on communication load balancing algorithm', Int. J. Information and Communication Technology, Vol. 25, No. 6, pp.18–37.
    DOI: 10.1504/IJICT.2024.141435

  • The media landscape is increasingly complicated. It is also plagued by sensationalism and a disconnection between media literacy and management practices. Many observers worry about the proliferation of "clickbait" and "fake news". Misleading reports rife with hyperbole exacerbate the problems faced by many people, and the distortion of serious issues creates a turbulent environment where the lines between information, disinformation, and misinformation are often blurred. Moreover, the lack of a clear distinction between the news and the public relations and marketing output of companies, especially in the age of influencers, is also of increasing concern.

    Research in the International Journal of Information and Communication Technology suggests that the only way to address these problems is to make a determined shift towards more rigorous news ethics, adapted to the modern media environment. An Shi of Fujian Business University, in Fuzhou, Fujian, China, points out that the use of mathematical algorithms, specifically the Fredholm integral equation algorithm, could help us tackle many of the complex problems we have with the news media. Despite the often negative press about artificial intelligence (AI), ironically it is the use of machine learning, trained algorithms, and neural networks that might provide us with an escape route from the era of clickbait and fake news.

    It is worth noting that the concept of 'non-standard' press behaviour has been with us for many years – a term introduced to address deviations from accepted professional standards in the media. Where these ethical shortcomings undermine societal responsibilities and negatively affect audiences, there is a serious problem. This has been exacerbated by the move from traditional media channels, such as newspapers, radio, and television – to dynamic platforms like social media and online news outlets where the frontiers are wide open.

    Empirical studies have demonstrated that the influence of unethical practices in the media extends way beyond public perception into financial markets and politics. The rapid dissemination of news can significantly impact stock prices and market stability and even affect the outcomes of elections and referenda, or at the least colour the public response to them. The current work offers policy recommendations and governance schemes that could help market regulators and company managers ameliorate the negative impact of clickbait and fake news.

    Shi, A. (2024) 'News media coverage and market efficiency research based on Fredholm integral equation algorithm', Int. J. Information and Communication Technology, Vol. 25, No. 6, pp.68–77.
    DOI: 10.1504/IJICT.2024.141438

  • Research published in the International Journal of Information and Communication Technology may soon help solve a long-standing challenge in semiconductor manufacture: the accurate detection of surface defects on silicon wafers. Crystalline silicon is the critical material used in the production of integrated circuits; to deliver the computing power behind everyday electronics and advanced automotive systems, its surface needs to be as pristine as possible before the microscopic features of the circuit are printed onto it.

    Of course, no manufacturing technology is perfect and the intricate process of fabricating semiconductor chips inevitably leads to some defects on the silicon wafers. This reduces the number of working chips in a batch and leads to a small, but significant proportion of the production line output failing.

    Defects on silicon wafers have usually been spotted manually, with human operators examining each wafer by eye. This is both time-consuming and error-prone, given the fine attention to detail required. As wafer production has ramped up globally to meet demand and the defects themselves have become harder to detect by eye, the limitations of this approach have become more apparent.

    Chen Tang, Lijie Yin and Yongchao Xie of the Hunan Railway Professional Technology College in Zhuzhou, Hunan Province, China explain that automated detection systems have emerged as a possible solution. These too present efficiency and accuracy issues in large-scale production environments. As such, the team has turned to deep learning, particularly convolutional neural networks (CNNs), to improve wafer defect detection.

    The researchers explain that CNNs have demonstrated significant potential in image recognition. They have now demonstrated that this can be used to identify minute irregularities on the surface of a silicon wafer. The "You Only Look Once" (YOLO) series of object detection algorithms is well known for being able to balance accuracy against detection speed.

    The Hunan team has taken the YOLOv7 algorithm a step further to address the specific problems faced in wafer defect detection. The main innovation in the work lies in using SPD-Conv, a specialized convolutional operation to enhance the ability of the algorithm to extract fine details from images of silicon wafers. Additionally, the researchers incorporated a Convolutional Block Attention Module (CBAM) into the model to sharpen the system's focus on smaller defects that are often missed in manual inspection or by other algorithms.

    When tested on the standard dataset (WM-811k) for assessing wafer defect detection systems, the team's refined YOLOv7 algorithm achieved a mean average precision of 92.5% and had a recall rate of 94.1%. It did this quickly, at a rate of 136 images per second, which is faster than earlier systems.
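
    For readers unfamiliar with these metrics, the sketch below (with invented boxes, not the WM-811k data) shows how precision and recall are computed by matching predicted boxes to ground truth using intersection-over-union (IoU); mean average precision then averages precision across confidence thresholds and defect classes.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(predicted, actual, threshold=0.5):
    """Greedily match each predicted box to at most one unclaimed
    ground-truth box with IoU above the threshold."""
    unmatched = list(actual)
    tp = 0
    for box in predicted:
        hit = next((g for g in unmatched if iou(box, g) > threshold), None)
        if hit is not None:
            unmatched.remove(hit)
            tp += 1
    fp = len(predicted) - tp   # detections with no matching real defect
    fn = len(unmatched)        # real defects the detector missed
    return tp / (tp + fp), tp / (tp + fn)

# Two real defects; the detector finds one and raises one false alarm.
truth = [(0, 0, 10, 10), (50, 50, 60, 60)]
preds = [(1, 1, 11, 11), (80, 80, 90, 90)]
precision, recall = precision_recall(preds, truth)  # both 0.5 here
```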

    Tang, C., Yin, L. and Xie, Y. (2024) 'Wafer surface defect detection with enhanced YOLOv7', Int. J. Information and Communication Technology, Vol. 25, No. 6, pp.1–17.
    DOI: 10.1504/IJICT.2024.141433

  • Odd as it may seem, coal seams that cannot be mined might provide an underground storage medium for carbon dioxide produced by industries burning coal above ground. Research in the International Journal of Oil, Gas and Coal Technology describes controlled experiments designed to simulate the deep geological environments where carbon dioxide might be trapped as a way to reduce the global carbon footprint and ameliorate some of the impact of our burning fossil fuels. Coal seams represent a potential repository for long-term storage of carbon dioxide sequestered from flue gases, as they can trap a lot of carbon dioxide gas in a small volume.

    Major Mabuza of the University of Johannesburg, Johannesburg, Kasturie Premlall of Tshwane University of Technology, Pretoria, and Mandlenkosi G.R. Mahlobo of the University of South Africa, Florida, South Africa, subjected coals to a synthetic flue gas for 90 days at high pressure (9.0 megapascals) and a mildly raised temperature of 60 degrees Celsius. These conditions were intended to replicate the pressures and temperatures found deep underground, providing a realistic model for how coal might behave when used for carbon dioxide sequestration.

    The team then looked at how the chemical structure of coal was changed by exposure to flue gas under these conditions using various advanced analytical chemistry techniques – carbon-13 solid-state nuclear magnetic resonance spectroscopy, universal attenuated total reflectance-Fourier transform infrared spectroscopy, field emission gun scanning electron microscopy with energy dispersive X-ray spectroscopy, and wide-angle X-ray diffraction.

    The results showed that exposure to synthetic flue gas led to major changes to the chemical makeup of the coal. For instance, key functional groups, such as aliphatic hydroxyl groups, aromatic carbon-hydrogen bonds, and carbon-oxygen bonds, were all weakened by the process and the overall physical properties of the coal were also changed.

    By clarifying how coal interacts with flue gas under simulated, but realistic, conditions, the team fills important gaps in our knowledge about the long-term stability and effectiveness of carbon dioxide storage below ground and specifically in coal seams.

    Mabuza, M., Premlall, K. and Mahlobo, M.G.R. (2024) 'In-depth analysis of coal chemical structural properties response to flue gas saturation: perspective on long-term CO2 sequestration', Int. J. Oil, Gas and Coal Technology, Vol. 36, No. 5, pp.1–17.
    DOI: 10.1504/IJOGCT.2024.141437

  • Research in the International Journal of Computational Science and Engineering describes a new approach to spotting messages hidden in digital images. The work contributes to the field of steganalysis, which plays a key role in cybersecurity and digital forensics.

    Steganography involves embedding data within a common medium, such as words hidden among the bits and bytes of a digital image. The image looks no different when displayed on a screen, but someone who knows there is a hidden message can extract or display the message. Given the vast numbers of digital images that now exist, and that number grows at a remarkable rate every day, it is difficult to see how such hidden information might be found by a third party, such as law enforcement. Indeed, in a sense it is security by obscurity, but it is a powerful technique nevertheless. There are legitimate uses of steganography, of course, but there are perhaps more nefarious uses and so effective detection is important for law enforcement and security.
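
    The simplest form of image steganography, least-significant-bit (LSB) embedding, hides one message bit in the lowest bit of each pixel value – a change far too small to see. The sketch below illustrates only this general idea; real embedding schemes, including those that steganalysis must target, are considerably more sophisticated.

```python
def embed(pixels, message):
    """Hide the message, most-significant bit first, in the lowest bit
    of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract(pixels, length):
    """Read `length` bytes back out of the low bits."""
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[j:j + 8]))
        for j in range(0, len(bits), 8)
    )

cover = list(range(256)) * 4          # stand-in for greyscale pixel data
stego = embed(cover, b"hello")
assert extract(stego, 5) == b"hello"
# No pixel value changes by more than 1, so the image looks identical --
# the statistical traces of such changes are what steganalysis hunts for.
```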

    Ankita Gupta, Rita Chhikara, and Prabha Sharma of The NorthCap University in Gurugram, India, have introduced a new approach that improves detection accuracy while addressing the computational challenges associated with processing the requisite large amounts of data.

    Steganalysis involves identifying whether an image contains hidden data. Usually, the spatial rich model (SRM) is employed to detect those hidden messages. It analyses the image to identify tiny changes in the fingerprint that would be present due to the addition of hidden data. However, SRM is complex, has a large number of features, and can overwhelm detection algorithms, leading to reduced effectiveness. This issue is often referred to as the "curse of dimensionality."

    The team has turned to a hybrid optimisation algorithm called DEHHPSO, which combines three algorithms: the Harris Hawks Optimiser (HHO), Particle Swarm Optimisation (PSO), and Differential Evolution (DE). Each of these algorithms was inspired by natural processes. For example, the HHO algorithm simulates the hunting behaviour of Harris hawks and balances exploration of the environment with targeting optimal solutions. The team explains that by combining HHO, PSO, and DE, they can work through complex feature sets much more quickly than is possible with any single current algorithm, however sophisticated.

    The hybrid approach reduces computational demand by eliminating more than 94% of the features that would otherwise have to be processed. The stripped-back feature set can then be processed with a support vector machine (SVM) classifier. The team says this method works better than meta-heuristic approaches (essentially guided trial-and-error) and better even than several deep learning methods, which are usually used to solve more complex problems than steganalysis.
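
    The general shape of this kind of wrapper-style feature selection is easy to illustrate. The sketch below substitutes a plain random-restart hill climb over feature masks for the far more sophisticated DEHHPSO search, and a leave-one-out nearest-neighbour score for the SVM; the synthetic data, penalty weight, and restart count are all invented for illustration.

```python
import random

random.seed(0)

# Synthetic data: features 0 and 1 carry the class signal, the rest are noise.
N_FEATURES, N_SAMPLES = 10, 40

def make_sample(label):
    x = [random.gauss(0, 1) for _ in range(N_FEATURES)]
    x[0] += 3 * label
    x[1] -= 3 * label
    return x

data = [(make_sample(label), label) for label in (0, 1)
        for _ in range(N_SAMPLES // 2)]

def fitness(mask):
    """Leave-one-out 1-nearest-neighbour accuracy on the selected
    features, minus a small penalty per feature kept."""
    idx = [i for i in range(N_FEATURES) if mask[i]]
    if not idx:
        return 0.0
    correct = 0
    for j, (x, y) in enumerate(data):
        _, nearest = min(
            (sum((x[i] - z[i]) ** 2 for i in idx), label)
            for k, (z, label) in enumerate(data) if k != j
        )
        correct += nearest == y
    return correct / len(data) - 0.01 * len(idx)

# Random-restart hill climb over feature masks (a stand-in for DEHHPSO).
best_mask, best_fit = None, -1.0
for _ in range(3):
    mask = [random.random() < 0.5 for _ in range(N_FEATURES)]
    current = fitness(mask)
    improved = True
    while improved:
        improved = False
        for i in range(N_FEATURES):
            trial = mask[:i] + [not mask[i]] + mask[i + 1:]
            f = fitness(trial)
            if f > current:
                mask, current, improved = trial, f, True
    if current > best_fit:
        best_mask, best_fit = mask, current

selected = [i for i, keep in enumerate(best_mask) if keep]
```

    Even this crude search homes in on the informative features and discards most of the noise, which is the same economy DEHHPSO achieves at the far larger scale of SRM feature sets.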

    Gupta, A., Chhikara, R. and Sharma, P. (2024) 'An improved continuous and discrete Harris Hawks optimiser applied to feature selection for image steganalysis', Int. J. Computational Science and Engineering, Vol. 27, No. 5, pp.515–535.
    DOI: 10.1504/IJCSE.2024.141339

  • Cloud computing has become an important part of information technology ventures. It offers a flexible and cost-effective alternative to conventional desktop and local computer infrastructures for storage, processing, and other activities. The biggest advantage to startup companies is that while conventional systems require significant upfront investment in hardware and software, cloud computing gives them the power and capacity on a "pay-as-you-go" basis. This model not only reduces initial capital expenditures at a time when a company may need to invest elsewhere but also allows businesses to scale their resources based on demand without extensive, repeated, and costly physical upgrades.

    A study in the International Journal of Business Information Systems has highlighted the role of fuzzy logic in evaluating the cost benefits of migrating to cloud computing. Fuzzy logic, a method for dealing with uncertainty and imprecision, offers a more flexible approach compared to traditional binary logic. Fuzzy logic recognises the shades of grey inherent in most business decisions rather than seeing things in black and white.

    The team, Aveek Basu and Sraboni Dutta of the Birla Institute of Technology in Jharkhand, and Sanchita Ghosh of the Salt Lake City Electronics Complex, Kolkata, India, explains that conventional cost-benefit analyses often fall short when assessing cloud migration due to the inherent unpredictability in factors such as data duplication, workload fluctuations, and capital expenditures. Fuzzy logic, on the other hand, addresses these challenges by allowing decisions to be made that take into account the uncertainties of the real world.

    The team applied fuzzy logic to evaluate three factors associated with the adoption of cloud computing platforms. First, the probability of data duplication, secondly capital expenditure, and finally workload variation. By incorporating these different factors into the analysis, the team obtained a comprehensive view of the potential benefits and drawbacks of cloud computing from the perspective of a startup company. The approach offers a more adaptable assessment than traditional models.
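
    A flavour of how fuzzy logic handles such shades of grey: the sketch below defines triangular membership functions for a hypothetical "workload variation" variable and evaluates one illustrative rule. The linguistic terms, ranges, and the capital-expenditure membership value are all invented; this does not reproduce the paper's actual rule base.

```python
def triangular(a, b, c):
    """Membership function rising from a to a peak at b, falling to c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Hypothetical linguistic terms for "workload variation", on a 0-100% scale.
low = triangular(-1, 0, 50)       # full membership at 0%
medium = triangular(0, 50, 100)   # peaks at 50%
high = triangular(50, 100, 101)   # full membership at 100%

x = 65.0  # an observed workload variation of 65%
memberships = {"low": low(x), "medium": medium(x), "high": high(x)}
# 65% is mostly "medium" (0.7) but partly "high" (0.3): no hard cutoff,
# unlike binary logic where a value is simply in one category or another.

# One illustrative rule, using min as fuzzy AND: the benefit of migrating
# is limited when capital expenditure is high AND workload variation is low.
capex_high = 0.8  # assumed membership for "capital expenditure is high"
rule_strength = min(capex_high, memberships["low"])  # 0.0: rule doesn't fire
```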

    One of the key findings is that cloud computing leads to a huge reduction in the complexity and costs associated with managing business software and the requisite hardware as well as the endless upgrades and IT support often needed. Cloud service providers manage all of that on behalf of their clients, allowing the business to focus instead on its primary operations rather than IT.

    Basu, A., Ghosh, S. and Dutta, S. (2024) 'Analysing the cloud efficacy by fuzzy logic', Int. J. Business Information Systems, Vol. 46, No. 4, pp.460–490.
    DOI: 10.1504/IJBIS.2024.141318

  • Research in the International Journal of Global Energy Issues has looked at the volatility of rare earth metals traded on the London Stock Exchange. The work used an advanced statistical model known as gjrGARCH(1,1) to follow and predict market turbulence. It was found to be the best fit for predicting rare earth price volatility and offers important insights into the stability of these crucial resources.
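
    The model, more commonly written GJR-GARCH(1,1), updates the conditional variance of returns each period and includes a "leverage" term that lets negative shocks raise volatility more than positive ones. The sketch below runs the recursion with invented parameter values, not the estimates from the paper.

```python
def gjr_garch_path(returns, omega, alpha, gamma, beta):
    """Conditional-variance recursion of GJR-GARCH(1,1):
        sigma2[t] = omega + (alpha + gamma * I[r[t-1] < 0]) * r[t-1]**2
                    + beta * sigma2[t-1]
    The gamma term raises volatility more after negative shocks."""
    # Start from the unconditional variance implied by the parameters.
    sigma2 = [omega / (1 - alpha - gamma / 2 - beta)]
    for t in range(1, len(returns)):
        shock = returns[t - 1]
        leverage = gamma if shock < 0 else 0.0
        sigma2.append(omega + (alpha + leverage) * shock ** 2
                      + beta * sigma2[-1])
    return sigma2

# Illustrative parameter values (assumed, not the paper's estimates).
params = dict(omega=1e-6, alpha=0.05, gamma=0.10, beta=0.85)
path = gjr_garch_path([0.01, -0.05, 0.002, 0.0, 0.0, 0.0], **params)
path_pos = gjr_garch_path([0.01, 0.05, 0.002, 0.0, 0.0, 0.0], **params)
# The -5% shock spikes the variance more than an equal +5% shock would,
# and with beta = 0.85 the spike decays only gradually.
```

    That slow decay is what persistence means in this setting: a large beta keeps past turbulence feeding into today's variance for many periods.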

    Auguste Mpacko Priso of Paris-Saclay University, France and the Open Knowledge Higher Institute (OKHI), Cameroon, with OKHI colleague S. Doumbia, explains that the rare earths are a group of 17 metals* with unique and useful chemical properties. They are essential to high-tech products and industry, particularly electric vehicle batteries and renewable energy infrastructure. They are also used in other electronic components, lasers, glass, magnetic materials, and as components of catalysts for a range of industrial processes. As the global transition to reduced-carbon and even zero-carbon technologies moves forward, there is an urgent need to understand the pricing of rare earth metals, as they are such an important part of the technology we need for that environmentally friendly future.

    The team compared the volatility of rare earth prices with that of other metals and stocks. Volatility, or the degree of price fluctuation, was found to be persistent in rare earths, meaning that prices tend to fluctuate continually over time rather than reaching a stable point quickly. For investors and manufacturers dependent on these metals, such constant volatility poses a substantial economic risk. As such, forecasting price changes might help to mitigate that risk, bringing greater stability and allowing investors to work in this area more secure in the returns they hope to see.

    Other models used in stock price prediction failed to model the volatility of the rare earth metals well, suggesting that this market has distinctive characteristics that affect prices differently from other more familiar commodities. Given that the demand and use of rare earth metals is set to surge, there is a need to understand their price volatility and to take this into account in green investments and development. It is worth noting that there is a major political component in this volatility, given that China and other nations with vast reserves of rare earth metal ores do not necessarily share the political views or purposes of the nations demanding these resources.

    Mpacko Priso, A. and Doumbia, S. (2024) 'Price and volatility of rare earths', Int. J. Global Energy Issues, Vol. 46, No. 5, pp.436–453.
    DOI: 10.1504/IJGEI.2024.140736

    *Rare earth metals: cerium, dysprosium, erbium, europium, gadolinium, holmium, lanthanum (sometimes considered a transition metal), lutetium, neodymium, praseodymium, promethium, samarium, scandium, terbium, thulium, ytterbium, yttrium

  • Container ports are important hubs in the global trade network. They have seen enormous growth in their roles over recent years and operational demands are always changing, especially as more sophisticated logistics systems emerge. A study in the International Journal of Shipping and Transport Logistics sheds new light on how the changes in this sector are affecting port efficiency, with a focus on the different types of container activity.

    Fernando González-Laxe of the University Institute of Maritime Studies, A Coruña University and Xose Luis Fernández and Pablo Coto-Millán of the Universidad de Cantabria, Santander, Spain, explain that container ports handle cargo packed in standardized shipping containers – the big metal boxes with which many people are familiar – commonly transported en masse on vast sea-going vessels, unloaded port-side, and loaded onto trains and road transporters for their onward journey. The increasing size of the ships used to transport these containers, some of which can carry up to 25,000 TEUs (twenty-foot equivalent units, i.e. containers), means there is increasing pressure on ports to increase their capacity. As such, there is a lot of ongoing effort to automate processes and optimize port operations to allow the big container ports to remain viable and competitive.

    The team used Data Envelopment Analysis (DEA) to evaluate the efficiency of container ports by comparing the input and output of their operations. They focused on ten major Spanish container ports – among them the major ports of Algeciras, Barcelona, and Valencia – in order to understand how various types of container activities – import/export, transshipment, and cabotage (coastal shipping) – influence port performance.

    One of the key findings from the study is the relationship between port efficiency and the types of container activities handled. The team found that there is an inverted U-shape relationship: ports that balanced transshipment (transferring containers between ships at intermediate points) with import/export activities tended to perform better than those that specialized in only one type of activity. This suggests that a diversified approach to container activities may enhance port efficiency.

    The work suggests that by adopting a balanced approach to their activities, container ports could boost efficiency and reinforce their role in the global supply chain.

    González-Laxe, F., Fernández, X.L. and Coto-Millán, P. (2024) 'Transhipment: when movement matters in port efficiency', Int. J. Shipping and Transport Logistics, Vol. 18, No. 4, pp.383–402.
    DOI: 10.1504/IJSTL.2024.140429

News

Dr. Shoulin Yin appointed as new Editor in Chief of International Journal of Intelligent Systems Design and Computing

Dr. Shoulin Yin from the Shenyang Normal University in China has been appointed to take over editorship of the International Journal of Intelligent Systems Design and Computing.

Prof. Rongbo Zhu appointed as new Editor in Chief of International Journal of Radio Frequency Identification Technology and Applications

Prof. Rongbo Zhu from Huazhong Agricultural University in China has been appointed to take over editorship of the International Journal of Radio Frequency Identification Technology and Applications.

Associate Prof. Debiao Meng appointed as new Editor in Chief of International Journal of Ocean Systems Management

Associate Prof. Debiao Meng from the University of Electronic Science and Technology of China has been appointed to take over editorship of the International Journal of Ocean Systems Management.

Prof. Yixiang Chen appointed as new Editor in Chief of International Journal of Big Data Intelligence

Prof. Yixiang Chen from East China Normal University has been appointed to take over editorship of the International Journal of Big Data Intelligence.

International Journal of Computational Systems Engineering is now an open access-only journal 

Inderscience's Editorial Office has announced that the International Journal of Computational Systems Engineering is now an Open Access-only journal. All accepted articles submitted from 15 August 2024 onwards will be Open Access and will require an article processing charge of USD 1600. Authors who submitted articles prior to 15 August 2024 will still have a choice of publishing as a standard or an Open Access article. More information on Open Access is available from Inderscience.