2026 Research news

A study in the International Journal of Business and Emerging Markets has examined the performance of small and medium-sized enterprises (SMEs) and identified several factors that determine whether they succeed in international markets. The study shifts attention away from the firms themselves and towards the consultants who advise them.

The research draws on the experiences of export consultants working within a Brazilian public support programme. Unlike individual firms or policymakers, these consultants observe multiple businesses across industries and over extended periods, which gives them a unique perspective. Their insights can show the patterns in how SMEs approach exporting and where they tend to encounter difficulties.

The work focuses on critical success factors, the essential areas that a business must manage effectively to achieve its objectives. In this framing, exporting is treated not as a single decision but as a process requiring different capabilities and conditions to work together for success.

Among the most prominent of these factors is accumulated knowledge of international markets. This means knowing what foreign customers want, which regulations apply, and how to deal with competition. Such knowledge is built over time and is linked to long-term commitment. Firms that treat exporting as a long-term strategic activity, rather than a short-term opportunity, are more likely to establish a stable presence abroad, the research suggests.

The team also found that a clear export strategy was a decisive factor. SMEs with structured planning regarding which markets to target, how products should be positioned, and how resources are allocated were generally more successful than those pursuing sporadic opportunities. In addition, management capability and product quality, as well as external factors, had an effect on success.

Critically, the work showed that no single factor alone guaranteed success. Rather, export performance depends on how well an SME coordinates all of these elements by taking a resource-based view.

Dorneles, C.P., Vieira, G.B.B., Lazzari, F., Salvador, C.K. and Ceballos-Ramírez, S.L. (2026) 'Critical success factors in exports: evidence from technical consultants in a Brazilian export support program', Int. J. Business and Emerging Markets, Vol. 18, No. 6, pp.1–28.
DOI: 10.1504/IJBEM.2026.152742

A review spanning a decade of the scientific literature has looked at the growing food waste crisis in which about a third of the food we produce is wasted. The work, published in the International Journal of Integrated Supply Management, has focused specifically on citrus crops grown across subtropical belts from Spain to Brazil to China and found that the waste is closer to half in this sector. The researchers suggest that we need a fundamental rethink on how food is grown and processed and how we can ensure that it reaches the people who need it.

The team used a systematic, quantitative approach to analyse 871 scientific papers published between 2010 and 2023. Of these, 111 met the criteria for examining sustainability in agricultural supply chains. Food supply chains account for about 70 per cent of all freshwater used by humans, consume nearly a third of the world's energy, and are the second biggest source of carbon emissions.

Citrus was chosen as the case focus because citrus is the most widely produced fruit crop and carries substantial environmental costs at every stage. Citrus fruits are highly perishable, which makes them particularly vulnerable to waste. The researchers point out, however, that citrus represents an opportunity in the form of the "pomace" waste generated when the fruit is juiced. This is the peel and pulp that remain after extraction and represents half the weight of the fruit.

The researchers suggest that pomace may have economic and environmental value. Until now it has been treated as waste or, at best, low-grade animal feed. But it might be converted through anaerobic digestion into biogas, for instance. It can also be composted or processed into a soil improver. It also has the potential to become the raw material for bioplastics. A more surprising application might be in its use as a bio-adsorbent in wastewater treatment to remove pollutants from water.

Supply chain management theory has not kept pace with this kind of circular development in the food industry as it has historically focused only on the flow of goods, information, and capital, rather than considering the biological nature of the materials in the supply chain. The researchers suggest that this needs to change if environmental and sustainability problems are to be addressed.

Alzubi, E., Kassem, A., Melkonyan-Gottschalk, A., Gruchmann, T. and Noche, B. (2026) 'Socio-technical transformations in citrus supply chains: a literature review based on bibliometric analysis', Int. J. Integrated Supply Management, Vol. 18, No. 6, pp.1–45.
DOI: 10.1504/IJISM.2026.152741

Information and communication technology (ICT) has reshaped our lives, how we live, how we work, how we entertain ourselves. That much is true, at least for the developed and developing world.

ICT refers to everything from smartphones and laptops to software and cloud-based platforms and increasingly to the so-called Internet of Things (IoT), smart devices in the workplace, our homes, and places of entertainment and recreation. ICT has enabled constant connectivity and more flexible working arrangements, fundamentally altering the structure of the modern workplace.

But that connectivity may have come at a cost. One of the problems with the ubiquitous nature of ICT in our lives is that many people now have no boundary between their professional obligations and their personal lives. ICT has put many people in 24/7 contact with their colleagues and their boss, and, in turn, they can connect to work-related information wherever and whenever they choose. Research in the International Journal of Electronic Finance has now examined the social and psychological consequences of digital work environments.

The study highlights a tension that has become familiar across many sectors. On one side, digital tools have improved efficiency and expanded flexibility. Remote working arrangements, such as telecommuting and telework, allow people to integrate professional tasks into periods that were previously unproductive. Time spent commuting or waiting in public spaces can now be repurposed for work, offering workers greater autonomy over their schedules.

Yet this same flexibility introduces new pressures. The expectation that employees remain reachable anytime, anywhere has led to the rise of so-called techno-stress. Techno-stress encompasses several experiences, such as diminished control over one's personal time, anxiety about keeping pace with technological change, and frustration when systems fail.

It is this latter issue that the study highlights. Systems failure is a particularly acute trigger of techno-stress. When the very tools on which people now rely malfunction, the inability to resolve the issue independently creates a sense of helplessness that can affect both emotional well-being and job performance. In such cases, technology becomes less an enabler of productivity and more a source of disruption.

While digital technologies are usually adopted with the expectation of improved productivity, this research suggests that they introduce hidden costs, particularly in the form of mental health challenges. These effects can accumulate at a societal level, influencing healthcare demands, workforce sustainability, and overall economic performance.

For employers and policymakers, there is, therefore, a need for a broader understanding of digital well-being. Measures to improve system reliability, provide training, and set clearer work-life boundaries are now needed across sectors.

Dhas, H.M., Ancy, R.J., Sreejith, S. and Rani, R.K. (2026) 'Technophobia and ICT device adaptability in financial services workers', Int. J. Electronic Finance, Vol. 15, No. 2, pp.170–188.
DOI: 10.1504/IJEF.2026.152734

As China's population ages at an unprecedented pace, research in the International Journal of Information and Communication Technology suggests that homes increasingly fail to meet the needs of older citizens. By 2050, almost one-third of China's population will be over 60, meaning the government and policymakers need to focus on safety, independence, and the quality of life for hundreds of millions of people.

The researchers propose a biologically informed approach to housing design. This would take into account the predictable physical, sensory, and cognitive changes associated with aging. Conventional residential designs often fail to accommodate the realities of physical and mental changes as people age. Small, cramped bathrooms, insufficiently separated functional areas, poor lighting, and excessive noise can combine to create environments that affect comfort and safety. According to the research, a more responsive design framework must consider not only structural changes but also daily behaviour and psychological needs.

The team offers a three-pronged strategy for adapting living spaces. The first part considers spatial layout and emphasises barrier-free access and the clear separation of dynamic zones, such as kitchens and corridors, from static areas like bedrooms and lounges, to improve accessibility and reduce the risk of falls. Secondly, furniture and facility design should be optimised for ergonomics, incorporating features such as adjustable seating, well-lit bathrooms, and sanitary fixtures suitable for those with reduced strength or flexibility. The third consideration is the integration of intelligent systems. This could include health-monitoring devices, environmental controls for lighting and temperature, and security technologies, all of which are meant to help older residents without overwhelming them with technology.

The team argues that such design improvements have benefits that extend beyond individual households. Age-adapted housing has the potential to improve public health, reduce medical and long-term care expenditures, and sustain social cohesion by promoting autonomy and dignity among the elderly.

Zhou, Y. and Fu, S. (2026) 'Upgrading path of aging friendly functional layout in residential spaces based on biology and computer software engineering', Int. J. Information and Communication Technology, Vol. 27, No. 28, pp.60–72.
DOI: 10.1504/IJICT.2026.152551

Peer-to-peer (P2P) lending, a form of finance that allows individuals and small businesses to borrow directly from each other through online platforms, has attracted growing academic and policy attention in recent years, especially as it reshapes traditional credit markets. An analysis in the International Journal of Accounting and Finance has looked at more than three decades of research in this area. The results suggest that while the field has expanded rapidly, there are many gaps in our understanding of P2P lending that could have implications for international financial systems.

The researchers examined more than 500 scholarly articles published between 1990 and 2023. The analysis charts how interest in P2P lending has changed as financial technology, or FinTech, itself has developed over that period. By removing conventional intermediaries such as banks, these platforms not only reduce costs and accelerate loan processing but also broaden access to credit. P2P lending now serves borrowers globally who lack access to conventional financial systems. This opens up opportunities for many previously disenfranchised parts of society worldwide.

There has been a marked increase in research into P2P lending in recent years. This suggests that it is growing in complexity and economic relevance. Most of the research focuses on loan default risk and on investor behaviour, looking at the psychological factors influencing financial decisions and trust on both sides.

The emphasis on trust is central to the P2P lending model. Unlike traditional banking, where institutions act as gatekeepers and risk assessors, P2P lending relies almost entirely on digital signals of reliability and user-generated information. There are, however, geographical imbalances in the research, with most of it having been conducted in Europe and the USA, despite rapid growth of P2P lending in emerging markets. This issue suggests that our current understanding may not fully explain how these platforms operate in different regulatory environments or cultural contexts, where financial behaviour and institutional trust can be very different.

The gaps in the research limit the ability of policymakers and practitioners to design effective frameworks. The absence of regulation can expose participants to fraud or default. Nevertheless, in emerging economies, where access to traditional banking is often limited, P2P lending has the potential to expand financial inclusion by offering credit to small businesses and individuals without established credit histories.

Ritika and Khanna, A. (2025) 'Unveiling the dynamics of peer-to-peer lending: a bibliometric analysis', Int. J. Accounting and Finance, Vol. 12, No. 3, pp.145–184.
DOI: 10.1504/IJAF.2025.152574

Research in the International Journal of Computational Systems Engineering introduces a hybrid recommendation model that could help with one of the common challenges facing universities offering online courses: how to recommend the most appropriate course for prospective students.

The approach combines Naive Bayes classification and collaborative filtering to improve the accuracy and personalisation of course suggestions. This, the researchers suggest, could ultimately enhance the learning experience for students.

Online course recommendation systems have long struggled with issues such as the "cold start" problem, data sparsity, and inadequate personalisation. The "cold start" problem occurs when a recommendation system lacks sufficient historical data about new users or courses, making it difficult to provide relevant suggestions. Data sparsity, on the other hand, refers to the limited amount of data available for each course, which can hinder the system's ability to capture students' preferences. Additionally, inadequate personalisation leads to generalised recommendations that may not match the unique needs of individual students, resulting in a less effective user experience.

The hybrid model discussed in IJCSE could resolve these issues. By using Naive Bayes classification, it can predict the likelihood that a particular course aligns with the interests of a given student based on course features. Collaborative filtering then examines patterns in student behaviour and identifies similar users to recommend courses based on what others with similar learning habits have chosen.

The system also adds a dynamic weight adjustment feature that adjusts the model's recommendations depending on whether a student is a new user or an experienced one. This mechanism improves the precision and diversity of the suggestions, ensuring that the system remains useful for all types of students.
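The paper does not spell out the weighting rule, but the general idea of shifting trust between the two components can be sketched in a few lines of Python. The helper name `blend_scores` and the pivot of 20 interactions are illustrative assumptions, not details from the study:

```python
def blend_scores(nb_score, cf_score, n_interactions, pivot=20):
    """Blend a content-based (Naive Bayes) score with a collaborative
    filtering score, shifting weight towards collaborative filtering as
    the student accumulates interaction history."""
    cf_weight = min(n_interactions / pivot, 1.0)
    return (1 - cf_weight) * nb_score + cf_weight * cf_score

# A brand-new user is scored almost entirely by course features,
# while an experienced user is scored mainly by peer behaviour.
new_user = blend_scores(0.8, 0.3, n_interactions=0)   # relies on Naive Bayes
veteran = blend_scores(0.8, 0.3, n_interactions=50)   # relies on filtering
```

In this sketch the weight moves linearly with history; a real system might tune the transition empirically.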

The team tested the system with data from 25,000 students and 1,000 courses. Compared to traditional methods, it demonstrated a 12% improvement in Precision@10 (the percentage of relevant courses within the top 10 recommendations) and a 10.5% improvement in Recall@10 (the proportion of all relevant courses that appear in the top 10 recommendations). Most notably, in cold start scenarios, the hybrid model significantly outperformed deep neural networks. Even with a data sparsity of 98%, the hybrid model's accuracy fell at half the rate of traditional algorithms.
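Precision@10 and Recall@10 are standard ranking metrics and can be computed directly; the course identifiers below are invented purely for illustration:

```python
def precision_recall_at_k(recommended, relevant, k=10):
    """Precision@k: share of the top-k recommendations that are relevant.
    Recall@k: share of all relevant items that appear in the top k."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: 4 of the top-10 suggested courses are relevant,
# out of 5 relevant courses in total.
recommended = ["c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10"]
relevant = {"c1", "c3", "c5", "c9", "c42"}
p, r = precision_recall_at_k(recommended, relevant)  # p = 0.4, r = 0.8
```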

Chen, Z. and He, M. (2026) 'Research on integrating naive Bayes and collaborative filtering into an online-course recommendation model for universities', Int. J. Computational Systems Engineering, Vol. 10, No. 6, pp.12–21.
DOI: 10.1504/IJCSYSE.2026.152654

A study of junior high schools in Indonesia has found that educational leadership influences how well they cultivate entrepreneurial skills in their students. Indeed, these skills can be improved by encouraging innovation from the top and by fostering collaborative environments in which students, teachers, and communities all work together to shape educational outcomes. The details are reported in the International Journal of Business Innovation and Research.

The research surveyed 350 schools and examined the relationship between entrepreneurial leadership and entrepreneurial performance. Entrepreneurial leadership refers to a style of management that prioritises vision, innovation, and the mobilisation of others. In schools, this translates into principals and senior staff who support experimentation in teaching, promote creative problem-solving, and encourage initiative among both students and educators.

Entrepreneurial performance, on the other hand, is defined more broadly than business creation. It includes the ability of a school to generate innovative activities, equip students with problem-solving and adaptive skills, and contribute to longer-term socio-economic objectives such as employability and resilience in changing labour markets.

The study's main finding is that leadership is not the sole driver of such outcomes in education. Rather, its effects are mediated by what researchers describe as value co-creation. This term derives from service management theory and refers to a process in which value is produced through interaction, rather than being delivered unilaterally by an organisation to passive recipients. In the educational context, this implies a shift away from viewing teaching as a one-way transfer of knowledge, towards a model in which students, teachers, school leaders, and other stakeholders work together to design appropriate learning experiences and solve problems.

In countries where entrepreneurship plays a significant role in economic development, schools are increasingly seen as a foundation for developing the entrepreneurial mindset in students. The research indicates that policy initiatives which focus solely on embedding entrepreneurship in the curriculum may not work as well as those that also improve and guide leadership practices and institutional culture.

Indira, S.S., Sasmoko S., Bandur, A. and Pradipto, Y.D. (2026) 'Business perspectives on value cocreation as a mediator for entrepreneurial performance in educational contexts', Int. J. Business Innovation and Research, Vol. 39, No. 8, pp.1–24.
DOI: 10.1504/IJBIR.2026.152515

Research in the International Journal of Business Information Systems suggests that the adoption of artificial intelligence (AI) is remarkably uneven across Italian firms. While some may have made a deliberate choice not to use AI, of the many that are planning to use it, some still lack the organisational structures needed to deploy the technology effectively.

This is one of the first systematic studies of AI adoption in Italy. It found that there are many early innovators eagerly integrating AI into their operations, but others are moving more cautiously and remain in the preliminary stages of exploration. This uneven uptake is seen elsewhere and reflects a broader international pattern, as businesses look for AI opportunities but struggle with the complexities of this rapidly evolving area of computing.

Despite the growing interest and investment in, specifically, generative AI, this research shows that many firms do not have a structured approach to the technology. The researchers propose an "AI Readiness Level" (AIRL) framework that could help organisations develop their AI strategy.

This notion of readiness is not just about technical capability; it also takes into account the quality of a company's data infrastructure, the availability of skilled personnel, leadership support, and external factors such as regulatory pressures or market competition. AIRL provides a model of the progressive stages of development, from initial awareness to full operational integration.

The team points out that firms that have adopted AI have reported improvements in operational efficiency, enhanced customer engagement, and more informed decision-making through predictive analytics. The research suggests that adopting AI is less a matter of installing new software than carrying out organisational transformation. Companies need to align their technological capabilities with workforce skills, management strategies, and governance structures, the authors explain. Those that fail to do so risk falling behind competitors that are already using this technology to their advantage.

Garlatti Costa, G., Pugliese, R. and Venier, F. (2026) 'Exploring artificial intelligence adoption among Italian firms: the AI readiness level', Int. J. Business Information Systems, Vol. 51, No. 7, pp.1–22.
DOI: 10.1504/IJBIS.2026.152513

Research in the International Journal of Environment and Pollution has looked at carbon-reduction strategies across supply chains. The findings suggest that uncertainty in consumer demand need not preclude environmental gains.

The team looked at a four-stage supply chain, encompassing suppliers, producers, retailers, and consumers. They used a structured economic model, the Stackelberg game, to examine the dominant "actor", in this case the manufacturer. The dominant actor makes the initial decisions, and the other players adjust their behaviour accordingly. Such a sequential decision-making framework models the way many industries function, where firms exert influence over pricing and production conditions downstream.
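A minimal numeric sketch can show how this leader-follower logic plays out. The linear demand curve, unit cost, and parameter values below are invented for illustration; they are not the study's four-stage model:

```python
# Two-stage Stackelberg sketch: a manufacturer (leader) sets the wholesale
# price w; a retailer (follower) then sets the retail price p.
# Demand is assumed linear, a - b*p, with manufacturer unit cost c.

def retailer_best_price(w, a=100.0, b=1.0):
    # Retailer maximises (p - w) * (a - b*p); the first-order condition gives:
    return (a + b * w) / (2 * b)

def manufacturer_best_wholesale(a=100.0, b=1.0, c=20.0):
    # The leader anticipates the follower's reaction and searches over w.
    best_w, best_profit = None, float("-inf")
    for w in [c + 0.1 * k for k in range(1, 1000)]:
        p = retailer_best_price(w, a, b)
        demand = max(a - b * p, 0.0)
        profit = (w - c) * demand
        if profit > best_profit:
            best_w, best_profit = w, profit
    return best_w

w = manufacturer_best_wholesale()  # analytic optimum is w = (a + c)/2 = 60
p = retailer_best_price(w)         # the retailer then responds with p = 80
```

The point of the sketch is the sequencing: the follower's reaction function is folded into the leader's decision, which is exactly the structure the study exploits.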

In contrast to other studies that have isolated individual parts of the supply chain, this latest study adopts a system-wide perspective. In it, retailers are not merely intermediaries but active participants shaping demand. Retailers influence consumer behaviour through pricing strategies and promotional efforts, such as emphasising low-carbon products or highlighting environmental credentials. This affects consumer decisions about the price of "greener" goods, which then feeds back into the incentives at the manufacturer level for reducing emissions and pollution earlier in production.

The central challenge in green manufacturing is demand uncertainty: firms need to predict how positively consumers will respond to greener, low-carbon products. This uncertainty complicates investment decisions. The research indicates that supply chain participants can still achieve what economists term Pareto improvements, where at least one party benefits without leaving others worse off, through coordinated adjustments in pricing, subsidies, and emission reduction efforts.

The results reveal a set of trade-offs. Subsidies aimed at boosting retail promotion tend to increase marketing efforts and allow retailers to charge higher prices, reflecting stronger consumer demand for environmentally friendly products. However, these same measures weaken the producers' incentives to invest in their own emission reductions and may lead to higher wholesale prices. The net effect is nevertheless emission reduction across the supply chain, suggesting that policies or strategies that appear inefficient at the manufacturer level may still deliver environmental benefits.

Shen, Q. and Hou, X. (2026) 'Carbon reduction coordination and pricing strategy of a four-level supply chain under demand uncertainty', Int. J. Environment and Pollution, Vol. 76, No. 5, pp.36–57.
DOI: 10.1504/IJEP.2026.152507

The rapid expansion of the Internet of Things (IoT) has changed how digital systems interact with the physical world. Millions, if not billions, of connected devices, from household appliances to industrial machinery, environmental sensors, medical diagnostic tools, and more, collect and exchange data with minimal human intervention.

This growing "network" has led to the automation of many mundane tasks as well as enormous improvements in efficiency across all these areas and beyond. However, researchers writing in the International Journal of Critical Infrastructures warn that the increasing complexity of the digital world brings with it vulnerabilities. This is perhaps of growing interest and concern as artificial intelligence is incorporated into the way in which IoT devices work.

The team explains that many IoT devices have limited computing resources, and so they are constrained in how well they can address security issues. As a result, many devices are easy targets and can, for instance, be recruited into so-called botnets, networks of infected machines used to mount larger attacks, such as Distributed Denial of Service (DDoS) attacks, on networks and infrastructure.

Addressing these problems is vital if critical IoT systems are to be protected in energy grids, medical environments, factories, and across so-called smart cities. The research focuses on anomaly detection as a powerful strategy for identifying potential threats and system failures. Unlike standard rule-based security systems that use predefined patterns of known threats, anomaly detection can use machine learning to identify patterns based on training data and algorithmic analysis rather than explicit programming.
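As a rough illustration of learning a baseline from data rather than hand-coding rules, the sketch below uses a simple statistical detector, far simpler than the machine-learning models the paper considers, to flag sensor readings that deviate sharply from recent history:

```python
import statistics

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag readings whose deviation from the recent baseline exceeds
    `threshold` standard deviations. The window and threshold values
    here are illustrative, not from the study."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1e-9  # guard flat baselines
        if abs(readings[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# A steady sensor signal with one injected spike at index 30.
readings = [20 + 0.1 * ((i % 5) - 2) for i in range(40)]
readings[30] = 35.0
flagged = detect_anomalies(readings)  # -> [30]
```

A production system would learn richer, multivariate patterns, but the principle is the same: the definition of "normal" comes from the data itself.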

As IoT technology spreads, anomaly detection in real time is an essential part of implementation and a requirement for maintaining system integrity. Failures or breaches in interconnected systems could have cascading effects, disrupting essential services and undermining public trust.

Ultimately, securing IoT networks through this kind of proactive monitoring is not just a technical necessity but a safeguard for infrastructure that depends on all those millions of devices.

Xu, J. (2026) 'Integrating IoT and machine learning for scalable anomaly detection in smart city infrastructure', Int. J. Critical Infrastructures, Vol. 22, No. 10, pp.1–16.
DOI: 10.1504/IJCIS.2026.152499

A new way for computers to recognise and translate complex place names is reported in the International Journal of Information and Communication Technology. The approach offers a roadmap to address a long-standing weakness in digital language systems used for mapping, navigation, and international communication.

Place names often carry historical, geographical, and cultural significance, and errors in translation can lead to confusion or loss of context. More accurate handling of such names could improve digital maps, navigation systems, logistics platforms, and multilingual communication tools.

The research focuses on English-derived place names, those created by adding prefixes, suffixes, or descriptive elements to existing names. While common in geographic data, these constructions are hard for automated systems to work with because they combine meaning and pronunciation in ways that do not transfer neatly across languages.

To address this, the researchers developed a computational model that integrates two complementary approaches: a knowledge graph and a phonetic generation algorithm. A knowledge graph is a structured representation of information that maps relationships between concepts, allowing the system to understand how place names are formed and how their components relate to one another. This captures the semantic dimension of language, its meaning and contextual associations.

The phonetic generation algorithm focuses on the sound of the spoken names. It converts written words into standardised representations of pronunciation, enabling the system to align how a place name is written with how it is spoken. This is particularly important in translation, where names often need to preserve recognisable sounds alongside meaning.
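The study's own phonetic generation algorithm is not reproduced here, but classic phonetic codes such as Soundex illustrate what a standardised representation of pronunciation looks like: similar-sounding names collapse to the same short code.

```python
def soundex(name):
    """Classic Soundex code: the first letter plus three digits, with
    similar-sounding consonants grouped together."""
    groups = {'bfpv': '1', 'cgjkqsxz': '2', 'dt': '3',
              'l': '4', 'mn': '5', 'r': '6'}
    codes = {ch: digit for letters, digit in groups.items() for ch in letters}
    name = name.lower()
    result = name[0].upper()
    prev = codes.get(name[0], '')
    for ch in name[1:]:
        if ch in 'hw':              # h and w are transparent separators
            continue
        code = codes.get(ch, '')
        if code and code != prev:   # skip runs of the same sound group
            result += code
        prev = code                 # vowels reset the previous code
    return (result + '000')[:4]     # pad or truncate to four characters

# "Robert" and "Rupert" sound alike and share the code R163.
```

Modern systems use far more sophisticated grapheme-to-phoneme models, but the goal is the same: aligning how a name is written with how it is spoken.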

These two elements interact using what the team refers to as a bidirectional dynamic interaction fusion mechanism. In this system, the semantic and phonetic information feed each other to improve recognition and translation. The system also uses a Long Short-Term Memory (LSTM) network, a type of neural network commonly used for language processing.

The model demonstrated an error rate of just 1.3 per cent in recognising place names and 0.8 per cent in translating them. Its outputs are more than 95 per cent fluent and consistent.

Ma, D. (2026) 'English-derived place name recognition and translation based on knowledge graph and phonetic generation algorithm', Int. J. Information and Communication Technology, Vol. 27, No. 27, pp.109–132.
DOI: 10.1504/IJICT.2026.152532

China is facing a rapidly ageing population, with almost a quarter of its people over 60, the standard retirement age in many regions. This coincides with a declining birth rate, and, given more flexible retirement policies, the workforce itself is getting older. Research in the International Journal of Economics and Business Research recognises that within this workforce, older, experienced knowledge workers are a growing human resource asset. Understanding their needs and ensuring they are not so disenfranchised that they retire as early as possible is now high on the organisational agenda and a critical part of modern management.

The research emphasises career capital, a concept that brings together human capital, social capital, and decision-making capital. Human capital refers to an individual's skills, knowledge, and experience. Social capital encompasses professional networks and relationships. Decision-making capital involves accumulated judgement and problem-solving abilities. The research found that these all contribute to ongoing professional effectiveness in the later stages of employment.

Two psychological factors specifically were identified as important in mediating the relationship between career capital and workplace success: self-efficacy and job crafting. Self-efficacy is an individual's belief in their abilities, while job crafting refers to the adjustments they make to tasks and work relationships to align with personal strengths and interests. The accumulation of skills, networks, and decision-making abilities is fully realised only when older employees feel capable and empowered to shape their roles.

In an effort to ensure older employees are not disenfranchised and continue to play an important role, the researchers suggest that the various dynamics at play need to be integrated into a new model of human resource management. This model should pay attention to different forms of career capital, activation of self-efficacy and adaptability, and flexible organisational support strategies tailored to age-specific needs. If such an approach is implemented, organisations will be able to sustain productivity, encourage innovation, and preserve the professional value of older knowledge workers.

Wei, J-l. and Chen, C-s. (2026) 'Exploring the impact of older knowledge workers' career capital on career success: with self-efficacy and job crafting as mediators and perceived organisational support as a moderator', Int. J. Economics and Business Research, Vol. 30, No. 1, pp.1–28.
DOI: 10.1504/IJEBR.2026.151764

Research in the International Journal of Computational Intelligence Studies has looked at how we might improve artificial intelligence (AI) systems for interpreting human emotion in written communication. The new system is capable of identifying sentiment not only in broad terms (positive, negative, and neutral) but also at a more detailed, aspect-specific level.

Sentiment analysis usually evaluates entire sentences or documents as a single unit. This can hide the subtleties of human expression. For instance, a restaurant review may praise the food while criticising the service. Previous AI models could struggle to separate these differing opinions, often assigning a generalised sentiment score. The new model overcomes this limitation by emphasising emotionally charged keywords, the words that carry the most significant emotional weight in a sentence. It does this using an attention network, a computational mechanism that allows AI to prioritise certain inputs over others.
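A toy sketch of the attention idea: invented per-word emotion scores are turned into weights via a softmax, and those weights determine which words dominate the aspect-level sentiment. The scores and sentiment values below are made up for illustration, and a real attention network would learn them from data:

```python
import math

def attention_weights(scores):
    """Softmax: turn raw keyword scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical emotion scores for each token in a review fragment;
# a higher score marks a more emotionally charged word.
tokens = ["the", "food", "was", "wonderful"]
scores = [0.1, 0.5, 0.1, 3.0]
weights = attention_weights(scores)

# Per-token sentiment in [-1, 1]; the weighted sum is the aspect score,
# dominated here by the highly weighted word "wonderful".
sentiments = [0.0, 0.2, 0.0, 0.9]
aspect_score = sum(w * s for w, s in zip(weights, sentiments))
```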

This focus on the most emotional terms in a piece of text allows the AI to classify sentiment directed at specific aspects of a text. In the restaurant example, the model can distinguish the positive sentiment aimed at the food from the negative sentiment about the service, producing a more nuanced interpretation. Moreover, the system's ability to pay attention to the most emotionally charged words is a useful advance in natural language processing.
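
The keyword-weighting idea can be illustrated with a toy sketch, assuming a hypothetical hand-built emotion lexicon in place of the learned attention weights the paper's model would use:

```python
import math

def softmax(scores):
    # Normalise raw scores into attention weights that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def aspect_sentiment(tokens, emotion_score, aspect):
    # emotion_score is a hypothetical lexicon mapping a word to a signed
    # emotional weight (positive > 0, negative < 0); unknown words get 0.
    raw = [abs(emotion_score.get(t, 0.0)) for t in tokens]
    weights = softmax(raw)
    # Attention-weighted sum: emotionally charged words dominate the verdict.
    polarity = sum(w * emotion_score.get(t, 0.0)
                   for w, t in zip(weights, tokens))
    if polarity > 0.05:
        label = "positive"
    elif polarity < -0.05:
        label = "negative"
    else:
        label = "neutral"
    return aspect, label, polarity

lexicon = {"delicious": 2.0, "rude": -2.0, "slow": -1.0}
food = aspect_sentiment("the food was delicious".split(), lexicon, "food")
service = aspect_sentiment("the service was rude and slow".split(),
                           lexicon, "service")
```

Splitting the review into aspect-specific clauses before scoring is what lets the same piece of text yield a positive verdict for the food and a negative one for the service.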

Such a tool could help businesses that rely on customer feedback, social media analysis, and online reviews. With it, a company could spot concerns being discussed online as they arise and respond in a timely way to manage its image and refine its marketing. It might even be able to offer targeted responses to individuals or groups to improve customer satisfaction and perception.

This research is part of a growing trend in AI research towards improving the way in which computers interpret language and emotion. By enabling machines to analyse sentiment at the level of individual aspects rather than entire texts, this approach contributes to the development of more perceptive, context-aware AI.

Yuan, Z. and Yuan, J. (2026) 'Aspect-level sentiment classification with emotional keywords attention network', Int. J. Computational Intelligence Studies, Vol. 13, No. 5, pp.1–13.
DOI: 10.1504/IJCISTUDIES.2026.152417

Irrespective of the ethics and the apocalyptic predictions, artificial intelligence (AI) has already become a central component of economic and institutional decision-making. Research in the International Journal of Intelligent Systems Design and Computing has gone beyond an industry-specific analysis of the state-of-the-AI-art and offers a detailed framework of how the many different AI tools are being adopted.

The main point that arises from the analysis is that while AI technologies are being used widely across sectors, organisations do not yet have a strategy that allows AI to be integrated in a way that balances innovation with accountability.

AI encompasses so-called machine learning for recognising patterns in data, natural language processing that can interpret and generate human language, and generative tools that produce text, images, video, computer code, and other output. All these tools are changing many sectors, from healthcare diagnostics to industrial and financial data processing to the production of hit pop songs and accompanying videos.

Education and business operations are undergoing similar shifts. Adaptive learning platforms in education adjust course material to suit the way individual students learn. In retail and logistics, AI is being used to refine supply chains, manage inventory, and personalise the customer "experience". Even in the world of law, law enforcement is using AI to assess crime scenes and weigh evidence, while judges are using these tools to summarise their concluding remarks from massive briefs.

One of the most pressing issues highlighted by the research is data privacy, as AI systems depend on large volumes of often sensitive and personal information. In addition, there is the notion of algorithmic transparency, wherein we are losing the ability to understand how a given AI system arrives at a specific decision. Indeed, many of the most advanced AI models now work essentially as black boxes, meaning their internal processes simply cannot be interpreted…perhaps without resorting to another AI to do the interpretation! Such a lack of transparency might undermine trust in high-stakes contexts such as medical diagnoses or judicial decisions.

To address the issues, the researchers propose a framework based on stakeholder theory, which emphasises the importance of all parties affected by the decisions AI might make. In the business context, they stress that organisations should not focus solely on efficiency or profit; they must adopt a perspective that allows them to weigh the interests of employees, customers, regulators, and society at large when adopting AI. This, of course, might only come about with governance, regulation, and ethical obligations.

Idemudia, E.C. (2025) 'Artificial intelligence's effect and influence on multiple disciplines and sectors', Int. J. Intelligent Systems Design and Computing, Vol. 3, Nos. 3/4, pp.254–274.
DOI: 10.1504/IJISDC.2025.152183

A study of more than 500 employees in the fast-moving consumer goods sector has demonstrated how employers might mitigate social undermining in the workplace. Social undermining is a pattern of behaviour in which colleagues or supervisors hinder an individual's performance or professional relationships. This might include withholding critical information, spreading rumours, or criticising colleagues in a public setting. Unlike overt harassment, such actions are often subtle and cumulative, gradually weakening an employee's capacity to function effectively within a team.

Social undermining leads to stress, anxiety, and burnout. Such problems are not only detrimental to the employee being targeted but are also linked to reduced productivity and higher staff turnover within an organisation.

The research looks at self-efficacy, an individual's belief in their own abilities. The team found that self-efficacy acts as a psychological buffer so that those who have greater self-efficacy are less likely to succumb to the effects of social undermining. The work also found that hostility from supervisors had a more pronounced emotional impact than similar actions by peers, but strong self-efficacy could buffer targeted individuals even more effectively in such situations.

Fundamentally, employees with greater confidence in their abilities were more likely to interpret negativity from supervisors as a challenge to be managed rather than as evidence of personal failure. This personal reframing of issues reduces the psychological toll of that kind of interaction for those individuals.

In contrast, negativity from peers affects social standing and workplace relationships, making it more difficult for even those with the greatest level of self-efficacy to cope with such issues. In these cases, the harm is less about task performance and more about belonging and reputation within a group.

The findings suggest that employers might address toxic behaviour in the workplace by strengthening how well their employees can cope, given that some degree of interpersonal conflict is inevitable in any organisation and cannot always be stopped directly. By promoting the development of personal resources and self-efficacy, employers may have a more practical way to intervene without recourse to disciplinary approaches.

Tosun, B., Güner Kibaroglu, G. and Basim, H.N. (2026) 'Self-efficacy as the saviour: defending psychological well-being against the destructive power of social undermining', Middle East J. Management, Vol. 13, No. 2, pp.137–159.
DOI: 10.1504/MEJM.2026.152269

A new fire detection system designed for lithium battery energy storage facilities, described in the International Journal of Environmental Technology and Management, could improve safety in the renewable energy sector.

Electricity generation that uses intermittent energy sources, such as wind and solar, relies on large-scale rechargeable batteries for storage. Unfortunately, a phenomenon known as thermal runaway is a well-known issue with lithium batteries. It refers to the feedback loop that occurs when battery temperature rises, triggering chemical reactions that generate further heat, which in turn accelerates those reactions. Thermal runaway can lead to catastrophic fire or explosion, causing damage to infrastructure and releasing hazardous substances, including toxic gases and heavy metals, into the surrounding environment.

The new approach discussed in IJETM addresses the risk through a more responsive and reliable method of fire detection. It uses a combination of sensors to monitor key indicators of potential failure, including temperature changes, smoke levels and the presence of carbon monoxide.

Rather than relying on any single measurement, the system integrates these multiple data streams using a mathematical approach known as Dempster–Shafer evidence theory. The framework can work with uncertain or incomplete information from different sources and so make reliable judgements on whether the system is stable or on the verge of catastrophic failure. In so doing, it reduces the number of false alarms and improves detection of genuine fire risk. The processing unit analyses the data in real time and can trigger an alarm and response within two seconds with over 95 per cent accuracy. Both response time and accuracy improve on earlier systems.
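
As a sketch of how Dempster–Shafer combination works, the following toy example fuses belief masses from two sensors over the frame {fire, safe}; the mass values are invented for illustration and are not taken from the paper:

```python
def combine(m1, m2):
    # Dempster's rule of combination over the frame {"fire", "safe"},
    # with "either" standing for the whole frame (uncommitted belief).
    hypotheses = ("fire", "safe", "either")
    fused = {h: 0.0 for h in hypotheses}
    conflict = 0.0
    for a in hypotheses:
        for b in hypotheses:
            mass = m1[a] * m2[b]
            if a == b:
                fused[a] += mass
            elif "either" in (a, b):
                # Intersecting with the whole frame keeps the specific one.
                fused[a if b == "either" else b] += mass
            else:
                conflict += mass  # fire vs safe: contradictory evidence
    k = 1.0 - conflict            # normalise out the conflicting mass
    return {h: v / k for h, v in fused.items()}

# Hypothetical sensor readings converted to belief masses:
temperature = {"fire": 0.6, "safe": 0.1, "either": 0.3}
co_sensor = {"fire": 0.7, "safe": 0.1, "either": 0.2}
fused = combine(temperature, co_sensor)
```

Because the two moderately confident sensors agree, the fused belief in "fire" ends up higher than either sensor's alone, which is exactly the behaviour that suppresses single-sensor false alarms.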

The same multi-factorial approach might be used in other sectors that rely on interconnected, sensor-driven technologies, including industrial safety monitoring, transportation networks, and urban infrastructure, where early detection of anomalies can prevent accidents and improve efficiency.

Deng, D.L. and Du, X.C. (2025) 'Fire warning of lithium battery energy storage power stations for environmental sustainable development', Int. J. Environmental Technology and Management, Vol. 28, Nos. 4/5/6, pp.355–366.
DOI: 10.1504/IJETM.2025.148986

A new study of the vast Guangxi Beibu Gulf Marine Region (GBGMR) in southern China takes a close look at how environmental limits are being stretched by economic growth. It highlights the disparities between provinces and asks how more effective environmental policies might be put in place across different parts of the region.

The GBGMR is an important coastal zone spanning several provinces. It lies along southern China's coast on the Beibu Gulf near the border with Vietnam. It acts as an ecological barrier stabilising environmental conditions as well as supporting fisheries, water supply, and industry. The GBGMR encompasses an incredibly varied geography and an uneven distribution of natural resources, both of which make it especially vulnerable to pressures from human activity.

Research in the International Journal of Global Energy Issues has used an assessment of the region's Ecological Carrying Capacity (ECC) to show that while the GBGMR currently operates within what we might call environmental limits, the buffer is steadily shrinking. ECC is a measure of an ecosystem's ability to support human activity without causing long-term damage to the natural environment. In their study, the team combined two indicators of impact: carbon footprint and water footprint.

Their analysis shows clear variation across regions in the GBGMR and over time. Provinces that depend on energy-intensive industries, such as coal and chemicals, face much higher ecological stress whereas areas that have diversified are more resilient and can maintain a better balance between growth and environmental limits.

The findings could help guide policymakers so that locally pertinent regulations are put in place instead of blanket measures. The team suggests that regions with high emissions should accelerate the move to sustainable energy, while water-scarce areas should prioritise conservation and move away from water-intensive industries.

Song, H., Wang, X., Zhao, J., Yuan, S. and Yu, J. (2026) 'Marine ecological governance and green development in Beibu Gulf of Guangxi under the digital context', Int. J. Global Energy Issues, Vol. 48, No. 7, pp.1–20.
DOI: 10.1504/IJGEI.2026.152134

Research in the Electronic Government, an International Journal discusses the growing need for protecting one's personal financial data as the online world faces increasingly sophisticated cyber threats. The researchers argue that no single measure is sufficient to secure the modern financial ecosystem. As such, they set out a framework that combines technological tools, regulatory oversight, and individual responsibility to combat the problem.

There are three foundational principles in online financial security: confidentiality, integrity, and availability. Confidentiality is about making sure that sensitive information, such as account details and biometrics, is accessible only to authorised users. Integrity involves maintaining the accuracy and reliability of data and blocking unauthorised changes. Availability ensures that legitimate users can reliably access their financial information and services whenever they need to.

The researchers explain that a breakdown in any one of these areas can lead to personal financial loss, reputational harm for institutions, and more broadly, an erosion of trust in digital services.

Phishing, in which attackers pose as legitimate entities to extract sensitive information via a rogue email or website, is the most common digital fraud. Malware, software designed to infiltrate or damage systems, is a close second and continues to evolve to evade antivirus systems and get around firewalls. Insider threats, involving individuals within organisations misusing access, add another layer of risk. Then there are institutional, industrial-scale breaches where data is sold to malicious third parties on the dark web.

Financial institutions operate within stringent regulatory systems to reduce the risks, but even with protections in place, such as data regulation laws, encryption, multi-factor authentication, and routine security audits, vulnerabilities still exist.

All the protection in the world cannot save users from themselves, though. Even the least naïve digital native can succumb to social engineering or the sleekest of phishing attacks. The researchers suggest that user education is key. Users need to learn to avoid weak or reused passwords, to recognise phishing attempts, and to be consistent in their online practices so that they are not caught out.

Kumari, A. (2026) 'Personal data protection in the age of digital financial systems', Electronic Government, Vol. 22, No. 2, pp.220–240.
DOI: 10.1504/EG.2026.151989

Urban congestion is a major problem. It leads to commuter delays and economic inefficiency. More tragically, it contributes to the roughly one million road deaths that occur annually worldwide. Research in the International Journal of Reasoning-based Intelligent Systems shows how artificial intelligence (AI) might carry out real-time traffic forecasting and so provide a way for the authorities to manage our road networks better.

Road vehicles do not behave as isolated entities; traffic flow is a dynamic system in which there are no truly independent events at individual locations, only conditions that ebb and flow over time. The researchers describe this phenomenon as spatiotemporal dependency: events at one point at a given time influence conditions elsewhere on the network. For example, a slowdown on a motorway might trigger congestion further down the route, or in areas fed by the motorway, some time later.

The researchers explain that capturing these delayed and distributed effects has long proved difficult for conventional forecasting models, as existing systems rely on simplified assumptions or short-term data patterns. The new approach uses a hybrid deep learning system known as STG-Former. This brings together two computational approaches: graph neural networks and transformer models. A graph neural network represents the road system as a network of connections, so the model can learn about traffic conditions across an area. The transformer component uses an attention mechanism to identify the most relevant information at any given time and can thus detect patterns as they change through time.
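
A highly simplified numerical sketch of the two ingredients, assuming a toy four-sensor road network and plain matrix operations standing in for trained network layers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy road network: four sensor sites in a line; the adjacency matrix
# records which sites feed traffic into which (including themselves).
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
A /= A.sum(axis=1, keepdims=True)       # row-normalise for averaging

# Observed speeds at each site over six time steps (sites x time).
speeds = rng.uniform(30, 70, size=(4, 6))

# "Graph" step: mix each site's reading with its neighbours', a stand-in
# for one round of graph-neural-network message passing.
spatial = A @ speeds

# "Transformer" step: attention over time, here a softmax over the
# similarity between the latest step and every step in the window.
query = spatial[:, -1:]                 # most recent conditions
scores = (spatial * query).sum(axis=0)
scores = scores / scores.std()          # crude temperature scaling
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# Forecast: attention-weighted blend of the spatially mixed history.
forecast = spatial @ weights
```

The point of the sketch is the division of labour: the adjacency step spreads information across space, while the attention step decides which moments in the recent past matter most for the prediction.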

Tests with this new system on standard traffic datasets show the model is much more accurate in its predictions than even the leading rivals and works well during periods of peak congestion when those other models often fail. The improvement is significant in the context of urban congestion, where even a small improvement in predictions can help traffic management improve its operational decisions and so avoid gridlock or major stalls in the flow of traffic.

Cheng, H., Cao, Y. and Li, W. (2026) 'Transformer-GNN hybrid architecture for optimising real-time traffic forecasting on highways', Int. J. Reasoning-based Intelligent Systems, Vol. 18, No. 9, pp.38–50.
DOI: 10.1504/IJRIS.2026.152190

Modern energy infrastructure is increasingly defined as cyber-physical systems where physical power distribution and digital communication are closely tied together. While this digitalisation boosts efficiency, it exposes electricity grids to sophisticated cybersecurity risks. To combat such threats, researchers have developed an artificial intelligence (AI) method that integrates network structure analysis with data tracking to identify complex attacks that conventional security systems might miss. Details are reported in the International Journal of Global Energy Issues.

Energy infrastructure is vulnerable to Advanced Persistent Threats (APTs). Unlike localised glitches, APTs involve long-term infiltration where attackers quietly gather data or manipulate operational signals. A major problem is the False Data Injection (FDI) attack, where sensor measurements are altered to feed operators misleading information. Such changes can cause catastrophic errors in energy flow and paralyse power supplies across entire regions. These vulnerabilities already manifest as ransomware attacks, and increasingly they carry the risks associated with international conflict.

Detecting these incursions is difficult because malicious commands often mimic routine operational activity. Legacy detection systems use "signatures", predefined rules based on known past threats. Such an approach is generally ineffectual in the face of new, "zero-day" exploits or attacks that otherwise do not match existing patterns.

The new AI approach uses two distinct types of information to identify an ongoing attack: structural information (the physical and digital layout of devices and control centres) and temporal information (the chronological sequence of commands and signals). The dual-layered deep learning architecture is based on a Graph Neural Network (GNN) that maps the system's spatial layout and a Transformer model that analyses data sequences over time. The former gives the AI a picture of the physical aspects of the infrastructure, while the latter captures how activity changes over time. Such a spatiotemporal AI detection system can identify coordinated, multi-stage attacks that appear harmless when viewed as isolated events.
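
The multi-stage detection idea can be caricatured in a few lines, assuming hypothetical per-event anomaly scores from a structural detector and a temporal detector; the scores and thresholds are invented for illustration:

```python
# Each event carries two hypothetical anomaly scores: "structural"
# (does the command fit the device's place in the grid topology?) and
# "temporal" (does it fit the recent sequence of commands?).
events = [
    {"id": "e1", "structural": 0.35, "temporal": 0.30},
    {"id": "e2", "structural": 0.55, "temporal": 0.60},
    {"id": "e3", "structural": 0.60, "temporal": 0.65},
]

def fused_score(event):
    # Noisy-OR fusion: suspicious if either view is suspicious,
    # markedly more so when both agree.
    return 1.0 - (1.0 - event["structural"]) * (1.0 - event["temporal"])

def campaign_alert(events, single_threshold=0.7, campaign_threshold=0.6):
    # No single event trips a per-detector alarm on its own...
    lone = any(max(e["structural"], e["temporal"]) >= single_threshold
               for e in events)
    # ...but the fused scores averaged over the window reveal a campaign.
    mean_fused = sum(fused_score(e) for e in events) / len(events)
    return (not lone) and mean_fused >= campaign_threshold
```

Here every individual event sits below the 0.7 single-detector threshold, yet `campaign_alert(events)` returns `True`: fusing the two views over a window exposes the coordinated pattern that each event hides in isolation.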

Testing with standard cybersecurity datasets showed the new AI model to have an accuracy of more than 93 per cent. Critically, it identifies suspicious activity within two seconds of its onset. This offers a viable route to near-real-time protection of power infrastructure, the research suggests.

Dai, Y., Lu, J., Li, Z., Li, J. and Rafieipour, M. (2026) 'Network security threat identification based on GNN-transformer fusion model in energy cyber systems', Int. J. Global Energy Issues, Vol. 48, No. 7, pp.64–84.
DOI: 10.1504/IJGEI.2026.152150

Digital technologies have over the last few decades reshaped how we consume music, films, and live performances. Consumers can access content with the click of a mouse or the tapping of an icon, and while there are countless legitimate sources for that content, there are perhaps just as many illegal sources, so-called pirate sites.

Content piracy is nothing new. Back in the days before recorded music, when many people had musical instruments, such as pianos, in their homes or access to them in pubs and other venues, printed sheet music was the equivalent of a recording of a song. You could recreate a song in your own home for pennies, or less if you could get hold of a pirated copy of the sheet music. Today, the world is a very different place, but the principles are the same. People want to hear music, and many of them don't want to pay much, if anything, for that privilege.

Research in the International Journal of Intellectual Property Management has looked at piracy in the age of online live streaming. The work shows that copyright systems are struggling to keep pace with technological change, particularly in fast-growing digital markets such as India. The study focuses on the legal and technological obstacles confronting regulators as new forms of piracy proliferate.

Digital piracy refers to the unauthorised copying, distribution or use of copyrighted works. Copyright itself is a form of intellectual property protection covering creative output such as music, books, films, sculpture, artworks, artistic performances, and even light shows. Unlike patents and trademarks, copyright largely operates as what legal scholars call a negative right. This means that rights holders cannot compel others to use their work but can prevent others from reproducing or distributing it without permission or payment.

Copyright is meant to encourage creators to produce new work by protecting their economic interests, while at the same time allowing the public to gain reasonable access to knowledge and culture. That balance becomes more difficult in a digital environment where copying and distribution occur almost instantaneously across global networks. Online streaming services and user-generated content platforms are on the front line of the copyright conflict. Platforms support legitimate creative activity but also make it easy for users to distribute unauthorised material.

To counter copyright theft, rightsholders often use Digital Rights Management, or DRM. DRM refers to technological systems designed to control how digital content can be used. These protections may limit copying, restrict access to authorised devices only, or require authentication through a paid account. However, DRM systems are increasingly vulnerable to circumvention by pirates who develop their own tools to counter them. The newest programs, sometimes enhanced or even developed with generative artificial intelligence (GenAI), can break or evade DRM protection with relative ease.

International agreements, such as the Rome Convention and treaties administered by the World Intellectual Property Organization, establish common standards that recognise performers' rights and require member states to extend equivalent protection to foreign creators. But with piracy techniques becoming ever more sophisticated, these legal mechanisms lag far behind.

Nath, A. and Chakravarty, G. (2026) 'Evolving copyright paradigms in the age of live streaming in music and video piracies', Int. J. Intellectual Property Management, Vol. 16, No. 1, pp.28–44.
DOI: 10.1504/IJIPM.2026.152063

Rapid urbanisation is reshaping cities across the globe, and this is having a detrimental effect on many green spaces, such as parks, urban forests, green corridors, and landscaped public areas. Ultimately, these changes represent a loss of ecological and social benefits: green spaces help moderate temperatures, improve air quality, manage stormwater, support biodiversity, and contribute to the wellbeing of city dwellers.

Of course, as people head for the cities, housing, infrastructure, and commercial development must change to accommodate their needs. Understanding how urbanisation and the loss of green spaces affect a city's sustainability is high on the agenda for urban planners and environmental scientists.

A study in the International Journal of Environment and Sustainable Development has looked at one of the limitations of earlier research: the reliance on a static assessment of those urban green spaces. Conventional approaches capture conditions at a single moment in time or compare only a few snapshots, and this does not reflect the complex and dynamic nature of urban landscapes. In reality, green spaces expand, contract, and shift unevenly across neighbourhoods and time periods. This makes it difficult to home in on the causes and consequences of change.

To tackle this problem, the researchers have turned to advanced spatiotemporal analytical methods. Spatiotemporal refers to the combined study of where and when changes occur. An algorithm then detects clusters within the complex shifting datasets and identifies hotspots where green space coverage changed significantly and areas where landscapes became increasingly fragmented.

The team then used a second layer of analysis to understand the underlying causes. They used a geographically and temporally weighted regression model, which considers how population growth, development intensity, land-use policy, and other factors vary across locations and over time. Their approach could then link changes in landscape structure directly to the degradation of the ecological "services" provided by those urban green spaces and point to how urban planning might be used to remedy the problem by countering the losses.
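
The core of a geographically and temporally weighted regression is ordinary least squares with kernel weights that fall off with distance in space and time. A minimal sketch on synthetic data follows; the variable names, bandwidths, and coefficients are illustrative, not the study's:

```python
import numpy as np

def gtwr_fit(X, y, coords, times, target_xy, target_t,
             h_space=1.0, h_time=1.0):
    # Weight every observation by its space-time closeness to the target
    # location and date (Gaussian kernels with bandwidths h_space, h_time).
    d2_space = ((coords - target_xy) ** 2).sum(axis=1)
    d2_time = (times - target_t) ** 2
    w = (np.exp(-d2_space / (2 * h_space ** 2))
         * np.exp(-d2_time / (2 * h_time ** 2)))
    # Weighted least squares: beta = (X' W X)^-1 X' W y
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, X.T @ (w * y))

# Synthetic study area: green-space loss driven by development intensity,
# with the effect twice as steep in the eastern half (x > 5).
rng = np.random.default_rng(1)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))
times = rng.uniform(0, 5, size=n)
develop = rng.uniform(0, 1, size=n)
slope = np.where(coords[:, 0] > 5, -2.0, -1.0)  # spatially varying effect
loss = 3.0 + slope * develop + rng.normal(0, 0.05, size=n)

X = np.column_stack([np.ones(n), develop])
beta_west = gtwr_fit(X, loss, coords, times, np.array([2.0, 5.0]), 2.5)
beta_east = gtwr_fit(X, loss, coords, times, np.array([8.0, 5.0]), 2.5)
```

Fitting the same model at two target locations recovers a roughly twice-as-steep effect of development intensity in the east, the kind of spatially varying relationship that a single global regression would average away.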

Ouyang, L., He, Y., Chen, Z. and He, K. (2026) 'Dynamic monitoring and evolution of urban green space landscape sustainability based on spatiotemporal analysis algorithm', Int. J. Environment and Sustainable Development, Vol. 25, No. 5, pp.3–23.
DOI: 10.1504/IJESD.2026.151846

Efforts to increase gender diversity on corporate boards have often been justified on grounds of fairness and representation. Research in the International Journal of Corporate Governance suggests that the presence of women in supervisory roles may also shape how companies are run, influencing both who becomes a top executive and how closely senior leaders are monitored.

The study examined publicly listed German companies during a period when political pressure to increase female representation in corporate leadership was high on the agenda. Germany formally introduced a gender quota in 2016 requiring large listed companies to ensure that at least 30 per cent of supervisory board members are women. This led to a significant increase in female representation on supervisory boards by the end of the decade. Yet, say the researchers, women were not as well represented on management boards. A mere one per cent of executive roles were held by women in the mid-2000s, and that figure had only risen to about 10 per cent by 2019.

To understand how female representation influences corporate leadership, the study analysed almost 100 publicly listed firms subject to codetermination rules. It focused on two outcomes: the composition of management boards, particularly the presence of female executives, and executive turnover, the rate at which top leaders leave their positions. The analysis covered chief executive officers (CEOs), chief financial officers (CFOs), and chief human resources officers (CHROs).

The study used a statistical method known as an instrumental variable approach to address a common difficulty in this kind of research: endogeneity. Endogeneity arises when cause and effect are intertwined. For example, firms that are already committed to diversity may appoint more female supervisors and promote more women to executive roles, making it difficult to determine whether one caused the other. By using earlier levels of female representation as a statistical instrument, the analysis could isolate the causal impact of women serving on supervisory boards.
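
The logic of the instrumental variable approach can be sketched with synthetic data, assuming (purely for illustration) a hidden "diversity culture" confounder and using plain two-stage least squares:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# An unobserved "diversity culture" drives BOTH the supervisory-board
# female share and executive outcomes, which creates endogeneity.
culture = rng.normal(size=n)
instrument = rng.normal(size=n)   # stands in for earlier female share
female_share = 0.8 * instrument + 0.6 * culture + rng.normal(size=n)
true_effect = 0.5
outcome = true_effect * female_share + 1.0 * culture + rng.normal(size=n)

def ols_slope(x, y):
    # Simple one-regressor ordinary least squares slope.
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Naive OLS is biased upward: it absorbs the shared "culture" factor.
naive = ols_slope(female_share, outcome)

# Two-stage least squares: stage 1 predicts the regressor from the
# instrument alone; stage 2 regresses the outcome on that prediction.
stage1 = ols_slope(instrument, female_share)
fitted = stage1 * instrument
iv_estimate = ols_slope(fitted, outcome)
```

The naive slope overstates the effect because it absorbs the confounder, while the two-stage estimate lands close to the true value of 0.5: the instrument shifts the regressor without touching the confounder.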

The results suggest that the influence of female supervisors depends less on their overall numbers than on where they sit within the governance structure. Women serving as shareholder representatives on the remuneration and personnel committee significantly increase the proportion of women on management boards. Because this committee prepares decisions about executive appointments, membership provides direct influence over who joins the leadership team.

This pattern is consistent with a concept in social science known as the similarity attraction paradigm, like attracts like, if you will. The theory holds that individuals often favour colleagues who resemble themselves, whether in background, experience or identity. Applied to corporate boards, it suggests that female supervisors may be more likely to support female candidates for executive roles, particularly when they have direct authority over appointments.

Carow, J. (2026) 'The effect of female supervisors on the structure and dynamics of the management board', Int. J. Corporate Governance, Vol. 16, No. 1, pp.1–37.
DOI: 10.1504/IJCG.2026.152092

Research in the International Journal of Corporate Governance suggests that the makeup of corporate boards can affect how companies approach sustainability, particularly in emerging economies where governance systems are still developing.

The study is based on observations amounting to almost 20,000 firm-years across 25 emerging markets. In empirical business research, a firm-year is a single observation representing one company's data for one year. Thus, 20,000 firm-years consists of data collected for many companies over several years, where each company contributes one observation for each year it appears in the data.
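
The firm-year bookkeeping is simple enough to show in a couple of lines, with invented company names:

```python
# Hypothetical panel: each (firm, year) pair is one firm-year observation.
records = [
    ("AlphaCo", 2018), ("AlphaCo", 2019), ("AlphaCo", 2020),
    ("BetaLtd", 2019), ("BetaLtd", 2020),
]

firm_years = len(records)                   # 5 firm-year observations
firms = len({firm for firm, _ in records})  # drawn from just 2 companies
```

On the same logic, 20,000 firm-years might come from, say, 2,000 companies observed over roughly a decade each.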

The work shows that companies with more women on their boards tend to have better environmental, social, and governance (ESG) performance. The work also questions the received wisdom of governance that increasing the number of independent directors strengthens corporate responsibility.

Sustainability performance refers to how companies manage ESG issues. Environmental factors include carbon emissions, pollution, and resource use. Social factors relate to employee welfare, diversity, and community engagement. Governance concerns how firms are directed and controlled, including leadership accountability and board oversight. These various factors can be scored together to give investors and regulators a single metric with which they can assess long-term corporate risk and resilience.

A key feature of the current study is that it distinguishes between female executive directors, who hold senior management positions and influence operational decisions, and non-executive directors, who provide oversight and strategic guidance but are not involved in the daily management of the company.

The work shows that the presence of women in both types of role is associated with better ESG scores. The researchers suggest that gender diversity broadens perspectives in boardroom decision-making and encourages a focus on long-term risks and stakeholder concerns.

The analysis also identifies an unexpected pattern regarding board independence. Independent directors—board members who are not part of company management—are widely viewed as essential for objective oversight. However, the study finds that a higher proportion of independent directors is linked to lower sustainability scores in the sampled emerging markets.

Elbayoumi, A.F., Elmoursy, H., Eljilany, S.M., Bouaddi, M. and Basuony, M.A.K. (2026) 'Females on board and sustainability performance: evidence from the emerging markets', Int. J. Corporate Governance, Vol. 16, No. 1, pp.67–89.
DOI: 10.1504/IJCG.2026.152096

A study in the International Journal of Intellectual Property Management suggests that planned obsolescence drives software innovation but can also lead to customer lethargy or, worse, piracy.

The research has looked at the software upgrade cycle and highlights the complex role of planned obsolescence in shaping user behaviour across both legitimate and pirate markets. Planned obsolescence, in the context of software, involves discontinuing updates and technical support for older versions to encourage users to adopt newer releases. While often criticised as a tactic to extract additional revenue, the study notes that this strategy reflects practical considerations in software development. Companies continually invest in new features, security improvements, and interface enhancements, and revenue from upgrades sustains ongoing innovation.

However, the research, which focuses primarily on personal computer operating systems (OS), suggests that when companies end support for older versions of their software, this influences not only consumer choice but also broader patterns of technology adoption.

The team has analysed how users respond to these transitions using a push-pull-mooring (PPM) model. This framework was originally developed to study geographic relocation, but it can couch OS upgrades in terms of push, pull, and mooring factors. Push factors are the drawbacks of remaining with outdated software, such as vulnerability to security breaches or incompatibility with modern applications and hardware. Pull factors represent the advantages of upgrading, including enhanced functionality and a better user experience. Mooring factors, by contrast, are the costs or attachments that inhibit switching, such as financial expense, learning curves, or habit.
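As a rough illustration of how a PPM-style model combines these forces, one might score each factor and compute a net intention, with push and pull raising it and mooring holding it down. The function, weights, and values below are hypothetical, a sketch of the idea rather than the study's actual model:

```python
# Illustrative sketch of a push-pull-mooring (PPM) style score.
# Weights and factor values are hypothetical, not taken from the study.

def upgrade_intention(push, pull, mooring, w_push=1.0, w_pull=1.0, w_moor=1.0):
    """Combine factor scores (0-1) into a net intention score.

    Push and pull factors raise the intention to switch;
    mooring factors (costs, habit) hold users back.
    """
    return w_push * push + w_pull * pull - w_moor * mooring

# A user facing security risks (push) and attractive features (pull),
# but high switching costs (mooring):
score = upgrade_intention(push=0.8, pull=0.6, mooring=0.9)
print(round(score, 2))  # 0.5
```

In practice such models are fitted statistically to survey responses rather than scored with fixed weights, but the sketch captures the opposing directions of the three factor types.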

The team surveyed almost 300 users of perhaps the most common operating system on personal computers the world over. They found that recognition of planned obsolescence increased a person's intention to upgrade, but that users split between following the official upgrade channel and turning to a pirate source. They also found that social influences and the appeal of improved features were particularly strong motivators for legitimate upgrades, whereas high switching costs, including technical challenges and monetary considerations, drove some users almost inevitably towards pirated software.

Software companies thus face a dynamic tension. Discontinuing older products eventually forces users to upgrade and so generates new revenue. But that constant cycle of upgrade and obsolescence pushes some people towards software piracy, especially in regions where cost sensitivity is a decisive factor, such as in the developing world.

The work suggests that planned obsolescence is more than a marketing tactic. It also hints that software companies could increase legitimate adoption and reduce piracy by designing upgrade processes that lower learning costs, clearly communicate benefits, and carefully manage the phasing-out of older products.

Thi, T.D.P. and Duong, N.T. (2026) 'Intentions to upgrade software: evidence from Microsoft Windows users', Int. J. Intellectual Property Management, Vol. 16, No. 1, pp.45–69.
DOI: 10.1504/IJIPM.2026.152062

The advent of generative artificial intelligence, GenAI, has changed how businesses use digital technologies. Where for many years AI was used as a predictive, analytical, and diagnostic tool, now it can produce ideas, articles, computer code, images, video, and music.

The turning point perhaps came in late 2022 with the public release of systems such as ChatGPT. These new tools allowed users to interact with complex AI models through conversational prompts. Users could give the GenAI written, and more recently spoken, instructions, and the system would respond. These tools have since become increasingly sophisticated and are now used across the corporate world and beyond.

The change happened partly because there were major developments in machine learning, a branch of computer science in which algorithms learn patterns from large datasets and can produce an output to a given prompt based on what they have learned. Central to this process is the so-called transformer model. This is a type of neural network architecture that can analyse relationships between different entries in a large volume of data. Neural networks are computational systems loosely inspired by the structure of the human brain. Transformer-based systems, including the GPT family of models, are particularly effective at generating coherent language from their training data given an appropriate prompt.
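The relationship-analysing core of a transformer is the attention operation, in which each entry's representation becomes a weighted mix of all the others. The minimal sketch below, with randomly generated token vectors, is illustrative only and not the GPT implementation:

```python
import numpy as np

# Minimal sketch of scaled dot-product attention, the mechanism at the
# heart of transformer models. Shapes and values are illustrative only.

def attention(Q, K, V):
    """Each query attends to all keys; outputs are weighted sums of values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))  # 3 tokens, 4-dimensional representations
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Real models stack many such layers, with learned projections producing the queries, keys, and values, but the weighted-mixing principle is the same.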

There are other approaches to GenAI. Generative adversarial networks (GANs), for instance, use two neural networks that play off each other. One creates synthetic data based on its training, and the second evaluates how realistic that data is based on its own training. The process goes back and forth until the system can no longer make the synthetic output any more convincing.

There are various other approaches, such as variational autoencoders, which compress and simplify data and then generate variations on the themes. Diffusion models, widely used for image generation, begin with random noise and gradually transform it into structured images. Many GenAI systems combine at least two of these approaches in a multimodal system that can produce text, images, and audio together.

Writing in the International Journal of Generative Artificial Intelligence, researchers discuss how well all of these systems work, the value they create, and the ethics associated with GenAI. Where GenAI is augmenting one-on-one human interaction or helping make business decisions, there are issues of bias inherent in training data as well as labour disruption to consider.

As AI systems assist increasingly in analytic, writing, and creative work, knowledge workers and many other people will collaborate more and more with machines. The change is disruptive, and it is likely that many jobs will become redundant. However, with automation will come a greater need for critical thinking and ethical judgement.

Zouaghi, I. and Fosso Wamba, S. (2026) 'Business transformation in the age of generative AI: from strategy to societal impact', Int. J. Generative Artificial Intelligence in Business, Vol. 1, Nos. 1/2, pp.238–262.
DOI: 10.1504/IJGAIB.2026.151813

Emotional support from parents and teachers can play an important role in how satisfied students feel with university life in Pakistan, according to research in the International Journal of Services, Economics and Management based on a survey of almost 600 undergraduates. The study suggests that encouragement and understanding from family and faculty do more than provide comfort: they appear to strengthen students' psychological resources in ways that make campus life more manageable and rewarding for them.

The researchers turned to social support theory, a framework for understanding how caring relationships enhance psychological well-being and resilience, to help them investigate campus life.

Their analysis of the survey data did not just ask whether support improves satisfaction but explored how it does so. In particular, they assessed whether two psychological characteristics, self-efficacy and problem-solving ability, act as mediators of support. Self-efficacy describes a person's belief in their own ability to succeed. Problem-solving capacity refers to one's skills and confidence in resolving difficulties.

The team found that parental support is linked to stronger self-efficacy and improved problem-solving skills, which in turn contribute to greater satisfaction. Encouragement from home seems to foster confidence and a sense of competence. Emotional support from teachers follows a different pattern. Students who see their instructors as respectful, attentive, and supportive also report higher satisfaction with campus life. This relationship, the researchers suggest, is partly explained by enhanced problem-solving ability. Supportive teachers appear to help students think through challenges and develop strategies to address them. Teacher support did not significantly influence self-efficacy in this study. In other words, teachers might help students tackle specific problems without fundamentally shaping the students' self-belief.

The team adds that the cultural setting is important. In a society where family bonds and collective aspirations remain central even into early adulthood, parental influence may continue to outweigh that of teachers in shaping self-belief. This contrasts with studies in the West, where support from teachers in higher education is more strongly associated with a student's sense of competence.

Ahmad, M.S., Ahmad, M.A. and Elgammal, I. (2026) 'Emotional support and satisfaction with university campus life: mediation of self-efficacy and problem-solving', Int. J. Services, Economics and Management, Vol. 17, No. 1, pp.81–101.
DOI: 10.1504/IJSEM.2026.151937

Artificial intelligence, AI, has become one of the defining technologies of what economists and policymakers describe as the Fourth Industrial Revolution. This is an era in which digital, physical, and biological systems are increasingly intertwined. In practical terms, AI refers to computer systems capable of performing tasks that typically require human intelligence, such as recognising patterns, learning from data, making predictions, and assisting in complex decisions.

Aside from the generative AI and search tools that are at the forefront of the media and economic hyperbole, analytical and related AI systems already underpin smart manufacturing platforms, digital twins for testing and optimising equipment performance, adaptive cybersecurity tools, medical diagnostics, and much more. Within a decade or so, few occupations are likely to remain untouched, whether augmented or displaced, by AI tools. The potential for productivity, innovation, and economic growth is great.

As with any new technology, however, there are good reasons to look closely at the social and economic impact AI might have. It would be prudent to put safeguards in place urgently given the way in which technologies have often amplified inequality, weakened democratic norms, and introduced new systemic risks in the past.

Research in the International Journal of Generative Artificial Intelligence has looked closely at many of the issues that are coming to the fore, such as labour disruption, deepfakes, the opacity of advanced AI models, bias, copyright, privacy, and security issues. Then, there is the issue of whether a superintelligent AI might surpass human abilities and redefine our very existence, perhaps even determining, algorithmically or through some kind of awareness, that we as a species are redundant, or worse, a problem that needs to be removed.

The researchers suggest that at the geopolitical level, international coordination is a major challenge, not least given the rogue behaviour of some so-called state actors. The trajectory that AI takes in this Fourth Industrial Revolution is not fixed, nor is it predictable. We need to work together to ensure that it works for the benefit of humanity and the planet.

Min, H. (2026) 'The dark side of artificial intelligence', Int. J. Generative Artificial Intelligence in Business, Vol. 1, Nos. 1/2, pp.199–209.
DOI: 10.1504/IJGAIB.2026.151820

Underground metro (subway) stations are no longer merely points of departure and arrival. As cities grow denser and transit networks expand, these spaces have the potential to function as some of the most widely shared public interiors in urban life. They are places where millions pass daily, cutting across age, income, and neighbourhood. They offer a rare platform for collective cultural experience. Stations can, suggests research in the International Journal of Environment and Sustainable Development, anchor local identity, narrate a city's history, and shape how residents and visitors alike perceive the character of the urban environment.

The research addresses a practical question confronting transport authorities and urban designers: how can large-scale public art projects fit into this infrastructure as it changes? Traditional artist-led design processes, though highly creative, can be time-intensive. By contrast, deep learning has allowed computers to generate high-quality images at speed. The missing link is that the models generating those images may not capture the cultural meaning the images need to convey. There is also a need to take into account how well a design would suit a real site.

The researchers hope to bridge this gap and have developed a multi-stage framework that integrates cultural analysis, visual cognition modelling, and spatial feasibility testing into a single pipeline.

Their approach is based on a semantic labelling system. The system can organise cultural concepts, such as local history, regional traditions, and environmental identity, into a knowledge graph. This graph can map relationships between ideas, enabling the computer to understand individual symbols and how they fit with broader narratives.

The framework then uses Contrastive Language-Image Pretraining (CLIP), a deep neural network trained on vast datasets containing pairings of images and text. An additional layer simulates human perception through a visual attention prediction network, considering composition, spatial layout, and pedestrian flow. By predicting where passengers are likely to focus while moving through a station, the system can position key symbolic elements in high-attention zones. The researchers suggest this could improve not only the aesthetic impact of the art installation but also the way in which pedestrians navigate the subway stations.

Wang, Q. (2026) 'Application of deep learning algorithms in the design of urban subway public art space', Int. J. Environment and Sustainable Development, Vol. 25, No. 5, pp.44–72.
DOI: 10.1504/IJESD.2026.151850

In a country where the physical scars of war remain visible in shattered buildings and disrupted markets, research in the International Journal of Diplomacy and Economy suggests that the moral architecture of business may be just as important to recovery in Syria as capital investment and bricks & mortar.

A study of 200 business leaders working in international companies in Aleppo and Damascus finds that ethical decision-making in Syria can be explained, to a significant degree, by a well-established psychological framework known as the Theory of Planned Behaviour. This theory suggests that human behaviour is primarily shaped by intention, a person's conscious plan or readiness to act. Those intentions, in turn, are influenced by three factors: personal attitudes, perceived social expectations, and perceived control over whether the behaviour is realistically achievable.

In practical terms, individuals are more likely to act ethically if they believe ethical conduct is right, think that others expect it of them, and feel capable of acting accordingly.

The researchers applied this theory in the context of Syria as part of an effort to understand how business leaders make ethical choices amid conflict, economic disruption, and institutional fragility. Their focus was Syria's post-war reconstruction drive, a national strategy aimed at restoring infrastructure, reviving markets, and rebuilding social trust after years of violence.

Trust, the study notes, is not an abstract virtue in such an environment. It is a prerequisite for attracting investment, stabilising supply chains, and enabling cooperation between domestic firms and international partners. Ethical business conduct is thus a functional prerequisite of economic recovery.

For practitioners, the implications are concrete. The findings indicate that organisations seeking to strengthen ethical leadership cannot rely solely on written rules. Codes of ethics must be actively communicated and embedded within organisational culture, the shared values and practices that shape everyday work. When ethical expectations become part of that culture, they function as powerful social norms, guiding behaviour even in the absence of direct oversight.

Amoozegar, A., Lata, A., Falahat, M., Shakib, S., Kumar, M., Ramzani, S.R. and Yadav, M. (2026) 'Mediating role of ethical intention between social norms, code of ethics and ethical decision-making', Int. J. Diplomacy and Economy, Vol. 12, No. 5, pp.1–20.
DOI: 10.1504/IJDIPE.2026.151858

The earliest stage of drug discovery is governed by a simple constraint: there are far more possible drug-like molecules than any pharmaceutical laboratory could ever test. A new deep learning system, reported in the International Journal of Reasoning-based Intelligent Systems, offers a way to speed up research and could unblock industry bottlenecks.

Bringing a new pharmaceutical to market can take more than a decade and will inevitably cost billions of dollars in research and development, testing, regulatory compliance, and marketing. A large share of that investment is spent on identifying compounds that bind to biological targets: commonly proteins involved in disease, whether found in a pathogen or in our own bodies. Virtual screening, so-called in silico studies, has for decades used computer models to predict which molecules from a library of candidates might be suitable for testing in vitro (in the laboratory) and ultimately in vivo (in animals, then humans).

Established methods fall into two categories. The first comprises receptor-based approaches, such as molecular docking, that simulate how a molecule fits into a protein's three-dimensional binding site and estimate the strength of the bond that forms between them. The accuracy of this approach depends on high-quality protein structures and on simplified scoring formulae. The second, the ligand-based approach, instead looks for compounds resembling known active molecules, using predefined chemical features, or descriptors.

These techniques can be computationally efficient and have successfully led to many of the pharmaceuticals on the market today. However, they rely heavily on prior knowledge and expert assumptions. In both cases, human-designed rules limit how much chemical complexity can be captured. The advent of deep learning systems is opening up a new approach.

Instead of manual feature selection, deep learning, a form of machine learning that uses multi-layered neural networks to detect patterns directly from raw data, can treat drug candidate molecules as graphs, with atoms as nodes and chemical bonds as edges. A graph neural network updates each atom's representation based on its neighbours, allowing the model to learn subtle structural relationships.
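The neighbour-based update can be sketched in a few lines. The toy three-atom chain and single feature per atom below are invented for illustration and are not drawn from the paper:

```python
import numpy as np

# Toy sketch of one graph message-passing step on a molecule-like graph.
# The three-atom chain and feature values are illustrative only.

# Adjacency matrix: atoms 0-1-2 bonded in a chain (e.g. C-C-O)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

# One feature per atom (standing in for a learned atomic representation)
h = np.array([[1.0], [2.0], [3.0]])

def message_pass(A, h):
    """Update each atom's representation from its neighbours' features."""
    deg = A.sum(axis=1, keepdims=True)  # number of bonded neighbours
    return (A @ h) / deg                # mean of neighbour features

h1 = message_pass(A, h)
print(h1.ravel())  # [2. 2. 2.]
```

Real graph neural networks apply learned weight matrices and non-linearities at each step and stack several such layers, so that information propagates across the whole molecule.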

Crucially, this new approach uses another information channel in addition to the graph: the drug candidate's SMILES string, a unique text-based representation of a molecule's chemical structure. By using structural and sequential representations together, the researchers could improve performance significantly. In tests on standard public benchmarks, the model achieved a score of 0.889, a measure of how well the system distinguishes active from inactive drug candidates: a score of 1 indicates ideal prediction, whereas 0.5 reflects a 50:50 chance, a guess. Moreover, the system could screen one million molecules in a quarter of an hour, 80 per cent faster than conventional approaches.
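Scores on this 0.5-to-1 scale are the kind produced by area-under-the-curve (AUC) metrics, which can be read as the probability that a randomly chosen active compound is ranked above a randomly chosen inactive one. A minimal sketch, with made-up scores and labels, and assuming an AUC-like benchmark metric:

```python
# Sketch of how an AUC-style score is computed: the probability that a
# randomly chosen active compound is ranked above an inactive one.
# The scores and labels below are made up for illustration.

def auc(scores, labels):
    """Rank-based AUC: fraction of active/inactive pairs ranked correctly."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3]   # model's predicted activity
labels = [1,   1,   0,   1,   0]     # 1 = active, 0 = inactive
print(auc(scores, labels))  # 5 of 6 pairs correct: 0.8333...
```

A perfect ranking, with every active compound scored above every inactive one, would return 1.0.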

Zhang, C. (2026) 'Deep learning-based virtual screening system for drug molecules', Int. J. Reasoning-based Intelligent Systems, Vol. 18, No. 8, pp.44–55.
DOI: 10.1504/IJRIS.2026.151726

Electricity pylons, or transmission towers, have been a critical component of energy infrastructure for decades. The structural integrity of these power towers, which stride across landscapes the world over, is vital to power supply and public safety.

A study in the International Journal of Energy Technology and Policy has investigated a novel, more precise and efficient way to inspect pylons using advanced 3D scanning and geometric analysis. The approach might speed up the shift from labour-intensive field checks to what might be referred to as a fully digital inspection regime.

The researchers explain that a laser system can be used to scan a pylon's geometry in minute detail to generate a "point cloud". This is a collection of millions of spatial points representing the pylon's surface. To assess structural integrity, multiple scans are taken from different angles and then must be aligned into a single coordinate system in a registration process. This typically occurs in two stages: coarse registration, which provides an initial alignment, and fine registration, which refines it to high precision.
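The fine-registration step can be illustrated with the classic Kabsch (Procrustes) method, which recovers the rigid rotation and translation that best align two point sets. The synthetic points below stand in for real scans; the paper's own pipeline is more elaborate:

```python
import numpy as np

# Minimal sketch of fine registration: find the rigid rotation R and
# translation t that best align two copies of the same point set
# (the Kabsch/Procrustes method). Points here are synthetic, not scans.

def align(P, Q):
    """Return rotation R and translation t mapping P onto Q (least squares)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1, 1, d]) @ U.T  # proper rotation (det = +1)
    t = cQ - R @ cP
    return R, t

# Synthetic "second scan": points rotated 30 degrees about z and shifted
rng = np.random.default_rng(1)
P = rng.standard_normal((50, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([1.0, 2.0, 3.0])

R, t = align(P, Q)
err = np.abs(P @ R.T + t - Q).max()
print(err < 1e-6)  # True: the alignment is recovered
```

Practical registration must first solve the harder problem of deciding which points correspond to which, which is where the coarse stage and feature extraction come in.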

The lattice frameworks of pylons with their intersecting beams and sharp edges generate extremely large datasets and create ambiguities when identifying matching features, so registration even with the best algorithms is tough and consequently error-prone. In the IJETP paper, the researchers propose the use of Gaussian curvature in the feature-extraction process required for registration. Gaussian curvature is a mathematical measure of how a surface bends at a given point: flat areas have near-zero curvature, while sharp edges or corners have higher values. Because beam intersections and joints exhibit high curvature, they provide distinctive geometric markers for alignment.
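The curvature measure itself is compact for a surface expressed as a height function z = f(x, y). The two illustrative surfaces below, a flat plate and a tight bend, are simple stand-ins for pylon geometry rather than real scan data:

```python
# Sketch of why Gaussian curvature flags edges and joints. For a surface
# z = f(x, y), K = (f_xx*f_yy - f_xy**2) / (1 + f_x**2 + f_y**2)**2,
# evaluated from the surface's partial derivatives at a point.

def gaussian_curvature(fx, fy, fxx, fyy, fxy):
    """Gaussian curvature of z = f(x, y) from its partial derivatives."""
    return (fxx * fyy - fxy**2) / (1 + fx**2 + fy**2) ** 2

# A flat plate (z = 0): all derivatives vanish, so curvature is zero
print(gaussian_curvature(0, 0, 0, 0, 0))   # 0.0

# A bowl-shaped bend (z = x**2 + y**2) at its lowest point:
# f_x = f_y = 0, f_xx = f_yy = 2, f_xy = 0, giving high curvature
print(gaussian_curvature(0, 0, 2, 2, 0))   # 4.0
```

On scanned point clouds the derivatives are estimated numerically from neighbouring points, but the principle is the same: flat faces score near zero, while joints and edges stand out.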

Once aligned, the digital model of the pylon can then be compared with a high-precision reference design to identify geometric deviations. This allows engineers to detect misalignments or structural problems with confidence and so prioritise maintenance and repair across the power grid.

Qi, X., Yan, H., Tu, X., Liu, Y. and Ding, W. (2025) 'Quality inspection of power transmission towers based on point cloud registration', Int. J. Energy Technology and Policy, Vol. 20, No. 7, pp.3–22.
DOI: 10.1504/IJETP.2025.151788

Educators are using digital platforms more and more alongside conventional classroom teaching. A study in the International Journal of Continuing Engineering Education and Life-Long Learning has taken a look at the important question of whether or not this "blended" educational model enhances learning.

Blended online-offline education, sometimes referred to as smart education, combines face-to-face instruction with tools such as learning management systems, digital resources, and data-driven feedback. It was already in use before the COVID-19 pandemic, but in 2020 it became a critical part of educational life and has since become embedded in education. It is flexible and holds the promise of personalised learning. However, systematic research into how students experience offline-online education has not kept pace with digital developments.

The research in IJCEELL identified 14 factors that could shape the student learning experience. The researchers grouped these into five broad dimensions: course environment and platform, course design, teacher characteristics, learner characteristics, and social interaction. The factors included the reliability and usability of digital systems, the clarity and coherence of course structure, the responsiveness of teachers, the capacity of students for self-directed learning, and the quality of peer engagement.

Rather than treating these factors as separate variables, the researchers examined how they interact to give particular outcomes. As such, they used an interpretive structural model to find the hierarchical relationships. In practical terms, this approach can distinguish between foundational elements, intermediary influences, and the educational outcomes.

Their structural model has course content and resource infrastructure at its foundations. Teaching interaction and learner-related factors such as motivation and self-regulation then sit on top of these foundations. The layer above that is the learning outcomes, including satisfaction and performance. As one might expect, the model showed that student experience emerges from interconnected factors from the base to the top, rather than isolated inputs.

The framework demonstrated more than 95 per cent accuracy and performed better than earlier approaches that used static surveys or business-derived models where factors are all treated independently. Ultimately, it showed that investment in digital technology alone is unlikely to transform learning outcomes without close attention also being paid to course design and teacher development.

Fang, Y. and Hu, J. (2026) 'Analysis of factors influencing student learning experience in the blended online-offline smart education model', Int. J. Continuing Engineering Education and Life-Long Learning, Vol. 36, No. 7, pp.23–34.
DOI: 10.1504/IJCEELL.2026.151814

For decades, China's ascent to become the world's second-largest economy was powered by coal-fired energy, steel mills, and chemical plants. The environmental toll grew increasingly visible. In 2016, Beijing launched one of its most sweeping regulatory interventions, the centralised Environmental Protection Inspection (EPI) programme. This would dispatch inspection teams to scrutinise the activities of local governments and major polluters.

Research in the International Journal of Environment and Pollution suggests that the programme has done more than curb emissions. It has improved what economists call green total factor productivity (GTFP) among some of the country's heaviest polluters.

Traditional productivity measures assess how efficiently companies turn inputs such as labour and capital into output. They ignore pollution generated in the process. GTFP adjusts for this by counting emissions and other environmental damage as undesirable outputs. In effect, it measures not only how much a firm produces but also how cleanly it does so. A rise in GTFP means a company is generating more economic value for the same environmental cost or maintaining output while reducing pollution.
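A deliberately simplified sketch of the idea: treat emissions as an undesirable output that discounts economic output before dividing by inputs. The penalty form and figures below are invented for illustration and are not the researchers' actual efficiency model:

```python
# Toy illustration of the idea behind green total factor productivity:
# emissions are an undesirable output that discounts economic output.
# Formula and figures are simplified stand-ins, not the study's model.

def green_productivity(output, inputs, emissions, penalty=1.0):
    """Output net of an emissions penalty, per unit of input."""
    return (output - penalty * emissions) / inputs

# Two firms with identical output and inputs; the second pollutes half
# as much, so its green productivity is higher:
dirty = green_productivity(output=100, inputs=50, emissions=20)
clean = green_productivity(output=100, inputs=50, emissions=10)
print(dirty, clean)  # 1.6 1.8
```

The researchers' actual models are frontier-based efficiency estimators, but the toy example shows why cutting pollution while holding output steady raises the green measure even though conventional productivity is unchanged.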

The research analysed almost a decade's worth of data from Chinese A-share listed companies across heavily polluting industries. The researchers tracked changes over time using specialist efficiency models, which incorporated environmental factors into their productivity calculations. They then used statistical methods to compare firms subject to inspections with those that were not, before and after the introduction of the policy. This approach isolates the effect of the inspections from other economic trends.

The results show a statistically significant increase in GTFP among heavily polluting enterprises following the inspections. Importantly, the gains do not appear to stem primarily from temporary production cuts to meet emissions targets. Instead, the evidence points to increased green technological innovation. Firms invested in cleaner technologies, energy-efficiency upgrades, and process improvements that reduced their environmental footprint in the long term.

Gu, Y., and Liu, C. (2026) 'Empowering sustainable growth: the transformative impact of environmental protection inspections on heavy polluters', Int. J. Environment and Pollution, Vol. 76, Nos. 1/2, pp.40–56.
DOI: 10.1504/IJEP.2026.151756

A study of university students has demonstrated a link between heavy smartphone use, forward head posture, and neck pain. The work, published in the International Journal of Medical Engineering and Informatics, highlights growing concerns about the physical costs of constant digital connectivity among young adults.

The researchers surveyed 404 students in Malaysia aged between 17 and 30 years old in what is referred to as a cross-sectional study. In such a study, data are collected at a single point in time rather than over an extended period of months or years. The students, 216 male and 188 female, completed an online questionnaire detailing their smartphone habits and any physical problems they experienced, such as backache or neck pain.

The team's statistical analysis revealed that those using their smartphones for prolonged periods tended to have a forward neck posture and to suffer neck pain. The analysis suggests there is only a 1 per cent probability that this association arose purely by coincidence, unconnected to posture and smartphone use.

The cervical spine has seven vertebrae that support the head and protect the spinal cord. Forward neck posture describes the common position adopted while looking down at a phone, in which the head tilts forward and downwards. This posture increases the effective weight borne by the neck, placing added strain on muscles, ligaments, and joints. Over time, such strain can lead to irritation of soft tissues, cause nerve compression, and even affect the natural curvature of the spine detrimentally.

Although the study does not establish cause and effect, the strength of the association and its consistency with previous research point to the conclusion that forward head posture during smartphone use is a modifiable risk factor for mechanical neck pain. Given that this problem is reportedly on the increase among younger people, a little education and guidance on posture and on reducing smartphone use could help preclude an epidemic of chronic spinal problems in this demographic.

Antoniraj, S., Hassan, H.C. and Baleswamy, K. (2026) 'Forward neck posture on cervical pain among university students: effect of smartphone addiction', Int. J. Medical Engineering and Informatics, Vol. 18, No. 2, pp.198–205.
DOI: 10.1504/IJMEI.2026.151773

Thick cloud cover can completely obscure the surface of the earth from satellite view, while thinner haze and shadows distort the image of rural and urban regions. As such, many remote sensing images for monitoring climate, crops, and urban growth are only partially usable.

Research in the International Journal of Bio-Inspired Computation offers a way for satellites to see through clouds using a hybrid artificial intelligence system. The system essentially removes clouds from the images sent back by the satellite and reconstructs the land surface beneath with greater fidelity than is possible with earlier techniques. Almost all optical satellite images are affected by clouds to some degree, so improvements in AI cloud removal could expand the reliability of high-resolution Earth observation data.

Traditional approaches have relied either on physical models of atmospheric light scattering or on image-processing techniques that compare multiple images through time or across different wavelengths of light. Those methods are useful but struggle with varying cloud thickness or large, fully obscured areas. More recent machine learning systems, in which algorithms learn patterns from large datasets, have improved results, but they need clear reference images; without them, they simply produce blurred areas where the landscape was obscured by clouds.

The new approach is a deep denoising application known as SenseNet. It treats cloud- or haze-affected pixels as structured noise that can be removed. The system uses a nature-inspired model called a hybrid Coyote Fox Optimisation algorithm, which mimics the social, cooperative behaviour of canines to search for the optimal solution. In computational terms, it helps tune the network's internal parameters so that training does not stall on suboptimal solutions that would otherwise confound the learning algorithm.

Compared with existing denoising approaches, the system improved signal-to-noise ratios by more than two decibels and reduced residual errors. A gain of just 2 dB corresponds to an almost 60 per cent increase in signal power relative to noise.
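The decibel arithmetic is easy to verify, since a gain in decibels maps to a power ratio of 10^(dB/10):

```python
# Converting a decibel gain to a linear power ratio: 10**(dB/10).
# A +2 dB improvement is therefore roughly a 58.5% power increase.

def db_to_power_ratio(db):
    """Linear power ratio corresponding to a gain in decibels."""
    return 10 ** (db / 10)

gain = db_to_power_ratio(2)
print(round((gain - 1) * 100, 1))  # 58.5
```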

By clearing the clouds away, the system can more readily delineate agricultural boundaries and map road networks and bodies of water so that phenomena such as deforestation, crop yields, and infrastructure can be viewed with more detail. In persistently cloudy regions, including much of the tropics, more reliable cloud removal could reduce data gaps, supporting climate adaptation and disaster response strategies that increasingly depend on near-real-time satellite intelligence.

Gound, R.S. and Thepade, S.D. (2026) 'SenseNet: satellite image enhancement using optimised deep denoiser for cloud removal', Int. J. Bio-Inspired Computation, Vol. 27, No. 1, pp.45–59.
DOI: 10.1504/IJBIC.2026.151783

As cities build upwards to accommodate growing populations, the safety of deep excavation, the process of digging large foundation pits to anchor high-rise buildings, has become a significant challenge in the construction industry. These pits must withstand shifting earth, changes in groundwater pressure, and the loads imposed by heavy machinery, while remaining stable enough to protect workers and nearby structures. Failures at this stage can trigger collapses, flooding, or structural damage.

Work in the International Journal of Critical Infrastructures discusses an AI (artificial intelligence) system designed to improve safety monitoring at deep foundation pit support sites. The system aims to identify abnormal behaviour, such as unsafe actions, improper equipment use, or entry into restricted zones without protective gear, in close to real time so that warnings can be sounded promptly.

Construction sites have traditionally relied on manual supervision and earlier generations of automated monitoring. But these approaches often struggle to detect unsafe behaviour quickly and accurately. Many systems record high false acceptance rates, meaning they mistakenly classify dangerous actions as safe. Others process video feeds too slowly to intervene effectively in rapidly changing environments.

The new system combines several advanced AI techniques to address those weaknesses. It begins by extracting key frames from surveillance footage using the fractional Fourier transform, a mathematical method that analyses a signal on a continuum between the time and frequency domains. By identifying the most informative frames rather than scanning every second of video, the system reduces computational load while retaining critical information.
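The idea of key-frame extraction can be illustrated with a much simpler stand-in: score each frame by how much it differs from its predecessor and keep the highest-scoring ones. This is only a toy informativeness criterion, not the paper's fractional Fourier transform method.

```python
def select_key_frames(frames, k=2):
    """Pick the k frames that differ most from their predecessor.

    `frames` is a list of equal-length pixel sequences. The score here
    (mean absolute difference) is an illustrative assumption, not the
    study's actual FrFT-based criterion.
    """
    def diff(a, b):
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    scores = [(diff(frames[i], frames[i - 1]), i)
              for i in range(1, len(frames))]
    # Keep the k highest-scoring frame indices, in original order.
    return sorted(i for _, i in sorted(scores, reverse=True)[:k])

frames = [[0, 0, 0], [0, 0, 0], [9, 9, 9], [9, 9, 8], [0, 1, 0]]
print(select_key_frames(frames))  # the frames where the scene changes most
```

Any scheme of this shape trades a full scan of the video for a cheap per-frame score, which is what gives the reported reduction in computational load.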

The system then uses a spatiotemporal graph convolutional network, a form of deep learning that analyses both space and time data. The spatial analysis examines how workers and machinery are positioned relative to one another, while the temporal analysis tracks how movements change over time. Unlike conventional image-recognition models that treat frames in isolation, this approach captures sequences of actions and interactions. This is vital for working out what is happening moment to moment on the construction site.

The final step is to use a hybrid model that combines a convolutional neural network (CNN) with a so-called long short-term memory network (LSTM). The CNN can recognise visual features such as body posture or equipment shape. The LSTM can detect patterns in sequences of data. Working together, those two tools allow the system to determine not only what is happening in a single frame, but whether a series of movements constitutes a safety violation.

In tests on active deep excavation sites, the researchers achieved a minimum false acceptance rate of 2.43 per cent and a peak abnormal behaviour recognition accuracy of 99.12 per cent. Processing time was as low as 0.19 seconds per analysis cycle, allowing near real-time monitoring.

Qi, W. (2026) 'An adaptive recognition of abnormal behaviour in deep excavation support construction site of high-rise buildings', Int. J. Critical Infrastructures, Vol. 22, No. 7, pp.1–17.
DOI: 10.1504/IJCIS.2026.151633

A large-scale study published in the International Journal of Business Innovation and Research has looked at what factors lead to sustained gains in the construction industry. The team examined 226 nationally registered firms and found that operational efficiency and collaboration, long seen as the sector's primary remedies for underperformance, are insufficient on their own. Instead, the decisive factor is whether companies fundamentally rethink how they create, deliver, and capture value.

The research used a statistical tool known as Partial Least Squares Structural Equation Modelling to analyse information from the 226 companies and to look for any relationships between various organisational factors. The approach allowed them to look at how lean construction practices and strategic partnerships affect performance. It was also possible to discern whether business model innovation acts as a bridge between these strategies and measurable outcomes such as profitability, operational efficiency and competitive position.

Lean construction is a systematic project management approach designed to eliminate waste and maximise value throughout a project's lifecycle. Waste includes excess materials, redundant labour, delays, reworking, and poor coordination between contractors. Unlike simple cost-cutting, lean methods emphasise continuous improvement, integrated workflows, and delivering greater value to clients.

The study confirms that those companies that adopt lean practices do tend to perform better. However, the most significant improvements did not stem solely from streamlining their processes. Instead, lean thinking proved most powerful when it also prompted broader strategic change.

That broader shift is captured in the concept of business model innovation. A business model defines how a company creates value for customers, how it delivers that value, and how it generates revenue. Innovation in this context involves reconfiguring those core elements. For example, this might include moving from one-off, project-based contracts to long-term integrated service models, adopting digital coordination platforms, redesigning revenue structures, and embedding sustainability into what the company offers to clients.

Business model innovation was found to have a strong and direct positive effect on performance. More importantly, it amplified the impact of lean construction. When lean methods were embedded within a redesigned business model, performance gains were significantly greater than when lean was treated as a stand-alone efficiency tool. The research found that partnerships boosted performance only when they allowed companies to innovate in their business models. Access to shared knowledge, resources, and trust-based relationships yielded gains only if companies used them to reconfigure how they compete and deliver value.

Arifin, J., Prabowo, H., Hamsal, M. and Elidjen, E. (2026) 'Innovating for performance: the role of lean construction and strategic partnerships in construction firms', Int. J. Business Innovation and Research, Vol. 39, No. 6, pp.1–25.
DOI: 10.1504/IJBIR.2026.151634

In the age of global branding, instantaneous communication, and generative AI images, the symbols that we see in our daily lives circulate at an unprecedented rate. A study in the International Journal of Information and Communication Technology argues that if the symbols we share are to foster understanding rather than confusion, designers must treat them as carriers of cultural meaning, not mere decoration.

The team has used communication science, design theory, and semiotics, the study of signs and how they create meaning, to propose a systematic, evidence-based framework to identify, refine and test traditional cultural symbols. Their concept echoes an insight by Ferdinand de Saussure that suggests that a sign is not simply a form but a form bound to shared content. A flower or mythical creature, in this view, evokes memories, values and beliefs as much as it depicts the object it illustrates.

As digital platforms accelerate the circulation and mutation of images, we experience the fragmentation of symbols and signs. Moreover, in the age of generative artificial intelligence, almost all content is being cannibalised and regurgitated as derivative works, and visual motifs are thus losing their inherited symbolism or, at best, being misappropriated or diluted. In the face of these changes, the researchers suggest that semiotics has become a necessary part of creativity and perhaps the only hope of conserving our symbols and their significance.

In their paper, the researchers discuss a five-step process beginning with systematic data collection and identification of culturally significant symbols. They followed this with a cross-cultural analysis, design refinement, and empirical testing. Statistical analysis together with expert review allowed them to look at specific symbols, such as the blue-and-white porcelain motifs featuring the lotus, peony, and plum blossom. As a good example of symbolic art, these patterns scored highly for clarity, adaptability, and perceived authenticity. The lotus is widely associated in East Asia with purity and renewal, the peony with prosperity and honour, and the plum blossom with resilience in adversity. Their visual simplicity combined with layered symbolism appears to aid translation into contemporary branding, the analysis found. More complex imagery failed to ignite the imagination of general audiences, although it was recognised as culturally significant by the experts.

Quantitative evaluation thus shows the different priorities associated with authenticity and meaning, challenging assumptions of universal interpretation for even familiar symbols that might be used in marketing and branding.

Li, A. (2026) 'Research on the identification and optimisation of traditional cultural symbols from the perspective of cross-cultural communication', Int. J. Information and Communication Technology, Vol. 27, No. 9, pp.18–38.
DOI: 10.1504/IJICT.2026.151653

Mental health problems are among the most pressing of public health challenges, affecting millions across different age groups and societies. Depression, anxiety, and stress-related conditions rank among the leading causes of diminished quality of life worldwide. They exact a heavy social toll and economic cost. Yet diagnosis still relies largely on self-reported symptoms and intermittent clinical interviews, which means diagnosis is vulnerable to memory lapse, stigma, and limited access to trained professionals.

Research in the International Journal of Networking and Virtual Organisations discusses an artificial intelligence (AI) diagnostic system that can spot early signs of various mental health conditions by analysing how people write online. The model, known as a Fossa-based Graph Neural Network (FbGNN), examines language patterns in text drawn from social media platforms and online forums. Instead of relying solely on questionnaires, it studies sentiment-driven textual information, the emotional tone, word choices and behavioural cues embedded in a person's online writing.

The researchers explain that their system combines two advanced computational techniques. The first is Fossa optimisation, a feature-selection method based on search strategies seen in nature. In machine learning, features are identifiable pieces of information, such as specific words, phrases or emotional markers. By applying Fossa optimisation, the system can filter out irrelevant data and identify pertinent indicators of mental distress.

The second component is a Graph Neural Network, a GNN. A GNN analyses relationships by representing information as a network of nodes and connections. Here, nodes correspond to features, and the connections are the interactions between them. This allows the model to detect complex patterns, such as recurring combinations of emotional expression and behavioural signals.
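The core GNN idea, aggregating a node's signal with those of its neighbours, can be sketched in a few lines. The node names, values, and single averaging round below are purely illustrative assumptions, not the paper's FbGNN architecture.

```python
def message_pass(features, edges):
    """One round of neighbour averaging on a small feature graph.

    `features` maps each node (e.g. a text feature) to a numeric signal;
    `edges` lists undirected links between co-occurring features.
    """
    neighbours = {n: [] for n in features}
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)
    # Each node's new value is the mean of itself and its neighbours.
    return {
        n: (features[n] + sum(features[m] for m in neighbours[n]))
           / (1 + len(neighbours[n]))
        for n in features
    }

feats = {"hopeless": 0.9, "tired": 0.6, "holiday": 0.1}
links = [("hopeless", "tired")]
print(message_pass(feats, links))
```

After one round, linked features pull towards each other's values while isolated ones are unchanged, which is how recurring combinations of signals become visible to the model.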

By training the system to classify text into categories such as depression, anxiety, stress, bipolar disorder, suicidal ideation, and personality disorders, the team was then able to test its accuracy against known sample data. It predicted a person's mental health status with an accuracy of almost 99 per cent in the trials, misclassifying only around one case in a hundred. Such accuracy would be useful in screening for mental health problems among a cohort of users, such as students or employees, allowing healthcare follow-ups to be directed at those most likely to have problems that might be addressed. Further refinements of the system could bring that accuracy closer to 100 per cent.

Shobitha, G.S., Kataksham, V.S., Nagalaxmi, T., Spandana, V., Sreelatha, G. and Radha, V. (2025) 'A smart intelligent Internet of Things framework for predicting mental health', Int. J. Networking and Virtual Organisations, Vol. 33, No. 3, pp.251–278.
DOI: 10.1504/IJNVO.2025.151510

Digital payments are a routine part of daily life for many people, and the risk of online fraud is rising alongside this convenience. Identity theft, email compromise, scams, and misleading investment schemes all exploit technological weaknesses, and often user naivety, and can lead to substantial financial losses.

Research in the American Journal of Finance and Accounting has looked at technological threat avoidance theory (TTAT), a framework used to understand how individuals respond to technology-related risks. The study sheds new light on what motivates users to protect themselves from online financial threats, if they do at all. It considers user attitudes towards fraud and the perception of potential financial loss with the aim of identifying the specific influences that lead to a user taking protective action.

The team surveyed users of online payment platforms and found that rather than an abstract fear of fraud, the decisive factor in whether or not people took preventative measures was simply the perceived financial loss. This finding suggests that awareness campaigns focused on general threats may be less effective than approaches that point out the direct financial consequences of online fraud.

Online fraud costs us roughly US$1 trillion per annum, and that figure is likely rising year on year. There are millions of reported cases and probably many more that are never reported. The losses that people bear as victims of online fraud erode overall trust in the digital systems on which we rely. Moreover, widespread, organised fraud can disrupt financial infrastructure, threatening broader economic stability and making it almost impossible for regulators to maintain oversight and control.

Facing such problems, the digital economy needs technological innovation in payment systems to incorporate effective strategies to influence user behaviour. Such strategies need to make it difficult for users to compromise themselves through technological naivety. Policymakers, platform developers, and financial educators also need to help in the design of interventions that align perceived risk with actual behaviour and so strengthen the individual against threats as well as help maintain trust in digital financial systems.

Peswani, R. and Vijay, P. (2026) 'Minimising exposure to cyber frauds in digital finance: perspectives from technology threat avoidance theory', American J. Finance and Accounting, Vol. 9, No. 1, pp.76–98.
DOI: 10.1504/AJFA.2026.151476

The migration to electronic medical records, used by healthcare providers, hospitals, and medical insurers, continues. However, this switch from paper records is leading to an accumulation of data, much of which is in free-text form that cannot easily be processed by an algorithm searching for knowledge and looking for patterns.

A study in the International Journal of Business Process Integration and Management has looked at using basic text-mining methods to convert this free text, which might be as unsophisticated as the jottings of a doctor or nurse, into something more organised. This kind of processing could make decisions in medicine faster and more consistent as well as potentially opening up new avenues for medical research and epidemiology.

The research focused on the specific medical condition of lower back pain and the reports associated with it. Lower back pain affects a large proportion of the population and is a major reason people miss days at work or file for disability. Experts can evaluate symptoms, consider what medical scans show, make a diagnosis, and offer a prognosis. Administrators have to read through reports manually to determine fees and payments. A system to convert free text to structured text would be a boon, allowing dates and diagnoses to be searched, checked, and analysed much more easily.

The team used pattern-matching rules based on regular expressions, which allow software to detect specific phrases or formats in text. These rules could then be used to extract clinical and administrative details. This rule-based text mining was combined with machine learning algorithms that can learn from past data and make predictions about new cases.
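A minimal sketch of this kind of rule-based extraction is shown below. The report line, field names, and patterns are hypothetical examples, not the study's actual rules or data.

```python
import re

# A hypothetical free-text report line; format assumed for illustration.
note = "Patient seen 12/03/2024. Diagnosis: lumbar disc herniation. Rest: 14 days."

# Regular expressions pick out a date, a diagnosis, and a prescribed rest period.
date = re.search(r"\b(\d{2}/\d{2}/\d{4})\b", note)
diagnosis = re.search(r"Diagnosis:\s*([^.]+)", note)
rest_days = re.search(r"Rest:\s*(\d+)\s*days", note)

print(date.group(1))       # 12/03/2024
print(diagnosis.group(1))  # lumbar disc herniation
print(rest_days.group(1))  # 14
```

Once fields like these are pulled into a structured form, they can be searched and fed to the predictive models described below, which raw free text does not permit.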

The researchers tested their system on 255 anonymised reports. Medical specialists validated the extracted information, confirming a precision rate of 98 per cent. The structured information was then used to train three established predictive models: AdaBoost, which combines multiple simple models to improve accuracy; Random Forest, which aggregates the results of many decision trees; and Support Vector Machines, which identify boundaries between categories in complex datasets.

In tests, AdaBoost achieved perfect accuracy in predicting when rest should be prescribed. Random Forest reached 91 per cent accuracy and 93 per cent recall, a measure of how many relevant cases are correctly identified, in return-to-work assessments. The Support Vector Machine recorded a 98 per cent recall rate in classifying disability cases.
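Precision and recall, the two metrics quoted above, are simple ratios over a model's confusion counts. The counts below are illustrative, not the study's figures.

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Precision: share of flagged cases that were correct.
    Recall: share of true cases that were actually found."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Illustrative counts: 93 true cases found, 7 missed, 2 wrongly flagged.
p, r = precision_recall(tp=93, fp=2, fn=7)
print(round(p, 2), round(r, 2))  # 0.98 0.93
```

A high recall (as in the disability classifications) means few relevant cases slip through; a high precision (as in the validated extraction) means few false alarms.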

Beyond performance metrics, the researchers argue that the approach reduces processing time and limits transcription errors. Because the extraction rules are explicit, the system remains interpretable. This is important, as decisions still need to be explained to patients and others regardless of how structured or unstructured the data is.

Zwawi, R., Elhadjamor, E.A., Ghannouchi, S.A. and Ghannouchi, S-E. (2025) 'Optimising text mining applications for enhanced medical decision making', Int. J. Business Process Integration and Management, Vol. 12, No. 4, pp.295–306.
DOI: 10.1504/IJBPIM.2025.151626

As self-driving, or autonomous, vehicles head out onto public roads, one of the field's most persistent challenges remains collision avoidance in unpredictable traffic. A study in the International Journal of Vehicle Design discusses an artificial intelligence (AI) control system that has a 97 per cent success rate in avoiding obstacles, with a maximum response time of about half a second.

Urban roads present a shifting landscape of pedestrians, stalled vehicles, roadworks and erratic drivers. For a self-driving car, safe operation depends not only on accurate sensors but also on rapid decisions made under such uncertain conditions. Conventional obstacle-avoidance systems often rely on fixed rules or straightforward processing of sensor data. These approaches can sometimes fail in heavy rain, fog, or headlight glare.

Other systems that use reinforcement learning, a branch of AI in which the algorithm learns by trial and error, such as Deep Deterministic Policy Gradient, need a lot of computing power and often struggle to work quickly enough for real-world driving conditions.

The new approach described in IJVD builds on a reinforcement learning framework called Soft Actor-Critic, or SAC. In this system, a software actor proposes driving actions while a software critic evaluates whether a given manoeuvre would be sensible. SAC is designed to learn so that positive outcomes reinforce the actor-critic interactions that led to them. The system also incorporates entropy, a statistical measure of randomness, which allows it to continue to explore possible manoeuvres rather than settling prematurely on a single solution. This helps the system remain adaptable in uncertain environments.
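The entropy term can be made concrete: it is the Shannon entropy of the actor's probability distribution over manoeuvres, which SAC adds to its reward so that the policy is rewarded for staying exploratory. The four-action distributions below are illustrative, not taken from the paper.

```python
import math

def entropy(probs):
    """Shannon entropy of a probability distribution over actions."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# A policy spread evenly over four manoeuvres keeps exploring;
# a near-deterministic one has committed almost entirely to one action.
exploratory = entropy([0.25, 0.25, 0.25, 0.25])
committed = entropy([0.97, 0.01, 0.01, 0.01])
print(round(exploratory, 3), round(committed, 3))
```

Because the exploratory policy scores higher, an entropy bonus nudges the learner away from settling prematurely on a single solution, which is exactly the behaviour the article describes.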

The model also incorporates a self-organising cluster mechanism inspired by the collective movement of a flock of birds, which famously avoids mid-air collisions. At close range, a mathematically defined repulsion force pushes vehicles apart to prevent impact. At medium distances, a velocity calibration rule aligns speed with an ideal braking curve to reduce the risk of rear-end collisions. Additional rules govern wall and obstacle avoidance. This layered design allows multiple autonomous vehicles to coordinate their movements without relying on a single lead vehicle.
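A close-range repulsion rule of this flocking-inspired kind can be sketched as follows. The inverse-distance form and the parameters are illustrative assumptions, not the paper's exact formulation.

```python
def repulsion(pos_a, pos_b, safe_range=5.0, gain=1.0):
    """Repulsive force on vehicle A exerted by vehicle B.

    Zero beyond `safe_range`; grows as the gap closes, always pointing
    from B towards A so that the vehicles are pushed apart.
    """
    dx = pos_a[0] - pos_b[0]
    dy = pos_a[1] - pos_b[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist >= safe_range or dist == 0:
        return (0.0, 0.0)
    strength = gain * (1.0 / dist - 1.0 / safe_range)
    return (strength * dx / dist, strength * dy / dist)

print(repulsion((0, 0), (2, 0)))   # pushed away along -x
print(repulsion((0, 0), (10, 0)))  # out of range: no force
```

Summing such pairwise forces over all nearby vehicles gives each car a collision-avoiding nudge without any lead vehicle coordinating the group.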

Ma, Y., Qian, Y., Ma, T., Li, Y. and Wan, J. (2025) 'Intelligent obstacle avoidance control method for autonomous vehicles based on improved SAC algorithm', Int. J. Vehicle Design, Vol. 99, No. 5, pp.1–19.
DOI: 10.1504/IJVD.2025.151524

A study in the Journal of Business and Management has shown that self-esteem plays an important part in determining whether someone wishes to pursue a leadership role. The findings have implications for both organisational success and career development, underscoring, as they do, how self-esteem affects personal motivation.

The research suggests that self-esteem affects a person's regulatory focus, a psychological framework that influences how individuals approach challenges and goals. There are two main types of regulatory focus: promotion focus and prevention focus. Promotion focus is characterised by a drive for growth, achievement, and opportunity-seeking. In contrast, prevention focus is concerned more with the avoidance of failure, staying safe, and fulfilling one's basic duties and no more.

Individuals with high self-esteem are more likely to be promotion focused, which then drives them to seek leadership roles. Those with lower self-esteem tend to lean towards prevention focus, which makes them less inclined to pursue leadership roles.

The effect is not solely down to the individual's personality, however. The work also showed that career encouragement and support from supervisors and peers can affect a person's focus and the motivational pathways they might take. Encouragement can boost the positive effects of promotion focus, motivating individuals to pursue leadership. However, for those with lower self-esteem, encouragement can have the opposite effect, reinforcing their reluctance to take on leadership responsibilities due to their prevention focus. The research thus highlights a need to consider individual psychological states when offering career support so that talented people who have leadership potential are not lost to those roles because of their lower self-esteem.

The team adds that unlike static predictors, such as personality traits or gender, regulatory focus can be affected by one's experiences and external support. This makes it a more pliable characteristic that might be influenced to the person's benefit through good career development advice for those with the potential for leadership.

Guo, J. (2025) 'Regulatory theory and career encouragement in explaining leadership aspiration', J. Business and Management, Vol. 30, No. 2, pp.75–98.
DOI: 10.1504/JBM.2025.151596

Research into the COVID-19 crisis, which began in December 2019, suggests that although there was widespread loss and disruption, the international crisis also planted the seeds for grassroots innovation and resilience. A study in the International Journal of Entrepreneurial Venturing of one hundred initiatives that emerged in Belgium during the pandemic finds that when established institutions struggled to respond quickly, individuals and organisations were able to step up to create new economic and social value.

The research focuses on initiatives defined broadly to include both newly created ventures and existing organisations that adapted their activities. These ranged from informal mutual aid efforts to repurposed businesses and newly launched services. Some were started by people with no prior experience of entrepreneurship. Other initiatives were started by established entrepreneurs responding to the sudden changes in demand and regulation. What they shared was a capacity to adjust rapidly under pressure.

The pandemic created conditions of extreme uncertainty. Lockdowns and business closures, imposed to limit the spread of the virus, caused sharp falls in income, consumption, and investment. Many people perceived formal support systems as too slow or rigid to meet urgent needs. This gap became the space in which these initiatives emerged, often spontaneously and with limited resources.

The study looks at this kind of resilience and rather than treating it simply as endurance in the face of a crisis, defines it as a dynamic process of recovery, adjustment, and innovation. Resilience was, during the pandemic and in its aftermath, both the route through which initiatives developed and the results they produced. The researchers argue that action was not driven solely by compassion or urgency, but by the ability to reframe the crisis as an opportunity to meet unmet needs.

The study suggests that locally driven, resilience-based initiatives can complement government and aid responses, particularly in the early stages of a crisis. As such, for policymakers, the challenge is how to recognise and sustain such efforts without undermining their flexibility. We will face pandemics and other shocks in the future, and our ability to adapt and innovate in those conditions will be key to an effective disaster response.

Wuillaume, A., Ferritto, A. and Janssen, F. (2025) 'A note on resilience in the face of adversity when small droplets trigger big changes', Int. J. Entrepreneurial Venturing, Vol. 17, No. 3, pp.249–273.
DOI: 10.1504/IJEV.2025.151370

A study in the International Journal of Global Environmental Issues has looked at "ritualistic" hunting practices in eastern India. It finds that they are contributing markedly to a worrying decline in wildlife and forest health. The work raises difficult questions about how cultural traditions can coexist with modern conservation goals.

The research focuses on Jungle Mahal, a forested region in western West Bengal, where hunting remains an integral part of religious and social life for several communities, particularly the Santhal. Ritualistic hunting, defined in the study as the killing of wild animals for ceremonial rather than commercial or subsistence purposes, is shown to be placing increasing pressure on ecosystems that are inherently vulnerable.

West Bengal hosts a range of ecologically significant species, including pangolins, fishing cats, and diverse bird populations. Such animals play crucial roles in the functioning of the ecosystems across the region. They help to regulate prey populations, disperse seeds, and recycle nutrients, among other things. The study reports a clear reduction in wildlife richness, or biodiversity, and notes a marked decline in forest density in Jungle Mahal. It is worth noting that residents and hunters are well aware of these changes to their local environment; however, the paper reports, there is little inclination towards conservation.

Hunting in the region employs traditional techniques such as bow-and-arrow, traps, nets, and the use of smoke to flush animals from burrows. It occurs throughout the year, but intensifies during festival periods between March and June. During this period, large communal hunts with hundreds or even thousands of participants take place and huge numbers of animals are killed in a very short time.

India's Wildlife Protection Act of 1972 prohibits the hunting of wild animals, but the researchers found that enforcement is weak in remote forest areas. Awareness of conservation laws among local communities is limited, and illegal hunting continues unchecked. The study highlights the fact that there is great mistrust of authorities in such regions and a general perception that conservation policies are detrimental to indigenous values and livelihoods. It remains an open-ended question as to how this disconnection between culture and conservation might be remedied.

Baitalik, A., Bhattacharjee, T., Bera, D., Paladhi, A., Kar, R.R., Ojha, M., Hazra, A., Begum, M.D., Lohar, R., Karan, M. and Dandapat, R. (2025) 'Ritualistic hunting in selected districts of West Bengal (India): implications on wildlife diversity and conservation', Int. J. Global Environmental Issues, Vol. 24, No. 2, pp.85–117.
DOI: 10.1504/IJGENVI.2025.150931

Climate change and worsening environmental conditions have brought into sharp relief how we must reconcile development with sustainability. This issue is nowhere more starkly relevant than among the fastest-growing economies. Research in the International Journal of the Energy-Growth Nexus that examined the BRICS countries, Brazil, Russia, India, China and South Africa, suggests that investment in education and training might play a significant role in reducing environmental harm, a role that has often been overlooked.

The researchers analysed several years' worth of data from the BRICS countries. These nations account for a large proportion of the world's population, energy use, and greenhouse gas emissions. The analysis found a close relationship between higher levels of human capital and lower levels of environmental degradation, measured primarily through carbon emissions. Human capital refers to the stock of education, skills and knowledge embodied in a workforce, commonly captured through indicators such as schooling level and training.

According to the analysis, improvements in human capital are associated with reduced emissions across the BRICS economies. The results hold across several statistical techniques designed to address common problems in these kinds of studies, such as cultural and social differences, the differing impact of global shocks, and the two-way causality between growth and pollution.

The findings are rooted in endogenous growth theory, which holds that long-term economic progress depends on knowledge, innovation, and research rather than on physical inputs alone. In environmental terms, a more educated and skilled workforce is better able to develop and adopt cleaner technologies, improve energy efficiency, and comply with environmental regulations. Innovation, measured in this study by patent activity, is also associated with better environmental outcomes. This latter point reinforces the idea that technological progress can decouple growth from emissions.

The team adds that globalisation emerges as another factor associated with improved environmental quality. This phenomenon perhaps reflects technology transfer and the sharing of cleaner production methods across borders. Trade openness itself, however, has the opposite effect. More international trade means higher levels of environmental degradation in the BRICS countries. This is consistent with concerns that trade can encourage the expansion of pollution-intensive industries or the import of environmentally inefficient technologies.

As emerging economies continue to drive global growth and emissions, this study shows how education and training are key to climate and environmental strategy. Policies that open up access to high-quality education, raise average years of schooling, and support research and development could yield environmental benefits as well as an economic boost. There is a need, however, to improve trade policy and environmental regulation so that economic development does not come at the expense of environmental sustainability.

Sachan, A. and Pradhan, A.K. (2026) 'Examining the impact of human capital on environmental degradation in BRICS nations', Int. J. Energy-Growth Nexus, Vol. 1, No. 3, pp.201–218.
DOI: 10.1504/IJEGN.2026.151371

Public sector organisations in emerging economies could improve their performance and resilience by taking a more systematic approach to knowledge management, according to a review in the International Journal of Business Excellence.

The review examined research into how government institutions create, share and retain knowledge. It also considered why these practices are important to the institutions' ability to deliver their services, adapt to change, and withstand disruption. The main conclusion is that effective knowledge management should not be considered as a peripheral administrative exercise, but must be seen as an essential strategic component of governance.

Knowledge management refers to the systematic processes through which organisations generate, store, and use knowledge. This includes policy documents, reports, and databases. It also includes tacit knowledge, the experience, skills, and judgement of individual members of the institution. In the public sector, much of this kind of tacit knowledge can be lost through staff turnover and political change if it is not deliberately captured and shared in a timely manner.

Across the research papers covered in this IJBE review, there is strong evidence that public organisations that invest in knowledge management perform better. They are more able to innovate, they can respond more effectively to new social and economic challenges, and they can maintain continuity during periods of political and social upheaval. This kind of institutional resilience is strengthened by effective knowledge management.

However, the review also shows that too many public bodies rely on informal or fragmented approaches to knowledge management, which tends to limit its long-term impact. The underlying problem is often inadequate technological infrastructure: without digital platforms for storing information and enabling collaboration, an institution simply cannot practise effective knowledge management. In addition, cultural and organisational barriers often stymie efforts to share knowledge in institutions with rigid hierarchies, siloed departments, and low levels of trust among employees in different areas.

Good leadership is the decisive factor in overcoming these various obstacles. Indeed, the review found that ethically inclined and committed leaders who actively promote collaboration and learning can embed knowledge management into everyday practice. Technology helps but human factors such as motivation and skills can make all the difference.

Yshikawa-Arias, J.F. and Arana-Barbier, P.J. (2025) 'Knowledge management in the public sector of emerging economies: a literature review', Int. J. Business Excellence, Vol. 38, No. 6, pp.1–21.
DOI: 10.1504/IJBEX.2026.151398

Social media influencers have become a prominent part of modern advertising. They can shape how brands communicate with consumers and how people decide what to buy in ways that conventional marketing perhaps never achieved in the past. A review of research into this phenomenon published in the International Journal of Business Excellence suggests that the impact influencers have is now sufficiently well established that there is a need to study their commercial effectiveness, as well as looking into any ethical or regulatory questions that arise.

The study systematically examined peer-reviewed research in this area to assess what is known about influencer marketing and what information is lacking. Influencer marketing refers to the promotion of products, services or ideas by individuals who have built large and engaged followings on social media platforms. Unlike conventional celebrities, influencers are typically perceived as ordinary people who share their daily lives or specialist interests online. This quality fosters a degree of trust in what they say and what they promote that traditional scripted advertising and endorsements by actors, pop stars, and other well-known individuals might struggle to achieve.

Researchers consistently find that influencers affect consumer behaviour, particularly purchasing decisions. Many studies measure purchase intention, a term used to describe how likely a consumer is to buy a product after encountering marketing content. Influencers appear to shape purchase intention in several ways: through credibility, meaning how knowledgeable and trustworthy they seem; through attractiveness, encompassing both physical appeal and likeability; and through the fit between influencer and product, which refers to how closely an influencer's image aligns with the brand they promote.

Influencer endorsements might be referred to as electronic word of mouth, digitally mediated opinions that consumers often perceive as more authentic than traditional advertising. Beyond individual purchases, the research literature suggests that there is a broader effect on brand awareness, improved brand perception, and stronger engagement for the company being promoted with their target audience.

By reviewing what is considered to be a rather fragmented body of research, the paper suggests that influencer marketing is now a permanent feature of contemporary commerce, at least as permanent as any phenomenon might be in such a fickle world as marketing. The researchers say that there is now a need for a more context-specific analysis of this evolving industry, one that also takes into account the ethics, such as those surrounding children and vulnerable adults.

Trehan, U., Siddiqui, I.N. and Dewangan, J.K. (2025) 'Social media influencer marketing: a systematic literature review', Int. J. Business Excellence, Vol. 37, No. 4, pp.488–505.
DOI: 10.1504/IJBEX.2025.150870

Climate change and sustainability issues are high on the agenda, and the fashion industry is facing increased scrutiny over the environmental impact of its practices. Research in the International Journal of Sustainable Society has looked at how fast-fashion and luxury brands communicate their purported sustainability efforts. The findings reveal a sector grappling with both progress and persistent shortcomings, and suggest that consumers need to be alert to greenwashing by manufacturers.

The research analysed 42 scholarly and industry papers focusing on corporate social responsibility disclosures, website content, and other public reports. Corporate social responsibility refers to the ways in which companies report their efforts to act responsibly towards the environment, society, and stakeholders. The study highlights a growing tension between brand messaging and actual environmental impact, particularly in the form of what is often called "greenwashing". Whereas whitewashing is a metaphor for painting over problems, greenwashing refers to companies exaggerating or misrepresenting their environmental credentials and the eco-friendliness of their products.

Experts argue that greenwashing is symptomatic of a larger issue: the absence of clear, enforceable standards defining sustainable fashion. In other sectors, such as the food industry, terms such as "organic" are strictly regulated, but in the fashion industry, claims of sustainability are neither monitored nor regulated in the same way. This regulatory gap allows companies to gain reputational benefits without verifiable proof, placing the onus on consumers to check their green credentials before buying.

The IJSS paper recommends various measures that could be used to improve transparency and accountability, including obtaining third-party certifications, sharing detailed production processes, and educating consumers on the complexities of sustainable clothing. Of course, there are obstacles in that overproduction and continuous consumption underpin the fashion economy, making the notion of sustainability difficult to achieve.

It is suggested that regulatory oversight could both protect consumers and encourage systemic reform in the industry. For consumers, policymakers, and industry professionals, there is a need for critical assessment of sustainability claims and for structural reform that will help the industry achieve meaningful environmental responsibility.

Zaidi, A.A. and Gandhi, A. (2025) 'Green or green washing? A review paper on the current state of sustainability of fashion brands', Int. J. Sustainable Society, Vol. 17, No. 4, pp.334–354.
DOI: 10.1504/IJSSOC.2025.150884

A study of business school graduates, published in the International Journal of Management Concepts and Philosophy, challenges the widely held belief that such graduates enter the workforce ill-prepared for the world of work. The finding is based on in-depth interviews with employers in Estonia, and the research takes a broader view than earlier studies, which tended to focus on specific skills such as communication, leadership, and problem-solving. Fundamentally, the research found a much smaller gap between what business schools provide and what employers want than is commonly assumed.

The paper explains that employers cited three main factors supporting this conclusion. First, holding a business degree itself serves as a reliable signal of employability. The credential indicates a graduate has the capacity to learn and adapt, traits employers value highly. Secondly, any deficiencies in technical knowledge or practical experience can often be addressed through on-the-job training. Thirdly, the qualities employers seek, such as adaptability, critical thinking, and ethical awareness, largely align with what business schools already cultivate in their students.

The research has implications for higher education leaders who may have been developing new curricula on the basis of a misconception. The work suggests that a complete overhaul of business school programmes is not needed. They might better focus on improving how their courses develop a graduate's ability to learn continuously. This would then allow employees to adapt as job requirements evolve. For students, the study reinforces the value of a business degree not only as a basic academic credential but also as the foundation for their ongoing professional development.

Of course, the work focused on graduates and employers in Estonia. Future work might increase the sample size, adjust the interview methodology, and widen the reach of the work to other countries.

Örtenblad, A., Koris, R. and Kerem, K. (2026) 'The much-discussed gap between employers' demands and business school graduates' competence: an intriguing finding', Int. J. Management Concepts and Philosophy, Vol. 19, No. 5, pp.1–20.
DOI: 10.1504/IJMCP.2026.151273

Farmers in Kazakhstan's steppe region do not base production decisions on potential profit alone; they weigh expected income against the risk of economic volatility. That's according to research in the International Journal of Business Information Systems, which has examined agricultural decision-making in the region. The researchers analysed detailed farm survey data and found that a combination of government subsidies and farmers' attitudes toward risk also shapes the structure of agricultural production.

Risk aversion is a preference for more predictable outcomes over uncertain but potentially higher returns. In the agricultural context, this usually involves diversification: spreading activities across different crops or livestock to reduce the likelihood that a single shock, such as a failed harvest or price drop, will severely damage income. The study confirms that diversification remains a central strategy for Kazakh farmers. It also found that even a limited degree of diversification into complementary activities could reduce risk, especially for farms with limited resources.
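To make the risk-reduction logic concrete, here is a minimal sketch (not the authors' model) of how spreading income across two weakly correlated activities lowers overall volatility; all the income figures are hypothetical placeholders.

```python
import numpy as np

# Hypothetical figures: expected income and volatility for two farm
# activities (e.g. wheat and cattle) whose shocks are weakly correlated.
def portfolio_risk(weights, means, cov):
    """Return expected income and income standard deviation for a mix."""
    weights = np.asarray(weights)
    expected = float(weights @ means)
    std = float(np.sqrt(weights @ cov @ weights))
    return expected, std

means = np.array([100.0, 90.0])   # expected income per activity
stds = np.array([30.0, 25.0])     # income volatility per activity
corr = 0.2                        # weak correlation between shocks
cov = np.array([[stds[0]**2, corr * stds[0] * stds[1]],
                [corr * stds[0] * stds[1], stds[1]**2]])

_, risk_single = portfolio_risk([1.0, 0.0], means, cov)  # all-in on one crop
_, risk_mixed = portfolio_risk([0.5, 0.5], means, cov)   # 50/50 split
# The mixed farm earns slightly less on average but carries notably less risk.
```

As long as the two income streams are not perfectly correlated, the mixed portfolio's standard deviation falls below that of the single activity, which is the statistical core of the diversification strategy the study describes.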

The research found a marked difference between outcomes from crop and livestock production. Crop farming generally produces higher and more stable returns under current market conditions and needs less government support. Beef and dairy farming, on the other hand, often rely heavily on subsidies to remain viable.

The researchers point out that subsidies do more than simply raise income: they affect how willing a farmer might be to engage in riskier activities that could be beneficial in the long term. While the work focused on Kazakhstan, it could have similar implications for other developing regions where governments actively intervene to stabilise food supplies and farm incomes. Of course, it is worth noting that state support does introduce additional uncertainties, since subsidy schemes and price supports can change abruptly with policy shifts.

Mussina, G., Kussaiynov, T., Kadrinov, M., Sarsembayeva, G. and Assilov, B. (2025) 'Searching for a risk-efficient production structure on crop-livestock farms', Int. J. Business Information Systems, Vol. 50, No. 8, pp.22–37.
DOI: 10.1504/IJBIS.2025.151328

Digital payments, online banking, investment apps, and automated credit assessments have become routine parts of our everyday financial lives. A study in the International Journal of Business Information Systems argues that because of this the money management skills we need have changed fundamentally.

Financial literacy, the research suggests, is no longer simply about budgeting or understanding interest rates. We also need digital skills, psychological preparedness, and the ability to make sensible economic decisions when faced with always-on apps and notifications. This means we need improved financial education that takes digital tools into account, so that no one is excluded for lack of understanding or access.

The study has reviewed the academic research and found that access to digital financial tools does not automatically lead to better financial outcomes. While financial technology, often referred to as "fintech", promises convenience and wider access to services, it also exposes users to unfamiliar risks. Such risks might include online fraud, predatory lending, and inappropriate investment opportunities. People who lack confidence with digital systems are particularly vulnerable.

The work found that digital competence, the ability to use digital tools effectively and safely, can change financial behaviour by affecting a person's perceived control. In practice, this means that people who feel capable and in control can use their technical skills to make better financial decisions. That said, even when individuals have access to digital services and the skills to use them, positive results depend on their motivation, self-confidence and their sense of agency.

In modelling the findings from their review, the team saw a reciprocal relationship between motivation and capability. Stronger skills build confidence and intention, while higher motivation encourages additional skill development. The implication is that initiatives focusing solely on technical training may not work well; a component of behavioural nudging is also needed to help deliver better results.

Putri, A.M., Wiryono, S.K., Damayanti, S.M. and Rahadi, R.A. (2025) 'Exploring digital financial literacy through the lens of planned behaviour theory and technology acceptance model', Int. J. Business Information Systems, Vol. 50, No. 8, pp.1–21.
DOI: 10.1504/IJBIS.2025.151330

Global shipping has a large carbon wake, and the industry is pushing to reduce emissions. One approach has been to turn to ammonia, a carbon-free alternative fuel. Research in the International Journal of Shipping and Transport Logistics, however, warns that international maritime law has not kept pace with the speed at which ammonia-powered vessels are being designed, tested, and promoted. This, the researchers suggest, might leave safety and liability questions unresolved, which could stall the transition to cleaner shipping.

Shipping accounts for a significant three percent of global carbon dioxide emissions. In 2023, the International Maritime Organization (IMO), the United Nations body that regulates global shipping, adopted a strategy committing the sector to net-zero greenhouse gas emissions by 2050. The aim is to achieve substantial reductions by 2030 and 2040. Achieving these goals will require a large-scale shift from heavy fuel oil to zero-carbon fuels.

Ammonia is a promising candidate. It contains no carbon, so releases no carbon dioxide when used as a fuel. It can also be produced efficiently using renewable electricity rather than fossil fuels. Indeed, it is anticipated that the use of ammonia as a fuel will expand rapidly in the next few years. An additional benefit of ammonia over hydrogen is that it can be liquefied at more moderate temperatures and pressures, which means it is compatible with much of the existing infrastructure used to transport liquefied gases over long distances. Hydrogen as a fuel would require entirely new transport and storage infrastructure.

Despite the many advantages of ammonia as a fuel, there are legal and safety complications. Ammonia is a highly toxic and corrosive substance, and its poor combustion characteristics create engineering challenges that require specialised engine designs to ensure reliable ignition and efficiency.

Ammonia has been transported by sea for many years and the regulations around its transport are well established but do not account for it actually being used as a fuel on ships. The International Gas Carrier Code sets standards for ships carrying ammonia as cargo, while the International Code of Safety for Ships Using Gases or Other Low-Flashpoint Fuels was developed primarily with liquefied natural gas (methane) in mind and offers no specific guidance for ammonia as a fuel.

The regulatory gaps and internal legal inconsistencies urgently need to be closed so that shipbuilders, operators, flag states, and port authorities can have certainty in the building and use of ammonia-fuelled vessels.

Choi, J. and Lim, S. (2026) 'Legal challenges and regulatory improvements regarding ammonia as an alternative marine fuel or cargo', Int. J. Shipping and Transport Logistics, Vol. 22, No. 5, pp.1–27.
DOI: 10.1504/IJSTL.2026.151317

University libraries hold vast collections of scholarly work, yet most academic books are borrowed only a handful of times each year. A study in the International Journal of Information and Communication Technology suggests that the problem lies less in library logistics than in the lack of a sophisticated recommendation system available to readers. The team behind the research has developed a new approach to library recommendation systems, replacing static models with one that adapts to readers' changing learning needs.

For decades, most library and commercial platforms have relied on collaborative filtering, a technique that recommends items based on aggregated past behaviour, such as borrowing or purchasing patterns. While effective at scale, the method treats readers as having a fixed profile. It ignores the level of difficulty of material relative to a reader's ability. Moreover, it does not work well with a cold-start, where little data exist for new users or new books. This latest research suggests that overcoming such limitations could open up knowledge to more readers and stop those books gathering dust on the library shelves.

The new system models readers as learners whose knowledge changes over time. It uses a gated recurrent unit, a form of neural network designed for time-series data. This tracks changes in a reader's mastery of a subject and so can produce what the researchers refer to as a continuously updated "cognitive state matrix", reflecting what a reader is likely to understand at any given moment in their education or research.
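As an illustration of the mechanism, here is a toy sketch of a single gated recurrent unit update step, showing how a hidden state vector could be revised after each borrowing event. The weights are random placeholders, not the authors' trained model, and the feature and state sizes are arbitrary.

```python
import numpy as np

# Toy GRU step: updates a reader's hidden "cognitive state" from a new
# borrowing event. Weights are random placeholders for illustration.
rng = np.random.default_rng(0)
d_in, d_h = 4, 8                       # event features, state size

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Weight matrices for update gate (z), reset gate (r), candidate state (n)
W = {g: rng.normal(0, 0.1, (d_h, d_in)) for g in "zrn"}
U = {g: rng.normal(0, 0.1, (d_h, d_h)) for g in "zrn"}

def gru_step(h, x):
    z = sigmoid(W["z"] @ x + U["z"] @ h)        # how much old state to keep
    r = sigmoid(W["r"] @ x + U["r"] @ h)        # how much history to consult
    n = np.tanh(W["n"] @ x + U["n"] @ (r * h))  # candidate new state
    return (1 - z) * n + z * h

state = np.zeros(d_h)                        # reader starts with a blank state
for event in rng.normal(size=(5, d_in)):     # five borrowing events, in order
    state = gru_step(state, event)
```

The gating means each new event only partially overwrites the state, which is what lets such a model track gradual changes in a reader's mastery rather than reacting wholesale to every loan.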

The team adds that their model incorporates behavioural signals, such as borrowing rhythms and search intent, and an environmental feedback mechanism that adjusts recommendations to balance a resource's difficulty against its popularity.

The approach was tested using real borrowing data from a university library. The team found improvements over established baselines in ranking quality and measured learning gains, while maintaining low response times compatible with live deployment.

Deng, F. (2025) 'Personalised book recommendation model for university libraries based on multi-factor knowledge tracking', Int. J. Information and Communication Technology, Vol. 26, No. 50, pp.1–16.
DOI: 10.1504/IJICT.2025.151070

Research in the International Journal of Data Science has looked at how network security technologies can be integrated into the redesign of ordinary homes for older adults with a view to improving their quality of life.

The approach could offer an alternative to institutional care for members of an ageing population. The research suggests that conventional housing could be adapted for older residents with a new ethos that overcomes the limitations of earlier approaches, which could not address the complex and evolving risks associated with later-life living.

Home-based care of older people is commonly the preferred choice, but it is often stymied by interiors that were designed for younger, mobile individuals rather than those with reduced mobility, sensory impairment, or cognitive changes. The work suggests pairing network security systems that protect personal data with interconnected sensors and monitoring systems that manage risk, detect hazards, and respond to changes in a resident's condition or environment.

The researchers have considered a standard two-bedroom flat and tested an approach that combined a practical physical layout with intelligent monitoring. The redesigned interior used networked sensors to identify potential dangers, support adaptable layouts, and define functional zones that could change according to daily routines and care needs.

The work highlights how such technology can be integrated into the home without detracting from domestic comfort or visual aesthetics, which are also important to quality of life. The modified flat demonstrated improvements not only in safety but also in usability and overall livability.

The findings have implications for social policy and public finance. Safer, more adaptable homes could allow older adults to remain independent for longer, reducing society's reliance on residential care facilities. This could thus reduce the pressure on public care budgets and pension schemes. The next step will be to look at other forms of housing and to investigate whether the approach is scalable.

Yu, T. (2025) 'Design and transformation of the interior space for home-based care for the aged based on network security', Int. J. Data Science, Vol. 10, No. 7, pp.1–15.
DOI: 10.1504/IJDS.2025.151177

The interaction of silver compounds with light is well known as the basis of film photography. But there are much more sophisticated interactions when we consider very, very small particles of silver, and these could have applications in a wide range of technologies.

Research in the International Journal of Nanoparticles has looked at the behaviour of the tiniest of silver particles, just billionths of a metre in diameter, when exposed to light. That behaviour differs depending on the exact size of the particles.

The team has modelled the absorption, scattering, and quenching of light of different wavelengths by silver nanoparticles from 10 to 240 nanometres in diameter. They found that the smaller nanoparticles primarily absorb light. This could be useful in boosting the photothermal effects used in targeted medical therapies for cancer.

By contrast, larger particles, rather than absorbing light, scatter it. This phenomenon might be used to make reflective coatings and solar energy capture devices.

Those particles that are of intermediate size, 40 to 60 nanometres, displayed a third type of behaviour, plasmonic resonance. In this phenomenon, the incident light causes the conducting electrons in the silver to oscillate. This action could be used to detect chemicals in medical or environmental samples, as the presence of chemicals of interest even in very low concentration will change the pattern of these oscillations.
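The size dependence described above can be illustrated with a back-of-the-envelope calculation in the quasi-static dipole (Rayleigh) approximation, where absorption scales with particle volume and scattering with volume squared, so scattering overtakes absorption as particles grow. This is not the authors' full model, and the permittivity value below is a hypothetical placeholder rather than measured data.

```python
import numpy as np

# Quasi-static dipole approximation for a small metal sphere in a medium.
# The dielectric constant for silver here is an illustrative placeholder.
wavelength = 500e-9                      # metres (visible light)
k = 2 * np.pi / wavelength               # wavenumber
eps_silver = -9.0 + 0.3j                 # placeholder permittivity of silver
eps_medium = 1.0                         # surrounding medium (vacuum/air)

def cross_sections(radius):
    """Absorption and scattering cross-sections of a small sphere (m^2)."""
    # Dipole polarisability of a sphere (Clausius-Mossotti form)
    alpha = 4 * np.pi * radius**3 * (eps_silver - eps_medium) / (
        eps_silver + 2 * eps_medium)
    c_abs = k * alpha.imag                       # scales with volume (a^3)
    c_sca = k**4 / (6 * np.pi) * abs(alpha)**2   # scales with volume^2 (a^6)
    return c_abs, c_sca

small = cross_sections(10e-9)    # 10 nm radius: absorption dominates
large = cross_sections(100e-9)   # 100 nm radius: scattering dominates
```

Because the scattering term grows as the sixth power of radius while absorption grows only as the cube, the crossover from absorbing to scattering behaviour with increasing size falls naturally out of even this crude model; the full Mie treatment used in the paper refines the picture for the larger particles.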

This new understanding of the behaviour of silver nanoparticles could thus open up a range of applications in medical, chemical, and environmental research. The team adds that not all silver nanoparticles are created equal, and this too could be useful in different technologies. For example, silver nanoparticles with complex geometries, such as internal layers like an onion, or hollow cores, might behave differently again. Their model could open up the exploration of such complex silver nanoparticles, which might be even more amenable to fine-tuning for specific applications.

Lamsiah, A., Atmani, E.H., Meziane, J., Fazouan, N. and Oumouloud, M. (2026) 'Influence of particle size on optical scattering properties of silver nanoparticles', Int. J. Nanoparticles, Vol. 15, No. 5, pp.1–17.
DOI: 10.1504/IJNP.2026.151259

Entrepreneurial success can emerge through the gradual development of reflexive decision-making rather than linear planning or favourable starting conditions, according to research in the International Journal of Management and Enterprise Development. The research looked at how a business moved from stalled operations to sustained competitiveness by navigating structural constraints in Britain's health and social care market over more than a decade.

The study follows a single enterprise, a London-based social enterprise founded by an African refugee woman, over the course of thirteen years. The research was a longitudinal case study that tracked change over an extended period of time rather than capturing a simple snapshot of activity at a specific moment. Moreover, it is grounded in a critical realist framework, which examines how an individual organisation operates within, and is shaped by, wider social and institutional structures. Central to the analysis is the notion of reflexivity, defined as the internal process through which an individual evaluates their circumstances, reassesses their goals, and adjusts their actions in response to changing conditions.

In their case study, the team notes an early period of fractured reflexivity: social ambition was strong, but strategic focus was limited, and measurable performance outcomes were absent. Progress followed only as the entrepreneur developed autonomous reflexivity, enabling more disciplined decision-making, engagement with local business networks, and ultimately the establishment of operational credibility.

As the enterprise matured, communicative reflexivity became increasingly important. Dialogue with public-sector bodies improved the enterprise's standing and opened up access to competitively funded contracts. There was also gradual recognition within London's regulated health and social care system. This later phase coincided with the building of reputation, quality certifications, and even national awards, all of which further supported access to the market.

More recently, the entrepreneur has demonstrated what we might call meta-reflexivity, continually evaluating the enterprise's social mission alongside its financial performance. She has reinvested profits into free training programmes for refugee women, embedding social value creation directly into the business model while maintaining commercial viability.

Given that conventional narratives often frame refugee entrepreneurs in terms of barriers and vulnerabilities, this case study demonstrates that refugee entrepreneurship can be framed far more positively within broader debates on migration, urban economies, and demographic change.

Mutiganda, J.C. (2026) 'Understanding the process of starting up and managing the performance of a refugee enterprise: a critical realist case study', Int. J. Management and Enterprise Development, Vol. 25, No. 5, pp.1–17.
DOI: 10.1504/IJMED.2026.151258

A novel facial expression recognition system designed to overcome the conflict between accuracy and real-world use is discussed in the International Journal of Applied Pattern Recognition. The approach performs well while remaining computationally lightweight and addresses one of the main challenges facing emotion-aware technologies for vehicles, consumer devices, and healthcare applications.

Facial expression recognition involves classifying human emotions based on a visual analysis of the face. It has benefited from deep learning technologies that use multilayered neural networks to examine an image, but such technologies generally require a lot of computational power. The new work combines classical image analysis with a streamlined deep-learning architecture that preserves performance while lowering computational requirements.

The team has used a convolutional neural network, a type of model well suited to image processing. Rather than learning solely from training data, the system also draws on traditional texture descriptors, local binary patterns (LBP) and grey-level co-occurrence matrix (GLCM) features. Combining these well-established computer vision techniques with the neural network's outputs allows the system to analyse fine-grained facial detail at low computational cost.
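As an illustration of the kind of texture descriptor involved, here is a minimal local binary pattern implementation; it is a simplified 8-neighbour sketch for clarity, not the authors' exact pipeline.

```python
import numpy as np

# Minimal 3x3 local binary pattern (LBP): each pixel becomes an 8-bit
# code recording which of its eight neighbours are at least as bright.
def lbp(image):
    image = np.asarray(image, dtype=float)
    h, w = image.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    # Clockwise offsets of the 8 neighbours around a centre pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = image[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(int) << bit
    return codes.astype(np.uint8)

patch = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])
code = lbp(patch)   # one 8-bit texture code for the single interior pixel
```

A histogram of these codes over a face region gives a compact, illumination-robust texture signature, which is the sort of cheap, hand-crafted feature a hybrid system can feed alongside learned CNN features.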

The team has tested the approach using two benchmark data sets, large collections of facial images annotated for emotional content. The system achieved recognition accuracies of almost 80 per cent on one and almost 87 per cent on the other. Real-world tests on still images, recorded video, and live camera feeds also showed how well the system can perform in real time.

Such work is part of a broad area known as affective computing, the discipline concerned with recognising and responding to human emotion. By showing that hybrid designs can offset the computational resource demands of deep learning, the work opens up the possibility of developing emotion recognition that can be integrated into public infrastructure, mobile devices, and clinical environments for a wide range of applications.

Zhang, X. and Yan, C. (2025) 'Face expression classification and recognition based on LBP+GLCM features and attention mechanism in CNN', Int. J. Applied Pattern Recognition, Vol. 8, No. 1, pp.1–15.
DOI: 10.1504/IJAPR.2025.150992

A review in the International Journal of Business Excellence of half a century of scholarship has found that academic interest in why migrants return to their countries of origin has expanded sharply over the past decade. The review reframes return migration as a central feature of the global circulation of skills, rather than a marginal or corrective movement.

The researchers studied 375 peer-reviewed papers published during the period 1972 to 2022. The work thus offers the most comprehensive mapping to date of how this field of social science has evolved in recent decades. The study used bibliometric analysis, a quantitative method that examines patterns in academic publishing such as citation trends, collaboration networks, and thematic clustering. The analysis revealed steady growth in output, with publication rates rising particularly quickly after 2010. Total citations increased continuously, but the average citations per article declined from 2015 onwards. The authors suspect that this change was down to rapid diversification and specialisation within the field at that time.

They point out that high-ranking journals in migration studies, business, and management dominated the output, as one might expect. This, they suggest, highlights the relevance of return migration to organisational strategy, economic performance, and institutional governance. Scholarly leadership is concentrated in Canada, Spain, the UK, and the USA, although many papers have international authorship.

The review also shows that the focus in this area of research has changed. In the early years covered by the review, research largely addressed aggregate population movements, demographic change, and macro-level migration flows. However, in the two most recent decades covered, research has moved towards the lived experience of return. Gender emerges as a central analytical category, while education, particularly higher education and international student mobility, forms a core thematic pillar. The team believes that this reflects a growing engagement with human capital theory, an economic framework that views education and skills as investments shaping productivity and earnings.

Yadav, M., Kumar, M., Dagar, M., Tiwari, N.K., Pandey, A. and Amoozegar, A. (2025) 'Revisiting return migration: literature insights and a bibliometric perspective on emerging global mobility trends', Int. J. Business Excellence, Vol. 37, No. 7, pp.1–26.
DOI: 10.1504/IJBEX.2025.150979

A new forensic framework designed specifically for the Internet of Things (IoT) is discussed in the International Journal of Electronic Security and Digital Forensics. This deep learning-driven system offers benefits over earlier approaches in detecting and reconstructing cyberattacks on components of the vast network of connected sensors, appliances and machines. It achieves an accuracy of almost 98 percent, according to the researchers, and cuts analysis time by more than three quarters.

There has been a sharp rise in malware aimed at IoT environments. Standard digital forensics tools struggle in this space with the volume, diversity, and the enormous and constant flow of data. The researchers suggest that existing methods, built for relatively static computers and servers, are increasingly mismatched to the IoT world. Given that IoT systems now underpin transport networks, domestic technologies, and urban infrastructure, they will be increasingly vulnerable unless security systems can keep up.

At the heart of this new approach is a hybrid deep learning model that combines a convolutional neural network, which identifies patterns in the data, with a long short-term memory (LSTM) architecture, which tracks how those patterns develop. When applied to IoT network traffic, the system can detect the subtle signatures of a cyberattack as they evolve over time, rather than simply spotting isolated events.

The team has improved performance by refining the detection approach with so-called particle swarm optimisation. This technique was inspired by collective behaviour in nature, such as starling murmurations and honeybee swarming. It can dynamically adjust the detection parameters to home in on the optimal approach without heavily increasing computational cost. This is particularly important for protecting IoT devices, many of which operate with limited processing power and low energy budgets.
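The paper does not publish its code, but the idea behind particle swarm optimisation is easy to sketch: candidate parameter settings ("particles") roam the search space, each pulled towards its own best result and the swarm's best result so far. The objective function, constants, and parameter ranges below are illustrative stand-ins, not the authors' detection model:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(objective, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0)):
    """Minimise `objective` with a basic particle swarm."""
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    vel = np.zeros((n_particles, dim))              # particle velocities
    pbest = pos.copy()                              # each particle's best position
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()        # swarm's best position

    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: keep momentum, drift towards personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy stand-in for a detector's error surface: the sphere function, minimum at the origin.
best, best_val = pso(lambda x: float(np.sum(x**2)), dim=3)
print(best_val)  # converges close to 0
```

In a forensic setting, the objective would instead score detection error for a given set of model hyperparameters, but the swarm mechanics are the same.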

Tests conducted across simulated vehicle networks, smart homes, and smart city infrastructures showed that the model outperforms existing forensic tools: it is not only faster and more accurate, but can also trace and classify multiple forms of cyberattack.

Almadud, W. and Al-Shargabi, A.A. (2026) 'Efficient digital forensics in the IoT environment: a hybrid framework using deep-federated learning', Int. J. Electronic Security and Digital Forensics, Vol. 18, No. 7, pp.1–33.
DOI: 10.1504/IJESDF.2026.150991

Urban roof gardens can help with removal of atmospheric pollutants at measurable, controllable rates, according to a study in the International Journal of Environment and Pollution. The research suggests that rather than simply being decorative, recreational features, such gardens can become part of an active and living environmental infrastructure.

The team report that a dynamically managed rooftop system can be established to absorb hazardous fine particulate matter (PM2.5, airborne particles smaller than 2.5 micrometres) from the cityscape at a rate of about 42.5 micrograms per square metre per hour. It can also absorb nitrogen oxides (NOx, toxic combustion gases) at a rate of 15.6 micrograms per square metre per hour.

The work begins to address growing concern that conventional urban greening, typically static plantings designed for visual appeal, has limited capacity to respond to pollution or climate change. More adaptive and responsive planting, on the other hand, used to construct layered plant communities in roof gardens, could be functional as well as aesthetic. The team suggests that by grouping species together according to their known capacity to absorb different pollutants, it should be possible to address the problem of different contaminants in the same growing patch. The researchers carried out tests in an environmental chamber and found that such coordinated but mixed planting can be more effective than single-species approaches, given the common mix of urban pollution.

The work also demonstrated that using a lightweight, bioactive growing substrate containing activated carbon (to absorb pollutants) and vermiculite (for aeration and moisture retention), such planting could improve the rate of pollutant mineralisation.

Guo, R. and Xiao, Z. (2025) 'Roof garden plant selection and ecological application: comprehensive strategies to deal with environmental pollution', Int. J. Environment and Pollution, Vol. 75, No. 4, pp.338–360.
DOI: 10.1504/IJEP.2025.150943

A set of indicators known as biomarkers, natural chemicals found in the blood, can help predict when an aneurysm in the brain might rupture. The work, published in the International Journal of Data Mining and Bioinformatics, looks at the risk of rupture associated with the ballooning out of a weakened blood vessel in the brain, which can lead to catastrophic bleeding. By analysing genetic data from three groups of patients, the team has identified characteristics associated with increased instability of an aneurysm.

The researchers used genetic profiling to look at activity associated with stable and ruptured aneurysms, as well as interactions between proteins that were linked to the latter and not the former. Across all the data, they found two interactions that were active in aneurysms prone to rupture. Then, by using machine learning techniques, specifically Least Absolute Shrinkage and Selection Operator (LASSO) regression, they were able to develop a prediction curve that gives a patient's rupture risk based on the presence of the biomarkers.
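LASSO regression is well suited to biomarker selection because it shrinks the coefficients of uninformative features to exactly zero. The minimal NumPy sketch below, using coordinate descent on synthetic data, illustrates that behaviour; the data, feature count, and penalty value are invented for illustration and have nothing to do with the study's clinical cohorts:

```python
import numpy as np

def soft_threshold(z, g):
    """Shrink z towards zero by g; values inside [-g, g] become exactly zero."""
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def lasso_cd(X, y, lam, iters=200):
    """LASSO via cyclic coordinate descent: min ||y - Xb||^2 / (2n) + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X**2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            # Partial residual: remove every feature's contribution except feature j.
            resid = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ resid / n
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
    return beta

# Synthetic data: only features 0 and 3 truly matter (stand-ins for predictive biomarkers).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(200)
beta = lasso_cd(X, y, lam=0.1)
print(np.round(beta, 2))  # sparse: the uninformative coefficients shrink to zero
```

The surviving non-zero coefficients play the role of the selected biomarkers, which could then feed a downstream risk score.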

The findings highlight an underlying mechanism that links chronically elevated blood pressure (hypertension) and inflammation in the vasculature of the brain. Hypertension puts mechanical strain on the walls of the blood vessels, while, at the same time, activating a hormone-driven regulator of blood pressure, known as the local renin-angiotensin system. This system triggers inflammation and can weaken blood vessels. The research suggests that those genes associated with these biological systems come together to increase a person's risk of a ruptured aneurysm. As such, they also now become targets for the development of novel therapies that are aimed at reducing mechanical stress in the brain's blood vessels as well as lowering local inflammation.

This new understanding might improve the medical outcome for at-risk patients as well as precluding unnecessary medical intervention for those at lower risk who happen to have other risk factors. Given the prevalence of intracranial aneurysms and the high morbidity associated with rupture, such strategies could shift management from reactive emergency treatment to proactive, targeted prevention.

Liu, J-Y., Yuan, J., Luo, L. and Yin, X. (2026) 'Hypertension-driven mechano-immune crosstalk related novel genes may be potential targets for IA rupture progression', Int. J. Data Mining and Bioinformatics, Vol. 30, No. 5, pp.1–14.
DOI: 10.1504/IJDMB.2026.150996

Researchers have developed a new algorithmic model that can improve predictions of cooling demand for greener buildings. This kind of control will be a key factor in energy efficiency, allowing interior climate control systems to optimise cooling periods and so reduce energy demands.

The framework for the new model is based on a probabilistic neural network (PNN), which has been tested across varied climatic conditions. According to the research published in the International Journal of Environment and Pollution, it delivers accurate forecasts and quantifies the uncertainty in a way that conventional models do not.

Cooling systems account for a substantial proportion of a building's energy consumption in the hottest parts of the world. Their operation is dependent on outside temperature, humidity, building characteristics, and occupant behaviour. The standard control models usually assume linear relationships and so cannot capture the nonlinear dynamics of climatic variability and requirements. The PNN approach overcomes this problem by modelling the nonlinear relationships. This allows the system to understand the intricacies of the building-specific data and to provide better predictions to optimise climate control. The team was able to demonstrate almost 97 percent reliable control across various scenarios.

Such a system could be used by policymakers, developers, and energy managers hoping to optimise cooling in hot climates and to reduce the carbon footprint of air-conditioning systems. By providing a more subtle understanding of cooling load variability, the PNN allows for accurate data-driven decisions regarding system design, operational scheduling, and regulatory compliance. The team explains that plans can be put in place for both typical and extreme conditions with greater assurance, reducing energy waste while maintaining occupant comfort.

The same framework might have broader energy management use, allowing for short-term control as well as long-term planning of infrastructure in low-carbon developments. The construction industry must incorporate green systems, and such tools as PNN-managed climate control could play an important role in the development of sustainable buildings.

Zheng, H. and Wang, P. (2025) 'Predicting the cooling capacity of green buildings using probabilistic neural network models', Int. J. Environment and Pollution, Vol. 75, No. 4, pp.261–279.
DOI: 10.1504/IJEP.2025.150925

Diapers (nappies) and feminine hygiene products (menstrual pads and tampons) are emerging as a critical challenge in waste management. They account for a disproportionate share of municipal waste, according to work in the International Journal of Sustainable Society.

An analysis conducted across 31 Slovak cities showed that these products alone make up 10% of total mixed waste in both urban and rural settings. There have been recent efforts to improve waste reduction and recycling. However, addressing the environmental impact of this waste stream remains a significant challenge in the Slovak Republic and elsewhere and is hindering sustainability efforts in many places.

The research points out that diapers and sanitary products, though comprising a substantial proportion of waste, are not covered by current waste legislation. These items, made primarily of plastics and superabsorbent polymers, along with biological material after use, are difficult to recycle and in most places are sent to landfill or incinerated.

While their contents after use will degrade biologically, the materials from which they are manufactured might take centuries to decompose in landfills. While the global market produces billions of units annually, the lack of regulation and effective recycling for these products exacerbates the waste management issues, especially in the Slovak Republic, where recycling rates overall are below EU averages.

The study points to potential solutions, including composting, which has been shown to reduce the volume of diaper waste. However, these methods are limited by the non-biodegradable materials involved. Emerging technologies, such as vermicomposting and thermal pyrolysis, offer promising alternatives by recycling used diapers into usable materials. However, these technologies require proper infrastructure and legislative support to be fully effective.

Peterkova, V., Ilko, I., Martincova, R. and Preinerova, K. (2025) 'Analysis of municipal waste and management of baby nappies and sanitary napkins in the Slovak Republic', Int. J. Sustainable Society, Vol. 17, No. 4, pp.355–369.
DOI: 10.1504/IJSSOC.2025.150907

The modern network is a place where danger whispers rather than shouts. Corporate systems, public services, and critical infrastructure are increasingly complex and increasingly vulnerable to more subtle cyberattack. Where an old-school hacker might try brute-force techniques or an army of bots that pound the system until it breaks, modern threats can work more insidiously. They might masquerade as ordinary server traffic, draining resources or slowly siphoning off data, while the anti-malware systems and firewalls are focused on the brutes.

New intrusion-detection models are needed, according to the author of work published in the International Journal of Reasoning-based Intelligent Systems. While it is generally easy to hear the alarm bells ringing when the brutes are pounding the servers, the sinister-but-subtle attackers need a different approach, one that listens out for the whispers.

In the work, a new model, called ST-CCNet, promises this kind of protection. In tests against standard benchmarks, it identified covert attacks with 98.2 percent accuracy, outperforming existing approaches. More specifically, it was able to spot low-rate distributed denial-of-service (DDoS) attacks, botnet activity, and subtle web intrusions that had been designed to look like legitimate behaviour. The model can detect slow-burn attacks that exhaust server capacity over long periods, or threats that unfold over weeks or months. Such attacks have long been the nemesis of network security systems.

One part of the ST-CCNet system uses causal convolution to analyse traffic in temporal order, capturing tiny, momentary deviations that may appear only for microseconds but can mark the opening move of an attack. In parallel with this, a spatio-temporal transformer scans across much longer timescales, identifying patterns that only become meaningful when viewed in context, such as the rhythmic exchanges between compromised machines and their controllers.
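The causality property itself is simple to illustrate: each output depends only on the current and past samples, never future ones, so the filter can run on a live traffic stream. The toy filter and traffic trace below are invented for illustration and are not part of ST-CCNet:

```python
import numpy as np

def causal_conv1d(x, kernel):
    """1-D causal convolution: the output at time t depends only on x[t], x[t-1], ...
    kernel[0] weights the current sample, kernel[1] the previous one, and so on."""
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])   # left-pad: no peeking at the future
    return np.array([padded[t:t + k] @ kernel[::-1] for t in range(len(x))])

# Toy traffic trace: a flat baseline with one brief spike at t = 6.
x = np.zeros(10)
x[6] = 1.0
# A simple difference filter, y[t] = x[t] - 0.5*x[t-1], reacts only when the
# signal deviates from its recent past -- and only from t = 6 onwards.
out = causal_conv1d(x, np.array([1.0, -0.5]))
print(out)  # zeros before the spike, then 1.0 at t=6 and -0.5 at t=7
```

A real model stacks many such layers with learned kernels and dilation, but the one-sided dependence on the past is what lets momentary deviations be flagged as they happen.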

This balanced approach addresses the shortcomings of conventional security. By combining short-term acuity with long-term memory, ST-CCNet aligns with the way real sinister-but-subtle attacks operate.

Chi, W. (2025) 'Multidimensional covert traffic attack detection via coupled spatio-temporal transformer and causal convolutional networks', Int. J. Reasoning-based Intelligent Systems, Vol. 17, No. 12, pp.35–44.
DOI: 10.1504/IJRIS.2025.150501

Sustainable entrepreneurship in Nigeria is being stymied by a lack of engagement among business owners because of structural economic and institutional barriers, according to research in the World Review of Entrepreneurship, Management and Sustainable Development that has studied one of Africa's largest entrepreneurial ecosystems.

The research used quantitative data from 310 entrepreneurs across manufacturing, sales, and food services. An analysis of this data showed that unstable macroeconomic conditions, limited access to finance, weak technological infrastructure, and inconsistent government support are the main barriers faced by entrepreneurs hoping to adopt environmentally responsible business practices. Moreover, they found that many entrepreneurs operate under conditions in which immediate cash-flow pressures outweigh long-term environmental considerations. The result is that sustainability initiatives are difficult to get underway and even harder to maintain.

Entrepreneurs in Nigeria, the study found, are somewhat aware of sustainability principles, but currency volatility, high inflation, and unreliable public services restrain action. The researchers add that access to affordable credit remains limited, particularly for micro, small and medium-sized enterprises (MSMEs). Such companies, with fewer than 250 employees, form the backbone of the Nigerian economy. Without financial buffers or capital, investments in cleaner technologies or resource-efficient processes are often postponed indefinitely. There is thus an urgent need to improve conditions for entrepreneurs: to convince those less willing to engage that there are long-term benefits, and to nudge the more engaged further towards sustainability. Regulatory incentives and green technologies that have remained largely inaccessible to smaller companies need to be opened up to Nigeria's MSMEs.

There are obvious implications for other emerging economies facing similar constraints, which also risk missing out on the economic, environmental, and social benefits associated with sustainable enterprise. There is a need to align financial systems, policy instruments, and educational initiatives with sustainability objectives across the whole of the developing world, the research would suggest.

Ogbolu, G., Adelaja, A.A., Ohanagorom, M.I. and Shwedeh, F. (2025) 'Examining the inhibiting factors of sustainable entrepreneurship: evidence from emerging economies', World Review of Entrepreneurship, Management and Sustainable Development, Vol. 21, No. 6, pp.1–26.
DOI: 10.1504/WREMSD.2025.150508

The future of urban green space might be written in code, according to research in the International Journal of Reasoning-based Intelligent Systems. The age-old image of the landscape architect, sketchbook in hand, guided by intuition and a feel for the land, is being dug over by digital disruption. The work suggests that for city and town planners facing increasingly dense populations and the problems that climate change brings, the art of urban garden design needs reseeding with modern tools to fertilise new ideas.

Urban green spaces are now recognised as increasingly important for the recreation, enjoyment, and wellbeing of city dwellers. Moreover, such spaces, and in particular the protective effects of trees during scorching summers and the atmospheric cleansing they bring, are no longer an aesthetic luxury but an essential part of the modern cityscape. The concrete jungle needs to go green, and an algorithmic augmentation of human intuition can help balance the competing pressures in landscaping our urban spaces.

The researchers talk of "landscape optimization", wherein a green space or garden is not simply a canvas on which to paint trees, lawns and shrubberies, but a complex data problem that can be solved more effectively algorithmically without compromising art or beauty. The team, merging aesthetics and ecology, reframes the problem as a "rationality index", which considers the terrain profile, soil health, and the local climate to provide the computer with a unified metric it can interpret, and from which it can generate novel design solutions using various algorithms based on natural systems such as honeybee behaviour and ant colonies.

In preliminary tests, the team found that their hybrid algorithmic approach worked better than conventional methods used to calculate land-use efficiency. They emphasise that by treating landscape design as an optimizable process, city planners can produce evidence-based layouts that are reproducible, resilient, and reliable. While the immediate focus is on gardens, the implications for wider urban planning are significant. As public authorities face mounting pressure to meet sustainability targets, the "intuition" of the past may soon give way to the "optimization" of the future.

Cheng, Y., Guo, L., Ao, S. and Wu, W. (2025) 'Spatial layout design of garden landscapes based on a hybrid metaheuristic optimisation algorithm', Int. J. Reasoning-based Intelligent Systems, Vol. 17, No. 12, pp.13–23.
DOI: 10.1504/IJRIS.2025.150502

In terms of sustainability and competitiveness, modern agriculture depends on information across the whole of food production. Research in the International Journal of Agricultural Resources, Governance and Ecology has looked at how data, innovation, and collaboration shape farm performance in the face of growing climate change issues and under diverse market pressures. The work suggests that without knowledge frameworks, policies and technologies designed to improve resilience are likely to underperform.

The researchers show that information quality is a decisive factor linking farm-level decisions to wider economic and environmental outcomes. Data on production volumes, input costs, as well as resource use can be combined with national statistics and market intelligence to help farmers and policymakers to respond to price signals, supply chain disruption, and climate stress. Unfortunately, many farms still operate without formal accounting systems or even consistent record-keeping, which undermines informed decision-making. Moreover, a lack of detailed economic awareness might be limiting the capacity of many farms to adapt production in response to changing conditions.

The detrimental effects of this information gap are worsened by social and organisational factors within the sector. Farmers' associations, cooperatives, and informal networks can play a role in knowledge exchange, but many farmers do not make full use of such networks, with differences in uptake linked to farm size, education level, and age. The team adds that the retention of younger people in rural areas emerges as a major concern, as demographic decline threatens the sector's capacity to absorb new skills and sustain innovation over time.

The bottom line is that digitalisation, used systematically, rather than casually, might offer a structural shift in how agriculture is managed that could help overcome some of these problems. Digital systems can reduce wasted resources and wasted effort. With improved resource efficiency and decision-making supported by data in a sector where timing is often critical, farming practices might be improved. Ultimately, there is a need to embed this digitalisation within farming networks supported by leadership, coherent policy and trained personnel.

Figurek, A., Semenova, E., Thrassou, A., Semenov, A. and Vrontis, D. (2025) 'Innovative tools for the agricultural information system: a conceptual framework', Int. J. Agricultural Resources, Governance and Ecology, Vol. 20, No. 6, pp.19–36.
DOI: 10.1504/IJARGE.2025.150483