Research news

Electric rickshaws, or e-rickshaws, are becoming commonplace in India, according to researchers writing in the International Journal of Intelligent Enterprise. However, battery power requires an electricity supply, and that is a problem where sustainable energy sources are lacking. Ravinder Kumar and Ravindra Jilte of the School of Mechanical Engineering, at Lovely Professional University, in Phagwara, Punjab, India, and Mohammad Hossein Ahmadi of the Faculty of Mechanical Engineering, at Shahrood University of Technology, in Shahrood, Iran, outline efforts that might take us down the road to a green city.

The team has investigated the potential of biogas in the face of massive urban population growth in India's cities. They point out that there will be massive and growing demands on electricity supply as the number of people living in India's cities approaches the total population of the USA today. They add that the Smart Cities Mission launched in 2015 aims to undertake urban renewal and retrofitting with a welcome emphasis on integrated planning and the provision of urban services, including power, water, waste, and mass transportation. However, as with any infrastructure project there remain many obstacles to be surmounted.

Transportation is one of the most prominent of those obstacles, which is why the emergence of the e-rickshaw might represent an intriguing alternative to conventional modes of transport, especially if there is potential to make the supply of power to such "vehicles" sustainable and non-polluting. The team describes the results of their feasibility study on generating electricity for e-rickshaw recharging using biogas in their paper.

Kumar, R., Jilte, R. and Ahmadi, M.H. (2018) 'Electricity alternative for e-rickshaws: an approach towards green city', Int. J. Intelligent Enterprise, Vol. 5, No. 4, pp.333-344.
DOI: 10.1504/IJIE.2018.095721

These days, Moore's Law is not so much a scientific law as an aspiration. The notion that the number of components that can be squeezed on to the same area of integrated circuitry doubles every year was first observed in the mid-1960s by Gordon Moore, the co-founder of Fairchild Semiconductor and Intel. Ever since, the microelectronics industry has striven to keep pace with Moore's Law, although in some periods that annual doubling has stretched to 18 months if not longer.
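As a back-of-the-envelope illustration (ours, not from the study), the gap between an annual doubling and the slower cadences mentioned above compounds dramatically over a decade:

```python
# Illustrative only: compound growth under different Moore's Law doubling periods.
def transistor_growth(years, doubling_months):
    """Fold increase in component count after `years` at the given doubling period."""
    return 2 ** (years * 12 / doubling_months)

for months in (12, 18, 24):
    print(f"{months}-month doubling over 10 years: {transistor_growth(10, months):,.0f}x")
```

An annual doubling yields a 1,024-fold increase in ten years, while a 24-month cadence yields only a 32-fold increase over the same period.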

Nevertheless, it still offers a rule-of-thumb for how rapidly technology advances and posits a guideline as to what technology industries might aim for. Now, a paper in the International Journal of Technology Management asks whether the technology improvement rates of knowledge industries such as microelectronics, mobile communications, and genome sequencing might follow this law.

Yu Sang Chang of Gachon University, in Seongnam, Jinsoo Lee of the KDI School of Public Policy and Management, in Sejong, and Yun Seok Jung of the Institute for Information and Communications Technology Promotion, in Daejeon, Korea, have tracked technology developments to see whether Moore's Law held over the period 1971 to 2010. Their study shows that indeed it did, moreover they suggest that an analogous exponential law also applies to mobile cellular and genome-sequencing technologies.

While there has been no downward trend in transistor density, the team has found that the improvement rate in microprocessor clock speed has not been sustained. That said, for genome-sequencing technology, which is still in the early stages of development, progress continues apace.

The team points out that quantum tunnelling imposes a limit of around 5 nanometres on the further shrinking of transistors and that we are fast approaching that limit. However, developments in nanotechnology might still allow the industry to sustain Moore's Law in microelectronics even into its centenary year.

Chang, Y.S., Lee, J. and Jung, Y.S. (2018) 'Are technology improvement rates of knowledge industries following Moore's law? An empirical study of microprocessor, mobile cellular, and genome sequencing technologies', Int. J. Technology Management, Vol. 78, No. 3, pp.182-207.
DOI: 10.1504/IJTM.2018.095629

Albert Einstein is famous for a lot of reasons, but the movement of sediments in rivers is perhaps not one of them. Yet, his name is associated with those of Ackers, White, and Shields who developed equations to help explain how grainy materials transported as particles in a river move. Given the importance of sediment from the physical or chemical degradation of rocks in a waterway and the impact it has on erosion, entrainment, transportation, deposition, and compaction, it is not surprising that geologists, geographers, and others involved in understanding waterways are more than a little familiar with these equations.

Now, Hydar Lafta Ali, Badronnisa Binti Yusuf, and Azlan Abdul Aziz of the Universiti Putra Malaysia, and Thamer Ahamed Mohammed of the University of Baghdad, Iraq, have attempted to simplify the Einstein equation for the calculation of suspended sediment transport in rivers. Writing in the International Journal of Hydrology Science and Technology, the team explains how they have validated their simplified form of the equation against data from eleven rivers located in different parts of the world. Indeed, their results show that the new simplified equation performs well when compared with Einstein's and Bagnold's equations and when tested on data from the Atchafalaya, the Red, the South American, the Rio Grande and the Al-Garraf rivers.

It is important to understand river sediment, especially in the face of changes driven by natural disasters and global climate change. Sediment plays an important role, after all, in the delivery of nutrients to aquatic ecosystems and agricultural land, the formation and preservation of river deltas, the provision of sand as a building material, and the course taken by a river.

The team says that future studies will employ the proposed equation statistically on other rivers around the world to verify its accuracy still further.

Ali, H.L., Mohammed, T.A., Yusuf, B. and Aziz, A.A. (2018) 'A simplification of the Einstein equation for the calculation of suspended sediment transport in rivers', Int. J. Hydrology Science and Technology, Vol. 8, No. 4, pp.393-409.
DOI: 10.1504/IJHST.2018.095536

Melanoma is the most lethal of the various forms of skin cancer. It can be cured, but only if detected early enough in its progress. Now, writing in the International Journal of Advanced Intelligence Paradigms, a research team from India has developed a new way to analyse skin lesions that may or may not be melanoma and so allow a more reliable diagnosis to be made.

The World Health Organisation (WHO) says that one third of all cancer cases are skin cancers and there are currently more than 135,000 new melanoma cases diagnosed annually. The five-year survival rate for patients with melanoma diagnosed and treated early is 98%, whereas the survival rate is 62% for cases of melanoma that have spread beyond the local tissues. Survival for cases where the cancer has spread to tissues and bone well away from the primary tumour site is a mere 16% after five years.

Vikash Yadav and Vandana Dixit Kaushik of Harcourt Butler Technical University, Kanpur explain that the diagnosis of skin cancer is difficult using conventional methods but that modern image processing and analysis could improve the outlook significantly. Their approach looks at asymmetries in high-level features of skin lesions and then combines these data with low-level features to create a computer algorithm that can accurately classify a skin lesion as being melanoma or not. The indicative features are the asymmetries, border irregularities, and colour differences within a single lesion that mark out a melanoma from a common mole or other skin blemish.

Yadav, V. and Kaushik, V.D. (2018) 'Detection of melanoma skin disease by extracting high level features for skin lesions', Int. J. Advanced Intelligence Paradigms, Vol. 11, Nos. 3/4, pp.397–408.
DOI: 10.1504/IJAIP.2018.095493

Many nations have recovered to some extent from the economic crash of 2008 and the subsequent financial downturn although on the whole that recovery has been sluggish at best. Tatyana Boikova of the Department of Business Administration, at the Baltic International Academy and Aleksandrs Dahs of the Centre for European and Transition Studies, at the University of Latvia, both in Riga, Latvia, have now demonstrated that this recovery has been very uneven across the European Union's economic and social area.

Writing in the International Journal of Sustainable Economy, the researchers point out that studies of growth and development do not find a solid relationship between income inequality and the rate of economic growth and there are discrepancies that make interpreting the results and seeing the bigger empirical and theoretical picture difficult.

The team has now explored in detail the impact of income inequality, poverty, and wealth on the rate of economic growth in the Eurozone. "We find that the effect of income inequality on economic growth is statistically insignificant, whereas poverty and savings have a negative, statistically significant effect on growth, while the effect of financial assets is positive and statistically significant," the team reports. They have also seen a negative, statistically significant effect of consumption on growth and demonstrated that the dynamics of the link between inequality and growth across countries do not take the inverted-U shape curve for all observations and the average values per country in the Eurozone.

"Given the still-sluggish recovery after the financial crisis, specific features of economic cycles within each country should be taken as the basis of the macroeconomic regulation of the Eurozone," the team concludes. They add that such efforts must be aimed at encouraging business investment in order to enhance smart competitiveness and create long-term economic growth.

Boikova, T. and Dahs, A. (2018) 'Inequality and economic growth across countries of the Eurozone', Int. J. Sustainable Economy, Vol. 10, No. 4, pp.315-339.
DOI: 10.1504/IJSE.2018.095254

Take nothing but memories, leave nothing but footprints... that well-worn traveller's mantra might be modernised to say "take nothing but photos". Indeed, modern travellers take and share billions of photos every year thanks to the advent of smartphones, digital cameras, and social media. The digital footprints they leave offer a treasure trove of geotagged information about popular and not-so-popular tourist destinations.

Now, Zhenxing Xu, Ling Chen, Haodong Guo, Mingqi Lv, and Gencai Chen of the College of Computer Science, at Zhejiang University, in Hangzhou, China, have investigated how data-mining online photo collections and their geotags might be used to develop recommendations for other travellers. Until now, most data mining of tourist photographs has focused on time and location and ignored the context of the images. Xu and colleagues have added another layer to a recommendation algorithm.

"[Our system] uses an entropy-based mobility measure to classify geotagged photos into tourist photos or non-tourist photos," they explain. "Secondly, it conducts gender recognition based on face detection from tourist photos," they add. "Thirdly, it builds a gender-aware profile of travel locations and users and finally, it recommends personalised travel locations considering both user gender and similarity."
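The first of those steps, the entropy-based mobility measure, can be sketched in a few lines. This toy version is our own illustration of the general idea (users whose photos spread over many locations look more like tourists), not the authors' implementation:

```python
# Hypothetical sketch of an entropy-based mobility measure: the more evenly a
# user's photos are spread across distinct locations, the higher the entropy.
from collections import Counter
from math import log2

def location_entropy(locations):
    """Shannon entropy of a user's photo locations (higher = more spread out)."""
    counts = Counter(locations)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A resident photographs the same neighbourhood; a visitor roams.
resident = ["home"] * 9 + ["park"]
visitor = ["temple", "lake", "museum", "market", "pagoda"]
print(location_entropy(resident) < location_entropy(visitor))  # True
```

A threshold on this measure would then split photo owners into tourists and non-tourists before the later, gender-aware stages of the pipeline.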

The team has tested the approach with a dataset of geotagged photos from eleven popular tourist destinations across China. "Experimental results show that our approach has the potential to improve the performance of travel location recommendation," they conclude.

Xu, Z., Chen, L., Guo, H., Lv, M. and Chen, G. (2018) 'User similarity-based gender-aware travel location recommendation by mining geotagged photos', Int. J. Embedded Systems, Vol. 10, No. 5, pp.356-365.
DOI: 10.1504/IJES.2018.095023

Many people who use the web are concerned about privacy, but they are also concerned about web page load times. If improving privacy led to slower websites, the extra delay might turn some people away from more secure sites.

Now, a new study from Eric Chan-Tin of the Department of Computer Science, at Loyola University Chicago, in Illinois, and Rakesh Ravishankar of the Computer Science Department, at Oklahoma State University, in Stillwater, USA, reveals that the average time taken to load a web page encrypted with standard certification techniques is a mere fraction of a second (about 12 per cent) slower than the load time of an unencrypted page. They explain that a standard, unencrypted page prefixed with http:// takes 2.6 seconds to load compared to the 2.9 seconds of an encrypted https:// page (the s after the http indicates to the browser and to users that the page is encrypted using TLS, transport layer security).
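The roughly 12 per cent figure follows directly from those reported load times:

```python
# Reproducing the back-of-the-envelope overhead figure from the reported load times.
http_load, https_load = 2.6, 2.9   # seconds, as reported in the paper
overhead = (https_load - http_load) / http_load
print(f"TLS overhead: {overhead:.0%}")  # TLS overhead: 12%
```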

Given the benefits of encryption and this small compromise, coupled with the fact that many browsers now flag unencrypted sites as insecure and search engines lower their ranking, there is a need to push for https to be the default instead of http.

There have been problems with some of the certification authorities in recent years where the very core of the encryption system has been accessed by hackers. However, the team suggests that the strength of ten trusted authorities would allow 80 percent of the web to be protected. They are not advocating the use of those ten specifically but do point out that with those and an additional roster, it should be possible to secure almost the whole of the web.

Chan-Tin, E. and Ravishankar, R. (2018) 'The case for HTTPS: measuring overhead and impact of certificate authorities', Int. J. Security and Networks, Vol. 13, No. 4, pp.261-269.
DOI: 10.1504/IJSN.2018.095191

Textual passwords remain the most common and cumbersome format for logins to online services. For many user groups, such as the visually impaired and the elderly, this can be a problem. Now, a team in the USA has developed an alternative, graphical password system to circumvent some of the barriers to accessibility for the older internet user.

Nancy Carter, Cheng Li, Qun Li, and Jennifer Stevens of the College of William & Mary in Williamsburg, Virginia, Ed Novak of Franklin & Marshall College, in Lancaster, Pennsylvania, and Zhengrui Qin of Northwest Missouri State University, in Maryville, Missouri, USA, explain that not all users have sufficient cognitive skills or manual dexterity to readily create, recall, and enter strong text-based passwords. The new system is based on embedding familiar facial images among random unfamiliar images so that a user with limited abilities might still be able to use a password to log in.

Tests with a group of over-60s showed that the graphical password technique can achieve a recall rate of 97%, password "entropy" superior to a short PIN, and authentication times comparable to those possible with short text passwords. The system, as it stands, is particularly suited to users with limited manual dexterity who do not need the additional barrier of having to type convoluted text-based passwords when clicking with a mouse on images or tapping a touchscreen would suffice for many applications.
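To see how picking images can beat a short PIN on entropy, consider a hypothetical configuration (the grid size and number of screens below are our illustrative assumptions, not the paper's parameters):

```python
# Illustrative entropy comparison: picking 1 familiar face per screen from a
# 9-image grid, over 5 screens, versus a 4-digit PIN. Parameters are hypothetical.
from math import log2

pin_entropy = log2(10 ** 4)          # 4-digit PIN: ~13.3 bits
graphical_entropy = 5 * log2(9)      # 5 independent 1-in-9 choices: ~15.8 bits
print(graphical_entropy > pin_entropy)  # True
```

Each extra screen adds about 3.2 bits, so the graphical scheme scales past the PIN without asking the user to memorise anything new.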

Carter, N., Li, C., Li, Q., Stevens, J.A., Novak, E. and Qin, Z. (2018) 'Graphical passwords for older computer users', Int. J. Security and Networks, Vol. 13, No. 4, pp.211-227.
DOI: 10.1504/IJSN.2018.095170

Deep learning has been applied to the problem of intelligent plankton classification, which could have important implications for understanding marine ecosystems, the food chain, and the environmental impact of oceanic microbes on climate.

Hussein Al-Barazanchi and Shawn Wang of California State University, in Fullerton and Abhishek Verma of New Jersey City University, Jersey City, USA, discuss the importance of plankton in the International Journal of Computational Vision and Robotics, and outline their intelligent plankton image classification.

Plankton is an umbrella term for any organism that lives in a large body of water, such as an ocean, and cannot propel itself against the current. It is an extremely diverse group that encompasses bacteria, archaea, algae, protozoa and any drifting or floating animals that inhabit large water columns. Plankton is a source of food for fish and other marine animals. Moreover, the distribution of plankton underpins the persistence of marine ecosystems as well as having an impact on chemical concentrations of the oceans and the Earth's atmosphere.

The team explains that because of the diversity of plankton in terms of their nature, size and shape, accurate classification is daunting and the mixed quality of images collected for different types of plankton and species makes this problem even more challenging.

The team's new intelligent machine learning system based on convolutional neural networks (CNN) for plankton image classification does not depend on feature engineering and can be efficiently extended to encompass new classes. Tests on standard images show the new approach to be more accurate than even the state-of-the-art tools available today.
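The convolutional layers at the heart of such a network learn their own filters rather than relying on hand-engineered features. The basic operation they stack is a 2D convolution, sketched here in plain Python purely as an illustration (not the authors' architecture):

```python
# A single 2D convolution (valid padding, stride 1): the core operation a CNN
# stacks and learns. Illustrative only, not the authors' network.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A vertical-edge detector responds strongly across a toy 4x4 image's boundary.
img = [[0, 0, 1, 1]] * 4
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
print(conv2d(img, sobel_x))  # [[4, 4], [4, 4]]
```

In a real CNN the kernel values are not fixed like this Sobel filter but are learned from labelled plankton images, which is what frees the approach from manual feature engineering.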

Al-Barazanchi, H., Verma, A. and Wang, S.X. (2018) 'Intelligent plankton image classification with deep learning', Int. J. Computational Vision and Robotics, Vol. 8, No. 6, pp.561-571.
DOI: 10.1504/IJCVR.2018.095584

The European Union, EU, is purportedly fighting anthropogenic climate change through its carbon emissions targets. Writing in the International Journal of Management and Network Economics, researchers in Italy point out that: "By 2050, the EU aims to reduce its greenhouse gas emissions by 80%-95% compared with 1990 levels. New objectives up to 2030 provide for a 40% reduction of GHG emissions and an increase of 27% for renewables and energy efficiency."

In their paper, Idiano D'Adamo and Domenico Schettini of the University of L'Aquila and Michela Miliacca of the University of Rome Tor Vergata describe their research aims as twofold. First, they present the inventory data of greenhouse gas emissions, final energy consumption, the share of renewable energy, and other data and compare achievements so far with the 2020 targets. Secondly, they look for a correspondence between the increasing number of certified companies and positive results with respect to mitigation.

The team adds that regulatory obligations and a growing awareness of climate change have led companies to adopt systems voluntarily with a view to improving environmental management and/or energy management. The benefit to such companies is not only one of an improved public image but also the improved competitiveness that ensues.

Every member of the EU plays a key role in addressing the major issue of the day: climate change. Eighteen member states have achieved the goals set, but the others are yet to do so, and some are performing worse than they were almost three decades ago. It seems that economic stagnation leads to under-achieving in this context. The team will next look at other major economic regions of the world to see whether or not their targets are being approached.

D'Adamo, I., Miliacca, M. and Schettini, D. (2018) 'Climate change mitigation: evidences from the European scenario', Int. J. Management and Network Economics, Vol. 4, No. 2, pp.95-114.
DOI: 10.1504/IJMNE.2018.095079

Journal news

Nivedita Agarwal from Friedrich-Alexander-Universität Erlangen-Nürnberg in Germany has been appointed to take over joint editorship of the International Journal of Entrepreneurial Venturing, together with Terrence Brown from the KTH Royal Institute of Technology, Sweden, from 1 January, 2019. She is succeeding Alexander Brem, whom the publisher wants to thank for his four years of dedication to the further development of the journal.

Newly announced title: International Journal of Big Data Management

Dr. Weihua Liu from Tianjin University in China has been appointed to take over editorship of the International Journal of Modelling in Operations Management.

Associate Prof. Yam B. Limbu from Montclair State University in the USA has been appointed to take over editorship of the International Journal of Business and Emerging Markets.

Dr. Daniel Palacios from Universitat Politècnica de València in Spain has been appointed to take over editorship of the International Journal of Services Operations and Informatics.

Prof. Domingo Enrique Ribeiro-Soriano from Universitat de València in Spain has been appointed to take over editorship of the International Journal of Intellectual Property Management.

Dr. Liang Zhou from Shanghai Jiao Tong University in China has been appointed to take over editorship of the International Journal of Granular Computing, Rough Sets and Intelligent Systems.

Newly announced title: International Journal of Intelligent Internet of Things Computing

Our newsletter