Open Access Article

Title: Application of an AI-driven visual aesthetic scoring system for style calibration in art works

Authors: Feng Tan; Mei Wang

Addresses: Modern Logistics and Intelligent Manufacturing College, Wuhu Vocational Technical University, Wuhu, Anhui, 241003, China; Office of Scientific Research, Wuhu Vocational Technical University, Wuhu, Anhui, 241003, China

Abstract: This research explores AI-driven visual aesthetic scoring systems as tools for evaluating and refining artistic styles, with a particular focus on interior design and computational modelling. The study demonstrates how artificial intelligence can enhance artistic quality and align computer-generated imagery with human aesthetic preferences. By integrating composite loss functions, curated datasets, and diffusion-based architectures, the model significantly improves visual appeal, stylistic consistency, and task performance. A composite loss-based AI framework was developed using a customised interior design dataset annotated with style tags, aesthetic ratings, and spatial attributes. The system, fine-tuned with user-defined parameters, produced results that were both visually appealing and contextually appropriate. Experimental outcomes revealed statistically robust improvements of 52.54% in portal engagement (Cohen's d = 1.69, p < 0.001, 95% CI: [47.8%, 57.3%]) and 40.08% in agency engagement (d = 1.52, p < 0.001), validated through permutation tests, bootstrap resampling, and multiple-comparison corrections. User studies further indicated that AI-selected or AI-generated images were preferred over images from other sources, receiving higher aesthetic ratings and engagement levels.
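To make the reported validation concrete, the sketch below shows one plausible way to compute the statistics named in the abstract: Cohen's d with a pooled standard deviation, a percentile bootstrap confidence interval for the relative uplift, and a two-sided permutation test on the difference of means. The paper's data and code are not reproduced here; the engagement scores, sample sizes, and function names are all illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def cohens_d(treated, control):
    # Cohen's d using the pooled standard deviation of both groups.
    n1, n2 = len(treated), len(control)
    pooled_var = ((n1 - 1) * np.var(treated, ddof=1) +
                  (n2 - 1) * np.var(control, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(treated) - np.mean(control)) / np.sqrt(pooled_var)

def bootstrap_ci(treated, control, stat, n_boot=10_000, alpha=0.05):
    # Percentile bootstrap CI: resample each group with replacement.
    stats = [stat(rng.choice(treated, len(treated), replace=True),
                  rng.choice(control, len(control), replace=True))
             for _ in range(n_boot)]
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

def permutation_pvalue(treated, control, n_perm=10_000):
    # Two-sided permutation test on the difference of group means.
    observed = np.mean(treated) - np.mean(control)
    pooled = np.concatenate([treated, control])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = np.mean(pooled[:len(treated)]) - np.mean(pooled[len(treated):])
        if abs(diff) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0

# Hypothetical engagement scores; the study's real measurements are not public.
control = rng.normal(50, 10, 200)
treated = rng.normal(57, 10, 200)

print("Cohen's d:", cohens_d(treated, control))
print("95% CI for mean uplift (%):",
      bootstrap_ci(treated, control,
                   lambda t, c: 100 * (np.mean(t) - np.mean(c)) / np.mean(c)))
print("permutation p-value:", permutation_pvalue(treated, control))

Under these assumptions, reporting an effect size alongside a bootstrap interval and a permutation p-value, as the abstract describes, guards against relying on a single parametric test for the engagement-uplift claims.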

Keywords: visual aesthetic scoring; style calibration; AI in art; diffusion models; aesthetic evaluation; artistic style transfer; generative design; computational aesthetics.

DOI: 10.1504/IJICT.2026.151654

International Journal of Information and Communication Technology, 2026 Vol.27 No.9, pp.39-69

Received: 07 Oct 2025
Accepted: 30 Nov 2025

Published online: 11 Feb 2026