Decomposition gradient descent method for bi-objective optimisation
by Jingjing Chen; Genghui Li; Xi Lin
International Journal of Bio-Inspired Computation (IJBIC), Vol. 23, No. 1, 2024

Abstract: Population-based decomposition methods decompose a multi-objective optimisation problem (MOP) into a set of single-objective subproblems (SOPs) and then solve them collaboratively to produce a set of Pareto optimal solutions. Most of these methods use heuristics, such as genetic algorithms, as their search engines, and as a result their search is not very efficient. This paper investigates how to perform gradient-based search within multi-objective decomposition methods. We use the NBI-style Tchebycheff method to decompose an MOP, since it is insensitive to the scales of the objectives. However, because the objectives of the resulting SOPs are non-differentiable, they cannot be optimised directly by classical gradient methods. We propose a new gradient descent method, decomposition gradient descent (DGD), to optimise them. We study its convergence property and conduct numerical experiments to show its efficiency.

Online publication date: Mon, 22-Jan-2024
