In optimal design we are tasked with finding a solution that meets a set of requirements, such as an upper limit on the mechanical stress in a part. Additionally, the solution should optimize one or more other metrics, such as a component's weight or its fundamental frequency.
We may attempt to solve this problem through trial and error, trying different designs until we find the right match. Engineers today more frequently apply sophisticated techniques like optimization, in which constraints are set and objectives identified. However, the optimizer may fail to find a solution that meets the requirements. When this happens we are quick to blame faulty software, but more often than not the failure can be avoided with a more prudent application of advanced optimization techniques.
Local minima can have a dramatic effect on optimization problems, in particular with the popular class of gradient-based optimization methods. While efficient, these techniques exhibit a strong dependence on the initial design. One common workaround is a multi-start approach, but this is just another form of brute-force trial and error. Optimization schemes that try to avoid the trap of local minima are known as global optimization methods.
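The local-minimum trap and the multi-start workaround can be sketched in a few lines. The function, starting points, and step size below are invented for illustration; a real design problem would evaluate a simulation model instead:

```python
# Multimodal test function with two local minima; the global
# minimum sits near x ≈ -1.30 (illustrative, not a real design metric).
def f(x):
    return x**4 - 3 * x**2 + x

def grad(x):
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x0, lr=0.01, steps=1000):
    """Plain fixed-step gradient descent from the initial design x0."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# A single start from x0 = 2.0 converges to the shallower local
# minimum near x ≈ 1.14 and never sees the better design.
x_single = gradient_descent(2.0)

# Multi-start: run the same local optimizer from several initial
# designs and keep the best result -- brute force, but it works here.
starts = [-2.0, 0.0, 2.0]
best = min((gradient_descent(x0) for x0 in starts), key=f)
```

Each extra start multiplies the cost of the search, which is exactly why this brute-force remedy scales poorly when a single function evaluation is an expensive simulation.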
Another problem is that the problem as posed may have no solution at all. This most frequently stems from unrealistic constraints, and it is challenging to identify unrealistic requirements in advance. A multi-objective optimization produces a Pareto front that indicates the trade-offs between competing objective metrics. Pareto optimality is the idea that one objective cannot improve without worsening another. Identifying a Pareto front early in the design process has two benefits. First, the range of achievable values for the metrics is established, which should stop any futile efforts to solve impossible problems with unrealistic expectations. Second, the trade-off between metrics is quantified and can be used to guide decision making downstream.
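Pareto optimality itself is easy to state in code. A minimal sketch of a non-dominated filter follows, using made-up (weight, stress) pairs for a handful of candidate designs; real workflows would draw these points from a designed experiment or an optimizer's history:

```python
# Toy candidate designs scored on two competing metrics to minimize,
# e.g. (weight, stress). The numbers are invented for illustration.
designs = [(1, 5), (2, 3), (3, 4), (3, 3), (4, 2), (5, 1)]

def dominates(q, p):
    """q dominates p if it is no worse in both metrics and
    strictly better in at least one (assumes no duplicate points)."""
    return q[0] <= p[0] and q[1] <= p[1] and q != p

def pareto_front(points):
    """Keep only the non-dominated points -- the Pareto front."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = pareto_front(designs)
```

Here `(3, 4)` and `(3, 3)` drop out because `(2, 3)` beats them on both metrics; the surviving points quantify the trade-off: any further reduction in one metric forces an increase in the other.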
As computing technology advances, it is not enough simply to pose the same problems to ever faster machines. Solving global or multi-objective optimization problems is extremely computationally costly, so optimization technology must continue to evolve to solve these complex problems not only quickly but also efficiently. As computers become more powerful and less expensive, we should expect our optimization schemes to keep pace and enable us to tackle more ambitious problems without resorting to trial and error.
Toward that end, I would like to invite you to read my whitepaper discussing how HyperStudy’s new Global Response Surface Method (GRSM) algorithm advances optimization technology.
Latest posts by Joseph Pajot