Financial Product Design and Constrained Optimization


The design of a financial product resembles a constrained optimization problem in mathematics: the design goal is to achieve some optimum of interests under regulatory constraints and limited market conditions.

Objective: maximize client assets and management revenue; put simply, narrow the volatility and raise the expected return.

Constraints:

1. Laws, regulations, and the administrative measures of regulators;

2. Market conditions, issuance size, and so on;

3. Technical constraints of IT systems such as the TA (transfer agent) system;

…

Feasible region for the optimization search: the investable instruments, the available valuation methods, the available subscription and redemption rules, the various fee settings, and so on!
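To make the analogy concrete, here is a minimal sketch (an illustration added here, not part of the original text) that casts a toy product design as a constrained optimization in SciPy. The instrument count, expected returns, covariance matrix, weight caps, and risk-aversion coefficient are all hypothetical placeholders.

```python
# A minimal sketch: product design as constrained optimization.
# All numbers (returns, covariances, limits) are hypothetical.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.03, 0.06, 0.09])        # expected returns of 3 investable instruments
cov = np.array([[0.001, 0.000, 0.000],
                [0.000, 0.010, 0.002],
                [0.000, 0.002, 0.030]])  # return covariance (the volatility to narrow)
risk_aversion = 5.0                      # trade-off between return and volatility

def objective(w):
    # Maximize expected return minus a volatility penalty
    # (equivalently, minimize its negative).
    return -(mu @ w - risk_aversion * (w @ cov @ w))

constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # fully invested
bounds = [(0.0, 0.5)] * 3   # a regulatory-style cap: no instrument above 50%

res = minimize(objective, x0=np.ones(3) / 3,
               bounds=bounds, constraints=constraints, method="SLSQP")
print(res.x)   # optimal weights inside the feasible region
```

The objective mirrors the stated goal (raise expected return, narrow volatility), while the bounds and equality constraint play the role of the regulatory and market constraints above.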

In mathematics, the simplest case of optimization, or mathematical programming, refers to the study of problems in which one seeks to minimize or maximize a real function by systematically choosing the values of real or integer variables from within an allowed set. This (a scalar, real-valued objective function) is actually a small subset of the field, which comprises a large area of applied mathematics and generalizes to the study of means to obtain the "best available" values of some objective function over a defined domain, where the elaboration is on the types of functions and on the conditions and nature of the objects in the problem domain.

Optimization problems

An optimization problem can be represented in the following way:

Given: a function f : A → R from some set A to the real numbers
Sought: an element x0 in A such that f(x0) ≤ f(x) for all x in A ("minimization") or such that f(x0) ≥ f(x) for all x in A ("maximization").
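For a finite search space the definition can be checked directly. The tiny sketch below (an added illustration; the objective and the set A are arbitrary choices) enumerates the candidates and picks a minimizer.

```python
# Brute-force minimization over a finite set A, straight from the definition:
# find x0 in A with f(x0) <= f(x) for all x in A.
def f(x):
    return (x - 2) ** 2 + 1   # hypothetical objective

A = [-3, -1, 0, 1, 2, 4]      # hypothetical finite search space
x0 = min(A, key=f)            # a minimizer
assert all(f(x0) <= f(x) for x in A)
print(x0, f(x0))              # -> 2 1
```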

Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use, for example, in linear programming). Many real-world and theoretical problems may be modeled in this general framework. Problems formulated using this technique in the fields of physics and computer vision may refer to the technique as energy minimization, speaking of the value of the function f as representing the energy of the system being modeled.

Typically, A is some subset of the Euclidean space Rn, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain A of f is called the search space, while the elements of A are called candidate solutions or feasible solutions.

The function f is called, variously, an objective function, cost function, energy function, or energy functional.[1] A feasible solution that minimizes (or maximizes, if that is the goal) the objective function is called an optimal solution.

Generally, when the feasible region or the objective function of the problem does not present convexity, there may be several local minima and maxima, where a local minimum x* is defined as a point for which there exists some δ > 0 so that for all x such that

‖x − x*‖ ≤ δ,

the expression

f(x*) ≤ f(x)

holds; that is to say, on some region around x* all of the function values are greater than or equal to the value at that point. Local maxima are defined similarly.

A large number of algorithms proposed for solving non-convex problems – including the majority of commercially available solvers – are not capable of distinguishing locally optimal solutions from globally optimal solutions, and will treat the former as actual solutions to the original problem. The branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a non-convex problem is called global optimization.
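To make the local-versus-global distinction concrete, the sketch below (an added illustration; the quartic objective and the starting points are arbitrary choices) runs a local solver from different starts on a non-convex function, landing in different local minima, and then uses a crude multistart loop as a simple global heuristic.

```python
# Local minima of a non-convex function depend on the starting point.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0]**4 - 3 * x[0]**2 + x[0]   # hypothetical non-convex objective

for x_start in (-2.0, 2.0):
    res = minimize(f, x0=[x_start])
    print(f"start {x_start:+.1f} -> local minimum at x = {res.x[0]:+.4f}")

# Crude multistart "global" search: keep the best of many local runs.
starts = np.linspace(-3.0, 3.0, 25)
best = min((minimize(f, x0=[s]) for s in starts), key=lambda r: r.fun)
print("best found:", best.x[0], best.fun)
```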

Notation

Optimization problems are often expressed with special notation. Here are some examples.

min_{x ∈ R} (x² + 1)

This asks for the minimum value of the objective function x² + 1, where x ranges over the real numbers R. The minimum value in this case is 1, occurring at x = 0.

max_{x ∈ R} 2x

This asks for the maximum value of the objective function 2x, where x ranges over the reals. In this case there is no such maximum, as the objective function is unbounded, so the answer is "infinity" or "undefined".

argmin_{x ∈ (−∞, −1]} (x² + 1)

This asks for the value (or values) of x in the interval (−∞, −1] that minimizes (or minimize) the objective function x² + 1 (the actual minimum value of that function does not matter). In this case, the answer is x = −1.

argmax_{x ∈ [−5, 5], y ∈ R} x·cos(y)

This asks for the (x, y) pair (or pairs) that maximizes (or maximize) the value of the objective function x·cos(y), with the added constraint that x lie in the interval [−5, 5] (again, the actual maximum value of the expression does not matter). In this case, the solutions are the pairs of the form (5, 2kπ) and (−5, (2k + 1)π), where k ranges over all integers.
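As a quick numerical sanity check of the single-variable examples above (an added sketch, not part of the original), SciPy's minimize_scalar recovers both answers; the finite lower bound −10 is a hypothetical stand-in for the unbounded interval.

```python
# Verify: min over R of x^2 + 1 is 1, attained at x = 0.
from scipy.optimize import minimize_scalar

res = minimize_scalar(lambda x: x**2 + 1)
print(res.x, res.fun)   # -> approximately 0.0 and 1.0

# Verify: argmin of x^2 + 1 over (-inf, -1] is x = -1.
# The 'bounded' method needs finite bounds, so -10 stands in for -inf.
res = minimize_scalar(lambda x: x**2 + 1, bounds=(-10.0, -1.0), method="bounded")
print(res.x)            # -> approximately -1.0
```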

Techniques

Crudely, all methods are divided according to the number of variables:
SVO: single-variable optimization
MVO: multi-variable optimization
For twice-differentiable functions, unconstrained problems can be solved by finding the points where the gradient of the objective function is zero (that is, the stationary points) and using the Hessian matrix to classify the type of each point. If the Hessian is positive definite, the point is a local minimum; if negative definite, a local maximum; and if indefinite, it is some kind of saddle point.
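The sketch below (an added illustration) applies this classification numerically: the test function f(x, y) = x² − y², whose origin is a stationary point, is a hypothetical choice, and the eigenvalue signs of its Hessian reveal a saddle.

```python
# Classify a stationary point of f(x, y) = x^2 - y^2 via the Hessian.
import numpy as np

# At (0, 0) the gradient (2x, -2y) vanishes, so it is a stationary point.
H = np.array([[2.0,  0.0],    # [d2f/dx2,  d2f/dxdy]
              [0.0, -2.0]])   # [d2f/dydx, d2f/dy2 ]

eig = np.linalg.eigvalsh(H)   # eigenvalues of the symmetric Hessian
if np.all(eig > 0):
    print("local minimum")    # positive definite
elif np.all(eig < 0):
    print("local maximum")    # negative definite
else:
    print("saddle point")     # indefinite (mixed signs) -> prints this
```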

However, the existence of derivatives is not always assumed, and many methods were devised for specific situations. The basic classes of methods are usually distinguished by how much smoothness of the objective function they exploit: derivative-free (zeroth-order) methods, gradient-based (first-order) methods, and Hessian-based (second-order) methods, with individual algorithms often falling somewhere among these categories.

Should the objective function be convex over the region of interest, then any local minimum will also be a global minimum. There exist robust, fast numerical techniques for optimizing twice-differentiable convex functions.
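As an added illustration of this property (the quadratic and the starting points below are arbitrary), a local solver reaches the same global minimizer of a convex function from every start:

```python
# For a convex objective, local minimization from any starting point
# converges to the same global minimum.
import numpy as np
from scipy.optimize import minimize

convex = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2   # convex quadratic

for start in ([-5.0, 5.0], [10.0, 10.0], [0.0, 0.0]):
    res = minimize(convex, x0=start)
    print(np.round(res.x, 4))   # -> [ 1. -2.] from every start
```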

Constrained problems can often be transformed into unconstrained problems with the help of Lagrange multipliers.
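As a worked example of the multiplier idea (added here; the small problem is a hypothetical choice), SymPy can solve the stationarity conditions of the Lagrangian for: minimize x² + y² subject to x + y = 1.

```python
# Lagrange multipliers: minimize x^2 + y^2 subject to x + y = 1.
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
f = x**2 + y**2            # objective
g = x + y - 1              # constraint, required to equal zero
L = f - lam * g            # the Lagrangian

# Stationarity: all partial derivatives of L must vanish.
sols = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)
print(sols)                # -> [{x: 1/2, y: 1/2, lam: 1}]
```

Setting the derivative with respect to lam to zero recovers the constraint itself, so the solution x = y = 1/2 is feasible as well as stationary.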


Multi-objective optimization

Adding more than one objective to an optimization problem adds complexity. For example, if you wanted to optimize a structural design, you would want a design that is both light and rigid. Because these two objectives conflict, a trade-off exists. There will be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and stiffness. This set of trade-off designs is known as a Pareto set. The curve created plotting weight against stiffness of the best designs is known as the Pareto frontier.

A design is judged to be Pareto optimal if it is not dominated by any other design: one design dominates another if it is at least as good in every aspect and strictly better in at least one. A dominated design is, by definition, not Pareto optimal.
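A simple way to extract the Pareto set from a finite pool of candidates is a pairwise non-domination filter, sketched below (an added illustration; the random weight/compliance data are hypothetical, with compliance standing in for the inverse of stiffness so that both objectives are minimized).

```python
# Extract the Pareto set from candidate designs with two objectives
# to minimize: weight and compliance (the inverse of stiffness).
import numpy as np

rng = np.random.default_rng(0)
designs = rng.random((50, 2))   # hypothetical (weight, compliance) pairs

def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in one.
    return np.all(a <= b) and np.any(a < b)

pareto = [d for d in designs
          if not any(dominates(other, d) for other in designs)]
print(len(pareto), "Pareto-optimal designs out of", len(designs))
```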