3 | 3 | {
4 | 4 | "cell_type": "markdown",
5 | 5 | "id": "0",
6 | | - "metadata": {
7 | | - "jp-MarkdownHeadingCollapsed": true
8 | | - },
| 6 | + "metadata": {},
9 | 7 | "source": [
10 | 8 | "# Integer Linear Programming"
11 | 9 | ]
15 | 13 | "id": "1",
16 | 14 | "metadata": {},
17 | 15 | "source": [
18 | | - "## Introduction\n",
19 | | - "\n",
20 | | - "In Integer Linear Programming (ILP), we seek to find a vector of integer numbers that maximizes (or minimizes) a linear cost function under a set of linear equality or inequality constraints [[1]](#ILP). In other words, it is an optimization problem where the cost function to be optimized and all the constraints are linear and the decision variables are integer.\n",
21 | | - "\n",
| 16 | + "Integer Linear Programming (ILP) seeks a vector of integers that maximizes (or minimizes) a linear cost function under a set of linear equality or inequality constraints [[1]](#ILP). In other words, it is an optimization problem where the cost function to optimize and all the constraints are linear and the decision variables are integers.\n",
22 | 17 | "\n",
23 | 18 | "\n",
24 | | - "### Mathematical Formulation\n",
25 | | - "The ILP problem can be formulated as follows:\n",
26 | | - "\n",
27 | | - "Given an $n$-dimensional vector $\\vec{c} = (c_1, c_2, \\ldots, c_n)$, an $m \\times n$ matrix $A = (a_{ij})$ with $i=1,\\ldots,m$ and $j=1,\\ldots,n$, and an $m$-dimensional vector $\\vec{b} = (b_1, b_2, \\ldots, b_m)$, find an $n$-dimensional vector $\\vec{x} = (x_1, x_2, \\ldots, x_n)$ with integer entries that maximizes (or minimizes) the cost function:\n",
| 19 | + "## Mathematical Formulation\n",
| 20 | + "The ILP problem can be formulated as follows: given an $n$-dimensional vector $\\vec{c} = (c_1, c_2, \\ldots, c_n)$, an $m \\times n$ matrix $A = (a_{ij})$ with $i=1,\\ldots,m$ and $j=1,\\ldots,n$, and an $m$-dimensional vector $\\vec{b} = (b_1, b_2, \\ldots, b_m)$, find an $n$-dimensional vector $\\vec{x} = (x_1, x_2, \\ldots, x_n)$ with integer entries that maximizes (or minimizes) the cost function:\n",
28 | 21 | "\n",
29 | 22 | "\\begin{align*}\n",
30 | 23 | "\\vec{c} \\cdot \\vec{x} = c_1x_1 + c_2x_2 + \\ldots + c_nx_n\n",
31 | 24 | "\\end{align*}\n",
32 | 25 | "\n",
33 | | - "subject to the constraints:\n",
| 26 | + "subject to these constraints:\n",
34 | 27 | "\n",
35 | 28 | "\\begin{align*}\n",
36 | 29 | "A \\vec{x} & \\leq \\vec{b} \\\\\n",
37 | 30 | "x_j & \\geq 0, \\quad j = 1, 2, \\ldots, n \\\\\n",
38 | 31 | "x_j & \\in \\mathbb{Z}, \\quad j = 1, 2, \\ldots, n\n",
39 | 32 | "\\end{align*}\n",
40 | 33 | "\n",
41 | | - "\n",
42 | | - "\n",
43 | | - "## Solving with the Classiq platform\n",
44 | | - "\n",
45 | | - "We go through the steps of solving the problem with the Classiq platform, using QAOA algorithm [[2](#QAOA)]. The solution is based on defining a Pyomo model for the optimization problem we would like to solve."
| 34 | + "This tutorial guides you through the steps of solving the problem with the Classiq platform, using QAOA [[2](#QAOA)]. The solution is based on defining a Pyomo model for the optimization problem to solve."
46 | 35 | ]
47 | 36 | },
48 | 37 | {
49 | 38 | "cell_type": "markdown",
50 | 39 | "id": "2",
51 | 40 | "metadata": {},
52 | 41 | "source": [
53 | | - "## Building the Pyomo model from a graph input\n",
| 42 | + "## Building the Pyomo Model from a Graph Input\n",
54 | 43 | "\n",
55 | | - "We proceed by defining the Pyomo model that will be used on the Classiq platform, using the mathematical formulation defined above:"
| 44 | + "Define the Pyomo model to use on the Classiq platform, using the mathematical formulation defined above:"
56 | 45 | ]
57 | 46 | },
58 | 47 | {
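The model-building cell itself is unchanged by this PR and therefore collapsed in the diff. Since the text above walks through the formulation, here is a minimal, self-contained Pyomo sketch of a small ILP in exactly that form; the cost vector, constraint matrix, and variable bounds are illustrative placeholders, not the notebook's graph-derived instance.

```python
import numpy as np
import pyomo.environ as pyo

# Illustrative data for a tiny instance of the formulation above:
# maximize c.x  subject to  A x <= b,  x >= 0,  x integer.
c = np.array([3, 2, 4])                      # cost vector (n = 3)
A = np.array([[1, 1, 2],                     # constraint matrix (m = 2)
              [2, 0, 1]])
b = np.array([4, 5])                         # right-hand-side vector

model = pyo.ConcreteModel()
# Finite bounds keep each integer variable encodable on a fixed number of qubits.
model.x = pyo.Var(range(len(c)), domain=pyo.NonNegativeIntegers, bounds=(0, 3))

# A x <= b, one constraint per row of A
model.constraints = pyo.ConstraintList()
for i in range(A.shape[0]):
    model.constraints.add(
        sum(int(A[i, j]) * model.x[j] for j in range(len(c))) <= int(b[i])
    )

# The linear cost c . x, to be maximized
model.cost = pyo.Objective(
    expr=sum(int(c[j]) * model.x[j] for j in range(len(c))), sense=pyo.maximize
)
```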
160 | 149 | "source": [
161 | 150 | "## Setting Up the Classiq Problem Instance\n",
162 | 151 | "\n",
163 | | - "In order to solve the Pyomo model defined above, we use the `CombinatorialProblem` quantum object. Under the hood it tranlastes the Pyomo model to a quantum model of the QAOA algorithm, with cost hamiltonian translated from the Pyomo model. We can choose the number of layers for the QAOA ansatz using the argument `num_layers`, and the `penalty_factor`, which will be the coefficient of the constraints term in the cost hamiltonian."
| 152 | + "To solve the Pyomo model defined above, use the `CombinatorialProblem` quantum object. Under the hood it translates the Pyomo model into a QAOA quantum model whose cost Hamiltonian is derived from the objective and constraints. Choose the number of layers for the QAOA ansatz using the `num_layers` argument. The `penalty_factor` is the coefficient of the constraints term in the cost Hamiltonian."
164 | 153 | ]
165 | 154 | },
166 | 155 | {
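For orientation, the setup described above might look roughly like the sketch below. The import path and the keyword for passing the Pyomo model are assumptions to verify against the Classiq documentation; only `num_layers` and `penalty_factor` are taken from the text.

```python
# Sketch only: verify the import path and constructor signature in the Classiq docs.
from classiq.applications.combinatorial_optimization import CombinatorialProblem

combi_problem = CombinatorialProblem(
    pyo_model=model,     # the Pyomo model from the sketch above (keyword name assumed)
    num_layers=3,        # number of QAOA ansatz layers
    penalty_factor=10,   # coefficient of the constraints term in the cost Hamiltonian
)
```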
195 | 184 | "source": [
196 | 185 | "## Synthesizing the QAOA Circuit and Solving the Problem\n",
197 | 186 | "\n",
198 | | - "We can now synthesize and view the QAOA circuit (ansatz) used to solve the optimization problem:"
| 187 | + "Synthesize and view the QAOA circuit (ansatz) used to solve the optimization problem:"
199 | 188 | ]
200 | 189 | },
201 | 190 | {
222 | 211 | "id": "11",
223 | 212 | "metadata": {},
224 | 213 | "source": [
225 | | - "We also set the quantum backend we want to execute on:"
| 214 | + "Set the quantum backend on which to execute:"
226 | 215 | ]
227 | 216 | },
228 | 217 | {
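The cell that sets the backend is collapsed in the diff. In the Classiq SDK, backend selection typically goes through execution preferences along the following lines; the backend name and the way the preferences are attached to the problem object are assumptions to check against the execution documentation.

```python
# Sketch only: field values, and how these preferences are wired into the
# CombinatorialProblem execution, are assumptions.
from classiq.execution import ClassiqBackendPreferences, ExecutionPreferences

execution_preferences = ExecutionPreferences(
    backend_preferences=ClassiqBackendPreferences(backend_name="simulator"),
)
```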
244 | 233 | "id": "13",
245 | 234 | "metadata": {},
246 | 235 | "source": [
247 | | - "We now solve the problem by calling the `optimize` method of the `CombinatorialProblem` object. For the classical optimization part of the QAOA algorithm we define the maximum number of classical iterations (`maxiter`) and the $\\alpha$-parameter (`quantile`) for running CVaR-QAOA, an improved variation of the QAOA algorithm [[3](#cvar)]:"
| 236 | + "Solve the problem by calling the `optimize` method of the `CombinatorialProblem` object. For the classical optimization part of QAOA, define the maximum number of classical iterations (`maxiter`) and the $\\alpha$-parameter (`quantile`) for running CVaR-QAOA, an improved variation of QAOA [[3](#cvar)]:"
248 | 237 | ]
249 | 238 | },
250 | 239 | {
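A sketch of the call described above: the argument names `maxiter` and `quantile` come from the text, while the specific values and the return value are assumptions.

```python
# Sketch only: the return value (assumed to be the optimized ansatz parameters)
# should be verified against the Classiq docs.
optimized_params = combi_problem.optimize(
    maxiter=60,     # maximum number of classical optimizer iterations
    quantile=0.7,   # CVaR alpha-parameter
)
```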
270 | 259 | "id": "15",
271 | 260 | "metadata": {},
272 | 261 | "source": [
273 | | - "We can check the convergence of the run:"
| 262 | + "Check the convergence of the run:"
274 | 263 | ]
275 | 264 | },
276 | 265 | {
322 | 311 | "id": "18",
323 | 312 | "metadata": {},
324 | 313 | "source": [
325 | | - "We can also examine the statistics of the algorithm. The optimization is always defined as a minimzation problem, so the positive maximization objective was tranlated to a negative minimization one by the Pyomo to qmod translator."
| 314 | + "Examine the statistics of the algorithm. The optimization is always defined as a minimization problem, so the positive maximization objective is translated into a negative minimization objective by the Pyomo-to-Qmod translator."
326 | 315 | ]
327 | 316 | },
328 | 317 | {
329 | 318 | "cell_type": "markdown",
330 | 319 | "id": "19",
331 | 320 | "metadata": {},
332 | 321 | "source": [
333 | | - "In order to get samples with the optimized parameters, we call the `sample` method:"
| 322 | + "To get samples with the optimized parameters, call the `sample` method:"
334 | 323 | ]
335 | 324 | },
336 | 325 | {
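As a rough sketch of the sampling step: the `sample` method is named in the text above, but whether it takes the optimized parameters and exactly what it returns are assumptions to verify.

```python
# Sketch only: argument and return type are assumptions.
samples = combi_problem.sample(optimized_params)
print(samples)
```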
431 | 420 | "id": "21",
432 | 421 | "metadata": {},
433 | 422 | "source": [
434 | | - "We also want to compare the optimized results to uniformly sampled results:"
| 423 | + "Compare the optimized results to uniformly sampled results:"
435 | 424 | ]
436 | 425 | },
437 | 426 | {
497 | 486 | "id": "25",
498 | 487 | "metadata": {},
499 | 488 | "source": [
500 | | - "Let us plot the solution:"
| 489 | + "Plot the solution:"
501 | 490 | ]
502 | 491 | },
503 | 492 | {
529 | 518 | "id": "27",
530 | 519 | "metadata": {},
531 | 520 | "source": [
532 | | - "## Comparison to a classical solver"
| 521 | + "## Comparing to a Classical Solver"
533 | 522 | ]
534 | 523 | },
535 | 524 | {
536 | 525 | "cell_type": "markdown",
537 | 526 | "id": "28",
538 | 527 | "metadata": {},
539 | 528 | "source": [
540 | | - "Lastly, we can compare to the classical solution of the problem:"
| 529 | + "Compare to the classical solution of the problem:"
541 | 530 | ]
542 | 531 | },
543 | 532 | {
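For the classical baseline, a Pyomo model like the sketch earlier in this diff can be handed directly to any MILP solver. The solver choice below (GLPK) is an assumption for illustration, not necessarily the one used in the notebook, and it must be installed separately.

```python
import pyomo.environ as pyo

# Requires a MILP solver on the system path, e.g. GLPK ("glpk") or CBC ("cbc").
solver = pyo.SolverFactory("glpk")
result = solver.solve(model)   # `model` is the Pyomo sketch defined earlier

print("status   :", result.solver.termination_condition)
print("objective:", pyo.value(model.cost))
print("solution :", [pyo.value(model.x[j]) for j in model.x])
```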
571 | 560 | "\n",
572 | 561 | "## References\n",
573 | 562 | "\n",
574 | | - "<a id='MVC'>[1]</a>: [Integer Programming (Wikipedia).](https://en.wikipedia.org/wiki/Integer_programming)\n",
| 563 | + "<a id='ILP'>[1]</a> [Integer Programming (Wikipedia).](https://en.wikipedia.org/wiki/Integer_programming)\n",
575 | 564 | "\n",
576 | | - "<a id='QAOA'>[2]</a>: [Farhi, Edward, Jeffrey Goldstone, and Sam Gutmann. \"A quantum approximate optimization algorithm.\" arXiv preprint arXiv:1411.4028 (2014).](https://arxiv.org/abs/1411.4028)\n",
| 565 | + "<a id='QAOA'>[2]</a> [Farhi, Edward, Jeffrey Goldstone, and Sam Gutmann. (2014). \"A quantum approximate optimization algorithm.\" arXiv preprint arXiv:1411.4028.](https://arxiv.org/abs/1411.4028)\n",
577 | 566 | "\n",
578 | | - "<a id='cvar'>[3]</a>: [Barkoutsos, Panagiotis Kl, et al. \"Improving variational quantum optimization using CVaR.\" Quantum 4 (2020): 256.](https://arxiv.org/abs/1907.04769)\n"
| 567 | + "<a id='cvar'>[3]</a> [Barkoutsos, Panagiotis Kl, et al. (2020). \"Improving variational quantum optimization using CVaR.\" Quantum 4: 256.](https://arxiv.org/abs/1907.04769)\n"
579 | 568 | ]
580 | 569 | }
581 | 570 | ],