Algorithm for generating passive filter circuits

In this article, I will describe an algorithm that computes the component values of a passive filter circuit from its transfer function or its poles and zeros. For this article, I will consider a passive filter circuit to be a circuit with one input and one output, consisting only of ideal inductors, capacitors, and resistors with positive values.

Inductance, capacitance, and resistance matrices

Consider the following circuit:
This circuit has 5 nodes (\(V_i\), \(V_g\), \(V_1\), \(V_2\), and \(V_3\)) and 5 components. Recall that the impedance of an inductor is \(j\omega L\) and the impedance of a capacitor is \(\frac{1}{j\omega C}\). Thus, for example, the current through \(L_1\) is \(\frac{V_i - V_3}{j\omega L_1}\). For nodes \(V_1\), \(V_2\), and \(V_3\), we want to calculate the total current flowing into the node. By Kirchhoff's current law (KCL), the total current flowing into each node must be zero (i.e. what comes in must come out). Thus, we get the following equations: $$ I_i = \frac{V_3 - V_i}{j\omega L_1} $$ $$ I_g = \frac{V_1 - V_g}{j\omega L_2} + \left(V_1 - V_g\right)\left(j\omega C_2\right) $$ $$ I_1 = \frac{V_g - V_1}{j\omega L_2} + \left(V_g - V_1\right)\left(j\omega C_2\right) + \left(V_2 - V_1\right)\left(j\omega C_1\right) $$ $$ I_2 = \frac{V_3 - V_2}{R} + \left(V_1 - V_2\right)\left(j\omega C_1\right) $$ $$ I_3 = \frac{V_i - V_3}{j\omega L_1} + \frac{V_2 - V_3}{R} $$ These can be rewritten as a matrix equation: $$ \begin{bmatrix} I_i \\ I_g \\ I_1 \\ I_2 \\ I_3 \end{bmatrix} = \begin{bmatrix} -\frac{1}{j\omega L_1} & 0 & 0 & 0 & \frac{1}{j\omega L_1} \\ 0 & -\frac{1}{j\omega L_2} - j\omega C_2 & \frac{1}{j\omega L_2} + j\omega C_2 & 0 & 0 \\ 0 & \frac{1}{j\omega L_2} + j\omega C_2 & -\frac{1}{j\omega L_2} - j\omega C_2 - j\omega C_1 & j\omega C_1 & 0 \\ 0 & 0 & j\omega C_1 & -j\omega C_1 - \frac{1}{R} & \frac{1}{R} \\ \frac{1}{j\omega L_1} & 0 & 0 & \frac{1}{R} & -\frac{1}{R}-\frac{1}{j\omega L_1} \end{bmatrix} \begin{bmatrix} V_i \\ V_g \\ V_1 \\ V_2 \\ V_3 \end{bmatrix} $$ This can be expanded as: $$ \begin{bmatrix} I_i \\ I_g \\ I_1 \\ I_2 \\ I_3 \end{bmatrix} = \left(\frac{1}{j\omega} \begin{bmatrix} -\frac{1}{L_1} & 0 & 0 & 0 & \frac{1}{L_1} \\ 0 & -\frac{1}{L_2} & \frac{1}{L_2} & 0 & 0 \\ 0 & \frac{1}{L_2} & -\frac{1}{L_2} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ \frac{1}{L_1} & 0 & 0 & 0 & -\frac{1}{L_1} \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 
& 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -\frac{1}{R} & \frac{1}{R} \\ 0 & 0 & 0 & \frac{1}{R} & -\frac{1}{R} \end{bmatrix} + j\omega\begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & -C_2 & C_2 & 0 & 0 \\ 0 & C_2 & -C_2-C_1 & C_1 & 0 \\ 0 & 0 & C_1 & -C_1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\right)\begin{bmatrix} V_i \\ V_g \\ V_1 \\ V_2 \\ V_3 \end{bmatrix} $$ By KCL, \(I_1 = I_2 = I_3 = 0\). However, \(I_i\) and \(I_g\) are not necessarily zero, as these nodes are fed by sources, and the currents from those sources are not accounted for. Thus, the rows corresponding to \(I_i\) and \(I_g\) can be removed. The voltage at \(V_g\) (ground) is known to be zero, so its columns can be dropped, and the voltage at \(V_i\) is defined to be 1, so its columns can be moved to the other side. $$ -\frac{1}{j\omega}\begin{bmatrix} 0 \\ 0 \\ \frac{1}{L_1} \end{bmatrix} = \left(\frac{1}{j\omega} \begin{bmatrix} -\frac{1}{L_2} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -\frac{1}{L_1} \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 \\ 0 & -\frac{1}{R} & \frac{1}{R} \\ 0 & \frac{1}{R} & -\frac{1}{R} \end{bmatrix} + j\omega\begin{bmatrix} -C_2-C_1 & C_1 & 0 \\ C_1 & -C_1 & 0 \\ 0 & 0 & 0 \end{bmatrix}\right)\begin{bmatrix} V_1 \\ V_2 \\ V_3 \end{bmatrix} = \left(\frac{1}{j\omega}\boldsymbol{L} + \boldsymbol{R} + j\omega\boldsymbol{C}\right)\begin{bmatrix} V_1 \\ V_2 \\ V_3 \end{bmatrix}$$ For the rest of this article, I will use \(s\) instead of \(j\omega\) to somewhat simplify the formulas. Multiplying through by \(s\): $$ -\begin{bmatrix} 0 \\ 0 \\ \frac{1}{L_1} \end{bmatrix} = \left(\boldsymbol{L} + s\boldsymbol{R} + s^2\boldsymbol{C}\right)\begin{bmatrix} V_1 \\ V_2 \\ V_3 \end{bmatrix} $$ Notice that each of the matrices \(\boldsymbol{L}\), \(\boldsymbol{R}\), and \(\boldsymbol{C}\) can be written as the sum of constant matrices times a component value. 
For example, \(\boldsymbol{C}\) can be decomposed as: $$ \boldsymbol{C} = C_1 \begin{bmatrix} -1 & 1 & 0 \\ 1 & -1 & 0 \\ 0 & 0 & 0 \end{bmatrix} + C_2 \begin{bmatrix} -1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} $$ In general, the circuit matrix of a circuit consisting of \(k\) components can be written as: $$ \boldsymbol{M} = \boldsymbol{L} + s\boldsymbol{R} + s^2\boldsymbol{C} = \sum_{i=1}^k K_i\boldsymbol{D}_i s^{P_i} $$ where \(\boldsymbol{D}_i\) is a constant matrix as described above, \(K_i\) is the corresponding component value (either 1/resistance, 1/inductance, or capacitance), and \(P_i\) is the power of \(s\) corresponding to that component (0 for inductors, 1 for resistors, and 2 for capacitors). For this example, I will use \(K_1=\frac{1}{L_1}\), \(K_2=\frac{1}{R}\), \(K_3=C_1\), \(K_4=\frac{1}{L_2}\), \(K_5=C_2\).

The current vector can be decomposed similarly: $$ \vec{M} = \sum_{i=1}^k K_i\vec{D}_i s^{P_i} $$ Thus, the following general form can be used to represent any circuit with any number of nodes: $$ \vec{M} = \left(\boldsymbol{M}\right)\vec{V} $$ We can now solve for \(\vec{V}\), but the circuit only has one output, \(V_1\). Thus, we can use Cramer's rule to find the voltage at that node only. $$ V_1(s) = \frac{\det\left(\boldsymbol{M'}\right)}{\det\left(\boldsymbol{M}\right)} $$ where \(\boldsymbol{M'}\) is \(\boldsymbol{M}\) with the first column replaced with \(\vec{M}\). This formula now gives the transfer function of the filter circuit. Recall that the poles of this circuit are values of \(s\) such that \(V_1(s) = \pm\infty\) and the zeros are values of \(s\) such that \(V_1(s)=0\). (For now, I will only focus on the poles. The zeros can be calculated in a similar way). Since \(\det\left(\boldsymbol{L} + s\boldsymbol{R} + s^2\boldsymbol{C}\right)\) is the denominator of the transfer function, poles occur when \(\det\left(\boldsymbol{L} + s\boldsymbol{R} + s^2\boldsymbol{C}\right)=0\).
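To make this concrete, here is a minimal numerical sketch of evaluating \(V_1(s)\) by Cramer's rule for the example circuit, assuming the reduced \(\boldsymbol{L}\), \(\boldsymbol{R}\), and \(\boldsymbol{C}\) matrices derived above with every component value set to 1 (the minus sign on the source vector comes from moving the \(V_i\) column to the left-hand side):

```python
import numpy as np

# Reduced L, R, C matrices of the example circuit, with every
# component value (L1, L2, C1, C2, R) set to 1 for illustration.
Lm = np.array([[-1.0, 0.0, 0.0],
               [ 0.0, 0.0, 0.0],
               [ 0.0, 0.0, -1.0]])
Rm = np.array([[ 0.0,  0.0,  0.0],
               [ 0.0, -1.0,  1.0],
               [ 0.0,  1.0, -1.0]])
Cm = np.array([[-2.0,  1.0, 0.0],
               [ 1.0, -1.0, 0.0],
               [ 0.0,  0.0, 0.0]])

# Source vector: the V_i column moved to the left-hand side (hence the minus)
b = np.array([0.0, 0.0, -1.0])

def V1(s):
    """Transfer function V_1(s) = det(M') / det(M) via Cramer's rule."""
    M = Lm + s * Rm + s**2 * Cm
    Mp = np.array(M, dtype=complex)
    Mp[:, 0] = b            # M' is M with its first column replaced
    return np.linalg.det(Mp) / np.linalg.det(M)
```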

Relation to generalized eigenvalues

Recall that the eigenvalues of a matrix are the roots of the polynomial \(\det\left(\boldsymbol{A} - \lambda\boldsymbol{I}\right)\). Similarly, the generalized eigenvalues of a pair of matrices are the roots of the polynomial \(\det\left(\boldsymbol{P} - \lambda\boldsymbol{Q}\right)\). We can define $$ \boldsymbol{P} = \begin{bmatrix} \boldsymbol{0} & \boldsymbol{I} \\ \boldsymbol{L} & \boldsymbol{R} \end{bmatrix} $$ and $$ \boldsymbol{Q} = \begin{bmatrix} \boldsymbol{I} & \boldsymbol{0} \\ \boldsymbol{0} & -\boldsymbol{C} \end{bmatrix} $$ Each generalized eigenvalue/eigenvector pair must satisfy $$ \boldsymbol{P}\vec{\phi} = s\boldsymbol{Q}\vec{\phi} $$ We can expand this, replacing \(\vec{\phi}\) with \(\begin{bmatrix} \vec{\phi_1} \\ \vec{\phi_2} \end{bmatrix}\): $$ \begin{bmatrix} \boldsymbol{0} & \boldsymbol{I} \\ \boldsymbol{L} & \boldsymbol{R} \end{bmatrix}\begin{bmatrix} \vec{\phi_1} \\ \vec{\phi_2} \end{bmatrix} = s\begin{bmatrix} \boldsymbol{I} & \boldsymbol{0} \\ \boldsymbol{0} & -\boldsymbol{C} \end{bmatrix}\begin{bmatrix} \vec{\phi_1} \\ \vec{\phi_2} \end{bmatrix} $$ $$ \left\{\begin{matrix} \vec{\phi_2} & = & s\vec{\phi_1} \\ \boldsymbol{L}\vec{\phi_1} + \boldsymbol{R}\vec{\phi_2} & = & -s\boldsymbol{C}\vec{\phi_2} \end{matrix}\right. $$ $$ \boldsymbol{L}\vec{\phi_1} + s\boldsymbol{R}\vec{\phi_1} + s^2\boldsymbol{C}\vec{\phi_1} = \vec{0} $$ Thus, we can use a function like scipy.linalg.eig to find the eigenvalues and eigenvectors of the pair \(\left(\boldsymbol{P},\boldsymbol{Q}\right)\). The finite eigenvalues will then be the poles of the circuit, as they will satisfy \(\det\left(\boldsymbol{L} + s\boldsymbol{R} + s^2\boldsymbol{C}\right)=0\).
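A sketch of this pole computation with scipy, using the example circuit's matrices with all component values set to 1 (the function name is my own):

```python
import numpy as np
from scipy.linalg import eig

def circuit_poles(Lm, Rm, Cm):
    """Finite generalized eigenvalues of the pencil (P, Q), i.e. the poles."""
    n = Lm.shape[0]
    Z, I = np.zeros((n, n)), np.eye(n)
    P = np.block([[Z, I], [Lm, Rm]])
    Q = np.block([[I, Z], [Z, -Cm]])
    vals = eig(P, Q, right=False)   # eigenvalues only, no eigenvectors
    return vals[np.isfinite(vals)]  # discard the infinite eigenvalues

# Example: the matrices derived above with all component values set to 1
Lm = np.array([[-1.0, 0, 0], [0, 0, 0], [0, 0, -1.0]])
Rm = np.array([[0, 0, 0], [0, -1.0, 1.0], [0, 1.0, -1.0]])
Cm = np.array([[-2.0, 1.0, 0], [1.0, -1.0, 0], [0, 0, 0]])
poles = circuit_poles(Lm, Rm, Cm)
```

Since \(\boldsymbol{Q}\) is singular whenever \(\boldsymbol{C}\) is, some eigenvalues come out infinite; those correspond to the degree deficit of the determinant polynomial and are filtered out.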

Solving for the values of the components

Recall that the voltage at the output of the circuit is given by: $$ V_1(s) = \frac{\det\left(\boldsymbol{M'}\right)}{\det\left(\boldsymbol{M}\right)} $$ Since each entry of \(\boldsymbol{M'}\) and \(\boldsymbol{M}\) is a polynomial in \(s\), both \(\det\left(\boldsymbol{M'}\right)\) and \(\det\left(\boldsymbol{M}\right)\) will also be polynomials in \(s\). Further, the coefficients of those polynomials will be multilinear with respect to the component values. For example, using the example I introduced at the beginning: $$ \det\left(\boldsymbol{M}\right) = -s^5K_2K_3K_5 - s^4K_1K_3K_5 - s^3\left(K_1K_2K_3 + K_1K_2K_5 + K_2K_3K_4\right) - s^2K_1K_3K_4 - sK_1K_2K_4 $$ $$ \det\left(\boldsymbol{M'}\right) = -s^3K_1K_2K_3 $$ Note that the terms are added together without any constant coefficients and that all coefficients will have the same sign given that all of the component values are positive.

Step 1: Determining the coefficients of the numerator and denominator of the transfer function

We know the roots of \(\det\left(\boldsymbol{M'}\right)\) and \(\det\left(\boldsymbol{M}\right)\) to be the zeros and the poles of the transfer function respectively. Thus, \(\det\left(\boldsymbol{M}\right)\) can be written as \(\left(s - p_1\right)\left(s - p_2\right)...\left(s - p_m\right)\) and \(\det\left(\boldsymbol{M'}\right)\) can be written as \(\left(s - z_1\right)\left(s - z_2\right)...\left(s - z_n\right)\). However, if you multiply these products out, you will find that the leading coefficients of both must be 1. But, as we saw above, this need not be the case (up to sign, the leading coefficients were \(K_2K_3K_5\) and \(K_1K_2K_3\)). So, we must add in an additional gain coefficient, \(k\), which is determined by dividing the actual voltage at a frequency \(\omega_0\) by the predicted voltage \(\frac{\left(j\omega_0 - z_1\right)...\left(j\omega_0 - z_n\right)}{\left(j\omega_0 - p_1\right)...\left(j\omega_0 - p_m\right)}\).
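A possible sketch of this step (the function name and arguments are my own; `H0` is the known response at \(\omega_0\)):

```python
import numpy as np

def gain_constant(H0, w0, zeros, poles):
    """Gain k such that k * prod(jw0 - z) / prod(jw0 - p) equals H0."""
    s = 1j * w0
    predicted = np.prod([s - z for z in zeros]) / np.prod([s - p for p in poles])
    return H0 / predicted
```

Note that `np.prod([])` is 1, so the formula still works for an all-pole filter with no finite zeros.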

Step 2: Scaling the target polynomial to improve performance

If the poles and zeros have large magnitudes, then the coefficients for higher powers of \(s\) will be many orders of magnitude smaller than the coefficients for lower powers of \(s\). For example, if the poles are at \(s=-1000\), \(s=-2000\), and \(s=-3000\), the polynomial will be \(s^3 + 6000s^2 + 11000000s + 6000000000\). This will negatively affect the performance and exacerbate precision loss issues. To fix this, \(s\) is scaled so that the coefficients are closer together. In the example, by scaling \(s\) by 1000, the polynomial becomes \(s^3 + 6s^2 + 11s + 6\).

Let \(a_i\) be the coefficient of \(s^i\) in the numerator of the transfer function and let \(b_i\) be the coefficient of \(s^i\) in the denominator of the transfer function. Let \(d_n\) and \(d_d\) be the degrees of the numerator and denominator respectively. The scale factor \(S\) is determined with the following formula: $$ S = \exp\left(\frac{\sum_{i=0,a_i\ne0}^{d_n}\left(d_n-i\right)\ln\left|a_i\right| + \sum_{i=0,b_i\ne0}^{d_d}\left(d_d-i\right)\ln\left|b_i\right|}{\sum_{i=0,a_i\ne0}^{d_n}\left(d_n-i\right)^2 + \sum_{i=0,b_i\ne0}^{d_d}\left(d_d-i\right)^2}\right) $$ This formula is equivalent to performing a zero-intercept linear regression of the logarithm of each coefficient's magnitude against the gap between its degree and the leading degree.
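A sketch of this computation (the function name is my own; coefficients are passed leading-first, numpy-style, so the power gap \(d-i\) is simply the array index):

```python
import numpy as np

def scale_factor(num, den):
    """Scale factor S via a zero-intercept regression of log |coefficient|
    against the gap from the leading term (coefficients leading-first)."""
    top = 0.0
    bot = 0.0
    for coeffs in (num, den):
        for gap, c in enumerate(coeffs):  # coeffs[gap] multiplies s**(degree - gap)
            if c != 0:
                top += gap * np.log(abs(c))
                bot += gap * gap
    return np.exp(top / bot)
```

Scaling then replaces the coefficient at gap \(i\) with its value divided by \(S^i\); for a polynomial whose coefficients grow by a constant factor per degree (e.g. \([1, 10, 100, 1000]\)), the regression recovers exactly that factor.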

Determining the coefficients when given the component values

As part of the algorithm, it is necessary to determine the coefficients of the transfer function given component values. This is done in a similar way to the method described above: find the poles/zeros and multiply out the polynomials. Finding the leading coefficient is similar, but instead of computing a single gain for the whole transfer function, the numerator and denominator are handled separately. The leading coefficient for the numerator is given by \(\frac{\det\left(\boldsymbol{M'}\left(1\right)\right)}{\left(1 - z_1\right)...\left(1 - z_n\right)}\) and the leading coefficient for the denominator is given by \(\frac{\det\left(\boldsymbol{M}\left(1\right)\right)}{\left(1 - p_1\right)...\left(1 - p_m\right)}\). All of the coefficients of each monic polynomial are then multiplied by the corresponding leading coefficient. After this process, the calculated coefficients will agree with the coefficients you would get if you evaluated the determinant symbolically.
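This can be sketched as follows, assuming the determinant's value at \(s=1\) has already been computed (and that \(s=1\) is not itself a root):

```python
import numpy as np

def coefficients_from_roots(det_at_one, roots):
    """Expand the monic polynomial with the given roots, then rescale it so
    that it evaluates to det_at_one at s = 1; the leading coefficient is
    det_at_one divided by the monic product evaluated at 1."""
    monic = np.poly(roots)                      # monic coefficients, highest power first
    lead = det_at_one / np.polyval(monic, 1.0)
    return lead * monic
```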

Step 3: The algorithm

The operation of the algorithm is based on the fact that, under certain conditions, the arithmetic mean is close to the geometric mean. Consider the term \(K_1K_2K_3 + K_1K_2K_5 + K_2K_3K_4\) from above. This coefficient consists of three terms added together, so we can approximate the arithmetic mean of those three terms by their geometric mean. $$ \frac{K_1K_2K_3 + K_1K_2K_5 + K_2K_3K_4}{3}\approx\left(K_1^2K_2^3K_3^2K_4K_5\right)^{1/3} $$ However, in order to compute either mean, we must know the number of terms that make up each coefficient. To determine this, one can find the coefficients of the transfer function that results from setting all component values to one. Each term in each coefficient will then be one, so the number of terms will be equal to the value of the coefficient. This step is also used to determine which coefficients to use: coefficients with no terms are ignored.
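For the example circuit, the term counts can be recovered numerically: set every component value to 1, evaluate \(\det\left(\boldsymbol{M}(s)\right)\) at a few sample points, and solve for the polynomial coefficients (a sketch; the sample points are arbitrary):

```python
import numpy as np

# Reduced L, R, C matrices of the example circuit with all component values = 1
Lm = np.array([[-1.0, 0, 0], [0, 0, 0], [0, 0, -1.0]])
Rm = np.array([[0, 0, 0], [0, -1.0, 1.0], [0, 1.0, -1.0]])
Cm = np.array([[-2.0, 1.0, 0], [1.0, -1.0, 0], [0, 0, 0]])

deg = 6                      # a 3x3 determinant with entries up to s^2 has degree <= 6
pts = np.arange(1, deg + 2)  # deg + 1 sample points
vals = [np.linalg.det(Lm + s * Rm + s**2 * Cm) for s in pts]
# Solve the Vandermonde system for the coefficients (highest power first)
coeffs = np.linalg.solve(np.vander(pts, deg + 1), vals)
# Each coefficient's magnitude equals its number of multilinear terms
term_counts = np.rint(np.abs(coeffs)).astype(int)
```

For this circuit the counts come out as one term each for \(s^5\), \(s^4\), \(s^2\), and \(s\), and three terms for \(s^3\), matching the expansion of \(\det\left(\boldsymbol{M}\right)\) above; the zero counts for \(s^6\) and \(s^0\) mark coefficients to ignore.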

The algorithm for finding the component values requires an initial guess. Usually, guessing all of the component values to be one is sufficient. Let \(\hat{K_1}, \hat{K_2}, ..., \hat{K_k}\) be the current guesses for the component values and \(K_1, K_2, ..., K_k\) be the correct component values. Since $$ \frac{\hat{K_1}\hat{K_2}\hat{K_3} + \hat{K_1}\hat{K_2}\hat{K_5} + \hat{K_2}\hat{K_3}\hat{K_4}}{3}\approx\left(\hat{K_1}^2\hat{K_2}^3\hat{K_3}^2\hat{K_4}\hat{K_5}\right)^{1/3} $$ and $$ \frac{K_1K_2K_3 + K_1K_2K_5 + K_2K_3K_4}{3}\approx\left(K_1^2K_2^3K_3^2K_4K_5\right)^{1/3} $$ we can say that $$ \frac{K_1K_2K_3 + K_1K_2K_5 + K_2K_3K_4}{\hat{K_1}\hat{K_2}\hat{K_3} + \hat{K_1}\hat{K_2}\hat{K_5} + \hat{K_2}\hat{K_3}\hat{K_4}}\approx\frac{\left(K_1^2K_2^3K_3^2K_4K_5\right)^{1/3}}{\left(\hat{K_1}^2\hat{K_2}^3\hat{K_3}^2\hat{K_4}\hat{K_5}\right)^{1/3}} $$ $$ \left(\frac{K_1K_2K_3 + K_1K_2K_5 + K_2K_3K_4}{\hat{K_1}\hat{K_2}\hat{K_3} + \hat{K_1}\hat{K_2}\hat{K_5} + \hat{K_2}\hat{K_3}\hat{K_4}}\right)^3 \approx \left(\frac{K_1}{\hat{K_1}}\right)^2\left(\frac{K_2}{\hat{K_2}}\right)^3\left(\frac{K_3}{\hat{K_3}}\right)^2\left(\frac{K_4}{\hat{K_4}}\right)\left(\frac{K_5}{\hat{K_5}}\right) $$ Note that we know the value for \(K_1K_2K_3 + K_1K_2K_5 + K_2K_3K_4\), as it is equal to the coefficient of \(s^3\) in the numerator of the transfer function, and we have found all of the coefficients in Step 1. Using the method from the above section, we can also find the coefficients of the transfer function of the circuit with component values \(\hat{K_1}, \hat{K_2}, ..., \hat{K_k}\), so we also know the value of \(\hat{K_1}\hat{K_2}\hat{K_3} + \hat{K_1}\hat{K_2}\hat{K_5} + \hat{K_2}\hat{K_3}\hat{K_4}\). 
Using the rest of the coefficients, we can set up a system of equations: $$ \left(\frac{K_1K_2K_3}{\hat{K_1}\hat{K_2}\hat{K_3}}\right)^1 \approx \left(\frac{K_1}{\hat{K_1}}\right)\left(\frac{K_2}{\hat{K_2}}\right)\left(\frac{K_3}{\hat{K_3}}\right) $$ $$ \left(\frac{K_2K_3K_5}{\hat{K_2}\hat{K_3}\hat{K_5}}\right)^1 \approx \left(\frac{K_2}{\hat{K_2}}\right)\left(\frac{K_3}{\hat{K_3}}\right)\left(\frac{K_5}{\hat{K_5}}\right) $$ $$ \left(\frac{K_1K_3K_5}{\hat{K_1}\hat{K_3}\hat{K_5}}\right)^1 \approx \left(\frac{K_1}{\hat{K_1}}\right)\left(\frac{K_3}{\hat{K_3}}\right)\left(\frac{K_5}{\hat{K_5}}\right) $$ $$ \left(\frac{K_1K_2K_3 + K_1K_2K_5 + K_2K_3K_4}{\hat{K_1}\hat{K_2}\hat{K_3} + \hat{K_1}\hat{K_2}\hat{K_5} + \hat{K_2}\hat{K_3}\hat{K_4}}\right)^3 \approx \left(\frac{K_1}{\hat{K_1}}\right)^2\left(\frac{K_2}{\hat{K_2}}\right)^3\left(\frac{K_3}{\hat{K_3}}\right)^2\left(\frac{K_4}{\hat{K_4}}\right)\left(\frac{K_5}{\hat{K_5}}\right) $$ $$ \left(\frac{K_1K_3K_4}{\hat{K_1}\hat{K_3}\hat{K_4}}\right)^1 \approx \left(\frac{K_1}{\hat{K_1}}\right)\left(\frac{K_3}{\hat{K_3}}\right)\left(\frac{K_4}{\hat{K_4}}\right) $$ $$ \left(\frac{K_1K_2K_4}{\hat{K_1}\hat{K_2}\hat{K_4}}\right)^1 \approx \left(\frac{K_1}{\hat{K_1}}\right)\left(\frac{K_2}{\hat{K_2}}\right)\left(\frac{K_4}{\hat{K_4}}\right) $$ (Note that here, some of the equations are actually exact, but this is not generally true). The left-hand side of each equation is known and can be found by dividing the actual coefficient by the corresponding guessed coefficient. 
After taking the logarithm of both sides, the equations can be rearranged into a matrix equation: $$ \begin{bmatrix} \ln\left(\frac{K_1K_2K_3}{\hat{K_1}\hat{K_2}\hat{K_3}}\right) \\ \ln\left(\frac{K_2K_3K_5}{\hat{K_2}\hat{K_3}\hat{K_5}}\right) \\ \ln\left(\frac{K_1K_3K_5}{\hat{K_1}\hat{K_3}\hat{K_5}}\right) \\ 3\ln\left(\frac{K_1K_2K_3 + K_1K_2K_5 + K_2K_3K_4}{\hat{K_1}\hat{K_2}\hat{K_3} + \hat{K_1}\hat{K_2}\hat{K_5} + \hat{K_2}\hat{K_3}\hat{K_4}}\right) \\ \ln\left(\frac{K_1K_3K_4}{\hat{K_1}\hat{K_3}\hat{K_4}}\right) \\ \ln\left(\frac{K_1K_2K_4}{\hat{K_1}\hat{K_2}\hat{K_4}}\right) \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 & 1 \\ 2 & 3 & 2 & 1 & 1 \\ 1 & 0 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} \ln\left(\frac{K_1}{\hat{K_1}}\right) \\ \ln\left(\frac{K_2}{\hat{K_2}}\right) \\ \ln\left(\frac{K_3}{\hat{K_3}}\right) \\ \ln\left(\frac{K_4}{\hat{K_4}}\right) \\ \ln\left(\frac{K_5}{\hat{K_5}}\right) \end{bmatrix}$$ These equations generally cannot all be satisfied exactly, so a solution is estimated using a least-squares method (such as numpy.linalg.lstsq). Furthermore, if the matrix is ill-conditioned, the least-squares solution may be quite large, resulting in a sudden large change in the component values. To fix this, an identity matrix multiplied by a parameter \(\alpha\) is appended to the bottom of the matrix, and a matching zero vector is appended to the bottom of the left-hand side vector. As a result, the least-squares solver also tries to minimize the length of the solution vector. For larger values of \(\alpha\), the appended rows dominate, so the solver focuses more on producing a small solution vector than on satisfying the original equations. Sometimes \(\alpha\) can be a fixed value, but it may be necessary to adjust \(\alpha\) as the algorithm progresses to ensure convergence. 
$$ \begin{bmatrix} \ln\left(\frac{K_1K_2K_3}{\hat{K_1}\hat{K_2}\hat{K_3}}\right) \\ \ln\left(\frac{K_2K_3K_5}{\hat{K_2}\hat{K_3}\hat{K_5}}\right) \\ \ln\left(\frac{K_1K_3K_5}{\hat{K_1}\hat{K_3}\hat{K_5}}\right) \\ 3\ln\left(\frac{K_1K_2K_3 + K_1K_2K_5 + K_2K_3K_4}{\hat{K_1}\hat{K_2}\hat{K_3} + \hat{K_1}\hat{K_2}\hat{K_5} + \hat{K_2}\hat{K_3}\hat{K_4}}\right) \\ \ln\left(\frac{K_1K_3K_4}{\hat{K_1}\hat{K_3}\hat{K_4}}\right) \\ \ln\left(\frac{K_1K_2K_4}{\hat{K_1}\hat{K_2}\hat{K_4}}\right) \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 & 1 \\ 2 & 3 & 2 & 1 & 1 \\ 1 & 0 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 & 0 \\ \alpha & 0 & 0 & 0 & 0 \\ 0 & \alpha & 0 & 0 & 0 \\ 0 & 0 & \alpha & 0 & 0 \\ 0 & 0 & 0 & \alpha & 0 \\ 0 & 0 & 0 & 0 & \alpha \end{bmatrix} \begin{bmatrix} \ln\left(\frac{K_1}{\hat{K_1}}\right) \\ \ln\left(\frac{K_2}{\hat{K_2}}\right) \\ \ln\left(\frac{K_3}{\hat{K_3}}\right) \\ \ln\left(\frac{K_4}{\hat{K_4}}\right) \\ \ln\left(\frac{K_5}{\hat{K_5}}\right) \end{bmatrix}$$ Finally, the values of \(\ln\frac{K_1}{\hat{K_1}}, ..., \ln\frac{K_k}{\hat{K_k}}\) can be used to compute the component values \(K_1, K_2, ..., K_k\). However, since the above system of equations is only approximate, the component values computed here are also only approximate. They are then used as the next values of \(\hat{K_1}, \hat{K_2}, ..., \hat{K_k}\). In this way, we can find closer and closer guesses for the values of the components in the circuit. The algorithm terminates when the norm of the left-hand side vector is below a given threshold.
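One iteration of this update might be sketched as follows (names are my own; `A` is the exponent matrix shown above, `n_terms` holds the term counts from the all-ones circuit, and the coefficient ratios are assumed positive):

```python
import numpy as np

def update_guess(K_hat, A, n_terms, target, guess, alpha=0.1):
    """One damped least-squares update of the component-value guesses.

    A[j, i]    -- total exponent of component i over the terms of coefficient j
    n_terms[j] -- number of multilinear terms in coefficient j
    target[j]  -- coefficient j of the desired transfer function
    guess[j]   -- coefficient j computed from the current guesses K_hat
    """
    # Each side of equation j was raised to the n_terms[j] power, so the
    # log of the coefficient ratio gets multiplied by n_terms[j].
    lhs = n_terms * np.log(target / guess)
    # Append alpha * I (and matching zeros) to damp large steps
    A_reg = np.vstack([A, alpha * np.eye(len(K_hat))])
    lhs_reg = np.concatenate([lhs, np.zeros(len(K_hat))])
    x, *_ = np.linalg.lstsq(A_reg, lhs_reg, rcond=None)
    return K_hat * np.exp(x)  # x approximates ln(K_i / K_hat_i)
```

The loop then recomputes the guessed coefficients from the new values and repeats until the norm of `lhs` falls below the threshold.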

Step 4: Un-scaling the values

Recall that in Step 2, we scaled the coefficients of the polynomial to bring them closer together. This was equivalent to scaling the complex frequency \(s\). Thus, components corresponding to a higher power of \(s\) (capacitors) must be scaled more than those corresponding to a lower power of \(s\) (inductors). Now that we have our final component values \(K_1, K_2, ..., K_k\), we must adjust them to undo the scaling we applied to the coefficients of the transfer function. Each component value \(K_i\) must be multiplied by \(S^{-P_i}\).
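A sketch of this final step (names are my own; `powers` holds \(P_i\) for each component):

```python
import numpy as np

def unscale(K, powers, S):
    """Multiply each component value K_i by S**(-P_i) to undo the s-scaling."""
    K = np.asarray(K, dtype=float)
    P = np.asarray(powers, dtype=float)
    return K * S ** (-P)
```

With \(S=1000\), values with \(P_i=0\) (inductor terms) are unchanged, \(P_i=1\) (resistor terms) are divided by \(10^3\), and \(P_i=2\) (capacitor terms) by \(10^6\).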

Time complexity analysis

Computing the poles and zeros of a circuit using generalized eigenvalues takes \(O\left(n^{3}\right)\) time and expanding those poles and zeros into a polynomial takes \(O\left(n^{2}\right)\) time. Overall, generating the coefficients of the transfer function therefore takes \(O\left(n^{3}\right)\) time.

Computing the coefficient matrix requires computing the coefficients once per component. Thus, the time complexity of this step is \(O\left(n^{3}k\right)\). Each iteration of the algorithm also only requires computing the coefficients once, so it has a time complexity of \(O\left(n^{3}\right)\).

Performance

What it does well

The algorithm seems to do well on circuits that can be split up into two sub-circuits: one between the input and the output and one between the output and ground. The following table shows each circuit alongside the plot showing how quickly the algorithm converged for that circuit. For each of the following examples, \(\alpha\) was zero.

What it doesn't do well

The algorithm tends to struggle with circuits with multiple stages. It will generally diverge after a few iterations if \(\alpha=0\). If \(\alpha\ne0\), it will converge to a value that is not optimal, i.e. it will not reach the desired error. Some examples include the 8-pole Butterworth low-pass and the 5-pole inverse Chebyshev low-pass.
