The search for the Holy Grail in Analog Design Automation
The optimization of analog circuit sizing has long been recognized as a challenging task within the field. This difficulty has given rise to two distinct approaches to design optimization:
1. Knowledge-based optimization
2. Simulation-based optimization
In the first approach, referred to as knowledge-based optimization, tools like OASYS and ASTRX/OBLX have been developed. These tools operate on the basis of pre-defined rules and expert knowledge, guiding the optimization process. On the other hand, simulation-based optimization, as exemplified by ANACONDA, relies on simulating the circuit's behavior under various conditions to determine the optimal sizing.
Despite the efforts made by both schools of thought, neither has been successful in creating a widely accepted commercial design automation tool that fully satisfies the expectations of industrial design experts. These expectations include aspects such as design productivity, full traceability, easy debugging, design reuse, quality assurance, minimum cost, and comprehensive design documentation.
Therefore, the question remains: is analog circuit sizing optimization an NP-complete or NP-hard problem?
Our intention is to examine the mathematical principles that underpin this claim, and to construct a new computational model capable of turning this problem into a P-type one.
P, NP, NP-Complete and NP-Hard problems:
In computational complexity theory, P, NP, NP-complete, and NP-hard are classes of decision problems.
1. P (Polynomial Time): P is the class of decision problems that can be solved by a deterministic Turing machine in polynomial time. This means that the solution can be found efficiently with a time complexity of O(n^k), where n is the size of the input and k is a fixed constant.
2. NP (Nondeterministic Polynomial Time): NP is the class of decision problems that can be verified by a nondeterministic Turing machine in polynomial time. This means that given a potential solution, it can be verified efficiently with a time complexity of O(n^k). However, finding the solution itself may be computationally expensive (a short verification sketch follows this list).
3. NP-complete: NP-complete problems are a subset of NP problems. A decision problem is NP-complete if every problem in NP can be reduced to it in polynomial time. In other words, if a polynomial-time algorithm exists for one NP-complete problem, then it would imply the existence of polynomial-time solutions for all NP problems. This makes NP-complete problems some of the most challenging and important problems in computer science.
4. NP-hard: NP-hard problems are at least as hard as the hardest problems in NP: every problem in NP can be reduced to them in polynomial time. They may or may not be in NP themselves; the NP-complete problems are exactly the NP-hard problems that are also in NP. NP-hardness therefore does not imply that a problem is solvable in polynomial time.
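To make the asymmetry between finding and verifying a solution concrete, here is a minimal Python sketch (added for illustration, not part of the original argument) using the classic SUBSET-SUM problem with arbitrary input values: verifying a proposed certificate takes polynomial time, while the naive solver enumerates an exponential number of subsets.

    # Minimal sketch of the solve-vs-verify asymmetry using SUBSET-SUM.
    # The input values are arbitrary and chosen only for illustration.
    from itertools import combinations

    def verify_certificate(numbers, target, certificate):
        """Polynomial-time verifier: checks a proposed subset directly."""
        return all(x in numbers for x in certificate) and sum(certificate) == target

    def solve_by_search(numbers, target):
        """Brute-force solver: explores up to 2^n subsets (exponential time)."""
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                if sum(subset) == target:
                    return list(subset)
        return None

    numbers, target = [3, 34, 4, 12, 5, 2], 9
    candidate = solve_by_search(numbers, target)                      # hard direction
    print(candidate, verify_certificate(numbers, target, candidate))  # easy direction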
Is the P = NP Statement Valid?
A question that has fascinated many computer scientists is whether or not all problems in NP belong to P.
For our definitions, we assumed that P ≠ NP; however, P = NP may be possible. If it were so, then beyond all NP and NP-complete problems becoming solvable in polynomial time, certain NP-hard problems would also simplify dramatically: those that are themselves in NP (the NP-complete ones) would become solvable in polynomial time, collapsing into P = NP = NP-complete as well.
This would mean a radical change in computer science and even in real-world scenarios. Several security mechanisms rely on the assumption that certain computations take impractically long. Many encryption schemes in cryptography are based on integer factorization, for which no polynomial-time algorithm is known. If a polynomial-time algorithm were found, these schemes would become vulnerable to attacks.
How did Analog Circuit Optimization start?
Since the 1980s, analog circuit optimization has been assumed to be an NP problem due to its complex nature. Following the definitions of NP given above, NP problems can be verified easily but are hard to solve. Consequently, the field adopted the approach of using an optimization engine to generate candidate solutions and verifying each candidate with a performance evaluator such as a SPICE simulator. This point of view led to the emergence of simulation-based optimization tools such as ASF, ANACONDA and MAELSTROM, as depicted in Figure 2.
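To illustrate what such a generate-and-verify loop looks like, here is a minimal Python sketch. The function evaluate_with_spice, the parameter bounds and the cost function are hypothetical placeholders, not the actual interfaces of ASF, ANACONDA or MAELSTROM; a real flow would netlist each candidate, run the simulator and compare the measured performances against the specifications.

    # Minimal sketch of a simulation-based, generate-and-verify sizing loop.
    # evaluate_with_spice, the bounds and the cost are hypothetical placeholders.
    import random

    BOUNDS = {"W1": (0.2e-6, 100e-6), "W2": (0.2e-6, 100e-6), "Ibias": (1e-6, 1e-3)}

    def evaluate_with_spice(sizing):
        """Placeholder for a SPICE run returning a cost (lower is better)."""
        # A real evaluator would netlist the candidate, simulate it and
        # measure gain, bandwidth, power, etc. against the specifications.
        return sum(sizing.values())  # dummy cost, for illustration only

    def random_search(n_candidates=1000):
        best_sizing, best_cost = None, float("inf")
        for _ in range(n_candidates):
            candidate = {p: random.uniform(lo, hi) for p, (lo, hi) in BOUNDS.items()}
            cost = evaluate_with_spice(candidate)   # the "verification" step
            if cost < best_cost:
                best_sizing, best_cost = candidate, cost
        return best_sizing, best_cost

    print(random_search())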
A general and honest observation has been outlined in [31]: despite the huge computational effort spent in that work to optimize the equalizer, the optimization engine was not capable of reaching the precision of hand analysis.
This observation is fundamental to our analysis, and it raises the real questions addressed in the sections that follow.
A Deeper Look into the SPICE Computational Model:
To determine how efficient the computational model implemented in the SPICE simulator is, we analyzed the nonlinear DC analysis graphically, since it is the backbone algorithm of the SPICE engine; other analyses such as AC and TRANSIENT are essentially variants of the nonlinear DC analysis.
To measure the complexity of the nonlinear DC analysis, which implements the Newton-Raphson algorithm, we modeled the DC analysis as a cyclic graph, as illustrated below:
A deeper look into the graph on the right side reveals how complex the computational model of SPICE is. Loops exist at the level of each vertical branch, and forward and backward value propagation occurs from one branch to the next, increasing the dependency between branches. These propagations exist because of the diagonal and non-diagonal Jacobian elements, and all of these computations are repeated at every iteration of the nonlinear DC simulation. We conclude that the computational model of SPICE inherently requires a large number of computations and suffers from convergence issues due to the presence of Jacobian elements in each forward and backward propagation. Over the years, extensive research has reduced the complexity of the DC analysis to roughly O(n^1.2), i.e., α·n^1.2, where n is the number of node voltages to compute.
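For readers who want to see the backbone algorithm itself, here is a minimal Python sketch of the Newton-Raphson loop behind nonlinear DC analysis, applied to a toy two-node circuit (a voltage source driving a diode through two resistors). The component values are illustrative, and real simulators add limiting, pivoting and other convergence aids on top of this basic scheme; the point is simply that every iteration assembles a Jacobian matrix and solves a linear system.

    # Minimal Newton-Raphson sketch for nonlinear DC analysis of a toy circuit:
    # Vs -- R1 -- node1 -- R2 -- node2 -- diode to ground. Values are illustrative.
    import numpy as np

    VS, R1, R2 = 1.0, 1e3, 1e3        # source voltage and resistors
    IS, VT = 1e-14, 0.02585           # diode saturation current and thermal voltage

    def residual(v):
        """KCL equations f(v) = 0 at the two unknown nodes."""
        v1, v2 = v
        return np.array([(v1 - VS) / R1 + (v1 - v2) / R2,
                         (v2 - v1) / R2 + IS * (np.exp(v2 / VT) - 1.0)])

    def jacobian(v):
        """Conductance (Jacobian) matrix rebuilt at every iteration."""
        _, v2 = v
        gd = IS / VT * np.exp(v2 / VT)              # diode small-signal conductance
        return np.array([[1 / R1 + 1 / R2, -1 / R2],
                         [-1 / R2,          1 / R2 + gd]])

    v = np.array([0.5, 0.5])                        # initial guess for node voltages
    for _ in range(100):
        dv = np.linalg.solve(jacobian(v), -residual(v))
        v += dv
        if np.max(np.abs(dv)) < 1e-9:               # convergence test on the update
            break
    print(v)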
To summarize this section, the computational model of SPICE is inherently iterative, globally coupled through the Jacobian matrix, and therefore computationally expensive and prone to convergence issues.
The next question that arises is the following: analog designers do not explicitly compute the Jacobian matrix during hand analysis, so how do they do it?
Computational Model used by an experienced analog designer
To compute the size W1 of the upper transistor in the schematic, the designer proposes values for the biasing current, the MOS length and the node voltage Vn1, and then computes W1 using a simple MOS model. Next, the designer propagates Vn1 to the next transistor, proposes a node voltage Vn2, and computes W2 assuming the same current and length, and so on, as sketched below.
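A minimal Python sketch of this forward, designer-style computation is given below. It assumes a level-1 (square-law) MOS model in saturation; the parameter values, supply and node voltages, and Vgs expressions are illustrative placeholders rather than values from any particular schematic or PDK.

    # Minimal sketch of the designer's forward computation with a level-1
    # (square-law) MOS model in saturation. KP, VTH, bias values and the Vgs
    # expressions are illustrative placeholders only.
    KP  = 100e-6   # transconductance parameter KP = mu * Cox (A/V^2)
    VTH = 0.4      # threshold voltage (V)

    def width_from_bias(i_d, length, v_gs):
        """Invert Id = 0.5 * KP * (W/L) * (Vgs - Vth)^2 to get W."""
        v_ov = v_gs - VTH
        return 2.0 * i_d * length / (KP * v_ov ** 2)

    # Designer's proposals: bias current, length, and node voltages
    IBIAS, L = 20e-6, 0.5e-6
    VDD, VN1, VN2 = 1.8, 1.2, 0.7

    W1 = width_from_bias(IBIAS, L, v_gs=VDD - VN1)   # upper transistor, Vgs = Vdd - Vn1
    W2 = width_from_bias(IBIAS, L, v_gs=VN1 - VN2)   # next transistor, Vgs = Vn1 - Vn2
    print(W1, W2)                                    # 5e-06, 2e-05 with these numbers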
We conclude that, during hand analysis, the model of computation followed by the analog designer is much simpler than the SPICE one: values are proposed and propagated forward from one device to the next, without assembling and inverting a global Jacobian matrix at every iteration.
Here, the big question that arises is the following: can we combine the simplicity of the structured resolution approach used during hand analysis with the precision of SPICE simulation to design, explore, optimize and migrate analog circuits?
Structured Resolution Approach for Analog Design Automation
It is possible to build an analog synthesis system that mimics the analog designer's behavior during hand analysis but does the same job at SPICE precision. This can be achieved by replacing the MOS level-1 models with a standardized API to Process Design Kits (PDKs) that is compatible with all device types and all foundry PDKs. The resulting approach is illustrated in Figure 5, where sizing and biasing functions F interface with the PDK files to compute the sizes and biases of each transistor sequentially.
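The following Python sketch illustrates the idea of a sizing and biasing function F sitting on top of a standardized PDK interface. The class and method names (PDKInterface, drain_current, SquareLawPDK) are hypothetical, and the square-law stand-in replaces a real compact model or DC simulation; the point is that the sizing procedure is written once and only the PDK-backed current evaluation changes from foundry to foundry.

    # Minimal sketch of a sizing function F on top of a standardized PDK API.
    # All names below are hypothetical illustrations, not an actual product API.
    from abc import ABC, abstractmethod

    class PDKInterface(ABC):
        """Foundry-agnostic view of a device: one implementation per PDK."""

        @abstractmethod
        def drain_current(self, device_type, w, l, v_gs, v_ds, v_bs):
            """Return Id from the PDK's compact model (or an accurate DC sim)."""

    class SquareLawPDK(PDKInterface):
        """Toy stand-in for a real PDK, reusing the level-1 model above."""

        def drain_current(self, device_type, w, l, v_gs, v_ds, v_bs):
            kp, vth = 100e-6, 0.4                    # illustrative parameters
            return 0.5 * kp * (w / l) * max(v_gs - vth, 1e-9) ** 2

    def size_for_current(pdk, device_type, i_target, l, v_gs, v_ds, v_bs=0.0):
        """Sizing function F: find W such that Id(W) matches i_target.
        Id grows with W, so a few rescaling steps suffice for this sketch."""
        w = 1e-6                                     # unit-width starting point
        for _ in range(10):
            i_d = pdk.drain_current(device_type, w, l, v_gs, v_ds, v_bs)
            w *= i_target / i_d                      # rescale width toward the target
        return w

    print(size_for_current(SquareLawPDK(), "nmos", 20e-6, 0.5e-6, v_gs=1.0, v_ds=0.9))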
In doing so, an efficient cognitive computing system is created, capitalizing on the efficiency of human designers while relying on the accuracy of computer simulation programs.
Here, the next question that arises is: how do we build a product to automate analog design and migration?
Digitally-Inspired Analog Circuit Synthesis System
Figure 6 below shows how to build an analog circuit synthesis flow similar to digital high-level synthesis systems, which are very mature and have been in production for decades. The analog flow relies on four modules:
1- The Schematic Scheduler: This module scans the schematic and determines the sequence in which the transistors are computed. Here, it acts like a human designer, implementing cognitive computing strategies.
2- The Function Allocator: This module assigns to each device a sizing and biasing function that computes its biases and geometries based on its connectivity context.
3- The Technology Mapper: This module links each transistor to the PDK files based on the requested device variant, in order to size and bias the device accurately using standard compact device models or simply accurate DC simulations.
4- The Graph Evaluator: In this analog flow, steps (1)-(3) describe the circuit as directed acyclic graphs that are evaluated in step (4) to obtain the sizes and biases, as sketched below.
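A minimal Python sketch of the graph evaluation step is shown below. The node names and operator functions are hypothetical and reuse the square-law numbers from the hand-analysis sketch above; in the real flow each operator would be a PDK-backed sizing and biasing function, and the whole graph is evaluated in a single topological pass.

    # Minimal sketch of step (4): evaluating a DAG of sizing/biasing operators
    # in topological order. Nodes and operators are hypothetical illustrations.
    from graphlib import TopologicalSorter   # standard library, Python 3.9+

    # Each node maps to (operator, list of predecessor nodes); leaves are the
    # designer-chosen parameters.
    graph = {
        "Ibias": (lambda: 20e-6, []),
        "Vn1":   (lambda: 1.2,   []),
        "W1":    (lambda ib, vn1: 2.0 * ib * 0.5e-6 / (100e-6 * (1.8 - vn1 - 0.4) ** 2),
                  ["Ibias", "Vn1"]),
    }

    def evaluate(graph):
        """Single pass over the DAG: linear in the number of nodes and edges."""
        order = TopologicalSorter({n: deps for n, (_, deps) in graph.items()}).static_order()
        values = {}
        for node in order:
            op, deps = graph[node]
            values[node] = op(*(values[d] for d in deps))
        return values

    print(evaluate(graph))   # W1 evaluates to 5e-06 with these illustrative numbers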
P-type Knowledge Graphs Generated by the synthesis system
To appreciate the proposed methodology, we illustrate in Figure 7 the Miller OTA amplifier to be synthesized:
Figure 8 shows the corresponding generated bipartite knowledge graph, which takes as input the designer-chosen parameters and the topology of the Miller OTA and computes the size and bias of each device as the output of its allocated function.
The complexity of this bipartite knowledge graph is linear in the number of devices and nodes in the circuit. It therefore represents a P-type computational model for the Miller OTA, describing the full design space of the DC operating point. It has been generated cognitively, without any data-driven learning.
We conclude that the analog circuit sizing optimization task is neither NP-Complete nor NP-Hard. It is solvable as a P-type problem.
References:
__________________________________________
Associate Professor at GEEPS/CENTRALESUPELEC/CNRS/SORBONNE · 3 months ago
Please check this post: https://www.dhirubhai.net/posts/activity-7227002113459965953-ZjIB?utm_source=share&utm_medium=member_desktop
Your article kindly reminds us that transistor level simulation was created 50 years ago. Even if technology has evolved, if we run bigger and bigger netlists, and decrease simulation time, we still use the same approach based on linearization and conductance matrix solving. Your refreshing approach based on graphs and closer to the human way of thinking shows that we have room for improvement and gives us hope to see a breakthrough in this domain. This would unlock very concrete problems we have today, for example when spending time on design migration. And why not talk about analog design synthesis... Thank you for sharing this.
Associate Professor at GEEPS/CENTRALESUPELEC/CNRS/SORBONNE · 10 months ago
Thank you Alexandre for your precious feedback. You are right, many unsolved fields in analog EDA can be unlocked today.