Looking forward in EDA, disruption is always seen as the path to big gains. But looking back, large gains often come from the accumulation of many small changes.
Sometimes, we spend so much time looking for the next big thing that we actually miss something even bigger. I have to admit I was guilty of this while employed by a large EDA company 20 years ago. I was one of those ESL people — Electronic System Level acolytes, with Gary Smith as our standard bearer. We wanted to do many things, including raising the level of abstraction for design and verification, function/architecture definition and mapping, hardware/software co-design, performance analysis, high-level synthesis (HLS), and much more. What we actually ended up with were virtual prototypes and a highly reduced form of HLS that could accept only a restrictive subset of SystemC, a language that was quite a long way from software.
We have not gotten much closer to those goals over the past two decades, and yet we are now able to design systems that are thousands of times larger and more complex, with more heterogeneous and homogeneous processing cores than we ever could have imagined. This is not without limitations, because we still do not know how to properly do performance or power analysis on those systems. But what we are unable to do through analysis is dealt with by margining, real-time measurement, or the incorporation of guardrails that ensure the chip remains within safe operating conditions.
What the ESL crowd missed was the massive gain that comes from compounding little gains. I am still somewhat blinkered by going for the big win (that would probably make me a bad gambler), but content recently gathered for a couple of articles has provided examples of how those small gains are a better use of time and money than gambling on the next big thing.
The first example is from Marc Swinnen at Ansys, who talked about power optimization. I had asked about the amount of power being wasted in a typical design. “In the past, I have talked to customers about this very problem. I might tell them that using my tool will save them 10%, 15% power. Their response might be, ‘That’s not going to make my day. That’s not worth it to me.’ Then there is another technique. What will that save me? I tell them that it will save you 5% or 7%. ‘That’s not worth my time.’ Every technique was shot down because it wasn’t worth their time. And at the end they say, how is it that my competitor can manage to get these really low-power designs? Because they pay attention to power at every single step along the way, even the small increments. It all adds up. You can disregard the small contributions at every step, but in the end, it’s like going on a diet. Any particular cookie, any particular walk, isn’t going to make a big difference, but it all adds up over time. And that’s how you achieve a result — by being conscientious at every step.”
Many others agree this is the only way to achieve a low-power design, though some add that it only takes one bad cookie to spoil the whole thing. A very power-efficient piece of hardware cannot provide the expected savings if bad software is put on it, especially if that software ignores the very features the hardware team inserted to make it power efficient. Lots of small improvements can make a very big difference.
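The arithmetic of compounding is worth making concrete. A minimal sketch, using hypothetical per-technique percentages in the range Swinnen mentions (these numbers are illustrative, not from any tool):

```python
# Hypothetical per-technique power reductions. Each one alone looks
# too small to bother with, but they compound multiplicatively.
savings = [0.10, 0.15, 0.05, 0.07, 0.03]

remaining = 1.0
for s in savings:
    remaining *= (1 - s)  # each technique trims what is left

total_saved = 1 - remaining
print(f"Total power saved: {total_saved:.1%}")  # Total power saved: 34.4%
```

Five techniques that were each "not worth my time" remove roughly a third of the power budget when applied together.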
The second example comes from the roundtable posted this month on the subject of formal verification. As journalists, we tend to hear about the big things that happen in our industry. Big changes are normally associated with new market entrants, people who challenge the status quo. It has been a long time since I heard about anything happening in the formal space, and thus I expected to hear that little had changed. Those views were quickly shot down.
Jeremy Levitt at Siemens said, “There have been points where there have been discontinuities, and new algorithms have come along. But even without that, from release to release, from year to year, you see the tools get 25% faster, twice as fast. You see exponential performance gains.”
Sean Safarpour at Synopsys added: “SAT solvers are one of the main foundation solvers for formal verification. And the SAT competition has been around for about 20 years. Every once in a while, a new SAT solver comes up. There was Chaff and zChaff. This was groundbreaking, and people said this is probably as good as it gets for an NP complete problem. And then MiniSAT comes up. These things happen, and then for a while, even SAT experts would say things quieten down. But you go back five years, kissat, MapleSAT, a lot of new solvers are coming. Recent results are blowing the previous year’s results out of the water.”
I doubled down and asked them if massive data centers had not provided a perfect opportunity for formal now that it had access to unlimited memory and unlimited horsepower — the two things that I had always been told were the limiters to more complex proofs.
Levitt responded, “These new architectures reliably give you a linear speed up, but the problems are exponential, or at least NP hard. Improvements in engineering related to the algorithms we run, how we orchestrate them, are also getting exponentially better. Increased compute power, we will take that, but it is a linear speed up, sometimes super linear if it enables you to do more things at once.”
“You can throw the biggest machine at [a formal problem], but you can’t actually make a difference,” said Ashish Darbari at Axiomise. “People who have been using formal understand that. Would I be able to get the same result on a smaller machine? We got the same proofs, except it took a little bit longer. Compute will make a difference, but it’s asymptotic, and throwing more and more compute at the problem wouldn’t make a significant difference.”
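A back-of-the-envelope sketch of why Levitt and Darbari are unimpressed by raw compute (hypothetical model, not from the roundtable): if a proof explores on the order of 2^n states, a linearly faster cluster only extends reach by log2 of the speedup.

```python
import math

def extra_vars_from_compute(speedup):
    # Assuming a 2**n state space, a k-times-faster (linear) cluster
    # extends the reachable problem size by only log2(k) state variables.
    return math.log2(speedup)

print(extra_vars_from_compute(2))     # 1.0  -> double the machines, one more variable
print(extra_vars_from_compute(1024))  # 10.0 -> 1024x the compute, ~10 more variables
```

By contrast, an algorithmic gain of 25% per release, compounded over a decade of releases, multiplies throughput by roughly 9x at every problem size — which is why solver improvements, not bigger machines, move the needle.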
I still get more excited by big changes. But the point is that in this industry, which does not like change because of the extra risk associated with a disruption, lots of small changes probably produce a better return over a long period of time. At least, in this case, that is what history tells us.