The Era of Full Stack Chip Designers


Disclaimer: Opinions shared in this, and all my posts are mine, and mine alone. They do not reflect the views of my employer(s), and are not investment advice.

A few years back, when I was talking to a student who was interested in both the front-end and back-end stages of chip design, I made a cheeky remark that they should become a “Full Stack Chip Designer”. (I can’t remember who I was talking to, but if you are reading this, this post is dedicated to you!) It was a term I borrowed from software engineering - a full stack engineer is someone who has the skills to work on both the front-end and back-end of a software application. When I said “full-stack” in the chip design context, I meant it as a joke - very few people actively work on all the steps in the RTL-to-GDS chip design flow, especially when designing complex, real-world chips. I must admit I have never done full stack software engineering, but from what I gather, skills transfer more easily between front-end and back-end in software than they do in chip design. But with AI in the picture, something tells me we might be heading towards the era of full-stack chip designers.

I briefly introduced the idea earlier, but let me be more specific here. The typical digital chip design process can be divided into two broad categories:

  • Front-End, which involves designing the microarchitecture specification and describing the logic using Hardware Description Languages (HDLs) like Verilog. (The resulting “code” is called RTL, short for Register Transfer Level.)

  • Back-End, which involves taking the RTL and converting it into a physical layout (delivered in the GDS format) that a foundry can use to manufacture the chip. This is a multi-step process that uses EDA tools from companies like Cadence and Synopsys.
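To make the back-end side a little more concrete, here is a minimal Python sketch of how these stages chain together. The tool names and arguments are entirely hypothetical stand-ins, not real Cadence or Synopsys commands - real flows involve far more configuration, but the overall shape is the same.

```python
import subprocess

# Hypothetical RTL-to-GDS flow: each stage consumes the output of the
# previous one. Tool names and arguments are placeholders, not real
# EDA vendor commands.
STAGES = [
    ("synthesis",  ["synth_tool", "-rtl", "design.v", "-out", "netlist.v"]),
    ("floorplan",  ["pnr_tool", "floorplan", "netlist.v"]),
    ("placement",  ["pnr_tool", "place", "netlist.v"]),
    ("clock_tree", ["pnr_tool", "cts", "netlist.v"]),
    ("routing",    ["pnr_tool", "route", "netlist.v"]),
    ("gds_export", ["pnr_tool", "export_gds", "-out", "design.gds"]),
]

def run_flow():
    for name, cmd in STAGES:
        print(f"[flow] running {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            # In practice, a failure here is what kicks off the long
            # feedback loop back to the front-end team described below.
            raise RuntimeError(f"{name} failed:\n{result.stderr}")

if __name__ == "__main__":
    run_flow()
```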

Typically, the teams managing these two aspects work in isolation from each other - the front-end team finalizes the RTL, then hands it off to the back-end team to generate the GDS. There are a few “leads” involved in the back-and-forth between the teams, but their role is to relay feedback between them. The leads have a good understanding of both the front-end and the back-end, but typically do not work directly on the implementation of either. In contrast, when I refer to a full-stack chip designer, I mean an individual (or a small team) that can take a design idea all the way from front-end to back-end - fully implementing every step in the process.

There are a lot of smart people in chip design - so if I could think of this idea, you can be sure that a lot of people have thought of it. In fact, many early chip designers were truly full-stack - they worked on every step themselves. I would even say that if you have ever taped out a chip for a university project, you are technically a full stack chip designer too. So it is clearly not an impossible problem to solve - it is just very hard to solve at the scale of today’s commercial chips with billions of transistors. There are two main reasons for this:

The first reason is the tooling barrier, which I think is a well documented problem. (I have also talked about it in one of my earlier posts.) Because the design space is so large, EDA tools are complex to use and produce reports filled with convoluted terminology. As a result, it is not straightforward for a front-end designer to dive into back-end tools. To make matters worse, bugs in the back-end design process are very hard to spot, and if left unattended, could result in a chip that does not function (a chip re-spin is a billion-dollar affair, so no one wants that).
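As a small illustration of that terminology gap, here is a hedged Python sketch that pulls two common timing figures - worst negative slack (WNS) and total negative slack (TNS) - out of a made-up report excerpt and restates them in plain language. The report format below is invented for the example and does not match any specific tool’s output.

```python
import re

# A made-up excerpt of a static timing analysis report. Real tool
# reports are far longer and their exact format varies by vendor.
REPORT = """
Timing summary (setup)
  wns: -0.123 ns
  tns: -4.56 ns
  failing endpoints: 37
"""

def summarize_timing(report: str) -> str:
    wns = float(re.search(r"wns:\s*(-?\d+\.\d+)", report).group(1))
    tns = float(re.search(r"tns:\s*(-?\d+\.\d+)", report).group(1))
    if wns >= 0:
        return "All timing paths meet their constraints."
    return (f"The slowest path misses its deadline by {abs(wns):.3f} ns (WNS), "
            f"and the misses across all failing paths add up to {abs(tns):.2f} ns (TNS). "
            "The RTL or the constraints on these paths need attention.")

print(summarize_timing(REPORT))
```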

I also think writing good RTL is an art that takes time to master - unlike high-level programming languages, HDLs are less intuitive, which makes them harder for back-end experts to pick up. All this has meant that back-end designers spend their time becoming experts in the tools they work with, while front-end engineers prefer to keep doing what they do best.

The second reason is more nuanced and does not show up in small chips or university projects. When working on industry-scale chips, front-end and back-end designers have different priorities. Front-end designers start by becoming experts on a small part of the design; to do that, they have to treat all other parts of the design as black boxes. Back-end design, however, is only effective at a coarser granularity - if a chip has two blocks but the back-end is handled for each block separately, the interactions between the blocks (signals routed across, how often they access hard macros, and so on) are not captured well, resulting in a low-quality layout. Hence, the goal of a back-end designer is quite different: to gain a basic understanding of all parts of the design, sacrificing depth in any single part.

Essentially, when it comes to understanding the chip, a front-end designer should be a specialist, while a back-end designer should be a generalist. Getting an individual (or team) to do both is a challenge.

So why try to bridge the two at all? Short answer: time.

Although the front-end and back-end teams work in isolation, they are working towards the same goal - building a better chip. However, there is a long feedback loop between the two stages: typically, once the front-end team delivers the RTL, the back-end team finds layout optimizations that require changes in the front-end. The front-end team then needs to distill the feedback, figure out whether the request is even feasible, and if so, implement the change. This takes weeks, or sometimes even months. Projects go through multiple iterations like this, which either massively stretches chip design timelines or results in a sub-par chip getting shipped.

Bringing both front-end and back-end under the same umbrella can massively shrink this timeline, by:

  • Making the iteration time faster (“I know exactly what needs to change for a better layout, and whether that’s possible to do in the RTL”)

  • Reducing the number of iterations (“When I write RTL, I also know how it is going to impact the layout”)

This time saving is crucial - the semiconductor industry is highly cyclical, so when the chip you build is hot, quickly shipping better versions is key to dominating your industry.

I spoke about the potential of AI in chip design EDA in an earlier post - I think reading it adds useful context to this discussion.

In addition to the points mentioned there, I want to specifically talk about the two ways I feel AI can make full-stack chip design a reality:

The first is knowledge transfer. If you look at the two reasons why full-stack is hard in chip design, they boil down to the same thing - transferring knowledge between the two sides is hard. I think AI, even in its current form, can solve this problem to a good extent.

For instance, it is possible to develop and maintain a database of all the scripts, best practices, and terminology associated with back-end tools - effectively allowing an expert in front-end design to move ahead with back-end flows on their own. Similarly, if LLM-based RTL generation can be solved effectively, it would allow a back-end designer to also handle front-end changes.
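Here is a minimal sketch of what such a database could look like, assuming a plain keyword lookup rather than a real retrieval pipeline or LLM - every entry, file name, and function below is made up for illustration:

```python
# Hypothetical knowledge base mapping back-end terminology and tasks to
# the scripts and best practices a team has accumulated over time.
KNOWLEDGE_BASE = {
    "congestion": "Check routing congestion maps after placement; consider "
                  "reducing utilization or adding placement blockages.",
    "hold violation": "Run the hold-fix recipe after clock tree synthesis; "
                      "see scripts/fix_hold.tcl for the team's standard flow.",
    "max transition": "Usually a missing buffer or an overly long net; the "
                      "lint checklist covers the common causes.",
}

def lookup(question: str) -> str:
    """Return every knowledge-base entry whose key appears in the question."""
    hits = [tip for term, tip in KNOWLEDGE_BASE.items() if term in question.lower()]
    return "\n".join(hits) if hits else "No match - ask a back-end lead."

print(lookup("Why am I seeing a hold violation after CTS?"))
```

In practice, the matching entries would be fed to an LLM as retrieved context rather than shown verbatim, but even this toy version captures the idea: encode a back-end team’s tribal knowledge somewhere a front-end designer can query it.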

I also think LLMs are going to play a big role in documentation and teaching - if basic questions about every part of a design can be answered immediately, a specialist in one part of the design can expand into a generalist when needed.

The second is productivity. Although I didn’t mention it so far, having separate front-end and back-end teams is also important for maintaining realistic working hours - from the conversations I have had, chip designers work more hours than software engineers, especially close to deadlines. So in the current state, even if an individual could manage both front-end and back-end, it’s not practical to have them work on both. This is where I feel AI’s big productivity promise is going to play a role.

I think an agentic RTL-to-GDS flow is coming soon, one that automates a lot of the mundane tasks chip designers must do today. That would greatly help with the workload problem and make the idea of a full stack chip designer practical.
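To make “agentic” a little more concrete, here is a heavily simplified Python sketch of such a loop. The stage runner, the report check, and the single hard-coded fix are all stand-ins for what real EDA tools and an actual agent would provide - treat it as a shape, not an implementation.

```python
from dataclasses import dataclass

@dataclass
class StageResult:
    stage: str
    clean: bool
    issue: str = ""

def run_stage(stage: str, design_knobs: dict) -> StageResult:
    # Stand-in for launching an EDA tool and parsing its reports.
    if stage == "routing" and design_knobs["utilization"] > 0.75:
        return StageResult(stage, clean=False, issue="routing congestion")
    return StageResult(stage, clean=True)

def propose_fix(issue: str, design_knobs: dict) -> dict:
    # A real agent would reason over reports; here we hard-code one rule.
    if issue == "routing congestion":
        return {**design_knobs, "utilization": design_knobs["utilization"] - 0.05}
    raise RuntimeError(f"No automatic fix for: {issue}")

def agentic_flow(design_knobs: dict, max_iterations: int = 5) -> dict:
    stages = ["synthesis", "placement", "clock_tree", "routing", "gds_export"]
    for _ in range(max_iterations):
        results = [run_stage(s, design_knobs) for s in stages]
        failures = [r for r in results if not r.clean]
        if not failures:
            return design_knobs  # flow converged; GDS is ready
        design_knobs = propose_fix(failures[0].issue, design_knobs)
    raise RuntimeError("Flow did not converge - escalate to a human designer.")

print(agentic_flow({"utilization": 0.85}))
```

The point of the loop is that the iterate-on-feedback cycle described earlier happens in minutes inside the flow, instead of weeks between two teams.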

So far, I have talked about going “full-stack” as an opportunity - if someone wants to do both front-end and back-end, they might be able to in the near future, and build better chips. This is how it works in software engineering today - some developers become full-stack, while others choose to specialize in one side. But can chip design afford to offer this flexibility?

I can imagine a future where becoming a full-stack chip designer may not just be an option - it could become the norm. To understand why I feel this way, let’s look at the economics of chip design. At a high level, chip design companies spend money on the following:

  • Payroll for chip designers

  • Cost to manufacture wafers

  • EDA tool licenses

With the way things are headed today, manufacturing costs are increasing sharply - each new node is becoming more challenging to implement, and as a result, more expensive. So this is going to start eating into the profit margins of most chip design companies.

The next factor is EDA licenses. There are many AI-first EDA startups coming up today, but they still don’t have access to the Process Design Kit (PDK) from chip manufacturers. (If you don’t know what a PDK is, think of it as a secret recipe that a manufacturer like TSMC provides to EDA vendors like Cadence and Synopsys in order to correctly map chip designs to a layout that can be manufactured.) As long as PDKs remain under tight control, the legacy EDA vendors cannot be replaced. This leaves two possible outcomes:

  • AI EDA tools add a new agentic layer in chip design

  • Legacy EDA vendors do all the AI themselves and become more dominant

Either way, I see the cost related to EDA going up. (Not to mention, a lot of chip design companies also need to upgrade compute to handle the new AI workflow.) So to maintain their profit margins, chip design companies might raise their prices - but that may not be sustainable unless they have a very strong moat.

This leads us to the point I’m trying to make - companies need to innovate and improve productivity with smaller teams. What better way to do this than with full stack chip designers? (By the way, when I say smaller teams, I don’t necessarily think jobs are going away - on the contrary, I expect shorter design timelines and a larger variety of chips customized for different use cases.)

I took front-end and back-end design as an example here since the analogy fits nicely with the software engineering world. But chip design actually has many more steps, and could see consolidation at other levels too. Here are a few I could think of:

  • RTL Design and Functional Verification (software engineering has largely consolidated Programming and Verification)

  • Different aspects of Verification (like Functional, Power, Performance, and so on)

  • Architecture Simulation and RTL Design (through High-Level-Synthesis)

If you can think of other aspects of chip design that could be consolidated, I’d love to hear about them in a comment below :)
