What’s wrong with RTL for ASIC designs?

January 14, 2015, anysilicon

I think this is an appropriate first post, because this is a question we’ve heard many times when talking with hardware engineers while trying to sell our product. The fact that there are now about a dozen companies trying to replace RTL with alternatives (I’ll talk about HLS in other posts) should be proof enough that there actually are other ways to design hardware than RTL.


So first, what is RTL? RTL stands for Register Transfer Level: you describe a digital circuit as registers plus combinational logic (the logic is the ‘transfer’ between registers). This is higher level, and also a lot more convenient, than gate level or transistor level. So what is wrong with it? For me, the two main problems are:


  1. Lack of a model of computation. Take software languages: the majority of them (with the notable exception of functional languages, of course, and a few other esoteric languages) target a von Neumann architecture, meaning that a program reads words from memory, performs computations, and writes words back to memory. All RTL really says is ‘move data from register to register through combinational logic’. Common elements of hardware architectures, such as RAM, ROM, FIFOs, and Finite State Machines (FSMs, i.e. finite automata), must all be described manually, relying on the synthesizer for correct inference.
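
     To make the inference point concrete, here is a minimal Verilog sketch of how a RAM is typically written (module and signal names are illustrative, not from the original post): there is no ‘ram’ construct, only a register array and an access pattern that the synthesis tool hopefully recognizes as a memory block.

     ```
     // No 'ram' keyword exists in RTL: we describe an array of registers
     // plus a synchronous read/write pattern, and rely on the synthesis
     // tool to infer a single-port RAM from this shape.
     module inferred_ram #(parameter WIDTH = 8, DEPTH = 256) (
         input                      clk,
         input                      we,
         input  [$clog2(DEPTH)-1:0] addr,
         input  [WIDTH-1:0]         din,
         output reg [WIDTH-1:0]     dout
     );
         reg [WIDTH-1:0] mem [0:DEPTH-1];

         always @(posedge clk) begin
             if (we)
                 mem[addr] <= din;
             dout <= mem[addr];  // registered read: matches common RAM templates
         end
     endmodule
     ```

     Deviate slightly from such a template and the tool may silently build the memory out of thousands of individual flip-flops instead.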
  2. Semantics are implicit. In a way this follows from the lack of a model of computation, but not entirely; the same issue arises when using assembly language instead of a high-level language. Say you want to declare a synchronous register. You can’t. You declare a register, and it is the way you use it that ultimately makes it synchronous or not: you have to assign it in a process/block guarded by ‘if rising_edge(clock)’ or ‘@(posedge clock)’. Likewise, you cannot declare an FSM; instead you declare a register that takes enumerated values, which you must be careful to update in every path of your design. It is like writing assembly, where you never declare a loop: you just jump to the same location several times in a row until the loop is done.
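
     A short Verilog sketch of this implicitness (names are illustrative): the declaration of a register says nothing about whether it is a flip-flop; only the assignment style decides.

     ```
     reg q;  // just a variable: nothing here says "synchronous register"

     // Assigned under a clock edge, 'q' becomes a D flip-flop:
     always @(posedge clock)
         q <= d;

     // The very same declaration, assigned combinationally, becomes
     // plain logic (or an unintended latch if a branch is missed):
     // always @(*) q = d;
     ```

     The intent (“this is a flip-flop”) lives in the usage pattern, not in any declaration the tools could check.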


There is another issue with RTL hardware design, but it has as much to do with how the hardware is described as with the level of description itself. Two languages have dominated the world of design for years: VHDL and Verilog. While they are pretty good at describing RTL hardware, they have many limitations:


  • Every time you declare a scalar, you must specify its bit range. For instance, a 9-bit variable can go from bit 3 to bit 11. The same goes for declaring an array.
  • The type system varies from very strong and restrictive (VHDL) to weak and flexible (Verilog). In both cases it will surprise you (and not in a good way) if you are not careful.
  • There are many ways to do the same thing. Take initialization of signals: you can say ‘the initial value of X is 5’ at declaration, and then, when resetting the circuit, say ‘the initial value of X is 18’. You have two or three different ways to describe asynchronous logic. You have blocking and non-blocking assignments, and it is sometimes accepted to use one in place of the other, which will sometimes cause unexpected behavior (see one of the excellent posts by Jan Decaluwe on this, e.g. http://www.sigasi.com/content/verilogs-major-flaw), etc.
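
The blocking vs. non-blocking pitfall in that last bullet can be sketched in a few lines of Verilog (signal names are illustrative):

```
// Swapping two registers with non-blocking assignments works as
// intended: both right-hand sides use the values from before the edge.
always @(posedge clk) begin
    a <= b;
    b <= a;   // old 'a': the registers swap
end

// The "same" code with blocking assignments silently means something
// else: 'a' is overwritten before 'b' reads it.
always @(posedge clk) begin
    a = b;
    b = a;    // 'a' already equals 'b' here: both end up holding 'b'
end
```

Both versions compile and simulate without complaint; only one of them does what the author meant.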


These limitations, combined with the limitations of the tools themselves, are the main reason behind the so-called coding styles that dictate how you should write your design if you want it to be properly synthesized. Examples: you should not set initial values of signals outside reset blocks; integer computation should use the signed/unsigned types from the numeric_std package in VHDL; an N-bit scalar should be declared with an ‘N-1 downto 0’ range (and an M-entry array with a ‘0 to M-1’ range); etc.
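
In Verilog terms, a coding-style-compliant counter might look like this (a sketch, with illustrative names; the exact rules vary per tool vendor):

```
reg [7:0] counter;  // 8-bit scalar declared [N-1:0], per the convention

always @(posedge clk) begin
    if (reset)
        counter <= 8'd0;          // initialize in the reset branch,
                                  // not with 'reg [7:0] counter = 0;'
    else
        counter <= counter + 8'd1;
end
```

None of this is enforced by the language; it is folklore that you learn from style guides and synthesis warnings.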


So this is a summary of what is wrong with RTL (and with VHDL/Verilog) in my opinion. If you prefer to stick to RTL design, or you’re actually fond of VHDL or Verilog, good for you. If you are looking for improvements, however, I’ll explore alternatives in a future post.


This is a guest post by Matthieu Wipliez, CTO of Synflow, an innovative EDA company based in Europe.

  • Mike Demler

    Since this is written from a first-person perspective, may we know the author?

    • http://www.anysilicon.com AnySilicon

      We have added the author name to the post. Tnx for your comment!

  • Mahesh

    Yes, I’m fond of VHDL and Verilog, and I know the basics of these languages. I’m looking for improvements, please suggest.

    • https://www.synflow.com/ Matthieu Wipliez

      Hi Mahesh, I suggest you take a look at the Cx programming language, it’s much easier to learn and use than VHDL/Verilog. See why at http://cx-lang.org

  • Steve Hoover

    I am the founder of Redwood EDA, one of the “about a dozen” ventures to which Matthieu refers. Our answer to the language problems Matthieu describes, as well as answers to many design methodology challenges, is Transaction-Level Verilog (TL-Verilog), which extends a Verilog environment, rather than replacing it. Links: redwoodeda.com, and soon makerchip.com.