The IP Core Distribution Challenge

February 19, 2015, anysilicon

What comes to mind when you hear the term IP Distribution? How do people like ARM and MIPS get their cores into people’s hands? Pricing, contracts and legal issues? Maybe third-party Web sites like Chip Estimate and Design & Reuse? Yes, they are all factors in how independently developed IP gets distributed to users. But as the commercial IP industry matures, these things are getting much more efficient and well-oiled.

For me, when I talk to people about IP distribution, the conversation generally focuses on how IP—mainly internally developed design elements such as PDKs, libraries, test files and constraint information—is shared among designers and across companies. In the ‘old days’ this might not have been that big of a deal, as internal networks could handle the relatively small amounts of data being shared. Plus, more centralized design teams made the challenges a bit easier, as they had more direct and efficient communication.


Two things have changed dramatically that are making internal IP distribution a major issue, and neither of them should come as a big surprise.


  1. The volume, complexity and sources of design data required for a modern SoC have exploded, bringing even well-designed networks to their knees in terms of the speed at which they can deliver data to clients. This can result in very frustrating delays and unpredictable design schedules—neither of which a manager wants to hear about.
  2. Design teams are much more dispersed than in years past, with engineers of different backgrounds, disciplines and knowledge levels spread across the globe. This scenario can add layers of inefficiency as designers work with unfamiliar IP or the wrong versions of IP, and suffer from a general lack of coordination project-wide. This highly connected network of activities (as shown in the image on the right) drastically increases the complexity of IP distribution.


Let’s look at what I would call the infrastructure issue first: the bandwidth of existing networks. Networks aren’t keeping up with the huge amounts of data being moved around, and the Band-Aid solution—remote data centers—complicates matters more. For example, complex PDKs can often take a full working day to propagate to remote design centers using traditional methods like scp and rsync. This first problem has become one of the main bottlenecks for keeping distributed teams connected and able to collaborate successfully (see problem #2).

One of the issues is the serial nature of how IP is distributed: one IP block at a time over a serial pipe, without leveraging the versions already present at the destination and generating deltas between the before and after versions. Managing and scheduling the transfers can itself become a bottleneck as well.
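The delta idea above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's tool: each site keeps a manifest of content hashes, and only files whose hashes differ from the master's need to cross the wire, instead of re-sending every block serially.

```python
import hashlib
from pathlib import Path


def manifest(root: Path) -> dict[str, str]:
    """Map each file's relative path to a SHA-256 digest of its contents."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*")
        if p.is_file()
    }


def delta(master: dict[str, str], mirror: dict[str, str]) -> list[str]:
    """Files that are new or changed on the master and must be re-sent.

    Unchanged files (identical digests) generate no network traffic at all.
    """
    return [path for path, digest in master.items() if mirror.get(path) != digest]
```

With manifests in hand, a one-day full PDK copy shrinks to transferring only the files that actually changed between releases, and the per-file list can be fetched in parallel rather than through one serial pipe.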



We don’t actually need faster networks—although, as with being too rich or too thin, you can never have too much bandwidth. A more sophisticated way to manage design data would go a long way toward improving the performance of data distribution. We are seeing more companies moving to centrally managed systems that allow IP use and revisions to be tracked across the enterprise, while also using project IP workspaces at the client level.


Work gets done at the local level, and updates are easily checked in to the master system. Data distribution is parallelized, which is particularly useful in expediting delivery to remote sites. In our SoC Integrator system, for example, read operations (version history, locks, etc.) on the repository are always local operations for remote users, and write operations are sent automatically over TCP/IP to the master. As a result, communications between IP “owners” and “users,” regardless of where they are located, become much more streamlined. That allows for easy defect tracking and project management. These portal-like structures are great for monitoring IP validation regressions and tracking bugs, too, and generally keep people “on the same page” as designs evolve, without huge amounts of network traffic being generated.
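The read-local/write-to-master pattern described above can be sketched minimally. The class and method names here are illustrative assumptions, not the actual SoC Integrator API: reads are served from a replicated local copy of the metadata, while check-ins go to the master, which stays the single source of truth.

```python
class RemoteWorkspace:
    """Hypothetical sketch of a remote-site workspace against a master repo."""

    def __init__(self, master: dict, local_cache: dict):
        self.master = master   # authoritative repository (over the WAN)
        self.cache = local_cache  # replicated metadata held at the remote site

    def read_version_history(self, ip_name: str) -> list:
        # Reads (version history, locks, status) hit the local replica,
        # so remote users never pay WAN latency for queries.
        return self.cache.get(ip_name, [])

    def check_in(self, ip_name: str, new_version: str) -> None:
        # Writes are forwarded to the master so it remains the single
        # source of truth; the local replica is then brought up to date.
        self.master.setdefault(ip_name, []).append(new_version)
        self.cache.setdefault(ip_name, []).append(new_version)
```

The design choice this illustrates: by making every frequent operation (reads) local and every rare operation (check-ins) remote, the WAN carries only the traffic that genuinely changes state.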



Not that there aren’t things that can be done to improve overall network performance. A well-conceived IP distribution system, as shown in the image to the right, 1) offers scalability that takes advantage of an optimized relational database to ensure rapid response times; 2) supports an efficient streaming network protocol to minimize the effects of latency; and 3) uses an intelligent, server-centric data model that is network-friendly and keeps the database performing at top speed. Also critical is on-demand data replication across remote data centers, which reduces network traffic and storage requirements.
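On-demand replication amounts to lazy caching: a remote site pulls an IP version across the WAN only the first time someone asks for it, then serves it locally. A minimal sketch, with hypothetical names (not any vendor's API):

```python
from typing import Callable


class OnDemandReplica:
    """Illustrative lazy replica: fetch from the master only on first access."""

    def __init__(self, fetch_from_master: Callable[[str], str]):
        self.fetch = fetch_from_master  # callable that pulls data over the WAN
        self.store: dict[str, str] = {}  # local replica, populated lazily
        self.wan_fetches = 0             # count of actual WAN transfers

    def get(self, ip_version: str) -> str:
        if ip_version not in self.store:  # only a cache miss costs WAN traffic
            self.store[ip_version] = self.fetch(ip_version)
            self.wan_fetches += 1
        return self.store[ip_version]
```

Versions nobody at a site uses are never replicated there, which is where the reduction in both network traffic and remote storage comes from.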


IP distribution is the lifeblood of any company developing advanced SoCs today. It shouldn’t be left to chance, or allowed to become a bottleneck on insecure, overburdened corporate networks. Think strategically about how your design data is shared, updated and archived, and I am certain you will see significant improvement in designer efficiency.



This is a guest post by Methodics, which delivers state-of-the-art semiconductor data management (DM) for analog, digital and SoC design teams.