Shifting The Focus To Application-Specific Workflows

Experts At The Table: EDA has undergone numerous workflow changes over time. Different skill sets have come into play over the years, and at times this changed the definition of what it means to design at the system level. To work out what this means for designers today, and how it looks going forward, Semiconductor Engineering sat down with Michal Siwinski, chief marketing officer at Arteris; Chris Mueth, new opportunities business manager at Keysight; Neil Hand, director of marketing at Siemens EDA; Dirk Seynhaeve, vice president of business development at Sigasi; and Frank Schirrmeister, executive director, strategic programs, systems solutions at Synopsys. What follows are excerpts of that discussion, which was held at the Design Automation Conference. To view part one, click here.


L-R: Keysight’s Mueth, Siemens’ Hand, Sigasi’s Seynhaeve, Synopsys’ Schirrmeister, Arteris’ Siwinski. Source: Jesse Allen/Semiconductor Engineering

SE: Are we going to see more application-specific system design workflows in the future?

Mueth: In a sense, yes. Systems are a collection of different workflows, whether it’s digital design or photonics. In the end, those have their own needs and wants and specialties that have to be catered to. But at the global system level, in terms of making all these things play together, it’s not going to be one company that does that. It’s going to be a collection of companies working together — big platforms, point tools, everything. And it’s going to be an interesting landscape over the next 10 years as companies figure out what their new identities are in this ecosystem and start catering to those.

SE: Is this as simple as implementing some standards that we already have, and we’re not using?

Hand: You can never have enough standards. We’ve proven that as an industry. Standards will play a part. But more importantly, with workflows and collaboration, no one company is going to own everything. And even within a group, how do you share knowledge and do tradeoffs? I don’t know what the answers will be, whether it’s going to be intelligent agents, whether it’s going to be AI. But just take something as simple as during validation, where you find out that an automobile isn’t braking quickly enough for a pedestrian. The solution could be a faster processor, it could be better sensing, it could be bigger brakes, it could be different tires. In an ideal world, with a true system-level design, you could look at all that and say, ‘For that particular thing, let’s spec better tires. That’s going to be the solution for this and we don’t have to change anything.’ Or they might say, ‘It’s actually going to be easier to upgrade the processor.’ But with that level of communication, there’s not one standard or even 10 standards that are going to do that. It’s going to be, ‘How do you manage the workflows? How do you manage the data in there? How do you manage the communications?’ We’re going to find out as more companies try to figure that out.

Seynhaeve: Also, you can’t really define or use standards if you don’t quite understand the workflow. The workflow we’re developing for the system level is still in flux. Introducing AI has put it so much in flux that we need to verify that the verification flow we put in place is actually correct. And that’s impossible to do, because there’s a creativity aspect to AI that we don’t fully understand yet. How do you put a standard into this chaos?

Hand: You have to do it with the standard in mind, too. The end result has to be open.

Mueth: From an internal perspective, we make electronic test equipment. That’s our bread and butter in production lines and things like this. Internally, we’ve tuned our application workflows pretty well over the last five years, and I think we’ve got good productivity improvement, but they’re not connected together very well. We use tools from every EDA and CAE company internally, the best tool for the job, but they don’t interoperate. We also have the problem of integrating hierarchies. How does the system-level designer, someone who is looking at this whole robotic line, talk to the instrument guys and the robotic guys? And inside of those there are subassemblies and chips, and none of that’s really connected.

Siwinski: This translation problem is only getting worse. Thirty years ago if you tried to get a software and a hardware guy together, good luck. And that was in the easy days. Nowadays, you add in all the different dimensions. There are going to be a lot of situations where the standards could be useful once you have a vocabulary and common definitions. That becomes a way to at least make sure you have some guardrails. But unless you have better understanding of the intent across different applications, stitching these things together is going to be a challenge.

Hand: Where does the gravity go from there? It’s part of the interesting question. One way you could look at it is, ‘Does it go toward a lifecycle management chart?’ That’s going to be the single source of truth. That’s where all the data for the digital twin is going to go, and that’s where you’ll see the discontinuities. But you’ve still got to get the data into that lifecycle management, or the metadata, or abstract it and find a way in which that lifecycle can be tracked and managed.

Schirrmeister: Taking a step back, what’s probably instructive is to see where things work and where standards work. Going back to the loops, one flow that works reasonably well within its constraints is the power flow. There, you have standards. You have things like UPF, you have intent for power, you can articulate this, you can drive this down through the flow. You can generate a lot of data with emulation to stimulate the software, run the software, run it long enough. You can annotate power information from the .lib files, and you can close this process loop reasonably well. That’s power by itself. By the time you have done all these things — the standards, the flows, the tools connection — complexity is already at the next point. That’s the whole premise from the beginning. The complexity of the things our customers demand is simply growing faster than our productivity. Related to this, I love the old ITRS charts. They are more conceptual and not meant as absolutes, but it continues like that. So now power works well. To your point on the industries, that in itself is causing different standards. In aerospace, I have different standards than I have in automotive, and you have to follow those. Then you have regulation coming in. People say, ‘Oh, the regulator just said I need to have these cameras, or I need to follow these constraints. As a result, I really need some new version of camera, or a new resolution of camera.’ Things get very application-specific very fast.
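To make the UPF reference above concrete: power intent is captured in IEEE 1801 (UPF) commands that ride alongside the RTL through the flow. A minimal sketch might look like the fragment below, where the domain, net, and signal names are purely illustrative, not from any specific design.

```tcl
# Minimal UPF (IEEE 1801) power-intent sketch; all names are illustrative.
create_power_domain PD_CORE -elements {u_core}   ;# switchable core logic
create_supply_net VDD_CORE -domain PD_CORE       ;# switched supply
create_supply_net VDD_AON  -domain PD_CORE       ;# always-on supply

# Clamp the domain's outputs to 0 while the domain is powered down.
set_isolation iso_core -domain PD_CORE \
    -isolation_power_net VDD_AON -clamp_value 0 -applies_to outputs
set_isolation_control iso_core -domain PD_CORE \
    -isolation_signal iso_en -isolation_sense high -location parent
```

Because the intent is machine-readable, every tool in the loop — simulation, emulation, synthesis, power analysis — can interpret the same file, which is what lets the power loop close the way Schirrmeister describes.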

Mueth: You need project management and requirements management tied into the process to manage all of that. Otherwise, it’s unwieldy.

Schirrmeister: In automotive there is something called ODD, which is the operational design domain, in which you can operate your car. If you leave the city, it doesn’t work. There’s something like that for our flows. They work for a set of constraints. Within that domain, you’re rather safe from a power point of view.

SE: When we look at the microwave/RF domain, for example, if you’re hooking all of these things together, you know that they’re not talking to each other as well as you would like. Can we learn from the work that happens to get those tools to talk to each other, and share the data so it can be extended to other application flows?

Hand: Perhaps. This is why most people are saying the workflow is what you’ve got to figure out first, because that workflow is where you’ll do the learning.

SE: Where do we start to figure out a workflow?

Hand: What’s the problem you’re trying to solve? That’s the first question.

Mueth: As technology progresses, there will be more and more workflows because there’s new technology that’s constantly being developed. You have to come up with new workflows, and that never ends.

Hand: You asked the question earlier, is it as simple as using an existing standard? Standards are going to be a critical part of what we do, because the standard ultimately will enable the finalized workflow. It will enable that communication. It will enable us to close the loops. But especially when you look at the different systems, the workflows the automotive guys have are going to be very different from the mil/aero guys, which are going to be very different from the consumer guys. The acceptable level of risk in different situations is going to be variable. The acceptable timelines are going to be variable. So a large part of it is going to be who drives these. Take automotive. Why was automotive one of the first industries driving a lot of this digital twin and system-level design? Because it had a reason to do it. It had to get these domains to work together. There were regulatory requirements, there were safety requirements that forced the people to work together. Other industries are going to be the same. As you get new industries, they will come to all of us and say, ‘We need to do XYZ. Help us do it.’

Siwinski: And at some point you can say this is common enough across all of them that it can be unified. At the end of the day, software developers across all these applications will still have their software vocabulary needs. The reliability and safety guys will have their own nomenclature and requirements, no matter if it’s a satellite, a car, or a data center. So there’s going to be some level of commonality. The question is what percentage that is. I would venture it’s no more than 20%, because there’s so much difference in the workflows for different applications, and all the great advances in technology are only fueling the desire to do more. So no matter how fast we go, we’re innovating and we’re enabling innovation to open new doors.

Hand: But if you look at the IP industry, for example, once you get a certain level of commonality, people will give up flexibility for that benefit. It would be the same at the workflow level. If you can show a way of doing it, then you may give up a little bit of flexibility because you get there faster and you don’t have to worry about it. I will give up a degree of freedom. I will give up ‘this’ in order to take ‘that.’

Siwinski: So will it ever unify? No. Will it ever be completely one-off? No. The question is whether there will be different main principles driving the optimizations a company will want to do, because for them maybe it’s going to be about the user experience, or about performance, or about cost. So the anchor that is the primary criterion when making a decision will change based on who’s making it. But at the end of the day, that will shift the percentages of how quick and one-off it is, versus how customized and unique and value-added it is. That will change.

Hand: But there’s going to be one commonality that spreads through all of them, which is the point you’re making about the requirements and traceability. Being able to define those requirements and track them throughout the whole system design and system validation and integration — that’s going to be something that in some way, shape, or form has to be done across the whole thing, because that’s the only way you’re going to know as soon as something starts to deviate from the path whether it has deviated.

Mueth: One example is the 2.5G phone versus the 5G phone. A 2.5G front-end module had 140 specs. Those are more performance specs, not digital specs. Now there are 2,000 specs, and many of those are interdependent. Just qualifying that chip and the module is a huge, huge task, and it’s not going to get any easier. I can only imagine what a 6G chip will be.

Hand: And when you get into the systems, a single requirement ends up being split across multiple domains. So now you’ve got to ask, ‘Have you met that spec?’ You need to thread together, not just under what conditions, but potentially electrical analysis with mechanical analysis with thermal analysis with functional verification. And all of them need to be tied back together to say, ‘To meet this spec, I need all of these different verifications from all of these people to come back with a thumbs up. And only then is that spec met.’
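The sign-off rule Hand describes can be sketched in a few lines: a requirement carries verdicts from several verification domains and only counts as met when every required domain has reported a pass. This is an illustrative sketch, not any vendor’s lifecycle-management API; the class and field names are invented for the example.

```python
# Illustrative sketch of cross-domain requirement sign-off.
# Names (Requirement, record, is_met) are hypothetical, not a real tool's API.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    description: str
    verdicts: dict = field(default_factory=dict)  # domain -> pass/fail

    def record(self, domain: str, passed: bool) -> None:
        """Store the verdict reported by one verification domain."""
        self.verdicts[domain] = passed

    def is_met(self, required_domains: set) -> bool:
        """The spec is met only if every required domain reported a pass."""
        return all(self.verdicts.get(d) is True for d in required_domains)

req = Requirement("SYS-042", "Module stays under 85 C at peak RF output")
domains = {"electrical", "thermal", "functional"}
req.record("electrical", True)
req.record("thermal", True)
print(req.is_met(domains))  # False: functional verdict still missing
req.record("functional", True)
print(req.is_met(domains))  # True: every domain has given a thumbs up
```

The point of the sketch is traceability: a missing verdict is treated the same as a failing one, so a spec can never be declared met while any domain’s analysis is still outstanding.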

SE: What will we see from EDA tool providers when we’ve identified all the workflows?

Seynhaeve: I don’t think you’ll see ‘something.’ It constantly changes as we go down in node size. If you look at glitch power, a couple of years ago nobody was talking about it. Now it’s the main component of power that we still need to manage. We were talking about the flow to close power. I’m sure that a couple of years ago that flow broke down because nobody had thought about glitch power. And before that, leakage power was a problem, but then with gate-all-around we solved it. The flow changes and changes, so what are you going to see? I don’t know. We are constantly learning and constantly adapting our communication.
