Jeff Allison shares practical experience from running capability experiments in committed programs and the lessons they taught him about driving innovation while still meeting commitments.
Jeff Allison: How to Drive Innovation and Meet Commitments
Innovation requires experimentation that allows for the possibility of failure, but the whole point of a program commitment process is to avoid failure.
How do you as an engineering manager or executive:
- Reconcile the need for experimentation to drive new capability development with the absolute requirement to meet your commitments to customers, partners, upper management, and other departments?
- Leverage a product portfolio to run experiments to minimize risk for future programs?
- Sell and deploy new capabilities throughout engineering?
What questions can you as an engineering manager or executive ask to understand the scope of the risk?
In the video below, Jeff Allison presents lessons learned from working with early adopters in a fast-cycle, rapid-prototyping approach that anticipates the need for a scalable, reliable development process and lays the groundwork for it.
Edited Transcript for How to Drive Innovation and Meet Commitments
Jeff Allison: I’m going to talk today about product development and how you minimize risk across developing products by using a product portfolio approach.
As engineering managers know, you must balance many factors. You need to minimize risk, control schedules, and eliminate uncertainties in development. You also have to introduce new capabilities, invent solutions, and continually experiment to validate ideas and determine what will work.
I joined Cisco in 1992 and eventually became VP of Engineering. I want to talk about when I first joined. We had to introduce a new design capability into the hardware organization—we needed to design our own silicon. Off-the-shelf components couldn’t keep up with performance demands. This is the story of how we approached that challenge and what we learned.
In the early 90s, we primarily focused on access and core routing. At the time, we had well-established methodologies and skills for this. The designs weren’t overly complex, and we knew how to execute them efficiently. However, we needed to build faster, more capable systems to handle increased traffic and users at higher speeds. Our existing methods, and the expertise behind them, couldn’t get us there.
To meet these demands, we needed an ASIC design methodology. But before diving into that, I want to discuss how organizational change happens and how understanding its sources can shape implementation strategies.
Change typically comes from three sources: top-down, bottom-up, and sideways.
- Top-down change includes corporate mandates and regulatory requirements. Engineers often resist these because they take time away from product design.
- Sideways change is disruptive and requires quick adaptation. It could stem from product issues in the field, competitive pressure, customer concerns, or entirely new challenges. This was our situation—we had to design products at a scale we had never attempted before.
- Bottom-up change comes from project-based learning, which ties into disruptive change. Adjustments made in response to disruptions become best practices, and post-mortems and lessons learned refine development processes.
Our approach to change involved handling disruptive shifts while ensuring the organization could learn and build upon new methodologies.
We categorized programs as either pre-committed (still being evaluated) or committed (fully funded and in development). Pre-committed programs included requirements gathering, feasibility testing, and business justifications—like a funnel. Once a project passed feasibility and secured funding, it moved into the committed phase, where real resources were assigned, customer commitments were made, and deadlines were set.
Four Types of Risk: Schedule, Performance, Cost, and Technology Maturity
We assessed risk across several factors to create an overall risk number, or at least a high, medium, or low rating:
- Schedule
- Performance
- Cost
- Technology Maturity
Our next-generation router technology was high risk. The schedule was aggressive, performance targets were unprecedented, costs were uncertain, and it required technologies that didn’t exist yet. Adding a new ASIC methodology only increased the risk.
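As a rough illustration of the kind of scoring Jeff describes, here is a minimal sketch in Python. The four factors come from the talk; the 1-to-5 scale, equal weighting, and rating thresholds are assumptions made for this sketch, not the actual model used at Cisco.

```python
# Illustrative only: the four factors come from the talk; the 1-5 scale,
# equal weights, and rating thresholds are assumptions for this sketch.
RISK_FACTORS = ("schedule", "performance", "cost", "technology_maturity")

def risk_rating(scores):
    """Roll per-factor scores (1 = low risk, 5 = high risk) into an
    overall number and a high/medium/low rating."""
    overall = sum(scores[f] for f in RISK_FACTORS) / len(RISK_FACTORS)
    label = "high" if overall >= 4 else "medium" if overall >= 2.5 else "low"
    return overall, label

# A next-generation router with an aggressive schedule, unprecedented
# performance targets, uncertain costs, and immature technology:
print(risk_rating({"schedule": 5, "performance": 5,
                   "cost": 4, "technology_maturity": 5}))
# -> (4.75, 'high')
```

On a scale like this, introducing a new ASIC methodology effectively raises the technology-maturity score of whatever program carries it, which is the argument for attaching it to a program that scores low on everything else.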
When evaluating our readiness, we identified major gaps—we had no tool chains, libraries, or testing processes, and our supply chain and manufacturing teams lacked the necessary expertise. Closing these gaps required extensive experimentation.
We had about 18 months to introduce this methodology across the entire engineering organization, not just within a single team. We mapped out risks and debated where to start. Some suggested waiting until we needed it for a high-risk project, but that would have added even more uncertainty. Instead, we targeted a low-risk, in-flight project to serve as a pilot program.
We analyzed the product portfolio, identified lower-risk programs, and selected one where we could introduce the ASIC methodology in a controlled way. This allowed us to refine the process before applying it to high-risk initiatives.
Sean Murphy: Jeff, this approach is quite different from what you see commonly advocated. The received wisdom is that you have this breakthrough project that must succeed, and then you use that to drive other scheduled breakthroughs.
Jeff Allison: Yes, we had that discussion because many people had experience trying to push new ideas toward the next shiny thing. Sometimes it worked, but more often, it failed. We decided to take a different approach—how do we de-risk it? How do we introduce this new methodology into the organization without adding risks to other projects? In the end, it worked out well. But you’re absolutely right—we had to address that concern.
Q: How to Avoid Resume-Driven Development
Question from Audience: This reminds me of when I worked at Sun Microsystems. We would have 80 people in a room engaging in a design-by-committee approach where everyone wants to do the latest shiny, cool thing without regard for gate count and yield. No one felt responsible for success or failure. My question is, how do you manage the incentives so people feel responsible for the success or failure of the product? How do you gently make people aware of their collective responsibility?
Jeff Allison: That was something we were very conscious of as we introduced this new approach. Interestingly, there was no real resistance from engineers—they all wanted to design silicon because it looked great on a resume.
To ensure accountability, I made it a point to over-communicate with every group in the organization. I clearly explained what we were doing, why we were doing it, and how it would eventually impact them. Even if they weren’t the first to adopt the methodology, they would use it in the future. Once the pilot program was successful, every subsequent project incorporated the new design flow.
Finding the right pilot program wasn’t easy. We had to integrate this into a committed program with a schedule and customer expectations. Asking an engineering manager to change direction early in development wasn’t a simple request. So, we searched for the right team and project where we could introduce the change without causing major disruptions.
To make it appealing, we had to offer tangible benefits. Many engineers were incentivized based on their program’s success, so they naturally asked, ‘What’s in it for us?’ Fortunately, the program we selected was a high-volume product. By developing our own silicon, we significantly reduced costs, saving the company a substantial amount. That financial benefit made it an easy sell for the team and helped gain their buy-in.
Q: How to Align Incentives to Drive Innovation and Meet Commitments
Question from Audience: If you detect in a large group that people’s incentives are misaligned, then I guess you have to have some quiet, probing conversations to find out what their incentives are. And work your way up to whatever the grandparent node is, where the incentives are balanced against each other. How do you approach that?
Jeff Allison: Yes, you’re absolutely right. We addressed this by focusing on material costs, significantly reducing the cost of goods.
To make this transition, we identified an engineering team willing to take it on. We deliberately chose a low-complexity design—we didn’t need an 8-million-gate ASIC to test the process. We also had a backup plan in case things didn’t go as expected. If the new approach failed, we could revert to a fallback solution without jeopardizing the project.
The program we selected was high-volume, so reducing costs had a substantial impact. This created a clear incentive for the team to adopt the new methodology.
From a broader perspective, we moved this opportunity to develop ASIC capabilities earlier in the process. It was a low-risk program in terms of schedule, performance requirements, and technology. While we added complexity by introducing a new methodology, we did so in a controlled way to demonstrate its value to the organization.
To further minimize risk, we kept the design simple—low gate count, no cutting-edge ASIC technology, and only basic combinational logic. We also worked closely with a strategic supplier. If the ASIC approach failed, we could always fall back to an FPGA, ensuring we met schedules and customer commitments, even if the cost structure changed.
This process took time. While we started in 1992, we didn’t fully implement our vision until around 1997. Our first milestone was a pilot program, and we built from there, progressing through several stages.
- Milestone 1: Introduced the ASIC design process with a simple pilot project.
- Milestone 2: Expanded capabilities to include language-based design.
- Milestone 3: Established a verification methodology—we previously had no dedicated ASIC verification engineers.
- Milestone 4: Achieved multi-chip, multi-core, multi-site design by 1997.
Another key point: we only worked on committed projects, meaning products that were approved because they had customers ready to pay, committed headcount, funding in the budget, and a real schedule. When you run decoupled experiments on the side, they are less visible, often go unreviewed, and get abandoned when a committed project runs into trouble.
This means that you have to use committed projects as capability development vehicles. So that’s what we did.
Throughout this journey, we had to consider the entire organization, not just engineering. Manufacturing also had to adapt, learning how to support the new approach within their processes. When you’re trying to introduce something across the whole organization, you have to break it down into something that every part of the organization can absorb. Different parts of an organization absorb change differently.
There is another factor to consider that I call engineering program reporting clock cycles. All organizations have a pulse and a cadence. There was a monthly R&D review of all in-flight projects, which we needed to hit to communicate what was going on to the execs. Beyond that, there were project team meetings, normally weekly, and for large projects there might be multiple subteam meetings in a month.
There are also ad hoc meetings like design reviews, and there are problems that act as interrupts: critical customer account issues can arrive without warning and disrupt plans. But we had to communicate both at the exec level and with each program team. We wanted everyone to know what was happening and to feel included in the long journey the organization was taking.
So communication is critical: not just explaining and answering questions but listening and adjusting efforts in response. By integrating our efforts into real, funded projects with clear business value, we made the transition tangible. Unlike side experiments, which often lack visibility, this approach ensured the new methodology was taken seriously, reviewed, and refined as part of our standard process.
Q: How to Get the Right Information To and From the Right People?
Question from Audience: Can you go into a little more detail about how you propagated the right information to the right people? I understand project milestones but it’s not easy to get information to and from so many people.
Jeff Allison: No, it’s definitely not.
We developed a scorecard system to communicate key information clearly and succinctly. This wasn’t an exhaustive list of tasks but focused on the big priorities—team capabilities, skills, and projects. The scorecard evolved over time, but I always highlighted high-priority items and color-coded them.
Engineering time is valuable, so we made sure engineers focused on critical issues. Anything marked red needed immediate attention, while green and yellow items were more informational. In executive meetings, I provided a high-level overview, while at team and program levels, I shared more specific updates on ongoing projects.
As adoption grew, we quickly went from one project using the new methodology to three, then five. The scorecards helped everyone stay aligned and focused on what truly mattered—not on details like gate counts or performance metrics, but on key strategic goals.
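As a sketch of what such a scorecard might look like as data, assuming a simple structure (the item names are hypothetical; the red/yellow/green semantics follow the definitions Jeff gives below):

```python
# Minimal scorecard sketch. Statuses follow Jeff's definitions below:
# red = an ask for help, yellow = a heads-up with a plan, green = on track.
from dataclasses import dataclass

@dataclass
class ScorecardItem:
    area: str     # a team capability, skill, or project
    status: str   # "red", "yellow", or "green"
    note: str     # one line of context; detail stays out of the exec view

def executive_view(items):
    """Sort red items first so exec reviews focus where help is needed."""
    order = {"red": 0, "yellow": 1, "green": 2}
    return sorted(items, key=lambda item: order[item.status])

scorecard = [
    ScorecardItem("Pilot program schedule", "green", "On track"),
    ScorecardItem("ASIC verification skills", "red", "No dedicated engineers yet"),
    ScorecardItem("Tool chain bring-up", "yellow", "Vendor issue, workaround in place"),
]
for item in executive_view(scorecard):
    print(f"[{item.status.upper():>6}] {item.area}: {item.note}")
```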
Key Lessons Learned:
- Risk and Innovation Go Hand-in-Hand – Assigning and understanding risk levels across a product portfolio is crucial. If a project has excessive risk, consider shifting some elements to a lower-risk program to balance the overall portfolio.
- Commit to Running Experiments in Real Programs – Experiments need to be tied to actual, funded programs to gain traction and visibility. Running side experiments in isolation doesn’t work as effectively.
- Organizations Absorb Change in Stages – Change can’t be introduced all at once; it has to be broken into manageable steps. Organizations can only handle so much at a time.
- Over-Communicate – Consistently share updates through structured methods like scorecards to ensure transparency and alignment.
- Understand Organizational Inertia and Motivations – People resist change for various reasons—habit, personal investment, or political considerations. Managing these dynamics is key.
- Celebrate Every Success – Even small wins matter. Recognizing and promoting early successes builds momentum and helps drive broader adoption.
Once we got things moving, the momentum took off like a rocket. The organization embraced the change so quickly that my challenge became ensuring teams had the resources and support they needed before moving faster than they were ready for. By fostering excitement and demonstrating early successes, we built the foundation for widespread adoption.
Red, Yellow, and Green Status
Sean Murphy: So when you showed red, yellow, or green status, what did they mean? Which ones indicated a request for help?
Jeff Allison: Green meant things were fine and we were executing well. Yellow was a heads-up: the team had a plan and was working on it. Red meant we needed help. The goal was to streamline conversation in the executive review meetings. Red drove focus to where more investigation and more resources were needed. Red was an ask, yellow was a warning that something might become an issue, and green needed no discussion, to save time.
Development Portfolio Risk Analysis
Sean Murphy: When you started to analyze the portfolio of products in development through a lens of capability risk, it must have been a significant change. It’s different from traditional stage-gate management: how did the management team deal with it?
Jeff Allison: In the beginning, they did not want to diversify risk across the portfolio; they wanted the first program that needed the capability to bear all the risk. But, as we discussed, they came around to moving capability risk onto earlier, simpler projects, since the flagship programs already faced significant challenges. A lot of groundwork was laid in discussions with different engineering directors and senior management to arrive at this approach.
Make Product Teams Interdependent For Capability Development
Sean Murphy: This is a very different methodology from most of what gets written about. Today, the focus seems to be on minimizing any dependency or interaction between product teams. With this methodology, however, the most important projects agree to rely on a small cost-reduction effort to put the capabilities they will need in six to nine months into the pipeline. The major project will not take on the capability development tasks itself because it already faces considerable risk.
Jeff Allison: They were incentivized to meet or exceed their schedules and performance goals for their products—not to introduce a new design capability.
Their perspective was, “That’s great, but just get it done. If you can complete 80% of it, that’s good enough—we don’t have to worry about the rest.” Their focus was on hitting deadlines because that’s what their bonuses were tied to. If we could help them meet their schedule, they were on board.
So the engineers working on the next-generation boxes saw this as the way forward. They were open to the new approach, and I was lucky to have strong support from them.
Question from Audience: Were there any teams that resisted the changes or took a different approach than the one you found most effective?
Jeff Allison: Yes, quite a few teams were hesitant. Some said, ‘We’ve done it this way before and it’s worked, so why change now?’
In those cases, we had to look at their incentives. Many teams, especially those working on low-end, high-volume products, quickly adopted the new approach because even a small reduction in cost—just 10 cents or half a dollar—was significant. They were already focused on cost reduction, often using FPGA designs that later transitioned to hard logic, but they couldn’t achieve the same cost efficiencies without this technology. For them, adoption was straightforward since COGS (Cost of Goods Sold) was a daily priority.
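The arithmetic behind that is worth making explicit. The per-unit savings below are Jeff’s figures; the annual volume is a hypothetical number chosen only to illustrate the scale.

```python
# Back-of-the-envelope COGS impact. The per-unit savings come from Jeff's
# examples; the annual volume is a hypothetical figure for illustration.
annual_units = 500_000  # assumed volume for a low-end, high-volume product

for saving_per_unit in (0.10, 0.50):
    print(f"${saving_per_unit:.2f}/unit -> ${saving_per_unit * annual_units:,.0f}/year")
# $0.10/unit -> $50,000/year
# $0.50/unit -> $250,000/year
```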
The bigger challenge came at the high end, where performance was the main driver. Here, we encountered conflicting opinions—some directors wanted to go one way, others another. This kind of internal push and pull is common in engineering. The key was to resolve these disagreements quickly before they became larger issues.
Fortunately, our commit and review processes were pretty clean: you could not do much engineering work without it becoming visible. So it was hard for a “side project” to drift too far from the mainstream of the overall effort.
The real complexity came later when we acquired multiple companies, each bringing its own processes and methodologies. Integrating those different approaches—deciding what to keep, what to drop, and what to adopt—was a challenge in itself. But that’s a topic for another discussion.
Sean Murphy: So there was a separate organization for the system software. How did this approach affect them?
Jeff Allison: Later on, we worked hard to provide the software team with a hardware model before the actual hardware was built. I wanted to give them more than a simulation environment—I wanted something they could actively use to meet the high-performance requirements for the system.
We attempted to implement an emulation system where a netlist could be loaded into a box that would emulate the chip—not in real-time, but enough for software engineers to interact with registers and observe behavior. Unfortunately, that approach never fully worked. However, we did develop a virtual emulation environment, allowing them to test and verify their programs against a real-time spec. It wasn’t at full speed, but it provided valuable insight into how the hardware would behave.
Early on, we had some interesting discussions, especially with the ASIC teams. We were shifting certain functions from software to hardware for performance reasons. The software engineers realized they could offload tasks to hardware, which opened up new possibilities for software development.
At that point, the architects got involved. The key question became: ‘If we move this function into silicon, how does that impact the overall architecture?’ These discussions helped bring hardware and software teams closer together. Instead of treating hardware as a fixed engine that software had to run on as fast as possible, they started collaborating on architectural decisions.
This changed the dynamic between the teams, improving communication and aligning hardware and software development more effectively. It was great to see that evolution take place.
Sean Murphy: Jeff, thanks very much. This was extremely thought-provoking and quite insightful.
About Jeff Allison
Jeff Allison has worked in the high-tech industry for over 30 years. During that time he acquired considerable experience in product development, change management, new technology adoption, sales, and marketing.
Jeff worked at Cisco from 1992 to 2012, a time of dynamic growth for the company and the networking industry. He joined Cisco as a manager responsible for EDA tools, then spent the next 20 years in various engineering leadership positions, providing engineering design services for all the high-end routing platforms. He was vice president of engineering for the last decade and drove a significant number of cross-functional internal and customer-facing initiatives. These included a rationalization of best practices across engineering teams from various Cisco acquisitions, as well as quality and customer satisfaction initiatives as Cisco established a leadership position in global service provider markets.
Jeff’s first position was with Racal-Redac in the Engineering Design Automation (EDA) industry. At Redac, he set up an engineering sales support organization in North America for design entry and simulation tools. During that time, he experienced the rapid growth and consolidation of the EDA industry. Jeff graduated from the University of Wales in ’84 with a degree in Engineering.
Related Blog Posts
- Jeff Allison: An Executive Briefing on Microservices
- Jeff Allison: How To Blend New Capability and New Product Development
- Jeff Allison joins SKMurphy Team to Drive Intrapreneur Focus for 2017
- Intrapreneur Mindset and Key Skills
This blog post was republished on LinkedIn at https://www.linkedin.com/pulse/how-drive-innovation-meet-commitments-sean-murphy-bzj0c/