A mother steps on the brakes, bringing her car to a stop as she drops her kids off for dance lessons. At the time, she doesn’t notice anything wrong, but when she takes her car in for its regular service appointment, the mechanic runs a diagnostic check and discovers that the car’s primary brake system had failed, without anyone realizing it, because of a faulty braking controller. Fortunately, the vehicle’s system redundancies brought the car to a successful stop, and the dealer’s diagnostic test confirms that the failure has not recurred since that first chip fault. The braking systems are behaving normally.

The dealership then sends the information about the braking failure to the manufacturer, where an analyst notes that over the last 60 days, six other brake failures traced back to the same controller system have been reported around the country for the same make and model. In each case, the backup system successfully brought the car to a complete stop. And, as with the mother who dropped her kids off at dance class, the analyst reviews the reporting samples for these six other failures and determines that each is isolated and non-recurring.

For high-performance computing, artificial intelligence, and data centers, the path ahead is clear, but with it comes a change in substrate format and processing requirements. Instead of relying on the quest for the next technology node to bring about future device performance gains, manufacturers are charting a future based increasingly on heterogeneous integration.

But while heterogeneous integration promises more functionality, faster data transfer, and lower power consumption, these chiplet combinations, spanning different functionalities and nodes, will require ever-larger packages, with sizes of 75mm x 75mm, 150mm x 150mm, or even larger.

To further complicate matters, these packages will also feature a growing number of redistribution layers (RDLs), in some cases as many as 24. And with each added layer, the threat of a single killer defect, one that would effectively ruin an entire package, increases. As a result, maintaining high yields becomes increasingly difficult.
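To see why layer count matters so much, consider a minimal compounding-yield sketch (the 99% per-layer yield below is an illustrative assumption, not actual process data): if every layer must be defect-free for the package to survive, package-level yield falls geometrically with the number of RDLs.

```python
# Minimal sketch: compounding yield across redistribution layers.
# Assumes independent layers; the 99% per-layer yield is illustrative, not process data.

def package_yield(per_layer_yield: float, num_layers: int) -> float:
    """A package survives only if every one of its layers is defect-free."""
    return per_layer_yield ** num_layers

for layers in (4, 12, 24):
    print(f"{layers:2d} RDLs at 99% per-layer yield -> {package_yield(0.99, layers):.1%}")

# Output:
#  4 RDLs at 99% per-layer yield -> 96.1%
# 12 RDLs at 99% per-layer yield -> 88.6%
# 24 RDLs at 99% per-layer yield -> 78.6%
```

Even a seemingly excellent 99% per-layer yield leaves barely three in four 24-layer packages defect-free, which is why a single killer defect looms so large in the economics of these packages.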

The More-than-Moore era is upon us, as manufacturers increasingly turn to back-end advances to deliver the next-generation device performance gains of today and tomorrow. In the advanced packaging space, heterogeneous integration is one tool helping accomplish these gains by combining multiple silicon nodes and designs inside one package.

But as with any technology, heterogeneous integration, and the fan-out panel-level packaging that often enables it, comes with its own set of unique challenges. For starters, package sizes are expected to grow significantly due to the number of components making up each integrated package. The problem: these much bigger packages exceed a single lithography exposure field, so multiple exposure shots are needed to complete each patterning step. Adding to this, multiple RDLs may stress both the surface and the interior of the substrate, resulting in warpage. And then there is the matter of tightening resolution requirements and more stringent overlay needs.
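As a rough illustration of the shot-count problem (the 50mm x 50mm exposure field below is a hypothetical assumption; actual maximum field sizes vary by lithography tool), a package larger than the tool’s field must be stitched together from a grid of exposures:

```python
import math

# Sketch: exposure shots needed to pattern one layer of a large package.
# FIELD_MM is a hypothetical maximum exposure field; real tools differ.
FIELD_MM = 50.0

def shots_per_layer(package_mm: float, field_mm: float = FIELD_MM) -> int:
    """Square package stitched from a grid of square exposure fields."""
    per_side = math.ceil(package_mm / field_mm)
    return per_side ** 2

for size in (75, 150):
    print(f"{size}mm x {size}mm package -> {shots_per_layer(size)} shots per layer")

# Output:
# 75mm x 75mm package -> 4 shots per layer
# 150mm x 150mm package -> 9 shots per layer
```

Multiply those shots by a couple dozen RDLs, and every stitching boundary on every layer becomes another opportunity for overlay error, which is why resolution and overlay requirements tighten alongside package size.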

As logic and memory semiconductor devices approach the limits of Moore’s Law, the requirements for accuracy in layer transfer become increasingly stringent. One leading silicon wafer manufacturer estimates that 50% of epitaxial wafer supply for logic will be on nodes of 7nm or below, up from approximately 30% earlier in the decade.

To meet the demands of extreme ultraviolet (EUV) lithography, these leading-edge epi-deposited substrates have tighter specifications than previous substrates. Consider 3-5nm logic nodes: the image placement requirement can be as low as 3nm [1].

With the more stringent requirements of EUV lithography in mind, wafer makers are searching for new solutions, such as those addressing the primary reason for inaccuracies in image transfer: macro defects.

In my previous blog, I talked about the essential factors a company must consider when leveraging cloud resources to accelerate its goals. The objective is not just to put some of the workload in the cloud; rather, it is to realize the transformation that adopting cloud technologies brings about. In particular, it is important to think of the cloud not only as a set of infinitely scalable services and resources, but also as a body of underlying technologies and best practices that can be adopted as a framework for your software architecture in general. At the risk of belaboring the point, adopting the cloud in a manner that truly unlocks operational efficiencies and leverages its advantages is an organizational change.

It’s no secret the cloud is a driving force powering digital transformation. However, cloud adoption is rarely a one-size-fits-all operation. Even when done correctly, it can bring about company-wide transformations unique to each organization. At its core, the move to the cloud is akin to a culture change, and understanding these changes can make the transition successful. The following factors are worth considering:

Business strategy

You’re unlikely to see the phrase “we want to use cloud more” in any organization’s strategic plans. However, the cloud can play a strategic role in expediting innovation; optimizing and reducing IT costs; helping a company meet the scalability demands that arise as new services, products, or markets are launched; and making data accessible and actionable across disciplines in the organization. The cloud can assist with predictive maintenance, supply chain transparency, testing, quality, process automation, and smart manufacturing. The chart from Accenture below captures the broad areas where the cloud can easily be leveraged in support of an organization’s goals (Fig 1).

Your cloud journey should start with company buy-in, a budget, and a roadmap with clear objectives, outcomes, and performance metrics. The objectives need to be realizable and long-term. Having the right stakeholders involved keeps initiatives moving in the organization and ensures that the effort is not done in a siloed manner. Milestones, playbacks, celebrating successes, and recognizing failures transparently are extremely important. Having the ability to change the organizational design to best leverage the transition is also necessary.

Choosing a platform

Plenty of providers can help, and you will need to determine which best fits your organization. The key is to know the strengths and weaknesses of each platform. While relocating from one provider to another is possible, you are generally in partnership with a provider for the long haul. Remember, you are not just provisioning resources: the decision encompasses data, data lifecycle, applications, leveraging services, cost, building up expertise, and other factors.