As environmental, social and governance (ESG) goals continue to gain prominence, corporations are increasingly focused on reducing their carbon footprints. To accomplish this, companies are being asked to operate their businesses more efficiently than ever before, whether that means reducing waste, water usage or power consumption. This is true for the semiconductor industry as well.
Although semiconductor manufacturing is not a smokestack industry, it is truly amazing just how many resources, from water to materials to electricity, go into making chips. To better understand the carbon footprint and environmental impact of a typical fab, consider this: based on estimates in a 2021 article in The Guardian, a 1% improvement in a factory's production capability could save that factory 450 tons of waste, 37 million gallons of fresh water and 22.5 million kilowatt-hours of electricity over the course of a year. That small 1% change is a substantial reduction in resources used, one that pleases not only operations managers but ESG-minded stockholders as well.
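To put those estimates in perspective, here is a back-of-envelope sketch in Python that scales the cited per-1% annual savings to other improvement levels. The linear scaling and the annual_savings helper are illustrative assumptions, not figures from The Guardian article.

```python
# Back-of-envelope model: scale the per-1% annual savings cited above
# to an arbitrary improvement percentage. The figures come from the
# article's estimates; the linear scaling is a simplifying assumption.

SAVINGS_PER_PERCENT = {
    "waste_tons": 450,
    "fresh_water_gallons": 37_000_000,
    "electricity_kwh": 22_500_000,
}

def annual_savings(improvement_pct: float) -> dict:
    """Estimate annual resource savings for a given improvement level."""
    return {k: v * improvement_pct for k, v in SAVINGS_PER_PERCENT.items()}

print(annual_savings(1.0))   # the 1% case cited above
print(annual_savings(2.5))   # a hypothetical 2.5% improvement
```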
Packaging is becoming increasingly challenging and costly. Whether the reason is substrate shortages or the growing complexity of the packages themselves, outsourced semiconductor assembly and test (OSAT) houses have to spend more money, more time and more resources on assembly and testing. As such, one of the more important challenges facing OSATs today is managing die that pass testing at the fab level but fail during the final package test.
But first, let’s take a step back in the process and talk about the front-end. A semiconductor fab will produce hundreds of wafers per week, and these wafers are verified by product testing programs. The ones that pass are sent to an OSAT for packaging and final testing. Any units that fail at the final testing stage are discarded, and the money and time spent at the OSAT dicing, packaging and testing those failed units are wasted (Figure 1).
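As a rough illustration of that waste, the sketch below models the cost sunk into die that pass wafer-level testing but are discarded at final package test. The yield figure, per-die cost and the wasted_cost helper are hypothetical placeholders, not industry data.

```python
# Illustrative model of the waste described above: die that pass wafer
# sort but fail final package test still incur dicing, packaging and
# test costs at the OSAT. All numbers are hypothetical.

def wasted_cost(dies_shipped: int,
                final_test_yield: float,
                cost_per_die: float) -> float:
    """Cost sunk into die that are discarded at final package test."""
    failed_dies = dies_shipped * (1.0 - final_test_yield)
    return failed_dies * cost_per_die

# 1,000,000 die shipped to the OSAT, 97% final-test yield and $0.40 of
# assembly-and-test cost per die leaves $12,000 of sunk cost:
print(f"${wasted_cost(1_000_000, 0.97, 0.40):,.0f} wasted")
```

Even a small improvement in catching these die before packaging translates directly into recovered cost.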
A mother steps on the brakes, bringing her car to a stop as she drops her kids off for dance lessons. At the time, she doesn’t notice anything wrong, but when she takes her car in for its regular service appointment, the mechanic runs a diagnostic check and discovers that, without anyone realizing it, the car’s primary brake system had failed because of a faulty braking controller. Fortunately, the car was able to stop successfully thanks to the vehicle’s system redundancies, and the dealer’s diagnostic test confirms that the failure has not recurred since that first chip failure. The braking systems are behaving normally.
The dealership then sends the information about the braking failure to the manufacturer, where an analyst notes that over the last 60 days, six other brake failures traced back to the same controller system have been reported around the country for the same make and model. In each of these situations, the backup system successfully brought the car to a complete stop. And, as with the mother who dropped her kids off at dance class, the analyst reviews the reporting samples for these six other failures and determines that each is isolated and non-recurring.
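The kind of roll-up that analyst performs can be sketched as a simple aggregation. The pandas example below counts failure reports per controller part and vehicle model over a trailing 60-day window; the column names and sample records are hypothetical.

```python
# Sketch of the analyst's roll-up: count field-failure reports per
# controller part and vehicle model over a trailing 60-day window.
# The schema and records here are hypothetical.

import pandas as pd

reports = pd.DataFrame({
    "date": pd.to_datetime(["2023-01-05", "2023-01-20", "2023-02-10"]),
    "part": ["BRK-CTRL-7", "BRK-CTRL-7", "BRK-CTRL-7"],
    "model": ["Sedan-X", "Sedan-X", "Sedan-X"],
    "recurring": [False, False, False],
})

cutoff = reports["date"].max() - pd.Timedelta(days=60)
recent = reports[reports["date"] >= cutoff]

summary = recent.groupby(["part", "model"]).agg(
    failures=("date", "size"),           # how many reports in the window
    any_recurring=("recurring", "any"),  # did any failure repeat?
)
print(summary)
```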
In my previous blog, I talked about the essential factors a company must consider when leveraging cloud resources to accelerate its goals. The objective is not just to put some of the workload in the cloud; rather, it is to realize the transformation that adopting cloud technologies brings about. In particular, it is important to think of the cloud not only as a set of infinitely scalable services and resources but as a collection of underlying technologies and best practices that can be adopted as a framework for your software architecture in general. At the risk of belaboring the point, adopting the cloud in a manner that truly unlocks operational efficiencies and leverages its advantages is an organizational change.
Business strategy
You’re unlikely to see the phrase “we want to use cloud more” in any organization’s strategic plans. However, the cloud can play a strategic role in expediting innovation; optimizing and reducing IT costs; helping a company meet the scalability demands that arise as new services, products or markets are launched; and making data accessible and actionable across disciplines in the organization. The cloud can assist with predictive maintenance, supply chain transparency, testing, quality, process automation and smart manufacturing. The chart from Accenture below captures the broad areas where the cloud can easily be leveraged in support of an organization’s goals (Figure 1).
It’s no secret that the cloud is a driving force behind digital transformation. However, cloud adoption is rarely a one-size-fits-all operation. Even when done correctly, it can bring about company-wide transformations unique to each organization. At its core, the move to the cloud is akin to a culture change, and understanding that change can make the transition successful. The following factors are worth considering:
Business strategy
Your cloud journey should start with company buy-in, a budget and a roadmap with clear objectives, outcomes and performance metrics. The objectives need to be realizable and long-term. Having the right stakeholders involved helps keep initiatives moving in the organization and ensures that the effort is not done in a siloed manner. Milestones, playbacks, celebrating successes and recognizing failures transparently are extremely important. Having the ability to change the organizational design to best leverage the transition is also necessary.
Choosing a platform
Plenty of providers can help you realize your goals; the key is to know what best fits your organization and the strengths and weaknesses of each platform. While migrating from one provider to another is possible, you are generally in partnership with a provider for the long haul. Remember, you are not just provisioning resources: it’s about data, the data lifecycle, applications, leveraging services, cost, building up expertise and other factors.
Over the past ten years, driven primarily by a tremendous expansion in the availability of data and computing power, artificial intelligence (AI) and machine learning (ML) technologies have found their way into many different areas, changing our way of life and our ability to solve problems. Today, AI and ML are being used to refine online search results, facilitate online shopping, customize advertising, tailor online news feeds and guide self-driving cars. The future that so many have dreamed of is just over the horizon, if not happening right now.
The term artificial intelligence was coined in the 1950s, but the idea is most famously associated with Alan Turing. The noted mathematician and creator of the so-called Turing Test believed that one day machines would be able to imitate human beings by doing intelligent things, whether that meant playing chess or having a conversation. Machine learning is a subset of AI that automates learning by evaluating past results against specified criteria. Deep learning (DL), in turn, is a subset of machine learning (Figure 1). Deep learning employs a multi-layered learning hierarchy in which the output of each layer serves as the input for the next layer.
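Below is a minimal NumPy sketch of that layered hierarchy, in which each layer’s output becomes the next layer’s input. The layer sizes and random weights are purely illustrative; a real deep-learning model would learn its weights from training data.

```python
# Minimal feedforward sketch of a multi-layered learning hierarchy:
# the output of each layer serves as the input for the next layer.
# Weights are random for illustration only.

import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [8, 16, 16, 4]   # input -> two hidden layers -> output

weights = [rng.standard_normal((n_in, n_out))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x: np.ndarray) -> np.ndarray:
    """Pass x through the stack, with a ReLU between layers."""
    for w in weights[:-1]:
        x = np.maximum(0.0, x @ w)   # each output feeds the next layer
    return x @ weights[-1]

print(forward(rng.standard_normal(8)).shape)   # -> (4,)
```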
Currently, the semiconductor manufacturing industry uses artificial intelligence and machine learning to learn autonomously from the data it collects. With that data, AI and ML can quickly discover patterns and determine correlations in various applications, most notably those involving metrology and inspection, whether in the front-end of the manufacturing process or in the back-end. These applications include AI-based spatial pattern recognition (SPR) systems for inline wafer monitoring [2], automatic defect classification (ADC) systems built on machine-learning models and machine learning-based optical critical dimension (OCD) metrology systems [1][7].
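As a toy illustration of what an ADC system does, the sketch below trains a generic classifier on synthetic defect features. The feature set, class labels and choice of a scikit-learn random forest are assumptions made for demonstration, not a description of any production system.

```python
# Toy automatic defect classification (ADC) sketch: learn to map
# inspection-derived defect features to defect classes. All data,
# features and labels here are synthetic placeholders.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical per-defect features: size, aspect ratio, brightness.
X = rng.random((200, 3))
y = rng.choice(["particle", "scratch", "residue"], size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

new_defect = [[0.3, 0.8, 0.5]]
print(clf.predict(new_defect))   # predicted defect class
```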