Material Witness by Chet Guiles
In a previous article we talked a little about what a low-flow prepreg is, made some comments about flow and rheology, and ranted a bit about the IPC low-flow test and how and why it is not a perfect predictor of production performance. That said, the industry regularly uses substantial amounts of low-flow materials in a wide variety of rigid-flex and heat-sink bonding applications. And since highly filled materials often approach low-flow status, the line between low-flow prepregs and prepregs that just happen to have lower-than-standard flow is becoming somewhat fuzzy. An example is the filled, “green,” high-temperature epoxy (“lead-free”) systems used for bonding aluminum heat sinks to the PWBs that mount high-intensity LED lighting.
Given the admitted difficulties inherent in trying to use industry test methods and standards, it is little surprise that the IPC more or less leaves it to supplier and user to establish specifications between themselves. It’s interesting to see the highlighted “shalls” in a specification that has no explicit requirements, and to see resin flow described as a “percentage” when the actual test calls for measuring flow in mils of reduction of a 1” diameter cut-out circle. (“Shall” in an IPC document means that the sentence defines a requirement that has to be met.) From IPC-4101B we read:
No Flow (NF): When specimens are tested in accordance with Table 3-2 (which calls out the applicable IPC-TM-650 resin flow test method), the nominal resin flow range for no flow shall be as indicated on the procurement document. The resin flow percent for no flow shall not vary from the nominal value by more than specified on the applicable specification sheet or as agreed upon between user and supplier.
Generally speaking, users will accept the values for flow and variation from their suppliers’ data sheets and, not unreasonably, expect those values to be met consistently on material shipped to them. In many cases, when incoming material is tested, lo and behold, the test result falls outside the published value and differs from the result reported by the supplier. This triggers all kinds of requests for cause and corrective action, results in changes to supplier quality ratings, and so on. And in many cases it’s a reaction to an issue that’s not really an issue. Why? A spec is a spec, yes? So what happens? And does it really impact product performance?
Data sheet values are generally based on averages of significant amounts of real data and represent statistically meaningful values that an in-control process will generate with 6-sigma control when a meaningful number of samples is taken. Test data reported for individual rolls of production material represent a limited sampling, normally reported as the average of several individual readings (necessary, in this writer’s opinion, because of the inherent variability in the test method itself), which must be in spec for material to be shipped. An additional requirement that no individual measurement be out of the spec range is problematic, because the inherent variability of the method itself can produce individual readings that are “out of spec” even when the batch or lot is well controlled and centered.
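The arithmetic behind that point is easy to sketch. The numbers below are made up for illustration (they are not from any data sheet): assume a perfectly centered lot, a plausible test-method noise per reading, and a symmetric spec window, and compare the odds that one reading lands out of spec against the odds that a 5-reading average does.

```python
from statistics import NormalDist

# Hypothetical numbers for illustration only (not from any data sheet):
# a centered lot with nominal flow 0.050" and test-method noise of
# sigma = 0.005" on each individual reading.
nominal = 0.050
sigma = 0.005
spec_lo, spec_hi = 0.040, 0.060   # assumed +/-0.010" spec window

single = NormalDist(mu=nominal, sigma=sigma)

# Chance that any ONE reading falls outside the spec window, even
# though the lot itself is perfectly centered:
p_single_oos = single.cdf(spec_lo) + (1 - single.cdf(spec_hi))

# The average of n readings has sigma/sqrt(n) noise, so the same
# window is far harder to violate:
n = 5
averaged = NormalDist(mu=nominal, sigma=sigma / n ** 0.5)
p_avg_oos = averaged.cdf(spec_lo) + (1 - averaged.cdf(spec_hi))

print(f"single reading out of spec:    {p_single_oos:.1%}")  # ~4.6%
print(f"5-reading average out of spec: {p_avg_oos:.4%}")
```

With these assumed numbers, roughly one individual reading in twenty violates the window while the 5-reading average essentially never does, which is why a no-individual-reading-out requirement punishes a well-centered lot.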
An incoming test is often a single measurement taken on one sample expected to represent the lot, and when it fails, the lot is assumed to be deficient. But as the old song goes, “It ain’t necessarily so!” A single test result can deviate by as much as ±30% from the lot average simply because of the inherent “noise” in the test method and measurement. Lab-to-lab correlation is marginal at best, so in many cases users wind up employing a “real-lamination” procedure to accept or reject material based on the empirical question: “Does it work OK in a lamination simulation test?” This protects the user, but it may well reject in-spec material, resulting in delays, claims and counterclaims, and pain for both parties. Moreover, the supplier cannot run a “real lamination” cycle for routine manufacturing testing; it would take too much time and would not give a real-time indicator of product performance or process drift, which is key to process control.
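A quick Monte Carlo sketch makes the supplier/user mismatch concrete. All numbers here are illustrative assumptions: the supplier releases a lot when the average of 5 readings is within a ±0.010" window, while the user judges the same lot on a single incoming reading drawn from the same noisy test.

```python
import random

random.seed(42)

# Illustrative, made-up numbers: nominal flow, per-reading test noise,
# and a symmetric acceptance window around nominal.
nominal, sigma, tol = 0.050, 0.005, 0.010

shipped = rejected_incoming = 0
for _ in range(100_000):
    # Supplier releases the lot on the AVERAGE of 5 readings:
    readings = [random.gauss(nominal, sigma) for _ in range(5)]
    avg = sum(readings) / len(readings)
    if abs(avg - nominal) <= tol:
        shipped += 1
        # User judges the shipped lot on ONE incoming reading:
        incoming = random.gauss(nominal, sigma)
        if abs(incoming - nominal) > tol:
            rejected_incoming += 1

false_reject_rate = rejected_incoming / shipped
print(f"in-spec lots failed by a single incoming test: "
      f"{false_reject_rate:.1%}")
```

Under these assumptions, a few percent of lots that the supplier correctly released still fail the single-reading incoming test, with no process drift at all; the disagreement is pure test noise.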
We can measure the diameter of a test hole repeatably, operator to operator, to within 0.001” to 0.002” – but when the test itself may vary by as much as ±0.015” on any given sample (more on higher-flow “low-flow” materials), measurement accuracy isn’t the issue.
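A back-of-envelope variance check shows why. Independent error sources add in variance (sigma squared), so taking the column's own figures of roughly 0.002" gauge repeatability against roughly 0.015" test-to-test scatter:

```python
# Variances of independent error sources add, so compare squared
# contributions. Figures are the ones quoted in the text above.
gauge_sigma = 0.002   # diameter-measurement repeatability, inches
test_sigma = 0.015    # sample-to-sample scatter of the test, inches

share = gauge_sigma ** 2 / test_sigma ** 2
print(f"measurement error explains ~{share:.1%} of the variance")  # ~1.8%
```

Measurement error accounts for under 2% of the observed scatter; the other 98% is the test method and the material, which is the sense in which "measurement accuracy isn't the issue."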
I wish I could say that there is a comprehensive, “works for everybody” answer to this issue, but there isn’t. Suppliers are working hard to run their processes in tighter control (which does not resolve the test-method variability question), and most users recognize that there is a testing “gray area.” But whenever there is a nonconformance, nobody wants to take responsibility for the risk of bad product, so “Does it work?” becomes the gold standard, and suppliers either suck it up or get tossed out. While our in-house specifications are designed to be as process-friendly as we know how to make them, there will always be some percentage of the end user’s part numbers that seem to defy the ability of the product to “work,” even when it is in spec.
Newer generations of low flow products have been designed to provide a wider process window – in other words they are more likely to work well in a wide range of process conditions without the need for constant tweaking. Subject to, of course, Murphy’s Law!
That all leads to the second issue in using low-flow materials, whatever questions there may be about testing and correlation to in-use conditions. That second issue is that once a material has been used successfully for a while, the end-use processes have been tweaked to fit its idiosyncrasies, and alternative, competitive materials never quite fit in. (Chet’s Corollary to Murphy’s Law is that “There is no such thing as a drop-in replacement!”) This is almost always attributed to the alternative material’s not being as process-friendly – but in fact, if it were the other way around, the same thing would occur.
I know I risk being charged with heresy here, but no two products process exactly the same, no matter what generic slash sheet they purport to meet, and no two suppliers produce identical products, even if they totally meet the same IPC specification. There are always “phantom variables” because no man-made process is perfect no matter how good the engineering or how accurate the metrics by which the control is established. And in the case of low flow materials, the method used to test the material is inherently more “noisy” than the coating process itself.
I’d love to say that the answer to all your problems is to use Arlon’s low-flow products, which we are able to keep in spec consistently (taking a statistically valid sample and averaging results), but that would be overstating their benefits unless users are prepared to tweak their processes to take advantage of the unique properties of the materials. So it will also be with anyone else’s products. The “secret” to making the best use of any material is to work with the material supplier to get the most out of it by taking advantage of its unique properties, and not trying to force a round peg into a square hole.
Chet Guiles is a consultant for Arlon Electronic Materials.