One of the poorly kept secrets of semiconductor electronics is that they're quite susceptible to heat. Electronics you use in your house, or even in a car's cockpit, don't see a lot of heat - maybe 140F max - but those in the engine compartment see a LOT of heat, with massive swings from startup to 20 minutes after shut-down. That, and vibration, cause them to fail over time. Components CAN be designed and built to withstand greater temperature extremes (like "Military Spec" parts), but they cost more. There are also several levels of "commercial grade" parts, depending on the demands of the design and environment.
As an aside, and totally not connected to carburetor problems, did you ever wonder how some computer companies screen out failures like this to build a more reliable product?
My last company blazed the trail in the computer storage industry by running our production computer boards through a near-Mil-Spec factory test series of temperature and vibration swings to weed out failures before systems ever left the factory. At the computer board level we used "Tenny" chambers like this:
They were giant ovens with a shake table inside. We could fit four full computer systems inside at once (each system is a card cage the size of a large microwave oven) and run them on simulators doing actual work while the chamber cycled from 0F to +145F (-18C to +63C) and back to 0F (we flooded the hot test chamber with nitrogen gas to drop the internal temperature, then re-heated it and repeated). While it was temperature cycling, it would also run through a vibration profile from 20 Hz to about 3 kHz. All of this ran 24/7 for 2 weeks at the board level and then another 2 weeks at the system level. Everything going on was recorded and analyzed by automatic programs to give us failure data down to the transistor level.
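If it helps to picture that screening profile, here's a rough sketch in Python. It's purely illustrative, not anything we actually ran: the temperature band, vibration sweep, and 2-week duration come from the description above, but the ramp and dwell times are made-up placeholders.

```python
# Illustrative sketch only -- the real chamber controllers and ramp/dwell
# times were nothing this simple; every number not quoted above is a guess.
from dataclasses import dataclass

@dataclass
class CyclePhase:
    name: str
    target_f: float      # chamber setpoint, degrees F
    minutes: int         # assumed dwell/ramp time (not from the original post)
    vibe_hz: tuple       # vibration sweep band running during this phase

# One thermal cycle: heat to +145F, nitrogen purge down to 0F, repeat.
ONE_CYCLE = [
    CyclePhase("ramp_hot",  145.0, 60, (20, 3000)),
    CyclePhase("soak_hot",  145.0, 30, (20, 3000)),
    CyclePhase("n2_purge",    0.0, 45, (20, 3000)),  # flood with nitrogen gas to pull the temp down
    CyclePhase("soak_cold",   0.0, 30, (20, 3000)),
]

def build_profile(days: int = 14):
    """Repeat the cycle around the clock for the whole 2-week screening run."""
    minutes_per_cycle = sum(p.minutes for p in ONE_CYCLE)
    cycles = (days * 24 * 60) // minutes_per_cycle
    return [phase for _ in range(cycles) for phase in ONE_CYCLE]

profile = build_profile()
print(f"{len(profile) // len(ONE_CYCLE)} thermal cycles over 14 days")
```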
Full systems were about the size of a kitchen refrigerator, or two, or three side by side, depending on what system was bought, so we had huge rooms, the size of a football field, that ran at either 40F or 122F (4C or 50C). We couldn't go any further than that because people worked in there, running test programs and moving systems in and out between rooms. Here's a typical room:
The system would spend a few days in one room, then get moved to the other (rolling over a grooved vibration floor on the way), back and forth for 2 weeks, while we looked for further system-level failures.
So what did all this get us? Well, before we went to this form of testing we had about a 40% start-up failure rate at install and an in-service failure rate of about 25%. We also had a field service technician force of about 500 people to support a yearly output of about 2,000 systems. Once we instituted the new factory test profile, we were shipping just under 3,000 systems each quarter, ramping up to about 20,000 per year by the time I left. Our install start-up failure rate was about one in every 2,000 systems shipped (usually an interconnection/connector problem) and the in-service failure rate was less than 1% (seven sigma, for you quality geeks). That was unheard of in our industry.
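Put the before-and-after numbers side by side and the improvement is easier to see. This is just arithmetic on the figures above (treating "less than 1%" as an upper bound):

```python
# Back-of-the-envelope comparison using only the rates quoted above.
before_install = 0.40          # ~40% of systems had a start-up failure at install
after_install  = 1 / 2000      # ~1 failure per 2,000 systems shipped
before_service = 0.25          # ~25% in-service failure rate
after_service  = 0.01          # "less than 1%" in service

print(f"Install failures:    {before_install:.0%} -> {after_install:.3%} "
      f"(~{before_install / after_install:.0f}x better)")
print(f"In-service failures: {before_service:.0%} -> under {after_service:.0%} "
      f"(at least {before_service / after_service:.0f}x better)")
```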
During that time, as we ramped up production, we chased failures back to their root causes and designed stronger/better parts into the system. We also increased the field tech force from 500 to 1,200 people, which was still about 75% fewer people than our competition, while we earned the reputation that not only was our product line the fastest out there, but "EMC stuff doesn't fail." Both of those let us cover all of the extended test costs and charge a premium for our products.
It just goes to show that building in quality sells.