Many tech companies design their own "chips". Whether it's a customized CPU, a subsystem, or a full SoC, many "off the shelf" offerings aren't optimized for specific applications. Apple's CPUs, for example, are custom designed. Of course, most of these companies don't manufacture their own chips. A cutting-edge fab can cost upwards of $4 billion to build, so the designs are physically manufactured by a handful of well-known foundries.
The world laughed at Elon when he said that he would compete with banks to provide payment processing services. The world laughed at Elon when he said he would be the premier non-government-affiliated provider of orbital transport services. The world laughed at Elon when he said he could compete with the established auto industry. The short sellers were laughing when Tesla stock went over $100 per share. Who's laughing now?
As someone who does this for a living, I'll say that designing a chip is (almost) always done by a different company from the one that fabricates it. There are likely 10,000 companies in the world that design chips, and perhaps a dozen or two that fabricate them. It used to be that chip design houses (National Semiconductor, TI, etc.) owned their own fabs; these days, as Brokedoc noted, the cost of a fab is so high and its lifetime so short that fabs are owned by companies specializing in fabrication. They take designs from hundreds of customers and build them, keeping the machines in their fab busy 24 hours a day so they can pay them off before they become obsolete. Intel, as a counterexample, still builds and runs its own fabs, but it's one of the few.
Could Tesla build a better Autopilot chip than NVidia? The answer is likely "yes": they know precisely which algorithms they expect to run on the chip, so they can optimize the chip's architecture for those specific algorithms. What NVidia designed is a very capable chip that supports all kinds of possible workloads. Some of those workloads don't match what Tesla is going to do, so the circuits designed in to support them are dead space as far as Tesla is concerned. When you're building millions of copies of a chip, its cost is mostly a function of the chip's area (size): a chip that's twice the size costs roughly twice as much. Eliminating that dead space can make a significantly cheaper chip; using it for new circuitry can make a significantly faster chip.
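The area-to-cost intuition can be sketched with toy numbers. To be clear, the per-mm² cost and die sizes below are invented for illustration, and real per-unit cost also depends on yield, packaging, and test, which this ignores:

```python
# Toy model: per-unit chip cost treated as roughly linear in die area.
# The cost-per-mm^2 figure and die sizes are made up for illustration;
# yield, packaging, and test costs are deliberately ignored.

def chip_cost(area_mm2, cost_per_mm2=0.25):
    """Approximate per-unit cost assuming cost scales linearly with area."""
    return area_mm2 * cost_per_mm2

general_purpose = chip_cost(600)  # big chip carrying circuits for many workloads
trimmed = chip_cost(450)          # same design with unused blocks stripped out

print(general_purpose)  # 150.0
print(trimmed)          # 112.5
```

Under this (simplified) linear model, trimming 25% of the die saves 25% of the per-unit cost, which is the choice described above: pocket the savings, or spend the reclaimed area on circuits that make the chip faster.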
It's important to recognize that the cost of a chip is driven by its area (size) if you're going to build tens of millions of them, or by its development costs if you're only building tens of thousands. If Tesla knows they're only going to use single-precision floating-point numbers, they can eliminate support for double precision. That gives them two options: reduce the size (and thus the cost) of the chip, or add more circuits to make the chip faster at the same cost.
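To make the precision point concrete: a single-precision (IEEE 754 binary32) value occupies 4 bytes while a double (binary64) occupies 8, so every register, bus, and multiplier that has to handle doubles is roughly twice as wide in hardware. A minimal sketch of just the storage difference, using Python's standard `struct` module:

```python
import struct

# Pack the same value as single-precision (float32) and double-precision
# (float64) to show the width difference the hardware has to carry.
single = struct.pack('<f', 3.14159)  # IEEE 754 binary32
double = struct.pack('<d', 3.14159)  # IEEE 754 binary64

print(len(single))  # 4 bytes
print(len(double))  # 8 bytes
```

This only shows the storage footprint; the silicon cost of a double-precision multiplier grows faster than 2x because multiplier area scales with the square of the mantissa width, which is part of why dropping double-precision support frees up meaningful die area.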