Ever since the late 1940s, the Turing machine has been the central paradigm for how computing machines are defined and designed, including the hardware that powers the current wave of AI. But other paradigms of computing have enjoyed a measure of success in the past or are being developed today, including various forms of analog computing. Using the properties of materials, or only a handful of electronic components, some of these analog methods exploit the laws of physics to obtain quantitative results. Such methods were used in special-purpose computers built to solve only specific equations. Other devices, known as General Purpose Analog Computers, have a generality similar to that of Turing machines but compute with continuous values and are inherently parallel. All these devices use orders of magnitude fewer components, and far less energy, to perform their computations. Together, they display a bewildering variety of approaches, in marked contrast to the standardized world of universal Turing machines. Are there things that AI can learn from analog computers?
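To make the General Purpose Analog Computer idea concrete, here is a minimal sketch of its signature construction: integrators wired in a feedback loop so that the machine's continuous state traces out the solution of a differential equation. A physical GPAC would integrate continuously and in parallel; this sketch discretizes time with a forward Euler step. The function name, step size, and example equation (x'' = -x, the harmonic oscillator) are illustrative choices, not taken from any specific machine.

```python
import math

def gpac_oscillator(t_end=1.0, dt=1e-4):
    """Discrete-time sketch of a GPAC wiring for x'' = -x.

    Two integrators in a feedback loop: one integrates y to produce x,
    the other integrates -x to produce y.  With x(0)=1 and y(0)=0 the
    state traces x(t) = cos(t), y(t) = -sin(t).
    """
    x, y = 1.0, 0.0                      # initial conditions
    for _ in range(int(t_end / dt)):
        # each assignment plays the role of one analog integrator
        x, y = x + dt * y, y - dt * x
    return x, y

print(gpac_oscillator())                 # close to (cos 1, -sin 1)
```

The point of the sketch is the architecture, not the numerics: the "program" is the wiring between integrators, and the answer is read off as a continuously evolving quantity rather than computed symbol by symbol.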