From space travel to the World Wide Web, email to nanoscale robotics, humanity’s constant drive for improvement and innovation has changed the course of history. So hop aboard and get comfortable — the time machine is all fired up and ready to relive the greatest scientific discoveries of decades past. Let’s go!
First stop? Space-race technology from the 1960s that put humans on the moon and laid the foundation for computer development at breakneck speed: the microchip.
To the Moon!
Today, travel to the moon and Mars is the subject of governmental funding debates, executive orders and the drive to recapture public interest. But in the mid-1960s, the United States was fully committed to winning the space race against the Soviet Union, and NASA brought its A-game with Apollo 11. On July 20, 1969, NASA’s Mission Control got its first message from the surface of the moon: “Houston, Tranquility Base here. The Eagle has landed.” It was the culmination of a decade’s worth of hard work, political one-upmanship and — perhaps most importantly — technological innovation.
To reach the stars, NASA needed computer technology that was lightweight, scalable and able to outperform anything on the market, but nothing like that had ever existed. Computer World notes that putting anything into low Earth orbit cost an astronomical $10,000 a pound, so keeping Apollo’s weight down reduced the fuel needed and freed up budget for other projects. Computing speed was also a big hurdle: According to NASA, Apollo required “a small, lightweight guidance and navigation unit that could process complex trajectory equations and issue guidance commands to the Apollo spacecraft in ‘real time’ during the flight.”
The Integration Generation
In the late 1950s, discrete transistor technology was struggling to keep pace with the demand for smaller, more complex devices, but a revolution was coming. In July 1958, electrical engineer Jack Kilby realized that entire circuits, transistors and all, could be built into a single semiconductor crystal, offering both better performance and significantly reduced size. Months later, Robert Noyce had the same idea, but he refined the chip by linking component parts with copper lines printed on an oxide layer to create a monolithic circuit, reports Computer History.
On April 25, 1961, the first patent for an integrated circuit was issued. Harvey Cragon, a colleague of Kilby’s, designed a demonstration computer for the U.S. Air Force showing that 587 integrated circuits could replace 8,500 transistors and deliver the same functionality. NASA then chose MIT and Noyce’s company, Fairchild Semiconductor, to build the Apollo Guidance Computer, says Computer World.
While there’s no doubt that integrated circuit technology from the 1960s helped the U.S. win the space race, microchips were just getting started.
Innovation at Scale
Much like the transistor technology that preceded it, microchip manufacturing took off, allowing companies to scale down the size of components and create computing products small enough — and cheap enough — for consumers. It’s the natural second stage of the greatest scientific discoveries: leveraging what began as something revolutionary to drive mass production at scale. From microwaves to personal computers, portable cassette players to advanced touchscreen tablets, microchips are now the mainstay of electronic design.
Consider smartphones. This computer-in-your-pocket revolution wouldn’t have been possible without tiny, high-performance microchips capable of processing massive amounts of information at speed. Competition among manufacturers has led to ever-smaller transistors and chips. According to The Verge, scientists at the Lawrence Berkeley National Laboratory have managed to create a working transistor with a gate just 1 nanometer long, a feat that pushes right up against what physical laws suggest is possible.
Commercially, the result of integrated circuit innovation is stunning. As reported by Venture Beat, there are more than 3 billion smartphone users worldwide. Emerging products such as wearable devices and injectable microchips that can track bodily drug and alcohol levels, meanwhile, are helping to expand the microchip market in new (and sometimes worrisome) ways.
Breaking Moore’s Law
Moore’s Law was conceived in 1965, when Gordon Moore observed that the number of transistors on microchips doubled each year even as prices were halved. As noted by TechRadar, however, silicon is reaching its physical limits. The time it takes for each generation of chips to shrink is stretching out, to 18 months and counting, and below a threshold size transistors won’t work properly because there isn’t enough room to reliably switch between “on” and “off” states. In addition, new devices and applications now require reduced latency, increased speed and enhanced light detection that existing silicon chips can’t deliver.
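To get a feel for why that slowing cadence matters, here is a rough back-of-the-envelope sketch in Python. It is a hypothetical illustration rather than anything from the sources above; the 2,300-transistor starting point is roughly the count of Intel’s first microprocessor from 1971.

    # Rough Moore's Law projection: compare a 12-month doubling cadence
    # with an 18-month one over the same 20-year span.
    def projected_transistors(start_count, years, doubling_months):
        doublings = (years * 12) / doubling_months
        return round(start_count * 2 ** doublings)

    for months in (12, 18):
        total = projected_transistors(2_300, years=20, doubling_months=months)
        print(f"Doubling every {months} months for 20 years: about {total:,} transistors")

Stretching the doubling period from 12 months to 18 cuts the 20-year projection from roughly 2.4 billion transistors to about 24 million, which is why even a modest slowdown in shrinking has such an outsized effect.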
Is this the end for integrated circuits? Not quite. New compound semiconductors such as gallium nitride (GaN), made from gallium and nitrogen, could offer performance up to 100 times that of current silicon. While this (probably) won’t break any immutable physical laws, it could bridge the gap between emerging device demands and existing circuit limitations.
It All Starts Here
There’s no denying it: Microchip technology from the 1960s was not only the greatest scientific discovery of the decade; it also laid the groundwork for generations of human advancement, ingenuity and innovation.
Ready for another ride? Flash forward to the disco-driven discoveries of the 1970s.