The History of Intel, Part 1
by Dr David Maddison, VK3DSM

Image source: https://pixabay.com/photos/intel-8008-cpu-old-processor-3259173/
Intel is (or some would say was) one of the world’s most influential and
largest manufacturers of computer chips, including microprocessors. That
includes the central processing units that power a large portion of modern
computers and related devices.
Starting with the world’s first
microprocessor in 1971, which
sparked the personal computer
revolution, Intel grew to a market capitalisation of US$509 billion in 2000
($930 billion in today’s money).
Today, its valuation fluctuates around US$188 billion as it faces AI
challenges, serious competition and
the legacy of management deficiencies
that led to failures to innovate, among
other problems.
Intel is currently building new
foundries in the United States but
still has management challenges after
a rocky few years.
The founding of Intel
Fairchild Semiconductor was
founded in 1957 by the “traitorous
eight” engineers from Shockley Labs,
who were dissatisfied with the way
Shockley ran it. Two of those eight
were Gordon Moore (famous for
Moore’s Law) and Robert Noyce, the
co-inventor of the integrated circuit
(see Fig.1).
Moore and Noyce left Fairchild to
found Intel on the 18th of July 1968.
Another Fairchild employee, Andy
Grove, also left and joined Intel on the
day of its incorporation, although he
was not a founder. He helped get Intel’s
manufacturing operations started and
move Intel’s focus from memory to
CPUs in the 1980s, establishing it as
the dominant player in the market.
In addition, investor Arthur Rock
provided US$2.5 million in funding
(equivalent to US$23.3 million today
or AU$35.5 million).
The new company was originally
proposed to be named Moore Noyce,
but they decided it was best to avoid
the “more noise” pun, which is understandable for an electronics company.
It was named NM Electronics initially,
but after a few weeks, was renamed
to Intel, which is derived from “integrated electronics”.
Intel was already a trademark of the
hotel chain Intelco, so they also had to
buy the rights to that name.
Intel’s first headquarters was in
Mountain View, California (it is now
in Santa Clara, California). Its first
106 employees are shown in Fig.2
(in 1969).
Noyce and Moore left Fairchild
because they saw the potential of
integrated circuits (ICs) and wanted
to create a company centred on their
research and production. For more on
Fairchild and the traitorous eight, see
our articles on IC Fabrication in the
June-August 2022 issues (siliconchip.au/Series/382).
They had become dissatisfied at
Fairchild because they felt it was not
reinvesting enough in research and
development. They also believed Fairchild
wasn’t growing fast enough, disliked the
administrative workload, and said that it
no longer had the hands-on creative
culture it once did.
They also wanted to standardise the
mass production of ICs: specifically,
a manufacturing process for chips
that could be widely adopted, was
cost-effective and scalable, and could
be applied to many different chip
designs.
Fig.1: Andy Grove, Robert Noyce and Gordon Moore in 1978.
Source: www.flickr.com/photos/8267616249 (CC BY-SA 2.0)
Noyce invented the first commercially viable monolithic IC (a circuit
on a single piece of silicon or other
material containing all the circuit’s
transistors, resistors, capacitors etc)
and licensed Fairchild’s “planar process” for manufacturing it.
Thus, the new company was to
be based on investing extensively in
research into the manufacturing of
integrated circuits, with a focus on
standardisation of the production processes for the monolithic ICs. Moore’s
Law provided an ongoing objective for
Intel to strive toward.
Moore’s Law was an observation
he made in 1965 that the number
of components on a chip doubles
roughly every two years, a compound
growth rate of 41%. Moore’s Law
held until roughly 2016, at which
time the physical limits of component density were reached. The rapid
increase in computing power continues through advanced chip packaging methods, architectures and higher
clock speeds.
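As a worked check of that growth rate (our own illustration, not from the article's sources), the short C program below derives the compound annual rate implied by a two-year doubling and projects the 4004's 2300 transistors forward; real chips (compare Table 2) lagged this idealised curve somewhat.

    // Moore's Law arithmetic: doubling every two years equals
    // 2^(1/2) - 1, or about 41% compound annual growth.
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double annual = pow(2.0, 0.5) - 1.0;
        printf("Compound annual growth: %.1f%%\n", annual * 100.0);

        // Project the 4004's 2300 transistors (1971) forward, doubling
        // every two years; an idealised curve, not actual product history.
        for (int year = 1971; year <= 2021; year += 10) {
            double n = 2300.0 * pow(2.0, (year - 1971) / 2.0);
            printf("%d: ~%.3g transistors\n", year, n);
        }
        return 0;
    }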
Fig.2: a photo of Intel’s first 106 employees in 1969. Source: https://intelalumni.org/memorylane

Intel’s striving to fulfil Moore’s Law
allowed for an ongoing reduction in
the cost of ICs and computers to consumers. That’s because fitting more
components onto one silicon chip
means a more powerful device for the
same cost or less.
Conversely, the cost to producers,
including Intel, to continue to manufacture higher and higher component
densities increases as it becomes more
difficult to make cheaper and faster
chips. The hope is that improvements
in manufacturing technology and
economies of scale reduce the cost
enough that chips become both more
powerful and also cheaper.
Intel processor history
overview
Intel is mostly identified with lines
of microprocessors, although it has
created many other products, which
we will also discuss. Since Intel has
produced such a wide range of processors, its history is complicated and can
be hard to follow.
An abbreviated timeline of Intel processor release dates is shown in Table 1. Many of these will be discussed in more detail later.
Understanding Intel’s history
Intel has a complex history, so we
have broken it up into its dominant
features in every decade. The main
features of each decade can be summarised as follows:
1970s: invented the microprocessor almost by accident with the 4004; the 8080 derivative launched the microprocessor revolution.
1980s: Intel dominated the establishment of the PC era. The IBM PC was released, using the 8088, 80286, 80386 or 80486. Along with clones, it became the dominant PC.
1990s: Intel continued to dominate the PC market. Intel and Pentium became household names, helped by the “Intel Inside” advertising campaign.
2000s: the NetBurst architecture ultimately failed, losing market share to AMD, which reached 25% in 2006. Intel clawed back some ground with the Core microarchitecture and diversification, but faced various challenges.
2010s: stagnation, delays in the 10nm process node, mobile market failure, AMD catching up.
2020s: Taiwan Semiconductor Manufacturing Company (TSMC) technologically overtook Intel. Despite this, Intel still has foundry ambitions and developed hybrid cores. Unlike TSMC, Intel is an integrated device manufacturer (IDM) that designs, manufactures and sells its own chips; Intel wants to become the TSMC of the West. The IDM 2.0 strategy of CEO Pat Gelsinger saw five nodes in four years from 2021 to 2025: Intel 7, Intel 4, Intel 3, Intel 20A and Intel 18A.
Now that we’ve given a broad overview, let’s look at Intel’s history in
more detail.
Table 1: Intel processor families
Processor family | Release date
4004 | 1971
8086/8088 | 1978
80286 | 1982
80386 | 1985
80486 | 1989
Pentium | 1993
Pentium Pro | 1995
Pentium II | 1997
Pentium III | 1999
Pentium 4 | 2000
Core & Core 2 | 2006
Core i3/i5/i7 (1st-8th gen) | 2008-2017
Core i3/i5/i7/i9 (9th-14th gen) | 2018-2023
Core Series 1 | 2023
Core Series 2 | 2024-2025
Core Series 3 | Early 2026

1969-1970s: starting as a memory company
Intel began the decade as the world’s leading memory chip maker and ended it by accidentally igniting the personal computer revolution with the 4004 (1971) and then the 8080 (1974).
The 4004 microprocessor was originally just a side project for calculators, but became the company’s future when dynamic random access memory (DRAM) profit margins started to collapse.
Intel’s first products
Intel’s most important early products, which established the microcomputer revolution, were based around
five chips or chipsets. These were the
3101 (memory), 1101 (memory), 1103
(memory), 1702 (EPROM or erasable
programmable read-only memory)
and the 4004 (microprocessor) and
its associated chipset. We will now
describe each of these chips.
1969: Intel 3101
Intel’s first product was the 3101
Schottky TTL bipolar 64-bit static
random access memory (SRAM) chip,
released in April 1969. By today’s
standards, it had an incredibly small
storage capacity, equivalent to just
eight characters (64 bits). Nevertheless, it was a remarkable achievement
as the company was only established
in July 1968.
Due to the use of Schottky technology, it was nearly twice as fast as earlier implementations of such chips
and was designed for use with computer CPUs.
Even though Intel initially wanted
to focus on research and development,
they were incentivised to produce this
chip by Honeywell’s announcement
that they would purchase SRAMs
from anyone who made them. This
triggered a competition among memory manufacturers.
Honeywell ended up not using the
chips because they wanted more than
64 bits, but Intel’s achievement made
it known to the world that Intel was
now a serious company, no longer
the underdog, and other companies
became interested in the 3101.
The 3101 was unsuitable for main
memory, the dominant form of which
at the time was magnetic core memory, which had capacities in mainframes up to around 4MiB (in the IBM
360 model 195). Still, it was suitable
where high-speed memory devices
were needed, such as for processor
registers in minicomputers as offered
by Burroughs, Xerox and Interdata.
1969: Intel 1101
Following soon after the 3101 was
an even more important product,
the 1101 256-bit SRAM chip (Fig.3),
which was the first with two key technologies: metal oxide semiconductor
(MOS) and silicon gates rather than
metal. The MOS technology allowed
for higher memory capacity (more
memory per area of silicon) and higher
chip densities.
It had access times of 1.5 microseconds (1.5μs) and ran at 5V, consuming 500mW.
The difference between SRAM and DRAM
SRAM is faster than DRAM while using less power, as it doesn’t need constant refreshing to maintain data, but it is more expensive and has a lower capacity per chip than DRAM.
On the flip side, DRAM is cheaper and has a higher capacity per chip, but it uses more power and is slower than SRAM, as it needs to be constantly refreshed. Both types of memory are volatile, meaning they lose their data when power is removed.

Fig.3 (top): Intel’s first really successful product, the 1101 256-bit SRAM chip. Source: www.cpu-zone.com/1101.htm
Fig.4 (bottom): Intel’s first DRAM chip, the 1103, introduced in 1970. Source: https://w.wiki/GYXb (CC BY-SA 4.0)
Fig.5: the three-transistor memory cell was invented in 1969 by William Regitz and colleagues at Honeywell. Original source: https://w.wiki/GYJp (GNU FDL v1.2)

1970: Intel 1103
The 1103 (Fig.4) was the first commercial DRAM (dynamic random access memory) chip, with a
capacity of 1024 bits or 128 extended
ASCII characters. It had a sufficiently
high capacity and low enough cost
that it began to replace magnetic core
memory. By 1972, it was outselling all
other types of memory combined due
to costing less and being smaller than
core memory.
The chip was discontinued in 1979.
It was used in computers such as the
HP 9800 series, Honeywell minicomputers and the PDP-11. The actual
three-transistor dynamic memory
cell configuration shown in Fig.5 was
invented by Honeywell, who asked
the fledgling Intel to manufacture
it. It was later also manufactured by
National Semiconductor, Signetics
and Synertek.
1971: Intel 1702
The first EPROM chip was developed by Dov Frohman at Intel – see
Figs.6 & 7. It had 2048 bits of memory
that could be erased with UV light and
rewritten electrically.
It was revolutionary because, before
then, “firmware”, the most basic
instructions for a computer or similar
device to boot, had to be in the form
of hardwired logic that was difficult
or impossible to change.
Intel offered another cheaper version of this chip, which was ‘write
once’ and could not be erased. The
only differences were that it did not
have an expensive transparent quartz
window for UV erasure, and it came
in a plastic rather than ceramic package.
Today, flash memory has replaced
EPROM memory for things like firmware, but the 1702 was an important
development as it made prototyping
new products much easier, along with
allowing product updates.
Fig.6: a demonstration of the 1702 chip in 1971, using its stored information to display the Intel logo on an oscilloscope. Source: https://timeline.intel.com/1971/the-world’s-first-eprom:-the-1702
Fig.7: the Intel 1702 had a transparent window through which the contents could be erased by UV light and then electronically rewritten. Source: https://timeline.intel.com/1971/the-world’s-first-eprom:-the-1702
1970s: the microprocessor
revolution
Intel’s and the world’s first microprocessor would not have happened at
the time had it not been for a request
from the Japanese Busicom calculator company.
The Busicom calculator
In 1969, Busicom asked Intel to
design a set of chips for their proposed
electronic calculator. At the time, calculators contained large numbers of
discrete components and complex
wiring, so they wanted to reduce the
cost by using a dedicated chipset.

Fig.8: a Busicom 141-PF / NCR 18-36 circuit board with chips Intel developed for it. Note the blank space for the optional 4001 ROM for the square root function. Source: Nigel Tout, http://vintagecalculators.com

The Busicom engineers designed a calculator that required 12 ICs and asked Intel
to make these custom chips.
Ted Hoff at Intel, aided by Federico Faggin and Stanley Mazor,
came up with a much more elegant
design needing only four chip types
containing ROM (read-only memory),
RAM (random-access memory), a shift
register and what was to become the
4004 microprocessor. These chips
were developed, produced and sent
to Busicom in January 1971, and they
had exclusive rights to them.
The 4004 microprocessor was a single silicon chip that contained all the
basic functional elements of a computer’s central processing unit (CPU).
Until the 4004, CPUs had to be fabricated using multiple individual components at much greater cost and complexity.
The resulting calculator was the
Busicom 141-PF, also marketed as the
NCR 18-36 (see Fig.8). An optional
ROM chip was available to provide a
square root function. In common with
other calculators of the era, it printed
the results rather than displaying them
on a screen.
This was an important moment in
the history of calculators because,
at the time, calculators had to have
their functionality designed into the hardware, which meant every model
required extensive customised circuitry. The new Intel microprocessor
and ROM allowed new designs to be
made simply by changing the programming of the microprocessor via
ROM.
The calculator used four 4001 ROM
chips, two 4002 RAM chips, three 4003
shift registers and one 4004 microprocessor. More about this chipset later.
At the same time as the Intel developments, Busicom commissioned
Mostek to produce a ‘calculator on
a chip’, which resulted in an even
lower chip count than the Intel solution. The chip developed and released
in November 1970 was the Mostek
MK6010, but that’s another story.
In mid-1971, Busicom asked Intel
to lower the chip prices, which
resulted in Intel renegotiating the
contract such that Busicom gave up
their exclusive rights, enabling Intel
to sell the chips. Then, in November
1971, Intel announced the release of
the MCS-4 chipset family based on the
chips developed for Busicom.
1971: the beginning of the
microprocessor revolution
On the 15th of November 1971,
Intel commercially released the 4004
microprocessor that they had developed for Busicom and licensed back
to themselves.
The Intel 4004 was a revolutionary
product for the computer industry. It
was designed to be affordable, easy-to-use and accessible to a wide variety of
computer designers.
Early microprocessors such as the
4004 were not initially intended for
general-purpose computing, but to
run embedded systems such as calculators, cash registers, computer games,
computer terminals, industrial robots,
scientific instruments etc.
In addition to the Busicom calculator mentioned above, it was used in
Busicom automated teller and cash
machines, the Intellec 4 microcomputer from Intel (Fig.9) to support
software development for the 4004, a
prototype pinball machine by Bally,
and the Pioneer 10 spacecraft.
The software to run such systems
could be developed on the Intellec 4
and then permanently programmed
into ROMs such as the 4001 during
manufacture, or burned into EPROMs
such as the 1702 (which could be
erased and updated).
The 4004 cost US$60 at the time,
which in today’s money would be
US$501 or AU$774.

Fig.9: the Intellec 4 microcomputer for software development for the 4004, available to developers only. It was programmed via front panel switches or an optional terminal interface. Source: https://w.wiki/GYJr (CC BY-SA 3.0)

The MCS-4 (see Fig.10) included the 4001 ROM, 4002
RAM and 4003 I/O chips that together
formed the basic elements of a complete computer. The ~$750 price is
similar to that of a high-end (consumer) CPU today.
The 4004 contained 2300 transistors and was fabricated using a
10-micron (10μm) process. It could
execute 60,000 instructions per second with a 740kHz clock speed and a
4-bit architecture. It could address 640
bytes of RAM and up to 4kiB of ROM
– see Fig.11. The specifications of the
MCS-4 chipset chips were:
4001: a 256 × 8-bit (256-byte) ROM.
4002: a 4 × 20 × 4-bit (40-byte) DRAM.
4003: an I/O chip with a 10-bit static shift register, serial and parallel outputs. A static shift register comprises flip-flops that store and shift binary data (see the sketch after this list).
4004: the microprocessor.
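As promised above, here is a hypothetical C model (ours, not Intel's documentation) of the 4003's 10-bit static shift register: each call clocks one serial bit in, and the full 10-bit word is always available, mimicking the parallel outputs.

    #include <stdio.h>
    #include <stdint.h>

    // Shift one serial bit into a 10-bit register, discarding the oldest bit.
    static uint16_t shift_in(uint16_t reg, int serial_bit)
    {
        return (uint16_t)(((reg << 1) | (serial_bit & 1)) & 0x3FF);
    }

    int main(void)
    {
        uint16_t reg = 0;
        const int stream[] = {1, 0, 1, 1, 0};  // incoming serial data
        for (int i = 0; i < 5; i++) {
            reg = shift_in(reg, stream[i]);
            printf("parallel outputs: 0x%03X\n", reg);  // the 10-bit word
        }
        return 0;
    }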
Using this chipset, a fully expanded 4004 system could have sixteen 4001s for 4kiB of ROM, sixteen 4002s for a total of 640 bytes of RAM, plus a practically unlimited number of 4003s for I/O.
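Those maximum figures follow directly from the chip capacities listed above; a two-line check in C (ours):

    #include <stdio.h>

    int main(void)
    {
        printf("ROM: 16 x 256 bytes = %d bytes (4kiB)\n", 16 * 256);  // 4096
        printf("RAM: 16 x 40 bytes  = %d bytes\n", 16 * 40);          // 640
        return 0;
    }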
The most powerful 4004 system?
The most powerful Intel 4004 system, called Linux/4004, was built by
Dmitry Grinberg in 2024. It was created to use “ancient” 4004 hardware
merged with a modern Linux operating
system. It is a testament to the powerful and flexible nature of the 4004
chip, which was originally intended to power a calculator, although the
resulting system is not exactly practical.
The system took 4.76 days to boot a stripped-down Linux kernel to the command prompt. It could perform rudimentary fractal calculations of the Mandelbrot set. A full description of the project can be found at siliconchip.au/link/ac9t and there is a video on it at https://youtu.be/NQZZ21WZZr0

Fig.10: the Intel MCS-4 chipset. Source: https://en.wikichip.org/wiki/File:MCS-4.jpg

Fig.11: the chip layout (a drawing, not a photograph) of the 4004 processor. Source: https://w.wiki/GYJq (CC0 1.0)
4004 image source: https://w.wiki/GYZY
8008 image source: https://w.wiki/GYZZ
i960 image source: https://w.wiki/GYK8

Fig.12: the die of the Intel 8008, their first 8-bit CPU. Source: https://x.com/duke_cpu/status/1980293005644107812

Fig.13: an Intel i960 die (80960JA). Note the large cache memory banks (rectangular grids); the actual core is pretty small since it’s a RISC processor. Source: https://w.wiki/GYK9 (CC BY 3.0)

Fig.14: an 8080 chip made by Intel. Source: https://w.wiki/GYJy (CC BY 4.0)

Fig.15: the Altair 8800 computer was sold as a kit, and also had an optional 8-inch floppy drive. It popularised the use of the Intel 8080 processor. Source: https://americanhistory.si.edu/collections/object/nmah_334396
After the 4004
The success of the 4004 led to the
development of the 8008 and the 8080
CPUs, which established Intel as the
world leader and led to great expansion of the company in the 1970s,
1980s and 1990s.
8008: the 4004 led to the development of the 8008 in April 1972.
It was the first 8-bit microprocessor
and could address 16kiB of memory.
It was manufactured but not designed
by Intel. CTC (Computer Terminal Corporation) designed it for use in their
Datapoint 2200 programmable terminal, but Intel licensed the design for
use in other products.
The 8008 was discontinued in 1983.
Its clock speed was 500-800kHz and it
used 10-micron technology, with 3500
transistors. The 8008 is most famous
for being the microprocessor used in
the first enthusiast personal computers: the SCELBI (US, 1974), the Micral
N (France, 1973) and the MCM/70
(Canada, 1973). It was also used in the
HP 2640 computer terminals.
8080: the 8080 followed in 1974
(Fig.14). It was originally conceived
for embedded systems, but it was
broadly adopted and remained in
production until 1990. Made with a 6
micron (6μm) process node, it had a
clock rate of 2-3.125MHz and was an
8-bit processor but had the ability to
execute 16-bit instructions. It could
address 64kiB of memory.
A variety of support chips were
available for it. It had about 6000 transistors and could execute several hundred thousand instructions per second. It was used in the first commercially successful personal computers,
like the Altair 8800 (see Fig.15), and
other S-100 bus systems running the
CP/M operating system.
8085: the 8085, introduced in
March 1976 and discontinued in
2000, was the successor to the 8080
and Intel’s last 8-bit processor. It was
compatible with the 8080 but had the
advantage of only needing to be supplied with one voltage, not three like
the 8080, making system development
simpler.
It ran at a clock speed of 3MHz,
5MHz or 6MHz, used a 3 micron process node and had 6500 transistors.
It was not widely adopted in microcomputers because the Zilog Z80 chip
(1976-2024) was introduced, which
took over much of the 8-bit market (eg,
running the Osborne 1, TRS-80 and
ZX Spectrum). However, the 8085 was
used as a microcontroller and in video
terminals like the VT-102.
8086: in 1978, Intel introduced
the 8086, its first 16-bit processor
with 29,000 transistors, built on a
3.5 micron process (switching to
2 microns in 1981) – see Fig.16. It
extended the 8080 architecture, introduced segmented memory addressing,
ran at up to 10MHz and could support 1MiB of RAM. It had a simple
two-stage pipelining unit to improve
performance.
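The segmented addressing scheme is simple to demonstrate: the 8086 shifts a 16-bit segment value left four bits and adds a 16-bit offset, producing a 20-bit physical address, hence the 1MiB (2^20 byte) limit. A minimal C sketch of our own:

    #include <stdio.h>
    #include <stdint.h>

    // Classic 8086 real-mode address calculation: segment * 16 + offset.
    static uint32_t physical(uint16_t segment, uint16_t offset)
    {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void)
    {
        // F000:FFF0 is the famous reset entry point near the top of memory.
        printf("F000:FFF0 -> 0x%05X\n", (unsigned)physical(0xF000, 0xFFF0));
        printf("FFFF:000F -> 0x%05X (the 1MiB limit)\n",
               (unsigned)physical(0xFFFF, 0x000F));
        return 0;
    }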
It laid the foundation of the x86
instruction set family of processors.
This processor, along with dominance
of the memory chip market, paved the
way for the commercial personal computer boom.
The x86 instruction set
The x86 instruction set that’s still
widely used today was introduced
with the 8086. It became standardised
with the release of the 8088 processor thanks to its use by IBM in their
open PC architecture in 1981. x86 has
had many updates over the years, but
today’s processors can still run code
that was written back in the late 1970s.
This does not mean that such code
will run on a modern operating system
like Windows 11, but that is a restriction of Windows, not the processor
itself. It is possible to boot Microsoft
DOS from 1981 on a current x86 CPU.
There would be problems such as a
lack of USB and other driver support,
and a lack of compatibility with a modern UEFI (unified extensible firmware
interface) BIOS.
There is a video of a system with a
2016 Intel Celeron N3450 CPU booting a 45-year-old version of DOS at
https://youtu.be/BXNHHUmVZh8 (the
Celeron name was generally applied
to a cut-down or simplified Pentium
processor).
Microsoft also played a role in the
standardisation of x86 by supporting a wide range of hardware that
used x86. With time, new instructions have been added to x86, but the old ones have been kept to ensure compatibility.

Fig.16: an 8086 chip in a ceramic dual-inline package (DIP). Source: https://w.wiki/GYK4 (CC BY-SA 4.0)
Intel and AMD, who both make
x86-compatible processors, have
formed an alliance to standardise
future instructions to ensure their consistent implementation across future
products from both companies. Competing instruction sets include ARM,
MIPS and RISC-V.
Backward compatibility is important because there are enormous
amounts of commercial, financial,
industrial, military, medical and
domestic software written for old
processors that may still be in use.
Some of this software, which can be
decades old, runs on DOS, including
accounting software, payroll systems,
programmable logic controllers, CNC
machines and retro games.
This is one reason that attempts to
replace the x86 instruction set have not
generally been successful, although
ARM has made some inroads. Emulation (where software running on one
processor can interpret instructions
from a different set) can help to ease
the transition.
From 2020 to 2023, Apple moved
away from the x86 architecture as they
transitioned from Intel microprocessors (which they used since 2006) to
their own designs based on the ARM
architecture.
Apple’s reasons were that they wanted a common technology across all their platforms, better performance per watt, and the integration of all components onto a single chip (see also the section later on the stagnation of Intel’s innovation).
Over the years, Intel has developed
extensions to the x86 instruction set,
including:
● MultiMedia eXtensions (MMX)
● the Streaming SIMD (single
instruction, multiple data) Extensions,
which superseded MMX: SSE, SSE2,
SSE3 and SSE4
● Advanced Vector eXtensions
(AVX, AVX2 and AVX-512)
● Advanced Encryption Standard –
New Instructions (AES-NI)
● Software Guard eXtensions (SGX)
● Trusted eXecution Technology
(TXT)
● Transactional Synchronisation
eXtensions (TSX)
● haRDware RANDom number generator (RDRAND)
● Carry-Less MULtiplication for
cryptography (CLMUL)
● x86-64, a 64-bit version of x86 that allows, among other things, access to more than 4GB of RAM (developed by AMD but also implemented by Intel)
● Advanced Performance eXtensions (APX)

Table 2 – Intel’s process node names (only consumer CPUs listed)
Year | Process | Name | Chips made | # transistors
1972 | 10μm | 10μm | 4004 | 2.3k
1974 | 8μm | 10μm | 4040 | 3k
1976 | 6μm | 6μm | 8080 | 6k
1977 | 3μm | 3μm | 8085, 8086, 8088 | 29k
1979 | 2μm | 2μm | 80186 | 134k
1982 | 1.5μm | 1.5μm | 80286, 80386 | 275k
1987 | 1μm | 1μm | 80386, 80486 (up to 33MHz) | 1.2M
1989 | 800nm | 800nm | 80486 (up to 100MHz) | 1.3M
1991 | 600nm | 600nm | 80486 (100MHz), Pentium (60-200MHz) | 3.1M
1995 | 350nm | 350nm | Pentium (120-200MHz), Pentium MMX (166-233MHz), Pentium Pro (150-200MHz) | 5.5M
1997 | 250nm | 250nm | Pentium Pro, Pentium II (233-450MHz), Pentium III (450-600MHz) | 9.5M
1999 | 180nm | 180nm | Pentium III (500-1133MHz), Pentium 4 (NetBurst, 1.3-1.8GHz) | 42M
2001 | 130nm | 130nm | Pentium III (1.0-1.4GHz), Pentium 4 (NetBurst, 1.6-3.4GHz) | 125M
2003 | 90nm | 90nm | Pentium 4 (NetBurst, 2.4-3.8GHz), Pentium M | 169M
2005 | 65nm | 65nm | Final Pentium 4, Core, early Core 2 Solo / Duo | 291M
2007 | 45nm | 45nm | Late Core 2 Duo / Quad, Core i3/i5/i7 (1st gen) | 731M
2009 | 32nm | 32nm | Core i3/i5/i7 (1st gen refresh & 2nd gen) | 1.17B
2011 | 22nm | 22nm | Core i3/i5/i7 (3rd & 4th gen) | 1.4B
2014 | 14nm | 14nm | Core i3/i5/i7/i9 (5th to 9th gen) | 3B
2019 | 10nm | 10nm | Core i3/i5/i7/i9 (10th & 11th gen) | 4.1B
2021 | 10nm+ | Intel 7 | Core i3/i5/i7/i9 (12th & 13th gen) | 21B
2023 | 5nm | Intel 4 & 3 | Core i3/i5/i7/i9 (14th gen), Core Ultra 1 | 30B
2024 | 3nm | Intel 20A | Core Ultra 2 | ~45B
2025 | 2nm | Intel 18A & 14A | Core Ultra 3 | ~80B
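To give a flavour of the SIMD extensions listed above (SSE, AVX and so on), here is a minimal C sketch of our own using the standard immintrin.h intrinsics: it adds eight pairs of floats with a single AVX instruction. It needs a compiler flag such as -mavx under GCC or Clang.

    #include <stdio.h>
    #include <immintrin.h>

    int main(void)
    {
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
        float r[8];

        __m256 va = _mm256_loadu_ps(a);     // load eight floats
        __m256 vb = _mm256_loadu_ps(b);
        __m256 vr = _mm256_add_ps(va, vb);  // eight additions at once
        _mm256_storeu_ps(r, vr);

        for (int i = 0; i < 8; i++)
            printf("%g ", r[i]);            // 11 22 33 44 55 66 77 88
        printf("\n");
        return 0;
    }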
Process nodes
Throughout Intel’s history, it has continually shrunk the feature size of its chips, achieving higher transistor counts and component densities.
We will divert from the history for a
moment to describe process nodes, an
essential part of understanding subsequent processor development.
A process node (or technology node,
which means the same thing) is a term
used in semiconductor manufacturing
representing a specific generation of
chip technology. It was traditionally
named based on the size of a transistor
gate, which continued to shrink while
Moore’s Law still applied.
As it is difficult to shrink transistors much more than they are now,
the names no longer correspond to
any particular physical size, and are
more of a marketing term representing
performance and density increases,
which continue due to 3D packaging
and other techniques.
Fig.17: the microarchitecture of Intel’s (and the world’s) first microprocessor,
the 4004 from 1971. Source: https://w.wiki/GYJu (GNU FDL v1.2)
Fig.18: the microarchitecture of the much more advanced Intel Core 2
processor from 2006. Source: https://w.wiki/GYJv (GNU FDL v1.2)
The number of atoms across the
smallest dimension of a transistor of
the Intel 18A process node (representing 18Å or 1.8nm) is estimated to be
180, but because of the 3D nature of
the transistor, the overall number is
estimated to be thousands. This is currently the minimum number required
for reliable function.
That might not be improved on for a
long time, if ever, for practical devices
as adverse quantum mechanical effects
like electron tunnelling are already a
concern with the 18A process node.
But technology always develops in
unexpected ways...
By way of comparison, the smallest
process node described by Samsung
is 2nm or 20Å. The distance between
centres of silicon atoms in a crystal lattice is 0.235nm or 2.35Å. Commonly
used terms for Intel fabrication processes are listed in Table 2.
The 18A process node (1.8nm) is
what Intel is focusing on for the future.
It will be produced at its Arizona and
Oregon foundries, which are its most
advanced in the world and will lead
the way to the “one trillion transistor
laptop”.
This process node incorporates all
the above technologies and is the culmination of the so-called 5N4Y (five
nodes in four years), which was former
CEO Patrick Gelsinger’s turnaround
strategy, announced in 2021. Gelsinger
was asked to leave in December 2024
when the board felt improvements
were not being made fast enough (his
replacement has had some controversies).
The 5N4Y plan nodes were:
Intel 7 (~10nm): the first use of their
Enhanced SuperFin transistors.
Intel 4 (~5nm): produced with
extreme ultraviolet (EUV) lithography and moving to chiplets/tiles and
associated interconnect technologies,
like Foveros and EMIB (more on these
later).
Intel 3 (~5nm): with improved performance per watt.
Intel 20A: the A marks the move to Angstrom-based measurements. It didn’t go into full production, but led the way to the implementation of RibbonFETs and PowerVias in 18A (more on these later).
Intel 18A: the current process node
with the first processor being the Core
Ultra series 3 (Panther Lake) and the
second to be the Xeon 6+ (Clearwater Forest).
Microarchitectures
Microarchitecture (or μarch) is the
particular way a processor’s internal
hardware (pipelines, execution units,
caches etc) is designed and organised
to implement a given instruction-set
architecture (ISA) such as x86. It is
typically illustrated with pipeline or
block diagrams, like Figs.17 & 18.
Intel re-uses microarchitectures
across multiple processor generations
and models. Most (but not all) major
new Intel processor families introduce
a new or significantly revised microarchitecture. A new microarchitecture
appeared every 2-4 years, while new
processor series (new brand names or
model numbers) were released every
12-18 months; this was called their
tick-tock model. Examples of Intel
microarchitectures are shown in Table 3.

Table 3 – Intel microarchitectures from 1993 to the present
Microarchitecture | Years | Processor families or brands
P5 | 1993-1997 | Pentium (60–200 MHz), Pentium MMX
P6 | 1995-2003 | Pentium Pro, Pentium II, Pentium III, Celeron (early), Pentium II Xeon, Pentium III Xeon
NetBurst | 2000-2007 | Pentium 4, Pentium D, early Xeon
Core | 2006-2008 | Core 2 Duo / Quad (Yonah → Penryn)
Nehalem | 2008-2010 | Core i3/i5/i7 (1st gen)
Sandy Bridge | 2011-2012 | Core i3/i5/i7 (2nd & 3rd gen)
Ivy Bridge | 2012-2013 | 3xxx series (22nm shrink + tweaks)
Haswell → Broadwell → Skylake → … → Coffee Lake → Comet Lake → Rocket Lake | 2013-2021 | 4th gen → 11th gen Core (various); Skylake derivatives used for six consecutive generations (2015-2021)
Alder Lake (Golden Cove + Gracemont cores), Raptor Lake | 2021-2023 | 12th, 13th & 14th gen Core
Meteor Lake | 2023 | Series 1, chiplet-based design
Arrow Lake / Lunar Lake | 2024-2025+ | Series 2 (15th Gen)

Let’s now return to the history of Intel’s products.
The 1970s PC boom
Intel’s processors of the 1970s had a
great cultural impact and were a leap
forward for microcomputing via the
hobbyist PC boom of that era. They
were responsible for democratising
computing and sparking a global DIY
computer revolution, which ultimately led to the widespread commercial development of microcomputers.
As mentioned, the 8080 was
released in 1974. It was the first truly
affordable 8-bit CPU that a hobbyist
could purchase. It cost US$360 in single units, but kit manufacturers like
MITS, the creators of the Altair 8800,
could get them for US$75 (equivalent
to AU$757 today) in volume and sell
them via mail order.
The chip was small, relatively inexpensive and well-documented, so it
was something hobbyists could build a computer around. Thus, computing
moved out of the corporate lab and
into garages and bedrooms.
The Altair 8800 featured on the
cover of Popular Electronics magazine
in 1975 and, after that, 4000 were sold
in weeks at US$439 (AU$4000 today)
pre-assembled or US$297 (AU$2750
today) as a kit. Hobbyists saw the chip
and the Altair computer that used it
as a ‘blank canvas’.
After seeing the magazine, Bill Gates
and Paul Allen wrote Altair BASIC in
1975 as Microsoft’s (then called Micro-Soft) first product. They used a PDP-10 mainframe running an 8080 emulator. Gates released the source code
in April 2025 to mark Microsoft’s 50th
anniversary.
Steve Wozniak was also inspired
by the Altair, which motivated him to
design his own computer, the Apple I
kit, released in July 1976. It used fewer
parts than the Altair. He demonstrated
it at the Homebrew Computer Club
and shared the design and software
for free, but the basic kit was sold for
US$666.66 or AU$5800 today. It did
not use an Intel processor, but a MOS
6502 instead.
The Homebrew Computer Club held
Silicon Valley garage meetings where
hobbyists shared 8080 designs and
code. Intel provided free datasheets,
reference designs and even engineers
who attended. Their slogan was “Build
it. Share it. Improve it.”
Other hobbyist computers of the
1976-1979 era were the IMSAI 8080,
with the Intel 8080, and computers
inspired by the 8080, like the TRS-80
(1977) that used the Zilog Z80 (which
was 8080 compatible), and the Commodore PET (1977), which used the
MOS 6502 like the Apple I.
Intel provided open documentation for its products, while compatible chips such as the Zilog Z80 helped establish the 8080 ecosystem. This led to the dominant x86 architecture, which is still in widespread use today.
Hobbyist computer magazines such as BYTE, Creative Computing, Kilobaud Microcomputing and Dr Dobb’s Journal supported this new technology.
During this period, there were price
drops of the 8080, 8085 and 8088
chips, which led to mass adoption of
microprocessors. By 1980, hundreds
of thousands of hobbyists worldwide
were programming in assembly language, swapping floppies and “building the future”.
In 1978, Intel released the first electrically erasable programmable read-only memory (EEPROM), the 2816,
which had a capacity of 16 kilobits (2kiB).
It is non-volatile, meaning it retains its
memory when the power is switched
off, but it can be erased and rewritten when desired without needing a
UV light source, as the earlier 1702
EPROM did.
It is considered a major achievement
in the history of computing, allowing
easy in-system reprogramming for
both hobbyists and commercial users.
The IBM PC is introduced
In 1979, the 16-bit 8088 CPU with
29,000 transistors was introduced as
a lower-cost version of the 8086 (see
Fig.19). It was the heart of the original IBM Personal Computer, which was released on the 12th of August 1981 (see Fig.20).

Fig.19: an original Intel 8088 processor. Source: https://w.wiki/GYJw (CC BY-SA 4.0)
Even though it was a 16-bit processor, external communications were via
an 8-bit data bus for cost efficiency,
but it could address 1MiB of memory
with its 20-bit memory address bus.
It was designed in Israel (as many of
Intel’s processors have been).
IBM’s decision to use the 8088 led to
the standardisation of the x86 instruction set, because IBM’s open architecture approach encouraged cloning of
the computer and the development of
compatible expansion cards, which
led to the rapid expansion of the Intel
and x86 ecosystem.
Also, IBM insisted on a second
source for their PC chips, leading
to Intel licensing their designs to
AMD. AMD continues to make Intel-
compatible CPUs to this day.
It had simple pipelining in the form
of a prefetch queue that read instructions from memory before they were
needed. This enabled a performance
increase. An 8087 mathematical
coprocessor was available to complement the 8086 or 8088, which dramatically improved the speed of floating-
point arithmetic operations.
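A toy cycle-count model (entirely ours, with made-up numbers) illustrates why the prefetch queue helps: when the bus unit fetches the next instruction while the current one executes, most of the fetch time disappears from the total.

    #include <stdio.h>

    int main(void)
    {
        const int n = 100;                 // instructions to run
        const int fetch = 4, execute = 6;  // illustrative cycle counts

        int serial = n * (fetch + execute);    // no prefetching
        int longest = execute > fetch ? execute : fetch;
        int overlapped = fetch + n * longest;  // fetch hidden behind execute

        printf("serial: %d cycles, prefetched: %d cycles\n",
               serial, overlapped);
        return 0;
    }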
1980s: dominating the PC era
A low point of the 1980s for Intel
was being forced out of the DRAM
market by Japanese competition.
Intel’s DRAM market share had fallen
from over 80% in the 1970s to 2-3%
by 1985, and they decided to withdraw from the market and fully focus
on microprocessors.
Intel bet everything on the x86
family. The 80386 (1985), in particular, turned the IBM PC standard into
a near-monopoly and made Intel the
indispensable heart of personal computers.
The IBM PC and its clones dominated the PC market and cemented the
legacy of the x86 instruction set that
is used in almost all Intel and many
competing processors (eg, from AMD)
to this day.
By the end of the decade, x86 processors generated almost all the company’s profit, and Intel processors
dominated the PC market. Other processors they developed in this era
were:
iAPX 432
The iAPX 432 (1981-1985) was
Intel’s ambitious but ultimately unsuccessful first attempt at a true 32-bit
microprocessor. It comprised multiple chips (the 43201 and 43202 formed the general data processor, while the 43203 was the interface processor),
was not based on the x86 architecture,
and represented a radical departure
from Intel’s prior designs.
The 432 was designed from the
ground up to support high-level languages like Ada directly in hardware,
with features like object-oriented
memory management, ‘garbage collection’ (a means to manage and recover
unused memory) and capability-based
addressing (a memory and resources
access model in which access is
granted via tokens rather than raw
addresses).
These ideas were decades ahead
of their time. This allowed modern
operating systems to be implemented
with significantly less code. However,
technological limitations resulted in a
performance roughly one-quarter that
of the 80286, despite its advanced
architecture.
Compounding the problem, the 432
was not backward compatible with
any existing Intel processor, alienating developers accustomed to the
8086/8088 ecosystem. These factors, combined with its high cost
and complexity, led to its commercial failure.
Fig.20: the original
IBM PC from
1981, built around
the Intel 8088.
Source: https://w.wiki/GYJx (CC BY-SA 3.0)
80286
The 16-bit 80286 microprocessor
was introduced in 1982 (Fig.21). It
added ‘protected mode’ operation,
enabling it to address up to 16MiB
of memory instead of the 1MiB of
the 8088, with improved multitasking capabilities compared to the ‘real
mode’ limitations of earlier x86 chips.
16-bit data could be fetched in one bus
cycle, while the 8088 required two
bus cycles.
Clock speeds up to 20MHz were supported, and the ‘286 facilitated more
advanced operating systems such as
IBM’s OS/2, Windows 3.0, Concurrent
DOS, Minix and QNX that supported
multitasking and more memory access
compared to standard DOS.
A disadvantage of ‘286 protected
mode was that there was no way to
return to real mode without a CPU
reset, so standard DOS programs could
not be run once the CPU was switched
to protected mode. The ‘286 had simple pipelining, allowing the instruction unit, address unit, bus unit and
execution unit to work concurrently
to improve performance.
An 80287 mathematics coprocessor
was available. The ‘286 had between
120,000 and 134,000 transistors
depending upon the variant, and was
built using a 1500nm (1.5μm) process.
The direct competitor to the ‘286
was Motorola’s 68000 (“68k”), which
was used in the first Apple Macintosh, Commodore Amiga and Atari
ST. It was a 32-bit processor with a
16-bit bus, but the ‘286 gave superior
real-world benchmarks, and the IBM
PC had an open architecture, giving it
more software compatibility and therefore more popularity than the 68000.
80386
The 80386 was released in 1985,
and came in two versions: the lower-
priced SX, with a 32-bit internal architecture but a 16-bit external data bus
and 24-bit memory address bus; and
the DX, which was the ‘full’ version
with a 32-bit external bus (Fig.22). It
could support up to 4GiB of physical
memory and up to 64TiB of virtual
memory using advanced segmentation and paging.
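That 64TiB figure falls out of the protected-mode segment model: two descriptor tables of 8192 selectors each, with every segment up to 4GiB, give 2^46 bytes. A quick check in C (ours, based on the published ‘386 programming model):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t segments = 2 * 8192;         // GDT + LDT selectors (2^14)
        uint64_t seg_size = 1ULL << 32;       // each segment up to 4GiB
        uint64_t virt = segments * seg_size;  // 2^46 bytes
        printf("virtual space: %llu TiB\n",
               (unsigned long long)(virt >> 40));  // prints 64
        return 0;
    }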
It was designed specifically with
multitasking in mind. It had a simple
six-stage instruction pipeline to allow
the execution of different phases of
certain instructions somewhat in parallel over multiple clock cycles, to
keep the processor busy at all times. Mathematical co-processors (80387) were available for both versions of the ‘386.

Fig.21: an 80286 chip. Source: https://w.wiki/GYK6 (CC BY-SA 3.0)
It had 275,000 transistors and was
built with a 1000nm (1μm) process. A
special version produced for IBM, the
386SLC, had a large amount of on-chip
cache, with 855,000 transistors.
i960
Intel’s i960 (also known as the
80960), sold from 1988 to 2007, was a
major shift away from the x86 architecture toward RISC (reduced instruction set computer) principles, which
streamlines the instruction set, theoretically enabling faster execution –
see Fig.13.
It was mainly used as an embedded processor in military, industrial,
and networking systems and achieved
great success in niche markets such
as laser printers, routers and even the
F-22 Raptor stealth fighter.
Intel discontinued the i960 in 2007
as part of a legal settlement with Digital Equipment Corporation (DEC)
over patent disputes. In exchange,
Intel gained rights to DEC’s StrongARM design.
80486
The 80486 (Fig.23) was introduced
in 1989. It had a built-in floating-point
unit and so did not need an external
coprocessor. It also had an inbuilt 8kiB
cache, later increased to 16kiB in the
DX4 variant, which gave it much better
performance compared to the ‘386. It also had a five-stage instruction pipeline, similar in concept to the ‘386’s but more advanced.
Even though the 8088, 8086, ‘286
and ‘386 had instruction pipelining,
the ‘486 was the first in which pipelining was tightly integrated. The
486SL variant was optimised for lower
power consumption in laptops. It had
1.2-1.6 million transistors depending on the variant, and was not discontinued until 2007.
The underside of
the AMD version of
the 80286, which
had a higher clock
frequency. Source:
https://w.wiki/GYaQ
Fig.22: an 80386DX
chip. Source:
https://w.wiki/GYK7
(CC BY-SA 3.0)
The AM386 is a clone
of the 80386. Source:
https://w.wiki/GYaY
Next month
That’s all we have space for in this
issue. We’ll pick up the rest of the
Intel story in the March issue, at the
start of the 1990s. That second article
will bring us up to date, and then in
the final instalment, we’ll look at the
current state of microprocessor technology and how Intel plans to remain competitive in the future. SC
Fig.23: an exposed 80486 chip die. Source: https://w.wiki/GYKB (CC BY-SA 3.0)