Image source: https://pixabay.com/photos/intel-8008-cpu-old-processor-3259173/
The History of Intel – Part 3

by Dr David Maddison, VK3DSM

Silicon Chip – Australia’s electronics magazine (siliconchip.com.au), April 2026
Over the last two issues, we have traced Intel’s history from its beginnings in
1968 until recently. That included a lot of information on the primary product
that Intel is known for: computer CPUs. Now that we’ve caught up to the
present, we’ll investigate their current and future technologies.
We finished part two last month
with information on Intel’s
hybrid CPUs with high-performance
P-cores and high-efficiency E-cores.
They were introduced in their mainstream products starting with the 12th
generation Core CPUs launched in
2021, but those cores were still part
of a single, monolithic die.
That’s in stark contrast to their direct competitor, AMD, which launched its Zen 2-based Ryzen 3000 series processors in 2019.
They used a different approach, combining multiple silicon die “tiles” (or
“chiplets”) to form a complete CPU.
Intel started doing something similar
in 2023, although with some important differences.
New interconnection techniques
Microprocessors and AI chips
are now so complex and contain so
many components that the silicon
area required exceeds that which can
be produced by a single lithography
reticle field. That is, the area that a
design can be projected onto, currently
around 858mm2 or a rectangle of about
26 × 33mm. This means that multiple
chips are required to fulfil the latest
designs.
Also, even if it’s possible to make
a 26 × 33mm chip, yield (ie, the percentage of chips that are usable) drops
with increasing die size, so it’s more
economical to make multiple smaller
dies than one large one.
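The economics of that trade-off can be sketched with the classic Poisson yield model. This is an illustrative simplification (real fabs use more elaborate models), and the defect density below is an assumed figure, not an Intel number:

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Simple Poisson yield model: Y = exp(-D * A)."""
    area_cm2 = die_area_mm2 / 100.0  # 100mm² per cm²
    return math.exp(-defects_per_cm2 * area_cm2)

# Assuming 0.1 defects/cm² (illustrative only):
big = poisson_yield(26 * 33, 0.1)        # ~0.42 for a reticle-limited 858mm² die
small = poisson_yield(26 * 33 / 4, 0.1)  # ~0.81 for a quarter-size chiplet
```

With those assumptions, fewer than half of the monolithic dies survive, while about four in five of the quarter-size chiplets do – the economic case for splitting one big design into smaller dies.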
These smaller chips are called
(generically) chiplets, and processor
designs may comprise a variety of
different chiplets such as CPU, GPU,
AI accelerators, memory, I/O etc, as
described last month for Meteor Lake
(siliconchip.au/Article/19823).
Intel’s preferred name for the generic
chiplet is “tile”; this describes their
specific implementation of the chiplet
approach, but in their literature and in
industry, both terms are used.
The chiplet approach was popularised by AMD with its Zen 2-based Ryzen and EPYC processors, the first high-volume products to use this approach. AMD chiplets are mounted side-by-side (with some exceptions, eg, 3D
V-Cache), while Intel’s tile approach
using Foveros technology can vertically stack tiles, allowing for higher
overall chip density – see Table 6.
Each chiplet or tile is specialised
and optimised, and can be “mixed
and matched”, including (critically)
using different process nodes in the
one package. The chiplets are connected by a variety of 2D (side-by-side)
and 3D (stacked) methods as part of a
modular design.
Meteor Lake

Meteor Lake, released in late 2023, was Intel’s first consumer CPU to adopt a tile (chiplet) architecture (Figs.41 & 42). Instead of a single monolithic die, the processor is built from multiple specialised tiles, each manufactured on the most appropriate process node for its function. The design includes four active tiles plus a passive base tile:

● Compute tile: contains the high-performance Redwood Cove P-cores, the main cluster of Crestmont E-cores, their associated L2/L3 caches, and the core interconnect fabric. This tile is manufactured on Intel 4, Intel’s first EUV-enabled (extreme ultraviolet) process, chosen because the CPU cores benefit most from cutting-edge lithography. As a result, it isn’t the largest tile on the chip.

● Graphics tile: includes the Intel Arc integrated GPU, based on the Xe-LPG architecture. It is manufactured by TSMC (N5/N6 process).

● SoC (System-on-Chip) tile: the largest tile, made using the TSMC N6 process. It contains a wide range of system-level functions: the media engine, display engine, power management, memory fabric, connectivity controllers and the NPU (Neural Processing Unit) for AI workloads.

Importantly, it also contains a separate cluster of low-power Crestmont “LP E-cores”. These ultra-efficient cores form a ‘low-power island’ capable of handling light background and OS tasks while the compute tile is powered down, significantly improving idle and low-load power consumption. This is why the SoC tile occupies such a large proportion of the total area.

● I/O tile: provides high-speed I/O such as PCIe, USB4/Thunderbolt, display PHYs (physical layers) and memory PHYs. It is built on a mature TSMC process (N6), well suited to mixed-signal and I/O circuitry.

Underneath all these is the base tile, which mechanically supports the active tiles and provides the high-density interconnect between them. Intel uses technologies described overleaf, such as Foveros 3D stacking, EMIB (Embedded Multi-Die Interconnect Bridge) and TSVs (Through-Silicon Vias) to bond the tiles together.

Feature size measurements
1 micron or 1µm: 0.001mm, 0.000001m (1 × 10⁻⁶m)
1 nanometre or 1nm: 0.000001mm, 0.000000001m (1 × 10⁻⁹m)
1 ångström or 1Å: 0.1nm, 0.0000001mm, 0.0000000001m (1 × 10⁻¹⁰m)

Fig.41: a die shot of the compute tile of Meteor Lake, one of the four active tiles (see Fig.42). This version contains two P-cores and eight E-cores. Source: https://x.com/Locuza_/status/1524465856167792640/photo/1

Fig.42: the function of the four tiles in a Meteor Lake processor’s base tile. Source: Intel – siliconchip.au/link/ac9v

Fig.43: a better look at the die, and tile structure, of Meteor Lake. Compare it to Fig.42. Source: https://wccftech.com/intel-core-ultra-meteor-lake-cpu-die-shots-closer-look-at-various-cpu-gpu-io-chiplets/
Fig.43 provides further information on the Meteor Lake die, illustrating
the arrangement of tiles and interconnect structures. Meteor Lake’s modular design allows Intel to update or
replace tiles independently, mix process nodes, and reduce wafer costs
while improving yields.
It also represents a major architectural shift: by moving media, display,
AI acceleration and low-power processing to the SoC tile, the compute tile
can power down completely, delivering better efficiency than previous
Intel laptop processors.
Foveros Direct 3D
Foveros Direct 3D is an Intel chiplet (tile) connection technology for the
direct attachment of a tile to an active
base die. The second generation of this
technology uses copper vias in the tiles
with a pitch of 3 microns (3μm).
Attachment can be by thermocompression bonding, using heat and
pressure to join individual tiles to
the underlying die. Foveros replaces
the earlier solder-based microbumps
and provides a much higher (10-100×) interconnect density, plus better power and thermal performance
– see Fig.44.
EMIB
Intel’s Embedded Multi-die Interconnect Bridge is a silicon ‘bridge’ embedded in the package substrate that connects adjacent tiles – see Fig.46.
Intel Foundry FCBGA 2D+
Intel’s Flip Chip Ball Grid Array
2D+ is a type of processor packaging
used in laptops, which replaces traditional pin grid arrays. A grid of solder
balls on the bottom mates with corresponding lands on the motherboard;
the chip is heated to solder it in place
– see Fig.45.
The processors are not removable,
replaceable or upgradeable except by
replacement of the motherboard. Processors designed for desktop platforms use a traditional LGA (land grid array) package, whose flat contact lands mate with pins in the socket, and are removable.
PowerVia
In Intel’s earlier technology, all
external connections, for both power
delivery and signal I/O, were made to
the top layer of the chip. There was no
connection beneath the chip, which
provided only structural support and
heat transfer.
From the 18A process node, Intel
decoupled the connections for power
and signals, calling the method PowerVia. Thus, power is provided from
beneath the die, and signal connections are made on the top side – see
Fig.47.
This means that the power and signal connections and routing can be independently optimised, giving 90% more efficient area utilisation, lower power consumption and lower voltage drop (a 30% reduction), as well as an overall 6% performance improvement.

Fig.47: the old die connection technology (left, with power and signal both routed from the top) compared to the new PowerVia technology (right, with signals on top and power delivered from beneath). Source: www.intel.com/content/dam/www/central-libraries/us/en/documents/2024-02/intel-tech-clearwater-wp.pdf

Figs.44 & 45: the tiles (labelled “die”) are connected to an active base die using Foveros Direct 3D (left). Intel’s FCBGA 2D+ method for mounting processors on motherboards in laptops (right). Source: www.intel.com/content/dam/www/central-libraries/us/en/documents/2024-02/intel-tech-clearwater-wp.pdf

Table 6 – Intel vs AMD chiplet technology

Packaging technology
  Intel tile: Foveros 3D stacking of tiles and EMIB for connection between tiles. An active or passive base die or “interposer” allows vertical stacking. The interposer contains TSVs (through-silicon vias).
  AMD chiplet: chiplets mounted horizontally on an organic substrate.

Interconnects
  Intel tile: high-bandwidth, low-latency links between tiles; no need for a full bus as they collectively act like a single chip.
  AMD chiplet: Infinity Fabric serial bus for inter-chiplet communications. Simpler, but can increase latency.

Scalability
  Intel tile: optimised for low power consumption (eg, laptops). Tiles can be swapped for different applications.
  AMD chiplet: optimised for desktops and servers with large numbers of cores, eg, 128+ in EPYC.

Complexity and cost
  Intel tile: complex and expensive to assemble. Variants require new base dies.
  AMD chiplet: simpler and cheaper.

Power efficiency
  Intel tile: almost no power overhead.
  AMD chiplet: a small amount of extra power is consumed by interconnects.

RibbonFET

With Intel’s present 18A process node, adverse quantum mechanical and other effects are a significant concern. Hence, Intel developed the gate-all-around (GAA) transistor architecture known as RibbonFET (Fig.49) to mitigate effects like electron tunnelling and leakage currents, and to provide improved electrostatic control compared to the earlier FinFET (Fig.48).

While Samsung and TSMC also have GAA technology, Intel’s nanosheets are engineered to be extremely uniform and scalable for its PowerVia backside power delivery. Intel intends RibbonFET + PowerVia to be a tightly integrated technology pair.

Moore’s Law is over

From the 1960s until roughly 2016, Intel largely followed, and was driven by, Moore’s Law, doubling transistor density every couple of years. But physical and practical limits have now been reached: quantum effects, heat dissipation and lithography challenges mean that simple geometric scaling is no longer providing the historical gains. Clock speeds have also plateaued.

To improve performance, the industry has shifted focus. Instead of shrinking transistors indefinitely, manufacturers now rely on advanced packaging technologies: stacking multiple chips vertically (3D packaging, such as that seen in AMD’s X3D series of CPUs), using chiplets or tiles placed side-by-side, and high-bandwidth interconnects such as EMIB to combine multiple dies in a single package.

New transistor architectures, like Intel’s RibbonFET gate-all-around design, increase performance and efficiency even when further shrinking is impractical.
Software and algorithms are also
evolving. Specialised architectures
– particularly GPUs and AI accelerators, which contain many parallel processing units – enable significant performance gains despite the
slowdown in raw transistor density
improvements.
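The compounding implied by Moore’s Law is easy to quantify. A minimal sketch, assuming an idealised doubling every two years (real cadences only ever approximated this):

```python
def moore_density(years: float, doubling_period: float = 2.0) -> float:
    """Relative transistor density after `years` of idealised Moore's Law scaling."""
    return 2.0 ** (years / doubling_period)

# 45 years of two-year doublings (roughly the 4004 in 1971 through to 2016):
growth = moore_density(45)  # about 5.9 million times the starting density
```

Even a small slip in the doubling period compounds dramatically over decades, which is why the end of easy scaling has forced these packaging and architecture shifts.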
Artificial Intelligence (AI)
Intel and its chips have a long history of involvement in AI. In the 1980s,
Intel collaborated in the development
of the Connection Machine, a massive
supercomputer built for AI research,
which influenced early neural computing. Intel was said to have provided
i860 RISC processors and custom
chips for the project.
In 1997, they launched the MMX
instruction set, which accelerated
multimedia and early machine learning tasks like image processing.
In 2013, Intel acquired Indisys,
a Spanish company specialising in
Fig.46: an
illustration of
various Intel
interconnect
technologies
for tiles.
Figs.48 & 49: the older
FinFET technology (left) and the
new RibbonFET technology (right).
Source: same as Fig.47
siliconchip.com.au
Australia's electronics magazine
April 2026 23
natural language processing and AI.
In the same year, they acquired Israeli
company Omek Interactive, which had
technology that enabled users to interact with devices via hand and body
gestures with 3D cameras.
RealSense makes 3D cameras, vision
processors and AI vision systems. It
was ‘incubated’ by Intel internally
from 2014 and spun off in 2025 as an
independent company.
In 2016, the AVX-512 x86 instruction set extension was released. It can
accelerate AI workloads by processing
in parallel using wide 512-bit registers,
speeding up machine learning, image
and speech processing and large language models (LLMs).
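The gain comes from data parallelism: a 512-bit register holds sixteen 32-bit floats, so one instruction operates on sixteen values at once. A toy Python model of that lane arithmetic (not real SIMD and not Intel code, purely illustrative):

```python
def simd_add(a: list, b: list, register_bits: int = 512, lane_bits: int = 32) -> list:
    """Model one vector add: every lane is processed by a single 'instruction'."""
    lanes = register_bits // lane_bits  # 16 single-precision lanes in a 512-bit register
    assert len(a) == lanes and len(b) == lanes, "operands must fill the register"
    return [x + y for x, y in zip(a, b)]

# One modelled instruction adds 16 pairs of values at once:
result = simd_add([float(i) for i in range(16)], [10.0] * 16)
```

In real code, compilers or hand-written intrinsics pack arrays into these wide registers; the 16× lane count (8× for doubles) is where the speed-up comes from.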
In 2016, Intel acquired US company Nervana Systems for its expertise in deep learning software, which was later integrated into Intel processors as the Nervana Neural Processor (NNP). However, that was discontinued, to be replaced with technology from Habana Labs, an Israeli company Intel acquired in 2019 for its Gaudi AI accelerator technology.
In 2016, Intel purchased Movidius
for its vision processing chips. Also in
2016, Intel established the Nervana AI
Academy to train AI developers.
In 2017, Intel purchased Mobileye,
which specialised in autonomous
driving and related technologies.
In 2019, Intel released the oneAPI
suite of tools, libraries and a programming model for developing and
optimising AI applications across all
Intel hardware such as CPUs, GPUs
and FPGAs.
During 2019-2024, Intel produced
the Ponte Vecchio AI accelerator with
over 100 billion transistors and 47
tiles (chiplets) using five different
process nodes. It is to be replaced by
Gaudi2/3.
In 2022, Intel released the Gaudi2 AI
accelerator. The Gaudi3 followed in 2024. It is claimed to be capable of
50% faster training than the NVIDIA
H100 at half the cost. Also in 2023, the
Meteor Lake series of processors was
released with integrated NPUs (Neural Processing Units) for on-device AI.
In mid-2026, Intel plans to release
the Jaguar Shores AI accelerator
designed for data centres. It will use
the 18A process node.
Graphics Processing Units
Intel has included integrated graphics in its CPUs since the mid-2000s and, considering this, by unit volume it has long been the world’s largest GPU (graphics processing unit) vendor.

Fig.50: an Intel Arc A770 graphics processing unit. Source: https://w.wiki/GdyA
However, these integrated solutions were designed mainly for desktop display output and light graphics use. Intel left the performance
GPU market to AMD and NVIDIA
for decades.
The rise of AI changed that. GPUs,
originally designed for massively
parallel graphics workloads, proved
far better suited to machine-learning
tasks than traditional CPUs. Recognising that GPUs would become strategically important across consumer,
workstation and data centre markets,
Intel entered the discrete GPU space
in 2022 with the launch of the Intel
Arc family.
Arc is based on the Xe architecture,
which scales from integrated laptop
GPUs through to high-performance
compute accelerators. The first generation (Arc A-series, “Alchemist”)
included six desktop cards (from the
A310 4GB to the A770 16GB) and
seven mobile variants (A350M to
A770M – see Fig.50).
A second generation of Arc products (B-series, “Battlemage”) began
arriving in late 2024/early 2025 with
models such as the B570 10GB, B580 12GB and Arc Pro B60 24GB – see Figs.51 & 52.
Although Intel lacks a competitor
for ultra-high-end GPUs like AMD’s
Radeon RX 7900 XTX or NVIDIA’s
RTX 5090, Arc performs well in the
low to midrange when compared at
similar price points. Arc also offers
industry-leading AV1 video encoding
and strong efficiency, making it attractive for media, gaming and general-
purpose GPU workloads.
Driver maturity was initially a
weakness, but Intel has significantly
improved support, especially for older
DirectX 9, 10 & 11 games.
Intel’s long-term commitment has
occasionally been questioned, but
multiple factors suggest Arc is
here to stay. Intel has already
announced future generations (“Celestial” and “Druid”), and its Xe graphics
architecture is now embedded in its
laptop CPUs, data-centre accelerators
and AI platforms.
With AMD and NVIDIA struggling
to meet global AI-related demand, a
third major competitor is beneficial
for the industry and consumers. It
therefore seems likely that Intel will
continue to refine Arc, with “C-series”
products expected to arrive sometime
in 2026, quite possibly in the first
half of the year if development stays
on track.
More details on Intel’s CEOs
Last month we provided a list of
Intel CEOs but with only a very brief
description of each person. Here is
some more detailed information on
some of the key figures who became
CEOs at Intel and their major contributions to the company.
Robert Noyce, 1968-1975
Visionary founder, and inventor of
the first monolithic IC.
Gordon Moore, 1975-1987
He defined Moore’s Law, which
gave Intel an objective to strive for:
increased chip density and performance each year.
Fig.53: Andrew Grove, Robert Noyce & Gordon Moore in 1978; from part one. Source: www.flickr.com/photos/8267616249
Figs.51 & 52: a render showing the parts breakdown for an Intel Arc B580 card and the die. Source:
https://newsroom.intel.com/client-computing/intel-launches-arc-b-series-graphics-cards
Andrew Grove, 1987-1998
A strict management disciplinarian,
driven by results, and the author of the
book “Only the Paranoid Survive”.
Craig Barrett, 1998-2005
He was a materials scientist and
focused on high-volume, reliable fabrication of microprocessors and the
“Copy Exactly” system, which standardised equipment, processes and
even minor details like the colours
each fabrication plant was to be
painted.
This approach was responsible for
the explosive growth of Intel during
the 1980s and 1990s. As CEO, he
brought the company through the dotcom boom and bust.
Paul Otellini, 2005-2013
He was the first non-engineer CEO
at Intel, bringing a sales and marketing
mindset to a company built by technical visionaries. In 1993, he oversaw
the rollout of the Pentium processor
and the “Intel Inside” campaign. As
CEO, he generated more revenue in
2012 (US$53 billion) than Intel had
seen in its entire prior history.
On the downside, he admitted to
missing the shift to mobile computing
and turned down the deal to supply an ARM-based processor for the original iPhone.
Brian Krzanich, 2013-2018
Brian Krzanich came from the manufacturing side of Intel, with experience in semiconductor process engineering and supply-chain operations.
As CEO, he pushed Intel to diversify beyond the declining PC market
toward what he called data-centric
computing.
This strategy included major acquisitions such as Nervana Systems (AI
accelerators) and Mobileye (autonomous driving technology).
He also promoted internal cultural
and workplace reforms, some of which
were praised and some criticised, particularly around restructuring and
workforce reductions.
By 2018, Krzanich’s strategy had
succeeded in changing Intel’s revenue mix: approximately half of Intel’s
revenue now came from data-centric
businesses rather than PCs, which was
a significant shift.
However, his tenure is strongly
associated with the 10nm process
delay, arguably the most damaging
manufacturing slip in Intel’s history.
Under his leadership, Intel attempted
to make too many major process innovations simultaneously. This opened
the door for TSMC and Samsung to
establish leadership in advanced process nodes and allowed AMD to regain
CPU market share.
Krzanich resigned in 2018 due to a
personal misconduct policy violation
unrelated to business performance.
Robert Holmes Swan, 2019-2021
He was a finance executive who came to Intel from outside the company, and was Intel’s shortest-tenured CEO. He contributed financial
stewardship, “cultural overhaul” and
“organisational unity” to the company.
Like Krzanich, he was also criticised
for delays related to the 10nm process node.
Patrick Gelsinger, 2021-2024
He was Intel’s chief technology officer (CTO) from 2001 to 2009. He managed the development of USB and WiFi integration, was the architect of the 80486 processor, and oversaw the development of the Pentium 4, Core and Xeon processor lines and 64-bit computing.
Figs.54-59 (left-to-right): Craig Barrett, Paul Otellini, Brian Krzanich, Robert Swan, Patrick Gelsinger & Lip-Bu Tan.
Source: Craig Barrett’s photo – https://w.wiki/GkEz; all the other photos are from Intel Corporation
Table 7 – major Intel fabs

Mountain View, California: active 1968-1983
  Wafer size and process node: 2-inch (50.8mm); 10µm from 1972.
  Notes: mainly for research and to produce the 4004.

Santa Clara, California: active 1984-1990s
  Wafer size and process node: 3-inch (76.2mm); 8µm from 1974, 6µm from 1976.
  Notes: Fabs 1-5. Produced the 8080.

Chandler, Arizona: active 1980-present
  Wafer size and process node: 4-inch (101.6mm); 3µm from 1982, 0.13µm from 2001. Produced 300mm wafers from 2000. Switched to 65nm in 2006, 45nm in 2008, 22nm in 2012, Intel 3 and 20A (cancelled) in 2024.
  Notes: Fabs 12, 22, 32, 42, 52 & 62. Produced the 80386, Core 2, 1st Gen Core i7 and 4th Gen Core. Fab 62 will start Intel 18A production in 2026.

Hillsboro, Oregon: active 1996-present
  Wafer size and process node: 200mm, 0.25µm in 1998; 300mm, 130nm, in 2002. Supports Intel 4 and 3 as of 2023.
  Notes: Fabs D1A-D & D1X. Mainly for research and development (R&D).

Kiryat Gat, Israel: active 1996-present
  Wafer size and process node: 300mm; 45nm in 2008, 22nm in 2011.
  Notes: Fab 28, Intel 7 in 2023.

Leixlip, Ireland: active 2002-present
  Wafer size and process node: 300mm; 130nm in 2004, Intel 4 in 2023, Intel 18A in 2026.
  Notes: Fabs 10, 14, 24 & 34.

Licking County, Ohio: active 2030-2032 (expected)
  Wafer size and process node: Intel 14A.
  Notes: Fab 27, expected production dates 2030-2032.
He became CEO with a vision to
reclaim Intel’s manufacturing and
technology leadership and “bet”
US$100 billion plus on “IDM 2.0”
(Integrated Device Manufacturer)
to make Intel the world’s leading
foundry, and restore American chip
making dominance. He wanted Intel to
be a foundry that designs, makes and
sells chips both for itself and others.
Despite his bold strategy, he was “ousted” by the board, which lost confidence in him due to the failure to shrink process nodes fast enough, poor financial performance, and a poor response to the market, such as missing the AI boom that demanded GPUs like NVIDIA’s, which Intel had failed to adequately develop. During this time, Intel lost market share to AMD.
Lip-Bu Tan, 2025-present
Ex-CEO of Cadence, a company that
provides software to design integrated
circuits (ICs) and PCBs (one of the ‘big
three’ EDA vendors that dominate the
global semiconductor design tooling
industry). He has a BS in Physics, a Master’s in Nuclear Engineering and a Master of Business Administration.
He is attempting a turnaround of
Intel by slashing bureaucracy, doing
foundry deals and becoming more
customer-focused.
Intel’s development models
Until 2006, Intel had no formally
named development model, but improvements were a continuing
cycle of:
1. develop a new microarchitecture;
2. release it;
3. shrink the process size once or
more with incremental improvements
(eg, the P6 microarchitecture of the
Pentium Pro was shrunk three times);
4. repeat at irregular intervals as
technological improvements allowed.
After problems with the NetBurst
microarchitecture of the Pentium
4, Intel management decided they
wanted a more formal and disciplined
development model.
The process-architecture-optimisation (PAO) model was introduced in
2016 and remained in use until 2021
to address the limitations of the ticktock model (see our panel on p24 last
month). It operated on a three-year
cycle comprising three stages:
1. Process: a die shrink to the next
manufacturing node to give a higher
density of transistors, but typically
using an existing microarchitecture.
2. Architecture: a major redesign of
the microarchitecture for improved
performance.
3. Optimisation: iterative improvements to the architecture.
This model allowed Intel to introduce new processor generations every
12-18 months while spreading the risk
and cost of new process nodes over a
range of products.
Intel phased out the PAO model
around 2021-2023 as it was becoming
increasingly difficult to develop new
process nodes on a three-year schedule. That’s similar to how the tick-tock
model was abandoned when further
feature shrinkage was no longer economically feasible.
The “process leadership” roadmap
was adopted around 2023 to emphasise node advancements such as Intel 3,
20A, 18A with less emphasis on strict
two- and three-year cycles, but with
a focus on “five nodes in four years”.
Note that the 20A process was cancelled in 2024, and they skipped from Intel 3 straight to 18A. TSMC was instead contracted to make parts planned for the 20A node, such as Arrow Lake.
Intel’s fabrication facilities
Some significant Intel past, present
and future fabrication facilities (fabs)
include those shown in Table 7.
Other Intel developments and inventions
Apart from CPUs, GPUs, the x86
instruction set, memory chips and
related chipsets, Intel has also been
involved in inventing, innovating or
contributing in the following areas:
3D XPoint
This was a form of non-volatile
storage media technology developed
jointly between Intel and Micron. It
was introduced to the market in 2017
and discontinued in 2022.
It was designed to fit in the speed gap between slower traditional non-volatile NAND flash and faster volatile DRAM.
It was marketed under the brand
name Optane (see Figs.60 & 61), Intel’s
commercial implementation of 3D
XPoint memory. Optane could act as
extremely fast cache storage for hard
drives, improving performance, but
its more significant role was in early
high-performance SSDs and in the
persistent-memory DIMMs designed
for data centres.
Optane was not made obsolete
by normal SSDs; rather, Intel discontinued the product line in 2022-2023 after its manufacturing partner
Micron exited 3D XPoint production
and demand failed to meet expectations.
Technically, Optane was exceptional: it offered dramatically lower
latency than NAND SSDs and
extremely high endurance. Because
of this, some users still prefer Optane
drives for specialised workloads.
XPoint was not based on traditional
charge storage in cells, but on a change
in some other physical property, generally thought to be a material phase
change, although Intel never confirmed this.
The structure of the memory chip
had multiple layers in a 3D stack. The
first generation of XPoint had two layers, and the second generation four
layers, allowing up to 256GB per die
– see Fig.62.
Accelerated Graphics Port (AGP)
The Accelerated Graphics Port was
introduced in 1996. It was a dedicated
graphics port intended as an improvement in speed over the PCI slots used
for other accessory cards.
It provided faster data transfer rates
with a dedicated connection to the
CPU, and dedicated memory bandwidth, which was necessary because
of the development of 3D graphics
and gaming.
AGP cards had their own memory
and could also access system RAM.
The first chipset to support AGP was Intel’s legendary 440LX from 1997.
In 1998, Intel also introduced the
i740 dedicated AGP graphics chip to
help promote AGP as a standard (see
Fig.63). AGP was superseded by the
PCI Express (PCIe) standard introduced in 2003.
Fig.60: Optane storage in an SSD (M.2) format. Source: https://hothardware.com/photo-gallery/article/2720?image=big_intel-optane800p-pair.jpg
Fig.61: Optane is non-volatile but fast enough to use like system RAM! Source: www.forbes.com/sites/tomcoughlin/2022/08/08/gifts-from-intels-optane-memory (from Intel)
Fig.62: the 3D XPoint technology used in Optane memory. The memory cells are light grey and green, and the address lines (bit lines and word lines) are a darker grey. A voltage applied via each cell’s selector causes the memory cell to be read or written. Source: www.bbc.com/news/technology-33675734
Fig.63: the Intel i740 was their first AGP-slot graphics card. It was one of Intel’s
earliest ventures into the dedicated GPU market. It wasn’t very successful,
compared to the NVIDIA GeForce 256 or 3dfx Banshee, which used a PCI slot.
One of the people at Intel who
worked on AGP was Ajay Bhatt – this
won’t be the last you hear of him.
ATX power supplies
These widely used PC power supplies and their compatible motherboards conform to a standard developed by Intel and released in 1995.
It replaced the AT form factor, which
originated with the IBM PC in 1981.
Flash memory
Intel introduced the first commercial NOR flash chip in 1988, marking a
major advance in non-volatile memory
technology. Intel later co-developed
3D NAND flash, with the first generation announced in 2015. By 2020,
Intel’s 3D NAND products had reached
144 layers and triple-level cell (TLC)
technology (three bits per cell).
In late 2020, Intel sold its entire
NAND and 3D NAND flash business,
including its Dalian fab, to SK Hynix
(now operating as Solidigm).
Ethernet
Ethernet was originally developed
at Xerox PARC (Palo Alto Research
Center) in the early 1970s by Robert
Metcalfe and colleagues.
In 1980, Xerox partnered with DEC
and Intel to create the DIX Ethernet
specification (also called Ethernet
v1.0 and later 2.0). This work formed
the basis for the IEEE 802.3 standard,
published in 1983.
Integrated graphics (iGPU) on motherboards
This was introduced by Intel in
1982, in the form of the 82720 Graphics Display Controller. In 2010, Intel
integrated a graphics chip into the
CPU itself.
Nowadays, most Intel desktop and
laptop CPUs include an integrated
GPU, the exceptions being those with
an “F” or “KF” suffix. In those cases,
the onboard graphics circuitry is disabled. As always, for niche or OEM-only variants, it pays to check the specification sheet rather than rely solely
on naming.
Even if the CPU has onboard graphics, dedicated graphics cards can still
be added. In fact, it is usually possible
to use both simultaneously. An external card will generally have better 3D
performance.
Movidius Vision Processing Units
VPUs are specialised chips designed
specifically for accelerating computer
vision and related AI tasks. They allow
the processing to be offloaded from
the CPU and GPU, and can be used in
applications such as drones, robots,
smart security systems (to recognise
targets), real-time AI-powered video
processing, machine vision, virtual
reality, augmented reality headsets
and smart cameras.
Such chips include dedicated hardware for deep learning, such as a Neural Compute Engine in the Myriad X
chip. They are designed for energy
efficiency. Intel acquired Movidius
in 2016.
The DJI Phantom 4 (see Fig.64),
released in 2016, was the world’s first
consumer drone with autonomous
flight capabilities thanks to a Movidius Myriad 2 VPU chip with functions
such as forward-facing obstacle avoidance and subject tracking. It can also
hover at a fixed location using object
tracking alone, without the need for
satellite navigation signals.
PCI
Peripheral Component Interconnect
was introduced by Intel in 1992 as a
modern, processor-agnostic expansion bus to replace ISA and EISA. It
quickly became the industry standard
throughout the 1990s and early 2000s
– see Fig.65. Ajay Bhatt – whom readers may recognise from several other
entries – played a key role in its design.
PCI Express
The PCI Express expansion standard
that’s widely used today was invented
by a consortium of companies including Dell, IBM and HP, although Intel
was the dominant player. It was introduced in 2003. Ajay Bhatt made a
major contribution to the development
of the specification.
Platform power management
PPM was co-invented by Ajay Bhatt
at Intel. It is a series of technologies
that dynamically adjust the CPU clock
speed and voltage to reduce power
consumption depending on processor load.
PresentMon
PresentMon is software used to track
performance primarily for games. It’s
mostly maintained by Intel (https://
game.intel.com/us/intel-presentmon),
and is useful for benchmarking.
Fig.64: the DJI Phantom 4 drone uses the Movidius Myriad 2 vision processing
unit (VPU). Movidius was acquired by Intel in 2016, although they haven’t
released any new products in the last few years. Source: www.pexels.com/
photo/a-drone-camera-across-the-blue-sky-4355183/
Thunderbolt
Thunderbolt is a high-speed interface developed by Intel in collaboration with Apple. It allows data, video
and power to be transferred through a
single cable, supporting devices such
as monitors, external storage, docks
and high-performance peripherals. A
Thunderbolt-supported USB-C port is
typically marked with a lightning-bolt
icon (see Fig.66).
Thunderbolt 5 is the latest standard. It has a base bi-directional bandwidth of 80Gbps, with a “Bandwidth Boost” mode that raises the bandwidth in one direction to 120Gbps when driving high-resolution displays.
Although Thunderbolt was once
most strongly associated with Apple
systems, it is now widely available
across Windows laptops and desktops. Modern versions of Thunderbolt
use the USB-C connector, meaning the
same physical port may support USB,
Thunderbolt, DisplayPort and power
delivery.
In recent years, many Thunderbolt
capabilities have been incorporated
into the USB standard, particularly
with USB4 and USB4 v2; these are based on Intel’s Thunderbolt 3 specification, which Intel contributed to the USB-IF (USB Implementers Forum).
USB
The USB interface was invented by
Intel’s Ajay Bhatt (him again!). He and
his team at Intel developed the first
USB standard, which was released
in 1996.
Wi-Fi
Intel has been a major force behind
Wi-Fi adoption since the early 2000s.
Their Centrino platform (2003) effectively made Wi-Fi standard in laptops,
pushing the entire PC industry toward
wireless networking.
Intel played significant roles in
the IEEE committees for 802.11n,
802.11ac, Wi-Fi 6 (802.11ax), and
Wi-Fi 7 (802.11be), contributing reference designs, test silicon, and architectural proposals. Today, Intel is
one of the largest suppliers of Wi-Fi
chipsets for PCs, and its engineering
teams continue to help shape future
Wi-Fi standards.
Fig.65: Intel developed both PCI (lower) and the more modern and faster PCI
Express (upper) expansion slots. PCI Express slots come in different lengths,
from a single lane to 16 lanes, and in different generations, from Gen1 to Gen6.
Source: https://w.wiki/GdyB (CC BY-SA 2.0)
Fig.66: while Thunderbolt 1 & 2 used
unique connectors, Thunderbolt 3, 4 &
5 use USB-C connectors. That means
the same ports can be compatible
with USB and Thunderbolt. Source:
https://w.wiki/GdyC (CC BY-SA 4.0)
Figs.67 & 68: Intel’s quantum computer chip; bare die (above) and in packaging (below). Source: https://newsroom.intel.com/new-technologies/quantum-computing-chip-to-advance-research
Quantum computing
Intel is developing Tunnel Falls
(see Figs.67 & 68), an experimental
12-qubit quantum computer chip,
which is being made available to
researchers at universities and the US
military. A qubit is a basic element of
quantum information. Whereas a transistor can have two bit states, a 0 or 1,
a qubit can be a 0, a 1 or both simultaneously. It has an infinite number of possible superposition states, but when measured, the result will still be a 0 or 1.
This ability can enable it to solve
many types of problems that are
insoluble or very slow to solve on
conventional computers, including decryption. Intel’s silicon spin qubits
are up to 1 million times smaller than
other qubit types, and the Tunnel Falls
qubit chip is highly scalable.
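The measurement behaviour described above can be illustrated with a short simulation. This is only a classical sketch: real qubits have complex amplitudes and are manipulated with quantum gates, and the equal-superposition state used here is just an assumption for illustration.

```python
import math
import random

# A qubit's state can be written as alpha|0> + beta|1>, where alpha and
# beta are amplitudes satisfying |alpha|^2 + |beta|^2 = 1. Measuring the
# qubit collapses it to 0 with probability |alpha|^2, or 1 with |beta|^2.

def measure(alpha: float, beta: float) -> int:
    """Simulate one measurement of the state alpha|0> + beta|1>."""
    p0 = abs(alpha) ** 2
    return 0 if random.random() < p0 else 1

# An equal superposition ("both at once"): each individual measurement
# still yields a plain 0 or 1, but over many trials the two outcomes
# occur about equally often.
alpha = beta = 1 / math.sqrt(2)
results = [measure(alpha, beta) for _ in range(10_000)]
print(sum(results) / len(results))  # roughly 0.5
```

Note that a classical simulation like this only reproduces the measurement statistics; it cannot reproduce the interference between amplitudes that gives quantum algorithms their speed-ups.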
Intel aims to sell both the computing hardware and software as a complete solution.
Past Intel failures
Intel, like any company, has had its
fair share of failures:
● The NetBurst architecture used
for the Pentium 4 was supposed to be
the way of the future, but it was a dead
end, never reaching the clock speeds
they were aiming for.
● Delays in the 10nm process node,
at least partly due to the failure to
adopt new technology such as EUV
lithography.
● Defects in large numbers of 13th- & 14th-generation processors, leading
to an extended warranty and a large
number of warranty replacements.
● A failure to develop discrete
GPUs at the appropriate time.
● A failure to recognise the mobile
market.
● Turning down an offer from Apple to make chips for the iPhone; Apple also later stopped using Intel CPUs in its computers.
● The acquisition of McAfee, which
had little to do with Intel’s core business.
● Intel declined to invest in OpenAI
in 2017.
● Losing its dominant position in
the CPU market to AMD after spending many years making minimal gains.
Intel processor model differentiation
Intel produces or recently produced the Core, Xeon, Pentium, Celeron and “Intel Processor” models of microprocessors. They are differentiated as follows:
▪ Core: Intel’s main CPU product line.
▪ Xeon: the enterprise and workstation product line.
▪ Pentium: once the mainline product, later becoming the entry-level processor line. Retired in 2023.
▪ Celeron: the even more entry-level processor line. Retired in 2023.
▪ Intel Processor: the replacement for the Pentium and Celeron models.
From left-to-right: the original Pentium logo from 1993, the Celeron logo used during 2008, and the current Xeon and Core logos.
● Failure to take the lead in providing hardware for AI.
● In 1972, they purchased the
Microma watch company to produce
complete digital watches, but struggled in the market and sold the company in 1977.
● Intel purchased Basis Science in 2014 to enter the fitness tracker market, but its products were later discontinued.
Intel also had a series of CEOs
that stifled innovation and even got
involved in social politics and moved
the focus away from its core mission
of being a chip company. There were
also inappropriate share sales by a
CEO before bad news was announced regarding the discovery of a chip security vulnerability.
Fig.69: Intel’s Gaudi3 chip costs about US$16,000, has 128GB of HBM (high-bandwidth memory), and is meant for AI applications. Source: https://newsroom.intel.com/artificial-intelligence/next-generation-ai-solutions-xeon-6-gaudi-3
Conclusion
In this series, we have covered the
founding of Intel and how it “accidentally” became a microprocessor
company after being asked to produce
a calculator chipset. We examined
Intel’s early focus on memory chips, its
loss of the DRAM market to Japanese
competitors, and its subsequent shift
to becoming an almost exclusively
microprocessor-focused company.
We then detailed the 8008’s adoption by hobbyists, the selection of the
8088 for the IBM PC and how that decision fuelled the explosive growth of
personal computing. From there, we
followed Intel’s development of the
80286, 80386 and 80486, and later the
Pentium and Core series, charting how
Intel maintained a leadership position
for decades.
Intel made a big mistake around
2005/2006 when it declined a request
from Apple to make iPhone chips and
was mind-bogglingly overambitious
when it came to the NetBurst design.
From the 2010s, Intel further stumbled, failing to develop the 10nm node
in a timely manner (again, because of
overambition) and failing to recognise
many market opportunities, including the mobile market, which allowed
competitors to take hold.
It is now trying to remake itself with
new, better processors, new foundries, new management, job cuts and a
commitment to re-establish itself as a market leader. SC