Feature Article
Data Centres, Servers & Cloud Computing

By Dr David Maddison

This month we write about an important piece of mostly invisible internet infrastructure: data centres. They are rarely seen and little is known of them by the general public.
Every time you use a search engine,
watch an online video, use an
email service, use social media, read or
write blogs, buy products online, use
an AI system, or even read Practical
Electronics or most other magazines
or newspapers online, you are almost
certainly using a data centre.
Data centres contain large numbers
of computer servers where information is received, stored, managed, processed and disseminated. A server is
a computer on which software runs
remotely, to ‘serve’ other computers
called ‘clients’ over a network. Such
software applications include web servers, email servers, databases, custom
servers and more.
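The client-server idea is easy to demonstrate in a few lines. The sketch below (a minimal illustration, not how production servers are written) runs a trivial one-shot 'echo' server in a background thread, then queries it as a client over a local socket:

```python
import socket
import threading

# A minimal 'server': listen, accept one client, echo its request back.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_one():
    conn, _addr = srv.accept()           # wait for a client to connect
    request = conn.recv(1024)            # read the client's request
    conn.sendall(b"SERVED: " + request)  # 'serve' a response
    conn.close()

threading.Thread(target=serve_one, daemon=True).start()

# The 'client': connect over the network, send a request, read the reply.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"hello")
reply = cli.recv(1024)
print(reply)   # b'SERVED: hello'
cli.close()
srv.close()
```

Real web, email and database servers are elaborations of exactly this pattern: listen, accept a connection, read a request, send back a response.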
Small companies might start with
their own central computer system
with an in-house server to store and
process their data. As they grow, it
might become more economical to move
these services to offsite data centres,
especially for companies with multiple locations. Companies can:
● pay a data centre to host their own hardware
● rent hardware from a third party but manage the software themselves
● have their off-site hardware and software managed entirely by a third party (or multiple parties)
More and more these days, individuals also pay companies to manage offsite services and data for them, often
referring to those services as being in
‘the cloud’.
For example, you might be a Google
customer and use Google Docs, Gmail,
Google Drive etc; or an Apple customer using iCloud, Apple Mail etc; or a
Microsoft customer using OneDrive,
Office 365 etc.
Those services may use local apps
(or run in a web browser) but most of
the ‘heavy lifting’ is done in servers
located in data centres. In most cases,
those servers are distributed around the
world, so there will always be a local
server for fast access (and also so that
the entire service doesn’t go down due
to one network outage).
In some cases (or in certain areas), it
is also necessary to store data locally
to comply with local laws.
Cloud service providers can be huge; they might operate tens of thousands of servers, or even millions, as they service numerous companies (and individuals) from all over the world.
The origins of data centres
Early computers were room-sized,
used large amounts of power and
needed a specialised environment
with air conditioning, raised floors
for cables, provision of a large power
system and a building capable of
taking the weight of the computer.
Such computers were known as “mainframes” (see Fig.1). They were typically accessed via a ‘dumb terminal’,
as shown in Fig.2.
That was the case from the late 1940s
through to the 1970s. Only large businesses, government organisations and
scientific establishments could afford
such computers. Due to the cost, computing was often done through ‘timesharing’ arrangements, where many
users accessed a portion of the power
of one large computer through a terminal at their desk or some common
location.
In the 1970s, the microcomputer was invented, and it was popularised in the 1980s. Software could then be run by individuals from their personal computer (PC), which is also where data was stored. Software was developed that did not need specialised training to use (it was more ‘user-friendly’).

Fig.1: the NASA Mission Control computer room in 1962, which used two IBM 7094-II computers. Source: https://archive.org/details/S66-15331

Practical Electronics | March | 2026
Unfortunately, having a computer
on every desk led to other problems,
such as organisations losing control
of their IT resources. This created an
incentive to again centralise computing resources.
Some larger government and corporate entities still maintained special rooms with traditional mainframe
computers where critical data was
stored, even with the rollout and acceptance of microcomputers. Still, by
and large, desktop PCs were widely
used throughout the 1980s and 1990s
until the internet started to expand
rapidly.
The expansion of the internet and
the resulting vast requirement for data
storage and e-commerce created a need
for centralised computing and data
storage. This coincided with the so-called ‘dot-com bubble’, from about 1995 to 2000, with large investments in IT-related companies.
Central data storage was expensive,
and eCommerce companies needed a
fast internet connection, which at the
time was costly. There was also the
need for backup power for the computers and dedicated staff to maintain
the systems.
It thus became preferable for organisations to subcontract their data storage and computing requirements to an external organisation, such as a data centre, where economies of scale helped to minimise costs.
In a way, the modern data centre
represents a return to the earliest days
of computing via centralised systems
with dedicated staff.
What is a data centre?
A data centre is a dedicated facility that houses computers, storage
media, networking, power & telecommunications infrastructure, cooling
systems, fire suppression systems,
security systems, staff facilities and
anything else required to run networked computers.
What is “the cloud”?
This expression is often used in reference to computers running in data
centres. ‘The cloud’ represents the
availability of computing resources to
an end user anywhere that an internet
connection exists. That generally implies that the resources are located in
one or more data centres.
While cloud resources could be
hosted in one central location, more
likely, they will be distributed over
a range of locations for redundancy,
to reduce bandwidth requirements
over long-distance connections and to
reduce latency (access time).
Most commonly, a ‘cloud’ service is a type of Software as a Service (SaaS), as per the following section on delivery models. That means that both the cloud hardware and software (including the operating system, applications etc) are managed by a third party. All the customer has to do is access it. It is a somewhat nebulous concept (like a cloud!).

Fig.2: a typical way to interact with a computer in the early 1960s was via a printing Teletype, such as this ASR-33 model, introduced in 1963. Source: https://w.wiki/B5fn
Clouds may be public, such as many
of the services operated by Microsoft,
Google and Apple, or ‘private’, where
only specific customers with contracts
can access them. Hybrid clouds contain a mix of public and private data
and/or services.
Service delivery models
A data centre or cloud can be managed in various ways, as shown in
Fig.3. It can either be completely
in-house, or with infrastructure as a
service (IaaS), platform as a service
(PaaS) or software (applications) as
a service (SaaS) representing reducing levels of customer management
and increasing levels of data centre
or cloud provider management.
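The split of responsibilities between the four models can be sketched as a simple table. The layer names below are illustrative only; different providers draw the lines slightly differently:

```python
# Layers of the computing stack, from hardware up to the application.
STACK = ["networking", "storage", "servers", "virtualisation",
         "operating system", "runtime", "data", "application"]

# How far up the stack the provider's responsibility reaches in each
# delivery model; everything above that point is managed by the customer.
PROVIDER_MANAGES = {
    "in-house": 0,        # the customer manages everything
    "IaaS": 4,            # provider manages up to virtualisation
    "PaaS": 6,            # provider also manages the OS and runtime
    "SaaS": len(STACK),   # provider manages the whole stack
}

def split_responsibility(model):
    n = PROVIDER_MANAGES[model]
    return {"provider": STACK[:n], "customer": STACK[n:]}

# Under IaaS, the customer still manages the OS and everything above it.
print(split_responsibility("IaaS")["customer"])
```

In-house means the customer runs everything; at the other extreme, a SaaS customer simply uses the finished application.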
As an example, the Silicon Chip website (and some of their other software) uses the IaaS model. They chose this model to retain maximum control over their systems, without having to worry about provisioning high-speed internet, backup power, cooling etc. It also saves money because they only need a fraction of the power of a computer, so they can share hardware with others to split the costs.
Tenancy refers to the sharing of resources. Multi-tenancy is popular on public cloud services, such as Microsoft Azure. In this case, an individual customer’s data remains invisible and inaccessible to others, but they share hardware, networking, other infrastructure, databases and memory. As a result, there are limited possibilities for the customisation of application software.
Examples of multi-tenancy software providers include Google Apps, Salesforce, Dropbox, Mailchimp, HubSpot, DocuSign and Zendesk.

Fig.3: four different data centre service delivery models (related to the concept of tenancy). Original source: https://w.wiki/B5fq
With single-tenancy, there is no
sharing of resources, which means
maximum control over the software
– see Fig.4.
Virtual machines and servers
A virtual machine or virtual server
is an emulated version of a physical
computer running within a physical
computer.
To put it another way, from the customer’s perspective, they have access
to an entire computer, with which
they can do whatever they like. But
it doesn’t exist as a physical computer; instead, it is software running on
a physical computer, alongside many
other customers’ virtual machines.
Businesses can create their own virtual server, which can run software
and operating systems, store data,
perform networking functions and do
other computing functions as though
it was a real physical computer.
This virtual server runs under a
software layer known as a ‘hypervisor’, which manages the memory,
CPU, storage, networking and other
physical resources of the physical
computer and allocates them to the
virtual machines as required.
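The bookkeeping a hypervisor performs can be illustrated with a toy model (purely a sketch; real hypervisors schedule CPU time and page memory dynamically rather than handing out fixed slices):

```python
class Hypervisor:
    """Toy model of a hypervisor allocating host resources to VMs."""

    def __init__(self, cpus, ram_gb):
        self.free = {"cpus": cpus, "ram_gb": ram_gb}
        self.vms = {}

    def create_vm(self, name, cpus, ram_gb):
        # Refuse to overcommit. (Real hypervisors often do overcommit,
        # since guests rarely hit peak demand simultaneously.)
        if cpus > self.free["cpus"] or ram_gb > self.free["ram_gb"]:
            raise RuntimeError("insufficient physical resources")
        self.free["cpus"] -= cpus
        self.free["ram_gb"] -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}

host = Hypervisor(cpus=64, ram_gb=512)
host.create_vm("customer-a", cpus=8, ram_gb=64)
host.create_vm("customer-b", cpus=16, ram_gb=128)
print(host.free)   # resources left for further tenants
```

Each customer sees only their own 'machine', while the hypervisor tracks what remains of the physical host for the next tenant.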
Why use a data centre?
We touched on this earlier when we explained why Silicon Chip uses IaaS, but there are other reasons, including:
● lower costs (due to economies of scale)
● lower latency and faster transfer speeds
● hardware maintenance performed by third parties with access to experts and parts
● multi-tenancy allows costs and resources to be shared among a large pool of users
● data centres typically have a lot of redundancy, making them resistant to power outages and natural or human-induced disasters

Why use the cloud?
These reasons include those for using a data centre, plus:
● device independence; applications typically run via a web browser, so they will work from any operating system, including mobile devices
● software maintenance, including updates, performed by expert third parties
● performance monitoring and security by expert third parties
● scalability and elasticity, so resources can be increased as required

How many data centres exist?
There are currently around 523 data centres in the United Kingdom and approximately 139 in Ireland. Europe has approximately 3346 data centres spread among 44 countries. Worldwide, there are approximately 11,000 data centres, with the United States of America having the most at 5387.

Data centre infrastructure
Data centres have major network infrastructure to connect the data centre to the outside world with plenty of bandwidth.
The internal network is also handy for transferring data between multiple computers operated by the same customer (and sometimes even different customers, eg, web crawlers for search engines).
There is also significant storage infrastructure for storing data and software; it may be integrated with the computing nodes, or separate and accessed through internal high-speed networking.
Of course, there are plenty of computing resources for data processing with onboard memory, with connections to data and applications storage, plus internet infrastructure. These are supported by cooling systems, power supplies and fire suppression systems. The work of a data centre is done in various forms of processing units:

Fig.4: the single tenancy vs multi-tenancy models for data centres. DB is short for database. Original source: https://resources.igloosoftware.com/blog/multitenancy-database
Fig.5: the NVIDIA GH200 Grace Hopper platform, based on the Grace Hopper Superchip. This board is capable of four petaflops (4 × 10^15 floating-point operations per second) and includes 72 ARM CPUs, 96GB of HBM3 memory for the CPUs plus 576GB for the GPUs. Source: TechSpot – https://pemag.au/link/ac19
CPUs (central processing units)
CPUs are at the heart of traditional
computers and generally continue to
be, including in data centres. They
may be supplemented by GPUs, TPUs
and DPUs (each described below) to
improve performance or provide new
capabilities.
An example of a CPU designed for
data centres is the fourth-generation
AMD EPYC based on the x86 architecture, as used in most PCs and servers (Fig.7). It is designed to be energy
efficient, secure and give high performance. Each of these processors may
include up to 128 Zen 4 or Zen 4c
cores, allowing each server to potentially handle thousands of requests at
any time.
GPUs (graphics processing units)
GPUs are special processors to accelerate the rendering of images, including 3D scenes. They are also capable
of image processing.
While they were originally designed
for graphics applications, they are highly
suitable for non-graphics applications
such as parallel processing, accelerated computing and neural networks as
needed in machine learning and artificial intelligence (AI). As such, they are
commonly found in AI systems.
The term ‘accelerated computing’ refers to using specialised hardware such as GPUs to perform complex computing tasks more efficiently than traditional CPUs can.
An example of a GPU used in accelerated computing and AI data centres
is the NVIDIA Grace Hopper Superchip processor, which forms part of the
GH200 Grace Hopper platform (Fig.5).
It is specifically designed for accelerated computing and generative AI, primarily in data centres. It utilises the
latest HBM3e high bandwidth memory
technology that provides 10TB/sec of
memory bandwidth.
TPUs (tensor processing units)
TPUs are proprietary ASICs (application specific integrated circuits) by
Google, optimised for neural network
machine learning and artificial intelligence. Various versions have been produced since 2015. They are designed
for high computational throughput at
low precision, handling numbers with
as few as eight bits.
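The benefit of low precision is that each number takes less memory and less silicon to multiply. A minimal sketch of symmetric 8-bit quantisation (illustrative only; the actual number formats used by TPUs differ):

```python
def quantize(values, bits=8):
    """Map floats onto signed integers of the given bit width."""
    qmax = 2 ** (bits - 1) - 1                 # 127 for 8 bits
    scale = max(abs(v) for v in values) / qmax # one scale for the whole list
    return [round(v / scale) for v in values], scale

def dequantize(q, scale):
    """Recover approximate floats from the small integers."""
    return [v * scale for v in q]

weights = [0.02, -1.27, 0.64, 1.27]
q, s = quantize(weights)
print(q)                  # small integers, a quarter the size of 32-bit floats
print(dequantize(q, s))   # approximately the original values
```

Storing and multiplying 8-bit integers instead of 32-bit floats cuts memory traffic and allows far more multipliers per chip, at the cost of some precision.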
The chips (see Fig.6) are designed
specifically for Google’s TensorFlow
framework for machine learning and
artificial intelligence, and are incorporated into ‘packages’, as shown in
Fig.8.
Fig.7: a range of AMD fourth-generation EPYC processors designed specifically
for data centre applications. Source: www.amd.com/en/products/processors/
server/epyc/4th-generation-9004-and-8004-series.html
Fig.6: Google’s v5p TPU chip. Source:
https://thetechportal.com/2024/04/09/
google-ai-chip
A notable application was Google’s
use of TPUs to find and process all the
text in the pictures of Google’s Street
View database in under five days.
Google has developed what they call
the Cloud TPU v5p AI Hypercomputer (Fig.9).
DPUs (data processing units)
DPUs, also called infrastructure processing units (IPUs) or SmartNICs (NIC
stands for network interface controller)
are used to optimise data centre workloads and to manage networking, security and storage. They relieve system
CPUs of these workloads.
An example is the SolidNET DPU,
an ARM-based software-defined DPU
with a PCIe half-height-half-length (HHHL) format. It is based on an off-the-shelf 16-core NXP LX2161A System on Chip (SoC) and uses open standards (see Fig.10). For more information, see https://pemag.au/link/ac0b
Power supply
A typical data centre power system
includes:
● transformer(s) to reduce the utility voltage, if necessary
● automatic switching gear to switch
to backup power sources such as a generator in the event of a utility failure
● a UPS (uninterruptible power supply) supplied by a battery bank to provide backup power in the event of a utility failure, until the generator starts, as well as to condition power and remove voltage spikes in normal operation
● power distribution units (PDU), an electrical board to distribute power from the UPS to equipment locations
● a remote power panel (RPP), an electrical sub-board to distribute power from the PDU to individual rack-mounted power distribution units (rPDU)

Fig.8: Google’s TPU v4 board. It has 4 PCIe connectors and 16 OSFP connectors. Source: https://w.wiki/B5fr
rPDUs are much like power boards; individual servers or other equipment are plugged into them. Some of these components may be absent, depending on the size and sophistication of the data centre. All of the above is interconnected with cabling, wiring, circuit breaker boards etc.
Some data centres use flywheel
energy storage rather than a battery-
based UPS (see https://pemag.au/link/
ac1b). They can be slightly more costly
to install, but they don’t degrade over
time as much as batteries do.
Power consumption
Data centres, especially AI data centres, use an enormous amount of electrical power, both to run the computers themselves (particularly their CPUs, GPUs and TPUs) and to run their cooling systems. So it is important that these be designed to be as efficient as possible to minimise power consumption.
Data centres need access to inexpensive, reliable 24/7 power supplies.
They consume a significant amount
of the world’s electrical power; one es-
timate is 1%-1.5% (https://pemag.au/
link/ac0i). According to another estimate (https://pemag.au/link/ac0j), AI
currently uses 8% of the world’s electrical energy.
The IEA predicts that data centres
will consume 6% of electrical power in
the United States by 2026, and 32% in
Ireland by 2026, up from 17% in 2022
(https://pemag.au/link/ac0k).
A typical ‘hyperscale’ data centre
consumes up to 100MW according
to Oper8 Global (the largest is up to
960MW). But that is just internal consumption. Given a power usage effectiveness (PUE) of 1.3, 130MW will need
to be provided from the grid.
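Power usage effectiveness is defined as the ratio of total facility power to the power delivered to the IT equipment, so the grid draw in the example above can be checked with one line of arithmetic:

```python
def grid_power_mw(it_load_mw, pue):
    """Total facility draw (MW) given the IT load and the PUE ratio.

    PUE = total facility power / IT equipment power. A PUE of 1.0 would
    mean every watt goes to computing; the excess covers cooling, power
    conversion losses, lighting and so on.
    """
    return it_load_mw * pue

print(round(grid_power_mw(100, 1.3), 1))  # the 100MW example: 130.0 MW from the grid
```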
At a time when dispatchable (on demand) power capacity is diminishing in many countries, being replaced with intermittent solar and wind production, and demand is also growing due to the charging of electric vehicles, it is not clear where all this power will come from.
The shortage of power has been recognised. According to the CBRE Group
(https://pemag.au/link/ac0l):
A worldwide shortage of available
power is inhibiting growth of the global
data center market. Sourcing enough
power is a top priority of data center operators across North America, Europe,
Latin America and Asia-Pacific. Certain secondary markets with robust
power supplies stand to attract more
data center operators.
Data centres are being set up in
New Zealand with access to 200MW
of relatively inexpensive hydroelectric, gas and geothermal energy, from
which 79% of New Zealand’s total
production is derived (https://pemag.
au/link/ac0m).
In the United States, Equinix, a data
centre provider, signed a 20-year nonbinding agreement with Oklo to purchase up to 500MW of nuclear power
(https://pemag.au/link/ac0n).
Microsoft is proposing to use nuclear power for its data centres (see
https://pemag.au/link/ac0o), as is
Google (https://pemag.au/link/ac0p).
Amazon purchased a nuclear-powered
data centre in Salem Township, Pennsylvania, USA (https://pemag.au/link/
ac0q). It consumes an almost unbelievable 960MW of electrical power.
According to Funds Europe, the rapid growth of data centres is putting an unsustainable strain on the European electrical grid (https://pemag.au/link/ac0r). Data centres already use 2.7% of Europe’s electricity, a figure expected to increase to 3.2% by 2030. It has been suggested that small modular reactors (SMRs) and micro modular reactors (MMRs) could be used to power data centres. There is a growing interest in using nuclear power for AI data centres: https://pemag.au/link/ac0j
Cooling
One of the most critical aspects of a
data centre, apart from the computing
resources, is the provision of cooling.
This is because the vast majority of
the enormous amount of power used
by data centres ultimately gets converted into heat.
Data centres are cooled by air conditioning the rooms the computers are in, possibly combined with some type of liquid cooling of the servers themselves.
A data centre can be designed with hot and cold aisles between server racks to help maximise the efficiency of the cooling system. Cold air may be delivered from beneath perforated floor tiles and into the server racks before being discharged into the hot aisles (see Fig.12). Alternatively, hot air may be collected at the top of the server racks rather than being blown into an aisle.

Fig.9: inside part of Google’s ‘hypercomputer’ based on v5p TPUs arranged into ‘pods’. Each pod contains 8960 v5p TPUs. Source: Axios – https://pemag.au/link/ac1a

Fig.10: a SolidRun SolidNET Software-Defined DPU (data processing unit). Source: www.storagereview.com/news/solidrun-solidnet-software-defineddpu-for-the-edge-unveiled
Some data centres are using emerging technologies such as immersing the computer equipment in a fluid to efficiently remove heat (Fig.13). In two-phase cooling, a volatile cooling liquid boils and condenses on a coil which is connected to a heat exchanger to remove heat, after which it drips down into the coolant pool.
Silicon Chip magazine published an
article in the November 2018 issue on
the DownUnder GeoSolutions supercomputer in Perth that was immersed
in an oil bath for cooling at https://
siliconchip.au/Article/11300
Water usage
Some data centres, especially those
used for AI, consume water for cooling and hydroelectric generation as
well. One would think that cooling a data centre would mostly involve a closed-loop system, like a typical car’s cooling circuit. But apparently, that is not always the case, as many data centres use large amounts of water. Nature magazine states:
...in July 2022, the month before
OpenAI finished training the model,
the cluster used about 6% of the district’s water. As Google and Microsoft
prepared their Bard and Bing large language models, both had major spikes
in water use — increases of 20% and
34%, respectively, in one year, according to the companies’ environmental reports... demand for water for AI
could be half that of the United Kingdom by 2027 – https://doi.org/10.1038/
d41586-024-00478-x
Details of Microsoft’s water consumption for AI are at https://pemag.au/link/ac0u
About 2/3 of the water used by Amazon
data centres evaporates; the rest is used
for irrigation (https://pemag.au/link/
ac0v). That source also states that the
amount of water to be consumed by
a proposed Google data centre is regarded as a trade secret!
Fire detection and suppression
Due to the very high electrical power
density inside a data centre, if a fire
breaks out, it could get serious very
quickly. Fire detection systems need
to give early warning to prevent major
damage, and fire extinguishing systems
Fig.11: part of the elaborate plumbing for the cooling system for the Google
data centre in Douglas County, Georgia. Source: www.google.com/about/
datacenters/gallery
Fig.12: one possible configuration of a data centre using the concept of hot and
cold aisles between rows of servers. Original source: www.techtarget.com/
searchdatacenter/How-to-design-and-build-a-data-center
Fig.13: the concept of two-phase immersion cooling for server equipment. Source: www.gigabyte.com/Solutions/liquidstack-twophase (diagram labels: heat generated on the chip turns fluid into vapor; vapor rises to the top; vapor condenses on a coil or lid condenser; fluid recirculates passively to the bath)
Fig.14: a comparison of the VESDA early warning smoke detection to
conventional fire detection systems. Source: https://xtralis.com/product_
subcategory/2/VESDA-Aspirating-Smoke-Detection
need to cause minimal damage to electrical equipment.
VESDA (Very Early Smoke Detection
Apparatus) is a highly sensitive smoke
detector (Fig.14), at least 1000 times
more sensitive than a typical smoke
alarm. It sucks air through perforated
pipes that are routed around a protected area, then analyses the sample for
the presence of smoke with sensitive
detectors. It is an Australian invention in use in many data centres for
the early detection of fires.
Victaulic Vortex is a fire suppression system used in many data centres
(Fig.15). It is a combined water and nitrogen fire extinguishing system. Tiny
droplets of water and nitrogen gas, like
a fog, are discharged from nozzles to
absorb heat, reduce oxygen and extinguish the fire.
It causes minimal or no wetting and
therefore no equipment damage, avoiding a costly clean-up. After rectifying
the fire damage, the data centre can be
quickly returned to operation.
Security
Physical security, data security, environmental security (avoiding flooding, earthquakes etc) and power supply
security are all important considerations for data centres. Human entry
usually requires some type of biometric system (like a retinal scan) via
a secure doorway – see Fig.16. That
shows a Circlelock door, which is described at https://pemag.au/link/ac0c
Fig.15: an artist’s impression of the Victaulic Vortex fire suppression system
in operation, discharging a water and nitrogen fog. Source: https://youtu.be/
qmhO7E4c0tM
Fig.16: the entry lobby of a Google data centre uses a Circlelock door and retinal
scan, emphasising the high security requirements of data centres. Source: www.
google.com/about/datacenters/gallery
Server racks
Server racks are standardised frames
(typically made from metal) that hold
computer servers, network switches
or other equipment. They help to organise wiring, airflow or plumbing for
cooling, provide access for service &
maintenance, and sometimes physical
security – see Fig.17.
Server racks are mounted together
in single or multiple rows in whatever
number is required, as shown in Fig.18.
An important feature of server racks
is that they allow a very high density,
with up to 42 individual systems in
one standard rack, or over 100 with a
‘blade’ configuration.
A server rack is designed to accommodate equipment that is 19 inches
(482.6mm) wide; that standard was
established in 1922 by AT&T. The
height of equipment is standardised in multiples of 1.75 inches (44.45mm). A single-height unit is designated 1U (see Fig.19), a double-height unit 2U and so on.

Fig.17: this server rack is mostly populated with network switches and patch panels. Source: Fourbs Group – https://pemag.au/link/ac1c

Fig.18: a group of server racks in a data centre. Source: https://kpmg.com/jp/en/home/insights/2022/03/datacenter-business.html

Fig.19: removing a 1U rack-mounted server mounted with sliding rails. Source: https://youtu.be/fWaW9lA_pA0
Equipment might be mounted on
rails so it can easily be slid out for service. Alternatively, and more simply, it
may be bolted to the edges of the rack
using ‘rack ears’.
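Those rack dimensions make sizing calculations trivial; a short sketch using the figures quoted above:

```python
RACK_UNIT_MM = 44.45    # 1U = 1.75 inches of vertical space
RACK_WIDTH_MM = 482.6   # the standard 19-inch equipment width

def height_mm(units):
    """Nominal height of equipment occupying the given number of rack units."""
    return units * RACK_UNIT_MM

def servers_per_rack(rack_units=42, server_units=1):
    """How many servers of a given height fit in one standard rack."""
    return rack_units // server_units

print(height_mm(2))             # a 2U server is nominally 88.9mm tall
print(servers_per_rack(42, 2))  # 21 2U servers fit where 42 1U servers would
```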
Almost all aspects of server racks
are covered by CEA, DIN, EIA, IEC and
other standards. The so-called 19-inch
rack is used for many other types of
equipment as well.
There are some other rack standards. One example is Open Rack, an
initiative of the Open Compute Project. This rack was specifically designed for large-scale cloud deployments and has features such as a pair
of 48V DC busbars at the rear to power
the equipment.
It is designed for equipment that is
21-inches (538mm) wide instead of
19in (482.6mm), with a vertical spacing of 1.89in (48mm) instead of 1.75in
(44.45mm) to improve cooling.
The racks are strong to accommodate the extra weight of equipment,
all cables connect at the front rather
than the back, and IT equipment is
hot pluggable. See Fig.20 for a typical
Open Rack configuration.
Data storage
While there is a general move to solid-state drives (SSDs) for data storage, hard disk drives (HDDs) retain some advantages over SSDs, such as lower price, especially for higher capacities; they last longer, with little degradation with constant read/write cycles; and data recovery is easier for certain failure modes.
According to Seagate (https://pemag.au/link/ac0d), over 90% of online data stored in data centres is on hard disk, with the remainder on SSDs.
Western Digital sells a drive intended for use in data centres, the Ultrastar DC HC680, with a capacity of 28TB. Seagate’s Exos X series of hard drives have capacities up to 32TB.
Tape drives are also used in data centres for archiving data and backups. They have great durability and longevity, and can provide an ‘air gap’ (no physical connection to the rest of the system) to protect stored data against hacking attempts and ransomware. They are also low in cost for their high capacity.
Enterprise and Datacenter Standard Form Factor (EDSFF) is a specification designed to address the limitations of the 2.5-inch and M.2 sizes for solid-state drives. EDSFF drives provide better signal integrity, can draw more power and have higher maximum read/write speeds.
Standards for data centres
Various international standards exist
for the design of data centres and their
security and operational efficiency.
Examples include:
● ISO/IEC 22237-series
● ANSI/TIA-942
● ANSI/BICSI 002-2024
● Telcordia GR-3160
Data centre ratings
Data centres can be rated according
to the TIA-942 standard:
Rated-1: Basic Site Infrastructure
The data centre has single-capacity
components, a non-redundant distribution path for all equipment and limited protection against physical events.
Rated-2: Redundant Component
Site Infrastructure
The data centre has redundant capacity components, but a non-redundant
distribution path that serves the computer equipment.
Fig.20: a typical configuration for
an Open Compute Project V2 rack.
Original source: Mission Critical
Magazine – pemag.au/link/ac1e
Rated-3: Concurrently
Maintainable Site Infrastructure
The data centre has redundant capacity components and redundant distribution paths that serve the computer equipment, allowing for concurrent maintainability of any piece of equipment. It also has improved physical security.
Fig.21: the Google Cloud TPU v5e AI infrastructure in a data centre. Source:
https://cloud.google.com/blog/products/compute/announcing-cloud-tpu-v5eand-a3-gpus-in-ga
Rated-4: Fault Tolerant Site
Infrastructure
The data centre has redundant capacity components, active redundant
distribution paths to serve the equipment and protection against single
failure scenarios. It also includes the
highest level of security.
A ‘hyperscale’ data centre is one designed to accommodate extreme workloads. Amazon, Facebook, Google, IBM
and Microsoft are examples of companies that use them.
Artificial intelligence (AI)
Some data centres are specialised for AI workloads.

Fig.22: the Microsoft Azure infrastructure that runs ChatGPT. Source: https://news.microsoft.com/source/features/ai/how-microsofts-bet-on-azure-unlockedan-ai-revolution

Fig.23: inside a small section of the Google data centre in Douglas County, Georgia, USA. Source: www.google.com/about/datacenters/gallery
AI data centres are much like regular data centres in that they require large computing resources and specialised buildings. However, the resource requirements for AI are substantially greater than those of a conventional data centre.
According to Australia’s Macquarie
Data Centres, conventional data centres require around 12kW per rack, but
an AI data centre might require 60kW
per rack. Oper8 Global (https://pemag.
au/link/ac0w) states that an ‘extreme
density’ rack can have a power consumption of up to 150kW!
An AI data centre requires far
more computing resources. Instead
of mainly using CPUs, it will also
contain a significant number of GPUs
and TPUs.
Deep learning & machine learning
AI data centres can use either machine learning or deep learning. Machine learning uses algorithms to interpret and learn from data, while deep
learning uses similar algorithms but
structures them into layers, within
an artificial neural network simulating how a brain learns.
A neural network is hardware and/or software with an architecture inspired by that of the human (or other animal) brain. It is used for deep learning, a form of artificial intelligence. Large versions of these are run in data centres. Machine learning does not necessarily use neural networks (but it can).
Machine learning is best for structured tasks with small datasets, with thousands of data points, but may require human intervention if a learned prediction is incorrect. Deep learning is best for making sense of unstructured data with large datasets and millions of data points. Deep learning can determine for itself whether a prediction is wrong or not.
Practical Electronics | March | 2026
Fig.24: Google Cloud (Cloud CDN) locations (dots) and their interconnecting subsea cables. Source: https://cloud.google.com/about/locations#network
Machine learning is relatively quick to train but less powerful; deep learning can take weeks or months to train, much as a person takes time to learn.
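The ‘layers’ that distinguish deep learning can be illustrated with a toy fully connected network. The following pure-Python sketch uses hand-picked sizes, weights and a ReLU activation purely for illustration; it is not any particular production model:

```python
def relu(v):
    """Rectified linear unit, a common activation function."""
    return max(0.0, v)

def dense(inputs, weights, biases, activation):
    """One fully connected layer: weighted sums plus biases, then activation."""
    return [activation(sum(x * w for x, w in zip(inputs, col)) + b)
            for col, b in zip(weights, biases)]

# A 2-input, 2-hidden, 1-output network with fixed example weights.
x = [1.0, 2.0]
hidden = dense(x, weights=[[0.5, 0.1], [-0.2, 0.3]],
               biases=[0.0, 0.0], activation=relu)
output = dense(hidden, weights=[[0.6, -0.5]],
               biases=[0.1], activation=lambda v: v)
print(hidden, output)  # hidden ≈ [0.7, 0.4], output ≈ [0.32]
```

Stacking many such layers, with millions of learned weights instead of two hand-picked ones, is what turns this into ‘deep’ learning.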
CPUs have advantages for implementing recurrent neural networks
(RNNs). Typical applications for RNNs
are for translating language, speech recognition, natural language processing
and image captioning.
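What makes an RNN ‘recurrent’ is that each step’s output feeds back in as state, so the network can remember earlier parts of a sequence, such as previous words in a sentence. A scalar toy version (the weights here are arbitrary illustrative values):

```python
import math

def rnn_step(x, h, w_in=0.8, w_rec=0.5, bias=0.0):
    """One recurrent step: the new state depends on the input AND the previous state."""
    return math.tanh(w_in * x + w_rec * h + bias)

h = 0.0  # initial hidden state
states = []
for x in [1.0, 0.0, 1.0]:  # a short input sequence
    h = rnn_step(x, h)
    states.append(h)
print(states)
```

Note that the second input is zero, yet the state stays non-zero: the network carries information forward, which is exactly what sequence tasks like translation need.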
GPUs have advantages for some fully connected neural networks. They are probably the most common type of processor used for neural networks, hence the enormous market value of GPU makers like NVIDIA, which at the time of writing is one of the most valuable publicly listed companies in the world, at US$2.6 trillion.
Fully connected neural networks
are suitable for deep learning and have
applications in speech recognition,
image recognition, visual art characterisation, generating art, natural language processing, drug discovery and
toxicology, marketing, medical image
analysis, image restoration, materials
science, robot training, solving complex
mathematical equations and weather
prediction, among others.
TPUs have advantages for convolutional neural networks (CNNs). Applications for CNNs include pattern
recognition, image recognition and
object detection. Fig.21 shows part of
the Google Cloud TPU data centre artificial intelligence infrastructure. Also
see the video titled “Inside a Google
Cloud TPU Data Center” at https://
youtu.be/FsxthdQ_sL4
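The core operation of a CNN is convolution: sliding a small kernel over the data to detect local patterns. A 1D pure-Python sketch (the signal and the edge-detecting kernel are illustrative choices):

```python
def convolve1d(signal, kernel):
    """Valid-mode 1D convolution (no padding), the building block of CNN layers."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A step 'edge' in the signal produces a spike in the output, which is
# how CNNs pick out local features such as edges in images.
signal = [0, 0, 0, 1, 1, 1]
kernel = [1, -1]  # difference kernel: responds to changes
print(convolve1d(signal, kernel))  # [0, 0, -1, 0, 0]
```

In image recognition the same idea runs in two dimensions, with many kernels per layer, and TPUs are built to perform exactly these multiply-accumulate operations at enormous scale.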
ChatGPT
This popular AI ‘chatbot’, developed by OpenAI, is hosted on Microsoft Azure cloud computing data centre infrastructure (see Fig.22). It runs on tens of thousands of NVIDIA’s H100 Tensor Core GPUs with NVIDIA Quantum-2 InfiniBand networking.
Google data centres
Google is among the largest owners of data centres, storing vast amounts of the world’s data. Fig.23 shows the inside of a part of a Google data centre, while Fig.24 shows the location of Google Cloud data centres and their interconnection via undersea cables. The locations of data centres for delivering media such as videos (for example, for YouTube) can be seen at https://pemag.au/link/ac1d
Open Compute Project (OCP)
The OCP (www.opencompute.org) was founded in 2011 with the objective of sharing designs for data centre products and practices. Companies involved include Alibaba Group, Arm, Cisco, Dell, Fidelity, Goldman Sachs, Google, Hewlett Packard Enterprise, IBM, Intel, Lenovo, Meta, Microsoft, Nokia, NVIDIA, Rackspace, Seagate Technology and Wiwynn.
Their projects include server designs, an accelerator module for increasing the speed of neural networks in AI applications, data storage modules (Open Vault), Open Rack (mentioned previously), energy-efficient power supplies and network switches based on SONiC (Software for Open Networking in the Cloud).
Underwater data centres
Because of the significant cooling requirements of data centres and the need for physical security, there have been experiments with placing data centres underwater.
They would be constructed within a
pressure-resistant waterproof container, with only electrical and data cables
coming to the surface. They would not have any staff; with no people, there is no need for a breathable atmosphere, so the container can be filled with pure nitrogen to reduce corrosion of connectors and other parts.
There is also no possibility of accidental damage such as people dislodging wires etc. Also, there would
be no dust to clog cooling fans or get
into connectors.
The underwater environment has a
stable temperature, resulting in fewer
failures than when the temperature
can vary a lot.
It is much easier and more efficient
to exchange heat with a fluid such as
water than with air, reducing the overall power consumption.
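That advantage is easy to quantify with standard textbook figures for water and air at room temperature (these constants are not from the article): per unit volume, water absorbs heat thousands of times better.

```python
# Approximate properties at room temperature (standard textbook values).
water_density = 1000.0   # kg/m^3
water_cp = 4186.0        # J/(kg*K), specific heat capacity
air_density = 1.2        # kg/m^3
air_cp = 1005.0          # J/(kg*K)

# Volumetric heat capacity: energy needed to warm 1 m^3 by 1 K.
water_vol = water_density * water_cp  # ~4.19 MJ/(m^3*K)
air_vol = air_density * air_cp        # ~1.2 kJ/(m^3*K)
print(round(water_vol / air_vol))     # ~3470x advantage for water
```

This is why immersing the whole vessel in the sea can displace the fans, chillers and air handlers that dominate a conventional data centre’s overhead.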
An underwater environment also
provides protection from some forms
of nuclear radiation, which can cause
errors in ICs, as water is a good absorber of certain types of radiation.
Water can also absorb electromagnetic
pulses (EMP) from nuclear explosions.
The fact that the electronics are effectively housed in a Faraday cage also helps with disaster resistance.
Fig.25: cleaning Microsoft’s underwater data centre after being on the seabed for two years, off the Orkney Islands in Scotland. Source: https://news.microsoft.com/source/features/sustainability/project-natick-underwater-datacenter
Fig.26: an IBM modular data centre built
into a standard 40ft (12.2m) long shipping
container. Source: https://w.wiki/B5ft
Fig.27: looking like somewhere where Superman might live, this 65-storey data centre is proposed to be built in Iceland. Source: www.yankodesign.com/2016/04/01/the-internets-fortress-of-solitude
Physical security is also improved: even if a diver could reach an underwater data centre, there would be no practical way to get inside without flooding the whole container.
An underwater data centre can also
contribute to reduced latency (response
time) because half the world’s population lives within 200km of the sea,
so they can be optimally placed near
population centres and possibly undersea cables.
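That 200km figure translates directly into propagation delay. A rough calculation, assuming light in optical fibre travels at about c/1.47 (1.47 being a typical refractive index for fibre; real round trips add routing and processing time on top):

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s
FIBRE_INDEX = 1.47        # typical refractive index of optical fibre

def one_way_delay_ms(distance_km):
    """Propagation-only delay over fibre; ignores routing and processing."""
    speed = C_KM_PER_S / FIBRE_INDEX
    return distance_km / speed * 1000.0

print(round(one_way_delay_ms(200), 2))      # ~0.98 ms one way
print(round(2 * one_way_delay_ms(200), 2))  # ~1.96 ms round trip
```

In other words, siting a data centre 200km offshore from a city keeps the unavoidable speed-of-light cost to a couple of milliseconds.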
Underwater data centre projects
include:
● Microsoft Project Natick (https://natick.research.microsoft.com), an experiment first deployed in 2015 with a data centre built within a pressure vessel 12.2m long and 3.18m in diameter, about the same size as a standard 40ft (12.2m) shipping container – see Fig.25.
Its power consumption was 240kW.
It had 12 racks containing 864 standard Microsoft data centre servers with
FPGA acceleration and 27.6 petabytes
of storage. The atmosphere was 100%
nitrogen at one bar. Its planned operational period without maintenance
was five years.
● Subsea Cloud (www.subseacloud.
com) is proposing to put data centres
3km below sea level for physical security.
● Chinese company Highlander
plans to build a commercial undersea
data centre at the Hainan Island free
trade zone, with a facility for 100 airtight pressure vessels on the seabed.
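Dividing out the Project Natick figures quoted above gives a feel for the per-server scale (the per-server numbers are derived here for illustration, not stated in the source):

```python
# Project Natick headline figures from the text.
total_power_kw = 240
servers = 864
racks = 12
storage_pb = 27.6  # petabytes

print(round(total_power_kw * 1000 / servers))  # ~278 W per server
print(servers // racks)                        # 72 servers per rack
print(round(storage_pb * 1000 / servers, 1))   # ~31.9 TB per server
```

At roughly 278W and 32TB per server, these are ordinary data centre machines; the novelty is entirely in the sealed, unattended vessel around them.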
Modular data centres
A modular data centre is designed to
be portable and is built into a structure
like a shipping container – see Fig.26.
They might be used to supplement the capacity of an existing data centre, for disaster recovery, for humanitarian purposes or in any other situation where a data centre has to be moved to where it is needed.
Iceland data centre
A 65-storey data centre has been
proposed to be built in the Arctic (see
Fig.27). It was designed by Valeria Mercuri and Marco Merletti.
If built in Iceland, it could take advantage of inexpensive geothermal
energy and be close to international
cable networks. The low temperatures
would minimise cooling costs, and
the vertical design would minimise
land usage.
PE