Camera Sensors: What Are They and How Do They Work?
Learn about digital camera sensors and understand the essential role they play in digital photography
Digital cameras are everywhere – from high-end professional equipment used by the media to everyday smartphone cameras, webcams, and even doorbells. At the heart of every single one is a digital camera sensor, also known as an image sensor. Without this vital piece of technology, digital cameras as we know them today simply would not exist.
But what are camera sensors and how do they work? We aim to outline the basics behind the most common type of camera sensor and explain how this ever-crucial technology has evolved.
What is a Camera Sensor?
At the most basic level, a camera sensor is a solid-state device that absorbs particles of light (photons) through millions of light-sensitive pixels and converts them into electrical signals. These electrical signals are then interpreted by a computer chip, which uses them to produce a digital image.
While there are a number of different types of camera sensor, by far the most prevalent is the complementary metal-oxide semiconductor (CMOS) sensor, which can be found inside the vast majority of modern digital cameras.
This includes smartphones, compact cameras, and mirrorless interchangeable-lens cameras (ILCs).

Figure 1: Cross section of a camera sensor pixel. For illustrative purposes only.
How Does a CMOS Sensor Work?
A CMOS sensor is made up of a grid of millions of tiny pixels. Each pixel is an individual photosite, often called a well (see Figure 1).
When photons enter the photosite, they hit a light-sensitive semiconductor diode, or photodiode, and are converted into an electrical current that directly corresponds to the intensity of the light detected.
This signal is amplified on-pixel, then sent to an analog-to-digital converter (ADC), which converts it into digital format and sends it to an image processor.
The image processor is able to read these digital signals collectively and translate them into an image because each pixel is assigned an individual value that depends on the intensity of light it was exposed to.
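To make that chain of events concrete, here is a minimal sketch of a single pixel’s readout in Python. Every number in it (quantum efficiency, amplifier gain, ADC bit depth) is an illustrative assumption, not a real sensor specification.

```python
# A simplified model of one pixel's readout chain: photons -> charge ->
# amplified voltage -> digital number. All values are illustrative
# assumptions, not real sensor specifications.

def read_pixel(photon_count: int,
               quantum_efficiency: float = 0.5,   # fraction of photons converted to electrons (assumed)
               gain: float = 5e-4,                # amplifier: electrons -> volts (assumed)
               full_scale_volts: float = 1.0,     # ADC input range (assumed)
               adc_bits: int = 12) -> int:
    """Convert a photon count into the digital number a pixel reports."""
    electrons = photon_count * quantum_efficiency          # photodiode: light -> charge
    volts = electrons * gain                               # on-pixel amplification
    levels = 2 ** adc_bits                                 # 12-bit ADC -> 4,096 levels
    return int(min(volts / full_scale_volts, 1.0) * (levels - 1))

print(read_pixel(200))      # dim light   -> low value (204)
print(read_pixel(50_000))   # very bright -> clipped at the maximum (4095)
```

Note how brighter light simply produces a larger number, up to the point where the pixel saturates and clips at the ADC’s maximum value.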
As you can see in Figure 1, because the conversion and amplification processes happen on-pixel, the transistors, wiring, and circuitry have to be included in the spaces between each photosite.
To minimize the amount of light bouncing off this circuitry, a microlens is placed on the top of each pixel to direct the light into the photodiode and maximize the number of photons gathered.
You may have also noticed the inclusion of a color filter in Figure 1. The reason for this is that pixels detect only the intensity of light, not its color, so a camera sensor by itself can only produce black & white images.
In order to create color images, a color filter array needs to be added.
What Is a Color Filter Array?
A color filter array is a pattern of individual red, green, and blue color filters arranged in a grid – one for every pixel. These filters sit on top of the photosites and ensure that each individual pixel is exposed to only red, green, or blue light.
This means the image processor can assign color to each pixel readout and in turn produce color images.
Color filter arrays come in different patterns, the most common of which is the Bayer filter array.
The Bayer Filter Array

Figure 2: The Bayer color filter array. For illustrative purposes only.
The Bayer filter array (see Figure 2) is made up of a repeating 2×2 pattern in which each set of four pixels consists of two green, one red, and one blue pixel. This equates to an overall split of 50% green, 25% red, and 25% blue.
There is a higher proportion of green filters because the array has been designed to mimic the human eye’s greater sensitivity to green light.
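To see how this tiling works in practice, the short sketch below builds a Bayer mask and counts the filters. Starting the repeating cell on a green-red row is an assumption; manufacturers may begin the pattern at any of its four corners.

```python
# Tile a sensor with the repeating 2x2 Bayer cell. Beginning with the
# GR / BG corner is an assumption; any of the four phases occurs in
# practice.

def bayer_mask(rows: int, cols: int) -> list[list[str]]:
    """Return the filter color ('R', 'G', or 'B') covering each pixel."""
    cell = [["G", "R"],
            ["B", "G"]]  # two greens, one red, one blue per 2x2 block
    return [[cell[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

mask = bayer_mask(4, 4)
for row in mask:
    print(" ".join(row))

# Counting the filters confirms the 50% green, 25% red, 25% blue split:
flat = [f for row in mask for f in row]
print({color: flat.count(color) / len(flat) for color in "RGB"})
# -> {'R': 0.25, 'G': 0.5, 'B': 0.25}
```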
While the Bayer filter array is effective, it does create a problem – the appearance of an unwanted effect called moiré.
What Is Moiré?
Moiré occurs when photographing a regular pattern with detail as fine as, or finer than, the pixel grid of the camera’s sensor can resolve.
Common instances in which moiré can be seen are when photographing brick walls from a distance, fabrics, or display screens. If the pattern being photographed misaligns with the grid created by the color filter array, wavy patterns of false color and banding appear, as illustrated in Figure 3.

Figure 3: An example of moiré. For illustrative purposes only.
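Under the hood, moiré is a form of aliasing, and a simple one-dimensional sketch shows the mechanism: a stripe pattern slightly too fine for the pixel grid produces exactly the same samples as a coarser pattern, so the sensor records detail that isn’t really there. The pixel count and stripe frequencies below are arbitrary.

```python
# A 1-D illustration of the aliasing behind moire: detail finer than the
# pixel grid can resolve "folds back" into a coarser false pattern.
# Pixel count and stripe frequencies are arbitrary assumptions.
import numpy as np

pixels = 100                                   # one row of 100 pixels (assumed)
x = np.arange(pixels)
resolvable = np.cos(2 * np.pi * 0.45 * x)      # 0.45 cycles/pixel: just resolvable
too_fine   = np.cos(2 * np.pi * 0.55 * x)      # 0.55 cycles/pixel: beyond the limit

# Sampled once per pixel, the too-fine pattern is indistinguishable from
# a 0.45 cycles/pixel pattern (1 - 0.55 = 0.45): a false, coarser readout.
print(np.allclose(resolvable, too_fine))       # True
```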
This was a major problem in the early days of digital photography when sensor resolutions were lower. However, with sensors now enjoying much higher resolutions, moiré is less common.
One way to prevent moiré is by adding an optical low-pass filter to the sensor. Another is to use a different color filter array.
What Is an Optical Low-Pass Filter?
An optical low-pass filter – also known as an anti-aliasing filter – is a filter placed in front of a camera sensor to slightly blur the fine details of the scene being exposed, thereby reducing its resolution to a level below that of the sensor.
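The sketch below shows why blurring before sampling works, using a simple two-point average as a crude stand-in for the filter’s optical effect.

```python
# Why pre-blurring prevents aliasing: averaging neighboring points (a
# crude stand-in for the optical low-pass filter) removes detail too
# fine for the pixel grid before it can fold back as moire.
import numpy as np

fine_detail = np.cos(np.pi * np.arange(12))    # alternating +1/-1: the finest pattern possible
blurred = np.convolve(fine_detail, [0.5, 0.5], mode="valid")   # 2-point average

print(fine_detail[:6])   # [ 1. -1.  1. -1.  1. -1.] -> would alias if undersampled
print(blurred[:6])       # [0. 0. 0. 0. 0. 0.]       -> removed before it can alias
```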
Optical low-pass filters can certainly be effective in preventing moiré, but this comes at a cost.
Although the effects of the filter are so slight that they are invisible to many everyday photographers, blurring inevitably equates to a reduction in sharpness. This is undesirable for many professionals, and is one of the reasons Fujifilm developed the X-Trans color filter array.
X-Trans Color Filter Array

Figure 4: The FUJIFILM X-Trans color filter array. For illustrative purposes only.
The X-Trans color filter array was introduced in 2012 with the release of FUJIFILM X-Pro1.
Made up of approximately 55% green, 22.5% red, and 22.5% blue filters, it creates proportions of red, green, and blue pixels similar to those of the Bayer array. But it uses a more complicated 6×6 arrangement, composed of differing 3×3 patterns.
Using a less uniform pattern helps reduce moiré, eliminating the need for an optical low-pass filter and in turn creating sharper images.
Sensor resolutions have risen dramatically since the 16-megapixel X-Trans CMOS sensor in X-Pro1, making it less likely for moiré to occur. As a result, optical low-pass filters have all but disappeared – though increased image sharpness is not the only potential advantage of the X-Trans color filter array.
Every vertical and horizontal line in an X-Trans CMOS sensor includes a combination of red, green, and blue pixels, while every diagonal line includes at least one green pixel. This helps the sensor reproduce color more accurately.
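The sketch below encodes one commonly published representation of the 6×6 layout (an assumption based on public diagrams rather than an official Fujifilm specification) and checks the proportions and the row and column properties described above.

```python
# One commonly published representation of the 6x6 X-Trans layout (an
# assumption based on public diagrams, not an official specification),
# with checks of the properties described above.
XTRANS = ["GBGGRG",
          "RGRBGB",
          "GBGGRG",
          "GRGGBG",
          "BGBRGR",
          "GRGGBG"]

flat = "".join(XTRANS)
print({c: round(flat.count(c) / len(flat), 3) for c in "RGB"})
# -> {'R': 0.222, 'G': 0.556, 'B': 0.222}: roughly the stated proportions

# Every row and every column contains all three colors:
print(all(set(row) == set("RGB") for row in XTRANS))        # True
print(all(set(col) == set("RGB") for col in zip(*XTRANS)))  # True
```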
Additionally, the less uniform pattern is closer to the random arrangement of silver halide grains in analog photographic film, which contributes to Fujifilm’s much-loved film-like look.
Demosaicing
As covered above, a single pixel can only record a single value. But if you zoom into a digital image, each individual pixel can contain a mixture of colors, rather than just the red, green, or blue allowed by the color filter array.
So, how are the other colors added? And how does the camera know the correct amount to use?
The answer is a process called demosaicing, in which a demosaicing algorithm predicts the missing color values for an individual pixel based on the strength of the color recorded by the pixels that surround it.
This is done automatically by the camera’s built-in processor, which then saves the result in a viewable image file format such as JPEG or HEIF.
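As a concrete illustration, here is a minimal sketch of the simplest approach, bilinear interpolation, filling in a single missing green value. Real demosaicing algorithms are considerably more sophisticated, and the sensor readings below are invented for the example.

```python
# The core idea of demosaicing, shown with the simplest method (bilinear
# interpolation): a missing color at a pixel is predicted by averaging
# the nearest pixels that did record it. Sensor values are made up.
import numpy as np

# Raw values under a GR / BG Bayer mosaic (illustrative numbers):
raw = np.array([
    [120, 200, 118, 196],   # G R G R
    [ 60, 122,  64, 119],   # B G B G
    [121, 201, 117, 198],   # G R G R
    [ 61, 118,  63, 121],   # B G B G
], dtype=float)

# Pixel (1, 2) sits under a blue filter, so its green value was never
# recorded. Average the four green neighbors to predict it:
r, c = 1, 2
green = (raw[r - 1, c] + raw[r + 1, c] + raw[r, c - 1] + raw[r, c + 1]) / 4
print(green)   # 119.0 -> the predicted green value for that pixel
```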
In many cases, such as photographing on a smartphone, that is the end of the process. However, most mirrorless cameras have the ability to save images in RAW format, providing photographers with more options.
As the name suggests, a RAW file contains the raw image data before any demosaicing has taken place. This allows photographers to demosaic images using external software such as Capture One.
Different types of software use distinct demosaicing algorithms, each offering unique aesthetics. An obvious advantage of this is that photographers can choose their personal preference, but the benefits of creating in RAW format extend much further.
Advantages of RAW Files
File types such as JPEG and HEIF are designed to make image files easily portable, so significant compression takes place to achieve the smallest possible file sizes.
During the compression process, a large amount of tonal and color information read by the sensor is lost. Less information means lower quality and, in turn, restricted freedom to edit.
RAW files, by contrast, retain a wider dynamic range and a broader range of color information, which allows for more effective exposure correction and color adjustments.
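Some quick arithmetic shows the scale of the difference. Exact bit depths vary by camera, but 8 bits per channel for JPEG and 12 or 14 bits for RAW files are typical.

```python
# Tonal levels per color channel at typical bit depths (actual RAW bit
# depth varies by camera; 12 and 14 bits are common).
for name, bits in [("8-bit JPEG", 8), ("12-bit RAW", 12), ("14-bit RAW", 14)]:
    print(f"{name}: {2 ** bits:,} levels per channel")
# 8-bit JPEG: 256 | 12-bit RAW: 4,096 | 14-bit RAW: 16,384
```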
Evolution of the CMOS Sensor
While the basic operation of the CMOS sensor has remained fundamentally the same throughout its history, its design has evolved to maximize efficiency and speed.
Back-Side Illuminated Sensor

Figure 5: Cross section of a front-side illuminated vs back-side illuminated CMOS sensor. For illustrative purposes only.
In the case of the original front-side illuminated (FSI) sensor design, all the wiring and circuitry necessary for storing, amplifying, and transferring pixel values runs along the borders between each pixel. This means light has to travel through the gaps to reach the photodiode beneath.
As its name suggests, the back-side illuminated (BSI) sensor flips this original design around so the light is now gathered from what was its back side, where there is no circuitry.
By removing the obstruction caused by the circuitry, a greater surface area can be exposed to light, allowing the sensor to gather more photons and subsequently maximize its efficiency.
The result is increased sensitivity, lower noise, and ultimately higher-quality images.
Stacked Sensor

Figure 6: Cross section of a stacked CMOS sensor. For illustrative purposes only.
While the BSI sensor design increases quality, the stacked sensor is all about increasing speed.
Until the introduction of the stacked sensor, CMOS sensors operated on a single layer. This meant the signal readouts from each pixel had to travel along strips of wiring all the way to the outside of the sensor before they were processed.
With stacked sensors, processing circuitry is added directly to the back of the sensor, essentially creating a ‘stack’ of chips sandwiched together.
By stacking them in this way, the distance the pixel values have to travel is drastically reduced, resulting in much faster processing speeds.
For example, the X-Trans CMOS 5 HS stacked sensor found in FUJIFILM X-H2S offers four times the readout speed of its predecessor and 33 times the readout speed of the original X-Trans CMOS sensor featured in X-Pro1.
What’s more, because the stacked chips sit behind the sensor and do not obstruct incoming light, it’s possible to keep adding layers, offering huge potential for future developments.
Conclusion
Like any technology, camera sensors have come a long way in the past decade alone, and look to continue this development into the future.
With the move to back-side illumination enabling much higher resolutions and stacked sensors increasing readout speeds so significantly, recent developments amount to nothing short of a revolution in CMOS camera sensor technology.
The door is now open for huge future advances, equipping CMOS sensors with capabilities that simply weren’t possible only a few years ago.
Learn more by exploring the rest of our Fundamentals of Photography series, or browse all the content on Exposure Center for education, inspiration, and insight from the world of photography.