Earth Observation 101: A Beginner's Guide to Satellite Imaging and Technology

Introduction

A picture is worth a thousand words because of the depth of information it carries and the stories its elements convey. But in the end, a picture is only a depiction of an object in some medium, and it is the person behind it who captures the fine details and brings them onto the page.

For centuries, pictures have acted as a critical source of information, helping future generations understand the past. From engravings on walls to cloth and paper, objects and events have been captured in many ways and mediums. Today, the medium has evolved in both technology and purpose: it is no longer just about documenting events, but about monitoring real-time activities around the globe.

For such applications, it is the design of the camera and its components that matters. By analogy, if an event in the past was meant to be remembered for generations, the painter would not focus on the paint first, but on the quality of the cloth or board on which the subject would be painted.

The format of visualisation leads to the format of technology.

Similarly, to document and monitor real-time activities around the globe, the focus falls first on the camera, its components and electronics, and only then on the rest, the satellite bus. While many can imagine, and will have witnessed, the process of painting, few have seen the process of designing an Earth Observation (EO) satellite.

It all begins with the ask, the purpose of launching the satellite in the first place. For instance, if it is meant for strategic purposes, i.e. defence, the ask would be a very high-resolution imaging capability. But that is just scratching the surface, because a layer below resolution sits the clarity of the objects and subjects the EO satellite will capture, and that is where the design of the camera and its sensors begins.

There are many technical aspects to designing an optimal camera system, and among them is a set of layers that dictates the quality of an image in terms of the depth of information it can carry.

Journey of a payload designer through various degrees

It is always the end-users and their requirements that dictate the design of an EO satellite system. Hence, it is of significant importance to understand the nature of the end-user and their asks regarding the type of image. But before we dissect the requirements, what does a typical customer demand look like? In simple terms, the requirements can range from detecting or identifying an object on the ground to assessing the health of a crop or the presence of rare-earth minerals in a region of interest.

On the same note, we will see how the degree of complexity grows as the demand for finer detail increases, from mere identification to full characterisation of an object. This progression shapes the design of an ideal EO optical payload, and below we dive deep into each question and the metrics through which it is met.

Clarity Seekers - Spatial Resolution

Given a strategic use case, the first thing a user will ask is what image resolution the satellite can provide, typically 50 cm at most. That requirement calls for clarity consisting of, first, visibility and, second, the definition of the object, delivered from a satellite orbiting at 400+ km.

With that resolution, the main requirement is to distinguish between all the objects in the image: differentiate the car from the road, tell the trees apart from the building, and observe the various colours of the subjects in the scene.

If the image resolution is around 30 cm or finer, then you certainly can. But what does that magnitude of resolution entail? It is a combination of a fine GSD (Ground Sample Distance) and a high MTF (Modulation Transfer Function), which together enable the following metrics (a worked GSD example follows the list below):

  • Sharp details and edges: The combination of fine GSD and high MTF leads to images with sharp details and edges. This is particularly important in applications such as surveillance, where identifying small objects or changes in the environment can be critical. For example, detecting unauthorized vehicles in restricted areas or monitoring changes in land use requires high-resolution imagery that can capture these details accurately.

  • Fine spatial resolution: This level of detail is crucial for applications requiring precise measurements, such as urban planning and infrastructure development. The ability to distinguish between objects, such as vehicles on a road or individual trees in a forest, is directly tied to the GSD. Higher resolution allows for more accurate spatial analysis and decision-making.

  • Accurate colour representation: High MTF not only contributes to spatial resolution but also ensures true-to-life colour fidelity in the imagery. Accurate colour representation is essential for applications such as agricultural monitoring, where different crops may require specific colour analysis for health assessment. The ability to discern between various shades and tones helps in making informed decisions regarding crop management and environmental monitoring.
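
As a rough illustration of where such numbers come from, the nadir GSD follows from simple geometry: pixel pitch multiplied by orbital altitude, divided by focal length. Below is a minimal sketch; the pitch, altitude, and focal-length values are illustrative assumptions, not the specifications of any particular satellite.

```python
# A minimal sketch of the nadir ground sample distance (GSD) relation:
# GSD = pixel pitch * altitude / focal length.
# All numbers are illustrative assumptions, not real satellite specs.

def gsd_m(pixel_pitch_m: float, altitude_m: float, focal_length_m: float) -> float:
    """Ground footprint of a single detector pixel for a nadir-looking imager."""
    return pixel_pitch_m * altitude_m / focal_length_m

# Example: a 5.5 um pixel at 500 km altitude behind a 5.5 m focal length telescope
print(gsd_m(5.5e-6, 500e3, 5.5))  # -> 0.5, i.e. 50 cm-class imagery
```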

(Figure: an example of the MTF metric and its impact.)

Now that the task of seeking clarity is met, the next concern is the ability to view the same subjects in the dark, that is, at night or in low-light conditions, which covers the majority of scenarios users request. That is the second ask, and it gives the user all-round vision.

All-round vision: seeing in the dark

The presence of light is crucial for an instrument or a person to visualise an object. When the intensity of light is very low, it becomes very difficult to see. Still, objects can be viewed: in nature, a few animals can see in low light, and humans use instruments, most commonly night vision goggles, to view their surroundings in the near-absence of light.

So how do night vision goggles, or animals like cats and owls, see in the dark? The answer lies in their eyes (for animals) and sensors (for night vision goggles). Light consists of particles called photons, and at any given time of day, some photons are always present.

The ability to see is the ability to absorb and detect those photons through the eyes. The same applies to the sensor of any imaging instrument.

For example, when shooting a photo or recording a video at night on a mobile phone, one can observe white speckles spread across the frame. That is noise; in short, it appears because the phone's camera sensor is not large enough to capture enough of the available photons in the surroundings, so noise dominates the image.

In the EO industry, this is captured by the SNR, or Signal-to-Noise Ratio, a measure of the strength of the desired signal (the light reflected from the Earth's surface) compared to the background noise. A high SNR means that the sensor can effectively detect and record the photons reflected from the target, resulting in clear, low-noise images.

One of the key factors that influence SNR is the size of the sensor's pixels. Larger pixels have a greater surface area, allowing them to capture more photons and thus increasing the signal strength. This is why high-resolution Earth observation satellites often use large sensors with relatively large pixels.

For example, the Landsat 8 satellite has a panchromatic band with a resolution of 15 meters, while the multispectral bands have a resolution of 30 meters. The panchromatic band integrates light across a broad spectral range, enabling each pixel to collect more light and achieve a higher SNR.

Another important factor is the size of the sensor's aperture, which controls the amount of light that reaches the sensor. A larger aperture allows more photons to enter, increasing the signal strength and improving the SNR. This is why night vision goggles often have large, bulky lenses - to maximize the amount of light that reaches the sensor.
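
To make this concrete, here is a toy shot-noise-limited SNR model. The flux, pixel-pitch, and read-noise figures are assumptions chosen purely for illustration; a larger aperture would act by raising the photon flux, since the light collected scales with aperture area.

```python
import math

# A toy model of why larger pixels (and apertures) raise SNR.
# Shot-noise-limited: SNR = S / sqrt(S + N_read^2), with S the photons
# collected per pixel. All numbers are illustrative assumptions.

def snr(flux_photons_per_um2: float, pixel_pitch_um: float,
        read_noise_e: float = 5.0) -> float:
    signal = flux_photons_per_um2 * pixel_pitch_um ** 2  # photons per pixel
    noise = math.sqrt(signal + read_noise_e ** 2)        # shot + read noise
    return signal / noise

# Doubling the pixel pitch quadruples the collecting area per pixel:
print(snr(10.0, 5.0))   # smaller pixel -> SNR ~ 15
print(snr(10.0, 10.0))  # larger pixel  -> SNR ~ 31, roughly 2x better
```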

With this capability on a given high-resolution EO satellite, applications like surveillance, construction monitoring, and maritime awareness can be served.

Now, with better resolution and a good SNR score, which lets you not only identify and define an object but also view it in night-time conditions, you may want to understand the finer details of the object.

For instance, you can view a forest at night at better resolution, but what if you want to know the health of the forest? Is it healthy? Is it dying? This is where imaging instruments on satellites carry spectral bands that detect not just light, but a characteristic of light: its wavelength. Light, after all, is part of the larger electromagnetic spectrum.

What's your wavelength?

  1. How do you want to view the area of your interest?
  2. What do you want to understand within the area of your interest?

Both questions can be answered simply by receiving light in different brackets ranging from 400 nm to 15,000 nm. Within this bracket lies the entire Earth Observation industry. Roughly, the breakdown of the brackets and their applications is as follows:

  • Visible (VIS), ~400-700 nm: true-colour imaging, mapping, and surveillance
  • Near-infrared (NIR), ~700-1,100 nm: vegetation health and water-boundary mapping
  • Shortwave infrared (SWIR), ~1,100-2,500 nm: mineral identification, soil moisture, and fire detection
  • Midwave infrared (MWIR), ~3,000-5,000 nm: hot targets such as fires and flares
  • Longwave (thermal) infrared (LWIR), ~8,000-15,000 nm: surface temperature and night-time thermal imaging

In short, the entire Earth Observation industry lies within the 400 nm - 15,000 nm wavelength bracket. Of this, the commercial application bracket sits primarily between 400 nm and 700 nm, while the remaining wavelengths up to 15,000 nm are used extensively by and for military applications.

Considering just the visible spectrum and the most used bands, Red, Green, Blue, and Panchromatic, each of them has a different resolution. For example, Landsat 8's PAN band has a resolution of 15 meters, while the RGB bands have a resolution of 30 meters.

Sensor Design and Pixel Size

Why so? The PAN band typically captures a broader range of wavelengths (approximately 400 to 700 nm) and combines the intensity of all visible light into a single channel. This allows for a higher spatial resolution because the sensor can utilize more light energy from a wider spectral range, resulting in smaller pixel sizes and finer detail.

In contrast, the RGB bands capture specific narrow wavelength ranges (red: 610-700 nm; green: 500-550 nm; blue: 450-500 nm). Each of these bands collects less light, so it requires a larger pixel size to gather enough light energy for effective imaging, resulting in lower spatial resolution compared to the PAN band.

Light Energy Capture

The PAN band, by integrating light across the entire visible spectrum, benefits from a higher total intensity of light per pixel. This means that the sensor can achieve better detail and clarity at a smaller pixel size. In contrast, the RGB bands, which sample specific wavelengths, have less light energy available per pixel, necessitating larger pixels to maintain image quality, which leads to lower resolution.
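
As a toy illustration of this point, assume, purely for simplicity, a flat photon flux across the visible range; the PAN band then collects several times more light per unit pixel area than any single narrow band:

```python
# Toy comparison of per-pixel light collection for PAN vs. a narrow band.
# Assumes a flat spectral photon flux across 400-700 nm (an illustrative
# simplification, not a real radiometric model).

FLUX_PER_NM = 1.0  # arbitrary units: photons per nm of bandwidth

pan_signal = FLUX_PER_NM * (700 - 400)  # PAN integrates ~300 nm
red_signal = FLUX_PER_NM * (700 - 610)  # the red band integrates ~90 nm

print(pan_signal / red_signal)  # ~3.3x more light per unit pixel area for PAN
```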

Hence a high-resolution multispectral or colour image can be achieved through two methods:

Innovating at the image level

Pansharpening

Pansharpening is a widely used technique that combines high-resolution panchromatic imagery with lower-resolution RGB bands to create a higher-resolution colour image (a minimal sketch follows this list). The process involves:

  • Using a Panchromatic Band: If available, a panchromatic band (which typically has a higher resolution, such as 15 m) can be fused with the RGB bands to enhance the spatial resolution.
  • Pan-sharpening Methods: Techniques like Gram-Schmidt pan-sharpening or the Brovey transformation can be applied to effectively merge the data, resulting in a higher-resolution RGB image that retains the colour information while improving spatial detail.
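
Below is a minimal sketch of the Brovey transformation, assuming the RGB bands have already been resampled onto the panchromatic grid; the array names, value range, and random stand-in data are illustrative, not a production pipeline.

```python
import numpy as np

# A minimal sketch of Brovey pan-sharpening. Inputs are float arrays in
# [0, 1] on the PAN grid; eps avoids division by zero.

def brovey(pan: np.ndarray, r: np.ndarray, g: np.ndarray, b: np.ndarray,
           eps: float = 1e-6) -> np.ndarray:
    intensity = (r + g + b) / 3.0    # synthetic intensity of the colour image
    ratio = pan / (intensity + eps)  # spatial detail PAN adds per pixel
    return np.stack([r * ratio, g * ratio, b * ratio], axis=-1)

# Example with random stand-in data at the PAN grid size:
rng = np.random.default_rng(0)
pan, r, g, b = rng.random((4, 512, 512), dtype=np.float32)
print(brovey(pan, r, g, b).shape)  # (512, 512, 3)
```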

Super-Resolution Techniques

Super-resolution methods, particularly those utilizing deep learning, can enhance the resolution of RGB images:

  • Convolutional Neural Networks (CNNs): These networks can be trained to upscale low-resolution RGB images by learning the relationship between low-resolution and high-resolution image pairs. This approach can significantly improve the spatial resolution, potentially achieving 20 cm resolution from lower-resolution RGB data (a minimal network sketch follows this list).
  • Auxiliary Tasks: Combining RGB super-resolution with other tasks, such as hyperspectral image super-resolution, can provide additional training data, improving the overall performance of the model.
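
As one possible illustration, here is a minimal sketch of an SRCNN-style network in PyTorch. The layer sizes follow the commonly cited SRCNN defaults; the model would still need training data, a loss function, and an optimizer before it could actually super-resolve imagery.

```python
import torch
import torch.nn as nn

# A minimal sketch of an SRCNN-style super-resolution network.
# The input is a bicubically upsampled low-res image; the network
# learns to restore high-frequency detail.

class SRCNN(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),        # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

# Usage: bicubic-upsample the low-res RGB tile, then refine it.
lr_tile = torch.rand(1, 3, 128, 128)
upsampled = nn.functional.interpolate(lr_tile, scale_factor=2, mode="bicubic")
print(SRCNN()(upsampled).shape)  # torch.Size([1, 3, 256, 256])
```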

Synthetic Panchromatic Band Creation

If a true panchromatic band is not available, a synthetic panchromatic band can be created by averaging the RGB bands. This synthetic band can then be used in pansharpening techniques to enhance the resolution of the RGB imagery.
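
A minimal sketch of that averaging, assuming three co-registered RGB arrays of identical shape; a plain mean is used here, though weighted (luminance-style) combinations are a common alternative.

```python
import numpy as np

# A minimal sketch of a synthetic panchromatic band built from RGB.
# Assumes co-registered bands of identical shape.

def synthetic_pan(r: np.ndarray, g: np.ndarray, b: np.ndarray) -> np.ndarray:
    return np.stack([r, g, b]).astype(np.float32).mean(axis=0)

rng = np.random.default_rng(0)
r, g, b = rng.random((3, 256, 256))
pan = synthetic_pan(r, g, b)
print(pan.shape)  # (256, 256)
```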

Innovating at the sensor level

High-Resolution RGB Sensors

Investing in advanced imaging sensors that are specifically designed to capture high-resolution RGB imagery can significantly improve spatial resolution. This includes:

  • Using High-Quality Lenses: High-performance optics can enhance the clarity and detail of images captured by RGB sensors, allowing for finer resolution.

  • Multi-Sensor Arrays: Employing multiple RGB sensors in a single platform can help capture more detailed images simultaneously, effectively increasing the overall resolution.

Integration of NIR and RGB Sensors

Combining RGB sensors with near-infrared (NIR) sensors can enhance the overall imaging capability:

  • RGB-NIR Cameras: These cameras capture both RGB and NIR data, allowing for better vegetation analysis and other applications (an NDVI sketch follows this list). The integration can improve the spectral information while maintaining high spatial resolution.

  • Sensor Fusion: Utilizing both RGB and NIR data in a single imaging system can provide richer datasets that enhance the detail and accuracy of the resulting images.
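
As an example of what such fused data enables, here is a minimal sketch of NDVI, the classic vegetation index computed from co-registered NIR and red reflectance arrays; the random inputs are placeholders for real imagery.

```python
import numpy as np

# A minimal sketch of NDVI = (NIR - Red) / (NIR + Red).
# Values near +1 suggest dense, healthy vegetation; eps avoids
# division by zero.

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)

rng = np.random.default_rng(0)
nir, red = rng.random((2, 256, 256))
print(ndvi(nir, red).mean())  # placeholder data, so roughly 0
```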

Conclusion

The stages an EO payload designer goes through span a wide range of complexity, from the broad architecture of the lens system to choosing the right sensors to reproduce subjects and their true colour signatures. While much of this work is being done through innovation at the image level, the industry today is going a notch further and designing EO payloads in ways that reduce the effort spent on fine-tuning images after capture.

This is a design theory in which I have defined the metrics one must tick off before venturing into component-level engineering. With each stage, the understanding of the payload architecture gets finer and finer, to the point where, at the end, an engineer can estimate the key design factors outlined above.

The next step is to design and plan the subsystems, accommodating the various sensors, electronics, and optical systems, followed by the successive stages of configuration.
