Planned satellite constellations may swamp future orbiting telescopes

Bernedoodle

Seniorius Lurkius
37
Subscriptor++
It's not just about having the small telescopes far apart. It's also about keeping their positions stable to within a small fraction of the wavelength of light you're going to image. Orbital interferometers are being considered for an orbital version of LIGO (LISA), but even that's a rudimentary first step toward what's required to achieve imaging that's diffraction-limited by such a large separation.

“Our team in Cambridge has, however, recently commissioned the world's first optical aperture synthesis telescope. The instrument, called COAST, currently incorporates five 40cm telescopes arranged in a `Y' configuration, with a maximum telescope-telescope separation of approximately 22m. Up to four of the telescopes are used simultaneously. COAST already produces images showing several times more detail than the Hubble Space Telescope at less than a thousandth of the cost, and has been designed to operate with the telescopes up to 100m apart allowing another factor of 10 improvement in the sharpness of its images.”

“The total cost of the telescope was £850,000.”

I’m optimistic about what a space-based version could do, considering the lack of sound, wind, seismic waves, and atmospheric distortions.
 
Upvote
4 (5 / -1)

MilanKraft

Ars Tribunus Angusticlavius
6,817
The only silver linings here (and my immediate thought before reading the article) are that Hubble is nearing the end of its service life, and that Webb is out at L2, operating perfectly and far from the orbital "pollution" of Musk, the Chinese, and others. It's much harder to do on a technical level, but I think Lagrange-point orbits are going to be the only viable solution for some future space telescopes. I'm also curious how badly all these (let's face it: mostly unnecessary) constellations are going to impact ground-based facilities like the VLT and the upcoming ELT. What an absolute travesty if the reckless commercialism putting these things into orbit even partially hamstrings the science done (and the beauty of the imagery captured) by these programs. My misanthropic tendencies grow by the week.
 
Upvote
-6 (4 / -10)

pagh

Ars Praetorian
530
Subscriptor++
The discussion here is for new observatories. Rather than set aside orbits for science, why not give them all of space above commercial orbits? If you send your telescopes to 1,000 km you're going to have fewer satellites above you than Hubble has had to deal with for almost all of its life.
New observatories can solve the problems they will encounter at launch time. They can anticipate some of the problems that might happen in the years immediately after they launch. They can't anticipate all the problems that they might encounter over their lifetimes. Today's constellations are degrading the scientific value of today's observatories. Tomorrow's commercial space activity will probably degrade the scientific value of tomorrow's observatories.

It's the job of legislators and regulators to balance the value of commercial and scientific activities, and to adequately restrict commercial activities (through regulation, taxation, or other means) that significantly impact other societal priorities. They don't often do that job very well, but in a functioning society, that's what they should be doing.
 
Upvote
13 (15 / -2)

Smeghead

Ars Praefectus
4,630
Subscriptor
Beyond S/N ratio concerns associated with lots of short-duration exposures vs fewer long-duration ones, there are fixed costs in taking an exposure from a time perspective.

It takes time to sample the pixels from a sensor. You can try to go faster and speed up the process, but there's always a limit beyond which horrible things happen to the image produced. There are also a bunch of other overheads on every exposure unique to each system.

While reading out a sensor, the shutter typically must remain closed. For a CCD, readout involves physically moving charge around on the device: each pixel's accumulated charge is marched across the device's structure to one or more outputs, where the voltage associated with that charge is sampled into digital data. Leaving the shutter open results in trailed images, because light keeps falling on the sensor during the marching process and smears the charge.

CMOS devices tend to have on-device active circuitry that is addressed and sampled without needing to slosh charge around, but leaving the shutter open while reading out would result in a non-uniform exposure time across the image.

IR devices tend to get even weirder in their operation and usage, but still have readout times that cannot be reduced past a certain point.

As a real-world example, the time to sample the complete image from the CCDs of Pan-STARRS' two Gigapixel cameras is about 7 seconds. Once converted, the digital pixel data can be more or less transferred within the shadow of the readout time (minus the last little chunk of pixels). The network is quite parallel and is largely not a bottleneck.

There's also another second and change of dead time per exposure associated with just physically moving the shutter blades. Other various smaller operations before and after the shutter actuation nibble a few tenths here and there, all of which add up.

(source: me)

Someone mentioned 600 1-second exposures vs a single 600-second exposure. In the former case, a 1-second exposure with about 10 seconds of operations that have to complete before the shutter can be opened again would be crippling from an efficiency perspective. The observatory would effectively spend all of its time doing nothing.
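
To put rough numbers on that (the ~10 seconds of per-exposure overhead below is just an illustrative figure built from the readout and shutter times above, not a spec for any particular system):

```python
def observing_efficiency(exposure_s, overhead_s):
    """Fraction of wall-clock time actually spent collecting photons."""
    return exposure_s / (exposure_s + overhead_s)

# Assumed ~10 s of readout/shutter/setup overhead per exposure (illustrative only)
OVERHEAD_S = 10.0

for exposure_s in (1, 30, 300, 600):
    eff = observing_efficiency(exposure_s, OVERHEAD_S)
    print(f"{exposure_s:>4} s exposures: {eff:6.1%} open-shutter efficiency")

# 600 one-second exposures come out around 9% efficient: roughly 110 minutes of
# wall-clock time to collect the same 600 s of photons that a single long
# exposure (plus one overhead) gathers in a bit over 10 minutes.
```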
 
Upvote
11 (11 / 0)

Chuckstar

Ars Legatus Legionis
37,290
Subscriptor
It's not just about having the small telescopes far apart. It's also about keeping their positions stable to within a small fraction of the wavelength of light you're going to image. Orbital interferometers are being considered for an orbital version of LIGO (LISA), but even that's a rudimentary first step toward what's required to achieve imaging that's diffraction-limited by such a large separation.
There are key differences between LISA’s interferometry and using interferometry to observe light sources:

1) The LISA satellites don’t need to know the distances between them. They just rely on watching the change in distance.

2) The LISA satellites can use nice bright lasers for their interferometry. Telescope interferometry has to use the incoming photons from the distant object for the interferometry.
 
Upvote
6 (6 / 0)

fenncruz

Ars Tribunus Militum
1,774
And I'm not so convinced that astronomers have actually worked to block the noise yet.
Then you're talking bollocks. Astronomers have spent decades working out how to reduce the impact of satellites.

But it always comes at a cost: either you have to look at an object longer to get the same quality, or you accept lower-quality data for the same observing time. Either way, you're reducing the effectiveness of the observations.

Is the infrastructure being built worth it? I don't know, but why are the astronomers paying the cost and not the multi-billionaires funding these things? Why do they get to externalise their costs onto everyone else over what is a shared resource (space)? As a non-American I've got no say in the regulations that do and do not get applied to things like SpaceX, yet I pay the cost of the loss to the shared resource that is the night sky.
 
Upvote
4 (13 / -9)

Chuckstar

Ars Legatus Legionis
37,290
Subscriptor
Then you're talking bollocks. Astronomers have spent decades working out how to reduce the impact of satellites.

But it always comes at a cost: either you have to look at an object longer to get the same quality, or you accept lower-quality data for the same observing time. Either way, you're reducing the effectiveness of the observations.

Is the infrastructure being built worth it? I don't know, but why are the astronomers paying the cost and not the multi-billionaires funding these things? Why do they get to externalise their costs onto everyone else over what is a shared resource (space)? As a non-American I've got no say in the regulations that do and do not get applied to things like SpaceX, yet I pay the cost of the loss to the shared resource that is the night sky.
The astronomers aren’t the ones paying for the telescopes. Astronomers overwhelmingly use public funds for their work, and you’re complaining that their work might be curtailed by the rollout of large networks that couldn’t/won’t exist if not serving a wide public use. Umm….
 
Upvote
7 (12 / -5)

fenncruz

Ars Tribunus Militum
1,774
The astronomers aren’t the ones paying for the telescopes. Astronomers overwhelmingly use public funds for their work, and you’re complaining that their work might be curtailed by the rollout of large networks that couldn’t/won’t exist if not serving a wide public use. Umm….
As a tax payer, yes I'm happy to pay for astronomers work.
 
Upvote
4 (4 / 0)
Then you're talking bollocks. Astronomers have spent decades working out how to reduce the impact of satellites.

But it always comes at a cost: either you have to look at an object longer to get the same quality, or you accept lower-quality data for the same observing time. Either way, you're reducing the effectiveness of the observations.

Is the infrastructure being built worth it? I don't know, but why are the astronomers paying the cost and not the multi-billionaires funding these things? Why do they get to externalise their costs onto everyone else over what is a shared resource (space)? As a non-American I've got no say in the regulations that do and do not get applied to things like SpaceX, yet I pay the cost of the loss to the shared resource that is the night sky.
On the flip side, why should astronomers get to monopolize that shared resource? What about the externalities they generate?

All human activities create tradeoffs. Here it's the tradeoff between astronomy and commercial and military utility. Prioritizing astronomers' interests over those of everyone else interested in developing space generates its own costs, just as surely as these constellations would (if developed as predicted, a dubious notion already explored earlier in the thread). Do you also expect the astronomers to pay for those costs, as you expect the constellation operators to pay for the astronomers' costs?

You might say that they have more of a right to access because they were there first. I would counter that the locking down of that space is an externality no matter when it occurred.
 
Upvote
1 (11 / -10)

Wickwick

Ars Legatus Legionis
39,919
New observatories can solve the problems they will encounter at launch time. They can anticipate some of the problems that might happen in the years immediately after they launch. They can't anticipate all the problems that they might encounter over their lifetimes. Today's constellations are degrading the scientific value of today's observatories. Tomorrow's commercial space activity will probably degrade the scientific value of tomorrow's observatories.

It's the job of legislators and regulators to balance the value of commercial and scientific activities, and to adequately restrict commercial activities (through regulation, taxation, or other means) that significantly impact other societal priorities. They don't often do that job very well, but in a functioning society, that's what they should be doing.
It's a shared space among multiple countries. If you can get a consensus among all competing interests, then great. However, given the military advantages of distributed sensing, I'm fairly certain that's an impossibility. As such, astronomy needs to figure out how to perform in a compromised environment rather than publishing an article in Nature whining about the problem while exaggerating the total number of satellites by 5x.

Yeah, it's a reality. Yeah, it's going to make seeing difficult. Time to deal. Political arguments aren't going to help.
 
Upvote
8 (14 / -6)

Wickwick

Ars Legatus Legionis
39,919
There are key differences between LISA’s interferometry and using interferometry to observe light sources:

1) The LISA satellites don’t need to know the distances between them. They just rely on watching the change in distance.

2) The LISA satellites can use nice bright lasers for their interferometry. Telescope interferometry has to use the incoming photons from the distant object for the interferometry.
1) I think that for a non-filled aperture one can be multiple cycles off and still achieve focus gains, but that's certainly not my expertise.

2) One can use a common reference beam for interferometry, but only at that wavelength. That seems only remotely useful for very bright sources.
 
Upvote
0 (0 / 0)

pavon

Ars Tribunus Militum
2,319
Subscriptor
It's all about the signal to noise and how different noise sources scale with what you're doing. There's a noise term that comes from simply reading the data out of the CCD. So 600 exposures have 600 times the read noise of 1 long exposure.
That's not entirely true. The noise is random and will thus partially cancel. In theory, stacking N exposures, each with a noise stddev of σ, gives a combined noise stddev of sqrt(N)·σ rather than N·σ. The bigger problem with that particular example is that the CCD has a sensitivity floor, and if an exposure doesn't collect enough photons to cross it, you won't read anything.

But the 600-exposure example was extreme. You can do a lot with stacking a smaller number of exposures. It will require longer collects to make up for the higher noise floor, as well as ensuring you have enough streak-free exposures for each pixel, but in exchange you also have more flexibility for handling other objects (asteroids, etc.) that are moving in the scene.

Edit: For example 4 300-second exposures would have about the same SNR as a single 600-second exposure in most of the image, and would let you remove all the streaks (which would have 3/4 the SNR of the rest of the image). So the takeaway should be that there are image processing methods for dealing with the streaks, but they decrease the amount of data you can collect in a given time. I think most people here understand that but I've seen too many mainstream news articles that would lead you to believe that it makes collecting long-exposure data impossible.
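
A minimal sketch of that tradeoff under the usual CCD noise model (shot noise from source and sky, plus one read-noise hit per exposure); the source, sky, and read-noise values are invented purely for illustration:

```python
import math

def snr(source_rate, sky_rate, read_noise, n_exp, t_exp, npix=1):
    """Point-source SNR for n_exp stacked exposures of t_exp seconds each:
    shot noise from source and sky plus one read-noise term per exposure.
    Rates are in e-/s, read_noise in e- per read, npix is the aperture size."""
    signal = source_rate * n_exp * t_exp
    noise = math.sqrt(signal
                      + sky_rate * npix * n_exp * t_exp
                      + n_exp * npix * read_noise ** 2)
    return signal / noise

SRC, SKY, RN = 0.5, 2.0, 5.0   # made-up illustrative values

print("1 x 600 s :", round(snr(SRC, SKY, RN, 1, 600), 1))
print("4 x 300 s :", round(snr(SRC, SKY, RN, 4, 300), 1))
print("3 x 300 s :", round(snr(SRC, SKY, RN, 3, 300), 1))   # one sub-exposure dropped for a streak
print("600 x 1 s :", round(snr(SRC, SKY, RN, 600, 1), 1))   # read noise piles up per exposure
```

The exact numbers depend entirely on the assumed rates; the qualitative point is that read noise is paid once per exposure, so many very short exposures lose badly while a handful of longer sub-exposures cost relatively little and let you throw away the streaked one.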
 
Last edited:
Upvote
9 (9 / 0)

NetMage

Ars Tribunus Angusticlavius
9,967
So in theory, it should be possible to build a sensor that literally doesn't add any charge to a pixel when a satellite's (known) position comes by?
That would create a situation where pixels in satellite tracks are under-exposed compared to the rest of the pixels, which would create a negative image of the track across any objects bigger than a single pixel, and darken any single-pixel objects relative to the rest of the image.

In other words, it would be just as bad as the satellite tracks.
 
Upvote
3 (4 / -1)

Wickwick

Ars Legatus Legionis
39,919
That would create a situation where pixels in satellite tracks are under exposed compared to the rest of the pixels, which would create a negative image of the track across any objects that were bigger than a single pixel, and darken any single pixel objects relative to the rest of the image.

In other words, it would be just as bad as the satellite tracks.
If it's a 600 second exposure and you had pixels made opaque for 1 second, you're looking at 599/600 the exposure time.

And the actual time a pixel would need to be darkened could be far less than one second. An entire track might be made in a fraction of a second.

Yes, it's fractionally darker, but you still divide each pixel by its actual exposure time. So what you're losing is 1/600 of the SNR.

Edit: And inverse tracks (if they exist at all) are certainly not just as bad as a satellite streak that saturates a line of pixels. Astronomers work to invert the diffraction effects of mirror struts all the time. As long as there's still information there, lots of aberrations can be inverted. But a saturated pixel carries no information except that it received at least the saturation amount of light. Any fine structure beyond that is lost.
 
Last edited:
Upvote
3 (3 / 0)

kwajboy

Smack-Fu Master, in training
91
A one second exposure does not collect near the amount of light/photons as 600s...kind of the whole point of a long exposure.
That doesn't matter. In principle you can stack up short exposures just fine. That's how all the Hubble deep images of various sorts were done. There are two problems, though.

The first is read noise: whenever you read out a CCD, there's a bit of noise in reading the number of electrons. If that noise is larger than the Poisson noise from background light, then you do lose sensitivity. The background in space is very low (that's indeed usually why we go to space, certainly for IR telescopes), so you can quickly get killed by the read noise. I'm not up on space-qualified CCDs, though, so maybe it's solvable.

The second problem is just handling the data. Downloading raw data is often one of the limiting factors for satellites, and you've just upped your required bandwidth by a factor of 600. In low-Earth orbit it may be fine, but it's definitely a major pain to deal with.
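
A quick sketch of when that first problem bites: a sub-exposure is background-limited once the accumulated background charge per pixel exceeds the square of the read noise, so the minimum useful sub-exposure length follows directly (the rates below are invented for illustration):

```python
def min_background_limited_exposure(sky_rate_e_per_s, read_noise_e):
    """Shortest sub-exposure (seconds) for which Poisson noise from the
    background exceeds read noise, i.e. sky_rate * t > read_noise**2."""
    return read_noise_e ** 2 / sky_rate_e_per_s

# Illustrative values only: a very dark space background vs a bright ground site
print(min_background_limited_exposure(sky_rate_e_per_s=0.01, read_noise_e=3.0))  # 900.0 s
print(min_background_limited_exposure(sky_rate_e_per_s=5.0,  read_noise_e=3.0))  # 1.8 s
```

With a dark space background you need sub-exposures of many minutes before read noise stops dominating, which is exactly why chopping a long exposure into many short ones costs sensitivity there.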
 
Upvote
2 (3 / -1)

Wickwick

Ars Legatus Legionis
39,919
If the mask is in a relay-image plane, and/or if it's not in the sharp-focus plane at all, you're going to have edge effects. The challenge is that you're diffracting an unknown pattern; it's invertible, but not perfectly. However, if the mask elements are on the sensor elements, the worst you'll end up with is potentially polluting the pixels on either side of the masked element (assuming the masks are thin with respect to the pixel size). Altering the gain on a per-pixel basis doesn't suffer even that effect.
 
Upvote
1 (1 / 0)

stefan_lec

Ars Scholae Palatinae
996
Subscriptor
Astronomers have stated time and again that they don't want a plethora of telescopes of existing capability. They want a handful of large, expensive flagship observatories that they then have to fight over for access.

Which astronomers? Was there a survey done or something? I kinda wonder if the ones with this attitude are those who don't have trouble getting instrument time.

Plus, I'm not convinced those two goals are mutually exclusive. A more vibrant market of less capable instruments might make it easier to competently build the ginormous every-other-decade projects. Could be a win-win.
 
Upvote
3 (5 / -2)

Wickwick

Ars Legatus Legionis
39,919
Which astronomers? Was there a survey done or something? I kinda wonder if the ones with this attitude are those who don't have trouble getting instrument time.

Plus, I'm not convinced those two goals are mutually exclusive. A more vibrant market of less capable instruments might make it easier to competently build the ginormous every-other-decade projects. Could be a win-win.
These are the discussions held by the National Academies of Sciences during their Decadal Surveys.
 
Upvote
6 (6 / 0)

stefan_lec

Ars Scholae Palatinae
996
Subscriptor
New observatories can solve the problems they will encounter at launch time. They can anticipate some of the problems that might happen in the years immediately after they launch. They can't anticipate all the problems that they might encounter over their lifetimes. Today's constellations are degrading the scientific value of today's observatories. Tomorrow's commercial space activity will probably degrade the scientific value of tomorrow's observatories.

It's the job of legislators and regulators to balance the value of commercial and scientific activities, and to adequately restrict commercial activities (through regulation, taxation, or other means) that significantly impact other societal priorities. They don't often do that job very well, but in a functioning society, that's what they should be doing.

This is another excellent reason why we should be doing less-ambitious instruments, more often.
 
Upvote
1 (4 / -3)

wagnerrp

Ars Legatus Legionis
31,800
Subscriptor
That doesn't matter. In principle you can stack up short exposures just fine. That's how all the Hubble deep images of various sorts were done.
The UDF was 400 orbits, two exposures per orbit, and 1200s per exposure. So stacking and long exposures. JWST exposures are supposed to be up to six hours long.
 
Upvote
6 (6 / 0)
The telescopes should be launched to higher orbits. They're discussing asteroid spotting and the need to observe near the horizon at dawn to capture asteroids in orbits similar to Earth's. That means it's a bad design. They should be in high orbit, or somewhere like L1 inside Earth's orbit, rather than one so obviously ill-suited to their needs.

"Oh, but it costs more to get there, and it costs more to communicate with there."

Cheaper launches are a thing, and they're getting cheaper continuously. Laser communications are a thing, and no longer require dedicated time on the DSN.
This.

Especially this, since neither Xuntian nor ARRAKHIS has been launched yet. I guess it would be hard to make the mods for Xuntian, which is designed to be serviced periodically by Tiangong, but ARRAKHIS isn't launching until 2030.

That said, astronomers are going to have to live with satellite interference forever, and the problem will only get worse. So let's describe the problem in a bit more detail:

Research-grade telescopes use CCDs as the means by which photons coming through the optics are harvested and processed. CCDs have superior signal-to-noise, and the fact that every pixel is measured by a single analog-to-digital converter makes calibration much easier. But CCDs are an enormous pain, for several reasons:

1) There's no electronic "shutter" for a CCD. It requires an actual mechanical shutter, which is rolled out of the way after the CCD has been cleared and re-calibrated for noise. As others have pointed out, there's some degree of noise associated with the clear / recalibrate / roll-back process, so going from one long exposure to many short ones reduces the SNR.

2) CCD pixels aren't bit-addressable. If they were, you could plot the path of a satellite across the field, read out the values just before the bird reached the pixel, wait for it to pass, clear the pixel, and continue accumulating photons. But you can't do that: CCDs are read by shifting them, column by column, into a device that then reads each pixel in the column through a single A-to-D.

3) CCDs can saturate. Each photon that hits a pixel generates additional charge in the pixel's "well", and only a certain amount of charge can be held in the well. When that capacity is exceeded, not only will additional photons not be recorded, but the well will overflow into neighboring pixels.

4) CCD saturation can also leave trails as the columns are shifted in for readout. This spoils perfectly good pixels in columns farther upstream from a saturated pixel.

The biggest problem with satellites is that they're extremely bright, which vastly increases the chance of saturating pixels, which, as you can see, is very bad. Once a pixel saturates, it's worthless for any kind of photometry--and may make other upstream pixels worthless as well. No amount of compositing of images will fix this.

But limiting the number of satellites, or even moving the telescope out of the way of the satellites, is a solution that's like swatting a fly with a nuclear weapon.

I am not a semiconductor engineer, but the proper answer is to innovate to improve the detector technology so it meets the requirements of living in an environment where bright things slide across the field of view at predictable times. CCDs may be the lowest-noise solution, but everything else about them is godawful. CMOS is worse in terms of SNR and calibration, but it's largely bit-addressable and controllable, which solves most of the problems above.

Another possibility is to magick up some fix for CCDs so they can be stopped from collecting light at well-defined times and then resume collecting when told to. CCD detectors for microscopy have "gutters" between the pixels, so that saturated pixels shed the saturated charge into the gutters, preventing them from saturating and contaminating nearby pixels. If you could set the saturation level at which the gutters started hoovering up charge, you could set it to the current level just before the satellite was due to hit the pixel, wait for the saturation charge to bleed away, and then change the level back to its maximum saturation value.

This is complicated; it requires a bit-addressable CCD, with some kind of magic to latch the gutter values at the right levels. Also, the gutters will reduce the resolution of the detector, which is a big deal. But even a lower resolution would be a big improvement for a variety of kinds of photometry and spectroscopy.

If that doesn't work... Fly, meet nuclear weapon. Lots of super-heavy lift vehicles to choose from for launching telescopes.
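
To make points 2-4 concrete, here's a deliberately crude toy model of a single CCD column: charge above the full well "blooms" into neighbouring pixels, and a shift-register readout in which each transfer leaves a little charge behind, so a saturated packet drags a trail through the pixels read out after it. Nothing here corresponds to a real device; the numbers are invented:

```python
FULL_WELL = 100_000   # e-, assumed full-well capacity (illustrative only)

def bloom(charge, full_well=FULL_WELL):
    """Toy blooming: charge above the full well spills equally into the two
    neighbouring pixels; repeat until nothing exceeds the full well."""
    charge = [float(c) for c in charge]
    spilled = True
    while spilled:
        spilled = False
        for i, c in enumerate(charge):
            excess = c - full_well
            if excess > 0:
                spilled = True
                charge[i] = full_well
                if i > 0:
                    charge[i - 1] += excess / 2
                if i + 1 < len(charge):
                    charge[i + 1] += excess / 2
    return charge

def read_out(charge, smear=0.02):
    """Toy shift-register readout: every clock, each packet moves one pixel
    toward the single output and leaves a fraction `smear` of its charge
    behind, where it is swept up by the packet that follows it. A saturated
    packet therefore smears a trail across everything read out after it."""
    charge = [float(c) for c in charge]
    measured = []
    for _ in range(len(charge)):
        measured.append(charge[0])                        # packet at the output gets digitized
        shifted = [0.0] * len(charge)
        for i in range(1, len(charge)):
            shifted[i - 1] += (1.0 - smear) * charge[i]   # moves one step toward the output
            shifted[i] += smear * charge[i]               # residue left behind
        charge = shifted
    return measured

# A column of faint sky pixels with one pixel blasted by a bright satellite:
column = [200.0] * 10
column[3] = 450_000.0                  # far past the full well

exposed = bloom(column)                # pixel 3's neighbours end up pinned at the full well too
print([round(v) for v in read_out(exposed)])   # ...and a decaying trail contaminates the pixels read out later
```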
 
Upvote
5 (6 / -1)

Wickwick

Ars Legatus Legionis
39,919
What? Why would you do that?
To put things back on even footing.

Let me exaggerate to make the point.

Let's assume I have a sensor being exposed for 100 seconds, but I cover the right side of it for 50 seconds and I'm shooting an image of a blank wall of constant intensity.

I would have a two-tone grey image. If the left side is nearly-saturated white, the right side would be a 50% grey. But that's because the exposure time on the right side was half that on the left. So if I divide each pixel by its exposure time, I end up with a constant value on both halves.

Put another way, if I double my exposure time on a normal camera, I expect twice the value in each pixel, right? So if I cover a pixel for 1% of the exposure, I should divide by 0.99 to bring it back to even.

This is how one can combine multiple exposures into a high-dynamic-range image (with some details about floor levels, etc.).
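
A minimal sketch of that normalization, assuming you keep a per-pixel map of how long each pixel actually spent collecting light (the masking bookkeeping itself is hand-waved here):

```python
import numpy as np

def normalize_by_exposure(counts, exposure_map):
    """Convert accumulated counts into a count rate by dividing each pixel by
    the time it was actually exposed, putting partially masked pixels back on
    the same footing as the rest of the image."""
    safe_exposure = np.maximum(exposure_map, 1e-9)        # avoid divide-by-zero
    return np.where(exposure_map > 0, counts / safe_exposure, 0.0)

# Toy example: a 600 s exposure of a uniform 1 count/s scene in which one pixel
# was blanked for 1 s while a satellite passed.
counts = np.array([[600.0, 600.0],
                   [599.0, 600.0]])
exposure = np.array([[600.0, 600.0],
                     [599.0, 600.0]])
print(normalize_by_exposure(counts, exposure))   # uniform 1.0 count/s everywhere
```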
 
Upvote
3 (3 / 0)

Wickwick

Ars Legatus Legionis
39,919
This.

Especially this, since neither Xuntian nor ARRAKHIS has been launched yet. I guess it would be hard to make the mods for Xuntian, which is designed to be serviced periodically by Tiangong, but ARRAKHIS isn't launching until 2030.

That said, astronomers are going to have to live with satellite interference forever, and the problem will only get worse. So let's describe the problem in a bit more detail:

Research-grade telescopes use CCDs as the means by which photons coming through the optics are harvested and processed. CCDs have superior signal-to-noise, and the fact that every pixel is measured by a single analog-to-digital converter makes calibration much easier. But CCDs are an enormous pain, for several reasons:

1) There's no electronic "shutter" for a CCD. It requires an actual mechanical shutter, which is rolled out of the way after the CCD has been cleared and re-calibrated for noise. As others have pointed out, there's some degree of noise associated with the clear / recalibrate / roll-back process, so going from one long exposure to many short ones reduces the SNR.

2) CCD pixels aren't bit-addressable. If they were, you could plot the path of a satellite across the field, read out the values just before the bird reached the pixel, wait for it to pass, clear the pixel, and continue accumulating photons. But you can't do that: CCDs are read by shifting them, column by column, into a device that then reads each pixel in the column through a single A-to-D.

3) CCDs can saturate. Each photon that hits a pixel generates additional charge in the pixel's "well", and only a certain amount of charge can be held in the well. When that capacity is exceeded, not only will additional photons not be recorded, but the well will overflow into neighboring pixels.

4) CCD saturation can also leave trails as the columns are shifted in for readout. This spoils perfectly good pixels in columns farther upstream from a saturated pixel.

The biggest problem with satellites is that they're extremely bright, which vastly increases the chance of saturating pixels, which, as you can see, is very bad. Once a pixel saturates, it's worthless for any kind of photometry--and may make other upstream pixels worthless as well. No amount of compositing of images will fix this.

But limiting the number of satellites, or even moving the telescope out of the way of the satellites, is a solution that's like swatting a fly with a nuclear weapon.

I am not a semiconductor engineer, but the proper answer is to innovate to improve the detector technology so it meets the requirements of living in an environment where bright things slide across the field of view at predictable times. CCDs may be the lowest-noise solution, but everything else about them is godawful. CMOS is worse in terms of SNR and calibration, but it's largely bit-addressable and controllable, which solves most of the problems above.

Another possibility is to magick up some fix for CCDs so they can be stopped from collecting light at well-defined times and then resume collecting when told to. CCD detectors for microscopy have "gutters" between the pixels, so that saturated pixels shed the saturated charge into the gutters, preventing them from saturating and contaminating nearby pixels. If you could set the saturation level at which the gutters started hoovering up charge, you could set it to the current level just before the satellite was due to hit the pixel, wait for the saturation charge to bleed away, and then change the level back to its maximum saturation value.

This is complicated; it requires a bit-addressable CCD, with some kind of magic to latch the gutter values at the right levels. Also, the gutters will reduce the resolution of the detector, which is a big deal. But even a lower resolution would be a big improvement for a variety of kinds of photometry and spectroscopy.

If that doesn't work... Fly, meet nuclear weapon. Lots of super-heavy lift vehicles to choose from for launching telescopes.
From a semiconductor point of view, there has to be a single-pixel-column version of a CCD. It might not be trace- or space-efficient, but there cannot be a detector element that can't be built as a stand-alone unit. I'll spot you that the current state of the art doesn't do that, that astronomers don't want to pay the cost of developing that tech, and that it might have a bunch of downsides that have kept it from being appealing. But that doesn't mean it's impossible.
 
Upvote
4 (4 / 0)
I'm a huge supporter of astronomy, both from the ground and from space. However, orbital infrastructure is critical to life on Earth, and its importance is only going to increase. It has to take precedence. Hopefully, technology and budgets will progress to the point that we can place observatories beyond LEO and the interference from satellites.
 
Upvote
3 (6 / -3)
From a semiconductor point of view, there has to be a single-pixel-column version of a CCD. It might not be trace- or space-efficient, but there cannot be a detector element that can't be built as a stand-alone unit. I'll spot you that the current state of the art doesn't do that, that astronomers don't want to pay the cost of developing that tech, and that it might have a bunch of downsides that have kept it from being appealing. But that doesn't mean it's impossible.
Ehhh...

The problem is that CCDs get read out by applying an electric field that slurps all of column 1's pixels into column 0 (the readout column), which then results in each column n receiving column n+1's pixels. So the idea of a standalone column is more complicated.

Note that this is the reason why a saturated pixel leaves a trail of charge behind it as it shifts closer to the readout, which in turn erroneously increases the charge in columns that follow it.

FWIW, I'm betting it's far from impossible. But until the tech is proven out, nobody's gonna design it into a space telescope--or even most terrestrial ones. Since the lead time on a space telescope is at least ten years, there's an intermediate-term problem.

Note that detectors are a significant chunk of the cost of the optical path of a terrestrial scope (Vera Rubin is the poster child for this), so even replacing the detector with a bird-insensitive detector is a major undertaking.

Still, getting the tech working soon is very important. There are thousands of research-grade scopes on Earth. I doubt that launching thousands of scopes to ESL-2 or EML-4/5 is going to be an acceptable solution.
 
Upvote
5 (5 / 0)

Wickwick

Ars Legatus Legionis
39,919
By definition that wouldn't be a CCD. Having a single readout at the end of the row is its defining characteristic.
Hence the term "coupled." Fair enough. Except that the coupling need not be to an active pixel. It could be to a storage pixel next to each live pixel, used to hold the charge while the bright light source passes. Then you can transfer the charge back and continue integrating. When it comes time to read the entire array, you couple along the live pixels.
 
Upvote
2 (2 / 0)

redleader

Ars Legatus Legionis
35,856
Correct me if I'm wrong, but one can adjust the gain across the charge well during the exposure without read noise, right? So in theory, it should be possible to build a sensor that literally doesn't add any charge to a pixel when a satellite's (known) position comes by?
The "gain" in a CCD or CMOS is electronic gain applied after the pixel is read out, by the preamp that drives the ADC pins. Photon counts, dark counts, and read noise have already happened by the time the gain is applied, so the only thing it influences is quantization noise. Incidentally, now that 14- and 16-bit ADCs are common and most pixels hold fewer than 2^14 photons, this is why you don't see gain mentioned as much as in the old days of 8- and 10-bit ADCs on early CCD sensors. The closest thing you can do is read out the pixel (or row of pixels, if a CCD) and then dump the accumulated charge after the object passes.

You can build sensors that do have actual photoelectron gain that do let you black out pixels, but they're not a good choice for astronomy for other reasons.
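
For what it's worth, the standard uniform-quantization result shows why the gain setting barely matters once the ADC has enough bits; the gain value below is just an assumed illustration:

```python
import math

def quantization_noise_e(gain_e_per_adu):
    """RMS quantization noise in electrons for a linear ADC (gain in e-/ADU)."""
    return gain_e_per_adu / math.sqrt(12.0)

def shot_noise_e(signal_e):
    """Poisson shot noise in electrons for a given signal level."""
    return math.sqrt(signal_e)

gain = 1.0   # e-/ADU: e.g. a ~65k e- full well mapped onto a 16-bit ADC (assumed)
for signal in (100, 1_000, 10_000):
    print(signal, round(shot_noise_e(signal), 1), round(quantization_noise_e(gain), 2))
# Shot noise (10-100 e- here) dwarfs the ~0.29 e- of quantization noise, so the
# applied gain has essentially no effect on the delivered SNR.
```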
 
Upvote
3 (3 / 0)

Chuckstar

Ars Legatus Legionis
37,290
Subscriptor
1) I think that for a no-filled aperture one can be multiple cycles off and still achieve focus gains, but that's certainly not my expertise.

2) one can use a common reference beam for interferometry, but only at that wavelength. That seems only remotely useful for very bright sources.
1) You still need to know the distance to some precision. IIUC, the LISA satellites don't do any absolute measurements of distance. They're doing the equivalent of just counting fringe lines as they go by (although it ends up being more complicated than that in practice, since they use computational methods to approximate an equal-armed interferometer and cancel laser source noise).

2) My point on #2 was just that it's wayyyyyy easier to do such interferometry using light from a known laser source, rather than light from a distant astronomical object.
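
A toy version of the fringe-counting relation (one-way path convention; real LISA processing, with time-delay interferometry to cancel laser noise, is far more involved):

```python
import math

LASER_WAVELENGTH_M = 1.064e-6   # LISA's ~1064 nm Nd:YAG lasers

def path_change_from_phase(delta_phi_rad, wavelength_m=LASER_WAVELENGTH_M):
    """One-way path-length change implied by an accumulated phase change:
    delta_L = (delta_phi / 2*pi) * wavelength. (A factor of 2 enters for
    round-trip configurations; this is just the fringe-counting relation.)"""
    return delta_phi_rad / (2.0 * math.pi) * wavelength_m

# Counting 3.5 fringes going by corresponds to ~3.7 micrometres of path change,
# with no need to know the absolute ~2.5 million km arm length at all.
print(path_change_from_phase(3.5 * 2.0 * math.pi))
```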
 
Upvote
3 (3 / 0)