Vera C. Rubin Observatory/LSST

parejkoj

Ars Praetorian
429
Subscriptor++
JWST got its own thread, so it's only fair...

The Vera C. Rubin Observatory/LSST, an 8.4m telescope on Cerro Pachón in Chile, will see first light in under two years, with first light of ComCam (the commissioning camera: a smaller, engineering-focused camera) likely to happen next summer. Construction was delayed by about 2.5 years due to COVID-19: a 9 month shutdown at the site trickled down into a much longer total delay, as contractors flew home and had to get new visas, companies picked up other construction contracts, Chile implemented strict COVID entry requirements, and COVID-safety policies had to be put in place. First light was originally going to be this past winter; hopefully that's given us time to perfect the analysis software, which is all available on GitHub.

During operations, LSST will take a 3.2 gigapixel image of a 3.5 degree field of view (that's 7 full moons across!) every 30 seconds, resulting in about 15 terabytes of data every night. That means we'll observe the whole southern sky to roughly 24th magnitude every 2-3 nights and basically find everything that goes bump in the night: supernovae, variable stars, quasars/AGN, asteroids, and comets. We'll provide a nightly stream of transient sources to the community via difference imaging (roughly 10,000 alerts on each image every 30 seconds), and roughly annual public data releases of all data taken to that point. The survey is planned for 10 years, and will result in very deep coadds, down to about 28th magnitude over the whole southern sky by survey's end.
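A quick sanity check on the raw data volume (a back-of-envelope sketch, not official figures; it assumes 16-bit raw pixels, as described further down the thread):

```python
# Rough size of a single raw LSST exposure.
pixels = 3.2e9          # 3.2 gigapixel focal plane
bytes_per_pixel = 2     # 16 bits per raw pixel
raw_gb = pixels * bytes_per_pixel / 1e9
print(f"raw exposure: ~{raw_gb:.1f} GB")  # before overscan/metadata overhead
```

That ~6.4 GB per exposure is consistent with the ~8 GB per 30 seconds quoted later once overheads are included.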

Rubin Observatory is also the facility most impacted by the large number of satellites planned by Starlink and others. Every exposure goes very deep and covers a very large sky area, and an important part of the survey is finding near-Earth asteroids, which is best done during twilight, when satellites are brightest. Mitigation approaches and conversations with SpaceX about how they can make their satellites less bright are ongoing, but it's definitely going to impact our data quality.

We're the first US national observatory to be named after a woman! Took them long enough, but there are more to come (e.g. the Nancy Grace Roman Space Telescope, formerly WFIRST, expected to launch in ~5 years).

See the photo gallery for images and mockups of the observatory site, camera, telescope construction process, and various webcams (as I post this, some are unavailable due to a snow storm shutdown on site). It's nice to see a completed telescope dome up there now, with the mirror support structure in place (the mirrors haven't been installed yet, but are waiting on site).

I'm a member of the LSST Data Management Alert Production team, and I'll try to post relevant press releases, data tidbits, and public outreach information and links as they become available, and of course answer questions.
 

MilleniX

Ars Tribunus Angusticlavius
7,786
Subscriptor++
Don't just think of the storage space at the Observatory - think of the networking to get 15TB of data daily off a mountain in Chile to where it needs to go.
Run the numbers - it's not nearly as big as you think. If it's uploaded in real time, over an 8 hour observing period, it's roughly 520 MB/s (a bit over 4 Gbps).
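Running those numbers in code (the byte/bit distinction matters here: 15 TB over an 8-hour night works out to roughly 520 megabytes per second, i.e. a bit over 4 gigabits per second):

```python
night_bytes = 15e12        # ~15 TB per night
night_seconds = 8 * 3600   # 8-hour observing window
mb_per_s = night_bytes / night_seconds / 1e6
gbps = night_bytes * 8 / night_seconds / 1e9
print(f"~{mb_per_s:.0f} MB/s, ~{gbps:.1f} Gbps")
```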
 

parejkoj

Ars Praetorian
429
Subscriptor++
Data processing happens at facilities in the US. In order to get the data from Chile to those data centers, we funded the construction of multiple dedicated fiber optic lines from the summit to La Serena, from there to Santiago, and from Santiago to the US and Brazil and also from Brazil to the US, so there are redundant pathways, I believe totaling more than 100Gbps to the US. We don't actually need all of that all of the time, and we've significantly overbuilt it (if you're running any fiber, you might as well run a lot!), so we're effectively drastically increasing bandwidth to several Chilean universities. I don't recall the details of those community benefits numbers, and can't find the doc with details right now.

Yes, compared with some other projects, ~8Gbytes/30 seconds isn't "that much". Except that we need to have that data fully processed and all of the alerts sent out within 60s of closing the shutter. We don't want data transfer to take more than a fraction of that time budget, hence the fiber optic pipes described above, so we can get the whole image from Chile to the US Data Facility in California within a couple seconds, to start processing it in basically real time.
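As a rough check on that budget (assuming the ~100 Gbps aggregate capacity mentioned above, and ignoring protocol overhead and latency):

```python
image_bytes = 8e9      # ~8 GB per exposure
link_gbps = 100        # assumed aggregate Chile-to-US capacity
budget_s = 60          # alert latency budget after shutter close
transfer_s = image_bytes * 8 / (link_gbps * 1e9)
print(f"transfer: {transfer_s:.2f} s ({100 * transfer_s / budget_s:.1f}% of budget)")
```

At well under a second of the 60 s budget, the transfer really is a small fraction of the pipeline's time.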

I assume that unlike the particle colliders at CERN, you actually do need to keep all that data, right?

Yes, unlike CERN or many radio observatories, we don't throw anything away. There's also the unfortunate terminology of "data reduction" in astronomy: going from a raw image to a fully processed image with derived data products. "Reducing the data" in our context results in increasing storage requirements by a factor of >8 (16 bits/pixel raw -> 32 bits/pixel floating point processed image + 32 bits/pixel floating point variance plane + 64 bits/pixel integer mask plane + derived background, PSF, and calibration data + catalog of measured source properties). Lossless compression doesn't work very well on this kind of data either--typically compressing by a factor of ~1.5--although we are using it.
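The per-pixel bookkeeping from that paragraph, spelled out (the factor of 8 comes from the pixel planes alone; the derived products and catalogs push it past 8):

```python
raw_bits = 16          # raw camera pixel
image_bits = 32        # float32 processed image plane
variance_bits = 32     # float32 variance plane
mask_bits = 64         # integer mask plane
processed_bits = image_bits + variance_bits + mask_bits
factor = processed_bits / raw_bits
print(f"pixel-plane growth: {factor:.0f}x (plus derived products)")
```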

Guy on a bicycle with a backpack with a few hard drives in it every morning at 6am, he returns with a croissant.

It's Chile; the bicyclist would either return with pisco, or a coffee. Also, biking on that unpaved mountain road would be... exciting. And I say this as an avid mountain biker.

That said, sending hard drives every few days via Fedex was absolutely used by SDSS during its first years in the early 2000s, because the network from the mountain couldn't transfer a full night of data in 24 hours.
 

Colm

Ars Tribunus Angusticlavius
7,593
Subscriptor
Guy on a bicycle with a backpack with a few hard drives in it every morning at 6am, he returns with a croissant.

It's Chile; the bicyclist would either return with pisco, or a coffee. Also, biking on that unpaved mountain road would be... exciting. And I say this as an avid mountain biker.

That said, sending hard drives every few days via Fedex was absolutely used by SDSS during its first years in the early 2000s, because the network from the mountain couldn't transfer a full night of data in 24 hours.

Me: shitposting.
Other, better posters: Responding to my shitposts with information.

I do love pisco though. Only SA country I've been to is Ecuador.
 

Klockwerk

Ars Praefectus
3,756
Subscriptor
I'm a virtualisation/storage person, so the networking stuff is all just magic/someone-else's-problem - really appreciate the response.

Yes, compared with some other projects, ~8Gbytes/30 seconds isn't "that much". Except that we need to have that data fully processed and all of the alerts sent out within 60s of closing the shutter. We don't want data transfer to take more than a fraction of that time budget, hence the fiber optic pipes described above, so we can get the whole image from Chile to the US Data Facility in California within a couple seconds, to start processing it in basically real time.

This jumped out at me - what's the requirement to process all data and alerts sent out within 60s of closing the shutter? I'm guessing if something exciting gets caught?
 

parejkoj

Ars Praetorian
429
Subscriptor++
This jumped out at me - what's the requirement to process all data and alerts sent out within 60s of closing the shutter? I'm guessing if something exciting gets caught?

There are a few reasons for that requirement.

The first is a practical one: if you want to be able to identify transient and variable sources in the same night as the observation, you have to process the data within some multiple of the exposure time. So, if processing a single 30s exposure with X computing resources (in our case, one CPU core per detector, so 189 cores) takes up to n*30s, you need n*X resources to keep up, since n exposures are in flight at once. So, two servers with ~200 cores each let us process two exposures without falling behind, if we take less than 60s to process each exposure.
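That scaling argument in code form (numbers from the post; the ~200-core figure is the server size mentioned):

```python
import math

exposure_s = 30            # cadence: one exposure every 30 s
processing_s = 60          # each exposure must be fully processed within 60 s
cores_per_exposure = 189   # one CPU core per detector
cores_per_server = 200     # approximate size of the servers described

in_flight = math.ceil(processing_s / exposure_s)   # exposures being processed at once
cores_needed = in_flight * cores_per_exposure
servers = math.ceil(cores_needed / cores_per_server)
print(f"{in_flight} exposures in flight, {cores_needed} cores, {servers} servers")
```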

The second reason is why I chuckled at "if something exciting gets caught..." LSST's known discovery space, especially in the first few years, is vast.

One example: Currently, between ZTF, PS1, and ATLAS, we're finding about 20,000 supernovae per year. LSST will observe about 1000 per night. Of those, about 1/3 will be type Ia, which are the ones used for constraining cosmology. There aren't enough spectroscopic telescope facilities available to follow up all of those sources with spectroscopy, but different observatories will follow up what they can. Getting spectra at different times in a supernova's evolution is important for characterizing and modeling them. Catching a supernova at the very earliest part of its explosion, the shock breakout flash, is important for understanding exactly how supernovae occur. Shock breakout only lasts an hour or so at most, so immediately identifying sources for rapid followup is crucial if we want to better understand the early phases of supernovae.
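To put those discovery rates side by side (a rough comparison that ignores weather, moon, and survey downtime):

```python
current_per_year = 20_000    # ZTF + PS1 + ATLAS combined, roughly
lsst_per_night = 1000
lsst_per_year = lsst_per_night * 365
type_ia_per_night = lsst_per_night / 3
print(f"~{lsst_per_year:,} SNe/year, "
      f"~{lsst_per_year / current_per_year:.0f}x the current rate, "
      f"~{type_ia_per_night:.0f} Type Ia per night")
```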

Similar examples exist for main belt asteroids (likely a few hundred thousand per night), quasars and AGN (a few thousand per night), cataclysmic variable stars (several thousand per night), tens of M-dwarf flares per night, and hundreds of thousands to more than a million "normal" variable stars per 30s exposure. For each of those classes of objects, you want followup observations to characterize a sample of the sources so you can model the rest of them. You also want to very rapidly follow up some of the most interesting or short-lived ones, for example potentially Earth-crossing asteroids, or any M-dwarf flare. We want LSST's processing time to not be a limiting factor in that rapid followup.

Finally, there's the unknown discovery space. See Figure 8.6 of the LSST Science Book for the image, but I'll try to explain in less than a thousand words. There's a significant area of the "brightness vs. timescale" parameter space that has not yet been explored. Faint sources that have a decay time of less than a day need a telescope that goes deep enough over a large area in a single observation to catch them. We don't yet know what kind of sources we'll find there, because we haven't really had such a facility operating continuously. There are also many classes of objects for which we have a handful of candidates observed so far (e.g. tidal disruption flares in AGN, luminous supernovae, GRB orphan afterglows), but we need to discover and then rapidly follow up much larger samples to truly understand and model them.

If you want to know more about all the reasons why the Rubin Observatory is designed the way it is, the Science Book goes into gory detail. My estimated numbers above come from Ridgway et al. 2014. See section 9, and sections 12-14 for more details on those estimates, some of which are quite rough. Even the "known discovery space" is not that well characterized: that's why we're building this thing!
 

parejkoj

Ars Praetorian
429
Subscriptor++
Because you're never too old to believe in Santa.

I think you're surely right, but am curious about the specifics. What events qualify for immediate re-targeting of the telescope (or of other observatories?) if it were detected?

If by "Santa" you mean "Nobel Prize", then yes, we're not too old. :D

There are a handful of potentially Nobel-worthy first year LSST discoveries: finding the hypothesized planet 9 is the one that comes to mind right now, though the discovery space for that is rapidly shrinking with current HSC and ACT observations.

See my post just above for descriptions of the kinds of sources that LSST will identify for possible followup by other observatories. How exactly that followup time will be allocated and used is an ongoing discussion within the community. Everyone has their favorite class of object, and we're going to produce orders of magnitude more sources than can be followed up, even given all of the telescope time available in the southern hemisphere--which, obviously, won't all be allocated to LSST followup.

Going the other way: LSST is a survey telescope, with a pre-defined (currently under discussion and negotiation) observing cadence, so there aren't a lot of objects discovered by other facilities in the southern hemisphere that we won't already observe within a day or two anyway. The main things that are currently strong candidates for potential LSST Target of Opportunity (ToO) observations--we trigger and target an observation on someone else's alert signal--are gravitational wave (GW) electromagnetic (EM) counterpart followup, and orbit characterization of fast moving potentially hazardous asteroids.

The LIGO+Virgo+KAGRA on-sky 90% error ellipse is easily covered by a handful of LSST exposures, so it's not too disruptive to our survey observations. LSST depth+area means we're best placed for finding a GW EM counterpart within minutes of the GW measurement. GW sources measurable by LIGO are going to be in a relatively nearby galaxy (closer than ~150 Mpc), which should drastically cut down the number of transient sources in an LSST exposure that could be the EM counterpart.
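For a sense of why the error region is easily covered: the 3.5° field of view corresponds to ~9.6 deg² per exposure, so even a ~30 deg² localization (my assumed example number, not a figure from the post) takes only a few pointings:

```python
import math

fov_diameter_deg = 3.5
fov_area = math.pi * (fov_diameter_deg / 2) ** 2   # ~9.6 deg^2 per exposure
localization_area = 30   # deg^2, hypothetical well-localized GW event
pointings = math.ceil(localization_area / fov_area)
print(f"FOV ~{fov_area:.1f} deg^2 -> {pointings} pointings")
```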

LSST will itself identify many Potentially Hazardous Asteroids, potentially finding 60-90% of objects larger than 140m on its own. However, there are other facilities that are better equipped for finding the brighter ones, with dedicated searches of the ecliptic plane during twilight. Very nearby and thus very fast moving objects have a short window to properly characterize their orbit before they become too faint for the discovery telescope, sometimes less than a day. Again, LSST's field of view and single exposure depth mean we can get a few exposures in the predicted orbital path to help decide whether an object is a "send Bruce Willis and a nuke" level of hazard or not (WARNING: don't nuke an asteroid, it just makes things worse!). There are organizations like the B612 foundation working on the problem of what to do once those hazardous objects are found.

Exactly what conditions could cause LSST to trigger a ToO observation are still under discussion: it messes up the observing cadence if we do too many in a week, but we don't want to miss out on potential discoveries of short-lived sources. In some ways, this is the hardest part of designing a scientific survey: balancing all of the different science cases and tradeoffs.
 

parejkoj

Ars Praetorian
429
Subscriptor++
This video was taken in early December, but I didn't realize it was posted publicly online until recently. It has the audio removed, so you don't hear the discussion between the engineers in Spanish before the moves, or the sounds of the motors engaging. This is a real time video, not a timelapse: that's how fast we can move this 8m-class telescope (~4x faster than the required design speed).

These slews are moving at ~6°/second, much faster than any existing telescope of this size (most are of order 1°/second, with a much lower acceleration). I'm curious about other things of this size (~10m across) with that kind of rotational speed. I think e.g. large lift cranes used for building construction typically move much slower. For comparison, Iowa-class battleships had a ~4°/second azimuth speed for their main guns.

To get a sense of the scale here, look at the stairs on the left, or the height of the safety railings. The mirrors and camera are not in place: the large yellow bars are metal weights that serve as "mirror mass simulators", for system testing.


View: https://www.youtube.com/watch?v=Kd_ZbK1zwcA
 

Peldor

Ars Legatus Legionis
10,884

parejkoj

Ars Praetorian
429
Subscriptor++
Rapid movement, and quick acceleration, were design requirements for the telescope: it's how we can survey the whole southern sky every 2-3 nights.

It's a much squatter telescope than most, with the 3-mirror design resulting in the 8.4m diameter telescope being only 6.4m in length. Compare with e.g. Gemini, at ~20m telescope length for a similar mirror diameter. That squat size means we can accelerate and decelerate more quickly, and stop at the target with less wobble.
 

Peldor

Ars Legatus Legionis
10,884
Yes, this is an excellent thread for the Observatory (likely could get a Front Page piece done as well). Kudos indeed.

parejkoj, you said back in August:
The Vera C. Rubin Observatory/LSST, an 8.4m telescope on Cerro Pachón in Chile, will see first light in under two years, with first light of ComCam (the commissioning camera: a smaller, engineering-focused camera) likely to happen next summer.
How's that looking? Construction still on target for the ComCam first light this summer?
 

parejkoj

Ars Praetorian
429
Subscriptor++
How's that looking? Construction still on target for the ComCam first light this summer?
You fight dirty...

The "summer 2023 ComCam first light" schedule I stated back in August was before the full post-COVID replan was finished this winter. The current plan no longer involves ComCam, because the various COVID delays have resulted in the Telescope and Mirror Assembly (TMA) and science camera (LSSTCam) both (hopefully!) becoming ready at about the same time in Spring 2024. Assuming there are no further delays with LSSTCam--there was a >1 month delay this January/February due to the lab it's housed in; I don't think the details are public information yet--this means that ComCam is unnecessary for commissioning the whole system. In addition, the time required to unmount ComCam and then mount LSSTCam would eat into the now tighter schedule of full TMA commissioning.

ComCam is currently mounted on the telescope to test the mounting/unmounting process, electrical and cooling integration, etc. The mirrors are still in storage on site: we don't want the expensive glass mounted on the telescope while there's ongoing construction on the dome, and while the camera installation/removal process is being tested. There's a timelapse video at that link: you can't see it in the timelapse, but I believe there's about 1cm of clearance around the camera as it slides into the top of the telescope.

The schedule now has "first photon" (the first time we open the whole telescope+camera to the sky, even with really ugly data quality, e.g. unfocused, camera voltages/temperatures unstable, alignment not complete) circa early summer 2024, and "system first light" (the first good-quality image, with everything working roughly as expected) in mid-August 2024. There's a lot left to do between now and then, and there's still a chance that ComCam might have to be used, if the camera integration at SLAC isn't finished before the TMA is ready. If that happens, or the telescope mirror integration is delayed, it will push us very close to the inflexible (due to agency requirements and funding) Operations Readiness Review (ORR) date in December. That would be... not good.

The official schedule is kept up to date here: http://ls.st/dates
 

Captain Proton

Smack-Fu Master, in training
1
If you really feel like going for a spin, how about 300 degrees per second? https://www.nasa.gov/ames/research/space-biosciences/20-g-centrifuge
HA HA! Ok, you win the spin contest.
I love where @Peldor was going with this. Maybe I'd take a millisecond pulsar over the 20g monster? It won't swing a companion star around as quickly. But it makes up for it with a greater payload capacity giving a small star an orbital period of a few hours.

But joking aside, thanks @parejkoj for the thread. A great one to mark as my 1st follow.
 

parejkoj

Ars Praetorian
429
Subscriptor++
I love where @Peldor was going with this. Maybe I'd take a millisecond pulsar over the 20g monster? It won't swing a companion star around as quickly. But it makes up for it with a greater payload capacity giving a small star an orbital period of a few hours.
Yes, but we simple humans haven't actually built one of those...
 

halse

Ars Praefectus
3,976
Subscriptor
Good article on the new telescopes in Chile and elsewhere, focusing on the Giant Magellan:
I asked Dr. Mulchaey what it would do that the James Webb and Hubble space telescopes could or would not.
“A lot,” he said.
For one thing, the Giant Magellan instruments were being prioritized for studying exoplanets, and would be capable of detecting rocky, Earthlike planets as far as 30 light-years away. Moreover, as technology improves over time, astronomers will be able to change and upgrade the main instruments, whereas space-based telescopes are stuck with whatever technology they carried at launch.
https://www.nytimes.com/2023/04/18/...es-magellan-chile.html?searchResultPosition=3
 

parejkoj

Ars Praetorian
429
Subscriptor++
I have many problems with the design and planning of the 30m class telescopes (GMT and TMT), but they really are the only good way to get spectroscopic followup of the faint end of sources that will be found by 8m class telescopes (like LSST). For faint sources, mirror size is everything; witness how quickly JWST was able to surpass HST's deep field limits: ~1 day vs. ~1 week. Spectroscopy makes that much worse, because you're splitting the light up into many more components. LSST will find a lot of faint sources that we just cannot follow up with any existing facilities. VLT and Keck would require many hours to get even poor signal-to-noise spectra of sources at our single-epoch (30s exposure) limit of ~24th magnitude, let alone the final 10-year full sky depth of 27th magnitude. They're not really comparable to JWST or HST: putting a >10m class telescope in space is a huge and extremely expensive challenge.
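The "mirror size is everything" scaling, sketched: photon-collecting power goes as mirror area, so the time to reach a given signal-to-noise on a faint source scales roughly as 1/D². This ignores detector and wavelength differences, so it's only indicative, but the 6.5 m vs. 2.4 m JWST/HST comparison lands close to the ~1 day vs. ~1 week figure above:

```python
def speedup(d_new_m, d_ref_m):
    """Relative observing speed from collecting area alone (~D^2)."""
    return (d_new_m / d_ref_m) ** 2

print(f"JWST (6.5 m) vs HST (2.4 m): ~{speedup(6.5, 2.4):.0f}x")
# 24.5 m is GMT's often-quoted aperture; VLT unit telescopes are 8.2 m.
print(f"GMT-class (24.5 m) vs VLT (8.2 m): ~{speedup(24.5, 8.2):.0f}x")
```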

My qualm, in particular with GMT, is that it currently isn't planned around significant multiplexing capabilities. If you're building a ~$1 billion instrument, you'd better be able to observe more than one target at a time. GMT, being mostly privately funded (Carnegie), doesn't have to care about how its facilities benefit the whole community, but I still think it's poor planning.
 

parejkoj

Ars Praetorian
429
Subscriptor++
The Rubin Education and Public Outreach (EPO) team has produced some animated videos explaining the telescope and some of the science we'll do. Here's the full playlist; the latest one is "Boom! Using supernovae to map our expanding Universe".


View: https://www.youtube.com/watch?v=NUwV_vc9fik&list=PLPINAcUH0dXbEQbSMDSKilynCN_KdEVyy


They're also available in Spanish (our telescope is in Chile, after all!):


View: https://www.youtube.com/watch?v=5h0AuoSBlj8&list=PLPINAcUH0dXZJkiE38TyouQlwiaV9LofQ
 

parejkoj

Ars Praetorian
429
Subscriptor++
That's really incredible. Is there anything comparable for the northern sky?
The northern hemisphere has had several comparable precursors: the Sloan Digital Sky Survey (SDSS) in the early 2000s (the original all-sky digital imaging survey, and the source of my profile image); Pan-STARRS, the Panoramic Survey Telescope and Rapid Response System, from ~2009 to now; the Palomar Transient Factory (PTF) from 2009-2017 and the Zwicky Transient Facility (ZTF) from 2018 to now; and Hyper Suprime-Cam from 2014 to now. None of them do all the things that LSST will (HSC comes the closest: they're even using our software for their analysis pipeline!), but the northern hemisphere has had plenty of all-sky surveys over the past couple of decades.

What we really need is some massively multiplexed 10m+ class spectrographs in the southern hemisphere to follow up LSST sources. We'll hopefully have the Prime Focus Spectrograph (PFS) operating on Subaru within the next year or so, but that's in Hawaii, so it can't reach all the way south, and it has only 2400 fibers (funny I'm saying that, coming from only 600 fibers on SDSS).
 

Ecmaster76

Ars Legatus Legionis
16,979
Subscriptor
How hideous would the cost be to build up a 25,000 ft mountain on some "empty" land somewhere in the northern hemisphere? (Might have a bit of ground settling as time goes, just saying...)
If you started on top of Denali and built something matching the tallest building in the world, you'd still be a couple thousand feet short of that
 

Peldor

Ars Legatus Legionis
10,884
Slightly more seriously, if you built a concrete cone 8000 m tall and 4000 m diameter at the circular base it would take about 33.5 billion cubic meters of concrete. Roughly 10% of all concrete made since the industrial revolution if you believe the first hit on google, and far more than the annual global production.

But this is more of a spike than a mountain, if you want to give your mountain a more navigable slope (after all the concrete trucks have to get up there somehow), you'll need to quadruple the base for 16x as much concrete.

Let's say cost is about $175 per cubic meter (as if your mad plan hasn't disrupted the world economy and concrete prices). So roughly $5.8 trillion for a spike and $90 trillion for a broader mountain. The good news is the global GDP has passed $100 trillion so this is totally feasible if we just get everyone to dedicate 90% of their output to it.
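Checking that concrete math with the same assumptions (a right circular cone at $175 per cubic meter; quadrupling the base diameter at fixed height gives 16x the volume):

```python
import math

def cone_cost(height_m, base_diameter_m, usd_per_m3=175):
    """Volume and cost of a right circular cone of concrete."""
    r = base_diameter_m / 2
    volume = math.pi * r**2 * height_m / 3
    return volume, volume * usd_per_m3

spike_vol, spike_cost = cone_cost(8000, 4000)     # the "spike"
mtn_vol, mtn_cost = cone_cost(8000, 16000)        # quadruple the base diameter
print(f"spike: {spike_vol:.3g} m^3, ${spike_cost:.2g}")
print(f"mountain: {mtn_vol:.3g} m^3, ${mtn_cost:.2g}")
```

This reproduces the ~33.5 billion m³, ~$5.9 trillion, and ~$94 trillion figures.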
 