Project descriptions

This page lists projects that are currently in progress and for which we are actively seeking students and collaborators.

NeuroDiffEq

One-Shot Transfer Learning for Nonlinear Differential Equations with Neural Networks

The ability to rapidly adapt neural networks for solving various differential equations holds immense potential. Achieving "one-shot transfer learning" would pave the way for foundational models applicable to entire families of differential equations, encompassing both ordinary (ODEs) and partial differential equations (PDEs). Such models could efficiently handle diverse initial conditions, forcing functions, and other parameters, offering a universally reusable solution framework.

Background and Prior Work:

Our research has made significant strides in this direction. We previously demonstrated one-shot transfer learning for linear equations [1,2]. Subsequently, we built upon this success by employing perturbation methods to achieve iterative one-shot transfer learning for simple polynomial nonlinearities in differential equations [3].

Project Goals:

This project aims to extend our prior work by tackling non-polynomial nonlinearities in differential equations. While our prior work utilized the homotopy perturbation method, its limited convergence regions pose a challenge. Here, we propose exploring alternative expansion techniques, such as Padé approximations [4], as a means to handle a broader range of nonlinearities effectively.
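As a concrete illustration of why Padé approximants are attractive here, the sketch below (our own toy example, not taken from [4]) builds a [3/2] Padé approximant of the non-polynomial nonlinearity exp(u) from the same six Taylor coefficients as a truncated series, using `scipy.interpolate.pade`; the rational form remains accurate farther from the expansion point:

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade

# Taylor coefficients of exp(u) around 0 (our stand-in non-polynomial nonlinearity)
taylor = [1.0 / factorial(k) for k in range(6)]
p, q = pade(taylor, 2)            # [3/2] Padé approximant; p and q are poly1d

u = 1.5
approx_pade = p(u) / q(u)
approx_taylor = sum(c * u**k for k, c in enumerate(taylor))
# the rational approximant tracks exp(u) more closely than the truncated series
print(approx_pade, approx_taylor, np.exp(u))
```

The same coefficients feed both approximations; only the functional form differs, which is what enlarges the region of validity.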

Methodology:

  1. Exploration of Expansion Techniques: We will delve into Padé approximations and potentially other expansion methods suitable for representing diverse nonlinearities in differential equations.
  2. Model Development: We will integrate the chosen expansion technique into a neural network architecture, enabling the model to learn the solution structure for various non-polynomial nonlinearities.
  3. Benchmarking and Validation: The model's performance will be evaluated across a diverse set of ODEs and PDEs.
  4. Real-World Application: We will select a specific real-world application involving non-polynomial nonlinearities and demonstrate the effectiveness of the developed model in solving the corresponding differential equations.

References:

  1. One-shot transfer learning of physics-informed neural networks

  2. Generalized One-Shot Transfer Learning of Linear Ordinary and Partial Differential Equations

  3. One-Shot Transfer Learning for Nonlinear ODEs

  4. Algebraic approximants and the numerical solution of parabolic equations

 

Future Directions in Stiffness Modeling: Expanding Multi-Head PINNs

Ordinary differential equations (ODEs) are fundamental in modeling a vast range of physical, biological, and engineering systems. However, solving these equations, particularly for stiff systems, remains a significant computational challenge. Stiffness arises when solutions evolve on vastly different timescales, requiring specialized numerical methods to capture rapid transients and slow dynamics simultaneously. Traditional solvers like Runge-Kutta methods often struggle with efficiency and stability, necessitating extremely small time steps for stiff systems. This inefficiency is amplified when exploring varying initial conditions or force functions within stiff regimes.
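The cost gap can be seen on a small stiff test problem (the equation and interval below are illustrative choices, not from this project): for y' = -1000 (y - cos t), the explicit RK45 solver is forced into tiny stability-limited steps, while the implicit Radau method takes large steps once the fast transient decays.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    # classic stiff behavior: fast decay onto the slowly varying manifold cos(t)
    return -1000.0 * (y - np.cos(t))

rk45 = solve_ivp(f, (0, 10), [0.0], method="RK45")    # explicit solver
radau = solve_ivp(f, (0, 10), [0.0], method="Radau")  # implicit, stiff-aware
print(rk45.nfev, radau.nfev)  # RK45 needs far more function evaluations
```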

In this context, Physics-Informed Neural Networks (PINNs) offer a promising alternative. By integrating governing equations into neural network structures via automatic differentiation, PINNs can approximate solutions directly without traditional mesh-based discretization. Building on this foundation, this work introduces a novel multi-head PINN architecture and leverages transfer learning to address the unique challenges of stiffness. These methods aim to improve computational efficiency and broaden the applicability of PINNs across diverse stiffness regimes.

Previous Work

In our prior work [1], we proposed a novel approach to solving stiff ODEs using Physics-Informed Neural Networks (PINNs) with a multi-head architecture and transfer learning. By integrating governing equations directly into neural networks through automatic differentiation, PINNs provide an alternative to traditional numerical solvers.

Our method introduced a multi-head architecture, where each “head” specializes in a specific stiffness regime. The network was first trained on non-stiff regimes, then fine-tuned for stiff systems using transfer learning to leverage pre-trained weights. This strategy significantly reduced computational costs compared to methods like RK45 and Radau, particularly when exploring varying initial conditions or force functions.
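A minimal sketch of the multi-head idea (our simplification for illustration, not the architecture of [1]) on the linear test family y' = -k y, y(0) = 1: a shared trunk feeds one small head per stiffness regime, and the ansatz y = 1 + t N(t) hard-enforces the initial condition so only the physics residual is trained.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class MultiHeadPINN(nn.Module):
    def __init__(self, ks, hidden=32):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        # one lightweight output head per stiffness regime
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in ks)

    def forward(self, t, head):
        # ansatz y(t) = 1 + t * N(t) hard-enforces the initial condition y(0) = 1
        return 1.0 + t * self.heads[head](self.trunk(t))

def residual_loss(model, head, k, t):
    t = t.clone().requires_grad_(True)
    y = model(t, head)
    dydt = torch.autograd.grad(y.sum(), t, create_graph=True)[0]
    return ((dydt + k * y) ** 2).mean()   # residual of y' = -k*y

ks = [1.0, 10.0]                          # two stiffness regimes (illustrative)
model = MultiHeadPINN(ks)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
t = torch.rand(256, 1)                    # collocation points in [0, 1]
loss_history = []
for step in range(1500):
    opt.zero_grad()
    loss = sum(residual_loss(model, h, k, t) for h, k in enumerate(ks))
    loss.backward()
    opt.step()
    loss_history.append(loss.item())
```

Transfer learning then amounts to freezing or warm-starting the trunk trained on non-stiff regimes and fine-tuning only the heads for stiffer ones.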

We validated the approach on benchmark linear and nonlinear ODEs with varying stiffness ratios, demonstrating improvements in accuracy and execution time over vanilla PINNs and traditional solvers.

Future Work

Building on the success of this project, we aim to extend the applicability of the proposed approach in the following directions:

  1. Extension to Stiff PDEs: Expand the use of the multi-head architecture and transfer learning to partial differential equations (PDEs) with stiff dynamics. This includes addressing complex problems like the one-dimensional advection-reaction system, a benchmark in atmospheric modeling [2, 3], and extending to systems relevant in fluid dynamics and materials science.
  2. Broadening Stiffness Regime Coverage: Investigate the effectiveness of the multi-head architecture across diverse stiffness types, such as boundary layer stiffness, oscillatory stiffness, and thermal runaway stiffness. This work aims to generalize the methodology for applicability to various domains.
  3. Applications in Astronomy and Physics: Explore the use of this framework for astrophysical simulations, such as modeling stellar interiors, planetary atmospheres, or accretion disk dynamics, where stiffness arises from coupled thermodynamic and radiative processes. Similarly, in physics, apply the method to problems like plasma dynamics or high-energy particle interactions, where disparate timescales and sharp gradients are prevalent.
  4. Other High-Impact Domains: Extend the approach to industrial applications, including chemical reaction networks, biological systems, and climate modeling, which often involve stiff systems and require efficient, accurate solvers.

References

  1. Emilien Sellier and Pavlos Protopapas, submitted to AISTATS
  2. Physics-informed neural networks for stiff partial differential equations with transfer learning.
  3. Brasseur, G. P., & Jacob, D. J. (2017). Atmospheric chemistry and global change. Oxford University Press.
Astromer

Spectromer

Foundational Model for Spectra

Spectroscopic data are crucial for astronomical research. Observing celestial objects across various wavelengths reveals more information about them than most other observational techniques. A spectrum records the intensity of light in a series of wavelength bins, typically spanning roughly 400 to 1000 nm, and can contain from a few hundred to several thousand measurements. As data, a spectrum is simply the sequence of light intensities at the center of each wavelength bin.

Typical stellar spectra exhibit a blackbody continuum, whose total emitted power follows the Stefan-Boltzmann law [1] and depends on the object's temperature, overlaid with a series of absorption and emission lines. Similarly, other celestial objects have their own unique spectral characteristics. These features provide insights into the object's temperature, mass, age, and elemental abundances [2]. Moreover, spectral lines are shifted according to the object's radial motion (the Doppler shift), enabling the deduction of intrinsic velocities and cosmological properties [3].

Spectra are orders of magnitude scarcer than traditional imaging, as they require longer exposure times and more delicate equipment. Due to the significance of spectroscopic data, astronomers have developed sophisticated instruments for spectral observations; in particular, the drive to observe the spectra of numerous celestial objects has motivated outstanding engineering achievements. Currently, millions of spectra are available through various surveys, such as the Sloan Digital Sky Survey (SDSS) [4] and the Gaia mission [5].

Spectra have been extensively used in the analysis and classification of celestial objects. Stars are classified according to their spectral features using the Harvard Classification system [6], and objects can be assigned to variability types and classified into categories like stars, galaxies, and active galactic nuclei (AGNs) based on their spectra [7].

In this project, we aim to create a foundational model using transformers for embeddings of various types of spectra. We will leverage millions of available spectra to pre-train the model following the paradigms of masked language modeling (adapted for continuous data) and next-sentence prediction. We will then test the embeddings by fine-tuning on downstream regression or classification tasks. The project will take advantage of our previous work on time-series analysis and adapt it to spectroscopic data, as described in the Astromer paper by Donoso et al. [8]. The final models will be made available as pre-trained models for the community to use. Additionally, we will adhere to proper software development and MLOps standards, utilizing cloud infrastructure for training and deployment, as well as data and code versioning practices.
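Masked pre-training adapted to continuous flux values could look roughly like this (the shapes, 15% masking ratio, and architecture are illustrative assumptions, not the Spectromer design): masked bins are replaced with a learned token, and the reconstruction loss is taken only on those bins.

```python
import torch
import torch.nn as nn

class SpectrumMaskedEncoder(nn.Module):
    def __init__(self, d_model=64, nhead=4, nlayers=2, max_len=512):
        super().__init__()
        self.embed = nn.Linear(1, d_model)        # flux value -> token
        self.pos = nn.Parameter(torch.randn(1, max_len, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, 1)         # reconstruct masked flux
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))

    def forward(self, flux, mask):
        # flux: (B, L) real-valued; mask: (B, L) bool, True where flux is hidden
        x = self.embed(flux.unsqueeze(-1))
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        x = x + self.pos[:, : x.size(1)]
        return self.head(self.encoder(x)).squeeze(-1)

B, L = 8, 128
flux = torch.randn(B, L)                          # stand-in for real spectra
mask = torch.rand(B, L) < 0.15                    # hide ~15% of the bins
model = SpectrumMaskedEncoder()
recon = model(flux, mask)
loss = ((recon - flux)[mask] ** 2).mean()         # loss only on masked bins
```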



References:

[1] Stefan-Boltzmann law: https://en.wikipedia.org/wiki/Stefan–Boltzmann_law

[2] Spectroscopic analysis of stellar properties: https://www.annualreviews.org/doi/10.1146/annurev-astro-081817-051846

[3] Doppler shift and spectral line analysis: https://astronomy.swin.edu.au/cosmos/D/Doppler+Shift

[4] Sloan Digital Sky Survey (SDSS): https://www.sdss.org/

[5] Gaia mission: https://www.cosmos.esa.int/web/gaia

[6] Harvard Classification system: https://en.wikipedia.org/wiki/Stellar_classification

[7] Spectroscopic classification of celestial objects: https://iopscience.iop.org/article/10.3847/1538-4357/aa6890

[8] ASTROMER-A transformer-based embedding for the representation of light curves. Astronomy & Astrophysics https://www.aanda.org/articles/aa/pdf/2023/02/aa43928-22.pdf

Multi-Band Astromer

Modeling Multi-band Lightcurves for Astronomical Surveys

Astronomical surveys rely on collecting data across various wavelengths, known as bands. Each band captures light at a specific range within the electromagnetic spectrum. This multi-band approach provides rich information about celestial objects, enabling astronomers to infer properties like temperature, composition, and age.

Building robust models for astronomical lightcurves, which represent brightness variations over time, presents a unique challenge when dealing with multi-band data. A straightforward method, inspired by Astromer [1], involves feeding multi-band observations directly into a transformer-based encoder as a multi-dimensional input. However, telescopes often employ different physical filters for different observation bands, leading to inconsistencies in data acquisition times, and traditional transformer architectures struggle with such asynchronous data.

This project aims to explore alternative approaches for effectively handling multi-band lightcurve data. We will investigate methods beyond the basic multi-dimensional input approach:

  • Late Fusion with Embedding Mixing: Here, we will train independent transformer encoders for each band, similar to Astromer. However, instead of feeding the combined bands directly, we will extract embeddings from each individual encoder. These embeddings will then be combined using techniques like embedding mixing [2] only when addressing the final task (e.g., classification, regression). This approach allows for independent processing of asynchronous data while leveraging the power of transformers.
  • Multi-attention Layers: We will move on to more complex architectures, specifically exploring the use of multi-attention layers within the transformer framework. These layers enable the model to attend to information across different bands at various time steps, potentially capturing the relationships between asynchronous observations.
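The late-fusion option above can be sketched as follows (all architectural details are assumptions for illustration): each band has its own encoder, so bands never need to be time-aligned with each other, and their embeddings are combined only at the task head.

```python
import torch
import torch.nn as nn

class BandEncoder(nn.Module):
    """Encodes one band's (time, magnitude) sequence into a fixed embedding."""
    def __init__(self, d_model=32):
        super().__init__()
        self.proj = nn.Linear(2, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, seq):                       # seq: (B, L, 2)
        return self.enc(self.proj(seq)).mean(dim=1)   # (B, d_model)

class LateFusionClassifier(nn.Module):
    def __init__(self, n_bands=2, d_model=32, n_classes=5):
        super().__init__()
        self.encoders = nn.ModuleList(BandEncoder(d_model) for _ in range(n_bands))
        self.head = nn.Linear(n_bands * d_model, n_classes)

    def forward(self, band_seqs):                 # one tensor per band
        embs = [enc(seq) for enc, seq in zip(self.encoders, band_seqs)]
        return self.head(torch.cat(embs, dim=-1))

# the two bands can have different, asynchronous samplings (different lengths)
g_band = torch.randn(4, 50, 2)
r_band = torch.randn(4, 37, 2)
logits = LateFusionClassifier()([g_band, r_band])
```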

Flexible Input Pipeline and Model Applications:

To ensure maximum adaptability, we will develop a flexible input pipeline capable of ingesting data from various multi-band astronomical sources. This pipeline will preprocess and format the data seamlessly, regardless of the specific survey or telescope behind the observations.

Once the models are trained, we can fine-tune them using rich public datasets like Gaia DR3 (third data release of the Gaia space mission) and ZTF DR20 (the Zwicky Transient Facility DR20). These datasets provide a wealth of multi-band lightcurve information for a multitude of celestial objects.

The fine-tuned models can then be employed for various astronomical tasks, including:

  • Classification of Variable Objects: By analyzing lightcurve patterns, the models can identify objects whose brightness fluctuates over time, such as pulsating stars or eclipsing binary systems.
  • Prediction of Physical Parameters: The models can be trained to predict the physical properties of objects based on their lightcurve characteristics. This could involve estimating an object's temperature, mass, or even its distance through techniques like redshift estimation.

As the project progresses, potential areas for future exploration include:

  • Integrating additional data sources, such as spectroscopic information, to enhance model performance.
  • Investigating the effectiveness of self-attention mechanisms specifically designed to handle asynchronous data.
  • Applying the developed models to real-world astronomical datasets to assess their practical capabilities.

[1] ASTROMER-A transformer-based embedding for the representation of light curves. Astronomy & Astrophysics

https://www.aanda.org/articles/aa/pdf/2023/02/aa43928-22.pdf

[2] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … Polosukhin, I. (2017). Attention is all you need. arXiv:1706.03762: https://arxiv.org/abs/1706.03762

NNETH

Computer Vision methods for the Event Horizon Telescope

The Event Horizon Telescope (EHT) is an ambitious project that, simply put, aims to resolve images of black holes, thereby confirming or challenging theories about black holes and potentially discovering other interesting phenomena. Black holes serve as extreme laboratories for testing the underlying governing laws of the universe, which makes them a very interesting subject of study.

In 2019, the EHT collaboration released the first image of a black hole, which was expected but still an amazing confirmation of theory and a triumph of science and engineering working together. One may argue it was one of the greatest discoveries ever made.

Skipping the details of how the telescope is constructed and how it operates, at the end of all the herculean efforts by the EHT collaboration, the primary data products are "images" of black holes. So far, only two black holes have been imaged: M87* and Sgr A*. We put "images" in quotes because the actual product is a sample of the image in the frequency domain, which we call visibility data. From the visibility data, there is a direct but lossy way to produce the images that we have all seen.

Images are visually appealing and can capture the imagination of everyone. Who hasn't been impressed with the first M87* image when it was released? However, to connect these observations with theoretical models, we need to characterize the black holes given these images.

The good news is that despite black holes being very complex objects, all of this complexity is hidden inside the event horizon (the boundary beyond which we cannot see). A black hole can be characterized by just three parameters: mass, spin, and charge (often represented by electron density). This is known as the no-hair theorem. Consequently, our task is to determine the mass, spin, and electron density given an image as input. The mass of a black hole can often be estimated through various observational techniques, such as analyzing the motion of nearby stars or measuring the size of the black hole shadow in EHT images. However, accurately determining the spin remains a significant challenge. The charge is typically assumed to be negligible for astrophysical black holes.

The problem statement, then, is simple: given an input image, can we accurately estimate the spin? We propose to use deep learning methods to determine the spin and R-high for the black hole images of M87* and Sgr A*.

Using supervised machine learning requires training sets, which are not available in our case; we have no images of black holes with known spin values. Instead, we use simulations, or in other words, synthetic black holes. Since these are simulations, we know the spin. These simulations incorporate general relativity and magnetohydrodynamics (GRMHD simulations). While they are computationally expensive to run, they have been conducted by the EHT team and provided to us. The problem, then, is again simple: given a set of simulated images, we train a model and then use the trained model to predict the physical characteristics of the actual black hole images.
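Schematically, this setup amounts to regressing spin from images (the architecture and shapes below are illustrative assumptions, not the models of the cited works); note the Tanh output, since the dimensionless spin is bounded in (-1, 1).

```python
import torch
import torch.nn as nn

# small CNN regressor: simulated GRMHD image in, spin estimate out
spin_regressor = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Tanh(),     # spin a* is bounded in (-1, 1)
)

images = torch.randn(8, 1, 64, 64)   # stand-in for simulated images
spins = torch.rand(8, 1) * 2 - 1     # known spins from the simulations
pred = spin_regressor(images)
loss = nn.functional.mse_loss(pred, spins)   # supervised regression loss
```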

Below are the projects for which we are looking for collaborators.


Denoising Data in the Visibility Space Using InstructPix2Pix Models

The quality of reconstructed images in astrophysical observations, such as those from interferometric arrays like the Event Horizon Telescope (EHT), depends heavily on denoising and deblurring techniques. While methods like probabilistic denoising models (e.g., Stable Diffusion and its variants, including InstructPix2Pix) have demonstrated success in deblurring image-space data, their direct application to visibility-space data remains unexplored.

This project proposes the adaptation and extension of InstructPix2Pix models for deblurring directly in the visibility space, where the raw observational data resides. The visibility space, being the Fourier domain representation of interferometric data, introduces unique challenges due to its complex structure and noise characteristics. A direct application of diffusion-based denoising models in this domain requires specialized adjustments, such as tailoring loss functions for the visibility space and preserving phase and amplitude coherence during model training.
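A NumPy toy example of the image/visibility relationship (our illustration, not EHT data handling): visibilities are samples of the image's 2-D Fourier transform, so noise added in that domain corrupts amplitude and phase jointly and degrades the reconstructed image.

```python
import numpy as np

rng = np.random.default_rng(0)
image = np.zeros((64, 64))
image[28:36, 28:36] = 1.0                        # toy source

vis = np.fft.fft2(image)                         # "visibility space" (Fourier domain)
noise = rng.normal(size=vis.shape) + 1j * rng.normal(size=vis.shape)
noisy_vis = vis + 2.0 * noise                    # complex noise hits amplitude and phase

dirty_image = np.fft.ifft2(noisy_vis).real       # degraded image-space reconstruction
residual = np.abs(dirty_image - image).max()
```

A denoiser operating in visibility space would act on `noisy_vis` directly, before the inverse transform, which is why complex-valued (phase-aware) losses are needed.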

Our approach will involve:

  • Model Extension: Adapting the InstructPix2Pix framework to work with visibility-space inputs by designing conditional inputs that correspond to specific observational constraints.
  • Data Preparation: Generating synthetic training datasets in the visibility space that incorporate noise and blur artifacts to train the model effectively.
  • Evaluation: Validating the model's performance on simulated and real visibility data, comparing reconstructed images in the image space to those processed through standard pipelines.

This project has the potential to significantly improve the fidelity of interferometric reconstructions by addressing noise and blur directly in the visibility space, setting the foundation for improved scientific insights from observational data.




AI-Driven Polarized Imaging for Black Hole Characterization

The Event Horizon Telescope (EHT) has provided polarized images of supermassive black holes, including M87* and Sgr A*, offering insight into the magnetic fields and plasma dynamics surrounding these enigmatic objects. These observations present an opportunity to extract critical parameters, such as black hole spin and accretion disk geometry.

This project aims to enhance black hole characterization by incorporating polarization data into deep learning models. Polarization measurements, particularly the Stokes parameters (I, Q, U, and V), reveal valuable information about the magnetic fields and plasma near black holes. These data are crucial for understanding processes like relativistic jet formation and energy extraction, which are linked to strongly magnetized and spinning black holes. Models that integrate polarization data with total intensity observations can provide a more comprehensive understanding of black hole environments.

The approach focuses on developing advanced neural network architectures capable of processing multi-dimensional inputs, including polarization data. General Relativistic Magnetohydrodynamic (GRMHD) simulations that incorporate multi-frequency polarization intensities will be used to train and validate the models.

Key components of the project include:
  1. Multi-Dimensional Neural Network Design: The project will adapt existing deep learning models to handle multi-dimensional inputs, such as polarization data. Specialized loss functions and enhanced input layers will be developed to process polarization features effectively.
  2. Feature Extraction and Integration: Techniques will be created to extract meaningful features from polarization data that correlate with black hole spin and accretion states. These features will help distinguish between physical models like the “magnetically arrested disk” (MAD) and “standard and normal evolution” (SANE) states.
  3. Noise Resilience: To ensure robustness against observational uncertainties, the models will be trained on synthetic data augmented with noise. This will make the models better suited to handle real-world EHT data, which is often noisy and incomplete.
The ultimate goal of the project is to provide joint estimates of black hole spin and magnetic field configurations, enabling detailed mapping of regions where relativistic jets are launched. This research will address fundamental questions about energy extraction from spinning black holes and the role of magnetic fields in these processes. By integrating polarization data into machine learning frameworks, the project aims to maximize the scientific insights derived from EHT observations and potentially lead to groundbreaking discoveries.
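A minimal sketch of such a multi-dimensional input (the channel layout and output heads are our assumptions): the four Stokes maps I, Q, U, V enter as a 4-channel image, and the network jointly outputs a spin estimate and a MAD/SANE logit.

```python
import torch
import torch.nn as nn

class PolarizationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.spin = nn.Linear(32, 1)    # regression target: black hole spin
        self.state = nn.Linear(32, 1)   # binary logit: MAD vs SANE

    def forward(self, stokes):          # stokes: (B, 4, H, W)
        h = self.features(stokes)
        return self.spin(h), self.state(h)

batch = torch.randn(2, 4, 64, 64)       # stacked I, Q, U, V maps
spin, state = PolarizationNet()(batch)
```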


Visual Transformers

In our previous work and in the work of others, simple CNNs and RNNs have been deployed.
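For reference, the core idea of a vision transformer, in contrast to those CNNs, is to treat an image as a sequence of patch tokens (everything below is a generic sketch, not a proposed design for this project):

```python
import torch
import torch.nn as nn

# patchify via a strided convolution: each 8x8 patch becomes one 64-dim token
patch_embed = nn.Conv2d(1, 64, kernel_size=8, stride=8)
image = torch.randn(1, 1, 64, 64)                      # one grayscale image
tokens = patch_embed(image).flatten(2).transpose(1, 2)  # (1, 64 patches, 64 dims)

layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
features = encoder(tokens)                              # contextualized patch features
```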


Optimizing the antenna configuration based on model performance





References:

[1] Generating Images of the M87* Black Hole Using GANs:
https://doi.org/10.1093/mnras/stad3797

[2] Deep Horizon: A machine learning network that recovers accreting black hole parameters:
https://doi.org/10.1051/0004-6361/201937014

[3] Feature Extraction on Synthetic Black Hole Images:
https://www.semanticscholar.org/paper/8e42b9a02bba6e15c0300d20dfa3ebc2ce4fa8bd

[4] Using Machine Learning to Link Black Hole Accretion Flows with Spatially Resolved Polarimetric Observables:
https://doi.org/10.1093/mnras/stad466

[5] Classification of a black hole spin out of its shadow using support vector machines:
https://doi.org/10.1103/PhysRevD.99.103002

CINNs

Cosmologically Informed NNs

Studies of alternative cosmological models with artificial neural networks

Motivation for the research topic. The accelerated expansion of the Universe is one of the most intriguing problems in modern cosmology, as there is currently no consensus within the scientific community regarding the physical mechanism responsible for it. Although the standard cosmological model can explain this phenomenon, it presents some unresolved issues. For these reasons, the study of alternative cosmological models and their comparison with recent observational data has become relevant. In this work, we will focus on cosmological models constructed assuming an alternative theory of gravitation to General Relativity. To quantify the effect that each modification to the standard cosmological model has on observable quantities, it is necessary to solve a complex system of differential equations. Usually, this type of system is solved using numerical methods; however, these methods tend to be computationally intensive.

 

Recently, methods using neural networks to solve systems of differential equations through unsupervised learning (i.e., without using numerical solutions in NN training) have been developed. In contrast to solutions obtained with numerical methods, solutions provided by NNs are continuous, completely differentiable, require less computational capacity than classical methods [1], and can be stored in small memory spaces. An extension of this unsupervised method was proposed in [2], which introduces the possibility of training NNs that represent a set (or bundle) of solutions corresponding to a continuous range of parameter values for the differential system, which may include initial and boundary conditions. The great advantage of this method is that once the neural networks are trained, the solution can be reused indefinitely without re-running the integration, as numerical methods require. This results in a reduction of computational times in Markov chain inference processes. The method is implemented in the neurodiffeq library [3], developed by our group.
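The bundle idea can be illustrated in plain PyTorch (a generic sketch, not the neurodiffeq implementation): the network takes both the independent variable t and a parameter k as inputs, so a single training run covers the whole family y' = -k y, y(0) = 1 for k in a chosen range.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))

def bundle_solution(t, k):
    # ansatz 1 + t * N(t, k) hard-enforces y(0) = 1 for every k in the bundle
    return 1.0 + t * net(torch.cat([t, k], dim=1))

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_history = []
for step in range(1000):
    t = torch.rand(256, 1, requires_grad=True)
    k = 0.5 + 1.5 * torch.rand(256, 1)        # bundle over k in [0.5, 2.0]
    y = bundle_solution(t, k)
    dydt = torch.autograd.grad(y.sum(), t, create_graph=True)[0]
    loss = ((dydt + k * y) ** 2).mean()       # residual of y' = -k*y
    opt.zero_grad(); loss.backward(); opt.step()
    loss_history.append(loss.item())
# once trained, the solution for any k in the range is available with no re-integration
```

In an inference loop, each Markov chain step just evaluates `bundle_solution` at the proposed parameter value, which is where the speed-up comes from.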

 

The NN method was applied to solve the background dynamics equations of the Universe in four different cosmological models [4]. The results showed significant optimization of parameter-inference computational times. The application of the method was then optimized further: the key to improving computational times lies in calculating an integral using the same NN bundle method [5]. Finally, a similar method was applied to solve the matter perturbations equation (undergraduate thesis by Luca Gomez Bachar).

The objective of the thesis work is to calculate the uncertainties of the solutions obtained with the neural network bundle method. A major limitation of the method as applied to cosmology is that it currently cannot estimate its own uncertainty; to date, its solutions have only been compared against those of a numerical method. The current work plan focuses on the matter perturbations equation, which can be written as a simple system of two ODEs, and proposes to go one step further by calculating the uncertainties of the previously obtained solutions. To do this, we will rely on similar estimations developed for other contexts by our group [6].
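One simple baseline for such uncertainty estimates (an assumption on our part, not the method of [6]) is an ensemble: train several independently initialized networks on the same equation and interpret the spread of their predictions as a pointwise uncertainty band.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_net():
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

# five independently initialized members; in practice each would first be
# trained on the same ODE residual loss before interpreting the spread
ensemble = [make_net() for _ in range(5)]

t = torch.linspace(0, 1, 50).unsqueeze(1)
with torch.no_grad():
    preds = torch.stack([1.0 + t * net(t) for net in ensemble])  # shared IC ansatz
mean, std = preds.mean(dim=0), preds.std(dim=0)  # per-point band over the ensemble
```

Because all members share the hard-constrained initial condition, the band collapses to zero exactly at t = 0, as it should.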

 

This work will be carried out in collaboration with a group in Argentina led by Professor Susana Landau. The entire work group meets remotely once a week.

 

References

[1] Lagaris I E, Likas A and Fotiadis D I 1998 IEEE Transactions on Neural Networks 9 987–1000

[2] Flamant C, Protopapas P and Sondak D 2020 arXiv e-prints arXiv:2006.14372

[3] Chen F, Sondak D, Protopapas P, Mattheakis M, Liu S, Agarwal D and Di Giovanni M 2020 Journal of Open Source Software 5 1931

[4] Chantada A T, Landau S J, Protopapas P, Scóccola C G and Garraffo C 2023 Phys. Rev. D 107(6) 063523 URL https://link.aps.org/doi/10.1103/PhysRevD.107.063523

[5] Chantada A T, Landau S J, Protopapas P, Scóccola C G and Garraffo C 2023 arXiv e-prints arXiv:2311.15955 (Preprint 2311.15955)

[6] Liu S, Huang X and Protopapas P 2023 arXiv e-prints arXiv:2306.03786 (Preprint 2306.03786)

ADSML

ADSML

LLMs for Bibliography Curation

Introduction and Motivation

A well-established way to assess the scientific impact of an observational facility in astronomy is the quantitative analysis of the studies published in the literature which have made use of the data taken by the facility. A requirement of such analysis is the creation of bibliographies which annotate and link data products with the literature, thus providing a way to use bibliometrics as an impact measure for the underlying data. Creating such links and bibliographies is a laborious process which involves specialists searching the literature for names, acronyms and identifiers, and then determining how observations were used in those publications, if at all (Observatory Bibliographers Collaboration, 2024).

The creation of such links represents more than just a useful way to generate metrics: doing science with archival data depends on being able to critically review prior studies and then locate the data used therein, a basic tenet behind the principle of scientific reproducibility. From the perspective of a research scientist, the data-literature connections provide a critical path to data discovery and access. Thus, by leveraging the efforts of librarians and archivists, we can make use of telescope bibliographies to support the scientific inquiry process. We wish to make the creation of such bibliographies simpler and more consistent by using AI technologies to support the efforts of data curators.

Typical Curation Process

While different groups use different approaches and criteria to the problem of bibliography creation and maintenance, the steps involved typically consist of the following:

  1. Use a set of full-text queries to the ADS bibliographic database in order to find all possible relevant papers. This first step aims to identify articles that contain mention of the telescope/instrument of interest so that they can be further analyzed. For instance, the set of query terms used to find papers related to the Chandra X-Ray telescope may be “Chandra,” “CXC,” “CXO,” “AXAF,” etc.

  2. Analyze the text containing mentions of the telescope/instrument and its variations in order to disambiguate the use of the terms of interest. For the Chandra example, this includes teasing apart the different entities associated with “Chandra,” which may correspond to a person, a ground-based telescope, or a space-based telescope.

  3. Identify whether the paper in question shows evidence of the use of datasets generated by the telescope or hosted by the archive of interest. The mention of data use may be explicit (e.g. the listing of dataset identifiers), or implied in the text (e.g. mention of analysis and results without identification of the actual dataset). Whenever dataset ids are used, they should be extracted and identified.

  4. In some cases, additional classification of the dataset may be collected, such as the instrument used in the observations. This information is also correlated with the kind of data that was used (e.g. image vs. spectra vs. catalog) and its characteristics. In the case of Chandra, there are 7 different instruments that can be used for the data collection (ACIS, HRC, HETG, LETG, HRMA, PCAD, EPHIN), and their use, if explicitly mentioned in the paper, should be reported.

  5. For some bibliographies, additional information is collected, such as the relevance of the paper to the scientific use of the data archive. For example, for the Chandra bibliography, the following categories are defined:

    1. Direct use of Chandra data
    2. Refers to published results
    3. Predicts Chandra results
    4. Paper on Chandra software, operations, and/or instrumentation
    5. General reference to Chandra

An automated assistant able to emulate the supervised curation activities listed in steps 2-5 above would provide a valuable contribution to the human effort involved. LLMs have shown flexibility in interpreting and classifying scientific articles, which is the basis for this curation activity. They have also been successfully used for information-extraction tasks, which would help identify the specific datasets mentioned in the papers. This shared task aims at improving state-of-the-art technologies to support these curation efforts. To this end, a dataset consisting of open-access fulltext papers and annotated bibliographies from institutions that collect this information is being solicited.
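As an illustration of step 5, a relevance-classification prompt could be built from the Chandra category definitions and the model's reply parsed back to a category number (the prompt wording and parser below are hypothetical; any chat-completion client would sit between them):

```python
# Hypothetical sketch of step 5 (relevance classification) for the Chandra
# bibliography; the actual LLM call is omitted and left to any client library.
CATEGORIES = {
    1: "Direct use of Chandra data",
    2: "Refers to published results",
    3: "Predicts Chandra results",
    4: "Paper on Chandra software, operations, and/or instrumentation",
    5: "General reference to Chandra",
}

def build_prompt(fulltext_snippet: str) -> str:
    rules = "\n".join(f"{k}. {v}" for k, v in CATEGORIES.items())
    return (
        "Classify the following astronomy paper excerpt into exactly one "
        f"category:\n{rules}\n\nExcerpt:\n{fulltext_snippet}\n\n"
        "Answer with the category number only."
    )

def parse_category(reply: str):
    # tolerate replies like "4", "4.", or "Category: 4."
    for token in reply.split():
        stripped = token.strip(".")
        if stripped.isdigit() and int(stripped) in CATEGORIES:
            return int(stripped)
    return None  # unparseable reply -> fall back to human review
```

Falling back to `None` keeps the human curator in the loop whenever the model's answer does not match the expected format.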

Call for Contributions

For the upcoming 2024 WIESP Shared Task Challenge, we are soliciting contributions of labeled data that can be used to train an expert assistant. Contributions towards this goal include:

  1. A set of full-text, liberally licensed papers from the ADS
  2. A dump of the Chandra Archive bibliography, providing a classification of the articles according to the criteria above
  3. Other observatories' labeled data [please indicate your interest here]

Potential data contributors:

  • MAST
  • HEASARC
  • NASA HPD
  • ESA

This work will be carried out in collaboration with the ADS group led by Alberto Accomazzi and Raffaele D’Abrusco. The entire work group meets remotely once a week.

Other

Other projects

Besides the projects described above, we have engaged in various other projects, particularly in astronomy and physics. Work on StellarNet, eigenvalue problems, embeddings of galaxy images using auto-encoders, and other topics is described in the linked page.