
ASCL Astronomy Source Code

The Astrophysics Source Code Library (ASCL) is a curated registry of source code used in astronomy and astrophysics research. This dataset contains source files extracted from ASCL-listed repositories, paired with catalog metadata.

Dataset Structure

Manifest (manifest.parquet)

One row per ASCL catalog entry with the following fields:

Field Description
ascl_id ASCL identifier (e.g., [ascl:2306.019])
title Software title
authors Author list
description Abstract / description from ASCL
detail_url ASCL detail page URL
repo_url GitHub/GitLab/Bitbucket URL (if found)
code_site Project homepage URL
ads_url ADS bibcode URL
license_type Detected license (e.g., MIT, GPL-3.0)
license_file Path to license file in repo

Source Code (code/*.parquet)

Source files in the style of The Stack, extracted from cloned repositories (one row per file):

Field Description
ascl_id ASCL identifier
repo_url Source repository URL
file_path Relative path within repo
content File text content
language Detected programming language (from file extension)
license_type License detected from the repository
size File size in bytes

Data Collection Methodology

Phase 1: Catalog Scrape

The ASCL catalog is scraped to extract metadata for each entry: title, authors, description, repository URLs, and ADS bibcode links. Only entries with a repository URL on GitHub, GitLab, or Bitbucket proceed to Phase 2.
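The host filter described above can be sketched as a simple URL check. This is a hypothetical illustration (the function name `has_supported_repo` and the regex are assumptions, not the scraper's actual code):

```python
import re

# Hypothetical Phase 1 filter: keep only entries whose repo_url points at
# one of the three supported git hosts. The real scraper's logic may differ.
HOST_RE = re.compile(
    r"https?://(www\.)?(github\.com|gitlab\.com|bitbucket\.org)/",
    re.IGNORECASE,
)

def has_supported_repo(entry: dict) -> bool:
    """True if the catalog entry's repo_url is on GitHub, GitLab, or Bitbucket."""
    return bool(HOST_RE.match(entry.get("repo_url") or ""))

print(has_supported_repo({"repo_url": "https://github.com/realfastvla/realfast"}))  # True
print(has_supported_repo({"repo_url": "https://example.org/code.tar.gz"}))          # False
```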

Phase 2: Code Extraction

Each repository is shallow-cloned (--depth 1), its license file is detected and classified via regex pattern matching, and all recognised source files are extracted into Parquet batches. Language detection uses file extension mapping (Python, C, C++, Fortran, Julia, R, MATLAB/Octave, IDL, Java, Rust, Go, JavaScript, Shell, and others).
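The Phase 2 steps above (shallow clone, extension-based language detection, one record per recognised file) can be sketched as follows. The `EXT_MAP` dictionary is an illustrative subset, and `extract_repo` is a hypothetical helper, not the dataset's actual pipeline code:

```python
import subprocess
from pathlib import Path

# Illustrative subset of an extension-to-language map; the real pipeline
# covers more extensions (Fortran variants, R, IDL, MATLAB/Octave, etc.).
EXT_MAP = {
    ".py": "Python", ".c": "C", ".cpp": "C++", ".f90": "Fortran",
    ".jl": "Julia", ".java": "Java", ".rs": "Rust", ".go": "Go",
    ".js": "JavaScript", ".sh": "Shell",
}

def detect_language(file_path):
    """Return a language name from the file extension, or None if unrecognised."""
    return EXT_MAP.get(Path(file_path).suffix.lower())

def extract_repo(repo_url, dest):
    """Shallow-clone repo_url and return one record per recognised source file."""
    subprocess.run(["git", "clone", "--depth", "1", repo_url, dest], check=True)
    records = []
    for path in Path(dest).rglob("*"):
        lang = detect_language(path.name)
        if path.is_file() and lang:
            text = path.read_text(errors="replace")
            records.append({
                "file_path": str(path.relative_to(dest)),
                "content": text,
                "language": lang,
                "size": len(text.encode()),
            })
    return records
```

In a real run, the returned records would be batched and written to `code/*.parquet` alongside the repository's detected `license_type` and `ascl_id`.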

Limitations

  • Repository coverage: only repos hosted on GitHub, GitLab, or Bitbucket are included; code distributed via tarballs, personal websites, or other non-git hosting is skipped.
  • Shallow clones only: only the latest commit is captured — no version history.
  • Language detection is extension-based: file extensions are mapped to languages; there is no content-based language classification.
  • License detection is regex-based: licenses are identified by pattern matching against common license file names and text; unusual or custom licenses may be misclassified or reported as Unknown.
  • No deduplication: if multiple ASCL entries point to the same repository, its files may appear more than once.
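Given the no-deduplication caveat above, downstream users may want to drop duplicate files themselves. A minimal sketch using pandas, keyed on `(repo_url, file_path)` (the helper name `dedup_files` is an assumption; column names follow the schema above):

```python
import pandas as pd

def dedup_files(df):
    """Keep one row per (repo_url, file_path), preferring the first occurrence."""
    return df.drop_duplicates(subset=["repo_url", "file_path"], keep="first")

# Two ASCL entries pointing at the same repo yield duplicate file rows.
rows = pd.DataFrame({
    "ascl_id": ["[ascl:2306.019]", "[ascl:9999.001]"],
    "repo_url": ["https://github.com/realfastvla/realfast"] * 2,
    "file_path": ["setup.py", "setup.py"],
})
print(len(dedup_files(rows)))  # → 1
```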

Licensing

This is a multi-license dataset. Each row carries a license_type field indicating the license detected for that repository. Individual source files retain their original licenses as set by their authors. Catalog metadata originates from ASCL.

Usage

from datasets import load_dataset

# Load catalog metadata
ds_manifest = load_dataset("Smith42/ascl-code", data_files="manifest.parquet")

# Load source code files
ds_code = load_dataset("Smith42/ascl-code", data_files="code/*.parquet")

# Filter to a specific license
mit_code = ds_code["train"].filter(lambda x: x["license_type"] == "MIT")

# Filter to Python files
python_code = ds_code["train"].filter(lambda x: x["language"] == "Python")
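Since the manifest and code tables share the `ascl_id` column, catalog metadata (title, authors, etc.) can be attached to each source file with a join. A sketch using pandas with toy frames; in practice the DataFrames would come from `ds_manifest` and `ds_code` via `.to_pandas()`:

```python
import pandas as pd

# Toy stand-ins for the manifest and code tables (real ones come from
# load_dataset(...)["train"].to_pandas()).
manifest = pd.DataFrame({
    "ascl_id": ["[ascl:2306.019]"],
    "title": ["realfast"],
})
code = pd.DataFrame({
    "ascl_id": ["[ascl:2306.019]", "[ascl:2306.019]"],
    "file_path": ["setup.py", "README.md"],
})

# Left join: every file row keeps its catalog metadata.
joined = code.merge(manifest, on="ascl_id", how="left")
print(joined["title"].tolist())  # → ['realfast', 'realfast']
```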