Eagle and Hawk Reprocessed data

So you have to reprocess data... Let's see if we can help you out with this. Good luck!

Accessing the Data

First of all, if the data is not in our system, download it from CEDA: https://data.ceda.ac.uk/neodc/arsf/ The password is here: https://rsg.pml.ac.uk/intranet/trac/wiki/Projects/ARSF-DAN/Passwords

Ideally, RAW data will be available and you can process the dataset like any other. If not, download the delivered level1b files and hdf files. You will also need to download the log file and any documentation you can find.

Create project directory and setup

Create a project directory in the appropriate location: </users/rsg/arsf/arsf_data/year/flight_data/country/project>

Choose an appropriate project name <ProjCode-year-_jjjs_Site> and create the project directories as the "arsf" user by running build_structure.py -p . Then change to user 'airborne' and create the missing directories. If the year is not found in the folder structure, you can run build_structure.py from another year and simply edit the directories manually or move them.

Once the structure is created, copy the level1b files to processing/hyperspectral/flightlines/level1b/

If any project information you need is not found in the log, you might be able to get it from the hdf files. First of all, activate the pyhdf environment: source ~utils/python_venvs/pyhdf/bin/activate

Now you can get extra info from the hdf files, for example like this: get_extra_hdf_params.py e153081b.hdf

You might need to fill in missing information for some of the scripts. One example is:

--base="Almeria" --navsys="D.Davies" --weather="good" --projcode="WM06_13" --operator="S. J. Rob" --pi="R. Tee" --pilot="C. Joseph"

If the project is not in the database or on the processing status page, it is better to add it (just follow the usual steps for unpacking a new project). Otherwise, many of the commands below will need that extra metadata passed in manually or might simply fail.

Extracting navigation and setting up

Inspect the level1b hdr files. For running APL, the Eagle and Hawk need the binning and "x start" in the hdr, as well as "Acquisition Date", "Starting time" and "Ending time". You can get some of that info by running:

    hdf_reader.py --file e153031b.hdf --item MIstime

where MIstime is the Starting Time, returned in the format hhmmss. A list of keywords is available in the az guide found under az_docs. You will also need (if not present in the data) the keywords MIdate and MIetime. You will need to calculate the binning and the x start manually, by visual inspection of the hdf lines in tuiview, as that information is usually not saved in the hdf file.
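
If you have to do this for many lines, a small loop helps. This is only a sketch: it assumes hdf_reader.py prints the requested item to stdout and that the hdf files sit in the hdf_files directory used elsewhere on this page.

    # Sketch: dump date/start/stop times for every hdf file in one go
    import glob
    import subprocess

    for hdf in sorted(glob.glob("processing/hyperspectral/flightlines/level1b/hdf_files/*1b.hdf")):
        for item in ("MIdate", "MIstime", "MIetime"):
            # assumes hdf_reader.py writes the value to stdout
            result = subprocess.run(["hdf_reader.py", "--file", hdf, "--item", item],
                                    capture_output=True, text=True)
            print(hdf, item, result.stdout.strip())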

It is possible that the level1b hdr files contain errors as well. Please double check for the most common errors:

- The sensor name and sensor id for Hawk show the details for Eagle. Correct manually to: sensor id = 300011, sensor type = SPECIM Hawk.
- reflectance scale factor = 1000.000 should instead be: Radiance data units = nW/(cm)^2/(sr)/(nm)

An example of a corrected file should look like this:

binning = {2, 2}
x start = 27
acquisition date = DATE(dd-mm-yyyy): 02-06-2006
GPS Start Time = UTC TIME: 10:58:19
GPS Stop Time = UTC TIME: 11:03:49
wavelength units         = nm
Radiance data units = nW/(cm)^2/(sr)/(nm)
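
If several headers need the same fixes, a throwaway script can patch them in place. This is a minimal sketch; the glob pattern and the exact strings being replaced are assumptions, so check what your headers actually contain first:

    # Sketch: patch the common Hawk header errors in place
    from pathlib import Path

    for hdr in Path("processing/hyperspectral/flightlines/level1b").glob("h153*1b.bil.hdr"):
        text = hdr.read_text()
        # assumed string: Hawk headers sometimes carry Eagle's sensor details
        # (the sensor id line, which should be 300011 for Hawk, may need the same treatment)
        text = text.replace("sensor type = SPECIM Eagle", "sensor type = SPECIM Hawk")
        # replace the scale factor line with the radiance units line
        text = text.replace("reflectance scale factor = 1000.000",
                            "Radiance data units = nW/(cm)^2/(sr)/(nm)")
        hdr.write_text(text)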

In the next step, you need to create the nav files by extracting the navigation information from the hdf files. Run the following command for each line (remember to activate the pyhdf environment as above):

hdf_to_bil.py processing/hyperspectral/flightlines/level1b/hdf_files/e153101b.hdf processing/hyperspectral/flightlines/navigation/interpolated/post_processed/e153101b_nav_post_processed.bil

Please note that this is the original navigation rather than truly post-processed navigation, but the scripts will be looking for the _nav_post_processed keyword. Rename the files if necessary to match the naming format.
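
To cover all flightlines in one go, something like this sketch works (it assumes every line's hdf sits in the hdf_files directory, and writes straight to the naming format the later scripts expect):

    # Sketch: extract navigation for every line, writing directly to the
    # *_nav_post_processed.bil names the later scripts look for
    import glob
    import os

    hdf_dir = "processing/hyperspectral/flightlines/level1b/hdf_files"
    nav_dir = "processing/hyperspectral/flightlines/navigation/interpolated/post_processed"
    for hdf in sorted(glob.glob(os.path.join(hdf_dir, "*1b.hdf"))):
        base = os.path.basename(hdf)[:-len(".hdf")]
        out = os.path.join(nav_dir, base + "_nav_post_processed.bil")
        os.system("hdf_to_bil.py {} {}".format(hdf, out))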

Once you have extracted the navigation, you can automatically create a DEM by specifying the BIL navigation files directory and running: create_apl_dem.py --aster -b ./processing/hyperspectral/flightlines/navigation/interpolated/post_processed/

And you can create the Specim config file, supplying the specific information:

generate_apl_config.py --base="Almeria" --navsys="D.Davies" --weather="good" --projcode="WM06_13" --operator="S. J. Rob" --pi="R. Tee" --pilot="C. Joseph" --hdfdir processing/hyperspectral/flightlines/level1b/hdf_files

Please note you will not be able to process the data with the config file if there was no RAW data. The script will print lots of errors, but it will still create a config file that makes creating the xml files at the last stage much easier. The config file will have all the metadata for each line, including altitude (for the pixelsize calculator), and should have the UTM zone for the projection. However, the config file will assume that all files are CASI; rename them and make sure all files match the Eagle and Hawk flightlines. Double check and complete all the information (like any other airborne request), including projection, DEM, pixel size, bands to map...

Processing flightlines

As there is no raw data, you will need to run each APL command manually. Simply create a Python script that goes over the files and runs each step: aplcorr, apltran, aplmap and aplxml. An example of the aplcorr command is:

    aplcorr -vvfile ~arsf/calibration/2006/hawk/hawk_fov_fullccd_vectors.bil \
    -navfile processing/hyperspectral/flightlines/navigation/interpolated/post_processed/h153{:02}1b_nav_post_processed.bil \
    -igmfile processing/hyperspectral/flightlines/georeferencing/igm/h153{:02}1b_igm.bil \
    -dem processing/hyperspectral/dem/WM06_13-2006_153-ASTER.dem \
    -lev1file processing/hyperspectral/flightlines/level1b/h153{:02}1b.bil \
    >> processing/hyperspectral/logfiles/h-{:02}-log

The output of the script will be saved directly to the logfile "h-{:02}-log", which is used later on to create the xml files. Each APL command must appear only once in the log file, so check the output and make sure the lines can be processed before looping over all lines. If something went wrong, you can simply delete the log file and create a new one. Otherwise, you will have to delete the extra commands manually.
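
Before looping, a quick check like this sketch can flag duplicated commands (it assumes the APL tool names get echoed into the log, so simply counting occurrences is a usable heuristic):

    # Sketch: warn if an APL command shows up more than once in a line's log
    logfile = "processing/hyperspectral/logfiles/h-01-log"
    with open(logfile) as f:
        log = f.read()
    for tool in ("aplcorr", "apltran", "aplmap"):
        if log.count(tool) > 1:
            print("{} appears more than once in {}; delete the duplicates".format(tool, logfile))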

An example for running apltran is:

    apltran -igm processing/hyperspectral/flightlines/georeferencing/igm/h153{:02}1b_igm.bil \
    -output processing/hyperspectral/flightlines/georeferencing/igm/h153{:02}1b_igm_utm.bil -outproj utm_wgs84N ZZ \
    >> processing/hyperspectral/logfiles/h-{:02}-log

where ZZ is the UTM zone.

And finally aplmap:

    aplmap -igm processing/hyperspectral/flightlines/georeferencing/igm/h153{:02}1b_igm_utm.bil -lev1 processing/hyperspectral/flightlines/level1b/h153{:02}1b.bil \
    -mapname processing/hyperspectral/flightlines/georeferencing/mapped/h153{:02}3b_mapped.bil -bandlist ALL -pixelsize 1 1 -buffersize 4096 -outputdatatype uint16 \
    >> processing/hyperspectral/logfiles/h-{:02}-log

If you create a script to run that code for each line, then the processing should be mostly done, and the same script can also create the xml files:

    cmd = 'aplxml.py --line_id={} \
    --config_file "/data/nipigon1/scratch/arsf/2006/flight_data/spain/WM06_13-2006_153_Rodalquilar/processing/hyperspectral/2006153.cfg" \
    --output=/users/rsg/arsf/arsf_data/2006/flight_data/spain/WM06_13-2006_153_Rodalquilar/processing/delivery/WM06_13-153-hyperspectral-20211005/flightlines/line_information/h153{:02}1b.xml \
    --meta_type=i --sensor="hawk" \
    --lev1_file=/data/nipigon1/scratch/arsf/2006/flight_data/spain/WM06_13-2006_153_Rodalquilar/processing/hyperspectral/flightlines/level1b/h153{:02}1b.bil \
    --igm_header=/data/nipigon1/scratch/arsf/2006/flight_data/spain/WM06_13-2006_153_Rodalquilar/processing/hyperspectral/flightlines/georeferencing/igm/h153{:02}1b_igm.bil.hdr \
    --logfile={} \
    --raw_file=/data/nipigon1/scratch/arsf/2006/flight_data/spain/WM06_13-2006_153_Rodalquilar/processing/hyperspectral/flightlines/level1b/hdf_files/h153{:02}1b.hdf \
    --reprocessing --flight_year yyyy --julian_day jjj --projobjective="-" --projsummary="-"'.format(line, line, line, line, logfile, line)

Please note the --reprocessing flag is needed in this case.

That way, your general script structure in Python should look like this:

    # A Python script for running the processing commands over each line
    import os

    for line in range(1, len(flightlines) + 1):  # assuming lines are numbered from 1
        # TODO: define cmd for each step (aplcorr, apltran, aplmap and aplxml)
        # using the examples above
        cmd = "EDIT HERE".format(line)
        stream = os.popen(cmd)
        output = stream.read()
        print(output)
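
As an illustration, cmd for the aplcorr step could be built from the example above like this (a sketch; swap the paths and the sensor prefix for your own project):

    cmd = ("aplcorr -vvfile ~arsf/calibration/2006/hawk/hawk_fov_fullccd_vectors.bil "
           "-navfile processing/hyperspectral/flightlines/navigation/interpolated/post_processed/h153{0:02}1b_nav_post_processed.bil "
           "-igmfile processing/hyperspectral/flightlines/georeferencing/igm/h153{0:02}1b_igm.bil "
           "-dem processing/hyperspectral/dem/WM06_13-2006_153-ASTER.dem "
           "-lev1file processing/hyperspectral/flightlines/level1b/h153{0:02}1b.bil "
           ">> processing/hyperspectral/logfiles/h-{0:02}-log").format(line)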

If everything went according to plan, you should have everything ready for creating a delivery.

Delivery creation

You should create the structure first:

make_arsf_delivery.py --projectlocation <insert location and name> \
                      --deliverytype hyperspectral --steps STRUCTURE

If happy, then run again with --final

Next, you should check the other steps by running:

make_arsf_delivery.py --projectlocation <insert location and name> \
                      --deliverytype hyperspectral --notsteps STRUCTURE

Inspect the output, especially the files that will be moved. In this case, it might be easier to move the files yourself (or copy them if unsure) and skip this step before --final. For the Eagle and Hawk, most of the steps will run as expected, except most likely the PROJXML step. For this one, you might need to pass extra information, for example: --maxscanangle=0 --area Sitia --piemail unknown --piname "G Ferrier" --projsummary "-" --projobjective "-"

Or you can simply run aplxml.py to create this xml file. An example is:

    aplxml.py --meta_type p \
    --project_dir /users/rsg/arsf/arsf_data/2005/flight_data/greece/130_MC04_15 \
    --output /users/rsg/arsf/arsf_data/2005/flight_data/greece/130_MC04_15/processing/delivery/MC04_15-2005-130/project_information/MC04_15-2005_130-project.xml \
    --config_file /users/rsg/arsf/arsf_data/2005/flight_data/greece/130_MC04_15/processing/hyperspectral/2005130_from_hdf.cfg \
    --lev1_dir /users/rsg/arsf/arsf_data/2005/flight_data/greece/130_MC04_15/processing/delivery/MC04_15-2005-130/flightlines/level1b/ \
    --igm_dir /users/rsg/arsf/arsf_data/2005/flight_data/greece/130_MC04_15/processing/hyperspectral/flightlines/georeferencing/igm/ \
    --area Sitia --piemail unknown --piname "G Ferrier" --projsummary "-" --projobjective "-" --project_code "MC04_15"

If everything went according to plan, the delivery should be nearly all done. You are likely to encounter new errors along the processing chain, so pay special attention to each step's error messages.

Delivery Readme file

Once the delivery creation is successful, the only thing left should be creating a Readme file. Simply run: generate_readme_config.py -d <delivery directory> -r hyper -c <config_file> This will create a config file for the delivery. If there are no mask files, the APL example commands are likely to fail, and you will need to enter the APL commands manually. Do not leave the aplmask field empty, as the scripts will fail; enter a "void" string to remove from the tex file later.
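
Stripping those placeholders afterwards can be done by hand, or with a short sketch like this (it assumes "void" is the placeholder string you entered; the tex filename here is hypothetical, so use your delivery's actual readme tex):

    # Sketch: drop any line containing the "void" placeholder from the tex file
    with open("readme.tex") as f:          # hypothetical filename
        lines = f.readlines()
    with open("readme.tex", "w") as f:
        f.writelines(l for l in lines if "void" not in l)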

Once again, if there are no mask files, the readme creation (create_latex_hyperspectral_apl_readme.py) will fail as it tries to read the underflows and overflows. You need to skip this step by running the script with --skip_outflows. A simple example is: create_latex_hyperspectral_apl_readme.py -o . -f hyp_genreadme-airborne.cfg --skip_outflows -s eagle

This should create the Readme file. Edit the tex file and remove all references to the mask files and any other section that does not apply. Complete the data quality section as required; you can use the reprocessed data quality report as a template, or another reprocessed dataset as an example (such as 153 2006).
