Changes between Version 93 and Version 94 of Procedures/RCDPhotoProcessing


Timestamp: Aug 3, 2017, 3:05:38 PM
Author: dac
Comment: Updates for PhaseOne

  • Procedures/RCDPhotoProcessing

= Tagging Digital Camera Images =

The RCD produces raw files that need to be processed in order to create TIFF files. See the [wiki:Sensors/RCD_CH39 RCD page] for instrument details, including the filename convention.

== Raw to Tiff ==

The first stage in processing the photographic data is to convert the raw file format into a 16-bit TIFF format. The procedure for processing raw images to tif images can be found [wiki:Procedures/RCDPhotoProcessing/RawtoTif here].

== Generate Events File (Phase One only) ==

The PhaseOne camera does not generate a CSV file with the location of each image, so one needs to be created manually for subsequent processing. To do this, use the following command:
{{{
create_phaseone_events_file.py -o processing/phaseone/20170613_ImageEvents1.csv \
                               -n posatt/ipas20/proc/2017164_ipas20.sol \
                               --time_diff 2 processing/phaseone/proc_images/*.tif
}}}
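As a quick sanity check (not part of the original procedure), you can compare the number of rows in the generated events file with the number of tif images passed to the script; allowing for any header row, the two counts should agree:

{{{
# rows in the generated events file
wc -l processing/phaseone/20170613_ImageEvents1.csv
# number of input tif images
ls processing/phaseone/proc_images/*.tif | wc -l
}}}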

== Navigation post-processing (Leica RCD 105 only) ==

 1. Get a camera .sol file
  * Check there is a *_camera.sol file in the IPAS/proc directory. If there is not, then you will need to create one. See details [wiki:Procedures/ProcessingChainInstructions/NavigationProcessing here].
 1. Create a new image event file with post-processed positional data and omega, phi, kappa values
  * Open IPAS CO (on the Windows machine)
     
The flight line logs in /lidar/als50/logs/ also need to be combined into one file, as they're used to find the flight line names when creating the delivery readme.
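A minimal sketch of one way to combine them, assuming plain-text logs; the output filename below is only illustrative, not a required name:

{{{
# concatenate the individual flight line logs into a single file
cat /lidar/als50/logs/* > processing/als50_flightline_logs.txt
}}}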

== Create thumbnails to check images ==

  1. Run `digital_camera_images_in_pdf.py` from within the project directory (see the example after this list).
     * If there is no LiDAR data, or the LiDAR failed, pass the `--nolidar` flag so the script will not check against the LiDAR lines.
     * The script will create thumbnails and a pdf preview page. Any tif images which do not correspond to a flightline will be moved to proc_images/outbounds, and any images with corrupted eventfile information will be moved to proc_images/nogps.
  1. Look through the pdf and remove all the over/under-exposed tif images (leave the raws) from proc_images/ and nogps/.
  1. If any nogps images fall between the outbounds images then remove the tif image; e.g. if images 1, 2, 3 and 6 are in outbounds then the nogps images 4 and 5 will be outbounds as well.
  1. Note down any images left in the nogps folder, because these images will not be fully tagged and need to be mentioned in the Read Me, then move them into proc_images with the other tif files.
  1. Remove the outbounds and (now empty) nogps directories.
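For reference, based on the description above, the two usual invocations (run from within the project directory) are:

{{{
# normal run - images are checked against the LiDAR flight lines
digital_camera_images_in_pdf.py
# use --nolidar when there is no usable LiDAR data for the flight
digital_camera_images_in_pdf.py --nolidar
}}}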

== Image Tagging ==

Image tagging inserts exif tags into the TIFF files; these tags contain information such as project details, camera parameters and photograph pos/att information.

=== Fix IPASCO CSV (Leica RCD105 only) ===

Assuming a CSV file as output from IPASCO, the first thing to do is to fix the header in the csv file (a space-delimited file with spaces in the header names!):

{{{
digital_camera_tif_tagging.py --eventfile <eventfilename> --fixipascoheader <fixedeventfilename>
}}}

This creates a new event file with a parseable header. Note that the DEFAULT header is HARD CODED in the script. If it is not the same as the one below then you can call the function `rcdclasses.FixIPASCOEventFileHeader(filename,newfilename,origheader,newheader)` from within python to convert `origheader` to `newheader` (see the sketch after the default header below).

Default header:
     
}}}
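If the header does need converting, a minimal sketch of the in-python call is shown below; the filenames and header strings are placeholders, not values from a real project:

{{{
# run from a python session where the rcdclasses library is importable
import rcdclasses

# substitute the header actually present in the IPASCO csv and the
# default header expected by the tagging script
origheader = "<header line currently in the csv>"
newheader = "<default header expected by the script>"

rcdclasses.FixIPASCOEventFileHeader("<eventfilename>", "<fixedeventfilename>",
                                    origheader, newheader)
}}}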

=== Tag files ===

You are now ready to tag the tiffs. The script `digital_camera_tif_tagging.py` has a lot of options to help in the tagging of images, but for usual NERC-ARF processing you should only need to specify the solfile, eventfile and project location (for other options see `--help`):

{{{
digital_camera_tif_tagging.py --eventfile <eventfilename> --solfile <solfilename> --projectlocation <toplevelprojectpath>
}}}

where <eventfilename> is the event file; for the RCD105 this will be the one with the fixed headers generated in the step above.

If the script output is OK (no errors etc.) then add `--final` to the command to actually perform the tagging. The script will also create an updated event file in the same directory as the photographs.
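As an optional spot check, not part of the original procedure, the tags written into an image can be inspected with a standard EXIF reader, for example exiftool if it is installed:

{{{
# print the exif tags of one tagged image to confirm the project and pos/att fields are present
exiftool proc_images/<someimage>.tif
}}}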

== Making the Delivery ==

Now that the processing has been completed the delivery can be made. This, as with the hyperspectral and lidar deliveries, uses the [wiki:Procedures/DeliveryCreation/pythonlibrary arsf_delivery_library] together with the convenience script make_arsf_delivery.py. An example usage:

{{{
make_arsf_delivery.py --projectlocation /users/rsg/arsf/arsf_data/2013/flight_data/uk/MYPROJECT \
                      --deliverytype camera --steps STRUCTURE
}}}

Check that the output looks correct, and if so repeat with `--final`. This creates an empty delivery structure. Then run:

{{{
make_arsf_delivery.py --projectlocation /users/rsg/arsf/arsf_data/2013/flight_data/uk/MYPROJECT \
                      --deliverytype camera --notsteps STRUCTURE
}}}

This will do a dry run of the camera delivery and output information clearly labelled for each step. If the script outputs no error messages, repeat with `--final`.
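Putting the dry runs and final runs together, the full sequence for the example project above is:

{{{
# stage 1: create the empty delivery structure (dry run, then --final)
make_arsf_delivery.py --projectlocation /users/rsg/arsf/arsf_data/2013/flight_data/uk/MYPROJECT --deliverytype camera --steps STRUCTURE
make_arsf_delivery.py --projectlocation /users/rsg/arsf/arsf_data/2013/flight_data/uk/MYPROJECT --deliverytype camera --steps STRUCTURE --final
# stage 2: populate the delivery (dry run, then --final)
make_arsf_delivery.py --projectlocation /users/rsg/arsf/arsf_data/2013/flight_data/uk/MYPROJECT --deliverytype camera --notsteps STRUCTURE
make_arsf_delivery.py --projectlocation /users/rsg/arsf/arsf_data/2013/flight_data/uk/MYPROJECT --deliverytype camera --notsteps STRUCTURE --final
}}}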

== Creating the Read me ==

 * Run `generate_readme_config.py` (as the airborne user) with the `-d` option (giving the delivery directory) and `-r camera` (see the example command sequence after this list).
 * Edit the config file (it should be located in the processing directory). Remember to add information on any photos which could not be tagged fully, or any images which look like they have anomalies or over/under exposure. To add new line characters enter '\\'. Tagtype should be "full" if photos have been tagged with pos/att information, or "min" if only tagged with project details. If both types are present in the delivery then use "full" and, in the "data_quality_remarks" section, add a sentence explaining which photos could not be tagged with pos/att and why. Line_numbering should contain a space separated list of names/numbers identifying flight lines.
 * Create the LaTeX TeX file. To do this, run the script create_latex_camera_readme.py from the processing directory with the `-f` option, giving the location and name of the config file generated above. The TeX file can be opened in any text editor and manually edited to correct mistakes or to insert new text.
 * Convert the TeX file into a PDF file. This is done using the command `pdflatex <TeXFile>` and should create a file named Read_me-<TODAYSDATE>.pdf. If you get an error about missing .sty files, then yum install whatever is missing. For example, if you get complaints about supertabular.sty being missing, run `yum install texlive-supertabular`.
 * It is advisable to keep the TeX file somewhere safe until after the delivery has been checked, in case changes to the Read_me need to be made (the TeX file should not be part of the delivery).
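To summarise the list above as commands (the filenames are placeholders; the flags are the ones described above):

{{{
# 1. generate the readme config file for a camera delivery
generate_readme_config.py -d <deliverydirectory> -r camera
# 2. after editing the config, build the TeX file from it
create_latex_camera_readme.py -f <configfilename>
# 3. compile the TeX file into the Read_me PDF
pdflatex <TeXFile>
}}}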

== Subsequent processing ideas ==

There are several other steps that could be undertaken in the future:
 * orthorectification (map the photos with respect to the ground/aircraft position)
 * geocorrection (map the photos with respect to the ground plus a DEM) - possibly only Bill's azgcorr mods could do this
 * compositing orthorectified photos and seam-line adjustment
   * compositing is easy, but will have ugly problems where you get different views of an object with vertical structure
   * to improve the look of this, you have to manually edit the positioning of the joins - this is currently a very manual process and we do not have software for it