ORIGINAL_ARTICLE
Evaluation of a 2-D transformation model for georeferencing of Synthetic Aperture RADAR imagery
Synthetic Aperture Radar (SAR) imaging geometry introduces geometric distortions that cause positioning errors. To compensate for these distortions, information about the sensor position, the imaging geometry, and the target altitude above the ellipsoid must be available. In this paper, a method for the geometric calibration of SAR images is proposed. The method is based on the Range-Doppler (RD) equations, and georeferencing is carried out using a Digital Elevation Model (DEM) together with precise ephemeris data of the sensor. First, the DEM is transformed into the range and azimuth directions. Then, the original image is registered to the transformed DEM using three 2-D transformation models: conformal, affine, and projective. The advantage of the described method is that it eliminates the need for ground control points and for the rotational parameters of the sensor. Since the ground range resolution of the images used is about 30 m, the geocoded images produced by this method achieve, in the best case, an accuracy of about 20 m (sub-pixel) in planimetry and about 33 m in altimetry.
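The 2-D models evaluated in the paper (conformal, affine, projective) can each be estimated from tie points by least squares. A minimal sketch of the affine case follows, using hypothetical tie-point coordinates (not the paper's data):

```python
import numpy as np

# Hypothetical tie points: coordinates in the transformed DEM (range,
# azimuth) and the corresponding pixel coordinates in the SAR image.
dem_pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
img_pts = np.array([[10.0, 5.0], [109.0, 7.0], [12.0, 104.0], [111.0, 106.0]])

def fit_affine(src, dst):
    """Least-squares fit of x' = a*x + b*y + c, y' = d*x + e*y + f."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])           # n x 3 design matrix
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)  # 3 x 2 parameter block
    return coef

coef = fit_affine(dem_pts, img_pts)
residuals = dem_pts @ coef[:2] + coef[2] - img_pts
rmse = float(np.sqrt(np.mean(residuals ** 2)))
```

The conformal (4-parameter) and projective (8-parameter) models differ only in the design matrix built for the same least-squares step.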
https://eoge.ut.ac.ir/article_72632_93313cecedd31619a1d18b925b2320a3.pdf
2019-06-01
1
11
10.22059/eoge.2019.248253.1018
Range-Doppler
Digital Elevation Model
Transformation
Georeferencing
Topographic errors
Majid
Esmaeilzadeh
m.esmaeilzade@ut.ac.ir
1
School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran, Iran
AUTHOR
Jalal
Amini
jamini@ut.ac.ir
2
School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran, Iran
LEAD_AUTHOR
ORIGINAL_ARTICLE
Combination of Post-Earthquake LiDAR Data and Satellite Imagery for Building Damage Detection
Earthquakes are among the deadliest natural disasters and have caused many fatalities and much homelessness throughout history. Because earthquakes are unpredictable, the rapid provision of building damage maps to reduce losses after an event has become an essential topic in photogrammetry and remote sensing. Low-accuracy building damage maps waste time needed to rescue people in destroyed areas by wrongly deploying rescue teams toward undamaged areas. In this research, an object-based algorithm combining LiDAR raster data and high-resolution satellite imagery (HRSI) was developed for building damage detection to improve relief operations. The algorithm combines the classification results of both data sources to categorize the area into three classes, "Undamaged," "Probably Damaged," and "Surely Damaged," based on object-level analysis. The proposed method was tested using a WorldView-2 satellite image and LiDAR data of Port-au-Prince, Haiti, acquired after the 2010 earthquake. The overall accuracy of 92% demonstrates the high capability of the proposed method for post-earthquake damaged-building detection.
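The three-class categorization can be illustrated with a simple object-level decision rule that merges the two per-source classifications; the rule below is a hypothetical sketch, not the paper's exact combination table:

```python
# Hypothetical rule for fusing per-building labels from the LiDAR-based
# and image-based classifications into the three damage classes.
def combine(lidar_label: str, image_label: str) -> str:
    if lidar_label == "damaged" and image_label == "damaged":
        return "Surely Damaged"       # both sources agree on damage
    if lidar_label == "damaged" or image_label == "damaged":
        return "Probably Damaged"     # only one source indicates damage
    return "Undamaged"                # neither source indicates damage

labels = [combine("damaged", "damaged"),
          combine("damaged", "intact"),
          combine("intact", "intact")]
```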
https://eoge.ut.ac.ir/article_72633_9eaf7f2f7ae9d22757cc5fd33f0918f1.pdf
2019-06-01
12
20
10.22059/eoge.2019.278307.1046
earthquake
Building Damage Detection
High Resolution Satellite Image (HRSI)
LiDAR
Niloofar
Khodaverdi
niloofar.khodaverdizahrai1992@gmail.com
1
School of Surveying and Geospatial Engineering, College of Eng., University of Tehran, Iran
AUTHOR
Heidar
Rastiveis
hrasti@ut.ac.ir
2
School of Surveying and Geospatial Engineering, College of Eng., University of Tehran, Iran
LEAD_AUTHOR
Arash
Jouybari
arash.jouybari@hig.se
3
Faculty of Engineering and Sustainable Development, Department of Computer and Geospatial sciences, University of Gävle, Sweden
AUTHOR
ORIGINAL_ARTICLE
An integrated Fuzzy AHP-VIKOR method for Gold Potential Mapping in Saqez prospecting zone, Iran
Mineral prospectivity mapping (MPM) is a multi-criteria decision-making (MCDM) task that prioritizes mineralized areas from high to low potential through methodologies that deal with data-fusion problems. The primary purpose of this research is to produce a mineral potential map of the Saqez area in Kurdistan, western Iran, to delimit promising areas for subsequent gold exploration studies. The area is well known for its orogenic gold occurrences, and several deposits/prospects have been discovered there recently. To seek blind targets in the region, three evidential criteria, comprising geological, remote sensing, and geochemical layers, were used to generate the gold potential map. The weights of the evidential layers were obtained with the fuzzy analytical hierarchy process (fuzzy AHP); this technique requires at least three expert decision-makers (DMs), and its results are superior to those of the conventional AHP method, which suffers from high uncertainty. The obtained criterion weights were used to integrate all indicator layers through three methods: conventional VIKOR, modified VIKOR, and multi-class index overlay. To validate the final MPMs, the prediction ratio of seven gold prospects falling within favorable zones was calculated as an efficiency index, yielding 85.65%, 85.29%, and 78.47%, respectively. This demonstrated the advantage of the VIKOR approach over multi-class index overlay for gold potential mapping. The southwestern portions of the prospecting zone were highlighted as the most favorable areas for gold occurrences.
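The conventional VIKOR ranking step can be sketched as follows, using hypothetical evidential scores and fuzzy-AHP weights (the actual layers and weights come from the paper's expert elicitation):

```python
import numpy as np

def vikor(F, w, v=0.5):
    """Conventional VIKOR: F is an (alternatives x criteria) benefit
    matrix, w the criteria weights (e.g. from fuzzy AHP)."""
    f_best, f_worst = F.max(axis=0), F.min(axis=0)
    norm = (f_best - F) / (f_best - f_worst)   # distance from ideal
    S = (w * norm).sum(axis=1)                 # group utility
    R = (w * norm).max(axis=1)                 # individual regret
    QS = (S - S.min()) / (S.max() - S.min())
    QR = (R - R.min()) / (R.max() - R.min())
    return v * QS + (1 - v) * QR               # lower Q = higher potential

# Hypothetical scores of three cells on geological, remote sensing, and
# geochemical evidence, with hypothetical fuzzy-AHP weights.
F = np.array([[0.9, 0.8, 0.7],
              [0.4, 0.5, 0.6],
              [0.2, 0.1, 0.3]])
w = np.array([0.5, 0.3, 0.2])
Q = vikor(F, w)
ranking = np.argsort(Q)    # best (lowest Q) first
```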
https://eoge.ut.ac.ir/article_72634_9c85fbdfbdcf4f0e2f023dd8f45aefaf.pdf
2019-06-01
21
33
10.22059/eoge.2019.263180.1027
Fuzzy-AHP
Vikor
Multi-class Index Overlay
Orogenic Gold Deposit
Saqez
Farzaneh
Khalifani
mami.farzane@gmail.com
1
School of Mining Engineering, College of Eng., University of Tehran, Tehran, Iran
AUTHOR
Abbas
Bahroudi
bahroudi@ut.ac.ir
2
School of Mining Engineering, College of Eng., University of Tehran, Tehran, Iran
LEAD_AUTHOR
Samaneh
Barak
samaneh_barak@ut.ac.ir
3
School of Mining Engineering, College of Eng., University of Tehran, Tehran, Iran
AUTHOR
Maysam
Abedi
maysamabedi@ut.ac.ir
4
School of Mining Engineering, College of Eng., University of Tehran, Tehran, Iran
AUTHOR
ORIGINAL_ARTICLE
Handling ill-posedness and overparameterization of rational function model using Bi-objective particle swarm optimization
The coexistence of ill-posedness and overparameterization in the rational function model (RFM) makes it difficult to determine the rational polynomial coefficients (RPCs). Meta-heuristic algorithms have been widely used in this regard, yet finding an optimal RFM structure remains challenging because of these two phenomena. Existing meta-heuristic methods focus on overparameterization and try to remove unnecessary RPCs using binary particles. Although solving overparameterization can partly address ill-posedness, meta-heuristics do not achieve the desired results by focusing on overparameterization alone. It therefore seems necessary to consider both phenomena to reach an optimal RFM structure. Accordingly, in this study, a bi-objective particle swarm optimization (PSO) algorithm, named BOPSO-RFM, is proposed to determine the optimal RFM structure. The method minimizes two objective functions: 1) the root mean square error (RMSE) over a subset of the ground control points (GCPs), and 2) the maximum Pearson correlation coefficient between the columns of the design matrix, each of which corresponds to one of the RPCs. While binary meta-heuristic algorithms mostly address overparameterization by considering binary particles and calculating the RMSE over some GCPs, the added objective function addresses ill-posedness. Experiments conducted on three high-resolution datasets show that the proposed method yields average improvements of 95% and 29% in RMSE-based accuracy and of 99% and 76% in stability over the well-known PSORFO method and the state-of-the-art PSO-KFCV method, respectively. Moreover, analysis of the design matrix obtained from the final RFM structure revealed that the average condition number of the BOPSO-RFM results was 1.14e+9 and 7.39e+4 times lower than those of PSORFO and PSO-KFCV, respectively.
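The second objective, the maximum Pearson correlation between design-matrix columns, and the condition number used in the stability analysis can both be computed as below; the design matrix here is a synthetic stand-in with two nearly collinear columns to mimic ill-posedness:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical RFM design matrix: 50 observations, 6 candidate RPC terms;
# column 5 is made nearly collinear with column 4.
A = rng.standard_normal((50, 6))
A[:, 5] = A[:, 4] + 1e-3 * rng.standard_normal(50)

def max_column_correlation(A):
    """Maximum absolute Pearson correlation between distinct columns."""
    C = np.corrcoef(A, rowvar=False)
    np.fill_diagonal(C, 0.0)      # ignore self-correlations
    return float(np.max(np.abs(C)))

max_corr = max_column_correlation(A)   # second objective of BOPSO-RFM
cond = float(np.linalg.cond(A))        # stability indicator
```

A particle that drops one of the two collinear columns would see both quantities fall sharply, which is exactly what the added objective rewards.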
https://eoge.ut.ac.ir/article_72635_c017dc174ba0c0e7146713d1f21d4361.pdf
2019-06-01
34
42
10.22059/eoge.2019.282476.1049
Rational Function Models (RFMs)
Particle Swarm Optimization (PSO)
ill-posedness and over-parameterization phenomena
Multi-objective optimization
Saeid
Gholinejad
saeid.gholinejad@trn.ui.ac.ir
1
Department of Geomatics Engineering, Faculty of Civil Engineering and Transportation, University of Isfahan, Isfahan, Iran.
AUTHOR
Amin
Alizadeh Naeini
a.alizadeh@eng.ui.ac.ir
2
Department of Geomatics Engineering, Faculty of Civil Engineering and Transportation, University of Isfahan, Isfahan, Iran.
LEAD_AUTHOR
Alireza
Amiri-Simkooei
amiri@eng.ui.ac.ir
3
Department of Geomatics Engineering, Faculty of Civil Engineering and Transportation, University of Isfahan, Isfahan, Iran.
AUTHOR
ORIGINAL_ARTICLE
Application of Local Supervised Feature Selection Approach to Target Detection in Hyperspectral Imagery
Feature selection (FS) for target detection (TD) attempts to select features that enhance the discrimination between the target and the image background. Moreover, TD usually suffers from background interference; therefore, features that help detectors suppress the background signal and magnify the target signal are considered more useful. Accordingly, in this paper, a supervised FS method based on the TD concept, called autocorrelation-based feature selection (AFS), is proposed. This method uses the image autocorrelation matrix and the target signature in the detection space (DS) for FS. Features that increase the first-norm distance between the target energy and the mean energy of the background in the DS are selected as the optimal features. To evaluate the proposed method and to explore the impact of FS on TD performance, the target detection accuracy (TDA) measure is employed. The experiments show that the proposed method outperforms the two existing FS methods used for comparison: AFS achieves a maximum TDA value of 19.02% using 58 features, while the other methods achieve much lower values. Furthermore, the effect of image partitioning on TD performance in both the full-band and the reduced-dimensionality feature spaces is investigated. The experimental results show that partitioning, as a way of adding local spatial information to TD, dramatically improves TD performance. The HyMap dataset is employed for all experiments.
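The role of the autocorrelation matrix in suppressing background energy while keeping a unit target response can be illustrated with a constrained-energy-minimization-style filter; this is a generic sketch on synthetic data, not the paper's AFS scoring itself:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 5-band scene: 200 background pixels plus one known target.
background = rng.normal(0.3, 0.05, size=(200, 5))
target = np.array([0.8, 0.2, 0.7, 0.1, 0.9])

# Sample autocorrelation matrix over all image pixels.
X = np.vstack([background, target])
R = X.T @ X / X.shape[0]

# CEM-style filter: minimizes output energy subject to unit target gain.
Rinv = np.linalg.inv(R)
w = Rinv @ target / (target @ Rinv @ target)

t_energy = float(w @ target)                   # constrained to 1
bg_energy = float(np.mean((background @ w) ** 2))
separation = abs(t_energy - bg_energy)         # larger -> better band subset
```

In the AFS spirit, a band subset that increases this separation is preferred, because the filter then suppresses the background more effectively.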
https://eoge.ut.ac.ir/article_72656_2bb3e621d119045a14163a8d5d18cb84.pdf
2019-06-01
43
53
10.22059/eoge.2019.279956.1047
Supervised Feature Selection
Target Detection
Background Suppression
Hyperspectral Imagery
Amir
Moinirad
amoinirad@mail.kntu.ac.ir
1
Department of Photogrammetry and Remote Sensing, Faculty of Geodesy and Geomatics Engineering, K.N. Toosi University of Technology, Tehran, Iran
AUTHOR
Aliakbar
Abkar
abkar@alumni.itc.nl
2
GeoInfoSolutions BV, Almelo, the Netherlands
AUTHOR
Barat
Mojaradi
mojaradi@iust.ac.ir
3
Department of Geomatics Engineering, School of Civil Engineering, Iran University of Science and Technology, Tehran, Iran
LEAD_AUTHOR
ORIGINAL_ARTICLE
Developing a VGI method for 3D city modeling based on CityGML and Open Data Kit
Owing to technological developments, 3D city models have become valuable in various domains such as emergency services, facilities management, tourism, and entertainment, with applications including the estimation of solar irradiation, routing, and lighting simulations. However, many cities in the world, especially in developing countries, still lack 3D city models. The main reason for this deficiency seems to be that 3D city models are expensive; furthermore, acquiring the semantic and thematic data that are an indispensable part of 3D city models is an exhausting and time-consuming task. Nowadays, an inexpensive and fast geospatial data collection technique based on the crowdsourcing concept, known as Volunteered Geographic Information (VGI), has been developed. In this paper, we used VGI as a free and rapid data-gathering technique to address the above-mentioned problems, with Shahid Rajaee Teacher Training University as the study area. Through VGI, we gathered the minimum data required to create a 3D city model based on CityGML, the best-known and most widely accepted standard, and used 3DCityDB, which supports CityGML, for data storage. To collect the required data, an Android mobile application was developed based on Open Data Kit (ODK). The volunteers were asked to provide their estimates of building heights as well as some other spatial and attribute data. Consequently, a 3D city model was produced based on the CityGML standard at Levels of Detail (LOD) 1 and 2. For validation, the building heights obtained from VGI were compared with accurately measured heights; the resulting RMSE of 1.33 m demonstrates the ability of VGI to collect reliable datasets.
https://eoge.ut.ac.ir/article_72829_a298c12a9c77c89e041abdf6e7ff9220.pdf
2019-06-01
54
63
10.22059/eoge.2019.271705.1042
3D city model
CityGML
Volunteered Geographic Information
Open Data Kit
3DCityDB
Farhad
Hosseinali
f.hosseinali@sru.ac.ir
1
Department of Surveying Engineering, Faculty of Civil Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran
LEAD_AUTHOR
Ali
Khosravi Kazazi
alikhosravi@sru.ac.ir
2
Department of Surveying Engineering, Faculty of Civil Engineering, Shahid Rajaee Teacher Training University
AUTHOR
ORIGINAL_ARTICLE
Preparation of flood susceptibility mapping using an ensemble of frequency ratio and adaptive neuro-fuzzy inference system models
Floods are among the most common natural disasters and impose severe financial and human losses every year. It is therefore necessary to prepare susceptibility and vulnerability maps for comprehensive flood management to reduce their destructive effects. This study provides flood susceptibility mapping for Jahrom (Fars Province) using a combination of the frequency ratio (FR) and adaptive neuro-fuzzy inference system (ANFIS) models and compares their accuracy. In total, 51 flood locations were identified; 35 of them were randomly selected to model flood susceptibility, and the remaining 16 were used to validate the models. Nine flood conditioning factors, namely slope degree, plan curvature, altitude, topographic wetness index (TWI), stream power index (SPI), distance from the river, land use/land cover, rainfall, and lithology, were selected, and the corresponding maps were prepared in ArcGIS. After the flood susceptibility maps were prepared with these methods, the relative operating characteristic (ROC) curve was used to evaluate the results. The area under the curve (AUC) indicated accuracies of 89% and 91.2% for the FR and the ensemble ANFIS-FR models, respectively. These results can be useful for managers, researchers, and designers in managing flood-vulnerable areas and reducing flood damage.
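The frequency ratio for a single class of a conditioning factor compares the share of flood occurrences in that class with the share of the study area it covers; a minimal sketch with hypothetical counts:

```python
# Frequency ratio for one conditioning-factor class:
# FR = (flood pixels in class / all flood pixels)
#    / (class pixels / all pixels in the study area)
def frequency_ratio(flood_in_class, flood_total, class_pixels, area_pixels):
    return (flood_in_class / flood_total) / (class_pixels / area_pixels)

# Hypothetical counts: a low-slope class covers 20% of the area but
# hosts 50% of the observed flood locations -> FR = 2.5 (flood-prone).
fr = frequency_ratio(flood_in_class=10, flood_total=20,
                     class_pixels=2000, area_pixels=10000)
```

An FR above 1 marks a class that is over-represented among flood locations; the per-class FR values then feed the susceptibility map (and, in the ensemble, the ANFIS inputs).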
https://eoge.ut.ac.ir/article_72828_6a619bdf290d46b1d33231af77183cda.pdf
2019-06-01
64
77
10.22059/eoge.2019.269239.1035
Flood susceptibility
Frequency ratio (FR) model
adaptive neuro-fuzzy inference system (ANFIS)
Jahrom town
Geographic Information System (GIS)
Seyed Vahid
Razavi Termeh
vrazavi70@gmail.com
1
Geoinformation Technology Center of Excellence, Faculty of Geomatics, K.N. Toosi University of Technology, Tehran, Iran
AUTHOR
Abolghasem
Sadeghi-Niaraki
a.sadeghi@kntu.ac.ir
2
Geoinformation Technology Center of Excellence, Faculty of Geomatics, K.N. Toosi University of Technology, Tehran, Iran
LEAD_AUTHOR
ORIGINAL_ARTICLE
A modified wavelet-based method for detection of outliers in time series
As a multi-resolution analysis tool, the wavelet transform has been used to detect potential outliers in time series data without the need to specify a model for the data. The objective of this article is to design an orthonormal wavelet system that optimizes the wavelet-based outlier detection procedure. In addition, we show that regardless of the selected basis functions, the existing wavelet-based methods extract two adjacent suspicious observations, only one of which is likely to be an outlier. We therefore modify the wavelet-based outlier detection scheme by introducing a transformation matrix, built from our designed wavelet filters, that detects outlying observations without this ambiguity. In a numerical example, a sample observation vector is analyzed using our scheme. In parallel, a robust statistical approach, the modified z-score method, is used to evaluate the capability of the proposed wavelet-based procedure. The results of the two approaches are comparable, confirming the reliability of the proposed procedure.
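The modified z-score benchmark mentioned above flags observations whose robust score exceeds the conventional 3.5 cutoff; a minimal sketch on a hypothetical series with one gross error:

```python
import numpy as np

def modified_z_scores(x):
    """Robust outlier scores: 0.6745 * (x - median) / MAD, where MAD is
    the median absolute deviation from the median."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    return 0.6745 * (x - med) / mad

# Hypothetical time series with one gross error at index 4.
series = [10.1, 10.3, 9.9, 10.0, 25.0, 10.2, 9.8, 10.1]
scores = modified_z_scores(series)
outliers = [i for i, z in enumerate(scores) if abs(z) > 3.5]
```

Unlike the ordinary z-score, the median and MAD are barely perturbed by the outlier itself, which is why this method serves as a robust reference for the wavelet-based scheme.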
https://eoge.ut.ac.ir/article_72838_3ff2a23f4066690e23b418387660c0a1.pdf
2019-06-01
77
83
10.22059/eoge.2019.285487.1054
Wavelet transform
Outlier detection
time series
Amirreza
Moradi
a-moradi@arakut.ac.ir
1
Department of Surveying Engineering, Arak University of Technology, Daneshgah St., P.O. Box 38135-1177, Arak, Iran
LEAD_AUTHOR
Sajjad
Asiaei Mojarad
sajjad74asiaie@gmail.com
2
Department of Surveying Engineering, Arak University of Technology, Daneshgah St., Arak, Iran
AUTHOR
ORIGINAL_ARTICLE
A novel density-based super-pixel aggregation for automatic segmentation of remote sensing images in urban areas
Efficient segmentation of remote sensing images needs optimally estimated parameters for any segmentation algorithm. These optimal parameters help algorithms avoid both over- and under-segmentation of image data and provide high-quality inputs for further processing. Recently, the super-pixel method has been introduced as a powerful tool to over-segment images and replace pixels with higher-level inputs. The automatic aggregation of super-pixels into image segments is a challenge in the remote sensing and computer vision communities. In this paper, a new automated segmentation method, named density-based super-pixel aggregation (DBSPA), is proposed. The method applies a density-based spatial clustering algorithm to integrate the super-pixels obtained from Simple Linear Iterative Clustering (SLIC). The DBSPA algorithm uses the Normalized Difference Vegetation Index (NDVI) and a normalized Digital Surface Model (nDSM) to form core segments that define the primary structure of the geographic features in an image scene. The box-whisker plot is then used to analyze the statistical similarity of super-pixels to each core segment and to spatially cluster all super-pixels. In our experiments, two ultra-high-resolution datasets selected from the ISPRS semantic labelling challenge were used. For the Vaihingen dataset, the overall accuracy was 83.7%, 84.8%, and 89.6% for the pixel-based, object-based, and proposed methods, respectively; for the Potsdam dataset, the corresponding values were 85.2%, 85.6%, and 86.4%. The evaluation of the results revealed an overall accuracy improvement in the Random Forest classification results, while the number of image objects was reduced by about 4%.
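The box-whisker similarity test for aggregating a super-pixel with a core segment can be sketched with Tukey fences on a hypothetical NDVI sample (in the paper the attributes include NDVI and nDSM statistics):

```python
import numpy as np

def within_whiskers(core_values, candidate_mean, k=1.5):
    """Tukey box-whisker test: does a super-pixel's mean attribute fall
    inside the core segment's [Q1 - k*IQR, Q3 + k*IQR] interval?"""
    q1, q3 = np.percentile(core_values, [25, 75])
    iqr = q3 - q1
    return bool(q1 - k * iqr <= candidate_mean <= q3 + k * iqr)

# Hypothetical NDVI samples of a vegetation core segment.
core_ndvi = np.array([0.61, 0.58, 0.64, 0.60, 0.62, 0.59, 0.63])
veg_like = within_whiskers(core_ndvi, 0.60)    # similar super-pixel
roof_like = within_whiskers(core_ndvi, 0.05)   # dissimilar super-pixel
```

Super-pixels passing the test are merged into the core segment; the rest seed or join other clusters.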
https://eoge.ut.ac.ir/article_72858_c5c498924a823f7f4c9bb2eb06dbdac7.pdf
2019-06-01
84
91
10.22059/eoge.2019.282354.1048
Image segmentation
Super-pixel
Density-based spatial clustering
Ultra-high resolution
Image classification
Ahmad
Hadavand
ahadavand@ut.ac.ir
1
School of Surveying and Geospatial Information Engineering, College of Engineering, University of Tehran, Tehran, Iran
LEAD_AUTHOR
Mohammad
Saadat Seresht
msaadat@ut.ac.ir
2
School of Surveying and Geospatial Information Engineering, College of Engineering, University of Tehran, Tehran, Iran
AUTHOR
Saeid
Homayouni
saeid.homayouni@ete.inrs.ca
3
Centre Eau Terre Environnement, Institut National de la Recherche Scientifique, Quebec, Canada
AUTHOR
ORIGINAL_ARTICLE
Introducing a Customized Framework for 3D Spatial Data Infrastructure of Iran Based on OGC Standards
This paper describes a framework for a 3D geospatial data infrastructure based on OGC standards in Iran, especially for Tabriz, one of the oldest cities with more than 3000 years of history. External code lists based on local culture, vegetation, and heritage landmarks are proposed for indexing the 3D city structures, buildings, statues, and city furniture of Tabriz. These code lists can serve as a communication tool between different governmental agencies and can be used for indexing in the 3D spatial database. There are some predefined code lists from Germany, the founder of CityGML, which can be utilized for the Iranian context along with codes defined for local cultures and structures. Such code lists can be defined for all street furniture, sculptures, and facade textures in applications such as city planning, the built environment, and disaster management. The code lists can also be applied to building components, for instance the facade of a building in different layers as an entity, a multi-patch feature class, or implicit geometry such as windows, doors, and backgrounds. In addition, the code lists can be used to index city elements and enhance the usage of the 3D SDI in the near future for a variety of users, from end-users to professionals, across different organizations and management levels. The framework for a web-based CityServer3D application is also discussed. CityGML, as a standard data exchange format, has been utilized for developing the 3D SDI for Iran, and implicit geometry representation (IGR) has been used to avoid lag while rendering the 3D models during navigation in the 3D virtual environment. The IGR is defined in CityGML as a prototypic geometry that can be parameterized for multiple uses (Löwner, Gröger, Benner, Biljecki, & Nagel, 2016).
https://eoge.ut.ac.ir/article_72861_76bd9cf895bdf1aa41887ad2b3126d4d.pdf
2019-06-01
92
101
10.22059/eoge.2019.285974.1056
Code lists
CityServer3D
3D SDI
CityGML
interoperability
Behnam
Alizadehashrafi
b.alizadehashrafi@tabriziau.ac.ir
1
Faculty of Multimedia, Tabriz Islamic Art University, Iran
LEAD_AUTHOR
ORIGINAL_ARTICLE
High-resolution urban aerosol monitoring using Sentinel-2 satellite images
Satellite remote sensing aerosol monitoring products are readily available but limited to regional and global scales because of their low spatial resolutions, making them unsuitable for city-level monitoring. Freely available satellite images such as Sentinel-2, with relatively high spatial (10 m) and temporal (5 days) resolutions, offer the chance to map aerosol distribution at local scales. In this study, we retrieve the Aerosol Optical Depth (AOD) from Sentinel-2 imagery for the Munich region and assess the accuracy against ground AOD measurements from two Aerosol Robotic Network (AERONET) stations. Sentinel-2 images with less than 30% cloud cover acquired between January and October 2018 were used, and contemporaneous AERONET Level 1.5 AOD data were used to validate the AOD retrievals. Since aerosol distribution and properties exhibit high temporal variation, only satellite data and AERONET measurements acquired within 15 minutes of each other were considered for validation and statistical analysis. Four retrieval algorithms were used: Sen2Cor, iCOR, and MAJA, which retrieve AOD using look-up tables (LUTs) pre-calculated from radiative transfer (RT) equations, and SARA, which applies the RT equations directly to the satellite images. The AOD at 550 nm retrieved by Sen2Cor, iCOR, and MAJA shows strong consistency with the AERONET measurements, with average correlation coefficients of 0.91, 0.89, and 0.73, respectively. However, the MAJA algorithm gives better and more detailed variations of AOD at 10 m spatial resolution, which is suitable for identifying varying aerosol conditions over urban environments at a local scale.
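The ±15-minute matching rule can be sketched as a simple temporal filter; the timestamps and AOD values below are hypothetical, not the study's measurements:

```python
from datetime import datetime, timedelta

# Hypothetical Sentinel-2 overpass time and AERONET (timestamp, AOD)
# records; keep only measurements within +/- 15 minutes of the overpass.
overpass = datetime(2018, 6, 14, 10, 20)
aeronet = [
    (datetime(2018, 6, 14, 10, 12), 0.21),
    (datetime(2018, 6, 14, 10, 33), 0.23),
    (datetime(2018, 6, 14, 11, 5), 0.30),   # outside the window
]
window = timedelta(minutes=15)
matched = [aod for t, aod in aeronet if abs(t - overpass) <= window]
mean_aod = sum(matched) / len(matched)       # reference value for validation
```

The matched station mean is then compared against the satellite-retrieved AOD at the station pixel when computing the correlation statistics.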
https://eoge.ut.ac.ir/article_72963_7f0f805b010b479ea8691b5a8048a742.pdf
2019-06-01
102
111
10.22059/eoge.2019.289985.1063
Urban Air Quality Monitoring
Aerosol Optical Depth
Aerosol Retrieval
Sentinel-2
Sentinel-3
Joseph
Gitahi
joseph.gitahi@hft-stuttgart.de
1
Department of Geomatics, Computer Science and Mathematics, Hochschule für Technik Stuttgart, Germany
AUTHOR
Michael
Hahn
michael.hahn@hft-stuttgart.de
2
Department of Geomatics, Computer Science and Mathematics, Hochschule für Technik Stuttgart, Germany
LEAD_AUTHOR
Andres
Ramirez
a.ramirezmejia@fsc.org
3
Forest Stewardship Council International, Bonn, Germany
AUTHOR
ORIGINAL_ARTICLE
Estimation of the degree of surface sealing with Sentinel-2 data using building indices
Various building indices for identifying and extracting sealed surfaces have been developed and implemented. Previous research has shown that building indices are easy to implement, since they do not rely on complex algorithms, and can therefore serve as quick methods for monitoring impervious surfaces. The aim of this study is to assess the ability of selected indices to identify sealed surfaces. Previous studies have reported that building indices have difficulty distinguishing between sealed surfaces and bare land owing to the spectral similarity of these two land covers. It has also been concluded that the performance of building indices depends on the time of image capture, i.e., dry versus wet seasons. In this study, we implement six selected indices using Sentinel-2 data covering the city of Nürtingen in the Stuttgart region and investigate and compare their performance at different times of the year. Google Earth Engine is used to conduct these investigations.
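One widely used building index is the Normalized Difference Built-up Index (NDBI), which exploits the fact that sealed surfaces reflect more strongly in SWIR than in NIR; combined with Otsu thresholding (as in this study's workflow), the basic idea can be sketched with hypothetical reflectances. Note that Sentinel-2's SWIR bands are at 20 m and are resampled to the 10 m grid in practice:

```python
import numpy as np

def ndbi(swir, nir):
    """Normalized Difference Built-up Index: positive values suggest
    built-up (sealed) surfaces, negative values vegetation."""
    swir, nir = np.asarray(swir, float), np.asarray(nir, float)
    return (swir - nir) / (swir + nir)

def otsu_threshold(values):
    """Otsu's method on raw samples: choose the split (midpoint between
    consecutive sorted values) maximizing between-class variance."""
    v = np.sort(np.asarray(values, float))
    best_t, best_var = v[0], -1.0
    for i in range(1, len(v)):
        w0, w1 = i / len(v), 1 - i / len(v)
        var = w0 * w1 * (v[:i].mean() - v[i:].mean()) ** 2
        if var > best_var:
            best_var, best_t = var, (v[i - 1] + v[i]) / 2
    return best_t

# Hypothetical Sentinel-2 reflectances: three built-up pixels (SWIR > NIR)
# and three vegetated pixels (NIR >> SWIR).
swir = np.array([0.30, 0.32, 0.31, 0.10, 0.11, 0.09])
nir = np.array([0.20, 0.21, 0.19, 0.40, 0.42, 0.38])
index = ndbi(swir, nir)
built_up = index > otsu_threshold(index)
```

The automatic Otsu threshold replaces a hand-tuned cutoff, which is what makes index-based sealing maps reproducible across acquisition dates.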
https://eoge.ut.ac.ir/article_72964_d2708c341af3d6760e96f7791b5b56cf.pdf
2019-06-01
112
119
10.22059/eoge.2019.290064.1064
Surface sealing
Sentinel Imagery
Building Indices
Otsu thresholding
Google Earth Engine
Caleb
Masinde
82maca1mpg@hft-stuttgart.de
1
Department of Geomatics, Computer Science and Mathematics, Hochschule für Technik Stuttgart, Germany
AUTHOR
Dorothy
Rono
82rodo1mpg@hft-stuttgart.de
2
Department of Geomatics, Computer Science and Mathematics, Hochschule für Technik Stuttgart, Germany
AUTHOR
Michael
Hahn
michael.hahn@hft-stuttgart.de
3
Department of Geomatics, Computer Science and Mathematics, Hochschule für Technik Stuttgart, Germany
LEAD_AUTHOR