Earthquake cycles at the site of the 2011 MW9.0 Tohoku, Japan earthquake are simulated with a 2D finite element model in which fault motion is governed by a slip-weakening friction law. The numerical results show that the earthquakes occur in a typical characteristic-earthquake pattern: there are 6 large earthquakes during the 1 000 a simulation, the intervals between adjacent earthquakes are about 161±4 a, and the seismic moment per unit length of each earthquake is about 1.13×10^20 N·m/km. A small earthquake with a seismic moment of 5.62×10^18 N·m/km occurs between each pair of adjacent large earthquakes. The coseismic and interseismic surface deformations of the numerical model agree well with the GPS observations. The uncertainty of the elastic parameters has limited effects on the coseismic and interseismic deformations, whereas variations in the viscosity influence the interseismic deformations. The numerical results also show that if the interseismic deformations were controlled only by the motion of the fault, the gravity anomaly of this model would decrease linearly during the interseismic period and reach about -370 μGal at about 100 km from the trench on the continental side. Velocity variations occur mainly within the first 5 a after each earthquake, and the velocities change little from about 5 a onward.
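As an illustration of the slip-weakening friction law mentioned above (a generic textbook form, not the authors' model code), the following Python sketch reduces fault shear strength linearly from a static to a dynamic level over a critical slip distance; all parameter values are hypothetical.

    import numpy as np

    def slip_weakening_strength(slip, tau_s, tau_d, d_c):
        """Linear slip-weakening law: strength drops from tau_s to tau_d
        as cumulative slip grows from 0 to the critical distance d_c."""
        slip = np.asarray(slip, dtype=float)
        return tau_s - (tau_s - tau_d) * np.clip(slip / d_c, 0.0, 1.0)

    # Hypothetical parameters: 30 MPa static, 20 MPa dynamic strength, d_c = 1 m
    slip = np.linspace(0.0, 2.0, 5)
    print(slip_weakening_strength(slip, 30e6, 20e6, 1.0))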
In this paper, the vertical-component seismic records of 15 near-field stations of the Minxian-Zhangxian MS6.6 earthquake are used to analyze the main shock. The average spectral centroid frequency is calculated and its geographical distribution is mapped. The results show that the high-frequency zone lies on the southwestern side of the Lintan-Tanchang fault rather than along it, and that the long axis of its elliptical pattern parallels the fault strike. This implies that the thrust block of this earthquake lies between the southwestern side of the Lintan-Tanchang fault and the other fault. The geographical distribution of the highest spectral centroid frequency agrees with the macroscopic epicenter, so the region of highest spectral centroid frequency can be regarded as the area of most serious earthquake damage.
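A minimal sketch of how a spectral centroid frequency could be computed from a vertical-component record (assuming a NumPy array trace sampled at fs Hz; this is a generic illustration, not the authors' processing chain):

    import numpy as np

    def centroid_frequency(trace, fs):
        """Amplitude-weighted mean frequency of a seismic record."""
        trace = np.asarray(trace, dtype=float)
        spec = np.abs(np.fft.rfft(trace - trace.mean()))
        freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
        return np.sum(freqs * spec) / np.sum(spec)

    # Example with a synthetic 5 Hz sine sampled at 100 Hz
    t = np.arange(0, 10, 0.01)
    print(centroid_frequency(np.sin(2 * np.pi * 5 * t), fs=100.0))  # close to 5 Hz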
We analyze the neotectonic characteristics and their evolution in the Weihe basin. Based on a segmentation study of the north Qinling fault, the activity of the Kouzhen-Guanshan fault may correspond to the segment of the north Qinling fault with the strongest faulting activity. The influence of the Kouzhen-Guanshan fault, together with the other northern-margin faults of the Weihe basin, is then modelled with FEM under the condition that the north Qinling and Huashan front faults are the main controlling active faults in the Weihe basin. The results show that the activity of the Kouzhen-Guanshan fault corresponds to the eastern segment of the north Qinling fault. Furthermore, as a preexisting and weak tectonic belt, the Kouzhen-Guanshan fault more readily amplifies the above effect and exhibits stronger faulting activity than the other northern-margin faults of the Weihe basin.
The IBIS-L ground-based SAR monitors the displacement of an observed object by combining stepped-frequency continuous wave, synthetic aperture radar and interferometry techniques. To monitor displacement after the landslide in Dashuchang town on September 1, 2014, we first discuss the key technology of the IBIS-L ground-based InSAR system. Second, we summarize the InSAR data-processing flow. Third, we obtain the displacement evolution characteristics with sub-millimeter precision and high spatial-temporal resolution. The GB-InSAR results show that the displacements of the middle-upper parts on the left and right sides of the landslide body are 120 mm and 75 mm respectively, caused by fissure water and rainfall. The displacements of the landslide body are too small to produce larger secondary geological disasters.
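For reference, a sketch of the basic interferometric phase-to-displacement conversion used in ground-based radar interferometry (the Ku-band wavelength of about 17.4 mm is an assumed nominal value for IBIS-L, the sign convention depends on the processing, and the phase values are purely illustrative):

    import numpy as np

    WAVELENGTH = 0.0174  # m, assumed Ku-band wavelength (~17.2 GHz)

    def los_displacement(phase_diff_rad):
        """Line-of-sight displacement from an unwrapped interferometric
        phase difference: d = -lambda / (4*pi) * dphi."""
        return -WAVELENGTH / (4.0 * np.pi) * np.asarray(phase_diff_rad)

    print(los_displacement([0.5, 1.0, 2.0]) * 1000, "mm")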
Taking the measured surface-deformation data of a transverse observation line of Qingdao Metro Line 3 as an example, this paper studies a combined prediction model with wavelet de-noising. First, wavelet theory is used to eliminate errors in the observations; according to the criteria of lowest mean square error and highest signal-to-noise ratio, the dmey wavelet decomposition with rigrsure soft-threshold de-noising proves optimal. Second, we present the surface-deformation prediction model for the subway tunnel that combines a grey model with a time-series model: a GM(1,1) model of the settlement values and a time-series model of the residuals are used to predict surface deformation. Last, we compare the time-series model and the combined grey and time-series prediction model, both before and after wavelet de-noising. The results show that the combined grey and time-series model applied after wavelet de-noising has the highest prediction accuracy, and the causes of the differences between the models are analyzed.
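As an illustration of the grey GM(1,1) component of the combined model (a generic textbook implementation, not the one used in the paper), the following Python sketch fits GM(1,1) to a short settlement series and extrapolates it; the settlement values are hypothetical.

    import numpy as np

    def gm11(x0, n_forecast=3):
        """Fit a GM(1,1) grey model to the positive sequence x0 and return
        fitted values plus n_forecast extrapolated values."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                        # accumulated (AGO) series
        z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
        B = np.column_stack((-z1, np.ones(z1.size)))
        Y = x0[1:]
        a, b = np.linalg.lstsq(B, Y, rcond=None)[0]
        k = np.arange(x0.size + n_forecast)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        return np.r_[x1_hat[0], np.diff(x1_hat)]  # inverse AGO

    settlement = [2.67, 3.13, 3.25, 3.36, 3.56, 3.72]  # hypothetical values, mm
    print(gm11(settlement))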
In deformation analysis, grey-system modeling based on multiple monitoring points of the same deformation monitoring network is more reasonable and reliable than modeling based on a single monitoring point, but the traditional multivariable grey model MGM(1,n) has some shortcomings. This paper proposes a multivariable grey model based on multivariate total least-squares optimization, and its superiority is demonstrated with several examples. The fitted and predicted values of the proposed model and of the traditional MGM(1,n) are compared. The results show that the modeling and prediction accuracy of the proposed model is higher than that of MGM(1,n), and that the model is better suited to actual circumstances when the number of modeling data exceeds four. The model can serve as a reference for the analysis and forecasting of deformation monitoring data in further research.
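The multivariate total least-squares step can be illustrated with a generic SVD-based TLS solver (a sketch of the classical Golub-Van Loan solution for an errors-in-variables model, not the specific optimization of the proposed MGM(1,n)); the simulated data are illustrative only.

    import numpy as np

    def tls(A, Y):
        """Total least squares solution of A X ~ Y when both the (m, n)
        design matrix A and the (m, d) observation matrix Y contain errors
        (Golub-Van Loan SVD formulation)."""
        n = A.shape[1]
        _, _, Vt = np.linalg.svd(np.hstack((A, Y)))
        V = Vt.T
        return -V[:n, n:] @ np.linalg.inv(V[n:, n:])

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 3))
    X_true = np.array([[1.0], [2.0], [-0.5]])
    Y = A @ X_true + 0.01 * rng.standard_normal((20, 1))
    print(tls(A + 0.01 * rng.standard_normal((20, 3)), Y))  # close to X_true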
Uncertainty often arises in the acquisition of settlement observation data, affecting the reliability of parameter estimation and the accuracy of settlement prediction. This paper establishes a new adjustment model in which uncertainty is incorporated into the AR model as a parameter. An algorithm is given based on an uncertainty propagation law for the residual errors. A new way of treating uncertainty is adopted, in which the maximum possible uncertainty is minimized; the existing error theory is thereby extended with new observational information about uncertainty.
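A minimal sketch of fitting an AR(p) model to a settlement series by least squares (a generic illustration only; the paper's adjustment model additionally treats uncertainty as an estimated parameter, which is not reproduced here):

    import numpy as np

    def fit_ar(x, p):
        """Least-squares estimate of AR(p) coefficients:
        x[t] ~ phi_1*x[t-1] + ... + phi_p*x[t-p]."""
        x = np.asarray(x, dtype=float)
        n = x.size
        X = np.column_stack([x[p - i - 1:n - i - 1] for i in range(p)])
        y = x[p:]
        phi, *_ = np.linalg.lstsq(X, y, rcond=None)
        return phi

    # Synthetic AR(2) series with known coefficients 0.6 and 0.3
    rng = np.random.default_rng(8)
    x = np.zeros(500)
    for t in range(2, 500):
        x[t] = 0.6 * x[t - 1] + 0.3 * x[t - 2] + rng.standard_normal()
    print(fit_ar(x, 2))   # close to [0.6, 0.3]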
The background noise level in the seismic frequency band (200-600 s) is investigated based on four years of continuous observation with the superconducting gravimeter (SG) at Lhasa station, and the influence of the atmosphere and of the sampling rate on the power spectral density (PSD) is also studied. The noise levels of the SG observations provided by the Global Geodynamics Project (GGP) for 2013 in the seismic and sub-seismic frequency bands are calculated, analyzed and compared. It is found that the PSD of gravity can be improved by tide and atmosphere corrections when the frequency is below 10^-3 Hz. When the frequency is below 0.5×10^-3 Hz, the noise level of the SG is distinctly better than that of a seismometer, which means that the SG is more suitable for studying long-period seismic and sub-seismic modes. The seismic noise magnitudes of the global SGs range from 0.180 to 1.964, and the sub-seismic noise magnitudes from 1.860 to 3.853.
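A sketch of how the PSD of a residual gravity series could be estimated over the seismic band, using scipy's Welch estimator as a generic stand-in for the spectral estimation actually used; the sampling interval, segment length and synthetic data are assumptions.

    import numpy as np
    from scipy import signal

    def gravity_psd(residual, fs):
        """Welch power spectral density of a residual gravity series.
        fs is the sampling frequency in Hz (e.g. 1/60 for 1-min data)."""
        nperseg = min(len(residual), 4096)
        freqs, psd = signal.welch(residual, fs=fs, nperseg=nperseg,
                                  window="hann", detrend="linear")
        return freqs, psd

    # Synthetic example: white noise sampled once per minute for 60 days
    rng = np.random.default_rng(0)
    f, p = gravity_psd(rng.standard_normal(86400), fs=1.0 / 60.0)
    band = (f >= 1.0 / 600.0) & (f <= 1.0 / 200.0)   # 200-600 s seismic band
    print(10 * np.log10(p[band].mean()))             # mean band PSD in dB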
In this study, we select 15 of 18 superconducting gravimeter (SG) records from GGP (Global Geodynamics Project) stations recorded after the 2004 Sumatra MW9.0 earthquake and use EEMD (ensemble empirical mode decomposition) to detect the low-frequency free oscillation modes (3S1, 0S4, 0S5) and their spectral splitting at frequencies below 1 mHz. After removing the tidal and local atmospheric pressure effects from the original minute-interval SG records, we obtain a residual gravity data set. EEMD is then applied to these SG time series to obtain IMFs (intrinsic mode functions) at different frequencies. This significantly reduces mode mixing and end effects and improves the SNR (signal-to-noise ratio) of some low-frequency seismic signals, so that the splitting spectra of some low-frequency free-oscillation signals can be observed more clearly. Comparison of the normalized amplitudes of the residual gravity records obtained with and without EEMD shows that applying EEMD to residual gravity records makes observation of the Earth's low-frequency signals more effective and yields higher resolution of the singlets of the low-order spheroidal oscillations. This study demonstrates that EEMD is effective in data processing and that the superconducting gravimeter is superior in detecting the Earth's low-order free oscillations.
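A minimal sketch of applying EEMD to a residual gravity series, assuming the third-party PyEMD package (distributed as EMD-signal) is available; the trial count, noise width and synthetic signal are illustrative, not the settings of the study.

    import numpy as np
    from PyEMD import EEMD  # pip install EMD-signal (assumed dependency)

    def decompose_residual(residual, trials=100, noise_width=0.2):
        """Ensemble empirical mode decomposition of a residual gravity
        series; returns the intrinsic mode functions (IMFs)."""
        eemd = EEMD(trials=trials, noise_width=noise_width)
        return eemd.eemd(np.asarray(residual, dtype=float))

    rng = np.random.default_rng(1)
    t = np.arange(0, 3000) * 60.0                    # 1-min sampling, in seconds
    sig = np.sin(2 * np.pi * 0.3e-3 * t) + 0.5 * rng.standard_normal(t.size)
    imfs = decompose_residual(sig)
    print(imfs.shape)                                # (n_imfs, n_samples)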
We present asymptotic expressions for the equal-space-in-time (EST) and equal-space-in-distance (ESD) sampling schemes in order to estimate the standard deviations of the parameters of the initial position, the initial velocity and the gravity acceleration. We then discuss how the standard deviation of the gravity acceleration varies with the fringe scale, the initial velocity and the selected data segment. By increasing the scale, reducing the initial velocity and selecting a suitable data segment, we obtain an optimization method that minimizes the standard deviation of the gravity acceleration.
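The standard deviation of g from a least-squares fit of the free-fall trajectory can be illustrated as follows (a generic quadratic fit z = z0 + v0*t + g*t^2/2 to equally time-spaced samples; the noise level and time span are hypothetical and the asymptotic EST/ESD expressions themselves are not reproduced):

    import numpy as np

    def fit_free_fall(t, z, sigma_z):
        """Least-squares fit of z = z0 + v0*t + 0.5*g*t**2 and the formal
        standard deviations of (z0, v0, g) for white position noise sigma_z."""
        A = np.column_stack((np.ones_like(t), t, 0.5 * t**2))
        params, *_ = np.linalg.lstsq(A, z, rcond=None)
        cov = sigma_z**2 * np.linalg.inv(A.T @ A)
        return params, np.sqrt(np.diag(cov))

    g_true, v0, z0 = 9.81, 0.1, 0.0
    t = np.linspace(0.02, 0.2, 500)                  # equal spacing in time (EST)
    rng = np.random.default_rng(2)
    z = z0 + v0 * t + 0.5 * g_true * t**2 + 1e-9 * rng.standard_normal(t.size)
    params, sigmas = fit_free_fall(t, z, 1e-9)
    print(params[2], sigmas[2])                      # g estimate and its sigma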
A calculation formula for lunar rover positioning based on VLBI and celestial navigation is presented in this paper. The results of VLBI-only positioning and of joint positioning based on combined VLBI and celestial navigation are calculated with measurement data from Chang'e-3 (CE-3). The comparison shows that joint positioning improves the positioning accuracy of the lunar rover relative to celestial navigation or VLBI alone, and also guarantees the reliability and stability of lunar rover positioning.
We assess the noise characteristics of the daily coordinate time series from August 2008 to April 2013 at 36 GPS fiducial stations in North China. Regional stacking filtering is employed to remove the common-mode errors from the daily position time series, and the noise characteristics of the time series and the station velocities are assessed using maximum likelihood estimation. The results indicate that the common-mode errors can be characterized as flicker noise, with variations of about 1 mm in the NS component, 1 mm in the EW component and 3 mm in the vertical component. The noise in the unfiltered position time series can be described as a combination of variable white noise plus flicker noise or variable white noise plus power-law noise, while the filtered position time series can be described as variable white noise plus flicker noise plus random-walk noise or variable white noise plus power-law noise. Removing the common-mode errors decreases the flicker noise component and highlights the noise components related to site effects. The velocity uncertainties are about 5-8 times greater than when only variable white noise is considered, and they decrease by about 40% when the common-mode errors are removed.
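The regional stacking filter can be sketched generically as the epoch-wise mean of the detrended residuals across stations, subtracted from each station's series (a simplified unweighted version; the study's actual weighting scheme may differ and the data below are synthetic):

    import numpy as np

    def stacking_filter(residuals):
        """residuals: array of shape (n_epochs, n_stations) of detrended
        position residuals for one component. Returns the common-mode error
        (CME) per epoch and the filtered residuals."""
        residuals = np.asarray(residuals, dtype=float)
        cme = np.nanmean(residuals, axis=1)          # epoch-wise regional mean
        return cme, residuals - cme[:, None]

    rng = np.random.default_rng(3)
    common = rng.standard_normal(300)                # shared regional signal
    data = common[:, None] + 0.5 * rng.standard_normal((300, 36))
    cme, filt = stacking_filter(data)
    print(np.std(data), np.std(filt))                # scatter drops after filtering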
By designing different processing scenarios and solving the coordinates of CMONOC stations, we analyze the influences of the higher-order ionospheric delay terms and of the receiver clock error on CMONOC station coordinates, as well as the differences in the higher-order ionospheric delay effect under different geomagnetic models. The results show that as solar activity intensifies, the effect of the higher-order ionospheric delay terms on the vertical coordinates of CMONOC stations can reach 1.2 cm, the influence on the receiver clock error approaches 4.4 mm, and the differences between the higher-order ionospheric delay effects under different geomagnetic models are small. Finally, we discuss and analyze the effects of latitude, baseline length and baseline direction on the higher-order ionospheric delays.
In this paper we present a method of satellite orbit validation with SLR measurements, and use SLR data observed from 1 November 2009 to 31 January 2010 (3 months) to validate the GOCE PKI (precise kinematic) orbit. Based on residual analysis, we find that systematic errors due to station time bias and range bias exist and have not been removed from the SLR observations. After modeling and fitting the station time bias and range bias, we obtain a GOCE PKI orbit precision at the level of 1.5 cm.
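The station-wise time-bias and range-bias fit can be illustrated by a simple linear model in which each SLR range residual is approximated as a constant range bias plus a time bias scaled by the range rate (a schematic least-squares sketch with hypothetical numbers, not the actual GOCE processing):

    import numpy as np

    def fit_station_biases(residuals, range_rates):
        """Model residual = range_bias + time_bias * range_rate for one
        station; returns (range_bias [m], time_bias [s])."""
        A = np.column_stack((np.ones_like(range_rates), range_rates))
        (range_bias, time_bias), *_ = np.linalg.lstsq(A, residuals, rcond=None)
        return range_bias, time_bias

    rng = np.random.default_rng(4)
    rates = rng.uniform(-5000.0, 5000.0, 200)        # range rates in m/s, hypothetical
    res = 0.02 + 1.0e-6 * rates + 0.005 * rng.standard_normal(200)
    print(fit_station_biases(res, rates))            # roughly (0.02 m, 1e-6 s)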
This method is an improvement of the TurboEdit algorithm. Firstly, it uses a Kalman filter to separate the ambiguity resolution in order to improve the precision of the ambiguity, laying the foundation for properly detecting and repairing cycle slips. Secondly, it uses a fixed-length sliding-window fitting model to improve the polynomial fitting of the original GF combination. Thirdly, it uses the fixed-length sliding-window fitting model to calculate the floating solution of the cycle slips while repairing them. The validity of the method has been tested with GPS data; the results show that it is valid for the detection and repair of small, large and multiple cycle slips.
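The sliding-window polynomial fit of the geometry-free (GF) combination can be sketched as follows: the series lambda1*phi1 - lambda2*phi2 is fitted over a fixed-length window and the next epoch is flagged when its value departs from the prediction by more than a threshold. The window length, polynomial degree, threshold and synthetic data are illustrative, not the paper's settings, and the Kalman-filter stage is not reproduced.

    import numpy as np

    L1_WAVE, L2_WAVE = 0.19029367, 0.24421021   # GPS L1/L2 wavelengths (m)

    def gf_series(phi1_cycles, phi2_cycles):
        """Geometry-free combination in meters."""
        return L1_WAVE * np.asarray(phi1_cycles) - L2_WAVE * np.asarray(phi2_cycles)

    def detect_slips(gf, window=10, degree=2, threshold=0.05):
        """Flag epochs where GF departs from a sliding-window polynomial
        prediction by more than `threshold` meters."""
        flags = []
        t = np.arange(window)
        for k in range(window, len(gf)):
            coeff = np.polyfit(t, gf[k - window:k], degree)
            if abs(gf[k] - np.polyval(coeff, window)) > threshold:
                flags.append(k)
        return flags

    # Synthetic GF series with a jump (simulated cycle slip) at epoch 60
    rng = np.random.default_rng(9)
    gf = 0.001 * np.arange(100) + 0.002 * rng.standard_normal(100)
    gf[60:] += 0.19                             # one-cycle slip on L1
    print(detect_slips(gf))                     # flags start at the slip epoch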
The pseudorange multipath effects of MEO satellites from different satellite systems are analyzed and compared to determine whether a systematic pseudorange multipath bias exists for BeiDou satellites. Further analysis of the pseudorange multipath effects of the three kinds of BDS satellites shows that the IGSO and MEO satellites exhibit obvious multipath bias. Piecewise linear approximation and piecewise polynomial fitting are then used to build empirical formulas for correcting the BeiDou satellite pseudorange multipath bias, and the effects of the two methods are compared. The effectiveness of the empirical formulas is demonstrated by analyzing the characteristics of the MW combination series and the PPP results before and after the error correction.
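The code-minus-carrier multipath (MP) combination underlying this analysis can be sketched as follows (the standard dual-frequency MP formula, shown here for the BDS B1I/B2I frequencies); the elevation-binned piecewise-linear correction with made-up node values is a simplified stand-in for the paper's empirical model.

    import numpy as np

    F1, F2 = 1561.098e6, 1207.140e6            # BDS B1I / B2I frequencies (Hz)
    ALPHA = (F1 / F2) ** 2

    def mp1(P1, L1, L2):
        """Code multipath combination MP1 (meters); P1 in meters, L1/L2
        already converted from cycles to meters."""
        return (np.asarray(P1)
                - (ALPHA + 1.0) / (ALPHA - 1.0) * np.asarray(L1)
                + 2.0 / (ALPHA - 1.0) * np.asarray(L2))

    def piecewise_linear_correction(elevation_deg, node_elev, node_corr):
        """Elevation-dependent correction interpolated linearly between node
        values (nodes would be estimated per satellite group, e.g. IGSO/MEO,
        from long MP series)."""
        return np.interp(elevation_deg, node_elev, node_corr)

    # Hypothetical correction nodes every 10 degrees of elevation
    nodes = np.arange(0, 100, 10)
    corr = np.array([0.0, -0.1, -0.25, -0.35, -0.4, -0.45, -0.5, -0.55, -0.6, -0.6])
    print(piecewise_linear_correction([12.0, 47.5, 83.0], nodes, corr))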
We discuss the relationship between the PDOP value of the satellite constellation and the number of satellites, and analyze the magnitudes of the residual errors of the pseudorange and carrier-phase observations. We propose a classification-factor robust adaptive filtering method and apply it to precise point positioning, in which equivalent weights are constructed according to the different types of observations. At the same time, the adaptive factor is set up according to the characteristics of the different parameters and is calculated from the prediction residuals. Experiments show that this method can not only detect and control the influence of abnormal values, but also improve the accuracy and reliability of PPP.
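A sketch of the kind of equivalent weighting and adaptive factor commonly used in robust adaptive filtering (the IGG-III weight function and a standardized-discrepancy-based adaptive factor are shown as generic examples; the paper's classification factors per observation type are not reproduced, and the constants are conventional defaults):

    import numpy as np

    def igg3_weight(v_std, k0=1.5, k1=3.0):
        """IGG-III equivalent-weight factor for a standardized residual."""
        v = abs(v_std)
        if v <= k0:
            return 1.0
        if v <= k1:
            return (k0 / v) * ((k1 - v) / (k1 - k0)) ** 2
        return 0.0

    def adaptive_factor(delta_x_std, c0=1.0, c1=2.5):
        """Two-segment adaptive factor driven by the standardized discrepancy
        between the predicted and estimated states."""
        d = abs(delta_x_std)
        if d <= c0:
            return 1.0
        if d <= c1:
            return (c0 / d) * ((c1 - d) / (c1 - c0)) ** 2
        return 0.0

    print([igg3_weight(v) for v in (0.5, 2.0, 4.0)])
    print([adaptive_factor(d) for d in (0.5, 1.8, 3.0)])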
A nonlinear 3D rectangular coordinate transformation method based on the Levenberg-Marquardt algorithm, which is effective when the normal equation is ill-conditioned or singular, is proposed. The problem of divergence in the iteration process, which arises from the different dimensions of the translation parameters and the rotation angles, is effectively resolved. A neat and effective iterative solution model is designed, and a stable parameter solution is achieved. Finally, the effectiveness and correctness of the method are proven by comparative analysis of simulated data.
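A sketch of a Levenberg-Marquardt solution of a seven-parameter (Bursa-Wolf-type) 3D similarity transformation, using scipy's implementation as a generic solver; the full rotation matrices, parameter values and simulated points are assumptions and the paper's specific normalization of translations and angles is not reproduced.

    import numpy as np
    from scipy.optimize import least_squares

    def rotation_matrix(rx, ry, rz):
        """Rotation about the x, y, z axes (radians), applied as Rz @ Ry @ Rx."""
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def residuals(p, src, dst):
        tx, ty, tz, rx, ry, rz, scale = p
        transformed = (1.0 + scale) * (rotation_matrix(rx, ry, rz) @ src.T).T + [tx, ty, tz]
        return (transformed - dst).ravel()

    rng = np.random.default_rng(5)
    src = rng.uniform(-1000.0, 1000.0, (10, 3))
    true = np.array([5.0, -3.0, 2.0, 1e-4, -2e-4, 3e-4, 1e-5])
    dst = (1 + true[6]) * (rotation_matrix(*true[3:6]) @ src.T).T + true[:3]
    sol = least_squares(residuals, x0=np.zeros(7), args=(src, dst), method="lm")
    print(sol.x)                                     # recovers the true parameters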
Firstly, the inverse formula for a complex matrix is provided in this paper, on the basis of which the complex-valued least squares (CLS) adjustment method for the complex-valued linear model is presented in detail. Then, based on the equivalent adjustment model with independent and equally weighted observations, it is proven theoretically that the CLS method is equivalent to the real-valued least squares (LS) method; however, the CLS method is greatly superior to the LS method in efficiency. Lastly, both real-valued and complex-valued polynomial models for image geometric correction are employed to verify the validity of the CLS method. The results show that the CLS and LS methods give identical parameter and accuracy estimates, and that the CLS method is very helpful for simplifying the adjustment model and improving computational efficiency, because the number of complex-valued parameters is only half the number of real-valued parameters.
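A minimal numerical check of the equivalence of complex-valued least squares and its real-valued counterpart (a generic illustration with a random complex design matrix, not the image-correction polynomial model of the paper):

    import numpy as np

    rng = np.random.default_rng(6)
    m, n = 50, 3
    A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
    x_true = np.array([1 + 2j, -0.5 + 1j, 3 - 1j])
    y = A @ x_true + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

    # Complex-valued LS: normal equations with the conjugate transpose
    x_cls = np.linalg.solve(A.conj().T @ A, A.conj().T @ y)

    # Equivalent real-valued LS: stack real and imaginary parts
    A_real = np.block([[A.real, -A.imag], [A.imag, A.real]])
    y_real = np.concatenate([y.real, y.imag])
    x_real = np.linalg.lstsq(A_real, y_real, rcond=None)[0]
    x_ls = x_real[:n] + 1j * x_real[n:]

    print(np.allclose(x_cls, x_ls))    # True: the two solutions coincide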
We address the lack of an effective method for determining the covariance matrices of the coefficient matrix and the observation vector in spherical target positioning. The distance from a point to a plane reflects the correlation between point and plane, and the incidence angle affects the quality of the point clouds; we extend both to spherical target positioning and derive the covariance matrices of the coefficient matrix and the observation vector. A robust weighted total least squares method, based on weighted total least squares with suitable robustness criteria, is applied to spherical target positioning, so that point clouds contaminated by outliers can be handled. The experimental results show that the robust weighted total least squares method, with covariances determined by distance, outperforms other methods in spherical target positioning.
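A sketch of the underlying sphere-fitting model, in its linearized algebraic form solved by weighted least squares; the robust weighted total least squares extension with distance- and incidence-angle-based covariances is not reproduced here, and the simulated points are illustrative.

    import numpy as np

    def weighted_sphere_fit(points, weights=None):
        """Weighted LS fit of a sphere via the algebraic model
        x^2 + y^2 + z^2 = 2ax + 2by + 2cz + (r^2 - a^2 - b^2 - c^2).
        Returns the center (a, b, c) and radius r."""
        P = np.asarray(points, dtype=float)
        w = np.ones(len(P)) if weights is None else np.asarray(weights, float)
        A = np.column_stack((P, np.ones(len(P))))
        l = np.sum(P**2, axis=1)
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ l)
        center = x[:3] / 2.0
        radius = np.sqrt(x[3] + center @ center)
        return center, radius

    # Synthetic noisy points on a sphere of radius 0.15 m centered at (1, 2, 3)
    rng = np.random.default_rng(7)
    u = rng.standard_normal((500, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    pts = np.array([1.0, 2.0, 3.0]) + 0.15 * u + 0.001 * rng.standard_normal((500, 3))
    print(weighted_sphere_fit(pts))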
This paper analyzes the measurement process of the magnetic error calibration of levels and the relevant sources of uncertainty, and presents an evaluation method for the uncertainty of the magnetic error calibration of levels. The sources of uncertainty, which arise from the reproducibility of the measurement, the ruling error of the micrometer collimator, the reading error, the magnetic field and the fitted linear regression equation for the calibration curve, are discussed and calculated. The combined standard uncertainty and the expanded uncertainty are reported. The results show that the fitted linear regression equation for the calibration curve is an important source of uncertainty.
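A sketch of how a combined standard uncertainty and the expanded uncertainty could be obtained from the listed components (root-sum-of-squares combination of uncorrelated components with coverage factor k = 2, as in standard GUM practice); the component values below are purely illustrative, not the paper's results.

    import numpy as np

    def combined_uncertainty(components, k=2.0):
        """Root-sum-of-squares combination of uncorrelated standard
        uncertainty components; returns (u_c, expanded U = k * u_c)."""
        u = np.asarray(components, dtype=float)
        u_c = np.sqrt(np.sum(u**2))
        return u_c, k * u_c

    # Hypothetical component values (same unit, e.g. arc seconds)
    components = {"reproducibility": 0.10, "collimator ruling": 0.05,
                  "reading": 0.08, "magnetic field": 0.04, "regression fit": 0.12}
    u_c, U = combined_uncertainty(list(components.values()))
    print(round(u_c, 3), round(U, 3))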