Presenter: Richard Hassler
Session Time: October 28, 2020
Q: Is there a method to adjust the network vertically moving the entire dataset up and down to best fit the 3 vertical points instead of putting a tilt across it?
A: If you constrain only one point vertically, the adjustment will effectively shift the network up or down; it cannot calculate tilt parameters until more than one point is locked. So fixing one point provides a vertical shift only, even if the Latitude and Longitude Deflections are turned on in your Adjustment Settings. Note, however, that unless you fix three elevations and allow the tilts to happen, the adjusted heights of the unfixed points will not agree with the local vertical datum; they remain tilted to agree with the satellite datum. If the Deflections are turned off and you fix more than one point vertically, the network must be distorted to satisfy the constraints, which again produces elevations for the unknown points that are not truly in the vertical datum. If your vertical datum is not tilted much relative to the satellite datum, this may be acceptable.
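As an illustrative sketch only (not TBC's internal algorithm, and with hypothetical numbers), the difference between the two fits can be shown in a few lines of Python: a single vertical constraint supports only a uniform shift, while misclosures at three benchmarks determine a tilted plane whose slope terms are the tilts.

```python
def shift_only(misclosures):
    """Best-fit uniform vertical shift: the mean misclosure at the benchmarks."""
    return sum(misclosures) / len(misclosures)

def tilt_plane(points):
    """Exact plane dz = a + b*east + c*north through three benchmarks,
    solved with Cramer's rule; b and c are the tilt slopes."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    (e1, n1, d1), (e2, n2, d2), (e3, n3, d3) = points
    D = det3([[1.0, e1, n1], [1.0, e2, n2], [1.0, e3, n3]])
    a = det3([[d1, e1, n1], [d2, e2, n2], [d3, e3, n3]]) / D
    b = det3([[1.0, d1, n1], [1.0, d2, n2], [1.0, d3, n3]]) / D
    c = det3([[1.0, e1, d1], [1.0, e2, d2], [1.0, e3, d3]]) / D
    return a, b, c

# Hypothetical misclosures (metres) at three benchmarks 1 km apart:
pts = [(0.0, 0.0, 0.0), (1000.0, 0.0, 0.1), (0.0, 1000.0, 0.0)]
a, b, c = tilt_plane(pts)                    # b absorbs the eastward tilt
shift = shift_only([d for _, _, d in pts])   # a shift alone only averages the error
```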
Q: How do you set the error tolerances for your fixed control points or benchmarks?
A: The most common approach is to use the default 0.000 constrained value, but you can set it as below.
Q: Are you able to do a Network Adjustment and a Site Calibration in the same project? If not, why?
A: No. As explained in the webinar, the two types of adjustments work in different ways and are in large part incompatible. They both put parameters into the Coordinate System definition but those parameters do slightly different things to the GNSS vectors in the project.
Q: Some third party adjustment packages include the ability to model the Deflection of the Vertical. Can you comment on this idea and explain why TBC does not provide this option?
A: Deflection of the vertical is a function of the geoid model to ellipsoid relationship. It probably would be best to go to the source of the geoid model to obtain that information. Oftentimes they have a scientific model that includes the deflection of the vertical information for an input position. It is possible that we could add this information to TBC but it is rarely requested.
Q: Is there any paper going into depth about the GNSS baseline processing in TBC available, linear combinations used etc?
A: Here is a white paper you could visit: Modernized Approaches for GNSS Baseline Processing White Paper.
Q: In the network adjustment report, the residuals (unreduced) are presented per shot (HZ, V, Slope D grouped together). Is there a way to get all the HZ grouped together, separated from V and Slope D (and the same for V angles and Slope D), so that we end up with three lists of observations and residuals, sorted by residual?
A: You can customize the network adjustment report in order to split the table into three. Here is a how-to video.
Q: If you use the mean angles from multiple sets of terrestrial angles, do you need to adjust your weighting estimates for your horizontal angles?
A: If you choose to use the Project Settings as the source for your error estimates, you can enter what you think is a reasonable estimate for your horizontal angles there, and the Horizontal Angle Reference Factor will tell you whether you have guessed correctly. In the end, the product of the input error estimate and the scalar for that observation type is what is used. That allows for a bit of leeway in deciding what estimate to input in the settings field.
If you choose to use Imported Files as your source for error estimates, you do not need to enter an estimated value. The Reference Factor will tell you if the estimates from the field data need to be scaled.
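The "product" relationship described above can be illustrated with a trivial sketch (values are hypothetical):

```python
# The adjustment effectively uses (input error estimate) x (group scalar),
# so these two hypothetical setups weight a horizontal angle identically:
estimate_a, scalar_a = 2.0, 1.5   # 2.0" estimate scaled by 1.5
estimate_b, scalar_b = 3.0, 1.0   # 3.0" estimate left unscaled
used_a = estimate_a * scalar_a
used_b = estimate_b * scalar_b    # both give an effective 3.0" estimate
```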
Q: How does network adjustment handle static GNSS, VRS, and RTK data?
A: Usually you would want to use the Baseline Processor’s error estimates for GNSS data. Post Processed vectors get their error estimates from the TBC processing engine. RTK and VRS vectors get theirs from the engine in the receiver. Different algorithms are used in the different engines and different error estimates are produced. We recognize that real-time kinematic processing frequently has less data to use to estimate vector uncertainties, so we provide a separate scalar in the Weighting Strategies for kinematic vectors so that you can get the relative weighting correct in the minimally constrained adjustment.
Q: For a 500km long High Speed Railway project, what would you suggest?
A: Many different factors come into play when planning a survey of this type and no general rules of thumb that we can provide would necessarily be applicable for your project. More information and a lot of detailed planning would be required to ensure that the survey techniques chosen will fit your project conditions.
Q: Why wouldn't you hold the control coordinates on your first adjustment? When would you ever use a "free" adjustment when using known points like these CORS stations?
A: There typically should be no discernible difference in the results of a minimally constrained versus free adjustment. If your observations fall very far from the final position of the control, you may consider fixing a set of Hz coordinates and a vertical coordinate to get the network where it belongs while performing the minimally constrained steps.
Q: Is the term "Standard Error of Unit Weight" related to Reference Factor or Standardized Residual?
A: The term “Standard Error of Unit Weight” is another name for what we call “Reference Factor”. We used that terminology in TrimNet and TGO (I believe) but we decided to use Reference Factor in TBC. Same thing.
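Both names refer to the a-posteriori standard error of unit weight from least-squares theory. A minimal sketch with hypothetical numbers (this is the textbook formula, not TBC's internal code):

```python
import math

def reference_factor(residuals, std_errors, num_unknowns):
    """Standard Error of Unit Weight / Reference Factor:
    sqrt( sum((v_i / sigma_i)^2) / degrees_of_freedom )."""
    dof = len(residuals) - num_unknowns
    if dof <= 0:
        raise ValueError("no redundancy: degrees of freedom must be positive")
    vtpv = sum((v / s) ** 2 for v, s in zip(residuals, std_errors))
    return math.sqrt(vtpv / dof)

# Hypothetical example: 5 observations, 2 unknowns -> 3 degrees of freedom.
rf = reference_factor([0.004, -0.003, 0.002, -0.005, 0.001],
                      [0.003] * 5, 2)
# rf > 1.0 means the a-priori error estimates were optimistic;
# rf < 1.0 means they were pessimistic.
```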
Q: I hesitate to scale "better" than manufacturer specifications. What are your thoughts on that? Potentially leave them since it is unlikely your equipment is performing better. Is it really better or is the population of measurements not enough to see the spread of measurements that would truly exist? That is, in the end are you getting a statistical confidence better than it probably is?
A: I agree with your assessment. It seems counterintuitive to scale your estimates to values that are less than the Manufacturers’ specifications, but as you state, having such a small sample size in a network like the one I demonstrated in the webinar could result in better agreement in the observations than you would see in a much larger sample during specification testing.
Q: What are the default standard errors in Project Settings?
A: Those are your best, realistic guess of how much error was seen when making measurements with your techniques and equipment in the field. The manufacturer's specifications are decent starting points, but as you gain experience with adjustments you may find that you are constantly scaling your estimates in a particular way to make your network fit together. That could indicate you should change the values in the Project Settings so the resulting scalars come out closer to 1.0 without further scaling.
Q: Is it possible to complete an adjustment while holding a bearing between two points?
A: Yes. In the Constraints tab of the Adjust Network command, there is a pulldown that allows an azimuth constraint or distance constraint to be added to the adjustment. This is useful for moving a terrestrial network onto a “Basis of Bearing” as is required by law in many regions. It is better than fixing two horizontal coordinates as that will constrain more than just a distance or just an azimuth by itself.
Q: It would be helpful if TBC would give the user a way to individually weight observations, instead of having global settings for observation types. This is particularly important to total station work, where you may have different target precisions. Right now all I am aware of that we can do to accomplish this is to control it with survey styles, and that is prone to risk, can't be edited and can be too complex to reliably count on it being done correctly in the field every time.
A: This feature would give TBC adjustment more flexibility to allow for different equipment as you describe. We will consider this enhancement.
Q: Sometimes the RF is low, even though the weighting is correct, because you do not have enough redundancy in the survey. This one has a lot of sideshots. Do you still agree with using scalars?
A: Even with fairly sparse networks you should be able to properly weight the observations to pass the Chi Squared test. If there are too few observations to achieve that, more observations need to be added. The primary reason that scalars are included is convenience; without them, one has to go back to the standard errors in the Project Settings to adjust the error estimates there. If the adjustment will not pass Chi Squared when using reasonable error estimates, I would scale them unless the product of the scale factor and the input error estimate becomes simply unreasonable. That can be the result of too sparse a network, in which case more observations are needed to make it work properly. Statistics are always easier to calculate, and more reliable, in networks with more redundancy.
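The Chi Squared test itself is a standard two-tailed test on the weighted residual sum against the degrees of freedom. A dependency-free sketch (the quantile uses the well-known Wilson-Hilferty approximation, so values are approximate; this is not TBC's implementation):

```python
import math
from statistics import NormalDist

def chi2_quantile(p, dof):
    """Wilson-Hilferty approximation to the chi-squared quantile."""
    z = NormalDist().inv_cdf(p)
    return dof * (1 - 2 / (9 * dof) + z * math.sqrt(2 / (9 * dof))) ** 3

def passes_chi_squared(vtpv, dof, alpha=0.05):
    """Two-tailed test: the weighted residual sum v'Pv should fall
    between the lower and upper chi-squared critical values."""
    lo = chi2_quantile(alpha / 2, dof)
    hi = chi2_quantile(1 - alpha / 2, dof)
    return lo <= vtpv <= hi

# With 10 degrees of freedom the 95% acceptance band is roughly 3.2 to 20.5,
# so a v'Pv near the dof passes and a much larger one fails.
ok = passes_chi_squared(10.0, 10)
too_big = passes_chi_squared(30.0, 10)
```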
Q: How accurate in ortho height is a CORS station? If no published BMs are found, I would have to rely on the nearby CORS.
A: The answer to this question is all over the map. Some CORS stations are very well measured and tied to the regional vertical network; others are not. The CORS operator should be contacted to determine the quality of the survey used to generate the station coordinates. Also note that the vertical datum of a CORS site may differ from what is required in the project area, since local adjustments may exist for vertical control in different jurisdictions (e.g., city versus state).
Q: Why would you not use adjusted leveled elevations? Would that not take out any error in the level run?
A: You cannot use the adjusted observations from the Level Editor in the Network Adjustment along with the rest of the survey data; that would be adjusting already-adjusted observations. The reference factor goes to 0.00 for the level data and no scaling will produce a reasonable overall adjustment: the level data receives no adjustment, while the statistics for the rest of the observations are thrown off. If you wish to use the adjusted values from the Level Editor, leave the level observations out of the Network Adjustment. This is an acceptable approach; however, you will probably get a lot of computation flags due to differences between the adjusted GNSS vector elevations and the leveled elevations.
I was demonstrating a combined adjustment approach. If your level error estimates are done properly, you will probably get very similar results using either approach.
Q: How do you determine which variance group to scale first? Is it based on the highest variance or the highest redundancy value?
A: The highest redundancy value will have the greatest impact on the adjustment, so some believe that group should be scaled first. Other reliable sources tell me to scale the most precise observations first (e.g., level data). If you use Chi Squared as your indicator to stop scaling, you may use either approach.
I personally have not seen any dependence on the order of scaling the observation groups or of just scaling them all at the same time, because I usually go further than Chi Squared and scale until the reference factors are all very close to 1.0. I seem to arrive at the same values no matter which order I use. In the end you get different reference factors for each variance group and those are dependent on the error estimates and fit. There should really be no significant dependence on the order in which you scale the error estimates.
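The convergence described above can be seen in a toy single-group sketch (hypothetical numbers; in a real adjustment, re-weighting changes the residuals slightly, which is why several scaling passes may be needed):

```python
import math

def reference_factor(residuals, sigma, dof):
    """A-posteriori standard error of unit weight for one variance group."""
    return math.sqrt(sum((v / sigma) ** 2 for v in residuals) / dof)

# Hypothetical residuals (metres) for one variance group:
residuals = [0.004, -0.003, 0.002, -0.005, 0.001]
sigma, dof = 0.003, 3

rf = reference_factor(residuals, sigma, dof)   # > 1: the estimate was optimistic
scaled_sigma = sigma * rf                      # apply the RF as the group scalar
rf_after = reference_factor(residuals, scaled_sigma, dof)
# rf_after is exactly 1.0 here only because the residuals were held fixed;
# a real adjustment recomputes residuals after re-weighting, so the RFs
# approach 1.0 over a few iterations rather than in a single step.
```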
Q: Is it possible to adjust GNSS surveys in different network time epochs?
A: TBC does not currently support time-dependent reference frames. You will need to convert the data into a common reference frame, then run the network adjustment.
Any other questions? Please leave your comments below. Thank you.