This one is for all you seismic data loading professionals. Tell us what can be done to make your life easier when it comes to loading seismic and SEG-Y data. Are your biggest gripes with incomplete EBCDIC headers? What about bizarre trace headers? Maybe the filenames don't describe what's actually in the file? Are there common problems with the actual data? Are the amplitudes out of whack? Do you have large spikes? Missing data? Maybe the navigation information is incorrect? Common FTP delivery problems? Whatever makes your life difficult as a data loader (or makes it better!), feel free to comment below. Maybe this will help future deliverables.

A casual conversation with Don Robinson at Resolve GeoSciences, Inc. got me thinking about this topic. https://lnkd.in/gAPAUyUc

Interested in a seismic training class? https://lnkd.in/gN3-RZa5 * I have almost 2 hours of online presentations introducing various aspects of seismic velocities and processing. There's a modest cost – https://lnkd.in/gw3eQcCB

#houstonseismic
About us
Resolve GeoSciences equips explorationists with customized seismic attribute workflows, helping them save time, lower cost, and maximize the potential in their seismic data. As the leading seismic attribute service company, Resolve offers unprecedented access to Curvature, Rock Solid, Frequency Enhancement, and Spectral Decomposition volumes. In addition, Resolve provides complimentary SeisShow analysis software to help clients discover the key attributes that help highlight faults, fractures, structure, velocity/porosity changes, and other features of interest. To find out more about Resolve's services, follow this LinkedIn page and visit ResolveGeo.com.
- Website
- https://www.resolvegeo.com
- Industry
- Oil & Gas
- Company size
- 2-10 employees
- Headquarters
- Fulshear, Texas
- Type
- Privately Held
- Founded
- 1997
- Specialties
- Seismic Attributes, Frequency Enhancement, Spectral Decomposition, Data Visualization, and QC Analysis
Locations
- Primary
- Fulshear, Texas 77441, US
Updates
Processing/Reprocessing Challenges and Insights

When loading 3D poststack datasets into workstations or repositories, key processing details often get lost, especially if communication between the interpreter and processor is no longer active. This is a common issue with dual-processed surveys or reprocessing efforts, where critical insights shared with the initial processor may not carry over to subsequent teams.

Case Study: USA Land Survey

A recent USA land survey east of the Rockies was processed and reprocessed using identical acquisition and field data. With surface elevations and geology unchanged, the key differences lay in workflows, software, and expertise. Both processing runs applied refraction statics with a floating datum finalized to a fixed datum. While amplitudes and original frequencies were nearly identical, challenges arose due to varying time shifts, requiring horizons to be repicked for each version.

Key Observations:
- Multiple prestack static corrections (up to five) were applied, including refraction statics, residual statics (twice), surface-consistent automatic statics, trim statics, and final datum statics. Can we pinpoint which adjustments are driving the largest time shifts?
- Surface elevations appeared smeared, likely from widely spaced source and receiver points, and differed significantly from DEMs (digital elevation models). Could this smoothing affect surface-consistent statics? When spacing is sparse, do processing companies use DEMs for better elevation control?
- Synthetic Seismogram Correlation: Wells were available for geologic and synthetic seismogram correlation, but minor time shifts gave the misleading impression of a good tie with slight stretching and squeezing. This issue could potentially stem from refraction statics.

Reprocessing Best Practices

When reprocessing to improve signal-to-noise, frequency, and imaging, it is critical to share previous processing versions. Side-by-side comparisons help identify time shifts and character changes, allowing processors to address differences and align results with interpretation goals. It's also important to distinguish between reprocessing, which builds on prior work, and dual processing, which starts fresh from the same field data.

The Role of Expertise

With staff reductions affecting experienced professionals, the risk to data quality increases while confidence in acquisition and processing decreases. Hiring acquisition and processing specialists with expertise in both legacy and modern equipment/software is critical to mitigating risks and maintaining high data quality. If you're one of these specialists, share your name, areas of expertise, and the countries or basins you're most familiar with. For example, Ron Kerr will definitely have insights worth exploring.

#SeismicDataProcessing #Geophysics #SeismicReprocessing #ExplorationData #GeoscienceExperts
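When two versions of the same survey have to be reconciled, one simple first diagnostic is to cross-correlate corresponding traces and map the bulk time shift between them. Below is a minimal sketch of that idea, assuming both versions are already on the same grid and sample interval; the function name and the 50-sample lag window are illustrative choices, not part of any particular vendor's workflow.

```python
import numpy as np

def bulk_time_shift_ms(trace_a: np.ndarray, trace_b: np.ndarray,
                       sample_interval_ms: float, max_lag: int = 50) -> float:
    """Estimate the bulk time shift between two versions of the same trace.

    Cross-correlates the normalized traces over +/- max_lag samples and
    returns the lag of the correlation peak in milliseconds. A positive
    value means trace_b arrives later than trace_a.
    """
    a = (trace_a - trace_a.mean()) / (trace_a.std() + 1e-12)
    b = (trace_b - trace_b.mean()) / (trace_b.std() + 1e-12)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.dot(a[max(0, -l):len(a) - max(0, l)],
                   b[max(0, l):len(b) - max(0, -l)]) for l in lags]
    return float(lags[int(np.argmax(corr))] * sample_interval_ms)
```

Running something like this per trace and mapping the result over the survey quickly shows whether the difference between the two versions is a single bulk static or a shift that varies spatially with the near surface.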
SeisShow Video: Automatic SEG-Y Indexing

Originally developed in the mid-1990s, SeisShow was designed to simplify viewing 2D and 3D poststack seismic data, without requiring a full 3D interpretation system. It became a go-to for data loaders to preview datasets before workstation imports and for seismic data brokers showcasing library data. Additional features enabled 2D/3D cuts for licensed-area delivery.

Back then, SEG-Y implementations were even more inconsistent than today. To tackle this, we built wizards to auto-detect byte locations for lines, traces, and coordinates, even in datasets lacking clear EBCDIC headers or loadsheets. With each dataset, we uncovered new irregularities and evolved the tool to handle them seamlessly.

We've enhanced SeisShow with seismic attribute calculation services, enabling interpreters to efficiently visualize attribute volumes, time slices, and extracted horizons. New features were specifically designed to support this workflow.

Today's highlight: automatic identification of loading parameters, indexing, and display of SEG-Y files using SeisShow. These algorithms now scale to analyze hundreds or thousands of SEG-Y datasets, with metadata stored in JSON files and data models. Stay tuned for updates on our batch application, AnalyzeSE.

SeisShow Features Video:
- PostStack & Gather volume cutting
- Pane sections & timeslice displays
- 360° rotation analysis
- Frequency band exploration
- Interactive filtering, mutes, and spectral decomposition
- Dynamic section and timeslice visualizations
- Rapid multi-section, horizon extractions, and timeslice displays
- Blending, masking, and joint section/gather displays
- JSON file creation, reports, and stats

Check out our website to watch the full video and explore other shorter clips: https://lnkd.in/g8QA5-TN

#SeismicData #Geophysics #SEGYFormat #SeismicVisualization #DataManagement
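For anyone curious how that kind of byte-location auto-detection can work in principle, here is a minimal heuristic sketch, not SeisShow's actual algorithm: read a handful of 240-byte trace headers, decode big-endian 32-bit integers at a few commonly used positions (inline at byte 189 and crossline at 193 per SEG-Y Rev 1, field record at 9 and CDP at 21 on older data), and rank the candidates by how much the resulting sequences behave like line/trace numbers. The candidate list and scoring rule are assumptions for illustration.

```python
import struct

# Candidate trace-header byte positions (1-based) for line/trace numbers.
CANDIDATE_OFFSETS = [9, 17, 21, 189, 193]   # assumption: typical suspects

def score_candidate(headers: list[bytes], offset: int) -> float:
    """Score a byte position by how line/trace-like its decoded values look.

    Reads a big-endian int32 at `offset` from each 240-byte trace header
    and rewards sequences that are non-negative, bounded, and change by
    small steps, which is the expected behavior of line/trace numbers.
    """
    values = [struct.unpack(">i", h[offset - 1:offset + 3])[0] for h in headers]
    if any(v < 0 or v > 10_000_000 for v in values):
        return 0.0
    steps = [abs(b - a) for a, b in zip(values, values[1:])]
    small_steps = sum(1 for s in steps if s <= 1)
    return small_steps / max(len(steps), 1)

def detect_line_trace_bytes(headers: list[bytes]) -> list[int]:
    """Return candidate offsets ranked best-first by the heuristic score."""
    return sorted(CANDIDATE_OFFSETS,
                  key=lambda off: score_candidate(headers, off),
                  reverse=True)
```

Ranking candidates rather than hard-coding byte positions is what lets this kind of approach cope with files whose EBCDIC headers and loadsheets disagree with the traces themselves.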
IBM Float vs IEEE Float

In the late '70s and early '80s, seismic interpretation workstations used IBM floating-point format. With the rise of personal computers in the mid-'80s, the IEEE format became standard, and was even adopted by IBM for mainframes by the late '90s. SEG-Y Rev 1 (2002) officially supported IEEE format, yet IBM floating point persists. Are there still technical benefits to using it today? The answer seems to be NO. Here's why, across key areas:

1. Transmission
IBM float offers no advantage for data transmission. Both formats use the same storage space (32 bits), but IEEE 754 is the modern standard universally supported by software and hardware.

2. Calculation
IBM float is inefficient for modern computations. Processors are optimized for IEEE 754, meaning IBM float must be converted to IEEE for calculations. This adds unnecessary computational overhead, slowing down workflows.

3. Storage
While IBM float uses the same 32-bit size, its base-16 exponent sacrifices precision compared to IEEE 754. IEEE provides better accuracy and consistency, especially for scientific and engineering data. Worse, data processed in IEEE is often converted back to IBM float for storage to comply with SEG-Y Rev 0 or other legacy standards. This double conversion (IBM-to-IEEE and IEEE-to-IBM) introduces computational redundancy and potential rounding errors.

Why IBM Float Persists
The only reason IBM float survives is legacy compatibility with older standards like SEG-Y Rev 0 in geophysics; SEG-Y Rev 1 and Rev 2 both support IEEE. It's not a matter of technical superiority but of historical inertia.

Conclusion
By storing data in IEEE 32-bit floating-point format and removing the need for IBM-to-IEEE and IEEE-to-IBM conversions:
- Computational overhead for each dataset processing cycle can be halved (no double conversion).
- The processing pipeline becomes simpler and more robust.
- Resources are saved, particularly for the large datasets common in geophysics.

The industry's reliance on legacy formats poses challenges, but phased transitions or dual-format support can help. SEG-Y Rev 1 and Rev 2 already address this by supporting both formats during migration.

There's no shortage of opinions on this topic, otherwise we wouldn't still be using IBM float. Share your thoughts, experiences, or suggestions in the comments!

#SeismicData #Geophysics #SEGY #DataProcessing #ExplorationInnovation
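For readers who have never had to deal with the conversion itself, here is a minimal sketch of decoding IBM 32-bit floats into IEEE values with NumPy. IBM single precision stores a sign bit, a 7-bit base-16 exponent biased by 64, and a 24-bit fraction interpreted as 0.fraction. The file name, sample count, and byte offsets in the usage example are illustrative only.

```python
import numpy as np

def ibm32_to_ieee(raw_be_uint32: np.ndarray) -> np.ndarray:
    """Convert big-endian IBM 32-bit floats (read as uint32) to IEEE float64.

    value = (-1)**sign * (fraction / 2**24) * 16**(exponent - 64)
    """
    ibm = raw_be_uint32.astype(np.uint32)
    sign = np.where(ibm >> 31, -1.0, 1.0)
    exponent = ((ibm >> 24) & 0x7F).astype(np.int32) - 64
    fraction = (ibm & 0x00FFFFFF).astype(np.float64) / 2**24
    return sign * fraction * (16.0 ** exponent.astype(np.float64))

# Illustrative usage: pull the first trace's samples from a SEG-Y file that
# stores IBM floats. 3600 bytes = textual (3200) + binary (400) headers,
# 240 bytes = trace header; the file name and sample count are made up.
with open("survey.sgy", "rb") as f:
    f.seek(3600 + 240)
    raw = np.fromfile(f, dtype=">u4", count=1500)
samples = ibm32_to_ieee(raw)
```

The IEEE-to-IBM direction needed for legacy deliverables is the mirror image of this, which is exactly the double conversion the post argues can be dropped.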
Bin Grid Definitions - Loading to Workstations

Precise location data is crucial for accurate seismic interpretation. While the "4 corners" method can introduce risks, extracting coordinates directly from trace headers improves spatial accuracy, minimizing misalignment from cumulative azimuth and spacing errors.

Loading corner coordinates from load sheets and EBCDIC headers is efficient, but manual data entry raises error risks. Studies indicate that 20-30% of these errors are transpositions (e.g., "43" entered as "34"), with the rest being random digit additions or omissions. Analyzing XY values from hundreds of thousands of 3D poststack volumes confirms that trace headers, populated directly by processing software, provide more reliable spacing and azimuth data than load sheets and EBCDIC headers, with far fewer manual-entry errors. However, trace headers can still have issues, which can often be identified and corrected automatically.

The images from the Waihapa 3D dataset included here are © Crown Copyright, reproduced with permission from New Zealand Petroleum and Minerals (www.nzpam.govt.nz), and are used to showcase the Bin Grid Calculator.

We're launching a new Grid Definition Calculator, available soon for Beta testing. This tool allows users to enter or paste line, trace, and XY corner values to calculate spacings, azimuths, area, and create a grid polygon using either three corners or a Point + Spacing method. Currently, Projection (CRS) is for display only, but an upcoming feature will check corner orthogonality (90-degree angles).

Interested in Beta testing? Reach out! We welcome feedback on the interface and are especially keen on your input for handling 4-corner data from load sheets/EBCDIC headers versus XYs from trace headers.

#SeismicDataIntegrity #GeophysicalAnalysis #SEG_YFormat #SeismicGridCalculation
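As a rough illustration of the arithmetic behind a three-corner grid definition (not the calculator's actual code), the sketch below derives spacings, azimuths, area, and an orthogonality check from three corner coordinates. The corner naming, the line/trace conventions, and the example values are assumptions; real surveys differ in how inlines and crosslines map to the corners.

```python
import math

def grid_from_three_corners(origin, end_of_inline, end_of_crossline,
                            n_inlines, n_crosslines):
    """Derive bin spacings, azimuths, and area from three grid corners.

    origin, end_of_inline, end_of_crossline are (x, y) map coordinates of
    the first trace, the last trace on the first inline, and the first
    trace on the last inline. Counts are totals, so spacing divides by
    (count - 1).
    """
    ox, oy = origin
    ix, iy = end_of_inline
    cx, cy = end_of_crossline

    dxi, dyi = ix - ox, iy - oy          # vector along an inline
    dxc, dyc = cx - ox, cy - oy          # vector along a crossline

    trace_spacing = math.hypot(dxi, dyi) / (n_crosslines - 1)  # along an inline
    line_spacing = math.hypot(dxc, dyc) / (n_inlines - 1)      # between inlines

    # Grid azimuths measured clockwise from north (map convention).
    inline_azimuth = math.degrees(math.atan2(dxi, dyi)) % 360.0
    crossline_azimuth = math.degrees(math.atan2(dxc, dyc)) % 360.0

    # Parallelogram area from the cross product of the two edge vectors,
    # plus how far the two directions deviate from a 90-degree grid.
    area = abs(dxi * dyc - dyi * dxc)
    skew = abs(((crossline_azimuth - inline_azimuth) % 180.0) - 90.0)

    return {"trace_spacing": trace_spacing, "line_spacing": line_spacing,
            "inline_azimuth": inline_azimuth, "crossline_azimuth": crossline_azimuth,
            "area": area, "skew_from_90_deg": skew}

# Hypothetical corners (map units in feet) and grid size:
# print(grid_from_three_corners((500000, 3200000), (511000, 3200000),
#                               (500000, 3208800), n_inlines=81, n_crosslines=101))
```

The skew value is the same 90-degree corner check the post mentions as an upcoming calculator feature.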
The Parihaka 3D dataset in New Zealand's Taranaki Basin is publicly available through New Zealand Petroleum and Minerals and worth exploring. We reviewed the Near, Mid, Far, and Full Angle Stack volumes, noting the Mid Angle Stack volume had issues with a few traces. The images included in this posting are © Crown Copyright and have been reproduced with permission from the New Zealand Petroleum and Minerals website at www.nzpam.govt.nz.

Initial display of the Parihaka 3D dataset highlights its impressive quality, though there are some loading and interpretation challenges. Logarithmic histograms are used to capture the full amplitude range, skipping bins with low counts until sufficient data appears. Absolute and alternate min/max amplitudes are stored to flag outliers. Notably, only two traces out of more than 1,038,172 had extreme values at the 32-bit float limit. For display, standard deviations of amplitude values were used to ensure a representative view, despite these outliers.

The indexing process scans each trace and sample, logging findings in reports and a JSON file. It flagged 15 traces with missing values in the trace headers, with file positions highlighted in red (see second image). These issues were found at the end of a few lines, and SeisShow excluded them from the index file since they couldn't be linked to any line or trace.

The number of samples per trace stored in the Binary and Trace Headers presents another issue. Here, the Trace Headers showed 2049, while the correct value in the Binary Header was 1168. If both headers are off, comparing sample counts across traces can help identify the correct value, a method used in SeisShow and AnalyzeSE to maintain accuracy. This discrepancy is highlighted in yellow in the SeisShow Index, Trace Header, and Report (see second image).

Spikes in datasets can disrupt analysis, interpretation, and proper loading into workstations, as discussed above. The following images show methods for handling outliers: setting them to zero, clipping, or interpolating traces. SeisShow identifies extreme amplitudes, providing details like line, crossline, x, y, amplitude, time, and trace location. Red arrows highlight spikes, and users can click on high-amplitude lines to jump to their location for review and correction. Interpolation generally yields the best results, while clipping can leave residual spikes in quieter areas. There's also an option to write out the edited file for further adjustments.

Included are two more displays: the SeisShow Report and a well-documented EBCDIC header.

Have you encountered problems with bad trace header values or amplitude spikes? Please share your experiences in the comments. Feel free to share or repost.
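For readers who want to experiment with the zero/clip/interpolate choices described above, here is a minimal NumPy sketch, not SeisShow's implementation; the 6-sigma threshold and the 2-D array shape are assumptions to adjust per dataset.

```python
import numpy as np

def flag_amplitude_spikes(traces: np.ndarray, n_std: float = 6.0):
    """Flag samples whose amplitude sits far outside the volume's distribution.

    traces: 2-D array (n_traces, n_samples) of poststack amplitudes.
    Returns a boolean spike mask and the threshold used.
    """
    mean, std = traces.mean(), traces.std()
    mask = np.abs(traces - mean) > n_std * std
    return mask, n_std * std

def repair_spikes(traces: np.ndarray, mask: np.ndarray, mode: str = "interpolate"):
    """Return a copy with flagged samples zeroed, clipped, or interpolated."""
    fixed = traces.astype(np.float64).copy()
    if mode == "zero":
        fixed[mask] = 0.0
    elif mode == "clip":
        limit = np.abs(fixed[~mask]).max()          # largest non-spike amplitude
        np.clip(fixed, -limit, limit, out=fixed)
    elif mode == "interpolate":
        for i in np.unique(np.where(mask)[0]):      # only traces containing spikes
            bad, good = mask[i], ~mask[i]
            if good.any():
                fixed[i, bad] = np.interp(np.flatnonzero(bad),
                                          np.flatnonzero(good),
                                          fixed[i, good])
            else:
                fixed[i] = 0.0                      # whole trace bad: zero it
    return fixed
```

Interpolation tends to preserve the local waveform, while clipping leaves the spike's footprint behind at a lower amplitude, consistent with the behavior described in the post.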
Before loading SEG-Y files into an interpretation system, analysis tool, or data repository, several critical questions must be addressed. The image below (from a previous post) effectively highlights key considerations for preparing seismic files properly. SeisShow and AnalyzeSE take care of everything, eliminating the need to manually locate bytes for key fields and addressing all necessary items.

After careful review, it's clear we should cover these topics across multiple posts. As the saying goes, "Be sincere, be brief, be seated," often attributed to Franklin D. Roosevelt. So today we'll focus on the key steps for loading data for immediate use.

Our primary concern is: Location! Location! Location! With seismic data now integral to GeoSteering, accuracy is crucial. Modern drilling pads support 32+ wells, so even small spacing errors can impact well positioning and fail to warn of faults and hazards for all wells on a pad.

A key concern is accurately identifying XY values from the Trace Headers or Load Sheet and carefully checking spacings. As shown in the second image, if the expected spacing is 110 feet but the data reads 110.5 feet, the grid could be off by up to 1,000 feet in X and Y by the end of the survey, or more if the survey has additional lines and traces.

Ensure your Projection System is correct. The image below uses Texas Central, NAD 27, but results would vary significantly with Texas Central, NAD 83 or Texas North Central, NAD 27. Incorrect datums have led to many dry holes.

Comment below and share any issues you've encountered that could lead to survey misalignment.

#SeismicData #SEGY #GeoSteering #GeospatialAccuracy #SeismicInterpretation
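To put a number on that spacing example: the drift grows linearly with the number of bins. The tiny sketch below reproduces the 110 ft vs 110.5 ft case, with the roughly 2,000-bin line length being an assumption chosen to match the 1,000-foot figure.

```python
def cumulative_grid_error(nominal_spacing: float,
                          derived_spacing: float,
                          n_bins: int) -> float:
    """Positional drift at the far end of a line when bin spacing is wrong.

    Each bin adds (derived - nominal) of offset, so the error at the last
    bin is the per-bin error times the number of steps.
    """
    return abs(derived_spacing - nominal_spacing) * (n_bins - 1)

# The post's example: 110 ft expected vs 110.5 ft derived from the headers.
print(cumulative_grid_error(110.0, 110.5, 2001))  # -> 1000.0 ft of drift
```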
Seismic data plays a key role for many professionals, whether it's loading to workstations, managing repositories, interpreting datasets, or preparing data for licensing. However, a common misconception is that the data we receive is clean and ready to use. After 50+ years of experience, we've learned to avoid "blind faith" and adopt a "trust but verify" approach instead.

Just because seismic data loads into a workstation doesn't mean it's accurate. Even new data can have issues like duplicate traces or spikes. Workstations create grids with one trace per line and crossline, so they may load only the first or last duplicate trace. Spikes are often clipped, but quieter intervals can still be affected. When data is loaded into NumPy arrays or cloud formats for analysis, those formats expect a clean 3D grid with one trace per cell, so any errors can disrupt the process.

EBCDIC headers and load sheets, often created manually, are prone to errors in projection systems, byte locations for lines/traces, SP/CDP, XYs, and other metadata. Verification is key. If your wells tie reliably in the southwest but not in the northeast, there could be a simple reason. We've seen transposition errors in XY values, revealed by fractional differences in spacing, cause offsets of up to 1,220 meters (4,000 feet). This explains why well control might not match seismic data, but the issue is easy to resolve once spotted.

These are just a few of the issues we'll cover in future posts, with help from SeisShow for troubleshooting and AnalyzeSE for scanning thousands of SEG-Y files, with results in JSON metafiles for easy data management.

What challenges have you faced with seismic data (whether resolved or not)? Share your experiences in the comments to help guide the order of our future posts. You can also contact us here: resolvegeo.com/contact and share the post with others. Your insights are valuable, and we're always surprised by new challenges.

#SeismicDataManagement #Geophysics #SEGY #SeismicQC #SeismicData
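As one concrete "trust but verify" check, the sketch below flags duplicate (inline, crossline) cells before loading. The function name and the header-reading step it assumes are illustrative, not any specific product's API.

```python
from collections import Counter
from typing import Dict, Iterable, Tuple

def find_duplicate_cells(inlines: Iterable[int],
                         crosslines: Iterable[int]) -> Dict[Tuple[int, int], int]:
    """Report (inline, crossline) grid cells that occur more than once.

    Workstations and array/cloud formats keep one trace per cell, so
    duplicates are silently dropped or overwritten at load time; catching
    them beforehand avoids a misleading "successful" load.
    """
    counts = Counter(zip(inlines, crosslines))
    return {cell: n for cell, n in counts.items() if n > 1}

# Hypothetical usage, with il/xl lists already pulled from trace headers:
# dupes = find_duplicate_cells(il, xl)
# print(f"{len(dupes)} duplicated grid cells, e.g. {list(dupes.items())[:5]}")
```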