Serial Murder in Healthcare & FHIR

A Brief History of the Lack of FHIR Implementation

FHIR stands for Fast Healthcare Interoperability Resources. One of the biggest problems in health care is exchanging data between systems built by different vendors. The FHIR standard defines a set of formats to which data can be exported from any system so that any other system can import it from a standardized, known format. If that sounds too good to be true, it is. It's not quite that simple.

FHIR, whose Release 4 was published in December 2018, is based on the older protocols from HL7, which date back to the late 1980s. HL7 stands for Health Level Seven. It is so named because its data exchange protocols focus on the application layer in the protocol stack of the OSI model, which is layer 7. OSI stands for Open Systems Interconnection. The layers are the following.

  1. Physical Layer - This is the hardware that transmits raw data, such as network cables, fiber optics, and radio links. The data is broken down into zeros and ones at this layer.
  2. Data Link Layer - This layer facilitates data transfer between two devices on the same network, such as from a client computer in a doctor's office to a central server cluster within a hospital network. The two devices are usually connected by some type of cable, fiber optics, etc.
  3. Network Layer - This layer facilitates connections between different networks, usually using IP routing. This can be a connection across the Internet or a protected set of private networks. For example, a doctor's office in a remote location may need to transfer the records of a patient to a specialist at a major hospital.
  4. Transport Layer - This layer facilitates end-to-end transfer of data between two devices across the network layer, typically using TCP or UDP. Application protocols such as secure file transfer (SFTP) and HTTPS over TLS ultimately rely on this layer.
  5. Session Layer - This layer is responsible for opening and closing communication sessions between two devices and supporting authentication, followed by authorization based on user profiles and duties. The session layer also helps ensure that complete files or complete data sets are transferred; like a peer-to-peer transfer, it can resume a transfer from checkpoints in the file instead of restarting from zero.
  6. Presentation Layer - This is essentially the ETL (extract, transform, and load) stage for data formats. The presentation layer prepares the data to be handled by the application layer, and may also encrypt and compress the data for in-transit protection.
  7. Application Layer - This layer provides the protocols and formats through which applications collect data from users and interact with the data directly. Client software such as web browsers and SFTP clients is not actually part of the application layer, but it is used to access the layer graphically or via the command line. HL7 and FHIR focus on this layer and the formats that can be uploaded and downloaded using application layer software.

Essentially, data is exported from an EMR (Electronic Medical Records) system, such as Epic, into an XML format defined in the published FHIR standards. All vendors SHOULD update their software to read the formats, but they don't, or if they do, it does not always work correctly.

FHIR has been reorganized into the following modules but still complies with the original HL7 methodologies. The modules also represent the core functionality of HIPAA- and HEDIS-compliant EMR systems.

  1. Foundation: The basic definitional infrastructure on which the rest of the specification is built (hardware, network, etc.)
  2. Implementer Support: Services to help implementers make use of the specification (data modeler, code developer, business analyst)
  3. Security & Privacy: Documentation and services to create and maintain security, integrity, and privacy (encryption, SSL, HTTPS, SFTP)
  4. Conformance: How to test conformance to the specification, and define implementation guides (unit testing)
  5. Terminology: Use and support of terminologies and related artifacts
  6. Linked Data: XML formats, RESTful APIs (Representational State Transfer Application Programming Interfaces), and defined methods of exchange for resources. (This is basically a repeatable ETL process: extract the data, transform it, and load it into another system)
  7. Administration: Atomic data capture in storage for basic resources for tracking patients, practitioners, organizations, devices, substances, etc.
  8. Clinical: Cohort groups, Care programs, Core clinical content such as problems, allergies, and the care process (care plans, referrals)
  9. Medications: Medication management and immunization tracking (I would add a fatal mix database to this section to alert doctors and patients when medications are about to be prescribed that could be fatal or harmful when mixed)
  10. Diagnostics: Atomic data storage for Observations, Diagnostic reports and requests and related content (I would also add an automated diagnosis database that can provide possible diagnoses based on tests and symptoms)
  11. Workflow: Managing the process of care, and technical artifacts to do with obligation management, scheduling care, reviewing care quality, etc.
  12. Financial: Billing, Negotiations, and Claims support
  13. Clinical Reasoning: Clinical Decision Support and Quality Measures using the HEDIS star system and other methods

Basically, again, all of this means that data from one EMR will be exported to XML and sent to another EMR for import using a standardized format. HL7 also includes the following methodologies in addition to the latest FHIR.

  1. Arden Syntax – a grammar for representing medical conditions and recommendations as a Medical Logic Module (MLM)
  2. Claims Attachments – a standard healthcare attachment to augment another healthcare transaction
  3. Functional Specification of Electronic Health Record (EHR) / Personal Health Record (PHR) systems – a standardized description of health and medical functions sought for or available in such software applications
  4. GELLO – a standard expression language used for clinical decision support


What is the Problem? & Why is it Not Working?

The primary and most frequent reason most EMR software cannot comply with FHIR is a lack of compliance with the data modeling standards for multi-dimensional atomic data, which have been well known and published since the 1970s. Vendors are usually out of compliance because they do not want to pay the labor costs, usually $500 to $3,000 per hour, for "real" and competent resources who actually know Enterprise Data Modeling, nor the cost of the data engineering hardware and software. Additionally, companies have preferred to push products to market as fast as possible and update them continuously over time: the fast-money-now approach to software sales, and greed. That invites labor fraud, with many people claiming to be data modelers when they are not, just to steal money by promising the impossible at a low rate; the biggest scam in the book.

Over 99% of people claiming to be data modelers do not have any atomic data models, which are required by FHIR. They won't even know what atomic data is. If someone promises they can make an EHR that is FHIR compliant for less than 30 million dollars, it's a scam. Most of the time, the data model alone will cost 30 million dollars, with over 10 million of that going to labor, software, hardware, and facilities. There are systems that have cost over one billion dollars to construct, but they are usually limited to providing services for the wealthy and exclusive organizations that offer the best healthcare money can buy.

Except in cases to save the lives of the affluent, nobody is willing to pay fair wages for competent labor, and nobody wants to wait for long-term profits. Most "permanent" employees also do not have the multi-year stamina to participate in data analysis meetings. The analysis phase for an ERP or EHR at a major corporation can take one to five years of meetings just to document how the business works. Most groups of employees give up and cancel the data project, producing a 99% ERP failure rate and a 33% gross revenue loss due to poor data systems. Most managers do not know that a real data modeler with adequate hardware and software could cost three to six million dollars, or over $30,000,000 for a real contractor or off-the-shelf license, and instead seek to hire a temporary contractor at permanent-employee rates, which leaves nothing for the hardware and software required to actually create the ERP or EHR data model. Additionally, most employees see pay rates as a competition and have issues with jealousy, which affects business decisions; most projects never obtain resources more competent than whoever is making the hiring decisions. ERP project managers need to be SVPs or higher, with extreme professionalism and maturity.

Unfortunately, fixing an existing database requires massive code updates and new versions of the software. Most companies increment their version numbers as a marketing ploy while not actually making any major changes to the system. Some EMR systems are more than 30 years old and still have neither multi-dimensional atomic data storage nor support for modern object-oriented languages such as Java and C++. Additionally, most EMR systems have data quality issues that cause export processes to fail. If the export process fails, it must, in most cases, be restarted from the beginning. If the data is not repaired, it will simply fail again.

I must also state that atomic data has nothing to do with nuclear bombs; it does not explode or do anything destructive. Atomic data is the lowest granular level of detail for data and is related to ACID compliance and the relational database principles published by Dr. Edgar F. Codd in the 1970s. Essentially, types of data should be segregated into their own areas of the database. For example, in a presentation on a computer screen or browser, one would like to see a patient's name, test results, calendar, conditions, etc. all on the same screen. That is an aggregate format meant for repeatable analysis.

However, the aggregate format is not how data should be stored inside the database, nor should the data model be structured for aggregation alone. Interface developers always seek to put data together in a meaningful way, but a data modeler must consider how the data is uniquely identified so that its integrity can be maintained across multiple systems, with consideration for ACID compliance. Most often in EMR systems, data is stored in the aggregate format inside the database, which is a grave mistake. Imagine you have three numbers that mean something, such as wholesale cost, market value, and sale price. You can plug those numbers into a formula and calculate profit. If the database stores only the value for profit, and not all three values with a time dimension that shows changes over time, one will not be able to calculate projections that could be used to make proactive decisions to increase the profit.
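The profit example above can be sketched in code. This is a minimal illustration only, with hypothetical values and names (price_history, profit); the point is that storing the atomic inputs with a time dimension keeps derived values recomputable at any point in time.

```python
from datetime import date

# Hypothetical atomic rows: each input value is stored separately with a
# time dimension, rather than storing only the derived "profit" figure.
price_history = [
    # (as_of_date, wholesale_cost, market_value, sale_price)
    (date(2023, 1, 1), 60.00, 95.00, 100.00),
    (date(2023, 6, 1), 65.00, 110.00, 115.00),
]

def profit(wholesale_cost: float, sale_price: float) -> float:
    """Derived value, computed on demand from atomic inputs."""
    return sale_price - wholesale_cost

# Because the atomic inputs are kept, profit can be recomputed for any
# point in time, and trends can be projected from the history.
profits = [(row[0], profit(row[1], row[3])) for row in price_history]
```

Had only the profit column been stored, the history of wholesale cost and sale price would be unrecoverable, and no projection could be made.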

Additionally, the market value number can have multiple factors and may need to be stored as multiple pieces of atomic data. For example, the market value of real estate can be affected by what is built around it. If your asset management system tracks permits and finds that someone is building a paper mill, oil refinery, or heaven forbid an apartment building next door to your multi-million dollar luxury investment property, the market value will go down. Clearly, all data should be stored as atomic data instead of aggregate data, which can hide information required for proactive decision-making.

If the data is stored in atomic formats, the aggregations can be calculated at the application level and then seen in an interface as they change in real time. Storing data in aggregate formats or cubes can also lead to long-term mistakes. For example, if data is corrected in a data source, the correction will not automatically propagate to reports or materialized data built from summations or aggregations of the old, mistaken values. Additionally, in the case of slowly changing dimensions where the data is constantly changing, any system using aggregate data will be instantly obsolete. Imagine trying to make stock trades with data from yesterday, or even five minutes ago. Pre-aggregated data, materialized data, and cubes may also be an indication of major performance issues being hidden by those methods. Real-time atomic data is the wisest choice.

Atomic data is stored and organized based on the type of data it is. Therefore, the database table in which it is stored and the constraints on the table are designed specifically for that data and only that data in accordance with business requirements. If you mix two types of data together in the same table, say patient address and patient medical conditions, the constraints of the table will apply to both types of data even if they do not match the business requirements.

For example, patient conditions may need more security and access controls, specifically "doctors' eyes only" constraints and encryption according to business requirements. If the rules for storing addresses are the only rules applied to the table structures, constraints, and security, then anybody working at the entire hospital, and at all the hospitals in that network, will be able to see everything, including patient conditions. I can say for sure that almost all EMR systems have this exact problem. They were not designed for atomic multi-dimensional data, and everybody who has access to the system can see the data of every patient in the system in great detail.

Enabling Serial Murder & Black Market Organ Harvesting

The lack of security can cause other issues, such as bias in health care access. A patient who has had an abortion may be denied further health care at a different facility if doctors and nurses object to abortion on religious grounds. Such bias can be fatal. Approvals for disability can be affected if everyone on the system can see the ethnicity of the patient before records proving disability are sent. Biased healthcare professionals routinely sabotage the healthcare of minorities. Additionally, minorities are routinely targeted for black-market organ harvesting. If they go to the emergency room, they may experience sudden brain death, and their relatives are asked to donate their organs. However, if one checks the data, there is no record of where the organs went. Other groups are targeted when there are not enough minorities, usually by compromising insurance records, which contain health records that can be used to match tissue between a donor and an organ recipient. Black / African Americans have avoided doctors and hospitals for centuries due to this behavior and the lack of representation and protection from the government.

Additionally, the original universal health care system for all Americans was derailed to prevent minorities from getting health care during the Truman administration. Segregation was able to stop everyone from getting higher quality health care, which is definitely needed during pandemics. Ironically, considering that the majority of American society was and is willing to let minorities die by withholding health care, there have been several healthcare professionals who were serial killers. Today, by simply editing medication dosage data or medication type data, patients can be killed without leaving a trace. The victims of the healthcare serial killers are only the people who have access to the healthcare system. Also, hackers could potentially hack a weak EMR and either collect data or modify the data for specific people. Collecting health care data could provide a way to commit murder without being detected. Even simple allergy information could be used to kill. Serial killers usually target a specific type of patient. Additionally, and strangely, medical "accidents" are the third leading cause of death in the United States, causing the death of at least 250,000 people per year and as many as 400,000 per year according to Johns Hopkins. That is why patient data needs to be protected.

The granular, row-level security of health records and identities is paramount. Atomic data can be locked, encrypted, and protected based on what it is if the database has the correct format. It would also allow patients to log in and control who can see their data and see who has seen their data. Patients could also prevent their data from being sent to another facility if they need to do so. Patients also might want to know fatality rates for doctors before choosing a doctor. That information could be made available instantly with the correct database format.

The other major issue, as related to FHIR, is the ability to find and export individual pieces of data into a new format. Since the data is often not atomic and all data is mixed together, it's almost impossible to program something that can filter and collect the data, change the format, and then export it to an XML file that complies with FHIR standards. For example, an address may be stored with patient visits, but also stored with patient billing. Since there are two areas of the database with patient addresses, they can go out of sync with one address in one area of the database being different from the address in the other area. Also, if the patient moves, both addresses could be wrong if there is no way to globally update the addresses and show a history of addresses, when they changed, etc.
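The duplicate-address problem above can be sketched as a single, temporal address store. This is a minimal illustration with hypothetical names (addresses, move_patient), not the schema of any real EMR; the point is that one globally updated table with effective dates preserves history and keeps visits and billing reading the same current address.

```python
from datetime import date
from typing import Optional

# Hypothetical single source of truth for patient addresses, with a time
# dimension (effective_from / effective_to) so history is preserved.
addresses = [
    # [patient_id, address, effective_from, effective_to]
    ["P001", "534 Erewhon, Pleasantville", date(2020, 1, 1), None],
]

def move_patient(patient_id: str, new_address: str, moved_on: date) -> None:
    """Close the current address row and open a new one, so every part of
    the system (visits, billing) reads the same, current address."""
    for row in addresses:
        if row[0] == patient_id and row[3] is None:
            row[3] = moved_on  # end-date the old address
    addresses.append([patient_id, new_address, moved_on, None])

def current_address(patient_id: str) -> Optional[str]:
    """Return the open-ended (current) address row for a patient."""
    for pid, addr, start, end in addresses:
        if pid == patient_id and end is None:
            return addr
    return None

move_patient("P001", "10 New Street, Pleasantville", date(2024, 3, 1))
```

With this shape, the question "what was the address at the time of that visit?" is answerable from the effective-date columns instead of being lost in two diverging copies.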

The aforementioned features require multi-dimensional data modeling with time dimensions. See my article on Temporal Data Modeling for more information. The addresses may need a separate star in a star schema; see my article on how business requirements become data models for more information on star schemas. A FHIR application layer may not be able to distinguish which set of data is correct if the data is not atomic and organized in a multi-dimensional format that is at most 2NF. Additionally, some EMR systems have extra data that does not fit into any area of the known FHIR formats. If the data is not atomic and in the best multi-dimensional format, it will take much longer for the application to filter out the extra, unneeded data that is unique to a specific EMR. If the format and data are atomic, the application can simply skip over all the dimensions it does not need and access exactly the data it requires.

There are currently only three proven ways to fix this problem and make an EMR work with FHIR.

  1. Fix all the EMR systems and remake all of their databases correctly using real Enterprise Data Modelers and known data modeling standards. I can tell you now that this will never happen, because changing the database will force nearly 100 percent of the code to require changes. Essentially, one would have to recreate all of the EMR systems, or create a new one to replace all of them, which I could do myself for approximately $50 million, if I had $50 million. Such a system would be worth trillions over a 30-year period. Because America is so legalistic, companies can use the courts to prevent competition. The cost of litigation for such a product could be $50 million.
  2. In the rare case that an EMR database was created by a real data modeler in atomic multi-dimensional format, one simply needs to map the atomic data to the XML format and export it using a query run by a programming language, such as Java, which can output XML. If one is using Oracle or another database that supports XML output, one may be able to select the data directly from the database and output the correct format using select xmlelement() in a script. In cases where the database does not support XML output, one can export to CSV, then convert the CSV to XML using any text processing language such as Java, Python, or Perl. Some databases can also persistently store XML formats from which data can be repeatedly selected using functions such as select dbms_xmlgen.getxml(). Typically these are called XML DBs or XML databases and are integrated into the relational database management system.
  3. The other, most prevalent, solution is a canonical data model. A canonical data model is a model made to accept data from a third-party system; an ETL system transforms the data into an atomic multi-dimensional format using the database of an EMR as the source. From the canonical model, the data can be exported into any format at real-time or near-real-time speed. The time dimensions of a canonical multi-dimensional model can be used to export only new data in cases where previous exports have already been received. Such a model can also be used to generate reports that show changes in the data over specified time periods, while operating independently of the original EMR from which the data was extracted. This method should be used with EMR systems that have custom data models that do not conform to the known standards developed by Kimball and Inmon, a poor data model design, no data model, a text-file data storage system, or poor data quality. Systems such as Epic use files to store data. The data must be exported to CSV, transformed to XML, imported into a canonical data model, QA-checked, unit-tested, and then exported to FHIR XML formats using the XML export feature of the database, with data integrity enforced by the data model. Other systems such as Cerner require a similar ETL process, but the data may come from MS SQL Server instead of files.
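The CSV-to-XML step mentioned in option 2 can be sketched in Python. This is a hedged illustration only: the CSV columns and the XML elements are simplified stand-ins, not the full FHIR Patient schema, which defines many more elements and cardinality rules.

```python
import csv
import io
import xml.etree.ElementTree as ET

# Hypothetical one-row CSV export from an EMR; a real export would carry
# many more columns and would be validated against the FHIR schema.
csv_data = "mrn,family,given\n12345,Chalmers,Peter\n"

def rows_to_patient_xml(csv_text: str) -> list:
    """Transform CSV rows into minimal FHIR-style Patient XML fragments."""
    fragments = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        patient = ET.Element("Patient", xmlns="http://hl7.org/fhir")
        identifier = ET.SubElement(patient, "identifier")
        ET.SubElement(identifier, "value", value=row["mrn"])
        name = ET.SubElement(patient, "name")
        ET.SubElement(name, "family", value=row["family"])
        ET.SubElement(name, "given", value=row["given"])
        fragments.append(ET.tostring(patient, encoding="unicode"))
    return fragments

fragments = rows_to_patient_xml(csv_data)
```

The same loop structure scales to millions of rows; the hard part, as the text argues, is not the transform itself but whether the source data is atomic enough to populate each element unambiguously.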

One of the biggest failings of EMR systems is data formatting: the lack of time dimensions and row-level security. There have been many breaches of medical records for celebrities, but the even larger problem is that the data formats are not interoperable, even with published standards meant to make them interoperable, because the vendors never followed standard data governance practices at the beginning of their software development cycles. Note that some of the celebrities who had their health records compromised later died due to issues with their medication; Prince died of a fentanyl overdose, and Michael Jackson of a propofol overdose. The data cannot be properly secured because it is not in an atomic format. For example, if I want to limit access to data, I would need to do it by table, because tables are represented as objects in memory when used by object-capable programming languages such as Java and C++.

For example, a Java EJB can represent a table in a database. If the data is not atomic, one would need to secure individual columns, which number in the millions in most databases. The amount of code required to attempt that would be so enormous that it would be larger than both the data and the original application. The best practice is to follow the standards of object-oriented design and object-oriented programming. Most EMR systems are designed using tabular database design, which puts everything into one big table, like a spreadsheet, with no segregation of atomic data. So, when someone is granted access to one thing, they can see everything. An adequate system would combine a data dictionary, row-level security, table-level security, and a set of data governance rules to grant each user access to only what they are authorized to see.

Remember the encryption in the Presentation Layer of the HL7/FHIR/OSI Model? While that layer can encrypt the data for transit, if the target system has no row-level security nor table-level security, the data can still be easily stolen once it reaches its destination. Then the question becomes who is legally responsible for the breach, the company that sent the data or the company that received the data? Each would point the proverbial finger at the other.

The objective of FHIR is to get data from any EMR system into a canonical data model so it can be exported into XML, then imported from XML into another canonical model, and then imported into another EMR using ETL. Below is an example model for customer contact data followed by an XML format from hl7.org. The customer contact data model centralizes all customer contact data, keeping it from being mixed with health care data, but allowing it to be linked as needed so that separate security can be maintained for each type of atomic data.


<?xml version="1.0" encoding="UTF-8"?>

<Patient xmlns="http://hl7.org/fhir">
  <id value="example"/> 
  <text> 
    <status value="generated"/> 
    <div xmlns="http://www.w3.org/1999/xhtml">
      <table> 
        <tbody> 
          <tr> 
            <td> Name</td> 
            <td> Peter James 
              <b> Chalmers</b>  (&quot;Jim&quot;)
            </td> 
          </tr> 
          <tr> 
            <td> Address</td> 
            <td> 534 Erewhon, Pleasantville, Vic, 3999</td> 
          </tr> 
          <tr> 
            <td> Contacts</td> 
            <td> Home: unknown. Work: (03) 5555 6473</td> 
          </tr> 
          <tr> 
            <td> Id</td> 
            <td> MRN: 12345 (Acme Healthcare)</td> 
          </tr> 
        </tbody> 
      </table> 
    </div> 
  </text>
</Patient> 
  
        

The XML format above is a sample from https://www.hl7.org/fhir/patient-example.xml.html. The section that reads Address followed by 534 Erewhon would be pulled from multiple tables with atomic data. For example, the 534 would come from the Dim3AddrHouseNum table by following the link from the customer's ID through the SID table that connects the dimensions Dim3AddrCustAcct and Dim1CustAccNum, using the customer's ID number to uniquely identify the customer across two stars in the star schema. One would have to link data columns from the atomic data structure, using the data model structure as a guide, to the elements of an XML format, then write the appropriate SQL queries to populate the XML format with data. The XML file would then be encrypted, compressed, and sent to the target system for import.
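The join described above can be sketched as follows. The dimension tables and the SID key are hypothetical stand-ins loosely based on the names in the text (Dim3AddrHouseNum, Dim1CustAccNum); the sketch only shows how atomic columns are reassembled into XML elements.

```python
import xml.etree.ElementTree as ET

# Hypothetical atomic dimension tables, keyed by a shared surrogate ID
# (SID) that links the customer star to the address star.
dim1_cust_acc_num   = {"SID-7": "12345"}          # account / MRN numbers
dim3_addr_house_num = {"SID-7": "534"}            # house number only
dim3_addr_street    = {"SID-7": "Erewhon"}        # street name only
dim3_addr_city      = {"SID-7": "Pleasantville"}  # city only

def patient_fragment(sid: str) -> str:
    """Join atomic columns back together and emit XML elements."""
    patient = ET.Element("Patient")
    identifier = ET.SubElement(patient, "identifier")
    ET.SubElement(identifier, "value", value=dim1_cust_acc_num[sid])
    address = ET.SubElement(patient, "address")
    # The address line exists only at render time; it is never stored
    # pre-aggregated, so each atomic part can be secured and corrected
    # independently.
    line = f"{dim3_addr_house_num[sid]} {dim3_addr_street[sid]}"
    ET.SubElement(address, "line", value=line)
    ET.SubElement(address, "city", value=dim3_addr_city[sid])
    return ET.tostring(patient, encoding="unicode")

xml_fragment = patient_fragment("SID-7")
```

In a real system the dictionaries would be SQL joins across the dimension tables, but the mapping step, atomic column to XML element, is the same.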

One issue that is not apparent in the FHIR format above is that there is an identifier for the MRN, the medical record number, but no unique identifier for the patient, doctor, facility, etc. Even the latest FHIR standards could stand additional refinement for unique identifiers for atomic data, which become primary keys in database tables. However, the standards allow vendors to create a new XML format and simply distribute the format as needed. Essentially, everything in atomic data should be uniquely identifiable by the computer system. The unique identifier column can also be used by programming languages, such as Java, to scale the system's end-user capacity using a clustering system. When Java turns tables into in-memory objects, it uses unique identifiers to serialize and distribute objects across a cluster, providing both high availability and scalability.

The configuration for EJB 2.1 is also XML-based, as are many Java APIs, making Java readily compatible with FHIR. See the documentation for Configuring a Primary Key for an EJB 2.1 Entity Bean With Container-Managed Persistence for more information on how Java serializes data based on the primary keys of tables.

If all EMR/EHR companies had multidimensional data models and modern EJB or equivalent code, they would be able to send and receive data in real time without the extra steps of exporting to XML and canonical data models. One could simply map one set of atomic data to another set of atomic data in the application layer using an in-memory object. Additionally, all of the aforementioned security and reporting features would also become possible with real-time tracking of who is doing what to the data. Why don't they have it? The billionaires who own the companies are too greedy to pay the wages required to create the data models and the government refuses to make it a legal requirement. Even if the first two issues are resolved, to my knowledge, there are only 14 real data modelers in the entire world who could make such a model. The rest are pretenders and scammers after money and/or access required for a data breach so they can sell the data.

Until these issues are resolved and FHIR is fully implemented with a fully atomic data model, over 400,000 people per year will be killed by "accident," many of which are likely serial murders, and 47 million black/African people will be denied healthcare, likely live less than half of their lives in full health, and die an average of 25 years sooner than they should. Additionally, 168 million women can be denied civil rights and healthcare as long as there is no real-time data system that can detect bias. The worst part is that current EMR systems are so easily hacked that serial murderers can kill patients without ever leaving home by altering prescription and diagnosis data. They can also steal data and target people for organ harvesting, and likely murder, if they happen to be a tissue match to someone rich enough to buy black-market organs.

  1. The US Government has recently, in 2024, started to "probe" the issues of illegal organ harvesting. However, it is difficult to catch anyone because the data is so bad and poorly formatted without temporal data modeling. Since criminal investigators need to establish a credible timeline of events, most data systems are useless. https://www.washingtonpost.com/health/2024/02/26/organ-transplant-investigation/
  2. Although the black market in organs, and murder for profit enabled by data theft, has existed for over 80 years, the FBI didn't catch anyone until 2011. https://archives.fbi.gov/archives/newark/press-releases/2011/brooklyn-man-pleads-guilty-in-first-ever-federal-conviction-for-brokering-illegal-kidney-transplants-for-profit
  3. A man in Texas held hospital staff at gunpoint to protect his son after the staff attempted to declare him brain dead to justify stealing his organs. Although the organ harvesting groups claim to only be targeting black/African people when they speak to white people, most of the victims are white because most black people avoid doctors and hospitals. There are more white people who need organs than there are black people to supply them. https://www.reddit.com/r/nursing/comments/19en25x/the_2015_story_of_a_dad_in_texas_who_held_up/?rdt=41829
  4. Stand-off in Houston, Texas, with a gun to save the life of a patient: a father saves his son from illegal organ harvesting. Note that the hospital had already planned to harvest the patient's organs before he was dead. Likely the patient was medicated to simulate brain death to obtain over five million dollars in black market organs. Neither the hospital nor its staff were charged with a crime, and they acted as if they were the victims. https://www.click2houston.com/news/2015/12/18/father-son-involved-in-hospital-standoff-speak-to-kprc-2/
  5. Texas attempts crackdown on organ harvesting for profit within the U.S. for China and other foreign countries using Senate Bill 1040. https://dallasexpress.com/government/lawmakers-crack-down-on-illegal-organ-trade/
  6. NIH verifies that a Baclofen overdose mimics brain death and forces any patient into a coma, making them appear dead temporarily. https://pubmed.ncbi.nlm.nih.gov/22292975/
  7. NIH verifies that multiple drugs and substances can simulate brain death. This simulation can be used in illegal black market organ harvesting schemes. If someone in your family gets a brain death diagnosis and there is pressure to donate organs, you must stop them and force them to wait at least 72 hours with the patient under guard by multiple people, including an independent doctor if possible. Baclofen will usually wear off if the patient is properly hydrated, sweats, and/or urinates normally. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7526708/
  8. According to the NIH, "There were 69,735 BD (0.039%) and 3,309,955 ih-CPD (1.85%) with one BD for every fifty ih-CPD. The number of BD increased from 12,575 in 2012 to 15,405 in 2016 (p < 0.0001), with an average of 39 BD per 100,000 discharges and a mean age of 47.83 ± 20.93 years old." That means tens of thousands of people in their 20s suddenly had brain death (BD) at hospitals; fresh young organs. Also, note that the government withholds data for years to decades. They have not released the 2023 data. Good, recent data is like kryptonite for criminals. https://pubmed.ncbi.nlm.nih.gov/32442805/
  9. In theory, if 69,000 people per year (the 2024 projection out of the 400,000 "accidental medical deaths") are murdered for their organs, at an average of $5,000,000 per body of organs, a small amount for the billionaires and millionaires of the world, the annual revenue from black-market organs would be at least 345 BILLION dollars per year from US patients alone. Even if only the detected brain-death cases, roughly 15,000 per year according to the 2016 data, were harvested illegally, that would still be 75 BILLION dollars per year. Would people kill for that amount of money?
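The back-of-envelope arithmetic in item 9 can be checked with a short script. To be clear, every input here is the article's own assumption (the $5,000,000 per-body price, the 69,000/year projection, the 15,000/year detected-case figure), not independently verified data; the script only confirms that the multiplication is consistent.

```python
# Sanity-check of the revenue estimates quoted in item 9.
# All inputs are the article's assumptions, not verified figures.
PRICE_PER_BODY = 5_000_000       # assumed black-market value of one body's organs, in USD

projected_cases_2024 = 69_000    # article's 2024 projection of suspicious deaths per year
detected_bd_2016 = 15_000        # approximate detected brain-death cases per year (2016 data)

high_estimate = projected_cases_2024 * PRICE_PER_BODY
low_estimate = detected_bd_2016 * PRICE_PER_BODY

print(f"High estimate: ${high_estimate / 1e9:.0f} billion/year")  # 345
print(f"Low estimate:  ${low_estimate / 1e9:.0f} billion/year")   # 75
```

Both figures in the text check out against the stated assumptions: 69,000 × $5M = $345 billion and 15,000 × $5M = $75 billion.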

Perhaps someone like me should create a new EMR to stop the carnage. If only I had the 100 million dollars required to build the software and withstand five years of legal challenges in court. See my article on health care data if you would like to see more examples of atomic health care data.

Thank you for reading my article and may your data always have integrity.

Hanabal Khaing

