POST #16: ARE BUREAUCRATIC AND AI THINKING INCOMPATIBLE WITH CRITICAL THINKING?

Is bureaucratic/AI thinking incompatible with critical thinking? It depends. Post #15 stated that both bureaucratic and AI decision-making are rules based. The question is whether the rules for bureaucratic and AI decision-making also include rules for critical thinking. To answer this, we first need to address what makes up critical thinking.

One aspect is evaluating the data being used for decision-making. This includes the completeness and age of the data. It also includes understanding the provenance of the data: how was it initially generated? Sourced? Aggregated? Manipulated and/or modified? Think of this as establishing the supply chain for the data under consideration (a sketch of such provenance checks follows at the end of this post).

Another aspect of critical thinking is understanding the underlying assumptions regarding how the data is to be applied. For example, the assumption that school lunch programs can reduce school absenteeism. This flows from a policy discussion and policy assumptions. And did these policy considerations holistically consider the "state of the art" for solutions, both what is being put forth in theory by researchers and in practice by market innovators (see Posts #2-4)?

Typically, the assessments of data and policy come upstream, before the downstream development of rules within stored procedures for AI and within regulations and operating manuals for bureaucrats and operators. So these elements of critical thinking are not included, and if this is the case, bureaucratic/AI thinking does not take critical thinking into account. Upstream critical thinking and policy making can thus be disconnected from downstream rules-based decision-making. An example of what can happen from these disconnects is the New York Times publishing the Pentagon Papers, officially titled "History of U.S. Decision-Making in Vietnam, 1945-68," which had been commissioned by Secretary of Defense Robert McNamara in 1967.

Can these disconnects between upstream critical thinking and downstream rules-based decision-making be rectified? If so, how? One way is to take a page from DevSecOps and apply it to MLOps, for MLSecOps. Another way is to combine upstream critical thinking and downstream manuals into dynamic models. The dynamic models include predictive analytics and real-time solution generation. This exponentially de-risks and compresses decision cycles while converting fragile plans into anti-fragile ones, as per Nassim Nicholas Taleb's book, Antifragile.

Another aspect of critical thinking is both self and situational awareness. This includes understanding the current operating environment and the rules to win in that space. In other words, what game are you in, and what are the rules of that game? Here is a 48-second video clip of Kenny Rogers' advice regarding critical thinking: https://lnkd.in/gJ2Ws7uz Here's the full song: https://lnkd.in/gw7vFvu5
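As a thought experiment, here is a minimal Python sketch of how the "supply chain for data" checks above might be encoded as rules of their own. The field names, dataset, and thresholds are hypothetical, purely for illustration:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProvenanceRecord:
    """Supply-chain record for a dataset: where it came from and what was done to it."""
    source: str                      # who initially generated the data
    collected_on: date               # when it was generated
    transformations: list = field(default_factory=list)  # aggregation/modification steps
    completeness: float = 1.0        # fraction of expected fields populated (0-1)

def provenance_flags(rec: ProvenanceRecord, max_age_days: int = 365,
                     min_completeness: float = 0.9) -> list:
    """Return critical-thinking flags before the data feeds any rules-based decision."""
    flags = []
    if (date.today() - rec.collected_on).days > max_age_days:
        flags.append("stale: data older than the allowed window")
    if rec.completeness < min_completeness:
        flags.append("incomplete: too many missing fields")
    if not rec.transformations:
        flags.append("opaque: no record of how the data was aggregated or modified")
    return flags

# Usage: a dataset whose lineage raises questions before it reaches downstream rules.
rec = ProvenanceRecord(source="district absenteeism survey",
                       collected_on=date(2022, 6, 1),
                       transformations=[], completeness=0.8)
print(provenance_flags(rec))
```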
TDP Data Systems
Business Consulting Services
Palo Alto, CA · 95 followers
TDP provides open-innovation advisory services with its proprietary tech platform to accelerate and de-risk projects
About us
- Website
- https://www.decisionplatform.io
External link for TDP Data Systems
- Industry
- Business Consulting Services
- Company size
- 11-50 employees
- Headquarters
- Palo Alto, CA
- Type
- Privately held
- Founded
- 2016
- Specialties
- consulting, data, ai, ml, artificial intelligence, business development, advisory, M&A, startups, and market sizing
Locations
- Primary
1618 Sand Hill Road #203
Palo Alto, CA 94304, US
Posts
-
POST #15: WHY BUREAUCRACIES ARE LIKE ARTIFICIAL INTELLIGENCE (AI)

Bureaucracies are like AI. By extension, bureaucrats are like AI. The reason is that both are rules based, whether as stored procedures within computer code for AI or as text and images within regulations and operating manuals for bureaucrats (a toy sketch of this rules-as-code framing appears below). Rules dictate behavior for humans, computers, and organizations, regardless of the operating mission or political affiliation. Every entity with agency voluntarily attempts to follow its rules, to some degree. Even radicals have rules, as witnessed by Saul D. Alinsky's book, Rules for Radicals. So too do anarchists, who may have at least one rule, which may be: don't follow any other rules.

But it's not just bureaucrats. Even if someone is not working from an operating manual, they will be relying on reasoning, heuristics, and life lessons for some form of rules-based decision-making, which are elements of critical thinking. If they aren't working from reasoning, heuristics, and life lessons, then they are probably making decisions emotionally, along with innate behaviors, if they are operating with agency.

So are there issues with being rules based? One is if the rules no longer describe how to operate successfully in the current environment. Maybe the operating environment has evolved since the rules were established. Do the rules help navigate the present reality? Do the rules still apply? Do they help? Hurt? How so? Is the operating manual on the shelf, out-of-date and in need of repair? TDP uses advanced computing combined with human intelligence to evaluate the gaps between rules and the operating environment. These gaps may end up in the known unknown and/or unknown unknown quadrants described in Posts #12 and #14.

Another potential issue is the pacing of the rules as they are implemented. The set of rules is a process list, and the question is whether the process matches the pace of its operating environment: not too slow, not too fast. Implementation of the rules could be by a computer, a human, or both. This 45-second video clip gives a sense of how it feels when the rules are being followed too slowly: https://lnkd.in/gHJW4czm Here is the full three-minute clip if you want to see the video's punch line: https://lnkd.in/gB-kjzGZ

TDP evaluates the pacing of rules as they are implemented and identifies any gaps as well. Addressing these two issues may avoid negative outcomes involving life-and-death situations and the extinction events noted in Post #10, How To Avoid Extinction Events. The next posts will touch on dealing with the hand you have been dealt.
Zootopia DMV Scene
https://www.youtube.com/
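To make the rules-as-code analogy from this post concrete, here is a toy Python sketch of a rule set evaluated against its operating environment, flagging both issues described above: rules that no longer describe reality and rules left unreviewed on the shelf. The rules, fields, and review window are invented for illustration, not TDP's actual method:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    name: str
    applies: Callable[[Dict], bool]   # does this rule match the current environment?
    action: str
    last_reviewed_year: int

def evaluate(rules: List[Rule], environment: Dict, current_year: int,
             review_window: int = 5):
    """Apply whichever rules still match, and flag the gaps the post describes."""
    actions, gaps = [], []
    for rule in rules:
        if rule.applies(environment):
            actions.append(rule.action)
        else:
            gaps.append(f"{rule.name}: no longer describes the environment")
        if current_year - rule.last_reviewed_year > review_window:
            gaps.append(f"{rule.name}: on the shelf, unreviewed for "
                        f"{current_year - rule.last_reviewed_year} years")
    return actions, gaps

# Usage: one current rule and one out-of-date rule against today's environment.
rules = [
    Rule("paper-forms", lambda env: not env["digital_first"], "route form by mail", 2008),
    Rule("digital-id", lambda env: env["digital_first"], "verify identity online", 2023),
]
print(evaluate(rules, {"digital_first": True}, current_year=2025))
```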
-
POST #14: IS YOUR ORGANIZATION CONSIDERING THE UNKNOWN UNKNOWN QUADRANT? IF SO, HOW?

Unknown unknowns can include technologies, use cases, scenarios, events, risks, upsides, and downsides that have not been considered so far. They can be anything.

Underlying root causes include i) SMEs ensconced in their own respective, siloed databases, ii) reward structures across Lab-to-Market (L2M) ecosystems that are not harmonized, or are disjointed or even in conflict, iii) performance metrics and information being stratified between bureaucratic levels, and iv) information and data being compartmentalized by organizational functions. These data issues include the data asymmetries, data silos, and data inaccessibilities mentioned in Post #11. These underlying data issues cause extinction events, as pointed out in Post #10.

One technique to search for upsides and downsides across unknown unknowns is to apply your existing value proposition to a new industry and/or use case. To this end, consider how the application of machine learning spread across biotech, healthcare, finance, retail, research, supply chains, logistics, manufacturing, and transportation over time.

To make this less of a shotgun approach, it is useful to apply analogous thinking to surface unexplored use cases for consideration (a toy sketch of this follows below). Douglas Hofstadter's and Emmanuel Sander's book, Surfaces and Essences, is a great read to prompt analogous thinking. This helps connect the dots and extend value propositions to adjacent opportunities. One example from the book was how Newton saw the planets fall around the Earth in the sky, just as Galileo saw objects fall when dropped from the Tower of Pisa. The key word for both is "to fall," whether on Earth or in the Heavens.

Consider the spy glass (aka telescope) that was invented in 1608 to help sailors and military personnel better see distant objects, to discern their known unknowns (see https://lnkd.in/gfZvEKta). In 1609, Galileo pointed his telescope at the unknown unknowns of the vast sky and discovered that the moon's surface was not smooth, that the white haze of the Milky Way was made up of dense clusters of stars, and that Jupiter had four moons. In parallel, lenses were being pointed at the unknown unknowns of the microscopic world, first in 1590 with an early microscope, and by 1676, to examine cells and bacteria.

TDP first presented at Stanford's 2018 Disruptive Technology Summit how the global pipeline of startups could serve as "the canary in the coal mine." In other words, as an early warning system for upsides and downsides, for both the known unknown and unknown unknown quadrants. Here is a link to TDP's 2018 presentation at Stanford: https://lnkd.in/g7DJqEWH Thinking in an analogous manner, TDP then extended this approach to map the "art of the possible" in both theory and practice (see Posts #2-4). This approach can dramatically collapse and de-risk the sources sought and RFI approaches used by government.
Pirates of the Caribbean 3 - Jack And Barbossa’s Telescope War
https://www.youtube.com/
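As one illustration of making the analogy search less of a shotgun approach, here is a toy sketch that scores how well an existing value proposition's core descriptors overlap with candidate domains (the "to fall" keyword idea, mechanized). The descriptors and domains are made up; this is not TDP's actual mapping method:

```python
# Rank unexplored domains by how much their core-action descriptors overlap
# with an existing value proposition (a crude proxy for analogous thinking).

def jaccard(a: set, b: set) -> float:
    """Overlap between two descriptor sets, 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b)

value_prop = {"detect", "anomaly", "time-series", "early-warning"}

candidate_domains = {
    "manufacturing QA":  {"detect", "defect", "sensor", "time-series"},
    "credit monitoring": {"detect", "anomaly", "transactions", "early-warning"},
    "crop forecasting":  {"predict", "yield", "weather", "satellite"},
}

ranked = sorted(candidate_domains.items(),
                key=lambda kv: jaccard(value_prop, kv[1]), reverse=True)
for domain, descriptors in ranked:
    print(f"{domain}: overlap {jaccard(value_prop, descriptors):.2f}")
```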
-
POST #13: IS YOUR ORGANIZATION A NON-LEARNER? A FOLLOW-UP TO POST #12

Is your organization a complete or partial non-learner? If your organization is a non-learner, it will most likely be operating in the unknown-known quadrant.

What are unknown knowns? They are relevant information that is denied, discredited, ignored, or suppressed. These can be signals that your organization is a non-learning one.

An example of an organization that is a complete non-learner would be one driven by ideology and dogma, with any challenges to those accepted beliefs being denied or suppressed. I am thinking of a functioning cult. An organization can also be a partial non-learner, in which it has a functioning feedback loop in some areas while in other areas the feedback loop has been disabled or is non-existent. I am thinking of the unknown knowns in America's healthcare ecosystem, as pointed out in Dr. Marty Makary's latest book, Blind Spots.

An organization may have been established as a non-learner at its origin; for example, one based on dogma. On the other hand, a learning organization may evolve over time to become a non-learner. An example would be a market-driven startup that gains dominant market share and becomes monopolistic in nature. Or a political party that becomes beholden to rigid, ideological beliefs over time.

What are the risks of being a non-learner organization? It depends on the types of markets it currently counts on for sustainability and survival. Some of these include financial, consumer, voting, social, political, research, and STEM markets. The market-driven startup may have needed consumer market access initially, but once it achieves monopolistic power, it may need more access to political markets and capital. With this shift in focus, the dominant company may be taking its eye off the need for accelerating innovation, a need driven by exponentially accelerating advances across STEM. So the dogmatic political party and the dominant corporation are both exposed to potential disruptive innovation. TDP models this out probabilistically based on predictive analytics, including quantifying the potential downsides.

So how does it feel to be in a non-learner organization that is operating with unknown knowns? It could feel like this: https://lnkd.in/gBHUKr88 My next post will touch on unknown unknowns.
Career Builder Monkeys Super Bowl XL Commercial
https://www.youtube.com/
-
POST #12: WHICH QUADRANT ARE YOU OPERATING IN? FROM KNOWN KNOWNS TO UNKNOWN KNOWNS AND EVERYTHING IN BETWEEN

TDP operates based on four quadrants to categorize levels of situational and self-awareness for its clients, and for itself. These four quadrants are i) Known Knowns, ii) Known Unknowns, iii) Unknown Unknowns (as Donald Rumsfeld laid out i-iii on February 11, 2002), and iv) Unknown Knowns. For those who are interested, here is a link to that interaction between Rumsfeld and reporters: https://lnkd.in/gaJrpA6X

To move folks out of quadrants ii-iv, TDP accesses 50 million databases and datasets and transforms this data into actionable information, in real time, based on specific use cases. Post #11 was an example of iii) Unknown Unknowns, with a startup founder being unaware of the failure rate of startups. To help startup founders move to Quadrant i, TDP has been distributing this two-minute video: https://lnkd.in/g9UebEUY As an aside, there really should be informed consent here for founders; more about this in a future post.

Examples of iv) Unknown Knowns include the known (but largely ignored, for decades) risks from smoking tobacco and now from processed foods. These Unknown Knowns form societal blind spots that may fester below the surface until ultimately rising to the level of general awareness to become a Known Known. This typically happens as a result of accumulating socio-economic costs from the blind spot. TDP tracks these blind spots and their societal costs through its predictive analytics and modeling. (A toy sketch of how the four quadrants can be operationalized follows below.)

Unknown Unknowns can also result from functional groups working in silos. This includes silos in industry and government. This is why TDP developed a "build vs buy" algorithm to help strategy functions assess whether R&D should proceed with building or whether M&A/CVC should invest and/or acquire. When a Global 2000 C-Level client asked TDP to illuminate its Unknown Unknowns, there was a lot to show; more on this in a future post as well.

Moving from quadrants ii-iv to quadrant i increases the chances for organizational survivability and success. This applies to for-profits, non-profits, NGOs, and government. This includes for-profits spanning from startups to Fortune 500s and Global 2000s that are in competitive foot races. It also includes non-profits and NGOs competing for funding and to demonstrate impact. And finally, it applies to government and its capacity to serve and improve the lives of its citizens, which translates into local economic growth.
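Here is one possible, simplified way to operationalize the four quadrants in code, using two illustrative axes: whether the organization is asking the question, and whether an answer already exists somewhere accessible (even if denied, ignored, or suppressed). This framing is my sketch, not TDP's model:

```python
from enum import Enum

class Quadrant(Enum):
    KNOWN_KNOWN = "known known"
    KNOWN_UNKNOWN = "known unknown"
    UNKNOWN_UNKNOWN = "unknown unknown"
    UNKNOWN_KNOWN = "unknown known"

def classify(aware_of_question: bool, answer_available: bool) -> Quadrant:
    """Map the two awareness axes onto the four quadrants."""
    if aware_of_question and answer_available:
        return Quadrant.KNOWN_KNOWN
    if aware_of_question and not answer_available:
        return Quadrant.KNOWN_UNKNOWN
    if not aware_of_question and answer_available:
        return Quadrant.UNKNOWN_KNOWN
    return Quadrant.UNKNOWN_UNKNOWN

# Usage: the long-ignored risks of smoking -- the data existed for decades,
# but the question was not being asked: an Unknown Known.
print(classify(aware_of_question=False, answer_available=True))
```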
-
POST #11: THE RISK OF FLYING BLIND WITHOUT INFORMED PROBABILISTIC DECISION-MAKING

In 2015, I met a startup founder in Hollywood--just a couple of times. He had quit his job as a producer of a very famous British show and moved from London to LA to pursue his media-tech startup. He was either newly engaged or recently married. At the second meeting, I mentioned in passing that the failure rate of pre-VC-backed startups was 95-99%. At that time, for VC-backed startups, it was around 75%. When I said this, the blood drained from his face. He had never heard these stats before. I recounted these numbers in the spirit that to be forewarned is to be forearmed.

This is an example of one of the data issues that undermine, block, or impair decision-making. In this case, the issue is data asymmetries, in which some can see the data and others cannot. The important point here is to understand not only the success rate, but also the total number who tried--both the successes and the failures that make up the denominator: the base rate. (A worked base-rate example follows below.)

Speaking of base rates, maybe I missed it, but I don't recall base-rate analysis being applied to the investigation/debate over the origin of COVID-19. The point being, as of 2022 there were 44,768 wet markets in China. At the same time, there were two Biosafety Level 4 labs (BSL-4), one in Harbin and the other in Wuhan. And the wet market in Wuhan was not selling bats or pangolins. So this is an example of base-rate analysis that can help inform decision-making and probabilistic conclusions.

I look forward to posting again in around a week's time. Meanwhile, Happy New Year, and I hope 2025 is a great one for you and all of us!
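The base-rate point can be made with a few lines of arithmetic. A minimal sketch using the failure rates cited above and a hypothetical cohort size:

```python
# Base-rate arithmetic behind the anecdote: the denominator (everyone who
# tried) matters as much as the successes you can see.

pre_vc_failure_rate = 0.97      # midpoint of the 95-99% cited for pre-VC startups
vc_backed_failure_rate = 0.75   # cited for VC-backed startups at the time

def survivors(cohort_size: int, failure_rate: float) -> int:
    """Expected number of startups still standing from a cohort."""
    return round(cohort_size * (1 - failure_rate))

cohort = 10_000  # hypothetical number of founders who tried
print(f"Of {cohort:,} pre-VC founders, ~{survivors(cohort, pre_vc_failure_rate):,} survive.")
print(f"Of {cohort:,} VC-backed founders, ~{survivors(cohort, vc_backed_failure_rate):,} survive.")

# Only ever meeting the survivors (survivorship bias) hides this base rate,
# which is exactly the data asymmetry the founder in the story ran into.
```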
-
POST #10: HOW TO AVOID EXTINCTION EVENTS

This and subsequent posts will touch on how subjective human decision-makers can avoid extinction events. One way this can be accomplished is by being better informed by objective computer intelligence, enabling probabilistic decision-making by humans. I will give two examples of probabilistic decision-making informing (or not informing) human decision-makers over this and the subsequent post--POST #11.

The first is when I was recruited in 2014 to be an internal consultant for Xerox PARC in Palo Alto, CA. I was brought in from LA to help PARC more quickly commercialize various research areas across hundreds of PARC researchers. After an internal review, I recommended that PARC reframe how it went to market: from selling its research as point solutions to mid-level R&D directors to providing it as a managed service for C-Level executives.

And probabilistic thinking supported this change. First, the average tenure of a Fortune 500 CEO was 4.7 years. As an aside, at the time it was 4.2 years for CFOs and 2 years for CMOs. Second, the average tenure of a corporation on the Fortune 500 list had dropped from 60 years to 11 over the decades. Dropping out of the Fortune 500 would trigger various negative consequences for a corporation and its CEO. So the opportunity was to ask CEOs what pain points they wanted solved. CEOs would readily provide this information, as they were on the clock (4.7 years of average tenure and falling tenure on the Fortune 500 list) to earn out their stock options.

This probabilistic approach allowed PARC to reposition its go-to-market approach from selling point solutions to mid-level R&D directors to providing ongoing managed services to the C-Level. This increased the close rate for the sales pipeline and decreased the cost of sales. It also gave a direct window into C-Level thinking.

The postscript is that I rolled out of PARC in the latter half of 2015 to begin formulating what would become TDP Data Systems, to address the underlying data issues impeding or blocking human decision-makers. And SRI would acquire PARC in 2023 to underpin its new Future Concepts Division.

The next post--POST #11--will provide an example of probabilistic decision-making not informing a human decision-maker--a startup founder.
-
Humans have a cognitive bias: a tendency to think that today was like yesterday and tomorrow will be like today. We have a hard time keeping track of the rate of change--for whatever. Humans tend to be victims of the status quo bias.

I bring this up in the context of AI. Probably since the start of 2024, I have been seeing decks that refer to AI without any distinction regarding the type of AI being considered. There used to be a distinction made between AI retrieving data (Retrieval AI) and Generative AI. And before Generative AI exploded onto the scene around two years ago, the distinction made was between AI and Machine Learning.

I raise this as another example of needing to be crisper, whether regarding the type of data being used (lagging versus leading) or the type of AI being used (Generative versus Retrieval). The reason is that for Generative AI and for AI retrieving data from traceable and trusted sources, the types of use cases supported tend to be distinct, without too much overlap.

The use cases that the market will support for Generative AI are still to be determined, especially given legal challenges to creator use cases. And while some of Generative AI's use cases and revenue potential are unclear, the costs for Generative AI seem clearer, including the costs of data centers and the energy needed to power them. Generative AI is not Green AI. And with $1T expected to be invested in Generative AI, there will be a proliferation of new potential use cases that VCs and startup founders bring to market to see what is actually viable and sticks.

Having started this post discussing cognitive bias, TDP uses Machine Learning to make sense of the various rates of change by objectively assessing billions of data points. This is something humans cannot do. With that said, humans and Generative AI do have one shared cognitive bias: the ability to hallucinate.

The point of Retrieval AI is to provide humans actionable information based on correlations and causalities (a minimal sketch follows below). This includes addressing the status quo bias by objectively assessing rates of change--for whatever. Retrieval AI provides human decision-makers probabilistic assessments to enhance their decision-making. This will be the topic of the next post in about one week.
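To illustrate the distinction, here is a minimal sketch of the Retrieval AI idea: answers are looked up from traceable sources and returned with their citations, rather than generated free-form. The corpus and scoring are toy placeholders, not TDP's system:

```python
# Toy retrieval: rank documents by term overlap with a query and return the
# text along with its source ID, so every answer stays traceable.

corpus = {
    "doc-001 (census.gov)": "county population grew 4 percent year over year",
    "doc-002 (sec.gov)":    "quarterly filing shows revenue declined 12 percent",
    "doc-003 (noaa.gov)":   "drought conditions expected to reduce crop yields",
}

def retrieve(query: str, k: int = 2):
    """Rank documents by shared terms with the query; keep provenance attached."""
    q_terms = set(query.lower().split())
    scored = sorted(corpus.items(),
                    key=lambda kv: len(q_terms & set(kv[1].split())),
                    reverse=True)
    return scored[:k]

for source, text in retrieve("expected crop yields under drought"):
    print(f"{source}: {text}")
```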
-
So once you have assembled data of interest, what is the next step? My next couple of posts will address ways to analyze the data--how to turn data into actionable information. And these posts will continue to follow the theme of needing to be crisper in approach to achieve clearer and superior results--and to do so faster.

Data can be analyzed using computer intelligence and/or human intelligence. Many Silicon Valley VCs love to talk about taking the friction out of the system. Taking friction out of the system is code for replacing humans with computer intelligence. The theory is that this approach allows for greater scalability and hopefully 10X returns for the fund.

Having said that, TDP Data Systems' experience in the marketplace since 2016 is that the combination of computer intelligence with human intelligence offsets the respective biases of both. The reason is that computers can deal with vast amounts of data and do so objectively (humans can't), while humans can deal with new, novel situations and do so subjectively (computers can't). This approach doesn't replace humans, but rather enhances them. It frees up humans to be more human and to participate in higher-value-added activities (a toy sketch of this division of labor follows below).

This combination of computer and human intelligence de-risks and compresses decision-making by 80% or more. The approach has been used to support close to $1B in CVC investments and acquisitions, and it is agnostic to industry, geography, technology, and/or use case. The next post will touch on the need to be crisper regarding computer intelligence--AI.
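A toy sketch of this division of labor: the machine scores everything at scale, and anything novel or low-confidence is routed to a human. The thresholds and fields are illustrative only, not how TDP's platform actually works:

```python
# Human-in-the-loop triage: machines handle high-volume, clear-cut cases;
# humans handle new, ambiguous situations.

def triage(items, confidence_threshold=0.85):
    """Split items between automatic decisions and human review."""
    auto, human_review = [], []
    for item in items:
        if item["novel"] or item["model_confidence"] < confidence_threshold:
            human_review.append(item)   # novel or uncertain: route to a human
        else:
            auto.append(item)           # clear-cut: let the machine decide
    return auto, human_review

items = [
    {"id": 1, "model_confidence": 0.97, "novel": False},
    {"id": 2, "model_confidence": 0.55, "novel": False},
    {"id": 3, "model_confidence": 0.90, "novel": True},
]
auto, review = triage(items)
print(f"machine-decided: {[i['id'] for i in auto]}, human-review: {[i['id'] for i in review]}")
```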
-
On to the next post in the stream of needing to be crisper, whether when talking about the art of the possible, innovation, or data.

Just about every week, I review presentations from industry, government, startups, investors, nonprofits, researchers, and philanthropic stakeholders across Lab-to-Market ecosystems. These presentations tend to have references to "data" or "big data." I would suggest that these references need to provide greater detail about the type of data being cited and used. For instance, is it retrieved from curated sources or generated by AI? Digging a little deeper, is the data from curated sources based on lagging indicators or something else?

An example of lagging-indicator data is financial data that may trail actual events by weeks or months, depending on how the financial data is rolled up, reported, and disseminated. The implication is that the financial data may be out-of-date to some degree in describing the current situation and/or providing predictive capability. An example of data providing greater predictive power would be a current weather report and its potential impact on future agricultural output.

So there is a need to be crisper in describing the data being worked with. Categorizing the different types of data you are working with provides insight into their limitations, as well as the potential inferences and actionable information that can be drawn from each data category (a toy sketch follows below). Understanding the type of data being worked with will dictate, to a large degree, what actionable information can be generated from it.

Maybe an analogy is to think about data as seeds and what will grow from those seeds. What actionable information will be the output? Will the resulting crop be one of beneficial plants or weeds? And how does this output become grist for the decision-making process, including for AI?
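A toy sketch of such categorization: tag each source by indicator type and staleness so downstream users know what inferences it can support. The categories and lag figures are placeholders, purely for illustration:

```python
# Tag data sources by type so their limitations travel with them.

sources = [
    {"name": "quarterly financials",  "indicator": "lagging",   "lag_days": 90},
    {"name": "current weather feed",  "indicator": "leading",   "lag_days": 0},
    {"name": "llm-generated summary", "indicator": "generated", "lag_days": None},
]

def supported_use(source):
    """State what a source of this category can and cannot support."""
    if source["indicator"] == "leading":
        return "predictive: can inform forecasts (e.g., agricultural output)"
    if source["indicator"] == "lagging":
        return f"descriptive: trails events by ~{source['lag_days']} days"
    return "unverified: needs tracing back to curated sources before use"

for s in sources:
    print(f"{s['name']}: {supported_use(s)}")
```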