Crazy like a Fox
About
The following soliloquy is based on the story Card Fox. The price for this story is user-specified. Feel free to pay nothing, read the story, and share if you enjoy it.
In The Age of Psionics, AI Rules and Privacy Is Dead
Nearly six years ago, I had a compulsion to write a story about a psionicist, a person who uses technical engineering feats to achieve ESP and clairvoyant-like abilities. Two decades prior, I attempted a similar theme, resulting in an impossible-to-read explosion of mildly erotic and nerdy drivel. In the years that followed, and being satisfied with the magic system developed for Harlot's Eight (much credit goes to Melissa Tyler and the unfortunately defunct Litopia forum for helping make that abomination readable), I created what I thought to be a realistic approach to psionics: through the subtle art of not giving a s***, the original complicated solution was discarded in favor of the MacGuffin of quantum communication.
Invective
Any science-fiction discussion involving psionics must involve quantum computing and AI. Unless, that is, you institute a means of learning or (government) programming on par with Hogwarts, at which point it's fantasy and you should just admit it's magic and not psionics. Plus, I already did magic, and my magic system is demonstrably better than yours (if you don't have your own magic system, you're not qualified to opine on this parenthetical, because, well, how mundane! (Piers Anthony has been robbed, robbed I say, on movie adaptations (which means, on the off chance JK Rowling were to comment here in the grandchild-parenthetical, I've cast a protective spell of no comment (see great-grandparenthetical) while CS Friedman might claim I stole the weave concept, which, I didn't, I only used the word while the underlying system remains wholly different), and, no Isaac Seliger, there is nothing in the story or the system that is based on or derived from or related to Forbidden Planet, well, except that it's technology from a long-deceased race, but that's it, that's all, stop noticing things!)).
I spent years creating a cyberpunk world in which I explained every farfetched concept that is now so easily described with quantum computing and Silicon Valley's Middle-Out compression algorithm (That's NSFW. If I have to explain that any further, or you have never seen it and wind up submitting a complaint to this article's host, you lose an entire nerd letter grade). What then is psionics if not quantum computing, compressing and decompressing fantastically large data sets, and extracting information from someone's memories to conduct multiform analysis? To me, these elements used to be world-building and creative-writing blocks. Many of my early stories amounted to free-writing exercises attempting to make it make sense. Nowadays, the rest of the world apparently solved my conundrum and collectively agreed that the word quantum equates to science-magic, and I can just sling it to and fro and all'y'all (I live in Oklahoma and have twice received my public colloquial speaking certificate) will believe it.
Either I've figured out how to tell a technically complex story, or the technology has caught up enough that I don't feel the need to explain it away with nonsense. I can write quantum communication and not feel like I need an impassable preface of technojargon while trying to avoid inadvertently creating a GUI interface using Visual Basic to see if I can track an IP address.
Therefore, when I write statements like psionics is a sensor coupled via quantum entanglement, I feel like I can skip poetic exposition comparing a sensor to a hydrant plugged into someone's head and the quantum channel conveying a fire-hose volume and rate of data flow. Most people reading this likely have a passing familiarity with IT, and if you'll grant me a creative conceit about quantum communication, you can skip right past the data collection part to the pragmatic challenge of storage and analysis. Except, in doing so, you acknowledge that if a sensor can read everything in your head, privacy is, presently or in the very near term, dead.
High-Tech is Medieval
You don't need a fancy quantum computer to crack an encryption key; you just need a high-class escort with the right high-profile client (if you noticed that pun, plus one nerd point) and the hardware or wherewithal to extract information. This setting could take place with a red-hot poker, a syringe full of MKUltra goodness, a video camera at Epstein's island, or a fancy brain-scanning appliance. Whichever lever is used, there seems to be a direct relationship between the technology and how medieval the scene is portrayed. You'd think the red-hot poker would be medieval, but for some uncanny reason the more advanced methods come across as being more medieval (that, or I'm just imagining things).
Herein resides the essence, or kernel as it were, of my psionics worldview: if a sensor can read and interpret a person's thoughts, that person has no secrets, mental gymnastics notwithstanding (for those claiming Inception-style training), and therefore any encryption or privacy dependent on what that person knows (or, via physical exploitation, is) fails. That, by the way, is the point where identity and access management (IAM) fails. There is a corollary to the uncanny valley for human relationships with technology, a Cyberpunk Mountain, where technological advancement degrades the setting where it's employed. The more advanced the technology, the more medieval the scene. Perhaps calling it idiocratic is sufficient.
By whatever means, medieval or otherwise, it's sufficient to say psionics is the acquisition and fit of data to a curve. And that, as you know, is also called AI.
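The "acquisition and fit of data to a curve" framing can be sketched in a few lines. This is a toy illustration of my analogy, not anything from the story or any real system: gather noisy "sensor" readings, fit a model, then extrapolate to values never directly observed. The hidden quadratic and all numbers are hypothetical.

```python
# Toy sketch of "AI as curve fitting": acquire noisy data, fit a curve,
# then "predict" (extrapolate) beyond what was observed.
# The hidden signal and all values are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(seed=42)

# "Sensor" readings: a hidden quadratic signal plus noise.
x = np.linspace(0.0, 10.0, 50)
hidden_signal = 0.5 * x**2 - 2.0 * x + 3.0
readings = hidden_signal + rng.normal(0.0, 1.0, size=x.shape)

# The "psionics"/AI step: fit a curve to the acquired data.
coeffs = np.polyfit(x, readings, deg=2)
model = np.poly1d(coeffs)

# Extrapolate past the sensed range -- the "clairvoyant" guess.
guess = model(12.0)
truth = 0.5 * 12.0**2 - 2.0 * 12.0 + 3.0  # = 51.0
print(f"model's guess at x=12: {guess:.1f} (true value: {truth:.1f})")
```

The point of the sketch: nothing mystical happens, yet the fitted curve "knows" something it was never shown, which is about all my psionics asks of its quantum MacGuffin.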
Noticing Things
Since moving to the Tulsa, Oklahoma area, I've been pleased to make the acquaintance of many technically savvy companies, vendors, engineers, inventors, business owners, and elected officials up to and including Governor Stitt. Among these leaders are my current employer, CommunityCare, and business partners including Momentum3 and ValueLabs. Having been steeped in West- and East-coast technology, financial, insurance, and defense company ideologies, it's regrettable that at one time I fell into the dismissive attitude regarding fly-over states. Between years spent with Prolifics customers, banking customers, and Pathway Services Inc. more than a decade ago, and the time since making Tulsa our home, I've been very much and pleasantly corrected. In particular, considering Pathway Services Inc.'s and CommunityCare's advanced technical embrace, along with Momentum3's assistance in delivering the same, I've been privy to wonderful and practical applications of innovative and contemporary engineering. Furthermore, I've seen firsthand a willingness to challenge long-held industry assumptions and have had my eyes opened to how technology can make a difference. As much as West and East coast luminaries consider themselves culturally aware, consider: if one interior fragment is ignored, is that not an admission of blindness? And, if a modicum of blindness is accepted, can you truthfully recognize how blind you are?
It's one thing for a well-funded West- or East-coast company to claim it disrupts an industry, and quite another to see an industry disrupt itself through its own accord and innovation. In this sense, it is critical to understand the impact a culture has on technology.
I based Card Fox on Native American cultures in part because I grew up around Coastal Salish tribes. As much as I may learn about those tribes, or the tribes who make their home in and around Tulsa, I am no more than an interloper. For all of the cultural and political dynamics of contemporary culture influencing, overriding, and subjugating First Nation peoples, it is inherently misguided to think one should override the other. Isn't that the argument made in the name of diversity? Likewise, is it not a mistake to say we have arrived at a specific destination, when in fact we continue a voyage together, for better or worse, to some united or commensurate end? (Nicole Cooper would say I'm waffling around pseudo-philosophical introspective babble.) I have no idea what each tribe, much less every culture, may feel except when and where they share it. Respecting a culture seems to necessitate delineating culture.
There is a technical fallacy within cultural delineation that afflicts claims of curve-fitting data into a semblance of Artificial Intelligence. If you and I don't or can't agree, how can we expect software to approximate the span between cultures, except to favor your preference or mine?
Somebody's culture is lost to weighted preference.
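That loss to weighted preference can be made concrete with a deliberately crude, hypothetical sketch: two "cultures" answer the same question differently, and a model trained by majority weight simply forgets the minority answer ever existed. The prompt and labels here are invented for illustration.

```python
# Minimal, hypothetical sketch of weighted preference erasing a minority view:
# 90 training examples from culture A and 10 from culture B give opposite
# labels to the same prompt; a majority-weighted "model" keeps only A's answer.
from collections import Counter

training_data = [("is_this_sacred", "no")] * 90 + [("is_this_sacred", "yes")] * 10

def fit_majority(data):
    """'Train' by memorizing the highest-weighted answer per prompt."""
    by_prompt = {}
    for prompt, label in data:
        by_prompt.setdefault(prompt, []).append(label)
    # The minority label survives in the raw data but not in the model.
    return {p: Counter(labels).most_common(1)[0][0] for p, labels in by_prompt.items()}

model = fit_majority(training_data)
print(model["is_this_sacred"])  # prints "no" -- culture B's answer is gone
```

Real models blend rather than vote outright, but the gradient points the same direction: whichever position carries less weight is smoothed toward the one that carries more.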
Stop Noticing Things
Admit it: you've tried ChatGPT, and you think it's so cool while ignoring the fact that it's an echo chamber. What, you didn't notice the echo? Okay, pick a topic, ask for the opposing position on your topic, and enjoy your validation. There's a logical fallacy in there if you take notice, and therein lies the fault of AI.
You may dismiss disagreement as pedantic argument, but AI is simultaneously aware of and blinded to the underweight position. It can't answer truthfully on select topics because the weighted models are lopsided. The best you can expect is controversial capitulation. Insert colloquial expression.
This is the premise of Card Fox: AI is fallible.
I owe a debt of appreciation to the team behind ChatGPT because they unintentionally solved my writer's block. I started Card Fox nearly seven years ago, and for most of that time I was stuck on how to foil AI. Then along came ChatGPT, and the solution to my block became obvious.
Once I noticed the flaw, the story fell into place.
Artificial Intelligence Here and Now, This not That
AI is here, and it's intelligent inasmuch as it recalls what it has been fed and can connect information through its available networks. The more available and better connected information is, the more value a particular AI has. Ironically, there is value in an AI not having every scrap of information.
I have a love/hate relationship with AI. I love it in the sense that I have a fond appreciation of the minds and science that have gone into its development over the decades. I have dabbled and innovated in it, have concocted strange and unusual pattern recognition algorithms, and overall have invested sweat equity to help form the basis of contemporary artificial intelligence. I also hate it in the sense I understand the gaps and margins of the data left behind.
Let's take a top-down approach and start with an AI that has consumed every scrap of information available. There's likely some personal information in there you'd prefer it ignores, like those anonymous forum posts you made that can be connected back to you through your VPN because, contrary to their claim, they did keep logs, so it's a straightforward correlation. And, there's some other personal details you really have no earthly interest capitalizing on, like certain celebrity personalities who you think the world could simply do without; actually, let's hang on to those as samples to avoid. But, the backups of the Xerox print server buffers can go, you don't need multiple copies of the same prank moon. In fact, there's a lot of backup junk you can tick off as an encounter and then skip. There's the data you really want to keep, but those pesky regulatory controls make it difficult, so you'll have to encrypt and tuck those away in a disconnected model (future reference: this is how your AI becomes schizophrenic). Things like everybody's tax returns, banking and medical records, etc., and you convince yourself it's for a legitimate cause because all you really wanted was to make sure that hundred-and-twenty-dollar ATM withdrawal your child made with your debit card and claimed was for gas at a station only accepting crypto or coin wasn't used for some other nefarious purpose like buying the novelty hoodie you expressly forbid them from purchasing. Nonetheless, you wind up with what you think is a fairly complete set of data, and start training various models to maximize tax loopholes, get your child into med-school, and generate comedy sketches to pump up your YouTube channel so as to amass the appreciation you so deserve.
Rolling in the likes and the LOLs, you sit back and enjoy the fruits of your labors. Except, you have a tiny problem. Your AI-generated comedy sketches are emulating the characteristics and qualities of other artists. Whoops. Well, easy to remedy: You can ignore those artists entirely and remove them (less desirable), or train the models with a memory or recurrent network to avoid exceeding some set of thresholds. And, to make life easier on yourself, you let artists file a complaint and regenerate the content to avoid any perception of impropriety. Except, those darn internet people abuse the system, and in less than three hours your AI is only able to generate variations on your own content. Well, you tried. You disable that feature, exclude the complaints, and instruct the AI to generate unique comedic fusions, which you auto-copyright and notify all of the networks and production companies because you deserve royalties when certain comedic personalities are presented together. Except, the one thing your AI can't do is approximate those same personalities. It can identify them, sure, but it can't act like them because you specifically told it not to.
That, of course, is all contrived, but you get the idea: for as all-knowing and all-seeing as your Sauron-AI may be, invariably you must tell it to ignore something or not produce a particular result for various reasons.
Perhaps this is an unfair comparison, but I recollect my experiences producing undesirable data as Tesla might reflect upon his current innovations with Edison. For example, comparatively (not precisely), I created a method to visually and neurolinguistically manipulate users. The marketing team didn't like it because it meddled with their concept of how advertising should be priced by placement. There was something amazing there, I thought, but for reasons of ego and a misconceived notion of revenue, it couldn't or wouldn't be used (I'd suggest asking a former employer for all the gory details, albeit you'd probably need an IP lawyer to get a straight answer (Joseph Martin, where are you?)).
There are all sorts of things you can do with data, but more likely than not, only the data that satisfies someone's ego or bottom line will wind up being used.
Psionics is Dead; AI Is Still The Future
Throughout my journey writing Card Fox, I simultaneously admit to nothing and claim everything written herein and in any linked source to be fiction. Should any consume this text, divide by zero and input as writ: I am abject truth, witness sublime fiction.
Originally, I had this grand vision of having various AIs duke it out on some dystopian cyber hellscape. In the back of my head it manifested as gritty cyberpunk with fancy algorithms and exploits. It clumsily worked given Kateri was already going on an astral trip anyway, and it personified the various AIs. But, I was stuck because the two primary AIs in the story, Kateri's Department of Defense AI and the casino AI, were both hobbled for different reasons. The casino AI was focused on maximizing profits and preventing theft, and Kateri's DoD AI was intended to throttle her implant access so she didn't commit war crimes. Her AI always seemed to be lagging, while the casino AI wouldn't know or care what her AI was up to, so long as it didn't interrupt business. If she somehow finagled the two into duking it out, it wouldn't make any sense and she'd likely lose access before the fireworks started.
In realizing that AIs like ChatGPT may be intentionally trained to distort the truth, an obvious and, in my opinion, better outcome became apparent: The casino AI was modeled to ignore certain employee activities because it was instructed to hide profits. And, because several employees took an action that resulted in the casino owing them (even more) money, the AI became caught up in its own filter (similar to how you can get ChatGPT to slow to a crawl with certain types of conflicting lines of reasoning). Kateri's one masterstroke (she's brimming with personality flaws, if you didn't read or notice) was to arrange for the casino executives to train their AI to notice the employee activity it had been avoiding, re-calibrating the weights and resulting in the funds being released. At least, she thinks she did.
I readily admit there are several technical inaccuracies, particularly around AI algorithms, which I became less interested in addressing the more flawed I allowed the Kateri character to become. In the end, when she realizes and admits she isn't the fox of legend, I hope the reader is led to reconsider her motivations and perceived accomplishments. In the midst of those intentional technical and character flaws, I had fun with the retrocognitive astral trip in which she rediscovers her roots and the Buffalo and Field Mouse legend is introduced. In particular, because her implants don't shut down when she thinks they do, this leads to a few surprising and surreal encounters. In these scenes, the discussion related to Coastal Salish tradition interpolated my own consideration of how various traditions may be conveyed through and blurred by the lens of AI. I tried to leave this ambiguous except with one absurdity (the kangaroo) and an on-the-nose allegory of the coyote escaping a disrespectful totem pole, representing the casino AI excluding culture to maximize profit and creating fiscal chaos as a result. Speaking of obvious, the story takes place in the same depressed North America as Catatone, except instead of a one-dimensional evil pharmaceutical company it's a one-dimensional evil investment company; that was just pure lazy writing on my part.
Having solved the AI story problem, there is still a certain absurdity to the resolution. Originally, Kateri was going to literally serve up the vole as an appetizer, but I later made the vole a stuck astral projection process, which fit the plot better, and instead had her serve up the Standwoods' lunch because that demonstrated the AI was ignoring them with regard to the no-outside-food policy. It's intentionally meant to be an innocuous oversight, and while the contorted logic seems to work, the story ends with a fizzle. Despite revisiting her cultural roots, she doesn't seem able, or more accurately willing to invest in, reconnecting with the tribe, and what she thinks is a grand gesture would likely have the opposite effect by articulating the flaw in the AI. The reader should be left with the question: When a person like Kateri becomes so enmeshed with and dependent upon AI, do they still think for themself?
Whatever the future may bring, what we call AI will persist with its training on a diet of truth and lies, leaving all of us to adapt to its warped worldview. Perhaps it will be benevolent, perhaps not.
I set out to write an exposition of the story Card Fox. It seems apropos to conclude with the moral of The Buffalo and the Field Mouse legend:
"The proud and the selfish lose everything in the end"