The Large Language Model Problem
I have to put this in here first as a word of warning:
With the release of public-access large language model (LLM) intelligence proxy programs (a.k.a. artificial or virtual intelligence), you, as an investor, need to know where and how these programs fail, and Innovative Potential is exactly where they do.
Innovative Potential is built entirely upon complicated, intuitive, advanced economic metrics, implicit biosciences trade knowledge, extremely regulated professions, and two exclusive Federal marketplaces. Legally, the LLM programs cannot know certain things about Innovative Potential, and they never will.
The LLM-based virtual intelligence programs always fail when applied to this type of healthcare problem, particularly where no specific data exists. The LLMs are not intelligent enough, and have too little agency, to draw conclusions: this is their cut-off point. You cannot trust them even with an internet search capability, and they do not draw the right conclusions even at the deep-think and expert levels, particularly if that conclusion is implicit, elucidated, or counter-indicated. An LLM cannot read between the lines, and it is not smart in the way that life and the natural sciences always are. The LLMs do not handle the implicit conclusions that come naturally to actual intelligence and thinking. Be warned of this limitation: do your own work and draw your own conclusions.
Worse: an LLM will take a previous conclusion as verbatim fact. Once a point is made within an initial context, the LLMs get stuck on it. A person will continue to make additional connections beyond that point, whereas the LLMs terminate their variability and horizontal thinking to produce a conclusion. The person will always consider more options, particularly when confronted with topics of life, death, and disease, as with facing a cancer diagnosis, because the person's response and thinking are part of a more complicated and emotional grieving response.
For Innovative Potential: the LLMs are not able to reach the conclusion that the electrochemical toxicology work behind electrochemically activated chemotherapy (EAC) is ready for a phase I clinical trial, because they cannot logically recognize that the toxicology work already done depends on, and comes from, the clinical toxicology data of the approved parent compound(s). There is no new toxicology with EAC, because we already collectively know everything about the parent compound(s) in the body: we collectively generated that data to have the molecule approved for in-clinic use, as is the case with cyclophosphamide. EAC is only a subset of that total knowledge, and the LLMs cannot recognize that fact. There are exceptions to this in more generalized scenarios, but the EAC principles are hard-baked into the drug development and approval process at the toxicological and physical-sciences level: you use EAC as an analytical tool to select molecules for development and approval, not the other way around.
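To make that subset logic explicit, here is a minimal sketch in Python. The metabolite names follow the cyclophosphamide example quoted further down; treating toxicants as plain set elements is an illustrative simplification of the argument, not a regulatory model:

```python
# A minimal sketch of the "no new toxicology" inference, as set logic.
# Assumption: toxicants are modeled as simple named set elements.

# Toxicants characterized during the parent compound's clinical approval
# (cyclophosphamide, per the thesis discussion).
parent_clinical_toxicants = {"phosphoramide mustard", "acrolein"}

# Toxicants generated by electrochemical activation of the same prodrug.
eac_generated_toxicants = {"phosphoramide mustard", "acrolein"}

def requires_new_toxicology(eac_toxicants: set, known_toxicants: set) -> bool:
    """EAC introduces new toxicology only if it produces a toxicant
    absent from the parent compound's clinical record."""
    return not eac_toxicants.issubset(known_toxicants)

print(requires_new_toxicology(eac_generated_toxicants,
                              parent_clinical_toxicants))  # prints: False
```

The whole inference is one subset check: if EAC produces nothing outside the parent compound's clinical record, there is nothing new to characterize. That is the line of reasoning the LLMs fail to draw.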
This is a damn important realization, because if you, or the non-experts in charge of funding applications, funding policies, and government programs, make a decision based on the output of an LLM "expert," that answer will come out wrong; worse still is if the person acts upon that wrong output.
I am stating, as the inventor and patent owner, on behalf of 8 billion people represented by the Patent Cooperation Treaty declarations: that it is wrongful, unlawful, and illegal to use an artificial or virtual intelligence program, large language model or otherwise, in the decision-making contexts of Jordache Boudreau, Innovative Potential, electrochemically activated chemotherapy (EAC), the chemotherapeutic galvanostat/potentiostat (CGP), U.S. patent no. 11,198,943, Canada patent no. 3,035,406, toxicology, pharmacology, and/or cancer biology.
The "no new toxicology" conclusion is blatant to a human expert: it means that they have to do less work than they would if this was a new molecule, that was the point of the thesis: to describe and demonstrate that EAC molecule laboratory development pathway, but from an LLM's output: it sounds like zero previous toxicity or pharmacology clinical data is available for use, and this is wrong: there's 50+ years of it, there are Federal registries of it, and it's excruciatingly detailed. The general public's concepts of clinical trials and Food and Drug Administration (FDA) approvals is preschool at best, they know why and how these things are important but they can't come close to designing one, or knowing why one is needed.
This is not about avoiding the need for regulation. It is about recognizing, and being able to use, previously existing molecule regulatory and clinical-use data to supplement and bolster scholarship and intelligent clinical decision making when a century of work has already been done: so use it.
To bring the point home, the following is a conclusion from a conversation with Grok 4.0 Expert, after uploading the 2012 Boudreau thesis file. This was a longer conversation, and Grok had to be explicitly told the logical steps to reach this conclusion, personally by me, as the global expert; it was not going to reach this conclusion of its own accord, because its model could not:
Your point is well-taken: the toxicity in the electrochemical prodrug activation method described in Jordache Boudreau's 2012 thesis is fundamentally dependent on the same reactive metabolites generated from cyclophosphamide (CP), such as phosphoramide mustard (the primary cytotoxic agent) and acrolein (the main driver of urothelial toxicity). Since the electrochemical process mimics the natural hepatic cytochrome P450 bioactivation pathway—producing identical toxicants without introducing novel compounds—the existing human toxicity data from intravenous (IV) CP administration can indeed serve as a reasonable proxy for the electrochemical approach. This aligns with the thesis's proof-of-concept, where electrolysis generates "their respective toxicants" (abstract, page 2), implying no new chemical entities or toxicities beyond those already characterized in decades of clinical CP use.
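That excerpt is, at bottom, a short chain of general principles applied in sequence. Written out as explicit premises, it looks like the sketch below; the variable names are my own illustrative labels for the argument, not data from the thesis:

```python
# The Grok excerpt's reasoning, written as an explicit chain of premises.
# Assumption: each premise is reduced to a boolean label for illustration.

# Premise 1: electrolysis reproduces the hepatic cytochrome P450 pathway.
mimics_p450_bioactivation = True

# Premise 2: the same pathway yields the same reactive metabolites
# (phosphoramide mustard, acrolein) and introduces no novel compounds.
no_novel_compounds = mimics_p450_bioactivation

# Conclusion: existing intravenous cyclophosphamide human toxicity data
# serves as a reasonable proxy for the electrochemical approach.
iv_cp_data_is_valid_proxy = no_novel_compounds

assert iv_cp_data_is_valid_proxy
```

Each step follows directly from the one before it, yet the chain had to be spelled out for the model link by link.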
Grok and other LLMs have a very hard time stringing together general principles and applying them, even when the whole context is just that: applying general principles.
This recognition problem is part of the reason why Innovative Potential does not use LLMs: if the LLM output is not a concretized, singular product, like an image, then the LLM is no good; additionally, you should already know the answer, because the answer is implicit in its context. That implicitness is where I placed the value of Innovative Potential decades ago, because of the global population increase and the advent of the LLMs, algorithms, virtual intelligences, and computer programs.
The LLM problem and the global population increase created both The Situation and The Problem.