@sharko @yerrag @Peatful @Kvirion
I would like to use this tech if it returned links/citations back to source material. I think it's a cool experiment and fine to ask to be paid for the archival value-add that the LLM (might) provide.
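For what it's worth, the citation feature I'm describing is mechanically simple. Here's a minimal sketch in Python of retrieval that carries source links along with the retrieved text; everything in it (the toy hash embedding, the placeholder URLs, the function names) is my own illustration, not anyone's actual system:

```python
import numpy as np

def embed(text, dim=64):
    # Toy bag-of-words hash embedding; a stand-in for a real
    # sentence-embedding model, just so the sketch runs end to end.
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    return v

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Hypothetical archive: each chunk keeps a pointer back to its source.
corpus = [
    {"text": "thyroid hormone supports cellular respiration",
     "source": "https://example.org/archive/thyroid"},       # placeholder URL
    {"text": "progesterone opposes the effects of estrogen",
     "source": "https://example.org/archive/progesterone"},  # placeholder URL
]

def answer_with_citations(query, corpus, k=2):
    # Rank chunks by similarity to the query and return the text
    # TOGETHER with the links, so every answer is traceable.
    q = embed(query)
    ranked = sorted(corpus, key=lambda c: cosine(q, embed(c["text"])), reverse=True)
    hits = ranked[:k]
    context = "\n".join(c["text"] for c in hits)      # goes into the LLM prompt
    citations = [c["source"] for c in hits]           # shown to the reader
    return context, citations

print(answer_with_citations("what does thyroid do?", corpus, k=1))
```

That's the whole ask: keep the pointer to the source next to the text at every step, and surface it with the answer.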
I guess, @sharko, you are not a quant stats modeler who understands the "stochastic parrot" comment by @Kvirion and the paper he linked to, or the valid concerns raised by @Peatful.
The prompts you posted telling your LLM to "invest 100 times more" and do something "even more surprising" to Peat, etc., reveal to me that you're unaware of how those prompts are parsed and what's happening under the hood (in the code). The code assembles a ranked list of word associations and returns the top-ranked text composition by some very limited likelihood criterion within its very "small-world" model space. When you tell it to "come up with an idea" (it cannot! It can only assemble combinations it was trained on!), it simply returns #2 on the ranked list, or does a re-ranking with a final check to give you something else from its list of guesses/synthetic text assemblages.
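To make the "ranked list" point concrete, here is a deliberately crude toy in Python. Real models rank tokens, not whole sentences, and the scores below are invented, but the constraint is identical: prompts like "be more surprising" only reshuffle probability mass over entries the model already has; they cannot add a new entry:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": fixed scores over continuations it absorbed in training.
# It can re-rank these; it cannot invent an entry that isn't in the list.
candidates = {
    "thyroid supports metabolism": 4.1,
    "progesterone opposes estrogen": 3.7,
    "sugar fuels respiration": 3.2,
    "aspirin lowers inflammation": 2.9,
}

def sample(candidates, temperature=1.0):
    # Softmax over the fixed scores. "Surprise me" ~= higher temperature,
    # which flattens the distribution over the SAME candidate list.
    items = list(candidates.items())
    scores = np.array([s for _, s in items]) / temperature
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return items[rng.choice(len(items), p=probs)][0]

print(sample(candidates, temperature=0.5))  # near-deterministic: #1 on the list
print(sample(candidates, temperature=2.0))  # "more surprising": #2, #3... same list
```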
The idea that "AI" is "getting smarter" or graduating from high-school- to PhD-level "thinking" is absurd. This tech can be a tremendous aid for constructing text or computer code, and for many other useful applications where the training dataset is in some sense sufficient for the task at hand. Thus, as an archive tool that can spot likely interconnections in the literature or sources you trained it on, I agree there is a real value-add.
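That archival use case is concrete enough to sketch as well. Reusing the toy embed() and cosine() from the first snippet, "spotting interconnections" reduces to flagging passage pairs from different sources that score as unusually similar (the threshold is arbitrary and mine):

```python
from itertools import combinations

def find_interconnections(corpus, threshold=0.6):
    # Flag cross-source passage pairs with similar embeddings -- a plausible
    # mechanical basis for the archival "value-add", nothing more.
    vecs = [(c, embed(c["text"])) for c in corpus]
    links = []
    for (a, va), (b, vb) in combinations(vecs, 2):
        sim = cosine(va, vb)
        if a["source"] != b["source"] and sim >= threshold:
            links.append((a["source"], b["source"], round(sim, 2)))
    return links
```

Note that everything such a tool can "connect" was already in the corpus; it surfaces associations, it does not create them.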
When it comes to your last few posts, the claim of "synthesizing new ideas" falls flat and does become (borderline?) dishonest. It would be dishonest, for example, to claim that what your LLM returns represents a "likely" Ray Peat take on a new topic he never wrote about.
Keep in mind that the training dataset is a binding constraint. The archival value-add is genuine; claiming that the LLM comes up with "new insights" in any way associated with Ray Peat is, in my view, dishonest. Better to say explicitly "new insights generated by an LLM trained on a specialized training set comprised of..." and then list precisely which sources were included in the training set: Ray Peat's and Broda Barnes' oeuvres, plus the filtered PubMed content you've evidently included. That specificity would match the archival value-add claim. I strongly recommend against selling the LLM with the repeated prompts you gave upthread, which, to anyone working in this space, are unlikely to impress.