Bioenergetic AI Labs: The future of Ray Peat's legacy is already set
-
I've been looking at this nightmarish "diagram" for far too long.
-
@NNight said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
I've been looking at this nightmarish "diagram" for far too long.
It is not part of the system because, as I noted, public AI technologies are not yet able to produce images of this kind at a satisfactory level. This was an example of another extremely useful future AI capability, for all the AI naysayers out there.
-
@sharko
I'm not a naysayer, more like a realist about the capabilities of AI.
I agree with T-3: you don't seem to really understand how these models work (your prompts are a bit "fanciful"). And I'm not saying that to be mean. P.S.: I think LLMs can be useful for translation, summarization, and categorization.
-
Some shitty AI parroting his words taken out of context is a joke and not his legacy. So many people are trying to profit off of his name since he died. Maybe at least try to sound less like a pompous ass by not calling this his legacy? AI incapable of its own thought, reason, and creativity isn't his legacy.
-
@NNight said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
@sharko
I'm not a naysayer, more like a realist about the capabilities of AI.
I agree with T-3: you don't seem to really understand how these models work (your prompts are a bit "fanciful"). And I'm not saying that to be mean. P.S.: I think LLMs can be useful for translation, summarization, and categorization.
This is a fairly common attitude among Ray fans; many of us like to skip information in order to keep sticking to our faith. You ignored the response I wrote to him. Anyway, I'm really not interested in trying to convince anyone. I enjoy technology that helps me help myself and others.
-
@Barghest said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
Some shitty AI parroting his words taken out of context is a joke and not his legacy. So many people are trying to profit off of his name since he died. Maybe at least try to sound less like a pompous ass by not calling this his legacy? AI incapable of its own thought, reason, and creativity isn't his legacy.
It's a common symptom of Ray fans who are sure they have all the answers to everything. You don't know me, and you also happen to be completely wrong. If my main goal were money, first of all I wouldn't focus on such a small niche, and if I did, I would make a bot that costs me nothing, in half an hour, and sell it to all 20 thousand people who search for "Ray Peat Diet" every month. There, I just gave advice to anyone who is trying to get rich from Ray Peat. I'll tell you more than that: your attitude is so negative that I doubt your organism is approaching homeostasis according to Ray Peat's approach. There is no problem with making money if, in the end, it contributes to the world. Any other story stems from unclear thinking and a general lack of understanding.
-
@sharko You really don't get what @T-3, @Peatful, me, and others are trying to explain to you, do you?
Maybe because you don't know what you don't know i.e. you're probably affected by Dunning–Kruger effect...
Your convictions/claims are mostly false, because of:
- Epistemology:
LLMs are only capable of partial deductive and inductive reasoning, in statistical ways.
They are unable to perform creative (innovative/inventive) thinking, i.e. abductive reasoning (e.g. the "logic of hunches") - this is a scientific fact.
- Ontology:
LLM algorithms are linear and statistical, limited by their datasets, i.e. closed IT systems. But the human mind is a nonlinear, interrelated, adaptive, partially quantum, open complex ecosystem. This is a big difference.
- Phenomenology:
A human mind operates with a sense of self-awareness and intentionality; we perceive, think, and act, etc.
LLMs lack consciousness and intentionality. They generate responses based on patterns in the data they were trained on, without any subjective experience or self-awareness. Their outputs have no underlying intention or purpose; they just produce statistically probable text.
Moreover, human cognition is deeply embodied. Our thoughts and experiences are influenced by our physical bodies and sensory inputs. Emotions, physical sensations, and the environment play critical roles in shaping our mental states.
We are emotional and adaptable - continuous/dynamic improvement (or regress); we can also reflect and see things from different angles (at least some of us can). LLMs do not.
- Axiology:
An LLM doesn't get ethics...
Plus the basics of knowledge management: people can think more than they can say, and they can say more than they can write...
So, "AI" can help us find some useful info, but it cannot bring new ideas.
-
Thanks for sharing these ideas. Did you learn these concepts by self-study? Or did they come from a liberal arts education similar to Peat's, one that gives such a perspective?
Can you be more constructive rather than talking down to him, as if you want to give a lecture more than help Sharko in his efforts to use AI to help us gain a better understanding of Ray Peat's work?
But I may be wrong in not giving you and T-3 enough credit, as I can really see some effort to be constructive, and Sharko's responses may not hit the right notes with you, though I get the sense that he is more about explaining the possibilities and potential of AI than about addressing the points you raised.
Sharko, I can see that if I were in your shoes I would feel like I'm defending a thesis before a panel of judges ranging from caustic and belligerent to constructive and helpful. It would make you uncomfortable and defensive, if only because of the belligerent one trying to have his say in the only tone he is used to.
But let's continue the discussion, giving Sharko a chance to answer Kvirion's points, and if Kvirion's points are not sufficiently addressed at the current prototype stage, then we have to consider the likelihood of improving the AI model instead of prejudging it based on previous attempts at AI that failed.
-
@Barghest said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
Some shitty AI parroting his words taken out of context is a joke and not his legacy. So many people are trying to profit off of his name since he died. Maybe at least try to sound less like a pompous ass by not calling this his legacy? AI incapable of its own thought, reason, and creativity isn't his legacy.
I appreciate the objectivity / understanding here balanced with passion
Said much better than I have
Ray was a gifted soul of a man
In the legacy thread from the old forum iirc-
someone else, again, summarized it as digital vs analog. And it really comes down to relationship
Relationship to yourself, others and the environment
Expressed intellectually and/or creatively. Peat was the epitome of both
Soulless LLM or AI offers me nothing
-
@yerrag said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
Thanks for sharing these ideas. Did you learn these concepts by self-study? Or did they come from a liberal arts education similar to Peat's, one that gives such a perspective?
A mix of... partial education in liberal arts plus self-study thanks to some wise people sharing their insights online/books... Plus recently reading/learning from Ray's works (I miss him so much...) helped me to add another dimension... but I'm still learning and I'm open to dialogue
BTW I also received professional training in advanced sense-making and Complex Adaptive Systems.
Can you be more constructive rather than talking down to him, as if you want to give a lecture more than help Sharko in his efforts to use AI to help us gain a better understanding of Ray Peat's work?
Yeah, you're right, my bad. I may try... but both sides need to be more open-minded...
Honestly, I'm pretty frustrated with the IT guys making unfounded claims that they have a panacea or that they are helping the world... when in reality they are unaware (WEF/neoliberal?) agents of destruction/idiocracy (i.e. Moloch)... In my imagination I'm with John Connor, Morpheus, and the Butlerian Jihad - fighting the machines!
BTW I tried to be nice to Sharko at first...
Full disclosure - in the ancient past I also worked in roles such as IT database developer/analyst and IT project manager, and I was a technology fanboy.
But I may be wrong in not giving you and T-3 enough credit, as I can really see some effort to be constructive, and Sharko's responses may not hit the right notes with you, though I get the sense that he is more about explaining the possibilities and potential of AI than about addressing the points you raised.
Right, it's good to use/explore possibilities and potentials, BUT one also must be aware of (many) limitations...
An LLM can help us find something, but such info must not be seen as a conclusion, only as input for further conscious processing with the help of the scientific method and creativity...
But let's continue the discussion, giving Sharko a chance to answer Kvirion's points, and if Kvirion's points are not sufficiently addressed at the current prototype stage, then we have to consider the likelihood of improving the AI model instead of prejudging it based on previous attempts at AI that failed.
Golden advice, I'm for it.
-
@sharko said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
@NNight said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
@sharko
I'm not a naysayer, more like a realist about the capabilities of AI.
I agree with T-3: you don't seem to really understand how these models work (your prompts are a bit "fanciful"). And I'm not saying that to be mean. P.S.: I think LLMs can be useful for translation, summarization, and categorization.
This is a fairly common attitude among Ray fans; many of us like to skip information in order to keep sticking to our faith. You ignored the response I wrote to him. Anyway, I'm really not interested in trying to convince anyone. I enjoy technology that helps me help myself and others.
I'm not really a "Ray fan" and I've always despised the cultish behavior that I had observed on the RPF.
My goal was to give you honest feedback on your business model (and to criticize some of your "idealism").
I don't want to appear as someone who wants to discourage you; that is not my goal. I think it's very good that you want to develop such a project, and I'm not one of those who are afraid of AI. Again, good luck!
-
@Kvirion said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
@sharko You really don't get what @T-3, @Peatful, me, and others are trying to explain to you, do you?
Maybe because you don't know what you don't know i.e. you're probably affected by Dunning–Kruger effect...
Your convictions/claims are mostly false, because of:
- Epistemology:
LLMs are only capable of partial deductive and inductive reasoning, in statistical ways.
They are unable to perform creative (innovative/inventive) thinking, i.e. abductive reasoning (e.g. the "logic of hunches") - this is a scientific fact.
- Ontology:
LLM algorithms are linear and statistical, limited by their datasets, i.e. closed IT systems. But the human mind is a nonlinear, interrelated, adaptive, partially quantum, open complex ecosystem. This is a big difference.
- Phenomenology:
A human mind operates with a sense of self-awareness and intentionality; we perceive, think, and act, etc.
LLMs lack consciousness and intentionality. They generate responses based on patterns in the data they were trained on, without any subjective experience or self-awareness. Their outputs have no underlying intention or purpose; they just produce statistically probable text.
Moreover, human cognition is deeply embodied. Our thoughts and experiences are influenced by our physical bodies and sensory inputs. Emotions, physical sensations, and the environment play critical roles in shaping our mental states.
We are emotional and adaptable - continuous/dynamic improvement (or regress); we can also reflect and see things from different angles (at least some of us can). LLMs do not.
- Axiology:
An LLM doesn't get ethics...
Plus the basics of knowledge management: people can think more than they can say, and they can say more than they can write...
So, "AI" can help us find some useful info, but it cannot bring new ideas.
You're writing things that make it sound as if I once said my goal is for AI to replace us.
I have explained several times that these are tools that can accelerate research processes at a dizzying pace, and this is the essence of what I am offering.
I'm not an AI freak, I was a temp when it came out and ran away from it once I couldn't control my serotonin even with substances. (This is what happens when you try to solve problems in your head intensively without a break).
I will ask you a question:
Suppose you are researching according to Ray Peat's knowledge but you do not have a basic understanding of human physiology and biology. How long would it take you to understand the picture if you had to read all of Ray Peat's material while also learning the basics of human physiology and biology through Google, PubMed, etc.? And how long would it take if you used an AI that contained all the knowledge Ray Peat published, along with a lot of additional knowledge and unlimited access to research?
Another question:
If you were to ask an AI agent that is capable of performing several processes one after another in response to a single prompt, connected to a vector DB containing 3,000-word chunks and structured so that the 5 chunks most relevant to the user's prompt can be retrieved quickly, filtered and refined by GPT-4o, and returning an answer after analyzing about 15,000 words of the most relevant content from everything Ray Peat published, in a clear language of your choice, checking it against relevant external information and the information GPT-4o was trained on, using, say, the Gemini 1.5 Flash model (because we want a high-token model, and its 1M-token input+output window is better than the others for the step that has to check all the collected data together):
“The goal: to find the most likely reasons why some thyroid supplements work and others don't.
Go over your knowledge and extract the most relevant information about:
T4 (Thyroxine), T3 (Triiodothyronine), rT3 (Reverse Triiodothyronine), T2 (Diiodothyronine), T1 (Monoiodothyronine), Calcitonin
Then, you will work on the knowledge you have been trained on and extract relevant information about each of these hormones.
After that, you will search PubMed for 5-10 studies containing information about each of these hormones and summarize the 5-10 most relevant findings for each hormone.
Then, check 3 studies on each of the drugs and supplements in the following list: *** and summarize the findings indicating their effects, along with information on their ingredients, if any.
After that, go through all the information you have gathered and provide a list of possible and accurate reasons why a particular drug or supplement affects body temperature and heart rate while others do not, along with the relevant quotes from which you drew the conclusions and links to the sources of those quotes."
Do you think there is no way it will find new insights that no one has found before?
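(For concreteness, here is a minimal sketch of the retrieval step such an agent would perform - embedding the question, pulling the 5 most relevant ~3,000-word chunks, and handing the ~15,000 words of assembled context to a large-context model. The embed and llm helpers are hypothetical placeholders standing in for whatever embedding model and LLM API is actually used, not any specific vendor call.)

```python
# Minimal sketch: top-k retrieval over pre-embedded text chunks, then one
# call to a large-context model with the retrieved context. The `embed` and
# `llm` arguments are hypothetical callables (any embedding model / LLM API).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Standard cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_top_chunks(query_vec, chunk_vecs, chunks, k=5):
    # Score every stored chunk against the query and keep the k best matches.
    scores = [cosine_similarity(query_vec, v) for v in chunk_vecs]
    top_idx = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top_idx]

def answer(query, chunks, chunk_vecs, embed, llm, k=5):
    # 1. Embed the question with the same model used to embed the chunks.
    query_vec = embed(query)
    # 2. Pull the k most relevant ~3,000-word chunks (~15,000 words for k=5).
    context = "\n\n".join(retrieve_top_chunks(query_vec, chunk_vecs, chunks, k))
    # 3. Hand the assembled context plus the question to a large-context model.
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)
```

In a real pipeline the chunk embeddings would live in a dedicated vector database rather than in memory, and a separate model call would handle the cross-checking step, but the retrieval logic is essentially this.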
-
@NNight said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
@sharko said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
@NNight said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
@sharko
I'm not a naysayer, more like a realist about the capabilities of AI.
I agree with T-3: you don't seem to really understand how these models work (your prompts are a bit "fanciful"). And I'm not saying that to be mean. P.S.: I think LLMs can be useful for translation, summarization, and categorization.
This is a fairly common attitude among Ray fans; many of us like to skip information in order to keep sticking to our faith. You ignored the response I wrote to him. Anyway, I'm really not interested in trying to convince anyone. I enjoy technology that helps me help myself and others.
I'm not really a "Ray fan" and I've always despised the cultish behavior that I had observed on the RPF.
My goal was to give you honest feedback on your business model (and to criticize some of your "idealism").
I don't want to appear as someone who wants to discourage you; that is not my goal. I think it's very good that you want to develop such a project, and I'm not one of those who are afraid of AI. Again, good luck!
Thank you.
It's really just a side project. My main goal is to recruit friends for the research tools, so that I can continue to use and develop them for my own study goals, and if it ends up generating profits for me, that will only allow me to focus more on this, at the expense of other things I do today, like selling digital health courses, which make me enough money that I don't have to spend time on bots for extra income.
It's so unprofitable compared to all my alternatives that it's a little funny to me that there are people who think I'm trying to ride on someone's knowledge to make money. What's more, those same people think that riding on someone's knowledge to make money using AI technologies is bad, but learning from all of that knowledge and making a living helping others with their health, thanks to that knowledge, is fine.
-
@sharko said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
I will ask you a question:
Suppose you are researching according to Ray Peat's knowledge but you do not have a basic understanding of human physiology and biology. How long would it take you to understand the picture if you had to read all of Ray Peat's material while also learning the basics of human physiology and biology through Google, PubMed, etc.? And how long would it take if you used an AI that contained all the knowledge Ray Peat published, along with a lot of additional knowledge and unlimited access to research?
Sorry, but with all due respect, you are making the same mistake over and over...
The theory of cognitive Predictive Processing claims that "people don't see what they do not expect to see". You seem to be a great example of it...
Wisdom isn't about (the speed of) information processing.
It is about the process of continuous sense-making... (learning, exploration, probing, understanding/framing, analyzing chunks, synthesizing contradictions, contextualizing, refining, questioning assumptions, epistemic humility, etc.). If one does not have a basic understanding of human physiology and biology (not to mention ontology, epistemology, phenomenology, and axiology) then one shouldn't even approach this field, or one may get seriously hurt...
Moreover, you seem to assume that all knowledge sources are equal and rational. And that a person is asking the right question...
Knowledge is like a forest - each component (trees, fungi/Mycorrhiza, insects, soil, rain, sun, prey/predators, season) and their relationships are important! But you seem to be only interested in counting kilograms of wood...
One should know [beforehand] that the body is an ecosystem of interrelated components with synergistic effects, feedforward/feedback loops, allostatic/homeostatic mechanisms, etc., and that most substances/substrates and their effects are context- and path-dependent. One should also understand the current problems with peer-reviewed publications, how to differentiate between a paper written with a reductionistic approach and one written with a systemic (holistic) approach, and what the problem with p-values is... And this is only level one...
Have you ever heard about Popper, Kuhn, and Lakatos, and how they define a theory, a paradigm (change), and the process of gaining understanding in general?
And all the points above are just the tip of the iceberg of scientific Knowledge Management...
You want to offer definitive answers in a non-definitive universe... This will not end well...
What you propose is reductionistic, point-based (not systemic) information stripped of all dependencies - we already have it in the mainstream, and this is a key problem! What you propose is the opposite of what Ray stood for...
If you were to ask an AI agent that is capable of performing several processes one after another in response to a single prompt, connected to a vector DB containing 3,000-word chunks and structured so that the 5 chunks most relevant to the user's prompt can be retrieved quickly, filtered and refined by GPT-4o, and returning an answer after analyzing about 15,000 words of the most relevant content from everything Ray Peat published, in a clear language of your choice, checking it against relevant external information and the information GPT-4o was trained on
It's just soulless/blind information mining...
Where is Perceive-Think-Act?
-
@Kvirion said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
@sharko said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
I will ask you a question:
Suppose you are researching according to Ray Peat's knowledge but you do not have a basic understanding of human physiology and biology. How long would it take you to understand the picture if you had to read all of Ray Peat's material while also learning the basics of human physiology and biology through Google, PubMed, etc.? And how long would it take if you used an AI that contained all the knowledge Ray Peat published, along with a lot of additional knowledge and unlimited access to research?
Sorry, but with all due respect, you are making the same mistake over and over...
The theory of cognitive Predictive Processing claims that "people don't see what they do not expect to see". You seem to be a great example of it...
Wisdom isn't about (the speed of) information processing.
It is about the process of continuous sense-making... (learning, exploration, probing, understanding/framing, analyzing chunks, synthesizing contradictions, contextualizing, refining, questioning assumptions, epistemic humility, etc.). If one does not have a basic understanding of human physiology and biology (not to mention ontology, epistemology, phenomenology, and axiology) then one shouldn't even approach this field, or one may get seriously hurt...
Knowledge is like a forest - each component (trees, fungi/Mycorrhiza, insects, soil, rain, sun, prey/predators, season) and their relationships are important! But you seem to be only interested in counting kilograms of wood...
One should know [beforehand] that the body is an ecosystem of interrelated components with synergistic effects, feedforward/feedback loops, allostatic/homeostatic mechanisms, etc., and that most substances/substrates and their effects are context- and path-dependent. One should also understand the current problems with peer-reviewed publications, how to differentiate between a paper written with a reductionistic approach and one written with a systemic (holistic) approach, and what the problem with p-values is... And this is only level one...
Moreover, you seem to assume that all knowledge sources are equal and rational. And that a person is asking the right question...
Have you ever heard about Popper, Kuhn, and Lakatos, and how they define a theory, a paradigm (change), and the process of gaining understanding in general? And all the above points are just the tip of the iceberg of Knowledge Management...
You want to offer definitive answers in a non-definitive universe... This will not end well
What you propose is reductionistic, point-based (not systemic) information stripped of all dependencies - we already have it in the mainstream, and this is a key problem! What you propose is the opposite of what Ray stood for...
If you were to ask an AI agent that is capable of performing several processes one after another in response to a single prompt, connected to a vector DB containing 3,000-word chunks and structured so that the 5 chunks most relevant to the user's prompt can be retrieved quickly, filtered and refined by GPT-4o, and returning an answer after analyzing about 15,000 words of the most relevant content from everything Ray Peat published, in a clear language of your choice, checking it against relevant external information and the information GPT-4o was trained on
It's just soulless/blind information mining...
Your attitude is excellent but you tend to think that this is the only way...
I will tell you that I started studying Ray Peat's knowledge 10 years ago, and to this day I lack about 95% of the knowledge of basic physiology and biology, and despite this, in the last decade I have brought myself back from the grave several times (the number of serious problems I had at the same time is greater than you can imagine). A few examples out of maybe 100: for a year I would wake up every day after exactly 5 hours with massive chest pains and crazy sweating, along with hand tremors for 15 years, tingling all over my body and head, tics, neurological problems, manic depression, vision problems, severe stomach pains, zero energy, cramps, and that's just the tip of the iceberg.
Every time, after I pushed myself to the edge, I returned to a "new" state in less than three months, each time in a different and complex way, based on connecting a lot of dots according to the bioenergetic approach, while knowing almost nothing about basic physiology and biology.
I don't ask AI a question, get an answer, and say thank you. I check everything 100 times, fix it, and improve its understanding as I go. In the end, I'm still learning at a speed 100 times faster than it would have taken me through Google (as someone who has been an expert in Google search for over 25 years, 15 of them as an SEO expert who knows exactly what to pick from all the false top results and how to find the more correct answers).
I enjoy the discussion, you have no idea how insignificant it is for me to recruit customers for this bot compared to other things I do.
What's more, I understand if some of you were upset because of the title of the post and the examples of correspondence with the AI that I sent for fun, which made you raise an eyebrow.
Regarding the title: I'm sorry, the marketing comes out of me naturally without my even noticing. Habits...
-
@sharko said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
I don't ask AI a question, get an answer, and say thank you. I check everything 100 times, fix it, and improve its understanding as I go. In the end, I'm still learning at a speed 100 times faster than it would have taken me through Google (as someone who has been an expert in Google search for over 25 years, 15 of them as an SEO expert who knows exactly what to pick from all the false top results and how to find the more correct answers).
Will your "AI" do that? Will ask a user to check different sources, think about different approaches, ask a user to experiment, ask a user to think of the context of an unhealthy condition and what preceded it?
And a key question: what is the chance that your ego is bigger than your knowledge?
-
@sharko said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
A few examples out of maybe 100: for a year I would wake up every day after exactly 5 hours with massive chest pains and crazy sweating, along with hand tremors for 15 years, tingling all over my body and head, tics, neurological problems, manic depression, vision problems, severe stomach pains, zero energy, cramps, and that's just the tip of the iceberg.
It doesn't prove anything; some issues may be easier to fix, some are more difficult, and sometimes it's just luck... or confabulation...
Will you take responsibility if someone asking your "favorite toy" for advice dies because your "AI" miscalculated a statistic between words...?
-
@Kvirion said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
@sharko said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
I don't ask AI a question, get an answer, and say thank you. I check everything 100 times, fix it, and improve its understanding as I go. In the end, I'm still learning at a speed 100 times faster than it would have taken me through Google (as someone who has been an expert in Google search for over 25 years, 15 of them as an SEO expert who knows exactly what to pick from all the false top results and how to find the more correct answers).
Will your "AI" do that? Will ask a user to check different sources, think about different approaches, ask a user to experiment, ask a user to think of the context of an unhealthy condition and what preceded it?
And a key question: what is the chance that your ego is bigger than your knowledge?
My AI, your HI, who cares?
I don't know what ego is. I am at the bottom of humanity and have no problem with that or anything else.
It's just that it rarely happens that I drift into serotonergic discussions and find some enjoyment in them, before I have to stop the rampage with a minor Cypro dose.
-
@Kvirion said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
Will you take responsibility if someone asking your "favorite toy" for advice dies because your "AI" miscalculated a statistic between words...?
To be honest, you now sound like a messenger for the pharma companies, not like someone who has read much of Ray Peat's work.
You are promoting the method that allows them to continue to rule: brainwashing the public into believing that, when it comes to medicine, we are all children who need the approval of Father Doctor before any change in diet, medication, or lifestyle, and that we have no capacity for research and self-intuition; that it is very dangerous for us to be exposed to knowledge because we lack responsibility for ourselves, even though it's okay for us to drive on deadly roads, go skydiving, and eat industrial poison that is sometimes marketed as something that sounds healthy.
You are forgetting perhaps the most important thing: most people accelerate their aging process at the speed of light without even knowing it, and at the same time, most authoritative medical and health information leads to the same result, and that is their starting point.
By the way, my idea is to create a small community of knowledgeable researchers who want to speed up the research processes, along with a common group for discussions and group study on the right ways to deal with these tools. But you already know everything...
-
What a lively discussion.
@sharko said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
@Kvirion said in Bioenergetic AI Labs: The future of Ray Peat's legacy is already set:
You are promoting the method that allows them to continue to rule
The specifics of the law do this, sharko. And that can be changed quite easily, people willing. Technology tends to just compound the resulting behaviour. It might change some of the beneficiaries again, if anyone can figure out how to monetise it, or further rig the 'market' and divert capital.