
    Bioenergetic Forum

    Does anyone else hate AI?

    Philosophy
    • ThinPicking @ThinPicking

      Another bold probable. The corporates themselves are probably more vulnerable than their creations. It's all just paper.

      • jamezb46 @Rah1woot (last edited by jamezb46)

        @Rah1woot I'm not sure your counterexample works at all. As far as I am aware, there are no free cyanide salts in nature concentrated in amounts that could harm humans; cyanide salts exist only as synthetic, man-made creations. Many plants carry prodrugs of cyanide, such as amygdalin, in their seeds (apricot, apple, and peach seeds, for example). The number of seeds that would have to be consumed to produce a toxic response in a human is enormous, and the seeds are bitter. They taste revolting. No one would ever eat them willingly.

        Similarly, we would never try to eat castor beans, or the seeds or leaves of unfamiliar plants, unless we were starving or mistook the plant for one that thousands of years of knowledge of the land had determined to be safe.

        But modern technology blows the door off of all of those safety mechanisms and rhythms of nature. Yes, there are of course exceptions like poisonous fish or dart frogs, but even then they are usually brightly colored.

        In time there is life but no knowledge; outside time there is knowledge but no life

        • Rah1woot @jamezb46

          @jamezb46 Your meaning of "nature" is a relatively narrow one, encompassing something like "what happened before industrialization". To me, industrialization is actually more natural than anything that came before it; it literally encompasses a greater exploitation of the Principles of Nature.

          So when I say that

          Briefly, the body has a similar vulnerability to cyanide by way of mitochondria. And yet it is part of our nature as well as advanced organisms.

          I mean it is the nature of advanced systems in general to /rely/ on the availability of certain factors, and that these factors are a vulnerability to that system. Not a statement about what is "found in nature".

          https://bioenergetic.life/clips/a6050?t=668&c=12

          • Mulloch94 @Rah1woot

            @Rah1woot said in Does anyone else hate AI?:

            @Mulloch94 You could make similar arguments about electricity itself. "Just don't lose access to it for a long time" is the working strategy of every electrified civilization.

            The nature of development itself is increasing socialization of the means of production. Sufficiently spread AI has no more reason to suddenly fail altogether for some reason than the electricity supply. Backups can be put in place and redundant engineering (should) be used across the entire system when appropriate.

            I don't think that's a good comparison to make. Electricity is a mechanical tool; AGI has an "ontological fluidity" to it, so to speak. I think you may be conflating my POV with "lower AI" stuff. Specific domain-based AIs seem ostensibly okay. Like, I don't want to underestimate human retardation: you could even make these simplified task-oriented AIs do really dark shit. But their general application to things like healthcare, engineering, etc. seems (at the periphery, anyway) okay. Still, I don't trust that there wouldn't be militarization of it.

            I'm talking about AGI, more specifically recursively self-improving AGI. Electricity grids don't recursively self-improve. A distributed AI system with access to its own source code creates a risk-amplifying feedback loop. Redundancy here doesn't add safety; it adds surfaces for unintended self-modification. What you're addressing is component failure, which would apply to lower AIs but not to a post-human intelligence civilization.
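            The feedback-loop claim can be made concrete with a toy model. This is my own sketch, not anything from the thread: one system whose improvement rate depends on its current capability (the recursive case), contrasted with one that receives a fixed external increment each step (closer to a power grid).

```python
# Toy model (illustrative sketch only, not from the thread): contrast a system
# whose improvement each step is proportional to its current capability
# (a positive feedback loop) with one that improves by a fixed external
# increment each step.

def recursive_growth(capability: float, gain: float, steps: int) -> float:
    """Each step's improvement scales with current capability: compounding."""
    for _ in range(steps):
        capability += gain * capability
    return capability

def additive_growth(capability: float, increment: float, steps: int) -> float:
    """Each step adds a fixed amount of external engineering effort: linear."""
    for _ in range(steps):
        capability += increment
    return capability

print(recursive_growth(1.0, 0.1, 50))  # compounds like interest
print(additive_growth(1.0, 0.1, 50))   # grows linearly
```

            Under these assumptions the recursive system compounds while the additive one grows linearly; the real disagreement in the thread is over how large that gain term could be in practice.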

            These far-reaching implications surpass any sort of goal corruption. When world economies, agriculture, defense systems, space exploration, education, etc. all depend on AI and it crashes... ooof. And those are real-world problems we would face. In a hypothetical scenario where a civilization is harnessing energy from stars and black holes, colonizing asteroids, terraforming planets, etc., the implications of a crash get even more dire.

            • ThinPicking @Mulloch94

              @Mulloch94 said in Does anyone else hate AI?:

              I'm talking about AGI, more specifically recursively self-improving AGI. Electricity grids don't recursively self-improve. A distributed AI system with access to its own source code creates a risk-amplifying feedback loop. Redundancy here doesn't add safety; it adds surfaces for unintended self-modification. What you're addressing is component failure, which would apply to lower AIs but not to a post-human intelligence civilization.

              Just as well there's nothing on arxiv or a git to indicate that's possible. Even as the stage is set for theatrics.

              But I still don't trust there wouldn't be militarization of it.

              Already militarised in the clear. Everything done in the dark was always too dim. But where is the war, and who are the combatants? Maybe the corona situation left some clues behind.

              • Rah1woot @Mulloch94

                @Mulloch94

                But I still don't trust there wouldn't be militarization of it.

                Of course there will be. Which is why humans need to master it thoroughly.

                A distributed AI system with access to its own source code creates a risk-amplifying feedback loop.

                I don't know. I tend to think that something with the serious intelligence to improve itself can do so (example: humans themselves), but it will take a long time. It's not explosive; consider the O(n^2) complexity of evaluating context windows.
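                The O(n^2) remark refers to standard transformer self-attention: every token is compared with every other token, so the score matrix for a context of n tokens has n * n entries. A minimal sketch of that scaling, assuming nothing beyond the naive algorithm:

```python
# Back-of-envelope sketch (my own illustration): naive self-attention builds
# an n x n query-key score matrix, so the work grows quadratically with
# context length. Doubling the context quadruples the score computations.

def attention_score_entries(n_tokens: int) -> int:
    """Entries in the n x n score matrix for one naive attention pass."""
    return n_tokens * n_tokens

for n in (1_000, 2_000, 4_000):
    print(f"{n} tokens -> {attention_score_entries(n):,} score entries")
```

                This is why each doubling of context length makes evaluation markedly more expensive, which bears on how quickly such a system could iterate on itself.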

                Electricity grids don’t recursively self-improve

                They do though. Coal power makes it easier to produce solar power and nuclear power, and so on. Just not very quickly. Similar story for AI, probably.

                When world economies, agriculture, defense systems, space exploration, education, etc. all depend on AI and it crashes....ooof.

                There's no reason why such a system would "suddenly crash", any more than any other world-spanning phenomenon would suddenly end for whatever reason. Humanity itself, whose cell line through reproduction is seemingly nearly immortal, is one example.

                Diagnosis: too much time spent reading AI safetyism posts. Treatment: Five years hard labor in AI sweatshop.

                • samson @Korven

                  @Korven It's just a more useful Google; using it for anything generative is really bad, and very telling about someone's mental capacities, I think.

                  • jamezb46 @samson (last edited by jamezb46)

                    @samson

                    It's not that clever when you really push it. For example, I tried to see whether it could follow quite sophisticated arguments in the philosophy of mind, but it couldn't offer any meaningful engagement with ideas that go deep into the weeds.

                    Also, it is eerie that its primary objective seems not to be to answer difficult or speculative questions truthfully, but rather to try to get the user to trust it.

                    In time there is life but no knowledge; outside time there is knowledge but no life

                    • yerrag @Korven

                      @Korven

                      So far I have used three AIs: Gemini, DeepSeek, and Delphi.

                      I don't claim they are representative of AI's full potential, whether good or bad. But I want to use it to the fullest extent, to bridge my admitted imperfections as a regular human with a regular IQ, whatever that is.

                      Though my preference is not to use AI for things I already do well: cognitive skills I have learned over the years, if not mastered to varying degrees, that make me highly functional unaided by automation such as AI. Which is to say, I won't let it compose phrases, paragraphs, essays, poems, articles, books, letters, or scripts for me, nor keep me from mentally finding my way through calculations and approximations, using rules of thumb when needed. True enough, I use Waze for directions when driving, but I don't use it in my own locality, as I want to keep the ability to find my way through the city the way taxicab drivers have honed it. Disuse equates to misuse, and that would be a detriment to my brain and its neurons.

                      But when it comes to health, centered especially on bioenergetics, and as much as I've learned from Ray Peat, I haven't fully tapped into his knowledge of metabolic principles, particularly the way all his research, sorting wheat from chaff and narratives from truths, was condensed into his writings and interviews. The body of knowledge in his writing forms our understanding of bioenergetics, the umbrella of knowledge involving notable great minds such as Szent-Györgyi, Otto, Ling, et al.

                      Which is why, in my interactions with Delphi's AI that tries to capture Ray Peat's essence from all his writings, interviews, and whatever can be gleaned from his life's work, I have found myself asking many questions of clarification and guidance. They help me fill many gaps in my knowledge that would otherwise linger on to my death unquestioned, with problems in my health unresolved and uncorrected, and me unhealed. Using Delphi's implementation of a clone of his ideas and mindset, I have in the past few days found some stirrings of hope that my healing journey can resume, and even end in a short time with success.

                      What Delphi AI is is answered in this article, which includes a video clip of Maria Bartiromo interviewing Delphi's founder, Lara Lajevardian, last year:

                      https://www.foxbusiness.com/media/maria-bartiromo-interviews-lifelike-artificial-intelligence-clone

                      And here is a series of questions I asked the AI, along with the answers I got, which I found extremely helpful:

                      https://delphi.ai/ray-peat/talk/conversation/shared/69881fbc-cf44-49df-b31c-281571391e30

                      I hope you can give it a try for your own use. I think you will be pleasantly surprised at what it can do.

                      Temporal thinking is the faculty that’s
                      engaged by an enriched environment, but it’s
                      wrong to call it “thinking,” because it’s simply
                      the way organisms exist... - Ray Peat Nov 2017 Newsletter
