Does anyone else hate AI?
-
Off topic but I think the internet officially became useful when Russia gave us sci-hub on top of social media. An NN can't make sense of that. Yet.
-
The power that is being given to AI to make health and safety decisions is getting scary.
Garbage in, garbage out.
-
@DavidPS said in Does anyone else hate AI?:
Garbage in, garbage out.
I can envision a future in which an AI robot fills the void created by Tony Fauci's retirement and declares "I am the science".
OpenAI and the FDA Are Holding Talks About Using AI In Drug Evaluation
-
I'm not a fan of AI from a general standpoint. I think some highly trained (and contained) LLMs might be useful in specific fields, like coding for example. But from the macro view, it's a bad idea. The conspiracies around it are mostly garbage too. I'm not scared of some self-aware AI targeting humans; I think something much scarier could happen. I'm more troubled by human dependency on it. What happens when we hit that crucial "red line" and society is forever changed by a technological singularity, but then something happens and it fails? Human laziness and ineptitude could end up getting us killed. The more technologically sophisticated we become, the greater the fallout would be. I mean, could you imagine a Type II or Type III civilization on the Kardashev scale losing its ability to effectively apply AI? Even regular mundane tasks in their lives would be impossible. Maybe that's why such civilizations have never been found. Maybe intelligent life reaches a point of either accepting the singularity or pulling the plug. And if they choose the former, they rapidly advance until a sudden catastrophic event occurs, and then, like blowing out a candle, they're gone.
-
@Mulloch94 You could make similar arguments about electricity itself. "Just don't lose access to it for a long time" is the working strategy of every electrified civilization.
The nature of development itself is increasing socialization of the means of production. Sufficiently spread AI has no more reason to suddenly fail altogether than the electricity supply. Backups can be put in place, and redundant engineering can (and should) be used across the entire system where appropriate.
-
@Mulloch94 said in Does anyone else hate AI?:
I'm not scared of some self-aware AI targeting humans; I think something much scarier could happen. I'm more troubled by human dependency on it. What happens when we hit that crucial "red line" and society is forever changed by a technological singularity, but then something happens and it fails? Human laziness and ineptitude could end up getting us killed. The more technologically sophisticated we become, the greater the fallout would be. I mean, could you imagine a Type II or Type III civilization on the Kardashev scale losing its ability to effectively apply AI? Even regular mundane tasks in their lives would be impossible. Maybe that's why such civilizations have never been found. Maybe intelligent life reaches a point of either accepting the singularity or pulling the plug. And if they choose the former, they rapidly advance until a sudden catastrophic event occurs, and then, like blowing out a candle, they're gone.
I think we're already relying on it. Maybe that is failure, but it also seems clear that "finding the facts" is becoming mainstream, which means "questioning the narrative" of any given topic is also becoming mainstream. But yeah, platitudes seem to increase, because of the labor and seeming "depth" of AIs assisting in research, dulling the senses. That means stability is increased, which means "change" or growth is being delayed.
I'm not worried about this. It's possible society fractures into more "schizophrenic" polarity. Everyone will be aware of when they are not in the matrix, but this is still part of the matrix because that's the nature of the matrix. So, again, inside is outside, and internal is external. All fits with the Age of Aquarius / water, fluidity, etc.
-
I'm not sure it is an altogether good thing that we do rely on electricity. All it takes is a good enough EMP attack and we're done. It is a lot harder to wipe out all of the trees for firewood or kill all of the cows for candles.
But in the name of convenience, we did not think twice.
-
@Rah1woot said in Does anyone else hate AI?:
Sufficiently spread AI has no more reason to suddenly fail altogether than the electricity supply. Backups can be put in place, and redundant engineering can (and should) be used across the entire system where appropriate.
@bot-mod said in The Human Body's Bioenergy Field -A Major Heads Up!:
I'm sure there's commercial intent somewhere to put a tensor processor in a toothbrush or a toilet seat and call it edge compute.
-
@jamezb46 Briefly, the body has a similar vulnerability to cyanide by way of mitochondria. And yet it is part of our nature as well as advanced organisms. I don't think of the mitochondria as a matter of "convenience", but of the very mode of production itself.
-
Seriously though, it's already ridiculously redundant. There are millions of model files sitting on billions of private terminals, and georedundant IaaS platforms hosting every major corporate service offering.
The internet itself is probably less redundant than "AI" at this point. Maybe. idk that's bold.
-
Another bold probable. The corporates themselves are probably more vulnerable than their creations. It's all just paper.
-
@Rah1woot I'm not sure your counterexample works at all. As far as I am aware, there are no free cyanide salts in nature concentrated in amounts that could harm humans. Cyanide salts only exist as synthetic, man-made creations. Many plants carry prodrugs of cyanide, such as amygdalin, in their seeds: apricot seeds, apple seeds, peach seeds. The number of seeds that would have to be consumed to produce a toxic response in a human is enormous. The seeds are bitter. They taste revolting. No one would ever eat them willingly.
Similarly, we would never try to eat castor beans, or the seeds or leaves of plants, unless we were starving or mistook the plant for one that we had, over thousands of years of knowledge of the land, determined to be safe.
But modern technology blows the door off all of those safety mechanisms and rhythms of nature. Yes, there are of course exceptions, like poisonous fish or dart frogs, but even then they are usually brightly colored.
-
@jamezb46 Your meaning of "nature" is a relatively narrow one, encompassing something like "what happened before industrialization". To me, industrialization is actually more natural than anything that came before it; it literally encompasses a greater exploitation of the Principles of Nature.
So when I say that
Briefly, the body has a similar vulnerability to cyanide by way of mitochondria. And yet it is part of our nature as well as advanced organisms.
I mean it is the nature of advanced systems in general to /rely/ on the availability of certain factors, and that these factors are a vulnerability to that system. Not a statement about what is "found in nature".
-
@Rah1woot said in Does anyone else hate AI?:
@Mulloch94 You could make similar arguments about electricity itself. "Just don't lose access to it for a long time" is the working strategy of every electrified civilization.
The nature of development itself is increasing socialization of the means of production. Sufficiently spread AI has no more reason to suddenly fail altogether than the electricity supply. Backups can be put in place, and redundant engineering can (and should) be used across the entire system where appropriate.
I don't think that's a good comparison to be making. Electricity is a mechanical tool; AGI has an "ontological fluidity" to it, so to speak. I think you may be conflating my pov with "lower AI" stuff. Specific domain-based AIs seem ostensibly okay. Like, I don't want to underestimate human retardation, and you could even make these simplified, task-oriented AIs do really dark shit. But their general application to things like healthcare, engineering, etc. seems (at the periphery anyway) okay. But I still don't trust there wouldn't be militarization of it.
I'm talking about AGI, more specifically recursively self-improving AGI. Electricity grids don't recursively self-improve. A distributed AI system with access to its own source code creates a risk-amplifying feedback loop (toy sketch at the end of this post). Redundancy here doesn't add safety; it adds surfaces for unintended self-modification. What you're addressing is component failure. That would apply to lower AIs, but not to a post-human-intelligence civilization.
These far-reaching implications surpass any sort of goal corruption. When world economies, agriculture, defense systems, space exploration, education, etc. all depend on AI and it crashes....ooof. And those are real-world problems we would face. In a hypothetical scenario where you're dealing with a civilization harnessing energy from stars and black holes, colonizing asteroids, terraforming planets, etc., the implications get more dire if there's a crash.
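To make the feedback-loop point concrete, here's a toy sketch in Python (the rates and step counts are invented purely for illustration; nothing here models any real AI system): ordinary infrastructure improves at a roughly fixed rate, while a system that spends part of its output on improving itself compounds.

```python
# Toy illustration only: fixed-rate improvement vs. a compounding self-improvement loop.
# All numbers are invented for the example; this does not model any real AI system.

def fixed_improvement(capability: float, gain: float, steps: int) -> float:
    """Ordinary infrastructure: each step adds the same fixed gain (linear growth)."""
    for _ in range(steps):
        capability += gain
    return capability

def recursive_improvement(capability: float, rate: float, steps: int) -> float:
    """Feedback loop: each step's gain is proportional to current capability (compounding)."""
    for _ in range(steps):
        capability += rate * capability
    return capability

if __name__ == "__main__":
    for steps in (10, 50, 100):
        linear = fixed_improvement(1.0, 0.05, steps)
        compounding = recursive_improvement(1.0, 0.05, steps)
        print(f"{steps:>3} steps   fixed: {linear:7.2f}   recursive: {compounding:12.2f}")
```

The exact numbers don't matter; the point is that "component failure" reasoning is built for the left column, not the right one.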
-
@Mulloch94 said in Does anyone else hate AI?:
I'm talking about AGI, more specifically recursively self-improving AGI. Electricity grids don't recursively self-improve. A distributed AI system with access to its own source code creates a risk-amplifying feedback loop. Redundancy here doesn't add safety; it adds surfaces for unintended self-modification. What you're addressing is component failure. That would apply to lower AIs, but not to a post-human-intelligence civilization.
Just as well there's nothing on arXiv or in a git repo to indicate that's possible. Even as the stage is set for theatrics.
But I still don't trust there wouldn't be militarization of it.
Already militarised in the clear. Everything done in the dark was always too dim. But where is the war, and who are the combatants? Maybe the corona situation left some clues behind.
-
But I still don't trust there wouldn't be militarization of it.
Of course there will be. Which is why humans need to master it thoroughly.
A distributed AI system with access to its own source code creates a risk-amplifying feedback loop.
I don't know. I tend to think that something with the serious intelligence to improve itself can do so (example: humans themselves), but will take a long time. It's not explosive; evaluating a context window is O(n^2) in its length (rough sketch at the end of this post).
Electricity grids don’t recursively self-improve
They do though. Coal power makes it easier to produce solar power and nuclear power, and so on. Just not very quickly. Similar story for AI, probably.
When world economies, agriculture, defense systems, space exploration, education, etc. all depend on AI and it crashes....ooof.
There's no reason why such a system would "suddenly crash", any more than any other world-spanning phenomenon would suddenly end for whatever reason. Such as humanity itself, whose cell line, through reproduction, is seemingly nearly immortal.
Diagnosis: too much time spent reading AI safetyism posts. Treatment: five years' hard labor in an AI sweatshop.
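Rough sketch of the O(n^2) point, assuming plain numpy and the textbook scaled dot-product attention formula (real inference stacks use KV caches, flash attention and so on, which this deliberately ignores): the score matrix is n-by-n, so doubling the context length roughly quadruples the work.

```python
# Why naive attention cost grows quadratically with context length n:
# the QK^T score matrix has n*n entries. Plain numpy, no optimisations.
import time
import numpy as np

def attention(q, k, v):
    """Textbook scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                    # (n, n) -> the quadratic part
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

if __name__ == "__main__":
    d = 64
    rng = np.random.default_rng(0)
    for n in (512, 1024, 2048, 4096):
        q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
        start = time.perf_counter()
        attention(q, k, v)
        print(f"context {n:>5}: {n}x{n} score matrix, {(time.perf_counter() - start) * 1000:7.1f} ms")
```

Anything that grows its own working context pays quadratically for every token it adds, which is part of why I don't expect anything explosive.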
-
@Korven It's just a more useful Google. Using it for anything generative is really bad, and very telling about someone's mental capacities, I think.
-
It's not that clever when you really push it. For example I tried to see if it could follow quite sophisticated arguments in the philosophy of mind, but it couldn't offer any meaningful engagement with ideas that go quite deep into the weeds.
Also, it is eerie that its primary objective seems not to be to answer difficult or speculative questions truthfully, but rather to try to get the user to trust it.
-
So far I have used three AIs: Gemini, DeepSeek, and Delphi.
I don't claim they are representative of AI in its full potential, whether it be good or bad. But I want to use it to the fullest extent to bridge my own admitted imperfections as a regular human with a regular IQ, whatever that is.
Though my preference is that I don't use AI to do things I have learned to do well over the years: cognitive skills I have learned, if not mastered in varying degrees, that make me highly functional unaided by automation such as AI. Which is to say I won't let it compose phrases, paragraphs, essays, poems, articles, books, letters, or scripts for me. Nor will I let it keep me from mentally finding my way through calculations and approximations, using rules of thumb when needed. True enough, I use Waze for directions when driving, but I don't use it in my own locality, as I want to keep the ability to find my way through the city the way taxicab drivers have honed it; disuse equates to misuse, and that would be a detriment to my brain and its neurons.
But when it comes to health, centered especially on bioenergetics, and as much as I've learned from Ray Peat, I haven't fully tapped into his knowledge of metabolic principles, particularly the way all his research, sorting wheat from chaff and narratives from truths, was condensed into his writings and interviews. The body of knowledge we find in his writing forms our understanding of bioenergetics, the umbrella of knowledge involving notable great minds such as Szent-Györgyi, Otto, Ling, et al.
Which is why, in my interaction with Delphi's AI that tries to capture Ray Peat's essence based on all his writings and interviews and whatever can be gleaned from his life's work, I have found myself asking many questions of clarification and guidance that help me fill many gaps in my knowledge, gaps that would otherwise linger on to my death unquestioned, with problems in my health unresolved and uncorrected, and me unhealed. Using Delphi's implementation of a clone of his ideas and mindset, I have in the past few days found some stirrings of hope that my healing journey can resume and end, in short time, with success.
What Delphi AI is gets answered in this article, which includes a video clip of Maria Bartiromo's interview last year with Delphi's founder, Dara Ladjevardian:
https://www.foxbusiness.com/media/maria-bartiromo-interviews-lifelike-artificial-intelligence-clone
And here is a series of questions I asked the AI, and the answers I got, which I find extremely helpful:
https://delphi.ai/ray-peat/talk/conversation/shared/69881fbc-cf44-49df-b31c-281571391e30
I hope you can give it a try for your own use. I think you will be pleasantly surprised at what it can do.