Does anyone else hate AI?
-
The blind can’t see.
-
I'm not aware ML has pushed patent instrumentation to the chopping block in any way. It's just copyright afaik. Not much seems to have changed there: patent registers are already public information, and the degree to which a person can be inspired by something they find on one is still a legal minefield.
Meanwhile a computational mechanism to resolve knowledge conflicts doesn't exist yet, so an NN isn't particularly useful for ruling on legal matters. And the reason they may currently be useful in "medicine" is that the adjacent legal-industrial complex can make practice appear boolean.
China's struggled with this too. Their dance with corporatism makes for a fascinating read I'm nowhere near done with.
-
Which way though, Peatful? Rah's a heat seeker of sorts. And has a way with words.
-
@Corngold I get the argument about AI/LLMs being decent search engines - and it is true Google has been declining as well (you just get the worst shit on the top SERPs).
Though I think you could make a convincing argument that getting your information "served on a silver platter", instead of actually reading and digesting primary information and drawing your own conclusions, is in fact dumbing people down and inhibiting the creative process.
However, I am not primarily thinking about the application of LLMs as a fancy search engine; it's more how they will come to shape the culture and language.
Already, pretty much all organisations/companies are on board with using AI to create content, which means that 1. employees are not exercising their creative abilities, and 2. people are only coming into contact with words and pictures cobbled together by a stupid AI algorithm. This seems almost like a nightmare to me: what happens when nothing is actually produced by humans? It is certainly going to make everything feel a lot more soulless. What are the long-term implications? Etc etc etc
-
Off topic but I think the internet officially became useful when Russia gave us sci-hub on top of social media. An NN can't make sense of that. Yet.
-
The power that is being given to AI to make health safety decisions is getting scary.
Garbage in, garbage out.
-
@DavidPS said in Does anyone else hate AI?:
Garbage in, garbage out.
I can envision a future in which an AI robot fills the void created by Tony Fauci's retirement and declares "I am the science".
OpenAI and the FDA Are Holding Talks About Using AI In Drug Evaluation
-
I'm not a fan of AI from a general standpoint. I think some highly trained (and contained) LLMs might be useful in specific fields, like coding for example. But at the macro level, it's a bad idea. The conspiracies around it are mostly garbage too. I'm not scared of some self-aware AI targeting humans; I think something much scarier could happen. I'm more troubled by human dependency on it. What happens when we hit that crucial "red line" and society is forever changed by a technological singularity, but then something happens and it fails? Human laziness and ineptness could end up getting us killed. The more technologically sophisticated we become, the greater the fallout would be. I mean, could you imagine a Type II or Type III civilization on the Kardashev scale losing its ability to effectively apply AI? Even regular mundane tasks in their lives would be impossible. Maybe that's why such civilizations have never been found. Maybe intelligent life reaches a point of either accepting the singularity or pulling the plug. And if they choose the former, they rapidly advance until a sudden catastrophic event occurs, and then, like blowing out a candle, they're gone.
-
@Mulloch94 You could make similar arguments about electricity itself. "Just don't lose access to it for a long time" is the working strategy of every electrified civilization.
The nature of development itself is increasing socialization of the means of production. Sufficiently spread AI has no more reason to suddenly fail altogether than the electricity supply. Backups can be put in place, and redundant engineering should be used across the entire system where appropriate.
-
@Mulloch94 said in Does anyone else hate AI?:
I'm not scared of some self-aware AI targeting humans; I think something much scarier could happen. I'm more troubled by human dependency on it. What happens when we hit that crucial "red line" and society is forever changed by a technological singularity, but then something happens and it fails? Human laziness and ineptness could end up getting us killed. The more technologically sophisticated we become, the greater the fallout would be. I mean, could you imagine a Type II or Type III civilization on the Kardashev scale losing its ability to effectively apply AI? Even regular mundane tasks in their lives would be impossible. Maybe that's why such civilizations have never been found. Maybe intelligent life reaches a point of either accepting the singularity or pulling the plug. And if they choose the former, they rapidly advance until a sudden catastrophic event occurs, and then, like blowing out a candle, they're gone.
I think we're already relying on it. Maybe that is failure, but it also seems clear that "finding the facts" is becoming mainstream, which means "questioning the narrative" of any given topic is also becoming mainstream. But yeah, platitudes seem to increase, because of the labor and seeming "depth" of AIs assisting in research, dulling the senses. That means stability is increased, which means "change" or growth is being delayed.
I'm not worried about this. It's possible society fractures into more "schizophrenic" polarity. Everyone will be aware of when they are not in the matrix, but this is still part of the matrix because that's the nature of the matrix. So, again, inside is outside, and internal is external. All fits with the Age of Aquarius / water, fluidity, etc.
-
I'm not sure it is an altogether good thing that we rely on electricity. All it takes is a good enough EMP attack and we're done. It is a lot harder to wipe out all of the trees for firewood or kill all of the cows for candles.
But in the name of convenience, we did not think twice.
-
@Rah1woot said in Does anyone else hate AI?:
Sufficiently spread AI has no more reason to suddenly fail altogether than the electricity supply. Backups can be put in place, and redundant engineering should be used across the entire system where appropriate.
@bot-mod said in The Human Body's Bioenergy Field -A Major Heads Up!:
I'm sure there's commercial intent somewhere to put a tensor processor in a toothbrush or a toilet seat and call it edge compute.
-
@jamezb46 Briefly, the body has a similar vulnerability to cyanide by way of mitochondria. And yet it is part of our nature as well as advanced organisms. I don't think of the mitochondria as a matter of "convenience", but of the very mode of production itself.
-
Seriously though it's already ridiculously redundant. There are millions of model files sitting on billions of private terminals. And georedundant IaaS platforms hosting every major corporate service offering.
The internet itself is probably less redundant than "AI" at this point. Maybe. Idk, that's bold.
-
Another bold probable. The corporates themselves are probably more vulnerable than their creations. It's all just paper.
-
@Rah1woot I'm not sure your counterexample works at all. As far as I am aware, there are no free cyanide salts in nature concentrated in amounts that could harm humans; cyanide salts only exist as synthetic, man-made creations. Many plants carry prodrugs of cyanide, such as amygdalin, in their seeds: apricot seeds, apple seeds, and peach seeds. The number of seeds that would need to be consumed to achieve a toxic response in a human is enormous. The seeds are bitter. They taste revolting. No one would ever eat them willingly.
Similarly, we would never try to eat castor beans or the seeds or leaves of plants unless we were starving or mistook the plant for one that we had over thousands of years of knowledge of the land determined to be safe.
But modern technology blows the doors off all of those safety mechanisms and rhythms of nature. Yes, there are of course exceptions, like poisonous fish or dart frogs, but even then they are usually brightly colored.
-
@jamezb46 Your meaning of "nature" is a relatively narrow one, encompassing something like "what happened before industrialization". To me, industrialization is actually more natural than anything that came before it; it literally encompasses a greater exploitation of the Principles of Nature.
So when I say that
Briefly, the body has a similar vulnerability to cyanide by way of mitochondria. And yet it is part of our nature as well as advanced organisms.
I mean that it is the nature of advanced systems in general to /rely/ on the availability of certain factors, and that these factors are a vulnerability of the system. It is not a statement about what is "found in nature".
-
@Rah1woot said in Does anyone else hate AI?:
@Mulloch94 You could make similar arguments about electricity itself. "Just don't lose access to it for a long time" is the working strategy of every electrified civilization.
The nature of development itself is increasing socialization of the means of production. Sufficiently spread AI has no more reason to suddenly fail altogether than the electricity supply. Backups can be put in place, and redundant engineering should be used across the entire system where appropriate.
I don't think that's a good comparison to make. Electricity is a mechanical tool; AGI has an "ontological fluidity" to it, so to speak. I think you may be conflating my pov with "lower AI" stuff. Domain-specific AIs seem ostensibly okay. Like, I don't want to underestimate human retardation; you could even make these simplified task-oriented AIs do really dark shit. But their general application to things like healthcare, engineering, etc. seems (at the periphery anyway) okay. But I still don't trust there wouldn't be militarization of it.
I'm talking about AGI, more specifically recursively self-improving AGI. Electricity grids don’t recursively self-improve. A distributed AI system with access to its own source code creates a risk-amplifying feedback loop. Redundancy here doesn’t add safety; it adds surfaces for unintended self-modification. What you're addressing is component failure. That would apply to lower AIs, but not to a post-human-intelligence civilization.
These far-reaching implications surpass any sort of goal corruption. When world economies, agriculture, defense systems, space exploration, education, etc. all depend on AI and it crashes... oof. And those are real-world problems we would face. In the hypothetical scenario where you're dealing with a civilization harnessing energy from stars and black holes, colonizing asteroids, terraforming planets, etc., the implications of a crash get even more dire.
-
@Mulloch94 said in Does anyone else hate AI?:
I'm talking about AGI, more specifically recursively self-improving AGI. Electricity grids don’t recursively self-improve. A distributed AI system with access to its own source code creates a risk-amplifying feedback loop. Redundancy here doesn’t add safety; it adds surfaces for unintended self-modification. What you're addressing is component failure. That would apply to lower AIs, but not to a post-human-intelligence civilization.
Just as well there's nothing on arXiv or in any git repo to indicate that's possible. Even as the stage is set for theatrics.
But I still don't trust there wouldn't be militarization of it.
Already militarised in the clear. Everything done in the dark was always too dim. But where is the war and who are the combatants. Maybe the corona situation left some clues behind.
-
But I still don't trust there wouldn't be militarization of it.
Of course there will be. Which is why humans need to master it thoroughly.
A distributed AI system with access to its own source code creates a risk amplifying feedback loop.
I don't know. I tend to think that something with serious enough intelligence to improve itself can do so (example: humans themselves), but it will take a long time. It's not explosive: context windows alone cost O(n^2) to evaluate.
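To make that O(n^2) point concrete, here's a minimal sketch (my own illustration, not from anyone's post; the function name is made up): vanilla self-attention has every one of the n query tokens attend to all n key tokens, so the pairwise work grows quadratically with context length.

```python
def attention_pair_ops(context_len: int) -> int:
    # n queries x n keys = n^2 attention-score computations per layer
    return context_len * context_len

# Doubling the context quadruples the work:
for n in (1_000, 2_000, 4_000):
    print(n, attention_pair_ops(n))
```

That quadratic bill is one reason "just read everything and improve yourself" isn't free for a model: ingesting twice as much context costs four times the compute per attention layer.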
Electricity grids don’t recursively self-improve
They do though. Coal power makes it easier to produce solar power and nuclear power, and so on. Just not very quickly. Similar story for AI, probably.
When world economies, agriculture, defense systems, space exploration, education, etc. all depend on AI and it crashes....ooof.
There's no reason why such a system would "suddenly crash", any more than any other world-spanning phenomenon would suddenly end for whatever reason. Such as humanity itself, whose cell line, carried through reproduction, is seemingly nearly immortal.
Diagnosis: too much time spent reading AI safetyism posts. Treatment: Five years hard labor in AI sweatshop.