
/lit/ - Literature



File: 894 KB, 1000x1500, nodal_grl.jpg
No.15195486

I love literary studies and philosophy, but philosophy academia is - let's be honest - just fake.
My philosophy major friends would send me "interesting" news articles about AI, and those fuckers in their field write about the future, machines, artificial intelligence, and so on - but they also refuse to actually learn it. They'd never want to invest the three months of Python and some statistics it takes to get to the point where you can write your own machine learning routines and understand what it is and what it can and can't do. Actually write a program that recognizes digits, or maybe faces if you allow yourself to import some libraries. Get a feel for how things work.
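
Concretely, the digit thing is the kind of exercise I mean - a minimal sketch, assuming scikit-learn and its bundled 8x8 digits dataset (the "allow yourself to import some libraries" version):

# Minimal digit recognition - the "import some libraries" version.
# Assumes scikit-learn is installed; uses its bundled 8x8 digit images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # ~1800 grayscale 8x8 digit images with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)  # "learning" = fitting weights to the training set
print("test accuracy:", clf.score(X_test, y_test))  # typically around 0.95+

A weekend of this and you at least know what the words mean.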

The joke is that this is justified: of course, if they came to understand how things work (AI or whatever), then they wouldn't be mystified by it and it would be harder to dream up shit about things bordering the scientific realm. They'd not be able to put out papers that the other theoreticians (dreamers) would read, and their careers would be over.
The fucking industry bordering the scientific field hinges upon the people writing the philosophy papers not understanding that shit. If someone goes on and learns about it - machine learning, say - then they'd catch the mind virus and end up spouting arguments that the philosophers don't want to deal with. The "boring," "narrow-minded," naive realist perspective. I understand the rejection of that, but investing a few months to learn the subject from a "harder" perspective than your peers would be a nice change of events.

>> No.15195500

didnt read

>> No.15195534

Machine learning fags are the worst. They think they're onto some hot new metaphysical revolution but they have no philosophical insight into it at all. They just assume that they're doing "something," because there's a manufactured buzz about machine learning nonsense, and therefore it's exciting and new.

The more people are actually immersed in CS, the worse they are with it, the more they make really embarrassing mistakes like hypostatizing things like hypothetical Turing-passable machines or recursively algorithmic "black boxes."

>Of course, if they came to understand how things work (AI or whatever), then you wouldn't be mystified by it
We understand how it works. The answer to that one is pretty simple: it doesn't. What you deal with is not AI; it's in-principle mechanical routines becoming (epistemic) black boxes, i.e. without changing their ontology whatsoever, and then doing enough things and "evolving" for enough generations that you can select the ones that "do" some thing. And that thing is determined by YOU (the actual decisional agent): you have to actually know and be conscious of what the "thing" is, what "doing it" consists in, and what its "results" consist in, before you can articulate it (e.g. write it as a program, as the first wave of AI morons did, because they took cognition to be nested sets of function-calling programs).

You take this bundle of functions and subroutines that you've selected from among potentially billions, you determine it "does" the "thing" "correctly" by looking at the "results" (all quoted things are contained in YOUR mind, not in the machine), and go "this one is the 'smart' program, this one is the program that 'knows how to' 'do' the 'thing'."
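
A toy sketch of that selection story, to make it concrete (hypothetical names; random search standing in for training, since the logic is the same):

# Generate candidate functions at random and keep the one that "does the
# thing" - where "the thing" and "correctly" exist only in the human-written
# scoring code, not in any candidate.
import random

def random_candidate():
    # a random affine function; stand-in for a randomly initialized model
    a, b = random.uniform(-5, 5), random.uniform(-5, 5)
    return lambda x: a * x + b

# The human decides what "doing it" means: here, doubling a number.
tests = [(x, 2 * x) for x in range(10)]

def score(f):
    return -sum((f(x) - want) ** 2 for x, want in tests)

best = max((random_candidate() for _ in range(100_000)), key=score)
print(best(21))  # roughly 42: the "smart" candidate, as judged by our score()

Nothing in the selected function knows it is doubling numbers; the criterion lives entirely in the scoring code.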

>The fucking industry bordering the scientific field hinges upon the people writing the philosophy papers not understanding that shit.
The people who understand it immediately understand it's bullshit and lose interest in it. The field selects for people too stupid to realise it's not a real field.

>> No.15195545

>>15195534
>You take this bundle of functions and subroutines that you've selected from among potentially billions, you determine it "does" the "thing" "correctly" by looking at the "results" (all quoted things are contained in YOUR mind, not in the machine), and go "this one is the 'smart' program, this one is the program that 'knows how to' 'do' the 'thing'."
What makes this different from a mind?

>> No.15195575

>>15195545
This is what I mean by hypostatizing Turing-passable machines. Yes, it's a reasonably interesting conceptual move to go:
>What is cognition?
>Don't know. I can see its fruits but not its roots.
>What if I made a machine that produced the "same" fruits? Or just any kind of fruits - what separates a fruit-producing mind from an apparatus that produces fruits?
>I've made machines that can do all sorts of crazy things, produce all sorts of fruits. They almost seem autonomous sometimes, although I know they really aren't.
>But I wonder, what if the "autonomous" mind is just a big bundle of semi-autonomous subroutines, mutually interacting?
>That would square with materialist neo-Darwinism. Presumably the mind had to evolve from somewhere, right? How do we go from dead matter to "living," thinking, sensing and perceiving meta-machines?
>Maybe if you just put enough mechanisms together, recursively interrelating, it eventually clicks into whatever a mind is.

But you're supposed to think of this as a possibility and then shrug and say that it's not sufficient proof of anything. Likewise, the more you dig into the problem of mind and its history, the more things should stand out to you that immediately problematize this account. But most of all, there is a huge fucking gap between this half-baked speculation and presuming that programs that "correctly" identify elephants hiding in foliage are "thinking." Making a better elephant-finder is not making you better at making a mind, at least not necessarily.

>> No.15195592

>>15195575
It's reasonable though to assert that one function (a fairly important one) of the brain is the sort of pattern recognition those programs are doing. So they are mimicking a faculty of mind; it is an impressive accomplishment.

>> No.15195598

>>15195486
not literature

>> No.15195648

>>15195592
To an extent, yes, but unless the speculation I just mentioned is correct (and we have good reasons to think it isn't; if people were more self-aware that this assumption underlies AI, far fewer people would work in AI because of schisms over metaphysics), they are ontologically distinct, and the best thing you can say about a program is that it's an ingenious approximation, an isomorphism, of some human task.

In terms of logical content, it is exactly analogous to saying "the light knows to turn on when I press the switch" - analogous because both are in-principle, that is ontologically, mechanical systems that do some "thing."

What is a thing, and what does it mean to do a thing? What is the difference between doing a thing and knowing THAT you're doing a thing - is there a difference between a baby accidentally perfectly mouthing the word "prestidigitation," an adult using the word deliberately with a high degree of self-awareness about what it means, and another adult saying it but misunderstanding its meaning and misusing it? Are these all doing the same "thing?" Sure, in logically the same way as the light switch "does the thing," that is to say, on a very superficial account that doesn't take into account epistemic/phenomenological distinctions between "doing" and "knowing that one is doing," between an act and the meaning of an act, etc.

This is what happens when machine learning dudes say things like "pattern recognition." The machine "recognises patterns" in a vague, heuristic sense that you are free to use in contexts where it's safe or pleasurable for you to do so, just as you can say your light switch "knows to make it dark" when you touch it. But if we're talking about metaphysics, obviously both accounts are badly wrong, or at least would require a whole lot of discussion that's not being had. And I would argue that if you want to advance AI understanding, you are going to have to get the majority of AI researchers to the point that they understand just how similar the two situations are, and just how badly they are sinning against ontological self-consciousness when they say the machine "recognises patterns."

This low level of self-consciousness has negative effects on perfectly empirical, everyday levels. AI research is riven with problems of "why doesn't this work???? wtf am i doing wrong?????" that they try to fix by piling on yet more algorithmic recursivity, when it's not the quantity of algorithms but their quality (i.e. fundamentally mechanical and non-cognitive) that's the problem.

There was a recent article in some handbook on phenomenology discussing the extent to which the AI community has assimilated the phenomenological critique it claimed to begin taking seriously in the '90s/'00s, and the answer was: no, absolutely not. They just interpreted it as "think procedurally, but still fundamentally mechanically" when the critique was "stop thinking mechanically!"

>> No.15195659

>>15195648
Completely based, let's start adding phenomenology to the argument. How can we determine if an AI has the intentionality of consciousness that's fundamental to being human? Not to mention that part of human learning is the brain physically rewiring neural pathways; no AI is doing that.

>> No.15195707

>>15195648
I see two main things cropping up in what you're saying: one is consciousness, the other I guess we could call agency. If you build a robot that walks around with some basic goal, something that has been done (e.g. those MIT robots), then you have an entity whose agency sort of resembles a basic lifeform. It was programmed, but so are organisms; it can autonomously move about and do stuff, react to its environment. A very basic set of goals and abilities, sure, but it doesn't seem qualitatively different to me. This is far, far from the complexity of a human brain and everything going on in it, obviously. The machine learning program that is sorting trees and cars is not doing anything like that, but such a program could be incorporated into a robot that did have agency.

As for consciousness, I don't know.

>> No.15195710

>>15195648
I want to write a short story regarding the self and this post helped me lots. Thank you

>> No.15195734

>>15195648
I think the only mistake in your argument is that you seem to believe most AI researchers actually care about the kind of conceptual considerations you raise here. In my experience, this is patently false. Modern computer science, and, more damningly, modern computer science education, have an "artifact first" conception of, well, everything really.
Even more pertinent criticisms of modern AI (such as 'many models held up as the state of the art seem to derive much of the "meaning" they find in images from elements that clearly can't have any kind of actual relevance') are only a problem insofar as they cast doubt on the viability of AI products. Were you to actually engage with the literature, you would notice it consists of the most inane things: approaches "that work" (in a very poorly defined sense) get published without any kind of critical thought being put into them, and most of it is warmed-over versions of other things, with very little worthwhile, actually scientific work.
The few people who actually care about the kind of stuff you're talking about are, as far as I know, split into several camps and trying all sorts of different stuff. But they are the smallest part of the field.

>> No.15196007

>>15195648
Great summarization anon.

>> No.15196246

>>15195486
>>15195534
>technology can’t just exponentially develop in the span of decades, what are you talking about?
>there’s no use speculating about something if you can’t currently prove it in a lab
Your attitude actually hinders both genuine scientific research and philosophical inquiry. Academia does suck, I’ll give you that.

>> No.15196304

>>15196246
>Hey what if we just used more gigabytes of data, even more graphics cards and even more network layers?
Is not scientific enquiry

>> No.15196376

>>15196304
If you can get incredible results just by adding gigabytes, then you should continue even without inquiry.

>> No.15196406

>>15195486
Sorry, loser. I don't need to learn python and other tranny shit to know that AI fears are a bunch of bullshit.

>> No.15196466

>>15196304
Please consider educating yourself on this matter before you spout such shit.
A lot of technological developments were necessary to train better AI models PRECISELY because "just throwing more data and more GPUs" was NOT working. That's how the entire field of Deep Learning even started existing.
I swear to god some sort of basic multi-area knowledge test should be necessary to post on this fucking site.

Also, to all the fucking geniuses here saying machine learning and deep learning are just some stupid fad and statistics did this decades ago: explain to me why every single gigantic company in the world right now is making literal billions by employing DL models, while statisticians are sucking their thumbs and seething like there is no tomorrow because no one uses their never-cited, never-applied models?

>> No.15196489

>>15195648
>The light switch doesn't know things.
>Humans do.
Here's a fun challenge for you OK?
Pick a number from 1 to 10. Type it down.
Now explain to me how exactly the choice happened in your brain, to the deepest detail you can conjure scientifically.

Good.
Fucking.
Luck.
Mr lightswitch.

>> No.15196498

>>15196489
10, because it's the last number you wrote and right before my eyes :/ (I'm not him btw).

>> No.15196500

>>15195534
you're making some really narrow-minded assumptions about what constitutes ontological change

>> No.15196505

>>15196498
Go deeper. Explain your algorithm. What made your brain "pick the last number it saw"?
That guy from the other post claims you can do this while the stupid machine learning dumb dumbs can't explain their black boxes.

>> No.15196523

>>15196466
Lmao, the fundamentals of your big innovation were already figured out in the 80s. The main revolution was in data gathering and computing power. Some implementation challenges were solved, and that resulted in some interesting models (residual networks, some of the recurrent models used in translation, etc.). This is all very useful, no doubt. But it's just a rehash of things that were already conceptualized in the 80s.
It is just the same but bigger indeed.
But let's pretend this lego brick stacking is anything but that.

>> No.15196546

>>15196523
You literally cannot build a neural network with 300 layers that can learn anything from 1 terabyte of data using 80's technology, even if I give you the largest supercomputer on this planet.
Ergo, you are wrong.
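
One standard illustration of the wall (vanishing signals/gradients, the classic deep-stack failure that residual connections were later introduced to mitigate) - a sketch, not a proof, assuming numpy:

# Signals shrink multiplicatively through a deep plain tanh stack, but
# survive when each layer only *adds* a correction (a residual connection).
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)

def deep_pass(x, layers, residual):
    h = x
    for _ in range(layers):
        W = rng.standard_normal((64, 64)) * 0.05  # small random weights
        update = np.tanh(W @ h)
        h = h + update if residual else update    # residual = skip connection
    return h

print(np.linalg.norm(deep_pass(x, 300, residual=False)))  # collapses to 0.0
print(np.linalg.norm(deep_pass(x, 300, residual=True)))   # stays around the input's scale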

>> No.15196558

>>15196546
by "80's technology" I mean what AI researchers knew about AI in the 80s.

>> No.15196573

>>15195534
>they have no philosophical insight
When has this stopped anyone in the past 200 years or so?

>> No.15196645

>>15195648
Very good verbalisation of some thoughts I have had towards AI. One must imagine monkeytypewriticus happy

>> No.15197003

>>15195534
>We understand how it works.
You clearly don’t, because your phrasing makes it plainly obvious you have a layman’s knowledge of the psychological operations/inputs/semantics behind theory of mind.

>> No.15197713

bump for interesting thread

>> No.15197977

>>15195486
Throughout undergrad as a math major I'd take philosophy electives for fun. The number of times someone would bring up or argue about mathematics, or STEM generally, while knowing fuck all about it beyond a high-school level, was baffling.

It's a problem with the arts as well. The company (AI / ML focused) that I work at had a tour of a tech-focused exhibit at MoMA last year, and the engineers could not stop cracking jokes about the boneheaded comments the artists and guide were making about AI.

>> No.15198231

If you want to build a new intelligent system just have sex.

>> No.15198244
File: 62 KB, 900x550, chadgreypill.png

>>15195486
philosophy is a meme

>> No.15198277

>>15195534
>>15195648

This totally misses the point of the Turing test. Turing didn't come up with it as some important theoretical concept in computer science; he came up with it as a reductio ad absurdum against these types of arguments against the potential for computers to think. The only way we have of knowing that other people are conscious is through conversation, so if a computer can pass for human during a conversation and yet we still claim it is not conscious, why don't we claim the same thing about other people? I know that I have intentionality, but I only have your conversation and behavior as evidence that you do. So if a computer can fool me there, why would I say it wasn't conscious but you are?

>> No.15198286
File: 3.03 MB, 960x720, 1579330024729.png

>>15198277
I don't know that other people have intentionality. Frankly I know a lot of people who would not pass the turing test. If our conversations were text only I might well think I was dealing with a bot and sneedpost at them.

>> No.15198289

>>15198244
Holy based

>> No.15198305

>>15195486
Wow, you have absolutely no idea what you're talking about.

>> No.15198316

>>15198286
You don't understand. What would contradict Turing's argument would be denying that another person had intentionality after they passed the Turing test.

>> No.15198339

>>15198316
I'm not contradicting Turing's argument as you put it. I just don't think that "other people are thoughtless/lack intentionality" is actually a good reductio ad absurdum, because plenty of people seem to be that way. ¯\_(ツ)_/¯

>> No.15198361

>>15198339
The argument is that the people I'm having an intelligent conversation with are conscious, not that everyone is conscious. Clearly, if you're in a coma I can't have a conversation with you, and incidentally I also don't think you're conscious. The absurd conclusion Turing is pointing at is having an intelligent conversation with someone and at the same time claiming they are not conscious. Do you think I'm conscious from this conversation? If so, you've validated his argument.

>> No.15198403

>>15196505
I personally think knowing why you picked a number from one to ten is different from knowing why you are doing a certain task and how that task is done.
Whatever though, not that you’d know the difference.

>> No.15199059

OP here.
This wasn't supposed to be a thread on AI, but nice discussion anyway.
I also wasn't merely giving the example of philosophers of consciousness and general intelligence not wanting to learn machine learning, but also pointing at them discussing this machine learning as AI.

>> No.15199069

>>15199059
Most of the ones above haven't even taken a calculus class. You can't expect them to acknowledge a flaw they are guilty of themselves.

>> No.15199170

>>15198286
Only a computer can pass or fail a Turing test. The human interlocutor is the standard for comparison, not the subject of the test.

>> No.15199180

>>15199170
This misses the point just like that anon did. The Turing test is meant to apply to people as well. I say you are conscious because we can have an intelligent conversation; it's not like I can access your awareness. If I can have the same conversation with a machine, as in the Turing test, why would I say it was not conscious but you are?

>> No.15199217

>>15199180
No it isn't, and it can't be. There is no condition under which a human could "pass" or "fail" a Turing test, and if you think there is, then you don't understand the incredibly simple test itself. In order to create an augmented version for testing a human, you would have to invoke some sort of alternative baseline to test them against.

>> No.15199222

>>15195545
>

>> No.15199243

>>15199217
It was never meant to be a real thing; it's just a goal now because it gets media attention and funding. Turing came up with it as a reductio ad absurdum against people claiming machines lacked the potential to think, by showing that the same criteria applied to other people would cast doubt on their consciousness.

>> No.15199250

>>15199217
People get accused of being bots all the time in multiplayer games

>> No.15199309

>>15199243
No, he proposed concrete ways to follow up on his thesis. He was a sperg to the last.
>>15199250
Misunderstanding of the test. It endeavors to determine if computers can successfully emulate human behavior and actions. You could change the question to be whether or not a human can demonstrate that they are such, but that would be a different test and one that Turing, to the best of our knowledge, never imagined.

>> No.15199334

>>15199309
Have you read the paper? A good chunk of it is devoted to meeting arguments that computers can't be conscious. It's not a proposal for any kind of real experiment; it's more of a polemic in favor of AI. All the rooms and teletype were just to stop people from thinking about looking at a giant machine.

>> No.15199365

>>15199250
High levels of competition in games usually means that the top guys are very likely to cheat. Humans will do what they can to win, so of course cheating is a natural thing.

Not cheating is also natural: if you can agree with the group to all play by certain rules, there will be a kind of fairness to it. If you cheat and your tribe doesn't like it, you may lose the advantage of having your tribe's favor. Therefore cheaters may not cheat if it is not an advantage for them to cheat.

But the outliers will tend to find a niche outside the normal hierarchy. You see this in criminal activity: things like drug cartels selling drugs while they are illegal are very useful to that so-called cheater, since it cuts down on competition. The problem is that cheaters might become a huge threat to established society, since in general they will tend not to abide by its rules. That's why cartels also turn to all kinds of even more horrendous things, like sex slavery and murdering this or that person.

With cheaters in MMOs, things like massive amounts of botting can muck up the economy. Everyone will feel forced to bot to compete. Depending on the structure of the game, the cheaters might gain control and rule over other people; to some degree they effectively control the experience-earned rate of players. They can even force people to pay real-life money in order to play on such a server.

>> No.15199370

>>15199365
I don't understand what you're saying here. I just meant it as an example of a sort of reverse Turing test.

>> No.15199375

Good thread

>> No.15199382
File: 75 KB, 1500x712, activation.jpg

A neural network is a bunch of nodes, often called neurons, that you can monitor in real time, all of them, knowing exactly their activation values as well as the synaptic weights between them.
Even then, that doesn't mean you know what the neural network is doing. The numbers don't contain any meaningful information for you. You just know that if you pass an input through those numbers, some meaningful output comes out; but the behavior within the machine, even if you can monitor every single neuron, is meaningless to us as of yet.
In a very metaphorical sense, this means "we don't know what the AI is 'thinking'"

Now take a human being. We can't even monitor all of the neurons in real time. We don't even know what constitutes all the possible inputs to a neuron, or all the possible outputs. But imagine we did. Imagine we had a perfect real time map of someone's brain. All the inbound/outbound chemical transmissions. All the electric signals. We would still have the same problem we have with AI. Just knowing all of these inputs and outputs and internal states at any given time does not mean you know what is happening inside.

It's just meaningless numbers. The human brain is as much of a black box as the neural network, even if we ever get to map it perfectly. No one is really gonna give a fuck about me saying this, but some philososhitter is gonna write it up and make a million dollars from some best seller when he gets proved right.
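
The point in miniature (a sketch assuming numpy; the weights are random stand-ins for a trained net):

# A tiny two-layer network whose every weight and activation we can print.
# Total observability, zero interpretability: the numbers alone don't say
# what, if anything, they "mean."
import numpy as np

rng = np.random.default_rng(42)
W1, W2 = rng.standard_normal((4, 8)), rng.standard_normal((8, 3))

x = rng.standard_normal(4)       # some input
h = np.tanh(x @ W1)              # hidden activations: fully observable
y = h @ W2                       # output: fully observable

print("hidden activations:", h)  # eight exact numbers...
print("output:", y)              # ...and three more. Now, what is it "thinking"?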

>> No.15199388

>>15199382
This is literally the mainstream view in most of neuroscience, computer science, and philosophy. It's not new.

>> No.15199395

>>15199388
Tell that to the "I've mapped the part of the brain responsible for pleasure!" crew. I feel like we only ever see that in media and neuro-related popsci.

>> No.15199410

>>15199395
You can believe both things. They can't read people's thoughts, but they can pick up large-scale structures lighting up. It's just a tested correlation between stimulus and response on an EEG. They stick needles into rats' brains and watch them starve to death in favor of stimulating the pleasure center.

>> No.15199423

>>15199395
Additionally, making sense of neural nets is one of the major fields of research. If we could abstract out whatever algorithm a trained one uses, that would be really useful. But we can't do it yet, and we don't know how current trained ones work.

>> No.15199428

>>15195486
Complete and utter projection. All your friends are made up and you have not spent a single day in any philosophy department.

>> No.15199429

>>15199370
Sorry, I kind of sperged out. But that got me thinking. If consciousness is what humans are, then I have demonstrated a threat of AI, if we intend to make an AI like we are. Consciousness might be said to be a form of autonomy. Can we give AI autonomy and not expect it to deviate in ways we may not like? Humans are sophisticated, and AI would be expected to be sophisticated as well. Complexity leads to unknowns.

Thinking about the future does not mean we have to limit ourselves to the limitations of the present. Philosophy would apply what it knows to future things, and it would still gain something in thinking of them, since it has to consider things that are and things that may be.

>> No.15199454

>>15199410
But it's kind of tricky, right? Even if all people say they feel the same thing when the same areas light up, unless we can somehow visualize that in a meaningful way, what does it really mean?
Like if some structure in your brain lights up when seeing red squares, and the same one lights up in mine when seeing the same red square, we might be tempted to draw all kinds of conclusions, but the reality is that we still don't know shit, right?

Also, I agree that whenever the "neural net interpreter" technology comes out, it will be extremely interesting to try and port it to human brains.

>> No.15199524

I think very limited AI has more use for human survival - for instance, terraforming other planets. We can make self-replicating robots that convert something into something else to make it more habitable for humans. The AI only has to emulate a simple life form: preserving itself, utilizing its environment for its own will, and replicating by means of its environment. You would probably need very many forms of this, due to massively changing the environment from one thing into another. Perhaps life on Earth exists because of the desire of some being to transform the Earth from one thing into something else.

I can already imagine the advanced technology: this robot would have so much information that it can become whatever it needs to be to properly alter the environment into the desired result. I heard rumors that a Russian experiment with radiation was able to make very many different elements out of one thing. There could very well already be a planet-sized AI or robot out there.

>> No.15199536

>>15199524
Stop posting Pajeet

>> No.15199570

>>15199536
Imagine robots that could be sent to an asteroid to mine it. That would probably not require as sophisticated a program as some of the AI humans wish to achieve. We could convert a large part of a planet into a motor which propels it wherever we want it to go. We could make the Earth move so that our oceans aren't dried up by the Sun. We could convert Mercury into a giant space station. Okay, bye.

>> No.15200864

>>15199388
Too bad the mainstream view is entirely wrong.