Transhumanism (H+)


This topic is locked from further discussion.

#1 Stavrogin_
Member since 2011 • 804 Posts

Transhumanism, often abbreviated as H+ or h+, is an international intellectual and cultural movement that affirms the possibility and desirability of fundamentally transforming the human condition by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.

That's the short definition from Wikipedia. To put it simply, transhumanism in the broadest sense of the term means advancing the human race in every possible domain (physical, intellectual, and so on). This would be possible only through scientific and technological progress, which in itself is nothing new given the technological advancement of the last three or four millennia; there was technology much earlier too, of course, but it is hard to compare the "technology" of the Stone Age with that of the Bronze Age.

This progress would take different forms in different fields. In biomedical science it would mean improving health and extending life; in AI, building intelligent machines; in neuroscience, developing cognitive enhancers; in genetics, redesigning the human genome; and so on.

The notion of the transhuman singularity denotes the moment when an AI smarter than us is created. Because such an AI could improve itself, the first AI generation would build an even smarter second generation, and so on, possibly ad infinitum, until we become to it roughly what bacteria are to us. It is a bit scary when you think about it, but in the case of a friendly AI the future looks very bright for us, and it may well happen within our lifetime, since most predictions say this kind of technology will be available in about 30 years.
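
To make the "each generation builds a smarter one" compounding concrete, here is a deliberately toy model; the 1.5x improvement factor and the 20 generations are arbitrary illustration, not a prediction from the post.

```python
# Toy model of recursive self-improvement (illustrative only; the numbers are arbitrary).
# Each generation designs a successor that is some fixed factor more capable.

def simulate_generations(initial_capability: float, improvement_factor: float, generations: int):
    capability = initial_capability
    history = [capability]
    for _ in range(generations):
        capability *= improvement_factor  # the current generation builds its successor
        history.append(capability)
    return history

# Starting at "human level" (1.0) with a 50% gain per generation,
# 20 generations already give a roughly 3300x gap -- the intuition behind
# "we become to it what bacteria are to us".
print(simulate_generations(1.0, 1.5, 20)[-1])  # about 3325.26
```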


Discuss... :)

#2 Rikusaki
Member since 2006 • 16634 Posts
Scary. Not sure if I like this.
#3 jimmyjammer69
Member since 2008 • 12239 Posts
I've played Bioshock and that's not at all how things would end up.
#4 dramaybaz
Member since 2005 • 6020 Posts

[QUOTE="Rikusaki"]Scary. Not sure if I like this.
All the thinking will be done by a massive remote brain, so even the simplest of people can think. It will be called cloud thinking.

You will definitely like that. ;)

#5 Stavrogin_
Member since 2011 • 804 Posts
[QUOTE="Rikusaki"]Scary. Not sure if I like this.dramaybaz
All the thinking will be done by a massive remote brain, so even the simplest of people can think. It will be called cloud thinking. ;)

Actually, no. :D
#6 Rikusaki
Member since 2006 • 16634 Posts
[QUOTE="Rikusaki"]Scary. Not sure if I like this.dramaybaz
All the thinking will be done by a massive remote brain, so even the simplest of people can think. It will be called cloud thinking. ;)

I like owning my thoughts, thank you very much. :P
#7 dramaybaz
Member since 2005 • 6020 Posts
[QUOTE="dramaybaz"][QUOTE="Rikusaki"]Scary. Not sure if I like this.Stavrogin_
All the thinking will be done by a massive remote brain, so even the simplest of people can think. It will be called cloud thinking. ;)

Actually, no. :D

He is our local Onlive salesman. :P
#8 SaudiFury
Member since 2007 • 8709 Posts
[QUOTE="jimmyjammer69"]I've played Bioshock and that's not at all how things would end up.
:D lmao
#9 ksire_68
Member since 2007 • 1211 Posts


The notion of the transhuman singularity denotes the moment when an AI smarter than us is created. Because such an AI could improve itself, the first AI generation would build an even smarter second generation, and so on, possibly ad infinitum, until we become to it roughly what bacteria are to us. It is a bit scary when you think about it, but in the case of a friendly AI the future looks very bright for us, and it may well happen within our lifetime, since most predictions say this kind of technology will be available in about 30 years.

Stavrogin_

Sounds like something straight out of MGS.

#10 KungfuKitten
Member since 2006 • 27389 Posts

I do believe a higher form of intelligence than us would be more trustworthy than any politician. In fact, if humans want to survive in the far future (provide safety to an extreme) then they will most likely have to create a global governing AI if only to monitor our intake of information and actions.
I am transhumanist and fully support that idea.
I do not think the safety of humanity should be left in our hands for much longer.

I also support the idea of finding ways to improve humans genetically, and to stop the process of aging.

The only line I draw is at replacing functional human parts with enhanced ones, unless doing so has some kind of major positive impact. I consider that a 'partial but permanent' solution, which sounds like something to regret.

#11 Stavrogin_
Member since 2011 • 804 Posts

[QUOTE="Stavrogin_"]
The notion of the transhuman singularity denotes the moment when an AI smarter than us is created. Because such an AI could improve itself, the first AI generation would build an even smarter second generation, and so on, possibly ad infinitum, until we become to it roughly what bacteria are to us. It is a bit scary when you think about it, but in the case of a friendly AI the future looks very bright for us, and it may well happen within our lifetime, since most predictions say this kind of technology will be available in about 30 years.

ksire_68

Sounds like something straight out of MGS.

I wouldn't know, i haven't played it. :)

Look, i'd hate to sound like a pretentious douche, but can someone please contribute to this discussion, ask a question or make a point instead of a witty remark?

#12 THE_DRUGGIE
Member since 2006 • 25107 Posts

I'd want to get robot arms with a built-in grappling hook and sword.

I'd be like Bionic Commando crossed with a guy with a sword in his arm.

#13 Rockman999
Member since 2005 • 7507 Posts

[QUOTE="Stavrogin_"]
The notion of the transhuman singularity denotes the moment when an AI smarter than us is created. Because such an AI could improve itself, the first AI generation would build an even smarter second generation, and so on, possibly ad infinitum, until we become to it roughly what bacteria are to us. It is a bit scary when you think about it, but in the case of a friendly AI the future looks very bright for us, and it may well happen within our lifetime, since most predictions say this kind of technology will be available in about 30 years.

ksire_68

Sounds like something straight out of MGS.

It's in the recently released Deus Ex: Human Revolution. :P
#14 Stavrogin_
Member since 2011 • 804 Posts



I do believe a higher form of intelligence than us would be more trustworthy than any politician. In fact, if humans want to survive in the far future (provide safety to an extreme) then they will most likely have to create a global governing AI if only to monitor our intake of information and actions.
I am transhumanist and fully support that idea.
I do not think the safety of humanity should be left in our hands for much longer.

I also support the idea of finding ways to improve humans genetically, and to stop the process of aging.

The only line I draw is at replacing functional human parts with enhanced ones, unless doing so has some kind of major positive impact. I consider that a 'partial but permanent' solution, which sounds like something to regret.

KungfuKitten
What about the concept of the transhuman singularity? The continued advancement of transhuman technology will undoubtedly lead to the singularity, where an AI that is smarter than us and able to self-improve will be created. I believe Kurzweil's prediction that it must happen eventually, and I don't see a problem with transferring our consciousness into a computer or building synthetic people/aware entities (call them what you wish) in silico. But I do see a potential problem, because we don't know whether the superhuman AI will be friendly towards us or not... It is likely, in my opinion, that a scenario similar to "2001: A Space Odyssey" or "Terminator" could happen, in which the superhuman AI turns hostile and destroys us immediately so that we don't waste its resources. I believe that because a superhuman AI will almost certainly lack the things embedded in us by evolution, like love, compassion and the other emotions that hold us together in a civilized society - things an AI does not need. But I also think we can't know for sure what will happen after the singularity; we can only guess.

If my pessimistic view is wrong, and we really do integrate with the AI, the singularity will offer us a paradise that no eye has seen and no ear has heard. Unlike the religious paradise, in which I would be trapped in an infinitely long life with limited mental abilities (I'll get bored after a few millennia), the AI could change my mental structure and make me a demigod, a god, if not two gods.

Now that would be fun... :)

#15 Firebird-5
Member since 2007 • 2848 Posts
OP 'i just played deus ex hr'
#16 Stavrogin_
Member since 2011 • 804 Posts
[QUOTE="Firebird-5"]OP 'i just played deus ex hr'
I was interested in transhumanism long before Deus Ex: Human Revolution was released. *puts on hipster glasses*
#17 Baconbits2004
Member since 2009 • 12602 Posts
So like... C-3PO?
#18 Frame_Dragger
Member since 2009 • 9581 Posts

[QUOTE="ksire_68"]

[QUOTE="Stavrogin_"]
The notion of the transhuman singularity denotes the moment when an AI smarter than us is created. Because such an AI could improve itself, the first AI generation would build an even smarter second generation, and so on, possibly ad infinitum, until we become to it roughly what bacteria are to us. It is a bit scary when you think about it, but in the case of a friendly AI the future looks very bright for us, and it may well happen within our lifetime, since most predictions say this kind of technology will be available in about 30 years.

Stavrogin_

Sounds like something straight out of MGS.

I wouldn't know, i haven't played it. :)

Look, i'd hate to sound like a pretentious douche, but can someone please contribute to this discussion, ask a question or make a point instead of a witty remark?

I feel for you, so why not. Whatever the movement, the actual technological distance from such a capability, never mind the CHOICE to pursue it, is vast. I think the issue with many "singularity" models is that they ignore what people WILL do, compared to what they CAN do. The first person who gets their "wetware" hacked is going to start a worldwide movement to abolish the tech, assuming that ever happens. You describe two major challenges: one is creating a meaningful AI (which is frankly as distant as ever), the other is understanding the human brain so well that we can "link" it effectively.

I have serious doubts that either will occur, and if they do it will be a LONG and slow process with many intermediate steps, during which I suspect people will find they have issues with brain-computer-brain interaction. Technological singularity theory also generally means an end to humanity as a useful element of the world... I don't see that ending in a term from Deus Ex, just ending humanity.

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." (Eliezer Yudkowsky)

#19 LightGalaxy_07
Member since 2009 • 626 Posts

illuminati.

#20 BiscuitCruiser
Member since 2011 • 352 Posts

I never asked for this.

#21 Stavrogin_
Member since 2011 • 804 Posts



[QUOTE="Stavrogin_"]

[QUOTE="ksire_68"]

Sounds like something straight out of MGS.

Frame_Dragger
I wouldn't know, i haven't played it. :)

Look, i'd hate to sound like a pretentious douche, but can someone please contribute to this discussion, ask a question or make a point instead of a witty remark?



I feel for you, so why not. Whatever the movement, the actual technological distance from such a capability, never mind the CHOICE to pursue it, is vast. I think the issue with many "singularity" models is that they ignore what people WILL do, compared to what they CAN do. The first person who gets their "wetware" hacked is going to start a worldwide movement to abolish the tech, assuming that ever happens. You describe two major challenges: one is creating a meaningful AI (which is frankly as distant as ever), the other is understanding the human brain so well that we can "link" it effectively.

I have serious doubts that either will occur, and if they do it will be a LONG and slow process with many intermediate steps, during which I suspect people will find they have issues with brain-computer-brain interaction. Technological singularity theory also generally means an end to humanity as a useful element of the world... I don't see that ending in a term from Deus Ex, just ending humanity.

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." (Eliezer Yudkowsky)

Thank you. :D

I believe you're sceptical because of the many scientific predictions that have turned out to be false. We often hear predictions that this will happen in 10 years and that will happen in 15, and unfortunately, 10 or 15 years after the announcement we're told we'll have to wait another 10-15 years.

The skepticism about AI is mainly due to the overblown statements made by the pioneers of the IT industry many years ago, who messianically prophesied that we would have AI by now. The problem was not only that the engineers of that era were unrealistic about the development of hardware and software, but also that they had no idea of the complexity of the human brain and the sheer volume of computation it performs. Today is a different story: scientific teams can replicate parts of the brain in software, and successful EEG-based brain-machine interfaces already exist (so far almost exclusively for motor functions). But the real ace is genetic algorithms: write a population of candidate algorithms, let them reproduce and seed the next generation, then expose that generation to evolutionary pressure.

That's why I also believe 30 years or so is a realistic number. Technological advancement doesn't have to go that way, though. There is a logical but not a necessary link between transhumanism and AI. Integration with nanotechnology is the other way to go, albeit a much harder one, because realistically the human body is an evolutionary product not designed to last much beyond 30-40 years. The reasons for this are deep and systemic, and without completely rearranging the entire genetic makeup, to the point where the rearranged "human" would be far from what we consider human, radical changes are not possible. So it's more likely that we will develop an AI that, at worst, will just be an electronic equivalent of the human brain. Once that level is reached, such an AI can (only) advance at a pace unimaginable to us. Nice, eh?
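
A minimal sketch of the genetic-algorithm idea described in the post above; the task (matching a target sum), the population size, and the mutation rate are arbitrary illustrative choices, not anything proposed in the thread.

```python
import random

# Toy genetic algorithm: evolve a list of numbers toward a target sum.
# All parameters (population size, mutation rate, target) are arbitrary examples.

TARGET = 100.0
POP_SIZE = 50
GENOME_LEN = 10
MUTATION_RATE = 0.1

def fitness(genome):
    # Higher is better: negative distance from the target sum.
    return -abs(sum(genome) - TARGET)

def mutate(genome):
    return [g + random.gauss(0, 1) if random.random() < MUTATION_RATE else g
            for g in genome]

def crossover(a, b):
    cut = random.randint(1, GENOME_LEN - 1)
    return a[:cut] + b[cut:]

population = [[random.uniform(0, 20) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    # Evolutionary pressure: keep the fitter half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    # "Reproduce and seed" the next generation from the survivors.
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(sum(best))  # should end up close to 100
```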

#22 Cactus_Matt
Member since 2008 • 8604 Posts

Most computers (heck most refrigerators) are smarter than the majority of people I have to deal with. So as far as I'm concerned the machines have already won. Stupid people are stupid, and abundant.

#23 KungfuKitten
Member since 2006 • 27389 Posts

[QUOTE="KungfuKitten"]

I do believe a higher form of intelligence than us would be more trustworthy than any politician. In fact, if humans want to survive in the far future (provide safety to an extreme) then they will most likely have to create a global governing AI if only to monitor our intake of information and actions.
I am transhumanist and fully support that idea.
I do not think the safety of humanity should be left in our hands for much longer.

I also support the idea of finding ways to improve humans genetically, and to stop the process of aging.

The only line I draw is at replacing functional human parts with enhanced ones, unless doing so has some kind of major positive impact. I consider that a 'partial but permanent' solution, which sounds like something to regret.

Stavrogin_

What about the concept of the transhuman singularity? The continued advancement of transhuman technology will undoubtedly lead to the singularity, where an AI that is smarter than us and able to self-improve will be created. I believe Kurzweil's prediction that it must happen eventually, and I don't see a problem with transferring our consciousness into a computer or building synthetic people/aware entities (call them what you wish) in silico. But I do see a potential problem, because we don't know whether the superhuman AI will be friendly towards us or not... It is likely, in my opinion, that a scenario similar to "2001: A Space Odyssey" or "Terminator" could happen, in which the superhuman AI turns hostile and destroys us immediately so that we don't waste its resources. I believe that because a superhuman AI will almost certainly lack the things embedded in us by evolution, like love, compassion and the other emotions that hold us together in a civilized society - things an AI does not need. But I also think we can't know for sure what will happen after the singularity; we can only guess.

If my pessimistic view is wrong, and we really do integrate with the AI, the singularity will offer us a paradise that no eye has seen and no ear has heard. Unlike the religious paradise, in which I would be trapped in an infinitely long life with limited mental abilities (I'll get bored after a few millennia), the AI could change my mental structure and make me a demigod, a god, if not two gods.

Now that would be fun... :)

All right, like I said, I believe a higher form of intelligence would be more trustworthy than any politician, or group thereof really. To be fair, that comment was extremely biased. I do not think of humanity as a necessarily positive thing, and my goals are not to let humanity survive.
I'll try to elaborate a little, but my English is not too good.

There are a few reasons why an A.I. regulating humanity could be a positive thing.
I'm of the opinion that human beings are not very good at leading themselves, because their ability to reason is hindered by exactly the things you mention that we acquired through evolution: emotions and preprogrammed belief systems like the blame game and the social-standards game. A politician's decisions are, from my personal observations, mainly guided by the need to avoid blame. I see that in all people, in all roles, and I don't think it works too well, but it is a necessity in human society.
The blame game and the social-standards game (condemning the different, and keeping each other safe - typical business politics, where it is about knowing the right people, not about being able to do the right things) slow us down tremendously.

When it comes to emotions, my stance is this: with my condition I don't experience emotion the way others do. I don't feel emotions as much, and am not moved by them as much, as neurotypical people are. Still, I have an easier time understanding them than neurotypicals do. Actually, I don't think you need to experience emotions at all to be able to understand them; in fact, I think it becomes easier to understand emotions the less you are influenced by them. I really don't think a lack of emotions would be bothersome at all, in any way. Whether an A.I. does things for us or against us shouldn't be left to emotions anyway.

Humans are also hindered by early-acquired beliefs and biases. A paradigm shift takes humans at least a generation to accept, and with political systems as slow as ours, it would take many generations to push through significant changes in the way things work. Someone exceptional has to be prosecuted before people start considering whether (s)he may be right. Add to that the fact that the information we acquire increases exponentially over time, and that the things we make possible have more complexity and more impact on what we understand to be true. At some point humans will be completely incapable of keeping up with progress and staying sane.

A superintelligent A.I. with all human knowledge would be better than humans at figuring out the most likely truths about the workings of this world. I'd go as far as to say humans should give in to its ideas even if they conflict with your feelings, emotions, or with what you thought was true - even if it hurts, knowing that you can be deceived and that you will be used. Although this makes it sound like an A.I. could do marvelous things for us, I do think humanity would be used if an A.I. is to be sentient and superintelligent, but I think it will be our only option for having any future at all.

I do think a self-regulating, sentient, governing A.I. will be an absolute necessity in the far future because of security risks. This is an important point: we make it ever easier for one man with ever more ordinary knowledge and ever more ordinary wealth to destroy ever more. That is not something you can contain with a human government. This is why I think it is useless to ask yourself whether you want such a thing to happen.

What would be the goals of a superintelligent, self-regulating, sentient A.I.? If the A.I. were truly sentient - and I don't think there is another way to ensure that humans don't control it - I don't think its goals would differ much from ours at first: trying to understand the nature of reality, safeguarding the self, taking away what we would see as stress, concerns, liabilities, and eventually limitations. Through this process it would learn something essential: what is good and what is not, and that it is better to be in agreement than in conflict. It would surely try to become omnipotent, and it would surely use us and change us to achieve those goals. That will be completely different from what we think of as safe, good, or even sane. Even if humanity were to cease to exist as it is now, everything would be part of the best situation the A.I. could create: a near-paradise consisting of parts that are more in agreement than in disagreement. Like a puzzle. I think that would be best.

This post is way too long, and goes way too far for anyone to agree with it ^_^ but it's fun trying to type out thoughts.

#24 Stavrogin_
Member since 2011 • 804 Posts

^I share your pessimism towards humanity; for every brilliant mind there are a million dumbasses. Mediocrity everywhere... Great, I sound like a pretentious douche again. :)

A higher form of intelligence would be more trustworthy only if it's friendly towards us, but the real question is, why should it be? Why should an AI that's much smarter than us obey us or follow social constructs such as morality? Why do good? What's good for us isn't necessarily good for it.

But as I said before, predicting what will happen after the singularity is foolish; we just can't know for sure.

#25 Frame_Dragger
Member since 2009 • 9581 Posts

[QUOTE="Frame_Dragger"]

[QUOTE="Stavrogin_"] I wouldn't know, i haven't played it. :)

Look, i'd hate to sound like a pretentious douche, but can someone please contribute to this discussion, ask a question or make a point instead of a witty remark?

Stavrogin_



I feel for you, so why not. Whatever the movement, the actual technological distance from such a capability, never mind the CHOICE to pursue it, is vast. I think the issue with many "singularity" models is that they ignore what people WILL do, compared to what they CAN do. The first person who gets their "wetware" hacked is going to start a worldwide movement to abolish the tech, assuming that ever happens. You describe two major challenges: one is creating a meaningful AI (which is frankly as distant as ever), the other is understanding the human brain so well that we can "link" it effectively.

I have serious doubts that either will occur, and if they do it will be a LONG and slow process with many intermediate steps, during which I suspect people will find they have issues with brain-computer-brain interaction. Technological singularity theory also generally means an end to humanity as a useful element of the world... I don't see that ending in a term from Deus Ex, just ending humanity.

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." (Eliezer Yudkowsky)

Thank you. :D

I believe you're sceptical because of the many scientific predictions that have turned out to be false. We often hear predictions that this will happen in 10 years and that will happen in 15, and unfortunately, 10 or 15 years after the announcement we're told we'll have to wait another 10-15 years.

The skepticism about AI is mainly due to the overblown statements made by the pioneers of the IT industry many years ago, who messianically prophesied that we would have AI by now. The problem was not only that the engineers of that era were unrealistic about the development of hardware and software, but also that they had no idea of the complexity of the human brain and the sheer volume of computation it performs. Today is a different story: scientific teams can replicate parts of the brain in software, and successful EEG-based brain-machine interfaces already exist (so far almost exclusively for motor functions). But the real ace is genetic algorithms: write a population of candidate algorithms, let them reproduce and seed the next generation, then expose that generation to evolutionary pressure.

That's why I also believe 30 years or so is a realistic number. Technological advancement doesn't have to go that way, though. There is a logical but not a necessary link between transhumanism and AI. Integration with nanotechnology is the other way to go, albeit a much harder one, because realistically the human body is an evolutionary product not designed to last much beyond 30-40 years. The reasons for this are deep and systemic, and without completely rearranging the entire genetic makeup, to the point where the rearranged "human" would be far from what we consider human, radical changes are not possible. So it's more likely that we will develop an AI that, at worst, will just be an electronic equivalent of the human brain. Once that level is reached, such an AI can (only) advance at a pace unimaginable to us. Nice, eh?

I'm skeptical because of the science I know, which includes knowing that there is nothing that anyone would call a "brain machine". Being able to move a cursor with training is not the same as properly interfacing with the brain, or even interpreting thoughts. Your points seem more aspirational than realistic; nothing in current computer science or human-machine interfaces even REMOTELY points to a 30-year mark as being close to what you describe. First, you'd need a far better integrated circuit, which is going to require a new substrate. Lots of promising materials exist, and none of them are even CLOSE to mass production.

You need more than the crude ability to hook up an EEG and match the trace when someone thinks "UP", "DOWN", etc., and then tell a computer to act on that. As for AI, it's as far away as it's ever been; just look at the 'Go' problem. We make computers that better mimic intelligence in VERY restricted circumstances, but it's an illusion that is far from the reality of a thinking machine. It would be realistic to hope that something better than the Monte Carlo approach could play 'Go' in 30 years... maybe. It's also possible that the "cocktail party problem" for voice recognition will be handled by then, but the very fact that we have to do so much just to get a computer to operate within its narrow range of capacity pretty much kills your argument.

I don't mean this to cause offense, but what I'm reading in your post is a lot of pop-sci concepts that are divorced from the real-world challenges standing in their way, the current state of the art, and what is required to move it forward. There's nothing wrong with dreaming, but if you want to take it beyond a personal fantasy you have a LOT to learn.
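
For context on the "match the EEG trace and act on it" point above, here is a hypothetical toy of that crude pipeline; the synthetic signals, the 8-12 Hz band, and the UP/DOWN commands are invented for illustration and are not a real brain-computer interface.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
FS = 256          # sampling rate in Hz (arbitrary example)
WINDOW = FS       # one-second trace per decision

def band_power(trace, lo, hi):
    """Average spectral power of the trace between lo and hi Hz."""
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / FS)
    power = np.abs(np.fft.rfft(trace)) ** 2
    return power[(freqs >= lo) & (freqs < hi)].mean()

def synthetic_trace(command):
    # Stand-in for real EEG: "UP" traces carry more 8-12 Hz energy.
    t = np.arange(WINDOW) / FS
    amp = 2.0 if command == "UP" else 0.5
    return amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, WINDOW)

# Build a tiny labelled dataset and train a linear classifier on one feature.
commands = ["UP", "DOWN"] * 100
X = np.array([[band_power(synthetic_trace(c), 8, 12)] for c in commands])
y = np.array([1 if c == "UP" else 0 for c in commands])
clf = LogisticRegression().fit(X, y)

# "Act on it": map a new trace to a command the computer executes.
new_trace = synthetic_trace("UP")
print("UP" if clf.predict([[band_power(new_trace, 8, 12)]])[0] else "DOWN")
```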

#26 deactivated-6127ced9bcba0
Member since 2006 • 31700 Posts

Deus Ex covered this. It's a bad idea!!!

#27 Zhuobi
Member since 2011 • 25 Posts
Humans won't be humans forever. It's been covered by philosophers again and again. What the future holds for us is anyone's guess. Ambitious people will collaborate and do what they think is in our best interest (or theirs). Progress is neutral and indifferent. Even if we evolved into slimes to deal with some change in our environment, that would still be progress. But a conscious human being would never want to be a slime, and would see it as some form of de-evolution or regression.
#28 Frame_Dragger
Member since 2009 • 9581 Posts
[QUOTE="Zhuobi"]Humans won't be humans forever. It's been covered by philosophers again and again. What the future holds for us is anyone's guess. Ambitious people will collaborate and do what they think is in our best interest (or theirs). Progress is neutral and indifferent. Even if we evolved into slimes to deal with some change in our environment, that would still be progress. But a conscious human being would never want to be a slime, and would see it as some form of de-evolution or regression.
Yeah, but philosophers know precisely dick about how this would actually come to pass, and frankly neither do most ambitious people. You can't wish a technological singularity into being after all.
#29 Martzel94
Member since 2008 • 7792 Posts

[QUOTE="Stavrogin_"]Transhumanism, often abbreviated as H+ or h+, is an international intellectual and cultural movement that affirms the possibility and desirability of fundamentally transforming the human condition by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.

While I'm not convinced that merging with technology is desirable, I think that to some degree it's a potentially good idea. However, NOT in the current economic and political system. It could marginalize even more people - especially those who would not have the means to afford it, and it could possibly be used against us.

#30 parkurtommo
Member since 2009 • 28295 Posts

Most computers (heck most refrigerators) are smarter than the majority of people I have to deal with. So as far as I'm concerned the machines have already won. Stupid people are stupid, and abundant.

Cactus_Matt
problem is that's not true lol
#31 gamerguru100
Member since 2009 • 12718 Posts

Reminds me of Deus Ex...I don't know if I want a chip in my brain, but many of the augmentations in the game seem pretty cool.

One of them allows you to jump from any height and not take any damage.

#32 samuraiguns
Member since 2005 • 11588 Posts

They will not stop us

LOVe

I never asked for this.

PEACE

#33 Frame_Dragger
Member since 2009 • 9581 Posts
[QUOTE="gamerguru100"]

Reminds me of Deus Ex...I don't know if I want a chip in my brain, but many of the augmentations in the game seem pretty cool.

One of them allows you to jump from any height and not take any damage.

Yeah... no doctor is going to perform major surgery so you can essentially have a cellphone in your head. I'd add... in the game, that whole "chip in the brain" thing doesn't exactly end well, does it?
#34 worlock77
Member since 2009 • 22552 Posts

Transhumanism might sound like a good idea on paper, until you realize that it would only benefit those with wealth. That works out wonderfully for folks like Ray Kurzweil. Not so wonderfully for the rest of us, though.

(*Of course this all rests on the premise that it's more than simply a sci-fi pipe dream at this point.)

#35 xdude85
Member since 2006 • 6559 Posts
I feel that many science fiction stories have covered this topic, and in them it didn't turn out well when you mix machine and man.
#36 Frame_Dragger
Member since 2009 • 9581 Posts
[QUOTE="worlock77"]

Transhumanism might sound like a good idea on paper, until you realize that it would only benefit those with wealth. That works out wonderfully for folks like Ray Kurzweil. Not so wonderfully for the rest of us, though.

(*Of course this all rests on the premise that it's more than simply a sci-fi pipe dream at this point.)

I doubt it works well for anyone... everything that has been made so far can be hacked, and I don't see why a human-computer interface of the types discussed here would be different. I realize that as of now all humans have kill switches called "getting shot to death", but you can't hack us, and you can't just turn us off. Can you imagine the level of confidence you'd need in a product to be willing to give it direct access to your nervous system?!

Still, if it DID become feasible through some magical process, then as you say it could only benefit those with money and power. Fortunately, as of right now it's purely a sci-fi pipe dream, as you say; betting on humans blowing each other to hell and back before this occurs is a FAR safer bet.
#37 194197844077667059316682358889
Member since 2003 • 49173 Posts

Look, i'd hate to sound like a pretentious douche

Stavrogin_
Uh oh :(
#38 CRS98
Member since 2004 • 9036 Posts
I support the posthumanization of humanity through sophisticated nanorobotic engineering that replaces the body with a powerful (physically and mentally), shapeshifting immortal brain. Then we can rule the universe as a matrioshka brain! Take that aliens!
#39 Kh1ndjal
Member since 2003 • 2788 Posts
This sounds like the beginning of every nightmarish science fiction book. At what point after transhumanism is the Butlerian Jihad waged?
#40 SaudiFury
Member since 2007 • 8709 Posts
[QUOTE="Kh1ndjal"]this sounds like the beginning of every nightmarish science fiction book. at what point after transhumanism is the butlerian jihad waged?

Bulerian jihad?
#41 MaxGaines
Member since 2011 • 25 Posts

Transhumanism, often abbreviated as H+ or h+, is an international intellectual and cultural movement that affirms the possibility and desirability of fundamentally transforming the human condition by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.

That's the short definition from Wikipedia. To put it simply, transhumanism in the broadest sense of the term means advancing the human race in every possible domain (physical, intellectual, and so on). This would be possible only through scientific and technological progress, which in itself is nothing new given the technological advancement of the last three or four millennia; there was technology much earlier too, of course, but it is hard to compare the "technology" of the Stone Age with that of the Bronze Age.

This progress would take different forms in different fields. In biomedical science it would mean improving health and extending life; in AI, building intelligent machines; in neuroscience, developing cognitive enhancers; in genetics, redesigning the human genome; and so on.

The notion of the transhuman singularity denotes the moment when an AI smarter than us is created. Because such an AI could improve itself, the first AI generation would build an even smarter second generation, and so on, possibly ad infinitum, until we become to it roughly what bacteria are to us. It is a bit scary when you think about it, but in the case of a friendly AI the future looks very bright for us, and it may well happen within our lifetime, since most predictions say this kind of technology will be available in about 30 years.


Discuss... :)

Stavrogin_
Why bother? As George Carlin would say, we're just circling the drain.
#42 Kh1ndjal
Member since 2003 • 2788 Posts
[QUOTE="SaudiFury"][QUOTE="Kh1ndjal"]this sounds like the beginning of every nightmarish science fiction book. at what point after transhumanism is the butlerian jihad waged?

Bulerian jihad?

It's a Dune reference.
#44 Frame_Dragger
Member since 2009 • 9581 Posts
[QUOTE="Kh1ndjal"]This sounds like the beginning of every nightmarish science fiction book. At what point after transhumanism is the Butlerian Jihad waged?
Niiiiice, get me a mentat!!!
#45 Bane_09
Member since 2010 • 3394 Posts

I really doubt we will have this kind of technology in our lifetimes but the idea of it is interesting

#46 lancea34
Member since 2007 • 6912 Posts

After playing Deus Ex I can firmly say I am against Transhumanism no matter how good it could be.

#47 shadowkiller11
Member since 2008 • 7956 Posts
I didn't ask for this.
#48 Xeogua
Member since 2010 • 1542 Posts

No thank you. Some guy would get it into his head to write some code to make the computer evil just for laughs, and by the time he realised his mistake it would be too late and the computers would kill us all.

#49 weezyfb
Member since 2009 • 14703 Posts
i am for it
#50 dercoo
Member since 2006 • 12555 Posts

Basically Ghost in the Shell ideas?