
Questions for the future of humanity: Do you want to live forever?

Uploading the contents of our brain to a computer is probably an impossible task. But for the purpose of the discussion: what would you do if this were possible?

Is an eternal life merged with computers inevitable? Illustration:

By Hilary Rosner. The article is published with the permission of Scientific American Israel and the Ort Israel Network, 17.11.2016

  • Some scientists believe that one day, technology will allow us to achieve immortality by uploading our neural connections into robotic bodies. Others think it is impossible.
  • Either way, serious philosophers debate how such a situation would change our humanity.
  • The philosophical discussion is important because even if the "singularity" is not fully realized, the improvement of human abilities may raise similar questions.

Not long ago, at a wedding reception, I conducted a survey among some friends on the topic of immortality. I asked a learned couple from San Francisco, parents of two young daughters: Suppose you could upload your brains into a computer tomorrow, and live forever as a hybrid of man and machine - would you want to do that? The husband, a 42-year-old doctor of medicine, immediately replied in the affirmative. His current research, he said, will bear fruit for centuries to come, and he is keen to see it. "Also, I want to know what the world will look like in ten thousand years." The wife, a 39-year-old doctor of art history, was equally adamant. "No way," she said. "Death is a part of life. I want to know what it's like to die."

I wondered if the wife's decision would make the husband reconsider his words, but for reasons of diplomatic etiquette I decided to let the matter go. Nevertheless, this is more than just intellectual entertainment at parties. If the words of some futurists are to be believed, sooner or later we will have to deal with such questions, because we are headed for a post-biological world where death will be gone from the world - or at least will be largely under our control.

The most detailed version of this immortal future is Ray Kurzweil's. In his 2005 best-selling book, "The Singularity Is Near," Kurzweil predicted that artificial intelligence would soon "encompass all human knowledge and proficiency". Brain-scanning technology on a nanometer scale will eventually allow "a gradual transfer of our intelligence, personality and skills to the non-biological part of our intelligence". At the same time, billions of nanorobots in our bodies "will destroy pathogens, repair DNA errors, neutralize toxins and perform many other tasks in order to improve our physical health". As a result, we will be able to live endlessly, without growing old. These nanorobots will create "virtual reality within the nervous system", and we will move to live more and more in the virtual world, until we can no longer distinguish it from the dull universe that we may call "real reality".

Based on progress in genetics, nanotechnology and robotics, and on the exponential rate of technological change, Kurzweil set 2045 as the date of the technological singularity: the point at which non-biological intelligence will surpass human intelligence to such an extent that a "fundamental and revolutionary change in human ability" will occur. Some supporters of the singularity idea still insist on this date, and progress in artificial-intelligence "deep learning" has only encouraged them.

However, most scientists believe that our cyborg destiny, if it ever arrives, is much more distant. Sebastian Seung, a professor at the Princeton Neuroscience Institute, argues that uploading the contents of the brain to a computer will probably never be possible. Human brains are made up of one hundred billion nerve cells (neurons), connected by synapses, and neuroscientists believe that the totality of these connections is the key to our identity. Even by Kurzweil's standards of technological progress, this is an enormous number of connections to map and upload. And that may be just the beginning: nerve cells also communicate with each other outside of synapses, and such "extra-synaptic" interactions may be essential for brain function. In that case, as Seung wrote in his 2012 book "Connectome: How the Brain's Wiring Makes Us Who We Are", uploading the brain would have to include not only every connection or every nerve cell, but every atom. The computing power required to do so "is completely out of the question, unless your distant descendants survive for a galactic-scale time," Seung wrote.

Nevertheless, the very possibility of a cyborg future, however remote or improbable, raises concerns important enough for serious philosophers to seriously discuss. Even if our technology fails to fully realize the Kurzwelian vision, improvements to the human mind and body may take us part of the way, and this requires an examination of the question: What makes us human?

The philosopher David Chalmers, one of the directors of the Center for Mind, Brain and Consciousness at New York University, has written about the best way to preserve one's identity when uploading the brain to a computer. I asked him if he thought he would get to live forever. Chalmers says he doesn't think he will, but "these issues will be practical possibilities sometime in the next century or so".

Ronald Sandler, an expert on ethics and the environment and chair of the Department of Philosophy and Religion at Northeastern University, says that the discourse on the cyborg future "sheds light on many issues. Thinking about the extreme case can teach us a lot about the near term."

And of course, if there is even a slim chance that people alive today could ultimately choose between death and eternal life as cyborgs, it is worth starting to consider it now. Setting aside the question of feasibility, it is appropriate to stop and think about more fundamental questions. Is such a situation desirable? If my mind and consciousness were uploaded into a cyborg, who would I be? Would I still love my family and friends? Would they still love me? Would I, in the end, remain human?

One of the issues that preoccupy philosophers is how we would relate to each other. Will the "golden rule" of "love your neighbor as yourself" remain valid in a post-human world? A few years ago, Sandler co-wrote the article "Transhumanism, Human Dignity, and Moral Status", which argues that "enhanced" humans would retain a moral obligation toward ordinary humans. "Even if you become enhanced in some way, you'll still have to worry about me," he says. It's hard to argue with that, and it's even harder to believe that this is what will actually happen.

Other philosophers propose "moral enhancement": using medical or biomedical means to upgrade our principles. When immense intelligence and power become available to us, we will have to ensure that they are in good hands. Our scientific knowledge "is beginning to allow us to directly influence the biological or physiological foundations of human motivation, through drugs, selection or genetic engineering, or with the help of external devices that affect the brain and the learning process", the philosophers Julian Savulescu and Ingmar Persson wrote recently. "We can use these techniques to overcome the moral and psychological limitations that threaten the human race."

In May 2016, an article was published under the title "Soon We'll Use Science to Make Humans More Moral". Its author, James Hughes, a bioethicist at the University of Massachusetts Boston, argued for moral enhancement, but said it should be voluntary, not coercive. "With the help of science, we can discover our own ways to achieve happiness and virtue through technological means," wrote Hughes, who runs the Institute for Ethics and Emerging Technologies, a progressive transhumanist think tank. Hughes, 55, a former Buddhist monk, told me he wanted to stay alive long enough to experience enlightenment.

There is also the question of how we will treat the Earth. Eternal life, in any form, will change our relationships not only with each other but also with the world around us. Will we care for the environment more, or less? Will the natural world gain or lose?

Sandler says the singularity describes a final state. To reach this state will require an enormous amount of technological change, and "nothing changes our relationship with nature more quickly and more thoroughly than technology." If we get to a point where we can bring human awareness to the computer and move without interruption between real and virtual reality, it is likely that we will have already engineered everything else in significant ways. "By the time the singularity occurs, our relationship with nature will have changed completely," says Sandler.

Despite what we would like to believe, in our current state as mortals we are very dependent on and vulnerable to natural systems. In the future world, this dependence will change. If we don't have to breathe through lungs, why should we care about air pollution? If we don't have to grow our own food, we will also disconnect from the land around us.

In the same way, in a world where the real and the virtual are indistinguishable, we may be able to benefit from digital nature as much as from the real thing outside. Our relationship to real nature will change: it will cease to be sensual, physical. This transition could have a profound effect on our brains - perhaps even in their silicon version. Studies show that interaction with nature affects us greatly, and for the better. It is possible that such a connection, even at an unconscious level, is a basic feature of humanity.

If our dependence on nature disappears, and our physical ability to connect with nature declines, "the basis for concern for the environment will change, and will be much more a matter of responsibility toward nature for its own sake," Sandler says. Our ability to solve environmental problems, for example through climate engineering, will be on a scale we cannot even imagine today. But will we still feel that nature has value of its own? If so, ecosystems will benefit. If not, other species and the ecosystems that depend on them will be in trouble.

Our relationship with the environment also depends on the question of time scale. From a geological perspective, the extinction crisis we are witnessing today may be insignificant, but it does make a difference on the current scale of human lifespan. How will a much greater lifespan affect "the perspective from which we ask questions and think about the non-human environment?" asks Sandler. "The scale of time determines which answers will be considered reasonable." Will we care more about the environment since we will be staying in it for so long? Or maybe we'll care less, since we'll be looking at things from a broader geological perspective? "It's almost impossible to imagine what it will be like," Sandler says, "although we can tell the perspective will be very different."

When you talk to experts about such topics for long enough, you fall into a bottomless pit and find yourself having seemingly normal conversations about absurd things. Hughes told me at one point that "if there is an X-Men-style gene therapy that allows you to shoot laser beams from your eyes or take over the minds of others," people who want such traits will have to go through special training and get a license.

"Are you using these examples to illustrate an idea, or are these things you really think will happen in the future?" I asked.

"In terms of the amount of talk about such issues among transhumanists," he replies indirectly, "most of us try not to scare the newcomers too much. But after you go through shock stage number four, you can start talking about the day when we'll all just be nanobots."

When we're just nanobots, what kind of things will worry us? Anxiety is perhaps one of the qualities that define us as human beings. Will immortality make anxiety a thing of the past? If I no longer have to worry about my health, paying the bills, and what I'll do when I'm too old and frail to walk around and write articles, will I still be me? Or maybe I'll just be a superficial and over-satisfied robot? What will I dream about in the summer? Will I lose my ambition? After all, if I live forever, my great American novel can surely wait for the next century, right?

Will it still be me? Chalmers believes that "this is going to be a practical, not just a philosophical, very pressing question."

Intuitively, it seems logical that I would remain myself even if my brain were uploaded to a computer - especially if, as Chalmers describes, it were done neuron by neuron, maintaining awareness throughout the process, becoming one percent silicon, then five, then ten, and finally one hundred percent. This is the ancient parable of the Ship of Theseus, in which plank after plank was replaced by a new, stronger one. Was it the same ship at the end of the process? And if not, at what point did the scales tip and it become a different ship?

"One of the big problems," says Hughes, "is that if you live long enough and go through so many changes, there is no longer any continuity of meaning throughout life. Am I really the same person I was when I was five? If I live another five thousand years, will I really be who I am now? In the future we will be able to share our memories, so the importance of identity and self-continuity will erode." Sounds pretty depressing.

Despite the utopian rhetoric of the singularity, there is a hint of fatalism in it. This is the only path open to us: merge with the machines or fade away - or something worse than that. And what if I don't want to become a cyborg? Kurzweil might say that it is only my limited and flawed biological mind that prevents me from understanding the true appeal and potential of this future. The choices that will be available to me - any type of body, any virtual reality experience, endless possibilities for creative expression, the chance to inhabit space - will make my current biological existence comically trivial. And in general, what is more fatalistic than certain death?

And yet, I like being human. I love the knowledge that I am made, ultimately, of the same stuff as all life on Earth. I am even connected in a certain way to my human frailty. I like to be warm and pleasant, and not hard and durable like a super robot in an action movie. I like the warm blood that flows through my veins, and I'm not convinced that I want to replace it with nanobots.

Some ethicists argue that human happiness relies on the fact that our lives are transitory, that we are vulnerable beings who depend on each other. What meaning and what value is there for life in the future of man-machine?

"For me, the essence of being human is not our limitations but our ability to reach beyond those limitations," Kurzweil writes. This is an interesting point of view. Death has always been one of those limitations, so perhaps overcoming death would make us most profoundly human?

But after we overcome it, I'm not convinced that our humanity will remain. Death itself does not exactly define us, of course - all living things die - but our awareness of death, our understanding of it, and our attempt to give meaning to life before it arrives are certainly part of the human spirit.


131 comments

  1. walking dead,

    Neural networks of the "Convolutional Neural Networks" type, which have gained very strong momentum in recent years, work in a way very similar to how our brain decodes visual images coming from the eyes. In such a network, too, the image is broken down into its basic components, such as vertical lines, horizontal lines, diagonal lines and so on (as happens in the retina of the eye), and deeper layers further on identify additional features, such as a nose, mouth and ears if it is a face, or wheels if it is a car. The last layers recognize the object itself.

    In fact, "Convolutional Neural Networks" drew their inspiration from the way our brain decodes images, and they are also much more accurate than neural networks of previous generations.

    There is no fundamental difference between the way our brain decodes images and the way neural networks of this type decode images.

    If you want, here is an example video from YouTube:
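The first stage described in this comment - early layers extracting primitive features such as vertical lines - can be sketched in a few lines. This is a hypothetical toy, not the networks under discussion: one hand-written convolution with a vertical-edge kernel, applied to a made-up 5x5 image.

```python
# Minimal illustration of the first stage a CNN (or the retina) performs:
# a convolution filter that responds to vertical edges, the kind of
# primitive feature that deeper layers later combine into noses, wheels,
# and whole objects.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution of two lists-of-lists."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A tiny 5x5 image: dark left half, bright right half (a vertical edge).
image = [
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
]

# Sobel-style kernel that fires on vertical edges.
vertical_edge = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

response = convolve2d(image, vertical_edge)
# The response is strongest in the windows covering the dark/bright
# boundary and zero in the uniform region.
print(response[0])  # -> [3, 3, 0]
```

A real convolutional network stacks many such filters, with learned rather than hand-picked kernels, and feeds their responses to further layers.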

  2. WD
    Designing a handwriting-recognition network on a computer is relatively simple. The system first performs segmentation - separating the letters into independent units. A simple way is to try to pass lines between the letters (sometimes a rotation is performed before this - a process our brain performs too).
    The recognition network has an input for each pixel (say 32 x 32). The output has a line for every possible character (and spaces between words need special treatment). The hidden layer (between the input and the output) contains a number of neurons that is determined experimentally.
    Now the interesting part begins - we teach the network by trial and error: we feed in a known input (that is, we know which character it is) and adjust the weights a bit to reach the desired output. This is a delicate step, because it is possible to "over-learn" the training set, and then the system will be worse at recognizing things that differ from its inputs.

    Today two additional processes are carried out. The first is learning words: if we have identified one letter, then we know which letter is expected to come after it (for example, after q will come u). The second is recognizing sequences of words: after "the", the words "if" or "why" will probably never come, and so on.

    That is - there is a unified system here that has learned to recognize letters - but there is no single place in this network where "a letter was recognized" 🙂 each letter has its own output; there is no place in the system that "knows" which letter was recognized.

    In our brain there is a complete system between the eyes and the moment we understand the meaning of a text - about a third of the brain is devoted to analyzing what we see. Learning letters in humans is different from a computer. We have circuits for recognizing geometry, and on top of those we see "objects". At a certain age we manage to transfer the recognition of certain objects to the language circuits (not all of us - there are various kinds of dyslexia that make this transition difficult).
    I will try to give an example: a basketball hoop is built from a circle and straight lines, and we recognize the hoop as a hoop from any angle. Now think about d, b, p and q. There is something fundamentally different here, right?
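The trial-and-error weight adjustment this comment describes can be illustrated with a minimal perceptron. Everything here is hypothetical - tiny 2x2 "characters" instead of the 32x32 input mentioned above, and a single output instead of one line per character - but the loop is the same idea: feed a known input, compare with the desired label, nudge the weights.

```python
# Toy version of the training loop described above: show the network a
# known input, compare its output with the desired label, and nudge the
# weights (the classic perceptron rule). The "characters" are made-up
# 2x2 pixel patterns, not real letter images.

def predict(weights, bias, pixels):
    """Fire (1) if the weighted sum of the pixels crosses the threshold."""
    s = bias + sum(w * p for w, p in zip(weights, pixels))
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.5):
    """Adjust the weights by trial and error over known samples."""
    weights, bias = [0.0, 0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for pixels, label in samples:
            error = label - predict(weights, bias, pixels)
            # Nudge each weight toward the desired output.
            weights = [w + lr * error * p for w, p in zip(weights, pixels)]
            bias += lr * error
    return weights, bias

# Made-up training set. Pixels are row-major: [a, b, c, d] is the grid
#   a b
#   c d
# A lit column is class 1 ("|"); a lit top row is class 0 ("-").
samples = [
    ([1, 0, 1, 0], 1),  # left column lit  -> "|"
    ([0, 1, 0, 1], 1),  # right column lit -> "|"
    ([1, 1, 0, 0], 0),  # top row lit      -> "-"
]

weights, bias = train(samples)
predictions = [predict(weights, bias, pixels) for pixels, _ in samples]
print(predictions)  # -> [1, 1, 0]
```

As the comment notes, training too long or on too narrow a set risks over-learning: the weights fit the known samples perfectly but generalize poorly to variants of them.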

  3. Miracles

    If you have time for it, I would actually be very happy to read your details regarding letter recognition in the software and in the brain.

  4. rival
    Our conversation is over.
    I'm just suggesting you understand a subject before you argue about it. Reading a book by a populist is not knowledge and it is certainly not understanding.

  5. rival
    You keep assuming what you are trying to prove….
    With regard to Segev - you assume that he knows how the brain works, and if the model he builds does not work, the reason is a mistake in construction, not in understanding.

    And regarding letter recognition - you assume that the system in the brain that recognizes "a" is somewhat similar to software that recognizes letters. Well, I know exactly how software recognizes letters, and I can tell you with confidence that our brains don't work that way. If you want, I can elaborate, but it seems to me that it would be a waste of time for both of us.

  6. Miracles,

    I don't have the strength for long philosophical discussions even though I know you really like them. I will put it simply: when I want to copy something, there are two options. One is to copy it exactly, one to one - the same materials, the same size and so on, just like the original. The second, which is closer to our case, is to create a copy that is not completely identical to the original but still contains the important operating principles that exist in the original and create the phenomenon we are interested in. The copy may be made of other materials, be a different size and so on, but it will still contain the operating principles of the original. For example, a paper glider can fly like a hawk or a stork even though it is made of other materials and has a different shape, because it still embodies the principles of lift and the correct weight ratios.

    In the case of the Human Brain Project that Idan Segev is talking about, they are trying to create a replica of a biological brain made of other materials (silicon chips) and of a different size from our brain, but the hope is that the model will still contain the same important principles that allow consciousness to appear in our brain, and that therefore consciousness will arise in it as well.

    If they complete the model, activate it, teach it, and provide it with all the inputs that a biological brain receives, and still no consciousness forms in it, it means they probably missed something - that their model probably does not contain all the principles that cause consciousness to appear in our brain. Don't you think?

    Regarding your second question: do you think the little neural network in your brain that recognizes the letter 'A' is thinking? Do you think it understands the meaning of the letter 'A'? Whatever answer you give is also the answer for a neural network on a chip that recognizes the letter 'A'.

  7. rival
    Think of a handwriting recognition system that uses a convolutional network. It may work like a biological brain. Do you think it thinks? If not - please explain why not.

  8. rival,
    I finished. I think the reader understands my argument.
    By the way, if you got to 30 minutes and got bored, don't continue. I'm sure Penrose won't be offended

  9. Miracles,

    Note that I wrote "understood" in quotation marks; maybe you should change your glasses. And what, do you think that writing "Monte-Carlo" diminishes the strength of their achievement? You are invited to write an application that uses the Monte-Carlo method without neural networks and achieves what AlphaGo has achieved - let's see you do it.

    By the way, I wouldn't be surprised at all if a process similar to the Monte-Carlo method also takes place in our brains when we try to find a solution to a problem such as a decision to make a move in a game of chess or go.

    Regarding your nice airplane example, what are you trying to argue - that if the synthetic brain is not made of organic matter, consciousness cannot be created in it? Is that your claim?

  10. rival
    Regarding the quote from Segev's words:
    Hey rival, today I built a plastic model of a Concorde plane. But, unfortunately, for some reason it is unable to fly at Mach 2. So how would you react if I said it doesn't fly because I built it wrong? I hope you'd think I'm a jerk…

    I hope it is clear to you that I followed the instructions exactly and that this is not the problem. Do you understand now what I mean?

  11. rival
    AlphaGo didn't understand anything - don't kid yourself. It performed a Monte Carlo search of the game tree (i.e., part of the search process was random). The tree was pruned using training on a large number of games.
    Think of handwriting recognition software - it too uses a convolutional network. Do you think such software understands what the letter "a" is?
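The Monte-Carlo idea mentioned here - judge a move by the win rate of many random playouts from the position it leads to - can be shown on a toy game. This is a hypothetical mini-example using Nim (take 1-3 stones; whoever takes the last stone wins), not Go, and it omits the trained networks AlphaGo used to prune the tree; only the random-rollout part is illustrated.

```python
import random

# Monte-Carlo move selection on a toy game of Nim: players alternately take
# 1-3 stones; whoever takes the last stone wins. A move is scored by playing
# many completely random games from the position it leaves behind.

def rollout(stones, player):
    """Play the rest of the game at random; return the winner's index."""
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return player          # this player took the last stone
        player = 1 - player

def best_move(stones, n_rollouts=3000):
    """Score each legal move for player 0 by random rollouts."""
    scores = {}
    for move in range(1, min(3, stones) + 1):
        remaining = stones - move
        if remaining == 0:
            scores[move] = 1.0     # taking the last stone wins outright
            continue
        # After our move the opponent (player 1) moves; count the rollouts
        # in which the eventual winner is us (player 0).
        wins = sum(rollout(remaining, player=1) == 0
                   for _ in range(n_rollouts))
        scores[move] = wins / n_rollouts
    return max(scores, key=scores.get), scores

random.seed(0)                      # fixed seed so the demo is repeatable
move, scores = best_move(5)
# With 5 stones, taking 1 (leaving a multiple of 4) gives the best win
# rate against a random opponent, and the rollouts discover this without
# ever enumerating the full game tree.
print(move)
```

The point the comment makes survives in miniature: the program picks strong moves from statistics over random playouts, with no representation anywhere of *why* the move is good.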

  12. Shmulik,

    1. It is true that something special happens in the cerebral cortex: the structure of the cortex, and the connectivity between its different areas, is what ultimately enables the formation of consciousness.

    Here, from Wikipedia: "The human cerebral cortex is the focus of conscious thinking in the brain. It handles high cognitive operations, such as speaking, understanding language and making decisions."

    This does not contradict my claim that the number of neurons in the cerebral cortex is of crucial importance. The human cerebral cortex, for example, is 3 times larger than a monkey's; why is it hard for you to accept that this is the reason our awareness is higher?

    2. I wrote this and you ignored it: the fact that anesthetizing a small part of the brain makes awareness disappear does not mean that consciousness resides there, in the small piece of the brain that was anesthetized. It's as if I blocked the air-supply pipe to a car's engine, the engine stopped running, and I then told you that I have proved that the point where I put the block is where the engine's power resides. Again, it could be that the anesthetized area is an important junction that allows important signals to reach the cerebral cortex so that consciousness can form there. Is that clear now?

    3. Listen, I really have no idea what AlphaGo's level of consciousness is and I have no reference to give you; I was just hazarding a guess because you wouldn't let it go - I thought that was obvious. In a neural network as small as the one I assume exists in AlphaGo, I do not expect consciousness to develop to a level that can be felt or measured, nor do I know how to measure, say, the level of consciousness of an ant.

    4. You ask me to give you a reference but don't do it yourself, what is a "very large" amount of transistors? How many neurons do they simulate? Thousand? one million? fifty million? Forgive me but I'm not supposed to guess from the computer's energy consumption how many nerve cells it simulates, a transistor consumes much more energy than a neuron in the cerebral cortex, if you don't have a number to give me then you don't have a case and it's all guesswork.

    5. Believe me, I'm not missing any point; you are. How many times can I explain to you that AlphaGo did not achieve its victories through raw computing power? AlphaGo "understood" the state of the board by way of intuition and deduced from that how it should act, which completely demolishes your argument. At every stage of the game, AlphaGo had at its disposal an effectively infinite number of possible moves and counter-moves by the opponent; if it relied on calculation power as you claim, it would not be able to decide which move to make even in a thousand years.

    6. I started watching the lecture. I've already seen 30 minutes of it, and it's deadly boring; I hope I can make it through the other 60 minutes. It's a bit funny to me that a person whose specialty is physics talks about topics that broadly belong to biology and brain research; it doesn't seem serious to me.

  13. rival,
    1, 7, 3 – I think you're missing the point. All this means that critical mass is not the answer, but rather the number of neurons serving a certain function. If our advantage over the whales is in the number of neurons in the *cerebral cortex*, it means that something special is happening in the cerebral cortex. Not all of the brain is relevant to consciousness, because there are situations in which parts of the brain are destroyed with no damage to cognitive ability. On the other hand, if it is enough to anesthetize a certain area for consciousness to disappear, this implies a certain area that is responsible for consciousness. Hence, apparently, it is not (only) the size that matters; the function of this area is (also) critical for the existence of consciousness. That's the main point I'm trying to get across.
    2 -
    4, 5 – What do you mean, "satisfied"? This is not that kind of argument. Throwing out words? Please give a reference for your claim that it has the mind of a cockroach. Here's what I found about the number of processors; I thought you would understand that with such a large power consumption, there are many, many transistors at work:
    6 – This is an important point that I think you are missing. Penrose's riddle could also be posed in Go; let's assume it is in Go, because I think the switch to chess is a bit misleading. AlphaGo is designed to win the game of Go. Before us is a puzzle designed to distill the difference between types of thought. How? By overwhelming what a computer does best, which is calculation - in this case, to infinity, because there is no way to beat the position by sheer quantity of calculations. We solve the position by *understanding* that none of these calculations are needed to conclude that it is a stalemate (I have no idea if there is a stalemate in Go), and computers (at least according to Penrose) fail to meet the challenge.
    8 - ok

  14. Shmulik,

    1. How exactly does anesthetizing a small area of ​​the brain that eliminates awareness contradict my claim? Maybe this area is a critical crossroads that transmits electrical signals to the rest of the brain so that awareness can be created in them?

    2. "It is true that he said that if a complete modeling of the brain is carried out, awareness will be created, he said and you believe every word. Is there a scientific article?'

    This is a statement based on common sense, please read the answer I gave to Nisim on the topic with the engine example.

    3. "He is not only talking about critical mass but about modeling the neurons and their spatial structure"

    He wants to build a model that is as similar as possible to the original, so that it is possible to compare what happens in the model (for example in the case of diseases) with what happens in a real brain. What is wrong with that? He could also simulate the connections without the spatial structure, but then it would be harder to compare with the original, which is in 3D.

    4. "You do change your mind about Alpha, in one place you wrote that Alpha Go works similarly to our brain, by way of intuition, and in another place you decided that Alpha Go has no consciousness, certainly not one that approaches that of a human."

    Does a neural network have to have human awareness in order to have intuition? Where did I say such a thing? What, a mouse has no intuition? And to your question: AlphaGo has the level of awareness of an ant, or a cockroach, okay? Are you satisfied now?

    5. "AlphaGo doesn't 'just' have a neural network; it has a complicated neural network that consumes 50,000 times more electricity than the human brain."

    Tell me, this is what makes a difference here, how much electricity does it consume?? How many neurons and how many connections are there in this network? Isn't this a question a bit more relevant to our discussion?

    6. "This is where Penrose comes into the picture... He gives exercises related to infinity." The chess puzzle he brought is like that and it is much more problematic than the game of Go because it really goes to infinity. No amount of computational ability will bring you the solution because the game is stuck and there are endless moves that can be made that will not bring you a millimeter closer to the correct solution."

    Shmulik, the number of moves that can be made in the game of Go is greater (by many orders of magnitude) than the number of particles in the universe! For all practical purposes, the moves are infinite! Even if AlphaGo were given a million years, it would not be able to calculate even a fraction of the possible moves ahead! So how did it win anyway? How did it manage to make such surprising moves that led it to victory without doing endless calculations? Doesn't that knock down your whole argument?

    7. Regarding the whale, I don't understand what is so hard for you to understand. Did you watch Suzana's lecture? The cerebral cortex of a whale has far fewer neurons than our cerebral cortex! That is a fact! So how does this contradict my argument that more neurons and more connections between them = a higher level of consciousness? Where is the contradiction?

    8. I think I pretty much understand (in principle) Penrose's point, but let me watch the lecture first so we're more in sync, okay?

  15. Nissim,

    Tell me, do you have to be a great scientist to understand that if you built an engine and it doesn't work, it means you didn't really build it? If our brain produces consciousness, and you built a replica of a brain in which no consciousness formed, doesn't that tell you that something in your model is probably wrong? What is wrong with his statement? What exactly are you complaining about?

  16. rival
    You wrote "and says clearly and emphatically that the neural network they build will inevitably also create self-awareness, because if not it means we didn't build it correctly."

    Seriously? Is this a scientist's statement? Segev claims he knows for sure what consciousness is? He's even worse than I am, isn't he? 🙂

  17. rival,
    Just to be clear, Penrose's argument, which I am trying and failing to convey, is completely physical, not mumbo jumbo, and I am entirely with Idan Segev in thinking that everything is physics and there is no spirit, soul, or any other such stuff.
    My main contention is that awareness will not emerge just because we reach a critical mass of neurons, and I have demonstrated this over and over again. Human awareness is also not just a matter of critical mass, and every link you send me only strengthens my conviction. In the last link you sent me, the professor says that if he neutralizes a small area, awareness disappears. Well? A small area, not the whole brain. How does that square with your view? It is true that he said that if a complete modeling of the brain is carried out, awareness will be created (he said it, and you believe every word; is there a scientific article?), but he too is not talking only about critical mass but about modeling the neurons and their spatial structure.
    You do change your mind about AlphaGo; go read what you wrote. In one place you wrote that AlphaGo works similarly to our brain, by way of intuition, and in another place you decided that AlphaGo has no consciousness, certainly not one that approaches a human's. That way you can't have a discussion. Does it have awareness or not? A drop of awareness? Like a jellyfish? Less? More? Still, it managed to beat a human in a very complex game. AlphaGo doesn't "just" have a poor neural network; it has a complicated neural network that consumes 50,000 times more electricity than the human brain. So is there something there, or is it as aware as a stone? Why does this matter? Because if there is awareness, if you have cracked this issue, then improving it further is easier, and then really, the more you scale it up and optimize it, the better you will be able to model the brain. But my argument is that not even the first step has been taken here.

    This is where Penrose comes into the picture, and below is the gist of his argument. A neural network implemented on a computer with current technology can be modeled by a Turing machine, and if so, it is limited due to Gödel's incompleteness theorem (the Turing machine and its limitations are completely within the scientific consensus, and there are endless articles on the subject). Humans are not so limited. As an example, he gives exercises related to infinity. The chess puzzle he brought is like that, and it is much more problematic than the game of Go because it really goes to infinity. No amount of computational ability will bring you the solution, because the game is stuck and there are endless moves that can be made that will not bring you a millimeter closer to the correct solution. A chess player who barely understands the rules will immediately *understand* the position and tell you the answer without doing the endless calculations.

    This elusive feature, *understanding*, he claims will never be precisely modeled, because of such cases of walking off to infinity. Even if you have infinite, literally infinite, computational power, you won't be able to solve a certain set of problems that people seem to have no trouble with. I think you missed the point of this problem. It's not just a game of chess versus a game of Go. Penrose could probably have chosen a parallel example from the game of Go (I don't know), but I guess he preferred a game that many people know, so that many people could try to find a counter to his argument.
    Hence, from Penrose's point of view, Idan Segev is wrong at the most fundamental level possible. As mentioned, Penrose's argument has many opponents, who attack even the basis of his argument (see the Wikipedia page), but you did not raise any of those objections; you only continued to send me links that not only do not serve your argument but contradict it.
    By the way, I didn't express wonder at why the whale has less awareness; I was responding to sentences of yours like this one: "There is a small pile of sand that is made of three grains, and there is a big pile of sand that is made up of a million grains. Similarly, the more neurons and the more connections there are in a neural network, the higher the awareness that will be created in it. Our brain is three times bigger than a monkey's brain, three times bigger." Fine. Such an argument invites the objection "but the whale has a bigger brain." Why does the fact that it has a bigger body ruin its awareness? Maybe because it's not just about a critical mass, but about a certain mass of neurons, their connections, and their concentration in a specific place entrusted with a specific activity? Wonder of wonders.

    It is clear to me that you will respond now and I will respond to you, and so on back and forth, as if you held shares in Segev and I in Penrose, but nothing will progress. Could you at least write whether you understood the gist of Penrose's argument? Because I did take the trouble to check, understand, and be impressed by the links you sent me.

  18. And I sent you to a three-and-a-half-minute segment in which Idan Segev specifically addresses the issue of awareness and says clearly and decisively that the neural network they are building will inevitably also create self-awareness, because if it doesn't, it means we didn't build it correctly.

  19. I told you it would take me about a week to get to the lecture you gave me; is there no patience in these parts?

  20. Shmulik,

    I don't understand why you have such a hard time understanding me. I sent you to the lecture because:

    1. To express amazement that whales have a bigger brain than ours and still don't have the abilities we have.

    2. You said that it is not clear to you why there is such a big difference between the mental abilities of monkeys and humans.

    Did the lecture not answer your questions?

    I sent you to a short segment that talks about mathematical models of brain activity (Idan Segev's lecture) after you claimed that there are many mathematical models that try to explain how the brain works, so I brought you an example of a mathematical model that pretty much proves itself.

    And again I repeat myself: when I talk about critical mass, I am talking about clear, high awareness like that of humans! Other animals like monkeys, cats, and mice also have some awareness, but it is not at all close to ours.

    Alpha-Go's victory has nothing to do with the critical-mass argument, and I have nowhere claimed that Alpha-Go has an awareness that even approaches that of a mouse. It's a tiny neural network that manages to defeat humans in one very specific area that was considered exclusive to the human brain; that's the truth, so what escape is there here? Where am I running? And where did I say that in order for a person to have awareness he needs to know an infinite number of things? I imagine you know a few more things than Alpha-Go does, and you have a "slightly" bigger neural network; doesn't that mean anything to you? Doesn't it have some effect on cognitive abilities and level of consciousness?

  21. rival,
    The lecture itself is very beautiful, even though the ideas there are not new. For example, Michael Pollan talks about these issues in his books; he describes another trade-off, between the number of neurons in the brain and the number of neurons in the digestive system, but that is beside the point. The lecture is beautiful and interesting but not relevant to our issue, which is (in my opinion) the origin of consciousness and whether computers will be able to model the human brain. I sent you to check a source that might tell you the opposite of what you think, and the only thing you don't do is check it.

  22. rival,
    Why did you send me to this lecture?
    The only way this lecture is relevant to anything is if you claim that only humans have consciousness.
    If that is your claim now, then it contradicts your story about the ant and the mouse, and if that story disappears, where do you get the idea of critical mass from? And if your story is no longer relevant, why did you send me to a lecture on mouse modeling? The lecture is completely irrelevant to the discussion because it does not deal with the source of awareness; it explains something that was never in dispute, namely that humans are special because of the ratio of weight to neurons in a certain area of the brain. So what?

    Alpha-Go. You are wrong, Rival, and you constantly change your mind. You tell us that Alpha-Go is not specifically programmed, but then, when I confront you with the idea of critical mass and use its victory to show that there is a critical mass here, you run away and say that the computer is just a Go-playing machine. Make up your mind. And even if it's just a Go-playing machine, so what? Maybe, by your method, it has Go-consciousness? I, like AlphaGo, know a limited number of things, much smaller than infinity. Am I unconscious too?

    So, as mentioned, I am more and more convinced that the source of consciousness is not merely a critical mass of neurons.

  23. Nissim,

    Did I say or imply somewhere that AlphaGo has awareness, or areas that handle language? But there are neural networks that do handle different aspects of language and speech; it's not some unattainable thing that can't be simulated. If you don't know them, I can give you some links on the subject.

  24. rival
    There is indeed a claim that what distinguishes the human brain is the "language instinct" (Steven Pinker's phrase). I have already mentioned two areas in our brain that handle language (Wernicke's and Broca's areas). These areas were discovered through people who were injured, in whom the change in behavior could be diagnosed. Today we know how to see these areas with the help of MRI - but the resolution is in cubes of 2x2x2 mm, and even that involves a lot of guesswork.

    Do you think these areas exist in AlphaGo?

  25. For some reason the link was cut in the middle, search on Google:

    "What is so special about the human brain?"

    and click on the first link in the list of results.

  26. I don't understand why the link was cut, try again:

  27. Shmulik,

    It is clear that DNA also plays a part (after all, it dictates the size of the brain and the rich connectivity within it), and it is clear that the neuron itself does too, as does life experience, without which no consciousness will be created in the network; I have not said anything that contradicts this.

    But to be convinced that it is a matter of critical mass, and to understand why our brain is more intelligent than a whale's even though the whale's brain is bigger than ours, watch this short lecture:

    I think it will help convince you.

    AlphaGo has no consciousness, certainly not one that approaches a human's, because it is a tiny neural network that specializes in one thing only: playing Go, nothing else.

  28. rival,
    Whales have a bigger brain than ours. Within the human race there are huge differences in how intelligence is expressed, despite similar brain sizes between individuals. It is not only a matter of quantity but a combination of all the details together, starting with the DNA and the neuron itself and ending with life experience. I'm not convinced that it's only a matter of critical mass; if anything, this discussion has made me realize that it's really not a matter of critical mass.
    Alpha-Go beat Lee Sedol (or whatever he's called) in a game of Go. Alpha-Go has greater computational ability than its human competitor. Alpha-Go has a critical mass. Does Alpha-Go have consciousness?

  29. rival
    The sensor has a feedback circuit - it responds to changes in the environment.
    AlphaGo also has feedback circuits - therefore it has consciousness. Where did I say it has no consciousness? What I am saying is that its consciousness does not even resemble the consciousness of a human.

  30. Nissim,

    1. You keep claiming that the sensor in your car has consciousness.

    2. You claim that the Alpha-Go neural network has no consciousness.

    Is this consistent in your eyes? If the small sensor in your car has "consciousness," why does a neural network a million times more complex not have consciousness? What does the sensor in your car have that Google's neural network doesn't?

  31. rival
    I keep saying the same thing.
    1) Consciousness is a feedback loop process. If there is a feedback loop then there is consciousness.
    2) Human consciousness is indeed built on feedback circuits, but it is complex far beyond our ability to understand today. I claim that we have no reason to believe that we are close to building a machine that will develop, on its own, a human consciousness. It is much more likely that whatever develops will not be like us...

    Where am I being inconsistent?
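    To make point 1 concrete, here is a toy sketch of the bare feedback loop I mean: a hypothetical thermostat that senses, compares, and acts. The scenario and numbers are mine, purely for illustration; it shows the sense-compare-act cycle and nothing resembling human consciousness.

```python
# Toy feedback loop: a thermostat-like controller (hypothetical example).
# It only demonstrates the bare "sense -> compare -> act" cycle.

def feedback_step(temperature, setpoint, heater_on):
    """One cycle: sense the temperature, compare to the setpoint, act."""
    if temperature < setpoint - 0.5:
        heater_on = True       # too cold: switch the heater on
    elif temperature > setpoint + 0.5:
        heater_on = False      # too warm: switch it off
    return heater_on

def simulate(steps=50, setpoint=20.0):
    """Run the loop; the room settles into a band around the setpoint."""
    temperature, heater_on = 15.0, False
    for _ in range(steps):
        heater_on = feedback_step(temperature, setpoint, heater_on)
        # crude environment dynamics: heater warms, the room slowly cools
        temperature += 0.3 if heater_on else -0.2
    return temperature

print(round(simulate(), 1))  # ends near the 20.0 setpoint
```

    The layers I listed (bacteria, worms, and so on) stack more and more such loops on top of one another; the claim is about the stack, not about any single loop.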

  32. Shmulik,

    Watch the video, it's a short clip of 3 and a half minutes in total.

    Idan Segev strongly asserts (and I completely agree with him) that everything in our brain is electrical and chemical activity, and that this is what creates everything, including consciousness and emotions. The Internet really does not work like a neural network (it is a network, but not with the structure of a neural network), and therefore there is no reason for consciousness to arise there.

    "In light of the lectures I've heard, it's clear that there is much more to the brain than the sum of all the currents that run there. If it were so, there wouldn't be much difference between monkeys and humans."

    There is a small pile of sand made of three grains, and there is a big pile of sand made of a million grains. Similarly, the more neurons and the more connections there are in a neural network, the higher the awareness that will be created in it. Our brain is three times bigger than a monkey's brain, three times! Don't you think that makes a "little" difference and can explain why our consciousness is higher than a monkey's? Or why we have higher intellectual abilities? In my opinion, this explanation suffices, and it eliminates the need for the quantum explanation.

    The same genome in our body that creates the brain also creates the liver and the heart, there is no reason to assume or think that something unusual is happening in our brain that does not happen in other organs of the body.

    (I will watch your friend Penrose's lecture, but it will take me about a week to get to it, patience)

  33. rival,
    I will watch.
    There is a problem with defining consciousness because it is a bit like defining when a heap becomes a heap, and therefore it is very difficult to model. I understand why the approach of measuring voltage on the membrane and thus modeling the brain is so attractive: we really excel at measuring voltages and currents, but that's not enough. In light of the lectures I have heard, it is clear that there is much more to the brain than the sum of all the currents that run there. If that were all, there wouldn't be much difference between monkeys and humans.

    In short, I'm trying to tell you that your approach is very simplistic. You can think of the Internet as a monster with enormous computing power. Why does it have no consciousness? Something more is needed. Whether Penrose is right or wrong, his claim is impressive in its audacity: basically it says that consciousness already begins in a single neuron, and quantum mechanics is needed to explain what happens there. Quantum processes take place in photosynthesis, so maybe in the brain as well. It is clear to me that computers will progress and improve at operations we associate with intelligence, but that is not the same thing, and not because we have not yet reached the critical mass.

  34. I haven't found time yet to watch Penrose's lecture, but I did read a few articles on Google (in Hebrew) about his theories.

  35. Shmulik,

    In the segment I referred you to, he only talks about the mathematical models that form the basis of the neural network they are building; he addresses the topic of consciousness in other lectures. I will find a link and send it to you later.

    In any case, the piece of brain they have simulated so far (perhaps half or a quarter of a mouse's brain) is still too small for any kind of consciousness to be discernible in it; it will take more time until we reach the level of consciousness of a human, or even a monkey.

  36. rival,
    I watched. Very impressive. Nowhere up to the segment I saw does he talk about consciousness. It is a beautiful model of the mouse brain (or part of it) that simulates the electrical activity of the neurons, but even he does not claim that consciousness was created here; rather, he has a model that will make it possible to develop treatments for epilepsy and more. The brain is the total electrical activity of the cells and that's it? That may be your argument, but it is not his. Incidentally, there is no contradiction here with the Wikipedia page, and I don't know how many papers they have published. I guess that if there had been a breakthrough here, it would have been documented in Wikipedia.
    Did you see Penrose's lecture?

  37. walking dead,

    Likewise, I also do not understand what his problem is with my view that the complexity of the matter (shaped as it was by evolutionary constraints) is much lower than he estimates.

  38. rival

    It doesn't matter; if you think he means one thing when he means another (and vice versa), anything you write will be a colossal waste of your time and energy.

    I don't understand what your problem is with Nissim thinking the complexity of the whole thing is much higher than you anticipate?

  39. walking dead,

    I don't know if it will help; he is locked onto the idea that it is too complex, too complicated, that we don't know and don't understand... In my opinion, he greatly exaggerates the difficulty of the problem.

  40. Nissim, by the way, it's a little funny that someone who claims that the temperature or air-pressure sensor in his car is self-aware also claims that something much more brain-like, such as a neural network in a computer, has no consciousness at all. You are a bit inconsistent in your claims.

  41. Nissim,

    In a network with such a small number of neurons I also do not expect consciousness to develop, and I think I am repeating this for the thousandth time.

  42. rival
    Not just a learning process - the system was built specifically to play Go. Specific algorithms for games of this type (Monte Carlo tree search, in this case) were implemented there. The algorithm itself was calibrated with the help of a learned neural network.
    It is in no way similar to the way a human plays the game! It is similar, perhaps, to the way we learn to ride a bicycle, or to play the piano.
    And beyond that, this has nothing to do with the concept of consciousness. AlphaGo doesn't know it is playing Go...
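    To illustrate just the Monte Carlo idea with a toy (my own example, not AlphaGo's actual algorithm, which combines tree search with learned policy and value networks): value each candidate move by random playouts and pick the one that wins most often. Tic-tac-toe stands in for Go here.

```python
# Toy "flat" Monte Carlo move selection for tic-tac-toe.
# AlphaGo's real method (MCTS guided by trained networks) is far richer;
# this shows only the core idea: judge a move by random playouts.
import random

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def playout(board, player):
    """Play uniformly random moves to the end; return the winner or None."""
    board = board[:]
    while winner(board) is None and "." in board:
        empty = [i for i, v in enumerate(board) if v == "."]
        board[random.choice(empty)] = player
        player = "O" if player == "X" else "X"
    return winner(board)

def best_move(board, player, n_playouts=200):
    """Pick the move whose random playouts win most often for `player`."""
    other = "O" if player == "X" else "X"
    def score(move):
        wins = 0
        for _ in range(n_playouts):
            b = board[:]
            b[move] = player
            if playout(b, other) == player:
                wins += 1
        return wins
    moves = [i for i, v in enumerate(board) if v == "."]
    return max(moves, key=score)

# X (squares 0 and 1) has an immediate win at square 2; playouts find it.
print(best_move(list("XX.OO...."), "X"))  # prints 2
```

    The "calibration" mentioned above means that instead of uniformly random playouts, AlphaGo's playouts and move priors are shaped by networks trained on games, which is what makes the search tractable on a board as large as Go's.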

  43. Shmulik,

    Did you watch the segment I directed you to from Prof. Idan Segev's lecture? What did you think of their mathematical models?

  44. Nissim,

    "I understand that you think they threw a billion synthetic neurons in a box and the box suddenly asked to play Go... nice?"

    Obviously there was a learning process there first; where did I write anything that contradicts that?

  45. Shmulik
    Consciousness is like a gas. If you look at a single molecule, you won't see temperature, pressure, Reynolds number, or density, will you?
    I think consciousness is a big collection of simpler and less simple feedback processes. Like the properties of a gas, the properties of consciousness emerge from complexity. But unlike a gas, there are many layers in the human mind.

    Perhaps it is more correct to take the climate as an analogy for our consciousness. If you simulate a trillion molecules of nitrogen, oxygen, and water vapor, maybe you will get a certain climate - but only Rival could think that we will get the climate of the Earth 🙂

  46. Nissim, Rival,
    Nissim, of course this is what is special about the chess puzzle. You immediately realize that White can move so that he will never lose, even though he is at a huge material disadvantage. A player's ability to understand that an infinite number of moves will not change the situation is Penrose's way of defeating the much higher computational ability of computers. I assume that soon we will see papers claiming to defeat Penrose's argument, and that is great, because this is how science progresses.

    Rival, I do not subscribe to your general approach, which says "a little more computing power and consciousness will emerge." It doesn't even fit your example of the ant and the mouse, because there the ant already had consciousness, and then more "consciousness stuff" leads to a more complex consciousness; but that also means there was already consciousness in the initial state. So, unless there is consciousness in the most basic component of the computer, and I don't think there is, no critical mass of computational ability will cause a computer to develop consciousness. Obviously, as the field gains more experience, the performance of computers will keep improving. They will pass the various Turing tests better, but without a revolutionary idea it seems to me that the improvement will be quantitative and not qualitative, and maybe it won't matter.

    Nissim, saying that a toaster has a consciousness of one bit doesn't tell me anything about what consciousness is, because I don't understand how many bits of consciousness a person represents. Can you clarify your explanation?

  47. rival
    So you really didn't understand.
    I understand that you think they threw a billion synthetic neurons in a box and the box suddenly asked to play Go... nice 🙂

  48. Nissim,

    What is there to understand here? When you tell me that the only connection between a neural network and how our brain works is that both are made of atoms, that is either naivety or a complete lack of understanding. What, you don't know that computer neural networks were inspired by how the biological brain works? And I suppose the Human Brain Project is called that just because it's a cool name, right? The neural network they are building has nothing to do with how our brain works, right?

    Regarding your thought experiment, there is already a much better example: Google's AlphaGo neural network, which solved a much more complicated challenge than Shmulik's chess puzzle. In a much more difficult and challenging game, it made moves that no one thought were worth making, moves that led it to victory a few steps later. Is that not good enough for you? Why should I dwell on the chess puzzle when a neural network has already managed to solve a much harder one?

    Regarding missing layers and consciousness: in the Human Brain Project they are trying to simulate all the parts, not just the cerebral cortex, and they intend to connect this neural network to a body that will move around in the world. On what basis do you already determine that it won't work? And by the way, until we build a neural network the size of our brain, you have no way of knowing what is necessary for the existence of consciousness and what is not; it's all guesswork.

  49. rival
    I don't think you understand what I'm talking about at all.
    The first issue is that the brain is a hierarchical system, a large part of whose layers we do not yet know.
    The second issue is that even the one layer you think is everything, we simulate only at a very simple level, and whole layers are missing.

    The third issue is that I certainly agree that a network of neurons can develop consciousness - but we have no reason to think it will be anything close to human consciousness.

    Let's do a thought experiment, following the chess example brought by Shmulik. I want you to suggest a method to check how my brain solved the problem (assuming my solution is correct - Shmulik?). Oh, and suppose I only speak Old Akkadian.

  50. Haim, I think I completely agree with you. I too would be willing to sign up for eternal life only if it were possible for me to end it whenever I wanted.

  51. Another aspect that the article misses:
    I once asked myself the following question: if an authorized entity (say, "God") were to come to me and offer the following proposition:
    I offer you eternal life. No tricks. Good health, freedom. Plenty of beautiful women. On one condition: you will not be able to go back on your choice. You cannot commit suicide.
    At first, the heart's inclination is "Where do I sign?"
    But on second thought, forever is a very long time. A million years is something monstrous, and even that looks like a grain of sand next to eternity.
    All the challenges that keep me awake today, working, vital - all of these will gradually disappear and lose their meaning. Everything starts to get boring, because the innovations, if there are any, will also be boring. Even the promised beauties (more than 70!) will eventually lose their appeal. We are not built for eternal life, even if we adapt our memory to it.
    In the article, the doctor's wife gives up eternal life in order to experience death. But it seems to me that there is a mistake here. The process of death is not interesting; there are creatures that die suddenly, without experiencing anything. The important point is that eternal life would cause the loss of the desire to live. Life would be hell.

  52. Nissim,

    It's really a shame that you don't understand the connection, but let me help you; here is a quote from Wikipedia:

    "An artificial neural network (ANN-Artificial Neural Network), a neuron network or a connection network is a computational mathematical model that allows the simulation of brain or cognitive processes, and of processes that occur in a natural neural network."

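    To make the quoted definition concrete, here is a minimal sketch of an artificial neuron and a tiny two-layer network. The weights are arbitrary illustrative numbers, not a trained model:

```python
# A minimal artificial neuron: weighted inputs, a bias, and a nonlinear
# activation. This is the basic computational unit the quoted definition
# refers to; the weights below are arbitrary, for illustration only.
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs passed through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # squashes the result into (0, 1)

def tiny_network(inputs):
    """Three inputs -> two hidden neurons -> one output neuron."""
    h1 = neuron(inputs, [0.5, -0.6, 0.1], 0.0)
    h2 = neuron(inputs, [-0.3, 0.8, 0.2], 0.1)
    return neuron([h1, h2], [1.2, -0.7], -0.2)

print(tiny_network([1.0, 0.0, 1.0]))  # some value strictly between 0 and 1
```

    Learning, in such a model, means adjusting the weights until the outputs match examples; the analogy to the brain is the connectivity pattern, not the biochemical detail of real synapses.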
  53. rival
    What is the connection between a neural network that plays chess and how our brain works? Both have atoms?
    And what is the connection between chess and consciousness?

  54. Nissim,

    First of all, you meant "Shmulik" and not "rival".

    "The programming solution of simulating steps forward is not the only solution, in my understanding. On the other hand, it is difficult to think of any algorithm that would have solved the riddle."

    Hard to think of an algorithm? How about a neural network?

    Again, the AlphaGo neural network did not win the game by simulating steps forward but in a way that is much more similar to how our brain does it. I won't be surprised at all if it can solve such a puzzle too, if it is taught chess the way it was taught Go.

  55. rival
    Nice chess exercise 🙂 If I'm not mistaken, Black can only move his bishops, and they can't block the king or threaten it as long as it stays on white squares. White, for his part, can always move the king, and should only move the king.
    I'm not sure a computer can't reach that conclusion on its own. The programming solution of simulating steps forward is not the only solution, as I understand it. On the other hand, it is hard to think of any algorithm that would have solved the puzzle.

  56. rival
    Theoretically, I agree that the brain can be isolated. But... there are a lot of feedback loops between the brain and the body that you will also need to simulate. Parkinson's disease is (in part) a disruption of such a feedback loop. Phantom pain is damage to another part of the circuit. The balance system, which is controlled by the brain, also includes feedback loops. Motion sickness is yet another phenomenon of damaged feedback circuits.

    Without fully understanding all of these, and all the other circuits that I don't know, and even some that no one knows today - think about the suffering this mind would find itself in...

  57. rival
    That's exactly the point! You've gone from creating consciousness in a synthetic system to simulating consciousness. What you should have said is that the synthetic neural network would somehow develop, on its own, a system comparable to the human hormonal system.
    On the one hand, you say that consciousness will develop on its own because of the structure of the hardware. On the other hand, you add to the system things that are missing in the hardware. It's just not plausible.

  58. Shmulik,

    I looked at your link even before I looked for Idan Segev's lecture. All the mathematical models presented there are a bit beyond me, but one thing interests me, and that is the test of results. In their lectures, Idan Segev and Henry Markram present a number of beautiful examples of electrical activity obtained from the digital brain piece they created on a computer (which includes hundreds of thousands of mathematically modeled nerve cells), and the activity obtained there is strikingly similar to what is measured on a biological brain slice in the laboratory.

    So I'm saying a simple thing: if their mathematical models were so wrong, they wouldn't get such beautiful results, so consistent with reality; it means their direction is probably right.
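    For readers who want a feel for what "mathematically modeled nerve cells" means at the very simplest level, here is a toy leaky integrate-and-fire neuron. The Blue Brain models are far richer (multi-compartment, Hodgkin-Huxley-style ion channels), and the constants below are generic textbook-style values, not theirs:

```python
# Toy leaky integrate-and-fire neuron: membrane voltage v (in mV) leaks
# toward rest, is driven by an input current, and emits a spike whenever
# it crosses threshold. All constants are generic illustrative values.

def simulate_lif(input_current, steps=200, dt=1.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0,
                 tau=20.0, resistance=10.0):
    """Return (number of spikes, final membrane voltage)."""
    v, spikes = v_rest, 0
    for _ in range(steps):
        # leak toward rest plus drive from the injected current
        v += (-(v - v_rest) + resistance * input_current) * dt / tau
        if v >= v_thresh:   # threshold crossed: spike, then reset
            spikes += 1
            v = v_reset
    return spikes, v

print(simulate_lif(2.0)[0])   # strong drive: several spikes
print(simulate_lif(0.5)[0])   # weak drive: stays below threshold, 0 spikes
```

    The "test of results" described above is exactly this kind of comparison, simulated voltage traces checked against recordings from real tissue, just at vastly greater scale and fidelity.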

  59. Nissim,

    "Rival claims that a reduction, and only one reduction, can be made from a person's consciousness to neurons. It doesn't matter if there are additional layers along the way, and the levels below individual neurons don't matter. I don't accept that."

    In addition to the question I asked you in the previous message (I would be happy for your answer), I have another question. Suppose we were to isolate the cerebral cortex of an adult in a jar, without a body, without a cerebellum, without any of the other parts, but we continued to feed it exactly the same signals and hormones that flowed into it before from the different parts of the body. Do you think this cerebral cortex would have consciousness? What kind of consciousness? That of an ordinary person? A lower consciousness? What would it feel about itself?

  60. Nissim,
    Do you understand my problem?
    On the one hand you explained to me what consciousness is in humans, and on the other you wrote that a toaster has a consciousness of one bit; I don't understand how the same word can refer to both a person and a toaster.

    Regarding the chess puzzle, go to the Viant link I gave on the previous page and take a look. What is beautiful is that he calls on the general public to attack the problem and try to defeat it. It will be interesting.

  61. Miracles,

    I don't see anything that prevents us from simulating the effect of adrenaline on the synapses in the network; the question is, will the network subjectively feel fear? How, for example, can you prove to me that a neural network in the brain of a mouse feels fear when it sees a cat approaching?
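
    The "nothing prevents us from simulating it" half of that claim can be sketched in a few lines. This is a purely illustrative toy, not a biophysical model: the `adrenaline` parameter is a hypothetical global gain knob, loosely mimicking how epinephrine raises excitability.

```python
import math
import random

def step(weights, inputs, adrenaline=0.0):
    # `adrenaline` is a hypothetical 0..1 neuromodulation knob that
    # scales synaptic gain, loosely mimicking epinephrine raising
    # excitability. Illustrative sketch only.
    gain = 1.0 + adrenaline
    return [math.tanh(gain * sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

random.seed(0)
weights = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]
inputs = [random.gauss(0, 1) for _ in range(4)]

calm = step(weights, inputs, adrenaline=0.0)
scared = step(weights, inputs, adrenaline=1.0)
# Same stimulus, stronger response under "adrenaline" -- but nothing
# here tells us whether the network *feels* fear, which is the point.
```

    The mechanics are trivially simulable; whether the stronger response is accompanied by any subjective feeling is exactly the question the code cannot answer.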

  62. Shmulik
    The bits do not represent information, but feedback circuits. There is no direct connection to neurons. Bacteria also have feedback loops - heat, chemicals, gravity, electric fields and so on. They also (I assume) have internal feedback loops, like hunger. In simple creatures like worms there are additional circuits that are realized by neurons. More complex organisms have systems of neurons, what Hofstadter calls "symbols". Michio Kaku talks about even higher circuits - whose role is to simulate the future - and here lies human consciousness.

    Rival claims that a reduction, and only one reduction, can be made from a person's consciousness to neurons. It doesn't matter if there are additional layers along the way, and the levels below individual neurons don't matter. I don't accept that.

  63. waiting

    I have a problem with the statement "one-bit consciousness". It makes the concept of consciousness redundant, or empties it of meaning. I will explain with a few questions:
    How many bits represent a neuron?
    How many bits of consciousness does a person have?
    Whales have a bigger brain, and I guess that means more neurons. How does that fit?
    All humans have roughly the same number of neurons, so why are there such large IQ differences between people?
    I mean, I don't understand this scale. I guess Penrose would say that if you measure consciousness in bits, then a neuron can only be represented in qubits; but then again it won't be possible to put computers, thermostats and humans on the same scale, and we haven't progressed.

  64. rival
    Does your synthetic neuron get scared when surrounded by epinephrine (adrenaline)? If you connect 100 billion of them - then will they start showing signs of fear?

  65. Shmulik
    It is indeed similar to what Kahneman describes. Research today shows that there is indeed a certain localization of activity in the brain. For example - on the left side of our brain, toward the back, there is an area called "Wernicke's area" that is behind language comprehension. Forward from there is another part, "Broca's area", which is behind speech production. Between them information is transmitted along a route called the "arcuate fasciculus". (I know this route because we display it in our system, using a scanning method called DTI.) That is - there is an evidentiary basis that the left part of the brain is indeed Kahneman's "System 2" (of course, the idea is not his). Beyond that, we know from Sperry's split-brain studies that there is a certain division of labor between the two sides of the brain (spatial vision, for example, is mainly on the right side).

    A stapler has no consciousness, but a thermostat does. You can say that a stapler has a consciousness of 0 bits, and a thermostat has a consciousness of one bit.
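
    The one bit in question can be made concrete with a toy sketch (a hypothetical illustration of the feedback-circuit framing, not anyone's actual model of consciousness):

```python
def thermostat(temp, setpoint=20.0):
    # The entire "inner state" is one bit: heater on or off.
    # A stapler has no such feedback loop; a thermostat has exactly one.
    return temp < setpoint  # True = heat on

readings = [18.0, 19.5, 20.5, 22.0]
states = [thermostat(t) for t in readings]
# states -> [True, True, False, False]: the single bit flips
# as the room crosses the setpoint.
```

    One sensor, one decision, one bit of internal state that closes a feedback loop on the world - that, and no more, is what "one-bit consciousness" means here.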

    I don't know the chess problem.

  66. rival,
    Send a link to the relevant lecture and I will be happy if you direct me to the relevant minute. Hope I have time to watch
    What's more, the Wikipedia page on the subject completely disagrees with you
    Although it is not unusual in science and engineering to have several descriptive models for different abstraction/detail levels, the number of different, sometimes contradicting, biological neuron models is exceptionally high. This situation is partly the result of the many different experimental settings, and the difficulty to separate the intrinsic properties of a single neuron from measurement effects and interactions of many cells (network effects)
    This is why I loved Penrose's lecture on consciousness. It was challenging and different from most lectures on the subject. According to him, with the existing model of computation and the knowledge that exists today about neurons, you will never be able to model human consciousness, and you will never be able to solve every problem that a human can solve, because of the Gödel incompleteness theorem and the limits of Turing machines. Again, it is quite possible that he is wrong, and the wiki page is full of objections to his claim.
    If we cannot distinguish between a computer and a person (Asimov's Bicentennial Man...), then clearly, if there is a computer that no experiment will ever be able to distinguish from a person, I can say that it has consciousness. But so what? What have we proved by that?

    Regarding the chess question, I'm guessing he checked. All I have is the article on Ynet and another link I found, in which it says that he is calling on AI experts to try to solve this problem and send him the solution.

  67. By the way, I checked your link to the Ynet chess puzzle:

    1. How did he determine that a computer could not solve this riddle? Did he check? Or did he just guess and draw conclusions that fit his views?

    2. His conclusions may be correct regarding Deep Blue, the supercomputer that at the time defeated the world chess champion Garry Kasparov; that computer based its play on enormous (for its time) computing power, scanning almost all the possible moves and searching the tree of possibilities for the moves that would give it the biggest advantage. But if you read a little about the "AlphaGo" neural network that I told you about, you will see that it works much more like a human brain, by way of intuition rather than a massive scan of all the possibilities. I wouldn't be surprised at all if it were able to solve this riddle.

  68. So here is the catch Shmulik,

    We know a lot about the single neuron, and manage to model its operation with such a level of precision that you cannot differentiate between the pulses produced by a real neuron and those produced by the simulated neuron on a computer. I suggest you watch one of Prof. Idan Segev's fascinating lectures; you will see that not only do they know how to model, very accurately, dozens of types of nerve cells that exist in the brain, they also know how to model hundreds of thousands of nerve cells connected and working together, and the learning processes that take place in them.
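
    To give a feel for what "modeling a neuron's pulses" means: here is a deliberately minimal leaky integrate-and-fire sketch. The models Segev and Markram actually use (Hodgkin-Huxley type) track ion-channel dynamics in far more detail; this toy only shows the basic integrate-leak-fire idea.

```python
def lif_spikes(current, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0):
    # Leaky integrate-and-fire: the membrane voltage leaks toward rest,
    # integrates input current, and emits a spike (then resets) when it
    # crosses threshold. A minimal sketch, not a biophysical model.
    v, spikes = v_rest, []
    for i in current:
        v += dt * (-(v - v_rest) + i) / tau
        if v >= v_thresh:
            spikes.append(True)
            v = v_rest  # reset after the spike
        else:
            spikes.append(False)
    return spikes

# A constant drive above threshold yields a regular spike train:
train = lif_spikes([1.5] * 100)
```

    Even this crude model produces the charge-up-and-fire rhythm that real neurons show; the research-grade models add the ion-channel detail that makes the simulated pulses indistinguishable from recorded ones.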

    The question is how do you measure consciousness? How will we know how much consciousness the network we built and taught and raised has? As far as I'm concerned, once the network is able to perform every operation that a human brain can perform, including solving puzzles and equations and IQ tests, and it claims to be self-aware and to have thoughts, there will be no reason to doubt it.

    What do you think?

  69. rival,
    But here's the catch: if consciousness emerges already in a single neuron, then as long as you don't understand what's happening inside it - and we're really, really far from understanding what's happening inside it - you won't be able to correctly model consciousness.

  70. Shmulik,

    Ok, now I understand where the lack of synchronization between us is, when I talked about "critical mass" I was talking about the minimum mass that is needed for the existence of consciousness *on a human level*.

    I actually agree with Penrose about the fact that consciousness begins with a single neuron, or maybe 100, or 1000, only then it is so weak that it is difficult to distinguish it. The more neurons the network contains and the more connections between them, the stronger the consciousness that will be created in it will be.

    In my opinion, a body is necessary in order for a network of consciousness to be created, certainly on a human level. The network must experience the world, learn a language, interact with the world, otherwise consciousness will not appear in it.

  71. rival,
    I don't need to explain why I know that this is not the actual situation; you are the one who has to show that it is. But I'll help you. You wrote this:
    "I don't think anything should be isolated, the consciousness will be created by itself within the network when it is big enough and will be connected to a body that will allow it to experience the world. Just as it happened in the minds of the animals when their minds grew and grew until consciousness began to emerge in it."
    According to your claim, consciousness will be created if the network is large enough and connected to a body. I don't necessarily agree that a body is needed, but never mind. You are arguing here for a critical mass below which there is no consciousness. You also do not claim that consciousness already exists, but that in the future there will be consciousness. From this I concluded that, as of today, in your opinion there is no consciousness in neural networks. I also don't think anyone believes there is consciousness in today's neural networks. In any case, the difference between your approach and Penrose's can be put this way: he claims that the source of consciousness is in the single neuron, while you require a critical mass and seem to ignore the neuron itself, placing primary importance on the network. I don't find that likely. I don't understand why with 10 neurons there would be no consciousness but with 11 there would be. Too simplistic.

  72. Your fallacy is that you *assume* that there must be something else for consciousness to arise, I don't think that is the case.

  73. Shmulik,

    "You wrote that an ant has consciousness and a mouse more so. Fine, but if you throw it into the neural network, it means that even a small neural network will have some consciousness and a large neural network will have a "stronger" consciousness. This is not the actual situation."

    And how do you know that this is not the case in practice? How did you determine that the neural networks we have created do not already exceed the strength of the "consciousness" of an ant or a cockroach?

    Don't worry about the article about the chess puzzle, I promise you I will read it.

  74. rival,
    My neutron example shows the fallacy of your claim that all you have to do is reach a critical mass of neurons and, presto, there will be consciousness. I wrote that my neutron model is incorrect because it is not enough to concentrate a lot of neutrons for an atomic bomb to form. I am trying to argue that exactly the same applies to consciousness. Consciousness will not just happen because there is a larger neural network, and your mouse-and-ant example actually works in my favor. You wrote that the ant has consciousness and the mouse more so. Fine, but if you carry that over to the neural network, it means that even a small neural network will have some consciousness and a large neural network will have a "stronger" consciousness. This is not the actual situation. That's why I wrote that the example of the mouse and the ant does not support your argument.

    Regarding the body: the fact that superhuman intelligence was attributed to the computer is fine, but I don't know what to make of it. Does it represent a dramatic change in computer science? I don't think so.

    If you look at the chess puzzle for a second, you will understand the point of Penrose's claim: the computer calculates, while the player *understands* the essence of the position. How the player does this is explained in the link/lecture I sent.

  75. Avi,

    There are great advantages in switching to a computer and I think I mentioned this before, for example a much higher speed of thinking, or the ability to multiply our intelligence level by thousands or even millions.

    In my opinion, if we build the neural network in a sufficiently precise way (meaning that it will be similar in its operation to a human brain, including the simulation of hormones for example) then it will have desires and emotions, and all the other things you mentioned.

  76. The question is philosophical - suppose it is possible to transfer consciousness to a very sophisticated computer... what would be the goal?
    At the moment I transfer my consciousness to the computer, will I continue to exist inside the computer? Or will it simply be a computer that knows everything I knew up to that moment, and from that moment on it continues - but as a computer and not as a person. My body, the needs of my body, the hormones that affect me today will no longer affect it in the same way; its thoughts will be different and its desires will be different - it will no longer be me.
    In this way I actually become a computer and cease to exist as a person, so what is my purpose in becoming a computer? What did I gain from having existed in a body and then continuing inside a computer?
    After all, what shapes my character, my desires and decisions?
    This is what my organic body does today - its very organic state and existence: the hormones, the satiety, the hunger, the passions, the emotions, the hatred, the jealousy, the love, the cold, the heat, the smell, the taste, and everything that I feel.
    A computer would have none of this, and therefore would not have the same purpose for existence that I have, or its purpose for existence would be completely different - and it would no longer be me.
    Then why me?

  77. Shmulik... I will try to address your chess example again even though I have not yet read the article you linked to. Not long ago, a neural network named "AlphaGo", from Google, defeated the world champion in the game of Go, which is considered orders of magnitude more difficult and complicated than chess. And not only that: during the game the network made a number of surprising moves that the experts described as moves of "superhuman" intelligence - no human being would have imagined that such moves could be made in order to win later in the game.

  78. Shmulik,

    Your neutron model is not good: in order to form a chain reaction that will create a nuclear explosion, many neutrons are not enough. There must be whole atoms whose nuclei can split and strike neighboring nuclei, they must be packed tightly together to create a critical density, and something must start the chain reaction... This is really not similar to a neural network (for example, on the IBM chip), whose structure and operation already resemble what happens in a piece of biological brain, at least at the level of principle.

    "I am ready to go with you completely that a mouse has a kind of consciousness and maybe an ant as well, but what emerges from your example is that consciousness already exists even when you are small with an undeveloped brain. Although consciousness improves as the brain grows, the essence is already there."

    But this is exactly my claim (and not only mine): the more developed and larger the neural network, and the more neurons and connections it contains, the stronger and clearer the consciousness created in it becomes.

    Unfortunately, I still haven't had time to read your chess example. I hope I can find time for it later this week. I did address it in a message, but it was blocked for some reason. I don't understand why it takes Avi Blizovsky so long to release messages here.

  79. rival,
    And what is the answer to my neutron example?
    I'm trying to argue that not everything is solved by "if there is a critical mass". Something else has to happen for the atomic explosion to take place. Your argument assumes that the only problem is critical mass. Apparently the example of the ant and the mouse demonstrates your claim, but actually it doesn't. I am ready to completely agree with you that a mouse has a kind of consciousness, and maybe an ant as well, but what emerges from your example is that consciousness already exists even when the brain is small and undeveloped. Although consciousness improves as the brain grows, the essence is already there. Incidentally, this corresponds to Penrose's general claim that consciousness occurs at the level of the neuron and not as a result of a critical mass.
    Have you checked the chess puzzle?

    Miracles, I did not see a reference from you to what I wrote. I would appreciate a response

  80. Shmulik,

    If all human brains were connected and linked together by synaptic connections, as happens within our brain, and could transmit electrical pulses between them, then there would be a good chance that this would happen; but since they are separate from each other and each stands on its own, it does not happen.

    The reason I think the critical mass idea I talked about is true is a simple look at nature. A mouse has a higher consciousness than an ant, and a cat has a higher consciousness than a mouse, and a monkey has an even higher consciousness, and our human brain which is 3 times larger than a monkey's brain has an even higher consciousness. Do you notice any interesting patterns?

  81. rival
    In a neural network there is no concept of software. Software - by definition - is a finite series of steps to solve a problem, with each step making a small change to the environment. A computer is a machine that contains a processor that does not change and memories that contain both software and data.
    A neural network is a completely different animal. In particular - it is not possible to perform a reduction from a neural network to a Turing machine. A neural network is more powerful, meaning that there are problems a neural network can solve that a Turing machine cannot.

    Here is a link to a short article that explains what I'm talking about -

  82. rival,
    I'll wait. Meanwhile, I have a question about points 2 and 4:
    I claim that if there are enough people someday a union of minds will be created and all of humanity will become a common mind. We just haven't reached critical mass yet.
    I claim that if we put a lot of neutrons together we will have a critical mass and an atomic bomb.
    Both claims are clearly false, but how are they different from your critical-mass-of-neurons claim?

    What you wrote reminds me of Daniel Kahneman's System 1 and System 2, but even he does not claim that there really are two systems, only that it is convenient to describe the brain that way. I didn't see a response from you to the Penrose chess problem that demonstrates the difference between human thinking and executing commands.
    Regarding what you wrote to Rival: does a stapler have consciousness, and if not, what is the difference between it and a thermostat? Electrical power?

  83. Miracles,

    Humans also have slightly different consciousnesses from one another, and different personalities, so it is likely that the consciousness that develops in a synthetic neural network will also have its own unique personality. But if the network is built similarly to the neural network of a human brain, is of similar size, develops in a human environment, learns language, etc., then I think it is reasonable to assume that it will have a human consciousness and not the consciousness of a cat - or, god forbid, a thermostat.

    And you didn't answer me about section 3.

  84. rival
    I completely agree that consciousness will develop - I claim that even the thermostat has consciousness 🙂
    My question is - why do you think that consciousness will be similar to human consciousness? What convinces you that what develops will not be Skynet? To paraphrase Dawkins - there are many more ways to be evil than to be good.

  85. Shmulik,

    I don't understand much about Gödel's incompleteness theorem, but I see the amazing achievements that the small neural networks we have created have already reached, and I assume, in a way that seems logical to me, that the rest - including consciousness - is a direct continuation of this, as happened in the brains of animals in nature during evolution, and therefore will also happen in artificial neural networks.

    Thanks for the links, it looks interesting; I hope to take a look toward the end of the week (at the moment, unfortunately, I'm too busy). Regarding AlphaGo: during the game it made some wonderful moves (purely by "intuition", not by examining all the possibilities ahead) that amazed everyone who watched the games and were described as genius moves of superhuman intelligence.

  86. Miracles,

    1. "It's like you said - both the IBM chip and the brain have protons, electrons and neutrons, so here's proof"...

    Absolutely not, I'm talking about the principles of operation and not about the materials from which the network is built.

    2. "No one has shown that simulation at the level of neurons is the right way to simulate consciousness"

    At the moment it is impossible because we have not yet reached the critical mass needed for this, but in my opinion it is a very reasonable assumption when you look at the whole picture.

    3. "You yourself say that consciousness can be viewed as software that runs on the brain's hardware - but that is not how a synthetic neural network works."

    So how does it work? In a neural network on a chip, aren't there electrical signals running between artificial neurons?

    4. "But I don't think there is any way to isolate something called consciousness"

    I don't think anything needs to be isolated; consciousness will form by itself within the network when it is big enough and is connected to a body that allows it to experience the world - just as happened in the brains of animals, whose brains grew and grew until consciousness began to emerge in them.

  87. Shmulik
    My approach is a synthesis of several other approaches, and precisely with Penrose I agree less. I will try to simplify…
    You can look at our brain as a "computer" with two processors and a shared memory. The first processor is slow and linear, and it contains the "I think, therefore I am". This is what is sometimes called the "left brain", and it is indeed usually found on the left side (Wernicke's and Broca's areas).
    The "right" part is an associative processor, very fast, which works in parallel. When we try to remember a song, this is the part that searches the memory and - after we have stopped "thinking" about the song - "brings" us the link.
    Now - in the eighties, it was believed that everything interesting in artificial intelligence happens in the left part, and therefore people talked about understanding natural language (Marvin Minsky, Herbert Simon, etc.). Afterwards the view flipped: what is interesting is what happens in the right part of the brain. This is what Hofstadter claimed happens within 100 milliseconds.

    These approaches look at the brain as a computer, and therefore think that it can be simulated in a computer. To me, the brain is part of the body and its job is to manage the body. It is possible, perhaps, to replace neurons with artificial components, and in the distant future perhaps every part of the body. But I don't think there is any way to isolate something called "consciousness".

  88. rival,
    Why should I be offended? Many disagree with his claims and conclusion. But leave the conclusion for a moment and examine his claim about the limits of computation. His claim is based on Gödel's incompleteness theorem and on Turing's formulation, which says that in the current model of computation, algorithms implemented by a computer will always be limited in their ability to prove claims. The number of neurons and connections between them that you build into a computer is irrelevant, because it is still a Turing machine subject to the incompleteness theorem. On the other hand, he claims that humans are not Turing machines and are not subject to this limitation; therefore today's computers will not be able to model human thought. Here is his wording:

    The Penrose–Lucas argument states that, because humans are capable of knowing the truth of Gödel-unprovable statements, human thought is necessarily non-computable

    From the link from Wiki which explains his hypothesis and states most of the objections to it

    His claim is that awareness is created within the neuron and is amplified by the number of neurons and the connections between them; it is not created merely as a result of the quantity of neurons. How does the neuron do this? This is where quantum gravity comes in. There are many objections to his claim - for example, that the brain is a warm and wet environment and therefore quantum processes cannot persist there - but he provides answers, and the issue is undecided. For your part, don't dismiss the discussion by calling it magic. Nowhere does he turn to irrationality or the like.
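
    For readers who want the formal core: Gödel's first incompleteness theorem, on which the Penrose-Lucas argument leans, can be stated roughly as follows (an informal sketch, not Penrose's own wording):

```latex
% For any consistent, effectively axiomatized theory F that
% interprets basic arithmetic, there is a sentence G_F
% (informally: "this sentence is not provable in F") such that:
F \nvdash G_F \qquad \text{and} \qquad F \nvdash \neg G_F .
```

    The Penrose-Lucas step then adds that a human can nevertheless see that G_F is true, so human reasoning is not captured by any such F; the objections collected on the Wikipedia page attack precisely that step.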

    If you don't have the energy or time to watch his lecture (although I really recommend it), go to the Ynet link and check the chess problem he presented. Anyone who knows even a little chess immediately understands that the described position is a dead end, but chess computers have a hard time with it (although in the comments there was a claim that a solution exists). The problem demonstrates the difference between the understanding humans have and the way computers work on such a problem. To myself I called it a walk to infinity: I had no trouble immediately seeing the futility of playing endless moves that would not advance me a millimeter toward a solution, and without calculating moves at all, thanks to a basic *understanding* of the position, I arrived at the answer.

  89. rival
    It's like saying "Both the IBM chip and the brain have protons, electrons and neutrons, so here's proof"...
    No one has shown that simulation at the level of neurons is the right way to simulate consciousness. You yourself say that consciousness can be viewed as software that runs on the brain's hardware - but that is not how a synthetic neural network works.

  90. Shmulik,

    Don't be offended, and thanks for the link. I haven't had a chance to look yet, but it really sounds like nonsense to me - this whole attempt to present our brain and consciousness as something magical and "mysterious" that requires magic or "quantum gravity" in order to work. The small neural networks we have created manage to learn and do so many things that only a few years ago were considered the exclusive ability of a biological brain, and they do it without quantum gravity and without mysterious magic. My logic says that consciousness does not need magic either, but mainly many, many *more* neurons and connections between them, and of course also a body through which it can experience the world and develop consciousness just as a baby does.


    It is true that the brain evolved to protect the body and help it survive, but now that it exists and works, that does not mean it cannot be copied and allowed to control another body that is not based on DNA, or an airplane, or a car - which is actually exactly what people are starting to do today, or at least that's the direction.

  91. I think it's very simple: we are products of evolution, and therefore from the beginning the purpose of the brain has been to control the body and maintain it.
    At bottom, our mind cares only about our body. All the technology and science and knowledge and philosophy, and everything we think and do, is just part of an evolutionary process whose purpose is to ensure the continuity of our bodies through our descendants - that is, the preservation of organic life and of our DNA, and passing it on.
    Therefore, without the body, without the DNA, who even needs the brain? If we transfer the contents of the brain to an electronic computer, what is the purpose of this computer? To preserve electronic transistors, to produce more computers and improve their electronic condition and take care of them?
    In my opinion, at a certain stage of human development, people will come to this conclusion, and one of two things will happen: either they will throw the computers and technology in the trash and go back to living a simple life, or they will find a way to live an eternal organic life in a regenerating, non-aging body.

  92. Rival, miracles
    Here is an instructive lecture by Penrose (who agrees with what Miracles wrote - or actually, Miracles agrees with him) on consciousness and the foundations of physics. Penrose thinks that to model the brain we will need a quantum theory of gravity, precisely because of problems that cannot be represented on a Turing machine. Since the human brain does manage to deal with them, and the brain was created by an evolutionary process, he thinks the answer to how the brain solves these kinds of problems lies in this part of physics that has not yet been worked out: quantum gravity. He gives several examples of such problems during the lecture. I know I'm repeating myself, but the lecture is instructive, if only for the examples he gives.

    By the way, he recently published a nice chess problem that he claims demonstrates the current limitations of artificial intelligence in problem solving: ,7340,L-4936065,00.html

  93. Yes, Miracles, you may ask.

    The chip operates according to the same principles as our brain: a neural network in which nerve cells (or microprocessors, in the case of the chip) communicate with each other and change the strength of the connections between them according to the inputs they receive from the outside world.

    You claimed earlier that it is not possible to imitate the way the brain works, and you gave a bunch of reasons why it is impossible; the chip shows that this claim has no basis.

  94. Miracles,

    In my last message I was not referring to the issue of consciousness but to your claim that we cannot create a simulation of a biological brain because the brain works differently from software, and therefore it is impossible. IBM's cognitive chip clearly shows that this claim is incorrect.

    Besides, we have already discussed this topic enough and soon Camila will also come here and scold me for speaking out of lack of knowledge and that it is better for me to talk to people who specialize in the topic so that they will land me on the ground of reality. Leave, I don't have the strength for it 🙂

  95. rival
    There is no connection between IBM's computer and human consciousness - not even according to IBM 🙂
    The fact that a chip, in a very rough way, resembles a small part of the brain - and in fact only a particular model that many disagree with - says nothing at all about how close we are to a simulation of consciousness.

    I've heard so many times that we are about to simulate consciousness - and then there is silence for years...

  96. Miracles,

    Sorry but I disagree with you:

    A - We don't need to design anything; we can copy a mechanism that already exists - a biological brain - and we can examine it and understand how it works.

    B - The simulations we have already created of small pieces of brain show, in my opinion, that what you claim is impossible is certainly possible - for example the simulations created in the Human Brain Project, and even more so the cognitive chips we have been hearing more and more about in recent years:

  97. rival
    I claim that evolution is a more "powerful" process than design, and therefore evolution is a better explanation of reality than religious belief. In particular - I claim that our mind is a machine that cannot be designed. To explain this, one must first understand what is meant by the word "powerful". For this purpose, you need to understand the subject of models of computation (which is equivalent to understanding the concept of "formal languages"). Three common models of computation are finite automata, pushdown automata, and Turing machines; the definition of each of these machines defines the concept of "power" for that type of machine.
    In particular - we know that even Turing machines have limited strength, there are problems that a Turing machine cannot solve. A well-known example is the post matching problem.
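    The Post correspondence problem is easy to state concretely: given pairs of strings ("dominoes"), find a sequence of indices whose top and bottom concatenations agree. A bounded brute-force search is a minimal sketch of looking for a match (the instance below is the standard textbook example; the depth bound is arbitrary, since without a bound the search need not terminate - which is exactly where the undecidability bites):

    ```python
    from collections import deque

    def pcp_search(pairs, max_len=8):
        """Breadth-first search for a PCP match, up to a bounded sequence length.
        This is only a semi-decision sketch: PCP is undecidable, so no fixed
        bound works for every instance."""
        queue = deque([(i,) for i in range(len(pairs))])
        while queue:
            seq = queue.popleft()
            top = "".join(pairs[i][0] for i in seq)
            bottom = "".join(pairs[i][1] for i in seq)
            if top == bottom:
                return seq
            # Prune: a partial solution is viable only if one string
            # is a prefix of the other.
            if len(seq) < max_len and (top.startswith(bottom) or bottom.startswith(top)):
                queue.extend(seq + (i,) for i in range(len(pairs)))
        return None  # no match within the bound

    dominoes = [("a", "ab"), ("b", "ca"), ("ca", "a"), ("abc", "c")]
    print(pcp_search(dominoes))  # → (0, 1, 2, 0, 3): both sides spell "abcaaabc"
    ```
    
    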

    To support my argument I need to show 2 things. The first is that our brains are stronger than a Turing machine. There are many researchers who think so: Thomas Nagel, Saul Kripke, Kurt Gödel, John Searle, Roger Penrose, Martin Gardner, Alan Turing and John Lucas. I hope you are surprised to see Turing on this list 🙂
    The second thing that needs to be shown is that there is indeed a computational model stronger than a Turing machine (so-called hypercomputation). Turing himself describes such a machine in his doctoral thesis. It can be shown that analog computers are also of this kind, and there are other models that can be described at least theoretically.
    And how does this relate to our case? Well - software is design. Design is a finite series of steps to solve a problem, with each step having only a local effect. Every computer today (a machine that runs software) includes a very small number of processors relative to the size of its memory (otherwise it would not resemble a Turing machine). The brain is not built like that - the brain is built from a number of processors equal to the size of its memory.

    If you want a simple example of this, look at Conway's Game of Life. It is not possible to truly run this game on a computer 🙂 (you can make an approximation, and you can even run a computer inside the Game of Life - but that is not related to the topic).
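    For concreteness, one Life generation is easy to compute for any finite set of live cells; the "approximation" is that a real program only ever tracks finitely many cells of the officially unbounded grid, and updates them with sequential instructions rather than true cell-per-cell parallelism. A minimal sketch (the glider pattern and its diagonal drift are the standard ones):

    ```python
    from collections import Counter

    def life_step(cells):
        """One synchronous update of Conway's Game of Life.
        `cells` is a set of live (x, y) coordinates; the grid is unbounded in
        principle, but any real run only ever touches finitely many cells."""
        neighbours = Counter(
            (x + dx, y + dy)
            for (x, y) in cells
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # A cell is alive next step with exactly 3 neighbours,
        # or with 2 neighbours if it is already alive.
        return {c for c, n in neighbours.items() if n == 3 or (n == 2 and c in cells)}

    # The standard glider (y grows downward); it shifts by (1, 1) every 4 steps.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    after4 = glider
    for _ in range(4):
        after4 = life_step(after4)
    print(after4 == {(x + 1, y + 1) for (x, y) in glider})  # True
    ```
    
    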

  98. Miracles,

    I remember these topics in general, and if necessary, Wikipedia and YouTube will help me refresh them in more depth.

  99. Miracles,

    As I told you in previous conversations (and yes, I remember you disagreed with me), the brain in my opinion is definitely hardware that runs software: the hardware is the neural network with all the connections that exist in it after it has learned and experienced things, and the software is the electrical pulses that run around within this network.

    I don't think forgetting should be a problem in such a network. Since the network is able to learn things, for example to recognize cats, it will also start to forget what it has learned if you stop showing it cats and from now on show it only dogs (the synaptic connections will change in order to recognize dogs better, at the expense of the cats it remembered earlier).

    In my opinion, hormones will not be a problem either: one could simulate their general effect on the synapses in a given area of the network.

  100. rival
    Indeed familiar - and the brain is flexible because it is made of biological material. This is one of the fundamental differences between electronic circuits and biological systems. As you said the brain has evolved, over hundreds of millions of years, to be reliable enough to survive 80 years. And that, while overcoming accidents, diseases, environmental changes and so on.
    This brain is not a computer that runs software. That is, it cannot be broken down into two components.
    In order to build a human being, the imitation will have to be at a level that is not at all similar to what we do today.
    In order for your synthetic "consciousness" to love, it will have to secrete all kinds of hormones under certain conditions, or somehow simulate the effect of those hormones.
    Another part of our consciousness recognizes a lack of energy in the body and takes care of supplying energy to the body. The energy production mechanism is distributed throughout our body. Synthetic consciousness will be very different from us in this mechanism, or it will not be part of consciousness.

    What I am trying to make clear is that the function of our consciousness is mainly to control the body. Take, for example, the memory mechanisms, and in particular the mechanism of forgetting, a critical part of our consciousness. How would that appear on a computer?

  101. Miracles,

    Note that even in the article you linked to, the claim that the cerebellum has cognitive functions is treated with great skepticism; they say there (among other things) that perhaps, because it serves as a simple relay station between the body and the cerebral cortex, damage to it prevents some of the signals from passing through.

    The cerebral cortex developed above the cerebellum for evolutionary reasons; this does not mean that it cannot function without it if it is connected directly. You must be familiar with the book "The Flexible Brain".

  102. rival
    Cognitive functions. Although the cerebellum is most understood in terms of its contributions to motor control, it is also involved in certain cognitive functions, such as language. Thus, like the basal ganglia, the cerebellum is historically considered as part of the motor system, but its functions extend beyond motor control in ways that are not yet well understood.

    In addition - read here:

  103. Miracles,

    Where I read, they did not mention damage to consciousness; they only talked about impairment of fine motor ability. Where did you read that damage to the cerebellum leaves consciousness very limited?

  104. rival
    True, they did not lose their consciousness. But their consciousness is very limited. And this is beyond the loss of the ability to speak, severe impairment of movement, inability to learn new movements and so on. Add to that that the problems worsen with age.

  105. Miracles,

    I don't think the cerebellum has such an important role in creating consciousness. Wikipedia describes cases of soldiers whose cerebellum was damaged by bullets, and even rare cases of people with no cerebellum at all; the effect is mainly on motor skills and movement, and these people did not lose their consciousness.

  106. rival
    How do we know which part is necessary for consciousness? I don't think we know, or will know soon, how to divide the brain, or the cerebellum, into areas that are only related to consciousness.
    I think that the example you gave, of replacing neurons in circuits, is a possible solution. The problem is that I'm not sure it increases our reliability, or life expectancy.

  107. Miracles,

    Yes, it sounds logical because in the cerebral cortex there are "only" 16 billion neurons compared to 85 billion in the entire brain. But I don't think we can create a brain simulation that also includes the cerebellum, or at least the part of it that is necessary for consciousness.

  108. rival
    We talked, and we disagreed 🙂 Are you aware that about 80% of the neurons in the brain are in the cerebellum? This is a part that is not at all considered "important" when talking about consciousness. But without it there is no consciousness. You are welcome to read a bit about Purkinje cells to understand how complex it is, and how little we know.

  109. Miracles,

    I don't think that the body that will be connected to the digital neural network that we will create will have to be as complex and complicated as our biological body, and we have already talked about that.

  110. Miracles,
    A first mistake is to discuss the feasibility of future technology in terms of current technology. Calculating the survivability of a modern computer in terms of the survivability calculated for the vacuum tubes of the ENIAC would show that it could not work for even one second before something burned out. It may well turn out that it will be easier to send several thousand packages of several thousand petabytes of information to a distant colony, plus a universal assembler, than to risk a generation ship. Do you have any idea what could go wrong on a generation ship? And in a society that is supposed to survive under those conditions?

    The same goes for the computing power needed to separate experiences from everything else going on in the brain. If you have the ability to copy and map synapses, if you have the ability to decipher the human state arising from all the connections and the structure of the neurons, and if this has economic feasibility (and it will), then it will be possible to isolate experiences from all the other happenings in the brain, translate them into another structure and embed them there. Even if the first transfer requires tremendous computing capacity and thousands of scientists working on it, twenty years later you will be able to do it for sums that are not particularly high. As a reminder: in the seventies, mapping the human genome was seen as impossible. In 2000, the mapping of the human genome was carried out at a high cost (hundreds of millions of dollars), with considerable overruns of the budget and the allotted time. Today, you can buy one for a few thousand dollars - a decline that outpaces Moore's Law.
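    The cost curve alluded to here can be put into rough numbers (the figures below are the commonly cited approximations for genome-sequencing cost, used only for the arithmetic, not exact data points):

    ```python
    import math

    # Approximate, commonly cited figures for sequencing one human genome:
    cost_2001 = 100_000_000   # ~$100 million around 2001
    cost_2015 = 1_000         # ~$1,000 around 2015
    years = 14

    halvings = math.log2(cost_2001 / cost_2015)  # how many times the cost halved
    halving_period = years / halvings            # years per halving
    print(round(halving_period, 2))              # ~0.84 years per halving

    # Moore's Law corresponds to costs halving roughly every 2 years,
    # so on these numbers sequencing costs fell more than twice as fast.
    ```
    
    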

  111. Lorem Ipsum
    The function of the brain is to move the organs in response to input from the senses. The fact that the brain also likes to listen to Chopin or play poker is a minor matter, and it will be very difficult to isolate it from the rest of the brain. In my opinion - it is impossible.
    Theoretically, every cell in a person could be replaced with an electronic component, but biological components have great advantages over silicon ones. For example, our immune system changes at a high rate to fight invaders. Meanwhile, I don't know of a simple non-biological system that survives more than a few years.
    Systems that do survive are spacecraft of sorts, but a major component of their design is simplicity. Whoever thinks that a system of trillions of complex components will be reliable, and cheap, is in for a surprise 🙂

  112. Yair,

    One can think of many possible advantages of a consciousness that exists on a digital chip rather than in a biological brain. For example, a consciousness running on a computer chip will probably be tens of thousands of times faster than one in a biological brain. Also, why fix and repair when you can create something far better and more durable than a biological body in the first place?

    You say that the writer annoys you when she writes that she likes being human even though she has no idea what it's like to be something else, but without noticing, that is exactly what you are doing yourself: you say you are happy to stay in a biological body even though you have no idea what it's like to exist inside a computer, or in a body made of other, not necessarily biological, materials.

  113. Of course we would like to copy or transfer the consciousness to the computer (a creation as above). Is there a better way to make an interstellar journey lasting thousands of years? Upload the mind to the computer. When we reach our destination, we create a new body (from the materials available on site), load ourselves into it and voila! Ready for a new life!

    Beyond that, the question of whether this is another creature is still open from a legal point of view. If someone's duplicate goes and murders another person (or duplicate) - do you think the original will not bear the consequences? The issue requires a legal and moral discussion in which the first shot has not yet been fired (literally). And we haven't even started the discussion about who inherits a person when his first body dies - his heirs or his duplicates. Some difficult legal battles are expected here as well.

    Creating rules is just the beginning of the process. A deep understanding of the neural connections and their meaning will come a little later, then we will be able to translate and transfer memories and experiences from one nervous system to another. Later we can transfer personality components. People will be willing to pay a fortune for a good sense of humor (for example) and there may even be fashions in this field (suddenly we will find that everyone turns into gloomy-looking romantic poets).

    And in conclusion? "Sandler says the singularity describes a final state": Honey, you have no idea. The concept of singularity means precisely that you can have no idea today, with any degree of certainty, what will happen then.

  114. Why all the talk about transferring consciousness to a computer? It is possible to live forever in our own bodies if only a way is found to repair any damage to DNA, to prevent diseases, and so on.

    But no one will transfer any consciousness to a computer, because there is no point in it. What is likely to happen is that the future body will be genetically engineered in an optimal way, different from the current one.

    In addition, I was upset that the author wrote at the end that she likes being human. That sentence shows infinite ignorance of the subject itself. Love and the emotions themselves limit mental perception. It's like a stupid person saying he's happy to be just the way he is. It's absurd, because he has no idea what it's like to be something else; therefore the very claim that you are fine as you are collapses entirely.

  115. In general, the question of whether or not we will succeed in transferring our consciousness into a computer is marginal and unimportant. As soon as we succeed in creating a real consciousness inside a computer, the new consciousnesses created from that moment on inside computers (most likely connected to robotic bodies that will let them move and interact with the physical environment) will immediately enjoy eternal life and immunity from the diseases and other problems we face because of our biological bodies. They will be the next generation; they will be our descendants. So suppose the entire current generation dies and cannot transfer its consciousness into a computer - big deal; then we will be just like the millions of generations of humans before us, whose fate was the same. The main thing is that there will be continuity of the human race - non-biological continuity this time, a new era in human evolution.

  116. Alon, you are right: all these duplicates will be like twin brothers to you; they will not really be you and your consciousness. That is why Kurzweil suggests transferring your consciousness to the computer gradually, so that it still remains you. He suggests first replacing a small piece of your brain with an equivalent cognitive chip and asking you whether it is still you; most likely your answer will be yes. Then more and more pieces of brain are replaced, until finally your entire brain is digital but you will still answer "yes, it is me", and your consciousness will have remained continuous throughout the process.

  117. If it were possible to upload the contents of our brain to a computer, that means it would also be possible to duplicate it as many times as you want.
    And if it could be duplicated, how would you know which one is really you? Each one would think it is the real one.
    In practice we would get several perfect copies, but not the real me.
    Or perhaps every one of them would be the real me.
