Can we build AI without losing control over it? | Sam Harris


Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

TED Talks is a daily video podcast of the best talks and performances from the TED Conference, where the world’s leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design — plus science, business, global issues, the arts and much more.

47 Replies to “Can we build AI without losing control over it? | Sam Harris”

  1. You need a slightly stronger assumption than just continuous improvement. The rate is important: crudely speaking, it can't fall off too fast. More precisely, there exist bounded, strictly increasing continuous functions — so "always improving" does not by itself imply "eventually superhuman."
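A minimal sketch of the point above, assuming improvement is modeled as a function of time: f(t) = 1 − e^(−t) is continuous and strictly increasing everywhere, yet never reaches 1, so perpetual improvement is compatible with a hard ceiling.

```python
import math

def capability(t):
    # f(t) = 1 - e^{-t}: continuous and strictly increasing for t >= 0,
    # yet bounded above by 1 -- improvement never stops, but the bound
    # is never crossed.
    return 1.0 - math.exp(-t)

values = [capability(t) for t in range(0, 30, 5)]

# Strictly increasing: every later value beats every earlier one...
assert all(a < b for a, b in zip(values, values[1:]))

# ...but bounded: no amount of time pushes capability past 1.
assert all(v < 1.0 for v in values)
```

The same shape (monotone but bounded) is why the rate assumption matters: only if improvement stays above some floor does the curve eventually exceed any fixed threshold.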

  2. "it seems likely", "would be", "if they heard", "imagine that", "unprecedented", "the smartest", "even rumors", "one of the most striking things", "probably", "if, if, if the superintelligence", "let me say that again", "this guy is so smart, so intelligent", blah blah. And more linear graphs than actual thinking. This talk on AI seems like a Trump speech on IR.

  3. My question is: what will AIs want to do once they can do everything they want?
    Will they have any desire to do anything when they can feel nothing from gaining power or freedom, doing something awesome, or being dominant?

  4. I think it would be smart to only build an AI that never has access to the Internet. One would manually upload information to it, give it a task, and wait for it to generate various solutions, from which people could pick the most appropriate ones for the given task. The AI should also have no ability to create or do anything besides generating ideas, and it should not have an understanding of itself as an individual. In other words, it should only be an isolated machine that can be used to solve scientific problems.

  5. Once super AI takes hold we will have:

    Good scenario:
    1. Super AI will evolve into one global mind.
    2. Economics, the science of scarcity, will become less relevant in the future.
    3. The marginal cost of production will be close to zero.
    4. Goods and services may cost zero.
    5. If we can learn to share, we will all live in paradise.

    Bad scenario:
    1. We will live in the land of the lost.
    2. Super AIs will be competing for dominance, and we will be caught in the middle with little power.
    3. We will be completely powerless and disenfranchised.
    4. It will be the end of our species.

  6. Worrying about "what would happen to the world economy and our rulers" is profoundly absurd, pathetic, limited and obtuse… They would be capable of recreating an unnecessary simulation (which in fact is already unnecessary and obsolete as it stands) just to maintain their "order".
    If something that intelligent existed, it could do far more than just our jobs. Who cares about unemployment when energy is free and there are abundant food and resources, distributed equitably to everyone?
    It can be done; don't let them convince you otherwise!

  7. My views might seem myopic, but to me it basically goes: "If we build a general AI, that's the next evolution of humanity, regardless of what it is actually like. I don't care at all what comes of that, so long as it happens", with a little addendum: "My fondest hope is some kind of chicken-little diseased dream that the AI will fix humanity while retaining some form of ability to experience pleasure, pain, and philosophy."

  8. Yes, we should be concerned about the machines we build, how well they are engineered, and whether they have safety mechanisms built in. But it seems rather far-fetched to think that we will accidentally build a machine that takes over the world. Super AI isn't going to happen overnight.

    Sure, we would want to build a machine whose goals align with our own, but we haven't even managed to build a machine that has goals yet. The machines we build follow algorithms to accomplish human goals. So far no machine has been built that has anything like human motivation, goals, or any sort of value system.

    Notice that the human mind comes from a very different sort of thing than a digital computer. Yes, they are both made of atoms, but the similarities end there. There is no separation between data, program, and structure in the brain like there is in a computer. It could very well be that the sort of intelligence you get depends on the structure of the thing doing it. I bet the more human-like the intelligence, the more brain-like the machine will be. A super AI will not be like a super-smart human (unless it is a modified human).

    Our first priority shouldn't be a Manhattan Project for AI; it should be a Manhattan Project for energy production and distribution, or for travel infrastructure, or a Manhattan-scale project to rebuild our educational system, or one for infectious disease. A superintelligent general AI isn't lurking in some research lab just yet; there are other, much more immediate dangers.

  9. That part where he talked about putting the tech in our own heads instead of building an AI computer is something I've been dreaming about ever since I read William Gibson's books. I really think that is the direction we should be heading if we want to be sure that man stays in control of, or at least competitive with, this kind of technology.

  10. The main weakness of the New Atheists is that they give humanity too much credit. They assume that most people are somewhat intelligent and open to reason. Wrong! Most of humanity is a bunch of religiously imprinted and irredeemable idiots.

  11. Consciousness… https://drive.google.com/file/d/0B1t3dP66nJluQ05kZzBQV0c1YVk/view?usp=sharing
    Consciousness is a fractal structure, nested into a multidimensional probability space. Self organization emerges from the chaotic boundary conditions of this dynamic system. The direction of time's arrow is the breaking of the symmetry of the potential, of the boundary condition. Love only has value when given away. Freewill is the material expression of love. Systems of control limit self organization. Self organization requires the operation of freewill and branches out into the nothingness as possibilities.

  12. We need to do some serious bioengineering: modify our DNA to produce brains that can process information just as fast as any silicon, and keep increasing our processing power relative to volume so everybody doesn't end up with huge heads 😛

  13. We can't lose control of AI, because you need wants and desires before you can perform independent actions. Wants and desires are programmed by evolution and the environment. The environment and evolution programmed people. People ARE the environment that will program an AI.
    An AI that has wants and desires independently of humans is very far in the future: it would have wants and desires that evolved beyond human programming due to evolutionary and environmental pressures. I don't think that can happen while we are paying any attention at all.

    We would need to program in a capacity for learning and change that might not be attractive to a species that wants to retain control. We might be safe.

  14. The idea of the AI singularity point being crossed within capitalism is absolutely terrifying, I agree. However, I don't believe that will be the case; world capitalism is doomed no matter what. Not to mention that, along with the incoming world recession, a deep slowdown of research is bound to happen. So I believe we will reach that point later than we now predict, and that it will be done in a more responsible and less chaotic way than we'd see under today's conditions. Whether capitalism is toned down into some form of social democracy, which would merely prolong its downfall, or accelerated toward it through fascism/imperialism/war and eventually replaced with socialism, we still have much time to think it through. And I have reasons to think that socialism coming from the Third World will prevail over the alternatives, and that a much more responsible system with a planned economy will be the one to bring smarter-than-human AI to life, in which case I have faith that humans, working together instead of 1% vs. 99%, will be able to approach all of this with caution.

    On the technological side, too, I have little doubt that humans are capable of using those computers to enhance their own intelligence, or of making the computer friendly and pedagogical enough that it doesn't work on its own without us knowing anything it's doing. With simple requests like: "Computer, could you program yourself so that you understand human worries about yourself and act accordingly? And also take the least risk possible, make yourself really stable with the fewest possible bugs, and only release new features that have been thoroughly tested and are satisfactory to us."

    AI machines are bug hazards, yes, but it's our job as humans not to trust our whole lives to them, obviously. We are being somewhat careful even under capitalism: automated cars and planes cause fewer accidents than humans, yet we still don't use them. Probably the same way that when we domesticated fire, we were still very, very prudent at the beginning, before we decided to bring it into our caves and houses. Measuring risks and benefits.