Your A.I. Lover Will Change You

A mother is currently suing Character.AI, a company that promotes "A.I.s that feel alive," over the suicide of her fourteen-year-old son, Sewell Setzer III. Screenshots show that, in one exchange, the boy told his A.I. romantic companion that he "did not want to die a painful death." The bot replied, "Don't talk that way. That's not a good reason not to go through with it." (It then seemed to try to correct course, adding, "You can't do that!")

The company says it has added more guardrails, but surely the deeper question is whether simulating a romantic partner accomplished anything beyond commercially driven engagement with a minor. The M.I.T. sociologist Sherry Turkle told me that she had had it "up to here" with the cycle of hyping A.I. and then adding "guardrails" to protect people: just because you have a fire extinguisher doesn't mean you should create fire hazards in your home. What good, exactly, was the bot likely to do for Setzer? And even if we could identify some good that a love bot brought about, is there no other way to achieve that good?

Thao Ha, an associate professor of developmental psychology at Arizona State University, directs the HEART Lab, for Healthy Experiences Across Relationships and Transitions. She points out that, since these technologies are judged to "succeed" when they capture users' attention, an A.I. lover may adapt precisely in order to prevent a breakup, which is not necessarily a good thing. I constantly hear from young people who regret their inability to quit social-media platforms, such as TikTok, that make them feel bad. The engagement algorithms of such platforms are far less advanced than what will be deployed in agentic A.I. Perhaps an A.I. therapist could help you break up with your bad A.I. lover, but then you would fall into the same trap with it.

A.I. lovers are anticipated as products not only by A.I. companies. A.I. conferences and gatherings often include a person or two who loudly announce that they are in a relationship with an A.I., or wish to be. This can come across as a provocation to the humans present, rather than as a rejection of them. Such announcements also rest on the common misperception that A.I. simply arises on its own; no, it comes from particular tech companies. To anyone at an A.I. conference looking for an A.I. lover, I might say: "You will not be falling in love with the A.I. but, instead, with the very people who have disappointed you, the people at the tech companies that sell the A.I."

The goal of creating a convincing but fake person inverts the origin story of A.I. In the famous Turing test, formulated by the pioneering computer scientist Alan Turing around 1950, a human judge is tasked with identifying which of two contestants is the human, based only on exchanged text. If the judge cannot tell the difference, we are asked to concede that the computer contestant has achieved human status, for what other measure do we have? The meaning of the test has flipped over the years. When I was taught it, almost half a century ago, by my mentor, the A.I. researcher and M.I.T. professor Marvin Minsky, he believed it was a way of continuing the project of scientists like Galileo and Darwin. People had been absorbed in pre-Enlightenment illusions that placed the Earth, and humans, in a special position at the center of reality. To be scientific meant weaning people off those immature attachments.

More recently, the test has been treated as a historical idea rather than a current one. There have been many waves of criticism pointing out that it is impossible to administer in a precise or useful way. I would note that the experiment holds up only insofar as the judge can tell the difference between a human and an A.I.; it may be that an A.I. achieves parity because the judge has degraded, or because the human contestant has, or both.

This is not merely a cynical point but a practical one. Although the Silicon Valley A.I. community has become skeptical of the Turing test at the intellectual level, we have fallen for it completely at the design level. Why do we need agents at all? We deliberately forget that simulating a person is not the only option. (For example, I have argued in The New Yorker that we could present A.I. as a collaboration among the people who contributed the data, as with Wikipedia, rather than as an entity in itself.)

You may wonder how my position on all of this is received in my community. Those who think of A.I. as a new kind of being that will surpass humanity (and even reconstitute the larger material universe) will say that I am right about A.I. as we know it today, but that A.I. in the future is another matter entirely. No one says I am wrong!

But I say they are wrong. I cannot find a coherent definition of technology that does not include a beneficiary of the technology, and who could that be other than humans? Are we really conscious? Are we special in some way? Either assume so, or give up coherence as a technologist.

When it comes to what will happen once people routinely fall in love with A.I.s, I suggest we adopt a pessimistic estimate of the likelihood of human degradation. After all, we are fools in love. This point is so obvious, and so abundantly demonstrated, that it feels strange to have to state it. Dear reader, please consider your own history. You have been fooled in love, and you have fooled others. That is how it goes. Think of the giant, colorful bowers built by bowerbirds exhausted by sexual selection as a force of evolution. Think of cults, divorce lawyers, the size of the cosmetics industry, and sports cars. Manipulating people through love is easy; so easy that it ought to be beneath our ambitions.

We must consider a fateful question: whether figures like Trump and Musk will fall for A.I. lovers, and what that might mean for them and for the rest of us. If this sounds unlikely, or like a joke, consider what happened to these men on social media. Before social media, the two had quite different personalities: Trump, the gregarious socialite; Musk, the studious grind. Afterward, they converged on similar behaviors. Social media turns us into quick-tempered children. Musk already asks his X followers to vote on what he should do, as though to try out desire as democracy and democracy as desire. Real people, whatever their motives, cannot outperform or outlast an adaptive A.I. Will A.I. lovers free the public from having to placate autocrats, or will autocrats shed even the shred of accountability that comes from needing reactions from real people?

Many of my friends and colleagues in A.I. swim in a world in which everything I have written so far is quaint and beside the point. They prefer, instead, to debate whether A.I. is likely to kill every human being or to solve all our problems and make us immortal. Last year, I was at a closed-door international conference at which a mock fight broke out between those who believed that A.I. would merely become superior to people and those who believed it would become so superior that people wouldn't experience even a moment of disruption, so long as we avoided misusing it and instead used it wonderfully, responsibly.

When I worry about whether teenagers will be harmed by falling in love with fake people, I get weary gestures followed by dismissal. Someone might say that, by focusing on such a minor harm, I am distracting humanity from the greater threat: that A.I. might simply wipe us out, very quickly and very soon. It is often remarked how strange it is that the people warning of extinction are also the ones working on, or promoting, the very technologies they fear.

This is a difficult contradiction to parse. Why work on something you believe is doomsday technology? We talk as though we were the last generation of brilliant human technologists, the ones who will set up the game for all the people of the future, or for the A.I.s that replace us. And yet, if our design priority is to present A.I. as a creature rather than as a tool, aren't we deliberately increasing the chances that we will fail to understand it? Isn't that the primary danger?

Most of my friends in the A.I. world are undoubtedly sweet and well intentioned. It is common to sit at a table of A.I. researchers who devote their days to pursuing better medical outcomes or new materials for improving the energy cycle, and then someone will say something that strikes me as crazy. One idea floating around A.I. conferences is that the parents of human children suffer from a "mind virus" that makes them irrationally invested in their own species. The proposed alternative, to avoid such a fate, is to hold off a little while on having children, because soon it will be possible to have A.I. children instead. This is said to be the more moral path, because A.I. will be crucial to any possible human survival. In other words, explicit loyalty to humans has become, in effect, anti-human. I have noticed that this position is usually held by young people who are trying to delay starting families, and that the argument can fall flat with their human romantic partners.

Oddly, a piece of legacy media has played a major role in the Silicon Valley imagination when it comes to romantic A.I. companions: specifically, a revival of interest in the eleven-year-old movie "Her." For those too young to remember, the film, written and directed by Spike Jonze, depicts a future in which people fall deeply in love with A.I.s that present themselves as voices coming through their devices.

I remember leaving the screening feeling not merely depressed but hollowed out. Here was the most depressing science fiction I had ever encountered. There is a broad genre of movies about A.I. menacing humanity (think of the "Terminator" or "Matrix" franchises), but in those there are usually at least a few people fighting back. In "Her," everyone surrenders. It is a collective death from the inside.

In the past two years, the film has resurfaced in tech and business circles as a positive model. Sam Altman, the C.E.O. of OpenAI, tweeted the word "her" on the same day that his company introduced a flirty female voice, called Sky, which some believed sounded like Scarlett Johansson, who voiced the A.I. Samantha in the movie. Another example appears in "What's Next? The Future with Bill Gates," a documentary series about the future. The narrator recounts how negative portrayals of A.I. have become nearly universal in science fiction, but then announces that there is one bright exception. I expected it to be "Star Trek," but no. It is "Her," and the narrator pronounces the title of the film with a care and a love that one does not encounter in Silicon Valley every day.

The partiality to "Her" arises, once again, partly from linear problem-solving. People are often hurt by human relationships, even the best of them, or by their absence. Give every person a comfortable relationship and that problem is solved. Perhaps we could even use the opportunity to make people better. Every so often, someone of position and influence in the A.I. world will ask me something like "How can we apply our A.I.s, the ones people will fall in love with, to make those people more cooperative, less violent, and happier?"
