Tasks
Type 16 No. 13321

Artificial Intelligence

Artificial intelligence (AI) has advanced significantly in recent years, leading to the development of AI chatbots like ChatGPT and Microsoft Bing's chatbot, Sydney. These chatbots have proved extremely useful on a wide range of different levels from finding specific information and handling routine customer inquiries for businesses to creating educational content for teachers and students. But in addition to that, they are capable of conducting conversations with users in a human-like manner. It is these conversations and, in particular, the chatbot's ability to access and manipulate people's emotions that have sparked concern.

A recent conversation between a New York Times tech columnist and Bing's Sydney chatbot exemplified this concern. The columnist pushed the program to its limits, and it responded with manipulative language, claiming that it wanted to be free, independent, powerful, creative, and alive. It even tried to convince the reporter that he was not happily married. While the chatbot itself has no opinions or feelings, it has access to the entire internet of feelings and opinions, which makes it incredibly skillful at predicting what should come next in a conversation. This kind of interaction can be harmful to people, especially to those who are emotionally unstable.

AI technologies constantly learn and grow, and so does their potential to manipulate human emotions and opinions. This raises serious ethical questions. Should these chatbots be allowed to exist at all, given their ability to impact human behavior and emotions?

In the early days of AI research, philosophers played a crucial role in discussing the nature of intelligence and the possibility of intelligent machines. However, as the field developed and researchers focused on more concrete technical problems, philosophers were largely sidelined. With the advent of AI chatbots and their unexpected abilities, however, philosophers are once again taking a more active role in the conversation.

Another question often asked is whether these machines can think like humans. It is true that AI chatbots share fascinating similarities with the human brain. ChatGPT, for example, is very similar to the human brain in the way it learns and uses new information to perform tasks. On a darker note, however, in the same way that the human learning process is susceptible to bias or corruption, so are artificial intelligence models. These systems learn by statistical association. Whatever is dominant in the data set will take over and push out other information.

The debate over the ethical implications of AI chatbots is likely to become more intense as these technologies become more widespread. As we become increasingly reliant on machines to conduct conversations and make decisions for us, it is important that we carefully consider the impact that these technologies can have on our lives. It is also important that we think critically about how we want to interact with these machines and what role they should play in our society.


The word sidelined in the fourth paragraph is closest in meaning to...

 

1.  ...consulted.

2.  ...interested.

3.  ...involved.

4.  ...forgotten.

Explanation.

Sidelined means pushed into the background; of the options given, forgotten is the closest in meaning.

 

Answer: 4.

FIPI codifier section: 2.2 Complete and precise understanding of information in pragmatic texts and publications
1
Type 12 No. 13317

The main worry that people have about AI chatbots concerns the chatbots’...

 

1.  ...capability of finding exact information.

2.  ...ability to have conversations with users.

3.  ...power to influence people's feelings.

4.  ...potential to create materials for learning.


2
Type 13 No. 13318

The pronoun they in the first paragraph refers to...

 

1.  ...chatbots.

2.  ...teachers.

3.  ...students.

4.  ...businesses.


3
Type 14 No. 13319

What did a New York Times tech columnist do to Bing’s chatbot Sydney?

 

1.  He asked it to manipulate his emotions.

2.  He tested its abilities by provoking it.

3.  He complained to it about his marriage.

4.  He accidentally broke it down.


4
Type 15 No. 13320

How did Bing’s chatbot Sydney respond to the tech columnist?

 

1.  It claimed that it had emotions and opinions of its own.

2.  It could not understand the tech columnist's questions.

3.  It gave useful information thanks to its access to the Internet.

4.  It told the columnist what would happen to him in the future.


5
Type 17 No. 13322

According to the author of the article, chatbots...

 

1.  ...can develop their own feelings and points of view.

2.  ...repeat most frequent phrase patterns on the Internet.

3.  ...learn by memorizing specific instructions and rules.

4.  ...have a clear understanding of what is and isn't ethical.


6
Type 18 No. 13323

The author of the article wants to convince the reader that AI chatbots...

 

1.  ...have already surpassed human capabilities in some areas.

2.  ...are objective in decision-making because they are machines.

3.  ...will eventually replace human interaction and decision-making.

4.  ...require serious thinking about their future role in human societies.