16 September 2024
Prof. Dr. Burhan Pektaş

Beware of the misuse of artificial intelligence-supported algorithms! It can deepen social divisions…

Experts say that AI-powered technologies have the potential to facilitate social isolation by reducing face-to-face interactions, and that the AI algorithms used on social media platforms can contribute to echo chambers in which individuals are exposed only to information and opinions that align with their existing beliefs.

Noting that artificial intelligence-supported algorithms can be used to manipulate public opinion, spread misinformation and amplify harmful content, Prof. Dr. Burhan Pektaş said, “This can undermine trust in information sources and lead to social divisions and confusion.”

Prof. Dr. Burhan Pektaş, Head of the Department of Computer Engineering at the Üsküdar University Faculty of Engineering and Natural Sciences, assessed the question of whether artificial intelligence can intervene in people’s lives.

“AI-enabled technologies facilitate social isolation”

Prof. Dr. Burhan Pektaş stated that AI-enabled technologies such as virtual assistants and social media algorithms have the potential to facilitate social isolation by reducing face-to-face interactions and encouraging dependence on digital communication. He added: “AI algorithms used on social media platforms can contribute to the formation of echo chambers where individuals are exposed only to information and opinions that align with their existing beliefs. This can exacerbate social polarization and hinder constructive dialogue between different groups.”

“It can lead to social divisions and confusion”

Noting that AI-powered algorithms can be used to manipulate public opinion, spread misinformation and amplify harmful content, Prof. Dr. Burhan Pektaş said: “This can undermine trust in information sources and lead to social divisions and confusion. To mitigate these dangers, it is important to develop and implement robust ethical guidelines, regulations and accountability mechanisms for the responsible design, deployment and use of AI in social contexts.”

“It is important to promote awareness of the potential risks associated with AI technologies”

Explaining that promoting digital literacy, critical thinking skills and awareness of the potential risks associated with AI technologies can empower individuals to navigate social interactions in an increasingly AI-driven world, Prof. Dr. Burhan Pektaş said:

“Striking a balance between Artificial Intelligence (AI) and human life requires careful consideration of the benefits and risks of AI technologies, as well as the implementation of strategies to ensure that AI serves the best interests of humanity. This requires prioritizing ethical considerations in the design, development and deployment of AI systems, including ensuring transparency, fairness, accountability and respect for human rights throughout the AI lifecycle.

“Users should be included in the design process of artificial intelligence technologies”

“On the other hand, we must implement regulatory frameworks and standards to govern the responsible use of AI technologies. These frameworks should address issues such as data privacy, algorithmic bias, autonomous systems and the ethical implications of AI applications. We must also prioritize human needs, values and preferences in the design of AI systems, and it is important to involve all users in the design process to ensure that AI technologies are intuitive, user-friendly and compatible with human values and goals.”

“Designers have a moral responsibility to consider societal impacts”

Emphasizing that Artificial Intelligence (AI) is increasingly playing a role in decision-making processes with ethical implications, Prof. Dr. Burhan Pektaş said: “Designers, developers and engineers responsible for creating AI systems have a moral responsibility to ensure that these systems are designed ethically and with potential societal impacts in mind.”

“Steps should be taken to mitigate potential harms such as privacy violations and unintended consequences”

Stating that this includes addressing issues such as bias, fairness, transparency and accountability in the design process, Prof. Dr. Burhan Pektaş said, “Furthermore, those who deploy and use AI technologies bear moral responsibility for the consequences of their actions. This includes ensuring that AI systems are used responsibly and ethically and taking steps to mitigate potential harms such as discrimination, privacy violations and unintended consequences.”

“Who will bear moral responsibility for the actions of these systems?”

On the other hand, Prof. Dr. Burhan Pektaş noted that as artificial intelligence systems become more autonomous and gain the capacity to make decisions without human intervention, questions arise as to who will bear moral responsibility for the actions of these systems:

“It is crucial to establish clear lines of accountability and ensure that appropriate mechanisms are in place to address issues of responsibility and liability. AI systems often make decisions with ethical implications in areas such as healthcare, criminal justice and autonomous vehicles. Individuals and organizations involved in the development and deployment of AI systems have a moral responsibility to ensure that these systems make decisions consistent with ethical principles and values.

Overall, the intervention of AI in human life raises complex ethical questions of moral responsibility, requiring careful consideration and collaboration among various stakeholders to ensure that AI technologies are developed and used in a way that is compatible with ethical principles and promotes the well-being of individuals and society as a whole.”
