In 15 TED Talk-style presentations, MIT faculty recently discussed their pioneering research that incorporates social, ethical, and technical considerations and expertise, each supported by seed grants established by the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing. The call for proposals last summer drew nearly 70 applications. A committee with representatives from every MIT school and the college convened to select the winning projects, which received up to $100,000 in funding.
“SERC is committed to driving progress at the intersection of computing, ethics, and society. The seed grants are designed to spark bold and creative thinking around the complex challenges and possibilities in this space,” said Nikos Trichakis, co-associate dean of SERC and the J.C. Penney Professor of Management. “With the MIT Ethics of Computing Research Symposium, we felt it was important not only to highlight the breadth and depth of the research shaping the future of ethical computing, but also to invite the community to be part of the conversation.”
“What you see here is a kind of collective judgment by the community on the most exciting research in the social and ethical responsibilities of computing being done at MIT right now,” said Caspar Hare, co-associate dean of SERC and professor of philosophy.
The daylong symposium on May 1 was organized around four key themes: responsible health-care technology, artificial intelligence governance and ethics, technology in society and civic engagement, and digital inclusion and social justice. Speakers delivered thought-provoking presentations on a wide range of topics, including algorithmic bias, data privacy, the social implications of artificial intelligence, and the evolving relationship between humans and machines. The event also featured a poster session where student researchers showcased projects they had worked on throughout the year as SERC Scholars.
Highlights of the symposium in each of the theme areas, many of which are available to watch on YouTube, included:
Making the kidney transplant system more equitable
Policies regulating the organ transplant system in the United States are made by a national committee and often take more than six months to create, then years to implement, a timeline that many people on the waiting list simply cannot survive.
Dimitris Bertsimas, vice provost for open learning, associate dean of business analytics, and Boeing Professor of Operations Research, shared his latest work in analytics for fair and efficient kidney transplant allocation. Bertsimas's new algorithm evaluates criteria such as geographic location, mortality, and age in just 14 seconds, a monumental shift from the usual six hours.
Bertsimas and his team work closely with the United Network for Organ Sharing (UNOS), a nonprofit that manages most of the national donation and transplant system through a contract with the federal government. During his presentation, Bertsimas shared a video from James Alcorn, senior policy strategist at UNOS, who offered this poignant summary of the new algorithm's impact:
“This optimization radically changes the turnaround time for evaluating these different simulations of policy scenarios. It used to take us a couple of months to examine a handful of different policy scenarios, and now it takes a matter of minutes to examine thousands and thousands of scenarios. We are able to make these changes much more rapidly, which means we can improve the system for transplant candidates that much more quickly.”
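To give a flavor of the kind of criteria-weighted matching such an allocation policy involves, here is a minimal sketch, not the actual UNOS or Bertsimas model, that scores hypothetical candidate-donor pairs on geography, mortality risk, and age mismatch with made-up weights and solves the resulting assignment problem:

```python
# Illustrative sketch only -- not the actual UNOS/Bertsimas optimization model.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# Hypothetical candidates and donor kidneys with toy attributes.
n_candidates, n_kidneys = 6, 4
distance_km = rng.uniform(10, 2000, size=(n_candidates, n_kidneys))   # geography
mortality_risk = rng.uniform(0.05, 0.6, size=n_candidates)            # waitlist mortality
age_mismatch = rng.uniform(0, 40, size=(n_candidates, n_kidneys))     # donor/candidate age gap

# Made-up weights; a real policy simulation would calibrate these carefully.
w_dist, w_mort, w_age = 1.0, 3000.0, 20.0

# Lower cost = more desirable match; higher mortality risk lowers the cost,
# which prioritizes sicker candidates.
cost = w_dist * distance_km - w_mort * mortality_risk[:, None] + w_age * age_mismatch

rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
for c, k in zip(rows, cols):
    print(f"candidate {c} receives kidney {k} (match cost {cost[c, k]:.1f})")
```

The speed gain Alcorn describes comes from being able to re-run this kind of optimization over many candidate policies in minutes; the weights and attributes above are purely illustrative.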
The ethics of AI-generated social media content
As AI-generated content becomes more prevalent on social media platforms, what are the implications of disclosing (or not disclosing) that any part of a post was created by AI? Adam Berinsky, the Mitsui Professor of Political Science, and Gabrielle Péloquin-Skulski, a doctoral student in the Department of Political Science, explored this question in a session that examined recent studies on the impact of various labels on AI-generated content.
In a series of surveys and experiments affixing labels to AI-generated posts, the researchers examined how specific words and descriptions affected users' perception of deception, their intent to engage with the post, and, ultimately, whether they believed the post was true or false.
“The big takeaway from our initial set of findings is that one size doesn't fit all,” said Péloquin-Skulski. “We found that labeling AI-generated images with a process-oriented label reduces belief in both false and true posts. This is quite problematic, because labeling is intended to reduce people's belief in false information, not necessarily true information.”
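The pattern Péloquin-Skulski describes shows up in a simple cross-tabulation of belief rates by label condition and actual post veracity. The following is a minimal sketch using synthetic responses and hypothetical field names, not the study's data or analysis code:

```python
# Synthetic example only -- illustrates the belief-rate comparison, not the study's dataset.
from statistics import mean

# Each record: the label condition a respondent saw, whether the post was
# actually true, and whether the respondent rated it as true (1) or false (0).
responses = [
    {"label": "none",    "post_true": True,  "believed": 1},
    {"label": "none",    "post_true": False, "believed": 1},
    {"label": "process", "post_true": True,  "believed": 0},
    {"label": "process", "post_true": False, "believed": 0},
    {"label": "process", "post_true": True,  "believed": 1},
    {"label": "none",    "post_true": False, "believed": 0},
    # ... many more respondents in a real experiment
]

def belief_rate(label, post_true):
    """Share of respondents in a (label, veracity) cell who believed the post."""
    cell = [r["believed"] for r in responses
            if r["label"] == label and r["post_true"] == post_true]
    return mean(cell) if cell else float("nan")

# A process-oriented label that lowers belief in both rows of the "process"
# condition reproduces the problem the researchers describe.
for label in ("none", "process"):
    for post_true in (True, False):
        kind = "true" if post_true else "false"
        print(f"label={label:<8} actually {kind} posts: belief rate {belief_rate(label, post_true):.2f}")
```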
Using AI to improve civil discourse online
“Our research is motivated by the fact that people increasingly want a greater say in the organizations and communities they belong to,” Lily Tsai explained in a session on experiments with generative AI and the future of digital democracy. Tsai, the Ford Professor of Political Science and director of the MIT Governance Lab, is conducting ongoing research with Alex Pentland, professor of media arts and sciences, and a larger team.
Online deliberative platforms have recently grown in popularity in the United States, in both public and private settings. Tsai explained that technology now makes it possible for everyone to have a say, but that this can be overwhelming or even feel unsafe. First, too much information is available, and second, online discourse has become increasingly uncivil.
The group focuses on “how we can build on existing technologies and improve them with rigorous, interdisciplinary research, and how we can innovate by integrating generative AI to enhance the benefits of online spaces for deliberation.” They have developed their own AI-integrated platform for deliberative democracy, deliberation.io, and rolled out four initial modules. All of the studies have been lab-based so far, but the team is also planning a series of field studies, the first of which will be in partnership with the government of the District of Columbia.
Tsai told the audience, “If you don't take anything else away from this presentation, I hope you'll take away this: we should all be demanding that the technologies being developed are evaluated for whether they have positive downstream outcomes, rather than simply focusing on maximizing the number of users.”
A public think tank that considers all aspects of AI
When Catherine D'Ignazio, associate professor of urban science and planning, and Nikko Stevens, a postdoc at the Data + Feminism Lab at MIT, initially submitted their funding proposal, they did not intend to develop a think tank, but rather a framework, one articulating how artificial intelligence and machine learning could incorporate community methods and use participatory design.
In the end, they created Liberatory AI, which they describe as a “public think tank about all aspects of AI.” D'Ignazio and Stevens gathered 25 researchers from a diverse range of institutions and disciplines who wrote more than 20 position papers examining the most recent academic literature on AI systems and engagement. They intentionally grouped the papers into three distinct themes: the corporate AI landscape, dead ends, and ways forward.
“Instead of waiting for OpenAI or Google to invite us to participate in the development of their products, we have come together to contest the status quo, think bigger, and reorganize resources in this system in the hopes of a broader societal transformation,” said D'Ignazio.