Artificial intelligence (AI) adoption is accelerating rapidly, with the global AI market projected to grow by more than 37 percent annually between now and 2030. The McKinsey Global Institute anticipates that 70 percent of business activities could incorporate AI technologies within the next ten years.
What’s driving this transformation? The promise of improved efficiency and bottom-line savings. AI delivers myriad benefits, including automating repetitive tasks, processing big data more efficiently, facilitating better decision-making, and helping medical practitioners detect health issues faster.
However, the widespread use of AI also raises significant ethical concerns. UNESCO warns that AI’s potential to embed biases, exacerbate existing inequalities, contribute to climate degradation, and curtail human rights poses genuine threats to humanity. Governments around the world are alarmed by AI’s potential for harm and misuse, including the spread of disinformation, but their efforts to regulate the technology have not kept pace with its rapid development or the new dangers it creates. As a result, leading AI companies operate under voluntary, unenforced safeguards while racing one another to release new AI products.
Given the lack of robust AI regulations and the technology’s many known and unanticipated dangers, if you’re considering a career in AI, your professional development should include training in its ethical use. The Tufts University online Master of Science in Computer Science (MSCS) program meets that need. It provides ethics-focused instruction in computer science and AI, producing leaders committed to computing solutions that are both innovative and morally responsible.
A New Era: Assessing the Need for Ethical AI
The need to integrate AI ethics into all businesses and applications has grown alongside AI’s spread. A recent IBM study shows that most companies understand this—75 percent of surveyed executives ranked AI ethics as important. However, those same executives conceded that they need to do more to safeguard against AI’s potential risks. Fewer than 20 percent strongly agree their organizations’ practices and actions related to AI ethics match (or exceed) their stated principles and values. No wonder the public is skeptical: only 40 percent of consumers, citizens, and employees surveyed trust companies to implement AI technologies ethically.
Bias and data privacy rank high among the ethical concerns. AI systems can deliver biased results. An investigation by ProPublica, for example, found that a criminal justice algorithm used in Broward County, Florida, mislabeled African-American defendants as “high risk” at nearly twice the rate it mislabeled white defendants. Bias can creep into AI algorithms in many ways. AI systems learn to make decisions from training data that often includes biased human decisions or reflects historical or social inequities. In other cases, the training data is incomplete, or particular groups are over- or under-represented.
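To make that concrete, here is a minimal, hypothetical Python sketch (not drawn from the ProPublica analysis or from the Tufts curriculum) of how an auditor might compare false positive rates across demographic groups; the records, group names, and function are invented purely for illustration.

```python
# A minimal sketch of one way to audit a classifier for the kind of disparity
# described above: comparing false positive rates ("labeled high risk but did
# not reoffend") across groups. The records below are invented for illustration.

from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("group_a", True,  False), ("group_a", True,  False), ("group_a", False, False),
    ("group_a", True,  True),
    ("group_b", True,  False), ("group_b", False, False), ("group_b", False, False),
    ("group_b", False, True),
]

def false_positive_rates(rows):
    """Return, per group, the share of non-reoffenders labeled high risk."""
    flagged = defaultdict(int)    # non-reoffenders flagged as high risk
    negatives = defaultdict(int)  # all non-reoffenders
    for group, predicted_high_risk, reoffended in rows:
        if not reoffended:
            negatives[group] += 1
            if predicted_high_risk:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives if negatives[g]}

print(false_positive_rates(records))
# A large gap between groups (here, one rate roughly double the other) signals
# the kind of bias described above and warrants a closer look at the training data.
```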
The privacy challenges are just as troubling. Facial recognition used for ticketing at airports, on cruise ships, and at theme parks raises the concern that people’s biometric data could be stolen and misused to impersonate them online or to create deepfake videos. AI algorithms also train on big data sets that can include sensitive personal information such as medical records and Social Security numbers. What controls prevent misuse of this data? And how secure are these systems from cybercriminals and other online malefactors? Not secure enough, at least in the case of the popular generative AI tool ChatGPT, where a data breach allowed users to see other users’ chat histories.
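As a rough illustration of one such control, the hypothetical Python sketch below masks anything shaped like a U.S. Social Security number before text enters a training corpus; the pattern, function name, and sample text are assumptions for demonstration only, and production pipelines rely on far more thorough de-identification tooling.

```python
# A hypothetical sketch of a pre-training "control": masking Social Security
# numbers before text is added to a training corpus. Real systems use far more
# thorough de-identification; this only illustrates the idea.

import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g., 123-45-6789

def mask_ssns(text: str) -> str:
    """Replace anything shaped like an SSN with a placeholder token."""
    return SSN_PATTERN.sub("[REDACTED-SSN]", text)

sample = "Patient note: follow-up scheduled. SSN on file: 123-45-6789."
print(mask_ssns(sample))
# -> "Patient note: follow-up scheduled. SSN on file: [REDACTED-SSN]."
```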
How Tufts’ “Engine for Good” Can Help
The Tufts School of Engineering is in the midst of a ten-year “Engine for Good” development campaign. Central to its efforts is the mission “to educate students committed to the innovative and ethical application of science and technology, and empower them to address the most pressing societal needs.” The campaign also aims to strengthen the bonds among students, faculty, and alumni; promote greater diversity; and create agile graduate programs, such as the online MSCS, that can adjust quickly to advances in computer science and the demands of the computing job market.
Preparing Ethical AI Leaders
The online Tufts MSCS program foregrounds ethics in artificial intelligence and other computer science applications through instruction in network security, machine learning best practices, and ethical data collection and management. Martin Allen, associate teaching professor of computer science and director of online programs at Tufts, focuses on ethics in technology; his lectures on bias in algorithms and other AI drivers give students the tools they need to unpack and resolve ethical dilemmas.
Tufts faculty aren’t just teaching AI ethics; they are using AI tools ethically in the real world for good. Dean of Graduate Education Karen Panetta and doctoral student Obafemi Jinadu recently developed drone-based AI systems that use thermal imaging to track elephants in their natural habitats, aiding conservation efforts. Associate Professor Valencia Koomson has contributed to an AI-based mobile app that uses a genetic algorithm to help users in developing countries formulate personalized weight control solutions.
The AI and machine learning landscape is constantly evolving. To make a difference, ethical computer scientists must lead these developments. As industry leader Max Tegmark explains: “The biggest threat from artificial general intelligence (AGI) is not that it’s going to turn evil, like in some silly movie. The worry is it’s going to turn really competent and accomplish goals that aren’t aligned with our goals.” To prevent this, Tegmark states that humans need to win the “wisdom race” with artificial intelligence, which will require effective, ethical AI education. Tufts maintains a solid commitment to staying ahead of the curve, regularly updating its computer science curriculum to reflect industry developments and ethical concerns.
Explore the Tufts Online MS in Computer Science
By enrolling in Tufts’ online MSCS program, you can gain the skills and knowledge to become an “engine for good” on the front lines of AI development. The 100 percent online program, which can be completed in under two years, builds cross-disciplinary skills and insights that students synthesize in a culminating two-course, hands-on capstone project. Enrollment advisors, faculty, career counselors, and student support specialists assist you from application to graduation and beyond.
The Tufts online MSCS curriculum offers a solid grounding in computer science theory and programming practice and covers a broad range of computer science disciplines, including algorithms, AI, machine learning, software engineering, computer security, and database systems. In addition, the program develops soft skills such as communication and critical thinking. The online format allows students to study at their own pace: each week, they complete independent online learning modules that prepare them for live online class sessions, and online office hours provide additional opportunities to interact with faculty. Ethics-centered instruction focuses students on the skills and principles needed to guide socially responsible computing, whether in AI or other emerging technologies.
Contact an enrollment advisor today to learn more about the admissions process, the program, and tuition and financial aid.