About TechHive AI
TechHive AI is an innovative learning program that teaches high school students about cybersecurity and the ethics of AI. While AI holds great promise to positively transform many areas of society, ill-conceived deployments pose significant risks to people's rights and safety.
TechHive AI combines STEM and the social sciences to teach students about cybersecurity and AI. Using ethical AI principles (e.g., fairness, accountability, transparency, security), students will explore potential technology and governance strategies to mitigate the harms and maximize the benefits of AI in society.
We are currently recruiting teens like you to learn and develop techniques that address and mitigate AI cybersecurity threats in ways that adhere to ethical AI principles. Participants will also have opportunities to share these techniques with their communities, school districts, and local policymakers. This project is funded through a grant from the National Science Foundation.
Apply
Please complete the application form by February 15, 2021. This information will be used to select participants for the program. Please note that this is a non-binding form; completing it does not obligate you to enroll in the program. TechHive AI is a free program that will take place online (through video conferencing) on Tuesday and Thursday afternoons, from 3:45 to 5:15 pm PST, starting March 2 and continuing through May 18. (No sessions will take place April 5–9, 2021.)
More Information
Artificial intelligence (AI) presents remarkable possibilities, but also looming threats to cybersecurity. AI can aid the detection of malicious attacks, but it can also be used to make attacks more effective. As AI-enabled systems enter our most critical social institutions, such as those that screen job applications, predict criminal activity, or determine who receives loans, ill-conceived deployments and efforts to subvert their intended uses pose serious threats to fundamental rights. As AI becomes embedded within the systems and objects of our physical world, mitigating bias, discrimination, and threats to public safety becomes paramount. It is therefore critical that AI systems be built with attention to fairness, accountability, transparency, and ethics (FATE) principles. Meeting this challenge requires more than a technological solution: it requires attention to each of the physical, social, and technological spheres AI touches, bringing together the fields of AI, cybersecurity, and the social sciences to understand and apply FATE principles effectively.
To support this goal, this project is developing a novel educational approach, TechHive AI, that recruits teens from diverse communities to learn and develop techniques for addressing AI cybersecurity threats in ways that adhere to FATE principles. Participants will also practice these techniques (technical and non-technical) and have opportunities to share them with their communities, school districts, and policymakers. The project will lead to: (1) a transdisciplinary curriculum model for teaching cybersecurity and AI; (2) guidelines for developing effective online and hybrid learning models that integrate STEM and social science curricula with FATE principles in cybersecurity and AI; and (3) a research report detailing the effectiveness of this transdisciplinary, experiential education model in supporting high schoolers' development of 21st-century workforce skills.
This project is funded through a grant from the National Science Foundation.
###
The Center for Information Technology Research in the Interest of Society (CITRIS) and the Banatao Institute drive interdisciplinary innovation for social good with faculty researchers and students from four University of California campuses – Berkeley, Davis, Merced, and Santa Cruz – along with public and private partners.
To learn more about CITRIS, sign up for our newsletter: bit.ly/SubscribeCITRIS