- Body, Mind, Space, and Spirit: Margaret Rigg Art Collection (8:00 am)
- 2021 MFA Sculpture Thesis Exhibition (11:00 am)
- Oceans Past, Present & Future: Historical Ecology & Circumpolar Fisheries Management (12:00 pm)
- Tai Chi at Marsh Chapel (12:00 pm)
- Study Smarter, Not Harder: Effective Note Taking & Study Strategies Workshop (Student Success Week) (1:30 pm)
- Coping Through Connection (3:00 pm)
- Facing an Uncertain Job Market: Networking with COM Alums (3:00 pm)
- How To Rock Your Resume (3:00 pm)
- Spark! Info Session (3:00 pm)
- AI Malpractice (3:30 pm)
- The Role of Industrial Policy in Global Development: Evidence from China and India (4:00 pm)
- Perfect Your Pitch Workshop (4:00 pm)
- Employer Info Session: Peace Corps (4:00 pm)
- Healthy Relationships (4:00 pm)
- Who We Are & Who We’re Becoming (4:00 pm)
- Career Fair Prep Panel (5:00 pm)
- Funded Internships 101 (5:00 pm)
- BBCC: ClearView Info Session (5:30 pm)
- International Student InterVarsity (6:30 pm)
- Virtual Sargent Choice Test Kitchen (8:00 pm)
AI Malpractice
Should AI developers be held to a professional standard of care? Recent scholarship has argued that those who build AI systems owe special duties to the public to promote values such as fairness, transparency, and accountability. For example, many commentators have expressed concern that AI systems conceal improper biases and fail to offer a meaningful opportunity to contest harmful results. Yet there is little agreement as to what the content of those duties should be, nor is there a good framework for resolving conflicting views as a matter of law.

This Cyber Alliance talk, featuring Bryan Choi, Assistant Professor of Law and Computer Science & Engineering at The Ohio State University, explores whether professional malpractice law can offer a useful framework for staging the discourse on AI ethics. Malpractice doctrine establishes an alternate standard of care, the customary care standard, that substitutes for the ordinary reasonable care standard. That substitution is needed in fields like medicine or law, where the service is essential, a uniform duty of care is impossible to define, and some accountability is nonetheless desirable. The customary care standard offers a more flexible approach, tolerating a range of professional practices above a minimum expectation of competence. It is especially apt where the field of practice is hotly contested or rapidly evolving.

Prof. Choi has argued elsewhere that conventional software development fits this bill. At the same time, there are reasons to doubt that AI developers should be treated like software developers. Instead, lawmakers should split the difference. On the one hand, the core skills used to construct modern AI systems are sufficiently mature and well established to support an objective baseline of reasonable care. On the other hand, proposals such as explainability, algorithmic risk assessment, and bias correction are likely to remain heavily contested territory, and thus should be placed within the professional malpractice framework.

There will be time for casual conversation before and after the presentation. Please email Mayank Varia at varia@bu.edu to request Zoom invite details.
- When: 3:30 pm to 5:00 pm on Wednesday, March 3, 2021
- Location: Zoom