Are agency and responsibility solely ascribable to humans? This chapter explores the question from legal and ethical perspectives. In addition to presenting important theories, the chapter uses arguments, counterarguments, and scenarios to clarify both the actual and the hypothetical ethical and legal situations governing a very particular type of advanced computer system: medical decision support systems (MDSS) that feature AI in their design. The author argues that today's MDSS must be categorized by more than type and function alone before any level of moral or legal responsibility can begin to be ascribed to them. As the scenarios demonstrate, various U.S. and U.K. legal doctrines appear to allow for the possibility of assigning specific types of agency, and thus specific types of legal responsibility, to some types of MDSS. The author concludes, however, that strong arguments for assigning moral agency and responsibility to such systems are still lacking.