Show simple item record

dc.contributor.author: Li, Rita Yi Man
dc.contributor.author: Crabbe, M. James C.
dc.date.accessioned: 2022-05-24T08:56:16Z
dc.date.available: 2024-05-15T00:00:00Z
dc.date.available: 2022-05-24T08:56:16Z
dc.date.issued: 2022-05-15
dc.identifier.citation: Li RYM, Crabbe MJC (2022) 'Artificial intelligence robot safety: a conceptual framework and research agenda based on new institutional economics and social media', in Li RYM, Chau KW, Ho DCW (eds.). Current State of Art in Artificial Intelligence and Ubiquitous Cities, Singapore: Springer, pp. 41-61.
dc.identifier.isbn: 9789811907364
dc.identifier.doi: 10.1007/978-981-19-0737-1_3
dc.identifier.uri: http://hdl.handle.net/10547/625402
dc.description.abstract: According to "Huang's law", artificial intelligence (AI)-related hardware increases in power 4 to 10 times per year. AI can benefit various stages of real estate development, from planning and construction to occupation and demolition. However, Hong Kong's legal system currently lags behind these technological capabilities, and the field of AI safety in built environments is still in its infancy. Negligent design and production processes, irresponsible data management, questionable deployment, algorithm training, sensor design and/or manufacture, unforeseen consequences from multiple data inputs, and erroneous AI operation based on sensor or remote data can all lead to accidents. Yet, determining how legal rules should apply to liability for losses caused by AI systems takes time. Traditional product liability laws can apply to some systems, meaning that the manufacturer will bear responsibility for a malfunctioning part. That said, more complex cases will undoubtedly have to come before the courts to determine whether something unsafe should be the manufacturer's fault or the individual's fault, as well as who should receive the subsequent financial and/or non-financial compensation. Since AI adoption has an inevitable relationship with safety concerns, this project intends to shed light on responsible AI development and usage, with a specific focus on AI safety laws, policies, and people's perceptions. We will conduct a systematic literature review via the PRISMA approach to study academic perspectives on AI safety policies and laws, and data-mine publicly available content on social media platforms such as Twitter, YouTube, and Reddit to study societal concerns about AI safety in built environments. We will then research court cases and laws related to AI safety in 61 jurisdictions, in addition to policies that have been implemented globally. Two case studies will examine AI suppliers that sell AI hardware and software to users in the built environment. Another two case studies will be conducted on built environment companies (a contractor and Hong Kong International Airport) that use AI safety tools. The results obtained from social media, court cases, legislation, and policies will be discussed with local and international experts via a workshop, then released to the public to provide the international community and Hong Kong with unique policy and legal orientations.
dc.description.sponsorship: Grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. UGC/IIDS15/E01/19).
dc.language.iso: en
dc.publisher: Springer
dc.relation.url: https://link.springer.com/chapter/10.1007/978-981-19-0737-1_3
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: human-centred computing
dc.subject: urban development
dc.subject: urban planning
dc.subject: robotics
dc.subject: robot operating system
dc.subject: culturally competent robots
dc.subject: Subject Categories::H671 Robotics
dc.title: Artificial intelligence robot safety: a conceptual framework and research agenda based on new institutional economics and social media
dc.title.alternative: Current State of Art in Artificial Intelligence and Ubiquitous Cities
dc.type: Book chapter
dc.date.updated: 2022-05-24T08:50:59Z
dc.description.note: "Authors whose work is accepted for publication in a non-open access Springer or Palgrave Macmillan book are permitted to self-archive the accepted manuscript (AM), on their own personal website and/or in their funder or institutional repositories, for public release after an embargo period (see the table below)." https://www.springernature.com/gp/open-research/policies/book-policies — embargo for contributed volumes: 24 months.


Files in this item

Name: jcfinCh3_18+pages_AI+safety(1).pdf
Embargo: 2024-05-15
Size: 638.3 KB
Format: PDF
Description: author's accepted version

This item appears in the following Collection(s)


Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International