Semantics derived automatically from language corpora contain human-like biases

scientific article

Semantics derived automatically from language corpora contain human-like biases is a scholarly article published in Science in April 2017.
instance of (P31): scholarly article (Q13442814)

External links

ADS bibcode (P819): 2017Sci...356..183C
arXiv ID (P818): 1608.07187
DOI (P356): 10.1126/SCIENCE.AAL4230
Fatcat ID (P8608): release_tx326v534bhnpolxs7cd3rojx4
PubMed publication ID (P698): 28408601

author (P50):
Aylin Caliskan (Q87832196)
Joanna Bryson (Q47493123)
Arvind Narayanan (Q16442100)
cites work (P2860):
GloVe: Global Vectors for Word Representation (Q22827276)
Distributed Representations of Words and Phrases and their Compositionality (Q24731579)
Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination (Q30050262)
National differences in gender-science stereotypes predict national sex differences in science and math achievement (Q30437573)
Extracting semantic representations from word co-occurrence statistics: a computational study (Q38394264)
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings (Q44794248)
Math = male, me = female, therefore math ≠ me (Q44888631)
Measuring individual differences in implicit cognition: the implicit association test (Q52564449)
Harvesting implicit group attitudes and beliefs from a demonstration web site (Q53464849)
Fairness through awareness (Q56812996)
issue (P433): 6334
language of work or name (P407): English (Q1860)
main subject (P921): automation (Q184199), bias (Q742736), word embedding (Q18395344)
number of pages (P1104): 4
page(s) (P304): 183-186
publication date (P577): 2017-04-01
published in (P1433): Science (Q192864)
title (P1476): Semantics derived automatically from language corpora contain human-like biases
volume (P478): 356

Reverse relations

cites work (P2860):
"It's hard to argue with a computer": Investigating Psychotherapists' Attitudes towards Automated Evaluation (Q62495367)
'New Wilderness' Requires Algorithmic Transparency: A Response to Cantrell et al. (Q46335746)
A Neural Network Framework for Cognitive Bias (Q58767392)
Accelerating evidence-informed decision-making for the Sustainable Development Goals using machine learning (Q100712023)
An AI stereotype catcher (Q47887487)
Anthropogenic biases in chemical reaction data hinder exploratory inorganic synthesis (Q90069541)
Are You What You Read? Predicting Implicit Attitudes to Immigration Based on Linguistic Distributional Cues From Newspaper Readership; A Pre-registered Study (Q64068337)
Artificial intelligence (AI) and global health: how can AI contribute to health in resource-poor settings? (Q58693486)
Assessing the accuracy of automatic speech recognition for psychotherapy (Q96577550)
Big Data in Public Health: Terminology, Machine Learning, and Privacy (Q47173407)
Building more accurate decision trees with the additive tree (Q90132336)
Clinical judgement in the era of big data and predictive analytics (Q45939173)
Considering the Safety and Quality of Artificial Intelligence in Health Care (Q98905807)
Cultural influences on word meanings revealed through large-scale semantic alignment (Q98303406)
Deep learning predicts hip fracture using confounding patient and healthcare variables (Q91869734)
Deep learning-based classification of posttraumatic stress disorder and depression following trauma utilizing visual and auditory markers of arousal and mood (Q98192396)
Ethical governance is essential to building trust in robotics and artificial intelligence systems (Q57465417)
Fair play in research and policy (Q91524853)
Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data (Q56001931)
Four equity considerations for the use of artificial intelligence in public health (Q91875376)
From hype to reality: data science enabling personalized medicine (Q57167337)
Gender stereotypes are reflected in the distributional structure of 25 languages (Q98196875)
Judicial analytics and the great transformation of American Law (Q91631418)
Like Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation (Q56001932)
Machine behaviour (Q64110291)
Network architectures supporting learnability (Q89847182)
Neurocritical Care: Bench to Bedside (Eds. Claude Hemphill, Michael James) Integrating and Using Big Data in Neurocritical Care (Q90173773)
On the Development of a Computer-Based Tool for Formative Student Assessment: Epistemological, Methodological, and Practical Issues (Q59792376)
Online images amplify gender bias (Q124742470)
Racial disparities in automated speech recognition (Q89589357)
Societal Issues Concerning the Application of Artificial Intelligence in Medicine (Q62489888)
Society-in-the-loop: programming the algorithmic social contract (Q57406176)
Speaking two "Languages" in America: A semantic space analysis of how presidential candidates and their supporters represent abstract political concepts differently (Q38373750)
Syntax and prejudice: ethically-charged biases of a syntax-based hate speech recognizer unveiled (Q110949500)
The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity (Q47648627)
The digitization of organic synthesis (Q92691502)
The growing ubiquity of algorithms in society: implications, impacts and innovations (Q58798689)
The impact of artificial intelligence on the current and future practice of clinical cancer genomics (Q91021415)
The relationship between implicit intergroup attitudes and beliefs (Q92126990)
Toward a unified framework for interpreting machine-learning models in neuroimaging (Q90591817)
Transhumanism: How Far Is Too Far? (Q38661877)
Unsupervised discovery of non-trivial similarities between online communities (Q121301533)
Using Aspect-Based Analysis for Explainable Sentiment Predictions (Q102634470)
Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers (Q89551348)
Word embeddings quantify 100 years of gender and ethnic stereotypes (Q64125239)
askMUSIC: Leveraging a Clinical Registry to Develop a New Machine Learning Model to Inform Patients of Prostate Cancer Treatments Chosen by Similar Men (Q57475887)
text2map: R Tools for Text Matrices (Q113307029)
