P31 | instance of | scholarly article | Q13442814
P819 | ADS bibcode | 2017Sci...356..183C |
P818 | arXiv ID | 1608.07187 |
P356 | DOI | 10.1126/SCIENCE.AAL4230 |
P8608 | Fatcat ID | release_tx326v534bhnpolxs7cd3rojx4 |
P698 | PubMed publication ID | 28408601 |
P50 | author | Arvind Narayanan | Q16442100 |
P50 | author | Joanna Bryson | Q47493123
P50 | author | Aylin Caliskan | Q87832196
P2860 | cites work | GloVe: Global Vectors for Word Representation | Q22827276 |
P2860 | cites work | Distributed Representations of Words and Phrases and their Compositionality | Q24731579
P2860 | cites work | Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination | Q30050262
P2860 | cites work | National differences in gender-science stereotypes predict national sex differences in science and math achievement | Q30437573
P2860 | cites work | Extracting semantic representations from word co-occurrence statistics: a computational study | Q38394264
P2860 | cites work | Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings | Q44794248
P2860 | cites work | Math = male, me = female, therefore math ≠ me. | Q44888631
P2860 | cites work | Measuring individual differences in implicit cognition: the implicit association test. | Q52564449
P2860 | cites work | Harvesting implicit group attitudes and beliefs from a demonstration web site | Q53464849
P2860 | cites work | Fairness through awareness | Q56812996
P433 | issue | 6334
P407 | language of work or name | English | Q1860 |
P921 | main subject | automation | Q184199 |
P921 | main subject | bias | Q742736
P921 | main subject | word embedding | Q18395344
P1104 | number of pages | 4
P304 | page(s) | 183-186
P577 | publication date | 2017-04-01
P1433 | published in | Science | Q192864 |
P1476 | title | Semantics derived automatically from language corpora contain human-like biases
P478 | volume | 356 |
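
The statements above can also be fetched programmatically from the Wikidata API. A minimal sketch, assuming the article's own item ID (not listed in this record; Q29045280 is used here only as a placeholder) and the standard wbgetentities action:

import requests

API = "https://www.wikidata.org/w/api.php"
ARTICLE_QID = "Q29045280"  # assumed item ID for this article; not shown in the record above

params = {
    "action": "wbgetentities",
    "ids": ARTICLE_QID,
    "props": "claims",
    "format": "json",
}
entity = requests.get(API, params=params).json()["entities"][ARTICLE_QID]

# Print the string-valued identifier statements listed above (DOI, arXiv ID, PubMed ID, ADS bibcode).
for prop in ("P356", "P818", "P698", "P819"):
    for stmt in entity["claims"].get(prop, []):
        print(prop, stmt["mainsnak"]["datavalue"]["value"])
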
Q62495367 | "It's hard to argue with a computer:" Investigating Psychotherapists' Attitudes towards Automated Evaluation |
Q46335746 | 'New Wilderness' Requires Algorithmic Transparency: A Response to Cantrell et al. |
Q58767392 | A Neural Network Framework for Cognitive Bias |
Q100712023 | Accelerating evidence-informed decision-making for the Sustainable Development Goals using machine learning |
Q47887487 | An AI stereotype catcher. |
Q90069541 | Anthropogenic biases in chemical reaction data hinder exploratory inorganic synthesis |
Q64068337 | Are You What You Read? Predicting Implicit Attitudes to Immigration Based on Linguistic Distributional Cues From Newspaper Readership; A Pre-registered Study |
Q58693486 | Artificial intelligence (AI) and global health: how can AI contribute to health in resource-poor settings? |
Q96577550 | Assessing the accuracy of automatic speech recognition for psychotherapy |
Q47173407 | Big Data in Public Health: Terminology, Machine Learning, and Privacy. |
Q90132336 | Building more accurate decision trees with the additive tree |
Q45939173 | Clinical judgement in the era of big data and predictive analytics. |
Q98905807 | Considering the Safety and Quality of Artificial Intelligence in Health Care |
Q98303406 | Cultural influences on word meanings revealed through large-scale semantic alignment |
Q91869734 | Deep learning predicts hip fracture using confounding patient and healthcare variables |
Q98192396 | Deep learning-based classification of posttraumatic stress disorder and depression following trauma utilizing visual and auditory markers of arousal and mood |
Q57465417 | Ethical governance is essential to building trust in robotics and artificial intelligence systems |
Q91524853 | Fair play in research and policy |
Q56001931 | Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data |
Q91875376 | Four equity considerations for the use of artificial intelligence in public health |
Q57167337 | From hype to reality: data science enabling personalized medicine |
Q98196875 | Gender stereotypes are reflected in the distributional structure of 25 languages |
Q91631418 | Judicial analytics and the great transformation of American Law |
Q56001932 | Like Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation |
Q64110291 | Machine behaviour |
Q89847182 | Network architectures supporting learnability |
Q90173773 | Neurocritical Care: Bench to Bedside (Eds. Claude Hemphill, Michael James) Integrating and Using Big Data in Neurocritical Care |
Q59792376 | On the Development of a Computer-Based Tool for Formative Student Assessment: Epistemological, Methodological, and Practical Issues |
Q124742470 | Online images amplify gender bias |
Q89589357 | Racial disparities in automated speech recognition |
Q62489888 | Societal Issues Concerning the Application of Artificial Intelligence in Medicine |
Q57406176 | Society-in-the-loop: programming the algorithmic social contract |
Q38373750 | Speaking two "Languages" in America: A semantic space analysis of how presidential candidates and their supporters represent abstract political concepts differently. |
Q110949500 | Syntax and prejudice: ethically-charged biases of a syntax-based hate speech recognizer unveiled |
Q47648627 | The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity |
Q92691502 | The digitization of organic synthesis |
Q58798689 | The growing ubiquity of algorithms in society: implications, impacts and innovations |
Q91021415 | The impact of artificial intelligence on the current and future practice of clinical cancer genomics |
Q92126990 | The relationship between implicit intergroup attitudes and beliefs |
Q90591817 | Toward a unified framework for interpreting machine-learning models in neuroimaging |
Q38661877 | Transhumanism: How Far Is Too Far? |
Q121301533 | Unsupervised discovery of non-trivial similarities between online communities |
Q102634470 | Using Aspect-Based Analysis for Explainable Sentiment Predictions |
Q89551348 | Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers |
Q64125239 | Word embeddings quantify 100 years of gender and ethnic stereotypes |
Q57475887 | askMUSIC: Leveraging a Clinical Registry to Develop a New Machine Learning Model to Inform Patients of Prostate Cancer Treatments Chosen by Similar Men |
Q113307029 | text2map: R Tools for Text Matrices |
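
The list above appears to collect items that cite this article, i.e. items whose "cites work" (P2860) statement points at it. A minimal sketch of the same reverse lookup against the Wikidata SPARQL endpoint, again assuming Q29045280 as a placeholder for the article's item ID:

import requests

ENDPOINT = "https://query.wikidata.org/sparql"
ARTICLE_QID = "Q29045280"  # assumed; substitute the article's actual item ID

query = f"""
SELECT ?citing ?citingLabel WHERE {{
  ?citing wdt:P2860 wd:{ARTICLE_QID} .
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
"""

resp = requests.get(ENDPOINT,
                    params={"query": query, "format": "json"},
                    headers={"User-Agent": "citation-listing-sketch/0.1"})
resp.raise_for_status()

# Each binding is one citing item: its entity URI and English label.
for row in resp.json()["results"]["bindings"]:
    print(row["citing"]["value"], "|", row.get("citingLabel", {}).get("value", ""))
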