Abstract: Bidirectional Encoder Representations from Transformers (BERT) is a transformer-based machine learning technique for natural language processing (NLP) pre-training developed by Google. BERT was created and published in 2018 by Jacob Devlin and his colleagues at Google. In 2019, Google announced that it had begun leveraging BERT in its search engine, and by late 2020 it was using BERT in almost every English-language query. A 2020 literature survey concluded that "in a little over a year, BERT has become a ubiquitous baseline in NLP experiments", counting over 150 research publications analyzing and improving the model. The original English-language BERT was released in two model sizes: (1) BERT_BASE, with 12 encoder layers and 12 bidirectional self-attention heads, and (2) BERT_LARGE, with 24 encoder layers and 16 bidirectional self-attention heads. Both models were pre-trained on unlabeled data extracted from the BooksCorpus (800M words) and English Wikipedia (2,500M words).
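The two model sizes correspond to the 110M- and 340M-parameter figures recorded under "data size" below. As a minimal sketch of how the pre-trained masked language model is typically queried, assuming the Hugging Face transformers library and its bert-base-uncased checkpoint (the original google-research/bert repository linked below is TensorFlow-based, so this is an illustration rather than the reference implementation):

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

# bert-base-uncased: 12 encoder layers, 12 attention heads, ~110M parameters
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Locate the [MASK] token and take the highest-scoring vocabulary entry.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))  # typically prints "paris"
```

During pre-training this same masked-token objective is what BERT learns to solve, which is why a single forward pass suffices for fill-in-the-blank queries like the one above.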
masked language model | Q24868018 |
transformer | Q85810444 |
large language model | Q115305900 |
P973 | described at URL | https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html |
P973 | described at URL | https://devopedia.org/bert-language-model |
P9100 | GitHub topic | bert |
P2671 | Google Knowledge Graph ID | /g/11h75tkhxs |
P856 | official website | https://arxiv.org/abs/1810.04805 |
P1324 | source code repository URL | https://github.com/google-research/bert |
P275 | copyright license | Apache Software License 2.0 | Q13785927 |
P6216 | copyright status | copyrighted | Q50423863 |
P3575 | data size | 110000000 |
P3575 | data size | 340000000 |
P1343 | described by source | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Q57267388 |
P178 | developer | Google Research | Q28943742 |
P366 | has use | natural language processing | Q30642 |
P571 | inception | 2018 |
P138 | named after | Bert | Q584184 |
Q108039873 | A Simple Yet Strong Pipeline for HotpotQA |
Q104418048 | An Empirical Study of Using Pre-trained BERT Models for Vietnamese Relation Extraction Task at VLSP 2020 |
Q61726888 | Assessing BERT's Syntactic Abilities |
Q118823928 | Assessing the impact of Word Embeddings for Relation Prediction: An Empirical Study |
Q65924660 | Augmenting Chinese WordNet semantic relations with contextualized embeddings |
Q112084338 | Automated Crossword Solving |
Q85528867 | Automatic Fact-guided Sentence Modification |
Q70186975 | BERT for Named Entity Recognition in Contemporary and Historic German |
Q57267388 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding |
Q116250705 | Biographical Semi-Supervised Relation Extraction Dataset |
Q112075162 | BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions |
Q112254431 | Code and Named Entity Recognition in StackOverflow |
Q118894562 | Combining Semantic Web and Machine Learning for Auditable Legal Key Element Extraction |
Q62101883 | Combining embedding methods for a word intrusion task |
Q70188422 | Combining embedding methods for a word intrusion task |
Q119525793 | CrossRE: A Cross-Domain Dataset for Relation Extraction |
Q70444897 | Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings |
Q109943627 | Does Typological Blinding Impede Cross-Lingual Sharing? |
Q115698532 | Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing |
Q125749900 | Entity Linking for Short Text Using Structured Knowledge Graph via Multi-Grained Text Matching |
Q112119127 | Entity Linking in 100 Languages |
Q125947627 | Event Extraction for Portuguese: A QA-Driven Approach Using ACE-2005 |
Q113770518 | Find the Funding: Entity Linking with Incomplete Funding Knowledge Bases |
Q109283586 | Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics |
Q113663041 | Harnessing privileged information for hyperbole detection |
Q126087116 | Image classification with symbolic hints using limited resources |
Q123260150 | Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking |
Q118809860 | Knowledge graph data enrichment based on a software library for text mapping to the Sustainable Development Goals |
Q125422072 | Knowledge graphs for empirical concept retrieval |
Q67482086 | Language Models as Knowledge Bases? |
Q122922648 | Large language models converge toward human-like concept organization |
Q84876760 | Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering |
Q112667375 | MAGNET: Multi-Label Text Classification using Attention-based Graph Neural Network |
Q113662028 | MOVER: Mask, Over-generate and Rank for Hyperbole Generation |
Q109944224 | Multi-Hop Fact Checking of Political Claims |
Q84078340 | Multi-hop Reading Comprehension through Question Decomposition and Rescoring |
Q118867783 | NASTyLinker: NIL-Aware Scalable Transformer-Based Entity Linker |
Q123020081 | Native Language Prediction from Gaze: a Reproducibility Study |
Q109285514 | Revealing the Dark Secrets of BERT |
Q109283441 | Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference |
Q85124231 | Select, Answer and Explain: Interpretable Multi-hop Reading Comprehension over Multiple Documents |
Q118814258 | Semi-supervised Construction of Domain-specific Knowledge Graphs |
Q65933760 | Sense Vocabulary Compression through the Semantic Knowledge of WordNet for Neural Word Sense Disambiguation |
Q76470494 | SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems |
Q118610590 | The finer they get: Aggregating fine-tuned models improves lexical semantic change detection |
Q86591666 | Towards Detection of Subjective Bias using Contextualized Word Embeddings |
Q118812268 | Towards Tailored Knowledge Base Modeling using Masked Language Models |
Q118867488 | Transformer Based Semantic Relation Typing for Knowledge Graph Integration |
Q87402253 | What the [MASK]? Making Sense of Language-Specific BERT Models |
Q86695949 | A Primer in BERTology: What we know about how BERT works |
Q107763843 | ANDES at SemEval-2020 Task 12: A jointly-trained BERT multilingual model for offensive language detection |
Q123494755 | An Analysis of BERT (NLP) for Assisted Subject Indexing for Project Gutenberg |
Q104418048 | An Empirical Study of Using Pre-trained BERT Models for Vietnamese Relation Extraction Task at VLSP 2020 |
Q61726888 | Assessing BERT's Syntactic Abilities |
Q109283281 | BERT Busters: Outlier Dimensions that Disrupt Transformers |
Q112119511 | BERT for Coreference Resolution: Baselines and Analysis |
Q104798629 | BERT-GT: Cross-sentence n-ary relation extraction with BERT and graph transformer |
Q57267388 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding |
Q123966253 | BERTologyNavigator: Advanced Question Answering with BERT-based Semantics |
Q126726880 | BRep-BERT: Pre-training Boundary Representation BERT with Sub-graph Node Contrastive Learning |
Q90006632 | BioBERT: a pre-trained biomedical language representation model for biomedical text mining |
Q125526133 | Broadening BERT vocabulary for Knowledge Graph Construction using Wikipedia2Vec |
Q105703238 | Deep Subjecthood: Higher-Order Grammatical Features in Multilingual BERT |
Q89653510 | Does BERT need domain adaptation for clinical negation detection? |
Q104090794 | DynaBERT: Dynamic BERT with Adaptive Width and Depth |
Q84574679 | Enriching BERT with Knowledge Graph Embeddings for Document Classification |
Q125526132 | Expanding the Vocabulary of BERT for Knowledge Base Construction |
Q114795666 | Fine-Tuning BERT Models to Extract Named Entities from Archival Finding Aids. |
Q98775064 | Hate speech detection and racial bias mitigation in social media based on BERT model |
Q108533266 | KBLab |
Q100393501 | Korean clinical entity recognition from diagnosis text using BERT |
Q108824029 | Lacking the embedding of a word? Look it up into a traditional dictionary |
Q119982575 | Load What You Need: Smaller Versions of Multilingual BERT |
Q111697654 | MetaMap versus BERT models with explainable active learning: ontology-based experiments with prior knowledge for COVID-19 |
Q110756024 | New explainability method for BERT-based model in fake news detection |
Q105592627 | Pairwise Multi-Class Document Classification for Semantic Relations between Wikipedia Articles |
Q125809751 | Personalizing Type-Based Facet Ranking Using BERT Embeddings |
Q108750211 | Playing with Words at the National Library of Sweden -- Making a Swedish BERT |
Q109286905 | Positional Artefacts Propagate Through Masked Language Model Embeddings |
Q104388228 | Relation Extraction with BERT-based Pre-trained Model |
Q109285514 | Revealing the Dark Secrets of BERT |
Q108146683 | STonKGs: A Sophisticated Transformer Trained on Biomedical Text and Knowledge Graphs |
Q117199937 | ScholarBERT: Bigger is Not Always Better |
Q108712147 | Scholarly Text Classification with Sentence BERT and Entity Embeddings |
Q104433073 | SciBERT: A Pretrained Language Model for Scientific Text |
Q104091325 | The Lottery Ticket Hypothesis for Pre-trained BERT Networks |
Q76472200 | Visualizing and Measuring the Geometry of BERT |
Q116449197 | W2v-BERT: Combining Contrastive Learning and Masked Language Modeling for Self-Supervised Speech Pre-Training |
Q87402253 | What the [MASK]? Making Sense of Language-Specific BERT Models |
Q109283171 | When BERT Plays the Lottery, All Tickets Are Winning |
Q107059867 | WikiBERT models: deep transfer learning for many languages |
Q112063555 | BioBERT | named after | P138 |
Q70444897 | Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings | uses | P2283 |
Category:BERT | wikimedia | |
Russian (ru / Q7737) | Категория:BERT (языковая модель) | wikinews
Catalan (ca / Q7026) | BERT (model de llenguatge) | wikipedia |
BERT | wikipedia | |
English (en / Q1860) | BERT (language model) | wikipedia
Spanish (es / Q1321) | BERT (modelo de lenguaje) | wikipedia
Basque language (eu / Q8752) | BERT (hizkuntz eredua) | wikipedia |
Persian (fa / Q9168) | برت (مدل زبانی) | wikipedia |
French (fr / Q150) | BERT (modèle de langage) | wikipedia
Hindi (hi / Q1568) | बर्ट (भाषा मॉडल) | wikipedia
BERT | wikipedia | |
Japanese (ja / Q5287) | BERT (言語モデル) | wikipedia
Korean (ko / Q9176) | BERT (언어 모델) | wikipedia
BERT | wikipedia | |
Portuguese (pt / Q5146) | BERT (modelo de linguagem) | wikipedia
Ukrainian (uk / Q8798) | BERT (модель мови) | wikipedia
Vietnamese (vi / Q9199) | BERT (mô hình ngôn ngữ) | wikipedia
Cantonese (yue / Q9186) | BERT | wikipedia
BERT | wikipedia |