scholarly article | Q13442814
P8978 | DBLP publication ID | journals/ficn/JordanWM19 |
P356 | DOI | 10.3389/FNCOM.2019.00046 |
P932 | PMC publication ID | 6687756 |
P698 | PubMed publication ID | 31427939 |
P50 | author | Jakob Jordan | Q87956459 |
P2093 | author name string | Abigail Morrison |
P2093 | author name string | Philipp Weidel |
P2860 | cites work | PyNEST: A Convenient Interface to the NEST Simulator | Q21129072
P2860 | cites work | Six views of embodied cognition | Q22305484
P2860 | cites work | Closed Loop Interactions between Spiking Neural Network and Robotic Simulators Based on MUSIC and ROS | Q27313980
P2860 | cites work | The Brian simulator | Q27500449
P2860 | cites work | Human-level control through deep reinforcement learning | Q27907579
P2860 | cites work | Functional requirements for reward-modulated spike-timing-dependent plasticity | Q51898363
P2860 | cites work | Reinforcement learning in populations of spiking neurons | Q51940526
P2860 | cites work | Reinforcement learning in continuous time and space | Q52082903
P2860 | cites work | Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers | Q52674870
P2860 | cites work | BindsNET: A Machine Learning-Oriented Spiking Neural Networks Library in Python | Q60958324
P2860 | cites work | Mastering the game of Go with deep neural networks and tree search | Q28005460
P2860 | cites work | The Virtual Brain: a simulator of primate brain network dynamics | Q28681611
P2860 | cites work | Run-time interoperability between neuronal network simulators based on the MUSIC framework | Q30386206
P2860 | cites work | Spike-based reinforcement learning in continuous state and action space: when policy gradient methods fail | Q30972868
P2860 | cites work | Reducing the dimensionality of data with neural networks | Q31050179
P2860 | cites work | Nengo: a Python tool for building large-scale functional brain models | Q31148248
P2860 | cites work | STEPS: Modeling and Simulating Complex Reaction-Diffusion Systems with Python | Q33485202
P2860 | cites work | An imperfect dopaminergic error signal can drive temporal-difference learning | Q33904009
P2860 | cites work | Place cells, grid cells, and the brain's spatial representation system | Q34009993
P2860 | cites work | Spiking network simulation code for petascale computers | Q34320753
P2860 | cites work | Reinforcement learning using a continuous time actor-critic framework with spiking neurons | Q34671773
P2860 | cites work | How attention can create synaptic tags for the learning of working memories in sequential tasks | Q35572058
P2860 | cites work | Forgetting in Reinforcement Learning Links Sustained Dopamine Signals to Motivation | Q36162490
P2860 | cites work | Goal-Directed Decision Making with Spiking Neurons | Q36534330
P2860 | cites work | Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator | Q38370411
P2860 | cites work | Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform | Q38373397
P2860 | cites work | Recurrent Spiking Networks Solve Planning Tasks | Q38542712
P2860 | cites work | RM-SORN: a reward-modulated self-organizing recurrent neural network | Q42024268
P2860 | cites work | Neuroscience. How good are neuron models? | Q43260508
P2860 | cites work | Code-specific learning rules improve action selection by populations of spiking neurons | Q45758528
P2860 | cites work | A spiking neural network based on the basal ganglia functional anatomy | Q48235190
P2860 | cites work | Solving the distal reward problem through linkage of STDP and dopamine signaling | Q48311585
P2860 | cites work | A spiking neural network model of an actor-critic learning agent | Q48765915
P275 | copyright license | Creative Commons Attribution 4.0 International | Q20007257 |
P6216 | copyright status | copyrighted | Q50423863 |
P4510 | describes a project that uses | Neuron | Q7002467 |
P304 | page(s) | 46 | |
P577 | publication date | 2019-08-02 | |
P1433 | published in | Frontiers in Computational Neuroscience | Q15817583 |
P1476 | title | A Closed-Loop Toolchain for Neural Network Simulations of Learning Autonomous Agents | |
P478 | volume | 13 |