Google Speech Commands and BindsNET #467

@hansemandse

Description

Hi! I am trying to train a very small feed-forward network, much like your MNIST example but with the dataset replaced by Google Speech Commands (GSCD), and I can't seem to make the network learn anything (see train.py). My source code is available here.
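
For reference, the core of the loop in train.py boils down to something like the sketch below (simplified, using current BindsNET API names; `dataset` and `network` stand in for my actual objects, and the encoding parameters are illustrative):

```python
from bindsnet.encoding import poisson

time = 250  # simulation time per sample, in ms (illustrative)

for datum, label in dataset:  # `dataset` stands in for my GSCD wrapper
    # Rate-encode the preprocessed features as Poisson spike trains;
    # poisson() expects non-negative intensities.
    spikes = poisson(datum=datum.flatten(), time=time)
    # STDP learning happens during the simulation itself.
    network.run(inputs={"X": spikes}, time=time)
    network.reset_state_variables()  # reset_() in older BindsNET versions
```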

The network I am using is a scaled version of the DiehlAndCook2015 network (see kwsonsnn/model.py), resized to fit on a custom hardware accelerator. It works with MNIST, achieving roughly 70% accuracy in a single epoch. I believe my dataset (see kwsonsnn/dataset.py) is correctly preprocessed, in a style similar to your implementation of Spoken MNIST. The regular PyTorch model included in the repo (see trainpt.py) achieves 90% accuracy on GSCD, indicating that the data is not to blame.
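
For context, the model is instantiated roughly like this (the real sizes live in kwsonsnn/model.py; the input dimension here is just a placeholder for the flattened feature size):

```python
from bindsnet.models import DiehlAndCook2015

network = DiehlAndCook2015(
    n_inpt=40 * 101,      # placeholder: flattened GSCD feature size
    n_neurons=100,        # scaled down to fit the hardware accelerator
    exc=22.5,             # excitatory -> inhibitory connection strength
    inh=17.5,             # inhibitory -> excitatory connection strength
    nu=(1e-4, 1e-2),      # (pre-, post-synaptic) STDP learning rates
    norm=78.4,            # target sum of each neuron's input weights
)
```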

I have tried multiple parameter configurations for weight normalization and learning rates (a sketch of the kind of sweep I ran is below), but none of them seem to improve the network's performance.
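
Concretely, the sweeps looked something like this (the grids shown are illustrative, not the exact values I tried):

```python
from bindsnet.models import DiehlAndCook2015

n_inpt = 40 * 101  # placeholder: flattened feature size

for nu_post in (1e-3, 1e-2, 1e-1):             # post-synaptic learning rates
    for norm in (0.1 * n_inpt, 78.4, n_inpt):  # weight normalization targets
        network = DiehlAndCook2015(
            n_inpt=n_inpt,
            n_neurons=100,
            nu=(nu_post / 100.0, nu_post),     # pre-synaptic rate kept 100x smaller
            norm=norm,
        )
        # ... train for one epoch as in the loop above and record accuracy ...
```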

So, am I doing something wrong? And do you by any chance have some benchmark results for either Spoken MNIST or GSCD produced with BindsNET?

System specifications:

  • Python 3.7.6
  • BindsNET 0.2.7
  • numpy 1.18.1
  • PyTorch 1.6.0+cpu
