commit 0d840eee7a0d16628368fb01d3965ae32f306902
parent 38ad85405e3b60c893b7b03a2b31c846e2e490f9
Author: Steven Atkinson <[email protected]>
Date: Thu, 19 Sep 2024 17:02:22 -0700
Update docs to reference new `input.wav` and `output.wav` (#469)
Update docs
Diffstat:
8 files changed, 61 insertions(+), 20 deletions(-)
diff --git a/docs/source/model-file.rst b/docs/source/model-file.rst
@@ -21,10 +21,10 @@ There are a few keys you should expect to find with the following values:
* ``"weights"``: a list of float-type numbers that are the weights (parameters)
of the model. How they map into the model is architecture-specific. Looking at
``._export_weights()`` will usually tell you what you need to know (e.g. for
- ``WaveNet``
- `here <https://github.com/sdatkinson/neural-amp-modeler/blob/cb100787af4b16764ac94a2edf9bcf7dc5ae59a7/nam/models/wavenet.py#L428>`_
- and ``LSTM``
- `here <https://github.com/sdatkinson/neural-amp-modeler/blob/cb100787af4b16764ac94a2edf9bcf7dc5ae59a7/nam/models/recurrent.py#L317>`_.)
+ ``WaveNet`` at
+ `wavenet.py <https://github.com/sdatkinson/neural-amp-modeler/blob/cb100787af4b16764ac94a2edf9bcf7dc5ae59a7/nam/models/wavenet.py#L428>`_
+ and ``LSTM`` at
+ `recurrent.py <https://github.com/sdatkinson/neural-amp-modeler/blob/cb100787af4b16764ac94a2edf9bcf7dc5ae59a7/nam/models/recurrent.py#L317>`_.)
There are also some optional keys that ``nam`` may use:
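Since a ``.nam`` model file is plain JSON, the ``"weights"`` list described above can be inspected with a few lines of Python. This is an illustrative sketch: the inlined file contents and values are made up, and how the flat list maps onto layers remains architecture-specific.

```python
# A .nam model file is JSON; "weights" is a flat list of floats whose mapping
# onto the model is architecture-specific (see ._export_weights() for each
# architecture). The values below are placeholders for illustration.
import json

model_text = '{"architecture": "LSTM", "weights": [0.1, -0.2, 0.3, 0.4]}'
model = json.loads(model_text)

weights = model["weights"]
print(len(weights))  # number of parameters in this toy example
```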
diff --git a/docs/source/tutorials/colab.rst b/docs/source/tutorials/colab.rst
@@ -43,11 +43,10 @@ have made high-quality tutorials.
However, if you want to skip reamping for your first model, you can download
these pre-made files:
-* `v1_1_1.wav <https://drive.google.com/file/d/1CMj2uv_x8GIs-3X1reo7squHOVfkOa6s/view?usp=drive_link>`_,
+* `input.wav <https://drive.google.com/file/d/1KbaS4oXXNEuh2aCPLwKrPdf5KFOjda8G/view?usp=sharing>`_,
a standardized input file.
-* `output.wav <https://drive.google.com/file/d/1e0pDzsWgtqBU87NGqa-4FbriDCkccg3q/view?usp=drive_link>`_,
- a reamp of the same overdrive used to make
- `ParametricOD <https://www.neuralampmodeler.com/post/the-first-publicly-available-parametric-neural-amp-model>`_.
+* `output.wav <https://drive.google.com/file/d/1NrpQLBbCDHyu0RPsne4YcjIpi5-rEP6w/view?usp=sharing>`_,
+ a reamp of a high-gain tube head.
To upload your data to Colab, click the Folder icon here:
@@ -88,3 +87,7 @@ If you don't see it, you might have to refresh the file browser:
.. image:: media/colab/refresh.png
:scale: 20 %
+
+To use it, point
+`the plugin <https://github.com/sdatkinson/NeuralAmpModelerPlugin>`_ at the file
+and you're good to go!
\ No newline at end of file
diff --git a/docs/source/tutorials/full.rst b/docs/source/tutorials/full.rst
@@ -28,10 +28,8 @@ signal from it (either by reamping a pre-recorded test signal or by
simultaneously recording your DI and the effected tone). For your first time,
you can download the following pre-made files:
-* `v1_1_1.wav <https://drive.google.com/file/d/1CMj2uv_x8GIs-3X1reo7squHOVfkOa6s/view?usp=drive_link>`_
- (input)
-* `output.wav <https://drive.google.com/file/d/1e0pDzsWgtqBU87NGqa-4FbriDCkccg3q/view?usp=drive_link>`_
- (output)
+* `input.wav <https://drive.google.com/file/d/1KbaS4oXXNEuh2aCPLwKrPdf5KFOjda8G/view?usp=sharing>`_
+* `output.wav <https://drive.google.com/file/d/1NrpQLBbCDHyu0RPsne4YcjIpi5-rEP6w/view?usp=sharing>`_
Next, make a file called e.g. ``data.json`` by copying
`nam_full_configs/data/single_pair.json <https://github.com/sdatkinson/neural-amp-modeler/blob/main/nam_full_configs/data/single_pair.json>`_
@@ -40,7 +38,7 @@ and editing it to point to your audio files like this:
.. code-block:: json
"common": {
- "x_path": "C:\\path\\to\\v1_1_1.wav",
+ "x_path": "C:\\path\\to\\input.wav",
"y_path": "C:\\path\\to\\output.wav",
"delay": 0
}
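Before launching a full training run, a config edited this way can be sanity-checked with a short script. This is a hedged sketch using only the key names shown in the snippet above; the file contents are inlined and the paths are placeholders.

```python
# Minimal sanity check for a data.json-style config: parse it and confirm the
# audio paths it points at actually exist. Paths here are placeholders.
import json
import os

config_text = """
{
  "common": {
    "x_path": "input.wav",
    "y_path": "output.wav",
    "delay": 0
  }
}
"""
common = json.loads(config_text)["common"]
missing = [key for key in ("x_path", "y_path")
           if not os.path.exists(common[key])]
print(sorted(common))  # keys present in the "common" block
print(missing)         # any paths that don't exist yet
```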
diff --git a/docs/source/tutorials/gui.rst b/docs/source/tutorials/gui.rst
@@ -8,13 +8,52 @@ with:
$ nam
-Training with the GUI requires a reamp based on one of the standardized training
-files:
+You'll see a GUI like this:
-* `v3_0_0.wav <https://drive.google.com/file/d/1Pgf8PdE0rKB1TD4TRPKbpNo1ByR3IOm9/view?usp=drive_link>`_
- (preferred)
-* `v2_0_0.wav <https://drive.google.com/file/d/1xnyJP_IZ7NuyDSTJfn-Jmc5lw0IE7nfu/view?usp=drive_link>`_
-* `v1_1_1.wav <https://drive.google.com/file/d/1CMj2uv_x8GIs-3X1reo7squHOVfkOa6s/view?usp=drive_link>`_
-* `v1.wav <https://drive.google.com/file/d/1jxwTHOCx3Zf03DggAsuDTcVqsgokNyhm/view?usp=drive_link>`_
+.. image:: media/gui/gui.png
+ :scale: 30 %
+
+Start by pressing the "Download input file" button to download
+`input.wav <https://drive.google.com/file/d/1KbaS4oXXNEuh2aCPLwKrPdf5KFOjda8G/view?usp=sharing>`_,
+the audio that you'll reamp through your gear to make your model.
+Reamp this through the gear that you want to model and render the output as a
+WAVE file. Be sure that your render matches the sample rate (48 kHz), bit depth
+(24-bit), and length of the input file.
+An example can be found here:
+`output.wav <https://drive.google.com/file/d/1NrpQLBbCDHyu0RPsne4YcjIpi5-rEP6w/view?usp=sharing>`_.
+
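The format requirements above can be checked programmatically before training. Here's a hedged sketch using only Python's standard library; the file names are placeholders that the script writes out itself for the demonstration.

```python
# Verify that a rendered output file matches the input file's sample rate
# (48 kHz), bit depth (24-bit), and length. The demo files are synthetic.
import wave

def write_silence(path, n_frames, rate=48000, sampwidth=3):
    """Write a mono WAV of silence; sampwidth=3 bytes means 24-bit."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(sampwidth)
        f.setframerate(rate)
        f.writeframes(b"\x00" * (n_frames * sampwidth))

write_silence("demo_input.wav", 1000)
write_silence("demo_output.wav", 1000)

def wav_params(path):
    """Return (sample rate, bit depth, length in frames) for a WAV file."""
    with wave.open(path, "rb") as f:
        return f.getframerate(), 8 * f.getsampwidth(), f.getnframes()

in_rate, in_bits, in_len = wav_params("demo_input.wav")
out_rate, out_bits, out_len = wav_params("demo_output.wav")
print(out_rate, out_bits, out_len)
```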
+Return to the trainer and pick the input and output files as well as where you
+want your model to be saved.
.. note:: To train a batch of models, pick all of their reamp (output) files.
+
+Once you've selected these, then the "Train" button should become available:
+
+.. image:: media/gui/ready-to-train.png
+ :scale: 30 %
+
+Click "Train", and the program will check your files for any problems, then
+start training.
+
+Some recording setups have round-trip latency that should be accounted for.
+Some DAWs attempt to compensate for this, but they can overcompensate.
+The trainer automatically tries to line up the input and output audio. To aid
+this, the input file has two impulses near its beginning that serve as alignment
+markers, and the trainer looks for the response to these in the output. You'll
+see a plot showing where it thinks that the output first reacted to the input
+(black dashed line) as well as the two responses overlaid with each other. You
+should see that they overlap and that the black line falls just before the start
+of the response, like this:
+
+.. image:: media/gui/impulse-responses.png
+ :scale: 50 %
+
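The idea behind this automatic alignment can be illustrated with a cross-correlation sketch. This is a toy model of the concept with a synthetic impulse, not the trainer's actual algorithm.

```python
# Estimate round-trip latency by cross-correlating an "input" containing an
# impulse with a delayed, noisy "output". Toy example for illustration only.
import numpy as np

rng = np.random.default_rng(0)
x = np.zeros(4800)
x[100] = 1.0  # impulse near the start of the input

latency = 37  # pretend the recording chain delays everything by 37 samples
y = np.roll(x, latency) + 0.01 * rng.standard_normal(x.size)

# The lag that maximizes the cross-correlation is the latency estimate.
corr = np.correlate(y, x, mode="full")
est = int(np.argmax(corr)) - (x.size - 1)
print(est)  # -> 37
```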
+Close this figure, and then you will see training proceed. At the end, you'll
+see a plot that compares the model's prediction against your recording:
+
+.. image:: media/gui/result.png
+ :scale: 30 %
+
+Close that plot, and your model will be saved. To use it, point
+`the plugin <https://github.com/sdatkinson/NeuralAmpModelerPlugin>`_ at the file
+and you're good to go!
diff --git a/docs/source/tutorials/media/gui/gui.png b/docs/source/tutorials/media/gui/gui.png
Binary files differ.
diff --git a/docs/source/tutorials/media/gui/impulse-responses.png b/docs/source/tutorials/media/gui/impulse-responses.png
Binary files differ.
diff --git a/docs/source/tutorials/media/gui/ready-to-train.png b/docs/source/tutorials/media/gui/ready-to-train.png
Binary files differ.
diff --git a/docs/source/tutorials/media/gui/result.png b/docs/source/tutorials/media/gui/result.png
Binary files differ.