Running HarmTrace

To get into the functional programming side, I’m learning more and more about a functional music package in Haskell: HarmTrace. It can parse chord sequences (which is what I’m using it for) and do much more music analysis in a clean, functional way.

Thanks to the help of the package’s authors, I was finally able to run it using the binary. (Still not able to build it though, because of all the version issues with GHC…)

Some specifics are given here:

http://foswiki.cs.uu.nl/foswiki/GenericProgramming/HarmTrace

Here’s a screenshot of running it in the terminal:

[Screenshot: HarmTrace running in the terminal]

The output is a bracketed syntax tree (in the format used by phpSyntaxTree), like this:

[Piece[PT[T_3par_1[IIIm_0[E:min]]]][PD[D_1_1[S_3_1[IV_0[F:maj7]]][D_1_1[S_1par_1[IIm_0[D:min]]][D_2_1[V7[Inserted]]]]]][PT[T_1_1[I_0[C:maj]]]][PD[D_3_1[V_0[G:maj7][G:maj]]]][PT[T_1_1[I_0[C:maj]]]]]
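Out of curiosity, such trees can also be manipulated directly. Here is a minimal Haskell sketch, just to illustrate the bracket format — a plain rose tree with a printer, which is my own toy reconstruction and not HarmTrace’s actual (much richer) types:

```haskell
-- A minimal rose tree; HarmTrace's real harmony types are much richer.
data Tree = Node String [Tree]

-- Render a tree in the bracket notation shown above.
bracket :: Tree -> String
bracket (Node label children) =
  "[" ++ label ++ concatMap bracket children ++ "]"

main :: IO ()
main = putStrLn (bracket (Node "PT" [Node "T_1_1" [Node "I_0" [Node "C:maj" []]]]))
-- prints [PT[T_1_1[I_0[C:maj]]]]
```

The same string can then be pasted straight into the visualiser below.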

To visualise it, one can use this website:

http://ironcreek.net/phpsyntaxtree/

And the visualisation looks like this:

[Screenshots: the visualised syntax trees]

In the future, more on the analysis of these trees…

The ISMIR 2017 paper and deadlines

Results: Accept!!!

which means Asia for me in October and maybe November!

Paper title: A comparison and fusion of musical pattern discovery algorithms (Paper #120) -> Finding the consensus among musical pattern discovery algorithms (after first revision) -> ???

It was a bumpy road to submission: realising the overall results were not good enough, and having to write an almost completely new paper with a new dataset, new algorithms and new results in less than two weeks… It was an intense working schedule towards the deadline. But I didn’t hate it. I should have known better though…

The happiness of getting accepted is now mixed with the not-so-fun process of revision. This step is mostly small things, but still important: making the figures and text clearer, making the contribution and purpose of the paper more obvious, etc. One pain is re-generating the figures that need improvement. Because of all the deadline hassle, I didn’t really comment my code well, and getting back into my own thinking from 2-3 months ago was amazingly hard!


Finally, I think I’m starting to like writing and reading papers more. It’s an old way of communicating, but one can convey and grasp all the information if they really try…

A random chord sequence generator in Haskell

Since I learnt that QuickCheck can be used to generate things, I’ve wanted to use it for something musical. I found this post: http://chromaticleaves.com/posts/generate-user-data-quickcheck.html and it seemed a good starting point.

I only made some simple changes to the code, but I think it’s quite a nice learning exercise for Haskell beginners like me.

The output is a tuple of the root note and the chord quality. No musical restrictions are implemented yet, so the chord sequences don’t really make sense musically. Something to play with in the future!
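For the curious, here is a sketch of how such a generator might look. This is my own minimal reconstruction, not the exact code from the post; the note and quality names are illustrative:

```haskell
import Test.QuickCheck

-- Root notes and chord qualities; these names are illustrative.
data Root = C | Cs | D | Ds | E | F | Fs | G | Gs | A | As | B
  deriving (Show, Eq, Enum, Bounded)

data Quality = Maj | Min | Maj7 | Min7 | Dom7
  deriving (Show, Eq, Enum, Bounded)

-- Pick each component uniformly at random.
instance Arbitrary Root where
  arbitrary = elements [minBound .. maxBound]

instance Arbitrary Quality where
  arbitrary = elements [minBound .. maxBound]

-- A chord is a (root, quality) tuple, as in the output described above.
type Chord = (Root, Quality)

-- Generate a random chord sequence of a given length.
chordSequence :: Int -> Gen [Chord]
chordSequence n = vectorOf n arbitrary

main :: IO ()
main = generate (chordSequence 8) >>= print
```

In GHCi, `generate (chordSequence 8)` prints a fresh random sequence each time; musical constraints (e.g. only diatonic chords) could later be added with smarter generators.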

[Screenshot: GHCi output of the chord generator]

So satisfying when you ghci it and it just works!!

Running TensorFlow in Haskell

This is an attempt to run the TensorFlow Haskell bindings (https://github.com/tensorflow/haskell) under Ubuntu.

Basically I just followed the instructions on the GitHub page. I’ve heard people have had bad experiences with it, but it was pretty smooth for me. There is indeed something tricky if you haven’t installed Docker, but it’s easy to fix: just follow the instructions given by the system.

Some screenshots after successfully testing the system:

[Screenshots: TensorFlow Haskell tests passing]

The MNIST task: good guess!

[Screenshot: MNIST digit predictions]

Playing with a pattern visualiser

Thanks to a master’s student at Eindhoven, I got the chance to play with a music pattern visualiser. You can find his repo here: https://github.com/Shiroid/Thesis-Pattern-Discovery-In-Families/tree/master/Builds

The work is mostly based on Peter Boot’s paper:

Boot, Peter, Anja Volk, and W. Bas de Haas. “Evaluating the Role of Repeated Patterns in Folk Song Classification and Compression.” Journal of New Music Research 45.3 (2016): 223-238.

Using it, we can see which patterns are found by various pattern extraction algorithms, as well as how the results differ under different parameter settings. There’s also an option to compare patterns across a whole tune family.

Some screenshots are here:

[Screenshots: the pattern visualiser]

As written in this post, I have also tried visualising algorithmically extracted music patterns myself. My focus was more on the comparison amongst algorithms and the locations of the patterns.

This program offers more in terms of comparison across parameter settings and across whole tune families. It would also be nice if users could export some statistics from the visualisation. The author says it’s possible, but not a priority yet…
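As a toy illustration of the kind of comparison involved (not the visualiser’s actual code), one could treat each discovered pattern as a set of note onsets and measure agreement between two algorithms with, say, Jaccard overlap:

```haskell
import qualified Data.Set as Set

-- A pattern represented as the set of note-onset positions it covers.
type Pattern = Set.Set Int

-- Jaccard similarity between two patterns: |A ∩ B| / |A ∪ B|.
jaccard :: Pattern -> Pattern -> Double
jaccard a b
  | Set.null a && Set.null b = 1
  | otherwise =
      fromIntegral (Set.size (Set.intersection a b))
        / fromIntegral (Set.size (Set.union a b))

main :: IO ()
main = print (jaccard (Set.fromList [0, 1, 2, 3]) (Set.fromList [2, 3, 4, 5]))
-- 2 shared onsets out of 6 distinct -> prints 0.3333333333333333
```

Exported statistics along these lines would make it easy to quantify, not just eyeball, how much two algorithms agree.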

Looking forward to his thesis! Keep up the good work 🙂

H2O: easy machine learning

I learnt about H2O at a meetup, as described in this post. The demo and presentation were impressive. It’s been a while, but I’ve always wanted to try it.

Ok, so I started with this page: https://github.com/h2oai/h2o-3

I must say it’s not the clearest set of instructions I’ve seen. I first installed using pip and conda. Import successful! Then: h2o.init(ip="localhost", port=54323)

[Screenshot: h2o.init version warning]

It’s pretty funny that it says the version is too old. I just downloaded it!

(Failures: in between, I tried building it, which didn’t work; there was an error message about R. Then I tried installing R, but the error message was still there. The attempt to install h2o from R didn’t work either. It’s been so long since I used R!)

But actually, the easiest thing is to follow this page: http://h2o-release.s3.amazonaws.com/h2o/rel-ueno/7/index.html

After running the .jar file, open http://localhost:54321 (they call it Flow: http://docs.h2o.ai/h2o/latest-stable/h2o-docs/flow.html).
There is a GUI for deep learning, data handling, etc. A bit like Weka on steroids 😀

I tried the deep learning example. Start:

[Screenshot: training job started]

Finish:

[Screenshot: training job finished]

The time estimate is not very accurate.

And it’s pretty hard on the CPUs with default settings:

[Screenshot: CPU usage under the default settings]

There are lots of other products on different platforms from this company. More explorations to be done.

Magenta: starter code

This is my second try with Magenta. The first one was last year when it was just released, and I couldn’t figure anything out in the end… It seems to have got much better now. And maybe I’ve got more familiar with the terms as well 😀

So, basically, follow the instructions given here: https://github.com/tensorflow/magenta

I actually did both the automatic install and the manual install, because of an error message about a module called six (it happened both times). Solved in a simple way:
pip uninstall six
pip install six

After installation, one important thing to note:
Note that you will need to run source activate magenta to use Magenta every time you open a new terminal window.

And then, success, yay!

[Screenshot: melody generation succeeding]

To see the generated files:
[Screenshot: the generated MIDI files]

The first melody looks like this (rendered in MuseScore):

[Screenshot: the first generated melody in MuseScore]

And then I tried using bazel:

BUNDLE_PATH=/Users/irisren/Downloads/attention_rnn.mag CONFIG='attention_rnn' bazel run //magenta/models/melody_rnn:melody_rnn_generate -- --bundle_file=${BUNDLE_PATH} --config=${CONFIG}

It basically did the same thing as above. More explorations to be done!