Experiencing cabal hell with FComp

This is an effort to try to run the FComp Haskell package described here: http://dreixel.net/research/pdf/fghm.pdf

So, with the help of an awesome colleague, the story went like this:

It complained when I did cabal install, so I deleted all sorts of upper bounds on the dependencies. Then it turned out instant-generics 0.6 couldn't be installed; only instant-generics 0.1 would install successfully. Deleting instant-generics 0.1 was not straightforward either: one had to find all of its files on the system and unregister it from ghc-pkg. We left that mess behind and tried a cabal sandbox, installing FComp from scratch, only to find another problem with Template Haskell. So we deleted and changed all sorts of dependency restrictions again, even trying to go from larger to smaller version numbers (instant-generics 0.4, for example). Then a problem came from uu-parsinglib, then from haskore and splitBase, then from special-functors, and we found out it's because this last package hasn't been updated for a very long time, maybe abandoned by the author 😦. The incompatibility comes from something in Control/Monad/Instances.hs.

It's been quite a journey, and we decided to let it go for now…

Some things I learnt from this:

It’s good to save the terminal history as a txt file and look back on it later. Kind of like a cool journal 😀

The importance of Vim and the command line is paramount. This is what I’ll be doing for a while now:


The difficulty really lies in the intertwined complexity among the cabal files. My poor memory just isn’t holding up.

Why is there no self-contained environment (with all the proper dependency versions) to just make everything reproducible at any time? It might be big, but that shouldn’t be a problem with current storage capacity.
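For what it’s worth, cabal does offer a partial answer: running cabal freeze in a project records the exact version of every dependency in a cabal.config file, which can then be shared to reproduce the build. A sketch of what such a file might look like (the version numbers here are made up for illustration):

```
constraints: instant-generics ==0.4.1,
             uu-parsinglib ==2.8.1,
             template-haskell ==2.9.0.0
```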



Running harmtrace

To get into the functional programming side, I’m learning more and more about a functional music package in Haskell: harmtrace. It can parse a chord sequence (which is what I’m using it for) and do much more musical analysis in a clean, functional way.

Thanks to the help of the authors of this package, I was finally able to run it using the binary. (Still not able to build it, though, because of all the version issues with GHC…)

Some specifics are given here:


Here’s the screenshot of running it in the terminal:


The output is a bracketed syntax tree (in the format used by phpSyntaxTree) like this:
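To give an idea of the notation, here is a small Haskell sketch of how such a bracketed tree can be represented and rendered. The tree type and the chord labels below are my own illustration, not HarmTrace’s actual types:

```haskell
-- A minimal rose tree for harmonic analysis output.
-- Labels and structure are illustrative, not HarmTrace's real types.
data Tree = Node String [Tree]

-- Render a tree in the bracketed notation that syntax-tree
-- visualisers understand: [Label child1 child2 ...]
toBracket :: Tree -> String
toBracket (Node label [])       = "[" ++ label ++ "]"
toBracket (Node label children) =
  "[" ++ label ++ " " ++ unwords (map toBracket children) ++ "]"

-- A toy analysis of a I-V-I progression in C major.
example :: Tree
example = Node "Piece"
  [ Node "Ton" [Node "C:maj" []]
  , Node "Dom" [Node "G:7" []]
  , Node "Ton" [Node "C:maj" []]
  ]

main :: IO ()
main = putStrLn (toBracket example)
-- prints: [Piece [Ton [C:maj]] [Dom [G:7]] [Ton [C:maj]]]
```

A string like that is exactly what a bracket-notation visualiser can turn into a drawn tree.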


To visualise it, one can use this website:


And the visualisation looks like this:



In the future, more on the analysis of these trees…

The ISMIR 2017 paper and deadlines

Results: Accept!!!

which means Asia for me in October and maybe November!

Paper title: A comparison and fusion of musical pattern discovery algorithms (Paper #120) -> Finding the consensus among musical pattern discovery algorithms (after first revision) -> ???

It was a bumpy road towards submission: realising the overall results were not good enough, and having to write an almost completely new paper, with a new dataset, new algorithms and new results, in less than two weeks… It was an intense working schedule towards the deadline. But I didn’t hate it. I should have known better though…

The happiness of getting accepted is now mixed with the not-so-fun process of revision. This step is mostly small things, but still important: making the figures and text clearer, making the contribution and purpose of the paper more obvious, etc. One pain is re-generating the figures that need to be improved. Because of all the deadline hassle, I didn’t really comment my code well. Getting back into my own thinking from 2–3 months ago was amazingly hard!


Finally, I think I’m starting to like writing and reading papers more. It’s an old way of communicating, but one can convey and grasp all the info if they really try…

A random chord sequence generator in Haskell

Since I got the idea that QuickCheck can be used to generate things, I wanted to use it for something musical. I found this post: http://chromaticleaves.com/posts/generate-user-data-quickcheck.html and it seemed to be a good starting point.

I only made some very simple changes to the code, but I think it’s quite a nice learning experience for Haskell beginners like me.

The output is a tuple of the root note and the chord quality. No restrictions have been implemented yet, so the chord sequence doesn’t really make sense musically. Something to play with in the future!
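In that spirit, here is a minimal sketch of what such a generator could look like. This is my own simplified version, not the code from the post, and it assumes the QuickCheck library is installed:

```haskell
import Test.QuickCheck

-- Possible root notes and chord qualities (a simplified set).
data Root = C | Cs | D | Ds | E | F | Fs | G | Gs | A | As | B
  deriving (Show, Eq, Enum, Bounded)

data Quality = Maj | Min | Dom7 | Dim
  deriving (Show, Eq, Enum, Bounded)

-- A chord is a pair of a root note and a quality.
type Chord = (Root, Quality)

-- Pick a random root and quality, uniformly and independently.
-- No musical constraints yet, so sequences won't make harmonic sense.
genChord :: Gen Chord
genChord = (,) <$> elements [minBound .. maxBound]
               <*> elements [minBound .. maxBound]

-- A random progression of a given length.
genProgression :: Int -> Gen [Chord]
genProgression n = vectorOf n genChord

main :: IO ()
main = do
  progression <- generate (genProgression 8)
  mapM_ print progression
```

Adding musical constraints would then be a matter of replacing the independent elements calls with a generator that looks at the previous chord.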


So satisfying when you ghci it and it just works!!

Running TensorFlow in Haskell

This is an attempt to run the TensorFlow Haskell bindings (https://github.com/tensorflow/haskell) under Ubuntu.

Basically I just followed the instructions on the GitHub page. I heard people have had bad experiences with it, but it’s been pretty smooth for me. There’s indeed something tricky if you haven’t installed Docker, but it’s pretty easy to fix: just follow the instructions given by the system.

Some screenshots after successfully testing the system:


The MNIST task: good guess!


Playing with a pattern visualiser

Thanks to a master’s student at Eindhoven, I got the chance to play with a music pattern visualiser. You can find his repo here: https://github.com/Shiroid/Thesis-Pattern-Discovery-In-Families/tree/master/Builds

The work is mostly based on Peter Boot’s paper:

Boot, Peter, Anja Volk, and W. Bas de Haas. “Evaluating the Role of Repeated Patterns in Folk Song Classification and Compression.” Journal of New Music Research 45.3 (2016): 223-238.

Using this, we can see which patterns are found by various pattern extraction algorithms. We can also see the differences when using different parameters for the algorithms. There’s another option where you can compare the patterns across a whole tune family.

Some screenshots are here:


As written in this post, I have also tried to visualise algorithmically extracted music patterns myself. My focus was more on the comparison amongst algorithms and the locations of the patterns.

This program provides more in terms of comparison amongst different parameters and across whole tune families. It would also be nice if users were able to export some statistics from the visualisation. The author said it’s possible but not a priority yet…

Looking forward to his thesis! Keep up the good work 🙂

H2O: easy machine learning

I learnt about H2O at a meetup group, described in this post. The demo and presentation were impressive. It’s been a while, but I have always wanted to try it.

Ok, so I started with this page: https://github.com/h2oai/h2o-3

I must say it’s not the clearest set of instructions I’ve seen. I first installed using pip and conda. Import successful! And then used: h2o.init(ip="localhost", port=54323)

It’s pretty funny that it says the version is too old. I just downloaded it!

(Failures: in between, I tried building from source, which didn’t work; there was an error message about R. Then I tried installing R, but the error message was still there. The attempt to install h2o in R didn’t work either. It’s been so long since I used R!)

But actually, the easiest thing is to follow this page: http://h2o-release.s3.amazonaws.com/h2o/rel-ueno/7/index.html

After running the .jar file, open http://localhost:54321 (they call it Flow: http://docs.h2o.ai/h2o/latest-stable/h2o-docs/flow.html). There is a GUI for deep learning, data handling, etc. A bit like Weka on steroids 😀

I tried the deep learning example. Start:




The time estimate is not that accurate.

And it’s pretty hard on the CPUs with the default settings:


There are lots of other products on different platforms from this company. More exploration to be done.