# The LilyPond Report #26

Friday 18 May 2012

This short, informal opinion column is about the GNU LilyPond project: its team, its world, its community. It is not, however, an official documentation resource. Reader comments welcome; reader contributions even more appreciated!

Welcome to this twenty-sixth issue of the LilyPond Report!

No funny pictures or cooking recipes this month; the Report is back with a very serious issue full of long, technical articles. Whilst we’re getting ready for the new LilyPond stable release, David Kastrup takes a look at our current development model and what needs to be ironed out. Janek has a pretty detailed analysis of LilyPond’s current output and how it measures up against our beloved hand-engraved references; at last, the Report has found a new contributor, namely kernel developer Pavel Roskin, who’ll tell us how he saved the world (well, at least the GCC part of it) thanks to LilyPond. All that, and many other things, are discussed in this month’s issue: read on to find out!

### Editorial

(by Valentin Villenave)

How do you ensure that a Free Software project remains viable in the long run? As Graham Percival (our current LilyPond project manager) noted a couple of years ago, developers are but a non-renewable energy source. The question then becomes: what do you do with however few resources and time you have at hand? Quoting Graham again in a recent message: "I personally am interested in helping people to do whatever they want with lilypond. [But:] Simply creating an atmosphere where this is easy takes more resources than we have available." Thus, perhaps the crucial part of our discussions these days is about organisation: which tasks should take priority, which ones could be automated, how to make sure nobody’s time is spent in vain (for example by banishing any duplicated work), etc.

Such will be the subject of upcoming discussions, with a revival of Graham’s "Grand Organisation Project" (GOP2.0, nope, not this one), to be (hopefully) followed by the long-awaited "Grand LilyPond Syntax Stabilisation" project (a.k.a. GLISS). Where and how will these discussions take place? Via a live intercontinental remote-conference system? Or even in real life (David welcomes you this summer in Germany)? It’s a fairly safe bet that the main place will remain our -devel mailing list, as with all serious discussions. (Well, not too serious.)

Then again, another way of tackling the problem would be to actually increase the resources (read: developer time) we have available: in other words, getting one or more paid developers to work on LilyPond.

A way to do that is to apply for grants and hope: as you may know, our Polish contributor Janek Warchoł was recently accepted as a "Google Summer of Code" student, but there’s more: thanks to Mike Solomon, LilyPond recently won the first prize in a prestigious music software competition!

Another (perhaps more reliable) way is simply to ask _you_, dear readers, for money. We’re all grateful for the Free Software movement that allows us to do our computing, express ourselves, communicate and create every single day; sometimes our gratitude can be expressed by contributing: reporting bugs, helping others, promoting our favourite programs... and yet, at times the best way to contribute is the most common: throwing in some dough. That’s the route the ReactOS project has recently chosen, and it has been (rightly so, if you ask me) commended for that decision as paving the way for Free Software at large.

The LilyPond world is no stranger to paid development: a special sponsoring webpage on our website allows you to pay people for their work on LilyPond, and you may have heard from David Kastrup, the only developer currently listed on that page, who recently asked you to join him in working towards a stable development funding model. How’s it working out for him so far? Read below to find out, and do not hesitate to join the plan!

Finally, another way of addressing the issue is, rather than asking for direct payment, to help our top contributors lead a comfortable enough career in music editing, writing or performing; a career that, in turn, will allow them to work on LilyPond as much as they’d like to. Mike Solomon, for example, is currently trying to raise money for his vocal ensemble on Kickstarter; he has been trying to get this announced on our website, but there is room for debate as to whether or not that kind of announcement should be featured prominently. In any case, there are a few days left for his fund-raiser, so feel free to chip in!

(by David Kastrup)

#### Development statistics

As the release of LilyPond 2.16 is looming closer and closer and closer, let us take a look at how the contributions to it are spread out. Running the following Gnuplot program

    set terminal pdf font ",5"
    set output "contrib.pdf"
    set logscale y 2
    plot "< git shortlog -n -s release/2.14.2-1..origin" using :1:2 \
      with labels rotate by 45, "" using :1 with lines

leaves us with the following graphics right now:

We have 50 contributors and a rather straightforward logarithmic distribution of commits each, resulting in 2062 commits. If we cut the graph off after 37 contributors because of rounding issues (along with their commits), we arrive at the law

    n(k) = a * r^(k-1)

for the number of commits from the k-th most prolific contributor, with n(37) = 1 for the least prolific contributor still counted. Solving for

    a = n(1)

as the expected number of commits from the most prolific contributor, we arrive at

    a * (1 - r^37) / (1 - r) = 2062,  with  a * r^36 = 1

which numerically leaves us with a ≈ 300. This is somewhat below the 336 we find in reality, but then the cutoff had been somewhat arbitrary. All in all, the logarithmic law seems to match reality quite well. If we take a look at the number of commits between 2.12.3-1 and 2.14.0-1, it turns out that we had 5242 commits there, so for 2.16 we are still quite below the number of changes of the previous stable release cycle.
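This fit is easy to check numerically. The following Python snippet is my own back-of-the-envelope reconstruction (not a script from the Report): it assumes the k-th most prolific contributor made n(k) = a * r**(k - 1) commits, pins the 37th contributor at about one commit, makes the 37 contributors total 2062 commits, and then bisects for the ratio r.

```python
# Back-of-the-envelope sketch (my own, not from the Report): fit a
# geometric law n(k) = a * r**(k - 1) to the commit counts, using two
# constraints: contributor 37 made about one commit (a * r**36 == 1),
# and contributors 1..37 made 2062 commits in total.
def total(r):
    a = r ** -36                       # from a * r**36 == 1
    return a * (1 - r ** 37) / (1 - r)

lo, hi = 0.80, 0.95                    # total(r) decreases on this range
for _ in range(60):                    # bisect for the ratio r
    mid = (lo + hi) / 2
    if total(mid) > 2062:
        lo = mid
    else:
        hi = mid

r = (lo + hi) / 2
a = r ** -36                           # predicted top-contributor commits
print(round(r, 3), round(a))
```

This lands at roughly 300 commits for the top contributor, somewhat below the 336 actually observed.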

#### Critical issues

If we take a look at the stream of critical issues that are keeping us from releasing LilyPond 2.16, we see a mixture of several causes.

#### Detailed table (unfold)

While there _are_ some changes which did not make sense to postpone (issue 2240 was one of those, and it has been responsible for several followup regressions), most critical issues nowadays are due to newly discovered regressions. Our release criteria require a two-week dormant phase in which no critical regressions are discovered, where a _critical_ regression is defined as something that already worked in either 2.14 or 2.12.

Another holdup has been the lack of support for MacOS X, which actually caused quite a bit of delay until we found people willing to invest the work required to call MacOS X a supported platform. A number of the regressions have been caused by developments of an invasive nature: while the number of commits during the current development phase is not yet half the number made while 2.12 was the stable release, there have been lots of larger-scale projects.

2.14.0 was tagged for release on June 5th 2011, about 11 months ago. 2.12.0, in contrast, was created on December 22nd 2008, 2.5 years earlier. So while the density of commits has slightly picked up since 2.14, we also have the situation that so many important changes have been created in that time that a new stable release is rather desirable.
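As a rough sanity check of the commit-density claim (my own arithmetic, treating the 2.12.3-1..2.14.0-1 span as approximately the whole 2.5-year gap between the stable releases):

```python
# Rough commits-per-month comparison, using the figures quoted above.
per_month_prev = 5242 / 29.5   # ~Dec 2008 to June 2011, in months
per_month_now = 2062 / 11      # June 2011 to now (11 months)
print(round(per_month_prev), round(per_month_now))   # → 178 187
```

So the pace has indeed picked up slightly, as noted above.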

So even while several developers have, for health or other reasons, mostly withdrawn from active development, development itself is still going forward. In fact, there were times when the limited human resources available for working on our development infrastructure affected development to a degree where work morale and atmosphere suffered.

But even while the quality assurance department is still not exactly overstaffed (volunteers welcome), we are currently in reasonably good shape.

However, both the depth as well as the breadth of the development that is happening make it less likely that the two-week window for not discovering new regressions will pass without occurrences. Discussions about changes to the release processes are slated to reopen in summer. Hopefully by that time we will not be under the pressure of still having to release 2.16.

Regressions, once discovered, tend to be fixed rather speedily. The above table does not show this, but the average life time of a newly discovered bug, in particular of critical nature, tends to be remarkably short.

### Investor’s report

(by David Kastrup)

So here are the operating results for my work on LilyPond in April, the second month in which my work on LilyPond has been enabled by LilyPond users pitching in after my request for funding in February:

| One-time payments (€) | Payment plans (€) |
| --- | --- |
| 2×100 | 100 |
| 80 | 25+50 |
| 50 | 3×25 |
| 2×25 | 4×10 |
| 22 | 4 |
| 19 | |
| 10 | |
| Total: 431 | Total: 290 |

Totals: €721

The projections for next month are currently €290. There are some ameliorating circumstances for this month: I have been offered €200 if I write a proper bill in Germany for it, but I am currently checking my status with the financial administration to make sure that I can make use of a regulation forgoing sales tax (this makes no net difference for corporate customers, but private customers can’t reclaim sales tax, so it would be a nuisance if I had to actually collect and pass on sales tax: it is already noticeable that about 5% evaporates through PayPal, and sales tax is about 4 times that). There has also been a payment of €200 from a supporter whom I had figured in for a monthly payment of €100, and I have counted this towards next month until I hear otherwise. So one way or another, April still has good chances to meet the minimal target of €800 that I need for treading water financially.

While one-time payments have declined somewhat, some more people pitched in with monthly payments for several months (3 to 12 months). A surprisingly large ratio of one-time or multiple-time contributors have not committed to regular plans because they don’t feel that their own financial/job situation allows them to plan that far ahead. I was reminded of Steinbeck’s _The Grapes of Wrath_, where Ma Joad, after getting credit in a store from a clerk rather than the store, says: "I’m learnin’ one thing good. Learnin’ it all a time, ever’ day. If you’re in trouble or hurt or need—go to poor people. They’re the only ones that’ll help—the only ones." Of course, the analogy is not all that fitting, since I am appealing to those who have a fortune, namely that of being able to feel excited about a project like LilyPond.

Now what has been accomplished? It turns out that April has been slow not just for me, as the output of the following command shows:

    git shortlog -n --since "1 month ago"

shows. There is probably some exhaustion involved as we are still gearing up for the release of 2.16 which has seen a stream of last-minute regressions being discovered on the brink of release. But while in line with a general decrease of activities, the list of changes checked in and projects tackled by me still appears embarrassingly small compared to that of last month.

Now, as last month, I have been alone in running the regression testing for most of the month, despite my comparatively slow setup (James has just now taken over, a development I am not altogether happy with, since he already does a lot of grunt work). But while taking a week off for my yearly Easter climbing trip did not help, the trouble actually started before that and is still ongoing:

I was trying to tackle a task involving adding new event types to LilyPond, and I figured out that the current sketchy support for this in LilyPond was broken by concept. So I had to redesign the way event classes are supported, in order to stop the existing example code and interface from doing the equivalent of modifying constants. I had to move things like the event class hierarchy into dynamic, per-context data structures, and while that was workable, it was not really a pretty fit. And when things are not pretty and I don’t like what I have to program, I get slow and ineffective, because I spend more time brooding than coding.

You can find most of the accomplished work in the bug tracker, connected with issue 2449. What I arrived at fits logically with LilyPond’s architecture (and getting there already ironed out a number of user interface aggravations in connection with fingering indications and string numbers, and closed a few long-standing bugs). But while what I have been doing no longer clashes with the current architecture of LilyPond, fitting it in as a sort of in-between between LilyPond’s grob properties and context properties did not feel good, and an incommensurate amount of brooding and scribbling and bothered sleep was spent on what amounts to a comparatively small amount of actual code.

As a result, I am now working on issue 2507 (and am in the design phase, so again there is little visible output yet) in order to tie context properties and user-settable grob properties (which are implemented as an incompatible special case of context properties) into a coherent and user-extensible logical whole, and then fit user-extensible events, grobs and similar entities into that whole.

I can’t really think of immediately visible end-user benefits. But things should get quite a bit more logical for people willing to program extensions for LilyPond, and it will be good to be able to simplify and streamline the documentation sections concerning grob and context properties when I am done, and to get rid of a lot of handwaving in the process.

One consequence should be that

    \set SomeProperty = value
    \unset SomeProperty

becomes equivalent to

    \override SomeProperty #'() = value
    \revert SomeProperty #'()

namely, the grob property changing commands (overrides) become equivalent to alist changes of a context property. While the implementations of grob and general context properties are somewhat shared at the current point in time, it is in such a half-baked manner that crashes will ensue if you try interchanging them.

Having a single coherent model for structured per-context data will make it much easier to design and work with extensions, and a redesign of this area will address several long-standing bugs in connection with nested grob properties.
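As a conceptual toy model of that coherent whole (my own sketch with invented names, not LilyPond’s actual data structures): if each grob’s overrides are stored as a nested alist under a context property named after the grob, then \override/\revert are just \set/\unset operating one level deeper.

```python
# Toy model only (assumed names; not LilyPond's real implementation):
# context properties live in one dictionary, and grob overrides are
# nested entries of a per-grob alist stored under the grob's name.
context = {}

def set_prop(name, value):             # models \set
    context[name] = value

def unset_prop(name):                  # models \unset
    context.pop(name, None)

def override(grob, prop, value):       # models \override Grob #'prop
    context.setdefault(grob, {})[prop] = value

def revert(grob, prop):                # models \revert Grob #'prop
    context.get(grob, {}).pop(prop, None)

override("NoteHead", "color", "red")
set_prop("instrumentName", "Viola")
revert("NoteHead", "color")
unset_prop("instrumentName")
print(context)   # → {'NoteHead': {}}: both pairs round-trip the same way
```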

The actual task I originally set out to do (implementing a way to typeset multiple marks at the same place) is still not finished, because of my dislike of having to pick an ugly way of implementing it, and because I am now redoing the landscape in order to make for a nicer path.

And while I feel bad about the sluggish progress I am making right now, this is actually about redesigning fundamental parts of LilyPond that have been taken for granted (or beyond improvement?) for a very long time and that nobody else feels like touching. And it will not just benefit the particular extension I had set out to do, but will make a number of things more straightforward, and not just for me.

This sort of "too much, too late" effect, where I take far too long for doing far more than planned, is why task-independent funding makes good use of my skills and programmer personality: while it is certainly nice when I manage to tackle a lot of small and medium-size things (like in March) that would take quite a bit longer without my involvement, tackling a few big roadblocks that would not be moved at all otherwise is also important.

What is the outlook for May? Partly due to the recent slow rate of change, I consider it likely that we’ll manage to release version 2.16 after all. It is high time to get a lot of improvements into everybody’s hands in the form of a stable release. Adapting LilyPond to Guile 2.0 has not moved forward in April, and this compatibility issue is likely the next topic of general interest after LilyPond 2.16 has been released.

With regard to contributions (actually not just to my own finances, but to the various workloads of the whole project): I’d like to see the load distributed over more shoulders. Maybe it is to be expected that most of the large contributions come from people who can be seen to have a large piece of their heart in the project. But I am sure that there are many users for whom LilyPond is useful and for whom some progress matters at least a little, and if those users decided to regularly contribute a little, both to the project itself and to supporting my ability to continue focusing on it fully, I think it could make quite a difference.

My thanks to all those who contributed to LilyPond in April in either manner, and those who continue doing so.
David Kastrup.

(by David Kastrup)

With the release of Ubuntu 11.10, the regtest suite stopped working when compiling an optimized version of LilyPond. The initial report was in October. The test in question, tablature-negative-fret.ly, displayed a warning "Requested string for pitch requires negative fret: string 1 pitch #<Pitch c' >" (quite as intended) and then crashed with a segmentation fault. So naturally the first suspect was that it did not properly deal with the condition that the warning was about.

#### Segfault backtrace

The backtrace started with

    #0  Grob::internal_set_property (this=0x1a, sym=0xb607cb90, v=0x1a)
        at /home/gperciva/src/lilypond/lily/grob-property.cc:112
    #1  0x082c0c0b in Engraver_dispatch_list::apply (this=0x85d08e8, gi=...)
        at /home/gperciva/src/lilypond/lily/translator-dispatch-list.cc:35

Which was kind of a bummer: while it was obvious that Grob::internal_set_property was being called with an invalid value for "this", the displayed caller, Engraver_dispatch_list::apply, did not actually contain a call to this function. The function in its entirety looks like

    void
    Engraver_dispatch_list::apply (Grob_info gi)
    {
      Translator *origin = gi.origin_translator ();
      for (vsize i = 0; i < dispatch_entries_.size (); i++)
        {
          Engraver_dispatch_entry const &e (dispatch_entries_[i]);
          if (e.engraver_ == origin)
            continue;
          (*e.function_) (e.engraver_, gi);
        }
    }

So the problem apparently was with the function called via the function pointer "e.function_". Unfortunately, both e and gi had been optimized out and were not available for debugging at the point where the problem occurred. To add insult to injury, "dispatch_entries_" is a C++ vector, and asking the debugger to display any member of it resulted in the debugger complaining that "operator []" was not available for calling: all of its occurrences in the code had been compiled right into their surroundings, and the compiler had not kept a separate copy around for the sake of calls from the debugger.

Now this is not the end of the world: the compiler we use, GCC, has a gazillion options for about every conceivable purpose, and indeed there is one for keeping a copy of inlined functions for the sake of debugging or whatever else:

    -fkeep-inline-functions
        In C, emit `static' functions that are declared `inline' into the
        object file, even if the function has been inlined into all of its
        callers.  This switch does not affect functions using the `extern
        inline' extension in GNU C90.  In C++, emit any and all inline
        functions into the object file.

To my dismay, the crash disappeared (to date, I don’t see a good reason for this). So while debugging became somewhat more feasible, there was no bug left to analyze!

Running the problem through Valgrind, a memory analyzer, gave us a lot of inconclusive data. The memory analysis approach was reasonable, though: most bugs of that kind tend to be caused by Guile’s garbage collector reusing memory locations that are still in use for a different purpose, with the most likely reason being a programmer who forgot to properly protect a value while it was still needed.

About a month later, I had boiled this down to a minimal example that still crashed. At first, it still looked like an uninitialized-variable error made by a programmer.

I finally got a handle on the problem while trying various things with the debugger gdb: I noticed that it did not just have a command for executing a program step by step, but that there was also a way to step _backwards_. One needed to use a few special commands (basically, something like "target record"), and then gdb would, while eating lots of time and memory, record a history of program states. Stepping backwards from the crash, I finally discovered that the actual function that had crashed was not even visible in the backtrace. Now being able to step backwards, I finally got to the location where the trouble occurred, and the trouble did not make sense.
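For reference, such a recorded-and-reversed gdb session looks roughly like this (a sketch, not a verbatim transcript of the LilyPond session; `record full` is the current spelling of the recording command):

    (gdb) break main
    (gdb) run
    (gdb) record full        # start recording the execution history
    (gdb) continue           # run forward until the crash
    (gdb) reverse-step       # then step backwards from the crash
    (gdb) reverse-continue   # or run backwards to an earlier breakpoint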

It crashed inside of a function that, judging from the function arguments, was not really supposed to crash. Since I could not pinpoint why and where stuff started going wrong, I suspected the debugger of not telling the whole truth and switched the single-step display to (disassembled) machine language rather than C++, to find out the exact instruction where things went wrong.

And it was then that I discovered that the machine language did not quite correspond to the C++ source: the compiler had miscompiled the C++ program.

You can see the further progress of work on the bug by continuing to read the bug report. I reported the bug to GCC development and followed up with a boiled-down example. The problem was picked up and handled rather fast: apparently Jakub Jelinek had been working on that particular part of the compiler anyway, and was thus able to recognize and test the bug rather quickly, creating a relevant minimal test case (which demonstrated the problem in C, meaning that it affected all programming languages) on his own.

The bug has been fixed as of GCC 4.6.3 and 4.7.0. In contrast to this bug, which occurred under rather limited circumstances and mostly on 32-bit Intel platforms, the bug Pavel uncovered recently in GCC 4.7.0 (more on that below) occurs on all platforms, and under much more common circumstances in C++. I consider it way more scary.

Having gotten used to the idea that sometimes the computer can be wrong, I was able to provide feedback, guesses and suggestions when Pavel was seeing a similar kind of problem recently.

With experience and teamwork, the bug was handled much more thoroughly and a lot faster than the one I described here, and the GCC developers were provided with decidedly better input. So while this time we were not lucky enough to already have a fix in the queue (one that just needed to get accepted more speedily and backported to 4.6), the bug turnaround time was still much faster and led to a fix within days. Frankly, I hope that the fix will get picked up for the upcoming Fedora’s compiler: the recent bug scares me more than the one I tracked down previously.

### How I made the world a better place
(by Pavel Roskin)

I installed Fedora 17 beta on one of my systems. I tried the version of LilyPond that was included with the distro. It was version 2.15.29 at a time when version 2.15.37 was current.

Actually, I would be fine with Fedora 17 shipping LilyPond 2.14.2 until 2.16 is released. But it turned out that the maintainer had a minor compile problem with 2.14.2 and switched to the development branch, but then kept updating only the versions for Rawhide (the rolling unstable Fedora), leaving Fedora 17 with an old development version.

I tried the stock LilyPond with my LilyPond files and found that it would give bogus barcheck errors.

So I checked out the latest LilyPond code from the git repository. Imagine my surprise when I got the same barcheck errors! I tried LilyPond 2.14.2, fixing the compile issue, and sure enough, the barcheck errors were still there.

It was clear that something was wrong with the system. I reduced the piece to a short example. The barchecks were gone, but there was another error message about "moving backwards in time". The PDF output was obviously wrong, with an extra measure appearing at the end.

Since Fedora 17 came with a brand new version of gcc (4.7.0), the compiler was an obvious suspect. I tried compiling with the optimisation flag "-O1" instead of "-O2", and sure enough, it fixed LilyPond.

I posted my findings to the LilyPond mailing list and got some valuable suggestions. But most importantly, I found that the issue was of great interest to the developers, which was very encouraging.

Everything from this point on was done in cooperation with the LilyPond and gcc developers, and can be seen in the mailing list archives and bug tracking systems.

I started looking for the miscompiled file. My initial approach was to write a script that would compile N files with -O2 and then switch to -O1. That approach turned out to be ineffective and slow, as I would have had to recompile everything every time, and I would not have been able to use parallel builds.

The approach that worked was to remove some object files, recompile them with different flags and see if that would change the behavior. That’s how I found the problematic file, simultaneous-music-iterator.cc.
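The working approach amounts to a bisection over object files. Here is a Python sketch of the idea (file names and the oracle are invented for illustration; in reality the oracle was "recompile these objects with -O1, relink, and rerun the failing .ly file"):

```python
# Bisection sketch: find the single miscompiled object file, given an
# oracle that tells us whether de-optimizing a subset makes the bug vanish.
def find_bad_file(files, bug_gone_when_deoptimized):
    candidates = list(files)
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        if bug_gone_when_deoptimized(half):
            candidates = half                    # the culprit is in this half
        else:
            candidates = candidates[len(half):]  # otherwise it is in the rest
    return candidates[0]

# Illustrative stand-in for the real rebuild-and-test cycle:
files = ["music-iterator.o", "simultaneous-music-iterator.o", "context.o"]
oracle = lambda subset: "simultaneous-music-iterator.o" in subset
print(find_bad_file(files, oracle))   # → simultaneous-music-iterator.o
```

Each rebuild-and-test cycle halves the candidate set, so even a large build narrows down to one file in a handful of relinks (assuming a single culprit).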

Then I looked for the function suffering from the gcc bug. I copied the contents of simultaneous-music-iterator.cc into another file. I would comment out some functions in the original file; the remaining functions were commented out in the other file. That way, I found the problematic function, Simultaneous_music_iterator::pending_moment().

The next step was to find which optimization caused the problem. The "-fverbose-asm" option to gcc reported all optimization flags enabled for compilation. I quickly found that "-fno-tree-vrp" fixed the problem. At this point, I had a working patch, so I submitted it as an issue.

As a result of the discussion in the LilyPond bug tracker, I actually had to write a script and test which other optimization options would fix LilyPond. "-fno-inline" was the only other one, but it was obviously much more intrusive than "-fno-tree-vrp".

David Kastrup asked me to post the assembly code so that he could look for problems in the generated code. I posted the x86_64 code. David then asked me to post the i386 code. It turned out that some i386 libraries needed by LilyPond are not available in x86_64 Fedora, so I installed Fedora 17 for i386 on a spare SSD.

But at that point, David had already found the problem: in the broken case, the compiler output ignored the result of the comparison used in the min() function.

I wrote a wrapper around min() that fixed LilyPond. But it wasn’t an acceptable fix: what if there were other calls to min() that gcc miscompiles? So the solution was to give the "-fno-tree-vrp" flag to gcc 4.7.0, and to report the bug to the GCC developers so that the next version of GCC would not have to be worked around.

I opened an issue in the GCC bug tracker. It contained the preprocessed version of simultaneous-music-iterator.cc, with everything but the problematic function commented out. I mentioned David Kastrup’s findings about the assembly output.

The GCC developers reacted quickly and confirmed that it was a bug. However, they wanted a self-contained test case, and LilyPond was far too big to serve as one.

I wrote a new main() function that would call Simultaneous_music_iterator::pending_moment() with the same arguments as would occur when LilyPond processed the problematic input file. I replaced a linked list with an array for simplicity. The result of the function would be printed on stdout.

It’s a little embarrassing to admit that the program was printing "denominator/numerator" rather than the other way around. I can only say in my defense that I attended school in a non-English-speaking country. Anyway, what mattered was that the output depended on the compile flags.

I wrote a bash script that would compile and link the files the way the LilyPond build system would do it. I found a small set of object files that could be linked into a working executable after their sources were stripped of unneeded code.

I was happy to see that the program output was different with -O1 and -O2 optimization. The next step was to copy the needed sources to a separate directory together with the compile script.

I put the new directory under git control and committed every change that kept showing the "interesting" behavior. I didn’t waste any time on descriptions; I would just do

    git commit -a -m "simplify"

I made a few wrong commits, but I was able to revert them quickly.

I cleaned up everything I could: removing files, merging them, expanding macros, and just trying to remove almost every line. Special thanks go to astyle, a program I used to re-indent the C++ code after every round of simplification. That’s how I arrived at the short example that I posted to the GCC bug tracker.

The GCC developers made a fix quickly; it is slated to appear in gcc 4.7.1. I wrote about it in the LilyPond bug tracker, and my workaround was committed shortly afterwards.

It took me a lot of time, but I’m glad that Fedora 17 will have a working version of LilyPond, and that GCC will be fixed too. GCC bugs are scary: so much code is compiled with it, including the Linux kernel. That’s how I made the world a less scary place.

### LilyPond output analysis

(by Janek Warchoł)

Some of the readers may be familiar with my articles in the previous LilyPond Report, as well as my multi-issue Lyrics Bug Report. Here I would like to present a detailed examination of an exemplary LilyPond output: the viola part from Mozart’s Requiem in D minor (first two movements, Introit and Kyrie).

The International Music Score Library Project hosts a scan of a hand-engraved edition published by Breitkopf & Härtel at the end of the 19th century, which we’ll use as a reference throughout this article. Will the LilyPond-made score stand up to this print made by a well-known publisher?

K626, viola part - Breitkopf & Härtel
(click to view in full-size)

You may notice that this score is set very, very tightly: it was common practice in the 19th century to use the space available on the page to the maximum, aiming for scores set on as few pages as possible. In fact, judging by today’s standards it’s engraved way too tightly; no modern publisher sets music so densely. Nevertheless, Breitkopf’s score remains legible.

Let’s see what happens when we use LilyPond to engrave this piece. We will begin our analysis with 100% default settings: just what you’d get after compiling, with LilyPond version 2.15.36 (the latest as of this writing), a LilyPond file containing just the music that Mozart wrote, without any layout tweaks.

If you have a decent printer at hand, I recommend printing both scores and comparing them on paper: the resolution of computer screens is limited, and therefore some inconsistencies appear in display (for example, lines that have the same thickness appear to be different).

Here’s what LilyPond gives us without any manual intervention:

K626 viola - default LilyPond output
(click to view in full-size)

\version "2.15.33"
\pointAndClickOff

\header {
  title = "Requiem d-minor K626"
  subtitle = "I. Introit et Kyrie"
  composer = "W. A. Mozart"
  instrument = "Viola"
}

\score {
  \relative c {
    \key d \minor
    \tempo Adagio
    \time 4/4
    \clef C
    r8 f-.\p r a-. r g-. r bes-.
    r8 a-. r a-. r a-. r b-.
    r8 c r d r c r d
    r8 d r a r a r b
    r8 f' r f r e r a
    r8 f r f r e r d
    r8 c r g'\f r f r e
    \mark \default
    r8 d r d r d r cis
    d8 r e r f r e r
    d8 r e r f e d g
    c,4 d g, c
    f4 bes8 a g f16 e a8 g
    f8 e d cis d4 c
    bes4 bes' a8 r g r
    \mark \default
    f4 r r16 f( a) f-. c-. f-. a,-. c-.
    f,4 r r16 f'( a) f-. c-. f-. a,-. c-.
    es8 r d r r c'\p r bes
    r8 g, r g r bes r a
    bes8 bes r bes r c r c
    bes4 a bes8( g es f)
    bes4 r r8 d~ ( d16 es d c)
    bes16( a g f) es4 f r8 f'~(
    f16 g f es) d( es d c) bes ( c' bes a g a g f)
    e!4( f) r2
    r8 d~ ( d16 es d c bes a bes c) d4~
    \mark \default
    d4 r16. bes32 \f g'16. es32 f4 r16. a32 c16. a32
    f4 r16. d32 f16. d32 a'4 r16. fis32 a16. fis32
    bes4 r16. bes,32 d16. bes32 es8 f4 es8
    d4 r16. d32 f16. d32 bes4 r16. bes32 d16. bes32
    g4 r16. g'32 bes16. g32 g,4 r16. es'32 g,16. g'32
    fis8[ r16. fis32] e!8[ r16. e32] d8 d d d
    g,8 g\p r bes r a r a
    g4( fis) g8( es c d)
    \mark \default
    g8 r bes\f r a r a r
    d8 r e! r f! r d r
    e,8 r e r a4 b
    c4 d~d8 d4 d8~
    d8 d~( d16 es d c ) bes ( d c bes) a( c bes a)
    g8.( a16) bes( a) g( bes) a( g)  f( g) a( g) a( f)
    c'8 c, c'2 e4
    f4 a,8 a a4 cis
    d4 f,8 f f4 a
    \mark \default
    bes4 r r16 bes'( d) bes-. f-. d-. bes-. d-.
    a4 r r16 d'( f) d-. a-. f-. d-. f-.
    gis,8 b' b b  b b b b
    a8 a, r cis \p d4 e
    d4 a f( e8 d)
    a'2 a\fermata
    \bar "||" \tempo Allegro \time 4/4
    R1*4
    r2 r8 b\f b b
    c16 d c b c d b c d e d c d e c d
    e8 f16 e d c b a gis8 b e d
    \mark \default
    c8 cis d e a, d, g! f
    e8 g bes4 a r
    R1
    r2 d4. d8
    c4 f gis,4. gis8
    \mark \default
    a4 r8 a b4. a16 b
    c8 b16 c d8 c16 d e8 d c b
    a16( b c4) cis8 d d, e fis
    g16( a bes4) b8 c! bes a g
    a16 f g a bes g a bes c8 g c4~
    c8 bes16 a bes c a bes g8 a16 bes c4
    R1
    \mark \default
    d4. d8 bes4 es
    fis,4. fis8 g4 r8 g
    a4. g16 a bes8 a16 bes c8 bes16 c
    d8 c bes a g f' es b
    c8 g' f16 es d c d c b a g as f g
    \mark \default
    es8 c as'4. f8 bes4~
    bes4 as g8 a16 b c8 d
    es16 ( d c bes a8) bes4 a8( bes c)
    d4 r r8 f, f f
    g16 a g f   g a f g  a bes a g   a bes g a
    bes4 r r8 c c c
    \mark \default
    d16 es d c d es c d es f es d es f d es
    f8 d e! f g4 f8 e
    f8 f, bes g c4 r
    r2 r8 g g g
    as16 bes as g a bes g a bes c bes a b c a b
    \mark \default
    c8 g c( bes!) a d, d'( c)
    bes4 r r8 g g a
    a4 a8. a16 b4 b8. b16
    cis8 a( b cis) d4. d8
    \mark \default
    e4 a,4. f8 bes4~
    bes8 g e' e d cis16 d e f g f
    e8 d cis8. d16 e4 a,8 cis
    d8( f e d) e a,( b cis)
    d8 d,( g e) a( bes! a g)
    \mark \default
    f8 d g( f) e c a' a
    f4 d' g, r
    r8 a d( c) b b e( d)
    c4 d8. d16 d4 e8. e16
    e4 d8( f) e4. e8
    f2 r4\fermata \tempo Adagio f8. f16
    e4 f2 e4
    d1 \bar "|."
  }
  \layout { }
}

We instantly recognize that LilyPond decided to set the music much more loosely: the original fits on one page (15 systems), while Lily’s version is 1.5 pages long (21 systems). I personally find it a bit too "airy". While you don’t have to agree with my opinion on this matter, my goal in engraving this piece was specifically to demonstrate that LilyPond can produce music set as tightly as the Breitkopf edition (one of the violists in our orchestra said that she much prefers tightly set scores). Also, sometimes there is no choice but to set music tightly: for example when optimal page breaking requires putting a lot of music on one page, or when measures are so long and full of notes that they simply need serious squeezing to fit on the page at all.

Thus, let’s change the overall horizontal spacing to see how Lily handles it. Some of the problems in the engravings below will appear precisely because of this decision, but generally condensing the music doesn’t change much in the program’s behaviour — most of the issues can also be found in the uncondensed version.

To change horizontal spacing, I use the method mentioned in the manuals:
\override Score.SpacingSpanner #'common-shortest-duration = #(ly:make-moment 1 4)
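For completeness, here is how such an override typically sits in a full file — a minimal sketch using the 2.15-era #' syntax, placed in a \layout block so it applies to the whole score:

```lilypond
\layout {
  \context {
    \Score
    % pretend the most common shortest duration is a quarter note,
    % which makes LilyPond space the music more tightly
    \override SpacingSpanner #'common-shortest-duration = #(ly:make-moment 1 4)
  }
}
```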
Here are the results:

#### [embedded viewer]

That’s definitely tighter than the default output — I quite like it! However, have a look at letter E, measures 49-52 (after the tempo change): these are four measures, each filled with a rest. What for? That’s a waste of space; they should be typeset as a single multi-measure rest. Of course, it’s easy to add a \compressFullBarRests command, but I don’t quite see why this isn’t the default behavior. I don’t recall any score in which the rests weren’t compressed...
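For reference, here is the command in question — a minimal sketch in the syntax of the LilyPond version used for this article:

```lilypond
\version "2.15.33"
\relative c'' {
  % without this command, the four rest measures are printed separately;
  % with it, R1*4 appears as a single multi-measure rest
  \compressFullBarRests
  c1 R1*4 c1
}
```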

Going back to the horizontal spacing: the result is now nicer, but still not as tight as the Breitkopf engraving. Let’s try more:
\override Score.SpacingSpanner #'common-shortest-duration = #(ly:make-moment 1 1)

#### [embedded viewer]

What’s that? Nothing changed... or did it?

Yes, there are changes, but not for the best at all — the spacing went all weird, see the markings here:

#### [embedded viewer]

(I suggest doing an Alt-Tab comparison, i.e. opening both files at full size and switching between the windows (or browser tabs) using Alt-Tab or whatever shortcut your operating system uses.)

I find this behavior very strange, not only because it results in a clearly ugly spacing, but also because obviously there is still a lot of whitespace left between the notes, so it *should be possible* to actually do some shrinking. For example, 16ths are separated with about 0.5 staffspace, while in hand-engraved scores it is not uncommon for short notes to almost touch each other. Consider the fourth system: there are twenty-four 16th notes in it. Reducing the space between them to 0.1 staffspace — that’s close, but still legible — would be enough to add another measure to that system (without changing anything about longer notes’ spacing)! The situation is very similar in systems 10 and 14.

OK, so how can we achieve spacing as tight as in the Breitkopf edition? I’ve tried forcing the number of systems, but this resulted in music overflowing the page in the last system. Finally I managed to do it by reducing the font size to 18 and forcing page-count to 1, but that’s not exactly what I was looking for.

#### [embedded viewer]

As you can see, this setting revealed some problems that were not instantly recognizable in the previous versions — for example, the piano mark in measure 46 and the F rehearsal mark seemed fine previously, but here it is obvious that they stick out too much. If you look closely at the hand-engraved edition, you’ll see that the F rehearsal mark is not perfectly centered on the barline, but shifted a bit to the right, which allows placing it closer to the staff. Dynamic stickout is greatly reduced in Breitkopf’s edition by moving the dynamic horizontally — see measure 26 for an example. This isn’t hard to fix manually in LilyPond, but it also seems possible to automate such decisions.

Nevertheless, as I’ve said, the Breitkopf edition is set too tight, so let’s go back to the Lily version created by overriding the common shortest duration to 1/4 — it is definitely the best. Time to examine it closely.

#### [embedded viewer]

The first thing that drew my attention was the bes32\f g'16. es32 figure in measure 26: Lily’s output is very "airy" compared to the engraved version:

That’s because the distances between notes, augmentation dots and accidentals are much smaller in the Breitkopf score. LilyPond’s distances are quite good for "normal" scores, but they stay the same even when notes are placed very close to each other, leading to poor results in tight scores; see also issue 2142 in our tracker (it is about accidental spacing, but the problem is much the same for any other object).

So, I’ve overridden the spacing properties of dots and accidentals to bring these objects closer to the notes:
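The overrides themselves aren’t reproduced in the article; as a hedged sketch of the kind of tweak involved, one knob for accidentals is the right-padding of the AccidentalPlacement grob (the value 0.05 below is purely illustrative, not the one actually used, and the augmentation dots would need a separate override):

```lilypond
\relative c' {
  % pull accidentals closer to the note heads than the default padding allows
  \override Score.AccidentalPlacement #'right-padding = #0.05
  cis4 des es fis
}
```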

#### [embedded viewer]

Unfortunately, this solution isn’t perfect — while flats and sharps are positioned quite nicely, the naturals are definitely too close when the notes are down-stemmed:

To achieve a pleasing result, one would have to write a function that applies a different right-padding depending on the type of accidental and the stem direction. While that’s doable, I think it would make more sense to write a "true & complete" fix for issue 2142 to get rid of this problem once and for all.

Going back to note spacing, all 16th notes seem to get too much space, at least compared to longer notes: there is little difference between 8ths and 16ths in this regard, unlike in the Breitkopf score:

There are more issues present, marked in the pdf below. For greater clarity, only one instance of a problem is marked in color (additional places where an issue appears are marked in gray).

When you hover your mouse over the markings, you should see an explanatory text. I know that this feature works for me when I open the PDFs in Adobe Reader (for example the plugin inside Firefox). If you don’t see the explanations, please download the PDF and try an external viewer (for example Okular).


#### [embedded viewer]

That concludes my analysis. For an almost-default output, the results of LilyPond’s work are quite impressive — but they’re not a publication-quality engraving yet: there are noticeable flaws. Most of them concern note spacing and beaming; some are caused by the lack of intelligent and flexible placement of objects like dynamics and rehearsal marks. The most visible things are easy to fix manually; after all, no computer program will be smart enough to engrave everything perfectly. However, all the problems present seem to be fixable (in other words, it should be possible to write an algorithm that would solve them).

There is surely lots of room for our Development Team to improve things (and we encourage you to help us!).

What are your opinions? I invite you to discuss this engraving, either in the comments here or on the LilyPond development mailing list.

Post Scriptum: the wrong positioning of the augmentation dot in measure 31 was fixed in LilyPond version 2.15.38.

Janek Warchoł, April 2012.

### The snippet of the month

(by Valentin Villenave)

Anyone who has been discovering and using LilyPond in the past decade will tell you about the wonder and bewilderment we encounter when we compile our first scores and discover the beautiful results. And yet, anyone who has been using LilyPond a tad longer will also testify to the incredibly annoying limitations we’re all bound to stumble upon, sooner or later.

Of all these bugs, the one that annoys me the most is the way LilyPond writes tuplets: it can’t handle slur-shaped tuplet brackets, it tends to be dumb when placing its tuplet brackets, and most of all it has absolutely no idea where to place cross-staff tuplets, which may result in ugly collisions.

Which is why I was thrilled to recently (re)discover this snippet, which does make things look (a little) better.

Here’s the code; I’m not sure whom to thank for this snippet, but it’s well written and nicely commented.

Positioning tuplet numbers close to kneed beams

### The (undocumented) feature of the month

(by Valentin Villenave)

It is not uncommon for some advanced LilyPonders to use invisible "anchors" in their music. These can be useful for adding text above full-measure rests...

\relative c' {
  c1
  s1*0^"hello"
  R1
  c1
  s1*0^"there"
  R1*2
  c1
}

to end crescendos more elegantly...

\relative c' {
  d2\> c\!
  d2\>^"better:" c s1*0\!
}

or to integrate articulations and texts with music stored in a variable:

notes = \relative c' {
  c4 c g' g
  c,1
}
\score {
  \new Voice {
    s1*0-> \notes
    << { s1 s1*0\fermata }
       { \oneVoice \notes } >>
  }
}

However, this syntax has a pitfall: using an invisible rest with a null duration effectively changes the "last-used duration" for LilyPond, and can therefore play tricks on your score depending on how you use it (see below).

Earlier this month, David Kastrup noticed that there had actually always been another way to enter an invisible anchor in LilyPond: the empty chord construct, written as <>. He went on to explain how it might prove less confusing for users:

Quick: tell me what you would expect without too much thinking (imagine you are a naive user) from the following:

\new Staff <<
  \relative c'' { c4 d e f s1*0-\markup Oops c d e f g1 }
  \\
  \relative c' { c4 d e f <>-\markup Wow c d e f g1 }
>>

Whilst he definitely had a point there, and nobody is really happy with s1*0, some contributors disagreed with the idea of somehow "officially" advertising the <> syntax in our documentation, to the point where names were called (Haskell, really? That’s a bit harsh.) and people appeared to be on the verge of leaving the project altogether! If we had not known this already, it again shows that LilyPond developers can be quite passionate about details.

As for me, all I can see is the awesomeness of discovering an undocumented feature that has been right there under our noses for 16 years. That alone, without any parser modification needed or whatever, is terrifically cool from a pure geek point of view. (To which David accurately replied: "Well, that was part of the problem. It was divisive between geeks and non-geeks." And LilyPond is, and must remain, meant for both types of people.)

Then the question remains: is <> elegant and "LilyPond-ish" enough to get a prominent place in our manuals? I’d say it is, but history has proven there’s room for flamewar debate here. Whether we end up including it, implementing a new keyword (either \null or, to avoid confusion with the markup command of the same name, something else, or even a new letter such as z) which would require a parser modification, or — much more likely — doing nothing at all and sticking with s1*0 until someone resuscitates the issue in a dozen years, it really doesn’t matter that much. Invisible anchors are really not everyday material for the average LilyPonder; and when they are used, I suspect whoever uses them is already advanced enough to cope with whatever is thrown at her, be it tricky null durations, new parser keywords/letters or weird Haskell-like constructs.

The closing word for now will be this comment from Janek, as I was telling him how hard reading the thread made me laugh: "I can’t believe you guys went all flaming over such a silly thing!" — "Well," he replied, "I couldn’t believe that you quit LilyPond for a few months last year because of a silly mailing list question."

Oh, well. Touché.

Here endeth the twenty-sixth issue of the LilyPond Report. With just a bit of luck, our next installment will be released after the upcoming LilyPond stable release, and there will be a lot to report vis-à-vis brand new exciting work being started...
In the meantime, feel free to send us your contributions!

Cheers,
David Kastrup, Janek Warchoł & Valentin Villenave

## Forum

• The LilyPond Report #26
24 May 2012, by Carl Sorensen

In my opinion, the greatest difference in the tightness of the LilyPond score and the Breitkopf score is the shape of the note heads. The Breitkopf heads are nearly 20% narrower than the LilyPond heads. It would be relatively straightforward to create a sandbox for exploring this effect by creating a new LilyPond installation with the parameters of draw_outside_ellipse inside of draw_quarter_path in mf/feta-params.mf adjusted to create narrower heads. I think it would be interesting to try such an experiment. Perhaps I will give it a shot soon.

Anyway, I think that focusing on the between-element spacings is helpful; still, the note head aspect ratio may have an even larger effect.

Thanks,

Carl

• The LilyPond Report #26
24 May 2012, by Valentin Villenave

What amazes me most is how human engravers can choose to place dynamics alongside some out-of-staff note heads, rather than strictly aligned above or below. It’s quite helpful, say, if you have an ottava above the staff, plus high notes with several ledger lines and an "ff" on top of that: in LilyPond, the dynamic will be placed above the highest note and therefore the ottava dashed line will be very, very high; with a human-engraved score (or a highly tweaked LilyPond output) the dynamic will be placed just before the note, therefore not taking any additional vertical space.

I wonder if Lily could ever be made to act like this.

• The LilyPond Report #26
24 May 2012, by Janek Warchoł

Of course, Valentin! I think that this is really easy algorithm-wise: take a dynamic, measure its width, compare it with the space before the note (to make sure it won’t look attached to the previous note if moved), try moving it to the left and see how much closer to the staff it could then be (take this example: e’1\f e\f; it doesn’t make sense to adjust the first dynamic, but it makes sense to move the second one).
On the design level, I think this should do (actually, something similar could be used for some other objects, too). I don’t know whether it would be difficult to implement.

• The LilyPond Report #26
27 May 2012, by Tuukka Verho

Being a physicist, I can’t help thinking that the layout problem is analogous to finding the minimum-energy configuration of a molecule. The "energy" of a score would be its "ugliness" — probably LilyPond already uses such a concept in its layout. For example, the energy is increased if a dynamic is either placed far from the staff or not aligned with the note, i.e. there may be competition between trying to be close to the staff and being aligned with the note. This reminds me of the competition between different interactions in physical systems.

I think energy minimization methods in physics (for example Monte Carlo methods) could serve as models for music layouting.

• The LilyPond Report #26
27 May 2012, by Janek Warchoł

@Tuukka Verho:
yes, LilyPond uses "ugliness points" to determine how some layout elements should look (see http://www.lilypond.org/doc/v2.15/D... for an example). However, finding "the least ugly combination" of all elements simultaneously would be impossible computation-wise (the amount of processing power required grows too fast as the number of elements increases). We need to compute the "least ugly" layout for each element separately; the trick is to do this in the right order and to use good estimates for values that we don’t know yet (these estimates are called pure properties in Lily).

• The LilyPond Report #26
27 May 2012, by David Kastrup

However, finding "the least ugly combination" of all elements simultaneously would be impossible computation-wise (the amount of processing power grows too fast with the number of elements increased).

Which is why a shortest-path/dynamic-programming algorithm is employed, like the one in TeX’s paragraph breaker. Simulated annealing would not really help much, and it would render the results non-deterministic (you can’t really predict which of several local minima will end up as the point of convergence).

• The LilyPond Report #26
24 May 2012, by Janek Warchoł

That’s a good idea. I was thinking about a "notehead roundness" setting some time ago.

• The LilyPond Report #26
24 May 2012, by David Kastrup

I think it is ironic that an edition from a publisher called "Breitkopf" (Broadhead) is supposed to excel in the narrowness of its heads.

• The LilyPond Report #26
20 May 2012, by georgH

After reading the comparison with the hand-engraved edition, I am really amazed at the great work and skill of the engravers and the huge effort of hand-engraving that music on one page, with such incredibly detailed spacing. I understand that hand-engraved sheet music may have been expensive, even a reprint of old, good engravings, as there is still value in them. But I cannot comprehend why some of the newest digital editions, which have none of that careful typesetting and detailed layout, are as expensive or more.

When I was studying, I enjoyed copies of printed horn concertos that would fit in just four pages, or one A3 sheet on both sides, almost like a 4-page book. When I bought my own, I could not find this kind of printing anymore: they are all loosely spaced, with extra-thin lines to ensure it is hard to read from a distance.

BTW, even with the current LilyPond imperfections, the default output is still really good compared with other software. Sometimes in the band we play arrangements made by another member or the conductor, of course made with either Finale or Encore (I asked them to confirm). The number of mistakes that need to be corrected manually (and stay uncorrected) is too high, to the point of even confusing the musicians during rehearsal: extremely bad collisions or transposing issues, for example.

By the way, I found two engraving videos on YouTube, here are the links:

Thanks to all LilyPond developers, and for LilyPond reports!

• The LilyPond Report #26
20 May 2012, by Janek Warchoł

I think that the new computer-made editions are so expensive because it takes a lot of struggle to get commercial software to do what you want :P

• The LilyPond Report #26
20 May 2012, by David Kastrup

It’s more like the new computer-made editions are so expensive because of a lack of competition, due to a total failure of copyright laws to promote the common good. The case of the music industry is exacerbated by several factors:

a) modern classical music is not a cash cow. For better or worse, the highest demand for good engraving obviously comes from performers with extraordinary sightreading skills. A classical music education ultimately ending in an orchestral career (for classical soloists, the score is just a starting point, and they are outnumbered anyway) has a focus on efficient sightreading. But modern music is mostly "avant-garde", meaning small circulation numbers.

b) old plates have culminated in rather high quality. Established engraving was a competition between engraver schools established by practice in renowned publishing houses. Engraving was a mass manufacturing job, like instrument production for accordions, bandonions, violins at the start of the 20th century. Better quality led to better jobs and better payment, desperately needed. While this involved aspects where machines excel (being able to rake straight, constant staff lines), when those basics taking hard work to acquire were under control, competition happened with more subtle things.

Now, due to the ridiculously long copyright protections, the new computer typesetting processes never needed to actually compete with hand engraving on high-circulation material: the new music was in low circulation, and the copyright protection on the old engravings lasted long enough for the skill of engraving to die out completely before anyone came into the situation where, in order to secure continuing revenue, they would have to create better output than that produced by the old plates.

Moving from mass manufacture to technology always has this problem: machines tend to produce better and more consistent output than unskilled workers, but workers don’t stay unskilled. And if they do, they might have to look for a new job eventually.

In the music printing industry, copyright laws and the sightreading professionals’ lack of interest in new music combined into a lack of market resistance against an appalling slide in quality.

Competing manufacturing houses will acquire talent at every level required, and that is possible. Competing closed software applications are independent: you can’t make an offer that can’t be refused to some pretty good subroutine in a reputed module of a competing software product. There is no cross-pollination.

The means of evolving high quality at all levels in an industry have not made the transfer from a manufacture-based process to a process centered around computers and copyright laws, laws that extend protection of old plates to a duration where computers never need to compete with them.

As a result, there are much fewer computer products for engraving than there had been publishing houses. And the set of in-house skills distinguishing publishing houses is rather fuzzy and small, but since much of the market revenue is based on the sales from old plates (physically safe-kept) still under copyright (and/or marginally annotated to refresh copyright), a revenue stream for subsidizing work on new music (even if computerized production tends to be cheaper) with money and reputation is available mostly to the old houses.

This rant might warrant turning into an article of its own.

• The LilyPond Report #26
23 May 2012, by Janek Warchoł

“This rant might warrant turning into an article of its own.”

+1!
