Federico's Blog

  1. Reducing the number of image copies in GNOME

    - gdk-pixbuf, gnome, performance

    Our graphics stack that deals with images has evolved a lot over the years.

    In ye olden days

    In the context of GIMP/GNOME, the only thing that knew how to draw RGB images to X11 windows (doing palette mapping for 256-color graphics cards and dithering if necessary) was the GIMP. Later, when GTK+ was written, it exported a GtkPreview widget, which could take an RGB image buffer supplied by the application and render it to an X window — this was what GIMP plug-ins could use in their user interface to show, well, previews of what they were about to do with the user's images. Later we got some obscure magic in a GdkColorContext object, which helped allocate X11 colors for the X drawing primitives. In turn, GdkColorContext came from the port that Miguel and I did of XmHTML's color context object (and for those that remember, XmHTML became the first version of GtkHtml; later it was rewritten as a port of KDE's HTML widget). Thankfully all that stuff is gone now; we can now assume that video cards are 24-bit RGB or better everywhere, and there is no need to worry about limited color palettes and color allocation.

    Later, we started using the Imlib library, from the Enlightenment project, as an easy API to load images — the APIs from libungif, libjpeg, libpng, etc. were not something one really wanted to use directly — and also to keep images in memory with a uniform representation. Unfortunately, Imlib's memory management was peculiar, as it was tied to Enlightenment's model for caching and rendering loaded/scaled images.

    A bunch of people worked to write GdkPixbuf: it kept Imlib's concepts of a unified representation for image data, and an easy API to load various image formats. It added support for an alpha channel (we only had 1-bit masks before), and it put memory management in the hands of the calling application, in the form of reference counting. GdkPixbuf obtained some high-quality scaling functions, mainly for use by Eye Of Gnome (our image viewer) and by applications that just needed scaling instead of arbitrary transformations.

    Later, we got libart, the first library in GNOME to do antialiased vector rendering and affine transformations. Libart was more or less compatible with GdkPixbuf: they both had the same internal representation for pixel data, but one had to pass the pixels/width/height/rowstride around by hand.

    Mea culpa

    Back then I didn't understand premultiplied alpha, which is now ubiquitous. The GIMP made the decision to use non-premultiplied alpha when it introduced layers with transparency, probably to "avoid losing data" from transparent pixels. GdkPixbuf follows the same scheme.

    (Now that the GIMP uses GEGL for its internal representation of images... I have no idea what it does with respect to alpha.)

    Cairo and afterwards

    Some time after the libart days, we got Cairo and pixman. Cairo had a different representation of images than GdkPixbuf's, and it supported more pixel formats and color models.

    GTK2 got patched to use Cairo in the toplevel API. We still had a dichotomy between Cairo's image surfaces, which are ARGB premultiplied data in memory, and GdkPixbufs, which are RGBA non-premultiplied. There are utilities in GTK+ to do these translations, but they are inconvenient: every time a program loads an image with GdkPixbuf's easy API, a translation has to happen from non-premul RGBA to premul ARGB.
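    To make the cost concrete, here is a sketch (not GTK+'s actual code) of the per-pixel work that translation implies: each color channel gets multiplied by the alpha value, and the channel order changes.

```rust
// Sketch only, not GTK+'s implementation: convert one non-premultiplied
// RGBA pixel into premultiplied ARGB byte order.
fn rgba_to_premul_argb(r: u8, g: u8, b: u8, a: u8) -> [u8; 4] {
    // c * a / 255, with +127 so the integer division rounds to nearest.
    let premul = |c: u8| ((c as u32 * a as u32 + 127) / 255) as u8;
    [a, premul(r), premul(g), premul(b)]
}
```

    Every pixel of every loaded image pays this kind of cost once per translation, which is exactly what the rest of this post tries to avoid.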

    Having two formats means that we inevitably do translations back and forth of practically the same data. For example, when one embeds a JPEG inside an SVG, librsvg will read that JPEG using GdkPixbuf, translate it to Cairo's representation, composite it with Cairo onto the final result, and finally translate the whole thing back to a GdkPixbuf... if someone uses librsvg's legacy APIs to output pixbufs instead of rendering directly to a Cairo surface.

    Who uses that legacy API? GTK+, of course! GTK+ loads scalable SVG icons with GdkPixbuf's loader API, which dynamically links librsvg at runtime: in effect, GTK+ doesn't use librsvg directly. And the SVG pixbuf loader uses the "gimme a pixbuf" API in librsvg.


    Then, we got GPUs everywhere. Each GPU has its own preferred pixel format. Image data has to be copied to the GPU at some point. Cairo's ARGB needs to be translated to the GPU's preferred format and alignment.

    Summary so far

    • Libraries that load images from standard formats have different output formats. Generally they can be coaxed into spitting ARGB or RGBA, but we don't expect them to support any random representation that a GPU may want.

    • GdkPixbuf uses non-premultiplied RGBA data, always in that order.

    • Cairo uses premultiplied ARGB in platform-endian 32-bit chunks: if each pixel is 0xaarrggbb, then the bytes are shuffled around depending on whether the platform is little-endian or big-endian.

    • Cairo internally uses a subset of the formats supported by pixman.

    • GPUs use whatever they damn well please.

    • Hilarity ensues.
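    As a concrete illustration of the endianness point above (a sketch, not Cairo code): if a premultiplied pixel is the packed 32-bit value 0xAARRGGBB, its bytes land in memory as B, G, R, A on little-endian machines and as A, R, G, B on big-endian ones.

```rust
// Illustrative: show the in-memory byte order of a packed 0xAARRGGBB
// pixel under both endiannesses.
fn argb_memory_layout(pixel: u32) -> ([u8; 4], [u8; 4]) {
    (pixel.to_le_bytes(), pixel.to_be_bytes())
}
```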

    What would we like to do?

    We would like to reduce the number of translations between image formats along the loading-processing-display pipeline. Here is a plan:

    • Make sure Cairo/pixman support the image formats that GPUs generally prefer. Have them do the necessary conversions if the rest of the program passes an unsupported format. Ensure that a Cairo image surface can be created with the GPU's preferred format.

    • Make GdkPixbuf just be a wrapper around a Cairo image surface. GdkPixbuf is already an opaque structure, and it already knows how to copy pixel data in case the calling code requests it, or wants to turn a pixbuf from immutable to mutable.

    • Provide GdkPixbuf APIs that deal with Cairo image surfaces. For example, deprecate gdk_pixbuf_new() and gdk_pixbuf_new_from_data(), in favor of a new gdk_pixbuf_new_from_cairo_image_surface(). Instead of gdk_pixbuf_get_pixels() and related functions, have gdk_pixbuf_get_cairo_image_surface(). Mark the "give me the pixel data" functions as highly discouraged, and only for use really by applications that want to use GdkPixbuf as an image loader and little else.

    • Remove calls in GTK+ that cause image conversions; make them use Cairo image surfaces directly, from GdkTexture up.

    • Audit applications to remove calls that cause image conversions. Generally, look for where they use GdkPixbuf's deprecated APIs and update them.

    Is this really a performance problem?

    This is in the "excess work" category of performance issues. All those conversions are not really slow (they don't make up the biggest part of profiles), but they are nevertheless work that we could avoid doing. We may get some speedups, but it's probably more interesting to look at things like power consumption.

    Right now I see this not so much as a minor optimization, but as a way to gradually modernize our image API.

    We seem to change imaging models every N years (X11 -> libart -> Cairo -> render trees in GPUs -> ???). It is very hard to change applications to use different APIs. In the meantime, we can provide a more linear path for image data, instead of doing unnecessary conversions everywhere.


    I have a use-cairo-surface-internally branch in gdk-pixbuf, which I'll be working on this week. Meanwhile, you may be interested in the ongoing Performance Hackfest in Cambridge!

  2. Madrid GNOME+Rust Hackfest, part 3 (conclusion)

    - gnome, hackfests, rust

    The last code I wrote during the hackfest was the start of code generation for GObject interfaces. This is so that you can do

    gobject_gen! {
        interface Foo {
            virtual fn frob(&self);
        }
    }

    and it will generate the appropriate FooIface like one would expect with the C versions of interfaces.

    It turns out that this can share a lot of code from the existing code generator for classes: both classes and interfaces are "just virtual method tables", plus signals and properties, and classes can actually have per-instance fields and such. I started refactoring the code generator to allow this.
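    To illustrate what gets generated (hypothetical names, not gnome-class's actual output): at the C level a GObject interface is just a #[repr(C)] struct of function pointers, so for the interface above the generated vtable would look roughly like this.

```rust
use std::os::raw::c_void;

// Hypothetical sketch of the generated vtable: a GObject interface is
// "just a virtual method table", i.e. a C struct of function pointers.
#[repr(C)]
pub struct FooIface {
    // In real generated code this slot would be a GTypeInterface.
    pub parent_iface: *const c_void,
    // One slot per virtual method declared in the interface.
    pub frob: Option<unsafe extern "C" fn(this: *mut c_void)>,
}
```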

    I also took a second look at how to present good error messages when the syn crate encounters a parse error. I need to sit down at home and experiment with this carefully.

    Back home

    I'm back home now, jetlagged but very happy that gnome-class is in a much more advanced state than it was before the hackfest. I'm very thankful that practically everyone worked on it!

    Also, thanks to Alberto and Natalia for hosting me at their apartment and showing me around Madrid, all while wrangling their adorable baby Mario. We had a lovely time on Saturday, and ate excellent food downtown.

    Sponsored by the GNOME Foundation

    Hosted by OpenShine

  3. Madrid GNOME+Rust Hackfest, part 2

    - gnome, hackfests, librsvg, rust

    Hacking on gnome-class continues apace!

    Philippe updated our dependencies.

    Alberto made the syntax for per-instance private structs more ergonomic, and then made that code nice and compact.

    Martin improved our conversion from CamelCase to snake_case for code generation.
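    That conversion is conceptually simple; a minimal version (a sketch, not gnome-class's actual code) looks like this:

```rust
// Minimal CamelCase -> snake_case conversion: insert an underscore
// before each uppercase letter (except the first) and lowercase it.
fn to_snake_case(s: &str) -> String {
    let mut out = String::new();
    for (i, ch) in s.chars().enumerate() {
        if ch.is_uppercase() {
            if i > 0 {
                out.push('_');
            }
            out.extend(ch.to_lowercase());
        } else {
            out.push(ch);
        }
    }
    out
}
```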

    Daniel added initial support for GObject properties. This is not finished yet, but the initial parser and code generation is done.

    Guillaume turned gir, the binding generator in gtk-rs, from a binary into a library crate. This will let us have all the GObject Introspection information for parent classes at compilation time.

    Antoni has been working on a tricky problem. GTK+ structs that have bitfields do not get reconstructed correctly from the GObject Introspection information — Rust does not handle C bitfields yet. This has two implications. First, we lose some of the original struct fields in the generated bindings. Second, the sizes of the generated structs are not the same as the original C structs, so g_type_register_static() complains that one is trying to register an invalid class.

    Yesterday we got as far as reading the amd64 and ARM ABI manuals to see what the hell C compilers are supposed to do for laying out structs with bitfields. Most likely, we will have a temporary fix in gir's code generator so that it generates structs with the same layout as the C ones, with padding in place of the space for bitfields. Later we can remove this when rustc gets support for C bitfields.
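    For example (hypothetical struct, purely to illustrate the workaround): if a C header declares bitfields, the generated Rust struct can replace them with opaque padding of the same size, so that the total struct size matches what g_type_register_static() expects.

```rust
// Hypothetical example of the padding workaround. Suppose the C header has:
//
//     struct Widget {
//         unsigned int flag_a : 1;
//         unsigned int flag_b : 1;
//         int x;
//     };
//
// Rust cannot express the bitfields yet, so the generated struct keeps
// opaque padding where they live; the overall layout still matches C's.
#[repr(C)]
pub struct Widget {
    _bitfields: u32, // flag_a and flag_b are inaccessible, but sized right
    pub x: i32,
}
```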

    I've been working on support for GObject interfaces. The basic parsing is done; I'm about to refactor the code generation so I can reuse the parts that fill vtables from classes.

    Yesterday we went to the Madrid Rust Meetup, a regular meeting of rustaceans here. Martin talked about WebRender; I talked about refactoring C to port it to Rust, and then Alex talked about Rust's plans for 2018. Fun times.

    Sponsored by the GNOME Foundation

    Hosted by OpenShine

  4. Madrid GNOME+Rust Hackfest, part 1

    - gnome, hackfests, librsvg, rust

    I'm in Madrid since Monday, at the third GNOME+Rust hackfest! The OpenShine folks are kindly letting us use their offices, on the seventh floor of a building by the Cuatro Caminos roundabout.

    I am very, very thankful that this time everyone seems to be working on developing gnome-class. It's a difficult project for me, and more brainpower is definitely welcome — all the indirection, type conversion, GObject obscurity, and procedural macro shenanigans definitely take a toll on oneself.

    Gnome-class internals

    Gnome-class internals on the whiteboard

    I explained how gnome-class works to the rest of the hackfest attendees. I've been writing a document on gnome-class's internals, so the whiteboard was a whirlwind tour through it.

    Error messages from the compiler

    Antoni Boucher, the author of relm (a Rust crate to write GTK+ asynchronous widgets with an Elm-like model), explained to me how relm manages to present good error messages from the Rust compiler, when the user's code has mistakes. Right now this is in a very bad state in gnome-class: user errors within the invocation of the procedural macro get shown by the compiler as errors at the macro call, so you don't get line number information that is meaningful.

    For a large part of the day we tried to refactor bits of gnome-class to do something similar. It is very slightly better now, but this really requires me to sit down calmly, at home, and to fully understand how relm does it and what changes are needed in the syn parser crate to make it easy to present good errors.

    I think I'll continue this work at home, as there is a lot of source code to understand: the combinator parsers in syn, the error handling scheme in relm, and the peculiarities of gnome-class.

    Further work during the hackfest

    Other people working on gnome-class are adding support for GObject properties, inheritance from non-Rust classes, and improving the ergonomics of class-private structures.

    I think I'll stop working on error messages for now, and focus instead on either supporting GTypeInterfaces, or completing support for type conversions for methods and signals.

    Other happenings in Rust

    Paolo Borelli has been porting RsvgState to Rust in librsvg. This is the big structure that holds all the CSS state for SVG elements. This is very meticulous work, and I'm thankful that Paolo is paying good attention to it. Soon we will have all the style machinery for librsvg in Rust, which will make it easier to use the selectors crate from Servo instead of libcroco, as the latter is unmaintained.


    Food in Madrid

    Ah, Spanish food. We have been enjoying cheese, jamón, tortilla, pimientos, oxtail stews, natillas, café con leche...


    Thanks to OpenShine for hosting the hackfest, and to the GNOME Foundation for sponsoring my travel. And thanks to Alberto Ruiz for putting me up in his house!

    Sponsored by the GNOME Foundation

  5. Refactoring some repetitive code to a Rust macro

    - librsvg, rust

    I have started porting the code in librsvg that parses SVG's CSS properties from C to Rust. Many properties have symbolic values:

    stroke-linejoin: miter | round | bevel | inherit
    stroke-linecap: butt | round | square | inherit
    fill-rule: nonzero | evenodd | inherit

    StrokeLinejoin is the first property that I ported. First I had to write a little bunch of machinery to allow CSS properties to be kept in Rust-space instead of the main C structure that holds them (upcoming blog post about that). But for now, I just want to show how this boiled down to a macro after refactoring.

    First cut at the code

    The stroke-linejoin property can have the values miter, round, bevel, or inherit. Here is an enum definition for those values, and the conventional machinery which librsvg uses to parse property values:

    #[derive(Debug, Copy, Clone)]
    pub enum StrokeLinejoin {
        Miter,
        Round,
        Bevel,
        Inherit,
    }

    impl Parse for StrokeLinejoin {
        type Data = ();
        type Err = AttributeError;

        fn parse(s: &str, _: Self::Data) -> Result<StrokeLinejoin, AttributeError> {
            match s.trim() {
                "miter" => Ok(StrokeLinejoin::Miter),
                "round" => Ok(StrokeLinejoin::Round),
                "bevel" => Ok(StrokeLinejoin::Bevel),
                "inherit" => Ok(StrokeLinejoin::Inherit),
                _ => Err(AttributeError::from(ParseError::new("invalid value"))),
            }
        }
    }

    We match the allowed string values and map them to enum values. No big deal, right?

    Properties also have a default value. For example, the SVG spec says that if a shape doesn't have a stroke-linejoin property specified, it will use miter by default. Let's implement that:

    impl Default for StrokeLinejoin {
        fn default() -> StrokeLinejoin {
            StrokeLinejoin::Miter
        }
    }

    So far, we have three things:

    • An enum definition for the property's possible values.
    • impl Parse so we can parse the property from a string.
    • impl Default so the property knows its default value.

    Where things got repetitive

    The next property I ported was stroke-linecap, which can take the following values:

    #[derive(Debug, Copy, Clone)]
    pub enum StrokeLinecap {
        Butt,
        Round,
        Square,
        Inherit,
    }

    This is similar in shape to the StrokeLinejoin enum above; it's just different names.

    The parsing has exactly the same shape, and just different values:

    impl Parse for StrokeLinecap {
        type Data = ();
        type Err = AttributeError;

        fn parse(s: &str, _: Self::Data) -> Result<StrokeLinecap, AttributeError> {
            match s.trim() {
                "butt" => Ok(StrokeLinecap::Butt),
                "round" => Ok(StrokeLinecap::Round),
                "square" => Ok(StrokeLinecap::Square),
                "inherit" => Ok(StrokeLinecap::Inherit),
                _ => Err(AttributeError::from(ParseError::new("invalid value"))),
            }
        }
    }

    Same thing with the default:

    impl Default for StrokeLinecap {
        fn default() -> StrokeLinecap {
            StrokeLinecap::Butt
        }
    }

    Yes, the SVG spec has

    default: butt

    somewhere in it, much to the delight of the 12-year-old in me.

    Refactoring to a macro

    Here I wanted to define a make_ident_property!() macro that would get invoked like this:

    make_ident_property!(
        StrokeLinejoin,
        default: Miter,

        "miter" => Miter,
        "round" => Round,
        "bevel" => Bevel,
        "inherit" => Inherit,
    );

    It's called make_ident_property because it makes a property definition from simple string identifiers. It has the name of the property (StrokeLinejoin), a default value, and a few repeating elements, one for each possible value.

    In Rust-speak, the macro's basic pattern is like this:

    macro_rules! make_ident_property {
        ($name: ident,
         default: $default: ident,
         $($str_prop: expr => $variant: ident,)+
        ) => {
            // ... macro body will go here ...
        };
    }

    Let's dissect that pattern:

    macro_rules! make_ident_property {
        ($name: ident,
    //   ^^^^^^^^^^^^ will match an identifier and put it in $name

         default: $default: ident,
    //   ^^^^^^^^ arbitrary text
    //            ^^^^^^^^^^^^^^^ will match an identifier and put it in $default

         $($str_prop: expr => $variant: ident,)+
    //   ^^ start of repetition
    //                     ^^ arbitrary text
    //                                        ^^ end of repetition, repeats one or more times
        ) => {

    For example, saying "$foo: ident" in a macro's pattern means that the compiler will expect an identifier, and bind it to $foo within the macro's definition.

    Similarly, an expr means that the compiler will look for an expression — in this case, we want one of the string values.

    In a macro pattern, anything that is not a binding is just arbitrary text which must appear in the macro's invocation. This is how we can create a little syntax of our own within the macro: the "default:" part, and the "=>" inside each string/symbol pair.

    Finally, macro patterns allow repetition. Anything within $(...) indicates repetition. Here, $(...)+ indicates that the compiler must match one or more of the repeating elements.

    I pasted the duplicated code, and substituted the macro's bindings for the actual symbol names:

    macro_rules! make_ident_property {
        ($name: ident,
         default: $default: ident,
         $($str_prop: expr => $variant: ident,)+
        ) => {
            #[derive(Debug, Copy, Clone)]
            pub enum $name {
                $($variant,)+
    //          ^^^^^^^^^^^^^ this is how we invoke a repeated element
            }

            impl Default for $name {
                fn default() -> $name {
                    $name::$default
    //              ^^^^^^^^^^^^^^^ construct an enum::variant
                }
            }

            impl Parse for $name {
                type Data = ();
                type Err = AttributeError;

                fn parse(s: &str, _: Self::Data) -> Result<$name, AttributeError> {
                    match s.trim() {
                        $($str_prop => Ok($name::$variant),)+
    //                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expand repeated elements
                        _ => Err(AttributeError::from(ParseError::new("invalid value"))),
                    }
                }
            }
        };
    }

    Getting rid of duplicated code

    Now we have a macro that we can call to define new properties. Librsvg now has this, which is much more readable than all the code written by hand:

    make_ident_property!(
        StrokeLinejoin,
        default: Miter,

        "miter" => Miter,
        "round" => Round,
        "bevel" => Bevel,
        "inherit" => Inherit,
    );

    make_ident_property!(
        StrokeLinecap,
        default: Butt,   // :)

        "butt" => Butt,
        "round" => Round,
        "square" => Square,
        "inherit" => Inherit,
    );

    make_ident_property!(
        FillRule,
        default: NonZero,

        "nonzero" => NonZero,
        "evenodd" => EvenOdd,
        "inherit" => Inherit,
    );

    Etcetera. It's now easy to port similar symbol-based properties from C to Rust.

    Eventually I'll need to refactor all the crap that deals with inheritable properties, but that's for another time.

    Conclusion and references

    Rust macros are very powerful to refactor repetitive code like this.

    The Rust book has an introductory appendix to macros, and The Little Book of Rust Macros is a fantastic resource that really dives into what you can do.

  6. Making sure the repository doesn't break, automatically

    - cairo, gnome, librsvg, rust

    Gitlab has a fairly conventional Continuous Integration system: you push some commits, the CI pipelines build the code and presumably run the test suite, and later you can know if this succeeded or failed.

    But by the time something fails, the broken code is already in the public repository.

    The Rust community uses Bors, a bot that prevents this from happening:

    • You push some commits and submit a merge request.

    • A human looks at your merge request; they may tell you to make changes, or they may tell Bors that your request is approved for merging.

    • Bors looks for approved merge requests. It merges each into a temporary branch and waits for the CI pipeline to run there. If CI passes, Bors automatically merges to master. If CI fails, Bors annotates the merge request with the failure, and the main repository stays working.

    Bors also tells you if the mainline has moved forward and there's a merge conflict. In that case you need to do a rebase yourself; the repository stays working in the meantime.

    This leads to a very fair, very transparent process for contributors and for maintainers. For all the details, watch Emily Dunham's presentation on Rust's community automation (transcript).

    For a description of where Bors came from, read Graydon Hoare's blog.

    Bors evolved into Homu and it is what Rust and Servo use currently. However, Homu depends on Github.

    I just found out that there is a port of Homu for Gitlab. Would anyone care to set it up?

    Update: Two people have suggested porting Bors-ng to Gitlab instead, for scalability reasons.

  7. Librsvg and Gnome-class accepting interns

    - gnome, gnome-class, librsvg, mentoring, rust

    I would like to mentor people for librsvg and gnome-class this Summer, both for Outreachy and Summer of Code.

    Librsvg projects

    Project: port filter effects from C to Rust

    Currently librsvg implements SVG filter effects in C. These are basic image processing filters like Gaussian blur, matrix convolution, Porter-Duff alpha compositing, etc.

    There are some things that need to be done:

    • Split the single rsvg-filter.c into multiple source files, so it's easier to port each one individually.

    • Figure out the common infrastructure: RsvgFilter, RsvgFilterPrimitive. All the filters use these to store intermediate results when processing SVG elements.

    • Experiment with the correct Rust abstractions to process images pixel-by-pixel. We would like to omit per-pixel bounds checks on array accesses. The image crate has some nice iterator traits for pixels. WebKit's implementation of SVG filters also has interesting abstractions, like the sliding window with edge handling needed for Gaussian blurs.

    • Ensure that our current filters code is actually working. Not all of the official SVG test suite's tests are in place right now for the filter effects; it is likely that some of our implementation is broken.
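    To sketch the bounds-check point from the list above (illustrative code, not librsvg's): iterating with chunks_exact_mut() hands the loop one RGBA pixel at a time, so the compiler can prove the per-channel indexing is in range instead of checking it on every access.

```rust
// Illustrative: invert the color channels of an RGBA buffer, leaving
// alpha alone. chunks_exact_mut(4) yields fixed-size pixel slices, so
// the indexing below needs no per-access bounds check in the hot loop.
fn invert_rgb(pixels: &mut [u8]) {
    for px in pixels.chunks_exact_mut(4) {
        px[0] = 255 - px[0]; // R
        px[1] = 255 - px[1]; // G
        px[2] = 255 - px[2]; // B
        // px[3] is alpha; untouched
    }
}
```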

    For this project, it will be especially helpful to have a little background in image processing. You don't need to be an expert; just to have done some pixel crunching at some point. You need to be able to read C and write Rust.

    Project: CSS styling with rust-selectors

    Librsvg uses a very simplistic algorithm for CSS cascading. It uses libcroco to parse CSS style data; libcroco is unmaintained and rather prone to exploits. I want to use Servo's selectors crate to do the cascading; we already use the rust-cssparser crate as a tokenizer for basic CSS properties.

    • For each node in its DOM tree, librsvg's Node structure keeps a Vec<> of children. We need to move this to store the next sibling and the first/last children instead. This is the data structure that rust-selectors prefers. The Kuchiki crate has an example implementation; borrowing some patterns from there could also help us simplify our reference counting for nodes.

    • Our styling machinery needs porting to Rust. We have a big RsvgState struct which holds the CSS state for each node. It is easy to port this to Rust; it's more interesting to gradually move it to a scheme like Servo's, with a distinction between specified/computed/used values for each CSS property.
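    A minimal sketch of that sibling-linked shape (using arena indices instead of reference counting, purely for illustration; not librsvg's actual Node type):

```rust
// Each node records its first/last child and its siblings, rather than
// owning a Vec<> of children.
#[derive(Default)]
struct Node {
    first_child: Option<usize>,
    last_child: Option<usize>,
    prev_sibling: Option<usize>,
    next_sibling: Option<usize>,
}

struct Tree {
    nodes: Vec<Node>,
}

impl Tree {
    fn new_node(&mut self) -> usize {
        self.nodes.push(Node::default());
        self.nodes.len() - 1
    }

    // Appending a child only touches the parent's last_child pointer and
    // the previous sibling's next_sibling pointer.
    fn append_child(&mut self, parent: usize, child: usize) {
        match self.nodes[parent].last_child {
            Some(last) => {
                self.nodes[last].next_sibling = Some(child);
                self.nodes[child].prev_sibling = Some(last);
            }
            None => self.nodes[parent].first_child = Some(child),
        }
        self.nodes[parent].last_child = Some(child);
    }
}
```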

    For this project, it will be helpful to know a bit of how CSS works. Definitely be comfortable with Rust concepts like ownership and borrowing. You don't need to be an expert, but if you are going through the "fighting the borrow checker" stage, you'll have a harder time with this. Or it may be what lets you grow out of it! You need to be able to read C and write Rust.

    Bugs for newcomers: We have a number of easy bugs for newcomers to librsvg. Some of these are in the Rust part, some in the C part, some in both — take your pick!

    Projects for gnome-class

    Gnome-class is the code generator that lets you write GObject implementations in Rust. Or at least that's the intention — the project is in early development. The code is so new that practically all of our bugs are of an exploratory nature.

    Gnome-class works like a little compiler. This is from one of the examples; note the call to gobject_gen! in there:

    struct SignalerPrivate {
        val: Cell<u32>
    }

    impl Default for SignalerPrivate {
        fn default() -> Self {
            SignalerPrivate {
                val: Cell::new(0)
            }
        }
    }

    gobject_gen! {
        class Signaler {
            type InstancePrivate = SignalerPrivate;
        }

        impl Signaler {
            signal fn value_changed(&self);

            fn set_value(&self, v: u32) {
                let private = self.get_priv();
                private.val.set(v);
                self.emit_value_changed();
            }
        }
    }

    Gnome-class implements this gobject_gen! macro as follows:

    1. First we parse the code inside the macro using the syn crate. This is a crate that lets you parse Rust source code from the TokenStream that the compiler hands to implementations of procedural macros. You give a TokenStream to syn, and it gives you back structs that represent function definitions, impl blocks, expressions, etc. From this parsing stage we build an Abstract Syntax Tree (AST) that closely matches the structure of the code that the user wrote.

    2. Second, we take the AST and convert it to higher-level concepts, while verifying that the code is semantically valid. For example, we build up a Class structure for each defined GObject class, and annotate it with the methods and signals that the user defined for it. This stage is the High-level Internal Representation (HIR).

    3. Third, we generate Rust code from the validated HIR. For each class, we write out the boilerplate needed to register it against the GObject type system. For each virtual method we write a trampoline to let the C code call into the Rust implementation, and then write out the actual Rust impl that the user wrote. For each signal, we register it against the GObjectClass, and write the appropriate trampolines both to invoke the signal's default handler and any Rust callbacks for signal handlers.
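    The three stages above can be pictured as a pipeline of functions. This toy version is purely illustrative; gnome-class's real AST and HIR are far richer, and the input is a TokenStream rather than a string.

```rust
// Stage outputs for a toy "class Foo" mini-language.
struct Ast { class_name: String }  // stage 1: what the user literally wrote
struct Hir { class_name: String }  // stage 2: validated, higher-level view

// Stage 1: parse the raw input into an AST.
fn parse(input: &str) -> Ast {
    let name = input.trim_start_matches("class ").trim().to_string();
    Ast { class_name: name }
}

// Stage 2: lower the AST into an HIR, rejecting semantic errors.
fn lower(ast: Ast) -> Result<Hir, String> {
    if ast.class_name.is_empty() {
        Err("class needs a name".to_string())
    } else {
        Ok(Hir { class_name: ast.class_name })
    }
}

// Stage 3: generate output code (here, just a string) from the HIR.
fn codegen(hir: &Hir) -> String {
    format!("register_type(\"{}\");", hir.class_name)
}
```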

    For this project, you definitely need to have written GObject code in C in the past. You don't need to know the GObject internals; just know that there are things like type registration, signal creation, argument marshalling, etc.

    You don't need to know about compiler internals.

    You don't need to have written Rust procedural macros; you can learn as you go. The code has enough infrastructure right now that you can cut&paste useful bits to get started with new features. You should definitely be comfortable with the Rust borrow checker and simple lifetimes — again, you can cut&paste useful code already, and I'm happy to help with those.

    This project demands a little patience. Working on the implementation of procedural macros is not the smoothest experience right now (one needs to examine generated code carefully, and play some tricks with the compiler to debug things), but it's getting better very fast.

    How to apply as an intern

    Details for Outreachy

    Details for Summer of Code

  8. Helping Cairo

    - cairo, gnome, librsvg

    Cairo needs help. It is the main 2D rendering library we use in GNOME, and in particular, it's what librsvg uses to render all SVGs.

    My immediate problem with Cairo is that it explodes when called with floating-point coordinates that fall outside the range that its internal fixed-point numbers can represent. There is no validation of incoming data, so the polygon intersector ends up with data that makes no sense, and it crashes.

    I've been studying how Cairo converts from floating-point to its fixed-point representation, and it's a nifty little algorithm. So I thought, no problem, I'll add validation, see how to represent the error state internally in Cairo, and see if clients are happy with getting back a cairo_t in an error state.
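    The validation idea can be sketched like this (illustrative, not Cairo's actual conversion code): Cairo's fixed-point format is 24.8, i.e. 8 fractional bits in a 32-bit value, so a coordinate whose scaled value falls outside the 32-bit range has to be rejected before it reaches the polygon intersector.

```rust
// Illustrative sketch of validating a double before converting it to
// 24.8 fixed point; out-of-range input would put the cairo_t into an
// error state instead of feeding garbage to the rasterizer.
fn checked_fixed_from_double(d: f64) -> Option<i32> {
    let scaled = d * 256.0; // shift by 8 fractional bits
    if scaled >= i32::MIN as f64 && scaled <= i32::MAX as f64 {
        Some(scaled as i32)
    } else {
        None // coordinate not representable in 24.8 fixed point
    }
}
```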

    Cairo has a very thorough test suite... that doesn't pass. It is documented to be very hard to pass fully for all rendering backends. This is understandable, as there may be bugs in X servers or OpenGL implementations and such. But for the basic, software-only, in-memory image backend, Cairo should 100% pass its test suite all the time. This is not the case right now; in my tree, for all the tests of the image backend I get

    497 Passed, 54 Failed [0 crashed, 14 expected], 27 Skipped

    I have been looking at test failures to see what needs fixing. Some reference images just need to be regenerated: there have been minor changes in font rendering that broke the reference tests. Some others have small differences in rendering gradients - not noticeable by eye, just by diff tools.

    But some tests, I have no idea what changed that made them break.

    Cairo's git repository is accessible through [cgit.freedesktop.org]. As far as I know there is no continuous integration infrastructure to ensure that tests keep passing.

    Adding minimal continuous testing

    I've set up a Cairo repository at gitlab.com. That branch already has a fix for an uninitialized-memory bug which leads to an invalid free(), and some regenerated test files.

    The repository is configured to run a continuous integration pipeline on every commit. The test artifacts can then be downloaded when the test suite fails. Right now it is only testing the image backend, for in-memory software rendering.

    Initial bugs

    I've started reporting a few bugs against that repository for tests that fail. These should really be in Cairo's Bugzilla, but for now Gitlab makes it much easier to include test images directly in the bug descriptions, so that they are easier to browse. Read on.

    Would you like to help?

    A lot of projects use Cairo. We owe it to ourselves to have a library with a test suite that doesn't break. Getting to that point requires several things:

    • Fixing current failures in the image backend.
    • Setting up the CI infrastructure to be able to test other backends.
    • Fixing failures in the other backends.

    If you have experience with Cairo, please take a look at the bugs. You can see the CI configuration to see how to run the test suite in the same fashion on your machine.

    I think we can make use of modern infrastructure like gitlab and continuous integration to improve Cairo quickly. Currently it suffers from lack of attention and hostile tools. Help us out if you can!

  9. Quick and dirty checklist to update syn 0.11.x to syn 0.12

    - gnome, rust

    Today I ported gnome-class from version 0.11 of the syn crate to version 0.12. syn is a somewhat esoteric crate that you use to parse Rust code... from a stream of tokens... from within the implementation of a procedural macro. Gnome-class implements a mini-language inside your own Rust code, and so it needs to parse Rust!

    The API of syn has changed a lot, which is kind of a pain in the ass — but the new API seems on the road to stabilization, and is nicer indeed.

    Here is a quick list of things I had to change in gnome-class to upgrade its version of syn.

    There is no extern crate synom anymore. You can use syn::synom now.

    extern crate synom;    ->   use syn::synom;

    SynomBuffer is now TokenBuffer:

    synom::SynomBuffer  ->  syn::buffer::TokenBuffer

    PResult, the result of Synom::parse(), now has the tuple's arguments reversed:

    - pub type PResult<'a, O> = Result<(Cursor<'a>, O), ParseError>;
    + pub type PResult<'a, O> = Result<(O, Cursor<'a>), ParseError>;
    // therefore:
    impl Synom for MyThing { ... }
    let x = MyThing::parse(...).unwrap().1;   ->  let x = MyThing::parse(...).unwrap().0;
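
    To see the flavor of that change without pulling in syn, here is a toy parser in plain Rust (the names and the `&str` "cursor" are mine, stand-ins for syn's real token cursor) that returns its result tuple the new way, value first:

    ```rust
    // Toy stand-in for syn 0.12's reversed PResult tuple: the parsed
    // value moved from .1 to .0, with the leftover cursor second.
    type Cursor<'a> = &'a str; // syn's real Cursor walks a token buffer

    // new-style result: Ok((value, rest)) instead of Ok((rest, value))
    fn parse_digit(input: Cursor) -> Result<(u32, Cursor), String> {
        let mut chars = input.chars();
        match chars.next().and_then(|c| c.to_digit(10)) {
            Some(d) => Ok((d, chars.as_str())),
            None => Err(format!("expected digit at {:?}", input)),
        }
    }

    fn main() {
        let (value, rest) = parse_digit("7abc").unwrap();
        assert_eq!(value, 7);    // the value is now .0 of the tuple
        assert_eq!(rest, "abc"); // the unconsumed input is .1
    }
    ```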

    The language tokens like synom::tokens::Amp, and keywords like synom::tokens::Type, are easier to use now. There is a Token! macro which you can use in type definitions, instead of having to remember the particular name of each token type:

    synom::tokens::Amp  ->  Token!(&)
    synom::tokens::For  ->  Token!(for)

    And for the corresponding values when matching:

    syn!(tokens::Colon)  ->  punct!(:)
    syn!(tokens::Type)   ->  keyword!(type)

    And to instantiate them for quoting/spanning:

    -     tokens::Comma::default().to_tokens(tokens);
    +     Token!(,)([Span::def_site()]).to_tokens(tokens);

    (OK, that one wasn't nicer after all.)

    To get the string for an Ident:

    ident.sym.as_str()  ->  ident.as_ref()

    There is no Delimited anymore; instead there is a Punctuated struct. My diff has this:

    -  inputs: parens!(call!(Delimited::<MyThing, tokens::Comma>::parse_terminated)) >>
    +  inputs: parens!(syn!(Punctuated<MyThing, Token!(,)>)) >>

    There is no syn::Mutability anymore; now it's an Option<token>, so basically

    syn::Mutability  ->  Option<Token![mut]>

    which I guess lets you refer to the span of the original mut token if you need to.
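
    Here is a toy model in plain Rust (stdlib only; all names are mine, standing in for syn's types) of why an optional token is richer than the old enum — the `Some` case carries the span of the original `mut` keyword, which is handy for error messages:

    ```rust
    // Stand-ins for proc-macro spans and for syn's Token![mut].
    #[derive(Debug, Clone, Copy, PartialEq)]
    struct Span { lo: usize, hi: usize }

    #[derive(Debug, Clone, Copy)]
    struct MutToken { span: Span }

    fn mutability_span(m: Option<MutToken>) -> Option<Span> {
        // A bare enum variant like the old Mutability::Mutable had no
        // span to return; the token value does.
        m.map(|tok| tok.span)
    }

    fn main() {
        let m = Some(MutToken { span: Span { lo: 4, hi: 7 } });
        assert_eq!(mutability_span(m), Some(Span { lo: 4, hi: 7 }));
        assert_eq!(mutability_span(None), None);
    }
    ```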

    Some things changed names:

    TypeTup { tys, .. }  ->  TypeTuple { elems, .. }
    PatIdent {                              PatIdent {
        mode: BindingMode(Mutability),          by_ref: Option<Token![ref]>,
                                                mutability: Option<Token![mut]>,
        ident: Ident,                           ident: Ident,
        subpat: ...,                            subpat: Option<(Token![@], Box<Pat>)>,
        at_token: ...,                      }
    }
    TypeParen.ty  ->  TypeParen.elem   (and others like this, too)

    (I don't know everything that changed names; gnome-class doesn't use all the syn types yet; these are just the ones I've run into.)

    This new syn is much better at acknowledging the fine points of macro hygiene. The examples directory is particularly instructive; it shows how to properly span generated code vs. original code, so compiler error messages are nice. I need to write something about macro hygiene at some point.

  10. Librsvg's continuous integration pipeline

    - gnome, librsvg

    Jordan Petridis has been kicking ass by overhauling librsvg's continuous integration (CI) pipeline. Take a look at this beauty:

    Continuous integration pipeline

    On every push, we run the Test stage. This is a quick compilation on a Fedora container that runs "make check" and ensures that the test suite passes.

    We have a Lint stage which can be run manually. This runs cargo clippy to get Rust lints (check the style of Rust idioms), and cargo fmt to check indentation and code style and such.

    We have a Distro_test stage which I think will be scheduled weekly, using Gitlab's Schedules feature, to check that the tests pass on three major Linux distros. Recently we had trouble with different rendering due to differences in Freetype versions, which broke the tests (ahem, likely because I hadn't updated my Freetype in a while and distros were already using a newer one); these distro tests are intended to catch that.

    Finally, we have a Rustc_test stage. The various crates that librsvg depends on have different minimum required versions of the Rust compiler. These tests are intended to show when updating a dependency changes the minimum Rust version on which librsvg will compile. We don't have a policy yet for how far behind the newest compiler we should remain buildable, and it would be good to get input from distros on this. I think these Rust tests will be scheduled weekly as well.
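
    The stages above might look roughly like the following .gitlab-ci.yml sketch. This is my own illustration, not librsvg's actual configuration; the job names, image names, and commands are made up for the example:

    ```yaml
    stages:
      - test
      - lint
      - distro_test

    test:fedora:
      stage: test
      # prebuilt container image with librsvg's dependencies installed
      image: registry.example.org/librsvg/fedora:latest
      script:
        - ./autogen.sh && make check

    lint:clippy:
      stage: lint
      when: manual            # run on demand, not on every push
      script:
        - cargo clippy -- -D warnings

    lint:rustfmt:
      stage: lint
      when: manual
      script:
        - cargo fmt --all -- --check

    distro:debian:
      stage: distro_test
      only:
        - schedules           # run weekly via GitLab's Schedules feature
      image: registry.example.org/librsvg/debian:latest
      script:
        - ./autogen.sh && make check
    ```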

    Jordan has been experimenting with the pipeline's stages and the distro-specific idiosyncrasies for each build. This pipeline depends on some custom-built container images that already have librsvg's dependencies installed. These images are built weekly in gitlab.com, so every week gitlab.gnome.org gets fresh images for librsvg's CI pipelines. Once image registries are enabled in gitlab.gnome.org, we should be able to regenerate the container images locally without depending on an external service.

    With the pre-built images, and caching of Rust artifacts, Jordan was able to reduce the time for the "test on every commit" builds from around 20 minutes to a little under 4 minutes in the current iteration. This will get even faster if the builds start using ccache and parallel builds from GNU make.

    Currently we have a problem in that tests are failing on 32-bit builds, and we haven't had a chance to investigate the root cause. Hopefully we can add 32-bit jobs to the CI pipeline to catch this breakage as soon as possible.

    Having all these container images built for the CI infrastructure also means that it will be easy for people to set up a development environment for librsvg; we also have better instructions now, thanks to Jordan. I haven't investigated setting up a Flatpak-based environment; this would be nice to have as well.
