Federico's Blog

  1. Who wrote librsvg?

    - gnome, librsvg

    Authors by lines of code, each year:

    [Chart: Librsvg authors by lines of code by year]

    Authors by percentage of lines of code, each year:

    [Chart: Librsvg authors by percentage of lines of code by year]

    Which lines of code remain each year?

    [Chart: Lines of code that remain each year]

    The shitty thing about a gradual rewrite is that a few people end up "owning" all the lines of source code. Hopefully this post is a little acknowledgment of the people that made librsvg possible.

    The charts are made with the incredible tool git-of-theseus — thanks to @norwin@mastodon.art for digging it up! Its README also points to a Hercules plotter with awesome graphs. You know, for if you needed something to keep your computer busy during the weekend.

  2. Librsvg's GObject boilerplate is in Rust now

    - gnome, librsvg, rust

    The other day I wrote about how most of librsvg's library code is in Rust now.

    Today I finished porting the GObject boilerplate for the main RsvgHandle object into Rust. This means that the C code no longer calls things like g_type_register_static(), nor implements rsvg_handle_class_init() and such; all those are in Rust now. How is this done?

    The life-changing magic of glib::subclass

    Sebastian Dröge has been working for many months on refining utilities to make it possible to subclass GObjects in Rust, with little or no unsafe code. This subclass module is now part of glib-rs, the Rust bindings to GLib.

    Librsvg now uses the subclassing functionality in glib-rs, which takes care of some things automatically:

    • Registering your GObject types at runtime.
    • Creating safe traits on which you can implement class_init, instance_init, set_property, get_property, and all the usual GObject paraphernalia.

    Check this out:

    use glib::subclass::prelude::*;
    
    impl ObjectSubclass for Handle {
        const NAME: &'static str = "RsvgHandle";
    
        type ParentType = glib::Object;
    
        type Instance = RsvgHandle;
        type Class = RsvgHandleClass;
    
        glib_object_subclass!();
    
        fn class_init(klass: &mut RsvgHandleClass) {
            klass.install_properties(&PROPERTIES);
        }
    
        fn new() -> Self {
            Handle::new()
        }
    }
    

    In the impl line, Handle is librsvg's internals object — what used to be RsvgHandlePrivate in the C code.

    The following lines say this:

    • const NAME: &'static str = "RsvgHandle"; - the name of the type, for GType's perusal.

    • type ParentType = glib::Object; - Parent class.

    • type Instance, type Class - Structs with #[repr(C)], equivalent to GObject's class and instance structs.

    • glib_object_subclass!(); - All the boilerplate happens here automatically.

    • fn class_init - Should be familiar to anyone who implements GObjects!

    And then, a couple of the property declarations:

    static PROPERTIES: [subclass::Property; 11] = [
        subclass::Property("flags", |name| {
            ParamSpec::flags(
                name,
                "Flags",
                "Loading flags",
                HandleFlags::static_type(),
                0,
                ParamFlags::READWRITE | ParamFlags::CONSTRUCT_ONLY,
            )
        }),
        subclass::Property("dpi-x", |name| {
            ParamSpec::double(
                name,
                "Horizontal DPI",
                "Horizontal resolution in dots per inch",
                0.0,
                f64::MAX,
                0.0,
                ParamFlags::READWRITE | ParamFlags::CONSTRUCT,
            )
        }),
        // ... etcetera
    ];
    

    This is quite similar to the way C code usually registers properties for new GObject subclasses.

    The moment at which a new GObject subclass gets registered against the GType system is in the foo_get_type() call. This is the C code in librsvg for that:

    extern GType rsvg_handle_rust_get_type (void);
    
    GType
    rsvg_handle_get_type (void)
    {
        return rsvg_handle_rust_get_type ();
    }
    

    And the Rust function that actually implements this:

    #[no_mangle]
    pub unsafe extern "C" fn rsvg_handle_rust_get_type() -> glib_sys::GType {
        Handle::get_type().to_glib()
    }
    

    Here, Handle::get_type() gets implemented automatically by Sebastian's subclass traits. It gets things like the type name and the parent class from the impl ObjectSubclass for Handle we saw above, and calls g_type_register_static() internally.

    I can confirm now that implementing GObjects in Rust in this way, and exposing them to C, really works and is actually quite pleasant to do. You can look at librsvg's Rust code for GObject here.

    Further work

    There is some auto-generated C code to register librsvg's error enum and a flags type against GType; I'll move those to Rust over the next few days.

    Then, I think I'll try to actually remove all of the library's entry points from the C code and implement them in Rust. Right now each C function is really just a single call to a Rust function, so this should be trivial-ish to do.
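
    For example, an entry point implemented directly in Rust could look roughly like this. This is just a sketch with stand-in types and a made-up name following the rsvg_handle_rust_get_type() pattern above; the real thing would take an RsvgHandle* and fetch the Rust-side Handle from the GObject's private data instead of casting the pointer directly:

    use std::ffi::CStr;
    use std::os::raw::c_char;
    
    // Dummy stand-ins so the sketch is self-contained; in librsvg this would be
    // the Rust-side Handle we already have.
    pub struct Handle {
        base_url: Option<String>,
    }
    
    impl Handle {
        fn set_base_url(&mut self, url: &str) {
            self.base_url = Some(url.to_string());
        }
    }
    
    // Hypothetical C-callable entry point, same #[no_mangle] / extern "C"
    // pattern as rsvg_handle_rust_get_type() above.
    #[no_mangle]
    pub unsafe extern "C" fn rsvg_handle_rust_set_base_uri(
        handle: *mut Handle,
        base_uri: *const c_char,
    ) {
        assert!(!handle.is_null() && !base_uri.is_null());
    
        let handle = &mut *handle;
        let base_uri = CStr::from_ptr(base_uri).to_string_lossy();
    
        handle.set_base_url(&base_uri);
    }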

    I'm waiting for a glib-rs release, the first one that will have the glib::subclass code in it, before merging all of the above into librsvg's master branch.

    A new Rust API for librsvg?

    Finally, this got me thinking about what to do about the Rust bindings to librsvg itself. The rsvg crate uses the gtk-rs machinery to generate the binding: it reads the GObject Introspection data from Rsvg.gir and generates a Rust binding for it.

    However, the resulting API is mostly identical to the C API. There is an rsvg::Handle with the same methods as the ones from C's RsvgHandle... and that API is not particularly Rusty.

    At some point I had an unfinished branch to merge rsvg-rs into librsvg. The intention was that librsvg's build procedure would first build librsvg.so itself, then generate Rsvg.gir as usual, and then generate rsvg-rs from that. But I got tired of fucking with Autotools, and didn't finish integrating the projects.

    Rsvg-rs is an okay Rust API for using librsvg. It still works perfectly well from the standalone crate. However, now that all the functionality of librsvg is in Rust, I would like to take this opportunity to experiment with a better API for loading and rendering SVGs from Rust. This may make it more clear how to refactor the toplevel of the library. Maybe the librsvg project can provide its own Rust crate for public consumption, in addition to the usual librsvg.so and Rsvg.gir which need to remain with a stable API and ABI.
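
    For example, and this is purely hypothetical since none of these types exist yet, a more Rust-flavored loading API could look something like this, with a builder for load options and Result instead of GError out-parameters:

    // Hypothetical sketch of a Rustier API; not actual librsvg code.
    use std::path::Path;
    
    pub enum LoadingError {
        Io(std::io::Error),
        Parse(String),
    }
    
    pub struct Svg; // a fully loaded, immutable SVG document
    
    pub struct LoadOptions {
        unlimited_size: bool,
    }
    
    impl LoadOptions {
        pub fn new() -> LoadOptions {
            LoadOptions { unlimited_size: false }
        }
    
        pub fn unlimited_size(mut self, flag: bool) -> LoadOptions {
            self.unlimited_size = flag;
            self
        }
    
        pub fn read_path(self, _path: &Path) -> Result<Svg, LoadingError> {
            // open the file, parse it, return an Svg or a LoadingError
            Ok(Svg)
        }
    }

    Calling code would then do something like LoadOptions::new().unlimited_size(true).read_path(Path::new("foo.svg"))?, and rendering would be a separate step on the resulting Svg value.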

  3. Librsvg is almost rustified now

    - gnome, librsvg, rust

    Since a few days ago, librsvg's library implementation is almost 100% Rust code. Paolo Borelli's and Carlos Martín Nieto's latest commits made it possible.

    What does "almost 100% Rust code" mean here?

    • The C code no longer has struct fields that refer to the library's real work. The only field in RsvgHandlePrivate is an opaque pointer to a Rust-side structure. All the rest of the library's data lives in Rust structs.

    • The public API is still implemented in C, but the C functions are just stubs that immediately call into Rust. For example:

    gboolean
    rsvg_handle_render_cairo_sub (RsvgHandle * handle, cairo_t * cr, const char *id)
    {
        g_return_val_if_fail (RSVG_IS_HANDLE (handle), FALSE);
        g_return_val_if_fail (cr != NULL, FALSE);
    
        return rsvg_handle_rust_render_cairo_sub (handle, cr, id);
    }
    
    • The GObject boilerplate and supporting code is still in C: rsvg_handle_class_init and set_property and friends.

    • All the high-level tests are still done in C.

    • The gdk-pixbuf loader for SVG files is done in C.

    Someone posted a chart on Reddit about the rustification of librsvg, comparing lines of code in each language vs. time.

    Rustifying the remaining C code

    There is only a handful of very small functions from the public API still implemented in C, and I am converting them one by one to Rust. These are just helper functions built on top of other public API that does the real work.

    Converting the gdk-pixbuf loader to Rust looks like it will just need a little glue code for the loadable module; the actual loading is just a couple of calls to librsvg's API.

    Rsvg-rs in rsvg?

    Converting the tests to Rust... ideally this would use the rsvg-rs bindings, which is what I already use for rsvg-bench, a benchmarking program for librsvg.

    I have an unfinished branch to merge the rsvg-rs repository into librsvg's own repository. This is because the build chain goes like this:

    1. Librsvg builds its library, librsvg.so
    2. Gobject-introspection runs on librsvg.so and the source code, and produces librsvg.gir
    3. Rsvg-rs's build system calls gir on librsvg.gir to generate the Rust binding's code.

    As you can imagine, doing all of this with Autotools is... rather convoluted. It gives me a lot of anxiety to think that there is also an unfinished branch to port the build system to Meson, where probably doing the .so→.gir→rs chain would be easier, but who knows. Help in this area is much appreciated!

    An alternative?

    Rustified tests could, of course, call the C API of librsvg by hand, in unsafe code. This may not be idiomatic, but sounds like it could be done relatively quickly.
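
    A minimal sketch of what such a test could look like; rsvg_handle_new_from_file() is a real entry point in librsvg's C API, but the extern declaration, the simplified GError handling, and the fixture path below are just for illustration:

    use std::ffi::CString;
    use std::os::raw::{c_char, c_void};
    use std::ptr;
    
    // Opaque stand-in for the C-side RsvgHandle struct.
    #[repr(C)]
    pub struct RsvgHandle {
        _private: [u8; 0],
    }
    
    extern "C" {
        // Real librsvg entry point; the GError ** argument is simplified to a
        // raw pointer here just to keep the sketch short.
        fn rsvg_handle_new_from_file(
            file_name: *const c_char,
            error: *mut *mut c_void,
        ) -> *mut RsvgHandle;
    }
    
    #[test]
    fn loads_a_simple_svg() {
        // Hypothetical fixture path.
        let path = CString::new("tests/fixtures/simple.svg").unwrap();
    
        let handle = unsafe { rsvg_handle_new_from_file(path.as_ptr(), ptr::null_mut()) };
    
        assert!(!handle.is_null());
    }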

    Future work

    There are two options to get rid of all the C code in the library, and just leave C header files for public consumption:

    1. Do the GObject implementation in Rust, using Sebastian Dröge's work from GStreamer to do this easily.

    2. Work on making gnome-class powerful enough to implement the librsvg API directly, in a way that is ABI-compatible with what exists right now.

    The second case will probably build upon the first one, since one of my plans for gnome-class is to make it generate code that uses Sebastian's subclassing utilities, instead of generating all the GObject boilerplate by hand.

  4. In support of Coraline Ada Ehmke

    - code-of-conduct

    Last night, the linux.org DNS was hijacked and redirected to a page that doxed Coraline, who is doing extremely valuable work with the Contributor Covenant code of conduct, which many free software projects have adopted already.

    Coraline has worked for years to make free software, and computer technology circles in general, a welcoming place for underrepresented groups.

    I hope Coraline stays safe and strong. You can support her directly on Patreon.

  5. My GUADEC 2018 presentation

    - gnome, librsvg, rust, talks

    I just realized that I forgot to publish my presentation from this year's GUADEC. Sorry, here it is!

    Patterns of refactoring C to Rust - link to PDF

    You can also get the ODP file for the presentation. This is released under a CC-BY-SA license.

    This is the video of the presentation.

    Update Dec/06: Keen readers spotted an incorrect use of opaque pointers; I've updated the example code in the presentation to match Jordan's fix with the recommended usage. That merge request has an interesting conversation on FFI esoterica, too.

  6. Refactoring allowed URLs in librsvg

    - gnome, librsvg, rust

    While in the middle of converting librsvg's code that processes XML from C to Rust, I went into a digression that has to do with the way librsvg decides which files are allowed to be referenced from within an SVG.

    Resource references in SVG

    SVG files can reference other files, i.e. they are not self-contained. For example, there can be an element like <image xlink:href="foo.png">, or one can request that a sub-element of another SVG be included with <use xlink:href="secondary.svg#foo">. Finally, there is the xi:include mechanism to include chunks of text or XML into another XML file.

    Since librsvg is sometimes used to render untrusted files that come from the internet, it needs to be careful not to allow those files to reference any random resource on the filesystem. We don't want something like <text><xi:include href="/etc/passwd" parse="text"/></text>, or anything equally nefarious, exfiltrating a random file into the rendered output.

    Also, we want to catch malicious SVGs that try to "phone home" by referencing a network resource like <image xlink:href="http://evil.com/pingback.jpg">.

    So, librsvg is careful to have a single place where it can load secondary resources, and first it validates the resource's URL to see if it is allowed.

    The actual validation rules are not very important for this discussion; they are something like "no absolute URLs allowed" (so you can't request /etc/passwd), and "only siblings or (grand)children of siblings allowed" (so foo.svg can request bar.svg and subdir/bar.svg, but not ../../bar.svg).
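
    Just to make the flavor of those rules concrete, here is a simplified sketch; it is not librsvg's real allow_load(), which works on parsed URLs relative to the document's base URL and also has to deal with things like file: and data: URIs:

    use std::path::{Component, Path};
    
    // Simplified sketch of the kind of checks described above.
    fn allow_load(href: &str) -> bool {
        let path = Path::new(href);
    
        // "no absolute URLs allowed": /etc/passwd is rejected outright.
        if path.is_absolute() {
            return false;
        }
    
        // "only siblings or (grand)children of siblings allowed": any ".."
        // component is rejected, so ../../bar.svg fails but subdir/bar.svg passes.
        !path.components().any(|c| c == Component::ParentDir)
    }
    
    fn main() {
        assert!(allow_load("bar.svg"));
        assert!(allow_load("subdir/bar.svg"));
        assert!(!allow_load("/etc/passwd"));
        assert!(!allow_load("../../bar.svg"));
    }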

    The code

    There was a central function rsvg_io_acquire_stream() which took a URL as a string. The code assumed that the URL had already been validated with a function called allow_load(url). While the C code's structure guaranteed that every place that acquires a stream went through allow_load() first, that was only a convention; the structure of the Rust code makes it outright impossible to acquire a stream for a disallowed URL.

    Before:

    pub fn allow_load(url: &str) -> bool;
    
    pub fn acquire_stream(url: &str, ...) -> Result<gio::InputStream, glib::Error>;
    
    pub fn rsvg_acquire_stream(url: &str, ...) -> Result<gio::InputStream, LoadingError> {
        if allow_load(url) {
            Ok(acquire_stream(url, ...)?)
        } else {
            Err(LoadingError::NotAllowed)
        }
    }
    

    The refactored code now has an AllowedUrl type that encapsulates a URL, plus the promise that it has gone through these steps:

    • The URL has been run through a URL well-formedness parser.
    • The resource is allowed to be loaded following librsvg's rules.

    pub struct AllowedUrl(Url);  // from the Url parsing crate
    
    impl AllowedUrl {
        pub fn from_href(href: &str) -> Result<AllowedUrl, ...> {
            let parsed = Url::parse(href)?; // may return LoadingError::InvalidUrl
    
            if allow_load(parsed) {
                Ok(AllowedUrl(parsed))
            } else {
                Err(LoadingError::NotAllowed)
            }
        }
    }
    
    // new prototype
    pub fn acquire_stream(url: &AllowedUrl, ...) -> Result<gio::InputStream, glib::Error>;
    

    This forces callers to validate the URLs as soon as possible, right after they get them from the SVG file. Now it is not possible to request a stream unless the URL has been validated first.
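
    In caller terms, the flow looks roughly like this; I'm eliding the extra arguments (base URL, cancellable) and assuming a From<glib::Error> conversion for LoadingError:

    fn load_image(href: &str) -> Result<gio::InputStream, LoadingError> {
        // Validation happens exactly once, as early as possible...
        let url = AllowedUrl::from_href(href)?;
    
        // ...and acquire_stream() can only ever be handed a validated URL.
        acquire_stream(&url).map_err(LoadingError::from)
    }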

    Plain URIs vs. fragment identifiers

    Some of the elements in SVG that reference other data require full files:

    <image xlink:href="foo.png" ...>      <!-- no fragments allowed -->
    

    And some others, that reference particular elements in secondary SVGs, require a fragment ID:

    <use xlink:href="icons.svg#app_name" ...>   <!-- fragment id required -->
    

    And finally, the feImage element, used to paste an image as part of a filter effects pipeline, allows either:

    <!-- will use that image -->
    <feImage xlink:href="foo.png" ...>
    
    <!-- will render just this element from an SVG and use it as an image -->
    <feImage xlink:href="foo.svg#element">
    

    So, I introduced a general Href parser:

    pub enum Href {
        PlainUri(String),
        WithFragment(Fragment),
    }
    
    /// Optional URI, mandatory fragment id
    pub struct Fragment(Option<String>, String);
    

    The parts of the code that absolutely require a fragment id now take a Fragment. Parts which require a PlainUri can unwrap that case.

    The next step is making those structs contain an AllowedUrl directly, instead of just strings, so that for callers, obtaining a fully validated name is a one-step operation.
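
    Hypothetically, that would turn Fragment into something like the following, so that callers simply cannot hold an unvalidated reference:

    // Sketch only, not actual librsvg code: the URI half of a Fragment becomes
    // an AllowedUrl once it has been resolved and validated.
    pub struct Fragment(Option<AllowedUrl>, String);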

    In general, the code is moving towards a scheme where all file I/O is done at loading time. Right now, some of those external references get resolved at rendering time, which is somewhat awkward (for example, at rendering time the caller has no chance to use a GCancellable to cancel loading). This refactoring to do early validation is leaving the code in a very nice state.

  7. Thessaloniki GNOME+Rust Hackfest 2018

    - gnome, rust

    A couple of weeks ago we had the fourth GNOME+Rust hackfest, this time in Thessaloniki, Greece. This is the beautiful city that will host next year's GUADEC, but fortunately GUADEC will be in summertime!

    We held the hackfest at the CoHo coworking space, a small, cozy office between the University and the sea.

    At every such hackfest I am overwhelmed by the kind hackers who work on gnome-class, the code generator for GObject implementations in Rust.

    Mredlek has been working on generalizing the code generators in gnome-class, so that we can have the following from the same run:

    • Rust code generation, for the GObject implementations themselves. Thanks to mredlek, this is much cleaner than it was before; now both classes and interfaces share the same code for most of the boilerplate.

    • GObject Introspection (.gir) generation, so that language bindings can be generated automatically.

    • C header files (.h), so the generated GObjects can be called from C code as usual.

    So far, Rust and GIR work; C header files are not generated yet.

    Mredlek is a new contributor to gnome-class, but unfortunately was not able to attend the hackfest. Not only did he rewrite the gnome-class parser using the new version of syn; he also added support for passing owned types to GObject methods, such as String and Variant. But the biggest thing is probably that mredlek made it a lot easier to debug the generated Rust source; see the documentation on debugging for details.

    Speaking of which, thanks to Jordan Petridis for making the documentation be published automatically from Gitlab's Continuous Integration pipelines.

    Alex Crichton kindly refactored our error propagation code, and even wrote docs on it! Along with Jordan, they updated the code for the Rust 2018 edition, and generally wrangled the build process to conform with the latest Rust nightlies. Alex also made code generation a lot faster, by offloading auto-indentation to an external rustfmt process, instead of using it as a crate: using the rustfmt crate meant that the compiler had a lot more work to do. During the whole hackfest, Alex was very helpful with Rust questions in general. While my strategy to see what the compiler does is to examine the disassembly in gdb, his strategy seems to be to look at the LLVM intermediate representation instead... OMG.

    And we can derive very simple GtkWidgets now!

    Saving the best for last... Antoni Boucher, the author of relm, has been working on making it possible to derive from gtk::Widget. Once this merge request is done, we'll have an example of deriving from gtk::DrawingArea from Rust with very little code.

    Normally, the gtk-rs bindings work as a statically-generated binding for GObject, which really is a type hierarchy defined at runtime. The static binding really wants to know what is a subclass of what: it needs to know in advance that Button's hierarchy is Button → Bin → Container → Widget → Object, plus all the GTypeInterfaces supported by any of those classes. Antoni has been working on making gnome-class extract that information automatically from GIR files, so that the gtk-rs macros that define new types will get all the necessary information.

    Future work

    There are still bugs in the GIR pipeline that prevent us from deriving, say, from gtk::Container, but hopefully these will be resolved soon.

    Sebastian Dröge has been refactoring his Rust tools to create GObject subclasses with very idiomatic and refined Rust code. This is now at a state where gnome-class itself could generate that sort of code, instead of generating all the boilerplate from scratch. So, we'll start doing that, and integrating the necessary bits into gtk-rs as well.

    Finally, during the last day I took a little break from gnome-class to work on librsvg. Julian Sparber has been updating the code to use new bindings in cairo-rs, and is also adding a new API to fetch an SVG element's geometry precisely.

    Thessaloniki

    Oh, boy, I wish the weather had been warmer. The city looks delightful to walk around, especially in the narrow streets on the hills. Can't wait to see it in summer during GUADEC.

    Thanks

    Finally, thanks to CoHo for hosting the hackfest, and to the GNOME Foundation for sponsoring my travel and accommodation. And to Centricular for taking us all to dinner!

    Special thanks to Jordan Petridis for being on top of everything build-wise all the time.

    Sponsored by the GNOME Foundation

  8. Propagating Errors

    - rust

    Lately, I have been converting the code in librsvg that handles XML from C to Rust. For many technical reasons, the library still uses libxml2, GNOME's historic XML parsing library, but some of the callbacks to handle XML events like start_element, end_element, characters, are now implemented in Rust. This has meant that I'm running into all the cases where the original C code in librsvg failed to handle errors properly; Rust really makes it obvious when that happens.

    In this post I want to talk a bit about propagating errors. You call a function, it returns an error, and then what?

    What can fail?

    It turns out that this question is highly context-dependent. Let's say a program is starting up and tries to read a configuration file. What could go wrong?

    • The file doesn't exist. Maybe it is the very first time the program is run, and so there isn't a configuration file at all? Can the program provide a default configuration in this case? Or does it absolutely need a pre-written configuration file to be somewhere?

    • The file can't be parsed. Should the program warn the user and exit, or should it revert to a default configuration (should it overwrite the file with valid, default values)? Can the program warn the user, or is it a user-less program that at best can just shout into the void of a server-side log file?

    • The file can be parsed, but the values are invalid. Same questions as the case above.

    • Etcetera.

    At each stage, the code will probably see very low-level errors ("file not found", "I/O error", "parsing failed", "value is out of range"). What the code decides to do, or what it is able to do at any particular stage, depends both on the semantics you want from the program and on the structure of the code itself.

    Structuring the problem

    This is an easy, but very coarse way of handling things:

    gboolean
    read_configuration (const char *config_file_name)
    {
        /* open the file */
    
        /* parse it */
    
        /* set global variables to the configuration values */
    
        /* return true if success, or false if failure */
    }
    

    What is bad about this? Let's see:

    • The calling code just gets a success/failure condition. In the case of failure, it doesn't get to know why things failed.

    • If the function sets global variables with configuration values as they get read... and something goes wrong and the function returns an error... the caller can end up in an inconsistent state, with configuration variables that are only halfway set.

    • If the function finds parse errors, well, do you really want to call UI code from inside it? The caller might be a better place to make that decision.

    A slightly better structure

    Let's add an enumeration to indicate the possible errors, and a structure of configuration values.

    enum ConfigError {
        ConfigFileDoesntExist,
        ParseError, // config file has bad syntax or something
        ValueError, // config file has an invalid value
    }
    
    struct ConfigValues {
        // a bunch of fields here with the program's configuration
    }
    
    fn read_configuration(filename: &Path) -> Result<ConfigValues, ConfigError> {
        // open the file, or return Err(ConfigError::ConfigFileDoesntExist)
    
        // parse the file; or return Err(ConfigError::ParseError)
    
        // validate the values, or return Err(ConfigError::ValueError)
    
        // if everything succeeds, return Ok(ConfigValues)
    }
    

    This is better, in that the caller decides what to do with the validated ConfigValues: maybe it can just copy them to the program's global variables for configuration.

    However, this scheme doesn't give the caller all the information it would like to present a really good error message. For example, the caller will get to know if there is a parse error, but it doesn't know specifically what failed during parsing. Similarly, it will just get to know if there was an invalid value, but not which one.

    Ah, so the problem is fractal

    We could have new structs to represent the little errors, and then make them part of the original error enum:

    struct ParseError {
        line: usize,
        column: usize,
        error_reason: String,
    }
    
    struct ValueError {
        config_key: String,
        error_reason: String,
    }
    
    enum ConfigError {
        ConfigFileDoesntExist,
        ParseError(ParseError), // we put those structs in here
        ValueError(ValueError),
    }
    

    Is that enough? It depends.

    The ParseError and ValueError structs have individual error_reason fields, which are strings. Presumably, one could have a ParseError with error_reason = "unexpected token", or a ValueError with error_reason = "cannot be a negative number".

    One problem with this is that if the low-level errors come with error messages in English, then the caller has to know how to localize them to the user's language. Also, if they don't have a machine-readable error code, then the calling code may not have enough information to decide what to do with the error.

    Let's say we had a ParseErrorKind enum with variants like UnexpectedToken, EndOfFile, etc. This is fine; it lets the calling code know the reason for the error. Also, there can be a gimme_localized_error_message() method for that particular type of error.

    enum ParseErrorKind {
        UnexpectedToken,
        EndOfFile,
        MissingComma,
        // ... etc.
    }
    
    struct ParseError {
        line: usize,
        column: usize,
        kind: ParseErrorKind,
    }
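
    Building on the structs above, such a method could look like this; it's just a sketch, and a real implementation would push these strings through gettext or similar instead of hard-coding English:

    impl ParseError {
        // Sketch of the gimme_localized_error_message() idea from above.
        pub fn localized_message(&self) -> String {
            let reason = match self.kind {
                ParseErrorKind::UnexpectedToken => "unexpected token",
                ParseErrorKind::EndOfFile => "unexpected end of file",
                ParseErrorKind::MissingComma => "missing comma",
                // ... etc., one message per machine-readable kind
            };
    
            format!("line {}, column {}: {}", self.line, self.column, reason)
        }
    }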
    

    How can we expand this? Maybe the ParseErrorKind::UnexpectedToken variant wants to contain data that indicates which token it got that was wrong, so it would be UnexpectedToken(String) or something similar.

    But is that useful to the calling code? For our example program, which is reading a configuration file... it probably only needs to know if it could parse the file, but maybe it doesn't really need any additional details on the reason for the parse error, other than having something useful to present to the user. Whether it is appropriate to burden the user with the actual details... does the app expect to make it the user's job to fix broken configuration files? Yes for a web server, where the user is a sysadmin; probably not for a random end-user graphical app, where people shouldn't need to write configuration files by hand in the first place (should those have a "Details" section in the error message window? I don't know!).

    Maybe the low-level parsing/validation code can emit those detailed errors. But how can we propagate them to something more useful to the upper layers of the code?

    Translation and propagation

    Maybe our original read_configuration() function can translate the low-level errors into high-level ones:

    fn read_configuration(filename: &Path) -> Result<ConfigValues, ConfigError> {
        // open file
    
        if cannot_open_file {
            return Err(ConfigError::ConfigFileDoesntExist);
        }
    
        let contents = read_the_file().map_err(|e| ... oops, maybe we need an IoError case, too)?;
    
        // parse file
    
        let parsed = parse(contents).map_err(|e| ... translate to a higher-level error)?
    
        // validate
    
        let validated = validate(parsed).map_err(|e| ... translate to a higher-level error)?;
    
        // yay!
        Ok(ConfigValues::from(validated))
    }
    

    Etcetera. It is up to each part of the code to decide what to do with lower-level errors. Can it recover from them? Should it fail the whole operation and return a higher-level error? Should it warn the user right there?

    Language facilities

    C makes it really easy to ignore errors, and pretty hard to present detailed errors like the above. One could mimic what Rust is actually doing with a collection of union and struct and enum, but this gets very awkward very fast.

    Rust provides these facilities at the language level, and the idioms around Result and error handling are very nice to use. There are even crates like failure that go a long way towards automating error translation, propagation, and conversion to strings for presenting to users.
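
    As a tiny, self-contained illustration of those idioms (the types here just echo the hypothetical config example above): once there is a From impl, the ? operator does the error translation for you, with no map_err() in sight:

    struct Parsed(String);
    struct ConfigValues(String);
    
    struct ParseError {
        line: usize,
        column: usize,
    }
    
    enum ConfigError {
        ConfigFileDoesntExist,
        ParseError(ParseError),
    }
    
    // With this impl in place, ? converts a ParseError into a ConfigError
    // automatically inside any function that returns Result<_, ConfigError>.
    impl From<ParseError> for ConfigError {
        fn from(e: ParseError) -> ConfigError {
            ConfigError::ParseError(e)
        }
    }
    
    fn parse(contents: &str) -> Result<Parsed, ParseError> {
        if contents.is_empty() {
            Err(ParseError { line: 1, column: 1 })
        } else {
            Ok(Parsed(contents.to_string()))
        }
    }
    
    fn read_configuration(contents: &str) -> Result<ConfigValues, ConfigError> {
        let parsed = parse(contents)?; // ParseError becomes ConfigError via From
        Ok(ConfigValues(parsed.0))
    }
    
    fn main() {
        match read_configuration("") {
            Ok(_) => println!("config ok"),
            Err(ConfigError::ParseError(e)) => println!("parse error at {}:{}", e.line, e.column),
            Err(ConfigError::ConfigFileDoesntExist) => println!("no config file"),
        }
    }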

    Infinite details

    I've been recommending The Error Model to anyone who comes into a discussion of error handling in programming languages. It's a long, detailed, but very enlightening read on recoverable vs. unrecoverable errors, simple error codes vs. exceptions vs. monadic results, the performance/reliability/ease of use of each model... Definitely worth a read.

  9. My gdk-pixbuf braindump

    - gdk-pixbuf, gnome

    I want to write a braindump on the stuff that I remember from gdk-pixbuf's history. There is some talk about replacing it with something newer; hopefully this history will show some things that worked, some that didn't, and why.

    The beginnings

    Gdk-pixbuf started as a replacement for Imlib, the image loading and rendering library that GNOME used in its earliest versions. Imlib came from the Enlightenment project; it provided an easy API around the idiosyncratic libungif, libjpeg, libpng, etc., and it maintained decoded images in memory with a uniform representation. Imlib also worked as an image cache for the Enlightenment window manager, which made memory management very inconvenient for GNOME.

    Imlib worked well as a "just load me an image" library. It showed that a small, uniform API to load various image formats into a common representation was desirable. And in those days, hiding all the complexities of displaying images in X was very important indeed.

    The initial API

    Gdk-pixbuf replaced Imlib, and added two important features: reference counting for image data, and support for an alpha channel.

    Gdk-pixbuf appeared with support for RGB(A) images. And although in theory it was possible to grow the API to support other representations, GdkColorspace never acquired anything other than GDK_COLORSPACE_RGB, and the bits_per_sample argument to some functions only ever supported being 8. The presence or absence of an alpha channel was done with a gboolean argument in conjunction with that single GDK_COLORSPACE_RGB value; we didn't have something like cairo_format_t which actually specifies the pixel format in single enum values.

    While all the code in gdk-pixbuf carefully checks that those conditions are met — RGBA at 8 bits per channel — some applications inadvertently assume that that is the only possible case, and would get into trouble really fast if gdk-pixbuf ever started returning pixbufs with different color spaces or depths.

    One can still see the battle between bilevel-alpha vs. continuous-alpha in this enum:

    typedef enum
    {
            GDK_PIXBUF_ALPHA_BILEVEL,
            GDK_PIXBUF_ALPHA_FULL
    } GdkPixbufAlphaMode;
    

    Fortunately, only the "render this pixbuf with alpha to an Xlib drawable" functions take values of this type: before the Xrender days, it was a Big Deal to draw an image with alpha to an X window, and applications often opted to use a bitmask instead, even if they had jagged edges as a result.

    Pixel formats

    The only pixel format that ever got implemented was unpremultiplied RGBA on all platforms. Back then I didn't understand premultiplied alpha! Also, the GIMP followed that scheme, and copying it seemed like the easiest thing.

    After gdk-pixbuf, libart also copied that pixel format, I think.

    But later we got Cairo, Pixman, and all the Xrender stack. These prefer premultiplied ARGB. Moreover, Cairo prefers it if each pixel is actually a 32-bit value, with the ARGB values inside it in platform-endian order. So if you look at a memory dump, a Cairo pixel looks like BGRA on a little-endian box, while it looks like ARGB on a big-endian box.

    Every time we paint a GdkPixbuf to a cairo_t, there is a conversion from unpremultiplied RGBA to premultiplied, platform-endian ARGB. I talked a bit about this in Reducing the number of image copies in GNOME.

    The loading API

    The public loading API in gdk-pixbuf, and its relationship to loader plug-ins, evolved in interesting ways.

    At first the public API and loaders only implemented load_from_file: you gave the library a FILE * and it gave you back a GdkPixbuf. Back then we didn't have a robust MIME sniffing framework in the form of a library, so gdk-pixbuf got its own. This lives in the mostly-obsolete GdkPixbufFormat machinery; it even has its own little language for sniffing file headers! Nowadays we do most MIME sniffing with GIO.

    After the initial load_from_file API... I think we got progressive loading first, and animation support afterwards.

    Progressive loading

    This is where the calling program feeds chunks of bytes to the library, and at the end a fully-formed GdkPixbuf comes out, instead of having a single "read a whole file" operation.

    We conflated this with a way to get updates on how the image area gets modified as the data gets parsed. I think we wanted to support the case of a web browser, which downloads images slowly over the network, and gradually displays them as they are downloaded. In 1998, images downloading slowly over the network was a real concern!

    It took a lot of very careful work to convert the image loaders, which parsed a whole file at a time, into loaders that could maintain some state between each time that they got handed an extra bit of buffer.

    It also sounded easy to implement the progressive updating API by simply emitting a signal that said, "this rectangular area got updated from the last read". It could handle the case of reading whole scanlines, or a few pixels, or even area-based updates for progressive JPEGs and PNGs.

    The internal API for the image format loaders still keeps a distinction between the "load a whole file" API and the "load an image in chunks". Not all loaders got redone to simply just use the second one: io-jpeg.c still implements loading whole files by calling the corresponding libjpeg functions. I think it could remove that code and use the progressive loading functions instead.

    Animations

    Animations: we followed the GIF model for animations, in which each frame overlays the previous one, and there's a delay set between each frame. This is not a video file; it's a hacky flipbook.

    However, animations presented the problem that the whole gdk-pixbuf API was meant for static images, and now we needed to support multi-frame images as well.

    We defined the "correct" way to use the gdk-pixbuf library as: try to load an animation, and then check whether it is a single-frame image, in which case you can just get a GdkPixbuf for the only frame and use it.

    Or, if you got an animation, that would be a GdkPixbufAnimation object, from which you could ask for an iterator to get each frame as a separate GdkPixbuf.

    However, the progressive updating API never got extended to really support animations. So, we have awkward functions like gdk_pixbuf_animation_iter_on_currently_loading_frame() instead.

    Necessary accretion

    Gdk-pixbuf got support for saving just a few formats: JPEG, PNG, TIFF, ICO, and some of the formats that are implemented with the Windows-native loaders.

    Over time gdk-pixbuf got support for preserving some metadata-ish chunks from formats that provide it: DPI, color profiles, image comments, hotspots for cursors/icons...

    While an image is being loaded with the progressive loaders, there is a clunky way to specify that one doesn't want the actual size of the image, but another size instead. The loader can handle that situation itself, ideally when an image format actually embeds different sizes in it. Or if not, the main loading code will rescale the full loaded image into the size specified by the application.

    Historical cruft

    GdkPixdata - a way to embed binary image data in executables, with a funky encoding. Nowadays it's just easier to directly store a PNG or JPEG or whatever in a GResource.

    contrib/gdk-pixbuf-xlib - to deal with old-style X drawables. Hopefully mostly unused now, but there's a good number of mostly old, third-party software that still uses gdk-pixbuf as an image loader and renderer to X drawables.

    gdk-pixbuf-transform.h - Gdk-pixbuf had some very high-quality scaling functions, which the original versions of EOG used for the core of the image viewer. Nowadays Cairo is the preferred way of doing this, since it not only does scaling, but general affine transformations as well. Did you know that gdk_pixbuf_composite_color takes 17 arguments, and it can composite an image with alpha on top of a checkerboard? Yes, that used to be the core of EOG.

    Debatable historical cruft

    gdk_pixbuf_get_pixels(). This lets the program look into the actual pixels of a loaded pixbuf, and modify them. Gdk-pixbuf just did not have a concept of immutability.

    Back in GNOME 1.x / 2.x, when it was fashionable to put icons beside menu items, or in toolbar buttons, applications would load their icon images, and modify them in various ways before setting them onto the corresponding widgets. Some things they did: load a colorful icon, desaturate it for "insensitive" command buttons or menu items, or simulate desaturation by compositing a 1x1-pixel checkerboard on the icon image. Or lighten the icon and set it as the "prelight" one onto widgets.

    The concept of "decode an image and just give me the pixels" is of course useful. Image viewers, image processing programs, and all those, of course need this functionality.

    However, these days GTK would prefer to have a way to decode an image, and ship it as fast as possible to the GPU, without intermediaries. There is all sorts of awkward machinery in the GTK widgets that can consume either an icon from an icon theme, or a user-supplied image, or one of the various schemes for providing icons that GTK has acquired over the years.

    It is interesting to note that gdk_pixbuf_get_pixels() was available pretty much since the beginning, but it was only much later that we got gdk_pixbuf_get_pixels_with_length(), the "give me the guchar * buffer and also its length" function, so that calling code has a chance of actually checking for buffer overruns. (... and it is one of the broken "give me a length" functions that returns a guint rather than a gsize. There is a better gdk_pixbuf_get_byte_length() which actually returns a gsize, though.)

    Problems with mutable pixbufs

    The main problem is that as things are right now, we have no flexibility in changing the internal representation of image data to make it better for current idioms: GPU-specific pixel formats may not be unpremultiplied RGBA data.

    We have no API to say, "this pixbuf has been modified", akin to cairo_surface_mark_dirty(): once an application calls gdk_pixbuf_get_pixels(), gdk-pixbuf or GTK have to assume that the data will be changed and they have to re-run the pipeline to send the image to the GPU (format conversions? caching? creating a texture?).

    Also, ever since the beginnings of the gdk-pixbuf API, we had a way to create pixbufs from arbitrary user-supplied RGBA buffers: the gdk_pixbuf_new_from_data functions. One problem with this scheme is that memory management of the buffer is up to the calling application, so the resulting pixbuf isn't free to handle those resources as it pleases.

    A relatively recent addition is gdk_pixbuf_new_from_bytes(), which takes a GBytes buffer instead of a random guchar *. When a pixbuf is created that way, it is assumed to be immutable, since a GBytes is basically a shared reference into a byte buffer, and it's just easier to think of it as immutable. (Nothing in C actually enforces immutability, but the API indicates that convention.)

    Internally, GdkPixbuf actually prefers to be created from a GBytes. It will downgrade itself to a guchar * buffer if something calls the old gdk_pixbuf_get_pixels(); in the best case, that will just take ownership of the internal buffer from the GBytes (if the GBytes has a single reference count); in the worst case, it will copy the buffer from the GBytes and retain ownership of that copy. In either case, when the pixbuf downgrades itself to pixels, it is assumed that the calling application will modify the pixel data.

    What would immutable pixbufs look like?

    I mentioned this a bit in "Reducing Copies". The loaders in gdk-pixbuf would create immutable pixbufs, with an internal representation that is friendly to GPUs. In the proposed scheme, that internal representation would be a Cairo image surface; it can be something else if GTK/GDK eventually prefer a different way of shipping image data into the toolkit.

    Those pixbufs would be immutable. In true C fashion we can call it undefined behavior to change the pixel data (say, an app could request gimme_the_cairo_surface and tweak it, but that would not be supported).

    I think we could also have a "just give me the pixels" API, and a "create a pixbuf from these pixels" one, but those would be one-time conversions at the edge of the API. Internally, the pixel data that actually lives inside a GdkPixbuf would remain immutable, in some preferred representation, which is not necessarily what the application sees.

    What worked well

    A small API to load multiple image formats, and paint the images easily to the screen, while handling most of the X awkwardness semi-automatically, was very useful!

    A way to get and modify pixel data: applications clearly like doing this. We can formalize it as an application-side thing only, and keep the internal representation immutable and in a format that can evolve according to the needs of the internal API.

    Pluggable loaders, up to a point. Gdk-pixbuf doesn't support all the image formats in the world out of the box, but it is relatively easy for third-parties to provide loaders that, once installed, are automatically usable for all applications.

    What didn't work well

    Having effectively two pixel formats supported, and nothing else: gdk-pixbuf does packed RGB and unpremultiplied RGBA, and that's it. This isn't completely terrible: applications which really want to know about indexed or grayscale images, or high bit-depth ones, are probably specialized enough that they can afford to have their own custom loaders with all the functionality they need.

    Pluggable loaders, up to a point. While it is relatively easy to create third-party loaders, installation is awkward from a system's perspective: one has to run the script to regenerate the loader cache, there are more shared libraries running around, and the loaders are not sandboxed by default.

    I'm not sure if it's worthwhile to let any application read "any" image format if gdk-pixbuf supports it. If your word processor lets you paste an image into the document... do you want it to use gdk-pixbuf's limited view of things and include a high bit-depth image with its probably inadequate conversions? Or would you rather do some processing by hand to ensure that the image looks as good as it can, in the format that your word processor actually supports? I don't know.

    The API for animations is very awkward. We don't even support APNG... but honestly I don't recall actually seeing one of those in the wild.

    The progressive loading API is awkward. The "feed some bytes into the loader" part is mostly okay; the "notify me about changes to the pixel data" is questionable nowadays. Web browsers don't use it; they implement their own loaders. Even EOG doesn't use it.

    I think most code that actually connects to GdkPixbufLoader's signals only uses the size-prepared signal — the one that gets emitted soon after reading the image headers, when the loader gets to know the dimensions of the image. Apps sometimes use this to say, "this image is W*H pixels in size", but don't actually decode the rest of the image.

    The gdk-pixbuf model of static images, or GIF animations, doesn't work well for multi-page TIFFs. I'm not sure if this is actually a problem. Again, applications with actual needs for multi-page TIFFs are probably specialized enough that they will want a full-featured TIFF loader of their own.

    Awkward architectures

    Thumbnailers

    The thumbnailing system has slowly been moving towards a model where we actually have thumbnailers specific to each file format, instead of just assuming that we can dump any image into a gdk-pixbuf loader.

    If we take this all the way, we would be able to remove some weird code in, for example, the JPEG pixbuf loader. Right now it supports loading images at a size that the calling code requests, not only at the "natural" size of the JPEG. The thumbnailer can say, "I want to load this JPEG at 128x128 pixels" or whatever, and in theory the JPEG loader will do the minimal amount of work required to do that. It's not 100% clear to me if this is actually working as intended, or if we downscale the whole image anyway.

    We had a distinction between in-process and out-of-process thumbnailers, and it had to do with the way pixbuf loaders are used; I'm not sure if they are all out-of-process and sandboxed now.

    Non-raster data

    There is a gdk-pixbuf loader for SVG images which uses librsvg internally, but only in a very basic way: it simply loads the SVG at its preferred size. Librsvg jumps through some hoops to compute a "preferred size" for SVGs, as not all of them actually indicate one. The SVG model would rather have the renderer say that the SVG is to be inserted into a rectangle of certain width/height, and scaled/positioned inside the rectangle according to some other parameters (i.e. like one would put it inside an HTML document, with a preserveAspectRatio attribute and all that). GNOME applications historically operated with a different model, one of "load me an image, I'll scale it to whatever size, and paint it".

    This gdk-pixbuf loader for SVG files gets used for the SVG thumbnailer, or more accurately, the "throw random images into a gdk-pixbuf loader" thumbnailer. It may be better/cleaner to have a specific thumbnailer for SVGs instead.

    Even EOG, our by-default image viewer, doesn't use the gdk-pixbuf loader for SVGs: it actually special-cases them and uses librsvg directly, to be able to load an SVG once and re-render it at different sizes if one changes the zoom factor, for example.

    GTK reads its SVG icons... without using librsvg... by assuming that librsvg installed its gdk-pixbuf loader, so it loads them as any normal raster image. This is kind of dirty, but I can't quite pinpoint why. I'm sure it would be convenient for icon themes to ship a single SVG with tons of icons, and some metadata on their ids, so that GTK could pick them out of the SVG file with rsvg_render_cairo_sub() or something. Right now icon theme authors are responsible for splitting out those huge SVGs into many little ones, one for each icon, and I don't think that's their favorite thing in the world to do :)

    Exotic raster data

    High bit-depth images... would you expect EOG to be able to load them? Certainly; maybe not with all the fancy conversions from a real RAW photo editor. But maybe this can be done as EOG-specific plugins, rather than as low in the platform as the gdk-pixbuf loaders?

    (Same thing for thumbnailing high bit-depth images: the loading code should just provide its own thumbnailer program for those.)

    Non-image metadata

    The gdk_pixbuf_set_option / gdk_pixbuf_get_option family of functions is so that pixbuf loaders can set key/value pairs of strings onto a pixbuf. Loaders use this for comment blocks, or ICC profiles for color calibration, or DPI information for images that have it, or EXIF data from photos. It is up to applications to actually use this information.

    It's a bit uncomfortable that gdk-pixbuf makes no promises about the kind of raster data it gives to the caller: right now it is raw RGB(A) data that is not gamma-corrected nor in any particular color space. It is up to the caller to see if the pixbuf has an ICC profile attached to it as an option. Effectively, this means that applications don't know if they are getting SRGB, or linear RGB, or what... unless they specifically care to look.

    The gdk-pixbuf API could probably make promises: if you call this function you will get SRGB data; if you call this other function, you'll get the raw RGBA data and we'll tell you its colorspace/gamma/etc.

    The various set_option / get_option pairs are also usable by the gdk-pixbuf saving code (up to now we have just talked about loaders). I don't know enough about how applications use the saving code in gdk-pixbuf... the thumbnailers use it to save PNGs or JPEGs, but other apps? No idea.

    What I would like to see

    Immutable pixbufs in a useful format. I've started work on this in a merge request; the internal code is now ready to take in different internal representations of pixel data. My goal is to make Cairo image surfaces the preferred, immutable, internal representation. This would give us a gdk_pixbuf_get_cairo_surface(), which pretty much everything that needs one reimplements by hand.

    Find places that assume mutable pixbufs. To gradually deprecate mutable pixbufs, I think we would need to audit applications and libraries to find places that cause GdkPixbuf structures to degrade into mutable ones: basically, find callers of gdk_pixbuf_get_pixels() and related functions, see what they do, and reimplement them differently. Maybe they don't need to tint icons by hand anymore? Maybe they don't need icons anymore, given our changing UI paradigms? Maybe they are using gdk-pixbuf as an image loader only?

    Reconsider the loading-updates API. Do we need the GdkPixbufLoader::area-updated signal at all? Does anything break if we just... not emit it, or just emit it once at the end of the loading process? (Caveat: keeping it unchanged more or less means that "immutable pixbufs" as loaded by gdk-pixbuf actually mutate while being loaded, and this mutation is exposed to applications.)

    Sandboxed loaders. While these days gdk-pixbuf loaders prefer the progressive feed-it-bytes API, sandboxed loaders would maybe prefer a read-a-whole-file approach. I don't know enough about memfd or how sandboxes pass data around to know how either would work.

    Move loaders to Rust. Yes, really. Loaders are security-sensitive, and while we do need to sandbox them, it would certainly be better to do them in a memory-safe language. There are already pure Rust-based image loaders: JPEG, PNG, TIFF, GIF, ICO. I have no idea how featureful they are. We can certainly try them with gdk-pixbuf's own suite of test images. We can modify them to add hooks for things like a size-prepared notification, if they don't already have a way to read "just the image headers".

    Rust makes it very easy to plug in micro-benchmarks, fuzz testing, and other modern amenities. These would be perfect for improving the loaders.

    I started sketching a Rust backend for gdk-pixbuf loaders some months ago, but there's nothing useful yet. One mismatch between gdk-pixbuf's model for loaders, and the existing Rust codecs, is that Rust codecs generally take something that implements the Read trait: a blocking API to read bytes from abstract sources; it's a pull API. The gdk-pixbuf model is a push API: the calling code creates a loader object, and then pushes bytes into it. The gdk-pixbuf convenience functions that take a GInputStream basically do this:

    loader = gdk_pixbuf_loader_new (...);
    
    while (more_bytes) {
        n_read = g_input_stream_read (stream, buffer, ...);
        gdk_pixbuf_loader_write(loader, buffer, n_read, ...);
    }
    
    gdk_pixbuf_loader_close (loader);
    

    However, this cannot be flipped around easily. We could probably use a second thread (easy, safe to do in Rust) to make the reader/decoder thread block while the main thread pushes bytes into it.
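
    Here is a sketch of that idea, with a toy Read implementation fed by a channel; the decoder thread below just counts bytes, standing in for a real pull-style Rust decoder, and the loop in main() stands in for the push-style gdk_pixbuf_loader_write() calls:

    use std::io::{self, Read};
    use std::sync::mpsc::{channel, Receiver, Sender};
    use std::thread;
    
    // A Read implementation that blocks on a channel of byte buffers.
    struct ChannelReader {
        rx: Receiver<Vec<u8>>,
        current: Vec<u8>,
        pos: usize,
    }
    
    impl Read for ChannelReader {
        fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
            // Out of data in the current chunk: block until the next one arrives.
            while self.pos == self.current.len() {
                match self.rx.recv() {
                    Ok(chunk) => {
                        self.current = chunk;
                        self.pos = 0;
                    }
                    Err(_) => return Ok(0), // all senders gone: end of stream
                }
            }
    
            let n = (self.current.len() - self.pos).min(buf.len());
            buf[..n].copy_from_slice(&self.current[self.pos..self.pos + n]);
            self.pos += n;
            Ok(n)
        }
    }
    
    fn main() {
        let (tx, rx): (Sender<Vec<u8>>, Receiver<Vec<u8>>) = channel();
    
        // Decoder thread: blocks on the Read side, like a pull-style decoder would.
        let decoder = thread::spawn(move || {
            let mut reader = ChannelReader { rx, current: Vec::new(), pos: 0 };
            let mut data = Vec::new();
            reader.read_to_end(&mut data).unwrap();
            data.len()
        });
    
        // Main thread: keeps pushing buffers, like gdk_pixbuf_loader_write() calls.
        for chunk in [&b"chunk one "[..], &b"chunk two"[..]].iter() {
            tx.send(chunk.to_vec()).unwrap();
        }
        drop(tx); // "close the loader"
    
        println!("decoded {} bytes", decoder.join().unwrap());
    }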

    Also, I don't know how the Rust bindings for GIO present things like GInputStream and friends, with our nice async cancellables and all that.

    Deprecate animations? Move that code to EOG, just so one can look at memes in it? Do any "real apps" actually use GIF animations for their UI?

    Formalize promises around returned color profiles, gamma, etc. As mentioned above: have an "easy API" that returns SRGB, and a "raw API" that returns the ARGB data from the image, plus info on its ICC profile, gamma, or any other info needed to turn this into a "good enough to be universal" representation. (I think all the Apple APIs that pass colors around do so with an ICC profile attached, which seems... pretty much necessary for correctness.)

    Remove the internal MIME-sniffing machinery. And just use GIO's.

    Deprecate the crufty/old APIs in gdk-pixbuf. Scaling/transformation, compositing, GdkPixdata, gdk-pixbuf-csource, all those. Pixel crunching can be done by Cairo; the others are better done with GResource these days.

    Figure out if we want blessed codecs; fix thumbnailers. Link those loaders statically, unconditionally. Exotic formats can go in their own custom thumbnailers. Figure out if we want sandboxed loaders for everything, or just for user-side images (not ones read from the trusted system installation).

    Have GTK4 communicate clearly about its drawing model. I think we are having a disconnect between the GUI chrome, which is CSS/GPU friendly, and graphical content generated by applications, which by default right now is done via Cairo. And having Cairo as a to-screen and to-printer API is certainly very convenient! You Wouldn't Print a GUI, but certainly you would print a displayed document.

    It would also be useful for GTK4 to actually define what its preferred image format is if it wants to ship it to the GPU with as little work as possible. Maybe it's a Cairo image surface? Maybe something else?

    Conclusion

    We seem to change imaging models every ten years or so. Xlib, then Xrender with Cairo, then GPUs and CSS-based drawing for widgets. We've gone from trusted data on your local machine, to potentially malicious data that rains from the Internet. Gdk-pixbuf has spanned all of these periods so far, and it is due for a big change.

  10. Debugging an Rc<T> reference leak in Rust

    - gnome, librsvg, rust

    The bug that caused two brown-paper-bag releases of librsvg — because it was leaking all the SVG nodes — has been interesting.

    Memory leaks in Rust? Isn't it supposed to prevent that?

    Well, yeah, but the leaks were caused by the C side of things, and by unsafe code in Rust, which does not prevent leaks.

    The first part of the bug was easy: C code started calling a function implemented in Rust, which returns a newly-acquired reference to an SVG node. The old code simply got a pointer to the node, without acquiring a reference. The new code was forgetting to rsvg_node_unref(). No biggie.

    The second part of the bug was trickier to find. The C code was apparently calling all the functions to unref nodes as appropriate, and even calling the rsvg_tree_free() function in the end; this is the "free the whole SVG tree" function.

    There are these types:

    // We take a pointer to this and expose it as an opaque pointer to C
    pub enum RsvgTree {}
    
    // This is the real structure we care about
    pub struct Tree {
        // This is the Rc that was getting leaked
        pub root: Rc<Node>,
        ...
    }
    

    Tree is the real struct that holds the root of the SVG tree and some other data. Each node is an Rc<Node>; the root node was getting leaked (... and all the children, recursively) because its reference count never went down from 1.

    RsvgTree is just an empty type. The code does an unsafe cast of *const Tree as *const RsvgTree in order to expose a raw pointer to the C code.

    The rsvg_tree_free() function, callable from C, looked like this:

    #[no_mangle]
    pub extern "C" fn rsvg_tree_free(tree: *mut RsvgTree) {
        if !tree.is_null() {
            let _ = unsafe { Box::from_raw(tree) };
                             // ^ this returns a Box<RsvgTree> which is an empty type!
        }
    }
    

    When we call Box::from_raw() on a *mut RsvgTree, it gives us back a Box<RsvgTree>... which is a box of a zero-sized type. So, the program frees zero memory when the box gets dropped.

    The code was missing this cast:

        let tree = unsafe { &mut *(tree as *mut Tree) };
                                     // ^ this cast to the actual type inside the Box
        let _ = unsafe { Box::from_raw(tree) };
    

    So, tree as *mut Tree gives us a value which will cause Box::from_raw() to return a Box<Tree>, which is what we intended. Dropping the box will drop the Tree, reduce the last reference count on the root node, and free all the nodes recursively.
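
    To summarize the whole round trip with stand-in types (this is not librsvg's actual code, just the shape of it), the important thing is that both casts agree on the real type inside the Box:

    pub enum RsvgTree {} // opaque, zero-sized type that C sees
    
    pub struct Tree {
        pub root: String, // stand-in for Rc<Node> and friends
    }
    
    // Handing the tree to C: Box it, leak it, cast the pointer to the opaque type.
    fn tree_to_raw(tree: Tree) -> *mut RsvgTree {
        Box::into_raw(Box::new(tree)) as *mut RsvgTree
    }
    
    // Taking it back: cast to *mut Tree *before* Box::from_raw(), which is
    // exactly the step that rsvg_tree_free() was missing.
    unsafe fn tree_from_raw(raw: *mut RsvgTree) -> Box<Tree> {
        assert!(!raw.is_null());
        Box::from_raw(raw as *mut Tree)
    }
    
    fn main() {
        let raw = tree_to_raw(Tree { root: "svg".to_string() });
        let tree = unsafe { tree_from_raw(raw) };
        println!("dropping tree with root {}", tree.root); // Tree is freed here
    }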

    Monitoring an Rc<T>'s reference count in gdb

    So, how does one set a gdb watchpoint on the reference count?

    First I set a breakpoint on a function which I knew would get passed the Rc<Node> I care about:

    (gdb) b <rsvg_internals::structure::NodeSvg as rsvg_internals::node::NodeTrait>::set_atts
    Breakpoint 3 at 0x7ffff71f3aaa: file rsvg_internals/src/structure.rs, line 131.
    
    (gdb) c
    Continuing.
    
    Thread 1 "rsvg-convert" hit Breakpoint 3, <rsvg_internals::structure::NodeSvg as rsvg_internals::node::NodeTrait>::set_atts (self=0x646c60, node=0x64c890, pbag=0x64c820) at rsvg_internals/src/structure.rs:131
    
    (gdb) p node
    $5 = (alloc::rc::Rc<rsvg_internals::node::Node> *) 0x64c890
    

    Okay, node is a reference to an Rc<Node>. What's inside?

    (gdb) p *node
    $6 = {ptr = {pointer = {__0 = 0x625800}}, phantom = {<No data fields>}}
    

    Why, a pointer to the actual contents of the Rc. Look inside again:

    (gdb) p *node.ptr.pointer.__0
    $9 = {strong = {value = {value = 3}}, weak = {value = {value = 1}},  ... and lots of extra crap ...
    

    Aha! There are the strong and weak reference counts. So, set a watchpoint on the strong reference count:

    (gdb) set $ptr = &node.ptr.pointer.__0.strong.value.value
    (gdb) watch *$ptr
    Hardware watchpoint 4: *$ptr
    

    Continue running the program until the reference count changes:

    (gdb) continue
    Thread 1 "rsvg-convert" hit Hardware watchpoint 4: *$ptr
    
    Old value = 3
    New value = 2
    

    At this point I can print a stack trace and see if it makes sense, check that the refs/unrefs are matched, etc.

    TL;DR: dig into the Rc<T> until you find the reference count, and watch it. It's wrapped in several layers of Rust-y types; NonNull pointers, an RcBox for the actual container of the refcount plus the object it's wrapping, and Cells for the refcount values. Just dig until you reach the refcount values and they are there.

    So, how did I find the missing cast?

    Using that gdb recipe, I watched the reference count of the toplevel SVG node change until the program exited. When the program terminated, the reference count was 1 — it should have dropped to 0 if there was no memory leak.

    The last place where the toplevel node loses a reference is in rsvg_tree_free(). I ran the program again and checked if that function was being called; it was being called correctly. So I knew that the problem must lie in that function. After a little head-scratching, I found the missing cast. Other functions of the form rsvg_tree_whatever() had that cast, but rsvg_tree_free() was missing it.

    I think Rust now has better facilities to tag structs that are exposed as raw pointers to extern code, to avoid this kind of perilous casting. We'll see.

    In the meantime, apologies for the buggy releases!
