Federico's Blog

  1. Reducing memory consumption in librsvg, part 4: compact representation for Bézier paths

    Translations: es - Tags: gnome, librsvg, performance, rust

    Let's continue with the enormous SVG from last time, a map extracted from OpenStreetMap.

    According to Massif, peak memory consumption for that file occurs at the following point during the execution of rsvg-convert. I pasted only the part that refers to Bézier paths:

        --------------------------------------------------------------------------------
          n        time(i)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
        --------------------------------------------------------------------------------
    1    33 24,139,598,653    1,416,831,176    1,329,943,212    86,887,964            0
    2   ->24.88% (352,523,448B) 0x4A2727E: alloc (alloc.rs:84)
        | ->24.88% (352,523,448B) 0x4A2727E: alloc (alloc.rs:172)
        |   ->24.88% (352,523,448B) 0x4A2727E: allocate_in<rsvg_internals::path_builder::PathCommand,alloc::alloc::Global> (raw_vec.rs:98)
        |     ->24.88% (352,523,448B) 0x4A2727E: with_capacity<rsvg_internals::path_builder::PathCommand> (raw_vec.rs:167)
        |       ->24.88% (352,523,448B) 0x4A2727E: with_capacity<rsvg_internals::path_builder::PathCommand> (vec.rs:358)
        |         ->24.88% (352,523,448B) 0x4A2727E: <alloc::vec::Vec<T> as alloc::vec::SpecExtend<T,I>>::from_iter (vec.rs:1992)
        |           ->24.88% (352,523,448B) 0x49D212C: from_iter<rsvg_internals::path_builder::PathCommand,smallvec::IntoIter<[rsvg_internals::path_builder::PathCommand; 32]>> (vec.rs:1901)
        |             ->24.88% (352,523,448B) 0x49D212C: collect<smallvec::IntoIter<[rsvg_internals::path_builder::PathCommand; 32]>,alloc::vec::Vec<rsvg_internals::path_builder::PathCommand>> (iterator.rs:1493)
        |               ->24.88% (352,523,448B) 0x49D212C: into_vec<[rsvg_internals::path_builder::PathCommand; 32]> (lib.rs:893)
        |                 ->24.88% (352,523,448B) 0x49D212C: smallvec::SmallVec<A>::into_boxed_slice (lib.rs:902)
    3   |                   ->24.88% (352,523,016B) 0x4A0394C: into_path (path_builder.rs:320)
        |
    4   ->03.60% (50,990,328B) 0x4A242F0: realloc (alloc.rs:128)
        | ->03.60% (50,990,328B) 0x4A242F0: realloc (alloc.rs:187)
        |   ->03.60% (50,990,328B) 0x4A242F0: shrink_to_fit<rsvg_internals::path_builder::PathCommand,alloc::alloc::Global> (raw_vec.rs:633)
        |     ->03.60% (50,990,328B) 0x4A242F0: shrink_to_fit<rsvg_internals::path_builder::PathCommand> (vec.rs:623)
        |       ->03.60% (50,990,328B) 0x4A242F0: alloc::vec::Vec<T>::into_boxed_slice (vec.rs:679)
        |         ->03.60% (50,990,328B) 0x49D2136: smallvec::SmallVec<A>::into_boxed_slice (lib.rs:902)
    5   |           ->03.60% (50,990,328B) 0x4A0394C: into_path (path_builder.rs:320)
    

    Line 1 has the totals, and we see that at that point the program uses 1,329,943,212 bytes on the heap.

    Lines 3 and 5 give us a hint that into_path is being called; this is the function that converts a temporary/mutable PathBuilder into a permanent/immutable Path.

    Lines 2 and 4 indicate that the arrays of PathCommand, which are inside those immutable Paths, use 24.88% + 3.60% = 28.48% of the program's memory; between both they use 352,523,448 + 50,990,328 = 403,513,776 bytes.

    That is about 400 MB of PathCommand. Let's see what's going on.

    What is in a PathCommand?

    A Path is a list of commands similar to PostScript's operators, which SVG uses to draw Bézier paths. It is a flat array of PathCommand:

    pub struct Path {
        path_commands: Box<[PathCommand]>,
    }
    
    pub enum PathCommand {
        MoveTo(f64, f64),
        LineTo(f64, f64),
        CurveTo(CubicBezierCurve),
        Arc(EllipticalArc),
        ClosePath,
    }
    

    Let's see the variants of PathCommand:

    • MoveTo: 2 double-precision floating-point numbers.
    • LineTo: same.
    • CurveTo: 6 double-precision floating-point numbers.
    • Arc: an EllipticalArc, with 7 double-precision floating-point numbers plus 2 flags (see below).
    • ClosePath: no extra data.

    These variants vary a lot in size, and each element of the Path.path_commands array occupies the maximum of their sizes (i.e. the size of EllipticalArc, the largest variant, plus the enum's discriminant).
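
    To see the cost, here is a sketch with stand-in types of the same shape as librsvg's (the exact numbers depend on the target; these are for a typical 64-bit build):

    use std::mem::size_of;

    // Stand-ins with the same shape as librsvg's types, just for measuring.
    struct CubicBezierCurve { pt1: (f64, f64), pt2: (f64, f64), to: (f64, f64) }

    struct EllipticalArc {
        r: (f64, f64),
        x_axis_rotation: f64,
        large_arc: bool, // stand-in for the LargeArc enum
        sweep: bool,     // stand-in for the Sweep enum
        from: (f64, f64),
        to: (f64, f64),
    }

    enum PathCommand {
        MoveTo(f64, f64),
        LineTo(f64, f64),
        CurveTo(CubicBezierCurve),
        Arc(EllipticalArc),
        ClosePath,
    }

    fn main() {
        // Prints 64 and 72 here: every array element pays for the largest
        // variant plus the discriminant, even if it is just a ClosePath.
        println!("sizeof EllipticalArc: {}", size_of::<EllipticalArc>());
        println!("sizeof PathCommand:   {}", size_of::<PathCommand>());
    }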

    A more compact representation

    Ideally, each command in the array would only occupy as much space as it needs.

    We can represent a Path in a different way, as two separate arrays:

    • A very compact array of commands without coordinates.
    • An array with coordinates only.

    That is, the following:

    pub struct Path {
        commands: Box<[PackedCommand]>,
        coords: Box<[f64]>,
    }
    

    The coords array is obvious; it is just a flat array with all the coordinates in the Path in the order in which they appear.

    And the commands array?

    PackedCommand

    We saw above that the biggest variant in PathCommand is Arc(EllipticalArc). Let's look inside it:

    pub struct EllipticalArc {
        pub r: (f64, f64),
        pub x_axis_rotation: f64,
        pub large_arc: LargeArc,
        pub sweep: Sweep,
        pub from: (f64, f64),
        pub to: (f64, f64),
    }
    

    There are 7 f64 floating-point numbers there. The other two fields, large_arc and sweep, are effectively booleans (they are just enums with two variants, with pretty names instead of just true and false).

    Thus, we have 7 doubles and two flags; between the two flags there are 4 possible combinations.

    Since no other PathCommand variant has flags, we can have the following enum, which fits in a single byte:

    #[repr(u8)]
    enum PackedCommand {
        MoveTo,
        LineTo,
        CurveTo,
        ArcSmallNegative,
        ArcSmallPositive,
        ArcLargeNegative,
        ArcLargePositive,
        ClosePath,
    }
    

    That is, simple values for MoveTo/etc. and four special values for the different types of Arc.
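
    For instance, using the PackedCommand enum we just defined, the path "M 10 20 L 30 40 Z" packs into 3 bytes of commands plus 4 f64 of coordinates (32 bytes), instead of three PathCommand values each as big as the largest variant:

    // "M 10 20 L 30 40 Z", packed:
    let commands = [PackedCommand::MoveTo, PackedCommand::LineTo, PackedCommand::ClosePath];
    let coords = [10.0_f64, 20.0, 30.0, 40.0];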

    Packing a PathCommand into a PackedCommand

    In order to pack the array of PathCommand, we must first know how many coordinates each of its variants will produce:

    impl PathCommand {
        fn num_coordinates(&self) -> usize {
            match *self {
                PathCommand::MoveTo(..) => 2,
                PathCommand::LineTo(..) => 2,
                PathCommand::CurveTo(_) => 6,
                PathCommand::Arc(_) => 7,
                PathCommand::ClosePath => 0,
            }
        }
    }
    

    Then, we need to convert each PathCommand into a PackedCommand and write its coordinates into an array:

    impl PathCommand {
        fn to_packed(&self, coords: &mut [f64]) -> PackedCommand {
            match *self {
                PathCommand::MoveTo(x, y) => {
                    coords[0] = x;
                    coords[1] = y;
                    PackedCommand::MoveTo
                }
    
                // etc. for the other simple commands
    
                PathCommand::Arc(ref a) => a.to_packed_and_coords(coords),
            }
        }
    }
    

    Let's look at that to_packed_and_coords more closely:

    impl EllipticalArc {
        fn to_packed_and_coords(&self, coords: &mut [f64]) -> PackedCommand {
            coords[0] = self.r.0;
            coords[1] = self.r.1;
            coords[2] = self.x_axis_rotation;
            coords[3] = self.from.0;
            coords[4] = self.from.1;
            coords[5] = self.to.0;
            coords[6] = self.to.1;
    
            match (self.large_arc, self.sweep) {
                (LargeArc(false), Sweep::Negative) => PackedCommand::ArcSmallNegative,
                (LargeArc(false), Sweep::Positive) => PackedCommand::ArcSmallPositive,
                (LargeArc(true), Sweep::Negative) => PackedCommand::ArcLargeNegative,
                (LargeArc(true), Sweep::Positive) => PackedCommand::ArcLargePositive,
            }
        }
    }
    

    Creating the compact Path

    Let's look at PathBuilder::into_path line by line:

    impl PathBuilder {
        pub fn into_path(self) -> Path {
            let num_commands = self.path_commands.len();
            let num_coords = self
                .path_commands
                .iter()
                .fold(0, |acc, cmd| acc + cmd.num_coordinates());
    

    First we compute the total number of coordinates using fold; we ask each command cmd its num_coordinates() and add it into the acc accumulator.

    Now we know how much memory to allocate:

            let mut packed_commands = Vec::with_capacity(num_commands);
            let mut coords = vec![0.0; num_coords];
    

    We use Vec::with_capacity to allocate exactly as much memory as we will need for the packed_commands; adding elements will not need a realloc(), since we already know how many elements we will have.

    We use the vec! macro to create an array of 0.0 repeated num_coords times; that macro uses with_capacity internally. That is the array we will use to store the coordinates for all the commands.

            let mut coords_slice = coords.as_mut_slice();
    

    We get a mutable slice out of the whole array of coordinates.

            for c in self.path_commands {
                let n = c.num_coordinates();
                packed_commands.push(c.to_packed(coords_slice.get_mut(0..n).unwrap()));
                coords_slice = &mut coords_slice[n..];
            }
    

    For each command, we see how many coordinates it will generate and put that number in n. We get a mutable sub-slice of coords_slice with only that many elements, and pass it to the command's to_packed.

    At the end of each iteration we move the mutable slice to where the next command's coordinates will go.

            Path {
                commands: packed_commands.into_boxed_slice(),
                coords: coords.into_boxed_slice(),
            }
        }
    }
    

    At the end, we create the final, immutable Path by converting each array with into_boxed_slice, as last time. That way each of the two arrays, the one with PackedCommands and the one with coordinates, occupies exactly the space it needs.

    An iterator for Path

    This is all very well, but we also want it to be easy to iterate on that compact representation; the PathCommand enums from the beginning are very convenient to use and that's what the rest of the code already uses. Let's make an iterator that unpacks what is inside a Path and produces a PathCommand for each element.

    pub struct PathIter<'a> {
        commands: slice::Iter<'a, PackedCommand>,
        coords: &'a [f64],
    }
    

    We need an iterator over the array of PackedCommand so we can visit each command. However, to get elements of coords, I am going to use a slice of f64 instead of an iterator.

    Let's look at the implementation of the iterator:

    impl<'a> Iterator for PathIter<'a> {
        type Item = PathCommand;
    
        fn next(&mut self) -> Option<Self::Item> {
            if let Some(cmd) = self.commands.next() {
                let cmd = PathCommand::from_packed(cmd, self.coords);
                let num_coords = cmd.num_coordinates();
                self.coords = &self.coords[num_coords..];
                Some(cmd)
            } else {
                None
            }
        }
    }
    

    Since we want each iteration to produce a PathCommand, we declare it as having the associated type Item = PathCommand.

    If the self.commands iterator has another element, it means there is another PackedCommand available.

    We call PathCommand::from_packed with the self.coords slice to unpack a command and its coordinates. We see how many coordinates the command consumed, and re-slice self.coords past that many coordinates, so that it now points to the coordinates for the next command.

    We return Some(cmd) if there was an element, or None if the iterator is empty.
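
    With that, the rest of the code can keep working with plain PathCommand values. Assuming Path grows a constructor for the iterator like the following sketch (not necessarily librsvg's exact API), the packed representation stays an implementation detail:

    impl Path {
        pub fn iter(&self) -> PathIter<'_> {
            PathIter {
                commands: self.commands.iter(),
                coords: &self.coords[..],
            }
        }
    }

    Rendering code can then just do for cmd in path.iter() { ... } and match on each PathCommand as before.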

    The implementation of from_packed is obvious and I'll just paste a bit from it:

    impl PathCommand {
        fn from_packed(packed: &PackedCommand, coords: &[f64]) -> PathCommand {
            match *packed {
                PackedCommand::MoveTo => {
                    let x = coords[0];
                    let y = coords[1];
                    PathCommand::MoveTo(x, y)
                }
    
                // etc. for the other variants in PackedCommand
    
                PackedCommand::ArcSmallNegative => PathCommand::Arc(EllipticalArc::from_coords(
                    LargeArc(false),
                    Sweep::Negative,
                    coords,
                )),
    
                PackedCommand::ArcSmallPositive => // etc.
    
                PackedCommand::ArcLargeNegative => // etc.
    
                PackedCommand::ArcLargePositive => // etc.
            }
        }
    }
    

    Results

    Before the changes (this is the same Massif heading as above):

    --------------------------------------------------------------------------------
      n        time(i)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
    --------------------------------------------------------------------------------
     33 24,139,598,653    1,416,831,176    1,329,943,212    86,887,964            0
                                           ^^^^^^^^^^^^^
                                               boo
    

    After:

    --------------------------------------------------------------------------------
      n        time(i)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
    --------------------------------------------------------------------------------
     28 26,611,886,993    1,093,747,888    1,023,147,907    70,599,981            0
                                           ^^^^^^^^^^^^^
                                              oh yeah
    

    We went from using 1,329,943,212 bytes down to 1,023,147,907 bytes; that is, we knocked off about 300 MB.

    However, that is for the whole program. Above we saw that Path data occupies 403,513,776 bytes; how about now?

    ->07.45% (81,525,328B) 0x4A34C6F: alloc (alloc.rs:84)
    | ->07.45% (81,525,328B) 0x4A34C6F: alloc (alloc.rs:172)
    |   ->07.45% (81,525,328B) 0x4A34C6F: allocate_in<f64,alloc::alloc::Global> (raw_vec.rs:98)
    |     ->07.45% (81,525,328B) 0x4A34C6F: with_capacity<f64> (raw_vec.rs:167)
    |       ->07.45% (81,525,328B) 0x4A34C6F: with_capacity<f64> (vec.rs:358)
    |         ->07.45% (81,525,328B) 0x4A34C6F: rsvg_internals::path_builder::PathBuilder::into_path (path_builder.rs:486)
    

    Perfect. We went from occupying 403,513,776 bytes to just 81,525,328 bytes. Instead of Path data amounting to 28.48% of the heap, it is just 7.45%.

    I think we can stop worrying about Path data for now. I like how this turned out without having to use unsafe.


  2. Reducing memory consumption in librsvg, part 3: slack space in Bézier paths

    Translations: es - Tags: gnome, librsvg, performance, rust

    We got a bug report about a gigantic SVG of a map extracted from OpenStreetMap; it has about 600,000 elements. Most of them are <path> elements, that is, specifications for Bézier paths.

    A <path> can look like this:

    <path d="m 2239.05,1890.28 5.3,-1.81"/>
    

    The d attribute contains a list of commands to create a Bézier path, very similar to PostScript's operators. Librsvg has the following to represent those commands:

    pub enum PathCommand {
        MoveTo(f64, f64),
        LineTo(f64, f64),
        CurveTo(CubicBezierCurve),
        Arc(EllipticalArc),
        ClosePath,
    }
    

    Those commands get stored in an array, a Vec inside a PathBuilder:

    pub struct PathBuilder {
        path_commands: Vec<PathCommand>,
    }
    

    Librsvg translates each of the commands inside a <path d="..."/> into a PathCommand and pushes it into the Vec in the PathBuilder. When it is done parsing the attribute, the PathBuilder holds the final version of the path.

    To let a Vec grow efficiently as items are pushed into it, Rust makes the Vec grow by powers of 2. When we add an item, if the capacity of the Vec is full, its buffer gets realloc()ed to twice its capacity. That way there are only O(log₂n) calls to realloc(), where n is the total number of items in the array.

    However, this means that once we are done adding items to the Vec, there may still be some free space in it: the capacity exceeds the length of the array. The invariant is that vec.capacity() >= vec.len().

    First I wanted to shrink the PathBuilders so that they have no extra capacity in the end.

    First step: convert to Box<[T]>

    A "boxed slice" is a contiguous array in the heap, that cannot grow or shrink. That is, it has no extra capacity, only a length.

    Vec has a method into_boxed_slice which does exactly that: it consumes the vector and converts it into a boxed slice without extra capacity. In its innards, it does a realloc() on the Vec's buffer to match its length.
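
    A minimal sketch of both effects (the exact growth policy is an implementation detail of Vec, so the capacity printed may vary):

    fn main() {
        let mut v = Vec::new();
        for i in 0..33 {
            v.push(i);
        }
        // Typically prints "len = 33, capacity = 64"; almost half the
        // buffer is slack.
        println!("len = {}, capacity = {}", v.len(), v.capacity());

        // into_boxed_slice reallocates the buffer down to the exact length.
        let boxed: Box<[i32]> = v.into_boxed_slice();
        println!("len = {}", boxed.len()); // 33, and no capacity at all
    }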

    Let's see the numbers that Massif reports:

    --------------------------------------------------------------------------------
      n        time(i)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
    --------------------------------------------------------------------------------
     23 22,751,613,855    1,560,916,408    1,493,746,540    67,169,868            0
                                           ^^^^^^^^^^^^^
                                               before
    
     30 22,796,106,012    1,553,581,072    1,329,943,324   223,637,748            0
                                           ^^^^^^^^^^^^^
                                               after
    

    That is, we went from using 1,493,746,540 bytes on the heap to using 1,329,943,324 bytes. Simply removing the extra capacity from the path commands saves about 160 MB for this particular file.

    Second step: make the allocator do less work

    However, the extra-heap column in that table has a number I don't like: there are 223,637,748 bytes in malloc() metadata and unused space in the heap.

    I suppose that so many calls to realloc() make the heap a bit fragmented.

    It would be good to parse most of the <path d="..."/> attributes into temporary buffers that don't need so many calls to realloc(), and only at the end copy them to exact-sized buffers without extra capacity.

    We can do just that with the smallvec crate. A SmallVec has the same API as Vec, but it can store small arrays directly in the stack, without an extra heap allocation. Once the capacity is full, the stack buffer "spills" into a heap buffer automatically.

    Most of the d attributes in the huge file in the bug have fewer than 32 commands. That is, if we use the following:

    pub struct PathBuilder {
        path_commands: SmallVec<[PathCommand; 32]>,
    }
    

    We are saying that there can be up to 32 items in the SmallVec without causing a heap allocation; once that is exceeded, it will work like a normal Vec.

    At the end we still do into_boxed_slice to turn it into an independent heap allocation with an exact size.
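
    The spilling behavior is easy to see in isolation; a sketch using the smallvec crate:

    use smallvec::SmallVec;

    fn main() {
        // Up to 4 items live inline, without a heap allocation.
        let mut v: SmallVec<[u64; 4]> = SmallVec::new();

        for i in 0..4 {
            v.push(i);
        }
        assert!(!v.spilled()); // still inline

        v.push(4);            // exceeds the inline capacity...
        assert!(v.spilled()); // ...so the buffer spilled to the heap
    }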

    This reduces the extra-heap quite a bit:

    --------------------------------------------------------------------------------
      n        time(i)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
    --------------------------------------------------------------------------------
     33 24,139,598,653    1,416,831,176    1,329,943,212    86,887,964            0
                                                            ^^^^^^^^^^
    

    Also, the total bytes shrink from 1,553,581,072 to 1,416,831,176 — we have a smaller heap because there is not so much work for the allocator, and there are a lot fewer temporary blocks when parsing the d attributes.

    Making the code prettier

    I put in the following:

    /// This one is mutable
    pub struct PathBuilder {
        path_commands: SmallVec<[PathCommand; 32]>,
    }
    
    /// This one is immutable
    pub struct Path {
        path_commands: Box<[PathCommand]>,
    }
    
    impl PathBuilder {
        /// Consumes the PathBuilder and converts it into an immutable Path
        pub fn into_path(self) -> Path {
            Path {
                path_commands: self.path_commands.into_boxed_slice(),
            }
        }
    }
    

    With that, PathBuilder is just a temporary struct that turns into an immutable Path once we are done feeding it. Path contains a boxed slice of the exact size, without any extra capacity.

    Next steps

    All the coordinates in librsvg are stored as f64, double-precision floating point numbers. The SVG/CSS spec says that single-precision floats are enough, and that 64-bit floats should be used only for geometric transformations.

    I'm a bit scared to make that change; I'll have to look closely at the results of the test suite to see if rendered files change very much. I suppose even big maps require only as much precision as f32 — after all, that is what OpenStreetMap uses.


  3. Reducing memory consumption in librsvg, part 2: SpecifiedValues

    Translations: es - Tags: gnome, librsvg, performance, rust

    To continue with last time's topic, let's see how to make librsvg's DOM nodes smaller in memory. Since that time, there have been some changes to the code; that is why in this post some of the type names are different from last time's.

    Every SVG element is represented with this struct:

    pub struct Element {
        element_type: ElementType,
        element_name: QualName,
        id: Option<String>,
        class: Option<String>,
        specified_values: SpecifiedValues,
        important_styles: HashSet<QualName>,
        result: ElementResult,
        transform: Transform,
        values: ComputedValues,
        cond: bool,
        style_attr: String,
        element_impl: Box<dyn ElementTrait>,
    }
    

    The two biggest fields are the ones with types SpecifiedValues and ComputedValues. These are the sizes of the whole Element struct and those two types:

    sizeof Element: 1808
    sizeof SpecifiedValues: 824
    sizeof ComputedValues: 704
    

    In this post, we'll reduce the size of SpecifiedValues.

    What is SpecifiedValues?

    If we have an element like this:

    <circle cx="10" cy="10" r="10" stroke-width="4" stroke="blue"/>
    

    The values of the style properties stroke-width and stroke get stored in a SpecifiedValues struct. This struct has a bunch of fields, one for each possible style property:

    pub struct SpecifiedValues {
        baseline_shift:              SpecifiedValue<BaselineShift>,
        clip_path:                   SpecifiedValue<ClipPath>,
        clip_rule:                   SpecifiedValue<ClipRule>,
        // ...
        stroke:                      SpecifiedValue<Stroke>,
        stroke_width:                SpecifiedValue<StrokeWidth>,
        // ...
    }
    

    Each field is a SpecifiedValue<T> for the following reason. In CSS/SVG, a style property can be unspecified, or it can have an inherit value to force the property to be copied from the element's parent, or it can actually have a specified value. Librsvg represents these as follows:

    pub enum SpecifiedValue<T>
    where
        T: // some trait bounds here
    {
        Unspecified,
        Inherit,
        Specified(T),
    }
    

    Now, SpecifiedValues has a bunch of fields, 47 of them to be exact — one for each of the style properties that librsvg supports. That is why SpecifiedValues has a size of 824 bytes; it is the largest sub-structure within Element, and it would be good to reduce its size.

    Not all properties are specified

    Let's go back to the chunk of SVG from above:

    <circle cx="10" cy="10" r="10" stroke-width="4" stroke="blue"/>
    

    Here we only have two specified properties, so the stroke_width and stroke fields of SpecifiedValues will be set as SpecifiedValue::Specified(something) and all the other fields will be left as SpecifiedValue::Unspecified.

    It would be good to store only complete values for the properties that are specified, and just a small flag for unset properties.

    Another way to represent the set of properties

    Since there is a maximum of 47 properties per element (or more if librsvg adds support for extra ones), we can have a small array of 47 bytes. Each byte contains the index within another array that contains only the values of specified properties, or a sentinel value for properties that are unset.

    First, I made an enum that fits in a u8 for all the properties, plus the sentinel value, which also gives us the total number of properties. The #[repr(u8)] guarantees that this enum fits in a byte.

    #[repr(u8)]
    enum PropertyId {
        BaselineShift,
        ClipPath,
        ClipRule,
        Color,
        // ...
        WritingMode,
        XmlLang,
        XmlSpace,
        UnsetProperty, // the number of properties and also the sentinel value
    }
    

    Also, from before these changes, there was already the following monster to represent "which property is this" plus the property's value:

    pub enum ParsedProperty {
        BaselineShift(SpecifiedValue<BaselineShift>),
        ClipPath(SpecifiedValue<ClipPath>),
        ClipRule(SpecifiedValue<ClipRule>),
        Color(SpecifiedValue<Color>),
        // ...
    }
    

    I changed the definition of SpecifiedValues to have two arrays, one to store which properties are specified, and another only with the values for the properties that are actually specified:

    pub struct SpecifiedValues {
        indices: [u8; PropertyId::UnsetProperty as usize],
        props: Vec<ParsedProperty>,
    }
    

    There is a thing that is awkward in Rust, or which I haven't found how to solve in a nicer way: given a ParsedProperty, find the corresponding PropertyId for its discriminant. I did the obvious thing:

    impl ParsedProperty {
        fn get_property_id(&self) -> PropertyId {
            use ParsedProperty::*;
    
            match *self {
                BaselineShift(_) => PropertyId::BaselineShift,
                ClipPath(_)      => PropertyId::ClipPath,
                ClipRule(_)      => PropertyId::ClipRule,
                Color(_)         => PropertyId::Color,
                // ...
            }
        }
    }
    

    Initialization

    First, we want to initialize an empty SpecifiedValues, where every element of the indices array is set to the sentinel value that means that the corresponding property is not set:

    impl Default for SpecifiedValues {
        fn default() -> Self {
            SpecifiedValues {
                indices: [PropertyId::UnsetProperty.as_u8(); PropertyId::UnsetProperty as usize],
                props: Vec::new(),
            }
        }
    }
    

    That sets the indices field to an array full of the same PropertyId::UnsetProperty sentinel value. Also, the props array is empty; it hasn't even had a block of memory allocated for it yet. That way, SVG elements without style properties don't use any extra memory.

    Which properties are specified and what are their indices?

    Second, we want a function that will give us the index in props for some property, or that will tell us if the property has not been set yet:

    impl SpecifiedValues {
        fn property_index(&self, id: PropertyId) -> Option<usize> {
            let v = self.indices[id.as_usize()];
    
            if v == PropertyId::UnsetProperty.as_u8() {
                None
            } else {
                Some(v as usize)
            }
        }
    }
    

    (If someone passes id = PropertyId::UnsetProperty, the array access to indices will panic, which is what we want, since that is not a valid property id.)
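
    The as_u8 and as_usize helpers used above are just casts; something like this sketch, assuming PropertyId derives Clone and Copy:

    impl PropertyId {
        fn as_u8(self) -> u8 {
            self as u8
        }

        fn as_usize(self) -> usize {
            self as usize
        }
    }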

    Change a property's value

    Third, we want to set the value of a property that has not been set, or change the value of one that was already specified:

    impl SpecifiedValues {
        fn replace_property(&mut self, prop: &ParsedProperty) {
            let id = prop.get_property_id();
    
            if let Some(index) = self.property_index(id) {
                self.props[index] = prop.clone();
            } else {
                self.props.push(prop.clone());
                let pos = self.props.len() - 1;
                self.indices[id.as_usize()] = pos as u8;
            }
        }
    }
    

    In the first case in the if, the property was already set and we just replace its value. In the second case, the property was not set; we add it to the props array and store its resulting index in indices.
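
    Reading a property back goes through the same indirection; this getter is a sketch, not necessarily librsvg's exact API:

    impl SpecifiedValues {
        fn get_property(&self, id: PropertyId) -> Option<&ParsedProperty> {
            self.property_index(id).map(|index| &self.props[index])
        }
    }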

    Results

    Before:

    sizeof Element: 1808
    sizeof SpecifiedValues: 824
    

    After:

    sizeof Element: 1056
    sizeof SpecifiedValues: 72
    

    The pathological file from the last time used 463,412,720 bytes in memory before these changes. After the changes, it uses 314,526,136 bytes.

    I also measured memory consumption for a normal file, in this case one with a bunch of GNOME's symbolic icons. The old version uses 17 MB; the new version only 13 MB.

    How to keep fine-tuning this

    For now, I am satisfied with SpecifiedValues, although it could still be made smaller:

    • The crate tagged-box converts an enum like ParsedProperty into an enum-of-boxes, and encodes the enum's discriminant in the box's pointer. This way each variant occupies the minimum possible memory, although in a separately-allocated block, and the container itself is only a pointer in size. I am not sure this is worth it; each ParsedProperty is 64 bytes, but the flat array props: Vec<ParsedProperty> is very appealing as a single block of memory. I have not checked the sizes of each individual property to see if they vary a lot among them.

    • Look for a crate that lets us have the properties in a single memory block, a kind of arena with variable types. This can be implemented with a bit of unsafe, but one has to be careful with the alignment of different types.

    • The crate enum_set2 represents an array of field-less enums as a compact bit array. If we changed the representation of SpecifiedValue, this would reduce the indices array to a minimum.

    If someone wants to dedicate some time to implement and measure this, I would be very grateful.

    Next steps

    According to Massif, the next thing is to keep making Element smaller. The next thing to shrink is ComputedValues. The obvious route is to do exactly the same as I did for SpecifiedValues. I am not sure if it would be better to try to share the style structs between elements.

  4. Librsvg accepting interns for Summer of Code 2020

    - Tags: gnome, librsvg, mentoring

    Are you a student eligible for Summer of Code 2020? I'm willing to mentor the following project for librsvg.

    Project: Revamp the text engine in librsvg

    Librsvg supports only a few features of the SVG Text specification. It requires extra features to be really useful:

    • Proper bidirectional support. Librsvg supports the direction and unicode-bidi properties for text elements, among others, but in a very rudimentary fashion. It just translates those properties to Pango terminology and asks PangoLayout to lay out the text. SVG really wants finer control of that, for which...

    • ... ideally you would make librsvg use Harfbuzz directly, or a wrapper that is close to its level of operation. Pango is a bit too high level for the needs of SVG.

    • Manual layout of text glyphs. After a text engine like Harfbuzz does the shaping, librsvg would need to lay out the produced glyphs in the way of the SVG attributes dx, dy, x, y, etc. The SVG Text specification has the algorithms for this.

    • The cherry on top: text-on-a-path. Again, the spec has the details. You would make Wikimedia content creators very happy with this!

    Requirements: Rust as the programming language; some familiarity with Unicode concepts and text layout. Familiarity with Cairo and Harfbuzz would help a lot. Preference will be given to people who can write a right-to-left human language, or a language that requires complex shaping.


  5. Reducing memory consumption in librsvg, part 1: text nodes

    Translations: es - Tags: gnome, librsvg, performance, rust

    Librsvg's memory consumption has not been a problem so far for GNOME's use cases, which is basically rendering icons. But for SVG files with thousands of elements, it could do a lot better.

    Memory consumption in the DOM

    Librsvg shares some common problems with web browsers: it must construct a DOM tree in memory with SVG elements, and keep a bunch of information for each of the tree's nodes. For example, each SVG element may have an id attribute, or a class; each one has a transformation matrix; etc.

    Apart from the tree node metadata (pointers to sibling and parent nodes), each node has this:

    /// Contents of a tree node
    pub struct NodeData {
        node_type: NodeType,
        element_name: QualName,
        id: Option<String>,    // id attribute from XML element
        class: Option<String>, // class attribute from XML element
        specified_values: SpecifiedValues,
        important_styles: HashSet<QualName>,
        result: NodeResult,
        transform: Transform,
        values: ComputedValues,
        cond: bool,
        style_attr: String,
    
        node_impl: Box<dyn NodeTrait>, // concrete struct for node types
    }
    

    On a 64-bit box, that NodeData struct is 1808 bytes. And the biggest fields are the SpecifiedValues (824 bytes) and ComputedValues (704 bytes).

    Librsvg represents all tree nodes with that struct. Consider an SVG like this:

    <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
      <rect x="10" y="20"/>
      <path d="..."/>
      <text x="10" y="20">Hello</text>
      <!-- etc -->
    </svg>
    

    There are 4 elements in that file. However, there are also tree nodes for the XML text nodes, that is, the whitespace between tags and the "Hello" inside the <text> element.

    The contents of each of those text nodes is tiny (a newline and maybe a couple of spaces), but each node still takes up at least 1808 bytes from the NodeData struct, plus the size of the text string.

    Let's refactor this to make it easier to remove that overhead.

    First step: separate text nodes from element nodes

    Internally, librsvg represents XML text nodes with a NodeChars struct which is basically a string with some extra stuff. All the concrete structs for tree node types must implement a trait called NodeTrait, and NodeChars is no exception:

    pub struct NodeChars {
       // a string with the text node's contents
    }
    
    impl NodeTrait for NodeChars {
       // a mostly empty impl with methods that do nothing
    }
    

    You don't see it in the definition of NodeData in the previous section, but for a text node, the NodeData.node_impl field would point to a heap-allocated NodeChars (it can do that, since NodeChars implements NodeTrait, so it can go into node_impl: Box<dyn NodeTrait>).

    First, I turned the NodeData struct into an enum with two variants, and moved all of its previous fields to an Element struct:

    // This one is new
    pub enum NodeData {
        Element(Element),
        Text(NodeChars),
    }
    
    // This is the old struct with a different name
    pub struct Element {
        node_type: NodeType,
        element_name: QualName,
        id: Option<String>,
        class: Option<String>,
        specified_values: SpecifiedValues,
        important_styles: HashSet<QualName>,
        result: NodeResult,
        transform: Transform,
        values: ComputedValues,
        cond: bool,
        style_attr: String,
        node_impl: Box<dyn NodeTrait>,
    }
    

    The size of a Rust enum is the maximum of the sizes of its variants, plus a little extra for the discriminant (you can think of a C struct with an int for the discriminant, and a union of variants).

    The code needed a few changes to split NodeData in this way, by adding accessor functions for the Element and Text cases as convenient. This is one of those refactors where you can just change the declaration and walk down the compiler's errors, making each place use the accessors instead of whatever it did before.

    Second step: move the Element variant to a separate allocation

    Now, we turn NodeData into this:

    pub enum NodeData {
        Element(Box<Element>), // This goes inside a Box
        Text(NodeChars),
    }
    

    That way, the Element variant is only the size of a pointer (the Box points to a heap-allocated Element), and the Text variant is as big as NodeChars, as before.

    This means that Element nodes are just as big as before, plus an extra pointer, plus an extra heap allocation.

    However, the Text nodes get a lot smaller!

    • Before: sizeof::<NodeData>() = 1808
    • After: sizeof::<NodeData>() = 72

    By making the Element variant a lot smaller (the size of a Box, which is just a pointer), it no longer imposes its size on the Text variant.

    This means that in the SVG file, all the whitespace between XML elements now takes a lot less memory.
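
    The effect is easy to reproduce in isolation. This sketch uses stand-in types (the 1736-byte payload is just an illustrative number; sizes are for a typical 64-bit target):

    use std::mem::size_of;

    struct Big {
        payload: [u8; 1736], // stand-in for the old Element's fields
    }

    enum Unboxed {
        Element(Big),
        Text(String),
    }

    enum Boxed {
        Element(Box<Big>), // only a pointer is stored inline
        Text(String),
    }

    fn main() {
        println!("{}", size_of::<Unboxed>()); // ~1744: dominated by Big
        println!("{}", size_of::<Boxed>());   // ~32: a String plus the discriminant
    }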

    Some numbers from a pathological file

    Issue 42 is about an SVG file that is just a <use> element repeated many times, once per line:

    <svg xmlns="http://www.w3.org/2000/svg">
      <defs>
        <symbol id="glyph0-0">
          <!-- a few elements here -->
        </symbol>
      </defs>
    
      <use xlink:href="#glyph0-0" x="1" y="10"/>
      <use xlink:href="#glyph0-0" x="1" y="10"/>
      <use xlink:href="#glyph0-0" x="1" y="10"/>
      <!-- about 196,000 similar lines -->
    </svg>
    

    So we have around 196,000 elements. According to Valgrind's Massif tool, this makes rsvg-convert allocate 800,501,568 bytes in the old version, versus 463,412,720 bytes in the new version, or about 60% of the space.

    Next steps

    There is a lot of repetition in the text nodes of a typical SVG file. For example, in that pathological file above, most of the whitespace is identical: between each element there is a newline and two spaces. Instead of having thousands of little allocations, all with the same string, there could be a pool of shared strings. Files with "real" indentation could get benefits from sharing the whitespace-only text nodes.

    Real browser engines are very careful to share the style structs across elements if possible. Look for "style struct sharing" in "Inside a super fast CSS engine: Quantum CSS". This is going to take some good work in librsvg, but we can get there gradually.


  6. Exposing C and Rust APIs: some thoughts from librsvg

    - Tags: librsvg, rust

    Librsvg exports two public APIs: the C API that is in turn available to other languages through GObject Introspection, and the Rust API.

    You could call this a use of the facade pattern on top of the rsvg_internals crate. That crate is the actual implementation of librsvg, and it exports an interface with many knobs that are not exposed in the public APIs. The knobs exist to allow for the variations between those APIs.

    This post is about some interesting things that have come up during the creation/separation of those public APIs, and the implications of having an internals library that implements both.

    Initial code organization

    When librsvg was being ported to Rust, it just had an rsvg_internals crate that compiled as a staticlib to a .a library, which was later linked into the final librsvg.so.

    Eventually the code got to the point where it was feasible to port the toplevel C API to Rust. This was relatively easy to do, since everything else underneath was already in Rust. At that point I became interested in also having a Rust API for librsvg — first to port the test suite to Rust and be able to run tests in parallel, and then to actually have a public API in Rust with more modern idioms than the historical, GObject-based API in C.

    Version 2.45.5, from February 2019, is the last release that only had a C API.

    Most of the C API of librsvg is in the RsvgHandle class. An RsvgHandle gets loaded with SVG data from a file or a stream, and then gets rendered to a Cairo context. The naming of Rust source files more or less matched the C source files, so where there was rsvg-handle.c initially, later we had handle.rs with the Rustified part of that code.

    So, handle.rs had the Rust internals of the RsvgHandle class, and a bunch of extern "C" functions callable from C. For example, for this function in the public C API:

    void rsvg_handle_set_base_gfile (RsvgHandle *handle,
                                     GFile      *base_file);
    

    The corresponding Rust implementation was this:

    #[no_mangle]
    pub unsafe extern "C" fn rsvg_handle_rust_set_base_gfile(
        raw_handle: *mut RsvgHandle,
        raw_gfile: *mut gio_sys::GFile,
    ) {
        let rhandle = get_rust_handle(raw_handle);        // 1
    
        assert!(!raw_gfile.is_null());                    // 2
        let file: gio::File = from_glib_none(raw_gfile);  // 3
    
        rhandle.set_base_gfile(&file);                    // 4
    }
    
    1. Get the Rust struct corresponding to the C GObject.
    2. Check the arguments.
    3. Convert from C GObject reference to Rust reference.
    4. Call the actual implementation of set_base_gfile in the Rust struct.

    You can see that this function takes in arguments with C types, and converts them to Rust types. It's basically just glue between the C code and the actual implementation.

    Then, the actual implementation of set_base_gfile looked like this:

    impl Handle {
        fn set_base_gfile(&self, file: &gio::File) {
            if let Some(uri) = file.get_uri() {
                self.set_base_url(&uri);
            } else {
                rsvg_g_warning("file has no URI; will not set the base URI");
            }
        }
    }
    

    This is an actual method for a Rust Handle struct, and takes Rust types as arguments — no conversions are necessary here. However, there is a pesky call to rsvg_g_warning, about which I'll talk later.

    I found it cleanest, although not the shortest code, to structure things like this:

    • C code: bunch of stub functions where rsvg_blah just calls a corresponding rsvg_rust_blah.

    • Toplevel Rust code: bunch of #[no_mangle] unsafe extern "C" fn rust_blah() that convert from C argument types to Rust types, and call safe Rust functions — for librsvg, these happened to be methods for a struct. Before returning, the toplevel functions convert Rust return values to C return values, and do things like converting the Err(E) of a Result<> into a GError or a boolean or whatever the traditional C API required.

    In the very first versions of the code where the public API was implemented in Rust, the extern "C" functions actually contained their implementation. However, after some refactoring, it turned out to be cleaner to leave those functions just with the task of converting C to Rust types and vice-versa, and put the actual implementation in very Rust-y code. This made it easier to keep the unsafe conversion code (unsafe because it deals with raw pointers coming from C) only in the toplevel functions.

    Growing out a Rust API

    This commit is where the new, public Rust API started. That commit just created a Cargo workspace with two crates; the rsvg_internals crate that we already had, and a librsvg_crate with the public Rust API.

    The commits over the subsequent couple of months are of intense refactoring:

    • This commit moves the unsafe extern "C" functions to a separate c_api.rs source file. This leaves handle.rs with only the safe Rust implementation of the toplevel API, and c_api.rs with the unsafe entry points that mostly just convert argument types, return values, and errors.

    • The API primitives get expanded to allow for a public Rust API that is "hard to misuse" unlike the C API, which needs to be called in a certain order.

    Needing to call a C macro

    However, there was a little problem. The Rust code cannot call g_warning, a C macro in glib that prints a message to stderr or uses structured logging. Librsvg used that to signal conditions where something went (recoverably) wrong, but there was no way to return a proper error code to the caller — it's mainly used as a debugging aid.

    This is what rsvg_internals did so it could call that C macro:

    First, the C code exports a function that just calls the macro:

    /* This function exists just so that we can effectively call g_warning() from Rust,
     * since glib-rs doesn't bind the g_log functions yet.
     */
    void
    rsvg_g_warning_from_c(const char *msg)
    {
        g_warning ("%s", msg);
    }
    

    Second, the Rust code binds that function to be callable from Rust:

    pub fn rsvg_g_warning(msg: &str) {
        extern "C" {
            fn rsvg_g_warning_from_c(msg: *const libc::c_char);
        }
    
        unsafe {
            rsvg_g_warning_from_c(msg.to_glib_none().0);
        }
    }
    

    However! Since the standalone librsvg_crate does not link to the C code from the public librsvg.so, the helper rsvg_g_warning_from_c is not available!

    A configuration feature for the internals library

    And yet! Those warnings are only meaningful for the C API, which is not able to return error codes in all situations. However, the Rust API is able to do that, and so doesn't need the warnings printed to stderr. My first solution was to add a build-time option for whether the rsvg_internals library is being built for the C library or for the Rust one.

    In case we are building for the C library, the code calls rsvg_g_warning_from_c as usual.

    But in case we are building for the Rust library, that code is a no-op.

    This is the bit in rsvg_internals/Cargo.toml to declare the feature:

    [features]
    # Enables calling g_warning() when built as part of librsvg.so
    c-library = []
    

    And this is the corresponding code:

    #[cfg(feature = "c-library")]
    pub fn rsvg_g_warning(msg: &str) {
        unsafe {
            extern "C" {
                fn rsvg_g_warning_from_c(msg: *const libc::c_char);
            }
    
            rsvg_g_warning_from_c(msg.to_glib_none().0);
        }
    }
    
    #[cfg(not(feature = "c-library"))]
    pub fn rsvg_g_warning(_msg: &str) {
        // The only callers of this are in handle.rs. When those functions
        // are called from the Rust API, they can return a meaningful
        // error code, but the C API can't - so they issue a g_warning()
        // instead.
    }
    

    The first function is the one that is compiled when the c-library feature is enabled; this happens when building rsvg_internals to link into librsvg.so.

    The second function does nothing; it is what is compiled when rsvg_internals is being used just from the librsvg_crate crate with the Rust API.

    While this worked well, it meant that the internals library was built twice on each compilation run of the whole librsvg module: once for librsvg.so, and once for librsvg_crate.

    Making programming errors a g_critical

    While g_warning() means "something went wrong, but the program will continue", g_critical() means "there is a programming error". For historical reasons Glib does not abort when g_critical() is called, except by setting G_DEBUG=fatal-criticals, or by running a development version of Glib.

    This commit turned warnings into critical errors when the C API was called out of order, by using a similar rsvg_g_critical_from_c() wrapper for a C macro.

    Separating the C-callable code into yet another crate

    To recapitulate, at that point we had this:

    librsvg/
    |  Cargo.toml - declares the Cargo workspace
    |
    +- rsvg_internals/
    |  |  Cargo.toml
    |  +- src/
    |       c_api.rs - convert types and return values, call into implementation
    |       handle.rs - actual implementation
    |       *.rs - all the other internals
    |
    +- librsvg/
    |    *.c - stub functions that call into Rust
    |    rsvg-base.c - contains rsvg_g_warning_from_c() among others
    |
    +- librsvg_crate/
       |  Cargo.toml
       +- src/
       |    lib.rs - public Rust API
       +- tests/ - tests for the public Rust API
            *.rs
    

    At this point, c_api.rs with all the unsafe functions looked out of place. That code is only relevant to librsvg.so (the public C API), not to the Rust API in librsvg_crate.

    I started moving the C API glue to a separate librsvg_c_api crate that lives along with the C stubs:

    +- librsvg/
    |    *.c - stub functions that call into Rust
    |    rsvg-base.c - contains rsvg_g_warning_from_c() among others
    |    Cargo.toml
    |    c_api.rs - what we had before
    

    This made the dependencies look like the following:

          rsvg_internals
           ^           ^
           |             \
           |               \
    librsvg_crate     librsvg_c_api
      (Rust API)             ^
                             |
                        librsvg.so
                          (C API)
    

    And also, this made it possible to remove the configuration feature for rsvg_internals, since the code that calls rsvg_g_warning_from_c now lives in librsvg_c_api.

    With that, rsvg_internals is compiled only once, as it should be.

    This also helped clean up some code in the internals library. Deprecated functions that render SVGs directly to GdkPixbuf are now in librsvg_c_api and don't clutter the rsvg_internals library. All the GObject boilerplate is there as well now; rsvg_internals is mostly safe code except for the glue to libxml2.

    Summary

    It was useful to move all the code that deals with incoming C types and outgoing C return values and errors into the same place, and separate it from the "pure Rust" code.

    This took gradual refactoring and was not done in a single step, but it left the resulting Rust code rather nice and clean.

    When we added a new public Rust API, we had to shuffle some code around that could only be linked in the context of a C library.

    Compile-time configuration features are useful (like #ifdef in the C world), but they do cause double compilation if you need a C-internals and a Rust-internals library from the same code.

    Having proper error reporting throughout the Rust code is a lot of work, but pretty much invaluable. The glue code to C can then convert and expose those errors as needed.

    If you need both C and Rust APIs into the same code base, you may end up naturally using a facade pattern for each. It helps to gradually refactor the internals to be as "pure idiomatic Rust" as possible, while letting API idiosyncrasies bubble up to each individual facade.

  7. Moving gnome-shell's styles to Rust

    - Tags: gnome, gnome-shell, rust

    Gnome-shell uses CSS processing code that dates from HippoCanvas, a CSS-aware canvas from around 2006. It uses libcroco to parse CSS, and implements selector matching by hand in C.

    This code is getting rather dated, and libcroco is unmaintained.

    I've been reading the code for StTheme and StThemeNode, and it looks very feasible to port it gradually to Rust, by using the same crates that librsvg uses, and eventually removing libcroco altogether: gnome-shell is the last module that uses libcroco in distro packages.

    Strategy

    StTheme and StThemeNode use libcroco to load CSS stylesheets and keep them in memory. The values of individual properties are just tokenized and kept around as a linked list of CRTerm; this struct represents a single token.

    Later, the drawing code uses functions like st_theme_node_lookup_color(node, "property_name") or st_theme_node_lookup_length() to query the various properties that it needs. It is then that the type of each property gets determined: prior to that step, property values are just tokenized, not parsed into usable values.

    I am going to start by porting the individual parsers to Rust, similar to what Paolo and I did for librsvg. It turns out that there's some code we can share.

    So far I have the parser for colors implemented in Rust. This removes a good chunk of code from the C parsers and replaces it with a little Rust, since the cssparser crate can already parse CSS colors with alpha with no extra work — libcroco didn't support alpha.

    As a bonus, this supports hsl() colors in addition to rgb() ones out of the box!
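
    For illustration, parsing a color with cssparser looks roughly like this (a sketch; I am assuming the crate's API as of the 0.25 series):

    use cssparser::{Color, Parser, ParserInput};

    fn parse_color(s: &str) -> Option<Color> {
        let mut input = ParserInput::new(s);
        let mut parser = Parser::new(&mut input);
        Color::parse(&mut parser).ok()
    }

    fn main() {
        // rgb()/rgba() and hsl()/hsla() all work out of the box.
        println!("{:?}", parse_color("rgba(255, 0, 0, 0.5)"));
        println!("{:?}", parse_color("hsl(120, 50%, 50%)"));
    }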

    After all the parsers are done, the next step would be to convert the representation of complete stylesheets into pure Rust code.

    What can we expect?

    A well-maintained CSS stack. Firefox and Servo both use the crates in question, so librsvg and gnome-shell should get maintenance of a robust CSS stack "for free", for the foreseeable future.

    Speed. Caveat: I have no profile data for gnome-shell yet, so I don't know how much time it spends doing CSS parsing and cascading, but it looks like the Rust version has a good chance of being more efficient.

    The selectors crate has some very interesting optimizations from Mozilla Servo, and it is also now used in Firefox. It supports doing selector matching using Bloom filters, and can also avoid re-cascading child nodes if a change to a parent would not cause its children to change.

    All the parsing is done with zero-copy parsers thanks to Rust's string slices; without so many malloc() calls in the parsing code path, the parsing stage should really fly.

    More CSS features. The selectors crate can do matching on basically all kinds of selectors as defined by recent CSS specs; one just has to provide the correct hooks into the calling code's representation of the DOM tree. The kind of matching that StTheme can do is somewhat limited; the rustification should make it match much more closely to what people expect from CSS engines in web browsers.

    A well-defined model of property inheritance. StThemeNode's model for CSS property inheritance is a bit ad-hoc and inconsistent. I haven't quite tested it, but from looking at the code, it seems that not all properties get inherited in the same way. I hope to move it to something closer to what librsvg already does, which should make it match people's expectations from the web.

    In the meantime

    I have a merge request ready to simply move the libcroco source code directly inside gnome-shell's source tree. This should let distros remove their libcroco package as soon as possible. That MR does not require Rust yet.

    My playground is here:

    This does not compile yet! I'll plug things together tomorrow.

    (Oh, yes, the project to redo Firefox's CSS stack in Rust used to be called Stylo. I'm calling this Stylish, as in Styles for the Shell.)

  8. Refactoring the Length type

    - Tags: librsvg, rust

    CSS length values have a number and a unit, e.g. 5cm or 6px. Sometimes the unit is a percentage, like 50%, and SVG says that lengths with percentage units should be resolved with respect to a certain rectangle. For example, consider this circle element:

    <circle cx="50%" cy="75%" r="4px" fill="black"/>
    

    This means: draw a solid black circle whose center is at 50% of the width and 75% of the height of the current viewport. The circle should have a 4-pixel radius.

    The process of converting that kind of units into absolute pixels for the final drawing is called normalization. In SVG, percentage units sometimes need to be normalized with respect to the current viewport (a local coordinate system), or with respect to the size of another object (e.g. when a clipping path is used to cut the current shape in half).

    One detail about normalization is that it can be with respect to the horizontal dimension of the current viewport, the vertical dimension, or both. Keep this in mind: at normalization time, we need to be able to distinguish between those three modes.
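
    In Rust terms, the goal is something like this sketch (the names are assumptions, not librsvg's final types). SVG defines the "both" case as a percentage of the normalized diagonal, sqrt(width² + height²) / sqrt(2):

    enum LengthDir {
        Horizontal,
        Vertical,
        Both,
    }

    struct Viewport {
        width: f64,
        height: f64,
    }

    // Normalize a percentage (e.g. 0.5 for "50%") against the viewport.
    fn normalize_percent(fraction: f64, dir: LengthDir, vp: &Viewport) -> f64 {
        match dir {
            LengthDir::Horizontal => fraction * vp.width,
            LengthDir::Vertical => fraction * vp.height,
            LengthDir::Both => fraction * vp.width.hypot(vp.height) / std::f64::consts::SQRT_2,
        }
    }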

    The original C version

    I have talked about the original C code for lengths before; the following is a small summary.

    The original C code had this struct to represent lengths:

    typedef struct {
        double length;
        char factor;
    } RsvgLength;
    

    The parsing code would set the factor field to a character depending on the length's unit: 'p' for percentages, 'i' for inches, etc., and '\0' for the default unit, which is pixels.

    Along with that, the normalization code needed to know the direction (horizontal, vertical, both) to which the length in question refers. It did this by taking another character as an argument to the normalization function:

    double
    _rsvg_css_normalize_length (const RsvgLength * in, RsvgDrawingCtx * ctx, char dir)
    {
        if (in->factor == '\0')            /* pixels, no need to normalize */
            return in->length;
        else if (in->factor == 'p') {      /* percentages; need to consider direction */
            if (dir == 'h')                                     /* horizontal */
                return in->length * ctx->vb.rect.width;
            if (dir == 'v')                                     /* vertical */
                return in->length * ctx->vb.rect.height;
            if (dir == 'o')                                     /* both */
                return in->length * rsvg_viewport_percentage (ctx->vb.rect.width,
                                                              ctx->vb.rect.height);
        } else { ... }
    }
    

    The original post talks about how I found a couple of bugs with how the directions are identified at normalization time. The function above expects one of 'h'/'v'/'o' for horizontal/vertical/both, and one or two places in the code passed the wrong character.

    Making the C version cleaner

    Before converting that code to Rust, I removed the pesky characters and made the code use proper enums to identify a length's units.

    +typedef enum {
    +    LENGTH_UNIT_DEFAULT,
    +    LENGTH_UNIT_PERCENT,
    +    LENGTH_UNIT_FONT_EM,
    +    LENGTH_UNIT_FONT_EX,
    +    LENGTH_UNIT_INCH,
    +    LENGTH_UNIT_RELATIVE_LARGER,
    +    LENGTH_UNIT_RELATIVE_SMALLER
    +} LengthUnit;
    +
     typedef struct {
         double length;
    -    char factor;
    +    LengthUnit unit;
     } RsvgLength;
    

    Then I did the same for the normalization function, so that it gets the direction in which to normalize as an enum instead of a char.

    +typedef enum {
    +    LENGTH_DIR_HORIZONTAL,
    +    LENGTH_DIR_VERTICAL,
    +    LENGTH_DIR_BOTH
    +} LengthDir;
    
     double
    -_rsvg_css_normalize_length (const RsvgLength * in, RsvgDrawingCtx * ctx, char dir)
    +_rsvg_css_normalize_length (const RsvgLength * in, RsvgDrawingCtx * ctx, LengthDir dir)
    

    Making the C version easier to get right

    While doing the last change above, I found a place in the code that used the wrong direction by mistake, probably due to a cut&paste error. Part of the problem here is that the code was specifying the direction at normalization time.

    I decided to change it so that each length value carried its own direction from initialization onwards, so that subsequent code wouldn't have to worry about it. Hopefully, initializing a width field would make it obvious that it needed LENGTH_DIR_HORIZONTAL.

     typedef struct {
         double length;
         LengthUnit unit;
    +    LengthDir dir;
     } RsvgLength;
    

    That is, so that instead of

      /* at initialization time */
      foo.width = _rsvg_css_parse_length (str);
    
      ...
    
      /* at rendering time */
      double final_width = _rsvg_css_normalize_length (&foo.width, ctx, LENGTH_DIR_HORIZONTAL);
    

    we would instead do this:

      /* at initialization time */
      foo.width = _rsvg_css_parse_length (str, LENGTH_DIR_HORIZONTAL);
    
      ...
    
      /* at rendering time */
      double final_width = _rsvg_css_normalize_length (&foo.width, ctx);
    

    This made the drawing code, which deals with a lot of coordinates at the same time, a lot less noisy.

    Initial port to Rust

    To recap, this was the state of the structs after the initial refactoring in C:

    typedef enum {
        LENGTH_UNIT_DEFAULT,
        LENGTH_UNIT_PERCENT,
        LENGTH_UNIT_FONT_EM,
        LENGTH_UNIT_FONT_EX,
        LENGTH_UNIT_INCH,
        LENGTH_UNIT_RELATIVE_LARGER,
        LENGTH_UNIT_RELATIVE_SMALLER
    } LengthUnit;
    
    typedef enum {
        LENGTH_DIR_HORIZONTAL,
        LENGTH_DIR_VERTICAL,
        LENGTH_DIR_BOTH
    } LengthDir;
    
    typedef struct {
        double length;
        LengthUnit unit;
        LengthDir dir;
    } RsvgLength;
    

    This ported to Rust in a straightforward fashion:

    pub enum LengthUnit {
        Default,
        Percent,
        FontEm,
        FontEx,
        Inch,
        RelativeLarger,
        RelativeSmaller
    }
    
    pub enum LengthDir {
        Horizontal,
        Vertical,
        Both
    }
    
    pub struct RsvgLength {
        length: f64,
        unit: LengthUnit,
        dir: LengthDir
    }
    

    It got a similar constructor that took the direction and produced an RsvgLength:

    impl RsvgLength {
        pub fn parse (string: &str, dir: LengthDir) -> RsvgLength { ... }
    }
    

    (This was before using Result; remember that the original C code did very little error checking!)
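
    Hypothetical call sites for that constructor would have looked like this:

    let cx = RsvgLength::parse ("50%", LengthDir::Horizontal);
    let cy = RsvgLength::parse ("75%", LengthDir::Vertical);
    let r  = RsvgLength::parse ("4px", LengthDir::Both);
    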

    The initial Parse trait

    It was at that point that it seemed convenient to introduce a Parse trait, which all CSS value types would implement to parse themselves from a string.

    However, parsing an RsvgLength also needed an extra piece of data, the LengthDir. My initial version of the Parse trait had an associated type called Data, through which one could pass an extra piece of data during parsing/initialization:

    pub trait Parse: Sized {
        type Data;
        type Err;
    
        fn parse (s: &str, data: Self::Data) -> Result<Self, Self::Err>;
    }
    

    This was explicitly to be able to pass a LengthDir to the parser for RsvgLength:

    impl Parse for RsvgLength {
        type Data = LengthDir;
        type Err = AttributeError;
    
        fn parse (string: &str, dir: LengthDir) -> Result <RsvgLength, AttributeError> { ... }
    }
    

    This was okay for lengths, but very noisy for everything else that didn't require an extra bit of data. In the rest of the code, the associated type was Data = (), and there was a pair of extra parentheses () in every place that parse() was called.
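
    For example (hypothetical call sites; Opacity stands in for any type that needed no extra data):

    // Every type with no extra data still dragged along an empty tuple:
    let opacity = Opacity::parse ("0.5", ())?;
    
    // Only lengths actually used the extra datum:
    let width = RsvgLength::parse ("50%", LengthDir::Horizontal)?;
    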

    Removing the helper Data type

    Introducing one type per direction

    Over a year later, that () bit of data everywhere was driving me nuts. I started refactoring the Length module to remove it.

    First, I introduced three newtypes to wrap Length, and indicate their direction at the same time:

    pub struct LengthHorizontal(Length);
    pub struct LengthVertical(Length);
    pub struct LengthBoth(Length);
    

    This was done with a macro because now each wrapper type needed to know the relevant LengthDir.
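
    The post doesn't show that original macro; its expansion would have been along these lines (a sketch only; parse_internal is a hypothetical helper that still takes the direction explicitly):

    macro_rules! define_length_type {
        ($name:ident, $dir:expr) => {
            pub struct $name(Length);
    
            impl $name {
                pub fn parse (string: &str) -> Result<$name, AttributeError> {
                    // The wrapper supplies its hard-coded LengthDir,
                    // so callers no longer have to pass one.
                    Ok($name(Length::parse_internal (string, $dir)?))
                }
            }
        };
    }
    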

    Now, for example, the declaration for the <circle> element looked like this:

    pub struct NodeCircle {
        cx: Cell<LengthHorizontal>,
        cy: Cell<LengthVertical>,
        r: Cell<LengthBoth>,
    }
    

    (Ignore the Cell everywhere; we got rid of that later.)

    Removing the dir field

    Now that the information about a length's direction was embodied in the LengthHorizontal/LengthVertical/LengthBoth types, it became possible to remove the dir field from the inner Length struct.

     pub struct RsvgLength {
         length: f64,
         unit: LengthUnit,
    -    dir: LengthDir
     }
    
    +pub struct LengthHorizontal(Length);
    +pub struct LengthVertical(Length);
    +pub struct LengthBoth(Length);
    +
    +define_length_type!(LengthHorizontal, LengthDir::Horizontal);
    +define_length_type!(LengthVertical, LengthDir::Vertical);
    +define_length_type!(LengthBoth, LengthDir::Both);
    

    Note the use of a define_length_type! macro to generate code for those three newtypes.

    Removing the Data associated type

    And finally, this made it possible to remove the Data associated type from the Parse trait.

     pub trait Parse: Sized {
    -    type Data;
         type Err;
    
    -    fn parse(parser: &mut Parser<'_, '_>, data: Self::Data) -> Result<Self, Self::Err>;
    +    fn parse(parser: &mut Parser<'_, '_>) -> Result<Self, Self::Err>;
     }
    

    The resulting mega-commit removed a bunch of stray parentheses () from all calls to parse(), and the code ended up a lot easier to read.

    Removing the newtypes

    This was fine for a while. Recently, however, I figured out that it would be possible to embody the information for a length's direction in a different way.

    But to get there, I first needed a temporary refactor.

    Replacing the macro with a trait with a default implementation

    Deep in the guts of length.rs, the key function that does something different based on LengthDir is its scaling_factor method:

    enum LengthDir {
        Horizontal,
        Vertical,
        Both,
    }
    
    impl LengthDir {
        fn scaling_factor(self, x: f64, y: f64) -> f64 {
            match self {
                LengthDir::Horizontal => x,
                LengthDir::Vertical => y,
                LengthDir::Both => viewport_percentage(x, y),
            }
        }
    }
    

    That method gets passed, for example, the width/height of the current viewport for the x/y arguments. The method decides whether to use the width, height, or a combination of both.
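
    The viewport_percentage() helper used for the Both case is not shown above; per the SVG spec, percentages that refer to both dimensions are resolved against the viewport's diagonal. A sketch of such a helper:

    fn viewport_percentage(x: f64, y: f64) -> f64 {
        // SVG 1.1, section 7.10: sqrt(width² + height²) / sqrt(2)
        (x * x + y * y).sqrt() / std::f64::consts::SQRT_2
    }
    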

    And of course, the interesting part of the define_length_type! macro was to generate code that calls scaling_factor() on the appropriate LengthDir variant.

    First I made a trait called Orientation with a scaling_factor method, and three zero-sized types that implement that trait. Note how each of these three implementations corresponds to one of the match arms above:

    pub trait Orientation {
        fn scaling_factor(x: f64, y: f64) -> f64;
    }
    
    pub struct Horizontal;
    pub struct Vertical;
    pub struct Both;
    
    impl Orientation for Horizontal {
        fn scaling_factor(x: f64, _y: f64) -> f64 {
            x
        }
    }
    
    impl Orientation for Vertical {
        fn scaling_factor(_x: f64, y: f64) -> f64 {
            y
        }
    }
    
    impl Orientation for Both {
        fn scaling_factor(x: f64, y: f64) -> f64 {
            viewport_percentage(x, y)
        }
    }
    

    Now most of the contents of the define_length_type! macro can go into a default method implementation in a new trait, LengthTrait. Crucially, this trait has an Orientation associated type, which it uses to call into the Orientation trait:

    pub trait LengthTrait: Sized {
        type Orientation: Orientation;
    
        ...
    
        fn normalize(&self, values: &ComputedValues, params: &ViewParams) -> f64 {
            match self.unit() {
                LengthUnit::Px => self.length(),
    
                LengthUnit::Percent => {
                    self.length() *
                    <Self::Orientation>::scaling_factor(params.view_box_width, params.view_box_height)
                }
    
                ...
        }
        }
    }
    

    Note that the incantation is <Self::Orientation>::scaling_factor(...) to call that method on the associated type.

    Now the define_length_type! macro is shrunk a lot, with the interesting part being just this:

    macro_rules! define_length_type {
        {$name:ident, $orient:ty} => {
            pub struct $name(Length);
    
            impl LengthTrait for $name {
                type Orientation = $orient;
            }
        }
    }
    
    define_length_type! { LengthHorizontal, Horizontal }
    define_length_type! { LengthVertical, Vertical }
    define_length_type! { LengthBoth, Both }
    

    We moved from having three newtypes of length-with-LengthDir to three newtypes with dir-as-associated-type.

    Removing the newtypes and the macro

    After that temporary refactoring, we had the Orientation trait and the three zero-sized types Horizontal, Vertical, Both.

    I figured out that one can use PhantomData as a way to carry around the type that Length needs to normalize itself, instead of using an associated type in an extra LengthTrait. Behold!

    pub struct Length<O: Orientation> {
        pub length: f64,
        pub unit: LengthUnit,
        orientation: PhantomData<O>,
    }
    
    impl<O: Orientation> Length<O> {
        pub fn normalize(&self, values: &ComputedValues, params: &ViewParams) -> f64 {
            match self.unit {
                LengthUnit::Px => self.length,
    
                LengthUnit::Percent => {
                    self.length 
                        * <O as Orientation>::scaling_factor(params.view_box_width, params.view_box_height)
                }
    
                ...
            }
        }
    }
    

    Now the incantation is <O as Orientation>::scaling_factor() to call the method on the generic type; it is no longer an associated type in a trait.

    With that, users of lengths look like this; here, our <circle> element from before:

    pub struct Circle {
        cx: Length<Horizontal>,
        cy: Length<Vertical>,
        r: Length<Both>,
    }
    

    I'm very happy with the readability of all the code now. I used to think of PhantomData as a way to deal with wrapping pointers from C, but it turns out that it is also useful to keep a generic type around should one need it.

    The final Length struct is this:

    pub struct Length<O: Orientation> {
        pub length: f64,
        pub unit: LengthUnit,
        orientation: PhantomData<O>,
    }
    

    And it only takes up as much space as its length and unit fields; PhantomData is zero-sized after all.
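
    Constructing such a length doesn't require an orientation value at all; a hypothetical constructor just writes PhantomData in the field, and callers pick O via type annotation or inference:

    impl<O: Orientation> Length<O> {
        pub fn new(length: f64, unit: LengthUnit) -> Length<O> {
            Length {
                length,
                unit,
                orientation: PhantomData, // zero-sized; exists only at the type level
            }
        }
    }
    
    // let r: Length<Both> = Length::new(4.0, LengthUnit::Px);
    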

    (Later, we renamed Orientation to Normalize, but the code's structure remained the same.)

    Summary

    Over a couple of years, librsvg's type that represents CSS lengths went from a C representation along the lines of "all data in the world is an int", to a Rust representation that uses some interesting type trickery:

    • C struct with char for units.

    • C struct with a LengthUnit enum.

    • C struct without an embodied direction; each place that needs to normalize needs to get the orientation right.

    • C struct with a built-in direction as an extra field, done at initialization time.

    • Same struct but in Rust.

    • An ugly but workable Parse trait so that the direction can be set at parse/initialization time.

    • Three newtypes LengthHorizontal, LengthVertical, LengthBoth with a common core. A cleaned-up Parse trait. A macro to generate those newtypes.

    • Replace the LengthDir enum with an Orientation trait, and three zero-sized types Horizontal/Vertical/Both that implement the trait.

    • Replace most of the macro with a helper trait LengthTrait that has an Orientation associated type.

    • Replace the helper trait with a single Length<T: Orientation> type, which puts the orientation as a generic parameter. The macro disappears and there is a single implementation for everything.

    Refactoring never ends!

  9. CSS in librsvg is now in Rust, courtesy of Mozilla Servo

    - Tags: gnome, librsvg, rust

    Summary: after an epic amount of refactoring, librsvg now does all CSS parsing and matching in Rust, without using libcroco. In addition, the CSS engine comes from Mozilla Servo, so it should be able to handle much more complex CSS than librsvg ever could before.

    This is the story of CSS support in librsvg.

    Introduction

    The first commit to introduce CSS parsing in librsvg dates from 2002. It was as minimal as possible, written to support a small subset of what was then CSS2.

    Librsvg handled CSS stylesheets more by "piecing them apart" than by actually parsing them. You know, when g_strsplit() is your best friend. The basic parsing algorithm was to turn a stylesheet like this:

    rect { fill: blue; }
    
    .classname {
        fill: green;
        stroke-width: 4;
    }
    

    Into a hash table whose keys are strings like rect and .classname, and whose values are everything inside curly braces.

    The selector matching phase was equally simple. The code only handled a few possible match types, as follows. To match a certain kind of CSS selector, it would ask, "what would this selector look like in CSS syntax?", build a string with that syntax, and compare it to the key strings it had stored in the hash table from above.

    So, to match an element name selector, it would sprintf("%s", element->name), obtain something like rect and see if the hash table had such a key.

    To match a class selector, it would sprintf(".%s", element->class), obtain something like .classname, and look it up in the hash table.

    This scheme supported only a few combinations. It handled tag, .class, tag.class, and a few combinations with #id in them. This was enough to support very simple stylesheets.

    The value corresponding to each key in the hash table was the stuff between curly braces in the stylesheet, so the second rule from the example above would contain fill: green; stroke-width: 4;. Once librsvg decided that an SVG element matched that CSS rule, it would re-parse the string with the CSS properties and apply them to the element's style.
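
    In today's Rust terms, the whole scheme boiled down to something like this (a sketch of the algorithm, not the original C code):

    use std::collections::HashMap;
    
    // selector string -> raw declaration text from between the curly braces
    type Stylesheet = HashMap<String, String>;
    
    fn matching_declarations<'a>(
        sheet: &'a Stylesheet,
        element_name: &str,
        class: Option<&str>,
    ) -> Vec<&'a str> {
        // Build the strings that the supported selectors would look like...
        let mut candidates = vec![element_name.to_string()];
        if let Some(c) = class {
            candidates.push(format!(".{}", c));
            candidates.push(format!("{}.{}", element_name, c));
        }
    
        // ...and simply look them up as hash-table keys.
        candidates
            .iter()
            .filter_map(|key| sheet.get(key))
            .map(|s| s.as_str())
            .collect()
    }
    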

    I'm amazed that so little code was enough to deal with a good number of SVG files with stylesheets. I suspect that this was due to a few things:

    • While people were using complex CSS in HTML all the time, it was less common for SVG...

    • ... because CSS2 was somewhat new, and the SVG spec was still being written...

    • ... and SVGs created with illustration programs don't really use stylesheets; they include the full style information inside each element instead of symbolically referencing it from a stylesheet.

    From the kinds of bugs librsvg has received about its CSS support being too limited, it feels like SVGs which use CSS features are either hand-written, or machine-generated by custom programs like data-plotting software. Illustration programs tend to list all style properties explicitly in each SVG element, and don't use CSS.

    Libcroco appears

    The first commit to introduce libcroco, from March 2003, used it to do CSS parsing.

    At the same time, libcroco was introducing code to do CSS matching. However, this code never got used in librsvg, which kept its simple string-based matcher. Maybe libcroco's API was not ready?

    Libcroco fell out of maintainership around the first half of 2005, and volunteers have kept fixing it since then.

    Problems with librsvg's string matcher for CSS

    The C implementation of CSS matching in librsvg remained basically untouched until 2018, when Paolo Borelli and I started porting the surrounding code to Rust.

    I had a lot of trouble figuring out the concepts from the code. I didn't know all the terminology of CSS implementations, and librsvg didn't use it, either.

    I think that librsvg's code suffered from what the refactoring literature calls primitive obsession. Instead of having a parsed representation of CSS selectors, librsvg just stored a stringified version of them. So, a selector like rect#classname really was stored with a string like that, instead of an actual decomposition into structs.

    Moreover, things were misnamed. This is the field that stored stylesheet data inside an RsvgHandle:

        GHashTable *css_props;
    

    From just looking at the field declaration, this doesn't tell me anything about what kind of data is stored there. One has to grep the source code for where that field is used:

    static void
    rsvg_css_define_style (RsvgHandle * ctx,
                           const gchar * selector,
                           const gchar * style_name,
                           const gchar * style_value,
                           gboolean important)
    {
        GHashTable *styles;
    
        styles = g_hash_table_lookup (ctx->priv->css_props, selector);
    

    Okay, it looks up a selector by name in the css_props, and it gives back... another hash table styles? What's in there?

            g_hash_table_insert (styles,
                                 g_strdup (style_name),
                                 style_value_data_new (style_value, important));
    

    Another string key called style_name, whose value is a StyleValueData; what's in it?

    typedef struct _StyleValueData {
        gchar *value;
        gboolean important;
    } StyleValueData;
    

    The value is another string. Strings all the way!

    At the time, I didn't really figure out what each level of nested hash tables was supposed to mean. I didn't understand why we handled style properties in a completely different part of the code, and yet this part had a css_props field that didn't seem to store properties at all.

    It took a while to realize that css_props was misnamed. It wasn't storing a mapping of selector names to properties; it was storing a mapping of selector names to declaration lists, which are lists of property/value pairs.

    So, when I started porting the CSS parsing code to Rust, I started to create real types for each concept:

    // Maps property_name -> Declaration
    type DeclarationList = HashMap<String, Declaration>;
    
    pub struct CssStyles {
        selectors_to_declarations: HashMap<String, DeclarationList>,
    }
    

    Even though the keys of those HashMaps are still strings, because librsvg didn't have a better way to represent their corresponding concepts, at least those declarations let one see what the hell is being stored without grepping the rest of the code. This is a part of the code that I didn't really touch very much, so it was nice to have that reminder.

    The first port of the CSS matching code to Rust kept the same algorithm as the C code, the one that created strings with element.class and compared them to the stored selector names. Ugly, but it still worked in the same limited fashion.

    Rustifying the CSS parsers

    It turns out that CSS parsing is divided into two parts. One can have a style attribute inside an element, for example:

    <rect x="0" y="0" width="100" height="100"
          style="fill: green; stroke: magenta; stroke-width: 4;"/>
    

    This is a plain declaration list which is not associated with any selectors; it is applied directly to just the element in which it appears.

    Then, there is the <style> element itself, with a normal-looking CSS stylesheet:

    <style type="text/css">
      rect {
        fill: green;
        stroke: magenta;
        stroke-width: 4;
      }
    </style>
    

    This means that all <rect> elements will get that style applied.

    I started to look for existing Rust crates to parse and handle CSS data. The cssparser and selectors crates come from Mozilla, so I thought they should do a pretty good job of things.

    And they do! Except that they are not a drop-in replacement for anything. They are what gets used in Mozilla's Servo browser engine, so they are optimized to hell, and the code can be pretty intimidating.

    Out of the box, cssparser provides a CSS tokenizer, but it does not know how to handle any properties/values in particular. One must use the tokenizer to implement a parser for each kind of CSS property one wants to support — Servo has mountains of code for all of HTML's style properties, and librsvg had to provide a smaller mountain of code for SVG style properties.

    Thus started the big task of porting librsvg's string-based parsers for CSS properties into ones based on cssparser tokens. Cssparser provides a Parser struct, which extracts tokens out of a CSS stream. Out of this, librsvg defines a Parse trait for parsable things:

    use cssparser::Parser;
    
    pub trait Parse: Sized {
        type Err;
    
        fn parse(parser: &mut Parser<'_, '_>) -> Result<Self, Self::Err>;
    }
    

    What's with those two default lifetimes in Parser<'_, '_>? Cssparser tries very hard to be a zero-copy tokenizer. One of the lifetimes refers to the input string which is wrapped in a Tokenizer, which is wrapped in a ParserInput. The other lifetime is for the ParserInput itself.
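
    In code, the nesting looks roughly like this (assuming cssparser's ParserInput/Parser API; version details may differ):

    use cssparser::{Parser, ParserInput};
    
    let mut input = ParserInput::new("4px");  // borrows the input string
    let mut parser = Parser::new(&mut input); // borrows the ParserInput
    
    // The returned token borrows from the original string, hence the 'i lifetime.
    match parser.next() {
        Ok(token) => println!("{:?}", token), // e.g. Dimension { value: 4.0, unit: "px", .. }
        Err(e) => println!("error: {:?}", e),
    }
    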

    In the actual implementation of that trait, the Err type also uses the lifetime that refers to the input string. For example, there is a BasicParseErrorKind::UnexpectedToken(Token<'i>), which one returns when there is an unexpected token. And to avoid copying the substring into the error, one returns a slice reference into the original string, thus the lifetime.

    I was more of a Rust newbie back then, and it was very hard to make sense of how cssparser was meant to be used.

    The process was more or less this:

    • Port the C parsers to Rust; implement types for each CSS property.

    • Port the &str-based parsers into ones that use cssparser.

    • Fix the error handling scheme to match what cssparser's high-level traits expect.

    This last point was... hard. Again, I wasn't comfortable enough with Rust lifetimes and nested generics; in the end it was all right.

    Moving declaration lists to Rust

    With the individual parsers for CSS properties done, and with them already using a different type for each property, the next thing was to implement cssparser's traits to parse declaration lists.

    Again, a declaration list looks like this:

    fill: blue;
    stroke-width: 4;
    

    It's essentially a key/value list.

    The trait that cssparser wants us to implement is this:

    pub trait DeclarationParser<'i> {
        type Declaration;
        type Error: 'i;
    
        fn parse_value<'t>(
            &mut self,
            name: CowRcStr<'i>,
            input: &mut Parser<'i, 't>,
        ) -> Result<Self::Declaration, ParseError<'i, Self::Error>>;
    }
    

    That is, define a type for a Declaration, and implement a parse_value() method that takes a name and a Parser, and outputs a Declaration or an error.

    What this really means is that the type you implement for Declaration needs to be able to represent all the CSS property types that you care about. Thus, a struct plus a big enum like this:

    pub struct Declaration {
        pub prop_name: String,
        pub property: ParsedProperty,
        pub important: bool,
    }
    
    pub enum ParsedProperty {
        BaselineShift(SpecifiedValue<BaselineShift>),
        ClipPath(SpecifiedValue<ClipPath>),
        ClipRule(SpecifiedValue<ClipRule>),
        Color(SpecifiedValue<Color>),
        ColorInterpolationFilters(SpecifiedValue<ColorInterpolationFilters>),
        Direction(SpecifiedValue<Direction>),
        ...
    }
    
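
    An implementation of parse_value() then dispatches on the property name, roughly like this (a sketch: DeclParser, ValueErrorKind, and SpecifiedValue::parse stand in for the real plumbing):

    use cssparser::{CowRcStr, DeclarationParser, ParseError, Parser};
    
    // Hypothetical error type for unparsable values.
    pub enum ValueErrorKind { UnknownProperty }
    
    struct DeclParser;
    
    impl<'i> DeclarationParser<'i> for DeclParser {
        type Declaration = Declaration;
        type Error = ValueErrorKind;
    
        fn parse_value<'t>(
            &mut self,
            name: CowRcStr<'i>,
            input: &mut Parser<'i, 't>,
        ) -> Result<Declaration, ParseError<'i, ValueErrorKind>> {
            // Each arm hands the token stream to that property's own parser.
            let property = match &*name {
                "clip-rule" => ParsedProperty::ClipRule(SpecifiedValue::parse(input)?),
                "color" => ParsedProperty::Color(SpecifiedValue::parse(input)?),
                // ... one arm per supported property ...
                _ => return Err(input.new_custom_error(ValueErrorKind::UnknownProperty)),
            };
    
            Ok(Declaration {
                prop_name: name.to_string(),
                property,
                important: false, // !important handling elided here
            })
        }
    }
    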

    This gives us declaration lists (the stuff inside curly braces in a CSS stylesheet), but it doesn't give us qualified rules, which are composed of selector names plus a declaration list.

    Refactoring towards real CSS concepts

    Paolo Borelli has been steadily refactoring librsvg and fixing things like the primitive obsession I mentioned above. We now have real concepts like a Document, Stylesheet, QualifiedRule, Rule, AtRule.

    This refactoring took a long time, because it involved redoing the XML loading code and its interaction with the CSS parser a few times.

    Implementing traits from the selectors crate

    The selectors crate contains Servo's code for parsing CSS selectors and doing matching. However, it is extremely generic. Using it involves implementing a good number of concepts.

    For example, this SelectorImpl trait has no methods, and is just a collection of types that refer to your implementation of an element tree. How do you represent an attribute/value? How do you represent an identifier? How do you represent a namespace and a local name?

    pub trait SelectorImpl {
        type ExtraMatchingData: ...;
        type AttrValue: ...;
        type Identifier: ...;
        type ClassName: ...;
        type PartName: ...;
        type LocalName: ...;
        type NamespaceUrl: ...;
        type NamespacePrefix: ...;
        type BorrowedNamespaceUrl: ...;
        type BorrowedLocalName: ...;
        type NonTSPseudoClass: ...;
        type PseudoElement: ...;
    }
    

    A lot of those can be String, but Servo has smarter things in store. I ended up using the markup5ever crate, which provides a string interning framework for markup and XML concepts like a LocalName, a Namespace, etc. This reduces memory consumption, because instead of storing string copies of element names everywhere, one just stores tokens for interned strings.
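
    For instance (a sketch assuming markup5ever's local_name! macro and its LocalName atom type):

    use markup5ever::{local_name, LocalName};
    
    let a: LocalName = local_name!("rect"); // interned at compile time
    let b = LocalName::from("rect");        // interned at runtime; same atom
    
    // Equality compares the interned tokens, not the string bytes.
    assert_eq!(a, b);
    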

    (In the meantime I had to implement support for XML namespaces, which the selectors code really wants, but which librsvg never supported.)

    Then, the selectors crate wants you to say how your code implements an element tree. It has a monster trait Element:

    pub trait Element {
        type Impl: SelectorImpl;
    
        fn opaque(&self) -> OpaqueElement;
    
        fn parent_element(&self) -> Option<Self>;
    
        fn parent_node_is_shadow_root(&self) -> bool;
    
        ...
    
        fn prev_sibling_element(&self) -> Option<Self>;
        fn next_sibling_element(&self) -> Option<Self>;
    
        fn has_local_name(
            &self, 
            local_name: &<Self::Impl as SelectorImpl>::BorrowedLocalName
        ) -> bool;
    
        fn has_id(
            &self,
            id: &<Self::Impl as SelectorImpl>::Identifier,
            case_sensitivity: CaseSensitivity,
        ) -> bool;
    
        ...
    }
    

    That is, when you provide an implementation of Element and SelectorImpl, the selectors crate will know how to navigate your element tree and ask it questions like, "does this element have the id #foo?"; "does this element have the name rect?". It makes perfect sense in the end, but it is quite intimidating when you are not 100% comfortable with webs of traits and associated types and generics with a bunch of trait bounds!

    I tried implementing that trait twice in the last year, and failed. It turns out that its API needed a key fix that landed last June, but I didn't notice until a couple of weeks ago.

    So?

    Two days ago, Paolo and I committed the last code to be able to completely replace libcroco.

    And, after implementing CSS specificity (which was easy now that we have real CSS concepts and a good pipeline for the CSS cascade), a bunch of very old bugs started falling down (1 2 3 4 5 6).

    Now it is going to be easy to implement things like letting the application specify a user stylesheet. In particular, this should let GTK remove the rather egregious hack it has to recolor SVG icons while using librsvg indirectly.

    Conclusion

    This will appear in librsvg 2.47.1 — that version will no longer require libcroco.

    As far as I know, the only module that still depends on libcroco (in GNOME or otherwise) is gnome-shell. It uses libcroco to parse CSS and get the basic structure of selectors so it can implement matching by hand.

    Gnome-shell has some code which looks awfully similar to what librsvg had when it was written in C:

    • StTheme has the high-level CSS stylesheet parser and the selector matching code.

    • StThemeNode has the low-level CSS property parsers.

    ... and it turns out that those files come all the way from HippoCanvas, the CSS-aware canvas that Mugshot used! Mugshot was a circa-2006 pre-Facebook aggregator for social media data like blogs, Flickr pictures, etc. HippoCanvas also got used in Sugar, the GUI for One Laptop Per Child. Yes, our code is that old.

    Libcroco is unmaintained, and has outstanding CVEs. I would be very happy to assist someone in porting gnome-shell's CSS code to Rust :)

  10. Gdk-pixbuf modules - call for help

    - Tags: gdk-pixbuf

    I've been doing a little refactoring of gdk-pixbuf's crufty code, to see if the gripes from my braindump can be solved. For things where it is not obvious how to proceed, I've started taking more detailed notes in a gdk-pixbuf survey.

    Today I was looking at which gdk-pixbuf modules are implemented by third parties, that is, which external projects provide their own image codecs pluggable into gdk-pixbuf.

    And there are not that many!

    The only four that I found are libheif, libopenraw, libwmf, librsvg (this last one, of course).

    Update 2019/Sep/12 - Added apng, exif-raw, psd, pvr, vtf, webp, xcf.

    All of those use the gdk-pixbuf module API in a remarkably similar fashion. Did they cut&paste each other's code? Did they each independently arrive at the simplest thing that doesn't trip gdk-pixbuf's checks for buggy loaders? Who knows! Either way, this makes future API changes in the modules a lot easier, since right now they all work the same way.

    I'm trying to decide between these:

    • Keep modules as they are; find a way to sandbox them from gdk-pixbuf itself. This is hard because the API is "chatty"; modules and calling code go back and forth peeking at each other's structures.

    • Decide that third-party modules are only useful for thumbnailers; modify them to be thumbnailers instead of generic gdk-pixbuf modules. This would mean that those formats would stop working automatically in gdk-pixbuf based viewers like EOG.

    • Have "blessed" codecs inside gdk-pixbuf which are not modules so their no longer have API/ABI stability constraints. Keep third-party modules separate. Sandbox the internal ones with a non-chatty API.

    • If all third-party modules indeed work as I found, the module API can be simplified quite a lot, since no third-party modules implement animations or saving. If so, simplify the module API and the gdk-pixbuf internals rather drastically.

    Do you know any other image formats which provide gdk-pixbuf modules? Mail me, please!
