Friday, March 28, 2014

Mozilla's Pushes - February 2014

Here's February's monthly analysis (a bit late) of the pushes to our Mozilla development trees (Gaia trees are excluded).

You can load the data as an HTML page or as a json file.

TRENDS

  • We are staying in the 7,000 pushes/month range
  • Last year we only had 4 months with more than 7,000 pushes

HIGHLIGHTS

  • 7,275 pushes
  • 260 pushes/day (average)
    • NEW RECORD
  • Highest number of pushes/day: 421 pushes on 02/26
    • The current record is 427 pushes, set on January 27th
  • Highest pushes/hour (average): 16.57 pushes/hour
    • NEW RECORD

GENERAL REMARKS

  • Try continues to receive around 50% of all the pushes
  • The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 30% of all the pushes

RECORDS

  • August of 2013 was the month with the most pushes (7,771 pushes)
  • February 2014 has the highest pushes/day average with 260 pushes/day
  • February 2014 has the highest pushes/hour average with 16.57 pushes/hour
  • January 27th, 2014 had the highest number of pushes in one day with 427 pushes

DISCLAIMER

  • The data collected prior to 2014 could be slightly off since different data collection methods were used
  • We will attempt to re-gather all the data sometime this year


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Wednesday, March 26, 2014

Moving away from the Rev3 Minis

In May last year we managed to move the Windows unit tests from the Rev3 Mac Minis to the iX hardware.

Back in November, we were still running some desktop and b2g jobs on the Rev3 minis on Fedora and Fedora64 for the *trunk* trees.

This was less than ideal, not only because of the bad wait times (the pool of minis is out of capacity) but also because we're vacating the SCL1 datacenter where those Rev3 minis are located. To stop using the minis we needed to move to EC2 before April/May came around.

As of yesterday, all jobs that run on the minis are also running on EC2 instances for all *trunk* trees and mozilla-aurora.

You can see the jobs running side-by-side on the minis and on EC2 here:

Over the next few weeks you should see us moving away from the minis.

You can wait for the next blog post or follow along on bug 864866.

Stay tuned!


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Friday, March 21, 2014

Running B2G reftests on AWS on mozilla-central and trunk based trees

Last week we announced that we started running B2G reftests on mozilla-inbound.
This week we have enabled it on every trunk tree.

You will see each job twice since we're running them side-by-side with the old Fedora Mac minis.
Once we're ready to disable the Fedora Mac minis we will let you know. It should happen sometime in April.

To follow along you can visit bug 818968.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Thursday, March 20, 2014

Re-thinking Mozilla's Firefox for Android automated testing

NOTE: This is a blog post from December 2013 which I did not publish at the time since our main proposal could not be accomplished. However, some of the analysis I did here was interesting and could be useful for reference in the future. The proposal did not go through because we were proposing to reduce armv6 testing to a periodic approach instead of per-check-in, without the ability to backfill (we now have that ability).

Callek recently blogged about what the distribution of Android versions is, as well as what Mozilla's Firefox for Android version distribution looks like.

I wanted to put his data into a table and graphs to help me make comparisons easier.

DISCLAIMER: (I’m quoting Callek, however, I'm striking what I won't be doing)
  • I am not a metrics guy, nor do I pretend to be.
  • This post will not give you absolute numbers of users.
  • This post is not meant to show any penetration into the market
  • Ignores all things Firefox OS for the purposes herein.
  • I present this as an attempt at data only and no pre-judgment of where we should go, nor what we should do.
  • I am explicitly trying to avoid color commentary here, and allowing the reader to draw their own conclusions based on the static data.¹

Android versions

The table shows the top four Android versions (rather than the seven that Google reports) against the Firefox Beta and Firefox Release distributions of Android versions. Please read Callek's post to know how he gathered the data.
Data from December 2013

If we look at the table in the image, we will notice that we're listing the top four versions with the most users for Android, Firefox Beta and Firefox Release. The last row shows what percentage those four versions represent of the total number of Android users.
The three pie charts represent the same data in a visual manner.
The stacked bars chart shows only two specific versions: 2.3 and 4.0.

If you look at the stacked bars chart, we have two clear anomalies compared to the normal distribution of Android users:
  • We have a lot more users on 4.0 than normal, and/or
  • We have abnormally fewer users on 2.3 than the norm
One theory that Callek shared with me is that there are likely devices running Android 2.3 that don't have the Play Store, as it was not a product requirement (citation needed). This would explain the pattern for Android 2.3.

Armv6 vs Armv7


             ArmV7      ArmV6     x86
FxBeta       96.19%     0.9%      2.91%
FxRelease    98.61%     1.39%     N/A

It seems that 96-99% of our Firefox users are using an Armv7 device. I don't know if that is growing or shrinking, or if this distribution is the same for every country.

I do know that if we stopped automated testing of the Firefox Armv6 builds on the Tegra board, we would have much better wait times (read below to know what wait times are).

Android 2.2 vs Android 2.3


             Android    FxBeta     FxRelease
2.2          1.6%       1.62%      1.66%
2.3          24.1%      10.82%     14.22%

Another aspect that I want to point out is that we have less than 2% of our users on Android 2.2. Currently, our Tegra mobile testing devices are running Android 2.2.
I believe we would gain value by moving our testing infrastructure from Android 2.2 to 2.3.
We recently started running Android 2.3 test jobs inside Android emulators on Amazon's EC2 instances. It is still experimental and in its early stages, but it might be a viable option. It remains to be seen whether we could run performance jobs (aka talos) on them.
We could also have Android 2.3 running on Panda ES boards, but no work has begun on that project that I’m aware of.

Closing remarks

I can't say what we should be testing on, however, I can highlight some of the information that I believe is relevant to push a decision one way or another.

Our current infrastructure load and current distribution of Android versions are not the only factors that need to be considered when determining what our testing pool should be. For example, if we had a country where Firefox for Android was extremely popular and all users were running on Armv6 (or Android 2.2), we would need to take into account whether or not we want to keep running this architecture/version in our test infrastructure, even though the number of users is small on a global scale.

Another example would be if we had partner deals, similar to the recent news about Gigabyte and Kobo bringing Firefox for Android pre-installed. In such a situation, we could have reached a testing coverage agreement, and therefore would have to support our partner’s needs even if their architecture/version choice had a small number of users globally. However, they did not choose Android 2.2 or Armv6.

Recommendations

My recommendations are:
  • Immediately drop automated testing of Armv6 builds on the Tegras
    • This would decrease our current wait times for Tegras
    • This would improve our turnaround for Armv7 testing on Tegras
  • Push in Q1 to have Android 2.3 running side by side with Android 2.2 jobs
    • This would allow us to reduce testing on Tegras if we still have wait times after shutting down Armv6 testing

At the present time, I would not recommend the following:
  • Move Panda testing to 4.1 JB (instead of 4.0 ICS)
    • JB is where most of our users are
    • However, dealing with the first two goals will involve grabbing the same people from the Mobile, A-team and Release Engineering teams
    • I would wait until the end of Q1 to see if the first two goals made it through and if this idea is even the right one

Definitions

1 - Wait times

Every time a developer commits new code to our repositories, we create various builds for Firefox for Android. These builds are tested on either Tegras or Panda boards (read below).
The time that a job waits to get started is called “wait time”.

2 - Automated testing on Tegras

Currently, we can't buy any more Tegras and we have around 200 left, which are in various states of “good”. We have not been able to keep up with our current load for a long while; we only start around 50-70% of our test jobs within 15 minutes of them being available to run (our aim is to start 95% of our test jobs within 15 minutes).
We run between 5k-7k test and performance jobs.

3 - Automated testing on Pandas

We have around 900 of them running Android 4.0.
They meet our wait time demands consistently.
They are properly supported through mozpool, which allows our team to do lots of operations on the boards through various APIs. These APIs also allow us to re-image boards on the fly to have different Android versions.
We run between 3k-5k test and performance jobs.



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Friday, March 14, 2014

Running hidden b2g reftests and Linux debug mochitest-browser-chrome on tbpl

For many months we have been working with the A-team and developers to move every single job away from our old Mac mini pool.

This project is very important as we're moving out of the data-centre that holds the minis in the next few months. The sooner we get out of there, the more we can save. Moving the machines out of there will start in April/May.

Currently we run on the minis:
  • Linux 32 debug mochitest-browser-chrome
  • Linux 64 debug mochitest-browser-chrome
  • B2G reftests
This week we have enabled these jobs on EC2 for mozilla-inbound. We will run them side-by-side until we're on par with the current coverage. They are currently hidden to help us see these jobs running at scale.

Here are the URLs to see them running live:
As you read this post we're landing and merging more patches to deal with the remaining known issues.
Stay tuned for more!





Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Tuesday, March 11, 2014

Debian packaging and deployment in Mozilla's Release Engineering setup

I've been working on creating my second Debian package for Mozilla's Release Engineering infrastructure and it's been a pain, just like the first one.

To be honest, it's been hard to figure out the correct flow and to understand what I was doing.
In order to help other people in the future, I decided to document the process and workflow.
This is not meant to replace the documentation but to help you understand it.

If you're using a Mac or a Windows machine, notice that we have a VM available on EC2 that has the tools you need: ubuntu64packager1.srv.releng.use1.mozilla.com. The documentation can be found in "How to build DEBs". You can use this blog post to help you get up to speed.

During our coming work week we will look at a completely different approach to make changes like this easier for developers to make without Release Engineering intervention. It is not necessarily a self-serve system for Debian deployments.

Goal

We want to upgrade a library or a binary on our infrastructure.
For Linux, we use Puppet to deploy packages, and those packages are served from a Debian repository.
Before we deploy the package through Puppet, we have to add the package to our internal Debian repository. This blog post will guide you to:

  1. Create the .deb files
  2. Add them to our internal Debian repository
  3. Test the deployment of the package with Puppet

Debian packaging

For a newbie, it can be a very complicated system with many, many parts.

In short, I've learned that there are three different files involved that allow you to recreate the .deb files. The file extensions are: .dsc, .orig.tar.gz and .diff.gz. If you find the source package page for your desired package, you will notice that these three files are available to download. We can use the .dsc file to generate all the .deb files.

For full info you can read the Debian Packaging documentation and/or look at the building tutorial to apply changes to an existing package.

Ubuntu version naming

If I understand correctly (IIUC), "precise" is an identifier for an Ubuntu release. In our case it refers to Ubuntu 12.04 LTS.
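
If you want to double check which release a machine maps to, lsb_release prints both the release number and the codename; on an Ubuntu 12.04 LTS host it should report "precise":

# Print the distributor, release number and codename of the current machine
lsb_release -a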

Versions of a package

IIUC, a package can have 3 different versions or channels:
  • release. The version that came out with a specific release
    • Ubuntu 12.04 came out with mesa 8.0.2-0ubuntu3
  • security. The latest security release
    • e.g. mesa 8.0.4-0ubuntu0.6
  • updates. The latest update
    • e.g. mesa 8.0.4-0ubuntu0.7
If you load the "mesa" source package page, you will find a section called "Versions published" and you will see all three versions listed there.
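
If you prefer to check the published versions from the command line instead of from the source package page, something like the following can help (it assumes the devscripts package, which provides rmadison, is installed):

# Query the Ubuntu archive for every published version of the mesa source package
rmadison -u ubuntu mesa
# On a precise machine, show which binary version each configured repository offers
# (libgl1-mesa-dri is one of the mesa binary packages)
apt-cache policy libgl1-mesa-dri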

Precise, not precise-updates

In our specific releng setup, we always use "precise" as the distribution and not "precise-updates".
I don't know why.

Repackage the current version or the latest one?

If you're patching a current package, do not try to jump to the latest available version unless necessary. Choose the version closest to our current package to reduce the number of new dependencies.

In my case I was trying to go for mesa 8.0.4-0ubuntu0.7 instead of mesa 8.0.2-0ubuntu3.
Due to that, I had all sorts of difficulties and it had lots of new dependencies.
Even then, I realized later that I had to go for mesa 8.0.4-0ubuntu0.6 as a minimum.

Puppetagain-build-deb OR pbuilder?

From Mozilla's Release Engineering's perspective, we're only considering two ways of creating our .deb files: 1) puppetagain-build-deb and 2) pbuilder.

FYI puppetagain-build-deb was written to make it very simple to create the required .deb files.
Unfortunately, in my case, puppetagain-build-deb could only handle the dependencies of 8.0.2 and not the ones of 8.0.4.

I describe how to use pbuilder in the section "Create the debian/ directory".
Below is the "puppetagain-build-deb" approach. It is also documented here.

Puppetagain-build-deb

At this point we have the "package_name-debian" directory under modules/packages/manifests in Puppet. Besides that, we need to download the ".orig.tar.gz" file.

To create the .deb files we need 1) the debian directory + 2) the original tar ball.

In most cases, we should be able to use ubuntu64packager1 and puppetagain-build-deb to build the deb files. If not, fall back to the pbuilder approach described in the "Create the debian/ directory" section.

NOTE: The .orig.tar.gz file does not need to be committed.

cd puppet
hg up -r d6aac1ea887f # it has the 8.0.2 version checked in
cd modules/packages/manifests
wget https://launchpad.net/ubuntu/+archive/primary/+files/mesa_8.0.2.orig.tar.gz
# The .deb files will appear under /tmp/mesa-precise-amd64
puppetagain-build-deb precise amd64 mesa-debian
# The .deb files will appear under /tmp/mesa-precise-i386
puppetagain-build-deb precise i386 mesa-debian

Create the debian/ directory

In Puppet we have "debian" directories checked in (e.g. mesa-debian/) for any debian package we deploy to our systems through it. The debian directory is produced with the standard Debian packaging instructions.

If you have access to a Linux machine you can follow the steps that rail gave me to generate the .deb files. You can also log in to ubuntu64packager1 (you have to start it up first).

To make it work locally, I had to install pbuilder with "sudo apt-get install pbuilder".
I also needed to create my own pbuilder images.
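
For reference, creating the pbuilder base images for precise looks roughly like this; treat it as a sketch, since the mirror and components shown here are assumptions and may differ in our setup:

sudo apt-get install pbuilder debootstrap devscripts
# 64-bit precise chroot
sudo pbuilder create --distribution precise --architecture amd64 \
    --basetgz /var/cache/pbuilder/precise-amd64.tgz \
    --mirror http://archive.ubuntu.com/ubuntu --components "main restricted universe"
# 32-bit precise chroot
sudo pbuilder create --distribution precise --architecture i386 \
    --basetgz /var/cache/pbuilder/precise-i386.tgz \
    --mirror http://archive.ubuntu.com/ubuntu --components "main restricted universe"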

In short, to recreate .deb files without modifying them you can follow these steps:
  1. use dget to download all three required files (.dsc, .orig.tar.gz and .diff.gz)
  2. use pbuilder --build to generate the .deb files
Since we want to patch the libraries rather than use them as-is, we also have to run these steps in between step 1 & step 2:
  1. dpkg-source -x
    • it extracts the source files
  2. download your patch under debian/patches
  3. append a line to debian/patches/series
    • the line indicates the filename of your patch under debian/patches
  4. set DEBFULLNAME
    • to bump the version when repackaging the source
  5. dpkg-source -b
    • rebuild the source package
You can read rail's explanation for full details.
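
Putting these steps together, a rough end-to-end sketch of the flow might look like the following. The package version, patch filename and output directory are illustrative rather than the exact ones from the mesa bug, and I use dch for the version bump, which is just one way of doing it:

# 1) Fetch the source package (downloads the .dsc, .orig.tar.gz and .diff.gz)
dget -u https://launchpad.net/ubuntu/+archive/primary/+files/mesa_8.0.4-0ubuntu0.6.dsc
# 2) Extract the sources described by the .dsc
dpkg-source -x mesa_8.0.4-0ubuntu0.6.dsc
cd mesa-8.0.4
# 3) Drop our patch into debian/patches and register it in the series file
cp ~/my-fix.patch debian/patches/
echo "my-fix.patch" >> debian/patches/series
# 4) Bump the version and record the change (dch reads DEBFULLNAME/DEBEMAIL)
export DEBFULLNAME="Your Name"
export DEBEMAIL="you@example.com"
dch -v 8.0.4-0ubuntu0.6.mozilla1 "Apply my-fix.patch"
# 5) Rebuild the source package and then build the binaries inside pbuilder
cd ..
dpkg-source -b mesa-8.0.4
sudo pbuilder --build --basetgz /var/cache/pbuilder/precise-amd64.tgz \
    --buildresult ~/tmp/out64 mesa_8.0.4-0ubuntu0.6.mozilla1.dsc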

Keep track of the debian/ directory in Puppet

The previous section should have generated your desired "debian" directory.
We now need to check it into our puppet repository to keep track of it.
cp -r mesa-8.0.4/debian ~/puppet/modules/packages/manifests/mesa-debian
cd ~/puppet
hg addremove
hg diff

Having Debian packaging issues?

rail and dustin have experience in this area; however, if we have further Debian packaging issues we can also reach out to sylvestre and glandium.

Determine involved libraries

To create our Puppet patch, we have to determine which packages are involved.
For instance, the mesa bug required updating five different libraries.
rail explains in comment 26 how to discover which libraries are involved.
You can list the package names you compiled with something like this:
# List the names of the packages we just built (the filename part before the first underscore)
ls *.deb | awk -F_ '{print $1}' | xargs
# Copy that list of names and run the following on the target machine to list
# every installed package, so you can check which of the built packages are present:
dpkg -l 2>/dev/null | grep ^ii | awk '{print $2}'
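
If it helps, here is a small hypothetical helper that compares the two lists directly; the host name is a placeholder:

# Which of the packages we built are actually installed on the target host?
ls *.deb | awk -F_ '{print $1}' | sort -u > built-packages.txt
ssh some-target-host "dpkg -l 2>/dev/null | grep ^ii | awk '{print \$2}'" | sort -u > installed-packages.txt
comm -12 built-packages.txt installed-packages.txt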

Create a no-op puppet change (pinning the version)

If the package already exists on our infra but it is not managed by Puppet (e.g. the library came by default with the OS), then it is better to first write a puppet change that pins the current versions.

To write the puppet change you will have to answer these questions:
  • Do we want this change for the in-house and ec2 machines? Or a subset?
  • Do we want the change for both 64-bit and 32-bit machines?
  • What are the versions currently running on the machines that would be affected?
    • Check each pool you're planning to deploy to, since we could have inconsistencies between them (see the sketch below)
Answering these questions will determine which files to modify in puppet.
Remember that you will have to test that your puppet change runs without issues.
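
To answer the last question, a spot-check along these lines works; the host names and package names below are placeholders, not the actual pools or libraries from the mesa bug:

# Check the currently installed versions of the affected packages on a
# representative host from each pool that the change would touch
for host in host-from-pool-a host-from-pool-b; do
  echo "== $host =="
  ssh "$host" "dpkg-query -W -f='\${Package} \${Version}\n' libgl1-mesa-dri libglapi-mesa"
done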

Integrating your .deb files into the releng Debian repository and syncing them to the puppet masters

The documentation is here. And here's what I did for it.

1 - Sync locally the Debian packages repository
We need to sync the "releng", "conf" and "db" directories locally from the "distinguished master":
sudo su
rsync -av releng-puppet2.srv.releng.scl3.mozilla.com:/data/repos/apt/releng/ /data/repos/apt/releng/
rsync -av releng-puppet2.srv.releng.scl3.mozilla.com:/data/repos/apt/conf/ /data/repos/apt/conf/
rsync -av releng-puppet2.srv.releng.scl3.mozilla.com:/data/repos/apt/db/ /data/repos/apt/db/

2 - Import your .deb files into the Debian repo

cd /data/repos/apt
cp ~armenzg/tmp/mesa_8.0.4.orig.tar.gz releng/pool/main/m/mesa
# Import the 64-bit build via its .changes file
reprepro -V --basedir . include precise ~armenzg/tmp/out64/*.changes
# Import the 32-bit binaries directly from the .deb files
reprepro -V --basedir . includedeb precise ~armenzg/tmp/out32/*.deb

If the package is new you will also have to place the .orig.tar.gz file under /data/repos/apt/releng. reprepro will let you know, as it will fail until you do.

3 - Rsync the repo and db back to the distinguished master
Push your file back to the official repository:
rsync -av /data/repos/apt/releng/ releng-puppet2.srv.releng.scl3.mozilla.com:/data/repos/apt/releng/
rsync -av /data/repos/apt/db/ releng-puppet2.srv.releng.scl3.mozilla.com:/data/repos/apt/db/

Your files should show up here:
http://puppetagain.pub.build.mozilla.org/data/repos/apt/releng/pool/main

NOTE: Pushing the .deb files to the repo does not update the machines.

4 - Fix the permissions at the distinguished master
ssh root@releng-puppet2.srv.releng.scl3.mozilla.com
puppetmaster-fixperms

Test that you can update

Before you can sync up a host with puppet you need to let the puppet servers sync up with the distinguished master.

For instance, my puppet runs were failing because the packages were missing at:
http://puppetagain-apt.pvt.build.mozilla.org/repos/apt/releng/pool/main/m/mesa
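
A quick sanity check before kicking off puppet runs is to ask that mirror directly whether the new files are visible yet, assuming it serves directory listings, for example:

# Fetch the listing of the package's pool directory on the mirror the hosts use
# and eyeball whether the new version shows up
curl -s http://puppetagain-apt.pvt.build.mozilla.org/repos/apt/releng/pool/main/m/mesa/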

To test my changes, I created two EC2 instances. For other pools you will have to pull a machine from production.

1 - Prepare your user environment
ssh armenzg@releng-puppet2.srv.releng.scl3.mozilla.com
cd /etc/puppet/environments/armenzg/env
hg pull -u && hg st

2 - Run a no-op test sync from your loaned machines
puppet agent --test --environment=armenzg --server=releng-puppet2.srv.releng.scl3.mozilla.com

3 - Under your user environment on the puppet master, bump the versions and the repoflag


4 - Run puppet syncs again on the test instances and watch for the changes in the Puppet output

puppet agent --test --environment=armenzg --server=releng-puppet2.srv.releng.scl3.mozilla.com

5 - Verify that the package versions are the right ones

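On the test instance, a check along these lines confirms that the upgrade actually landed; the package names are examples, so use the list of affected packages you built earlier:

# Show the installed versions of the affected packages after the puppet run
dpkg-query -W -f='${Package} ${Version}\n' libgl1-mesa-dri libglapi-mesa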

6 - Test a rollback scenario

You will have to remove the bumping of the versions from step #3 and bump the repoflag again.
Run steps 4 and 5 to see that we downgrade properly.

7 - Clean up ubuntu64packager1 and shut it off

8 - Deploy your change like any other Puppet change

Read all the steps at once

Commands and minimum comments:
https://bugzilla.mozilla.org/show_bug.cgi?id=975034#c37



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Tuesday, March 04, 2014

Planet Release Engineering

If you're interested in reading about Mozilla's Release Engineering, you can subscribe to "Planet Release Engineering".

This is a central location that collects the blog posts of each one of Mozilla's Release Engineering team members.





Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.