Wednesday, December 31, 2008

How to show subdirectories in your hg local setup

In my previous blog post I had a *wish list* section, which has since become an *I am blocked here* section.

Before, I was using [collections], which I believe saves you the hassle of declaring each repo living within it individually, but it does not show "l10n-central" in the URL, which I need for my local development setup to repack on changes.

This is the hgweb.config that I am using now:
[web]
templates = /repos/hg_templates
style = gitweb_mozilla

[paths]
l10n-central/af = l10n-central/af
l10n-central/de = l10n-central/de
mozilla-central = mozilla-central
and the URLs look like this now (the real repos are in bold):
  • http://localhost:8000/
  • http://localhost:8000/mozilla-central/
  • http://localhost:8000/l10n-central/ <- It shows you the list of repos within it: "af" and "de"
  • http://localhost:8000/l10n-central/af
  • http://localhost:8000/l10n-central/de

I do not believe this is the proper way of doing it, but it should meet my needs for now.
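Since every repository now has to be declared individually in hgweb.config, a small script can generate the [paths] entries instead of typing them by hand. This is just a sketch; the `hgweb_paths` helper and the directory layout are my own invention here:

```python
import os

def hgweb_paths(root, collection):
    """Emit one hgweb [paths] entry per repository found under
    root/collection, so each repo keeps the collection prefix in its URL."""
    entries = []
    base = os.path.join(root, collection)
    for name in sorted(os.listdir(base)):
        # only real repos (directories containing .hg) get an entry
        if os.path.isdir(os.path.join(base, name, '.hg')):
            entries.append('%s/%s = %s/%s' % (collection, name, collection, name))
    return entries
```

Running it against /var/hg and printing the result would give you the l10n-central/af and l10n-central/de lines shown above.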

I still have a couple of questions:
  1. why does the gitweb_mozilla style get applied to the real repos but not to the main page (http://localhost:8000), the way it does on hg.m.o?
  2. why does http://localhost:8000/l10n-central/af show "l10n-central/af" beside the "summary" title instead of "af", like in

Tuesday, December 23, 2008

Setup a couple of l10n-central repos locally *with* pushlog

This is the milestone that I wanted to reach after the last couple of blog posts.
I wanted to have a local hg server with a few of the l10n repositories, with pushlog, to be able to test an HgPoller. Let's see if this is the last one of this series of posts.

What I did on the server side:

  1. cd /var/hg/l10n-central
  2. hg clone
  3. hg clone
  4. replaced the .hg/hgrc of both repos with what was explained in my first post, plus the push_ssl and allow_push values:
    # the a_ and z_ prefixes are for ordering:
    # any hooks which can veto an action (require-singlehead) should be run before
    # hooks which make permanent logs (record-changeset-info)

    [hooks]
    pretxnchangegroup.a_singlehead = python:mozhghooks.single_head_per_branch.hook
    pretxnchangegroup.z_linearhistory = python:mozhghooks.pushlog.log

    [web]
    push_ssl = false
    allow_push = *
    templates = /repos/hg_templates
    style = gitweb_mozilla

    [extensions]
    pushlog-feed = /repos/hgpoller/
    buglink = /repos/hgpoller/
    hgwebjson = /repos/hgpoller/
  5. /var/hg$ hg serve --webdir-conf hgweb.config

NOTE: Remember to export PYTHONPATH=/repos/hghooks/. If it is not set, you will get an error like this when trying to push: "abort: pretxnchangegroup.a_singlehead hook is invalid (import of "mozhghooks.single_head_per_branch" failed)". In fact, I have put it in my .bashrc.

NOTE 2: Using "allow_push = *" together with "push_ssl = false" is not recommended, since it allows anyone who can reach your server to push changes.

What I did on the client side:

  1. hg clone http://localhost:8000/af
  2. make some changes on the checked out repo
  3. add "default-push = http://localhost:8000/af" under [paths] in my checked out repo's .hg/hgrc file
  4. hg commit -m "message"
  5. hg push

To check the web interface and/or the pushlog of that repo browse to:
  • http://localhost:8000/af
  • http://localhost:8000/af/pushlog

My wish list:

I would like the URLs to look like:
  • http://localhost:8000/l10n-central/af
so I could have things like:
  • http://localhost:8000/releases/l10n-mozilla-1.9.1/af
but for now I believe I am in good shape; this improvement might actually require setting up an Apache server, which I am trying to avoid.

Monday, December 22, 2008

How to setup multiple hg repositories locally without apache or lighttpd

In my previous blog post I created the following:
  • a single local repository with a web interface plus pushlog
and in this blog post, I will show you what I did to create:
  • multiple local repositories with a web interface without pushlog
NOTE: I read the page "Publishing Repositories with hgwebdir.cgi", which has much more information, even though I do not use that file at all.

These are the steps I followed:
  1. sudo mkdir -p /var/hg
  2. sudo chown -R armenzg /var/hg
  3. cat <<EOF > /var/hg/hgweb.config
    [collections]
    l10n-central/ = l10n-central/
    EOF
  4. mkdir l10n-central
  5. hg init l10n-central/af
  6. hg init l10n-central/de
  7. hg serve --webdir-conf hgweb.config

You can now do this:

$> hg clone http://localhost:8000/af
$> hg clone http://localhost:8000/de

Thursday, December 18, 2008

Setup your own hg local repo with pushlog

I am working on generating repackages on change and the HgLocalePoller looks for the pushlog of a locale's repository. For instance:

Since my development and testing depend on committing changes to the l10n repositories, I need commit access to generate changes, and since I am not a localizer I do not have access to the l10n repositories.

Therefore there were three options for me:
  1. Gain commit access to the x-testing repository on the l10n repositories and do commits in there. Inconvenience: It is a long process to gain access.
  2. Create my own user repo. Inconvenience: the same as the previous one
  3. Setup my own local repo with pushlog
I was lucky that ted was around; he proposed the last option and guided me through it.

Steps to create your own hg repo locally with pushlog:

  1. cd /repos
  2. hg clone
  3. hg clone
  4. hg clone
  5. mkdir test-repo
  6. cd test-repo
  7. hg init
  8. cp ../hghooks/examples-hgrc .hg/hgrc // which basically contains the following:
    # the a_ and z_ prefixes are for ordering:
    # any hooks which can veto an action (require-singlehead) should be run before
    # hooks which make permanent logs (record-changeset-info)

    [hooks]
    pretxnchangegroup.a_singlehead = python:mozhghooks.single_head_per_branch.hook
    pretxnchangegroup.z_linearhistory = python:mozhghooks.pushlog.log

  9. export PYTHONPATH=/repos/hghooks/ // /repos is my path, use your own
  10. vi .hg/hgrc // Add the following with your own paths:
    [web]
    templates = /repos/hg_templates
    style = gitweb_mozilla

    [extensions]
    pushlog-feed = /repos/hgpoller/
    buglink = /repos/hgpoller/
    hgwebjson = /repos/hgpoller/
  11. sudo apt-get install python-simplejson //hgwebjson needs this module
  12. hg serve //To start your hg server

You can now check your own hg server with pushlog:
and your pushlog at:

Remember that it is a server and you have to clone first!

  1. export PYTHONPATH=/repos/hghooks/
  2. hg clone test-repo test-repo-2
  3. cd test-repo-2; echo "test data" > testfile;
  4. hg add testfile;
  5. hg ci -m "adding a test file";
  6. hg push ../test-repo

Wednesday, December 17, 2008

Understanding the Release Engineering buildbot setup (introduction)

In Release Engineering we use buildbot (an external project written in Python) to automate builds for us: building at certain times, building on change, and many other things.
With this blog post I would like to introduce you to the logic of the code we use (mozilla2/master.cfg).

In general words it works like this:
  1. import modules from python, twisted, buildbot and our own custom modules buildbotcustom
  2. import configuration files regarding slaves and each branch
  3. for every branch do:
    1. setup a TinderboxMailNotifier
      1. if L10n is enabled for this branch setup a TinderboxMailNotifier for "Mozilla-l10n" and for "Mozilla-l10n-%locale%"
    2. add the HgPoller for this branch and the associated Scheduler
    3. add a Periodic scheduler if enabled for this branch
    4. for every nightlyBuilder generate a Nightly scheduler
      1. if L10n enabled for this branch add a DependentL10n scheduler
    5. for every platform defined for this branch do:
      1. create builders for dependent builds (periodic and build-on-change)
      2. create builders for nightly builds (if it is not a debug platform)
        1. if L10n enabled for this branch add a nightly builder for this platform
  4. end of for every branch loop
  5. import Release Automation configuration
  6. import Mobile configuration
  7. import UnitTest configuration (it is coming soon)
NOTE: I am omitting smaller details and I could be slightly wrong in some of the steps but we can go deeper into details if I do more blog posts
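The per-branch loop above can be sketched in plain Python. Everything here (the BRANCHES data, the branch flags, the builder-name scheme) is made up for illustration; the real master.cfg builds actual buildbot objects via buildbotcustom:

```python
# Hypothetical branch configuration, purely for illustration.
BRANCHES = {
    'mozilla-central': {'l10n': True,
                        'platforms': ['linux', 'macosx', 'win32', 'linux-debug']},
    'mozilla-1.9.1':   {'l10n': False,
                        'platforms': ['linux', 'win32']},
}

def builders_for(branches):
    """Mirror the loop described above: dependent builders for every
    platform, nightly builders for non-debug platforms, plus an l10n
    nightly builder when L10n is enabled for the branch."""
    builders = []
    for branch, cfg in branches.items():
        for platform in cfg['platforms']:
            builders.append('%s-%s-dep' % (branch, platform))
            if not platform.endswith('-debug'):
                builders.append('%s-%s-nightly' % (branch, platform))
                if cfg['l10n']:
                    builders.append('%s-%s-l10n-nightly' % (branch, platform))
    return builders
```

With the toy data above this yields 14 builders: debug platforms get only a dependent builder, and only l10n-enabled branches get l10n nightly builders.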

Saturday, October 25, 2008

Gmail adds emoticons feature

I love discovering new UI elements by myself, here and there, in the services that I normally use.

Gmail has added a new feature to insert emoticons in your emails, which I highly doubt I will use.
They added this feature two days ago.

Composing a post in looks funny on Firefox 3

Today I was going to write a blog post and this is how the form looked:
instead of like this:

I wonder if it is caused by one of the three add-ons I have, or by the fact that I am using the Spanish version, but it looks fine in the other browsers I have.
I will file a bug if I cannot find a way to fix it.

Mozilla/5.0 (Windows; U; Windows NT 5.1; es-ES; rv: Gecko/2008092417 Firefox/3.0.3

Wednesday, October 22, 2008

FSOSS - discussion panel

I am taking part in a discussion panel (TOS@FSOSS: The Student's Perspective) during FSOSS, and I have to prepare a short slide show to give a brief background about me.
I am one of 4 students who have been involved with Open Source during their programs at post-secondary institutions; we will take part in this discussion led by Mark Surman, Executive Director of the Mozilla Foundation.

These are the slides that I will use:

Link to the slide: link

Wednesday, October 08, 2008

Google chrome comes with a Trojan - false positive

Who in this world likes downloading a trojan instead of the program they want?
In my case, when I tried to download Google Chrome, avast saved me from downloading a trojan called Win32:Midgare-OI [Trj], and it seems that I am not the only one who has run into it.

I thought avast had saved me, but after reading the mentioned post in Google Groups, I found that it is most likely a false positive by avast. I do not think Google is going to be happy about the impression given to the many users who use avast and tried to download Chrome.

Here are the screenshots if you are interested.

Image 1: You can see that I did not even have time to "save" the file to the hard drive.

Image 2: I tried to download Google Chrome with Internet Explorer and got asked whether I wanted to "run" the program. If I cancel and go to the next page to download the file, I get the same trojan warning.

Monday, September 29, 2008

l10n build changes

As announced in this week's weekly update, there are changes in the way L10n repackages are being done.

What has changed?
The L10n repackages are now driven by buildbot and are pushed to the usual place:
The old way of driving the L10n repackages with tinderbox will push its repackages here:

Where are the logs?
Tinderbox results will keep being reported to "Mozilla-l10n" and "Mozilla-l10n-locale", but the column headers will change to "platform_l10n_nightly".

The old tinderbox machines will keep reporting to "MozillaTest" for a while under the original names.

How many times do we generate the locales?
Every 4 hours starting at 9AM PDT.
We kept 9AM for now as the "usual reference" time for an L10n nightly build. With tinderbox-driven L10n builds we could only say "it should be generated sometime after 9AM"; now we can say exactly when it happens.
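A quick computation shows what that cadence looks like over a full day (assuming a fixed 4-hour interval that wraps around midnight):

```python
# trigger hours for an "every 4 hours starting at 9AM" schedule
start, interval = 9, 4
hours = sorted((start + i * interval) % 24 for i in range(24 // interval))
assert hours == [1, 5, 9, 13, 17, 21]  # PDT trigger hours
```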

Restating briefly some of the benefits
The l10n repackages will now be driven by buildbot rather than tinderbox. This switch will give us more flexibility in how and when the L10n repackages happen, and we will be able to trigger other steps.

Another benefit is that we will be able to run L10n repackages in parallel, which will allow us to scale, since the number of locales has kept on growing while our build capacity did not.

If you are interested in the benefits, please read some of the previous posts, or let me know if it would be good to have a well-documented post.

Please let us (Release Engineering) know if anything is not going as expected.

Wednesday, August 13, 2008

L10n parallelization results

We want to do L10n repackages in parallel to improve the time it takes to generate them, since the number of locales grows every month.
This blog post shows some early results on staging machines, just for future reference.

List of machines used for setup A and B:
  • 3 linux, 1 mac and 3 win32
Setup A (buildbot repackages in parallel)
In this setup I made the slaves "remove everything (mozilla and l10n) before starting the next locale".

Start time:
  • 15:44 - Tuesday Aug. 12th
End times:
  • Linux -> 16:25 (41 mins)
  • Mac -> 17:04 (1hr 20mins)
  • Win32 -> 18:09 (2h 25mins)
Setup B (buildbot repackages in parallel)
In this setup I did not remove anything, just checked out again

Start time:
  • 10:20 Wednesday Aug. 13th
End times:
  • Linux -> 10:37 (17 mins)
  • Mac -> 11:04 (44 mins)
  • Win32 -> 11:44 (1h 24 mins)
NOTE: See the results on this tinderbox page (look at the *_l10n_nightly columns on the right side)

Old Tinderbox setup (everything taken care of by one machine)
Start date:
  • Aug. 13th (each platform starts at its own time; we just know that it can happen after 9am)
Start and end times:
  • Linux -> 9:11->9:39 (28 mins)
  • Mac -> 9:04 -> 9:26 (22 mins)
  • Win32 -> 10:48 -> 14:26 (3h 37 mins)
We can say that the platform that benefits the most is Windows (from over 3.5 hours down to about 1.5 hours), while the Mac loses performance (from 22 mins to 44 mins).

The drop in performance on the Mac is expected because of the new way of doing things, and it can be improved.
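The elapsed times quoted above are easy to double-check with a quick datetime computation (the helper below is just for verifying the arithmetic):

```python
from datetime import datetime

def elapsed_minutes(start, end):
    """Minutes between two HH:MM wall-clock times on the same day."""
    fmt = '%H:%M'
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return int(delta.total_seconds() // 60)

assert elapsed_minutes('15:44', '18:09') == 145   # Setup A, Win32: 2h 25min
assert elapsed_minutes('10:20', '11:44') == 84    # Setup B, Win32: 1h 24min
assert elapsed_minutes('10:20', '10:37') == 17    # Setup B, Linux
```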

Wednesday, July 30, 2008

L10n build presentation slides

Thanks to everyone who attended the "L10n build process" presentation at the Summit 08.

This is the presentation that was used for the session; please contact me if something does not make sense:

These are some of the notes/feedback that I got from the discussion with the rest of the localizers:
- No back check-in to locale's repository
- Generate the installer with missing strings so the installer does not break
- Modify the branding if the locale is missing entities
- Add Source Stamp in "About" for better litmus testing

The last two notes/suggestions cannot be tackled right now; they will be worked on in due time.

Monday, July 07, 2008

Make our nightly builds identical for each platform - part 1

The way we generate nightly builds for Firefox 2.0.0.x and Firefox 3.0.x has a few problems that would be good to fix.
  1. The build for each platform starts AFTER a given hour, BUT only whenever a build slave can pick it up. Therefore, the build start time differs from platform to platform. This will become even more obvious once the buildID captures minutes and seconds. It is not a big problem, but it is not correct behavior, and it would be good to have the same buildID for all 3 platforms.
  2. The build for each platform is NOT built from the same source code. Whether you knew it or not, this has been happening for many years. This is a big problem, especially when a commit lands around the time the nightly builds happen and the change is captured by one platform but not the others. It also affects you when you want to check whether a bug introduced on one platform is present on the other two.
Things we are doing to improve this
  1. Separating the generation of nightly builds from generating dependent builds
    For years, we have generated dependent and nightly builds from the same slave, running the exact same code all day. We generate a nightly build after the last dependent build that finishes after 3AM PDT. Each platform finishes at a different time, and each platform checks out different source code. To be more precise, a nightly build is started if 1) it is past the $build_hour and 2) twenty-four hours have passed since the last nightly build. To stop the dependent-build process from generating nightly builds we need to set $OfficialBuildMachinery to zero in the
  2. Use a Nightly scheduler to trigger nightly builds
    Using a Nightly scheduler will allow us to add more steps after an en-US nightly build, like triggering L10n repackages and/or triggering other schedulers as needed
  3. Setting the checkout time before the build starts
    If we set the same MOZ_CO_DATE for the 3 different platforms before the builds start, they will check out the same source stamp no matter when each slave starts the build
  4. Remember the checkout time
    Saving the SourceStamp inside the application.ini file will let us know which SourceStamp was used for a build, even after the log for that build is removed from the tinderbox logs
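Steps 3 and 4 boil down to computing one timestamp up front and reusing it everywhere. A minimal sketch of the idea (the helper name and the exact date format the checkout expects are assumptions here):

```python
import time

def checkout_stamp(epoch):
    """One shared checkout time (UTC) that would be handed to all three
    platforms as MOZ_CO_DATE, and later recorded in the binary."""
    return time.strftime('%Y-%m-%d %H:%M', time.gmtime(epoch))

# every platform would export the same value, e.g.:
# MOZ_CO_DATE = checkout_stamp(scheduled_epoch)
```

Because the stamp is computed once, before any slave starts, all three platforms check out identical source no matter when each build actually begins.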
How would things look after these changes?

As the image shows:
  1. Our nightly builds will not depend on previous dependent builds
  2. The nightly builds will be triggered when we say
  3. The nightly builds will check out the same source code
  4. From a binary we will know which source stamp was used to generate the build

Wednesday, June 25, 2008

SourceStamp in application.ini (it helps the L10n process)

As I mentioned in my previous blog post, L10n repackaging works by unpacking an en-US build and overwriting the en-US DTD and property files with the ones for that locale.

Part of this process requires checking out part of the en-US code, and the current problem is that we do NOT check out the same Source Stamp that was used to generate the en-US build, since we do not really have a way to know what that Source Stamp was.

There is a bug which I started working on last week; the patches would allow us to have the Source Stamp in application.ini if a MOZ_CO_DATE variable is set when checking out the source code. Therefore, once this works, it will be easy to know which Source Stamp was used for an en-US build, we will be able to check out exactly the same Source Stamp when doing L10n repackages, and we should not be out of sync at all.

NOTE: The previous patches mentioned do not change anything at all if no MOZ_CO_DATE is set.

BuildID do not be precise!! - part 1 (in relation with l10n/build)

What is the BuildID? The BuildID represents the date and hour a build was generated. It allows us to identify a build; we could think of it as the serial number of a product, which lets people know things such as in which factory and/or on what date it was produced.

When you select "About Firefox" you can see the BuildID in there:
"Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9) Gecko/2008061004 Firefox/3.0"

If you look at that number in bold, you can tell that it represents a date (YYYYMMDD) plus two extra digits for the hour (HH) in which the build was generated.

This BuildID can also be found in the file application.ini that comes with your Firefox's installation:

One of the problems here is that 2 different builds, with (or without) different source code, can have the same BuildID if they happen within the same hour. In the past we could not generate two builds in less than an hour, but nowadays we can, so we can end up with two different builds with the same BuildID.
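The hour-granularity collision is easy to demonstrate; the helper below is just an illustration of the YYYYMMDDHH format, not real build code:

```python
from datetime import datetime

def buildid_for(dt):
    """Truncate a build's start time to the YYYYMMDDHH BuildID format."""
    return dt.strftime('%Y%m%d%H')

# two builds 40 minutes apart share the same BuildID:
assert buildid_for(datetime(2008, 6, 10, 4, 5)) == '2008061004'
assert buildid_for(datetime(2008, 6, 10, 4, 45)) == '2008061004'
```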

NOTE: There are two bugs in which the meaning and the precision of the BuildID is being discussed. Bugs 431270 and 431905

L10n and the BuildID
We have lived many years with this and only this value. We have downloaded the binaries for years, and our only reference was when a binary was created; nothing at all about which Source Stamp, i.e. which files were compiled into the build. We could only tell by looking at tinderbox logs and the like.

There are people who need to know which Source Stamp was used to generate a build, and one of them is the L10n community, whether you knew it or not.

When we generate the L10n repackages, we download an en-US build, check out the en-US code and do some configuration before running the repackage of the locale. Currently, we check out the latest en-US code BUT not the code that was used to generate that en-US build. This means there can be several hours of difference in the Source Stamp, and therefore we generate locales that are NOT exactly like the en-US build.

NOTE: This does not happen on releases since the code for en-US is frozen, but it does happen for nightly L10n repackages

Axel showed me a solution which reduces this gap between "the en-US Source Stamp used to generate the en-US build" and "the en-US Source Stamp used to generate an L10n repackage". The solution estimates, from the BuildID, which Source Stamp could have been used for that build, by assuming the checkout happened at the beginning of the hour (minute zero). This is better than what we have now, since for a BuildID whose hour is HH, the checkout could have happened anywhere from HH:00 to HH:59.
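Axel's estimate can be sketched in a few lines (the helper name is my own, for illustration):

```python
from datetime import datetime, timedelta

def estimated_sourcestamp(buildid):
    """Assume the checkout happened at minute zero of the BuildID's hour."""
    return datetime.strptime(buildid, '%Y%m%d%H')  # minute/second default to 0

est = estimated_sourcestamp('2008061004')
assert est == datetime(2008, 6, 10, 4, 0)
# worst case, the real checkout was at HH:59 -- up to 59 minutes later:
assert est + timedelta(minutes=59) == datetime(2008, 6, 10, 4, 59)
```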

We need to fix this; in the follow-up blog post I will mention what I have been working on, and as you might already guess, it involves adding the Source Stamp to the binary.

Monday, June 23, 2008

l10n repackaging - part 5 - introduction

If this is the first time you are reading one of my blog posts: on this Monday's meeting you will be able to hear me talk briefly about the project I am working on, related to changing the way we do l10n repackages.

Localizations are generated by: 1) downloading an en-US binary, 2) unpacking it and 3) repacking it, replacing certain files with those of the specific locale.
We currently do the repackages on dedicated machines, and we are trying to move them over to the buildbot infrastructure to make them scalable and run them on our generic build slaves.

Key concepts for the next weeks
I have done several blog posts over the past weeks (1, 2, 3, 4), and I will just talk about where my attention is right now:

Nightly repackages
In this scenario we have an en-US build being generated at a certain hour of the night, checking out a specific source stamp; after it finishes, we can continue by checking out the same source stamp to do the l10n repackages. For now, I want this code in our staging configuration, since it involves 2 schedulers (Nightly and DependentL10n), and it can be done with one machine until we find the proper way to pass the source stamp from one scheduler to the next.

Dependent repackages
We generate dependent en-US builds continually, and it can take as little as 6-8 minutes to generate one; therefore, we do not really have enough time to keep up with that rhythm with l10n repackages (on my local machine it takes around 1-2 mins to get ready and 30 secs to repackage each locale, so with 50 locales it takes a total of about 25 mins) unless we had 4-6 slaves for l10n repackages, and that is just thinking of Linux or Mac machines; the Windows slaves take way longer.
I am thinking that we would want to do l10n repackages every hour; the source stamp would most likely have to differ from the dependent en-US build that we download (unless we have the en-US source stamp somewhere in application.ini), but for now it should do.
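The arithmetic above can be made explicit. The per-locale and setup times are the rough numbers from my local machine; the slave counts are what-ifs, and the even split is an assumption:

```python
def total_minutes(locales, slaves, setup_min=2.0, per_locale_min=0.5):
    """Wall-clock time when locales are split evenly across slaves,
    each slave paying its own setup cost first."""
    per_slave = -(-locales // slaves)  # ceil division: locales on the busiest slave
    return setup_min + per_slave * per_locale_min

assert total_minutes(50, 1) == 27.0   # one slave: ~27 min, far behind 6-8 min dep builds
assert total_minutes(50, 5) == 7.0    # five slaves: ~7 min, close to keeping up
```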

Release repackages
In our release automation system we already do l10n repackages on the generic slaves, but it still uses the code from the tinderbox client, which is not scalable.
Changes here will require checking out the same tag as the en-US build.

Bootstrap configuration
Our release automation code reads a lot of configuration from "bootstrap configuration" files, which contain information such as which tag to check out, where to upload the builds, which tree to report to on tinderbox, etc. This information will be needed when doing l10n repackages on buildbot. The best solution is to have a factory (or a step) that parses this information and adds it as properties to the build object that gets passed from step to step in buildbot, so we can read them at the proper time.

Major problems

  1. Making sure that multiple slaves run the preparatory steps. Currently, the only way I have found is to have one builder per slave under a Dependent scheduler, but this has a problem: if a slave disconnects (rather than fails) it will not trigger the next DependentL10n scheduler
  2. Passing build properties from scheduler to scheduler, which I thought we would not need, but it seems we do. I might be able to "cheat" by passing the checkout time in the application.ini file or by passing text files with the desired information to the slaves

Tuesday, June 10, 2008

l10n repackaging - part 4 - and rewriting goals


  • I have finished everything required to generate a locale's dmg file and its associated xpi (an add-on to change the language of your browser) under a buildbot configuration
  • I have filed a bug: "Bug 438240 - does not mount image automatically"
    • This line used to be "good enough" to answer, in an automated way, the question "do you want to mount this image?":
      echo Y | hdiutil attach -readonly -mountroot /tmp -private -noautoopen $(UNPACKAGE) > hdi.output;
      but it is not good enough, at least on Leopard and, according to Axel, on Tiger (we have decided to open a zoo instead of continuing with our l10n work at Mozilla ;)
    • "My fix" (based on the scripts from Axel and Alice) is an expect script which feeds answers to a spawned process (hdiutil) that would otherwise have expected a human to reply to the questions
    • I have attached a 2-line patch to fix this, but I am in no rush to get it into the tree
  • I have tested something John wanted me to try and found that it did not work as he wanted.
    • I started my buildbot configuration with 3 slaves at the same time
    • The 1st slave picks "af" as the locale to work on
    • The 2nd slave picks "ar" as the locale to work on
    • The 3rd slave picks "be" as the locale to work on
    • I killed the 2nd slave -- therefore "ar" was not completed
    • When the 1st slave was done it picked up "bg" as the next locale to work on instead of the unfinished "ar"
    • As you can see the locale "ar" would have never been processed in this run
  • John says that he worked with buildbot at some point and saw that unfinished BUILDS were returned to the queue and taken by the next available slave. Since my Build and PeriodicL10n classes are "custom-made", some features might be missing

Rewriting tasks/goals

Here is the list of problems/tasks/goals that I have, the type of problem that they are and the priority that they should be given:
  • P1-DEFECT - Use buildbot sendchange to FORCE the processing of a single locale
  • P2-DEFECT - Use same timestamp as en-US
  • P3-DEFECT - If a locale is in process and the process does not complete, it should be put back into the queue so it can be reassigned
  • P3-CONFIG - Trigger l10n repackages after a successful en-US build
  • P3-PERF - How to deal with common steps? They could be executed only once before processing the first locale
  • P3-PERF - When to clobber? There are various scenarios to consider
  • P4-CONFIG - A lot of small steps should be unified under a module in buildbotcustom
  • P4-CONFIG - Add configuration to staging 1.8
  • P4-CONFIG - WGET - what is the exact URL of the latest en-US to get?
  • P4-CONFIG -
  • P4-CONFIG - l10n verify - I have seen that being run in production, more research to be done
  • P4-CONFIG - Push && Announce - After each locale is processed, it has to be pushed to an FTP server and announced via email or other means

What do these priorities mean? I see 2 types of problems here:
  1. Problems that require research, time and a lot of thinking, since they are not implemented anywhere else OR I do not know how to do them
  2. Problems that require less brain power, where it is just a matter of putting together concepts that are already used somewhere else

The first type are the priority 1 and 2 problems, which get most of my attention and dedication. The others could easily be solved by anyone at Mozilla without too much effort.

There are more problems to be solved in l10n-build, but this list narrows it down to what is required to move from the tinderbox infrastructure to buildbot.

Any feedback and questions are welcome.

Monday, June 09, 2008

l10n repackaging - part 3 (it feels so goooood to have a solution)

This past week I started to work on the code (which is highly inspired by Axel's work on the l10n build processes) that will allow us to distribute l10n repackages among slaves so we can repackage all the locales we have.

In my previous blog post I was worried about how to get the master to have the latest information about all the locales without a) having to restart the buildbot master to grab the latest list of locales and b) doing extremely hacky things with buildbot.

I wanted to try a couple of things by making the slaves do some work as you can see in this quote from last post:
How can we change this?
  • An initial slave could do some "thinking" and notify "someone" (an object) in the master which locales to be repackaged
  • An initial slave checks out a file, uploads it to the master and somehow notify the master to reconfigure itself
BUT I realized that I should go to the moment when all the build requests are generated and, just before that, get the latest all-locales file from the repository.

The "good-enough" solution

  def getLocales(self):
      """It checks out the all-locales file"""
      # Thanks to bsmedberg on this one
      tuple = subprocess.Popen(
          ['cvs', '',
           'co', '-p', 'mozilla/browser/locales/all-locales'],
          stdout=subprocess.PIPE).communicate()
      list = tuple[0].split('\n')
      if list[-1] == '':
          list = list[0:-1]  # get rid of the last, empty entry
      return list

  def doPeriodicBuild(self):
      if self.locales:
          locales = self.locales
      else:
          locales = self.getLocales()  # we get the latest list of locales

      # we create a non-merge-able build request per locale in the list
      for locale in locales:
          obj = PeriodicL10n.BuildDesc(locale)
          #print obj.locale
          self.queue.insert(0, obj)
      bs = buildset.BuildSet(self.builderNames,

What we have solved so far

  • By using NoMergeStamp, we have build requests that do not get merged by buildbot
  • The function getLocales() will always fetch the full all-locales list, without doing anything hacky with buildbot, and will always generate the right number of build requests
  • With PeriodicL10n.BuildDesc(locale) we generate objects that carry a "locale" property, which in a later step gets passed to a Build object; we can therefore use WithProperties("l10n/%(locale)s"), a class that generates a string with values extracted from the current build object of a step. For example:
  • We have a queue with builds that are taken whenever a slave is available; therefore, the more slaves we have, the shorter the whole process takes
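What WithProperties does boils down to Python's classic %-interpolation against the build's property dictionary. A toy stand-in for illustration (the real class lives in buildbot and does more):

```python
class WithProperties:
    """Toy version: format a string with values from a build's properties."""
    def __init__(self, fmt):
        self.fmt = fmt

    def render(self, build_properties):
        # '%(locale)s' picks the 'locale' key out of the dictionary
        return self.fmt % build_properties

workdir = WithProperties('l10n/%(locale)s')
assert workdir.render({'locale': 'af'}) == 'l10n/af'
assert workdir.render({'locale': 'de'}) == 'l10n/de'
```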

Great relief

I was really frustrated just before I reached this solution, because I did not want to spend what was going to be a lot of "trial and error" on the different options, which could have led to very complicated solutions or dead ends.
I am glad I got rid of what was, for me, the biggest bug of my project; now I can dedicate myself to putting all the pieces of my research into a bunch of buildbot steps, which should be able

What is to come ...

I still have not received any feedback, but that is fine because I still have to continue working on what the l10n repackage of a single locale involves, which I will describe in a later post. For now, let me list the things left to be done that are on my mind:
  1. Write and test the set of steps that generate a single locale (I am half way through)
  2. Research what the push and announce steps do (this needs further explanation)
  3. There are steps common to all locales, so a slave might repeat the same task from one locale to the next. It might be worth keeping some of the work done for a previous locale: 1) checkout of Firefox's code, 2) the make configure step, 3) download of the latest en-US and 4) partial compilation of some objects
  4. It is really important to check out the same timestamp of Firefox's code for all locales. I have an eye on "Triggering Schedulers" and the last comment on it: "This is useful to ensure that all of the builds use exactly the same SourceStamp, even if other Changes have occurred while the build was running."

NOTE: The image on the right shows which step I am at right now: "make installers" of a single locale. Soon we should see it green as well.

Tuesday, June 03, 2008

l10n repackaging - part 2

After 5 weeks into my internship and 2 weeks after my meeting with Axel Hecht, I have started to get some results that help us understand the problem I am facing.

This is the bug that I am working on: "Bug 434878 - change how automation does l10n repacks" and here is the tar ball (you must have buildbot and others installed - here is a link with a script to help you out) that has a script that starts everything for you so you can see it running.

NOTE: I am keeping each section focused on one part of the major problem; the rest will be covered in other blog posts.

Main goal
  • Change how l10n (localization) repackages happen. The current process is:
    • Not scalable: we have to do ALL repackages even if we just want one
    • Difficult-to-follow code
    • Not easy to make big changes to how things get done
  • Use buildbot to help us
  • Parallelize the repackages. This will especially help on Windows machines, since it takes forever and a little more to finish the whole chunk of repackages

"Hello, I am a slave. What do you want me to do?"
Buildbot has a master that tells slaves what to do, but the master gets configured at startup. Therefore, for now:
  • I have the list of locales hard-coded in the master's configuration file (this is not 100% true, but let's pretend it is)
Having the list of locales hard-coded means that I have to modify the configuration file and reconfigure the master, which is not a good idea since there can be other builds happening, AND, more importantly, it won't be automated: it will depend on a human logging in to the master and reconfiguring it.
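The hard-coded setup just described looks roughly like this. It is only a sketch; the locale list, builder names and slave name are illustrative, not the real master.cfg:

```python
# The locale list is baked into master.cfg, so one builder per locale
# exists only after a human edits this file and reconfigures the master.
locales = ["af", "de", "fr"]  # hard-coded: adding a locale means a reconfig

builders = [{"name": "l10n repack %s" % locale,
             "builddir": "l10n_%s" % locale,
             "slavenames": ["slave1"]}
            for locale in locales]
print([b["name"] for b in builders])
```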

How can we change this?
  • An initial slave could do some "thinking" and notify "someone" (an object) in the master about which locales should be repackaged
  • An initial slave checks out a file, uploads it to the master and somehow notifies the master to reconfigure itself
We will try to discard some options, find new ones and even try the "hacky" ones.
As Axel comments in the bug, it is not appropriate for a slave to be deciding what has to be built and what not.

This is the point where Axel and others have tried to bring a solution, but the technology (buildbot) reaches its boundaries. Unless we have completely overlooked a feature, we will have to come up with a workaround.

armenzg: Mozilla internship (50+ kms on a broken bicycle)

You know how lazy I am so I will be reusing my friend's blog post:

Yes, I did it and I have to give thanks to Lukas for being with me in this great adventure

Aside from that, I spent last weekend in L.A., which was an amazing time at my uncle's home, meeting all the people from the church there.

I did various things, from seeing snow, driving go-karts, and watching a play and a movie, to playing an amazing "futbol" game (in which I scored a goal like my friend Hacheek!).

"oh yeah baby, I can't wait to see SATC; I have been waiting all year to watch it"

Sunday, May 18, 2008

l10n repackaging - part 1

My name is Armen Zambrano Gasparnian and I am one of the two release engineering interns, along with Lukas Blakk. My current project has me researching how the l10n repackages get done (this process generates the l10n builds for all the different languages) and making improvements to it.

I will describe the situation and some of the problems that we have currently:
  • l10n nightlies happen on old dedicated machines instead of running on generic buildslaves. This means that if the machine fails, there are no nightlies at all. (Read more about the inconveniences of not using generic machines in John O'Duinn's blog)
  • l10n nightlies are not being kept on our servers (unless I am mistaken); for each locale the only build available is the "latest" one (latest-trunk-l10n, latest-Mozilla1.8-l10n), which gets overwritten every night. By "kept" I mean that we don't keep the l10n build of a specific day, even though in the tinderbox log you always get a URL to grab the file (log, e.g. nightly/2008-05-17-09-trunk-l10n)
  • The previous might be a desired effect, since l10n repackages are not computationally costly but storing them is. All you have to do to get an l10n build is ... ... ... ... ... (I am sorry, it is not that straightforward). This past Friday I wrote a little script that does some parts of that process for a specific locale, in this case "af" (--->create locale script<---). The mozconfig used from the source code has an l10n tag (there are 4 tags: l10n, l10n_release, MOZILLA_1_8_l10n, MOZILLA_1_8_BRANCH_l10n_release). From running that script, I obtain the "af" .dmg file (inside browser/dist) and af.xpi (browser/dist/install/mac-xpi), which are the files that I want for a Mac
One of our first goals is:
  • We want to make l10n repacks happen in a way that is scalable. Currently it works like this (which is not scalable):
  1. We do a "make configure"
  2. Download all locales
  3. Download the latest en-US archive file from nightly/latest-trunk (isn't it dangerous to download something that can be overwritten while we are downloading it?)
  4. Do the repack for all locales (~1 min per locale)
  5. After that, we upload all locales

  • A better way would be to break this down into 1) creating a locale, 2) uploading it and 3) announcing it. This would allow l10n repackaging to happen in parallel, which would be a great improvement
  • This would also allow generating "just" one locale if needed. Right now, we have to generate all of them even if we just want one
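The two flows above can be written side by side. The step() helper and the placeholder step names just record what would run; the real process invokes make targets and upload scripts, so this is only an illustration:

```python
# Current monolithic flow vs. the proposed per-locale flow.
log = []

def step(name):
    log.append(name)

def repack_all(locales):
    # current flow: everything happens for every locale, serially
    step("make configure")
    step("download all locales")
    step("download latest en-US")
    for locale in locales:
        step("repack " + locale)
    step("upload all locales")

def repack_one(locale):
    # proposed flow: one locale is an independent create/upload/announce
    # unit, so requests can run in parallel or for a single locale only
    step("create " + locale)
    step("upload " + locale)
    step("announce " + locale)

repack_one("af")
print(log)  # ['create af', 'upload af', 'announce af']
```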

I don't want to prolong this by explaining how the l10n repackaging currently works, but I will do so when we file the bug covering this project this week, which will allow more discussion.

NOTE: Just FYI, when we have releases we already do l10n repackages in the automation system instead of using the old tinder client machines; it is just slightly different, using Bootstrap/Step/

Friday, May 16, 2008

Firefox Release Candidate 1 - Test days

Yes, in case you did not know: you can now use Release Candidate 1, which will become Firefox 3 unless we find any dramatic bug.

This post is just to announce that there are going to be Test Days on these coming Fridays, starting today.

It is late notice, but it will be a good experience for those who haven't tried a Test Day yet.

Thursday, May 01, 2008

armenzg: Mozilla internship (week 1)

How to start a blog post to convey so many things that are happening to me?

As many of you know, I am doing an internship with Mozilla Corporation, which delivers to the world an internet browser called Mozilla Firefox. The company is owned by the Mozilla Foundation, a non-profit organization; therefore, the company's desire is to create an open web by being open itself and sharing the source code used to generate the browser (I say "the" instead of "their" since the code is owned by no one).

Introductions made, I will focus on what I have done in my first week (non-technical events).

When I arrived I was received by Gary (originator of Rumbling Edge), the first intern they had this year, who came to pick me up by car and helped me move into the Oakwood Apartments in Mountain View, close to the Mozilla headquarters (if you look at this map, I sit on the second floor, roughly where the RED PIN is).

After that he decided to take me to Mount Hamilton, where the Lick Observatory is. Gary is a person who loves nature and likes to make people feel welcome; a really interesting person to meet because of his unique involvement with the Mozilla community.

The next day, before going to the office, he decided to go and buy a cup for his brother at "The Company Store", which some of you might know is at One Infinite Loop (I am sorry Huanlu, you have to take it how it comes), part of Apple's headquarters. The store looks like a clothing store such as Zara, but with iPhones and iPods instead.

And finally, I will be really happy to receive Lukas Blakk (Source Server Symbol) and Cesar Oliveira (Active Directory), who are also Seneca students like me, part of Dave Humphrey's open source children. We also have around here Anthony Hughes (intern at Songbird), whom we will be visiting in San Francisco.

I am working on a really cool and detail-heavy project under the mentoring of John O'Duinn, who is the manager of the release team. He also got me a good MacBook Pro for all my hacking, which I honestly really appreciate, especially since my laptop has been giving me too many problems; and the humongous screen they got me is good too, since my eyes are like my grandma's.

Anyways, more to come by the end of the summer... :)

Saturday, April 19, 2008

armenzg: Hera Try Server Unit Tests status

The Mozilla@Seneca Hera Try Server has been able to run Unit Tests after uploading a patch during this week!

This is basically what it does:
  • Upload a patch through the form sendchange_test.cgi (you need credentials)
  • A script on the build master checks the folder on the try server where the patches are and sees if there are any new ones. Right now it checks every 1 or 2 minutes, but we will change this to incron (triggered upon new file creation, which means immediate response)
  • The perl script copies all new patches locally and sends the buildbot a "buildbot sendchange" message
  • This triggers buildbot to start the "tests" branch
  • It applies the patch to the latest source code, builds, uploads the build back to the try server through SCP and SSH (this allows the developer to download it) and runs all 6 unit test suites against that build
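The master-side polling above can be sketched as follows. The real version is a perl script; the patch directory, master address and username here are made up, and the sendchange call is only the conceptual equivalent:

```python
# Find patch files not seen before and fire "buildbot sendchange" for each.
import os
import subprocess

def new_patches(patch_dir, seen):
    """Return patch files that have not been processed yet."""
    return sorted(set(os.listdir(patch_dir)) - seen)

def announce(patch, master="localhost:9989", user="armenzg"):
    # equivalent of the perl script's "buildbot sendchange" call
    subprocess.call(["buildbot", "sendchange", "--master", master,
                     "--username", user, patch])

def poll_once(patch_dir, seen):
    for patch in new_patches(patch_dir, seen):
        announce(patch)
        seen.add(patch)
```

With incron instead of polling, poll_once would simply run once per file-creation event.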

Left to be done:

  • Add incron so the script runs upon new file creation
  • Put the waterfall view behind some security measures (we don't want random people to just "force builds" and waste CPU, while identified users should be able to)
  • Add SSH key pairs for the Mac and the Windows machine
  • Create the slave on the Windows machine
  • Once done, clone all the Linux machines and the Windows machines
  • Modify the sendchange_test.cgi file to identify the user who uploaded the patch (right now, every patch claims to be uploaded by "armenzg")
  • Unify the sendchange_test.cgi form to allow users to upload patches and just get builds without "tests"
  • It would be nice to notify the user when the builds are done OR when the unit tests have run (I do not know how yet)

Another idea - Regionalized builds

I believe it would be a good project to allow the Hera server to spit back a regionalised build, using Rueen's auto-localization tool. This obviously requires more research, but I believe it is a viable project to try.

This post could have been much more technical and longer, but I decided to KISS since I won't have time to document things until Tuesday or Wednesday.

Friday, April 04, 2008

armenzg: Where do I display the unit tests?

Unfortunately for me, the major system that I am implementing this semester has drained too many hours (~150 accumulated), which for a SINGLE course is way too much. You can tell because the quantity of work I have generated for each release of my Mozilla project (integrating unit tests into Seneca's try server) amounts to one day's worth of research and work. I expect that after this coming Wednesday (final run) I can rest from that project and dedicate myself to my Mozilla project and my upcoming exams.

For this release I will mention what I have tried and found out. Two major things that I have been working on and have to get done/fixed:
  • The unit tests need a DISPLAY on which to show up. I tried to see what is going on in the slaves, and I have to do one of two things: "get someone with root access to either set it to boot to runlevel 5, or do init 5 after it boots and that'll start up the desktop and you'll be able to log in via vmware console" OR "look up an Xvnc tutorial and figure it out (does not require root access :) )" (thanks a lot to rhelmer, and I wish you the best!)
  • Through "Seneca Buildbot Try Server - Unit Tests enabled (soon)" (for security reasons it needs credentials; look at the screenshot) you will be able to upload a patch and it will run the unit tests, but for now I have to see whether the perl script is properly set up. So far I know that I can submit patches to the "patches" folder on the try server. These have to be retrieved by the buildslave so it can apply them to the latest source code.
NOTE: If you force a build on the unit test column, you will see that I have removed a lot of the steps, just to be able to run the tests as soon as possible.

Friday, March 21, 2008

armenzg: Sess16 - Automated testing on Hera

This blog post announces that I have got my hands dirty adding the steps to do automated builds with tests on buildbot on the Hera server.
I got access this past Tuesday; for the previous 4 weeks I was working on the buildbot configuration for CAIRO. This is a summary of what I have tried to do:
  • Access the buildmaster in
  • Modify the master.cfg to add the build steps for the linux machine
  • Add file and the mozconfig-linux-tests
My goal was to force a complete build with tests without ruining Adam's configuration (he set up the try server)

NOTE: This buildbot setup will allow forcing builds with tests and builds without tests.

NOTE: You can see the second yellow column, which indicates that it is building. Check it yourself OR force a build (and go for a coffee).


As always, I keep a section for problems and solutions for future reference:
  • I used "buildbot restart ." instead of "buildbot reconfig ." and therefore took the try server down :( - BTW, do not use "restart" if you just want master.cfg re-read so your changes are applied: instead of falling back to the last working copy of master.cfg (as it claims), it just dies and never starts again
  • I had to put some of the environment variables (MozillaEnvironments['centos']) inside the buildcustom/ file, since they were defined there as well as in my code, and they were not picked up the other way
  • At some point my "mozconfig-linux" file contained CRLF line endings and I would get "line 17" errors when the file had only 9 lines; I used file mozconfig-linux-tests to confirm it and dos2unix mozconfig-linux-tests to fix it (thanks rhelmer). I remember the "line" errors but I can't find the log that said it; when I do cat .mozconfig, you can actually see extra new lines in the output
  • There have been a lot of other small problems, but they are not relevant
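The CRLF problem above can also be spotted from Python instead of file/dos2unix. A small sketch (the sample mozconfig line is made up):

```python
# Detect DOS line endings in a config file's raw bytes and count the
# apparent lines, which is what the misleading "line 17" errors obscured.
def crlf_report(data: bytes):
    """Return (has_crlf, line_count) for raw file contents."""
    return (b"\r\n" in data, data.count(b"\n"))

print(crlf_report(b"ac_add_options --enable-tests\r\n"))  # (True, 1)
```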


I am in the Hera server, I can make changes and I have got a build with automated testing.

What is left?

  • Make the other machines build too (I do not know if they are up and running)
  • Learn how to make it build the "tryserver" way (give it a patch and run the build with tests)
  • Find out why compiling takes forever

Friday, March 07, 2008

armenzg: Session 15: When things go wrong

I was trying to write a file containing step commands derived from ShellCommand, so I could have logically related steps grouped together; for instance, "", "make" and "make install", since all of them have to run to count as a "good" build. Another reason was to have a summary of the tests run, instead of having to look at the STDIO to see the results of the tests.

Well, I decided to make it happen at school, where we have computers (set up by us, here and there) for our Open Source Projects, but things went wrong:

  1. I tried to get buildbot and twisted on the Mac mini
  2. I wanted to use a package manager: MacPorts cannot reach its rsync server; Fink can reach a cvs repository to selfupdate, but takes forever
  3. I decided to get a tarball and build it myself; buildbot only works with Twisted 2.5, but I would only get 2.4 even though I built it (sudo python ./ install)
  4. I realized that buildbot got installed under /Libraries/python/2.5/site-packages (I think), and twisted too, but it did not seem to be picked up as 2.5
  5. I found a blog post (I left the link at school) talking about it, and it seemed I had to export PYTHONPATH to point at the right place
  6. Now I should have been ready, but when I tried to start my buildbot master I got this message:
    Traceback (most recent call last):
    File "/Library/Python/2.5/site-packages/twisted/application/", line 379, in run
    File "/Library/Python/2.5/site-packages/twisted/scripts/", line 23, in runApp
    File "/Library/Python/2.5/site-packages/twisted/application/", line 157, in run
    self.application = self.createOrGetApplication()
    File "/Library/Python/2.5/site-packages/twisted/application/", line 207, in createOrGetApplication
    application = getApplication(self.config, passphrase)
    --- ---
    File "/Library/Python/2.5/site-packages/twisted/application/", line 218, in getApplication
    application = service.loadApplication(filename, style, passphrase)
    File "/Library/Python/2.5/site-packages/twisted/application/", line 341, in loadApplication
    application = sob.loadValueFromFile(filename, 'application', passphrase)
    File "/Library/Python/2.5/site-packages/twisted/persisted/", line 215, in loadValueFromFile
    exec fileObj in d, d
    File "buildbot.tac", line 9, in
    BuildMaster(basedir, configfile).setServiceParent(application)
    File "/Library/Python/2.5/site-packages/buildbot/", line 370, in __init__
    self.status = Status(self.botmaster, self.basedir)
    File "/Library/Python/2.5/site-packages/buildbot/status/", line 1779, in __init__
    assert os.path.isdir(basedir)
And this is basically where I left it. I should find out why next time I go to school, or just choose one of the Linux computers and run it there, but I can't promise that everything will be easy.

CairoTests class

# imports added for context; in the buildbot 0.7.x of that era these
# names live in buildbot.steps.shell and buildbot.status.builder
import re
from buildbot.steps.shell import ShellCommand
from buildbot.status.builder import SUCCESS, WARNINGS, FAILURE

class CairoTests(ShellCommand):
    name = "Run Cairo Tests"
    warnOnFailure = True
    description = ["running tests"]
    descriptionDone = ["tests completed"]
    command = ["make", "test"]
    workdir = "cairo"

    def createSummary(self, log):
        # count every line of the stdio log mentioning PASS or FAIL
        passCount = 0
        failCount = 0
        for line in log.readlines():
            if "PASS" in line:
                passCount += 1
            if "FAIL" in line:
                failCount += 1
        summary = "TestsResults:" + str(passCount) + "/" + str(failCount) + "\n"
        self.addCompleteLog('summary', summary)

    def evaluateCommand(self, cmd):
        superResult = ShellCommand.evaluateCommand(self, cmd)
        if SUCCESS != superResult:
            return FAILURE
        if None !='FAIL', cmd.logs['stdio'].getText()):
            return WARNINGS
        return SUCCESS

After this blog post I will have to change this code, because I believe it does not properly capture how many tests failed or passed.
Cairo's tests have the keywords PASS, XFAIL, UNTESTED and FAIL at the end of some lines, while the summary of each group of tests comes after the block of tests and appears at the beginning of a line followed by a colon. Check the quoted text below:

TESTING svg-surface-source
Test using various surfaces as the source
svg-surface-source-image-argb32 [0]: PASS
svg-surface-source-image-argb32 [25]: PASS

<-- There are more lines but nothing relevant -->

svg-surface-source-xlib-fallback-rgb24 [0]: UNTESTED
Failed to open display: :0.0
Failed to open display: :0.0
svg-surface-source-xlib-fallback-rgb24 [25]: UNTESTED
PASS: svg-surface-source <-- This is what I call the summary of the block of tests

The end of the STDIO also shows the results of all the tests; my summary should match that, OR maybe I should just show that in the summary instead:

7 of 156 tests did not behave as expected (4 unexpected passes)
Please report to
make[4]: *** [check-TESTS] Error 1
Failed tests:
ft-text-vertical-layout-type1: image test-fallback test-meta test-paginated
Failures per surface - image: 1, test-fallback: 1, test-meta: 1, test-paginated: 1.

Next releases

It seems that the Hera Try Server will be up and running sometime really soon, and I will have to make sure that the tests are being run properly and that it meets the desires of the Mozilla community. Adam has been doing a great job, and from what I have heard from Shaver, they will be really happy to see this working.

For my next release I expect to have the Cairo steps for Windows and the performance tests (which I thought I could have gotten done for this week, but it did not happen), plus whatever is required for me to work on the Try Server.

Friday, February 29, 2008

armenzg: Sess14: Building Cairo on Mac; No problems?? No way!!

It has been quite a long time, and time constraints are always flying over me (like most of you reading this).
Anyway, I am going to work on writing the steps to build Cairo on Mac; let's see if there are differences.

NOTE: This has been my longest session when I expected to spend just an hour or two - I have found so many obstacles that I cannot recall all of them

NOTE2: There are no performance tests, and "make test" takes a pretty LONG TIME - more than 30 minutes (on Linux I believe it took less time, though I am not sure).


  • sudo port install git-core
  • sudo port install buildbot
svn is not in ports' repository (I think I just saw svn extensions for python and perl), so I can get it with Fink or get an installer (Subversion .dmg); I decided to go the command-line way:
  • sudo port install autoconf
  • sudo port install automake
  • sudo port install libtool //I think I did: fink install libtool instead


  • mkdir sandbox && cd sandbox
  • buildbot create-master cairomaster
  • cd cairomaster
  • I tried to checkout buildbot-configs/cairo-one-ubuntu-slave/master.cfg from my repository but svn does not checkout single files; But I placed the master.cfg to the ~/cairomaster folder
  • buildbot create-slave ./cairoslave localhost:9876 macslave slavepassword


  • buildbot start cairomaster/
  • buildbot start cairoslave/
  • firefox http://localhost:8020


One of my longest sessions, and the one with the most problems ever!!!! Read below about all my obstacles. I hope that all I have done will help the cairo developers; in fact, cworth from the cairo channel was really helpful, and he made some changes to the INSTALL file to reflect some of the things I discovered. I love the new section called "Extremely detailed build instructions" that he has added :D


The autoconf and automake that come with Mac OS X are not good for what I am trying to do; I don't know if it is both or just one of them, but read below:

  • sudo port install automake // because when I try to run I get the following error (even though I have libtool installed):
    NOTE2: later, when trying to run ./ in cairo's folder, I had a similar problem, and I got libtool from Fink since MacPorts was skipping it because I already had libtool; after that it worked

    Can't exec "libtoolize": No such file or directory at /usr/share/autoconf/Autom4te/ line 288, line 3.
    autoreconf: failed to run libtoolize: No such file or directory
  • I don't know exactly what I did; I might have got autoconf from Fink or MacPorts // when I try to run ./ with cairo I get this error:
    ./ line 177: libtoolize: command not found
  • I did not know how to configure PKG_CONFIG_PATH but now I do:
    PKG_CONFIG_PATH=/Users/armenzg/Sandbox/libs/lib/pkgconfig //since I configured pixman to build there and it created a pkgconfig folder with a .pc file
    export PKG_CONFIG_PATH // to export to the environment variables
  • After setting up the PKG_CONFIG_PATH properly and tried again to ./
    configure: error: Cairo requires at least one font backend.
    Please install freetype and fontconfig, then try again:
  • I ended up doing "sudo fink install fontconfig" because I could not get it working through either Fink or MacPorts before that
  • Another problem, after doing a "make":
    libtool: ltconfig version `' does not match version `1.3.5'
    Fatal configuration error. See the libtool docs for more information.
    make[2]: *** [libcairo_la-cairo.lo] Error 1
  • The libtool that comes with MacOS X is not the one we want (thanks cworth from #cairo channel on freenode):
    LIBTOOLIZE=glibtoolize ./ --prefix=/Users/armenzg/Sandbox/libs
  • This last problem appeared when I wrote the buildbot steps and was trying to set the environment variable PKG_CONFIG_PATH, which is not a shell command; I found out through bhearsum that I can do this:
    macFactory.addStep(step.ShellCommand, name="autogen for cairo",
                       descriptionDone="autogen for cairo",
                       env={"PKG_CONFIG_PATH": '%s/lib/lib/pkgconfig' % mac_build_prefix,
                            "LIBTOOLIZE": 'glibtoolize'})

Tuesday, February 12, 2008

armenzg: Sess13: Buildbot with Cairo

In this post you will see that I have set up a master.cfg file to build cairo and run the tests, BUT not the performance tests.

My results:

What the buildbot steps do is basically this:

  • Remove folders "pixman" and "cairo"
  • Checkout from GIT repository "pixman" and "cairo"
  • Build pixman
    • ./
    • ./configure --prefix=/home/armen/lib //This is to avoid to do "sudo make install"
    • make
    • make install
  • Build Cairo
    • ./
    • ./configure --prefix=/home/armen/lib
    • make
    • make install
  • Run Cairo's Tests
    • make test
    • cd test && make html //this generates an html file that displays images; all together it is around 35MB
  • Run Cairo's performance tests
    • TO BE DONE
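The pixman and cairo sequences above have the same shape, so the step lists can be generated from one helper. This is plain data, not buildbot API; "" is used as the conventional bootstrap script name, and the prefix is the post's example:

```python
# Generate the per-project autotools step list used by the build above.
PREFIX = "/home/armen/lib"  # install prefix that avoids "sudo make install"

def autotools_steps(project):
    return [(project, ["./", "--prefix=" + PREFIX]),
            (project, ["make"]),
            (project, ["make", "install"])]

steps = autotools_steps("pixman") + autotools_steps("cairo")
print(len(steps))  # 6
```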


  • How do I pack the results and provide them? Upon request? After each build?
  • PKG_CONFIG_PATH tells pkg-config where to look for information about packages, and we use pkg-config to locate things like glib and cairo.
    My question is: when am I supposed to run export PKG_CONFIG_PATH=/home/armen/lib? //I don't have it in my master.cfg
  • I still have to run the performance tests and see what happens


FROM THIS POINT ON, these steps are my first attempt; you should not waste your time reading them, but they are good for learning purposes.

This is a FRESH manual build of Cairo; the steps to set up the environment are omitted.
I have a problem/question: I have to raise privileges to do "make install"; will this be OK with buildbot? Is there a way around it?

  • mkdir cairo && cd cairo
  • git clone git://
  • git clone git://
  • cd pixman && ./
  • ./configure
  • make
  • make install //I have a problem; can I set up buildbot to use SUDO??
    make[2]: Entering directory `/home/armen/sandbox/cairo2/pixman/pixman'
    test -z "/usr/local/lib" || /bin/mkdir -p "/usr/local/lib"
    /bin/bash ../libtool --mode=install /usr/bin/install -c '' '/usr/local/lib/'
    /usr/bin/install -c .libs/ /usr/local/lib/
    /usr/bin/install: cannot remove `/usr/local/lib/': Permission denied
    make[2]: *** [install-libLTLIBRARIES] Error 1

  • sudo make install //I had to use SUDO
  • whereis libpixman-1
  • cd ../cairo && ./
  • ./configure
  • make
  • make install //the same problem with SUDO
    make[2]: Entering directory `/home/armen/sandbox/cairo2/cairo/src'
    test -z "/usr/local/lib" || /bin/mkdir -p "/usr/local/lib"
    /bin/bash ../libtool --mode=install /usr/bin/install -c '' '/usr/local/lib/'
    /usr/bin/install -c .libs/ /usr/local/lib/
    /usr/bin/install: cannot create regular file `/usr/local/lib/': Permission denied
  • sudo make install

  • make test //takes long time
  • cd test && make html

Tuesday, February 05, 2008

armenzg: Sess12: Automated tests running and getting results

As you know from my last post, I have set up buildbot on my laptop to build Firefox and run automated tests every 50 minutes.
It started to build properly around midnight, when BUILD #5 started and completed successfully.

Let me talk about the results of BUILDs #6 (which had already built once, in BUILD #5), #7 & #8:

                        BUILD #6     BUILD #7     BUILD #8
checkout and compile    2-3 mins     2-3 mins     2-3 mins
MAKE CHECK              1hr 8mins    1hr 20mins   42 mins
BROWSER CHROME          2mins        2mins        2mins

I have marked in red those values that are over 20 minutes.
We can easily notice that testing takes A LOT OF TIME:
  • Estimated time for all 6 test suites: 2 hours
  • a DEP BUILD takes 2-3 minutes
  • a CLEAN CHECKOUT & CLEAN BUILD takes 41 minutes (value from BUILD #5)


  • The buildbot waterfall view is really interesting and lets you see how the build is going, but the summaries of the tests are really poor; they were meant to be output on Tinderbox, hence the stray markup shown when looking at the test summaries


  • I had FF2 open to see the results of the build, and when it reached the mochitests it needed to open FF3; therefore, there was an alert saying that Firefox was already open, bla bla bla.
  • At that point I thought that it waits for me to press "OK" and then runs the tests, but if so, how did I get results during builds 6, 7 & 8 while I was sleeping?
  • I remember that during the night I woke up and had to select "OK", and it seems that only after that did it run
  • I have tried to modify the "perl" step that runs the mochitests by adding "--setenv=MOZ_NO_REMOTE=1" or "--appname='../../../bin/dist/firefox --no-remote", but this did not make any change (BTW, I restarted to be able to apply the changes). I have also tried to run it manually, and I would still get the alert waiting for a user's "OK"


  • shall I try to set up a "try server"-style setup on my computer? Adam suggested not to, because he will take care of doing it for the Hera cluster
  • would it be interesting to run tests in parallel? Let's see what the timings are on the Hera cluster; I bet they are better than on my laptop. I believe it could be done if a slave passed the BUILD OBJDIR to other computers over the net, gave them the steps to run MAKE CHECK (which takes the longest of all the tests) and got the results back
  • what other things shall I add to my steps? Bonsai poller? Allow clobbering?
  • we should set up a way for developers to get the results of the build/tests in their inbox

In this week's iteration I will do the same but with Cairo, see what Adam does with the Hera cluster, and try to put my stuff there.

Friday, February 01, 2008

armenzg: Sess11: How to set up Buildbot to build Firefox and run automated tests

I have modified the master.cfg file to be able to schedule builds with unit tests every 50 minutes on my laptop.
I have reused this file, and the modified files are posted here:

NOTE: I already have Buildbot set up from last blog post
  • I checked out the buildbot config for unit-testing Firefox with Places (which I am not going to use):
    cvs -d co mozilla/tools/buildbot-configs/testing/unittest/
  • I moved the checked-out files to my ~/BuildbotMaster folder
  • Made all the changes needed to set up only one buildslave - this is the main configuration:
    c['buildbotURL'] = "http://localhost:8010/"
    c['slavePortnum'] = 9989
    c['status'].append(html.Waterfall(http_port=8010,
                       css="/home/armen/BuildbotMaster/waterfall.css"))
    c['bots'] = [("slave1", "slavepassword1")]
    c['schedulers'].append(Periodic(name="50 minutes build scheduler",
                           builderNames=["Linux Ubuntu7.1 dep unit test"],
    firefox_trunk_ubuntu_builder = {
        'name': "Linux Ubuntu7.1 dep unit test",
        'slavenames': ['slave1'],
        'builddir': "trunk_ubuntu",
        'factory': ubuntuFactory,
        'category': "Firefox"}
  • I have also set up, in the buildbot master folder, a mozconfig-armen file that creates a non-static build and enables tests. This mozconfig-armen gets downloaded by the slave
  • buildbot start ~/BuildbotMaster
  • buildbot start ~/Buildslave
  • I opened http://localhost:8010 and I can see the results there
The problem was that it would never get past the " update stdio" step, because it required human intervention: it asked for a "yes" or "no" about accepting a key identifying the server (or something like that).
If I did this step manually, it would be:
  • cvs -d co mozilla/
    which got me this message: "Permission denied (publickey,gssapi-with-mic)"
Therefore, I decided to use this command instead:
  • cvs -d co
And I updated the master.cfg accordingly by changing it to:
  • CVSROOT = ""
I then restarted the master:
  • buildbot restart ~/BuildbotMaster
Now all that is left is to let the computer work on it and give me results tomorrow when I wake up.

Image 1. Before fixing CVSROOT
Image 2. After fixing CVSROOT